html_url stringlengths 48 51 | title stringlengths 5 268 | comments stringlengths 70 51.8k | body stringlengths 0 29.8k | comment_length int64 16 1.52k | text stringlengths 164 54.1k | embeddings listlengths 768 768 |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/1773 | bug in loading datasets | Looks like an issue with your csv file. Did you use the right delimiter ?
Apparently at line 37 the CSV reader from pandas reads 2 fields instead of 1. | Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | 30 | bug in loading datasets
Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 's... | [
-0.2842665910720825,
-0.27147695422172546,
-0.14631570875644684,
0.46084830164909363,
0.30196890234947205,
0.24083691835403442,
0.08345659077167511,
0.5121127367019653,
0.07644950598478317,
0.058581408113241196,
-0.0004540651279967278,
-0.26659661531448364,
-0.10810218751430511,
0.25363516... |
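Following up on the exchange above: a quick way to test the delimiter hypothesis is to count the fields on each line of the file. A minimal sketch, assuming the `sick/train.csv` path from the report (a naive split that ignores quoting, so only a rough check):
```python
# count comma-separated fields per line to spot the row that parses differently
with open("sick/train.csv") as f:
    for line_number, line in enumerate(f, start=1):
        n_fields = len(line.rstrip("\n").split(","))
        if line_number in (1, 37):  # header vs. the line pandas complains about
            print(line_number, n_fields)
```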
https://github.com/huggingface/datasets/issues/1773 | bug in loading datasets | Note that you can pass any argument you would pass to `pandas.read_csv` as kwargs to `load_dataset`. For example you can do
```python
from datasets import load_dataset
dataset = load_dataset('csv', data_files=data_files, sep="\t")
```
for example to use a tab separator.
You can see the full list of arguments ... | Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | 64 | bug in loading datasets
Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 's... | [
-0.2842665910720825,
-0.27147695422172546,
-0.14631570875644684,
0.46084830164909363,
0.30196890234947205,
0.24083691835403442,
0.08345659077167511,
0.5121127367019653,
0.07644950598478317,
0.058581408113241196,
-0.0004540651279967278,
-0.26659661531448364,
-0.10810218751430511,
0.25363516... |
https://github.com/huggingface/datasets/issues/1771 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | Indeed, in 1.2.1 the script to process csv files is downloaded. Starting from the next release, though, we include the csv processing directly in the library.
See PR #1726
We'll do a new release soon :) | Hi,
When I load_dataset from local csv files, the error below happened; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the datasets package?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | 36 | Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py
Hi,
When I load_dataset from local csv files, the error below happened; it looks like raw.githubusercontent.com is blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the datase... | [
-0.09555325657129288,
-0.20197561383247375,
-0.18235330283641815,
0.099574513733387,
0.19236566126346588,
-0.011418279260396957,
0.2810027599334717,
0.18599055707454681,
0.25373342633247375,
0.2542535066604614,
0.08921876549720764,
-0.19674748182296753,
0.2385467141866684,
0.09402350336313... |
https://github.com/huggingface/datasets/issues/1770 | how can I combine 2 dataset with different/same features? | Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)
For now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188 | to combine 2 datasets by a one-to-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or different feature:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | 45 | how can I combine 2 dataset with different/same features?
to combine 2 datasets by a one-to-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or different feature:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'}
Hi ! Currently we don't have a way to `zip` datasets but we p... | [
-0.36180922389030457,
-0.4964470863342285,
-0.06872214376926422,
0.1217595636844635,
0.06972849369049072,
0.3581109344959259,
-0.0922456830739975,
0.22106260061264038,
-0.03528483211994171,
0.013870269060134888,
-0.3246746063232422,
0.4096490442752838,
0.06939128041267395,
0.73989135026931... |
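To make the `map`-based workaround above concrete, here is a minimal sketch, assuming `ds1` and `ds2` are equal-length `datasets.Dataset` objects that each have a `text` column, as in the question:
```python
# with_indices=True lets the function look up the matching row in ds2
combined = ds1.map(
    lambda example, idx: {"src": example["text"], "tgt": ds2[idx]["text"]},
    with_indices=True,
    remove_columns=["text"],  # drop the old column so only src/tgt remain
)
```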
https://github.com/huggingface/datasets/issues/1770 | how can I combine 2 dataset with different/same features? | Good to hear.
Currently I do not use map; I just fetch src and tgt from the 2 datasets and merge them.
It would be great if you could deal with it in the backend.
Thanks. | to combine 2 datasets by a one-to-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or different feature:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | 37 | how can I combine 2 dataset with different/same features?
to combine 2 datasets by a one-to-one map like ds = zip(ds1, ds2):
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or different feature:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'}
Good to hear.
Currently I do not use map; I just fetch src a... | [
-0.3885655701160431,
-0.4940946400165558,
-0.1406935304403305,
0.018540415912866592,
-0.022465458139777184,
0.2798508107662201,
-0.17467936873435974,
0.32268843054771423,
-0.05108827352523804,
0.008893117308616638,
-0.19251947104930878,
0.6096795797348022,
0.0999668687582016,
0.54651230573... |
https://github.com/huggingface/datasets/issues/1769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ? | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | 21 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github... | [
-0.2903596758842468,
-0.2974283695220947,
0.1132444441318512,
0.1903708577156067,
0.16615870594978333,
-0.052209965884685516,
0.33211368322372437,
0.23036368191242218,
0.2993957996368408,
0.2801506817340851,
0.01671714335680008,
0.3376864492893219,
-0.11104455590248108,
0.2174145132303238,... |
https://github.com/huggingface/datasets/issues/1769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | > Hi ! What version of python and datasets do you have ? And also what version of dill and pickle ?
python==3.6.10
datasets==1.2.1
dill==0.3.2
pickle.format_version==4.0 | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | 26 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github... | [
-0.2903596758842468,
-0.2974283695220947,
0.1132444441318512,
0.1903708577156067,
0.16615870594978333,
-0.052209965884685516,
0.33211368322372437,
0.23036368191242218,
0.2993957996368408,
0.2801506817340851,
0.01671714335680008,
0.3376864492893219,
-0.11104455590248108,
0.2174145132303238,... |
https://github.com/huggingface/datasets/issues/1769 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | Multiprocessing in python require all the functions to be picklable. More specifically, functions need to be picklable with `dill`.
However objects like `typing.Union[str, NoneType]` are not picklable in python <3.7.
Can you try to update your python version to python>=3.7 ?
 | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | 41 | _pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2
It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github... | [
-0.2903596758842468,
-0.2974283695220947,
0.1132444441318512,
0.1903708577156067,
0.16615870594978333,
-0.052209965884685516,
0.33211368322372437,
0.23036368191242218,
0.2993957996368408,
0.2801506817340851,
0.01671714335680008,
0.3376864492893219,
-0.11104455590248108,
0.2174145132303238,... |
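A minimal reproduction of the pickling limitation described above (my own sketch, not from the thread):
```python
import typing

import dill

# typing.Optional[str] is typing.Union[str, None]; dumping it with dill
# raises the PicklingError from the issue title on Python < 3.7 and
# succeeds on Python >= 3.7
dill.dumps(typing.Optional[str])
```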
https://github.com/huggingface/datasets/issues/1766 | Issues when run two programs compute the same metrics | Hi ! To avoid collisions you can specify an `experiment_id` when instantiating your metric using `load_metric`. In the arrow filename, it will replace "default_experiment" with the experiment id that you provide.
Also, when two `experiment_id` values collide we're supposed to detect it using our locking mechanism. Not sure w... | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | 69 | Issues when run two programs compute the same metrics
I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
``... | [
-0.3611742854118347,
-0.1505344659090042,
-0.04318748414516449,
0.3872222602367401,
0.2670595347881317,
-0.10684029757976532,
0.08155117183923721,
0.3045555055141449,
-0.14090190827846527,
0.21661488711833954,
-0.3760763108730316,
-0.013459558598697186,
0.08526740223169327,
0.0051949820481... |
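A sketch of the suggested workaround, with an illustrative experiment id (each program should pass its own):
```python
from datasets import load_metric

# each run now writes its own arrow cache file instead of
# default_experiment-1-0.arrow, so the two programs no longer collide
metric = load_metric("sacrebleu", experiment_id="run_a")
```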
https://github.com/huggingface/datasets/issues/1766 | Issues when run two programs compute the same metrics | Thank you for your response. I fixed the issue by setting "keep_in_memory=True" when calling load_metric.
I cannot share the entire source code but below is the wrapper I wrote:
```python
class Evaluation:
def __init__(self, metric='sacrebleu'):
# self.metric = load_metric(metric, keep_in_memory=True)
... | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | 94 | Issues when run two programs compute the same metrics
I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
``... | [
-0.3611742854118347,
-0.1505344659090042,
-0.04318748414516449,
0.3872222602367401,
0.2670595347881317,
-0.10684029757976532,
0.08155117183923721,
0.3045555055141449,
-0.14090190827846527,
0.21661488711833954,
-0.3760763108730316,
-0.013459558598697186,
0.08526740223169327,
0.0051949820481... |
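The fix mentioned above, spelled out as a one-liner (with `keep_in_memory=True` the metric skips the shared on-disk arrow cache entirely, so concurrent programs cannot collide on it):
```python
from datasets import load_metric

metric = load_metric("sacrebleu", keep_in_memory=True)
```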
https://github.com/huggingface/datasets/issues/1765 | Error iterating over Dataset with DataLoader | Instead of:
```python
dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)
```
It should be:
```python
dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)
```
`batch_sampler` accepts a Sampler object or an Iterable, so passing an integer raises an error. | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | 30 | Error iterating over Dataset with DataLoader
I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tenso... | [
-0.23792937397956848,
0.05545622482895851,
-0.04567086696624756,
0.2057335525751114,
0.1531803160905838,
0.00014133301738183945,
0.7921318411827087,
0.3068520724773407,
0.0211307592689991,
0.15271316468715668,
0.04679560288786888,
0.23042596876621246,
-0.2809545397758484,
-0.24164426326751... |
https://github.com/huggingface/datasets/issues/1765 | Error iterating over Dataset with DataLoader | @mariosasko I thought that would fix it, but now I'm getting a different error:
```
/usr/local/lib/python3.6/dist-packages/datasets/arrow_dataset.py:851: UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors. This means you can write to the underlying (supposedly no... | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | 169 | Error iterating over Dataset with DataLoader
I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tenso... | [
-0.23792937397956848,
0.05545622482895851,
-0.04567086696624756,
0.2057335525751114,
0.1531803160905838,
0.00014133301738183945,
0.7921318411827087,
0.3068520724773407,
0.0211307592689991,
0.15271316468715668,
0.04679560288786888,
0.23042596876621246,
-0.2809545397758484,
-0.24164426326751... |
https://github.com/huggingface/datasets/issues/1765 | Error iterating over Dataset with DataLoader | Yes, padding is an answer.
This can be solved easily by passing a callable to the collate_fn arg of DataLoader that adds padding. | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | 23 | Error iterating over Dataset with DataLoader
I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tenso... | [
-0.23792937397956848,
0.05545622482895851,
-0.04567086696624756,
0.2057335525751114,
0.1531803160905838,
0.00014133301738183945,
0.7921318411827087,
0.3068520724773407,
0.0211307592689991,
0.15271316468715668,
0.04679560288786888,
0.23042596876621246,
-0.2809545397758484,
-0.24164426326751... |
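A minimal sketch of such a collate function (my own illustration; it assumes `encoded_dataset` from the thread above is in `"torch"` format, so every field of an example is a 1-D tensor):
```python
import torch
from torch.nn.utils.rnn import pad_sequence

def collate_fn(batch):
    # pad every field to the longest sequence in the batch
    return {
        key: pad_sequence([example[key] for example in batch], batch_first=True)
        for key in batch[0]
    }

dataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32, collate_fn=collate_fn)
```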
https://github.com/huggingface/datasets/issues/1762 | Unable to format dataset to CUDA Tensors | Hi ! You can get CUDA tensors with
```python
dataset.set_format("torch", columns=columns, device="cuda")
```
Indeed `set_format` passes the `**kwargs` to `torch.tensor` | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | 20 | Unable to format dataset to CUDA Tensors
Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
... | [
-0.22562183439731598,
-0.4427936375141144,
-0.0796789675951004,
0.18397197127342224,
0.543350100517273,
0.3885731101036072,
0.5336815714836121,
0.334581196308136,
-0.009671260602772236,
0.1427578181028366,
-0.08334475755691528,
0.24232354760169983,
-0.22358499467372894,
0.09736344963312149... |
https://github.com/huggingface/datasets/issues/1762 | Unable to format dataset to CUDA Tensors | Hi @lhoestq,
Thanks a lot. Is this true for all format types?
As in, for 'torch', I can have `**kwargs` to `torch.tensor` and for 'tf' those args are passed to `tf.Tensor`, and the same for 'numpy' and 'pandas'? | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | 38 | Unable to format dataset to CUDA Tensors
Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
... | [
-0.19142423570156097,
-0.4953543245792389,
-0.06351526081562042,
0.14305101335048676,
0.5737581253051758,
0.3578239977359772,
0.6109963059425354,
0.3751016855239868,
-0.006468827370554209,
0.09331325441598892,
-0.18430891633033752,
0.23856399953365326,
-0.18889375030994415,
0.2082744389772... |
https://github.com/huggingface/datasets/issues/1762 | Unable to format dataset to CUDA Tensors | Yes, the keyword arguments are passed to the convert function like `np.array`, `torch.tensor` or `tensorflow.ragged.constant`.
We don't support the kwargs for pandas on the other hand. | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | 26 | Unable to format dataset to CUDA Tensors
Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
... | [
-0.23399677872657776,
-0.4713827967643738,
-0.058725759387016296,
0.1454276740550995,
0.5945416688919067,
0.38139984011650085,
0.5536249876022339,
0.3486229479312897,
0.020402204245328903,
0.08473771810531616,
-0.1387397199869156,
0.340721994638443,
-0.17577937245368958,
0.1476909369230270... |
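The same mechanism with the numpy format, as a sketch (assuming `dataset` is a `datasets.Dataset`; the `dtype` kwarg is simply forwarded to `np.array`):
```python
import numpy as np

dataset.set_format("numpy", columns=["input_ids", "attention_mask"], dtype=np.int64)
```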
https://github.com/huggingface/datasets/issues/1762 | Unable to format dataset to CUDA Tensors | Thanks @lhoestq,
Would it be okay if I added this to the docs and made a PR? | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | 17 | Unable to format dataset to CUDA Tensors
Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
... | [
-0.20760828256607056,
-0.44932785630226135,
-0.08087822049856186,
0.142959326505661,
0.5672461986541748,
0.3796423077583313,
0.5310694575309753,
0.3309662640094757,
-0.018165189772844315,
0.1409580558538437,
-0.07886810600757599,
0.23653069138526917,
-0.2230985462665558,
0.1325978934764862... |
https://github.com/huggingface/datasets/issues/1759 | wikipedia dataset incomplete | Hi !
From what pickle file do you get this ?
I guess you mean the dataset loaded using `load_dataset` ? | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | 21 | wikipedia dataset incomplete
Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-... | [
0.12411247193813324,
0.09065644443035126,
-0.06485231220722198,
0.4390089809894562,
-0.14518438279628754,
0.13518103957176208,
0.1400448977947235,
-0.09932887554168701,
0.33159226179122925,
0.17427174746990204,
0.18785704672336578,
-0.05558646470308304,
0.383334219455719,
-0.37142884731292... |
https://github.com/huggingface/datasets/issues/1759 | wikipedia dataset incomplete | yes sorry, I used the `load_dataset` function and saved the data to a pickle file so I don't always have to reload it and am able to work offline. | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | 28 | wikipedia dataset incomplete
Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-... | [
0.06431560963392258,
0.08949457108974457,
-0.05972103774547577,
0.44190943241119385,
-0.13233672082424164,
0.10839977115392685,
0.15771709382534027,
-0.12213238328695297,
0.27500733733177185,
0.15894311666488647,
0.17383414506912231,
-0.13584336638450623,
0.4038768410682678,
-0.40707221627... |
https://github.com/huggingface/datasets/issues/1759 | wikipedia dataset incomplete | The wikipedia articles are processed using the `mwparserfromhell` library. Even though it works well in most cases, such issues can unfortunately happen. You can find the repo here: https://github.com/earwig/mwparserfromhell
There also exist other datasets based on wikipedia that were processed differently (and are ofte... | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | 48 | wikipedia dataset incomplete
Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that there is an incompleteness for the German dataset.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-... | [
0.14578351378440857,
0.1651308238506317,
-0.06042155995965004,
0.45129600167274475,
-0.1779641956090927,
0.14228935539722443,
0.132568359375,
-0.059978239238262177,
0.22113268077373505,
0.1373942643404007,
0.14837227761745453,
-0.064486563205719,
0.4038362205028534,
-0.47075486183166504,
... |
https://github.com/huggingface/datasets/issues/1758 | dataset.search() (elastic) cannot reliably retrieve search results | Hi !
I tried your code on my side and I was able to work around this issue by waiting a few seconds before querying the index.
Maybe this is because the index is not updated yet on the ElasticSearch side ? | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | 41 | dataset.search() (elastic) cannot reliably retrieve search results
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run ... | [
0.23191805183887482,
-0.01810932345688343,
-0.12120548635721207,
0.16620595753192902,
0.15398110449314117,
-0.32754936814308167,
-0.12402671575546265,
0.11519424617290497,
-0.2562255263328552,
0.22354412078857422,
-0.09199883788824081,
-0.018562262877821922,
-0.06207636371254921,
-0.550326... |
https://github.com/huggingface/datasets/issues/1758 | dataset.search() (elastic) cannot reliably retrieve search results | Thanks for the feedback! I added a 30 second "sleep" and that seemed to work well! | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | 16 | dataset.search() (elastic) cannot reliably retrieve search results
I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run ... | [
0.23191805183887482,
-0.01810932345688343,
-0.12120548635721207,
0.16620595753192902,
0.15398110449314117,
-0.32754936814308167,
-0.12402671575546265,
0.11519424617290497,
-0.2562255263328552,
0.22354412078857422,
-0.09199883788824081,
-0.018562262877821922,
-0.06207636371254921,
-0.550326... |
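Putting the two comments above together, a sketch of the working approach (host, port and the query are illustrative):
```python
import time

dataset.add_elasticsearch_index("text", host="localhost", port="9200")
time.sleep(30)  # give ElasticSearch time to finish indexing before the first query
scores, examples = dataset.get_nearest_examples("text", "my query", k=10)
```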
https://github.com/huggingface/datasets/issues/1757 | FewRel | @dspoka Please check the following link : https://github.com/thunlp/FewRel
This link mentions two versions of the datasets. Also, this one seems to be the official link.
I am assuming this is the correct link and implementing based on the same. | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | 39 | FewRel
## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
... | [
-0.19483499228954315,
-0.034423209726810455,
-0.10161453485488892,
0.12685315310955048,
-0.05827808752655983,
-0.1250494122505188,
0.43154144287109375,
0.1708209216594696,
0.10607054084539413,
0.1873808652162552,
-0.4312257766723633,
0.07661613076925278,
-0.030574530363082886,
-0.355129629... |
https://github.com/huggingface/datasets/issues/1755 | Using select/reordering datasets slows operations down immensely | Thanks for the input! I gave that a try by adding this after my selection / reordering operations, but before the big computation task of `score_squad`
```
examples = examples.flatten_indices()
features = features.flatten_indices()
```
That helped quite a bit! | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples that would take maybe 3 minutes now takes over an hour.
The below examp... | 39 | Using select/reordering datasets slows operations down immensely
I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples wo...
-0.22296923398971558,
0.24770279228687286,
-0.013868373818695545,
-0.1000717282295227,
-0.0033286267425864935,
-0.14651009440422058,
-0.09160177409648895,
0.14607682824134827,
-0.34975913166999817,
0.10942187160253525,
-0.2728039026260376,
0.38738900423049927,
0.21265392005443573,
-0.19921... |
https://github.com/huggingface/datasets/issues/1747 | datasets slicing with seed | Hi :)
The slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.
You can shuffle and then take a subset of your dataset with
```python
# shuffle and take the first 100 examples
dataset = dataset.shuffle(seed=42).select(range(100))
```
You can find more information about sh... | Hi
I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html
I could not find a seed option, could you assist me please how I can get a slice for different seeds?
thank you.
@lhoestq | 50 | datasets slicing with seed
Hi
I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html
I could not find a seed option, could you assist me please how I can get a slice for different seeds?
thank you.
@lhoestq
Hi :)
The slicing API from h... | [
0.009547417983412743,
-0.6955254673957825,
-0.10169805586338043,
0.106083445250988,
0.2887978255748749,
-0.05544661730527878,
0.08338604122400284,
0.0634738951921463,
-0.13928133249282837,
0.6284065842628479,
-0.04679027199745178,
0.09934747219085693,
-0.18307369947433472,
0.73511481285095... |
https://github.com/huggingface/datasets/issues/1747 | datasets slicing with seed | thank you so much
On Mon, Jan 18, 2021 at 3:17 PM Quentin Lhoest <notifications@github.com>
wrote:
> Hi :)
> The slicing API doesn't shuffle the data.
> You can shuffle and then take a subset of your dataset with
>
> # shuffle and take the first 100 examples
> dataset = dataset.shuffle(seed=42).select(range(100))
>
> Yo... | Hi
I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html
I could not find a seed option, could you assist me please how I can get a slice for different seeds?
thank you.
@lhoestq | 103 | datasets slicing with seed
Hi
I need to slice a dataset with random seed, I looked into documentation here https://huggingface.co/docs/datasets/splits.html
I could not find a seed option, could you assist me please how I can get a slice for different seeds?
thank you.
@lhoestq
thank you so much
On Mon, Jan... | [
-0.024270525202155113,
-0.5800485014915466,
-0.10250262916088104,
0.09017663449048996,
0.33373308181762695,
-0.031354378908872604,
0.08624307066202164,
0.0778433233499527,
-0.15559057891368866,
0.6615407466888428,
-0.035780467092990875,
0.1159350574016571,
-0.1422753483057022,
0.6808797121... |
https://github.com/huggingface/datasets/issues/1745 | difference between wsc and wsc.fixed for superglue | From the description given in the dataset script for `wsc.fixed`:
```
This version fixes issues where the spans are not actually substrings of the text.
``` | Hi
I see two versions of wsc in superglue, and I am not sure what is the differences and which one is the original one. could you help to discuss the differences? thanks @lhoestq | 26 | difference between wsc and wsc.fixed for superglue
Hi
I see two versions of wsc in superglue, and I am not sure what is the differences and which one is the original one. could you help to discuss the differences? thanks @lhoestq
From the description given in the dataset script for `wsc.fixed`:
```
This version... | [
-0.11650468409061432,
-0.44388726353645325,
0.042613670229911804,
-0.3368445932865143,
-0.060285117477178574,
-0.382475882768631,
0.41768980026245117,
-0.25194087624549866,
-0.10315018147230148,
0.05448618903756142,
0.19692428410053253,
-0.14669671654701233,
0.10971884429454803,
0.19315892... |
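Both configurations load the same way, so they are easy to compare side by side:
```python
from datasets import load_dataset

wsc = load_dataset("super_glue", "wsc")
wsc_fixed = load_dataset("super_glue", "wsc.fixed")  # spans guaranteed to be substrings of the text
```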
https://github.com/huggingface/datasets/issues/1743 | Issue while Creating Custom Metric | Currently it's only possible to define the features for the two columns `references` and `predictions`.
The data for these columns can then be passed to `metric.add_batch` and `metric.compute`.
Instead of defining more columns `text`, `offset_mapping` and `ground`, you must include them in either references or predic... | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | 151 | Issue while Creating Custom Metric
Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will ap... | [
-0.23883531987667084,
-0.3020201623439789,
-0.14578382670879364,
0.18646110594272614,
0.4193515479564667,
-0.03841887786984444,
0.1069304570555687,
0.1982862651348114,
0.05182933807373047,
0.29978278279304504,
-0.055739808827638626,
0.23833206295967102,
-0.17814353108406067,
0.140640273690... |
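A sketch of what nesting the extra fields can look like (my own illustration, not the poster's code; the field names and dtypes are assumptions):
```python
import datasets

# pack text/offset information inside the references column so it
# reaches the metric's _compute method alongside the spans
features = datasets.Features({
    "predictions": datasets.Sequence(datasets.Value("int32")),
    "references": {
        "spans": datasets.Sequence(datasets.Value("int32")),
        "text": datasets.Value("string"),
        "offset_mapping": datasets.Sequence(datasets.Sequence(datasets.Value("int32"))),
    },
})
```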
https://github.com/huggingface/datasets/issues/1743 | Issue while Creating Custom Metric | Hi @lhoestq,
I am doing text segmentation and the metric is effectively dice score on character offsets. So I need to pass the actual spans and I want to be able to get the spans based on predictions using offset_mapping.
Including them in references seems like a good idea. I'll try it out and get back to you. If... | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | 75 | Issue while Creating Custom Metric
Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will ap... | [
-0.23883531987667084,
-0.3020201623439789,
-0.14578382670879364,
0.18646110594272614,
0.4193515479564667,
-0.03841887786984444,
0.1069304570555687,
0.1982862651348114,
0.05182933807373047,
0.29978278279304504,
-0.055739808827638626,
0.23833206295967102,
-0.17814353108406067,
0.140640273690... |
https://github.com/huggingface/datasets/issues/1733 | connection issue with glue, what is the data url for glue? | Hello @juliahane, which config of GLUE causes you trouble?
The URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py | Hi
my codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not
thanks | 20 | connection issue with glue, what is the data url for glue?
Hi
my codes sometimes fails due to connection issue with glue, could you tell me how I can have the URL datasets library is trying to read GLUE from to test the machines I am working on if there is an issue on my side or not
thanks
Hello @juliahane, whi... | [
-0.055291593074798584,
0.011869311332702637,
-0.042774029076099396,
0.2993859350681305,
0.30105993151664734,
-0.268480509519577,
0.20354053378105164,
0.08158136159181595,
0.1122899055480957,
0.12633654475212097,
0.06428955495357513,
-0.06481533497571945,
0.263243705034256,
0.19455544650554... |
https://github.com/huggingface/datasets/issues/1731 | Couldn't reach swda.py | Hi @yangp725,
The SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.
You can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface/datasets... | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| 54 | Couldn't reach swda.py
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
Hi @yangp725,
The SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.
You can still access it by... | [
-0.2362145632505417,
-0.3874821066856384,
-0.158318430185318,
-0.004699533339589834,
0.36150333285331726,
-0.022699709981679916,
-0.06851671636104584,
0.22510330379009247,
0.026332801207900047,
0.21840517222881317,
-0.12052082270383835,
-0.00729566952213645,
0.009890337474644184,
0.3339482... |
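At the time, installing from the master branch meant something like:
```
pip install git+https://github.com/huggingface/datasets.git
```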
https://github.com/huggingface/datasets/issues/1729 | Is there support for Deep learning datasets? | Hi @ZurMaD!
Thanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617
MNIST is also on the way: #1730
If you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingface/datasets/b... | I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets | 55 | Is there support for Deep learning datasets?
I looked around this repository and looking the datasets I think that there's no support for images-datasets. Or am I missing something? For example to add a repo like this https://github.com/DZPeru/fish-datasets
Hi @ZurMaD!
Thanks for your interest in 🤗 `datasets`. Su... | [
-0.24644482135772705,
-0.07323762774467468,
-0.23498958349227905,
-0.09773354977369308,
0.28045961260795593,
0.03026409260928631,
0.22501833736896515,
0.08640462905168533,
0.08424601703882217,
0.24715161323547363,
-0.01707732304930687,
0.028118792921304703,
-0.30469051003456116,
0.34195032... |
https://github.com/huggingface/datasets/issues/1728 | Add an entry to an arrow dataset | Hi @ameet-1997,
I think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets
For your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight=concatenat... | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | 43 | Add an entry to an arrow dataset
Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print t... | [
0.11834988743066788,
0.12276836484670639,
-0.0548611544072628,
-0.021854033693671227,
0.1973937302827835,
0.30988261103630066,
0.1708185225725174,
-0.1640377640724182,
-0.08370131999254227,
-0.050527412444353104,
0.2646121382713318,
0.5609714984893799,
-0.20104436576366425,
0.0916974246501... |
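Putting the two suggestions above together, a minimal sketch (it assumes `dataset` is the DatasetDict loaded earlier and that examples have a `text` column; `transform` is a hypothetical stand-in for whatever sentence modification is applied):
```python
from datasets import concatenate_datasets

transform = str.upper  # hypothetical text transformation

# build a modified copy of the train split, then append it to the original
modified_dataset = dataset["train"].map(lambda example: {"text": transform(example["text"])})
combined_dataset = concatenate_datasets([dataset["train"], modified_dataset])
```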
https://github.com/huggingface/datasets/issues/1728 | Add an entry to an arrow dataset | That's a great idea! Thank you so much!
When I try that solution, I get the following error when I try to concatenate `datasets` and `modified_dataset`. I have also attached the output I get when I print out those two variables. Am I missing something?
Code:
``` python
combined_dataset = concatenate_datasets([d... | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | 123 | Add an entry to an arrow dataset
Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print t... | [
0.09493989497423172,
0.2781064212322235,
-0.03920426964759827,
0.02305753529071808,
0.24360977113246918,
0.37788718938827515,
0.2803981900215149,
-0.1585281491279602,
-0.2577432692050934,
-0.07451805472373962,
0.3132960796356201,
0.5345352292060852,
-0.15967892110347748,
0.0007423389470204... |
https://github.com/huggingface/datasets/issues/1728 | Add an entry to an arrow dataset | You should do `combined_dataset = concatenate_datasets([datasets['train'], modified_dataset['train']])`
Didn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases. | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | 42 | Add an entry to an arrow dataset
Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print t... | [
0.11457224935293198,
0.22762249410152435,
-0.06840576231479645,
-0.02512359991669655,
0.15373234450817108,
0.27101120352745056,
0.22679902613162994,
-0.10380721837282181,
0.01638694480061531,
-0.05275919660925865,
0.30326297879219055,
0.5584480166435242,
-0.2891288995742798,
0.047076456248... |
https://github.com/huggingface/datasets/issues/1728 | Add an entry to an arrow dataset | > Didn't we talk about returning a Dataset instead of a DatasetDict with load_dataset and no split provided @lhoestq? Not sure it's the way to go but I'm wondering if it's not simpler for some use-cases.
My opinion is that users should always know in advance what type of objects they're going to get. Otherwise the d... | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | 96 | Add an entry to an arrow dataset
Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print t... | [
0.14892330765724182,
0.20325054228305817,
-0.05366980656981468,
0.005124232266098261,
0.11116006225347519,
0.07312698662281036,
0.273731529712677,
-0.08799418061971664,
0.08768001198768616,
0.021720683202147484,
0.342300146818161,
0.4724978506565094,
-0.3235456645488739,
0.0745659843087196... |
https://github.com/huggingface/datasets/issues/1727 | BLEURT score calculation raises UnrecognizedFlagError | And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?! | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 18 | BLEURT score calculation raises UnrecognizedFlagError
Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
... | [
-0.3299122154712677,
-0.39836767315864563,
0.04140739142894745,
0.4000958502292633,
0.3001840114593506,
-0.22759149968624115,
0.2741968333721161,
0.25628674030303955,
0.005499313585460186,
0.2942810356616974,
-0.012273075059056282,
0.022469541057944298,
-0.08508630841970444,
0.046141900122... |
https://github.com/huggingface/datasets/issues/1727 | BLEURT score calculation raises UnrecognizedFlagError | I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb:
```
!pip install git+https://github.com/google-research/bleurt.git
references = ["foo bar baz", "one two three"]
bleurt_metric = load_metric('bleu... | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 39 | BLEURT score calculation raises UnrecognizedFlagError
Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
... | [
-0.3299122154712677,
-0.39836767315864563,
0.04140739142894745,
0.4000958502292633,
0.3001840114593506,
-0.22759149968624115,
0.2741968333721161,
0.25628674030303955,
0.005499313585460186,
0.2942810356616974,
-0.012273075059056282,
0.022469541057944298,
-0.08508630841970444,
0.046141900122... |
https://github.com/huggingface/datasets/issues/1727 | BLEURT score calculation raises UnrecognizedFlagError | @aleSuglia @oscartackstrom - Are you getting the error when running your code in a Jupyter notebook ?
I tried reproducing this error again, and was unable to do so from the python command line console in a virtual environment similar to the one I originally used (and unfortunately no longer have access to) when I fi... | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 112 | BLEURT score calculation raises UnrecognizedFlagError
Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
... | [
-0.3299122154712677,
-0.39836767315864563,
0.04140739142894745,
0.4000958502292633,
0.3001840114593506,
-0.22759149968624115,
0.2741968333721161,
0.25628674030303955,
0.005499313585460186,
0.2942810356616974,
-0.012273075059056282,
0.022469541057944298,
-0.08508630841970444,
0.046141900122... |
https://github.com/huggingface/datasets/issues/1727 | BLEURT score calculation raises UnrecognizedFlagError | This happens when running the notebook on colab. The issue seems to be that colab populates sys.argv with arguments not handled by bleurt.
Running this before calling bleurt fixes it:
```
import sys
sys.argv = sys.argv[:1]
```
Not the most elegant solution. Perhaps it needs to be fixed in the bleurt code itse... | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 71 | BLEURT score calculation raises UnrecognizedFlagError
Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
... | [
-0.3299122154712677,
-0.39836767315864563,
0.04140739142894745,
0.4000958502292633,
0.3001840114593506,
-0.22759149968624115,
0.2741968333721161,
0.25628674030303955,
0.005499313585460186,
0.2942810356616974,
-0.012273075059056282,
0.022469541057944298,
-0.08508630841970444,
0.046141900122... |
https://github.com/huggingface/datasets/issues/1727 | BLEURT score calculation raises UnrecognizedFlagError | I got the error when running it from the command line. It looks more like an error that should be fixed in the BLEURT codebase. | Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | 25 | BLEURT score calculation raises UnrecognizedFlagError
Calling the `compute` method for the **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
... | [
-0.3299122154712677,
-0.39836767315864563,
0.04140739142894745,
0.4000958502292633,
0.3001840114593506,
-0.22759149968624115,
0.2741968333721161,
0.25628674030303955,
0.005499313585460186,
0.2942810356616974,
-0.012273075059056282,
0.022469541057944298,
-0.08508630841970444,
0.046141900122... |
https://github.com/huggingface/datasets/issues/1725 | load the local dataset | You should rephrase your question or give more examples and details on what you want to do.
It’s not possible to understand it and help you with only this information. | your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load the local dataset for model training?
I will be grateful if you can help me handle this problem!
thanks a lot! | 30 | load the local dataset
your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so how should i do if i want to load the local dataset for model training?
i will be grateful if you can help me handle this prob... | [
-0.1804085522890091,
0.013499125838279724,
-0.10371387749910355,
-0.013034078292548656,
0.21346081793308258,
0.1470811367034912,
0.27002498507499695,
0.28800490498542786,
0.43632379174232483,
0.05448304861783981,
0.11620260030031204,
0.4947062134742737,
-0.07456586509943008,
0.297450929880... |
https://github.com/huggingface/datasets/issues/1725 | load the local dataset | sorry for that.
I want to know how I can load the train set and the test set from local files; which API or function should I use?
| your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load the local dataset for model training?
I will be grateful if you can help me handle this problem!
thanks a lot! | 29 | load the local dataset
your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load the local dataset for model training?
I will be grateful if you can help me handle this prob... | [
-0.317470908164978,
0.06382004171609879,
-0.09525938332080841,
0.07376958429813385,
0.11967439204454422,
0.15572409331798553,
0.15555739402770996,
0.3187201917171478,
0.5008645057678223,
0.06144647300243378,
0.16420260071754456,
0.43741539120674133,
-0.12088166177272797,
0.4828322529792785... |
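To make the answer concrete: local files are loaded by passing the file format as the first argument and the paths via `data_files` (the file names here are illustrative):
```python
from datasets import load_dataset

dataset = load_dataset("json", data_files={"train": "train.json", "test": "test.json"})
```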
https://github.com/huggingface/datasets/issues/1725 | load the local dataset | thanks a lot
I find that the problem is I don't use a VPN...
so I have to keep my network connection even if I want to load the local data? | your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load the local dataset for model training?
I will be grateful if you can help me handle this problem!
thanks a lot! | 31 | load the local dataset
your guidebook's example is like
>>>from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so what should I do if I want to load the local dataset for model training?
I will be grateful if you can help me handle this prob... | [
-0.22625812888145447,
0.07263939827680588,
-0.05076020583510399,
0.029830818995833397,
0.11851225048303604,
0.06380300223827362,
0.2788849472999573,
0.21063505113124847,
0.41184717416763306,
0.13368792831897736,
0.1261635571718216,
0.4746668040752411,
0.07140640914440155,
0.472002327442169... |
https://github.com/huggingface/datasets/issues/1724 | could not run models on a offline server successfully | Hi @lkcao !
Your issue is indeed related to `datasets`. In addition to installing the package manually, you will need to download the `text.py` script on your server. You'll find it under `datasets/datasets/text`: https://github.com/huggingface/datasets/blob/master/datasets/text/text.py.
Then you can change the line... | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa on a remote server, which strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:
 directly in the `datasets` package so that they can be used offline | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa on a remote server, which strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:
 are now part of the `datasets` package since #1726 :)
You can now use them offline
```python
datasets = load_dataset('text', data_files=data_files)
```
We'll do a new release soon | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa on a remote server, which strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:
 are now part of the `datasets` package since #1726 :)
> You can now use them offline
>
> ```python
> datasets = load_dataset('text', data_files=data_files)
> ```
>
> We'll do a new release soon
so is the new version released now? | Hi, I really need your help about this.
I am trying to fine-tune a RoBERTa on a remote server, which strictly bans internet access. I installed all the packages by hand and tried to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:

I tried to use `cache_file_names` and wasn't sure how, I tried to give it the following:
```
tokenized_datasets = tokenized_datasets.map(
group_texts,
batched=True,
num_proc=60,
load_from_cache_file=True,
cache_file_names={k: f'.cache/{str(... | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 229 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
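For reference, once the text/csv processing scripts ship inside the package (see the comments above), the offline workflow reduces to a plain local call; a minimal sketch, assuming placeholder file paths and no network access:

```python
from datasets import load_dataset

# Placeholder paths; the bundled "text" script means nothing is fetched online.
data_files = {"train": "train.txt", "validation": "valid.txt"}
datasets = load_dataset("text", data_files=data_files)
```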
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | The documentation says
```
cache_file_names (`Optional[Dict[str, str]]`, defaults to `None`): Provide the name of a cache file to use to store the
results of the computation instead of the automatically generated cache file name.
You have to provide one :obj:`cache_file_name` per dataset in the dataset dict... | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 90 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | Managed to get `cache_file_names` working and caching works well with it
Had to make a small modification for it to work:
```
cache_file_names = {k: f'tokenized_and_grouped_{str(k)}.arrow' for k in tokenized_datasets}
``` | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 31 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
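Putting the two snippets above together into a single call — a sketch that assumes `tokenized_datasets` (a `DatasetDict`) and `group_texts` are defined as in the script attached to the issue:

```python
# Assumes `tokenized_datasets` and `group_texts` exist, as in the original script.
cache_file_names = {
    k: f"tokenized_and_grouped_{str(k)}.arrow" for k in tokenized_datasets
}
tokenized_datasets = tokenized_datasets.map(
    group_texts,
    batched=True,
    load_from_cache_file=True,
    cache_file_names=cache_file_names,
)
```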
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | Another comment on `cache_file_names`: it doesn't save the produced cache files in the dataset's cache folder, and it requires a path to an existing directory in order to work.
I can confirm that this is how it works in `datasets==1.1.3` | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 41 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
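In practice that means creating the directory before calling `map`; a small sketch ("map_cache" is an illustrative name, and `tokenized_datasets` is assumed to exist as above):

```python
import os

# `cache_file_names` expects paths inside an existing directory, so create it first.
os.makedirs("map_cache", exist_ok=True)
cache_file_names = {
    k: os.path.join("map_cache", f"{k}.arrow") for k in tokenized_datasets
}
```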
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | Oh yes indeed ! Maybe we need to update the docstring to mention that it is a path | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 18 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | I upgraded to the latest version and encountered some strange behaviour: the script I posted in the OP doesn't trigger recalculation; however, if I add the following change it does trigger partial recalculation. I am not sure if it's something wrong on my machine or a bug:
```
from datasets import load_dataset
from... | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 136 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | This is because the `group_texts` line definition changes (it is defined 3 lines later than in the previous call). Currently if a function is moved elsewhere in a script we consider it to be different.
Not sure it is actually a good idea to keep this behavior though. We had this as a security in the early developm...
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 86 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
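A hedged way to observe this is to hash a function with the internal `Hasher` helper that `datasets` uses for fingerprinting (its import path may differ across versions); moving the `def` a few lines within a file changes the pickled code object and therefore the hash:

```python
from datasets.fingerprint import Hasher  # internal helper; location may vary by version

def group_texts(examples):
    return examples

# The hash covers the pickled function, including where it was defined,
# so relocating the `def` in the script yields a different value.
print(Hasher.hash(group_texts))
```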
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | Sounds great, thank you for your quick responses and help! Looking forward for the next release. | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 16 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
https://github.com/huggingface/datasets/issues/1718 | Possible cache miss in datasets | I am having a similar issue where only the grouped files are loaded from cache while the tokenized ones aren't. I can confirm both datasets are being stored to file, but only the grouped version is loaded from cache. Not sure what might be going on; I've tried to remove all kinds of non-deterministic behaviour, but... | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | 274 | Possible cache miss in datasets
Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the function instead of using cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function alway... | [
-0.1930394321680069,
0.12105351686477661,
0.05009520798921585,
0.19009333848953247,
0.007696859072893858,
0.036921512335538864,
-0.016855232417583466,
0.23035003244876862,
0.01673312298953533,
-0.10844311118125916,
0.10299880802631378,
0.4139713644981384,
0.22491194307804108,
-0.1117240563... |
https://github.com/huggingface/datasets/issues/1717 | SciFact dataset - minor changes | Hi Dave,
You are more than welcome to open a PR to make these changes! 🤗
You will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob/master/A... | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | 44 | SciFact dataset - minor changes
Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks... | [
0.15889376401901245,
-0.197348490357399,
-0.06765369325876236,
0.14704012870788574,
0.14846552908420563,
-0.10716106742620468,
-0.2045498639345169,
-0.040136437863111496,
0.08734144270420074,
-0.09037941694259644,
-0.08218386024236679,
0.01589495874941349,
-0.008538701571524143,
0.49486884... |
https://github.com/huggingface/datasets/issues/1717 | SciFact dataset - minor changes | > I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
Sure ! Also feel free to ping us for reviews or if we can help :)
> It also looks like the dataset is being downloaded directly from Huggingface's Google cloud ... | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | 91 | SciFact dataset - minor changes
Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks... | [
0.17509640753269196,
-0.16272084414958954,
-0.13827094435691833,
0.16952314972877502,
0.1716015338897705,
-0.10548413544893265,
-0.010719459503889084,
0.033970799297094345,
0.21871812641620636,
0.04157521203160286,
-0.17465102672576904,
0.02215523272752762,
-0.0037148466799408197,
0.515471... |
https://github.com/huggingface/datasets/issues/1717 | SciFact dataset - minor changes |
> > I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
>
> Sure ! Also feel free to ping us for reviews or if we can help :)
>
OK! We're organizing a [shared task](https://sdproc.org/2021/sharedtasks.html#sciv... | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | 152 | SciFact dataset - minor changes
Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks... | [
0.16592411696910858,
-0.16320879757404327,
-0.10140996426343918,
0.25287073850631714,
0.16676770150661469,
-0.11696116626262665,
-0.011766412295401096,
0.06558172404766083,
0.25607752799987793,
0.053932588547468185,
-0.15861570835113525,
0.03160112351179123,
0.022103937342762947,
0.4208246... |
https://github.com/huggingface/datasets/issues/1713 | Installation using conda | Great! Did you guys have a timeframe in mind for the next release?
Thank you for all the great work in developing this library. | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | 24 | Installation using conda
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only... | [
-0.053956855088472366,
-0.037658754736185074,
-0.18016183376312256,
0.1761820763349533,
0.1911168396472931,
-0.17411328852176666,
0.16255329549312592,
-0.06701374799013138,
-0.3270973861217499,
-0.2211773693561554,
-0.22946733236312866,
0.0597514808177948,
-0.1889895498752594,
0.5104843974... |
https://github.com/huggingface/datasets/issues/1713 | Installation using conda | I think we can have `datasets` on conda by next week. Will see what I can do! | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | 17 | Installation using conda
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only... | [
-0.014953937381505966,
-0.042613446712493896,
-0.12364756315946579,
0.15626998245716095,
0.25407880544662476,
-0.009233050048351288,
0.12768612802028656,
-0.13031846284866333,
-0.29622799158096313,
-0.21112656593322754,
-0.2347901612520218,
0.08092198520898819,
-0.14033789932727814,
0.6276... |
https://github.com/huggingface/datasets/issues/1713 | Installation using conda | `datasets` has been added to the huggingface channel thanks to @LysandreJik :)
It depends on conda-forge though
```
conda install -c huggingface -c conda-forge datasets
``` | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | 26 | Installation using conda
Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only... | [
-0.029604097828269005,
-0.03997432440519333,
-0.10899737477302551,
0.1860421746969223,
0.2394064962863922,
-0.04451911896467209,
0.1131720095872879,
-0.10590092092752457,
-0.3314245641231537,
-0.3028707206249237,
-0.2650928795337677,
0.10409072041511536,
-0.13574819266796112,
0.53066319227... |
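A trivial smoke test after the `conda install` above, just to confirm the package resolves from the environment:

```python
import datasets

print(datasets.__version__)
print("squad" in datasets.list_datasets())
```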
https://github.com/huggingface/datasets/issues/1710 | IsADirectoryError when trying to download C4 | I haven't tested C4 on my side, so there may be a few bugs in the code/adjustments to make.
Here it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.
Valid files are paths to local files or URLs to remote files. | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | 50 | IsADirectoryError when trying to download C4
**TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
dataset... | [
-0.21100111305713654,
-0.016683408990502357,
0.002013629535213113,
0.23881196975708008,
0.30036333203315735,
0.011594148352742195,
0.08793170005083084,
0.2422640472650528,
-0.10572308301925659,
0.09686119854450226,
-0.19106706976890564,
-0.09682240337133408,
-0.3083794116973877,
-0.0378797... |
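A hedged sketch of the kind of guard the comment implies — rejecting entries like `'/'` before handing them to the downloader; `files_to_download` here is illustrative, not the actual state inside `c4.py`:

```python
import os

# Illustrative list; in c4.py this would come from the dataset config.
files_to_download = ["https://example.com/c4/en.tfrecord-00000", "/"]

def is_valid_source(f):
    # Valid entries are URLs or paths to existing local files.
    return f.startswith(("http://", "https://")) or os.path.isfile(f)

invalid = [f for f in files_to_download if not is_valid_source(f)]
print("skipping invalid entries:", invalid)
```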
https://github.com/huggingface/datasets/issues/1706 | Error when downloading a large dataset on slow connection. | Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?
It looks like the downloaded file is corrupted and can't be extracted using `tarfile`.
Could you try loading it again with
```python
import datasets
datasets.load_dataset("openwebtext", download_mode="force_redownload")... | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | 44 | Error when downloading a large dataset on slow connection.
I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | [
-0.4910077750682831,
0.027839932590723038,
-0.10269362479448318,
0.21930858492851257,
0.21971477568149567,
0.11000148952007294,
0.11827507615089417,
0.4798278510570526,
0.053742069751024246,
0.005826928187161684,
-0.16624683141708374,
-0.10805202275514603,
-0.020688537508249283,
0.09193373... |
https://github.com/huggingface/datasets/issues/1701 | Some datasets miss dataset_infos.json or dummy_data.zip | Thanks for reporting.
We should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.
For c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.
| While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json :
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip :
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qanga... | 42 | Some datasets miss dataset_infos.json or dummy_data.zip
While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator , I noticed that some datasets miss a dataset_infos.json :
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip :
```
kor_n... | [
0.13598962128162384,
0.27314913272857666,
-0.046184055507183075,
0.2401537448167801,
0.28582724928855896,
0.3439725339412689,
0.21809352934360504,
0.0033974614925682545,
-0.04679195210337639,
0.03129402920603752,
0.1941789835691452,
0.06412878632545471,
-0.007530666422098875,
0.12573221325... |
https://github.com/huggingface/datasets/issues/1687 | Question: Shouldn't .info be a part of DatasetDict? | We could do something. There is a part of `.info` which is split-specific (cache files, split instructions) but maybe it could be made to work. | Currently, only `Dataset` contains the .info or .features, but as many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most rece... | 26 | Question: Shouldn't .info be a part of DatasetDict?
Currently, only `Dataset` contains the .info or .features, but as many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset... | [
0.09181168675422668,
-0.1329856961965561,
-0.06010354682803154,
0.3179159462451935,
0.1159815639257431,
0.285230815410614,
0.48426657915115356,
0.10404092818498611,
0.11708982288837433,
-0.021073853597044945,
0.11989393830299377,
-0.012208228930830956,
0.11281293630599976,
0.57232671976089... |
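Until a `DatasetDict.info` exists, reading the per-split `info` is the workaround implied here — a sketch reusing the config from the issue body; for standard splits the features are identical, so one split is enough:

```python
import datasets

ds = datasets.load_dataset("conll2002", "es")
# Each split carries its own DatasetInfo object.
print(ds["train"].info.features)
```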
https://github.com/huggingface/datasets/issues/1687 | Question: Shouldn't .info be a part of DatasetDict? | Yes, this was kinda the idea I was going for. DatasetDict.info would be the shared info among the datasets (maybe even some info on how they differ). | Currently, only `Dataset` contains the .info or .features, but as many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most rece... | 27 | Question: Shouldn't .info be a part of DatasetDict?
Currently, only `Dataset` contains the .info or .features, but as many datasets contain standard splits (train, test) and thus the underlying information is the same (or at least should be) across the datasets.
For instance:
```
>>> ds = datasets.load_dataset... | [
0.10923486948013306,
-0.1863410323858261,
-0.05223248898983002,
0.32486411929130554,
0.1043856292963028,
0.23965099453926086,
0.46166327595710754,
0.01808696985244751,
0.07692903280258179,
0.019750095903873444,
0.16768747568130493,
0.0010690998751670122,
0.09683962166309357,
0.546127676963... |
https://github.com/huggingface/datasets/issues/1686 | Dataset Error: DaNE contains empty samples at the end | Once the PR is merged the fix will be available in the next release of `datasets`.
If you don't want to wait for the next release you can still load the script from the master branch with
```python
load_dataset("dane", script_version="master")
``` | The DaNE dataset contains empty samples at the end. They are easy to remove using a filter but should probably not be there to begin with, as they can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': ... | 40 | Dataset Error: DaNE contains empty samples at the end
The DaNE dataset contains empty samples at the end. They are easy to remove using a filter but should probably not be there to begin with, as they can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
... | [
-0.12326876074075699,
-0.1505776196718216,
-0.20778264105319977,
-0.06342282146215439,
0.2767205238342285,
0.1347072571516037,
0.3861129581928253,
0.3464493155479431,
0.22405380010604858,
0.2550489008426666,
0.1427295207977295,
0.2462116926908493,
-0.1021411195397377,
0.12055405974388123,
... |
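Until the fix is released, the filter mentioned in the issue body is a one-liner — a sketch keyed on the empty `dep_ids` field shown above:

```python
import datasets

dataset = datasets.load_dataset("dane", script_version="master")
# Drop the empty trailing samples (they surface as rows with empty dep_ids).
dataset = dataset.filter(lambda example: len(example["dep_ids"]) > 0)
```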
https://github.com/huggingface/datasets/issues/1683 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.
To fix that can you try to remove one of the `[0]` ? In my... | It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRCon... | 68 | `ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext
It seems to fail the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncod... | [
-0.5113661885261536,
-0.3407371938228607,
-0.12627191841602325,
0.06084984540939331,
0.07763803005218506,
0.14176690578460693,
-0.018712103366851807,
0.26980680227279663,
-0.2622148394584656,
0.0491960346698761,
0.11080654710531235,
0.5883790254592896,
-0.030782397836446762,
-0.13176800310... |
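A sketch of the suggested fix: with `batched=True` the mapped function must return one row per input example, so the embeddings should keep their batch dimension. Model and tokenizer names follow the issue body; the `"text"` column name is an assumption:

```python
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

name = "facebook/dpr-ctx_encoder-single-nq-base"
ctx_encoder = DPRContextEncoder.from_pretrained(name)
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(name)

def embed(batch):
    inputs = ctx_tokenizer(batch["text"], truncation=True, padding=True, return_tensors="pt")
    # Keep shape (batch_size, 768): no `[0]` indexing on the output.
    return {"embeddings": ctx_encoder(**inputs).pooler_output.detach().numpy()}
```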
https://github.com/huggingface/datasets/issues/1681 | Dataset "dane" missing | Hi @KennethEnevoldsen ,
I think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.
For now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:
pip install git+... | the `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load... | 56 | Dataset "dane" missing
the `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```... | [
-0.006653383374214172,
-0.061256539076566696,
-0.09929956495761871,
0.12845174968242645,
0.26583391427993774,
0.2934044301509857,
0.4646621644496918,
0.1487024426460266,
0.2840229868888855,
0.08272838592529297,
0.13032886385917664,
0.04085879027843475,
-0.09202374517917633,
-0.124587759375... |
https://github.com/huggingface/datasets/issues/1681 | Dataset "dane" missing | The `dane` dataset was added recently, that's why it wasn't available yet. We did an intermediate release today just before the v2.0.
To load it you can just update `datasets`
```
pip install --upgrade datasets
```
and then you can load `dane` with
```python
from datasets import load_dataset
dataset = l... | the `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load... | 52 | Dataset "dane" missing
the `dane` dataset appears to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see it should be present, but doesn't seem to be findable when using `load_dataset`.
```... | [
-0.006653383374214172,
-0.061256539076566696,
-0.09929956495761871,
0.12845174968242645,
0.26583391427993774,
0.2934044301509857,
0.4646621644496918,
0.1487024426460266,
0.2840229868888855,
0.08272838592529297,
0.13032886385917664,
0.04085879027843475,
-0.09202374517917633,
-0.124587759375... |
https://github.com/huggingface/datasets/issues/1679 | Can't import cc100 dataset | cc100 was added recently, that's why it wasn't available yet.
To load it you can just update `datasets`
```
pip install --upgrade datasets
```
and then you can load `cc100` with
```python
from datasets import load_dataset
lang = "en"
dataset = load_dataset("cc100", lang=lang, split="train")
``` | There is some issue importing the cc100 dataset.
```
from datasets import load_dataset
dataset = load_dataset("cc100")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py
During handling of the above exception, another exception occur... | 45 | Can't import cc100 dataset
There is some issue importing the cc100 dataset.
```
from datasets import load_dataset
dataset = load_dataset("cc100")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py
During handling of the above excep... | [
-0.2749535143375397,
-0.3488156795501709,
-0.16288639605045319,
0.2345297783613205,
0.3763970136642456,
0.1457454115152359,
0.08866681158542633,
0.024639474228024483,
0.015882324427366257,
0.2067762017250061,
-0.12557514011859894,
0.058415137231349945,
0.02270512655377388,
0.31943652033805... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | The Pile dataset would be very nice.
Benchmarks show that Pile-trained models achieve better results than most currently trained models | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 22 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.3252146542072296,
0.21306735277175903,
-0.15067437291145325,
0.13712848722934723,
0.0009430189384147525,
0.25786757469177246,
0.14345327019691467,
0.24660293757915497,
-0.03071785345673561,
-0.06392889469861984,
-0.15774500370025635,
0.14879459142684937,
-0.5934296250343323,
0.184142902... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | The Pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo.
However, the question is whether you'd be ok with 800GB+ cached in your local disk, since the tfds implementation was designed to offload the storag... | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 45 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.3054967522621155,
0.0699857547879219,
-0.16041818261146545,
0.16122521460056305,
0.05833517387509346,
0.20564661920070648,
0.16649332642555237,
0.15289746224880219,
0.09870196133852005,
0.037991736084222794,
-0.20994707942008972,
-0.047399990260601044,
-0.5426978468894958,
0.11756318807... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | With the dataset streaming feature (see #2375) it will be more convenient to play with such big datasets :)
I'm currently adding C4 (see #2511 ) but I can probably start working on this afterwards | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 35 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.4098334312438965,
0.1263113021850586,
-0.16244108974933624,
0.15567880868911743,
0.01711125113070011,
0.1715146154165268,
0.0879783108830452,
0.2739041745662689,
0.01110324077308178,
0.01948508247733116,
-0.15745984017848969,
0.14085689187049866,
-0.5226864218711853,
0.19262942671775818... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too! | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 47 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.24625413119792938,
0.1785697042942047,
-0.10752720385789871,
0.20753329992294312,
-0.0628189817070961,
0.2498987913131714,
0.15014362335205078,
0.18070290982723236,
0.10750558972358704,
-0.0014592688530683517,
-0.3454720079898834,
0.09751681238412857,
-0.47461551427841187,
0.25449499487... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | Hi folks, thanks to some awesome work by @lhoestq and @albertvillanova you can now stream the Pile as follows:
```python
# Install master branch of `datasets`
# pip install git+https://github.com/huggingface/datasets.git#egg=datasets[streaming]
# pip install zstandard
from datasets import load_dataset
dset = lo... | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 92 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.32235708832740784,
-0.04369297996163368,
-0.14693976938724518,
0.041198279708623886,
0.20533113181591034,
0.056798793375492096,
0.11991959810256958,
0.26331451535224915,
-0.04956879839301109,
0.1010039895772934,
-0.25348079204559326,
0.14151686429977417,
-0.4532608091831207,
0.195507824... |
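Independently of the exact command elided above, the general streaming pattern is short — a sketch with an arbitrary large corpus (`oscar`) standing in for the Pile:

```python
from datasets import load_dataset

# Streaming yields examples lazily instead of downloading the full dataset.
streamed = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)
print(next(iter(streamed)))
```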
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | > Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too!
Hi @siddk thanks to a tip from @richarddwang it seems we can access some of the p... | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 199 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.22825706005096436,
0.1881413757801056,
-0.06500183790922165,
0.12541228532791138,
0.012601650319993496,
0.258663147687912,
0.3119369447231293,
0.33223140239715576,
0.13761365413665771,
0.02713296003639698,
-0.4386690855026245,
0.0903870090842247,
-0.4766182005405426,
0.37805280089378357... |
https://github.com/huggingface/datasets/issues/1675 | Add the 800GB Pile dataset? | Ah, I just saw that @lhoestq is already thinking about specifying one or more subsets in [this PR](https://github.com/huggingface/datasets/pull/2817#issuecomment-901874049) :) | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | 21 | Add the 800GB Pile dataset?
## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi... | [
-0.31402507424354553,
0.15608790516853333,
-0.15775591135025024,
0.18204884231090546,
-0.027711741626262665,
0.2084933966398239,
0.15764108300209045,
0.19064508378505707,
0.012680108658969402,
-0.046363722532987595,
-0.1654876470565796,
0.19022008776664734,
-0.4964600205421448,
0.244712725... |
https://github.com/huggingface/datasets/issues/1674 | dutch_social can't be loaded | Hi @koenvandenberge and @alighofrani95!
The datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the library.
Meanwhile, you can still load the datasets using one of the techniques described in... | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | 59 | dutch_social can't be loaded
Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
... | [
-0.144012913107872,
-0.12778671085834503,
-0.1560758501291275,
0.2933967113494873,
0.21679967641830444,
-0.1348889321088791,
-0.012057781219482422,
0.09374074637889862,
0.395803302526474,
-0.08629671484231949,
-0.2941347360610962,
-0.007835458032786846,
0.01967604272067547,
-0.042146805673... |
https://github.com/huggingface/datasets/issues/1674 | dutch_social can't be loaded | I just did the release :)
To load it you can just update `datasets`
```
pip install --upgrade datasets
```
and then you can load `dutch_social` with
```python
from datasets import load_dataset
dataset = load_dataset("dutch_social")
``` | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | 36 | dutch_social can't be loaded
Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
... | [
-0.144012913107872,
-0.12778671085834503,
-0.1560758501291275,
0.2933967113494873,
0.21679967641830444,
-0.1348889321088791,
-0.012057781219482422,
0.09374074637889862,
0.395803302526474,
-0.08629671484231949,
-0.2941347360610962,
-0.007835458032786846,
0.01967604272067547,
-0.042146805673... |
https://github.com/huggingface/datasets/issues/1674 | dutch_social can't be loaded | @lhoestq could you also shed light on the Hindi Wikipedia dataset in issue #1673? Will this also be available in the new release that you committed recently? | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | 28 | dutch_social can't be loaded
Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
... | [
-0.144012913107872,
-0.12778671085834503,
-0.1560758501291275,
0.2933967113494873,
0.21679967641830444,
-0.1348889321088791,
-0.012057781219482422,
0.09374074637889862,
0.395803302526474,
-0.08629671484231949,
-0.2941347360610962,
-0.007835458032786846,
0.01967604272067547,
-0.042146805673... |
https://github.com/huggingface/datasets/issues/1674 | dutch_social can't be loaded | Okay. Could you comment on the #1673 thread? Actually @thomwolf had commented that if I use the datasets library from source, it would allow me to download the Hindi Wikipedia dataset, but even version 1.1.3 gave me the same issue. The details are there in the issue #1673 thread. | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | 49 | dutch_social can't be loaded
Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
... | [
-0.144012913107872,
-0.12778671085834503,
-0.1560758501291275,
0.2933967113494873,
0.21679967641830444,
-0.1348889321088791,
-0.012057781219482422,
0.09374074637889862,
0.395803302526474,
-0.08629671484231949,
-0.2941347360610962,
-0.007835458032786846,
0.01967604272067547,
-0.042146805673... |
https://github.com/huggingface/datasets/issues/1673 | Unable to Download Hindi Wikipedia Dataset | Currently this dataset is only available when the library is installed from source since it was added after the last release.
We pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.
We'll see if we can provide access to newer data... | I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso... | 72 | Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b... | [
-0.1846337616443634,
0.037301838397979736,
-0.06111956387758255,
0.22921812534332275,
0.05320935323834419,
0.11538349837064743,
-0.02166111022233963,
0.34670373797416687,
0.2222076803445816,
0.024645183235406876,
0.3987331986427307,
0.07446524500846863,
0.046689473092556,
0.217279016971588... |
https://github.com/huggingface/datasets/issues/1673 | Unable to Download Hindi Wikipedia Dataset | So for now, should I try to install the library from source and then try out the same piece of code? Will it work then, considering both versions will match? | I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso... | 32 | Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b... | [
-0.11161893606185913,
0.13646307587623596,
-0.08705770969390869,
0.25539591908454895,
0.0008567371987737715,
0.05895308405160904,
-0.020084837451577187,
0.3475777208805084,
0.19664326310157776,
-0.031185930594801903,
0.41823217272758484,
0.053538184612989426,
0.07384125888347626,
0.2070574... |
https://github.com/huggingface/datasets/issues/1673 | Unable to Download Hindi Wikipedia Dataset | Hey, so I tried installing the library from source using the commands: **git clone https://github.com/huggingface/datasets**, **cd datasets** and then **pip3 install -e .**. But I am still facing the same error that the file is not found. Please advise.
The Datasets library version now is 1.1.3 by installing from sour... | I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso... | 71 | Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b... | [
-0.14655029773712158,
0.06893201172351837,
-0.06859811395406723,
0.2491660863161087,
0.09402235597372055,
0.0981622263789177,
-0.010803690180182457,
0.300265371799469,
0.20563140511512756,
0.010553015395998955,
0.3993402123451233,
0.12490245699882507,
0.09441278874874115,
0.171274587512016... |
https://github.com/huggingface/datasets/issues/1673 | Unable to Download Hindi Wikipedia Dataset | Looks like the wikipedia dump for hindi at the date of 05/05/2020 is not available anymore.
You can try to load a more recent version of wikipedia
```python
from datasets import load_dataset
d = load_dataset("wikipedia", language="hi", date="20210101", split="train", beam_runner="DirectRunner")
``` | I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso... | 40 | Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b... | [
-0.1142456904053688,
0.011291678063571453,
-0.08272907137870789,
0.18819203972816467,
0.05032927170395851,
0.20311041176319122,
-0.02170965075492859,
0.40279945731163025,
0.2084721326828003,
-0.005866138730198145,
0.29710066318511963,
-0.010639035142958164,
0.10074154287576675,
0.112816445... |
https://github.com/huggingface/datasets/issues/1672 | load_dataset hang on file_lock | Having the same issue with `datasets` 1.1.3 or 1.5.0 (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04
```py
In [1]: from datasets import load_dataset ... | I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab.
Transformers: 3.3.1
Datasets: 1.0.2
Windows 10 (also tested in WSL)
```
datasets.logging.set_verbosity_debug()
datasets.
train_dataset = load_dataset('squad', split='train')
valid_dataset = load_dataset('squad', split='validat... | 234 | load_dataset hang on file_lock
I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab.
Transformers: 3.3.1
Datasets: 1.0.2
Windows 10 (also tested in WSL)
```
datasets.logging.set_verbosity_debug()
datasets.
train_dataset = load_dataset('squad', split='train')
valid_dataset = loa... | [
-0.302115261554718,
-0.02152092754840851,
-0.0879950150847435,
0.17893604934215546,
0.517988920211792,
0.14992554485797882,
0.5325867533683777,
-0.03884607180953026,
0.01999598927795887,
0.004317913204431534,
-0.25354844331741333,
0.2555573880672455,
-0.010385445319116116,
-0.1183219254016... |
https://github.com/huggingface/datasets/issues/1671 | connection issue | Also, the major issue for me is the format issue: even if I go through changing the whole code to use load_from_disk, then if I do
d = datasets.load_from_disk("imdb")
d = d["train"][:10] => the result of this is no longer in the datasets format
this is different from what you get when you call load_dataset with split="train[:10]"
could you tell m... | Hi
I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.
If I want to keep the code the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r... | 64 | connection issue
Hi
I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.
If I want to keep the code the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder th... | [
-0.4165682792663574,
0.23384031653404236,
0.011721443384885788,
0.3550441563129425,
0.43614086508750916,
-0.18401682376861572,
0.17185884714126587,
0.20301806926727295,
-0.19610540568828583,
0.011228928342461586,
-0.2930116653442383,
0.1609506756067276,
0.14485131204128265,
0.2955973446369... |
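On the formatting question in the comment above: slicing a `Dataset` returns a plain dict of columns, while `select` keeps a `Dataset`; a short sketch reusing the names from the comment:

```python
import datasets

d = datasets.load_from_disk("imdb")
head_as_dict = d["train"][:10]                  # dict of columns, not a Dataset
head_as_dataset = d["train"].select(range(10))  # still a Dataset
```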
https://github.com/huggingface/datasets/issues/1671 | connection issue | > `
requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out.... | Hi
I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.
If I want to keep the code the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r... | 210 | connection issue
Hi
I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this.
If I want to keep the code the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder th... | [
-0.4165682792663574,
0.23384031653404236,
0.011721443384885788,
0.3550441563129425,
0.43614086508750916,
-0.18401682376861572,
0.17185884714126587,
0.20301806926727295,
-0.19610540568828583,
0.011228928342461586,
-0.2930116653442383,
0.1609506756067276,
0.14485131204128265,
0.2955973446369... |
https://github.com/huggingface/datasets/issues/1670 | wiki_dpr pre-processing performance | Hi ! And thanks for the tips :)
Indeed currently `wiki_dpr` takes some time to be processed.
Multiprocessing for dataset generation is definitely going to speed up things.
Regarding the index, note that for the default configurations the index is downloaded instead of being built, which avoids spending time on c... | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | 129 | wiki_dpr pre-processing performance
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won... | [
-0.21966484189033508,
-0.18326736986637115,
-0.11372299492359161,
0.08813261985778809,
-0.11355595290660858,
-0.08153735846281052,
0.022308815270662308,
0.3311193585395813,
0.18954695761203766,
0.07095278799533844,
0.020521165803074837,
-0.10259182751178741,
0.32460537552833557,
0.14523097... |
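As a hedged illustration of "the index is downloaded instead of being built": the choice is made through the config name. The names below follow the wiki_dpr card and may change between versions:

```python
from datasets import load_dataset

# Downloads a prebuilt compressed FAISS index rather than building one locally.
ds = load_dataset("wiki_dpr", "psgs_w100.nq.compressed", split="train")

# Skips the index entirely if only the passages/embeddings are needed.
ds_no_index = load_dataset("wiki_dpr", "psgs_w100.nq.no_index", split="train")
```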
https://github.com/huggingface/datasets/issues/1670 | wiki_dpr pre-processing performance | I'd be happy to contribute something when I get the time, probably adding multiprocessing and / or cython support to wiki_dpr. I've written cythonized apache beam code before as well.
For sharded index building, I used the FAISS example code for indexing 1 billion vectors as a start. I'm sure you're aware that the d... | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | 66 | wiki_dpr pre-processing performance
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won... | [
-0.22649028897285461,
-0.15619109570980072,
-0.12017626315355301,
0.06685486435890198,
-0.1370592713356018,
-0.08907434344291687,
0.025616968050599098,
0.3342381715774536,
0.19152718782424927,
0.06897865235805511,
0.03954366222023964,
-0.09713640064001083,
0.3024311363697052,
0.14719110727... |
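The "FAISS example code for indexing 1 billion vectors" mentioned above follows a train-then-add-in-shards pattern. Here is a toy-scale sketch of that pattern; the sizes and names are illustrative, nothing below is taken from the thread:

```python
import numpy as np
import faiss

d = 768  # wiki_dpr embedding dimension
xb = np.random.rand(100_000, d).astype("float32")  # stand-in for real embeddings

# IVF index: a coarse quantizer partitions the space; vectors are then
# added shard by shard so peak memory stays bounded.
index = faiss.index_factory(d, "IVF1024,Flat")
index.train(xb[:50_000])  # train the coarse quantizer on a sample
for shard in np.array_split(xb, 10):
    index.add(np.ascontiguousarray(shard))
index.nprobe = 16  # cells probed per query: the recall/speed trade-off knob
```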
https://github.com/huggingface/datasets/issues/1662 | Arrow file is too large when saving vector data | Hi !
The Arrow file size is due to the embeddings. Indeed, if they're stored as float32, then the total size of the embeddings is
20,000,000 vectors × 768 dimensions × 4 bytes per dimension ≈ 60 GB
If you want to reduce the size, you can consider using quantization, for example, or maybe using dimension reduction te... | I computed the sentence embedding of each sentence of bookcorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained Arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the Arrow file? | 59 | Arrow file is too large when saving vector data
I computed the sentence embedding of each sentence of bookcorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained Arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the Ar... | [
0.11372185498476028,
-0.33165597915649414,
-0.05896599590778351,
0.45345360040664673,
0.13431209325790405,
-0.11615707725286484,
-0.17864489555358887,
0.46086549758911133,
-0.40809890627861023,
0.33289843797683716,
0.1982359141111374,
-0.08847441524267197,
-0.12217715382575989,
-0.18767298... |
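The size estimate in the reply above, spelled out as plain arithmetic (no library involved):

```python
n_vectors = 20_000_000  # sentences embedded
dim = 768               # BERT-base hidden size
bytes_per_dim = 4       # float32
total_bytes = n_vectors * dim * bytes_per_dim
print(total_bytes / 1e9)    # 61.44 GB in decimal units -- the "~60 GB" above
print(total_bytes / 2**30)  # ~57.2 GiB in binary units
```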
https://github.com/huggingface/datasets/issues/1662 | Arrow file is too large when saving vector data | Thanks for your reply @lhoestq.
I want to save the original embeddings for these sentences for subsequent calculations. So does Arrow have a way to save in a compressed format to reduce the size of the file? | I computed the sentence embedding of each sentence of bookcorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained Arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the Arrow file? | 36 | Arrow file is too large when saving vector data
I computed the sentence embedding of each sentence of bookcorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained Arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the Ar... | [
0.06726440787315369,
-0.28253623843193054,
-0.06414582580327988,
0.4023742973804474,
0.08083552867174149,
-0.05377655848860741,
-0.2692088782787323,
0.47933995723724365,
-0.5607516169548035,
0.3315005898475647,
0.10898430645465851,
0.1340748518705368,
-0.11804646998643875,
-0.2116097658872... |
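On the compression question above: so far as the thread suggests, the library wrote plain, uncompressed Arrow at the time (an assumption worth verifying), so the simplest lever, if exact float32 values aren't needed downstream, is lower precision before writing. A numpy-only sketch of the effect, with toy sizes and illustrative names:

```python
import numpy as np

embeddings = np.random.rand(10_000, 768).astype(np.float32)  # toy stand-in
print(embeddings.nbytes / 2**20)  # ~29.3 MiB at float32

half = embeddings.astype(np.float16)  # half precision, with some accuracy loss
print(half.nbytes / 2**20)        # ~14.6 MiB: half the footprint
```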
https://github.com/huggingface/datasets/issues/1647 | NarrativeQA fails to load with `load_dataset` | Hi @eric-mitchell,
I think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with v2 of `datasets`.
For now, you should be able to load the dataset after installing the latest (master) version of `datasets` using pip:
`pip install git+... | When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at
https://r... | 55 | NarrativeQA fails to load with `load_dataset`
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/na... | [
-0.28219372034072876,
0.1054384857416153,
0.03263697028160095,
0.24390918016433716,
0.19000594317913055,
0.17991851270198822,
0.13333983719348907,
0.041288990527391434,
-0.1444730907678604,
0.04415304586291313,
-0.014136065728962421,
-0.07569536566734314,
-0.06875282526016235,
0.3853095769... |
https://github.com/huggingface/datasets/issues/1647 | NarrativeQA fails to load with `load_dataset` | Update: HuggingFace did an intermediate release yesterday, just before v2.0.
To load it, you can just update `datasets`:
`pip install --upgrade datasets` | When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at
https://r... | 23 | NarrativeQA fails to load with `load_dataset`
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/na... | [
-0.21995525062084198,
0.04800880700349808,
0.06081918627023697,
0.31242308020591736,
0.19409379363059998,
0.1517978012561798,
0.14009970426559448,
0.021162137389183044,
-0.11976555734872818,
0.0751328244805336,
0.006081794388592243,
-0.10550099611282349,
-0.028470121324062347,
0.3287839293... |
https://github.com/huggingface/datasets/issues/1644 | HoVeR dataset fails to load | HoVeR was added recently; that's why it wasn't available yet.
To load it, you can just update `datasets`:
```
pip install --upgrade datasets
```
and then you can load `hover` with
```python
from datasets import load_dataset
dataset = load_dataset("hover")
``` | Hi! I'm getting an error when trying to load the **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.
Steps to reproduce the error:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Traceback (most recent call last):
... | 40 | HoVeR dataset fails to load
Hi! I'm getting an error when trying to load the **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.
Steps to reproduce the error:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Tracebac... | [
-0.22146938741207123,
0.05883113667368889,
0.017389489337801933,
0.2946338355541229,
0.2871960699558258,
0.1036292165517807,
0.2778085768222809,
0.21131351590156555,
0.05538627505302429,
0.04410192742943764,
-0.16387644410133362,
-0.02589070051908493,
0.0066258725710213184,
-0.175087556242... |
https://github.com/huggingface/datasets/issues/1641 | muchocine dataset cannot be downloaded | I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached, passing the path is the only way (for now) to load the dataset.
```python
from datas... | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | 88 | muchocine dataset cannot be downloaded
```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do... | [
-0.36579957604408264,
-0.1491081863641739,
-0.053581614047288895,
0.33292171359062195,
0.43087688088417053,
0.12308886647224426,
0.3599627912044525,
0.3173547089099884,
0.3176315128803253,
0.06259830296039581,
-0.2380271553993225,
0.001200711471028626,
-0.088910311460495,
0.061044182628393... |
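A sketch of the workaround described in the comment above (the truncated snippet is left as-is; the clone location below is illustrative): after cloning the https://github.com/huggingface/datasets repository, point `load_dataset` at the folder containing the dataset's loading script.

```python
from datasets import load_dataset

# Assumes the repo was cloned into ./datasets (illustrative path); in that
# repo the muchocine script lives under datasets/datasets/muchocine.
dataset = load_dataset("./datasets/datasets/muchocine")
```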