The columns in this preview are:

| Column | Type | Min | Max |
| --- | --- | --- | --- |
| `html_url` | string (length) | 48 | 51 |
| `title` | string (length) | 5 | 268 |
| `comments` | string (length) | 70 | 51.8k |
| `body` | string (length) | 0 | 29.8k |
| `comment_length` | int64 (value) | 16 | 1.52k |
| `text` | string (length) | 164 | 54.1k |
| `embeddings` | list (length) | 768 | 768 |

`text` is the concatenation of `title`, `body`, and `comments`, and each `embeddings` entry is a 768-dimensional vector; both are truncated in this preview.
https://github.com/huggingface/datasets/issues/435
ImportWarning for pyarrow 1.0.0
This was fixed in #434. We'll do a release later this week to include this fix. Thanks for reporting!
The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files
comment_length: 19
https://github.com/huggingface/datasets/issues/435
ImportWarning for pyarrow 1.0.0
I don't know if the fix was made, but the problem is still present. Installed with pip: NLP 0.3.0 // pyarrow 1.0.0. OS: Arch Linux with kernel zen 5.8.5.
The following PR raised ImportWarning at `pyarrow ==1.0.0` https://github.com/huggingface/nlp/pull/265/files
comment_length: 31
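For context, warnings like this usually come from version checks that compare a single version component; once pyarrow jumped from 0.x to 1.x, a minor-version-only check misfires. A minimal sketch of a robust check, assuming the intent was a `>= 0.14.0` requirement (the exact code in the linked PR may differ):

```python
from packaging import version  # widely available; ships with pip/setuptools

import pyarrow

# Compare full versions instead of one component, so that 1.0.0
# correctly satisfies a ">= 0.14.0" requirement.
if version.parse(pyarrow.__version__) < version.parse("0.14.0"):
    raise ImportWarning("pyarrow >= 0.14.0 is required to run this library")
```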
https://github.com/huggingface/datasets/issues/433
How to reuse functionality of a (generic) dataset?
Hi @ArneBinder, we have a few "generic" datasets which are intended to load data files with a predefined format: - csv: https://github.com/huggingface/nlp/tree/master/datasets/csv - json: https://github.com/huggingface/nlp/tree/master/datasets/json - text: https://github.com/huggingface/nlp/tree/master/datasets/text...
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
comment_length: 56
https://github.com/huggingface/datasets/issues/433
How to reuse functionality of a (generic) dataset?
> Maybe your brat loading script could be shared in a similar fashion? @thomwolf That was also my first idea, and I think I will tackle it in the next few days. I separated the code and created a real abstract class `AbstractBrat` to allow inheriting from it (I've just seen that the dataset_loader loads the first non...
I have written a generic dataset for corpora created with the Brat annotation tool ([specification](https://brat.nlplab.org/standoff.html), [dataset code](https://github.com/ArneBinder/nlp/blob/brat/datasets/brat/brat.py)). Now I wonder how to use that to create specific dataset instances. What's the recommended way to...
comment_length: 416
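A hedged sketch of the inheritance pattern discussed above: a generic `AbstractBrat` builder holding the shared standoff parsing, with thin per-corpus subclasses. Only `AbstractBrat` is named in the thread; every other name, URL, and method body here is illustrative.

```python
import nlp


class AbstractBrat(nlp.GeneratorBasedBuilder):
    """Shared logic for corpora in Brat standoff format."""

    def _generate_examples(self, filepath):
        # the shared .ann/.txt standoff parsing would live here
        raise NotImplementedError


class MyBratCorpus(AbstractBrat):
    """A concrete corpus only has to say where its files come from."""

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract("https://example.com/corpus.zip")
        return [
            nlp.SplitGenerator(name=nlp.Split.TRAIN, gen_kwargs={"filepath": data_dir})
        ]
```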
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
Yes, that would be nice. We could take a look at what TensorFlow's `tf.data` does under the hood, for instance.
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_dataset()` function to join them all together?
comment_length: 20
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
So `tf.data.Dataset.map()` returns a `ParallelMapDataset` if `num_parallel_calls is not None` [link](https://github.com/tensorflow/tensorflow/blob/2b96f3662bd776e277f86997659e61046b56c315/tensorflow/python/data/ops/dataset_ops.py#L1623). There, `num_parallel_calls` is turned into a tensor and fed to `gen_dataset...
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_dataset()` function to join them all together?
comment_length: 47
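For reference, the `tf.data` behaviour described above in runnable form (TF 2.x; the dataset and function are illustrative):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(8)
# With num_parallel_calls set, map() returns a ParallelMapDataset under the hood.
ds = ds.map(lambda x: x * 2, num_parallel_calls=4)
print(list(ds.as_numpy_iterator()))  # [0, 2, 4, ..., 14]
```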
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
Multiprocessing was added in #552. You can set the number of processes with `.map(..., num_proc=...)`. It also works for `filter`. Closing this one, but feel free to re-open if you have other questions.
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_dataset()` function to join them all together?
comment_length: 34
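A minimal usage sketch of the feature announced above, in the `nlp` 0.x API (the same `num_proc` parameter survives in the renamed `datasets` library):

```python
import nlp

dataset = nlp.load_dataset("bookcorpus", split="train")

# num_proc shards the dataset and runs the function in parallel processes
dataset = dataset.map(lambda example: example, num_proc=4)
dataset = dataset.filter(lambda example: len(example["text"]) > 0, num_proc=4)
```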
https://github.com/huggingface/datasets/issues/426
[FEATURE REQUEST] Multiprocessing for dataset.map, dataset.filter
@lhoestq Great feature implemented! Do you have plans to add it to the official tutorials ([Processing data in a Dataset](https://huggingface.co/docs/datasets/processing.html?highlight=save#augmenting-the-dataset))? It took me some time to find this parallel-processing API.
It would be nice to be able to speed up `dataset.map` or `dataset.filter`. Perhaps this is as easy as sharding the dataset, sending each shard to a process/thread/dask pool, and using the new `nlp.concatenate_dataset()` function to join them all together?
comment_length: 29
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Hi @lhoestq I made the proposed changes to the `xtreme.py` script. I noticed that I also need to change the schema in the `dataset_infos.json` file. More specifically the `"features"` part of the PAN-X.LANG dataset: ```json "features":{ "word":{ "dtype":"string", "id":null, "_type":"Valu...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
comment_length: 148
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Hi ! You have to point to your local script. First clone the repo and then: ```python dataset = load_dataset("./datasets/xtreme", "PAN-X.en") ``` The "xtreme" directory contains "xtreme.py". You also have to change the features definition in the `_info` method. You could use: ```python features = nlp.F...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
comment_length: 66
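Putting the advice above together, a sketch of the local-script workflow (paths illustrative):

```python
# git clone https://github.com/huggingface/nlp && cd nlp
from nlp import load_dataset

# "./datasets/xtreme" is the folder containing the (locally modified) xtreme.py
dataset = load_dataset("./datasets/xtreme", "PAN-X.en", data_dir="./data")
```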
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
Thanks, I am making progress. I got a new error, `NonMatchingSplitsSizesError` (see traceback below), which I suspect is due to the fact that the number of rows in the dataset changed (one row per word --> one row per sentence), as well as the number of bytes, due to the slightly updated data structure. ```python NonMat...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
comment_length: 130
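`NonMatchingSplitsSizesError` means the split sizes recorded in `dataset_infos.json` no longer match what the modified script produces. A hedged workaround, assuming the `ignore_verifications` flag of `nlp` 0.x, is to skip the checks (regenerating the recorded infos is the cleaner fix):

```python
from nlp import load_dataset

# skip the split-size checks against the stale dataset_infos.json
dataset = load_dataset(
    "./datasets/xtreme", "PAN-X.en", data_dir="./data", ignore_verifications=True
)
```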
https://github.com/huggingface/datasets/issues/425
Correct data structure for PAN-X task in XTREME dataset?
One more thing about features. I mentioned ```python features = nlp.Features({ "words": [nlp.Value("string")], "ner_tags": [nlp.Value("string")], "langs": [nlp.Value("string")], }) ``` but it's actually not consistent with the way we write datasets. Something like this is simpler to read and mor...
Hi 🤗 team! ## Description of the problem Thanks to the fix from #416 I am now able to load the NER task in the XTREME dataset as follows: ```python from nlp import load_dataset # AmazonPhotos.zip is located in data/ dataset = load_dataset("xtreme", "PAN-X.en", data_dir='./data') dataset_train = dataset['tr...
comment_length: 61
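The comment above is truncated before the "simpler" form it recommends; presumably that is the explicit `nlp.Sequence` wrapper. A sketch of that assumption:

```python
import nlp

# Assumed continuation of the truncated comment: the explicit Sequence form.
features = nlp.Features({
    "words": nlp.Sequence(nlp.Value("string")),
    "ner_tags": nlp.Sequence(nlp.Value("string")),
    "langs": nlp.Sequence(nlp.Value("string")),
})
```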
https://github.com/huggingface/datasets/issues/418
Addition of google drive links to dl_manager
I think the problem is the way you wrote your URLs. Try the following structure: `https://drive.google.com/uc?export=download&id=your_file_id`. @lhoestq
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager, because it was downloading nothing from the Drive links, and use gdown instead. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
comment_length: 20
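A hedged sketch of a helper matching the URL structure suggested above; `_get_drive_url`, mentioned in the next comment, presumably does something similar, but the parsing here is an assumption:

```python
def _get_drive_url(url: str) -> str:
    """Turn a Google Drive share link into a direct-download link."""
    # e.g. https://drive.google.com/file/d/<FILE_ID>/view
    #   -> https://drive.google.com/uc?export=download&id=<FILE_ID>
    file_id = url.split("/d/")[1].split("/")[0]
    return f"https://drive.google.com/uc?export=download&id={file_id}"
```

The result can then be passed through `dl_manager.download_and_extract(...)` as usual.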
https://github.com/huggingface/datasets/issues/418
Addition of google drive links to dl_manager
Oh sorry, I think `_get_drive_url` is doing that. Have you tried to use `dl_manager.download_and_extract(_get_drive_url(_TRAIN_URL))`? It should work with Google Drive links.
Hello there, I followed the template to create a download script of my own, which works fine for me, although I had to bypass the dl_manager, because it was downloading nothing from the Drive links, and use gdown instead. This is the script for me: ```python class EmoConfig(nlp.BuilderConfig): """BuilderConfig ...
comment_length: 21
https://github.com/huggingface/datasets/issues/414
from_dict delete?
`from_dict` was added in #350, which was unfortunately not included in the 0.3.0 release. It's going to be included in the next release, which will be out pretty soon. Right now, if you want to use `from_dict`, you have to install the package from the master branch ``` pip install git+https://github.com/huggingface...
AttributeError: type object 'Dataset' has no attribute 'from_dict'
comment_length: 53
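For reference, a minimal `Dataset.from_dict` call once a build containing #350 is installed:

```python
import nlp

ds = nlp.Dataset.from_dict({"text": ["hello", "world"], "label": [0, 1]})
print(ds[0])  # {'text': 'hello', 'label': 0}
```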
https://github.com/huggingface/datasets/issues/414
from_dict delete?
> `from_dict` was added in #350, which was unfortunately not included in the 0.3.0 release. It's going to be included in the next release, which will be out pretty soon. > Right now, if you want to use `from_dict`, you have to install the package from the master branch > > ``` > pip install git+https://github.com...
AttributeError: type object 'Dataset' has no attribute 'from_dict'
comment_length: 62
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
Unfortunately it's not possible to download only the dev set of NQ. I think we could add a way to download only the test set by adding a custom configuration to the processing script though.
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
comment_length: 35
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
Ok, got it. I think this could be a valuable feature - especially for large datasets like NQ, but potentially also others. For us, in this case it will make the difference between using the library and keeping the old downloads of the raw dev datasets. However, I don't know if that fits into your plans with the library ...
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
comment_length: 70
https://github.com/huggingface/datasets/issues/413
Is there a way to download only NQ dev?
I don't think we could force this behavior generally, since dataset script authors are free to organize the file download as they want (sometimes the mapping between splits and files can be very much nontrivial), but we can indeed add an additional configuration for Natural Questions, as @lhoestq indicated.
Maybe I missed that in the docs, but is there a way to only download the dev set of natural questions (~1 GB)? As we want to benchmark QA models on different datasets, I would like to avoid downloading the 41GB of training data. I tried ``` dataset = nlp.load_dataset('natural_questions', split="validation", bea...
comment_length: 50
https://github.com/huggingface/datasets/issues/412
Unable to load XTREME dataset from disk
Hi @lewtun, you have to provide the full path to the downloaded file, for example `/home/lewtum/..`
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
comment_length: 16
https://github.com/huggingface/datasets/issues/412
Unable to load XTREME dataset from disk
I was able to repro. Opening a PR to fix that. Thanks for reporting this issue !
Hi 🤗 team! ## Description of the problem Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark. I have manually downloaded the `AmazonPho...
comment_length: 17
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
Fixed. Could you try again, @mitchellgordon95? It was due to a file not being updated on S3. We need to make sure all the dataset scripts get updated properly @julien-c
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
comment_length: 30
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
I found the same issue with almost any language other than English. (For English, it works). Will someone need to update the file on S3 again?
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
comment_length: 26
https://github.com/huggingface/datasets/issues/407
MissingBeamOptions for Wikipedia 20200501.en
This is because only some languages are already preprocessed (en, de, fr, it) and stored on our Google storage. We plan to have a systematic way to preprocess more Wikipedia languages in the future. For the other languages, you have to process them on your side using Apache Beam. That's why the lib asks for a Beam r...
There may or may not be a regression for the pre-processed Wikipedia dataset. This was working fine 10 commits ago (without having Apache Beam available): ``` nlp.load_dataset('wikipedia', "20200501.en", split='train') ``` And now, having pulled master, I get: ``` Downloading and preparing dataset wikipedia...
comment_length: 58
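A hedged sketch of building a non-preprocessed language with the `beam_runner` argument the error asks for (the config name is illustrative, and the local `DirectRunner` can be very slow on full Wikipedia dumps):

```python
import nlp

# Languages other than the preprocessed ones (en, de, fr, it) must be built
# locally with an Apache Beam runner.
wiki = nlp.load_dataset(
    "wikipedia", "20200501.nl", split="train", beam_runner="DirectRunner"
)
```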
https://github.com/huggingface/datasets/issues/406
Faster Shuffling?
I think the slowness here probably comes from the fact that we are copying to and from Python. @lhoestq For all the `select`-based methods, I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think?
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
comment_length: 51
https://github.com/huggingface/datasets/issues/406
Faster Shuffling?
> @lhoestq for all the `select`-based methods I think we should stay in Arrow format and update the writer so that it can accept Arrow tables or batches as well. What do you think? I just tried `writer.write_table` with tables of 1000 elements and it's slower than the solution in #405. On my side (select 10 ...
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
comment_length: 88
https://github.com/huggingface/datasets/issues/406
Faster Shuffling?
I tried using `.take` from pyarrow RecordBatches, but it doesn't improve the speed that much: ```python import nlp import numpy as np dset = nlp.Dataset.from_file("dummy_test_select.arrow") # dummy dataset with 100000 examples like {"a": "h"*512} indices = np.random.randint(0, 100_000, 1000_000) ``` ```pytho...
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
comment_length: 210
https://github.com/huggingface/datasets/issues/406
Faster Shuffling?
Shuffling is now significantly faster thanks to #513. Feel free to play with it now :) Closing this one, but feel free to re-open if you have other questions.
Consider shuffling bookcorpus: ``` dataset = nlp.load_dataset('bookcorpus', split='train') dataset.shuffle() ``` According to tqdm, this will take around 2.5 hours on my machine to complete (even with the faster version of select from #405). I've also tried with `keep_in_memory=True` and `writer_batch_size=1000`...
comment_length: 29
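After the #513 speedup, the original reproduction reduces to this (a sketch; the seed is shown for determinism):

```python
import nlp

dataset = nlp.load_dataset("bookcorpus", split="train")
shuffled = dataset.shuffle(seed=42)  # no longer takes hours after #513
```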
https://github.com/huggingface/datasets/issues/388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
Similar slow download speed here for `nlp.load_dataset('wmt14', 'fr-en')`: Downloading: 100%|██████████████████████████████████████████████████████████| 658M/658M [1:00:42<00:00, 181kB/s] Downloading: 100%|██████████████████████████████████████████████████████████| 918M/918M [1:39:38<00:00, 154kB/s] Downloading: 2...
1. I tried downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs, but the download speed is **extremely slow**; the same behaviour is not ob...
comment_length: 38
https://github.com/huggingface/datasets/issues/388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
> The code runs but the download speed is extremely slow, the same behaviour is not observed on wmt16 and wmt18 The original source for the files may provide slow download speeds. We can probably host these files ourselves. > When trying to download wmt17 zh-en, I got the following error: > ConnectionError: Cou...
1. I tried downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs, but the download speed is **extremely slow**; the same behaviour is not ob...
comment_length: 97
https://github.com/huggingface/datasets/issues/388
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17
Yeah, the download speed is sadly always extremely slow :-/. I will try to check out the `wmt17 zh-en` bug :-)
1. I tried downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code: ``` nlp.load_dataset('wmt14','de-en') nlp.load_dataset('wmt15','de-en') nlp.load_dataset('wmt17','de-en') nlp.load_dataset('wmt19','de-en') ``` The code runs, but the download speed is **extremely slow**; the same behaviour is not ob...
comment_length: 21
https://github.com/huggingface/datasets/issues/387
Conversion through to_pandas outputs numpy arrays for lists instead of Python objects
To convert from Arrow types we have three options: `to_numpy`, `to_pandas` and `to_pydict`/`to_pylist`. - `to_numpy` and `to_pandas` return numpy arrays instead of lists but are very fast. - `to_pydict`/`to_pylist` can be 100x slower and become the bottleneck for reading data, but at least they return lists. Maybe we can have to...
In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of Python objects. Here is an example: ```python >>> dataset._data.slice(key, 1).to_pandas().to_dict("list") {'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi...
comment_length: 69
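The trade-off is easy to see on a toy Arrow table, with plain pyarrow:

```python
import pyarrow as pa

table = pa.table({"tokens": [["a", "b"], ["c"]]})

print(table.to_pydict()["tokens"])           # [['a', 'b'], ['c']] — Python lists, slower
print(type(table.to_pandas()["tokens"][0]))  # <class 'numpy.ndarray'> — fast, but numpy
```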
https://github.com/huggingface/datasets/issues/378
[dataset] Structure of MLQA seems unnecessarily nested
Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py Should we scan all the datasets to remove this pattern of unnecessary nesting?
The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97 Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds? ```python ...
comment_length: 19
https://github.com/huggingface/datasets/issues/376
to_pandas conversion doesn't always work
Could you try to update pyarrow to >=0.17.0? It should fix the `to_pandas` bug. Also, I'm not sure that structures like `list<struct>` are fully supported in the lib (none of the datasets use them). It can cause issues when using dataset transforms like `filter`, for example.
For some complex nested types, the conversion from Arrow to a Python dict through pandas doesn't seem to be possible. Here is an example using the official SQuAD v2 JSON file. This example was found while investigating #373. ```python >>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0....
comment_length: 47
https://github.com/huggingface/datasets/issues/375
TypeError when computing bertscore
I am not able to reproduce this issue on my side. Could you give us more details about the inputs you used ? I do get another error though: ``` ~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_size, device, all_...
Hi, I installed nlp 0.3.0 via pip, and my Python version is 3.7. When I tried to compute bertscore with the code: ``` import nlp bertscore = nlp.load_metric('bertscore') # load hyps and refs ... print (bertscore.compute(hyps, refs, lang='en')) ``` I got the following error. ``` Traceback (most rece...
comment_length: 91
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step: ```python import os import pyarrow.json as paj imp...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 191
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
Yes, deleting the directory solves the error whenever I try to rerun. By replacing the json-loader, you mean the cached file in my `site-packages` directory? e.g. `/home/XXX/.cache/lib/python3.7/site-packages/nlp/datasets/json/(...)/json.py` When I was testing this out before the #372 PR was merged I had issues ...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 96
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
I see, diving into the JSON file for SQuAD, it's a pretty complex structure. The best solution for you, if you have a dataset really similar to SQuAD, would be to copy and modify the SQuAD data processing script. We will probably soon add an option to specify the file path to use instead of the automatic URL enco...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 117
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
This seems like a more sensible solution! Thanks, @thomwolf. It's been a little daunting to understand what these scripts actually do, due to the level of abstraction and central documentation. Am I correct in assuming that the `_generate_examples()` function is the actual procedure for how the data is loaded from f...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 156
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
Yes, `_generate_examples()` is the main entry point. If you change the shape of the returned dictionary, you also need to update the `features` in `_info`. I'm currently writing the docs, so it should soon be easier to use the library and to know how to add your datasets.
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 48
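A hedged skeleton of how `_generate_examples()` and `_info` fit together in an `nlp` 0.x dataset script; the class, file format, and field names are illustrative:

```python
import json

import nlp


class MySquadLike(nlp.GeneratorBasedBuilder):
    def _info(self):
        # `features` must match the shape of the dicts yielded below
        return nlp.DatasetInfo(
            features=nlp.Features({
                "question": nlp.Value("string"),
                "answer": nlp.Value("string"),
            })
        )

    def _split_generators(self, dl_manager):
        return [nlp.SplitGenerator(name=nlp.Split.TRAIN,
                                   gen_kwargs={"filepath": "train.json"})]

    def _generate_examples(self, filepath):
        # the main entry point: yield (key, example) pairs from your file
        with open(filepath) as f:
            for i, row in enumerate(json.load(f)):
                yield i, {"question": row["q"], "answer": row["a"]}
```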
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
Could you try to update pyarrow to >=0.17.0 @vegarab ? I don't have any segmentation fault with my version of pyarrow (0.17.1) I tested with ```python import nlp s = nlp.load_dataset("json", data_files="train-v2.0.json", field="data", split="train") s[0] # {'title': 'Normans', 'paragraphs': [{'qas': [{'questio...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 49
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
Also, if you want to have your own dataset script, we now have new documentation! See here: https://huggingface.co/nlp/add_dataset.html
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 20
https://github.com/huggingface/datasets/issues/373
Segmentation fault when loading local JSON dataset as of #372
@lhoestq For some reason, I am not able to reproduce the segmentation fault, on pyarrow==0.16.0. Using the exact same environment and file. Anyhow, I discovered that pyarrow>=0.17.0 is required to read in a JSON file where the pandas structs contain lists. Otherwise, pyarrow complains when attempting to cast the s...
The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault. ``` dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f...
comment_length: 219
https://github.com/huggingface/datasets/issues/369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/...
comment_length: 16
https://github.com/huggingface/datasets/issues/369
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries
I am facing this issue in transformers library 3.0.2 while reading a CSV using datasets. Is this fixed in the latest version? I updated to the latest version, 4.0.1, but am still getting this error. What could cause this error?
Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB): ``` dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]}) ``` causes ``` Traceback (most recent call last): File "dataloader.py", line 9, in <module> ["./path/to/file.json"]}) File "/...
comment_length: 37
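For context, the "straddling object" error comes from pyarrow's JSON reader when a single JSON object crosses the reader's block boundary. Reading with a larger `block_size` is the usual workaround at the pyarrow level (a sketch; the file name follows the issue above):

```python
import pyarrow.json as paj

table = paj.read_json(
    "train-v2.0.json",
    read_options=paj.ReadOptions(block_size=1 << 24),  # 16 MiB blocks
)
```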
https://github.com/huggingface/datasets/issues/368
load_metric can't acquire lock anymore
I found that, in the same process (or the same interactive session), if I do `import nlp; m1 = nlp.load_metric('glue', 'mrpc'); m2 = nlp.load_metric('glue', 'sst2')`, I will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a unique 'experiment_id'...
I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n...
49
load_metric can't acquire lock anymore I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this? Traceback (most recent call last): File "/home/XXX/miniconda3/envs/M...
[ 0.038707979023456573, -0.11587385088205338, 0.07235068082809448, 0.3124665319919586, 0.320459246635437, 0.029449187219142914, 0.09009482711553574, 0.07020644098520279, 0.5023598670959473, -0.0785919651389122, -0.18506582081317902, 0.03060687892138958, -0.048066187649965286, -0.261451393365...
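A minimal sketch following the error message's own suggestion: give each metric instance a unique `experiment_id` so their cache and lock files do not collide (the id strings are hypothetical).
```python
import nlp

# Distinct experiment_ids let two metrics coexist in the same process.
m1 = nlp.load_metric("glue", "mrpc", experiment_id="glue-mrpc-run1")
m2 = nlp.load_metric("glue", "sst2", experiment_id="glue-sst2-run1")
```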
https://github.com/huggingface/datasets/issues/365
How to augment data ?
Using batched map is probably the easiest way at the moment. What kind of augmentation would you like to do?
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
21
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.007610788103193045, -0.07861076295375824, -0.3029457628726959, -0.09332095086574554, 0.0747116282582283, 0.27884161472320557, -0.10457990318536758, 0.281440407037735, 0.12231355905532837, 0.11616472154855728, -0.1224738135933876, 0.056120093911886215, -0.0021824000868946314, 0.165933176...
https://github.com/huggingface/datasets/issues/365
How to augment data ?
Some samples in the dataset are too long; I want to divide them into several samples.
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
16
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.09215502440929413, -0.06655602157115936, -0.2976476550102234, -0.022698545828461647, 0.023151222616434097, 0.35464292764663696, -0.014462742954492569, 0.27358710765838623, 0.16726431250572205, 0.15173578262329102, -0.11430388689041138, 0.04165049269795418, -0.012731787748634815, 0.12649...
https://github.com/huggingface/datasets/issues/365
How to augment data ?
Using batched map is the way to go then. We'll make it clearer in the docs that map could be used for augmentation. Let me know if you think there should be another way to do it. Or feel free to close the issue otherwise.
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
45
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.016944654285907745, -0.1455656737089157, -0.3001067638397217, -0.08540037274360657, 0.12440033257007599, 0.1755867600440979, -0.06294318288564682, 0.3091179430484772, 0.1504424512386322, 0.16817894577980042, -0.1306254118680954, 0.18781742453575134, 0.030932266265153885, 0.1995355486869...
https://github.com/huggingface/datasets/issues/365
How to augment data ?
It just feels awkward to use map to augment data. Also it means it's not possible to augment data in a non-batched way. But to be honest I have no idea of a good API...
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
35
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.0686047151684761, -0.03053896501660347, -0.29906535148620605, -0.11018471419811249, 0.06244470551609993, 0.3354525864124298, -0.1749584674835205, 0.2741321623325348, 0.15646106004714966, 0.1322222501039505, -0.04436748847365379, 0.09031308442354202, -0.017896652221679688, 0.214575827121...
https://github.com/huggingface/datasets/issues/365
How to augment data ?
Or for non-batched samples, how about returning a tuple? ```python def aug(sample): # Simply copy the existing data to have x2 amount of data return sample, sample dataset = dataset.map(aug) ``` It feels really natural and easy, but: * it means the behavior with batched data is different * I ...
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
60
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.06329931318759918, -0.055646996945142746, -0.2771686017513275, -0.03686390817165375, 0.06049228459596634, 0.20088115334510803, -0.08675933629274368, 0.33428576588630676, 0.23162993788719177, 0.13165469467639923, -0.04392843693494797, 0.1579211950302124, -0.0891115814447403, 0.2297115325...
https://github.com/huggingface/datasets/issues/365
How to augment data ?
As we're working with arrow's columnar format we prefer to play with batches that are dictionaries instead of tuples. If we have tuple it implies to re-format the data each time we want to write to arrow, which can lower the speed of map for example. It's also a matter of coherence, as we don't want users to be con...
Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = dataset.map(aug, batched=T...
77
How to augment data ? Is there any clean way to augment data ? For now my work-around is to use batched map, like this : ```python def aug(samples): # Simply copy the existing data to have x2 amount of data for k, v in samples.items(): samples[k].extend(v) return samples dataset = da...
[ -0.04496295005083084, 0.03150567412376404, -0.26236042380332947, -0.062033236026763916, 0.07125964015722275, 0.1740819811820984, -0.04355531185865402, 0.2856452167034149, 0.2927056550979614, 0.06880275905132294, -0.045228347182273865, 0.2766706645488739, -0.06152574345469475, 0.09913703054...
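A runnable version of the work-around discussed in this thread (the dataset choice is illustrative): with `batched=True` the function receives a dict of lists, the columnar format described in the comment above, and extending every column doubles the dataset.
```python
import nlp

dataset = nlp.load_dataset("glue", "mrpc", split="train")

def aug(samples):
    # Each value is a list of column values; self-extend duplicates every row.
    for k, v in samples.items():
        samples[k].extend(v)
    return samples

doubled = dataset.map(aug, batched=True)
print(len(dataset), len(doubled))  # the mapped split should be twice as long
```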
https://github.com/huggingface/datasets/issues/361
🐛 [Metrics] ROUGE is non-deterministic
> Hi, can you give a full self-contained example to reproduce this behavior? There is a notebook in the post ;)
If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe...
21
🐛 [Metrics] ROUGE is non-deterministic If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score ...
[ -0.07294221222400665, -0.431997686624527, -0.07466137409210205, 0.24723802506923676, 0.14245854318141937, -0.2991017997264862, -0.034664154052734375, -0.36222323775291443, -0.042369354516267776, 0.47043144702911377, 0.02695283107459545, 0.3044506311416626, 0.030214518308639526, 0.087276794...
https://github.com/huggingface/datasets/issues/361
🐛 [Metrics] ROUGE is non-deterministic
> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. > > Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. > > Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in...
If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe...
112
🐛 [Metrics] ROUGE is non-deterministic If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score ...
[ -0.0629730373620987, -0.1342073231935501, -0.07968541234731674, 0.01057434268295765, 0.17548277974128723, -0.3666589558124542, 0.03127399459481239, -0.2816154956817627, -0.33567509055137634, 0.46499380469322205, -0.14531995356082916, 0.34079819917678833, -0.02386186644434929, 0.10760284960...
https://github.com/huggingface/datasets/issues/361
🐛 [Metrics] ROUGE is non-deterministic
Now if you re-run the notebook, the two printed results are the same @colanim ``` ['0.3356', '0.1466', '0.2318'] ['0.3356', '0.1466', '0.2318'] ``` However across sessions, the results may change (as numpy's random seed can be different). You can prevent that by setting your seed: ```python rouge = nlp.load_metr...
If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe...
50
🐛 [Metrics] ROUGE is non-deterministic If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different. Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem. Example of F-score ...
[ -0.10732854902744293, -0.21593044698238373, -0.06680147349834442, 0.14313030242919922, 0.1828393042087555, -0.31881842017173767, -0.1366637647151947, -0.15665261447429657, -0.0810185968875885, 0.3942079246044159, -0.09586981683969498, 0.5732386708259583, 0.07576126605272293, -0.03530603274...
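A short sketch of the seeding advice above: ROUGE's bootstrap aggregation draws random resamples, so fixing numpy's seed makes repeated runs reproducible (the example strings are placeholders).
```python
import numpy as np
import nlp

np.random.seed(42)  # pin the seed before computing, as suggested in the thread
rouge = nlp.load_metric("rouge")
score = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on a mat"],
)
```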
https://github.com/huggingface/datasets/issues/360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
Actually `map(batched=True)` can already change the size of the dataset. It can accept examples of length `N` and return a batch of length `M` (which can be zero or greater than `N`). I'll make that explicit in the doc that I'm currently writing.
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t...
42
[Feature request] Add dataset.ragged_map() function for many-to-many transformations `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one e...
[ -0.5117926597595215, -0.06494329869747162, 0.04673786461353302, -0.21277673542499542, -0.05259743705391884, -0.09725074470043182, 0.24880315363407135, 0.35040512681007385, -0.24927492439746857, 0.0586385540664196, 0.19317233562469482, 0.41998669505119324, -0.42083558440208435, -0.118523567...
https://github.com/huggingface/datasets/issues/360
[Feature request] Add dataset.ragged_map() function for many-to-many transformations
You're two steps ahead of me :) In my testing, it also works if `M` < `N`. A batched map of different length seems to work if you directly overwrite all of the original keys, but fails if any of the original keys are preserved. For example, ```python # Create a dummy dataset dset = load_dataset("wikitext", "wi...
`dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t...
179
[Feature request] Add dataset.ragged_map() function for many-to-many transformations `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines. `dataset.filter()` enables one-to-(one-or-none) transformations. Input one e...
[ -0.5140854120254517, -0.06388507783412933, 0.04673580825328827, -0.22718265652656555, -0.06906411051750183, -0.05510599538683891, 0.277511328458786, 0.3193072974681854, -0.23359182476997375, 0.051775891333818436, 0.18533754348754883, 0.4330142140388489, -0.38917097449302673, -0.15676091611...
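A hedged sketch of an `N` -> `M` batched map in the spirit of this thread: rows whose text exceeds a length limit are divided into several shorter rows, and every column is rebuilt at the new length `M`, per the caveat above that keeping original-length columns fails. The column name "text" matches wikitext; the chunk size is arbitrary.
```python
from nlp import load_dataset

dset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

def split_long(batch, max_len=100):
    out = {k: [] for k in batch}
    for i, text in enumerate(batch["text"]):
        # Break the text into max_len-sized chunks; keep empty rows as-is.
        chunks = [text[j:j + max_len] for j in range(0, len(text), max_len)] or [text]
        for chunk in chunks:
            for k in batch:
                out[k].append(chunk if k == "text" else batch[k][i])
    return out

dset = dset.map(split_long, batched=True)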
https://github.com/huggingface/datasets/issues/359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
Hi, it depends on what is in your `dataset_builder.py` file. Can you share it? If you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure): ```python from nlp import load_dataset ds = load_dataset("json", data_files=rel_data...
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
49
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError ...
[ -0.32604527473449707, 0.3610110878944397, -0.06005510687828064, 0.5487005710601807, 0.0407031886279583, -0.1400742083787918, 0.3431127667427063, 0.4284069836139679, 0.21433204412460327, -0.11396969109773636, -0.04363132268190384, 0.33327069878578186, 0.23643989861011505, 0.0587034374475479...
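A minimal sketch of the suggestion above (the file name is hypothetical): the generic json script infers nested features from the JSON structure itself, so no hand-written schema is needed for json-lines input.
```python
from nlp import load_dataset

ds = load_dataset("json", data_files={"train": "./records.jsonl"})
print(ds["train"].features)  # features auto-inferred from the JSON structure
```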
https://github.com/huggingface/datasets/issues/359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
The behavior I'm seeing is from the `json` script. I hacked this together to overcome the error with the `JSON` dataloader ``` class DatasetBuilder(hf_nlp.ArrowBasedBuilder): BUILDER_CONFIG_CLASS = BuilderConfig def _info(self): return DatasetInfo() def _split_generators(self, dl_manag...
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
254
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError ...
[ -0.32604527473449707, 0.3610110878944397, -0.06005510687828064, 0.5487005710601807, 0.0407031886279583, -0.1400742083787918, 0.3431127667427063, 0.4284069836139679, 0.21433204412460327, -0.11396969109773636, -0.04363132268190384, 0.33327069878578186, 0.23643989861011505, 0.0587034374475479...
https://github.com/huggingface/datasets/issues/359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
Also noticed that if you, for example, write in a loader script ``` from nlp import ArrowBasedBuilder class MyBuilder(ArrowBasedBuilder): ... ``` and use that import in the subclass, `ArrowBasedBuilder` will be in the module's __dict__ and will be selected before the `MyBuilder` subclass, and it will raise `NotImplementedError` on its `_g...
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
70
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError ...
[ -0.32604527473449707, 0.3610110878944397, -0.06005510687828064, 0.5487005710601807, 0.0407031886279583, -0.1400742083787918, 0.3431127667427063, 0.4284069836139679, 0.21433204412460327, -0.11396969109773636, -0.04363132268190384, 0.33327069878578186, 0.23643989861011505, 0.0587034374475479...
https://github.com/huggingface/datasets/issues/359
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures
Indeed this is part of a more general limitation, which is that we should generate and update the `features` from the auto-inferred Arrow schema when they are not provided (this also happens when a user changes the schema using `map()`; the features should be auto-generated and guessed as much as possible to keep the ...
I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-23-9aecfbee53bd> in <mo...
70
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures I tried using the Json dataloader to load some JSON lines files. but get an exception in the parse_schema function. ``` --------------------------------------------------------------------------- ValueError ...
[ -0.32604527473449707, 0.3610110878944397, -0.06005510687828064, 0.5487005710601807, 0.0407031886279583, -0.1400742083787918, 0.3431127667427063, 0.4284069836139679, 0.21433204412460327, -0.11396969109773636, -0.04363132268190384, 0.33327069878578186, 0.23643989861011505, 0.0587034374475479...
https://github.com/huggingface/datasets/issues/355
can't load SNLI dataset
I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :) We are thinking about making those processed files available for more datasets in the future, because sometimes files aren't available (like for `snli`), or the downl...
`nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's the stack trace: ``` ...
66
can't load SNLI dataset `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't. Is there a plan to move these datasets to huggingface servers for a more stable solution? Btw, here's...
[ 0.13044831156730652, -0.04836520552635193, 0.059793390333652496, 0.42177778482437134, 0.168760746717453, -0.21573853492736816, 0.26701903343200684, 0.016133036464452744, -0.024434685707092285, -0.10663749277591705, -0.38732481002807617, -0.0008521024719811976, 0.14539416134357452, 0.410473...
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
- `nlp` has MR! It's called `rotten_tomatoes` - SST is part of GLUE, or is that just SST-2? - `nlp` also has `ag_news`, a popular news classification dataset I'd also like to see: - the Yahoo Answers topic classification dataset - the Kaggle Fake News classification dataset
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
47
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ 0.052762772887945175, 0.17702478170394897, -0.1492282748222351, 0.29231858253479004, 0.17709863185882568, 0.254323810338974, 0.22255933284759521, 0.11882711201906204, -0.14430846273899078, -0.1214151456952095, -0.1536754071712494, 0.05646319314837456, -0.16629162430763245, 0.17042088508605...
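A tiny usage sketch for the two classification datasets mentioned above that already ship with `nlp`:
```python
import nlp

mr = nlp.load_dataset("rotten_tomatoes")  # the MR movie-review dataset
news = nlp.load_dataset("ag_news")        # AG's news topic classification
```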
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
Thanks @jxmorris12 for pointing this out. In GLUE we only have SST-2; maybe we can add SST-1 separately.
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
18
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ 0.08154139667749405, 0.02390618622303009, -0.12607884407043457, 0.2537372410297394, 0.20224326848983765, 0.1909445971250534, 0.23134548962116241, 0.1288256049156189, -0.01969435065984726, -0.052467964589595795, -0.23717361688613892, 0.028186343610286713, -0.07936090975999832, 0.22847154736...
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
This is the homepage for the Amazon dataset: https://www.kaggle.com/datafiniti/consumer-reviews-of-amazon-products Is there an easy way to download kaggle datasets programmatically? If so, I can add this one!
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
26
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ -0.14583483338356018, 0.06585246324539185, -0.14814119040966034, 0.19024530053138733, 0.3451893925666809, 0.3538248538970947, 0.13700351119041443, 0.07377948611974716, -0.12206227332353592, -0.039434995502233505, -0.2502007484436035, 0.2587280869483948, -0.1631074696779251, 0.4211385250091...
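A hedged sketch answering the programmatic-download question above, using the official `kaggle` package rather than `nlp`'s `dl_manager`; it requires API credentials in `~/.kaggle/kaggle.json`, and the output path is arbitrary.
```python
from kaggle.api.kaggle_api_extended import KaggleApi

api = KaggleApi()
api.authenticate()  # reads ~/.kaggle/kaggle.json
api.dataset_download_files(
    "datafiniti/consumer-reviews-of-amazon-products", path="./amazon", unzip=True
)
```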
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
Hi @jxmorris12, for now I think our `dl_manager` does not download from Kaggle. @thomwolf, @lhoestq
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
16
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ 0.036327045410871506, 0.12081839889287949, -0.16710256040096283, 0.24904870986938477, 0.2784866392612457, 0.21744196116924286, 0.29182660579681396, 0.05867670103907585, -0.06941979378461838, 0.08942048996686935, -0.07498714327812195, 0.24293731153011322, -0.22165179252624512, 0.18729785084...
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
Great list. Any idea if Amazon Reviews has been added? - ~40 GB of text (sadly no emoji) - popular MLM pre-training dataset before bigger datasets like WebText https://arxiv.org/abs/1808.01371 - turns out that binarizing the 1-5 star rating leads to great Pos/Neg/Neutral dataset, T5 paper claims to get very high a...
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
92
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ 0.009956329129636288, 0.12012901157140732, -0.1890822947025299, 0.21793511509895325, 0.21285873651504517, 0.21809548139572144, 0.20618465542793274, 0.14195455610752106, -0.18605665862560272, -0.10307655483484268, -0.15357927978038788, -0.020446157082915306, -0.14540262520313263, 0.21926246...
https://github.com/huggingface/datasets/issues/353
[Dataset requests] New datasets for Text Classification
On the Amazon Reviews dataset, the original UCSD website has noted these are now updated to include product reviews through 2018 -- actually quite recent compared to many other datasets. Almost certainly the largest NLP dataset out there with labels! https://jmcauley.ucsd.edu/data/amazon/ Any chance someone has ti...
We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]** - Yelp-5 - Movie review (Movie R...
56
[Dataset requests] New datasets for Text Classification We are missing a few datasets for Text Classification which is an important field. Namely, it would be really nice to add: - TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.tr...
[ -0.019835880026221275, 0.1645354926586151, -0.12858642637729645, 0.12269238382577896, 0.13993075489997864, 0.22253437340259552, 0.17683033645153046, 0.08158795535564423, 0.0018179290927946568, -0.1834152489900589, -0.2986062169075012, -0.0017471399623900652, -0.2310987412929535, 0.25536170...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file. Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter. See issues #242 and #307
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
38
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.2610892355442047, -0.07662418484687805, -0.05322122201323509, 0.14644600450992584, 0.38581475615501404, 0.010254738852381706, 0.02544659748673439, 0.17635735869407654, -0.1791246086359024, 0.08384870737791061, 0.17901623249053955, 0.4352130591869354, 0.05886105075478554, 0.3545265197753...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
It should be in `xtreme.py:L755`: ```python if self.config.name == "tydiqa" or self.config.name.startswith("MLQA") or self.config.name == "SQuAD": with open(filepath) as f: data = json.load(f) ``` Could you try to add the encoding parameter: ```python open(filepath, encodin...
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
36
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.2695923447608948, -0.028840184211730957, -0.02087143063545227, 0.20285962522029877, 0.44842004776000977, -0.12191351503133774, -0.1402621865272522, 0.2741994857788086, -0.3068619668483734, 0.13081784546375275, 0.11824942380189896, 0.5972367525100708, 0.010758091695606709, 0.372573167085...
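A standalone version of the patch sketched above: open the downloaded json with an explicit utf-8 encoding so Windows' cp950 default never applies (the file path is hypothetical).
```python
import json

filepath = "./tydiqa-goldp-v1.1-dev.json"  # placeholder for the downloaded file
with open(filepath, encoding="utf-8") as f:
    data = json.load(f)
```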
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
Hello @jerryIsHere :) Did it work? If so we may change the dataset script to force the utf-8 encoding
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
20
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.37319380044937134, -0.0007192498887889087, -0.05706842616200447, 0.1422874480485916, 0.46322643756866455, -0.058336373418569565, 0.010565089993178844, 0.2293296605348587, -0.28686606884002686, 0.04334607347846031, 0.15359462797641754, 0.45696961879730225, -0.013836088590323925, 0.383591...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
@lhoestq sorry for replying so late. I found 4 copies of xtreme.py and applied the suggested changes to all of them. The problem is not solved.
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
30
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.2841985523700714, -0.1710125356912613, -0.010297529399394989, 0.24634698033332825, 0.4728092849254608, -0.058877091854810715, -0.14265520870685577, 0.21194729208946228, -0.26024529337882996, 0.12634645402431488, 0.08915408700704575, 0.5027426481246948, -0.04542098194360733, 0.4290333688...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
Could you provide a better error message so that we can make sure it comes from the opening of `tydiqa`'s json files?
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
24
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.28176888823509216, -0.07712233811616898, -0.025954050943255424, 0.18648859858512878, 0.4318789541721344, -0.10634122043848038, -0.04907930642366409, 0.21024225652217865, -0.2888754904270172, 0.07893192023038864, 0.16983798146247864, 0.5185145735740662, -0.015733987092971802, 0.403936862...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
@lhoestq The error message is same as before: Exception has occurred: UnicodeDecodeError 'cp950' codec can't decode byte 0xe2 in position 111: illegal multibyte sequence File "D:\python\test\test.py", line 3, in <module> dataset = load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubuserco...
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
63
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.29930394887924194, -0.07950893044471741, -0.02546095848083496, 0.21767917275428772, 0.3856870234012604, -0.07032695412635803, -0.10065564513206482, 0.18437038362026215, -0.2766495943069458, 0.056092698127031326, 0.1345289945602417, 0.42476367950439453, -0.08653873205184937, 0.2728080153...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
Hi there ! I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced. I added ```encoding='UTF-8'``` to both lines that have ```op...
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
72
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.3488943576812744, 0.05805639922618866, -0.005317159928381443, 0.21481502056121826, 0.2911394536495209, 0.06939206272363663, -0.023163437843322754, 0.16827842593193054, -0.11711788922548294, -0.00880303792655468, 0.03516256809234619, 0.38142505288124084, 0.059285227209329605, 0.225645750...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
> Hi there ! > I encountered the same issue with the IMDB dataset on windows. It threw an error about charmap not being able to decode a symbol during the first time I tried to download it. I checked on a remote linux machine I have, and it can't be reproduced. > I added `encoding='UTF-8'` to both lines that have `op...
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
97
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.3819620609283447, 0.03284896910190582, -0.015480917878448963, 0.2028753012418747, 0.3120899200439453, 0.06895912438631058, 0.005816287826746702, 0.18256902694702148, -0.08367125689983368, 0.02933458238840103, 0.014070450328290462, 0.40086087584495544, 0.05778171494603157, 0.239922657608...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
> This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file. > Try to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter. > See issues #242 and #307 Sorry for not responding for about a month. I have just found t...
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
115
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.32184669375419617, 0.041040077805519104, -0.017262842506170273, 0.1921311914920807, 0.38413751125335693, -0.04183322563767433, 0.053385019302368164, 0.1839185357093811, -0.24381467700004578, 0.07465629279613495, 0.1122438907623291, 0.4671260118484497, 0.11079901456832886, 0.326335251331...
https://github.com/huggingface/datasets/issues/347
'cp950' codec error from load_dataset('xtreme', 'tydiqa')
Since #481 we shouldn't have other issues with encodings as they need to be set to "utf-8" by default. Closing this one, but feel free to re-open if you have other questions
![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, perhaps : https://www.python.org/dev/peps/pep-0263/ I gues...
32
'cp950' codec error from load_dataset('xtreme', 'tydiqa') ![image](https://user-images.githubusercontent.com/50871412/86744744-67481680-c06c-11ea-8612-b77eba92a392.png) I guess the error is related to python source encoding issue that my PC is trying to decode the source code with wrong encoding-decoding tools, pe...
[ -0.2920767068862915, -0.11546678841114044, -0.048505187034606934, 0.1429697722196579, 0.39768537878990173, -0.13868530094623566, -0.02993791550397873, 0.2073105275630951, -0.3486037254333496, 0.11078851670026779, 0.1741642951965332, 0.471680223941803, 0.022073015570640564, 0.33678370714187...
https://github.com/huggingface/datasets/issues/345
Supporting documents in ELI5
Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster: https://github.com/facebookresearch/ELI5#downloading-suppor...
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ...
130
Supporting documents in ELI5 I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other mo...
[ 0.08067583292722702, -0.14141593873500824, -0.13081765174865723, -0.09870095551013947, -0.3395775258541107, -0.021613985300064087, -0.21735210716724396, 0.21030348539352417, 0.05030074343085289, -0.025913553312420845, 0.1280902475118637, -0.2411946803331375, 0.10086891055107117, 0.14274059...
https://github.com/huggingface/datasets/issues/345
Supporting documents in ELI5
Hi, thanks for the quick response. The blog post is quite an interesting working example, thanks for sharing it. Two follow-up points/questions about my original question: 1. Yes, I read that the facebook team could not share the CommonCrawl b/c of licensing reasons. They state "No, we are not allowed to host proce...
I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ...
256
Supporting documents in ELI5 I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other mo...
[ 0.16188102960586548, -0.04520326852798462, -0.03878628462553024, -0.04772581532597542, -0.3124102056026459, 0.00023964776482898742, -0.13539756834506989, 0.1302299052476883, -0.02192038483917713, -0.05018486827611923, 0.09411698579788208, -0.26511552929878235, 0.11306099593639374, 0.144380...
https://github.com/huggingface/datasets/issues/331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
I couldn't reproduce on my side. It looks like you were not able to generate all the examples, and you have the problem for each split (train, test, validation). Could you try to enable logging, try again and send the logs? ```python import logging logging.basicConfig(level=logging.INFO) ```
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
45
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn...
[ -0.14106929302215576, -0.05698738247156143, -0.008213234134018421, 0.20865675806999207, 0.07764953374862671, 0.09079599380493164, 0.1833023577928543, 0.5251756906509399, 0.09873175621032715, 0.15423211455345154, 0.01629800535738468, 0.07699685543775558, -0.3141203224658966, -0.043774530291...
https://github.com/huggingface/datasets/issues/331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
here's the log ``` >>> import nlp import logging logging.basicConfig(level=logging.INFO) nlp.load_dataset('cnn_dailymail', '3.0.0') >>> import logging >>> logging.basicConfig(level=logging.INFO) >>> nlp.load_dataset('cnn_dailymail', '3.0.0') INFO:nlp.load:Checking /u/jm8wx/.cache/huggingface/datasets/720d2e20d...
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
223
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn...
[ -0.14106929302215576, -0.05698738247156143, -0.008213234134018421, 0.20865675806999207, 0.07764953374862671, 0.09079599380493164, 0.1833023577928543, 0.5251756906509399, 0.09873175621032715, 0.15423211455345154, 0.01629800535738468, 0.07699685543775558, -0.3141203224658966, -0.043774530291...
https://github.com/huggingface/datasets/issues/331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
> here's the log > > ``` > >>> import nlp > import logging > logging.basicConfig(level=logging.INFO) > nlp.load_dataset('cnn_dailymail', '3.0.0') > >>> import logging > >>> logging.basicConfig(level=logging.INFO) > >>> nlp.load_dataset('cnn_dailymail', '3.0.0') > INFO:nlp.load:Checking /u/jm8wx/.cache/huggin...
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
376
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn...
[ -0.14106929302215576, -0.05698738247156143, -0.008213234134018421, 0.20865675806999207, 0.07764953374862671, 0.09079599380493164, 0.1833023577928543, 0.5251756906509399, 0.09873175621032715, 0.15423211455345154, 0.01629800535738468, 0.07699685543775558, -0.3141203224658966, -0.043774530291...
https://github.com/huggingface/datasets/issues/331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
In general, if some examples are missing after processing (hence causing the `NonMatchingSplitsSizesError`), it is often due to either 1) corrupted cached files or 2) decoding errors. I just checked the dataset script for code that could lead to decoding errors but I couldn't find any. Before we try to dive more into ...
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
74
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn...
[ -0.14106929302215576, -0.05698738247156143, -0.008213234134018421, 0.20865675806999207, 0.07764953374862671, 0.09079599380493164, 0.1833023577928543, 0.5251756906509399, 0.09873175621032715, 0.15423211455345154, 0.01629800535738468, 0.07699685543775558, -0.3141203224658966, -0.043774530291...
https://github.com/huggingface/datasets/issues/331
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError`
Yes thanks for the support! I cleared out my cache folder and everything works fine now
``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0... Traceback (most recent call last): File "<stdin>", line 1, in...
16
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` ``` >>> import nlp >>> nlp.load_dataset('cnn_dailymail', '3.0.0') Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn...
[ -0.14106929302215576, -0.05698738247156143, -0.008213234134018421, 0.20865675806999207, 0.07764953374862671, 0.09079599380493164, 0.1833023577928543, 0.5251756906509399, 0.09873175621032715, 0.15423211455345154, 0.01629800535738468, 0.07699685543775558, -0.3141203224658966, -0.043774530291...
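A short sketch of the fix that resolved this thread: remove the possibly corrupted cached files so `load_dataset` rebuilds the dataset from scratch. The path assumes the default cache location shown in the logs above.
```python
import shutil
from pathlib import Path

cache = Path.home() / ".cache" / "huggingface" / "datasets" / "cnn_dailymail"
shutil.rmtree(cache, ignore_errors=True)  # next load_dataset call regenerates it
```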
https://github.com/huggingface/datasets/issues/329
[Bug] FileLock dependency incompatible with filesystem
Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12. The external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile that isn't writable, and thus there's no way to acquire it by removing the .lock file. ...
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like thi...
118
[Bug] FileLock dependency incompatible with filesystem I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external vo...
[ 0.03299213573336601, 0.0771607905626297, -0.02503036893904209, 0.027964532375335693, 0.08898353576660156, 0.15139466524124146, 0.36469754576683044, 0.06515336036682129, 0.7094059586524963, -0.09789599478244781, 0.09096783399581909, -0.005119395907968283, 0.02144598215818405, -0.31597054004...
https://github.com/huggingface/datasets/issues/329
[Bug] FileLock dependency incompatible with filesystem
Looks like the `flock` syscall does not work on Lustre filesystems by default: https://github.com/benediktschmitt/py-filelock/issues/67. I added the `-o flock` option when mounting the filesystem, as [described here](https://docs.aws.amazon.com/fsx/latest/LustreGuide/getting-started-step2.html), which fixed the issu...
I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount` The filesystem when hanging looks like thi...
31
[Bug] FileLock dependency incompatible with filesystem I'm downloading a dataset successfully with `load_dataset("wikitext", "wikitext-2-raw-v1")` But when I attempt to cache it on an external volume, it hangs indefinitely: `load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external vo...
[ -0.022068994119763374, -0.07033129781484604, -0.024117914959788322, 0.04778246954083443, 0.06480738520622253, 0.05381781980395317, 0.2719027101993561, -0.09048198908567429, 0.7102738618850708, -0.06344840675592422, -0.02303563430905342, -0.030101168900728226, 0.14038726687431335, -0.404071...
https://github.com/huggingface/datasets/issues/328
Fork dataset
To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset("json", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for example). ...
We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and...
72
Fork dataset We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow pars...
[ -0.2403901070356369, -0.15193085372447968, -0.004668054170906544, 0.13880158960819244, -0.1991833597421646, 0.16617465019226074, 0.08553371578454971, 0.3236585557460785, 0.19925633072853088, -0.14080938696861267, 0.05220165103673935, 0.676864743232727, -0.400825172662735, 0.161125034093856...
https://github.com/huggingface/datasets/issues/328
Fork dataset
Thanks for the helpful advice, @lhoestq -- I wasn't quite able to get the json recipe working - ``` ~/.virtualenvs/inv-text2struct/lib/python3.6/site-packages/pyarrow/ipc.py in __init__(self, source) 60 61 def __init__(self, source): ---> 62 self._open(source) 63 64 ~/.vir...
We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and...
87
Fork dataset We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow pars...
[ -0.2709980607032776, -0.1708688586950302, -0.014252598397433758, 0.24242177605628967, -0.09015287458896637, 0.037377264350652695, 0.07227975130081177, 0.35591045022010803, 0.021982187405228615, -0.15691488981246948, 0.05683213844895363, 0.687586784362793, -0.3158580958843231, 0.02382268756...
https://github.com/huggingface/datasets/issues/328
Fork dataset
Thanks, this answers my question. I think the issues I was having using the json loader were due to using gzipped jsonl files. The error I get now is: ``` Using custom data configuration test --------------------------------------------------------------------------- ValueError ...
We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and...
324
Fork dataset We have a multi-task learning model training I'm trying to convert to using the Arrow-based nlp dataset. We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow pars...
[ -0.2616361975669861, -0.20614217221736908, 0.03750159218907356, 0.35491618514060974, -0.16824226081371307, 0.1174597516655922, 0.14335063099861145, 0.47420817613601685, 0.301339328289032, -0.025156566873192787, -0.041327059268951416, 0.7794692516326904, -0.4380647838115692, -0.000688968866...
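A hedged workaround for the gzipped-jsonl problem above: decompress to plain text first, then point the json loader at the result. The file names are hypothetical:

```
import gzip
import shutil

from nlp import load_dataset

# Decompress the hypothetical my_data.jsonl.gz into a plain jsonl file,
# since the json loader is assumed to expect uncompressed input here.
with gzip.open("my_data.jsonl.gz", "rb") as f_in:
    with open("my_data.jsonl", "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

dataset = load_dataset("json", data_files="my_data.jsonl")
```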
https://github.com/huggingface/datasets/issues/328
Fork dataset
I'll close this -- it's still unclear how to troubleshoot the json example, as I mentioned above. If I decide it's worth the trouble, I'll create another issue, or wait for better support for using nlp to build custom data-loaders.
We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow parses raw text and...
43
Fork dataset We have a multi-task learning model training setup that I'm trying to convert to use the Arrow-based nlp dataset. We're currently training a custom TensorFlow model, but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers. Our preprocessing flow pars...
[ -0.2255922555923462, -0.21397575736045837, 0.013712040148675442, 0.18746072053909302, -0.2855694591999054, 0.1027282252907753, 0.08597809076309204, 0.30907103419303894, 0.2629741430282593, -0.13470247387886047, 0.0771385207772255, 0.6757096648216248, -0.37672364711761475, 0.130015477538108...
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
I'm pretty sure you can get some inspiration from the squad_v2 script (a bare-bones skeleton follows this record). It looks like the dataset is quite big, so it will take some time for the users to generate it, but it should be reasonable. Also, you are saying that you are still making the dataset grow in size, right? It's probably good practice to let the use...
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
121
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.09977789968252182, -0.24149997532367706, -0.11181346327066422, 0.18103191256523132, 0.1970851719379425, -0.015226945281028748, 0.023252706974744797, 0.4885348379611969, -0.09128009527921677, 0.1816319227218628, -0.2126609832048416, 0.05594266206026077, -0.24527433514595032, 0.3290418088...
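A bare-bones skeleton of a squad_v2-style dataset script, to make the suggestion above concrete; all class, feature, and method contents are placeholders, not the actual script:

```
import nlp


class MyQADataset(nlp.GeneratorBasedBuilder):
    """Hypothetical squad-style QA dataset built from multiple tiles."""

    VERSION = nlp.Version("1.0.0")  # explicit versioning, as discussed below

    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features({
                "context": nlp.Value("string"),
                "question": nlp.Value("string"),
                "answers": nlp.features.Sequence({
                    "text": nlp.Value("string"),
                    "answer_start": nlp.Value("int32"),
                }),
            }),
        )

    def _split_generators(self, dl_manager):
        # Download/extract the tiles here and hand the file paths to
        # _generate_examples; omitted in this sketch.
        raise NotImplementedError

    def _generate_examples(self, filepath):
        # Yield (key, example) pairs, one per question; omitted here.
        raise NotImplementedError
```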
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
It would also be good if there were some possibility for versioning; I think this way is much better than the dynamic way. If you mean that the part that merges the tiles into one is the generation, it would take up to 15-20 minutes on home computer hardware. Are there any compression or optimization algorithms while generating the...
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
93
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.07837647199630737, -0.33690309524536133, -0.10210345685482025, 0.3794318437576294, 0.29138612747192383, -0.09671332687139511, -0.12171172350645065, 0.5122986435890198, -0.027225544676184654, 0.19465266168117523, -0.2856161296367645, 0.06141325458884239, -0.25026723742485046, 0.259344130...
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
15-20 minutes is fine! Also, there are no RAM limitations, as we save to disk every 1000 elements while generating the dataset by default. After generation, the dataset is ready to use with (again) no RAM limitations, as we do memory-mapping (a small illustration follows this record).
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
41
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.14090384542942047, -0.28052473068237305, -0.10055729001760483, 0.2925077974796295, 0.23439863324165344, -0.06676747649908066, -0.006627690978348255, 0.48271894454956055, -0.004640905186533928, 0.10125806927680969, -0.28785738348960876, 0.017063166946172714, -0.2403966337442398, 0.270600...
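A small illustration of the memory-mapping point above: once a dataset has been generated, iterating over it streams rows from the Arrow file on disk instead of loading everything into RAM. squad is used here only as an example of an already-generated dataset:

```
import nlp

# Any already-generated dataset works; squad is just an example.
dataset = nlp.load_dataset("squad", split="train")

# Rows are read from the memory-mapped Arrow file on demand, so this
# loop does not materialize the full dataset in memory.
for example in dataset:
    pass
```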
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
Wow, that sounds pretty cool. Actually, I have the problem of running out of memory during tokenization on our local machine. That wouldn't happen again, would it?
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
28
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.004863537847995758, -0.23592399060726166, -0.06830373406410217, 0.2993224859237671, 0.2564299702644348, -0.11595635861158371, 0.01913817599415779, 0.4660230576992035, -0.09819995611906052, 0.15328142046928406, -0.25992727279663086, 0.029207978397607803, -0.24323493242263794, 0.250917464...
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
You can do the tokenization step using `my_tokenized_dataset = my_dataset.map(my_tokenize_function)`, which writes the tokenized texts to disk as well. Then `my_tokenized_dataset` will be a memory-mapped dataset too, so you should be fine :) A sketch of this step follows this record.
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
34
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.035847075283527374, -0.24777807295322418, -0.058699831366539, 0.24638879299163818, 0.2725089192390442, -0.08004886656999588, 0.03257715702056885, 0.41994789242744446, -0.12915101647377014, 0.0406961664557457, -0.26981955766677856, 0.10341077297925949, -0.2411336898803711, 0.217549249529...
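A minimal sketch of the `.map()` tokenization step described above; the model name and the "context" column are assumptions for illustration, and a reasonably recent transformers is assumed for the tokenizer call:

```
import nlp
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = nlp.load_dataset("squad", split="train")


def my_tokenize_function(example):
    # "context" is squad's text column; adapt this to your own schema.
    return tokenizer(example["context"], truncation=True, max_length=512)


# map() writes the new tokenized columns to disk; the returned dataset
# is memory-mapped as well, so tokenization should not exhaust RAM.
my_tokenized_dataset = dataset.map(my_tokenize_function)
```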
https://github.com/huggingface/datasets/issues/326
Large dataset in Squad2-format
In your training loop, loading the tokenized texts is going to be fast and pretty much negligible compared to a forward pass. You shouldn't expect any slow down.
At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contexts: 1.047.671 - questions: 1.677...
28
Large dataset in Squad2-format At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community. Because of the computing power required, we split it into multiple tiles, but they are all in the same format. Right now the most important facts about it are these: - Contex...
[ -0.09984645992517471, -0.3400503396987915, -0.07225450128316879, 0.27405205368995667, 0.2330145686864853, -0.13411948084831238, 0.05346622318029404, 0.4866887032985687, -0.08115612715482712, 0.03897905349731445, -0.2371416687965393, 0.07050934433937073, -0.2566492557525635, 0.3095554411411...
https://github.com/huggingface/datasets/issues/324
Error when calculating glue score
The glue metric for cola is a classification metric. It expects label ids as integer inputs (a sketch follows this record).
I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------...
18
Error when calculating glue score I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` -------------------------------------------------------...
[ -0.16945293545722961, -0.23958128690719604, -0.030473245307803154, 0.16115891933441162, 0.23674039542675018, -0.049164801836013794, 0.10730059444904327, 0.3623579442501068, 0.43302735686302185, 0.051628440618515015, -0.3168230652809143, 0.27634745836257935, -0.07980746030807495, -0.0241096...
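A minimal sketch of the integer-label expectation described above, reusing the snippet from the issue; the label values are made up for illustration:

```
import nlp

glue_metric = nlp.load_metric('glue', name="cola")

# Both predictions and references are class ids (integers), not strings.
predictions = [0, 1, 1, 0]
references = [0, 1, 0, 0]

glue_score = glue_metric.compute(predictions, references)
print(glue_score)  # cola reports matthews_correlation
```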
https://github.com/huggingface/datasets/issues/324
Error when calculating glue score
I want to evaluate whether a sentence pair is semantically equivalent, so I used MRPC and it gives the same error. Does that mean we have to encode the sentences and pass them as input, using BertTokenizer? ``` encoded_reference=tokenizer.encode(reference, add_special_tokens=False) encoded_prediction=tokenizer....
I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------...
297
Error when calculating glue score I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` -------------------------------------------------------...
[ -0.16945293545722961, -0.23958128690719604, -0.030473245307803154, 0.16115891933441162, 0.23674039542675018, -0.049164801836013794, 0.10730059444904327, 0.3623579442501068, 0.43302735686302185, 0.051628440618515015, -0.3168230652809143, 0.27634745836257935, -0.07980746030807495, -0.0241096...
https://github.com/huggingface/datasets/issues/324
Error when calculating glue score
MRPC is also a binary classification task, so its metric is a binary classification metric. To evaluate whether pairs of sentences are semantically equivalent, maybe you could take a look at models that compute whether one sentence entails the other or not (typically the kinds of models that work well on the MRPC task); an integer-label MRPC example follows this record.
I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` --------------------------------------------------------------------------- --------------...
55
Error when calculating glue score I was trying the glue score along with other metrics here, but glue gives me this error: ``` import nlp glue_metric = nlp.load_metric('glue',name="cola") glue_score = glue_metric.compute(predictions, references) ``` ``` -------------------------------------------------------...
[ -0.16945293545722961, -0.23958128690719604, -0.030473245307803154, 0.16115891933441162, 0.23674039542675018, -0.049164801836013794, 0.10730059444904327, 0.3623579442501068, 0.43302735686302185, 0.051628440618515015, -0.3168230652809143, 0.27634745836257935, -0.07980746030807495, -0.0241096...
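For completeness, the same integer-label pattern applied to the MRPC metric; the labels are made up for illustration:

```
import nlp

mrpc_metric = nlp.load_metric('glue', name="mrpc")

# 1 = semantically equivalent, 0 = not equivalent (class ids, not text).
predictions = [1, 0, 1, 1]
references = [1, 0, 0, 1]

print(mrpc_metric.compute(predictions, references))  # accuracy and f1
```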