Instructions for using HeNLP/LongHeRo with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use HeNLP/LongHeRo with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="HeNLP/LongHeRo")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("HeNLP/LongHeRo")
model = AutoModelForMaskedLM.from_pretrained("HeNLP/LongHeRo")
```

- Notebooks
- Google Colab
- Kaggle
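As a minimal usage sketch of the fill-mask pipeline above: the example sentence is arbitrary, and `<mask>` is the RoBERTa-style mask token assumed for this model (check `pipe.tokenizer.mask_token` on the loaded model to confirm).

```python
from transformers import pipeline

# Hypothetical Hebrew query with one masked position; "<mask>" is the
# RoBERTa-style mask token assumed here -- verify via pipe.tokenizer.mask_token.
sentence = "הבירה של ישראל היא <mask>."

# Downloads the model weights on first use
pipe = pipeline("fill-mask", model="HeNLP/LongHeRo")

# top_k controls how many candidate fillers are returned; each prediction is
# a dict with "token_str" (proposed token) and "score" (model probability).
for pred in pipe(sentence, top_k=5):
    print(pred["token_str"], round(pred["score"], 3))
```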
Update README.md
README.md CHANGED:

````diff
@@ -31,6 +31,6 @@ If you use LongHeRo in your research, please cite [HeRo: RoBERTa and Longformer
   title={HeRo: RoBERTa and Longformer Hebrew Language Models},
   author={Vitaly Shalumov and Harel Haskey},
   year={2023},
-  journal={2304.11077},
+  journal={arXiv:2304.11077},
 }
 ```
````