Update README.md
## Performance

Please see our technical report [here](https://arxiv.org/abs/2510.20386) for performance metrics. The model outperforms previous SOTA models on almost all benchmarks, with a noticeable jump in the QA scores, which indicates a much deeper semantic understanding.

## Citation

If you use NeoDictaBERT in your research, please cite ```NeoDictaBERT: Pushing the Frontier of BERT models for Hebrew```

**BibTeX:**

```bibtex
@misc{shmidman2025neodictabertpushingfrontierbert,
      title={NeoDictaBERT: Pushing the Frontier of BERT models for Hebrew},
      author={Shaltiel Shmidman and Avi Shmidman and Moshe Koppel},
      year={2025},
      eprint={2510.20386},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2510.20386},
}
```