An intriguing aspect of adapting T-pro-it-1.0 is that this model was itself obtained through continual pretraining on over 100 billion tokens of Russian-language data with full fine-tuning. Despite this extensive prior training, our methodology still worked effectively (note: it was the original base model, Qwen2.5-32B, that we adapted), and the resulting adapted version matched or outperformed T-pro-it-1.0 on several benchmarks while also tokenizing Russian text more efficiently.
<img src="https://cdn-uploads.huggingface.co/production/uploads/652cedbdf120598322ae358a/sKwHvA9ztd7rHx37Ca2ey.png" style="max-width: 50%; height: auto;">
## Papers
Tikhomirov M., Chernyshov D. Facilitating Large Language Model Russian Adaptation with Learned Embedding Propagation // Journal of Language and Education. – 2024. – Vol. 10. – No. 4. – P. 130–145. (Preprint: https://arxiv.org/abs/2412.21140)