Update README.md
README.md
@@ -7,6 +7,8 @@ This model is a result of second stage pre-training of Google's Gemma 2B (https:
This is a raw pre-trained model, created with further fine-tuning in mind.
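Since the checkpoint is meant as a base for fine-tuning, a minimal loading sketch may help. This assumes a standard Hugging Face Transformers setup; the repository id below is a placeholder, not the actual model id.

```python
# Minimal sketch: load the checkpoint as a starting point for fine-tuning.
# Assumes transformers >= 4.38 (Gemma support); the repo id is a placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "<org>/<this-model>"  # placeholder -- substitute the actual repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# From here, continue with your preferred fine-tuning loop
# (e.g. the Trainer API or a custom training script).
```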
The goal of this project is to further research the cross-linguistic capabilities of open-source LLMs and to create a strong open-source foundational LLM that is fluent in Russian. More details will follow in an upcoming blog post and/or research paper.
This model was pre-trained with a fork of EasyLM as the framework (JAX) on a Google v4-32 TPU, generously provided under the TRC program. The model reached a training loss of ~1.5, with a learning rate of roughly 5e-5.
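For context, here is a rough sketch of what an optimizer setup at the reported learning rate looks like in JAX/optax. This is not the project's actual EasyLM configuration; the schedule shape, warmup/decay steps, and weight decay are all assumptions.

```python
# Rough sketch of a JAX/optax optimizer at the reported peak LR of 5e-5.
# NOT the actual EasyLM config; warmup/decay steps and weight decay are assumptions.
import optax

schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0,
    peak_value=5e-5,      # learning rate reported in the model card
    warmup_steps=1_000,   # assumption
    decay_steps=100_000,  # assumption
)
optimizer = optax.adamw(learning_rate=schedule, weight_decay=0.01)  # weight decay assumed
```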