How to use ekaterinatao/translation_llm with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "ekaterinatao/translation_llm")
```

This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on a dataset that combines Russian sentences from two datasets: parallel-sentences-tatoeba and parallel-sentences-wikimatrix. The model is fine-tuned for the Russian language.
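Beyond loading the adapter, an end-to-end translation call could look like the sketch below. The prompt template and the `translate` helper are assumptions for illustration only; the card does not document the prompt format used during fine-tuning:

```python
def build_prompt(text, target_lang="Russian"):
    # Hypothetical prompt template; the actual format used during
    # fine-tuning is not documented on the model card.
    return f"Translate to {target_lang}: {text}"


def translate(text):
    # Imports are deferred so build_prompt stays usable without loading
    # the model; the 7B base model requires substantial memory.
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    base = AutoModelForCausalLM.from_pretrained("NousResearch/Llama-2-7b-hf")
    model = PeftModel.from_pretrained(base, "ekaterinatao/translation_llm")
    tokenizer = AutoTokenizer.from_pretrained("NousResearch/Llama-2-7b-hf")

    inputs = tokenizer(build_prompt(text), return_tensors="pt")
    # Greedy decoding by default; tune generation parameters as needed.
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

For production use, consider loading the base model with `torch_dtype` set to a half-precision type to reduce memory, or merging the adapter into the base weights with `model.merge_and_unload()`.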
It achieves the following results on the evaluation set:
- Loss: 0.0039
Training results:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.0063 | 0.8591 | 1000 | 0.0110 |
| 0.0041 | 1.7181 | 2000 | 0.0089 |
| 0.0016 | 2.5772 | 3000 | 0.0086 |
| 0.0006 | 3.4362 | 4000 | 0.0041 |
| 0.0023 | 4.2953 | 5000 | 0.0039 |
Base model: NousResearch/Llama-2-7b-hf