
Translation of long texts

#2 by lramming

Hi,

First of all, thank you for creating this model.
I have noticed that the model seems to have problems translating long documents. I am using vLLM to run this model, and when I send a prompt with a source text of 400 tokens, I get only 160 tokens back (this is not due to the vLLM parameters; the model simply decides it is finished at that point).
Is there a way to force the model to stay close to the length of the original text and not stop the generation too soon?
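For reference, one workaround I have been considering is vLLM's `min_tokens` sampling parameter, which suppresses the EOS token until a minimum number of tokens has been generated. A minimal sketch (the 0.8 ratio is just my heuristic guess for translation length, not anything from the model card):

```python
# Sketch: forcing a minimum generation length via vLLM's min_tokens.
# min_tokens suppresses the EOS token until that many tokens are produced,
# so the model cannot "decide it is finished" early.

def min_tokens_for(source_tokens: int, ratio: float = 0.8) -> int:
    """Rough floor for the translation length, assuming the target is
    at least `ratio` times as long as the source (a heuristic)."""
    return max(1, int(source_tokens * ratio))

params = {
    "temperature": 0.0,                  # deterministic decoding for translation
    "max_tokens": 1024,                  # generous upper bound
    "min_tokens": min_tokens_for(400),   # EOS suppressed before ~320 tokens
}

# With vLLM installed, these would be passed through SamplingParams:
# from vllm import SamplingParams
# sampling = SamplingParams(**params)
# outputs = llm.generate(prompts, sampling)
```

The risk with this is that the model may pad or repeat itself once forced past its natural stopping point, so splitting the document into smaller chunks may still be the safer fix.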
