Fine-tuning DeBERTa on token lengths greater than 512

#16
by iamhdave - opened

I want to fine-tune the DeBERTa model for my specific use case, where the context length is around 1200 tokens. Has anyone faced issues when fine-tuning with token lengths greater than the 512 limit?
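For context, here is a minimal sketch of how one might pass longer inputs to DeBERTa in the Transformers library. The checkpoint name, label count, and `max_length` value are assumptions for illustration, not from this thread; DeBERTa's disentangled attention uses relative position embeddings, so sequences longer than the 512 tokens seen in pre-training are generally accepted, though attention memory grows quadratically with length.

```python
# Sketch only: assumes microsoft/deberta-v3-base and a 2-class task.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "microsoft/deberta-v3-base"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=2
)

# Raise max_length past 512; DeBERTa's relative position embeddings are
# built dynamically, so there is no hard positional cutoff at 512.
encoded = tokenizer(
    "a long document of roughly 1200 tokens ...",
    truncation=True,
    max_length=1200,
    return_tensors="pt",
)
outputs = model(**encoded)
print(outputs.logits.shape)  # (1, 2)
```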