Multilingual_Language_Detection
This model is a fine-tuned version of bert-base-uncased on a multilingual language-detection dataset. It achieves the following result on the evaluation set:
- Loss: 0.3768
Languages
The model is trained on 22 languages, listed below:
Arabic, Urdu, Tamil, Hindi, English, French, Spanish, Japanese, Chinese, Thai, Indonesian, Dutch, Korean, Latin, Persian, Portuguese, Pushto, Romanian, Russian, Swedish, Turkish, Estonian
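A minimal usage sketch with the Hugging Face `transformers` pipeline is shown below. The model identifier `your-username/Multilingual_Language_Detection` is a placeholder (this card does not state the hub id), and the exact label names depend on the model's `id2label` config; the language list here simply mirrors the 22 languages above.

```python
# Hypothetical usage sketch -- the model id is a placeholder, and label
# strings come from the model's own config, not this list.
LANGUAGES = [
    "Arabic", "Urdu", "Tamil", "Hindi", "English", "French", "Spanish",
    "Japanese", "Chinese", "Thai", "Indonesian", "Dutch", "Korean",
    "Latin", "Persian", "Portuguese", "Pushto", "Romanian", "Russian",
    "Swedish", "Turkish", "Estonian",
]

def detect_language(text: str, model_id: str = "your-username/Multilingual_Language_Detection") -> str:
    """Return the predicted language label for `text`.

    The import and model download happen lazily inside the function,
    since they require the `transformers` package and network access.
    """
    from transformers import pipeline  # lazy import: heavy dependency
    classifier = pipeline("text-classification", model=model_id)
    # The pipeline returns a list of dicts like {"label": ..., "score": ...}.
    return classifier(text)[0]["label"]
```

For example, `detect_language("Bonjour tout le monde")` would be expected to return the French label, though the exact string depends on how the labels were encoded during fine-tuning.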