bytesizedllm committed 80db391 (verified) · Parent: e547686 · Update README.md
This model is a Tamil Masked Language Model (MLM) fine-tuned from the XLM-RoBERTa architecture.

Perplexity: 4.9
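The perplexity reported above is, by convention, the exponential of the mean cross-entropy loss over the masked tokens. A minimal sketch of that relationship, using hypothetical per-batch loss values (not the actual evaluation logs of this model):

```python
import math

# Hypothetical mean MLM cross-entropy losses from three evaluation batches.
losses = [1.7, 1.5, 1.6]

# Perplexity = exp(mean cross-entropy over masked tokens).
mean_loss = sum(losses) / len(losses)
perplexity = math.exp(mean_loss)
```

A mean masked-token loss of about 1.6 nats corresponds to a perplexity near 4.9, the value reported for this model.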
For this study, we fine-tuned the base version of XLM-RoBERTa using Masked Language Modeling (MLM) to adapt it to transliteration and code-switching in a Tamil-English dataset. The MLM task randomly masks a subset of input tokens and trains the model to predict the masked tokens from their context, allowing the model to learn enriched contextual embeddings tailored to the linguistic challenges of bilingual text.
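The masking step described above can be sketched as follows. This is a simplified illustration, not the training code used for this model: the mask token id and the 15% masking rate are assumptions, and the full BERT/RoBERTa recipe additionally leaves some selected tokens unchanged or replaces them with random tokens.

```python
import random

def mask_tokens(token_ids, mask_id, mask_prob=0.15, seed=0):
    """Randomly replace tokens with the mask id; labels keep the originals.

    Positions that are not masked get a label of -100, which the
    cross-entropy loss is configured to ignore during MLM training.
    """
    rng = random.Random(seed)
    inputs, labels = [], []
    for token in token_ids:
        if rng.random() < mask_prob:
            inputs.append(mask_id)   # model must predict this position
            labels.append(token)     # target is the original token
        else:
            inputs.append(token)     # unchanged input
            labels.append(-100)      # ignored by the loss
    return inputs, labels
```

In practice this logic is handled by a data collator in the training pipeline; the sketch only shows how inputs and labels relate.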
To adapt XLM-RoBERTa effectively, the MLM training dataset was constructed from three key components:

- **Original data**: monolingual text from Tamil and Malayalam social media sources.
- **Fully transliterated data**: all words in the original data transliterated into Roman script.
- **Partially transliterated data**: a randomly selected 20% to 70% of the words in each sentence transliterated into Roman script.
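The partial-transliteration component above can be sketched as below. The `transliterate` callable is a hypothetical placeholder for an actual Tamil-to-Roman transliteration function, which the source does not specify:

```python
import random

def partially_transliterate(words, transliterate, low=0.2, high=0.7, seed=0):
    """Transliterate a random 20%-70% fraction of the words in a sentence.

    `transliterate` is a placeholder for a real script-conversion function.
    """
    rng = random.Random(seed)
    fraction = rng.uniform(low, high)              # per-sentence fraction
    count = max(1, round(fraction * len(words)))   # how many words to convert
    chosen = set(rng.sample(range(len(words)), count))
    return [transliterate(w) if i in chosen else w
            for i, w in enumerate(words)]
```

Applying this alongside the fully transliterated copy exposes the model to the mixed-script sentences typical of code-switched social media text.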