bytesizedllm committed on
Commit 8d24ffc · verified · 1 Parent(s): 0a8f61a

Update README.md

Files changed (1):
  1. README.md +1 -1

README.md CHANGED
@@ -1,7 +1,7 @@
 We fine-tuned the base version of XLM-RoBERTa using Masked Language Modeling (MLM) to adapt it for handling transliteration and code-switching in a Tamil-English dataset. The MLM task involves randomly masking a subset of input tokens and training the model to predict these masked tokens from their context, allowing the model to learn enriched contextual embeddings tailored to the linguistic challenges of bilingual text.
 
 To adapt XLM-RoBERTa effectively, the MLM training dataset was constructed from three key components:
-1. Original data: Contains monolingual text from Tamil and Malayalam social media sources.
+1. Original data: Contains monolingual text from Tamil AI4Bharath.
 2. Fully transliterated data: All words in the original data were transliterated into Roman script.
 3. Partially transliterated data: A randomly selected 20% to 70% of words in each sentence were transliterated into Roman script.
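The two data-preparation ideas in the README above — random token masking for MLM, and transliterating a random 20%–70% of the words in each sentence — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the function names `mask_tokens` and `partially_transliterate` and the dummy `translit` callable are hypothetical, and a real setup would mask subword IDs from the XLM-RoBERTa tokenizer (e.g. via a library data collator) rather than whitespace-split words.

```python
import random

MASK_TOKEN = "<mask>"  # XLM-RoBERTa's mask token string


def mask_tokens(tokens, mask_prob=0.15, seed=None):
    """Randomly replace a subset of tokens with the mask token.

    Returns the corrupted input sequence and per-position labels:
    the original token where a mask was placed, None elsewhere
    (unmasked positions are not scored by the MLM loss).
    """
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(MASK_TOKEN)
            labels.append(tok)      # model must recover the original token
        else:
            masked.append(tok)
            labels.append(None)     # position excluded from the loss
    return masked, labels


def partially_transliterate(words, translit, low=0.2, high=0.7, seed=None):
    """Transliterate a random 20%-70% of the words in one sentence.

    `translit` is any word-level transliteration callable (hypothetical
    here); the fraction is drawn uniformly from [low, high] per sentence.
    """
    rng = random.Random(seed)
    frac = rng.uniform(low, high)
    n = max(1, round(frac * len(words)))
    chosen = set(rng.sample(range(len(words)), n))
    return [translit(w) if i in chosen else w for i, w in enumerate(words)]
```

Drawing a fresh fraction per sentence (rather than a fixed rate) gives the model examples across the whole 20%–70% code-switching spectrum, which matches the mixed transliteration levels seen in real bilingual social-media text.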