NE-BERT's advantage is most evident when predicting masked words in low-resource languages: where generic models predict punctuation or sub-word fragments, NE-BERT predicts coherent, culturally relevant words.

| Language | Input Sentence | **NE-BERT (Ours)** | mBERT | IndicBERT |
| :--- | :--- | :--- | :--- | :--- |
| **Assamese** | `মই ভাত <mask> ভাল পাওঁ।` <br>*(I like to [eat] rice)* | **খাই** (Eat) <br> *Correct Verb* | `##ি` <br> *Fragment* | `,` <br> *Punctuation* |
| **Khasi** | `Nga leit sha <mask>.` <br>*(I go to [home/market])* | **iing** (Home) <br> *Correct Noun* | `.` <br> *Period* | `s` <br> *Character* |
| **Garo** | `Anga <mask> cha·jok.` <br>*(I [ate] ...)* | **nokni** (Of house) <br> *Real Word* | `-` <br> *Symbol* | `.` <br> *Period* |

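The qualitative labels in the table (*Fragment*, *Punctuation*, *Character*, vs. a real word) can be assigned programmatically when scoring fill-mask outputs at scale. Below is a minimal sketch, assuming a WordPiece-style tokenizer where sub-word continuations carry a `##` prefix (as in mBERT's vocabulary); the function name and category names are illustrative, not part of NE-BERT's codebase.

```python
import unicodedata

def classify_prediction(token: str) -> str:
    """Categorize a fill-mask prediction the way the table above does.

    Assumes WordPiece-style tokenization, where sub-word continuations
    are prefixed with '##' (as in mBERT's vocabulary).
    """
    if token.startswith("##"):
        return "Fragment"       # sub-word continuation, not a standalone word
    if token and all(unicodedata.category(ch).startswith("P") for ch in token):
        return "Punctuation"    # e.g. ',' or '.'
    if len(token) == 1 and token.isalpha():
        return "Character"      # a single letter, e.g. 's'
    return "Word"               # a plausible standalone word

# Labels for the predictions shown in the table:
print(classify_prediction("খাই"))   # NE-BERT's Assamese prediction -> Word
print(classify_prediction("##ি"))   # mBERT's output -> Fragment
print(classify_prediction(","))     # IndicBERT's output -> Punctuation
print(classify_prediction("s"))     # IndicBERT's Khasi output -> Character
```

Counting how often each model lands in the *Word* bucket over a held-out set gives a cheap automatic proxy for the qualitative comparison above.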
### 2. Effectiveness: Perplexity (PPL)