seanghay committed · verified
Commit 08afb37 · Parent(s): c1f3fbc

Update README.md

Files changed (1): README.md (+1 −1)
README.md CHANGED

```diff
@@ -79,7 +79,7 @@ The model was trained on a curated dataset of **13 Million Khmer sentences** sou
 
 ### Why ALBERT for Khmer?
 
-By using **cross-layer parameter sharing**, this model achieves a hidden size of 768 (similar to BERT-base) but with only **~12M parameters**. This makes it significantly smaller and faster to load than standard BERT models while retaining strong linguistic representation capabilities.
+By using **cross-layer parameter sharing**, this model achieves a hidden size of 768 (similar to BERT-base) but with only **~9.42M parameters**. This makes it significantly smaller and faster to load than standard BERT models while retaining strong linguistic representation capabilities.
 
 ## Evaluation Results
 
```
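The corrected paragraph credits the small footprint to cross-layer parameter sharing. A rough back-of-the-envelope sketch makes the effect concrete: with one shared transformer block, the encoder cost stays fixed no matter how many layers are stacked. The shapes below assume standard ALBERT-base defaults (12 layers, feed-forward size 4× hidden) rather than anything read from this specific model, and biases/LayerNorms/embeddings are omitted for the estimate.

```python
# Rough parameter count for an ALBERT-style encoder (hidden size 768),
# illustrating why cross-layer parameter sharing keeps the model small.
# Assumed shapes: standard ALBERT-base defaults, NOT read from this model.

hidden = 768          # hidden size, as stated in the README
ffn = 4 * hidden      # feed-forward inner size (3072, ALBERT-base default)
num_layers = 12       # assumed layer count (ALBERT-base default)

# One transformer block: Q, K, V, output projections plus the two
# feed-forward matrices (biases and LayerNorms omitted for a rough estimate).
attn = 4 * hidden * hidden
feedforward = 2 * hidden * ffn
shared_block = attn + feedforward

# With cross-layer sharing, all layers reuse this single block, so the
# encoder contributes ~7.1M parameters instead of ~85M for 12 independent
# layers; embeddings account for the rest of the reported ~9.42M total.
print(f"shared encoder params: ~{shared_block / 1e6:.1f}M")
print(f"without sharing ({num_layers} layers): ~{num_layers * shared_block / 1e6:.1f}M")
```

The gap between the two printed figures is the saving the README paragraph refers to: the encoder weights no longer scale with depth, only the (factorized) embeddings and a single shared block remain.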