* Even though the full dataset contained almost 3 million rows, the LoRA model was fine-tuned on only 1 million rows per language.
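The per-language subsetting described above could be done with a sketch like the following. The `"language"` field name and the shuffle-then-truncate sampling strategy are assumptions for illustration, not taken from this repo:

```python
import random

def subsample_per_language(rows, per_language=1_000_000, seed=0):
    """Keep at most `per_language` rows for each language.

    `rows` is an iterable of dicts assumed to carry a "language" key
    (hypothetical field name). Rows are shuffled per language with a
    fixed seed so the subsample is reproducible.
    """
    rng = random.Random(seed)
    # Group rows by their language tag.
    by_lang = {}
    for row in rows:
        by_lang.setdefault(row["language"], []).append(row)
    # Shuffle each group and keep the first `per_language` rows.
    sampled = []
    for group in by_lang.values():
        rng.shuffle(group)
        sampled.extend(group[:per_language])
    return sampled
```

With the real dataset this would be run once per split before fine-tuning; the same idea can also be expressed with `datasets`' `shuffle()` and `select()` if the corpus is loaded through the Hugging Face `datasets` library.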
# Limitations
The model was not trained on the full dataset and has not been extensively evaluated, so any contributions are welcome.
As of right now this is a smaller model; a better model trained on a better dataset will be released.