Instructions for using NlpHUST/vi-word-segmentation with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use NlpHUST/vi-word-segmentation with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="NlpHUST/vi-word-segmentation")

# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("NlpHUST/vi-word-segmentation")
model = AutoModelForTokenClassification.from_pretrained("NlpHUST/vi-word-segmentation")
```
A usage sketch follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
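For a quick end-to-end check, here is a minimal usage sketch building on the snippet above. It is an assumption-laden illustration, not taken from the official docs: the label names "B"/"I" (word-begin / word-inside), the "##" subword-continuation prefix, and the underscore-joining output convention are all assumptions about this model's output format, and the example sentence is arbitrary.

```python
# Minimal sketch: run the pipeline on a Vietnamese sentence and merge the
# token-level predictions back into segmented words.
# Assumptions (not guaranteed by the snippet above): the model emits "B"/"I"
# labels (word-begin / word-inside) and the tokenizer marks subword
# continuations with a "##" prefix.
from transformers import pipeline

pipe = pipeline("token-classification", model="NlpHUST/vi-word-segmentation")

sentence = "Hà Nội là thủ đô của Việt Nam"  # arbitrary example sentence
output = ""
for tok in pipe(sentence):
    if tok["word"].startswith("##"):
        output += tok["word"][2:]        # glue subword pieces back together
    elif tok["entity"] == "I":
        output += "_" + tok["word"]      # continuation of a multi-syllable word
    else:
        output += " " + tok["word"]      # start of a new word

print(output.strip())  # e.g. "Hà_Nội là thủ_đô của Việt_Nam" (if the assumptions hold)
```

In Vietnamese word segmentation, syllables that form a single word are conventionally joined with underscores, so "thủ đô" (capital) becomes "thủ_đô".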
Update README.md
README.md CHANGED

```diff
@@ -20,7 +20,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # vi-word-segmentation
 
-This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on an vlsp 2013 word segmentation dataset.
+This model is a fine-tuned version of [NlpHUST/electra-base-vn](https://huggingface.co/NlpHUST/electra-base-vn) on an vlsp 2013 vietnamese word segmentation dataset.
 It achieves the following results on the evaluation set:
 - Loss: 0.0501
 - Precision: 0.9833
```