Instructions for using mbazaNLP/Quantized_Nllb_Finetuned_Edu_En_Kin_8bit with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use mbazaNLP/Quantized_Nllb_Finetuned_Edu_En_Kin_8bit with Transformers:
```python
# Use a pipeline as a high-level helper.
# Warning: Pipeline type "translation" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="mbazaNLP/Quantized_Nllb_Finetuned_Edu_En_Kin_8bit")
# NLLB-based models expect FLORES-200 language codes at call time, e.g.:
#   pipe("Hello", src_lang="eng_Latn", tgt_lang="kin_Latn")

# Load model directly. AutoModelForSeq2SeqLM (rather than the bare AutoModel)
# attaches the language-modeling head needed to generate translations.
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained(
    "mbazaNLP/Quantized_Nllb_Finetuned_Edu_En_Kin_8bit", dtype="auto"
)
```
- Notebooks
- Google Colab
- Kaggle
Model Details
Model Description
This is a machine translation model, finetuned from NLLB-200's distilled 1.3B model. It is meant to be used for machine translation of education-related data between English and Kinyarwanda.
- Finetuning code repository: the code used to finetune this model can be found here
Quantization details
The model is quantized to 8-bit precision using the CTranslate2 library:

```bash
pip install ctranslate2
```

The conversion is done with the command:

```bash
ct2-transformers-converter --model <model-dir> --quantization int8 --output_dir <output-model-dir>
```
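The same conversion can also be run from Python through CTranslate2's converter API. A minimal sketch, where both paths are placeholders for your local directories:

```python
import ctranslate2

# Placeholder paths: point these at the finetuned Transformers checkpoint
# and at the directory that should receive the CTranslate2 model.
converter = ctranslate2.converters.TransformersConverter("model-dir")
converter.convert("output-model-dir", quantization="int8")
```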
How to Get Started with the Model
Use the code below to get started with the model.
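Because the weights were quantized with CTranslate2, inference runs through the ctranslate2 runtime rather than `model.generate`. Below is a minimal sketch following the standard CTranslate2 recipe for NLLB models; the local model directory, the tokenizer source (the base NLLB-200 checkpoint), and the FLORES-200 codes `eng_Latn`/`kin_Latn` are assumptions, not taken from this card:

```python
import ctranslate2
from transformers import AutoTokenizer

# Assumption: the CTranslate2 model files have been downloaded to this local
# directory (e.g. via huggingface_hub.snapshot_download).
translator = ctranslate2.Translator("Quantized_Nllb_Finetuned_Edu_En_Kin_8bit", device="cpu")

# Assumption: the base NLLB-200 tokenizer is compatible with this finetune.
tokenizer = AutoTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-1.3B", src_lang="eng_Latn"
)

text = "The teacher explains the lesson to the students."
source = tokenizer.convert_ids_to_tokens(tokenizer.encode(text))

# The target language is supplied as a decoding prefix.
results = translator.translate_batch([source], target_prefix=[["kin_Latn"]])
target = results[0].hypotheses[0][1:]  # drop the leading language-code token

print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```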
Training Procedure
The model was finetuned on three datasets: a general-purpose dataset, a tourism dataset, and an education dataset.
The model was finetuned in two phases.
Phase one:
- General purpose dataset
- Education dataset
- Tourism dataset
Phase two:
- Education dataset
Other than the change of training data between the phase one and phase two finetuning, no hyperparameters were modified. In both phases, the model was trained on an A100 40GB GPU for two epochs.
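As an illustration only, this schedule amounts to running the same trainer twice with different data. Everything below is hypothetical (the toy sentence pairs, the trainer configuration); the actual setup lives in the linked finetuning repository:

```python
from datasets import Dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

checkpoint = "facebook/nllb-200-distilled-1.3B"
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng_Latn", tgt_lang="kin_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

def encode(batch):
    # text_target tokenizes the Kinyarwanda side under the tgt_lang code.
    return tokenizer(batch["en"], text_target=batch["kin"], truncation=True)

# Toy stand-ins for the corpora described above.
phase_one = Dataset.from_dict(
    {"en": ["Welcome."], "kin": ["Murakaza neza."]}
).map(encode, batched=True)
phase_two = Dataset.from_dict(
    {"en": ["Thank you."], "kin": ["Murakoze."]}
).map(encode, batched=True)

# Identical hyperparameters in both phases; only the training data changes.
args = Seq2SeqTrainingArguments(output_dir="nllb-edu-en-kin", num_train_epochs=2)
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

for data in (phase_one, phase_two):
    trainer = Seq2SeqTrainer(
        model=model, args=args, train_dataset=data, data_collator=collator
    )
    trainer.train()
```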
Evaluation
Metrics
Model performance was measured using BLEU, spBLEU, TER, and chrF++ metrics.
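All four metrics are available in the sacrebleu library. A minimal sketch of how such scores can be computed; the hypothesis/reference strings are illustrative, and spBLEU is obtained here via sacrebleu's flores200 SentencePiece tokenizer (an assumption about the exact evaluation setup):

```python
import sacrebleu

# Illustrative system outputs and references, not the actual evaluation data.
hyps = ["the students are in class"]
refs = [["the students are in the class"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hyps, refs)
spbleu = sacrebleu.corpus_bleu(hyps, refs, tokenize="flores200")  # SentencePiece BLEU
ter = sacrebleu.corpus_ter(hyps, refs)
chrfpp = sacrebleu.corpus_chrf(hyps, refs, word_order=2)  # word_order=2 gives chrF++

print(bleu.score, spbleu.score, ter.score, chrfpp.score)
```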
Results