Model Card for kelSidenna/nllb-eng-ha-v0

This model is a fine-tuned version of facebook/nllb-200-distilled-600M for machine translation between English (eng) and Hassaniya Arabic (ha-ar).
It was trained on a parallel corpus of English ↔ Hassaniya sentence pairs.

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kelSidenna/nllb-eng-ha-v0"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Set the source language before tokenizing the input.
text = "How are you?"
tokenizer.src_lang = "en"
inputs = tokenizer(text, return_tensors="pt")

# Force the decoder to begin with the Hassaniya language token
# so generation produces the target language.
translated_tokens = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("ha-ar")
)
print(tokenizer.batch_decode(translated_tokens, skip_special_tokens=True))
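The snippet above translates English into Hassaniya. The same model can be driven in either direction by swapping the source and target language codes; a minimal sketch wrapping this in a helper function, assuming the tokenizer registers "en" and "ha-ar" as language tokens (implied by the card but not stated explicitly):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "kelSidenna/nllb-eng-ha-v0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def translate(text, src_lang, tgt_lang):
    # Set the source language before tokenizing, then force the
    # decoder to start with the target-language token.
    tokenizer.src_lang = src_lang
    inputs = tokenizer(text, return_tensors="pt")
    tokens = model.generate(
        **inputs,
        forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
        max_length=128,
    )
    return tokenizer.batch_decode(tokens, skip_special_tokens=True)[0]

# English -> Hassaniya; swap the codes for the reverse direction.
print(translate("How are you?", "en", "ha-ar"))
```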
Model size: 0.6B params (F32, safetensors)