```python
# Load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("llama-lang-adapt/mt-multi-default")
model = AutoModelForCausalLM.from_pretrained("llama-lang-adapt/mt-multi-default")
```
This model is llama-lang-adapt/pretrain-multi-default finetuned on machine translation (MT) data for 5 African languages.

The language pairs are: en-ar, ar-en, en-ig, ig-en, en-mg, mg-en, en-sw, sw-en, en-yo, yo-en.
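The pair list above can be expanded into (source, target) translation directions. A minimal sketch, assuming the codes follow ISO 639-1 (the language names below are inferred from those codes, not stated by the model card itself):

```python
# Assumed ISO 639-1 language names for the codes in the pair list.
LANG_NAMES = {
    "en": "English",
    "ar": "Arabic",
    "ig": "Igbo",
    "mg": "Malagasy",
    "sw": "Swahili",
    "yo": "Yoruba",
}

# The pair list as given in the model card.
pairs = "en-ar,ar-en,en-ig,ig-en,en-mg,mg-en,en-sw,sw-en,en-yo,yo-en"

# Split into (source, target) direction tuples.
directions = [tuple(p.split("-")) for p in pairs.split(",")]

for src, tgt in directions:
    print(f"{LANG_NAMES[src]} -> {LANG_NAMES[tgt]}")
```

Each of the 5 non-English languages appears in both directions, giving 10 translation directions in total.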
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="llama-lang-adapt/mt-multi-default")
```