How to use from the Transformers library
```python
# Use a pipeline as a high-level helper
# Warning: The "translation" pipeline type is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("translation", model="transZ/M2M_Vi_Ba")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("transZ/M2M_Vi_Ba")
model = AutoModelForSeq2SeqLM.from_pretrained("transZ/M2M_Vi_Ba")
```

How to run the model

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

model = M2M100ForConditionalGeneration.from_pretrained("transZ/M2M_Vi_Ba")
tokenizer = M2M100Tokenizer.from_pretrained("transZ/M2M_Vi_Ba")

# Translate from Vietnamese ("vi") to Bahnar ("ba")
tokenizer.src_lang = "vi"
vi_text = "Hôm nay ba đi chợ."  # "Today, Dad goes to the market."
encoded_vi = tokenizer(vi_text, return_tensors="pt")

# Force the decoder to start with the target-language token
generated_tokens = model.generate(
    **encoded_vi, forced_bos_token_id=tokenizer.get_lang_id("ba")
)
translation = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]
print(translation)
```