```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("GKLMIP/roberta-tagalog-base")
model = AutoModelForMaskedLM.from_pretrained("GKLMIP/roberta-tagalog-base")
```
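With the model loaded directly, masked-token predictions can be scored without the pipeline API. A minimal sketch; the Tagalog example sentence is illustrative, not from the model card:

```python
import torch

# Mask one token in a Tagalog sentence (example sentence is illustrative).
text = f"Magandang {tokenizer.mask_token} sa inyong lahat."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and take the top-5 candidate tokens.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_tokens = logits[0, mask_index].topk(5).indices[0]
print(tokenizer.convert_ids_to_tokens(top_tokens.tolist()))
```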
For more information, see the project repository: https://github.com/GKLMIP/Pretrained-Models-For-Tagalog
If you use our model, please consider citing our paper:
```bibtex
@InProceedings{,
  author    = "Jiang, Shengyi
               and Fu, Yingwen
               and Lin, Xiaotian
               and Lin, Nankai",
  title     = "Pre-trained Language Models for Tagalog with Multi-source Data",
  booktitle = "Natural Language Processing and Chinese Computing",
  year      = "2021",
  publisher = "Springer International Publishing",
  address   = "Cham",
}
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="GKLMIP/roberta-tagalog-base")
```
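The fill-mask pipeline returns ranked candidates for the masked position. A minimal usage sketch, again with an illustrative Tagalog sentence:

```python
# The mask placeholder must match the tokenizer's mask token.
results = pipe(f"Magandang {pipe.tokenizer.mask_token} sa inyong lahat.")

# Each result carries the predicted token and its score.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```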