RoBERTa: A Robustly Optimized BERT Pretraining Approach
Paper: arXiv:1907.11692
SinBerto is a small language model trained on a small Sinhala news corpus. Sinhala is a low-resource language compared to most other languages.
Model architecture: RoBERTa, with vocab_size=52_000, max_position_embeddings=514, num_attention_heads=12, num_hidden_layers=6, type_vocab_size=1.
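For reference, these hyperparameters correspond roughly to the following `RobertaConfig` (a sketch for illustration; the actual training script may set additional options not listed here):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Architecture implied by the listed hyperparameters (illustrative only).
config = RobertaConfig(
    vocab_size=52_000,
    max_position_embeddings=514,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
model = RobertaForMaskedLM(config)
print(f"{model.num_parameters():,} parameters")
```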
Load the model directly:

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto")
model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")
```
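A minimal sketch of masked-token prediction with the loaded tokenizer and model (assumes PyTorch is installed; the Sinhala sentence is a hypothetical placeholder, not from the training data):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("Kalindu/SinBerto")
model = AutoModelForMaskedLM.from_pretrained("Kalindu/SinBerto")

# Build a sentence containing the tokenizer's mask token.
text = f"මම {tokenizer.mask_token} කෑවා."  # hypothetical example sentence
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Pick the highest-scoring token for the masked position.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```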
To download the full repository with git (requires Git LFS):

```bash
git lfs install
git clone https://huggingface.co/Kalindu/SinBerto
```
Or use a pipeline as a high-level helper:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="Kalindu/SinBerto")
```
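A usage sketch for the pipeline (the input sentence is again a hypothetical placeholder; the mask token is taken from the pipeline's own tokenizer):

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="Kalindu/SinBerto")

# Fill the masked position and print the top candidate tokens with their scores.
text = f"මම {pipe.tokenizer.mask_token} කෑවා."  # hypothetical example sentence
for pred in pipe(text):
    print(pred["token_str"], round(pred["score"], 4))
```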