How to use ClassCat/roberta-base-latin-v2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="ClassCat/roberta-base-latin-v2")
```
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ClassCat/roberta-base-latin-v2")
model = AutoModelForMaskedLM.from_pretrained("ClassCat/roberta-base-latin-v2")
```
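As a quick sanity check, the pipeline above can be called directly on a masked sentence. A minimal sketch follows; the Latin sentence is an illustrative assumption (any sentence containing the tokenizer's `<mask>` token works, since RoBERTa-style tokenizers use `<mask>` as the mask token):

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="ClassCat/roberta-base-latin-v2")

# Example sentence is a hypothetical illustration; "<mask>" is RoBERTa's mask token.
results = pipe("Gallia est omnis divisa in partes <mask>.")

# Each result is a dict with the predicted token and its score.
for r in results:
    print(r["token_str"], round(r["score"], 4))
```

The pipeline returns the top candidate fillers for the masked position, ranked by probability.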
Hi ClassCat, I'd like to ask about the training of this RoBERTa-based model. To train it on the CC100 Latin dataset, how much time and what hardware did you use? Regarding the training arguments, did you train for 3 epochs?
Thank you so much