The proposed method aims to overcome the challenges of recognizing Indonesian skills, which stem from the complexity of the Indonesian language and the lack of annotated data. The EBERT-RP model incorporates relative position embeddings, which allow the model to capture the relative positions of tokens in a sentence, and a novel attention mechanism that improves the model's ability to attend to critical information. To evaluate the performance of the EBERT-RP model, we conducted experiments on an Indonesian skill recognition dataset. If you use this model, please cite our paper: http://www.icicel.org/ell/contents/2024/4/el-18-04-02.pdf
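The relative-position scoring that this model's name suggests ("relative_key", after Shaw et al., 2018) adds a query-to-relative-position term to the usual content-based attention score. A minimal NumPy sketch of that scoring step; all shapes and values here are illustrative, not the model's actual weights:

```python
import numpy as np

# Sketch of "relative_key" attention scoring: each score combines a
# content term Q_i . K_j with a relative-position term Q_i . R_{ij}.
seq_len, d = 4, 8
rng = np.random.default_rng(0)
Q = rng.normal(size=(seq_len, d))
K = rng.normal(size=(seq_len, d))

# One learned embedding per relative distance in [-max_rel, max_rel].
max_rel = seq_len - 1
rel_emb = rng.normal(size=(2 * max_rel + 1, d))

# dist[i, j] = j - i, shifted so it indexes into rel_emb.
dist = np.arange(seq_len)[None, :] - np.arange(seq_len)[:, None]
R = rel_emb[dist + max_rel]                      # (seq_len, seq_len, d)

# Content score plus relative-position score, then the usual scaling.
scores = Q @ K.T + np.einsum("id,ijd->ij", Q, R)
scores /= np.sqrt(d)
```

The relative term lets attention depend on how far apart two tokens are rather than on their absolute positions.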

Use a pipeline as a high-level helper

from transformers import pipeline

pipe = pipeline("fill-mask", model="meilanynonsitentua/bertskill-relative-key")

Load model directly

from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("meilanynonsitentua/bertskill-relative-key")

model = AutoModelForMaskedLM.from_pretrained("meilanynonsitentua/bertskill-relative-key")
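Once loaded, the fill-mask pipeline can be queried directly with a masked sentence. The Indonesian example sentence below is illustrative ("We are looking for a candidate who masters [MASK] programming"), not drawn from the training data:

```python
from transformers import pipeline

pipe = pipeline("fill-mask", model="meilanynonsitentua/bertskill-relative-key")

# Illustrative query; [MASK] is the standard BERT mask token.
results = pipe("Kami mencari kandidat yang menguasai pemrograman [MASK].")
for r in results:
    print(r["token_str"], round(r["score"], 3))
```

Each result is a dict containing the predicted token, its score, and the filled-in sequence.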
