How to use research-backup/roberta-large-semeval2012-average-prompt-e-loob with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="research-backup/roberta-large-semeval2012-average-prompt-e-loob")

# Load the model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("research-backup/roberta-large-semeval2012-average-prompt-e-loob")
model = AutoModel.from_pretrained("research-backup/roberta-large-semeval2012-average-prompt-e-loob")
```

RelBERT fine-tuned from roberta-large on relbert/semeval2012_relational_similarity.
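The Transformers snippet above only exposes raw token-level features. As an illustration, the sketch below mean-pools the last hidden state into a single 1024-dimensional vector for a sentence mentioning a word pair; the mean pooling and the example sentence are assumptions made here (echoing the "average" aggregation in the model name), not the exact RelBERT procedure, which is provided by the relbert library described further down.

```python
# Minimal sketch: mean-pool token embeddings into one vector.
# The pooling strategy and the input sentence are assumptions for illustration;
# the exact RelBERT prompt template is not reproduced here.
import torch
from transformers import AutoTokenizer, AutoModel

name = "research-backup/roberta-large-semeval2012-average-prompt-e-loob"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

text = "Tokyo is the capital of Japan"  # example sentence mentioning the word pair
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state       # (1, seq_len, 1024)
mask = inputs["attention_mask"].unsqueeze(-1)        # (1, seq_len, 1)
vector = (hidden * mask).sum(1) / mask.sum(1)        # mean over non-padding tokens -> (1, 1024)
```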
Fine-tuning is done via the RelBERT library (see the repository for more details).
It achieves the following results on the relation understanding tasks:
This model can be used through the relbert library. Install the library via pip:

```shell
pip install relbert
```

and activate the model as below.

```python
from relbert import RelBERT
model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-loob")
vector = model.get_embedding(['Tokyo', 'Japan'])  # relation embedding of the word pair, shape (1024, )
```
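Relation embeddings obtained this way can be compared directly: word pairs that hold the same relation should yield more similar vectors than pairs that do not. The cosine-similarity comparison below is a sketch for illustration only, not part of the relbert API:

```python
# Sketch: compare relation embeddings of word pairs via cosine similarity.
# The similarity computation is our own illustration, not a relbert function.
import numpy as np
from relbert import RelBERT

model = RelBERT("relbert/roberta-large-semeval2012-average-prompt-e-loob")
v_a = np.array(model.get_embedding(['Tokyo', 'Japan']))   # capital-of relation
v_b = np.array(model.get_embedding(['Paris', 'France']))  # capital-of relation
v_c = np.array(model.get_embedding(['dog', 'bark']))      # different relation type

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(v_a, v_b))  # expected higher: same relation type
print(cosine(v_a, v_c))  # expected lower: different relation type
```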
The following hyperparameters were used during training:
The full configuration can be found in the fine-tuning parameter file.
If you use any resource from RelBERT, please consider citing our paper.
```bibtex
@inproceedings{ushio-etal-2021-distilling-relation-embeddings,
    title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels",
    author = "Ushio, Asahi and
      Schockaert, Steven and
      Camacho-Collados, Jose",
    booktitle = "EMNLP 2021",
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
}
```