
# KL-RoBERTa

KL-RoBERTa is a Korean legal language model further pretrained for legal domain adaptation.
It is built on klue/roberta-base and trained on a large-scale Korean legal corpus to better capture legal terminology and long-form legal context.

For more details, see the GitHub repository.


## How to Use

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
```
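Once loaded, the model can be used like any other RoBERTa encoder to produce contextual embeddings. A minimal sketch (the input sentence is illustrative, and using the last hidden state directly is one common choice, not something prescribed by this model card):

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
model.eval()

# Encode an example Korean legal sentence (illustrative input)
inputs = tokenizer("피고인은 형법 제347조에 따라 처벌된다.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Token-level representations: (batch_size, sequence_length, hidden_size)
embeddings = outputs.last_hidden_state
print(embeddings.shape)
```

These token-level vectors can then be pooled or fed into a task-specific head (e.g. for legal text classification or retrieval).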