KL-RoBERTa

KL-RoBERTa is a Korean legal language model built by further pretraining klue/roberta-base for legal domain adaptation. It is trained on a large-scale Korean legal corpus so that it better captures legal terminology and long-form legal context.
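
The further-pretraining step itself is not shown on this card. The sketch below illustrates how such masked-language-model domain adaptation is typically done with Hugging Face Transformers; the corpus file name, hyperparameters, and output directory are illustrative assumptions, not the authors' actual setup.

from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the general-domain base model (see "Base model" below).
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
model = AutoModelForMaskedLM.from_pretrained("klue/roberta-base")

# Hypothetical corpus file: one legal document (or paragraph) per line.
dataset = load_dataset("text", data_files={"train": "korean_legal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard MLM objective: randomly mask 15% of tokens and predict them.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="kl-roberta-mlm", per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()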

For more details, see the GitHub repository.


How to Use

from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and encoder weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
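
Continuing from the snippet above, a minimal inference sketch. The sample sentence is a hypothetical legal clause, and using the first token as a sentence representation is one common convention, not a prescription from the card.

import torch

# Encode an example Korean legal sentence
# ("A contract is formed by the parties' agreement.").
inputs = tokenizer("계약은 당사자의 합의로 성립한다.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Contextual token embeddings; the first ([CLS]-style) token is often
# taken as a sentence-level representation.
sentence_embedding = outputs.last_hidden_state[:, 0, :]
print(sentence_embedding.shape)  # torch.Size([1, 768]) for a base-size encoder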

Model Details

Model size: ~0.1B parameters
Tensor type: F32 (stored as Safetensors)

Base model: klue/roberta-base
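
A quick way to confirm the reported size locally, as a sanity check (not part of the original card):

from transformers import AutoModel

model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e9:.2f}B parameters")  # on the order of 0.1B, as listed above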