KL-RoBERTa
KL-RoBERTa is a Korean language model further pretrained on a large-scale Korean legal corpus for legal-domain adaptation, so that it better captures legal terminology and long-form legal context.
For more details, see the GitHub repository.
How to Use
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
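The loaded model and tokenizer can be used to produce contextual embeddings for Korean legal text. The sketch below is illustrative: the example sentence is made up, and the hidden size of 768 is an assumption based on the klue/roberta-base architecture the model builds on.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
model.eval()

# Encode a Korean legal sentence (example text is illustrative):
# "A contract is formed by the agreement of the parties' declarations of intent."
inputs = tokenizer("계약은 당사자 간의 의사표시의 합치로 성립한다.", return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Token-level contextual embeddings: (batch, seq_len, hidden_size)
token_embeddings = outputs.last_hidden_state

# Mean-pool over non-padding tokens to get one vector per sentence
mask = inputs["attention_mask"].unsqueeze(-1)
sentence_embedding = (token_embeddings * mask).sum(dim=1) / mask.sum(dim=1)
print(sentence_embedding.shape)
```

The resulting sentence vector can serve as input to downstream tasks such as legal-document similarity or classification.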
Base Model
KL-RoBERTa is built on klue/roberta-base.