# KL-RoBERTa

KL-RoBERTa is a Korean legal language model adapted to the legal domain through continued pretraining.  
It is built on **klue/roberta-base** and trained on a large-scale Korean legal corpus to better capture legal terminology and long-form legal context.

For more details, see the **[GitHub repository](https://github.com/EunB2/KL-RoBERTa)**.

---
## How to Use

```python
from transformers import AutoModel, AutoTokenizer

# Load the tokenizer and the pretrained encoder from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
```
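Since `AutoModel` returns a bare encoder, a sentence embedding is typically derived by pooling the token-level hidden states. A minimal sketch of masked mean pooling (the `mean_pool` helper is illustrative, not part of the released code):

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Zero out padding positions before averaging over the sequence dimension
    mask = attention_mask.unsqueeze(-1).float()
    summed = (last_hidden_state * mask).sum(dim=1)
    counts = mask.sum(dim=1).clamp(min=1e-9)
    return summed / counts

# Usage with the model and tokenizer loaded above:
# inputs = tokenizer("계약은 당사자의 합의로 성립한다.", return_tensors="pt")
# with torch.no_grad():
#     outputs = model(**inputs)
# embedding = mean_pool(outputs.last_hidden_state, inputs["attention_mask"])
```

Pooling with the attention mask keeps padding tokens from diluting the average when sentences in a batch have different lengths.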