# KL-RoBERTa

KL-RoBERTa is a Korean legal language model further pretrained for legal domain adaptation.
It is built on **klue/roberta-base** and trained on a large-scale Korean legal corpus to better capture legal terminology and long-form legal context.
For more details, see the **[GitHub repository](https://github.com/EunB2/KL-RoBERTa)**.
---

## How to Use
```python
from transformers import AutoModel, AutoTokenizer

# Load the domain-adapted weights and their tokenizer from the Hugging Face Hub
model = AutoModel.from_pretrained("EunB2/KL-RoBERTa")
tokenizer = AutoTokenizer.from_pretrained("EunB2/KL-RoBERTa")
```
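`AutoModel` returns contextual token embeddings (`last_hidden_state`), not a single sentence vector. A common way to get one is masked mean pooling over the token vectors. The model card does not prescribe a pooling strategy, so this is an illustrative sketch; the NumPy arrays stand in for the shapes of `last_hidden_state` and `attention_mask`:

```python
import numpy as np

def mean_pool(last_hidden_state, attention_mask):
    """Average token vectors, ignoring padding positions.

    last_hidden_state: (batch, seq_len, hidden) float array
    attention_mask:    (batch, seq_len) array of 0/1 flags
    """
    mask = attention_mask[..., None].astype(last_hidden_state.dtype)  # (b, s, 1)
    summed = (last_hidden_state * mask).sum(axis=1)                   # (b, h)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)                    # (b, 1)
    return summed / counts

# Toy stand-ins for model outputs: batch=1, seq_len=3, hidden=4,
# where the last position is padding and must not affect the average.
hidden = np.ones((1, 3, 4))
mask = np.array([[1, 1, 0]])
sentence_vec = mean_pool(hidden, mask)  # shape (1, 4)
```

With real model outputs, pass `outputs.last_hidden_state.detach().numpy()` and the tokenizer's `attention_mask` in place of the toy arrays.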