Commit 9f1c7e4
Parent(s): 52e1da2
Update README.md
README.md CHANGED
@@ -51,11 +51,21 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-
+Pretrained RoBERTa Model on Korean Language. See [Github](https://github.com/KLUE-benchmark/KLUE) and [Paper](https://arxiv.org/abs/2105.09680) for more details.
 
 ## Intended uses & limitations
 
-
+## How to use
+
+_NOTE:_ Use `BertTokenizer` instead of RobertaTokenizer. (`AutoTokenizer` will load `BertTokenizer`)
+
+```python
+from transformers import AutoModel, AutoTokenizer
+
+model = AutoModel.from_pretrained("klue/roberta-base")
+tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
+```
+
 
 ## Training and evaluation data
 