```
the_mask_pipe("Ejo ndikwiga nagize <mask> baje kunsura.")
{'sequence': 'Ejo ndikwiga nagize ngo baje kunsura.', 'score': 0.032475441694259644, 'token': 396, 'token_str': ' ngo'},
{'sequence': 'Ejo ndikwiga nagize abana baje kunsura.', 'score': 0.029481062665581703, 'token': 739, 'token_str': ' abana'},
{'sequence': 'Ejo ndikwiga nagize abantu baje kunsura.', 'score': 0.016263306140899658, 'token': 500, 'token_str': ' abantu'}]
```
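
For context, `the_mask_pipe` used above comes from the standard `transformers` fill-mask pipeline. A minimal sketch of how such a pipeline is typically constructed (the exact construction line sits earlier in this README; the checkpoint name is taken from the example below):

```
from transformers import pipeline

# Assumed construction: a fill-mask pipeline backed by the kinyaRoberta-small
# checkpoint, matching the call shown above.
the_mask_pipe = pipeline("fill-mask",
                         model="jean-paul/kinyaRoberta-small",
                         tokenizer="jean-paul/kinyaRoberta-small")
```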

2) Direct use from the transformers library to get features using AutoModel

```
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the pretrained masked-language model
tokenizer = AutoTokenizer.from_pretrained("jean-paul/kinyaRoberta-small")
model = AutoModelForMaskedLM.from_pretrained("jean-paul/kinyaRoberta-small")

# Tokenize a Kinyarwanda sentence and run it through the model
input_text = "Ejo ndikwiga nagize abashyitsi baje kunsura."
encoded_input = tokenizer(input_text, return_tensors='pt')
output = model(**encoded_input)
```
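
The `output` above is a masked-LM output whose `logits` field has shape `(batch_size, sequence_length, vocab_size)`. As a follow-up sketch (assumed usage continuing from the snippet above, not part of the original example), the same model can fill a `<mask>` token directly:

```
# Continuing from the snippet above: score a masked position directly.
masked_text = "Ejo ndikwiga nagize <mask> baje kunsura."
encoded = tokenizer(masked_text, return_tensors='pt')
logits = model(**encoded).logits  # shape: (1, sequence_length, vocab_size)

# Locate the <mask> token and take the highest-scoring replacement.
mask_index = (encoded['input_ids'][0] == tokenizer.mask_token_id).nonzero()[0].item()
predicted_id = logits[0, mask_index].argmax().item()
print(tokenizer.decode([predicted_id]))  # prints the top-scoring token string
```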

__Note__: We used the Hugging Face implementations to pretrain RoBERTa from scratch, both the RoBERTa model itself and the classes needed to do it.
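
As a rough illustration of that note, here is a minimal, hypothetical sketch of what pretraining RoBERTa from scratch with those Hugging Face classes typically looks like. All sizes, file paths, and hyperparameters below are illustrative assumptions, not the actual training script:

```
from transformers import (RobertaConfig, RobertaForMaskedLM, RobertaTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

# Illustrative configuration for a small RoBERTa; the real sizes are assumptions.
config = RobertaConfig(vocab_size=52000, hidden_size=512,
                       num_hidden_layers=6, num_attention_heads=8)
model = RobertaForMaskedLM(config=config)

# Hypothetical tokenizer and corpus paths, shown only to make the sketch complete.
tokenizer = RobertaTokenizerFast.from_pretrained("./kinyarwanda-tokenizer")
dataset = load_dataset("text", data_files={"train": "kinyarwanda_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard masked-language-modeling collator: randomly masks 15% of tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True,
                                           mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="./kinyaRoberta-small",
                           num_train_epochs=5,
                           per_device_train_batch_size=16),
    data_collator=collator,
    train_dataset=tokenized,
)
trainer.train()
```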