# IndoBERT-Lite base fine-tuned on Translated SQuAD v2

This model is [IndoBERT-Lite](https://huggingface.co/indobenchmark/indobert-lite-base-p2), trained by [Indo Benchmark](https://www.indobenchmark.com/) and fine-tuned on [Translated SQuAD 2.0](https://github.com/Wikidepia/indonesia_dataset/tree/master/question-answering/SQuAD) for the **Q&A** downstream task.

## Model in action

Fast usage with **pipelines**:

from transformers import BertTokenizerFast, pipeline

tokenizer = BertTokenizerFast.from_pretrained(
    'Wikidepia/indobert-lite-squad'
)
qa_pipeline = pipeline(
    "question-answering",