Commit 59e5498
Parent(s): c63dd5c

Update README.md
README.md CHANGED
@@ -1,3 +1,18 @@
+---
+inference:
+  parameters:
+    do_sample: True
+    max_length: 500
+    top_p: 0.9
+    top_k: 20
+    temperature: 1
+    num_return_sequences: 10
+
+widget:
+- text: "abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."
+  example_title: "BERT abstract"
+---
+
 ```
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 
@@ -22,20 +37,4 @@ for i, output in enumerate(outputs):
     print("{}: {}".format(i+1, tokenizer.decode(output, skip_special_tokens=True)))
 
 ```
-
-GitHub: https://github.com/ArvinZhuang/BiTAG
-
----
-inference:
-  parameters:
-    do_sample: True
-    max_length: 500
-    top_p: 0.9
-    top_k: 20
-    temperature: 1
-    num_return_sequences: 10
-
-widget:
-- text: "abstract: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement)."
-  example_title: "BERT abstract"
----
+GitHub: https://github.com/ArvinZhuang/BiTAG
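In short, this commit moves the model-card metadata from the bottom of README.md into YAML front matter at the very top of the file. The Hugging Face Hub only picks up `inference.parameters` and `widget` entries when they appear as front matter at the start of the README, so the hosted widget's sampling settings and the BERT-abstract example take effect only after this move; the GitHub link is kept and simply relocated below the usage snippet.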
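For local use, the sketch below shows how the front-matter generation settings might be reproduced with `model.generate()` in `transformers`. It is a minimal sketch, not the README's exact snippet (the diff elides most of that code block): the checkpoint id `ArvinZhuang/BiTAG` is a hypothetical placeholder for this repository's actual model id, and the `"abstract: ..."` prompt format is taken from the widget example.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Hypothetical checkpoint id -- substitute this repository's actual model id.
model_id = "ArvinZhuang/BiTAG"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Prompt format taken from the widget example: the paper abstract prefixed with "abstract: ".
text = "abstract: We introduce a new language representation model called BERT, ..."

input_ids = tokenizer(text, return_tensors="pt").input_ids

# Keyword arguments mirror the `inference.parameters` block in the YAML front matter.
outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=500,
    top_p=0.9,
    top_k=20,
    temperature=1.0,
    num_return_sequences=10,
)

for i, output in enumerate(outputs):
    print("{}: {}".format(i + 1, tokenizer.decode(output, skip_special_tokens=True)))
```

With `do_sample=True`, the `top_p`, `top_k`, and `temperature` values shape the sampling distribution, and `num_return_sequences=10` returns ten candidate generations for the same abstract, matching what the hosted widget is configured to do.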