### How to use the model

#### 1. If you already have a sketch in mind, and want to get a paragraph based on it

```python
from transformers import pipeline

# 1. Load the model with the Hugging Face pipeline
genius = pipeline("text2text-generation", model='beyond/genius-large', device=0)

# 2. Provide a sketch (keyphrases joined by <mask> tokens)
sketch = "<mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>"

# 3. Generate a paragraph from the sketch
generated_text = genius(sketch, num_beams=3, do_sample=True, max_length=200)[0]['generated_text']
print(generated_text)
```
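The sketch above is just a string of keyphrases joined by `<mask>` tokens, so it can also be built programmatically. A minimal helper for doing so (`build_sketch` is a hypothetical convenience function, not part of the model's API):

```python
def build_sketch(keyphrases):
    """Join keyphrases into a GENIUS-style sketch, separated and wrapped by <mask> tokens."""
    return "<mask> " + " <mask> ".join(keyphrases) + " <mask>"

sketch = build_sketch([
    "Conference on Empirical Methods",
    "submission of research papers",
    "Deep Learning",
])
print(sketch)
# -> <mask> Conference on Empirical Methods <mask> submission of research papers <mask> Deep Learning <mask>
```

The resulting string can be passed directly to the pipeline as shown in the snippet above.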