Fix R vs S denoising documentation swap.
README.md

````diff
@@ -99,9 +99,9 @@ This model was contributed by [Daniel Hesslow](https://huggingface.co/Seledorn).
 The following shows how one can predict masked passages using the different denoising strategies.
 Given the size of the model the following examples need to be run on at least a 40GB A100 GPU.
 
-### R-Denoising
+### S-Denoising
 
-For *R-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below.
+For *S-Denoising*, please make sure to prompt the text with the prefix `[S2S]` as shown below.
 
 ```python
 from transformers import T5ForConditionalGeneration, AutoTokenizer
@@ -120,9 +120,9 @@ print(tokenizer.decode(outputs[0]))
 # -> <pad>. Dudley was a very good boy, but he was also very stupid.</s>
 ```
 
-### S-Denoising
+### R-Denoising
 
-For *S-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below.
+For *R-Denoising*, please make sure to prompt the text with the prefix `[NLU]` as shown below.
 
 ```python
 from transformers import T5ForConditionalGeneration, AutoTokenizer
````
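The mapping this fix restores (S-Denoising → `[S2S]`, R-Denoising → `[NLU]`) can be illustrated without loading the 40GB checkpoint. A minimal sketch, assuming only the mode tokens documented in the diff; the `build_prompt` helper is hypothetical and not part of transformers:

```python
# Hypothetical helper (not part of transformers): prepends the mode token
# that the UL2 README documents for each denoising strategy.
DENOISING_PREFIXES = {
    "S": "[S2S]",  # S-Denoising
    "R": "[NLU]",  # R-Denoising
}


def build_prompt(mode: str, text: str) -> str:
    """Return `text` prefixed with the mode token for the given strategy."""
    return f"{DENOISING_PREFIXES[mode]} {text}"


print(build_prompt("S", "The dog walks in <extra_id_0> park."))
# -> [S2S] The dog walks in <extra_id_0> park.
```

The resulting string is what would be passed to the tokenizer before calling `model.generate`, as in the README examples.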