Commit 2d8a250 (parent: 51d0a37): changes to readme

README.md (CHANGED)
widget:
- text: "[VERB+passive+past: break | PATIENT+partial: cup] <extra_id_0> <extra_id_1> <extra_id_2> ."
- max_length:
---

# Tailor

## Model description

This is a ported version of [Tailor](https://homes.cs.washington.edu/~wtshuang/static/papers/2021-arxiv-tailor.pdf), the general-purpose counterfactual generator.
For the code release, please refer to [this GitHub page](https://github.com/allenai/tailor).

#### How to use

```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

model_path = "allenai/tailor"
generator = pipeline(
    "text2text-generation",
    model=AutoModelForSeq2SeqLM.from_pretrained(model_path),
    tokenizer=AutoTokenizer.from_pretrained(model_path),
    framework="pt",
    device=0,
)

prompt_text = "[VERB+active+past: comfort | AGENT+complete: the doctor | PATIENT+partial: athlete | LOCATIVE+partial: in] <extra_id_0> , <extra_id_1> <extra_id_2> <extra_id_3> ."
generator(prompt_text, max_length=200)
```
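The prompt above follows a regular pattern: a bracketed header of `ROLE+specificity: content` control codes joined by ` | `, followed by a template whose `<extra_id_N>` blanks the model fills in. As an illustration (this helper is not part of the Tailor release, just a sketch of the format), the same prompt can be assembled programmatically:

```python
# Hypothetical helper, not part of the Tailor codebase: builds a
# Tailor-style prompt from (ROLE+specificity, content) pairs plus a
# template containing <extra_id_N> blanks for the model to fill in.
def build_prompt(controls, template):
    # Join each control code with its content, e.g. "AGENT+complete: the doctor"
    header = " | ".join(f"{code}: {text}" for code, text in controls)
    return f"[{header}] {template}"

controls = [
    ("VERB+active+past", "comfort"),
    ("AGENT+complete", "the doctor"),
    ("PATIENT+partial", "athlete"),
    ("LOCATIVE+partial", "in"),
]
prompt_text = build_prompt(
    controls, "<extra_id_0> , <extra_id_1> <extra_id_2> <extra_id_3> ."
)
print(prompt_text)
```

This reproduces the `prompt_text` string used in the example above, and makes it easy to vary individual control codes (e.g. swapping `active` for `passive`) when generating counterfactuals.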