---
language:
- zh
pipeline_tag: text2text-generation
---
|
|
|
|
|
```python
from transformers import T5ForConditionalGeneration
from transformers import T5TokenizerFast as T5Tokenizer

model_path = "svjack/comet-atomic-zh"

device = "cpu"
# device = "cuda:0"

tokenizer = T5Tokenizer.from_pretrained(model_path)
model = T5ForConditionalGeneration.from_pretrained(model_path).to(device).eval()

# Prompt prefixes for the four commonsense relations
NEED_PREFIX = '以下事件有哪些必要的先决条件:'    # prerequisites of the event
EFFECT_PREFIX = '下面的事件发生后可能会发生什么:'  # likely effects of the event
INTENT_PREFIX = '以下事件的动机是什么:'          # motivation behind the event
REACT_PREFIX = '以下事件发生后,你有什么感觉:'    # emotional reaction to the event

event = "X吃了一顿美餐。"  # "X ate a delicious meal."

for prefix in [NEED_PREFIX, EFFECT_PREFIX, INTENT_PREFIX, REACT_PREFIX]:
    prompt = "{}{}".format(prefix, event)
    encode = tokenizer(prompt, return_tensors='pt').to(device)
    answer = model.generate(
        encode.input_ids,
        max_length=128,
        num_beams=2,
        top_p=0.95,
        top_k=50,
        repetition_penalty=2.5,
        length_penalty=1.0,
        early_stopping=True,
    )[0]
    decoded = tokenizer.decode(answer, skip_special_tokens=True)
    print(prompt, "\n---答案:", decoded, "----\n")  # 答案 = "answer"
```
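The four prefixes above can be organized into a small lookup so callers pick a relation by name instead of importing each constant. This is a minimal sketch with our own naming (`RELATION_PREFIXES`, `build_prompt` are not part of the model or library):

```python
# Map readable relation names to the Chinese prompt prefixes the model expects.
# The English keys are our own choice, not part of svjack/comet-atomic-zh.
RELATION_PREFIXES = {
    "need": '以下事件有哪些必要的先决条件:',      # prerequisites
    "effect": '下面的事件发生后可能会发生什么:',  # effects
    "intent": '以下事件的动机是什么:',            # intent
    "react": '以下事件发生后,你有什么感觉:',      # reaction
}

def build_prompt(relation: str, event: str) -> str:
    """Prepend the relation's prompt prefix to the event description."""
    return RELATION_PREFIXES[relation] + event

print(build_prompt("need", "X吃了一顿美餐。"))
# 以下事件有哪些必要的先决条件:X吃了一顿美餐。
```

The resulting string can be passed to the tokenizer exactly as `prompt` is in the loop above.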
|
|
|
|
|
<br/>
|
|
|
|
|
```
以下事件有哪些必要的先决条件:X吃了一顿美餐。
---答案: X买了食物 ----

下面的事件发生后可能会发生什么:X吃了一顿美餐。
---答案: X会吃到好的食物 ----

以下事件的动机是什么:X吃了一顿美餐。
---答案: X想吃东西 ----

以下事件发生后,你有什么感觉:X吃了一顿美餐。
---答案: X的味道很好 ----
```