#### peft version: '0.2.0.dev0'
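The adapter was saved with a development build of PEFT (0.2.0.dev0). If loading fails, a quick first check is the installed library versions; a minimal sketch (the exact version noted above is what the example was written against, not a pinned requirement):

```python
import peft
import transformers

# The note above mentions peft 0.2.0.dev0; newer releases should also load the
# adapter, but this is the version the example below was tested with.
print("peft:", peft.__version__)
print("transformers:", transformers.__version__)
```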
```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

peft_model_id = "svjack/mt0-large-comet-atomic-zh-peft-early-cpu"
config = PeftConfig.from_pretrained(peft_model_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)

# Load the adapter on top of the base model and run everything on CPU.
device = "cpu"
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()
print("model loaded")

# Chinese relation prompts used for the four COMET-ATOMIC-style queries.
NEED_PREFIX = '以下事件有哪些必要的先决条件:'      # "What preconditions are necessary for the following event:"
EFFECT_PREFIX = '下面的事件发生后可能会发生什么:'  # "What might happen after the following event:"
INTENT_PREFIX = '以下事件的动机是什么:'            # "What is the motivation behind the following event:"
REACT_PREFIX = '以下事件发生后,你有什么感觉:'      # "How do you feel after the following event:"

event = "X吃了一顿美餐。"  # "X ate a nice meal."
for prefix in [NEED_PREFIX, EFFECT_PREFIX, INTENT_PREFIX, REACT_PREFIX]:
    prompt = "{}{}".format(prefix, event)
    encode = tokenizer(prompt, return_tensors='pt').to(device)
    answer = model.generate(
        input_ids=encode.input_ids,
        max_length=128,
        num_beams=2,
        top_p=0.95,
        top_k=50,
        repetition_penalty=2.5,
        length_penalty=1.0,
        early_stopping=True,
    )[0]
    decoded = tokenizer.decode(answer, skip_special_tokens=True)
    print(prompt, "\n---答案:", decoded, "----\n")
```

Example output:

```
以下事件有哪些必要的先决条件:X吃了一顿美餐。
---答案: X去超市购物 ----

下面的事件发生后可能会发生什么:X吃了一顿美餐。
---答案: X变胖 ----

以下事件的动机是什么:X吃了一顿美餐。
---答案: X想吃好吃的东西 ----

以下事件发生后,你有什么感觉:X吃了一顿美餐。
---答案: 我可以放松一下 ----
```

In English: the precondition answer is "X goes shopping at the supermarket", the effect is "X gets fat", the motivation is "X wants to eat something tasty", and the reaction is "I can relax for a while".
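For repeated use, the loop above can be folded into a small convenience wrapper that queries all four relations for one event and returns them as a dict. This is a hypothetical helper built on the `model`, `tokenizer`, `device`, and prefix constants defined in the snippet above; the name `comet_atomic_zh` and the simplified generation settings are illustrative, not part of the released code.

```python
PREFIXES = {
    "need": NEED_PREFIX,      # preconditions
    "effect": EFFECT_PREFIX,  # likely consequences
    "intent": INTENT_PREFIX,  # motivation
    "react": REACT_PREFIX,    # emotional reaction
}

def comet_atomic_zh(event, max_length=128):
    """Query the adapter for all four relations of a single event string."""
    results = {}
    for name, prefix in PREFIXES.items():
        inputs = tokenizer(prefix + event, return_tensors="pt").to(device)
        output_ids = model.generate(
            input_ids=inputs.input_ids,
            max_length=max_length,
            num_beams=2,
            repetition_penalty=2.5,
            early_stopping=True,
        )[0]
        results[name] = tokenizer.decode(output_ids, skip_special_tokens=True)
    return results

print(comet_atomic_zh("X吃了一顿美餐。"))
```

Returning one short answer per relation makes it easier to plug the model into a knowledge-graph-style pipeline, since each relation can be inspected or stored separately.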