Reviewer2: Optimizing Review Generation Through Prompt Generation
Paper: arXiv:2402.10886
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("GitBag/Reviewer2_Mp")
model = AutoModelForCausalLM.from_pretrained("GitBag/Reviewer2_Mp")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

This is the prompt generation model (Mp) for our Reviewer2 pipeline. A demo of the model is provided in this repo.
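Since Mp is trained to generate review prompts from paper text, a more representative query passes the paper itself rather than a generic chat message. The sketch below assumes a plain title-plus-abstract input; the exact input format Mp expects is specified in the paper and the demo, so treat this as illustrative only.

```python
# Minimal sketch (assumed input format): asking Mp to generate review
# prompts for a paper. The real format used by the Reviewer2 pipeline
# is defined in the paper/demo; plain "Title/Abstract" text here is an
# assumption for illustration.
paper = (
    "Title: Reviewer2: Optimizing Review Generation Through Prompt Generation\n"
    "Abstract: ..."  # replace with the paper's abstract
)
inputs_mp = tokenizer(paper, return_tensors="pt").to(model.device)
outputs_mp = model.generate(**inputs_mp, max_new_tokens=256)
# Print only the generated review prompts, skipping the input text.
print(tokenizer.decode(outputs_mp[0][inputs_mp["input_ids"].shape[-1]:], skip_special_tokens=True))
```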
If you find this model useful in your research, please cite the following paper:
```bibtex
@misc{gao2024reviewer2,
  title={Reviewer2: Optimizing Review Generation Through Prompt Generation},
  author={Zhaolin Gao and Kianté Brantley and Thorsten Joachims},
  year={2024},
  eprint={2402.10886},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```
Alternatively, you can load the model through the high-level pipeline helper:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="GitBag/Reviewer2_Mp")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```