Automatic Personalized Impression Generation for PET Reports Using Large Language Models
Paper • 2309.10066 • Published
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("xtie/BARTScore-PET")
model = AutoModelForSeq2SeqLM.from_pretrained("xtie/BARTScore-PET")
```

Authored by: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw
This is the domain-adapted BARTScore for evaluating the quality of PET impressions.
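As a rough illustration of how a BARTScore-style metric uses this model, the sketch below scores a candidate impression by the average log-likelihood the seq2seq model assigns to it given a reference text. This is a minimal, hedged sketch of the general BARTScore idea, not the repository's exact implementation; the `bart_score` helper name is ours.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("xtie/BARTScore-PET")
model = AutoModelForSeq2SeqLM.from_pretrained("xtie/BARTScore-PET")
model.eval()

def bart_score(source: str, target: str) -> float:
    """Return the negative mean token NLL of `target` given `source`.

    Higher (closer to 0) means the model finds `target` a more likely
    continuation of `source`. Hypothetical helper, for illustration only.
    """
    src = tokenizer(source, return_tensors="pt", truncation=True)
    tgt = tokenizer(target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(
            input_ids=src.input_ids,
            attention_mask=src.attention_mask,
            labels=tgt.input_ids,  # loss = mean cross-entropy over target tokens
        )
    return -out.loss.item()

score = bart_score(
    "FDG-avid lymph nodes in the mediastinum.",
    "Findings consistent with nodal disease.",
)
```

For the official scoring pipeline, use `compute_metrics_text_generation.py` from the repository as described below.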
To check our domain-adapted text-generation-based evaluation metrics:

1. Clone this GitHub repository into a local folder:

```shell
git clone https://github.com/xtie97/PET-Report-Summarization.git
```

2. Go to the folder containing the code for computing BARTScore and create a new folder called "checkpoints" with a "bart-large" subfolder:

```shell
cd ./PET-Report-Summarization/evaluation_metrics/metrics/BARTScore
mkdir checkpoints
mkdir checkpoints/bart-large
```

3. Download the model weights and put them in the folder "checkpoints/bart-large".

4. Run the code for computing the text-generation-based metrics:

```shell
python compute_metrics_text_generation.py
```
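One way to fetch the weights for the download step, assuming they are the files hosted in this Hub repository, is `snapshot_download` from `huggingface_hub`, pointed at the folder created above:

```python
from huggingface_hub import snapshot_download

# Download all files of the Hub repo into the local checkpoints folder
# created in the steps above.
path = snapshot_download(
    repo_id="xtie/BARTScore-PET",
    local_dir="checkpoints/bart-large",
)
```

Manually downloading the files from the Hub web page into the same folder works equally well.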
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see above) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="xtie/BARTScore-PET")
```