**Authored by**: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw

## Model Description

This is the fine-tuned BART model for summarizing findings in PET reports.

## Abstract

Purpose: To determine if fine-tuned large language models (LLMs) can generate accurate, personalized impressions for whole-body PET reports.

findings_info = """
Description: PET CT WHOLE BODY
Radiologist: James
Findings: Head/Neck: xxx Chest: xxx Abdomen/Pelvis: xxx Extremities/Musculoskeletal: xxx
Indication: The patient is a 60-year-old male with a history of xxx
"""

inputs = tokenizer(findings_info.replace('\n', ' '), padding="max_length", truncation=True, max_length=1024, return_tensors="pt")
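The snippet above only tokenizes the findings; the loading and generation steps are elided here. A minimal end-to-end sketch, assuming a seq2seq checkpoint on the Hub (the checkpoint id below is a placeholder, not the released model, and the beam-search settings are illustrative rather than the paper's configuration):

```python
# Sketch only: checkpoint id and generation settings are assumptions;
# substitute the actual fine-tuned PET-report checkpoint.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "facebook/bart-large"  # placeholder for the fine-tuned model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

findings_info = "Description: PET CT WHOLE BODY Findings: Head/Neck: xxx Chest: xxx"
inputs = tokenizer(findings_info, truncation=True, max_length=1024, return_tensors="pt")

# Illustrative decoding settings, not the released configuration.
outputs = model.generate(inputs["input_ids"],
                         attention_mask=inputs["attention_mask"],
                         num_beams=4, max_length=512)
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)  # predicted impression
print(output_str[0])
```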

### Performance Metrics

For detailed evaluation results, please refer to our paper.
- **ROUGE-1**: 51.9
- **ROUGE-2**: 29.6
- **ROUGE-L**: 38.6
- **BLEU**: 22.6
- **BERTScore**: 0.735
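ROUGE-1, for instance, measures unigram overlap between the generated and reference impressions. A simplified, self-contained illustration of the idea (not the official scorer used in the paper, and without the stemming the standard implementation applies):

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1, a simplified ROUGE-1 (no stemming or tokenizer niceties)."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    # Clipped unigram matches: each reference token can be matched at most once.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    if not cand or not ref or overlap == 0:
        return 0.0
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("no evidence of recurrent disease",
                      "no evidence of disease recurrence"), 3))  # → 0.8
```

Word order is ignored, which is why "recurrent disease" and "disease recurrence" still score highly; ROUGE-2 and ROUGE-L penalize such reorderings.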

### Highlights

The models were trained on NVIDIA A100 GPUs.

---

## Additional Resources

- **Finetuned from model:** [Facebook's BART Large](https://huggingface.co/facebook/bart-large)
- **Codebase for training and inference:** [GitHub Repository](https://github.com/xtie97/PET-Report-Summarization)