# Automatic Personalized Impression Generation for PET Reports Using Large Language Models

**Authored by**: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw

## Abstract

Purpose: To determine if fine-tuned large language models (LLMs) can generate accurate, personalized impressions for whole-body PET reports.

Conclusion: Personalized impressions generated by PEGASUS were clinically useful.

[Read the full paper](https://arxiv.org/pdf/1910.10683.pdf)

## Usage

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
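The released weights should be loadable with Hugging Face `transformers` as a standard seq2seq checkpoint. A minimal sketch, with the caveat that the checkpoint id, the `summarize:` task prefix (a T5 convention), and the generation settings below are assumptions, not details confirmed by this README:

```python
def prepare_report(findings: str, max_words: int = 512) -> str:
    """Format a PET findings section as model input.

    The "summarize: " prefix follows the usual T5 convention; whether the
    fine-tuned checkpoint expects it is an assumption.
    """
    text = " ".join(findings.split())   # collapse newlines and extra spaces
    words = text.split()[:max_words]    # crude length cap before tokenization
    return "summarize: " + " ".join(words)


def generate_impression(findings: str,
                        checkpoint: str = "xtie97/PET-Report-Summarization") -> str:
    """Generate an impression from a findings section.

    The checkpoint id above is hypothetical; substitute the actual released
    weights. Imports are done lazily so prepare_report() stays usable
    without transformers installed.
    """
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    inputs = tokenizer(prepare_report(findings), return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```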

### Performance Metrics

For an exhaustive evaluation, please refer to our [paper](https://arxiv.org/pdf/1910.10683.pdf):

- **ROUGE-1**: 1
- **ROUGE-2**: 2
- **ROUGE-L**: 3
- **BLEU**: 4
- **CHRF**: 5
- **BERTScore**: 6
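ROUGE-1 measures unigram overlap between a generated impression and the reference impression. A minimal sketch of the F1 variant in plain Python — an illustration of the metric, not the exact scorer used in the paper:

```python
from collections import Counter


def rouge1_f1(candidate: str, reference: str) -> float:
    """ROUGE-1 F1: harmonic mean of unigram precision and recall."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    if not cand or not ref:
        return 0.0
    # Clipped overlap: each reference word counts at most as often as it appears.
    overlap = sum((Counter(cand) & Counter(ref)).values())
    precision = overlap / len(cand)
    recall = overlap / len(ref)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

In practice the paper's scores would come from a standard implementation (e.g. the `rouge-score` package), which also handles stemming and ROUGE-L's longest-common-subsequence variant.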

### Highlights

- The fine-tuned large language model provides clinically useful, personalized impressions based on PET findings.
- To our knowledge, this is the first attempt to automate impression generation for whole-body PET reports.

### Hardware

The models were trained on NVIDIA A100 GPUs.

---

## Additional Resources

- **Finetuned from model:** [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- **Repository:** [GitHub Repository](https://github.com/xtie97/PET-Report-Summarization)