Commit bf7cd13 (parent: a3cb451) by xtie · Update README.md

Files changed (1): README.md (+20 -23)
# Automatic Personalized Impression Generation for PET Reports Using Large Language Models 📄✍

**Authored by**: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw
## 📑 Abstract
Purpose: To determine if fine-tuned large language models (LLMs) can generate accurate, personalized impressions for whole-body PET reports.

Conclusion: Personalized impressions generated by PEGASUS were clinically useful...

[Read the full paper](https://arxiv.org/pdf/1910.10683.pdf)
## 🚀 Usage
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### 📊 Performance Metrics

For an exhaustive evaluation, please refer to our [paper](https://arxiv.org/pdf/1910.10683.pdf):
- **ROUGE-1**: 1
- **ROUGE-2**: 2
- **ROUGE-L**: 3
- **BLEU**: 4
- **CHRF**: 5
- **BERTScore**: 6
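For intuition about what these overlap metrics measure, here is a minimal pure-Python sketch of ROUGE-1 F1 (whitespace tokenization, no stemming). It is illustrative only; the numbers reported above come from standard evaluation packages, not this function.

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1: F1 over clipped unigram overlap between a
    generated impression and the reference impression."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # unigram matches, clipped per word
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example impressions (not from the dataset):
score = rouge1_f1(
    "no evidence of recurrent disease on pet",
    "pet shows no evidence of recurrent disease",
)
```

Production evaluations typically add stemming and proper tokenization (e.g. via the `rouge-score` package), so scores from this sketch will differ slightly.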
### 💡 Highlights

- The fine-tuned large language model provides clinically useful, personalized impressions based on PET findings.
- To our knowledge, this is the first attempt to automate impression generation for whole-body PET reports.
### 🖥️ Hardware

The models were trained on NVIDIA A100 GPUs.

---
 
## 📝 Additional Resources

- **Finetuned from model:** [google/t5-v1_1-large](https://huggingface.co/google/t5-v1_1-large)
- **Repository:** [GitHub Repository](https://github.com/xtie97/PET-Report-Summarization)