xtie committed
Commit b665c66 · 1 Parent(s): 01b084d

Update README.md

Files changed (1):
  1. README.md +11 -8
README.md CHANGED
@@ -11,6 +11,10 @@ pipeline_tag: summarization
 
 **Authored by**: Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw
 
+## 📑 Model Description
+
+This is the fine-tuned BART model for summarizing findings in PET reports.
+
 ## 📑 Abstract
 
 Purpose: To determine if fine-tuned large language models (LLMs) can generate accurate, personalized impressions for whole-body PET reports.
@@ -35,7 +39,7 @@ findings_info =
 Description: PET CT WHOLE BODY
 Radiologist: James
 Findings: Head/Neck: xxx Chest: xxx Abdomen/Pelvis: xxx Extremities/Musculoskeletal: xxx
-Indication: The patient is a [AGE]-year old [SEX] with a history of xxx
+Indication: The patient is a 60-year old male with a history of xxx
 """
 
 inputs = tokenizer(findings_info.replace('\n', ' '), padding="max_length", truncation=True, max_length=1024, return_tensors="pt")
@@ -59,12 +63,11 @@ output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True) # get the
 ### 📊 Performance Metrics
 
 For detailed evaluation results, please refer to our paper.
-- **ROUGE-1**: 1
-- **ROUGE-2**: 2
-- **ROUGE-L**: 3
-- **BLEU**: 4
-- **CHRF**: 5
-- **BERTScore**: 6
+- **ROUGE-1**: 51.9
+- **ROUGE-2**: 29.6
+- **ROUGE-L**: 38.6
+- **BLEU**: 22.6
+- **BERTScore**: 0.735
 
 ### 💡 Highlights
 
@@ -78,5 +81,5 @@ The models were trained on NVIDIA A100 GPUs.
 ---
 
 ## 📝 Additional Resources
-- **Finetuned from model:** [Facebook's BART Large) (https://huggingface.co/google/t5-v1_1-large)
+- **Finetuned from model:** [Facebook's BART Large](https://huggingface.co/facebook/bart-large)
 - **Codebase for training and inference:** [GitHub Repository](https://github.com/xtie97/PET-Report-Summarization)
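The diff only shows the tokenization step and the hunk context mentions `model.generate` / `tokenizer.batch_decode`; a minimal end-to-end sketch of that flow is below. The repo id is an assumption: the card's fine-tuned checkpoint id is not shown here, so the base `facebook/bart-large` model is used as a stand-in, and the generation settings are illustrative, not the authors'.

```python
# Sketch of the inference flow from the model card.
# ASSUMPTION: "facebook/bart-large" is a placeholder for the actual
# fine-tuned PET-report checkpoint, whose Hub repo id is not given here.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "facebook/bart-large"  # placeholder, not the fine-tuned weights
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Findings section formatted as in the card ("xxx" placeholders kept)
findings_info = """
Description: PET CT WHOLE BODY
Radiologist: James
Findings: Head/Neck: xxx Chest: xxx Abdomen/Pelvis: xxx Extremities/Musculoskeletal: xxx
Indication: The patient is a 60-year old male with a history of xxx
"""

# Flatten newlines and tokenize exactly as the card's snippet does
inputs = tokenizer(findings_info.replace('\n', ' '), padding="max_length",
                   truncation=True, max_length=1024, return_tensors="pt")

# Generate the impression, then decode token ids back to text
outputs = model.generate(**inputs, max_new_tokens=128)
output_str = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(output_str[0])
```

With the real fine-tuned checkpoint substituted for the placeholder id, `output_str[0]` would contain the generated impression for the report.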