xueh committed on
Commit c7fe161 · verified · 1 Parent(s): a2622c2

Update README.md

Files changed (1):
  1. README.md +60 -110
README.md CHANGED
@@ -8,87 +8,76 @@ license: mit
 
  A deep learning imaging AI model with imaging transformer, for MR denoising.
 
- ```latex
- @article{
- doi:10.1148/ryai.250227,
- author = {Xue, Hui and Hooper, Sarah M. and Pierce, Iain and Davies, Rhodri H. and Stairs, John and Naegele, Joseph and Campbell-Washburn, Adrienne E. and Manisty, Charlotte and Moon, James C. and Treibel, Thomas A. and Hansen, Michael S. and Kellman, Peter},
- title = {SNRAware: Improved Deep Learning MRI Denoising with Signal-to-noise Ratio Unit Training and G-factor Map Augmentation},
- journal = {Radiology: Artificial Intelligence},
- volume = {0},
- number = {ja},
- pages = {e250227},
- year = {0},
- doi = {10.1148/ryai.250227},
- note ={PMID: 41123451},
- URL = {https://doi.org/10.1148/ryai.250227}
- }
- ```
 
  ## Model Details
 
  ### Model Description
 
  <!-- Provide a longer summary of what this model is. -->
 
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
 
- ### Model Sources [optional]
 
  <!-- Provide the basic links for the model. -->
 
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
 
  ## Uses
 
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
  ### Direct Use
 
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
 
  ### Out-of-Scope Use
 
  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
- [More Information Needed]
 
  ## Bias, Risks, and Limitations
 
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
- [More Information Needed]
 
  ### Recommendations
 
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
 
  ## How to Get Started with the Model
 
- Use the code below to get started with the model.
-
- [More Information Needed]
 
  ## Training Details
 
@@ -96,121 +85,82 @@ Use the code below to get started with the model.
 
  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
- [More Information Needed]
 
  ### Training Procedure
 
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
 
  #### Training Hyperparameters
 
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
  #### Speeds, Sizes, Times [optional]
 
  <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
- [More Information Needed]
 
  ## Evaluation
 
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
 
  ### Results
 
- [More Information Needed]
 
  #### Summary
 
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
  ## Environmental Impact
 
  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
 
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
 
  ### Model Architecture and Objective
 
- [More Information Needed]
 
  ### Compute Infrastructure
 
- [More Information Needed]
 
  #### Hardware
 
- [More Information Needed]
 
  #### Software
 
- [More Information Needed]
 
- ## Citation [optional]
 
  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
  **BibTeX:**
 
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
 
  ## Model Card Contact
 
- [More Information Needed]
 
 
  A deep learning imaging AI model with imaging transformer, for MR denoising.
 
+ ![image](https://cdn-uploads.huggingface.co/production/uploads/690cf6928b7eacea549fd405/ohWF8O1xe0rOJWvhXmxLf.png)
 
  ## Model Details
 
+ SNRAware is an imaging transformer model trained to denoise complex MR image data.
+ Imaging transformers use attention modules to capture local, global, and inter-frame
+ signal and noise characteristics. Training followed the SNRAware method, which generates
+ MR-realistic noise on the fly to create low-SNR samples with unitary noise scaling.
+ The model receives low-SNR complex images and g-factor maps as input and produces high-SNR complex images as output.
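To make the idea concrete, here is a minimal, illustrative sketch of this kind of on-the-fly noise augmentation. It is not the repository's implementation; the function name, the fixed noise level, and the way the g-factor map shapes the added complex Gaussian noise are simplifying assumptions for illustration only.

```python
import torch

def make_low_snr_sample(clean_image: torch.Tensor,
                        gfactor_map: torch.Tensor,
                        noise_std: float) -> torch.Tensor:
    """Add complex Gaussian noise, spatially shaped by a g-factor map,
    to a clean complex image to synthesize a low-SNR training sample."""
    noise = noise_std * (torch.randn_like(clean_image.real)
                         + 1j * torch.randn_like(clean_image.real))
    return clean_image + gfactor_map * noise

# toy example: one 2D complex frame and a smooth, hypothetical g-factor map (values >= 1)
clean = torch.ones(128, 128, dtype=torch.complex64)
gmap = 1.0 + 0.5 * torch.rand(128, 128)
noisy = make_low_snr_sample(clean, gmap, noise_std=1.0)
print(noisy.shape, noisy.dtype)
```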
 
+ Please refer to the publication for technical details.
 
  ### Model Description
 
  <!-- Provide a longer summary of what this model is. -->
 
+ The published model was instantiated with a high-resolution network (HRNet) backbone and consists of multiple imaging attention modules.
+ Two models were published:
+ - SNRAware-small: a 27.7 million parameter model
+ - SNRAware-medium: a 55.1 million parameter model
+
+ The input to the model is a 5D tensor [B, C, T/F, H, W] for batch, channel, time/frame, height, and width. The output tensor has the shape
+ [B, C-1, T/F, H, W]; the last channel of the input is the g-factor map.
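A shape-only illustration of this interface follows (a sketch with random placeholder data; how the complex image data is laid out across the non-g-factor channels is an assumption here, not something stated on this card):

```python
import torch

B, T, H, W = 1, 12, 192, 144                      # hypothetical batch/frame/matrix sizes
image_channels = torch.randn(B, 2, T, H, W)       # low-SNR image data; channel layout is an assumption
gfactor_map = 1.0 + torch.rand(B, 1, T, H, W)     # g-factor map, appended as the last channel

model_input = torch.cat([image_channels, gfactor_map], dim=1)
print(model_input.shape)                          # torch.Size([1, 3, 12, 192, 144]) -> [B, C, T/F, H, W]

# The denoised output drops the g-factor channel:
expected_output_shape = (B, model_input.shape[1] - 1, T, H, W)   # [B, C-1, T/F, H, W]
```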
 
+ - **Developed by:** Microsoft Research, Health Futures
+ - **Model type:** Imaging Transformer
+ - **License:** MIT
 
+ ### Model Sources
 
  <!-- Provide the basic links for the model. -->
 
+ - **Repository:** https://github.com/microsoft/SNRAware
+ - **Paper:** https://pubs.rsna.org/doi/10.1148/ryai.250227
 
 
  ## Uses
 
  <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
 
+ The model takes reconstructed complex MR images and g-factor maps as input and produces denoised images.
+ It expects a unitary noise level in the input images, which can be achieved by reconstructing the images with
+ the Gadgetron framework.
 
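As a conceptual sketch of what unitary noise scaling means (this is not the Gadgetron pipeline; the noise-only measurement and the single global scale factor below are simplifications used only to illustrate the idea):

```python
import torch

def scale_to_snr_units(complex_image: torch.Tensor, noise_samples: torch.Tensor) -> torch.Tensor:
    """Scale a reconstructed complex image so its background noise has roughly
    unit standard deviation (unitary noise scaling / SNR units)."""
    # pool real and imaginary parts of a noise-only measurement to estimate the noise std
    noise_std = torch.cat([noise_samples.real.flatten(),
                           noise_samples.imag.flatten()]).std()
    return complex_image / noise_std

# toy example: synthetic data standing in for a reconstruction and a noise pre-scan
image = 10.0 * torch.randn(192, 144, dtype=torch.complex64)
noise_only = 2.5 * (torch.randn(4096) + 1j * torch.randn(4096))
image_snr_units = scale_to_snr_units(image, noise_only)
```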
 
  ### Direct Use
 
  <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
 
+ Denoising of complex MR images.
 
  ### Out-of-Scope Use
 
  <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
 
+ No out-of-scope use should be attempted. Input to the model must be SNR-unit reconstructed.
 
  ## Bias, Risks, and Limitations
 
  <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
+ No biases or risks are associated with this model. The only limitation is that the input data must have unitary noise scaling.
 
  ### Recommendations
 
  <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
 
+ None
 
  ## How to Get Started with the Model
 
+ Please refer to the documentation at https://github.com/microsoft/SNRAware to get started.
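For orientation only, the overall calling pattern might look like the sketch below. Every name in it (the stand-in module and the commented checkpoint file) is a hypothetical placeholder rather than the repository's actual API; follow the repository documentation for the real entry points.

```python
import torch

class StandInDenoiser(torch.nn.Module):
    """Stand-in with the same input/output convention as the published model
    (it performs no real denoising; it only drops the g-factor channel)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x[:, :-1]                      # [B, C, T/F, H, W] -> [B, C-1, T/F, H, W]

model = StandInDenoiser().eval()
# A real run would instead load a published checkpoint, e.g. (hypothetical file name):
# model.load_state_dict(torch.load("snraware_small.pt"))

low_snr = torch.randn(1, 2, 12, 192, 144)     # SNR-unit scaled image channels
gmap = 1.0 + torch.rand(1, 1, 12, 192, 144)   # g-factor map as the last channel

with torch.no_grad():
    denoised = model(torch.cat([low_snr, gmap], dim=1))
print(denoised.shape)                         # torch.Size([1, 2, 12, 192, 144])
```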
 
 
 
  ## Training Details
 
  ### Training Data
 
  <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
 
+ Training data is not shared.
 
  ### Training Procedure
 
  <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
 
+ Please refer to the publication for training details.
 
 
  #### Training Hyperparameters
 
+ - **Training regime:** fp32 <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
 
  #### Speeds, Sizes, Times [optional]
 
  <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
 
+ For typical MR image data, inference time on an H100 is ~3-7 s for SNRAware-small and ~5-11 s for SNRAware-medium.
 
  ## Evaluation
 
+ Please refer to the publication for evaluation details.
 
  ### Results
 
+ Please refer to the publication for results.
 
  #### Summary
 
  ## Environmental Impact
 
  <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
 
+ - **Hardware Type:** B200
+ - **Hours used:** 100
+ - **Cloud Provider:** Azure
+ - **Compute Region:** westus2
+ - **Carbon Emitted:** 7.5
 
 
+ ## Technical Specifications
 
  ### Model Architecture and Objective
 
+ Imaging transformer with a high-resolution network (HRNet) backbone.
 
 
  ### Compute Infrastructure
 
+ Azure GPU VMs
 
  #### Hardware
 
+ NVIDIA B200 x16
 
  #### Software
 
+ PyTorch 2.8.0+cu128
 
 
+ ## Citation
 
  <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
 
  **BibTeX:**
 
+ ```bibtex
+ @article{xue2025snraware,
+ author = {Xue, Hui and Hooper, Sarah M. and Pierce, Iain and Davies, Rhodri H. and Stairs, John and Naegele, Joseph and Campbell-Washburn, Adrienne E. and Manisty, Charlotte and Moon, James C. and Treibel, Thomas A. and Hansen, Michael S. and Kellman, Peter},
+ title = {SNRAware: Improved Deep Learning MRI Denoising with Signal-to-noise Ratio Unit Training and G-factor Map Augmentation},
+ journal = {Radiology: Artificial Intelligence},
+ pages = {e250227},
+ year = {2025},
+ doi = {10.1148/ryai.250227},
+ note = {PMID: 41123451},
+ url = {https://doi.org/10.1148/ryai.250227}
+ }
+ ```
 
 
  ## Model Card Contact
 
+ Hui Xue, xueh@microsoft.com