Title: DRMC: A Generalist Model with Dynamic Routing for Multi-Center PET Image Synthesis
URL Source: https://arxiv.org/html/2307.05249
Yang Zhou¹, Hui Zhang², Bingzheng Wei³, Yubo Fan¹, Yan Xu¹ (✉ corresponding author: xuyan04@gmail.com)

¹ School of Biological Science and Medical Engineering, State Key Laboratory of Software Development Environment, Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing 100191, China
² Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China
³ Xiaomi Corporation, Beijing 100085, China
###### Abstract
Multi-center positron emission tomography (PET) image synthesis aims at recovering low-dose PET images from multiple different centers. The generalizability of existing methods can still be suboptimal for a multi-center study due to domain shifts, which result from non-identical data distributions among centers with different imaging systems/protocols. While some approaches address domain shifts by training specialized models for each center, they are parameter-inefficient and do not fully exploit the shared knowledge across centers. To address this, we develop a generalist model that shares architecture and parameters across centers to utilize the shared knowledge. However, the generalist model can suffer from the center interference issue, i.e., the gradient directions of different centers can be inconsistent or even opposite owing to the non-identical data distributions. To mitigate such interference, we introduce a novel dynamic routing strategy with cross-layer connections that routes data from different centers to different experts. Experiments show that our generalist model with dynamic routing (DRMC) exhibits excellent generalizability across centers. Code and data are available at: [https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis](https://github.com/Yaziwel/Multi-Center-PET-Image-Synthesis).
###### Keywords:
Multi-Center Positron Emission Tomography Synthesis · Generalist Model · Dynamic Routing.
1 Introduction
--------------
Positron emission tomography (PET) image synthesis [[1](https://arxiv.org/html/2307.05249#bib.bib1), [2](https://arxiv.org/html/2307.05249#bib.bib2), [3](https://arxiv.org/html/2307.05249#bib.bib3), [4](https://arxiv.org/html/2307.05249#bib.bib4), [5](https://arxiv.org/html/2307.05249#bib.bib5), [6](https://arxiv.org/html/2307.05249#bib.bib6), [7](https://arxiv.org/html/2307.05249#bib.bib7), [8](https://arxiv.org/html/2307.05249#bib.bib8), [9](https://arxiv.org/html/2307.05249#bib.bib9), [10](https://arxiv.org/html/2307.05249#bib.bib10)] aims at recovering high-quality full-dose PET images from low-dose ones. Despite great success, most algorithms [[1](https://arxiv.org/html/2307.05249#bib.bib1), [2](https://arxiv.org/html/2307.05249#bib.bib2), [4](https://arxiv.org/html/2307.05249#bib.bib4), [5](https://arxiv.org/html/2307.05249#bib.bib5), [8](https://arxiv.org/html/2307.05249#bib.bib8), [9](https://arxiv.org/html/2307.05249#bib.bib9), [10](https://arxiv.org/html/2307.05249#bib.bib10)] are specialized for PET data from a single center with a fixed imaging system/protocol. This poses a significant problem for practical applications, which are usually not restricted to any single center. To fill this gap, in this paper we focus on multi-center PET image synthesis, aiming at processing data from multiple different centers.
However, the generalizability of existing models can still be suboptimal for a multi-center study due to domain shift, which results from non-identical data distributions among centers with different imaging systems/protocols (see Fig. [1](https://arxiv.org/html/2307.05249#S2.F1) (a)). Though some studies have shown that a specialized model (i.e., a convolutional neural network (CNN) [[3](https://arxiv.org/html/2307.05249#bib.bib3), [6](https://arxiv.org/html/2307.05249#bib.bib6)] or Transformer [[9](https://arxiv.org/html/2307.05249#bib.bib9)] trained on a single center) exhibits a certain robustness to different tracer types [[9](https://arxiv.org/html/2307.05249#bib.bib9)], different tracer doses [[3](https://arxiv.org/html/2307.05249#bib.bib3)], or even different centers [[6](https://arxiv.org/html/2307.05249#bib.bib6)], such generalizability of center-specific knowledge only holds for small domain shifts; performance drops severely when the model is exposed to new centers with large domain shifts [[11](https://arxiv.org/html/2307.05249#bib.bib11)]. There are also federated learning (FL) based medical image synthesis methods [[12](https://arxiv.org/html/2307.05249#bib.bib12), [11](https://arxiv.org/html/2307.05249#bib.bib11), [7](https://arxiv.org/html/2307.05249#bib.bib7)] that improve generalizability by collaboratively learning a shared global model across centers. In particular, federated transfer learning (FTL) [[7](https://arxiv.org/html/2307.05249#bib.bib7)] first successfully applied FL to PET image synthesis in a multiple-dose setting. Since the shared model produced by the basic FL method [[12](https://arxiv.org/html/2307.05249#bib.bib12)] ignores center specificity and thus cannot handle centers with large domain shifts, FTL addresses this by fine-tuning the shared model for each center/dose. However, FTL only considers different doses and does not truly address the multi-center problem. Furthermore, it still requires a specialized model for each center/dose, which ignores potentially transferable shared knowledge across centers and scales up the overall model size.
A recent trend, known as generalist models, is to require that a single unified model work for multiple tasks/domains, and even generalize to novel tasks/domains. By sharing architecture and parameters, generalist models can better utilize the transferable knowledge shared across tasks/domains. Some pioneering works [[13](https://arxiv.org/html/2307.05249#bib.bib13), [14](https://arxiv.org/html/2307.05249#bib.bib14), [15](https://arxiv.org/html/2307.05249#bib.bib15), [16](https://arxiv.org/html/2307.05249#bib.bib16), [17](https://arxiv.org/html/2307.05249#bib.bib17)] have achieved competitive performance on various high-level vision tasks such as classification [[13](https://arxiv.org/html/2307.05249#bib.bib13), [16](https://arxiv.org/html/2307.05249#bib.bib16)] and object detection [[14](https://arxiv.org/html/2307.05249#bib.bib14)].
Nonetheless, recent studies [[18](https://arxiv.org/html/2307.05249#bib.bib18), [16](https://arxiv.org/html/2307.05249#bib.bib16)] report that conventional generalist models [[15](https://arxiv.org/html/2307.05249#bib.bib15)] may suffer from the interference issue, i.e., different tasks with shared parameters potentially conflict with each other in the update directions of the gradient. Specific to PET image synthesis, due to the non-identical data distribution across centers, we also observe a center interference issue: the gradient directions of different centers may be inconsistent or even opposite (see Fig. [1](https://arxiv.org/html/2307.05249#S2.F1)). This leads to an uncertain update direction that deviates from the optimum, resulting in sub-optimal model performance. To address the interference issue, recent generalist models [[14](https://arxiv.org/html/2307.05249#bib.bib14), [16](https://arxiv.org/html/2307.05249#bib.bib16)] have introduced dynamic routing [[19](https://arxiv.org/html/2307.05249#bib.bib19)], which learns to activate experts (i.e., sub-networks) dynamically. The input feature is routed to different selected experts accordingly so as to avoid interference; meanwhile, different inputs can share some experts, maintaining collaboration across domains. At inference time, the model can reasonably generalize to different domains, even unknown ones, by utilizing the knowledge of existing experts. Despite this success, generalist models have rarely targeted the problem of multi-center PET image synthesis.
In this paper, inspired by the aforementioned studies, we propose a generalist model with **D**ynamic **R**outing for **M**ulti-**C**enter PET image synthesis, termed DRMC. To mitigate the center interference issue, we propose a novel dynamic routing strategy that routes data from different centers to different experts. Compared with existing routing strategies, our strategy makes an improvement by building cross-layer connections for more accurate expert decisions. Extensive experiments show that DRMC achieves the best generalizability on both known and unknown centers. Our contributions can be summarized as follows:
*   A generalist model called DRMC is proposed, which enables multi-center PET image synthesis with a single unified model.
*   A novel dynamic routing strategy with cross-layer connections is proposed to address the center interference issue; it dynamically routes data from different centers to different experts.
*   Extensive experiments show that DRMC exhibits excellent generalizability over multiple different centers.
2 Method
--------
### 2.1 Center Interference Issue
Due to the non-identical data distribution across centers, different centers with shared parameters may conflict with each other in the optimization process. To verify this hypothesis, we train a baseline Transformer with 15 base blocks (Fig. [2](https://arxiv.org/html/2307.05249#S2.F2) (b)) over four centers. Following [[16](https://arxiv.org/html/2307.05249#bib.bib16)], we calculate the gradient direction interference metric $\mathcal{I}_{i,j}$ of the $j$-th center $C_j$ on the $i$-th center $C_i$. As shown in Fig. [1](https://arxiv.org/html/2307.05249#S2.F1) (b), interference is observed between different centers at different layers. This leads to inconsistent optimization and inevitably degrades model performance. Details of $\mathcal{I}_{i,j}$ [[16](https://arxiv.org/html/2307.05249#bib.bib16)] are given in the supplement.
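The exact definition of $\mathcal{I}_{i,j}$ follows [[16](https://arxiv.org/html/2307.05249#bib.bib16)] and appears in the supplement. As a rough intuition only, the directional conflict between two centers' gradients for one layer's shared parameters can be probed with their cosine similarity; this simplified score is our stand-in, not the paper's metric:

```python
import numpy as np

def interference(grad_i: np.ndarray, grad_j: np.ndarray) -> float:
    """Directional agreement of center j's gradient with center i's for one
    layer's shared parameters. Negative values mean the two centers pull
    the weights in conflicting directions (a simplified stand-in for the
    metric of [16], whose exact form is in the paper's supplement)."""
    denom = np.linalg.norm(grad_i) * np.linalg.norm(grad_j) + 1e-12
    return float(np.dot(grad_i, grad_j) / denom)
```

A negative score corresponds to the "red" (conflicting) entries of Figure 1 (b); a positive score to the "green" ones.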
Figure 1: (a) Examples of PET images at different centers; there are domain shifts between centers. (b) The interference metric $\mathcal{I}_{i,j}$ [[16](https://arxiv.org/html/2307.05249#bib.bib16)] of center $C_j$ on center $C_i$, shown at the 1st/4th blocks as examples. A red value indicates that $C_j$ has a negative impact on $C_i$, and a green value indicates that $C_j$ has a positive impact on $C_i$.
### 2.2 Network Architecture
The overall architecture of our DRMC is shown in Fig. [2](https://arxiv.org/html/2307.05249#S2.F2) (a). DRMC first applies a 3×3×3 convolutional layer for shallow feature extraction. Next, the shallow feature is fed into $N$ blocks with dynamic routing (DRBs), which are expected to handle the interference between centers and adaptively extract deep features with high-frequency information. The deep feature then passes through another 3×3×3 convolutional layer for final image synthesis. To alleviate the burden of feature learning and stabilize training, DRMC adopts global residual learning, as suggested in [[20](https://arxiv.org/html/2307.05249#bib.bib20)], to estimate the image residual for different centers. In the following subsection, we elaborate on the dynamic routing strategy as well as the design of the DRB.
### 2.3 Dynamic Routing Strategy
We aim at alleviating the center interference issue in deep feature extraction. Inspired by prior generalist models [[13](https://arxiv.org/html/2307.05249#bib.bib13), [14](https://arxiv.org/html/2307.05249#bib.bib14), [16](https://arxiv.org/html/2307.05249#bib.bib16)], we propose a novel dynamic routing strategy for multi-center PET image synthesis. The proposed dynamic routing strategy can be flexibly adapted to various network architectures, such as CNNs and Transformers. To utilize recent advances in capturing global context with Transformers [[9](https://arxiv.org/html/2307.05249#bib.bib9)], and without loss of generality, we explore applying the dynamic routing strategy to a Transformer block, termed the dynamic routing block (DRB, see Fig. [2](https://arxiv.org/html/2307.05249#S2.F2) (c)). We introduce our dynamic routing strategy in detail in four parts: base expert foundation, expert number scaling, expert dynamic routing, and expert sparse fusion.
Figure 2: The framework of our proposed DRMC
Base Expert Foundation. As shown in Figure [2](https://arxiv.org/html/2307.05249#S2.F2) (b), we first introduce an efficient base Transformer block (base block) consisting of an attention expert and a feed-forward network (FFN) expert. Both experts perform basic feature extraction and transformation. To reduce the complexity burden of the attention expert, we follow [[9](https://arxiv.org/html/2307.05249#bib.bib9)] and perform global channel attention with linear complexity instead of spatial attention [[21](https://arxiv.org/html/2307.05249#bib.bib21)]. Notably, as global channel attention may ignore local spatial information, we introduce depth-wise convolutions to emphasize the local context after applying attention. As for the FFN expert, we make no modifications to it compared with the standard Transformer block [[21](https://arxiv.org/html/2307.05249#bib.bib21)]: it consists of a 2-layer MLP with GELU activation in between.
Expert Number Scaling. Center interference is observed on both attention experts and FFN experts at different layers (see Fig. [1](https://arxiv.org/html/2307.05249#S2.F1) (b)). This indicates that a single expert cannot simply be shared by all centers. Thus, we increase the number of experts in the base block to $M$ to serve as expert candidates for different centers. Specifically, each Transformer block has an attention expert bank $\mathbf{E}_{ATT}=[\mathbf{E}^{1}_{ATT},\mathbf{E}^{2}_{ATT},\dots,\mathbf{E}^{M}_{ATT}]$ and an FFN expert bank $\mathbf{E}_{FFN}=[\mathbf{E}^{1}_{FFN},\mathbf{E}^{2}_{FFN},\dots,\mathbf{E}^{M}_{FFN}]$, both of which contain $M$ base experts. However, this does not mean that we prepare specific experts for each center. Although center-specific experts would address the interference problem, they would make it hard for the model to exploit the shared knowledge across centers and to generalize to new centers that did not emerge in the training stage [[16](https://arxiv.org/html/2307.05249#bib.bib16)]. To address this, we turn to different combinations of experts.
Expert Dynamic Routing. Given a bank of experts, we route data from different centers to different experts so as to avoid interference. Prior generalist models [[13](https://arxiv.org/html/2307.05249#bib.bib13), [14](https://arxiv.org/html/2307.05249#bib.bib14), [16](https://arxiv.org/html/2307.05249#bib.bib16)] in high-level vision tasks have introduced various routing strategies to weigh and select experts. Most of them are conditioned independently on the current layer's feature, failing to take into account the connectivity of neighboring layers. However, PET image synthesis is a dense prediction task that requires a tight connection between adjacent layers for accurate voxel-wise intensity regression. To mitigate the potential discontinuity [[13](https://arxiv.org/html/2307.05249#bib.bib13)], we propose a dynamic routing module (DRM, see Fig. [2](https://arxiv.org/html/2307.05249#S2.F2) (c)) that builds cross-layer connections for expert decisions. The mechanism can be formulated as:
$$W=\mathbf{ReLU}(\mathbf{MLP}([\mathbf{GAP}(X),H])), \qquad (1)$$
where $X$ denotes the input; $\mathbf{GAP}(\cdot)$ is the global average pooling operation that aggregates the global context information of the current layer; and $H$ is the hidden representation of the previous MLP layer. The ReLU activation generates sparsity by setting negative weights to zero, making it a more suitable gating function than the commonly used softmax activation [[14](https://arxiv.org/html/2307.05249#bib.bib14)] and top-$k$ gating [[13](https://arxiv.org/html/2307.05249#bib.bib13), [16](https://arxiv.org/html/2307.05249#bib.bib16)] in our study (see Table [4](https://arxiv.org/html/2307.05249#S3.T4)). $W$ is a sparse weight vector used to assign weights to different experts.
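A minimal sketch of this routing rule, assuming a one-hidden-layer MLP (the paper does not pin down the MLP's depth) and hypothetical parameters `w1, b1, w2, b2`:

```python
import numpy as np

def gap(x: np.ndarray) -> np.ndarray:
    """Global average pooling over the spatial dims of a (C, D, H, W) feature."""
    return x.mean(axis=(1, 2, 3))

def drm(x, h_prev, w1, b1, w2, b2):
    """Dynamic routing module (sketch): conditions on the current feature's
    global context concatenated with the previous layer's hidden
    representation (the cross-layer connection), then gates the M experts
    with ReLU so negative weights become exactly zero (sparsity)."""
    z = np.concatenate([gap(x), h_prev])    # [GAP(X), H]
    h = w1 @ z + b1                         # hidden rep, reused by the next DRM
    weights = np.maximum(w2 @ h + b2, 0.0)  # W = ReLU(MLP([GAP(X), H]))
    return weights, h
```

Passing `h` on to the next block's DRM is what distinguishes this from routing conditioned only on the current layer's feature.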
In short, the DRM sparsely activates the model and selectively routes the input to different subsets of experts. This maximizes collaboration while mitigating the interference problem. On the one hand, interference across centers is alleviated by sparsely routing $X$ to different experts (those with positive weights); the combinations of selected experts can be entirely different across centers if strong conflicts appear. On the other hand, experts in the same bank still cooperate with each other, allowing the network to best utilize the shared knowledge across centers.
Expert Sparse Fusion. The final output is a weighted sum of each expert's knowledge using the sparse weight $W=[W^{1},W^{2},\dots,W^{M}]$ generated by the DRM. Given an input feature $X$, the output $\hat{X}$ of an expert bank is obtained as:
$$\hat{X}=\sum_{m=1}^{M}W^{m}\cdot\mathbf{E}^{m}(X), \qquad (2)$$
where $\mathbf{E}^{m}(\cdot)$ represents an operator of $\mathbf{E}^{m}_{ATT}(\cdot)$ or $\mathbf{E}^{m}_{FFN}(\cdot)$.
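A minimal sketch of this fusion step; the expert functions below are arbitrary stand-ins for $\mathbf{E}^{m}_{ATT}$ or $\mathbf{E}^{m}_{FFN}$:

```python
import numpy as np

def fuse_experts(x, experts, weights):
    """Expert sparse fusion: sum of W^m * E^m(X) over the bank. Experts
    whose routing weight is zero are never evaluated, which is what keeps
    conflicting centers apart while shared experts still cooperate."""
    out = np.zeros_like(x)
    for w_m, expert in zip(weights, experts):
        if w_m > 0:  # ReLU gating already zeroed the unused experts
            out = out + w_m * expert(x)
    return out
```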
Table 1: Multi-Center PET Dataset Information
| Group | Center | Institution | Type | Lesion | System | Tracer | Dose | DRF | Spacing (mm³) | Shape | Train | Test |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $C_{kn}$ | $C_1$ | $I_1$ | Whole Body | Yes | PolarStar m660 | ¹⁸F-FDG | 293 MBq | 12 | 3.15×3.15×1.87 | 192×192×slices | 20 | 10 |
| $C_{kn}$ | $C_2$ | $I_2$ | Whole Body | Yes | PolarStar Flight | ¹⁸F-FDG | 293 MBq | 4 | 3.12×3.12×1.75 | 192×192×slices | 20 | 10 |
| $C_{kn}$ | $C_3$ [[22](https://arxiv.org/html/2307.05249#bib.bib22)] | $I_3$ | Whole Body | Yes | United Imaging uEXPLORER | ¹⁸F-FDG | 296 MBq | 10 | 1.67×1.67×2.89 | 256×256×slices | 20 | 10 |
| $C_{kn}$ | $C_4$ [[22](https://arxiv.org/html/2307.05249#bib.bib22)] | $I_4$ | Whole Body | Yes | Siemens Biograph Vision Quadra | ¹⁸F-FDG | 296 MBq | 10 | 1.65×1.65×1.65 | 256×256×slices | 20 | 10 |
| $C_{ukn}$ | $C_5$ | $I_5$ | Brain | No | PolarStar m660 | ¹⁸F-FDG | 293 MBq | 4 | 1.18×1.18×1.87 | 256×256×slices | – | 10 |
| $C_{ukn}$ | $C_6$ | $I_6$ | Whole Body | Yes | PolarStar m660 | ¹⁸F-FDG | 293 MBq | 12 | 3.15×3.15×1.87 | 192×192×slices | – | 10 |
### 2.4 Loss Function
We utilize the Charbonnier loss [[23](https://arxiv.org/html/2307.05249#bib.bib23)] with hyper-parameter $\epsilon=10^{-3}$ to penalize pixel-wise differences between the full-dose ($Y$) and estimated ($\hat{Y}$) PET images:
$$\mathcal{L}=\sqrt{\left\|Y-\hat{Y}\right\|^{2}+\epsilon^{2}}. \qquad (3)$$
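Transcribed directly from Eq. (3), with $\epsilon=10^{-3}$ as stated, the loss might be implemented as:

```python
import numpy as np

def charbonnier_loss(y: np.ndarray, y_hat: np.ndarray, eps: float = 1e-3) -> float:
    """Charbonnier loss of Eq. (3): a smooth variant of the L1/L2 penalty
    between the full-dose volume y and the estimate y_hat; eps keeps the
    square root differentiable at zero error."""
    return float(np.sqrt(np.sum((y - y_hat) ** 2) + eps ** 2))
```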
3 Experiments and Results
-------------------------
### 3.1 Dataset and Evaluation
Full-dose PET images are collected from 6 different centers ($C_1$–$C_6$) at 6 different institutions¹. The data of $C_3$ and $C_4$ [[22](https://arxiv.org/html/2307.05249#bib.bib22)] are borrowed from the Ultra-low Dose PET Imaging Challenge², while the data from the other centers were privately collected. The key information of the whole dataset is shown in Table [1](https://arxiv.org/html/2307.05249#S2.T1). Note that $C_1$–$C_4$ are used for both training and testing; we denote them as $C_{kn}$ as these centers are known to the generalist model. $C_5$ and $C_6$ are unknown centers (denoted $C_{ukn}$) that are only used for testing the model's generalizability. The low-dose PET data are generated by randomly selecting a certain portion of the raw scans according to the dose reduction factor (DRF); e.g., the portion is 25% when DRF = 4. We then reconstruct low-dose PET images using the standard OSEM method [[24](https://arxiv.org/html/2307.05249#bib.bib24)]. Since the voxel size differs across centers, we uniformly resample the images of different centers so that their voxel size becomes 2×2×2 mm³. In the training phase, we unfold images into small patches (uniformly sampling 1024 patches from 20 patients per center) with a shape of 64×64×64. In the testing phase, the whole estimated PET image is acquired by merging patches together.

¹ $I_1$ and $I_5$ are Peking Union Medical College Hospital; $I_2$ is Beijing Hospital; $I_3$ is the Department of Nuclear Medicine, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine; $I_4$ is the Department of Nuclear Medicine, University of Bern; $I_6$ is Beijing Friendship Hospital.

² Challenge site: [https://ultra-low-dose-pet.grand-challenge.org/](https://ultra-low-dose-pet.grand-challenge.org/). The investigators of the challenge contributed to the design and implementation of DATA, but did not participate in analysis or writing of this paper. A complete listing of investigators can be found at: [https://ultra-low-dose-pet.grand-challenge.org/Description/](https://ultra-low-dose-pet.grand-challenge.org/Description/).
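The patch pipeline can be sketched as below; non-overlapping 64³ tiling is an assumption made for simplicity (the paper uniformly samples 1024 patches per center for training, and test-time merging must cover the whole volume):

```python
import numpy as np

def unfold(volume: np.ndarray, p: int = 64):
    """Cut a volume into non-overlapping p^3 patches (training side)."""
    d, h, w = volume.shape
    return [volume[z:z + p, y:y + p, x:x + p]
            for z in range(0, d - p + 1, p)
            for y in range(0, h - p + 1, p)
            for x in range(0, w - p + 1, p)]

def merge(patches, shape, p: int = 64) -> np.ndarray:
    """Stitch non-overlapping patches back into a full volume (testing side)."""
    out = np.zeros(shape, dtype=patches[0].dtype)
    i = 0
    for z in range(0, shape[0] - p + 1, p):
        for y in range(0, shape[1] - p + 1, p):
            for x in range(0, shape[2] - p + 1, p):
                out[z:z + p, y:y + p, x:x + p] = patches[i]
                i += 1
    return out
```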
To evaluate model performance, we choose the PSNR metric for quantitative image evaluation. For clinical evaluation, to address the accuracy of the standardized uptake value (SUV) that most radiologists care about, we follow [[3](https://arxiv.org/html/2307.05249#bib.bib3)] and calculate the bias of $SUV_{mean}$ and $SUV_{max}$ (denoted $B_{mean}$ and $B_{max}$, respectively) between low-dose and full-dose images in lesion regions.
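
A minimal sketch of these metrics (the relative-bias form of the SUV errors and the lesion-mask handling are assumptions for illustration; the exact definition used in the paper follows [3]):

```python
import numpy as np

def psnr(pred, target, data_range=None):
    """Peak signal-to-noise ratio in dB."""
    if data_range is None:
        data_range = target.max() - target.min()
    mse = np.mean((pred - target) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def suv_bias(pred, target, lesion_mask):
    """Relative bias of SUV_mean and SUV_max inside a lesion mask
    (illustrative form; see [3] for the paper's definition)."""
    p, t = pred[lesion_mask], target[lesion_mask]
    b_mean = (p.mean() - t.mean()) / t.mean()
    b_max = (p.max() - t.max()) / t.max()
    return b_mean, b_max

# Toy example: a uniform +0.01 error over a unit-range image gives 40 dB PSNR.
target = np.linspace(0.0, 1.0, 64).reshape(8, 8)
pred = target + 0.01
score = psnr(pred, target)

# A hypothetical lesion region where uptake is overestimated by 10%.
lesion = target > 0.5
b_mean, b_max = suv_bias(1.1 * target, target, lesion)
```
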
Table 2: Results on $C_{kn}$. The best and second-best results are highlighted.
\*: Significant difference at $p<0.05$ between the comparison method and our method.
Table 3: Results on $C_{ukn}$ (best in bold, second best in italics; "–": not applicable, since $C_5$ contains no lesions).

| Methods | PSNR↑ $C_5$ | PSNR↑ $C_6$ | $B_{mean}$↓ $C_5$ | $B_{mean}$↓ $C_6$ | $B_{max}$↓ $C_5$ | $B_{max}$↓ $C_6$ |
|---|---|---|---|---|---|---|
| (i) 3D-cGAN | 26.53\* | 46.07\* | – | 0.1956\* | – | 0.1642\* |
| 3D CVT-GAN | 27.11\* | 46.03\* | – | _0.1828_ | – | 0.1686\* |
| (ii) FedAVG | 27.09\* | 46.48\* | – | 0.1943\* | – | 0.2291\* |
| FL-MRCM | 25.38\* | 47.08\* | – | 0.1998\* | – | 0.1762\* |
| FTL | _27.38_\* | _48.05_\* | – | 0.1898\* | – | _0.1556_\* |
| DRMC | **28.54** | **48.26** | – | **0.1814** | – | **0.1483** |
Table 4: Routing Ablation Results.
### 3.2 Implementation
Unless specified otherwise, the intermediate channel number, the number of experts in a bank, and the number of Transformer blocks are 64, 3, and 5, respectively. We employ the Adam optimizer with a learning rate of $10^{-4}$. We implement our method in PyTorch on a workstation with 4 NVIDIA A100 GPUs (40 GB memory each; 1 GPU per center). In each training iteration, each GPU independently samples data from a single center. After the loss calculation and gradient back-propagation, the gradients on the different GPUs are synchronized. We train the model for 200 epochs in total, as no significant improvement was observed afterward.
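
The one-center-per-GPU scheme can be illustrated with a toy simulation in which each "GPU" holds one center's data and the per-center gradients are averaged after every backward pass, mirroring the all-reduce that PyTorch's `DistributedDataParallel` performs (the least-squares model and all names here are illustrative assumptions, not the authors' training code):

```python
import numpy as np

rng = np.random.default_rng(0)

# One toy dataset per "GPU"/center: y = X @ theta_true with theta_true = 1.
centers = [rng.normal(size=(16, 8)) for _ in range(4)]
targets = [X @ np.ones(8) for X in centers]

def center_gradient(theta, X, y):
    """Gradient of the per-center mean-squared loss 0.5 * ||X @ theta - y||^2 / n."""
    return X.T @ (X @ theta - y) / len(y)

theta = np.zeros(8)   # shared parameters, replicated on every GPU
lr = 0.3
for _ in range(300):
    # Each GPU computes a gradient on data sampled from its own center...
    grads = [center_gradient(theta, X, y) for X, y in zip(centers, targets)]
    # ...then the gradients are synchronized (averaged) before the update.
    theta -= lr * np.mean(grads, axis=0)
```

Averaging the per-center gradients is equivalent to descending the mean of the center losses, so the shared parameters fit all centers simultaneously.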
### 3.3 Comparative Experiments
We compare our method with five methods of two types. (i) 3D-cGAN [[1](https://arxiv.org/html/2307.05249#bib.bib1)] and 3D CVT-GAN [[10](https://arxiv.org/html/2307.05249#bib.bib10)] are two state-of-the-art methods for single-center PET image synthesis. (ii) FedAVG [[12](https://arxiv.org/html/2307.05249#bib.bib12), [11](https://arxiv.org/html/2307.05249#bib.bib11)], FL-MRCM [[11](https://arxiv.org/html/2307.05249#bib.bib11)], and FTL [[7](https://arxiv.org/html/2307.05249#bib.bib7)] are three federated learning methods for privacy-preserving multi-center medical image synthesis. All methods are trained using data from $C_{kn}$ and tested on both $C_{kn}$ and $C_{ukn}$. For the methods in (i), we regard $C_{kn}$ as a single center and mix all the data together for training. For the federated learning methods in (ii), we follow the "Mix" mode (the upper bound of the FL-based methods) in [[11](https://arxiv.org/html/2307.05249#bib.bib11)] to remove the privacy constraint and keep the problem setting consistent with our multi-center study.
**Comparison Results for Known Centers.** As can be seen in Table [2](https://arxiv.org/html/2307.05249#S3.T2), compared with the second-best results, DRMC boosts performance by 0.77 dB in PSNR, 0.0078 in $B_{mean}$, and 0.0135 in $B_{max}$. This is because DRMC not only leverages shared knowledge by sharing some experts but also preserves center-specific information with the help of the sparse routing strategy. Further evaluation can be found in the supplement.
Table 5: Comparison Results for Specialized Models and Generalist Models.
**Comparison Results for Unknown Centers.** We also test the generalization ability of the models on the unknown centers $C_5$ and $C_6$. $C_5$ consists of normal brain data (without lesions), which is challenging for generalization, as the brain region occupies only a small portion of the whole-body data in the training dataset yet has more sophisticated structural information. $C_6$ is similar to $C_1$ but has different working locations and imaging preferences. The quantitative results are shown in Table [4](https://arxiv.org/html/2307.05249#S3.T4) and the visual results in Fig. [1](https://arxiv.org/html/2307.05249#S2.F1) (a). DRMC achieves the best results by dynamically utilizing the existing experts' knowledge for generalization. In contrast, most comparison methods process data in a static pattern and unavoidably mishandle out-of-distribution data.
Furthermore, we evaluate the performance of the different models on data with various DRFs from $C_6$; the results are available in the supplement. These results indicate that our method demonstrates strong robustness.
Figure 3: Figures of different experiments. (a) Visual comparison on the unknown center $C_5$; (b) Top-1 Expert; (c) PSNR/N; (d) PSNR/M.
### 3.4 Ablation Study
**Specialized Model vs. Generalist Model.** As can be seen in Table [5](https://arxiv.org/html/2307.05249#S3.T5), the baseline model (using 15 base blocks) individually trained for each center achieves good performance on its source center but suffers a performance drop on the other centers. The baseline model trained over multiple centers greatly enhances the overall results; however, due to the center interference issue, its performance on any specific center still falls short of the corresponding specialized model. DRMC mitigates the interference with dynamic routing and achieves performance comparable to the specialized model of each center.
**Ablation Study of the Routing Strategy.** To investigate the roles of the major components of our routing strategy, we conduct ablation studies by (i) removing the condition on the hidden representation $H$ that builds the cross-layer connection, and replacing the ReLU activation with (ii) softmax activation [[14](https://arxiv.org/html/2307.05249#bib.bib14)] and (iii) top-2 gating [[13](https://arxiv.org/html/2307.05249#bib.bib13)]. The results are shown in Table [4](https://arxiv.org/html/2307.05249#S3.T4). We also analyze the interpretability of the routing by showing the distribution of each layer's top-1 weighted expert on the testing data. As shown in Fig. [3](https://arxiv.org/html/2307.05249#S3.F3) (b), different centers show similarities and differences in their expert distributions. For example, $C_6$ shows the same distribution as $C_1$, since their data share many similarities, whereas $C_5$ presents a very distinct pattern, since brain data differs greatly from whole-body data.
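
The ReLU-gated routing over a bank of experts can be sketched as follows (the linear experts, shapes, and router parameterization are assumptions for illustration; the cross-layer condition on $H$ enters as the second router input):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class DynamicRoutingBank:
    """One bank of experts with a ReLU-gated router (illustrative sketch).

    The router is conditioned on the current feature x AND a hidden
    representation h carried across layers. ReLU gating, unlike softmax or
    top-k, yields sparse, non-negative routing weights, so unused experts
    can be skipped entirely.
    """

    def __init__(self, dim, num_experts=3, seed=0):
        rng = np.random.default_rng(seed)
        self.experts = [rng.normal(scale=0.1, size=(dim, dim))
                        for _ in range(num_experts)]
        self.router = rng.normal(scale=0.1, size=(2 * dim, num_experts))

    def __call__(self, x, h):
        logits = np.concatenate([x, h]) @ self.router
        w = relu(logits)                      # sparse, non-negative weights
        if w.sum() > 0:
            w = w / w.sum()                   # normalize surviving experts
        out = np.zeros_like(x)
        for wi, expert in zip(w, self.experts):
            if wi > 0:                        # pruned experts are never run
                out = out + wi * (x @ expert)
        return out, w

bank = DynamicRoutingBank(dim=16)
x, h = np.ones(16), np.zeros(16)
y, w = bank(x, h)
```

Replacing `relu` with a softmax would make every weight strictly positive (variant ii), while top-2 gating would always execute exactly two experts (variant iii).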
**Ablation Study of Hyperparameters.** In Fig. [3](https://arxiv.org/html/2307.05249#S3.F3) (c) and (d), we show ablation results on the expert number ($M$) and the block number ($N$). We set $M=3$ and $N=5$, as this configuration demonstrates good performance while maintaining acceptable computational complexity.
4 Conclusion
------------
In this paper, we propose a generalist model with dynamic routing (DRMC) for multi-center PET image synthesis. To address the center interference issue, DRMC sparsely routes data from different centers to different experts. Experiments show that DRMC achieves excellent generalizability.
References
----------
* [1] Wang, Y., Yu, B., Wang, L., Zu, C., Lin, W., Wu, X., Zhou, J., Zhou, L.: 3D conditional generative adversarial networks for high-quality PET image estimation at low dose. NeuroImage 174 (03 2018)
* [2] Xiang, L., Qiao, Y., Nie, D., An, L., Lin, W., Wang, Q., Shen, D.: Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI. Neurocomputing 267, 406–416 (2017)
* [3] Zhou, L., Schaefferkoetter, J., Tham, I., Huang, G., Yan, J.: Supervised learning with CycleGAN for low-dose FDG PET image denoising. Medical Image Analysis 65, 101770 (07 2020)
* [4] Zhou, Y., Yang, Z., Zhang, H., Chang, E.I.C., Fan, Y., Xu, Y.: 3D segmentation guided style-based generative adversarial networks for PET synthesis. IEEE Transactions on Medical Imaging 41(8), 2092–2104 (2022)
* [5] Luo, Y., Zhou, L., Zhan, B., Fei, Y., Zhou, J., Wang, Y.: Adaptive rectification based adversarial network with spectrum constraint for high-quality PET image synthesis. Medical Image Analysis 77, 102335 (12 2021)
* [6] Chaudhari, A., Mittra, E., Davidzon, G., Gulaka, P., Gandhi, H., Brown, A., Zhang, T., Srinivas, S., Gong, E., Zaharchuk, G., Jadvar, H.: Low-count whole-body PET with deep learning in a multicenter and externally validated study. npj Digital Medicine 4, 127 (08 2021)
* [7] Zhou, B., Miao, T., Mirian, N., Chen, X., Xie, H., Feng, Z., Guo, X., Li, X., Zhou, S.K., Duncan, J.S., Liu, C.: Federated transfer learning for low-dose PET denoising: a pilot study with simulated heterogeneous data. IEEE Transactions on Radiation and Plasma Medical Sciences, pp. 1–1 (2022)
* [8] Luo, Y., Wang, Y., Zu, C., Zhan, B., Wu, X., Zhou, J., Shen, D., Zhou, L.: 3D Transformer-GAN for high-quality PET reconstruction. In: de Bruijne, M., Cattin, P.C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. (eds.) Medical Image Computing and Computer Assisted Intervention – MICCAI 2021, pp. 276–285. Springer International Publishing, Cham (2021)
* [9] Jang, S.I., Pan, T., Li, Y., Heidari, P., Chen, J., Li, Q., Gong, K.: Spach Transformer: spatial and channel-wise transformer based on local and global self-attentions for PET image denoising (09 2022)
* [10] Zeng, P., Zhou, L., Zu, C., Zeng, X., Jiao, Z., Wu, X., Zhou, J., Wang, Y.: 3D CVT-GAN: a 3D convolutional vision transformer-GAN for PET reconstruction, pp. 516–526 (09 2022)
* [11] Guo, P., Wang, P., Zhou, J., Jiang, S., Patel, V.M.: Multi-institutional collaborations for improving deep learning-based magnetic resonance image reconstruction using federated learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2423–2432 (June 2021)
* [12] McMahan, H.B., Moore, E., Ramage, D., Hampson, S., et al.: Communication-efficient learning of deep networks from decentralized data. arXiv preprint arXiv:1602.05629 (2016)
* [13] Shazeer, N., Mirhoseini, A., Maziarz, K., Davis, A., Le, Q., Hinton, G., Dean, J.: Outrageously large neural networks: the sparsely-gated mixture-of-experts layer (01 2017)
* [14] Wang, X., Cai, Z., Gao, D., Vasconcelos, N.: Towards universal object detection by domain attention. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7289–7298 (2019)
* [15] Zhu, X., Zhu, J., Li, H., Wu, X., Wang, X., Li, H., Wang, X., Dai, J.: Uni-Perceiver: pre-training unified architecture for generic perception for zero-shot and few-shot tasks. arXiv preprint arXiv:2112.01522 (2021)
* [16] Zhu, J., Zhu, X., Wang, W., Wang, X., Li, H., Wang, X., Dai, J.: Uni-Perceiver-MoE: learning sparse generalist models with conditional MoEs. In: Oh, A.H., Agarwal, A., Belgrave, D., Cho, K. (eds.) Advances in Neural Information Processing Systems (2022)
* [17] Wang, P., Yang, A., Men, R., Lin, J., Bai, S., Li, Z., Ma, J., Zhou, C., Zhou, J., Yang, H.: OFA: unifying architectures, tasks, and modalities through a simple sequence-to-sequence learning framework. CoRR abs/2202.03052 (2022)
* [18] Yu, T., Kumar, S., Gupta, A., Levine, S., Hausman, K., Finn, C.: Gradient surgery for multi-task learning. arXiv preprint arXiv:2001.06782 (2020)
* [19] Han, Y., Huang, G., Song, S., Yang, L., Wang, H., Wang, Y.: Dynamic neural networks: a survey (02 2021)
* [20] Zhang, K., Zuo, W., Chen, Y., Meng, D., Zhang, L.: Beyond a Gaussian denoiser: residual learning of deep CNN for image denoising. IEEE Transactions on Image Processing 26(7), 3142–3155 (2017)
* [21] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. In: Advances in Neural Information Processing Systems, pp. 5998–6008 (2017)
* [22] Xue, S., Guo, R., Bohn, K.P., Matzke, J., Viscione, M., Alberts, I., Meng, H., Sun, C., Zhang, M., Zhang, M., Sznitman, R., El Fakhri, G., Rominger, A., Li, B., Shi, K.: A cross-scanner and cross-tracer deep learning method for the recovery of standard-dose imaging quality from low-dose PET. European Journal of Nuclear Medicine and Molecular Imaging 49, 1619–7089 (05 2022)
* [23] Charbonnier, P., Blanc-Feraud, L., Aubert, G., Barlaud, M.: Two deterministic half-quadratic regularization algorithms for computed imaging. In: Proceedings of 1st International Conference on Image Processing, vol. 2, pp. 168–172 (1994)
* [24] Hudson, H., Larkin, R.: Accelerated image reconstruction using ordered subsets of projection data. IEEE Transactions on Medical Imaging 13(4), 601–609 (1994)
Supplement
----------
**Center Interference.** To quantify the interference of the $j$-th center task on the $i$-th center task, we estimate the change in the loss $\mathcal{L}_i$ of the $i$-th center task when the shared parameters $\theta$ are optimized according to the $j$-th center task's loss $\mathcal{L}_j$:
$$\Delta_j\mathcal{L}_i\left(X_i\right)\doteq\mathbb{E}_{X_j}\left(\mathcal{L}_i\left(X_i;\theta\right)-\mathcal{L}_i\left(X_i;\theta-\lambda\frac{\nabla_{\theta}\mathcal{L}_j\left(X_j\right)}{\left\|\nabla_{\theta}\mathcal{L}_j\left(X_j\right)\right\|}\right)\right)\approx\lambda\,\mathbb{E}_{X_j}\left(\frac{\nabla_{\theta}\mathcal{L}_j\left(X_j\right)^{T}}{\left\|\nabla_{\theta}\mathcal{L}_j\left(X_j\right)\right\|}\nabla_{\theta}\mathcal{L}_i\left(X_i\right)\right),\qquad(4)$$
where $X_i$ and $X_j$ are sampled training batches of the $i$-th and $j$-th centers, respectively. In our implementation, we sample 100 batches from each center for the interference calculation. The interference of the $j$-th center task on the $i$-th center task can then be quantified as
$$\mathcal{I}_{i,j}=\mathbb{E}_{X_i}\left(\frac{\Delta_j\mathcal{L}_i\left(X_i\right)}{\Delta_i\mathcal{L}_i\left(X_i\right)}\right),\qquad(5)$$
where the denominator is utilized to normalize the scale of the loss change.
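
These definitions can be checked numerically on toy quadratic center losses (the losses, $\lambda$, the function names, and the single-batch simplification of the expectations are assumptions for illustration):

```python
import numpy as np

def delta_loss(theta, loss_i, grad_j, lam=0.1):
    """Single-batch version of Eq. (4): the drop in center i's loss after
    one normalized gradient step taken on center j's loss."""
    step = lam * grad_j / np.linalg.norm(grad_j)
    return loss_i(theta) - loss_i(theta - step)

# Two toy quadratic "center" losses sharing parameters theta.
A = np.diag([1.0, 2.0])
B = np.diag([2.0, 1.0])
loss_1 = lambda th: 0.5 * th @ A @ th
loss_2 = lambda th: 0.5 * th @ B @ th

theta = np.array([1.0, 1.0])
d11 = delta_loss(theta, loss_1, A @ theta)   # effect of center 1 on itself
d12 = delta_loss(theta, loss_1, B @ theta)   # effect of center 2 on center 1

# Eq. (5) with a single sample; the denominator normalizes the loss-change
# scale. Positive values indicate positive transfer, negative interference.
I_12 = d12 / d11
```

In this toy case the two gradients are partially aligned, so the step on center 2 still decreases center 1's loss, but by less than center 1's own step does, giving $0<\mathcal{I}_{1,2}<1$.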
**SSIM Evaluation.** To further assess the performance of our method, we compare the SSIM metric between our method and the comparison methods. The results, presented in Table [6](https://arxiv.org/html/2307.05249#Sx1.T6), show that DRMC achieves the highest SSIM.
Table 6: SSIM Results on $C_{kn}$. The best and second-best results are highlighted.
\*: Significant difference at $p<0.05$ between the comparison method and our method.
**Evaluation on Different DRF Data.** To verify the robustness of the model on data with different dose levels, we conduct tests on the unknown center $C_6$. Table [7](https://arxiv.org/html/2307.05249#Sx1.T7) presents the comparison results, demonstrating that DRMC exhibits superior generalizability across different DRF data.
Table 7: Testing Results on Different DRF Data from $C_6$. The best and second-best results are highlighted. \*: Significant difference at $p<0.05$ between the comparison method and our method.