| \documentclass[11pt]{article} |
|
|
| \usepackage[a4paper,margin=1in]{geometry} |
| \usepackage[T1]{fontenc} |
| \usepackage{lmodern} |
| \usepackage{microtype} |
|
|
| \usepackage{amsmath,amssymb,amsfonts,bm} |
| \usepackage{graphicx} |
| \usepackage{booktabs} |
| \usepackage{multirow} |
| \usepackage{enumitem} |
| \usepackage{algorithm} |
| \usepackage{algorithmic} |
| \usepackage{hyperref} |
| \usepackage[nameinlink]{cleveref} |
|
|
| \input{macros} |
|
|
| \title{GliomaSAM3D-MoE: Concept-Prompted 3D Glioma Segmentation with Dual-Domain Enhancement and Sparse Mixture-of-Experts} |
| \author{Anonymous} |
| \date{} |
|
|
| \begin{document} |
| \maketitle |
|
|
| \begin{abstract} |
Accurate delineation of glioma subregions in multi-parametric MRI is central to treatment planning and longitudinal assessment, and its evaluation has been standardized by the Brain Tumor Segmentation (BraTS) benchmark~\cite{Menze2015BraTS,Baid2021BraTS,SynapseBraTS2023}.
| While recent 3D CNN/Transformer segmenters achieve strong overlap metrics, they remain brittle to boundary ambiguity, class imbalance, and the frequent absence of enhancing tumor (\ET), where spurious \ET predictions can disproportionately degrade surface-distance criteria (e.g., HD95). |
| Inspired by the promptable design of the Segment Anything Model (SAM), we propose \textbf{GliomaSAM3D-MoE}, a fully automatic 3D glioma segmentation framework that replaces manual prompts with learned concept tokens. |
| Our method combines an \ET-existence gate with direction-aware dual-domain enhancement and a task-structured sparse mixture-of-experts decoder to reduce false positives and improve boundary fidelity. |
| \end{abstract} |
|
|
| \section{Introduction}\label{sec:intro} |
| Gliomas are among the most common primary brain tumors, and accurate segmentation of tumor subregions from multi-parametric MRI (mpMRI) is essential for diagnosis, treatment planning, radiotherapy targeting, and longitudinal response assessment. |
| To foster reproducible evaluation, the Brain Tumor Segmentation (BraTS) challenges provide multi-institutional mpMRI scans with expert annotations and standardized evaluation protocols~\cite{Menze2015BraTS,Baid2021BraTS,SynapseBraTS2023}. |
In the conventional BraTS setting, methods segment three clinically relevant regions---whole tumor (\WT), tumor core (\TC), and enhancing tumor (\ET)---derived from voxel-wise labels of edema, non-enhancing/necrotic core, and enhancing components~\cite{Menze2015BraTS,Baid2021BraTS}.
|
|
| Despite steady progress, BraTS glioma segmentation remains challenging for three recurring reasons. |
| First, tumor boundaries can be ambiguous due to partial volume effects, intensity inhomogeneity, and infiltrative growth patterns, making boundary-sensitive metrics (e.g., HD95) particularly unforgiving. |
| Second, the region hierarchy (\ET $\subseteq$ \TC $\subseteq$ \WT) induces severe class imbalance, where small \ET volumes are easily overwhelmed by \WT/\TC during optimization. |
Third, \ET is entirely absent in a non-trivial portion of the cohort, most often in low-grade cases; in such \ET-absent volumes, even a handful of false-positive \ET voxels can incur large surface-distance penalties and yield clinically misleading ``enhancing'' findings.
| These properties suggest that effective models should (i) emphasize high-frequency boundary cues, (ii) respect the nested region structure, and (iii) explicitly model \emph{existence} (whether a region should appear) separately from \emph{localization} (where it appears). |
|
|
| Most state-of-the-art BraTS solutions build on volumetric encoder--decoder architectures, including 3D CNN families~\cite{Cicek2016,Milletari2016VNet,Kamnitsas2017DeepMedic,Isensee2021nnUNet,Roy2023MedNeXt} and transformer-based variants~\cite{Wang2021TransBTS,Hatamizadeh2022UNETR,Hatamizadeh2022SwinUNETR}. |
While these models can achieve strong Dice scores, they typically treat each subregion as a dense per-voxel classification problem, with no explicit mechanism to prevent anatomically implausible ``hallucinations'' of \ET in \ET-absent cases and no targeted strategy to align optimization with surface-distance behavior.
|
|
| In parallel, promptable segmentation foundation models such as SAM~\cite{Kirillov2023SAM} introduce a compelling alternative abstraction: the user (or an upstream module) specifies \emph{what} to segment via prompts, and the model focuses on \emph{where} to segment. |
| Recent medical adaptations demonstrate that SAM-style pretraining can transfer to medical targets~\cite{Ma2024MedSAM}, and that SAM-inspired designs can be extended to volumetric data~\cite{Bui2023SAM3D,Wang2023SAMMed3D}. |
| However, most promptable approaches are interactive (requiring points/boxes), operate slice-wise, or do not explicitly address BraTS-specific challenges such as \ET absence, nested region constraints, and boundary-driven evaluation. |
|
|
| To bridge these gaps, we propose \textbf{GliomaSAM3D-MoE}, a fully automatic SAM-style volumetric segmenter tailored to BraTS glioma subregions. |
| Our key idea is to replace manual prompts with \emph{learned concept tokens} predicted from the 3D volume, and to structure the decoder as a \emph{task-aware sparse mixture-of-experts} so that different experts specialize to \WT/\TC/\ET decoding. |
| Crucially, we introduce an explicit \ET-existence classifier whose probability gates the \ET mask to reduce false positives in \ET-absent volumes, and we design a direction-aware dual-domain enhancement module that injects high-frequency priors and spectral modulation to improve boundary fidelity and robustness. |
|
|
| Our main contributions are: |
| \begin{itemize} |
| \item \textbf{Concept-prompted automatic segmentation.} We introduce a SAM-style framework that predicts discrete concept tokens from the input volume and uses them as prompts for region-specific mask decoding, enabling fully automatic volumetric glioma segmentation. |
| \item \textbf{\ET-aware existence gating.} We decouple \emph{existence} from \emph{localization} by adding an \ET-presence predictor and an explicit gating mechanism to suppress spurious \ET predictions in \ET-absent cases, targeting improved boundary-sensitive evaluation. |
| \item \textbf{Direction-aware dual-domain enhancement.} We combine high-frequency directional priors with calibrated multi-scale fusion and spectral modulation to sharpen boundaries and improve robustness to acquisition/style shifts. |
| \item \textbf{Task-structured sparse MoE decoding.} We propose a region-aware sparse mixture-of-experts decoder for \WT/\TC/\ET that promotes expert specialization without incurring the full cost of dense multi-branch decoding. |
| \end{itemize} |
|
|
| \section{Related Work}\label{sec:related} |
| \subsection{Glioma segmentation on BraTS} |
| Early BraTS approaches combined hand-crafted features with classical classifiers, but modern solutions are dominated by deep volumetric segmentation networks. |
| 3D encoder--decoders, such as 3D U-Net~\cite{Cicek2016} and V-Net~\cite{Milletari2016VNet}, established a strong baseline for dense volumetric prediction. |
| Multi-scale and context-aware designs (e.g., DeepMedic~\cite{Kamnitsas2017DeepMedic}) further improved robustness for heterogeneous lesions. |
| More recently, nnU-Net~\cite{Isensee2021nnUNet} popularized automated pipeline configuration and remains a widely used competitive baseline in medical challenges, including BraTS. |
| Transformer-based volumetric segmenters improve global context modeling, exemplified by TransBTS~\cite{Wang2021TransBTS}, UNETR~\cite{Hatamizadeh2022UNETR}, and Swin UNETR~\cite{Hatamizadeh2022SwinUNETR}. |
| Convolutional architectures inspired by transformer design principles (e.g., MedNeXt~\cite{Roy2023MedNeXt}) also show competitive performance with favorable efficiency. |
|
|
| Despite strong overlap metrics, BraTS segmentation is still hampered by (i) boundary ambiguity and (ii) rare/absent subregions (notably \ET), where naive voxel-wise decoding can yield false positives that severely affect surface-distance measures. |
| Our work focuses on explicitly modeling \ET existence and improving boundary fidelity within a SAM-style prompt-conditioned decoding paradigm. |
|
|
| \subsection{Promptable and foundation models for medical segmentation} |
| SAM~\cite{Kirillov2023SAM} introduced a promptable segmentation paradigm that generalizes across diverse natural-image targets via point/box/mask prompts. |
| MedSAM~\cite{Ma2024MedSAM} demonstrated that SAM can be adapted to medical images through large-scale fine-tuning, improving performance on typical medical targets. |
| Extending promptable segmentation to volumetric data is an active research direction: SAM3D~\cite{Bui2023SAM3D} adapts SAM-style features to 3D volumes, and SAM-Med3D~\cite{Wang2023SAMMed3D} constructs a fully learnable 3D promptable model trained on large-scale volumetric masks. |
| While these approaches advance general-purpose promptable segmentation, BraTS requires \emph{fully automatic} multi-region decoding (\WT/\TC/\ET), careful handling of \ET-absent cases, and improved boundary behavior. |
| We therefore replace manual prompts with learned concept tokens and add an \ET-existence gate that targets BraTS-specific failure modes. |
|
|
| \subsection{Boundary-aware learning and frequency-domain robustness} |
| Region-overlap losses (e.g., Dice) may under-penalize boundary errors, motivating boundary-aware objectives such as boundary loss~\cite{Kervadec2019BoundaryLoss} and losses that explicitly target Hausdorff distance behavior~\cite{Karimi2019HDLoss}. |
| In parallel, domain shift across scanners, protocols, and institutions has motivated frequency-domain adaptation and augmentation. |
| FDA~\cite{Yang2020FDA} reduces style discrepancy by swapping low-frequency amplitude components, and AmpMix~\cite{Xu2023AmpMix} perturbs amplitude while preserving phase semantics to improve domain generalization. |
| Our method integrates boundary-centric high-frequency priors and spectral modulation as complementary mechanisms for robust BraTS segmentation across dataset editions. |
|
|
| \subsection{Mixture-of-experts for dense prediction} |
| Mixture-of-experts (MoE) enables conditional computation by routing each input to a sparse subset of specialized experts, improving capacity without proportional cost~\cite{Shazeer2017MoE,Fedus2022Switch}. |
| In computer vision, MoE has been explored for multi-task learning and dense prediction with adaptive routing~\cite{Chen2023AdaMVMoE}. |
| Motivated by the heterogeneity of BraTS subregions and the nested (\ET $\subseteq$ \TC $\subseteq$ \WT) structure, we design a task-structured sparse MoE decoder that encourages expert specialization across regions while remaining efficient for 3D inference. |
|
|
| \section{Method}\label{sec:method} |
|
|
| \subsection{Problem formulation and notation} |
| Given a multi-modal MRI volume |
| $\mathbf{X}\in\R^{C\times H\times W\times D}$ with $C=4$ modalities, |
| our goal is to predict a voxel-wise label map |
| $\widehat{\mathbf{Y}}\in\{0,1,2,3\}^{H\times W\times D}$. |
| Equivalently, we predict three standard BraTS region masks |
| $\widehat{\mathbf{m}}_{r}\in[0,1]^{H\times W\times D}$ for $r\in\{\WT,\TC,\ET\}$ (using the canonical definitions \ET$\subseteq$\TC$\subseteq$\WT). |
The method is \emph{fully automatic}: no user interaction is required at inference time.
|
|
| \subsection{Overview} |
| GliomaSAM3D-MoE consists of five key components: |
| \begin{enumerate}[leftmargin=*] |
| \item a parameter-free high-frequency direction injection module (HFDI-3D) to provide directional boundary priors, |
| \item a SAM-style 2D image encoder $E_{\mathrm{img}}(\cdot)$ applied to each slice, |
| \item a lightweight slice-as-sequence 3D adaptation module $T_{\mathrm{seq}}(\cdot)$ for inter-slice context, |
| \item a concept-prompt module with (i) a fixed discrete token vocabulary and (ii) an attribute predictor $h_{\mathrm{attr}}(\cdot)$ to infer tokens at test time, |
| \item a direction-aware dual-domain enhancement branch (MSDA-3D + FA + FCF + spectral modulation) and a task-structured sparse MoE decoder to output region logits. |
| \end{enumerate} |
|
|
| \subsection{High-frequency direction injection (HFDI-3D)} |
| To strengthen boundary cues and tiny-fragment sensitivity, we inject a \emph{directional} high-frequency prior using parameter-free operators. |
| We first compute a modality-averaged volume: |
| \begin{equation} |
| \bar{\mathbf{X}}=\frac{1}{C}\sum_{c=1}^{C}\mathbf{X}^{(c)}\in\R^{H\times W\times D}. |
| \end{equation} |
| Using fixed finite-difference (or 3D Sobel) operators $\nabla_x,\nabla_y,\nabla_z$, we extract directional high-frequency maps: |
| \begin{equation} |
| \mathbf{G}_x=\nabla_x \bar{\mathbf{X}},\qquad \mathbf{G}_y=\nabla_y \bar{\mathbf{X}},\qquad \mathbf{G}_z=\nabla_z \bar{\mathbf{X}}. |
| \end{equation} |
| We form a normalized direction stack |
| \begin{equation} |
| \mathbf{H}=\mathrm{Norm}\!\Big(\Concat\big(|\mathbf{G}_x|,|\mathbf{G}_y|,|\mathbf{G}_z|\big)\Big)\in\R^{3\times H\times W\times D}, |
| \end{equation} |
| and concatenate it with the original modalities: |
| \begin{equation} |
| \mathbf{X}^{+}=\Concat(\mathbf{X},\mathbf{H})\in\R^{(C+3)\times H\times W\times D}. |
| \end{equation} |
| This augmentation is deterministic and parameter-free, serving as an explicit directional prior that is especially helpful for fragmented \ET boundaries. |
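For concreteness, the following PyTorch sketch realizes HFDI-3D with central finite differences; the function name, the difference stencil, and the min--max instantiation of $\mathrm{Norm}(\cdot)$ are illustrative assumptions (fixed 3D Sobel kernels are an equally valid choice of $\nabla_x,\nabla_y,\nabla_z$):
\begin{verbatim}
import torch

def hfdi_3d(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Parameter-free high-frequency direction injection.
    x: (B, C, H, W, D) multi-modal volume -> (B, C+3, H, W, D)."""
    xbar = x.mean(dim=1, keepdim=True)          # modality-averaged volume
    gx, gy, gz = (torch.zeros_like(xbar) for _ in range(3))
    # Central differences along each spatial axis (zero at the borders).
    gx[:, :, 1:-1] = (xbar[:, :, 2:] - xbar[:, :, :-2]) / 2
    gy[:, :, :, 1:-1] = (xbar[:, :, :, 2:] - xbar[:, :, :, :-2]) / 2
    gz[:, :, :, :, 1:-1] = (xbar[:, :, :, :, 2:] - xbar[:, :, :, :, :-2]) / 2
    h = torch.cat([gx.abs(), gy.abs(), gz.abs()], dim=1)  # (B, 3, H, W, D)
    # Per-channel min-max normalization (one possible Norm(.)).
    flat = h.flatten(2)
    lo = flat.min(dim=-1).values[:, :, None, None, None]
    hi = flat.max(dim=-1).values[:, :, None, None, None]
    h = (h - lo) / (hi - lo + eps)
    return torch.cat([x, h], dim=1)             # X^+ = [X; H]
\end{verbatim}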
|
|
| \subsection{Slice-as-sequence 3D adaptation} |
| We interpret the axial dimension as a sequence of ``frames'': $\{\mathbf{x}_t\}_{t=1}^{D}$, |
| where $\mathbf{x}_t\in\R^{C\times H\times W}$ denotes the $t$-th slice. |
| After HFDI-3D, the image encoder ingests the augmented slice $\mathbf{x}^{+}_t\in\R^{(C+3)\times H\times W}$. |
| The 2D image encoder produces per-slice token embeddings: |
| \begin{equation} |
| \mathbf{F}_t = E_{\mathrm{img}}(\mathbf{x}^{+}_t)\in\R^{N\times d}, |
| \end{equation} |
| where $N$ is the number of tokens and $d$ is the token dimension. |
|
|
| To inject 3D context, we aggregate neighboring slice information within a short window |
| $\mathcal{W}_t=\{t-K,\ldots,t-1\}$ using memory-style cross-attention: |
| \begin{equation} |
| \widetilde{\mathbf{F}}_t = |
| \Attn\!\left( |
| \mathbf{Q}=\mathbf{F}_t,\, |
| \mathbf{K}=\Concat(\mathbf{F}_{t-K},\ldots,\mathbf{F}_{t-1}),\, |
| \mathbf{V}=\Concat(\mathbf{F}_{t-K},\ldots,\mathbf{F}_{t-1}) |
| \right). |
| \end{equation} |
| In practice, $K$ is small (e.g., 4--8) for efficiency. |
| During training we randomize traversal direction (forward/backward) to encourage bidirectional consistency without doubling inference cost. |
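A minimal sketch of the memory-style cross-attention follows (batch size one for clarity; the class name and the use of \texttt{nn.MultiheadAttention} are implementation assumptions):
\begin{verbatim}
import torch
import torch.nn as nn

class SliceMemoryAttention(nn.Module):
    """Cross-attention from slice t to the K previous slices."""
    def __init__(self, d: int, num_heads: int = 8, k_window: int = 4):
        super().__init__()
        self.k_window = k_window
        self.attn = nn.MultiheadAttention(d, num_heads, batch_first=True)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (D, N, d) -- D slices, N tokens per slice.
        out = []
        for t in range(feats.shape[0]):
            lo = max(0, t - self.k_window)
            if lo == t:                          # first slice: no memory
                out.append(feats[t])
                continue
            mem = feats[lo:t].reshape(1, -1, feats.shape[-1])
            q = feats[t].unsqueeze(0)            # (1, N, d) queries
            upd, _ = self.attn(q, mem, mem)      # K = V = window memory
            out.append(upd.squeeze(0))
        return torch.stack(out, dim=0)           # (D, N, d)
\end{verbatim}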
|
|
| \subsection{Discrete concept tokens and prompt injection} |
| \paragraph{Fixed concept vocabulary.} |
| Instead of free-form natural language (which can be non-deterministic at inference), we define a fixed vocabulary $\mathcal{V}$ of discrete concept tokens: |
| \begin{equation} |
| \mathcal{V}=\{\texttt{WT},\texttt{TC},\texttt{ET},\texttt{ET\_PRESENT},\texttt{ET\_ABSENT},\texttt{FRAG\_BIN}_i,\texttt{SCALE\_BIN}_j,\ldots\}. |
| \end{equation} |
| During training, token supervision is derived from the ground truth masks (e.g., \ET presence, fragmentation bins, scale bins). |
| At inference, tokens are predicted by a lightweight attribute predictor, avoiding train/test mismatch. |
|
|
| \paragraph{Attribute predictor and prompt embeddings.} |
| We compute a global volume descriptor by pooling over tokens and slices: |
| \begin{equation} |
| \mathbf{z} = \Pool(\{\widetilde{\mathbf{F}}_t\}_{t=1}^{D}). |
| \end{equation} |
| An attribute head $h_{\mathrm{attr}}(\cdot)$ outputs predicted concept labels $\widehat{\mathbf{c}}$ and an \ET presence probability $\pi_{\ET}\in[0,1]$: |
| \begin{equation} |
| (\widehat{\mathbf{c}},\, \pi_{\ET}) = h_{\mathrm{attr}}(\mathbf{z}). |
| \end{equation} |
| The selected tokens are embedded and injected via a prompt encoder $E_{\mathrm{prm}}(\cdot)$: |
| \begin{equation} |
| \mathbf{p} = E_{\mathrm{prm}}(\mathrm{Embed}(\widehat{\mathbf{c}})). |
| \end{equation} |
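The sketch below shows one way to realize $h_{\mathrm{attr}}(\cdot)$ together with the token embedding; the vocabulary index layout, the bin counts, and the $0.5$ presence threshold are assumptions for illustration:
\begin{verbatim}
import torch
import torch.nn as nn

# Assumed layout of the fixed vocabulary V:
# 0:WT 1:TC 2:ET 3:ET_PRESENT 4:ET_ABSENT 5..8:FRAG_BIN 9..12:SCALE_BIN
FRAG_BASE, SCALE_BASE, VOCAB = 5, 9, 13

class AttributeHead(nn.Module):
    """Predicts discrete concept tokens and pi_ET from the pooled z."""
    def __init__(self, d: int, n_bins: int = 4):
        super().__init__()
        self.frag = nn.Linear(d, n_bins)       # fragmentation bin logits
        self.scale = nn.Linear(d, n_bins)      # scale bin logits
        self.presence = nn.Linear(d, 1)        # ET existence logit
        self.embed = nn.Embedding(VOCAB, d)    # shared token embeddings

    def forward(self, z: torch.Tensor):
        # z: (B, d). Hard token ids at inference; logits supervise training.
        frag_tok = FRAG_BASE + self.frag(z).argmax(-1)
        scale_tok = SCALE_BASE + self.scale(z).argmax(-1)
        pi_et = torch.sigmoid(self.presence(z)).squeeze(-1)
        et_tok = torch.where(pi_et > 0.5,
                             torch.full_like(frag_tok, 3),   # ET_PRESENT
                             torch.full_like(frag_tok, 4))   # ET_ABSENT
        tokens = torch.stack([et_tok, frag_tok, scale_tok], 1)  # (B, 3)
        return self.embed(tokens), pi_et       # prompt embeddings, pi_ET
\end{verbatim}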
|
|
| \paragraph{\ET presence gating.} |
| Let $\mathbf{l}_{\ET}\in\R^{H\times W\times D}$ denote the \ET logits from the decoder. |
| We convert logits to probabilities and gate with $\pi_{\ET}$ to suppress false positives in \ET-absent volumes: |
| \begin{equation} |
| \widehat{\mathbf{m}}_{\ET} = \sigma(\mathbf{l}_{\ET})\cdot \pi_{\ET}. |
| \end{equation} |
| This explicitly decouples existence and localization for \ET and stabilizes HD95 under small spurious detections. |
|
|
| \subsection{Direction-aware dual-domain enhancement (MSDA-3D + FA + FCF)} |
| \paragraph{Learnable spectral modulation.} |
| For a training crop (or a full volume) $\mathbf{X}$, we compute a 3D Fourier transform per modality channel: |
| \begin{equation} |
| \FFT(\mathbf{X}) = \mathbf{A}\odot e^{\ii\mathbf{\Phi}}, |
| \end{equation} |
| where $\mathbf{A}$ and $\mathbf{\Phi}$ denote amplitude and phase, respectively. |
| We apply a learnable radial frequency gate $w_{\theta}(r)$ (parameterized by $\theta$ and indexed by radial frequency magnitude $r$): |
| \begin{equation} |
| \mathbf{A}' = \mathbf{A}\odot w_{\theta}(r),\qquad |
| \mathbf{X}_{\mathrm{spec}} = \IFFT\!\left(\mathbf{A}'\odot e^{\ii\mathbf{\Phi}}\right). |
| \end{equation} |
| We summarize directional frequency statistics (used later for routing and fusion) by partitioning the frequency domain into $Q$ directional sectors $\{\mathbf{B}_q\}_{q=1}^{Q}$ and computing |
| \begin{equation} |
| s_q=\frac{\langle \mathbf{A},\mathbf{B}_q\rangle}{\langle \mathbf{A},\mathbf{1}\rangle},\qquad \mathbf{s}=[s_1,\ldots,s_Q]. |
| \end{equation} |
| The spectral-enhanced volume $\mathbf{X}_{\mathrm{spec}}$ is fused with spatial features before decoding. |
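A sketch of the learnable spectral modulation, assuming $w_{\theta}(r)$ is parameterized as a lookup table over radial-frequency bins (the bin count and nearest-bin indexing are illustrative choices):
\begin{verbatim}
import torch
import torch.nn as nn

class RadialSpectralGate(nn.Module):
    """Learnable radial gate w_theta(r) on the 3D amplitude spectrum."""
    def __init__(self, n_bins: int = 16):
        super().__init__()
        self.gate = nn.Parameter(torch.ones(n_bins))  # identity at init

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W, D) real volume.
        Xf = torch.fft.fftn(x, dim=(-3, -2, -1))
        amp, phase = Xf.abs(), Xf.angle()
        H, W, D = x.shape[-3:]
        fh = torch.fft.fftfreq(H, device=x.device)
        fw = torch.fft.fftfreq(W, device=x.device)
        fd = torch.fft.fftfreq(D, device=x.device)
        # Radial frequency magnitude r, normalized to [0, 1].
        r = torch.sqrt(fh[:, None, None] ** 2 + fw[None, :, None] ** 2
                       + fd[None, None, :] ** 2)
        r = r / (r.max() + 1e-8)
        idx = (r * (self.gate.numel() - 1)).long()    # bin index per voxel
        amp = amp * self.gate[idx]                    # A' = A . w_theta(r)
        return torch.fft.ifftn(amp * torch.exp(1j * phase),
                               dim=(-3, -2, -1)).real
\end{verbatim}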
|
|
| \paragraph{Multi-scale direction-aware module (MSDA-3D).} |
| We further enhance directional perception across scales. |
| Let $\mathbf{U}$ denote a 3D feature tensor (obtained by reshaping token embeddings to a grid and aggregating slices). |
| For each scale $k\in\mathcal{K}$ and direction $d\in\{x,y,z\}$, we apply a directional depthwise convolution: |
| \begin{equation} |
| \mathbf{U}_{k,d}=\mathrm{DWConv}_{k}^{(d)}(\mathbf{U}), |
| \end{equation} |
| and combine them with learned attention weights $a_{k,d}$: |
| \begin{equation} |
| a_{k,d}=\mathrm{Softmax}_{k,d}\big(\mathrm{MLP}(\Pool(\mathbf{U}))\big),\qquad |
| \mathbf{U}_{\mathrm{msda}}=\sum_{k\in\mathcal{K}}\sum_{d\in\{x,y,z\}} a_{k,d}\odot \mathbf{U}_{k,d}. |
| \end{equation} |
| This module promotes multi-scale local relation modeling while retaining explicit direction selectivity. |
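The following sketch instantiates MSDA-3D with 1D depthwise kernels along each axis; the kernel sizes $\mathcal{K}=\{3,5,7\}$ and the two-layer MLP are assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class MSDA3D(nn.Module):
    """Directional depthwise 3D convolutions (1D kernels along x/y/z)
    fused by softmax attention over (scale, direction) branches."""
    def __init__(self, ch: int, scales=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList()
        for k in scales:
            for d in range(3):                  # axis 0/1/2 of (H, W, D)
                ks, pad = [1, 1, 1], [0, 0, 0]
                ks[d], pad[d] = k, k // 2
                self.branches.append(
                    nn.Conv3d(ch, ch, tuple(ks), padding=tuple(pad),
                              groups=ch, bias=False))
        self.mlp = nn.Sequential(
            nn.Linear(ch, ch), nn.GELU(), nn.Linear(ch, len(self.branches)))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (B, ch, H, W, D)
        g = u.mean(dim=(-3, -2, -1))            # global pooled descriptor
        a = self.mlp(g).softmax(dim=-1)         # attention over branches
        out = 0
        for i, conv in enumerate(self.branches):
            out = out + a[:, i, None, None, None, None] * conv(u)
        return out
\end{verbatim}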
|
|
| \paragraph{Feature aggregation (FA) and feature calibration fusion (FCF).} |
| To prevent tiny \ET targets from vanishing in high-level representations, we aggregate multi-level features into a lesion-preserving representation $\mathbf{U}_{\mathrm{fa}}=\mathrm{Agg}(\{\mathbf{U}^{(\ell)}\}_{\ell=1}^{L})$. |
| We then calibrate cross-source fusion (spatial vs.\ spectral, and multi-level vs.\ MSDA-enhanced) using a lightweight gate: |
| \begin{equation} |
| \boldsymbol{\eta}=\sigma\!\Big(\mathrm{MLP}\big(\Pool(\Concat(\mathbf{U}_{\mathrm{fa}},\mathbf{U}_{\mathrm{msda}}))\big)\Big),\qquad |
| \mathbf{U}_{\mathrm{fuse}}=\boldsymbol{\eta}\odot \mathbf{U}_{\mathrm{fa}}+(1-\boldsymbol{\eta})\odot \mathbf{U}_{\mathrm{msda}}. |
| \end{equation} |
| Finally, $\mathbf{U}_{\mathrm{fuse}}$ is fused with spectral features derived from $\mathbf{X}_{\mathrm{spec}}$ (e.g., via concatenation or cross-attention) and fed to the MoE decoder. |
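A minimal sketch of the FCF gate (channel-wise gating computed from global average pooling; the MLP width is an assumption):
\begin{verbatim}
import torch
import torch.nn as nn

class FCFGate(nn.Module):
    """Calibrated fusion: eta blends U_fa with U_msda channel-wise."""
    def __init__(self, ch: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * ch, ch), nn.GELU(), nn.Linear(ch, ch))

    def forward(self, u_fa: torch.Tensor, u_msda: torch.Tensor):
        # Both inputs: (B, ch, H, W, D).
        pooled = torch.cat([u_fa, u_msda], dim=1).mean(dim=(-3, -2, -1))
        eta = torch.sigmoid(self.mlp(pooled))[..., None, None, None]
        return eta * u_fa + (1 - eta) * u_msda
\end{verbatim}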
|
|
| \paragraph{Fourier amplitude mixing augmentation.} |
| To improve robustness to acquisition/style variation, we randomly mix Fourier amplitudes across samples while preserving phase: |
| \begin{equation} |
| \mathbf{A}_{\mathrm{mix}} = \alpha \mathbf{A}^{(a)} + (1-\alpha)\mathbf{A}^{(b)},\qquad |
| \mathbf{X}_{\mathrm{mix}} = \IFFT\!\left(\mathbf{A}_{\mathrm{mix}}\odot e^{\ii\mathbf{\Phi}^{(a)}}\right), |
| \end{equation} |
| where $\alpha\sim\mathrm{Beta}(\beta,\beta)$ and $(a,b)$ denote two randomly paired training samples. |
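The augmentation reduces to a few lines; the sketch below keeps the phase of sample $a$ and blends only amplitudes, with $\beta=0.3$ as an assumed default:
\begin{verbatim}
import torch

def amplitude_mix(xa: torch.Tensor, xb: torch.Tensor,
                  beta: float = 0.3) -> torch.Tensor:
    """Fourier amplitude mixing: blend amplitudes of two volumes while
    keeping the phase (content) of the first. xa, xb: (C, H, W, D)."""
    alpha = torch.distributions.Beta(beta, beta).sample().item()
    Fa = torch.fft.fftn(xa, dim=(-3, -2, -1))
    Fb = torch.fft.fftn(xb, dim=(-3, -2, -1))
    amp = alpha * Fa.abs() + (1 - alpha) * Fb.abs()
    mixed = amp * torch.exp(1j * Fa.angle())
    return torch.fft.ifftn(mixed, dim=(-3, -2, -1)).real
\end{verbatim}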
|
|
| \subsection{Task-structured sparse MoE decoder} |
| We design $M$ expert decoders $\{D_m\}_{m=1}^{M}$ specialized for different targets: |
| \texttt{\WT/edema}, \texttt{\TC/core}, \texttt{\ET/fine}, \texttt{boundary}, and \texttt{FP-suppress}. |
| A gating network $G(\cdot)$ produces routing weights conditioned on global visual context and prompt embeddings: |
| \begin{equation} |
| \boldsymbol{\gamma} = G\!\left(\mathbf{z},\,\mathbf{p},\,\mathbf{s}\right),\qquad |
| \sum_{m=1}^{M}\gamma_m = 1, |
| \end{equation} |
| where $\mathbf{s}$ denotes optional spectral statistics (e.g., band-energy ratios). |
| Each expert outputs a 3-channel logit tensor $\mathbf{L}^{(m)}\in\R^{3\times H\times W\times D}$ for $\{\WT,\TC,\ET\}$. |
| We apply sparse top-$k$ routing (e.g., $k=2$) to combine expert logits: |
| \begin{equation} |
| \mathbf{L} = |
| \sum_{m\in \TopK(\boldsymbol{\gamma})} |
| \gamma_m \cdot D_m(\{\widetilde{\mathbf{F}}_t\}_{t=1}^{D}, \mathbf{p}) \;=\; \sum_{m\in \TopK(\boldsymbol{\gamma})}\gamma_m \mathbf{L}^{(m)}. |
| \end{equation} |
| We denote the region-specific logits by $\{\mathbf{l}_{\WT},\mathbf{l}_{\TC},\mathbf{l}_{\ET}\}$ as the three channels of $\mathbf{L}$. |
| A load-balancing regularizer encourages diverse expert utilization. |
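The sketch below shows the sparse top-$k$ combination; the experts are reduced to $1{\times}1{\times}1$ convolutions for brevity, and the per-sample routing loop is an illustrative (non-optimized) implementation:
\begin{verbatim}
import torch
import torch.nn as nn

class SparseMoEHead(nn.Module):
    """Top-k weighted combination of expert logits (3 region channels)."""
    def __init__(self, ch: int, n_experts: int = 5, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.experts = nn.ModuleList(
            [nn.Conv3d(ch, 3, kernel_size=1) for _ in range(n_experts)])
        self.gate = nn.Linear(ch, n_experts)

    def forward(self, feat: torch.Tensor, ctx: torch.Tensor):
        # feat: (B, ch, H, W, D); ctx: (B, ch) fuses (z, p, s).
        gamma = self.gate(ctx).softmax(dim=-1)   # (B, M), sums to 1
        topv, topi = gamma.topk(self.top_k, dim=-1)
        outs = []
        for b in range(feat.shape[0]):           # per-sample routing
            sample = 0
            for v, i in zip(topv[b], topi[b]):   # only top-k experts run
                sample = sample + v * self.experts[int(i)](feat[b:b + 1])
            outs.append(sample)
        # gamma is also returned for the load-balancing regularizer.
        return torch.cat(outs, dim=0), gamma     # (B, 3, H, W, D), (B, M)
\end{verbatim}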
|
|
| \subsection{Training objectives} |
| The overall loss is: |
| \begin{equation} |
| \mathcal{L} = |
| \mathcal{L}_{\mathrm{seg}} |
| + \lambda_{\mathrm{pres}}\mathcal{L}_{\mathrm{pres}} |
| + \lambda_{\mathrm{attr}}\mathcal{L}_{\mathrm{attr}} |
| + \lambda_{\mathrm{moe}}\mathcal{L}_{\mathrm{moe}} |
| + \lambda_{\mathrm{hier}}\mathcal{L}_{\mathrm{hier}}. |
| \end{equation} |
|
|
| \paragraph{Segmentation loss.} |
| We combine Dice and cross-entropy, with \ET-aware reweighting and focal emphasis: |
| \begin{equation} |
| \mathcal{L}_{\mathrm{seg}} = |
| \sum_{r\in\{\WT,\TC,\ET\}} |
| \lambda_r \cdot \mathcal{L}_{\mathrm{Dice}}^{(r)} |
| + \lambda_{\mathrm{CE}}\cdot \mathcal{L}_{\mathrm{CE}} |
| + \lambda_{\mathrm{Focal}}\cdot \mathcal{L}_{\mathrm{Focal}}^{(\ET)}. |
| \end{equation} |
|
|
| \paragraph{Presence and attribute supervision.} |
| \ET presence uses binary cross-entropy: |
| \begin{equation} |
| \mathcal{L}_{\mathrm{pres}} = \BCE(\pi_{\ET}, y_{\ET}^{\mathrm{pres}}), |
| \end{equation} |
| where $y_{\ET}^{\mathrm{pres}}\in\{0,1\}$ is the ground-truth \ET presence indicator. |
| Other concept attributes (fragmentation/scale bins) use multi-class cross-entropy: |
| \begin{equation} |
| \mathcal{L}_{\mathrm{attr}} = \sum_{u\in\mathcal{U}} \CE(\widehat{c}_u, c_u). |
| \end{equation} |
| Here, $\mathcal{U}$ denotes the set of non-presence concept attributes (e.g., fragmentation bin, scale bin), with ground-truth label $c_u$ and prediction $\widehat{c}_u$ for each attribute $u$. |
|
|
| \paragraph{Hierarchy consistency.} |
| We softly enforce logical constraints (e.g., \ET $\subseteq$ \TC $\subseteq$ \WT) by penalizing violations: |
| \begin{equation} |
| \mathcal{L}_{\mathrm{hier}} = |
| \left\|\max(\widehat{\mathbf{m}}_{\ET}-\widehat{\mathbf{m}}_{\TC},0)\right\|_1 |
| + \left\|\max(\widehat{\mathbf{m}}_{\TC}-\widehat{\mathbf{m}}_{\WT},0)\right\|_1. |
| \end{equation} |
|
|
| \paragraph{MoE load balancing.} |
| To prevent routing collapse, we regularize the batch-average routing probabilities: |
| \begin{equation} |
| \mathcal{L}_{\mathrm{moe}} = |
| \sum_{m=1}^{M} \left(\overline{\gamma}_m - \frac{1}{M}\right)^2, |
| \end{equation} |
| where $\overline{\gamma}_m$ is the batch-average of $\gamma_m$. |
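Both regularizers translate directly into code; the sketch assumes soft masks in $[0,1]$ and routing weights $\boldsymbol{\gamma}$ of shape $(B, M)$:
\begin{verbatim}
import torch

def hierarchy_loss(m_et: torch.Tensor, m_tc: torch.Tensor,
                   m_wt: torch.Tensor) -> torch.Tensor:
    """L1 penalty on violations of ET <= TC <= WT (soft masks)."""
    return (torch.clamp(m_et - m_tc, min=0).sum()
            + torch.clamp(m_tc - m_wt, min=0).sum())

def moe_balance_loss(gamma: torch.Tensor) -> torch.Tensor:
    """gamma: (B, M) routing weights; pull batch-average toward 1/M."""
    mean = gamma.mean(dim=0)
    return ((mean - 1.0 / gamma.shape[-1]) ** 2).sum()
\end{verbatim}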
|
|
| \subsection{Inference and post-processing} |
At inference, concept tokens are predicted by $h_{\mathrm{attr}}(\cdot)$; no manual prompts or external language-model calls are required.
| \ET probabilities are gated by $\pi_{\ET}$. |
| Optionally, we apply light post-processing for \ET to remove tiny isolated components below a small voxel threshold, mitigating HD95 sensitivity. |
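The gating and post-processing steps can be implemented as follows (SciPy connected components; the probability threshold and the minimum component size are assumed defaults):
\begin{verbatim}
import numpy as np
from scipy import ndimage

def postprocess_et(prob_et: np.ndarray, pi_et: float,
                   thr: float = 0.5, min_voxels: int = 50) -> np.ndarray:
    """Gate ET probabilities with pi_ET, binarize, and drop tiny
    connected components (min_voxels is an assumed threshold)."""
    gated = prob_et * pi_et                 # existence gating
    mask = gated > thr
    labels, n = ndimage.label(mask)         # 3D connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    kept_ids = np.nonzero(sizes >= min_voxels)[0] + 1
    return np.isin(labels, kept_ids)
\end{verbatim}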
|
|
| \begin{algorithm}[t] |
| \caption{GliomaSAM3D-MoE inference (fully automatic)} |
| \label{alg:infer} |
| \begin{algorithmic}[1] |
| \REQUIRE Volume $\mathbf{X}\in\R^{4\times H\times W\times D}$ |
| \STATE HFDI-3D: $\mathbf{X}^{+}\leftarrow \mathrm{HFDI}(\mathbf{X})$ |
\STATE Slice encoding: $\mathbf{F}_t\leftarrow E_{\mathrm{img}}(\mathbf{x}^{+}_t)$ for $t=1,\ldots,D$
| \STATE Inter-slice aggregation: $\widetilde{\mathbf{F}}_{1:D} \leftarrow T_{\mathrm{seq}}(\mathbf{F}_{1:D})$ |
| \STATE Concept prediction: $(\widehat{\mathbf{c}},\pi_{\ET}) \leftarrow h_{\mathrm{attr}}(\Pool(\widetilde{\mathbf{F}}_{1:D}))$ |
| \STATE Prompt embeddings: $\mathbf{p}\leftarrow E_{\mathrm{prm}}(\mathrm{Embed}(\widehat{\mathbf{c}}))$ |
| \STATE Direction-aware enhancement: MSDA-3D + FA + FCF + spectral modulation |
| \STATE Decode with sparse MoE: obtain logits $\{\mathbf{l}_{\WT},\mathbf{l}_{\TC},\mathbf{l}_{\ET}\}$ |
| \STATE Region probabilities: $\widehat{\mathbf{m}}_{\WT} \leftarrow \sigma(\mathbf{l}_{\WT})$, $\widehat{\mathbf{m}}_{\TC} \leftarrow \sigma(\mathbf{l}_{\TC})$ |
| \STATE \ET gating: $\widehat{\mathbf{m}}_{\ET} \leftarrow \sigma(\mathbf{l}_{\ET})\cdot \pi_{\ET}$ |
| \STATE (Optional) post-process \ET: remove tiny isolated components |
| \STATE \textbf{return} $\{\widehat{\mathbf{m}}_{\WT},\widehat{\mathbf{m}}_{\TC},\widehat{\mathbf{m}}_{\ET}\}$ |
| \end{algorithmic} |
| \end{algorithm} |
|
|
|
|
|
|
| \section{Experiments}\label{sec:experiments} |
|
|
| \subsection{Datasets} |
| We conduct experiments on the BraTS 2021 and BraTS 2023 adult glioma datasets~\cite{Baid2021BraTS,SynapseBraTS2023}. |
| Both releases provide co-registered, skull-stripped, and resampled mpMRI volumes, typically including T1, T1ce, T2, and FLAIR modalities, along with expert tumor annotations~\cite{Menze2015BraTS,Baid2021BraTS}. |
| Following the BraTS convention, we evaluate three derived regions: whole tumor (\WT), tumor core (\TC), and enhancing tumor (\ET)~\cite{Baid2021BraTS}. |
| In addition to in-domain evaluation (training and testing within the same BraTS edition), we consider \emph{cross-year generalization} (e.g., train on BraTS 2021 and evaluate on BraTS 2023) to quantify robustness to dataset shift. |
|
|
| \subsection{Preprocessing and data sampling} |
| Although BraTS provides standardized preprocessing, we apply additional normalization and sampling steps for stable training: |
| (i) per-modality z-score normalization within the brain mask, |
| (ii) foreground-aware random cropping to extract 3D patches centered on tumor regions with a fixed probability, |
| and (iii) standard geometric augmentations (random flips, rotations) and intensity perturbations. |
| We will release exact hyperparameters (patch size, sampling ratios, and augmentation strengths) in the final version. |
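For reference, per-modality z-score normalization within the brain mask amounts to the following (NumPy sketch; the epsilon guard is an assumption):
\begin{verbatim}
import numpy as np

def zscore_in_brain(vol: np.ndarray, brain_mask: np.ndarray) -> np.ndarray:
    """Per-modality z-score restricted to the brain mask.
    vol: (C, H, W, D); brain_mask: (H, W, D) boolean."""
    out = np.zeros_like(vol, dtype=np.float32)
    for c in range(vol.shape[0]):
        vals = vol[c][brain_mask]
        mu, sigma = vals.mean(), vals.std() + 1e-8
        out[c][brain_mask] = (vol[c][brain_mask] - mu) / sigma
    return out
\end{verbatim}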
|
|
| \subsection{Evaluation metrics} |
| We report region-wise Dice similarity coefficient (Dice) and the 95th percentile Hausdorff distance (HD95) for \WT/\TC/\ET, consistent with BraTS evaluation practice~\cite{Baid2021BraTS}. |
| For \ET-absent volumes (i.e., empty \ET ground truth), we additionally report the false-positive \ET volume and the \ET-presence classification accuracy/AUROC of the proposed gate, since these directly reflect the intended behavior of existence-aware decoding. |
|
|
| \subsection{Implementation details} |
| All models are trained with the same data splits and preprocessing for fair comparison. |
| GliomaSAM3D-MoE uses a SAM-style 2D image encoder (initialized from SAM weights~\cite{Kirillov2023SAM}) applied slice-wise, followed by a 3D aggregation encoder and a task-structured sparse MoE decoder (Section~\ref{sec:method}). |
| We optimize the segmentation loss (Dice + cross-entropy) and the \ET-presence classification loss jointly; details of weighting and schedules will be included in the final version. |
| Unless otherwise stated, results are averaged over multiple runs (or folds) to reduce variance. |
|
|
| \subsection{Compared methods} |
| We compare against representative volumetric CNN/Transformer baselines and SAM-inspired volumetric models: |
| \begin{itemize} |
| \item \textbf{3D U-Net}~\cite{Cicek2016} and \textbf{V-Net}~\cite{Milletari2016VNet} as classical volumetric encoder--decoders. |
| \item \textbf{nnU-Net}~\cite{Isensee2021nnUNet} as a strong self-configuring medical segmentation baseline. |
| \item \textbf{TransBTS}~\cite{Wang2021TransBTS}, \textbf{UNETR}~\cite{Hatamizadeh2022UNETR}, and \textbf{Swin UNETR}~\cite{Hatamizadeh2022SwinUNETR} as representative transformer-based volumetric models. |
| \item \textbf{MedNeXt}~\cite{Roy2023MedNeXt} as a modern ConvNeXt-style volumetric baseline. |
| \item \textbf{SAM3D}~\cite{Bui2023SAM3D} and \textbf{SAM-Med3D}~\cite{Wang2023SAMMed3D} as promptable volumetric SAM adaptations. For a \emph{fully automatic} setting, prompts are generated by a lightweight coarse segmentation network trained on the same data (details in the final version). |
| \end{itemize} |
|
|
| \subsection{Main quantitative results} |
| Tables~\ref{tab:brats21} and~\ref{tab:brats23} report the main comparisons on BraTS 2021 and BraTS 2023, respectively. |
| (Placeholders are included here and should be filled once experiments are completed.) |
|
|
| \begin{table}[t] |
| \centering |
| \caption{Quantitative comparison on \textbf{BraTS 2021}. Report Dice (\%) $\uparrow$ and HD95 (mm) $\downarrow$ for \WT/\TC/\ET.} |
| \label{tab:brats21} |
| \resizebox{\linewidth}{!}{ |
| \begin{tabular}{lcccccc} |
| \toprule |
| Method & \WT Dice $\uparrow$ & \TC Dice $\uparrow$ & \ET Dice $\uparrow$ & \WT HD95 $\downarrow$ & \TC HD95 $\downarrow$ & \ET HD95 $\downarrow$ \\ |
| \midrule |
| 3D U-Net~\cite{Cicek2016} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| V-Net~\cite{Milletari2016VNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| nnU-Net~\cite{Isensee2021nnUNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| TransBTS~\cite{Wang2021TransBTS} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| UNETR~\cite{Hatamizadeh2022UNETR} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| Swin UNETR~\cite{Hatamizadeh2022SwinUNETR} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| MedNeXt~\cite{Roy2023MedNeXt} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| SAM3D~\cite{Bui2023SAM3D} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| SAM-Med3D~\cite{Wang2023SAMMed3D} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| \midrule |
| \textbf{GliomaSAM3D-MoE (ours)} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} \\ |
| \bottomrule |
| \end{tabular}} |
| \end{table} |
|
|
| \begin{table}[t] |
| \centering |
| \caption{Quantitative comparison on \textbf{BraTS 2023}. Report Dice (\%) $\uparrow$ and HD95 (mm) $\downarrow$ for \WT/\TC/\ET.} |
| \label{tab:brats23} |
| \resizebox{\linewidth}{!}{ |
| \begin{tabular}{lcccccc} |
| \toprule |
| Method & \WT Dice $\uparrow$ & \TC Dice $\uparrow$ & \ET Dice $\uparrow$ & \WT HD95 $\downarrow$ & \TC HD95 $\downarrow$ & \ET HD95 $\downarrow$ \\ |
| \midrule |
| 3D U-Net~\cite{Cicek2016} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| V-Net~\cite{Milletari2016VNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| nnU-Net~\cite{Isensee2021nnUNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| TransBTS~\cite{Wang2021TransBTS} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| UNETR~\cite{Hatamizadeh2022UNETR} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| Swin UNETR~\cite{Hatamizadeh2022SwinUNETR} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| MedNeXt~\cite{Roy2023MedNeXt} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| SAM3D~\cite{Bui2023SAM3D} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| SAM-Med3D~\cite{Wang2023SAMMed3D} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| \midrule |
| \textbf{GliomaSAM3D-MoE (ours)} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} \\ |
| \bottomrule |
| \end{tabular}} |
| \end{table} |
|
|
| \subsection{Cross-year generalization} |
| To explicitly measure robustness to dataset shift, we evaluate cross-year transfer without re-training. |
| Table~\ref{tab:crossyear} summarizes the cross-year performance when training on one BraTS edition and evaluating on the other. |
|
|
| \begin{table}[t] |
| \centering |
| \caption{Cross-year generalization between BraTS 2021 and BraTS 2023. ``Mean'' denotes the average over \WT/\TC/\ET.} |
| \label{tab:crossyear} |
| \resizebox{\linewidth}{!}{ |
| \begin{tabular}{lcccc} |
| \toprule |
| Train $\rightarrow$ Test & Method & Mean Dice $\uparrow$ & Mean HD95 $\downarrow$ & \ET FP Vol. $\downarrow$ \\ |
| \midrule |
| \multirow{2}{*}{2021 $\rightarrow$ 2023} |
| & nnU-Net~\cite{Isensee2021nnUNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| & \textbf{GliomaSAM3D-MoE (ours)} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} \\ |
| \midrule |
| \multirow{2}{*}{2023 $\rightarrow$ 2021} |
| & nnU-Net~\cite{Isensee2021nnUNet} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| & \textbf{GliomaSAM3D-MoE (ours)} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} \\ |
| \bottomrule |
| \end{tabular}} |
| \end{table} |
|
|
| \subsection{Ablation studies} |
| We perform ablations to isolate the impact of each proposed component: (i) concept prompting, (ii) \ET-presence gating, (iii) direction-aware dual-domain enhancement, and (iv) task-structured sparse MoE decoding. |
| Table~\ref{tab:ablation} provides a template for reporting these results. |
|
|
| \begin{table}[t] |
| \centering |
| \caption{Ablation study on a validation split (e.g., BraTS 2021). ``Mean'' denotes the average over \WT/\TC/\ET.} |
| \label{tab:ablation} |
| \resizebox{\linewidth}{!}{ |
| \begin{tabular}{lcccc} |
| \toprule |
| Variant & Mean Dice $\uparrow$ & Mean HD95 $\downarrow$ & \ET Dice $\uparrow$ & \ET FP Vol. $\downarrow$ \\ |
| \midrule |
| w/o concept tokens & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| w/o \ET gate & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| w/o dual-domain enhancement & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| w/o MoE (single decoder) & \textit{TBD} & \textit{TBD} & \textit{TBD} & \textit{TBD} \\ |
| \midrule |
| \textbf{Full model (ours)} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} & \textbf{TBD} \\ |
| \bottomrule |
| \end{tabular}} |
| \end{table} |
|
|
| \subsection{Visualization and qualitative analysis}\label{sec:vis} |
| In addition to quantitative metrics, we include qualitative comparisons to highlight boundary quality, \ET false-positive suppression, and expert specialization behavior. |
| The following subsections describe the planned visualizations; figures should be inserted once generated. |
|
|
| \subsubsection{Qualitative comparison on representative cases} |
| We will visualize representative axial/coronal/sagittal slices with overlays of predicted \WT/\TC/\ET masks for competing methods. |
| A typical figure includes (i) the four modalities (T1, T1ce, T2, FLAIR), (ii) ground truth, and (iii) predictions from each baseline and our method. |
|
|
| \begin{figure}[t] |
| \centering |
| \fbox{\parbox[c][0.22\textheight][c]{0.95\linewidth}{\centering \small Placeholder: qualitative comparison figure.}} |
| \caption{Qualitative comparison on representative BraTS cases. Each row corresponds to one subject; columns show modalities, ground truth, and predictions from baselines and GliomaSAM3D-MoE.} |
| \label{fig:qualitative} |
| \end{figure} |
|
|
| \subsubsection{\ET-absent case study and false-positive analysis} |
| To directly evaluate existence-aware decoding, we will curate a subset of \ET-absent volumes and visualize: |
| (i) predicted \ET masks before/after applying the \ET gate, (ii) the predicted \ET-presence probability $\pi_{\ET}$, and (iii) the resulting reduction in false-positive \ET regions. |
|
|
| \begin{figure}[t] |
| \centering |
| \fbox{\parbox[c][0.18\textheight][c]{0.95\linewidth}{\centering \small Placeholder: \ET-absent gating case study.}} |
| \caption{Case study on \ET-absent volumes. The proposed \ET gate suppresses spurious \ET predictions while preserving \WT/\TC.} |
| \label{fig:et_gate} |
| \end{figure} |
|
|
| \subsubsection{Boundary error maps and surface-distance visualization} |
| We will visualize boundary errors using signed distance transforms between prediction and ground truth, highlighting where improvements in HD95 arise. |
| In addition, 3D surface renderings can be used to show topological artifacts and boundary smoothness. |
|
|
| \begin{figure}[t] |
| \centering |
| \fbox{\parbox[c][0.18\textheight][c]{0.95\linewidth}{\centering \small Placeholder: boundary error maps / surface-distance visualization.}} |
| \caption{Boundary error visualization via distance-transform maps. Warmer colors indicate larger surface discrepancies.} |
| \label{fig:boundary} |
| \end{figure} |
|
|
| \subsubsection{Expert routing and concept token interpretability} |
| To interpret the MoE behavior, we will visualize the routing weights over experts for each region (\WT/\TC/\ET) and correlate routing patterns with tumor morphology (e.g., size/fragmentation). |
| For concept tokens, we will plot the predicted discrete concept indices and analyze their association with observable properties (e.g., \ET presence, boundary complexity). |
|
|
| \begin{figure}[t] |
| \centering |
| \fbox{\parbox[c][0.18\textheight][c]{0.95\linewidth}{\centering \small Placeholder: MoE routing and concept token visualization.}} |
| \caption{Visualization of MoE routing. We show expert assignment histograms per region and per case, illustrating specialization patterns.} |
| \label{fig:moe} |
| \end{figure} |
|
|
| \subsubsection{Frequency-domain analysis} |
| To motivate dual-domain enhancement, we will visualize amplitude spectra of input modalities and the effect of spectral modulation. |
| Additionally, we will include qualitative examples under synthetic intensity/style perturbations (e.g., amplitude mixing~\cite{Xu2023AmpMix}) to illustrate robustness. |
|
|
| \begin{figure}[t] |
| \centering |
| \fbox{\parbox[c][0.18\textheight][c]{0.95\linewidth}{\centering \small Placeholder: frequency-domain analysis visualization.}} |
| \caption{Frequency-domain visualization. We illustrate amplitude spectra and the effect of spectral modulation/augmentation on segmentation robustness.} |
| \label{fig:freq} |
| \end{figure} |
|
|
| \section{Conclusion} |
| We introduced GliomaSAM3D-MoE, a SAM-style fully automatic 3D glioma segmentation framework with concept prompting, \ET-aware existence gating, direction-aware dual-domain enhancement, and a task-structured sparse MoE decoder. |
| The final version will include complete quantitative results and visual analyses on BraTS 2021 and BraTS 2023. |
|
|
|
|
|
|
| \begin{thebibliography}{99} |
|
|
| \bibitem{Menze2015BraTS} |
B.~H. Menze, A.~Jakab, S.~Bauer, J.~Kalpathy-Cramer, K.~Farahani, J.~Kirby, et~al.
| \newblock The multimodal brain tumor image segmentation benchmark ({BRATS}). |
| \newblock \emph{IEEE Transactions on Medical Imaging}, 34(10):1993--2024, 2015. |
|
|
| \bibitem{Baid2021BraTS} |
| U.~Baid et~al. |
| \newblock The {RSNA}-{ASNR}-{MICCAI} {BraTS} 2021 benchmark on brain tumor segmentation and radiogenomic classification. |
| \newblock \emph{arXiv preprint arXiv:2107.02314}, 2021. |
|
|
| \bibitem{SynapseBraTS2023} |
| BraTS 2023 Challenge (Synapse). |
| \newblock \url{https://www.synapse.org/brats2023}. Accessed: 2026-01-25. |
|
|
| \bibitem{Kirillov2023SAM} |
| A.~Kirillov et~al. |
| \newblock Segment anything. |
\newblock In \emph{ICCV}, 2023.
|
|
| \bibitem{Ma2024MedSAM} |
| J.~Ma, Y.~He, F.~Li, L.~Han, C.~You, and B.~Wang. |
| \newblock Segment anything in medical images. |
| \newblock \emph{Nature Communications}, 15:654, 2024. |
|
|
| \bibitem{Bui2023SAM3D} |
| N.-T. Bui, D.-H. Hoang, M.-T. Tran, G.~Doretto, D.~Adjeroh, B.~Patel, A.~Choudhary, and N.~Le. |
| \newblock {SAM3D}: Segment anything model in volumetric medical images. |
| \newblock \emph{arXiv preprint arXiv:2309.03493}, 2023. |
|
|
| \bibitem{Wang2023SAMMed3D} |
| H.~Wang et~al. |
| \newblock {SAM-Med3D}: Towards general-purpose segmentation models for volumetric medical images. |
| \newblock \emph{arXiv preprint arXiv:2310.15161}, 2023. |
|
|
| \bibitem{Cicek2016} |
| {\"O}.~{\c{C}}i{\c{c}}ek, A.~Abdulkadir, S.~S. Lienkamp, T.~Brox, and O.~Ronneberger. |
| \newblock {3D U-Net}: Learning dense volumetric segmentation from sparse annotation. |
| \newblock In \emph{MICCAI}, 2016. |
|
|
| \bibitem{Milletari2016VNet} |
| F.~Milletari, N.~Navab, and S.-A. Ahmadi. |
| \newblock {V-Net}: Fully convolutional neural networks for volumetric medical image segmentation. |
\newblock In \emph{3DV}, 2016.
|
|
| \bibitem{Kamnitsas2017DeepMedic} |
| K.~Kamnitsas et~al. |
| \newblock Efficient multi-scale 3D {CNN} with fully connected {CRF} for accurate brain lesion segmentation. |
| \newblock \emph{Medical Image Analysis}, 36:61--78, 2017. |
|
|
| \bibitem{Isensee2021nnUNet} |
| F.~Isensee, P.~F. Jaeger, S.~A.~A. Kohl, J.~Petersen, and K.~H. Maier-Hein. |
| \newblock nn{U}-{N}et: A self-configuring method for deep learning-based biomedical image segmentation. |
| \newblock \emph{Nature Methods}, 18:203--211, 2021. |
|
|
| \bibitem{Wang2021TransBTS} |
W.~Wang, C.~Chen, M.~Ding, H.~Yu, S.~Zha, and J.~Li.
| \newblock {TransBTS}: Multimodal brain tumor segmentation using transformer. |
| \newblock In \emph{MICCAI}, 2021. |
|
|
| \bibitem{Hatamizadeh2022UNETR} |
| A.~Hatamizadeh et~al. |
| \newblock {UNETR}: Transformers for 3D medical image segmentation. |
| \newblock In \emph{WACV}, 2022. |
|
|
| \bibitem{Hatamizadeh2022SwinUNETR} |
| A.~Hatamizadeh, V.~Nath, Y.~Tang, D.~Yang, H.~R. Roth, and D.~Xu. |
| \newblock {Swin UNETR}: Swin transformers for semantic segmentation of brain tumors in {MRI} images. |
| \newblock \emph{arXiv preprint arXiv:2201.01266}, 2022. |
|
|
| \bibitem{Roy2023MedNeXt} |
| S.~Roy, G.~Koehler, C.~Ulrich, M.~Baumgartner, J.~Petersen, F.~Isensee, P.~F. Jaeger, and K.~Maier-Hein. |
\newblock {MedNeXt}: Transformer-driven scaling of {ConvNets} for medical image segmentation.
| \newblock \emph{arXiv preprint arXiv:2303.09975}, 2023. |
|
|
| \bibitem{Kervadec2019BoundaryLoss} |
| H.~Kervadec et~al. |
| \newblock Boundary loss for highly unbalanced segmentation. |
\newblock In \emph{MIDL}, 2019.
|
|
| \bibitem{Karimi2019HDLoss} |
| D.~Karimi and S.~E. Salcudean. |
| \newblock Reducing the Hausdorff distance in medical image segmentation with convolutional neural networks. |
| \newblock \emph{arXiv preprint arXiv:1904.10030}, 2019. |
|
|
| \bibitem{Yang2020FDA} |
| Y.~Yang and S.~Soatto. |
| \newblock {FDA}: Fourier domain adaptation for semantic segmentation. |
| \newblock In \emph{CVPR}, 2020. |
|
|
| \bibitem{Xu2023AmpMix} |
| Q.~Xu et~al. |
| \newblock Fourier-based augmentation with applications to domain generalization. |
| \newblock \emph{Pattern Recognition}, 139:109474, 2023. |
|
|
| \bibitem{Shazeer2017MoE} |
| N.~Shazeer et~al. |
| \newblock Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. |
| \newblock \emph{arXiv preprint arXiv:1701.06538}, 2017. |
|
|
| \bibitem{Fedus2022Switch} |
| W.~Fedus, B.~Zoph, and N.~Shazeer. |
| \newblock Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. |
| \newblock \emph{Journal of Machine Learning Research}, 23(120):1--39, 2022. |
|
|
| \bibitem{Chen2023AdaMVMoE} |
| T.~Chen et~al. |
| \newblock {AdaMV-MoE}: Adaptive multi-task vision mixture-of-experts. |
| \newblock In \emph{ICCV}, 2023. |
|
|
| \end{thebibliography} |
|
|
|
|
| \end{document} |
|
|
|
|
|
|