Papers
arxiv:2304.03895

MCDIP-ADMM: Overcoming Overfitting in DIP-based CT reconstruction

Published on Apr 8, 2023

Abstract

The study proposes a novel CT reconstruction method that combines a multi-code deep image prior with the alternating direction method of multipliers to improve image quality and mitigate overfitting.

AI-generated summary

This paper investigates the application of unsupervised learning methods to computed tomography (CT) reconstruction. To motivate the work, several existing priors are reviewed: the truncated Gaussian prior, the l_1 prior, the total variation prior, and the deep image prior (DIP). DIP outperforms the other three priors in representational capability and visual performance. However, DIP's performance deteriorates once the number of iterations exceeds a certain threshold, due to overfitting. To address this issue, the authors propose MCDIP-ADMM, a method based on a Multi-Code Deep Image Prior and the plug-and-play Alternating Direction Method of Multipliers. Specifically, MCDIP uses multiple latent codes to generate a series of feature maps at an intermediate layer of a generator model; these maps are then composed with trainable weights to represent the complete image prior. Experimental results demonstrate the superior performance of MCDIP-ADMM over three existing competitors. For parallel-beam projection with Gaussian noise, MCDIP-ADMM achieves an average PSNR improvement of 4.3 dB over DIP, 1.7 dB over ADMM DIP-WTV, and 1.2 dB over PnP-DIP. Similarly, for fan-beam projection with Poisson noise, MCDIP-ADMM achieves an average PSNR improvement of 3.09 dB over DIP, 1.86 dB over ADMM DIP-WTV, and 0.84 dB over PnP-DIP.
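The multi-code composition described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the "intermediate layer" is a stand-in random linear map (`W_gen`, `intermediate_features` are hypothetical names), but the structure — several latent codes, one feature map each, a trainable softmax-normalized weight per map — mirrors the MCDIP idea.

```python
import numpy as np

rng = np.random.default_rng(0)

num_codes, code_dim, h, w = 4, 16, 8, 8

# Hypothetical stand-in for an intermediate generator layer:
# a fixed random linear map followed by a nonlinearity.
W_gen = rng.standard_normal((code_dim, h * w)) / np.sqrt(code_dim)

def intermediate_features(z):
    """Map one latent code to an h-by-w feature map (toy generator layer)."""
    return np.tanh(z @ W_gen).reshape(h, w)

# Multiple latent codes, each producing its own feature map.
latent_codes = rng.standard_normal((num_codes, code_dim))
feature_maps = np.stack([intermediate_features(z) for z in latent_codes])

# Trainable composition weights; softmax keeps them positive and summing to 1.
# (In MCDIP these would be optimized jointly with the generator.)
alpha = rng.standard_normal(num_codes)
weights = np.exp(alpha) / np.exp(alpha).sum()

# Weighted sum over the code axis yields the composed prior output.
composed = np.tensordot(weights, feature_maps, axes=1)

assert composed.shape == (h, w)
assert np.isclose(weights.sum(), 1.0)
```

In the actual method this composed output feeds into the plug-and-play ADMM iterations as the image prior; here the sketch only shows the composition step itself.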
