Image-to-Image
Diffusers

Towards Photorealistic and Efficient Bokeh Rendering via Diffusion Framework

This repository contains the model and code presented in the paper Towards Photorealistic and Efficient Bokeh Rendering via Diffusion Framework.

Abstract

Existing mobile devices are constrained by compact optical designs, such as small apertures, which make it difficult to produce natural, optically realistic bokeh effects. Although recent learning-based methods have shown promising results, they still struggle with photos captured at high digital zoom levels, which often suffer from reduced resolution and loss of fine detail. A naive solution is to enhance image quality before applying bokeh rendering, yet this two-stage pipeline reduces efficiency and introduces unnecessary error accumulation. To overcome these limitations, we propose MagicBokeh, a unified diffusion-based framework designed for high-quality and efficient bokeh rendering. Through an alternative training strategy and a focus-aware masked attention mechanism, our method jointly optimizes bokeh rendering and super-resolution, substantially improving both controllability and visual fidelity. Furthermore, we introduce a degradation-aware depth module to enable more accurate depth estimation from low-quality inputs. Experimental results demonstrate that MagicBokeh efficiently produces photorealistic bokeh effects, particularly on real-world low-resolution images, paving the way for future advancements in bokeh rendering.
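The model card does not describe how the focus-aware masked attention mechanism is implemented. As a purely illustrative sketch (not the authors' implementation), one common way to realize such a mechanism is to partition tokens into in-focus and out-of-focus regions and restrict each query to attend within its own region; the function and mask semantics below are assumptions for illustration only:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def focus_masked_attention(q, k, v, focus_mask, penalty=-1e9):
    """Toy focus-aware masked attention (hypothetical, single head).

    q, k, v: (n, d) token features; focus_mask: (n,) bool array,
    True for in-focus tokens. Each query attends only to keys in
    the same focus region, so sharp and blurred areas do not mix.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)          # (n, n) similarity
    same_region = focus_mask[:, None] == focus_mask[None, :]
    scores = np.where(same_region, scores, penalty)  # suppress cross-region
    return softmax(scores, axis=-1) @ v
```

In a real diffusion backbone this mask would typically be derived from a predicted focus or depth map and applied inside the attention layers; the hard binary mask here is only the simplest possible variant.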

Code and Usage

The official code and model are available at the following GitHub repository: https://github.com/vivoCameraResearch/MagicBokeh


Model tree for LinxiaoShi/Magicbokeh
