diff --git "a/20240819/2408.09650v1.json" "b/20240819/2408.09650v1.json" new file mode 100644--- /dev/null +++ "b/20240819/2408.09650v1.json" @@ -0,0 +1,1038 @@ +{ + "title": "ExpoMamba: Exploiting Frequency SSM Blocks for Efficient and Effective Image Enhancement", + "abstract": "Low-light image enhancement remains a challenging task in computer vision, with existing state-of-the-art models often limited by hardware constraints and computational inefficiencies, particularly in handling high-resolution images. Recent foundation models, such as transformers and diffusion models, despite their efficacy in various domains, are limited in use on edge devices due to their computational complexity and slow inference times. We introduce ExpoMamba, a novel architecture that integrates components of the frequency state space within a modified U-Net, offering a blend of efficiency and effectiveness. This model is specifically optimized to address mixed exposure challenges\u2014a common issue in low-light image enhancement\u2014while ensuring computational efficiency. Our experiments demonstrate that ExpoMamba enhances low-light images up to 2-3x faster than traditional models with an inference time of 36.6 ms and achieves a PSNR improvement of approximately 15-20% over competing models, making it highly suitable for real-time image processing applications. Model code is open sourced at: github.com/eashanadhikarla/ExpoMamba.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Enhancing low-light images is crucial for applications ranging from consumer gadgets like phone cameras (Liba et al., 2019 ###reference_b45###; Liu et al., 2024 ###reference_b51###) to sophisticated surveillance systems (Xian et al., 2024 ###reference_b87###; Guo et al., 2024 ###reference_b27###; Shrivastav, 2024 ###reference_b69###). 
Traditional techniques (Dale-Jones & Tjahjadi, 1993 ###reference_b13###; Singh et al., 2015 ###reference_b70###; Khan et al., 2014 ###reference_b39###; Land & McCann, 1971 ###reference_b41###; Ren et al., 2020 ###reference_b65###) often struggle to balance processing speed and high-quality results, particularly with high-resolution images, leading to issues like noise and color distortion in scenarios requiring quick processing such as mobile photography and real-time video streaming.\nLimitations of Current Approaches. \nFoundation models have revolutionized computer vision, including low-light image enhancement, by introducing advanced architectures that model complex relationships within image data.\nIn particular, transformer-based (Wang et al., 2023b ###reference_b80###; Chen et al., 2021a ###reference_b8###; Zhou et al., 2023b ###reference_b105###; Adhikarla et al., 2024 ###reference_b2###) and diffusion-based (Wang et al., 2023c ###reference_b81###, a ###reference_b79###; Zhou et al., 2023a ###reference_b103###) low-light techniques have made significant strides. However, diffusion sampling requires a computationally intensive iterative procedure, and the quadratic runtime of self-attention in transformers makes them unsuitable for real-time use on edge devices, where limited processing power and battery constraints pose significant challenges. 
Innovations such as linear attention (Katharopoulos et al., 2020 ###reference_b38###; Shen et al., 2018 ###reference_b68###; Wang et al., 2020 ###reference_b78###), self-attention approximation, windowing, striding (Kitaev et al., 2020 ###reference_b40###; Zaheer et al., 2020 ###reference_b94###), attention score sparsification (Liu et al., 2021b ###reference_b49###), hashing (Chen et al., 2021c ###reference_b10###), and self-attention operation kernelization (Katharopoulos et al., 2020 ###reference_b38###; Lu et al., 2021 ###reference_b55###; Chen et al., 2021b ###reference_b9###) have aimed to address these complexities, but often at the cost of increased approximation error compared to exact self-attention (Duman Keles et al., 2023 ###reference_b16###; Dosovitskiy et al., 2021 ###reference_b15###). (More details can be found in Appendix A ###reference_###)\nPurpose. With the rising need for better images, advanced compact camera sensors in edge devices have made it commonplace for consumers to capture high-quality images and use them in real-time applications on phones, laptops, and tablets (Morikawa et al., 2021 ###reference_b58###). However, these devices all struggle with non-ideal, low-light conditions in the real world. Our goal is to develop an approach that delivers high image quality (e.g., like CIDNet (Feng et al., 2024 ###reference_b17###)) while also operating at high speed (e.g., like IAT (Cui et al., 2022 ###reference_b12###) and Zero-DCE++ (Li et al., 2021 ###reference_b43###)).\nContributions. 
\nOur contributions are summarized as:\nWe introduce the use of Mamba for efficient low-light image enhancement (LLIE), specifically focusing on mixed exposure challenges, where underlit (insufficient brightness) and overlit (excessive brightness) exist in the same image frame.\nWe propose a novel Frequency State Space Block (FSSB) that combines two distinct 2D-Mamba blocks, enabling the model to capture and enhance subtle textural details often lost in low-light images.\nWe describe a novel dynamic batch training scheme to improve robustness of multi-resolution inference in our proposed model.\nWe implement dynamic processing of the amplitude component to highlight distortion (noise, illumination) and the phase component for image smoothing and noise reduction.\n###figure_1###" + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Exposure Mamba", + "text": "Along the lines of recent efficient sequence modeling approaches (Gu & Dao, 2023 ###reference_b21###; Zhu et al., 2024a ###reference_b106###; Wang et al., 2024 ###reference_b83###), we introduce ExpoMamba, a model combining frequency state-space blocks with spatial convolutional blocks (Fig. 2 ###reference_###). This combination leverages the advantages of frequency domain analysis to manipulate features at different scales and frequencies, crucial for isolating and enhancing patterns challenging to detect in the spatial domain, like subtle textural details in low-light images or managing noise in overexposed areas. Additionally, by integrating these insights with the linear-time complexity benefits of the Mamba architecture, our model efficiently manages the spatial sequencing of image data, allowing rapid processing without the computational overhead of transformer models.\nOur proposed architecture utilizes a 2D scanning approach to tackle mixed-exposure challenges in low-light conditions. 
This model incorporates a combination of U2-Net (Qin et al., 2020 ###reference_b64###) and M-Net (Mehta & Sivaswamy, 2017 ###reference_b57###), supporting 2D sRGB images, with each block performing operations using a convolutional and encoder-style SSM (a state space model is a type of sequence model that transforms a one-dimensional sequence via an implicit hidden state). The subsequent section provides detailed information about our overall pipeline." + }, + { + "section_id": "2.1", + "parent_section_id": "2", + "section_name": "Frequency State Space Block (FSSB)", + "text": "###figure_2### We utilize the frequency state space block (FSSB) to address the computational inefficiencies of transformer architectures, especially when processing high-resolution images or long-sequence data. The FSSB\u2019s motivation is twofold: first, to enhance the intricacies that are unaddressed or missed by the spatial domain alone; and second, to speed up deep feature extraction using the frequency domain.\nThe FSS block (as in Fig. 3 ###reference_###) initiates its processing by transforming the input image into the frequency domain using the Fourier transform:\nF(u, v) = ∫∫ I(x, y) e^{-j2π(ux + vy)} dx dy\nwhere F(u, v) denotes the frequency domain representation of the image, and u and v are the frequency components corresponding to the spatial coordinates (x, y). This transformation allows for the isolation and manipulation of specific frequency components, which is particularly beneficial for enhancing details and managing noise in low-light images. By decomposing the image into its frequency components, we can selectively enhance high-frequency components to improve edge and detail clarity while suppressing low-frequency components that typically contain noise (Lazzarini, 2017 ###reference_b42###; Zhou et al., 2022 ###reference_b104###). 
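As a minimal illustration of this split-and-reweight idea (a sketch only: the actual FSSB learns its enhancement through state-space blocks rather than applying fixed gains, and the function name and gain values here are hypothetical), frequency-selective processing can be written as:

```python
import numpy as np

def frequency_selective_enhance(img, radius=0.1, high_gain=1.5, low_gain=0.8):
    # Move to the frequency domain; shift the DC component to the center.
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each frequency bin from the DC component.
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    # Boost high frequencies (detail, edges); attenuate low frequencies.
    gain = np.where(dist > radius, high_gain, low_gain)
    return np.fft.ifft2(np.fft.ifftshift(F * gain)).real
```

A flat (purely low-frequency) image is uniformly attenuated by low_gain, while edges and fine texture are amplified, mirroring the selective enhancement described above.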
This selective enhancement and suppression improve the overall image quality.\nThe core of the FSSB comprises two 2D-Mamba (Visual-SSM) blocks to process the amplitude and phase components separately in the frequency domain. These blocks model state-space transformations as follows:\nh_t = A h_{t-1} + B x_t,  y_t = C h_t\nHere, A, B, and C are the state matrices that adapt dynamically based on the input features, and h_t represents the state vector at time t. y_t represents the processed feature at time t, capturing the transformed information from the input features. This dual-pathway setup within the FSSB processes amplitude and phase in parallel.\nAfter processing through each of the VSS blocks, the modified amplitude and phase components are recombined and transformed back to the spatial domain using the inverse Fourier transform:\nI'(x, y) = F^{-1}(F'(u, v))\nwhere F'(u, v) is the processed frequency domain representation in the latent space of each M-Net block. This method preserves the structural integrity of the image while enhancing textural details that are typically lost in low-light conditions, removing the need for the self-attention mechanisms widely seen in transformer-based pipelines (Tay et al., 2022 ###reference_b72###). The FSSB also integrates hardware-optimized strategies similar to those employed in the Vision-Mamba architecture (Gu & Dao, 2023 ###reference_b21###; Zhu et al., 2024a ###reference_b106###), such as scan operations and kernel fusion, reducing the amount of memory I/O and facilitating efficient data flow between the GPU\u2019s memory hierarchies. This optimization significantly reduces computational overhead and speeds up the operation (Gu & Dao, 2023 ###reference_b21###), enhancing processing speed for real-time applications. This can be evidently seen in our Fig. 
1 ###reference_###, where increasing the resolution/input length widens the inference-time gap tremendously, and far more so for transformer-based models due to their quadratic complexity.\nWithin the FSS Block, the amplitude and phase components extracted from F(u, v) are processed through dedicated state-space models. These models, adapted from the Mamba framework, are particularly tailored (dynamic adaptation of the state matrices (A, B, C) based on spectral properties, and the dual processing of amplitude and phase components; refer to the FSSB module in Appx E ###reference_###) to enhance information across frequencies, effectively addressing the typical loss of detail in low-light conditions.\nAmplitude and Phase Component Modeling. \nEach component A(u, v) and φ(u, v) undergoes separate but parallel processing paths, modeled by:\nh_t = A h_{t-1} + B x_t,  y_t = C h_t\nwhere h_t denotes the state at time t, x_t represents the frequency-domain input at time t (either amplitude or phase), and A, B, C are the state-space matrices that dynamically adapt during training.\nFrequency-Dependent Dynamic Adaptation. The matrices are not only time-dependent but also frequency-adaptive, allowing the model to respond to varying frequency components effectively. This adaptation is crucial for enhancing specific frequencies more affected by noise and low-light conditions. Specifically, these matrices evolve based on the spectral properties of the input and adjust dynamically during processing. This means that A, B, and C change their values according to both the time step t and the frequency components (u, v), enabling targeted enhancement of the amplitude and phase components in the frequency domain. 
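Concretely, the underlying state-space recurrence can be sketched as follows (an illustrative toy with fixed matrices; in ExpoMamba the matrices additionally vary with the time step and frequency components):

```python
import numpy as np

def ssm_scan(x, A, B, C):
    # x: (T, d_in); A: (d_state, d_state); B: (d_state, d_in); C: (d_out, d_state)
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:
        h = A @ h + B @ x_t   # state update: h_t = A h_{t-1} + B x_t
        ys.append(C @ h)      # readout:      y_t = C h_t
    return np.stack(ys)
```

The scan runs in time linear in the sequence length T, which is the source of the efficiency advantage over quadratic self-attention.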
By evolving to match the spectral characteristics of the input, these matrices optimize the enhancement process.\nAfter separate processing through the state-space models, the modified amplitude and phase are recombined and transformed back into the spatial domain to reconstruct the enhanced image:\nI_enh(x, y) = F^{-1}(A'(u, v) e^{jφ'(u, v)})\nwhere F^{-1} denotes the inverse Fourier transform.\nFeature Recovery in FSSB.\nThe HDR (High Dynamic Range) tone mapping process within the Frequency State Space Block (FSSB) is designed to enhance visibility and detail in low-light conditions by selectively normalizing brightness in overexposed areas. Feature recovery in FSSB aims to address the challenges of high dynamic range scenes, where standard methods often fail to maintain natural aesthetics and details. By implementing a thresholding mechanism on pixel brightness, the HDR layer selectively applies tone mapping to overexposed areas, effectively normalizing brightness without compromising detail or causing the unnatural halos often seen in standard HDR processes (Fig. 4 ###reference_###). This selective approach is crucial as it maintains the natural aesthetic of the image while enhancing visibility and detail. The HDR layer is consistently applied as the final layer within each FSSB block and culminates as the ultimate layer in the ExpoMamba model, providing coherent enhancement across all processed images.\nWe leverage the ComplexConv function from complex networks as introduced by Trabelsi et al. (Trabelsi et al., 2018 ###reference_b73###). This function is incorporated into our model to capture and process additional information beyond traditional real-valued convolutions. Specifically, the ComplexConv function allows the simultaneous manipulation of amplitude and phase information in the frequency domain, which is essential to preserve the integrity of textural details in low-light images. The dual processing of amplitude and phase ensures that each component can be optimized separately. 
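The dual amplitude/phase pathway can be sketched as follows (a minimal sketch: amp_fn and phase_fn are hypothetical stand-ins for the two learned VSS pathways, passed in here as plain callables):

```python
import numpy as np

def process_in_frequency(img, amp_fn, phase_fn):
    # Decompose the spectrum into amplitude and phase.
    F = np.fft.fft2(img)
    amp, phase = np.abs(F), np.angle(F)
    # Each component is enhanced on its own pathway, then recombined.
    F_new = amp_fn(amp) * np.exp(1j * phase_fn(phase))
    return np.fft.ifft2(F_new).real
```

With identity pathways, the input is reconstructed exactly, confirming that the split into amplitude and phase is lossless before any enhancement is applied.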
Tone mapping and ComplexConv have proven to be effective in overcoming the limitations of traditional image processing techniques (Hu et al., 2022 ###reference_b28###; Liu, 2024 ###reference_b52###). We integrate these methods into our FSS design to address adverse lighting conditions in low-light environments.\nThe input components in the frequency representation are processed through dynamic amplitude scaling and phase continuity layers, as shown in Fig. 3 ###reference_###. Consistent with the claim made by Fourmer (Zhou et al., 2023b ###reference_b105###), we have determined that the primary source of image degradation is indeed amplitude, specifically in the area between the amplitude and phase division within the image. Moreover, we found that the amplitude component primarily contains information about the brightness of the image, which directly impacts the visibility and the sharpness of the features within the image. The phase component, however, encodes the positional information of these features, defining the structure and the layout of the image. It has previously been found that the phase component of an image has a close relation with perceptual analysis (Xiao & Hou, 2004 ###reference_b88###). Along those lines, we show that the human visual system is more sensitive to changes in phase than in amplitude (proof in Appx C.1 ###reference_###).\n###figure_3### \u201cda\u201d - Dynamic adjustment (refer to Appendix C.3 ###reference_###) / \u201cgt\u201d - With ground-truth mean." + }, + { + "section_id": "2.2", + "parent_section_id": "2", + "section_name": "Multi-modal Feature Learning", + "text": "The inherent complexity of low-light images, where both underexposed and overexposed elements coexist, necessitates a versatile approach to image processing. Traditional methods, which typically focus either on spatial details or frequency-based features, fail to adequately address the full spectrum of distortions encountered in such environments. 
By contrast, the hybrid modeling approach of \u201cExpoMamba\u201d leverages the strengths of both the spatial and frequency domains, facilitating a more comprehensive and nuanced enhancement of image quality.\nOperations in the frequency domain, such as the Fourier transform, can isolate and address specific types of distortion, such as noise and fine details, which are often exacerbated in low-light conditions. This domain provides a global view of the image data, allowing for the manipulation of features that are not easily discernible in the spatial layout. Simultaneously, the spatial domain is critical to maintaining the local coherence of image features, ensuring that enhancements do not introduce unnatural artifacts. Finally, the hybrid-modeled features pass through deep supervision, where we combine ExpoMamba\u2019s intermediate-layer outputs, apply a color correction matrix in the latent dimensions, and pass the result through the final layer." + }, + { + "section_id": "2.3", + "parent_section_id": "2", + "section_name": "Dynamic Patch Training", + "text": "Dynamic patch training enhances the 2D scanning model by optimizing its scanning technique for various image resolutions. In ExpoMamba, 2D scanning involves sequentially processing image patches to encode feature representations. We create batches of images at different resolutions: within a given batch the resolution is fixed, and we randomize the resolution across batches during training. In this way, the model learns to adapt its scanning and encoding process to different scales and levels of detail (Fig 5 ###reference_###). This variability helps the model become more efficient at capturing and processing fine-grained details across different image resolutions, ensuring consistent performance. 
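A minimal sketch of this batching scheme (the sampler name and signature are hypothetical; a real training loop would additionally resize or crop each sample to the drawn resolution):

```python
import random

def dynamic_resolution_batches(dataset_indices, resolutions, batch_size, seed=0):
    # Each batch uses a single resolution, but the resolution is
    # re-drawn at random from batch to batch.
    rng = random.Random(seed)
    idx = list(dataset_indices)
    rng.shuffle(idx)
    for i in range(0, len(idx), batch_size):
        res = rng.choice(resolutions)
        yield res, idx[i:i + batch_size]
```

Keeping the resolution fixed within a batch preserves efficient tensor batching, while varying it across batches exposes the 2D scan to many scales during training.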
Consequently, the model\u2019s ability to handle mixed-exposure conditions is improved, as it can effectively manage diverse resolutions and adapt its feature extraction process dynamically, enhancing its robustness and accuracy in real-world applications." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Experiments and Implementation details", + "text": "In this section, we evaluate our method through a series of experiments. We begin by outlining the datasets used, experimental setup, followed by a comparison of our method against state-of-the-art techniques using four quantitative metrics. We also perform a detailed ablation study (Appx E ###reference_###, Tab. 5 ###reference_###) to analyze the components of our proposed method." + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Datasets", + "text": "To test the efficacy of our model, we evaluated ExpoMamba on four datasets: (1) LOL (Wei et al., 2018a ###reference_b84###), which has v1 and v2 versions. LOLv2 (Yang et al., 2020a ###reference_b89###) is divided into real and synthetic subsets. The training and testing sets are split into 485/15, 689/100, and 900/100 on LOLv1, LOLv2-real, and LOLv2-synthetic with resolution images. (2) LOL4K is an ultra-high definition dataset with resolution images, containing 8,099 pairs of low-light/normal-light images, split into 5,999 pairs for training and 2,100 pairs for testing. (3) SICE (Cai et al., 2018 ###reference_b5###) includes 4,800 images, real and synthetic, at various exposure levels and resolutions, divided into training, validation, and testing sets in a 7:1:2 ratio.\nWe use dynamic adjustment for both \u2018s\u2019 and \u2018l\u2019 ExpoMamba models during inference." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "Experimental setting", + "text": "The proposed network is a single-stage end-to-end training model. 
The patch sizes are set to , , and with checkpoint restarts and batch sizes of , , and , respectively, in consecutive runs. For dynamic patch training, we use different patch sizes simultaneously. The optimizer is RMSProp with a learning rate of , a weight decay of , and momentum of . A linear warm-up cosine annealing (Loshchilov & Hutter, 2016 ###reference_b54###) scheduler with warm-up epochs is used, starting with a learning rate of . All experiments were carried out using the PyTorch library (Paszke et al., 2019 ###reference_b61###) on an NVIDIA A10G GPU.\nLoss functions. To optimize our ExpoMamba model we use a set of loss functions:\nOur \u2018s\u2019: smallest model outperforms all the baselines." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Results", + "text": "The best performance for Tab. 1 ###reference_###, Tab. 2 ###reference_###, and Tab. 3 ###reference_### are marked with Red, Green, and Blue, respectively.\nTab. 1 ###reference_### compares our performance to 31 state-of-the-art baselines, including lightweight and heavy models. We evaluate ExpoMamba\u2019s performance using SSIM, PSNR, LPIPS, and FID. ExpoMamba achieves an inference time of 36 ms, faster than most baselines (Fig. 1 ###reference_###) and the fastest among comparable models. Models like DiffLL (Jiang et al., 2023 ###reference_b34###), CIDNet (Feng et al., 2024 ###reference_b17###), and LLformer (Wang et al., 2023b ###reference_b80###) have comparable results but much longer inference times. Traditional algorithms (e.g., MSRCR (Jobson et al., 1997 ###reference_b36###), MF (Fu et al., 2016a ###reference_b19###), BIMEF (Ying et al., 2017 ###reference_b92###), SRIE (Fu et al., 2016b ###reference_b20###), FEA (Dong et al., 2011 ###reference_b14###), NPE (Wang et al., 2013 ###reference_b77###), LIME (Guo et al., 2016 ###reference_b26###)) generally perform poorly on LOL4K (Tab. 2 ###reference_###). Fig. 
1 ###reference_###.b shows that increasing image resolution to 4K significantly increases inference time for transformer models due to their quadratic complexity. Despite being a 41-million-parameter model, ExpoMamba demonstrates remarkable storage efficiency, consuming less memory (2923 MB) than CIDNet, which, despite its smaller size of 1.9 million parameters, consumes 8249 MB. This is because ExpoMamba\u2019s state expansion fits inside the GPU\u2019s high-bandwidth memory and removes the quadratic bottleneck, which significantly reduces the memory footprint. Current SOTA models CIDNet (Feng et al., 2024 ###reference_b17###) and LLformer (Wang et al., 2023b ###reference_b80###) are slower and less memory-efficient." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Conclusion", + "text": "We introduced ExpoMamba, a model designed for efficient and effective low-light image enhancement. By integrating frequency state-space components within a U-Net variant, ExpoMamba leverages spatial and frequency domain processing to address computational inefficiencies and high-resolution challenges. Our approach combines the robust feature extraction of state-space models, enhancing low-light images with high fidelity and achieving impressive inference speeds. Our novel dynamic patch training strategy significantly improves robustness and adaptability to real-world hardware constraints, making it suitable for real-time applications on edge devices. Experimental results show that ExpoMamba is substantially faster than, and competitive in quality with, numerous existing transformer and diffusion models, setting a new benchmark in low-light image enhancement." 
+ } + ], + "appendix": [ + { + "section_id": "Appendix x1", + "parent_section_id": null, + "section_name": "Appendix", + "text": "" + }, + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A Related Work", + "text": "Traditional methods for low-light image enhancement often rely on histogram equalization (HE) (Dale-Jones & Tjahjadi, 1993 ###reference_b13###; Singh et al., 2015 ###reference_b70###; Khan et al., 2014 ###reference_b39###) and Retinex theory (Land & McCann, 1971 ###reference_b41###; Ren et al., 2020 ###reference_b65###). HE based methods aim to adjust the contrast of the image by uniformly distributing the pixel intensities, which can sometimes lead to overenhancement and noise amplification, which were later investigated more carefully by CegaHE (Chiu & Ting, 2016 ###reference_b11###), UMHE (Kansal et al., 2018 ###reference_b37###), etc. Retinex theory, which decomposes an image into illumination and reflectance, provides a more principled approach to enhancement but still faces limitations in complex lighting conditions.\nConvolutional Neural Networks (CNNs) have significantly advanced this field. Early works like LLNet (Lore et al., 2017 ###reference_b53###) used autoencoders to enhance low-light image visibility. The SID (See-in-the-Dark) network (Chen et al., 2018b ###reference_b7###) leveraged raw image data for better enhancement by training on paired low-light and normal-light images. 
Other works in paired training include DSLR (Lim & Kim, 2020 ###reference_b46###), DRBN (Yang et al., 2020b ###reference_b90###), KinD (Zhang et al., 2019a ###reference_b99###), KinD++ (Zhang et al., 2021b ###reference_b101###), MIRNet (Zamir et al., 2020 ###reference_b95###), ReLLIE (Zhang et al., 2021a ###reference_b98###), DDIM (Song et al., 2020 ###reference_b71###), SCI (Ma et al., 2022 ###reference_b56###), RAUS (Liu et al., 2021a ###reference_b48###), Restormer (Zamir et al., 2022 ###reference_b96###), CIDNet (Feng et al., 2024 ###reference_b17###), LLFormer (Wang et al., 2023b ###reference_b80###), SNRNet (Lin et al., 2020 ###reference_b47###), Uformer (Wang et al., 2022b ###reference_b82###), and CDEF (Valin, 2016 ###reference_b74###). Methods like RetinexNet (Wei et al., 2018b ###reference_b85###), which decompose images into illumination and reflectance components, also show considerable promise but often struggle with varying lighting conditions.\nTransformer Models. Such approaches have gained popularity for modeling long-range dependencies in images. LLFormer (Wang et al., 2023b ###reference_b80###) leverages transformers for low-light enhancement by focusing on global context, significantly improving image quality. Fourmer (Zhou et al., 2023b ###reference_b105###) introduces a Fourier transform-based approach within the transformer architecture, while IAT (Cui et al., 2022 ###reference_b12###) adapts ISP-related parameters to address low-level and high-level vision tasks. IPT (Chen et al., 2021a ###reference_b8###) uses a multi-head, multi-tail shared pre-trained transformer module for image restoration. LYT-Net (Brateanu et al., 2024 ###reference_b4###) addresses image enhancement with minimal computing resources by using YUV colorspace for transformer models. Despite their effectiveness, these transformer models often require substantial computational resources, limiting their practicality on edge devices.\nDiffusion Models. 
Diffusion models have shown great potential in generating realistic and detailed images. The ExposureDiffusion model (Wang et al., 2023c ###reference_b81###) integrates a diffusion process with a physics-based exposure model, enabling accurate noise modeling and enhanced performance in low-light conditions. Pyramid Diffusion (Zhou et al., 2023a ###reference_b103###) addresses computational inefficiencies by introducing a pyramid resolution approach, speeding up enhancement without sacrificing quality. (Saharia et al., 2022 ###reference_b67###) handles image-to-image tasks using conditional diffusion processes. Models like (Zhang et al., 2022 ###reference_b97###) and deep non-equilibrium approaches (Pokle et al., 2022 ###reference_b63###) aim to reduce sampling steps for faster inference. However, starting from pure noise in conditional image restoration tasks remains a challenge for maintaining image quality while cutting down inference time (Guo et al., 2023 ###reference_b25###).\nHybrid Modelling. Hybrid modelling, which learns features in both the spatial and frequency domains, has been another popular direction in image enhancement/restoration tasks. It has mostly been explored in three sub-categories: (1) Fourier Transform (Yuan et al., 2024 ###reference_b93###), Fourmer (Zhou et al., 2023b ###reference_b105###), FD-VisionMamba (Zheng & Zhang, 2024 ###reference_b102###); (2) Wavelet Transform; (3) Homomorphic Filtering. Such methods demonstrate that leveraging both spatial and frequency information can significantly improve enhancement performance.\nState-Space Models. Recent advancements reveal the efficacy of state space models (SSMs) as a robust architecture in the foundation-model era for sequence modeling, offering a fresh perspective beyond conventional RNNs, CNNs, and Transformers. 
Pioneering this shift, the S4 (Gu et al., 2021 ###reference_b22###) model demonstrated superior performance in managing long-range dependencies by employing the HiPPO matrix (Fu et al., 2022 ###reference_b18###) to define state dynamics systematically. Initially introduced for audio processing, SSMs have emerged as an alternative and were later expanded into the language and vision domains for handling long-range dependencies and temporal dynamics, becoming a strong competitor to current transformer-based methods. The V-Mamba architecture (Zhu et al., 2024b ###reference_b107###; Yang et al., 2024 ###reference_b91###) combines state-space models with U-Net frameworks to capture detailed image aspects at multiple scales, proving effective in biomedical image segmentation. Furthermore, the S4 architecture (Gu et al., 2021 ###reference_b22###; Nguyen et al., 2022 ###reference_b59###) extends this idea by incorporating linear state-space models for fast and efficient sequence modeling, making it suitable for real-time applications." + }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B The Importance of Inference Time over FLOPs in Real-World Applications", + "text": "In our paper, we use inference time as a measure because, unlike the abstract measure of FLOPs (floating-point operations), it reflects actual performance in real-world applications, being influenced not only by hardware speed but also by model design and optimization.\nIn practical scenarios, such as systems requiring real-time processing like autonomous vehicles and interactive AI applications, the agility of model inference directly impacts usability and user experience. Moreover, as inference constitutes the primary computational expense post-deployment, optimizing inference time enhances both the cost-effectiveness and the energy efficiency of AI systems. 
Thus, we focused on minimizing inference time, rather than merely reducing FLOPs, ensuring that AI models are not only theoretically efficient but are also pragmatically viable in dynamic real-world environments. We believe that this approach not only accelerates the adoption of AI technologies but also drives advancements in developing models that are both performant and sustainable." + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Detailed Methodology", + "text": "For an image I(x, y), its Fourier transform is given by:\nF(u, v) = ∫∫ I(x, y) e^{-j2π(ux + vy)} dx dy\nThis can be decomposed into amplitude A(u, v) and phase φ(u, v):\nF(u, v) = A(u, v) e^{jφ(u, v)}, with A(u, v) = |F(u, v)| and φ(u, v) = arg F(u, v)\nThe inverse Fourier transform, which reconstructs the image from its frequency representation, is:\nI(x, y) = ∫∫ A(u, v) e^{jφ(u, v)} e^{j2π(ux + vy)} du dv\nSuppose that the phase component is uniformly shifted by an angle θ; the new phase is φ'(u, v) = φ(u, v) + θ. The modified image I'(x, y) with this new phase is represented as:\nI'(x, y) = ∫∫ A(u, v) e^{j(φ(u, v) + θ)} e^{j2π(ux + vy)} du dv\nUsing Euler\u2019s formula e^{jθ} = cos θ + j sin θ, the equation becomes:\nI'(x, y) = ∫∫ A(u, v) e^{jφ(u, v)} (cos θ + j sin θ) e^{j2π(ux + vy)} du dv\nGiven that cos θ and sin θ are constants for a particular θ, they can be factored out of the integral:\nI'(x, y) = cos θ · I(x, y) + sin θ · Ĩ(x, y), where Ĩ(x, y) = ∫∫ A(u, v) e^{j(φ(u, v) + π/2)} e^{j2π(ux + vy)} du dv\nThis shows that the new image is a linear combination of the original image and another image derived from the same amplitude and a phase-shifted version of the original phase components. The transformation demonstrates that even a constant shift in the phase component translates into a significant transformation in the spatial domain, affecting the structural layout and visual features of the image.\n###figure_4### To address the hardware constraints in real-time scenarios such as phones or laptop webcams, which often adjust camera resolutions to optimize performance within design and battery limits, there is a critical need for models that dynamically adapt to these variations. Feeding various image resolutions to the model dynamically also helps avoid spurious correlations that are formed due to strong correlation (Adhikarla et al., 2023 ###reference_b1###) in the data distribution of certain types of images. 
For instance, the SICEv2 dataset has relatively more mixed-exposure images, and the borders of sudden changes in exposure become more prone to spurious correlations. However, our ExpoMamba uses the spatial and temporal components inherently designed into Vision Mamba (Zhu et al., 2024a ###reference_b106###) to handle both the spatial distribution of pixels in images and the temporal sequence of frames in videos.\nThe Dynamic Adjustment Approximation module offers a unique way to enhance images without needing the ground-truth mean and variance. Instead, it dynamically adjusts brightness by using the image\u2019s own statistical properties, specifically the median and mean pixel values. Unlike previous models such as KinD, LLFlow, and RetinexFormer, which relied on static adjustment factors from the ground truth and often produced less accurate results otherwise, our method calculates a desired shift based on the difference between a normalized brightness value and the image\u2019s mean. It then adjusts the image\u2019s median toward this shift, taking both the current median and mean into account. This leads to a more balanced and natural enhancement. Adjustment factors are carefully computed to avoid infinite or undefined values, ensuring stability. This approach simplifies the process by not requiring ground-truth data and also improves the efficiency and effectiveness of image enhancement.\nThis model configuration table provides a detailed comparison between the two variants of ExpoMamba, highlighting their configurations and performance metrics. 
Notably, despite an increase of 125 million parameters, the memory consumption of the larger ExpoMamba-large variant is 5690 MB, which is a modest increase compared to transformer-based models.\nThe following pseudocode presents the details of ExpoMamba training with FSSB blocks:" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Loss function", + "text": "The combined loss function, as shown in Eq. 7 ###reference_###, is designed to enhance image quality by addressing different aspects of image reconstruction. The L1 loss ensures pixel-level accuracy, crucial for maintaining sharp edges. This loss component has been widely utilized in low-light enhancement papers and has proven valuable for training a variety of image restoration tasks. VGG loss, leveraging high-level features, maintains perceptual similarity. SSIM loss preserves structural integrity and local visual quality, which is vital for a coherent visual experience. LPIPS loss focuses on perceptual differences to generate natural-looking images. Additionally, the overexposed regularizer detects and penalizes overexposed areas, crucial for handling HDR content and preserving details. It works in combination with the HDR blocks to suppress artifacts in overexposed areas and control enhancement. In Eq. 7 ###reference_###, a scalar coefficient weights the overexposed regularization term." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Ablation Study", + "text": "When \u2018DoubleConv\u2019 is not used, we default to using the standard U-Net/M-Net architecture\u2019s 2D convolutional block.\nWe performed the ablation study of our model ExpoMamba over the LOL-v1 dataset. We used the \u2018DoubleConv\u2019 block instead of the regular convolutional blocks in the standard U-Net/M-Net architecture. \u2018Block\u2019 represents the residual block inside every upsampling block. 
We implemented two variants of the HDR layer, where HDR and HDROut represent the same single-layer approach with different placement locations. HDR-CSRNet+, on the other hand, is a deeper network originally designed for congested scene recognition, used inside the FSSB instead of the simple HDR layer.\nDoubleConv: Its absence results in lower PSNR and SSIM scores, confirming its importance.\nBlock: Inclusion of residual blocks improves performance metrics.\nFSSB: Significantly enhances model performance, indicating its crucial role.\nHDR vs. HDR-CSRNet+ vs. HDROut:\nHDR: Provides notable improvements but is outperformed by HDR-CSRNet+.\nHDR-CSRNet+: Offers the best results among the HDR variants.\nHDROut: Slightly less effective than HDR-CSRNet+.\nDA (Dynamic Adjustment during inference): Consistently boosts the model\u2019s PSNR and SSIM slightly based on the input mean value.\n###figure_5### ###figure_6### ###figure_7### ###figure_8###" + } + ], + "tables": { + "1": { + "table_html": "
\n
Table 1: Comparing four popular metrics such that every column showcases the top three methods; Red, Green, and Blue representing the best, second best, and third best models among the proposed and all popular SOTA models from 2011\u20132024.
\n
\n
Methods | Reference | LOLv1 | LOLv2 (Real Captured) | Inference time (ms)
PSNR | SSIM | LPIPS | FID | PSNR | SSIM | LPIPS | FID
NPE\u2020\nTIP\u20191316.9700.4840.400104.0517.3330.4640.396100.02-
SRIE\u2020\nCVPR\u20191611.8550.4950.353088.7214.4510.5240.332078.83-
BIMEF\u2020\narXiv\u20191713.8750.5950.326-17.2000.7130.307--
FEA\u2020\nICME\u20191116.7160.4780.384120.0517.2830.7010.398119.28-
MF\u2020\nSignal Process\u20191616.9660.5070.379-17.5000.751---
LIME\u2020\nTIP\u20191617.5460.5310.387117.8917.4830.5050.428118.1791.12
Retinex\u2021\nBMVC\u20191816.7740.4620.417126.2617.7150.6520.436133.914493
DSLR\u2021\nTMM\u20192014.8160.5720.375104.4317.0000.5960.408114.311537
KinD\u2021\nACM MM\u20191917.6470.7710.175-----2130
DRBN\u2021\nCVPR\u20192016.6770.730.345098.7318.4660.7680.352089.092462
Zero-DCECVPR\u20192014.8610.5620.372087.2418.0590.580.352080.452436
Zero-DCE++TPAMI\u20192114.7480.5170.328-----2618
MIRNetECCV\u20192024.1380.8300.250069.1820.0200.820.233049.111795
EnlightenGAN\u2021\nTIP\u20192117.6060.6530.372094.7018.6760.6780.364084.04-
ReLLIE\u2021\nACM MM\u20192111.4370.4820.375095.5114.4000.5360.334079.843.500
RUAS\u2021\nCVPR\u20192116.4050.5030.364101.9715.3510.4950.395094.1615.51
DDIMICLR\u20192116.5210.7760.376084.0715.2800.7880.387076.391213
CDEFTMM\u20192216.3350.5850.407090.6219.7570.630.349074.06-
SCICVPR\u20192214.7840.5250.366078.6017.3040.540.345067.621755
URetinex-NetCVPR\u20192219.8420.8240.237052.3821.0930.8580.208049.841804
SNRNet\u2021\nCVPR\u20192223.4320.8430.234055.1221.4800.8490.237054.5372.16
Uformer\u22c6\nCVPR\u20192219.0010.7410.354109.3518.4420.7590.347098.14901.2
Restormer\u22c6\nCVPR\u20192220.6140.7970.288073.1024.9100.8510.264058.65513.1
Palette\u2663\nSIGGRAPH\u20192211.7710.5610.498108.2914.7030.6920.333083.94168.5
UHDFour\u2021\nICLR\u20192323.0930.8210.259056.9121.7850.8540.292060.8464.92
WeatherDiff\u2663\nTPAMI\u20192317.9130.8110.272073.9020.0090.8290.253059.675271
GDP\u2663\nCVPR\u20192315.8960.5420.421117.4614.2900.4930.435102.41-
DiffLL\u2663\nACM ToG\u20192326.3360.8450.217048.1128.8570.8760.207045.36157.9
CIDNet\u2021\narXiv\u20192423.0900.8510.085-23.2200.8630.103--
LLformer\u22c6\nAAAI\u20192322.8900.8160.202-23.1280.8550.153-1956
ExpoMamba22.8700.8450.215097.6523.0000.8600.203094.2736.00
-23.0920.8470.214092.1723.1310.8680.224090.2238.00
25.7700.8600.212089.2128.0400.8850.232085.9236.00

    \u201cda\u201d - Dynamic adjustment. (refer Appendix-C.3 ###reference_###) \u00a0/\u00a0 \u201cgt\u201d - With ground-truth mean.

", + "capture": "Table 1: Comparing four popular metrics such that every column showcases the top three methods; Red, Green, and Blue representing the best, second best, and third best models among the proposed and all popular SOTA models from 2011\u20132024." + }, + "2": { + "table_html": "
\n
Table 2: Evaluation on the UHD-LOL4K dataset. Symbols , and denote traditional, supervised CNN, unsupervised CNN, zero-shot, and transformer-based models, respectively.\n
\n
\n
Methods | UHD-LOL4K
PSNR | SSIM | LPIPS | MAE
BIMEF\u2020 (Ying et al., 2017) | 18.1001 | 0.8876 | 0.1323 | 0.1240
LIME\u2020 (Guo et al., 2016) | 16.1709 | 0.8141 | 0.2064 | 0.1285
NPE\u2020 (Wang et al., 2013) | 17.6399 | 0.8665 | 0.1753 | 0.1125
SRIE\u2020 (Fu et al., 2016b) | 16.7730 | 0.8365 | 0.1495 | 0.1416
MSRCR\u2020 (Jobson et al., 1997) | 12.5238 | 0.8106 | 0.2136 | 0.2039
RetinexNet\u2021 (Wei et al., 2018b) | 21.6702 | 0.9086 | 0.1478 | 0.0690
DSLR\u2021 (Lim & Kim, 2020) | 27.3361 | 0.9231 | 0.1217 | 0.0341
KinD\u2021 (Zhang et al., 2019b) | 18.4638 | 0.8863 | 0.1297 | 0.1060
Z_DCE\u00a7 (Guo et al., 2020a) | 17.1873 | 0.8498 | 0.1925 | 0.1465
Z_DCE++\u00a7 (Li et al., 2021) | 15.5793 | 0.8346 | 0.2223 | 0.1701
RUAS\u25b3 (Liu et al., 2021c) | 14.6806 | 0.7575 | 0.2736 | 0.1690
ELGAN\u25b3 (Jiang et al., 2021) | 18.3693 | 0.8642 | 0.1967 | 0.1011
Uformer (Wang et al., 2022b) | 29.9870 | 0.9804 | 0.0342 | 0.0376
Restormer (Zamir et al., 2022) | 36.9094 | 0.9881 | 0.0226 | 0.0117
LLFormer (Wang et al., 2023b) | 37.3340 | 0.9862 | 0.0200 | 0.0116
UHD-Four (Li et al., 2023) | 35.1010 | 0.9901 | 0.0210 | -
ExpoMamba-s | 28.3300 | 0.9730 | 0.0820 | 0.0315
ExpoMamba-l | 35.2300 | 0.9890 | 0.0630 | 0.0451

    We use dynamic adjustment for both \u2018s\u2019 and \u2018l\u2019 ExpoMamba models during inference.

", + "capture": "Table 2: Evaluation on the UHD-LOL4K dataset. Symbols , and denote traditional, supervised CNN, unsupervised CNN, zero-shot, and transformer-based models, respectively.\n" + }, + "3": { + "table_html": "
\n
Table 3: Results for our Exposure Mamba approach over SICE-v2 (Cai et\u00a0al., 2018) datasets.
\n
\n
Method | SICE-v2 | #params
Underexposure | Overexposure | Average
PSNR | SSIM | PSNR | SSIM | PSNR | SSIM
HE (Pitas, 2000)\n14.690.565112.870.499113.780.5376-
CLAHE (Reza, 2004)\n12.690.503710.210.484711.450.4942-
RetinexNet (Wei et\u00a0al., 2018a)\n12.940.517112.870.525212.900.52120.84M
URetinexNet (Wu et\u00a0al., 2022)\n12.390.54447.400.454312.400.54961.32M
Zero-DCE (Guo et\u00a0al., 2020b)\n16.920.63307.110.429212.020.52110.079M
Zero-DCE++ (Li et\u00a0al., 2021)\n11.930.47556.880.40889.410.44220.010M
DPED (Ignatov et\u00a0al., 2017)\n16.830.61337.990.430012.410.52170.39M
KIND (Zhang et\u00a0al., 2019a)\n15.030.670012.670.670013.850.67000.59M
DeepUPE (Wang et\u00a0al., 2019)\n16.210.680711.980.596714.100.63877.79M
SID (Chen et\u00a0al., 2018a)\n19.510.663516.790.644418.150.6540-
SID-ENC (Huang et\u00a0al., 2022)\n21.360.665219.380.684320.370.6748-
SID-L (Huang et\u00a0al., 2022)\n19.430.664417.000.649518.220.657011.56M
RUAS (Liu et\u00a0al., 2021a)\n16.630.55894.540.319610.590.43940.0014M
SCI (Ma et\u00a0al., 2022)\n17.860.64014.450.362912.490.50510.0003M
MSEC (Afifi et\u00a0al., 2021)\n19.620.651217.590.656018.580.65367.04M
CMEC (Nsampi et\u00a0al., 2021)\n17.680.659218.170.681117.930.67025.40M
LCDPNet (Wang et\u00a0al., 2022a)\n17.450.562217.040.646317.250.60430.96M
DRBN (Yang et\u00a0al., 2020b)\n17.960.676717.330.682817.650.67980.53M
DRBN+ERL (Huang et\u00a0al., 2023)\n18.090.673517.930.686618.010.67960.53M
DRBN-ERL+ENC (Huang et\u00a0al., 2023)\n22.060.705319.500.720520.780.71290.58M
ELCNet (Huang & Belongie, 2017)\n22.050.689319.250.687220.650.68610.018M
ELCNet+ERL (Huang et\u00a0al., 2023)\n22.140.690819.470.698220.810.69450.018M
FECNet (Huang et\u00a0al., 2019)\n22.010.673719.910.696120.960.68490.15M
FECNet+ERL (Huang et\u00a0al., 2023)\n22.350.667120.100.689121.220.67810.15M
IAT (Cui et\u00a0al., 2022)\n21.410.660122.290.681321.850.67070.090M
22.590.716120.620.739221.610.727741M

    Our \u2018s\u2019: smallest model outperforms all the baselines.

", + "capture": "Table 3: Results for our Exposure Mamba approach over SICE-v2 (Cai et\u00a0al., 2018) datasets." + }, + "4": { + "table_html": "
\n
Table 4: We describe two variants of our model, s\u2019 and l\u2019 represent small and large model configurations.
\n
Model Type | base channel | patch size | depth | params | inference speed | memory consumption
ExpoMamba-s | 48 | 4 | 1 | 41 M | 36 ms | 2923 MB
ExpoMamba-l | 96 | 6 | 4 | 166 M | 95.6 ms | 5690 MB
\n
\n
", + "capture": "Table 4: We describe two variants of our model, s\u2019 and l\u2019 represent small and large model configurations. " + }, + "5": { + "table_html": "
\n
Table 5: Ablation Study on various components inside our proposed model ExpoMamba.
\n
DoubleConv | Block | FSSB | HDR | HDR-CSRNet+ | HDROut | DA | PSNR | SSIM
\u2713 | \u2717 | \u2717 | \u2717 | \u2717 | \u2717 | \u2717 | 18.978 | 0.815
\u2717 | \u2713 | \u2717 | \u2717 | \u2717 | \u2717 | \u2717 | 19.787 | 0.828
\u2717 | \u2717 | \u2713 | \u2717 | \u2717 | \u2717 | \u2717 | 22.459 | 0.836
\u2717 | \u2717 | \u2717 | \u2713 | \u2717 | \u2717 | \u2717 | 20.576 | 0.823
\u2713 | \u2713 | \u2713 | \u2717 | \u2717 | \u2717 | \u2717 | 24.878 | 0.841
\u2713 | \u2713 | \u2713 | \u2713 | \u2717 | \u2713 | \u2713 | 25.110 | 0.845
\u2713 | \u2713 | \u2713 | \u2717 | \u2713 | \u2713 | \u2713 | 25.640 | 0.860

    When \u2018DoubleConv\u2019 is not used, we default to using the standard U-Net/M-Net architecture\u2019s 2D convolutional block.

", + "capture": "Table 5: Ablation Study on various components inside our proposed model ExpoMamba." + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2408.09650v1_figure_1(a).png", + "caption": "Figure 1: [top: 400x600; bottom: 3840x2160] Scatter plot of model inference time vs. PSNR. Baselines that used ground-truth mean information to produce metrics were reproduced without such information for fairness.", + "url": "http://arxiv.org/html/2408.09650v1/x1.png" + }, + "1(b)": { + "figure_path": "2408.09650v1_figure_1(b).png", + "caption": "Figure 1: [top: 400x600; bottom: 3840x2160] Scatter plot of model inference time vs. PSNR. Baselines that used ground-truth mean information to produce metrics were reproduced without such information for fairness.", + "url": "http://arxiv.org/html/2408.09650v1/x2.png" + }, + "2": { + "figure_path": "2408.09650v1_figure_2.png", + "caption": "Figure 2: Overview of the ExpoMamba Architecture. The diagram illustrates the information flow through the ExpoMamba model. The architecture efficiently processes sRGB images by integrating convolutional layers, 2D-Mamba blocks, and deep supervision mechanisms to enhance image reconstruction, particularly in low-light conditions.", + "url": "http://arxiv.org/html/2408.09650v1/x3.png" + }, + "3": { + "figure_path": "2408.09650v1_figure_3.png", + "caption": "Figure 3: Frequency State-Space Block (FSSB) Processing. The FSSB module is detailed within the ExpoMamba architecture.", + "url": "http://arxiv.org/html/2408.09650v1/x4.png" + }, + "4": { + "figure_path": "2408.09650v1_figure_4.png", + "caption": "Figure 4: Representing the effectiveness of HDR tone mapping layer inside FSS block. 
Using CSRNet with shrunk conditional blocks and dilated convolutions to remove overexposed artifacts.", + "url": "http://arxiv.org/html/2408.09650v1/x5.png" + }, + "5": { + "figure_path": "2408.09650v1_figure_5.png", + "caption": "Figure 5: The downsampled images are prepared in multiple different training resolutions with padding to dynamically load the batched images of different resolutions.", + "url": "http://arxiv.org/html/2408.09650v1/x6.png" + }, + "6(a)": { + "figure_path": "2408.09650v1_figure_6(a).png", + "caption": "Figure 6: Images shown are from the LOL-v1 dataset. The left column is the input, the middle column is the model output, and the third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/468.png" + }, + "6(b)": { + "figure_path": "2408.09650v1_figure_6(b).png", + "caption": "Figure 6: Images shown are from the LOL-v1 dataset. The left column is the input, the middle column is the model output, and the third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/471.png" + }, + "6(c)": { + "figure_path": "2408.09650v1_figure_6(c).png", + "caption": "Figure 6: Images shown are from the LOL-v1 dataset. The left column is the input, the middle column is the model output, and the third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/489.png" + }, + "6(d)": { + "figure_path": "2408.09650v1_figure_6(d).png", + "caption": "Figure 6: Images shown are from the LOL-v1 dataset. The left column is the input, the middle column is the model output, and the third column is the target/ground-truth.", + "url": "http://arxiv.org/html/2408.09650v1/extracted/5799004/figures/494.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Robust computer vision in an ever-changing world: A survey of techniques for tackling distribution shifts, 2023.", + "author": "Adhikarla, E., Zhang, K., Yu, J., Sun, L., Nicholson, J., and Davison, B. 
D.", + "venue": "URL https://arxiv.org/abs/2312.01540.", + "url": null + } + }, + { + "2": { + "title": "Unified-egformer: Exposure guided lightweight transformer for mixed-exposure image enhancement, 2024.", + "author": "Adhikarla, E., Zhang, K., VidalMata, R. G., Aithal, M., Madhusudhana, N. A., Nicholson, J., Sun, L., and Davison, B. D.", + "venue": "URL https://arxiv.org/abs/2407.13170.", + "url": null + } + }, + { + "3": { + "title": "Learning multi-scale photo exposure correction.", + "author": "Afifi, M., Derpanis, K. G., Ommer, B., and Brown, M. S.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9157\u20139167, 2021.", + "url": null + } + }, + { + "4": { + "title": "Lyt-net: Lightweight yuv transformer-based network for low-light image enhancement.", + "author": "Brateanu, A., Balmez, R., Avram, A., and Orhei, C.", + "venue": "arXiv preprint arXiv:2401.15204, 2024.", + "url": null + } + }, + { + "5": { + "title": "Learning a deep single image contrast enhancer from multi-exposure images.", + "author": "Cai, J., Gu, S., and Zhang, L.", + "venue": "IEEE Transactions on Image Processing, 27(4):2049\u20132062, 2018.", + "url": null + } + }, + { + "6": { + "title": "Learning to see in the dark.", + "author": "Chen, C., Chen, Q., Xu, J., and Koltun, V.", + "venue": "In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 3291\u20133300. Computer Vision Foundation / IEEE Computer Society, 2018a.", + "url": null + } + }, + { + "7": { + "title": "Learning to see in the dark.", + "author": "Chen, C., Chen, Q., Xu, J., and Koltun, V.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 
3291\u20133300, 2018b.", + "url": null + } + }, + { + "8": { + "title": "Pre-trained image processing transformer.", + "author": "Chen, H., Wang, Y., Guo, T., Xu, C., Deng, Y., Liu, Z., Ma, S., Xu, C., Xu, C., and Gao, W.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12299\u201312310, 2021a.", + "url": null + } + }, + { + "9": { + "title": "Skyformer: Remodel self-attention with gaussian kernel and nystr\\\u201dom method.", + "author": "Chen, Y., Zeng, Q., Ji, H., and Yang, Y.", + "venue": "In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 2122\u20132135. Curran Associates, Inc., 2021b.", + "url": null + } + }, + { + "10": { + "title": "Transhash: Transformer-based hamming hashing for efficient image retrieval.", + "author": "Chen, Y., Zhang, S., Liu, F., Chang, Z., Ye, M., and Qi, Z.", + "venue": "CoRR, abs/2105.01823, 2021c.", + "url": null + } + }, + { + "11": { + "title": "Contrast enhancement algorithm based on gap adjustment for histogram equalization.", + "author": "Chiu, C.-C. and Ting, C.-C.", + "venue": "Sensors, 16(6):936, 2016.", + "url": null + } + }, + { + "12": { + "title": "You only need 90k parameters to adapt light: a light weight transformer for image enhancement and exposure correction.", + "author": "Cui, Z., Li, K., Gu, L., Su, S., Gao, P., Jiang, Z., Qiao, Y., and Harada, T.", + "venue": "In 33rd British Machine Vision Conference 2022, BMVC 2022, London, UK, November 21-24, 2022. BMVA Press, 2022.", + "url": null + } + }, + { + "13": { + "title": "A study and modification of the local histogram equalization algorithm.", + "author": "Dale-Jones, R. 
and Tjahjadi, T.", + "venue": "Pattern Recognition, 26(9):1373\u20131381, 1993.", + "url": null + } + }, + { + "14": { + "title": "Fast efficient algorithm for enhancement of low lighting video.", + "author": "Dong, X., Wang, G., Pang, Y., Li, W., Wen, J., Meng, W., and Lu, Y.", + "venue": "In ICME, pp. 1\u20136, 2011.", + "url": null + } + }, + { + "15": { + "title": "An image is worth 16x16 words: Transformers for image recognition at scale.", + "author": "Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., Uszkoreit, J., and Houlsby, N.", + "venue": "ICLR, 2021.", + "url": null + } + }, + { + "16": { + "title": "On the computational complexity of self-attention.", + "author": "Duman Keles, F., Wijewardena, P. M., and Hegde, C.", + "venue": "In Agrawal, S. and Orabona, F. (eds.), Proceedings of The 34th International Conference on Algorithmic Learning Theory, volume 201 of Proceedings of Machine Learning Research, pp. 597\u2013619. PMLR, 20 Feb\u201323 Feb 2023.", + "url": null + } + }, + { + "17": { + "title": "You only need one color space: An efficient network for low-light image enhancement, 2024.", + "author": "Feng, Y., Zhang, C., Wang, P., Wu, P., Yan, Q., and Zhang, Y.", + "venue": null, + "url": null + } + }, + { + "18": { + "title": "Hungry hungry hippos: Towards language modeling with state space models.", + "author": "Fu, D. Y., Dao, T., Saab, K. K., Thomas, A. 
W., Rudra, A., and R\u00e9, C.", + "venue": "arXiv preprint arXiv:2212.14052, 2022.", + "url": null + } + }, + { + "19": { + "title": "A fusion-based enhancing method for weakly illuminated images.", + "author": "Fu, X., Zeng, D., Huang, Y., Liao, Y., Ding, X., and Paisley, J.", + "venue": "Signal Processing, 129:82\u201396, 2016a.", + "url": null + } + }, + { + "20": { + "title": "A weighted variational model for simultaneous reflectance and illumination estimation.", + "author": "Fu, X., Zeng, D., Huang, Y., Zhang, X.-P., and Ding, X.", + "venue": "In CVPR, pp. 2782\u20132790, 2016b.", + "url": null + } + }, + { + "21": { + "title": "Mamba: Linear-time sequence modeling with selective state spaces, 2023.", + "author": "Gu, A. and Dao, T.", + "venue": null, + "url": null + } + }, + { + "22": { + "title": "Efficiently modeling long sequences with structured state spaces.", + "author": "Gu, A., Goel, K., and R\u00e9, C.", + "venue": "arXiv preprint arXiv:2111.00396, 2021.", + "url": null + } + }, + { + "23": { + "title": "Zero-reference deep curve estimation for low-light image enhancement.", + "author": "Guo, C., Li, C., Guo, J., Loy, C. C., Hou, J., Kwong, S., and Cong, R.", + "venue": "In CVPR, pp. 1780\u20131789, 2020a.", + "url": null + } + }, + { + "24": { + "title": "Zero-reference deep curve estimation for low-light image enhancement.", + "author": "Guo, C. G., Li, C., Guo, J., Loy, C. C., Hou, J., Kwong, S., and Cong, R.", + "venue": "In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp. 1780\u20131789, June 2020b.", + "url": null + } + }, + { + "25": { + "title": "Shadowdiffusion: When degradation prior meets diffusion model for shadow removal.", + "author": "Guo, L., Wang, C., Yang, W., Huang, S., Wang, Y., Pfister, H., and Wen, B.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 
14049\u201314058, 2023.", + "url": null + } + }, + { + "26": { + "title": "Lime: Low-light image enhancement via illumination map estimation.", + "author": "Guo, X., Li, Y., and Ling, H.", + "venue": "IEEE TIP, 26(2):982\u2013993, 2016.", + "url": null + } + }, + { + "27": { + "title": "Hawkdrive: A transformer-driven visual perception system for autonomous driving in night scene.", + "author": "Guo, Z., Perminov, S., Konenkov, M., and Tsetserukou, D.", + "venue": "arXiv preprint arXiv:2404.04653, 2024.", + "url": null + } + }, + { + "28": { + "title": "Joint multi-scale tone mapping and denoising for hdr image enhancement.", + "author": "Hu, L., Chen, H., and Allebach, J. P.", + "venue": "In 2022 IEEE/CVF Winter Conference on Applications of Computer Vision Workshops (WACVW), pp. 729\u2013738, 2022.", + "url": null + } + }, + { + "29": { + "title": "Hybrid image enhancement with progressive laplacian enhancing unit.", + "author": "Huang, J., Xiong, Z., Fu, X., Liu, D., and Zha, Z.-J.", + "venue": "In Proceedings of the 27th ACM International Conference on Multimedia, MM \u201919, pp. 1614\u20131622, New York, NY, USA, 2019. Association for Computing Machinery.", + "url": null + } + }, + { + "30": { + "title": "Exposure normalization and compensation for multiple-exposure correction.", + "author": "Huang, J., Liu, Y., Fu, X., Zhou, M., Wang, Y., Zhao, F., and Xiong, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6043\u20136052, 2022.", + "url": null + } + }, + { + "31": { + "title": "Learning sample relationship for exposure correction.", + "author": "Huang, J., Zhao, F., Zhou, M., Xiao, J., Zheng, N., Zheng, K., and Xiong, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9904\u20139913, 2023.", + "url": null + } + }, + { + "32": { + "title": "Arbitrary style transfer in real-time with adaptive instance normalization, 2017.", + "author": "Huang, X. 
and Belongie, S.", + "venue": null, + "url": null + } + }, + { + "33": { + "title": "Dslr-quality photos on mobile devices with deep convolutional networks.", + "author": "Ignatov, A., Kobyshev, N., Timofte, R., Vanhoey, K., and Van Gool, L.", + "venue": "In Proceedings of the IEEE international conference on computer vision, 2017.", + "url": null + } + }, + { + "34": { + "title": "Low-light image enhancement with wavelet-based diffusion models.", + "author": "Jiang, H., Luo, A., Fan, H., Han, S., and Liu, S.", + "venue": "ACM Transactions on Graphics (TOG), 42(6):1\u201314, 2023.", + "url": null + } + }, + { + "35": { + "title": "Enlightengan: Deep light enhancement without paired supervision.", + "author": "Jiang, Y., Gong, X., Liu, D., Cheng, Y., Fang, C., Shen, X., Yang, J., Zhou, P., and Wang, Z.", + "venue": "IEEE TIP, 30:2340\u20132349, 2021.", + "url": null + } + }, + { + "36": { + "title": "A multiscale retinex for bridging the gap between color images and the human observation of scenes.", + "author": "Jobson, D. J., Rahman, Z.-u., and Woodell, G. A.", + "venue": "IEEE TIP, 6(7):965\u2013976, 1997.", + "url": null + } + }, + { + "37": { + "title": "Image contrast enhancement using unsharp masking and histogram equalization.", + "author": "Kansal, S., Purwar, S., and Tripathi, R. K.", + "venue": "Multimedia Tools and Applications, 77:26919\u201326938, 2018.", + "url": null + } + }, + { + "38": { + "title": "Transformers are rnns: Fast autoregressive transformers with linear attention.", + "author": "Katharopoulos, A., Vyas, A., Pappas, N., and Fleuret, F.", + "venue": "In Proceedings of the International Conference on Machine Learning (ICML), 2020.", + "url": null + } + }, + { + "39": { + "title": "Segment dependent dynamic multi-histogram equalization for image contrast enhancement.", + "author": "Khan, M. F., Khan, E., and Abbasi, Z. 
A.", + "venue": "Digital Signal Processing, 25:198\u2013223, 2014.", + "url": null + } + }, + { + "40": { + "title": "Reformer: The efficient transformer.", + "author": "Kitaev, N., Kaiser, L., and Levskaya, A.", + "venue": "In International Conference on Learning Representations, 2020.", + "url": null + } + }, + { + "41": { + "title": "Lightness and retinex theory.", + "author": "Land, E. H. and McCann, J. J.", + "venue": "Josa, 61(1):1\u201311, 1971.", + "url": null + } + }, + { + "42": { + "title": "Frequency-Domain Techniques, pp. 223\u2013271.", + "author": "Lazzarini, V.", + "venue": "Springer International Publishing, Cham, 2017.", + "url": null + } + }, + { + "43": { + "title": "Learning to enhance low-light image via zero-reference deep curve estimation.", + "author": "Li, C., Guo, C. G., and Loy, C. C.", + "venue": "In IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.", + "url": null + } + }, + { + "44": { + "title": "Embedding fourier for ultra-high-definition low-light image enhancement.", + "author": "Li, C., Guo, C.-L., Zhou, M., Liang, Z., Zhou, S., Feng, R., and Loy, C. C.", + "venue": "arXiv preprint arXiv:2302.11831, 2023.", + "url": null + } + }, + { + "45": { + "title": "Handheld mobile photography in very low light.", + "author": "Liba, O., Murthy, K., Tsai, Y.-T., Brooks, T., Xue, T., Karnad, N., He, Q., Barron, J. T., Sharlet, D., Geiss, R., et al.", + "venue": "ACM Trans. Graph., 38(6):164\u20131, 2019.", + "url": null + } + }, + { + "46": { + "title": "Dslr: Deep stacked laplacian restorer for low-light image enhancement.", + "author": "Lim, S. 
and Kim, W.", + "venue": "IEEE TMM, 23:4272\u20134284, 2020.", + "url": null + } + }, + { + "47": { + "title": "Snrnet: A deep learning-based network for banknote serial number recognition.", + "author": "Lin, Z., He, Z., Wang, P., Tan, B., Lu, J., and Bai, Y.", + "venue": "Neural Processing Letters, 52:1415\u20131426, 2020.", + "url": null + } + }, + { + "48": { + "title": "Benchmarking low-light image enhancement and beyond.", + "author": "Liu, J., Dejia, X., Yang, W., Fan, M., and Huang, H.", + "venue": "International Journal of Computer Vision, 129:1153\u20131184, 2021a.", + "url": null + } + }, + { + "49": { + "title": "Transformer acceleration with dynamic sparse attention.", + "author": "Liu, L., Qu, Z., Chen, Z., Ding, Y., and Xie, Y.", + "venue": "CoRR, abs/2110.11299, 2021b.", + "url": null + } + }, + { + "50": { + "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement.", + "author": "Liu, R., Ma, L., Zhang, J., Fan, X., and Luo, Z.", + "venue": "In CVPR, pp. 10561\u201310570, 2021c.", + "url": null + } + }, + { + "51": { + "title": "Ntire 2024 challenge on low light image enhancement: Methods and results.", + "author": "Liu, X., Wu, Z., Li, A., Vasluianu, F.-A., Zhang, Y., Gu, S., Zhang, L., Zhu, C., Timofte, R., Jin, Z., et al.", + "venue": "arXiv preprint arXiv:2404.14248, 2024.", + "url": null + } + }, + { + "52": { + "title": "Design of a two-branch network enhancement algorithm for deep features in visually communicated images.", + "author": "Liu, Y.", + "venue": "Signal, Image and Video Processing, pp. 1\u201312, 2024.", + "url": null + } + }, + { + "53": { + "title": "Llnet: A deep autoencoder approach to natural low-light image enhancement.", + "author": "Lore, K. G., Akintayo, A., and Sarkar, S.", + "venue": "Pattern Recognition, 61:650\u2013662, 2017.", + "url": null + } + }, + { + "54": { + "title": "Sgdr: Stochastic gradient descent with warm restarts.", + "author": "Loshchilov, I. 
and Hutter, F.", + "venue": "arXiv preprint arXiv:1608.03983, 2016.", + "url": null + } + }, + { + "55": { + "title": "Soft: Softmax-free transformer with linear complexity.", + "author": "Lu, J., Yao, J., Zhang, J., Zhu, X., Xu, H., Gao, W., XU, C., Xiang, T., and Zhang, L.", + "venue": "In Ranzato, M., Beygelzimer, A., Dauphin, Y., Liang, P., and Vaughan, J. W. (eds.), Advances in Neural Information Processing Systems, volume 34, pp. 21297\u201321309. Curran Associates, Inc., 2021.", + "url": null + } + }, + { + "56": { + "title": "Toward fast, flexible, and robust low-light image enhancement.", + "author": "Ma, L., Ma, T., Liu, R., Fan, X., and Luo, Z.", + "venue": "In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5637\u20135646, June 2022.", + "url": null + } + }, + { + "57": { + "title": "M-net: A convolutional neural network for deep brain structure segmentation.", + "author": "Mehta, R. and Sivaswamy, J.", + "venue": "In 2017 IEEE 14th international symposium on biomedical imaging (ISBI 2017), pp. 437\u2013440. IEEE, 2017.", + "url": null + } + }, + { + "58": { + "title": "Image and video processing on mobile devices: a survey.", + "author": "Morikawa, C., Kobayashi, M., Satoh, M., Kuroda, Y., Inomata, T., Matsuo, H., Miura, T., and Hilaga, M.", + "venue": "the visual Computer, 37(12):2931\u20132949, 2021.", + "url": null + } + }, + { + "59": { + "title": "S4nd: Modeling images and videos as multidimensional signals with state spaces.", + "author": "Nguyen, E., Goel, K., Gu, A., Downs, G., Shah, P., Dao, T., Baccus, S., and R\u00e9, C.", + "venue": "Advances in neural information processing systems, 35:2846\u20132861, 2022.", + "url": null + } + }, + { + "60": { + "title": "Learning exposure correction via consistency modeling.", + "author": "Nsampi, N. E., Hu, Z., and Wang, Q.", + "venue": "In Proc. Brit. Mach. 
Vision Conf., 2021.", + "url": null + } + }, + { + "61": { + "title": "Pytorch: An imperative style, high-performance deep learning library.", + "author": "Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., Killeen, T., Lin, Z., Gimelshein, N., Antiga, L., et al.", + "venue": "Advances in neural information processing systems, 32, 2019.", + "url": null + } + }, + { + "62": { + "title": "Digital Image Processing Algorithms and Applications.", + "author": "Pitas, I.", + "venue": "John Wiley & Sons, Inc., USA, 1st edition, 2000.", + "url": null + } + }, + { + "63": { + "title": "Deep equilibrium approaches to diffusion models.", + "author": "Pokle, A., Geng, Z., and Kolter, J. Z.", + "venue": "Advances in Neural Information Processing Systems, 35:37975\u201337990, 2022.", + "url": null + } + }, + { + "64": { + "title": "U2-net: Going deeper with nested u-structure for salient object detection.", + "author": "Qin, X., Zhang, Z., Huang, C., Dehghan, M., Zaiane, O., and Jagersand, M.", + "venue": "In Pattern Recognition 2020, volume 106, pp. 107404, 2020.", + "url": null + } + }, + { + "65": { + "title": "Lr3m: Robust low-light enhancement via low-rank regularized retinex model.", + "author": "Ren, X., Yang, W., Cheng, W.-H., and Liu, J.", + "venue": "IEEE Transactions on Image Processing, 29:5862\u20135876, 2020.", + "url": null + } + }, + { + "66": { + "title": "Realization of the contrast limited adaptive histogram equalization (clahe) for real-time image enhancement.", + "author": "Reza, A. M.", + "venue": "Journal of VLSI signal processing systems for signal, image and video technology, 38:35\u201344, 2004.", + "url": null + } + }, + { + "67": { + "title": "Palette: Image-to-image diffusion models.", + "author": "Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., Fleet, D., and Norouzi, M.", + "venue": "In ACM SIGGRAPH 2022 conference proceedings, pp. 
1\u201310, 2022.", + "url": null + } + }, + { + "68": { + "title": "Efficient attention: Attention with linear complexities.", + "author": "Shen, Z., Zhang, M., Zhao, H., Yi, S., and Li, H.", + "venue": "CoRR, abs/1812.01243, 2018.", + "url": null + } + }, + { + "69": { + "title": "Advancements and challenges in low-light object detection.", + "author": "Shrivastav, P.", + "venue": "In 2024 2nd International Conference on Intelligent Data Communication Technologies and Internet of Things (IDCIoT), pp. 1351\u20131356. IEEE, 2024.", + "url": null + } + }, + { + "70": { + "title": "Enhancement of low exposure images via recursive histogram equalization algorithms.", + "author": "Singh, K., Kapoor, R., and Sinha, S. K.", + "venue": "Optik, 126(20):2619\u20132625, 2015.", + "url": null + } + }, + { + "71": { + "title": "Denoising diffusion implicit models.", + "author": "Song, J., Meng, C., and Ermon, S.", + "venue": "arXiv:2010.02502, October 2020.", + "url": null + } + }, + { + "72": { + "title": "Efficient transformers: A survey.", + "author": "Tay, Y., Dehghani, M., Bahri, D., and Metzler, D.", + "venue": "ACM Comput. Surv., 55(6), dec 2022.", + "url": null + } + }, + { + "73": { + "title": "Deep complex networks.", + "author": "Trabelsi, C., Bilaniuk, O., Zhang, Y., Serdyuk, D., Subramanian, S., Santos, J. F., Mehri, S., Rostamzadeh, N., Bengio, Y., and Pal, C. J.", + "venue": "In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018.", + "url": null + } + }, + { + "74": { + "title": "The daala directional deringing filter.", + "author": "Valin, J.", + "venue": "CoRR, abs/1602.05975, 2016.", + "url": null + } + }, + { + "75": { + "title": "Local color distributions prior for image enhancement.", + "author": "Wang, H., Xu, K., and Lau, R. W.", + "venue": "In European Conference on Computer Vision, pp. 343\u2013359. 
Springer, 2022a.", + "url": null + } + }, + { + "76": { + "title": "Underexposed photo enhancement using deep illumination estimation.", + "author": "Wang, R., Zhang, Q., Fu, C.-W., Shen, X., Zheng, W.-S., and Jia, J.", + "venue": "In The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2019.", + "url": null + } + }, + { + "77": { + "title": "Naturalness preserved enhancement algorithm for non-uniform illumination images.", + "author": "Wang, S., Zheng, J., Hu, H.-M., and Li, B.", + "venue": "IEEE TIP, 22(9):3538\u20133548, 2013.", + "url": null + } + }, + { + "78": { + "title": "Linformer: Self-attention with linear complexity, 2020.", + "author": "Wang, S., Li, B. Z., Khabsa, M., Fang, H., and Ma, H.", + "venue": null, + "url": null + } + }, + { + "79": { + "title": "Lldiffusion: Learning degradation representations in diffusion models for low-light image enhancement.", + "author": "Wang, T., Zhang, K., Shao, Z., Luo, W., Stenger, B., Kim, T., Liu, W., and Li, H.", + "venue": "CoRR, abs/2307.14659, 2023a.", + "url": null + } + }, + { + "80": { + "title": "Ultra-high-definition low-light image enhancement: A benchmark and transformer-based method.", + "author": "Wang, T., Zhang, K., Shen, T., Luo, W., Stenger, B., and Lu, T.", + "venue": "In Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, pp. 2654\u20132662, 2023b.", + "url": null + } + }, + { + "81": { + "title": "Exposurediffusion: Learning to expose for low-light image enhancement.", + "author": "Wang, Y., Yu, Y., Yang, W., Guo, L., Chau, L.-P., Kot, A. C., and Wen, B.", + "venue": "arXiv preprint arXiv:2307.07710, 2023c.", + "url": null + } + }, + { + "82": { + "title": "Uformer: A general u-shaped transformer for image restoration.", + "author": "Wang, Z., Cun, X., Bao, J., and Liu, J.", + "venue": "In CVPR, pp. 
17683\u201317693, 2022b.", + "url": null + } + }, + { + "83": { + "title": "Mamba-unet: Unet-like pure visual mamba for medical image segmentation, 2024.", + "author": "Wang, Z., Zheng, J.-Q., Zhang, Y., Cui, G., and Li, L.", + "venue": null, + "url": null + } + }, + { + "84": { + "title": "Deep retinex decomposition for low-light enhancement.", + "author": "Wei, C., Wang, W., Yang, W., and Liu, J.", + "venue": "In British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, September 3-6, 2018, pp. 155. BMVA Press, 2018a.", + "url": null + } + }, + { + "85": { + "title": "Deep retinex decomposition for low-light enhancement.", + "author": "Wei, C., Wang, W., Yang, W., and Liu, J.", + "venue": "In BMVC, 2018b.", + "url": null + } + }, + { + "86": { + "title": "Uretinex-net: Retinex-based deep unfolding network for low-light image enhancement.", + "author": "Wu, W., Weng, J., Zhang, P., Wang, X., Yang, W., and Jiang, J.", + "venue": "In 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 5891\u20135900, 2022.", + "url": null + } + }, + { + "87": { + "title": "Crose: Low-light enhancement by cross-sensor interaction for nighttime driving scenes.", + "author": "Xian, X., Zhou, Q., Qin, J., Yang, X., Tian, Y., Shi, Y., and Tian, D.", + "venue": "Expert Systems with Applications, pp. 123470, 2024.", + "url": null + } + }, + { + "88": { + "title": "Phase based feature detector consistent with human visual system characteristics.", + "author": "Xiao, Z. 
and Hou, Z.", + "venue": "Pattern Recognition Letters, 25(10):1115\u20131121, 2004.", + "url": null + } + }, + { + "89": { + "title": "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement.", + "author": "Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J.", + "venue": "In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020a.", + "url": null + } + }, + { + "90": { + "title": "From fidelity to perceptual quality: A semi-supervised approach for low-light image enhancement.", + "author": "Yang, W., Wang, S., Fang, Y., Wang, Y., and Liu, J.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 3063\u20133072, 2020b.", + "url": null + } + }, + { + "91": { + "title": "Vivim: a video vision mamba for medical video object segmentation.", + "author": "Yang, Y., Xing, Z., and Zhu, L.", + "venue": "arXiv preprint arXiv:2401.14168, 2024.", + "url": null + } + }, + { + "92": { + "title": "A bio-inspired multi-exposure fusion framework for low-light image enhancement.", + "author": "Ying, Z., Li, G., and Gao, W.", + "venue": "arXiv preprint arXiv:1711.00591, 2017.", + "url": null + } + }, + { + "93": { + "title": "Multi-frequency field perception and sparse progressive network for low-light image enhancement.", + "author": "Yuan, S., Li, J., Ren, L., and Chen, Z.", + "venue": "Journal of Visual Communication and Image Representation, 100:104133, 2024.", + "url": null + } + }, + { + "94": { + "title": "Big bird: Transformers for longer sequences.", + "author": "Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Onta\u00f1\u00f3n, S., Pham, P., Ravula, A., Wang, Q., Yang, L., and Ahmed, A.", + "venue": "In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M., and Lin, H. 
(eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.", + "url": null + } + }, + { + "95": { + "title": "Learning enriched features for real image restoration and enhancement.", + "author": "Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., Yang, M.-H., and Shao, L.", + "venue": "In ECCV, 2020.", + "url": null + } + }, + { + "96": { + "title": "Restormer: Efficient transformer for high-resolution image restoration.", + "author": "Zamir, S. W., Arora, A., Khan, S., Hayat, M., Khan, F. S., and Yang, M.-H.", + "venue": "In CVPR, pp. 5728\u20135739, 2022.", + "url": null + } + }, + { + "97": { + "title": "gddim: Generalized denoising diffusion implicit models.", + "author": "Zhang, Q., Tao, M., and Chen, Y.", + "venue": "arXiv preprint arXiv:2206.05564, 2022.", + "url": null + } + }, + { + "98": { + "title": "Rellie: Deep reinforcement learning for customized low-light image enhancement.", + "author": "Zhang, R., Guo, L., Huang, S., and Wen, B.", + "venue": "CoRR, abs/2107.05830, 2021a.", + "url": null + } + }, + { + "99": { + "title": "Kindling the darkness: A practical low-light image enhancer.", + "author": "Zhang, Y., Zhang, J., and Guo, X.", + "venue": "In Proceedings of the 27th ACM International Conference on Multimedia, MM \u201919, pp. 1632\u20131640, New York, NY, USA, 2019a. Association for Computing Machinery.", + "url": null + } + }, + { + "100": { + "title": "Kindling the darkness: A practical low-light image enhancer.", + "author": "Zhang, Y., Zhang, J., and Guo, X.", + "venue": "In ACMMM, pp. 1632\u20131640, 2019b.", + "url": null + } + }, + { + "101": { + "title": "Beyond brightening low-light images.", + "author": "Zhang, Y., Guo, X., Ma, J., Liu, W., and Zhang, J.", + "venue": "Int. J. Comput. 
Vision, 129(4):1013\u20131037, apr 2021b.", + "url": null + } + }, + { + "102": { + "title": "Fd-vision mamba for endoscopic exposure correction, 2024.", + "author": "Zheng, Z. and Zhang, J.", + "venue": null, + "url": null + } + }, + { + "103": { + "title": "Pyramid diffusion models for low-light image enhancement.", + "author": "Zhou, D., Yang, Z., and Yang, Y.", + "venue": "arXiv preprint arXiv:2305.10028, 2023a.", + "url": null + } + }, + { + "104": { + "title": "Adaptively learning low-high frequency information integration for pan-sharpening.", + "author": "Zhou, M., Huang, J., Li, C., Yu, H., Yan, K., Zheng, N., and Zhao, F.", + "venue": "In Proceedings of the 30th ACM International Conference on Multimedia, pp. 3375\u20133384, 2022.", + "url": null + } + }, + { + "105": { + "title": "Fourmer: An efficient global modeling paradigm for image restoration.", + "author": "Zhou, M., Huang, J., Guo, C.-L., and Li, C.", + "venue": "In International Conference on Machine Learning, pp. 42589\u201342601. PMLR, 2023b.", + "url": null + } + }, + { + "106": { + "title": "Vision mamba: Efficient visual representation learning with bidirectional state space model.", + "author": "Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., and Wang, X.", + "venue": "arXiv preprint arXiv:2401.09417, 2024a.", + "url": null + } + }, + { + "107": { + "title": "Vision mamba: Efficient visual representation learning with bidirectional state space model.", + "author": "Zhu, L., Liao, B., Zhang, Q., Wang, X., Liu, W., and Wang, X.", + "venue": "arXiv preprint arXiv:2401.09417, 2024b.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2408.09650v1" +} \ No newline at end of file