Title: Leveraging Continuously Differentiable Activation for Learning in Analog and Quantized Noisy Environments

URL Source: https://arxiv.org/html/2402.02593

Published Time: Tue, 25 Feb 2025 02:33:58 GMT

Vivswan Shah and Nathan Youngblood This work was supported in part by the U.S. National Science Foundation under Grants CISE-2105972 and ECCS-2337674 and by AFOSR under Grant FA9550-24-1-0064. This research was supported in part by the University of Pittsburgh Center for Research Computing, RRID:SCR_022735, through the resources provided. Specifically, this work used the H2P cluster, which is supported by NSF award number OAC-2117681. V. Shah and N. Youngblood are with the Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261 USA (email: vivswanshah@pitt.edu and nathan.youngblood@pitt.edu). Code available at: [https://github.com/Vivswan/GeLUReLUInterpolation](https://github.com/Vivswan/GeLUReLUInterpolation).

###### Abstract

Real-world analog systems, such as photonic neural networks, intrinsically suffer from noise that can impede model convergence and accuracy for a variety of deep learning models. In the presence of noise, some activation functions behave erratically or even amplify the noise. Specifically, ReLU, an activation function used ubiquitously in digital deep learning systems, not only poses a challenge to implement in analog hardware but has also been shown to perform worse than continuously differentiable activation functions. In this paper, we demonstrate that GELU and SiLU enable robust propagation of gradients in analog hardware because they are continuously differentiable functions. To analyze the cause of these differences between activations in the presence of noise, we used functional interpolation between ReLU and GELU/SiLU to analyze and train convolutional, linear, and transformer networks on simulated analog hardware with different interpolated activation functions. We find that with ReLU, errors in the gradient due to noise are amplified during backpropagation, leading to a significant reduction in model performance. However, we observe that error amplification decreases as we move toward GELU/SiLU, until it is non-existent at GELU/SiLU, demonstrating that continuously differentiable activation functions are ~100× more noise-resistant than conventional rectified activations for inputs near zero. Our findings provide guidance in selecting the appropriate activations to realize reliable and performant photonic and other analog hardware accelerators in several domains of machine learning, such as computer vision, signal processing, and beyond.

I Introduction
--------------

The rapid advancement of artificial intelligence and deep learning has sparked interest in novel computing paradigms that can overcome the limitations of traditional digital electronics [[1](https://arxiv.org/html/2402.02593v3#bib.bib1)]. Photonic neural networks have emerged as a promising approach, offering the potential for ultra-fast, energy-efficient computation by leveraging the properties of light [[2](https://arxiv.org/html/2402.02593v3#bib.bib2)]. However, the transition from digital to analog photonic systems introduces new challenges, particularly in handling noise and maintaining computational accuracy.

Unlike their digital counterparts, photonic neural networks are implemented in analog hardware like coherent [[3](https://arxiv.org/html/2402.02593v3#bib.bib3), [4](https://arxiv.org/html/2402.02593v3#bib.bib4), [5](https://arxiv.org/html/2402.02593v3#bib.bib5), [6](https://arxiv.org/html/2402.02593v3#bib.bib6), [7](https://arxiv.org/html/2402.02593v3#bib.bib7), [8](https://arxiv.org/html/2402.02593v3#bib.bib8), [9](https://arxiv.org/html/2402.02593v3#bib.bib9), [10](https://arxiv.org/html/2402.02593v3#bib.bib10), [11](https://arxiv.org/html/2402.02593v3#bib.bib11)], electro-absorptive [[12](https://arxiv.org/html/2402.02593v3#bib.bib12)], phase-change [[13](https://arxiv.org/html/2402.02593v3#bib.bib13), [14](https://arxiv.org/html/2402.02593v3#bib.bib14)], magneto-optic [[15](https://arxiv.org/html/2402.02593v3#bib.bib15), [16](https://arxiv.org/html/2402.02593v3#bib.bib16), [17](https://arxiv.org/html/2402.02593v3#bib.bib17)], microring resonator [[18](https://arxiv.org/html/2402.02593v3#bib.bib18)], and dispersive fiber-based architectures [[19](https://arxiv.org/html/2402.02593v3#bib.bib19)]. Due to their physical and analog nature, signals in these devices are continuous and subject to various sources of noise, including shot noise, thermal noise, and quantization errors in optical-to-electrical conversions. In this context, the choice of activation function becomes crucial, as it significantly impacts the network’s ability to learn and generalize in the presence of noise.

Traditionally, rectified linear units (ReLU) [[20](https://arxiv.org/html/2402.02593v3#bib.bib20)] have been widely used in digital neural networks due to their simplicity and effectiveness in mitigating the vanishing gradient problem [[21](https://arxiv.org/html/2402.02593v3#bib.bib21)]. However, the discontinuity in ReLU’s derivative at zero can lead to instabilities in gradient propagation when implemented in analog photonic systems. This discontinuity can amplify noise effects, potentially degrading the network’s performance and reliability. Thus, attempts to directly mimic activation functions optimized for digital neural networks, such as ReLU, can actually be counterproductive for analog accelerators.

To address these challenges, we propose the use of continuously differentiable activation functions for photonic neural networks. Specifically, we investigate the Gaussian Error Linear Unit (GELU) [[22](https://arxiv.org/html/2402.02593v3#bib.bib22)] and Sigmoid Linear Unit (SiLU) [[23](https://arxiv.org/html/2402.02593v3#bib.bib23)] as alternatives to ReLU. These functions offer smooth, continuous derivatives across their entire domain, potentially providing more robust gradient propagation in noisy analog environments.

Unlike the sigmoid function, which is prone to the vanishing gradient problem during backpropagation [[21](https://arxiv.org/html/2402.02593v3#bib.bib21)], GELU and SiLU are continuously differentiable variants of the rectified linear unit (ReLU) that may propagate gradients more effectively throughout deep neural networks. Recent work has demonstrated substantially higher accuracy for continuous activations such as GELU and SiLU compared to traditionally used discontinuous activations such as ReLU and LeakyReLU when noise is present [[24](https://arxiv.org/html/2402.02593v3#bib.bib24)]. For example, in image classification, Shah et al. showed that GELU/SiLU-equipped models were able to converge on simulated analog hardware, even in the presence of significant noise from multiple sources [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)]. Meanwhile, ReLU-equipped models struggled to show similar robustness to their GELU/SiLU counterparts. However, the reasons underpinning this performance gap were not fully explored at the time.

Figure 1: Overview of model architecture. a) For the Linear, Convolutional, VGG, and ResNet models we assume the worst-case scenario where both the sensor and the model are physical and exhibit analog noise. For instance, this is the case for a CCD exhibiting electronic noise in conjunction with an analog photonic network for computation. This is done by adding quantized noise layers between each traditional layer of the model. Adapted from [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)]. b) For a Vision Transformer model, the sensor could be implemented in analog hardware while the transformer network is implemented in digital hardware. c) ReLU, GELU, and SiLU activation functions and their derivatives.

In this work, we provide an explanation of why GELU/SiLU-equipped models excel given noisy and quantized data, such as the low-precision, quantized data provided by analog sensors or passed between layers of an optical neural network. We directly observe and quantify how the discontinuity in the derivative of the ReLU activation function leads to error amplification during backpropagation under noise. In contrast, GELU/SiLU’s continuous derivatives maintain stability and uniform backpropagation errors in the presence of noise (more information is provided in Figure [4](https://arxiv.org/html/2402.02593v3#S3.F4)). To understand how these continuously differentiable activation functions behave with respect to analog noise sources across different neural network architectures, we also provide an analysis of the results of linear, convolutional, VGG, ResNet, and transformer models, as shown in Section [V](https://arxiv.org/html/2402.02593v3#S5). Overall, this work shows the superiority of GELU/SiLU in enabling more reliable, noise-resilient perception, prediction, and planning systems. Our findings provide guidance to analog system architects on selecting noise-resilient activations in real-world and real-time environments.

Figure 2: Effects of Scaling Factor in GELU. GELU function and its derivative at different values of the scaling factor: a) with full precision; b) with reduced precision. c) The effective bit-precision of the GELU derivative near zero at different values of the scaling factor when input precision is set to 6 bits. d) Top-1 test accuracy of ConvNet marginally declines with increasing GELU scaling factor on CIFAR-10.

II Background
-------------

### II-A Activations

Neural networks rely on activation functions to introduce non-linearities that enable modeling complex patterns in data. The rectified linear unit (ReLU) activation function and its derivative are defined as:

$$\textrm{ReLU}(x)=\max(0,x) \tag{1}$$

$$\textrm{ReLU}'(x)=\begin{cases}1 & :\ x>0\\ 0 & :\ x\leq 0\end{cases} \tag{2}$$

ReLU has been widely adopted due to its simplicity and effectiveness [[20](https://arxiv.org/html/2402.02593v3#bib.bib20)]. However, ReLU has a discontinuity in its derivative at $x=0$ that can impede gradient flow and model training. ReLU neurons can also become stuck in a permanently deactivated state, known as the dying ReLU problem, hindering model expressiveness over time [[26](https://arxiv.org/html/2402.02593v3#bib.bib26)].

Variants like LeakyReLU give a small negative slope instead of zero for $x<0$ to mitigate this [[24](https://arxiv.org/html/2402.02593v3#bib.bib24)], but LeakyReLU still has a derivative discontinuity that may limit noise resilience. In contrast, GELU and SiLU are continuous variants of ReLU aimed at improving gradient flow.

GELU and its derivative are defined as:

$$\begin{split}\textrm{GELU}(x)&=x\,\Phi(x)=x\cdot\frac{1}{2}\left[1+\textrm{erf}\left(\frac{x}{\sqrt{2}}\right)\right]\\ \textrm{GELU}'(x)&=\frac{1}{2}\left(1+\textrm{erf}\left(\frac{x}{\sqrt{2}}\right)\right)+\frac{x}{\sqrt{2\pi}}e^{-\frac{x^{2}}{2}}\end{split} \tag{3}$$

where $\textrm{erf}(x)$ is the Gaussian error function. SiLU uses the logistic sigmoid instead, and its derivative is defined as:

$$\begin{split}\textrm{SiLU}(x)&=x\,\sigma(x)=\frac{x}{1+e^{-x}}\\ \textrm{SiLU}'(x)&=\frac{1+e^{-x}+xe^{-x}}{\left(1+e^{-x}\right)^{2}}\end{split} \tag{4}$$

As shown in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)c, both GELU and SiLU maintain continuity in their function and derivative, suggesting more stable gradients. Recent work has shown improvements in accuracy from using these activations, particularly in noise-corrupted settings [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)].

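As a concrete reference, Eqs. (1)–(4) can be sketched in plain, standard-library Python (the function names are ours). The snippet also checks the property at the heart of this paper: the derivative gap across zero is 1 for ReLU but vanishes for GELU and SiLU.

```python
import math

def relu(x):
    return max(0.0, x)

def relu_prime(x):
    # Discontinuous at x = 0: jumps from 0 to 1 (Eq. 2).
    return 1.0 if x > 0 else 0.0

def gelu(x):
    # Eq. 3: x * Phi(x) via the Gaussian error function.
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_prime(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0))) \
        + (x / math.sqrt(2.0 * math.pi)) * math.exp(-x * x / 2.0)

def silu(x):
    # Eq. 4: x * sigmoid(x).
    return x / (1.0 + math.exp(-x))

def silu_prime(x):
    e = math.exp(-x)
    return (1.0 + e + x * e) / (1.0 + e) ** 2

# Derivative gap across zero: ~1 for ReLU, numerically zero for GELU/SiLU.
eps = 1e-6
relu_gap = abs(relu_prime(eps) - relu_prime(-eps))
gelu_gap = abs(gelu_prime(eps) - gelu_prime(-eps))
silu_gap = abs(silu_prime(eps) - silu_prime(-eps))
```

Running this gives `relu_gap ≈ 1` while `gelu_gap` and `silu_gap` are vanishingly small, which is exactly the continuity distinction the rest of the paper builds on.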
In the context of vision transformers, additional variants known as GeGLU (Gaussian Error Gated Linear Unit) and ReGLU (Rectified Gated Linear Unit) have been explored [[27](https://arxiv.org/html/2402.02593v3#bib.bib27)]. These transformer-specific activations are defined as:

$$\begin{split}\textrm{ReGLU}(x,W,V,b,c)&=\max(0,xW+b)\otimes(xV+c)\\ \textrm{GeGLU}(x,W,V,b,c)&=\textrm{GELU}(xW+b)\otimes(xV+c)\end{split} \tag{5}$$

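A minimal per-unit sketch of Eq. (5), assuming scalar inputs and illustrative per-unit weights (`W`, `V`, `b`, `c` below are hypothetical values; real transformer layers use matrix projections):

```python
import math

def gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def reglu(x, W, V, b, c):
    # max(0, xW + b) ⊗ (xV + c), elementwise over the two projections
    return [max(0.0, x * w + bi) * (x * v + ci)
            for w, v, bi, ci in zip(W, V, b, c)]

def geglu(x, W, V, b, c):
    # GELU(xW + b) ⊗ (xV + c)
    return [gelu(x * w + bi) * (x * v + ci)
            for w, v, bi, ci in zip(W, V, b, c)]

# Two-unit toy example: the second unit receives a negative pre-activation.
out_r = reglu(0.5, [1.0, -2.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0])
out_g = geglu(0.5, [1.0, -2.0], [1.0, 1.0], [0.0, 0.0], [0.0, 0.0])
```

Note that ReGLU hard-gates the negative pre-activation to zero, while GeGLU lets a small (negative) signal through — the same smooth-versus-rectified distinction as in the ungated activations.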
### II-B Analog/Photonics Errors

All analog or real-life devices like cameras, sensors, and photodetectors ultimately employ analog-to-digital conversion to transform continuous analog signals into discrete digital data. However, real-world analog signals intrinsically suffer from noise. When physical implementations, such as photonic, neuromorphic, or quantum systems, are used to implement neural network models, this omnipresent noise permeates both the inputs and the inter-layer signals due to the analog-to-digital conversion process. This type of quantization noise can be simulated by adding a Gaussian Noise Layer, a Reduced Precision Layer, and a Clamp Layer (as shown in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)) [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)]. Here we define three parameters used to evaluate our models in the presence of low-precision and analog noise:

#### Error Probability (EP)

This is the probability that the recorded digital value differs from the true analog signal due to the presence of noise. That is, it is the probability that a reduced-precision analog data point acquires a different digital value after passing through both a noise layer and then a reduced precision layer. The relationship between EP, photodetector/sensor bit precision ($b$), and noise standard deviation ($\sigma$) is defined as follows:

$$\textrm{EP}=1-\textrm{erf}\left(\frac{1}{2\sqrt{2}\,\sigma\,\left(2^{b}-1\right)}\right) \tag{6}$$

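Eq. (6) follows from the chance that zero-mean Gaussian noise pushes a grid-aligned value past the nearest quantization boundary, i.e. beyond half of the step $1/(2^{b}-1)$. A quick Monte Carlo sketch (parameter values chosen only for illustration) agrees with the closed form:

```python
import math
import random

def error_probability(sigma, b):
    # Closed form of Eq. (6).
    return 1.0 - math.erf(1.0 / (2.0 * math.sqrt(2.0) * sigma * (2 ** b - 1)))

def monte_carlo_ep(sigma, b, n=100_000, seed=0):
    # Empirical flip rate: a grid-aligned value acquires a different digital
    # code when |noise| exceeds half the quantization step.
    rng = random.Random(seed)
    half_step = 0.5 / (2 ** b - 1)
    flips = sum(1 for _ in range(n) if abs(rng.gauss(0.0, sigma)) > half_step)
    return flips / n

analytic = error_probability(sigma=0.05, b=4)
empirical = monte_carlo_ep(sigma=0.05, b=4)
```

With $\sigma=0.05$ and $b=4$, both values land near 0.5 — at that noise level a 4-bit code is essentially a coin flip.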
#### Reduced Precision (RP)

The Reduced Precision layer applies a round-to-nearest transformation to the input based on the precision (number of discrete levels) [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)].

$$\textrm{RP}(x)=\frac{1}{2^{p}}\,\textrm{sign}(x)\left\lceil\left|2^{p}\cdot x\right|-0.5\right\rceil \tag{7}$$

where $p$ is the bit-precision.

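Eq. (7) transcribes directly into a short rounding function (a sketch; `reduce_precision` is our own name):

```python
import math

def reduce_precision(x, p):
    # Eq. (7): round-to-nearest onto a grid with step 1 / 2^p.
    # copysign supplies sign(x); the ceil(|...| - 0.5) pair rounds to nearest.
    scale = 2 ** p
    return math.copysign(math.ceil(abs(scale * x) - 0.5), x) / scale

# A 3-bit grid has step 1/8 = 0.125, so 0.3 rounds to 0.25.
```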
#### Gradient Step Discontinuity (GSD)

This is the size of the step discontinuity present in the derivative of an activation function. For example, the gradient step discontinuity at zero is 1 for ReLU and 0 for GELU/SiLU.

$$\textrm{GSD}_{f}(x_{0})=\left|\lim_{x\to x_{0}^{-}}f'(x)-\lim_{x\to x_{0}^{+}}f'(x)\right| \tag{8}$$

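Eq. (8) can be estimated numerically by approximating the two one-sided derivative limits with small finite differences on either side of $x_{0}$ (a sketch; the step size $h$ is our own choice):

```python
import math

def relu(x):
    return max(0.0, x)

def gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gsd(f, x0, h=1e-6):
    # One-sided finite-difference estimates of f' just left and right of x0.
    left = (f(x0 - h) - f(x0 - 2 * h)) / h
    right = (f(x0 + 2 * h) - f(x0 + h)) / h
    return abs(left - right)

gsd_relu = gsd(relu, 0.0)  # close to 1
gsd_gelu = gsd(gelu, 0.0)  # close to 0
```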
Figure 3: Interpolation Factors. ReLU-GELU interpolation function and its derivative at different values of the interpolation factor: a) at full precision; b) with reduced precision; c) with reduced precision and noise. d) The gradient step discontinuity at zero in the ReLU-GELU interpolation is negatively correlated with the interpolation factor. e) & f) Top-1 test accuracy of ConvNet using linearly interpolated activation functions, (e) ReLU-GELU and (f) ReLU-SiLU, with quantized noise on the CIFAR-10 dataset.

III Methods
-----------

### III-A Model Architecture

The model architectures used in this work are illustrated in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1). The analog implementations of a convolutional neural network (ConvNet, 6 convolutional layers + 3 linear layers) [[25](https://arxiv.org/html/2402.02593v3#bib.bib25)], VGG-A [[28](https://arxiv.org/html/2402.02593v3#bib.bib28)], and ResNet-18 [[29](https://arxiv.org/html/2402.02593v3#bib.bib29)] are shown in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)a. To simulate the worst-case scenario of both the photonic layers and the input sensor data being subjected to noise and limited precision, we insert quantized noise layers between each linear or convolutional layer of the model architecture, as well as on the weights and biases within each layer. This represents a fully analog system implemented on an integrated analog hardware chip. In contrast, for the vision transformer presented in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)b, we assume only the photodetector/sensor data is analog, while the model is otherwise implemented digitally. Thus, quantized noise layers are added only to the inputs and not within the transformer network itself. The transformer network has a depth of 4 layers and 8 attention heads.

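A minimal stand-in for the quantized noise block described above — Gaussian noise, then round-to-nearest quantization, then a clamp to [-1, 1] — assuming plain Python lists rather than the framework layers used in the actual experiments (names and default parameter values are ours):

```python
import math
import random

def quantized_noise(values, sigma=0.02, bits=4, seed=0):
    """Gaussian noise -> reduced precision (Eq. 7) -> clamp to [-1, 1]."""
    rng = random.Random(seed)
    scale = 2 ** bits
    out = []
    for x in values:
        x = x + rng.gauss(0.0, sigma)                                  # analog noise
        x = math.copysign(math.ceil(abs(scale * x) - 0.5), x) / scale  # ADC quantization
        x = max(-1.0, min(1.0, x))                                     # clamp normalization
        out.append(x)
    return out

noisy = quantized_noise([0.0, 0.5, 2.0, -2.0])
```

Every output lies on the `bits`-bit grid inside [-1, 1]; out-of-range inputs saturate at ±1, mimicking a bounded photodetector.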
Table [I](https://arxiv.org/html/2402.02593v3#S3.T1) presents the hyperparameters subjected to experimentation across all models. In each experiment, the models underwent standard training and testing using Gaussian noise, clamp normalization within the range of -1 to 1, and reduced precision, conducted on the CIFAR-10 and CIFAR-100 datasets.

| Hyperparameter | Parameters Tested |
| --- | --- |
| Models | ConvNet, VGG-A, & ResNet-18 |
| Color | True, False |
| Bit-Precision | 2, 3, 4, 5, 6 |
| Error Probability | 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 |
| Normalization | Clamp(-1, 1) |
| Noise | Gaussian |
| Precision | ReducePrecision |
| Dataset | CIFAR-10, CIFAR-100 |

TABLE I: List of hyperparameters tested for all models

### III-B Functional Analysis

Figure 4: Gradient error in ReLU and GELU when inputs and weights are 8-bit quantized with 0.5 error probability for noise, in a linear layer with no bias followed by an activation layer. a) shows the gradient errors when interpolating between ReLU and GELU. Gradient error b) for ReLU activation; c) for GELU activation. NOTE: the gradient error colorbar axis is different for ReLU and GELU in (b) and (c), respectively.

As photodetector/sensor data is inherently bounded within maximum and minimum values, normalization can rescale these signals to the range [-1, 1]. However, unlike ReLU, the Gaussian error linear unit (GELU) activation function does not approach zero for negative input values until approximately $-2.5$ and is limited to approximately $+0.84$ for a normalized maximum input value of $+1$. To understand the effects of changing the input domain over which GELU’s response is non-zero, we introduce an adjustable scaling factor ($s$) that multiplies the input ($x$) in the error function ($\textrm{erf}(x)$). This scaling factor effectively controls the input domain over which the GELU response is not close to zero. An appropriate choice of the scaling factor may provide a mechanism to match the effective range of the GELU activation to the normalized analog input signals (as seen in Figure [2](https://arxiv.org/html/2402.02593v3#S1.F2)a):

$$\textrm{GELU}(x)=x\cdot\frac{1}{2}\left[1+\textrm{erf}\left(\frac{s\cdot x}{\sqrt{2}}\right)\right] \tag{9}$$

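Eq. (9) as a short sketch, showing how a larger scaling factor $s$ pushes the GELU response toward the full normalized [-1, 1] range (the values of $s$ below are illustrative):

```python
import math

def gelu_scaled(x, s=1.0):
    # Eq. (9): the scaling factor s only multiplies x inside erf.
    return x * 0.5 * (1.0 + math.erf(s * x / math.sqrt(2.0)))

# At s = 1, GELU(+1) ≈ 0.841; at s = 3 it is much closer to +1, and the
# negative tail at x = -1 is already nearly fully suppressed.
g1 = gelu_scaled(1.0, s=1.0)
g3 = gelu_scaled(1.0, s=3.0)
```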
While increasing the scaling factor to $s=3$ reduces signal attenuation after activation, this comes at the cost of reduced effective precision near zero when the input signal is quantized, as shown in Figures [2](https://arxiv.org/html/2402.02593v3#S1.F2)b-c. This can negatively impact model accuracy when inputs with limited precision (e.g., 6 bits) are also affected by noise (see Figure [2](https://arxiv.org/html/2402.02593v3#S1.F2)d).

For effective comparison between differentiable and non-differentiable activation functions, a linear functional interpolation was used (as seen in Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)a):

$$\begin{split}\textrm{I}_{\textrm{GELU}}(x)&=\textrm{ReLU}(x)+i\,(\textrm{GELU}(x)-\textrm{ReLU}(x))\\ \textrm{I}_{\textrm{SiLU}}(x)&=\textrm{ReLU}(x)+i\,(\textrm{SiLU}(x)-\textrm{ReLU}(x))\\ \textrm{I}_{\textrm{GeGLU}}(x)&=\textrm{ReGLU}(x)+i\,(\textrm{GeGLU}(x)-\textrm{ReGLU}(x))\end{split} \tag{10}$$

where $i\in[0,1]$: for $i=0$, $\textrm{I}_{X}$ is ReLU/ReGLU, and for $i=1$, $\textrm{I}_{X}$ is $X$.

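Eq. (10) for the ReLU-GELU case can be checked numerically: $i=0$ recovers ReLU, $i=1$ recovers GELU, and a finite-difference estimate of the derivative gap at zero shrinks linearly as $1-i$ (a sketch, standard library only):

```python
import math

def relu(x):
    return max(0.0, x)

def gelu(x):
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interp_gelu(x, i):
    # Eq. (10): linear interpolation between ReLU (i=0) and GELU (i=1).
    return relu(x) + i * (gelu(x) - relu(x))

def gap_at_zero(f, h=1e-6):
    # Finite-difference estimate of the derivative step across zero (Eq. 8).
    left = (f(-h) - f(-2 * h)) / h
    right = (f(2 * h) - f(h)) / h
    return abs(left - right)

# The gap tracks GSD = 1 - i: ~1.0, ~0.5, ~0.0.
gaps = {i: gap_at_zero(lambda x, i=i: interp_gelu(x, i)) for i in (0.0, 0.5, 1.0)}
```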
A key observation in Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)a is that a discontinuity exists and is maximized in the derivative when the input value is zero and the interpolation factor is set to zero, which corresponds to a standard rectified linear unit (ReLU) activation function. As the interpolation factor increases, the discontinuity in the derivative decreases until it completely disappears when the interpolation factor reaches 1. Hence, the interpolation factor is negatively correlated with the size of the discontinuity in the derivative around the zero input value. This is quantified by the gradient step discontinuity: $\textrm{GSD}_{\textrm{I}_{\textrm{GELU}}}(0)=\textrm{GSD}_{\textrm{I}_{\textrm{SiLU}}}(0)=1-i$, where $i$ is the interpolation factor, as shown in Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)d, where the interpolation factor is clearly negatively correlated with the discontinuity in the derivative of the ReLU-GELU interpolation function.

Figure 5: Models evaluated on the CIFAR-100 dataset. ReLU-GELU interpolation at different values of the interpolation factor for the ConvNet, VGG-A, and ResNet-18 models on CIFAR-100.

IV Error Analysis
-----------------

When inputs are reduced in precision, the effects of the gradient step discontinuity around zero become even more apparent, as illustrated in Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)b. In the presence of noise, gradients close to zero can become highly uncertain due to the inherent discontinuity caused by ReLU activations, as confirmed in the left subplot of Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)c. This uncertainty is absent with the continuously differentiable GELU activation, as evidenced in the right subplot of Figure [3](https://arxiv.org/html/2402.02593v3#S2.F3)c. The discontinuities introduced by ReLU activations thus make models more sensitive to uncertainties caused by lower-precision inputs near zero in the presence of noise. We investigate this phenomenon in more detail below.

The equation for an activation function’s input value after it passes through a linear layer with quantized noise in inputs and weights (but with no bias) can be written as follows:
$$x_{\text{activation}}=\frac{1}{p^{2}}\,\text{sign}(x_{i}x_{w})\left\lceil|x_{i}p|-0.5\right\rceil\left\lceil|x_{w}p|-0.5\right\rceil+\frac{\epsilon}{p}\sqrt{\left\lceil|x_{i}p|-0.5\right\rceil^{2}+\left\lceil|x_{w}p|-0.5\right\rceil^{2}}+\epsilon^{2}\tag{11}$$
where $x_{i}$ is the input value, $x_{w}$ is the weight, $p$ is the precision ($p=2^{\text{bit-precision}}$), and $\epsilon$ is a Gaussian random variable with $\mu=0$. Note that in this case we assume a single input value and weight for ease of illustration, but a similar analysis holds when the dot product between input and weight vectors approaches zero.
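As a sanity check, Equation (11) can be simulated directly. The snippet below is a sketch, not the authors' code; it assumes scalar $x_i$ and $x_w$ and a zero-mean Gaussian $\epsilon$, and shows that when $|x_i p| < 0.5$ the quantized signal term vanishes, leaving only the noise terms:

```python
import math
import random

def quantize_mag(x, p):
    # Magnitude quantization from Eq. (11): ceil(|x * p| - 0.5)
    return math.ceil(abs(x * p) - 0.5)

def activation_input(x_i, x_w, p, sigma, rng):
    # Eq. (11): quantized input-weight product plus Gaussian noise terms
    eps = rng.gauss(0.0, sigma)
    qi, qw = quantize_mag(x_i, p), quantize_mag(x_w, p)
    sgn = math.copysign(1.0, x_i * x_w)
    return (sgn * qi * qw / p**2
            + (eps / p) * math.sqrt(qi**2 + qw**2)
            + eps**2)

rng = random.Random(0)
p = 2**4  # 4-bit precision
# Here |x_i * p| = 0.32 < 0.5, so both operands quantize to zero and the
# output reduces to pure noise (eps**2, which is always non-negative).
samples = [activation_input(0.02, 0.02, p, 0.05, rng) for _ in range(1000)]
print(min(samples), max(samples))
```

This is exactly the regime described in the text: near-zero inputs or weights are swallowed by quantization, so the noise dominates what the activation function sees.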
Figure [4](https://arxiv.org/html/2402.02593v3#S3.F4)A shows the effects of Equation [11](https://arxiv.org/html/2402.02593v3#S4.E11) on the gradient calculation of the ReLU and GELU activation functions at various interpolation factors. From Figure [4](https://arxiv.org/html/2402.02593v3#S3.F4)A, it can also be seen that when inputs or weights are close to zero (which is not uncommon for weights), noise dominates, causing errors in the gradients of the activation function. In the case of ReLU (Figure [4](https://arxiv.org/html/2402.02593v3#S3.F4)B), the errors in the gradients are ∼100× higher than those observed for GELU (Figure [4](https://arxiv.org/html/2402.02593v3#S3.F4)C) when both inputs and weights carry quantized noise in a linear layer with no bias. Notably, the gradient errors in GELU models are distributed much more uniformly than those reported in ReLU models.
Neural network training typically employs mini-batch optimization, where gradients are accumulated across a subset of training samples before updating network weights. The accumulated error introduced by the activation functions can be represented mathematically as shown in Equation [12](https://arxiv.org/html/2402.02593v3#S4.E12).
$$E_{f}(x,n)=\frac{\sum_{i=0}^{n}f(x+\epsilon_{i})}{n}\tag{12}$$
where $E$ represents the mean error introduced by the activation over a mini-batch, $f$ is the activation function, $n$ is the mini-batch size, and $\epsilon$ is a Gaussian random variable with mean $\mu=0$ and a small standard deviation $\sigma$. As the mini-batch size $n$ increases, we find:
$$\begin{split}E_{\text{ReLU}'}(0^{+},n)&\rightarrow 0.5\neq 1=\text{ReLU}'(0^{+})\\E_{\text{ReLU}'}(0^{-},n)&\rightarrow 0.5\neq 0=\text{ReLU}'(0^{-})\\E_{\text{GELU}'}(0^{+},n)&\rightarrow 0.5=\text{GELU}'(0^{+})\\E_{\text{GELU}'}(0^{-},n)&\rightarrow 0.5=\text{GELU}'(0^{-})\end{split}\tag{13}$$
As illustrated in Equation [13](https://arxiv.org/html/2402.02593v3#S4.E13), due to mini-batch training, the accumulated gradients of both the GELU and ReLU activation functions near zero converge to 0.5 when quantized input errors are present. However, critical differences emerge in how errors propagate. For GELU, the accumulated gradient closely matches the true gradient value of 0.5. In contrast, ReLU exhibits significant discrepancies: its true gradient near zero alternates between 0 and 1, while the accumulated gradient consistently approaches 0.5.
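The limits in Equation (13) can be reproduced with a short Monte Carlo experiment. This is a sketch under the same assumptions as Equation (12), namely Gaussian input noise with a small $\sigma$: the mini-batch-averaged ReLU derivative just above zero converges to 0.5 even though the true value is 1, while the averaged GELU derivative stays at its true value of roughly 0.5:

```python
import math
import random

def relu_prime(x):
    # ReLU'(x): 1 for positive inputs, 0 otherwise
    return 1.0 if x > 0 else 0.0

def gelu_prime(x):
    # GELU'(x) = Phi(x) + x * phi(x), so GELU'(0) = 0.5
    phi = math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return Phi + x * phi

def accumulated_grad(fprime, x, n, sigma, rng):
    # Eq. (12): mini-batch mean of the derivative under Gaussian input noise
    return sum(fprime(x + rng.gauss(0.0, sigma)) for _ in range(n)) / n

rng = random.Random(42)
n, sigma = 100_000, 0.01
x = 1e-4  # just above zero (the 0+ case of Eq. (13))
e_relu = accumulated_grad(relu_prime, x, n, sigma, rng)  # -> 0.5, but ReLU'(0+) = 1
e_gelu = accumulated_grad(gelu_prime, x, n, sigma, rng)  # -> 0.5 = GELU'(0+)
print(e_relu, e_gelu)
```

The ReLU average is off by about 0.5 from its true gradient, while the GELU average sits on top of its true gradient, which is the asymmetry discussed next.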
Due to this asymmetry, the ReLU activation function systematically suppresses positive and negative weight bias information through its gradient discontinuity, as it never provides the true gradient value for positive or negative weights, whereas GELU does after mini-batch training. It is well established that weight biases are critical to neural network learning, as they enable sophisticated information storage and nuanced feature extraction from complex datasets [[30](https://arxiv.org/html/2402.02593v3#bib.bib30), [31](https://arxiv.org/html/2402.02593v3#bib.bib31)]. Thus, the uniformity of gradient errors from continuously differentiable activations such as GELU facilitates more reliable convergence with increasing model complexity on analog platforms.

Figure 6: Impact of Layer Depth in ConvNet Architecture on CIFAR-10 Dataset. For (a) and (b) the number of convolutional layers is varied while only one linear layer is used. For (c) and (d) the number of linear layers is varied with no convolutional layers. In (a) and (c) ReLU-GELU interpolation is used, while in (b) and (d) ReLU-SiLU interpolation is used.
V Results
---------
The results presented here comprehensively demonstrate the superior noise resilience of continuously differentiable activations like GELU and SiLU compared to the traditionally used discontinuous ReLU activation.
First, Figures [3](https://arxiv.org/html/2402.02593v3#S2.F3)E and [3](https://arxiv.org/html/2402.02593v3#S2.F3)F clearly show that linearly interpolating between ReLU, whose derivative is discontinuous, and the continuous GELU/SiLU activations yields substantial gains in accuracy as the interpolation factor increases. That is, systematically reducing the gradient step discontinuity of the activation function through interpolation significantly enhances model test accuracy on noisy quantized inputs. This also holds for the CIFAR-100 dataset and across different model architectures, as shown in Figure [5](https://arxiv.org/html/2402.02593v3#S3.F5).
Figure [2](https://arxiv.org/html/2402.02593v3#S1.F2)D shows that limiting the input range over which GELU responds non-linearly causes a small drop in model accuracy. This reduction stems from compressing the GELU function, which effectively decreases the precision of gradients around zero, as seen in Figure [2](https://arxiv.org/html/2402.02593v3#S1.F2)B-C. The less precise gradients make small weight adjustments more difficult, slightly hindering model performance. Overall, however, this indicates that GELU is inherently robust to the bounded, normalized inputs typical of optical hardware such as laser power, photocurrent, and sensor data.
Furthermore, Figures [6](https://arxiv.org/html/2402.02593v3#S4.F6)A-D examine how these errors can cause model convergence to fail as the number of layers is incrementally increased. As expected, adding more convolutional and linear layers compounds the analog noise effects, since quantized noise is added to both inputs and models (as shown in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)A). During training, this causes the inaccuracy of weight updates to grow in ReLU-equipped models, but not in GELU/SiLU-equipped models. Notably, in certain deeper configurations, models with ReLU units are completely unable to learn until the interpolation factor is sufficiently high, meaning the gradient step discontinuity is small enough that noise no longer impedes gradient convergence.
To better understand the influence of differentiability on the activation function, we also evaluated the impact of the negative slope of LeakyReLU for a subset of our listed hyperparameters and observed similar challenges in model convergence. For LeakyReLU, as the negative slope $\alpha$ increases, the gradient step discontinuity decreases, effectively making the function more continuous. We observed that once the gradient step discontinuity falls below a certain threshold, the model begins to learn effectively (detailed results are provided in Table [II](https://arxiv.org/html/2402.02593v3#S5.T2)). This mirrors what we observed in Figures [3](https://arxiv.org/html/2402.02593v3#S2.F3)E and [3](https://arxiv.org/html/2402.02593v3#S2.F3)F. This finding further underscores the critical role of differentiability in enhancing robustness against noise, strengthening the argument for using differentiable activation functions.
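The trend in Table II can be read directly off the gradient step discontinuity of LeakyReLU, which is $1-\alpha$ (its derivative is 1 for $x>0$ and $\alpha$ for $x<0$). A minimal sketch; the threshold below which learning succeeds is an empirical observation from our experiments, not derived here:

```python
def leaky_relu_prime(x, alpha):
    # LeakyReLU'(x): 1 on the positive side, alpha on the negative side
    return 1.0 if x > 0 else alpha

def gsd_leaky(alpha, eps=1e-9):
    # Gradient step discontinuity at zero: |1 - alpha|
    return abs(leaky_relu_prime(eps, alpha) - leaky_relu_prime(-eps, alpha))

for alpha in (0.01, 0.1, 0.2, 0.3, 0.4):
    print(f"alpha = {alpha:.2f}  GSD(0) = {gsd_leaky(alpha):.2f}")
```

Comparing against Table II, accuracy jumps from ~10% to ~69% between $\alpha=0.01$ (GSD 0.99) and $\alpha=0.1$ (GSD 0.9), consistent with a discontinuity threshold below which learning becomes possible.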
| Activations | Top-1 Accuracy (%) |
| --- | --- |
| ReLU | 10.78 |
| LeakyReLU ($\alpha=0.01$) | 10.30 |
| LeakyReLU ($\alpha=0.1$) | 68.58 |
| LeakyReLU ($\alpha=0.2$) | 71.41 |
| LeakyReLU ($\alpha=0.3$) | 72.48 |
| LeakyReLU ($\alpha=0.4$) | 69.11 |
| GeLU | 77.57 |

TABLE II: Top-1 classification accuracy of ConvNet with ReLU, GeLU, and LeakyReLU ($\alpha$ is the negative slope) on the CIFAR-10 dataset.
Finally, the vision transformer (ViT) results in Figure [7](https://arxiv.org/html/2402.02593v3#S5.F7) confirm the consistent benefits of continuous activations over discontinuous variants for handling noise and reduced-precision input data. Figures [7](https://arxiv.org/html/2402.02593v3#S5.F7)A and [7](https://arxiv.org/html/2402.02593v3#S5.F7)B both show an increasing accuracy trend as the interpolation factor rises, although the increase is not as pronounced as for the linear and convolutional models. This is because, in the case of the ViT, quantized noise is added only to the inputs and not to the model weights (as shown in Figure [1](https://arxiv.org/html/2402.02593v3#S1.F1)B). In Figure [7](https://arxiv.org/html/2402.02593v3#S5.F7)C, no correlation between the interpolation factor and accuracy is observed for ReGLU-GeGLU. This may be because ReGLU and GeGLU are much more complex functions whose differentiability cannot be easily evaluated through interpolation alone while quantization noise is present. Nevertheless, the overall maximum accuracy of the ViT is still higher with GELU/SiLU than with ReGLU and GeGLU, demonstrating the importance of a fully differentiable activation function when quantization noise is present.
In summary, both the interpolation analysis and model depth analysis substantiate that the differentiability of GELU/SiLU activations enable superior gradient flow and noise resilience as compared to ReLU alternatives across various model architectures. The findings provide clear guidance for selecting activations to mitigate the impacts of unavoidable noise in real-world analog systems.

Figure 7: Top-1 test accuracy of a Vision Transformer (ViT) with an analog photodetector/sensor on the CIFAR-10 dataset, using (a) ReLU-GELU interpolation, (b) ReLU-SiLU interpolation, and (c) ReGLU-GeGLU interpolation.
VI Discussion
-------------
This work expands our understanding of the effectiveness and applicability of differentiable activation functions in real-world quantized-noise scenarios, building upon established knowledge of their advantages over non-differentiable activation functions. We demonstrate this benefit concretely for the low-precision, high-noise settings common in real-world analog systems such as photonic accelerators, memristive crossbar arrays, and other analog hardware. At error probabilities as low as 30%, many model configurations with ReLU already struggle to learn, as shown in Figures [3](https://arxiv.org/html/2402.02593v3#S2.F3)E and [3](https://arxiv.org/html/2402.02593v3#S2.F3)F. In contrast, GELU/SiLU models are able to learn even at higher noise levels. Remarkably, we never observe any case in which ReLU outperforms continuous activations such as GELU/SiLU. This suggests that the continuity principles we have identified may have broad applicability for enhancing model robustness.
More broadly, our findings also aid the explainability and interpretability of deep learning model training in real-world noisy environments, shedding light on how noise can affect model convergence. With neural networks remaining largely black boxes, understanding why certain architectures or components perform better guides debugging and continued progress. By pinpointing derivative continuity as the differentiating factor, we can directly advise system architects to utilize smooth activations to mitigate the effects of unavoidable noise sources in emerging analog accelerators. More generally, we advise those who work with such systems to examine all the non-linear model components which may propagate noise in a similar fashion.
The implications are particularly important for emerging analog computing platforms like photonic, neuromorphic or quantum systems, which promise massive speed and power efficiency improvements but suffer from intrinsic hardware non-idealities. Our analysis indicates that realizing the performance potential of analog accelerators requires joint optimization of both the hardware and the model architecture. Additionally, the continuity principles established here may also aid in adversarial and out-of-distribution robustness in other applications, which warrants further investigation. Furthermore, in robotics and autonomous vehicles, where deep learning models process sensor data, this study helps not only with model training but also enables the creation of robust models providing more consistent outputs amidst noise. This enhances decision-making capabilities in dynamic, safety-critical scenarios.
Finally, these findings could enable systems that use reduced sensor precision for low-power operation to maintain or improve real-world performance. This is particularly interesting for reduced-precision neural networks implemented on FPGAs: reducing precision requires far less silicon area and enables faster inference, since low-bit-precision systems can often employ fixed-point or integer arithmetic, which is faster than its floating-point counterpart. This is analogous to the increased speed and robustness seen with 4-level PAM modulation over binary implementations [[32](https://arxiv.org/html/2402.02593v3#bib.bib32), [33](https://arxiv.org/html/2402.02593v3#bib.bib33), [34](https://arxiv.org/html/2402.02593v3#bib.bib34)]. By reducing precision, neural network inference can operate faster while using less space and power, which are crucial metrics for embedded applications like robotics and autonomous vehicles.
VII Conclusion
--------------
In this work, we have demonstrated the noise-resilience advantages of continuously differentiable activations over discontinuous rectified activations through extensive functional analysis and model training across a range of neural network architectures. Our investigations conclusively establish that the built-in continuity of the GELU and SiLU derivatives enables reliable gradient flow that mitigates the impacts of errors arising from common analog noise sources such as quantization and hardware non-idealities.
The key findings can be summarized as follows:
* GELU and SiLU exhibit inherent robustness to inputs bounded within normalized photodetector/sensor ranges. In contrast, ReLU suffers from gradient uncertainty that leads to unstable convergence and amplification of errors during backpropagation.
* Interpolating between the discontinuous ReLU and continuously differentiable activations systematically improves accuracy as the gradient step discontinuity decreases. This establishes a definitive causal link between activation gradient step discontinuity and noise resilience.
* Continuity advantages accumulate with model depth, leading to larger improvements from GELU/SiLU usage in deeper networks, where analog errors would otherwise compound under discontinuous activations.
* Vision transformer results corroborate the consistent benefits of differentiable activations observed in their convolutional and linear counterparts, affirming the importance of intrinsic activation gradient continuity in modern deep-learning architectures.
Together, these comprehensive results and analyses guide hardware-based model architects to employ smooth activations such as GELU and SiLU in order to fully realize the performance potential of emerging analog platforms, such as photonic accelerators. By selecting appropriate activations, the detrimental impacts of real-world noise sources can be significantly reduced without imposing excessive precision requirements. While our focus has been on robustness to sensor- and analog-hardware-induced noise, the continuity principles established here may also translate to improved stability in other applications, including adversarial robustness and out-of-distribution detection, which warrant future investigation [[35](https://arxiv.org/html/2402.02593v3#bib.bib35), [36](https://arxiv.org/html/2402.02593v3#bib.bib36)]. Overall, by explaining and demonstrating the underlying mechanisms relating continuity to resilience, this work helps pave the path toward reliable, performant, and fully analog AI implementations.
References
----------
* [1] A.Mehonic and A.J. Kenyon, “Brain-inspired computing needs a master plan,” _Nature_, vol. 604, no. 7905, pp. 255–260, Apr. 2022. [Online]. Available: [https://www.nature.com/articles/s41586-021-04362-w](https://www.nature.com/articles/s41586-021-04362-w)
* [2] B.J. Shastri, A.N. Tait, T.Ferreira de Lima, W.H.P. Pernice, H.Bhaskaran, C.D. Wright, and P.R. Prucnal, “Photonics for artificial intelligence and neuromorphic computing,” _Nature Photonics_, vol.15, no.2, pp. 102–114, Feb. 2021. [Online]. Available: [http://www.nature.com/articles/s41566-020-00754-y](http://www.nature.com/articles/s41566-020-00754-y)
* [3] X.Lin, Y.Rivenson, N.T. Yardimci, M.Veli, Y.Luo, M.Jarrahi, and A.Ozcan, “All-optical machine learning using diffractive deep neural networks,” _Science_, vol. 361, no. 6406, pp. 1004–1008, Sep. 2018. [Online]. Available: [https://www.science.org/doi/10.1126/science.aat8084](https://www.science.org/doi/10.1126/science.aat8084)
* [4] Y.Shen, N.C. Harris, S.Skirlo, M.Prabhu, T.Baehr-Jones, M.Hochberg, X.Sun, S.Zhao, H.Larochelle, D.Englund, and M.Soljačić, “Deep learning with coherent nanophotonic circuits,” _Nature Photonics_, vol.11, no.7, pp. 441–446, Jul. 2017. [Online]. Available: [http://www.nature.com/articles/nphoton.2017.93](http://www.nature.com/articles/nphoton.2017.93)
* [5] S.R. Kari, A.Hastings, N.A. Nobile, D.Pantin, V.Shah, and N.Youngblood, “Integrated Coherent Photonic Crossbar Arrays for Efficient Optical Computing,” in _CLEO 2024_.Charlotte, North Carolina: Optica Publishing Group, 2024, p. SM4M.6. [Online]. Available: [https://opg.optica.org/abstract.cfm?URI=CLEO_SI-2024-SM4M.6](https://opg.optica.org/abstract.cfm?URI=CLEO_SI-2024-SM4M.6)
* [6] N.Youngblood, V.Shah, and S.Rahimi Kari, “Computational, photonic crossbar arrays for scalable and efficient matrix operations,” in _Silicon Photonics XVIII_, G.T. Reed and A.P. Knights, Eds.San Francisco, United States: SPIE, Mar. 2023, p.4. [Online]. Available: [https://www.spiedigitallibrary.org/conference-proceedings-of-spie/PC12426/2646996/Computational-photonic-crossbar-arrays-for-scalable-and-efficient-matrix-operations/10.1117/12.2646996.full](https://www.spiedigitallibrary.org/conference-proceedings-of-spie/PC12426/2646996/Computational-photonic-crossbar-arrays-for-scalable-and-efficient-matrix-operations/10.1117/12.2646996.full)
* [7] G.Mourgias-Alexandris, A.Totovic, A.Tsakyridis, N.Passalis, K.Vyrsokinos, A.Tefas, and N.Pleros, “Neuromorphic Photonics With Coherent Linear Neurons Using Dual-IQ Modulation Cells,” _Journal of Lightwave Technology_, vol.38, no.4, pp. 811–819, Feb. 2020. [Online]. Available: [https://ieeexplore.ieee.org/document/8880481/](https://ieeexplore.ieee.org/document/8880481/)
* [8] N.Youngblood, S.R. Kari, N.Nobile, V.Shah, and D.Pantin, “Realization of an integrated photonic platform for coherent photo-electric processing,” Oct. 2023. [Online]. Available: [https://preprints.opticaopen.org/articles/preprint/Realization_of_an_integrated_photonic_platform_for_coherent_photo-electric_processing/24250795/1](https://preprints.opticaopen.org/articles/preprint/Realization_of_an_integrated_photonic_platform_for_coherent_photo-electric_processing/24250795/1)
* [9] Z.Chen, A.Sludds, R.Davis, I.Christen, L.Bernstein, L.Ateshian, T.Heuser, N.Heermeier, J.A. Lott, S.Reitzenstein, R.Hamerly, and D.Englund, “Deep learning with coherent VCSEL neural networks,” _Nature Photonics_, vol.17, no.8, pp. 723–730, Aug. 2023. [Online]. Available: [https://www.nature.com/articles/s41566-023-01233-w](https://www.nature.com/articles/s41566-023-01233-w)
* [10] N.Youngblood, “Coherent Photonic Crossbar Arrays for Large-Scale Matrix-Matrix Multiplication,” _IEEE Journal of Selected Topics in Quantum Electronics_, pp. 1–1, 2022. [Online]. Available: [https://ieeexplore.ieee.org/document/9765351/](https://ieeexplore.ieee.org/document/9765351/)
* [11] S.Rahimi Kari, N.A. Nobile, D.Pantin, V.Shah, and N.Youngblood, “Realization of an integrated coherent photonic platform for scalable matrix operations,” _Optica_, vol.11, no.4, p. 542, Apr. 2024. [Online]. Available: [https://opg.optica.org/abstract.cfm?URI=optica-11-4-542](https://opg.optica.org/abstract.cfm?URI=optica-11-4-542)
* [12] G.Giamougiannis, A.Tsakyridis, G.Mourgias-Alexandris, M.Moralis-Pegios, A.Totovic, G.Dabos, N.Passalis, M.Kirtas, N.Bamiedakis, A.Tefas, D.Lazovsky, and N.Pleros, “Silicon-integrated coherent neurons with 32GMAC/sec/axon compute line-rates using EAM-based input and weighting cells,” in _2021 European Conference on Optical Communication (ECOC)_.Bordeaux, France: IEEE, Sep. 2021, pp. 1–4. [Online]. Available: [https://ieeexplore.ieee.org/document/9605987/](https://ieeexplore.ieee.org/document/9605987/)
* [13] J.Feldmann, N.Youngblood, M.Karpov, H.Gehring, X.Li, M.Stappers, M.L. Gallo, X.Fu, A.Lukashchuk, A.Raja, J.Liu, D.Wright, A.Sebastian, T.Kippenberg, W.Pernice, and H.Bhaskaran, “Parallel convolution processing using an integrated photonic tensor core,” _Nature_, vol. 589, no. 7840, pp. 52–58, Feb. 2020, arXiv:2002.00281. [Online]. Available: [http://arxiv.org/abs/2002.00281](http://arxiv.org/abs/2002.00281), [http://dx.doi.org/10.1038/s41586-020-03070-1](http://dx.doi.org/10.1038/s41586-020-03070-1)
* [14] J.R. Erickson, V.Shah, Q.Wan, N.Youngblood, and F.Xiong, “Designing fast and efficient electrically driven phase change photonics using foundry compatible waveguide-integrated microheaters,” _Optics Express_, vol.30, no.8, p. 13673, Apr. 2022. [Online]. Available: [https://opg.optica.org/abstract.cfm?URI=oe-30-8-13673](https://opg.optica.org/abstract.cfm?URI=oe-30-8-13673)
* [15] P.Pintus, M.Dumont, V.Shah, T.Murai, Y.Shoji, D.Huang, G.Moody, J.E. Bowers, and N.Youngblood, “Integrated non-reciprocal magneto-optics with ultra-high endurance for photonic in-memory computing,” _Nature Photonics_, Oct. 2024. [Online]. Available: [https://www.nature.com/articles/s41566-024-01549-1](https://www.nature.com/articles/s41566-024-01549-1)
* [16] T.Murai, Y.Shoji, N.Nishiyama, and T.Mizumoto, “Nonvolatile magneto-optical switches integrated with a magnet stripe array,” _Optics Express_, vol.28, no.21, p. 31675, Oct. 2020. [Online]. Available: [https://opg.optica.org/abstract.cfm?URI=oe-28-21-31675](https://opg.optica.org/abstract.cfm?URI=oe-28-21-31675)
* [17] N.Youngblood, P.Pintus, M.Dumont, V.Shah, T.Murai, Y.Shoji, D.Huang, and J.Bowers, “Non-reciprocal devices for in-memory photonic computing,” in _Frontiers in Optics + Laser Science 2024 (FiO, LS)_.Denver, Colorado: Optica Publishing Group, 2024, p. FTu1D.2. [Online]. Available: [https://opg.optica.org/abstract.cfm?URI=FiO-2024-FTu1D.2](https://opg.optica.org/abstract.cfm?URI=FiO-2024-FTu1D.2)
|
| 245 |
+
* [18] A.N. Tait, M.A. Nahmias, B.J. Shastri, and P.R. Prucnal, “Broadcast and Weight: An Integrated Network For Scalable Photonic Spike Processing,” _Journal of Lightwave Technology_, vol.32, no.21, pp. 4029–4041, Nov. 2014. [Online]. Available: [http://ieeexplore.ieee.org/document/6872524/](http://ieeexplore.ieee.org/document/6872524/)
|
| 246 |
+
* [19] X.Xu, M.Tan, B.Corcoran, J.Wu, A.Boes, T.G. Nguyen, S.T. Chu, B.E. Little, D.G. Hicks, R.Morandotti, A.Mitchell, and D.J. Moss, “11 TOPS photonic convolutional accelerator for optical neural networks,” _Nature_, vol. 589, no. 7840, pp. 44–51, Jan. 2021. [Online]. Available: [http://www.nature.com/articles/s41586-020-03063-0](http://www.nature.com/articles/s41586-020-03063-0)
|
| 247 |
+
* [20] V.Nair and G.E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in _Proceedings of the 27th international conference on machine learning (ICML-10)_, 2010, pp. 807–814.
|
| 248 |
+
* [21] R.Pascanu, T.Mikolov, and Y.Bengio, “On the difficulty of training Recurrent Neural Networks,” Feb. 2013, arXiv:1211.5063 [cs]. [Online]. Available: [http://arxiv.org/abs/1211.5063](http://arxiv.org/abs/1211.5063)
|
| 249 |
+
* [22] D.Hendrycks and K.Gimpel, “Gaussian Error Linear Units (GELUs),” Jun. 2023, arXiv:1606.08415 [cs]. [Online]. Available: [http://arxiv.org/abs/1606.08415](http://arxiv.org/abs/1606.08415)
|
| 250 |
+
* [23] S.Elfwing, E.Uchibe, and K.Doya, “Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning,” Nov. 2017, arXiv:1702.03118 [cs]. [Online]. Available: [http://arxiv.org/abs/1702.03118](http://arxiv.org/abs/1702.03118)
|
| 251 |
+
* [24] B.Xu, N.Wang, T.Chen, and M.Li, “Empirical Evaluation of Rectified Activations in Convolutional Network,” Nov. 2015, arXiv:1505.00853 [cs, stat]. [Online]. Available: [http://arxiv.org/abs/1505.00853](http://arxiv.org/abs/1505.00853)
|
| 252 |
+
* [25] V.Shah and N.Youngblood, “AnalogVNN: A fully modular framework for modeling and optimizing photonic neural networks,” _APL Machine Learning_, vol.1, no.2, p. 026116, Jun. 2023. [Online]. Available: [https://doi.org/10.1063/5.0134156](https://doi.org/10.1063/5.0134156)
|
| 253 |
+
* [26] L.Lu, Y.Shin, Y.Su, and G.E. Karniadakis, “Dying ReLU and Initialization: Theory and Numerical Examples,” _Communications in Computational Physics_, vol.28, no.5, pp. 1671–1706, Jun. 2020, arXiv:1903.06733 [cs, math, stat]. [Online]. Available: [http://arxiv.org/abs/1903.06733](http://arxiv.org/abs/1903.06733)
|
| 254 |
+
* [27] N.Shazeer, “GLU Variants Improve Transformer,” Feb. 2020, arXiv:2002.05202 [cs]. [Online]. Available: [http://arxiv.org/abs/2002.05202](http://arxiv.org/abs/2002.05202)
|
| 255 |
+
* [28] K.Simonyan and A.Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Apr. 2015, arXiv:1409.1556 [cs]. [Online]. Available: [http://arxiv.org/abs/1409.1556](http://arxiv.org/abs/1409.1556)
|
| 256 |
+
* [29] K.He, X.Zhang, S.Ren, and J.Sun, “Deep Residual Learning for Image Recognition,” Dec. 2015, arXiv:1512.03385 [cs]. [Online]. Available: [http://arxiv.org/abs/1512.03385](http://arxiv.org/abs/1512.03385)
|
| 257 |
+
* [30] S.Wang, T.Zhou, and J.Bilmes, “Bias Also Matters: Bias Attribution for Deep Neural Network Explanation,” in _Proceedings of the 36th International Conference on Machine Learning_, ser. Proceedings of Machine Learning Research, K.Chaudhuri and R.Salakhutdinov, Eds., vol.97.PMLR, Jun. 2019, pp. 6659–6667. [Online]. Available: [https://proceedings.mlr.press/v97/wang19p.html](https://proceedings.mlr.press/v97/wang19p.html)
|
| 258 |
+
* [31] E.L. Bolager, I.Burak, C.Datar, Q.Sun, and F.Dietrich, “Sampling weights of deep neural networks,” Nov. 2023, arXiv:2306.16830 [cs]. [Online]. Available: [http://arxiv.org/abs/2306.16830](http://arxiv.org/abs/2306.16830)
|
| 259 |
+
* [32] Z.Zhang, M.A.P. Mahmud, and A.Z. Kouzani, “Resource-constrained FPGA implementation of YOLOv2,” _Neural Computing and Applications_, vol.34, no.19, pp. 16 989–17 006, Oct. 2022. [Online]. Available: [https://link.springer.com/10.1007/s00521-022-07351-w](https://link.springer.com/10.1007/s00521-022-07351-w)
|
| 260 |
+
* [33] J.Ngadiuba, V.Loncar, M.Pierini, S.Summers, G.Di Guglielmo, J.Duarte, P.Harris, D.Rankin, S.Jindariani, M.Liu, K.Pedro, N.Tran, E.Kreinar, S.Sagear, Z.Wu, and D.Hoang, “Compressing deep neural networks on FPGAs to binary and ternary precision with hls4ml,” _Machine Learning: Science and Technology_, vol.2, no.1, p. 015001, Dec. 2020. [Online]. Available: [https://iopscience.iop.org/article/10.1088/2632-2153/aba042](https://iopscience.iop.org/article/10.1088/2632-2153/aba042)
|
| 261 |
+
* [34] M.Wielgosz and M.Karwatowski, “Mapping Neural Networks to FPGA-Based IoT Devices for Ultra-Low Latency Processing,” _Sensors_, vol.19, no.13, p. 2981, Jul. 2019. [Online]. Available: [https://www.mdpi.com/1424-8220/19/13/2981](https://www.mdpi.com/1424-8220/19/13/2981)
|
| 262 |
+
* [35] B.Liang, H.Li, M.Su, X.Li, W.Shi, and X.Wang, “Detecting Adversarial Image Examples in Deep Neural Networks with Adaptive Noise Reduction,” _IEEE Transactions on Dependable and Secure Computing_, vol.18, no.1, pp. 72–85, Jan. 2021. [Online]. Available: [https://ieeexplore.ieee.org/document/8482346/](https://ieeexplore.ieee.org/document/8482346/)
|
| 263 |
+
* [36] J.Yang, K.Zhou, Y.Li, and Z.Liu, “Generalized Out-of-Distribution Detection: A Survey,” Jan. 2024, arXiv:2110.11334 [cs]. [Online]. Available: [http://arxiv.org/abs/2110.11334](http://arxiv.org/abs/2110.11334)
|
| 264 |
+
|
| 265 |
+
[Effects of Learning Rate] We find that, even across a wide range of learning rates, only activation functions with small discontinuities are able to successfully learn features from the training dataset. Table [III](https://arxiv.org/html/2402.02593v3#A0.T3 "TABLE III ‣ Leveraging Continuously Differentiable Activation for Learning in Analog and Quantized Noisy Environments") illustrates this using ConvNet trained on the CIFAR-10 dataset.
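The interpolation factor in the table below sweeps between ReLU (factor 0) and GELU (factor 1). As a minimal sketch, assuming the interpolation is a pointwise linear blend of the two activations (the function names and the linear-blend form are illustrative assumptions, not taken from the paper):

```python
import math

def relu(x: float) -> float:
    # Rectified linear unit: max(x, 0)
    return max(x, 0.0)

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), where Phi is the standard normal CDF
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))

def interpolated_activation(x: float, alpha: float) -> float:
    """Blend between ReLU (alpha = 0) and GELU (alpha = 1)."""
    return (1.0 - alpha) * relu(x) + alpha * gelu(x)
```

Under this reading, each row of the table corresponds to one value of `alpha`, so the activation's gradient discontinuity at zero shrinks continuously as `alpha` approaches 1.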
| Interpolation Factor | 0.1 | 0.01 | 0.001 | 0.0001 |
| --- | --- | --- | --- | --- |
| 0 (ReLU) | 10.01 | 10.10 | 10.53 | 10.65 |
| 0.1 | 10.02 | 10.13 | 10.83 | 10.55 |
| 0.2 | 10.04 | 10.10 | 10.69 | 11.02 |
| 0.3 | 10.17 | 10.22 | 10.61 | 10.80 |
| 0.4 | 10.38 | 10.54 | 10.48 | 10.70 |
| 0.5 | 10.50 | 10.05 | 10.32 | 10.75 |
| 0.6 | 10.10 | 10.21 | 10.58 | 10.85 |
| 0.7 | 10.01 | 10.17 | 69.11 | 57.31 |
| 0.8 | 10.66 | 10.15 | 70.28 | 59.72 |
| 0.9 | 10.15 | 10.18 | 71.71 | 59.24 |
| 1 (GELU) | 10.29 | 10.29 | 71.55 | 58.13 |

TABLE III: Top-1 accuracy (%) for interpolation factor (rows) vs. learning rate (columns) on ConvNet with CIFAR-10 at 0.8 error probability.