Introduction
We are interested in computing the Lipschitz constant of neural networks with ReLU activations. Formally, for a network f with multiple inputs and outputs, we are interested in the quantity
$$L(f) := \sup_{x \neq y} \frac{||f(x) - f(y)||}{||x - y||} \qquad (1)$$
We allow the norm of the numerator and denominator to be arbitrary and further consider the case where x, y are constrained in an open subset of $\mathbb{R}^n$ leading to the more general problem of computing the local Lipschitz constant.
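The quotient in (1) can always be lower-bounded by sampling pairs of points. The following is a minimal numpy sketch of this idea (ours, purely illustrative, not a method from the paper), using $\ell_2$ norms and a linear map whose Lipschitz constant is known in closed form (the spectral norm), so the sampled value can be checked against the truth:

```python
import numpy as np

# Sampling-based lower bound on the (2,2)-Lipschitz quotient in (1).
# For a linear map f(x) = W x the true constant is the spectral norm
# of W, so sampled quotients can never exceed it.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
f = lambda x: W @ x

def sampled_lipschitz(f, dim, n_pairs=2000, rng=rng):
    best = 0.0
    for _ in range(n_pairs):
        x, y = rng.normal(size=dim), rng.normal(size=dim)
        num = np.linalg.norm(f(x) - f(y))   # ||f(x) - f(y)||_2
        den = np.linalg.norm(x - y)         # ||x - y||_2
        best = max(best, num / den)
    return best

lower = sampled_lipschitz(f, 5)
true_const = np.linalg.norm(W, 2)           # spectral norm of W
assert 0.0 < lower <= true_const + 1e-9
```

Sampling only ever produces lower bounds; the difficulty addressed in this paper is certifying the supremum itself.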
Estimating or bounding the Lipschitz constant of a neural network is an important and well-studied problem. For the Wasserstein GAN formulation [1] the discriminator is required to have a bounded Lipschitz constant, and there are several techniques to enforce this [1–3]. For supervised learning, Bartlett et al. [4] have shown that classifiers with lower Lipschitz constants have better generalization properties. It has also been observed that networks with smaller gradient norms are more robust to adversarial attacks. Bounding the (local) Lipschitz constant has been used widely for certifiable robustness against targeted adversarial attacks [5–7]. Lipschitz bounds under fair metrics may also be used as a means to certify the individual fairness of a model [8, 9].
The Lipschitz constant of a function is fundamentally related to the supremal norm of its Jacobian matrix. Previous work has demonstrated the relationship between these two quantities for functions that are scalar-valued and smooth [10, 11]. However, neural networks used for multi-class classification with ReLU activations do not meet either of these assumptions. We establish an analytical result that allows us to formulate the local Lipschitz constant of a vector-valued nonsmooth function as an optimization over the generalized Jacobian. We access the generalized Jacobian by means of the chain rule. As we discuss, the chain rule may produce incorrect results [12] for nonsmooth functions, even ReLU networks. To address this problem, we present a sufficient condition over the parameters of a ReLU network such that the chain rule always returns an element of the generalized Jacobian, allowing us to solve the proposed optimization problem.
Exactly computing Lipschitz constants of scalar-valued neural networks under the $\ell_2$ norm was shown to be NP-hard [13]. In this paper we establish strong inapproximability results showing that it is hard to even approximate Lipschitz constants of scalar-valued ReLU networks, for the $\ell_1$ and $\ell_\infty$ norms.
A variety of algorithms exist that estimate Lipschitz constants for various norms. To the best of our knowledge, none of these techniques are exact: they are either upper bounds, or heuristic estimators with no provable guarantees. In this paper we present the first technique to provably exactly compute Lipschitz constants of ReLU networks under the $\ell_1, \ell_\infty$ norms. Our method is called LipMIP and relies on Mixed-Integer Program (MIP) solvers. As expected from our hardness results, our algorithm runs in exponential time in the worst case. At any intermediate time our algorithm may be stopped early to yield valid upper bounds.
We demonstrate our algorithm on various applications. We evaluate a variety of Lipschitz estimation techniques, measuring their relative error against the true Lipschitz constant. We apply our algorithm to yield reliable empirical insights about how changes in architecture and various regularization schemes affect the Lipschitz constants of ReLU networks.
Our contributions are as follows:
- We present novel analytic results connecting the Lipschitz constant of an arbitrary, possibly nonsmooth, function to the supremal norm of generalized Jacobians.
- We present a sufficient condition for which the chain rule will always yield an element of the generalized Jacobian of a ReLU network.
- We show that it is provably hard to approximate the Lipschitz constant of a network to within a factor that scales almost linearly with input dimension.
- We present a Mixed-Integer Programming formulation (LipMIP) that is able to exactly compute the local Lipschitz constant of a scalar-valued ReLU network over a polyhedral domain.
- We analyze the efficiency and accuracy of LipMIP against other Lipschitz estimators. We provide experimental data demonstrating how Lipschitz constants change under training.
First we define the problem of interest. There have been several recent papers that leverage an analytical result relating the Lipschitz constant of a function to the maximal dual norm of its gradient [6, 10, 14]. This analytical result is limited in two aspects: namely it only applies to functions that are both scalar-valued and continuously differentiable. Neural networks with ReLU nonlinearities are nonsmooth and for multi-class classification or unsupervised learning settings, typically not scalar-valued. To remedy these issues, we will present a theorem relating the Lipschitz constant to the supremal norm of an element of the generalized Jacobian. We stress that this analytical result holds for all Lipschitz continuous functions, though we will only be applying this result to ReLU networks in the sequel.
The quantity we are interested in computing is defined as follows:
Definition 1. The local $(\alpha, \beta)$-Lipschitz constant of a function $f: \mathbb{R}^d \to \mathbb{R}^m$ over an open set $\mathcal{X} \subseteq \mathbb{R}^d$ is defined as the following quantity:
$$L^{(\alpha,\beta)}(f, \mathcal{X}) := \sup_{x \neq y \in \mathcal{X}} \frac{||f(x) - f(y)||_{\beta}}{||x - y||_{\alpha}} \qquad (2)$$
If $L^{(\alpha,\beta)}(f, \mathcal{X})$ exists and is finite, we say that f is $(\alpha, \beta)$-locally Lipschitz over $\mathcal{X}$.
If f is scalar-valued, then we denote the above quantity $L^{\alpha}(f, \mathcal{X})$, where $||\cdot||_{\beta} = |\cdot|$ is implicit. For smooth, scalar-valued f, it is well-known that
$$L^{\alpha}(f, \mathcal{X}) = \sup_{x \in \mathcal{X}} ||\nabla f(x)||_{\alpha^*} \qquad (3)$$
where $||z||_{\alpha^*} := \sup_{||y||_{\alpha} \le 1} y^T z$ is the dual norm of $||\cdot||_{\alpha}$ [10, 11]. We seek to extend this result to be applicable to vector-valued nonsmooth Lipschitz continuous functions. As the Jacobian is not well-defined everywhere for this class of functions, we recall the definition of Clarke's generalized Jacobian [15]:
Definition 2. The (Clarke) generalized Jacobian of f at x, denoted $\delta_f(x)$ , is the convex hull of the set of limits of the form $\lim_{i\to\infty} \nabla f(x_i)$ for any sequence $(x_i)_{i=1}^{\infty}$ such that $\nabla f(x_i)$ is well-defined and $x_i\to x$ .
Informally, $\delta_f(x)$ may be viewed as the convex hull of the Jacobians of nearby differentiable points. We remark that for smooth functions, $\delta_f(x) = \{\nabla f(x)\}$ for all x, and for convex nonsmooth functions, $\delta_f(\cdot)$ is the subdifferential operator.
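As a concrete instance of Definition 2, consider the scalar ReLU $\sigma(x) = \max(x, 0)$ at $x = 0$. The following self-contained sketch (ours, for illustration) collects gradients along sequences approaching 0 from either side and forms their convex hull, recovering $\delta_{\sigma}(0) = [0, 1]$:

```python
# Gradients of the ReLU at nearby differentiable points: 0 from the
# left, 1 from the right. Their convex hull is the interval [0, 1],
# which is exactly the generalized Jacobian delta_sigma(0).
def relu_grad(x):
    # well-defined for every x != 0
    return 1.0 if x > 0 else 0.0

left_limits = [relu_grad(-t) for t in (1e-1, 1e-3, 1e-6)]
right_limits = [relu_grad(+t) for t in (1e-1, 1e-3, 1e-6)]
assert set(left_limits) == {0.0} and set(right_limits) == {1.0}

# convex hull of the limit set {0, 1} -> the closed interval [0, 1]
hull = (min(left_limits + right_limits), max(left_limits + right_limits))
assert hull == (0.0, 1.0)
```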
The following theorem relates the norms of the generalized Jacobian to the local Lipschitz constant.
Theorem 1. Let $||\cdot||_{\alpha}$, $||\cdot||_{\beta}$ be arbitrary convex norms over $\mathbb{R}^d$, $\mathbb{R}^m$ respectively, and let $f: \mathbb{R}^d \to \mathbb{R}^m$ be $(\alpha, \beta)$-Lipschitz continuous over an open set $\mathcal{X}$. Then the following equality holds:
$$L^{(\alpha,\beta)}(f, \mathcal{X}) = \sup_{G \in \delta_f(\mathcal{X})} ||G^T||_{\alpha,\beta} \qquad (4)$$
where $\delta_f(\mathcal{X}) := \{G \in \delta_f(x) \mid x \in \mathcal{X}\}$ and $||M||_{\alpha,\beta} := \sup_{||v||_{\alpha} \le 1} ||Mv||_{\beta}$.
This result relies on the fact that Lipschitz continuous functions are differentiable almost everywhere (Rademacher's Theorem). As desired our result recovers equation 3 for scalar-valued smooth functions. Developing techniques to optimize the right-hand-side of equation 4 will be the central algorithmic focus of this paper.
Theorem 1 relates the Lipschitz constant to an optimization over generalized Jacobians. Typically we access the Jacobian of a function through backpropagation, which is simply an efficient implementation of the familiar chain rule. However the chain rule is only provably correct for functions that are compositions of continuously differentiable functions, and hence does not apply to ReLU networks [12]. In this section we will provide a sufficient condition over the parameters of a ReLU network such that any standard implementation of the chain rule will always yield an element of the generalized Jacobian.
The chain rule for nonsmooth functions: To motivate the discussion, we turn our attention to neural networks with ReLU nonlinearities. We say that a function is a ReLU network if it may be written as a composition of affine operators and element-wise ReLU nonlinearities, which may be encoded by the following recursion:
$$Z_i(x) := W_i\, \sigma(Z_{i-1}(x)) + b_i, \qquad Z_0(x) := x \qquad (5)$$
where $\sigma(\cdot)$ here is the ReLU operator applied element-wise. We present the following example where the chain rule yields a result not contained in the generalized Jacobian. The univariate identity function may be written as $I(x) := 2x - \sigma(x) + \sigma(-x)$. Certainly at every point x, $\delta_I(x) = \{1\}$. However, as PyTorch's automatic differentiation package defines $\sigma'(0) = 0$, PyTorch will compute $I'(0)$ as 2 [16]. Indeed, this is exactly the case where naively replacing the feasible set $\delta_f(\mathcal{X})$ in Equation 4 by the set of Jacobians returned by the chain rule will yield an incorrect calculation of the Lipschitz constant. To correctly relate the set of generalized Jacobians to the set of elements returnable by an implementation of the chain rule, we introduce the following definition:
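This failure mode can be reproduced without any autograd library. The sketch below (ours) applies the chain rule by hand with the derivative-rule choice $\sigma'(0) = 0$, mirroring PyTorch's convention, and shows it reporting $I'(0) = 2$ even though $\delta_I(0) = \{1\}$:

```python
# The identity written as I(x) = 2x - relu(x) + relu(-x), differentiated
# by a chain rule that assigns relu'(0) = 0 (PyTorch's convention).
def relu(x):
    return max(x, 0.0)

def relu_prime(x):
    return 1.0 if x > 0 else 0.0   # the choice relu'(0) = 0

def I(x):
    return 2 * x - relu(x) + relu(-x)

def I_prime_chain_rule(x):
    # d/dx [2x - relu(x) + relu(-x)] applied term by term
    return 2.0 - relu_prime(x) + relu_prime(-x) * (-1.0)

assert I(0.0) == 0.0
# away from zero the chain rule is correct: the derivative is 1
assert I_prime_chain_rule(1.0) == 1.0 and I_prime_chain_rule(-1.0) == 1.0
# at x = 0 the chain rule reports 2, but delta_I(0) = {1}
assert I_prime_chain_rule(0.0) == 2.0
```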
Definition 3. Consider any implementation of the chain rule which may arbitrarily assign any element of the generalized gradient $\delta_{\sigma}(0)$ for each required partial derivative $\sigma'(0)$ . We define the set-valued function $\nabla^{#} f(\cdot)$ as the collection of answers yielded by any such chain rule.
The subdifferential of the ReLU function at zero is the closed interval [0,1], so the chain rule as implemented in PyTorch and TensorFlow will yield an element contained in $\nabla^{#} f(\cdot)$. Our goal will be to demonstrate that, for a broad class of ReLU networks, the feasible set in Equation 4 may be replaced by the set $\{G \in \nabla^{#} f(x) \mid x \in \mathcal{X}\}$.
General Position ReLU Networks: Taking inspiration from hyperplane arrangements, we refer to our sufficient condition as general position. Letting $f: \mathbb{R}^d \to \mathbb{R}^m$ be a ReLU network with n neurons, we define the function $g_i: \mathbb{R}^d \to \mathbb{R}$ for each $i \in [n]$ as the input to the $i^{th}$ ReLU of f at x. Then we consider the set of inputs for which each $g_i$ is identically zero: we refer to the set $K_i := \{x \mid g_i(x) = 0\}$ as the $i^{th}$ ReLU kernel of f. We say that a polytope P is k-dimensional if the affine hull of P has dimension exactly k. Then we define general position ReLU networks as follows:
Definition 4. We say that a ReLU network with n neurons is in general position if, for every subset of neurons $S \subseteq [n]$ , the intersection $\cap_{i \in S} K_i$ is a finite union of (d - |S|)-dimensional polytopes.
We emphasize that this definition requires that each ReLU kernel is a finite union of (d-1)-dimensional polytopes, i.e. the 'bent hyperplanes' referred to in [17]. For a general position neural net, no (d+1) ReLU kernels may have a nonempty intersection. We now present our theorem on the correctness of the chain rule for general position ReLU networks.
Theorem 2. Let f be a general position ReLU network. Then for every x in the domain of f, the set of elements returned by the generalized chain rule is exactly the generalized Jacobian:
$$\nabla^{#} f(x) = \delta_f(x) \qquad (6)$$
In particular this theorem implies that, for general position ReLU nets,
$$L^{(\alpha,\beta)}(f, \mathcal{X}) = \sup_{G \in \nabla^{#} f(\mathcal{X})} ||G^T||_{\alpha,\beta} \qquad (7)$$
We will develop algorithms to solve this optimization problem predicated upon the assumption that a ReLU network is in general position. As shown by the following theorem, almost every ReLU network satisfies this condition.
Theorem 3. The set of ReLU networks not in general position has Lebesgue measure zero over the parameter space.
In general, we seek algorithms that yield estimates of the Lipschitz constant of ReLU networks with provable guarantees. In this section we will address the complexity of Lipschitz estimation of ReLU networks. We show that under mild complexity-theoretic assumptions, no deterministic polynomial-time algorithm can provably return a tight estimate of the Lipschitz constant of a ReLU network.
Extant work discussing the complexity of Lipschitz estimation of ReLU networks has only shown that computing $L^2(f,\mathbb{R}^d)$ is NP-hard [13]. This does not address the question of whether efficient approximation algorithms exist. We relate this problem to the problem of approximating the maximum independent set of a graph. Maximum independent set is one of the hardest problems to approximate: if G is a graph with d vertices, then assuming the Exponential Time Hypothesis$^1$, it is hard to approximate the maximum independent set of G with an approximation ratio of $\Omega(d^{1-c})$ for any constant c. Our reduction achieves the same inapproximability ratio, where d here refers to the encoding size of the ReLU network, which scales at least linearly with the input dimension and number of neurons.
Theorem 4. Let f be a scalar-valued ReLU network, not necessarily in general position, taking inputs in $\mathbb{R}^d$ . Then assuming the exponential time hypothesis, there does not exist a polynomial-time approximation algorithm with ratio $\Omega(d^{1-c})$ for computing $L^{\infty}(f,\mathcal{X})$ and $L^1(f,\mathcal{X})$ , for any constant c>0.
The results of the previous section indicate that one cannot develop any polynomial-time algorithm to estimate the local Lipschitz constant of a ReLU network with nontrivial provable guarantees. Driven by this negative result, we instead develop algorithms that exactly compute this quantity but do not run in polynomial time in the worst case. Namely, we will use a mixed-integer programming (MIP) framework to formulate the optimization problem posed in Equation 7 for general position ReLU networks. For ease of exposition, we will consider scalar-valued ReLU networks under the $\ell_1, \ell_\infty$ norms, thereby using MIP to exactly compute $L^1(f, \mathcal{X})$ and $L^\infty(f, \mathcal{X})$. Our formulation may be extended to vector-valued networks and a wider variety of norms, which we discuss in Appendix E.

<sup>1</sup>This states that 3SAT cannot be solved in sub-exponential time [18]. If true, this would imply $P \neq NP$.
While mixed-integer programming requires exponential time in the worst-case, implementations of mixed-integer programming solvers typically have runtime that is significantly lower than the worst-case. Our algorithm is unlikely to scale to massive state-of-the-art image classifiers, but we nevertheless argue the value of such an algorithm in two ways. First, it is important to provide a ground-truth as a frame of reference for evaluating the relative error of alternative Lipschitz estimation techniques. Second, an algorithm that provides provable guarantees for Lipschitz estimation allows one to make accurate claims about the properties of neural networks. We empirically demonstrate each of these use-cases in the experiments section.
We state the following theorem about the correctness of our MIP formulation and will spend the remainder of the section describing the construction yielding the proof.
Theorem 5. Let $f: \mathbb{R}^d \to \mathbb{R}$ be a general position ReLU network and let $\mathcal{X}$ be an open set that is the neighborhood of a bounded polytope in $\mathbb{R}^d$. Then there exists an efficiently-encodable mixed-integer program whose optimal objective value is $L^{\alpha}(f, \mathcal{X})$, where $||\cdot||_{\alpha}$ is either the $\ell_1$ or $\ell_{\infty}$ norm.
Mixed-Integer Programming: Mixed-integer programming may be viewed as the extension of linear programming where some variables are constrained to be integral. The feasible sets of mixed-integer programs may be defined as follows:
Definition 5. A mixed-integer polytope is a set $M \subseteq \mathbb{R}^n \times \{0,1\}^m$ that satisfies a set of linear inequalities:
$$M := \{(x, a) \in \mathbb{R}^n \times \{0,1\}^m \mid Ax + Ba \le c\} \qquad (8)$$
Mixed-integer programming then optimizes a linear function over a mixed-integer polytope.
From Equation 7, our goal is to frame $\nabla^{#} f(\mathcal{X})$ as a mixed-integer polytope. More accurately, we aim to frame $\{||G^T||_{\alpha} \mid G \in \nabla^{#} f(\mathcal{X})\}$ as a mixed-integer polytope. The key idea for how we do this is encapsulated in the following example. Suppose $\mathcal{X}$ is some set and we wish to solve the optimization problem $\max_{x \in \mathcal{X}} (g \circ f)(x)$. Letting $\mathcal{Y} := \{f(x) \mid x \in \mathcal{X}\}$ and $\mathcal{Z} := \{g(y) \mid y \in \mathcal{Y}\}$, we see that
$$\max_{x \in \mathcal{X}} (g \circ f)(x) = \max_{y \in \mathcal{Y}} g(y) = \max_{z \in \mathcal{Z}} z \qquad (9)$$
Thus, if $\mathcal{X}$ is a mixed-integer polytope, and f is such that $f(\mathcal{X})$ is also a mixed-integer polytope and similar for g, then the optimization problem may be solved under the MIP framework.
From the example above, it suffices to show that $\nabla^{#} f(\cdot)$ is a composition of functions $f_i$ with the property that $f_i$ maps mixed-integer polytopes to mixed-integer polytopes without blowing up in encoding-size. We formalize this notion with the following definition:
Definition 6. We say that a function g is MIP-encodable if, for every mixed-integer polytope M, the image of M mapped through g is itself a mixed-integer polytope.
As an example, we show that the affine function g(x) := Dx + e is MIP-encodable, where g is applied only to the continuous variables. Consider the canonical mixed-integer polytope M defined in Equation 8; then g(M) is the mixed-integer polytope over the existing variables (x, a), with the dimension lifted to include the new continuous variable y and a new equality constraint:
$$g(M) = \{(x, a, y) \mid Ax + Ba \le c,\; y = Dx + e\} \qquad (10)$$
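The lifting can be written purely as a block inequality system, since the equality $y = Dx + e$ is a pair of inequalities. The numpy sketch below (ours; the matrix sizes are illustrative, not from the paper) builds the lifted constraint matrix and checks that a feasible point of M, extended by $y = Dx + e$, remains feasible:

```python
import numpy as np

# Lifting an affine image: from M = {(x, a) : A x + B a <= c}, encode
# g(M) over variables (x, a, y) by appending rows forcing y = D x + e.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 3))
B = rng.normal(size=(4, 2))
c = np.full(4, 5.0)                 # chosen so (x, a) = 0 is feasible
D = rng.normal(size=(2, 3))
e = rng.normal(size=2)

# lifted inequality system over (x, a, y):
#   [ A  B  0 ]            [ c ]
#   [ D  0 -I ] (x,a,y) <= [-e ]    (D x - y <= -e)
#   [-D  0  I ]            [ e ]    (y - D x <=  e)
I2 = np.eye(2)
A_lift = np.block([[A, B, np.zeros((4, 2))],
                   [D, np.zeros((2, 2)), -I2],
                   [-D, np.zeros((2, 2)), I2]])
c_lift = np.concatenate([c, -e, e])

x, a = np.zeros(3), np.zeros(2)
assert np.all(A @ x + B @ a <= c)          # (x, a) lies in M
point = np.concatenate([x, a, D @ x + e])  # extend by y = g(x)
assert np.all(A_lift @ point <= c_lift + 1e-12)
```

The two appended inequality blocks together pin y to exactly $Dx + e$, which is how equality constraints are expressed in an inequality-only description.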
To represent $\{||G^T||_{\alpha} \mid G \in \nabla^{#} f(\mathcal{X})\}$ as a mixed-integer polytope, there are two steps. First we must demonstrate a set of primitive functions such that $||\nabla^{#} f(x)||_{\alpha}$ may be represented as a composition of these primitives, and then we must show that each of these primitives is MIP-encodable. In this sense, the following construction allows us to 'unroll' backpropagation into a mixed-integer polytope.
MIP-encodable components of ReLU networks: We introduce the following three primitive operators and show that $||\nabla^{#} f||_{\alpha}$ may be written as a composition of these primitive operators. These operators are the affine, conditional, and switch operators, defined below:
Affine operators: For some fixed matrix W and vector b, $A: \mathbb{R}^n \to \mathbb{R}^m$ is an affine operator if it is of the form A(x) := Wx + b.
The conditional operator $C: \mathbb{R} \to \mathcal{P}(\{0,1\})$ is defined as
$$C(x) := \begin{cases} \{1\} & \text{if } x > 0 \\ \{0\} & \text{if } x < 0 \\ \{0, 1\} & \text{if } x = 0 \end{cases} \qquad (11)$$
The switch operator $S: \mathbb{R} \times \{0,1\} \to \mathbb{R}$ is defined as
$$S(x, a) := x \cdot a \qquad (12)$$
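The following self-contained sketch (ours) implements the conditional and switch operators, writing the switch as $S(x, a) = x \cdot a$, consistent with the text's later use of $\mathrm{Diag}(\Lambda)$ products; it checks that $\sigma(x) = S(x, a)$ for some admissible indicator $a \in C(x)$ at every input, including the ambiguous point $x = 0$:

```python
# The conditional operator C (set-valued) and switch operator S,
# together with the identity relu(x) = S(x, a) for a in C(x).
def C(x):
    # which binary values are consistent with the sign of x
    if x > 0:
        return {1}
    if x < 0:
        return {0}
    return {0, 1}            # at zero, either choice is allowed

def S(x, a):
    return x * a

def relu(x):
    return max(x, 0.0)

for x in (-2.0, -0.5, 0.0, 0.5, 3.0):
    assert any(S(x, a) == relu(x) for a in C(x))

# at x = 0 both admissible choices of a give S(0, a) = 0 = relu(0)
assert {S(0.0, a) for a in C(0.0)} == {0.0}
```

The set-valuedness of C at zero is precisely what lets the MIP's integer variables range over every chain-rule choice at ReLU kernels.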
Then we have the two following lemmas which suffice to show that $\nabla^{#} f(\cdot)$ is a MIP-encodable function:
Lemma 1. Let f be a scalar-valued general position ReLU network. Then f(x), $\nabla^{#} f(x)$, $||\cdot||_1$, and $||\cdot||_{\infty}$ may all be written as a composition of affine, conditional and switch operators.
This is easy to see for f(x) by the recurrence in Equation 5; indeed this construction is used in the MIP formulations for evaluating robustness of neural networks [19–24]. For $\nabla^{#} f$, one can define the recurrence
$$Y_i(x) := W_i^T\, \mathrm{Diag}(\Lambda_i(x))\, Y_{i+1}(x) \qquad (13)$$
where $\Lambda_i(x)$ is the conditional operator applied to the input to the $i^{th}$ layer of f. Since $\Lambda_i(x)$ takes values in $\{0,1\}^*$, $\mathrm{Diag}(\Lambda_i(x)) Y_{i+1}(x)$ is equivalent to $S(Y_{i+1}(x), \Lambda_i(x))$ applied element-wise.
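The recurrence above is just backpropagation with the activation indicators made explicit. The numpy sketch below (ours; a two-hidden-layer network with a final linear layer c, all weights illustrative) stores the conditional-operator outputs during the forward pass, runs the backward recurrence, and checks the result against finite differences at a generic (hence differentiable) point:

```python
import numpy as np

# Forward pass stores the activation indicators Lambda_i; a backward
# pass computes Y_i = W_i^T Diag(Lambda_i) Y_{i+1}, yielding an element
# of the chain-rule Jacobian set for f(x) = c^T relu(W2 relu(W1 x + b1) + b2).
rng = np.random.default_rng(2)
Ws = [rng.normal(size=(6, 4)), rng.normal(size=(5, 6))]
bs = [rng.normal(size=6), rng.normal(size=5)]
c = rng.normal(size=5)                       # final linear layer

def forward(x):
    lambdas, z = [], x
    for W, b in zip(Ws, bs):
        pre = W @ z + b
        lambdas.append((pre > 0).astype(float))  # a chain-rule choice at 0
        z = np.maximum(pre, 0.0)
    return c @ z, lambdas

def grad_sharp(x):
    _, lambdas = forward(x)
    y = c                                    # gradient of the output layer
    for W, lam in zip(reversed(Ws), reversed(lambdas)):
        y = W.T @ (lam * y)                  # W^T Diag(Lambda) y
    return y

x = rng.normal(size=4)
g = grad_sharp(x)
eps = 1e-6
fd = np.array([(forward(x + eps * e)[0] - forward(x - eps * e)[0]) / (2 * eps)
               for e in np.eye(4)])
assert np.allclose(g, fd, atol=1e-4)
```

Because the network is piecewise linear, the finite-difference check is exact up to floating-point error whenever no activation flips within the stencil.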
Lemma 2. Let g be a composition of affine, conditional and switch operators, where global lower and upper bounds are known for each input to each element of the composition. Then g is a MIP-encodable function.
As we have seen, affine operators are trivially MIP-encodable. For the conditional and switch operators, global lower and upper bounds are necessary for MIP-encodability. Provided that our original set $\mathcal{X}$ is bounded, there exist several efficient schemes for propagating upper and lower bounds globally. Conditional and switch operators may be incorporated into the composition by adding only a constant number of new linear inequalities for each new variable. These constructions are described in full detail in Appendix D.
Formulating LipMIP: To put all the above components together, we summarize our algorithm. Provided a bounded polytope $\mathcal{P}$, we first compute global lower and upper bounds to each conditional and switch operator in the composition that defines $||\nabla^{#}f(\cdot)||_{\alpha}$ by propagating the bounds of $\mathcal{P}$. We then iteratively move components of the composition into the feasible set as in Equation 9 by lifting the dimension of the feasible set and incorporating new constraints and variables. This yields a valid mixed-integer program which can be optimized by off-the-shelf solvers to yield $L^{\alpha}(f,\mathcal{X})$ for either the $\ell_1$ or $\ell_{\infty}$ norms.
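For intuition about what the MIP is searching over, the optimization in Equation 7 can be brute-forced on tiny networks by enumerating activation patterns instead of encoding them with integer variables. The sketch below (ours; a one-hidden-layer network with illustrative weights) does exactly this for $L^{\infty}$, whose objective is the $\ell_1$ (dual) norm of the gradient; enumerating all $2^n$ patterns gives the exact value when every pattern is realized, and an upper bound otherwise:

```python
import itertools
import numpy as np

# Brute-force counterpart of Equation 7 for f(x) = W2 relu(W1 x):
# enumerate activation patterns, form each chain-rule gradient, and
# take the maximal l1 (dual of l_infty) norm.
rng = np.random.default_rng(3)
W1, W2 = rng.normal(size=(5, 3)), rng.normal(size=(1, 5))

def grad_for_pattern(lam):
    # gradient of x -> W2 relu(W1 x) under activation pattern lam
    return (W2 @ np.diag(lam) @ W1).ravel()

brute = max(np.abs(grad_for_pattern(np.array(lam, float))).sum()
            for lam in itertools.product([0.0, 1.0], repeat=5))

# any gradient realized at a sampled point is a lower bound
sampled = 0.0
for _ in range(500):
    x = rng.normal(size=3)
    lam = (W1 @ x > 0).astype(float)
    sampled = max(sampled, np.abs(grad_for_pattern(lam)).sum())

assert sampled <= brute + 1e-9
```

LipMIP replaces this exponential enumeration with integer variables whose feasibility (and restriction to the polytope $\mathcal{P}$) is handled by the solver's branch-and-bound search.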
Extensions: While our results focus on evaluating the $\ell_1$ and $\ell_\infty$ Lipschitz constants of scalar-valued ReLU networks, we note that the above formulation is easily extensible to vector-valued networks over a variety of norms. We present this formulation, including an application to untargeted robustness verification through the use of a novel norm in Appendix E. We also note that any convex relaxation of our formulation will yield a provable upper bound to the local Lipschitz constant. Mixed-integer programming formulations have natural linear programming relaxations, by relaxing each integral constraint to a continuous constraint. We denote this linear programming relaxation as LipLP. Most off-the-shelf MIP solvers may also be stopped early, yielding valid upper bounds for the Lipschitz constant.
Method
As we will be frequently referring to arbitrary norms, we recall the formal definition:
Definition 7. A norm $||\cdot||$ over vector space V is a nonnegative valued function that meets the following three properties:
- Triangle Inequality: For all $x, y \in V$ , $||x + y|| \le ||x|| + ||y||$
- Absolute Homogeneity: For all $x \in V$ , and any field element a, $||ax|| = |a| \cdot ||x||$ .
- Point Separation: If ||x|| = 0, then x = 0, the zero vector of V.
The most common norms are the $\ell_p$ norms over $\mathbb{R}^d$, with $||x||_p := (\sum_i |x_i|^p)^{1/p}$, though these are certainly not all possible norms over $\mathbb{R}^d$. We can also describe norms over matrices. One such norm that we frequently discuss is a norm over matrices in $\mathbb{R}^{m \times d}$, induced by norms over $\mathbb{R}^d$ and $\mathbb{R}^m$:
Definition 8. Given norm $||\cdot||_{\alpha}$ over $\mathbb{R}^d$ and norm $||\cdot||_{\beta}$ over $\mathbb{R}^m$, the matrix norm $||\cdot||_{\alpha,\beta}$ over $\mathbb{R}^{m \times d}$ is defined as
$$||A||_{\alpha,\beta} := \sup_{||x||_{\alpha} \le 1} ||Ax||_{\beta} \qquad (14)$$
A convenient way to keep the notation straight is that A, above, can be viewed as a linear operator which maps elements from a space which has norm $||\cdot||_{\alpha}$ to a space which has norm $||\cdot||_{\beta}$, and hence is equipped with the norm $||A||_{\alpha,\beta}$. As long as $||\cdot||_{\alpha}$, $||\cdot||_{\beta}$ are norms, then $||\cdot||_{\alpha,\beta}$ is a norm as well in that the three properties listed above are satisfied.
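For $\alpha = \beta = \ell_\infty$, the supremum in Definition 8 is attained at a vertex of the unit cube and reduces to the maximum absolute row sum of A. The sketch below (ours, with an illustrative matrix) checks the closed form against brute-force enumeration of the cube's vertices:

```python
import itertools
import numpy as np

# ||A||_{inf,inf} = sup over ||x||_inf <= 1 of ||Ax||_inf.
# A linear objective over a convex set is maximized at an extreme
# point, so it suffices to enumerate the vertices {-1, +1}^d; the
# result equals the maximum row sum of |A|.
rng = np.random.default_rng(4)
A = rng.normal(size=(3, 4))

brute = max(np.abs(A @ np.array(v)).max()
            for v in itertools.product([-1.0, 1.0], repeat=4))
row_sum = np.abs(A).sum(axis=1).max()
assert np.isclose(brute, row_sum)
```

Analogous closed forms exist for other pairs (e.g. $||A||_{1,1}$ is the maximum absolute column sum), which is what makes the $\ell_1, \ell_\infty$ cases tractable objectives in the MIP formulation.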
Every norm induces a dual norm, defined as
$$||y||_{\alpha^*} := \sup_{||x||_{\alpha} \le 1} \langle x, y \rangle \qquad (15)$$
where $\langle \cdot, \cdot \rangle$ is the standard inner product for vectors over $\mathbb{R}^d$ or matrices over $\mathbb{R}^{m \times d}$. We note that if matrix A is a row-vector, then $||A||_{\alpha,|\cdot|} = ||A||_{\alpha^*}$ by definition.
We also have versions of Hölder's inequality for arbitrary norms over $\mathbb{R}^d$:
Proposition 1. Let $||\cdot||_{\alpha}$ be a norm over $\mathbb{R}^d$, with dual norm $||\cdot||_{\alpha^*}$. Then, for all $x, y \in \mathbb{R}^d$,
$$\langle x, y \rangle \le ||x||_{\alpha} \cdot ||y||_{\alpha^*} \qquad (16)$$
Proof. Indeed, assuming WLOG that neither x nor y is zero, and letting $u = \frac{x}{||x||_{\alpha}}$, we have
$$||y||_{\alpha^*} = \sup_{||z||_{\alpha} \le 1} \langle z, y \rangle \ge \langle u, y \rangle = \frac{\langle x, y \rangle}{||x||_{\alpha}} \qquad (17)$$
and rearranging yields the claim. $\square$
We can make a similar claim about the matrix norms defined above, $||\cdot||_{\alpha,\beta}$ :
Proposition 2. Letting $||\cdot||_{\alpha,\beta}$ be a matrix norm induced by norms $||\cdot||_{\alpha}$ over $\mathbb{R}^d$ and $||\cdot||_{\beta}$ over $\mathbb{R}^m$, for any $A \in \mathbb{R}^{m \times d}$, $x \in \mathbb{R}^d$,
$$||Ax||_{\beta} \le ||A||_{\alpha,\beta} \cdot ||x||_{\alpha} \qquad (18)$$
Proof. Indeed, assuming WLOG that x is nonzero, letting $y = x / ||x||_{\alpha}$ so that $||y||_{\alpha} = 1$, we have
$$||A||_{\alpha,\beta} = \sup_{||z||_{\alpha} \le 1} ||Az||_{\beta} \ge ||Ay||_{\beta} = \frac{||Ax||_{\beta}}{||x||_{\alpha}} \qquad (19)$$
and rearranging yields the claim. $\square$
When $f: \mathbb{R}^d \to \mathbb{R}^m$ is vector-valued over some open set $\mathcal{X} \subseteq \mathbb{R}^d$, we say that it is $(\alpha, \beta)$-Lipschitz continuous if there exists a constant L such that, for norms $||\cdot||_{\alpha}$, $||\cdot||_{\beta}$ and all $x, y \in \mathcal{X}$,
$$||f(x) - f(y)||_{\beta} \le L \cdot ||x - y||_{\alpha} \qquad (20)$$
Then the Lipschitz constant $L^{(\alpha,\beta)}(f,\mathcal{X})$ is the infimum over all such L. Equivalently, one can define $L^{(\alpha,\beta)}(f,\mathcal{X})$ as
$$L^{(\alpha,\beta)}(f, \mathcal{X}) := \sup_{x \neq y \in \mathcal{X}} \frac{||f(x) - f(y)||_{\beta}}{||x - y||_{\alpha}} \qquad (21)$$
We say that f is differentiable at x if there exists some linear operator $\nabla f(x)^T \in \mathbb{R}^{m \times d}$ such that
$$\lim_{||h|| \to 0} \frac{||f(x + h) - f(x) - \nabla f(x)^T h||}{||h||} = 0 \qquad (22)$$
A linear operator such that the above equation holds is defined as the Jacobian.$^2$
The directional derivative of f along direction $v \in \mathbb{R}^d$ is defined as
$$d_v f(x) := \lim_{t \to 0^+} \frac{f(x + tv) - f(x)}{t} \qquad (23)$$
where we note that we are taking limits of a vector-valued function. We now add the following known facts:
- If f is Lipschitz continuous, then it is absolutely continuous.
- If f is differentiable at x, all directional derivatives exist at x. The converse is not true, however.
- If f is differentiable at x, then for any vector v, $d_v f(x) = \nabla f(x)^T v$ .
- (Rademacher's Theorem): If f is Lipschitz continuous, then f is differentiable everywhere except for a set of measure zero, under the standard Lebesgue measure in $\mathbb{R}^d$ [32].
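The second fact above is illustrated by $f(x) = |x|$ at $x = 0$: every directional derivative exists ($d_v f(0) = |v|$), yet no single linear map reproduces them all, so f is not differentiable there. A short numerical check (ours, one-sided difference quotients standing in for the limit in Equation 23):

```python
import numpy as np

# f(x) = |x| at x = 0: d_v f(0) = |v| exists for every direction v,
# but the map v -> |v| is not linear, so f is not differentiable at 0.
f = abs

def directional(f, x, v, t=1e-8):
    # one-sided difference quotient approximating the limit t -> 0+
    return (f(x + t * v) - f(x)) / t

assert np.isclose(directional(f, 0.0, 1.0), 1.0)
assert np.isclose(directional(f, 0.0, -1.0), 1.0)   # d_v f(0) = |v|
# a gradient g would require d_v f(0) = g * v, i.e. opposite signs
# for opposite v -- contradicting the two values above
```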
Finally we introduce some notational shorthand. Letting $f: \mathbb{R}^d \to \mathbb{R}^m$ be Lipschitz continuous and defined over an open set $\mathcal{X}$, we let $\mathrm{Diff}(\mathcal{X})$ denote the differentiable subset of $\mathcal{X}$. Let $\mathcal{D}$ be the set of $(x, v) \in \mathbb{R}^{2d}$ for which $d_v f(x)$ exists and $x \in \mathcal{X}$. Additionally, let $\mathcal{D}_v$ be the set $\mathcal{D}_v = \{x \mid (x, v) \in \mathcal{D}\}$.
Now we can state our first lemma, which claims that for any norm, the maximal directional derivative is attained at a differentiable point of f:
Lemma 3. For any $(\alpha, \beta)$-Lipschitz continuous function f, norm $||\cdot||_{\beta}$ over $\mathbb{R}^m$, and any $v \in \mathbb{R}^d$, letting $\mathcal{D}_v := \{x \mid (x, v) \in \mathcal{D}\}$, we have:
$$\sup_{x \in \mathcal{D}_v} ||d_v f(x)||_{\beta} = \sup_{x \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(x)^T v||_{\beta} \qquad (24)$$
<sup>2</sup>We typically write the Jacobian of a function $f: \mathbb{R}^d \to \mathbb{R}^m$ as $\nabla f(x)^T \in \mathbb{R}^{m \times d}$. This is because we like to think of the Jacobian of a scalar-valued function, referred to as the gradient and denoted $\nabla f(x)$, as a column vector.
Remark: For scalar-valued functions and norm $||\cdot||_{\alpha}$ over $\mathbb{R}^d$, one can equivalently state that for all vectors v with $||v||_{\alpha} = 1$:
$$\sup_{x \in \mathcal{D}_v} |d_v f(x)| = \sup_{x \in \mathrm{Diff}(\mathcal{X})} |\nabla f(x)^T v| \qquad (25)$$
Proof. Essentially the plan is to show that each of the following quantities is within $\epsilon$ of the others: $||d_v f(x)||_{\beta}$, the limit definition of $||d_v f(x)||_{\beta}$, the limit definition of $||d_v f(x')||_{\beta}$ for nearby differentiable x', and the norm of the gradient at x' applied to the direction v.
We fix an arbitrary $v \in \mathbb{R}^d$. It suffices to show that for every $\epsilon > 0$, there exists some differentiable $x' \in \mathrm{Diff}(\mathcal{X})$ such that $||\nabla f(x')^T v||_{\beta} \ge \sup_{x \in \mathcal{D}_v} ||d_v f(x)||_{\beta} - \epsilon$.
By the definition of sup, for every $\epsilon > 0$, there exists an $x \in \mathcal{D}_v$ such that
$$||d_v f(x)||_{\beta} \ge \sup_{z \in \mathcal{D}_v} ||d_v f(z)||_{\beta} - \epsilon/4 \qquad (26)$$
Then for all $\epsilon > 0$, by the limit definition of $d_v f(x)$, there exists a $\delta > 0$ such that for all t with $|t| < \delta$,
$$\left|\left| \frac{f(x + tv) - f(x)}{t} - d_v f(x) \right|\right|_{\beta} \le \epsilon/4 \qquad (27)$$
Next we note that, since Lipschitz continuity implies absolute continuity of f, and t is now a fixed constant, the function $h(x) := \frac{f(x)}{||tv||_{\alpha}}$ is absolutely continuous. Hence there exists some $\delta' > 0$ such that for all $y \in \mathcal{X}$ and z with $||z||_{\alpha} \le \delta'$,
$$||h(y + z) - h(y)||_{\beta} \le \epsilon/4 \qquad (28)$$
Hence, by Rademacher's theorem, there exists some differentiable x' within a $\delta'$-neighborhood of x such that both $\frac{||f(x') - f(x)||_{\beta}}{||tv||_{\alpha}} < \epsilon/4$ and $\frac{||f(x' + tv) - f(x + tv)||_{\beta}}{||tv||_{\alpha}} < \epsilon/4$; hence, by the triangle inequality for $||\cdot||_{\beta}$,
$$\left|\left| \frac{f(x' + tv) - f(x')}{t} - \frac{f(x + tv) - f(x)}{t} \right|\right|_{\beta} \le \epsilon/2 \qquad (29)$$
Combining Equations 27 and 29, we have that
$$||d_v f(x)||_{\beta} \le \left|\left| \frac{f(x + tv) - f(x)}{t} \right|\right|_{\beta} + \frac{\epsilon}{4} \le \left|\left| \frac{f(x' + tv) - f(x')}{t} \right|\right|_{\beta} + \frac{3\epsilon}{4} \qquad (30)$$
Taking limits over $\delta \to 0$, the final term in Equation 30 becomes $3\epsilon/4 + ||d_v f(x')||_{\beta}$, which is equivalent to $3\epsilon/4 + ||\nabla f(x')^T v||_{\beta}$ since x' is differentiable. Hence we have that
$$\sup_{x \in \mathcal{D}_v} ||d_v f(x)||_{\beta} \le \epsilon + \sup_{x \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(x)^T v||_{\beta} \qquad (31)$$
as desired, as our choices of v and $\epsilon$ were arbitrary. $\square$
Now we can restate and prove our main theorem.
Theorem 6. Let $||\cdot||_{\alpha}$, $||\cdot||_{\beta}$ be arbitrary norms over $\mathbb{R}^d$, $\mathbb{R}^m$, and let $f: \mathbb{R}^d \to \mathbb{R}^m$ be locally $(\alpha, \beta)$-Lipschitz continuous over an open set $\mathcal{X}$. The following equality holds:
$$L^{(\alpha,\beta)}(f, \mathcal{X}) = \sup_{G \in \delta_f(\mathcal{X})} ||G^T||_{\alpha,\beta} \qquad (32)$$
Remarks: Before we proceed with the proof, we make some remarks. First, note that if f is scalar-valued and continuously differentiable, then $\nabla f(x)^T$ is a row-vector, and $||\nabla f(x)^T||_{\alpha,\beta} = ||\nabla f(x)||_{\alpha^*}$, recovering the familiar known result. Second, to gain some intuition for this statement, consider the case where f(x) = Ax + b is an affine function. Then $\nabla f(x)^T = A$, and by applying the theorem and leveraging the definition of $L^{(\alpha,\beta)}(f,\mathcal{X})$, we have
$$L^{(\alpha,\beta)}(f, \mathcal{X}) = \sup_{x \neq y \in \mathcal{X}} \frac{||A(x - y)||_{\beta}}{||x - y||_{\alpha}} = \sup_{||v||_{\alpha} \le 1} ||Av||_{\beta} = ||A||_{\alpha,\beta} \qquad (33)$$
where the last equality holds because $\mathcal{X}$ is open.
Proof. It suffices to prove the following equality:
$$L^{(\alpha,\beta)}(f, \mathcal{X}) = \sup_{x \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(x)^T||_{\alpha,\beta} \qquad (34)$$
Proving this suffices: if $x \in \mathrm{Diff}(\mathcal{X})$, then $\delta_f(x) = \{\nabla f(x)\}$. On the other hand, if $x \notin \mathrm{Diff}(\mathcal{X})$, then every extreme point G of $\delta_f(x)$ arises as a limit of gradients at points $x' \in \mathrm{Diff}(\mathcal{X})$ (by definition). As we seek to optimize a norm, which is by definition convex, there exists an extreme point of $\delta_f(x)$ which attains the optimal value, so the supremum in Equation 32 equals the right-hand side of Equation 34. Hence, we proceed by showing that Equation 34 holds.
We first show that, for all $x, y \in \mathcal{X}$, the quotient $\frac{||f(x) - f(y)||_{\beta}}{||x - y||_{\alpha}}$ is bounded above by $\sup_{x \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(x)^T||_{\alpha,\beta}$. Then we will show the opposite inequality.
Fix any $x, y \in \mathcal{X}$, and note that since the dual of a dual norm is the original norm,
$$||f(x) - f(y)||_{\beta} = \sup_{||c||_{\beta^*} \le 1} c^T (f(x) - f(y)) \qquad (35)$$
Replacing c by $-c$ (which preserves the constraint $||c||_{\beta^*} \le 1$) and moving the sup to the outside, we have
$$||f(x) - f(y)||_{\beta} = \sup_{||c||_{\beta^*} \le 1} \left( h_c(1) - h_c(0) \right) \qquad (36)$$
for $h_c: \mathbb{R} \to \mathbb{R}$ defined as $h_c(t) := c^T f(x + t(y - x))$. Then certainly $h_c$ is Lipschitz continuous on the interval [0,1], and the limit $h'_c(t)$ exists almost everywhere, defined as
$$h'_c(t) := \lim_{s \to 0} \frac{h_c(t + s) - h_c(t)}{s} \qquad (37)$$
Further, there exists a Lebesgue integrable function q(t) that equals $h'_c(t)$ almost everywhere and
$$h_c(1) - h_c(0) = \int_0^1 q(t)\, dt \qquad (38)$$
We can assume without loss of generality that
$$q(t) \le \sup_{s} h'_c(s) \qquad (39)$$
where the supremum is taken over all points s where $h'_c(s)$ is defined. Then because q agrees almost everywhere with $h'_c$ and is bounded pointwise by its supremum, we have the following chain of inequalities:
$$h_c(1) - h_c(0) = \int_0^1 q(t)\, dt \le \sup_t h'_c(t) = \sup_t\, c^T d_{y-x} f(x + t(y - x))$$
$$\le \sup_t\, ||d_{y-x} f(x + t(y - x))||_{\beta} \le \sup_{z \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(z)^T (y - x)||_{\beta} \le \sup_{z \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(z)^T||_{\alpha,\beta} \cdot ||y - x||_{\alpha}$$
where the second inequality holds by Proposition 1 (using $||c||_{\beta^*} \le 1$), the third by Lemma 3, and the final inequality by Proposition 2. Dividing by $||x - y||_{\alpha}$ yields the desired result.
On the other hand, we wish to show, for every $\epsilon > 0$, the existence of $x, y \in \mathcal{X}$ such that
$$\frac{||f(x) - f(y)||_{\beta}}{||x - y||_{\alpha}} \ge \sup_{z \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(z)^T||_{\alpha,\beta} - \epsilon \qquad (48)$$
Fix $\epsilon > 0$ and consider any point $z \in \mathrm{Diff}(\mathcal{X})$ with $||\nabla f(z)^T||_{\alpha,\beta} \ge \sup_{x \in \mathrm{Diff}(\mathcal{X})} ||\nabla f(x)^T||_{\alpha,\beta} - \epsilon/2$.
Then $||\nabla f(z)^T||_{\alpha,\beta} = \sup_{||v||_{\alpha} \le 1} ||\nabla f(z)^T v||_{\beta} = \sup_{||v||_{\alpha} \le 1} ||d_v f(z)||_{\beta}$. By the definition of the directional derivative, there exists some $\delta > 0$ such that for all $|t| < \delta$,
$$\left|\left| \frac{f(z + tv) - f(z)}{t} \right|\right|_{\beta} \ge ||d_v f(z)||_{\beta} - \epsilon/2 \qquad (49)$$
Hence setting $x = z + tv$ and $y = z$, we recover Equation 48. $\square$
In this section, we provide the formal proofs of statements made in Section 3.
Polytopes: We use the term polytope to refer to subsets of $\mathbb{R}^d$ of the form $\{x \mid Ax \le b\}$. The affine hull of a polytope is the smallest affine subspace which contains it. The dimension of a polytope is the dimension of its affine hull. The relative interior of a polytope P is the interior of P within the affine hull of P (i.e., a lower-dimensional polytope has empty interior but, provided it is nonempty, a nonempty relative interior).
Hyperplanes: A hyperplane is an affine subspace of $\mathbb{R}^d$ of codimension 1. A hyperplane may equivalently be viewed as the zero locus of an affine function: $H := \{x \mid a^T x = b\}$ . A hyperplane partitions $\mathbb{R}^d$ into two closed halfspaces, $H^+ := \{x \mid a^T x \geq b\}$ and the analogously defined $H^-$ . When the inequalities are strict, we obtain the open halfspaces $H^+_o, H^-_o$ . We remark that if U is an affine subspace of $\mathbb{R}^d$ and H is a hyperplane that intersects U but does not contain it, then $H \cap U$ is an affine subspace of codimension 1 relative to U, i.e., of dimension $\dim(U) - 1$ . A hyperplane H is called a separating hyperplane of a convex set C if $H \cap C = \emptyset$ . H is called a supporting hyperplane of C if $H \cap C \neq \emptyset$ and C is contained in either $H^+$ or $H^-$ .
ReLU Kernels: For a ReLU network f, define the functions $g_i(x)$ as the input to the $i^{th}$ ReLU of f. We define the $i^{th}$ ReLU kernel as the set on which $g_i$ vanishes:
$$K_i := \{x \mid g_i(x) = 0\}.$$
The Chain Rule: The chain rule is a means to compute derivatives of compositions of smooth functions. Backpropagation is a dynamic-programming algorithm that performs the chain rule, gaining efficiency through memoization. It is most easily viewed as a backward pass over the computation graph, where each node has associated with it the partial derivative of its output with respect to its input. As mentioned in the main paper, the chain rule may produce incorrect results when elements of the composition are nonsmooth, such as the ReLU operator. Indeed, the ReLU $\sigma$ has a derivative that is well defined everywhere except at zero, where its generalized gradient is the interval [0,1].
Definition 9. Consider any implementation of the chain rule which may arbitrarily assign any element of the generalized gradient $\delta_{\sigma}(0) = [0,1]$ to each required partial derivative $\sigma'(0)$ . We define the set-valued function $\nabla^{\#} f(\cdot)$ as the collection of answers yielded by all such chain rules.
While our mixed-integer programming formulation treats $\nabla^{\#} f$ in this set-valued sense, most implementations of automatic differentiation fix a single value of $\sigma'(0)$ in $\{0,1\}$ so that $\nabla^{\#} f$ is not set-valued (e.g., in PyTorch and TensorFlow, $\sigma'(0)=0$ ). Our theory is stated for the set-valued formulation, but our results continue to hold for automatic differentiation packages as long as $\sigma'(0) \in [0,1]$ .
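The failure mode discussed above can be made concrete with a minimal example of our own: the ReLU identity $f(x) = \sigma(x) - \sigma(-x)$ equals $x$ everywhere, so its true derivative is 1, yet at $x = 0$ any chain-rule implementation that assigns $\sigma'(0) := a$ reports $f'(0) = a - (-a) = 2a$. Only $a = 1/2$ happens to be correct, and the $\sigma'(0) = 0$ convention of PyTorch/TensorFlow gives 0.

```python
def relu(t):
    return max(t, 0.0)

def f(x):
    # relu(x) - relu(-x) == x for every real x, so f is the identity.
    return relu(x) - relu(-x)

def chain_rule_derivative_at_zero(a):
    # Chain rule at x = 0 with the assignment relu'(0) := a:
    # d/dx relu(x)  -> a, and d/dx relu(-x) -> -a, so f'(0) is reported as 2a.
    return a - (-a)

assert all(f(x) == x for x in [-2.0, -0.5, 0.0, 0.5, 2.0])  # true slope is 1
assert chain_rule_derivative_at_zero(0.0) == 0.0  # autodiff convention: wrong
assert chain_rule_derivative_at_zero(1.0) == 2.0  # also wrong
assert chain_rule_derivative_at_zero(0.5) == 1.0  # only a = 1/2 is correct
```

Note that this network is not in general position (its two ReLU kernels coincide), consistent with general position being the hypothesis under which Theorem 2 rules such failures out.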
A Remark on Hyperplane Arrangements: As noted in the main paper, our definition of general position neural networks is spiritually similar to the notion of general position hyperplane arrangements. A hyperplane arrangement $\mathcal{A} := \{H_1, \dots, H_n\}$ is a collection of hyperplanes in $\mathbb{R}^d$ and is said to be in general position if the intersection of any $k \le d$ of the hyperplanes is a $(d-k)$-dimensional affine subspace (and the intersection of any $k > d$ of them is empty). Further, if a ReLU network only has one hidden layer, each ReLU kernel is a hyperplane. Thus, hyperplane arrangements are a subset of ReLU kernel arrangements.
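For $k \le d$, general position of hyperplanes amounts to a rank condition: the normals of any $k$ of the hyperplanes are linearly independent, so those $k$ hyperplanes meet in a $(d-k)$-flat. The sketch below (ours, using a naive Gaussian-elimination rank and random Gaussian normals, which are in general position almost surely) checks this; the $k > d$ case is not illustrated.

```python
import itertools
import random

def rank(rows, tol=1e-9):
    # Rank of a small matrix via Gaussian elimination with partial pivoting.
    A = [row[:] for row in rows]
    r = 0
    for col in range(len(A[0])):
        if r == len(A):
            break
        pivot = max(range(r, len(A)), key=lambda i: abs(A[i][col]))
        if abs(A[pivot][col]) < tol:
            continue
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r:
                m = A[i][col] / A[r][col]
                A[i] = [u - m * v for u, v in zip(A[i], A[r])]
        r += 1
    return r

random.seed(1)
d, n = 3, 5
normals = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n)]

# Every subset of k <= d normals has rank k, i.e. the corresponding
# k hyperplanes intersect in a (d-k)-dimensional flat.
for k in range(1, d + 1):
    for subset in itertools.combinations(normals, k):
        assert rank(list(subset)) == k
```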
Before restating Theorem 2 and the proof, we introduce the following lemmas:
Lemma 4. Let $\{K_i\}_{i=1}^m$ be the ReLU kernels of a general position neural net, f. Then for any x contained in exactly k of them, say WLOG $K_1, \ldots, K_k$ , x lies in the relative interior of one of the polyhedral components of $\bigcap_{i=1}^k K_i$ .
Proof. Since f is in general position, $\bigcap_{i=1}^k K_i$ is a union of (d-k)-dimensional polytopes. Let P be one of the polytopes in this union with $x \in P$ . Since P is a (d-k)-face in the polyhedral complex induced by $\{K_i\}_{i=1}^m$ , each point on the boundary of P is the intersection of at least k+1 ReLU kernels of f. Thus x cannot be contained in the boundary of P and must reside in the relative interior. $\square$
Figure 2: Examples of the three classes of hyperplanes with respect to a polytope P (pink). The blue hyperplane is a separating hyperplane of P, the green hyperplane is a supporting hyperplane of P, and the red hyperplane is a cutting hyperplane of P.
The remaining ingredients of the argument are geometric. We introduce the notion of a cutting hyperplane:
Definition 10. We say that a hyperplane H is a cutting hyperplane of a polytope P if it is neither a separating nor supporting hyperplane of P.
We now state and prove several properties of cutting hyperplanes:
Lemma 5. The following are equivalent:
- (a) H is a cutting hyperplane of P.
- (b) H contains a point in the relative interior of P, and $H \cap P \neq P$ .
- (c) H cuts P into two polytopes with the same dimension as P: $dim(P \cap H^+) = dim(P \cap H^-) = dim(P)$ and $H \cap P \neq P$ .
Proof. Throughout we will denote the affine hull of P as U.
- (a) $\Longrightarrow$ (b): By assumption, H is neither a supporting nor a separating hyperplane, so $H \cap P \neq \emptyset$ and neither $H^+ \cap P$ nor $H^- \cap P$ equals P; in particular $H \cap P \neq P$ . Thus H does not contain U, and $H \cap U$ is an affine subspace of codimension 1 relative to U. Since H meets P but is not a supporting hyperplane, H must meet the relative interior of P: otherwise the (convex) relative interior of P would lie strictly on one side of H, forcing P into one of the closed halfspaces. Thus H contains a point in the relative interior of P.
- (b) $\Longrightarrow$ (c): By assumption $H \cap P \neq P$ . Consider some point, x, inside H and the relative interior of P. By definition of the relative interior, there is some neighborhood $N_{\epsilon}(x)$ such that $N_{\epsilon}(x) \cap U \subset P$ . Thus there exist some $x' \in N_{\epsilon}(x)$ and $\epsilon' > 0$ such that $(N_{\epsilon'}(x') \cap U) \subset (H_o^+ \cap P)$ , and thus the affine hull of $H^+ \cap P$ must have the same dimension as U. The same argument applies to $H^- \cap P$ .
- (c) $\Longrightarrow$ (a): Since $P \cap H^+$ and $P \cap H^-$ are nonempty, $P \cap H$ is nonempty and thus H is not a separating hyperplane of P. Suppose for the sake of contradiction that $H^+ \cap P = P$ . Then $H_o^- \cap P = \emptyset$ , so $P \cap H^- = P \cap H$ ; by (c) this set has dimension dim(P), which occurs only if $P \subseteq H$ , contradicting $H \cap P \neq P$ . Repeating this argument for $H^- \cap P$ , we see that H is not a supporting hyperplane of P. $\square$
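For a bounded polytope given by its vertices, the trichotomy of Lemma 5 is easy to decide computationally, since a linear functional attains its extremes over the convex hull at the vertices. The following sketch (ours) classifies the hyperplane $\{x \mid a \cdot x = b\}$ against a vertex-represented polytope by the signs of $a \cdot v - b$ over the vertices.

```python
def classify(vertices, a, b, tol=1e-9):
    # Signs of a . v - b at the vertices determine the class for a bounded
    # polytope P = conv(vertices): all one strict sign -> separating; touching
    # zero from one side -> supporting; both strict signs -> cutting.
    vals = [sum(ai * xi for ai, xi in zip(a, v)) - b for v in vertices]
    lo, hi = min(vals), max(vals)
    if lo > tol or hi < -tol:
        return "separating"
    if lo >= -tol or hi <= tol:
        return "supporting"
    return "cutting"

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # unit square in R^2
assert classify(square, (1.0, 0.0), 2.0) == "separating"  # x = 2 misses P
assert classify(square, (1.0, 0.0), 1.0) == "supporting"  # x = 1 touches a face
assert classify(square, (1.0, -1.0), 0.0) == "cutting"    # x = y splits P
```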
Lemma 6. Let F be a k-dimensional face of a polytope P. If H is a cutting hyperplane of F, then H is a cutting hyperplane of P.
Proof. Since H is a cutting hyperplane of F, H meets F (and hence P), so H is not a separating hyperplane of P; moreover $F \not\subseteq H$ , so $P \not\subseteq H$ . Thus it suffices to show that H is not a supporting hyperplane of P. Since H cuts F, there exist points inside $F \cap H_o^+$ and $F \cap H_o^-$ , where $H_o^+$ , $H_o^-$ are the open halfspaces induced by H. Thus neither $P \cap H_o^+$ nor $P \cap H_o^-$ is empty, which implies that H is not a supporting hyperplane of P; hence H must also be a cutting hyperplane of P. $\square$
Now we can proceed with the proof of Theorem 2:
Theorem 7. Let f be a general position ReLU network. Then for every x in the domain of f, the set of elements returned by the generalized chain rule $\nabla^{\#} f(x)$ is exactly the generalized Jacobian:
$$\nabla^{\#} f(x) = \delta_f(x).$$
Proof. Part 1: The first part of this proof shows that if x is contained in exactly k ReLU kernels, then x is contained in $2^k$ full-dimensional linear regions of f. We prove this claim by induction on k. The case k=1 is trivial. Now assume that the claim holds up to k-1, and assume that x lies in the ReLU kernel of every neuron in a set $S \subseteq [m]$ with $|S| = k$ . Let $j \in S$ be a neuron whose depth, L, is at least as great as the depth of every other neuron in S. One can construct a subnetwork f' of f by considering only the first L layers of f and omitting neuron j. Then $K_i$ is a ReLU kernel of f' for every $i \in S \setminus \{j\}$ , say WLOG $S \setminus \{j\} = \{1, \dots, k-1\}$ , and f' is itself a general position ReLU net. From the inductive hypothesis, x is contained in exactly $2^{k-1}$ linear regions of f'. By Lemma 4, x resides in the relative interior of an $(n-k+1)$-dimensional polytope, P, contained in the union that defines $\bigcap_{i=1}^{k-1} K_i$ . Since j has maximal depth, $g_j(\cdot)$ is affine on P, and thus there exists some hyperplane H such that $P \cap K_j = P \cap H$ . Thus, by Lemma 5 (b), H is a cutting hyperplane of P.
Consider some linear region R of f' containing x. Then $g_j(\cdot)$ is affine inside each such R, and hence there exists some hyperplane $H_R$ such that $R \cap K_j = R \cap H_R$ , with the additional property that $H_R \cap P = H \cap P$ . By general position, $H \cap P \neq P$ , and thus $H_R$ is a cutting hyperplane of P by Lemma 5 (b). Since P is an $(n-k+1)$-dimensional face of R, we can apply Lemma 6 to see that $H_R$ is a cutting hyperplane of R. Hence $K_j$ cuts each of the $2^{k-1}$ linear regions of f' containing x into two full-dimensional regions, so x is contained in $2^k$ full-dimensional linear regions of f, as desired.
Part 2: We now show that the claim proved in Part 1 implies that $\nabla^{\#}f(x) = \delta_f(x)$ . This follows in two steps. The first step is to show that $\nabla^{\#} f(x)$ is a convex set for all x, and the second step is to show that
$$\mathcal{V}\big(\nabla^{\#} f(x)\big) = \mathcal{V}\big(\delta_f(x)\big), \tag{52}$$
where, for any convex set C, $\mathcal{V}(C)$ denotes the set of extreme points of C. The theorem then follows by taking convex hulls.
To show that $\nabla^{\#} f(x)$ is convex, we make the following observation: every element of $\nabla^{\#} f(x)$ must be attainable by some implementation of the chain rule which assigns a value to every $\sigma'(0)$ . If $\Lambda_0 \in \nabla^{\#} f(x)$ is attainable by setting exactly zero of the $\sigma'(0)$ 's to lie in the open interval (0,1), then $\Lambda_0$ is the Jacobian matrix corresponding to one of the full-dimensional linear regions that contain x. Now consider some $\Lambda_r \in \nabla^{\#} f(x)$ which is attainable by setting exactly r of the $\sigma'(0)$ 's to lie in the open interval (0,1). Then certainly $\Lambda_r$ may be written as a convex combination of two elements $\Lambda_{r-1}^{(1)}$ and $\Lambda_{r-1}^{(2)}$ of $\nabla^{\#} f(x)$ , each attainable by setting exactly r-1 ReLU partial derivatives to be nonintegral. This holds for all $r \in \{1, \dots, k\}$ , and thus $\nabla^{\#} f(x)$ is convex.
To show the equality in Equation 52, we first consider some element of $\mathcal{V}(\delta_f(x))$ . Certainly this must be the Jacobian of some full-dimensional linear region containing x, and hence there exists some assignment of ReLU partial derivatives such that the chain rule yields this Jacobian. On the other hand, we have shown above that every element of $\nabla^{\#} f(x)$ may be written as a convex combination of the Jacobians of the full-dimensional linear regions of f containing x. Hence each extreme point of $\nabla^{\#} f(x)$ is the Jacobian of one of the full-dimensional linear regions of f containing x. $\square$
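The structure of this argument can be illustrated on a toy case of our own: $f(x_1, x_2) = \sigma(x_1) + \sigma(x_2)$ at the origin, which lies on $k = 2$ ReLU kernels in general position. Each chain-rule output is the gradient $(a_1, a_2)$ for the assigned values $a_i = \sigma'(0)$, the $2^2$ integral assignments recover the gradients of the $2^2 = 4$ full-dimensional linear regions meeting at the origin, and any fractional assignment is a convex combination of them with product-form weights.

```python
import itertools

def chain_rule_gradient(a1, a2):
    # Gradient of relu(x1) + relu(x2) at the origin under the assignments
    # relu'(0) := a1 for the first neuron and a2 for the second.
    return (a1, a2)

# The 0/1 assignments are exactly the four region gradients (extreme points).
extreme = {chain_rule_gradient(a1, a2)
           for a1, a2 in itertools.product([0.0, 1.0], repeat=2)}
assert extreme == {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)}

# A fractional assignment is a convex combination of the four region
# gradients, mirroring the convexity argument above.
a1, a2 = 0.25, 0.75
weights = {(1.0, 1.0): a1 * a2, (1.0, 0.0): a1 * (1 - a2),
           (0.0, 1.0): (1 - a1) * a2, (0.0, 0.0): (1 - a1) * (1 - a2)}
assert abs(sum(weights.values()) - 1.0) < 1e-12
combo = tuple(sum(w * v[i] for v, w in weights.items()) for i in range(2))
assert all(abs(c - g) < 1e-12 for c, g in zip(combo, (a1, a2)))
```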
Before presenting the proof of Theorem 3, we will more explicitly define a Lebesgue measure over parameter space of a ReLU network. Indeed, consider every ReLU network with a fixed architecture
and hence a fixed number of parameters. Identifying each parameter with a coordinate of $\mathbb{R}$ , the parameter space of a ReLU network with k parameters is identified with $\mathbb{R}^k$ . We introduce the measure $\mu_f$ as the Lebesgue measure over neural networks with the same architecture as a given ReLU network f. Now we present our theorem:
Theorem 8. The set of ReLU networks not in general position has Lebesgue measure zero over the parameter space.
Proof. We prove the claim by induction over the number of neurons of a ReLU network. As every ReLU network with only one neuron is in general position, the base case holds trivially. Now suppose that the claim holds for families of ReLU networks with k-1 neurons. Then we can add a new neuron in one of two ways: either we add a new neuron to the final layer, or we add a new layer with only a single neuron. Every neural network may be constructed in this fashion, so the induction suffices to prove the claim. Both cases of the induction may be proved with the same argument:
Consider some ReLU network, f, with k-1 neurons, and consider adding a new neuron to f in either of the two ways described above, obtaining a network f'. Let $B_f$ denote the set of neural networks with the same architecture as f that are not in general position, and similarly for $B_{f'}$ . Let $C_{f'}$ denote the set of neural networks with the same architecture as f' that are not in general position, but are in general position when the $k^{th}$ neuron is removed. Certainly if f is not in general position, then f' is not in general position. Thus
$$\mu_{f'}(B_{f'}) \leq \mu_{f'}\big(B_f \times \mathbb{R}^{p}\big) + \mu_{f'}(C_{f'}) = \mu_{f'}(C_{f'}), \tag{53}$$
where p denotes the number of parameters introduced by the new neuron, and
where $\mu_f(B_f)=0$ by the induction hypothesis. It remains to show that $C_{f'}$ has measure zero. Letting $K_k$ denote the ReLU kernel of the neuron added to f to yield f', we note that f' fails to be in general position only if the affine hull of one of the polyhedral components of $K_k$ contains the affine hull of some polyhedral component of some intersection $\bigcap_{i \in S} K_i$ , where S is a nonempty subset of the k-1 neurons of f. We primarily control the bias parameter of the new neuron, which is universal over all linear regions, so the problem reduces to the following: what is the measure of the set of hyperplanes that contain any member of a finite collection of affine subspaces? By the countable subadditivity of the Lebesgue measure and the fact that the set of hyperplanes containing any single affine subspace has measure zero, $\mu_{f'}(C_{f'})=0$ . $\square$
