5 Metric spaces IB Analysis II

Now suppose that $f^{(m)}$ is a contraction for some $m$. Hence by the first part, there is a unique $x \in X$ such that $f^{(m)}(x) = x$. But then
$$f^{(m)}(f(x)) = f^{(m+1)}(x) = f(f^{(m)}(x)) = f(x).$$
So $f(x)$ is also a fixed point of $f^{(m)}$. By uniqueness of fixed points, we must have $f(x) = x$. Since any fixed point of $f$ is clearly a fixed point of $f^{(m)}$ as well, it follows that $x$ is the unique fixed point of $f$.

Based on the proof of the theorem, we have the following error estimate in the contraction mapping theorem: for $x_0 \in X$ and $x_n = f(x_{n-1})$, we showed that for $m > n$, we have
$$d(x_m, x_n) \le \frac{\lambda^n}{1 - \lambda} d(x_1, x_0).$$
If $x_n \to x$, taking the limit of the above bound as $m \to \infty$ gives
$$d(x, x_n) \le \frac{\lambda^n}{1 - \lambda} d(x_1, x_0).$$
This is valid for all $n$.

We are now going to use this to obtain the Picard-Lindelöf existence theorem for ordinary differential equations. The objective is as follows. Suppose we are given a function $F = (F_1, F_2, \ldots, F_n) : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$. We interpret the $\mathbb{R}$ as time and the $\mathbb{R}^n$ as space. Given $t_0 \in \mathbb{R}$ and $x_0 \in \mathbb{R}^n$, we want to know when we can find a solution to the ODE
$$\frac{df}{dt} = F(t, f(t))$$
subject to $f(t_0) = x_0$. We would like this solution to be valid (at least) for all $t$ in some interval $I$ containing $t_0$.

More explicitly, we want to understand when there will be some $\varepsilon > 0$ and a differentiable function $f = (f_1, \ldots, f_n) : (t_0 - \varepsilon, t_0 + \varepsilon) \to \mathbb{R}^n$ (i.e. $f_j : (t_0 - \varepsilon, t_0 + \varepsilon) \to \mathbb{R}$ is differentiable for all $j$) satisfying
$$\frac{df_j}{dt} = F_j(t, f_1(t), \ldots, f_n(t))$$
such that $f_j(t_0) = x_0^{(j)}$ for all $j = 1, \ldots, n$ and $t \in (t_0 - \varepsilon, t_0 + \varepsilon)$.

We can imagine this scenario as a particle moving in $\mathbb{R}^n$, passing through $x_0$ at time $t_0$. We then ask if there is a trajectory $f(t)$ such that the velocity of the particle at any time $t$ is given by $F(t, f(t))$.

This is a complicated system, since it is a coupled system of many variables. Explicit solutions are usually impossible, but in certain cases, we can prove the existence of a solution. Of course, solutions need not exist for arbitrary $F$. For example, there will be no solution if $F$ is everywhere discontinuous, since any derivative is continuous on a dense set of points.

The Picard-Lindelöf existence theorem gives us sufficient conditions for a unique solution to exist. We will need the following notation.

Notation. For $x_0 \in \mathbb{R}^n$, $R > 0$, we let
$$\bar{B}_R(x_0) = \{x \in \mathbb{R}^n : \|x - x_0\|_2 \le R\}.$$

Then the theorem says:

Theorem (Picard-Lindelöf existence theorem). Let $x_0 \in \mathbb{R}^n$, $R > 0$, $a < b$, $t_0 \in [a, b]$. Let $F : [a, b] \times \bar{B}_R(x_0) \to \mathbb{R}^n$ be a continuous function satisfying
$$\|F(t, x) - F(t, y)\|_2 \le \kappa \|x - y\|_2$$
for some fixed $\kappa > 0$ and all $t \in [a, b]$, $x, y \in \bar{B}_R(x_0)$. In other words, $F(t, \cdot)$ is Lipschitz on $\bar{B}_R(x_0)$ with the same Lipschitz constant for every $t$. Then

(i) There exists an $\varepsilon > 0$ and a unique differentiable function $f : [t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b] \to \mathbb{R}^n$ such that
$$\frac{df}{dt} = F(t, f(t)) \tag{$*$}$$
and $f(t_0) = x_0$.

(ii) If
$$\sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|_2 \le \frac{R}{b - a},$$
then there exists a unique differentiable function $f : [a, b] \to \mathbb{R}^n$ that satisfies the differential equation and boundary conditions above.

Even $n = 1$ is an important, special, non-trivial case. Even if we have only one dimension, explicit solutions may be very difficult to find, if not impossible. For example,
$$\frac{df}{dt} = f^2 + \sin f + e^f$$
would be almost impossible to solve. However, the theorem tells us there will be a solution, at least locally.

Note that any differentiable $f$ satisfying the differential equation is automatically continuously differentiable, since the derivative is $F(t, f(t))$, which is continuous.

Before we prove the theorem, we first show the requirements are indeed necessary. We first look at that $\varepsilon$ in (i). Without the additional requirement in (ii), there might not exist a solution globally on $[a, b]$. For example, we can consider the $n = 1$ case, where we want to solve
$$\frac{df}{dt} = f^2$$
with boundary condition $f(0) = 1$. Our $F(t, f) = f^2$ is a nice, uniformly Lipschitz function on any $[0, b] \times \bar{B}_R(1) = [0, b] \times [1 - R, 1 + R]$. However, we will shortly see that there is no global solution.

If we assume $f \ne 0$, then for all $t \in [0, b]$, the equation is equivalent to
$$\frac{d}{dt}(t + f^{-1}) = 0.$$
So we need $t + f^{-1}$ to be constant. The initial condition tells us this constant is $1$. So we have
$$f(t) = \frac{1}{1 - t}.$$
Hence the solution on $[0, 1)$ is $f(t) = \frac{1}{1 - t}$, and any solution on $[0, b]$ must agree with this on $[0, 1)$. So if $b \ge 1$, then there is no solution on $[0, b]$.

The Lipschitz condition is also necessary to guarantee uniqueness. Without this condition, existence of a solution is still guaranteed (but is another theorem, the Cauchy-Peano theorem), but we could have many different solutions. For example, we can consider the differential equation
$$\frac{df}{dt} = \sqrt{|f|}$$
with $f(0) = 0$. Here $F(t, x) = \sqrt{|x|}$ is not Lipschitz near $x = 0$. It is easy to see that $f = 0$ and $f(t) = \frac{1}{4}t^2$ are both
solutions. In fact, for any $\alpha \in [0, b]$, the function
$$f_\alpha(t) = \begin{cases} 0 & 0 \le t \le \alpha\\ \frac{1}{4}(t - \alpha)^2 & \alpha \le t \le b \end{cases}$$
is also a solution. So we have an infinite number of solutions.

We are now going to use the contraction mapping theorem to prove this. In general, this is a very useful idea. It is in fact possible to use other fixed point theorems to show the existence of solutions to partial differential equations. This is much more difficult, but has many far-reaching important applications to theoretical physics and geometry, say. For these, see Part III courses.

Proof. First, note that (ii) implies (i). We know that
$$\sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|$$
is bounded since $F$ is a continuous function on a compact domain. So we can pick an $\varepsilon$ such that
$$2\varepsilon \le \frac{R}{\sup_{[a,b] \times \bar{B}_R(x_0)} \|F\|}.$$
Then writing $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b] = [a_1, b_1]$, we have
$$\sup_{[a_1,b_1] \times \bar{B}_R(x_0)} \|F\| \le \sup_{[a,b] \times \bar{B}_R(x_0)} \|F\| \le \frac{R}{2\varepsilon} \le \frac{R}{b_1 - a_1}.$$
So (ii) implies there is a solution on $[t_0 - \varepsilon, t_0 + \varepsilon] \cap [a, b]$. Hence it suffices to prove (ii).

To apply the contraction mapping theorem, we need to convert this into a fixed point problem. The key is to reformulate the problem as an integral equation. We know that a differentiable $f : [a, b] \to \mathbb{R}^n$ satisfies the differential equation $(*)$ if and only if $f : [a, b] \to \bar{B}_R(x_0)$ is continuous and satisfies
$$f(t) = x_0 + \int_{t_0}^t F(s, f(s))\, ds$$
by the fundamental theorem of calculus. Note that we don't require that $f$ be differentiable, since if a continuous $f$ satisfies this equation, it is automatically differentiable by the fundamental theorem of calculus. This is very helpful, since we can work over the much larger vector space of continuous functions, and it would be easier to find a solution.

We let $X = C([a, b], \bar{B}_R(x_0))$. We equip $X$ with the supremum metric
$$\|g - h\|_\infty = \sup_{t \in [a,b]} \|g(t) - h(t)\|_2.$$
We see that $X$ is a closed subset of the complete metric space $C([a, b], \mathbb{R}^n)$ (again taken with the supremum metric). So $X$ is complete. For every $g \in X$, we define a function $Tg : [a, b] \to \mathbb{R}^n$ by
$$(Tg)(t) = x_0 + \int_{t_0}^t F(s, g(s))\, ds.$$
Our differential equation is thus $f = Tf$. So we first want to show that $T$ is actually mapping $X \to X$, i.e. $Tg \in X$ whenever $g \in X$, and then prove it is a contraction map.

We have
$$\|Tg(t) - x_0\|_2 = \left\| \int_{t_0}^t F(s, g(s))\, ds \right\|_2 \le \left| \int_{t_0}^t \|F(s, g(s))\|_2\, ds \right| \le \sup_{[a,b] \times \bar{B}_R(x_0)} \|F\| \cdot (b - a) \le R.$$
Hence we know that $Tg(t) \in \bar{B}_R(x_0)$. So $Tg \in X$.

Next, we need to show this is a contraction. However, it turns out $T$ need not be a contraction. Instead, what we have is that for $g_1, g_2 \in X$, we have
$$\|Tg_1(t) - Tg_2(t)\|_2 = \left\| \int_{t_0}^t \big(F(s, g_1(s)) - F(s, g_2(s))\big)\, ds \right\|_2 \le \left| \int_{t_0}^t \|F(s, g_1(s)) - F(s, g_2(s))\|_2\, ds \right| \le \kappa (b - a) \|g_1 - g_2\|_\infty$$
by the Lipschitz condition on $F$. If we indeed have
$$\kappa(b - a) < 1, \tag{$\dagger$}$$
then the contraction mapping theorem gives an $f \in X$ such that $Tf = f$, i.e.
$$f = x_0 + \int_{t_0}^t F(s, f(s))\, ds.$$
However, we do not necessarily have $(\dagger)$. There are many ways we can solve this problem. Here, we can solve it by finding an $m$ such that $T^{(m)}$ is a
contraction map. We will in fact show that this map satisfies the bound
$$\sup_{t \in [a,b]} \|T^{(m)}g_1(t) - T^{(m)}g_2(t)\| \le \frac{(b - a)^m \kappa^m}{m!} \sup_{t \in [a,b]} \|g_1(t) - g_2(t)\|. \tag{$\ddagger$}$$
The key is the $m!$, since this grows much faster than any exponential. Given this bound, we know that for sufficiently large $m$, we have
$$\frac{(b - a)^m \kappa^m}{m!} < 1,$$
i.e. $T^{(m)}$ is a contraction. So by the contraction mapping theorem, the result holds. So it only remains to prove the bound.

To prove this, we prove instead the pointwise bound: for any $t \in [a, b]$, we have
$$\|T^{(m)}g_1(t) - T^{(m)}g_2(t)\|_2 \le \frac{|t - t_0|^m \kappa^m}{m!} \sup_{s \in [t_0, t]} \|g_1(s) - g_2(s)\|.$$
From this, taking the supremum on the left, we obtain the bound $(\ddagger)$.

To prove this pointwise bound, we induct on $m$. We wlog assume $t > t_0$. We know that for every $m$, the difference is given by
$$\|T^{(m)}g_1(t) - T^{(m)}g_2(t)\|_2 = \left\| \int_{t_0}^t \big(F(s, T^{(m-1)}g_1(s)) - F(s, T^{(m-1)}g_2(s))\big)\, ds \right\|_2 \le \kappa \int_{t_0}^t \|T^{(m-1)}g_1(s) - T^{(m-1)}g_2(s)\|_2\, ds.$$
This is true for all $m$. If $m = 1$, then this gives
$$\|Tg_1(t) - Tg_2(t)\| \le \kappa (t - t_0) \sup_{[t_0, t]} \|g_1 - g_2\|_2.$$
So the base case is done.

For $m \ge 2$, assume by induction the bound holds with $m - 1$ in place of $m$. Then the bounds give
$$\|T^{(m)}g_1(t) - T^{(m)}g_2(t)\| \le \kappa \int_{t_0}^t \frac{\kappa^{m-1}(s - t_0)^{m-1}}{(m-1)!} \sup_{[t_0, s]} \|g_1 - g_2\|_2\, ds \le \frac{\kappa^m}{(m-1)!} \sup_{[t_0, t]} \|g_1 - g_2\|_2 \int_{t_0}^t (s - t_0)^{m-1}\, ds = \frac{\kappa^m (t - t_0)^m}{m!} \sup_{[t_0, t]} \|g_1 - g_2\|_2.$$
So done.

Note that to get the factor of $m!$, we had to actually perform the integral, instead of just bounding $(s - t_0)^{m-1}$ by $(t - t_0)^{m-1}$. In general, this is a good strategy if we want tight bounds. Instead of bounding
$$\left| \int_a^b f(x)\, dx \right| \le (b - a) \sup |f(x)|,$$
we write $f(x) = g(x)h(x)$, where $h(x)$ is something easily integrable. Then we can have a bound
$$\left| \int_a^b f(x)\, dx \right| \le \sup |g(x)| \int_a^b |h(x)|\, dx.$$

6 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$

6.1 Differentiation from $\mathbb{R}^m$ to $\mathbb{R}^n$

We are now going to investigate differentiation of functions $f : \mathbb{R}^n \to \mathbb{R}^m$. The hard part is to first come up with a sensible definition of what this means. There is no obvious way to generalize what we had for real functions. After defining it, we will need to do some hard work to come up with easy ways to check if functions are differentiable. Then we can use it to prove some useful results like the mean value inequality. We will always use the usual Euclidean norm.

To define differentiation in $\mathbb{R}^n$, we first need a definition of the limit.

Definition (Limit of function). Let $E \subseteq \mathbb{R}^n$ and $f : E \to \mathbb{R}^m$. Let $a \in \mathbb{R}^n$ be a limit point of $E$, and let $b \in \mathbb{R}^m$. We say
$$\lim_{x \to a} f(x) = b$$
if for every $\varepsilon > 0$, there is some $\delta > 0$ such that
$$(\forall x \in E)\quad 0 < \|x - a\| < \delta \Rightarrow \|f(x) - b\| < \varepsilon.$$

As in the case of $\mathbb{R}$ in IA Analysis I, we do not impose any requirements on $f$ when $x = a$. In particular, we don't assume that $a$ is in the domain $E$.

We would like a definition of differentiation for functions $f : \mathbb{R}^n \to \mathbb{R}$ (or more generally $f : \mathbb{R}^n \to \mathbb{R}^m$) that directly extends the familiar definition on the real line. Recall that if $f : (b, c) \to \mathbb{R}$ and $a \in (b, c)$, we say $f$ is differentiable if the limit
$$Df(a) = f'(a) = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h} \tag{$*$}$$
exists (as a real number). This cannot be extended to higher dimensions directly, since $h$ would become a vector in $\mathbb{R}^n$, and it is not clear what we mean by dividing by a vector. We might try dividing by $\|h\|$ instead, i.e. require that
$$\lim_{h \to 0} \frac{f(a + h) - f(a)}{\|h\|}$$
exists. However, this is clearly wrong, since in the case of $n = 1$, this reduces to the existence of the limit of
$$\frac{f(a + h) - f(a)}{|h|},$$
which almost never exists, e.g. when $f(x) = x$. It is also possible that this exists while the genuine derivative does not, e.g. when $f(x) = |x|$, at $x = 0$. So this is clearly wrong.

Now we are a bit stuck. We need to divide by something, and that thing better be a scalar. $\|h\|$ is not exactly what we want. What should we do? The idea is to move $f'(a)$ to the other side of the equation, and $(*)$ becomes
$$\lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a)h}{h} = 0.$$
Now if we replace $h$ by $|h|$, nothing changes. So this is equivalent to
$$\lim_{h \to 0} \frac{f(a + h) - f(a) - f'(a)h}{|h|} = 0.$$
In other words, the function $f$ is differentiable if there is some $A$ such that
$$\lim_{h \to 0} \frac{f(a + h) - f(a) - Ah}{|h|} = 0,$$
and we call $A$ the derivative.

We are now in a good shape to generalize. Note that if $f : \mathbb{R}^n \to \mathbb{R}$ is a real-valued function, then $f(a + h) - f(a)$ is a scalar, but $h$ is a vector. So $A$ is not just a number, but a (row) vector. In general, if our function $f : \mathbb{R}^n \to \mathbb{R}^m$ is vector-valued, then our $A$ should be an $m \times n$ matrix. Alternatively, $A$ is a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$.

Definition (Differentiation in $\mathbb{R}^n$). Let $U \subseteq \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$. We say $f$ is differentiable at a point $a \in U$ if there exists a linear map $A : \mathbb{R}^n \to \mathbb{R}^m$ such that
$$\lim_{h \to 0} \frac{f(a + h) - f(a) - Ah}{\|h\|} = 0.$$
We call $A$ the derivative of $f$ at $a$. We write the derivative as $Df(a)$.

This is equivalent to saying
$$\lim_{x \to a} \frac{f(x) - f(a) - A(x - a)}{\|x - a\|} = 0.$$
Note that this is completely consistent with our usual definition in the case where $n = m = 1$, as we have discussed above, since a linear transformation $\alpha : \mathbb{R} \to \mathbb{R}$ is just given by $\alpha(h) = Ah$ for some real $A \in \mathbb{R}$.

One might instead attempt to define differentiability as follows: for any $f : \mathbb{R}^n \to \mathbb{R}$, we say $f$ is differentiable at $x$ if $f$ is differentiable when restricted to any line passing through $x$. However, this is a weaker notion, and we will later see that if we define differentiability this way, then differentiability will no longer imply continuity, which is bad.

Having defined differentiation, we want to show that the derivative is unique.

Proposition (Uniqueness of derivative). Derivatives are unique.

Proof. Suppose $A, B : \mathbb{R}^n \to \mathbb{R}^m$ both satisfy the condition
$$\lim_{h \to 0} \frac{f(a + h) - f(a) - Ah}{\|h\|} = 0, \qquad \lim_{h \to 0} \frac{f(a + h) - f(a) - Bh}{\|h\|} = 0.$$
By the triangle inequality, we get
$$\|(B - A)h\| \le \|f(a + h) - f(a) - Ah\| + \|f(a + h) - f(a) - Bh\|.$$
So
$$\frac{\|(B - A)h\|}{\|h\|} \to 0$$
as $h \to 0$. Fix $u \ne 0$ and set $h = tu$ to get
$$\frac{\|(B - A)tu\|}{\|tu\|} \to 0$$
as $t \to 0$. Since $B - A$ is linear, we know
$$\frac{\|(B - A)tu\|}{\|tu\|} = \frac{\|(B - A)u\|}{\|u\|}.$$
So $(B - A)u = 0$ for all $u \in \mathbb{R}^n$. So $B = A$.

Notation. We write $L(\mathbb{R}^n; \mathbb{R}^m)$ for the space of linear maps $A : \mathbb{R}^n \to \mathbb{R}^m$. So $Df(a) \in L(\mathbb{R}^n; \mathbb{R}^m)$.

To avoid having to write limits and divisions all over the place, we have the following convenient notation:

Notation (Little o notation). For any function $\alpha : B_r(0) \subseteq \mathbb{R}^n \to \mathbb{R}^m$, write
$$\alpha(h) = o(h)$$
if
$$\frac{\alpha(h)}{\|h\|} \to 0$$
as $h \to 0$. In other words, $\alpha \to 0$ faster than $\|h\|$ as $h \to 0$. Note that officially, $\alpha(h) = o(h)$ as a whole is a piece of notation, and does not represent equality.

Then the condition for differentiability can be written as: $f : U \to \mathbb{R}^m$ is differentiable at $a \in U$ if there is some $A$ with
$$f(a + h) - f(a) - Ah = o(h).$$
Alternatively,
$$f(a + h) = f(a) + Ah + o(h).$$
Note that we require the domain $U$ of $f$ to be open, so that for each $a \in U$, there is a small ball around $a$ on which $f$ is defined, so $f(a + h)$ is defined for sufficiently small $h$. We could relax this condition and consider "one-sided" derivatives instead, but we will not look into these in this course.

We can interpret the definition of differentiability as saying we can find a "good" linear approximation (technically, it is affine, not linear) to the function $f$ near $a$.

While the definition of the derivative is good, it is purely
existential. This is unlike the definition of differentiability of real functions, where we are asked to compute an explicit limit: if the limit exists, that's the derivative; if not, it is not differentiable. In the higher-dimensional world, this is not the case. We have completely no idea where to find the derivative, even if we know it exists. So we would like an explicit formula for it.

The idea is to look at specific "directions" instead of finding the general derivative. As always, let $f : U \to \mathbb{R}^m$ be differentiable at $a \in U$. Fix some $u \in \mathbb{R}^n$ and take $h = tu$ (with $t \in \mathbb{R}$). Assuming $u \ne 0$, differentiability tells us
$$\lim_{t \to 0} \frac{\|f(a + tu) - f(a) - Df(a)(tu)\|}{\|tu\|} = 0.$$
This is equivalent to saying
$$\lim_{t \to 0} \frac{\|f(a + tu) - f(a) - t\, Df(a)u\|}{|t| \|u\|} = 0.$$
Since $\|u\|$ is fixed, this in turn is equivalent to
$$\lim_{t \to 0} \frac{f(a + tu) - f(a) - t\, Df(a)u}{t} = 0.$$
This, finally, is equal to saying
$$Df(a)u = \lim_{t \to 0} \frac{f(a + tu) - f(a)}{t}.$$
We derived this assuming $u \ne 0$, but it is trivially true for $u = 0$. So it is valid for all $u$.

This is of the same form as the usual derivative, and it is usually not too difficult to compute this limit. Note, however, that this says if the derivative exists, then the limit above is related to the derivative as above. Even if the limit exists for all $u$, we still cannot conclude that the derivative exists. Regardless, even if the derivative does not exist, this limit is still often a useful notion.

Definition (Directional derivative). We write
$$D_u f(a) = \lim_{t \to 0} \frac{f(a + tu) - f(a)}{t}$$
whenever this limit exists. We call $D_u f(a)$ the directional derivative of $f$ at $a \in U$ in the direction of $u \in \mathbb{R}^n$.

By definition, we have
$$D_u f(a) = \left.\frac{d}{dt}\right|_{t = 0} f(a + tu).$$

Often, it is convenient to focus on the special case where $u = e_j$, a member of the standard basis for $\mathbb{R}^n$. This is known as the partial derivative. By convention, this is defined for real-valued functions only, but the same definition works for any $\mathbb{R}^m$-valued function.

Definition (Partial derivative). The $j$th partial derivative of $f : U \to \mathbb{R}$ at $a \in U$ is
$$D_{e_j} f(a) = \lim_{t \to 0} \frac{f(a + te_j) - f(a)}{t},$$
when the limit exists. We often write this as
$$D_{e_j} f(a) = D_j f(a) = \frac{\partial f}{\partial x_j}.$$

Note that these definitions do not require differentiability of $f$ at $a$. We will see some examples shortly. Before that, we first establish some elementary properties of differentiable functions.

Proposition. Let $U \subseteq \mathbb{R}^n$ be open, $a \in U$.

(i) If $f : U \to \mathbb{R}^m$ is differentiable at $a$, then $f$ is continuous at $a$.

(ii) If we write $f = (f_1, f_2, \ldots, f_m) : U \to \mathbb{R}^m$, where each $f_j : U \to \mathbb{R}$, then $f$ is differentiable at $a$ if and only if each $f_j$ is differentiable at $a$.

(iii) If $f, g : U \to \mathbb{R}^m$ are both differentiable at $a$, then $\lambda f + \mu g$ is differentiable at $a$ with
$$D(\lambda f + \mu g)(a) = \lambda\, Df(a) + \mu\, Dg(a).$$

(iv) If $A : \mathbb{R}^n \to \mathbb{R}^m$ is a linear map, then $A$ is differentiable for any $a \in \mathbb{R}^n$ with
$$DA(a) = A.$$

(v) If $f$ is differentiable at $a$, then the directional derivative $D_u f(a)$ exists for all $u \in \mathbb{R}^n$, and in fact
$$D_u f(a) = Df(a)u.$$

(vi) If $f$ is differentiable at $a$, then all partial derivatives $D_j f_i(a)$ exist for $j = 1, \ldots, n$ and $i = 1, \ldots, m$, and are given by
$$D_j f_i(a) = Df_i(a) e_j.$$

(vii) Let $A = (A_{ij})$ be the matrix representing $Df(a)$ with respect to the
standard bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, i.e. for any $h \in \mathbb{R}^n$,
$$Df(a)h = Ah.$$
Then $A$ is given by
$$A_{ij} = \langle Df(a)e_j, b_i \rangle = D_j f_i(a),$$
where $\{e_1, \ldots, e_n\}$ is the standard basis for $\mathbb{R}^n$, and $\{b_1, \ldots, b_m\}$ is the standard basis for $\mathbb{R}^m$.

The second property is useful, since instead of considering arbitrary $\mathbb{R}^m$-valued functions, we can just look at real-valued functions.

Proof.

(i) By definition, if $f$ is differentiable, then as $h \to 0$, we know
$$f(a + h) - f(a) - Df(a)h \to 0.$$
Since $Df(a)h \to 0$ as well, we must have $f(a + h) \to f(a)$.

(ii) Exercise on example sheet 4.

(iii) We just have to check this directly. We have
$$\frac{(\lambda f + \mu g)(a + h) - (\lambda f + \mu g)(a) - (\lambda\, Df(a) + \mu\, Dg(a))h}{\|h\|} = \lambda\, \frac{f(a + h) - f(a) - Df(a)h}{\|h\|} + \mu\, \frac{g(a + h) - g(a) - Dg(a)h}{\|h\|},$$
which tends to $0$ as $h \to 0$. So done.

(iv) Since $A$ is linear, we always have $A(a + h) - A(a) - Ah = 0$ for all $h$.

(v) We've proved this in the previous discussion.

(vi) We've proved this in the previous discussion.

(vii) This follows from the general result for linear maps: for any linear map represented by $(A_{ij})_{m \times n}$, we have $A_{ij} = \langle Ae_j, b_i \rangle$. Apply this with $A = Df(a)$, and note that for any $h \in \mathbb{R}^n$,
$$Df(a)h = (Df_1(a)h, \ldots, Df_m(a)h).$$
So done.

The above says differentiability at a point implies the existence of all directional derivatives, which in turn implies the existence
of all partial derivatives. The converse implication does not hold in either case.

Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
$$f(x, y) = \begin{cases} 0 & xy = 0\\ 1 & xy \ne 0 \end{cases}$$
Then the partial derivatives at the origin are
$$\frac{\partial f}{\partial x}(0, 0) = \frac{\partial f}{\partial y}(0, 0) = 0.$$
In other directions, say $u = (1, 1)$, we have
$$\frac{f(0 + tu) - f(0)}{t} = \frac{1}{t},$$
which diverges as $t \to 0$. So the directional derivative does not exist.

Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
$$f(x, y) = \begin{cases} \frac{x^3}{y} & y \ne 0\\ 0 & y = 0 \end{cases}$$
Then for $u = (u_1, u_2) \ne 0$ and $t \ne 0$, we can compute
$$\frac{f(0 + tu) - f(0)}{t} = \begin{cases} \frac{t u_1^3}{u_2} & u_2 \ne 0\\ 0 & u_2 = 0 \end{cases}$$
So
$$D_u f(0) = \lim_{t \to 0} \frac{f(0 + tu) - f(0)}{t} = 0,$$
and the directional derivative exists. However, the function is not differentiable at $0$, since it is not even continuous at $0$, as
$$f(\delta, \delta^4) = \frac{1}{\delta}$$
diverges as $\delta \to 0$.

Example. Let $f : \mathbb{R}^2 \to \mathbb{R}$ be defined by
$$f(x, y) = \begin{cases} \frac{x^3}{x^2 + y^2} & (x, y) \ne (0, 0)\\ 0 & (x, y) = (0, 0) \end{cases}$$
It is clear that $f$ is continuous at points other than $0$, and $f$ is also continuous at $0$ since $|f(x, y)| \le |x|$. We can compute the partial derivatives as
$$\frac{\partial f}{\partial x}(0, 0) = 1, \qquad \frac{\partial f}{\partial y}(0, 0) = 0.$$
In fact, we can compute the difference quotient in the direction $u = (u_1, u_2) \ne 0$ to be
$$\frac{f(0 + tu) - f(0)}{t} = \frac{u_1^3}{u_1^2 + u_2^2}.$$
So we have
$$D_u f(0) = \frac{u_1^3}{u_1^2 + u_2^2}.$$
We can now immediately conclude that $f$ is not differentiable at $0$, since if it were, then we would have $D_u f(0) = Df(0)u$, which should be a linear expression in $u$, but this is not.

Alternatively, if $f$ were differentiable, then we would have
$$Df(0)h = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} h_1\\ h_2 \end{pmatrix} = h_1.$$
However, we have
$$\frac{f(0 + h) - f(0) - Df(0)h}{\|h\|} = \frac{\frac{h_1^3}{h_1^2 + h_2^2} - h_1}{(h_1^2 + h_2^2)^{1/2}} = -\frac{h_1 h_2^2}{(h_1^2 + h_2^2)^{3/2}},$$
which does not tend to $0$ as $h \to 0$. For example, if $h = (t, t)$ with $t > 0$, this quotient is $-\frac{1}{2^{3/2}}$.

To decide if a function is differentiable, the first step would be to compute the partial derivatives. If they don't exist, then we immediately know the function is not differentiable. However, if they do exist, then we have a candidate for what the derivative is, and we plug it into the definition to check if it actually is the derivative. This is a cumbersome thing to do. It turns out that while existence of partial derivatives does not imply differentiability in general, we can get differentiability if we add some slight extra conditions.

Theorem. Let $U \subseteq \mathbb{R}^n$ be open, $f : U \to \mathbb{R}^m$. Let $a \in U$. Suppose there exists some open ball $B_r(a) \subseteq U$ such that

(i) $D_j f_i(x)$ exists for every $x \in B_r(a)$ and $1 \le i \le m$, $1 \le j \le n$,

(ii) $D_j f_i$ are continuous at $a$ for all $1 \le i \le m$, $1 \le j \le n$.

Then $f$ is differentiable at $a$.

Proof. It suffices to prove this for $m = 1$, by the long proposition. For each $h = (h_1, \ldots, h_n) \in \mathbb{R}^n$, we have
$$f(a + h) - f(a) = \sum_{j=1}^n \big( f(a + h_1 e_1 + \cdots + h_j e_j) - f(a + h_1 e_1 + \cdots + h_{j-1} e_{j-1}) \big).$$
Now for convenience, we can write
$$h^{(j)} = h_1 e_1 + \cdots + h_j e_j = (h_1, \ldots, h_j, 0, \ldots, 0),$$
with $h^{(0)} = 0$. Then we have
$$f(a + h) - f(a) = \sum_{j=1}^n \big( f(a + h^{(j)}) - f(a + h^{(j-1)}) \big) = \sum_{j=1}^n \big( f(a + h^{(j-1)} + h_j e_j) - f(a + h^{(j-1)}) \big).$$
Note that in each term, we are just moving along the coordinate axes. Since the partial derivatives exist, the mean value theorem of single-variable calculus, applied to $g(t) = f(a + h^{(j-1)} + t e_j)$ on the interval between $0$ and $h_j$, allows us to write this as
$$f(a + h) - f(a) = \sum_{j=1}^n h_j D_j f(a + h^{(j-1)} + \theta_j h_j e_j) = \sum_{j=1}^n h_j D_j f(a) + \sum_{j=1}^n h_j \big( D_j f(a + h^{(j-1)} + \theta_j h_j e_j) - D_j f(a) \big)$$
for some $\theta_j \in (0, 1)$. Note that
$$D_j f(a + h^{(j-1)} + \theta_j h_j e_j) - D_j f(a) \to 0$$
as $h \to 0$, since the partial derivatives are continuous at $a$. So the second term is $o(h)$. So $f$ is differentiable at $a$ with
$$Df(a)h = \sum_{j=1}^n D_j f(a) h_j.$$

This is a very useful result. For example, we can now immediately conclude that the function
$$\begin{pmatrix} x\\ y\\ z \end{pmatrix} \mapsto \begin{pmatrix} 3x^2 + 4\sin y + e^{6z}\\ xyz\, e^{14x} \end{pmatrix}$$
is differentiable everywhere, since it has continuous partial derivatives. This is much better than messing with the definition itself.

6.2 The operator norm

So far, we have only looked at derivatives at a single point. We haven't discussed much about the derivative on, say, a neighbourhood or the whole space. We might want to ask if the derivative is continuous or bounded. However, this is not straightforward, since the derivative is a linear map, and we need to define these notions for functions whose values are linear maps. In particular, we want to understand the map $Df : B_r(a) \to L(\mathbb{R}^n; \mathbb{R}^m)$ given by $x \mapsto Df(x)$. To do so, we need a metric on the space $L(\mathbb{R}^n; \mathbb{R}^m)$. In fact, we will use a norm.

Let $L = L(\mathbb{R}^n; \mathbb{R}^m)$. This is a vector space over $\mathbb{R}$, with addition and scalar multiplication defined pointwise. In fact, $L$ is a subspace of $C(\mathbb{R}^n, \mathbb{R}^m)$. To prove this, we have to prove that all linear maps are continuous. Let $\{e_1, \ldots, e_n\}$ be the standard basis for $\mathbb{R}^n$. Then for $x = \sum_{j=1}^n x_j e_j$ and $A \in L$, we have
$$A(x) = \sum_{j=1}^n x_j A e_j.$$
By Cauchy-Schwarz, we know
$$\|A(x)\| \le \sum_{j=1}^n |x_j| \|A(e_j)\| \le \|x\| \left( \sum_{j=1}^n \|A(e_j)\|^2 \right)^{1/2}.$$
So we see $A$ is Lipschitz, and is hence continuous. Alternatively, this follows from the fact that linear maps are differentiable and hence continuous.

We can use this fact to define the norm of linear maps. Since $L$ is finite-dimensional (it is isomorphic to the space of real $m \times n$ matrices, as vector spaces, and hence has dimension $mn$), it really doesn't matter which norm we pick, as they are all Lipschitz equivalent, but a convenient choice is the sup norm, or the operator norm.

Definition (Operator norm). The operator norm on $L = L(\mathbb{R}^n; \mathbb{R}^m)$ is defined by
$$\|A\| = \sup_{x \in \mathbb{R}^n : \|x\| = 1} \|Ax\|.$$

Proposition.

(i) $\|A\| < \infty$ for all $A \in L$.

(ii) $\|\cdot\|$ is indeed a norm on $L$.

(iii) $\|A\| = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|Ax\|}{\|x\|}$.

(iv) $\|Ax\| \le \|A\| \|x\|$ for all $x \in \mathbb{R}^n$.

(v) Let $A \in L(\mathbb{R}^n; \mathbb{R}^m)$ and $B \in L(\mathbb{R}^m; \mathbb{R}^p)$.
Then $BA = B \circ A \in L(\mathbb{R}^n; \mathbb{R}^p)$ and
$$\|BA\| \le \|B\| \|A\|.$$

Proof.

(i) This is since $A$ is continuous and $\{x \in \mathbb{R}^n : \|x\| = 1\}$ is compact.

(ii) The only non-trivial part is the triangle inequality. We have
$$\|A + B\| = \sup_{\|x\| = 1} \|Ax + Bx\| \le \sup_{\|x\| = 1} (\|Ax\| + \|Bx\|) \le \sup_{\|x\| = 1} \|Ax\| + \sup_{\|x\| = 1} \|Bx\| = \|A\| + \|B\|.$$

(iii) This follows from linearity of $A$, and the fact that for any $x \ne 0$, we have $\left\| \frac{x}{\|x\|} \right\| = 1$.

(iv) Immediate from above.

(v) We have
$$\|BA\| = \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|BAx\|}{\|x\|} \le \sup_{x \in \mathbb{R}^n \setminus \{0\}} \frac{\|B\| \|Ax\|}{\|x\|} = \|B\| \|A\|.$$

For certain easy cases, we have a straightforward expression for the operator norm.

Proposition.

(i) If $A \in L(\mathbb{R}, \mathbb{R}^m)$, then $A$ can be written as $Ax = xa$ for some $a \in \mathbb{R}^m$. Moreover, $\|A\| = \|a\|$, where the second norm is the Euclidean norm in $\mathbb{R}^m$.

(ii) If $A \in L(\mathbb{R}^n, \mathbb{R})$, then $Ax = x \cdot a$ for some fixed $a \in \mathbb{R}^n$. Again, $\|A\| = \|a\|$.

Proof.

(i) Set $A(1) = a$. Then by linearity, we get $Ax = xA(1) = xa$. Then we have
$$\|Ax\| = |x| \|a\|,$$
so we have
$$\frac{\|Ax\|}{|x|} = \|a\|.$$

(ii) Exercise on example sheet 4.

Theorem (Chain rule). Let $U \subseteq \mathbb{R}^n$ be open, $a \in U$, and $f : U \to \mathbb{R}^m$ differentiable at $a$. Moreover, let $V \subseteq \mathbb{R}^m$ be open with $f(U) \subseteq V$, and $g : V \to \mathbb{R}^p$ differentiable at $f(a)$. Then $g \circ f : U \to \mathbb{R}^p$ is differentiable at $a$, with derivative
$$D(g \circ f)(a) = Dg(f(a))\, Df(a).$$

Proof. The proof is very easy if we use the little o notation. Let $A = Df(a)$ and $B = Dg(f(a))$. By differentiability of $f$ and $g$, we know
$$f(a + h) = f(a) + Ah + o(h),$$
$$g(f(a) + k) = g(f(a)) + Bk + o(k).$$
Now we have
$$g \circ f(a + h) = g\big(f(a) + \underbrace{Ah + o(h)}_{k}\big) = g(f(a)) + B(Ah + o(h)) + o(Ah + o(h)) = g \circ f(a) + BAh + B(o(h)) + o(Ah + o(h)).$$
We just have to show the last two terms are $o(h)$, but this is true since $B$ and $A$ are bounded. By boundedness,
$$\|B(o(h))\| \le \|B\| \|o(h)\|.$$
So $B(o(h)) = o(h)$. Similarly,
$$\|Ah + o(h)\| \le \|A\| \|h\| + \|o(h)\| \le (\|A\| + 1)\|h\|$$
for sufficiently small $\|h\|$. So $o(Ah + o(h))$ is in fact $o(h)$ as well. Hence
$$g \circ f(a + h) = g \circ f(a) + BAh + o(h).$$

6.3 Mean value inequalities

So far, we have just looked at cases where we assume the function is differentiable at a point. We are now going to assume the function is differentiable in a region, and see what happens to the derivative. Recall the mean value theorem from single-variable calculus: if $f : [a, b] \to \mathbb{R}$ is continuous on $[a, b]$ and differentiable on $(a, b)$, then
$$f(b) - f(a) = f'(c)(b - a)$$
for some $c \in (a, b)$. This is our favorite theorem, and we have used it many times in IA Analysis. Here we have an exact equality. However, in general, for vector-valued functions, i.e. if we are mapping to $\mathbb{R}^m$, this is no longer true. Instead, we only have an inequality.

We first prove it for the case when the domain is a subset of $\mathbb{R}$, and then reduce the general case to this special case.

Theorem. Let $f : [a, b] \to \mathbb{R}^m$ be continuous on $[a, b]$ and differentiable on $(a, b)$. Suppose we can find some $M$ such that for all $t \in (a, b)$, we have $\|Df(t)\| \le M$. Then
$$\|f(b) - f(a)\| \le M(b - a).$$

Proof. Let $v = f(b) - f(a)$. We define
$$g(t) = v \cdot f(t) = \sum_{i=1}^m v_i f_i(t).$$
Since each $f_i$ is differentiable, $g$ is continuous on $[a, b]$ and differentiable on $(a, b)$ with
$$g'(t) = \sum_{i=1}^m v_i f_i'(t).$$
Hence, we know
$$|g'(t)| = \left| \sum_{i=1}^m v_i f_i'(t) \right| \le \|v\| \left( \sum_{i=1}^m f_i'(t)^2 \right)^{1/2} = \|v\| \|Df(t)\| \le M \|v\|.$$
We now apply the mean value theorem to $g$ to get
$$g(b) - g(a) = g'(t)(b - a)$$
for some $t \in (a, b)$. By definition of $g$, we get
$$v \cdot (f(b) - f(a)) = g'(t)(b - a).$$
By definition of $v$, we have
$$\|f(b) - f(a)\|^2 = |g'(t)(b - a)| \le (b - a) M \|f(b) - f(a)\|.$$
If $f(b) = f(a)$, then there is nothing to prove. Otherwise, divide by $\|f(b) - f(a)\|$ and we are done.

We now apply this to prove the general version.
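Before stating the general version, here is a quick numerical sanity check of the bound just proved, $\|f(b) - f(a)\| \le M(b - a)$. The particular curve $f(t) = (\cos t, \sin t)$, the interval $[0, 1]$, and the sampling used to estimate $M$ are illustrative choices of our own, not taken from the notes; only the Python standard library is assumed.

```python
import math

def f(t):
    # An illustrative curve f : [0, 1] -> R^2 (our own choice): f(t) = (cos t, sin t)
    return (math.cos(t), math.sin(t))

def df(t):
    # Its derivative f'(t) = (-sin t, cos t), which has Euclidean norm 1 for every t
    return (-math.sin(t), math.cos(t))

a, b = 0.0, 1.0

# Estimate M = sup ||Df(t)|| by sampling; here ||f'(t)|| is identically 1
M = max(math.hypot(*df(a + k * (b - a) / 1000)) for k in range(1001))

# Left-hand side ||f(b) - f(a)||: the chord length 2 sin(1/2) ~ 0.959
lhs = math.hypot(f(b)[0] - f(a)[0], f(b)[1] - f(a)[1])

assert lhs <= M * (b - a)
print(f"||f(b)-f(a)|| = {lhs:.6f} <= M(b-a) = {M * (b - a):.6f}")
```

The chord length $2\sin\frac{1}{2}$ is strictly below the arc length $1 = M(b - a)$, consistent with the inequality (and with the fact that the bound is attained only when $f$ traces a straight line at full speed).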
Theorem (Mean value inequality). Let $a \in \mathbb{R}^n$ and $f : B_r(a) \to \mathbb{R}^m$ be differentiable on $B_r(a)$ with $\|Df(x)\| \le M$ for all $x \in B_r(a)$. Then
$$\|f(b_1) - f(b_2)\| \le M \|b_1 - b_2\|$$
for any $b_1, b_2 \in B_r(a)$.

Proof. We will reduce this to the previous theorem. Fix $b_1, b_2 \in B_r(a)$. Note that $tb_1 + (1 - t)b_2 \in B_r(a)$ for all $t \in [0, 1]$, since balls are convex. Now consider $g : [0, 1] \to \mathbb{R}^m$ given by
$$g(t) = f(tb_1 + (1 - t)b_2).$$
By the chain rule, $g$ is differentiable and
$$g'(t) = Dg(t) = \big( Df(tb_1 + (1 - t)b_2) \big)(b_1 - b_2).$$
Therefore
$$\|Dg(t)\| \le \|Df(tb_1 + (1 - t)b_2)\| \|b_1 - b_2\| \le M \|b_1 - b_2\|.$$
Now we can apply the previous theorem, and get
$$\|f(b_1) - f(b_2)\| = \|g(1) - g(0)\| \le M \|b_1 - b_2\|.$$

Note that here we worked in a ball. In general, we could have worked in any convex set, since all we need is for $tb_1 + (1 - t)b_2$ to be inside the domain. But with this, we have the following easy corollary.

Corollary. Let $f : B_r(a) \subseteq \mathbb{R}^n \to \mathbb{R}^m$ have $Df(x) = 0$ for all $x \in B_r(a)$. Then $f$ is constant.

Proof. Apply the mean value inequality with $M = 0$.

We would like to extend this corollary. Does this corollary extend to differentiable maps $f$ with $Df = 0$ defined on an arbitrary open set $U \subseteq \mathbb{R}^n$? The answer is clearly no. Even for functions $f : \mathbb{R} \to \mathbb{R}$, this is not true, since we
can have two disjoint intervals $[1, 2] \cup [3, 4]$ as the domain, and define $f(t)$ to be $1$ on $[1, 2]$ and $2$ on $[3, 4]$. Then $Df = 0$, but $f$ is not constant; $f$ is just locally constant on each interval.

The problem with this is that the set is disconnected. We cannot connect points in $[1, 2]$ and points in $[3, 4]$ with a continuous path. If we can do so, then we would be able to show that $f$ is constant.

Definition (Path-connected subset). A subset $E \subseteq \mathbb{R}^n$ is path-connected if for any $a, b \in E$, there is a continuous map $\gamma : [0, 1] \to E$ such that $\gamma(0) = a$, $\gamma(1) = b$.

Theorem. Let $U \subseteq \mathbb{R}^n$ be open and path-connected. Then for any differentiable $f : U \to \mathbb{R}^m$, if $Df(x) = 0$ for all $x \in U$, then $f$ is constant on $U$.

A naive attempt would be to replace $tb_1 + (1 - t)b_2$ in the proof of the mean value inequality with a path $\gamma(t)$. However, this is not a correct proof, since it has to assume $\gamma$ is differentiable. So this doesn't work. We have to think some more.

Proof. We are going to use the fact that $f$ is locally constant. Wlog, assume $m = 1$. Given any $a, b \in U$, we show that $f(a) = f(b)$. Let $\gamma : [0, 1] \to U$ be a (continuous) path from $a$ to $b$. For any $s \in (0, 1)$, there exists some $\varepsilon > 0$ such that $B_\varepsilon(\gamma(s)) \subseteq U$, since $U$ is open. By continuity of $\gamma$, there is a $\delta$ such that $(s - \delta, s + \delta) \subseteq [0, 1]$ and $\gamma((s - \delta, s + \delta)) \subseteq B_\varepsilon(\gamma(s)) \subseteq U$.

Since $f$ is constant on $B_\varepsilon(\gamma(s))$ by the previous corollary, we know that $g(t) = f \circ \gamma(t)$ is constant on $(s - \delta, s + \delta)$. In particular, $g$ is differentiable at $s$ with derivative $0$. This is true for all $s \in (0, 1)$. So the map $g : [0, 1] \to \mathbb{R}$ has zero derivative on $(0, 1)$ and, being a composition of continuous maps, is continuous on $[0, 1]$. So $g$ is constant. So $g(0) = g(1)$, i.e. $f(a) = f(b)$.

If $\gamma$ were differentiable, then this would be much easier, since we could show $g' = 0$ directly by the chain rule:
$$g'(t) = Df(\gamma(t)) \gamma'(t) = 0.$$

6.4 Inverse function theorem

Now we get to the inverse function theorem. This is one of the most important theorems of the course. It has many interesting and important consequences, but we will not have time to get to these. Before we can state the inverse function theorem, we need a definition.

Definition ($C^1$ function). Let $U \subseteq \mathbb{R}^n$ be open. We say $f : U \to \mathbb{R}^m$ is $C^1$ on $U$ if $f$ is differentiable at each $x \in U$ and
$$Df : U \to L(\mathbb{R}^n, \mathbb{R}^m)$$
is continuous. We write $C^1(U)$ or $C^1(U; \mathbb{R}^m)$ for the set of all $C^1$ maps from $U$ to $\mathbb{R}^m$.

First we get a convenient alternative characterization of $C^1$.

Proposition. Let $U \subseteq \mathbb{R}^n$ be open. Then $f = (f_1, \ldots, f_n) : U \to \mathbb{R}^n$ is $C^1$ on $U$ if and only if the partial derivatives $D_j f_i(x)$ exist for all $x \in U$, $1 \le i \le n$, $1 \le j \le n$, and $D_j f_i : U \to \mathbb{R}$ are continuous.

Proof. ($\Rightarrow$) Differentiability of $f$ at $x$ implies $D_j f_i(x)$ exists and is given by
$$D_j f_i(x) = \langle Df(x) e_j, b_i \rangle,$$
where $\{e_1, \ldots, e_n\}$ and $\{b_1, \ldots, b_n\}$ are the standard bases for the domain and codomain. So we know
$$|D_j f_i(x) - D_j f_i(y)| = |\langle (Df(x) - Df(y)) e_j, b_i \rangle| \le \|Df(x) - Df(y)\|,$$
since $e_j$ and $b_i$ are
unit vectors. Hence if Df is continuous, so is Djfi.

(⇐) Since the partials exist and are continuous, by our previous theorem, we know that the derivative Df exists. To show Df : U → L(Rn; Rm) is continuous, note the following general fact: for any linear map A ∈ L(Rn; Rm) represented by (aij), so that (Ah)i = Σ_j aij hj, and any x = (x1, · · ·, xn), we have

∥Ax∥₂² = Σ_{i=1}^m (Σ_{j=1}^n aij xj)².

By Cauchy–Schwarz, this is

≤ Σ_{i=1}^m (Σ_{j=1}^n aij²)(Σ_{j=1}^n xj²) = ∥x∥₂² Σ_{i,j} aij².

Dividing by ∥x∥₂², we know

∥A∥ ≤ √(Σ_{i,j} aij²).

Applying this to A = Df (x) − Df (y), we get

∥Df (x) − Df (y)∥ ≤ √(Σ_{i,j} (Djfi(x) − Djfi(y))²).

So if all Djfi are continuous, then so is Df.

If we do not wish to go through all that algebra to show the inequality ∥A∥ ≤ √(Σ_{i,j} aij²), we can instead note that √(Σ_{i,j} aij²) is a norm on L(Rn; Rm), since it is just the Euclidean norm if we treat the matrix as a vector written in a funny way. So by the equivalence of norms on finite-dimensional vector spaces, there is some C such that ∥A∥ ≤ C√(Σ_{i,j} aij²), and then the result follows.

Finally, we can get to the inverse function theorem.

Theorem (Inverse function theorem). Let U ⊆ Rn be open, and f : U → Rn be a C1 map. Let a ∈ U, and suppose that Df (a) is invertible as a linear map Rn → Rn. Then there exist open sets V, W ⊆
Rn with a ∈ V, f (a) ∈ W, V ⊆ U such that f |V : V → W is a bijection. Moreover, the inverse map (f |V)−1 : W → V is also C1.

We have a fancy name for these functions.

Definition (Diffeomorphism). Let U, U′ ⊆ Rn be open. A map g : U → U′ is a diffeomorphism if it is C1 with a C1 inverse.

Note that different people have different definitions of the word "diffeomorphism". Some require it to be merely differentiable, while others require it to be infinitely differentiable. We will stick with this definition. The inverse function theorem then says: if f is C1 and Df (a) is invertible, then f is a local diffeomorphism at a.

Before we prove this, we look at the simple case where n = 1. Suppose f ′(a) ̸= 0. Then, since f is C1, there exists a δ such that f ′(t) > 0 for all t ∈ (a − δ, a + δ), or f ′(t) < 0 for all such t. So f |(a−δ,a+δ) is strictly monotone and hence invertible. This is a triviality. However, it is not a triviality even for n = 2.

Proof. By replacing f with (Df (a))−1 ◦ f (or by rotating our heads and stretching it a bit), we can assume Df (a) = I, the identity map. By continuity of Df, there exists some r > 0 such that

∥Df (x) − I∥ < 1/2

for all x ∈ B̄r(a). By shrinking r sufficiently, we can assume B̄r(a) ⊆ U. Let W = Br/2(f (a)), and let V = f −1(W ) ∩ Br(a).

That was just our setup. There are three steps to actually proving the theorem.

Claim. V is open, and f |V : V → W is a bijection.

Since f is continuous, f −1(W ) is open. So V is open. To show f |V : V → W is a bijection, we have to show that for each y ∈ W,
there is a unique x ∈ V such that f (x) = y. We are going to use the contraction mapping theorem to prove this. The statement is equivalent to proving that for each y ∈ W, the map T (x) = x − f (x) + y has a unique fixed point x ∈ V.

Let h(x) = x − f (x). Then

Dh(x) = I − Df (x).

So by our choice of r, for every x ∈ B̄r(a), we must have

∥Dh(x)∥ ≤ 1/2.

Then for any x1, x2 ∈ B̄r(a), we can use the mean value inequality to estimate

∥h(x1) − h(x2)∥ ≤ (1/2)∥x1 − x2∥.

Hence we know

∥T (x1) − T (x2)∥ = ∥h(x1) − h(x2)∥ ≤ (1/2)∥x1 − x2∥.

Finally, to apply the contraction mapping theorem, we need to pick the right domain for T, namely the closed ball B̄r(a). For any x ∈ B̄r(a), we have

∥T (x) − a∥ = ∥x − f (x) + y − a∥ = ∥h(x) − h(a) + y − f (a)∥ ≤ ∥h(x) − h(a)∥ + ∥y − f (a)∥ < (1/2)∥x − a∥ + r/2 ≤ r/2 + r/2 = r.

So T : B̄r(a) → Br(a) ⊆ B̄r(a). Since B̄r(a) is complete, T has a unique fixed point x ∈ B̄r(a), i.e. T (x) = x. Finally, we need to show x ∈ Br(a), since this is where we want to find our fixed point. But this is true, since T (x) ∈ Br(a) by the above, and T (x) = x. Also, since f (x) = y, we know x ∈ f −1(W ). So x ∈
V. So we have shown that for each y ∈ W, there is a unique x ∈ V such that f (x) = y. So f |V : V → W is a bijection.

We have done the hard work now. It remains to show that f |V is invertible with C1 inverse.

Claim. The inverse map g = (f |V)−1 : W → V is Lipschitz (and hence continuous). In fact, we have

∥g(y1) − g(y2)∥ ≤ 2∥y1 − y2∥.

For any x1, x2 ∈ V, by the triangle inequality, we know

∥x1 − x2∥ − ∥f (x1) − f (x2)∥ ≤ ∥(x1 − f (x1)) − (x2 − f (x2))∥ = ∥h(x1) − h(x2)∥ ≤ (1/2)∥x1 − x2∥.

Hence, we get

∥x1 − x2∥ ≤ 2∥f (x1) − f (x2)∥.

Apply this to x1 = g(y1) and x2 = g(y2), and note that f (g(yj)) = yj, to get the desired result.

Claim. g is in fact C1, and moreover, for all y ∈ W,

Dg(y) = Df (g(y))−1. (∗)

Note that if g were differentiable, then its derivative must be given by (∗), since by definition we know

f (g(y)) = y,

and hence the chain rule gives

Df (g(y)) · Dg(y) = I.

Also, once we know g is differentiable with derivative (∗), continuity of Dg is automatic, since it is then a composition of continuous functions (the entries of the inverse of a matrix are rational expressions in the entries of the matrix). So we only need to check that Df (g(y))−1 satisfies the definition of the derivative.

First we check that Df (x) is indeed invertible for every x ∈ B̄r(a). Suppose Df (x)v = 0. Then ∥v∥ = ∥v − Df (x)v∥ ≤ ∥Df (x) − I∥∥v∥. We use the fact that ∥Df (x) − I∥ ≤
1/2 to get ∥v∥ ≤ (1/2)∥v∥. So we must have ∥v∥ = 0, i.e. v = 0. So ker Df (x) = {0}, and Df (g(y))−1 exists.

Let x ∈ V be fixed, and y = f (x). Let k be small and set

h = g(y + k) − g(y).

In other words, f (x + h) − f (x) = k. Since g is injective, whenever k ̸= 0 we have h ̸= 0, and since g is continuous, h → 0 as k → 0. We compute

(g(y + k) − g(y) − Df (g(y))−1k)/∥k∥
= (h − Df (x)−1k)/∥k∥
= Df (x)−1(Df (x)h − k)/∥k∥
= −Df (x)−1(f (x + h) − f (x) − Df (x)h)/∥k∥
= −Df (x)−1 · [(f (x + h) − f (x) − Df (x)h)/∥h∥] · (∥h∥/∥k∥),

where

∥h∥/∥k∥ = ∥g(y + k) − g(y)∥/∥(y + k) − y∥ ≤ 2

by the Lipschitz bound on g. As k → 0, h → 0. The first factor −Df (x)−1 is fixed; the middle factor tends to 0 as h → 0 by differentiability of f at x; the last factor is bounded by 2. So the whole thing tends to 0, and g is differentiable at y with Dg(y) = Df (g(y))−1. So done.

Note that in the case where n = 1, if f : (a, b) → R is C1 with f ′(x) ̸= 0 for every x, then f is monotone on the whole domain (a, b), and hence f : (
a, b) → f ((a, b)) is a bijection. In higher dimensions, this is not true. Even if we know that Df (x) is invertible for all x ∈ U, we cannot say f |U is a bijection. We still only know there is a local inverse.

Example. Let U = R2, and f : R2 → R2 be given by

f (x, y) = (ex cos y, ex sin y).

Then we can directly compute

Df (x, y) = ( ex cos y  −ex sin y
             ex sin y   ex cos y ).

Then we have det(Df (x, y)) = e2x ̸= 0 for all (x, y) ∈ R2. However, by periodicity, we have f (x, y + 2nπ) = f (x, y) for all n. So f is not injective on R2.

One major application of the inverse function theorem is to prove the implicit function theorem. We will not go into details here, but an example of the theorem can be found on example sheet 4.

6.5 2nd order derivatives

We've done so much work to understand first derivatives. For real functions, we immediately know a lot about higher derivatives, since the derivative is just a normal real function again. Here, it is slightly more complicated, since the derivative is a linear operator. However, this is not really a problem, since the space of linear operators is just yet another vector space, so we can essentially use the same definition.

Definition (2nd derivative). Let U ⊆ Rn be open, f : U → Rm be differentiable. Then Df : U → L(Rn; Rm). We say Df is differentiable at a ∈ U if there exists A ∈ L(Rn; L(Rn; Rm)) such that

lim_{h→0} (1/∥h∥)(Df (a + h) − Df (a) − Ah) = 0.

For this to make sense, we need to put a norm on L(Rn; Rm) (e.g. the operator norm), but A, if it exists, is independent of the choice of norm, since all norms on a finite-dimensional space are equivalent. This is, in fact, the same definition as our usual differentiability, since L(Rn; Rm
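To see concretely how the proof produces the local inverse, one can run the contraction from the proof numerically on this example. The sketch below is illustrative only and not part of the notes: the base point a and the target y are arbitrary choices, and T(x) = x − Df(a)⁻¹(f(x) − y) is the proof's map after the normalising substitution f ↦ Df(a)⁻¹f.

```python
import numpy as np

def f(p):
    x, y = p
    return np.array([np.exp(x) * np.cos(y), np.exp(x) * np.sin(y)])

def Df(p):
    x, y = p
    return np.array([[np.exp(x) * np.cos(y), -np.exp(x) * np.sin(y)],
                     [np.exp(x) * np.sin(y),  np.exp(x) * np.cos(y)]])

a = np.array([0.0, 0.0])           # base point; f(a) = (1, 0)
y_target = np.array([1.05, 0.1])   # a point of W, close to f(a)
A_inv = np.linalg.inv(Df(a))

# T(x) = x - Df(a)^{-1}(f(x) - y) is a contraction on a small ball
# around a, so the iterates converge to the unique preimage in V.
x = a.copy()
for _ in range(50):
    x = x - A_inv @ (f(x) - y_target)

assert np.allclose(f(x), y_target)       # local inverse found near a
# But there is no global inverse: shifting the second coordinate by
# 2*pi gives another preimage of the same point.
assert np.allclose(f(x + np.array([0.0, 2 * np.pi])), y_target)
```

The iteration converges precisely because ∥I − Df(x)∥ < 1/2 on a small ball around a, which is the condition engineered at the start of the proof.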
) is just a finite-dimensional space, and is isomorphic to Rnm. So Df is differentiable if and only if Df : U → Rnm is differentiable, with A ∈ L(Rn; Rnm). This allows us to recycle our previous theorems about differentiability. In particular, we know differentiability of Df is implied by the existence of the partial derivatives Di(Djfk) in a neighbourhood of a, and their continuity at a, for all k = 1, · · ·, m and i, j = 1, · · ·, n.

Notation. Write

Dijf (a) = Di(Djf )(a) = ∂²f/∂xi∂xj (a).

Let's now go back to the initial definition, and try to interpret it. By linear algebra, a linear map ϕ : Rℓ → L(Rn; Rm) induces a bilinear map Φ : Rℓ × Rn → Rm by Φ(u, v) = ϕ(u)(v) ∈ Rm. In particular, we have

Φ(au + bv, w) = aΦ(u, w) + bΦ(v, w),
Φ(u, av + bw) = aΦ(u, v) + bΦ(u, w).

Conversely, if Φ : Rℓ × Rn → Rm is bilinear, then ϕ : Rℓ → L(Rn; Rm) defined by ϕ(u) = (v ↦ Φ(u, v)) is linear. These are clearly inverse operations to each other. So there is a one-to-one correspondence between bilinear maps Φ : Rℓ × Rn → Rm and linear maps ϕ : Rℓ → L(Rn; Rm). In other words, instead of treating our second derivative as a weird element of L(Rn; L(Rn; Rm)), we can view it as a bilinear map Rn × Rn → Rm.

Notation. We define D2f (a) : Rn × Rn → Rm by D2f (a)(u, v)
= D(Df )(a)(u)(v).

We know D2f (a) is a bilinear map. In coordinates, if u = Σ_{j=1}^n uj ej and v = Σ_{j=1}^n vj ej, where {e1, · · ·, en} is the standard basis for Rn, then by bilinearity we have

D2f (a)(u, v) = Σ_{i=1}^n Σ_{j=1}^n D2f (a)(ei, ej) ui vj.

This is very similar to the case of first derivatives, where the derivative is completely specified by the values it takes on the basis vectors.

In the definition of the second derivative, we can again take h = tei. Then we have

lim_{t→0} (Df (a + tei) − Df (a) − tD(Df )(a)(ei))/t = 0.

Note that the numerator is a linear map in L(Rn; Rm). We can let the whole thing act on ej, and obtain

lim_{t→0} (Df (a + tei)(ej) − Df (a)(ej) − tD(Df )(a)(ei)(ej))/t = 0

for all i, j = 1, · · ·, n. Taking the D2f (a)(ei, ej) term to the other side, we know

D2f (a)(ei, ej) = lim_{t→0} (Df (a + tei)(ej) − Df (a)(ej))/t = lim_{t→0} (Dej f (a + tei) − Dej f (a))/t = Dei Dej f (a).

In other words, we have

D2f (a)(ei, ej) = Σ_{k=1}^m Dijfk(a) bk,

where {b1, · · ·, bm} is the standard basis for Rm. So we have

D2f (a)(u, v) = Σ_{i,j=1}^n Σ_{k=1}^m Dijfk(a) ui vj bk.

We have been very careful to keep the right order of the partial derivatives. However, in most cases we care about, it doesn't matter.

Theorem (Symmetry of mixed
partials). Let U ⊆ Rn be open, f : U → Rm, a ∈ U, and ρ > 0 such that Bρ(a) ⊆ U. Let i, j ∈ {1, · · ·, n} be fixed and suppose that DiDjf (x) and DjDif (x) exist for all x ∈ Bρ(a) and are continuous at a. Then in fact

DiDjf (a) = DjDif (a).

The proof is quite short, when we know what to do.

Proof. wlog, assume m = 1. If i = j, then there is nothing to prove. So assume i ̸= j. Let

gij(t) = f (a + tei + tej) − f (a + tei) − f (a + tej) + f (a).

Then for each fixed t, define ϕ : [0, 1] → R by

ϕ(s) = f (a + stei + tej) − f (a + stei).

Then we get gij(t) = ϕ(1) − ϕ(0). By the mean value theorem and the chain rule, there is some θ ∈ (0, 1) such that

gij(t) = ϕ′(θ) = t(Dif (a + θtei + tej) − Dif (a + θtei)).

Now apply the mean value theorem to the function s ↦ Dif (a + θtei + stej): there is some η ∈ (0, 1) such that

gij(t) = t²DjDif (a + θtei + ηtej).

We can do the same for gji, and find some θ̃, η̃ such that

gji(t) = t²DiDjf (a + θ̃tei + η̃tej).

Since gij = gji, we get

t²DjDif (a + θtei + ηtej) = t²DiDjf (a + θ̃tei + η̃tej).

Divide by t², and take the limit as t → 0
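The quantity gij(t)/t² in the proof is a concrete double difference quotient, so the symmetry can be checked numerically. A small sketch (the test function and base point are arbitrary choices, not from the notes):

```python
import math

def f(x, y):
    return x**3 * y**2 + math.sin(x * y)

x0, y0 = 0.7, -0.4   # base point a
t = 1e-4

# The double difference quotient g_ij(t)/t^2 from the proof:
g = f(x0 + t, y0 + t) - f(x0 + t, y0) - f(x0, y0 + t) + f(x0, y0)
approx = g / t**2

# Exact mixed partial D1 D2 f = 6 x^2 y + cos(xy) - xy sin(xy),
# which by the theorem equals D2 D1 f.
exact = 6 * x0**2 * y0 + math.cos(x0 * y0) - x0 * y0 * math.sin(x0 * y0)

assert abs(approx - exact) < 1e-3
```

Note that the expression for g is manifestly symmetric in the roles of the two coordinates, which is exactly why gij = gji in the proof.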
. By continuity of the partial derivatives, we get DjDif (a) = DiDjf (a).

This is nice. Whenever the second derivatives are continuous, the order does not matter. We can alternatively state this result as follows:

Proposition. If f : U → Rm is differentiable in U such that the DiDjf (x) exist in a neighbourhood of a ∈ U and are continuous at a, then Df is differentiable at a and

D2f (a)(u, v) = Σ_{i,j} DiDjf (a) ui vj

is a symmetric bilinear form.

Proof. This follows from the fact that continuity of second partials implies differentiability, and the symmetry of mixed partials.

Finally, we conclude with a version of Taylor's theorem for multivariable functions.

Theorem (Second-order Taylor's theorem). Let f : U → R be C2, i.e. DiDjf (x) are continuous for all x ∈ U. Let a ∈ U and Br(a) ⊆ U. Then

f (a + h) = f (a) + Df (a)h + (1/2)D2f (a)(h, h) + E(h),

where E(h) = o(∥h∥²).

Proof. Consider the function g(t) = f (a + th). Then the assumptions tell us g is twice differentiable. By the 1D Taylor's theorem, we know

g(1) = g(0) + g′(0) + (1/2)g′′(s)

for some s ∈ [0, 1]. In other words,

f (a + h) = f (a) + Df (a)h + (1/2)D2f (a + sh)(h, h) = f (a) + Df (a)h + (1/2)D2f (a)(h, h) + E(h),

where

E(h) = (1/2)(D2f (a + sh)(h, h) − D2f (a)(h, h)).

By definition of the operator norm, we get

|E(h)| ≤ (1/2)∥D2f (a + sh) − D2f (a)∥∥h∥².

By continuity
of the second derivative, as h → 0, we get ∥D2f (a + sh) − D2f (a)∥ → 0. So E(h) = o(∥h∥²). So done.

definition of a weak derivative,

Dαuε(x) = ∫_U ηε(x − y)Dαu(y) dy = (ηε ∗ Dαu)(x).

It is an exercise to verify that we can indeed move the derivative past the integral. Thus, if we fix V ⋐ U, then by the previous parts, we see that Dαuε → Dαu in Lp(V ) as ε → 0 for |α| ≤ k. So

∥uε − u∥_{W k,p(V )}^p = Σ_{|α|≤k} ∥Dαuε − Dαu∥_{Lp(V )}^p → 0

as ε → 0.

Theorem (Global approximation). Let 1 ≤ p < ∞, and U ⊆ Rn be open and bounded. Then C∞(U ) ∩ W k,p(U ) is dense in W k,p(U ).

Our main obstacle to overcome is the fact that the mollifications are only defined on Uε, and not on all of U.

Proof. For i ≥ 1, define

Ui = {x ∈ U | dist(x, ∂U ) > 1/i},
Vi = Ui+3 − Ūi+1,
Wi = Ui+4 − Ūi.

We clearly have U = ∪_{i=1}^∞ Ui, and we can choose V0 ⋐ U such that U = ∪_{i=0}^∞ Vi. Let {ζi}_{i=0}^∞ be a partition of unity subordinate to {Vi}; thus 0 ≤ ζi ≤ 1, ζi ∈ C∞_c(Vi) and Σ_{i=0}^∞ ζi = 1 on U.

Fix δ > 0. Then for each i, we can choose εi sufficiently small such that ui = ηεi ∗ (ζiu) satisfies supp ui ⊆ Wi and

∥ui − ζiu∥_{W k,p(U )} = ∥ui − ζiu∥_{W k,p(Wi)} ≤ δ/2^{i+1}.

Now set v
= Σ_{i=0}^∞ ui ∈ C∞(U ).

Note that we do not know (yet) that v ∈ W k,p(U ). But it certainly is when we restrict to some V ⋐ U. In any such subset, the sum is finite, and since u = Σ_{i=0}^∞ ζiu, we have

∥v − u∥_{W k,p(V )} ≤ Σ_{i=0}^∞ ∥ui − ζiu∥_{W k,p(V )} ≤ δ Σ_{i=0}^∞ 2^{−(i+1)} = δ.

Since the bound δ does not depend on V, by taking the supremum over all V, we have ∥v − u∥_{W k,p(U )} ≤ δ. So we are done.

25 3 Function spaces III Analysis of PDEs

It would be nice for C∞( Ū ) to be dense, instead of just C∞(U ). It turns out this is possible, as long as we have a sensible boundary.

Definition (C k,δ boundary). Let U ⊆ Rn be open and bounded. We say ∂U is C k,δ if for any point p ∈ ∂U, there exists r > 0 and a function γ ∈ C k,δ(Rn−1) such that (possibly after relabelling and rotating the axes) we have

U ∩ Br(p) = {(x, xn) ∈ Br(p) : xn > γ(x)}.

Thus, this says our boundary is locally the graph of a C k,δ function.

Theorem (Smooth approximation up to the boundary). Let 1 ≤ p < ∞, and U ⊆ Rn be open and bounded. Suppose ∂U is C 0,1. Then C∞( Ū ) ∩ W k,p(U ) is dense in W k,p(U ).

Proof. Previously, the reason we didn't get something in C∞( Ū ) was that we had to glue together infinitely many mollifications whose domains collectively exhaust U, and there is no hope that the resulting function is in C∞( Ū ). In the current scenario, we know that U locally looks like the region above the graph of a Lipschitz function near a boundary point x0. The idea is that given a u de
fined on U, we can shift it downwards by some ε. It is a known result that translation is continuous, so this only changes u by a tiny bit. We can then mollify with some ε̄ < ε, which gives a function defined on U (at least locally near x0).

So fix some x0 ∈ ∂U. Since ∂U is C 0,1, there exist r > 0 and γ ∈ C 0,1(Rn−1) such that

U ∩ Br(x0) = {(x, xn) ∈ Br(x0) | xn > γ(x)}.

Set V = U ∩ Br/2(x0). Define the shifted function uε by

uε(x) = u(x + εen).

Now pick ε̄ sufficiently small such that vε,ε̄ = ηε̄ ∗ uε is well-defined. Note that here we need to use the fact that ∂U is C 0,1: if the slope of ∂U is very steep near a point, then we need to choose ε̄ much smaller than ε. By requiring that γ is Lipschitz (i.e. 1-Hölder continuous), we can ensure there is a single choice of ε̄ that works throughout V. As long as ε̄ is small enough, we know that vε,ε̄ ∈ C∞( V̄ ).

Fix δ > 0. We can now estimate

∥vε,ε̄ − u∥_{W k,p(V )} = ∥vε,ε̄ − uε + uε − u∥_{W k,p(V )} ≤ ∥vε,ε̄ − uε∥_{W k,p(V )} + ∥uε − u∥_{W k,p(V )}.

Since translation is continuous in the Lp norm for p < ∞, we can pick ε > 0 such that ∥uε − u∥_{W k,p(V )} < δ/2. Having fixed such an ε, we can pick ε̄ so small that we also have ∥vε,ε̄ − uε∥_{W k,p(V )} < δ/2.

The conclusion of this is that for any
x0 ∈ ∂U, we can find a neighbourhood V ⊆ U of x0 such that for any u ∈ W k,p(U ) and δ > 0, there exists v ∈ C∞( V̄ ) such that ∥u − v∥_{W k,p(V )} ≤ δ. It remains to patch all of these together using a partition of unity.

By the compactness of ∂U, we can cover ∂U by finitely many of these V, say V1, . . ., VN. We further pick a V0 such that V0 ⋐ U and U = ∪_{i=0}^N Vi. We can pick approximations vi ∈ C∞( V̄i ) for i = 0, . . ., N (the i = 0 case is given by the previous global approximation theorem), satisfying ∥vi − u∥_{W k,p(Vi)} ≤ δ. Pick a partition of unity {ζi}_{i=0}^N of Ū subordinate to {Vi}. Define

v = Σ_{i=0}^N ζivi.

Clearly v ∈ C∞( Ū ), and we can bound

∥Dαv − Dαu∥_{Lp(U )} = ∥Dα Σ_{i=0}^N ζivi − Dα Σ_{i=0}^N ζiu∥_{Lp(U )} ≤ Ck Σ_{i=0}^N ∥vi − u∥_{W k,p(Vi)} ≤ Ck(1 + N )δ,

where Ck is a constant that depends only on the derivatives of the partition of unity, which are fixed. So we are done.

3.4 Extensions and traces

If U ⊆ Rn is open and bounded, then there is of course a restriction map W 1,p(Rn) → W 1,p(U ). It turns out that under mild conditions, there is an extension map going in the other direction as well.

Theorem (Extension of W 1,p functions). Suppose U is open, bounded and ∂U is C 1. Pick a bounded V such that U ⋐ V. Then for 1 ≤ p < ∞ there exists a bounded linear operator

E : W 1,p(U ) → W 1,p(Rn)

such that for any u ∈ W 1,p(U ),

(i) Eu = u
almost everywhere in U,
(ii) Eu has support in V,
(iii) ∥Eu∥_{W 1,p(Rn)} ≤ C∥u∥_{W 1,p(U )},

where the constant C depends on U, V, p but not on u.

Proof. First note that C 1( Ū ) is dense in W 1,p(U ). So it suffices to show that the theorem holds with W 1,p(U ) replaced with C 1( Ū ), and then extend by continuity. We first show that we can do this locally, and then glue the pieces together using a partition of unity.

Suppose x0 ∈ ∂U is such that ∂U near x0 lies in the plane {xn = 0}. In other words, there exists r > 0 such that

B+ = Br(x0) ∩ {xn ≥ 0} ⊆ Ū,
B− = Br(x0) ∩ {xn ≤ 0} ⊆ Rn \ U.

The idea is that we want to reflect u|B+ across the {xn = 0} boundary to get a function on B−, but the derivative will not be continuous if we do this naively. So we define a "higher order reflection" by

ū(x) = u(x) for x ∈ B+,
ū(x, xn) = −3u(x, −xn) + 4u(x, −xn/2) for x ∈ B−.

We see that this is a continuous function. Moreover, by explicitly computing the partial derivatives, we see that they are continuous across the boundary. So we know ū ∈ C 1(Br(x0)). We can then easily check that we have

∥ū∥_{W 1,p(Br(x0))} ≤ C∥u∥_{W 1,p(B+)}

for some constant C.

If ∂U is not necessarily flat near x0 ∈ ∂U, then we can use a C 1 diffeomorphism to straighten it out. Indeed, we can pick r > 0 and γ ∈ C 1(Rn−1) such that

U ∩ Br(x0) = {(x, xn) ∈ Br(x0) | xn > γ(x)}.

We can then use the
C 1-diffeomorphism Φ : Rn → Rn given by

Φ(x)i = xi for i = 1, . . ., n − 1,
Φ(x)n = xn − γ(x1, . . ., xn−1).

Then, since C 1 diffeomorphisms induce bounded isomorphisms between W 1,p spaces, this gives a local extension.

Since ∂U is compact, we can take finitely many points x0,i ∈ ∂U, sets Wi and extensions ūi ∈ C 1(Wi) extending u, such that ∂U ⊆ ∪_{i=1}^N Wi. Further pick W0 ⋐ U so that U ⊆ ∪_{i=0}^N Wi, and let {ζi}_{i=0}^N be a partition of unity subordinate to {Wi}. Write

ū = Σ_{i=0}^N ζiūi,

where ū0 = u. Then ū ∈ C 1(Rn), ū = u on U, and we have

∥ū∥_{W 1,p(Rn)} ≤ C∥u∥_{W 1,p(U )}.

By multiplying ū by a cut-off, we may assume supp ū ⊆ V. Now notice that the whole construction is linear in u. So we have constructed a bounded linear operator from a dense subset of W 1,p(U ) to W 1,p(V ), and there is a unique extension to the whole of W 1,p(U ) by the completeness of W 1,p(V ). We can see that the desired properties are preserved by this extension.

Trace theorems

A lot of the PDE problems we are interested in are boundary value problems, namely we want to solve a PDE subject to the function taking some prescribed values on the boundary. However, a function u ∈ Lp(U ) is only defined up to sets of measure zero, and ∂U is typically a set of measure zero. So we can't naively define u|∂U. We would hope that if we require u to have more regularity, then it makes sense to define the value at the boundary. This is true, and is given by the trace theorem.

Theorem (Trace theorem). Assume U is bounded and has
C 1 boundary. Then for 1 ≤ p < ∞ there exists a bounded linear operator T : W 1,p(U ) → Lp(∂U ) such that T u = u|∂U if u ∈ W 1,p(U ) ∩ C( Ū ). We say T u is the trace of u.

Proof. It suffices to show that the restriction map defined on C∞ functions is a bounded linear operator, since then we have a unique extension to W 1,p(U ). The gist of the argument is that Stokes' theorem allows us to express the integral of a function over the boundary as an integral over the whole of U. In fact, the proof is essentially the proof of Stokes' theorem.

By a general partition of unity argument, it suffices to show this in the case where U = {xn > 0} and u ∈ C∞( Ū ) with supp u ⊆ BR(0) ∩ Ū. Then

∫_{Rn−1} |u(x, 0)|p dx = −∫_{Rn−1} ∫_0^∞ (∂/∂xn)|u(x, xn)|p dxn dx = −∫_U p|u|p−1 (sgn u) uxn dx.

We estimate this using Young's inequality to get

∫_{Rn−1} |u(x, 0)|p dx ≤ Cp ∫_U (|u|p + |uxn|p) dx ≤ Cp ∥u∥_{W 1,p(U )}^p.

So we are done.

We can apply this to each derivative to define trace maps W k,p(U ) → W k−1,p(∂U ). In general, this trace map is not surjective. So in some sense, we don't actually need to use up a whole unit of differentiability. In the example sheet, we see that in the case p = 2, we only lose "half" a derivative.

Note that C∞_c(U ) is dense in W 1,p_0(U ), and the trace vanishes on C∞_c(U ). So T vanishes on W 1,p_0(U ). In fact, the converse is true: if T u = 0, then u ∈ W 1,p_0(U
).

3.5 Sobolev inequalities

Before we can move on to PDEs, we have to prove some Sobolev inequalities. These are inequalities that compare different norms, and allow us to "trade" different desirable properties. One particularly important thing we can do is to trade differentiability for continuity. So we will know that if u ∈ W k,p(U ) for some large k, then in fact u ∈ C m(U ) for some (small) m. The utility of these results is that we would like to construct our solutions in W k,p spaces, since these are easier to work with, but ultimately, we want an actual, smooth solution to our equation. Sobolev inequalities let us do so, since if u ∈ W k,p(U ) for all k, then it must be in C m as well.

To see why we should expect to be able to do this, consider the space H 1_0([0, 1]). A priori, if u ∈ H 1_0([0, 1]), then we only know it exists as some measurable function, and there is no canonical representative of this function. However, we can simply assign

u(x) = ∫_0^x u′(t) dt,

since we know the weak derivative u′ is an honest integrable function. This gives a well-defined representative of the function u, and even better, we can bound its supremum using ∥u′∥_{L2([0,1])}.

Before we start proving our Sobolev inequalities, we first prove the following lemma:

Lemma. Let n ≥ 2 and f1, . . ., fn ∈ Ln−1(Rn−1). For 1 ≤ i ≤ n, denote

x̃i = (x1, . . ., xi−1, xi+1, . . ., xn),

and set

f (x) = f1(x̃1) · · · fn(x̃n).

Then f ∈ L1(Rn) with

∥f ∥_{L1(Rn)} ≤ ∏_{i=1}^n ∥fi∥_{Ln−1(Rn−1)}.

Proof. We proceed by induction on n. If n = 2, then this is easy, since f (x1, x2) = f1
(x2)f2(x1). So

∫_{R2} |f (x1, x2)| dx = (∫ |f1(x2)| dx2)(∫ |f2(x1)| dx1) = ∥f1∥_{L1(R)}∥f2∥_{L1(R)}.

Suppose now that the result is true for n ≥ 2, and consider the n + 1 case. Write

f (x) = fn+1(x̃n+1)F (x), where F (x) = f1(x̃1) · · · fn(x̃n).

Then by Hölder's inequality, we have

∫_{x1,...,xn} |f ( ·, xn+1)| dx ≤ ∥fn+1∥_{Ln(Rn)} ∥F ( ·, xn+1)∥_{Ln/(n−1)(Rn)}.

We now apply the induction hypothesis to f1^{n/(n−1)}( ·, xn+1) · · · fn^{n/(n−1)}( ·, xn+1), which gives

∫_{x1,...,xn} |f ( ·, xn+1)| dx ≤ ∥fn+1∥_{Ln(Rn)} ∏_{i=1}^n ∥fi^{n/(n−1)}( ·, xn+1)∥_{Ln−1(Rn−1)}^{(n−1)/n} = ∥fn+1∥_{Ln(Rn)} ∏_{i=1}^n ∥fi( ·, xn+1)∥_{Ln(Rn−1)}.

Now integrate over xn+1. By the generalized Hölder inequality, we get

∥f ∥_{L1(Rn+1)} ≤ ∥fn+1∥_{Ln(Rn)} ∫_{xn+1} ∏_{i=1}^n ∥fi( ·, xn+1)∥_{Ln(Rn−1)} dxn+1 ≤ ∥fn+1∥_{Ln(Rn)} ∏_{i=1}^n (∫_{xn+1} ∥fi( ·, xn+1)∥_{Ln(Rn−1)}^n dxn+1)^{1/n} = ∥fn+1∥_{Ln(Rn)} ∏_{i=1}^n ∥fi∥_{Ln(Rn)}.

Theorem (Gagliardo–Nirenberg–Sobolev inequality). Assume n > p. Then we have

W 1,p(Rn) ⊆ Lp∗ (Rn), where p∗ = np/(n − p)
> p, and there exists c > 0 depending on n, p such that

∥u∥_{Lp∗ (Rn)} ≤ c∥u∥_{W 1,p(Rn)}.

In other words, W 1,p(Rn) is continuously embedded in Lp∗ (Rn).

Proof. Assume u ∈ C∞_c(Rn), and consider p = 1 first. Since the support is compact,

u(x) = ∫_{−∞}^{xi} uxi (x1, . . ., xi−1, yi, xi+1, . . ., xn) dyi.

So we know that

|u(x)| ≤ ∫_{−∞}^∞ |Du(x1, . . ., xi−1, yi, xi+1, . . ., xn)| dyi ≡ fi(x̃i).

Thus, applying this once in each direction, we obtain

|u(x)|^{n/(n−1)} ≤ ∏_{i=1}^n fi(x̃i)^{1/(n−1)}.

If we integrate and then use the lemma, we see that

∥u∥_{Ln/(n−1)(Rn)}^{n/(n−1)} ≤ C ∏_{i=1}^n ∥fi^{1/(n−1)}∥_{Ln−1(Rn−1)} = C∥Du∥_{L1(Rn)}^{n/(n−1)}.

So ∥u∥_{Ln/(n−1)(Rn)} ≤ C∥Du∥_{L1(Rn)}. Since C∞_c(Rn) is dense in W 1,1(Rn), the result for p = 1 follows.

Now suppose p > 1. We apply the p = 1 case to v = |u|^γ for some γ > 1, which we choose later. Then we have

Dv = γ sgn(u) |u|^{γ−1} Du.

So

(∫_{Rn} |u|^{γn/(n−1)} dx)^{(n−1)/n} ≤ γ ∫_{Rn} |u|^{γ−1}|Du| dx ≤ γ (∫_{Rn} |u|^{(γ−1)p/(p−1)} dx)^{(p−1)/p} (∫_{Rn} |Du|^p dx)^{1/p}.

We choose γ such that

γn/(n − 1) = (γ − 1)p/(p − 1).

So we should pick

γ = p(n − 1)/(n − p) > 1.

Then we have γn
/(n − 1) = np/(n − p) = p∗. So

(∫_{Rn} |u|^{p∗} dx)^{(n−1)/n} ≤ (p(n − 1)/(n − p)) (∫_{Rn} |u|^{p∗} dx)^{(p−1)/p} ∥Du∥_{Lp(Rn)}.

So

(∫_{Rn} |u|^{p∗} dx)^{1/p∗} ≤ (p(n − 1)/(n − p)) ∥Du∥_{Lp(Rn)}.

This argument is valid for u ∈ C∞_c(Rn), and by approximation, we can extend to W 1,p(Rn).

We can deduce some corollaries of this result:

Corollary. Suppose U ⊆ Rn is open and bounded with C 1 boundary, and 1 ≤ p < n. Then if p∗ = np/(n − p), we have W 1,p(U ) ⊆ Lp∗ (U ), and there exists C = C(U, p, n) such that

∥u∥_{Lp∗ (U )} ≤ C∥u∥_{W 1,p(U )}.

Proof. By the extension theorem, we can find ū ∈ W 1,p(Rn) with ū = u almost everywhere on U and ∥ū∥_{W 1,p(Rn)} ≤ C∥u∥_{W 1,p(U )}. Then we have

∥u∥_{Lp∗ (U )} ≤ ∥ū∥_{Lp∗ (Rn)} ≤ c∥ū∥_{W 1,p(Rn)} ≤ C̃∥u∥_{W 1,p(U )}.

Corollary. Suppose U is open and bounded, and suppose u ∈ W 1,p_0(U ) for some 1 ≤ p < n. Then we have the estimates

∥u∥_{Lq(U )} ≤ C∥Du∥_{Lp(U )}

for any q ∈ [1, p∗]. In particular,

∥u∥_{Lp(U )} ≤ C∥Du∥_{Lp(U )}.

Proof. Since u ∈ W 1,p_0(U ), there exists a sequence um ∈ C∞_c(U ) converging to u in W 1,p(U ). Extending um to vanish on U^c, we have um ∈ C∞_c(Rn). Applying Gagliardo–Nirenberg–Sobolev, we find that

∥um∥_{Lp∗ (Rn)} ≤ C∥Dum∥_{Lp(Rn)}.

33 3 Function spaces III Analysis of
PDEs

So we know that ∥um∥_{Lp∗ (U )} ≤ C∥Dum∥_{Lp(U )}. Sending m → ∞, we obtain ∥u∥_{Lp∗ (U )} ≤ C∥Du∥_{Lp(U )}. Since U is bounded, by Hölder, we have

(∫_U |u|^q dx)^{1/q} ≤ (∫_U 1 dx)^{1/(rq)} (∫_U |u|^{qs} dx)^{1/(sq)} ≤ C∥u∥_{Lp∗ (U )},

provided q ≤ p∗, where we choose s such that qs = p∗, and r such that 1/r + 1/s = 1.

The previous results were about the case n > p. If n < p < ∞, then we might hope that if u ∈ W 1,p(Rn), then u is "better than L∞".

Theorem (Morrey's inequality). Suppose n < p < ∞. Then there exists a constant C depending only on p and n such that

∥u∥_{C 0,γ (Rn)} ≤ C∥u∥_{W 1,p(Rn)}

for all u ∈ C∞_c(Rn), where C = C(p, n) and γ = 1 − n/p < 1.

Proof. We first prove the Hölder part of the estimate. Let Q be an open cube of side length r > 0 containing 0, and define

ū = (1/|Q|) ∫_Q u(x) dx.

Then

|ū − u(0)| = |(1/|Q|) ∫_Q (u(x) − u(0)) dx| ≤ (1/|Q|) ∫_Q |u(x) − u(0)| dx.

Note that

u(x) − u(0) = ∫_0^1 (d/dt) u(tx) dt = ∫_0^1 Σ_i xi (∂u/∂xi)(tx) dt.

So

|u(x) − u(0)| ≤ r ∫_0^1 Σ_i |(∂u/∂xi)(tx)| dt,

and hence

|ū − u(0)| ≤ (r/|Q|) ∫_Q ∫_0^1 Σ_i |(∂u/∂xi)(tx)| dt dx = (r/|Q|) ∫_0^1 t^{−n} ∫_{tQ} Σ_{i=1}^n |(∂u/∂xi)(y)| dy dt ≤ (r/|Q|) ∫_0^1 t^{−n} Σ_i ∥∂u/∂xi∥_{Lp(tQ)} |
tQ|^{1/p′} dt,

where 1/p + 1/p′ = 1. Using that |Q| = r^n and |tQ| = t^n r^n, we obtain

|ū − u(0)| ≤ c r^{1−n/p} ∥Du∥_{Lp(Rn)} ∫_0^1 t^{−n/p} dt ≤ (c/(1 − n/p)) r^{1−n/p}∥Du∥_{Lp(Rn)}.

Note that the right-hand side is decreasing in r. So when we take r to be very small, we see that u(0) is close to the average value of u around 0. Indeed, suppose x, y ∈ Rn with |x − y| = r/2. Pick a box containing x and y, of side length r. Applying the above result, shifted so that x and y respectively play the role of 0, we can estimate

|u(x) − u(y)| ≤ |u(x) − ū| + |u(y) − ū| ≤ C̃ r^{1−n/p}∥Du∥_{Lp(Rn)}.

Since r = 2|x − y|, it follows that

|u(x) − u(y)|/|x − y|^{1−n/p} ≤ C · 2^{1−n/p}∥Du∥_{Lp(Rn)}.

So we conclude that [u]_{C 0,γ (Rn)} ≤ C∥Du∥_{Lp(Rn)}.

Finally, to see that u is bounded, note that any x ∈ Rn belongs to some cube Q of side length 1. So we have

|u(x)| ≤ |u(x) − ū| + |ū| ≤ |ū| + C∥Du∥_{Lp(Rn)}.

But also

|ū| ≤ ∫_Q |u(x)| dx ≤ ∥u∥_{Lp(Rn)}∥1∥_{Lp′(Q)} = ∥u∥_{Lp(Rn)}.

So we are done.

Corollary. Suppose u ∈ W 1,p(U ) for U open, bounded with C 1 boundary. Then there exists u∗ ∈ C 0,γ(U ) such that u = u∗ almost everywhere and ∥u∗∥_{C 0,γ (U )} ≤ C∥u∥_{W 1,p(U )}.

By applying these results iteratively, we can establish higher-order versions: W k,p(U ) ⊆ Lq(U ) with some appropriate q.

35 4 Elliptic boundary value problems III Analysis of PDE
s

4 Elliptic boundary value problems

4.1 Existence of weak solutions

In this chapter, we are going to study second-order elliptic boundary value problems. The canonical example to keep in mind is the following:

Example. Suppose U ⊆ Rn is a bounded open set with smooth boundary. Suppose ∂U is a perfect conductor and ρ : U → R is the charge density inside U. The electrostatic potential φ satisfies

∆φ = ρ on U,
φ|∂U = 0.

This is an example of an elliptic boundary value problem. Note that we cannot tackle this with the Cauchy–Kovalevskaya theorem, since we don't even have enough boundary conditions, and also because we want an everywhere-defined solution.

In general, let U ⊆ Rn be open and bounded with C 1 boundary, and for u ∈ C 2( Ū ), define

Lu = − Σ_{i,j=1}^n (aij(x)uxj )xi + Σ_{i=1}^n bi(x)uxi + c(x)u,

where aij, bi and c are given functions defined on U. Typically, we will assume they are at least L∞, but sometimes we will require more. If aij ∈ C 1(U ), then we can rewrite this as

Lu = − Σ_{i,j=1}^n aij(x)uxixj + Σ_{i=1}^n b̃i(x)uxi + c(x)u

for some b̃i, using the product rule. We will mostly use the first form, called the divergence form, which is suitable for the energy method, while the second (non-divergence form) is suited to the maximum principle. Essentially, what makes the divergence form convenient for us is that it is easy to integrate by parts.

Of course, given the title of the chapter, we assume that L is elliptic, i.e.

Σ_{i,j} aij(x)ξiξj ≥ 0

for all x ∈ U and ξ ∈ Rn. It turns out this is not quite strong enough, because this condition allows the aij to be degenerate, or to vanish at the boundary.

Definition (Uniform ellipticity).
An operator

Lu = − Σ_{i,j=1}^n (aij(x)uxj )xi + Σ_{i=1}^n bi(x)uxi + c(x)u

is uniformly elliptic if

Σ_{i,j=1}^n aij(x)ξiξj ≥ θ|ξ|²

for some θ > 0 and all x ∈ U, ξ ∈ Rn.

We shall consider the boundary value problem

Lu = f on U,
u = 0 on ∂U.

This form of the equation is not very amenable to study by functional-analytic methods. Similar to what we did in the proof of Picard–Lindelöf, we want to pass to a weak formulation. Suppose u ∈ C 2( Ū ) is a solution, and suppose v ∈ C 2( Ū ) also satisfies v|∂U = 0. Multiply the equation Lu = f by v and integrate by parts. Then we get

∫_U vf dx = ∫_U (Σ_{i,j} vxi aij uxj + Σ_i bi uxi v + cuv) dx ≡ B[u, v]. (2)

Conversely, suppose u ∈ C 2( Ū ) and u|∂U = 0. If ∫_U vf dx = B[u, v] for all v ∈ C 2( Ū ) such that v|∂U = 0, then we claim u in fact solves the original equation. Indeed, undoing the integration by parts, we conclude that

∫_U vLu dx = ∫_U vf dx

for all v ∈ C 2( Ū ) with v|∂U = 0. But if this is true for all such v, then it must be that Lu = f.

Thus, the PDE problem we started with is equivalent to finding u that solves B[u, v] = ∫_U vf dx for all suitable v, provided u is regular enough. But the point is that (2) makes sense for u, v ∈ H 1_0(U ). So our strategy is to first show that we can find u ∈ H 1_0(U ) that solves (2), and then hope that under reasonable assumptions, we can show that any such solution must in fact be C 2( ¯
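To see the weak formulation (2) in action, here is a minimal numerical sketch, not part of the notes: for the simplest operator Lu = −u′′ on U = (0, 1) (so aij = 1, bi = 0, c = 0 and B[u, v] = ∫ u′v′ dx), a Galerkin method seeks uh in a finite-dimensional subspace of H 1_0 spanned by piecewise-linear hat functions, with B[uh, v] = (f, v) for every basis function v. The choice f(x) = π² sin(πx), whose exact solution is sin(πx), is an illustrative assumption.

```python
import numpy as np

N = 100                       # number of interior nodes
h = 1.0 / (N + 1)
x = np.linspace(h, 1 - h, N)  # interior mesh points; hats vanish at 0 and 1

f = np.pi**2 * np.sin(np.pi * x)  # right-hand side; exact solution sin(pi x)

# Stiffness matrix K_ij = B[phi_j, phi_i] = \int phi_j' phi_i' dx for
# hat functions on a uniform mesh: tridiagonal (2, -1)/h.
K = (np.diag(2 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1)) / h

F = h * f                     # (f, phi_i)_{L^2} by one-point quadrature
u = np.linalg.solve(K, F)     # Galerkin solution: B[u_h, v] = (f, v) for all v

assert np.max(np.abs(u - np.sin(np.pi * x))) < 1e-3
```

The boundary condition u = 0 on ∂U is built into the trial space (no hat functions at the endpoints), which mirrors working in H 1_0(U) rather than imposing the boundary values as extra equations.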
U ). Definition (Weak solution). We say u ∈ H 1 0 (U ) is a weak solution of Lu = f on U u = 0 on ∂U B[u, v] = (f, v)L2(U ) for f ∈ L2(U ) if for all v ∈ H 1 0 (U ). We’ll exploit the Hilbert space structure of H 1 0 (U ) to find weak solutions. Theorem (Lax–Milgram theorem). Let H be a real Hilbert space with inner product ( ·, · ). Suppose B : H × H → R is a bilinear mapping such that there exists constants α, β > 0 so that 37 4 Elliptic boundary value problems III Analysis of PDEs – |B[u, v]| ≤ αuv for all u, v ∈ H – βu2 ≤ B[u, u] (boundedness) (coercivity) Then if f : H → R is a bounded linear map, then there exists a unique u ∈ H such that B[u, v] = f, v for all v ∈ H. Note that if B is just the inner product, then this is the Riesz representation theorem. Proof. By the Riesz representation theorem, we may assume that there is some w such that For each fixed u ∈ H, the map f, v = (u, v). v → B[u, v] is a bounded linear functional on H. So by the Riesz representation theorem, we can find some Au such that It then suffices to show that A is invertible, for then we can take u = A−1w. B[u, v] = (Au, v). – Since B is bilinear, it is immediate that A : H → H is linear. – A is bounded, since we have Au2 = (Au, Au) = B[u, Au] ≤ αuAu. – A is injective and has closed image. Indeed, by coercivity, we know βu2 ≤ B[u, u] = (Au, u) ≤ Auu. Dividing by u, we see that A is bounded below, hence is injective and has closed image (since H is complete). (Indeed, injectivity
is clear, and if Au_m → v for some v, then

    ‖u_m − u_n‖ ≤ (1/β)‖Au_m − Au_n‖ → 0

as m, n → ∞. So (u_m) is Cauchy, and hence has a limit u. Then by continuity, Au = v, and in particular, v ∈ im A.)

– Since im A is closed, we know H = im A ⊕ (im A)⊥. Now let w ∈ (im A)⊥. Then we can estimate

    β‖w‖² ≤ B[w, w] = (Aw, w) = 0.

So w = 0. Thus, in fact (im A)⊥ = {0}, and so A is surjective.

We would like to apply this to our elliptic PDE. To do so, we need to prove that our B satisfies boundedness and coercivity. Unfortunately, this is not always true.

Theorem (Energy estimates for B). Suppose a^{ij} = a^{ji}, b^i, c ∈ L∞(U), and there exists θ > 0 such that

    ∑_{i,j=1}^n a^{ij}(x) ξ_i ξ_j ≥ θ|ξ|²

for almost every x ∈ U and ξ ∈ ℝⁿ. Then if B is defined by

    B[u, v] = ∫_U ( ∑_{i,j} v_{x_i} a^{ij} u_{x_j} + ∑_i b^i u_{x_i} v + cuv ) dx,

then there exist α, β > 0 and γ ≥ 0 such that

(i) |B[u, v]| ≤ α‖u‖_{H¹(U)}‖v‖_{H¹(U)} for all u, v ∈ H¹₀(U);
(ii) β‖u‖²_{H¹(U)} ≤ B[u, u] + γ‖u‖²_{L²(U)}.

Moreover, if b^i ≡ 0 and c ≥ 0, then we can take γ = 0.

Proof.
(i) We estimate

    |B[u, v]| ≤ ∑_{i,j} ‖a^{ij}‖_{L∞(U)} ∫_U |Du||Dv| dx + ∑_i ‖b^i‖_{L∞(U)} ∫_U |Du||v| dx + ‖c‖_{L∞(U)} ∫_U |u||v| dx
             ≤ c₁‖Du‖_{L²(U)}‖Dv‖_{L²(U)} + c₂‖Du‖_{L²(U)}‖v‖_{L²(U)} + c₃‖u‖_{L²(U)}‖v‖_{L²(U)}
             ≤ α‖u‖_{H¹(U)}‖v‖_{H¹(U)}

for some α.

(ii) We start from uniform ellipticity. This implies

    θ ∫_U |Du|² dx ≤ ∫_U ∑_{i,j=1}^n a^{ij}(x) u_{x_i} u_{x_j} dx
                   = B[u, u] − ∫_U ( ∑_{i=1}^n b^i u_{x_i} u + cu² ) dx
                   ≤ B[u, u] + ∑_{i=1}^n ‖b^i‖_{L∞(U)} ∫_U |Du||u| dx + ‖c‖_{L∞(U)} ∫_U |u|² dx.

Now by Young's inequality, we have

    ∫_U |Du||u| dx ≤ ε ∫_U |Du|² dx + (1/4ε) ∫_U |u|² dx

for any ε > 0. We choose ε small enough so that

    ε ∑_{i=1}^n ‖b^i‖_{L∞(U)} ≤ θ/2.

So we have

    θ ∫_U |Du|² dx ≤ B[u, u] + (θ/2) ∫_U |Du|² dx + γ ∫_U |u|² dx

for some γ. This implies

    (θ/2)‖Du‖²_{L²(U)} ≤ B[u, u] + γ‖u‖²_{L²(U)}.

We can add (θ/2)‖u‖²_{L²(U)} to both sides (absorbing it into γ) to get the desired bound on ‖u‖_{H¹(U)}.

To get the "moreover" statement, we see that under these conditions, we have

    θ ∫_U |Du|² dx ≤ B[u, u].

Then we apply Poincaré's inequality, which tells us there is some C > 0 such that for all u ∈ H¹₀(U), we have

    ‖u‖_{L²(U)} ≤ C‖Du‖_{L²(U)},

so we can indeed take γ = 0.

The estimate (ii) is sometimes called Gårding's inequality.

Theorem. Let U, L be as above. There is a γ ≥ 0 such that for any µ ≥ γ and any f ∈ L²(U), there exists a unique weak solution to

    Lu + µu = f on U,
    u = 0 on ∂U.

Moreover, we have

    ‖u‖_{H¹(U)} ≤ C‖f‖_{L²(U)}

for some C = C(L
, U) ≥ 0. Again, if b^i ≡ 0 and c ≥ 0, then we may take γ = 0.

Proof. Take γ from the previous theorem when applied to L. If µ ≥ γ, we set

    B_µ[u, v] = B[u, v] + µ(u, v)_{L²(U)}.

This is the bilinear form corresponding to the operator L_µ = L + µ. Then by the previous theorem, B_µ satisfies boundedness and coercivity. So if we fix any f ∈ L², and think of it as an element of H¹₀(U)* by

    ⟨f, v⟩ = (f, v)_{L²(U)} = ∫_U fv dx,

then we can apply Lax–Milgram to find a unique u ∈ H¹₀(U) satisfying

    B_µ[u, v] = ⟨f, v⟩ = (f, v)_{L²(U)}

for all v ∈ H¹₀(U). This is precisely the condition for u to be a weak solution.

Finally, the Gårding inequality tells us

    β‖u‖²_{H¹(U)} ≤ B_µ[u, u] = (f, u)_{L²(U)} ≤ ‖f‖_{L²(U)}‖u‖_{L²(U)}.

So we know that β‖u‖_{H¹(U)} ≤ ‖f‖_{L²(U)}.

In some ways, this is a magical result. We managed to solve a PDE without having to actually work with the PDE. There are a few things we might object to. First of all, we only obtained a weak solution, and not a genuine solution. We will show that under some reasonable assumptions on a, b, c, if f is better behaved, then u is also better behaved, and in general, if f ∈ H^k, then u ∈ H^{k+2}. This is known as elliptic regularity. Together with Sobolev inequalities, this tells us u is genuinely a classical solution. Another problem is the presence of the µ. We noted that if L is, say, the Laplacian, then we can take γ = 0, and so we don't have this problem. But in general, this theorem requires it, and this is a bit unsatisfactory.
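To make the Lax–Milgram machinery concrete, here is a small numerical sketch (an illustration of ours, not part of the notes; the mesh, function name and discretisation choices are all assumptions). For L = −d²/dx² on U = (0, 1), the form B[u, v] = ∫ u′v′ dx restricted to the span of piecewise-linear "hat" functions becomes a symmetric tridiagonal matrix; coercivity of B shows up as positive-definiteness of that matrix, so the discrete weak problem B[u, v] = (f, v) has a unique solution, found by a single linear solve.

```python
# Galerkin sketch for the weak problem B[u, v] = (f, v) with
# B[u, v] = ∫_0^1 u'v' dx  (i.e. L = -d²/dx², zero Dirichlet data),
# discretised with piecewise-linear hat functions on a uniform mesh.

def solve_poisson_1d(f, n):
    """Approximate -u'' = f on (0,1), u(0) = u(1) = 0, at n interior nodes."""
    h = 1.0 / (n + 1)
    # Stiffness matrix entries B[phi_i, phi_j]: 2/h on the diagonal,
    # -1/h on the off-diagonals. Load vector (f, phi_i) ≈ h f(x_i).
    diag = [2.0 / h] * n
    off = [-1.0 / h] * (n - 1)
    rhs = [h * f((i + 1) * h) for i in range(n)]
    # Thomas algorithm: Gaussian elimination for tridiagonal systems.
    for i in range(1, n):
        m = off[i - 1] / diag[i - 1]
        diag[i] -= m * off[i - 1]
        rhs[i] -= m * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - off[i] * u[i + 1]) / diag[i]
    return u

if __name__ == "__main__":
    # f ≡ 1 has exact weak solution u(x) = x(1-x)/2, so u(1/2) = 1/8.
    u = solve_poisson_1d(lambda x: 1.0, 99)
    print(u[49])  # node at x = 0.5
```

For f ≡ 1 the computed nodal values agree with u(x) = x(1 − x)/2; this is of course only a finite-dimensional shadow of the theorem, which operates in the infinite-dimensional space H¹₀(U).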
We would like to think a bit more about it.

4.2 The Fredholm alternative

To understand the second problem, we shall seek to prove the following theorem:

Theorem (Fredholm alternative). Consider the problem

    Lu = f,  u|_{∂U} = 0.    (∗)

For L a uniformly elliptic operator on an open bounded set U with C¹ boundary, either

(i) For each f ∈ L²(U), there is a unique weak solution u ∈ H¹₀(U) to (∗); or
(ii) There exists a non-zero weak solution u ∈ H¹₀(U) to the homogeneous problem, i.e. (∗) with f = 0.

This is similar to what we know about solving matrix equations Ax = b — either there is a solution for every b, or there are infinitely many solutions to the homogeneous problem.

Similar to the previous theorem, this follows from a general functional analytic result. Recall the definition of a compact operator:

Definition (Compact operator). A bounded operator K : H → H is compact if every bounded sequence (u_m)_{m=1}^∞ has a subsequence (u_{m_j}) such that (Ku_{m_j})_{j=1}^∞ converges strongly in H.

Recall (or prove as an exercise) the following theorem regarding compact operators.

Theorem (Fredholm alternative). Let H be a Hilbert space and K : H → H be a compact operator. Then

(i) ker(I − K) is finite-dimensional.
(ii) im(I − K) is closed.
(iii) im(I − K) = ker(I − K†)⊥.
(iv) ker(I − K) = {0} iff im(I − K) = H.
(v) dim ker(I − K) = dim ker(I − K†) = dim coker(I − K).

How do we apply this to our situation? Our previous theorem told us that L + γ is invertible for large γ, and we claim that (L + γ)⁻¹ is compact. We can then deduce the previous result by applying (iv) of the Fredholm alternative with K a
(scalar multiple of) (L + γ)⁻¹ (plus some bookkeeping).

So let us show that (L + γ)⁻¹ is compact. Note that this map sends f ∈ L²(U) to u ∈ H¹₀(U). To make it an endomorphism, we have to compose it with the inclusion H¹₀(U) → L²(U). The proof that (L + γ)⁻¹ is compact will not involve (L + γ)⁻¹ in any way — we shall show that the inclusion H¹₀(U) → L²(U) is compact!

We shall prove this in two steps. First, we need the notion of weak convergence.

Definition (Weak convergence). Suppose (u_n)_{n=1}^∞ is a sequence in a Hilbert space H. We say u_n converges weakly to u ∈ H if

    (u_n, w) → (u, w)

for all w ∈ H. We write u_n ⇀ u.

Of course, we have

Lemma. Weak limits are unique.

Lemma. Strong convergence implies weak convergence.

We shall show that given any bounded sequence in H¹₀(U), we can find a subsequence that is weakly convergent. We then show that every weakly convergent sequence in H¹₀(U) is strongly convergent in L²(U). In fact, the first result is completely general:

Theorem (Weak compactness). Let H be a separable Hilbert space, and suppose (u_m)_{m=1}^∞ is a bounded sequence in H with ‖u_m‖ ≤ K for all m. Then u_m admits a subsequence (u_{m_j})_{j=1}^∞ such that u_{m_j} ⇀ u for some u ∈ H with ‖u‖ ≤ K.

One can prove this theorem without assuming H is separable, but it is slightly messier.

Proof. Let (e_i)_{i=1}^∞ be an orthonormal basis for H. Consider (e₁, u_m). By Cauchy–Schwarz, we have

    |(e₁, u_m)| ≤ ‖e₁‖‖u_m‖ ≤ K.

So by Bolzano–Weierstrass, there exists a subsequence (u_{m_j}) such that (e₁, u_{m_j}) converges. Doing this iteratively, and passing to a diagonal subsequence (v_ℓ), we can arrange that for each i, there
is some c_i such that (e_i, v_ℓ) → c_i as ℓ → ∞.

We would expect the weak limit to be ∑ c_i e_i. To prove this, we need to first show the sum converges. We have

    ∑_{j=1}^p |c_j|² = lim_{ℓ→∞} ∑_{j=1}^p |(e_j, v_ℓ)|² ≤ sup_ℓ ∑_{j=1}^p |(e_j, v_ℓ)|² ≤ sup_ℓ ‖v_ℓ‖² ≤ K²,

using Bessel's inequality. So

    u = ∑_{j=1}^∞ c_j e_j

converges in H, and ‖u‖ ≤ K. We already have (e_j, v_ℓ) → (e_j, u) for all j. Since ‖v_ℓ − u‖ is bounded by 2K, it follows that the set of all w such that

    (w, v_ℓ) → (w, u)    (†)

is closed under finite linear combinations and taking limits, hence is all of H. To see that it is closed under limits, suppose w_k → w, and the w_k satisfy (†). Then

    |(w, v_ℓ) − (w, u)| ≤ |(w − w_k, v_ℓ − u)| + |(w_k, v_ℓ − u)| ≤ 2K‖w − w_k‖ + |(w_k, v_ℓ − u)|.

So we can first find k large enough such that the first term is small, then pick ℓ such that the second is small.

We next want to show that if u_m ⇀ u in H¹(U), then u_m → u in L². We may as well assume that U is some large cube of side length L, by extension. Notice that since U is bounded, the constant function 1 is in H¹(U). So u_m ⇀ u in particular implies

    ∫_U (u_m − u) dx → 0.

Recall that the Poincaré inequality tells us that if u ∈ H¹₀(U), then we can bound ‖u‖_{L²(U)} by some multiple of ‖Du‖_{L²(U)}. If we try to prove this without the assumption that u vanishes on the boundary, then we find that we need a correction term. The resulting lemma is as follows:

Lemma (Poincaré revisited). Suppose u ∈ H¹(ℝⁿ). Let Q = [ξ₁, ξ₁ + L] × ⋯ × [ξ_n, ξ_n + L] be a cube of side length L. Then we have

    ‖u‖²_{L²(Q)} ≤ (1/|Q|) ( ∫_Q u(x) dx )² + (nL²/2) ‖Du‖²_{L²(Q)}.

We can improve this to obtain better bounds by subdividing Q into smaller cubes, and then applying the lemma to each of the cubes individually. By subdividing enough, this leads to a proof that u_m ⇀ u in H¹ implies u_m → u in L².

Proof. By approximation, we can assume u ∈ C∞(Q̄). For x, y ∈ Q, we write

    u(x) − u(y) = ∫_{y₁}^{x₁} (d/dt) u(t, x₂, …, x_n) dt
                + ∫_{y₂}^{x₂} (d/dt) u(y₁, t, x₃, …, x_n) dt
                + ⋯
                + ∫_{y_n}^{x_n} (d/dt) u(y₁, …, y_{n−1}, t) dt.

Squaring, and using 2ab ≤ a² + b², we have

    u(x)² + u(y)² − 2u(x)u(y) ≤ n ( ∫_{y₁}^{x₁} (d/dt) u(t, x₂, …, x_n) dt )² + ⋯ + n ( ∫_{y_n}^{x_n} (d/dt) u(y₁, …, y_{n−1}, t) dt )².

Now integrate over x and y. On the left, we get

    ∫_{Q×Q} dx dy (u(x)² + u(y)² − 2u(x)u(y)) = 2|Q|‖u‖²_{L²(Q)} − 2 ( ∫_Q u(x) dx )².

On the right, the first term satisfies

    I₁ = ( ∫_{y₁}^{x₁} (d/dt) u(t, x₂, …, x_n) dt )²
       ≤ |x₁ − y₁| ∫_{y₁}^{x₁} | (d/dt) u(t, x₂, …, x_n) |² dt    (Cauchy–Schwarz)
       ≤ L ∫_{ξ₁}^{ξ₁+L} | (d/dt) u(t, x₂, …, x_n) |² dt.

Integrating over all x, y ∈ Q, we get

    ∫_{Q×Q} dx dy I₁ ≤ L²|Q| ‖D₁u‖²_{L²(Q)}.

Similarly estimating the remaining terms on the right-hand side, we find that

    2|Q|‖u‖²_{L²(Q)} − 2 ( ∫_Q u(x) dx )² ≤ n|Q|L² ∑_{i=1}^n ‖D_iu‖²_{L²(Q)} = n|Q|L²‖Du‖²_{L²(Q)},

and the lemma follows upon dividing by 2|Q|.

Theorem (Rellich–Kondrachov). Let U ⊆ ℝⁿ be open, bounded with C¹ boundary. Then if (u_m)_{m=1}^∞ is a sequence in H¹(U) with u_m ⇀ u, then u_m → u in L². In particular, by weak compactness, any bounded sequence in H¹(U) has a subsequence that is convergent in L²(U).

Note that to obtain the "in particular" part, we need to know that H¹(U) is separable. This is an exercise on the example sheet. Alternatively, we can appeal to a stronger version of weak compactness that does not assume separability.

Proof. By the extension theorem, we may replace U by some large cube Q with Ū ⊆ Q. We subdivide Q into N cubes of side length δ, such that the cubes only intersect at their faces. Call these {Q_a}_{a=1}^N. We apply Poincaré separately to each of these to obtain

    ‖u_j − u‖²_{L²(Q)} = ∑_{a=1}^N ‖u_j − u‖²_{L²(Q_a)}
                       ≤ ∑_{a=1}^N [ (1/|Q_a|) ( ∫_{Q_a} (u_j − u) dx )² + (nδ²/2) ‖Du_j − Du‖²_{L²(Q_a)} ]
                       = ∑_{a=1}^N (1/|Q_a|) ( ∫_{Q_a} (u_j − u) dx )² + (nδ²/2) ‖Du_j − Du‖²_{L²(Q)}.

Now since ‖Du_j − Du‖²_{L²(Q)} is bounded uniformly in j, for δ small enough, the second term is < ε/2. Then since u_j ⇀ u, we in particular have

    ∫_{Q_a} (u_j − u) dx → 0

as j → ∞ for each a, since this is just the inner product with the constant function 1. So for j large
enough, the first term is also < ε/2.

The same result holds with H¹(U) replaced by H¹₀(U). The proof is in fact simpler, and we wouldn't need the assumption that the boundary is C¹.

Corollary. Suppose K : L²(U) → H¹(U) is a bounded linear operator. Then the composition

    L²(U) →(K) H¹(U) ↪ L²(U)

is compact.

The slogan is that we get compactness whenever we improve regularity, which is something that happens in much more generality.

Proof. Indeed, if (u_m) ⊆ L²(U) is bounded, then (Ku_m) is also bounded in H¹(U). So by Rellich–Kondrachov, there exists a subsequence Ku_{m_j} → u in L²(U).

We are now ready to prove the Fredholm alternative for elliptic boundary value problems. Recall that in our description of the Fredholm alternative, we had the direct characterization

    im(I − K) = ker(I − K†)⊥.

We can make the analogous statement here. To do so, we need to talk about the adjoint of L. Since L is not an operator defined on L²(U), trying to write down what it means to be an adjoint is slightly messy. Instead, we shall be content with talking about "formal adjoints".

It's been a while since we've met a PDE, so let's recall the setting we had. We have a uniformly elliptic operator

    Lu = −∑_{i,j=1}^n (a^{ij}(x) u_{x_j})_{x_i} + ∑_{i=1}^n b^i(x) u_{x_i} + c(x) u

on an open bounded set U with C¹ boundary. The associated bilinear form is

    B[u, v] = ∫_U ( ∑_{i,j} a^{ij}(x) u_{x_j} v_{x_i} + ∑_{i=1}^n b^i(x) u_{x_i} v + c(x) uv ) dx.

We are interested in solving the boundary value problem

    Lu = f,  u|_{∂U} = 0

with f ∈ L²(U). The formal adjoint of L is defined by the relation

    (Lφ, ψ)_{L²(U)} = (φ, L†ψ)_{L²(U)}

for all φ, ψ ∈ C_c^∞(U). By integration by parts, we know L† should be given by

    L†v = −∑_{i,j=1}^n (a^{ij} v_{x_j})_{x_i} − ∑_{i=1}^n b^i(x) v_{x_i} + ( c − ∑_{i=1}^n b^i_{x_i} ) v.

Note that here we have to assume that b^i ∈ C¹(Ū). However, what really interests us is the adjoint bilinear form, which is simply given by

    B†[v, u] = B[u, v].

We are actually just interested in B†, and not L†, and we can sensibly talk about B† even if b^i is not differentiable. As usual, we say v ∈ H¹₀(U) is a weak solution of the adjoint problem

    L†v = f,  v|_{∂U} = 0

if

    B†[v, u] = (f, u)_{L²(U)}

for all u ∈ H¹₀(U).

Given this set up, we can now state and prove the Fredholm alternative.

Theorem (Fredholm alternative for elliptic BVP). Let L be a uniformly elliptic operator on an open bounded set U with C¹ boundary. Consider the problem

    Lu = f,  u|_{∂U} = 0.    (∗)

Then exactly one of the following is true:

(i) For each f ∈ L²(U), there is a unique weak solution u ∈ H¹₀(U) to (∗);
(ii) There exists a non-zero weak solution u ∈ H¹₀(U) to the homogeneous problem, i.e. (∗) with f = 0.

If (ii) holds, then the dimension of N = ker L ⊆ H¹₀(U) is equal to the dimension of N* = ker L† ⊆ H¹₀(U). Finally, (∗) has a solution if and only if (f, v)_{L²(U)} = 0 for all v ∈ N*.

Proof. We know that there exists γ > 0 such that for any f ∈ L²(U), there is a unique weak
solution u ∈ H¹₀(U) to

    L_γ u = Lu + γu = f,  u|_{∂U} = 0.

Moreover, we have the bound

    ‖u‖_{H¹(U)} ≤ C‖f‖_{L²(U)}

(which gives uniqueness). Thus, we can set L_γ⁻¹ f to be this u, and then L_γ⁻¹ : L²(U) → H¹₀(U) is a bounded linear map. Composing with the inclusion H¹₀(U) ↪ L²(U), we get a compact endomorphism of L²(U).

Now suppose u ∈ H¹₀(U) is a weak solution to (∗). Then

    B[u, v] = (f, v)_{L²(U)}  for all v ∈ H¹₀(U)

is true if and only if

    B_γ[u, v] ≡ B[u, v] + γ(u, v) = (f + γu, v)  for all v ∈ H¹₀(U).

Hence, u is a weak solution of (∗) if and only if

    u = L_γ⁻¹(f + γu) = γL_γ⁻¹u + L_γ⁻¹f.

In other words, u solves (∗) iff

    u − Ku = h,  for K = γL_γ⁻¹, h = L_γ⁻¹f.

Since we know that K : L²(U) → L²(U) is compact, by the Fredholm alternative for compact operators, either

(i) u − Ku = h admits a solution u ∈ L²(U) for all h ∈ L²(U); or
(ii) there exists a non-zero u ∈ L²(U) such that u − Ku = 0.

Moreover,

    im(I − K) = ker(I − K†)⊥  and  dim ker(I − K) = dim im(I − K)⊥.

There is a bit of bookkeeping to show that this corresponds to the two alternatives in the theorem.

(i) We need to show that u ∈ H¹₀(U). But this is trivial, since we have

    u = γL_γ⁻¹u + L_γ⁻¹f,

and we know that L_γ⁻¹ maps L²(U) into H¹₀(U).

(ii) As above, we know that the non-zero solution u lies in H¹₀(U). There are two things to show. First, we have to show that v − K†v = 0 iff v is a weak solution to

    L†v = 0,  v|_{∂U} = 0.

Next, we need to show that h = L_γ⁻¹f ∈ (N*)⊥ iff f ∈ (N*)⊥.

For the first part, we want to show that v ∈ ker(I − K†) iff B†[v, u] = B[u, v] = 0 for all u ∈ H¹₀(U). We are good at evaluating B[u, v] when u is of the form L_γ⁻¹w, by definition of a weak solution. Fortunately, im L_γ⁻¹ contains C_c^∞(U), since

    L_γ⁻¹ L_γ φ = φ

for all φ ∈ C_c^∞(U). In particular, im L_γ⁻¹ is dense in H¹₀(U). So it suffices to show that v ∈ ker(I − K†) iff B[L_γ⁻¹w, v] = 0 for all w ∈ L²(U). This is immediate from the computation

    B[L_γ⁻¹w, v] = B_γ[L_γ⁻¹w, v] − γ(L_γ⁻¹w, v) = (w, v) − (Kw, v) = (w, v − K†v).

The second is also easy — if v ∈ N* = ker(I − K†), then

    (L_γ⁻¹f, v) = (1/γ)(Kf, v) = (1/γ)(f, K†v) = (1/γ)(f, v).

4.3 The spectrum of elliptic operators

Let's recap what we have obtained so far. Given L, we have found some γ such that whenever µ ≥ γ, there is a unique solution to (L + µ)u = f (plus boundary conditions). In particular, L + µ has
trivial kernel. For µ ≤ γ, (L + µ)u = 0 may or may not have a non-trivial solution, but we know this satisfies the Fredholm alternative, since L + µ is still an elliptic operator. Rewriting (L + µ)u = 0 as Lu = −µu, we are essentially considering eigenvalues of L. Of course, L is not a bounded linear operator, so our usual spectral theory does not apply to L. However, as always, we know that L_γ⁻¹ is compact for large enough γ, and so the spectral theory of compact operators can tell us something about what the eigenvalues of L look like.

We first recall some elementary definitions. Note that we are explicitly working with real Hilbert spaces and spectra.

Definition (Resolvent set). Let A : H → H be a bounded linear operator. Then the resolvent set is

    ρ(A) = {λ ∈ ℝ : A − λI is bijective}.

Definition (Spectrum). The spectrum of a bounded linear operator A : H → H is

    σ(A) = ℝ \ ρ(A).

Definition (Point spectrum). We say η ∈ σ(A) belongs to the point spectrum σ_p(A) of A if

    ker(A − ηI) ≠ {0}.

If η ∈ σ_p(A) and w satisfies Aw = ηw, then w is an associated eigenvector.

Our knowledge of the spectrum of L will come from known results about the spectrum of compact operators.

Theorem (Spectral theorem of compact operators). Let dim H = ∞, and K : H → H a compact operator. Then

– σ(K) = σ_p(K) ∪ {0}. Note that 0 may or may not be in σ_p(K).
– σ(K) \ {0} is either finite or is a sequence tending to 0.
– If λ ∈ σ_p(K) \ {0}, then ker(K − λI) is finite-dimensional.
– If K is self-adjoint, i.e. K = K†, and H is separable, then there exists a countable orthonormal basis of eigenvectors.

From this, it follows easily that

Theorem (Spectrum of L).

(i) There exists a countable set Σ ⊆ ℝ such that there is a non-trivial weak solution to Lu = λu iff λ ∈ Σ.
(ii) If Σ is infinite, then Σ = {λ_k}_{k=1}^∞, the values of an increasing sequence with λ_k → ∞.
(iii) To each λ ∈ Σ there is an associated finite-dimensional space

    E(λ) = {u ∈ H¹₀(U) | u is a weak solution of (∗) with f = 0}.

We say λ ∈ Σ is an eigenvalue and u ∈ E(λ) is an associated eigenfunction.

Proof. Apply the spectral theorem to the compact operator L_γ⁻¹ : L²(U) → L²(U), and observe that

    L_γ⁻¹u = λu ⟺ u = λ(L + γ)u ⟺ Lu = ((1 − λγ)/λ) u.

Note that L_γ⁻¹ does not have a zero eigenvalue.

In certain cases, such as Laplace's equation, our operator is "self-adjoint", and more can be said. As before, we want the "formally" quantifier:

Definition (Formally self-adjoint). An operator L is formally self-adjoint if L = L†. Equivalently, if b^i ≡ 0.

Definition (Positive operator). We say L is positive if there exists C > 0 such that

    ‖u‖²_{H¹₀(U)} ≤ C B[u, u]

for all u ∈ H¹₀(U).

Theorem. Suppose L is a formally self-adjoint, positive, uniformly elliptic operator on U, an open bounded set with C¹ boundary. Then we can represent the eigenvalues of L as

    0 < λ₁ ≤ λ₂ ≤ λ₃ ≤ ⋯,

where each eigenvalue appears according to its multiplicity (dim E(λ)), and there exists
an orthonormal basis {w_k}_{k=1}^∞ of L²(U) with w_k ∈ H¹₀(U) an eigenfunction of L with eigenvalue λ_k.

Proof. Note that positivity implies c ≥ 0. So the inverse L⁻¹ : L²(U) → L²(U) exists and is a compact operator. We are done if we can show that L⁻¹ is self-adjoint. This is trivial, since for any f, g, writing u = L⁻¹f and v = L⁻¹g, we have

    (L⁻¹f, g)_{L²(U)} = B[v, u] = B[u, v] = (L⁻¹g, f)_{L²(U)}.

4.4 Elliptic regularity

We can finally turn to the problem of regularity. We previously saw that when solving Lu = f with f ∈ L²(U), by definition of a weak solution we have u ∈ H¹₀(U), so we have gained some regularity by solving the differential equation. However, it is not clear that u ∈ H²(U), so we cannot actually say u solves Lu = f. Even if u ∈ H²(U), it may not be classically differentiable, so Lu = f still isn't holding in the strongest possible sense. So we might hope that under reasonable circumstances, u is in fact twice continuously differentiable. But human desires are unlimited. If f is smooth, we might hope further that u is also smooth. All of these will be true.

Let's think about how regularity may fail. It could be that the individual derivatives of u are quite singular, but in Lu all these singularities happen to cancel with each other. Thus, the content of elliptic regularity is that this doesn't happen.

To see why we should expect this to be true, suppose for convenience that u, f ∈ C_c^∞(ℝⁿ) and −∆u = f. Using integration by parts, we compute

    ∫_{ℝⁿ} f² dx = ∫_{ℝⁿ} (∆u)² dx = ∑_{i,j} ∫_{ℝⁿ} (D_iD_iu)(D_jD_ju) dx = ∑_{i,j} ∫_{ℝⁿ} (D_iD_ju)(D_iD_ju) dx
= ‖D²u‖²_{L²(ℝⁿ)}.

So we have deduced that

    ‖D²u‖_{L²(ℝⁿ)} = ‖∆u‖_{L²(ℝⁿ)}.

This is of course not a very useful result, because we have a priori assumed that u and f are C∞, while what we want to prove is that u is, for example, in H²(U). However, the fact that we can control the H² norm if we assume that u ∈ H²(U) gives us some strong indication that we should be able to show that u must always be in H²(U).

The idea is to run essentially the same argument for weak solutions, without mentioning the word "second derivative". This involves the use of difference quotients.

Definition (Difference quotient). Suppose U ⊆ ℝⁿ is open and V ⋐ U. For 0 < |h| < dist(V, ∂U), we define

    ∆_k^h u(x) = (u(x + he_k) − u(x))/h,  ∆^h u = (∆_1^h u, …, ∆_n^h u).

Observe that if u ∈ L²(U), then ∆^h u ∈ L²(V). If further u ∈ H¹(U), then ∆^h u ∈ H¹(V) and D∆^h u = ∆^h Du.

What makes difference quotients useful is the following lemma:

Lemma. If u ∈ L²(U), then u ∈ H¹(V) iff

    ‖∆^h u‖_{L²(V)} ≤ C

for some C and all 0 < |h| < ½ dist(V, ∂U). In this case, we have

    (1/C̃)‖Du‖_{L²(V)} ≤ ‖∆^h u‖_{L²(V)} ≤ C̃‖Du‖_{L²(V)}.

Proof. See example sheet.

Thus, if we are able to establish the bounds we had for the Laplacian using difference quotients, then this tells us u is in H²_loc(U).

Lemma. If w, v are compactly supported in U, then

    ∫_U w ∆_k^{−h} v dx = −∫_U (∆_k^h w) v dx,
    ∆_k^h(wv) = (τ_k^h w)∆_k^h v + (∆_k^h w)v,

where τ_k^h w(x) = w(x + he_k).

Theorem (Interior regularity). Suppose L is uniformly elliptic on an open set U ⊆ ℝⁿ, and assume a^{ij} ∈ C¹(U), b^i, c ∈ L∞(U) and f ∈ L²(U). Suppose further that u ∈ H¹(U) is such that

    B[u, v] = (f, v)_{L²(U)}    (†)

for all v ∈ H¹₀(U). Then u ∈ H²_loc(U), and for each V ⋐ U, we have

    ‖u‖_{H²(V)} ≤ C(‖f‖_{L²(U)} + ‖u‖_{L²(U)}),

with C depending on L, V, U, but not f or u.

Note that we don't require u ∈ H¹₀(U), so we don't require u to satisfy the boundary conditions. In this case, there may be multiple solutions, so we need the ‖u‖_{L²(U)} term on the right. Also, observe that we don't actually need uniform ellipticity, as the property of being in H²_loc(U) can be checked locally, and L is always locally uniformly elliptic.

The proof is essentially what we did for the Laplacian just now, except this time it is much messier since we need to use difference quotients instead of derivatives, and there are lots of derivatives of the a^{ij}'s that have to be kept track of. When using regularity results, it is often convenient not to think about them in terms of "solving equations", but as something that (roughly) says "if u is such that Lu happens to be in L² (say), then u is in H²_loc(U)".

Proof. We first show that we may in fact assume b^i = c = 0. Indeed, if we know the theorem for such L, then given a general L, we write

    L̃u = −∑ (a^{ij} u_{x_j})_{x_i},  Ru = ∑ b^i u_{x_i} + cu.

Then if u is a weak solution to Lu = f, then it is also a weak solution to L̃u = f − Ru. Noting
that Ru ∈ L²(U), this tells us u ∈ H²_loc(U). Moreover, on V ⋐ U,

– We can control ‖u‖_{H²(V)} by ‖f − Ru‖_{L²(V)} and ‖u‖_{L²(V)} (by the theorem).
– We can control ‖f − Ru‖_{L²(V)} by ‖f‖_{L²(V)}, ‖u‖_{L²(V)} and ‖Du‖_{L²(V)}.
– By Gårding's inequality, we can control ‖Du‖_{L²(V)} by ‖u‖_{L²(V)} and B[u, u] = (f, u)_{L²(V)}.
– By Hölder, we can control (f, u)_{L²(V)} by ‖f‖_{L²(V)} and ‖u‖_{L²(V)}.

So it suffices to consider the case where L only has second derivatives.

Fix V ⋐ U and choose W such that V ⋐ W ⋐ U. Take ζ ∈ C_c^∞(W) such that ζ ≡ 1 on V.

Recall that in our example of Laplace's equation, we considered the integral ∫ f² dx and did some integration by parts. Essentially, what we did was to apply the definition of a weak solution to ∆u. There we were lucky, and we could obtain the result in one go. In general, we should consider the second derivatives one by one. For k ∈ {1, …, n}, we consider the function

    v = −∆_k^{−h}(ζ²∆_k^h u).

As we shall see, this is the correct way to express u_{x_kx_k} in terms of difference quotients (the −h in the first ∆_k^{−h} comes from the fact that we want to integrate by parts). We shall put this into the definition of a weak solution to say B[u, v] = (f, v). The plan is to isolate a ‖∆_k^h Du‖² term on the left and then bound it.

We first compute

    B[u, v] = −∑_{i,j} ∫_U a^{ij} u_{x_i} ∆_k^{−h}(ζ²∆_k^h u)_{x_j} dx
            = ∑_{i,j} ∫_U ∆_k^h(a^{ij} u_{x_i})(ζ²∆_k^h u)_{x_j} dx
            = ∑_{i,j} ∫_U (τ_k^h a^{ij} ∆_k^h u_{x_i} + (∆_k^h a^{ij}) u_{x_i})(ζ²∆_k^h u_{x_j} + 2ζζ_{x_j}∆_k^h u) dx
            ≡ A₁ + A₂,

where

    A₁ = ∑_{i,j} ∫_U ζ²(τ_k^h a^{ij})(∆_k^h u_{x_i})(∆_k^h u_{x_j}) dx,
    A₂ = ∑_{i,j} ∫_U (∆_k^h a^{ij}) u_{x_i} ζ²∆_k^h u_{x_j} + 2ζζ_{x_j}∆_k^h u (τ_k^h a^{ij} ∆_k^h u_{x_i} + (∆_k^h a^{ij}) u_{x_i}) dx.

By uniform ellipticity, we can bound

    A₁ ≥ θ ∫_U ζ²|∆_k^h Du|² dx.

This is the term we want to keep hold of. Note that A₂ looks scary, but every term either involves only "first derivatives" of u, or a product of a second derivative of u with a first derivative. Thus, applying Young's inequality, we can bound |A₂| by a linear combination of |∆_k^h Du|² and |Du|², and we can make the coefficient of |∆_k^h Du|² as small as we wish.

In detail, since a^{ij} ∈ C¹(U) and ζ is supported in W, we can uniformly bound a^{ij}, ∆_k^h a^{ij}, ζ_{x_j}, and we have

    |A₂| ≤ C ∫_W ζ|∆_k^h Du||Du| + ζ|Du||∆_k^h u| + ζ|∆_k^h Du||∆_k^h u| dx.

Now recall that ‖∆_k^h u‖ is bounded by ‖Du‖. So applying Young's inequality, we may bound (for a different C)

    |A₂| ≤ ε ∫_W ζ²|∆_k^h Du|² dx + C ∫_W |Du|² dx.

Thus, taking ε = θ/2, it follows that

    (f, v) = B[u, v] ≥ (θ/2) ∫_U ζ²|∆_k^h Du|² dx − C ∫_W |Du|² dx.

This is promising. It now suffices to bound (f, v) from above. By Young's inequality,

    |(f, v)| ≤ ∫ |f||∆_k^{−h}(ζ²∆_k^h u)| dx
             ≤ C ∫ |f||D(ζ²∆_k^h u)| dx
             ≤ ε ∫ |D(ζ²∆_k^h u)|² dx + C ∫ |f|² dx
             ≤ ε ∫ ζ²|∆_k^h Du|² dx + C(‖f‖²_{L²(U)} + ‖Du‖²_{L²(U)}),

adjusting the constants at each step. Setting ε = θ/4, we get

    ∫_U ζ²|∆_k^h Du|² dx ≤ C(‖f‖²_{L²(W)} + ‖Du‖²_{L²(W)}),

and so, in particular, we get a uniform bound on ‖∆_k^h Du‖_{L²(V)}. Now as before, we can use Gårding to get rid of the ‖Du‖_{L²(W)} dependence on the right.

Notice that this is a local result. In order to have u ∈ H²(V), it is enough for us to have f ∈ L²(W) for some W slightly larger than V. Thus, singularities do not propagate either in from the boundary or from regions where f is not well-behaved.

With elliptic regularity, we can understand weak solutions as genuine solutions to the equation Lu = f. Indeed, if u is a weak solution, then for any v ∈ C_c^∞(U), we have B[u, v] = (f, v), hence after integrating by parts, we recover (Lu − f, v) = 0 for all v ∈ C_c^∞(U). So in fact Lu = f almost everywhere.

It is natural to hope that we can get better than u ∈ H²_loc(U). This is actually not hard given our current work. If Lu = f, and all a^{ij}, b^i, c, f are sufficiently well-behaved, then we can simply differentiate the whole equation with respect to x_i, and then observe that u_{x_i} satisfies some second-order elliptic PDE of the form previously understood, and if we do this for all i, then we can conclude
that u ∈ H³_loc(U). Of course, some bookkeeping has to be done if we were to do this properly, since we need to write everything in weak form. However, this is not particularly hard, and the details are left as an exercise.

Theorem (Elliptic regularity). If a^{ij}, b^i and c are C^{m+1}(U) for some m ∈ ℕ, and f ∈ H^m(U), then u ∈ H^{m+2}_loc(U), and for V ⋐ W ⋐ U, we can estimate

    ‖u‖_{H^{m+2}(V)} ≤ C(‖f‖_{H^m(W)} + ‖u‖_{L²(W)}).

In particular, if m is large enough, then u ∈ C²_loc(U), and if all a^{ij}, b^i, c, f are smooth, then u is also smooth.

We can similarly obtain a Hölder theory of elliptic regularity, which gives (roughly) that f ∈ C^{k,α}(U) implies u ∈ C^{k+2,α}(U).

The final loose end is to figure out what happens at the boundary.

Theorem (Boundary H² regularity). Assume a^{ij} ∈ C¹(Ū), b^i, c ∈ L∞(U), and f ∈ L²(U). Suppose u ∈ H¹₀(U) is a weak solution of

    Lu = f,  u|_{∂U} = 0.

Finally, we assume that ∂U is C². Then

    ‖u‖_{H²(U)} ≤ C(‖f‖_{L²(U)} + ‖u‖_{L²(U)}).

If u is the unique weak solution, we can drop the ‖u‖_{L²(U)} from the right-hand side.

Proof. Note that we already know that u is locally in H²_loc(U). So we only have to show that the second derivatives are well-behaved near the boundary.

By a partition of unity and a change of coordinates, we may assume we are in the case U = B₁(0) ∩ {x_n > 0}. Let V = B_{1/2}(0) ∩ {x_n > 0}. Choose ζ ∈ C_c^∞(B₁(0)) with ζ ≡ 1 on V and 0 ≤ ζ ≤ 1.

Most of the proof of the previous theorem goes through, as long as we restrict to

    v = −∆_k^{−h}(ζ²∆_k^h u)

with k ≠ n, since then all the translations keep us within U, and hence are well-defined. Thus, we control all second derivatives of the form D_kD_iu, where k ∈ {1, …, n − 1} and i ∈ {1, …, n}.

The only remaining second derivative to control is D_nD_nu. To understand this, we go back and look at the PDE itself. Recall that we know it holds pointwise almost everywhere, so

    −∑_{i,j=1}^n (a^{ij} u_{x_i})_{x_j} + ∑_{i=1}^n b^i u_{x_i} + cu = f.

So we can write

    a^{nn} u_{x_n x_n} = F

almost everywhere, where F depends on a, b, c, f and all (up to) second derivatives of u other than u_{x_nx_n}. Thus, F is controlled in L². But uniform ellipticity implies a^{nn} is bounded away from 0. So we are done.

Similarly, we can iterate this to obtain higher regularity results.

5 Hyperbolic equations

So far, we have been looking at elliptic PDEs. Since the operator is elliptic, there is no preferred "time direction". For example, Laplace's equation models static electric fields. Thus, it is natural to consider boundary value problems in these cases. Hyperbolic equations single out a time direction, and these model quantities that evolve in time. In this case, we are often interested in initial value problems instead. Let's first define what it means for an equation to be hyperbolic.

Definition (Hyperbolic PDE). A second-order linear hyperbolic PDE is a PDE of the form

    ∑_{i,j=1}^{n+1} (a^{ij}(y) u_{y_j})_{y_i} + ∑_{i=1}^{n+1} b^i(y) u_{y_i} + c(y) u = f

with y ∈ ℝ^{n+1}, a^{ij} = a^{ji}, b^i, c ∈ C∞(ℝ^{n+1}), such that the principal symbol

    Q(ξ) = ∑_{i,j=1}^{n+1} a^{ij}(y) ξ_i ξ_j

has signature (+, −, −, …) for all y. That is to say, after perhaps changing basis, at each point we can write

    Q(ξ) = λ²_{n+1} ξ²_{n+1} − ∑_{i=1}^n λ_i² ξ_i²

with λ_i > 0.

It turns out not to be too helpful to treat this equation at this generality. We would like to pick out a direction that corresponds to the positive eigenvalue. By a coordinate transformation, we can locally put our equation in the form

    u_{tt} = ∑_{i,j=1}^n (a^{ij}(x, t) u_{x_i})_{x_j} + ∑_{i=1}^n b^i(x, t) u_{x_i} + c(x, t) u.

Note that we did not write down a u_t term. It doesn't make much difference, and it is notationally convenient to leave it out. In this form, hyperbolicity is equivalent to the statement that the operator on the right (or rather, its negative) is elliptic for each t.

We observe that t = 0 is a non-characteristic surface. So we can hope to solve the Cauchy problem. In other words, we shall specify u|_{t=0} and u_t|_{t=0}. Actually, we'll look at an initial boundary value problem. Consider a region of the form ℝ × U, where U ⊆ ℝⁿ is open bounded with C¹ boundary.

[Figure: the spacetime cylinder over U, with the slices t = 0 and t = T marked.]

We define

    U_t = (0, t) × U,  Σ_t = {t} × U,  ∂*U_t = [0, t] × ∂U.

Then

    ∂U_T = Σ₀ ∪ Σ_T ∪ ∂*U_T.

The general initial boundary value problem (IBVP) is as follows: let L be a (time-dependent) uniformly elliptic operator. We want to solve

    u_{tt} + Lu = f on U_T,
    u = ψ on Σ₀,
    u_t = ψ′ on Σ₀,
    u = 0 on ∂*U_T.

In the case of elliptic PDEs, we saw that Laplace's equation was a canonical, motivating example. In this case,
if we take L = −∆, then we obtain the wave equation. Let's see what we can do with it.

Example. Start with the equation u_{tt} − ∆u = 0. Multiply by u_t and integrate over U_t to obtain

  0 = ∫_{U_t} (u_{tt}u_t − u_t∆u) dx dt
    = ∫_{U_t} ( ∂_t(½u_t²) − ∇·(u_tDu) + Du_t·Du ) dx dt
    = ∫_{U_t} ( ∂_t(½(u_t² + |Du|²)) − ∇·(u_tDu) ) dx dt
    = ½∫_{Σ_t} (u_t² + |Du|²) dx − ½∫_{Σ₀} (u_t² + |Du|²) dx − ∫_{∂*U_t} u_t (∂u/∂ν) dS.

But u vanishing on ∂*U_T implies u_t vanishes there as well. So the boundary term vanishes, and we obtain

  ∫_{Σ_t} (u_t² + |Du|²) dx = ∫_{Σ₀} (u_t² + |Du|²) dx.

This is conservation of energy! Thus, if a solution exists, we control ‖u‖_{H¹(Σ_t)} in terms of ‖ψ‖_{H¹(Σ₀)} and ‖ψ′‖_{L²(Σ₀)}. We also see that the solution is uniquely determined by ψ and ψ′, since if ψ = ψ′ = 0, then u_t = Du = 0, and u is zero at the boundary.

Estimates like this, which control a solution without needing to construct it, are known as a priori estimates. These are often crucial to establishing the existence of solutions (cf. Gårding).

We shall first find a weak formulation of this problem that only requires u ∈ H¹(U_T). Note that when we do so, we have to understand carefully what we mean by u_t = ψ′. We shall see how we deal with that in the derivation of the weak formulation.

Assume that u ∈ C²(Ū_T) is a classical solution. Multiply the equation by a v ∈ C²(Ū_T) which satisfies v = 0 on ∂*U_T ∪ Σ_T. Then we have

  ∫_{U_T} fv dx dt = ∫_{U_T} (u_{tt}v + (Lu)v) dx dt
    = ∫_{U_T} (−u_tv_t + a^{ij}u_{x_i}v_{x_j} + b^iu_{x_i}v + cuv) dx dt + [∫_U u_tv dx]₀^T − ∫₀^T∫_{∂U} a^{ij}u_{x_j}ν_i v dS dt.

Using the boundary conditions, we find that

  ∫_{U_T} fv dx dt = ∫_{U_T} (−u_tv_t + a^{ij}u_{x_i}v_{x_j} + b^iu_{x_i}v + cuv) dx dt − ∫_{Σ₀} ψ′v dx.  (†)

Conversely, suppose u ∈ C²(Ū_T) satisfies (†) for all such v, and u|_{Σ₀} = ψ and u|_{∂*U_T} = 0. Then by first testing on v ∈ C^∞_c(U_T), reversing the integration by parts tells us

  0 = ∫_{U_T} (u_{tt} + Lu − f)v dx dt,

since there is no boundary term. Hence we get u_{tt} + Lu = f on U_T. To check the boundary conditions, if v ∈ C^∞(Ū_T) vanishes on ∂*U_T ∪ Σ_T, then again reversing the integration by parts shows that

  ∫_{U_T} (u_{tt} + Lu − f)v dx dt = ∫_{Σ₀} (ψ′ − u_t)v dx.

Since we know that the left-hand side vanishes, it follows that u_t = ψ′ on Σ₀. So we see that our weak formulation can encapsulate the boundary conditions on Σ₀.

Definition (Weak solution). Suppose f ∈ L²(U_T), ψ ∈ H¹₀(Σ₀) and ψ′ ∈ L²(Σ₀). We say u ∈ H¹(U_T) is a weak solution of the hyperbolic PDE if

(i) u|_{Σ₀} = ψ in the trace sense;
(ii) u|_{∂*U_T} = 0 in the trace sense; and
(iii) (†) holds for all v ∈ H¹(U_T) with v = 0 on ∂*U_T ∪ Σ_T in the trace sense.

Theorem (Uniqueness of weak solutions). A weak solution, if it exists, is unique.

Proof. It suffices to consider the case f = ψ = ψ′ = 0, and to show that any solution must be zero. Let

  v(x, t) = ∫_t^T e^{−λs}u(x, s) ds,

where λ is a real number we will pick later. The point of introducing this e^{−λt} is that in general, we do not expect conservation of energy: there could be some exponential growth in the energy, and we want to suppress it.

This function belongs to H¹(U_T), satisfies v = 0 on Σ_T ∪ ∂*U_T, and v_t = −e^{−λt}u. Using the fact that u is a weak solution, we have

  ∫_{U_T} ( u_tue^{−λt} − a^{ij}v_{tx_j}v_{x_i}e^{λt} + Σ_i b^iu_{x_i}v + (c − 1)uv − vv_te^{λt} ) dx dt = 0.

Integrating by parts, we can write this as A = B, where

  A = ∫_{U_T} ( d/dt[ ½u²e^{−λt} − ½a^{ij}v_{x_i}v_{x_j}e^{λt} − ½v²e^{λt} ] + (λ/2)[ u²e^{−λt} + a^{ij}v_{x_i}v_{x_j}e^{λt} + v²e^{λt} ] ) dx dt,
  B = ∫_{U_T} ( −½ȧ^{ij}v_{x_i}v_{x_j}e^{λt} + (b^i)_{x_i}uv + b^iv_{x_i}u − (c − 1)uv ) dx dt.

Here A is the nice bit, which we can control, and B is the junk bit, which we will show can be absorbed.

Integrating the time derivative in A, and using v = 0 on Σ_T and u = 0 on Σ₀, we have

  A = e^{−λT}∫_{Σ_T} ½u² dx + ∫_{Σ₀} ½(a^{ij}v_{x_i}v_{x_j} + v²) dx + (λ/2)∫_{U_T} (u²e^{−λt} + a^{ij}v_{x_i}v_{x_j}e^{λt} + v²e^{λt}) dx dt.

Using the uniform ellipticity condition (and the observation that the first two terms are non-negative), we can bound

  A ≥ (λ/2)∫_{U_T} (u²e^{−λt} + θ|Dv|²e^{λt} + v²e^{λt}) dx dt.

Doing some integration by parts, we can also bound

  B ≤ (c/2)∫_{U_T} (u²e^{−λt} + θ|Dv|²e^{λt} + v²e^{λt}) dx dt,

where the constant c does not depend on λ. Taking these together, we have

  ((λ − c)/2)∫_{U_T} (u²e^{−λt} + θ|Dv|²e^{λt} + v²e^{λt}) dx dt ≤ 0.

Taking λ > c, this tells us the
integral must vanish. In particular, the integral of u²e^{−λt} vanishes, and so u = 0.

We now want to prove the existence of weak solutions. While we didn't need to assume much regularity in the uniqueness result, since we are going to subtract the boundary conditions off anyway, we expect that we need more regularity to prove existence.

Theorem (Existence of weak solutions). Given ψ ∈ H¹₀(U), ψ′ ∈ L²(U) and f ∈ L²(U_T), there exists a (unique) weak solution, with

  ‖u‖_{H¹(U_T)} ≤ C(‖ψ‖_{H¹(U)} + ‖ψ′‖_{L²(U)} + ‖f‖_{L²(U_T)}).  (‡)

Proof. We use Galerkin's method. The way we wrote our equations suggests we should think of our hyperbolic PDE as a second-order ODE taking values in the infinite-dimensional space H¹₀(U). To apply the ODE theorems we know, we project our equation onto a finite-dimensional subspace, and then take the limit.

First note that by density arguments, we may assume ψ, ψ′ ∈ C^∞_c(U) and f ∈ C^∞_c(U_T), as long as we prove the estimate (‡). So let us do so.

Let {ϕ_k}_{k=1}^∞ be an orthonormal basis for L²(U), with ϕ_k ∈ H¹₀(U). For example, we can take the ϕ_k to be eigenfunctions of −∆ with Dirichlet boundary conditions. We shall consider "solutions" of the form

  u^N(x, t) = Σ_{k=1}^N u_k(t)ϕ_k(x).

We want this to be a solution after projecting onto the subspace spanned by ϕ₁, ..., ϕ_N. Thus, we want

  (u_{tt} + Lu − f, ϕ_k)_{L²(Σ_t)} = 0

for all k = 1, ..., N. After some integration by parts, we see that we want

  (ü^N, ϕ_k)_{L²(U)} + ∫_{Σ_t} (a^{ij}u^N_{x_i}(ϕ_k)_{x_j} + b^iu^N_{x_i}ϕ_k + cu^Nϕ_k) dx = (f, ϕ_k)_{L²(U)}.  (∗)

We also require

  u_k(0) = (ψ, ϕ_k)_{L²(U)},  u̇_k(0) = (ψ′, ϕ_k)_{L²(U)}.

Notice that if we have a genuine solution u that can be written as a finite sum of the ϕ_k(x), then these must be satisfied.

This is a system of ODEs for the functions u_k(t), and the right-hand side is uniformly C¹ in t and linear in the u_k's. By Picard–Lindelöf, a solution exists for t ∈ [0, T].

So for each N, we have an approximate solution that solves the equation when projected onto ϕ₁, ..., ϕ_N. What we need to do is to extract from this a genuine weak solution. To do so, we need some estimates showing that the functions u^N converge.

We multiply (∗) by e^{−λt}u̇_k(t), sum over k = 1, ..., N, and integrate from 0 to τ ∈ (0, T), ending up with

  ∫₀^τ dt ∫_U dx (ü^Nu̇^N + a^{ij}u^N_{x_i}u̇^N_{x_j} + b^iu^N_{x_i}u̇^N + cu^Nu̇^N)e^{−λt} = ∫₀^τ dt ∫_U dx fu̇^Ne^{−λt}.

As before, we can rearrange this to get A = B, where

  A = ∫_{U_τ} dt dx ( d/dt[ (½(u̇^N)² + ½a^{ij}u^N_{x_i}u^N_{x_j} + ½(u^N)²)e^{−λt} ] + (λ/2)( (u̇^N)² + a^{ij}u^N_{x_i}u^N_{x_j} + (u^N)² )e^{−λt} )

and

  B = ∫_{U_τ} dt dx ( ½ȧ^{ij}u^N_{x_i}u^N_{x_j} − b^iu^N_{x_i}u̇^N + (1 − c)u^Nu̇^N + fu̇^N )e^{−λt}.

Integrating in time, and estimating as before, for λ sufficiently large we get

  ½∫_{Σ_τ} ((u̇^N)² + |Du^N|²) dx + ∫_{U_τ} ((u̇^N)² + |Du^N|² + (u^N)²) dx dt ≤ C(‖ψ‖²_{H¹(U)} + ‖ψ′‖²_{L²(U)} + ‖f‖²_{L²(U_T)}).

This in particular tells us that u^N is bounded in H¹(U_T).

Since u^N(0) = Σ_{k=1}^N (ψ, ϕ_k)ϕ_k, we know u^N(0) tends to ψ in H¹(U). So for N large enough, we have

  ‖u^N‖_{H¹(Σ₀)} ≤ 2‖ψ‖_{H¹(U)}.

Similarly, ‖u̇^N‖_{L²(Σ₀)} ≤ 2‖ψ′‖_{L²(U)}. Thus, we can extract a weakly convergent subsequence u^{N_m} ⇀ u in H¹(U_T) for some u ∈ H¹(U_T) such that

  ‖u‖_{H¹(U_T)} ≤ C(‖ψ‖_{H¹(U)} + ‖ψ′‖_{L²(U)} + ‖f‖_{L²(U_T)}).

For convenience, we may relabel the sequence so that in fact u^N ⇀ u.

To check that u is a weak solution, suppose v = Σ_{k=1}^M v_k(t)ϕ_k for some v_k ∈ H¹((0, T)) with v_k(T) = 0. By definition of u^N, we have

  (ü^N, v)_{L²(U)} + ∫_{Σ_t} (a^{ij}u^N_{x_i}v_{x_j} + b^iu^N_{x_i}v + cu^Nv) dx = (f, v)_{L²(U)}.

Integrating ∫₀^T dt and using v(T) = 0, we have

  ∫_{U_T} (−u^N_tv_t + a^{ij}u^N_{x_i}v_{x_j} + b^iu^N_{x_i}v + cu^Nv) dx dt − ∫_{Σ₀} u^N_tv dx = ∫_{U_T} fv dx dt.

Now note that if N > M, then ∫_{Σ₀} u^N_tv dx = ∫_{Σ₀} ψ′v dx. Passing to the weak limit, we have

  ∫_{U_T} (−u_tv_t + a^{ij}u_{x_i}v_{x_j} + b^iu_{x_i}v + cuv) dx dt − ∫_{Σ₀} ψ′v dx = ∫_{U_T} fv dx dt.

So u satisfies the identity required of a weak solution for such v. Now for k = 1, ..., M, the map w ∈ H¹(U_T) ↦ ∫_{Σ₀} wϕ_k dx is a bounded linear map, since the trace is bounded in L². So we conclude that

  ∫_{Σ₀} uϕ_k dx = lim_{N→∞} ∫_{Σ₀} u^Nϕ_k dx = (ψ, ϕ_k)_{L²(U)}.

Since this is true for all ϕ_k, it follows that u|_{Σ₀} = ψ. Finally, the v of the form considered are dense in {v ∈ H¹(U_T) : v = 0 on ∂*U_T ∪ Σ_T}. So we are done.

In fact, we have

  ess sup_{t∈(0,T)} (‖u̇‖_{L²(Σ_t)} + ‖u‖_{H¹(Σ_t)}) ≤ C · (data).

So we can say u ∈ L^∞((0, T); H¹(U)) and u̇ ∈ L^∞((0, T); L²(U)).

We would like to improve the regularity of the solution. To motivate how we are going to do that, let's go back to the wave equation for a bit. Suppose that in fact u ∈ C^∞(U_T) is a smooth solution of the wave equation with initial data (ψ, ψ′). We want a quantitative estimate of u in H²(Σ_t). The idea is to differentiate the equation with respect to t. Writing w = u_t, we get

  w_{tt} − ∆w = 0,  w|_{Σ₀} = ψ′,  w_t|_{Σ₀} = ∆ψ,  w|_{∂*U_T} = 0.

By the energy estimate we have for the wave equation, we get

  ‖w_t‖_{L²(Σ_t)} + ‖w‖_{H¹(Σ_t)} ≤ C(‖ψ′‖_{H¹(U)} + ‖∆ψ‖_{L²(U)}) ≤ C(‖ψ′‖_{H¹(U)} + ‖ψ‖_{H²(U)}).

So we now have control of u_{tt} and u_{tx_i} in L²(Σ_t). But once
we know that u_{tt} is controlled in L², we can use the elliptic estimate to gain control of the second-order spatial derivatives of u:

  ‖u‖_{H²(Σ_t)} ≤ C‖∆u‖_{L²(Σ_t)} = C‖u_{tt}‖_{L²(Σ_t)}.

So we control all second derivatives of u in terms of the data.

Theorem. If a^{ij}, b^i, c ∈ C²(Ū_T) and ∂U is C², then for ψ ∈ H²(U), ψ′ ∈ H¹₀(U), and f, f_t ∈ L²(U_T), we have

  u ∈ H²(U_T) ∩ L^∞((0, T); H²(U)),
  u_t ∈ L^∞((0, T); H¹₀(U)),
  u_{tt} ∈ L^∞((0, T); L²(U)).

Proof. We return to the Galerkin approximation. Now by assumption, we have a linear system with C² coefficients, so u_k ∈ C³((0, T)). Differentiating with respect to t (assuming, as we may, that f, f_t ∈ C⁰(Ū_T)), we have

  (∂³_tu^N, ϕ_k)_{L²(U)} + ∫_{Σ_t} (a^{ij}u̇^N_{x_i}(ϕ_k)_{x_j} + b^iu̇^N_{x_i}ϕ_k + cu̇^Nϕ_k) dx
    = (ḟ, ϕ_k)_{L²(U)} − ∫_{Σ_t} (ȧ^{ij}u^N_{x_i}(ϕ_k)_{x_j} + ḃ^iu^N_{x_i}ϕ_k + ċu^Nϕ_k) dx.

Multiplying by ü_ke^{−λt}, summing over k = 1, ..., N, integrating ∫₀^τ dt, and recalling that we already control u^N in H¹(U_T), we get

  sup_{t∈(0,T)} (‖u^N_t‖_{H¹(Σ_t)} + ‖u^N_{tt}‖_{L²(Σ_t)}) + ‖u^N_t‖_{H¹(U_T)}
    ≤ C(‖u^N_t‖_{H¹(Σ₀)} + ‖u^N_{tt}‖_{L²(Σ₀)} + ‖ψ′‖_{H¹(Σ₀)} + ‖ψ‖_{H²(Σ₀)} + ‖f‖_{L²(U_T)} + ‖f_t‖_{L²(U_T)}).

We know

  u^N_t|_{t=0} = Σ_{k=1}^N (ψ′, ϕ_k)_{L²(U)}ϕ_k.

Since the ϕ_k are a basis for H¹₀(U), we have ‖u^N_t‖_{H¹(Σ₀)} ≤ ‖ψ′‖_{H¹(Σ₀)}.

To control u^N_{tt}, let us assume for convenience that the ϕ_k are in fact the eigenfunctions of −∆. From the fact that

  (ü^N, ϕ_k)_{L²(U)} + ∫_{Σ_t} (a^{ij}u^N_{x_i}(ϕ_k)_{x_j} + b^iu^N_{x_i}ϕ_k + cu^Nϕ_k) dx = (f, ϕ_k)_{L²(U)},

we can integrate the first term in the integral by parts, multiply by ü_k, and sum to get

  ‖u^N_{tt}‖_{L²(Σ₀)} ≤ C(‖u^N‖_{H²(Σ₀)} + ‖f‖_{L²(U_T)} + ‖f_t‖_{L²(U_T)}).

We need to control ‖u^N‖_{H²(Σ₀)} by ‖ψ‖_{H²(Σ₀)}. Using that ∆ϕ_k|_{∂U} = 0 and that u^N is a finite sum of these ϕ_k's,

  (∆u^N, ∆u^N)_{L²(Σ₀)} = (u^N, ∆²u^N)_{L²(Σ₀)} = (ψ, ∆²u^N)_{L²(Σ₀)} = (∆ψ, ∆u^N)_{L²(Σ₀)}.

So

  ‖u^N‖_{H²(Σ₀)} ≤ C‖∆u^N‖_{L²(Σ₀)} ≤ C‖ψ‖_{H²(U)}.

Passing to the weak limit, we conclude that

  u_t ∈ H¹(U_T),  u_t ∈ L^∞((0, T); H¹₀(U)),  u_{tt} ∈ L^∞((0, T); L²(U)).

Since u_{tt} + Lu = f, an elliptic estimate at (almost) every constant t gives u ∈ L^∞((0, T); H²(U)).

We can now understand the equation as holding pointwise almost everywhere, by undoing the integration by parts that gave us the definition of the weak solution. The initial conditions can also be understood in a trace sense.

Returning to the case ψ ∈ H¹₀(U) and ψ′ ∈ L²(U), by approximating in H²(U) ∩ H¹₀(U) and H¹₀(U) respectively, we can show that a weak solution can be constructed as a strong limit in H¹(U_T). This implies the energy identity, so that in fact weak solutions satisfy

  u ∈ C⁰((0, T); H¹₀(U)),  u_t ∈ C⁰((0, T); L²(U)).

This requires slightly stronger regularity assumptions on a^{ij}, b^i and c. Such solutions are said to be in the energy class.

Finally, note that we can iterate the argument to get higher regularity.

Theorem. If a^{ij}, b^i, c ∈ C^{k+1}(Ū_T) and ∂U is C^{k+1}, and

  ∂^i_tu|_{Σ₀} ∈ H¹₀(U) for i = 0, ..., k,
  ∂^{k+1}_tu|_{Σ₀} ∈ L²(U),
  ∂^i_tf ∈ L²((0, T); H^{k−i}(U)) for i = 0, ..., k,

then u ∈ H^{k+1}(U_T) and ∂^i_tu ∈ L^∞((0, T); H^{k+1−i}(U)) for i = 0, ..., k + 1. In particular, if everything is smooth, then we get a smooth solution.

The first two conditions should be understood as conditions on ψ and ψ′, using the fact that the equation allows us to express higher time derivatives of u in terms of lower time derivatives and spatial derivatives. One can check that these conditions imply ψ ∈ H^{k+1}(U) and ψ′ ∈ H^k(U), but the conditions we wrote down also encode some compatibility conditions: we know u ought to vanish at the boundary, hence all its time derivatives should as well.

Those were the standard existence and regularity theorems for hyperbolic PDEs. However, there are more things to say about hyperbolic equations. The "physicist's version" of the wave equation involves a constant c, and says

  ü − c²∆u = 0.

This constant c is the speed of propagation, and it tells us that in the wave equation, information propagates at a speed of at most c. We can see this very concretely in the 1-dimensional wave equation, where d'Alembert wrote down the explicit solution

  u(x, t) = ½(ψ(x − ct) + ψ(x + ct)) + (1/2c)∫_{x−ct}^{x+ct} ψ′(y) dy.

Thus, we see that the value of u at any point (x, t) is completely determined by the values of ψ and ψ′ in the interval [x − ct, x + ct].

[figure: the point (x, t) and its interval of dependence [x − ct, x + ct] on the line t = 0]

This is true for a general hyperbolic PDE. In this case, the speed of propagation should be measured by the principal symbol Q(ξ) = a^{ij}(y)ξ_iξ_j. The correct way to formulate this result is as follows. Let S₀ ⊆ U be an open set with (say) smooth boundary, let τ : S₀ → [0, T] be a smooth function vanishing on ∂S₀, and define

  D = {(t, x) ∈ U_T : x ∈ S₀, 0 < t < τ(x)},  S = {(τ(x), x) : x ∈ S₀}.

We say S is spacelike if

  Σ_{i,j=1}^n a^{ij}τ_{x_i}τ_{x_j} < 1 for all x ∈ S₀.

Theorem. If u is a weak solution of the initial boundary value problem above, and S is spacelike, then u|_D depends only on ψ|_{S₀}, ψ′|_{S₀} and f|_D.

The proof is rather similar to the proof of uniqueness of solutions.

Proof. Returning to the definition of a weak solution, we have

  ∫_{U_T} (−u_tv_t + Σ_{i,j=1}^n a^{ij}u_{x_j}v_{x_i} + Σ_{i=1}^n b^iu_{x_i}v + cuv) dx dt − ∫_{Σ₀} ψ′v dx = ∫_{U_T} fv dx dt.

By linearity it suffices
to show that u|_D = 0 whenever ψ|_{S₀} = ψ′|_{S₀} = 0 and f|_D = 0. We take as test function

  v(t, x) = ∫_t^{τ(x)} e^{−λs}u(s, x) ds for (t, x) ∈ D, and v(t, x) = 0 for (t, x) ∉ D.

One checks that this is in H¹(U_T), and v = 0 on Σ_T ∪ ∂*U_T, with

  v_{x_i} = τ_{x_i}e^{−λτ}u(x, τ) + ∫_t^{τ(x)} e^{−λs}u_{x_i}(x, s) ds,  v_t = −e^{−λt}u(x, t).

Plugging these into the definition of a weak solution, we argue as in the previous uniqueness proof. Then

  ∫_D ( d/dt[ ½u²e^{−λt} − ½a^{ij}v_{x_i}v_{x_j}e^{λt} − ½v²e^{λt} ] + (λ/2)[ u²e^{−λt} + a^{ij}v_{x_i}v_{x_j}e^{λt} + v²e^{λt} ] ) dx dt
    = ∫_D ( ½ȧ^{ij}v_{x_i}v_{x_j}e^{λt} − b^iv_{x_i}u − (c − 1)uv ) dx dt.

Noting that ∫_D dx dt = ∫_{S₀} dx ∫₀^{τ(x)} dt, we can perform the t-integral of the d/dt term; the contribution from S is

  I_S = ∫_{S₀} ( ½u²(τ(x), x)e^{−λτ(x)} − ½Σ_{i,j} a^{ij}τ_{x_i}τ_{x_j}u²e^{−λτ} ) dx,

where we have used v = 0 on S and v_{x_i} = τ_{x_i}ue^{−λτ} there. Using the definition of a spacelike surface, we have I_S ≥ 0. The rest of the argument of the uniqueness proof goes through to conclude that u = 0 on D.

This implies that no signal can travel faster than a certain speed. In particular, if

  Σ_{i,j} a^{ij}ξ_iξ_j ≤ µ|ξ|²

for some µ, then no signal can travel faster than √µ. This allows us to solve hyperbolic equations on unbounded domains by restricting to bounded domains.

64 Index Index C k(U ), 20 C k( ¯U ), 20 C
0,γ( ¯U ), 20 C k,δ boundary, 26 C k,γ( ¯U ), 20 H k(U ), 22 H k 0 (U ), 22 Lp space, 21 Lp(U ), 21 W k,p(U ), 22 W k,p (U ), 22 0 a priori estimates, 56 analytic real, 10 associated eigenvector, 48 autonomous ODE, 9 boundary value problem, 37 Cauchy problem, 9 Cauchy–Kovalevskaya theorem for ODEs, 10 for PDEs, 13 characteristic surface, 17 classical solution, 5 compact operator, 41 compactly contained, 21 difference quotient, 50 diffusion equation, 5 divergence form, 36 eigenfunction, 49 eigenvalue, 49 Einstein’s equations, 6 elliptic operator, 18 elliptic regularity interior, 51, 54 up to boundary, 54 energy class, 62 energy estimate, 39 existence theorem III Analysis of PDEs fully non-linear PDE, 8 G˚arding’s inequality, 40 Gagliardo–Nirenberg–Sobolev inequality, 32 Galerkin’s method, 59 global approximation, 25 H¨older continuity, 20 heat equation, 5 homogeneous problem, 41, 46 hyperbolic operator, 18, 55 hyperbolic PDE, 55 hypersurface real analytic, 16 IBVP, 56 initial boundary value problem, 56 interior regularity, 51, 54 Laplace’s equation, 5 Lax–Milgram theorem, 37 linear PDE, 7 majorant, 11 majorize, 11 Maxwell’s equation, 6 method of characteristics, 5 minimal surface equation, 6 mollification, 24 mollifier standard, 23 Morrey’s inequality, 34 multi-index notation, 7 non-characteristic surface, 17 ODE autonomous, 9 Cauchy–Kovalevskaya theorem, 10 order of PDE, 5 hyperbolic equation, 59 partial differential equation, 5 formal adjoint, 46 formally self-adjoint operator, 49 Fredholm alternative, 41, 46 system, 5 PDE, 5 Cauchy–Kovalevskaya theorem, 13 65 Index III Analysis of PDEs fully non-linear, 8 linear, 7 order, 5 quasi-linear, 7 semi-linear, 7 system, 5 Picard
–Lindelof theorem, 9 Poincar´e’s inequality, 40 point spectrum, 48 Poisson’s equation, 5 positive operator, 49 principal symbol, 55 quasi-linear PDE, 7 real analytic, 10 real analytic hypersurface, 16 Rellich–Kondrachov theorem, 44 resolvent set, 48 Ricci flow, 6 Schr¨odinger’s equation, 6 Schwartz notation, 7 self-adjoint operator formal, 49 semi-linear PDE, 7 Sobolev space, 22 spacelike, 64 spectral theorem compact operators, 48 spectrum, 48 standard mollifier, 23 system of PDEs, 5 trace, 29 trace theorem, 29 transport equation, 5 uniform ellipticity, 36 uniqueness theorem hyperbolic equation, 57 wave equation, 6, 56 weak compactness theorem, 42 weak convergence, 42 weak derivative, 21 weak formulation, 9 weak solution elliptic PDE, 37 hyperbolic equation, 57 well-posed problem, 4 66 Homotopy Equivalence Earlier in this chapter the main tool we used for constructing homotopy equiva- lences was the fact that a mapping cylinder deformation retracts onto its ‘target’ end. By repeated application of this fact one can often produce homotopy equivalences be- tween rather different-looking spaces. However, this process can be a bit cumbersome in practice, so it is useful to have other techniques available as well. We will describe two commonly used methods here. The first involves collapsing certain subspaces to points, and the second involves varying the way in which the parts of a space are put together. Two Criteria for Homotopy Equivalence Chapter 0 11 Collapsing Subspaces The operation of collapsing a subspace to a point usually has a drastic effect on homotopy type, but one might hope that if the subspace being collapsed already has the homotopy type of a point, then collapsing it to a point might not change the homotopy type of the whole space. Here is a positive result in this direction: If (X, A) is a CW pair consisting of a CW complex X and a contractible subcomplex A, then the quotient map X→X/A is a homotopy equivalence. 
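Before turning to the proof and the examples, the 1-dimensional case of this criterion is easy to experiment with: for a finite graph, collapsing an edge with distinct endpoints (a contractible subcomplex) should not change the homotopy type, and indeed it leaves the Euler characteristic V − E unchanged. The sketch below is only an illustration under our own conventions — the graph representation and helper names (`euler_characteristic`, `collapse_edge`) are invented for this snippet, not taken from the text.

```python
# Toy check: collapsing an edge with distinct endpoints in a finite graph
# preserves the Euler characteristic chi = V - E.  Edges are labelled
# triples (label, end0, end1); loops (end0 == end1) are allowed.

def euler_characteristic(vertices, edges):
    """chi = V - E for a finite graph."""
    return len(vertices) - len(edges)

def collapse_edge(vertices, edges, label):
    """Collapse the non-loop edge with the given label to a point."""
    (a, b), = [(u, w) for (l, u, w) in edges if l == label]
    assert a != b, "only an edge with distinct endpoints is contractible"
    new_vertices = [v for v in vertices if v != b]
    new_edges = [(l, a if u == b else u, a if w == b else w)
                 for (l, u, w) in edges if l != label]
    return new_vertices, new_edges

# A "theta" graph: two vertices joined by three parallel edges.
V = ["x", "y"]
E = [("e1", "x", "y"), ("e2", "x", "y"), ("e3", "x", "y")]

# Collapsing e1 yields a wedge of two circles (one vertex, two loops).
V2, E2 = collapse_edge(V, E, "e1")

chi_before = euler_characteristic(V, E)    # 2 - 3 = -1
chi_after = euler_characteristic(V2, E2)   # 1 - 2 = -1

# A wedge of m circles has one vertex and m loop edges, so chi = 1 - m.
m = 5
chi_wedge = euler_characteristic(["p"], [(f"c{i}", "p", "p") for i in range(m)])

print(chi_before, chi_after, chi_wedge)    # -1 -1 -4
```

Since the collapse removes exactly one vertex and one edge, V − E is invariant, which is one concrete way to see that the simplification of graphs described in the examples that follow cannot change this quantity.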
A proof will be given later in Proposition 0.17, but for now let us look
at some examples showing how this result can be applied.

Example 0.7: Graphs. The three graphs shown in the figure are homotopy equivalent, since each is a deformation retract of a disk with two holes; but we can also deduce this from the collapsing criterion above, since collapsing the middle edge of the first and third graphs produces the second graph.

More generally, suppose X is any graph with finitely many vertices and edges. If the two endpoints of any edge of X are distinct, we can collapse this edge to a point, producing a homotopy equivalent graph with one fewer edge. This simplification can be repeated until all edges of X are loops, and then each component of X is either an isolated vertex or a wedge sum of circles.

This raises the question of whether two such graphs, having only one vertex in each component, can be homotopy equivalent if they are not in fact just isomorphic graphs. Exercise 12 at the end of the chapter reduces the question to the case of connected graphs. Then the task is to prove that a wedge sum ⋁_m S¹ of m circles is not homotopy equivalent to ⋁_n S¹ if m ≠ n. This sort of thing is hard to do directly. What one would like is some sort of algebraic object associated to spaces, depending only on their homotopy type, and taking different values for ⋁_m S¹ and ⋁_n S¹ if m ≠ n. In fact the Euler characteristic does this, since ⋁_m S¹ has Euler characteristic 1 − m. But it is a rather nontrivial theorem that the Euler characteristic of a space depends only on its homotopy type. A different algebraic invariant that works equally well for graphs, and whose rigorous development requires less effort than the Euler characteristic, is the fundamental group of a space, the subject of Chapter 1.

Example 0.8. Consider the space X obtained from S² by attaching the two ends of an arc A to two distinct points on the sphere, say the north and south poles. Let B be an arc in S² joining the two points where A attaches.
Then X can be given a CW complex structure with the two endpoints of A and B as 0 cells, the interiors of A and B as 1 cells, and the rest of S² as a 2 cell. Since A and B are contractible, X/A and X/B are homotopy equivalent to X. The space X/A is the quotient S²/S⁰, the sphere with two points identified, and X/B is S¹ ∨ S². Hence S²/S⁰ and S¹ ∨ S² are homotopy equivalent, a fact which may not be entirely obvious at first glance.

Example 0.9. Let X be the union of a torus with n meridional disks. To obtain a CW structure on X, choose a longitudinal circle in the torus, intersecting each of the meridional disks in one point. These intersection points are then the 0 cells, the 1 cells are the rest of the longitudinal circle and the boundary circles of the meridional disks, and the 2 cells are the remaining regions of the torus and the interiors of the meridional disks. Collapsing each meridional disk to a point yields a homotopy equivalent space Y consisting of n 2 spheres, each tangent to its two neighbors, a 'necklace with n beads'. The third space Z in the figure, a strand of n beads with a string joining its two ends, collapses to Y by collapsing the string to a point, so this collapse is a homotopy equivalence. Finally, by collapsing the arc in Z formed by the front halves of the equators of the n beads, we obtain the fourth space W, a wedge sum of S¹ with n 2 spheres. (One can see why a wedge sum is sometimes called a 'bouquet' in the older literature.)

Example 0.10: Reduced Suspension. Let X be a CW complex and x₀ ∈ X a 0 cell. Inside the suspension SX we have the line segment {x₀} × I, and collapsing this to a point yields a space ΣX homotopy equivalent to SX, called the reduced suspension of X. For example, if we take X to be S¹ ∨ S¹ with x₀ the intersection point of the two circles, then the ordinary suspension SX is the union of two spheres intersecting along the arc {x₀} × I, so the reduced suspension ΣX is S² ∨ S², a slightly simpler space. More generally we have Σ(X ∨ Y) = ΣX ∨ ΣY for arbitrary CW complexes X and Y. Another way in which the reduced suspension ΣX is slightly simpler than SX is in its CW structure: in SX there are two 0 cells (the two suspension points) and an (n + 1) cell eⁿ × (0, 1) for each n cell eⁿ of X, whereas in ΣX there is a single 0 cell and an (n + 1) cell for each n cell of X other than the 0 cell x₀. The reduced suspension ΣX is actually the same as the smash product X ∧ S¹, since both spaces are the quotient of X × I with X × ∂I ∪ {x₀} × I collapsed to a point.

Attaching Spaces

Another common way to change a space without changing its homotopy type involves the idea of continuously varying how its parts are attached together. A general definition of 'attaching one space to another' that includes the case of attaching cells is the following. We start with a space X₀ and another space X₁ that we wish to attach to X₀ by identifying the points in a subspace A ⊂ X₁ with points of X₀. The data needed to do this is a map f : A → X₀, for then we can form a quotient space of X₀ ∐ X₁ by identifying each point a ∈ A with its image f(a) ∈ X₀. Let us denote this quotient space by X₀ ⊔_f X₁, the space X₀ with X₁ attached along A via f. When (X₁, A) = (Dⁿ, Sⁿ⁻¹) we have the case of attaching an n cell to X₀ via a map f : Sⁿ⁻¹ → X₀.

Mapping cylinders are examples of this construction, since the mapping cylinder M_f of a map f : X → Y is the space obtained from Y by attaching X × I along X × {1} via f. Closely related to the mapping cylinder M_f is the mapping cone C_f = Y ⊔_f CX, where CX is the cone (X × I)/(X × {0}) and we attach this to Y along X × {1} via the identifications (x, 1) ∼ f(x). For example, when X is a sphere S
n−1, the mapping cone C_f is the space obtained from Y by attaching an n cell via f : Sⁿ⁻¹ → Y. A mapping cone C_f can also be viewed as the quotient M_f/X of the mapping cylinder M_f with the subspace X = X × {0} collapsed to a point.

If one varies an attaching map f by a homotopy f_t, one gets a family of spaces whose shape is undergoing a continuous change, it would seem, and one might expect these spaces all to have the same homotopy type. This is often the case: If (X₁, A) is a CW pair and the two attaching maps f, g : A → X₀ are homotopic, then X₀ ⊔_f X₁ ≃ X₀ ⊔_g X₁. Again let us defer the proof and look at some examples.

Example 0.11. Let us rederive the result in Example 0.8 that a sphere with two points identified is homotopy equivalent to S¹ ∨ S². The sphere with two points identified can be obtained by attaching S² to S¹ by a map that wraps a closed arc A in S² around S¹, as shown in the figure. Since A is contractible, this attaching map is homotopic to a constant map, and attaching S² to S¹ via a constant map of A yields S¹ ∨ S². The result then follows since (S², A) is a CW pair, S² being obtained from A by attaching a 2 cell.

Example 0.12. In similar fashion we can see that the necklace in Example 0.9 is homotopy equivalent to the wedge sum of a circle with n 2 spheres. The necklace can be obtained from a circle by attaching n 2 spheres along arcs, so it is homotopy equivalent to the space obtained by attaching n 2 spheres to a circle at points. Then we can slide these attaching points around the circle until they all coincide, producing the wedge sum.

Example 0.13. Here is an application of the earlier fact that collapsing a contractible subcomplex is a homotopy equivalence: if (X, A) is a CW pair, consisting of a cell complex X and a subcomplex A, then X/A ≃ X ∪ CA, the mapping cone of the inclusion A
֓X. For we have X/A = (X∪CA)/CA ≃ X∪CA since CA is a contractible subcomplex of X ∪ CA. Example 0.14. If (X, A) is a CW pair and A is contractible in X, that is, the inclusion A֓ X is homotopic to a constant map, then X/A ≃ X ∨ SA. Namely, by the previous example we have X/A ≃ X ∪ CA, and then since A is contractible in X, the mapping cone X ∪ CA of the inclusion A ֓ X is homotopy equivalent to the mapping cone of a constant map, which is X ∨ SA. For example, S n/S i ≃ S n ∨ S i+1 for i < n, since S i is contractible in S n if i < n. In particular this gives S 2/S 0 ≃ S 2 ∨ S 1, which is Example 0.8 again. The Homotopy Extension Property In this final section of the chapter we will actually prove a few things, including the two criteria for homotopy equivalence described above. The proofs depend upon a technical property that arises in many other contexts as well. Consider the following problem. Suppose one is given a map f0 : X→Y, and on a subspace A ⊂ X one is also given a homotopy ft : A→Y of f0 || A that one would like to extend to a homotopy ft : X→Y of the given f0. If the pair (X, A) is such that this extension problem can always be solved, one says that (X, A) has the homotopy extension property. Thus (X, A) has the homotopy extension property if every pair of maps X × {0}→Y and A× I→Y that agree on A× {0} can be extended to a map X × I→Y. A pair (X, A) has the homotopy extension property if and only if X × {0} ∪ A× I is a retract of X × I. For one implication, the homotopy extension property for (X, A) implies that the identity map X × {0} ∪ A×I→X × {0} ∪ A× I extends to a map X
× I→X × {0} ∪ A× I, so X × {0} ∪ A× I is a retract of X × I. The converse is equally easy when A is closed in X. Then any two maps X × {0}→Y and A× I→Y that agree on A× {0} combine to give a map X × {0} ∪ A× I→Y which is continuous since it is continuous on the closed sets X × {0} and A× I. By composing this map X × {0} ∪ A× I→Y with a retraction X × I→X × {0} ∪ A× I we get an extension X × I→Y, so (X, A) has the homotopy extension property. The hypothesis that A is closed can be avoided by a more complicated argument given in the Appendix. If X × {0} ∪ A× I is a retract of X × I and X is Hausdorff, then A must in fact be closed in X. For if r : X × I→X × I is a retraction onto X × {0} ∪ A× I, then the image of r is the set of points z ∈ X × I with r (z) = z, a closed set if X is Hausdorff, so X × {0} ∪ A× I is closed in X × I and hence A is closed in X. A simple example of a pair (X, A) with A closed for which the homotopy extension property fails is the pair (I, A) where A = {0, 1,1/2,1/3,1/4, ···}. It is not hard to show that there is no continuous retraction I × I→I × {0} ∪ A× I. The breakdown of The Homotopy Extension Property Chapter 0 15 homotopy extension here can be attributed to the bad structure of (X, A) near 0. With nicer local structure the homotopy extension property does hold, as the next example shows. Example 0.15. A pair (X, A) has the homotopy extension property if A has a mapping cylinder neighborhood in X, by which we mean a closed neighborhood N containing a subspace B, thought of as the boundary of N, with N − B an open neighborhood of A, such that there
exists a map f : B→A and a homeomorphism h : Mf →N with h || A ∪ B = 11. Mapping cylinder neighborhoods like this occur fairly often. For example, the thick let- ters discussed at the beginning of the chapter provide such neighborhoods of the thin letters, regarded as subspaces of the plane. To verify the homotopy extension property, notice first that I × I retracts onto I × {0}∪∂I × I, hence B × I × I retracts onto B × I × {0} ∪ B × ∂I × I, and this retraction induces a retraction of Mf × I onto Mf × {0} ∪ (A ∪ B)× I. Thus (Mf, A ∪ B) has the homotopy extension property. Hence so does the homeomorphic pair (N, A ∪ B). Now given a map X→Y and a homotopy of its restriction to A, we can take the constant homotopy on X − (N − B) and then extend over N by applying the homotopy extension property for (N, A ∪ B) to the given homotopy on A and the constant homotopy on B. Proposition 0.16. If (X, A) is a CW pair, then X × {0}∪A× I is a deformation retract of X × I, hence (X, A) has the homotopy extension property. Proof: There is a retraction r : Dn × I→Dn × {0} ∪ ∂Dn × I, for example the radial projection from the point (0, 2) ∈ Dn × R. Then setting rt = tr + (1 − t)11 gives a deformation retraction of Dn × I onto Dn × {0} ∪ ∂Dn × I. This deformation retraction gives rise to a deformation retraction of X n × I onto X n × {0} ∪ (X n−1 ∪ An)× I since X n × I is obtained from X n × {0} ∪ (X n−1 ∪ An)× I by attaching copies of Dn × I along Dn × {0} ∪ ∂Dn × I. If we perform the deformation retraction of X n
× I onto X n × {0} ∪ (X n−1 ∪ An)× I during the t interval [1/2^(n+1), 1/2^n], this infinite concatenation of homotopies is a deformation retraction of X × I onto X × {0} ∪ A× I. There is no problem with continuity of this deformation retraction at t = 0 since it is continuous on X n × I, being stationary there during the t interval [0, 1/2^(n+1)], and CW complexes have the weak topology with respect to their skeleta, so a map is continuous iff its restriction to each skeleton is continuous. ⊔⊓ Now we can prove a generalization of the earlier assertion that collapsing a contractible subcomplex is a homotopy equivalence. Proposition 0.17. If the pair (X, A) satisfies the homotopy extension property and A is contractible, then the quotient map q : X→X/A is a homotopy equivalence. Proof: Let ft : X→X be a homotopy extending a contraction of A, with f0 = 11. Since ft(A) ⊂ A for all t, the composition qft : X→X/A sends A to a point and hence factors as a composition X→X/A→X/A in which the first map is q. Denoting the second map by f t : X/A→X/A, we have qft = f tq in the first of the two diagrams at the right. When t = 1 we have f1(A) equal to a point, the point to which A contracts, so f1 induces a map g : X/A→X with gq = f1, as in the second diagram. It follows that qg = f 1 since qg(x) = qgq(x) = qf1(x) = f 1q(x) = f 1(x). The maps g and q are inverse homotopy equivalences since gq = f1 ≃ f0 = 11 via ft and qg = f 1 ≃ f 0 = 11 via f t. ⊔⊓ Another application of the homotopy extension property, giving a
slightly more refined version of one of our earlier criteria for homotopy equivalence, is the following: Proposition 0.18. If (X1, A) is a CW pair and we have attaching maps f, g : A→X0 that are homotopic, then X0 ⊔f X1 ≃ X0 ⊔g X1 rel X0. Here the definition of W ≃ Z rel Y for pairs (W, Y ) and (Z, Y ) is that there are maps ϕ : W→Z and ψ : Z→W restricting to the identity on Y, such that ψϕ ≃ 11 and ϕψ ≃ 11 via homotopies that restrict to the identity on Y at all times. Proof: If F : A× I→X0 is a homotopy from f to g, consider the space X0 ⊔F (X1 × I). This contains both X0 ⊔f X1 and X0 ⊔g X1 as subspaces. A deformation retraction of X1 × I onto X1 × {0} ∪ A× I as in Proposition 0.16 induces a deformation retraction of X0 ⊔F (X1 × I) onto X0 ⊔f X1. Similarly X0 ⊔F (X1 × I) deformation retracts onto X0 ⊔g X1. Both these deformation retractions restrict to the identity on X0, so together they give a homotopy equivalence X0 ⊔f X1 ≃ X0 ⊔g X1 rel X0. ⊔⊓ We finish this chapter with a technical result whose proof will involve several applications of the homotopy extension property: Proposition 0.19. Suppose (X, A) and (Y, A) satisfy the homotopy extension property, and f : X→Y is a homotopy equivalence with f || A = 11. Then f is a homotopy equivalence rel A. Corollary 0.20. If (X, A) satisfies the homotopy extension property and the inclusion A ֓ X is a homotopy equivalence, then A is a deformation retract of X. Proof: Apply the proposition to the inclusion A
֓ X. ⊔⊓ Corollary 0.21. A map f : X→Y is a homotopy equivalence iff X is a deformation retract of the mapping cylinder Mf. Hence, two spaces X and Y are homotopy equivalent iff there is a third space containing both X and Y as deformation retracts. Proof: In the diagram at the right the maps i and j are the inclusions and r is the canonical retraction, so f = r i and i ≃ jf. Since j and r are homotopy equivalences, it follows that f is a homotopy equivalence iff i is a homotopy equivalence, since the composition of two homotopy equivalences is a homotopy equivalence and a map homotopic to a homotopy equivalence is a homotopy equivalence. Now apply the preceding corollary to the pair (Mf, X), which satisfies the homotopy extension property by Example 0.15 using the neighborhood X × [0, 1/2] of X in Mf. ⊔⊓ Proof of 0.19: Let g : Y →X be a homotopy inverse for f. There will be three steps to the proof: (1) Construct a homotopy from g to a map g1 with g1 || A = 11. (2) Show g1f ≃ 11 rel A. (3) Show f g1 ≃ 11 rel A. (1) Let ht : X→X be a homotopy from gf = h0 to 11 = h1. Since f || A = 11, we can view ht || A as a homotopy from g || A to 11. Then since we assume (Y, A) has the homotopy extension property, we can extend this homotopy to a homotopy gt : Y →X from g = g0 to a map g1 with g1 || A = 11. (2) A homotopy from g1f to 11 is given by the formulas kt = g1−2t f for 0 ≤ t ≤ 1/2 and kt = h2t−1 for 1/2 ≤ t ≤ 1. Note that the two definitions agree when t = 1/2. Since f
|| A = 11 and gt = ht on A, the homotopy kt || A starts and ends with the identity, and its second half simply retraces its first half, that is, kt = k1−t on A. We will define a ‘homotopy of homotopies’ ktu : A→X by means of the figure at the right showing the parameter domain I × I for the pairs (t, u), with the t axis horizontal and the u axis vertical. On the bottom edge of the square we define kt0 = kt || A. Below the ‘V’ we define ktu to be independent of u, and above the ‘V’ we define ktu to be independent of t. This is unambiguous since kt = k1−t on A. Since k0 = 11 on A, we have ktu = 11 for (t, u) in the left, right, and top edges of the square. Next we extend ktu over X, as follows. Since (X, A) has the homotopy extension property, so does (X × I, A× I), as one can see from the equivalent retraction property. Viewing ktu as a homotopy of kt || A, we can therefore extend ktu : A→X to ktu : X→X with kt0 = kt. If we restrict this ktu to the left, top, and right edges of the (t, u) square, we get a homotopy g1f ≃ 11 rel A. (3) Since g1 ≃ g, we have f g1 ≃ f g ≃ 11, so f g1 ≃ 11 and steps (1) and (2) can be repeated with the pair f, g replaced by g1, f. The result is a map f1 : X→Y with f1 || A = 11 and f1g1 ≃ 11 rel A. Hence f1 ≃ f1(g1f ) = (f1g1)f ≃ f rel A. From this we deduce that f g1 ≃ f1g1 ≃ 11 rel A. ⊔⊓ Exercises
1. Construct an explicit deformation retraction of the torus with one point deleted onto a graph consisting of two circles intersecting in a point, namely, longitude and meridian circles of the torus. 2. Construct an explicit deformation retraction of Rn − {0} onto S n−1. 3. (a) Show that the composition of homotopy equivalences X→Y and Y →Z is a homotopy equivalence X→Z. Deduce that homotopy equivalence is an equivalence relation. (b) Show that the relation of homotopy among maps X→Y is an equivalence relation. (c) Show that a map homotopic to a homotopy equivalence is a homotopy equivalence. 4. A deformation retraction in the weak sense of a space X to a subspace A is a homotopy ft : X→X such that f0 = 11, f1(X) ⊂ A, and ft(A) ⊂ A for all t. Show that if X deformation retracts to A in this weak sense, then the inclusion A ֓ X is a homotopy equivalence. 5. Show that if a space X deformation retracts to a point x ∈ X, then for each neighborhood U of x in X there exists a neighborhood V ⊂ U of x such that the inclusion map V ֓ U is nullhomotopic. 6. (a) Let X be the subspace of R2 consisting of the horizontal segment [0, 1]× {0} together with all the vertical segments {r}× [0, 1 − r] for r a rational number in [0, 1]. Show that X deformation retracts to any point in the segment [0, 1]× {0}, but not to any other point. [See the preceding problem.] (b) Let Y be the subspace of R2 that is the union of an infinite number of copies of X arranged as in the figure below. Show that Y is contractible but does not deformation retract onto any point. (c) Let Z be the zigzag subspace of Y homeomorphic to R indicated by the heavier line. Show there is a deformation retraction in the weak sense (see Exercise 4) of Y onto Z, but no true deformation retraction. 7
. Fill in the details in the following construction from [Edwards 1999] of a compact space Y ⊂ R3 with the same properties as the space Y in Exercise 6, that is, Y is contractible but does not deformation retract to any point. To begin, let X be the union of an infinite sequence of cones on the Cantor set arranged end-to-end, as in the figure. Next, form the one-point compactification of X × R. This embeds in R3 as a closed disk with curved ‘fins’ attached along circular arcs, and with the one-point compactification of X as a cross-sectional slice. The desired space Y is then obtained from this subspace of R3 by wrapping one more cone on the Cantor set around the boundary of the disk. 8. For n > 2, construct an n room analog of the house with two rooms. 9. Show that a retract of a contractible space is contractible. 10. Show that a space X is contractible iff every map f : X→Y, for arbitrary Y, is nullhomotopic. Similarly, show X is contractible iff every map f : Y →X is nullhomotopic. 11. Show that f : X→Y is a homotopy equivalence if there exist maps g, h : Y →X such that f g ≃ 11 and hf ≃ 11. More generally, show that f is a homotopy equivalence if f g and hf are homotopy equivalences. 12. Show that a homotopy equivalence f : X→Y induces a bijection between the set of path-components of X and the set of path-components of Y, and that f restricts to a homotopy equivalence from each path-component of X to the corresponding path-component of Y. Prove also the corresponding statements with components instead of path-components. Deduce that if the components of a space X coincide with its path-components, then the same holds for any space Y homotopy equivalent to X. 13. Show that any two deformation retractions r 0 t and r 1 t of a space X onto a subspace A can be joined by a continuous family of deformation retractions r s t, 0 ≤
s ≤ 1, of X onto A, where continuity means that the map X × I × I→X sending (x, s, t) to r s t(x) is continuous. 14. Given positive integers v, e, and f satisfying v − e + f = 2, construct a cell structure on S 2 having v 0 cells, e 1 cells, and f 2 cells. 15. Enumerate all the subcomplexes of S ∞, with the cell structure on S ∞ that has S n as its n skeleton. 16. Show that S ∞ is contractible. 17. (a) Show that the mapping cylinder of every map f : S 1→S 1 is a CW complex. (b) Construct a 2 dimensional CW complex that contains both an annulus S 1 × I and a Möbius band as deformation retracts. 18. Show that S 1 ∗ S 1 = S 3, and more generally S m ∗ S n = S m+n+1. 19. Show that the space obtained from S 2 by attaching n 2 cells along any collection of n circles in S 2 is homotopy equivalent to the wedge sum of n + 1 2 spheres. 20. Show that the subspace X ⊂ R3 formed by a Klein bottle intersecting itself in a circle, as shown in the figure, is homotopy equivalent to S 1 ∨ S 1 ∨ S 2. 21. If X is a connected Hausdorff space that is a union of a finite number of 2 spheres, any two of which intersect in at most one point, show that X is homotopy equivalent to a wedge sum of S 1 ’s and S 2 ’s. 22. Let X be a finite graph lying in a half-plane P ⊂ R3 and intersecting the edge of P in a subset of the vertices of X. Describe the homotopy type of the ‘surface of revolution’ obtained by rotating X about the edge line of P. 23. Show that a CW complex is contractible if it is the union of two contractible subcomplexes whose intersection is also contractible. 24. Let X and Y be CW complexes with 0 cells x0 and y0. Show that the quotient spaces X ∗ Y /(X