Theorem 3.2 Suppose \( \Omega \) is a bounded open subset of \( {\mathbb{R}}^{d} \). Given a linear partial differential operator \( L \) with constant coefficients, there exists a bounded linear operator \( K \) on \( {L}^{2}\left( \Omega \right) \) such that whenever \( f \in {L}^{2}\left( \Omega \right) \), then \[ L\left( {Kf}\right) = f\;\text{ in the weak sense. } \] In other words, \( u = K\left( f\right) \) is a weak solution to \( L\left( u\right) = f \).
We first prove the theorem assuming the validity of the inequality in the lemma. Consider the pre-Hilbert space \( {\mathcal{H}}_{0} = {C}_{0}^{\infty }\left( \Omega \right) \) equipped with the inner product and norm
\[ \langle \varphi ,\psi \rangle = \left( {{L}^{ * }\varphi ,{L}^{ * }\psi }\right) ,\;\parallel \psi {\parallel }_{0}^{2} = {\begin{Vmatrix}{L}^{ * }\psi \end{Vmatrix}}_{{L}^{2}\left( \Omega \right) }^{2}. \]
Following the results in Section 2.3 of Chapter 4, we let \( \mathcal{H} \) denote the completion of \( {\mathcal{H}}_{0} \). By Lemma 3.3, a Cauchy sequence in the \( \parallel \cdot {\parallel }_{0} \)-norm is also Cauchy in the \( {L}^{2}\left( \Omega \right) \)-norm; hence we may identify \( \mathcal{H} \) with a subspace of \( {L}^{2}\left( \Omega \right) \). Also, \( {L}^{ * } \), initially defined as a bounded operator from \( {\mathcal{H}}_{0} \) to \( {L}^{2}\left( \Omega \right) \), extends to a bounded operator from \( \mathcal{H} \) to \( {L}^{2}\left( \Omega \right) \) (by Lemma 1.3). For a fixed \( f \in {L}^{2}\left( \Omega \right) \), consider the linear map \( {\ell }_{0} : {C}_{0}^{\infty }\left( \Omega \right) \rightarrow \mathbb{C} \) defined by
\[ {\ell }_{0}\left( \psi \right) = \left( {\psi, f}\right) \;\text{ for }\psi \in {C}_{0}^{\infty }\left( \Omega \right) . \]
The Cauchy-Schwarz inequality together with another application of Lemma 3.3 yields
\[ \left| {{\ell }_{0}\left( \psi \right) }\right| = \left| \left( {\psi, f}\right) \right| \leq \parallel \psi {\parallel }_{{L}^{2}\left( \Omega \right) }\parallel f{\parallel }_{{L}^{2}\left( \Omega \right) } \leq c{\begin{Vmatrix}{L}^{ * }\psi \end{Vmatrix}}_{{L}^{2}\left( \Omega \right) }\parallel f{\parallel }_{{L}^{2}\left( \Omega \right) } \leq {c}^{\prime }\parallel \psi {\parallel }_{0}, \]
with \( {c}^{\prime } = c\parallel f{\parallel }_{{L}^{2}\left( \Omega \right) } \).
Hence \( {\ell }_{0} \) is bounded on the pre-Hilbert space \( {\mathcal{H}}_{0} \). Therefore \( {\ell }_{0} \) extends to a bounded linear functional \( \ell \) on \( \mathcal{H} \) (see Section 5.1, Chapter 4), and the above inequalities show that \( \parallel \ell \parallel \leq c\parallel f{\parallel }_{{L}^{2}\left( \Omega \right) } \). By the Riesz representation theorem applied to \( \ell \) on the Hilbert space \( \mathcal{H} \) (Theorem 5.3 in Chapter 4), there exists \( U \in \mathcal{H} \) such that
\[ \ell \left( \psi \right) = \langle \psi, U\rangle = \left( {{L}^{ * }\psi ,U}\right) \;\text{ for all }\psi \in {\mathcal{H}}_{0}. \]
Lemma 3.4 Suppose \( P\left( z\right) = {z}^{m} + \cdots + {a}_{1}z + {a}_{0} \) is a polynomial of degree \( m \) with leading coefficient 1. If \( F \) is a holomorphic function on \( \mathbb{C} \), then
\[ {\left| F\left( 0\right) \right| }^{2} \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| P\left( {e}^{i\theta }\right) F\left( {e}^{i\theta }\right) \right| }^{2}{d\theta }. \]
Proof. The lemma is a consequence of the special case when \( P = 1 \):

(16)
\[ {\left| F\left( 0\right) \right| }^{2} \leq \frac{1}{2\pi }{\int }_{0}^{2\pi }{\left| F\left( {e}^{i\theta }\right) \right| }^{2}{d\theta }. \]

This assertion follows directly from the mean-value identity (8) in Section 2 with \( \zeta = 0 \) and \( r = 1 \), via the Cauchy-Schwarz inequality. With it in hand, we begin by factoring \( P \):
\[ P\left( z\right) = \mathop{\prod }\limits_{{\left| \alpha \right| \geq 1}}\left( {z - \alpha }\right) \mathop{\prod }\limits_{{\left| \beta \right| < 1}}\left( {z - \beta }\right) = {P}_{1}\left( z\right) {P}_{2}\left( z\right) , \]
where each product is finite and taken over the roots of \( P \) whose absolute values are \( \geq 1 \) and \( < 1 \), respectively.

Note that \( \left| {{P}_{1}\left( 0\right) }\right| = \mathop{\prod }\limits_{{\left| \alpha \right| \geq 1}}\left| \alpha \right| \geq 1 \).

For \( {P}_{2} \) we write
\[ \left( {z - \beta }\right) = - \left( {1 - \bar{\beta }z}\right) {\psi }_{\beta }\left( z\right) , \]
where \( {\psi }_{\beta }\left( z\right) = \frac{\beta - z}{1 - \bar{\beta }z} \) are the Blaschke factors.
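Lemma 3.4 can be sanity-checked numerically. Below is a small sketch (the choices of \( F \) and the monic test polynomials are ours, not from the text) that approximates the circle integral by a Riemann sum and verifies the inequality.

```python
import numpy as np

# Check |F(0)|^2 <= (1/2pi) * int_0^{2pi} |P(e^{it}) F(e^{it})|^2 dt
# for a few monic polynomials P and an entire function F (arbitrary choices).
def circle_mean(P, F, n=20000):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    # the mean over the circle equals (1/2pi) times the integral
    return np.mean(np.abs(P(z) * F(z)) ** 2)

F = np.exp                           # entire, F(0) = 1
tests = [lambda z: z - 2.0,          # single root outside the unit disc
         lambda z: z,                # single root inside (Blaschke case)
         lambda z: z**2 + 0.5*z - 0.3]
for P in tests:
    assert abs(F(0)) ** 2 <= circle_mean(P, F) + 1e-9
print("Lemma 3.4 inequality holds for all test polynomials")
```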
Proposition 4.1 Suppose there exists a function \( u \in {C}^{2}\left( \bar{\Omega }\right) \) that minimizes \( \mathcal{D}\left( U\right) \) among all \( U \in {C}^{2}\left( \bar{\Omega }\right) \) with \( {\left. U\right| }_{\partial \Omega } = f \) . Then \( u \) is harmonic in \( \Omega \) .
Proof. For functions \( F \) and \( G \) in \( {C}^{2}\left( \bar{\Omega }\right) \) define the inner product
\[ \langle F, G\rangle = {\int }_{\Omega }\left( {\frac{\partial F}{\partial {x}_{1}}\overline{\frac{\partial G}{\partial {x}_{1}}} + \frac{\partial F}{\partial {x}_{2}}\overline{\frac{\partial G}{\partial {x}_{2}}}}\right) d{x}_{1}d{x}_{2}. \]
We then note that \( \mathcal{D}\left( u\right) = \langle u, u\rangle \). If \( v \) is any function in \( {C}^{2}\left( \bar{\Omega }\right) \) with \( {\left. v\right| }_{\partial \Omega } = 0 \), then for all \( \epsilon \) we have
\[ \mathcal{D}\left( {u + {\epsilon v}}\right) \geq \mathcal{D}\left( u\right) , \]
since \( u + {\epsilon v} \) and \( u \) have the same boundary values, and \( u \) minimizes the Dirichlet integral. We note, however, that
\[ \mathcal{D}\left( {u + {\epsilon v}}\right) = \mathcal{D}\left( u\right) + {\epsilon }^{2}\mathcal{D}\left( v\right) + \epsilon \langle u, v\rangle + \epsilon \langle v, u\rangle . \]
Hence
\[ {\epsilon }^{2}\mathcal{D}\left( v\right) + \epsilon \langle u, v\rangle + \epsilon \langle v, u\rangle \geq 0, \]
and since \( \epsilon \) can be either positive or negative, this can happen only if \( \operatorname{Re}\langle u, v\rangle = 0 \). Similarly, considering the perturbation \( u + {i\epsilon v} \), we find \( \operatorname{Im}\langle u, v\rangle = 0 \), and therefore \( \langle u, v\rangle = 0 \). An integration by parts then provides
\[ 0 = \langle u, v\rangle = - {\int }_{\Omega }\left( {\bigtriangleup u}\right) \bar{v} \]
for all \( v \in {C}^{2}\left( \bar{\Omega }\right) \) with \( {\left. v\right| }_{\partial \Omega } = 0 \). This implies that \( \bigtriangleup u = 0 \) in \( \Omega \), and of course \( u \) equals \( f \) on the boundary.
Corollary 4.4 Suppose \( \Omega \) is a bounded open set, and let \( \partial \Omega = \bar{\Omega } - \Omega \) denote its boundary. Assume that \( u \) is continuous in \( \bar{\Omega } \) and harmonic in \( \Omega \). Then
\[ \mathop{\max }\limits_{{x \in \bar{\Omega }}}\left| {u\left( x\right) }\right| = \mathop{\max }\limits_{{x \in \partial \Omega }}\left| {u\left( x\right) }\right| . \]
Proof. Since the sets \( \bar{\Omega } \) and \( \partial \Omega \) are compact and \( u \) is continuous, the two maxima above are clearly attained. We suppose that \( \mathop{\max }\limits_{{x \in \bar{\Omega }}}\left| {u\left( x\right) }\right| \) is attained at an interior point \( {x}_{0} \in \Omega \), for otherwise there is nothing to prove.

Now let \( B \) be any ball centered at \( {x}_{0} \) with \( \bar{B} \subset \Omega \). By the mean-value property, \( \left| {u\left( {x}_{0}\right) }\right| \leq \frac{1}{m\left( B\right) }{\int }_{B}\left| {u\left( x\right) }\right| {dx} \). If for some point \( {x}^{\prime } \in B \) we had \( \left| {u\left( {x}^{\prime }\right) }\right| < \left| {u\left( {x}_{0}\right) }\right| \), then the same strict inequality would hold in a small neighborhood of \( {x}^{\prime } \), and since \( \left| {u\left( x\right) }\right| \leq \left| {u\left( {x}_{0}\right) }\right| \) throughout \( B \), the result would be that \( \frac{1}{m\left( B\right) }{\int }_{B}\left| {u\left( x\right) }\right| {dx} < \left| {u\left( {x}_{0}\right) }\right| \), which is a contradiction. Hence \( \left| {u\left( x\right) }\right| = \left| {u\left( {x}_{0}\right) }\right| \) for each \( x \in B \). Now this is true for each ball \( {B}_{r} \) of radius \( r \), centered at \( {x}_{0} \), such that \( {B}_{r} \subset \Omega \). Let \( {r}_{0} \) be the least upper bound of such \( r \); then \( {\bar{B}}_{{r}_{0}} \) intersects the boundary \( \partial \Omega \) at some point \( \widetilde{x} \). Since \( \left| {u\left( x\right) }\right| = \left| {u\left( {x}_{0}\right) }\right| \) for all \( x \in {\bar{B}}_{r} \), \( r < {r}_{0} \), it follows by continuity that \( \left| {u\left( \widetilde{x}\right) }\right| = \left| {u\left( {x}_{0}\right) }\right| \), proving the corollary.
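The maximum principle is easy to probe numerically. The following sketch (the harmonic function \( u(x,y) = x^2 - y^2 \) and the unit disc are our own choices) samples interior and boundary points and confirms that the maximum of \( |u| \) over the closure is attained on the boundary.

```python
import numpy as np

# u(x, y) = x^2 - y^2 is harmonic; Omega = open unit disc (our choices).
u = lambda x, y: x**2 - y**2

rng = np.random.default_rng(0)
r = rng.uniform(0.0, 0.999, 5000)          # interior radii
t = rng.uniform(0.0, 2.0 * np.pi, 5000)
interior_max = np.max(np.abs(u(r * np.cos(t), r * np.sin(t))))

s = np.linspace(0.0, 2.0 * np.pi, 5000)    # boundary samples
boundary_max = np.max(np.abs(u(np.cos(s), np.sin(s))))

# the maximum over the closure is attained on the boundary
assert interior_max <= boundary_max
```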
Lemma 4.5 We have the identity
\[ {\int }_{B}\left( {v\bigtriangleup u - u\bigtriangleup v}\right) {\eta dx} = {\int }_{B}u\left( {\nabla v \cdot \nabla \eta }\right) - v\left( {\nabla u \cdot \nabla \eta }\right) {dx}. \]
Here \( \nabla u \) is the gradient of \( u \), that is, \( \nabla u = \left( {\frac{\partial u}{\partial {x}_{1}},\frac{\partial u}{\partial {x}_{2}},\ldots ,\frac{\partial u}{\partial {x}_{d}}}\right) \) and
\[ \nabla v \cdot \nabla \eta = \mathop{\sum }\limits_{{j = 1}}^{d}\frac{\partial v}{\partial {x}_{j}}\frac{\partial \eta }{\partial {x}_{j}}, \]
with \( \nabla u \cdot \nabla \eta \) defined similarly.

In fact, by integrating by parts as in the proof of (14) we have
\[ {\int }_{B}\frac{\partial u}{\partial {x}_{j}}{v\eta dx} = - {\int }_{B}u\frac{\partial v}{\partial {x}_{j}}{\eta dx} - {\int }_{B}{uv}\frac{\partial \eta }{\partial {x}_{j}}{dx}. \]
We then repeat this with \( u \) replaced by \( \partial u/\partial {x}_{j} \), and sum in \( j \) to obtain
\[ {\int }_{B}\left( {\bigtriangleup u}\right) {v\eta dx} = - {\int }_{B}\left( {\nabla u \cdot \nabla v}\right) {\eta dx} - {\int }_{B}\left( {\nabla u \cdot \nabla \eta }\right) {vdx}. \]
This yields the lemma if we subtract from this the symmetric formula with \( u \) and \( v \) interchanged.
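The one-dimensional prototype of this integration by parts can be verified numerically. In the sketch below (all test functions are our own; \( \eta \) vanishes at both endpoints, so no boundary terms appear) we check \( \int u' v \eta = -\int u v' \eta - \int u v \eta' \) by Riemann sums.

```python
import numpy as np

# Since (u v eta)' = u' v eta + u v' eta + u v eta' and eta vanishes at the
# endpoints, the three integrals below must cancel.
x = np.linspace(0.0, 1.0, 400001)
dx = x[1] - x[0]

u, du = np.exp(x), np.exp(x)
v, dv = np.sin(3 * x), 3 * np.cos(3 * x)
eta, deta = x * (1 - x), 1 - 2 * x        # eta(0) = eta(1) = 0

lhs = np.sum(du * v * eta) * dx
rhs = -np.sum(u * dv * eta) * dx - np.sum(u * v * deta) * dx
assert abs(lhs - rhs) < 1e-4
```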
Corollary 4.8 Suppose \( \left\{ {u}_{n}\right\} \) is a sequence of harmonic functions in \( \Omega \) that converges to a function \( u \) uniformly on compact subsets of \( \Omega \) as \( n \rightarrow \infty \) . Then \( u \) is also harmonic.
The first of these corollaries was already proved as a consequence of (26). For the second, we use the fact that each \( {u}_{n} \) satisfies the mean-value property
\[ {u}_{n}\left( {x}_{0}\right) = \frac{1}{m\left( B\right) }{\int }_{B}{u}_{n}\left( x\right) {dx} \]
whenever \( B \) is a ball with center at \( {x}_{0} \) and \( \bar{B} \subset \Omega \). Thus by the uniform convergence it follows that \( u \) also satisfies this property, and hence \( u \) is harmonic.
Lemma 4.9 Let \( \Omega \) be an open bounded set in \( {\mathbb{R}}^{d} \). Suppose \( v \) belongs to \( {C}^{1}\left( \bar{\Omega }\right) \) and \( v \) vanishes on \( \partial \Omega \). Then

(29)
\[{\int }_{\Omega }{\left| v\left( x\right) \right| }^{2}{dx} \leq {c}_{\Omega }{\int }_{\Omega }{\left| \nabla v\left( x\right) \right| }^{2}{dx}.\]
Proof. This conclusion could in fact be deduced from the considerations given in Lemma 3.3. We prefer to prove this easy version separately to highlight a simple idea that we shall also use later. It should be noted that the argument yields the estimate \( {c}_{\Omega } \leq d{\left( \Omega \right) }^{2} \), where \( d\left( \Omega \right) \) is the diameter of \( \Omega \).

We proceed on the basis of the following observation. Suppose \( f \) is a function in \( {C}^{1}\left( \bar{I}\right) \), where \( I = \left( {a, b}\right) \) is an interval in \( \mathbb{R} \). Assume that \( f \) vanishes at one of the end-points of \( I \). Then

(30)
\[{\int }_{I}{\left| f\left( t\right) \right| }^{2}{dt} \leq {\left| I\right| }^{2}{\int }_{I}{\left| {f}^{\prime }\left( t\right) \right| }^{2}{dt},\]
where \( \left| I\right| \) denotes the length of \( I \).

Indeed, suppose \( f\left( a\right) = 0 \). Then \( f\left( s\right) = {\int }_{a}^{s}{f}^{\prime }\left( t\right) {dt} \), and by the Cauchy-Schwarz inequality
\[{\left| f\left( s\right) \right| }^{2} \leq \left| I\right| {\int }_{a}^{s}{\left| {f}^{\prime }\left( t\right) \right| }^{2}{dt} \leq \left| I\right| {\int }_{I}{\left| {f}^{\prime }\left( t\right) \right| }^{2}{dt}.\]
Integrating this in \( s \) over \( I \) then yields (30).

To prove (29), write \( x = \left( {{x}_{1},{x}^{\prime }}\right) \) with \( {x}_{1} \in \mathbb{R} \) and \( {x}^{\prime } \in {\mathbb{R}}^{d - 1} \) and apply (30) to \( f \) defined by \( f\left( {x}_{1}\right) = v\left( {{x}_{1},{x}^{\prime }}\right) \), with \( {x}^{\prime } \) fixed. Let \( J\left( {x}^{\prime }\right) \) be the open set in \( \mathbb{R} \) that is the corresponding slice of \( \Omega \) given by \( \left\{ {{x}_{1} \in \mathbb{R} : \left( {{x}_{1},{x}^{\prime }}\right) \in \Omega }\right\} \). The set \( J\left( {x}^{\prime }\right) \) can be written as a disjoint union of open intervals \( {I}_{j} \).
(Note that in fact \( f\left( {x}_{1}\right) \) vanishes at both end-points of each \( {I}_{j} \).) For each \( j \), on applying (30) we obtain
\[{\int }_{{I}_{j}}{\left| v\left( {x}_{1},{x}^{\prime }\right) \right| }^{2}d{x}_{1} \leq {\left| {I}_{j}\right| }^{2}{\int }_{{I}_{j}}{\left| \nabla v\left( {x}_{1},{x}^{\prime }\right) \right| }^{2}d{x}_{1}.\]
Now since \( \left| {I}_{j}\right| \leq d\left( \Omega \right) \), summing over the disjoint intervals \( {I}_{j} \) gives
\[{\int }_{J\left( {x}^{\prime }\right) }{\left| v\left( {x}_{1},{x}^{\prime }\right) \right| }^{2}d{x}_{1} \leq d{\left( \Omega \right) }^{2}{\int }_{J\left( {x}^{\prime }\right) }{\left| \nabla v\left( {x}_{1},{x}^{\prime }\right) \right| }^{2}d{x}_{1},\]
and an integration over \( {x}^{\prime } \in {\mathbb{R}}^{d - 1} \) then leads to (29).
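The one-dimensional inequality (30) at the heart of this proof is easy to test numerically. Here is a sketch (the interval and the test functions are our own choices):

```python
import numpy as np

# Check int_I |f|^2 <= |I|^2 * int_I |f'|^2 for f vanishing at the left endpoint.
a, b = 0.0, 1.5
t = np.linspace(a, b, 200001)
dt = t[1] - t[0]

for f, df in ((np.sin, np.cos),                  # sin(a) = 0
              (lambda s: s**2, lambda s: 2*s)):  # vanishes at a = 0
    lhs = np.sum(f(t) ** 2) * dt
    rhs = (b - a) ** 2 * np.sum(df(t) ** 2) * dt
    assert lhs <= rhs
```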
Lemma 4.10 Suppose \( \Gamma \) is a compact set in \( {\mathbb{R}}^{d} \), and \( f \) is a continuous function on \( \Gamma \) . Then there exists a sequence \( \left\{ {F}_{n}\right\} \) of smooth functions on \( {\mathbb{R}}^{d} \) so that \( {F}_{n} \rightarrow f \) uniformly on \( \Gamma \) .
To complete the proof of Lemma 4.10, we argue as follows. We regularize the function \( G \) obtained in Lemma 4.11 by defining
\[ {F}_{\epsilon }\left( x\right) = {\epsilon }^{-d}{\int }_{{\mathbb{R}}^{d}}G\left( {x - y}\right) \varphi \left( {y/\epsilon }\right) {dy} = {\int }_{{\mathbb{R}}^{d}}G\left( y\right) {\varphi }_{\epsilon }\left( {x - y}\right) {dy}, \]
with \( {\varphi }_{\epsilon }\left( y\right) = {\epsilon }^{-d}\varphi \left( {y/\epsilon }\right) \).
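The regularization \( F_\epsilon = G * \varphi_\epsilon \) can be illustrated with a discrete convolution in \( d = 1 \). In this sketch (the function \( G(x) = |x| \), the grid, and \( \epsilon \) are our own choices) the standard bump function is normalized numerically, and \( F_\epsilon \) stays uniformly within \( \epsilon \) of \( G \), since \( \varphi_\epsilon \) is supported in \( [-\epsilon, \epsilon] \) and \( |x| \) is 1-Lipschitz.

```python
import numpy as np

def bump(y):
    # smooth bump, supported in |y| < 1
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

eps, h = 0.2, 1e-3
x = np.arange(-2.0, 2.0, h)
G = np.abs(x)                          # continuous, not differentiable at 0
phi_eps = bump(x / eps) / eps
phi_eps /= np.sum(phi_eps) * h         # enforce total integral one

F_eps = np.convolve(G, phi_eps, mode="same") * h

# uniform closeness, away from the artificial edges of the grid
core = np.abs(x) < 1.5
assert np.max(np.abs(F_eps - G)[core]) <= eps
```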
Lemma 4.11 Let \( f \) be a continuous function on a compact subset \( \Gamma \) of \( {\mathbb{R}}^{d} \). Then there exists a function \( G \) on \( {\mathbb{R}}^{d} \) that is continuous, and so that \( {\left. G\right| }_{\Gamma } = f \).
Proof. We begin with the observation that if \( {K}_{0} \) and \( {K}_{1} \) are two disjoint compact sets, there exists a continuous function \( 0 \leq g\left( x\right) \leq 1 \) on \( {\mathbb{R}}^{d} \) which takes the value 0 on \( {K}_{0} \) and 1 on \( {K}_{1} \). Indeed, if \( d\left( {x, K}\right) \) denotes the distance from \( x \) to a set \( K \), we see that
\[ g\left( x\right) = \frac{d\left( {x,{K}_{0}}\right) }{d\left( {x,{K}_{0}}\right) + d\left( {x,{K}_{1}}\right) } \]
has the required properties.

Now, we may assume without loss of generality that \( f \) is non-negative and bounded by 1 on \( \Gamma \). Let
\[ {K}_{0} = \{ x \in \Gamma : 2/3 \leq f\left( x\right) \leq 1\} \;\text{ and }\;{K}_{1} = \{ x \in \Gamma : 0 \leq f\left( x\right) \leq 1/3\} ,\]
so that \( {K}_{0} \) and \( {K}_{1} \) are disjoint. Clearly, the observation above guarantees that there exists a continuous function \( 0 \leq {G}_{1}\left( x\right) \leq 1/3 \) on \( {\mathbb{R}}^{d} \) which takes the value \( 1/3 \) on \( {K}_{0} \) and 0 on \( {K}_{1} \). Then we see that
\[ 0 \leq f\left( x\right) - {G}_{1}\left( x\right) \leq \frac{2}{3}\;\text{ for all }x \in \Gamma .\]
We now repeat the argument with \( f \) replaced by \( f - {G}_{1} \). In the first step, we have gone from \( 0 \leq f \leq 1 \) to \( 0 \leq f - {G}_{1} \leq 2/3 \). Consequently, we may find a continuous function \( {G}_{2} \) on \( {\mathbb{R}}^{d} \) so that
\[ 0 \leq f\left( x\right) - {G}_{1}\left( x\right) - {G}_{2}\left( x\right) \leq {\left( \frac{2}{3}\right) }^{2}\;\text{ on }\Gamma ,\]
and \( 0 \leq {G}_{2} \leq \frac{1}{3} \cdot \frac{2}{3} \).
Repeating this process, we find for each \( N \) a continuous function \( {G}_{N} \) on \( {\mathbb{R}}^{d} \) such that
\[ 0 \leq f\left( x\right) - {G}_{1}\left( x\right) - \cdots - {G}_{N}\left( x\right) \leq {\left( \frac{2}{3}\right) }^{N}\;\text{ on }\Gamma ,\]
and \( 0 \leq {G}_{N} \leq \frac{1}{3}{\left( \frac{2}{3}\right) }^{N - 1} \) on \( {\mathbb{R}}^{d} \). If we define
\[ G = \mathop{\sum }\limits_{{n = 1}}^{\infty }{G}_{n}, \]
then the series converges uniformly on \( {\mathbb{R}}^{d} \), so \( G \) is continuous, and \( G \) equals \( f \) on \( \Gamma \).
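The geometric-series construction in the proof can be imitated on a finite point set. The sketch below (the set \( \Gamma \), the function \( f \), and the number of steps are our own choices) builds each \( G_N \) from the distance-quotient function of the observation and checks the bound \( (2/3)^N \) on the residual.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(40, 2))             # finite stand-in for Gamma
f = (np.sin(pts[:, 0]) * np.cos(pts[:, 1]) + 1) / 2    # continuous, 0 <= f <= 1

def dist(X, K):
    # distance from each point of X to the finite set K
    return np.min(np.linalg.norm(X[:, None, :] - K[None, :, :], axis=2), axis=1)

r, M = f.copy(), 1.0                   # residual and its current upper bound
for _ in range(10):
    K0, K1 = pts[r >= 2 * M / 3], pts[r <= M / 3]
    if len(K0) == 0:
        G = np.zeros_like(r)
    elif len(K1) == 0:
        G = np.full_like(r, M / 3)
    else:
        # equals M/3 on K0, vanishes on K1, stays in [0, M/3]
        G = (M / 3) * dist(pts, K1) / (dist(pts, K0) + dist(pts, K1))
    r, M = r - G, 2 * M / 3
    assert np.all(r >= -1e-12) and np.all(r <= M + 1e-12)
```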
Theorem 4.12 Let \( \Omega \) be an open bounded set in \( {\mathbb{R}}^{2} \) that satisfies the outside-triangle condition. If \( f \) is a continuous function on \( \partial \Omega \), then the boundary value problem \( \bigtriangleup u = 0 \) with \( u \) continuous in \( \bar{\Omega } \) and \( {\left. u\right| }_{\partial \Omega } = f \) is always uniquely solvable.
We turn to the proof of the theorem. It is based on the following proposition, which may be viewed as a refined version of Lemma 4.9 above.

Proposition 4.13 For any bounded open set \( \Omega \) in \( {\mathbb{R}}^{2} \) that satisfies the outside-triangle condition there are two constants \( {c}_{1} < 1 \) and \( {c}_{2} > 1 \) such that the following holds. Suppose \( z \) is a point in \( \Omega \) whose distance from \( \partial \Omega \) is \( \delta \). Then whenever \( v \) belongs to \( {C}^{1}\left( \bar{\Omega }\right) \) and \( {\left. v\right| }_{\partial \Omega } = 0 \), we have

(32)
\[{\int }_{{B}_{{c}_{1}\delta }\left( z\right) }{\left| v\left( x\right) \right| }^{2}{dx} \leq C{\delta }^{2}{\int }_{{B}_{{c}_{2}\delta }\left( z\right) \cap \Omega }{\left| \nabla v\left( x\right) \right| }^{2}{dx}.\]

The bound \( C \) can be chosen to depend only on the diameter of \( \Omega \) and the parameters \( \ell \) and \( \alpha \) which determine the triangles \( T \).

Let us see how the proposition proves the theorem. We have already shown that it suffices to assume that \( f \) is the restriction to \( \partial \Omega \) of an \( F \) that belongs to \( {C}^{1}\left( \bar{\Omega }\right) \). We recall we had the minimizing sequence \( {u}_{n} = F - {v}_{n} \), with \( {v}_{n} \in {C}^{1}\left( \bar{\Omega }\right) \) and \( {\left. {v}_{n}\right| }_{\partial \Omega } = 0 \). Moreover, this sequence converges in the norm of \( \mathcal{H} \) and \( {L}^{2}\left( \Omega \right) \) to a limit \( v \), such that \( u = F - v \) is harmonic in \( \Omega \).
Then since (32) holds for each \( {v}_{n} \), it also holds for \( v = F - u \); that is,

(33)
\[{\int }_{{B}_{{c}_{1}\delta }\left( z\right) }{\left| \left( F - u\right) \left( x\right) \right| }^{2}{dx} \leq C{\delta }^{2}{\int }_{{B}_{{c}_{2}\delta }\left( z\right) \cap \Omega }{\left| \nabla \left( F - u\right) \left( x\right) \right| }^{2}{dx}.\]

To prove the theorem it suffices, in view of the continuity of \( u \) in \( \Omega \), to show that if \( y \) is any fixed point in \( \partial \Omega \), and \( z \) is a variable point in \( \Omega \), then \( u\left( z\right) \rightarrow f\left( y\right) \) as \( z \rightarrow y \). Let \( \delta = \delta \left( z\right) \) denote the distance of \( z \) from the boundary. Then \( \delta \left( z\right) \leq \left| {z - y}\right| \) and therefore \( \delta \left( z\right) \rightarrow 0 \) as \( z \rightarrow y \).

We now consider the averages of \( F \) and \( u \) taken over the discs centered at \( z \) of radius \( {c}_{1}\delta \left( z\right) \) (recall that \( {c}_{1} < 1 \), so that these discs are contained in \( \Omega \)).
Theorem 1.2 If \( {\mu }_{ * } \) is a metric exterior measure on a metric space \( X \) , then the Borel sets in \( X \) are measurable. Hence \( {\mu }_{ * } \) restricted to \( {\mathcal{B}}_{X} \) is a measure.
Proof. By the definition of \( {\mathcal{B}}_{X} \) it suffices to prove that closed sets in \( X \) are Carathéodory measurable. Therefore, let \( F \) denote a closed set and \( A \) a subset of \( X \) with \( {\mu }_{ * }\left( A\right) < \infty \). For each \( n > 0 \), let
\[ {A}_{n} = \left\{ {x \in {F}^{c} \cap A : \;d\left( {x, F}\right) \geq 1/n}\right\} . \]
Then \( {A}_{n} \subset {A}_{n + 1} \), and since \( F \) is closed we have \( {F}^{c} \cap A = \mathop{\bigcup }\limits_{{n = 1}}^{\infty }{A}_{n} \). Also, the distance between \( F \cap A \) and \( {A}_{n} \) is \( \geq 1/n \), and since \( {\mu }_{ * } \) is a metric exterior measure, we have

(2)
\[ {\mu }_{ * }\left( A\right) \geq {\mu }_{ * }\left( {\left( {F \cap A}\right) \cup {A}_{n}}\right) = {\mu }_{ * }\left( {F \cap A}\right) + {\mu }_{ * }\left( {A}_{n}\right) . \]

Next, we claim that

(3)
\[ \mathop{\lim }\limits_{{n \rightarrow \infty }}{\mu }_{ * }\left( {A}_{n}\right) = {\mu }_{ * }\left( {{F}^{c} \cap A}\right) . \]

To see this, let \( {B}_{n} = {A}_{n + 1} \cap {A}_{n}^{c} \) and note that
\[ d\left( {{B}_{n + 1},{A}_{n}}\right) \geq \frac{1}{n\left( {n + 1}\right) }. \]
Indeed, if \( x \in {B}_{n + 1} \) and \( d\left( {x, y}\right) < 1/n\left( {n + 1}\right) \), the triangle inequality shows that \( d\left( {y, F}\right) < 1/n \), hence \( y \notin {A}_{n} \).
Therefore
\[ {\mu }_{ * }\left( {A}_{{2k} + 1}\right) \geq {\mu }_{ * }\left( {{B}_{2k} \cup {A}_{{2k} - 1}}\right) = {\mu }_{ * }\left( {B}_{2k}\right) + {\mu }_{ * }\left( {A}_{{2k} - 1}\right) , \]
and this implies that
\[ {\mu }_{ * }\left( {A}_{{2k} + 1}\right) \geq \mathop{\sum }\limits_{{j = 1}}^{k}{\mu }_{ * }\left( {B}_{2j}\right) . \]
A similar argument also gives
\[ {\mu }_{ * }\left( {A}_{2k}\right) \geq \mathop{\sum }\limits_{{j = 1}}^{k}{\mu }_{ * }\left( {B}_{{2j} - 1}\right) . \]
Since \( {\mu }_{ * }\left( A\right) \) is finite, we find that both series \( \sum {\mu }_{ * }\left( {B}_{2j}\right) \) and \( \sum {\mu }_{ * }\left( {B}_{{2j} - 1}\right) \) are convergent. Finally, since \( {F}^{c} \cap A = {A}_{n} \cup \mathop{\bigcup }\limits_{{j \geq n}}{B}_{j} \), we note that
\[ {\mu }_{ * }\left( {A}_{n}\right) \leq {\mu }_{ * }\left( {{F}^{c} \cap A}\right) \leq {\mu }_{ * }\left( {A}_{n}\right) + \mathop{\sum }\limits_{{j = n}}^{\infty }{\mu }_{ * }\left( {B}_{j}\right) , \]
and this proves the limit (3). Letting \( n \) tend to infinity in the inequality (2) we find that \( {\mu }_{ * }\left( A\right) \geq {\mu }_{ * }\left( {F \cap A}\right) + {\mu }_{ * }\left( {{F}^{c} \cap A}\right) \), and hence \( F \) is measurable, as was to be shown.
Lemma 1.4 If \( {\mu }_{0} \) is a premeasure on an algebra \( \mathcal{A} \), define \( {\mu }_{ * } \) on any subset \( E \) of \( X \) by
\[ {\mu }_{ * }\left( E\right) = \inf \left\{ {\mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }_{0}\left( {E}_{j}\right) : E \subset \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j},\text{ where }{E}_{j} \in \mathcal{A}\text{ for all }j}\right\} . \]
Then, \( {\mu }_{ * } \) is an exterior measure on \( X \) that satisfies:

(i) \( {\mu }_{ * }\left( E\right) = {\mu }_{0}\left( E\right) \) for all \( E \in \mathcal{A} \).

(ii) All sets in \( \mathcal{A} \) are measurable in the sense of (1).
Proof. Proving that \( {\mu }_{ * } \) is an exterior measure presents no difficulty. To see why the restriction of \( {\mu }_{ * } \) to \( \mathcal{A} \) coincides with \( {\mu }_{0} \), suppose that \( E \in \mathcal{A} \). Clearly, one always has \( {\mu }_{ * }\left( E\right) \leq {\mu }_{0}\left( E\right) \) since \( E \) covers itself. To prove the reverse inequality, let \( E \subset \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j} \), where \( {E}_{j} \in \mathcal{A} \) for all \( j \). Then, if we set
\[ {E}_{k}^{\prime } = E \cap \left( {{E}_{k} - \mathop{\bigcup }\limits_{{j = 1}}^{{k - 1}}{E}_{j}}\right) , \]
the sets \( {E}_{k}^{\prime } \) are disjoint elements of \( \mathcal{A} \), with \( {E}_{k}^{\prime } \subset {E}_{k} \) and \( E = \mathop{\bigcup }\limits_{{k = 1}}^{\infty }{E}_{k}^{\prime } \). By (ii) in the definition of a premeasure, we have
\[ {\mu }_{0}\left( E\right) = \mathop{\sum }\limits_{{k = 1}}^{\infty }{\mu }_{0}\left( {E}_{k}^{\prime }\right) \leq \mathop{\sum }\limits_{{k = 1}}^{\infty }{\mu }_{0}\left( {E}_{k}\right) . \]
Therefore, we find that \( {\mu }_{0}\left( E\right) \leq {\mu }_{ * }\left( E\right) \), as desired.

Finally, we must prove that sets in \( \mathcal{A} \) are measurable for \( {\mu }_{ * } \). Let \( A \) be any subset of \( X \), let \( E \in \mathcal{A} \), and let \( \epsilon > 0 \).
By definition, there exists a countable collection \( {E}_{1},{E}_{2},\ldots \) of sets in \( \mathcal{A} \) such that \( A \subset \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j} \) and
\[ \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }_{0}\left( {E}_{j}\right) \leq {\mu }_{ * }\left( A\right) + \epsilon . \]
Since \( {\mu }_{0} \) is a premeasure, it is finitely additive on \( \mathcal{A} \) and therefore
\[ \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }_{0}\left( {E}_{j}\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }_{0}\left( {E \cap {E}_{j}}\right) + \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mu }_{0}\left( {{E}^{c} \cap {E}_{j}}\right) \geq {\mu }_{ * }\left( {E \cap A}\right) + {\mu }_{ * }\left( {{E}^{c} \cap A}\right) . \]
Since \( \epsilon \) is arbitrary, we conclude that \( {\mu }_{ * }\left( A\right) \geq {\mu }_{ * }\left( {E \cap A}\right) + {\mu }_{ * }\left( {{E}^{c} \cap A}\right) \), as desired.
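The disjointification \( E_k' = E \cap (E_k - \bigcup_{j<k} E_j) \) used in this proof is easy to see on finite sets. A toy illustration (all sets are our own choices, with counting measure playing the role of \( \mu_0 \)):

```python
# E is covered by overlapping sets; the pieces E_k' are disjoint and fill E.
E = set(range(10))
cover = [set(range(0, 4)), set(range(2, 7)), set(range(5, 12))]

seen = set()
pieces = []                      # the sets E_k'
for Ek in cover:
    pieces.append(E & (Ek - seen))
    seen |= Ek

assert all(p & q == set() for i, p in enumerate(pieces) for q in pieces[i + 1:])
assert set().union(*pieces) == E
# counting measure: mu0(E) = sum mu0(E_k') <= sum mu0(E_k)
assert len(E) == sum(len(p) for p in pieces) <= sum(len(Ek) for Ek in cover)
```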
Theorem 1.5 Suppose that \( \mathcal{A} \) is an algebra of sets in \( X \), \( {\mu }_{0} \) a premeasure on \( \mathcal{A} \), and \( \mathcal{M} \) the \( \sigma \)-algebra generated by \( \mathcal{A} \). Then there exists a measure \( \mu \) on \( \mathcal{M} \) that extends \( {\mu }_{0} \).
Proof. The exterior measure \( {\mu }_{ * } \) induced by \( {\mu }_{0} \) defines a measure \( \mu \) on the \( \sigma \) -algebra of Carathéodory measurable sets. Therefore, by the result in the previous lemma, \( \mu \) is also a measure on \( \mathcal{M} \) that extends \( {\mu }_{0} \) . (We should observe that in general the class \( \mathcal{M} \) is not as large as the class of all sets that are measurable in the sense of (1).)
Proposition 3.2 If \( E \) is an arbitrary measurable set in \( X \), then the conclusions of Proposition 3.1 are still valid, except that we only assert that \( {E}^{{x}_{2}} \) is \( {\mu }_{1} \)-measurable and \( {\mu }_{1}\left( {E}^{{x}_{2}}\right) \) is defined for almost every \( {x}_{2} \in {X}_{2} \).
Proof. Consider first the case when \( E \) is a set of measure zero. Then we know by Proposition 1.6 that there is a set \( F \in {\mathcal{A}}_{\sigma \delta } \) such that \( E \subset F \) and \( \left( {{\mu }_{1} \times {\mu }_{2}}\right) \left( F\right) = 0 \). Since \( {E}^{{x}_{2}} \subset {F}^{{x}_{2}} \) for every \( {x}_{2} \), and \( {F}^{{x}_{2}} \) has \( {\mu }_{1} \)-measure zero for almost every \( {x}_{2} \) by (7) applied to \( F \), the assumed completeness of the measure \( {\mu }_{1} \) shows that \( {E}^{{x}_{2}} \) is measurable and has measure zero for those \( {x}_{2} \). Thus the desired conclusion holds when \( E \) has measure zero.

If we drop this assumption on \( E \), we can invoke Proposition 1.6 again to find an \( F \in {\mathcal{A}}_{\sigma \delta } \), \( F \supset E \), such that \( F - E = Z \) has measure zero. Since \( {F}^{{x}_{2}} - {E}^{{x}_{2}} = {Z}^{{x}_{2}} \), we can apply the case we have just proved, and find that for almost all \( {x}_{2} \) the set \( {E}^{{x}_{2}} \) is measurable and \( {\mu }_{1}\left( {E}^{{x}_{2}}\right) = {\mu }_{1}\left( {F}^{{x}_{2}}\right) - {\mu }_{1}\left( {Z}^{{x}_{2}}\right) \). From this the proposition follows.
Theorem 3.3 In the setting above, suppose \( f\left( {{x}_{1},{x}_{2}}\right) \) is an integrable function on \( \left( {{X}_{1} \times {X}_{2},{\mu }_{1} \times {\mu }_{2}}\right) \).

(i) For almost every \( {x}_{2} \in {X}_{2} \), the slice \( {f}^{{x}_{2}}\left( {x}_{1}\right) = f\left( {{x}_{1},{x}_{2}}\right) \) is integrable on \( \left( {{X}_{1},{\mu }_{1}}\right) \).

(ii) \( {\int }_{{X}_{1}}f\left( {{x}_{1},{x}_{2}}\right) d{\mu }_{1} \) is an integrable function on \( {X}_{2} \).

(iii) \( {\int }_{{X}_{2}}\left( {{\int }_{{X}_{1}}f\left( {{x}_{1},{x}_{2}}\right) d{\mu }_{1}}\right) d{\mu }_{2} = {\int }_{{X}_{1} \times {X}_{2}}f\, d\left( {{\mu }_{1} \times {\mu }_{2}}\right) \).
Proof. Note that if the desired conclusions hold for finitely many functions, they also hold for their linear combinations. In particular, it suffices to assume that \( f \) is non-negative. When \( f = {\chi }_{E} \), where \( E \) is a set of finite measure, what we wish to prove is contained in Proposition 3.2. Hence the desired result also holds for simple functions. Since every non-negative measurable function is the pointwise limit of an increasing sequence of simple functions, the monotone convergence theorem then establishes the result for all non-negative integrable functions, and the theorem is proved.
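A discrete analogue of Theorem 3.3 can be checked directly: for weighted counting measures on finite sets, the iterated integral and the integral against the product measure are the same double sum. (The weights and the function below are our own toy data.)

```python
import numpy as np

rng = np.random.default_rng(1)
mu1 = rng.uniform(0.0, 1.0, 5)     # weights of mu1 on X1 = {0,...,4}
mu2 = rng.uniform(0.0, 1.0, 7)     # weights of mu2 on X2 = {0,...,6}
f = rng.normal(size=(5, 7))        # f(x1, x2)

inner = f.T @ mu1                  # x2 -> int_{X1} f(., x2) dmu1
iterated = inner @ mu2             # then integrate in x2
double = np.sum(f * np.outer(mu1, mu2))   # integral against mu1 x mu2

assert np.isclose(iterated, double)
```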
Proposition 4.1 The total variation \( \left| \nu \right| \) of a signed measure \( \nu \) is itself a (positive) measure that satisfies \( \nu \leq \left| \nu \right| \) .
Proof. Suppose \( {\left\{ {E}_{j}\right\} }_{j = 1}^{\infty } \) is a countable collection of disjoint sets in \( \mathcal{M} \), and let \( E = \bigcup {E}_{j} \). It suffices to prove:

(11)
\[ \sum \left| \nu \right| \left( {E}_{j}\right) \leq \left| \nu \right| \left( E\right) \;\text{ and }\;\left| \nu \right| \left( E\right) \leq \sum \left| \nu \right| \left( {E}_{j}\right) . \]

For each \( j \), let \( {\alpha }_{j} \) be a real number that satisfies \( {\alpha }_{j} < \left| \nu \right| \left( {E}_{j}\right) \). By definition, each \( {E}_{j} \) can be written as \( {E}_{j} = \mathop{\bigcup }\limits_{i}{F}_{i, j} \), where the \( {F}_{i, j} \) are disjoint, belong to \( \mathcal{M} \), and
\[ {\alpha }_{j} \leq \mathop{\sum }\limits_{{i = 1}}^{\infty }\left| {\nu \left( {F}_{i, j}\right) }\right| . \]
Since \( E = \mathop{\bigcup }\limits_{{i, j}}{F}_{i, j} \), we have
\[ \sum {\alpha }_{j} \leq \mathop{\sum }\limits_{{j, i}}\left| {\nu \left( {F}_{i, j}\right) }\right| \leq \left| \nu \right| \left( E\right) . \]
Consequently, taking the supremum over the numbers \( {\alpha }_{j} \) gives the first inequality in (11).

For the reverse inequality, let \( \left\{ {F}_{k}\right\} \) be any other partition of \( E \). For fixed \( k \), \( {\left\{ {F}_{k} \cap {E}_{j}\right\} }_{j} \) is a partition of \( {F}_{k} \), so
\[ \mathop{\sum }\limits_{k}\left| {\nu \left( {F}_{k}\right) }\right| = \mathop{\sum }\limits_{k}\left| {\mathop{\sum }\limits_{j}\nu \left( {{F}_{k} \cap {E}_{j}}\right) }\right| , \]
since \( \nu \) is a signed measure.
An application of the triangle inequality and the fact that \( {\left\{ {F}_{k} \cap {E}_{j}\right\} }_{k} \) is a partition of \( {E}_{j} \) gives
\[ \mathop{\sum }\limits_{k}\left| {\nu \left( {F}_{k}\right) }\right| \leq \mathop{\sum }\limits_{k}\mathop{\sum }\limits_{j}\left| {\nu \left( {{F}_{k} \cap {E}_{j}}\right) }\right| = \mathop{\sum }\limits_{j}\mathop{\sum }\limits_{k}\left| {\nu \left( {{F}_{k} \cap {E}_{j}}\right) }\right| \leq \mathop{\sum }\limits_{j}\left| \nu \right| \left( {E}_{j}\right) . \]
Since \( \left\{ {F}_{k}\right\} \) was an arbitrary partition of \( E \), we obtain the second inequality in (11) and the proof is complete.
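For a signed measure with finitely many point masses, the supremum defining \( \left| \nu \right| \) can be computed by brute force over partitions, and both the additivity in (11) and the bound \( \nu(E) \leq \left| \nu \right|(E) \) can be observed directly. (The masses below are our own toy data.)

```python
from itertools import product

w = [2.0, -3.0, 1.0, -0.5, 4.0]            # point masses of nu on X = {0,...,4}
nu = lambda E: sum(w[x] for x in E)

def total_variation(E):
    # sup over partitions {F_k} of E of sum_k |nu(F_k)|, by enumerating
    # all assignments of the points of E to labelled blocks
    E = list(E)
    best = 0.0
    for labels in product(range(len(E)), repeat=len(E)):
        blocks = [[x for x, l in zip(E, labels) if l == b] for b in range(len(E))]
        best = max(best, sum(abs(nu(F)) for F in blocks if F))
    return best

E1, E2 = {0, 1}, {2, 3, 4}
assert total_variation(E1 | E2) == total_variation(E1) + total_variation(E2)
assert nu(E1 | E2) <= total_variation(E1 | E2)
```

Here the brute-force supremum is attained by the partition into singletons, for which the sum is \( \sum_{x \in E} |w(x)| \): the total variation simply adds up the absolute sizes of the masses.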
Proposition 4.2 The assertion (14) implies (12). Conversely, if \( \left| \nu \right| \) is a finite measure, then (12) implies (14).
That (12) is a consequence of (14) is obvious because \( \mu \left( E\right) = 0 \) gives \( \left| {\nu \left( E\right) }\right| < \epsilon \) for every \( \epsilon > 0 \). To prove the converse, it suffices to consider the case when \( \nu \) is positive, upon replacing \( \nu \) by \( \left| \nu \right| \). We then assume that (14) does not hold. This means that it fails for some fixed \( \epsilon > 0 \). Hence for each \( n \), there is a measurable set \( {E}_{n} \) with \( \mu \left( {E}_{n}\right) < {2}^{-n} \) while \( \nu \left( {E}_{n}\right) \geq \epsilon \). Now let \( {E}^{ * } = \mathop{\limsup }\limits_{{n \rightarrow \infty }}{E}_{n} = \mathop{\bigcap }\limits_{{n = 1}}^{\infty }{E}_{n}^{ * } \), where \( {E}_{n}^{ * } = \mathop{\bigcup }\limits_{{k \geq n}}{E}_{k} \). Then since \( \mu \left( {E}_{n}^{ * }\right) \leq \mathop{\sum }\limits_{{k \geq n}}1/{2}^{k} = 1/{2}^{n - 1} \), and the decreasing sets \( \left\{ {E}_{n}^{ * }\right\} \) are contained in a set of finite measure \( \left( {E}_{1}^{ * }\right) \), we get \( \mu \left( {E}^{ * }\right) = 0 \). However \( \nu \left( {E}_{n}^{ * }\right) \geq \nu \left( {E}_{n}\right) \geq \epsilon \), and the \( \nu \) measure is assumed finite. So \( \nu \left( {E}^{ * }\right) = \mathop{\lim }\limits_{{n \rightarrow \infty }}\nu \left( {E}_{n}^{ * }\right) \geq \epsilon \), which contradicts (12), since \( \mu \left( {E}^{ * }\right) = 0 \) while \( \nu \left( {E}^{ * }\right) \neq 0 \).
Theorem 4.3 Suppose \( \mu \) is a \( \sigma \)-finite positive measure on the measure space \( \left( {X,\mathcal{M}}\right) \) and \( \nu \) a \( \sigma \)-finite signed measure on \( \mathcal{M} \). Then there exist unique signed measures \( {\nu }_{a} \) and \( {\nu }_{s} \) on \( \mathcal{M} \) such that \( {\nu }_{a} \ll \mu \), \( {\nu }_{s} \bot \mu \) and \( \nu = {\nu }_{a} + {\nu }_{s} \). In addition, the measure \( {\nu }_{a} \) takes the form \( d{\nu }_{a} = {fd\mu } \); that is,

\[ {\nu }_{a}\left( E\right) = {\int }_{E}f\left( x\right) {d\mu }\left( x\right) \]

for some extended \( \mu \)-integrable function \( f \).
We start with the case when both \( \nu \) and \( \mu \) are positive and finite. Let \( \rho = \nu + \mu \), and consider the transformation on \( {L}^{2}\left( {X,\rho }\right) \) defined by

\[ \ell \left( \psi \right) = {\int }_{X}\psi \left( x\right) {d\nu }\left( x\right) . \]

The mapping \( \ell \) defines a bounded linear functional on \( {L}^{2}\left( {X,\rho }\right) \) since

\[ \left| {\ell \left( \psi \right) }\right| \leq {\int }_{X}\left| {\psi \left( x\right) }\right| {d\nu }\left( x\right) \leq {\int }_{X}\left| {\psi \left( x\right) }\right| {d\rho }\left( x\right) \leq {\left( \rho \left( X\right) \right) }^{1/2}{\left( {\int }_{X}{\left| \psi \left( x\right) \right| }^{2}d\rho \left( x\right) \right) }^{1/2}, \]

where the last inequality follows by the Cauchy-Schwarz inequality. But \( {L}^{2}\left( {X,\rho }\right) \) is a Hilbert space, so the Riesz representation theorem (in Chapter 4) guarantees the existence of \( g \in {L}^{2}\left( {X,\rho }\right) \) such that

(15)

\[ {\int }_{X}\psi \left( x\right) {d\nu }\left( x\right) = {\int }_{X}\psi \left( x\right) g\left( x\right) {d\rho }\left( x\right) \;\text{ for all }\psi \in {L}^{2}\left( {X,\rho }\right) . \]

If \( E \in \mathcal{M} \) with \( \rho \left( E\right) > 0 \), then setting \( \psi = {\chi }_{E} \) in (15) and recalling that \( \nu \leq \rho \), we find

\[ 0 \leq \frac{1}{\rho \left( E\right) }{\int }_{E}g\left( x\right) {d\rho }\left( x\right) \leq 1, \]

from which we conclude that \( 0 \leq g\left( x\right) \leq 1 \) for a.e. \( x \) (with respect to the measure \( \rho \)). In fact, \( 0 \leq {\int }_{E}g\left( x\right) {d\rho }\left( x\right) \) for all sets \( E \in \mathcal{M} \) implies that \( g\left( x\right) \geq 0 \) almost everywhere. In the same way, \( 0 \leq {\int }_{E}\left( {1 - g\left( x\right) }\right) {d\rho }\left( x\right) \) for all \( E \in \mathcal{M} \) guarantees that \( g\left( x\right) \leq 1 \) almost everywhere.
Therefore we may clearly assume \( 0 \leq g\left( x\right) \leq 1 \) for all \( x \) without disturbing the identity (15), which we rewrite as

(16)

\[ \int \psi \left( {1 - g}\right) {d\nu } = \int {\psi gd\mu }. \]

Consider now the two sets

\[ A = \{ x \in X : 0 \leq g\left( x\right) < 1\} \;\text{ and }\;B = \{ x \in X : g\left( x\right) = 1\} , \]

and define two measures \( {\nu }_{a} \) and \( {\nu }_{s} \) on \( \mathcal{M} \) by

\[ {\nu }_{a}\left( E\right) = \nu \left( {A \cap E}\right) \;\text{ and }\;{\nu }_{s}\left( E\right) = \nu \left( {B \cap E}\right) . \]

To see why \( {\nu }_{s} \bot \mu \), it suffices to note that setting \( \psi = {\chi }_{B} \) in (16) gives

\[ 0 = \int {\chi }_{B}{d\mu } = \mu \left( B\right) . \]

Finally, we set \( \psi = {\chi }_{E}\left( {1 + g + \cdots + {g}^{n}}\right) \) in (16):

(17)

\[ {\int }_{E}\left( {1 - {g}^{n + 1}}\right) {d\nu } = {\int }_{E}g\left( {1 + \cdots + {g}^{n}}\right) {d\mu }. \]

Since \( 1 - {g}^{n + 1} \) increases to 1 on \( A \) and vanishes on \( B \), while the integrands \( g\left( {1 + \cdots + {g}^{n}}\right) \) increase to a (possibly extended-valued) limit \( f \), letting \( n \rightarrow \infty \) in (17) by monotone convergence yields

\[ {\nu }_{a}\left( E\right) = \nu \left( {A \cap E}\right) = {\int }_{E}f{d\mu }, \]

so that in particular \( {\nu }_{a} \ll \mu \).
Lemma 5.2 The following relations hold among the subspaces \( S \), \( {S}_{ * } \), and \( \overline{{S}_{1}} \).

(i) \( S = {S}_{ * } \).

(ii) The orthogonal complement of \( \overline{{S}_{1}} \) is \( S \).
Proof. First, since \( T \) is an isometry, we have that \( \left( {{Tf},{Tg}}\right) = \left( {f, g}\right) \) for all \( f, g \in \mathcal{H} \), and thus \( {T}^{ * }T = I \). (See Exercise 22 in Chapter 4.) So if \( {Tf} = f \) then \( {T}^{ * }{Tf} = {T}^{ * }f \), which means that \( f = {T}^{ * }f \). To prove the converse inclusion, assume \( {T}^{ * }f = f \). As a consequence \( \left( {f,{T}^{ * }f - f}\right) = 0 \), and thus \( \left( {f,{T}^{ * }f}\right) - \left( {f, f}\right) = 0 \); that is, \( \left( {{Tf}, f}\right) = \parallel f{\parallel }^{2} \). However, \( \parallel {Tf}\parallel = \parallel f\parallel \), so we have in the above an instance of equality for the Cauchy-Schwarz inequality. As a result of Exercise 2 in Chapter 4 we get \( {Tf} = {cf} \), which by the above gives \( {Tf} = f \). Thus part (i) is proved.

Next we observe that \( f \) is in the orthogonal complement of \( \overline{{S}_{1}} \) exactly when \( \left( {f, g - {Tg}}\right) = 0 \) for all \( g \in \mathcal{H} \). However, this means that \( \left( {f - {T}^{ * }f, g}\right) = 0 \) for all \( g \), and hence \( f = {T}^{ * }f \), which by part (i) means \( f \in S \).
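The identity \( S = {S}_{ * } \) can be checked concretely for a finite-dimensional isometry. The sketch below (illustrative, not from the text) takes \( T \) to be a permutation operator on \( {\mathbb{R}}^{4} \); its fixed vectors are those constant on each cycle of the permutation, and the same is true for \( {T}^{ * } \).

```python
import math

# Illustrative sketch: a permutation sigma acting by (Tf)(i) = f(sigma(i))
# is an isometry of R^4 whose adjoint is (T*g)(i) = g(sigma^{-1}(i)).
# Lemma 5.2(i) says Tf = f exactly when T*f = f.

sigma = [1, 2, 0, 3]                      # a 3-cycle on {0,1,2}, 3 fixed
sigma_inv = [sigma.index(i) for i in range(len(sigma))]

def T(f):      return [f[sigma[i]] for i in range(len(f))]
def T_star(f): return [f[sigma_inv[i]] for i in range(len(f))]

def dot(f, g): return sum(a * b for a, b in zip(f, g))

# T is an isometry: ||Tf|| = ||f|| for a sample vector
f = [3.0, -1.0, 2.0, 5.0]
assert math.isclose(dot(T(f), T(f)), dot(f, f))

# a vector constant on the cycle {0,1,2} is fixed by both T and T*
g = [7.0, 7.0, 7.0, -4.0]
assert T(g) == g and T_star(g) == g

# a vector not constant on the cycle is fixed by neither
assert T(f) != f and T_star(f) != f
```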
Theorem 5.4 Suppose \( f \) is integrable over \( X \) . Then for almost every \( x \in X \) the averages \( {A}_{m}\left( f\right) = \frac{1}{m}\mathop{\sum }\limits_{{k = 0}}^{{m - 1}}f\left( {{\tau }^{k}\left( x\right) }\right) \) converge to a limit as \( m \rightarrow \infty \) .
The idea of the proof is as follows. We first show that \( {A}_{m}\left( f\right) \) converges to a limit almost everywhere for a set of functions \( f \) that is dense in \( {L}^{1}\left( {X,\mu }\right) \) . We then use the maximal theorem to show that this implies the conclusion for all integrable functions.
Theorem 6.1 Suppose \( T \) is a bounded symmetric operator on a Hilbert space \( \mathcal{H} \). Then there exists a spectral resolution \( \{ E\left( \lambda \right) \} \) such that

\[ T = {\int }_{{a}^{ - }}^{b}{\lambda dE}\left( \lambda \right) \]

in the sense that for every \( f, g \in \mathcal{H} \)

(32)

\[ \left( {{Tf}, g}\right) = {\int }_{{a}^{ - }}^{b}{\lambda d}\left( {E\left( \lambda \right) f, g}\right) = {\int }_{{a}^{ - }}^{b}{\lambda dF}\left( \lambda \right) . \]
The integral on the right-hand side is taken in the Lebesgue-Stieltjes sense, as in (iii) and (iv) of Section 3.3.

The result encompasses the spectral theorem for compact symmetric operators \( T \) in the following sense. Let \( \left\{ {\varphi }_{k}\right\} \) be an orthonormal basis of eigenvectors of \( T \) with corresponding eigenvalues \( {\lambda }_{k} \), as guaranteed by Theorem 6.2 in Chapter 4. In this case, we take the spectral resolution to be defined via this orthogonal expansion by

\[ E\left( \lambda \right) f \sim \mathop{\sum }\limits_{{{\lambda }_{k} \leq \lambda }}\left( {f,{\varphi }_{k}}\right) {\varphi }_{k}, \]

and one easily verifies that it satisfies conditions (i), (ii) and (iii) above. We also note that \( \parallel E\left( \lambda \right) f{\parallel }^{2} = \mathop{\sum }\limits_{{{\lambda }_{k} \leq \lambda }}{\left| \left( f,{\varphi }_{k}\right) \right| }^{2} \), and thus \( F\left( \lambda \right) = \left( {E\left( \lambda \right) f, g}\right) \) is a pure jump function as in Section 3.3 in Chapter 3.
Proposition 6.2 Suppose \( T \) is symmetric. Then \( \parallel T\parallel \leq M \) if and only if \( - {MI} \leq \) \( T \leq {MI} \) . As a result, \( \parallel T\parallel = \max \left( {\left| a\right| ,\left| b\right| }\right) \) .
This is a consequence of (7) in Chapter 4.
Proposition 6.4 If \( {T}_{1} \) and \( {T}_{2} \) are positive operators that commute, then \( {T}_{1}{T}_{2} \) is also positive.
Indeed, if \( S \) is a square root of \( {T}_{1} \) given in the previous proposition, then \( {T}_{1}{T}_{2} = \) \( {SS}{T}_{2} = S{T}_{2}S \), and hence \( \left( {{T}_{1}{T}_{2}f, f}\right) = \left( {S{T}_{2}{Sf}, f}\right) = \left( {{T}_{2}{Sf},{Sf}}\right) \), since \( S \) is symmetric, and thus the last term is positive.
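A finite-dimensional sketch (illustrative, not from the text): two commuting positive \( 2 \times 2 \) matrices, the second taken to be a polynomial in the first so that they commute, whose product is again positive on sample vectors.

```python
# Illustrative check of Proposition 6.4 for 2x2 matrices: T1 is positive,
# and T2 = T1^2 is positive and commutes with T1, so T1*T2 is positive.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def apply(A, f):
    return [A[0][0]*f[0] + A[0][1]*f[1], A[1][0]*f[0] + A[1][1]*f[1]]

def dot(f, g): return f[0]*g[0] + f[1]*g[1]

T1 = [[2.0, 1.0], [1.0, 2.0]]             # eigenvalues 1 and 3: positive
T2 = matmul(T1, T1)                       # T2 = T1^2: positive, commutes

P = matmul(T1, T2)
for f in [[1.0, 0.0], [0.0, 1.0], [1.0, -1.0], [-2.0, 3.0]]:
    assert dot(apply(P, f), f) >= 0.0     # (T1 T2 f, f) >= 0
```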
Proposition 6.5 Suppose \( T \) is symmetric and a and \( b \) are given by (33). If \( p\left( t\right) = \) \( \mathop{\sum }\limits_{{k = 0}}^{n}{c}_{k}{t}^{k} \) is a real polynomial which is positive for \( t \in \left\lbrack {a, b}\right\rbrack \), then the operator \( p\left( T\right) = \mathop{\sum }\limits_{{k = 0}}^{n}{c}_{k}{T}^{k} \) is positive.
To see this, write \( p\left( t\right) = c\mathop{\prod }\limits_{j}\left( {t - {\rho }_{j}}\right) \mathop{\prod }\limits_{k}\left( {{\rho }_{k}^{\prime } - t}\right) \mathop{\prod }\limits_{\ell }\left( {{\left( t - {\mu }_{\ell }\right) }^{2} + {\nu }_{\ell }^{2}}\right) \), where \( c \) is positive and the third factor corresponds to the non-real roots of \( p\left( t\right) \) (arising in conjugate pairs), and the real roots of \( p\left( t\right) \) lying in \( \left( {a, b}\right) \), which are necessarily of even order. The first factor contains the real roots \( {\rho }_{j} \) with \( {\rho }_{j} \leq a \), and the second factor the real roots \( {\rho }_{k}^{\prime } \) with \( {\rho }_{k}^{\prime } \geq b \). Since each of the factors \( T - {\rho }_{j}I \), \( {\rho }_{k}^{\prime }I - T \) and \( {\left( T - {\mu }_{\ell }I\right) }^{2} + {\nu }_{\ell }^{2}I \) is positive and these commute, the desired conclusion follows from the previous proposition.
Corollary 6.6 If \( p\left( t\right) \) is a real polynomial, then

\[ \parallel p\left( T\right) \parallel \leq \mathop{\sup }\limits_{{t \in \left\lbrack {a, b}\right\rbrack }}\left| {p\left( t\right) }\right| . \]
This is an immediate consequence using Proposition 6.2, since \( - M \leq p\left( t\right) \leq M \) , where \( M = \mathop{\sup }\limits_{{t \in \left\lbrack {a, b}\right\rbrack }}\left| {p\left( t\right) }\right| \), and thus \( - {MI} \leq p\left( T\right) \leq {MI} \) .
Proposition 6.7 Suppose \( \left\{ {T}_{n}\right\} \) is a sequence of positive operators that satisfy \( {T}_{n} \geq {T}_{n + 1} \) for all \( n \) . Then there is a positive operator \( T \), such that \( {T}_{n}f \rightarrow {Tf} \) as \( n \rightarrow \infty \) for every \( f \in \mathcal{H} \) .
Proof. We note that for each fixed \( f \in \mathcal{H} \) the sequence of non-negative numbers \( \left( {{T}_{n}f, f}\right) \) is decreasing and hence convergent. Now observe that for any positive operator \( S \) with \( \parallel S\parallel \leq M \) we have

(35)

\[ \parallel S\left( f\right) {\parallel }^{2} \leq {\left( Sf, f\right) }^{1/2}{M}^{3/2}\parallel f\parallel . \]

In fact, the quadratic function \( \left( {S\left( {{tI} + S}\right) f,\left( {{tI} + S}\right) f}\right) = {t}^{2}\left( {{Sf}, f}\right) + {2t}\left( {{Sf},{Sf}}\right) + \left( {{S}^{2}f,{Sf}}\right) \) is non-negative for all real \( t \). Hence its discriminant is non-positive, that is, \( \parallel S\left( f\right) {\parallel }^{4} \leq \left( {{Sf}, f}\right) \left( {{S}^{2}f,{Sf}}\right) \), and since \( \left( {{S}^{2}f,{Sf}}\right) \leq \begin{Vmatrix}{{S}^{2}f}\end{Vmatrix}\parallel {Sf}\parallel \leq {M}^{3}\parallel f{\parallel }^{2} \), (35) follows. We apply this to \( S = {T}_{n} - {T}_{m} \) with \( n \leq m \); then \( \begin{Vmatrix}{{T}_{n} - {T}_{m}}\end{Vmatrix} \leq \begin{Vmatrix}{T}_{n}\end{Vmatrix} \leq \begin{Vmatrix}{T}_{1}\end{Vmatrix} = M \), and since \( \left( {\left( {{T}_{n} - {T}_{m}}\right) f, f}\right) \rightarrow 0 \) as \( n, m \rightarrow \infty \) we see that \( \begin{Vmatrix}{{T}_{n}f - {T}_{m}f}\end{Vmatrix} \rightarrow 0 \) as \( n, m \rightarrow \infty \). Thus \( \mathop{\lim }\limits_{{n \rightarrow \infty }}{T}_{n}\left( f\right) = T\left( f\right) \) exists, and \( T \) is also clearly positive.
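Inequality (35) can be tested numerically. The sketch below (illustrative, not from the text) uses a positive \( 2 \times 2 \) matrix \( S \) with \( \parallel S\parallel = 3 \) and random vectors \( f \).

```python
import math, random

# Numeric check of inequality (35):
#   ||Sf||^2 <= (Sf, f)^{1/2} M^{3/2} ||f||
# for a positive matrix S with ||S|| <= M.

S = [[2.0, 1.0], [1.0, 2.0]]              # symmetric, eigenvalues 1 and 3
M = 3.0                                   # operator norm of S

def apply(A, f): return [A[0][0]*f[0] + A[0][1]*f[1],
                         A[1][0]*f[0] + A[1][1]*f[1]]
def dot(f, g):   return f[0]*g[0] + f[1]*g[1]

random.seed(0)
for _ in range(1000):
    f = [random.uniform(-5, 5), random.uniform(-5, 5)]
    Sf = apply(S, f)
    lhs = dot(Sf, Sf)                                  # ||Sf||^2
    rhs = math.sqrt(dot(Sf, f)) * M**1.5 * math.sqrt(dot(f, f))
    assert lhs <= rhs + 1e-9
```

Equality is attained when \( f \) is an eigenvector for the top eigenvalue, so the exponent \( 3/2 \) on \( M \) cannot be improved in general.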
Proposition 6.8 If \( T \) is symmetric, then \( \sigma \left( T\right) \) is a closed subset of the interval \( \left\lbrack {a, b}\right\rbrack \) given by (33).
Note that if \( z \notin \left\lbrack {a, b}\right\rbrack \), the function \( \Phi \left( t\right) = {\left( t - z\right) }^{-1} \) is continuous on \( \left\lbrack {a, b}\right\rbrack \) and \( \Phi \left( T\right) \left( {T - {zI}}\right) = \left( {T - {zI}}\right) \Phi \left( T\right) = I \), so \( \Phi \left( T\right) \) is the inverse of \( T - {zI} \). Now suppose \( {T}_{0} = T - {\lambda }_{0}I \) is invertible. Then we claim that \( {T}_{0} - {\epsilon I} \) is invertible for all (complex) \( \epsilon \) that are sufficiently small. This will prove that the complement of \( \sigma \left( T\right) \) is open. Indeed, \( {T}_{0} - {\epsilon I} = {T}_{0}\left( {I - \epsilon {T}_{0}^{-1}}\right) \), and inverting \( I - \epsilon {T}_{0}^{-1} \) (formally) by its Neumann series, the inverse of \( {T}_{0} - {\epsilon I} \) should be given by the sum

\[ \mathop{\sum }\limits_{{n = 0}}^{\infty }{\epsilon }^{n}{\left( {T}_{0}^{-1}\right) }^{n + 1}. \]

Since \( \mathop{\sum }\limits_{{n = 0}}^{\infty }\begin{Vmatrix}{{\epsilon }^{n}{\left( {T}_{0}^{-1}\right) }^{n + 1}}\end{Vmatrix} \leq \sum {\left| \epsilon \right| }^{n}{\begin{Vmatrix}{T}_{0}^{-1}\end{Vmatrix}}^{n + 1} \), the series converges when \( \left| \epsilon \right| < {\begin{Vmatrix}{T}_{0}^{-1}\end{Vmatrix}}^{-1} \), and the sum is majorized by

(37)

\[ \begin{Vmatrix}{T}_{0}^{-1}\end{Vmatrix}\frac{1}{1 - \left| \epsilon \right| \begin{Vmatrix}{T}_{0}^{-1}\end{Vmatrix}}. \]

Thus we can define the operator \( {\left( {T}_{0} - \epsilon I\right) }^{-1} \) as \( \mathop{\lim }\limits_{{N \rightarrow \infty }}\mathop{\sum }\limits_{{n = 0}}^{N}{\epsilon }^{n}{\left( {T}_{0}^{-1}\right) }^{n + 1} \), and it gives the desired inverse, as is easily verified.
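The Neumann-series construction can be checked numerically in two dimensions; the matrix \( {T}_{0} \) and the value of \( \epsilon \) below are made up for the illustration.

```python
# Illustrative sketch of the Neumann-series inverse: for an invertible
# 2x2 matrix T0 and small eps, the partial sums of
#   sum_n eps^n (T0^{-1})^{n+1}
# multiply T0 - eps*I to something close to the identity.

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B, s=1.0):
    return [[A[i][j] + s*B[i][j] for j in range(2)] for i in range(2)]

T0 = [[3.0, 1.0], [0.0, 2.0]]
det = T0[0][0]*T0[1][1] - T0[0][1]*T0[1][0]
T0inv = [[ T0[1][1]/det, -T0[0][1]/det],
         [-T0[1][0]/det,  T0[0][0]/det]]

eps = 0.1                                  # well below 1/||T0^{-1}||
power = [row[:] for row in T0inv]          # holds (T0^{-1})^{n+1}
partial = [[0.0, 0.0], [0.0, 0.0]]
for n in range(60):
    partial = add(partial, [[eps**n * power[i][j] for j in range(2)]
                            for i in range(2)])
    power = matmul(power, T0inv)

A = add(T0, [[1.0, 0.0], [0.0, 1.0]], s=-eps)   # T0 - eps*I
prod = matmul(partial, A)
assert all(abs(prod[i][j] - (1.0 if i == j else 0.0)) < 1e-9
           for i in range(2) for j in range(2))
```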
Proposition 6.9 For each \( f \in \mathcal{H} \), the Lebesgue-Stieltjes measure corresponding to \( F\left( \lambda \right) = \left( {E\left( \lambda \right) f, f}\right) \) is supported on \( \sigma \left( T\right) \) .
To prove this, let \( J \) be one of the open intervals in the complement of \( \sigma \left( T\right) \), \( {x}_{0} \in J \), and \( {J}_{0} \) the sub-interval centered at \( {x}_{0} \) of length \( {2\epsilon } \), with \( \epsilon < {\begin{Vmatrix}{\left( T - {x}_{0}I\right) }^{-1}\end{Vmatrix}}^{-1} \). First note that if \( z \) has non-vanishing imaginary part then \( {\left( T - zI\right) }^{-1} \) is given by \( {\Phi }_{z}\left( T\right) \), with \( {\Phi }_{z}\left( t\right) = {\left( t - z\right) }^{-1} \). Hence \( {\left( T - zI\right) }^{-1}{\left( T - \bar{z}I\right) }^{-1} \) is given by \( {\Psi }_{z}\left( T\right) \), with \( {\Psi }_{z}\left( t\right) = 1/{\left| t - z\right| }^{2} \). Therefore by the estimate given in (37) and the representation (36) applied to \( \Phi = {\Psi }_{z} \), we obtain

\[ \int \frac{{dF}\left( \lambda \right) }{{\left| \lambda - z\right| }^{2}} \leq {A}^{\prime } \]

as long as \( z \) is complex and \( \left| {{x}_{0} - z}\right| < \epsilon \). We can therefore obtain the same inequality for \( x \) real, \( \left| {{x}_{0} - x}\right| < \epsilon \). Now integration in \( x \in {J}_{0} \), using the fact that \( {\int }_{{J}_{0}}\frac{dx}{{\left| \lambda - x\right| }^{2}} = \infty \) for every \( \lambda \in {J}_{0} \), gives \( {\int }_{{J}_{0}}{dF}\left( \lambda \right) = 0 \). Thus \( F\left( \lambda \right) \) is constant in \( {J}_{0} \), but since \( {x}_{0} \) was an arbitrary point of \( J \) the function \( F\left( \lambda \right) \) is constant throughout \( J \), and the proposition is proved.
Property 1 (Monotonicity) If \( {E}_{1} \subset {E}_{2} \), then \( {m}_{\alpha }^{ * }\left( {E}_{1}\right) \leq {m}_{\alpha }^{ * }\left( {E}_{2}\right) \) .
This is straightforward, since any cover of \( {E}_{2} \) is also a cover of \( {E}_{1} \) .
Property 2 (Sub-additivity) \( {m}_{\alpha }^{ * }\left( {\mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j}}\right) \leq \mathop{\sum }\limits_{{j = 1}}^{\infty }{m}_{\alpha }^{ * }\left( {E}_{j}\right) \) for any countable family \( \left\{ {E}_{j}\right\} \) of sets in \( {\mathbb{R}}^{d} \) .
For the proof, let \( E = \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j} \), fix \( \delta \) and \( \epsilon > 0 \), and choose for each \( j \) a cover \( {\left\{ {F}_{j, k}\right\} }_{k = 1}^{\infty } \) of \( {E}_{j} \) by sets of diameter less than \( \delta \) such that \( \mathop{\sum }\limits_{k}{\left( \operatorname{diam}{F}_{j, k}\right) }^{\alpha } \leq {\mathcal{H}}_{\alpha }^{\delta }\left( {E}_{j}\right) + \epsilon /{2}^{j} \). Since \( \mathop{\bigcup }\limits_{{j, k}}{F}_{j, k} \) is a cover of \( E \) by sets of diameter less than \( \delta \), we must have

\[ {\mathcal{H}}_{\alpha }^{\delta }\left( E\right) \leq \mathop{\sum }\limits_{{j = 1}}^{\infty }{\mathcal{H}}_{\alpha }^{\delta }\left( {E}_{j}\right) + \epsilon \leq \mathop{\sum }\limits_{{j = 1}}^{\infty }{m}_{\alpha }^{ * }\left( {E}_{j}\right) + \epsilon . \]

Since \( \epsilon \) is arbitrary, the inequality \( {\mathcal{H}}_{\alpha }^{\delta }\left( E\right) \leq \sum {m}_{\alpha }^{ * }\left( {E}_{j}\right) \) holds, and we let \( \delta \) tend to 0 to prove the countable sub-additivity of \( {m}_{\alpha }^{ * } \).
Property 3 If \( d\left( {{E}_{1},{E}_{2}}\right) > 0 \), then \( {m}_{\alpha }^{ * }\left( {{E}_{1} \cup {E}_{2}}\right) = {m}_{\alpha }^{ * }\left( {E}_{1}\right) + {m}_{\alpha }^{ * }\left( {E}_{2}\right) \) .
It suffices to prove that \( {m}_{\alpha }^{ * }\left( {{E}_{1} \cup {E}_{2}}\right) \geq {m}_{\alpha }^{ * }\left( {E}_{1}\right) + {m}_{\alpha }^{ * }\left( {E}_{2}\right) \) since the reverse inequality is guaranteed by sub-additivity. Fix \( \epsilon > 0 \) with \( \epsilon < d\left( {{E}_{1},{E}_{2}}\right) \). Given any cover of \( {E}_{1} \cup {E}_{2} \) with sets \( {F}_{1},{F}_{2},\ldots \) of diameter less than \( \delta \), where \( \delta < \epsilon \), we let

\[ {F}_{j}^{\prime } = {E}_{1} \cap {F}_{j}\;\text{ and }\;{F}_{j}^{\prime \prime } = {E}_{2} \cap {F}_{j}. \]

Then \( \left\{ {F}_{j}^{\prime }\right\} \) and \( \left\{ {F}_{j}^{\prime \prime }\right\} \) are covers for \( {E}_{1} \) and \( {E}_{2} \), respectively, and are disjoint. Hence,

\[ \mathop{\sum }\limits_{j}{\left( \operatorname{diam}{F}_{j}^{\prime }\right) }^{\alpha } + \mathop{\sum }\limits_{i}{\left( \operatorname{diam}{F}_{i}^{\prime \prime }\right) }^{\alpha } \leq \mathop{\sum }\limits_{k}{\left( \operatorname{diam}{F}_{k}\right) }^{\alpha }. \]

Taking the infimum over the coverings, and then letting \( \delta \) tend to zero, yields the desired inequality.
Property 4 If \( \left\{ {E}_{j}\right\} \) is a countable family of disjoint Borel sets, and \( E = \mathop{\bigcup }\limits_{{j = 1}}^{\infty }{E}_{j} \), then
\[ {m}_{\alpha }\left( E\right) = \mathop{\sum }\limits_{{j = 1}}^{\infty }{m}_{\alpha }\left( {E}_{j}\right) . \]
Property 5 Hausdorff measure is invariant under translations

\[ {m}_{\alpha }\left( {E + h}\right) = {m}_{\alpha }\left( E\right) \;\text{ for all }h \in {\mathbb{R}}^{d}, \]

and rotations

\[ {m}_{\alpha }\left( {rE}\right) = {m}_{\alpha }\left( E\right) , \]

where \( r \) is a rotation in \( {\mathbb{R}}^{d} \).
Moreover, it scales as follows:

\[ {m}_{\alpha }\left( {\lambda E}\right) = {\lambda }^{\alpha }{m}_{\alpha }\left( E\right) \;\text{ for all }\lambda > 0. \]

These conclusions follow once we observe that the diameter of a set \( S \) is invariant under translations and rotations, and satisfies \( \operatorname{diam}\left( {\lambda S}\right) = \lambda \operatorname{diam}\left( S\right) \) for \( \lambda > 0 \).
Property 7 If \( E \) is a Borel subset of \( {\mathbb{R}}^{d} \), then \( {c}_{d}{m}_{d}\left( E\right) = m\left( E\right) \) for some constant \( {c}_{d} \) that depends only on the dimension \( d \) .
The constant \( {c}_{d} \) equals \( m\left( B\right) /{\left( \operatorname{diam}B\right) }^{d} \) for the unit ball \( B \); note that this ratio is the same for all balls \( B \) in \( {\mathbb{R}}^{d} \), and so \( {c}_{d} = {v}_{d}/{2}^{d} \) (where \( {v}_{d} \) denotes the volume of the unit ball). The proof of this property relies on the so-called iso-diametric inequality, which states that among all sets of a given diameter, the ball has largest volume. (See Problem 2.) Without using this geometric fact one can prove the following substitute.

Property \( {\mathbf{7}}^{\prime } \) If \( E \) is a Borel subset of \( {\mathbb{R}}^{d} \) and \( m\left( E\right) \) is its Lebesgue measure, then \( {m}_{d}\left( E\right) \approx m\left( E\right) \), in the sense that

\[ {c}_{d}{m}_{d}\left( E\right) \leq m\left( E\right) \leq {2}^{d}{c}_{d}{m}_{d}\left( E\right) . \]

Using Exercise 26 in Chapter 3 we can find for every \( \epsilon ,\delta > 0 \) a covering of \( E \) by balls \( \left\{ {B}_{j}\right\} \), such that \( \operatorname{diam}{B}_{j} < \delta \), while \( \mathop{\sum }\limits_{j}m\left( {B}_{j}\right) \leq m\left( E\right) + \epsilon \). Now,

\[ {\mathcal{H}}_{d}^{\delta }\left( E\right) \leq \mathop{\sum }\limits_{j}{\left( \operatorname{diam}{B}_{j}\right) }^{d} = {c}_{d}^{-1}\mathop{\sum }\limits_{j}m\left( {B}_{j}\right) \leq {c}_{d}^{-1}\left( {m\left( E\right) + \epsilon }\right) . \]

Letting \( \delta \) and \( \epsilon \) tend to 0, we get \( {m}_{d}\left( E\right) \leq {c}_{d}^{-1}m\left( E\right) \). For the reverse direction, let \( E \subset \mathop{\bigcup }\limits_{j}{F}_{j} \) be a covering with \( \mathop{\sum }\limits_{j}{\left( \operatorname{diam}{F}_{j}\right) }^{d} \leq {m}_{d}\left( E\right) + \epsilon \). We can always find closed balls \( {B}_{j} \) centered at a point of \( {F}_{j} \) so that \( {B}_{j} \supset {F}_{j} \) and \( \operatorname{diam}{B}_{j} = 2\operatorname{diam}{F}_{j} \).
However, \( m\left( E\right) \leq \mathop{\sum }\limits_{j}m\left( {B}_{j}\right) \), since \( E \subset \mathop{\bigcup }\limits_{j}{B}_{j} \), and the last sum equals

\[ \sum {c}_{d}{\left( \operatorname{diam}{B}_{j}\right) }^{d} = {2}^{d}{c}_{d}\sum {\left( \operatorname{diam}{F}_{j}\right) }^{d} \leq {2}^{d}{c}_{d}\left( {{m}_{d}\left( E\right) + \epsilon }\right) . \]

Letting \( \epsilon \rightarrow 0 \) gives \( m\left( E\right) \leq {2}^{d}{c}_{d}{m}_{d}\left( E\right) \).
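The constant \( {c}_{d} = {v}_{d}/{2}^{d} \) is easy to compute explicitly from \( {v}_{d} = {\pi }^{d/2}/\Gamma \left( {d/2 + 1}\right) \); the sketch below (illustrative) also confirms that \( m\left( B\right) /{\left( \operatorname{diam}B\right) }^{d} \) does not depend on the radius of \( B \). Note \( {c}_{1} = 1 \), consistent with \( {m}_{1} \) agreeing with Lebesgue measure on \( \mathbb{R} \).

```python
import math

# Illustrative computation of the constant c_d = v_d / 2^d in Property 7,
# where v_d = pi^{d/2} / Gamma(d/2 + 1) is the volume of the unit ball.

def unit_ball_volume(d):
    return math.pi**(d / 2) / math.gamma(d / 2 + 1)

c = {d: unit_ball_volume(d) / 2**d for d in (1, 2, 3)}
assert math.isclose(c[1], 1.0)            # m_1 = Lebesgue measure on R
assert math.isclose(c[2], math.pi / 4)
assert math.isclose(c[3], math.pi / 6)

# the ratio m(B)/(diam B)^d is the same for every ball B of radius r
d = 3
for r in (0.5, 1.0, 2.5):
    volume = unit_ball_volume(d) * r**d
    assert math.isclose(volume / (2 * r)**d, c[d])
```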
Theorem 2.1 The Cantor set \( \mathcal{C} \) has strict Hausdorff dimension \( \alpha = \) \( \log 2/\log 3 \) .
The inequality

\[ {m}_{\alpha }\left( \mathcal{C}\right) \leq 1 \]

follows from the construction of \( \mathcal{C} \) and the definitions. Indeed, recall from Chapter 1 that \( \mathcal{C} = \bigcap {C}_{k} \), where each \( {C}_{k} \) is a finite union of \( {2}^{k} \) intervals of length \( {3}^{-k} \). Given \( \delta > 0 \), we first choose \( K \) so large that \( {3}^{-K} < \delta \). Since the set \( {C}_{K} \) covers \( \mathcal{C} \) and consists of \( {2}^{K} \) intervals of diameter \( {3}^{-K} < \delta \), we must have

\[ {\mathcal{H}}_{\alpha }^{\delta }\left( \mathcal{C}\right) \leq {2}^{K}{\left( {3}^{-K}\right) }^{\alpha }. \]

However, \( \alpha \) satisfies precisely \( {3}^{\alpha } = 2 \), hence \( {2}^{K}{\left( {3}^{-K}\right) }^{\alpha } = 1 \), and therefore \( {m}_{\alpha }\left( \mathcal{C}\right) \leq 1 \).
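The arithmetic behind the upper bound can be verified directly: for \( \alpha = \log 2/\log 3 \), every natural cover \( {C}_{K} \) has \( \alpha \)-sum exactly 1 (an illustrative check, not part of the proof).

```python
import math

# Illustrative check: with alpha = log 2 / log 3, the cover of the Cantor
# set by the 2^K intervals of C_K, each of length 3^{-K}, has alpha-sum
# exactly 1 for every K, which gives m_alpha(C) <= 1.

alpha = math.log(2) / math.log(3)
assert math.isclose(3**alpha, 2)          # alpha solves 3^alpha = 2

for K in range(1, 40):
    covering_sum = 2**K * (3**(-K))**alpha
    assert math.isclose(covering_sum, 1.0)
```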
Lemma 2.2 Suppose a function \( f \) defined on a compact set \( E \) satisfies a Lipschitz condition with exponent \( \gamma \), that is, \( \left| {f\left( x\right) - f\left( y\right) }\right| \leq M{\left| x - y\right| }^{\gamma } \) for all \( x, y \in E \). Then

(i) \( {m}_{\beta }\left( {f\left( E\right) }\right) \leq {M}^{\beta }{m}_{\alpha }\left( E\right) \) if \( \beta = \alpha /\gamma \).

(ii) \( \dim f\left( E\right) \leq \frac{1}{\gamma }\dim E \).
Proof. Suppose \( \left\{ {F}_{k}\right\} \) is a countable family of sets that covers \( E \). Then \( \left\{ {f\left( {E \cap {F}_{k}}\right) }\right\} \) covers \( f\left( E\right) \) and, moreover, \( f\left( {E \cap {F}_{k}}\right) \) has diameter less than \( M{\left( \operatorname{diam}{F}_{k}\right) }^{\gamma } \). Hence

\[ \mathop{\sum }\limits_{k}{\left( \operatorname{diam}f\left( E \cap {F}_{k}\right) \right) }^{\alpha /\gamma } \leq {M}^{\alpha /\gamma }\mathop{\sum }\limits_{k}{\left( \operatorname{diam}{F}_{k}\right) }^{\alpha }, \]

and part (i) follows. This result now immediately implies conclusion (ii).
Lemma 2.3 The Cantor-Lebesgue function \( F \) on \( \mathcal{C} \) satisfies a Lipschitz condition with exponent \( \gamma = \log 2/\log 3 \) .
Proof. The function \( F \) was constructed in Section 3.1 of Chapter 3 as the limit of a sequence \( \left\{ {F}_{n}\right\} \) of piecewise linear functions. The function \( {F}_{n} \) increases by at most \( {2}^{-n} \) on each interval of length \( {3}^{-n} \). So the slope of \( {F}_{n} \) is always bounded by \( {\left( 3/2\right) }^{n} \), and hence

\[ \left| {{F}_{n}\left( x\right) - {F}_{n}\left( y\right) }\right| \leq {\left( \frac{3}{2}\right) }^{n}\left| {x - y}\right| . \]

Moreover, the approximating sequence also satisfies \( \left| {F\left( x\right) - {F}_{n}\left( x\right) }\right| \leq 1/{2}^{n} \). These two estimates together with an application of the triangle inequality give

\[ \left| {F\left( x\right) - F\left( y\right) }\right| \leq \left| {{F}_{n}\left( x\right) - {F}_{n}\left( y\right) }\right| + \left| {F\left( x\right) - {F}_{n}\left( x\right) }\right| + \left| {F\left( y\right) - {F}_{n}\left( y\right) }\right| \leq {\left( \frac{3}{2}\right) }^{n}\left| {x - y}\right| + \frac{2}{{2}^{n}}. \]

Having fixed \( x \) and \( y \), we then minimize the right-hand side by choosing \( n \) so that both terms have the same order of magnitude. This is achieved by taking \( n \) so that \( {3}^{n}\left| {x - y}\right| \) is between 1 and 3. Then, we see that

\[ \left| {F\left( x\right) - F\left( y\right) }\right| \leq c{2}^{-n} = c{\left( {3}^{-n}\right) }^{\gamma } \leq M{\left| x - y\right| }^{\gamma }, \]

since \( {3}^{\gamma } = 2 \) and \( {3}^{-n} \) is not greater than \( \left| {x - y}\right| \). This argument is repeated in Lemma 2.8 below.
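The Hölder bound can be sampled numerically. The sketch below is illustrative and uses the standard ternary description of \( F \) on \( \mathcal{C} \): if \( x = \sum {a}_{k}{3}^{-k} \) with digits \( {a}_{k} \in \{ 0,2\} \), then \( F\left( x\right) = \sum \left( {{a}_{k}/2}\right) {2}^{-k} \); the truncation depth and sample count are arbitrary choices.

```python
import math, random
from fractions import Fraction

# Illustrative sampling of the Holder quotient |F(x)-F(y)| / |x-y|^gamma
# for Cantor points, using the ternary digit description of F on C.
# Exact rational arithmetic avoids catastrophic cancellation in x - y.

gamma = math.log(2) / math.log(3)
N = 30                                     # digits kept per point

def point_and_value(digits):
    x = sum(Fraction(a, 3**k) for k, a in enumerate(digits, start=1))
    Fx = sum(Fraction(a, 2**(k + 1)) for k, a in enumerate(digits, start=1))
    return x, Fx

random.seed(1)
worst = 0.0
for _ in range(2000):
    x, Fx = point_and_value([random.choice((0, 2)) for _ in range(N)])
    y, Fy = point_and_value([random.choice((0, 2)) for _ in range(N)])
    if x != y:
        q = float(abs(Fx - Fy)) / float(abs(x - y)) ** gamma
        worst = max(worst, q)

# if the digits first differ at position k then |x-y| >= 3^{-k} while
# |F(x)-F(y)| <= 2^{-k+1}, so the quotient never exceeds 2
assert worst <= 2.0 + 1e-9
```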
Theorem 2.5 The Sierpinski triangle \( \mathcal{S} \) has strict Hausdorff dimension \( \alpha = \log 3/\log 2 \) .
The inequality \( {m}_{\alpha }\left( \mathcal{S}\right) \leq 1 \) follows immediately from the construction. Given \( \delta > 0 \), choose \( K \) so that \( {2}^{-K} < \delta \). Since the set \( {S}_{K} \) covers \( \mathcal{S} \) and consists of \( {3}^{K} \) triangles each of diameter \( {2}^{-K} < \delta \), we must have

\[ {\mathcal{H}}_{\alpha }^{\delta }\left( \mathcal{S}\right) \leq {3}^{K}{\left( {2}^{-K}\right) }^{\alpha }. \]

But since \( {2}^{\alpha } = 3 \), we find \( {\mathcal{H}}_{\alpha }^{\delta }\left( \mathcal{S}\right) \leq 1 \), hence \( {m}_{\alpha }\left( \mathcal{S}\right) \leq 1 \).
Lemma 2.6 Suppose \( B \) is a ball in the covering \( \mathcal{B} \) that satisfies

\[ {2}^{-\ell } \leq \operatorname{diam}B < {2}^{-\ell + 1}\;\text{ for some }\ell \leq k. \]

Then \( B \) contains at most \( c{3}^{k - \ell } \) vertices of the \( {k}^{\text{th }} \) generation.
Proof of Lemma 2.6. Let \( {B}^{ * } \) denote the ball with the same center as \( B \) but three times its diameter, and let \( {\bigtriangleup }_{k} \) be a triangle of the \( {k}^{\text{th }} \) generation whose vertex \( v \) lies in \( B \). If \( {\bigtriangleup }_{\ell }^{\prime } \) denotes the triangle of the \( {\ell }^{\text{th }} \) generation that contains \( {\bigtriangleup }_{k} \), then since \( \operatorname{diam}B \geq {2}^{-\ell } \),

\[ v \in {\bigtriangleup }_{k} \subset {\bigtriangleup }_{\ell }^{\prime } \subset {B}^{ * }, \]

as shown in Figure 2. (Figure 2: the setting in Lemma 2.6.)

Next, there is a positive constant \( c \) such that \( {B}^{ * } \) can contain at most \( c \) distinct triangles of the \( {\ell }^{\text{th }} \) generation. This is because triangles of the \( {\ell }^{\text{th }} \) generation have disjoint interiors and area equal to \( {c}^{\prime }{4}^{-\ell } \), while \( {B}^{ * } \) has area at most equal to \( {c}^{\prime \prime }{4}^{-\ell } \). Finally, each \( {\bigtriangleup }_{\ell }^{\prime } \) contains \( {3}^{k - \ell } \) triangles of the \( {k}^{\text{th }} \) generation, hence \( B \) can contain at most \( c{3}^{k - \ell } \) vertices of triangles of the \( {k}^{\text{th }} \) generation.
Lemma 2.8 Suppose \( \left\{ {f}_{j}\right\} \) is a sequence of continuous functions on the interval \( \left\lbrack {0,1}\right\rbrack \) that satisfy

\[ \left| {{f}_{j}\left( t\right) - {f}_{j}\left( s\right) }\right| \leq {A}^{j}\left| {t - s}\right| \;\text{ for some }A > 1, \]

and

\[ \left| {{f}_{j}\left( t\right) - {f}_{j + 1}\left( t\right) }\right| \leq {B}^{-j}\;\text{ for some }B > 1. \]

Then the limit \( f\left( t\right) = \mathop{\lim }\limits_{{j \rightarrow \infty }}{f}_{j}\left( t\right) \) exists and satisfies

\[ \left| {f\left( t\right) - f\left( s\right) }\right| \leq M{\left| t - s\right| }^{\gamma }, \]

where \( \gamma = \log B/\log \left( {AB}\right) \).
Proof. The continuous limit \( f \) is given by the uniformly convergent series

\[ f\left( t\right) = {f}_{1}\left( t\right) + \mathop{\sum }\limits_{{k = 1}}^{\infty }\left( {{f}_{k + 1}\left( t\right) - {f}_{k}\left( t\right) }\right) , \]

and therefore

\[ \left| {f\left( t\right) - {f}_{j}\left( t\right) }\right| \leq \mathop{\sum }\limits_{{k = j}}^{\infty }\left| {{f}_{k + 1}\left( t\right) - {f}_{k}\left( t\right) }\right| \leq \mathop{\sum }\limits_{{k = j}}^{\infty }{B}^{-k} \leq c{B}^{-j}. \]

The triangle inequality, an application of the inequality just obtained, and the inequality in the statement of the lemma give

\[ \left| {f\left( t\right) - f\left( s\right) }\right| \leq \left| {{f}_{j}\left( t\right) - {f}_{j}\left( s\right) }\right| + \left| {\left( {f - {f}_{j}}\right) \left( t\right) }\right| + \left| {\left( {f - {f}_{j}}\right) \left( s\right) }\right| \leq c\left( {{A}^{j}\left| {t - s}\right| + {B}^{-j}}\right) . \]

For a fixed pair of numbers \( t \) and \( s \) with \( t \neq s \), we choose \( j \) to minimize the sum \( {A}^{j}\left| {t - s}\right| + {B}^{-j} \). This is essentially achieved by picking \( j \) so that the two terms \( {A}^{j}\left| {t - s}\right| \) and \( {B}^{-j} \) are comparable. More precisely, we choose a \( j \) that satisfies

\[ {\left( AB\right) }^{j}\left| {t - s}\right| \leq 1\;\text{ and }\;1 \leq {\left( AB\right) }^{j + 1}\left| {t - s}\right| . \]

Since \( \left| {t - s}\right| \leq 2 \) and \( {AB} > 1 \), such a \( j \) must exist.
The first inequality then gives\n\n\[ \n{A}^{j}\\left| {t - s}\\right| \\leq {B}^{-j}\n\]\n\nwhile raising the second inequality to the power \( \\gamma \), and using the fact that \( {\\left( AB\\right) }^{\\gamma } = B \) gives\n\n\[ \n1 \\leq {B}^{j}{\\left| t - s\\right| }^{\\gamma }\n\]\n\nThus \( {B}^{-j} \\leq {\\left| t - s\\right| }^{\\gamma } \), and consequently\n\n\[ \n\\left| {f\\left( t\\right) - f\\left( s\\right) }\\right| \\leq c\\left( {{A}^{j}\\left| {t - s}\\right| + {B}^{-j}}\\right) \\leq M{\\left| t - s\\right| }^{\\gamma },\n\]\n\nas was to be shown.
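The optimization over \( j \) in the proof can be checked numerically. The sketch below is an illustration (not from the text): it takes the arbitrary values \( A = 3 \), \( B = 2 \), so \( \gamma = \log 2/\log 6 \), selects \( j \) as in the proof, and verifies the resulting inequalities (with the factor \( B \) that gets absorbed into the constant \( M \)).

```python
import math

A, B = 3.0, 2.0
gamma = math.log(B) / math.log(A * B)   # Holder exponent of the lemma

def best_j(dist):
    """Largest j >= 0 with (A*B)**j * dist <= 1, as chosen in the proof."""
    j = 0
    while (A * B) ** (j + 1) * dist <= 1:
        j += 1
    return j

for dist in [0.9, 0.5, 0.1, 1e-3, 1e-7]:
    j = best_j(dist)
    # the two defining inequalities for j
    assert (A * B) ** j * dist <= 1 < (A * B) ** (j + 1) * dist
    # first consequence: A^j * dist <= B^(-j)
    assert A ** j * dist <= B ** (-j)
    # second consequence: B^(-j) <= B * dist^gamma (factor B absorbed into M)
    assert B ** (-j) <= B * dist ** gamma * (1 + 1e-9)
```

With this choice of \( j \) the two terms stay within a bounded factor of each other, which is exactly what the proof needs.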
Theorem 2.9 Suppose \( {S}_{1},{S}_{2},\ldots ,{S}_{m} \) are \( m \) similarities, each with the same ratio \( r \) that satisfies \( 0 < r < 1 \). Then there exists a unique nonempty compact set \( F \) such that

\[ F = {S}_{1}\left( F\right) \cup \cdots \cup {S}_{m}\left( F\right) . \]
The proof of this theorem is in the nature of a fixed point argument. We shall begin with some large ball \( B \) and iteratively apply the mappings \( {S}_{1},\ldots ,{S}_{m} \) . The fact that the similarities have ratio \( r < 1 \) will suffice to imply that this process contracts to a unique set \( F \) with the desired property.
Lemma 2.10 There exists a closed ball \( B \) so that \( {S}_{j}\left( B\right) \subset B \) for all \( j = 1,\ldots, m \) .
Proof. Indeed, we note that if \( S \) is a similarity with ratio \( r \), then

\[ \left| {S\left( x\right) }\right| \leq \left| {S\left( x\right) - S\left( 0\right) }\right| + \left| {S\left( 0\right) }\right| \]

\[ \leq r\left| x\right| + \left| {S\left( 0\right) }\right| . \]

If we require that \( \left| x\right| \leq R \) implies \( \left| {S\left( x\right) }\right| \leq R \), it suffices to choose \( R \) so that \( {rR} + \left| {S\left( 0\right) }\right| \leq R \), that is, \( R \geq \left| {S\left( 0\right) }\right| /\left( {1 - r}\right) \). In this fashion, we obtain for each \( {S}_{j} \) a ball \( {B}_{j} \) centered at the origin that satisfies \( {S}_{j}\left( {B}_{j}\right) \subset {B}_{j} \). If \( B \) denotes the ball among the \( {B}_{j} \) with the largest radius, then the above shows that \( {S}_{j}\left( B\right) \subset B \) for all \( j \).
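The choice \( R \geq \left| {S\left( 0\right) }\right| /\left( {1 - r}\right) \) can be illustrated numerically for similarities on the line, \( S_j(x) = rx + b_j \), where \( b_j = S_j(0) \). The ratios and translations below are arbitrary hypothetical values, not from the text.

```python
# Check that R = max_j |b_j| / (1 - r) gives S_j([-R, R]) inside [-R, R]
# for the 1-D similarities S_j(x) = r*x + b_j (illustrative values).
r = 0.5
bs = [1.0, -2.0, 0.25]                 # b_j = S_j(0), chosen arbitrarily
R = max(abs(b) for b in bs) / (1 - r)  # = 4.0 here

for b in bs:
    # the image of [-R, R] under x -> r*x + b is the interval [b - r*R, b + r*R]
    lo, hi = b - r * R, b + r * R
    assert -R <= lo and hi <= R
```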
Lemma 2.11 The distance function dist defined on compact subsets of \( {\mathbb{R}}^{d} \) satisfies

(i) \( \operatorname{dist}\left( {A, B}\right) = 0 \) if and only if \( A = B \).

(ii) \( \operatorname{dist}\left( {A, B}\right) = \operatorname{dist}\left( {B, A}\right) \).

(iii) \( \operatorname{dist}\left( {A, B}\right) \leq \operatorname{dist}\left( {A, C}\right) + \operatorname{dist}\left( {C, B}\right) \).

If \( {S}_{1},\ldots ,{S}_{m} \) are similarities with ratio \( r \), then

(iv) \( \operatorname{dist}\left( {\widetilde{S}\left( A\right) ,\widetilde{S}\left( B\right) }\right) \leq r\operatorname{dist}\left( {A, B}\right) \).
The proof of the lemma is simple and may be left to the reader.
Theorem 2.12 Suppose \( {S}_{1},{S}_{2},\ldots ,{S}_{m} \) are \( m \) separated similarities with the common ratio \( r \) that satisfies \( 0 < r < 1 \) . Then the set \( F \) has Hausdorff dimension equal to \( \log m/\log \left( {1/r}\right) \) .
We now turn to the proof of Theorem 2.12, which will follow the same approach used in the case of the Sierpinski triangle. If \( \alpha = \log m/\log \left( {1/r}\right) \), we claim that \( {m}_{\alpha }\left( F\right) < \infty \), hence \( \dim F \leq \alpha \). Moreover, this inequality holds even without the separation assumption. Indeed, recall that

\[
{F}_{k} = {\widetilde{S}}^{k}\left( B\right)
\]

and \( {\widetilde{S}}^{k}\left( B\right) \) is the union of \( {m}^{k} \) sets of diameter less than \( c{r}^{k} \) (with \( c = \operatorname{diam}B \) ), each of the form

\[
{S}_{{n}_{1}} \circ {S}_{{n}_{2}} \circ \cdots \circ {S}_{{n}_{k}}\left( B\right) ,\;\text{ where }1 \leq {n}_{i} \leq m\text{ and }1 \leq i \leq k.
\]

Consequently, if \( c{r}^{k} \leq \delta \), then

\[
{\mathcal{H}}_{\alpha }^{\delta }\left( F\right) \leq \mathop{\sum }\limits_{{{n}_{1},\ldots ,{n}_{k}}}{\left( \operatorname{diam}{S}_{{n}_{1}} \circ \cdots \circ {S}_{{n}_{k}}\left( B\right) \right) }^{\alpha } \leq {c}^{\prime }{m}^{k}{r}^{{\alpha k}} \leq {c}^{\prime },
\]

since \( m{r}^{\alpha } = 1 \), because \( \alpha = \log m/\log \left( {1/r}\right) \). Since \( {c}^{\prime } \) is independent of \( \delta \), we get \( {m}_{\alpha }\left( F\right) \leq {c}^{\prime } \).

To prove \( {m}_{\alpha }\left( F\right) > 0 \), we now use the separation condition. We argue in parallel with the earlier calculation of the Hausdorff dimension of the Sierpinski triangle.

Fix a point \( \bar{x} \) in \( F \). We define the
Lemma 2.13 Suppose \( B \) is a ball in the covering \( \mathcal{B} \) that satisfies

\[
{r}^{\ell } \leq \operatorname{diam}B < {r}^{\ell - 1}\;\text{ for some }\ell \leq k.
\]

Then \( B \) contains at most \( c{m}^{k - \ell } \) vertices of the \( {k}^{\text{th }} \) generation.
Proof. If \( v \) is a vertex of the \( {k}^{\text{th }} \) generation with \( v \in B \), and \( \mathcal{O}\left( v\right) \) denotes the corresponding open set of the \( {k}^{\text{th }} \) generation, then, for some fixed dilate \( {B}^{ * } \) of \( B \), properties (a) and (b) above guarantee that \( \mathcal{O}\left( v\right) \subset {B}^{ * } \), and \( {B}^{ * } \) also contains the open set of generation \( \ell \) that contains \( \mathcal{O}\left( v\right) \).

Since \( {B}^{ * } \) has volume \( c{r}^{d\ell } \), and each open set in the \( {\ell }^{\text{th }} \) generation has volume \( \approx {r}^{d\ell } \) (by property (b) above), \( {B}^{ * } \) can contain at most \( c \) open sets of generation \( \ell \). Hence \( {B}^{ * } \) contains at most \( c{m}^{k - \ell } \) open sets of the \( {k}^{\text{th }} \) generation. Consequently, \( B \) can contain at most \( c{m}^{k - \ell } \) vertices of the \( {k}^{\text{th }} \) generation, and the lemma is proved.
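With both bounds in place, the dimension formula \( \dim F = \log m/\log \left( {1/r}\right) \) can be sanity-checked on familiar self-similar sets. A small numerical illustration (not part of the text):

```python
import math

def similarity_dimension(m, r):
    """Dimension log m / log(1/r) of the attractor of m similarities of ratio r."""
    return math.log(m) / math.log(1 / r)

# middle-thirds Cantor set: 2 maps of ratio 1/3
assert abs(similarity_dimension(2, 1/3) - math.log(2) / math.log(3)) < 1e-12
# Sierpinski triangle: 3 maps of ratio 1/2
assert abs(similarity_dimension(3, 1/2) - math.log(3) / math.log(2)) < 1e-12
# sanity check: 4 maps of ratio 1/2 tile the unit square, dimension 2
assert abs(similarity_dimension(4, 1/2) - 2.0) < 1e-12
```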
Corollary 3.2 There are subsets \( {Z}_{1} \subset \left\lbrack {0,1}\right\rbrack \) and \( {Z}_{2} \subset \left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack \), each of measure zero, such that \( \mathcal{P} \) is bijective from
\[ \left\lbrack {0,1}\right\rbrack - {Z}_{1}\;\text{to}\;\left\lbrack {0,1}\right\rbrack \times \left\lbrack {0,1}\right\rbrack - {Z}_{2} \] and measure preserving. In other words, \( E \) is measurable if and only if \( \mathcal{P}\left( E\right) \) is measurable, and \[ {m}_{1}\left( E\right) = {m}_{2}\left( {\mathcal{P}\left( E\right) }\right) \] Here \( {m}_{1} \) and \( {m}_{2} \) denote the Lebesgue measures in \( {\mathbb{R}}^{1} \) and \( {\mathbb{R}}^{2} \), respectively.
Proposition 3.3 Chains of quartic intervals satisfy the following properties:

(i) If \( \left\{ {I}^{k}\right\} \) is a chain of quartic intervals, then there exists a unique \( t \in \left\lbrack {0,1}\right\rbrack \) such that \( t \in \mathop{\bigcap }\limits_{k}{I}^{k} \).

Proof. Part (i) follows from the fact that \( \left\{ {I}^{k}\right\} \) is a decreasing sequence of compact sets whose diameters go to 0.
Lemma 3.6 Let

\[
{E}_{0} = \left\{ {x = \mathop{\sum }\limits_{{k = 1}}^{\infty }{a}_{k}/{4}^{k},\text{ where }{a}_{k} \neq {f}_{k}\text{ for all sufficiently large }k}\right\} .
\]

Then \( m\left( {E}_{0}\right) = 0 \).

Indeed, if we fix \( r \), then \( m\left( \left\{ {x : {a}_{r} \neq {f}_{r}}\right\} \right) = 3/4 \), and

\[
m\left( \left\{ {x : {a}_{r} \neq {f}_{r}\text{ and }{a}_{r + 1} \neq {f}_{r + 1}}\right\} \right) = {\left( 3/4\right) }^{2},\;\text{ etc. }
\]

Thus \( m\left( \left\{ {x : {a}_{k} \neq {f}_{k},\text{ all }k \geq r}\right\} \right) = 0 \), and \( {E}_{0} \) is a countable union of such sets, from which the lemma follows.
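The probabilities \( {\left( 3/4\right) }^{n} \) can be confirmed by direct counting over finite base-4 digit strings. The sketch below is an illustration with an arbitrary hypothetical target sequence \( f \): it enumerates all strings of length \( n \) over \( \{ 0,1,2,3\} \) and counts those that avoid \( f \) in every position.

```python
from itertools import product

n = 6
f = [0, 3, 1, 2, 0, 1]   # hypothetical digits f_1, ..., f_n

# count digit strings (a_1, ..., a_n) over {0,1,2,3} with a_k != f_k for every k
count = sum(
    1
    for digits in product(range(4), repeat=n)
    if all(a != b for a, b in zip(digits, f))
)

# each position excludes exactly one of the 4 digits, so the count is 3^n;
# the corresponding set of x has measure (3/4)^n, which tends to 0
assert count == 3 ** n
```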
Lemma 3.8 If \( \Phi \) is the dyadic correspondence in Lemma 3.7, then \( {\Phi }^{ * }\left( t\right) = \) \( \mathcal{P}\left( t\right) \) for every \( 0 \leq t \leq 1 \) .
Proof. First, we observe that \( {\Phi }^{ * }\left( t\right) \) is unambiguously defined for every \( t \). Indeed, suppose \( \left\{ {I}^{k}\right\} \) and \( \left\{ {J}^{k}\right\} \) are two chains of quartic intervals with \( t \in \mathop{\bigcap }\limits_{k}{I}^{k} \) and \( t \in \mathop{\bigcap }\limits_{k}{J}^{k} \) ; then \( {I}^{k} \) and \( {J}^{k} \) must be adjacent for all sufficiently large \( k \). Thus \( \Phi \left( {I}^{k}\right) \) and \( \Phi \left( {J}^{k}\right) \) must be adjacent squares for all sufficiently large \( k \). Hence

\[ \mathop{\bigcap }\limits_{k}\Phi \left( {I}^{k}\right) = \mathop{\bigcap }\limits_{k}\Phi \left( {J}^{k}\right) .\]

Next, directly from our construction we have

\[ \mathop{\bigcap }\limits_{k}\Phi \left( {I}^{k}\right) = \lim {\mathcal{P}}_{k}\left( t\right) = \mathcal{P}\left( t\right) .\]

This gives the desired conclusion.
Theorem 4.3 There exists a set \( \mathcal{B} \) in \( {\mathbb{R}}^{2} \) that:

(i) is compact,

(ii) has Lebesgue measure zero,

(iii) contains a translate of every unit line segment.
Note that with \( F = \mathcal{B} \) and \( \gamma \in {S}^{1} \) one has \( {m}_{1}\left( {F \cap {\mathcal{P}}_{{t}_{0},\gamma }}\right) \geq 1 \) for some \( {t}_{0} \) . If \( {m}_{1}\left( {F \cap {\mathcal{P}}_{t,\gamma }}\right) \) were continuous in \( t \), then this measure would be strictly positive for an interval in \( t \) containing \( {t}_{0} \), and thus we would have \( {m}_{2}\left( F\right) > 0 \), by Fubini’s theorem. This contradiction shows that the analogue of Theorem 4.1 cannot hold for \( d = 2 \) .
Lemma 4.7 If \( f \) is continuous with compact support, then for every \( \gamma \in {S}^{d - 1} \) we have

\[ \widehat{\mathcal{R}}\left( f\right) \left( {\lambda ,\gamma }\right) = \widehat{f}\left( {\lambda \gamma }\right) .\]

Proof. For each unit vector \( \gamma \) we use the adapted coordinate system described above: \( x = \left( {{x}_{1},\ldots ,{x}_{d}}\right) \), where \( \gamma \) coincides with the \( {x}_{d} \) direction. We can then write each \( x \in {\mathbb{R}}^{d} \) as \( x = \left( {u, t}\right) \) with \( u \in {\mathbb{R}}^{d - 1}, t \in \mathbb{R} \), where \( x \cdot \gamma = t = {x}_{d} \) and \( u = \left( {{x}_{1},\ldots ,{x}_{d - 1}}\right) \). Moreover,

\[ {\int }_{{\mathcal{P}}_{t,\gamma }}f = {\int }_{{\mathbb{R}}^{d - 1}}f\left( {u, t}\right) {du},\]

and Fubini’s theorem shows that \( {\int }_{{\mathbb{R}}^{d}}f\left( x\right) {dx} = {\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathcal{P}}_{t,\gamma }}f}\right) {dt} \). Applying this to \( f\left( x\right) {e}^{-{2\pi ix} \cdot \left( {\lambda \gamma }\right) } \) in place of \( f\left( x\right) \) gives

\[ \widehat{f}\left( {\lambda \gamma }\right) = {\int }_{{\mathbb{R}}^{d}}f\left( x\right) {e}^{-{2\pi ix} \cdot \left( {\lambda \gamma }\right) }{dx} = {\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathbb{R}}^{d - 1}}f\left( {u, t}\right) {du}}\right) {e}^{-{2\pi i\lambda t}}{dt} = {\int }_{-\infty }^{\infty }\left( {{\int }_{{\mathcal{P}}_{t,\gamma }}f}\right) {e}^{-{2\pi i\lambda t}}{dt}.\]

Therefore \( \widehat{f}\left( {\lambda \gamma }\right) = \widehat{\mathcal{R}}\left( f\right) \left( {\lambda ,\gamma }\right) \), and the lemma is proved.
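This identity (often called the projection-slice relation) can be tested numerically in \( d = 2 \) for a Gaussian, whose transforms are known in closed form. The grid, step size, and test frequency below are arbitrary illustrative choices: with \( f\left( x\right) = {e}^{-\pi {\left| x\right| }^{2}} \) and \( \gamma = \left( {0,1}\right) \), both sides should equal \( {e}^{-\pi {\lambda }^{2}} \).

```python
import cmath, math

h = 0.05
pts = [-4 + h * j for j in range(161)]   # grid over [-4, 4]

def f(x1, x2):
    # f(x) = exp(-pi |x|^2), whose Fourier transform is exp(-pi |xi|^2)
    return math.exp(-math.pi * (x1 * x1 + x2 * x2))

lam = 1.0                                 # test frequency; gamma = (0, 1)

# Radon transform along gamma: integrate over the line x2 = t,
# then take the 1-D Fourier transform in t (Riemann sums)
radon_hat = sum(
    sum(f(u, t) for u in pts) * h * cmath.exp(-2j * math.pi * lam * t) * h
    for t in pts
)

# compare with the 2-D Fourier transform at lam * gamma = (0, lam)
analytic = math.exp(-math.pi * lam * lam)
assert abs(radon_hat - analytic) < 1e-6
```

The Gaussian decays so fast that truncating to \( \left\lbrack {-4,4}\right\rbrack \) and using a crude Riemann sum already gives agreement to many digits.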
Lemma 4.8 If \( f \) is continuous with compact support, then

\[
{\int }_{{S}^{d - 1}}\left( {{\int }_{-\infty }^{\infty }{\left| \widehat{\mathcal{R}}\left( f\right) \left( {\lambda ,\gamma }\right) \right| }^{2}{\left| \lambda \right| }^{d - 1}{d\lambda }}\right) {d\sigma }\left( \gamma \right) = 2{\int }_{{\mathbb{R}}^{d}}{\left| f\left( x\right) \right| }^{2}{dx}.
\]

Proof. The Plancherel formula in Chapter 5 guarantees that

\[
2{\int }_{{\mathbb{R}}^{d}}{\left| f\left( x\right) \right| }^{2}{dx} = 2{\int }_{{\mathbb{R}}^{d}}{\left| \widehat{f}\left( \xi \right) \right| }^{2}{d\xi }.
\]

Changing to polar coordinates \( \xi = {\lambda \gamma } \), where \( \lambda > 0 \) and \( \gamma \in {S}^{d - 1} \), we obtain

\[
2{\int }_{{\mathbb{R}}^{d}}{\left| \widehat{f}\left( \xi \right) \right| }^{2}{d\xi } = 2{\int }_{{S}^{d - 1}}{\int }_{0}^{\infty }{\left| \widehat{f}\left( {\lambda \gamma }\right) \right| }^{2}{\lambda }^{d - 1}{d\lambda d\sigma }\left( \gamma \right) .
\]

We now observe that a simple change of variables provides

\[
{\int }_{{S}^{d - 1}}{\int }_{0}^{\infty }{\left| \widehat{f}\left( {\lambda \gamma }\right) \right| }^{2}{\lambda }^{d - 1}{d\lambda d\sigma }\left( \gamma \right) = {\int }_{{S}^{d - 1}}{\int }_{-\infty }^{0}{\left| \widehat{f}\left( {\lambda \gamma }\right) \right| }^{2}{\left| \lambda \right| }^{d - 1}{d\lambda d\sigma }\left( \gamma \right) ,
\]

and the proof is complete once we invoke the result of Lemma 4.7.
Lemma 4.9 Suppose

\[ F\left( t\right) = {\int }_{-\infty }^{\infty }\widehat{F}\left( \lambda \right) {e}^{{2\pi i\lambda t}}{d\lambda },\]

where

\[ \mathop{\sup }\limits_{{\lambda \in \mathbb{R}}}\left| {\widehat{F}\left( \lambda \right) }\right| \leq A\;\text{ and }\;{\int }_{-\infty }^{\infty }{\left| \widehat{F}\left( \lambda \right) \right| }^{2}{\left| \lambda \right| }^{d - 1}{d\lambda } \leq {B}^{2}.\]

Then

(4)

\[ \mathop{\sup }\limits_{{t \in \mathbb{R}}}\left| {F\left( t\right) }\right| \leq c\left( {A + B}\right) .\]

Proof. The inequality (4) is obtained by considering separately the two cases \( \left| \lambda \right| \leq 1 \) and \( \left| \lambda \right| > 1 \). We write

\[ F\left( t\right) = {\int }_{\left| \lambda \right| \leq 1}\widehat{F}\left( \lambda \right) {e}^{{2\pi i\lambda t}}{d\lambda } + {\int }_{\left| \lambda \right| > 1}\widehat{F}\left( \lambda \right) {e}^{{2\pi i\lambda t}}{d\lambda }.\]

Clearly, the first integral is bounded by \( {cA} \). To estimate the second integral it suffices to bound \( {\int }_{\left| \lambda \right| > 1}\left| {\widehat{F}\left( \lambda \right) }\right| {d\lambda } \). An application of the Cauchy-Schwarz inequality gives

\[ {\int }_{\left| \lambda \right| > 1}\left| {\widehat{F}\left( \lambda \right) }\right| {d\lambda } \leq {\left( {\int }_{\left| \lambda \right| > 1}{\left| \widehat{F}\left( \lambda \right) \right| }^{2}{\left| \lambda \right| }^{d - 1}{d\lambda }\right) }^{1/2}{\left( {\int }_{\left| \lambda \right| > 1}{\left| \lambda \right| }^{-d + 1}{d\lambda }\right) }^{1/2}.\]

This last integral is convergent precisely when \( - d + 1 < - 1 \), which is equivalent to \( d > 2 \), namely \( d \geq 3 \), which we assume. Hence \( \left| {F\left( t\right) }\right| \leq c\left( {A + B}\right) \), as desired.
Theorem 4.10 If \( f \) is continuous with compact support, then

\[
{\int }_{{S}^{1}}{\mathcal{R}}_{\delta }^{ * }\left( f\right) \left( \gamma \right) {d\sigma }\left( \gamma \right) \leq c{\left( \log 1/\delta \right) }^{1/2}\left( {\parallel f{\parallel }_{{L}^{1}\left( {\mathbb{R}}^{2}\right) } + \parallel f{\parallel }_{{L}^{2}\left( {\mathbb{R}}^{2}\right) }}\right)
\]

when \( 0 < \delta \leq 1/2 \).

The same argument as in the proof of Theorem 4.5 applies here, except that we need a modified version of Lemma 4.9. More precisely, let us set

\[
{F}_{\delta }\left( t\right) = {\int }_{-\infty }^{\infty }\widehat{F}\left( \lambda \right) \left( \frac{{e}^{{2\pi i}\left( {t + \delta }\right) \lambda } - {e}^{{2\pi i}\left( {t - \delta }\right) \lambda }}{{2\pi i\lambda }\left( {2\delta }\right) }\right) {d\lambda }
\]

and suppose that

\[
\mathop{\sup }\limits_{\lambda }\left| {\widehat{F}\left( \lambda \right) }\right| \leq A\;\text{ and }\;{\int }_{-\infty }^{\infty }{\left| \widehat{F}\left( \lambda \right) \right| }^{2}\left| \lambda \right| {d\lambda } \leq {B}^{2}.
\]

Then we claim that

(6)

\[
\mathop{\sup }\limits_{t}\left| {{F}_{\delta }\left( t\right) }\right| \leq c{\left( \log 1/\delta \right) }^{1/2}\left( {A + B}\right) .
\]

Indeed, we use the fact that \( \left| {\left( {\sin x}\right) /x}\right| \leq 1 \) to see that, in the definition of \( {F}_{\delta }\left( t\right) \), the integral over \( \left| \lambda \right| \leq 1 \) contributes at most \( {cA} \). Also, the integral over \( \left| \lambda \right| > 1 \) can be split and is bounded by the sum

\[
{\int }_{1 < \left| \lambda \right| \leq 1/\delta }\left| {\widehat{F}\left( \lambda \right) }\right| {d\lambda } + \frac{c}{\delta }{\int }_{1/\delta \leq \left| \lambda \right| }\left| {\widehat{F}\left( \lambda \right) }\right| {\left| \lambda \right| }^{-1}{d\lambda }.
\]

The first integral above can be estimated by the Cauchy-Schwarz inequality, as follows:

\[
{\int }_{1 < \left| \lambda \right| \leq 1/\delta }\left| {\widehat{F}\left( \lambda \right) }\right| {d\lambda } \leq c{\left( {\int }_{1 < \left| \lambda \right| \leq 1/\delta }{\left| \widehat{F}\left( \lambda \right) \right| }^{2}\left| \lambda \right| {d\lambda }\right) }^{1/2}{\left( {\int }_{1 < \left| \lambda \right| \leq 1/\delta }{\left| \lambda \right| }^{-1}{d\lambda }\right) }^{1/2} \leq {cB}{\left( \log 1/\delta \right) }^{1/2}.
\]

Finally, we also note that

\[
\frac{c}{\delta }{\int }_{1/\delta \leq \left| \lambda \right| }\left| {\widehat{F}\left( \lambda \right) }\right| {\left| \lambda \right| }^{-1}{d\lambda } \leq \frac{c}{\delta }{\left( {\int }_{1/\delta \leq \left| \lambda \right| }{\left| \widehat{F}\left( \lambda \right) \right| }^{2}\left| \lambda \right| {d\lambda }\right) }^{1/2}{\left( {\int }_{1/\delta \leq \left| \lambda \right| }{\left| \lambda \right| }^{-3}{d\lambda }\right) }^{1/2} \leq {cB},
\]

and this establishes (6), and hence the theorem.
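The kernel in the definition of \( {F}_{\delta } \) equals \( {e}^{{2\pi i\lambda t}}\sin \left( {{2\pi \delta \lambda }}\right) /\left( {{2\pi \delta \lambda }}\right) \), which is the Fourier transform of \( \frac{1}{2\delta }{\chi }_{\left\lbrack {t - \delta, t + \delta }\right\rbrack } \) ; hence \( {F}_{\delta }\left( t\right) \) is just the average of \( F \) over \( \left\lbrack {t - \delta, t + \delta }\right\rbrack \). The sketch below checks this numerically with an arbitrary illustrative choice \( \widehat{F}\left( \lambda \right) = {e}^{-\pi {\lambda }^{2}} \) (so \( F\left( s\right) = {e}^{-\pi {s}^{2}} \)):

```python
import cmath, math

delta, t, h = 0.2, 0.3, 0.001

def Fhat(lam):
    # arbitrary test spectrum: corresponds to F(s) = exp(-pi s^2)
    return math.exp(-math.pi * lam * lam)

def kernel(lam):
    # (e^{2 pi i (t+d) l} - e^{2 pi i (t-d) l}) / (2 pi i l * 2d); tends to 1 as l -> 0
    if lam == 0.0:
        return 1.0
    return (cmath.exp(2j * math.pi * (t + delta) * lam)
            - cmath.exp(2j * math.pi * (t - delta) * lam)) / (2j * math.pi * lam * 2 * delta)

# F_delta(t) computed from the frequency side (Riemann sum over [-6, 6])
F_delta = sum(Fhat(-6 + h * j) * kernel(-6 + h * j) for j in range(12001)) * h

# the average of F over [t - delta, t + delta] (midpoint rule, 400 cells)
avg = sum(math.exp(-math.pi * (t - delta + h * (j + 0.5)) ** 2)
          for j in range(400)) * h / (2 * delta)

assert abs(F_delta - avg) < 1e-3
```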
Theorem 4.12 The set \( F \) is compact and of two-dimensional measure zero. It contains a translate of any unit line segment whose slope is a number \( s \) that lies outside the interval \( \left( {-1,2}\right) \).
The proof of the required properties of the set \( F \) amounts to showing the following paradoxical facts about the set \( \mathcal{C} + \lambda \mathcal{C} \), for \( \lambda > 0 \). Here \( \mathcal{C} + \lambda \mathcal{C} = \left\{ {{x}_{1} + \lambda {x}_{2} : {x}_{1} \in \mathcal{C},{x}_{2} \in \mathcal{C}}\right\} \) :

- \( \mathcal{C} + \lambda \mathcal{C} \) has one-dimensional measure zero, for a.e. \( \lambda \).

- \( \mathcal{C} + \frac{1}{2}\mathcal{C} \) is the interval \( \left\lbrack {0,3/2}\right\rbrack \).

Let us see how these two assertions imply the theorem. First, we note that the set \( F \) is closed (and hence compact), because both \( {E}_{0} \) and \( {E}_{1} \) are closed. Next observe that with \( 0 < y < 1 \), the slice \( {F}^{y} \) of the set \( F \) is exactly \( \left( {1 - y}\right) \mathcal{C} + \frac{y}{2}\mathcal{C} \). This set is obtained from the set \( \mathcal{C} + \lambda \mathcal{C} \), where \( \lambda = y/\left( {2\left( {1 - y}\right) }\right) \), by scaling with the factor \( 1 - y \). Hence \( {F}^{y} \) is of measure zero whenever \( \mathcal{C} + \lambda \mathcal{C} \) is also of measure zero. Moreover, under the mapping \( y \mapsto \lambda \), sets of measure zero in \( \left( {0,\infty }\right) \) correspond to sets of measure zero in \( \left( {0,1}\right) \). (For this see, for example, Exercise 8 in Chapter 1, or Problem 1 in Chapter 6.) Therefore, the first assertion and Fubini's theorem prove that the (two-dimensional) measure of \( F \) is zero.

Finally, the slope \( s \) of the segment joining the point \( \left( {{x}_{0},0}\right) \) with the point \( \left( {{x}_{1},1}\right) \) is \( s = 1/\left( {{x}_{1} - {x}_{0}}\right) \). Thus the quantity \( s \) can be realized if \( {x}_{1} \in \mathcal{C}/2 \) and \( {x}_{0} \in \mathcal{C} \), that is, if \( 1/s \in \mathcal{C}/2 - \mathcal{C} \). However, by an obvious symmetry \( \mathcal{C} = 1 - \mathcal{C} \), and so the condition becomes \( 1/s \in \mathcal{C}/2 + \mathcal{C} - 1 \), which by the second assertion is \( 1/s \in \left\lbrack {-1,1/2}\right\rbrack \). This last is equivalent to \( s \notin \left( {-1,2}\right) \).
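The second assertion has a transparent digit interpretation when \( \mathcal{C} \) denotes the middle-thirds Cantor set (points whose base-3 expansion uses only the digits 0 and 2): writing \( x = \sum {c}_{k}{3}^{-k} \) with \( {c}_{k} \in \{ 0,1,2,3\} \) and splitting each digit as \( {c}_{k} = 2{u}_{k} + {v}_{k} \) with \( {u}_{k},{v}_{k} \in \{ 0,1\} \) gives \( x = a + b/2 \) with \( a, b \in \mathcal{C} \). The finite-precision sketch below is an illustration under that assumption about \( \mathcal{C} \):

```python
def split(x, depth=40):
    """Greedily write x in [0, 3/2] as sum c_k / 3^k with c_k in {0,1,2,3},
    then split c_k = 2*u_k + v_k, so a = sum 2*u_k/3^k and b = sum 2*v_k/3^k
    use only base-3 digits 0 and 2 (Cantor points) and a + b/2 ~ x."""
    a = b = 0.0
    for k in range(1, depth + 1):
        c = min(3, int(x * 3))   # digit in {0,1,2,3}; remainder stays in [0, 3/2]
        x = x * 3 - c
        a += 2 * (c // 2) / 3 ** k
        b += 2 * (c % 2) / 3 ** k
    return a, b

for x in [0.0, 0.7, 1.0, 1.49, 1.5]:
    a, b = split(x)
    assert abs(a + b / 2 - x) < 1e-9
```

Since every \( x \in \left\lbrack {0,3/2}\right\rbrack \) admits such digits, this is exactly why \( \mathcal{C} + \frac{1}{2}\mathcal{C} \) fills the whole interval.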
Lemma 4.14 For every \( {\lambda }_{0} \) there is a pair \( 1 \leq {i}_{1},{i}_{2} \leq 4 \), with \( {i}_{1} \neq {i}_{2} \) such that \( {\mathcal{K}}_{{i}_{1}}\left( {\lambda }_{0}\right) \) and \( {\mathcal{K}}_{{i}_{2}}\left( {\lambda }_{0}\right) \) intersect.
Proof. Indeed, if the \( {\mathcal{K}}_{i} \) are disjoint for \( 1 \leq i \leq 4 \) then for sufficiently small \( \delta \) the \( {\mathcal{K}}_{i}^{\delta } \) are also disjoint. Here we have used the notation that \( {F}^{\delta } \) denotes the set of points of distance less than \( \delta \) from \( F \) . (See Lemma 3.1 in Chapter 1.) However, \( {\mathcal{K}}^{\delta } = \mathop{\bigcup }\limits_{{i = 1}}^{4}{\mathcal{K}}_{i}^{\delta } \), and by similarity \( m\left( {\mathcal{K}}^{4\delta }\right) = \) \( {4m}\left( {\mathcal{K}}_{i}^{\delta }\right) \) . Thus by the disjointness of the \( {\mathcal{K}}_{i}^{\delta } \) we have \( m\left( {\mathcal{K}}^{\delta }\right) = m\left( {\mathcal{K}}^{4\delta }\right) \) , which is a contradiction, since \( {\mathcal{K}}^{4\delta } - {\mathcal{K}}^{\delta } \) contains an open ball (of radius \( {3\delta }/2) \) . The lemma is therefore proved.
Corollary 1.2.9. Let \( {S}_{0} \) be a finite set of prime ideals of \( K \), let \( {\left( {e}_{\mathfrak{p}}\right) }_{\mathfrak{p} \in {S}_{0}} \) be a set of integers indexed by \( {S}_{0} \), and let \( {\left( {s}_{\sigma }\right) }_{\sigma \in {S}_{\infty }} \) be a set of signs \( \pm 1 \) indexed by the set \( {S}_{\infty } \) of all \( {r}_{1} \) real embeddings of \( K \) . There exists an element \( x \in K \) such that for all \( \mathfrak{p} \in {S}_{0},{v}_{\mathfrak{p}}\left( x\right) = {e}_{\mathfrak{p}} \), for all \( \sigma \in {S}_{\infty },\operatorname{sign}\left( {\sigma \left( x\right) }\right) = {s}_{\sigma } \), while for all \( \mathfrak{p} \notin {S}_{0},{v}_{\mathfrak{p}}\left( x\right) \geq 0 \), where \( {v}_{\mathfrak{p}}\left( x\right) \) denotes the \( \mathfrak{p} \) -adic valuation.
Proof. Set \( S = {S}_{0} \cup {S}_{\infty } \), considered as a set of places of \( K \) thanks to Ostrowski’s theorem. For \( \mathfrak{p} \in {S}_{0} \), we choose

\[
{y}_{\mathfrak{p}} \in {\mathfrak{p}}^{{e}_{\mathfrak{p}}} \smallsetminus {\mathfrak{p}}^{{e}_{\mathfrak{p}} + 1}\;\text{ and }\;{\varepsilon }_{\mathfrak{p}} = \mathcal{N}{\left( \mathfrak{p}\right) }^{-{e}_{\mathfrak{p}}},
\]

while for \( \sigma \in {S}_{\infty } \), we choose

\[
{y}_{\sigma } = {s}_{\sigma }\;\text{ and }\;{\varepsilon }_{\sigma } = \frac{1}{2}.
\]

The strong approximation theorem implies that there exists \( y \in K \) such that \( {\left| y - {y}_{\mathfrak{p}}\right| }_{\mathfrak{p}} < {\varepsilon }_{\mathfrak{p}} \) for \( \mathfrak{p} \in {S}_{0} \), \( {\left| y - {y}_{\sigma }\right| }_{\sigma } < {\varepsilon }_{\sigma } \) for \( \sigma \in {S}_{\infty } \), and \( {\left| y\right| }_{\mathfrak{p}} \leq 1 \) for all \( \mathfrak{p} \notin S \) except at most one such \( \mathfrak{p} \).

The condition \( {\left| y - {y}_{\mathfrak{p}}\right| }_{\mathfrak{p}} < {\varepsilon }_{\mathfrak{p}} \) is equivalent to \( y - {y}_{\mathfrak{p}} \in {\mathfrak{p}}^{{e}_{\mathfrak{p}} + 1} \) ; hence \( {v}_{\mathfrak{p}}\left( y\right) = {e}_{\mathfrak{p}} \) by our choice of \( {y}_{\mathfrak{p}} \).

Since \( {s}_{\sigma } = \pm 1 \), the condition \( {\left| y - {y}_{\sigma }\right| }_{\sigma } < 1/2 \) implies in particular that the sign of \( \sigma \left( y\right) \) is equal to \( {s}_{\sigma } \).

Finally, if \( \mathfrak{p} \notin S \), the condition \( {\left| y\right| }_{\mathfrak{p}} \leq 1 \) is evidently equivalent to \( {v}_{\mathfrak{p}}\left( y\right) \geq 0 \).

Thus \( y \) is almost the element that we need, except that we may have \( {v}_{{\mathfrak{p}}_{0}}\left( y\right) < 0 \) for some \( {\mathfrak{p}}_{0} \notin S \).
Assume that this is the case (otherwise we simply take \( x = y \) ), and set \( v = - {v}_{{\mathfrak{p}}_{0}}\left( y\right) > 0 \) . By the weak approximation theorem, we can find an element \( \pi \) such that \( {v}_{{\mathfrak{p}}_{0}}\left( \pi \right) = v,{v}_{\mathfrak{p}}\left( \pi \right) = 0 \) for all \( \mathfrak{p} \in {S}_{0} \), and \( {v}_{\mathfrak{p}}\left( \pi \right) \geq 0 \) for \( \mathfrak{p} \notin {S}_{0} \cup \left\{ {\mathfrak{p}}_{0}\right\} \) (we can use the weak approximation theorem since we do not need to impose any Archimedean conditions on \( \pi \) ). Since a square is positive, it is immediately checked that \( x = {\pi }^{2}y \) satisfies the desired properties.
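For \( K = \mathbb{Q} \) (so \( {r}_{1} = 1 \) and the single real embedding is the identity) the element can be written down directly: \( x = s \cdot \prod {p}^{{e}_{p}} \) has \( {v}_{p}\left( x\right) = {e}_{p} \) on \( {S}_{0} \), valuation 0 at every other prime, and the prescribed sign. A small sketch with hypothetical data:

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation of a nonzero rational."""
    n, d, v = abs(x.numerator), x.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

e = {2: 3, 3: -2}            # prescribed valuations on S_0 = {2, 3} (hypothetical)
sign = -1                    # prescribed sign at the real place

x = Fraction(sign)
for p, ep in e.items():
    x *= Fraction(p) ** ep   # x = -2^3 / 3^2 = -8/9

assert v_p(x, 2) == 3 and v_p(x, 3) == -2
assert v_p(x, 5) == 0 and v_p(x, 7) == 0   # valuation >= 0 outside S_0
assert x < 0
```

The content of the corollary is that the same prescription remains possible in an arbitrary number field, where ideals need not be principal.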
Corollary 1.2.10. Let \( \mathfrak{m} \) be any nonzero ideal. There exists \( \alpha \in \mathfrak{m} \) such that for every prime ideal \( \mathfrak{p} \) such that \( {v}_{\mathfrak{p}}\left( \mathfrak{m}\right) \neq 0 \) we have \( {v}_{\mathfrak{p}}\left( \alpha \right) = {v}_{\mathfrak{p}}\left( \mathfrak{m}\right) \) . Such an element \( \alpha \) will be called a uniformizer of the ideal \( \mathfrak{m} \) .
Proof. This is an immediate consequence of Corollary 1.2.9.
Corollary 1.2.11. Let \( \mathfrak{m} \) be any (nonzero) integral ideal, and let \( \mathfrak{a} \) be an ideal of \( R \) . There exists \( \alpha \in {K}^{ * } \) such that \( \alpha \mathfrak{a} \) is an integral ideal coprime to \( \mathfrak{m} \) ; in other words, in any ideal class there exists an integral ideal coprime to any fixed integral ideal.
Proof. Indeed, apply Corollary 1.2.9 to the set of prime ideals \( \mathfrak{p} \) that divide \( \mathfrak{m} \) or such that \( {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) < 0 \), taking \( {e}_{\mathfrak{p}} = - {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) \). Then, if \( \alpha \) is such that \( {v}_{\mathfrak{p}}\left( \alpha \right) = {e}_{\mathfrak{p}} \) for all such \( \mathfrak{p} \) and \( {v}_{\mathfrak{p}}\left( \alpha \right) \geq 0 \) for all other \( \mathfrak{p} \), it is clear that \( \alpha \mathfrak{a} \) is an integral ideal coprime to \( \mathfrak{m} \).
Lemma 1.2.17. If \( f \) is a surjective map from any module \( F \) onto a projective module \( P \) and if \( h \) is a section of \( f \) (so that \( f \circ h = i{d}_{P} \) ), then \( F = h\left( P\right) \oplus \operatorname{Ker}\left( f\right) \) .
Proof. Indeed, if \( x \in F \), then \( y = x - h\left( {f\left( x\right) }\right) \) is clearly in \( \operatorname{Ker}\left( f\right) \) since \( f \circ h = i{d}_{P} \) ; hence \( x \in h\left( P\right) + \operatorname{Ker}\left( f\right) \), so \( F = h\left( P\right) + \operatorname{Ker}\left( f\right) \) . Furthermore, if \( x \in h\left( P\right) \cap \operatorname{Ker}\left( f\right) \), then since \( x \in h\left( P\right), x = h\left( z\right) \) for some \( z \in P \) ; hence since \( x \in \operatorname{Ker}\left( f\right) ,0 = f\left( x\right) = f\left( {h\left( z\right) }\right) = z \), hence \( x = h\left( 0\right) = 0 \), so we have a direct sum, proving the lemma.
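A concrete instance of the splitting (illustrative, with hypothetical maps over \( R = \mathbb{Z} \)): take \( F = {\mathbb{Z}}^{2} \), \( P = \mathbb{Z} \), \( f\left( {a, b}\right) = a + 2b \), and the section \( h\left( n\right) = \left( {n,0}\right) \). Every \( x \in F \) then splits as \( h\left( {f\left( x\right) }\right) \) plus a kernel element, exactly as in the lemma:

```python
# F = Z^2, P = Z, f(a, b) = a + 2b (surjective), h(n) = (n, 0) a section of f
f = lambda x: x[0] + 2 * x[1]
h = lambda n: (n, 0)

assert all(f(h(n)) == n for n in range(-10, 11))   # f o h = id_P

for x in [(3, 5), (-1, 4), (7, 0)]:
    image = h(f(x))                                # the component in h(P)
    ker = (x[0] - image[0], x[1] - image[1])       # the component in Ker(f)
    assert f(ker) == 0
    assert (image[0] + ker[0], image[1] + ker[1]) == x
```

Here \( \operatorname{Ker}\left( f\right) \) is generated by \( \left( {-2,1}\right) \), and the decomposition \( x = h\left( {f\left( x\right) }\right) + \left( {x - h\left( {f\left( x\right) }\right) }\right) \) is exactly the one used in the proof.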
Corollary 1.2.18. A projective module is torsion-free.
Proof. Indeed, the third characterization of projective modules shows that a projective module is isomorphic to a submodule of a free module and hence is torsion-free since a free module is evidently torsion-free.
Theorem 1.2.19. Let \( M \) be a finitely generated, torsion-free module of rank \( n \) over a Dedekind domain \( R \). Then \( M \) is a projective module. In addition, there exists an ideal \( I \) of \( R \) such that

\[ M \simeq {R}^{n - 1} \oplus I. \]
Proof of Theorem 1.2.19. We prove the theorem by induction on the rank of \( M \). If the rank of \( M \) is zero, then \( M \) is a torsion module, and since \( M \) is torsion-free, \( M = \{ 0\} \). Assume the theorem proved up to rank \( n - 1 \), and let \( M \) be a torsion-free module of rank \( n \). Let \( e \) be a nonzero element of \( M \). By Lemma 1.2.22 above, \( M/{Ie} \) is a torsion-free module of rank \( n - 1 \) ; hence by our induction hypothesis, \( M/{Ie} \) is a projective module isomorphic to \( {R}^{n - 2} \oplus {I}^{\prime } \) for some ideal \( {I}^{\prime } \) of \( R \) (for \( n = 1 \) we have \( M/{Ie} = \{ 0\} \) and \( M = {Ie} \simeq I \) directly). Since \( M/{Ie} \) is projective, the surjection from \( M \) onto \( M/{Ie} \) has a section, so by Lemma 1.2.17 there exists a submodule \( N \) of \( M \) such that \( M = N \oplus {Ie} \) with \( N \simeq M/{Ie} \simeq {R}^{n - 2} \oplus {I}^{\prime } \). Since \( {Ie} \simeq I \) as an \( R \) -module, Lemma 1.2.20 gives

\[ M \simeq {R}^{n - 2} \oplus {I}^{\prime } \oplus I \simeq {R}^{n - 1} \oplus {I}^{\prime }I, \]

so \( M \) is projective and of the desired form.
Lemma 1.2.20. If \( I \) and \( J \) are any fractional ideals of \( R \), we have an isomorphism of \( R \) -modules:

\[ I \oplus J \simeq R \oplus {IJ}. \]
Proof. Since \( I \simeq {kI} \) for any nonzero \( k \), we can always reduce to the case where \( I \) and \( J \) are integral ideals. By Corollary 1.2.11, in the ideal class of \( J \) there exists an integral ideal \( {J}_{1} \) coprime to \( I \). Thus, there exists \( \alpha \in {K}^{ * } \) such that \( {J}_{1} = {\alpha J} \), and it follows that \( {J}_{1} \simeq J \) and \( I{J}_{1} \simeq {IJ} \), so we may replace \( J \) by \( {J}_{1} \) ; in other words, we may assume that \( I \) and \( J \) are coprime integral ideals.

Let \( f \) be the map from \( I \oplus J \) to \( R \) defined by \( f\left( {x, y}\right) = x + y \). Since \( R \) is free, hence projective, and since \( I + J = R \), \( f \) is surjective, so there exists a map \( g \) from \( R \) to \( I \oplus J \) such that \( f \circ g = \mathrm{{id}} \). Lemma 1.2.17 says that \( I \oplus J = g\left( R\right) \oplus \operatorname{Ker}\left( f\right) \). Since \( f \circ g = \mathrm{{id}} \), \( g \) is injective; hence \( g\left( R\right) \simeq R \). Finally,

\[ \operatorname{Ker}\left( f\right) = \{ \left( {x, - x}\right) : x \in I, - x \in J\} = \{ \left( {x, - x}\right) : x \in I \cap J\} \simeq I \cap J = {IJ} \]

since \( I \) and \( J \) are coprime, proving the lemma.
Corollary 1.2.21. Every fractional ideal is a projective R-module.
Proof. Simply apply the preceding lemma to \( J = {I}^{-1} \) and use Proposition 1.2.16 (3).
Corollary 1.2.24. If \( I \) and \( J \) are two (fractional) ideals of \( R \) and \( {R}^{m - 1} \oplus \) \( I \simeq {R}^{n - 1} \oplus J \), then \( m = n \) and \( J \) and \( I \) are in the same ideal class (in other words, there exists \( \alpha \in {K}^{ * } \) such that \( J = {\alpha I} \) ).
Proof. Since \( I \) and \( J \) are of rank 1, it is clear that \( m = n \). From the given isomorphism, we deduce that

\[
{R}^{n - 1} \oplus I \oplus {I}^{-1} \simeq {R}^{n - 1} \oplus J \oplus {I}^{-1}.
\]

Using Lemma 1.2.20, we obtain

\[
{R}^{n + 1} \simeq {R}^{n} \oplus J{I}^{-1}.
\]

Thus Theorem 1.2.23 implies that \( J{I}^{-1} \) is a principal ideal, whence the corollary.
Corollary 1.2.25. Let \( M \) be a finitely generated, torsion-free module. There exist elements \( {\omega }_{1},\ldots ,{\omega }_{n} \) in \( M \) and fractional ideals \( {\mathfrak{a}}_{1},\ldots ,{\mathfrak{a}}_{n} \) of \( R \) such that

\[ M = {\mathfrak{a}}_{1}{\omega }_{1} \oplus \cdots \oplus {\mathfrak{a}}_{n}{\omega }_{n}. \]

The Steinitz class of \( M \) is the ideal class of the product \( {\mathfrak{a}}_{1}\cdots {\mathfrak{a}}_{n} \).
Proof. From Theorem 1.2.19, we know that \( M \) is isomorphic to \( {R}^{n - 1} \oplus I \) for some ideal \( I \) whose ideal class is the Steinitz class of \( M \). Replacing if necessary \( I \) by \( I/\alpha \) for some nonzero element \( \alpha \) of \( I \), we may assume that \( 1 \in I \). Let \( f \) be the isomorphism from \( {R}^{n - 1} \oplus I \) to \( M \), let \( {e}_{i} = \left( {0,\ldots ,1,\ldots ,0}\right) \in {R}^{n - 1} \oplus I \) (with 1 at the \( i \) th component), and let \( {\omega }_{i} = f\left( {e}_{i}\right) \in M \). Since \( f \) is an isomorphism, we have \( M = {\mathfrak{a}}_{1}{\omega }_{1} \oplus \cdots \oplus {\mathfrak{a}}_{n}{\omega }_{n} \), with \( {\mathfrak{a}}_{i} = R \) for \( 1 \leq i \leq n - 1 \) and \( {\mathfrak{a}}_{n} = I \).

By Lemma 1.2.20 we have

\[ {\mathfrak{a}}_{1}{\omega }_{1} \oplus \cdots \oplus {\mathfrak{a}}_{n}{\omega }_{n} \simeq {R}^{n - 1} \oplus \left( {{\mathfrak{a}}_{1}\cdots {\mathfrak{a}}_{n}}\right) , \]

so the corollary follows.
Corollary 1.2.26. Let \( M, N \), and \( P \) be three finitely generated, torsion-free modules. Assume that \( P \oplus M \simeq P \oplus N \) . Then \( M \simeq N \) .
Proof. Using Theorem 1.2.19, we have \( M \simeq {R}^{m - 1} \oplus \operatorname{St}\left( M\right), N \simeq {R}^{n - 1} \oplus \operatorname{St}\left( N\right), P \simeq {R}^{p - 1} \oplus \operatorname{St}\left( P\right) \), so that

\[ {R}^{p + m - 2} \oplus \operatorname{St}\left( P\right) \oplus \operatorname{St}\left( M\right) \simeq {R}^{p + n - 2} \oplus \operatorname{St}\left( P\right) \oplus \operatorname{St}\left( N\right) \]

or, in other words, by Lemma 1.2.20,

\[ {R}^{p + m - 1} \oplus \operatorname{St}\left( P\right) \operatorname{St}\left( M\right) \simeq {R}^{p + n - 1} \oplus \operatorname{St}\left( P\right) \operatorname{St}\left( N\right) . \]

We deduce from Corollary 1.2.24 that \( m = n \) and that there exists \( \alpha \in {K}^{ * } \) such that \( \operatorname{St}\left( P\right) \operatorname{St}\left( M\right) = \alpha \operatorname{St}\left( P\right) \operatorname{St}\left( N\right) \); hence \( \operatorname{St}\left( M\right) = \alpha \operatorname{St}\left( N\right) \) since \( \operatorname{St}\left( P\right) \) is invertible. Thus \( M \) and \( N \) have the same rank and the same Steinitz class, so \( M \simeq N \).
Proposition 1.2.27. Let

\[ 0 \rightarrow {M}^{\prime } \rightarrow M \rightarrow {M}^{\prime \prime } \rightarrow 0 \]

be an exact sequence of finitely generated, torsion-free modules. Then

\[ M \simeq {M}^{\prime } \oplus {M}^{\prime \prime }\;\text{ and }\;\operatorname{St}\left( M\right) = \operatorname{St}\left( {M}^{\prime }\right) \operatorname{St}\left( {M}^{\prime \prime }\right) . \]
Proof. The isomorphism follows immediately from Lemma 1.2.17: if \( f \) is the map from \( M \) to \( {M}^{\prime \prime } \), there exists a map \( h \) from \( {M}^{\prime \prime } \) to \( M \) such that \( f \circ h = i{d}_{{M}^{\prime \prime }} \) and \( M = h\left( {M}^{\prime \prime }\right) \oplus \operatorname{Ker}\left( f\right) \simeq {M}^{\prime \prime } \oplus {M}^{\prime } \) since the sequence is exact. The required equality of Steinitz classes now follows immediately from Theorem 1.2.19 and Lemma 1.2.20.
Proposition 1.2.28. If \( R \) is a Dedekind domain with only a finite number of prime ideals, then \( R \) is a principal ideal domain.
Proof. Let \( \mathfrak{b} \) be the product of the (nonzero) prime ideals of \( R \), which are finite in number. If \( \mathfrak{c} \) is an ideal of \( R \), by Corollary 1.2.11 we can find an \( x \in {K}^{ * } \) such that \( x\mathfrak{c} \) is an integral ideal coprime to \( \mathfrak{b} \). But this means that \( x\mathfrak{c} \) is not divisible by any prime ideal of \( R \); hence \( x\mathfrak{c} = R \), and so \( \mathfrak{c} = \left( {1/x}\right) R \) is a principal ideal. Hence \( R \) is a principal ideal domain.
Proposition 1.2.29. Let \( M \) be a finitely generated \( R \)-module, and let \( {M}_{\text{tors }} \) be the torsion submodule of \( M \). Then there exists a torsion-free submodule \( N \) of \( M \) such that

\[ M = {M}_{\text{tors }} \oplus N. \]
Proof. If \( P = M/{M}_{\text{tors }} \), then \( P \) is torsion-free. Indeed, if \( \bar{y} \in {P}_{\text{tors }} \), there exists \( a \in R \smallsetminus \{ 0\} \) such that \( {ay} \in {M}_{\text{tors }} \), and hence there exists \( b \in R \smallsetminus \{ 0\} \) such that \( {bay} = 0 \), so \( y \in {M}_{\text{tors }} \) since \( R \) is an integral domain, and so \( \bar{y} = \overline{0} \) . From Theorem 1.2.19, we deduce that \( P \) is a projective \( R \) -module. It follows that there exists a linear map \( h \) from \( P \) to \( M \) such that \( f \circ h = i{d}_{P} \), where we denote by \( f \) the canonical surjection from \( M \) onto \( P = M/{M}_{\text{tors }} \) . From Lemma 1.2.17 we deduce that \( M = h\left( P\right) \oplus {M}_{\text{tors }} \), and, since \( h \) is injective, \( N = h\left( P\right) \) is isomorphic to \( P \), hence is projective (or torsion-free), thus proving the proposition.
Lemma 1.2.31. Let \( S \) be a finite set of prime ideals of \( R \) and let \( x \in {K}^{ * } \) be such that \( {v}_{\mathfrak{p}}\left( x\right) \geq 0 \) for all \( \mathfrak{p} \in S \). There exist \( n \) and \( d \) in \( R \) such that \( x = n/d \) and \( d \) is not divisible by any \( \mathfrak{p} \) in \( S \).
Proof. Let \( x = n/d \) with \( n \) and \( d \) in \( R \), for the moment arbitrary. By the approximation theorem, there exists \( b \in K \) such that

\[ \forall \mathfrak{p} \in S,{v}_{\mathfrak{p}}\left( b\right) = - {v}_{\mathfrak{p}}\left( d\right) \;\text{and}\;\forall \mathfrak{p} \notin S,{v}_{\mathfrak{p}}\left( b\right) \geq 0. \]

It follows that for \( \mathfrak{p} \in S,{v}_{\mathfrak{p}}\left( {db}\right) = 0 \) and for \( \mathfrak{p} \notin S,{v}_{\mathfrak{p}}\left( {db}\right) \geq 0 \), so \( {db} \in R \) and is not divisible by any \( \mathfrak{p} \) in \( S \). Since for all \( \mathfrak{p} \in S,{v}_{\mathfrak{p}}\left( x\right) \geq 0 \) or, equivalently, \( {v}_{\mathfrak{p}}\left( n\right) \geq {v}_{\mathfrak{p}}\left( d\right) \), it follows that \( {v}_{\mathfrak{p}}\left( {nb}\right) \geq {v}_{\mathfrak{p}}\left( {db}\right) = 0 \) for \( \mathfrak{p} \in S \) and \( {v}_{\mathfrak{p}}\left( {nb}\right) \geq 0 \) for \( \mathfrak{p} \notin S \), hence \( {nb} \in R \), so \( x = \left( {nb}\right) /\left( {db}\right) \) is a suitable representation of \( x \).
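For the simplest Dedekind domain \( R = \mathbb{Z} \) (so \( K = \mathbb{Q} \)), the lemma is immediate: writing \( x \) in lowest terms already yields a denominator coprime to every prime at which \( x \) has nonnegative valuation. A minimal sketch (the function name `coprime_representation` is ours, not from the text):

```python
from fractions import Fraction

def coprime_representation(x, S):
    """For R = Z: return (n, d) with x = n/d and d not divisible by any
    prime p in S, assuming v_p(x) >= 0 for all p in S.  Reducing x to
    lowest terms suffices, since then v_p(d) = max(0, -v_p(x)) = 0."""
    x = Fraction(x)  # Fraction always normalizes to lowest terms
    n, d = x.numerator, x.denominator
    for p in S:
        assert d % p != 0, "v_p(x) < 0 contradicts the hypothesis"
    return n, d

n, d = coprime_representation(Fraction(9, 10), [3, 7])
assert Fraction(n, d) == Fraction(9, 10) and d % 3 != 0 and d % 7 != 0
```

The general case needs the approximation theorem precisely because, for a nonprincipal ideal class, "lowest terms" is not available.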
Lemma 1.2.32. Let \( \mathfrak{a} \) be a nonzero integral ideal of \( R \) and set

\[ B = \left\{ {x \in K/\forall \mathfrak{p} \mid \mathfrak{a},{v}_{\mathfrak{p}}\left( x\right) \geq 0}\right\} . \]

Then

(1)

\[ B = \left\{ {x = \frac{n}{d}/n, d \in R,\left( {{dR},\mathfrak{a}}\right) = 1}\right\} ; \]

in other words, \( B = {S}^{-1}R \), where \( S \) is the multiplicative set of elements of \( R \) coprime to \( \mathfrak{a} \). (We write \( \left( {I, J}\right) = 1 \) for two integral ideals \( I \) and \( J \) to mean that they are coprime - in other words, that \( I + J = R \).)

(2) \( B \) is a principal ideal domain.
Proof. (1). It is clear that if \( \left( {{dR},\mathfrak{a}}\right) = 1 \), then \( {v}_{\mathfrak{p}}\left( {n/d}\right) = {v}_{\mathfrak{p}}\left( n\right) \geq 0 \) for all \( \mathfrak{p} \mid \mathfrak{a} \), and hence \( n/d \in B \). Conversely, let \( x \in B \). Taking for \( S \) the set of prime ideals dividing \( \mathfrak{a} \), it follows from Lemma 1.2.31 that one can write \( x = n/d \) with \( n \) and \( d \) in \( R \) and \( d \) coprime to \( \mathfrak{a} \), proving (1).

(2). It is clear that \( B \) is a ring, and it is also a domain since \( B \subset K \). By general properties of rings of fractions \( {S}^{-1}R \), we know that the prime ideals of \( B \) are exactly the ideals \( {S}^{-1}\mathfrak{p} \) for the prime ideals \( \mathfrak{p} \) such that \( \mathfrak{p} \cap S = \varnothing \), hence in our case the prime ideals dividing \( \mathfrak{a} \), which are finite in number. Since \( B = {S}^{-1}R \) is also a Dedekind domain, it follows from Proposition 1.2.28 that \( B \) is a principal ideal domain.
Proposition 1.2.34. Assume that there exist nonzero ideals \( {\mathfrak{a}}_{i} \) such that an \( R \) -module \( M \) satisfies \( M \simeq {\bigoplus }_{1 \leq i \leq k}R/{\mathfrak{a}}_{i} \) . Then the order-ideal of \( M \) is equal to \( \mathop{\prod }\limits_{{1 \leq i \leq k}}{\mathfrak{a}}_{i} \) .
Proof. This immediately follows from the facts that the order-ideal is unchanged by module isomorphism, that the order-ideal of a direct sum of two modules is equal to the product of their order-ideals, and that the order-ideal of \( R/\mathfrak{a} \) is \( \mathfrak{a} \) itself.
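For \( R = \mathbb{Z} \) this proposition says that the order-ideal of a finite abelian group \( \mathbb{Z}/{a}_{1} \oplus \cdots \oplus \mathbb{Z}/{a}_{k} \) is generated by \( {a}_{1}\cdots {a}_{k} \), i.e., by the cardinality of the group. A trivial but concrete sketch (the function name is ours):

```python
def order_ideal_generator(invariants):
    """For R = Z and M = Z/a_1 + ... + Z/a_k (direct sum), return the
    positive generator of the order-ideal of M, namely a_1 * ... * a_k,
    which is also the cardinality |M| (Proposition 1.2.34 over Z)."""
    g = 1
    for a in invariants:
        g *= a
    return g

# Z/4 + Z/6 has order-ideal 24Z, and indeed 24 elements.
assert order_ideal_generator([4, 6]) == 24
```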
Theorem 1.2.35. Let \( M \) and \( N \) be two torsion-free (or projective) modules of rank \( m \) and \( n \), respectively, such that \( N \subset M \) (so \( n \leq m \)). There exist fractional ideals \( {\mathfrak{b}}_{1},\ldots ,{\mathfrak{b}}_{m} \) of \( R \), a basis \( \left( {{e}_{1},\ldots ,{e}_{m}}\right) \) of \( V = {KM} \), and integral ideals \( {\mathfrak{d}}_{1},\ldots ,{\mathfrak{d}}_{n} \) such that

\[ M = {\mathfrak{b}}_{1}{e}_{1} \oplus \cdots \oplus {\mathfrak{b}}_{m}{e}_{m},\;N = {\mathfrak{d}}_{1}{\mathfrak{b}}_{1}{e}_{1} \oplus \cdots \oplus {\mathfrak{d}}_{n}{\mathfrak{b}}_{n}{e}_{n}, \]

and \( {\mathfrak{d}}_{i - 1} \subset {\mathfrak{d}}_{i} \) for \( 2 \leq i \leq n \). The ideals \( {\mathfrak{d}}_{i} \) (for \( 1 \leq i \leq n \)) and the ideal classes of the ideal products \( {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{n} \) and \( {\mathfrak{b}}_{n + 1}\cdots {\mathfrak{b}}_{m} \) depend only on \( M \) and \( N \).
Proof. Let us first prove uniqueness, so let \( {\mathfrak{d}}_{i} \) and \( {\mathfrak{b}}_{i} \) be ideals as in the theorem. Since \( {\mathfrak{b}}_{i}/{\mathfrak{d}}_{i}{\mathfrak{b}}_{i} \simeq R/{\mathfrak{d}}_{i} \), we have

\[ M/N \simeq R/{\mathfrak{d}}_{1} \oplus \cdots \oplus R/{\mathfrak{d}}_{n} \oplus {R}^{m - n}, \]

hence \( {\left( M/N\right) }_{\text{tors }} \simeq R/{\mathfrak{d}}_{1} \oplus \cdots \oplus R/{\mathfrak{d}}_{n} \), so the uniqueness statement for the \( {\mathfrak{d}}_{i} \) follows from the uniqueness statement of Theorem 1.2.30. Furthermore, \( M \simeq {\mathfrak{b}}_{1} \oplus \cdots \oplus {\mathfrak{b}}_{m} \simeq {R}^{m - 1} \oplus {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{m} \) by Lemma 1.2.20, and similarly \( N \simeq {R}^{n - 1} \oplus {\mathfrak{d}}_{1}\cdots {\mathfrak{d}}_{n}{\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{n} \). By Corollary 1.2.24, the ideal class of \( {\mathfrak{d}}_{1}\cdots {\mathfrak{d}}_{n}{\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{n} \) is well-defined, hence also that of \( {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{n} \) since the \( {\mathfrak{d}}_{i} \) are unique. Finally, the ideal class of \( {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{m} \) is well-defined, hence also that of \( {\mathfrak{b}}_{n + 1}\cdots {\mathfrak{b}}_{m} \). To prove the existence statement, we first reduce to the case where \( m = n \) by writing \( M/N = {\left( M/N\right) }_{\text{tors }} \oplus {M}^{\prime } \) for some torsion-free module \( {M}^{\prime } \), which can be done using Proposition 1.2.29. If we set \( {M}^{\prime \prime } = \left\{ {x \in M/x{\;\operatorname{mod}\;N} \in {\left( M/N\right) }_{\text{tors }}}\right\} \), then \( {M}^{\prime \prime }/N = {\left( M/N\right) }_{\text{tors }} \).
Hence, once suitable ideals \( {\mathfrak{d}}_{i} \) and \( {\mathfrak{b}}_{i} \) are found for the pair \( \left( {{M}^{\prime \prime }, N}\right) \), we add some extra ideals \( {\mathfrak{b}}_{i} \) by using Theorem 1.2.19 applied to the torsion-free module \( {M}^{\prime } \). Hence, we now assume that \( m = n \), so \( M/N \) is a finitely generated torsion module. We prove the result by induction on \( n \). Assume that \( n \geq 1 \) and that it is true for \( n - 1 \). By Theorem 1.2.30, we have \( M/N = {\bigoplus }_{1 \leq i \leq r}{\mathfrak{d}}_{i}\overline{{\omega }_{i}} \) for certain ideals \( {\mathfrak{d}}_{i} \). Using the same method as in the proof of Theorem 1.2.19, we see that if \( {\mathfrak{b}}_{1} = \left\{ {x \in K/x{\omega }_{1} \in M}\right\} \), then \( M = {\mathfrak{b}}_{1}{\omega }_{1} \oplus g\left( {M/{\mathfrak{b}}_{1}{\omega }_{1}}\right) \), where \( g \) is a section of the canonical projection of \( M \) onto \( M/{\mathfrak{b}}_{1}{\omega }_{1} \). Similarly, if \( {\mathfrak{c}}_{1} = \left\{ {x \in K/x{\omega }_{1} \in N}\right\} \), then \( N = {\mathfrak{c}}_{1}{\omega }_{1} \oplus {g}^{\prime }\left( {N/{\mathfrak{c}}_{1}{\omega }_{1}}\right) \).
Proposition 1.3.1. Given two coprime integral ideals \( \mathfrak{a} \) and \( \mathfrak{b} \) in \( R \), we can find in polynomial time elements \( a \in \mathfrak{a} \) and \( b \in \mathfrak{b} \) such that \( a + b = 1 \) .
Proof. Since this is a very simple but basic proposition, we give the proof as an algorithm.

Algorithm 1.3.2 (Extended Euclid in Dedekind Domains). Let \( R \) be a Dedekind domain in which one can compute, and let \( {\left( {\omega }_{i}\right) }_{1 \leq i \leq n} \) be an integral basis chosen so that \( {\omega }_{1} = 1 \) (it is easy to reduce to this case, and in practice it is always so). Given two coprime ideals \( \mathfrak{a} \) and \( \mathfrak{b} \) given by their HNF matrices \( A \) and \( B \) on this integral basis, this algorithm computes \( a \in \mathfrak{a} \) and \( b \in \mathfrak{b} \) such that \( a + b = 1 \).

1. [Apply Hermite] Let \( C \) be the \( n \times {2n} \) matrix obtained by concatenating \( A \) and \( B \) (we will denote this by \( C \leftarrow \left( {A \mid B}\right) \)). Using one of the polynomial-time algorithms for HNF reduction (see, for example, [Coh0, Section 2.4.2]), compute an HNF matrix \( H \) and a \( {2n} \times {2n} \) unimodular matrix \( U \) such that \( {CU} = \left( {0 \mid H}\right) \).

2. [Check if coprime] If \( H \) is not equal to the \( n \times n \) identity matrix, output an error message stating that \( \mathfrak{a} \) and \( \mathfrak{b} \) are not coprime, and terminate the algorithm.

3. [Compute coordinates] Set \( Z \leftarrow {U}_{n + 1} \), the \( \left( {n + 1}\right) \)st column of the matrix \( U \), and let \( X \) be the \( n \)-component column vector formed by the top \( n \) components of \( Z \).

4. [Terminate] Let \( a \) be the element of \( K \) whose coordinate vector on the integral basis is \( {AX} \), and set \( b \leftarrow 1 - a \). Output \( a \) and \( b \), and terminate the algorithm.

Indeed, the HNF of the matrix \( C \) is the HNF of the ideal \( \mathfrak{a} + \mathfrak{b} \). Since \( \mathfrak{a} \) and \( \mathfrak{b} \) are coprime, it is the identity matrix. It follows that \( {CZ} = {\left( 1,0,\ldots ,0\right) }^{t} \).
If we split \( Z \) into its upper half \( X \) and its lower half \( Y \), it is clear that \( {AX} \) and \( {BY} \) represent on the integral basis elements \( a \in \mathfrak{a} \) and \( b \in \mathfrak{b} \) such that \( a + b = 1 \), and hence the algorithm is valid.
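In the rank-1 case \( R = \mathbb{Z} \) (so \( n = 1 \) and the integral basis is \( {\omega }_{1} = 1 \)), the HNF reduction of the \( 1 \times 2 \) matrix \( \left( {a \mid b}\right) \) is simply the extended Euclidean algorithm, and the coordinate vector \( X \) of step 3 reduces to one Bézout coefficient. A minimal sketch under that specialization (the function name `euclid_in_ideals` is ours):

```python
def euclid_in_ideals(a, b):
    """For R = Z and coprime integral ideals aZ and bZ, find u in aZ and
    v in bZ with u + v = 1, mirroring Algorithm 1.3.2 in the case n = 1."""
    def ext_gcd(a, b):  # returns (g, x, y) with a*x + b*y = g
        if b == 0:
            return (a, 1, 0)
        g, x, y = ext_gcd(b, a % b)
        return (g, y, x - (a // b) * y)

    g, x, y = ext_gcd(a, b)
    if g != 1:  # the analogue of H not being the identity in step 2
        raise ValueError("the ideals aZ and bZ are not coprime")
    return a * x, b * y  # u = a*x lies in aZ, and v = 1 - u lies in bZ

u, v = euclid_in_ideals(10, 21)
assert u + v == 1 and u % 10 == 0 and v % 21 == 0
```

For a genuine Dedekind ring of integers the same skeleton applies, with the extended gcd replaced by the HNF transformation matrix \( U \).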
Theorem 1.3.3. Let \( \mathfrak{a} \) and \( \mathfrak{b} \) be two (fractional) ideals in \( R \), let \( a \) and \( b \) be two elements of \( K \) not both equal to zero, and set \( \mathfrak{d} = a\mathfrak{a} + b\mathfrak{b} \). There exist \( u \in \mathfrak{a}{\mathfrak{d}}^{-1} \) and \( v \in \mathfrak{b}{\mathfrak{d}}^{-1} \) such that \( {au} + {bv} = 1 \), and these elements can be found in polynomial time.
Proof. If \( a \) (resp., \( b \)) is equal to zero, we can take \( \left( {u, v}\right) = \left( {0,1/b}\right) \) (resp., \( \left( {u, v}\right) = \left( {1/a,0}\right) \)), since in that case we have \( 1/b \in \mathfrak{b}{\mathfrak{d}}^{-1} = R/b \) (resp., \( 1/a \in \mathfrak{a}{\mathfrak{d}}^{-1} = R/a \)). So assume \( a \) and \( b \) are nonzero.

Set \( I = a\mathfrak{a}{\mathfrak{d}}^{-1} \) and \( J = b\mathfrak{b}{\mathfrak{d}}^{-1} \). By the definition of \( \mathfrak{d} \), \( I \) and \( J \) are integral ideals and we have \( I + J = R \). By Proposition 1.3.1, we can thus find in polynomial time \( e \in I \) and \( f \in J \) such that \( e + f = 1 \), and clearly \( u = e/a \) and \( v = f/b \) satisfy the conditions of the theorem.

Remark. Although this proposition is very simple, we will see that the essential conditions \( u \in \mathfrak{a}{\mathfrak{d}}^{-1} \) and \( v \in \mathfrak{b}{\mathfrak{d}}^{-1} \) bring as much rigidity into the problem as in the case of Euclidean domains, and this proposition will be regularly used instead of the extended Euclidean algorithm. It is, in fact, clear that it is an exact generalization of the extended Euclidean algorithm. Note that this theorem is useful even when \( R \) is a principal ideal domain, since \( R \) is not necessarily Euclidean.
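The proof can be traced numerically for \( R = \mathbb{Z} \), where every fractional ideal is \( q\mathbb{Z} \) for a positive rational \( q \), so ideal arithmetic becomes gcd arithmetic on rationals. A sketch under that assumption (all function names are ours):

```python
from fractions import Fraction
from math import gcd

def qgcd(q1, q2):
    """Positive generator of q1*Z + q2*Z for positive rationals q1, q2."""
    d = q1.denominator * q2.denominator
    return Fraction(gcd(q1.numerator * q2.denominator,
                        q2.numerator * q1.denominator), d)

def ext_gcd(a, b):  # (g, x, y) with a*x + b*y = g
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def solve(a, b, alpha, beta):
    """Mirror the proof of Theorem 1.3.3 for R = Z with fractional ideals
    alpha*Z and beta*Z (alpha, beta > 0): return (u, v, delta) with
    a*u + b*v = 1, u in alpha*delta^-1*Z, v in beta*delta^-1*Z,
    where delta generates d = a*alpha*Z + b*beta*Z."""
    delta = qgcd(a * alpha, b * beta)
    I, J = a * alpha / delta, b * beta / delta  # integral, I + J = Z
    assert I.denominator == 1 and J.denominator == 1
    g, x, y = ext_gcd(int(I), int(J))
    assert g == 1
    e, f = I * x, J * y                          # e + f = 1
    return e / a, f / b, delta

u, v, delta = solve(Fraction(3), Fraction(5), Fraction(1, 2), Fraction(7))
assert 3 * u + 5 * v == 1
assert (u * delta / Fraction(1, 2)).denominator == 1  # u in a*d^-1
assert (v * delta / 7).denominator == 1               # v in b*d^-1
```

The two membership asserts at the end check exactly the "rigidity" conditions of the remark.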
Proposition 1.3.4. Let \( \mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d} \) be fractional ideals of \( R \), and let \( a, b, c, d \) be elements of \( K \). Set \( e = {ad} - {bc} \), and assume that

\[ \mathfrak{a}\mathfrak{b} = e\mathfrak{c}\mathfrak{d},\;a \in \mathfrak{a}{\mathfrak{c}}^{-1},\;b \in \mathfrak{b}{\mathfrak{c}}^{-1},\;c \in \mathfrak{a}{\mathfrak{d}}^{-1},\;d \in \mathfrak{b}{\mathfrak{d}}^{-1}. \]

Finally, let \( x \) and \( y \) be two elements of an \( R \)-module \( M \), and set

\[ \left( \begin{array}{ll} {x}^{\prime } & {y}^{\prime } \end{array}\right) = \left( \begin{array}{ll} x & y \end{array}\right) \left( \begin{array}{ll} a & c \\ b & d \end{array}\right) . \]

Then

\[ \mathfrak{a}x + \mathfrak{b}y = \mathfrak{c}{x}^{\prime } + \mathfrak{d}{y}^{\prime }. \]
Proof. We have \( {x}^{\prime } = {ax} + {by} \) and \( {y}^{\prime } = {cx} + {dy} \); hence

\[ \mathfrak{c}{x}^{\prime } + \mathfrak{d}{y}^{\prime } \subset \left( {a\mathfrak{c} + c\mathfrak{d}}\right) x + \left( {b\mathfrak{c} + d\mathfrak{d}}\right) y \subset \mathfrak{a}x + \mathfrak{b}y. \]

Conversely, we have \( x = \left( {d{x}^{\prime } - b{y}^{\prime }}\right) /e \) and \( y = \left( {-c{x}^{\prime } + a{y}^{\prime }}\right) /e \); hence

\[ \mathfrak{a}x + \mathfrak{b}y \subset \frac{1}{e}\left( {\mathfrak{a}\mathfrak{b}{\mathfrak{d}}^{-1}{x}^{\prime } + \mathfrak{a}\mathfrak{b}{\mathfrak{c}}^{-1}{y}^{\prime }}\right), \]

and since \( \mathfrak{a}\mathfrak{b} \subset e\mathfrak{c}\mathfrak{d} \),

\[ \mathfrak{a}x + \mathfrak{b}y \subset \mathfrak{c}\mathfrak{d}\left( {{\mathfrak{d}}^{-1}{x}^{\prime } + {\mathfrak{c}}^{-1}{y}^{\prime }}\right) = \mathfrak{c}{x}^{\prime } + \mathfrak{d}{y}^{\prime }, \]

thus showing the double inclusion.

Note that, although we have used only the inclusion \( \mathfrak{a}\mathfrak{b} \subset e\mathfrak{c}\mathfrak{d} \) in the proof, the hypotheses on \( a, b, c \), and \( d \) imply that \( e\mathfrak{c}\mathfrak{d} \subset \mathfrak{a}\mathfrak{b} \), so we must have equality.
Corollary 1.3.5. Let \( \mathfrak{a} \) and \( \mathfrak{b} \) be two ideals, \( a \) and \( b \) be two elements of \( K \) not both zero, \( \mathfrak{d} = a\mathfrak{a} + b\mathfrak{b} \), and \( u \in \mathfrak{a}{\mathfrak{d}}^{-1}, v \in \mathfrak{b}{\mathfrak{d}}^{-1} \) such that \( {au} + {bv} = 1 \) as given by Theorem 1.3.3.

Let \( x \) and \( y \) be two elements of an \( R \)-module \( M \), and set

\[ \left( \begin{array}{ll} {x}^{\prime } & {y}^{\prime } \end{array}\right) = \left( \begin{array}{ll} x & y \end{array}\right) \left( \begin{matrix} b & u \\ - a & v \end{matrix}\right) . \]

Then

\[ \mathfrak{a}x + \mathfrak{b}y = \mathfrak{a}\mathfrak{b}{\mathfrak{d}}^{-1}{x}^{\prime } + \mathfrak{d}{y}^{\prime }. \]
Proof. Since \( b \in {\mathfrak{b}}^{-1}\mathfrak{d} \) and \( a \in {\mathfrak{a}}^{-1}\mathfrak{d} \), this is clearly a special case of Proposition 1.3.4 with \( \mathfrak{c} = \mathfrak{a}\mathfrak{b}{\mathfrak{d}}^{-1} \).
Corollary 1.3.6. Let \( \mathfrak{a},\mathfrak{b} \) be two ideals. Assume that \( a, b, c \), and \( d \) are four elements of \( K \) such that

\[ {ad} - {bc} = 1,\;a \in \mathfrak{a},\;b \in \mathfrak{b},\;c \in {\mathfrak{b}}^{-1},\;d \in {\mathfrak{a}}^{-1}. \]

Let \( x \) and \( y \) be two elements of an \( R \)-module \( M \), and set

\[ \left( \begin{array}{ll} {x}^{\prime } & {y}^{\prime } \end{array}\right) = \left( \begin{array}{ll} x & y \end{array}\right) \left( \begin{array}{ll} a & c \\ b & d \end{array}\right) . \]

Then

\[ \mathfrak{a}x + \mathfrak{b}y = R{x}^{\prime } + \mathfrak{a}\mathfrak{b}{y}^{\prime }. \]
Proof. This is also a special case of Proposition 1.3.4 with \( \mathfrak{c} = R \) and \( \mathfrak{d} = \mathfrak{{ab}} \) . We will see in Proposition 1.3.12 how to find \( a, b, c \), and \( d \), given \( \mathfrak{a} \) and \( \mathfrak{b} \) .
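The corollary can be checked numerically for \( R = \mathbb{Z} \), where a submodule \( q\mathbb{Z} \) of \( \mathbb{Q} \) is determined by its positive rational generator. A sketch with the hand-picked instance \( \mathfrak{a} = 2\mathbb{Z} \), \( \mathfrak{b} = 3\mathbb{Z} \), \( (a,b,c,d) = (2,3,1/3,1) \) (our choice, satisfying the hypotheses):

```python
from fractions import Fraction
from math import gcd

def qgcd(q1, q2):
    """Positive generator of q1*Z + q2*Z inside Q (q1, q2 not both 0)."""
    d = q1.denominator * q2.denominator
    return Fraction(gcd(abs(q1.numerator) * q2.denominator,
                        abs(q2.numerator) * q1.denominator), d)

# a = 2Z, b = 3Z; the entries a=2, b=3, c=1/3, d=1 satisfy
# ad - bc = 2 - 1 = 1 with a in 2Z, b in 3Z, c in (3Z)^-1, d in (2Z)^-1.
a, b, c, d = Fraction(2), Fraction(3), Fraction(1, 3), Fraction(1)
assert a * d - b * c == 1

for x, y in [(Fraction(1), Fraction(1)), (Fraction(3), Fraction(2)),
             (Fraction(5), Fraction(-7))]:
    xp, yp = a * x + b * y, c * x + d * y
    # a*x + b*y = R*x' + ab*y' becomes an equality of subgroups of Q,
    # hence of their positive rational generators (ab = 6Z here):
    assert qgcd(2 * x, 3 * y) == qgcd(xp, 6 * yp)
```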
Proposition 1.3.7. Given ideals \( {\mathfrak{a}}_{i} \) for \( 1 \leq i \leq k \) whose sum is equal to \( R \) , we can in polynomial time find elements \( {a}_{i} \in {\mathfrak{a}}_{i} \) such that \( \mathop{\sum }\limits_{i}{a}_{i} = 1 \) .
Proof. Same proof as for Proposition 1.3.1, except that we concatenate the \( k \) HNF matrices of the ideals and we split \( Z \) into \( k \) pieces at the end. Note that the matrix \( U \) will be an \( {nk} \times {nk} \) unimodular matrix, which can become quite large.
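For \( R = \mathbb{Z} \) the concatenate-and-reduce step collapses to folding the extended Euclidean algorithm over the list of generators. A minimal sketch (the function name `split_unity` is ours):

```python
def ext_gcd(a, b):  # (g, x, y) with a*x + b*y = g
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def split_unity(gens):
    """For R = Z and ideals g_i*Z with g_1*Z + ... + g_k*Z = Z, return
    a_i in g_i*Z with sum(a_i) = 1, folding the extended Euclidean step
    over the list (the analogue of concatenating the k HNF matrices)."""
    g, coeffs = gens[0], [1]
    for gen in gens[1:]:
        g, x, y = ext_gcd(g, gen)
        # invariant: g == sum(c * gi for c, gi in zip(coeffs + [y], ...))
        coeffs = [c * x for c in coeffs] + [y]
    if g != 1:
        raise ValueError("the ideals do not sum to R")
    return [c * gen for c, gen in zip(coeffs, gens)]

parts = split_unity([6, 10, 15])
assert sum(parts) == 1
assert all(a % g == 0 for a, g in zip(parts, [6, 10, 15]))
```

The folding keeps the transformation data small, in contrast with the \( nk \times nk \) unimodular matrix of the general algorithm.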
Proposition 1.3.8. Let \( S \) be a finite set of prime ideals of \( R \) and let \( {\left( {e}_{\mathfrak{p}}\right) }_{\mathfrak{p} \in S} \in {\mathbb{Z}}^{S} \) . There exists a polynomial-time algorithm that finds \( a \in K \) such that \( {v}_{\mathfrak{p}}\left( a\right) = {e}_{\mathfrak{p}} \) for \( \mathfrak{p} \in S \) and \( {v}_{\mathfrak{p}}\left( a\right) \geq 0 \) for \( \mathfrak{p} \notin S \) .
Proof. We can write \( {e}_{\mathfrak{p}} = {f}_{\mathfrak{p}} - {g}_{\mathfrak{p}} \) with \( {f}_{\mathfrak{p}} \geq 0 \) and \( {g}_{\mathfrak{p}} \geq 0 \). If we can find \( n \) (resp., \( d \)) such that the conditions are satisfied with \( {e}_{\mathfrak{p}} \) replaced by \( {f}_{\mathfrak{p}} \) (resp., \( {g}_{\mathfrak{p}} \)), it is clear that \( a = n/d \) satisfies our conditions. Thus, we may assume that \( {e}_{\mathfrak{p}} \geq 0 \) for \( \mathfrak{p} \in S \). Following the classical proof (see, for example, [Coh0, Proposition 4.7.8]), we compute the ideal product

\[ I = \mathop{\prod }\limits_{{\mathfrak{p} \in S}}{\mathfrak{p}}^{{e}_{\mathfrak{p}} + 1} \]

and we set for each \( \mathfrak{p} \in S \)

\[ {\mathfrak{a}}_{\mathfrak{p}} = I \cdot {\mathfrak{p}}^{-{e}_{\mathfrak{p}} - 1}. \]

Then the \( {\mathfrak{a}}_{\mathfrak{p}} \) are integral ideals that sum to \( R \), so by Proposition 1.3.7, we can in polynomial time find \( {a}_{\mathfrak{p}} \in {\mathfrak{a}}_{\mathfrak{p}} \) whose sum is equal to 1. Furthermore, we can find \( {b}_{\mathfrak{p}} \in {\mathfrak{p}}^{{e}_{\mathfrak{p}}} \smallsetminus {\mathfrak{p}}^{{e}_{\mathfrak{p}} + 1} \) (for example, by taking the \( {e}_{\mathfrak{p}} \)th power of an element of \( \mathfrak{p} \smallsetminus {\mathfrak{p}}^{2} \), which can be found in polynomial time). Then \( a = \mathop{\sum }\limits_{{\mathfrak{p} \in S}}{a}_{\mathfrak{p}}{b}_{\mathfrak{p}} \) is a solution to our problem.
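The construction can be followed line by line for \( R = \mathbb{Z} \), where \( \mathfrak{p} = p\mathbb{Z} \) and the element of \( \mathfrak{p} \smallsetminus {\mathfrak{p}}^{2} \) is just \( p \) itself. A sketch (function names are ours):

```python
def ext_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def split_unity(gens):
    """a_i in gens[i]*Z with sum(a_i) = 1 (Proposition 1.3.7 over Z)."""
    g, coeffs = gens[0], [1]
    for gen in gens[1:]:
        g, x, y = ext_gcd(g, gen)
        coeffs = [c * x for c in coeffs] + [y]
    assert g == 1
    return [c * gen for c, gen in zip(coeffs, gens)]

def v(p, x):
    """p-adic valuation of a nonzero integer x."""
    e = 0
    while x % p == 0:
        x //= p
        e += 1
    return e

def prescribed_valuations(exponents):
    """Follow the proof of Proposition 1.3.8 for R = Z and e_p >= 0:
    I = prod p^(e_p+1); the a_p = I / p^(e_p+1) sum to Z; the elements
    a_p come from split_unity; and b_p = p^e_p (p itself generates a
    suitable element of p \\ p^2).  Returns a with v_p(a) = e_p."""
    I = 1
    for p, e in exponents.items():
        I *= p ** (e + 1)
    gens = [I // p ** (e + 1) for p, e in exponents.items()]
    parts = split_unity(gens)
    return sum(a * p ** e for a, (p, e) in zip(parts, exponents.items()))

a = prescribed_valuations({2: 3, 3: 0, 5: 2})
assert v(2, a) == 3 and v(3, a) == 0 and v(5, a) == 2
```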
Corollary 1.3.9. Given two integral ideals \( \mathfrak{a} \) and \( \mathfrak{b} \) of \( R \) such that the factorization of the norm of \( \mathfrak{b} \) is known, there exists a polynomial-time algorithm that finds \( x \in K \) such that \( x\mathfrak{a} \) is an integral ideal coprime to \( \mathfrak{b} \), and similarly finds \( y \in K \) such that \( y{\mathfrak{a}}^{-1} \) is an integral ideal coprime to \( \mathfrak{b} \) .
Proof. For \( x \), apply Proposition 1.3.8 to \( S \) equal to the prime ideal factors of \( \mathfrak{b} \) and to \( {e}_{\mathfrak{p}} = - {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) \) for all \( \mathfrak{p} \in S \) . For \( y \), apply Proposition 1.3.8 to \( S \) equal to the prime ideal factors of \( \mathfrak{a} \) and \( \mathfrak{b} \) and to \( {e}_{\mathfrak{p}} = {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) \) for all \( \mathfrak{p} \in S \) .
Proposition 1.3.10. Let \( \mathfrak{a} \) be an integral ideal of \( R \) and \( a \in \mathfrak{a}, a \neq 0 \). Assume that the prime ideal factorization of \( a \) is known. Then there exists a polynomial-time algorithm that finds \( b \in \mathfrak{a} \) such that \( \mathfrak{a} = {aR} + {bR} \).
Proof. Write \( {aR} = \mathop{\prod }\limits_{\mathfrak{p}}{\mathfrak{p}}^{{e}_{\mathfrak{p}}} \) with \( {e}_{\mathfrak{p}} \geq 0 \) . Thus, \( \mathfrak{a} = \mathop{\prod }\limits_{\mathfrak{p}}{\mathfrak{p}}^{{v}_{\mathfrak{p}}\left( \mathfrak{a}\right) } \) with \( 0 \leq \) \( {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) \leq {e}_{\mathfrak{p}} \) . By Proposition 1.3.8 we can, in polynomial time, find \( b \in R \) such that \( {v}_{\mathfrak{p}}\left( b\right) = {v}_{\mathfrak{p}}\left( \mathfrak{a}\right) \) for all \( \mathfrak{p} \mid a \) ; by looking at \( \mathfrak{p} \) -adic valuations, it is clear that \( \mathfrak{a} = {aR} + {bR} \) .
Proposition 1.3.11. Let \( S \) be a finite set of prime ideals of \( R \), let \( {\left( {e}_{\mathfrak{p}}\right) }_{\mathfrak{p} \in S} \in {\mathbb{Z}}^{S} \), and let \( {\left( {x}_{\mathfrak{p}}\right) }_{\mathfrak{p} \in S} \in {K}^{S} \). Then there exists a polynomial-time algorithm that finds \( x \in K \) such that \( {v}_{\mathfrak{p}}\left( {x - {x}_{\mathfrak{p}}}\right) \geq {e}_{\mathfrak{p}} \) for \( \mathfrak{p} \in S \) and \( {v}_{\mathfrak{p}}\left( x\right) \geq 0 \) for \( \mathfrak{p} \notin S \).
Proof. Assume first that the \( {e}_{\mathfrak{p}} \) are nonnegative and \( {x}_{\mathfrak{p}} \in R \). Then we introduce the same ideals \( I \) and \( {\mathfrak{a}}_{\mathfrak{p}} \) and elements \( {a}_{\mathfrak{p}} \) as in the proof of Proposition 1.3.8. If we set

\[ x = \mathop{\sum }\limits_{{\mathfrak{p} \in S}}{a}_{\mathfrak{p}}{x}_{\mathfrak{p}}, \]

it is easy to see that \( x \) satisfies the required conditions, since \( {a}_{\mathfrak{p}} - 1 \) and the \( {a}_{\mathfrak{q}} \) for \( \mathfrak{q} \neq \mathfrak{p} \) all belong to \( {\mathfrak{p}}^{{e}_{\mathfrak{p}} + 1} \).

Consider now the general case. Let \( d \in R \) be a common denominator for the \( {x}_{\mathfrak{p}} \), and multiply \( d \) by suitable elements of \( R \) so that \( {e}_{\mathfrak{p}} + {v}_{\mathfrak{p}}\left( d\right) \geq 0 \) for all \( \mathfrak{p} \in S \). According to what we have just proved, there exists \( y \in R \) such that

\[ \forall \mathfrak{p} \in S,\;{v}_{\mathfrak{p}}\left( {y - d{x}_{\mathfrak{p}}}\right) \geq {e}_{\mathfrak{p}} + {v}_{\mathfrak{p}}\left( d\right) \;\text{ and } \]

\[ \forall \mathfrak{p} \mid d,\mathfrak{p} \notin S,\;{v}_{\mathfrak{p}}\left( y\right) \geq {v}_{\mathfrak{p}}\left( d\right) . \]

It follows that \( x = y/d \) satisfies the given conditions.
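For \( R = \mathbb{Z} \) the elements \( {a}_{\mathfrak{p}} \) of the simplest case are the idempotents of the Chinese remainder theorem, so the whole construction is a CRT computation. A sketch of that case, with integral targets and nonnegative exponents (the function name `crt_approximation` is ours; the construction guarantees \( {v}_{p}\left( {x - {x}_{p}}\right) \) at least \( {e}_{p} \)):

```python
def crt_approximation(data):
    """Simplest case of Proposition 1.3.11 for R = Z: data maps a prime
    p to (x_p, e_p) with x_p integral and e_p >= 0.  Build x = sum of
    a_p * x_p where a_p = 1 mod p^(e_p+1) and a_p = 0 mod q^(e_q+1) for
    q != p; then v_p(x - x_p) >= e_p for every p."""
    moduli = {p: p ** (e + 1) for p, (xp, e) in data.items()}
    M = 1
    for m in moduli.values():
        M *= m
    x = 0
    for p, (xp, e) in data.items():
        m = moduli[p]
        Mp = M // m                   # product of the other moduli
        a_p = Mp * pow(Mp, -1, m)     # CRT idempotent for p (Python 3.8+)
        x += a_p * xp
    return x % M

x = crt_approximation({2: (1, 2), 3: (5, 1), 7: (4, 1)})
assert (x - 1) % 2 ** 2 == 0 and (x - 5) % 3 == 0 and (x - 4) % 7 == 0
```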
Proposition 1.3.12. Let \( \mathfrak{a} \) and \( \mathfrak{b} \) be two (fractional) ideals in \( R \) . Assume that the prime ideal factorization of \( \mathfrak{a} \) or of \( \mathfrak{b} \) is known. Then it is possible to find in polynomial time elements \( a \in \mathfrak{a}, b \in \mathfrak{b}, c \in {\mathfrak{b}}^{-1} \), and \( d \in {\mathfrak{a}}^{-1} \) such that \( {ad} - {bc} = 1 \) .
Proof. Multiplying if necessary \( \mathfrak{a} \) and \( \mathfrak{b} \) by an element of \( {\mathbb{Q}}^{ * } \), we can reduce to the case where \( \mathfrak{a} \) and \( \mathfrak{b} \) are integral ideals. Assume, for example, that the factorization of \( \mathfrak{b} \) is known. According to Corollary 1.2.11, we can, in polynomial time, find \( a \in R \) such that \( a{\mathfrak{a}}^{-1} \) is an integral ideal (or, equivalently, \( a \in \mathfrak{a} \) ) and coprime to \( \mathfrak{b} \) . According to Proposition 1.3.1, we can thus find \( e \in a{\mathfrak{a}}^{-1} \) and \( f \in \mathfrak{b} \) such that \( e + f = 1 \) . Clearly, \( b = f, c = - 1 \), and \( d = e/a \) satisfy the required conditions.
Corollary 1.4.3. Let \( M \) be a finitely generated, torsion-free \( R \)-module together with a nondegenerate, bilinear pairing \( T\left( {x, y}\right) \) from \( M \times M \) to \( R \) (for example, \( M = {\mathbb{Z}}_{L} \), where \( L \) is a number field containing \( K \), and \( T\left( {x, y}\right) = {\operatorname{Tr}}_{L/K}\left( {x \cdot y}\right) \)). For any pseudo-basis \( \mathcal{B} = \left( {{\omega }_{j},{\mathfrak{a}}_{j}}\right) \) of \( M \), let \( {\operatorname{disc}}_{T}\left( \mathcal{B}\right) \) be the ideal defined by \( {\operatorname{disc}}_{T}\left( \mathcal{B}\right) = \det \left( {T\left( {{\omega }_{i},{\omega }_{j}}\right) }\right) {\mathfrak{a}}^{2} \), where as usual \( \mathfrak{a} = {\mathfrak{a}}_{1}\cdots {\mathfrak{a}}_{n} \). Then if \( {\mathcal{B}}^{\prime } = \left( {{\eta }_{j},{\mathfrak{b}}_{j}}\right) \) is another pseudo-basis of \( M \), we have \( {\operatorname{disc}}_{T}\left( {\mathcal{B}}^{\prime }\right) = {\operatorname{disc}}_{T}\left( \mathcal{B}\right) \).
Proof. Note that, since in general \( {\omega }_{j} \notin M \), in the above definition we extend the bilinear form \( T \) to \( V \times V \) (where \( V = {KM} \)) by bilinearity.

Let \( U \) be the matrix expressing the \( {\eta }_{j} \) in terms of the \( {\omega }_{i} \). We know that \( \mathfrak{a} = \det \left( U\right) \mathfrak{b} \). By bilinearity, it is clear that if \( G \) (resp., \( {G}^{\prime } \)) is the matrix of the \( T\left( {{\omega }_{i},{\omega }_{j}}\right) \) (resp., \( T\left( {{\eta }_{i},{\eta }_{j}}\right) \)), then \( {G}^{\prime } = {U}^{t}{GU} \). It follows that

\[ {\operatorname{disc}}_{T}\left( {\mathcal{B}}^{\prime }\right) = \det \left( {G}^{\prime }\right) {\mathfrak{b}}^{2} = \det \left( G\right) \det {\left( U\right) }^{2}{\mathfrak{a}}^{2}/\det {\left( U\right) }^{2} = \det \left( G\right) {\mathfrak{a}}^{2} = {\operatorname{disc}}_{T}\left( \mathcal{B}\right) . \]

Since \( {\operatorname{disc}}_{T}\left( \mathcal{B}\right) \) does not depend on the chosen pseudo-basis \( \mathcal{B} \), we will denote it by \( {\mathfrak{d}}_{T}\left( M\right) \) and call it the discriminant ideal of \( M \) with respect to the pairing \( T\left( {x, y}\right) \).
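The cancellation of \( \det {\left( U\right) }^{2} \) can be checked numerically for \( R = \mathbb{Z} \), where an ideal is determined by a rational generator. A sketch with a hand-picked pairing matrix \( G \) and the second pseudo-basis \( {\eta }_{1} = 2{\omega }_{1} \) with ideal \( \left( {1/2}\right) \mathbb{Z} \), \( {\eta }_{2} = 3{\omega }_{2} \) with ideal \( \left( {1/3}\right) \mathbb{Z} \) (all choices ours):

```python
from fractions import Fraction as F

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def mat_mul(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(m):
    return [[m[j][i] for j in range(2)] for i in range(2)]

# Pairing matrix G = (T(w_i, w_j)) on the pseudo-basis (w_1, Z), (w_2, Z)
# of M = Z w_1 + Z w_2; any nondegenerate symmetric choice works here.
G = [[F(2), F(1)], [F(1), F(3)]]
a = F(1)                 # generator of the ideal product a_1 a_2 = Z

# Second pseudo-basis of the same M: U = diag(2, 3), ideals (1/2)Z and
# (1/3)Z, so the product b is generated by 1/6.
U = [[F(2), F(0)], [F(0), F(3)]]
b = F(1, 6)
assert a == det2(U) * b  # the relation a = det(U) b

Gp = mat_mul(transpose(U), mat_mul(G, U))     # G' = U^t G U
assert det2(Gp) * b ** 2 == det2(G) * a ** 2  # disc_T(B') = disc_T(B)
```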
Proposition 1.4.4. Let \( {\left( {\omega }_{i},{\mathfrak{a}}_{i}\right) }_{i} \) be a pseudo-basis for a finitely generated, torsion-free module \( M \), and similarly \( {\left( {\omega }_{j}^{\prime },{\mathfrak{a}}_{j}^{\prime }\right) }_{j} \) for a module \( {M}^{\prime } \). Let \( f \) be a \( K \)-linear map from \( {M}^{\prime } \) to \( M \). There exists a matrix \( A = \left( {a}_{i, j}\right) \) such that \( {a}_{i, j} \in {\mathfrak{a}}_{i}{{\mathfrak{a}}_{j}^{\prime }}^{-1} \) and

\[ f\left( {\mathop{\sum }\limits_{j}{a}_{j}^{\prime }{\omega }_{j}^{\prime }}\right) = \mathop{\sum }\limits_{i}\left( {\mathop{\sum }\limits_{j}{a}_{i, j}{a}_{j}^{\prime }}\right) {\omega }_{i}. \]
Proof. The (very easy) proof is left to the reader (Exercise 11). The matrix \( A \) will of course be called the matrix of the map \( f \) on the chosen pseudo-bases of \( {M}^{\prime } \) and \( M \) . Note that we need only a matrix and not a pseudo-matrix (see Definition 1.4.5) to represent a map. Thus, we will represent maps by such matrices \( A \) .
Theorem 1.4.6 (Hermite Normal Form in Dedekind Domains). Let \( \left( {A, I}\right) \) be a pseudo-matrix, where \( I = \left( {\mathfrak{a}}_{i}\right) \) is a list of \( k \) fractional ideals, and \( A = \left( {a}_{i, j}\right) \) is an \( n \times k \) matrix. Assume that \( A \) is of rank \( n \) (so \( k \geq n \)) with entries in the field of fractions \( K \) of \( R \) (we could just as easily consider the case of a matrix of lower rank). Let \( M = \mathop{\sum }\limits_{j}{\mathfrak{a}}_{j}{A}_{j} \) be the \( R \)-module associated with the pseudo-matrix \( \left( {A, I}\right) \). There exist \( k \) nonzero ideals \( {\left( {\mathfrak{b}}_{j}\right) }_{1 \leq j \leq k} \) and a \( k \times k \) matrix \( U = \left( {u}_{i, j}\right) \) satisfying the following conditions, where we set \( \mathfrak{a} = {\mathfrak{a}}_{1}\cdots {\mathfrak{a}}_{k} \) and \( \mathfrak{b} = {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{k} \).

(1) For all \( i \) and \( j \) we have \( {u}_{i, j} \in {\mathfrak{a}}_{i}{\mathfrak{b}}_{j}^{-1} \).

(2) We have \( \mathfrak{a} = \det \left( U\right) \mathfrak{b} \).

(3) The matrix \( {AU} \) is of the following form:

\[ {AU} = \left( \begin{matrix} 0 & 0 & \ldots & 0 & 1 & * & \ldots & * \\ 0 & 0 & \ldots & 0 & 0 & 1 & \ldots & * \\ \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \ddots & \vdots \\ 0 & 0 & \ldots & 0 & 0 & \ldots & 0 & 1 \end{matrix}\right), \]

where the first \( k - n \) columns are zero (we will write this in abbreviated form as \( {AU} = \left( {0 \mid H}\right) \)).

(4) If we call \( {\omega }_{j} \) the elements corresponding to the nonzero columns of \( {AU} \) and \( {\mathfrak{c}}_{j} = {\mathfrak{b}}_{k - n + j} \) for \( 1 \leq j \leq n \), then

\[ M = {\mathfrak{c}}_{1}{\omega }_{1} \oplus \cdots \oplus {\mathfrak{c}}_{n}{\omega }_{n}; \]

in other words, \( {\left( {\omega }_{j},{\mathfrak{c}}_{j}\right) }_{1 \leq j \leq n} \) is a pseudo-basis of the image \( M \) of the pseudo-matrix \( \left( {A, I}\right) \).

(5) If we denote by \( {U}_{j} \) the columns of \( U \), then \( {\left( {U}_{j},{\mathfrak{b}}_{j}\right) }_{1 \leq j \leq k - n} \) is a pseudo-basis for the kernel of the pseudo-matrix \( \left( {A, I}\right) \).
Proof. We give the proof of the existence of the HNF as an algorithm, very similar to [Coh0, Algorithm 2.4.5], which is the naive HNF algorithm.

Algorithm 1.4.7 (HNF Algorithm in Dedekind Domains). Given an \( n \times k \) matrix \( A = \left( {a}_{i, j}\right) \) of rank \( n \), and \( k \) (fractional) ideals \( {\mathfrak{a}}_{j} \) in a number field \( K \), this algorithm computes \( k \) ideals \( {\mathfrak{b}}_{j} \) and a \( k \times k \) matrix \( U \) such that these data satisfy the conditions of Theorem 1.4.6. We will make use only of elementary transformations of the type given in Theorem 1.3.3 combined with Corollary 1.3.5. We denote by \( {A}_{j} \) (resp., \( {U}_{j} \)) the columns of \( A \) (resp., \( U \)).

1. [Initialize] Set \( i \leftarrow n \), \( j \leftarrow k \), and let \( U \) be the \( k \times k \) identity matrix.

2. [Check zero] Set \( m \leftarrow j \), and while \( m \geq 1 \) and \( {a}_{i, m} = 0 \), set \( m \leftarrow m - 1 \). If \( m = 0 \), the matrix \( A \) is not of rank \( n \), so print an error message and terminate the algorithm. Otherwise, if \( m < j \), exchange \( {A}_{m} \) with \( {A}_{j} \), \( {\mathfrak{a}}_{m} \) with \( {\mathfrak{a}}_{j} \), \( {U}_{m} \) with \( {U}_{j} \), and set \( m \leftarrow j \).

3. [Put 1 on the main diagonal] Set \( {A}_{j} \leftarrow {A}_{j}/{a}_{i, j} \), \( {U}_{j} \leftarrow {U}_{j}/{a}_{i, j} \), and \( {\mathfrak{a}}_{j} \leftarrow {a}_{i, j}{\mathfrak{a}}_{j} \). (We now have \( {a}_{i, j} = 1 \).)

4. [Loop] If \( m = 1 \), go to step 6. Otherwise, set \( m \leftarrow m - 1 \), and if \( {a}_{i, m} = 0 \), go to step 4.

5. [Euclidean step] (Here \( {a}_{i, j} = 1 \) and \( {a}_{i, m} \neq 0 \).) Using t
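Over \( R = \mathbb{Z} \), where every fractional ideal is of the form \( q\mathbb{Z} \), Algorithm 1.4.7 reduces essentially to the naive HNF algorithm it is modeled on. The following Python sketch of that integer special case is ours (the helper names `egcd`, `hnf`, `combine` are not from the text); it performs only unimodular column operations, keeps positive pivots instead of the pivot-1 normalization of step 3, and omits the final reduction of the off-diagonal entries (Corollary 1.4.11).

```python
def egcd(a, b):
    # extended Euclid: (g, u, v) with u*a + v*b = g = gcd(a, b) >= 0
    if b == 0:
        return (abs(a), 1 if a >= 0 else -1, 0)
    g, u, v = egcd(b, a % b)
    return (g, v, u - (a // b) * v)

def hnf(A):
    """Column-style naive HNF over Z: returns (AU, U) with U unimodular
    and A*U = (0 | H), H upper triangular.  A is an n x k integer
    matrix of rank n (n <= k), given as a list of rows."""
    n, k = len(A), len(A[0])
    A = [row[:] for row in A]
    U = [[int(r == c) for c in range(k)] for r in range(k)]

    def combine(j, m, a, b, c, d):
        # (col_j, col_m) <- (a*col_j + b*col_m, c*col_j + d*col_m)
        for M in (A, U):
            for row in M:
                row[j], row[m] = a * row[j] + b * row[m], c * row[j] + d * row[m]

    j = k - 1
    for i in range(n - 1, -1, -1):      # pivot rows, bottom to top
        for m in range(j):              # Euclidean steps: clear a[i][m]
            if A[i][m]:
                g, u, v = egcd(A[i][j], A[i][m])
                # det of this 2x2 transform is -(u*a + v*b)/g = -1
                combine(j, m, u, v, A[i][m] // g, -(A[i][j] // g))
        if A[i][j] < 0:                 # normalize the pivot sign
            for M in (A, U):
                for row in M:
                    row[j] = -row[j]
        j -= 1
    return A, U

A0 = [[1, 2, 3], [4, 5, 6]]
AU, U = hnf(A0)
# sanity check: the original A times U reproduces the returned (0 | H)
prod = [[sum(A0[r][t] * U[t][c] for t in range(3)) for c in range(3)]
        for r in range(2)]
assert prod == AU and AU[0][0] == AU[1][0] == AU[1][1] == 0
```

In the Dedekind version, the content of each pivot is pushed into the corresponding ideal \( {\mathfrak{b}}_{j} \) (step 3) rather than kept in the matrix.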
Theorem 1.4.9. With the notation of Theorem 1.4.6, for \( 1 \leq j \leq n \), set \( {\mathfrak{c}}_{j} = {\mathfrak{b}}_{k - n + j} \) . Then the ideals \( {\mathfrak{c}}_{j} \) are unique. More precisely, if we call \( {\mathfrak{g}}_{j} = \) \( {\mathfrak{g}}_{j}\left( A\right) \) the ideal generated by all the \( \left( {n + 1 - j}\right) \times \left( {n + 1 - j}\right) \) minor-ideals in the last \( n + 1 - j \) rows of the matrix \( A \), then \( {\mathfrak{c}}_{j} = {\mathfrak{g}}_{n + 1 - j}{\mathfrak{g}}_{n - j}^{-1} \) .
Proof. One easily checks that the ideals \( {\mathfrak{g}}_{m}\left( A\right) \) are invariant under the elementary transformations of the type used in Algorithm 1.4.7. In particular, \( {\mathfrak{g}}_{j}\left( A\right) = {\mathfrak{g}}_{j}\left( {AU}\right) \). But in the last \( n + 1 - j \) rows of \( {AU} \) there is a single nonzero minor, whose value is trivially equal to 1; hence we have \( {\mathfrak{g}}_{j}\left( A\right) = {\mathfrak{c}}_{n + 1 - j}\cdots {\mathfrak{c}}_{n} \), proving the theorem.
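The invariance argument can be tested numerically in the special case \( R = \mathbb{Z} \), where every ideal is generated by a single integer and the minor-ideals \( {\mathfrak{g}}_{j} \) reduce to gcds of ordinary minors. A small sympy sketch (the helper `g` is ours):

```python
import sympy as sp
from itertools import combinations
from functools import reduce
from math import gcd

def g(A, m):
    # gcd of all m x m minors taken from the last m rows of A: the
    # Z-analogue of the ideal g_j in Theorem 1.4.9, with m = n + 1 - j
    n, k = A.shape
    rows = list(range(n - m, n))
    minors = [A.extract(rows, list(cols)).det()
              for cols in combinations(range(k), m)]
    return reduce(gcd, [int(abs(x)) for x in minors])

A = sp.Matrix([[1, 2, 3], [4, 5, 6]])
V = sp.Matrix([[1, 1, 0], [0, 1, 2], [0, 0, 1]])   # unimodular column ops
for m in (1, 2):
    assert g(A, m) == g(A * V, m)                  # invariance of g_j
```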
Proposition 1.4.10. If \( {AU} \) is of the form given by Theorem 1.4.6, a necessary and sufficient condition for \( {AV} \) to be of the same form with the same ideals \( {\mathfrak{b}}_{j} \) for \( j > k - n \) is that \( {U}^{-1}V \) be a block matrix \( \left( \begin{matrix} B & C \\ 0 & D \end{matrix}\right) \) with \( D \) an \( n \times n \) upper-triangular matrix with 1 on the diagonal such that for each \( i, j \) the entry in row \( i \) and column \( j \) belongs to \( {\mathfrak{c}}_{i}{\mathfrak{c}}_{j}^{-1} \) .
Proof. Trivial and left to the reader.
Corollary 1.4.11. For each \( i \) and \( j \) with \( 1 \leq i < j \leq n \), let \( {S}_{i, j} \) be a system of representatives of \( K/{\mathfrak{c}}_{i}{\mathfrak{c}}_{j}^{-1} \) . Write \( {AU} = \left( {0 \mid H}\right) \) as in Theorem 1.4.6. Then in that theorem, we may assume that for every \( i \) and \( j \) such that \( i < j \) the entry in row \( i \) and column \( j \) of the matrix \( H \) is in \( {S}_{i, j} \), in which case the matrix \( H \) is unique.
Proof. For \( i < j \), let \( {h}_{i, j} \) be the entry in row \( i \) and column \( j \) of the matrix \( H \). There exists a unique \( {h}_{i, j}^{\prime } \in {S}_{i, j} \) such that

\[
q = {h}_{i, j}^{\prime } - {h}_{i, j} \in {\mathfrak{c}}_{i}{\mathfrak{c}}_{j}^{-1}.
\]

If the \( {H}_{j} \) are the columns of \( H \), then by Proposition 1.4.10 the replacement of \( {H}_{j} \) by \( {H}_{j} + q{H}_{i} \) is a legal elementary operation, and since \( {h}_{i, i} = 1 \), it transforms \( {h}_{i, j} \) into \( {h}_{i, j} + q = {h}_{i, j}^{\prime } \), proving the existence. The uniqueness also follows from this, since there was a unique possible \( q \).
Theorem 1.7.2 (Smith Normal Form in Dedekind Domains). Let \( \left( {A, I, J}\right) \) be an integral pseudo-matrix as above, with \( A = \left( {a}_{i, j}\right) \) an \( n \times n \) matrix and \( I = \left( {{\mathfrak{b}}_{1},\ldots ,{\mathfrak{b}}_{n}}\right) \) and \( J = \left( {{\mathfrak{a}}_{1},\ldots ,{\mathfrak{a}}_{n}}\right) \) two vectors of \( n \) ideals such that \( {a}_{i, j} \in {\mathfrak{b}}_{i}{\mathfrak{a}}_{j}^{-1} \).

There exist vectors of ideals \( \left( {{\mathfrak{b}}_{1}^{\prime },\ldots ,{\mathfrak{b}}_{n}^{\prime }}\right) \) and \( \left( {{\mathfrak{a}}_{1}^{\prime },\ldots ,{\mathfrak{a}}_{n}^{\prime }}\right) \) and two \( n \times n \) matrices \( U = \left( {u}_{i, j}\right) \) and \( V = \left( {v}_{i, j}\right) \) satisfying the following conditions, where for all \( i \) we set \( {\mathfrak{d}}_{i} = {\mathfrak{a}}_{i}^{\prime }{{\mathfrak{b}}_{i}^{\prime }}^{-1} \), and we set \( \mathfrak{a} = {\mathfrak{a}}_{1}\cdots {\mathfrak{a}}_{n} \), \( \mathfrak{b} = {\mathfrak{b}}_{1}\cdots {\mathfrak{b}}_{n} \), \( {\mathfrak{a}}^{\prime } = {\mathfrak{a}}_{1}^{\prime }\cdots {\mathfrak{a}}_{n}^{\prime } \), and \( {\mathfrak{b}}^{\prime } = {\mathfrak{b}}_{1}^{\prime }\cdots {\mathfrak{b}}_{n}^{\prime } \).

(1) \( \mathfrak{a} = \det \left( U\right) {\mathfrak{a}}^{\prime } \) and \( {\mathfrak{b}}^{\prime } = \det \left( V\right) \mathfrak{b} \) (note the reversal).

(2) The matrix \( {VAU} \) is the \( n \times n \) identity matrix.

(3) The \( {\mathfrak{d}}_{i} \) are integral ideals, and for \( 2 \leq i \leq n \) we have \( {\mathfrak{d}}_{i - 1} \subset {\mathfrak{d}}_{i} \).

(4) For all \( i, j \) we have \( {u}_{i, j} \in {\mathfrak{a}}_{i}{{\mathfrak{a}}_{j}^{\prime }}^{-1} \) and \( {v}_{i, j} \in {\mathfrak{b}}_{i}^{\prime }{\mathfrak{b}}_{j}^{-1} \).
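Over \( R = \mathbb{Z} \) the ideals \( {\mathfrak{d}}_{i} \) become the classical invariant factors, and, as in Theorem 1.4.9 for the HNF, they can be read off from gcds of minors: \( {d}_{1}\cdots {d}_{i} \) is the gcd of all \( i \times i \) minors. A sympy sketch (the helper is ours; it lists the factors in increasing divisibility order \( {d}_{1} \mid {d}_{2} \mid \cdots \), the reverse of the containments \( {\mathfrak{d}}_{i - 1} \subset {\mathfrak{d}}_{i} \) of condition (3)):

```python
import sympy as sp
from itertools import combinations
from functools import reduce
from math import gcd

def invariant_factors(A):
    """Invariant factors d_1 | d_2 | ... of a nonsingular integer matrix,
    via d_1 * ... * d_i = gcd of all i x i minors of A."""
    n = A.shape[0]
    prev, ds = 1, []
    for i in range(1, n + 1):
        gi = reduce(gcd, [int(abs(A.extract(list(r), list(c)).det()))
                          for r in combinations(range(n), i)
                          for c in combinations(range(n), i)])
        ds.append(gi // prev)   # d_i = g_i / g_{i-1}
        prev = gi
    return ds

# diag(2, 3) has SNF diag(1, 6); diag(2, 4) has SNF diag(2, 4)
assert invariant_factors(sp.Matrix([[2, 0], [0, 3]])) == [1, 6]
assert invariant_factors(sp.Matrix([[2, 0], [0, 4]])) == [2, 4]
```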
Proof. Again we prove this theorem by giving an explicit algorithm for constructing the Smith normal form. We follow closely [Coh0, Algorithm 2.4.14], except that we do not work modulo the determinant (although such a modular version of the Smith normal form algorithm is easily written).

Algorithm 1.7.3 (SNF Algorithm in Dedekind Domains). Given an invertible \( n \times n \) matrix \( A = \left( {a}_{i, j}\right) \) and two lists of \( n \) (fractional) ideals \( I = \left( {\mathfrak{b}}_{i}\right) \) and \( J = \left( {\mathfrak{a}}_{j}\right) \) in a number field \( K \), this algorithm computes two other lists of \( n \) ideals \( {\mathfrak{b}}_{i}^{\prime } \) and \( {\mathfrak{a}}_{j}^{\prime } \) and two \( n \times n \) matrices \( U \) and \( V \) such that these data satisfy the conditions of Theorem 1.7.2. We will only make use of elementary transformations of the type given in Theorem 1.3.3 combined with Corollary 1.3.5. We denote by \( {A}_{j} \) (resp., \( {U}_{j} \)) the columns of \( A \) (resp., \( U \)), and by \( {A}_{j}^{\prime } \) (resp., \( {V}_{j}^{\prime } \)) the rows of \( A \) (resp., \( V \)).

1. [Initialize \( i \)] Set \( i \leftarrow n \), and let \( U \) and \( V \) be the \( n \times n \) identity matrix. If \( n = 1 \), output \( {\mathfrak{b}}_{1} \), \( {\mathfrak{a}}_{1} \), \( U \), and \( V \), and terminate the algorithm.

2. [Initialize \( j \) for row reduction] Set \( j \leftarrow i \) and \( c \leftarrow 0 \).

3. [Check zero] If \( j = 1 \), go to step 5. Otherwise, set \( j \leftarrow j - 1 \). If \( {a}_{i, j} = 0 \), go to step 3.

4. [Euclidean step] Using the algorithm of Theorem 1.3.3, set \( \mathfrak{d} \leftarrow {a}_{i, i}{\mathfrak{a}}_{i} + {a}_{i, j}{\mathfrak{a}}_{j} \) and find \( u \in {\mathfrak{a}}_{i}{\mathfrak{d}}^{-1} \) and \( v \in {\mathfrak{a}}_{j}{\mathfrak{d}}^{-1} \) such that \( {a}_{i, i}u + {a}_{i, j}v = 1 \).
Then set \( \left( {{A}_{j},{A}_{i}}\right) \leftarrow \left( {{a}_{i, j}{A}_{j} - {a}_{i, i}{A}_{i}, u{A}_{i} + v{A}_{j}}\right) \), \( \left( {{U}_{j},{U}_{i}}\right) \leftarrow \left( {{a}_{i, j}{U}_{j} - {a}_{i, i}{U}_{i}, u{U}_{i} + }\right. \)
Proposition 2.1.1. Let \( K \) be a number field, and let \( A \) be a finite-dimensional commutative \( K \)-algebra (in other words, a finite-dimensional \( K \)-vector space with an additional commutative ring structure with unit, compatible with the vector space structure). The following three properties are equivalent.

(1) \( A \) has no nonzero nilpotent elements;

(2) the equation \( {x}^{2} = 0 \) in \( A \) implies \( x = 0 \);

(3) the minimal polynomial in \( K\left\lbrack X\right\rbrack \) of any element \( a \in A \) is squarefree.
Proof. That (1) implies (2) is trivial. Let us prove that (2) implies (3), so assume (2), and let \( a \in A \). The set \( {I}_{a} \) of polynomials \( P \in K\left\lbrack X\right\rbrack \) such that \( P\left( a\right) = 0 \) is clearly an ideal of \( K\left\lbrack X\right\rbrack \). Furthermore, since \( A \) is of finite dimension \( n \), say, the elements \( 1, a,\ldots ,{a}^{n} \) are \( K \)-linearly dependent; hence \( {I}_{a} \) is nonzero. Therefore, \( {I}_{a} \) is generated by a monic polynomial \( {P}_{a} \in K\left\lbrack X\right\rbrack \), which will be called as usual the minimal polynomial of \( a \) in \( A \). Assume that \( {P}_{a}\left( X\right) = {Q}^{2}\left( X\right) R\left( X\right) \) in \( K\left\lbrack X\right\rbrack \), and let \( b = Q\left( a\right) R\left( a\right) \). We have \( {b}^{2} = {P}_{a}\left( a\right) R\left( a\right) = 0 \); hence by (2), \( b = 0 \). But this means that \( Q\left( X\right) R\left( X\right) \) is a multiple of the minimal polynomial \( {Q}^{2}\left( X\right) R\left( X\right) \). It follows that \( Q\left( X\right) \) is constant, so \( {P}_{a} \) is squarefree, as claimed.

Finally, if \( {a}^{n} = 0 \), the minimal polynomial of \( a \) must be a divisor of \( {X}^{n} \), and it must be squarefree by (3), so it must be equal to \( X \). Hence \( a = 0 \), and so (3) implies (1).
Proposition 2.1.3. Let \( B \) be a commutative ring with unit, and let \( {T}_{1} \) and \( {T}_{2} \) be polynomials in \( B\left\lbrack X\right\rbrack \). There exist polynomials \( {U}_{1}\left( X\right) \) and \( {U}_{2}\left( X\right) \) in \( B\left\lbrack X\right\rbrack \) such that

\[
{U}_{1}\left( X\right) {T}_{1}\left( X\right) + {U}_{2}\left( X\right) {T}_{2}\left( X\right) = \mathcal{R}\left( {{T}_{1}\left( X\right) ,{T}_{2}\left( X\right) }\right) \in B,
\]

where as usual \( \mathcal{R}\left( {{T}_{1}\left( X\right) ,{T}_{2}\left( X\right) }\right) \) denotes the resultant of the polynomials \( {T}_{1}\left( X\right) \) and \( {T}_{2}\left( X\right) \).
Proof. Let \( M \) be the Sylvester matrix associated to the polynomials \( {T}_{1} \) and \( {T}_{2} \) (see [Coh0, Lemma 3.3.4]). If \( \deg \left( {T}_{i}\right) = {n}_{i} \), let \( {U}_{1}\left( X\right) = \mathop{\sum }\limits_{{0 \leq i < {n}_{2}}}{x}_{i}{X}^{i} \) and \( {U}_{2}\left( X\right) = \mathop{\sum }\limits_{{0 \leq i < {n}_{1}}}{y}_{i}{X}^{i} \) be arbitrary polynomials of degree less than or equal to \( {n}_{2} - 1 \) and \( {n}_{1} - 1 \), respectively, and let

\[
Z = \left( {{x}_{{n}_{2} - 1},\ldots ,{x}_{0},{y}_{{n}_{1} - 1},\ldots ,{y}_{0}}\right) \text{ and }\mathcal{X} = {\left( {X}^{{n}_{1} + {n}_{2} - 1},\ldots, X,1\right) }^{t}.
\]

Then the matrix \( M \) can be defined by the equation

\[
{ZM}\mathcal{X} = {U}_{1}\left( X\right) {T}_{1}\left( X\right) + {U}_{2}\left( X\right) {T}_{2}\left( X\right) .
\]

Let \( {M}^{\text{adj }} \) be the adjoint matrix of \( M \). By definition, we have

\[
{M}^{\mathrm{{adj}}}M = \det \left( M\right) {I}_{{n}_{1} + {n}_{2}} = \mathcal{R}\left( {{T}_{1}\left( X\right) ,{T}_{2}\left( X\right) }\right) {I}_{{n}_{1} + {n}_{2}}.
\]

Hence, if \( Z \) is the last row of the matrix \( {M}^{\mathrm{{adj}}} \), we have

\[
{ZM}\mathcal{X} = \mathcal{R}\left( {{T}_{1}\left( X\right) ,{T}_{2}\left( X\right) }\right) = {U}_{1}\left( X\right) {T}_{1}\left( X\right) + {U}_{2}\left( X\right) {T}_{2}\left( X\right)
\]

if \( {U}_{1}\left( X\right) \) and \( {U}_{2}\left( X\right) \) are related to \( Z \) as above. Since the entries of the adjoint matrix, hence of \( Z \), are in the ring \( B \), this proves the existence of the polynomials \( {U}_{1}\left( X\right) \) and \( {U}_{2}\left( X\right) \) in \( B\left\lbrack X\right\rbrack \) and also gives an algorithm to find them. The algorithm mentioned in [Coh0, Exercise 5 of Chapter 3], based on the subresultant algorithm, is, however, much better in practice.
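This construction is directly computable. A sympy sketch for two small integer polynomials (the `sylvester` helper is ours and follows the row convention of the proof, with the coefficients of \( {T}_{1} \) occupying the first \( {n}_{2} \) rows):

```python
import sympy as sp

X = sp.symbols('X')
T1 = sp.Poly(X**2 + 1, X)
T2 = sp.Poly(X**3 + X + 2, X)
n1, n2 = T1.degree(), T2.degree()

def sylvester(p, q):
    # Sylvester matrix: deg(q) shifted rows of p's coefficients,
    # then deg(p) shifted rows of q's coefficients
    n, m = p.degree(), q.degree()
    M = sp.zeros(n + m, n + m)
    for i in range(m):
        for j, c in enumerate(p.all_coeffs()):
            M[i, i + j] = c
    for i in range(n):
        for j, c in enumerate(q.all_coeffs()):
            M[m + i, i + j] = c
    return M

M = sylvester(T1, T2)
# Z M X = U1*T1 + U2*T2 for any row vector Z; taking Z as the last row
# of the adjugate gives Z M = (0, ..., 0, det M), hence the resultant.
Z = M.adjugate()[-1, :]
U1 = sp.Poly(list(Z[:n2]), X)   # coefficients x_{n2-1}, ..., x_0
U2 = sp.Poly(list(Z[n2:]), X)   # coefficients y_{n1-1}, ..., y_0
identity = sp.expand(U1.as_expr() * T1.as_expr() + U2.as_expr() * T2.as_expr())
assert identity == sp.resultant(T1.as_expr(), T2.as_expr(), X)
```

Only ring operations are used (the adjugate is made of minors), which is why the argument works over an arbitrary commutative ring \( B \).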
Theorem 2.1.5 (Primitive Element Theorem). Let \( K \) be a number field and \( A \) be an étale algebra of dimension \( n \) over \( K \) . There exists \( \theta \in A \) (called a primitive element) such that \( A = K\left\lbrack \theta \right\rbrack \), in other words such that \( 1,\theta ,\ldots ,{\theta }^{n - 1} \) is a \( K \) -basis of \( A \) .
Proof. Since \( A \) is finite-dimensional over \( K \), there exist elements \( {\theta }_{1},\ldots ,{\theta }_{m} \) such that \( A = K\left\lbrack {{\theta }_{1},\ldots ,{\theta }_{m}}\right\rbrack \) . For example, we can take for the \( {\theta }_{i} \) a \( K \) -basis of \( A \) . We prove the theorem by induction on \( m \) . It is trivial for \( m = 1 \), and for \( m = 2 \) it is nothing else than Lemma 2.1.4. Let \( m \geq 3 \) . By induction, we assume that we have proved it for all \( i \leq m - 1 \) . Since the theorem is true for \( m = 2 \), we can find \( \alpha \in A \) such that \( K\left\lbrack {{\theta }_{m - 1},{\theta }_{m}}\right\rbrack = K\left\lbrack \alpha \right\rbrack \) . But then \( A = K\left\lbrack {{\theta }_{1},\ldots ,{\theta }_{m - 2},\alpha }\right\rbrack \) is generated by \( m - 1 \) elements, and we conclude by our induction hypothesis.
Corollary 2.1.6. Let \( A \) be an étale algebra over \( K \).

(1) There exists a squarefree monic polynomial \( T\left( X\right) \in K\left\lbrack X\right\rbrack \) (called as above a defining polynomial for \( A/K \)) such that \( A \) is isomorphic to \( K\left\lbrack X\right\rbrack /T\left( X\right) K\left\lbrack X\right\rbrack \).
Proof. (1). By Theorem 2.1.5, we know that \( A = K\left\lbrack \theta \right\rbrack \) for some \( \theta \in A \) . If \( T \) is the minimal monic polynomial of \( \theta \) in \( K\left\lbrack X\right\rbrack \), then by definition of an étale algebra the polynomial \( T\left( X\right) \) is squarefree, and the map sending \( \theta \) to the class of \( X \) clearly gives an isomorphism from \( A \) to \( K\left\lbrack X\right\rbrack /T\left( X\right) K\left\lbrack X\right\rbrack \) .
(1) There exists an integer \( k \in \mathbb{Z} \) such that the polynomial \( R\left( {X, k}\right) \) is squarefree.
Proof. By definition, \( L = K\left( {{\theta }_{1},{\theta }_{2}}\right) \) . By the proof of Lemma 2.1.4, there exists \( k \in \mathbb{Z} \) such that \( R\left( {X, k}\right) \) is squarefree, and if \( \theta = {\theta }_{2} + k{\theta }_{1} \), then \( L = K\left( \theta \right) \) with \( \theta \) a root of \( R\left( {X, k}\right) = 0 \) . Since \( L \) is a field, the minimal polynomial of \( \theta \) is an irreducible factor of \( R\left( {X, k}\right) \) in \( K\left\lbrack X\right\rbrack \) . Finally, in that proof, we have also seen that \( {\theta }_{1}{R}_{X}^{\prime }\left( {\theta, k}\right) + {R}_{Z}^{\prime }\left( {\theta, k}\right) = 0 \), and since we are in a field and \( R\left( {X, k}\right) \) is squarefree, we obtain the formula for \( {\theta }_{1} \), hence for \( {\theta }_{2} \) , given in the proposition.
Theorem 2.1.10. Let \( {L}_{1} = K\left( {\theta }_{1}\right) \) and \( {L}_{2} = {L}_{1}\left( {\theta }_{2}\right) \) be two number fields, where \( {\theta }_{1} \) is a root of the irreducible polynomial \( {T}_{1}\left( X\right) \in K\left\lbrack X\right\rbrack \) of degree \( {n}_{1} \), and \( {\theta }_{2} \) is a root of the polynomial \( {T}_{2}\left( X\right) \in {L}_{1}\left\lbrack X\right\rbrack \) of degree \( {n}_{2} \), assumed to be irreducible in \( {L}_{1}\left\lbrack X\right\rbrack \). If \( {T}_{2}\left( X\right) = \mathop{\sum }\limits_{{m = 0}}^{{n}_{2}}{A}_{m}\left( {\theta }_{1}\right) {X}^{m} \), we set \( W\left( {X, Y}\right) = \mathop{\sum }\limits_{{m = 0}}^{{n}_{2}}{A}_{m}\left( Y\right) {X}^{m} \), which makes sense only modulo \( {T}_{1}\left( Y\right) K\left\lbrack Y\right\rbrack \). Set

\[
R\left( {X, Z}\right) = {\mathcal{R}}_{Y}\left( {{T}_{1}\left( Y\right), W\left( {X - {ZY}, Y}\right) }\right) .
\]

Then we have the following.

(1) There exists an integer \( k \in \mathbb{Z} \) such that the polynomial \( R\left( {X, k}\right) \) is squarefree.

(2) If \( k \) is chosen as in (1), then \( R\left( {X, k}\right) \) is irreducible in \( K\left\lbrack X\right\rbrack \), and \( {L}_{2} = K\left( \theta \right) \), where \( \theta = {\theta }_{2} + k{\theta }_{1} \) is a root of \( R\left( {X, k}\right) \).

(3) If \( k \) and \( \theta \) are as in (1) and (2), we have

\[
{\theta }_{1} = - \frac{{R}_{Z}^{\prime }}{{R}_{X}^{\prime }}\left( {\theta, k}\right) ,\;{\theta }_{2} = \theta - k{\theta }_{1}.
\]
Proof. The proof is very close to that of Lemma 2.1.4 and Proposition 2.1.7 (see also [Coh0, Lemma 3.6.2]).

(1). Let \( \Omega = \overline{{L}_{2}} \) be some algebraic closure of \( {L}_{2} \). Then \( \Omega \) is also an algebraic closure of \( K \) and of \( {L}_{1} \). We denote by \( {\theta }_{1}^{\left( i\right) } \) (resp., \( {\theta }_{2}^{\left( j\right) } \)) the roots of \( {T}_{1} \) (resp., \( {T}_{2} \)) in \( \Omega \), chosen so that \( {\theta }_{1} = {\theta }_{1}^{\left( 1\right) } \) and \( {\theta }_{2} = {\theta }_{2}^{\left( 1\right) } \). Note that the \( {\theta }_{1}^{\left( i\right) } \) (resp., the \( {\theta }_{2}^{\left( j\right) } \)) are distinct since \( {T}_{1} \) and \( {T}_{2} \) are irreducible and in particular squarefree. Let \( k \in \mathbb{Z} \). The roots of \( R\left( {X, k}\right) \) in \( \Omega \) are the numbers \( X \) such that there exists a common root of \( {T}_{1}\left( Y\right) \) and \( W\left( {X - {kY}, Y}\right) \), so that \( Y = {\theta }_{1}^{\left( i\right) } \) and \( W\left( {X - k{\theta }_{1}^{\left( i\right) },{\theta }_{1}^{\left( i\right) }}\right) = 0 \).

Set

\[
{T}_{2}^{\left( i\right) } = \mathop{\sum }\limits_{{m = 0}}^{{n}_{2}}{A}_{m}\left( {\theta }_{1}^{\left( i\right) }\right) {X}^{m} = W\left( {X,{\theta }_{1}^{\left( i\right) }}\right) ,
\]

and let \( {\theta }_{2}^{\left( i, j\right) } \) be the roots of \( {T}_{2}^{\left( i\right) } \) in \( \Omega \), ordered so that \( {\theta }_{2}^{\left( 1, j\right) } = {\theta }_{2}^{\left( j\right) } \). Thus the roots of \( R\left( {X, k}\right) \) are the numbers \( {\gamma }^{\left( i, j\right) } = {\theta }_{2}^{\left( i, j\right) } + k{\theta }_{1}^{\left( i\right) } \). Furthermore, using as before Sylvester's determinant, it is easy to show that \( R\left( {X, k}\right) \) is a polynomial in \( X \) of degree at most equal to \( {n}_{1}{n}_{2} \).
If we choose \( k \in \mathbb{Z} \) outside the finite set of values

\[
\frac{{\theta }_{2}^{\left( i, j\right) } - {\theta }_{2}^{\left( {i}^{\prime },{j}^{\prime }\right) }}{{\theta }_{1}^{\left( {i}^{\prime }\right) } - {\theta }_{1}^{\left( i\right) }}\;\text{ for }i \neq {i}^{\prime },
\]

the \( {n}_{1}{n}_{2} \) values \( {\gamma }^{\left( i, j\right) } \) are distinct, and hence the polynomial \( R\left( {X, k}\right) \) is squarefree of degree exactly equal to \( {n}_{1}{n}_{2} \), proving (1).
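For \( K = \mathbb{Q} \) the whole construction is easy to run. A sympy sketch with \( {T}_{1}\left( Y\right) = {Y}^{2} - 2 \) and \( {T}_{2}\left( X\right) = {X}^{2} - 3 \) (so \( W\left( {X, Y}\right) = {T}_{2}\left( X\right) \) has constant coefficients): the choice \( k = 0 \) fails the squarefreeness condition, while \( k = 1 \) produces the minimal polynomial of \( \sqrt{2} + \sqrt{3} \).

```python
import sympy as sp

X, Y = sp.symbols('X Y')
T1 = Y**2 - 2                     # theta1 = sqrt(2)
T2 = lambda t: t**2 - 3           # theta2 = sqrt(3); W(X, Y) = T2(X)

def R(k):
    # R(X, k) = Res_Y(T1(Y), T2(X - k*Y))
    return sp.expand(sp.resultant(T1, T2(X - k * Y), Y))

# k = 0 is a bad choice: R(X, 0) = (X^2 - 3)^2 is not squarefree
assert sp.gcd(R(0), sp.diff(R(0), X)) != 1

# k = 1 works: R(X, 1) is squarefree, and theta = sqrt(3) + sqrt(2)
# is a root, as asserted in (2) of Theorem 2.1.10
R1 = R(1)
assert R1 == X**4 - 10*X**2 + 1
assert sp.gcd(R1, sp.diff(R1, X)) == 1
assert sp.expand(R1.subs(X, sp.sqrt(2) + sp.sqrt(3))) == 0
```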
Lemma 2.1.13. With the above notation, we have

\[
D\left( k\right) = {k}^{{n}_{1}{n}_{2}\left( {{n}_{1} - 1}\right) }\operatorname{disc}{\left( {T}_{1}\right) }^{{n}_{2}}\operatorname{disc}{\left( {T}_{2}\right) }^{{n}_{1}}\mathop{\prod }\limits_{{s \in G, s \neq {1}_{G}}}{D}_{s}\left( k\right) .
\]
Furthermore, for all \( s \), we have \( {D}_{{s}^{-1}} = {D}_{s} \), and if \( {s}^{2} = {1}_{G} \) and \( s \neq {1}_{G} \), then \( {D}_{s} \) is the square of a rational integer.

Hence, we have split our large discriminant \( D\left( k\right) \) as a product of smaller pieces \( {D}_{s}\left( k\right) \). This already shows that \( D\left( k\right) \) must factor relatively easily. This is still theoretical, however, since we must also give a purely algebraic way of computing \( {D}_{s}\left( k\right) \).

To do this, we make the following observation. Let

\[
U\left( X\right) = {\mathcal{R}}_{Y}\left( {{T}_{1}\left( Y\right) ,{T}_{1}\left( {Y + X}\right) }\right) /{X}^{{n}_{1}}
\]

be the resultant in \( Y \) of \( {T}_{1} \) with a shifted version of the same polynomial \( {T}_{1} \), divided by \( {X}^{{n}_{1}} \). Then

\[
U\left( X\right) = \mathop{\prod }\limits_{{0 \leq {i}_{2} \neq {i}_{1} < {n}_{1}}}\left( {{\alpha }_{{i}_{2}} - {\alpha }_{{i}_{1}} + X}\right) .
\]

Hence for \( s \neq {1}_{G} \),

\[
{D}_{s}\left( k\right) = \mathop{\prod }\limits_{{\sigma \in G}}\left( {{k}^{{n}_{1}\left( {{n}_{1} - 1}\right) }U\left( {\left( {{\beta }_{\sigma s} - {\beta }_{\sigma }}\right) /k}\right) }\right) = {k}^{{n}_{1}{n}_{2}\left( {{n}_{1} - 1}\right) }\mathop{\prod }\limits_{{\sigma \in G}}U\left( {\left( {{\beta }_{\sigma s} - {\beta }_{\sigma }}\right) /k}\right) .
\]

If we set

\[
{V}_{s}\left( X\right) = \mathop{\prod }\limits_{{\sigma \in G}}\left( {X - \left( {{\beta }_{\sigma s} - {\beta }_{\sigma }}\right) }\right) ,
\]

we have

\[
{V}_{s}\left( X\right) = \mathop{\prod }\limits_{{\sigma \in G}}\left( {X - \sigma \left( {s\left( \beta \right) - \beta }\right) }\right) = {C}_{s\left( \beta \right) - \beta }\left( X\right) ,
\]

where \( {C}_{\alpha }\left( X\right) \) denotes the characteristic polynomial of \( \alpha \) in the number field \( {K}_{2} \) (see [Coh0, Definition 4.3.1]).
Since \( {K}_{2}/\mathbb{Q} \) is a normal extension, \( s\left( \beta \right) \) is a polynomial in \( \beta \) with rational coefficients, and hence we can set \( s\left( \beta \right) = {A}_{s}\left( \beta \right) \) with \( {A}_{s} \in \mathbb{Q}\left\lbrack X\right\rbrack \). Note that \( {A}_{s} \) can be computed algorithmically using one of the algorithms for the field isomorphism problem ([Coh0, Section 4.5]). Thus, using [Coh0, Proposition 4.3.4], we have

\[
{V}_{s}\left( X\right) = {\mathcal{R}}_{Y}\left( {{T}_{2}\left( Y\right), X + Y - {A}_{s}\left( Y\right) }\right) .
\]

Finally, coming back to \( {D}_{s}\left( k\right) \), we see that

\[
{\mathcal{R}}_{X}\left( {U\left( X\right) ,{V}_{s}\left( {kX}\right) }\right) = {k}^{{n}_{1}{n}_{2}\left( {{n}_{1} - 1}\right) }\mathop{\prod }\limits_{{\sigma \in G}}U\left( {\left( {{\beta }_{\sigma s} - {\beta }_{\sigma }}\right) /k}\right) = {D}_{s}\left( k\right) .
\]
Theorem 2.1.14. Let \( {K}_{1} = \mathbb{Q}\left( {\theta }_{1}\right) \) and \( {K}_{2} = \mathbb{Q}\left( {\theta }_{2}\right) \) be number fields of respective degrees \( {n}_{1} \) and \( {n}_{2} \), and let \( {T}_{1}\left( X\right) \) and \( {T}_{2}\left( X\right) \) be the minimal monic polynomials of \( {\theta }_{1} \) and \( {\theta }_{2} \), respectively. Assume that \( {K}_{2} \) is a normal extension of \( \mathbb{Q} \) with Galois group \( G \) . Let \( R\left( X\right) = R\left( {X, k}\right) \) be an absolute defining polynomial for the compositum \( L \) of \( {K}_{1} \) and \( {K}_{2} \) as computed by Algorithm 2.1.8 ( \( R \) is squarefree but not necessarily irreducible).
For \( s \in G, s \neq {1}_{G} \), define \( {A}_{s}\left( X\right) \) to be the polynomial expressing \( s\left( {\theta }_{2}\right) \) in terms of \( {\theta }_{2} \), and set \( {V}_{s}\left( X\right) = {\mathcal{R}}_{Y}\left( {{T}_{2}\left( Y\right), X + Y - {A}_{s}\left( Y\right) }\right) \) (this depends only on the number field \( {K}_{2} \) and on \( s \)).

Let \( U\left( X\right) = {\mathcal{R}}_{Y}\left( {{T}_{1}\left( Y\right) ,{T}_{1}\left( {Y + X}\right) }\right) /{X}^{{n}_{1}} \) (this depends only on the number field \( {K}_{1} \)), and for \( s \neq {1}_{G} \), set

\[
{D}_{s}\left( k\right) = {\mathcal{R}}_{X}\left( {U\left( X\right) ,{V}_{s}\left( {kX}\right) }\right) .
\]

Then

(1) for all \( s \in G, s \neq {1}_{G} \), we have \( {D}_{s}\left( k\right) \in \mathbb{Z} \);

(2) we have the decomposition

\[
\operatorname{disc}\left( {R\left( X\right) }\right) = {k}^{{n}_{1}{n}_{2}\left( {{n}_{1} - 1}\right) }\operatorname{disc}{\left( {T}_{1}\right) }^{{n}_{2}}\operatorname{disc}{\left( {T}_{2}\right) }^{{n}_{1}}\mathop{\prod }\limits_{{s \in G, s \neq {1}_{G}}}{D}_{s}\left( k\right) ;
\]

(3) for all \( s \in G \), we have \( {D}_{{s}^{-1}}\left( k\right) = {D}_{s}\left( k\right) \);

(4) if \( {s}^{2} = {1}_{G} \) and \( s \neq {1}_{G} \), then \( {D}_{s}\left( k\right) \) is the square of a rational integer.
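The theorem can be checked end to end in the toy case \( {K}_{1} = \mathbb{Q}\left( \sqrt{2}\right) \), \( {K}_{2} = \mathbb{Q}\left( \sqrt{3}\right) \) (normal over \( \mathbb{Q} \), with \( G = \{ {1}_{G}, s\} \) and \( s\left( {\theta }_{2}\right) = - {\theta }_{2} \), so \( {A}_{s}\left( X\right) = - X \)), taking \( k = 1 \) and \( R\left( X\right) = {X}^{4} - {10}{X}^{2} + 1 \). A sympy sketch:

```python
import sympy as sp

X, Y = sp.symbols('X Y')
T1, T2 = Y**2 - 2, Y**2 - 3       # K1 = Q(sqrt 2), K2 = Q(sqrt 3)
n1, n2, k = 2, 2, 1
R = X**4 - 10*X**2 + 1            # defining polynomial of the compositum

U = sp.quo(sp.resultant(T1, T1.subs(Y, Y + X), Y), X**n1, X)
A_s = -Y                          # the nontrivial s sends theta2 to -theta2
V_s = sp.resultant(T2, X + Y - A_s, Y)
D_s = sp.resultant(U, V_s.subs(X, k * X), X)

lhs = sp.discriminant(R, X)
rhs = (k**(n1 * n2 * (n1 - 1)) * sp.discriminant(T1, Y)**n2
       * sp.discriminant(T2, Y)**n1 * D_s)
assert lhs == rhs                 # the decomposition of (2) holds
```

Here there is a single nontrivial \( s \), and \( {D}_{s}\left( 1\right) = {16} = {4}^{2} \) is a square, as predicted by (4) since \( {s}^{2} = {1}_{G} \).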