| Q | A | Result |
|---|---|---|
Proposition 13.18 (Naturality of the Connecting Homomorphism). Suppose\n\n(13.12)\n\nis a commutative diagram of chain maps in which the horizontal rows are exact. Then the following diagram commutes for each \( p \) :\n\n\[ \begin{array}{ccc} {H}_{p}\left( {E}_{ * }\right) & \overset{{\partial }_{ * }}{\longrightarrow } & {H}_{p - 1}\left( {C}_{ * }\right) \\ {\varepsilon }_{ * } \downarrow & & \downarrow {\kappa }_{ * } \\ {H}_{p}\left( {E}_{ * }^{\prime }\right) & \overset{{\partial }_{ * }}{\longrightarrow } & {H}_{p - 1}\left( {C}_{ * }^{\prime }\right) . \end{array} \]
|
Proof. Let \( \left\lbrack {e}_{p}\right\rbrack \in {H}_{p}\left( {E}_{ * }\right) \) be arbitrary. Then \( {\partial }_{ * }\left\lbrack {e}_{p}\right\rbrack = \left\lbrack {c}_{p - 1}\right\rbrack \), where \( F{c}_{p - 1} = \partial {d}_{p} \) for some \( {d}_{p} \) such that \( G{d}_{p} = {e}_{p} \) . Then by commutativity of (13.12), \n\n\[ \n{F}^{\prime }\left( {\kappa {c}_{p - 1}}\right) = {\delta F}{c}_{p - 1} = \delta \partial {d}_{p} = \partial \left( {\delta {d}_{p}}\right) \]\n\n\[ \n{G}^{\prime }\left( {\delta {d}_{p}}\right) = {\varepsilon G}{d}_{p} = \varepsilon {e}_{p}. \]\nBy definition, this means that \n\n\[ \n{\partial }_{ * }{\varepsilon }_{ * }\left\lbrack {e}_{p}\right\rbrack = {\partial }_{ * }\left\lbrack {\varepsilon {e}_{p}}\right\rbrack = \left\lbrack {\kappa {c}_{p - 1}}\right\rbrack = {\kappa }_{ * }\left\lbrack {c}_{p - 1}\right\rbrack = {\kappa }_{ * }{\partial }_{ * }\left\lbrack {e}_{p}\right\rbrack , \]\n\nwhich was to be proved.
|
Yes
|
Proposition 13.19. Suppose \( \mathcal{U} \) is any open cover of \( X \) . Then the inclusion map \( {C}_{ * }^{\mathcal{U}}\left( X\right) \rightarrow {C}_{ * }\left( X\right) \) induces a homology isomorphism \( {H}_{p}^{\mathcal{U}}\left( X\right) \cong {H}_{p}\left( X\right) \) for all \( p \) .
|
The idea of the proof is simple, although the technical details are somewhat involved. If \( \sigma : {\Delta }_{p} \rightarrow X \) is any singular \( p \) -simplex, the plan is to show that there is a homologous \( p \) -chain obtained by subdividing \( \sigma \) repeatedly, until each of the resulting singular simplices has image contained in one of the sets of \( \mathcal{U} \) .
|
No
|
Lemma 13.20. If \( c \) is an affine chain, then\n\n\[ \partial \left( {w * c}\right) + w * \partial c = c. \]
|
Proof. For an affine simplex \( \alpha = A\left( {{v}_{0},\ldots ,{v}_{p}}\right) \), this is just a computation:\n\n\[ \partial \left( {w * \alpha }\right) = \partial A\left( {w,{v}_{0},\ldots ,{v}_{p}}\right) \]\n\n\[ = \mathop{\sum }\limits_{{i = 0}}^{{p + 1}}{\left( -1\right) }^{i}A\left( {w,{v}_{0},\ldots ,{v}_{p}}\right) \circ {F}_{i, p} \]\n\n\[ = A\left( {{v}_{0},\ldots ,{v}_{p}}\right) + \mathop{\sum }\limits_{{i = 0}}^{p}{\left( -1\right) }^{i + 1}A\left( {w,{v}_{0},\ldots ,{\widehat{v}}_{i},\ldots ,{v}_{p}}\right) \]\n\n\[ = \alpha - w * \partial \alpha \text{. } \]\n\nThe general case follows by linearity.
|
Yes
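The identity \( \partial(w * c) + w * \partial c = c \) of Lemma 13.20 is purely combinatorial, so it can be checked mechanically. A minimal Python sketch (the names `boundary`, `cone`, and `add` are ad hoc; chains are dicts from vertex tuples to integer coefficients, and 0-simplices are sent to the zero chain):

```python
from collections import defaultdict

def boundary(chain):
    """Alternating sum of faces; 0-simplices map to the zero chain."""
    out = defaultdict(int)
    for simplex, coeff in chain.items():
        if len(simplex) == 1:
            continue
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] += (-1) ** i * coeff
    return {s: c for s, c in out.items() if c}

def cone(w, chain):
    """w * c: prepend the cone point w to every simplex in the chain."""
    return {(w,) + s: c for s, c in chain.items()}

def add(c1, c2):
    out = defaultdict(int)
    for ch in (c1, c2):
        for s, c in ch.items():
            out[s] += c
    return {s: c for s, c in out.items() if c}

# check partial(w * c) + w * partial(c) = c on a 2-simplex
alpha = {("v0", "v1", "v2"): 1}
lhs = add(boundary(cone("w", alpha)), cone("w", boundary(alpha)))
assert lhs == alpha
```

The assertion reproduces the cancellation in the displayed computation: the \( i = 0 \) face of \( w * \alpha \) returns \( \alpha \), and the remaining faces are exactly \( -w * \partial \alpha \).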
|
Lemma 13.21. Suppose \( \alpha : {\Delta }_{p} \rightarrow {\mathbb{R}}^{n} \) is an affine simplex that is a homeomorphism onto a p-simplex \( \sigma \subseteq {\mathbb{R}}^{n} \) . Let \( \beta : {\Delta }_{p} \rightarrow {\mathbb{R}}^{n} \) be any one of the affine singular \( p \) - simplices that appear in the chain \( {s\alpha } \) .\n\n(a) \( \beta \) is an affine homeomorphism onto a p-simplex of the form \( \left\lbrack {{b}_{p},\ldots ,{b}_{0}}\right\rbrack \), where each \( {b}_{i} \) is the barycenter of an \( i \) -dimensional face of \( \sigma \) .\n\n(b) The diameter of any such simplex \( \left\lbrack {{b}_{p},\ldots ,{b}_{0}}\right\rbrack \) is at most \( p/\left( {p + 1}\right) \) times the diameter of \( \sigma \) .
|
Proof. Part (a) follows immediately from the definition of the subdivision operator and an easy induction on \( p \) (see Fig. 13.9).\n\nTo prove (b), write \( \sigma = \alpha \left( {\Delta }_{p}\right) = \left\lbrack {{v}_{0},\ldots ,{v}_{p}}\right\rbrack \) and \( \tau = \beta \left( {\Delta }_{p}\right) = \left\lbrack {{b}_{p},\ldots ,{b}_{0}}\right\rbrack \) , where each \( {b}_{i} \) is the barycenter of an \( i \) -dimensional face of \( \sigma \) . Since a simplex is the convex hull of its vertices, the diameter of \( \tau \) is equal to the maximum of the distances between its vertices. Thus it suffices to prove that \( \left| {{b}_{i} - {b}_{j}}\right| \leq p/\left( {p + 1}\right) \operatorname{diam}\left( \sigma \right) \) whenever \( {b}_{i} \) and \( {b}_{j} \) are barycenters of faces of a \( p \) -simplex \( \sigma \) . For \( p = 0 \), there is nothing to prove, so assume the claim is true for simplices of dimension less than \( p \) . For \( i, j < p \), both vertices \( {b}_{i},{b}_{j} \) lie in some \( q \) -dimensional face \( {\sigma }^{\prime } \subseteq \sigma \) with \( q < p \) , so by induction we have \( \left| {{b}_{i} - {b}_{j}}\right| \leq q/\left( {q + 1}\right) \operatorname{diam}\left( {\sigma }^{\prime }\right) \leq p/\left( {p + 1}\right) \operatorname{diam}\left( \sigma \right) \) . So it remains only to consider the distance between \( {b}_{p} \) and the other vertices. Since \( {b}_{p} \) is the barycenter of \( \sigma \) itself, and every other vertex \( {b}_{j} \) lies in some proper face of \( \sigma \) , the distance from \( {b}_{p} \) to \( {b}_{j} \) is no more than the maximum of the distance from \( {b}_{p} \) to any of the vertices \( {v}_{j} \) of \( \sigma \) . 
We have\n\n\[ \left| {{b}_{p} - {v}_{j}}\right| = \left| {\mathop{\sum }\limits_{{i = 0}}^{p}\frac{1}{p + 1}{v}_{i} - {v}_{j}}\right| \]\n\n\[ = \left| {\mathop{\sum }\limits_{{i = 0}}^{p}\frac{1}{p + 1}{v}_{i} - \mathop{\sum }\limits_{{i = 0}}^{p}\frac{1}{p + 1}{v}_{j}}\right| \]\n\n\[ \leq \mathop{\sum }\limits_{{i = 0}}^{p}\frac{1}{p + 1}\left| {{v}_{i} - {v}_{j}}\right| \]\n\n\[ \leq \frac{p}{p + 1}\operatorname{diam}\left( \sigma \right) \]\n\nThis completes the induction.
|
Yes
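The diameter bound of Lemma 13.21(b) can be tested numerically. The sketch below (an illustration, not part of the proof; the simplex coordinates are an arbitrary choice) enumerates the flags of faces \( F_0 \subset F_1 \subset \cdots \subset F_p \), takes their barycenters \( b_0, \ldots, b_p \), and checks every pairwise distance against \( p/(p+1)\operatorname{diam}(\sigma) \):

```python
import itertools
import math

def barycenter(verts):
    n = len(verts)
    return tuple(sum(v[k] for v in verts) / n for k in range(len(verts[0])))

def diam(points):
    return max(math.dist(a, b) for a, b in itertools.combinations(points, 2))

# a 3-simplex in R^3 (generic choice of vertices; any simplex works)
sigma = [(0, 0, 0), (1, 0, 0), (0.3, 1, 0), (0.2, 0.4, 1)]
p = len(sigma) - 1
bound = p / (p + 1) * diam(sigma)

# each permutation of the vertices gives a flag of faces via its prefixes,
# whose barycenters are the vertices [b_p, ..., b_0] of a subdivision simplex
for perm in itertools.permutations(sigma):
    barys = [barycenter(perm[:r]) for r in range(1, p + 2)]
    for a, b in itertools.combinations(barys, 2):
        assert math.dist(a, b) <= bound + 1e-12
```

So every simplex of the barycentric subdivision of this \( \sigma \) has diameter at most \( \tfrac{3}{4}\operatorname{diam}(\sigma) \), as the lemma predicts.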
|
Lemma 13.22. The singular subdivision operators \( s : {C}_{p}\left( X\right) \rightarrow {C}_{p}\left( X\right) \) have the following properties.\n\n(a) \( s \circ {f}_{\# } = {f}_{\# } \circ s \) for any continuous map \( f \) .\n\n(b) \( \partial \circ s = s \circ \partial \) .\n\n(c) Given an open cover \( \mathcal{U} \) of \( X \) and any \( c \in {C}_{p}\left( X\right) \), there exists \( m \) such that \( {s}^{m}c \in {C}_{p}^{\mathcal{U}}\left( X\right) \) .
|
Proof. The first identity follows immediately from the definition of \( s \) :\n\n\[ s\left( {{f}_{\# }\sigma }\right) = s\left( {f \circ \sigma }\right) = {\left( f \circ \sigma \right) }_{\# }\left( {s{i}_{p}}\right) = {f}_{\# }{\sigma }_{\# }\left( {s{i}_{p}}\right) = {f}_{\# }\left( {s\sigma }\right) . \]\n\nThe second is proved by induction on \( p \) . For \( p = 0 \) it is immediate because \( s \) acts as the identity on 0 -chains. For \( p > 0 \), we use part (a),(13.14), and the inductive hypothesis to compute\n\n\n\n\[ \partial {s\sigma } = \partial {\sigma }_{\# }\left( {{b}_{p} * s\partial {i}_{p}}\right) \]\n\n\[ = {\sigma }_{\# }\partial \left( {{b}_{p} * s\partial {i}_{p}}\right) \]\n\n\[ = {\sigma }_{\# }\left( {s\partial {i}_{p} - {b}_{p} * \partial s\partial {i}_{p}}\right) \]\n\n\[ = s{\sigma }_{\# }\partial {i}_{p} - {\sigma }_{\# }{b}_{p} * \left( {s\partial \partial {i}_{p}}\right) \]\n\n\[ = s\partial {\sigma }_{\# }{i}_{p} - 0 \]\n\n\[ = s\partial \sigma \text{.} \]\n\nTo prove (c), define the mesh of an affine chain \( c \) in \( {\mathbb{R}}^{n} \) to be the maximum of the diameters of the images of the affine simplices that appear in \( c \) . By Lemma 13.21, by choosing \( m \) large enough, we can make the mesh of \( {s}^{m}{i}_{p} \) arbitrarily small.\n\nIf \( \sigma \) is any singular simplex in \( X \), by the Lebesgue number lemma there exists \( \delta > 0 \) such that any subset of \( {\Delta }_{p} \) of diameter less than \( \delta \) lies in \( {\sigma }^{-1}\left( U\right) \) for one of the sets \( U \in \mathcal{U} \) . In particular, if \( c \) is an affine chain in \( {\Delta }_{p} \) whose mesh is less than \( \delta \), then \( {\sigma }_{\# }c \in {C}_{p}^{\mathcal{U}}\left( X\right) \) . Choose \( \delta \) to be the minimum of the Lebesgue numbers for all the singular simplices appearing in \( c \), and choose \( m \) large enough that \( {s}^{m}{i}_{p} \) has mesh less than \( \delta \) . 
Then \( {s}^{m}\sigma = {\sigma }_{\# }\left( {{s}^{m}{i}_{p}}\right) \in {C}_{p}^{\mathcal{U}}\left( X\right) \) for each singular simplex \( \sigma \) appearing in \( c \), and therefore \( {s}^{m}c \in {C}_{p}^{\mathcal{U}}\left( X\right) \) as desired.
|
Yes
|
Theorem 13.23 (Homology Groups of Spheres). For \( n \geq 1,{\mathbb{S}}^{n} \) has the following singular homology groups:\n\n\[ \n{H}_{p}\left( {\mathbb{S}}^{n}\right) \cong \left\{ \begin{array}{ll} \mathbb{Z} & \text{ if }p = 0, \\ 0 & \text{ if }0 < p < n, \\ \mathbb{Z} & \text{ if }p = n, \\ 0 & \text{ if }p > n. \end{array}\right.\n\]
|
Proof. We use the Mayer-Vietoris sequence as follows. Let \( N \) and \( S \) denote the north and south poles, and let \( U = {\mathbb{S}}^{n} \smallsetminus \{ N\}, V = {\mathbb{S}}^{n} \smallsetminus \{ S\} \) . Part of the Mayer-Vietoris sequence reads\n\n\[ \n{H}_{p}\left( U\right) \oplus {H}_{p}\left( V\right) \rightarrow {H}_{p}\left( {\mathbb{S}}^{n}\right) \overset{{\partial }_{ * }}{ \rightarrow }{H}_{p - 1}\left( {U \cap V}\right) \rightarrow {H}_{p - 1}\left( U\right) \oplus {H}_{p - 1}\left( V\right) .\n\]\n\nBecause \( U \) and \( V \) are contractible, when \( p > 1 \) this sequence reduces to\n\n\[ \n0 \rightarrow {H}_{p}\left( {\mathbb{S}}^{n}\right) \overset{{\partial }_{ * }}{ \rightarrow }{H}_{p - 1}\left( {U \cap V}\right) \rightarrow 0,\n\]\n\nfrom which it follows that \( {\partial }_{ * } \) is an isomorphism. Thus, since \( U \cap V \) is homotopy equivalent to \( {\mathbb{S}}^{n - 1} \) ,\n\n\[ \n{H}_{p}\left( {\mathbb{S}}^{n}\right) \cong {H}_{p - 1}\left( {U \cap V}\right) \cong {H}_{p - 1}\left( {\mathbb{S}}^{n - 1}\right) \;\text{ for }p > 1, n \geq 1.\n\]\n\n(13.16)\n\nWe prove the theorem by induction on \( n \) . In the case \( n = 1,{H}_{0}\left( {\mathbb{S}}^{1}\right) \cong {H}_{1}\left( {\mathbb{S}}^{1}\right) \cong \) \( \mathbb{Z} \) by Proposition 13.6 and Corollary 13.15. For \( p > 1 \) ,(13.16) shows that \( {H}_{p}\left( {\mathbb{S}}^{1}\right) \cong \) \( {H}_{p - 1}\left( {\mathbb{S}}^{0}\right) \) . Since each component of \( {\mathbb{S}}^{0} \) is a one-point space, \( {H}_{p - 1}\left( {\mathbb{S}}^{0}\right) \) is the trivial group by Propositions 13.7 and 13.5.\n\nNow let \( n > 1 \), and suppose the result is true for \( {\mathbb{S}}^{n - 1} \) . The cases \( p = 0 \) and \( p = 1 \) are again taken care of by Proposition 13.6 and Corollary 13.15. 
For \( p > 1 \) , (13.16) and the inductive hypothesis give\n\n\[ \n{H}_{p}\left( {\mathbb{S}}^{n}\right) \cong {H}_{p - 1}\left( {\mathbb{S}}^{n - 1}\right) \cong \left\{ \begin{array}{ll} 0 & \text{ if }p < n, \\ \mathbb{Z} & \text{ if }p = n, \\ 0 & \text{ if }p > n, \end{array}\right.\n\]\n\nwhich completes the proof.
|
Yes
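The inductive structure of this proof translates directly into a recursion. A small Python sketch (the function name `betti_sphere` is ad hoc) computes \( \operatorname{rank} H_p(\mathbb{S}^n) \) from the Mayer-Vietoris recursion (13.16) plus the low-degree facts used in the proof, and compares the result with the closed form of Theorem 13.23:

```python
def betti_sphere(p, n):
    """rank of H_p(S^n), computed from the Mayer-Vietoris recursion (13.16)."""
    assert n >= 1 and p >= 0
    if p == 0:
        return 1                   # S^n is path-connected (Proposition 13.6)
    if p == 1:
        return 1 if n == 1 else 0  # abelianized fundamental group (Corollary 13.15)
    # for p > 1: H_p(S^n) = H_{p-1}(S^{n-1}) when n >= 2,
    # and H_{p-1}(S^0) = 0 when n = 1
    return betti_sphere(p - 1, n - 1) if n >= 2 else 0

# agrees with the closed form of Theorem 13.23
for n in range(1, 8):
    for p in range(0, 10):
        expected = 1 if p == 0 or p == n else 0
        assert betti_sphere(p, n) == expected
```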
|
Corollary 13.24 (Homology Groups of Punctured Euclidean Spaces). For \( n \geq 2 \) ,\n\n\( {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) has the following singular homology groups:\n\n\[ \n{H}_{p}\left( {{\mathbb{R}}^{n}\smallsetminus \{ 0\} }\right) \cong \left\{ \begin{array}{ll} \mathbb{Z} & \text{ if }p = 0, \\ 0 & \text{ if }0 < p < n - 1, \\ \mathbb{Z} & \text{ if }p = n - 1, \\ 0 & \text{ if }p > n - 1. \end{array}\right. \n\]
|
Proof. The inclusion \( {\mathbb{S}}^{n - 1} \hookrightarrow {\mathbb{R}}^{n} \smallsetminus \{ 0\} \) is a homotopy equivalence, with homotopy inverse given by \( x \mapsto x/\left| x\right| \), so the result follows from Theorem 13.23.
|
No
|
Proposition 13.25. Suppose \( n \geq 1 \) and \( f, g : {\mathbb{S}}^{n} \rightarrow {\mathbb{S}}^{n} \) are continuous maps.\n\n(a) \( \deg \left( {g \circ f}\right) = \left( {\deg g}\right) \left( {\deg f}\right) \).\n\n(b) If \( f \simeq g \), then \( \deg f = \deg g \) .
|
Proof. Part (a) follows from the fact that \( {\left( g \circ f\right) }_{ * } = {g}_{ * } \circ {f}_{ * } \), and part (b) from the fact that homotopic maps induce the same homology homomorphism.
|
Yes
|
Lemma 13.26. The homological degree and the homotopic degree of a continuous map \( f : {\mathbb{S}}^{1} \rightarrow {\mathbb{S}}^{1} \) are equal.
|
Proof. By (13.8), the following diagram commutes:\n\n\[ \begin{array}{ccc} {\pi }_{1}\left( {{\mathbb{S}}^{1},1}\right) & \xrightarrow{{\left( \rho \circ f\right) }_{ * }} & {\pi }_{1}\left( {{\mathbb{S}}^{1},1}\right) \\ \gamma \downarrow & & \downarrow \gamma \\ {H}_{1}\left( {\mathbb{S}}^{1}\right) & \xrightarrow{{\left( \rho \circ f\right) }_{ * }} & {H}_{1}\left( {\mathbb{S}}^{1}\right) \end{array} \]\n\nIt follows that the homotopic degree of \( f \) is equal to the homological degree of \( \rho \circ f \) . Since the rotation \( \rho \) is homotopic to the identity map, it has homological degree 1, so the homological degree of \( \rho \circ f \) is equal to that of \( f \) .
|
Yes
|
Proposition 13.31. The antipodal map \( \alpha : {\mathbb{S}}^{n} \rightarrow {\mathbb{S}}^{n} \) is homotopic to the identity map if and only if \( n \) is odd.
|
Proof. If \( n = {2k} - 1 \) is odd, an explicit homotopy \( H : \mathrm{{Id}} \simeq \alpha \) is given by\n\n\[ H\left( {x, t}\right) = \left( {\left( {\cos {\pi t}}\right) {x}_{1} + \left( {\sin {\pi t}}\right) {x}_{2},\left( {\cos {\pi t}}\right) {x}_{2} - \left( {\sin {\pi t}}\right) {x}_{1},\ldots ,\left( {\cos {\pi t}}\right) {x}_{{2k} - 1} + \left( {\sin {\pi t}}\right) {x}_{2k},\left( {\cos {\pi t}}\right) {x}_{2k} - \left( {\sin {\pi t}}\right) {x}_{{2k} - 1}}\right) . \]\n\nIf \( n = 0 \), \( \alpha \) interchanges the two points of \( {\mathbb{S}}^{0} \), and so is clearly not homotopic to the identity. When \( n \) is even and positive, \( \alpha \) has degree \( -1 \), while the identity map has degree 1, so they are not homotopic.
|
Yes
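The three properties the homotopy needs — it stays on the sphere, starts at the identity, and ends at the antipodal map — can be verified numerically. A sketch for \( n = 3 \) (so \( k = 2 \); the function name `H` mirrors the proof):

```python
import math
import random

def H(x, t):
    """The rotation homotopy from Proposition 13.31, here on S^3 (n = 3, k = 2)."""
    c, s = math.cos(math.pi * t), math.sin(math.pi * t)
    out = []
    for i in range(0, len(x), 2):
        # rotate each coordinate pair (x_{2j-1}, x_{2j}) by angle pi*t
        out += [c * x[i] + s * x[i + 1], c * x[i + 1] - s * x[i]]
    return out

random.seed(0)
v = [random.gauss(0, 1) for _ in range(4)]
r = math.sqrt(sum(c * c for c in v))
x = [c / r for c in v]              # a random point of S^3

for t in [0, 0.25, 0.5, 0.75, 1]:
    assert abs(sum(c * c for c in H(x, t)) - 1) < 1e-12    # stays on the sphere
assert all(abs(a - b) < 1e-12 for a, b in zip(H(x, 0), x)) # H(x, 0) = x
assert all(abs(a + b) < 1e-12 for a, b in zip(H(x, 1), x)) # H(x, 1) = -x
```

Each coordinate pair is rotated rigidly, which is why \( |H(x,t)| = |x| = 1 \) for all \( t \); this is exactly where odd \( n \) is needed, since the coordinates must pair off.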
|
Theorem 13.32 (The Hairy Ball Theorem). There exists a nowhere vanishing vector field on \( {\mathbb{S}}^{n} \) if and only if \( n \) is odd.
|
Proof. Suppose there exists such a vector field \( V \) . By replacing \( V \) with \( V/\left| V\right| \), we can assume \( \left| {V\left( x\right) }\right| = 1 \) everywhere. We use \( V \) to construct a homotopy between the identity map and the antipodal map as follows:\n\n\[ H\left( {x, t}\right) = \left( {\cos {\pi t}}\right) x + \left( {\sin {\pi t}}\right) V\left( x\right) . \]\n\nDirect computation, using the facts that \( {\left| x\right| }^{2} = {\left| V\left( x\right) \right| }^{2} = 1 \) and \( x \cdot V\left( x\right) = 0 \), shows that \( H \) takes its values in \( {\mathbb{S}}^{n} \) . Since \( H\left( {x,0}\right) = x \) and \( H\left( {x,1}\right) = - x, H \) is the desired homotopy. By Proposition 13.31, \( n \) must be odd.\n\nConversely, when \( n = {2k} - 1 \) is odd, the following explicit vector field is easily checked to be tangent to the sphere and nowhere vanishing:\n\n\[ V\left( {{x}_{1},\ldots ,{x}_{2k}}\right) = \left( {{x}_{2}, - {x}_{1},{x}_{4}, - {x}_{3},\ldots ,{x}_{2k}, - {x}_{{2k} - 1}}\right) . \]
|
Yes
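The two facts the converse relies on — \( x \cdot V(x) = 0 \) and \( |V(x)| = 1 \) on the sphere — are easy to confirm numerically for the displayed field. A sketch on \( \mathbb{S}^5 \subseteq \mathbb{R}^6 \) (the function name `V` mirrors the proof):

```python
import math
import random

def V(x):
    """The explicit tangent field (x_2, -x_1, x_4, -x_3, ...) on an odd sphere."""
    out = []
    for i in range(0, len(x), 2):
        out += [x[i + 1], -x[i]]
    return out

random.seed(1)
for _ in range(100):
    v = [random.gauss(0, 1) for _ in range(6)]      # S^5 inside R^6
    r = math.sqrt(sum(c * c for c in v))
    x = [c / r for c in v]
    w = V(x)
    assert abs(sum(a * b for a, b in zip(x, w))) < 1e-12  # tangent: x . V(x) = 0
    assert abs(sum(c * c for c in w) - 1) < 1e-12         # unit length, so nonvanishing
```

The dot product telescopes as \( x_1 x_2 - x_2 x_1 + \cdots = 0 \), and \( |V(x)|^2 \) is just a permutation of the squares of the coordinates of \( x \).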
|
Proposition 13.33 (Homology Effect of Attaching a Cell). Let \( X \) be any topological space, and let \( Y \) be obtained from \( X \) by attaching a closed cell \( D \) of dimension \( n \geq 2 \) along the attaching map \( \varphi : \partial D \rightarrow X \) . Let \( K \) and \( L \) denote the kernel and image, respectively, of \( {\varphi }_{ * } : {H}_{n - 1}\left( {\partial D}\right) \rightarrow {H}_{n - 1}\left( X\right) \) . Then the homology homomorphism \( {H}_{p}\left( X\right) \rightarrow {H}_{p}\left( Y\right) \) induced by inclusion is characterized as follows.\n\n(a) If \( p < n - 1 \) or \( p > n \), it is an isomorphism.\n\n(b) If \( p = n - 1 \), it is a surjection whose kernel is \( L \), so there is a short exact sequence\n\n\[ 0 \rightarrow L \hookrightarrow {H}_{n - 1}\left( X\right) \rightarrow {H}_{n - 1}\left( Y\right) \rightarrow 0. \]\n\n(c) If \( p = n \), it is an injection, and there is a short exact sequence\n\n\[ 0 \rightarrow {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) \rightarrow K \rightarrow 0. \]
|
Proof. First, assume that \( p \geq 2 \) . Let \( q : X \coprod D \rightarrow Y \) be a quotient map realizing \( Y \) as an adjunction space. Choose a point \( z \in \operatorname{Int}D \), and define open subsets \( U, V \subseteq Y \) by \( U = q\left( {\operatorname{Int}D}\right) \) and \( V = q\left( {X \coprod \left( {D\smallsetminus \{ z\} }\right) }\right) \) . Then, by the same argument as in the proof of Proposition 10.13, it follows that \( U \) is homeomorphic to Int \( D, U \cap V \) is homeomorphic to Int \( D \smallsetminus \{ z\} \), and \( V \) is homotopy equivalent to \( X \) .\n\nBecause \( {H}_{p}\left( U\right) = 0 \) for \( p > 0 \), the Mayer-Vietoris sequence for \( \{ U, V\} \) reads in part\n\n\[ {H}_{p}\left( {U \cap V}\right) \overset{{j}_{ * }}{ \rightarrow }{H}_{p}\left( V\right) \overset{{l}_{ * }}{ \rightarrow }{H}_{p}\left( Y\right) \overset{{\partial }_{ * }}{ \rightarrow }{H}_{p - 1}\left( {U \cap V}\right) \overset{{j}_{ * }}{ \rightarrow }{H}_{p - 1}\left( V\right) ,\]\n\n(13.18)\n\nwhere \( j : U \cap V \hookrightarrow V \) and \( l : V \hookrightarrow Y \) are inclusion maps.\n\nThe easy case is (a). The hypothesis combined with our assumption \( p \geq 2 \) means that \( p \) is not equal to \( 0,1, n - 1 \), or \( n \) . Since \( U \cap V \simeq {\mathbb{S}}^{n - 1} \), the groups \( {H}_{p}\left( {U \cap V}\right) \) and \( {H}_{p - 1}\left( {U \cap V}\right) \) are both trivial. It follows that \( {l}_{ * } \) is an isomorphism. Combining this with the isomorphism \( {H}_{p}\left( X\right) \cong {H}_{p}\left( V\right) \) (also induced by inclusion), the result follows.\n\nNext consider case (b). We still have \( {H}_{p - 1}\left( {U \cap V}\right) = 0 \), so \( {l}_{ * } \) is surjective, but it might not be injective. 
To identify its kernel, consider the following commutative diagram, in which the unlabeled maps are inclusions:\n\n\n\nAll of the horizontal maps are homotopy equivalences, so we have the following commutative diagram of homology groups:\n\n\n\nSubstituting this into (13.18) yields an exact sequence\n\n\[ {H}_{n - 1}\left( {\partial D}\right) \overset{{\varphi }_{ * }}{ \rightarrow }{H}_{n - 1}\left( X\right) \rightarrow {H}_{n - 1}\left( Y\right) \rightarrow 0, \]\n\n(13.19)\n\nand (b) follows easily.\n\nIn case (c), making the same substitutions into (13.18) as above yields\n\n\[ 0 \rightarrow {H}_{n}\left( X\right) \rightarrow {H}_{n}\left( Y\right) \rightarrow {H}_{n - 1}\left( {\partial D}\right) \overset{{\varphi }_{ * }}{ \rightarrow }{H}_{n - 1}\left( X\right) ,\]\n\nand replacing \( {H}_{n - 1}\left( {\partial D}\right) \) by the kernel of \( {\varphi }_{ * } \) we obtain (c). This completes the proof under the assumption \( p \geq 2 \) .\n\nNow sup
|
Yes
|
Theorem 13.34 (Homology Properties of CW Complexes). Let \( X \) be a finite \( n \) - dimensional CW complex.\n\n(a) Inclusion \( {X}_{k} \hookrightarrow X \) induces isomorphisms \( {H}_{p}\left( {X}_{k}\right) \cong {H}_{p}\left( X\right) \) for \( p \leq k - 1 \) .\n\n(b) \( {H}_{p}\left( X\right) = 0 \) for \( p > n \) .\n\n(c) For \( 0 \leq p \leq n,{H}_{p}\left( X\right) \) is a finitely generated group, whose rank is less than or equal to the number of p-cells in \( X \) .\n\n(d) If \( X \) has no cells of dimension \( p - 1 \) or \( p + 1 \), then \( {H}_{p}\left( X\right) \) is a free abelian group whose rank is equal to the number of p-cells.\n\n(e) Suppose \( X \) has only one cell of dimension \( n \), and \( \varphi : \partial D \rightarrow {X}_{n - 1} \) is its attaching map. Then \( {H}_{n}\left( X\right) \) is infinite cyclic if \( {\varphi }_{ * } : {H}_{n - 1}\left( {\partial D}\right) \rightarrow {H}_{n - 1}\left( {X}_{n - 1}\right) \) is the zero map, and \( {H}_{n}\left( X\right) = 0 \) otherwise.
|
Proof. Part (a) follows immediately from Proposition 13.33, because attaching an \( m \) -cell cannot change \( {H}_{p}\left( X\right) \) if \( p < m - 1 \) .\n\nTo prove (b), assume \( p > n \), and note that \( X \) is obtained from \( {X}_{0} \) by adding finitely many cells of dimensions less than or equal to \( n \), so the homomorphism \( {H}_{p}\left( {X}_{0}\right) \rightarrow {H}_{p}\left( X\right) \) is an isomorphism by Proposition 13.33(a) and induction. Since \( {H}_{p}\left( {X}_{0}\right) = 0 \) by Proposition 13.7, the result follows.\n\nTo prove (c), note first that by (a), we can replace \( {H}_{p}\left( X\right) \) by the isomorphic group \( {H}_{p}\left( {X}_{p + 1}\right) \) . Furthermore, there is a surjection \( {H}_{p}\left( {X}_{p}\right) \rightarrow {H}_{p}\left( {X}_{p + 1}\right) \) by Proposition 13.33(b) and induction. Since a surjection takes generators to generators, and cannot increase rank by Proposition 9.23, it suffices to prove that \( {H}_{p}\left( {X}_{p}\right) \) satisfies the stated conditions. If there are no \( p \) -cells, then \( {H}_{p}\left( {X}_{p}\right) = {H}_{p}\left( {X}_{p - 1}\right) = 0 \) by part (b), so it suffices to show that attaching a single \( p \) -cell does not change the fact that the \( p \) th homology group is finitely generated, and does not increase its rank by more than 1.\n\nSuppose, therefore, that \( Z \) is a space such that \( {H}_{p}\left( Z\right) \) is finitely generated, and \( Y \) is obtained from \( Z \) by adding a \( p \) -cell. By Proposition 13.33(c), there is an exact sequence\n\n\[ 0 \rightarrow {H}_{p}\left( Z\right) \overset{{l}_{ * }}{ \rightarrow }{H}_{p}\left( Y\right) \rightarrow K \rightarrow 0, \]\n\nwhere \( l : Z \rightarrow Y \) is inclusion, and \( K \) is a subgroup of an infinite cyclic group and thus is either trivial or infinite cyclic. 
It follows from Proposition 9.23 that \( {H}_{p}\left( Y\right) \) is finitely generated and \( \operatorname{rank}{H}_{p}\left( Y\right) = \operatorname{rank}{H}_{p}\left( Z\right) + \operatorname{rank}K \leq \operatorname{rank}{H}_{p}\left( Z\right) + 1 \) .\n\nNext consider (d), and assume that \( X \) has no \( \left( {p - 1}\right) \) -cells or \( \left( {p + 1}\right) \) -cells. Since \( {X}_{p + 1} = {X}_{p} \), part (a) implies that \( {H}_{p}\left( X\right) \cong {H}_{p}\left( {X}_{p + 1}\right) = {H}_{p}\left( {X}_{p}\right) \) . We prove by induction on \( m \) that if \( X \) has \( m \) cells of dimension \( p \), then \( {H}_{p}\left( {X}_{p}\right) \) is free abelian of rank \( m \) . If \( m = 0 \), then \( {H}_{p}\left( {X}_{p}\right) = 0 \) by (c), so assume it is true when the number of \( p \) -cells is \( m - 1 \), and assume that \( X \) has \( m \) \( p \) -cells. Let \( e \) be one of the \( p \) -cells, let \( Z = X \smallsetminus e \), and let \( \varphi : \partial D \rightarrow {X}_{p - 1} = {Z}_{p - 1} \) be an attaching map for \( e \) . Then by induction \( {H}_{p}\left( Z\right) \)
|
Yes
|
Theorem 13.36. If \( X \) is a finite \( {CW} \) complex,\n\n\[ \chi \left( X\right) = \mathop{\sum }\limits_{p}{\left( -1\right) }^{p}\operatorname{rank}{H}_{p}\left( X\right) . \]\n\n(13.21)\n\nTherefore, the Euler characteristic is a homotopy invariant.
|
Proof. First let us assume that \( X \) is connected. We prove (13.21) by induction on the number of cells of dimension 2 or more. If \( X \) has no such cells, then it is a connected graph. Problem 10-20 shows that \( {\pi }_{1}\left( X\right) \) is a free group on \( 1 - \chi \left( X\right) \) generators, and then Theorem 13.14 and Problem 10-19 show that \( {H}_{1}\left( X\right) \) has rank \( 1 - \chi \left( X\right) \) . On the other hand, \( {H}_{0}\left( X\right) \) has rank 1 because \( X \) is connected, and \( {H}_{p}\left( X\right) = 0 \) for all other values of \( p \), so (13.21) follows.\n\nNow assume by induction that we have proved (13.21) for every finite CW complex with fewer than \( k \) cells of dimension 2 or more, and suppose \( X \) has \( k \) such cells. Let \( e \) be any cell of maximum dimension \( n \), and let \( Z = X \smallsetminus e \) . It suffices to show that\n\n\[ \chi \left( X\right) = \chi \left( Z\right) + {\left( -1\right) }^{n}. \]\n\n(13.22)\n\nLet \( \varphi : \partial D \rightarrow Z \) be the attaching map for \( e \), and let \( K \) and \( L \) be the kernel and image of \( {\varphi }_{ * } : {H}_{n - 1}\left( {\partial D}\right) \rightarrow {H}_{n - 1}\left( Z\right) \), respectively. Then from Proposition 13.33, we have isomorphisms\n\n\[ {H}_{p}\left( Z\right) \cong {H}_{p}\left( X\right) \;\left( {p \neq n, n - 1}\right) ,\]\n\nand exact sequences\n\n\[ 0 \rightarrow L \hookrightarrow {H}_{n - 1}\left( Z\right) \rightarrow {H}_{n - 1}\left( X\right) \rightarrow 0, \]\n\n\[ 0 \rightarrow {H}_{n}\left( Z\right) \rightarrow {H}_{n}\left( X\right) \rightarrow K \rightarrow 0. 
\]\n\nIt follows from Proposition 9.23 that\n\n\[ \operatorname{rank}{H}_{p}\left( X\right) = \operatorname{rank}{H}_{p}\left( Z\right) ,\;\left( {p \neq n, n - 1}\right) ,\]\n\n\[ \operatorname{rank}{H}_{n - 1}\left( X\right) = \operatorname{rank}{H}_{n - 1}\left( Z\right) - \operatorname{rank}L, \]\n\n\[ \operatorname{rank}{H}_{n}\left( X\right) = \operatorname{rank}{H}_{n}\left( Z\right) + \operatorname{rank}K. \]\n\nSumming these equations with appropriate signs, and using the fact (which also follows from Proposition 9.23) that \( \operatorname{rank}K + \operatorname{rank}L = \operatorname{rank}{H}_{n - 1}\left( {\partial D}\right) = 1 \), we obtain (13.22).\n\nFinally, if \( X \) is not connected, we can apply the preceding argument to each component of \( X \), and then each side of (13.21) is the sum of the corresponding terms for the individual components.
|
Yes
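Formula (13.21) can be spot-checked on familiar finite CW complexes. The sketch below compares the cell count of \( \chi \) with the alternating sum of Betti numbers; the cell structures and homology ranks quoted in the table are the standard ones (torsion subgroups contribute rank 0):

```python
def euler_from_cells(cells):
    """chi(X) = sum of (-1)^p (number of p-cells)."""
    return sum((-1) ** p * c for p, c in enumerate(cells))

def euler_from_betti(betti):
    """chi(X) = sum of (-1)^p rank H_p(X), as in (13.21)."""
    return sum((-1) ** p * b for p, b in enumerate(betti))

# (number of p-cells, rank of H_p) per dimension for some standard CW structures
examples = {
    "S^2":          ([1, 0, 1], [1, 0, 1]),
    "torus":        ([1, 2, 1], [1, 2, 1]),
    "RP^2":         ([1, 1, 1], [1, 0, 0]),   # H_1 = Z/2 has rank 0
    "Klein bottle": ([1, 2, 1], [1, 1, 0]),   # H_1 = Z + Z/2 has rank 1
}
for name, (cells, betti) in examples.items():
    assert euler_from_cells(cells) == euler_from_betti(betti)
```

Note how torsion is invisible on both sides: \( \mathbb{RP}^2 \) and the Klein bottle satisfy (13.21) even though their homology has torsion.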
|
Lemma 13.40. Let \( \mathbb{F} \) be a field of characteristic zero.\n\n(a) For any abelian group \( G \), the set \( \operatorname{Hom}\left( {G,\mathbb{F}}\right) \) of group homomorphisms from G to \( \mathbb{F} \) is a vector space over \( \mathbb{F} \) with scalar multiplication defined pointwise: \( \left( {a\varphi }\right) \left( g\right) = a\left( {\varphi \left( g\right) }\right) \) for \( a \in \mathbb{F} \) .
|
Proof. The proofs of (a) and (b) are straightforward (and hold for any field, not just one of characteristic zero), and are left as an exercise.
|
No
|
Corollary 13.44. If \( X \) is a topological space such that \( {H}_{p}\left( X\right) \) is finitely generated for all \( p \) and zero for \( p \) sufficiently large, then for any field \( \mathbb{F} \) of characteristic zero,
|
\[ \chi \left( X\right) = \mathop{\sum }\limits_{p}{\left( -1\right) }^{p}\dim {H}^{p}\left( {X;\mathbb{F}}\right) . \]
|
No
|
Proposition 1.1.1 If \( \lambda = \left( {{1}^{{m}_{1}},{2}^{{m}_{2}},\ldots ,{n}^{{m}_{n}}}\right) \) and \( g \in {S}_{n} \) has type \( \lambda \), then \( \left| {Z}_{g}\right| \) depends only on \( \lambda \) and
|
\[ {z}_{\lambda }\overset{\text{ def }}{ = }\left| {Z}_{g}\right| = {1}^{{m}_{1}}{m}_{1}!{2}^{{m}_{2}}{m}_{2}!\cdots {n}^{{m}_{n}}{m}_{n}! \] Proof. Any \( h \in {Z}_{g} \) can either permute the cycles of length \( i \) among themselves or perform a cyclic rotation on each of the individual cycles (or both). Since there are \( {m}_{i} \) ! ways to do the former operation and \( {i}^{{m}_{i}} \) ways to do the latter, we are done. ∎
|
Yes
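The formula for \( z_\lambda \) is easy to confirm by brute force in a small symmetric group. The sketch below (function names are ad hoc; permutations are tuples in one-line notation on \( \{0, \ldots, n-1\} \)) counts the centralizer of every element of \( S_4 \) directly and compares with \( 1^{m_1} m_1! \, 2^{m_2} m_2! \cdots \):

```python
import itertools
from math import factorial

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def cycle_type(p):
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths))

def z_lambda(lam, n):
    """1^{m_1} m_1! 2^{m_2} m_2! ... n^{m_n} m_n! for a cycle type lam."""
    out = 1
    for i in range(1, n + 1):
        m = lam.count(i)
        out *= i ** m * factorial(m)
    return out

n = 4
for g in itertools.permutations(range(n)):
    centralizer = sum(1 for h in itertools.permutations(range(n))
                      if compose(h, g) == compose(g, h))
    assert centralizer == z_lambda(cycle_type(g), n)
```

For instance a transposition in \( S_4 \) has type \( (1^2, 2^1) \), giving \( z_\lambda = 1^2 \cdot 2! \cdot 2 \cdot 1! = 4 \), matching its four commuting elements.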
|
All groups have the trivial representation, which is the one sending every \( g \in G \) to the matrix (1).
|
This is clearly a representation because \( X\left( \epsilon \right) = \left( 1\right) \) and\n\n\[ X\left( g\right) X\left( h\right) = \left( 1\right) \left( 1\right) = \left( 1\right) = X\left( {gh}\right) \]\n\nfor all \( g, h \in G \) . We often use \( {1}_{G} \) or just the number 1 itself to stand for the trivial representation of \( G \) . ∎
|
Yes
|
Let us find all one-dimensional representations of the cyclic group of order \( n,{C}_{n} \) . Let \( g \) be a generator of \( {C}_{n} \), i.e., \[ {C}_{n} = \left\{ {g,{g}^{2},{g}^{3},\ldots ,{g}^{n} = \epsilon }\right\} \]
|
If \( X\left( g\right) = \left( c\right), c \in \mathbb{C} \), then the matrix for every element of \( {C}_{n} \) is determined, since \( X\left( {g}^{k}\right) = \left( {c}^{k}\right) \) by property 2 in the preceding definition. But by property 1, \[ \left( {c}^{n}\right) = X\left( {g}^{n}\right) = X\left( \epsilon \right) = \left( 1\right) \] so \( c \) must be an \( n \) th root of unity. Clearly, each such root gives a representation, so there are exactly \( n \) representations of \( {C}_{n} \) having degree 1 .
|
Yes
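The argument above can be carried out numerically: each \( n \)th root of unity \( c \) determines the whole representation via \( X(g^a) = (c^a) \), and the homomorphism property holds because \( c^n = 1 \). A sketch (the function name `degree_one_reps` is ad hoc; a representation is stored as a dict from the exponent \( a \) to the scalar \( X(g^a) \)):

```python
import cmath

def degree_one_reps(n):
    """The n degree-1 representations of C_n = <g>, one per n-th root of unity."""
    roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(n)]
    # X(g^a) = c^a for each root c
    return [{a: c ** a for a in range(n)} for c in roots]

n = 6
for X in degree_one_reps(n):
    assert abs(X[0] - 1) < 1e-12                 # X(epsilon) = (1)
    for a in range(n):
        for b in range(n):
            # homomorphism property: X(g^a) X(g^b) = X(g^(a+b))
            assert abs(X[a] * X[b] - X[(a + b) % n]) < 1e-12
```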
|
Consider the symmetric group \( {\mathcal{S}}_{n} \) with its usual action on \( S = \{ 1,2,\ldots, n\} \) . Now\n\n\[ \mathbb{C}S = \left\{ {{c}_{1}\mathbf{1} + {c}_{2}\mathbf{2} + \cdots + {c}_{n}\mathbf{n} : {c}_{i} \in \mathbb{C}\text{ for all }i}\right\} \]\n\nwith the action\n\n\[ \pi \left( {{c}_{1}\mathbf{1} + {c}_{2}\mathbf{2} + \cdots + {c}_{n}\mathbf{n}}\right) = {c}_{1}\pi \left( \mathbf{1}\right) + {c}_{2}\pi \left( \mathbf{2}\right) + \cdots + {c}_{n}\pi \left( \mathbf{n}\right) \]\n\nfor all \( \pi \in {\mathcal{S}}_{n} \) .
|
To make things more concrete, we can select a basis and determine the matrices \( X\left( \pi \right) \) for \( \pi \in {\mathcal{S}}_{n} \) in that basis. Let us consider \( {\mathcal{S}}_{3} \) and use the standard basis \( \{ \mathbf{1},\mathbf{2},\mathbf{3}\} \) . To find the matrix for \( \pi = \left( {1,2}\right) \), we compute\n\n\[ \left( {1,2}\right) \mathbf{1} = \mathbf{2};\;\left( {1,2}\right) \mathbf{2} = \mathbf{1};\;\left( {1,2}\right) \mathbf{3} = \mathbf{3}; \]\n\nand so\n\n\[ X\left( \left( {1,2}\right) \right) = \left( \begin{array}{lll} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right) \]\n\nIf the reader determines the rest of the matrices for \( {\mathcal{S}}_{3} \), it will be noted that they are exactly the same as those of the defining representation, Example 1.2.4. It is not hard to show that the same is true for any \( n \) ; i.e., this is merely the module approach to the defining representation of \( {\mathcal{S}}_{n} \) . ∎
|
No
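Both the matrix computed above and the claim that the construction yields the defining representation can be checked by machine. A sketch (permutations are zero-indexed tuples in one-line notation, so the transposition \( (1,2) \) is `(1, 0, 2)`; function names are ad hoc):

```python
import itertools

def perm_matrix(p):
    """X(p)[i][j] = 1 iff p sends basis vector j to basis vector i."""
    n = len(p)
    return tuple(tuple(1 if p[j] == i else 0 for j in range(n)) for i in range(n))

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n))
                       for j in range(n)) for i in range(n))

def compose(p, q):
    """(p o q)(j) = p[q[j]]."""
    return tuple(p[q[j]] for j in range(len(p)))

# the matrix of the transposition (1,2), as computed in the text
t = (1, 0, 2)
assert perm_matrix(t) == ((0, 1, 0), (1, 0, 0), (0, 0, 1))

# X is a homomorphism: X(pi) X(sigma) = X(pi sigma) for all of S_3
for p in itertools.permutations(range(3)):
    for q in itertools.permutations(range(3)):
        assert matmul(perm_matrix(p), perm_matrix(q)) == perm_matrix(compose(p, q))
```

These are exactly the permutation matrices of the defining representation, in line with the remark above.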
|
We now describe one of the most important representations for any group, the (left) regular representation. Let \( G \) be an arbitrary group. Then \( G \) acts on itself by left multiplication: if \( g \in G \) and \( h \in S = G \), then the action of \( g \) on \( h,{gh} \), is defined as the usual product in the group. Properties 1, 3, and 4 now follow, respectively, from the closure, associativity, and identity axioms for the group.
|
Thus if \( G = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{n}}\right\} \), then we have the corresponding \( G \) -module\n\n\[ \mathbb{C}\left\lbrack \mathbf{G}\right\rbrack = \left\{ {{c}_{1}{\mathbf{g}}_{1} + {c}_{2}{\mathbf{g}}_{2} + \cdots + {c}_{n}{\mathbf{g}}_{n} : {c}_{i} \in \mathbb{C}\text{ for all }i}\right\} \]\n\nwhich is called the group algebra of \( G \) . Note the use of square brackets to indicate that this is an algebra, not just a vector space. The multiplication is gotten by letting \( {\mathbf{g}}_{i}{\mathbf{g}}_{j} = {\mathbf{g}}_{k} \) in \( \mathbb{C}\left\lbrack \mathbf{G}\right\rbrack \) if \( {g}_{i}{g}_{j} = {g}_{k} \) in \( G \), and linear extension. Now the action of \( G \) on the group algebra can be expressed as\n\n\[ g\left( {{c}_{1}{\mathbf{g}}_{1} + {c}_{2}{\mathbf{g}}_{2} + \cdots + {c}_{n}{\mathbf{g}}_{n}}\right) = {c}_{1}\left( {\mathbf{g}{\mathbf{g}}_{1}}\right) + {c}_{2}\left( {\mathbf{g}{\mathbf{g}}_{2}}\right) + \cdots + {c}_{n}\left( {\mathbf{g}{\mathbf{g}}_{n}}\right) \]\n\nfor all \( g \in G \) . The group algebra will furnish us with much important combinatorial information about group representations.
|
Yes
|
Example 1.3.5 Let group \( G \) have subgroup \( H \), written \( H \leq G \) . A generalization of the regular representation is the (left) coset representation of \( G \) with respect to \( H \) . Let \( {g}_{1},{g}_{2},\ldots ,{g}_{k} \) be a transversal for \( H \) ; i.e., \( \mathcal{H} = \left\{ {{g}_{1}H,{g}_{2}H,\ldots ,{g}_{k}H}\right\} \) is a complete set of disjoint left cosets for \( H \) in \( G \) . Then \( G \) acts on \( \mathcal{H} \) by letting\n\n\[ g\left( {{g}_{i}H}\right) = \left( {g{g}_{i}}\right) H \]\n\nfor all \( g \in G \) . The corresponding module\n\n\[ \mathbb{C}\mathcal{H} = \left\{ {{c}_{1}{\mathbf{g}}_{1}\mathbf{H} + {c}_{2}{\mathbf{g}}_{2}\mathbf{H} + \cdots + {c}_{k}{\mathbf{g}}_{k}\mathbf{H} : {c}_{i} \in \mathbb{C}\text{ for all }i}\right\} \]\n\ninherits the action\n\n\[ g\left( {{c}_{1}{\mathbf{g}}_{1}\mathbf{H} + \cdots + {c}_{k}{\mathbf{g}}_{k}\mathbf{H}}\right) = {c}_{1}\left( {\mathbf{g}{\mathbf{g}}_{1}\mathbf{H}}\right) + \cdots + {c}_{k}\left( {\mathbf{g}{\mathbf{g}}_{k}\mathbf{H}}\right) . \]\n\nNote that if \( H = G \), then this reduces to the trivial representation. At the other extreme, when \( H = \{ \epsilon \} \), then \( \mathcal{H} = G \) and we obtain the regular representation again. In general, representation by cosets is an example of an induced representation, an idea studied further in Section 1.12.
Let us consider \( G = {\mathcal{S}}_{3} \) and \( H = \{ \epsilon ,\left( {2,3}\right) \} \) . We can take\n\n\[ \mathcal{H} = \{ H,\left( {1,2}\right) H,\left( {1,3}\right) H\} \]\n\nand\n\n\[ \mathbb{C}\mathcal{H} = \left\{ {{c}_{1}\mathbf{H} + {c}_{2}\left( {\mathbf{1},\mathbf{2}}\right) \mathbf{H} + {c}_{3}\left( {\mathbf{1},\mathbf{3}}\right) \mathbf{H} : {c}_{i} \in \mathbb{C}\text{ for all }i}\right\} . \]\n\nComputing the matrix of \( \left( {1,2}\right) \) in the standard basis, we obtain\n\n\[ \left( {1,2}\right) \mathbf{H} = \left( {\mathbf{1},\mathbf{2}}\right) \mathbf{H},\left( {1,2}\right) \left( {\mathbf{1},\mathbf{2}}\right) \mathbf{H} = \mathbf{H},\left( {1,2}\right) \left( {\mathbf{1},\mathbf{3}}\right) \mathbf{H} = \left( {\mathbf{1},\mathbf{3},\mathbf{2}}\right) \mathbf{H} = \left( {\mathbf{1},\mathbf{3}}\right) \mathbf{H}, \]\n\nso that\n\n\[ X\left( \left( {1,2}\right) \right) = \left( \begin{array}{lll} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{array}\right) \]\n\nAfter finding a few more matrices, the reader will become convinced that we have rediscovered the defining representation yet again. The reason for this is explained when we consider isomorphism of modules in Section 1.6. ∎
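The matrix computed above can be checked mechanically. The sketch below (my own code, not the text's) builds the coset representation matrices for \( {\mathcal{S}}_{3} \) acting on the cosets of \( H = \{ \epsilon ,\left( {2,3}\right) \} \), with permutations in 0-indexed one-line form.

```python
def comp(p, q):                     # (p o q)(i) = p(q(i))
    return tuple(p[i] for i in q)

def inv(p):                         # inverse permutation
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

e = (0, 1, 2)
H = {e, (0, 2, 1)}                  # {eps, (2,3)} in 0-indexed one-line form
T = [e, (1, 0, 2), (2, 1, 0)]       # transversal: eps, (1,2), (1,3)

def coset_matrix(g):
    """X(g)[i][j] = 1 iff g t_j H = t_i H, i.e., t_i^{-1} g t_j in H."""
    X = [[0] * 3 for _ in range(3)]
    for j, tj in enumerate(T):
        for i, ti in enumerate(T):
            if comp(inv(ti), comp(g, tj)) in H:
                X[i][j] = 1
    return X

X12 = coset_matrix((1, 0, 2))       # the matrix of (1,2)
```

Running `coset_matrix` on the other elements of \( {\mathcal{S}}_{3} \) reproduces the defining representation matrices, as the text observes.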
For a nontrivial example of a submodule, consider \( G = {\mathcal{S}}_{n} \) , \( n \geq 2 \), and \( V = \mathbb{C}\{ \mathbf{1},\mathbf{2},\ldots ,\mathbf{n}\} \) (the defining representation). Now take\n\n\[ W = \mathbb{C}\{ \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}\} = \{ c\left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) : c \in \mathbb{C}\} \]\n\ni.e., \( W \) is the one-dimensional subspace spanned by the vector \( \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n} \). To check that \( W \) is closed under the action of \( {\mathcal{S}}_{n} \), it suffices to show that\n\n\[ \pi \mathbf{w} \in W\text{ for all }\mathbf{w}\text{ in some basis for }W\text{ and all }\pi \in {\mathcal{S}}_{n}\text{.} \]\n\n(Why?) Thus we need to verify only that\n\n\[ \pi \left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) \in W \]\n\nfor each \( \pi \in {\mathcal{S}}_{n} \).
But\n\n\[ \pi \left( {\mathbf{1} + \mathbf{2} + \cdots + \mathbf{n}}\right) = \pi \left( \mathbf{1}\right) + \pi \left( \mathbf{2}\right) + \cdots + \pi \left( \mathbf{n}\right) \]\n\n\[ = \mathbf{1} + \mathbf{2} + \cdots + \mathbf{n} \in W \]\n\nbecause applying \( \pi \) to \( \{ 1,2,\ldots, n\} \) just gives back the same set of numbers in a different order. Thus \( W \) is a submodule of \( V \) that is nontrivial since \( \dim W = 1 \) and \( \dim V = n \geq 2 \) .
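Since a permutation only rearranges the coordinates, this invariance can be confirmed exhaustively for a small \( n \); a quick sketch (illustrative code, not from the text):

```python
from itertools import permutations

n = 4
ones = [1] * n                      # coordinates of 1 + 2 + ... + n

def act(pi, v):
    """Permutation action on C{1,...,n}: basis vector i goes to pi(i)."""
    w = [0] * len(v)
    for i, c in enumerate(v):
        w[pi[i]] = c
    return w

# Every permutation fixes the all-ones vector, so W is a submodule.
fixed = all(act(pi, ones) == ones for pi in permutations(range(n)))
```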
Next, let us look again at the regular representation. Suppose \( G = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{n}}\right\} \) with group algebra \( V = \mathbb{C}\left\lbrack \mathbf{G}\right\rbrack \) . Using the same idea as in the previous example, let\n\n\[ W = \mathbb{C}\left\lbrack {{\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n}}\right\rbrack \]\n\nthe one-dimensional subspace spanned by the vector that is the sum of all the elements of \( G \) . To verify that \( W \) is a submodule, take any \( g \in G \) and compute:
\[ g\left( {{\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n}}\right) = g{\mathbf{g}}_{1} + g{\mathbf{g}}_{2} + \cdots + g{\mathbf{g}}_{n} \]\n\n\[ = {\mathbf{g}}_{1} + {\mathbf{g}}_{2} + \cdots + {\mathbf{g}}_{n} \in W \]\n\nbecause multiplying by \( g \) merely permutes the elements of \( G \), leaving the sum unchanged. As before, \( G \) acts trivially on \( W \) .
Proposition 1.5.2 Let \( V \) be a \( G \) -module, \( W \) a submodule, and \( \langle \cdot , \cdot \rangle \) an inner product invariant under the action of \( G \) . Then \( {W}^{ \bot } \) is also a \( G \) -submodule.
Proof. We must show that for all \( g \in G \) and \( \mathbf{u} \in {W}^{ \bot } \) we have \( g\mathbf{u} \in {W}^{ \bot } \). Take any \( \mathbf{w} \in W \); then

\[ \langle g\mathbf{u},\mathbf{w}\rangle = \left\langle {{g}^{-1}g\mathbf{u},{g}^{-1}\mathbf{w}}\right\rangle \;\text{(since }\langle \cdot , \cdot \rangle \text{ is invariant)} \]

\[ = \langle \mathbf{u},{g}^{-1}\mathbf{w}\rangle \;\text{(properties of group action)} \]

\[ = 0. \;\left( {\mathbf{u} \in {W}^{ \bot }\text{ and }{g}^{-1}\mathbf{w} \in W}\right) \]

Thus \( {W}^{ \bot } \) is closed under the action of \( G \). ∎
Theorem 1.5.3 (Maschke’s Theorem) Let \( G \) be a finite group and let \( V \) be a nonzero \( G \) -module. Then\n\n\[ V = {W}^{\left( 1\right) } \oplus {W}^{\left( 2\right) } \oplus \cdots \oplus {W}^{\left( k\right) } \]\n\nwhere each \( {W}^{\left( i\right) } \) is an irreducible \( G \) -submodule of \( V \) .
Proof. We will induct on \( d = \dim V \) . If \( d = 1 \), then \( V \) itself is irreducible and we are done \( \left( {k = 1}\right. \) and \( \left. {{W}^{\left( 1\right) } = V}\right) \) . Now suppose that \( d > 1 \) . If \( V \) is irreducible, then we are finished as before. If not, then \( V \) has a nontrivial \( G \) -submodule, \( W \) . We will try to construct a submodule complement for \( W \) as we did in the preceding example.\n\nPick any basis \( \mathcal{B} = \left\{ {{\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{d}}\right\} \) for \( V \) . Consider the unique inner product that satisfies\n\n\[ \left\langle {{\mathbf{v}}_{i},{\mathbf{v}}_{j}}\right\rangle = {\delta }_{i, j} \]\n\nfor elements of \( \mathcal{B} \) . This product may not be \( G \) -invariant, but we can come up with another one that is. For any \( \mathbf{v},\mathbf{w} \in V \) we let\n\n\[ \langle \mathbf{v},\mathbf{w}{\rangle }^{\prime } = \mathop{\sum }\limits_{{g \in G}}\langle g\mathbf{v}, g\mathbf{w}\rangle \]\n\nWe leave it to the reader to verify that \( \langle \cdot , \cdot {\rangle }^{\prime } \) satisfies the definition of an inner product. To show that it is \( G \) -invariant, we wish to prove\n\n\[ \langle h\mathbf{v}, h\mathbf{w}{\rangle }^{\prime } = \langle \mathbf{v},\mathbf{w}{\rangle }^{\prime } \]\n\nfor all \( h \in G \) and \( \mathbf{v},\mathbf{w} \in V \) . 
But

\[ \langle h\mathbf{v}, h\mathbf{w}{\rangle }^{\prime } = \mathop{\sum }\limits_{{g \in G}}\langle {gh}\mathbf{v},{gh}\mathbf{w}\rangle \;\text{(definition of }\langle \cdot , \cdot {\rangle }^{\prime }\text{)} \]

\[ = \mathop{\sum }\limits_{{f \in G}}\langle f\mathbf{v}, f\mathbf{w}\rangle \;\text{(as }g\text{ varies over }G\text{, so does }f = {gh}\text{)} \]

\[ = \langle \mathbf{v},\mathbf{w}{\rangle }^{\prime } \;\text{(definition of }\langle \cdot , \cdot {\rangle }^{\prime }\text{)} \]

as desired.

If we let

\[ {W}^{ \bot } = \left\{ {\mathbf{v} \in V : {\left\langle \mathbf{v},\mathbf{w}\right\rangle }^{\prime } = 0\text{ for all }\mathbf{w} \in W}\right\} , \]

then by Proposition 1.5.2 we have that \( {W}^{ \bot } \) is a \( G \)-submodule of \( V \) with

\[ V = W \oplus {W}^{ \bot }. \]

Now we can apply induction to \( W \) and \( {W}^{ \bot } \) to write each as a direct sum of irreducibles. Putting these two decompositions together, we see that \( V \) has the desired form. ∎
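The averaging trick at the heart of this proof is easy to verify numerically. The sketch below is illustrative only: for simplicity it averages an arbitrary bilinear form \( {\mathbf{v}}^{T}M\mathbf{w} \) rather than a genuine Hermitian inner product, but the mechanism \( {M}^{\prime } = \mathop{\sum }\limits_{{g \in G}}X{\left( g\right) }^{T}{MX}\left( g\right) \) is the same, using the defining representation of \( {\mathcal{S}}_{3} \) by permutation matrices.

```python
from itertools import permutations

def mat(pi):          # permutation matrix: column j has its 1 in row pi(j)
    return [[1 if pi[j] == i else 0 for j in range(3)] for i in range(3)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(A):
    return [list(r) for r in zip(*A)]

G = [mat(p) for p in permutations(range(3))]

M = [[1, 2, 0], [0, 1, 0], [0, 0, 3]]       # a NON-invariant starting form

# Average over the group, as in the proof: M' = sum_g X(g)^T M X(g).
terms = [mul(mul(transpose(X), M), X) for X in G]
Mavg = [[sum(t[i][j] for t in terms) for j in range(3)] for i in range(3)]

# The averaged form is G-invariant: X^T M' X = M' for every X in G.
invariant = all(mul(mul(transpose(X), Mavg), X) == Mavg for X in G)
```

Reindexing the sum by \( f = {gh} \), exactly as in the proof, is why the assertion holds for every group element.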
Corollary 1.5.4 Let \( G \) be a finite group and let \( X \) be a matrix representation of \( G \) of dimension \( d > 0 \) . Then there is a fixed matrix \( T \) such that every matrix \( X\left( g\right), g \in G \), has the form\n\n\[ \n{TX}\left( g\right) {T}^{-1} = \left( \begin{matrix} {X}^{\left( 1\right) }\left( g\right) & 0 & \cdots & 0 \\ 0 & {X}^{\left( 2\right) }\left( g\right) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {X}^{\left( k\right) }\left( g\right) \end{matrix}\right) \n\]\n\nwhere each \( {X}^{\left( i\right) } \) is an irreducible matrix representation of \( G \) .
Proof. Let \( V = {\mathbb{C}}^{d} \) with the action\n\n\[ \ng\mathbf{v} = X\left( g\right) \mathbf{v} \n\]\n\nfor all \( g \in G \) and \( \mathbf{v} \in V \) . By Maschke’s theorem,\n\n\[ \nV = {W}^{\left( 1\right) } \oplus {W}^{\left( 2\right) } \oplus \cdots \oplus {W}^{\left( k\right) } \n\]\n\neach \( {W}^{\left( i\right) } \) being irreducible of dimension, say, \( {d}_{i} \) . Take a basis \( \mathcal{B} \) for \( V \) such that the first \( {d}_{1} \) vectors are a basis for \( {W}^{\left( 1\right) } \), the next \( {d}_{2} \) are a basis for \( {W}^{\left( 2\right) } \) , etc. The matrix \( T \) that transforms the standard basis for \( {\mathbb{C}}^{d} \) into \( \mathcal{B} \) now does the trick, since conjugating by \( T \) just expresses each \( X\left( g\right) \) in the new basis \( \mathcal{B} \) . ∎
Proposition 1.6.4 Let \( \theta : V \rightarrow W \) be a \( G \) -homomorphism. Then\n\n1. \( \ker \theta \) is a \( G \) -submodule of \( V \), and\n\n2. \( \operatorname{im}\theta \) is a \( G \) -submodule of \( W \) .
Proof. We prove only the first assertion, leaving the second one for the reader. It is known from the theory of vector spaces that \( \ker \theta \) is a subspace of \( V \) since \( \theta \) is linear. So we only need to show closure under the action of \( G \). But if \( \mathbf{v} \in \ker \theta \), then for any \( g \in G \),

\[ \theta \left( {g\mathbf{v}}\right) = {g\theta }\left( \mathbf{v}\right) \;\left( {\theta \text{ is a }G\text{-homomorphism}}\right) \]

\[ = g\mathbf{0}\;\left( {\mathbf{v} \in \ker \theta }\right) \]

\[ = \mathbf{0}, \]

and so \( g\mathbf{v} \in \ker \theta \), as desired. ∎
Theorem 1.6.5 (Schur’s Lemma) Let \( V \) and \( W \) be two irreducible \( G \) - modules. If \( \theta : V \rightarrow W \) is a \( G \) -homomorphism, then either\n\n1. \( \theta \) is a \( G \) -isomorphism, or\n\n2. \( \theta \) is the zero map.
Proof. Since \( V \) is irreducible and \( \ker \theta \) is a submodule (by the previous proposition), we must have either \( \ker \theta = \{ \mathbf{0}\} \) or \( \ker \theta = V \). Similarly, the irreducibility of \( W \) implies that \( \operatorname{im}\theta = \{ \mathbf{0}\} \) or \( W \). If \( \ker \theta = V \) or \( \operatorname{im}\theta = \{ \mathbf{0}\} \), then \( \theta \) must be the zero map. On the other hand, if \( \ker \theta = \{ \mathbf{0}\} \) and \( \operatorname{im}\theta = W \), then we have an isomorphism. ∎
Corollary 1.6.6 Let \( X \) and \( Y \) be two irreducible matrix representations of \( G \) . If \( T \) is any matrix such that \( {TX}\left( g\right) = Y\left( g\right) T \) for all \( g \in G \), then either
1. \( T \) is invertible, or\n2. \( T \) is the zero matrix. ∎
Corollary 1.6.7 Let \( V \) and \( W \) be two \( G \)-modules with \( V \) being irreducible. Then \( \dim \operatorname{Hom}\left( {V, W}\right) = 0 \) if and only if \( W \) contains no submodule isomorphic to \( V \).
∎
Corollary 1.6.8 Let \( X \) be an irreducible matrix representation of \( G \) over the complex numbers. Then the only matrices \( T \) that commute with \( X\left( g\right) \) for all \( g \in G \) are those of the form \( T = {cI} \) -i.e., scalar multiples of the identity matrix.
∎
Suppose that \( X \) is a matrix representation such that\n\n\[ X = \left( \begin{matrix} {X}^{\left( 1\right) } & 0 \\ 0 & {X}^{\left( 2\right) } \end{matrix}\right) = {X}^{\left( 1\right) } \oplus {X}^{\left( 2\right) },\]\n\nwhere \( {X}^{\left( 1\right) },{X}^{\left( 2\right) } \) are inequivalent and irreducible of degrees \( {d}_{1},{d}_{2} \), respectively. What does \( \operatorname{Com}X \) look like?
Suppose that\n\n\[ T = \left( \begin{array}{ll} {T}_{1,1} & {T}_{1,2} \\ {T}_{2,1} & {T}_{2,2} \end{array}\right)\]\n\nis a matrix partitioned in the same way as \( X \) . If \( {TX} = {XT} \), then we can multiply out each side to obtain\n\n\[ \left( \begin{array}{ll} {T}_{1,1}{X}^{\left( 1\right) } & {T}_{1,2}{X}^{\left( 2\right) } \\ {T}_{2,1}{X}^{\left( 1\right) } & {T}_{2,2}{X}^{\left( 2\right) } \end{array}\right) = \left( \begin{array}{ll} {X}^{\left( 1\right) }{T}_{1,1} & {X}^{\left( 1\right) }{T}_{1,2} \\ {X}^{\left( 2\right) }{T}_{2,1} & {X}^{\left( 2\right) }{T}_{2,2} \end{array}\right) . \]\n\nEquating corresponding blocks we get\n\n\[ {T}_{1,1}{X}^{\left( 1\right) } = {X}^{\left( 1\right) }{T}_{1,1} \]\n\n\[ {T}_{1,2}{X}^{\left( 2\right) } = {X}^{\left( 1\right) }{T}_{1,2} \]\n\n\[ {T}_{2,1}{X}^{\left( 1\right) } = {X}^{\left( 2\right) }{T}_{2,1} \]\n\n\[ {T}_{2,2}{X}^{\left( 2\right) } = {X}^{\left( 2\right) }{T}_{2,2} \]\n\nUsing Corollaries 1.6.6 and 1.6.8 along with the fact that \( {X}^{\left( 1\right) } \) and \( {X}^{\left( 2\right) } \) are inequivalent, these equations can be solved to yield\n\n\[ {T}_{1,1} = {c}_{1}{I}_{{d}_{1}},\;{T}_{1,2} = {T}_{2,1} = 0,\;{T}_{2,2} = {c}_{2}{I}_{{d}_{2}}, \]\n\nwhere \( {c}_{1},{c}_{2} \in \mathbb{C} \) and \( {I}_{{d}_{1}},{I}_{{d}_{2}} \) are identity matrices of degrees \( {d}_{1},{d}_{2} \) . Thus\n\n\[ T = \left( \begin{matrix} {c}_{1}{I}_{{d}_{1}} & 0 \\ 0 & {c}_{2}{I}_{{d}_{2}} \end{matrix}\right) \]\n\nWe have shown that when \( X = {X}^{\left( 1\right) } \oplus {X}^{\left( 2\right) } \) with \( {X}^{\left( 1\right) } \ncong {X}^{\left( 2\right) } \) and irreducible, then\n\n\[ \operatorname{Com}X = \left\{ {{c}_{1}{I}_{{d}_{1}} \oplus {c}_{2}{I}_{{d}_{2}} : {c}_{1},{c}_{2} \in \mathbb{C}}\right\} \]\n\nwhere \( {d}_{1} = \deg {X}^{\left( 1\right) },{d}_{2} = \deg {X}^{\left( 2\right) } \).
Example 1.7.3 Suppose that\n\n\[ X = \left( \begin{matrix} {X}^{\left( 1\right) } & 0 \\ 0 & {X}^{\left( 1\right) } \end{matrix}\right) = 2{X}^{\left( 1\right) } \]\n\nwhere \( {X}^{\left( 1\right) } \) is irreducible of degree \( d \) . Take \( T \) partitioned as before. Doing the multiplication in \( {TX} = {XT} \) and equating blocks now yields four equations, all of the form\n\n\[ {T}_{i, j}{X}^{\left( 1\right) } = {X}^{\left( 1\right) }{T}_{i, j} \]\n\nfor all \( i, j = 1,2 \) .
Corollaries 1.6.6 and 1.6.8 come into play again to reveal that, for all \( i \) and \( j \) ,\n\n\[ {T}_{i, j} = {c}_{i, j}{I}_{d} \]\n\nwhere \( {c}_{i, j} \in \mathbb{C} \) . Thus\n\n\[ \operatorname{Com}X = \left\{ {\left( \begin{array}{ll} {c}_{1,1}{I}_{d} & {c}_{1,2}{I}_{d} \\ {c}_{2,1}{I}_{d} & {c}_{2,2}{I}_{d} \end{array}\right) : {c}_{i, j} \in \mathbb{C}\text{ for all }i, j}\right\} \]\n\n(1.11)\n\nis the commutant algebra in this case.
Proposition 1.7.6 The center of \( {\operatorname{Mat}}_{d} \) is\n\n\[ \n{Z}_{{\text{Mat }}_{d}} = \left\{ {c{I}_{d} : c \in \mathbb{C}}\right\} \n\]
Proof. Suppose that \( C \in {Z}_{{\operatorname{Mat}}_{d}} \). Then, in particular,

\[ C{E}_{i, i} = {E}_{i, i}C \]

(1.15)

for all \( i \). But \( C{E}_{i, i} \) (respectively, \( {E}_{i, i}C \)) is all zeros except for the \( i \)th column (respectively, row), which is the same as \( C \)'s. Thus (1.15) implies that all off-diagonal elements of \( C \) must be 0. Similarly, if \( i \neq j \), then

\[ C\left( {{E}_{i, j} + {E}_{j, i}}\right) = \left( {{E}_{i, j} + {E}_{j, i}}\right) C, \]

where the left (respectively, right) multiplication exchanges columns (respectively, rows) \( i \) and \( j \) of \( C \). It follows that all the diagonal elements must be equal and so \( C = c{I}_{d} \) for some \( c \in \mathbb{C} \). Finally, all these matrices clearly commute with any other matrix, so we are done. ∎
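For \( d = 2 \) the proposition can be confirmed by brute force. The illustrative sketch below checks, over a small grid of integer matrices, that commuting with every matrix unit \( {E}_{i, j} \) is equivalent to being a scalar multiple of the identity.

```python
from itertools import product

def mul(A, B):                      # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The four matrix units E_{i,j} of Mat_2.
E = [[[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]
     for i in range(2) for j in range(2)]

ok = True
for entries in product(range(3), repeat=4):   # all 2x2 matrices over {0,1,2}
    C = [list(entries[:2]), list(entries[2:])]
    central = all(mul(C, e) == mul(e, C) for e in E)
    scalar = C[0][1] == C[1][0] == 0 and C[0][0] == C[1][1]
    ok = ok and (central == scalar)           # central iff scalar
```

Commuting with the \( {E}_{i, j} \) suffices because they span \( {\operatorname{Mat}}_{2} \), matching the basis argument in the proof.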
Lemma 1.7.7 Suppose \( A, X \in {\operatorname{Mat}}_{d} \) and \( B, Y \in {\operatorname{Mat}}_{f} \) . Then\n\n1. \( \left( {A \oplus B}\right) \left( {X \oplus Y}\right) = {AX} \oplus {BY} \),\n\n2. \( \left( {A \otimes B}\right) \left( {X \otimes Y}\right) = {AX} \otimes {BY} \) .
Proof. Both assertions are easy to prove, so we will do only the second. Suppose \( A = \left( {a}_{i, j}\right) \) and \( X = \left( {x}_{i, j}\right) \). Then

\[ \left( {A \otimes B}\right) \left( {X \otimes Y}\right) = \left( {{a}_{i, j}B}\right) \left( {{x}_{i, j}Y}\right) \;\text{(definition of }\otimes\text{)} \]

\[ = \left( {\mathop{\sum }\limits_{k}{a}_{i, k}B\,{x}_{k, j}Y}\right) \;\text{(block multiplication)} \]

\[ = \left( {\left( {\mathop{\sum }\limits_{k}{a}_{i, k}{x}_{k, j}}\right) {BY}}\right) \;\text{(distributivity)} \]

\[ = {AX} \otimes {BY}. \;\text{(definition of }\otimes\text{)} \blacksquare \]
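Part 2 of the lemma is easy to spot-check numerically; here is a small self-contained sketch (names are mine, chosen for illustration):

```python
def kron(A, B):
    """Kronecker (tensor) product of matrices given as nested lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2], [3, 4]]
X = [[0, 1], [1, 1]]
B = [[2, 0], [1, 1]]
Y = [[1, 1], [0, 2]]

lhs = matmul(kron(A, B), kron(X, Y))   # (A (x) B)(X (x) Y)
rhs = kron(matmul(A, X), matmul(B, Y)) # AX (x) BY
```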
Theorem 1.7.8 Let \( X \) be a matrix representation of \( G \) such that\n\n\[ X = {m}_{1}{X}^{\left( 1\right) } \oplus {m}_{2}{X}^{\left( 2\right) } \oplus \cdots \oplus {m}_{k}{X}^{\left( k\right) },\]\n\nwhere the \( {X}^{\left( i\right) } \) are inequivalent, irreducible and \( \deg {X}^{\left( i\right) } = {d}_{i} \). Then\n\n1. \( \deg X = {m}_{1}{d}_{1} + {m}_{2}{d}_{2} + \cdots + {m}_{k}{d}_{k} \),\n\n2. \( \operatorname{Com}X = \left\{ {{ \oplus }_{i = 1}^{k}\left( {{M}_{{m}_{i}} \otimes {I}_{{d}_{i}}}\right) : {M}_{{m}_{i}} \in {\operatorname{Mat}}_{{m}_{i}}\text{for all}i}\right\} \),\n\n3. \( \dim \left( {\operatorname{Com}X}\right) = {m}_{1}^{2} + {m}_{2}^{2} + \cdots + {m}_{k}^{2} \),\n\n4. \( {Z}_{\text{Com }X} = \left\{ {{ \oplus }_{i = 1}^{k}{c}_{i}{I}_{{m}_{i}{d}_{i}} : {c}_{i} \in \mathbb{C}}\right. \) for all \( \left. i\right\} \), and\n\n5. \( \dim {Z}_{\operatorname{Com}X} = k \).
∎
Theorem 1.7.9 Let \( V \) be a \( G \) -module such that\n\n\[ V \cong {m}_{1}{V}^{\left( 1\right) } \oplus {m}_{2}{V}^{\left( 2\right) } \oplus \cdots \oplus {m}_{k}{V}^{\left( k\right) },\]\n\nwhere the \( {V}^{\left( i\right) } \) are pairwise inequivalent irreducibles and \( \dim {V}^{\left( i\right) } = {d}_{i} \). Then\n\n1. \( \dim V = {m}_{1}{d}_{1} + {m}_{2}{d}_{2} + \cdots + {m}_{k}{d}_{k} \),\n\n2. End \( V \cong { \oplus }_{i = 1}^{k}{\operatorname{Mat}}_{{m}_{i}} \),\n\n3. \( \dim \left( {\operatorname{End}V}\right) = {m}_{1}^{2} + {m}_{2}^{2} + \cdots + {m}_{k}^{2} \),\n\n4. \( {Z}_{\text{End }V} \) is isomorphic to the algebra of diagonal matrices of degree \( k \), and\n\n5. \( \dim {Z}_{\text{End }V} = k \).
∎
Proposition 1.7.10 Let \( V \) and \( W \) be \( G \) -modules with \( V \) irreducible. Then \( \dim \operatorname{Hom}\left( {V, W}\right) \) is the multiplicity of \( V \) in \( W \) .
∎
Suppose we consider the defining representation of \( {\mathcal{S}}_{n} \) with its character \( {\chi }^{\text{def }} \). If we take \( n = 3 \), then we can compute the character values directly by taking the traces of the matrices in Example 1.2.4. The results are

\[ {\chi }^{\text{def }}\left( {\left( 1\right) \left( 2\right) \left( 3\right) }\right) = 3,\;{\chi }^{\text{def }}\left( {\left( {1,2}\right) \left( 3\right) }\right) = 1,\;{\chi }^{\text{def }}\left( {\left( {1,3}\right) \left( 2\right) }\right) = 1, \]

\[ {\chi }^{\text{def }}\left( {\left( 1\right) \left( {2,3}\right) }\right) = 1,\;{\chi }^{\text{def }}\left( {\left( {1,2,3}\right) }\right) = 0,\;{\chi }^{\text{def }}\left( {\left( {1,3,2}\right) }\right) = 0. \]

It is not hard to see that in general, if \( \pi \in {\mathcal{S}}_{n} \), then

\[ {\chi }^{\text{def }}\left( \pi \right) = \text{the number of ones on the diagonal of }X\left( \pi \right) \]

\[ = \text{the number of fixed points of }\pi . \;\blacksquare \]
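The identity "trace equals number of fixed points" can be checked mechanically over all of \( {\mathcal{S}}_{3} \) (illustrative sketch, 0-indexed one-line permutations):

```python
from itertools import permutations

def perm_matrix(pi):                # X(pi): basis vector j goes to pi(j)
    n = len(pi)
    return [[1 if pi[j] == i else 0 for j in range(n)] for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

# For each pi in S_3, the trace of X(pi) counts the fixed points of pi.
checks = [trace(perm_matrix(pi)) == sum(1 for i in range(3) if pi[i] == i)
          for pi in permutations(range(3))]
```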
Example 1.8.4 Let \( G = \left\{ {{g}_{1},{g}_{2},\ldots ,{g}_{n}}\right\} \) and consider the regular representation with module \( V = \mathbb{C}\left\lbrack \mathbf{G}\right\rbrack \) and character \( {\chi }^{\text{reg }} \) . Now \( X\left( \epsilon \right) = {I}_{n} \), so \( {\chi }^{\text{reg }}\left( \epsilon \right) = \left| G\right| \)
To compute the character values for \( g \neq \epsilon \), we will use the matrices arising from the standard basis \( \mathcal{B} = \left\{ {{\mathbf{g}}_{1},{\mathbf{g}}_{2},\ldots ,{\mathbf{g}}_{n}}\right\} \). Now \( X\left( g\right) \) is the permutation matrix for the action of \( g \) on \( \mathcal{B} \), so \( {\chi }^{\text{reg }}\left( g\right) \) is the number of fixed points for that action. But if \( g{\mathbf{g}}_{i} = {\mathbf{g}}_{i} \) for any \( i \), then we must have \( g = \epsilon \), which is not the case; i.e., there are no fixed points if \( g \neq \epsilon \). To summarize,

\[ {\chi }^{\text{reg }}\left( g\right) = \left\{ \begin{array}{ll} \left| G\right| & \text{ if }g = \epsilon , \\ 0 & \text{ otherwise. } \end{array}\right. \]
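The formula for \( {\chi }^{\text{reg }} \) is easy to confirm on a small group, say \( \mathbb{Z}/4 \) (illustrative sketch; the group is written additively, so "left multiplication" is addition):

```python
G = [0, 1, 2, 3]                    # Z/4
op = lambda g, h: (g + h) % 4       # the group operation

def chi_reg(g):
    """Trace of g acting on C[G]: the number of h fixed by left
    multiplication, i.e. with g*h = h."""
    return sum(1 for h in G if op(g, h) == h)

values = [chi_reg(g) for g in G]    # expect [|G|, 0, 0, 0]
```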
Proposition 1.8.5 Let \( X \) be a matrix representation of a group \( G \) of degree \( d \) with character \( \chi \) .\n\n1. \( \chi \left( \epsilon \right) = d \) .\n\n2. If \( K \) is a conjugacy class of \( G \), then\n\n\[ g, h \in K \Rightarrow \chi \left( g\right) = \chi \left( h\right) . \]\n\n3. If \( Y \) is a representation of \( G \) with character \( \psi \), then\n\n\[ X \cong Y \Rightarrow \chi \left( g\right) = \psi \left( g\right) \]\n\nfor all \( g \in G \) .
Proof. 1. Since \( X\left( \epsilon \right) = {I}_{d} \),

\[ \chi \left( \epsilon \right) = \operatorname{tr}{I}_{d} = d. \]

2. By hypothesis \( g = {kh}{k}^{-1} \), so

\[ \chi \left( g\right) = \operatorname{tr}X\left( g\right) = \operatorname{tr}X\left( k\right) X\left( h\right) X{\left( k\right) }^{-1} = \operatorname{tr}X\left( h\right) = \chi \left( h\right) . \]

3. This assertion just says that equivalent representations have the same character. We have already proved this in the remarks following the preceding definition of group characters. ∎
If \( G = {C}_{n} \), the cyclic group with \( n \) elements, then each element of \( {C}_{n} \) is in a conjugacy class by itself (as is true for any abelian group). Since there are \( n \) conjugacy classes, there must be \( n \) inequivalent irreducible representations of \( {C}_{n} \) . But we found \( n \) degree 1 representations in Example 1.2.3, and they are pairwise inequivalent, since they all have different characters (Proposition 1.8.5, part 3). So we have found all the irreducibles for \( {C}_{n} \) .
Since the representations are one-dimensional, they are equal to their corresponding characters. Thus the table we displayed on page 5 is indeed the complete character table for \( {C}_{4} \) . ∎
Example 1.8.9 Recall that a conjugacy class in \( G = {\mathcal{S}}_{n} \) consists of all permutations of a given cycle type. In particular, for \( {\mathcal{S}}_{3} \) we have three conjugacy classes,\n\n\[ \n{K}_{1} = \{ \epsilon \} ,\;{K}_{2} = \{ \left( {1,2}\right) ,\left( {1,3}\right) ,\left( {2,3}\right) \} ,\;\text{ and }\;{K}_{3} = \{ \left( {1,2,3}\right) ,\left( {1,3,2}\right) \} .\n\]\n\nThus there are three irreducible representations of \( {\mathcal{S}}_{3} \) . We have met two of them, the trivial and sign representations. So this is as much as we know of the character table for \( {\mathcal{S}}_{3} \) :\n\n<table><thead><tr><th></th><th>\( {K}_{1} \)</th><th>\( {K}_{2} \)</th><th>\( {K}_{3} \)</th></tr></thead><tr><td>\( {\chi }^{\left( 1\right) } \)</td><td>1</td><td>1</td><td>1</td></tr><tr><td>\( {\chi }^{\left( 2\right) } \)</td><td>1</td><td>\( - 1 \)</td><td>1</td></tr><tr><td>\( {\chi }^{\left( 3\right) } \)</td><td>?</td><td>?</td><td>?</td></tr></table>
We will be able to fill in the last line using character inner products. ∎
Proposition 1.9.2 Let \( \chi \) and \( \psi \) be characters; then\n\n\[ \langle \chi ,\psi \rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\chi \left( g\right) \psi \left( {g}^{-1}\right) .\n\]
When the field is arbitrary, equation (1.19) is taken as the definition of the inner product. In fact, for any two functions \( \chi \) and \( \psi \) from \( G \) to a field, we can define

\[ \langle \chi ,\psi {\rangle }^{\prime } = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\chi \left( g\right) \psi \left( {g}^{-1}\right) , \]

and over the complex numbers this agrees with the original inner product whenever \( \chi \) and \( \psi \) are characters, by Proposition 1.9.2.
Corollary 1.9.4 Let \( X \) be a matrix representation of \( G \) with character \( \chi \) . Suppose\n\n\[X \cong {m}_{1}{X}^{\left( 1\right) } \oplus {m}_{2}{X}^{\left( 2\right) } \oplus \cdots \oplus {m}_{k}{X}^{\left( k\right) },\] \n\nwhere the \( {X}^{\left( i\right) } \) are pairwise inequivalent irreducibles with characters \( {\chi }^{\left( i\right) } \). \n\n1. \( \chi = {m}_{1}{\chi }^{\left( 1\right) } + {m}_{2}{\chi }^{\left( 2\right) } + \cdots + {m}_{k}{\chi }^{\left( k\right) } \). \n\n2. \( \left\langle {\chi ,{\chi }^{\left( j\right) }}\right\rangle = {m}_{j} \) for all \( j \). \n\n3. \( \langle \chi ,\chi \rangle = {m}_{1}^{2} + {m}_{2}^{2} + \cdots + {m}_{k}^{2} \). \n\n4. \( X \) is irreducible if and only if \( \langle \chi ,\chi \rangle = 1 \). \n\n5. Let \( Y \) be another matrix representation of \( G \) with character \( \psi \) . Then\n\n\[X \cong Y\text{if and only if}\chi \left( g\right) = \psi \left( g\right)\]\n\nfor all \( g \in G \) .
Proof. 1. Using the fact that the trace of a direct sum is the sum of the traces, we see that

\[ \chi = \operatorname{tr}X = \operatorname{tr}{\bigoplus }_{i = 1}^{k}{m}_{i}{X}^{\left( i\right) } = \mathop{\sum }\limits_{{i = 1}}^{k}{m}_{i}{\chi }^{\left( i\right) }. \]

2. We have, by the previous theorem,

\[ \left\langle {\chi ,{\chi }^{\left( j\right) }}\right\rangle = \left\langle {\mathop{\sum }\limits_{i}{m}_{i}{\chi }^{\left( i\right) },{\chi }^{\left( j\right) }}\right\rangle = \mathop{\sum }\limits_{i}{m}_{i}\left\langle {{\chi }^{\left( i\right) },{\chi }^{\left( j\right) }}\right\rangle = {m}_{j}. \]

3. By another application of Theorem 1.9.3:

\[ \langle \chi ,\chi \rangle = \left\langle {\mathop{\sum }\limits_{i}{m}_{i}{\chi }^{\left( i\right) },\mathop{\sum }\limits_{j}{m}_{j}{\chi }^{\left( j\right) }}\right\rangle = \mathop{\sum }\limits_{{i, j}}{m}_{i}{m}_{j}\left\langle {{\chi }^{\left( i\right) },{\chi }^{\left( j\right) }}\right\rangle = \mathop{\sum }\limits_{i}{m}_{i}^{2}. \]

4. The assertion that \( X \) is irreducible implies that \( \langle \chi ,\chi \rangle = 1 \) is just part of the orthogonality relations already proved. For the converse, suppose that

\[ \langle \chi ,\chi \rangle = \mathop{\sum }\limits_{i}{m}_{i}^{2} = 1. \]

Then there must be exactly one index \( j \) such that \( {m}_{j} = 1 \) and all the rest of the \( {m}_{i} \) must be zero. But then \( X \cong {X}^{\left( j\right) } \), which is irreducible by assumption.

5. The forward implication was proved as part 3 of Proposition 1.8.5. For the other direction, let \( Y \cong { \oplus }_{i = 1}^{k}{n}_{i}{X}^{\left( i\right) } \). There is no harm in assuming that the \( X \) and \( Y \) expansions both contain the same irreducibles: any irreducible found in one but not the other can be inserted with multiplicity 0.
Now \( \chi = \psi \), so \( \left\langle {\chi ,{\chi }^{\left( i\right) }}\right\rangle = \left\langle {\psi ,{\chi }^{\left( i\right) }}\right\rangle \) for all \( i \) . But then, by part 2 of this corollary, \( {m}_{i} = {n}_{i} \) for all \( i \) . Thus the two direct sums are equivalent-i.e., \( X \cong Y \) . ∎
Example 1.9.5 Let \( G = {\mathcal{S}}_{3} \) and consider \( \chi = {\chi }^{\text{def }} \) . Let \( {\chi }^{\left( 1\right) },{\chi }^{\left( 2\right) },{\chi }^{\left( 3\right) } \) be the three irreducible characters of \( {\mathcal{S}}_{3} \), where the first two are the trivial and sign characters, respectively. By Maschke's theorem, we know that\n\n\[ \chi = {m}_{1}{\chi }^{\left( 1\right) } + {m}_{2}{\chi }^{\left( 2\right) } + {m}_{3}{\chi }^{\left( 3\right) } \]\n\nFurthermore, we can use equation (1.24) and part 2 of Corollary 1.9.4 to compute \( {m}_{1} \) and \( {m}_{2} \) (character values for \( \chi = {\chi }^{\text{def }} \) were found in Example 1.8.3):
\[ {m}_{1} = \left\langle {\chi ,{\chi }^{\left( 1\right) }}\right\rangle = \frac{1}{3!}\mathop{\sum }\limits_{{\pi \in {\mathcal{S}}_{3}}}\chi \left( \pi \right) {\chi }^{\left( 1\right) }\left( \pi \right) = \frac{1}{6}\left( {3 \cdot 1 + 1 \cdot 1 + 1 \cdot 1 + 1 \cdot 1 + 0 \cdot 1 + 0 \cdot 1}\right) = 1, \]\n\n\[ {m}_{2} = \left\langle {\chi ,{\chi }^{\left( 2\right) }}\right\rangle = \frac{1}{3!}\mathop{\sum }\limits_{{\pi \in {\mathcal{S}}_{3}}}\chi \left( \pi \right) {\chi }^{\left( 2\right) }\left( \pi \right) = \frac{1}{6}\left( {3 \cdot 1 - 1 \cdot 1 - 1 \cdot 1 - 1 \cdot 1 + 0 \cdot 1 + 0 \cdot 1}\right) = 0. \]\n\nThus\n\n\[ \chi = {\chi }^{\left( 1\right) } + {m}_{3}{\chi }^{\left( 3\right) } \]
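These inner products can be computed exactly by enumerating \( {\mathcal{S}}_{3} \). The sketch below is illustrative (my own helper names); since every character here is real-valued, no complex conjugation is needed.

```python
from itertools import permutations
from fractions import Fraction

S3 = list(permutations(range(3)))   # all 6 permutations, 0-indexed

def chi_def(pi):                    # defining character = number of fixed points
    return sum(1 for i in range(3) if pi[i] == i)

def sign(pi):                       # sign character via inversion count
    n_inv = sum(1 for i in range(3) for j in range(i + 1, 3) if pi[i] > pi[j])
    return -1 if n_inv % 2 else 1

def inner(f, g):                    # <f, g> = (1/|G|) sum_pi f(pi) g(pi)
    return Fraction(sum(f(p) * g(p) for p in S3), len(S3))

m1 = inner(chi_def, lambda p: 1)    # multiplicity of the trivial character
m2 = inner(chi_def, sign)           # multiplicity of the sign character
norm = inner(chi_def, chi_def)      # = m1^2 + m2^2 + m3^2
```

Since the norm comes out to 2 and \( {m}_{1} = 1,{m}_{2} = 0 \), it follows that \( {m}_{3}^{2} = 1 \), so \( {m}_{3} = 1 \).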
Proposition 1.10.2 The irreducible characters of a group \( G \) form an orthonormal basis for the space of class functions \( R\left( G\right) \) .
Proof. Since the irreducible characters are orthonormal with respect to the bilinear form \( \langle \cdot , \cdot \rangle \) on \( R\left( G\right) \) (Theorem 1.9.3), they are linearly independent. But part 3 of Proposition 1.10.1 and equation (1.18) show that we have \( \dim R\left( G\right) \) such characters. Thus they are a basis. ∎
Theorem 1.10.3 (Character Relations of the Second Kind) Let \( K, L \) be conjugacy classes of \( G \) . Then\n\n\[ \mathop{\sum }\limits_{\chi }{\chi }_{K}\overline{{\chi }_{L}} = \frac{\left| G\right| }{\left| K\right| }{\delta }_{K, L} \]\n\nwhere the sum is over all irreducible characters of \( G \) .
Proof. If \( \chi \) and \( \psi \) are irreducible characters, then the character relations of the first kind yield\n\n\[ \langle \chi ,\psi \rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{K}\left| K\right| {\chi }_{K}\overline{{\psi }_{K}} = {\delta }_{\chi ,\psi }, \]\n\nwhere the sum is over all conjugacy classes of \( G \) . But this says that the modified character table\n\n\[ U = \left( {\sqrt{\left| K\right| /\left| G\right| }{\chi }_{K}}\right) \]\n\nhas orthonormal rows. Hence \( U \), being square, is a unitary matrix and has orthonormal columns. The theorem follows. ∎
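The relations of the second kind can be verified directly on \( {\mathcal{S}}_{3} \). The third irreducible character takes the values \( 2,0, - 1 \) on the three classes (it equals \( {\chi }^{\text{def }} - {\chi }^{\left( 1\right) } \), since the defining character decomposes with multiplicities \( 1,0,1 \)); the sketch below is illustrative.

```python
# Full character table of S_3: rows = irreducible characters, columns =
# classes K1 = {eps}, K2 = transpositions, K3 = 3-cycles (all values real).
table = [[1,  1,  1],               # trivial
         [1, -1,  1],               # sign
         [2,  0, -1]]               # chi^(3) = chi^def - chi^(1)
sizes = [1, 3, 2]                   # class sizes |K1|, |K2|, |K3|
order = 6                           # |S_3|

# Column (second-kind) orthogonality: sum_chi chi_K chi_L = (|G|/|K|) delta.
col = {(K, L): sum(row[K] * row[L] for row in table)
       for K in range(3) for L in range(3)}
```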
Theorem 1.11.2 Let \( X \) and \( Y \) be matrix representations for \( G \) and \( H \), respectively.\n\n1. Then \( X \otimes Y \) is a representation of \( G \times H \) .\n\n2. If \( X, Y \) and \( X \otimes Y \) have characters denoted by \( \chi ,\psi \), and \( \chi \otimes \psi \), respectively, then\n\n\[ \left( {\chi \otimes \psi }\right) \left( {g, h}\right) = \chi \left( g\right) \psi \left( h\right) \] \n\nfor all \( \left( {g, h}\right) \in G \times H \) .
Proof. 1. We verify the two conditions defining a representation. First of all,\n\n\[ \left( {X \otimes Y}\right) \left( {\epsilon ,\epsilon }\right) = X\left( \epsilon \right) \otimes Y\left( \epsilon \right) = I \otimes I = I. \]\n\nSecondly, if \( \left( {g, h}\right) ,\left( {{g}^{\prime },{h}^{\prime }}\right) \in G \times H \), then using Lemma 1.7.7, part 2,\n\n\[ \left( {X \otimes Y}\right) \left( {\left( {g, h}\right) \cdot \left( {{g}^{\prime },{h}^{\prime }}\right) }\right) = \left( {X \otimes Y}\right) \left( {g{g}^{\prime }, h{h}^{\prime }}\right) \]\n\n\[ = X\left( {g{g}^{\prime }}\right) \otimes Y\left( {h{h}^{\prime }}\right) \]\n\n\[ = X\left( g\right) X\left( {g}^{\prime }\right) \otimes Y\left( h\right) Y\left( {h}^{\prime }\right) \]\n\n\[ = \left( {X\left( g\right) \otimes Y\left( h\right) }\right) \cdot \left( {X\left( {g}^{\prime }\right) \otimes Y\left( {h}^{\prime }\right) }\right) \]\n\n\[ = \left( {X \otimes Y}\right) \left( {g, h}\right) \cdot \left( {X \otimes Y}\right) \left( {{g}^{\prime },{h}^{\prime }}\right) . \]\n\n2. Note that for any matrices \( A \) and \( B \) ,\n\n\[ \operatorname{tr}A \otimes B = \operatorname{tr}\left( {{a}_{i, j}B}\right) = \mathop{\sum }\limits_{i}{a}_{i, i}\operatorname{tr}B = \operatorname{tr}A\operatorname{tr}B. \]\n\nThus\n\n\[ \left( {\chi \otimes \psi }\right) \left( {g, h}\right) = \operatorname{tr}\left( {X\left( g\right) \otimes Y\left( h\right) }\right) = \operatorname{tr}X\left( g\right) \operatorname{tr}Y\left( h\right) = \chi \left( g\right) \psi \left( h\right) .\blacksquare \]
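The trace identity used in part 2, \( \operatorname{tr}\left( {A \otimes B}\right) = \left( {\operatorname{tr}A}\right) \left( {\operatorname{tr}B}\right) \), is easy to test on sample matrices (illustrative sketch):

```python
def kron(A, B):
    """Kronecker product of matrices given as nested lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 5]]
product_of_traces = trace(A) * trace(B)    # 5 * 5 = 25
trace_of_kron = trace(kron(A, B))
```

The diagonal of \( A \otimes B \) consists of all products \( {a}_{i, i}{b}_{k, k} \), which is exactly why the two quantities agree.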
|
Yes
|
Theorem 1.11.3 Let \( G \) and \( H \) be groups.\n\n1. If \( X \) and \( Y \) are irreducible representations of \( G \) and \( H \), respectively, then \( X \otimes Y \) is an irreducible representation of \( G \times H \) .
|
Proof. 1. If \( \phi \) is any character, then we know (Corollary 1.9.4, part 4) that the corresponding representation is irreducible if and only if \( \langle \phi ,\phi \rangle = 1 \) . Letting \( X \) and \( Y \) have characters \( \chi \) and \( \psi \), respectively, we have\n\n\[ \langle \chi \otimes \psi ,\chi \otimes \psi \rangle = \frac{1}{\left| G \times H\right| }\mathop{\sum }\limits_{{\left( {g, h}\right) \in G \times H}}\left( {\chi \otimes \psi }\right) \left( {g, h}\right) \left( {\chi \otimes \psi }\right) \left( {{g}^{-1},{h}^{-1}}\right) \]\n\n\[ = \left\lbrack {\frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\chi \left( g\right) \chi \left( {g}^{-1}\right) }\right\rbrack \left\lbrack {\frac{1}{\left| H\right| }\mathop{\sum }\limits_{{h \in H}}\psi \left( h\right) \psi \left( {h}^{-1}\right) }\right\rbrack \]\n\n\[ = \langle \chi ,\chi \rangle \langle \psi ,\psi \rangle \]\n\n\[ = 1 \cdot 1 \]\n\n\[ = 1\text{.} \]
|
Yes
|
Proposition 1.12.3 Let \( H \leq G \) have transversal \( \left\{ {{t}_{1},\ldots ,{t}_{l}}\right\} \) with cosets \( \mathcal{H} = \left\{ {{t}_{1}H,\ldots ,{t}_{l}H}\right\} \) . Then the matrices of \( 1{ \uparrow }_{H}^{G} \) are identical with those of \( G \) acting on the basis \( \mathcal{H} \) for the coset module \( \mathbb{C}\mathcal{H} \) .
|
Proof. Let the matrices for \( 1{ \uparrow }^{G} \) and \( \mathbb{C}\mathcal{H} \) be \( X = \left( {x}_{i, j}\right) \) and \( Z = \left( {z}_{i, j}\right) \) , respectively. Both arrays contain only zeros and ones. Moreover, for any \( g \in G \) ,\n\n\[ \n{x}_{i, j}\left( g\right) = 1 \Leftrightarrow {t}_{i}^{-1}g{t}_{j} \in H \n\]\n\n\[ \n\Leftrightarrow g{t}_{j}H = {t}_{i}H \n\]\n\n\[ \n\Leftrightarrow {z}_{i, j}\left( g\right) = 1. \n\]\n\nThus \( X = Z \), and \( \mathbb{C}\mathcal{H} \) is a module for \( 1{ \uparrow }_{H}^{G} \) . ∎
|
Yes
|
Theorem 1.12.4 Suppose \( H \leq G \) has transversal \( \left\{ {{t}_{1},\ldots ,{t}_{l}}\right\} \) and let \( Y \) be a matrix representation of \( H \) . Then \( X = Y{ \uparrow }_{H}^{G} \) is a representation of \( G \) .
|
Proof. Analogous to the case where \( Y \) is the trivial representation, we prove that \( X\left( g\right) \) is always a block permutation matrix; i.e., every row and column contains exactly one nonzero block \( Y\left( {{t}_{i}^{-1}g{t}_{j}}\right) \) . Consider the first column (the other cases being similar). It suffices to show that there is a unique element of \( H \) on the list \( {t}_{1}^{-1}g{t}_{1},{t}_{2}^{-1}g{t}_{1},\ldots ,{t}_{l}^{-1}g{t}_{1} \) . But \( g{t}_{1} \in {t}_{i}H \) for exactly one of the \( {t}_{i} \) in our transversal, and so \( {t}_{i}^{-1}g{t}_{1} \in H \) is the element we seek.\n\nWe must also verify that \( X\left( \epsilon \right) \) is the identity matrix, but that follows directly from the definition of induction.\n\nFinally, we need to show that \( X\left( g\right) X\left( h\right) = X\left( {gh}\right) \) for all \( g, h \in G \) . Considering the \( \left( {i, j}\right) \) block on both sides, it suffices to prove\n\n\[ \mathop{\sum }\limits_{k}Y\left( {{t}_{i}^{-1}g{t}_{k}}\right) Y\left( {{t}_{k}^{-1}h{t}_{j}}\right) = Y\left( {{t}_{i}^{-1}{gh}{t}_{j}}\right) . \]\n\nFor ease of notation, let \( {a}_{k} = {t}_{i}^{-1}g{t}_{k},{b}_{k} = {t}_{k}^{-1}h{t}_{j} \), and \( c = {t}_{i}^{-1}{gh}{t}_{j} \) . Note that \( {a}_{k}{b}_{k} = c \) for all \( k \) and that the sum can be rewritten\n\n\[ \mathop{\sum }\limits_{k}Y\left( {a}_{k}\right) Y\left( {b}_{k}\right) \overset{?}{ = }Y\left( c\right) \]\n\nNow the proof breaks into two cases.\n\nIf \( Y\left( c\right) = 0 \), then \( c \notin H \), and so either \( {a}_{k} \notin H \) or \( {b}_{k} \notin H \) for all \( k \) . Thus \( Y\left( {a}_{k}\right) \) or \( Y\left( {b}_{k}\right) \) is zero for each \( k \), which forces the sum to be zero as well.\n\nIf \( Y\left( c\right) \neq 0 \), then \( c \in H \) . Let \( m \) be the unique index such that \( {a}_{m} \in H \) . 
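As a concrete sketch (this example is illustrative, not from the text), one can build these block matrices for a small case and verify all three properties established in the proof: take \( G = {\mathcal{S}}_{3} \), \( H = \{ \epsilon ,\left( {1,2}\right) \} \), and \( Y \) the sign representation of \( H \). Permutations are encoded as tuples of images on \( \{ 0,1,2\} \), a convention of this sketch:

```python
from itertools import permutations

def comp(p, q):                     # (p∘q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

def sgn(p):                         # sign via inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))
e = (0, 1, 2)
H = [e, (1, 0, 2)]                  # H = {ε, (1 2)}
T = [e, (2, 1, 0), (0, 2, 1)]       # a transversal of H in S_3

def X(g):                           # induced matrix: block Y(t_i^{-1} g t_j), zero if ∉ H
    def block(ti, tj):
        a = comp(comp(inv(ti), g), tj)
        return sgn(a) if a in H else 0   # Y = sign rep of H (1×1 blocks)
    return [[block(ti, tj) for tj in T] for ti in T]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# each row and column has exactly one nonzero block
assert all(sum(v != 0 for v in row) == 1 for g in G for row in X(g))
assert X(e) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert all(matmul(X(g), X(h)) == X(comp(g, h)) for g in G for h in G)
```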
Thus \( {b}_{m} = {a}_{m}^{-1}c \in H \), and so\n\n\[ \mathop{\sum }\limits_{k}Y\left( {a}_{k}\right) Y\left( {b}_{k}\right) = Y\left( {a}_{m}\right) Y\left( {b}_{m}\right) = Y\left( {{a}_{m}{b}_{m}}\right) = Y\left( c\right) ,\]\n\ncompleting the proof.
|
Yes
|
Proposition 1.12.5 Consider \( H \leq G \) and a matrix representation \( Y \) of H. Let \( \left\{ {{t}_{1},\ldots ,{t}_{l}}\right\} \) and \( \left\{ {{s}_{1},\ldots ,{s}_{l}}\right\} \) be two transversals for \( H \) giving rise to representation matrices \( X \) and \( Z \), respectively, for \( Y{ \uparrow }^{G} \) . Then \( X \) and \( Z \) are equivalent.
|
Proof. Let \( \chi ,\psi \), and \( \phi \) be the characters of \( X, Y \), and \( Z \), respectively. Then it suffices to show that \( \chi = \phi \) (Corollary 1.9.4, part 5). Now\n\n\[ \chi \left( g\right) = \mathop{\sum }\limits_{i}\operatorname{tr}Y\left( {{t}_{i}^{-1}g{t}_{i}}\right) = \mathop{\sum }\limits_{i}\psi \left( {{t}_{i}^{-1}g{t}_{i}}\right) \]\n\n\( \left( {1.27}\right) \)\n\nwhere \( \psi \left( g\right) = 0 \) if \( g \notin H \) . Similarly,\n\n\[ \phi \left( g\right) = \mathop{\sum }\limits_{i}\psi \left( {{s}_{i}^{-1}g{s}_{i}}\right) \]\n\nSince the \( {t}_{i} \) and \( {s}_{i} \) are both transversals, we can permute subscripts if necessary to obtain \( {t}_{i}H = {s}_{i}H \) for all \( i \) . Now \( {t}_{i} = {s}_{i}{h}_{i} \), where \( {h}_{i} \in H \) for all \( i \) , and so\n\n\[ {t}_{i}^{-1}g{t}_{i} = {h}_{i}^{-1}{s}_{i}^{-1}g{s}_{i}{h}_{i} \]\n\nThus \( {t}_{i}^{-1}g{t}_{i} \in H \) if and only if \( {s}_{i}^{-1}g{s}_{i} \in H \), and when both lie in \( H \), they are in the same conjugacy class. It follows that \( \psi \left( {{t}_{i}^{-1}g{t}_{i}}\right) = \psi \left( {{s}_{i}^{-1}g{s}_{i}}\right) \), since \( \psi \) is constant on conjugacy classes of \( H \) and zero outside. Hence the sums for \( \chi \) and \( \phi \) are the same. ∎
|
Yes
|
Theorem 1.12.6 (Frobenius Reciprocity) Let \( H \leq G \) and suppose that \( \psi \) and \( \chi \) are characters of \( H \) and \( G \), respectively. Then\n\n\[ \left\langle {\psi { \uparrow }^{G},\chi }\right\rangle = \left\langle {\psi ,\chi { \downarrow }_{H}}\right\rangle \]\n\nwhere the left inner product is calculated in \( G \) and the right one in \( H \) .
|
Proof. We have the following string of equalities:\n\n\[ \left\langle {\psi { \uparrow }^{G},\chi }\right\rangle = \frac{1}{\left| G\right| }\mathop{\sum }\limits_{{g \in G}}\psi { \uparrow }^{G}\left( g\right) \chi \left( {g}^{-1}\right) \]\n\n\[ = \frac{1}{\left| G\right| \left| H\right| }\mathop{\sum }\limits_{{x \in G}}\mathop{\sum }\limits_{{g \in G}}\psi \left( {{x}^{-1}{gx}}\right) \chi \left( {g}^{-1}\right) \;\text{ (equation (1.28)) } \]\n\n\[ = \frac{1}{\left| G\right| \left| H\right| }\mathop{\sum }\limits_{{x \in G}}\mathop{\sum }\limits_{{y \in G}}\psi \left( y\right) \chi \left( {x{y}^{-1}{x}^{-1}}\right) \;\left( {\text{ let }y = {x}^{-1}{gx}}\right) \]\n\n\[ = \frac{1}{\left| G\right| \left| H\right| }\mathop{\sum }\limits_{{x \in G}}\mathop{\sum }\limits_{{y \in G}}\psi \left( y\right) \chi \left( {y}^{-1}\right) \]\n\n\[ = \frac{1}{\left| H\right| }\mathop{\sum }\limits_{{y \in G}}\psi \left( y\right) \chi \left( {y}^{-1}\right) \;\left( {x\text{ constant in the sum }}\right) \]\n\n\[ = \frac{1}{\left| H\right| }\mathop{\sum }\limits_{{y \in H}}\psi \left( y\right) \chi \left( {y}^{-1}\right) \;\left( {\psi \text{ is zero outside }H}\right) \]\n\n\[ = \langle \psi ,\chi { \downarrow }_{H}\rangle \text{. ∎} \]
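The theorem can also be checked directly in a small case (an illustrative computation, not from the text): take \( G = {\mathcal{S}}_{3} \), \( H = \{ \epsilon ,\left( {1,2}\right) \} \), \( \psi \) the sign character of \( H \), and \( \chi \) the sign character of \( G \); both inner products come out to 1. Permutations are tuples of images on \( \{ 0,1,2\} \):

```python
from itertools import permutations
from fractions import Fraction

def comp(p, q):
    return tuple(p[q[i]] for i in range(len(q)))

def inv(p):
    r = [0] * len(p)
    for i, x in enumerate(p):
        r[x] = i
    return tuple(r)

def sgn(p):
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

G = list(permutations(range(3)))
H = [(0, 1, 2), (1, 0, 2)]           # H = {ε, (1 2)}

psi = {h: sgn(h) for h in H}         # sign character of H
chi = {g: sgn(g) for g in G}         # sign character of G

def psi_up(g):                       # induced character: (1/|H|) Σ_x ψ(x⁻¹gx), ψ zero off H
    return Fraction(sum(psi.get(comp(comp(inv(x), g), x), 0) for x in G), len(H))

lhs = sum(psi_up(g) * chi[inv(g)] for g in G) / Fraction(len(G))
rhs = sum(psi[h] * chi[inv(h)] for h in H) / Fraction(len(H))
assert lhs == rhs == 1
```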
|
Yes
|
Now consider \( \lambda = \left( {1}^{n}\right) \) . Each equivalence class \( \{ t\} \) consists of a single tableau, and this tableau can be identified with a permutation in one-line notation (by taking the transpose, if you wish). Since the action of \( {\mathcal{S}}_{n} \) is preserved,
|
\[ {M}^{\left( {1}^{n}\right) } \cong \mathbb{C}{\mathcal{S}}_{n} \] and the regular representation presents itself.
|
Yes
|
Theorem 2.1.12 Consider \( \lambda \vdash n \) with Young subgroup \( {\mathcal{S}}_{\lambda } \) and tabloid \( \{ {t}^{\lambda }\} \) , as before. Then \( {V}^{\lambda } = \mathbb{C}{\mathcal{S}}_{n}{\mathcal{S}}_{\lambda } \) and \( {M}^{\lambda } = \mathbb{C}{\mathcal{S}}_{n}\left\{ {t}^{\lambda }\right\} \) are isomorphic as \( {\mathcal{S}}_{n} \) - modules.
|
Proof. Let \( {\pi }_{1},{\pi }_{2},\ldots ,{\pi }_{k} \) be a transversal for \( {\mathcal{S}}_{\lambda } \) . Define a map\n\n\[ \theta : {V}^{\lambda } \rightarrow {M}^{\lambda } \]\n\nby \( \theta \left( {{\pi }_{i}{\mathcal{S}}_{\lambda }}\right) = \left\{ {{\pi }_{i}{t}^{\lambda }}\right\} \) for \( i = 1,\ldots, k \) and linear extension. It is not hard to verify that \( \theta \) is the desired \( {\mathcal{S}}_{n} \) -isomorphism of modules. ∎
|
Yes
|
Lemma 2.2.4 (Dominance Lemma for Partitions) Let \( {t}^{\lambda } \) and \( {s}^{\mu } \) be tableaux of shape \( \lambda \) and \( \mu \), respectively. If, for each index \( i \), the elements of row \( i \) of \( {s}^{\mu } \) are all in different columns in \( {t}^{\lambda } \), then \( \lambda \trianglerighteq \mu \) .
|
Proof. By hypothesis, we can sort the entries in each column of \( {t}^{\lambda } \) so that the elements of rows \( 1,2,\ldots, i \) of \( {s}^{\mu } \) all occur in the first \( i \) rows of \( {t}^{\lambda } \) . Thus\n\n\( {\lambda }_{1} + {\lambda }_{2} + \cdots + {\lambda }_{i} = \) number of elements in the first \( i \) rows of \( {t}^{\lambda } \)\n\n\( \geq \) number of elements of \( {s}^{\mu } \) in the first \( i \) rows of \( {t}^{\lambda } \)\n\n\[ = {\mu }_{1} + {\mu }_{2} + \cdots + {\mu }_{i}\text{. ∎} \]
|
Yes
|
Proposition 2.2.6 If \( \lambda ,\mu \vdash n \) with \( \lambda \trianglerighteq \mu \), then \( \lambda \geq \mu \) .
|
Proof. If \( \lambda \neq \mu \), then find the first index \( i \) where they differ. Thus \( \mathop{\sum }\limits_{{j = 1}}^{{i - 1}}{\lambda }_{j} = \) \( \mathop{\sum }\limits_{{j = 1}}^{{i - 1}}{\mu }_{j} \) and \( \mathop{\sum }\limits_{{j = 1}}^{i}{\lambda }_{j} > \mathop{\sum }\limits_{{j = 1}}^{i}{\mu }_{j} \) (since \( \lambda \vartriangleright \mu \) ). So \( {\lambda }_{i} > {\mu }_{i} \) . ∎
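A quick computational sketch (illustrative code, not from the text) confirms the proposition for all partitions of a small \( n \), and shows that the converse fails; note that Python's tuple comparison is exactly lexicographic order:

```python
def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        return [()]
    return [(k,) + rest
            for k in range(min(n, maxpart), 0, -1)
            for rest in partitions(n - k, k)]

def dominates(lam, mu):
    """lam ⊵ mu: every partial sum of lam is at least that of mu."""
    pl = pm = 0
    for i in range(max(len(lam), len(mu))):
        pl += lam[i] if i < len(lam) else 0
        pm += mu[i] if i < len(mu) else 0
        if pl < pm:
            return False
    return True

ps = partitions(6)
# dominance implies lexicographic order ...
assert all(lam >= mu for lam in ps for mu in ps if dominates(lam, mu))
# ... but not conversely: (4,1,1) > (3,3) lexicographically,
# yet (4,1,1) does not dominate (3,3) since 4+1 < 3+3
assert (4, 1, 1) > (3, 3) and not dominates((4, 1, 1), (3, 3))
```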
|
Yes
|
Lemma 2.3.3 Let \( t \) be a tableau and \( \pi \) be a permutation. Then\n\n1. \( {R}_{\pi t} = \pi {R}_{t}{\pi }^{-1} \) ,\n\n2. \( {C}_{\pi t} = \pi {C}_{t}{\pi }^{-1} \) ,\n\n3. \( {\kappa }_{\pi t} = \pi {\kappa }_{t}{\pi }^{-1} \) ,\n\n4. \( {\mathbf{e}}_{\pi t} = \pi {\mathbf{e}}_{t} \) .
|
Proof. 1. We have the following list of equivalent statements:\n\n\[ \sigma \in {R}_{\pi t}\; \leftrightarrow \;\sigma \{ {\pi t}\} = \{ {\pi t}\} \]\n\n\[ \leftrightarrow \;{\pi }^{-1}{\sigma \pi }\{ t\} = \{ t\} \]\n\n\[ \leftrightarrow \;{\pi }^{-1}{\sigma \pi } \in {R}_{t} \]\n\n\[ \leftrightarrow \sigma \in \pi {R}_{t}{\pi }^{-1} \]\n\nThe proofs of parts 2 and 3 are similar to that of part 1 .\n\n4. We have\n\n\[ {e}_{\pi t} = {\kappa }_{\pi t}\{ {\pi t}\} = \pi {\kappa }_{t}{\pi }^{-1}\{ {\pi t}\} = \pi {\kappa }_{t}\{ t\} = \pi {e}_{t}.\blacksquare \]
|
Yes
|
Proposition 2.3.5 The \( {S}^{\lambda } \) are cyclic modules generated by any given poly-tabloid.
|
∎
|
No
|
Example 2.3.6 Suppose \( \lambda = \left( n\right) \) . Then \( {e}_{{12}\cdots n} = \underline{\overline{\mathbf{{12}}\cdots \mathbf{n}}} \) is the only polytabloid and \( {S}^{\left( n\right) } \) carries the trivial representation. This is, of course, the only possibility, since \( {S}^{\left( n\right) } \) is a submodule of \( {M}^{\left( n\right) } \) where \( {\mathcal{S}}_{n} \) acts trivially (Example 2.1.6).
|
∎
|
No
|
Lemma 2.4.1 (Sign Lemma) Let \( H \leq {\mathcal{S}}_{n} \) be a subgroup.\n\n1. If \( \pi \in H \), then\n\n\[ \pi {H}^{ - } = {H}^{ - }\pi = \left( {\operatorname{sgn}\pi }\right) {H}^{ - }.\]\n\nOtherwise put: \( {\pi }^{ - }{H}^{ - } = {H}^{ - } \).\n\n2. For any \( \mathbf{u},\mathbf{v} \in {M}^{\lambda } \), \n\n\[ \left\langle {{H}^{ - }\mathbf{u},\mathbf{v}}\right\rangle = \left\langle {\mathbf{u},{H}^{ - }\mathbf{v}}\right\rangle \]\n\n3. If the transposition \( \left( {b, c}\right) \in H \), then we can factor\n\n\[ {H}^{ - } = k\left( {\epsilon - \left( {b, c}\right) }\right) \]\n\nwhere \( k \in \mathbb{C}\left\lbrack {\mathcal{S}}_{n}\right\rbrack \).\n\n4. If \( t \) is a tableau with \( b, c \) in the same row of \( t \) and \( \left( {b, c}\right) \in H \), then\n\n\[ {H}^{ - }\{ t\} = 0. \]
|
Proof. 1. This is just like the proof that \( \pi {\mathbf{e}}_{t} = \left( {\operatorname{sgn}\pi }\right) {\mathbf{e}}_{t} \) in Example 2.3.7.\n\n2. Using the fact that our form is \( {\mathcal{S}}_{n} \) -invariant,\n\n\[ \left\langle {{H}^{ - }\mathbf{u},\mathbf{v}}\right\rangle = \mathop{\sum }\limits_{{\pi \in H}}\langle \left( {\operatorname{sgn}\pi }\right) \pi \mathbf{u},\mathbf{v}\rangle = \mathop{\sum }\limits_{{\pi \in H}}\left\langle {\mathbf{u},\left( {\operatorname{sgn}\pi }\right) {\pi }^{-1}\mathbf{v}}\right\rangle .\n\]\nReplacing \( \pi \) by \( {\pi }^{-1} \) and noting that this does not affect the sign, we see that this last sum equals \( \left\langle {\mathbf{u},{H}^{ - }\mathbf{v}}\right\rangle \).\n\n3. Consider the subgroup \( K = \{ \epsilon ,\left( {b, c}\right) \} \) of \( H \). Then we can find a transversal and write \( H = {\biguplus }_{i}{k}_{i}K \). But then \( {H}^{ - } = \left( {\mathop{\sum }\limits_{i}{k}_{i}^{ - }}\right) \left( {\epsilon - \left( {b, c}\right) }\right) \), as desired.\n\n4. By hypothesis, \( \left( {b, c}\right) \{ t\} = \{ t\} \). Thus\n\n\[ {H}^{ - }\{ t\} = k\left( {\epsilon - \left( {b, c}\right) }\right) \{ t\} = k\left( {\{ t\} -\{ t\} }\right) = 0. \]
|
Yes
|
Corollary 2.4.2 Let \( t = {t}^{\lambda } \) be a \( \lambda \) -tableau and \( s = {s}^{\mu } \) be a \( \mu \) -tableau, where \( \lambda ,\mu \vdash n \) . If \( {\kappa }_{t}\{ s\} \neq \mathbf{0} \), then \( \lambda \trianglerighteq \mu \) . And if \( \lambda = \mu \), then \( {\kappa }_{t}\{ s\} = \pm {\mathbf{e}}_{t} \) .
|
Proof. Suppose \( b \) and \( c \) are two elements in the same row of \( {s}^{\mu } \) . Then they cannot be in the same column of \( {t}^{\lambda } \), for if so, then \( {\kappa }_{t} = k\left( {\epsilon - \left( {b, c}\right) }\right) \) and \( {\kappa }_{t}\{ s\} = 0 \) by parts 3 and 4 in the preceding lemma. Thus the dominance lemma (Lemma 2.2.4) yields \( \lambda \trianglerighteq \mu \) . If \( \lambda = \mu \), then we must have \( \{ s\} = \pi \{ t\} \) for some \( \pi \in {C}_{t} \) by the same argument that established the dominance lemma. Using part 1 yields \[ {\kappa }_{t}\{ s\} = {\kappa }_{t}\pi \{ t\} = \left( {\operatorname{sgn}\pi }\right) {\kappa }_{t}\{ t\} = \pm {\mathbf{e}}_{t}.\blacksquare \]
|
Yes
|
Corollary 2.4.3 If \( \mathbf{u} \in {M}^{\mu } \) and \( \operatorname{sh}t = \mu \), then \( {\kappa }_{t}\mathbf{u} \) is a multiple of \( {\mathbf{e}}_{t} \) .
|
Proof. We can write \( \mathbf{u} = \mathop{\sum }\limits_{i}{c}_{i}\left\{ {s}_{i}\right\} \), where the \( {s}_{i} \) are \( \mu \) -tableaux. By the previous corollary, \( {\kappa }_{t}\mathbf{u} = \mathop{\sum }\limits_{i} \pm {c}_{i}{\mathbf{e}}_{t} \) . ∎
|
Yes
|
Theorem 2.4.4 (Submodule Theorem [Jam 76]) Let \( U \) be a submodule of \( {M}^{\mu } \). Then\n\n\[ U \supseteq {S}^{\mu }\;\text{ or }\;U \subseteq {S}^{\mu \bot }.\]\n\nIn particular, when the field is \( \mathbb{C} \), the \( {S}^{\mu } \) are irreducible.
|
Proof. Consider \( \mathbf{u} \in U \) and a \( \mu \) -tableau \( t \). By the preceding corollary, we know that \( {\kappa }_{t}\mathbf{u} = f{\mathbf{e}}_{t} \) for some field element \( f \). There are two cases, depending on which multiples can arise.\n\nSuppose that there exists a \( \mathbf{u} \) and a \( t \) with \( f \neq 0 \). Then since \( \mathbf{u} \) is in the submodule \( U \), we have \( f{\mathbf{e}}_{t} = {\kappa }_{t}\mathbf{u} \in U \). Thus \( {\mathbf{e}}_{t} \in U \) (since \( f \) is nonzero) and \( {S}^{\mu } \subseteq U \) (since \( {S}^{\mu } \) is cyclic).\n\nOn the other hand, suppose we always have \( {\kappa }_{t}\mathbf{u} = \mathbf{0} \). We claim that this forces \( U \subseteq {S}^{\mu \bot } \). Consider any \( \mathbf{u} \in U \). Given an arbitrary \( \mu \) -tableau \( t \), we can apply part 2 of the sign lemma to obtain\n\n\[ \left\langle {\mathbf{u},{\mathbf{e}}_{t}}\right\rangle = \left\langle {\mathbf{u},{\kappa }_{t}\{ t\} }\right\rangle = \left\langle {{\kappa }_{t}\mathbf{u},\{ t\} }\right\rangle = \langle \mathbf{0},\{ t\} \rangle = 0. \]\n\nSince the \( {\mathbf{e}}_{t} \) span \( {S}^{\mu } \), we have \( \mathbf{u} \in {S}^{\mu \bot } \), as claimed. ∎
|
Yes
|
Proposition 2.4.5 Suppose the field of scalars is \( \mathbb{C} \) and \( \theta \in \operatorname{Hom}\left( {{S}^{\lambda },{M}^{\mu }}\right) \) is nonzero. Then \( \lambda \trianglerighteq \mu \), and if \( \lambda = \mu \), then \( \theta \) is multiplication by a scalar.
|
Proof. Since \( \theta \neq 0 \), there is some basis vector \( {\mathbf{e}}_{t} \) such that \( \theta \left( {\mathbf{e}}_{t}\right) \neq \mathbf{0} \) . Because \( \langle \cdot , \cdot \rangle \) is an inner product with complex scalars, \( {M}^{\lambda } = {S}^{\lambda } \oplus {S}^{\lambda \bot } \) . Thus we can extend \( \theta \) to an element of \( \operatorname{Hom}\left( {{M}^{\lambda },{M}^{\mu }}\right) \) by setting \( \theta \left( {S}^{\lambda \bot }\right) = \mathbf{0} \) . So\n\n\[ \mathbf{0} \neq \theta \left( {\mathbf{e}}_{t}\right) = \theta \left( {{\kappa }_{t}\{ \mathbf{t}\} }\right) = {\kappa }_{t}\theta \left( {\{ \mathbf{t}\} }\right) = {\kappa }_{t}\left( {\mathop{\sum }\limits_{i}{c}_{i}\left\{ {s}_{i}\right\} }\right) \]\n\nwhere the \( {s}_{i} \) are \( \mu \) -tableaux. By Corollary 2.4.2 we have \( \lambda \trianglerighteq \mu \) .\n\nIn the case \( \lambda = \mu \), Corollary 2.4.3 yields \( \theta \left( {\mathbf{e}}_{t}\right) = c{\mathbf{e}}_{t} \) for some constant \( c \) . So for any permutation \( \pi \),\n\n\[ \theta \left( {\mathbf{e}}_{\pi t}\right) = \theta \left( {\pi {\mathbf{e}}_{t}}\right) = {\pi \theta }\left( {\mathbf{e}}_{t}\right) = \pi \left( {c{\mathbf{e}}_{t}}\right) = c{\mathbf{e}}_{\pi t}. \]\n\nThus \( \theta \) is multiplication by \( c \) . ∎
|
Yes
|
Theorem 2.4.6 The \( {S}^{\lambda } \) for \( \lambda \vdash n \) form a complete list of irreducible \( {\mathcal{S}}_{n} \) - modules over the complex field.
|
Proof. The \( {S}^{\lambda } \) are irreducible by the submodule theorem and the fact that \( {S}^{\lambda } \cap {S}^{\lambda \bot } = \mathbf{0} \) for the field \( \mathbb{C} \) .\n\nSince we have the right number of modules for a full set, it suffices to show that they are pairwise inequivalent. But if \( {S}^{\lambda } \cong {S}^{\mu } \), then there is a nonzero homomorphism \( \theta \in \operatorname{Hom}\left( {{S}^{\lambda },{M}^{\mu }}\right) \), since \( {S}^{\mu } \subseteq {M}^{\mu } \) . Thus \( \lambda \trianglerighteq \mu \) (Proposition 2.4.5). Similarly, \( \mu \trianglerighteq \lambda \), so \( \lambda = \mu \) . ∎
|
Yes
|
Corollary 2.4.7 The permutation modules decompose as\n\n\[ \n{M}^{\mu } = {\bigoplus }_{\lambda \trianglerighteq \mu }{m}_{\lambda \mu }{S}^{\lambda } \]\n\nwith the diagonal multiplicity \( {m}_{\mu \mu } = 1 \) .
|
Proof. This result follows from Proposition 2.4.5. If \( {S}^{\lambda } \) appears in \( {M}^{\mu } \) with nonzero coefficient, then \( \lambda \trianglerighteq \mu \) . If \( \lambda = \mu \), then we can also apply Proposition 1.7.10 to obtain\n\n\[ \n{m}_{\mu \mu } = \dim \operatorname{Hom}\left( {{S}^{\mu },{M}^{\mu }}\right) = 1.\blacksquare \]\n\nThe coefficients \( {m}_{\lambda \mu } \) have a nice combinatorial interpretation, as we will see in Section 2.11.
|
Yes
|
Lemma 2.5.5 (Dominance Lemma for Tabloids) If \( k < l \) and \( k \) appears in a lower row than \( l \) in \( \{ t\} \), then\n\n\[ \{ t\} \vartriangleleft \left( {k, l}\right) \{ t\} \]
|
Proof. Suppose that \( \{ t\} \) and \( \left( {k, l}\right) \{ t\} \) have composition sequences \( {\lambda }^{i} \) and \( {\mu }^{i} \) , respectively. Then for \( i < k \) or \( i \geq l \) we have \( {\lambda }^{i} = {\mu }^{i} \).\n\nNow consider the case where \( k \leq i < l \) . If \( r \) and \( q \) are the rows of \( \{ t\} \) in which \( k \) and \( l \) appear, respectively, then\n\n\( {\lambda }^{i} = {\mu }^{i} \) with the \( q \) th part decreased by 1\n\nand the \( r \) th part increased by 1 .\n\nSince \( q < r \) by assumption, \( {\lambda }^{i} \vartriangleleft {\mu }^{i} \) . ∎
|
Yes
|
Corollary 2.5.6 If \( t \) is standard and \( \{ s\} \) appears in \( {\mathbf{e}}_{t} \), then \( \{ t\} \trianglerighteq \{ s\} \) .
|
Proof. Let \( s = {\pi t} \), where \( \pi \in {C}_{t} \), so that \( \{ s\} \) appears in \( {\mathbf{e}}_{t} \) . We induct on the number of column inversions in \( s \), i.e., the number of pairs \( k < l \) in the same column of \( s \) such that \( k \) is in a lower row than \( l \) . Given any such pair,\n\n\[ \n\{ s\} \vartriangleleft \left( {k, l}\right) \{ s\} \n\]\n\nby Lemma 2.5.5. But \( \left( {k, l}\right) \{ s\} \) has fewer inversions than \( \{ s\} \), so, by induction, \( \left( {k, l}\right) \{ s\} \trianglelefteq \{ t\} \) and the result follows. ∎
|
Yes
|
Lemma 2.5.8 Let \( {\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{m} \) be elements of \( {M}^{\mu } \) . Suppose, for each \( {\mathbf{v}}_{i} \) , we can choose a tabloid \( \left\{ {\mathbf{t}}_{i}\right\} \) appearing in \( {\mathbf{v}}_{i} \) such that\n\n1. \( \left\{ {t}_{i}\right\} \) is maximum in \( {\mathbf{v}}_{i} \), and\n\n2. the \( \left\{ {t}_{i}\right\} \) are all distinct.\n\nThen \( {\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{m} \) are independent.
|
Proof. Choose the labels such that \( \left\{ {t}_{1}\right\} \) is maximal among the \( \left\{ {t}_{i}\right\} \) . Thus conditions 1 and 2 ensure that \( \left\{ {t}_{1}\right\} \) appears only in \( {\mathbf{v}}_{1} \) . (If \( \left\{ {t}_{1}\right\} \) occurs in \( {\mathbf{v}}_{i}, i > 1 \), then \( \left\{ {t}_{1}\right\} \vartriangleleft \left\{ {t}_{i}\right\} \), contradicting the choice of \( \left\{ {t}_{1}\right\} \) .) It follows that in any linear combination\n\n\[ \n{c}_{1}{\mathbf{v}}_{1} + {c}_{2}{\mathbf{v}}_{2} + \cdots + {c}_{m}{\mathbf{v}}_{m} = \mathbf{0} \n\]\n\nwe must have \( {c}_{1} = 0 \) because there is no other way to cancel \( \left\{ {t}_{1}\right\} \) . By induction on \( m \), the rest of the coefficients must also be zero. ∎
|
Yes
|
Proposition 2.5.9 The set\n\n\[ \n\left\{ {{\mathbf{e}}_{t} : t\text{ is a standard }\lambda \text{-tableau }}\right\} \n\]\n\nis independent.
|
Proof. By Corollary 2.5.6, \( \{ t\} \) is maximum in \( {\mathbf{e}}_{t} \), and by hypothesis they are all distinct. Thus Lemma 2.5.8 applies.
|
No
|
Proposition 2.6.3 Let \( t, A \), and \( B \), be as in the definition of a Garnir element. If \( \left| {A \cup B}\right| \) is greater than the number of elements in column \( j \) of \( t \) , then \( {g}_{A, B}{e}_{t} = \mathbf{0} \) .
|
Proof. First, we claim that\n\n\[ \n{\mathcal{S}}_{A \cup B}^{ - }{e}_{t} = 0 \n\]\n\n(2.4)\n\nConsider any \( \sigma \in {C}_{t} \) . By the hypothesis, there must be \( a, b \in A \cup B \) such that \( a \) and \( b \) are in the same row of \( {\sigma t} \) . But then \( \left( {a, b}\right) \in {\mathcal{S}}_{A \cup B} \) and \( {\mathcal{S}}_{A \cup B}^{ - }\{ {\sigma t}\} = \mathbf{0} \) by part 4 of the sign lemma (Lemma 2.4.1). Since this is true of every \( \sigma \) appearing in \( {\kappa }_{t} \), the claim follows.\n\nNow \( {\mathcal{S}}_{A \cup B} = {\biguplus }_{\pi }\pi \left( {{\mathcal{S}}_{A} \times {\mathcal{S}}_{B}}\right) \), so \( {\mathcal{S}}_{A \cup B}^{ - } = {g}_{A, B}{\left( {\mathcal{S}}_{A} \times {\mathcal{S}}_{B}\right) }^{ - } \) . Substituting this into equation (2.4) yields\n\n\[ \n{g}_{A, B}{\left( {\mathcal{S}}_{A} \times {\mathcal{S}}_{B}\right) }^{ - }{\mathbf{e}}_{t} = \mathbf{0} \n\]\n\n(2.5)\n\nand we need worry only about the contribution of \( {\left( {\mathcal{S}}_{A} \times {\mathcal{S}}_{B}\right) }^{ - } \) . But we have \( {\mathcal{S}}_{A} \times {\mathcal{S}}_{B} \subseteq {C}_{t} \) . So if \( \sigma \in {\mathcal{S}}_{A} \times {\mathcal{S}}_{B} \), then, by part 1 of the sign lemma,\n\n\[ \n{\sigma }^{ - }{\mathbf{e}}_{t} = {\sigma }^{ - }{C}_{t}^{ - }\{ \mathbf{t}\} = {C}_{t}^{ - }\{ \mathbf{t}\} = {\mathbf{e}}_{t}. \n\]\n\nThus \( {\left( {\mathcal{S}}_{A} \times {\mathcal{S}}_{B}\right) }^{ - }{\mathbf{e}}_{t} = \left| {{\mathcal{S}}_{A} \times {\mathcal{S}}_{B}}\right| {\mathbf{e}}_{t} \), and dividing equation (2.5) by this cardinality yields the proposition.
|
Yes
|
Theorem 2.6.4 The set\n\n\[ \n\left\{ {{\mathbf{e}}_{t} : t\text{ is a standard }\lambda \text{-tableau }}\right\} \n\]\n\nspans \( {S}^{\lambda } \) .
|
First note that if \( {\mathbf{e}}_{t} \) is in the span of the set (2.6), then so is \( {\mathbf{e}}_{s} \) for any \( s \in \left\lbrack t\right\rbrack \), by the remarks at the beginning of this section. Thus we may always take \( t \) to have increasing columns.\n\nThe poset of column tabloids has a maximum element \( \left\lbrack {t}_{0}\right\rbrack \), where \( {t}_{0} \) is obtained by numbering the cells of each column consecutively from top to bottom, starting with the leftmost column and working right. Since \( {t}_{0} \) is standard, we are done for this equivalence class.\n\nNow pick any tableau \( t \) . By induction, we may assume that every tableau \( s \) with \( \left\lbrack s\right\rbrack \vartriangleright \left\lbrack t\right\rbrack \) is in the span of (2.6). If \( t \) is standard, then we are done. If not, then there must be a descent in some row \( i \) (since columns increase). Let the columns involved be the \( j \) th and \( \left( {j + 1}\right) \) st with entries \( {a}_{1} < {a}_{2} < \cdots < {a}_{p} \) and \( {b}_{1} < {b}_{2} < \cdots < {b}_{q} \), respectively. Thus we have the following situation in \( t \) :\n\n\[ \begin{array}{ccc} {a}_{1} & & {b}_{1} \\ \vdots & & \vdots \\ {a}_{i} & > & {b}_{i} \\ \vdots & & \vdots \\ {a}_{p} & & {b}_{q} \end{array} \]\n\nTake \( A = \left\{ {{a}_{i},\ldots ,{a}_{p}}\right\} \) and \( B = \left\{ {{b}_{1},\ldots ,{b}_{i}}\right\} \) with associated Garnir element \( {g}_{A, B} = \mathop{\sum }\limits_{\pi }\left( {\operatorname{sgn}\pi }\right) \pi \) . 
By Proposition 2.6.3 we have \( {g}_{A, B}{\mathbf{e}}_{t} = \mathbf{0} \), so that\n\n\[ \n{\mathbf{e}}_{t} = - \mathop{\sum }\limits_{{\pi \neq \epsilon }}\left( {\operatorname{sgn}\pi }\right) {\mathbf{e}}_{\pi t} \n\]\n\n(2.7)\n\nBut \( {b}_{1} < \cdots < {b}_{i} < {a}_{i} < \cdots < {a}_{p} \) implies that \( \left\lbrack {\pi t}\right\rbrack \vartriangleright \left\lbrack t\right\rbrack \) for \( \pi \neq \epsilon \) by the column analogue of the dominance lemma for tabloids (Lemma 2.5.5). Thus all terms on the right side of (2.7), and hence \( {\mathbf{e}}_{t} \) itself, are in the span of the standard polytabloids. ∎
|
Yes
|
Theorem 2.6.5 For any partition \( \lambda \) :\n\n1. \( \left\{ {{\mathbf{e}}_{t} : t}\right. \) is a standard \( \lambda \) -tableau \( \} \) is a basis for \( {S}^{\lambda } \),\n\n2. \( \dim {S}^{\lambda } = {f}^{\lambda } \), and\n\n3. \( \mathop{\sum }\limits_{{\lambda \vdash n}}{\left( {f}^{\lambda }\right) }^{2} = n! \).
|
Proof. The first two parts are immediate. The third follows from the fact (Proposition 1.10.1) that for any group \( G \) ,\n\n\[ \mathop{\sum }\limits_{V}{\left( \dim V\right) }^{2} = \left| G\right| \]\n\nwhere the sum is over all irreducible \( G \) -modules. ∎
|
No
|
Lemma 2.8.2 We have\n\n\[ \n{f}^{\lambda } = \mathop{\sum }\limits_{{\lambda }^{ - }}{f}^{{\lambda }^{ - }} \n\]
|
Proof. Every standard tableau of shape \( \lambda \vdash n \) consists of \( n \) in some inner corner together with a standard tableau of shape \( {\lambda }^{ - } \vdash n - 1 \) . The result follows.
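This branching recursion gives a direct way to compute \( {f}^{\lambda } \); the sketch below (illustrative code, not from the text) implements it and re-verifies \( \mathop{\sum }\nolimits_{{\lambda \vdash n}}{\left( {f}^{\lambda }\right) }^{2} = n! \) for small \( n \):

```python
from math import factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def f(lam):
    """Number of standard Young tableaux of shape lam (a weakly
    decreasing tuple), via the branching rule f^λ = Σ_{λ⁻} f^{λ⁻}."""
    if sum(lam) <= 1:
        return 1
    total = 0
    for i in range(len(lam)):
        # row i ends in an inner corner iff it is strictly longer
        # than the next row (or is the last row)
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            smaller = (lam[:i]
                       + ((lam[i] - 1,) if lam[i] > 1 else ())
                       + lam[i + 1:])
            total += f(smaller)
    return total

def partitions(n, maxpart=None):
    """All partitions of n as weakly decreasing tuples."""
    if maxpart is None:
        maxpart = n
    if n == 0:
        return [()]
    return [(k,) + rest
            for k in range(min(n, maxpart), 0, -1)
            for rest in partitions(n - k, k)]

assert f((2, 2, 1)) == 5
assert all(sum(f(lam) ** 2 for lam in partitions(n)) == factorial(n)
           for n in range(1, 7))
```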
|
Yes
|
Proposition 2.9.4 If \( t \) is the fixed \( \lambda \) -tableau and \( T \in {\mathcal{T}}_{\lambda \mu } \), then \( {\kappa }_{t}\mathbf{T} = \mathbf{0} \) if and only if \( T \) has two equal elements in some column.
|
Proof. If \( {\kappa }_{t}\mathbf{T} = \mathbf{0} \), then\n\n\[\n\mathbf{T} + \mathop{\sum }\limits_{\substack{{\pi \in {C}_{t}} \\ {\pi \neq \epsilon } }}\left( {\operatorname{sgn}\pi }\right) \pi \mathbf{T} = \mathbf{0}\n\]\n\nSo we must have \( \mathbf{T} = \pi \mathbf{T} \) for some \( \pi \in {C}_{t} \) with \( \operatorname{sgn}\pi = - 1 \) . But then the elements corresponding to any nontrivial cycle of \( \pi \) are all equal and in the same column.\n\nNow suppose that \( T\left( i\right) = T\left( j\right) \) are in the same column of \( T \) . Then\n\n\[\n\left( {\epsilon - \left( {i, j}\right) }\right) \mathbf{T} = \mathbf{0}.\n\]\n\nBut \( \epsilon - \left( {i, j}\right) \) is a factor of \( {\kappa }_{t} \) by part 3 of the sign lemma (Lemma 2.4.1), so \( {\kappa }_{t}\mathbf{T} = \mathbf{0} \) . ∎
|
Yes
|
Lemma 2.10.4 Let \( V \) and \( \mathcal{B} \) be as before and consider a set of vectors \( {\mathbf{v}}_{1},{\mathbf{v}}_{2},\ldots ,{\mathbf{v}}_{m} \in V \) . Suppose that, for all \( i \), there exists \( {\mathbf{b}}_{i} \in \mathcal{B} \) appearing in \( {\mathbf{v}}_{i} \) such that\n\n1. \( \left\lbrack {\mathbf{b}}_{i}\right\rbrack \trianglerighteq \left\lbrack \mathbf{b}\right\rbrack \) for every \( \mathbf{b} \neq {\mathbf{b}}_{i} \) appearing in \( {\mathbf{v}}_{i} \), and\n\n2. the \( \left\lbrack {\mathbf{b}}_{i}\right\rbrack \) are all distinct.\n\nThen the \( {\mathbf{v}}_{i} \) are linearly independent.
|
∎
|
No
|
Lemma 2.10.5 Let \( V \) and \( W \) be vector spaces and let \( {\theta }_{1},{\theta }_{2},\ldots ,{\theta }_{m} \) be linear maps from \( V \) to \( W \) . If there exists \( a\mathbf{v} \in V \) such that \( {\theta }_{1}\left( \mathbf{v}\right) ,{\theta }_{2}\left( \mathbf{v}\right) ,\ldots ,{\theta }_{m}\left( \mathbf{v}\right) \) are independent in \( W \), then the \( {\theta }_{i} \) are independent as linear transformations.
|
Proof. Suppose not. Then there are constants \( {c}_{i} \), not all zero, such that \( \mathop{\sum }\limits_{i}{c}_{i}{\theta }_{i} \) is the zero map. But then \( \mathop{\sum }\limits_{i}{c}_{i}{\theta }_{i}\left( \mathbf{v}\right) = \mathbf{0} \) for all \( \mathbf{v} \in V \), a contradiction to the hypothesis of the lemma. ∎
|
Yes
|
Proposition 2.10.6 The set\n\n\[ \n\left\{ {{\bar{\theta }}_{T} : T \in {\mathcal{T}}_{\lambda \mu }^{0}}\right\} \n\]\n\nis independent.
|
Proof. Let \( {T}_{1},{T}_{2},\ldots ,{T}_{m} \) be the elements of \( {\mathcal{T}}_{\lambda \mu }^{0} \) . By the previous lemma, it suffices to show that \( {\bar{\theta }}_{{T}_{1}}{\mathbf{e}}_{t},{\bar{\theta }}_{{T}_{2}}{\mathbf{e}}_{t},\ldots ,{\bar{\theta }}_{{T}_{m}}{\mathbf{e}}_{t} \) are independent, where \( t \) is our fixed tableau. For all \( i \) we have\n\n\[ \n{\bar{\theta }}_{{T}_{i}}{\mathbf{e}}_{t} = {\theta }_{{T}_{i}}{\kappa }_{t}\{ t\} = {\kappa }_{t}{\theta }_{{T}_{i}}\{ t\} \n\]\n\nNow \( {T}_{i} \) is semistandard, so \( \left\lbrack {T}_{i}\right\rbrack \vartriangleright \left\lbrack S\right\rbrack \) for any other summand \( S \) in \( {\theta }_{{T}_{i}}\{ t\} \) (Corollary 2.10.3). The same is true for summands of \( {\kappa }_{t}{\theta }_{{T}_{i}}\{ t\} \), since the permutations in \( {\kappa }_{t} \) do not change the column equivalence class. Also, the \( \left\lbrack {T}_{i}\right\rbrack \) are all distinct, since no equivalence class has more than one semistandard tableau. Hence the \( {\kappa }_{t}{\theta }_{{T}_{i}}\{ t\} = {\bar{\theta }}_{{T}_{i}}{\mathbf{e}}_{t} \) satisfy the hypotheses of Lemma 2.10.4, making them independent. ∎
|
Lemma 2.10.7 Consider \( \theta \in \operatorname{Hom}(S^\lambda, M^\mu) \). Write
\[
\theta\mathbf{e}_t = \sum_T c_T \mathbf{T},
\]
where \( t \) is the fixed tableau of shape \( \lambda \).

1. If \( \pi \in C_t \) and \( T_1 = \pi T_2 \), then \( c_{T_1} = (\operatorname{sgn}\pi)\, c_{T_2} \).

2. Every \( T_1 \) with a repetition in some column has \( c_{T_1} = 0 \).

3. If \( \theta \) is not the zero map, then there exists a semistandard \( T_2 \) having \( c_{T_2} \neq 0 \).
|
Proof. 1. Since \( \pi \in C_t \), we have
\[
\pi(\theta\mathbf{e}_t) = \theta(\pi\kappa_t\{\mathbf{t}\}) = \theta((\operatorname{sgn}\pi)\kappa_t\{\mathbf{t}\}) = (\operatorname{sgn}\pi)(\theta\mathbf{e}_t).
\]
Therefore, \( \theta\mathbf{e}_t = \sum_T c_T\mathbf{T} \) implies
\[
\pi\sum_T c_T\mathbf{T} = \pi(\theta\mathbf{e}_t) = (\operatorname{sgn}\pi)(\theta\mathbf{e}_t) = (\operatorname{sgn}\pi)\sum_T c_T\mathbf{T}.
\]
Comparing coefficients of \( \pi T_2 \) on the left and \( T_1 \) on the right yields \( c_{T_2} = (\operatorname{sgn}\pi)\, c_{T_1} \), which is equivalent to part 1.

2. By hypothesis, there exists \( (i, j) \in C_t \) with \( (i, j)T_1 = T_1 \). But then \( c_{T_1} = -c_{T_1} \) by what we just proved, forcing this coefficient to be zero.

3. Since \( \theta \neq 0 \), we can pick \( T_2 \) with \( c_{T_2} \neq 0 \) such that \( [T_2] \) is maximal. We claim that \( T_2 \) can be taken to be semistandard. By parts 1 and 2, we can choose \( T_2 \) so that its columns strictly increase.

Suppose, toward a contradiction, that we have a descent in row \( i \). Thus \( T_2 \) has a pair of columns that look like
\[
\begin{array}{ccc}
a_1 & & b_1\\
\wedge & & \wedge\\
\vdots & & \vdots\\
a_i & > & b_i\\
\wedge & & \wedge\\
\vdots & & \vdots\\
a_p & & b_q
\end{array}
\]
Choose \( A \) and \( B \) as usual, and let \( g_{A,B} = \sum_\pi (\operatorname{sgn}\pi)\pi \) be the associated Garnir element. We have
\[
g_{A,B}\Bigl(\sum_T c_T\mathbf{T}\Bigr) = g_{A,B}(\theta\mathbf{e}_t) = \theta(g_{A,B}\mathbf{e}_t) = \theta(\mathbf{0}) = \mathbf{0}.
\]
Now \( T_2 \) appears in \( g_{A,B}T_2 \) with coefficient 1 (since the permutations in \( g_{A,B} \) move distinct elements of \( T_2 \)). So to cancel \( T_2 \) in the previous equation, there must be a \( T \neq T_2 \) with \( \pi T = T_2 \) for some \( \pi \) in \( g_{A,B} \). Thus \( T \) is just \( T_2 \) with some of the \( a_j \)'s and \( b_k \)'s exchanged. But then \( [T] \vartriangleright [T_2] \) by the dominance lemma for generalized tableaux (Lemma 2.10.2). This contradicts our choice of \( T_2 \). ∎
|
Proposition 2.10.8 The set
\[
\left\{ \bar{\theta}_T : T \in \mathcal{T}_{\lambda\mu}^0 \right\}
\]
spans \( \operatorname{Hom}(S^\lambda, M^\mu) \).
|
Proof. Pick any \( \theta \in \operatorname{Hom}(S^\lambda, M^\mu) \) and write
\[
\theta\mathbf{e}_t = \sum_T c_T\mathbf{T}. \tag{2.13}
\]
Consider
\[
L_\theta = \left\{ S \in \mathcal{T}_{\lambda\mu}^0 : [S] \trianglelefteq [T] \text{ for some } T \text{ appearing in } \theta\mathbf{e}_t \right\}.
\]
In poset terminology, \( L_\theta \) corresponds to the lower order ideal generated by the \( T \) in \( \theta\mathbf{e}_t \). We prove this proposition by induction on \( |L_\theta| \).

If \( |L_\theta| = 0 \), then \( \theta \) is the zero map by part 3 of the previous lemma. Such a \( \theta \) is surely generated by our set!

If \( |L_\theta| > 0 \), then in equation (2.13) we can find a semistandard \( T_2 \) with \( c_{T_2} \neq 0 \). Furthermore, it follows from the proof of part 3 in Lemma 2.10.7 that we can choose \( [T_2] \) maximal among those tableaux that appear in the sum. Now consider
\[
\theta_2 = \theta - c_{T_2}\bar{\theta}_{T_2}.
\]
We claim that \( L_{\theta_2} \) is a subset of \( L_\theta \) with \( T_2 \) removed. First of all, every \( S \) appearing in \( \bar{\theta}_{T_2}\mathbf{e}_t \) satisfies \( [S] \trianglelefteq [T_2] \) (see the comment after Corollary 2.10.3), so \( L_{\theta_2} \subseteq L_\theta \). Furthermore, by part 1 of Lemma 2.10.7, every \( S \) with \( [S] = [T_2] \) appears with the same coefficient in \( \theta\mathbf{e}_t \) and \( c_{T_2}\bar{\theta}_{T_2}\mathbf{e}_t \). Thus \( T_2 \notin L_{\theta_2} \), since \( [T_2] \) is maximal. By induction, \( \theta_2 \) is in the span of the \( \bar{\theta}_T \), and thus \( \theta \) is as well.

This completes the proof of the proposition and of Theorem 2.10.1. ∎
|
Theorem 2.11.2 (Young’s Rule) The multiplicity of \( {S}^{\lambda } \) in \( {M}^{\mu } \) is equal to the number of semistandard tableaux of shape \( \lambda \) and content \( \mu \), i.e.,
|
\[ {M}^{\mu } \cong {\bigoplus }_{\lambda }{K}_{\lambda \mu }{S}^{\lambda } \]
|
Example 2.11.3 Suppose \( \mu = \left( {2,2,1}\right) \) . Then the possible \( \lambda \trianglerighteq \mu \) and the associated \( \lambda \) -tableaux of type \( \mu \) are as follows:
|
\[ \lambda^1 = (2,2,1) \quad \lambda^2 = (3,1,1) \quad \lambda^3 = (3,2) \quad \lambda^4 = (4,1) \quad \lambda^5 = (5) \]
Thus
\[ M^{(2,2,1)} \cong S^{(2,2,1)} \oplus S^{(3,1,1)} \oplus 2S^{(3,2)} \oplus 2S^{(4,1)} \oplus S^{(5)}. \]
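Young's Rule can be checked by brute force: enumerate the fillings of shape \( \lambda \) by the multiset of entries prescribed by \( \mu \) and keep those that are semistandard. The sketch below (the helper names `semistandard_tableaux` and `kostka` are mine, not from the text) reproduces the multiplicities of this example.

```python
def semistandard_tableaux(shape, content):
    """Enumerate fillings of `shape` using entry i exactly content[i-1] times,
    with rows weakly increasing and columns strictly increasing."""
    cells = [(r, c) for r, length in enumerate(shape) for c in range(length)]
    entries = [i + 1 for i, m in enumerate(content) for _ in range(m)]
    found = []

    def fill(pos, tab, remaining):
        if pos == len(cells):
            found.append([row[:] for row in tab])
            return
        r, c = cells[pos]
        for v in sorted(set(remaining)):
            if c > 0 and tab[r][c - 1] > v:    # rows weakly increase
                continue
            if r > 0 and tab[r - 1][c] >= v:   # columns strictly increase
                continue
            tab[r][c] = v
            rest = remaining[:]
            rest.remove(v)
            fill(pos + 1, tab, rest)

    fill(0, [[0] * length for length in shape], entries)
    return found

def kostka(lam, mu):
    return len(semistandard_tableaux(lam, mu))

mu = (2, 2, 1)
multiplicities = {lam: kostka(lam, mu)
                  for lam in [(2, 2, 1), (3, 1, 1), (3, 2), (4, 1), (5,)]}
print(multiplicities)
# {(2, 2, 1): 1, (3, 1, 1): 1, (3, 2): 2, (4, 1): 2, (5,): 1}
```

The output matches the decomposition above: \( S^{(3,2)} \) and \( S^{(4,1)} \) each occur twice, the rest once.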
|
Example 2.11.4 For any \( \mu \), \( K_{\mu\mu} = 1 \).
|
This is because the only \( \mu \) -tableau of content \( \mu \) is the one with all the 1’s in row 1, all the 2’s in row 2, etc. Of course, we saw this result in Corollary 2.4.7.
|
Example 2.11.6 For any \( \lambda \), \( K_{\lambda(1^n)} = f^\lambda \) (the number of standard tableaux of shape \( \lambda \)). This says that
|
\[
M^{(1^n)} \cong \bigoplus_\lambda f^\lambda S^\lambda.
\]
But \( M^{(1^n)} \) is just the regular representation (Example 2.1.7) and \( f^\lambda = \dim S^\lambda \) (Theorem 2.5.2). Thus this is merely the special case of Proposition 1.10.1, part 1, where \( G = \mathcal{S}_n \).
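The dimension count behind this example, \( n! = \sum_\lambda (f^\lambda)^2 \), can be verified by computing \( f^\lambda \) recursively: the largest entry of a standard tableau occupies a removable corner. A small sketch (the function names are mine):

```python
import math

def partitions(n, largest=None):
    """All partitions of n with parts at most `largest`, in tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for part in range(min(n, largest), 0, -1):
        for rest in partitions(n - part, part):
            yield (part,) + rest

def num_syt(shape):
    """f^lambda = sum of f^mu over shapes mu obtained by removing one corner."""
    if sum(shape) == 0:
        return 1
    total = 0
    for i, length in enumerate(shape):
        # (i, length-1) is a removable corner when the next row is shorter
        if length > 0 and (i + 1 == len(shape) or shape[i + 1] < length):
            smaller = tuple(x for x in shape[:i] + (length - 1,) + shape[i + 1:]
                            if x > 0)
            total += num_syt(smaller)
    return total

for n in (3, 4, 5):
    assert sum(num_syt(lam) ** 2 for lam in partitions(n)) == math.factorial(n)
```

For \( n = 4 \) the squares \( 1 + 9 + 4 + 9 + 1 \) sum to \( 24 = 4! \), as the example predicts.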
|
Theorem 3.1.1 ([Rob 38, Sch 61]) The map
\[
\pi \overset{\mathrm{R\text{-}S}}{\longrightarrow} (P, Q)
\]
is a bijection between elements of \( \mathcal{S}_n \) and pairs of standard tableaux of the same shape \( \lambda \vdash n \).
|
Proof. To show that we have a bijection, it suffices to create an inverse.
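A minimal implementation of the algorithm and of the inverse the proof calls for: delete the cells of \( Q \) in decreasing order and reverse the bumping path. The names (`rs_insert`, `robinson_schensted`, `inverse_rs`) are mine, not from the text; the round trip over \( \mathcal{S}_4 \) confirms bijectivity in that case.

```python
from itertools import permutations

def rs_insert(P, x):
    """Row-insert x into P (a list of rows); return the index of the row that grew."""
    for r, row in enumerate(P):
        bumped = next((c for c, y in enumerate(row) if y > x), None)
        if bumped is None:
            row.append(x)
            return r
        row[bumped], x = x, row[bumped]   # bump and continue in the next row
    P.append([x])
    return len(P) - 1

def robinson_schensted(pi):
    P, Q = [], []
    for k, x in enumerate(pi, start=1):
        r = rs_insert(P, x)
        if r == len(Q):
            Q.append([])
        Q[r].append(k)
    return P, Q

def inverse_rs(P, Q):
    """Recover pi: delete the cells of Q holding n, n-1, ..., 1 and reverse-bump."""
    P = [row[:] for row in P]
    Q = [row[:] for row in Q]
    n = sum(len(row) for row in Q)
    out = []
    for k in range(n, 0, -1):
        r = next(i for i, row in enumerate(Q) if row and row[-1] == k)
        Q[r].pop()
        x = P[r].pop()
        if not P[r]:
            P.pop()
        for i in range(r - 1, -1, -1):            # reverse the bumping path
            c = max(j for j, y in enumerate(P[i]) if y < x)
            P[i][c], x = x, P[i][c]
        out.append(x)
    return out[::-1]

# round trip over all of S_4
assert all(list(p) == inverse_rs(*robinson_schensted(p))
           for p in permutations(range(1, 5)))
```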
|
Theorem 3.2.3 ([Sch 61]) If \( P\left( \pi \right) = P \), then \( P\left( {\pi }^{r}\right) = {P}^{t} \), where \( t \) denotes transposition.
|
Proof. We have
\[
\begin{aligned}
P(\pi^r) &= r_{x_1}\cdots r_{x_{n-1}} r_{x_n}(\varnothing) && \text{(definition of } P(\pi^r)\text{)}\\
&= r_{x_1}\cdots r_{x_{n-1}} c_{x_n}(\varnothing) && \text{(initial tableau is empty)}\\
&= c_{x_n} r_{x_1}\cdots r_{x_{n-1}}(\varnothing) && \text{(Proposition 3.2.2)}\\
&= c_{x_n} c_{x_{n-1}}\cdots c_{x_1}(\varnothing) && \text{(induction)}\\
&= P^t. && \text{(definition of column insertion)}
\end{aligned}
\]
∎
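Theorem 3.2.3 is easy to test empirically: row-insert the reversed permutation and compare with the transpose of \( P(\pi) \). A sketch using an ordinary row-insertion routine (the function names are mine):

```python
from itertools import permutations

def p_tableau(pi):
    """Insertion tableau P(pi) via ordinary row insertion."""
    P = []
    for x in pi:
        for row in P:
            c = next((j for j, y in enumerate(row) if y > x), None)
            if c is None:
                row.append(x)
                break
            row[c], x = x, row[c]   # bump, continue in the next row
        else:
            P.append([x])
    return P

def transpose(P):
    return [[P[r][c] for r in range(len(P)) if len(P[r]) > c]
            for c in range(len(P[0]))]

# P(pi^r) = P(pi)^t for every permutation of {1,...,5}
assert all(p_tableau(list(reversed(p))) == transpose(p_tableau(list(p)))
           for p in permutations(range(1, 6)))
```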
|
Theorem 3.3.2 ([Sch 61]) Consider \( \pi \in {\mathcal{S}}_{n} \) . The length of the longest increasing subsequence of \( \pi \) is the length of the first row of \( P\left( \pi \right) \) . The length of the longest decreasing subsequence of \( \pi \) is the length of the first column of \( P\left( \pi \right) \) .
|
Proof. Since reversing a permutation turns decreasing sequences into increasing ones, the second half of the theorem follows from the first and Theorem 3.2.3. To prove the first half, we actually demonstrate a stronger result. In what follows, \( P_{k-1} \) is the tableau formed after \( k - 1 \) insertions of the Robinson-Schensted algorithm.

Lemma 3.3.3 If \( \pi = x_1 x_2 \ldots x_n \) and \( x_k \) enters \( P_{k-1} \) in column \( j \), then the longest increasing subsequence of \( \pi \) ending in \( x_k \) has length \( j \).

Proof. We induct on \( k \). The result is trivial for \( k = 1 \), so suppose it holds for all values up to \( k - 1 \).

First we need to show the existence of an increasing subsequence of length \( j \) ending in \( x_k \). Let \( y \) be the element of \( P_{k-1} \) in cell \( (1, j-1) \). Then we have \( y < x_k \), since \( x_k \) enters in column \( j \). Also, by induction, there is an increasing subsequence \( \sigma \) of length \( j - 1 \) ending in \( y \). Thus \( \sigma x_k \) is the desired subsequence.

Now we must prove that there cannot be a longer increasing subsequence ending in \( x_k \). Suppose that such a subsequence exists and let \( x_i \) be the element preceding \( x_k \) in that subsequence. Then, by induction, when \( x_i \) is inserted, it enters in some column (weakly) to the right of column \( j \). Thus the element \( y \) in cell \( (1, j) \) of \( P_i \) satisfies
\[
y \le x_i < x_k.
\]
But by part 3 of Lemma 3.2.1, the entries in a given position of a tableau never increase with subsequent insertions. Thus the element in cell \( (1, j) \) of \( P_{k-1} \) is smaller than \( x_k \), contradicting the fact that \( x_k \) displaces it. This finishes the proof of the lemma and hence of Theorem 3.3.2. ∎
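Theorem 3.3.2 can be checked directly against a quadratic dynamic program for the longest monotone subsequence (the function names are mine):

```python
from itertools import permutations

def p_tableau(pi):
    """Insertion tableau P(pi) via ordinary row insertion."""
    P = []
    for x in pi:
        for row in P:
            c = next((j for j, y in enumerate(row) if y > x), None)
            if c is None:
                row.append(x)
                break
            row[c], x = x, row[c]
        else:
            P.append([x])
    return P

def longest(pi, increasing=True):
    """Length of the longest increasing (or decreasing) subsequence,
    by the standard O(n^2) dynamic program."""
    best = []
    for k, x in enumerate(pi):
        prev = [best[i] for i, y in enumerate(pi[:k])
                if (y < x if increasing else y > x)]
        best.append(1 + max(prev, default=0))
    return max(best)

for p in permutations(range(1, 6)):
    P = p_tableau(list(p))
    assert longest(p, increasing=True) == len(P[0])   # first row length
    assert longest(p, increasing=False) == len(P)     # first column length
```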
|
Theorem 3.4.3 ([Knu 70]) If \( \pi, \sigma \in \mathcal{S}_n \), then
\[
\pi \overset{K}{\cong} \sigma \iff \pi \overset{P}{\cong} \sigma.
\]
|
Proof.
|
Lemma 3.6.2 Let the shadow diagram of \( \pi = x_1 x_2 \ldots x_n \) be constructed as before. Suppose the vertical line \( x = k \) intersects \( i \) of the shadow lines. Let \( y_j \) be the \( y \)-coordinate of the lowest point of the intersection with \( L_j \). Then the first row of \( P_k = P(x_1 \ldots x_k) \) is
\[
R_1 = y_1 y_2 \ldots y_i.
\]
|
Proof. Induct on \( k \), the lemma being trivial for \( k = 0 \). Assume that the result holds for the line \( x = k \) and consider \( x = k + 1 \). There are two cases. If
\[
x_{k+1} > y_i,
\]
then the box \( (k+1, x_{k+1}) \) starts a new shadow line. So none of the values \( y_1, \ldots, y_i \) change, and we obtain a new intersection,
\[
y_{i+1} = x_{k+1}.
\]
But by (3.9) and (3.10), the \( (k+1) \)st intersection merely causes \( x_{k+1} \) to sit at the end of the first row without displacing any other element. Thus the lemma continues to be true.

If, on the other hand,
\[
y_1 < \cdots < y_{j-1} < x_{k+1} < y_j < \cdots < y_i,
\]
then \( (k+1, x_{k+1}) \) is added to line \( L_j \). Thus the lowest coordinate on \( L_j \) becomes
\[
y_j' = x_{k+1},
\]
and all other \( y \)-values stay the same. Furthermore, equations (3.9) and (3.11) ensure that the first row of \( P_{k+1} \) is
\[
y_1 \ldots y_{j-1}\, y_j' \ldots y_i.
\]
This is precisely what is predicted by the shadow diagram. ∎
|
Corollary 3.6.3 ([Vie 76]) If the permutation \( \pi \) has Robinson-Schensted tableaux \( (P, Q) \) and shadow lines \( L_j \), then, for all \( j \),
\[
P_{1,j} = y_{L_j} \quad\text{and}\quad Q_{1,j} = x_{L_j}.
\]
|
Proof. The statement for \( P \) is just the case \( k = n \) of Lemma 3.6.2.

As for \( Q \), the entry \( k \) is added to \( Q \) in cell \( (1, j) \) when \( x_k \) is greater than every element of the first row of \( P_{k-1} \). But the previous lemma's proof shows that this happens precisely when the line \( x = k \) intersects shadow line \( L_j \) in a vertical ray. In other words, \( x_{L_j} = k = Q_{1,j} \), as desired. ∎
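Lemma 3.6.2 translates into a one-pass computation: keep, for each shadow line, the step at which its vertical ray is born and its current lowest \( y \)-coordinate. Comparing with the first rows of \( P \) and \( Q \) verifies Corollary 3.6.3; the function names below are mine.

```python
from itertools import permutations

def shadow_first_rows(pi):
    """For each shadow line L_j, track the x-coordinate of its vertical ray
    (the step at which the line is born) and its lowest y-coordinate,
    following the case analysis in the proof of Lemma 3.6.2."""
    lows, births = [], []
    for k, x in enumerate(pi, start=1):
        j = next((j for j, y in enumerate(lows) if x < y), None)
        if j is None:
            lows.append(x)        # x_k > y_i: a new line starts ...
            births.append(k)      # ... with vertical ray at x = k
        else:
            lows[j] = x           # x_k joins L_{j+1}; its lowest point drops
    return lows, births

def robinson_schensted(pi):
    P, Q = [], []
    for k, x in enumerate(pi, start=1):
        r = 0
        while r < len(P):
            row = P[r]
            c = next((j for j, y in enumerate(row) if y > x), None)
            if c is None:
                row.append(x)
                break
            row[c], x = x, row[c]
            r += 1
        else:
            P.append([x])
        if r == len(Q):
            Q.append([])
        Q[r].append(k)
    return P, Q

for p in permutations(range(1, 6)):
    lows, births = shadow_first_rows(p)
    P, Q = robinson_schensted(p)
    assert P[0] == lows and Q[0] == births
```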
|
Theorem 3.6.5 ([Vie 76]) Suppose \( \pi \overset{\mathrm{R\text{-}S}}{\longrightarrow} (P, Q) \). Then \( \pi^{(i)} \) is a partial permutation such that
\[
\pi^{(i)} \overset{\mathrm{R\text{-}S}}{\longrightarrow} (P^{(i)}, Q^{(i)}),
\]
where \( P^{(i)} \) (respectively, \( Q^{(i)} \)) consists of rows \( i \) and below of \( P \) (respectively, \( Q \)). Furthermore,
\[
P_{i,j} = y_{L_j^{(i)}} \quad\text{and}\quad Q_{i,j} = x_{L_j^{(i)}}
\]
for all \( i, j \).
|
|
Theorem 3.6.6 ([Scü 63]) If \( \pi \in \mathcal{S}_n \), then
\[
P(\pi^{-1}) = Q(\pi) \quad\text{and}\quad Q(\pi^{-1}) = P(\pi).
\]
|
Proof. Taking the inverse of a permutation corresponds to reflecting the shadow diagram in the line \( y = x \) . The theorem now follows from Theorem 3.6.5. ∎
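This symmetry is also easy to confirm computationally (the function names are mine):

```python
from itertools import permutations

def robinson_schensted(pi):
    P, Q = [], []
    for k, x in enumerate(pi, start=1):
        r = 0
        while r < len(P):
            row = P[r]
            c = next((j for j, y in enumerate(row) if y > x), None)
            if c is None:
                row.append(x)
                break
            row[c], x = x, row[c]   # bump, continue in the next row
            r += 1
        else:
            P.append([x])
        if r == len(Q):
            Q.append([])
        Q[r].append(k)
    return P, Q

def inverse_perm(pi):
    inv = [0] * len(pi)
    for i, x in enumerate(pi, start=1):
        inv[x - 1] = i
    return inv

# P and Q swap when the permutation is inverted
for p in permutations(range(1, 6)):
    P, Q = robinson_schensted(p)
    Pi, Qi = robinson_schensted(inverse_perm(p))
    assert Pi == Q and Qi == P
```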
|
Theorem 3.6.10 If \( \pi, \sigma \in \mathcal{S}_n \), then
\[
\pi \overset{K^*}{\cong} \sigma \iff \pi \overset{Q}{\cong} \sigma.
\]
|
Proof. We have the following string of equivalences:
\[
\begin{aligned}
\pi \overset{K^*}{\cong} \sigma \;&\Leftrightarrow\; \pi^{-1} \overset{K}{\cong} \sigma^{-1} && \text{(Lem\ldots)}\\
&\Leftrightarrow\; P(\pi^{-1}) = P(\sigma^{-1}) && \text{(Theorem 3.4.3)}\\
&\Leftrightarrow\; Q(\pi) = Q(\sigma). && \text{(Theorem 3.6.6)}
\end{aligned}
\]
∎
|
Proposition 3.7.4 ([Scü 76]) If \( P, Q \) are standard skew tableaux, then
\[
P \cong Q \Rightarrow P \overset{K}{\cong} Q.
\]
|
Proof. By induction, it suffices to prove the theorem when \( P \) and \( Q \) differ by a single slide. In fact, if we call the operation in steps Fb or Rb of the slide definition a move, then we need to demonstrate the result only when \( P \) and \( Q \) differ by a move. (The row word of a tableau with a hole in it can still be defined by merely ignoring the hole.)

The conclusion is trivial if the move is horizontal, because then \( \pi_P = \pi_Q \). If the move is vertical, then we can clearly restrict to the case where \( P \) and \( Q \) have only two rows. So suppose that \( x \) is the element being moved, so that
\[
P = \begin{array}{cccc}
 & R_l & x & R_r\\
S_l & & \bullet & S_r
\end{array}
\qquad\text{and}\qquad
Q = \begin{array}{cccc}
 & R_l & \bullet & R_r\\
S_l & & x & S_r
\end{array},
\]
where \( R_l, R_r, S_l, S_r \) denote the portions of the two rows to the left and right of the cell being vacated, and \( \bullet \) marks the hole.
|
Theorem 3.7.7 ([Scü 76]) If \( P \) is a partial skew tableau that is brought to a normal tableau \( P' \) by slides, then \( P' \) is unique. In fact, \( P' \) is the insertion tableau for \( \pi_P \).
|
Proof. By the previous proposition, \( \pi_P \overset{K}{\cong} \pi_{P'} \). Thus by Knuth's theorem on \( P \)-equivalence (Theorem 3.4.3), \( \pi_P \) and \( \pi_{P'} \) have the same insertion tableau. Finally, Lemma 3.4.5 tells us that the insertion tableau for \( \pi_{P'} \) is just \( P' \) itself. ∎
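Theorem 3.7.7 can be illustrated by a small jeu de taquin sketch: represent a skew tableau as a dictionary from cells to entries, slide into removable corners of the inferred inner shape until the tableau is normal, and compare with the insertion tableau of the row word. The names are mine, and the sketch assumes the skew shapes that arise have no empty rows.

```python
def p_tableau(pi):
    """Insertion tableau P(pi) via ordinary row insertion."""
    P = []
    for x in pi:
        for row in P:
            c = next((j for j, y in enumerate(row) if y > x), None)
            if c is None:
                row.append(x)
                break
            row[c], x = x, row[c]
        else:
            P.append([x])
    return P

def row_word(skew):
    """Read the rows from bottom to top, each left to right."""
    rows = sorted({r for r, _ in skew}, reverse=True)
    return [skew[r, c] for r in rows
            for c in sorted(c for rr, c in skew if rr == r)]

def rectify(skew):
    """Jeu de taquin: slide into removable corners of the inner shape until
    the tableau is normal. `skew` maps (row, col) pairs to entries."""
    cells = dict(skew)
    while True:
        start = {}                       # leftmost occupied column of each row
        for r, c in cells:
            start[r] = min(start.get(r, c), c)
        corner = next(((r, s - 1) for r, s in sorted(start.items())
                       if s > 0 and start.get(r + 1, 0) < s), None)
        if corner is None:               # shape is normal: assemble the rows
            return [[cells[r, c]
                     for c in range(max(c for rr, c in cells if rr == r) + 1)]
                    for r in sorted({r for r, _ in cells})]
        r, c = corner
        while True:                      # one forward slide into the hole
            below, right = cells.get((r + 1, c)), cells.get((r, c + 1))
            if below is None and right is None:
                break
            if right is None or (below is not None and below < right):
                cells[r, c] = below
                del cells[r + 1, c]
                r += 1
            else:
                cells[r, c] = right
                del cells[r, c + 1]
                c += 1

skew = {(0, 1): 2, (0, 2): 4, (1, 0): 1, (1, 1): 3}
assert rectify(skew) == p_tableau(row_word(skew)) == [[1, 2, 4], [3]]
```

Whichever corner order is chosen, the rectification agrees with the insertion tableau of \( \pi_P \), which is exactly the content of the theorem.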
|