Theorem 3.7.8 ([Scü 76]) Let \( P \) and \( Q \) be partial skew tableaux. Then

\[ P \cong Q \Leftrightarrow P\overset{K}{ \cong }Q. \]

Proof. The only-if direction is Proposition 3.7.4. For the other implication, note that since \( P\overset{K}{ \cong }Q \), their row words must have the same \( P \)-tableau (Theorem 3.4.3 again). So by the previous theorem, \( j\left( P\right) = j\left( Q\right) = {P}^{\prime } \), say. Thus we can take \( P \) into \( Q \) by performing the slide sequence taking \( P \) to \( {P}^{\prime } \) and then the inverse of the sequence taking \( Q \) to \( {P}^{\prime } \). Hence \( P \cong Q \). ∎
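The criterion invoked here — two words are Knuth equivalent if and only if they have the same \( P \)-tableau (Theorem 3.4.3) — is easy to test by machine. A minimal sketch of RSK row insertion follows; the function name and the list-of-rows encoding are mine, not the book's.

```python
def p_tableau(word):
    """Row-insert the letters of `word` one at a time (RSK insertion),
    returning the insertion tableau P as a list of rows."""
    rows = []
    for x in word:
        for row in rows:
            for k, y in enumerate(row):
                if y > x:          # leftmost entry strictly larger than x
                    row[k], x = x, y  # bump y into the next row
                    break
            else:
                row.append(x)      # x fits at the end of this row
                x = None
                break
        if x is not None:
            rows.append([x])       # bumping fell off the bottom
    return rows
```

For instance, \( 213 \) and \( 231 \) differ by a single Knuth relation and indeed insert to the same \( P \)-tableau.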
Proposition 3.8.1 Let \( P \) and \( Q \) be standard with the same normal shape \( \lambda \vdash n \) . Then \( P\overset{{K}^{ * }}{ \cong }Q \) .
Proof. Induct on \( n \), the proposition being trivial for \( n \leq 2 \). When \( n \geq 3 \), let \( c \) and \( d \) be the inner corner cells containing \( n \) in \( P \) and \( Q \), respectively. There are two cases, depending on the relative positions of \( c \) and \( d \).

If \( c = d \), then let \( {P}^{\prime } \) (respectively, \( {Q}^{\prime } \)) be \( P \) (respectively, \( Q \)) with the \( n \) erased. Now \( {P}^{\prime } \) and \( {Q}^{\prime } \) have the same shape, so by induction \( {\pi }_{{P}^{\prime }}\overset{{K}^{ * }}{ \cong }{\pi }_{{Q}^{\prime }} \). But then we can apply the same sequence of dual Knuth relations to \( {\pi }_{P} \) and \( {\pi }_{Q} \), the presence of \( n \) being immaterial. Thus \( P\overset{{K}^{ * }}{ \cong }Q \) in this case.

If \( c \neq d \), then it suffices to show the existence of two dual Knuth-equivalent tableaux \( {P}^{\prime } \) and \( {Q}^{\prime } \) with \( n \) in cells \( c \) and \( d \), respectively. (Because then, by what we have shown in the first case, it follows that

\[{\pi }_{P}\overset{{K}^{ * }}{ \cong }{\pi }_{{P}^{\prime }}\overset{{K}^{ * }}{ \cong }{\pi }_{{Q}^{\prime }}\overset{{K}^{ * }}{ \cong }{\pi }_{Q},\]

and we are done.) Let \( e \) be a lowest rightmost cell among all the cells on the boundary of \( \lambda \) between \( c \) and \( d \). (The boundary of \( \lambda \) is the set of all cells at the end of a row or column of \( \lambda \).) Schematically, we might have the situation

![fe1808d3-ed76-4667-ba97-eb284d29fcc8_130_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_130_0.jpg)

Now let

\[{P}_{c}^{\prime } = n,\;{P}_{e}^{\prime } = n - 2,\;{P}_{d}^{\prime } = n - 1;\]

\[{Q}_{c}^{\prime } = n - 1,\;{Q}_{e}^{\prime } = n - 2,\;{Q}_{d}^{\prime } = n,\]

and place the numbers \( 1,2,\ldots, n - 3 \) anywhere as long as they are in the same cells in both \( {P}^{\prime } \) and \( {Q}^{\prime } \).
By construction, \( {\pi }_{{P}^{\prime }}\overset{{K}^{ * }}{ \cong }{\pi }_{{Q}^{\prime }} \). ∎
Lemma 3.8.3 Let \( P\overset{ * }{ \cong }Q \). If applying the same sequence of slides to both tableaux yields \( {P}^{\prime } \) and \( {Q}^{\prime } \), then \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \).
Proposition 3.8.5 Let \( P \) and \( Q \) be distinct miniature tableaux of the same shape \( \lambda /\mu \) and content. Then

\[ P\overset{{K}^{ * }}{ \cong }Q \Leftrightarrow P\overset{ * }{ \cong }Q. \]
Proof. Without loss of generality, let \( P \) and \( Q \) be standard.
Lemma 3.8.7 ([Hai 92]) Let \( V, W, P \), and \( Q \) be standard skew tableaux with

\[ \operatorname{sh}V = \mu /\nu ,\;\operatorname{sh}P = \operatorname{sh}Q = \lambda /\mu ,\;\operatorname{sh}W = \kappa /\lambda . \]

Then

\[ P\overset{ * }{ \cong }Q \Rightarrow V \cup P \cup W\overset{ * }{ \cong }V \cup Q \cup W. \]
Proof. Consider what happens in performing a single forward slide on \( V \cup P \cup W \), say into cell \( c \). Because of the relative order of the elements in the \( V \), \( P \), and \( W \) portions of the tableau, the slide can be broken up into three parts. First of all, the slide travels through \( V \), creating a new tableau \( {V}^{\prime } = {j}_{c}\left( V\right) \) and vacating some inner corner \( d \) of \( \mu \). Then \( P \) becomes \( {P}^{\prime } = {j}_{d}\left( P\right) \), vacating cell \( e \), and finally \( W \) is transformed into \( {W}^{\prime } = {j}_{e}\left( W\right) \). Thus \( {j}_{c}\left( {V \cup P \cup W}\right) = {V}^{\prime } \cup {P}^{\prime } \cup {W}^{\prime } \).

Now perform the same slide on \( V \cup Q \cup W \). Tableau \( V \) is replaced by \( {j}_{c}\left( V\right) = {V}^{\prime } \), vacating \( d \). If \( {Q}^{\prime } = {j}_{d}\left( Q\right) \), then, since \( P\overset{ * }{ \cong }Q \), we have \( \operatorname{sh}{P}^{\prime } = \operatorname{sh}{Q}^{\prime } \). So \( e \) is vacated as before, and \( W \) becomes \( {W}^{\prime } \). Thus \( {j}_{c}\left( {V \cup Q \cup W}\right) = {V}^{\prime } \cup {Q}^{\prime } \cup {W}^{\prime } \) with \( {P}^{\prime }\overset{ * }{ \cong }{Q}^{\prime } \) by Lemma 3.8.3.

The preceding also holds, mutatis mutandis, for backward slides. Hence applying the same slide to both \( V \cup P \cup W \) and \( V \cup Q \cup W \) yields tableaux of the same shape that still satisfy the hypotheses of the lemma. By induction, we are done. ∎
Theorem 3.8.8 ([Hai 92]) Let \( P \) and \( Q \) be standard tableaux of the same shape \( \lambda /\mu \). Then

\[ P\overset{{K}^{ * }}{ \cong }Q \Leftrightarrow P\overset{ * }{ \cong }Q. \]
Proof.
Lemma 3.9.2 Let \( Q \) be any skew partial tableau; then

\[ {j\Delta }\left( Q\right) = {\Delta j}\left( Q\right) . \]
Proof. Let \( P \) be \( Q \) with its minimum element \( m \) erased from cell \( c \). We write this as \( P = Q - \{ m\} \). Then

\[ {j\Delta }\left( Q\right) = j\left( P\right) \]

by the definition of \( \Delta \) and the uniqueness of the \( j \) operator (Theorem 3.7.7).

We now show that any forward slide on \( Q \) can be mimicked by a slide on \( P \). Let \( {Q}^{\prime } = {j}^{d}\left( Q\right) \) for some cell \( d \). There are two cases. If \( d \) is not vertically or horizontally adjacent to \( c \), then it is legal to form \( {P}^{\prime } = {j}^{d}\left( P\right) \). Since \( m \) is involved in neither slide, we again have \( {P}^{\prime } = {Q}^{\prime } - \{ m\} \).

If \( d \) is adjacent to \( c \), then the first move of \( {j}^{d}\left( Q\right) \) will put \( m \) in cell \( d \) and then proceed as a slide into \( c \). Thus letting \( {P}^{\prime } = {j}^{c}\left( P\right) \) will preserve the relationship between \( {P}^{\prime } \) and \( {Q}^{\prime } \) as before.

Continuing in this manner, when we obtain \( j\left( Q\right) \), we will have a corresponding \( {P}^{\prime \prime } \), which is just \( j\left( Q\right) \) with \( m \) erased from the \( \left( {1,1}\right) \) cell. Thus

\[ {\Delta j}\left( Q\right) = {j}^{\left( 1,1\right) }\left( {P}^{\prime \prime }\right) = j\left( P\right) = {j\Delta }\left( Q\right) . \]

∎
Proposition 3.9.3 ([Scü 63]) Suppose \( \pi = {x}_{1}{x}_{2}\ldots {x}_{n} \in {\mathcal{S}}_{n} \) and let

\[ \bar{\pi } = \begin{pmatrix} 2 & 3 & \cdots & n \\ {x}_{2} & {x}_{3} & \cdots & {x}_{n} \end{pmatrix}. \]

Then

\[ Q\left( \bar{\pi }\right) = {\Delta Q}\left( \pi \right) . \]
Proof. Consider \( \sigma = {\pi }^{-1} \) and \( \bar{\sigma } = {\bar{\pi }}^{-1} \). Note that the lower line of \( \bar{\sigma } \) is obtained from the lower line of \( \sigma \) by deleting the minimum element, 1.

By Theorem 3.6.6, it suffices to show that

\[ P\left( \bar{\sigma }\right) = {\Delta P}\left( \sigma \right) . \]

If we view \( \sigma \) and \( \bar{\sigma } \) as antidiagonal strip tableaux, then \( \bar{\sigma } \cong {\Delta \sigma } \). Hence by Theorem 3.7.7 and Lemma 3.9.2,

\[ P\left( \bar{\sigma }\right) = P\left( {\Delta \sigma }\right) = {\Delta P}\left( \sigma \right) . \]

∎
Theorem 3.9.4 ([Scü 63]) If \( \pi \in {\mathcal{S}}_{n} \), then

\[ Q\left( {\pi }^{r}\right) = \operatorname{ev}Q{\left( \pi \right) }^{t}. \]
Proof. Let \( \pi ,\bar{\pi } \) be as in the previous proposition with

\[ {\bar{\pi }}^{r} = \begin{pmatrix} 1 & 2 & \cdots & n - 1 \\ {x}_{n} & {x}_{n - 1} & \cdots & {x}_{2} \end{pmatrix}. \]

Induct on \( n \). Now

\[ Q\left( {\pi }^{r}\right) - \{ n\} = Q\left( {\bar{\pi }}^{r}\right) \;\left( {{\bar{\pi }}^{r} = {x}_{n}\ldots {x}_{2}}\right) \]

\[ = \operatorname{ev}Q{\left( \bar{\pi }\right) }^{t}\;\text{(induction)} \]

\[ = \operatorname{ev}Q{\left( \pi \right) }^{t} - \{ n\} .\;\text{(Proposition 3.9.3)} \]

Thus we need to show only that \( n \) occupies the same cell in both \( Q\left( {\pi }^{r}\right) \) and \( \operatorname{ev}Q{\left( \pi \right) }^{t} \). Let \( \operatorname{sh}Q\left( \pi \right) = \lambda \) and \( \operatorname{sh}Q\left( \bar{\pi }\right) = \bar{\lambda } \). By Theorems 3.1.1 and 3.2.3 we have \( \operatorname{sh}Q\left( {\pi }^{r}\right) = {\lambda }^{t} \) and \( \operatorname{sh}Q\left( {\bar{\pi }}^{r}\right) = {\bar{\lambda }}^{t} \). Hence

\[ \text{cell of }n\text{ in }Q\left( {\pi }^{r}\right) = {\lambda }^{t}/{\bar{\lambda }}^{t}\;\left( {{\bar{\pi }}^{r} = {x}_{n}\ldots {x}_{2}}\right) \]

\[ = {\left( \lambda /\bar{\lambda }\right) }^{t} \]

\[ = {\left( \text{cell of }n\text{ in ev }Q\left( \pi \right) \right) }^{t}.\;\text{(Proposition 3.9.3)} \]

∎
Theorem 3.10.3 ([NPS 97]) For fixed \( \lambda \), the map

\[ T\overset{\mathrm{N} - \mathrm{P} - \mathrm{S}}{ \rightarrow }\left( {P, J}\right) \]

just defined is a bijection between tableaux \( T \) and pairs \( \left( {P, J}\right) \) with \( P \) a standard tableau and \( J \) a hook tableau such that \( \operatorname{sh}T = \operatorname{sh}P = \operatorname{sh}J = \lambda \).
Proof. As usual, we will create an inverse.
Lemma 3.10.4 Suppose that all reverse paths in \( {\mathcal{R}}^{\prime } \) go through \( \left( {{i}_{0},{j}_{0}}\right) \). Then \( {r}_{0}^{\prime } \) with initial cell \( \left( {{i}_{0}^{\prime },{j}_{0}^{\prime }}\right) \in {\mathcal{C}}^{\prime } \) is the largest reverse path in \( {\mathcal{R}}^{\prime } \) if and only if any initial cell \( \left( {{i}^{\prime },{j}^{\prime }}\right) \in {\mathcal{C}}^{\prime } \) of \( {r}^{\prime } \in {\mathcal{R}}^{\prime } \), \( {r}^{\prime } \neq {r}_{0}^{\prime } \), satisfies

R1 \( {i}_{0} \leq {i}^{\prime } \leq {i}_{0}^{\prime } \) and \( \left( {{i}^{\prime },{j}^{\prime }}\right) \) is west and weakly south of \( {r}_{0}^{\prime } \), or

R2 \( {i}^{\prime } > {i}_{0}^{\prime } \) and \( {r}^{\prime } \) enters row \( {i}_{0}^{\prime } \) weakly west of \( {r}_{0}^{\prime } \).
Proof. Since both \( {r}^{\prime } \) and \( {r}_{0}^{\prime } \) end up at \( \left( {{i}_{0},{j}_{0}}\right) \), they must intersect somewhere and coincide after their intersection. For the forward implication, assume that neither \( \mathrm{R}1 \) nor \( \mathrm{R}2 \) holds. Then the only other possibilities force \( {r}^{\prime } \) to join \( {r}_{0}^{\prime } \) with a \( W \) step or start on \( {r}_{0}^{\prime } \) after an \( N \) step of \( {r}_{0}^{\prime } \). In either case \( {r}^{\prime } > {r}_{0}^{\prime } \), a contradiction.

For the other direction, if \( {r}^{\prime } \) satisfies \( \mathrm{R}1 \) or \( \mathrm{R}2 \), then either \( {r}^{\prime } \subset {r}_{0}^{\prime } \) and \( {r}_{0}^{\prime } \) joins \( {r}^{\prime } \) with a \( W \) step, or \( {r}^{\prime } \nsubseteq {r}_{0}^{\prime } \) and \( {r}^{\prime } \) joins \( {r}_{0}^{\prime } \) with an \( N \) step. This gives \( {r}^{\prime } < {r}_{0}^{\prime } \). ∎
Lemma 3.10.5 If \( {r}_{0}^{\prime } \) goes through \( \left( {{i}_{0},{j}_{0}}\right) \) and is north of some cell of a reverse path \( {r}^{\prime \prime } \in {\mathcal{R}}^{\prime \prime } \), then \( {r}^{\prime \prime } \) goes through \( \left( {{i}_{0} + 1,{j}_{0}}\right) \) .
Proof. I claim that \( {r}_{0}^{\prime } \) is north of every cell of \( {r}^{\prime \prime } \) after the one given in the hypothesis of the lemma. This forces \( {r}^{\prime \prime } \) through \( \left( {{i}_{0} + 1,{j}_{0}}\right) \) as \( {r}_{0}^{\prime } \) goes through \( \left( {{i}_{0},{j}_{0}}\right) \). If the claim is false, then let \( \left( {{i}_{1},{j}_{1}}\right) \) be the first cell of \( {r}^{\prime \prime } \) after the given one that is northmost on \( {r}_{0}^{\prime } \) in its column. So the previous cell of \( {r}^{\prime \prime } \) must have been \( \left( {{i}_{1} + 1,{j}_{1}}\right) \). But, by construction of \( {r}_{0}^{\prime } \),

\[ {T}_{{i}_{1},{j}_{1}}^{\prime } = {T}_{{i}_{1},{j}_{1} - 1}^{\prime \prime } < {T}_{{i}_{1} + 1,{j}_{1} - 1}^{\prime \prime } = {T}_{{i}_{1} + 1,{j}_{1} - 1}^{\prime }. \]

So \( {r}^{\prime \prime } \) should have moved from \( \left( {{i}_{1} + 1,{j}_{1}}\right) \) to \( \left( {{i}_{1} + 1,{j}_{1} - 1}\right) \) rather than \( \left( {{i}_{1},{j}_{1}}\right) \), a contradiction. ∎
Proposition 3.10.6 For all \( k \), the hook tableau \( {J}_{k} \) is well defined and all reverse paths go through \( {c}_{k} = \left( {{i}_{0},{j}_{0}}\right) \) in \( {T}_{k} \) .
Proof. The proposition is true for \( {i}_{0} = 1 \) by definition of the algorithm. So by induction, we can assume that the statement is true for \( {J}^{\prime } \) and \( {T}^{\prime } \) and prove it for \( {J}^{\prime \prime } \) and \( {T}^{\prime \prime } \). By Lemma 3.10.4 and (3.20), we see that if \( {r}^{\prime \prime } \in {\mathcal{R}}^{\prime \prime } \) starts at \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \) with \( {i}_{0} \leq {i}^{\prime \prime } \leq {i}_{0}^{\prime } \), then \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \) is south and weakly west of \( {r}_{0}^{\prime } \) as in the following figure (which the reader should compare with the previous one).

![fe1808d3-ed76-4667-ba97-eb284d29fcc8_144_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_144_0.jpg)

So, in particular, \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \in \lambda \) and \( {J}^{\prime \prime } \) is well defined. Also, by Lemma 3.10.5, these \( {r}^{\prime \prime } \) must go through \( \left( {{i}_{0} + 1,{j}_{0}}\right) \). If \( {i}^{\prime \prime } > {i}_{0}^{\prime } \), let \( {r}^{\prime } \) be the reverse path in \( {T}^{\prime } \) starting at \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \). Then \( {r}^{\prime \prime } \) and \( {r}^{\prime } \) must agree up to and including the first cell on \( {r}^{\prime } \) in a column weakly west of column \( {j}_{0}^{\prime } \). Since this cell is south of \( {r}_{0}^{\prime } \), we are done by Lemma 3.10.5. ∎
Lemma 3.10.7 Suppose that \( {p}_{0}^{\prime } \) ends at \( \left( {{i}_{0}^{\prime },{j}_{0}^{\prime }}\right) \) and does not consist solely of \( E \) (east) steps. Then the initial cell \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \in {\mathcal{C}}^{\prime \prime } \) of any \( {r}^{\prime \prime } \in {\mathcal{R}}^{\prime \prime } \) satisfies

1. \( {i}_{0} + 1 \leq {i}^{\prime \prime } \leq {i}_{0}^{\prime } \) and \( \left( {{i}^{\prime \prime },{j}^{\prime \prime }}\right) \) is south and weakly west of \( {p}_{0}^{\prime } \), or

2. \( {i}^{\prime \prime } > {i}_{0}^{\prime } \) and \( {r}^{\prime \prime } \) enters row \( {i}_{0}^{\prime } \) weakly west of \( {p}_{0}^{\prime } \). ∎
The proof of this result is by contradiction. It is also very similar to that of Lemma 3.10.5, and so is left to the reader.
Proposition 4.1.4 If \( {f}_{i}\left( x\right) \in \mathbb{C}\left\lbrack \left\lbrack x\right\rbrack \right\rbrack \) for \( i \geq 1 \) and \( \mathop{\lim }\limits_{{i \rightarrow \infty }}\deg \left( {{f}_{i}\left( x\right) - 1}\right) = \) \( \infty \), then \( \mathop{\prod }\limits_{{i \geq 1}}{f}_{i}\left( x\right) \) converges.
Theorem 4.1.5 For all \( n,{p}_{d}\left( n\right) = {p}_{o}\left( n\right) \) .
Proof. It suffices to show that \( {p}_{d}\left( n\right) \) and \( {p}_{o}\left( n\right) \) have the same generating function. But

\[ \mathop{\prod }\limits_{{i \geq 1}}\left( {1 + {x}^{i}}\right) = \mathop{\prod }\limits_{{i \geq 1}}\left( {1 + {x}^{i}}\right) \mathop{\prod }\limits_{{i \geq 1}}\frac{1 - {x}^{i}}{1 - {x}^{i}} \]

\[ = \mathop{\prod }\limits_{{i \geq 1}}\frac{1 - {x}^{2i}}{1 - {x}^{i}} \]

\[ = \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{1 - {x}^{{2i} - 1}}. \]

∎
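Euler's identity above can also be confirmed by brute-force counting; a sketch (the function names and dynamic-programming encoding are mine):

```python
def count_partitions(n, parts, distinct=False):
    """Count partitions of n using the allowed part sizes in `parts`.
    With distinct=True each part may be used at most once."""
    ways = [1] + [0] * n
    for p in parts:
        if distinct:
            for m in range(n, p - 1, -1):  # 0/1 knapsack order
                ways[m] += ways[m - p]
        else:
            for m in range(p, n + 1):      # parts may repeat
                ways[m] += ways[m - p]
    return ways[n]

def p_d(n):
    """Partitions of n into distinct parts."""
    return count_partitions(n, range(1, n + 1), distinct=True)

def p_o(n):
    """Partitions of n into odd parts."""
    return count_partitions(n, range(1, n + 1, 2))
```

For example, \( {p}_{d}\left( 6\right) = {p}_{o}\left( 6\right) = 4 \): the partitions \( 6, 5 + 1, 4 + 2, 3 + 2 + 1 \) versus \( 5 + 1, 3 + 3, 3 + 1 + 1 + 1, {1}^{6} \).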
Proposition 4.1.6 Let \( S \) and \( T \) be weighted sets.

1. If \( S \cap T = \varnothing \), then

\[ {f}_{S \uplus T}\left( x\right) = {f}_{S}\left( x\right) + {f}_{T}\left( x\right) . \]

2. Let \( S \) and \( T \) be arbitrary and weight \( S \times T \) by \( \operatorname{wt}\left( {s, t}\right) = \operatorname{wt}s\operatorname{wt}t \). Then

\[ {f}_{S \times T}\left( x\right) = {f}_{S}\left( x\right) \cdot {f}_{T}\left( x\right) . \]
Proof. 1. If \( S \) and \( T \) do not intersect, then

\[ {f}_{S \uplus T}\left( x\right) = \mathop{\sum }\limits_{{s \in S \uplus T}}\operatorname{wt}s \]

\[ = \mathop{\sum }\limits_{{s \in S}}\operatorname{wt}s + \mathop{\sum }\limits_{{s \in T}}\operatorname{wt}s \]

\[ = {f}_{S}\left( x\right) + {f}_{T}\left( x\right) . \]

2. For any two sets \( S, T \), we have

\[ {f}_{S \times T}\left( x\right) = \mathop{\sum }\limits_{{\left( {s, t}\right) \in S \times T}}\operatorname{wt}\left( {s, t}\right) \]

\[ = \mathop{\sum }\limits_{\substack{s \in S \\ t \in T}}\operatorname{wt}s\operatorname{wt}t \]

\[ = \mathop{\sum }\limits_{{s \in S}}\operatorname{wt}s\mathop{\sum }\limits_{{t \in T}}\operatorname{wt}t \]

\[ = {f}_{S}\left( x\right) {f}_{T}\left( x\right) . \]

∎
Theorem 4.2.2 Fix a partition \( \lambda \). Then

\[ \mathop{\sum }\limits_{{n \geq 0}}\operatorname{rpp}_{\lambda }\left( n\right) {x}^{n} = \mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}\frac{1}{1 - {x}^{{h}_{i, j}}}. \]
Proof. By the discussion after Proposition 4.1.4, the coefficient of \( {x}^{n} \) in this product counts partitions of \( n \), where each part is of the form \( {h}_{i, j} \) for some \( \left( {i, j}\right) \in \lambda \). (Note that the part \( {h}_{i, j} \) is associated with the node \( \left( {i, j}\right) \in \lambda \), so parts \( {h}_{i, j} \) and \( {h}_{k, l} \) are considered different if \( \left( {i, j}\right) \neq \left( {k, l}\right) \), even if \( {h}_{i, j} = {h}_{k, l} \) as integers.) To show that this coefficient equals the number of reverse plane partitions \( T \) of \( n \), it suffices to find a bijection

\[ T \leftrightarrow \left( {{h}_{{i}_{1},{j}_{1}},{h}_{{i}_{2},{j}_{2}},\ldots }\right) \]

that is weight preserving, i.e.,

\[ \mathop{\sum }\limits_{{\left( {i, j}\right) \in \lambda }}{T}_{i, j} = \mathop{\sum }\limits_{k}{h}_{{i}_{k},{j}_{k}}. \]
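For a small shape the theorem is easy to verify coefficient by coefficient. A sketch follows (function names and encodings are mine; shapes are tuples of row lengths, and \( \lambda = \left( {2,1}\right) \) has hooklengths \( 3,1,1 \)):

```python
def hook_lengths(shape):
    """Hook lengths h_{i,j} of the cells of a partition shape."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [shape[i] - j + conj[j] - i - 1
            for i in range(len(shape)) for j in range(shape[i])]

def series_coeffs(hooks, n):
    """Coefficients of x^0..x^n in prod_h 1/(1 - x^h)."""
    coeffs = [1] + [0] * n
    for h in hooks:
        for m in range(h, n + 1):
            coeffs[m] += coeffs[m - h]
    return coeffs

def rpp_count(shape, n):
    """Brute-force count of reverse plane partitions of `shape` summing
    to n (nonnegative entries weakly increasing along rows, columns)."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    def fill(k, vals, total):
        if k == len(cells):
            return 1 if total == n else 0
        i, j = cells[k]
        lo = max(vals.get((i, j - 1), 0), vals.get((i - 1, j), 0))
        count = 0
        for v in range(lo, n - total + 1):
            vals[(i, j)] = v
            count += fill(k + 1, vals, total + v)
        vals.pop((i, j), None)
        return count
    return fill(0, {}, 0)
```

For \( \lambda = \left( {2,1}\right) \) both sides of the theorem give the coefficient sequence \( 1,2,3,5,7,9,\ldots \)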
Lemma 4.2.3 In the decomposition of \( T \) into hooklengths, the hooklength \( {h}_{{i}^{\prime },{j}^{\prime }} \) was removed before \( {h}_{{i}^{\prime \prime },{j}^{\prime \prime }} \) if and only if

\[ {i}^{\prime \prime } > {i}^{\prime },\;\text{ or }\;{i}^{\prime \prime } = {i}^{\prime }\text{ and }{j}^{\prime \prime } \leq {j}^{\prime }. \]
Proof. Since (4.3) is a total order on the nodes of the shape, we need only prove the only-if direction. By transitivity, it suffices to consider the case where \( {h}_{{i}^{\prime \prime },{j}^{\prime \prime }} \) is removed directly after \( {h}_{{i}^{\prime },{j}^{\prime }} \).

Let \( {T}^{\prime } \) and \( {T}^{\prime \prime } \) be the arrays from which \( {h}_{{i}^{\prime },{j}^{\prime }} \) and \( {h}_{{i}^{\prime \prime },{j}^{\prime \prime }} \) were removed using paths \( {p}^{\prime } \) and \( {p}^{\prime \prime } \), respectively. By the choice of initial points and the fact that entries decrease in passing from \( {T}^{\prime } \) to \( {T}^{\prime \prime } \), we have \( {i}^{\prime \prime } \geq {i}^{\prime } \).

If \( {i}^{\prime \prime } > {i}^{\prime } \), we are done. Otherwise, \( {i}^{\prime \prime } = {i}^{\prime } \) and \( {p}^{\prime \prime } \) starts in a column weakly to the west of \( {p}^{\prime } \). We claim that in this case \( {p}^{\prime \prime } \) can never pass through a node strictly to the east of a node of \( {p}^{\prime } \), forcing \( {j}^{\prime \prime } \leq {j}^{\prime } \). If not, then there is some \( \left( {s, t}\right) \in {p}^{\prime } \cap {p}^{\prime \prime } \) such that \( \left( {s, t - 1}\right) \in {p}^{\prime } \) and \( \left( {s + 1, t}\right) \in {p}^{\prime \prime } \). But the fact that \( {p}^{\prime } \) moved west implies \( {T}_{s, t}^{\prime } = {T}_{s, t - 1}^{\prime } \). Since this equality continues to hold in \( {T}^{\prime \prime } \) after the ones have been subtracted, \( {p}^{\prime \prime } \) is forced to move west as well, a contradiction. ∎
Lemma 4.2.4 If \( {r}_{k} \) is the reverse path for \( {h}_{{i}_{k},{j}_{k}} \), then \( \left( {{i}_{k},{\lambda }_{{i}_{k}}}\right) \in {r}_{k} \) .
Proof. Use reverse induction on \( k \). The result is obvious when \( k = f \) by the first alternative in step GH2.

For \( k < f \), let \( {r}^{\prime } = {r}_{k} \) and \( {r}^{\prime \prime } = {r}_{k + 1} \). Similarly, define \( {T}^{\prime },{T}^{\prime \prime },{h}_{{i}^{\prime },{j}^{\prime }} \), and \( {h}_{{i}^{\prime \prime },{j}^{\prime \prime }} \). By our ordering of the hooklengths, \( {i}^{\prime } \leq {i}^{\prime \prime } \). If \( {i}^{\prime } < {i}^{\prime \prime } \), then row \( {i}^{\prime } \) of \( T \) consists solely of zeros, and we are done as in the base case.

If \( {i}^{\prime } = {i}^{\prime \prime } \), then \( {j}^{\prime } \geq {j}^{\prime \prime } \). Thus \( {r}^{\prime } \) starts weakly to the east of \( {r}^{\prime \prime } \). By the same arguments as in Lemma 4.2.3, \( {r}^{\prime } \) stays to the east of \( {r}^{\prime \prime } \). Since \( {r}^{\prime \prime } \) reaches the east end of row \( {i}^{\prime \prime } = {i}^{\prime } \) by the induction hypothesis, so must \( {r}^{\prime } \). ∎
Theorem 4.2.6 ([Stn 71]) Let \( A \) be a poset with \( \left| A\right| = n \). Then the generating function for reverse \( A \)-partitions is

\[ \frac{P\left( x\right) }{\left( {1 - x}\right) \left( {1 - {x}^{2}}\right) \cdots \left( {1 - {x}^{n}}\right) }, \]

where \( P\left( x\right) \) is a polynomial such that \( P\left( 1\right) \) is the number of natural labelings of \( A \). ∎
In the case where \( A = \lambda \), we can compare this result with Theorem 4.2.2 and obtain

\[ \frac{P\left( x\right) }{\left( {1 - x}\right) \left( {1 - {x}^{2}}\right) \cdots \left( {1 - {x}^{n}}\right) } = \mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}\frac{1}{1 - {x}^{{h}_{i, j}}}. \]

Thus

\[ {f}^{\lambda } = P\left( 1\right) \]

\[ = \mathop{\lim }\limits_{{x \rightarrow 1}}\frac{\mathop{\prod }\limits_{{k = 1}}^{n}\left( {1 - {x}^{k}}\right) }{\mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}\left( {1 - {x}^{{h}_{i, j}}}\right) } \]

\[ = \frac{n!}{\mathop{\prod }\limits_{{\left( {i, j}\right) \in \lambda }}{h}_{i, j}}. \]
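The final expression is the hook formula for \( {f}^{\lambda } \). It is easily cross-checked against direct enumeration via the branching rule \( {f}^{\lambda } = \sum {f}^{{\lambda }^{ - }} \), the sum being over all \( {\lambda }^{ - } \) obtained by removing a corner; a sketch (function names mine):

```python
from math import factorial

def f_lambda(shape):
    """f^lambda = n! / (product of hook lengths)."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    prod = 1
    for i in range(len(shape)):
        for j in range(shape[i]):
            prod *= shape[i] - j + conj[j] - i - 1
    return factorial(sum(shape)) // prod

def f_count(shape):
    """Count standard tableaux via f^lambda = sum over removable corners."""
    if not shape:
        return 1
    total = 0
    for i in range(len(shape)):
        # (i, shape[i]-1) is removable when the next row is strictly shorter
        if i == len(shape) - 1 or shape[i] > shape[i + 1]:
            smaller = list(shape)
            smaller[i] -= 1
            if smaller[i] == 0:   # only the last row can shrink to 0
                smaller.pop()
            total += f_count(tuple(smaller))
    return total
```

For instance, \( {f}^{\left( 2,1\right) } = 3!/\left( {3 \cdot 1 \cdot 1}\right) = 2 \) and \( {f}^{\left( 3,2\right) } = 5!/\left( {4 \cdot 3 \cdot 1 \cdot 2 \cdot 1}\right) = 5 \).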
Proposition 4.3.3 The space \( {\Lambda }^{n} \) has basis

\[ \left\{ {{m}_{\lambda } : \lambda \vdash n}\right\} \]

and so has dimension \( p\left( n\right) \), the number of partitions of \( n \).
Proposition 4.3.5 We have the following generating functions:

\[ E\left( t\right) \overset{\text{ def }}{ = }\mathop{\sum }\limits_{{n \geq 0}}{e}_{n}\left( \mathbf{x}\right) {t}^{n} = \mathop{\prod }\limits_{{i \geq 1}}\left( {1 + {x}_{i}t}\right) , \]

\[ H\left( t\right) \overset{\text{ def }}{ = }\mathop{\sum }\limits_{{n \geq 0}}{h}_{n}\left( \mathbf{x}\right) {t}^{n} = \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{1 - {x}_{i}t}. \]
Proof. Work in the ring \( \mathbb{C}\left\lbrack \left\lbrack {\mathbf{x}, t}\right\rbrack \right\rbrack \). For the elementary symmetric functions, consider the set \( S = \{ \lambda : \lambda \text{ has distinct parts}\} \) with weight

\[ {\operatorname{wt}}^{\prime }\lambda = {t}^{l\left( \lambda \right) }\operatorname{wt}\lambda , \]

where wt is as before. Then

\[ {f}_{S}\left( {\mathbf{x}, t}\right) = \mathop{\sum }\limits_{{\lambda \in S}}{\operatorname{wt}}^{\prime }\lambda \]

\[ = \mathop{\sum }\limits_{{n \geq 0}}\mathop{\sum }\limits_{{l\left( \lambda \right) = n}}{t}^{n}\operatorname{wt}\lambda \]

\[ = \mathop{\sum }\limits_{{n \geq 0}}{e}_{n}\left( \mathbf{x}\right) {t}^{n}. \]

To obtain the product, write

\[ S = \left( {\left\{ {1}^{0}\right\} \uplus \left\{ {1}^{1}\right\} }\right) \times \left( {\left\{ {2}^{0}\right\} \uplus \left\{ {2}^{1}\right\} }\right) \times \left( {\left\{ {3}^{0}\right\} \uplus \left\{ {3}^{1}\right\} }\right) \times \cdots , \]

so that

\[ {f}_{S}\left( {\mathbf{x}, t}\right) = \left( {1 + {x}_{1}t}\right) \left( {1 + {x}_{2}t}\right) \left( {1 + {x}_{3}t}\right) \cdots . \]

The proof for the complete symmetric functions is analogous. ∎
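In finitely many variables both identities can be checked by expanding the (truncated) products; a sketch, with function names and the coefficient-list encoding my own:

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(n, xs):
    """Elementary symmetric polynomial e_n evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, n))

def h(n, xs):
    """Complete homogeneous symmetric polynomial h_n evaluated at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, n))

def poly_coeffs(factors, n):
    """Coefficients of t^0..t^n in a product of polynomials in t,
    each factor given as its own coefficient list."""
    coeffs = [1] + [0] * n
    for f in factors:
        new = [0] * (n + 1)
        for a, ca in enumerate(coeffs):
            for b, cb in enumerate(f):
                if a + b <= n:
                    new[a + b] += ca * cb
        coeffs = new
    return coeffs
```

With \( \mathbf{x} = \left( {1,2,3}\right) \), the product \( \prod \left( {1 + {x}_{i}t}\right) \) has coefficients \( 1,6,11,6 \), matching \( {e}_{0},\ldots ,{e}_{3} \); truncating each geometric factor \( 1/\left( {1 - {x}_{i}t}\right) \) recovers the \( {h}_{n} \) the same way.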
Proposition 4.3.6 We have the following generating function:

\[ \mathop{\sum }\limits_{{n \geq 1}}{p}_{n}\left( \mathbf{x}\right) \frac{{t}^{n}}{n} = \ln \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{1 - {x}_{i}t}. \]
Proof. Using the Taylor expansion of \( \ln \frac{1}{1 - x} \), we obtain

\[ \ln \mathop{\prod }\limits_{{i \geq 1}}\frac{1}{1 - {x}_{i}t} = \mathop{\sum }\limits_{{i \geq 1}}\ln \frac{1}{1 - {x}_{i}t} \]

\[ = \mathop{\sum }\limits_{{i \geq 1}}\mathop{\sum }\limits_{{n \geq 1}}\frac{{\left( {x}_{i}t\right) }^{n}}{n} \]

\[ = \mathop{\sum }\limits_{{n \geq 1}}\frac{{t}^{n}}{n}\mathop{\sum }\limits_{{i \geq 1}}{x}_{i}^{n} \]

\[ = \mathop{\sum }\limits_{{n \geq 1}}{p}_{n}\left( \mathbf{x}\right) \frac{{t}^{n}}{n}. \]

∎
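A quick numeric sanity check of this identity at a point inside the region of convergence (names and the truncation level are my own choices):

```python
from math import log, isclose

def power_sum_side(xs, t, terms=60):
    """Truncation of sum_{n>=1} p_n(x) t^n / n."""
    return sum(sum(x ** n for x in xs) * t ** n / n
               for n in range(1, terms + 1))

def log_product_side(xs, t):
    """ln of prod_i 1/(1 - x_i t), computed term by term."""
    return sum(log(1 / (1 - x * t)) for x in xs)
```

With \( {x}_{i}t \) small, sixty terms of the power-sum series already agree with the logarithm to machine precision.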
Theorem 4.3.7 The following are bases for \( {\Lambda }^{n} \):

1. \( \left\{ {{p}_{\lambda } : \lambda \vdash n}\right\} \),

2. \( \left\{ {{e}_{\lambda } : \lambda \vdash n}\right\} \),

3. \( \left\{ {{h}_{\lambda } : \lambda \vdash n}\right\} \).
Proof. 1. Let \( C = \left( {c}_{\lambda \mu }\right) \) be the matrix expressing the \( {p}_{\lambda } \) in terms of the basis \( {m}_{\mu } \). If we can find an ordering of partitions such that \( C \) is triangular with nonzero entries down the diagonal, then \( {C}^{-1} \) exists, and the \( {p}_{\lambda } \) are also a basis. It turns out that lexicographic order will work. In fact, we claim that

\[ {p}_{\lambda } = {c}_{\lambda \lambda }{m}_{\lambda } + \mathop{\sum }\limits_{{\mu \vartriangleright \lambda }}{c}_{\lambda \mu }{m}_{\mu }, \]

(4.6)

where \( {c}_{\lambda \lambda } \neq 0 \). (This is actually stronger than our claim about \( C \) by Proposition 2.2.6.) But if \( {x}_{1}^{{\mu }_{1}}{x}_{2}^{{\mu }_{2}}\cdots {x}_{m}^{{\mu }_{m}} \) appears in

\[ {p}_{\lambda } = \left( {{x}_{1}^{{\lambda }_{1}} + {x}_{2}^{{\lambda }_{1}} + \cdots }\right) \left( {{x}_{1}^{{\lambda }_{2}} + {x}_{2}^{{\lambda }_{2}} + \cdots }\right) \cdots , \]

then each \( {\mu }_{i} \) must be a sum of \( {\lambda }_{j} \)'s. Since adding together parts of a partition makes it larger in dominance order, \( {m}_{\lambda } \) must be the smallest term that occurs.

2. In a similar manner we can show that there exist scalars \( {d}_{\lambda \mu } \) such that

\[ {e}_{{\lambda }^{\prime }} = {m}_{\lambda } + \mathop{\sum }\limits_{{\mu \vartriangleleft \lambda }}{d}_{\lambda \mu }{m}_{\mu }, \]

where \( {\lambda }^{\prime } \) is the conjugate of \( \lambda \).

3. Since there are \( p\left( n\right) = \dim {\Lambda }^{n} \) functions \( {h}_{\lambda } \), it suffices to show that they generate the basis \( {e}_{\mu } \). Since both sets of functions are multiplicative, we may simply demonstrate that every \( {e}_{n} \) is a polynomial in the \( {h}_{k} \).
From the products in Proposition 4.3.5, we see that

\[ H\left( t\right) E\left( {-t}\right) = 1. \]

Substituting in the summations for \( H \) and \( E \) and picking out the coefficient of \( {t}^{n} \) on both sides yields

\[ \mathop{\sum }\limits_{{r = 0}}^{n}{\left( -1\right) }^{r}{h}_{n - r}{e}_{r} = 0 \]

for \( n \geq 1 \). So

\[ {e}_{n} = {h}_{1}{e}_{n - 1} - {h}_{2}{e}_{n - 2} + \cdots , \]

which is a polynomial in the \( h \)'s by induction on \( n \). ∎
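The alternating-sum identity and the resulting recurrence can be verified in finitely many variables; a sketch (names mine):

```python
from itertools import combinations, combinations_with_replacement
from math import prod

def e(n, xs):
    """Elementary symmetric polynomial e_n evaluated at xs."""
    return sum(prod(c) for c in combinations(xs, n))

def h(n, xs):
    """Complete homogeneous symmetric polynomial h_n evaluated at xs."""
    return sum(prod(c) for c in combinations_with_replacement(xs, n))

def e_from_h(n, xs):
    """Recover e_n from the h's via e_n = h_1 e_{n-1} - h_2 e_{n-2} + ...."""
    if n == 0:
        return 1
    return sum((-1) ** (k - 1) * h(k, xs) * e_from_h(n - k, xs)
               for k in range(1, n + 1))
```

The recurrence is exactly the coefficient identity solved for \( {e}_{n} \) (the \( r = n \) term contributes \( {\left( -1\right) }^{n}{e}_{n} \) since \( {h}_{0} = 1 \)).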
Proposition 4.4.2 The function \( {s}_{\lambda }\left( \mathbf{x}\right) \) is symmetric.
Proof 1. By definition of the Schur functions and Kostka numbers,

\[ {s}_{\lambda } = \mathop{\sum }\limits_{\mu }{K}_{\lambda \mu }{\mathbf{x}}^{\mu }, \]

(4.11)

where the sum is over all compositions \( \mu \) of \( n \). Thus it is enough to show that

\[ {K}_{\lambda \mu } = {K}_{\lambda \widetilde{\mu }} \]

(4.12)

for any rearrangement \( \widetilde{\mu } \) of \( \mu \). But in this case \( {M}^{\mu } \) and \( {M}^{\widetilde{\mu }} \) are isomorphic modules. Thus they have the same decomposition into irreducibles, and (4.12) follows from Young's rule (Theorem 2.11.2).

Proof 2. It suffices to show that

\[ \left( {i, i + 1}\right) {s}_{\lambda }\left( \mathbf{x}\right) = {s}_{\lambda }\left( \mathbf{x}\right) \]

for each adjacent transposition. To this end, we describe an involution on semistandard \( \lambda \)-tableaux

\[ T \rightarrow {T}^{\prime } \]

such that the numbers of \( i \)'s and \( \left( {i + 1}\right) \)'s are exchanged when passing from \( T \) to \( {T}^{\prime } \) (with all other multiplicities staying the same).

Given \( T \), each column contains either an \( i, i + 1 \) pair; exactly one of \( i, i + 1 \); or neither. Call the pairs fixed and all other occurrences of \( i \) or \( i + 1 \) free. In each row switch the number of free \( i \)'s and \( \left( {i + 1}\right) \)'s; i.e., if the row consists of \( k \) free \( i \)'s followed by \( l \) free \( \left( {i + 1}\right) \)'s, then replace them by \( l \) free \( i \)'s followed by \( k \) free \( \left( {i + 1}\right) \)'s. To illustrate, if \( i = 2 \) and

\[ T = \begin{array}{llllllllll} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 & 2 & 3 \\ 2 & 2 & 3 & 3 & 3 & 3 & & & & \end{array}, \]

then the twos and threes in columns 1 through 4 and 7 through 10 are free.
So

\[ {T}^{\prime } = \begin{array}{llllllllll} 1 & 1 & 1 & 1 & 2 & 2 & 2 & 3 & 3 & 3 \\ 2 & 2 & 3 & 3 & 3 & 3 & & & & \end{array}. \]

The new tableau \( {T}^{\prime } \) is still semistandard by the definition of free. Since the fixed \( i \)'s and \( \left( {i + 1}\right) \)'s come in pairs, this map has the desired exchange property. It is also clearly an involution. ∎
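The switching rule (the Bender–Knuth involution) is easy to implement directly; a sketch applied to the example above, encoding a tableau as a list of rows (names mine):

```python
def bender_knuth(T, i):
    """Exchange the multiplicities of i and i+1 in a semistandard
    tableau T (list of rows) by switching the free entries rowwise."""
    T = [row[:] for row in T]
    def fixed(r, c):
        # an i is fixed if i+1 sits directly below; an i+1 if i sits above
        if T[r][c] == i:
            return r + 1 < len(T) and c < len(T[r + 1]) and T[r + 1][c] == i + 1
        return r > 0 and c < len(T[r - 1]) and T[r - 1][c] == i
    for r, row in enumerate(T):
        free = [c for c, v in enumerate(row)
                if v in (i, i + 1) and not fixed(r, c)]
        k = sum(1 for c in free if row[c] == i)  # free i's in this row
        l = len(free) - k                        # free (i+1)'s in this row
        for idx, c in enumerate(free):           # l i's, then k (i+1)'s
            row[c] = i if idx < l else i + 1
    return T
```

Applying the map to \( T \) above with \( i = 2 \) produces exactly the \( {T}^{\prime } \) displayed, and applying it twice returns \( T \).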
Proposition 4.4.3 We have

\[ {s}_{\lambda } = \mathop{\sum }\limits_{{\mu \trianglelefteq \lambda }}{K}_{\lambda \mu }{m}_{\mu }, \]

where the sum is over partitions \( \mu \) (rather than compositions) and \( {K}_{\lambda \lambda } = 1 \).
Proof. By equation (4.11) and the symmetry of the Schur functions, we have

\[ {s}_{\lambda } = \mathop{\sum }\limits_{\mu }{K}_{\lambda \mu }{m}_{\mu }, \]

where the sum is over all partitions \( \mu \). We can prove that

\[ {K}_{\lambda \mu } = \left\{ \begin{array}{ll} 0 & \text{ if }\lambda \ntrianglerighteq \mu , \\ 1 & \text{ if }\lambda = \mu , \end{array}\right. \]

in two different ways.

One is to appeal again to Young's rule and Corollary 2.4.7. The other is combinatorial. If \( {K}_{\lambda \mu } \neq 0 \), then consider a \( \lambda \)-tableau \( T \) of content \( \mu \). Since \( T \) is column-strict, all occurrences of the numbers \( 1,2,\ldots, i \) are in rows 1 through \( i \). This implies that for all \( i \),

\[ {\mu }_{1} + {\mu }_{2} + \cdots + {\mu }_{i} \leq {\lambda }_{1} + {\lambda }_{2} + \cdots + {\lambda }_{i}, \]

i.e., \( \mu \trianglelefteq \lambda \). Furthermore, if \( \lambda = \mu \), then by the same reasoning there is only one tableau of shape and content \( \lambda \), namely, the one where row \( i \) contains all occurrences of \( i \). (Some authors call this tableau superstandard.) ∎
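The vanishing and normalization conditions on the Kostka numbers can be confirmed by brute-force enumeration of semistandard tableaux; a sketch (names and encodings mine; `content[c]` is the number of entries equal to `c + 1`):

```python
def ssyt_count(shape, content):
    """Count semistandard tableaux of the given shape and content:
    the Kostka number K_{shape, content}."""
    cells = [(i, j) for i in range(len(shape)) for j in range(shape[i])]
    def fill(k, vals, remaining):
        if k == len(cells):
            return 1
        i, j = cells[k]
        total = 0
        for v in range(len(remaining)):
            if remaining[v] == 0:
                continue
            if j > 0 and vals[(i, j - 1)] > v:   # rows weakly increase
                continue
            if i > 0 and vals[(i - 1, j)] >= v:  # columns strictly increase
                continue
            vals[(i, j)] = v
            remaining[v] -= 1
            total += fill(k + 1, vals, remaining)
            remaining[v] += 1
        return total
    return fill(0, {}, list(content))
```

For instance, \( {K}_{\left( 2,1\right) \left( 1,1,1\right) } = {f}^{\left( 2,1\right) } = 2 \), \( {K}_{\lambda \lambda } = 1 \), and \( {K}_{\left( 2,1\right) \left( 3\right) } = 0 \) since \( \left( {2,1}\right) \ntrianglerighteq \left( 3\right) \).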
Yes
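Both facts about the Kostka numbers (vanishing unless \( \lambda \trianglerighteq \mu \), and \( K_{\lambda \lambda } = 1 \)) can be confirmed by brute-force enumeration for small shapes. A sketch with our own encoding (partitions as tuples, tableaux as lists of rows):

```python
def ssyt(shape, max_entry):
    """Generate all semistandard tableaux of the given (straight) shape."""
    cells = [(r, k) for r, length in enumerate(shape) for k in range(length)]

    def fill(T, idx):
        if idx == len(cells):
            yield [row[:] for row in T]
            return
        r, c = cells[idx]
        lo = 1
        if c > 0:
            lo = max(lo, T[r][c - 1])        # rows weakly increase
        if r > 0:
            lo = max(lo, T[r - 1][c] + 1)    # columns strictly increase
        for v in range(lo, max_entry + 1):
            T[r][c] = v
            yield from fill(T, idx + 1)

    yield from fill([[0] * k for k in shape], 0)

def kostka(lam, mu):
    """K_{lam mu}: semistandard tableaux of shape lam and content mu."""
    total = 0
    for T in ssyt(lam, len(mu)):
        flat = [x for row in T for x in row]
        if all(flat.count(i + 1) == m for i, m in enumerate(mu)):
            total += 1
    return total

def dominates(lam, mu):
    """lam dominates mu: partial sums of lam are at least those of mu."""
    return all(sum(lam[:i + 1]) >= sum(mu[:i + 1]) for i in range(len(mu)))
```

For instance, \( K_{(2,1),(3)} = 0 \) since \( (2,1) \) does not dominate \( (3) \), while \( K_{(2,1),(2,1)} = 1 \).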
Lemma 4.6.1 Let \( \mu = \left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{l}}\right) \) be any composition. Consider the \( l \times l \) matrices \[ {A}_{\mu } = \left( {x}_{j}^{{\mu }_{i}}\right) ,{H}_{\mu } = \left( {h}_{{\mu }_{i} - l + j}\right) \;\text{ and }\;E = \left( {{\left( -1\right) }^{l - i}{e}_{l - i}^{\left( j\right) }}\right) . \] Then \[ {A}_{\mu } = {H}_{\mu }E \]
Proof. Consider the generating function for the \( {e}_{n}^{\left( j\right) } \) , \[ {E}^{\left( j\right) }\left( t\right) \overset{\text{ def }}{ = }\mathop{\sum }\limits_{{n = 0}}^{{l - 1}}{e}_{n}^{\left( j\right) }{t}^{n} = \mathop{\prod }\limits_{{i \neq j}}\left( {1 + {x}_{i}t}\right) . \] We can now mimic the proof of Theorem 4.3.7, part 3. Since \[ H\left( t\right) {E}^{\left( j\right) }\left( {-t}\right) = \frac{1}{1 - {x}_{j}t} \] we can extract the coefficient of \( {t}^{{\mu }_{i}} \) on both sides. This yields \[ \mathop{\sum }\limits_{{k = 1}}^{l}{h}_{{\mu }_{i} - l + k} \cdot {\left( -1\right) }^{l - k}{e}_{l - k}^{\left( j\right) } = {x}_{j}^{{\mu }_{i}}, \] which is equivalent to what we wished to prove. ∎
Yes
Corollary 4.6.2 Let \( \lambda \) have length \( l \) . Then\n\n\[ \n{s}_{\lambda } = \frac{{a}_{\lambda + \delta }}{{a}_{\delta }} \n\]\n\nwhere all functions are polynomials in \( {x}_{1},\ldots ,{x}_{l} \) .
Proof. Taking determinants in the lemma, we obtain\n\n\[ \n\left| {A}_{\mu }\right| = \left| {H}_{\mu }\right| \cdot \left| E\right| \n\]\n\n(4.19)\n\nwhere \( \left| {A}_{\mu }\right| = {a}_{\mu } \) . First of all, let \( \mu = \delta \) . In this case \( {H}_{\delta } = \left( {h}_{j - i}\right) \), which is upper unitriangular and thus has determinant 1. Plugging this into (4.19) gives \( \left| E\right| = {a}_{\delta } \) .\n\nNow, letting \( \mu = \lambda + \delta \) in the same equation, we have\n\n\[ \n\frac{{a}_{\lambda + \delta }}{{a}_{\delta }} = \left| {H}_{\lambda + \delta }\right| = \left| {h}_{{\lambda }_{i} - i + j}\right| .\n\]\n\nHence we are done, by the Jacobi-Trudi theorem. ∎
Yes
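The bialternant formula can be spot-checked numerically: evaluate \( a_{\lambda + \delta } \) and \( a_{\delta } \) at rational points and compare with the monomial expansion of a small Schur function. A sketch using exact arithmetic (the function names are ours):

```python
from fractions import Fraction
from itertools import permutations

def alternant(exps, xs):
    """det( x_j ** e_i ), expanded over permutations (fine for small n)."""
    n = len(xs)
    total = Fraction(0)
    for p in permutations(range(n)):
        # sign of p via its inversion count
        inv = sum(1 for a in range(n) for b in range(a + 1, n) if p[a] > p[b])
        term = Fraction(1)
        for i in range(n):
            term *= Fraction(xs[p[i]]) ** exps[i]
        total += (-1) ** inv * term
    return total

def schur_bialternant(lam, xs):
    """s_lam(x_1,...,x_l) = a_{lam + delta} / a_delta, delta = (l-1,...,1,0)."""
    l = len(xs)
    lam = list(lam) + [0] * (l - len(lam))
    delta = list(range(l - 1, -1, -1))
    num = alternant([lam[i] + delta[i] for i in range(l)], xs)
    return num / alternant(delta, xs)     # denominator is the Vandermonde
```

At \( (x_1,x_2,x_3) = (2,3,5) \) this gives \( s_{(2,1)} = 280 \), matching the Kostka expansion \( s_{(2,1)} = m_{(2,1)} + 2m_{(1,1,1)} \) evaluated at the same point.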
Theorem 4.6.3 Let \( {\phi }_{\lambda }^{\mu } \) be the character of \( {M}^{\mu } \) evaluated on the class corresponding to \( \lambda \) . Then\n\n\[ \n{p}_{\lambda } = \mathop{\sum }\limits_{{\mu \geq \lambda }}{\phi }_{\lambda }^{\mu }{m}_{\mu }\n\]
Proof. Let \( \lambda = \left( {{\lambda }_{1},{\lambda }_{2},\ldots ,{\lambda }_{l}}\right) \) . Then we can write equation (4.6) as\n\n\[ \n\mathop{\prod }\limits_{i}\left( {{x}_{1}^{{\lambda }_{i}} + {x}_{2}^{{\lambda }_{i}} + \cdots }\right) = \mathop{\sum }\limits_{\mu }{c}_{\lambda \mu }{m}_{\mu }\n\]\n\nPick out the coefficient of \( {\mathbf{x}}^{\mu } \) on both sides, where \( \mu = \left( {{\mu }_{1},{\mu }_{2},\ldots ,{\mu }_{m}}\right) \) . On the right, it is \( {c}_{\lambda \mu } \) . On the left, it is the number of ways to distribute the parts of \( \lambda \) into subpartitions \( {\lambda }^{1},\ldots ,{\lambda }^{m} \) such that\n\n\[ \n{\biguplus }_{i}{\lambda }^{i} = \lambda \text{ and }{\lambda }^{i} \vdash {\mu }_{i}\text{ for all }i\n\]\n\n(4.21)\n\nwhere equal parts of \( \lambda \) are distinguished in order to be considered different in the disjoint union.\n\nNow consider \( {\phi }_{\lambda }^{\mu } = {\phi }^{\mu }\left( \pi \right) \), where \( \pi \in {\mathcal{S}}_{n} \) is an element of cycle type \( \lambda \) . By definition, this character value is the number of fixed-points of the action of \( \pi \) on all standard tabloids \( t \) of shape \( \mu \) . But \( t \) is fixed if and only if each cycle of \( \pi \) lies in a single row of \( t \) . Thus we must distribute the cycles of length \( {\lambda }_{i} \) among the rows of length \( {\mu }_{j} \) subject to exactly the same restrictions as in (4.21). It follows that \( {c}_{\lambda \mu } = {\phi }_{\lambda }^{\mu } \), as desired. ∎
Yes
Theorem 4.6.4 If \( \lambda \vdash n \), then\n\n\[ \n{s}_{\lambda } = \frac{1}{n!}\mathop{\sum }\limits_{{\pi \in {\mathcal{S}}_{n}}}{\chi }^{\lambda }\left( \pi \right) {p}_{\pi }\n\]\n\n(4.22)
There are several other ways to write equation (4.22). Since \( {\chi }^{\lambda } \) is a class function, we can collect terms and obtain\n\n\[ \n{s}_{\lambda } = \frac{1}{n!}\mathop{\sum }\limits_{\mu }{k}_{\mu }{\chi }_{\mu }^{\lambda }{p}_{\mu }\n\]\n\nwhere \( {k}_{\mu } = \left| {K}_{\mu }\right| \) and \( {\chi }_{\mu }^{\lambda } \) is the value of \( {\chi }^{\lambda } \) on \( {K}_{\mu } \) . Alternatively, we can use formula (1.2) to express \( {s}_{\lambda } \) in terms of the size of centralizers:\n\n\[ \n{s}_{\lambda } = \mathop{\sum }\limits_{\mu }\frac{1}{{z}_{\mu }}{\chi }_{\mu }^{\lambda }{p}_{\mu }\n\]\n\n(4.23)
Yes
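Equation (4.23) can be checked directly for \( \lambda = (2,1) \) and \( n = 3 \). The character values and centralizer orders below are the standard \( \mathcal{S}_3 \) data, hard-coded rather than computed:

```python
from fractions import Fraction

# chi^(2,1) on the classes of S_3, indexed by cycle type mu, and z_mu
chi = {(1, 1, 1): 2, (2, 1): 0, (3,): -1}
z   = {(1, 1, 1): 6, (2, 1): 2, (3,): 3}

def power_sum(mu, xs):
    """p_mu(xs) = product over parts of p_k(xs)."""
    out = Fraction(1)
    for part in mu:
        out *= sum(Fraction(x) ** part for x in xs)
    return out

def schur_21(xs):
    """s_(2,1) via equation (4.23): sum over mu of chi_mu / z_mu times p_mu."""
    return sum(Fraction(chi[mu], z[mu]) * power_sum(mu, xs) for mu in chi)
```

At \( (2,3,5) \) this agrees with the bialternant value \( 280 \), and in two variables \( s_{(2,1)}(x,y) = xy(x+y) \) gives \( 2 \) at \( (1,1) \).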
Theorem 4.7.4 The map \( \operatorname{ch} : R \rightarrow \Lambda \) is an isomorphism of algebras.
Proof. By Proposition 4.7.2, it suffices to check that products are preserved. If \( \chi \) and \( \psi \) are characters in \( {\mathcal{S}}_{n} \) and \( {\mathcal{S}}_{m} \), respectively, then using part 2 of Theorem 1.11.2 yields\n\n\[ \operatorname{ch}\left( {\chi \cdot \psi }\right) \; = \;\langle \chi \cdot \psi, p{\rangle }^{\prime } \]\n\n\[ = {\left\langle \left( \chi \otimes \psi \right) { \uparrow }^{{\mathcal{S}}_{n + m}}, p\right\rangle }^{\prime } \]\n\n\[ = \langle \chi \otimes \psi, p{ \downarrow }_{{\mathcal{S}}_{n} \times {\mathcal{S}}_{m}}{\rangle }^{\prime } \]\n\n\[ = \frac{1}{n!m!}\mathop{\sum }\limits_{{{\pi \sigma } \in {\mathcal{S}}_{n} \times {\mathcal{S}}_{m}}}\left( {\chi \otimes \psi }\right) \left( {\pi \sigma }\right) {p}_{\pi \sigma } \]\n\n\[ = \frac{1}{n!m!}\mathop{\sum }\limits_{\substack{{\pi \in {\mathcal{S}}_{n}} \\ {\sigma \in {\mathcal{S}}_{m}} }}\chi \left( \pi \right) \psi \left( \sigma \right) {p}_{\pi }{p}_{\sigma } \]\n\n\[ = \left\lbrack {\frac{1}{n!}\mathop{\sum }\limits_{{\pi \in {\mathcal{S}}_{n}}}\chi \left( \pi \right) {p}_{\pi }}\right\rbrack \left\lbrack {\frac{1}{m!}\mathop{\sum }\limits_{{\sigma \in {\mathcal{S}}_{m}}}\psi \left( \sigma \right) {p}_{\sigma }}\right\rbrack \]\n\n\[ = \operatorname{ch}\left( \chi \right) \operatorname{ch}\left( \psi \right) \text{. ∎} \]
Yes
There is a bijection between generalized permutations and pairs of semistandard tableaux of the same shape, such that \( \operatorname{cont}\check{\pi } = \operatorname{cont}T \) and \( \operatorname{cont}\widehat{\pi } = \operatorname{cont}U \) .
Proof.
No
Theorem 4.8.5 ([Knu 70]) There is a bijection between \( \pi \in {\mathrm{{GP}}}^{\prime } \) and pairs \( \left( {T, U}\right) \) of tableaux of the same shape with \( T,{U}^{t} \) semistandard, such that \( \operatorname{cont}\check{\pi } = \operatorname{cont}T \) and \( \operatorname{cont}\widehat{\pi } = \operatorname{cont}U \) .
Proof.
No
Theorem 4.8.7 If \( M \in \operatorname{Mat} \) and \( M\overset{\mathrm{R} - \mathrm{S} - \mathrm{K}}{ \leftrightarrow }\left( {T, U}\right) \), then
\[ {M}^{t}\overset{\mathrm{R} - \mathrm{S} - \mathrm{K}}{ \leftrightarrow }\left( {U, T}\right) \text{. ∎} \]
Yes
Theorem 4.8.9 ([Knu 70]) Two generalized permutations are Knuth equivalent if and only if they have the same \( T \) -tableau.
No
Theorem 4.8.11 ([Schü 76]) Let \( T \) and \( U \) be skew semistandard tableaux. Then \( T \) and \( U \) have Knuth equivalent row words if and only if they are connected by a sequence of slides. Furthermore, any such sequence bringing them to normal shape results in the first output tableau of the Robinson-Schensted-Knuth correspondence.
No
Theorem 4.8.12 If \( T \) and \( U \) are semistandard of the same normal shape, then \( T \cong U \) .
No
Proposition 4.9.1 ([Mac 79]) Define \( {s}_{\lambda }\left( {\mathbf{x},\mathbf{y}}\right) = {s}_{\lambda }\left( {{x}_{1},{x}_{2},\ldots ,{y}_{1},{y}_{2},\ldots }\right) \) . Then \[ {s}_{\lambda }\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\sum }\limits_{{\mu \subseteq \lambda }}{s}_{\mu }\left( \mathbf{x}\right) {s}_{\lambda /\mu }\left( \mathbf{y}\right) \] \( \left( {4.27}\right) \)
Proof. The function \( {s}_{\lambda }\left( {\mathbf{x},\mathbf{y}}\right) \) enumerates semistandard fillings of the diagram \( \lambda \) with letters from the totally ordered alphabet \[ \left\{ {1 < 2 < 3 < \cdots < {1}^{\prime } < {2}^{\prime } < {3}^{\prime } < \cdots }\right\} . \] In any such tableau, the unprimed numbers (which are weighted by the \( x \) ’s) form a subtableau of shape \( \mu \) in the upper left corner of \( \lambda \), whereas the primed numbers (weighted by the \( y \) ’s) fill the remaining squares of \( \lambda /\mu \) . The right-hand side of (4.27) is the generating function for this description of the relevant tableaux. ∎
Yes
Theorem 4.9.2 If the \( {c}_{\mu \nu }^{\lambda } \) are Littlewood-Richardson coefficients, where \( \left| \mu \right| + \left| \nu \right| = \left| \lambda \right| \), then\n\n\[ \n{s}_{\lambda /\mu } = \mathop{\sum }\limits_{\nu }{c}_{\mu \nu }^{\lambda }{s}_{\nu }\n\]
Proof. Bring in yet a third set of variables \( \mathbf{z} = \left\{ {{z}_{1},{z}_{2},\ldots }\right\} \) . By using the previous proposition and Cauchy's formula (Theorem 4.8.4),\n\n\[ \n\mathop{\sum }\limits_{{\lambda ,\mu }}{s}_{\mu }\left( \mathbf{x}\right) {s}_{\lambda /\mu }\left( \mathbf{y}\right) {s}_{\lambda }\left( \mathbf{z}\right) = \mathop{\sum }\limits_{\lambda }{s}_{\lambda }\left( {\mathbf{x},\mathbf{y}}\right) {s}_{\lambda }\left( \mathbf{z}\right)\n\]\n\n\[ \n= \mathop{\prod }\limits_{{i, j}}\frac{1}{1 - {x}_{i}{z}_{j}}\frac{1}{1 - {y}_{i}{z}_{j}}\n\]\n\n\[ \n= \left\lbrack {\mathop{\sum }\limits_{\mu }{s}_{\mu }\left( \mathbf{x}\right) {s}_{\mu }\left( \mathbf{z}\right) }\right\rbrack \left\lbrack {\mathop{\sum }\limits_{\nu }{s}_{\nu }\left( \mathbf{y}\right) {s}_{\nu }\left( \mathbf{z}\right) }\right\rbrack\n\]\n\n\[ \n= \mathop{\sum }\limits_{{\mu ,\nu }}{s}_{\mu }\left( \mathbf{x}\right) {s}_{\nu }\left( \mathbf{y}\right) {s}_{\mu }\left( \mathbf{z}\right) {s}_{\nu }\left( \mathbf{z}\right)\n\]\n\nTaking the coefficient of \( {s}_{\mu }\left( \mathbf{x}\right) {s}_{\nu }\left( \mathbf{y}\right) {s}_{\lambda }\left( \mathbf{z}\right) \) on both sides and comparing with equation (4.26) completes the proof. -
Yes
Lemma 4.9.5 If we can go from \( T \) to \( {T}^{\prime } \) by a sequence of slides, then \( {\pi }_{T} \) is a reverse lattice permutation if and only if \( {\pi }_{{T}^{\prime }} \) is.
Proof. It suffices to consider the case where \( T \) and \( {T}^{\prime } \) differ by a single move. If the move is horizontal, the row word doesn't change, so consider a vertical move. Suppose\n\n![fe1808d3-ed76-4667-ba97-eb284d29fcc8_191_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_191_0.jpg)\n\nand\n\n![fe1808d3-ed76-4667-ba97-eb284d29fcc8_191_1.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_191_1.jpg)\n\nwhere \( {R}_{l} \) and \( {S}_{l} \) (respectively, \( {R}_{r} \) and \( {S}_{r} \) ) are the left (respectively, right) portions of the two rows between which \( x \) is moving. We show that if \( \sigma = {\pi }_{T}^{r} \) is a ballot sequence, then so is \( {\sigma }^{\prime } = {\pi }_{{T}^{\prime }}^{r} \) . (The proof of the reverse implication is similar.)\n\nClearly, we need only check that the number of \( \left( {x + 1}\right) \) ’s does not exceed the number of \( x \) ’s in any prefix of \( {\sigma }^{\prime } \) ending with an element of \( {R}_{l} \) or \( {S}_{r} \) . To do this, we show that each \( x + 1 \) in such a prefix can be injectively matched with an \( x \) that comes before it. If the \( x + 1 \) occurs in \( {R}_{r} \) or a higher row, then it can be matched with an \( x \) because \( \sigma \) is a ballot sequence. Notice that all these \( x \) ’s must be in higher rows because the \( x \) ’s in \( {R}_{r} \) come after the \( \left( {x + 1}\right) \) ’s in \( {R}_{r} \) when listed in the reverse row word. By semistandardness, there are no \( \left( {x + 1}\right) \) ’s in \( {R}_{l} \) to be matched. For the same reason, every \( x + 1 \) in \( {S}_{r} \) must have an \( x \) in \( {R}_{r} \) just above it. Since these \( x \) ’s have not been previously used in our matching, we are done. ∎
Yes
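The ballot-sequence condition, and the Littlewood-Richardson coefficients defined by counting ballot fillings of skew shapes, can both be computed by brute force on small examples. A sketch with our own encodings (skew tableaux as dictionaries keyed by cell):

```python
from itertools import product

def is_ballot(word):
    """Reverse lattice permutation: in every prefix, each i occurs at
    least as often as i+1."""
    seen = {}
    for x in word:
        seen[x] = seen.get(x, 0) + 1
        if x > 1 and seen[x] > seen.get(x - 1, 0):
            return False
    return True

def lr_coeff(lam, mu, nu):
    """c^lam_{mu,nu} by brute force: semistandard fillings of lam/mu of
    content nu whose reverse row word is a ballot sequence."""
    rows = [(mu[r] if r < len(mu) else 0, lam[r]) for r in range(len(lam))]
    cells = [(r, c) for r, (lo, hi) in enumerate(rows) for c in range(lo, hi)]
    count = 0
    for vals in product(range(1, len(nu) + 1), repeat=len(cells)):
        if any(vals.count(i + 1) != m for i, m in enumerate(nu)):
            continue
        T = dict(zip(cells, vals))
        if any((r, c - 1) in T and T[(r, c - 1)] > v for (r, c), v in T.items()):
            continue  # rows must weakly increase
        if any((r - 1, c) in T and T[(r - 1, c)] >= v for (r, c), v in T.items()):
            continue  # columns must strictly increase
        # reverse row word: rows top to bottom, each read right to left
        word = [T[(r, c)] for r, (lo, hi) in enumerate(rows)
                for c in reversed(range(lo, hi))]
        if is_ballot(word):
            count += 1
    return count
```

For example, \( s_{(2,1)/(1)} = s_{(2)} + s_{(1,1)} \) corresponds to \( c^{(2,1)}_{(1),(2)} = c^{(2,1)}_{(1),(1,1)} = 1 \).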
Theorem 4.10.2 (Murnaghan-Nakayama Rule [Mur 37, Nak 40]) If \( \lambda \) is a partition of \( n \) and \( \alpha = \left( {{\alpha }_{1},\ldots ,{\alpha }_{k}}\right) \) is a composition of \( n \), then we have\n\n\[ \n{\chi }_{\alpha }^{\lambda } = \mathop{\sum }\limits_{\xi }{\left( -1\right) }^{{ll}\left( \xi \right) }{\chi }_{\alpha \smallsetminus {\alpha }_{1}}^{\lambda \smallsetminus \xi }\n\]\n\nwhere the sum runs over all rim hooks \( \xi \) of \( \lambda \) having \( {\alpha }_{1} \) cells, and \( {ll}\left( \xi \right) \) denotes the leg length of \( \xi \) .
Proof (of Theorem 4.10.2). Let \( m = {\alpha }_{1} \) . Consider \( {\pi \sigma } \in {\mathcal{S}}_{n - m} \times {\mathcal{S}}_{m} \subseteq {\mathcal{S}}_{n} \),\n\nwhere \( \pi \) has type \( \left( {{\alpha }_{2},\ldots ,{\alpha }_{k}}\right) \) and \( \sigma \) is an \( m \) -cycle. By part 2 of Theorem 1.11.3, the characters \( {\chi }^{\mu } \otimes {\chi }^{\nu } \), where \( \mu \vdash n - m,\nu \vdash m \), form a basis for the class functions on \( {\mathcal{S}}_{n - m} \times {\mathcal{S}}_{m} \) . So\n\n\[ \n{\chi }_{\alpha }^{\lambda } = {\chi }^{\lambda }\left( {\pi \sigma }\right) = {\chi }^{\lambda }{ \downarrow }_{{\mathcal{S}}_{n - m} \times {\mathcal{S}}_{m}}\left( {\pi \sigma }\right) = \mathop{\sum }\limits_{\substack{{\mu \vdash n - m} \\ {\nu \vdash m} }}{m}_{\mu \nu }{\chi }^{\mu }\left( \pi \right) {\chi }^{\nu }\left( \sigma \right) \n\]\n\nfor certain multiplicities \( {m}_{\mu \nu } \) .
No
Lemma 4.10.3 If \( \nu \vdash m \), then\n\n\[{\chi }_{\left( m\right) }^{\nu } = \left\{ \begin{array}{ll} {\left( -1\right) }^{m - r} & \text{ if }\nu = \left( {r,{1}^{m - r}}\right) , \\ 0 & \text{ otherwise. } \end{array}\right.\n\]
Proof (of Lemma 4.10.3). By equation (4.23), \( {\chi }_{\left( m\right) }^{\nu } \) is \( {z}_{\left( m\right) } = m \) times the coefficient of \( {p}_{m} \) in\n\n\[{s}_{\nu } = \mathop{\sum }\limits_{\mu }\frac{1}{{z}_{\mu }}{\chi }_{\mu }^{\nu }{p}_{\mu }\n\]\n\nUsing the complete homogeneous Jacobi-Trudi determinant (Theorem 4.5.1), we obtain\n\n\[{s}_{\nu } = {\left| {h}_{{\nu }_{i} - i + j}\right| }_{l \times l} = \mathop{\sum }\limits_{\kappa } \pm {h}_{\kappa }\n\]\n\nwhere the sum is over all compositions \( \kappa = \left( {{\kappa }_{1},\ldots ,{\kappa }_{l}}\right) \) that occur as a term in the determinant. But each \( {h}_{{\kappa }_{i}} \) in \( {h}_{\kappa } \) can be written as a linear combination of power sums. So, since the \( p \) ’s are a multiplicative basis, the resulting linear combination for \( {h}_{\kappa } \) will not contain \( {p}_{m} \) unless \( \kappa \) contains exactly one nonzero part, which must, of course, be \( m \) . Hence \( {\chi }_{\left( m\right) }^{\nu } \neq 0 \) only when \( {h}_{m} \) appears in the preceding determinant.\n\nThe largest index to appear in this determinant is at the end of the first row, and \( {\nu }_{1} - 1 + l = {h}_{1,1} \), the hooklength of cell \( \left( {1,1}\right) \) . Furthermore, we always have \( m = \left| \nu \right| \geq {h}_{1,1} \) . Thus \( {\chi }_{\left( m\right) }^{\nu } \) is nonzero only when \( {h}_{1,1} = m \), i.e., when \( \nu \) is a hook \( \left( {r,{1}^{m - r}}\right) \) . 
In this case, we have\n\n\[{s}_{\nu } = \left| \begin{array}{llllll} {h}_{r} & \cdots & & & {h}_{m} & \\ {h}_{0} & {h}_{1} & \cdots & & & \\ 0 & {h}_{0} & {h}_{1} & \cdots & & \\ 0 & 0 & {h}_{0} & {h}_{1} & \cdots & \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \end{array}\right|\n\]\n\n\( = {\left( -1\right) }^{m - r}{h}_{m} + \) other terms not involving \( {p}_{m} \) .\n\nBut \( {h}_{m} = {s}_{\left( m\right) } \) corresponds to the trivial character, so comparing coefficients of \( {p}_{m}/m \) in this last set of equalities yields \( {\chi }_{\left( m\right) }^{\nu } = {\left( -1\right) }^{m - r} \), as desired. ∎
Yes
Lemma 4.10.4 Let \( \lambda \vdash n,\mu \vdash n - m \), and \( \nu = \left( {r,{1}^{m - r}}\right) \). Then \( {c}_{\mu \nu }^{\lambda } = 0 \) unless each edgewise connected component of \( \lambda /\mu \) is a rim hook. In that case, if there are \( k \) component hooks spanning a total of \( c \) columns, then\n\n\[ \n{c}_{\mu \nu }^{\lambda } = \left( \begin{matrix} k - 1 \\ c - r \end{matrix}\right) \n\]
Proof. By the Littlewood-Richardson rule (Theorem 4.9.4), \( {c}_{\mu \nu }^{\lambda } \) is the number of semistandard tableaux \( T \) of shape \( \lambda /\mu \) containing \( r \) ones and a single copy each of \( 2,3,\ldots, m - r + 1 \) such that \( {\pi }_{T} \) is a reverse lattice permutation. Thus the numbers greater than one in \( {\pi }_{T}^{r} \) must occur in increasing order. This condition, together with semistandardness, puts the following constraints on \( T \) :\n\nT1. Any cell of \( T \) having a cell to its right must contain a one.\n\nT2. Any cell of \( T \) having a cell above must contain an element bigger than one.\n\nIf \( T \) contains a \( 2 \times 2 \) block of squares, as in (4.31), then there is no way to fill the lower left cell and satisfy both T1 and T2. Thus \( {c}_{\mu \nu }^{\lambda } = 0 \) if the components of the shape of \( T \) are not rim hooks.\n\nNow suppose \( \lambda /\mu = {\biguplus }_{i = 1}^{k}{\xi }^{\left( i\right) } \), where each \( {\xi }^{\left( i\right) } \) is a component skew hook. Conditions T1, T2 and the fact that 2 through \( m - r + 1 \) increase in \( {\pi }_{T}^{r} \) show that every rim hook must have the form\n\n![fe1808d3-ed76-4667-ba97-eb284d29fcc8_197_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_197_0.jpg)\n\nwhere \( d > 1 \) is the smallest number that has not yet appeared in \( {\pi }_{T}^{r} \) and \( b \) is either 1 or \( d - 1 \). Thus all the entries in \( {\xi }^{\left( i\right) } \) are determined once we choose the value of \( b \). Furthermore, in \( {\xi }^{\left( 1\right) } \) we must have \( b = 1 \). By T2 we have \( c - \left( {k - 1}\right) \) ones already fixed in \( T \). Hence there are \( r - c + k - 1 \) ones left to distribute among the \( k - 1 \) cells marked with a \( b \). 
The number of ways this\n\ncan be done is\n\[ \n{c}_{\mu \nu }^{\lambda } = \left( \begin{matrix} k - 1 \\ r - c + k - 1 \end{matrix}\right) = \left( \begin{matrix} k - 1 \\ c - r \end{matrix}\right) .\n\]
Yes
Corollary 4.10.6 Let \( \lambda \) be a partition of \( n \) and let \( \alpha = \left( {{\alpha }_{1},\ldots ,{\alpha }_{k}}\right) \) be any composition of \( n \) . Then\n\n\[ \n{\chi }_{\alpha }^{\lambda } = \mathop{\sum }\limits_{T}{\left( -1\right) }^{T}\n\]\n\nwhere the sum is over all rim hook tableaux of shape \( \lambda \) and content \( \alpha \) .
No
Proposition 5.1.3 The poset \( Y \) satisfies the following two conditions.\n\n(1) If \( \lambda \in Y \) covers \( k \) elements for some \( k \), then it is covered by \( k + 1 \) elements.\n\n(2) If \( \lambda \neq \mu \) and \( \lambda \) and \( \mu \) both cover \( l \) elements for some \( l \), then they are both covered by \( l \) elements. In fact, \( l \leq 1 \) .
Proof. (1) The elements covered by \( \lambda \) are just the partitions \( {\lambda }^{ - } \) obtained by removing an inner corner of \( \lambda \), while those covering \( \lambda \) are the \( {\lambda }^{ + } \) obtained by adding an outer corner. But along the rim of \( \lambda \), inner and outer corners alternate beginning and ending with an outer corner. So the number of \( {\lambda }^{ + } \) is one greater than the number of \( {\lambda }^{ - } \) .\n\n(2) If \( \lambda ,\mu \) both cover an element of \( Y \), then this element must be \( \lambda \land \mu \) . So \( l \leq 1 \) . Also, \( l = 1 \) if and only if \( \left| \lambda \right| = \left| \mu \right| = n \) and \( \left| {\lambda \cap \mu }\right| = n - 1 \) for some \( n \) . But this is equivalent to \( \left| \lambda \right| = \left| \mu \right| = n \) and \( \left| {\lambda \cup \mu }\right| = n + 1 \), i.e., \( \lambda ,\mu \) are both covered by a single element. ∎
Yes
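The corner count in part (1) can be confirmed computationally. A sketch, with our own function names (`removable_rows` lists inner corners, `addable_rows` outer corners):

```python
def removable_rows(lam):
    """Inner corners: rows whose last cell can be removed."""
    return [i for i in range(len(lam))
            if i == len(lam) - 1 or lam[i] > lam[i + 1]]

def addable_rows(lam):
    """Outer corners: rows (including a new bottom row) where a cell fits."""
    padded = list(lam) + [0]
    return [i for i in range(len(padded)) if i == 0 or padded[i] < padded[i - 1]]

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest
```

Checking every partition of each \( n \leq 8 \) confirms that the outer corners always outnumber the inner corners by exactly one.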
Proposition 5.1.5 The operators \( D, U \) satisfy\n\n\[ \n{DU} - {UD} = I \n\]\n\nwhere \( I \) is the identity map.
Proof. By linearity, it suffices to prove that this equation holds when applied to a single \( \mathbf{\lambda } \in \mathbb{C}\mathbf{Y} \) . But by the previous proposition\n\n\[ \n{DU}\left( \mathbf{\lambda }\right) = \mathop{\sum }\limits_{\mathbf{\mu }}\mathbf{\mu } + \left( {k + 1}\right) \mathbf{\lambda } \n\]\n\nwhere the sum is over all \( \mathbf{\mu } \neq \mathbf{\lambda } \) such that there is an element of \( Y \) covering both, and \( k + 1 \) is the number of elements covering \( \lambda \) . In the same way,\n\n\[ \n{UD}\left( \mathbf{\lambda }\right) = \mathop{\sum }\limits_{\mathbf{\mu }}\mathbf{\mu } + k\mathbf{\lambda } \n\]\n\nand taking the difference completes the proof. ∎
Yes
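The relation \( DU - UD = I \) can be verified directly on formal sums of partitions, encoded here as dictionaries from partitions to coefficients (a sketch; the names are ours):

```python
def up_set(lam):
    """Partitions covering lam in Young's lattice (add an outer corner)."""
    padded = list(lam) + [0]
    out = []
    for i in range(len(padded)):
        if i == 0 or padded[i] < padded[i - 1]:
            new = padded[:]
            new[i] += 1
            out.append(tuple(x for x in new if x))
    return out

def down_set(lam):
    """Partitions covered by lam (remove an inner corner)."""
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            new = list(lam)
            new[i] -= 1
            out.append(tuple(x for x in new if x))
    return out

def apply_op(op, vec):
    """Extend a cover operation linearly to formal sums (dicts)."""
    out = {}
    for lam, coeff in vec.items():
        for mu in op(lam):
            out[mu] = out.get(mu, 0) + coeff
    return out

def commutator(lam):
    """(DU - UD) applied to a single partition, zero terms dropped."""
    v = {lam: 1}
    du = apply_op(down_set, apply_op(up_set, v))
    ud = apply_op(up_set, apply_op(down_set, v))
    diff = {}
    for k in set(du) | set(ud):
        c = du.get(k, 0) - ud.get(k, 0)
        if c:
            diff[k] = c
    return diff
```

For every partition tried, the commutator collapses to the partition itself with coefficient 1, as the proposition asserts.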
Corollary 5.1.6 Let \( p\left( x\right) \in \mathbb{C}\left\lbrack x\right\rbrack \) be a polynomial. Then\n\n\[ \n{Dp}\left( U\right) = {p}^{\prime }\left( U\right) + p\left( U\right) D \n\]\n\nwhere \( {p}^{\prime }\left( x\right) \) is the derivative.
Proof. By linearity, it suffices to prove this identity for the powers \( {U}^{n}, n \geq 0 \) . The case \( n = 0 \) is trivial. So, applying induction yields\n\n\[ \nD{U}^{n + 1} = \left( {D{U}^{n}}\right) U \n\]\n\n\[ \n= \;\left( {n{U}^{n - 1} + {U}^{n}D}\right) U \n\]\n\n\[ \n= n{U}^{n} + {U}^{n}\left( {I + {UD}}\right) \n\]\n\n\[ \n= \;\left( {n + 1}\right) {U}^{n} + {U}^{n + 1}D\text{. ∎} \n\]
Yes
Theorem 5.1.8 We have\n\n\[ \mathop{\sum }\limits_{{\lambda \vdash n}}{\left( {f}^{\lambda }\right) }^{2} = n! \]
Proof. In view of equation (5.5) we need only show that \( {D}^{n}{U}^{n}\left( \varnothing \right) = n!\varnothing \) . We will do this by induction, it being clear for \( n = 0 \) . But using Corollary 5.1.6 and the fact that \( \varnothing \) is the minimal element of \( Y \), we obtain\n\n\[ {D}^{n}{U}^{n}\left( \varnothing \right) = {D}^{n - 1}\left( {D{U}^{n}}\right) \left( \varnothing \right) \]\n\n\[ = {D}^{n - 1}\left( {n{U}^{n - 1} + {U}^{n}D}\right) \left( \varnothing \right) \]\n\n\[ = n{D}^{n - 1}{U}^{n - 1}\left( \varnothing \right) + {D}^{n - 1}{U}^{n}D\left( \varnothing \right) \]\n\n\[ = n\left( {n - 1}\right) !\varnothing + \mathbf{0}\text{. ∎} \]
Yes
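The identity \( \sum_{\lambda \vdash n} (f^{\lambda})^2 = n! \) is easy to confirm for small \( n \) by counting saturated \( \varnothing \)-\( \lambda \) chains recursively (equivalently, standard tableaux); a sketch:

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def num_syt(lam):
    """f^lam: saturated chains from the empty shape, counted by removing
    an inner corner at each step."""
    if not lam:
        return 1
    total = 0
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            smaller = tuple(x for x in lam[:i] + (lam[i] - 1,) + lam[i + 1:] if x)
            total += num_syt(smaller)
    return total

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    if largest is None:
        largest = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, largest), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest
```

For instance \( f^{(3,2)} = 5 \), and the sum of squares over all partitions of \( n \) equals \( n! \) for each \( n \leq 7 \).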
Lemma 5.1.11 If \( A \) is a poset satisfying DP1 and DP3, then \( l \leq 1 \) .
Proof. Suppose to the contrary that there are elements \( a, b \in A \) with \( l \geq 2 \) , and pick a pair of minimal rank. Then \( a, b \) both cover \( c, d \) one rank lower. But then \( c, d \) are both covered by \( a, b \) and so have an \( l \) value at least 2, a contradiction. ∎
No
Lemma 5.1.12 If \( A \) is differential, then the nth rank, \( {A}_{n} \), is finite for all \( n \geq 0 \) .
Proof. Induct on \( n \), where the case \( n = 0 \) is taken care of by DP1. Assume that all ranks up through \( {A}_{n} \) are finite. Any \( a \in {A}_{n} \) can cover at most \( \left| {A}_{n - 1}\right| \) elements, and so can be covered by at most \( \left| {A}_{n - 1}\right| + 1 \) elements by DP2. It follows that\n\n\[ \left| {A}_{n + 1}\right| \leq \left| {A}_{n}\right| \left( {\left| {A}_{n - 1}\right| + 1}\right) \]\n\nforcing \( {A}_{n + 1} \) to be finite. ∎
Yes
Proposition 5.1.14 Let \( A \) be a graded poset with \( {A}_{n} \) finite for all \( n \geq 0 \) . Then the following are equivalent.
1. \( A \) is differential.\n\n2. \( {DU} - {UD} = I \) . ∎
No
Theorem 5.1.15 ([Stn 88]) In any differential poset \( A \) , \n\n\[ \mathop{\sum }\limits_{{a \in {A}_{n}}}{\left( {f}^{a}\right) }^{2} = n! \]\n\nwhere \( {f}^{a} \) is the number of saturated \( \varnothing - a \) chains.
No
Lemma 5.2.2 Rules LR1-3 construct a well-defined growth \( {g}_{\pi } : {C}_{n}^{2} \rightarrow Y \) in that \( \rho \) is a partition and satisfies \( \rho \succcurlyeq \mu ,\nu \) .
Proof. The only case where it is not immediate that \( \rho \) is a partition is LR2. But by the way \( \mu \) is obtained from \( \lambda \) we have \( {\mu }_{i} > {\mu }_{i + 1} \) . So adding 1 to \( {\mu }_{i + 1} \) will not disturb the fact that the parts are weakly decreasing.\n\nIt is clear that \( \rho \) covers or equals \( \mu ,\nu \) except in LR1. Since \( \lambda \preccurlyeq \mu ,\nu \) by induction and \( \mu \neq \nu \), there are three possibilities, namely \( \lambda = \mu \prec \nu \) , \( \lambda = \nu \prec \mu \), or \( \lambda \prec \mu ,\nu \) . It is easy to check in each case (recalling the proof of (2) in Proposition 5.1.3 for the third) that \( \rho \succcurlyeq \mu ,\nu \) . ∎
Yes
Lemma 5.2.3 The growth \( {g}_{\pi } \) has the following properties.\n\n1. \( {g}_{\pi }\left( {i, j - 1}\right) \prec {g}_{\pi }\left( {i, j}\right) \) if and only if there is an \( X \) in square \( \left( {k, j}\right) \) for some \( k \leq i \).\n\n2. \( {g}_{\pi }\left( {i - 1, j}\right) \prec {g}_{\pi }\left( {i, j}\right) \) if and only if there is an \( X \) in square \( \left( {i, l}\right) \) for some \( l \leq j \). ∎
Proof. We will prove that \( {P}_{\pi } = P\left( \pi \right) \) since the equation \( {Q}_{\pi } = Q\left( \pi \right) \) is derived using similar reasoning.\n\nFirst we will associate with each \( \left( {i, j}\right) \) in \( {C}_{n}^{2} \) a permutation \( {\pi }_{i, j} \) and a tableau \( {P}_{i, j} \). Note that in \( {P}_{i, j} \), the subscripts refer to the element of \( {C}_{n}^{2} \) and not the entry of the tableau in row \( i \) and column \( j \) as is our normal convention.\n\nSince this notation will be used only in the current proof, no confusion will result. Define \( {\pi }_{i, j} \) as the lower line of the partial permutation corresponding to those \( X \) ’s southwest of \( \left( {i, j}\right) \) in the diagram for \( {g}_{\pi } \). In our usual example,\n\n\[ {\pi }_{6,3} = {231}\text{.}\]\n\nWe also have a sequence of shapes\n\n\[ {\lambda }_{i,0} \preccurlyeq {\lambda }_{i,1} \preccurlyeq \cdots \preccurlyeq {\lambda }_{i, j} \]\n\nwhere \( {\lambda }_{i, l} = {g}_{\pi }\left( {i, l}\right) \). So \( \left| {{\lambda }_{i, l}/{\lambda }_{i, l - 1}}\right| = 0 \) or 1 for \( 1 \leq l \leq j \). In the latter case, label the box of \( {\lambda }_{i, l}/{\lambda }_{i, l - 1} \) with \( l \) to obtain the tableau \( {P}_{i, j} \). These tableaux are illustrated for our example in the following figure:\n\n![fe1808d3-ed76-4667-ba97-eb284d29fcc8_214_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_214_0.jpg)\n\nIt is clear from the definitions that \( {\pi }_{n, n} = \pi \) and \( {P}_{n, n} = {P}_{\pi } \). So it suffices to prove that \( {P}_{i, j} = P\left( {\pi }_{i, j}\right) \) for all \( i, j \leq n \). We will do this by induction on \( i + j \). It is obvious if \( i = 0 \) or \( j = 0 \). The proof breaks down into three cases depending on the coordinates \( \left( {i, l}\right) \) of the \( X \) in row \( i \), and \( \left( {k, j}\right) \) of the \( X \) in column \( j \).\n\nCase 1: \( l > j \). (The case \( k > i \) is similar.) 
By Lemma 5.2.3, we have \( {\lambda }_{i, j - 1} = {\lambda }_{i, j} \), so \( {P}_{i, j - 1} = {P}_{i, j} \). Also, \( {\pi }_{i, j - 1} = {\pi }_{i, j} \). Since \( {P}_{i, j - 1} = P\left( {\pi }_{i, j - 1}\right) \) by induction, we are done with this case.\n\nCase 2: \( l = j \). (Note that this forces \( k = i \).) By Lemma 5.2.3 again, \( {\lambda }_{i - 1, j - 1} = {\lambda }_{i, j - 1} = {\lambda }_{i - 1, j} \). So the second option in LR3 applies, and \( {P}_{i, j} \) is \( {P}_{i, j - 1} \) with \( j \) placed at the end of the first row. Furthermore, \( {\pi }_{i, j} \)
Yes
Theorem 5.2.6 The map\n\n\[ \pi \rightarrow \left( {{\mathcal{C}}_{\pi },{\mathcal{D}}_{\pi }}\right) \]\n\nis a bijection between \( {\mathcal{S}}_{n} \) and pairs of saturated \( \widehat{0} - a \) chains as a varies over all elements of rank \( n \) in the differential poset \( A \) .
Proof. Since both \( \mathcal{C} \) and \( \mathcal{D} \) end at \( \left( {n, n}\right) \in {C}_{n}^{2} \), it is clear that the corresponding chains end at the same element which must be of rank \( n \) .\n\nTo show that this is a bijection, we construct an inverse. Given saturated chains\n\n\[ \mathcal{C} : \widehat{0} = {c}_{0} \prec {c}_{1} \prec \cdots \prec {c}_{n} \]\n\n\[ \mathcal{D} : \widehat{0} = {d}_{0} \prec {d}_{1} \prec \cdots \prec {d}_{n} \]\n\nwhere \( {c}_{n} = {d}_{n} \), define\n\n\[ {g}_{\pi }\left( {n, j}\right) = {c}_{j}\text{and}{g}_{\pi }\left( {i, n}\right) = {d}_{i}\text{for}0 \leq i, j \leq n\text{.} \]\n\nAlso, let \( {\Phi }_{b} = {\Psi }_{b}^{-1} \). Assuming that \( b, c, d \) are given around a square \( s \) as in the previous diagram, we define \( a \) as follows.\n\nRLD1 If \( b \neq c \), then \( d = b \succ c, d = c \succ b \), or \( d \succ b, c \) . In the first two cases let \( a = \min \{ b, c\} \) . In the third let \( a \) be the unique element covered by \( b, c \) .\n\nRLD2 If \( d \succ b = c \), then let \( a = {\Phi }_{b}\left( d\right) \) .\n\nRLD3 If \( b = c = d \), then let \( a = d \) .\n\nFinally, we recover \( \pi \) as corresponding to those squares where \( a = b = c \prec \) \( d \) . Showing that RLD1-3 give a well-defined growth associated with some permutation \( \pi \) and that this is the inverse procedure to the one given by DLR1-3 is straightforward and left to the reader.
No
Theorem 5.3.3 Let \( V \) be a vector space and suppose we have two functions \( f, g : {B}_{n} \rightarrow V \) . Then the following are equivalent:\n\n\[ \text{(1)}f\left( S\right) = \mathop{\sum }\limits_{{T \subseteq S}}g\left( T\right) \text{for all}S \in {B}_{n}\text{,}\]\n\n\[ \text{(2)}g\left( S\right) = \mathop{\sum }\limits_{{T \subseteq S}}{\left( -1\right) }^{\left| S - T\right| }f\left( T\right) \text{for all}S \in {B}_{n}\text{.} \]
Proof. We will prove that (1) implies (2), as the converse is similar. Now,\n\n\[ \mathop{\sum }\limits_{{T \subseteq S}}{\left( -1\right) }^{\left| S - T\right| }f\left( T\right) = \mathop{\sum }\limits_{{T \subseteq S}}{\left( -1\right) }^{\left| S - T\right| }\mathop{\sum }\limits_{{U \subseteq T}}g\left( U\right) \]\n\n\[ = \mathop{\sum }\limits_{{U \subseteq S}}{\left( -1\right) }^{\left| S - U\right| }g\left( U\right) \mathop{\sum }\limits_{{U \subseteq T \subseteq S}}{\left( -1\right) }^{\left| T - U\right| }.\]\n\nBut from Exercise 1e in Chapter 4 we see that the inner sum is 0 unless \( U = S \) . So the outer sum collapses to \( g\left( S\right) \) . ∎
No
Proposition 5.3.6 Let \( \lambda \vdash n, S = {\left\{ {n}_{1},{n}_{2},\ldots ,{n}_{k}\right\} }_{ < } \subseteq \{ 1,2,\ldots, n - 1\} \) , and \( \mu = \left( {{n}_{1},{n}_{2} - {n}_{1},\ldots, n - {n}_{k}}\right) \) . Then\n\n\[ \left| {\{ P : P\text{ a standard }\lambda \text{-tableau and }\operatorname{Des}P \subseteq S\} }\right| = {K}_{\lambda \mu }. \]
Proof. We will give a bijection between the two sets involved, where \( {K}_{\lambda \mu } \) counts semistandard tableaux \( T \) of shape \( \lambda \) and content \( \mu \) . Given \( P \), replace the numbers \( 1,2,\ldots ,{n}_{1} \) with ones, then \( {n}_{1} + 1,{n}_{1} + 2,\ldots ,{n}_{2} \) with twos, and so on. The next figure illustrates the map when \( \lambda = \left( {4,3,1}\right), S = \{ 3,5,7\} \) , and so \( \mu = \left( {3,2,2,1}\right) \) . Below each \( T \) is the corresponding \( P \), and below that, Des \( P \) is given: ![fe1808d3-ed76-4667-ba97-eb284d29fcc8_220_0.jpg](images/fe1808d3-ed76-4667-ba97-eb284d29fcc8_220_0.jpg)\n\nThe resulting \( T \) is clearly a \( \lambda \) -tableau of content \( \mu \), so we need only check that it is semistandard. Since the rows of \( P \) increase and the replacement map is weakly increasing, the rows of \( T \) must weakly increase. Similarly, the columns of \( T \) weakly increase. To show that they actually increase strictly, suppose \( {T}_{i, j} = {T}_{i + 1, j} = l \) . Then at least one of \( {n}_{l - 1} + 1,{n}_{l - 1} + 2,\ldots ,{n}_{l} - 1 \) is a descent in \( P \), contradicting Des \( P \subseteq S \) .\n\nTo show that this map is a bijection, we must construct an inverse. This is a simple matter and left as an exercise. -
No
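Proposition 5.3.6 can be verified by brute force for a small shape. The following sketch (helper names are mine; everything is enumerated naively, so it is only practical for tiny shapes) counts standard tableaux with \( \operatorname{Des}P \subseteq S \) and semistandard tableaux of content \( \mu \) for \( \lambda = (2,2,1) \), \( S = \{2,3\} \), so \( \mu = (2,1,2) \):

```python
from itertools import permutations

def cells(shape):
    return [(i, j) for i, r in enumerate(shape) for j in range(r)]

def standard_tableaux(shape):
    """Brute-force all standard Young tableaux of the given shape."""
    cs = cells(shape)
    n = len(cs)
    for perm in permutations(range(1, n + 1)):
        T = dict(zip(cs, perm))
        if all(T[(i, j)] < T.get((i, j + 1), n + 1) for (i, j) in cs) and \
           all(T[(i, j)] < T.get((i + 1, j), n + 1) for (i, j) in cs):
            yield T

def descents(T):
    """Des P: those i for which i+1 sits in a strictly lower row than i."""
    pos = {v: c for c, v in T.items()}
    return {i for i in range(1, len(T)) if pos[i + 1][0] > pos[i][0]}

def kostka(shape, mu):
    """Brute-force count of semistandard tableaux of the shape and content."""
    cs = cells(shape)
    entries = [i + 1 for i, m in enumerate(mu) for _ in range(m)]
    big = max(entries) + 1
    count = 0
    for perm in set(permutations(entries)):
        T = dict(zip(cs, perm))
        if all(T[(i, j)] <= T.get((i, j + 1), big) for (i, j) in cs) and \
           all(T[(i, j)] < T.get((i + 1, j), big) for (i, j) in cs):
            count += 1
    return count

shape, S = (2, 2, 1), {2, 3}
ns = sorted(S)
mu = [b - a for a, b in zip([0] + ns, ns + [sum(shape)])]   # (2, 1, 2)
lhs = sum(1 for T in standard_tableaux(shape) if descents(T) <= S)
assert lhs == kostka(shape, mu)
```

Here both counts equal 1: the only standard tableau of shape \( (2,2,1) \) with descent set inside \( \{2,3\} \) has rows \( 12/35/4 \), matching the unique semistandard tableau with rows \( 11/23/3 \).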
For the action of \( {\mathcal{S}}_{n} \) on \( {B}_{n} \), decompose\n\n\[ \n{B}^{S} \cong \mathop{\sum }\limits_{\lambda }{b}^{S}\left( \lambda \right) {S}^{\lambda }\n\]\n\nThen the multiplicities are nonnegative integers, since\n\n\[ \n{b}^{S}\left( \lambda \right) = \left| {\{ P : P\text{ a standard }\lambda \text{-tableau and Des }P = S\} }\right| .\n\]
Proof. Decompose \( {A}^{S} \) as\n\n\[ \n{A}^{S} \cong \mathop{\sum }\limits_{\lambda }{a}^{S}\left( \lambda \right) {S}^{\lambda }\n\]\n\nThen Corollary 5.3.4 implies\n\n\[ \n{a}^{S}\left( \lambda \right) = \mathop{\sum }\limits_{{T \subseteq S}}{b}^{T}\left( \lambda \right) \;\text{ for all }\;S \subseteq \{ 1,2,\ldots, n - 1\} .\n\]\n\n(5.9)\n\n\n\nDefine \( {\bar{a}}^{S}\left( \lambda \right) \) (respectively, \( {\bar{b}}^{S}\left( \lambda \right) \) ) to be the number of standard \( \lambda \) -tableaux \( P \) with Des \( P \subseteq S \) (respectively, Des \( P = S \) ). Then directly from the definitions\n\n\[ \n{\bar{a}}^{S}\left( \lambda \right) = \mathop{\sum }\limits_{{T \subseteq S}}{\bar{b}}^{T}\left( \lambda \right) \;\text{ for all }\;S \subseteq \{ 1,2,\ldots, n - 1\} .\n\]\n\n(5.10)\n\nNow, from Proposition 5.3.6 and the fact that \( {A}^{S} \) is equivalent to \( {M}^{\mu } \) we have \( {a}^{S}\left( \lambda \right) = {K}_{\lambda \mu } = {\bar{a}}^{S}\left( \lambda \right) \) for all \( S \). But then inverting (5.9) and (5.10) using Theorem 5.3.3 shows that \( {b}^{S}\left( \lambda \right) = {\bar{b}}^{S}\left( \lambda \right) \) for all \( S \), as desired. ∎
Yes
Proposition 5.4.2 For any \( n \), the sequence\n\n\[ \left( \begin{array}{l} n \\ 0 \end{array}\right) ,\left( \begin{array}{l} n \\ 1 \end{array}\right) ,\ldots ,\left( \begin{array}{l} n \\ n \end{array}\right) \]\n\nis symmetric and unimodal. It follows that the same is true of the Boolean algebra \( {B}_{n} \) .
Proof. For symmetry we have, from the factorial formula in Exercise 1b of Chapter 4,\n\n\[ \left( \begin{matrix} n \\ n - k \end{matrix}\right) = \frac{n!}{\left( {n - k}\right) !\left( {n - \left( {n - k}\right) }\right) !} = \frac{n!}{\left( {n - k}\right) !\left( k\right) !} = \left( \begin{array}{l} n \\ k \end{array}\right) . \]\n\nSince the sequence is symmetric, we can prove unimodality by showing that it increases up to its midpoint. Using the same exercise, we see that for \( k \leq n/2 \) :\n\n\[ \frac{\left( \begin{matrix} n \\ k - 1 \end{matrix}\right) }{\left( \begin{array}{l} n \\ k \end{array}\right) } = \frac{n!k!\left( {n - k}\right) !}{n!\left( {k - 1}\right) !\left( {n - k + 1}\right) !} = \frac{k}{n - k + 1} < 1. \] ∎
Yes
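Both computations in this proof amount to checks one can run directly; a minimal sketch for a single row of Pascal's triangle (here \( n = 12 \), chosen arbitrarily):

```python
from math import comb

n = 12
row = [comb(n, k) for k in range(n + 1)]

# Symmetry: C(n, k) == C(n, n-k).
assert row == row[::-1]

# Unimodality: the ratio argument gives strict increase up to the midpoint;
# by symmetry the row then decreases.
assert all(row[k - 1] < row[k] for k in range(1, n // 2 + 1))
```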
Lemma 5.4.4 Let \( G \) be a group of automorphisms of a finite, graded poset \( A \).\n\n1. If \( \operatorname{rk}{U}_{k} = \left| {A}_{k}\right| \), then \( \mathbb{C}{\mathbf{A}}_{k} \leq \mathbb{C}{\mathbf{A}}_{k + 1} \).\n\n2. If \( \operatorname{rk}{D}_{k + 1} = \left| {A}_{k + 1}\right| \), then \( \mathbb{C}{\mathbf{A}}_{k} \geq \mathbb{C}{\mathbf{A}}_{k + 1} \).
Proof. We will prove the first statement, as the second is similar. We claim that \( {U}_{k} \) is actually a \( G \) -homomorphism. It suffices to show that \( {U}_{k}\left( {g\mathbf{a}}\right) = \) \( g{U}_{k}\left( \mathbf{a}\right) \) for \( g \in G \) and \( a \in {A}_{k} \) . In other words, it suffices to prove that\n\n\[ \{ b : b \succ {ga}\} = \{ {gc} : c \succ a\} . \]\n\nBut if \( b \succ {ga} \), then letting \( c = {g}^{-1}b \) we have \( {gc} = b \succ {ga} \), so \( c \succ a \) . And if \( c \succ a \), then \( b = {gc} \succ {ga} \), so the two sets are equal.\n\nSince \( \operatorname{rk}{U}_{k} = \left| {A}_{k}\right| \), we have that \( {U}_{k} \) is an injective \( G \) -homomorphism, and so \( {\mathbb{{CA}}}_{k} \) is isomorphic to a submodule of \( {\mathbb{{CA}}}_{k + 1} \) . Since every \( G \) -module is completely reducible, the inequality follows. ∎
Yes
Theorem 5.4.6 ([Stn 82]) Let \( A \) be a finite, graded poset of rank \( n \) . Let \( G \) be a group of automorphisms of \( A \) and \( V \) be an irreducible \( G \) -module. If \( A \) is unimodal and ample, then the following sequences are unimodal.\n\n(1) \( \mathbb{C}{\mathbf{A}}_{0},\mathbb{C}{\mathbf{A}}_{1},\mathbb{C}{\mathbf{A}}_{2},\ldots ,\mathbb{C}{\mathbf{A}}_{n} \) .\n\n(2) \( {m}_{0}\left( V\right) ,{m}_{1}\left( V\right) ,{m}_{2}\left( V\right) ,\ldots ,{m}_{n}\left( V\right) \), where \( {m}_{k}\left( V\right) \) is the multiplicity of \( V \) in \( {\mathbb{{CA}}}_{k} \) .\n\n(3) \( \left| {{A}_{0}/G}\right| ,\left| {{A}_{1}/G}\right| ,\left| {{A}_{2}/G}\right| ,\ldots ,\left| {{A}_{n}/G}\right| \) .
Proof. The fact that the first sequence is unimodal follows immediately from the definition of ample and Lemma 5.4.4. This implies that the second sequence is as well by definition of the partial order on \( G \) -modules. Finally, by Exercise 5b in Chapter 2, (3) is the special case of (2) where one takes \( V \) to be the trivial module. ∎
No
Corollary 5.4.8 Let \( G \leq {\mathcal{S}}_{n} \) act on \( {B}_{n} \) and let \( V \) be an irreducible \( G \) - module. Then, keeping the notation in the statement and proof of the previous proposition, the following sequences are symmetric and unimodal.\n\n(1) \( {m}_{0}\left( V\right) ,{m}_{1}\left( V\right) ,{m}_{2}\left( V\right) ,\ldots ,{m}_{n}\left( V\right) \), where \( {m}_{k}\left( V\right) \) is the multiplicity of \( V \) in \( {\mathbb{{CB}}}_{n, k} \) .\n\n(2) \( \left| {{B}_{n,0}/G}\right| ,\left| {{B}_{n,1}/G}\right| ,\left| {{B}_{n,2}/G}\right| ,\ldots ,\left| {{B}_{n, n}/G}\right| \) .
Proof. The fact that the sequences are unimodal follows from Theorem 5.4.6, and Propositions 5.4.2 and 5.4.7. For symmetry, note that the map \( f : {B}_{n, k} \rightarrow \) \( {B}_{n, n - k} \) sending \( S \) to its complement induces a \( G \) -module isomorphism. ∎
Yes
For fixed \( k, l \), the sequence\n\n\[ \n{p}_{k, l}\left( 0\right) ,{p}_{k, l}\left( 1\right) ,{p}_{k, l}\left( 2\right) ,\ldots ,{p}_{k, l}\left( {kl}\right) \n\]\n\nis symmetric and unimodal. So the poset \( {Y}_{k, l} \) is as well.
Proof. Identify the cells of \( \left( {l}^{k}\right) \) with the elements of \( \{ 1,2,\ldots, k\} \times \{ 1,2,\ldots, l\} \) in the usual way. Then \( {\mathcal{S}}_{k}\wr {\mathcal{S}}_{l} \) has an induced action on subsets of \( \left( {l}^{k}\right) \), which are partially ordered as in \( {B}_{kl} \) (by containment). There is a unique partition in each orbit, and so the result follows from Corollary 5.4.8, part (2). ∎
Yes
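For small \( k, l \) the sequence \( p_{k,l}(m) \) can also be generated by brute force, without the wreath-product action; the sketch below (helper name mine) lists partitions in a \( 3 \times 4 \) box and checks symmetry and unimodality:

```python
from itertools import product

def box_partition_counts(k, l):
    """p[m] = number of partitions of m with at most k parts, each part <= l,
    i.e. partitions whose diagram fits in a k x l box."""
    counts = [0] * (k * l + 1)
    for parts in product(range(l + 1), repeat=k):
        # keep only weakly decreasing tuples, i.e. genuine partitions
        if all(parts[i] >= parts[i + 1] for i in range(k - 1)):
            counts[sum(parts)] += 1
    return counts

p = box_partition_counts(3, 4)
assert p == p[::-1]                                  # symmetric
peak = p.index(max(p))
assert all(p[i] <= p[i + 1] for i in range(peak))    # rises to the peak
assert all(p[i] >= p[i + 1] for i in range(peak, len(p) - 1))   # then falls
```

The total \( \sum_m p_{k,l}(m) = \binom{k+l}{k} \), which gives an independent consistency check.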
Proposition 5.5.6 ([Stn 95]) Let \( \Gamma \) be any graph with vertex set \( V \). 1. \( {X}_{\Gamma } \) is homogeneous of degree \( d = \left| V\right| \). 2. \( {X}_{\Gamma } \) is a symmetric function. 3. If we set \( {x}_{1} = \cdots = {x}_{n} = 1 \) and \( {x}_{i} = 0 \) for \( i > n \), written \( \mathbf{x} = {1}^{n} \), then \[ {X}_{\Gamma }\left( {1}^{n}\right) = {P}_{\Gamma }\left( n\right) \]
Proof. 1. Every monomial in \( {X}_{\Gamma } \) has a factor for each vertex. 2. Any permutation of the colors of a proper coloring gives another proper coloring. This means that permuting the subscripts of \( {X}_{\Gamma } \) leaves the function invariant. And since it is homogeneous of degree \( d \), it is a finite sum of monomial symmetric functions. 3. With the given specialization, each monomial of \( {X}_{\Gamma } \) becomes either 1 or 0. The surviving monomials are exactly those that use only the first \( n \) colors. So their sum is \( {P}_{\Gamma }\left( n\right) \) by definition of the chromatic polynomial as the number of such colorings. ∎
Yes
Proposition 5.5.9 ([Stn 95]) The expansion of \( {X}_{\Gamma } \) in terms of monomial symmetric functions is\n\n\[ \n{X}_{\Gamma } = \mathop{\sum }\limits_{{\lambda \vdash d}}{i}_{\lambda }{y}_{\lambda }{m}_{\lambda }\n\]
Proof. In any proper coloring, the set of all vertices of a given color forms an independent set. So given \( \kappa : V \rightarrow \mathbb{P} \) proper, the nonempty sets \( {\kappa }^{-1}\left( i\right) \) , \( i \in \mathbb{P} \), form an independent partition \( \beta \vdash V \) . Thus the coefficient of \( {\mathbf{x}}^{\lambda } \) in \( {X}_{\Gamma } \) is just the number of ways to choose \( \beta \) of type \( \lambda \) and then assign colors to the blocks so as to give this monomial. There are \( {i}_{\lambda } \) possibilities for the first step. Also, colors can be permuted among the blocks of a fixed size without changing \( {\mathbf{x}}^{\lambda } \), which gives the factor of \( {y}_{\lambda } \) for the second. ∎
Yes
Theorem 5.5.10 ([Stn 95]) We have\n\n\[ \n{X}_{\Gamma } = \mathop{\sum }\limits_{{F \subseteq E}}{\left( -1\right) }^{\left| F\right| }{p}_{\lambda \left( F\right) }\n\]
Proof. Let \( K\left( F\right) \) denote the set of all colorings of \( \Gamma \) that are monochromatic on the components of \( F \) (which will usually not be proper). If \( \beta \left( F\right) = {B}_{1}/\ldots /{B}_{l} \), then we can compute the weight generating function for such colorings as\n\n\[ \mathop{\sum }\limits_{{\kappa \in K\left( F\right) }}{\mathbf{x}}^{\kappa } = \mathop{\prod }\limits_{{i = 1}}^{l}\left( {{x}_{1}^{\left| {B}_{i}\right| } + {x}_{2}^{\left| {B}_{i}\right| } + \cdots }\right) = {p}_{\lambda \left( F\right) }\left( \mathbf{x}\right) . \]\n\nNow, for an arbitrary coloring \( \kappa \) let \( E\left( \kappa \right) \) be the set of edges \( {uv} \) of \( \Gamma \) such that \( \kappa \left( u\right) = \kappa \left( v\right) \) . Directly from the definitions, \( \kappa \in K\left( F\right) \) if and only if \( E\left( \kappa \right) \supseteq F \) . So\n\n\[ \mathop{\sum }\limits_{{F \subseteq E}}{\left( -1\right) }^{\left| F\right| }{p}_{\lambda \left( F\right) } = \mathop{\sum }\limits_{{F \subseteq E}}{\left( -1\right) }^{\left| F\right| }\mathop{\sum }\limits_{{\kappa \in K\left( F\right) }}{\mathbf{x}}^{\kappa }\n\]\n\n\[ = \mathop{\sum }\limits_{{\text{all }\kappa }}{\mathbf{x}}^{\kappa }\mathop{\sum }\limits_{{F \subseteq E\left( \kappa \right) }}{\left( -1\right) }^{\left| F\right| }\n\]\n\nBut from Exercise 1e in Chapter 4, the inner sum is 0 unless \( E\left( \kappa \right) = \varnothing \), in which case it is 1. Hence the only surviving terms are those corresponding to proper colorings. ∎
Yes
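Setting \( \mathbf{x} = 1^n \) in this expansion turns each \( p_{\lambda(F)} \) into \( n^{c(F)} \), where \( c(F) \) is the number of components of the spanning subgraph \( (V, F) \), so \( P_{\Gamma}(n) = \sum_{F \subseteq E} (-1)^{|F|} n^{c(F)} \) (Whitney's expansion). A brute-force check on the triangle \( K_3 \) (function names are mine):

```python
from itertools import combinations, product

V = range(3)
E = [(0, 1), (0, 2), (1, 2)]               # the triangle K_3

def components(F):
    """Number of connected components of the spanning subgraph (V, F)."""
    parent = list(range(len(V)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in F:
        parent[find(u)] = find(v)
    return len({find(v) for v in V})

def P_subsets(n):
    """Chromatic polynomial via the edge-subset expansion at x = 1^n."""
    return sum((-1) ** r * n ** components(F)
               for r in range(len(E) + 1)
               for F in combinations(E, r))

def P_brute(n):
    """Direct count of proper n-colorings."""
    return sum(all(k[u] != k[v] for u, v in E)
               for k in product(range(n), repeat=len(V)))

for n in range(6):
    assert P_subsets(n) == P_brute(n) == n * (n - 1) * (n - 2)
```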
Proposition 5.5.11 A tree \( T \) with \( \left| V\right| = d \) has chromatic polynomial\n\n\[ \n{P}_{T}\left( n\right) = n{\left( n - 1\right) }^{d - 1}.\n\]
Proof. Pick any \( v \in V \); it can be colored in \( n \) ways. Since \( T \) has no cycles, each of the neighbors of \( v \) can now be colored in \( n - 1 \) ways. The same can be said of the uncolored neighbors of the neighbors of \( v \) . Since \( T \) is connected, we will eventually color every vertex this way, yielding the formula for \( {P}_{T} \) . ∎
Yes
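The counting argument is easy to confirm by brute force on a small tree (helper name mine):

```python
from itertools import product

def proper_colorings(edges, d, n):
    """Brute-force count of proper n-colorings of a graph on d vertices."""
    return sum(all(k[u] != k[v] for u, v in edges)
               for k in product(range(n), repeat=d))

# A tree on d = 5 vertices: 0-1, 0-2, 2-3, 2-4.
tree, d = [(0, 1), (0, 2), (2, 3), (2, 4)], 5
for n in range(1, 6):
    assert proper_colorings(tree, d, n) == n * (n - 1) ** (d - 1)
```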
Example 5 Cubes and spheres.\n\nThe unit interval is a \( {CW} \) complex with three cells: \( \{ 0\} ,\{ 1\} \) and \( I \) . Thus the cube \( {I}^{n} \) inherits a (product) CW structure, and \( \partial {I}^{n} \) is a subcomplex. Note that the CW complex \( {I}^{n}/\partial {I}^{n} \) coincides, as a CW complex, with the suspension of \( {I}^{n - 1}/\partial {I}^{n - 1} \) .\n\nOn the other hand, assign to \( {S}^{n} \) the basepoint \( * = \left( {1,0,0,\ldots ,0}\right) \) . The map \( \theta : {S}^{n - 1} \times I \rightarrow {S}^{n} \), represented by the picture\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_38_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_38_0.jpg)\n\ninduces a based homeomorphism \( \sum {S}^{n - 1}\overset{ \cong }{ \rightarrow }{S}^{n} \) . Identify \( I/\partial I = {S}^{1} \) via the map \( t \mapsto {e}^{2\pi it} \) . Then we identify inductively\n\n\[ \n{S}^{n} = \sum {S}^{n - 1} = \sum \left( {{I}^{n - 1}/\partial {I}^{n - 1}}\right) = {I}^{n}/\partial {I}^{n}.\n\]\nThis exhibits \( {S}^{n} \) as the \( {CW} \) complex \( * \cup {e}^{n} \) .
There is also a homeomorphism\n\n\[ \n\partial {I}^{n + 1}\overset{ \cong }{ \rightarrow }{S}^{n},\;n \geq 1 \n\]\n\ndefined by translating \( {I}^{n + 1} \) in \( {\mathbb{R}}^{n + 1} \) to centre it at the origin, and then projecting the boundary onto \( {S}^{n} \) (explicitly, \( x \mapsto \left( {x - a}\right) /\parallel x - a\parallel \), where \( a = \left( {\frac{1}{2},\ldots ,\frac{1}{2}}\right) \) ). The corresponding base point of \( \partial {I}^{n + 1} \) is \( \left( {1,\frac{1}{2},\ldots ,\frac{1}{2}}\right) \) .
Yes
Any continuous map \( f : \left( {X, A}\right) \rightarrow \left( {Y, B}\right) \) between relative \( {CW} \) complexes is homotopic rel \( A \) to a cellular map.
proof: Step (i) Linear approximation. Suppose \( \varphi : Z \rightarrow {\mathbb{R}}^{k} \) is a continuous map from a finite \( n \) -dimensional CW complex. Write \( {Z}_{r} = {Z}_{r - 1}{ \cup }_{{f}_{r}}\left( {\mathop{\coprod }\limits_{\alpha }{D}_{\alpha }^{r}}\right) \) and let \( {o}_{r,\alpha } \) be the origin (centre) of \( {D}_{\alpha }^{r} \) . Any point in \( {D}_{\alpha }^{r} \) has the form \( {tv} \) with \( v \in {S}_{\alpha }^{r - 1} \), and \( 0 \leq t \leq 1 \) . The linear approximation of \( \varphi \) is the linear map \( \theta : Z \rightarrow {\mathbb{R}}^{k} \) defined inductively by:\n\n\[ \theta = \varphi \text{ in }{Z}_{0},\;\text{ and }\;\theta \left( {tv}\right) = {t\theta }{f}_{r}\left( v\right) + \left( {1 - t}\right) \varphi \left( {o}_{r,\alpha }\right) ,{tv} \in {e}_{\alpha }^{r}. \]\n\nAn \( n \) -dimensional flat in \( {\mathbb{R}}^{k} \) is a subset of the form \( v + W \) where \( v \in {\mathbb{R}}^{k} \) and \( W \subset {\mathbb{R}}^{k} \) is an \( n \) -dimensional subspace. An obvious induction shows that \( \operatorname{Im}\theta \) is contained in a finite union of \( n \) -dimensional flats. It is also contained in any convex set \( C \) such that \( C \supset \operatorname{Im}\varphi \) . Finally, let \( \varepsilon > 0 \) and suppose for all cells \( {e}_{\alpha }^{r} \) that \( \begin{Vmatrix}{{\varphi x} - \varphi \left( {o}_{r,\alpha }\right) }\end{Vmatrix} < \varepsilon, x \in {e}_{\alpha }^{r} \) . Now the triangle inequality gives\n\n\[ \parallel \theta \left( {tv}\right) - \varphi \left( {tv}\right) \parallel \leq t\parallel \theta {f}_{r}\left( v\right) - \varphi {f}_{r}\left( v\right) \parallel + t\parallel \varphi {f}_{r}\left( v\right) - \varphi \left( {tv}\right) \parallel + \left( {1 - t}\right) \parallel \varphi \left( {o}_{r,\alpha }\right) - \varphi \left( {tv}\right) \parallel , \]\n\nand the last two terms are each at most \( 2\varepsilon \), so an induction on \( r \) bounds \( \parallel \theta - \varphi \parallel \) on all of \( Z \) by a fixed multiple of \( \varepsilon \) .
Yes
Theorem 1.4 (Cellular models theorem) [160]\n\n(i) Every space \( Y \) has a cellular model \( f : X \rightarrow Y \) .\n\n(ii) If \( {f}^{\prime } : {X}^{\prime } \rightarrow Y \) is a second cellular model then there is a homotopy equivalence \( g : X\overset{ \simeq }{ \rightarrow }{X}^{\prime } \) such that \( {f}^{\prime } \circ g \sim f \) .
To prove this theorem one needs the fundamental\n\nLemma 1.5 (Whitehead lifting lemma) Suppose given a (not necessarily commutative) diagram\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_43_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_43_0.jpg)\n\ntogether with a homotopy \( H : A \times I \rightarrow Z \) from \( {\psi i} \) to \( {f\varphi } \) . Assume \( \left( {X, A}\right) \) is a relative \( {CW} \) complex and \( f \) is a weak homotopy equivalence.\n\nThen \( \varphi \) and \( H \) can be extended respectively to a map \( \Phi : X \rightarrow Y \) and a homotopy \( K : X \times I \rightarrow Z \) from \( \psi \) to \( f \circ \Phi \) .\n\nproof: As in the proof of Step (iii) in Theorem 1.2 it is enough, by induction on the cellular structure, to consider the case that \( A = {X}_{n} \) and \( X = {X}_{n + 1} \) . Then working one cell at a time reduces us to the case that \( A = {S}^{n}, X = {D}^{n + 1} \) and \( i \) is the standard inclusion. In this case \( \varphi : \left( {{S}^{n}, * }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) and \( f \circ \varphi \sim {\left. \psi \right| }_{{S}^{n}} \sim \) the constant map. 
Since \( f \) is a weak homotopy equivalence, \( \varphi \) itself is homotopic to the constant map via a homotopy \( {H}^{\prime } \) .\n\nThis produces the map \( \sigma : {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \rightarrow \left( {Z, f\left( {y}_{0}\right) }\right) \), described in the following picture (for \( n = 1 \) )\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_0.jpg)\n\nSince \( {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \) is homeomorphic to \( {S}^{n + 1} \) and since \( f \) is a weak homotopy equivalence, there is a map \( \tau : \left( {{S}^{n + 1}, * }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) such that \( {\pi }_{n + 1}\left( f\right) \left\lbrack \tau \right\rbrack = \) \( {\left\lbrack \sigma \right\rbrack }^{-1} \) .\n\nRecall the homeomorphism \( \sum {S}^{n}\overset{ \cong }{ \rightarrow }{S}^{n + 1} \) . This identifies \( \tau \) as a map \( \tau \) : \( \left( {{S}^{n} \times I,{S}^{n}\times \{ 0,1\} }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) . Define \( \Phi : {D}^{n + 1} \rightarrow Y \) to be the map\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_1.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_1.jpg)\n\nThen the map \( \Psi : {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \rightarrow Z \) given by\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_2.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_2.jpg)\n\nmay be redrawn as\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_45_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_45_0.jpg)\n\nHence \( \Psi \) represents \( \left\lbrack \sigma \right\rbrack * \left\lbrack {f \circ \tau }\right\rbrack = 0 \), and so it extends to \( K : {D}^{n + 1} \times I \rightarrow Z \), as desired.
Yes
Lemma 1.5 (Whitehead lifting lemma) Suppose given a (not necessarily commutative) diagram\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_43_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_43_0.jpg)\n\ntogether with a homotopy \( H : A \times I \rightarrow Z \) from \( {\psi i} \) to \( {f\varphi } \) . Assume \( \left( {X, A}\right) \) is a relative \( {CW} \) complex and \( f \) is a weak homotopy equivalence.\n\nThen \( \varphi \) and \( H \) can be extended respectively to a map \( \Phi : X \rightarrow Y \) and a homotopy \( K : X \times I \rightarrow Z \) from \( \psi \) to \( f \circ \Phi \) .
proof: As in the proof of Step (iii) in Theorem 1.2 it is enough, by induction on the cellular structure, to consider the case that \( A = {X}_{n} \) and \( X = {X}_{n + 1} \) . Then working one cell at a time reduces us to the case that \( A = {S}^{n}, X = {D}^{n + 1} \) and \( i \) is the standard inclusion. In this case \( \varphi : \left( {{S}^{n}, * }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) and \( f \circ \varphi \sim {\left. \psi \right| }_{{S}^{n}} \sim \) the constant map. Since \( f \) is a weak homotopy equivalence, \( \varphi \) itself is homotopic to the constant map via a homotopy \( {H}^{\prime } \) .\n\nThis produces the map \( \sigma : {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \rightarrow \left( {Z, f\left( {y}_{0}\right) }\right) \), described in the following picture (for \( n = 1 \) )\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_0.jpg)\n\nSince \( {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \) is homeomorphic to \( {S}^{n + 1} \) and since \( f \) is a weak homotopy equivalence, there is a map \( \tau : \left( {{S}^{n + 1}, * }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) such that \( {\pi }_{n + 1}\left( f\right) \left\lbrack \tau \right\rbrack = \) \( {\left\lbrack \sigma \right\rbrack }^{-1} \) .\n\nRecall the homeomorphism \( \sum {S}^{n}\overset{ \cong }{ \rightarrow }{S}^{n + 1} \) . This identifies \( \tau \) as a map \( \tau \) : \( \left( {{S}^{n} \times I,{S}^{n}\times \{ 0,1\} }\right) \rightarrow \left( {Y,{y}_{0}}\right) \) . 
Define \( \Phi : {D}^{n + 1} \rightarrow Y \) to be the map\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_1.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_1.jpg)\n\nThen the map \( \Psi : {S}^{n} \times I \cup {D}^{n + 1} \times \{ 0,1\} \rightarrow Z \) given by\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_2.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_44_2.jpg)\n\nmay be redrawn as\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_45_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_45_0.jpg)\n\nHence \( \Psi \) represents \( \left\lbrack \sigma \right\rbrack * \left\lbrack {f \circ \tau }\right\rbrack = 0 \), and so it extends to \( K : {D}^{n + 1} \times I \rightarrow Z \), as desired.
Yes
Corollary 1.6 If \( X \) is a \( {CW} \) complex and \( f : Y \rightarrow Z \) is a weak homotopy equivalence then composition with \( f \) induces a bijection \( {f}_{\# } : \left\lbrack {X, Y}\right\rbrack \rightarrow \left\lbrack {X, Z}\right\rbrack \) of homotopy classes of maps. Similarly, if \( \left( {X,{x}_{0}}\right) \) is a pointed \( {CW} \) complex then \( {f}_{\# } : \left\lbrack {\left( {X,{x}_{0}}\right) ,\left( {Y,{y}_{0}}\right) }\right\rbrack \rightarrow \left\lbrack {\left( {X,{x}_{0}}\right) ,\left( {Z, f\left( {y}_{0}\right) }\right) }\right\rbrack \) is a bijection.
proof: The lifting theorem, applied with the relative CW complex \( \left( {X,\varnothing }\right) \) , shows that \( {f}_{\# } \) is surjective. Regard \( I \) as a CW complex with \( {I}_{0} = \{ 0,1\} \) and \( {I}_{1} = I \) . Then the lifting theorem applied with the relative CW complex \( \left( {X \times I, X\times \{ 0,1\} }\right) \) shows that \( {f}_{\# } \) is injective. A slight variation of this argument gives the pointed case.
Yes
Lemma 1.8 \( \left( {X, A}\right) \) is a cofibration if and only if \( X \times \{ 0\} \cup A \times I \) is a retract of \( X \times I \) .
proof: A continuous map \( f : X \rightarrow Y \) and a homotopy \( H : A \times I \rightarrow Y \) starting at \( {\left. f\right| }_{A} \) define a map \( \left( {f, H}\right) : X \times \{ 0\} \cup A \times I \rightarrow Y \) . If \( r : X \times I \rightarrow X \times \{ 0\} \cup A \times I \) is a retraction then \( \left( {f, H}\right) \circ r : X \times I \rightarrow Y \) extends \( H \) and starts at \( f \) . Conversely, if \( \left( {X, A}\right) \) is a cofibration take \( Y = X \times \{ 0\} \cup A \times I \) and \( \left( {f, H}\right) \) the identity map. An extension of \( H \) starting at \( f \) is a retraction \( r : X \times I \rightarrow X \times \{ 0\} \cup A \times I \) . \( ▱ \)
Yes
Proposition 1.10 The following conditions are equivalent on a topological pair \( \left( {X, A}\right) \) :\n\n(i) \( A \) is closed in \( X \) and \( \left( {X, A}\right) \) is a cofibration.\n\n(ii) \( \left( {X, A}\right) \) is an NDR pair.\n\n(iii) \( \left( {X \times I, X\times \{ 0\} \cup A \times I}\right) \) is a DR pair.
proof: \( \;\left( i\right) \Rightarrow \left( {ii}\right) \) : Let \( r = \left( {{r}_{X},{r}_{I}}\right) : X \times I \rightarrow X \times \{ 0\} \cup A \times I \) be a retraction (Lemma 1.8). Define \( h : X \rightarrow I \) by \( h\left( x\right) = \sup \left\{ {t - {r}_{I}\left( {x, t}\right) \mid t \in I}\right\} \) . Set \( U = {h}^{-1}\left( {\left\lbrack {0,1}\right) }\right) \), and set \( H = {r}_{X} : U \times I \rightarrow X \) .\n\n(ii) \( \Rightarrow \) (iii): Let \( h, U,\varepsilon \) and \( H \) be as in the definition of NDR pairs. Choose a continuous function \( \alpha : \left( {I,\left\lbrack {0,\varepsilon /2}\right\rbrack ,\left\lbrack {\varepsilon ,1}\right\rbrack }\right) \rightarrow \left( {I,1,0}\right) \) and set \( \varphi = {\alpha h} \) : \( X \rightarrow I \) . Define \( K : X \times I \rightarrow X \) by\n\n\[ K\left( {x, t}\right) = \left\{ \begin{array}{lll} H\left( {x,\left( {\varphi x}\right) t}\right) & , & x \in U \\ x & , & {hx} > \varepsilon . \end{array}\right. \]\n\nDefine \( k : X \rightarrow I \) by \( {kx} = \inf \left( {\frac{2}{\varepsilon }{hx},1}\right) \) . Then \( K : {k}^{-1}\left( {\left\lbrack {0,1}\right) }\right) \times \{ 1\} \rightarrow A \) . Define a homotopy \( \Phi : \left( {X \times I}\right) \times I \rightarrow X \times I \) by\n\n\[ \Phi \left( {x, t, s}\right) = \left\{ \begin{array}{ll} \left( {K\left( {x,\frac{st}{kx}}\right), t\left( {1 - s}\right) }\right) &, t < {kx} \\ \left( {K\left( {x, s}\right), t - {skx}}\right) &, t \geq {kx}. \end{array}\right. \]\n\nIt is straightforward to see that \( \Phi \) is a homotopy rel \( X \times \{ 0\} \cup A \times I \) from \( i{d}_{X \times I} \) to a retraction onto \( X \times \{ 0\} \cup A \times I \) .\n\n(iii) \( \Rightarrow \) (i): This is Lemma 1.8.
Yes
Proposition 1.11 Suppose \( \left( {X, A}\right) \) is an NDR pair. Then the inclusion \( i \) : \( A \rightarrow X \) is a homotopy equivalence if and only if \( \left( {X, A}\right) \) is a DR pair. In this case \( \left( {X \times I, X\times \{ 0,1\} \cup A \times I}\right) \) is a DR pair.
proof: If \( \left( {X, A}\right) \) is a DR pair then \( i \) is certainly a homotopy equivalence. Suppose that \( i \) is a homotopy equivalence. Choose \( \varrho : X \rightarrow A \) so that \( \varrho i \sim i{d}_{A} \) . Extend the homotopy to a homotopy \( X \times I \rightarrow A \) from \( \varrho \) to a map \( r : X \rightarrow A \) . Clearly \( {ri} = i{d}_{A} \) . Now, since \( i \) is a homotopy equivalence, there is a homotopy \( H : X \times I \rightarrow X \) from \( i{d}_{X} \) to \( {ir} \) . Define \( K : X \times I \times \{ 0\} \cup X \times \{ 1\} \times I \cup A \times I \times I \rightarrow \) \( X \) by setting\n\n\[ K\left( {x, t,0}\right) = \left\{ \begin{array}{lll} H\left( {x,{2t}}\right) & , & 0 \leq t \leq \frac{1}{2} \\ H\left( {{irx},2 - {2t}}\right) & , & \frac{1}{2} \leq t \leq 1, \end{array}\right. \]\n\n\[ K\left( {x,1, t}\right) = \operatorname{ir}x \]\n\nand\n\n\[ K\left( {a, t, s}\right) = \left\{ \begin{array}{lll} H\left( {a,2\left( {1 - s}\right) t}\right) & , & 0 \leq t \leq \frac{1}{2} \\ H\left( {a,2\left( {1 - s}\right) \left( {1 - t}\right) }\right) & , & \frac{1}{2} \leq t \leq 1. \end{array}\right. \]\n\nIt is easy to construct a homeomorphism ![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_49_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_49_0.jpg)\n\nThus \( \left( {X \times I \times I, X \times I\times \{ 0\} \cup X\times \{ 1\} \times I \cup A \times I \times I}\right) \cong \left( {X \times I \times I, X \times I \times \{ 0\} \cup A \times I \times I}\right) \), which is a DR pair by Proposition 1.10. In particular \( K \) extends to a map \( K : X \times I \times I \rightarrow X \), and \( K\left( {x, t,1}\right) \) is a homotopy rel \( A \) from \( i{d}_{X} \) to \( {ir} \) . 
Thus \( \left( {X, A}\right) \) is a DR pair.\n\nFinally, since \( \left( {X, A}\right) \) is a DR pair, clearly \( A \times \{ 1\} \) and \( A \times \{ 0\} \) are strong deformation retracts of \( X \times \{ 0\} \) and \( X \times \{ 1\} \), and so \( A \times I \) is a strong deformation retract of \( X \times \{ 0,1\} \cup A \times I \) . Since the composite \( A \times I \rightarrow X \times \{ 0,1\} \cup A \times I \rightarrow \) \( X \times I \) is also a homotopy equivalence it follows that the inclusion \( X \times \{ 0,1\} \cup A \times \) \( I \rightarrow X \times I \) is a homotopy equivalence as well. Thus \( \left( {X \times I, X\times \{ 0,1\} \cup A \times I}\right) \) is a DR pair by the first assertion of the proposition. \( ▱ \)
Yes
Lemma 1.12 If \( \left( {X, A}\right) \) is an NDR pair and \( {f}_{0},{f}_{1} : A \rightarrow Y \) are homotopic continuous maps then \( Y{ \cup }_{{f}_{0}}X \simeq Y{ \cup }_{{f}_{1}}X \) .
proof: Choose a homotopy \( H : A \times I \rightarrow Y \) from \( {f}_{0} \) to \( {f}_{1} \) . Denote \( X \times \{ t\} \cup A \times I \) by \( {B}_{t} \subset X \times I \) . Proposition 1.10 implies that each \( {B}_{t} \) is a strong deformation retract of \( X \times I \) . Hence \( Y{ \cup }_{H}{B}_{0} \) and \( Y{ \cup }_{H}{B}_{1} \) are strong deformation retracts of \( Y{ \cup }_{H}\left( {X \times I}\right) \) . But \( Y{ \cup }_{H}{B}_{0} = Y{ \cup }_{{f}_{0}}X \) and \( Y{ \cup }_{H}{B}_{1} = Y{ \cup }_{{f}_{1}}X \) .
Yes
Lemma 1.14 If \( \left( {B, C}\right) \) is a DR pair and \( h : D \rightarrow W \) is a continuous map from a closed subspace \( D \subset C \) then \( \left( {W{ \cup }_{h}B, W{ \cup }_{h}C}\right) \) is a DR pair. In particular, the inclusion\n\n\[ W{ \cup }_{h}C \hookrightarrow W{ \cup }_{h}B \]\n\nis a homotopy equivalence.
proof of Theorem 1.13: Identify \( \left( {X, A}\right) \) with \( \left( {X, A}\right) \times \{ 1\} \subset X \times I \) . Then \( \left( {X \times I, X}\right) \) is a DR pair (trivially) and \( \left( {X \times I, X\times \{ 0\} \cup A \times I}\right) \) is a DR pair by Proposition 1.10. Moreover \( f \) is identified as a map \( f : A \times \{ 1\} \rightarrow Y \) and by Lemma 1.14,\n\n\[ Y{ \cup }_{f}\left( {X\times \{ 0\} \cup A \times I}\right) \rightarrow Y{ \cup }_{f}\left( {X \times I}\right) \leftarrow Y{ \cup }_{f}X \]\n\nare homotopy equivalences.\n\nDenote \( X \coprod Y \) by \( Z \) and \( {\varphi }_{X} \coprod {\varphi }_{Y} \) by \( {\varphi }_{Z} \), and define \( g : A \times \{ 0,1\} \rightarrow Z \) by \( g\left( {a,0}\right) = {ia} \) and \( g\left( {a,1}\right) = {fa} \) . Then \( Y{ \cup }_{f}\left( {X\times \{ 0\} \cup A \times I}\right) = Z{ \cup }_{g}\left( {A \times I}\right) \) . It is thus sufficient to show that\n\n\[ \left( {{\varphi }_{Z},{\varphi }_{A} \times {id}}\right) : Z{ \cup }_{g}\left( {A \times I}\right) \rightarrow {Z}^{\prime }{ \cup }_{{g}^{\prime }}\left( {{A}^{\prime } \times I}\right) \]\n\nis a homotopy equivalence. Since this map factors as\n\n\[ Z{ \cup }_{g}\left( {A \times I}\right) \rightarrow {Z}^{\prime }{ \cup }_{{\varphi }_{Z} \circ g}\left( {A \times I}\right) \rightarrow {Z}^{\prime }{ \cup }_{{g}^{\prime }}\left( {{A}^{\prime } \times I}\right) \]\n\nit is sufficient to consider the two special cases: either \( A = {A}^{\prime } \) and \( {\varphi }_{A} = {id} \) or else \( Z = {Z}^{\prime } \) and \( {\varphi }_{Z} = {id} \) .
Yes
Proposition 1.17 If \( \left( {X,{x}_{0}}\right) \) and \( \left( {Y,{y}_{0}}\right) \) are well based spaces then there is a homotopy equivalence, \( X * Y\overset{ \simeq }{ \rightarrow }\sum \left( {X \land Y}\right) \) .
proof: Embed \( I \) as \( I{x}_{0} \) and \( I{y}_{0} \) in \( {CX} \) and \( {CY} \) . Identify \( {CX}{ \cup }_{I}{CY} \) as the subspace of \( X * Y \) of points \( \left( {{tx},\left( {1 - t}\right) y}\right) \) such that either \( x = {x}_{0} \) or \( y = {y}_{0} \) . Now \( \left( {{CX}{ \cup }_{I}{CY}}\right) /I = \left( {{CX}/I}\right) \vee \left( {{CY}/I}\right) \) . Since \( I \) is contractible we can apply the Corollary to Theorem 1.13 to obtain\n\n\[ \n{CX}{ \cup }_{I}{CY} \simeq \left( {{CX}{ \cup }_{I}{CY}}\right) /I \simeq {CX} \vee {CY} \simeq \{ {pt}\} .\n\]\n\nHence (by the same Corollary) the quotient map \( X * Y \rightarrow X * Y/{CX}{ \cup }_{I}{CY} \) is a homotopy equivalence.\n\nBut if \( q : X \times Y \times I \rightarrow X * Y \) is the quotient map \( \left( {x, y, t}\right) \mapsto \left( {{tx},\left( {1 - t}\right) y}\right) \) then \( {q}^{-1}\left( {{CX}{ \cup }_{I}{CY}}\right) \) is just \( \left( {X \vee Y}\right) \times I \) . Hence \( q \) induces a homeomorphism \( \sum \left( {X \land Y}\right) \overset{ \cong }{ \rightarrow }X * Y/{CX}{ \cup }_{I}{CY}. \)
Yes
Proposition 2.1 (i) A fibration \( p : X \rightarrow Y \) has the lifting property with respect to any DR pair \( \left( {Z, A}\right) \) . In particular, if \( \left( {W, B}\right) \) is any NDR pair then \( p \) has the lifting property with respect to \( \left( {W \times I, W\times \{ 0\} \cup B \times I}\right) \) .
proof: [157] In both cases we suppose given a commutative square\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_55_1.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_55_1.jpg)\n\nand we have to construct \( k : Z \rightarrow X \) .\n\n(i) Let \( r : Z \rightarrow A \) be a retraction and let \( H : Z \times I \rightarrow Z \) be a homotopy\n\nrel \( A \) from ir to \( i{d}_{Z} \) . Choose a continuous map \( h : Z \rightarrow I \) such that \( A = {h}^{-1}\left( 0\right) \) . Define a continuous map \( {H}^{\prime } : Z \times I \rightarrow Z \) by\n\n\[ \n{H}^{\prime }\left( {z, t}\right) = \left\{ \begin{array}{lll} H\left( {z, t/h\left( z\right) }\right) & , & t < h\left( z\right) \\ z & , & t \geq h\left( z\right) . \end{array}\right. \n\]\n\nThen \( g{H}^{\prime }\left( {z,0}\right) = {gr}\left( z\right) = {pfr}\left( z\right) \) . Thus there is a continuous map \( K : Z \times I \rightarrow X \) such that \( {pK} = g{H}^{\prime } \) and \( K\left( {z,0}\right) = {fr}\left( z\right) \) . Set \( k\left( z\right) = K\left( {z, h\left( z\right) }\right) \) . For the second assertion apply Proposition 1.9.
Yes
Proposition 2.2 [93] With the notation above the correspondence \( g \rightsquigarrow h \) defines natural set maps \( {\partial }_{n} : {\pi }_{n}\left( {Y,{y}_{0}}\right) \rightarrow {\pi }_{n - 1}\left( {F,{x}_{0}}\right) \) . When \( n \geq 2 \), these are group homomorphisms, and fit into a long exact sequence (the long exact homotopy sequence)
proof: Put \( W = {S}^{n} \times I \) and \( B = {S}^{n} \times \{ 0\} \cup \{ * \} \times I \) . Then the inclusion \( W \times \{ 0,1\} \cup B \times I \hookrightarrow W \times I \) is a weak homotopy equivalence. Hence it follows from Proposition 2.1(ii) that the based homotopy class of \( h \) is independent of the choice of \( H \) and depends only on the based homotopy class of \( g \) .\n\nRecall from \( §1 \) that maps \( \left( {{S}^{n}, * }\right) \rightarrow \left( {X,{x}_{0}}\right) \) may be identified with maps \( \left( {{I}^{n},\partial {I}^{n}}\right) \rightarrow \left( {X,{x}_{0}}\right) \), and that, when \( n \geq 1 \), composition along the first coordinate defines a second product, \( f\widetilde{ * }g \), such that \( f\widetilde{ * }g \sim f * g \) rel \( \partial {I}^{n} \) . But it is immediate from the definition that when \( n \geq 2,{\partial }_{n}\left\lbrack {f\widetilde{ * }g}\right\rbrack = \left\lbrack {{\partial }_{n}f}\right\rbrack \widetilde{ * }\left\lbrack {{\partial }_{n}g}\right\rbrack \) ; hence \( {\partial }_{n} \) is a group homomorphism.\n\nThe exactness of the sequence and the assertion about \( {\partial }_{1} \) are straightforward consequences of the definition and Proposition 2.1(ii).
Yes
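The long exact sequence mentioned in Proposition 2.2 is not displayed in the statement above; presumably it is the standard long exact homotopy sequence of the fibration \( p : X \rightarrow Y \) with fibre \( F = {p}^{-1}\left( {y}_{0}\right) \) :

\[ \cdots \rightarrow {\pi }_{n}\left( {F,{x}_{0}}\right) \rightarrow {\pi }_{n}\left( {X,{x}_{0}}\right) \overset{{\pi }_{n}\left( p\right) }{ \rightarrow }{\pi }_{n}\left( {Y,{y}_{0}}\right) \overset{{\partial }_{n}}{ \rightarrow }{\pi }_{n - 1}\left( {F,{x}_{0}}\right) \rightarrow \cdots \rightarrow {\pi }_{0}\left( {X,{x}_{0}}\right) \rightarrow {\pi }_{0}\left( {Y,{y}_{0}}\right) , \]

where the tail consists of exact sequences of pointed sets.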
Proposition 2.3 Suppose in the diagram above that \( p \) is a fibration.\n\n(i) If \( g \) is the inclusion of a DR pair (resp. an NDR pair), \( \left( {Y, A}\right) \) then \( \left( {X,{X}_{A}}\right) \) is also a DR pair (resp. an NDR pair).\n\n(ii) If \( g \) is a homotopy equivalence so is \( {g}_{X} \) .
proof: (i) Suppose \( \left( {Y, A}\right) \) is a DR pair. Let \( H : Y \times I \rightarrow Y \) be a homotopy rel \( A \) from \( i{d}_{Y} \) to a retraction \( r : Y \rightarrow A \) . Let \( h : Y \rightarrow I \) be a continuous function such that \( {h}^{-1}\left( 0\right) = A \) and define a homotopy \( {H}^{\prime } : X \times I \rightarrow Y \) by\n\n\[ \n{H}^{\prime }\left( {x, t}\right) = \left\{ \begin{array}{lll} H\left( {{px}, t/h\left( {px}\right) }\right) & , & t < h\left( {px}\right) \\ {px} & , & t \geq h\left( {px}\right) \end{array}\right. \n\]\n\nLift this to a homotopy \( {K}^{\prime } : X \times I \rightarrow X \) starting at \( i{d}_{X} \) . Define \( K : X \times I \rightarrow X \) by\n\n\[ \nK\left( {x, t}\right) = \left\{ \begin{array}{lll} {K}^{\prime }\left( {x, t}\right) & , & t \leq h\left( {px}\right) \\ {K}^{\prime }\left( {x, h\left( {px}\right) }\right) & , & t \geq h\left( {px}\right) \end{array}\right. \n\]\n\nThen \( K \) exhibits \( {X}_{A} \) as a strong deformation retract of \( X \) . Thus \( K \) together with \( k = h \circ p \) exhibits \( \left( {X,{X}_{A}}\right) \) as a DR pair.\n\nAn identical argument shows that if \( \left( {Y, A}\right) \) is an NDR pair then so is \( \left( {X,{X}_{A}}\right) \) .\n\n(ii) Regard \( g \) as a map \( A \times \{ 0\} \rightarrow Y \), and let \( r : \left( {A \times I}\right) { \cup }_{g}Y \rightarrow Y \) be the retraction sending \( \left( {a, t}\right) \) to \( {ga} \) . Denote \( \left( {A \times I}\right) { \cup }_{g}Y \) by \( Z \) and form the pullback diagram\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_58_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_58_0.jpg)\n\nNow identify \( A = A \times \{ 1\} \subset Z \) . Then the inclusion of \( A \) in \( Z \) is a homotopy equivalence, because \( g \) is, while the inclusion of \( Y \) in \( Z \) is obviously a homotopy equivalence. 
Since \( \left( {Z, A}\right) \) and \( \left( {Z, Y}\right) \) are clearly NDR pairs, Proposition 1.11 asserts they are DR pairs. Now assertion (i) of this proposition implies that \( \left( {Z{ \times }_{Y}X, A{ \times }_{Y}X}\right) \) and \( \left( {Z{ \times }_{Y}X, Y{ \times }_{Y}X}\right) \) are DR pairs. But \( Y{ \times }_{Y}X = X \) and \( {r}_{X} : Z{ \times }_{Y}X \rightarrow X \) is a left inverse for the inclusion. Hence \( {r}_{X} \) is a homotopy equivalence. Moreover, \( {g}_{X} \) factors as \( A{ \times }_{Y}X \rightarrow Z{ \times }_{Y}X\overset{{r}_{X}}{ \rightarrow }X \), and so it is a homotopy equivalence too.
Yes
Proposition 2.6 A fibre bundle is a Serre fibration.
proof: First note that if a continuous surjection \( p : X \rightarrow Y \) has the lifting property with respect to \( {\left( {I}^{n} \times I,{I}^{n}\times \{ 0\} \right) }_{n > 0} \), then it is a Serre fibration. Indeed, suppose given a relative CW complex \( \left( {W, A}\right) \) and a homotopy \( g : W \times I \rightarrow Y \) . Note that \( \left( {{D}^{n} \times I,{S}^{n - 1} \times I \cup {D}^{n}\times \{ 0\} }\right) \cong \left( {{I}^{n} \times I,{I}^{n}\times \{ 0\} }\right) \) . Thus if \( f : W \times \{ 0\} \cup A \times I \rightarrow X \) satisfies \( {pf} = g \) we may extend \( f \), one cell at a time, to a lift of \( g \) .\n\nSuppose now that \( p : X \rightarrow Y \) is a fibre bundle and we want to lift \( g : {I}^{n + 1} \rightarrow \) \( Y \) through \( p \) extending \( f : {I}^{n} \times \{ 0\} \rightarrow X \) . Use the pullback over \( g \) to reduce to the case \( Y = {I}^{n + 1}, g = \) identity. Recall (§1) that a subdivision of \( I \) gives a product cellular structure for \( {I}^{n + 1} \) . Choose the subdivision sufficiently fine that each 'little cell' is contained in an open set over which the fibre bundle is trivial. Then the restriction to each cell is a Serre fibration and the construction of the lift of \( g \), one cell at a time, is immediate.
Yes
Proposition 2.8 With the hypotheses above \( p : X \rightarrow Y \) is a principal \( G - \) bundle.
proof: It follows from Proposition 2.7 that \( p : X \rightarrow Y \) is a fibre bundle. Thus \( Y \) is covered by open sets \( {U}_{i} \) for which there are continuous maps \( {\tau }_{i} \) : \( {U}_{i} \rightarrow X \) such that \( p{\tau }_{i} = {id} \) . We have only to check that the continuous maps \( {h}_{i} : {U}_{i} \times G \rightarrow {p}^{-1}\left( {U}_{i}\right) ,\left( {u, g}\right) \mapsto {\tau }_{i}\left( u\right) \cdot g \), are homeomorphisms.\n\nIt is immediate from the hypotheses above that \( {h}_{i} \) is a bijection. Thus it is sufficient to show that \( {h}_{i}^{-1}\left( C\right) \) is compact for any compact subspace \( C \subset {p}^{-1}\left( {U}_{i}\right) \) . Since \( p\left( C\right) \) is covered by finitely many cells (Proposition 1.1(ii)) this too follows easily from the hypotheses.
Yes
Proposition 2.9 If \( p : X \rightarrow Y \) is a principal \( G \) -bundle then there is a weak homotopy equivalence \( {X}_{G}\overset{ \simeq }{ \rightarrow }Y \) .
proof: The associated fibre bundle with fibre \( {E}_{G} \) has the form\n\n\[ \n{q}^{\prime } : {X}_{G} = \left( {{E}_{G} \times X}\right) /G \rightarrow Y.\n\]\n\nSince \( {\pi }_{ * }\left( {E}_{G}\right) = 0 \) and since this is a Serre fibration, the long exact homotopy sequence shows that \( {q}^{\prime } \) is a weak homotopy equivalence.\n\nConsider a principal \( G \) -bundle \( p : X \rightarrow Y \) whose base \( Y \) is a CW complex. Because \( {q}^{\prime } : {X}_{G} \rightarrow Y \) is a weak homotopy equivalence there is a map \( {\sigma }^{\prime } \) : \( Y \rightarrow {X}_{G} \) such that \( {q}^{\prime }{\sigma }^{\prime } \sim {id} \) (Whitehead Lifting Lemma 1.4). Because \( {q}^{\prime } \) is a Serre fibration we can lift the homotopy starting at \( {\sigma }^{\prime } \) to obtain a homotopy \( {\sigma }^{\prime } \sim \sigma : Y \rightarrow {X}_{G} \) with \( {q}^{\prime }\sigma = {id} \) . This identifies \( Y \) as a subspace of \( {X}_{G} \) .\n\nRestrict the principal \( G \) -bundle \( {E}_{G} \times X \rightarrow {X}_{G} \) to a principal bundle \( P \rightarrow Y \) . Projection \( {E}_{G} \times X \rightarrow X \) restricts to a map \( P \rightarrow X \) of principal bundles covering the identity map of \( Y \) . This map is therefore a homeomorphism, which we use to identify \( P = X \) .\n\nThus the diagram\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_68_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_68_0.jpg)\n\nexhibits the original principal bundle as a pullback of Milnor's universal bundle (whence the name). An extension of this argument shows that the pullback construction defines a bijection\n\n\[ \n\left\lbrack {Y,{B}_{G}}\right\rbrack \overset{ \cong }{ \rightarrow }\left\{ \begin{array}{l} \text{ isomorphism classes of principal } \\ G\text{-bundles over }Y \end{array}\right\}\n\]\n\nthereby explaining the terminology ’classifying space’ for \( {B}_{G} \) .
Yes
Proposition 2.10 \( \gamma \) and \( {\gamma }^{\prime } \) are weak homotopy equivalences, so that \( G \) and \( {\Omega B} \) are weakly equivalent topological monoids.
proof: Define an action of \( G{ \times }_{Z}{PZ} \) on \( {PZ} \) by setting \( u \cdot \left( {g, w}\right) = \) \( \left( {u \cdot g}\right) * w \) . Then \( \pi : {PZ} \rightarrow B,\pi \left( w\right) = {p}_{B}\left( {w\left( 0\right) }\right) \), is a \( G{ \times }_{Z}{PZ} \) -fibration. It fits in the diagram of fibrations\n\n![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_69_0.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_69_0.jpg)\n\nwhere \( f\left( w\right) = w\left( 0\right) \) and \( {f}^{\prime }\left( w\right) = {p}_{B} \circ w \) . Moreover \( f \) and \( {f}^{\prime } \) restrict to \( \gamma \) and \( {\gamma }^{\prime } \) respectively in the fibres. Since \( {\pi }_{ * } \) vanishes on \( Z,{PZ} \) and \( {PB} \), it follows from the long exact homotopy sequence that \( \gamma \) and \( {\gamma }^{\prime } \) are weak homotopy equivalences. \( ▱ \)
Yes
Proposition 2.11 The pullback fibre bundle \( p : Y{ \times }_{B}Z \rightarrow Y \) and the holonomy fibration of \( \varphi \) are connected by equivariant weak equivalences of fibrations:
![4c8f1a9a-2daa-4186-96fd-2418ee588fa5_70_1.jpg](images/4c8f1a9a-2daa-4186-96fd-2418ee588fa5_70_1.jpg)
No
Lemma 3.2 A necessary and sufficient condition for \( H\left( f\right) \) to be an isomorphism (of degree \( i \) ) is that for each \( \left( {m, n}\right) \in M \times N \) satisfying \( {dm} = 0 \) and \( f\left( m\right) = {dn} \) there exist \( \left( {{m}^{\prime },{n}^{\prime }}\right) \in M \times N \) satisfying \( d{m}^{\prime } = m \) and \( d{n}^{\prime } = n - f\left( {m}^{\prime }\right) \) .
proof: Suppose the condition holds. If \( {dm} = 0 \) and \( H\left( f\right) \left\lbrack m\right\rbrack = 0 \) then \( f\left( m\right) = {dn} \) ; hence \( m = d{m}^{\prime } \) and \( \left\lbrack m\right\rbrack = 0 \) . If \( \left\lbrack n\right\rbrack \in H\left( N\right) \) then \( {dn} = 0 = f\left( 0\right) \) and so there exist \( \left( {{m}^{\prime },{n}^{\prime }}\right) \) with \( d{m}^{\prime } = m \) and \( d{n}^{\prime } = n - f\left( {m}^{\prime }\right) \) ; i.e. \( \left\lbrack n\right\rbrack = H\left( f\right) \left\lbrack {m}^{\prime }\right\rbrack \) .\n\nSuppose \( H\left( f\right) \) is an isomorphism. Given \( \left( {m, n}\right) \) as in the lemma we have \( H\left( f\right) \left\lbrack m\right\rbrack = 0 \) . Hence \( m = d{m}^{\prime \prime } \) . Now \( n - f\left( {m}^{\prime \prime }\right) \) is a cycle; hence \( \left\lbrack {n - f\left( {m}^{\prime \prime }\right) }\right\rbrack = H\left( f\right) \left\lbrack z\right\rbrack = \left\lbrack {f\left( z\right) }\right\rbrack \) for some cycle \( z \) . Thus \( m = d\left( {{m}^{\prime \prime } + z}\right) \) and \( n - f\left( {{m}^{\prime \prime } + z}\right) = d{n}^{\prime } \) . Set \( {m}^{\prime } = {m}^{\prime \prime } + z \) .
Yes
A morphism \( \varphi : R \rightarrow S \) of graded algebras makes \( S \) into a left (and right) \( R \) -module
\[ x \cdot s = \varphi \left( x\right) s\;\text{ or }\;s \cdot x = {s\varphi }\left( x\right) ,\;x \in R, s \in S. \] If \( M \) is an \( R \) -module then \( S{ \otimes }_{R}M \) is an \( S \) -module via \( s \cdot \left( {{s}^{\prime }{ \otimes }_{R}m}\right) = s{s}^{\prime }{ \otimes }_{R}m \) .
Yes
Let \( R \) be a graded algebra. An \( R \) -module \( M \) is free if \( M \cong R \otimes V \), with \( V \) a free graded module. A basis \( \left\{ {v}_{\alpha }\right\} \) for \( V \) is a basis for the free \( R \) -module \( M \) . If \( \varphi : R \rightarrow S \) is a morphism of graded algebras then
\[ S{ \otimes }_{R}M = S{ \otimes }_{R}\left( {R \otimes V}\right) = S \otimes V \] is a free \( S \) -module with the same basis.
Yes
If \( R, S \) are graded algebras then \( R \otimes S \) denotes the graded algebra with multiplication
\[ \left( {x \otimes y}\right) \left( {{x}^{\prime } \otimes {y}^{\prime }}\right) = {\left( -1\right) }^{\deg y\deg {x}^{\prime }}x{x}^{\prime } \otimes y{y}^{\prime }. \]
Yes
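The Koszul sign in this rule can be sanity-checked mechanically. The following minimal sketch (ours, not from the source; only degrees mod 2 matter) verifies that the sign convention makes the product on \( R \otimes S \) associative on homogeneous elements:

```python
from itertools import product

def koszul_sign(deg_y, deg_xp):
    """Sign (-1)^{deg y * deg x'} picked up when y moves past x'
    in the product (x (x) y)(x' (x) y') = +/- xx' (x) yy'."""
    return -1 if (deg_y * deg_xp) % 2 == 1 else 1

# Associativity check: for homogeneous elements of degrees (a,b), (c,d), (e,f)
# in R (x) S, compare the total signs of ((x y)(x' y'))(x'' y'') and
# (x y)((x' y')(x'' y'')).  Degrees add under multiplication.
for a, b, c, d, e, f in product(range(2), repeat=6):
    left = koszul_sign(b, c) * koszul_sign(b + d, e)
    right = koszul_sign(d, e) * koszul_sign(b, c + e)
    assert left == right
print("sign convention is associative")
```

Both sides reduce to \( {\left( -1\right) }^{{bc} + {be} + {de}} \), which is why the loop never fails.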
For any free graded module \( V \), the tensor algebra \( {TV} \) is defined by\n\n\[ \n{TV} = {\bigoplus }_{q = 0}^{\infty }{T}^{q}V,\;{T}^{q}V = \underset{q}{\underbrace{V \otimes \cdots \otimes V}}. \n\]\n\nMultiplication is given by \( a \cdot b = a \otimes b \) . Note that \( q \) is not the degree: elements \( {v}_{1} \otimes \cdots \otimes {v}_{q} \in {T}^{q}V \) have degree \( = \sum \deg {v}_{i} \) and word length \( q \) . If \( \left\{ {v}_{i}\right\} \) is a basis of \( V \) we may write \( {TV} = T\left( \left\{ {v}_{i}\right\} \right) \) .
Any linear map of degree zero from \( V \) to a graded algebra \( R \) extends to a unique morphism of graded algebras, \( {TV} \rightarrow R \) . Any degree \( k \) linear map \( V \rightarrow {TV} \) extends to a unique derivation of \( {TV} \) .
Yes
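The distinction between degree and word length can be made concrete. A tiny sketch (ours; the basis \( \{ {v}_{1},{v}_{2}\} \) and its degrees are hypothetical), with a monomial of \( {T}^{q}V \) represented as a tuple of generator names:

```python
def degree(word, deg):
    """Degree of a monomial v_1 (x) ... (x) v_q in TV: the sum of the
    degrees of its factors (NOT the word length q)."""
    return sum(deg[v] for v in word)

deg = {"v1": 2, "v2": 3}       # hypothetical basis with deg v1 = 2, deg v2 = 3
word = ("v1", "v2", "v1")      # an element of T^3(V)
assert len(word) == 3          # word length q = 3
assert degree(word, deg) == 7  # degree = 2 + 3 + 2
```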
A graded algebra \( A \) is commutative if\n\n\[ \n{xy} = {\left( -1\right) }^{\deg x\deg y}{yx},\;x, y \in A.\n\]
When \( \frac{1}{2} \in \mathbb{k} \) this condition implies that \( {x}^{2} = 0 \) if \( x \) has odd degree. If \( A \) is a commutative graded algebra, then a left \( A \) -module, \( M \), is automatically a right \( A \) -module, via\n\n\[ \n{mx} = {\left( -1\right) }^{\deg m\deg x}{xm}.\n\]\n\nIf \( N \) is a second \( A \) -module then \( {\operatorname{Hom}}_{A}\left( {M, N}\right) \) and \( M{ \otimes }_{A}N \) are \( A \) -modules via\n\n\[ \n\left( {xf}\right) \left( m\right) = x \cdot f\left( m\right) = {\left( -1\right) }^{\deg x\deg f}f\left( {xm}\right)\n\]\n\nand\n\n\[ \nx\left( {m{ \otimes }_{A}n}\right) = {xm}{ \otimes }_{A}n = {\left( -1\right) }^{\deg x\deg m}m{ \otimes }_{A}{xn},\;x \in A, m \in M, n \in N.\n\]\n\nIf \( A \rightarrow B, A \rightarrow C \) are morphisms of commutative graded algebras then \( B \otimes C \) is also commutative and the kernel of the surjection \( B \otimes C \rightarrow B{ \otimes }_{A}C \) is an ideal. Thus \( B{ \otimes }_{A}C \) is also a commutative graded algebra.
Yes
Example 6 Free commutative graded algebras.
Suppose \( \mathbb{k} \) contains \( \frac{1}{2} \) . Let \( V \) be a free graded module. The elements \( v \otimes w - \) \( {\left( -1\right) }^{\deg v\deg w}w \otimes v\left( {v, w \in V}\right) \) generate an ideal \( I \subset {TV} \) . The quotient graded algebra\n\n\[ \n{\Lambda V} = {TV}/I \n\]\n\nis called the free commutative graded algebra on \( V \) . If \( \left\{ {v}_{i}\right\} \) is a basis of \( V \) we may write \( {\Lambda V} = \Lambda \left( \left\{ {v}_{i}\right\} \right) \) . The algebra \( {\Lambda V} \) has the following properties:\n\n(i) \( {\Lambda V} \) is graded commutative; in particular, the square of an element of odd degree in \( {\Lambda V} \) is zero.\n\n(ii) There is a unique isomorphism \( \Lambda \left( {V \oplus W}\right) = {\Lambda V} \otimes {\Lambda W} \) which is the identity in \( V \) and in \( W \) .\n\n(iii) If \( V \) is free on a single basis element \( \{ v\} \) then a basis of \( {\Lambda V} \) is given by:\n\n\[ \n\left\{ \begin{array}{llllll} 1 & v & & & & \text{ if }\deg v\text{ is odd } \\ 1 & v & {v}^{2} & {v}^{3} & \cdots & \text{ if }\deg v\text{ is even. } \end{array}\right. \n\]\n\n(iv) A linear map of degree zero from \( V \) to a commutative graded algebra \( A \) extends to a unique morphism of graded algebras, \( {\Lambda V} \rightarrow A \) . A linear map \( V \rightarrow {\Lambda V} \) of degree \( k \) extends to a unique derivation in \( {\Lambda V} \) .\n\n(v) Suppose \( \varphi : {\Lambda V} \rightarrow A \) is a morphism of graded algebras and \( \theta \) and \( {\theta }^{\prime } \) are derivations respectively in \( {\Lambda V} \) and in \( A \) . 
If \( {\varphi \theta v} = {\theta }^{\prime }{\varphi v}, v \in V \), then \( {\varphi \theta } = {\theta }^{\prime }\varphi \) .\n\n(vi) \( {\Lambda V} = {\bigoplus }_{q = 0}^{\infty }{\Lambda }^{q}V \), where \( {\Lambda }^{q}V \) is the linear span of the elements \( {v}_{1} \land \cdots \land {v}_{q} \), \( {v}_{i} \in V \) ; these elements have degree \( = {\sum }_{i}\deg {v}_{i} \) and word length \( q \) .
Yes
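Properties (i) and (iii) can be illustrated with a minimal sketch (ours, not from the source; generator names and degrees are hypothetical) of monomial multiplication in \( {\Lambda V} \): generators are sorted into a fixed order with Koszul signs, a repeated odd-degree generator kills the product, and even-degree generators behave polynomially:

```python
def mul_monomials(m1, m2, deg):
    """Product of two monomials of Lambda V (tuples of generator names).
    Returns (sign, monomial); sign is 0 when the product vanishes.
    deg maps each generator to its degree."""
    word = list(m1) + list(m2)
    sign = 1
    # bubble-sort into a fixed order, tracking the Koszul sign:
    # swapping adjacent a, b costs (-1)^{deg a * deg b}
    changed = True
    while changed:
        changed = False
        for i in range(len(word) - 1):
            a, b = word[i], word[i + 1]
            if a > b:
                word[i], word[i + 1] = b, a
                if deg[a] % 2 and deg[b] % 2:
                    sign = -sign
                changed = True
    # the square of an odd-degree generator is zero
    for i in range(len(word) - 1):
        if word[i] == word[i + 1] and deg[word[i]] % 2:
            return 0, tuple(word)
    return sign, tuple(word)

deg = {"v": 3, "w": 2}                               # v odd, w even
assert mul_monomials(("v",), ("v",), deg)[0] == 0    # v^2 = 0
assert mul_monomials(("w",), ("w",), deg) == (1, ("w", "w"))  # w^2 survives
# graded commutativity: v.w = (-1)^{3*2} w.v = w.v
assert mul_monomials(("v",), ("w",), deg) == mul_monomials(("w",), ("v",), deg)
```

Two odd generators anticommute under the same function: multiplying \( v \) by an odd generator \( u \) with \( u < v \) in the chosen order produces sign \( -1 \).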
If \( \left( {A, d}\right) \) and \( \left( {{A}^{\prime }, d}\right) \) are dga’s then the direct product \( \left( {A, d}\right) \times \left( {{A}^{\prime }, d}\right) \) is the dga \( \left( {A \times {A}^{\prime }, d}\right) \) given by \( \left( {a,{a}^{\prime }}\right) \cdot \left( {{a}_{1},{a}_{1}^{\prime }}\right) = \left( {a{a}_{1},{a}^{\prime }{a}_{1}^{\prime }}\right) \) and \( d\left( {a,{a}^{\prime }}\right) = \left( {{da}, d{a}^{\prime }}\right) \) .
The direct product, \( \mathop{\prod }\limits_{\alpha }\left( {A\left( \alpha \right), d}\right) \), of a family of dga’s is defined in the same way.
No
Proposition 3.3. If \( \mathbb{k} \) is a field these natural maps are isomorphisms:\n\n\[ H\left( M\right) \otimes H\left( N\right) = H\left( {M \otimes N}\right) \;\text{ and }\;H\left( {\operatorname{Hom}\left( {M, N}\right) }\right) = \operatorname{Hom}\left( {H\left( M\right), H\left( N\right) }\right) . \]
proof: This is a straightforward exercise using the fact that any complex \( M \) can be written \( M = \operatorname{Im}d \oplus H \oplus C \) with \( d : C\overset{ \cong }{ \rightarrow }\operatorname{Im}d \) and \( d = 0 \) in \( H \) .
No
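Over a field the first isomorphism can be checked numerically on small examples. The sketch below (ours; all names hypothetical) builds the tensor product of two two-term complexes over \( \mathbb{Q} \), using the usual sign \( d\left( {m \otimes n}\right) = {dm} \otimes n + {\left( -1\right) }^{\deg m}m \otimes {dn} \), and compares Betti numbers degree by degree:

```python
import numpy as np

def betti_two_term(A):
    """Betti numbers (h1, h0) of the complex 0 -> k^m --A--> k^n -> 0."""
    r = np.linalg.matrix_rank(A)
    return A.shape[1] - r, A.shape[0] - r

def tensor_differentials(A, B):
    """Differentials (d2, d1) of M (x) N for two-term complexes with
    differentials A : M1 -> M0 and B : N1 -> N0."""
    m1, m0 = A.shape[1], A.shape[0]
    n1, n0 = B.shape[1], B.shape[0]
    # degree 2 -> 1 : M1(x)N1 -> (M1(x)N0) + (M0(x)N1); the minus sign is Koszul
    d2 = np.vstack([-np.kron(np.eye(m1), B), np.kron(A, np.eye(n1))])
    # degree 1 -> 0 : (M1(x)N0) + (M0(x)N1) -> M0(x)N0
    d1 = np.hstack([np.kron(A, np.eye(n0)), np.kron(np.eye(m0), B)])
    return d2, d1

A = np.array([[1., 0.], [0., 0.], [2., 0.]])   # rank 1, so H(M) has dims (1, 2)
B = np.array([[1., 1.]])                       # rank 1, so H(N) has dims (1, 0)
d2, d1 = tensor_differentials(A, B)
assert np.allclose(d1 @ d2, 0)                 # d o d = 0
hM1, hM0 = betti_two_term(A)
hN1, hN0 = betti_two_term(B)
r2, r1 = np.linalg.matrix_rank(d2), np.linalg.matrix_rank(d1)
h2 = d2.shape[1] - r2
h1 = d1.shape[1] - r1 - r2
h0 = d1.shape[0] - r1
# Kunneth over a field: H(M (x) N) = H(M) (x) H(N), degree by degree
assert (h2, h1, h0) == (hM1 * hN1, hM1 * hN0 + hM0 * hN1, hM0 * hN0)
```

The identity \( {d}_{1}{d}_{2} = -\operatorname{kron}\left( {A, B}\right) + \operatorname{kron}\left( {A, B}\right) = 0 \) is exactly the sign rule at work.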
Proposition 4.10 AW and EZ are inverse chain equivalences. In fact, \( {AW} \circ \) \( {EZ} = {id} \) and \( {EZ} \circ {AW} \) is naturally homotopic to the identity.
proof: The first assertion is a simple computation, depending on the fact that we have divided by the degenerate simplices. For the second, we have to construct \( h : {C}_{n}\left( {X \times Y;\mathbb{k}}\right) \rightarrow {C}_{n + 1}\left( {X \times Y;\mathbb{k}}\right) \), natural in \( X \) and \( Y \), such that \( {EZ} \circ {AW} - {id} = {dh} + {hd} \) . We may set \( h = 0 \) in \( {C}_{0}\left( {X \times Y}\right) \) . Suppose for some \( n \geq 1 \) that \( h \) is constructed for \( i < n \) . Let \( {\Delta }_{\text{top }} : {\Delta }^{n} \rightarrow {\Delta }^{n} \times {\Delta }^{n} \) be the diagonal, regarded as a singular \( n \) -simplex. Then \( z = \left( {{EZ} \circ {AW}}\right) \left( {\Delta }_{\text{top }}\right) - {\Delta }_{\text{top }} - {hd}\left( {\Delta }_{\text{top }}\right) \) is a cycle in \( {C}_{n}\left( {{\Delta }^{n} \times {\Delta }^{n};\mathbb{k}}\right) \) . Since \( {\Delta }^{n} \times {\Delta }^{n} \) is contractible we may find \( {c}_{n + 1} \in {C}_{n + 1}\left( {{\Delta }^{n} \times {\Delta }^{n};\mathbb{k}}\right) \) so that \( d{c}_{n + 1} = z \) . Now for any \( n \) -simplex \( \left( {\sigma ,\tau }\right) : {\Delta }^{n} \rightarrow X \times Y \) define \( h\left( {\sigma ,\tau }\right) = {C}_{ * }\left( {\sigma \times \tau }\right) \left( {c}_{n + 1}\right) \) .
Yes