| Q | A | Result |
|---|---|---|
Lemma 3.10. Assume that \( M \) is flat over \( R \). Let \( {a}_{i} \in R,{x}_{i} \in M \) for \( i = 1, \ldots, n \), and suppose that we have the relation

\[ \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{x}_{i} = 0. \]

Then there exists an integer \( s \) and elements \( {b}_{ij} \in R \) and \( {y}_{j} \in M\left( {j = 1,\ldots, s}\right) \) such that

\[ \mathop{\sum }\limits_{i}{a}_{i}{b}_{ij} = 0\;\text{ for all }j\;\text{ and }\;{x}_{i} = \mathop{\sum }\limits_{j}{b}_{ij}{y}_{j}\;\text{ for all }i. \]
|
Proof. We consider the exact sequence

\[ 0 \rightarrow K \rightarrow {R}^{\left( n\right) } \rightarrow R \]

where the map \( {R}^{\left( n\right) } \rightarrow R \) is given by

\[ \left( {{b}_{1},\ldots ,{b}_{n}}\right) \mapsto \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{b}_{i} \]

and \( K \) is its kernel. Since \( M \) is flat it follows that

\[ K \otimes M \rightarrow {M}^{\left( n\right) }\overset{{f}_{M}}{ \rightarrow }M \]

is exact, where \( {f}_{M} \) is given by

\[ {f}_{M}\left( {{z}_{1},\ldots ,{z}_{n}}\right) = \mathop{\sum }\limits_{{i = 1}}^{n}{a}_{i}{z}_{i}. \]

Since \( \left( {{x}_{1},\ldots ,{x}_{n}}\right) \) lies in the kernel of \( {f}_{M} \), hence in the image of \( K \otimes M \), there exist elements \( {\beta }_{j} \in K \) and \( {y}_{j} \in M \) such that

\[ \left( {{x}_{1},\ldots ,{x}_{n}}\right) = \mathop{\sum }\limits_{{j = 1}}^{s}{\beta }_{j}{y}_{j}. \]

Write \( {\beta }_{j} = \left( {{b}_{1j},\ldots ,{b}_{nj}}\right) \) with \( {b}_{ij} \in R \). This proves the lemma.
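As a concrete illustration (not in the original text), one can trace the lemma through the simplest flat case \( R = \mathbf{Z} \), \( M = \mathbf{Z} \), with the hypothetical relation \( 2 \cdot 3 - 3 \cdot 2 = 0 \):

```python
# A concrete instance of Lemma 3.10 over R = Z with the flat module M = Z.
# Relation: a1*x1 + a2*x2 = 2*3 + (-3)*2 = 0.
a = (2, -3)
x = (3, 2)
assert a[0] * x[0] + a[1] * x[1] == 0

# The kernel K of (b1, b2) |-> 2*b1 - 3*b2 is generated by beta = (3, 2),
# so we may take s = 1, b_i1 = beta_i and y_1 = 1.
beta = (3, 2)
y1 = 1

# sum_i a_i * b_i1 = 0 ...
assert a[0] * beta[0] + a[1] * beta[1] == 0
# ... and x_i = b_i1 * y_1 recovers the original elements.
assert x == (beta[0] * y1, beta[1] * y1)
```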
|
Yes
|
Proposition 4.2. Let \( R \rightarrow A \) be an \( R \)-algebra, and assume \( A \) commutative.

(i) Base change. If \( F \) is a flat \( R \)-module, then \( A{ \otimes }_{R}F \) is a flat \( A \)-module.

(ii) Transitivity. If \( A \) is a flat commutative \( R \)-algebra and \( M \) is a flat \( A \)-module, then \( M \) is flat as \( R \)-module.
|
The proofs are immediate, and will be left to the reader.
|
No
|
Lemma 5.2. Let \( E \) and \( {E}_{i}\left( {i = 1,\ldots, m}\right) \) be modules over a ring. Let \( {\varphi }_{i} : {E}_{i} \rightarrow E \) and \( {\psi }_{i} : E \rightarrow {E}_{i} \) be homomorphisms having the following properties:

\[ {\psi }_{i} \circ {\varphi }_{i} = \mathrm{{id}},\;{\psi }_{i} \circ {\varphi }_{j} = 0\;\text{ if }\;i \neq j, \]

\[ \mathop{\sum }\limits_{{i = 1}}^{m}{\varphi }_{i} \circ {\psi }_{i} = \mathrm{{id}}. \]

Then the map

\[ x \mapsto \left( {{\psi }_{1}x,\ldots ,{\psi }_{m}x}\right) \]

is an isomorphism of \( E \) onto the direct product \( \mathop{\prod }\limits_{{i = 1}}^{m}{E}_{i} \), and the map

\[ \left( {{x}_{1},\ldots ,{x}_{m}}\right) \mapsto {\varphi }_{1}{x}_{1} + \cdots + {\varphi }_{m}{x}_{m} \]

is an isomorphism of the product onto \( E \). Conversely, if \( E \) is equal to the direct sum of submodules \( {E}_{i}\left( {i = 1,\ldots, m}\right) \), if we let \( {\varphi }_{i} \) be the inclusion of \( {E}_{i} \) in \( E \), and \( {\psi }_{i} \) the projection of \( E \) on \( {E}_{i} \), then these maps satisfy the above-mentioned properties.
|
Proof. The proof is routine, and is essentially the same as that of Proposition 3.1 of Chapter III. We shall leave it as an exercise to the reader.
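For readers who want to see the identities of Lemma 5.2 in action, here is a minimal numerical sketch over \( E = \mathbf{R}^3 \), with hypothetical inclusion and projection matrices for the splitting into the first two coordinates and the last one:

```python
import numpy as np

# E = R^3 = E1 (+) E2 with E1 = span(e1, e2), E2 = span(e3).
# phi_i : E_i -> E are the inclusions, psi_i : E -> E_i the projections.
phi1 = np.array([[1., 0.], [0., 1.], [0., 0.]])   # 3x2 inclusion of E1
phi2 = np.array([[0.], [0.], [1.]])               # 3x1 inclusion of E2
psi1 = phi1.T                                      # 2x3 projection onto E1
psi2 = phi2.T                                      # 1x3 projection onto E2

# psi_i phi_i = id and psi_i phi_j = 0 for i != j:
assert np.allclose(psi1 @ phi1, np.eye(2))
assert np.allclose(psi2 @ phi2, np.eye(1))
assert np.allclose(psi1 @ phi2, 0) and np.allclose(psi2 @ phi1, 0)

# sum_i phi_i psi_i = id_E:
assert np.allclose(phi1 @ psi1 + phi2 @ psi2, np.eye(3))

# Hence x -> (psi1 x, psi2 x) and (x1, x2) -> phi1 x1 + phi2 x2 are mutually
# inverse, exhibiting E as the direct product E1 x E2.
v = np.array([5., -2., 7.])
assert np.allclose(phi1 @ (psi1 @ v) + phi2 @ (psi2 @ v), v)
```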
|
No
|
Corollary 5.3. Let \( {E}^{\prime }, E,{F}^{\prime }, F \) be free and finite dimensional over \( R \). Then we have a functorial isomorphism

\[ L\left( {{E}^{\prime }, E}\right) \otimes L\left( {{F}^{\prime }, F}\right) \rightarrow L\left( {{E}^{\prime } \otimes {F}^{\prime }, E \otimes F}\right) \]

such that

\[ f \otimes g \mapsto T\left( {f, g}\right). \]
|
Proof. Keep \( E,{F}^{\prime }, F \) fixed, and view \( L\left( {{E}^{\prime }, E}\right) \otimes L\left( {{F}^{\prime }, F}\right) \) as a functor in the variable \( {E}^{\prime } \). Similarly, view

\[ L\left( {{E}^{\prime } \otimes {F}^{\prime }, E \otimes F}\right) \]

as a functor in \( {E}^{\prime } \). The map \( f \otimes g \mapsto T\left( {f, g}\right) \) is functorial, and thus by the lemma, it suffices to prove that it yields an isomorphism when \( {E}^{\prime } \) has dimension 1. Assume now that this is the case; fix \( {E}^{\prime } \) of dimension 1, and view the two expressions in the corollary as functors of the variable \( E \). Applying the lemma again, it suffices to prove that our arrow is an isomorphism when \( E \) has dimension 1. Similarly, we may assume that \( F,{F}^{\prime } \) have dimension 1. In that case the verification that the arrow is an isomorphism is a triviality, as desired.
|
Yes
|
Corollary 5.4. Let \( E, F \) be free and finite dimensional. Then we have a natural isomorphism

\[ {\operatorname{End}}_{R}\left( E\right) \otimes {\operatorname{End}}_{R}\left( F\right) \rightarrow {\operatorname{End}}_{R}\left( {E \otimes F}\right). \]
|
Proof. Special case of Corollary 5.3.
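In coordinates, the isomorphism of Corollary 5.4 is realized by the Kronecker product of matrices. The following sketch (an illustration, not from the original text) checks the multiplicativity \( (f \otimes g)(f' \otimes g') = ff' \otimes gg' \) and the dimension count for \( E = R^3 \), \( F = R^2 \):

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.standard_normal((2, 3, 3))   # endomorphisms of E = R^3
B, D = rng.standard_normal((2, 2, 2))   # endomorphisms of F = R^2

# In the standard bases, f (x) g acts on E (x) F as the Kronecker product,
# and the map is multiplicative: (A (x) B)(C (x) D) = AC (x) BD.
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Surjectivity by dimension count: the 9 * 4 = 36 matrices kron(E_ij, F_kl)
# built from matrix units are linearly independent in End(R^6).
units_E = [np.eye(3)[:, [i]] @ np.eye(3)[[j], :] for i in range(3) for j in range(3)]
units_F = [np.eye(2)[:, [k]] @ np.eye(2)[[l], :] for k in range(2) for l in range(2)]
span = np.array([np.kron(u, v).ravel() for u in units_E for v in units_F])
assert np.linalg.matrix_rank(span) == 36
```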
|
No
|
Corollary 5.5. Let \( E, F \) be free finite dimensional over \( R \). There is a functorial isomorphism

\[ {E}^{ \vee } \otimes F \rightarrow L\left( {E, F}\right) \]

given for \( \lambda \in {E}^{ \vee } \) and \( y \in F \) by the map

\[ \lambda \otimes y \mapsto {A}_{\lambda, y} \]

where \( {A}_{\lambda, y} \) is such that for all \( x \in E \), we have \( {A}_{\lambda, y}\left( x\right) = \lambda \left( x\right) y \).
|
To prove Corollary 5.5, justify that the formula written down gives a well-defined homomorphism of \( {E}^{ \vee } \otimes F \) into \( L\left( {E, F}\right) \). Then verify that this homomorphism is both injective and surjective. We leave the details as exercises.
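The surjectivity step can be made concrete: under the isomorphism of Corollary 5.5, \( \lambda \otimes y \) corresponds to the rank-one matrix with \( j \)-th column \( \lambda(e_j)\,y \), and every matrix is a sum of \( \dim E \) such terms. A numerical sketch with hypothetical choices of \( \lambda \) and \( y \):

```python
import numpy as np

# E = R^3, F = R^2: lambda (x) y corresponds to the rank-one operator
# A_{lambda,y}(x) = lambda(x) y, i.e. the matrix outer(y, lambda).
lam = np.array([1., 2., -1.])   # a functional lambda in the dual of E (row vector)
y = np.array([4., 5.])          # an element y of F
A = np.outer(y, lam)            # the 2x3 matrix of A_{lambda,y}
v = np.array([2., 0., 1.])
assert np.allclose(A @ v, (lam @ v) * y)   # A(x) = lambda(x) * y

# Surjectivity: any matrix M equals sum_j outer(M e_j, e_j^dual), a sum of
# dim(E) elementary tensors; injectivity then follows since both sides have
# dimension dim(E) * dim(F) = 6.
M = np.arange(6.).reshape(2, 3)
recon = sum(np.outer(M[:, j], np.eye(3)[j]) for j in range(3))
assert np.allclose(recon, M)
```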
|
No
|
Corollary 5.6. Let \( E, F \) be free and finite dimensional over \( R \). There is a functorial isomorphism

\[ {E}^{ \vee } \otimes {F}^{ \vee } \rightarrow {\left( E \otimes F\right) }^{ \vee } \]

given for \( \lambda \in {E}^{ \vee } \) and \( \mu \in {F}^{ \vee } \) by the map

\[ \lambda \otimes \mu \mapsto \Lambda \]

where \( \Lambda \) is such that, for all \( x \in E \) and \( y \in F \),

\[ \Lambda \left( {x \otimes y}\right) = \lambda \left( x\right) \mu \left( y\right). \]
|
Proof. As before.
|
No
|
Proposition 5.7. Let \( E \) be free and finite dimensional over \( R \). The trace function on \( L\left( {E, E}\right) \) is equal to the composite of the two maps

\[ L\left( {E, E}\right) \rightarrow {E}^{ \vee } \otimes E \rightarrow R, \]

where the first map is the inverse of the isomorphism described in Corollary 5.5, and the second map is induced by the bilinear map

\[ \left( {\lambda, x}\right) \mapsto \lambda \left( x\right). \]
|
Of course, it is precisely in a situation involving the trace that the isomorphism of Corollary 5.5 becomes important, and that the finite dimensionality of \( E \) is used. In many applications, this finite dimensionality plays no role, and it is better to deal with \( L\left( {E, E}\right) \) directly.
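In coordinates, Proposition 5.7 reduces to the familiar formula \( \operatorname{tr}(f) = \sum_j e_j^\vee(f e_j) \): write \( f \) as \( \sum_j e_j^\vee \otimes f e_j \) under the inverse of Corollary 5.5, then apply the evaluation map. A quick numerical check on an illustrative matrix:

```python
import numpy as np

# The composite L(E,E) -> E^dual (x) E -> R applied to a concrete f:
# decompose f as sum_j (e_j^dual) (x) (f e_j), then evaluate each summand.
f = np.array([[3., 1., 0.],
              [2., 5., 4.],
              [1., 1., 7.]])
n = f.shape[0]
e = np.eye(n)
evaluated = sum(e[j] @ (f @ e[j]) for j in range(n))   # sum_j e_j^dual(f e_j)
assert np.isclose(evaluated, np.trace(f))
print(evaluated)   # 15.0
```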
|
No
|
Finite coproducts exist in the category of commutative rings, and in the category of commutative algebras over a commutative ring. If \( R \rightarrow A \) and \( R \rightarrow B \) are two homomorphisms of commutative rings, then their coproduct over \( R \) is the homomorphism \( R \rightarrow A{ \otimes }_{R}B \) given by

\[ a \mapsto a \otimes 1 = 1 \otimes a. \]
|
Proof. We shall limit our proof to the case of the coproduct of two ring homomorphisms \( R \rightarrow A \) and \( R \rightarrow B \); the case of finitely many factors follows by induction.

Let \( A, B \) be commutative rings, and assume given ring-homomorphisms into a commutative ring \( C \),

\[ \varphi : A \rightarrow C\text{ and }\psi : B \rightarrow C. \]

Then we can define a \( \mathbf{Z} \)-bilinear map

\[ A \times B \rightarrow C \]

by \( \left( {x, y}\right) \mapsto \varphi \left( x\right) \psi \left( y\right) \). From this we get a unique additive homomorphism

\[ A \otimes B \rightarrow C \]

such that \( x \otimes y \mapsto \varphi \left( x\right) \psi \left( y\right) \). We have seen above that we can define a ring structure on \( A \otimes B \), such that

\[ \left( {a \otimes b}\right) \left( {c \otimes d}\right) = {ac} \otimes {bd}. \]

It is then clear that our map \( A \otimes B \rightarrow C \) is a ring-homomorphism. We also have two ring-homomorphisms

\[ A\overset{f}{ \rightarrow }A \otimes B\text{ and }B\overset{g}{ \rightarrow }A \otimes B \]

given by

\[ x \mapsto x \otimes 1\;\text{ and }\;y \mapsto 1 \otimes y. \]

The universal property of the tensor product shows that \( \left( {A \otimes B, f, g}\right) \) is a coproduct of our rings \( A \) and \( B \).
|
Yes
|
Proposition 7.1. Let \( E \) be free of dimension \( n \) over \( R \). Then \( T\left( E\right) \) is isomorphic to the non-commutative polynomial algebra on \( n \) variables over \( R \). In other words, if \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) is a basis of \( E \) over \( R \), then the elements

\[ {M}_{\left( i\right) }\left( v\right) = {v}_{{i}_{1}} \otimes \cdots \otimes {v}_{{i}_{r}},\;1 \leqq {i}_{\nu } \leqq n, \]

form a basis of \( {T}^{r}\left( E\right) \), and every element of \( T\left( E\right) \) has a unique expression as a finite sum

\[ \mathop{\sum }\limits_{\left( i\right) }{a}_{\left( i\right) }{M}_{\left( i\right) }\left( v\right),\;{a}_{\left( i\right) } \in R, \]

with almost all \( {a}_{\left( i\right) } \) equal to 0.
|
Proof. This follows at once from Proposition 2.3.
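A quick enumeration (illustrative, with \( n = 3 \) and \( r = 2 \)) confirms the count of basis tensors \( n^r \) asserted in Proposition 7.1:

```python
from itertools import product

# For a free module E with basis v_1, ..., v_n, the monomials
# v_{i_1} (x) ... (x) v_{i_r} indexed by all r-tuples form a basis of T^r(E),
# so T^r(E) is free of rank n**r.
n, r = 3, 2
index_tuples = list(product(range(1, n + 1), repeat=r))
assert len(index_tuples) == n ** r   # 9 basis tensors for n = 3, r = 2
```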
|
No
|
Proposition 7.2. Let \( E \) be free, finite dimensional over \( R \). Then we have an algebra-isomorphism

\[ T\left( {L\left( E\right) }\right) = T\left( {{\operatorname{End}}_{R}\left( E\right) }\right) \rightarrow {LT}\left( E\right) = {\bigoplus }_{r = 0}^{\infty }{\operatorname{End}}_{R}\left( {{T}^{r}\left( E\right) }\right) \]

given by

\[ f \otimes g \mapsto T\left( {f, g}\right). \]
|
Proof. By Proposition 2.5, we have a linear isomorphism in each dimension, and it is clear that the map preserves multiplication.
|
No
|
Proposition 8.1. Let \( E \) be free of dimension \( n \) over \( R \). Let \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) be a basis of \( E \) over \( R \). Viewed as elements of \( {S}^{1}\left( E\right) \) in \( S\left( E\right) \), these basis elements are algebraically independent over \( R \), and \( S\left( E\right) \) is therefore isomorphic to the polynomial algebra in \( n \) variables over \( R \).
|
Proof. Let \( {t}_{1},\ldots ,{t}_{n} \) be algebraically independent variables over \( R \), and form the polynomial algebra \( R\left\lbrack {{t}_{1},\ldots ,{t}_{n}}\right\rbrack \). Let \( {P}_{r} \) be the \( R \)-module of homogeneous polynomials of degree \( r \). We define a map of \( {E}^{\left( r\right) } \rightarrow {P}_{r} \) as follows. If \( {w}_{1},\ldots ,{w}_{r} \) are elements of \( E \) which can be written

\[ {w}_{i} = \mathop{\sum }\limits_{{v = 1}}^{n}{a}_{iv}{v}_{v},\;i = 1,\ldots, r, \]

then our map is given by

\[ \left( {{w}_{1},\ldots ,{w}_{r}}\right) \mapsto \left( {{a}_{11}{t}_{1} + \cdots + {a}_{1n}{t}_{n}}\right) \cdots \left( {{a}_{r1}{t}_{1} + \cdots + {a}_{rn}{t}_{n}}\right). \]

It is obvious that this map is multilinear and symmetric. Hence it factors through a linear map of \( {S}^{r}\left( E\right) \) into \( {P}_{r} \):

\[ {E}^{\left( r\right) } \rightarrow {S}^{r}\left( E\right) \rightarrow {P}_{r}. \]

From the commutativity of this diagram, it is clear that the element \( {v}_{{i}_{1}}\cdots {v}_{{i}_{r}} \) in \( {S}^{r}\left( E\right) \) maps on \( {t}_{{i}_{1}}\cdots {t}_{{i}_{r}} \) in \( {P}_{r} \) for each \( r \)-tuple of integers \( \left( i\right) = \left( {{i}_{1},\ldots ,{i}_{r}}\right) \). Since the monomials \( {M}_{\left( i\right) }\left( t\right) \) of degree \( r \) are linearly independent over \( R \), it follows that the monomials \( {M}_{\left( i\right) }\left( v\right) \) in \( {S}^{r}\left( E\right) \) are also linearly independent over \( R \), and that our map \( {S}^{r}\left( E\right) \rightarrow {P}_{r} \) is an isomorphism. One verifies at once that the multiplication in \( S\left( E\right) \) corresponds to the multiplication of polynomials in \( R\left\lbrack t\right\rbrack \), and thus that the map of \( S\left( E\right) \) into the polynomial algebra described as above for each component \( {S}^{r}\left( E\right) \) induces an algebra-isomorphism of \( S\left( E\right) \) onto \( R\left\lbrack t\right\rbrack \), as desired.
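The monomial basis of \( S^r(E) \) exhibited in the proof can be enumerated explicitly; for instance, with \( n = 3 \) and \( r = 2 \) the degree-2 monomials number \( \binom{n+r-1}{r} = 6 \). A short illustrative sketch:

```python
from itertools import combinations_with_replacement
from math import comb

# S^r(E) corresponds to homogeneous polynomials of degree r in t_1, ..., t_n,
# whose monomial basis is counted by the "stars and bars" binomial
# coefficient C(n + r - 1, r).
n, r = 3, 2
monomials = list(combinations_with_replacement(range(1, n + 1), r))
assert len(monomials) == comb(n + r - 1, r)   # 6 monomials for n = 3, r = 2
print(monomials)   # [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```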
|
Yes
|
Proposition 8.2. Let \( E = {E}^{\prime } \oplus {E}^{\prime \prime } \) be a direct sum of finite free modules. Then there is a natural isomorphism

\[ {S}^{n}\left( {{E}^{\prime } \oplus {E}^{\prime \prime }}\right) \approx {\bigoplus }_{p + q = n}{S}^{p}{E}^{\prime } \otimes {S}^{q}{E}^{\prime \prime }. \]

In fact, this is but the \( n \)-part of a graded isomorphism

\[ S\left( {{E}^{\prime } \oplus {E}^{\prime \prime }}\right) \approx S{E}^{\prime } \otimes S{E}^{\prime \prime }. \]
|
Proof. The isomorphism comes from the following maps. The inclusions of \( {E}^{\prime } \) and \( {E}^{\prime \prime } \) into their direct sum give rise to the functorial maps

\[ S{E}^{\prime } \otimes S{E}^{\prime \prime } \rightarrow {SE} \]

and the claim is that this is a graded isomorphism. Note that \( S{E}^{\prime } \) and \( S{E}^{\prime \prime } \) are commutative rings, and so their tensor product is just the tensor product of commutative rings discussed in \( §6 \). The reader can either give a functorial map backward to prove the desired isomorphism, or more concretely, \( S{E}^{\prime } \) is the polynomial ring on a finite family of variables, \( S{E}^{\prime \prime } \) is the polynomial ring in another family of variables, and their tensor product is just the polynomial ring in the two families of variables. The matter is easy no matter what, and the formal proof is left to the reader.
|
No
|
Proposition 1.1. Schur's Lemma. Let \( E, F \) be simple \( R \) -modules. Every non-zero homomorphism of \( E \) into \( F \) is an isomorphism. The ring \( {\operatorname{End}}_{R}\left( E\right) \) is a division ring.
|
Proof. Let \( f : E \rightarrow F \) be a non-zero homomorphism. Its kernel is a submodule of \( E \) different from \( E \), and its image is a non-zero submodule of \( F \); since \( E \) and \( F \) are simple, \( \operatorname{Ker}f = 0 \) and \( \operatorname{Im}f = F \). Hence \( f \) is an isomorphism. If \( E = F \), then every non-zero \( f \) has an inverse, so \( {\operatorname{End}}_{R}\left( E\right) \) is a division ring, as desired.
|
Yes
|
Proposition 1.2. Let \( E = {E}_{1}^{\left( {n}_{1}\right) } \oplus \cdots \oplus {E}_{r}^{\left( {n}_{r}\right) } \) be a direct sum of simple modules, the \( {E}_{i} \) being non-isomorphic, and each \( {E}_{i} \) being repeated \( {n}_{i} \) times in the sum. Then, up to a permutation, \( {E}_{1},\ldots ,{E}_{r} \) are uniquely determined up to isomorphisms, and the multiplicities \( {n}_{1},\ldots ,{n}_{r} \) are uniquely determined. The ring \( {\operatorname{End}}_{R}\left( E\right) \) is isomorphic to a ring of matrices, of type

\[ \left( \begin{matrix} {M}_{1} & 0 & \cdots & 0 \\ 0 & {M}_{2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & {M}_{r} \end{matrix}\right) \]

where \( {M}_{i} \) is an \( {n}_{i} \times {n}_{i} \) matrix over \( {\operatorname{End}}_{R}\left( {E}_{i}\right) \). (The isomorphism is the one with respect to our direct sum decomposition.)
|
Proof. The last statement follows from our previous considerations, taking into account Proposition 1.1. Suppose now that we have two \( R \) -modules, with direct sum decompositions into simple submodules, and an isomorphism \[ {E}_{1}^{\left( {n}_{1}\right) } \oplus \cdots \oplus {E}_{r}^{\left( {n}_{r}\right) } \rightarrow {F}_{1}^{\left( {m}_{1}\right) } \oplus \cdots \oplus {F}_{s}^{\left( {m}_{s}\right) }, \] such that the \( {E}_{i} \) are non-isomorphic, and the \( {F}_{j} \) are non-isomorphic. From Proposition 1.1, we conclude that each \( {E}_{i} \) is isomorphic to some \( {F}_{j} \), and conversely. It follows that \( r = s \), and that after a permutation, \( {E}_{i} \approx {F}_{i} \) . Furthermore, the isomorphism must induce an isomorphism \[ {E}_{i}^{\left( {n}_{i}\right) } \rightarrow {F}_{i}^{\left( {m}_{i}\right) } \] for each \( i \) . Since \( {E}_{i} \approx {F}_{i} \), we may assume without loss of generality that in fact \( {E}_{i} = {F}_{i} \) . Thus we are reduced to proving: If a module is isomorphic to \( {E}^{\left( n\right) } \) and to \( {E}^{\left( m\right) } \), with some simple module \( E \), then \( n = m \) . But \( {\operatorname{End}}_{R}\left( {E}^{\left( n\right) }\right) \) is isomorphic to the \( n \times n \) matrix ring over the division ring \( {\operatorname{End}}_{R}\left( E\right) = K \) . Furthermore this isomorphism is verified at once to be an isomorphism as \( K \) -vector space. The dimension of the space of \( n \times n \) matrices over \( K \) is \( {n}^{2} \) . This proves that the multiplicity \( n \) is uniquely determined, and proves our proposition.
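The dimension count at the end of the proof can be checked numerically in a small case: take \( E = \mathbf{Q}^2 \) as a simple module over \( R = \operatorname{Mat}_2(\mathbf{Q}) \), acting diagonally on \( E^{(n)} \). The commutant of this action should be an \( n \times n \) matrix ring over \( \operatorname{End}_R(E) \), hence of dimension \( n^2 \). (The two generators below are an illustrative choice; together they generate \( \operatorname{Mat}_2 \).)

```python
import numpy as np

# Commutant of the diagonal action of M_2 on E^(n) = Q^(2n): solve the
# linear system [kron(I_n, A), X] = 0 for both generators A of M_2.
n, d = 3, 2
gens = [np.array([[1., 0.], [0., 2.]]),   # diagonal with distinct entries
        np.array([[0., 1.], [1., 0.]])]   # a swap; together they generate M_2

N = n * d
rows = []
for A in gens:
    L = np.kron(np.eye(n), A)             # diagonal action of A on E^(n)
    # LX - XL = 0 written on vec(X): (I (x) L - L^T (x) I) vec(X) = 0
    rows.append(np.kron(np.eye(N), L) - np.kron(L.T, np.eye(N)))
system = np.vstack(rows)
commutant_dim = N * N - np.linalg.matrix_rank(system)
assert commutant_dim == n ** 2   # = 9: an n x n matrix ring over the scalars
```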
|
Yes
|
Lemma 2.1. Let \( E = \mathop{\sum }\limits_{{i \in I}}{E}_{i} \) be a sum (not necessarily direct) of simple submodules. Then there exists a subset \( J \subset I \) such that \( E \) is the direct sum \( \mathop{\bigoplus }\limits_{{j \in J}}{E}_{j} \).
|
Proof. Let \( J \) be a maximal subset of \( I \) such that the sum \( \mathop{\sum }\limits_{{j \in J}}{E}_{j} \) is direct; such a subset exists by Zorn's lemma. We contend that this sum is in fact equal to \( E \). It will suffice to prove that each \( {E}_{i} \) is contained in this sum. But the intersection of our sum with \( {E}_{i} \) is a submodule of \( {E}_{i} \), hence equal to 0 or \( {E}_{i} \). If it is equal to 0, then the sum over \( J \cup \{ i\} \) is still direct, contradicting the maximality of \( J \). Hence \( {E}_{i} \) is contained in the sum, and our lemma is proved.
|
Yes
|
Proposition 2.2. Every submodule and every factor module of a semisimple module is semisimple.
|
Proof. Let \( F \) be a submodule. Let \( {F}_{0} \) be the sum of all simple submodules of \( F \). Since \( E \) is semisimple, we may write \( E = {F}_{0} \oplus {F}_{0}^{\prime } \). Every element \( x \) of \( F \) has a unique expression \( x = {x}_{0} + {x}_{0}^{\prime } \) with \( {x}_{0} \in {F}_{0} \) and \( {x}_{0}^{\prime } \in {F}_{0}^{\prime } \). But \( {x}_{0}^{\prime } = x - {x}_{0} \in F \). Hence \( F \) is the direct sum

\[ F = {F}_{0} \oplus \left( {F \cap {F}_{0}^{\prime }}\right). \]

We must therefore have \( {F}_{0} = F \), which is semisimple. As for the factor module, write \( E = F \oplus {F}^{\prime } \). Then \( {F}^{\prime } \) is a sum of its simple submodules, and the canonical map \( E \rightarrow E/F \) induces an isomorphism of \( {F}^{\prime } \) onto \( E/F \). Hence \( E/F \) is semisimple.
|
Yes
|
Lemma 3.1. Let \( E \) be semisimple over \( R \). Let \( {R}^{\prime } = {\operatorname{End}}_{R}\left( E\right), f \in {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \) as above. Let \( x \in E \). There exists an element \( \alpha \in R \) such that \( {\alpha x} = f\left( x\right) \).
|
Proof. Since \( E \) is semisimple, we can write an \( R \)-direct sum

\[ E = {Rx} \oplus F \]

with some submodule \( F \). Let \( \pi : E \rightarrow {Rx} \) be the projection. Then \( \pi \in {R}^{\prime } \), and hence

\[ f\left( x\right) = f\left( {\pi x}\right) = {\pi f}\left( x\right). \]

This shows that \( f\left( x\right) \in {Rx} \), as desired.
|
No
|
Theorem 3.2. (Jacobson). Let \( E \) be semisimple over \( R \), and let \( {R}^{\prime } = {\operatorname{End}}_{R}\left( E\right) \) . Let \( f \in {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \) . Let \( {x}_{1},\ldots ,{x}_{n} \in E \) . Then there exists an element \( \alpha \in R \) such that \[ \alpha {x}_{i} = f\left( {x}_{i}\right) \;\text{ for }\;i = 1,\ldots, n. \] If \( E \) is finitely generated over \( {R}^{\prime } \), then the natural map \( R \rightarrow {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \) is surjective.
|
Proof. For clarity of notation, we shall first carry out the proof in case \( E \) is simple. Let \( {f}^{\left( n\right) } : {E}^{\left( n\right) } \rightarrow {E}^{\left( n\right) } \) be the product map, so that \[ {f}^{\left( n\right) }\left( {{y}_{1},\ldots ,{y}_{n}}\right) = \left( {f\left( {y}_{1}\right) ,\ldots, f\left( {y}_{n}\right) }\right) . \] Let \( {R}_{n}^{\prime } = {\operatorname{End}}_{R}\left( {E}^{\left( n\right) }\right) \) . Then \( {R}_{n}^{\prime } \) is none other than the ring of matrices with coefficients in \( {R}^{\prime } \) . Since \( f \) commutes with elements of \( {R}^{\prime } \) in its action on \( E \), one sees immediately that \( {f}^{\left( n\right) } \) is in \( {\operatorname{End}}_{{R}_{n}^{\prime }}\left( {E}^{\left( n\right) }\right) \) . By the lemma, there exists an element \( \alpha \in R \) such that \[ \left( {\alpha {x}_{1},\ldots ,\alpha {x}_{n}}\right) = \left( {f\left( {x}_{1}\right) ,\ldots, f\left( {x}_{n}\right) }\right) , \] which is what we wanted to prove. When \( E \) is not simple, suppose that \( E \) is equal to a finite direct sum of simple submodules \( {E}_{i} \) (non-isomorphic), with multiplicities \( {n}_{i} \) : \[ E = {E}_{1}^{\left( {n}_{1}\right) } \oplus \cdots \oplus {E}_{r}^{\left( {n}_{r}\right) }\;\left( {{E}_{i} ≉ {E}_{j}\;\text{ if }\;i \neq j}\right) , \] then the matrices representing the ring of endomorphisms split according to blocks corresponding to the non-isomorphic simple components in our direct sum decomposition. Hence here again the argument goes through as before. The main point is that \( {f}^{\left( n\right) } \) lies in \( {\operatorname{End}}_{{R}_{n}^{\prime }}\left( {E}^{\left( n\right) }\right) \), and that we can apply the lemma. 
We add the observation that if \( E \) is finitely generated over \( {R}^{\prime } \), then an element \( f \in {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \) is determined by its value on a finite number of elements of \( E \), so the asserted surjectivity \( R \rightarrow {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \) follows at once. In the applications below, \( E \) will be a finite dimensional vector space over a field \( k \), and \( R \) will be a \( k \) -algebra, so the finiteness condition is automatically satisfied.
|
Yes
|
Corollary 3.3. (Burnside's Theorem). Let \( E \) be a finite-dimensional vector space over an algebraically closed field \( k \), and let \( R \) be a subalgebra of \( {\operatorname{End}}_{k}\left( E\right) \). If \( E \) is a simple \( R \)-module, then \( R = {\operatorname{End}}_{k}\left( E\right) \).
|
Proof. We contend that \( {\operatorname{End}}_{R}\left( E\right) = k \). At any rate, \( {\operatorname{End}}_{R}\left( E\right) \) is a division ring \( {R}^{\prime } \) containing \( k \) as a subring, and every element of \( k \) commutes with every element of \( {R}^{\prime } \). Let \( \alpha \in {R}^{\prime } \). Then \( k\left( \alpha \right) \) is a field. Furthermore, \( {R}^{\prime } \) is contained in \( {\operatorname{End}}_{k}\left( E\right) \) as a \( k \)-subspace, and is therefore finite dimensional over \( k \). Hence \( k\left( \alpha \right) \) is finite over \( k \), and therefore equal to \( k \) since \( k \) is algebraically closed. This proves that \( {\operatorname{End}}_{R}\left( E\right) = k \). Let now \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) be a basis of \( E \) over \( k \). Let \( A \in {\operatorname{End}}_{k}\left( E\right) \). According to the density theorem, there exists \( \alpha \in R \) such that

\[ \alpha {v}_{i} = A{v}_{i}\;\text{ for }\;i = 1,\ldots, n. \]

Since the effect of \( A \) is determined by its effect on a basis, we conclude that \( R = {\operatorname{End}}_{k}\left( E\right) \).
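Burnside's theorem can be illustrated numerically: the matrices \( D = \operatorname{diag}(1,2,3) \) and the cyclic shift \( P \) (a hypothetical choice of generators) act irreducibly on \( \mathbf{C}^3 \), so the algebra they generate must be all of \( \operatorname{Mat}_3 \):

```python
import numpy as np

# D has distinct eigenvalues, so its invariant subspaces are coordinate
# subspaces; P permutes the coordinates cyclically, so no proper coordinate
# subspace is invariant under both. Hence C^3 is simple over the algebra
# they generate, and Burnside predicts that algebra is all of End(C^3).
D = np.diag([1., 2., 3.])
P = np.roll(np.eye(3), 1, axis=0)   # cyclic permutation matrix

# Span all words of length <= 4 in D and P and measure the dimension.
words = [np.eye(3)]
for _ in range(4):
    words += [g @ w for g in (D, P) for w in list(words)]
span = np.array([w.ravel() for w in words])
assert np.linalg.matrix_rank(span) == 9   # the full 9-dimensional End(C^3)
```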
|
Yes
|
Corollary 3.5. (Wedderburn’s Theorem). Let \( R \) be a ring, and \( E \) a simple, faithful module over \( R \). Let \( D = {\operatorname{End}}_{R}\left( E\right) \), and assume that \( E \) is finite dimensional over \( D \). Then \( R = {\operatorname{End}}_{D}\left( E\right) \).
|
Proof. Let \( \left\{ {{v}_{1},\ldots ,{v}_{n}}\right\} \) be a basis of \( E \) over \( D \). Given \( A \in {\operatorname{End}}_{D}\left( E\right) \), by Theorem 3.2 there exists \( \alpha \in R \) such that

\[ \alpha {v}_{i} = A{v}_{i}\;\text{ for }\;i = 1,\ldots, n. \]

Hence the map \( R \rightarrow {\operatorname{End}}_{D}\left( E\right) \) is surjective. Our assumption that \( E \) is faithful over \( R \) implies that it is injective, and our corollary is proved.
|
Yes
|
Corollary 3.6. Let \( R \) be a finite dimensional algebra over an algebraically closed field \( k \). Let \( V \) be a finite dimensional vector space over \( k \), with a simple faithful representation \( \rho : R \rightarrow {\operatorname{End}}_{k}\left( V\right) \). Then \( \rho \) is an isomorphism; in other words, \( R \approx {\operatorname{Mat}}_{n}\left( k\right) \).
|
Proof. We apply Corollary 3.5, noting that \( D \) is finite dimensional over \( k \) . Given \( \alpha \in D \), we note that \( k\left( \alpha \right) \) is a commutative subfield of \( D \), whence \( k\left( \alpha \right) = k \) by assumption that \( k \) is algebraically closed, and the corollary follows.
|
No
|
Theorem 3.7. Existence of projection operators. Let \( k \) be a field, \( R \) a \( k \)-algebra, and \( {V}_{1},\ldots ,{V}_{m} \) finite dimensional \( k \)-spaces which are also simple \( R \)-modules, and such that \( {V}_{i} \) is not \( R \)-isomorphic to \( {V}_{j} \) for \( i \neq j \). Then there exist elements \( {e}_{i} \in R \) such that \( {e}_{i} \) acts as the identity on \( {V}_{i} \) and \( {e}_{i}{V}_{j} = 0 \) if \( j \neq i \).
|
Proof. Let \( E = {V}_{1} \oplus \cdots \oplus {V}_{m} \) and \( {R}^{\prime } = {\operatorname{End}}_{R}\left( E\right) \). We observe that the projection \( {f}_{i} \) from the direct sum \( E \) to the \( i \)-th factor is in \( {\operatorname{End}}_{{R}^{\prime }}\left( E\right) \), because if \( \varphi \in {R}^{\prime } \) then \( \varphi \left( {V}_{j}\right) \subset {V}_{j} \) for all \( j \). We may therefore apply the density theorem to conclude the proof.
|
No
|
Corollary 3.8. (Bourbaki). Let \( k \) be a field of characteristic 0. Let \( R \) be a \( k \)-algebra, and let \( E, F \) be semisimple \( R \)-modules, finite dimensional over \( k \). For each \( \alpha \in R \), let \( {\alpha }_{E},{\alpha }_{F} \) be the corresponding \( k \)-endomorphisms on \( E \) and \( F \) respectively. Suppose that the traces are equal; that is,

\[ \operatorname{tr}\left( {\alpha }_{E}\right) = \operatorname{tr}\left( {\alpha }_{F}\right) \text{ for all }\alpha \in R. \]

Then \( E \) is isomorphic to \( F \) as \( R \)-module.
|
Proof. Each of \( E \) and \( F \) is isomorphic to a finite direct sum of simple \( R \)-modules, with certain multiplicities. Let \( V \) be a simple \( R \)-module, and suppose

\( E = {V}^{\left( n\right) } \oplus \) direct summands not isomorphic to \( V \),

\( F = {V}^{\left( m\right) } \oplus \) direct summands not isomorphic to \( V \).

It will suffice to prove that \( m = n \). Let \( {e}_{V} \) be the element of \( R \) found in Theorem 3.7 such that \( {e}_{V} \) acts as the identity on \( V \), and is 0 on the other direct summands of \( E \) and \( F \). Then

\[ \operatorname{tr}\left( {\left( {e}_{V}\right) }_{E}\right) = n{\dim }_{k}\left( V\right) \;\text{ and }\;\operatorname{tr}\left( {\left( {e}_{V}\right) }_{F}\right) = m{\dim }_{k}\left( V\right). \]

Since the traces are equal by assumption, we get \( n{\dim }_{k}\left( V\right) = m{\dim }_{k}\left( V\right) \) in \( k \), and since \( k \) has characteristic 0 this forces \( m = n \), thus concluding the proof. This is where characteristic 0 is used: the values of the trace lie in \( k \), and in characteristic \( p \) the equality would only determine the multiplicities mod \( p \).
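A small illustration of Corollary 3.8 (not from the original text): for the group algebra \( R = \mathbf{C}[\mathbf{Z}/3] \), the 2-dimensional module on which a generator acts by a real rotation through \( 120^\circ \) has the same trace function as the one on which it acts by \( \operatorname{diag}(\omega, \omega^2) \), and the two are indeed isomorphic:

```python
import numpy as np

# Two semisimple modules over R = C[Z/3] (hypothetical example): the group
# generator acts by a rotation on E and diagonally on F.
w = np.exp(2j * np.pi / 3)
c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
gE = np.array([[c, -s], [s, c]])      # rotation through 120 degrees
gF = np.diag([w, w.conjugate()])      # diag(w, w^2)

# The trace functions agree on every group element, hence on all of R.
for k in range(3):
    assert np.isclose(np.trace(np.linalg.matrix_power(gE, k)),
                      np.trace(np.linalg.matrix_power(gF, k)))

# And the modules are isomorphic: gE and gF have the same eigenvalues
# w, w^2, so some change of basis over C carries one action to the other.
eigE = sorted(np.linalg.eigvals(gE), key=lambda z: z.imag)
eigF = sorted(np.linalg.eigvals(gF), key=lambda z: z.imag)
assert np.allclose(eigE, eigF)
```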
|
Yes
|
Proposition 4.1. If \( R \) is semisimple, then every \( R \) -module is semisimple.
|
Proof. An \( R \)-module is a factor module of a free module, and a free module is a direct sum of \( R \) with itself a certain number of times. We can apply Proposition 2.2 to conclude the proof.
|
No
|
Lemma 4.2. Let \( L \) be a simple left ideal, and let \( E \) be a simple \( R \) -module. If \( L \) is not isomorphic to \( E \), then \( {LE} = 0 \) .
|
Proof. We have \( {RLE} = {LE} \), so \( {LE} \) is a submodule of \( E \), hence equal to 0 or \( E \). Suppose \( {LE} = E \). Let \( y \in E \) be such that

\[ {Ly} \neq 0. \]

Since \( {Ly} \) is a submodule of \( E \), it follows that \( {Ly} = E \). The map \( \alpha \mapsto {\alpha y} \) is a homomorphism of \( L \) into \( E \) which is surjective, hence non-zero. Since \( L \) is simple, this homomorphism is an isomorphism, contradicting the hypothesis that \( L \) is not isomorphic to \( E \). Hence \( {LE} = 0 \).
|
Yes
|
Theorem 4.4. Let \( R \) be semisimple, and let \( E \) be an \( R \)-module \( \neq 0 \). Then

\[ E = {\bigoplus }_{i = 1}^{s}{R}_{i}E = {\bigoplus }_{i = 1}^{s}{e}_{i}E \]

and \( {R}_{i}E \) is the submodule of \( E \) consisting of the sum of all simple submodules isomorphic to \( {L}_{i} \).
|
Proof. Let \( {E}_{i} \) be the sum of all simple submodules of \( E \) isomorphic to \( {L}_{i} \). If \( V \) is a simple submodule of \( E \), then \( {RV} = V \), and hence \( {L}_{i}V = V \) for some \( i \). By Lemma 4.2, we have \( {L}_{i} \approx V \). Hence \( E \) is the direct sum of \( {E}_{1},\ldots ,{E}_{s} \). It is then clear that \( {R}_{i}E = {E}_{i} \).
|
Yes
|
Proposition 4.7. Let \( k \) be a field and \( E \) a finite dimensional vector space over \( k \) . Let \( S \) be a subset of \( {\operatorname{End}}_{k}\left( E\right) \) . Let \( R \) be the \( k \) -algebra generated by the elements of \( S \) . Then \( R \) is semisimple if and only if \( E \) is a semisimple \( R \) (or \( S \) ) module.
|
Proof. If \( R \) is semisimple, then \( E \) is semisimple by Proposition 4.1. Conversely, assume \( E \) semisimple as \( S \)-module. Then \( E \) is semisimple as \( R \)-module, and so is a direct sum

\[ E = {\bigoplus }_{i = 1}^{n}{E}_{i} \]

where each \( {E}_{i} \) is simple. Then for each \( i \) there exists an element \( {v}_{i} \in {E}_{i} \) such that \( {E}_{i} = R{v}_{i} \). The map

\[ x \mapsto \left( {x{v}_{1},\ldots, x{v}_{n}}\right) \]

is an \( R \)-homomorphism of \( R \) into \( E \), and is an injection since \( R \) is contained in \( {\operatorname{End}}_{k}\left( E\right) \). Since a submodule of a semisimple module is semisimple by Proposition 2.2, the desired result follows.
|
Yes
|
Corollary 4.6. A simple ring has exactly one simple module, up to isomorphism.
|
This corollary is an immediate consequence of Theorems 4.3 and 4.4.
|
No
|
Lemma 5.1. Let \( R \) be a ring, and \( \psi \in {\operatorname{End}}_{R}\left( R\right) \) a homomorphism of \( R \) into itself, viewed as \( R \) -module. Then there exists \( \alpha \in R \) such that \( \psi \left( x\right) = {x\alpha } \) for all \( x \in R \) .
|
Proof. We have \( \psi \left( x\right) = \psi \left( {x \cdot 1}\right) = {x\psi }\left( 1\right) \) . Let \( \alpha = \psi \left( 1\right) \) .
|
Yes
|
Theorem 5.2. Let \( R \) be a simple ring. Then \( R \) is a finite direct sum of simple left ideals. There are no two-sided ideals except 0 and \( R \) . If \( L, M \) are simple left ideals, then there exists \( \alpha \in R \) such that \( {L\alpha } = M \) . We have \( {LR} = R \) .
|
Proof. Since \( R \) is by definition also semisimple, it is a direct sum of simple left ideals, say \( {\bigoplus }_{j \in J}{L}_{j} \) . We can write 1 as a finite sum \( 1 = \mathop{\sum }\limits_{{j = 1}}^{m}{\beta }_{j} \), with \( {\beta }_{j} \in {L}_{j} \) .\n\nThen\n\[ R = {\bigoplus }_{j = 1}^{m}R{\beta }_{j} = {\bigoplus }_{j = 1}^{m}{L}_{j} \]\n\nThis proves our first assertion. As to the second, it is a consequence of the third. Let therefore \( L \) be a simple left ideal. Then \( {LR} \) is a left ideal, because \( {RLR} = {LR} \), hence ( \( R \) being semisimple) is a direct sum of simple left ideals, say\n\[ {LR} = {\bigoplus }_{j = 1}^{m}{L}_{j},\;L = {L}_{1}. \]\n\nLet \( M \) be a simple left ideal. We have a direct sum decomposition \( R = L \oplus {L}^{\prime } \) . Let \( \pi : R \rightarrow L \) be the projection. It is an \( R \) -endomorphism. Let \( \sigma : L \rightarrow M \) be an isomorphism (it exists by Theorem 4.3). Then \( \sigma \circ \pi : R \rightarrow R \) is an \( R \) -endomorphism. By the lemma, there exists \( \alpha \in R \) such that\n\[ \sigma \circ \pi \left( x\right) = {x\alpha }\;\text{ for all }\;x \in R. \]\n\nApply this to elements \( x \in L \) . We find\n\[ \sigma \left( x\right) = {x\alpha }\text{ for all }x \in L. \]\n\nThe map \( x \mapsto {x\alpha } \) is an \( R \) -homomorphism of \( L \) into \( M \), is non-zero, hence is an isomorphism. From this it follows at once that \( {LR} = R \), thereby proving our theorem.
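As a concrete illustration of the theorem (a standard example, not taken from the text above), consider the simple ring \( R = {M}_{2}\left( k\right) \) of \( 2 \times 2 \) matrices over a field \( k \):

```latex
\[
R = M_2(k), \qquad
L_1 = \left\{ \begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix} \right\}, \quad
L_2 = \left\{ \begin{pmatrix} 0 & b \\ 0 & d \end{pmatrix} \right\}, \qquad
R = L_1 \oplus L_2,
\]
with each $L_j$ a simple left ideal. Taking
$\alpha = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$ gives
\[
\begin{pmatrix} a & 0 \\ c & 0 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
=
\begin{pmatrix} 0 & a \\ 0 & c \end{pmatrix},
\]
so $L_1 \alpha = L_2$, and $L_1 R \supseteq L_1 + L_1\alpha = R$,
whence $L_1 R = R$ as the theorem asserts.
```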
|
Yes
|
Corollary 5.3. Let \( R \) be a simple ring. Let \( E \) be a simple \( R \) -module, and \( L \) a simple left ideal of \( R \). Then \( {LE} = E \) and \( E \) is faithful.
|
Proof. We have \( {LE} = L\left( {RE}\right) = \left( {LR}\right) E = {RE} = E \). Suppose \( {\alpha E} = 0 \) for some \( \alpha \in R \). Then \( {R\alpha RE} = {R\alpha E} = 0 \). But \( {R\alpha R} \) is a two-sided ideal. Hence \( {R\alpha R} = 0 \), and \( \alpha = 0 \). This proves that \( E \) is faithful.
|
Yes
|
Theorem 5.4. (Rieffel). Let \( R \) be a ring without two-sided ideals except 0 and \( R \) . Let \( L \) be a nonzero left ideal, \( {R}^{\prime } = {\operatorname{End}}_{R}\left( L\right) \) and \( {R}^{\prime \prime } = {\operatorname{End}}_{{R}^{\prime }}\left( L\right) \) . Then the natural map \( \lambda : R \rightarrow {R}^{\prime \prime } \) is an isomorphism.
|
Proof. The kernel of \( \lambda \) is a two-sided ideal, so \( \lambda \) is injective. Since \( {LR} \) is a two-sided ideal, we have \( {LR} = R \) and \( \lambda \left( L\right) \lambda \left( R\right) = \lambda \left( R\right) \) . For any \( x, y \in L \) , and \( f \in {R}^{\prime \prime } \), we have \( f\left( {xy}\right) = f\left( x\right) y \), because right multiplication by \( y \) is an \( R \) -endomorphism of \( L \) . Hence \( \lambda \left( L\right) \) is a left ideal of \( {R}^{\prime \prime } \), so\n\n\[ \n{R}^{\prime \prime } = {R}^{\prime \prime }\lambda \left( R\right) = {R}^{\prime \prime }\lambda \left( L\right) \lambda \left( R\right) = \lambda \left( L\right) \lambda \left( R\right) = \lambda \left( R\right) ,\n\]\n\nas was to be shown.
|
Yes
|
Theorem 5.5. Let \( D \) be a division ring, and \( E \) a finite-dimensional vector space over \( D \) . Let \( R = {\operatorname{End}}_{D}\left( E\right) \) . Then \( R \) is simple and \( E \) is a simple \( R \) -module. Furthermore, \( D = {\operatorname{End}}_{R}\left( E\right) \) .
|
Proof. We first show that \( E \) is a simple \( R \) -module. Let \( v \in E, v \neq 0 \) . Then \( v \) can be completed to a basis of \( E \) over \( D \), and hence, given \( w \in E \), there exists \( \alpha \in R \) such that \( {\alpha v} = w \) . Hence \( E \) cannot have any invariant subspaces other than 0 or itself, and is simple over \( R \) . It is clear that \( E \) is faithful over \( R \) . Let \( \left\{ {{v}_{1},\ldots ,{v}_{m}}\right\} \) be a basis of \( E \) over \( D \) . The map\n\n\[ \alpha \mapsto \left( {\alpha {v}_{1},\ldots ,\alpha {v}_{m}}\right) \]\n\nis an \( R \) -homomorphism of \( R \) into \( {E}^{\left( m\right) } \), and is injective. Given \( \left( {{w}_{1},\ldots ,{w}_{m}}\right) \in {E}^{\left( m\right) } \), there exists \( \alpha \in R \) such that \( \alpha {v}_{i} = {w}_{i} \) and hence \( R \) is \( R \) -isomorphic to \( {E}^{\left( m\right) } \) . This shows that \( R \) (as a module over itself) is isomorphic to a direct sum of simple modules and is therefore semisimple. Furthermore, all these simple modules are isomorphic to each other, and hence \( R \) is simple by Theorem 4.3.\n\nThere remains to prove that \( D = {\operatorname{End}}_{R}\left( E\right) \) . We note that \( E \) is a semisimple module over \( D \) since it is a vector space, and every subspace admits a complementary subspace. We can therefore apply the density theorem (the roles of \( R \) and \( D \) are now permuted!). Let \( \varphi \in {\operatorname{End}}_{R}\left( E\right) \) . Let \( v \in E, v \neq 0 \) . By the density theorem, there exists an element \( a \in D \) such that \( \varphi \left( v\right) = {av} \) . Let \( w \in E \) . There exists an element \( f \in R \) such that \( f\left( v\right) = w \) . Then\n\n\[ \varphi \left( w\right) = \varphi \left( {f\left( v\right) }\right) = f\left( {\varphi \left( v\right) }\right) = f\left( {av}\right) = {af}\left( v\right) = {aw}. \]\n\nTherefore \( \varphi \left( w\right) = {aw} \) for all \( w \in E \) . This means that \( \varphi \in D \), and concludes our proof.
|
Yes
|
Theorem 5.6. Let \( k \) be a field and \( E \) a finite-dimensional vector space of dimension \( m \) over \( k \) . Let \( R = {\operatorname{End}}_{k}\left( E\right) \) . Then \( R \) is a \( k \) -space, and\n\n\[ \n{\dim }_{k}R = {m}^{2}.\n\]\n\nFurthermore, \( m \) is the number of simple left ideals appearing in a direct sum decomposition of \( R \) as such a sum.
|
Proof. The \( k \) -space of \( k \) -endomorphisms of \( E \) is represented by the space of \( m \times m \) matrices in \( k \), so the dimension of \( R \) as a \( k \) -space is \( {m}^{2} \) . On the other hand, the proof of Theorem 5.5 showed that \( R \) is \( R \) -isomorphic as an \( R \) -module to the direct sum \( {E}^{\left( m\right) } \) . We know the uniqueness of the decomposition of a module into a direct sum of simple modules (Proposition 1.2), and this proves our assertion.
|
Yes
|
Theorem 6.2. Let \( A \) be a semisimple algebra, finite dimensional over a field \( k \) . Let \( K \) be a finite separable extension of \( k \) . Then \( K{ \otimes }_{k}A \) is semisimple over \( K \) .
|
Proof. In light of the radical criterion for semisimplicity, it suffices to prove that \( K{ \otimes }_{k}A \) has zero radical, and it suffices to do so for an even larger extension than \( K \), so that we may assume \( K \) is Galois over \( k \), say with Galois group \( G \) . Then \( G \) operates on \( K \otimes A \) by\n\n\[ \sigma \left( {x \otimes a}\right) = {\sigma x} \otimes a\text{ for }x \in K\text{ and }a \in A. \]\n\nLet \( N \) be the radical of \( K \otimes A \) . Since \( N \) is nilpotent, it follows that \( {\sigma N} \) is also nilpotent for all \( \sigma \in G \), whence \( {\sigma N} = N \) because \( N \) is the maximal nilpotent ideal (Exercise 5). Let \( \left\{ {{\alpha }_{1},\ldots ,{\alpha }_{m}}\right\} \) be a basis of \( A \) over \( k \) . Suppose \( N \) contains the element\n\n\[ \xi = \sum {x}_{i} \otimes {\alpha }_{i} \neq 0\;\text{ with }\;{x}_{i} \in K. \]\n\nFor every \( y \in K \) the element \( \left( {y \otimes 1}\right) \xi = \sum y{x}_{i} \otimes {\alpha }_{i} \) also lies in \( N \) . Then\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\sigma \left( {\left( {y \otimes 1}\right) \xi }\right) = \sum {\operatorname{Tr}}_{K/k}\left( {y{x}_{i}}\right) \otimes {\alpha }_{i} = \sum 1 \otimes {\operatorname{Tr}}_{K/k}\left( {y{x}_{i}}\right) {\alpha }_{i} \]\n\nalso lies in \( N \) (because \( {\sigma N} = N \)), and lies in \( 1 \otimes A \approx A \) . But \( N \cap \left( {1 \otimes A}\right) \) is a nilpotent ideal of \( 1 \otimes A \), hence 0 because \( A \) is semisimple. Since the \( {\alpha }_{i} \) are linearly independent over \( k \), we get \( {\operatorname{Tr}}_{K/k}\left( {y{x}_{i}}\right) = 0 \) for all \( i \) and all \( y \in K \) . The trace form is nondegenerate because \( K \) is separable over \( k \), so \( {x}_{i} = 0 \) for all \( i \), contradicting \( \xi \neq 0 \) . Hence \( N = 0 \), thus proving the theorem.
|
Yes
|
Theorem 6.3. Let \( A, B \) be simple algebras, finite dimensional over a field \( k \) which is algebraically closed. Then \( A{ \otimes }_{k}B \) is also simple. We have \( A \approx {\operatorname{End}}_{k}\left( V\right) \) and \( B \approx {\operatorname{End}}_{k}\left( W\right) \) where \( V, W \) are finite dimensional vector spaces over \( k \), and there is a natural isomorphism\n\n\[ {\operatorname{End}}_{k}\left( V\right) { \otimes }_{k}{\operatorname{End}}_{k}\left( W\right) \approx {\operatorname{End}}_{k}\left( {V{ \otimes }_{k}W}\right) . \]
|
Proof. The formula is a special case of Theorem 2.5 of Chapter XVI, and the isomorphisms \( A \approx {\operatorname{End}}_{k}\left( V\right), B \approx {\operatorname{End}}_{k}\left( W\right) \) exist by Wedderburn’s theorem or its corollaries.
|
No
|
Theorem 6.4. Let \( A, B \) be absolutely semisimple algebras finite dimensional over a field \( k \) . Then \( A{ \otimes }_{k}B \) is absolutely semisimple.
|
Proof. Let \( F = {k}^{\mathrm{a}} \) . Then \( {A}_{F} \) is semisimple by hypothesis, so it is a direct product of simple algebras, which are matrix algebras, and in particular we can apply Theorem 6.3 to see that \( {A}_{F}{ \otimes }_{F}{B}_{F} \) has no radical. Hence \( A{ \otimes }_{k}B \) has no radical (because if \( N \) is its radical, then \( N{ \otimes }_{k}F = {N}_{F} \) is a nilpotent ideal of \( {A}_{F}{ \otimes }_{F}{B}_{F} \) ), whence \( A{ \otimes }_{k}B \) is semisimple by Theorem 6.1(c).
|
Yes
|
Theorem 7.1. (Morita). Let \( E \) be an \( R \) -module. Then \( E \) is a generator if and only if \( E \) is balanced and finitely generated projective over \( {R}^{\prime }\left( E\right) \) .
|
Proof. We shall prove half of the theorem, leaving the other half to the reader, using similar ideas (see Exercise 12). So we assume that \( E \) is a generator, and we prove that it satisfies the other properties by arguments due to Faith.\n\nWe first prove that for any module \( F, R \oplus F \) is balanced. We identify \( R \) and \( F \) as the submodules \( R \oplus 0 \) and \( 0 \oplus F \) of \( R \oplus F \), respectively, and we let \( {\pi }_{1},{\pi }_{2} \) be the projections of \( R \oplus F \) on \( R \) and \( F \) . For \( w \in F \), let \( {\psi }_{w} : R \oplus F \rightarrow F \) be the map \( {\psi }_{w}\left( {x + v}\right) = {xw} \) . Then any \( f \in {R}^{\prime \prime }\left( {R \oplus F}\right) \) commutes with \( {\pi }_{1},{\pi }_{2} \), and each \( {\psi }_{w} \) . From this we see at once that \( f\left( {x + v}\right) = f\left( 1\right) \left( {x + v}\right) \), and hence that \( R \oplus F \) is balanced. Let \( E \) be a generator, and \( {E}^{\left( n\right) } \rightarrow R \) a surjective homomorphism. Since \( R \) is free, we can write \( {E}^{\left( n\right) } \approx R \oplus F \) for some module \( F \), so that \( {E}^{\left( n\right) } \) is balanced. Let \( g \in {R}^{\prime }\left( E\right) \) . Then \( {g}^{\left( n\right) } \) commutes with every element \( \varphi = \left( {\varphi }_{ij}\right) \) in \( {R}^{\prime }\left( {E}^{\left( n\right) }\right) \) (with components \( {\varphi }_{ij} \in {R}^{\prime }\left( E\right) \) ), and hence there is some \( x \in R \) such that \( {g}^{\left( n\right) } = {\lambda }_{x}^{\left( n\right) } \) .
Hence \( g = {\lambda }_{x} \), thereby proving that \( E \) is balanced, since \( \lambda \) is obviously injective.\n\nTo prove that \( E \) is finitely generated over \( {R}^{\prime }\left( E\right) \), we have\n\n\[ \n{R}^{\prime }{\left( E\right) }^{\left( n\right) } \approx {\operatorname{Hom}}_{R}\left( {{E}^{\left( n\right) }, E}\right) \approx {\operatorname{Hom}}_{R}\left( {R, E}\right) \oplus {\operatorname{Hom}}_{R}\left( {F, E}\right)\n\]\n\nas additive groups. This relation also obviously holds as \( {R}^{\prime } \) -modules if we define the operation of \( {R}^{\prime } \) to be composition of mappings (on the left). Since \( {\operatorname{Hom}}_{R}\left( {R, E}\right) \) is \( {R}^{\prime } \) -isomorphic to \( E \) under the map \( h \mapsto h\left( 1\right) \), it follows that \( E \) is an \( {R}^{\prime } \) -homomorphic image of \( {R}^{\prime \left( n\right) } \), whence finitely generated over \( {R}^{\prime } \) . We also see that \( E \) is a direct summand of the free \( {R}^{\prime } \) -module \( {R}^{\prime \left( n\right) } \) and is therefore projective over \( {R}^{\prime }\left( E\right) \) . This concludes the proof.
|
No
|
Proposition 1.1. Let \( G \) be a finite group and let \( {E}^{\prime }, E, F,{F}^{\prime } \) be \( G \) -modules. Let\n\n\[ \n{E}^{\prime }\overset{\varphi }{ \rightarrow }E\overset{f}{ \rightarrow }F\overset{\psi }{ \rightarrow }{F}^{\prime } \]\n\nbe R-homomorphisms, and assume that \( \varphi ,\psi \) are \( G \) -homomorphisms. Then\n\n\[ \n{\operatorname{Tr}}_{G}\left( {\psi \circ f \circ \varphi }\right) = \psi \circ {\operatorname{Tr}}_{G}\left( f\right) \circ \varphi .\n\]
|
Proof. We have\n\n\[ {\operatorname{Tr}}_{G}\left( {\psi \circ f \circ \varphi }\right) = \mathop{\sum }\limits_{{\sigma \in G}}\sigma \left( {\psi \circ f \circ \varphi }\right) = \mathop{\sum }\limits_{{\sigma \in G}}\left( {\sigma \psi }\right) \circ \left( {\sigma f}\right) \circ \left( {\sigma \varphi }\right) \]\n\n\[ = \psi \circ \left( {\mathop{\sum }\limits_{{\sigma \in G}}{\sigma f}}\right) \circ \varphi = \psi \circ {\operatorname{Tr}}_{G}\left( f\right) \circ \varphi , \]\n\nsince \( {\sigma \psi } = \psi \) and \( {\sigma \varphi } = \varphi \) because \( \psi ,\varphi \) are \( G \) -homomorphisms.
|
Yes
|
Theorem 1.2. (Maschke). Let \( G \) be a finite group of order \( n \), and let \( k \) be a field whose characteristic does not divide \( n \) . Then the group ring \( k\left\lbrack G\right\rbrack \) is semisimple.
|
Proof. Let \( E \) be a \( G \) -module, and \( F \) a \( G \) -submodule. Since \( k \) is a field, there exists a \( k \) -subspace \( {F}^{\prime } \) such that \( E \) is the \( k \) -direct sum of \( F \) and \( {F}^{\prime } \) . We let the \( k \) -linear map \( \pi : E \rightarrow F \) be the projection on \( F \) . Then \( \pi \left( x\right) = x \) for all \( x \in F \) .\n\nLet\n\n\[ \varphi = \frac{1}{n}{\operatorname{Tr}}_{G}\left( \pi \right) \]\n\nWe have then two \( G \) -homomorphisms\n\n\[ 0 \rightarrow F\underset{\varphi }{\overset{j}{ \rightleftarrows }}E \]\n\nsuch that \( j \) is the inclusion, and \( \varphi \circ j = \mathrm{{id}} \) . It follows that \( E \) is the \( G \) -direct sum of \( F \) and Ker \( \varphi \), thereby proving that \( k\left\lbrack G\right\rbrack \) is semisimple.
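The averaging operator \( \varphi = \frac{1}{n}{\operatorname{Tr}}_{G}\left( \pi \right) \) can be checked on a small example. The sketch below is illustrative only: the choice of group ( \( {S}_{3} \) permuting the coordinates of \( {\mathbf{Q}}^{3} \) ), of invariant submodule \( F \) (the diagonal line), and of the non-equivariant projection \( \pi \) are all assumptions made here, not taken from the text. Averaging \( \pi \) over \( G \) produces a \( G \) -homomorphism that fixes \( F \) pointwise, as in the proof.

```python
from fractions import Fraction
from itertools import permutations

n = 3
G = list(permutations(range(n)))  # S3 acting on Q^3 by permuting coordinates

def act(sigma, v):
    """(sigma . v)_i = v_{sigma^{-1}(i)}: coordinate j moves to position sigma(j)."""
    w = [None] * n
    for j in range(n):
        w[sigma[j]] = v[j]
    return tuple(w)

def inv(sigma):
    t = [0] * n
    for j in range(n):
        t[sigma[j]] = j
    return tuple(t)

# F = the invariant diagonal line; pi projects onto F but is NOT G-equivariant.
def pi(v):
    return (v[0],) * n

# phi = (1/|G|) Tr_G(pi) = (1/|G|) sum_sigma  sigma . pi . sigma^{-1}
def phi(v):
    total = [Fraction(0)] * n
    for s in G:
        w = act(s, pi(act(inv(s), v)))
        total = [a + b for a, b in zip(total, w)]
    return tuple(a / len(G) for a in total)

v = (Fraction(1), Fraction(2), Fraction(6))
print(phi(v))  # → (Fraction(3, 1), Fraction(3, 1), Fraction(3, 1))
```

The result is the coordinate average: \( \varphi \) is the \( G \) -equivariant projection onto the diagonal, and its kernel (the zero-sum hyperplane) is the invariant complement guaranteed by Maschke's theorem.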
|
Yes
|
Proposition 2.1. If \( E, F \) are \( G \) -spaces, then\n\n\[{\chi }_{E} + {\chi }_{F} = {\chi }_{E \oplus F}\;\text{ and }\;{\chi }_{E}{\chi }_{F} = {\chi }_{E \otimes F}.\]\n\nIf \( {\chi }^{ \vee } \) denotes the character of the dual representation on \( {E}^{ \vee } \), then\n\n\[{\chi }^{ \vee }\left( \sigma \right) = \chi \left( {\sigma }^{-1}\right) ,\]\n\nand \( {\chi }^{ \vee }\left( \sigma \right) = \overline{\chi \left( \sigma \right) } \) if \( k = \mathbf{C} \) .
|
Proof. The first relation holds because the matrix of an element \( \sigma \) in the representation \( E \oplus F \) decomposes into blocks corresponding to the representation in \( E \) and the representation in \( F \) . As to the second, if \( \left\{ {v}_{i}\right\} \) is a basis of \( E \) and \( \left\{ {w}_{j}\right\} \) is a basis of \( F \) over \( k \), then we know that \( \left\{ {{v}_{i} \otimes {w}_{j}}\right\} \) is a basis of \( E \otimes F \) . Let \( \left( {a}_{iv}\right) \) be the matrix of \( \sigma \) with respect to our basis of \( E \), and \( \left( {b}_{j\mu }\right) \) its matrix with respect to our basis of \( F \) . Then\n\n\[ \sigma \left( {{v}_{i} \otimes {w}_{j}}\right) = \sigma {v}_{i} \otimes \sigma {w}_{j} = \mathop{\sum }\limits_{v}{a}_{iv}{v}_{v} \otimes \mathop{\sum }\limits_{\mu }{b}_{j\mu }{w}_{\mu }\]\n\n\[= \mathop{\sum }\limits_{{v,\mu }}{a}_{iv}{b}_{j\mu }{v}_{v} \otimes {w}_{\mu }\]\n\nBy definition, we find\n\n\[{\chi }_{E \otimes F}\left( \sigma \right) = \mathop{\sum }\limits_{i}\mathop{\sum }\limits_{j}{a}_{ii}{b}_{jj} = {\chi }_{E}\left( \sigma \right) {\chi }_{F}\left( \sigma \right)\]\n\nthereby proving the statement about tensor products. The statement for the character of the dual representation follows from the formula for the matrix \( {}^{t}{M}^{-1} \) given in \( §1 \) . The value given as the complex conjugate in case \( k = \mathbf{C} \) will be proved later in Corollary 3.2.
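The tensor-product relation can be verified numerically. In the sketch below the group \( \mathbf{Z}/5 \) and the two representations (a diagonal 2-dimensional \( E \) and a 1-dimensional \( F \), both chosen here for illustration) are assumptions, not data from the text; the check is that the trace of the Kronecker product equals the product of the traces, i.e. \( {\chi }_{E \otimes F} = {\chi }_{E}{\chi }_{F} \).

```python
import cmath

n = 5
w = cmath.exp(2j * cmath.pi / n)  # primitive n-th root of unity

# Illustrative representations of Z/n: sigma^a acts by the matrices below.
def rho_E(a):                      # 2-dimensional
    return [[w ** a, 0], [0, w ** (2 * a)]]

def rho_F(a):                      # 1-dimensional
    return [[w ** (3 * a)]]

def kron(A, B):
    """Matrix of sigma on E ⊗ F in the basis {v_i ⊗ w_j} (A, B square)."""
    p = len(B)
    m = len(A) * p
    return [[A[i // p][j // p] * B[i % p][j % p] for j in range(m)]
            for i in range(m)]

def tr(M):
    return sum(M[i][i] for i in range(len(M)))

for a in range(n):
    lhs = tr(kron(rho_E(a), rho_F(a)))   # chi_{E ⊗ F}(sigma^a)
    rhs = tr(rho_E(a)) * tr(rho_F(a))    # chi_E(sigma^a) * chi_F(sigma^a)
    assert abs(lhs - rhs) < 1e-12
print("chi_{E tensor F} = chi_E * chi_F on Z/5")
```

Since \( \left| w\right| = 1 \), the same data also exhibits the dual-character relation \( {\chi }^{ \vee }\left( \sigma \right) = \chi \left( {\sigma }^{-1}\right) = \overline{\chi \left( \sigma \right) } \) over \( \mathbf{C} \).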
|
Yes
|
Theorem 2.2. There are only a finite number of simple characters of \( G \) (over \( k) \) . The characters of representations of \( G \) are the linear combinations of the simple characters with integer coefficients \( \geqq 0 \) .
|
We shall use the direct product decomposition of a semisimple ring. We have\n\n\[ \nk\left\lbrack G\right\rbrack = \mathop{\prod }\limits_{{i = 1}}^{s}{R}_{i} \]\n\nwhere each \( {R}_{i} \) is simple, and we have a corresponding decomposition of the unit element of \( k\left\lbrack G\right\rbrack \) :\n\n\[ 1 = {e}_{1} + \cdots + {e}_{s} \]\n\nwhere \( {e}_{i} \) is the unit element of \( {R}_{i} \), and \( {e}_{i}{e}_{j} = 0 \) if \( i \neq j \) . Also, \( {R}_{i}{R}_{j} = 0 \) if \( i \neq j \) . We note that \( s = s\left( k\right) \) depends on \( k \) .\n\nIf \( {L}_{i} \) denotes a typical simple module for \( {R}_{i} \) (say one of the simple left ideals), we let \( {\chi }_{i} \) be the character of the representation on \( {L}_{i} \) .\n\nWe observe that \( {\chi }_{i}\left( \alpha \right) = 0 \) for all \( \alpha \in {R}_{j} \) if \( i \neq j \) . This is a fundamental relation of orthogonality, which is obvious, but from which all our other relations will follow.
|
Yes
|
Assume that \( k \) has characteristic 0 . Then every effective character has a unique expression as a linear combination\n\n\[ \chi = \mathop{\sum }\limits_{{i = 1}}^{s}{n}_{i}{\chi }_{i},\;{n}_{i} \in \mathbf{Z},{n}_{i} \geqq 0, \]\n\nwhere \( {\chi }_{1},\ldots ,{\chi }_{s} \) are the simple characters of \( G \) over \( k \) . Two representations are isomorphic if and only if their associated characters are equal.
|
Let \( E \) be the representation space of \( \chi \) . Then by Theorem 4.4 of Chapter XVII,\n\n\[ E \approx {\bigoplus }_{i = 1}^{s}{n}_{i}{L}_{i} \]\n\nThe sum is finite because we assume throughout that \( E \) is finite dimensional. Since \( {e}_{i} \) acts as a unit element on \( {L}_{i} \), we find\n\n\[ {\chi }_{i}\left( {e}_{i}\right) = {\dim }_{k}{L}_{i} \]\n\nWe have already seen that \( {\chi }_{i}\left( {e}_{j}\right) = 0 \) if \( i \neq j \) . Hence\n\n\[ \chi \left( {e}_{i}\right) = {n}_{i}{\dim }_{k}{L}_{i} \]\n\nSince \( {\dim }_{k}{L}_{i} \) depends only on the structure of the group algebra, we have recovered the multiplicities \( {n}_{1},\ldots ,{n}_{s} \) . Namely, \( {n}_{i} \) is the number of times that \( {L}_{i} \) occurs (up to an isomorphism) in the representation space of \( \chi \), and is the value of \( \chi \left( {e}_{i}\right) \) divided by \( {\dim }_{k}{L}_{i} \) (we are in characteristic 0 ). This proves our theorem.
|
Yes
|
As functions of \( G \) into \( k \), the simple characters \[ {\chi }_{1},\ldots ,{\chi }_{s} \] are linearly independent over \( k \) .
|
Proof. Suppose that \( \sum {a}_{i}{\chi }_{i} = 0 \) with \( {a}_{i} \in k \) . We apply this expression to \( {e}_{j} \) and get \[ 0 = \left( {\sum {a}_{i}{\chi }_{i}}\right) \left( {e}_{j}\right) = {a}_{j}{\dim }_{k}{L}_{j} \] Hence \( {a}_{j} = 0 \) for all \( j \) .
|
Yes
|
Theorem 3.1. Let \( G \) be a finite abelian group, and assume that \( k \) is algebraically closed. Then every simple representation of \( G \) is 1-dimensional. The simple characters of \( G \) are the homomorphisms of \( G \) into \( {k}^{ * } \) .
|
Proof. The group ring \( k\left\lbrack G\right\rbrack \) is semisimple, commutative, and is a direct product of simple rings. Each simple ring is a ring of matrices over \( k \) (by Corollary 3.6 Chapter XVII), and can be commutative if and only if it is equal to \( k \) . Hence every simple \( k\left\lbrack G\right\rbrack \) -module is 1-dimensional over \( k \), and the representation of \( G \) on such a module is given by a homomorphism of \( G \) into \( {k}^{ * } \), whose character is that homomorphism.
|
Yes
|
Corollary 3.2. Let \( k \) be algebraically closed. Let \( G \) be a finite group. For any character \( \chi \) and \( \sigma \in G \), the value \( \chi \left( \sigma \right) \) is equal to a sum of roots of unity with integer coefficients (i.e. coefficients in \( \mathbf{Z} \) or \( \mathbf{Z}/p\mathbf{Z} \) depending on the characteristic of \( k \) ).
|
Proof. Let \( H \) be the subgroup generated by \( \sigma \) . Then \( H \) is a cyclic subgroup. A representation of \( G \) having character \( \chi \) can be viewed as a representation for \( H \) by restriction, having the same character. Thus our assertion follows from Theorem 3.1.
|
No
|
An element of \( k\left\lbrack G\right\rbrack \) commutes with every element of \( G \) if and only if it is a linear combination of conjugacy classes with coefficients in \( k \) .
|
Let \( \alpha = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\sigma \) and assume \( {\alpha \tau } = {\tau \alpha } \) for all \( \tau \in G \) . Then\n\n\[\n\mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }{\tau \sigma }{\tau }^{-1} = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\sigma\n\]\n\nHence \( {a}_{{\sigma }_{0}} = {a}_{\sigma } \) whenever \( \sigma \) is conjugate to \( {\sigma }_{0} \), and this means that we can write\n\n\[\n\alpha = \mathop{\sum }\limits_{\gamma }{a}_{\gamma }\gamma\n\]\n\nwhere the sum is taken over all conjugacy classes \( \gamma \) .
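The argument can be checked directly in a small group algebra. The sketch below (the choice \( G = {S}_{3} \) and the dictionary encoding of \( \mathbf{Z}\left\lbrack {S}_{3}\right\rbrack \) are illustrative assumptions) computes the conjugacy classes and verifies that each class sum \( \gamma \) commutes with every \( \tau \in G \), hence lies in the center.

```python
from itertools import permutations

G = list(permutations(range(3)))  # S3 as permutation tuples

def mul(s, t):                    # composition: (s*t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def inv(s):
    r = [0] * 3
    for i, si in enumerate(s):
        r[si] = i
    return tuple(r)

def conj_class(s):
    return frozenset(mul(mul(t, s), inv(t)) for t in G)

classes = {conj_class(s) for s in G}

# Group-ring element as {group element: integer coefficient}.
def ring_mul(alpha, beta):
    out = {}
    for s, a in alpha.items():
        for t, b in beta.items():
            st = mul(s, t)
            out[st] = out.get(st, 0) + a * b
    return out

for c in classes:
    gamma = {s: 1 for s in c}     # the class sum
    for tau in G:
        assert ring_mul(gamma, {tau: 1}) == ring_mul({tau: 1}, gamma)
print(f"S3 has {len(classes)} conjugacy classes; each class sum is central")
```

For \( {S}_{3} \) there are three classes (sizes 1, 3, 2), so the center of \( k\left\lbrack {S}_{3}\right\rbrack \) is 3-dimensional, matching the number of simple characters.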
|
Yes
|
Proposition 4.3. Let \( {\chi }_{\mathrm{{reg}}} \) be the regular character. Then\n\n\[ \n{\chi }_{\mathrm{{reg}}}\left( \sigma \right) = 0\;\text{ if }\;\sigma \in G,\sigma \neq 1 \n\]\n\n\[ \n{\chi }_{\text{reg }}\left( 1\right) = n\text{.} \n\]
|
Proof. Let \( 1 = {\sigma }_{1},\ldots ,{\sigma }_{n} \) be the elements of \( G \) . They form a basis of \( k\left\lbrack G\right\rbrack \) over \( k \) . The matrix of 1 is the unit \( n \times n \) matrix. Thus our second assertion follows. If \( \sigma \neq 1 \), then multiplication by \( \sigma \) permutes \( {\sigma }_{1},\ldots ,{\sigma }_{n} \), and it is immediately clear that all diagonal elements in the matrix representing \( \sigma \) are 0 . This proves what we wanted.
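The diagonal-counting argument amounts to saying that \( {\chi }_{\mathrm{{reg}}}\left( \sigma \right) \) is the number of fixed points of left multiplication by \( \sigma \) on the basis \( G \). A quick check (with \( G = {S}_{3} \) chosen here for illustration):

```python
from itertools import permutations

G = list(permutations(range(3)))  # basis of k[S3]
n = len(G)

def mul(s, t):                    # (s*t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(3))

def chi_reg(sigma):
    """Trace of left multiplication by sigma = number of tau with sigma*tau = tau."""
    return sum(1 for tau in G if mul(sigma, tau) == tau)

print([chi_reg(s) for s in G])    # → [6, 0, 0, 0, 0, 0]
```

Only the identity (the first permutation in lexicographic order) fixes any basis element, and it fixes all \( n \) of them, as the proposition asserts.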
|
Yes
|
Assume again that \( k \) is algebraically closed. Let\n\n\[ \n{e}_{i} = \mathop{\sum }\limits_{{\tau \in G}}{a}_{\tau }\tau ,\;{a}_{\tau } \in k.\n\]\n\nThen\n\n\[ \n{a}_{\tau } = \frac{1}{n}{\chi }_{\mathrm{{reg}}}\left( {{e}_{i}{\tau }^{-1}}\right) = \frac{{d}_{i}}{n}{\chi }_{i}\left( {\tau }^{-1}\right) .\n\]
|
Proof. We have for all \( \tau \in G \) :\n\n\[ \n{\chi }_{\mathrm{{reg}}}\left( {{e}_{i}{\tau }^{-1}}\right) = {\chi }_{\mathrm{{reg}}}\left( {\mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }\sigma {\tau }^{-1}}\right) = \mathop{\sum }\limits_{{\sigma \in G}}{a}_{\sigma }{\chi }_{\mathrm{{reg}}}\left( {\sigma {\tau }^{-1}}\right) .\n\]\n\nBy Proposition 4.3, we find\n\n\[ \n{\chi }_{\text{reg }}\left( {{e}_{i}{\tau }^{-1}}\right) = n{a}_{\tau }\n\]\n\nOn the other hand,\n\n\[ \n{\chi }_{\text{reg }}\left( {{e}_{i}{\tau }^{-1}}\right) = \mathop{\sum }\limits_{{j = 1}}^{s}{d}_{j}{\chi }_{j}\left( {{e}_{i}{\tau }^{-1}}\right) = {d}_{i}{\chi }_{i}\left( {{e}_{i}{\tau }^{-1}}\right) = {d}_{i}{\chi }_{i}\left( {\tau }^{-1}\right) .\n\]\n\nHence\n\n\[ \n{d}_{i}{\chi }_{i}\left( {\tau }^{-1}\right) = n{a}_{\tau }\n\]\n\nfor all \( \tau \in G \) . This proves our proposition.
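The resulting formula \( {e}_{i} = \frac{{d}_{i}}{n}\mathop{\sum }\limits_{\tau }{\chi }_{i}\left( {\tau }^{-1}\right) \tau \) can be tested numerically. The sketch below (taking \( G = \mathbf{Z}/3 \) over \( \mathbf{C} \), where \( {d}_{i} = 1 \) and \( {\chi }_{i}\left( {\sigma }^{b}\right) = {\omega }^{ib} \); these choices are assumptions made here) builds the three idempotents and checks \( {e}_{i}^{2} = {e}_{i} \), \( {e}_{i}{e}_{j} = 0 \) for \( i \neq j \), and \( \sum {e}_{i} = 1 \).

```python
import cmath

n = 3
w = cmath.exp(2j * cmath.pi / n)

# Element of C[Z/n] stored as a coefficient list c, meaning sum_b c[b] * sigma^b.
def ring_mul(x, y):               # multiplication is cyclic convolution
    out = [0j] * n
    for a in range(n):
        for b in range(n):
            out[(a + b) % n] += x[a] * y[b]
    return out

def close(x, y, eps=1e-12):
    return all(abs(p - q) < eps for p, q in zip(x, y))

# e_i = (d_i/n) sum_b chi_i(sigma^{-b}) sigma^b, with d_i = 1, chi_i(sigma^b) = w^{ib}
idem = [[w ** (-i * b) / n for b in range(n)] for i in range(n)]

for i in range(n):
    assert close(ring_mul(idem[i], idem[i]), idem[i])          # e_i^2 = e_i
    for j in range(n):
        if i != j:
            assert close(ring_mul(idem[i], idem[j]), [0j] * n) # e_i e_j = 0
ones = [sum(idem[i][b] for i in range(n)) for b in range(n)]
assert close(ones, [1, 0, 0])                                  # e_0 + e_1 + e_2 = 1
print("central idempotents of C[Z/3] verified")
```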
|
Yes
|
Corollary 4.6. The dimensions \( {d}_{i} \) are not divisible by the characteristic of \( k \) .
|
Proof. Otherwise, \( {e}_{i} = 0 \), which is impossible.
|
Yes
|
Corollary 4.7. The simple characters \( {\chi }_{1},\ldots ,{\chi }_{s} \) are linearly independent over \( k \) .
|
Proof. The proof in Corollary 2.4 applies, since we now know that the characteristic does not divide \( {d}_{i} \) .
|
No
|
Corollary 4.8. Assume in addition that \( k \) has characteristic 0 . Then \( {d}_{i} \mid n \) for each \( i \) .
|
Proof. Multiplying our expression for \( {e}_{i} \) by \( n/{d}_{i} \), and also by \( {e}_{i} \), we find\n\n\[\n\frac{n}{{d}_{i}}{e}_{i} = \mathop{\sum }\limits_{{\sigma \in G}}{\chi }_{i}\left( {\sigma }^{-1}\right) \sigma {e}_{i}\n\]\n\nLet \( \zeta \) be a primitive \( m \) -th root of unity, and let \( M \) be the module over \( \mathbf{Z} \) generated by the finite number of elements \( {\zeta }^{v}\sigma {e}_{i}\left( {v = 0,\ldots, m - 1\text{and}\sigma \in G}\right) \) . Then from the preceding relation, we see at once that multiplication by \( n/{d}_{i} \) maps \( M \) into itself. By definition, we conclude that \( n/{d}_{i} \) is integral over \( \mathbf{Z} \) , and hence lies in \( \mathbf{Z} \), as desired.
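For instance (a standard example over \( k = \mathbf{C} \), not computed in the text), \( G = {S}_{3} \) with \( n = 6 \) has simple characters of degrees \( {d}_{1} = {d}_{2} = 1 \) and \( {d}_{3} = 2 \):

```latex
\[
d_1 \mid 6, \qquad d_2 \mid 6, \qquad d_3 \mid 6,
\qquad\text{and}\qquad
d_1^2 + d_2^2 + d_3^2 = 1 + 1 + 4 = 6 = n,
\]
```

consistent with the decomposition \( \mathbf{C}\left\lbrack {S}_{3}\right\rbrack \approx \mathbf{C} \times \mathbf{C} \times {M}_{2}\left( \mathbf{C}\right) \) as a product of matrix algebras.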
|
Yes
|
Theorem 4.9. Let \( k \) be algebraically closed. Let \( {Z}_{k}\left( G\right) \) be the center of \( k\left\lbrack G\right\rbrack \), and let \( {X}_{k}\left( G\right) \) be the \( k \) -space of class functions on \( G \) . Then \( {Z}_{k}\left( G\right) \) and \( {X}_{k}\left( G\right) \) are the dual spaces of each other, under the pairing\n\n\[ \left( {f,\alpha }\right) \mapsto f\left( \alpha \right) \text{.} \]
|
Proof. The formula has been proved in the proof of Theorem 2.3. The two spaces involved here both have dimension \( s \), and \( {d}_{i} \neq 0 \) in \( k \) . Our proposition is then clear.
|
No
|
Theorem 5.1. The symbol \( \langle f, g\rangle \) for \( f, g \in X\left( G\right) \) takes on values in the prime ring. The simple characters form an orthonormal basis for \( X\left( G\right) \), in other words\n\n\[ \left\langle {{\chi }_{i},{\chi }_{j}}\right\rangle = {\delta }_{ij} \]
|
Proof. By Proposition 4.4, we find\n\n\[ {\chi }_{j}\left( {e}_{i}\right) = \frac{{d}_{i}}{n}\mathop{\sum }\limits_{{\sigma \in G}}{\chi }_{i}\left( {\sigma }^{-1}\right) {\chi }_{j}\left( \sigma \right) \]\n\nIf \( i \neq j \) we get 0 on the left-hand side, so that \( {\chi }_{i} \) and \( {\chi }_{j} \) are orthogonal. If \( i = j \) we get \( {d}_{i} \) on the left-hand side, and we know that \( {d}_{i} \neq 0 \) in \( k \), by Corollary 4.6. Hence \( \left\langle {{\chi }_{i},{\chi }_{i}}\right\rangle = 1 \) . Since every element of \( X\left( G\right) \) is a linear combination of simple characters with integer coefficients, it follows that the values of our bilinear map are in the prime ring. The extension statement is obvious, thereby proving our theorem.
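A numerical check of orthonormality, using the standard character table of \( {S}_{3} \) over \( \mathbf{C} \) (an assumption supplied here, not derived in the text). In \( {S}_{3} \) every element is conjugate to its inverse, so the characters are real and \( \langle f, g\rangle = \frac{1}{n}\sum f\left( \sigma \right) g\left( \sigma \right) \), which we evaluate class by class:

```python
from fractions import Fraction

# Conjugacy classes of S3: identity (size 1), transpositions (3), 3-cycles (2).
sizes = [1, 3, 2]
n = sum(sizes)
chars = [
    [1, 1, 1],    # trivial
    [1, -1, 1],   # sign
    [2, 0, -1],   # standard 2-dimensional
]

def pair(f, g):
    """<f, g> = (1/n) sum over G, grouped by conjugacy class."""
    return Fraction(sum(s * a * b for s, a, b in zip(sizes, f, g)), n)

for i, f in enumerate(chars):
    for j, g in enumerate(chars):
        assert pair(f, g) == (1 if i == j else 0)
print("the simple characters of S3 are orthonormal")
```

The degrees also satisfy \( {1}^{2} + {1}^{2} + {2}^{2} = 6 = n \), as expected from the regular representation.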
|
Yes
|
Theorem 5.2. Let \( k \) have characteristic 0, and let \( R \) be a subring containing the \( m \) -th roots of unity, and having a conjugation. Then the bilinear form on \( X\left( G\right) \) has a unique extension to a hermitian form\n\n\[ \n{X}_{R}\left( G\right) \times {X}_{R}\left( G\right) \rightarrow R \n\]\n\ngiven by the formula\n\n\[ \n\langle f, g\rangle = \frac{1}{n}\mathop{\sum }\limits_{{\sigma \in G}}f\left( \sigma \right) \overline{g\left( \sigma \right) }.\n\]\n\nThe simple characters constitute an orthonormal basis of \( {X}_{R}\left( G\right) \) with respect to this form.
|
Proof. The formula given in the statement of the theorem gives the same value as before for the symbol \( \langle f, g\rangle \) when \( f, g \) lie in \( X\left( G\right) \) . Thus the extension exists, and is obviously unique.
|
Yes
|
Proposition 5.3. For \( \alpha ,\beta \in Z\left( G\right) \), we can define a symbol \( \langle \alpha ,\beta \rangle \) by either one of the following expressions, which are equal:\n\n\[ \langle \alpha ,\beta \rangle = \frac{1}{n}{\chi }_{\text{reg }}\left( {\alpha {\beta }^{ - }}\right) = \frac{1}{n}\mathop{\sum }\limits_{{v = 1}}^{s}{\chi }_{v}\left( \alpha \right) {\chi }_{v}\left( {\beta }^{ - }\right) . \]
|
Proof. Each expression is linear in its first and second variable. Hence to prove their equality, it will suffice to prove that the two expressions are equal when we replace \( \alpha \) by \( {e}_{i} \) and \( \beta \) by an element \( \tau \) of \( G \) . But then, our equality is equivalent to\n\n\[ {\chi }_{\text{reg }}\left( {{e}_{i}{\tau }^{-1}}\right) = \mathop{\sum }\limits_{{v = 1}}^{s}{\chi }_{v}\left( {e}_{i}\right) {\chi }_{v}\left( {\tau }^{-1}\right) . \]\n\nSince \( {\chi }_{v}\left( {e}_{i}\right) = 0 \) unless \( v = i \), we see that the right-hand side of this last relation is equal to \( {d}_{i}{\chi }_{i}\left( {\tau }^{-1}\right) \) . Our two expressions are equal in view of Proposition 4.4.
|
Yes
|
For each ring \( R \) contained in \( k \), the pairing of Proposition 5.3 has a unique extension to a map\n\n\[ \n{Z}_{R}\left( G\right) \times Z\left( G\right) \rightarrow R \]\n\nwhich is \( R \) -linear in its first variable. If \( R \) contains the \( m \) -th roots of unity, where \( m \) is an exponent for \( G \), and also contains \( 1/n \), then \( {e}_{i} \in {Z}_{R}\left( G\right) \) for all \( i \) . The class number \( {h}_{i} \) is not divisible by the characteristic of \( k \), and we have\n\n\[ \n{e}_{i} = \mathop{\sum }\limits_{{v = 1}}^{s}\left\langle {{e}_{i},{\gamma }_{v}}\right\rangle \frac{1}{{h}_{v}}{\gamma }_{v} \]\n
|
Proof. We note that \( {h}_{i} \) is not divisible by the characteristic because it is the index of a subgroup of \( G \) (the isotropy group of an element in \( {\gamma }_{i} \) when \( G \) operates by conjugation), and hence \( {h}_{i} \) divides \( n \) . The extension of our pairing as stated is obvious, since \( {\gamma }_{1},\ldots ,{\gamma }_{s} \) form a basis of \( Z\left( G\right) \) over the prime ring. The expression of \( {e}_{i} \) in terms of this basis is only a reinterpretation of Proposition 4.4 in terms of the present pairing.
|
Yes
|
Theorem 5.5. The conjugacy classes \( {\gamma }_{1},\ldots ,{\gamma }_{s} \) constitute an orthogonal basis for \( Z\left( G\right) \) . We have \( \left\langle {{\gamma }_{i},{\gamma }_{i}}\right\rangle = {h}_{i} \) . For each ring \( R \) contained in \( k \), the bilinear map of Proposition 5.3 has a unique extension to a R-bilinear map\n\n\[ \n{Z}_{R}\left( G\right) \times {Z}_{R}\left( G\right) \rightarrow R \n\]
|
Proof. We use the lemma. By linearity, the formula in the lemma remains valid when we replace \( R \) by \( k \), and when we replace \( {e}_{i} \) by any element of \( {Z}_{k}\left( G\right) \), in particular when we replace \( {e}_{i} \) by \( {\gamma }_{i} \) . But \( \left\{ {{\gamma }_{1},\ldots ,{\gamma }_{s}}\right\} \) is a basis of \( {Z}_{k}\left( G\right) \) over \( k \) . Hence we find that \( \left\langle {{\gamma }_{i},{\gamma }_{i}}\right\rangle = {h}_{i} \) and \( \left\langle {{\gamma }_{i},{\gamma }_{j}}\right\rangle = 0 \) if \( i \neq j \), as was to be shown.
|
Yes
|
Corollary 5.6. If \( G \) is commutative, then\n\n\[ \n\frac{1}{n}\mathop{\sum }\limits_{{v = 1}}^{n}{\chi }_{v}\left( \sigma \right) {\chi }_{v}\left( {\tau }^{-1}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }\sigma \text{ is not equal to }\tau \\ 1 & \text{ if }\sigma \text{ is equal to }\tau . \end{array}\right.\n\]
|
Proof. When \( G \) is commutative, each conjugacy class has exactly one element, and the number of simple characters is equal to the order of the group.
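The corollary (the "column orthogonality" for an abelian group) is easy to verify numerically. In the sketch below the group \( \mathbf{Z}/4 \) is an illustrative choice; its \( n \) simple characters are \( {\chi }_{v}\left( {\sigma }^{a}\right) = {\omega }^{va} \) with \( \omega \) a primitive 4th root of unity, in line with Theorem 3.1.

```python
import cmath

n = 4
w = cmath.exp(2j * cmath.pi / n)  # primitive 4th root of unity (w = i)

def chi(v, a):
    """The n simple characters of Z/n: chi_v(sigma^a) = w^{v a}."""
    return w ** (v * a)

for s in range(n):
    for t in range(n):
        # (1/n) sum_v chi_v(sigma^s) chi_v(sigma^{-t})
        total = sum(chi(v, s) * chi(v, (-t) % n) for v in range(n)) / n
        assert abs(total - (1 if s == t else 0)) < 1e-12
print("column orthogonality holds for Z/4")
```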
|
No
|
Theorem 5.7. Let \( k \) have characteristic 0, and let \( R \) be a subring of \( k \), containing the \( m \) -th roots of unity, and having a conjugation. Then the pairing of Proposition 5.3 has a unique extension to a hermitian form \[ {Z}_{R}\left( G\right) \times {Z}_{R}\left( G\right) \rightarrow R \] given by the formulas \[ \langle \alpha ,\beta \rangle = \frac{1}{n}{\chi }_{\text{reg }}\left( {\alpha \bar{\beta }}\right) = \frac{1}{n}\mathop{\sum }\limits_{{v = 1}}^{s}{\chi }_{v}\left( \alpha \right) \overline{{\chi }_{v}\left( \beta \right) }.\] The conjugacy classes \( {\gamma }_{1},\ldots ,{\gamma }_{s} \) form an orthogonal basis for \( {Z}_{R}\left( G\right) \) . If \( R \) contains \( 1/n \), then \( {e}_{1},\ldots ,{e}_{s} \) lie in \( {Z}_{R}\left( G\right) \) and also form an orthogonal basis for \( {Z}_{R}\left( G\right) \) . We have \( \left\langle {{e}_{i},{e}_{i}}\right\rangle = {d}_{i}^{2}/n \) .
|
Proof. The formula given in the statement of the theorem gives the same value as the symbol \( \langle \alpha ,\beta \rangle \) of Proposition 5.3 when \( \alpha ,\beta \) lie in \( Z\left( G\right) \) . Thus the extension exists, and is obviously unique. Using the second formula of Proposition 5.3 defining the scalar product, and recalling that \( {\chi }_{v}\left( {e}_{i}\right) = 0 \) if \( v \neq i \), we see that \[ \left\langle {{e}_{i},{e}_{i}}\right\rangle = \frac{1}{n}{\chi }_{i}\left( {e}_{i}\right) \overline{{\chi }_{i}\left( {e}_{i}\right) } = \frac{{d}_{i}^{2}}{n}, \] since \( {\chi }_{i}\left( {e}_{i}\right) = {d}_{i} \), whence our assertion follows.
|
Yes
|
Theorem 5.8. Let \( E, F \) be simple \( \left( {G, k}\right) \) -spaces. Let \( \lambda \) be a \( k \) -linear functional on \( E \), let \( x \in E \) and \( y \in F \) . If \( E, F \) are not isomorphic, then\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\lambda \left( {\sigma x}\right) {\sigma }^{-1}y = 0 \]\n\nIf \( \mu \) is a functional on \( F \) then the coefficient functions \( {\rho }_{\lambda, x} \) and \( {\rho }_{\mu, y} \) are orthogonal, that is\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\lambda \left( {\sigma x}\right) \mu \left( {{\sigma }^{-1}y}\right) = 0 \]
|
Proof. The map \( x \mapsto \sum \lambda \left( {\sigma x}\right) {\sigma }^{-1}y \) is a \( G \) -homomorphism of \( E \) into \( F \), so Schur's lemma concludes the proof of the first statement. The second comes by applying the functional \( \mu \) .
|
Yes
|
Lemma 5.9. Let \( E \) be a simple \( \left( {G, k}\right) \) -space. Then any \( G \) -endomorphism of \( E \) is equal to a scalar multiple of the identity.
|
Proof. The algebra \( {\operatorname{End}}_{G, k}\left( E\right) \) is a division algebra by Schur’s lemma, and is finite dimensional over \( k \) . Since \( k \) is assumed algebraically closed, it must be equal to \( k \) because any element generates a commutative subfield over \( k \) . This proves the lemma.
|
Yes
|
Lemma 5.10. Let \( E \) be a representation space for \( G \) of dimension \( d \) . Let \( \lambda \) be a functional on \( E \), and let \( x \in E \) . Let \( {\varphi }_{\lambda, x} \in {\operatorname{End}}_{k}\left( E\right) \) be the endomorphism such that\n\n\[ \n{\varphi }_{\lambda, x}\left( y\right) = \lambda \left( y\right) x \n\]\n\nThen \( \operatorname{tr}\left( {\varphi }_{\lambda, x}\right) = \lambda \left( x\right) \) .
|
Proof. If \( x = 0 \) the statement is obvious. Let \( x \neq 0 \) . If \( \lambda \left( x\right) \neq 0 \) we pick a basis of \( E \) consisting of \( x \) and a basis of the kernel of \( \lambda \) . If \( \lambda \left( x\right) = 0 \), we pick a basis of \( E \) consisting of a basis for the kernel of \( \lambda \), and one other element. In either case it is immediate from the corresponding matrix representing \( {\varphi }_{\lambda, x} \) that the trace is given by the formula as stated in the lemma.
|
Yes
|
Theorem 5.11. Let \( \rho : G \rightarrow {\operatorname{Aut}}_{k}\left( E\right) \) be a simple representation of \( G \), of dimension \( d \) . Then the characteristic of \( k \) does not divide \( d \) . Let \( x, y \in E \) . Then for any functionals \( \lambda ,\mu \) on \( E \) ,\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\lambda \left( {\sigma x}\right) \mu \left( {{\sigma }^{-1}y}\right) = \frac{n}{d}\lambda \left( y\right) \mu \left( x\right) \]
|
Proof. It suffices to prove that\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\lambda \left( {\sigma x}\right) {\sigma }^{-1}y = \frac{n}{d}\lambda \left( y\right) x. \]\n\nFor fixed \( y \) the map\n\n\[ x \mapsto \mathop{\sum }\limits_{{\sigma \in G}}\lambda \left( {\sigma x}\right) {\sigma }^{-1}y \]\n\nis immediately verified to be a \( G \) -endomorphism of \( E \), so is equal to \( {cI} \) for some \( c \in k \) by Lemma 5.9. In fact, it is equal to\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\rho \left( {\sigma }^{-1}\right) \circ {\varphi }_{\lambda, y} \circ \rho \left( \sigma \right) \]\n\nThe trace of this expression is equal to \( n \cdot \operatorname{tr}\left( {\varphi }_{\lambda, y}\right) \) by Lemma 5.10, and also to \( {dc} \) . Taking \( \lambda, y \) such that \( \lambda \left( y\right) = 1 \) shows that the characteristic does not divide \( d \) , and then we can solve for \( c \) as stated in the theorem.
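As a quick check (not part of the original text), take \( d = 1 \), so that \( \rho \) is given by a character \( \chi : G \rightarrow {k}^{ * } \) on \( E = k \) . With \( x = y = 1 \) and \( \lambda = \mu \) the identity functional, the formula reads\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}\chi \left( \sigma \right) \chi \left( {\sigma }^{-1}\right) = n \]\n\nwhich is the relation \( \langle \chi ,\chi \rangle = 1 \) for a character of degree 1.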
|
Yes
|
Corollary 5.12. Let \( \chi \) be the character of the representation of \( G \) on the simple space E. Then\n\n\[ \langle \chi ,\chi \rangle = 1\text{.} \]
|
Proof. This follows immediately from the theorem, and the expression of \( \chi \) as\n\n\[ \chi = {\rho }_{11} + \cdots + {\rho }_{dd} \]
|
Yes
|
Corollary 5.13. \( {\chi }_{i}\left( {e}_{j}\right) = {\delta }_{ij}{d}_{i} \) and \( {\chi }_{\text{reg }} = \mathop{\sum }\limits_{{i = 1}}^{s}{d}_{i}{\chi }_{i} \) .
|
Proof. The first formula is a direct application of the orthonormality of the characters. The second formula concerning the regular character is obtained by writing\n\n\[ \n{\chi }_{\text{reg }} = \mathop{\sum }\limits_{j}{m}_{j}{\chi }_{j} \n\]\n\nwith unknown coefficients. We know the values \( {\chi }_{\mathrm{{reg}}}\left( 1\right) = n \) and \( {\chi }_{\mathrm{{reg}}}\left( \sigma \right) = 0 \) if \( \sigma \neq 1 \) . Taking the scalar product of \( {\chi }_{\text{reg }} \) with \( {\chi }_{i} \) for \( i = 1,\ldots, s \) immediately yields the desired values for the coefficients \( {m}_{j} \) .
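As an example (a check, not part of the original text), for the symmetric group \( {S}_{3} \) the simple characters have degrees \( 1,1,2 \), and\n\n\[ {\chi }_{\text{reg }} = {\chi }_{1} + {\chi }_{2} + 2{\chi }_{3},\;{\chi }_{\text{reg }}\left( 1\right) = 1 + 1 + 4 = 6 = n, \]\n\nwhile for \( \sigma \neq 1 \) one checks \( 1 + \operatorname{sgn}\left( \sigma \right) + 2{\chi }_{3}\left( \sigma \right) = 0 \), since \( {\chi }_{3} \) vanishes on transpositions and equals \( - 1 \) on 3-cycles.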
|
Yes
|
Proposition 5.14. We have\n\n\[ \n{\rho }_{i}\left( {e}_{i}\right) = \mathrm{{id}}\;\text{ and }\;{\rho }_{i}\left( {e}_{j}\right) = 0\;\text{ if }i \neq j.\n\]
|
Proof. The map \( x \mapsto {e}_{i}x \) is a \( G \) -homomorphism of \( {E}_{i} \) into itself since \( {e}_{i} \) is in the center of \( k\left\lbrack G\right\rbrack \) . Hence by Lemma 5.9 this homomorphism is a scalar multiple of the identity. Taking the trace and using the orthogonality relations between simple characters immediately gives the desired value of this scalar.
|
No
|
Theorem 5.15. Let \( f \) be a class function on \( G \) . Then\n\n\[ f = \mathop{\sum }\limits_{{i = 1}}^{s}\left\langle {f,{\chi }_{i}}\right\rangle {\chi }_{i} \]
|
Proof. The number of conjugacy classes is equal to the number of distinct simple characters, and these are linearly independent, so they form a basis for the space of class functions. The coefficients are given by the stated formula, as one sees by taking the scalar product of \( f \) with any character \( {\chi }_{j} \) and using the orthonormality.
|
Yes
|
Theorem 5.16. Let \( {\rho }^{\left( i\right) } \) be a matrix representation of \( G \) on \( {E}_{i} \) relative to a choice of basis, and let \( {\rho }_{v,\mu }^{\left( i\right) } \) be the coefficient functions of this matrix, \( i = 1,\ldots, s \) and \( v,\mu = 1,\ldots ,{d}_{i} \) . Then the functions \( {\rho }_{v,\mu }^{\left( i\right) } \) form an orthogonal basis for the space of all functions on \( G \), and hence for any function \( f \) on \( G \) we have\n\n\[ f = \mathop{\sum }\limits_{{i = 1}}^{s}\mathop{\sum }\limits_{{v,\mu }}\frac{1}{{d}_{i}}\left\langle {f,{\rho }_{v,\mu }^{\left( i\right) }}\right\rangle {\rho }_{v,\mu }^{\left( i\right) }.\]
|
Proof. That the coefficient functions form an orthogonal basis follows from Theorems 5.8 and 5.11. The expression of \( f \) in terms of this basis is then merely the standard Fourier expansion relative to any scalar product. This concludes the proof.
|
No
|
Theorem 5.17. (a) Let \( \chi \) be an effective character in \( X\left( G\right) \). Then \( \chi \) is simple over \( \mathbf{C} \) if and only if \( \parallel \chi {\parallel }^{2} = 1 \), or alternatively,\n\n\[ \mathop{\sum }\limits_{{\sigma \in G}}{\left| \chi \left( \sigma \right) \right| }^{2} = \# \left( G\right) \]\n\n(b) Let \( \chi ,\psi \) be effective characters in \( X\left( G\right) \), with representation spaces \( E, F \) over \( \mathbf{C} \) . Then\n\n\[ \langle \chi ,\psi {\rangle }_{G} = \dim {\operatorname{Hom}}_{G}\left( {E, F}\right) . \]
|
Proof. The first part has been proved. For (b), let \( \chi ,\psi \) be effective characters in \( X\left( G\right) \), and let \( E, F \) be their representation spaces over \( \mathbf{C} \). Write \( \chi = \sum {m}_{i}{\chi }_{i} \) and \( \psi = \sum {q}_{i}{\chi }_{i} \) with nonnegative integral multiplicities \( {m}_{i},{q}_{i} \) . Then by orthonormality, we get\n\n\[ \langle \chi ,\psi {\rangle }_{G} = \sum {m}_{i}{q}_{i} \]\n\nBut if \( {E}_{i} \) is the representation space of \( {\chi }_{i} \) over \( \mathbf{C} \), then by Schur’s lemma\n\n\( \dim {\operatorname{Hom}}_{G}\left( {{E}_{i},{E}_{i}}\right) = 1 \) and \( \dim {\operatorname{Hom}}_{G}\left( {{E}_{i},{E}_{j}}\right) = 0 \) for \( i \neq j \).\n\nHence \( \dim {\operatorname{Hom}}_{G}\left( {E, F}\right) = \sum {m}_{i}{q}_{i} = \langle \chi ,\psi {\rangle }_{G} \), thus proving (b).
|
No
|
Corollary 5.18 With the above notation and \( k = \mathbf{C} \) for simplicity, we have:\n\n(a) The multiplicity of \( {1}_{G} \) in \( {E}^{ \vee } \otimes F \) is \( {\dim }_{k}{\operatorname{inv}}_{G}\left( {{E}^{ \vee } \otimes F}\right) \) .\n\n(b) The \( \left( {G, k}\right) \) -space \( E \) is simple if and only if \( {1}_{G} \) has multiplicity 1 in \( {E}^{ \vee } \otimes E \) .
|
Proof. Immediate from Theorem 5.17 and formula (3) of §1.
|
No
|
Proposition 7.1. Let \( \varphi : F \rightarrow {M}_{G}^{S}\left( F\right) \) be such that \( \varphi \left( x\right) = {\varphi }_{x} \) is the map\n\n\[{\varphi }_{x}\left( \tau \right) = \left\{ \begin{array}{lll} 0 & \text{ if } & \tau \notin S \\ {\tau x} & \text{ if } & \tau \in S \end{array}\right.\]\n\nThen \( \varphi \) is an S-homomorphism, \( \varphi : F \rightarrow {M}_{G}^{S}\left( F\right) \) is universal, and \( \varphi \) is injective. The image of \( \varphi \) consists of those elements \( f \in {M}_{G}^{S}\left( F\right) \) such that \( f\left( \tau \right) = 0 \) if \( \tau \notin S \) .
|
Proof. Let \( \sigma \in S \) and \( x \in F \) . Let \( \tau \in G \) . Then\n\n\[ \left( {\sigma {\varphi }_{x}}\right) \left( \tau \right) = {\varphi }_{x}\left( {\tau \sigma }\right) \]\n\nIf \( \tau \in S \), then this last expression is equal to \( {\varphi }_{\sigma x}\left( \tau \right) \) . If \( \tau \notin S \), then \( {\tau \sigma } \notin S \), and hence both \( {\varphi }_{\sigma x}\left( \tau \right) \) and \( {\varphi }_{x}\left( {\tau \sigma }\right) \) are equal to 0 . Thus \( \varphi \) is an \( S \) -homomorphism, and it is immediately clear that \( \varphi \) is injective. Furthermore, if \( f \in {M}_{G}^{S}\left( F\right) \) is such that \( f\left( \tau \right) = 0 \) if \( \tau \notin S \), then from the definitions, we conclude that \( f = {\varphi }_{x} \) where \( x = f\left( 1\right) \) .
|
Yes
|
Proposition 7.2. Let \( G = \mathop{\bigcup }\limits_{{i = 1}}^{r}S{\bar{c}}_{i} \) be a decomposition of \( G \) into right cosets.\n\nLet \( {F}_{1} \) be the additive group of functions in \( {M}_{G}^{S}\left( F\right) \) having value 0 at elements \( \xi \in G,\xi \notin S \) . Then\n\n\[ \n{M}_{G}^{S}\left( F\right) = {\bigoplus }_{i = 1}^{r}{\bar{c}}_{i}^{-1}{F}_{1} \n\]\n\nthe direct sum being taken as an abelian group.
|
Proof. For each \( f \in {M}_{G}^{S}\left( F\right) \), let \( {f}_{i} \) be the function such that\n\n\[ \n{f}_{i}\left( \xi \right) = \left\{ \begin{array}{lll} 0 & \text{ if } & \xi \notin S{\bar{c}}_{i} \\ f\left( \xi \right) & \text{ if } & \xi \in S{\bar{c}}_{i} \end{array}\right. \n\]\n\nFor all \( \sigma \in S \) we have \( {f}_{i}\left( {\sigma {\bar{c}}_{i}}\right) = \left( {{\bar{c}}_{i}{f}_{i}}\right) \left( \sigma \right) \) . It is immediately clear that \( {\bar{c}}_{i}{f}_{i} \) lies in \( {F}_{1} \), and\n\n\[ \nf = \mathop{\sum }\limits_{{i = 1}}^{r}{\bar{c}}_{i}^{-1}\left( {{\bar{c}}_{i}{f}_{i}}\right) \n\]\n\nThus \( {M}_{G}^{S}\left( F\right) \) is the sum of the subgroups \( {\bar{c}}_{i}^{-1}{F}_{1} \) . It is clear that this sum is direct, as desired.
|
Yes
|
Theorem 7.3. Let \( \left\{ {{\lambda }_{1},\ldots ,{\lambda }_{r}}\right\} \) be a system of left coset representatives of \( S \) in G. There exists a G-module \( E \) containing \( F \) as an S-submodule, such that \[ E = {\bigoplus }_{i = 1}^{r}{\lambda }_{i}F \] is a direct sum (as R-modules). Let \( \varphi : F \rightarrow E \) be the inclusion mapping. Then \( \varphi \) is universal in our category \( \mathbb{C} \), i.e. \( E \) is an induced module.
|
Proof. By the usual set-theoretic procedure of replacing \( {F}_{1} \) by \( F \) in \( {M}_{G}^{S}\left( F\right) \), we obtain a \( G \) -module \( E \) containing \( F \) as an \( S \) -submodule, and having the desired direct sum decomposition. Let \( {\varphi }^{\prime } : F \rightarrow {E}^{\prime } \) be an \( S \) -homomorphism into a \( G \) -module \( {E}^{\prime } \) . We define \[ h : E \rightarrow {E}^{\prime } \] by the rule \[ h\left( {{\lambda }_{1}{x}_{1} + \cdots + {\lambda }_{r}{x}_{r}}\right) = {\lambda }_{1}{\varphi }^{\prime }\left( {x}_{1}\right) + \cdots + {\lambda }_{r}{\varphi }^{\prime }\left( {x}_{r}\right) \] for \( {x}_{i} \in F \) . This is well defined since our sum for \( E \) is direct. We must show that \( h \) is a \( G \) -homomorphism. Let \( \sigma \in G \) . Then \[ \sigma {\lambda }_{i} = {\lambda }_{\sigma \left( i\right) }{\tau }_{\sigma, i} \] where \( \sigma \left( i\right) \) is some index depending on \( \sigma \) and \( i \), and \( {\tau }_{\sigma, i} \) is an element of \( S \), also depending on \( \sigma, i \) . Then \[ h\left( {\sigma {\lambda }_{i}{x}_{i}}\right) = h\left( {{\lambda }_{\sigma \left( i\right) }{\tau }_{\sigma, i}{x}_{i}}\right) = {\lambda }_{\sigma \left( i\right) }{\varphi }^{\prime }\left( {{\tau }_{\sigma, i}{x}_{i}}\right) . \] Since \( {\varphi }^{\prime } \) is an \( S \) -homomorphism, we see that this expression is equal to \[ {\lambda }_{\sigma \left( i\right) }{\tau }_{\sigma, i}{\varphi }^{\prime }\left( {x}_{i}\right) = {\sigma h}\left( {{\lambda }_{i}{x}_{i}}\right) \] By linearity, we conclude that \( h \) is a \( G \) -homomorphism, as desired.
|
Yes
|
Proposition 7.4. Let \( \psi \) be the character of the representation of \( S \) on the \( k \) -space \( F \) . Let \( E \) be the space of an induced representation. Then the character \( \chi \) of \( E \) is equal to the induced character \( {\psi }^{G} \), i.e. is given by the formula\n\n\[ \chi \left( \xi \right) = \mathop{\sum }\limits_{c}{\psi }_{0}\left( {\bar{c}\xi {\bar{c}}^{-1}}\right) \]\n\nwhere the sum is taken over the right cosets \( c \) of \( S \) in \( G,\bar{c} \) is a fixed coset representative for \( c \), and \( {\psi }_{0} \) is the extension of \( \psi \) to \( G \) obtained by setting \( {\psi }_{0}\left( \sigma \right) = 0 \) if \( \sigma \notin S \) .
|
Proof. Let \( \left\{ {{w}_{1},\ldots ,{w}_{m}}\right\} \) be a basis for \( F \) over \( k \) . We know that\n\n\[ E = \bigoplus {\bar{c}}^{-1}F \]\n\nLet \( \sigma \) be an element of \( G \) . The elements \( {\left\{ {\overline{c\sigma }}^{-1}{w}_{j}\right\} }_{c, j} \) form a basis for \( E \) over \( k \) .\n\nWe observe that \( \bar{c}\sigma {\overline{c\sigma }}^{-1} \) is an element of \( S \) because\n\n\[ S\bar{c}\sigma = {Sc\sigma } = S\overline{c\sigma } \]\n\nWe have\n\n\[ \sigma \left( {{\overline{c\sigma }}^{-1}{w}_{j}}\right) = {\bar{c}}^{-1}\left( {\bar{c}\sigma {\overline{c\sigma }}^{-1}}\right) {w}_{j}. \]\n\nLet\n\n\[ {\left( \bar{c}\sigma {\overline{c\sigma }}^{-1}\right) }_{\mu j} \]\n\nbe the components of the matrix representing the effect of \( \bar{c}\sigma {\overline{c\sigma }}^{-1} \) on \( F \) with respect to the basis \( \left\{ {{w}_{1},\ldots ,{w}_{m}}\right\} \) . Then the action of \( \sigma \) on \( E \) is given by\n\n\[ \sigma \left( {{\overline{c\sigma }}^{-1}{w}_{j}}\right) = {\bar{c}}^{-1}\mathop{\sum }\limits_{\mu }{\left( \bar{c}\sigma {\overline{c\sigma }}^{-1}\right) }_{\mu j}{w}_{\mu } \]\n\n\[ = \mathop{\sum }\limits_{\mu }{\left( \bar{c}\sigma {\overline{c\sigma }}^{-1}\right) }_{\mu j}\left( {{\bar{c}}^{-1}{w}_{\mu }}\right) \]\n\nBy definition,\n\n\[ \chi \left( \sigma \right) = \mathop{\sum }\limits_{{{c\sigma } = c}}\mathop{\sum }\limits_{j}{\left( \bar{c}\sigma {\bar{c}}^{-1}\right) }_{jj} \]\n\nBut \( {c\sigma } = c \) if and only if \( \bar{c}\sigma {\bar{c}}^{-1} \in S \), and in that case \( \overline{c\sigma } = \bar{c} \) . Furthermore,\n\n\[ \psi \left( {\bar{c}\sigma {\bar{c}}^{-1}}\right) = \mathop{\sum }\limits_{j}{\left( \bar{c}\sigma {\bar{c}}^{-1}\right) }_{jj}. \]\n\nHence\n\n\[ \chi \left( \sigma \right) = \mathop{\sum }\limits_{c}{\psi }_{0}\left( {\bar{c}\sigma {\bar{c}}^{-1}}\right) \]\n\nas was to be shown.
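As an illustration (not part of the original text), induce the trivial character \( \psi = 1 \) from \( S = {A}_{3} \) to \( G = {S}_{3} \), with coset representatives \( {\bar{c}}_{1} = 1 \) and \( {\bar{c}}_{2} \) a transposition. Since conjugation preserves \( {A}_{3} \), the formula gives\n\n\[ {\psi }^{G}\left( \sigma \right) = \mathop{\sum }\limits_{c}{\psi }_{0}\left( {\bar{c}\sigma {\bar{c}}^{-1}}\right) = \left\{ \begin{array}{ll} 2 & \text{ if }\sigma \in {A}_{3} \\ 0 & \text{ if }\sigma \notin {A}_{3}, \end{array}\right. \]\n\nso \( {\psi }^{G} = {1}_{G} + \operatorname{sgn} \), the character of \( G \) acting on the functions on the two cosets of \( S \) .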
|
Yes
|
Lemma 7.5. The elements \( \left\{ {{\tau }_{\gamma }\gamma }\right\} \) form a family of left coset representatives for \( S \) in \( G \) ; that is, we have a disjoint union\n\n\[ G = \mathop{\bigcup }\limits_{{\gamma ,{\tau }_{\gamma }}}{\tau }_{\gamma }{\gamma S} \]
|
Proof. First we have by hypothesis\n\n\[ G = \mathop{\bigcup }\limits_{\gamma }\mathop{\bigcup }\limits_{{\tau }_{\gamma }}{\tau }_{\gamma }\left( {H \cap \left\lbrack \gamma \right\rbrack S}\right) {\gamma S} \]\n\nand so every element of \( G \) can be written in the form\n\n\[ {\tau }_{\gamma }\gamma {s}_{1}{\gamma }^{-1}\gamma {s}_{2} = {\tau }_{\gamma }{\gamma s}\;\text{ with }\;{s}_{1},{s}_{2}, s \in S. \]\n\nOn the other hand, the elements \( {\tau }_{\gamma }\gamma \) represent distinct cosets of \( S \), because if \( {\tau }_{\gamma }{\gamma S} = {\tau }_{{\gamma }^{\prime }}{\gamma }^{\prime }S \), then \( \gamma = {\gamma }^{\prime } \), since the elements \( \gamma \) represent distinct double cosets, whence \( {\tau }_{\gamma } \) and \( {\tau }_{{\gamma }^{\prime }} \) represent the same coset of \( {\gamma S}{\gamma }^{-1} \), and therefore are equal. This proves the lemma.
|
Yes
|
Theorem 7.6. Applied to the S-module \( F \), we have an isomorphism of \( H \) - modules\n\n\[ \n{\operatorname{res}}_{H}^{G} \circ {\operatorname{ind}}_{S}^{G} \approx {\bigoplus }_{\gamma }{\operatorname{ind}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{H} \circ {\operatorname{res}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{\left\lbrack \gamma \right\rbrack S} \circ \left\lbrack \gamma \right\rbrack \n\]\n\nwhere the direct sum is taken over double coset representatives \( \gamma \) .
|
Proof. The induced module \( {\operatorname{ind}}_{S}^{G}\left( F\right) \) is simply the direct sum\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( F\right) = {\bigoplus }_{\gamma ,{\tau }_{\gamma }}{\tau }_{\gamma }{\gamma F} \n\]\n\nby Lemma 7.5, which gives us coset representatives of \( S \) in \( G \), and Theorem 7.3. On the other hand, for each \( \gamma \), the module\n\n\[ \n{\bigoplus }_{{\tau }_{\gamma }}{\tau }_{\gamma }{\gamma F} \n\]\n\n is a representation module for the induced representation from \( H \cap \left\lbrack \gamma \right\rbrack S \) on \( {\gamma F} \) to \( H \) . Taking the direct sum over \( \gamma \), we get the right-hand side of the expression in the theorem, and thus prove the theorem.
|
Yes
|
Theorem 7.7. Let \( H, S \) be subgroups of finite index in \( G \) . Let \( {F}_{1},{F}_{2} \) be \( \left( {H, R}\right) \) and \( \left( {S, R}\right) \) -modules respectively. Then we have an isomorphism of \( R \) - modules\n\n\[ \n{\operatorname{Hom}}_{G}\left( {{\operatorname{ind}}_{H}^{G}\left( {F}_{1}\right) ,{\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) }\right) \approx {\bigoplus }_{D}{M}_{D}\left( {{F}_{1},{F}_{2}}\right) ,\n\]\n\nwhere the direct sum is taken over all double cosets \( {H\gamma S} = D \) .
|
Proof. We have the isomorphisms:\n\n\( {\operatorname{Hom}}_{G}\left( {{\operatorname{ind}}_{H}^{G}\left( {F}_{1}\right) ,{\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) }\right) \approx {\operatorname{Hom}}_{H}\left( {{F}_{1},{\operatorname{res}}_{H}^{G} \circ {\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) }\right) \)\n\n\[ \n\approx {\bigoplus }_{\gamma }{\operatorname{Hom}}_{H}\left( {{F}_{1},{\operatorname{ind}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{H} \circ {\operatorname{res}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{\left\lbrack \gamma \right\rbrack S} \circ \left\lbrack \gamma \right\rbrack {F}_{2}}\right) \n\]\n\n\[ \n\approx {\bigoplus }_{\gamma }{\operatorname{Hom}}_{H \cap \left\lbrack \gamma \right\rbrack S}\left( {{F}_{1},\left\lbrack \gamma \right\rbrack {F}_{2}}\right) \n\]\n\nby applying the definition of the induced module in the first and third step, and applying Theorem 7.6 in the second step. Each term in the last expression is what we denoted by \( {M}_{D}\left( {{F}_{1},{F}_{2}}\right) \) if \( \gamma \) is a representative for the double coset \( D \) . This proves the theorem.
|
Yes
|
Corollary 7.8. Let \( R = k = \mathbf{C} \) . Let \( S, H \) be subgroups of the finite group G. Let \( D = {H\gamma S} \) range over the double cosets, with representatives \( \gamma \) . Let \( \chi \) be an effective character of \( H \) and \( \psi \) an effective character of \( S \) . Then\n\n\[ \n{\left\langle {\operatorname{ind}}_{H}^{G}\left( \chi \right) ,{\operatorname{ind}}_{S}^{G}\left( \psi \right) \right\rangle }_{G} = \mathop{\sum }\limits_{\gamma }\langle \chi ,\left\lbrack \gamma \right\rbrack \psi {\rangle }_{H \cap \left\lbrack \gamma \right\rbrack S} \n\]
|
Proof. Immediate from Theorem 5.17(b) and Theorem 7.7, taking dimensions on the left-hand side and on the right-hand side.
|
No
|
Corollary 7.9. (Irreducibility of the induced character). Let \( S \) be a subgroup of the finite group \( G \) . Let \( R = k = \mathbf{C} \) . Let \( \psi \) be an effective character of \( S \) . Then \( {\operatorname{ind}}_{S}^{G}\left( \psi \right) \) is irreducible if and only if \( \psi \) is irreducible and\n\n\[ \langle \psi ,\left\lbrack \gamma \right\rbrack \psi {\rangle }_{S \cap \left\lbrack \gamma \right\rbrack S} = 0 \]\n\nfor all \( \gamma \in G,\gamma \notin S \) .
|
Proof. Immediate from Corollary 7.8 and Theorem 5.17(a). It is of course trivial that if \( \psi \) is reducible, then so is the induced character.
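For example (a check, not part of the original text), let \( G = {S}_{3} \), \( S = {A}_{3} \), and let \( \psi \) be a nontrivial (hence irreducible, degree 1) character of \( {A}_{3} \) . For \( \gamma \notin S \), a transposition, we have \( S \cap \left\lbrack \gamma \right\rbrack S = S \) since \( S \) is normal, and \( \left\lbrack \gamma \right\rbrack \psi = \bar{\psi } \neq \psi \) because conjugation by \( \gamma \) inverts 3-cycles. Hence\n\n\[ \langle \psi ,\left\lbrack \gamma \right\rbrack \psi {\rangle }_{S \cap \left\lbrack \gamma \right\rbrack S} = \langle \psi ,\bar{\psi }{\rangle }_{{A}_{3}} = 0, \]\n\nand \( {\operatorname{ind}}_{S}^{G}\left( \psi \right) \) is the irreducible character of degree 2 of \( {S}_{3} \) .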
|
No
|
Theorem 7.10. Let \( S \) be a subgroup of \( G \) and let \( F \) be a finite free \( R \) -module.\n\nThen there is a G-isomorphism\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( {F}^{ \vee }\right) \approx {\left( {\operatorname{ind}}_{S}^{G}\left( F\right) \right) }^{ \vee }.\n\]
|
Proof. Let \( G = \bigcup {\lambda }_{i}S \) be a left coset decomposition. Then, as in Theorem 7.3, we can express the representation space for \( {\operatorname{ind}}_{S}^{G}\left( F\right) \) as\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( F\right) = \bigoplus {\lambda }_{i}F \n\]\n\nWe may select \( {\lambda }_{1} = 1 \) (unit element of \( G \) ). There is a unique \( R \) -homomorphism\n\n\[ \nf : {F}^{ \vee } \rightarrow {\left( {\operatorname{ind}}_{S}^{G}\left( F\right) \right) }^{ \vee }\n\]\n\nsuch that for \( \varphi \in {F}^{ \vee } \) and \( x \in F \) we have\n\n\[ \nf\left( \varphi \right) \left( {{\lambda }_{i}x}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }i \neq 1 \\ \varphi \left( x\right) & \text{ if }i = 1, \end{array}\right. \n\]\n\nwhich is in fact an \( R \) -isomorphism of \( {F}^{ \vee } \) onto \( {\left( {\lambda }_{1}F\right) }^{ \vee } \) . We claim that it is an \( S \) -homomorphism. This is a routine verification, which we write down. We have\n\n\[ \nf\left( {\left\lbrack \sigma \right\rbrack \varphi }\right) \left( {{\lambda }_{i}x}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }i \neq 1 \\ \sigma \left( {\varphi \left( {{\sigma }^{-1}x}\right) }\right) & \text{ if }i = 1. \end{array}\right. \n\]\n\nOn the other hand, note that for \( \sigma \in S \) we have \( {\sigma }^{-1}{\lambda }_{1} \in S \), so \( {\sigma }^{-1}{\lambda }_{1}x \in {\lambda }_{1}F \) for \( x \in F \) ; while for \( i \neq 1 \) we have \( {\sigma }^{-1}{\lambda }_{i} \notin S \), so \( {\sigma }^{-1}{\lambda }_{i}x \notin {\lambda }_{1}F \) . Hence\n\n\[ \n\left\lbrack \sigma \right\rbrack \left( {f\left( \varphi \right) }\right) \left( {{\lambda }_{i}x}\right) = {\sigma f}\left( \varphi \right) \left( {{\sigma }^{-1}{\lambda }_{i}x}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }i \neq 1 \\ \sigma \left( {\varphi \left( {{\sigma }^{-1}x}\right) }\right) & \text{ if }i = 1. \end{array}\right. \n\]\n\nThis proves that \( f \) commutes with the action of \( S \) .\n\nBy the universal property of the induced module, it follows that there is a unique \( \left( {G, R}\right) \) -homomorphism\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( f\right) : {\operatorname{ind}}_{S}^{G}\left( {F}^{ \vee }\right) \rightarrow {\left( {\operatorname{ind}}_{S}^{G}\left( F\right) \right) }^{ \vee },\n\]\n\nwhich must be an isomorphism because \( f \) was an isomorphism onto its image, the \( {\lambda }_{1} \) -component of the induced module. This concludes the proof of the theorem.
|
Yes
|
Theorem 7.11. Let \( S \) be a subgroup of finite index in \( G \) . Let \( F \) be an \( S \) - module, and \( E \) a \( G \) -module (over the commutative ring \( R \) ). Then there is an isomorphism\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( {{\operatorname{res}}_{S}\left( E\right) \otimes F}\right) \approx E \otimes {\operatorname{ind}}_{S}^{G}\left( F\right) .\n\]
|
Proof. The \( G \) -module \( {\operatorname{ind}}_{S}^{G}\left( F\right) \) contains \( F \) as a summand, because it is the direct sum \( \bigoplus {\lambda }_{i}F \) with left coset representatives \( {\lambda }_{i} \) as in Theorem 7.3. Hence we have a natural \( S \) -isomorphism\n\n\[ \nf : {\operatorname{res}}_{S}\left( E\right) \otimes F\overset{ \approx }{ \rightarrow }E \otimes {\lambda }_{1}F \subset E \otimes {\operatorname{ind}}_{S}^{G}\left( F\right) ,\n\]\n\ntaking the representative \( {\lambda }_{1} \) to be 1 (the unit element of \( G \) ). By the universal property of induction, there is a \( G \) -homomorphism\n\n\[ \n{\operatorname{ind}}_{S}^{G}\left( f\right) : {\operatorname{ind}}_{S}^{G}\left( {{\operatorname{res}}_{S}\left( E\right) \otimes F}\right) \rightarrow E \otimes {\operatorname{ind}}_{S}^{G}\left( F\right) ,\n\]\n\nwhich is immediately verified to be an isomorphism, as desired. (Note that only the bijectivity needs to be verified in this last step, and it follows from the direct sum structure as \( R \) -modules.)
|
Yes
|
Theorem 7.12. There is a G-isomorphism\n\n\[ \n{\operatorname{ind}}_{H}^{G}\left( {F}_{1}\right) \otimes {\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) \approx {\bigoplus }_{\gamma }{\operatorname{ind}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{G}\left( {{F}_{1} \otimes \left\lbrack \gamma \right\rbrack {F}_{2}}\right) ,\n\]\n\nwhere the sum is taken over double coset representatives \( \gamma \) .
|
Proof. We have:\n\n\( {\operatorname{ind}}_{H}^{G}\left( {F}_{1}\right) \otimes {\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) \approx {\operatorname{ind}}_{H}^{G}\left( {{F}_{1} \otimes {\operatorname{res}}_{H}{\operatorname{ind}}_{S}^{G}\left( {F}_{2}\right) }\right) \; \) by Theorem 7.11\n\n\( \approx {\bigoplus }_{\gamma }{\operatorname{ind}}_{H}^{G}\left( {{F}_{1} \otimes {\operatorname{ind}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{H}{\operatorname{res}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{\left\lbrack \gamma \right\rbrack S}\left( {\left\lbrack \gamma \right\rbrack {F}_{2}}\right) }\right) \) by Theorem 7.6\n\n\( \approx {\bigoplus }_{\gamma }{\mathrm{{ind}}}_{H}^{G}\left( {{\mathrm{{ind}}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{H}\left( {{\mathrm{{res}}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{H}\left( {F}_{1}\right) \otimes {\mathrm{{res}}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{\left\lbrack \gamma \right\rbrack S}\left( {\left\lbrack \gamma \right\rbrack {F}_{2}}\right) }\right) }\right) \;\) by Theorem 7.11 again\n\n\[ \n\approx {\bigoplus }_{\gamma }{\operatorname{ind}}_{H \cap \left\lbrack \gamma \right\rbrack S}^{G}\left( {{F}_{1} \otimes \left\lbrack \gamma \right\rbrack {F}_{2}}\right) \;\text{by transitivity of induction} \n\]\n\nwhere we view \( {F}_{1} \otimes \left\lbrack \gamma \right\rbrack {F}_{2} \) as an \( H \cap \left\lbrack \gamma \right\rbrack S \) -module in this last line. This proves the theorem.
|
Yes
|
Proposition 8.1. Let \( H \) be a subgroup of \( G \), and let \( \psi \) be a character of \( H \) . Let \( {\psi }^{G} \) be the induced character. Then the multiplicity of \( {1}_{H} \) in \( \psi \) is the same as the multiplicity of \( {1}_{G} \) in \( {\psi }^{G} \) .
|
Proof. By Theorem 6.1 (i), we have\n\n\[ \n{\left\langle \psi ,{1}_{H}\right\rangle }_{H} = {\left\langle {\psi }^{G},{1}_{G}\right\rangle }_{G} \n\]\n\nThese scalar products are precisely the multiplicities in question.
|
Yes
|
Proposition 8.2. The regular representation is the representation induced by the trivial character on the trivial subgroup of \( G \) .
|
Proof. This follows at once from the formula of Proposition 7.4 for the induced character. When \( H = \{ 1\} \) the cosets of \( H \) are the elements of \( G \), so\n\n\[ \n{\psi }^{G}\left( \tau \right) = \mathop{\sum }\limits_{{\sigma \in G}}{\psi }_{0}\left( {{\sigma \tau }{\sigma }^{-1}}\right) \n\]\n\ntaking \( \psi = 1 \) on the trivial subgroup, where \( {\psi }_{0} \) is the extension of \( \psi \) to \( G \) by 0 outside \( H \) .
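Explicitly (a check, not part of the original text), with \( H = \{ 1\} \) and \( \psi = 1 \) we get\n\n\[ {\psi }^{G}\left( \tau \right) = \mathop{\sum }\limits_{{\sigma \in G}}{\psi }_{0}\left( {{\sigma \tau }{\sigma }^{-1}}\right) = \left\{ \begin{array}{ll} n & \text{ if }\tau = 1 \\ 0 & \text{ if }\tau \neq 1, \end{array}\right. \]\n\nwhich is precisely the regular character \( {\chi }_{\text{reg }} \) .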
|
Yes
|
Proposition 8.5. Let \( G \) be a finite group of order \( n \) . Then\n\n\[ n{u}_{G} = \sum {\lambda }_{A}^{G} \]\n\nthe sum being taken over all cyclic subgroups of \( G \) .
|
Proof. Given two class functions \( \chi ,\psi \) on \( G \), we have the usual scalar product:\n\n\[ \langle \psi ,\chi {\rangle }_{G} = \frac{1}{n}\mathop{\sum }\limits_{{\sigma \in G}}\psi \left( \sigma \right) \overline{\chi \left( \sigma \right) } \]\n\nLet \( \psi \) be any class function on \( G \) . Then:\n\n\[ \left\langle {\psi, n{u}_{G}}\right\rangle = \left\langle {\psi, n{r}_{G}}\right\rangle - \left\langle {\psi, n{1}_{G}}\right\rangle \]\n\n\[ = {n\psi }\left( 1\right) - \mathop{\sum }\limits_{{\sigma \in G}}\psi \left( \sigma \right) \]\n\nOn the other hand, using the fact that the induced character is the transpose of the restriction, we obtain\n\n\[ \mathop{\sum }\limits_{A}\left\langle {\psi ,{\lambda }_{A}^{G}}\right\rangle = \mathop{\sum }\limits_{A}\left\langle {\psi \mid A,{\lambda }_{A}}\right\rangle \]\n\n\[ = \mathop{\sum }\limits_{A}\left\langle {\psi \mid A,\varphi \left( a\right) {r}_{A} - {\theta }_{A}}\right\rangle \]\n\n\[ = \mathop{\sum }\limits_{A}\varphi \left( a\right) \psi \left( 1\right) - \mathop{\sum }\limits_{A}\frac{1}{a}\mathop{\sum }\limits_{{\sigma \operatorname{gen}A}}{a\psi }\left( \sigma \right) \]\n\n\[ = {n\psi }\left( 1\right) - \mathop{\sum }\limits_{{\sigma \in G}}\psi \left( \sigma \right) \]\n\nSince the functions on the right and left of the equality sign in the statement of our proposition have the same scalar product with an arbitrary function, they are equal. This proves our proposition.
|
Yes
|
Proposition 8.6. If \( A \neq \{ 1\} \), the function \( {\lambda }_{A} \) is a linear combination of irreducible nontrivial characters of \( A \) with positive integral coefficients.
|
Proof. If \( A \) is cyclic of prime order, then by Proposition 8.5, we know that \( {\lambda }_{A} = n{u}_{A} \), and our assertion follows from the standard structure of the regular representation.\n\nIn order to prove the assertion in general, it suffices to prove that the Fourier coefficients of \( {\lambda }_{A} \) with respect to a character of degree 1 are integers \( \geqq 0 \) . Let \( \psi \) be a character of degree 1 . We take the scalar product with respect to \( A \), and obtain:\n\n\[ \left\langle {\psi ,{\lambda }_{A}}\right\rangle = \varphi \left( a\right) \psi \left( 1\right) - \mathop{\sum }\limits_{{\sigma \text{ gen }}}\psi \left( \sigma \right) \]\n\n\[ = \varphi \left( a\right) - \mathop{\sum }\limits_{{\sigma \text{ gen }}}\psi \left( \sigma \right) \]\n\n\[ = \mathop{\sum }\limits_{{\sigma \text{ gen }}}\left( {1 - \psi \left( \sigma \right) }\right) \]\n\nThe sum \( \sum \psi \left( \sigma \right) \) taken over generators of \( A \) is an algebraic integer, and is in fact a rational number (for any number of elementary reasons), hence a rational integer. Furthermore, if \( \psi \) is non-trivial, all real parts of\n\n\[ 1 - \psi \left( \sigma \right) \]\n\nare \( > 0 \) if \( \sigma \neq \mathrm{{id}} \) and are 0 if \( \sigma = \mathrm{{id}} \) . From the last two inequalities, we conclude that the sums must be equal to a positive integer. If \( \psi \) is the trivial character, then the sum is clearly 0 . Our proposition is proved.
|
Yes
|
Proposition 9.1. Every subgroup and every factor group of a super-solvable group is supersolvable.
|
Proof. Obvious, using the standard homomorphism theorems.
|
No
|
Proposition 9.2. Let \( G \) be a non-abelian supersolvable group. Then there exists a normal abelian subgroup which contains the center properly.
|
Proof. Let \( C \) be the center of \( G \), and let \( \bar{G} = G/C \) . Let \( \bar{H} \) be a normal subgroup of prime order in \( \bar{G} \), and let \( H \) be its inverse image in \( G \) under the canonical map \( G \rightarrow G/C \) . If \( \bar{\sigma } \) is a generator of \( \bar{H} \), then an inverse image \( \sigma \) of \( \bar{\sigma } \), together with \( C \), generates \( H \) . Hence \( H \) is abelian, normal, and contains the center properly.
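For a concrete instance (an illustration added here, not in the original), take \( G = D_4 \), the dihedral group of order 8: its center has order 2, and the rotation subgroup is a normal abelian subgroup of order 4 properly containing it. A self-contained sketch, modelling \( D_4 \) as permutations of the square's vertices:

```python
# Model D4 as permutations of the square's vertices 0,1,2,3 (in cyclic order).
def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    q = [0] * 4
    for i, pi in enumerate(p):
        q[pi] = i
    return tuple(q)

e = (0, 1, 2, 3)
r = (1, 2, 3, 0)      # rotation by 90 degrees
s = (0, 3, 2, 1)      # reflection through the 0-2 diagonal

# close {e, r, s} under composition to obtain the whole group
G = {e, r, s}
while True:
    new = {compose(a, b) for a in G for b in G} - G
    if not new:
        break
    G |= new

center = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
H = {e, r, compose(r, r), compose(compose(r, r), r)}   # rotation subgroup

# H is abelian, normal in G, and contains the center properly
assert all(compose(a, b) == compose(b, a) for a in H for b in H)
assert all(compose(compose(g, h), inverse(g)) in H for g in G for h in H)
assert center < H
print(len(G), len(center), len(H))   # 8 2 4
```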
|
Yes
|
Theorem 9.3. (Blichfeldt). Let \( G \) be a supersolvable group, let \( k \) be algebraically closed. Let \( E \) be a simple \( \left( {G, k}\right) \) -space. If \( {\dim }_{k}E > 1 \), then there exists a proper subgroup \( H \) of \( G \) and a simple \( H \) -space \( F \) such that \( E \) is induced by \( F \) .
|
Proof. Since a simple representation of an abelian group is 1-dimensional, our hypothesis implies that \( G \) is not abelian.\n\nWe shall first give the proof of our theorem under the additional hypothesis that \( E \) is faithful. (This means that \( {\sigma x} = x \) for all \( x \in E \) implies \( \sigma = 1 \) .) It will be easy to remove this restriction at the end.
|
No
|
Corollary 9.5. Let \( G \) be a product of a p-group and a cyclic group, and let \( k \) be algebraically closed. If \( E \) is a simple \( \left( {G, k}\right) \) -space and is not \( 1 \) -dimensional, then \( E \) is induced by a 1-dimensional representation of some subgroup.
|
Proof. We apply the theorem step by step using the transitivity of induced representations until we get a 1-dimensional representation of a subgroup.
|
No
|
Lemma 10.1. \( {V}_{R}\left( G\right) \) is an ideal in \( {X}_{R}\left( G\right) \) .
|
Proof. This is immediate from Theorem 6.1.
|
No
|
Proposition 10.2. Every subgroup and every factor group of a p-elementary group is p-elementary. If \( S \) is a subgroup of the p-elementary group \( P \times C \) , where \( P \) is a p-group, and \( C \) is cyclic, of order prime to \( p \), then
|
\[ S = \left( {S \cap P}\right) \times \left( {S \cap C}\right) . \]\n\nProof. Clear.
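A brute-force check of the displayed decomposition in a small case (an added illustration, with \( G = \mathbf{Z}/4 \times \mathbf{Z}/3 \), so \( P = \mathbf{Z}/4 \) and \( C = \mathbf{Z}/3 \), written additively):

```python
from itertools import combinations, product

# G = Z/4 x Z/3; P = Z/4 x {0}, C = {0} x Z/3, with gcd(4, 3) = 1
G = list(product(range(4), range(3)))

def add(u, v):
    return ((u[0] + v[0]) % 4, (u[1] + v[1]) % 3)

def generated(gens):
    # subgroup generated by gens (finite abelian, so closure under + suffices)
    S = {(0, 0)} | set(gens)
    while True:
        new = {add(u, v) for u in S for v in S} - S
        if not new:
            return frozenset(S)
        S |= new

subgroups = {generated(gens) for k in range(3) for gens in combinations(G, k)}

for S in subgroups:
    SP = {u for u in S if u[1] == 0}                    # S ∩ P
    SC = {u for u in S if u[0] == 0}                    # S ∩ C
    assert {add(u, v) for u in SP for v in SC} == S     # S = (S ∩ P) x (S ∩ C)
print(len(subgroups), "subgroups verified")   # 6 subgroups verified
```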
|
No
|
Lemma 10.3. If \( d \in \mathbf{Z} \) and the constant function \( d \cdot {1}_{G} \) belongs to \( {V}_{R} \) then \( d \cdot {1}_{G} \) belongs to \( {V}_{\mathbf{Z}} \) .
|
Proof. We contend that \( 1,\zeta ,\ldots ,{\zeta }^{N - 1} \) are linearly independent over \( {X}_{\mathbf{Z}}\left( G\right) \) . Indeed, a relation of linear dependence would yield\n\n\[ \mathop{\sum }\limits_{{v = 1}}^{s}\mathop{\sum }\limits_{{j = 0}}^{{N - 1}}{c}_{vj}{\chi }_{v}{\zeta }^{j} = 0 \]\n\nwith integers \( {c}_{vj} \) not all 0 . But the simple characters are linearly independent over \( k \) . The above relation is a relation between these simple characters with coefficients in \( R \), and we get a contradiction. We conclude therefore that\n\n\[ {V}_{R} = {V}_{\mathbf{Z}} \oplus {V}_{\mathbf{Z}}\zeta \oplus \cdots \oplus {V}_{\mathbf{Z}}{\zeta }^{N - 1} \]\n\nis a direct sum (of abelian groups), and our lemma follows.
|
Yes
|
Lemma 10.4. Let \( f \in {X}_{R}\left( G\right) \), and assume that \( f\left( \sigma \right) \in \mathbf{Z} \) for all \( \sigma \in G \) . Then \( f \) is constant \( {\;\operatorname{mod}\;p} \) on every \( p \) -class.
|
Proof. Let \( x = {\sigma \tau } \), where \( \sigma \) is \( p \) -singular, and \( \tau \) is \( p \) -regular, and \( \sigma ,\tau \) commute. It will suffice to prove that\n\n\[ f\left( x\right) \equiv f\left( \tau \right) \;\left( {\;\operatorname{mod}\;p}\right) \]\n\nLet \( H \) be the cyclic subgroup generated by \( x \) . Then the restriction of \( f \) to \( H \) can be written\n\n\[ {f}_{H} = \sum {a}_{j}{\psi }_{j} \]\n\nwith \( {a}_{j} \in R \), and \( {\psi }_{j} \) being the simple characters of \( H \), hence homomorphisms of \( H \) into \( {k}^{ * } \) . For some power \( {p}^{r} \) we have \( {x}^{{p}^{r}} = {\tau }^{{p}^{r}} \), whence \( {\psi }_{j}{\left( x\right) }^{{p}^{r}} = {\psi }_{j}{\left( \tau \right) }^{{p}^{r}} \), and hence\n\n\[ f{\left( x\right) }^{{p}^{r}} \equiv f{\left( \tau \right) }^{{p}^{r}}\;\left( {{\;\operatorname{mod}\;p}R}\right) . \]\n\nWe now use the following lemma.\n\nLemma 10.5. Let \( R = \mathbf{Z}\left\lbrack \zeta \right\rbrack \) be as before. If \( a \in \mathbf{Z} \) and \( a \in {pR} \) then \( a \in p\mathbf{Z} \) .\n\nProof. This is immediate from the fact that \( R \) has a basis over \( \mathbf{Z} \) such that 1 is a basis element.\n\nApplying Lemma 10.5, we conclude that \( f\left( x\right) \equiv f\left( \tau \right) \left( {\;\operatorname{mod}\;p}\right) \), because \( {b}^{{p}^{r}} \equiv b\left( {\;\operatorname{mod}\;p}\right) \) for every integer \( b \) .
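The last step rests on the elementary congruence \( {b}^{{p}^{r}} \equiv b \pmod{p} \) for every integer \( b \), obtained by iterating Fermat's little theorem. A quick sketch of that arithmetic fact (added here as an illustration):

```python
def iterated_fermat_holds(b, p, r):
    # b**(p**r) == b (mod p): apply b**p == b (mod p) r times
    return pow(b, p ** r, p) == b % p

# check over a range of residues, positive and negative, for p = 5, r = 3
assert all(iterated_fermat_holds(b, 5, 3) for b in range(-20, 20))
print("b**(p**r) == b (mod p) verified for p=5, r=3")
```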
|
Yes
|
Lemma 10.5. Let \( R = \mathbf{Z}\left\lbrack \zeta \right\rbrack \) be as before. If \( a \in \mathbf{Z} \) and \( a \in {pR} \) then \( a \in p\mathbf{Z} \) .
|
Proof. This is immediate from the fact that \( R \) has a basis over \( \mathbf{Z} \) such that 1 is a basis element.
|
No
|