Now let $x_1, \dots, x_r$ be a basis of $\mathfrak{h}^*$ and $y_1, \dots, y_r$ a basis of $\mathfrak{h}$. There is an isomorphism $\mu : H_{1,c}(G,\mathfrak{h}) \xrightarrow{\sim} H_{1,c}(G,\mathfrak{h}^*)$, which is given by $x_a \mapsto y_a$, $y_b \mapsto -x_b$, $g \mapsto g$.

Suppose $P = P(x_1, \dots, x_r) \in (S\mathfrak{h}^*)^G$. Then we have
$$[y, P] = \partial_y P \in S\mathfrak{h}^*, \qquad y \in \mathfrak{h}$$
(this follows from the fact that both sides act in the same way in the polynomial representation, which is faithful). So using the isomorphism $\mu$, we conclude that if $Q \in (S\mathfrak{h})^G$, $Q = Q(y_1, \dots, y_r)$, then $[x, Q] = -\partial_x Q$ for $x \in \mathfrak{h}^*$.

Now, to prove the proposition, the only thing we need to check is that $M_\lambda$ is invariant under $x \in \mathfrak{h}^*$. For any $v \in M_\lambda$, we have $(Q - \lambda(Q))^N v = 0$ for some $N$. Then
$$(Q - \lambda(Q))^{N+1}\, x v = (N+1)\,\partial_x Q \cdot (Q - \lambda(Q))^N v = 0.$$
So $xv \in M_\lambda$. $\square$

Corollary 3.17. We have the following decomposition:
$$\mathcal{O}_c(G,\mathfrak{h}) = \bigoplus_{\lambda \in \mathfrak{h}^*/G} \mathcal{O}_c(G,\mathfrak{h})_\lambda,$$
where $\mathcal{O}_c(G,\mathfrak{h})_\lambda$ is the subcategory of modules where $(S\mathfrak{h})^G$ acts with generalized eigenvalue $\lambda$.

Proof. Directly from the definition and the proposition. $\square$

Note that $\mathcal{O}_c(G,\mathfrak{h})_\lambda$ is an abelian category closed under taking subquotients and extensions.
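As a sanity check of the identity $[x, Q] = -\partial_x Q$ (a verification of my own, using the rank 1 relations from Section 3.14 below): take $G = \mathbb{Z}/2$, so that $[y,x] = 1 - 2cs$, $sxs^{-1} = -x$, $sys^{-1} = -y$, and take $Q = y^2 \in (S\mathfrak{h})^G$. Then

```latex
[x, y^2] = [x,y]\,y + y\,[x,y]
         = -(1-2cs)\,y - y\,(1-2cs)
         = -2y + 2c\,(sy + ys)
         = -2y = -\partial_x (y^2),
```

since $sy = -ys$ and $\partial_x y = (y,x) = 1$.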
https://ocw.mit.edu/courses/18-735-double-affine-hecke-algebras-in-representation-theory-combinatorics-geometry-and-mathematical-physics-fall-2009/0a639a83911449ff554a28b7772cb49f_MIT18_735F09_ch03.pdf
3.6. The grading element. Let
$$h = \sum_i x_i y_i + \frac{\dim \mathfrak{h}}{2} - \sum_{s \in S} \frac{2c_s}{1-\lambda_s}\, s. \tag{3.2}$$

Proposition 3.18. We have
$$[h, x] = x,\ x \in \mathfrak{h}^*, \qquad [h, y] = -y,\ y \in \mathfrak{h}.$$

Proof. Let us prove the first relation; the second one is proved similarly. Using $[y_i, x] = (y_i, x) - \sum_{s \in S} c_s (\alpha_s, y_i)(\alpha_s^\vee, x)\, s$ and $[s, x] = \frac{\lambda_s - 1}{2}(\alpha_s^\vee, x)\,\alpha_s \cdot s$, we have
$$[h, x] = \sum_i x_i [y_i, x] - \sum_{s \in S} \frac{2c_s}{1-\lambda_s}\,[s, x]
= \sum_i x_i (y_i, x) - \sum_{s \in S} c_s \sum_i x_i (\alpha_s, y_i)(\alpha_s^\vee, x)\, s + \sum_{s \in S} c_s (\alpha_s^\vee, x)\,\alpha_s \cdot s.$$
The last two terms cancel since $\sum_i x_i (\alpha_s, y_i) = \alpha_s$, so we get
$$[h, x] = \sum_i x_i (y_i, x) = x. \qquad \square$$

Proposition 3.19. Let $G = W$ be a real reflection group. Let
$$h = \sum_i x_i y_i + \frac{\dim \mathfrak{h}}{2} - \sum_{s \in S} c_s s, \qquad \mathbf{E} = \frac{1}{2}\sum_i x_i^2, \qquad \mathbf{F} = -\frac{1}{2}\sum_i y_i^2$$
(note that for a real reflection group $\lambda_s = -1$, so $2c_s/(1-\lambda_s) = c_s$, consistent with (3.2)). Then
(i) $h = \sum_i (x_i y_i + y_i x_i)/2$;
(ii) $h, \mathbf{E}, \mathbf{F}$ form an $\mathfrak{sl}_2$-triple.

Proof. A direct calculation. $\square$

Theorem 3.20. Let $M$ be a module over $H_{1,c}(G,\mathfrak{h})$.
(i) If $\mathfrak{h}$ acts locally nilpotently on $M$, then $h$ acts locally finitely on $M$.
(ii) If $M$ is finitely generated over $S\mathfrak{h}^*$, then $M \in \mathcal{O}_c(G,\mathfrak{h})_0$ if and only if $h$ acts locally finitely on $M$.
Proof. (i) Assume that $\mathfrak{h}$ acts locally nilpotently on $M$. Let $v \in M$. Then $S\mathfrak{h} \cdot v$ is a finite dimensional vector space; let $d = \dim S\mathfrak{h} \cdot v$. We prove that $v$ is $h$-finite by induction in $d$. We can use $d = 0$ as the base, so we only need to do the induction step. The space $S\mathfrak{h} \cdot v$ must contain a nonzero vector $u$ such that $y \cdot u = 0$ for all $y \in \mathfrak{h}$. Let $U \subset M$ be the subspace of vectors with this property. Then $h$ acts on $U$ by an element of $\mathbb{C}G$, hence locally finitely. So it is sufficient to show that the image of $v$ in $M/\langle U \rangle$ is $h$-finite (where $\langle U \rangle$ is the submodule generated by $U$). But this is true by the induction assumption, since $u = 0$ in $M/\langle U \rangle$.

(ii) We need to show that if $h$ acts locally finitely on $M$, then $\mathfrak{h}$ acts locally nilpotently on $M$. Assume $h$ acts locally finitely on $M$. Then $M = \bigoplus_{\beta \in B} M[\beta]$, where $B \subset \mathbb{C}$. Since $M$ is finitely generated over $S\mathfrak{h}^*$, $B$ is a finite union of sets of the form $z + \mathbb{Z}_{\ge 0}$, $z \in \mathbb{C}$. Since elements of $\mathfrak{h}$ lower the eigenvalue of $h$ by $1$, $S\mathfrak{h}$ must act locally nilpotently on $M$. $\square$

We can obtain the following corollary easily.
Corollary 3.21. Any finite dimensional $H_{1,c}(G,\mathfrak{h})$-module is in $\mathcal{O}_c(G,\mathfrak{h})_0$.

We see that any module $M \in \mathcal{O}_c(G,\mathfrak{h})_0$ has a grading by generalized eigenvalues of $h$: $M = \bigoplus_\beta M[\beta]$.

3.7. Standard modules. Let $\tau$ be a finite dimensional representation of $G$. The standard module over $H_{1,c}(G,\mathfrak{h})$ corresponding to $\tau$ (also called the Verma module) is
$$M_c(G,\mathfrak{h},\tau) = M_c(\tau) = H_{1,c}(G,\mathfrak{h}) \otimes_{\mathbb{C}G \ltimes S\mathfrak{h}} \tau \in \mathcal{O}_c(G,\mathfrak{h})_0,$$
where $S\mathfrak{h}$ acts on $\tau$ by zero. So from the PBW theorem, we have that as vector spaces, $M_c(\tau) \cong \tau \otimes S\mathfrak{h}^*$.

Remark 3.22. More generally, for any $\lambda \in \mathfrak{h}^*$, let $G_\lambda = \mathrm{Stab}(\lambda)$, and let $\tau$ be a finite dimensional representation of $G_\lambda$. Then we can define $M_{c,\lambda}(G,\mathfrak{h},\tau) = H_{1,c}(G,\mathfrak{h}) \otimes_{\mathbb{C}G_\lambda \ltimes S\mathfrak{h}} \tau$, where $S\mathfrak{h}$ acts on $\tau$ by $\lambda$. These modules are called the Whittaker modules.

Let $\tau$ be irreducible, and let $h_c(\tau)$ be the number given by the formula
$$h_c(\tau) = \frac{\dim \mathfrak{h}}{2} - \sum_{s \in S} \frac{2c_s}{1-\lambda_s}\, s|_\tau.$$
Then we see that $h$ acts on $\tau \otimes S^m\mathfrak{h}^*$ by the scalar $h_c(\tau) + m$.

Definition 3.23. A vector $v$ in an $H_{1,c}$-module $M$ is singular if $y_i v = 0$ for all $i$.

Proposition 3.24. Let $U$ be an $H_{1,c}(G,\mathfrak{h})$-module. Let $\tau \subset U$ be a $G$-submodule consisting of singular vectors. Then there is a unique homomorphism $\phi : M_c(\tau) \to U$ of $\mathbb{C}[\mathfrak{h}]$-modules such that $\phi|_\tau$ is the identity, and it is an $H_{1,c}$-homomorphism.
Proof. The first statement follows from the fact that $M_c(\tau)$ is a free module over $\mathbb{C}[\mathfrak{h}]$ generated by $\tau$. Also, it follows from Frobenius reciprocity that there must exist a map $\phi$ which is an $H_{1,c}$-homomorphism. This implies the proposition. $\square$

3.8. Finite length.

Proposition 3.25. There exists $K \in \mathbb{R}$ such that for any $M \subset N$ in $\mathcal{O}_c(G,\mathfrak{h})_0$, if $M[\beta] = N[\beta]$ for $\mathrm{Re}\,\beta \le K$, then $M = N$.

Proof. Let $K = \max_\tau \mathrm{Re}\, h_c(\tau)$. Then if $M \ne N$, the quotient $N/M$ begins in degree $\beta_0$ with $\mathrm{Re}\,\beta_0 > K$, which is impossible, since by Proposition 3.24, $\beta_0$ must equal $h_c(\tau)$ for some $\tau$. $\square$

Corollary 3.26. Any $M \in \mathcal{O}_c(G,\mathfrak{h})_0$ has finite length.

Proof. Directly from the proposition. $\square$

3.9. Characters. For $M \in \mathcal{O}_c(G,\mathfrak{h})_0$, define the character of $M$ as the following formal series in $t$:
$$\mathrm{ch}_M(g,t) = \sum_\beta t^\beta\, \mathrm{Tr}|_{M[\beta]}(g) = \mathrm{Tr}|_M(g\, t^h), \qquad g \in G.$$

Proposition 3.27. We have
$$\mathrm{ch}_{M_c(\tau)}(g,t) = \frac{\chi_\tau(g)\, t^{h_c(\tau)}}{\det_{\mathfrak{h}^*}(1 - tg)}.$$

Proof. We begin with the following lemma.
Lemma 3.28 (MacMahon's Master theorem). Let $V$ be a finite dimensional space, $A : V \to V$ a linear operator. Then
$$\sum_{n \ge 0} t^n\, \mathrm{Tr}(S^n A) = \frac{1}{\det(1 - tA)}.$$

Proof of the lemma. If $A$ is diagonalizable, this is obvious. The general statement follows by continuity. $\square$

The lemma implies that $\mathrm{Tr}|_{S\mathfrak{h}^*}(g\, t^D) = \frac{1}{\det(1 - tg)}$, where $D$ is the degree operator. This implies the required statement. $\square$

3.10. Irreducible modules. Let $\tau$ be an irreducible representation of $G$.

Proposition 3.29. $M_c(\tau)$ has a maximal proper submodule $J_c(\tau)$.

Proof. The proof is standard. $J_c(\tau)$ is the sum of all proper submodules of $M_c(\tau)$, and it is not equal to $M_c(\tau)$ because any proper submodule has a grading by generalized eigenspaces of $h$, with eigenvalues $\beta$ such that $\beta - h_c(\tau) > 0$. $\square$

We define $L_c(\tau) = M_c(\tau)/J_c(\tau)$, which is an irreducible module.

Proposition 3.30. Any irreducible object of $\mathcal{O}_c(G,\mathfrak{h})_0$ has the form $L_c(\tau)$ for a unique $\tau$.

Proof. Let $L \in \mathcal{O}_c(G,\mathfrak{h})_0$ be irreducible, with lowest eigenspace of $h$ containing an irreducible $G$-module $\tau$. Then by Proposition 3.24, we have a nonzero homomorphism $M_c(\tau) \to L$, which is surjective, since $L$ is irreducible. Then we must have $L \cong L_c(\tau)$. $\square$

Remark 3.31. Let $\chi$ be a character of $G$. Then we have an isomorphism $H_{1,c}(G,\mathfrak{h}) \to H_{1,c\chi}(G,\mathfrak{h})$, mapping $g \in G$ to $\chi^{-1}(g)g$. This isomorphism maps $L_c(\tau)$ to $L_{c\chi}(\chi^{-1} \otimes \tau)$ isomorphically.
3.11. The contragredient module. Set $\bar{c}(s) = c(s^{-1})$. We have a natural isomorphism $\gamma : H_{1,\bar{c}}(G,\mathfrak{h}^*)^{\mathrm{op}} \to H_{1,c}(G,\mathfrak{h})$, acting trivially on $\mathfrak{h}$ and $\mathfrak{h}^*$, and sending $g \in G$ to $g^{-1}$. Thus, if $M$ is an $H_{1,c}(G,\mathfrak{h})$-module, then the full dual space $M^*$ is an $H_{1,\bar{c}}(G,\mathfrak{h}^*)$-module. If $M \in \mathcal{O}_c(G,\mathfrak{h})_0$, then we can define $M^\dagger$, which is the $h$-finite part of $M^*$.

Proposition 3.32. $M^\dagger$ belongs to $\mathcal{O}_{\bar{c}}(G,\mathfrak{h}^*)_0$.

Proof. Clearly, if $L$ is irreducible, then so is $L^\dagger$. Then $L^\dagger$ is generated by its lowest $h$-eigenspace over $H_{1,\bar{c}}(G,\mathfrak{h}^*)$, hence over $S\mathfrak{h}$. Thus, $L^\dagger \in \mathcal{O}_{\bar{c}}(G,\mathfrak{h}^*)_0$. Now, let $M \in \mathcal{O}_c(G,\mathfrak{h})_0$ be any object. Since $M$ has finite length, so does $M^\dagger$. Moreover, $M^\dagger$ has a finite filtration with successive quotients of the form $L^\dagger$, where $L \in \mathcal{O}_c(G,\mathfrak{h})_0$ is irreducible. This implies the required statement, since $\mathcal{O}_{\bar{c}}(G,\mathfrak{h}^*)_0$ is closed under taking extensions. $\square$

Clearly, $M^{\dagger\dagger} = M$. Thus, $M \mapsto M^\dagger$ is an equivalence of categories $\mathcal{O}_c(G,\mathfrak{h})_0 \to \mathcal{O}_{\bar{c}}(G,\mathfrak{h}^*)_0^{\mathrm{op}}$.
3.12. The contravariant form. Let $\tau$ be an irreducible representation of $G$. By Proposition 3.24, we have a unique homomorphism $\phi : M_c(G,\mathfrak{h},\tau) \to M_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger$ which is the identity in the lowest $h$-eigenspace. Thus, we have a pairing
$$\beta_c : M_c(G,\mathfrak{h},\tau) \times M_{\bar{c}}(G,\mathfrak{h}^*,\tau^*) \to \mathbb{C},$$
which is called the contravariant form.

Remark 3.33. If $G = W$ is a real reflection group, then $\mathfrak{h} \cong \mathfrak{h}^*$, $\bar{c} = c$, and $\tau \cong \tau^*$ via a symmetric form. So for real reflection groups, $\beta_c$ is a symmetric form on $M_c(\tau)$.

Proposition 3.34. The maximal proper submodule $J_c(\tau)$ is the kernel of $\phi$ (or, equivalently, of the contravariant form $\beta_c$).

Proof. Let $K$ be the kernel of the contravariant form. It suffices to show that $M_c(\tau)/K$ is irreducible. Consider the square formed by $\phi : M_c(G,\mathfrak{h},\tau) \to M_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger$, the projection $M_c(G,\mathfrak{h},\tau) \to L_c(G,\mathfrak{h},\tau)$, the inclusion $L_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger \hookrightarrow M_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger$, and maps
$$\xi : M_c(G,\mathfrak{h},\tau) \to L_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger, \qquad \eta : L_c(G,\mathfrak{h},\tau) \xrightarrow{\sim} L_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger.$$
Indeed, a nonzero map $\xi$ exists by Proposition 3.24, and it factors through $L_c(G,\mathfrak{h},\tau)$, with $\eta$ being an isomorphism, since $L_{\bar{c}}(G,\mathfrak{h}^*,\tau^*)^\dagger$ is irreducible. Now, by Proposition 3.24 (uniqueness of $\phi$), the diagram must commute up to scaling, which implies the statement. $\square$
Proposition 3.35. Assume that $h_c(\tau) - h_c(\tau')$ is never a positive integer for any $\tau, \tau' \in \mathrm{Irrep}\,G$. Then $\mathcal{O}_c(G,\mathfrak{h})_0$ is semisimple, with simple objects $M_c(\tau)$.

Proof. It is clear that in this situation, all $M_c(\tau)$ are simple. Now consider $\mathrm{Ext}^1(M_c(\tau), M_c(\tau'))$. If $h_c(\tau) - h_c(\tau') \notin \mathbb{Z}$, it is clearly $0$. Otherwise, $h_c(\tau) = h_c(\tau')$, and again $\mathrm{Ext}^1(M_c(\tau), M_c(\tau')) = 0$, since for any extension
$$0 \to M_c(\tau') \to N \to M_c(\tau) \to 0,$$
by Proposition 3.24 we have a splitting $M_c(\tau) \to N$. $\square$

Remark 3.36. In fact, our argument shows that if $\mathrm{Ext}^1(M_c(\tau), M_c(\tau')) \ne 0$, then $h_c(\tau) - h_c(\tau') \in \mathbb{N}$.

3.13. The matrix of multiplicities. For $\tau, \sigma \in \mathrm{Irrep}\,G$, write $\tau < \sigma$ if $\mathrm{Re}\,h_c(\sigma) - \mathrm{Re}\,h_c(\tau) \in \mathbb{N}$.

Proposition 3.37. There exists a matrix of integers $N = (n_{\sigma,\tau})$, with $n_{\sigma,\tau} \ge 0$, such that $n_{\tau,\tau} = 1$, $n_{\sigma,\tau} = 0$ unless $\sigma < \tau$ or $\sigma = \tau$, and
$$M_c(\sigma) = \sum_\tau n_{\sigma,\tau}\, L_c(\tau) \in K_0(\mathcal{O}_c(G,\mathfrak{h})_0).$$

Proof. This follows from the Jordan-Hölder theorem and the fact that objects in $\mathcal{O}_c(G,\mathfrak{h})_0$ have finite length. $\square$
Corollary 3.38. Let $N^{-1} = (\bar{n}_{\tau,\sigma})$. Then $L_c(\tau) = \sum_\sigma \bar{n}_{\tau,\sigma}\, M_c(\sigma)$ in $K_0(\mathcal{O}_c(G,\mathfrak{h})_0)$.

Corollary 3.39. We have
$$\mathrm{ch}_{L_c(\tau)}(g,t) = \frac{\sum_\sigma \bar{n}_{\tau,\sigma}\, \chi_\sigma(g)\, t^{h_c(\sigma)}}{\det_{\mathfrak{h}^*}(1 - tg)}.$$

Both of the corollaries can be obtained from the above proposition easily.

One of the main problems in the representation theory of rational Cherednik algebras is the following.

Problem: Compute the multiplicities $n_{\sigma,\tau}$ or, equivalently, $\mathrm{ch}_{L_c(\tau)}$ for all $\tau$.

In general, this problem is open.

3.14. Example: the rank 1 case. Let $G = \mathbb{Z}/m\mathbb{Z}$ and let $\lambda$ be a primitive $m$-th root of $1$. Then the algebra $H_{1,c}(G,\mathfrak{h})$ is generated by $x, y, s$ with relations
$$[y, x] = 1 - 2\sum_{j=1}^{m-1} c_j s^j, \qquad sxs^{-1} = \lambda x, \qquad sys^{-1} = \lambda^{-1} y.$$

Consider the one-dimensional space $\mathbb{C}$, and let $y$ act by $0$ and $g \in G$ act by $1$. We have $M_c(\mathbb{C}) = \mathbb{C}[x]$. The contravariant form $\beta_{c,\mathbb{C}}$ on $M_c(\mathbb{C})$ is determined by
$$\beta_{c,\mathbb{C}}(x^n, x^n) = a_n; \qquad \beta_{c,\mathbb{C}}(x^n, x^{n'}) = 0, \quad n \ne n'.$$
Recall that $\beta_{c,\mathbb{C}}$ satisfies $\beta_{c,\mathbb{C}}(x^n, x^n) = \beta_{c,\mathbb{C}}(x^{n-1}, y x^n)$, which gives
$$a_n = a_{n-1}(n - b_n),$$
where the $b_n$ are new parameters:
$$b_n := 2\sum_{j=1}^{m-1} c_j\, \frac{1 - \lambda^{jn}}{1 - \lambda^j} \qquad (b_0 = 0,\ b_{n+m} = b_n).$$
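To make the recursion concrete, here is the case $m = 2$ worked out (my own computation; it is the calculation implicit in Example 3.42 below):

```latex
\text{For } m = 2:\quad \lambda = -1,\ c := c_1, \qquad
b_n = 2c\,\frac{1-(-1)^n}{1-(-1)} = c\bigl(1-(-1)^n\bigr)
    = \begin{cases} 2c, & n \text{ odd},\\ 0, & n \text{ even}. \end{cases}
```

Hence $a_n = a_{n-1}(n - 2c)$ for odd $n$ and $a_n = a_{n-1}\, n$ for even $n$, so some $a_n$ vanishes iff $2c$ is an odd positive integer; the smallest such $n$ is $r = 2c$, which gives $\dim L_c(\mathbb{C}) = 2c$.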
Thus we obtain the following proposition.

Proposition 3.40.
(i) $M_c(\mathbb{C})$ is irreducible if and only if $n - b_n \ne 0$ for any $n \ge 1$.
(ii) Assume that $r$ is the smallest positive integer such that $r = b_r$. Then $L_c(\mathbb{C})$ has dimension $r$ (which can be any positive integer not divisible by $m$), with basis $1, x, \dots, x^{r-1}$.

Remark 3.41. According to Remark 3.31, this proposition in fact describes all the irreducible lowest weight modules.

Example 3.42. Consider the case $m = 2$. Then $M_c(\mathbb{C})$ is irreducible unless $c \in 1/2 + \mathbb{Z}_{\ge 0}$. If $c = (2n+1)/2 \in 1/2 + \mathbb{Z}$, $n \ge 0$, then $L_c(\mathbb{C})$ has dimension $2n+1$. A similar answer is obtained for lowest weight $\mathbb{C}_-$, replacing $c$ by $-c$.

3.15. The Frobenius property. Let $A$ be a $\mathbb{Z}_+$-graded commutative algebra. The algebra $A$ is called Frobenius if the top degree $A[d]$ of $A$ is $1$-dimensional, and the multiplication map $A[m] \times A[d-m] \to A[d]$ is a nondegenerate pairing for any $0 \le m \le d$. In particular, the Hilbert polynomial of a Frobenius algebra $A$ is palindromic.

Now, let us go back to considering modules over the rational Cherednik algebra $H_{1,c}$.
Any submodule $J$ of the polynomial representation $M_c(\mathbb{C}) = M_c = \mathbb{C}[\mathfrak{h}]$ is an ideal in $\mathbb{C}[\mathfrak{h}]$, so the quotient $A = M_c/J$ is a $\mathbb{Z}_+$-graded commutative algebra.

Now suppose that $G$ preserves an inner product in $\mathfrak{h}$, i.e., $G \subseteq O(\mathfrak{h})$.

Theorem 3.43. If $A = M_c(\mathbb{C})/J$ is finite dimensional, then $A$ is irreducible ($A = L_c(\mathbb{C})$) $\iff$ $A$ is a Frobenius algebra.

Proof. 1) Suppose $A$ is an irreducible $H_{1,c}$-module, i.e., $A = L_c(\mathbb{C})$. By Proposition 3.19, $A$ is naturally a finite dimensional $\mathfrak{sl}_2$-module (in particular, it integrates to the group $SL_2(\mathbb{C})$). Hence, by the representation theory of $\mathfrak{sl}_2$, the top degree of $A$ is $1$-dimensional. Let $\phi \in A^*$ denote a nonzero linear function on the top component. Let $\beta_c$ be the contravariant form on $M_c(\mathbb{C})$. Consider the form
$$E(v_1, v_2) := \beta_c(v_1, g v_2), \qquad \text{where } g = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \in SL_2(\mathbb{C}).$$
Then $E(x v_1, v_2) = E(v_1, x v_2)$. So for any $p, q \in M_c(\mathbb{C}) = \mathbb{C}[\mathfrak{h}]$, $E(p,q) = \phi(p(x)q(x))$ (for a suitable normalization of $\phi$). Since $E$ is a nondegenerate form, $A$ is a Frobenius algebra.

2) Suppose $A$ is Frobenius. Then the highest component is $1$-dimensional, and $E : A \otimes A \to \mathbb{C}$, $E(a,b) = \phi(ab)$, is nondegenerate. We have $E(xa, b) = E(a, xb)$. So set $\beta(a,b) = E(a, g^{-1}b)$.
Then $\beta$ satisfies $\beta(a, x_i b) = \beta(y_i a, b)$. Thus, for all $p, q \in \mathbb{C}[\mathfrak{h}]$, $\beta(p(x), q(x)) = \beta(q(y)p(x), 1)$. So $\beta = \beta_c$ up to scaling. Thus, $\beta_c$ is nondegenerate and $A$ is irreducible. $\square$

Remark 3.44. If $G \not\subseteq O(\mathfrak{h})$, this theorem is false, in general.

Now consider the Frobenius property of $L_c(\mathbb{C})$ for any $G \subset GL(\mathfrak{h})$.

Theorem 3.45. Let $U \subset M_c(\mathbb{C}) = \mathbb{C}[\mathfrak{h}]$ be a $G$-subrepresentation of dimension $\ell = \dim \mathfrak{h}$, sitting in degree $r$ and consisting of singular vectors. Let $J = \langle U \rangle$. Assume that $A = M_c/J$ is finite dimensional. Then
(i) $A$ is Frobenius.
(ii) $A$ admits a BGG type resolution:
$$A \leftarrow M_c(\mathbb{C}) \leftarrow M_c(U) \leftarrow M_c(\wedge^2 U) \leftarrow \cdots \leftarrow M_c(\wedge^\ell U) \leftarrow 0.$$
(iii) The character of $A$ is given by the formula
$$\chi_A(g,t) = t^{\frac{\ell}{2} - \sum_{s \in S} \frac{2c_s}{1-\lambda_s}}\; \frac{\det_U(1 - g t^r)}{\det_{\mathfrak{h}^*}(1 - g t)}.$$
In particular, $\dim A = r^\ell$.
(iv) If $G$ preserves an inner product, then $A$ is irreducible.

Proof. (i) Since $\mathrm{Spec}\,A$ is a complete intersection, $A$ is Frobenius.

(ii) We will use the following theorem.

Theorem 3.46 (Serre). Let $f_1, \dots, f_n \in \mathbb{C}[t_1, \dots, t_n]$ be homogeneous polynomials, and assume that $\mathbb{C}[t_1, \dots, t_n]$ is a finitely generated module over $\mathbb{C}[f_1, \dots, f_n]$. Then it is a free module.
Consider $SU \subset S\mathfrak{h}^*$. Then $S\mathfrak{h}^*$ is a finitely generated $SU$-module (as $S\mathfrak{h}^*/\langle U \rangle$ is finite dimensional). By Serre's theorem, we know that $S\mathfrak{h}^*$ is a free $SU$-module; the rank of this module is $r^\ell$. Consider the Koszul complex attached to this module. Since the module is free, the Koszul complex is exact (i.e., it is a resolution of the zero fiber). At the level of $SU$-modules, it looks exactly like the resolution in (ii). So we only need to show that the maps of the resolution are morphisms over $H_{1,c}$. This is shown by induction. Namely, let $\delta_j : M_c(\wedge^j U) \to M_c(\wedge^{j-1} U)$ be the corresponding differentials (so that $\delta_0 : M_c(\mathbb{C}) \to A$ is the projection). Then $\delta_0$ is an $H_{1,c}$-morphism, which is the base of the induction. If $\delta_j$ is an $H_{1,c}$-morphism, then the kernel of $\delta_j$ is a submodule $K_j \subset M_c(\wedge^j U)$. Its lowest degree part is $\wedge^{j+1} U$, sitting in degree $(j+1)r$ and consisting of singular vectors. Now, $\delta_{j+1}$ is a morphism over $S\mathfrak{h}^*$ which maps $\wedge^{j+1} U$ identically to itself. By Proposition 3.24, there is only one such morphism, and it must be an $H_{1,c}$-morphism. This completes the induction step.

(iii) follows from (ii) by the Euler-Poincaré formula.
(iv) follows from Theorem 3.43. $\square$

3.16. Representations of $H_{1,c}$ of type A. Let us now apply the above results to the case of type A. We will follow the paper [CE].

Let $G = S_n$, and let $\mathfrak{h}$ be its reflection representation. In this case the function $c$ reduces to one number. We will denote the rational Cherednik algebra $H_{1,c}(S_n)$ by $H_c(n)$. It is generated by $x_1, \dots, x_n$, $y_1, \dots, y_n$ and $\mathbb{C}S_n$ with the following relations:
$$\sum_i x_i = 0, \qquad \sum_i y_i = 0,$$
$$[y_i, x_j] = -\frac{1}{n} + c\, s_{ij},\ i \ne j, \qquad [y_i, x_i] = \frac{n-1}{n} - c \sum_{j \ne i} s_{ij}.$$

The polynomial representation $M_c(\mathbb{C})$ of this algebra is the space $\mathbb{C}[x_1, \dots, x_n]^T$ of polynomials in $x_1, \dots, x_n$ which are invariant under simultaneous translation $T : x_i \mapsto x_i + a$. In other words, it is the space of regular functions on $\mathfrak{h} = \mathbb{C}^n/\Delta$, where $\Delta$ is the diagonal.

Proposition 3.47 (C. Dunkl). Let $r$ be a positive integer not divisible by $n$, and let $c = r/n$. Then $M_c(\mathbb{C})$ contains a copy of the reflection representation $\mathfrak{h}$ of $S_n$ which consists of singular vectors (i.e., those killed by $y \in \mathfrak{h}$). This copy sits in degree $r$ and is spanned by the functions
$$f_i(x_1, \dots, x_n) = \mathrm{Res}_\infty\, \bigl[(z - x_1) \cdots (z - x_n)\bigr]^{\frac{r}{n}}\, \frac{dz}{z - x_i}$$
(the symbol $\mathrm{Res}_\infty$ denotes the residue at infinity).
Remark 3.48. The space spanned by the $f_i$ is $(n-1)$-dimensional, since $\sum_i f_i = 0$ (this sum is the residue of an exact differential).

Proof. This proposition can be proved by a straightforward computation. The functions $f_i$ are a special case of Jack polynomials. $\square$

Let $I_c$ be the submodule of $M_c(\mathbb{C})$ generated by the $f_i$. Consider the $H_c(n)$-module $V_c = M_c(\mathbb{C})/I_c$, and regard it as a $\mathbb{C}[\mathfrak{h}]$-module. We have the following results.

Theorem 3.49. Let $d = (r, n)$ denote the greatest common divisor of $r$ and $n$. Then the (set-theoretical) support of $V_c$ is the union of the $S_n$-translates of the subspaces of $\mathbb{C}^n/\Delta$ defined by the equations
$$x_1 = x_2 = \cdots = x_{\frac{n}{d}}; \quad x_{\frac{n}{d}+1} = \cdots = x_{\frac{2n}{d}}; \quad \dots \quad x_{(d-1)\frac{n}{d}+1} = \cdots = x_n.$$
In particular, the Gelfand-Kirillov dimension of $V_c$ is $d - 1$.

Corollary 3.50 ([BEG]). If $d = 1$ then the module $V_c$ is finite dimensional, irreducible, admits a BGG type resolution, and its character is
$$\chi_{V_c}(g,t) = t^{(1-r)(n-1)/2}\, \frac{\det|_{\mathfrak{h}}(1 - g t^r)}{\det|_{\mathfrak{h}}(1 - g t)}.$$
Proof. For $d = 1$, Theorem 3.49 says that the support of $M_c(\mathbb{C})/I_c$ is $\{0\}$. This implies that $M_c(\mathbb{C})/I_c$ is finite dimensional. The rest follows from Theorem 3.45. $\square$

Proof of Theorem 3.49. The support of $V_c$ is the zero set of $I_c$, i.e., the common zero set of the $f_i$. Fix $x_1, \dots, x_n \in \mathbb{C}$. Then $f_i(x_1, \dots, x_n) = 0$ for all $i$ iff $\sum_i \lambda_i f_i = 0$ for all $\lambda_i$, i.e.,
$$\mathrm{Res}_\infty\, \Bigl[\prod_{j=1}^n (z - x_j)\Bigr]^{\frac{r}{n}} \Bigl(\sum_{i=1}^n \frac{\lambda_i}{z - x_i}\Bigr) dz = 0.$$
Assume that $x_1, \dots, x_n$ take distinct values $y_1, \dots, y_p$ with positive multiplicities $m_1, \dots, m_p$. The previous equation implies that the point $(x_1, \dots, x_n)$ is in the zero set iff
$$\mathrm{Res}_\infty\, \prod_{j=1}^p (z - y_j)^{m_j \frac{r}{n} - 1} \Bigl(\sum_{i=1}^p \nu_i\, (z - y_1) \cdots \widehat{(z - y_i)} \cdots (z - y_p)\Bigr) dz = 0 \quad \forall \nu_i$$
(the hat denotes an omitted factor). Since the $\nu_i$ are arbitrary, this is equivalent to the conditions
$$\mathrm{Res}_\infty\, \prod_{j=1}^p (z - y_j)^{m_j \frac{r}{n} - 1}\, z^i\, dz = 0, \qquad i = 0, \dots, p-1.$$

We will now need the following lemma.

Lemma 3.51. Let $a(z) = \prod_{j=1}^p (z - y_j)^{\mu_j}$, where $\mu_j \in \mathbb{C}$, $\sum_j \mu_j \in \mathbb{Z}$ and $\sum_j \mu_j > -p$. Suppose
$$\mathrm{Res}_\infty\, a(z)\, z^i\, dz = 0, \qquad i = 0, 1, \dots, p-2.$$
Then $a(z)$ is a polynomial.
Proof of the lemma. Let $g(z)$ be a polynomial. Then
$$0 = \mathrm{Res}_\infty\, d\bigl(g(z)\, a(z)\bigr) = \mathrm{Res}_\infty \bigl(g'(z) a(z) + a'(z) g(z)\bigr) dz,$$
and hence
$$\mathrm{Res}_\infty \Bigl(g'(z) + \sum_j \frac{\mu_j}{z - y_j}\, g(z)\Bigr) a(z)\, dz = 0.$$
Let $g(z) = z^l \prod_j (z - y_j)$. Then $g'(z) + \sum_j \frac{\mu_j}{z - y_j}\, g(z)$ is a polynomial of degree $l + p - 1$ with highest coefficient $l + p + \sum_j \mu_j \ne 0$ (as $\sum_j \mu_j > -p$). This means that for every $l \ge 0$, $\mathrm{Res}_\infty\, z^{l+p-1} a(z)\, dz$ is a linear combination of residues of $z^q a(z)\, dz$ with $q < l + p - 1$. By the assumption of the lemma, this implies by induction in $l$ that all such residues are $0$, and hence $a$ is a polynomial. $\square$

In our case, $\sum_j (m_j r/n - 1) = r - p$ (since $\sum_j m_j = n$), and the conditions of the lemma are satisfied. Hence $(x_1, \dots, x_n)$ is in the zero set of $I_c$ iff $\prod_{j=1}^p (z - y_j)^{m_j \frac{r}{n} - 1}$ is a polynomial. This is equivalent to saying that all $m_j$ are divisible by $n/d$. We have proved that $(x_1, \dots, x_n)$ is in the zero set of $I_c$ if and only if $(z - x_1) \cdots (z - x_n)$ is the $(n/d)$-th power of a polynomial of degree $d$. This implies the theorem. $\square$
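As a consistency check (my own verification, not part of the original text), one can compare the type A answer for $n = 2$ with the rank 1 computation of Section 3.14:

```latex
% n = 2: the reflection representation is 1-dimensional, and d = (r,2) = 1
% forces r to be odd.  Corollary 3.50 (via Theorem 3.45(iii)) gives
\dim V_c = r^{\,n-1} = r, \qquad c = r/2 \in \tfrac{1}{2} + \mathbb{Z}_{\ge 0},
% which agrees with Example 3.42: for c = (2k+1)/2 the module L_c(\mathbb{C})
% has dimension 2k+1 = r.
```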
Remark 3.52. For $c > 0$, the above representations are the only irreducible finite dimensional representations of $H_{1,c}(S_n)$. Namely, it is proved in [BEG] that the only finite dimensional representations of $H_{1,c}(S_n)$ are multiples of $L_c(\mathbb{C})$ for $c = r/n$, and of $L_c(\mathbb{C}_-)$ (where $\mathbb{C}_-$ is the sign representation) for $c = -r/n$, where $r$ is a positive integer relatively prime to $n$.

3.17. Notes. The discussion of the definition of rational Cherednik algebras and their basic properties follows Section 7 of [E4]. The discussion of the category $\mathcal{O}$ for rational Cherednik algebras follows Section 11 of [E4]. The material in Sections 3.14-3.16 is borrowed from [CE].

MIT OpenCourseWare, http://ocw.mit.edu
18.735 Double Affine Hecke Algebras in Representation Theory, Combinatorics, Geometry, and Mathematical Physics, Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
6.864: Lecture 10 (October 13th, 2005)
Tagging and History-Based Models

Overview
• The Tagging Problem
• Hidden Markov Model (HMM) taggers
• Log-linear taggers
• Log-linear models for parsing and other problems

Tagging Problems
• Mapping strings to tagged sequences:
a b e e a f h j → a/C b/D e/C e/C a/D f/C h/D j/C

Part-of-Speech Tagging
INPUT: Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.
OUTPUT: Profits/N soared/V at/P Boeing/N Co./N ,/, easily/ADV topping/V forecasts/N on/P Wall/N Street/N ,/, as/P their/POSS CEO/N Alan/N Mulally/N announced/V first/ADJ quarter/N results/N ./.

N = Noun
V = Verb
P = Preposition
ADV = Adverb
ADJ = Adjective
. . .

Information Extraction: Named Entity Recognition
INPUT: Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.
OUTPUT: Profits soared at [Company Boeing Co.], easily topping forecasts on [Location Wall Street], as their CEO [Person Alan Mulally] announced first quarter results.

Named Entity Extraction as Tagging
INPUT: Profits soared at Boeing Co., easily topping forecasts on Wall Street, as their CEO Alan Mulally announced first quarter results.
OUTPUT: Profits/NA soared/NA at/NA Boeing/SC Co./CC ,/NA easily/NA topping/NA forecasts/NA on/NA Wall/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP Mulally/CP announced/NA first/NA quarter/NA results/NA ./NA
https://ocw.mit.edu/courses/6-864-advanced-natural-language-processing-fall-2005/0abc5d7ab6458ec55a14c9f7c300438b_lec10.pdf
NA = No entity
SC = Start Company
CC = Continue Company
SL = Start Location
CL = Continue Location
. . .

Extracting Glossary Entries from the Web
Input: [Images removed for copyright reasons. Set of webpages from The Weather Channel (http://www.weather.com), including a multi-entry 'Weather Glossary' page.]
Output: [Text removed for copyright reasons. The glossary entry for 'St. Elmo's Fire.']

Our Goal
Training set:
1 Pierre/NNP Vinken/NNP ,/, 61/CD years/NNS old/JJ ,/, will/MD join/VB the/DT board/NN as/IN a/DT nonexecutive/JJ director/NN Nov./NNP 29/CD ./.
2 Mr./NNP Vinken/NNP is/VBZ chairman/NN of/IN Elsevier/NNP N.V./NNP ,/, the/DT Dutch/NNP publishing/VBG group/NN ./.
3 Rudolph/NNP Agnew/NNP ,/, 55/CD years/NNS old/JJ and/CC chairman/NN of/IN Consolidated/NNP Gold/NNP Fields/NNP PLC/NNP ,/, was/VBD named/VBN a/DT nonexecutive/JJ director/NN of/IN this/DT British/JJ industrial/JJ conglomerate/NN ./.
. . .
38,219 It/PRP is/VBZ also/RB pulling/VBG 20/CD people/NNS out/IN of/IN Puerto/NNP Rico/NNP ,/, who/WP were/VBD helping/VBG Huricane/NNP Hugo/NNP victims/NNS ,/, and/CC sending/VBG them/PRP to/TO San/NNP Francisco/NNP instead/RB ./.

• From the training set, induce a function or "program" that maps new sentences to their tag sequences.
Our Goal (continued)
• A test data sentence:
Influential members of the House Ways and Means Committee introduced legislation that would restrict how the new savings-and-loan bailout agency can raise capital , creating another potential obstacle to the government 's sale of sick thrifts .
• It should be mapped to the underlying tags:
Influential/JJ members/NNS of/IN the/DT House/NNP Ways/NNP and/CC Means/NNP Committee/NNP introduced/VBD legislation/NN that/WDT would/MD restrict/VB how/WRB the/DT new/JJ savings-and-loan/NN bailout/NN agency/NN can/MD raise/VB capital/NN ,/, creating/VBG another/DT potential/JJ obstacle/NN to/TO the/DT government/NN 's/POS sale/NN of/IN sick/JJ thrifts/NNS ./.
• Our goal is to minimize the number of tagging errors on sentences not seen in the training set.

Two Types of Constraints
Influential/JJ members/NNS of/IN the/DT House/NNP Ways/NNP and/CC Means/NNP Committee/NNP introduced/VBD legislation/NN that/WDT would/MD restrict/VB how/WRB the/DT new/JJ savings-and-loan/NN bailout/NN agency/NN can/MD raise/VB capital/NN ./.
• "Local": e.g., can is more likely to be a modal verb MD rather than a noun NN
• "Contextual": e.g., a noun is much more likely than a verb to follow a determiner
• Sometimes these preferences are in conflict: The trash can is in the garage

A Naive Approach
• Use a machine learning method to build a "classifier" that maps each word individually to its tag
• A problem: this does not take contextual constraints into account
Hidden Markov Models
• We have an input sentence S = w_1, w_2, ..., w_n (w_i is the i'th word in the sentence)
• We have a tag sequence T = t_1, t_2, ..., t_n (t_i is the i'th tag in the sentence)
• We'll use an HMM to define P(t_1, t_2, ..., t_n, w_1, w_2, ..., w_n) for any sentence S and tag sequence T of the same length
• Then the most likely tag sequence for S is T* = argmax_T P(T, S)

How to model P(T, S)? A Trigram HMM Tagger:

P(T, S) = P(END | t_1 ... t_n, w_1 ... w_n) × Π_{j=1..n} [ P(t_j | w_1 ... w_{j−1}, t_1 ... t_{j−1}) × P(w_j | w_1 ... w_{j−1}, t_1 ... t_j) ]   (chain rule)
        = P(END | t_{n−1}, t_n) × Π_{j=1..n} [ P(t_j | t_{j−2}, t_{j−1}) × P(w_j | t_j) ]   (independence assumptions)

• END is a special tag that terminates the sequence
• We take t_0 = t_{−1} = START
• 1st assumption: each tag only depends on the previous two tags: P(t_j | t_{j−2}, t_{j−1})
• 2nd assumption: each word only depends on the underlying tag: P(w_j | t_j)

An Example
• S = the boy laughed
• T = DT NN VBD

P(T, S) = P(DT | START, START) × P(NN | START, DT) × P(VBD | DT, NN) × P(END | NN, VBD) × P(the | DT) × P(boy | NN) × P(laughed | VBD)
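The product above is mechanical to compute once the two probability tables are given. A minimal sketch (the probability values below are invented for illustration; they are not estimates from any corpus):

```python
from math import prod

def hmm_joint_prob(words, tags, trans, emit):
    """P(T, S) for a trigram HMM.  trans[(u, v, w)] = P(w | u, v) gives the
    tag-transition probabilities, emit[(t, word)] = P(word | t) the emission
    probabilities.  START pads the left context; END terminates the sequence."""
    padded = ["START", "START"] + tags + ["END"]
    # One transition factor per tag plus the final END transition.
    p_trans = prod(trans[(padded[j], padded[j + 1], padded[j + 2])]
                   for j in range(len(tags) + 1))
    # One emission factor per word.
    p_emit = prod(emit[(t, w)] for t, w in zip(tags, words))
    return p_trans * p_emit

# Toy tables for "the boy laughed" / DT NN VBD (made-up numbers):
trans = {("START", "START", "DT"): 0.8, ("START", "DT", "NN"): 0.9,
         ("DT", "NN", "VBD"): 0.5, ("NN", "VBD", "END"): 0.6}
emit = {("DT", "the"): 0.7, ("NN", "boy"): 0.01, ("VBD", "laughed"): 0.005}

p = hmm_joint_prob(["the", "boy", "laughed"], ["DT", "NN", "VBD"], trans, emit)
```

The result is exactly the seven-factor product written out above the code.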
Why the Name?

P(T, S) = [ P(END | t_{n−1}, t_n) × Π_{j=1..n} P(t_j | t_{j−2}, t_{j−1}) ] × [ Π_{j=1..n} P(w_j | t_j) ]
            (hidden Markov chain)                                            (the w_j's are observed)

How to model P(T, S)?
Hispaniola/NNP quickly/RB became/VB an/DT important/JJ base/Vt from which Spain expanded its empire into the rest of the Western Hemisphere .
"Score" for tag Vt: P(Vt | DT, JJ) × P(base | Vt)

Smoothed Estimation

P(Vt | DT, JJ) = λ_1 × Count(DT, JJ, Vt)/Count(DT, JJ)
               + λ_2 × Count(JJ, Vt)/Count(JJ)
               + λ_3 × Count(Vt)/Count()

P(base | Vt) = Count(Vt, base)/Count(Vt)

Dealing with Low-Frequency Words
• Step 1: Split the vocabulary into two sets:
  Frequent words = words occurring ≥ 5 times in training
  Low frequency words = all other words
• Step 2: Map low frequency words into a small, finite set, depending on prefixes, suffixes etc.

Dealing with Low-Frequency Words: An Example
[Bikel et al. 1999] "An Algorithm that Learns What's in a Name"

Word class | Example | Intuition
twoDigitNum | 90 | Two digit year
fourDigitNum | 1990 | Four digit year
containsDigitAndAlpha | A8956-67 | Product code
containsDigitAndDash | 09-96 | Date
containsDigitAndSlash | 11/9/89 | Date
containsDigitAndComma | 23,000.00 | Monetary amount
containsDigitAndPeriod | 1.00 | Monetary amount, percentage
othernum | 456789 | Other number
allCaps | BBN | Organization
capPeriod | M. | Person name initial
firstWord | first word of sentence | no useful capitalization information
initCap | Sally | Capitalized word
lowercase | can | Uncapitalized word
other | , | Punctuation marks, all other words
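A sketch of such a mapping in code. The class names follow the table above, but the exact pattern definitions are my own guesses at the intended rules, not taken from Bikel et al.:

```python
import re

def word_class(word, is_first_word=False):
    """Map a low-frequency word to a Bikel-et-al.-style word class."""
    if re.fullmatch(r"\d{2}", word):
        return "twoDigitNum"
    if re.fullmatch(r"\d{4}", word):
        return "fourDigitNum"
    if re.search(r"\d", word):                 # word contains a digit
        if re.search(r"[A-Za-z]", word):
            return "containsDigitAndAlpha"
        if "-" in word:
            return "containsDigitAndDash"
        if "/" in word:
            return "containsDigitAndSlash"
        if "," in word:
            return "containsDigitAndComma"
        if "." in word:
            return "containsDigitAndPeriod"
        return "othernum"
    if word.isalpha() and word.isupper():
        return "allCaps"
    if re.fullmatch(r"[A-Z]\.", word):
        return "capPeriod"
    if is_first_word:
        return "firstWord"
    if word[:1].isupper():
        return "initCap"
    if word.islower():
        return "lowercase"
    return "other"
```

Note the ordering of the digit tests: "23,000.00" contains both "," and ".", so the comma test must come first for the table's classification to come out.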
https://ocw.mit.edu/courses/6-864-advanced-natural-language-processing-fall-2005/0abc5d7ab6458ec55a14c9f7c300438b_lec10.pdf
other 90 1990 A8956-67 09-96 11/9/89 23,000.00 1.00 456789 BBN M. first word of sentence Sally can , Two digit year Four digit year Product code Date Date Monetary amount Monetary amount,percentage Other number Organization Person name initial no useful capitalization information Capitalized word Uncapitalized word Punctuation marks, all other words Dealing with Low-Frequency Words: An Example Profits/NA soared/NA at/NA Boeing/SC Co./CC ,/NA easily/NA topping/NA forecasts/NA on/NA Wall/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP Mulally/CP announced/NA first/NA quarter/NA results/NA ./NA � firstword/NA soared/NA at/NA initCap/SC Co./CC ,/NA easily/NA lowercase/NA forecasts/NA on/NA initCap/SL Street/CL ,/NA as/NA their/NA CEO/NA Alan/SP initCap/CP announced/NA first/NA quarter/NA results/NA ./NA NA SC CC SL CL . . . = No entity = Start Company = Continue Company = Start Location = Continue Location The Viterbi Algorithm • Question: how do we calculate the following?: T � = argmaxT log P (T, S) • Define n to be the length of the sentence • Define a dynamic programming table �[i, t−2, t−1] = maximum log probability of a tag sequence ending in tags t−2, t−1 at position i • Our goal is to calculate maxt−2,t−1�T �[n, t−2, t−1] The Viterbi Algorithm: Recursive Definitions • Base case: �[0, �, �] = �[0, t−2, t
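The Step 2 mapping from the table above can be sketched as a cascade of pattern tests. This is a simplified sketch of the Bikel et al. style classes; the exact tests and their order in the original system may differ.

```python
import re

def word_class(word, is_first_word=False):
    """Map a low-frequency word to a Bikel et al. (1999) style word class
    (simplified sketch of the table above)."""
    if re.fullmatch(r"\d\d", word):     return "twoDigitNum"
    if re.fullmatch(r"\d{4}", word):    return "fourDigitNum"
    if any(c.isdigit() for c in word):
        if any(c.isalpha() for c in word): return "containsDigitAndAlpha"
        if "-" in word:  return "containsDigitAndDash"
        if "/" in word:  return "containsDigitAndSlash"
        if "," in word:  return "containsDigitAndComma"
        if "." in word:  return "containsDigitAndPeriod"
        return "othernum"
    if re.fullmatch(r"[A-Z]\.", word):  return "capPeriod"
    if word.isupper() and word.isalpha(): return "allCaps"
    if is_first_word:                   return "firstWord"
    if word[:1].isupper():              return "initCap"
    if word.islower():                  return "lowercase"
    return "other"                      # punctuation, everything else
```

Note the `capPeriod` test must precede the all-caps test, since "M." also satisfies `isupper()`.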
The Viterbi Algorithm: Recursive Definitions

• Base case:
  π[0, *, *] = log 1 = 0
  π[0, t−2, t−1] = log 0 = −∞ for all other t−2, t−1
  where * is a special tag padding the beginning of the sentence.
• Recursive case: for i = 1 . . . n, for all t−2, t−1,
  π[i, t−2, t−1] = max_{t ∈ T ∪ {*}} { π[i − 1, t, t−2] + Score(S, i, t, t−2, t−1) }
  Backpointers allow us to recover the max probability sequence:
  BP[i, t−2, t−1] = argmax_{t ∈ T ∪ {*}} { π[i − 1, t, t−2] + Score(S, i, t, t−2, t−1) }
  where Score(S, i, t, t−2, t−1) = log P(t−1 | t, t−2) + log P(wi | t−1)

Complexity is O(nk³), where n = length of sentence, k is the number of possible tags.

The Viterbi Algorithm: Running Time

• O(n|T|³) time to calculate Score(S, i, t, t−2, t−1) for all i, t, t−2, t−1.
• n|T|² entries in π to be filled in.
• O(|T|) time to fill in one entry (assuming O(1) time to look up Score(S, i, t, t−2, t−1)).
• ⇒ O(n|T|³) time overall.

Pros and Cons

• Hidden Markov model taggers are very simple to train (compile counts from the training corpus)
• Perform relatively well (over 90% performance on named entities)
• Main difficulty: modeling P(word | tag) can be very difficult if "words" are complex
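The recursive definitions above translate almost line for line into code. The sketch below assumes a caller-supplied `score(i, t, t2, t1)` function standing in for log P(t−1 | t, t−2) + log P(wi | t−1); the tables and the toy scorer in the test are hypothetical.

```python
from itertools import product

def viterbi(words, tagset, score):
    """Trigram Viterbi decoder. score(i, t, t2, t1) should return
    log P(t1 | t, t2) + log P(words[i-1] | t1) for 1-based position i;
    '*' is the special tag padding the start of the sentence."""
    n = len(words)
    S = lambda k: ["*"] if k <= 0 else tagset   # allowed tags at position k
    pi = {(0, "*", "*"): 0.0}
    bp = {}
    for i in range(1, n + 1):
        for t2, t1 in product(S(i - 1), S(i)):
            best, arg = float("-inf"), None
            for t in S(i - 2):
                v = pi.get((i - 1, t, t2), float("-inf")) + score(i, t, t2, t1)
                if v > best:
                    best, arg = v, t
            pi[(i, t2, t1)], bp[(i, t2, t1)] = best, arg
    # Best final tag pair, then follow backpointers to recover the sequence.
    t2, t1 = max(product(S(n - 1), S(n)), key=lambda p: pi[(n, p[0], p[1])])
    seq = [t2, t1]
    for i in range(n, 2, -1):
        seq.insert(0, bp[(i, seq[0], seq[1])])
    return seq[1:] if n == 1 else seq
```

The triple loop over (t, t−2, t−1) at each of the n positions makes the O(n|T|³) running time visible directly in the code.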
Log-Linear Models

• We have an input sentence S = w1, w2, . . . , wn (wi is the i'th word in the sentence)
• We have a tag sequence T = t1, t2, . . . , tn (ti is the i'th tag in the sentence)
• We'll use a log-linear model to define P(t1, t2, . . . , tn | w1, w2, . . . , wn) for any sentence S and tag sequence T of the same length. (Note: contrast with the HMM, which defines P(t1, t2, . . . , tn, w1, w2, . . . , wn).)
• Then the most likely tag sequence for S is T* = argmax_T P(T | S)

How to model P(T | S)? A Trigram Log-Linear Tagger:

P(T | S) = Π_{j=1..n} P(tj | w1 . . . wn, t1 . . . tj−1)   (chain rule)
         = Π_{j=1..n} P(tj | tj−2, tj−1, w1, . . . , wn)   (independence assumptions)

• We take t0 = t−1 = START
• Assumption: each tag depends only on the previous two tags (and the words): P(tj | tj−2, tj−1, w1, . . . , wn)

An Example

Hispaniola/NNP quickly/RB became/VB an/DT important/JJ base/?? from which Spain expanded its empire into the rest of the Western Hemisphere .

• There are many possible tags in the position ??: Y = {NN, NNS, Vt, Vi, IN, DT, . . . }
• The input domain X is the set of all possible histories (or contexts)
• Need to learn a function from (history, tag) pairs to a probability P(tag | history)
Representation: Histories

• A history is a 4-tuple ⟨t−1, t−2, w[1:n], i⟩
• t−1, t−2 are the previous two tags
• w[1:n] are the n words in the input sentence
• i is the index of the word being tagged
• X is the set of all possible histories

Hispaniola/NNP quickly/RB became/VB an/DT important/JJ base/?? from which Spain expanded its empire into the rest of the Western Hemisphere .

• t−1, t−2 = JJ, DT
• w[1:n] = ⟨Hispaniola, quickly, became, . . . , Hemisphere, .⟩
• i = 6

Feature Vector Representations

• We have some input domain X and a finite label set Y. The aim is to provide a conditional probability P(y | x) for any x ∈ X and y ∈ Y.
• A feature is a function f : X × Y → R (often binary features, or indicator functions f : X × Y → {0, 1}).
• Say we have m features φk for k = 1 . . . m ⇒ a feature vector φ(x, y) ∈ R^m for any x ∈ X and y ∈ Y.

An Example (continued)

• X is the set of all possible histories of the form ⟨t−1, t−2, w[1:n], i⟩
• Y = {NN, NNS, Vt, Vi, IN, DT, . . . }
• We have m features φk : X × Y → R for k = 1 . . . m. For example:

  φ1(h, t) = 1 if current word wi is base and t = Vt, 0 otherwise
  φ2(h, t) = 1 if current word wi ends in ing and t = VBG, 0 otherwise
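The two indicator features can be sketched directly, with a history held as the 4-tuple ⟨t−1, t−2, w[1:n], i⟩ from the slides (words as a Python list, i 1-based). The function names `phi1`/`phi2` are just illustrative.

```python
# Indicator features over (history, tag) pairs.
def phi1(h, t):
    """1 if the current word w_i is 'base' and t = Vt, 0 otherwise."""
    t1, t2, words, i = h
    return 1 if words[i - 1] == "base" and t == "Vt" else 0

def phi2(h, t):
    """1 if the current word w_i ends in 'ing' and t = VBG, 0 otherwise."""
    t1, t2, words, i = h
    return 1 if words[i - 1].endswith("ing") and t == "VBG" else 0

# The Hispaniola example: previous tags JJ, DT; tagging position i = 6 ('base').
h = ("JJ", "DT", ["Hispaniola", "quickly", "became", "an", "important", "base"], 6)
```

Evaluating `phi1(h, "Vt")` gives 1 and `phi2(h, "Vt")` gives 0, matching the worked values shown below.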
  φ1(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, Vt) = 1
  φ2(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, Vt) = 0
  . . .

The Full Set of Features in [Ratnaparkhi, 96]

• Word/tag features for all word/tag pairs, e.g.,
  φ100(h, t) = 1 if current word wi is base and t = Vt, 0 otherwise
• Spelling features for all prefixes/suffixes of length ≤ 4, e.g.,
  φ101(h, t) = 1 if current word wi ends in ing and t = VBG, 0 otherwise
  φ102(h, t) = 1 if current word wi starts with pre and t = NN, 0 otherwise
• Contextual features, e.g.,
  φ103(h, t) = 1 if ⟨t−2, t−1, t⟩ = ⟨DT, JJ, Vt⟩, 0 otherwise
  φ104(h, t) = 1 if ⟨t−1, t⟩ = ⟨JJ, Vt⟩, 0 otherwise
  φ105(h, t) = 1 if ⟨t⟩ = ⟨Vt⟩, 0 otherwise
  φ106(h, t) = 1 if previous word wi−1 = the and t = Vt, 0 otherwise
  φ107(h, t) = 1 if next word wi+1 = the and t = Vt, 0 otherwise

The Final Result

• We can come up with practically any questions (features) regarding history/tag pairs.
• For a given history x ∈ X, each label in Y is mapped to a different feature vector:
  φ(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, Vt) = 1001011001001100110
  φ(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, JJ) = 0110010101011110010
  φ(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, NN) = 0001111101001100100
  φ(⟨JJ, DT, ⟨Hispaniola, . . .⟩, 6⟩, IN) = 0001011011000000010
  . . .

Log-Linear Models

• We have some input domain X and a finite label set Y. The aim is to provide a conditional probability P(y | x) for any x ∈ X and y ∈ Y.
• A feature is a function f : X × Y → R (often binary features, or indicator functions f : X × Y → {0, 1}).
• Say we have m features φk for k = 1 . . . m ⇒ a feature vector φ(x, y) ∈ R^m for any x ∈ X and y ∈ Y.
• We also have a parameter vector W ∈ R^m
• We define

  P(y | x, W) = e^{W·φ(x,y)} / Σ_{y′ ∈ Y} e^{W·φ(x,y′)}

Training the Log-Linear Model

• To train a log-linear model, we need a training set (xi, yi) for i = 1 . . . n. Then search for

  W* = argmax_W [ Σi log P(yi | xi, W)  −  C Σk Wk² ]
                   (log-likelihood)       (Gaussian prior)

  (see the last lecture on log-linear models)
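The defining equation P(y | x, W) = e^{W·φ(x,y)} / Σ_{y′} e^{W·φ(x,y′)} is a softmax over labels and can be sketched in a few lines. The feature function and weights below are hypothetical, chosen so the result is easy to check by hand.

```python
import math

# A minimal log-linear (maximum-entropy) distribution.
def log_linear_prob(x, y, labels, phi, W):
    """P(y | x, W) = exp(W . phi(x, y)) / sum_{y'} exp(W . phi(x, y'))."""
    dot = lambda yy: sum(w * f for w, f in zip(W, phi(x, yy)))
    Z = sum(math.exp(dot(yy)) for yy in labels)   # normalizing constant
    return math.exp(dot(y)) / Z

# Two labels, two one-hot features; weight log(3) on the "A" feature
# gives unnormalized scores 3 and 1, hence P(A) = 3/4.
phi = lambda x, y: [1.0, 0.0] if y == "A" else [0.0, 1.0]
pA = log_linear_prob(None, "A", ["A", "B"], phi, [math.log(3.0), 0.0])
```

Exponentiating guarantees positivity, and dividing by Z guarantees the probabilities over Y sum to one.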
• The training set is simply all history/tag pairs seen in the training data.

The Viterbi Algorithm for Log-Linear Models

• Question: how do we calculate the following?
  T* = argmax_T log P(T | S)
• Define n to be the length of the sentence
• Define a dynamic programming table
  π[i, t−2, t−1] = maximum log probability of a tag sequence ending in tags t−2, t−1 at position i
• Our goal is to calculate max_{t−2,t−1 ∈ T} π[n, t−2, t−1]

The Viterbi Algorithm: Recursive Definitions

• Base case:
  π[0, *, *] = log 1 = 0
  π[0, t−2, t−1] = log 0 = −∞ for all other t−2, t−1
  where * is a special tag padding the beginning of the sentence.
• Recursive case: for i = 1 . . . n, for all t−2, t−1,
  π[i, t−2, t−1] = max_{t ∈ T ∪ {*}} { π[i − 1, t, t−2] + Score(S, i, t, t−2, t−1) }
  Backpointers allow us to recover the max probability sequence:
  BP[i, t−2, t−1] = argmax_{t ∈ T ∪ {*}} { π[i − 1, t, t−2] + Score(S, i, t, t−2, t−1) }
  where now Score(S, i, t, t−2, t−1) = log P(t−1 | t, t−2, w1, . . . , wn, i)

This is identical to Viterbi for HMMs, except for the definition of Score(S, i, t, t−2, t−1).
FAQ Segmentation: McCallum et al.

• McCallum et al. compared HMM and log-linear taggers on a FAQ segmentation task
• Main point: in an HMM, modeling P(word | tag) is difficult in this domain

FAQ Segmentation: McCallum et al.

<head>X-NNTP-POSTER: NewsHound v1.33
<head>
<head>Archive name: acorn/faq/part2
<head>Frequency: monthly
<head>
<question>2.6) What configuration of serial cable should I use
<answer>
<answer> Here follows a diagram of the necessary connections
<answer>programs to work properly. They are as far as I know t
<answer>agreed upon by commercial comms software developers fo
<answer>
<answer> Pins 1, 4, and 8 must be connected together inside
<answer>is to avoid the well known serial port chip bugs. The

FAQ Segmentation: Line Features

begins-with-number
begins-with-ordinal
begins-with-punctuation
begins-with-question-word
begins-with-subject
blank
contains-alphanum
contains-bracketed-number
contains-http
contains-non-space
contains-number
contains-pipe
contains-question-mark
ends-with-question-mark
first-alpha-is-capitalized
indented-1-to-4
indented-5-to-10
more-than-one-third-space
only-punctuation
prev-is-blank
prev-begins-with-ordinal
shorter-than-30

FAQ Segmentation: The Log-Linear Tagger

<head>X-NNTP-POSTER: NewsHound v1.33
<head>
<head>Archive name: acorn/faq/part2
<head>Frequency: monthly
<head>
<question>2.6) What configuration of serial cable should I use
Here follows a diagram of the necessary connections
programs to work properly. They are as far as I know t
agreed upon by commercial comms software developers fo
Pins 1, 4, and 8 must be connected together inside
is to avoid the well known serial port chip bugs. The
⇒ "tag=question;prev=head;begins-with-number"
  "tag=question;prev=head;contains-alphanum"
  "tag=question;prev=head;contains-nonspace"
  "tag=question;prev=head;contains-number"
  "tag=question;prev=head;prev-is-blank"

FAQ Segmentation: An HMM Tagger

<question>2.6) What configuration of serial cable should I use

• First solution for P(word | tag):
  P("2.6) What configuration of serial cable should I use" | question)
  = P(2.6) | question) × P(What | question) × P(configuration | question) × P(of | question) × P(serial | question) × . . .
• i.e., have a language model for each tag

FAQ Segmentation: McCallum et al.

• Second solution: first map each sentence to a string of features:
  <question>2.6) What configuration of serial cable should I use
  ⇒
  <question>begins-with-number contains-alphanum contains-nonspace contains-number prev-is-blank
• Use a language model again:
  P("2.6) What configuration of serial cable should I use" | question)
  = P(begins-with-number | question) × P(contains-alphanum | question) × P(contains-nonspace | question) × P(contains-number | question) × P(prev-is-blank | question)

FAQ Segmentation: Results

Method        COAP   SegPrec  SegRec
ME-Stateless  0.520  0.038    0.362
TokenHMM      0.865  0.276    0.140
FeatureHMM    0.941  0.413    0.529
MEMM          0.965  0.867    0.681

Overview

• The Tagging Problem
• Hidden Markov Model (HMM) taggers
• Log-linear taggers
• Log-linear models for parsing and other problems

Log-Linear Taggers: Summary
• The input sentence is S = w1 . . . wn
• Each tag sequence T has a conditional probability
  P(T | S) = Π_{j=1..n} P(tj | w1 . . . wn, j, t1 . . . tj−1)   (chain rule)
           = Π_{j=1..n} P(tj | w1 . . . wn, j, tj−2, tj−1)   (independence assumptions)
• Estimate P(tj | w1 . . . wn, j, tj−2, tj−1) using log-linear models
• Use the Viterbi algorithm to compute argmax_T log P(T | S)

A General Approach: (Conditional) History-Based Models

• We've shown how to define P(T | S) where T is a tag sequence
• How do we define P(T | S) if T is a parse tree (or another structure)?

A General Approach: (Conditional) History-Based Models

• Step 1: represent a tree as a sequence of decisions d1 . . . dm
  T = ⟨d1, d2, . . . , dm⟩
  m is not necessarily the length of the sentence
• Step 2: the probability of a tree is
  P(T | S) = Π_{i=1..m} P(di | d1 . . . di−1, S)
• Step 3: use a log-linear model to estimate P(di | d1 . . . di−1, S)
• Step 4: search?? (answer we'll get to later: beam or heuristic search)

An Example Tree (for "the lawyer questioned the witness about the revolver"):

[S(questioned)
  [NP(lawyer) [DT the] [NN lawyer]]
  [VP(questioned)
    [Vt questioned]
    [NP(witness) [DT the] [NN witness]]
    [PP(about) [IN about] [NP(revolver) [DT the] [NN revolver]]]]]
Ratnaparkhi's Parser: Three Layers of Structure

1. Part-of-speech tags
2. Chunks
3. Remaining structure

Layer 1: Part-of-Speech Tags

the/DT lawyer/NN questioned/Vt the/DT witness/NN about/IN the/DT revolver/NN

• Step 1: represent a tree as a sequence of decisions d1 . . . dm: T = ⟨d1, d2, . . . , dm⟩
• The first n decisions are tagging decisions:
  ⟨d1 . . . dn⟩ = ⟨DT, NN, Vt, DT, NN, IN, DT, NN⟩

Layer 2: Chunks

[NP the/DT lawyer/NN] questioned/Vt [NP the/DT witness/NN] about/IN [NP the/DT revolver/NN]

Chunks are defined as any phrase where all children are part-of-speech tags. (Other common chunks are ADJP, QP.)

Layer 2: Chunks (as tagging decisions)

the/Start(NP) lawyer/Join(NP) questioned/Other the/Start(NP) witness/Join(NP) about/Other the/Start(NP) revolver/Join(NP)

• The first n decisions are tagging decisions; the next n decisions are chunk tagging decisions:
  ⟨d1 . . . d2n⟩ = ⟨DT, NN, Vt, DT, NN, IN, DT, NN, Start(NP), Join(NP), Other, Start(NP), Join(NP), Other, Start(NP), Join(NP)⟩

Layer 3: Remaining Structure

Alternate between two classes of actions:
• Join(X) or Start(X), where X is a label (NP, S, VP, etc.)
• Check=YES or Check=NO

Meaning of these actions:
• Start(X) starts a new constituent with label X (always acts on the leftmost constituent with no start or join label above it)
• Join(X) continues a constituent with label X (always acts on the leftmost constituent with no start or join label above it)
• Check=NO does nothing
• Check=YES takes the previous Join or Start action, and converts it into a completed constituent

[A sequence of slides then steps through the layer-3 derivation for "the lawyer questioned the witness about the revolver", starting from the chunked sentence [NP the lawyer] questioned [NP the witness] about [NP the revolver]: Start(S) on [NP the lawyer]; Check=NO; Start(VP) on questioned; Check=NO; Join(VP) on [NP the witness]; Check=NO; Start(PP) on about; Check=NO; Join(PP) on [NP the revolver]; Check=YES, completing the PP; Join(VP) on the completed PP; Check=YES, completing the VP; Join(S) on the VP; Check=YES, completing the S.]
The Final Sequence of Decisions

⟨d1 . . . dm⟩ = ⟨DT, NN, Vt, DT, NN, IN, DT, NN,
               Start(NP), Join(NP), Other, Start(NP), Join(NP), Other, Start(NP), Join(NP),
               Start(S), Check=NO, Start(VP), Check=NO, Join(VP), Check=NO, Start(PP), Check=NO,
               Join(PP), Check=YES, Join(VP), Check=YES, Join(S), Check=YES⟩

A General Approach: (Conditional) History-Based Models

• Step 1: represent a tree as a sequence of decisions d1 . . . dm
  T = ⟨d1, d2, . . . , dm⟩
  m is not necessarily the length of the sentence
• Step 2: the probability of a tree is
  P(T | S) = Π_{i=1..m} P(di | d1 . . . di−1, S)
• Step 3: use a log-linear model to estimate P(di | d1 . . . di−1, S)
• Step 4: search?? (answer we'll get to later: beam or heuristic search)

Applying a Log-Linear Model

• Step 3: use a log-linear model to estimate

  P(di | d1 . . . di−1, S) = e^{φ(⟨d1 . . . di−1, S⟩, di)·W} / Σ_{d ∈ A} e^{φ(⟨d1 . . . di−1, S⟩, d)·W}

  where:
  ⟨d1 . . . di−1, S⟩ is the history
  di is the outcome
  φ maps a history/outcome pair to a feature vector
  W is a parameter vector
  A is the set of possible actions (may be context dependent)
• The big question: how do we define φ?
• Ratnaparkhi's method defines φ differently depending on whether the next decision is:
  – a tagging decision (same features as before for POS tagging!)
  – a chunking decision
  – a start/join decision after chunking
  – a check=no/check=yes decision

Layer 2: Chunks

the/DT/Start(NP) lawyer/NN/Join(NP) questioned/Vt/Other the/DT/Start(NP) witness/NN/?? about/IN the/DT revolver/NN

⇒ features for the Join(NP) decision on witness:
  "TAG=Join(NP);Word0=witness;POS0=NN"
  "TAG=Join(NP);POS0=NN"
  "TAG=Join(NP);Word+1=about;POS+1=IN"
  "TAG=Join(NP);POS+1=IN"
  "TAG=Join(NP);Word+2=the;POS+2=DT"
  "TAG=Join(NP);POS+2=DT"
  "TAG=Join(NP);Word-1=the;POS-1=DT;TAG-1=Start(NP)"
  "TAG=Join(NP);POS-1=DT;TAG-1=Start(NP)"
  "TAG=Join(NP);TAG-1=Start(NP)"
  . . .

Layer 3: Join or Start

• Looks at the head word, constituent (or POS) label, and start/join annotation of the n'th tree relative to the decision, where n = −2, −1
• Looks at the head word and constituent (or POS) label of the n'th tree relative to the decision, where n = 0, 1, 2
• Looks at bigram features of the above for (−1, 0) and (0, 1)
• Looks at trigram features of the above for (−2, −1, 0), (−1, 0, 1) and (0, 1, 2)
• The above features with all combinations of head words excluded
• Various punctuation features

Layer 3: Check=NO or Check=YES

• A variety of questions concerning the proposed constituent

The Search Problem

• In POS tagging, we could use the Viterbi algorithm because
  P(tj | w1 . . . wn, j, t1 . . . tj−1) = P(tj | w1 . . . wn, j, tj−2 . . . tj−1)
• Now: decision di could depend on arbitrary decisions in the "past"
  ⇒ no chance for dynamic programming
• Instead, Ratnaparkhi uses a beam search method.
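A generic beam search over decision sequences can be sketched as follows. This is an illustrative sketch, not Ratnaparkhi's actual implementation: `actions(seq)` lists the possible next decisions and `logprob(seq, d)` stands in for the log-linear model's log P(d | history); both are hypothetical callables supplied by the caller.

```python
def beam_search(n_steps, beam_size, actions, logprob):
    """Beam search over decision sequences d1..dm: at each step, expand
    every partial sequence with every possible next decision and keep
    only the beam_size highest-scoring candidates."""
    beam = [(0.0, [])]                      # (total log prob, decisions so far)
    for _ in range(n_steps):
        candidates = [(lp + logprob(seq, d), seq + [d])
                      for lp, seq in beam for d in actions(seq)]
        candidates.sort(key=lambda c: c[0], reverse=True)
        beam = candidates[:beam_size]
    return beam[0][1]                       # best full sequence found

# Toy run: two possible actions at every step, one always preferred.
best = beam_search(3, 2, lambda seq: ["Join(NP)", "Other"],
                   lambda seq, d: 0.0 if d == "Join(NP)" else -1.0)
```

Unlike Viterbi, nothing here requires the score to depend on only the last two decisions, which is exactly why beam search fits the history-based parsing model.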
3 Representations of finite groups: basic results

Recall that a representation of a group G over a field k is a k-vector space V together with a group homomorphism ρ : G → GL(V). As we have explained above, a representation of a group G over k is the same thing as a representation of its group algebra k[G].

In this section, we begin a systematic development of representation theory of finite groups.

3.1 Maschke's Theorem

Theorem 3.1. (Maschke) Let G be a finite group and k a field whose characteristic does not divide |G|. Then:

(i) The algebra k[G] is semisimple.

(ii) There is an isomorphism of algebras ψ : k[G] → ⊕_i End V_i defined by g ↦ ⊕_i g|_{V_i}, where V_i are the irreducible representations of G. In particular, this is an isomorphism of representations of G (where G acts on both sides by left multiplication). Hence, the regular representation k[G] decomposes into irreducibles as ⊕_i dim(V_i) V_i, and one has

|G| = Σ_i dim(V_i)²

(the "sum of squares formula").

Proof. By Proposition 2.16, (i) implies (ii), and to prove (i), it is sufficient to show that if V is a finite-dimensional representation of G and W ⊂ V is any subrepresentation, then there exists a subrepresentation W′ ⊂ V such that V = W ⊕ W′ as representations.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Choose any complement Ŵ of W in V. (Thus V = W ⊕ Ŵ as vector spaces, but not necessarily as representations.) Let P̂ be the projection along Ŵ onto W, i.e., the operator on V defined by P̂|_W = Id and P̂|_Ŵ = 0. Let

P := (1/|G|) Σ_{g ∈ G} ρ(g) P̂ ρ(g⁻¹),

where ρ(g) is the action of g on V, and let W′ = ker P. Now P|_W = Id and P(V) ⊆ W, so P² = P, so P is a projection along W′. Thus, V = W ⊕ W′ as vector spaces.

Moreover, for any h ∈ G and any y ∈ W′,

P ρ(h) y = (1/|G|) Σ_{g ∈ G} ρ(g) P̂ ρ(g⁻¹h) y = (1/|G|) Σ_{σ ∈ G} ρ(hσ) P̂ ρ(σ⁻¹) y = ρ(h) P y = 0,

so ρ(h) y ∈ ker P = W′. Thus, W′ is invariant under the action of G and is therefore a subrepresentation of V. Thus, V = W ⊕ W′ is the desired decomposition into subrepresentations. ∎

The converse to Theorem 3.1(i) also holds.

Proposition 3.2. If k[G] is semisimple, then the characteristic of k does not divide |G|.
Proof. Write k[G] = ⊕_{i=1}^{r} End V_i, where the V_i are irreducible representations and V_1 = k is the trivial one-dimensional representation. Then

k[G] = k ⊕ ⊕_{i=2}^{r} End V_i = k ⊕ ⊕_{i=2}^{r} d_i V_i,

where d_i = dim V_i. By Schur's Lemma,

Hom_{k[G]}(k, k[G]) = kΛ,
Hom_{k[G]}(k[G], k) = kε,

for nonzero homomorphisms of representations ε : k[G] → k and Λ : k → k[G] unique up to scaling. We can take ε such that ε(g) = 1 for all g ∈ G, and Λ such that Λ(1) = Σ_{g ∈ G} g. Then

ε ∘ Λ(1) = ε(Σ_{g ∈ G} g) = Σ_{g ∈ G} 1 = |G|.

If |G| = 0, then Λ has no left inverse, as (aε) ∘ Λ(1) = 0 for any a ∈ k. This is a contradiction. ∎

Example 3.3. If G = Z/pZ and k has characteristic p, then every irreducible representation of G over k is trivial (so k[Z/pZ] indeed is not semisimple). Indeed, an irreducible representation of this group is a 1-dimensional space, on which the generator acts by a p-th root of unity, and every p-th root of unity in k equals 1, as x^p − 1 = (x − 1)^p over k.

Problem 3.4. Let G be a group of order p^n. Show that every irreducible representation of G over a field k of characteristic p is trivial.

3.2 Characters
If V is a finite-dimensional representation of a finite group G, then its character χ_V : G → k is defined by the formula χ_V(g) = tr|_V(ρ(g)). Obviously, χ_V(g) is simply the restriction of the character χ_V(a) of V as a representation of the algebra A = k[G] to the basis G ⊂ A, so it carries exactly the same information. The character is a central or class function: χ_V(g) depends only on the conjugacy class of g; i.e., χ_V(hgh⁻¹) = χ_V(g).

Theorem 3.5. If the characteristic of k does not divide |G|, characters of irreducible representations of G form a basis in the space F_c(G, k) of class functions on G.

Proof. By the Maschke theorem, k[G] is semisimple, so by Theorem 2.17, the characters are linearly independent and are a basis of (A/[A, A])*, where A = k[G]. It suffices to note that, as vector spaces over k,

(A/[A, A])* ≅ { f ∈ Hom_k(k[G], k) | f(gh) = f(hg) for all g, h ∈ G }
           ≅ { f ∈ Fun(G, k) | f(gh) = f(hg) for all g, h ∈ G },

which is precisely F_c(G, k). ∎
Corollary 3.6. The number of isomorphism classes of irreducible representations of G equals the number of conjugacy classes of G (if |G| ≠ 0 in k).

Exercise. Show that if |G| = 0 in k then the number of isomorphism classes of irreducible representations of G over k is strictly less than the number of conjugacy classes in G.

Hint. Let P = Σ_{g ∈ G} g ∈ k[G]. Then P² = 0. So P has zero trace in every finite-dimensional representation of G over k.

Corollary 3.7. Any representation of G is determined by its character if k has characteristic 0; namely, χ_V = χ_W implies V ≅ W.

3.3 Examples

The following are examples of representations of finite groups over C.

1. Finite abelian groups G = Z_{n1} × · · · × Z_{nk}. Let G∗ be the set of irreducible representations of G. Every element of G forms a conjugacy class, so |G∗| = |G|. Recall that all irreducible representations over C (and algebraically closed fields in general) of commutative algebras and groups are one-dimensional. Thus, G∗ is an abelian group: if ρ1, ρ2 : G → C× are irreducible representations then so are ρ1(g)ρ2(g) and ρ1(g)⁻¹. G∗ is called the dual or character group of G.
For given n ≥ 1, define ρ : Z_n → C× by ρ(m) = e^{2πim/n}. Then Z_n∗ = {ρ^k : k = 0, . . . , n − 1}, so Z_n∗ ≅ Z_n. In general,

(G_1 × G_2 × · · · × G_n)∗ = G_1∗ × G_2∗ × · · · × G_n∗,

so G∗ ≅ G for any finite abelian group G. This isomorphism is, however, noncanonical: the particular decomposition of G as Z_{n1} × · · · × Z_{nk} is not unique as far as which elements of G correspond to Z_{n1}, etc., is concerned. On the other hand, G ≅ (G∗)∗ is a canonical isomorphism, given by ξ : G → (G∗)∗, where ξ(g)(χ) = χ(g).

2. The symmetric group S_3. In S_n, conjugacy classes are determined by cycle decomposition sizes: two permutations are conjugate if and only if they have the same number of cycles of each length. For S_3, there are 3 conjugacy classes, so there are 3 different irreducible representations over C. If their dimensions are d_1, d_2, d_3, then d_1² + d_2² + d_3² = 6, so S_3 must have two 1-dimensional and one 2-dimensional representations. The 1-dimensional representations are the trivial representation C_+ given by ρ(σ) = 1 and the sign representation C_− given by ρ(σ) = (−1)^σ.
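The claim Z_n∗ ≅ Z_n can be checked numerically. The sketch below is not from the notes; it verifies, for a sample n, that χ_k = ρ^k is a homomorphism Z_n → C× and that χ_k · χ_l = χ_{(k+l) mod n}, which is exactly the group law making the dual a copy of Z_n.

```python
import cmath

# chi_k(m) = rho(m)^k = e^{2 pi i k m / n}, the k-th character of Z_n.
def chi(n, k):
    return lambda m: cmath.exp(2j * cmath.pi * k * m / n)

n = 5
# Homomorphism property: chi_k(a + b) = chi_k(a) chi_k(b).
hom_ok = all(abs(chi(n, 2)(a + b) - chi(n, 2)(a) * chi(n, 2)(b)) < 1e-9
             for a in range(n) for b in range(n))
# Group law of the dual: chi_2 * chi_4 = chi_{(2+4) mod 5} = chi_1.
law_ok = all(abs(chi(n, 2)(m) * chi(n, 4)(m) - chi(n, (2 + 4) % n)(m)) < 1e-9
             for m in range(n))
```

Both checks pass because e^{2πikm/n} depends only on km mod n, so the exponents add modulo n, mirroring addition in Z_n.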
The 2-dimensional representation can be visualized as representing the symmetries of the equilateral triangle with vertices 1, 2, 3 at the points (cos 120°, sin 120°), (cos 240°, sin 240°), (1, 0) of the coordinate plane, respectively. Thus, for example,

ρ((12)) = [ 1 0 ; 0 −1 ],   ρ((123)) = [ cos 120° −sin 120° ; sin 120° cos 120° ].

To show that this representation is irreducible, consider any subrepresentation V. V must be the span of a subset of the eigenvectors of ρ((12)), which are the nonzero multiples of (1, 0) and (0, 1). V must also be the span of a subset of the eigenvectors of ρ((123)), which are different vectors. Thus, V must be either C² or 0.

3. The quaternion group Q_8 = {±1, ±i, ±j, ±k}, with defining relations

i = jk = −kj,   j = ki = −ik,   k = ij = −ji,   −1 = i² = j² = k².

The 5 conjugacy classes are {1}, {−1}, {±i}, {±j}, {±k}, so there are 5 different irreducible representations, the sum of the squares of whose dimensions is 8, so their dimensions must be 1, 1, 1, 1, and 2.
The center Z(Q_8) is {±1}, and Q_8/Z(Q_8) ≅ Z_2 × Z_2. The four 1-dimensional irreducible representations of Z_2 × Z_2 can be "pulled back" to Q_8. That is, if q : Q_8 → Q_8/Z(Q_8) is the quotient map, and ρ is any representation of Q_8/Z(Q_8), then ρ ∘ q gives a representation of Q_8.

The 2-dimensional representation is V = C², given by ρ(−1) = −Id and

ρ(i) = [ 0 1 ; −1 0 ],   ρ(j) = [ √−1 0 ; 0 −√−1 ],   ρ(k) = [ 0 −√−1 ; −√−1 0 ].   (3)

These are the Pauli matrices, which arise in quantum mechanics.

Exercise. Show that the 2-dimensional irreducible representation of Q_8 can be realized in the space of functions f : Q_8 → C such that f(gi) = √−1 f(g) (the action of G is by right multiplication, (g ∘ f)(x) = f(xg)).

4. The symmetric group S_4. The order of S_4 is 24, and there are 5 conjugacy classes:
e, (12), (123), (1234), (12)(34). Thus the sum of the squares of the dimensions of the 5 irreducible representations is 24. As with S_3, there are two of dimension 1: the trivial and sign representations, C_+ and C_−. The other three must then have dimensions 2, 3, and 3. Because S_3 ≅ S_4/(Z_2 × Z_2), where Z_2 × Z_2 is {e, (12)(34), (13)(24), (14)(23)}, the 2-dimensional representation of S_3 can be pulled back to the 2-dimensional representation of S_4, which we will call C².

We can consider S_4 as the group of rotations of a cube acting by permuting the interior diagonals (or, equivalently, on a regular octahedron permuting pairs of opposite faces); this gives the 3-dimensional representation C³_+. The last 3-dimensional representation is C³_−, the product of C³_+ with the sign representation. C³_+ and C³_− are different, for if g is a transposition, det g|_{C³_+} = 1 while det g|_{C³_−} = (−1)³ · 1 = −1.

Note that another realization of C³_− is by the action of S_4 by symmetries (not necessarily rotations) of the regular tetrahedron. Yet another realization of this representation is the space of functions on the set of 4 elements (on which S_4 acts by permutations) with zero sum of values.

3.4 Duals and tensor products of representations
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
3.4 Duals and tensor products of representations

If V is a representation of a group G, then V* is also a representation, via

ρ_{V*}(g) = (ρ_V(g)*)⁻¹ = (ρ_V(g)⁻¹)* = ρ_V(g⁻¹)*.

The character is χ_{V*}(g) = χ_V(g⁻¹).

We have χ_V(g) = Σ λ_i, where the λ_i are the eigenvalues of g in V. These eigenvalues must be roots of unity because ρ(g)^{|G|} = ρ(g^{|G|}) = ρ(e) = Id. Thus for complex representations

χ_{V*}(g) = χ_V(g⁻¹) = Σ λ_i⁻¹ = Σ conj(λ_i) = conj(Σ λ_i) = conj(χ_V(g)).

In particular, V ≅ V* as representations (not just as vector spaces) if and only if χ_V(g) ∈ R for all g ∈ G.

If V, W are representations of G, then V ⊗ W is also a representation, via

ρ_{V⊗W}(g) = ρ_V(g) ⊗ ρ_W(g).

Therefore, χ_{V⊗W}(g) = χ_V(g) χ_W(g).

An interesting problem discussed below is to decompose V ⊗ W (for irreducible V, W) into a direct sum of irreducible representations.

3.5 Orthogonality of characters

We define a positive definite Hermitian inner product on F_c(G, C) (the space of central functions) by

(f_1, f_2) = (1/|G|) Σ_{g∈G} f_1(g) conj(f_2(g)).
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
The following theorem says that characters of irreducible representations of G form an orthonormal basis of F_c(G, C) under this inner product.

Theorem 3.8. For any representations V, W,

(χ_V, χ_W) = dim Hom_G(W, V),

and if V, W are irreducible,

(χ_V, χ_W) = 1 if V ≅ W, and 0 if V ≇ W.

Proof. By the definition,

(χ_V, χ_W) = (1/|G|) Σ_{g∈G} χ_V(g) conj(χ_W(g)) = (1/|G|) Σ_{g∈G} χ_V(g) χ_{W*}(g) = (1/|G|) Σ_{g∈G} χ_{V⊗W*}(g) = Tr|_{V⊗W*}(P),

where P = (1/|G|) Σ_{g∈G} g ∈ Z(C[G]). (Here Z(C[G]) denotes the center of C[G].) If X is an irreducible representation of G, then

P|_X = Id if X = C, and 0 if X ≠ C.

Therefore, for any representation X the operator P|_X is the G-invariant projector onto the subspace X^G of G-invariants in X. Thus,

Tr|_{V⊗W*}(P) = dim Hom_G(C, V ⊗ W*) = dim (V ⊗ W*)^G = dim Hom_G(W, V). □

Theorem 3.8 gives a powerful method of checking if a given complex representation V of a finite group G is irreducible. Indeed, it implies that V is irreducible if and only if (χ_V, χ_V) = 1.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
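The orthonormality in Theorem 3.8 can be verified numerically on an example. A sketch, not from the text, using the three irreducible characters of S3 (these are real, so the complex conjugation in the inner product is omitted).

```python
# Verify Theorem 3.8 for S3: its irreducible characters are orthonormal.
sizes = [1, 3, 2]        # class sizes for e, transpositions, 3-cycles; |G| = 6
chars = {
    'triv': [1, 1, 1],
    'sign': [1, -1, 1],
    '2dim': [2, 0, -1],
}

def inner(chi, psi):
    # (chi, psi) = (1/|G|) * sum over g of chi(g) * conj(psi(g)),
    # grouped by conjugacy class (characters are class functions)
    return sum(n * a * b for n, a, b in zip(sizes, chi, psi)) / 6

pairs = {(u, v): inner(chars[u], chars[v]) for u in chars for v in chars}
# orthonormality: 1 on the diagonal, 0 off it
assert all(val == (1 if u == v else 0) for (u, v), val in pairs.items())
```

In particular, (χ_V, χ_V) = 1 for each row, confirming irreducibility as noted above.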
Exercise. Let G be a finite group, and let V_i be the irreducible complex representations of G. For every i, let

ψ_i = (dim V_i / |G|) Σ_{g∈G} χ_{V_i}(g) g⁻¹ ∈ C[G].

(i) Prove that ψ_i acts on V_j as the identity if j = i, and as the null map if j ≠ i.

(ii) Prove that the ψ_i are idempotents, i.e., ψ_i² = ψ_i for any i, and ψ_i ψ_j = 0 for any i ≠ j.

Hint: In (i), notice that ψ_i commutes with any element of k[G], and thus acts on V_j as an intertwining operator. Corollary 1.17 thus yields that ψ_i acts on V_j as a scalar. Compute this scalar by taking its trace in V_j.

Here is another "orthogonality formula" for characters, in which summation is taken over irreducible representations rather than group elements.

Theorem 3.9. Let g, h ∈ G, and let Z_g denote the centralizer of g in G. Then

Σ_V χ_V(g) conj(χ_V(h)) = |Z_g| if g is conjugate to h, and 0 otherwise,

where the summation is taken over all irreducible representations V of G.

Proof. As noted above, conj(χ_V(h)) = χ_{V*}(h), so the left hand side equals (using Maschke's theorem):
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Σ_V χ_V(g) χ_{V*}(h) = Tr|_{⊕_V (V ⊗ V*)}(g ⊗ (h*)⁻¹) = Tr|_{⊕_V End(V)}(x ↦ gxh⁻¹) = Tr|_{C[G]}(x ↦ gxh⁻¹).

If g and h are not conjugate, this trace is clearly zero, since the matrix of the operator x ↦ gxh⁻¹ in the basis of group elements has zero diagonal entries. On the other hand, if g and h are in the same conjugacy class, the trace is equal to the number of elements x such that x = gxh⁻¹, i.e., the order of the centralizer Z_g of g. We are done. □

Remark. Another proof of this result is as follows. Consider the matrix U whose rows are labeled by irreducible representations of G and whose columns are labeled by conjugacy classes, with entries U_{V,g} = χ_V(g)/√|Z_g|. (Note that the conjugacy class of g is G/Z_g, so |G|/|Z_g| is the number of elements conjugate to g.) Thus, by Theorem 3.8, the rows of the matrix U are orthonormal. This means that U is unitary and hence its columns are also orthonormal, which implies the statement.

3.6 Unitary representations. Another proof of Maschke's theorem for complex representations

Definition 3.10. A unitary finite dimensional representation of a group G is a representation of G on a complex finite dimensional vector space V over C equipped with a G-invariant positive definite Hermitian form⁴ ( , ), i.e., such that the ρ_V(g) are unitary operators: (ρ_V(g)v, ρ_V(g)w) = (v, w).
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
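Theorem 3.9 can likewise be checked on S3. A sketch, not from the text; here |Z_g| is computed as |G| divided by the size of the conjugacy class of g.

```python
# Check Theorem 3.9 for S3: sum over irreducibles of chi(g)*conj(chi(h))
# equals |Z_g| when g and h are conjugate, and 0 otherwise.
sizes = [1, 3, 2]                              # class sizes; |G| = 6
chars = [[1, 1, 1], [1, -1, 1], [2, 0, -1]]    # rows of the character table

order = 6
for a in range(3):
    for b in range(3):
        lhs = sum(chi[a] * chi[b] for chi in chars)   # characters are real
        centralizer = order // sizes[a]               # |Z_g| = |G| / |class|
        assert lhs == (centralizer if a == b else 0)
```

For instance, for a transposition the column sum of squares is 1 + 1 + 0 = 2 = |Z_g|, since the class of transpositions has 6/2 = 3 elements.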
⁴We agree that Hermitian forms are linear in the first argument and antilinear in the second one.

Theorem 3.11. If G is finite, then any finite dimensional representation of G has a unitary structure. If the representation is irreducible, this structure is unique up to scaling by a positive real number.

Proof. Take any positive definite form B on V and define another form B̄ as follows:

B̄(v, w) = Σ_{g∈G} B(ρ_V(g)v, ρ_V(g)w).

Then B̄ is a positive definite Hermitian form on V, and the ρ_V(g) are unitary operators with respect to it. If V is an irreducible representation and B_1, B_2 are two positive definite invariant Hermitian forms on V, then B_1(v, w) = B_2(Av, w) for some homomorphism A : V → V (since any positive definite Hermitian form is nondegenerate). By Schur's lemma, A = λ·Id, and clearly λ > 0. □

Theorem 3.11 implies that if V is a finite dimensional representation of a finite group G, then the complex conjugate representation V̄ (i.e., the same space V with the same addition and the same action of G, but complex conjugate action of scalars) is isomorphic to the dual representation V*.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Indeed, a homomorphism of representations V → V̄* is obviously the same thing as an invariant sesquilinear form on V (i.e., a form additive in both arguments which is linear in the first one and antilinear in the second one), and an isomorphism is the same thing as a nondegenerate invariant sesquilinear form. So one can use a unitary structure on V to define an isomorphism V → V̄*.

Theorem 3.12. A finite dimensional unitary representation V of any group G is completely reducible.

Proof. Let W be a subrepresentation of V. Let W⊥ be the orthogonal complement of W in V under the Hermitian inner product. Then W⊥ is a subrepresentation of V, and V = W ⊕ W⊥. This implies that V is completely reducible. □

Theorems 3.11 and 3.12 imply Maschke's theorem for complex representations (Theorem 3.1). Thus, we have obtained a new proof of this theorem over the field of complex numbers.

Remark 3.13. Theorem 3.12 shows that for infinite groups G, a finite dimensional representation may fail to admit a unitary structure (as there exist finite dimensional representations, e.g. for G = Z, which are indecomposable but not irreducible).

3.7 Orthogonality of matrix elements

Let V be an irreducible representation of a finite group G, and v_1, v_2, ..., v_n an orthonormal basis of V under the invariant Hermitian form. The matrix elements of V are t^V_{ij}(x) = (ρ_V(x)v_i, v_j).
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Proposition 3.14. (i) Matrix elements of nonisomorphic irreducible representations are orthogonal in Fun(G, C) under the form (f, g) = (1/|G|) Σ_{x∈G} f(x) conj(g(x)).

(ii) (t^V_{ij}, t^V_{i'j'}) = δ_{ii'} δ_{jj'} · (1/dim V).

Thus, matrix elements of irreducible representations of G form an orthogonal basis of Fun(G, C).

Proof. Let V and W be two irreducible representations of G. Take {v_i} to be an orthonormal basis of V and {w_i} to be an orthonormal basis of W under their positive definite invariant Hermitian forms. Let w_i* ∈ W* be the linear function on W defined by taking the inner product with w_i: w_i*(u) = (u, w_i). Then for x ∈ G we have (x w_i*, w_j*) = conj((x w_i, w_j)). Therefore, putting P = (1/|G|) Σ_{x∈G} x, we have
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
(t^V_{ij}, t^W_{i'j'}) = (1/|G|) Σ_{x∈G} (xv_i, v_j) conj((xw_{i'}, w_{j'})) = (1/|G|) Σ_{x∈G} (xv_i, v_j)(xw_{i'}*, w_{j'}*) = (P(v_i ⊗ w_{i'}*), v_j ⊗ w_{j'}*).

If V ≠ W, this is zero, since P projects to the trivial representation, which does not occur in V ⊗ W*. If V = W, we need to consider (P(v_i ⊗ v_{i'}*), v_j ⊗ v_{j'}*). We have a G-invariant decomposition

V ⊗ V* = C ⊕ L,   C = span(Σ_k v_k ⊗ v_k*),   L = span_{Σ_k a_kk = 0}(Σ_{k,l} a_kl v_k ⊗ v_l*),

and P projects to the first summand along the second one. The projection of v_i ⊗ v_{i'}* to C along L is thus

(δ_{ii'}/dim V) Σ_k v_k ⊗ v_k*.

This shows that

(P(v_i ⊗ v_{i'}*), v_j ⊗ v_{j'}*) = δ_{ii'} δ_{jj'} / dim V,

which finishes the proof of (i) and (ii). The last statement follows immediately from the sum of squares formula. □

3.8 Character tables, examples

The characters of all the irreducible representations of a finite group can be arranged into a character table, with conjugacy classes of elements as the columns, and characters as the rows. More specifically, the first row in a character table lists representatives of conjugacy classes, the second one the numbers of elements in the conjugacy classes, and the other rows list the values of the characters on the conjugacy classes.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Due to Theorems 3.8 and 3.9 the rows and columns of a character table are orthonormal with respect to the appropriate inner products.

Note that in any character table, the row corresponding to the trivial representation consists of ones, and the column corresponding to the neutral element consists of the dimensions of the representations.

Here is, for example, the character table of S3:

S3   | Id  (12)  (123)
#    | 1   3     2
C+   | 1   1     1
C−   | 1   −1    1
C2   | 2   0     −1

It is obtained by explicitly computing traces in the irreducible representations.

For another example consider A4, the group of even permutations of 4 items. There are three one-dimensional representations (as A4 has a normal subgroup Z2 ⊕ Z2, and A4/(Z2 ⊕ Z2) = Z3). Since there are four conjugacy classes in total, there is one more irreducible representation, of dimension 3. Finally, the character table is

A4     | Id  (123)  (132)  (12)(34)
#      | 1   4      4      3
C      | 1   1      1      1
C_ε    | 1   ε      ε²     1
C_ε²   | 1   ε²     ε      1
C3     | 3   0      0      −1

where ε = exp(2πi/3).

The last row can be computed using the orthogonality of rows.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
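The row orthonormality of the A4 table (Theorem 3.8) can be checked directly. A small sketch with the table data transcribed from above; the weighting by class sizes and the division by |G| = 12 follow the inner product of Section 3.5, and a numerical tolerance is used since ε is a floating-point complex number here.

```python
# Verify row orthonormality of the A4 character table, eps = exp(2*pi*i/3).
import cmath

eps = cmath.exp(2j * cmath.pi / 3)
sizes = [1, 4, 4, 3]              # classes: e, (123), (132), (12)(34)
rows = [
    [1, 1, 1, 1],                 # C
    [1, eps, eps**2, 1],          # C_eps
    [1, eps**2, eps, 1],          # C_eps^2
    [3, 0, 0, -1],                # C3
]

def inner(u, v):
    return sum(n * a * b.conjugate() for n, a, b in zip(sizes, u, v)) / 12

ok = all(abs(inner(rows[i], rows[j]) - (1 if i == j else 0)) < 1e-9
         for i in range(4) for j in range(4))
assert ok
```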
Another way to compute the last row is to note that C3 is the representation of A4 by rotations of the regular tetrahedron: in this case (123), (132) are the rotations by 120° and 240° around a perpendicular to a face of the tetrahedron, while (12)(34) is the rotation by 180° around an axis perpendicular to two opposite edges.

Example 3.15. The following three character tables are of Q8, S4, and A5 respectively.

Q8    | 1   −1   i    j    k
#     | 1   1    2    2    2
C++   | 1   1    1    1    1
C+−   | 1   1    1    −1   −1
C−+   | 1   1    −1   1    −1
C−−   | 1   1    −1   −1   1
C2    | 2   −2   0    0    0

S4    | Id  (12)  (12)(34)  (123)  (1234)
#     | 1   6     3         8      6
C+    | 1   1     1         1      1
C−    | 1   −1    1         1      −1
C2    | 2   0     2         −1     0
C3+   | 3   −1    −1        0      1
C3−   | 3   1     −1        0      −1

A5    | Id  (123)  (12)(34)  (12345)    (13245)
#     | 1   20     15        12         12
C     | 1   1      1         1          1
C3+   | 3   0      −1        (1+√5)/2   (1−√5)/2
C3−   | 3   0      −1        (1−√5)/2   (1+√5)/2
C4    | 4   1      0         −1         −1
C5    | 5   −1     1         0          0
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
Indeed, the computation of the characters of the 1-dimensional representations is straightforward.

The character of the 2-dimensional representation of Q8 is obtained from the explicit formula (3) for this representation, or by using the orthogonality.

For S4, the 2-dimensional irreducible representation is obtained from the 2-dimensional irreducible representation of S3 via the surjective homomorphism S4 → S3, which allows one to obtain its character from the character table of S3.

The character of the 3-dimensional representation C3+ is computed from its geometric realization by rotations of the cube. Namely, by rotating the cube, S4 permutes the main diagonals. Thus (12) is the rotation by 180° around an axis that is perpendicular to two opposite edges, (12)(34) is the rotation by 180° around an axis that is perpendicular to two opposite faces, (123) is the rotation around a main diagonal by 120°, and (1234) is the rotation by 90° around an axis that is perpendicular to two opposite faces; this allows us to compute the traces easily, using the fact that the trace of a rotation by the angle θ in R³ is 1 + 2 cos θ. Now the character of C3− is found by multiplying the character of C3+ by the character of the sign representation.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
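The trace rule just described can be turned into a short computation. A sketch, not from the text, where the assignment of rotation angles to conjugacy classes follows the geometric description above.

```python
# Recover the C3+ row of the S4 character table from tr = 1 + 2*cos(theta).
import math

angles = {                        # rotation angle realized by each class
    'Id': 0.0,
    '(12)': math.pi,              # 180 deg, axis through opposite edges
    '(12)(34)': math.pi,          # 180 deg, axis through opposite faces
    '(123)': 2 * math.pi / 3,     # 120 deg, around a main diagonal
    '(1234)': math.pi / 2,        # 90 deg, axis through opposite faces
}
char_C3_plus = {c: round(1 + 2 * math.cos(t)) for c, t in angles.items()}
assert char_C3_plus == {'Id': 3, '(12)': -1, '(12)(34)': -1,
                        '(123)': 0, '(1234)': 1}
```

Multiplying each value by the sign of the permutation then gives the C3− row.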
Finally, we explain how to obtain the character table of A5 (even permutations of 5 items). The group A5 is the group of rotations of the regular icosahedron. Thus it has a 3-dimensional "rotation representation" C3+, in which (12)(34) is the rotation by 180° around an axis perpendicular to two opposite edges, (123) is the rotation by 120° around an axis perpendicular to two opposite faces, and (12345), (13254) are the rotations by 72°, respectively 144°, around axes going through two opposite vertices. The character of this representation is computed from this description in a straightforward way.

Another representation of A5, which is also 3-dimensional, is C3+ twisted by the automorphism of A5 given by conjugation by (12) inside S5. This representation is denoted by C3−. It has the same character as C3+, except that the values on the conjugacy classes (12345) and (13245) are interchanged.

There are two remaining irreducible representations, and by the sum of squares formula their dimensions are 4 and 5. So we call them C4 and C5. The representation C4 is realized on the space of functions on the set {1, 2, 3, 4, 5} (on which A5 acts by permutations) with zero sum of values (check that it is irreducible!). The character of this representation is equal to the character of the 5-dimensional permutation representation minus the character of the 1-dimensional trivial representation (constant functions). The former at an element g equals the number of items among 1, 2, 3, 4, 5 which are fixed by g.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
The representation C5 is realized on the space of functions on pairs of opposite vertices of the icosahedron which have zero sum of values (check that it is irreducible!). The character of this representation is computed similarly to the character of C4, or from the orthogonality formula.

3.9 Computing tensor product multiplicities using character tables

Character tables allow us to compute the tensor product multiplicities N^k_ij using

V_i ⊗ V_j = ⊕_k N^k_ij V_k,   N^k_ij = (χ_i χ_j, χ_k).

Example 3.16. The following tables represent computed tensor product multiplicities of irreducible representations of S3, S4, and A5 respectively (tensoring with the trivial representation acts as the identity and is omitted).

For S3:
C− ⊗ C− = C+,   C− ⊗ C2 = C2,   C2 ⊗ C2 = C+ ⊕ C− ⊕ C2.

For S4:
C− ⊗ C− = C+,   C− ⊗ C2 = C2,   C− ⊗ C3± = C3∓,
C2 ⊗ C2 = C+ ⊕ C− ⊕ C2,   C2 ⊗ C3± = C3+ ⊕ C3−,
C3+ ⊗ C3+ = C3− ⊗ C3− = C+ ⊕ C2 ⊕ C3+ ⊕ C3−,
C3+ ⊗ C3− = C− ⊕ C2 ⊕ C3+ ⊕ C3−.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
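The formula N^k_ij = (χ_i χ_j, χ_k) is easy to apply mechanically. A sketch (not from the text) using the A5 character table from Example 3.15; the labels 'C3+', 'C4', etc. are ad hoc names for the rows, and the characters are real, so conjugation is omitted.

```python
# Compute tensor multiplicities N^k_ij for A5 from its character table,
# e.g. the decomposition of C3+ (x) C3+.
import math

phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2
sizes = [1, 20, 15, 12, 12]       # Id, (123), (12)(34), (12345), (13245)
table = {
    'C':   [1, 1, 1, 1, 1],
    'C3+': [3, 0, -1, phi, psi],
    'C3-': [3, 0, -1, psi, phi],
    'C4':  [4, 1, 0, -1, -1],
    'C5':  [5, -1, 1, 0, 0],
}

def mult(i, j, k):
    """N^k_ij = (chi_i * chi_j, chi_k), rounded to the nearest integer."""
    prod = [a * b for a, b in zip(table[i], table[j])]
    return round(sum(n * p * c for n, p, c in zip(sizes, prod, table[k])) / 60)

decomp = {k: mult('C3+', 'C3+', k) for k in table}
assert decomp == {'C': 1, 'C3+': 1, 'C3-': 0, 'C4': 0, 'C5': 1}
```

So C3+ ⊗ C3+ = C ⊕ C3+ ⊕ C5, and the other products in Example 3.16 can be recovered the same way.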
For A5:
C3+ ⊗ C3+ = C ⊕ C3+ ⊕ C5,   C3− ⊗ C3− = C ⊕ C3− ⊕ C5,
C3+ ⊗ C3− = C4 ⊕ C5,   C3± ⊗ C4 = C3∓ ⊕ C4 ⊕ C5,
C3± ⊗ C5 = C3+ ⊕ C3− ⊕ C4 ⊕ C5,
C4 ⊗ C4 = C ⊕ C3+ ⊕ C3− ⊕ C4 ⊕ C5,
C4 ⊗ C5 = C3+ ⊕ C3− ⊕ C4 ⊕ 2C5,
C5 ⊗ C5 = C ⊕ C3+ ⊕ C3− ⊕ 2C4 ⊕ 2C5.

3.10 Problems

Problem 3.17. Let G be the group of symmetries of a regular N-gon (it has 2N elements).

(a) Describe all irreducible complex representations of this group (consider the cases of odd and even N).

(b) Let V be the 2-dimensional complex representation of G obtained by complexification of the standard representation on the real plane (the plane of the polygon). Find the decomposition of V ⊗ V into a direct sum of irreducible representations.

Problem 3.18. Let G be the group of 3 by 3 matrices over F_p which are upper triangular and have ones on the diagonal, under multiplication (its order is p³). It is called the Heisenberg group. For any complex number z such that z^p = 1 we define a representation of G on the space V of complex functions on F_p, by
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
ρ( [1 1 0; 0 1 0; 0 0 1] ) f(x) = f(x − 1),   ρ( [1 0 0; 0 1 1; 0 0 1] ) f(x) = z^x f(x)

(note that z^x makes sense since z^p = 1).

(a) Show that such a representation exists and is unique, and compute ρ(g) for all g ∈ G.

(b) Denote this representation by R_z. Show that R_z is irreducible if and only if z ≠ 1.

(c) Classify all 1-dimensional representations of G. Show that R_1 decomposes into a direct sum of 1-dimensional representations, where each of them occurs exactly once.

(d) Use (a)-(c) and the "sum of squares" formula to classify all irreducible representations of G.

Problem 3.19. Let V be a finite dimensional complex vector space, and GL(V) be the group of invertible linear transformations of V. Then SⁿV and ΛᵐV (m ≤ dim(V)) are representations of GL(V) in a natural way. Show that they are irreducible representations.

Hint: Choose a basis {e_i} in V. Find a diagonal element H of GL(V) such that ρ(H) has distinct eigenvalues (where ρ is one of the above representations). This shows that if W is a subrepresentation, then it is
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
spanned by a subset S of a basis of eigenvectors of ρ(H). Use the invariance of W under the operators ρ(1 + E_ij) (where E_ij is defined by E_ij e_k = δ_jk e_i) for all i ≠ j to show that if the subset S is nonempty, it is necessarily the entire basis.

Problem 3.20. Recall that the adjacency matrix of a graph Γ (without multiple edges) is the matrix in which the ij-th entry is 1 if the vertices i and j are connected with an edge, and zero otherwise. Let Γ be a finite graph whose automorphism group is nonabelian. Show that the adjacency matrix of Γ must have repeated eigenvalues.

Problem 3.21. Let I be the set of vertices of a regular icosahedron (|I| = 12). Let Fun(I) be the space of complex functions on I. Recall that the group G = A5 of even permutations of 5 items acts on the icosahedron, so we have a 12-dimensional representation of G on Fun(I).

(a) Decompose this representation into a direct sum of irreducible representations (i.e., find the multiplicities of occurrence of all irreducible representations).

(b) Do the same for the representation of G on the space of functions on the set of faces and the set of edges of
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
the icosahedron.

Problem 3.22. Let F_q be a finite field with q elements, and G be the group of nonconstant inhomogeneous linear transformations x → ax + b over F_q (i.e., a ∈ F_q^×, b ∈ F_q). Find all irreducible complex representations of G, and compute their characters. Compute the tensor products of irreducible representations.

Hint. Let V be the representation of G on the space of functions on F_q with sum of all values equal to zero. Show that V is an irreducible representation of G.

Problem 3.23. Let G = SU(2) (unitary 2 by 2 matrices with determinant 1), and V = C² the standard 2-dimensional representation of SU(2). We consider V as a real representation, so it is 4-dimensional.

(a) Show that V is irreducible (as a real representation).

(b) Let H be the subspace of End_R(V) consisting of endomorphisms of V as a real representation. Show that H is 4-dimensional and closed under multiplication. Show that every nonzero element in H is invertible, i.e., H is an algebra with division.

(c) Find a basis 1, i, j, k of H such that 1 is the unit and

i² = j² = k² = −1,   ij = −ji = k,   jk = −kj = i,   ki = −ik = j.

Thus we have that Q8 is a subgroup of the group H× of invertible elements
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
of H under multiplication.

The algebra H is called the quaternion algebra.

(d) For q = a + bi + cj + dk, a, b, c, d ∈ R, let q̄ = a − bi − cj − dk, and ||q||² = qq̄ = a² + b² + c² + d². Show that the conjugate of q_1 q_2 equals q̄_2 q̄_1, and that ||q_1 q_2|| = ||q_1|| · ||q_2||.

(e) Let G be the group of quaternions of norm 1. Show that this group is isomorphic to SU(2). (Thus geometrically SU(2) is the 3-dimensional sphere.)

(f) Consider the action of G on the space V ⊂ H spanned by i, j, k, by x → qxq⁻¹, q ∈ G, x ∈ V. Since this action preserves the norm on V, we have a homomorphism h : SU(2) → SO(3), where SO(3) is the group of rotations of the three-dimensional Euclidean space. Show that this homomorphism is surjective and that its kernel is {1, −1}.

Problem 3.24. It is known that the classification of finite subgroups of SO(3) is as follows:

1) the cyclic group Z/nZ, n ≥ 1, generated by a rotation
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
by 2π/n around an axis;

2) the dihedral group D_n of order 2n, n ≥ 2 (the group of rotational symmetries in 3-space of a plane containing a regular n-gon⁵);

3) the group of rotations of the regular tetrahedron (A4);

4) the group of rotations of the cube or regular octahedron (S4);

5) the group of rotations of a regular dodecahedron or icosahedron (A5).

⁵A regular 2-gon is just a line segment.

(a) Derive this classification.

Hint. Let G be a finite subgroup of SO(3). Consider the action of G on the unit sphere. A point of the sphere preserved by some nontrivial element of G is called a pole. Show that every nontrivial element of G fixes a unique pair of opposite poles, and that the subgroup of G fixing a particular pole P is cyclic, of some order m (called the order of P). Thus the orbit of P has n/m elements, where n = |G|. Now let P_1, ..., P_k be the poles representing all the orbits of G on the set of poles, and m_1, ..., m_k be their orders. By counting nontrivial elements of G, show that
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
2(1 − 1/n) = Σ_i (1 − 1/m_i).

Then find all possible m_i and n that can satisfy this equation and classify the corresponding groups.

(b) Using this classification, classify finite subgroups of SU(2) (use the homomorphism SU(2) → SO(3)).

Problem 3.25. Find the characters and tensor products of irreducible complex representations of the Heisenberg group from Problem 3.18.

Problem 3.26. Let G be a finite group, and V a complex representation of G which is faithful, i.e., the corresponding map G → GL(V) is injective. Show that any irreducible representation of G occurs inside SⁿV (and hence inside V^⊗n) for some n.

Hint. Show that there exists a vector u ∈ V* whose stabilizer in G is 1. Now define the map SV → Fun(G, C) sending a polynomial f on V* to the function f_u on G given by f_u(g) = f(gu). Show that this map is surjective and use this to deduce the desired result.

Problem 3.27. This problem is about an application of representation theory to physics (elasticity theory). We first describe the physical motivation and then state the mathematical problem.

Imagine a material which occupies a certain region U in the physical space V = R³ (a space with a positive definite inner product).
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
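The equation in the hint of Problem 3.24 can be explored by brute force. A sketch, not from the text, assuming two standard facts from the usual solution: each m_i divides |G|, and the number k of pole orbits can be shown to be 2 or 3. Exact rational arithmetic avoids floating-point issues.

```python
# Enumerate solutions of 2(1 - 1/n) = sum_i (1 - 1/m_i) with m_i | n, m_i >= 2.
from fractions import Fraction
from itertools import combinations_with_replacement

solutions = []
for n in range(2, 61):
    divisors = [m for m in range(2, n + 1) if n % m == 0]
    for k in (2, 3):   # number of pole orbits, assumed to be 2 or 3
        for ms in combinations_with_replacement(divisors, k):
            if 2 * (1 - Fraction(1, n)) == sum(1 - Fraction(1, m) for m in ms):
                solutions.append((n, ms))

# the Platonic cases appear at exactly the expected group orders:
assert (12, (2, 3, 3)) in solutions   # tetrahedral, A4
assert (24, (2, 3, 4)) in solutions   # octahedral, S4
assert (60, (2, 3, 5)) in solutions   # icosahedral, A5
```

The remaining solutions found are of the cyclic form (m, m) with m = n and the dihedral form (2, 2, n/2), matching families 1) and 2) of the classification.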
Suppose the material is deformed. This means, we have a diffeomorphism (= change of coordinates) g : U → U'. The question in elasticity theory is how much stress in the material this deformation will cause.

For every point P ∈ U, let A_P : V → V be defined by A_P = dg(P). A_P is nondegenerate, so it has a polar decomposition A_P = D_P O_P, where O_P is orthogonal and D_P is symmetric. The matrix O_P characterizes the rotation part of A_P (which clearly produces no stress), and D_P is the distortion part, which actually causes stress. If the deformation is small, D_P is close to 1, so D_P = 1 + d_P, where d_P is a small symmetric matrix, i.e., an element of S²V. This matrix is called the deformation tensor at P.

Now we define the stress tensor, which characterizes stress. Let v be a small nonzero vector in V, and σ a small disk perpendicular to v centered at P of area ||v||. Let F_v be the force with which the part of the material on the v-side of σ acts on the part on the opposite side. It is easy to deduce from Newton's laws that F_v is linear in v, so there exists a linear operator S_P : V → V such that F_v = S_P v. It is called the stress tensor.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
An elasticity law is an equation S_P = f(d_P), where f is a function. The simplest such law is a linear law (Hooke's law): f : S²V → End(V) is a linear function. In general, such a function is defined by 9 · 6 = 54 parameters, but we will show there are actually only two essential ones: the compression modulus K and the shearing modulus µ. For this purpose we will use representation theory.

Recall that the group SO(3) of rotations acts on V, so S²V and End(V) are representations of this group. The laws of physics must be invariant under this group (Galileo transformations), so f must be a homomorphism of representations.

(a) Show that End(V) admits a decomposition R ⊕ V ⊕ W, where R is the trivial representation, V is the standard 3-dimensional representation, and W is a 5-dimensional representation of SO(3). Show that S²V = R ⊕ W.

(b) Show that V and W are irreducible, even after complexification. Deduce using Schur's lemma that S_P is always symmetric, and that for x ∈ R, y ∈ W one has f(x + y) = Kx + µy for some real numbers K, µ.

In fact, it is clear from physics that K, µ are positive. Physically, the compression modulus K characterizes resistance
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
of the material to compression or dilation, while the shearing modulus µ characterizes its resistance to changing the shape of the object without changing its volume. For instance, clay (used for sculpting) has a large compression modulus but a small shearing modulus.

MIT OpenCourseWare
http://ocw.mit.edu

18.712 Introduction to Representation Theory
Fall 2010

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0abca81c8a50d3d529fa0fc3d3e5319a_MIT18_712F10_ch3.pdf
18.212: Algebraic Combinatorics
Andrew Lin
Spring 2019

This class is being taught by Professor Postnikov.

February 27, 2019

This is a reminder that the problem set is due on Monday, so we should start it soon. A few bonus problems were also added that are a bit more challenging. Next Wednesday, we will discuss the problem set in class. Professor Postnikov is pretty lenient with late problem sets, but don't turn them in after we discuss the solutions.

Last time, we started discussing statistics on permutations. We defined inv(w) to be the number of inversions and cyc(w) to be the number of cycles, and we found some generating functions:

Σ_{w∈S_n} q^{inv(w)} = [n]_q!,   Σ_{w∈S_n} x^{cyc(w)} = x(x + 1) ··· (x + n − 1).

Definition 1. Let a descent of a permutation w = (w_1, ..., w_n) be an index 1 ≤ i ≤ n − 1 such that w_i > w_{i+1}. Denote the number of descents by des(w).

For example, des(2, 5, 7, 3, 1, 6, 8, 4) = 3. The form of the generating function is a bit less nice, though.

Definition 2. The generating function Σ_{w∈S_n} x^{des(w)} is called the Eulerian polynomial.
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
So we have the number of inversions, the number of cycles, and the number of descents. Here's a meta-mathematical claim: interesting permutation statistics are likely equidistributed with one of these classes!

Fact 3. Things related to inversions are called "Mahonian statistics." Cycles are related to Stirling numbers, but there's no common name. Descent-related statistics are called "Eulerian statistics."

Definition 4. Let the major index of w ∈ S_n be

maj(w) = Σ_{i a descent of w} i.

For example, the permutation w = (2, 5, 7, 3, 1, 6, 8, 4) has descents in positions 3, 4, and 7, so maj(w) = 3 + 4 + 7 = 14.

Theorem 5. inv(w) and maj(w) are equidistributed.

This is a bonus problem from the problem set! Both of these statistics are named after Major Percy MacMahon, who wrote a famous book on combinatorics; so "major" is named for the military rank.

Definition 6. A record of a permutation w is an entry greater than all entries to its left. Define rec(w) to be the number of records of w.

For example, w = (2, 5, 7, 3, 1, 6, 8, 4) has records 2, 5, 7, 8, so rec(w) = 4.
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
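Theorem 5 is easy to test by exhaustion for small n. A sketch, not from the lecture, checking the claim on S4 and on the running example above.

```python
# Check that inv and maj have the same distribution over S4.
from itertools import permutations
from collections import Counter

def inv(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def maj(w):
    # sum of descent positions, 1-indexed
    return sum(i + 1 for i in range(len(w) - 1) if w[i] > w[i + 1])

S4 = list(permutations(range(1, 5)))
assert Counter(map(inv, S4)) == Counter(map(maj, S4))
# sanity check on the lecture's example (a permutation in S8):
assert maj((2, 5, 7, 3, 1, 6, 8, 4)) == 14
```

Both distributions are the coefficients of [4]_q! = 1 + 3q + 5q² + 6q³ + 5q⁴ + 3q⁵ + q⁶.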
Theorem 7. rec(w) and cyc(w) are equidistributed.

This can be proved by induction: for example, show the generating functions satisfy the same recurrence relation. But we prefer combinatorial proofs: maybe we can find a transformation with a bijective argument!

Proof. We'll find a bijection f : S_n → S_n sending w → w̃ such that cyc(w) = rec(w̃). Write permutations in cycle notation: if w = (a_1 ···)(a_2 ···)(a_3 ···) ···, there are many ways we can write this permutation down. Write w such that each a_i is the maximal element in its cycle, and sort the cycles so that a_1 < a_2 < ···. Then w̃ is just w, but instead of viewing it as cycle notation, view it as one-line notation!

For example, w = (125)(3784)(6) gives w = (512)(6)(8437), and hence w̃ = (5, 1, 2, 6, 8, 4, 3, 7).

Notice that a_1, a_2, ··· will be the records of w̃, so the number of cycles of w is the number of records of w̃, as desired. This is bijective (to go backwards, find the records and put the parentheses back), so we're done!
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
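The bijection in the proof of Theorem 7 is short to implement. A sketch, assuming one-line notation as tuples of 1-indexed values (function names are my own):

```python
def cycle_to_word(w):
    """Map w to w~ with cyc(w) = rec(w~): write each cycle starting from its
    maximal element, sort the cycles by their leading elements, then read
    the concatenated cycle notation as one-line notation."""
    n = len(w)
    seen, cycles = set(), []
    for start in range(1, n + 1):
        if start in seen:
            continue
        cyc = [start]
        seen.add(start)
        j = w[start - 1]
        while j != start:
            cyc.append(j)
            seen.add(j)
            j = w[j - 1]
        m = cyc.index(max(cyc))          # rotate so the max comes first
        cycles.append(cyc[m:] + cyc[:m])
    cycles.sort(key=lambda c: c[0])      # leaders increasing: a1 < a2 < ...
    return tuple(v for c in cycles for v in c)

def rec(w):
    # records: entries greater than everything to their left
    return sum(1 for i in range(len(w)) if all(w[j] < w[i] for j in range(i)))

# the running example: (2,5,7,3,1,6,8,4) = (125)(3784)(6) in cycle notation
w = (2, 5, 7, 3, 1, 6, 8, 4)
print(cycle_to_word(w))        # (5, 1, 2, 6, 8, 4, 3, 7)
print(rec(cycle_to_word(w)))   # 3 — one record per cycle of w
```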
we’re done! Definition 8 An exceedance in w is an index 1 ≤ i ≤ n such that wi > i . Let exc(w ) be the number of exceedances of w . For example, in (2, 5, 7, 3, 1, 6, 8, 4), we have 4 exceedances (not counting 6, which is a “weak exceedance”), so exc(w ) = 4. 2 Theorem 9 exc(w ) and des(w ) are equidistributed. In other words, “the number of exceedances is an Eulerian statistic.” Proof. Let’s start with a related idea: Definition 10 Define an anti-exceedance to be an index i such that wi < i . This is equidistributed with the number of exceedances. Why? Take the inverse permutation. If w (i ) > i , then w −1(w (i )) < w (i ): this means that i is an exceedance in w means w (i ) is an exceedance in w −1 . Claim 10.1. Given a map w → w˜ that converts cycle notation to one-line notation, the number of anti-exceedances in w is the number of descents in w˜. This is because i being a descent in w˜ means that because the i + 1th entry is not larger than w˜, i and i + 1 are in the same cycle. Then that means that i goes to something smaller than itself,
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
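Theorem 9 can also be confirmed by exhaustive enumeration for small n. A minimal sketch (helper names mine):

```python
from itertools import permutations

def exc(w):
    # strict exceedances: positions i (1-indexed) with w_i > i
    return sum(1 for i, wi in enumerate(w, start=1) if wi > i)

def des(w):
    # descents: positions i with w_i > w_{i+1}
    return sum(1 for i in range(len(w) - 1) if w[i] > w[i + 1])

def dist(stat, n):
    d = {}
    for w in permutations(range(1, n + 1)):
        d[stat(w)] = d.get(stat(w), 0) + 1
    return d

print(exc((2, 5, 7, 3, 1, 6, 8, 4)))   # 4, matching the example
print(dist(exc, 5) == dist(des, 5))    # True — exc is Eulerian
```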
making it an anti-exceedance!

Let's look a bit at Stirling numbers now. There are two kinds of Stirling numbers: the first kind and the second kind.

Definition 11. For 0 ≤ k ≤ n, define the Stirling numbers of the first kind by

s(n, k) = (−1)^{n−k} c(n, k),

where c(n, k), the signless Stirling number of the first kind, is the number of permutations in S_n with k cycles. By convention, let s(0, 0) = 1 and s(n, 0) = 0 for all n ≥ 1. Here, fixed points are included as cycles (of length 1).

Fact 12. The generating function for the Stirling numbers is

\sum_{k=0}^{n} c(n, k) x^k = x(x + 1) ⋯ (x + n − 1),

and we can equivalently find that

\sum_{k=0}^{n} s(n, k) x^k = x(x − 1) ⋯ (x − n + 1).

The first product is called the "rising power of x," while the second is called the "falling power of x." The latter is sometimes denoted (x)_n.

Definition 13. Define the Stirling number of the second kind S(n, k) to be the number of set-partitions of [n] into k non-empty blocks.
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
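Fact 12 can be checked by computing c(n, k) from its standard recurrence and comparing against the coefficients of the rising factorial. A sketch (the recurrence c(n,k) = c(n−1,k−1) + (n−1)c(n−1,k) is standard, though not stated in this excerpt):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def c(n, k):
    # signless Stirling numbers of the first kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return c(n - 1, k - 1) + (n - 1) * c(n - 1, k)

def rising_factorial_coeffs(n):
    # coefficients of x(x+1)...(x+n-1); index = power of x
    poly = [1]
    for j in range(n):
        new = [0] * (len(poly) + 1)
        for i, a in enumerate(poly):
            new[i + 1] += a       # multiply by x
            new[i] += a * j       # multiply by j
        poly = new
    return poly

n = 6
print([c(n, k) for k in range(n + 1)] == rising_factorial_coeffs(n))  # True
```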
We also use the convention S(0, 0) = 1 and S(n, 0) = 0 for n ≥ 1.

Example 14. One set-partition is π = (125|3478|6). The main difference between this and cycle notation is that the order within each block doesn't matter.

The Stirling numbers of the second kind are always positive: there are no negative signs like in the first kind.

Theorem 15.

\sum_{k=0}^{n} S(n, k) (x)_k = x^n.

Compare this to the generating function for Stirling numbers of the first kind: in that one, we input powers of x and get falling powers of x, and in this one, we input falling powers of x and get powers of x! This is a kind of duality.

MIT OpenCourseWare
https://ocw.mit.edu
18.212 Algebraic Combinatorics Spring 2019
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.
https://ocw.mit.edu/courses/18-212-algebraic-combinatorics-spring-2019/0b039163b47d51f947e6fdbea5b99844_MIT18_212S19_lec10.pdf
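Theorem 15 is a polynomial identity, so checking it at more than n integer points verifies it. A sketch using the standard recurrence S(n,k) = k·S(n−1,k) + S(n−1,k−1) (the recurrence is not stated in this excerpt, but is standard):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def falling(x, k):
    # falling power (x)_k = x(x-1)...(x-k+1)
    out = 1
    for j in range(k):
        out *= (x - j)
    return out

n = 5
print(all(sum(S(n, k) * falling(x, k) for k in range(n + 1)) == x ** n
          for x in range(10)))   # True
```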
LP Example
Stanley B. Gershwin
Massachusetts Institute of Technology

Consider the factory in Figure 1 that consists of three parallel machines. It makes a single product which can be produced using any one of the machines. The possible material flows are indicated. Assume that the cost ($/part) of using machine M_i is c_i, and that the maximum rate at which M_i can operate is µ_i. Assume that c_3 > c_2 > c_1 > 0 and let µ_3 = ∞. The total demand is D.

Problem: How should the demand be allocated among the machines to minimize cost?

Intuitive answer: We want to use the least expensive machine as much as possible, and the most expensive machine as little as possible. Therefore:

If D ≤ µ_1: x_1 = D, x_2 = x_3 = 0; cost = c_1 D.
If µ_1 < D ≤ µ_1 + µ_2: x_1 = µ_1, x_2 = D − µ_1, x_3 = 0; cost = c_1 µ_1 + c_2 (D − µ_1).
If µ_1 + µ_2 < D: x_1 = µ_1, x_2 = µ_2, x_3 = D − µ_1 − µ_2; cost = c_1 µ_1 + c_2 µ_2 + c_3 (D − µ_1 − µ_2).

LP formulation:

min c_1 x_1 + c_2 x_2 + c_3 x_3
such that
x_1 + x_2 + x_3 = D
x_1 ≤ µ_1
x_2 ≤ µ_2
x_i ≥ 0, i = 1, 2, 3

The constraint space is illustrated in Figure 2 for two different values of D. The arrows indicate the solution points.

[Figure 1: Factory — three parallel machines M_1, M_2, M_3.]
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
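The intuitive greedy allocation can be sketched in a few lines of Python; the concrete cost and capacity numbers below are illustrative, not from the note:

```python
def allocate(D, mu, c):
    """Greedy allocation: machines are assumed pre-sorted by increasing
    cost, and each takes as much load as it can before the next is used.
    mu[i] may be float('inf') for an uncapacitated machine."""
    x, remaining = [], D
    for cap in mu:
        load = min(remaining, cap)
        x.append(load)
        remaining -= load
    return x, sum(ci * xi for ci, xi in zip(c, x))

# illustrative numbers with c3 > c2 > c1 > 0 and mu3 = infinity
c = [1.0, 2.0, 5.0]
mu = [10.0, 15.0, float("inf")]
x, cost = allocate(30.0, mu, c)
print(x, cost)   # [10.0, 15.0, 5.0] 65.0
```

This reproduces the three cases: for D ≤ µ_1 only machine 1 runs; for larger D the load spills over in cost order.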
[Figure 2: Constraint space, shown for D < µ_1 and for µ_1 < D < µ_1 + µ_2.]

LP in Standard Form: Define x_4 and x_5 as the slack variables associated with the upper bounds on x_1 and x_2. Then

min c_1 x_1 + c_2 x_2 + c_3 x_3
such that
x_1 + x_2 + x_3 = D
x_1 + x_4 = µ_1
x_2 + x_5 = µ_2
x_i ≥ 0, i = 1, ..., 5

In that case

A = [ 1 1 1 0 0 ; 1 0 0 1 0 ; 0 1 0 0 1 ],  b = (D, µ_1, µ_2)^T,  c^T = (c_1, c_2, c_3, 0, 0).

Verification of solution guess:

1. D ≤ µ_1: We have guessed that when D is small, x_1 = D, x_2 = x_3 = 0. Then, also, x_4 = µ_1 − D, x_5 = µ_2. This is a feasible solution. It is also basic, in which the basic variables are x_1, x_4, x_5 and

A_B = [ 1 0 0 ; 1 1 0 ; 0 0 1 ],  A_N = [ 1 1 ; 0 0 ; 1 0 ],

c_B^T = ( c_1 0 0 ),  c_N^T = ( c_2 c_3 ).
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
It is easy to show that

A_B^{-1} A_N = [ 1 1 ; −1 −1 ; 1 0 ].

This is demonstrated at the end of this note. Therefore,

c_R^T = c_N^T − c_B^T A_B^{-1} A_N = ( c_2 c_3 ) − ( c_1 0 0 ) [ 1 1 ; −1 −1 ; 1 0 ] = ( c_2 − c_1, c_3 − c_1 ),

and, by assumption,

c_2 − c_1 > 0,  c_3 − c_1 > 0.

Therefore, since both components of c_R are positive, the solution we guessed is correct.

2. µ_1 < D ≤ µ_1 + µ_2: We have guessed that x_1 = µ_1, x_2 = D − µ_1, x_3 = 0. Then x_4 = 0, x_5 = µ_2 − D + µ_1. The basic variables are x_1, x_2, x_5 and

A_B = [ 1 1 0 ; 1 0 0 ; 0 1 1 ],  A_N = [ 1 0 ; 0 1 ; 0 0 ],

c_B^T = ( c_1 c_2 0 ),  c_N^T = ( c_3 0 ).
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
Then

A_B^{-1} A_N = [ 0 1 ; 1 −1 ; −1 1 ]

and

c_R^T = c_N^T − c_B^T A_B^{-1} A_N = ( c_3 − c_2, c_2 − c_1 ),

which is also componentwise positive.

3. µ_1 + µ_2 < D: We have guessed that x_1 = µ_1, x_2 = µ_2, x_3 = D − µ_1 − µ_2. Then x_4 = x_5 = 0 and x_1, x_2, x_3 are the basic variables. Therefore,

A_B = [ 1 1 1 ; 1 0 0 ; 0 1 0 ],  A_N = [ 0 0 ; 1 0 ; 0 1 ],

c_B^T = ( c_1 c_2 c_3 ),  c_N^T = ( 0 0 ).

Then

A_B^{-1} A_N = [ 1 0 ; 0 1 ; −1 −1 ]

and

c_R^T = c_N^T − c_B^T A_B^{-1} A_N = ( 0 0 ) − ( c_1 c_2 c_3 ) [ 1 0 ; 0 1 ; −1 −1 ] = ( c_3 − c_1, c_3 − c_2 ),
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
which is also componentwise positive.

[Figure 3: Cost as a function of D — piecewise linear with breakpoints at µ_1 and µ_1 + µ_2.]

Note that the cost as a function of D is shown in Figure 3.

To show that, in the first case,

A_B^{-1} A_N = [ 1 1 ; −1 −1 ; 1 0 ],

note that A_B^{-1} A_N is a matrix of two columns. The first column (s_1, s_2, s_3)^T satisfies

[ 1 0 0 ; 1 1 0 ; 0 0 1 ] (s_1, s_2, s_3)^T = (1, 0, 1)^T,

and the second column (t_1, t_2, t_3)^T satisfies

[ 1 0 0 ; 1 1 0 ; 0 0 1 ] (t_1, t_2, t_3)^T = (1, 0, 0)^T.

The equation for the first column is equivalent to

1 = s_1, 0 = s_1 + s_2, 1 = s_3, or (s_1, s_2, s_3) = (1, −1, 1),

and the equation for the second column is

1 = t_1, 0 = t_1 + t_2, 0 = t_3, or (t_1, t_2, t_3) = (1, −1, 0).
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
Putting the columns together,

A_B^{-1} A_N = [ 1 1 ; −1 −1 ; 1 0 ].

MIT OpenCourseWare
https://ocw.mit.edu
2.854 / 2.853 Introduction To Manufacturing Systems Fall 2016
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms.
https://ocw.mit.edu/courses/2-854-introduction-to-manufacturing-systems-fall-2016/0b2ff52cac042a60403e72ddce64c3cf_MIT2_854F16_LpExample.pdf
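The reduced-cost verification for case 1 can be reproduced exactly with rational arithmetic. A sketch; the concrete values of c_1, c_2, c_3 are chosen only to satisfy c_3 > c_2 > c_1 > 0, and the tiny solver is my own, not from the note:

```python
from fractions import Fraction

def solve(Amat, bvec):
    # solve A s = b by Gaussian elimination over the rationals
    n = len(Amat)
    M = [[Fraction(v) for v in row] + [Fraction(bvec[i])]
         for i, row in enumerate(Amat)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Case 1 (D <= mu1): basic variables x1, x4, x5
A_B = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
A_N_cols = [[1, 0, 1], [1, 0, 0]]            # columns of A for x2, x3
inv_cols = [solve(A_B, col) for col in A_N_cols]
print([[int(v) for v in col] for col in inv_cols])   # [[1, -1, 1], [1, -1, 0]]

# reduced costs c_R = c_N - c_B^T (A_B^{-1} A_N)
c1, c2, c3 = Fraction(1), Fraction(2), Fraction(5)
cB, cN = [c1, 0, 0], [c2, c3]
cR = [cN[j] - sum(cB[i] * inv_cols[j][i] for i in range(3)) for j in range(2)]
print(cR == [c2 - c1, c3 - c1])   # True — matches the note's result
```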
Chapter 1

Root Concepts of the Standard Model

The standard model of particle physics accurately describes a vast range of phenomena using a small number of parameters. Much of the power of the standard model arises from the fact that it embodies a few deep physical and mathematical concepts which are difficult, but not quite impossible, to make consistent with one another. These principles include, first of all, the general principles of special relativity and quantum mechanics. Those lead us to local quantum field theory. To these general principles, the standard model adds one more specific ingredient: local, or gauge, symmetry. Gauge symmetry is a vast generalization of the principle of electric charge conservation.

Formulating equations that embody these concepts is only half the job, because it is not at all straightforward to see that the resulting equations describe Nature. Specifically, apart from the special case of quantum electrodynamics, which forms only a small sub-theory of the standard model, local gauge symmetry is not manifest in the superficial appearance of phenomena. In applying the basic highly symmetrical equations to describe observed reality, which appears much less symmetric, two profound dynamical effects must be taken into consideration.

The first of these dynamical effects is the spontaneous breaking of local gauge symmetry. It is a form of superconductivity, but operating in
https://ocw.mit.edu/courses/8-325-relativistic-quantum-field-theory-iii-spring-2003/0b475fe7464cbd47a63424d0261905b8_chap1.pdf
(what we perceive as) empty space. In conventional superconductivity, roughly speaking, the electrons in a metal, which normally behave as a gas of independent particles, condense into a liquid of overlapping Cooper pairs. The dynamical response of this liquid screens the electromagnetic interactions, and renders magnetic fields short-ranged. That is the essence of the Meissner effect. In the context of the standard model, the role of Cooper pairs is played by a new form of matter, the so-called Higgs field, or more accurately the Higgs multiplet of fields. The Higgs multiplet is a theoretical construct that was invented specifically to fulfill this mission. According to the theory, a condensate of Higgs particles fills empty space, and its dynamical response screens the weak interactions, rendering them short-ranged. There is weighty indirect evidence for the existence of the Higgs multiplet, but at present direct observation of its quanta, the Higgs particles, remains a major unmet challenge, as we shall discuss in detail later.

The second of these dynamical effects is confinement of quarks. Confinement should be considered together with the closely related property of asymptotic freedom, which
https://ocw.mit.edu/courses/8-325-relativistic-quantum-field-theory-iii-spring-2003/0b475fe7464cbd47a63424d0261905b8_chap1.pdf
is in a sense its inverse. Confinement and asymptotic freedom occur in the sector of the standard model dealing with the strong interaction, quantum chromodynamics or QCD. In this context, local gauge symmetry takes a most peculiar form. The fundamental building-blocks of the theory – quarks and gluons – transform non-trivially under an SU(3) local gauge symmetry. Indeed, the so-called color charges of these particles, which specify their transformation properties, are entirely responsible for their QCD interactions. But the physical particles in QCD are all singlets, which do not transform under the gauge symmetry. They are formed out of combinations of quarks, anti-quarks, and gluons in which the color charges all cancel. Although the colored building blocks are real and tangible, and reveal their existence quite directly to suitable probes, they cannot be separated out and examined individually. Attempts to pull them apart call ever-growing forces into play, and are inevitably frustrated. This is confinement. Yet when the color charges are close together, or when we consider processes that involve large changes in energy and momentum, the forces are feeble and the radiation is rare. This is asymptotic freedom. These unusual behaviors – fundamental forces that grow with distance, radiators that
https://ocw.mit.edu/courses/8-325-relativistic-quantum-field-theory-iii-spring-2003/0b475fe7464cbd47a63424d0261905b8_chap1.pdf
become quiescent as they are shaken violently – were once thought to be paradoxical or even problematic. They are now understood to be a general feature of many model theories with local gauge symmetry. They are deeply related, respectively, to the most basic formulation of local gauge symmetry, and to the ultimate consistency of quantum field theory. In QCD itself, asymptotic freedom is born of an interplay among our three basic concepts of relativity, quantum mechanics, and gauge symmetry. All three play crucial roles, as we shall see in detail later.
https://ocw.mit.edu/courses/8-325-relativistic-quantum-field-theory-iii-spring-2003/0b475fe7464cbd47a63424d0261905b8_chap1.pdf
MIT OpenCourseWare
http://ocw.mit.edu
18.917 Topics in Algebraic Topology: The Sullivan Conjecture
Fall 2007
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.

Free Modules (Lecture 7)

We first recall a bit of notation: If I = (i_1, ..., i_k) is a sequence of integers, we write Sq^I for the composition product Sq^{i_1} ⋯ Sq^{i_k} in the Steenrod algebra A (or the big Steenrod algebra A^{Big}). We say that I is admissible if i_j ≥ 2 i_{j+1} for 1 ≤ j < k. The excess of I is defined to be the expression

i_1 − i_2 − i_3 − ... − i_k = (i_1 − 2i_2) + (i_2 − 2i_3) + ... + (i_{k−1} − 2i_k) + i_k.

We wanted to prove that the Steenrod algebra has a basis {Sq^I}, where I ranges over the admissible sequences of positive integers. This was reduced to the following assertion:

Proposition 1. Let F(n) denote the free unstable A-module generated by one generator ν_n in degree n. Then the collection of elements {Sq^I ν_n} is linearly independent in F(n), where I ranges over admissible sequences of positive integers having excess ≤ n.

To prove this, it will suffice to find any unstable A-module M with an element x ∈ M^n such that the set {Sq^I x} is linearly independent in M (here again I ranges over admissible positive sequences of excess ≤ n). To see this, we observe that the freeness of F(n) implies that there is a (unique) map φ : F(n) → M with φ(ν_n) = x.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
Consequently, any linear relation among the expressions {Sq^I ν_n} would entail a linear relation among the expressions {Sq^I x}.

It will therefore suffice to choose M to be some sufficiently nontrivial unstable A-module. We have seen that for any topological space X, the cohomology H^*(X) has the structure of an unstable module over the Steenrod algebra. The most interesting example we have studied so far is the case where X = BΣ_2 ≃ RP^∞. In this case, the cohomology ring H^*(X) is isomorphic to a polynomial ring F_2[t], and the action of the Steenrod algebra is described by the formula

Sq^k t^m = \binom{m}{k} t^{m+k}.

We can obtain a more interesting example by taking X to be a product of n copies of the space RP^∞. In this case, the cohomology of X can be identified with a polynomial ring F_2[t_1, ..., t_n] in several variables (obtained by pulling back the cohomology class t along the n different projections). Using the Cartan formula

Sq^k(xy) = \sum_{k = k' + k''} Sq^{k'}(x) Sq^{k''}(y),

we deduce that the action of the Steenrod algebra on H^*(X) is described by the following formula:

Sq^k(t_1^{a_1} ⋯ t_n^{a_n}) = \sum_{k = k_1 + ... + k_n} \binom{a_1}{k_1} ⋯ \binom{a_n}{k_n} t_1^{a_1+k_1} ⋯ t_n^{a_n+k_n}.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
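The multivariable action of Sq^k can be prototyped directly from the formula above. A minimal sketch over F_2, representing a polynomial as a set of exponent tuples (the representation and helper names are my own):

```python
from math import comb
from collections import defaultdict

def sq(k, poly, nvars):
    """Apply Sq^k to an F_2 polynomial given as a set of exponent tuples,
    using Sq^k(t1^a1 ... tn^an) =
      sum_{k1+...+kn=k} C(a1,k1)...C(an,kn) t1^(a1+k1)...tn^(an+kn) mod 2."""
    def splits(k, parts):
        # all ways to write k as an ordered sum of `parts` nonnegative ints
        if parts == 1:
            yield (k,)
            return
        for first in range(k + 1):
            for rest in splits(k - first, parts - 1):
                yield (first,) + rest

    out = defaultdict(int)
    for mono in poly:
        for ks in splits(k, nvars):
            coeff = 1
            for a, ki in zip(mono, ks):
                coeff *= comb(a, ki) % 2
            if coeff:
                out[tuple(a + ki for a, ki in zip(mono, ks))] += 1
    return {m for m, v in out.items() if v % 2 == 1}

# single variable: Sq^k t^m = C(m,k) t^(m+k) mod 2
print(sq(1, {(2,)}, 1))   # C(2,1) = 2 = 0 mod 2, so the result is empty
print(sq(2, {(2,)}, 1))   # C(2,2) = 1, giving {(4,)}

# Sq^1 on x = t1 t2 squares one variable at a time
print(sq(1, {(1, 1)}, 2) == {(2, 1), (1, 2)})   # True
```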
We now make a crucial observation about the formula above. Suppose that each exponent a_i is a power of 2. The binomial coefficient \binom{a_i}{k_i} is equal to 1 if k_i = 0 or k_i = a_i, and vanishes otherwise (since we are working over the field F_2). Moreover, the exponents appearing on the right hand side have the form a_i + k_i, which will again be a power of two if k_i = 0 or k_i = a_i. In other words, we can rewrite the preceding formula as follows:

Sq^k(t_1^{2^{b_1}} ⋯ t_n^{2^{b_n}}) = \sum_{k = \delta_1 2^{b_1} + ... + \delta_n 2^{b_n}} t_1^{2^{b_1+\delta_1}} ⋯ t_n^{2^{b_n+\delta_n}},

where the sum is taken over δ_1, ..., δ_n ∈ {0, 1}.

Let x = t_1 ⋯ t_n ∈ F_2[t_1, ..., t_n]. Then, for every sequence of integers I, the expression Sq^I(x) can be identified with some polynomial f(t_1, ..., t_n) ∈ F_2[t_1, ..., t_n]. This polynomial necessarily has the following properties:

(a) Every monomial appearing in f has the form t_1^{2^{b_1}} ⋯ t_n^{2^{b_n}}.

(b) The polynomial f is symmetric in its arguments.

Let M denote the subspace of F_2[t_1, ..., t_n] consisting of those polynomials which satisfy (a) and (b) above. We observe that M is invariant under the action of the Steenrod algebra A, and is therefore an unstable
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
A-module in its own right. Moreover, M contains the element x = t_1 ⋯ t_n of degree n. To complete the proof of Proposition 1, it will suffice to show the following:

Proposition 2. The expressions {Sq^I(x)} form a basis for M, where I ranges over admissible sequences of positive integers having excess ≤ n.

Let us now introduce a bit of notation. Given a monomial f = t_1^{a_1} ⋯ t_n^{a_n}, let

σ(f) = \sum_{g ∈ Σ_n/G} f^g

be the symmetric polynomial obtained by summing the conjugates of f; here we take G to be the stabilizer of f in Σ_n, so that f itself appears in this sum exactly once. For example, if n = 2, we have

σ(t_1^a t_2^b) = t_1^a t_2^b if a = b,  and  σ(t_1^a t_2^b) = t_1^a t_2^b + t_1^b t_2^a if a ≠ b.

The space M has a basis consisting of symmetric polynomials of the form σ(t_1^{2^{b_1}} ⋯ t_n^{2^{b_n}}), where 0 ≤ b_1 ≤ ... ≤ b_n. It will be convenient to index this set of polynomials a little bit differently. Given a sequence of nonnegative integers λ = (λ_0, ..., λ_k) with λ_0 + ... + λ_k = n, there is a unique sequence 0 ≤ b_1 ≤ ... ≤ b_n
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
such that λ_i is the cardinality of the set {j : b_j = i}. We then set f_λ = σ(t_1^{2^{b_1}} ⋯ t_n^{2^{b_n}}). Thus M has a basis consisting of the polynomials {f_λ}, where λ ranges over sequences of nonnegative integers (λ_0, ..., λ_k) such that n = λ_0 + ... + λ_k and λ_k is nonzero.

There is a corresponding indexing for positive admissible monomials of the form Sq^I. Let I = (i_1, ..., i_k) be a sequence of positive integers. If I is admissible, then the integers

λ_1 = i_1 − 2i_2, λ_2 = i_2 − 2i_3, ..., λ_{k−1} = i_{k−1} − 2i_k

are all nonnegative. We then set λ_k = i_k, which is positive so long as I is positive. The sum λ_1 + ... + λ_k = i_1 − i_2 − ... − i_k is equal to the excess of I. Thus, if I has excess ≤ n, we can define λ_0 = n − (λ_1 + ... + λ_k), to obtain a sequence of nonnegative integers λ = (λ_0, ..., λ_k), where λ_k is positive. Conversely, given such a sequence of integers, we can construct a unique admissible sequence

I(λ) = (2^{k−1} λ_k + ... + λ_1, ..., 2λ_k + λ_{k−1}, λ_k)

of excess ≤ n. We will denote this admissible sequence by I(λ).

We now wish to compare the expressions {Sq^{I(λ)}(x)} with the basis {f_λ} for M. They do not coincide, but we get the next best thing: the translation between these two bases is upper triangular.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
To be more precise, we need to introduce an ordering on our index set. Let E be the collection of all finite sequences λ = (λ_0, ..., λ_k) of nonnegative integers (here k is allowed to vary) such that λ_k > 0 and λ_0 + ... + λ_k = n.

We equip E with the following lexicographical ordering: λ < λ' if there exists an integer i such that λ_i < λ'_i while λ_j = λ'_j for j > i. Here we agree to the convention that λ_i = 0 if i is larger than the length of the sequence λ.

To complete the proof of Proposition 2, it will suffice to verify the following:

Proposition 3. Let λ ∈ E. Then Sq^{I(λ)}(x) = f_λ + \sum_α f_α, where α ranges over some subset of {λ' ∈ E : λ' < λ}.

Proof. We compute:

x = σ(t_1 ⋯ t_n)
Sq^{λ_k}(x) = σ(t_1^2 ⋯ t_{λ_k}^2 t_{λ_k+1} ⋯ t_n)
Sq^{λ_{k−1} + 2λ_k} Sq^{λ_k}(x) = σ(t_1^4 ⋯ t_{λ_k}^4 t_{λ_k+1}^2 ⋯ t_{λ_k+λ_{k−1}}^2 t_{λ_k+λ_{k−1}+1} ⋯ t_n) + lower order
...
Sq^{I(λ)}(x) = f_λ + lower order.

We now wish to reformulate some of the above ideas, using Kuhn's theory of "generic representations".
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
In what follows, we let V denote a finite dimensional vector space over F_2, and let V^∨ denote its dual space. We observe that

H^*(BV^∨) = H^*(RP^∞ × ... × RP^∞) ≃ F_2[t_1, ..., t_N],

where N is the dimension of V. However, we can describe this cohomology ring in a more invariant way: it is given by the symmetric algebra Sym^*(V) generated by the vector space V ≃ H^1(BV^∨). Every admissible monomial Sq^I in the Steenrod algebra of degree k determines a map H^*(BV^∨) → H^{*+k}(BV^∨). Restricting to a particular degree n, we get a map Sym^n(V) → Sym^{n+k}(V). This map depends functorially on V, and vanishes if the excess of I is larger than n.

To study the situation more systematically, let Vect^f denote the category of finite dimensional vector spaces over F_2, and Vect the category of all vector spaces over F_2. We let Fun denote the category of functors from Vect^f to Vect.

Remark 4. Kuhn refers to objects of Fun as generic representations. If F : Vect^f → Vect is a functor, then for every finite dimensional vector space V ∈ Vect^f, we obtain a new vector space F(V) which is equipped with an action of Aut(V) ≃ GL_n(F_2). In other words, we can think of F as providing a family of representations of the groups GL_n(F_2), which are somehow connected to one another as n grows.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
Example 5. For every nonnegative integer n, the functor V ↦ Sym^n(V) is an object of Fun, which we will denote by Sym^n. Let Sym^* denote the direct sum of these functors, so that Sym^*(V) is the free algebra generated by V. If Sq^I is an admissible monomial (or any element of the Steenrod algebra), then Sq^I determines a natural transformation Sym^n → Sym^*; in other words, a morphism in the category Fun. This natural transformation vanishes if the excess of I is larger than n.

Proposition 6. Let n be a positive integer. Then the natural transformations {Sq^I} form a basis for Hom_Fun(Sym^n, Sym^*), where I ranges over positive admissible sequences of excess ≤ n.

Proof. We first show that the expressions Sq^I are linearly independent in Hom_Fun(Sym^n, Sym^*). For this, it suffices to choose a vector space V such that the induced maps Sq^I are linearly independent in Hom_{F_2}(Sym^n(V), Sym^*(V)). Let V be the free vector space generated by a basis {t_1, ..., t_n}, and let x = t_1 ⋯ t_n; then it will suffice to show that the elements {Sq^I(x)} are linearly independent in Sym^*(V). This follows immediately from Proposition 2.

We now wish to prove that Hom_Fun(Sym^n, Sym^*) is spanned by the Steenrod operations {Sq^I}.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
For this, we need to compute Hom_Fun(Sym^n, Sym^*). Suppose α : Sym^n → Sym^* is a natural transformation. Choose V = F_2{t_1, ..., t_n} as above, and let x = t_1 ⋯ t_n ∈ Sym^n(V). Then α(x) = f(t_1, ..., t_n) ∈ F_2[t_1, ..., t_n] ≃ Sym^*(V), for some polynomial f. The construction α ↦ f determines a linear map

φ : Hom_Fun(Sym^n, Sym^*) → F_2[t_1, ..., t_n].

We first claim that φ is injective. For suppose that φ(α) = 0. Let W be any vector space over F_2. We wish to prove that the induced map α_W : Sym^n(W) → Sym^*(W) is equal to zero. Since α_W is a linear map, it will suffice to show that α_W vanishes on each monomial w_1 ⋯ w_n in Sym^n(W). But in this case we have a map V → W, given by t_i ↦ w_i. This linear map determines a commutative diagram relating α_V : Sym^n(V) → Sym^*(V) to α_W : Sym^n(W) → Sym^*(W), so that

α_W(w_1 ⋯ w_n) = f(w_1, ..., w_n) = 0 ∈ Sym^*(W).

We now wish to describe the image of the map φ. Fix α : Sym^n → Sym^*, and let f = φ(α). Since x = t_1 ⋯ t_n ∈ Sym^n(V) is invariant under the permutation action of the symmetric group, we deduce immediately that
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
f is a symmetric polynomial.

Let V' be the F_2-vector space spanned by a basis {t_1, ..., t_n, t_{n+1}}. Then we have an equation

t_1 ⋯ t_{n−1}(t_n + t_{n+1}) = t_1 ⋯ t_n + t_1 ⋯ t_{n−1} t_{n+1}.

Since the map α_{V'} is linear, we get

f(t_1, ..., t_{n−1}, t_n + t_{n+1}) = f(t_1, ..., t_n) + f(t_1, ..., t_{n−1}, t_{n+1}).

In other words, the polynomial f is additive in its last argument. If we write

f(t_1, ..., t_n) = \sum_k g_k(t_1, ..., t_{n−1}) t_n^k,

then we deduce that g_k(t_1, ..., t_{n−1}) vanishes unless k is a power of 2. Since f is symmetric, we can apply the same reasoning to each argument of f. It follows that f can be written as a sum of monomials of the form t_1^{2^{b_1}} ⋯ t_n^{2^{b_n}}. Since f is symmetric, we conclude that f ∈ M ⊆ F_2[t_1, ..., t_n].

We therefore have a factorization φ : Hom_Fun(Sym^n, Sym^*) → M ⊆ F_2[t_1, ..., t_n]. The map φ carries Sq^I to Sq^I(x). Proposition 2 implies that M is generated by these expressions, so that φ restricts to an isomorphism Hom_Fun(Sym^n, Sym^*) ≃ M. Since the expressions {Sq^I(x)} form a basis for M (where I ranges over admissible positive sequences of excess ≤ n), we conclude that
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
the expressions {Sq^I} form a basis for Hom_Fun(Sym^n, Sym^*).

This gives another approach to constructing the Steenrod algebra (at least with mod-2 coefficients): it can be regarded as an algebra of natural transformations between functors of the form Sym^n : Vect^f → Vect. We will return to this point of view in the next lecture.
https://ocw.mit.edu/courses/18-917-topics-in-algebraic-topology-the-sullivan-conjecture-fall-2007/0b65ed5433a1570711db5113fe4576a0_lecture7.pdf
LECTURE 6

• Readings: Sections 2.4-2.6

Lecture outline
• Review PMF, expectation, variance
• Conditional PMF
• Geometric PMF
• Total expectation theorem
• Joint PMF of two random variables
• Independence

Review
• Random variable X: function from sample space to the real numbers
• PMF (for discrete random variables):
• Expectation:
• Variance:

Average Speed vs. Average Time - 1
Average Speed vs. Average Time - 2

Conditional Expectation
• Recall:
• Definition:

Geometric PMF
• X: waiting time for the #1 bus at the MIT stop
• What is the expected waiting time?
• What is the expected waiting time conditioned on the fact that you have already waited 2 minutes?

Geometric PMF
• Expected time:
• Memoryless property: given that X > 2, the r.v. X − 2 has the same geometric PMF.

Total Expectation Theorem
• Partition of sample space into disjoint events:
• Geometric example:
• Solve to get

Geometric R.V.
• Geometric example:
• Solve to get

Joint PMFs

Independent Random Variables
• Random variables X, Y and Z are independent if (for all x, y and z):
• Example: Independent?
• What if we condition on X ≤ 2 and Y ≥ 3?
https://ocw.mit.edu/courses/6-041-probabilistic-systems-analysis-and-applied-probability-spring-2006/0ba32c276891edcfb233b7dd35e2d594_lec06.pdf
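The memoryless property of the geometric PMF on the slides can be checked numerically. A sketch, assuming the standard parameterization p_X(k) = (1−p)^{k−1} p for k = 1, 2, ... (the value p = 0.3 is illustrative):

```python
p = 0.3

def pmf(k):
    # geometric PMF: probability the first success is on trial k
    return (1 - p) ** (k - 1) * p

# Memoryless property: P(X - 2 = k | X > 2) = P(X = k)
P_gt_2 = sum(pmf(k) for k in range(3, 200))   # numerically ~ (1-p)^2
for k in range(1, 6):
    assert abs(pmf(k + 2) / P_gt_2 - pmf(k)) < 1e-9

# Expected value E[X] = 1/p, via a long truncated sum
EX = sum(k * pmf(k) for k in range(1, 2000))
print(abs(EX - 1 / p) < 1e-9)   # True
```

The conditioning step divides out the probability of having already waited 2 minutes, which shifts the distribution without changing its shape.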