…(k) = d_i V_i, so A = ⊕_{i=1}^r d_i V_i, and V = ⊕_{i=1}^r m_i V_i, as desired. ∎
Exercise. The goal of this exercise is to give an alternative proof of Theorem 2.6, not using
any of the previous results of Chapter 2.
Let A_1, A_2, ..., A_n be n algebras with units 1_1, 1_2, ..., 1_n, respectively. Let A = A_1 ⊕ A_2 ⊕ ... ⊕ A_n. Clearly, 1_i 1_j = δ_ij 1_i, and the unit of A is 1 = 1_1 + 1_2 + ... + 1_n.

For every representation V of A, it is easy to see that 1_i V is a representation of A_i for every i ∈ {1, 2, ..., n}. Conversely, if V_1, V_2, ..., V_n are representations of A_1, A_2, ..., A_n, respectively, then V_1 ⊕ V_2 ⊕ ... ⊕ V_n canonically becomes a representation of A (with (a_1, a_2, ..., a_n) ∈ A acting on V_1 ⊕ V_2 ⊕ ... ⊕ V_n as (v_1, v_2, ..., v_n) ↦ (a_1 v_1, a_2 v_2, ..., a_n v_n)).
(a) Show that a representation V of A is irreducible if and only if 1_i V is an irreducible representation of A_i for exactly one i ∈ {1, 2, ..., n}, while 1_i V = 0 for all the other i. Thus, classify the irreducible representations of A in terms of those of A_1, A_2, ..., A_n.
(b) Let d ∈ ℕ. Show that the only irreducible representation of Mat_d(k) is k^d, and every finite dimensional representation of Mat_d(k) is a direct sum of copies of k^d.

Hint: For every (i, j) ∈ {1, 2, ..., d}², let E_ij ∈ Mat_d(k) be the matrix with 1 in the ith row of the jth column and 0's everywhere else. Let V be a finite dimensional representation of Mat_d(k). Show that V = E_11 V ⊕ E_22 V ⊕ ... ⊕ E_dd V, and that φ_i : E_11 V → E_ii V, v ↦ E_i1 v is an isomorphism for every i ∈ {1, 2, ..., d}. For every v ∈ E_11 V, denote S(v) = ⟨E_11 v, E_21 v, ..., E_d1 v⟩. Prove that S(v) is a subrepresentation of V isomorphic to k^d (as a representation of Mat_d(k)), and that v ∈ S(v). Conclude that V = S(v_1) ⊕ S(v_2) ⊕ ... ⊕ S(v_k), where {v_1, v_2, ..., v_k} is a basis of E_11 V.
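The steps of the hint are easy to check numerically in a small case. The numpy sketch below is an illustration, not part of the original notes: the representation V = (k^d)^m (with Mat_d(k) acting block-diagonally) and the random vector v are arbitrary choices made for the demonstration. It verifies that V decomposes as the sum of the subspaces E_ii V and that S(v) has dimension d for a generic v ∈ E_11 V.

```python
import numpy as np

d, m = 3, 2                      # Mat_d(k) acting on V = (k^d)^m
n = d * m

def act(M):
    # a in Mat_d(k) acts on V = k^d + k^d as a block-diagonal matrix
    return np.kron(np.eye(m), M)

def E(i, j):
    # the matrix unit E_ij
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

# V decomposes as the direct sum of the subspaces E_ii V
dims = [np.linalg.matrix_rank(act(E(i, i))) for i in range(d)]
assert sum(dims) == n

# S(v) = span{E_11 v, E_21 v, ..., E_d1 v} is d-dimensional for generic v in E_11 V
rng = np.random.default_rng(0)
v = act(E(0, 0)) @ rng.standard_normal(n)      # a generic vector of E_11 V
S = np.column_stack([act(E(i, 0)) @ v for i in range(d)])
assert np.linalg.matrix_rank(S) == d
```

Running the same check for other (d, m) behaves the same way, which is exactly the content of part (b).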
(c) Conclude Theorem 2.6.
2.4 Filtrations
Let A be an algebra. Let V be a representation of A. A (finite) filtration of V is a sequence of subrepresentations 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_n = V.
Lemma 2.8. Any finite dimensional representation V of an algebra A admits a finite filtration 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_n = V such that the successive quotients V_i/V_{i−1} are irreducible.
Proof. The proof is by induction in dim(V). The base is clear, and only the induction step needs to be justified. Pick an irreducible subrepresentation V_1 ⊂ V, and consider the representation U = V/V_1. Then by the induction assumption U has a filtration 0 = U_0 ⊂ U_1 ⊂ ... ⊂ U_{n−1} = U such that U_i/U_{i−1} are irreducible. Define V_i for i ≥ 2 to be the preimages of U_{i−1} under the tautological projection V → V/V_1 = U. Then 0 = V_0 ⊂ V_1 ⊂ V_2 ⊂ ... ⊂ V_n = V is a filtration of V with the desired property. ∎
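As a concrete instance of the lemma, one can take A to be the algebra of upper triangular n × n matrices acting on V = k^n: the coordinate subspaces V_i = span(e_1, ..., e_i) form a filtration whose successive quotients are 1-dimensional, hence irreducible. The numpy sketch below (an illustration, not part of the notes) checks that each V_i is indeed a subrepresentation.

```python
import numpy as np

n = 4
rng = np.random.default_rng(6)
a = np.triu(rng.standard_normal((n, n)))   # a generic upper triangular matrix

# V_i = span(e_1, ..., e_i); an upper triangular a maps V_i into V_i
for i in range(1, n + 1):
    basis = np.eye(n)[:, :i]               # columns e_1, ..., e_i
    image = a @ basis
    assert np.allclose(image[i:, :], 0)    # the image stays inside V_i
```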
2.5 Finite dimensional algebras
Definition 2.9. The radical of a finite dimensional algebra A is the set of all elements of A which
act by 0 in all irreducible representations of A. It is denoted Rad(A).
Proposition 2.10. Rad(A) is a two-sided ideal.
Proof. Easy.
Proposition 2.11. Let A be a finite dimensional algebra.
(i) Let I be a nilpotent two-sided ideal in A, i.e., I^n = 0 for some n. Then I ⊆ Rad(A).
(ii) Rad(A) is a nilpotent ideal. Thus, Rad(A) is the largest nilpotent two-sided ideal in A.
Proof. (i) Let V be an irreducible representation of A. Let v ∈ V. Then Iv ⊆ V is a subrepresentation. If Iv ≠ 0 then Iv = V, so there is x ∈ I such that xv = v. Then x^n = 0, yet x^n v = v ≠ 0, a contradiction. Thus Iv = 0, so I acts by 0 in V and hence I ⊆ Rad(A).

(ii) Let 0 = A_0 ⊂ A_1 ⊂ ... ⊂ A_n = A be a filtration of the regular representation of A by subrepresentations such that A_{i+1}/A_i are irreducible. It exists by Lemma 2.8. Let x ∈ Rad(A). Then x acts on A_{i+1}/A_i by zero, so x maps A_{i+1} to A_i. This implies that Rad(A)^n = 0, as desired. ∎
Theorem 2.12. A finite dimensional algebra A has only finitely many irreducible representations
Vi up to isomorphism, these representations are finite dimensional, and
A/Rad(A) ≅ ⊕_i End V_i.
Proof. First, for any irreducible representation V of A, and for any nonzero v ∈ V, Av ⊆ V is a finite dimensional subrepresentation of V. (It is finite dimensional as A is finite dimensional.) As V is irreducible and Av ≠ 0, V = Av and V is finite dimensional.
Next, suppose we have non-isomorphic irreducible representations V1, V2, . . . , Vr. By Theorem
2.5, the homomorphism
⊕_i ρ_i : A → ⊕_i End V_i

is surjective. So r ≤ Σ_i dim End V_i ≤ dim A. Thus, A has only finitely many non-isomorphic irreducible representations (at most dim A).
Now, let V1, V2, . . . , Vr be all non-isomorphic irreducible finite dimensional representations of
A. By Theorem 2.5, the homomorphism
⊕_i ρ_i : A → ⊕_i End V_i

is surjective. The kernel of this map, by definition, is exactly Rad(A). ∎
Corollary 2.13. Σ_i (dim V_i)² ≤ dim A, where the V_i's are the irreducible representations of A.

Proof. As dim End V_i = (dim V_i)², Theorem 2.12 implies that dim A − dim Rad(A) = Σ_i dim End V_i = Σ_i (dim V_i)². As dim Rad(A) ≥ 0, we get Σ_i (dim V_i)² ≤ dim A. ∎
Example 2.14. 1. Let A = k[x]/(xn). This algebra has a unique irreducible representation, which
is a 1-dimensional space k, in which x acts by zero. So the radical Rad(A) is the ideal (x).
2. Let A be the algebra of upper triangular n by n matrices. It is easy to check that the
irreducible representations of A are Vi, i = 1, ..., n, which are 1-dimensional, and any matrix x acts
by xii. So the radical Rad(A) is the ideal of strictly upper triangular matrices (as it is a nilpotent
ideal and contains the radical). A similar result holds for block-triangular matrices.
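Example 2 can be verified directly in a small case. The numpy sketch below (an illustration, not part of the notes; n = 4 is an arbitrary choice) checks that the strictly upper triangular matrices form a nilpotent ideal, and that each of them acts by 0 in every 1-dimensional representation x ↦ x_ii.

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)

def strictly_upper():
    # a random element of the ideal of strictly upper triangular matrices
    return np.triu(rng.standard_normal((n, n)), k=1)

# the ideal is nilpotent: any product of n strictly upper triangular matrices is 0
P = np.eye(n)
for _ in range(n):
    P = P @ strictly_upper()
assert np.allclose(P, 0)

# each V_i is 1-dimensional: an upper triangular x acts by its diagonal entry x_ii,
# so a strictly upper triangular matrix acts by 0 in every V_i
x = strictly_upper()
assert np.allclose(np.diag(x), 0)
```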
Definition 2.15. A finite dimensional algebra A is said to be semisimple if Rad(A) = 0.
Proposition 2.16. For a finite dimensional algebra A, the following are equivalent:
1. A is semisimple.
2. Σ_i (dim V_i)² = dim A, where the V_i's are the irreducible representations of A.

3. A ≅ ⊕_i Mat_{d_i}(k) for some d_i.

4. Any finite dimensional representation of A is completely reducible (that is, isomorphic to a direct sum of irreducible representations).
5. A is a completely reducible representation of A.
Proof. As dim A − dim Rad(A) = Σ_i (dim V_i)², clearly dim A = Σ_i (dim V_i)² if and only if Rad(A) = 0. Thus, (1) ⇔ (2).

Next, by Theorem 2.12, if Rad(A) = 0, then clearly A ≅ ⊕_i Mat_{d_i}(k) for d_i = dim V_i. Thus, (1) ⇒ (3). Conversely, if A ≅ ⊕_i Mat_{d_i}(k), then by Theorem 2.6, Rad(A) = 0, so A is semisimple. Thus (3) ⇒ (1).

Next, (3) ⇒ (4) by Theorem 2.6. Clearly (4) ⇒ (5). To see that (5) ⇒ (3), let A = ⊕_i n_i V_i. Consider End_A(A) (endomorphisms of A as a representation of A). As the V_i's are pairwise non-isomorphic, by Schur's lemma, no copy of V_i in A can be mapped to a distinct V_j. Also, again by Schur's lemma, End_A(V_i) = k. Thus, End_A(A) ≅ ⊕_i Mat_{n_i}(k). But End_A(A) = A^op by Problem 1.22, so A^op ≅ ⊕_i Mat_{n_i}(k). Thus, A ≅ (⊕_i Mat_{n_i}(k))^op = ⊕_i Mat_{n_i}(k), as desired. ∎
2.6 Characters of representations
Let A be an algebra and V a finite-dimensional representation of A with action ρ. Then the character of V is the linear function χ_V : A → k given by

χ_V(a) = tr|_V(ρ(a)).

If [A, A] is the span of commutators [x, y] := xy − yx over all x, y ∈ A, then [A, A] ⊆ ker χ_V. Thus, we may view the character as a mapping χ_V : A/[A, A] → k.
Exercise. Show that if W ⊆ V are finite dimensional representations of A, then χ_V = χ_W + χ_{V/W}.
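The point of this exercise is that when W ⊆ V, each ρ_V(a) is block triangular with diagonal blocks ρ_W(a) and ρ_{V/W}(a), so the trace splits. The numpy illustration below uses random hypothetical blocks (not from the notes) to demonstrate the trace identity.

```python
import numpy as np

rng = np.random.default_rng(2)
dW, dQ = 2, 3                      # dim W and dim V/W

# a block upper triangular operator on V preserving W:
# rho_V(a) = [[rho_W(a), *], [0, rho_{V/W}(a)]]
A = rng.standard_normal((dW, dW))          # action on the subrepresentation W
B = rng.standard_normal((dQ, dQ))          # induced action on the quotient V/W
C = rng.standard_normal((dW, dQ))          # arbitrary off-diagonal block
rho_V = np.block([[A, C], [np.zeros((dQ, dW)), B]])

# chi_V(a) = chi_W(a) + chi_{V/W}(a)
assert np.isclose(np.trace(rho_V), np.trace(A) + np.trace(B))
```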
Theorem 2.17. (i) Characters of (distinct) irreducible finite-dimensional representations of A are
linearly independent.
(ii) If A is a finite-dimensional semisimple algebra, then these characters form a basis of
(A/[A, A])∗.
Proof. (i) If V1, . . . , Vr are nonisomorphic irreducible finite-dimensional representations of A, then
ρ_{V_1} ⊕ ... ⊕ ρ_{V_r} : A → End V_1 ⊕ ... ⊕ End V_r is surjective by the density theorem, so χ_{V_1}, ..., χ_{V_r} are linearly independent. (Indeed, if Σ_i λ_i χ_{V_i}(a) = 0 for all a ∈ A, then Σ_i λ_i Tr(M_i) = 0 for all M_i ∈ End_k V_i. But each Tr(M_i) can range independently over k, so it must be that λ_1 = ... = λ_r = 0.)
(ii) First we prove that [Mat_d(k), Mat_d(k)] = sl_d(k), the set of all matrices with trace 0. It is clear that [Mat_d(k), Mat_d(k)] ⊆ sl_d(k). If we denote by E_ij the matrix with 1 in the ith row of the jth column and 0's everywhere else, we have [E_ij, E_jm] = E_im for i ≠ m, and [E_{i,i+1}, E_{i+1,i}] = E_ii − E_{i+1,i+1}. Now {E_im, i ≠ m} ∪ {E_ii − E_{i+1,i+1}} forms a basis in sl_d(k), so indeed [Mat_d(k), Mat_d(k)] = sl_d(k), as claimed.
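The matrix-unit identities used here are easy to verify numerically. A numpy sketch (illustrative only; d = 4 and the indices are arbitrary) checking [E_ij, E_jm] = E_im for i ≠ m, the identity [E_{i,i+1}, E_{i+1,i}] = E_ii − E_{i+1,i+1}, and that every commutator has trace 0:

```python
import numpy as np

d = 4

def E(i, j):
    # the matrix unit E_ij in Mat_d(k)
    M = np.zeros((d, d))
    M[i, j] = 1.0
    return M

def comm(X, Y):
    return X @ Y - Y @ X

# [E_ij, E_jm] = E_im whenever i != m
assert np.allclose(comm(E(0, 2), E(2, 3)), E(0, 3))

# [E_{i,i+1}, E_{i+1,i}] = E_ii - E_{i+1,i+1}
i = 1
assert np.allclose(comm(E(i, i + 1), E(i + 1, i)), E(i, i) - E(i + 1, i + 1))

# every commutator lies in sl_d(k): its trace vanishes
rng = np.random.default_rng(3)
X, Y = rng.standard_normal((d, d)), rng.standard_normal((d, d))
assert abs(np.trace(comm(X, Y))) < 1e-10
```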
By semisimplicity, we can write A = Mat_{d_1}(k) ⊕ ... ⊕ Mat_{d_r}(k). Then [A, A] = sl_{d_1}(k) ⊕ ... ⊕ sl_{d_r}(k), and A/[A, A] ≅ k^r. By Theorem 2.6, there are exactly r irreducible representations of A (isomorphic to k^{d_1}, ..., k^{d_r}, respectively), and therefore r linearly independent characters on the r-dimensional vector space A/[A, A]. Thus, the characters form a basis. ∎
2.7 The Jordan-Hölder theorem

We will now state and prove two important theorems about representations of finite dimensional algebras - the Jordan-Hölder theorem and the Krull-Schmidt theorem.

Theorem 2.18. (Jordan-Hölder theorem). Let V be a finite dimensional representation of A,
and let 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_n = V and 0 = V_0′ ⊂ V_1′ ⊂ ... ⊂ V_m′ = V be filtrations of V, such that the representations W_i := V_i/V_{i−1} and W_i′ := V_i′/V_{i−1}′ are irreducible for all i. Then n = m, and there exists a permutation σ of 1, ..., n such that W_{σ(i)} is isomorphic to W_i′.
Proof. First proof (for k of characteristic zero). The character of V obviously equals the sum
of characters of W_i, and also the sum of characters of W_i′. But by Theorem 2.17, the characters of irreducible representations are linearly independent, so the multiplicity of every irreducible representation W of A among the W_i and among the W_i′ are the same. This implies the theorem.³
Second proof (general). The proof is by induction on dim V . The base of induction is clear,
so let us prove the induction step. If W1 = W1� (as subspaces), we are done, since by the induction
assumption the theorem holds for V/W_1. So assume W_1 ≠ W_1′. In this case W_1 ∩ W_1′ = 0 (as W_1, W_1′ are irreducible), so we have an embedding f : W_1 ⊕ W_1′ → V. Let U = V/(W_1 ⊕ W_1′), and let 0 = U_0 ⊂ U_1 ⊂ ... ⊂ U_p = U be a filtration of U with simple quotients Z_i = U_i/U_{i−1} (it exists by Lemma 2.8). Then we see that:

1) V/W_1 has a filtration with successive quotients W_1′, Z_1, ..., Z_p, and another filtration with successive quotients W_2, ..., W_n.

2) V/W_1′ has a filtration with successive quotients W_1, Z_1, ..., Z_p, and another filtration with successive quotients W_2′, ..., W_m′.

By the induction assumption, this means that the collection of irreducible representations with multiplicities W_1, W_1′, Z_1, ..., Z_p coincides on one hand with W_1, ..., W_n, and on the other hand, with W_1′, ..., W_m′. We are done. ∎
The Jordan-Hölder theorem shows that the number n of terms in a filtration of V with irreducible successive quotients does not depend on the choice of a filtration, and depends only on V. This number is called the length of V. It is easy to see that n is also the maximal length of a filtration of V in which all the inclusions are strict.

The sequence of the irreducible representations W_1, ..., W_n enumerated in the order they appear from some filtration of V as successive quotients is called a Jordan-Hölder series of V.

³This proof does not work in characteristic p because it only implies that the multiplicities of W_i and W_i′ are the same modulo p, which is not sufficient. In fact, the character of the representation pV, where V is any representation, is zero.
2.8 The Krull-Schmidt theorem
Theorem 2.19. (Krull-Schmidt theorem) Any finite dimensional representation of A can be uniquely
(up to an isomorphism and order of summands) decomposed into a direct sum of indecomposable
representations.
Proof. It is clear that a decomposition of V into a direct sum of indecomposable representations
exists, so we just need to prove uniqueness. We will prove it by induction on dim V . Let V =
V_1 ⊕ ... ⊕ V_m = V_1′ ⊕ ... ⊕ V_n′. Let i_s : V_s → V, i_s′ : V_s′ → V, p_s : V → V_s, p_s′ : V → V_s′ be the natural maps associated to these decompositions. Let θ_s = p_1 i_s′ p_s′ i_1 : V_1 → V_1. We have Σ_{s=1}^n θ_s = 1. Now we need the following lemma.
Lemma 2.20. Let W be a finite dimensional indecomposable representation of A. Then
(i) Any homomorphism θ : W → W is either an isomorphism or nilpotent;

(ii) If θ_s : W → W, s = 1, ..., n are nilpotent homomorphisms, then so is θ := θ_1 + ... + θ_n.
Proof. (i) Generalized eigenspaces of θ are subrepresentations of W, and W is their direct sum. Thus, θ can have only one eigenvalue λ. If λ is zero, θ is nilpotent, otherwise it is an isomorphism.
(ii) The proof is by induction in n. The base is clear. To make the induction step (n − 1 to n), assume that θ is not nilpotent. Then by (i) θ is an isomorphism, so Σ_{i=1}^n θ^{−1}θ_i = 1. The morphisms θ^{−1}θ_i are not isomorphisms, so they are nilpotent. Thus 1 − θ^{−1}θ_n = θ^{−1}θ_1 + ... + θ^{−1}θ_{n−1} is an isomorphism, which is a contradiction with the induction assumption. ∎
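Part (i) of the lemma can be illustrated concretely: take W = k² with t ∈ k[t] acting by a nilpotent Jordan block N. This W is indecomposable, its endomorphisms are exactly the matrices aI + bN, and each such matrix is an isomorphism (a ≠ 0) or nilpotent (a = 0). A numpy sketch under these assumptions (the sample coefficients are arbitrary):

```python
import numpy as np

N = np.array([[0.0, 1.0], [0.0, 0.0]])   # t acts by a nilpotent Jordan block
I = np.eye(2)

def is_nilpotent(M):
    return np.allclose(np.linalg.matrix_power(M, 2), 0)

def is_iso(M):
    return abs(np.linalg.det(M)) > 1e-12

# endomorphisms of W commute with N, i.e., have the form a*I + b*N;
# each one is an isomorphism or nilpotent, never anything in between
for a, b in [(2.0, 5.0), (0.0, 3.0), (-1.0, 0.0), (0.0, 0.0)]:
    M = a * I + b * N
    assert np.allclose(M @ N, N @ M)          # M is an endomorphism of W
    assert is_iso(M) != is_nilpotent(M) or np.allclose(M, 0)
```

(The zero map is counted as nilpotent; for every nonzero aI + bN exactly one of the two alternatives holds.)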
By the lemma, we find that for some s, θ_s must be an isomorphism; we may assume that s = 1. In this case, V_1′ = Im(p_1′ i_1) ⊕ Ker(p_1 i_1′), so since V_1′ is indecomposable, we get that f := p_1′ i_1 : V_1 → V_1′ and g := p_1 i_1′ : V_1′ → V_1 are isomorphisms.

Let B = ⊕_{j>1} V_j and B′ = ⊕_{j>1} V_j′; then we have V = V_1 ⊕ B = V_1′ ⊕ B′. Consider the map h : B → B′ defined as a composition of the natural maps B → V → B′ attached to these decompositions. We claim that h is an isomorphism. To show this, it suffices to show that Ker h = 0 (as h is a map between spaces of the same dimension). Assume that v ∈ Ker h ⊆ B. Then v ∈ V_1′. On the other hand, the projection of v to V_1 is zero, so gv = 0. Since g is an isomorphism, we get v = 0, as desired.
Now by the induction assumption, m = n, and V_j ≅ V_{σ(j)}′ for some permutation σ of 2, ..., n. The theorem is proved. ∎
Exercise. Let A be the algebra of real-valued continuous functions on R which are periodic
with period 1. Let M be the A-module of continuous functions f on R which are antiperiodic with
period 1, i.e., f(x + 1) = −f(x).

(i) Show that A and M are indecomposable A-modules.

(ii) Show that A is not isomorphic to M but A ⊕ A is isomorphic to M ⊕ M.
Remark. Thus, we see that in general, the Krull-Schmidt theorem fails for infinite dimensional
modules. However, it still holds for modules of finite length, i.e., modules M such that any filtration
of M has length bounded above by a certain constant l = l(M ).
2.9 Problems
Problem 2.21. Extensions of representations. Let A be an algebra, and V, W be a pair of
representations of A. We would like to classify representations U of A such that V is a subrepresentation of U, and U/V = W. Of course, there is an obvious example U = V ⊕ W, but are there any others?
Suppose we have a representation U as above. As a vector space, it can be (non-uniquely)
identified with V ⊕ W, so that for any a ∈ A the corresponding operator ρ_U(a) has block triangular form

ρ_U(a) = [ ρ_V(a)   f(a)
           0        ρ_W(a) ],

where f : A → Hom_k(W, V) is a linear map.
(a) What is the necessary and sufficient condition on f(a) under which ρ_U(a) is a representation? Maps f satisfying this condition are called (1-)cocycles (of A with coefficients in Hom_k(W, V)). They form a vector space denoted Z¹(W, V).
(b) Let X : W → V be a linear map. The coboundary of X, dX, is defined to be the function A → Hom_k(W, V) given by dX(a) = ρ_V(a)X − Xρ_W(a). Show that dX is a cocycle, which vanishes if and only if X is a homomorphism of representations. Thus coboundaries form a subspace B¹(W, V) ⊆ Z¹(W, V), which is isomorphic to Hom_k(W, V)/Hom_A(W, V). The quotient Z¹(W, V)/B¹(W, V) is denoted Ext¹(W, V).
(c) Show that if f, f′ ∈ Z¹(W, V) and f − f′ ∈ B¹(W, V) then the corresponding extensions U, U′ are isomorphic representations of A. Conversely, if φ : U → U′ is an isomorphism such that

φ = [ 1_V   ∗
      0     1_W ],

then f − f′ ∈ B¹(W, V). Thus, the space Ext¹(W, V) "classifies" extensions of W by V.
(d) Assume | https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0721b66f53c3c196ce86d6d867514442_MIT18_712F10_ch2.pdf |
Ext1(W, V ) “classifies” extensions of W by V .
(d) Assume that W, V are finite dimensional irreducible representations of A. For any f
�
Ext1(W, V ), let Uf be the corresponding extension. Show that Uf is isomorphic to Uf ⊗ as repre
f � are proportional. Thus isomorphism classes (as representations)
sentations if and only if f and
of nontrivial extensions of W by V (i.e., those not isomorphic to W
V ) are parametrized by the
projective space PExt1(W, V ). In particular, every extension is trivial if and only if Ext1(W, V ) = 0.
�
Problem 2.22. (a) Let A = C[x1, ..., xn], and Va, Vb be one-dimensional representations in which
x_i act by a_i and b_i, respectively (a_i, b_i ∈ ℂ). Find Ext¹(V_a, V_b) and classify 2-dimensional representations of A.
(b) Let B be the algebra over C generated by x1, ..., xn with the defining relations xixj = 0 for
all i, j. Show that for n > 1 the algebra B has infinitely many non-isomorphic indecomposable
representations.
Problem 2.23. Let Q be a quiver without oriented cycles, and P_Q the path algebra of Q. Find irreducible representations of P_Q and compute Ext¹ between them. Classify 2-dimensional representations of P_Q.
Problem 2.24. Let A be an algebra, and V a representation of A. Let ρ : A → End V. A formal deformation of V is a formal series

ρ̃ = ρ_0 + tρ_1 + ... + t^n ρ_n + ...,

where ρ_i : A → End(V) are linear maps, ρ_0 = ρ, and ρ̃(ab) = ρ̃(a)ρ̃(b).

If b(t) = 1 + b_1 t + b_2 t² + ..., where b_i ∈ End(V), and ρ̃ is a formal deformation of ρ, then bρ̃b^{−1} is also a deformation of ρ, which is said to be isomorphic to ρ̃.
(a) Show that if Ext¹(V, V) = 0, then any deformation of ρ is trivial, i.e., isomorphic to ρ.

(b) Is the converse to (a) true? (Consider the algebra of dual numbers A = k[x]/(x²).)
Problem 2.25. The Clifford algebra. Let V be a finite dimensional complex vector space equipped with a symmetric bilinear form (,). The Clifford algebra Cl(V) is the quotient of the tensor algebra TV by the ideal generated by the elements v ⊗ v − (v, v)1, v ∈ V. More explicitly, if x_i, 1 ≤ i ≤ N, is a basis of V and (x_i, x_j) = a_ij, then Cl(V) is generated by the x_i with defining relations

x_i x_j + x_j x_i = 2a_ij,  x_i² = a_ii.

Thus, if (,) = 0, Cl(V) = ΛV, the exterior algebra.
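For dim V = 2 with the standard form a_ij = δ_ij, the defining relations are realized by Pauli matrices, giving a 2-dimensional representation of Cl(V) that in fact spans all of Mat_2(ℂ). The numpy check below is an illustration, not part of the problem:

```python
import numpy as np

# Pauli matrices: x1 = sigma_x, x2 = sigma_y satisfy the Clifford relations
# x_i x_j + x_j x_i = 2*delta_ij for the standard form a_ij = delta_ij
x1 = np.array([[0, 1], [1, 0]], dtype=complex)
x2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

assert np.allclose(x1 @ x1, np.eye(2))            # x_i^2 = a_ii = 1
assert np.allclose(x2 @ x2, np.eye(2))
assert np.allclose(x1 @ x2 + x2 @ x1, 0)          # x1 x2 + x2 x1 = 2*a_12 = 0

# the four matrices 1, x1, x2, x1x2 are linearly independent, so they span
# Mat_2(C): here Cl(V) is a matrix algebra of dimension 2^(dim V) = 4
basis = [np.eye(2), x1, x2, x1 @ x2]
M = np.column_stack([b.flatten() for b in basis])
assert np.linalg.matrix_rank(M) == 4
```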
(i) Show that if (,) is nondegenerate then Cl(V) is semisimple, and has one irreducible representation of dimension 2^n if dim V = 2n (so in this case Cl(V) is a matrix algebra), and two such representations if dim(V) = 2n + 1 (i.e., in this case Cl(V) is a direct sum of two matrix algebras).

Hint. In the even case, pick a basis a_1, ..., a_n, b_1, ..., b_n of V in which (a_i, a_j) = (b_i, b_j) = 0, (a_i, b_j) = δ_ij/2, and construct a representation of Cl(V) on S := Λ(a_1, ..., a_n) in which b_i acts as "differentiation" with respect to a_i. Show that S is irreducible. In the odd case the situation is similar, except there should be an additional basis vector c such that (c, a_i) = (c, b_i) = 0, (c, c) = 1, and the action of c on S may be defined either by (−1)^degree or by (−1)^(degree+1), giving two representations S₊, S₋ (why are they non-isomorphic?). Show that there are no other irreducible representations by finding a spanning set of Cl(V) with 2^(dim V) elements.

(ii) Show that Cl(V) is semisimple if and only if (,) is nondegenerate. If (,) is degenerate, what is Cl(V)/Rad(Cl(V))?
2.10 Representations of tensor products
Let A, B be algebras. Then A ⊗ B is also an algebra, with multiplication (a_1 ⊗ b_1)(a_2 ⊗ b_2) = a_1 a_2 ⊗ b_1 b_2.

Exercise. Show that Mat_m(k) ⊗ Mat_n(k) ≅ Mat_mn(k).
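The isomorphism in this exercise is implemented by the Kronecker product, which is multiplicative: kron(a_1, b_1) · kron(a_2, b_2) = kron(a_1 a_2, b_1 b_2). A quick numpy check with random matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 2, 3
a1, a2 = rng.standard_normal((2, m, m))
b1, b2 = rng.standard_normal((2, n, n))

# a (x) b -> kron(a, b) gives an algebra map Mat_m (x) Mat_n -> Mat_mn
lhs = np.kron(a1, b1) @ np.kron(a2, b2)
rhs = np.kron(a1 @ a2, b1 @ b2)
assert lhs.shape == (m * n, m * n)
assert np.allclose(lhs, rhs)
```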
The following theorem describes irreducible finite dimensional representations of A ⊗ B in terms of irreducible finite dimensional representations of A and those of B.

Theorem 2.26. (i) Let V be an irreducible finite dimensional representation of A and W an irreducible finite dimensional representation of B. Then V ⊗ W is an irreducible representation of A ⊗ B.

(ii) Any irreducible finite dimensional representation M of A ⊗ B has the form (i) for unique V and W.

Remark 2.27. Part (ii) of the theorem typically fails for infinite dimensional representations; e.g. it fails when A is the Weyl algebra in characteristic zero. Part (i) also may fail. E.g. let A = B = V = W = ℂ(x). Then (i) fails, as A ⊗ B is not a field.
Proof. (i) By the density theorem, the maps A → End V and B → End W are surjective. Therefore, the map A ⊗ B → End V ⊗ End W = End(V ⊗ W) is surjective. Thus, V ⊗ W is irreducible.

(ii) First we show the existence of V and W. Let A′, B′ be the images of A, B in End M. Then A′, B′ are finite dimensional algebras, and M is a representation of A′ ⊗ B′, so we may assume without loss of generality that A and B are finite dimensional.

In this case, we claim that Rad(A ⊗ B) = Rad(A) ⊗ B + A ⊗ Rad(B). Indeed, denote the latter by J. Then J is a nilpotent ideal in A ⊗ B, as Rad(A) and Rad(B) are nilpotent. On the other hand, (A ⊗ B)/J = (A/Rad(A)) ⊗ (B/Rad(B)), which is a product of two semisimple algebras, hence semisimple. This implies J ⊇ Rad(A ⊗ B). Altogether, by Proposition 2.11, we see that J = Rad(A ⊗ B), proving the claim.

Thus, we see that

(A ⊗ B)/Rad(A ⊗ B) = A/Rad(A) ⊗ B/Rad(B).

Now, M is an irreducible representation of (A ⊗ B)/Rad(A ⊗ B), so it is clearly of the form M = V ⊗ W, where V is an irreducible representation of A/Rad(A) and W is an irreducible representation of B/Rad(B), and V, W are uniquely determined by M (as all of the algebras involved are direct sums of matrix algebras). ∎
MIT OpenCourseWare
http://ocw.mit.edu
18.712 Introduction to Representation Theory
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
6.895 Essential Coding Theory
September 13, 2004
Lecture 2
Lecturer: Madhu Sudan
Scribe: Joungkeun Lim
1 Overview
We consider the problem of communication in which a source wishes to transmit information to a receiver. The transmission is conducted through a channel, which may introduce errors into the information, depending on its type. In this model, we will introduce Shannon's coding theorem, which shows that, depending on the properties of the source and the channel, the probability that the receiver restores the original data exhibits a threshold behavior.
2 Shannon’s theory of information
In this section we will discuss the main result of Shannon's paper, which was published in 1948 and founded the theory of information.
There are three entities in Shannon’s model:
• Source : The party which produces information by a probabilistic process.
• Channel : The means of passing information from source to receiver. It may generate errors while
transporting the information.
• Receiver : The party which receives the information and tries to figure out information at source’s
end.
There are two options for the channel, "Noisy" and "Noiseless":
• Noisy channel : A channel that flips some bits of the information sent across it. The bits that flip are determined by a probabilistic process.
• Noiseless channel : A channel that perfectly transmits the information from source to receiver
without any error.
The source will generate and encode its message, and send it to the receiver through the channel. When the message arrives, the receiver will decode the message. We want to find an encoding-decoding scheme which makes it possible for the receiver to restore the exact message which the source sent. Shannon's theorem states the conditions under which the restoration can be conducted with high probability.
2.1 Shannon’s coding theorem
Theorem 1 (Shannon's coding theorem) There exist positive real values, the capacity C and the rate R, satisfying the following. If R < C, then information transmission is feasible (coding theorem). If R > C, then information transmission is not feasible (converse of the coding theorem).

The capacity C and the rate R are values associated with the channel and the source, respectively. The general way to compute these two values is a bit complicated. To get a better understanding, we will start with simple examples of Shannon's model, one in the noiseless setting and one in the noisy setting.
2.2 Preliminaries

Before studying the examples, we introduce the binary entropy function and the Chernoff bound, which play crucial roles in the analyses of later sections.
Definition 2 For p ∈ [0, 1], the binary entropy function is defined as follows:

H(p) = p log₂(1/p) + (1 − p) log₂(1/(1 − p)).

H(p) is a concave function and attains its maximum value 1 at p = 1/2. The following property of H(p) is used in later sections.
• Let B_n(0, r) be the ball of radius r (in Hamming distance) with center 0 in {0, 1}^n, and let V(r, n) = Vol(B_n(0, r)) = Σ_{i=0}^{r} C(n, i). For r ≤ n/2 we have V(r, n) ≤ 2^{H(r/n)·n}; hence Vol(pn, n) ≤ 2^{H(p)·n} for p ≤ 1/2.
Lemma 3 (Chernoff Bounds)
If X_1, X_2, ..., X_n are independent random variables in [0, 1] with E[X_i] = p, then

Pr[ |(1/n) Σ_{i=1}^{n} X_i − p| > ε ] ≤ 2^{−(ε²/2)·n}.
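Both facts are easy to sanity-check numerically. The Python sketch below (an illustration, using exact binomial sums rather than simulation; the parameters are arbitrary) verifies the volume bound V(r, n) ≤ 2^{H(r/n)·n} and the Chernoff bound for i.i.d. fair coin flips:

```python
import math

def H(p):
    # binary entropy H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p))
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

assert abs(H(0.5) - 1.0) < 1e-12          # maximum value 1 at p = 1/2

# volume bound: V(r, n) = sum_{i<=r} C(n, i) <= 2^(H(r/n)*n) for r <= n/2
n, r = 20, 6
V = sum(math.comb(n, i) for i in range(r + 1))
assert V <= 2 ** (H(r / n) * n)

# Chernoff bound for n fair coin flips, tail computed exactly from the pmf
n, p, eps = 200, 0.5, 0.1
tail = sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
           for i in range(n + 1) if abs(i / n - p) > eps)
assert tail <= 2 ** (-(eps**2 / 2) * n)
```

(The stated bound 2^{−(ε²/2)n} is weaker than the sharpest Chernoff-Hoeffding forms, so the exact tail is comfortably below it for these parameters.)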
2.3 An example of noiseless model
The source produces a sequence of bits such that each bit is 0 with probability 1 − p and 1 with probability p, where p ≤ 1/2. The source produces one bit per unit of time. Since the channel is noiseless, it transmits to the receiver exactly the same bits as it is given by the source. The channel is allowed to transmit C bits per unit of time. In this case, the rate of the source is given by the entropy function H(p) and the capacity is the number of bits transmitted through the channel per unit of time. When n is the amount of time we use the channel, Shannon's coding theorem is expressed as follows.
Theorem 4 (Shannon’s noiseless coding theorem)
If C > H(p), then there exist encoding function En and decoding function Dn such that Pr[Receiver
figures out what the source produced]� 1 − exp(−n).
Also if C > H(p), then there exist encoding function En and decoding function Dn such that
Pr[Receiver figures out what the source produced]� exp(−n).
2.4 An example of noisy model
The source produces a sequence of bits such that each bit is 0 with probability 1/2 and 1 with probability 1/2. The source produces R bits per unit of time, where R < 1. Since the channel is noisy, it flips bits by a probabilistic process; in this example, the channel flips each bit independently with probability p. Also, the channel transmits one bit per unit of time. In this case, the rate R is the number of bits produced by the source per unit of time and the capacity C is given as 1 − H(p). Then Shannon's coding theorem is expressed as follows.
Theorem 5 (Shannon’s noisy coding theorem)
If R < 1−H(p) then there exist encoding function En and decoding function Dn such that Pr[Receiver
figures out what the source produced]� 1 − exp(−n).
Also If R > 1 − H(p) then there exist encoding function En and decoding function Dn such that
Pr[Receiver figures out what the source produced]� exp(−n).
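The achievability half of Theorem 5 can be illustrated by the random-coding experiment used in the proof below: pick a random code with rate below capacity, transmit through a BSC, and decode to the nearest codeword. The Python sketch is a toy illustration, not a proof; the parameters n = 100, k = 10, p = 0.1 (so R = 0.1 is well below 1 − H(0.1) ≈ 0.53) are hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, p = 100, 10, 0.1            # rate R = k/n = 0.1 < 1 - H(0.1) ~ 0.53

# random encoding function E: {0,1}^k -> {0,1}^n, stored as a codebook
codebook = rng.integers(0, 2, size=(2 ** k, n), dtype=np.uint8)

def decode(y):
    # nearest-codeword (minimum Hamming distance) decoding
    return int(np.argmin((codebook ^ y).sum(axis=1)))

successes = 0
trials = 20
for _ in range(trials):
    m = int(rng.integers(2 ** k))                       # random message
    noise = (rng.random(n) < p).astype(np.uint8)        # BSC_p noise vector
    y = codebook[m] ^ noise                             # received word
    successes += (decode(y) == m)

# with R well below capacity, decoding almost always succeeds
assert successes >= 15
```

With these parameters the typical error has weight about pn = 10, while a wrong codeword is at distance about n/2 = 50, so nearest-codeword decoding essentially always recovers m, matching the union-bound analysis in the proof.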
We prove the first part of the theorem using the probabilistic method, and give an idea of the proof of the second part.

Proof (First part)
Let k be the number of bits produced by the source; then k = R · n. Since R < 1 − H(p), there exists ε > 0 such that R < 1 − H(p + ε). For this ε, let r = n(p + ε) = n · p′. Now we can restate the theorem as follows.

If R = k/n < 1 − H(p), then there exist functions E : {0, 1}^k → {0, 1}^n and D : {0, 1}^n → {0, 1}^k such that Pr_{η∼BSC_{p,n}, m∼U_k}[m ≠ D(E(m) + η)] ≤ exp(−n), where U_k is the uniform distribution on k-bit strings and BSC_{p,n} is the distribution on n-bit strings in which each bit is 0 with probability 1 − p and 1 with probability p.
Pick the encoding function E : {0, 1}^k → {0, 1}^n at random, and let the decoding function D : {0, 1}^n → {0, 1}^k work as follows: given a string y ∈ {0, 1}^n, find the m ∈ {0, 1}^k such that Δ(y, E(m)) (the Hamming distance) is minimized; this m is the value of D(y). Fix m ∈ {0, 1}^k and fix E(m). Since E is chosen at random, E(m′) is still random for every m′ ≠ m. Let y be the value that the receiver acquires. In order for D(y) ≠ m, at least one of the following two events must occur:

• There exists some m′ ≠ m such that E(m′) ∈ B(y, r).
• y ∉ B(E(m), r).

If neither of the above events happens, then m is the unique message such that E(m) is within distance r of y, and so D(y) = m.
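The minimum-distance decoder just described can be sketched directly; here a tiny repetition code stands in for the random encoding E (an illustration, not the construction in the proof):

```python
def hamming_distance(x, y):
    """Number of positions where bit strings x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def decode(y, codebook):
    """Minimum-distance decoding: return the message m whose codeword
    codebook[m] is closest to the received word y."""
    return min(codebook, key=lambda m: hamming_distance(y, codebook[m]))

# Toy codebook for k = 1, n = 3 (a repetition code stands in for a random E).
E = {0: (0, 0, 0), 1: (1, 1, 1)}
assert decode((0, 1, 0), E) == 0   # a single flip is corrected
assert decode((1, 1, 0), E) == 1
```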
We now prove that these events happen with low probability. For the event y ∉ B(E(m), r) to happen, the error η = y − E(m) must have more than n(p + ε) 1 bits. By Chernoff bounds we have

Pr[y ∉ B(E(m), r)] ≤ 2^{−(ε^2/2)n}.

For the other event, fix y and an m′ ≠ m, and consider the event that E(m′) ∈ B(y, r). Since E(m′) is random, the probability of this event is exactly Vol(B(y, r))/2^n. Using

Vol(B(y, p′n)) ≤ 2^{H(p′)n},

we have

Pr[E(m′) ∈ B(y, r)] ≤ 2^{H(p′)n − n}.

Using the union bound, we get Pr[∃ m′ ∈ {0, 1}^k s.t. E(m′) ∈ B(y, r)] ≤ 2^{k + H(p′)n − n}. Since R = k/n < 1 − H(p′), we have 2^{k + H(p′)n − n} = exp(−n). Therefore the probability that this event happens is also bounded by exp(−n).
Hence, for fixed m and E(m), the probability that at least one of the two events above happens is bounded by exp(−n). Therefore, for the random E and its associated D, the failure probability is still bounded by exp(−n). By the probabilistic method, there exists an encoding E and associated decoding D such that the probability that either of the two events happens is bounded by exp(−n).
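The volume bound Vol(B(y, pn)) ≤ 2^{H(p)n} used in the proof (valid for p ≤ 1/2) can be checked numerically; a quick sketch with helper names of our own:

```python
import math

def ball_volume(n, r):
    """Number of n-bit strings within Hamming distance r of a fixed center."""
    return sum(math.comb(n, i) for i in range(r + 1))

def H(p):
    """Binary entropy function."""
    return 0.0 if p in (0, 1) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 100, 0.2
r = int(p * n)
# Vol(B(y, pn)) <= 2^{H(p)n} for p <= 1/2:
assert ball_volume(n, r) <= 2 ** (H(p) * n)
```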
Here we give a brief sketch of the proof of the second part of the theorem. The decoding function partitions the universe {0, 1}^n into 2^k regions. By Chernoff bounds, Pr[number of 1 bits in the error < pn] is low. Hence when E(m) is transmitted, the corrupted value y that arrives at the receiver has a spread-out distribution around E(m): the region that covers most of the possible values of y is much larger than the one region, among the 2^k, that contains E(m). This makes the decoding inaccurate.
Applet Exploration: Trigonometric Identity
Start by opening the Trigonometric Identity applet from the Mathlets Gallery.
This mathlet illustrates sinusoidal functions and the trigonometric identity

a cos(ωt) + b sin(ωt) = A cos(ωt − φ), where a + ib = Ae^{iφ}.

That is, (A, φ) are the polar coordinates of (a, b).
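The identity can also be checked numerically: `cmath.polar` recovers (A, φ) from a + ib, and the two sides agree for every t. A short sketch:

```python
import cmath
import math

a, b, omega = 3.0, 4.0, 2.0
A, phi = cmath.polar(complex(a, b))   # a + ib = A e^{i phi}

# a cos(wt) + b sin(wt) equals A cos(wt - phi) for all t.
for t in [0.0, 0.3, 1.7, 5.2]:
    lhs = a * math.cos(omega * t) + b * math.sin(omega * t)
    rhs = A * math.cos(omega * t - phi)
    assert abs(lhs - rhs) < 1e-12
```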
The sinusoidal function A cos(ωt − φ) is drawn here in red. A and φ
are the amplitude and phase lag of the sinusoid. They are both controlled
by sliders.
1. The phase lag φ measures how many radians the sinusoid falls behind the standard sinusoid, which we take to be the cosine. So when φ = π/2 you have the sine function. Verify this in the applet.
2. The final parameter is ω, the angular frequency. High frequency means
the waves come faster. Frequency zero means constant. Play with the ω
slider and understand this statement. Return the angular frequency to 2.
3. The trigonometric identity shows the remarkable fact that the sum of any two sinusoidal functions of the same frequency is again a sinusoid of the same frequency.
Use the a and b sliders to select coefficients for cos(ωt) and sin(ωt). The
a slider modifies the yellow cosine curve in the window at bottom and the
b slider modifies the blue sine curve. Notice that the sum of a cos(t) and
b sin(t) is displayed in the top window in green (which is a combination of
blue and yellow). There it is! - the linear combination is again sinusoidal,
or at least appears to be.
4. The window at the right shows the two complex numbers a + ib and Ae^{iφ}. The sinusoidal identity says that the green and red sinusoids will coincide exactly when the complex numbers a + ib and Ae^{iφ} coincide. Verify this on the applet by picking values of A and φ, and then adjusting a and b until the green and red sinusoids are the same.
MIT OpenCourseWare
http://ocw.mit.edu
18.03SC Differential Equations
Fall 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-03sc-differential-equations-fall-2011/0748b0422dfadf10b59cbbff33d90530_MIT18_03SCF11_s7_3bappl.pdf |
18.782 Introduction to Arithmetic Geometry
Lecture #11
Fall 2013
10/10/2013
11.1 Quadratic forms over Qp
The Hasse-Minkowski theorem reduces the problem of determining whether a quadratic form
f over Q represents 0 to the problem of determining whether f represents zero over Qp for
all p ≤ ∞. At first glance this might not seem like progress, since there are infinitely many p
to check, but in fact we only need to check p = 2, p = ∞ and a finite set of odd primes.
Theorem 11.1. Let p be an odd prime and let f be a diagonal quadratic form of dimension n > 2 with coefficients a1, . . . , an ∈ Z_p^×. Then f represents 0 over Qp.
Proof. The equation f(x1, . . . , xn) ≡ 0 mod p is a homogeneous equation of degree 2 in n > 2 variables over Fp. It follows from the Chevalley-Warning theorem that it has a non-trivial solution (y1, . . . , yn) over Fp ≅ Z/pZ. Assume without loss of generality that y1 ≠ 0, and let g(y) be the univariate polynomial g(y) = f(y, y2, . . . , yn) over Zp. Then g(y1) ≡ 0 mod p and g′(y1) = 2a1y1 ≢ 0 mod p, so by Hensel's lemma there is a root z1 of g(y) over Zp. We then have f(z1, y2, . . . , yn) = g(z1) = 0, so f represents 0 over Qp.
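For small p, the Chevalley-Warning step can be verified by brute force; a sketch (the helper name is ours):

```python
from itertools import product

def nontrivial_zero_mod_p(coeffs, p):
    """Brute-force a nontrivial zero of a1*x1^2 + ... + an*xn^2 mod p,
    or return None if none exists."""
    n = len(coeffs)
    for xs in product(range(p), repeat=n):
        if any(xs) and sum(c * x * x for c, x in zip(coeffs, xs)) % p == 0:
            return xs
    return None

# Every diagonal form in n = 3 > 2 variables with unit coefficients mod p
# has a nontrivial zero, as Chevalley-Warning guarantees.
for p in (3, 5, 7):
    for coeffs in product(range(1, p), repeat=3):
        assert nontrivial_zero_mod_p(coeffs, p) is not None
```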
Corollary 11.2. Every quadratic form of dimension n > 2 over Q represents 0 over Qp
for all but finitely many primes p.
Proof. In diagonal form the coefficients a1, . . . , an lie in Z_p^× for all odd p ∤ a1 · · · an.

For quadratic forms of dimension n ≤ 2, we note that a nondegenerate unary form never represents 0, and the nondegenerate form ax^2 + by^2 represents 0 if and only if −ab is a square (this holds over any field). But when −ab is not a square it may still be the case that ax^2 + by^2 represents a given nonzero element t, and having a criterion for identifying such t will be useful in our proof of the Hasse-Minkowski theorem.
Lemma 11.3. The nondegenerate quadratic form ax^2 + by^2 over Qp represents t ∈ Q_p^× if and only if (a, b)p = (t, −ab)p.
Proof. Since t ≠ 0, the equation ax^2 + by^2 = t has a non-trivial solution in Qp if and only if (a/t)x^2 + (b/t)y^2 = 1 has a solution, which is equivalent to (a/t, b/t)p = 1. We have

(a/t, b/t)p = (at, bt)p = (a, bt)p(t, bt)p = (a, b)p(a, t)p(t, bt)p = (a, b)p(t, abt)p
            = (a, b)p(t, abt)p(t, −t)p = (a, b)p(t, −ab)p,

where we have used that the Hilbert symbol is symmetric, bilinear, invariant on square classes, and satisfies (x, −x)p = 1. Thus (a/t, b/t)p = 1 if and only if (a, b)p(t, −ab)p = 1, which is equivalent to (a, b)p = (t, −ab)p, since both are ±1.
Corollary 11.4. The nondegenerate form ax^2 + by^2 + cz^2 over Qp represents 0 if and only if (a, b)p = (−c, −ab)p.
Proof. By the lemma, it suffices to show that ax^2 + by^2 + cz^2 represents 0 if and only if the binary form ax^2 + by^2 represents −c. The reverse implication is clear (set z = 1). For the forward implication, if ax0^2 + by0^2 + cz0^2 = 0 then either z0 ≠ 0, in which case a(x0/z0)^2 + b(y0/z0)^2 = −c, or z0 = 0, in which case ax^2 + by^2 represents 0 and therefore represents every element of Qp, including −c.
Andrew V. Sutherland
Corollary 11.5. A ternary quadratic form over Q that represents 0 over all but at most
one completion of Q represents 0 over every completion of Q.
Proof. The corollary is trivially true if the form is degenerate and otherwise it follows from
the product formula for Hilbert symbols and the corollary above.
11.2 Approximation
We now prove two approximation theorems that we will need to prove the Hasse-Minkowski
theorem for Q. These are quite general theorems that have many applications, but we will
state them in a particularly simple form that suffices for our purposes here. Before proving
them we first note/recall that Q is dense in Qp and Z is dense in Zp.
Theorem 11.6. Let p ≤ ∞ be any prime of Q. Under the metric d(x, y) = |x − y|p, the
set Q is dense in Qp and the set Z is dense in Zp.
Proof. We know that Q∞ = R is the completion of Q, and we proved that Qp is (isomorphic to) the completion of Q for p < ∞; any field is dense in its completion (this follows immediately from the definition). We note that the completion Z∞ = Z (any Cauchy sequence of integers must be eventually constant), and for p < ∞ we can apply the fact that Zp = {x ∈ Qp : |x|p ≤ 1} and Z = {x ∈ Q : |x|p ≤ 1}.
Theorem 11.7 (Weak approximation). Let S be a finite set of primes p ≤ ∞, and for each p ∈ S let xp ∈ Qp be given. Then for every ε > 0 there exists x ∈ Q such that

|x − xp|p < ε

for all p ∈ S. Equivalently, the image of Q in ∏_{p∈S} Qp is dense under the product topology.
Proof. If S has cardinality 1 we can apply Theorem 11.6, so we assume S contains at least 2 primes. For any particular prime p ∈ S, we claim that there is a yp ∈ Q such that |yp|p > 1 and |yp|q < 1 for q ∈ S − {p}. Indeed, let P be the product of the finite primes in S, and for each p < ∞ choose r ∈ Z>0 so that p^{−r}P < 1. Then define

yp = P if p = ∞, and yp = p^{−r}P otherwise.

We now note that for any q ∈ S,

lim_{n→∞} |yp^n|q = ∞ if q = p, and lim_{n→∞} |yp^n|q = 0 if q ≠ p.

It follows that for each q ∈ S,

lim_{n→∞} yp^n/(1 + yp^n) = 1 with respect to | |q for q = p, and 0 with respect to | |q for q ≠ p,

since lim_{n→∞} |1 − yp^n/(1 + yp^n)|p = lim_{n→∞} |1/(1 + yp^n)|p = 0 and lim_{n→∞} |yp^n/(1 + yp^n)|q = 0 for q ≠ p. For each n ∈ Z>0 define

zn = Σ_{p∈S} xp yp^n/(1 + yp^n).

Then lim_{n→∞} zn = xp with respect to | |p for each p ∈ S. So for any ε > 0 there is an n for which x = zn satisfies |x − xp|p < ε for all p ∈ S.
Theorem 11.8 (Strong approximation). Let S be a finite set of primes p < ∞, and for each p ∈ S let xp ∈ Zp be given. Then for every ε > 0 there exists x ∈ Z such that

|x − xp|p < ε

for all p ∈ S. Equivalently, the image of Z in ∏_{p∈S} Zp is dense under the product topology.

Proof. Fix ε > 0. By Theorem 11.6, for each xp we can pick yp ∈ Z≥0 so that |yp − xp|p < ε. Let n be a positive integer such that p^{−n} < ε for all p ∈ S. By the Chinese remainder theorem, there exists x ∈ Z such that x ≡ yp mod p^n for all p ∈ S, and for this x we have |x − xp|p < ε for all p ∈ S.
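The CRT step in this proof is constructive. A minimal sketch of the standard construction (the helper name is ours):

```python
from math import prod

def crt(residues, moduli):
    """Solve x = r_i (mod m_i) for pairwise coprime moduli via the
    Chinese remainder theorem; returns the solution in [0, prod(moduli))."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

# Approximate x_3 = 2, x_5 = 4, x_7 = 6 to within |.|_p <= p^{-3}:
moduli = [3**3, 5**3, 7**3]
x = crt([2, 4, 6], moduli)
assert all((x - t) % m == 0 for t, m in zip([2, 4, 6], moduli))
```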
Remark 11.9. In more general settings it is natural to consider the infinite product of all the rings of p-adic integers

Ẑ = ∏_{p<∞} Zp.

Recall that for infinite products, the product topology is defined using a basis of open sets that consists of sequences (Up), where each Up is an open subset of Zp, and for all but finitely many p we have Up = Zp. It follows from Theorem 11.8 that the image of Z in Ẑ is dense.
There is another way to define Ẑ, which is to consider the inverse system of rings (Z/nZ), where n ranges over all positive integers and we have reduction maps from Z/mZ to Z/nZ whenever n | m (note that we now have an infinite acyclic graph of maps, not just a linear chain). The inverse limit

Ẑ = lim← Z/nZ

is called the profinite completion of Z. One can show that these two definitions of Ẑ are canonically isomorphic. So a more pithy statement of Theorem 11.8 is that Z is dense in its profinite completion (this statement applies to profinite completions in general).
Remark 11.10. Note the difference between weak and strong approximation. With weak approximation we obtain a rational number x that is p-adically close to xp for each p in a finite set S, but we have no control on |x|p for p ∉ S. With strong approximation we obtain a rational number (in fact an integer) x that is p-adically close to xp for each p ∈ S and also satisfies |x|p ≤ 1 for all p ∉ S, except the prime p = ∞; in order to apply the CRT we may need to make |x|∞ very large. More generally, we could allow ∞ ∈ S if we grant ourselves the freedom to make |x|p0 large for one prime p0 ∉ S; in this case x would be a rational number, not an integer, but its denominator would be divisible by no primes other than p0, so that x ∈ Zp for all p ≠ p0. This is characteristic of strong approximation theorems: we obtain an element whose absolute value is bounded at all but one prime.
The following lemma follows from the strong approximation theorem and Dirichlet's theorem on primes in arithmetic progressions: for any relatively prime integers a and b there are infinitely many primes congruent to a mod b.

Lemma 11.11. Let S be a finite set of primes p ≤ ∞, and for each p ∈ S let xp ∈ Q_p^× be given. Then there exists an x ∈ Q such that

(i) x ∈ xp Q_p^{×2} for each p ∈ S;
(ii) |x|p = 1 for all but at most one finite prime p0 ∉ S.
Proof. Let S0 = S − {∞}, and define the rational number

y = ± ∏_{p∈S0} p^{vp(xp)},

where the sign of y is negative if ∞ ∈ S and x∞ < 0, and positive otherwise. Then |y|p = |xp|p for all p ∈ S0, and it follows that for each p ∈ S0 we have y = up xp for some up ∈ Z_p^×. By the strong approximation theorem there exists an integer z with z ≡ up mod p^{ep} for all p ∈ S0, where ep = 1 for odd p and ep = 3 for p = 2. It follows that z ∈ up Q_p^{×2} for all p ∈ S0, since the square class of up depends only on its reduction mod p^{ep}.

The integers z and m = ∏_{p∈S0} p^{ep} are relatively prime, so it follows from Dirichlet's theorem that there are infinitely many primes congruent to z mod m. Let p0 be the least such prime. Then p0 ∈ z Q_p^{×2} for all p ∈ S0, and x = p0 y satisfies both (i) and (ii).
11.3 Proof of the Hasse-Minkowski theorem
Before proving the Hasse-Minkowski theorem for Q we make one final remark. The definition
of the Hilbert symbol we gave in the last lecture makes sense over any field, in particular Q,
and the proofs of Lemma 10.2 and Corollary 10.3 still apply. In the proof below we use
(a, b) to denote the Hilbert symbol of a, b ∈ Q×.
Theorem 11.12 (Hasse-Minkowski). A quadratic form over Q represents 0 if and only if
it represents 0 over every completion of Q.
Proof. The forward implication is clear; we only need to prove the reverse implication. So let f be a quadratic form over Q that represents 0 over every completion of Q. We may assume without loss of generality that f is a diagonal form a1x1^2 + · · · + anxn^2, which we may denote ⟨a1, . . . , an⟩. We write ⟨a1, . . . , an⟩p to denote the same form over Qp. If any ai = 0, then f clearly represents 0 over Q (set xi = 1 and xj = 0 for j ≠ i), so we assume f is nondegenerate and proceed by induction on its dimension n.

Case n = 1: The theorem holds trivially (f cannot represent 0 over any Qp).

Case n = 2: The form ⟨a, b⟩p represents 0 if and only if −ab is a square in Qp. Thus vp(−ab) ≡ 0 mod 2 for all p < ∞, and −ab > 0. It follows that −ab is a square in Q, and therefore ⟨a, b⟩ represents 0.
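The n = 2 criterion, namely whether −ab is a square in Q, is easy to test: a nonnegative rational in lowest terms is a square iff its numerator and denominator are perfect squares. A sketch (the helper name is ours):

```python
from fractions import Fraction
from math import isqrt

def is_rational_square(q: Fraction) -> bool:
    """q is a square in Q iff q >= 0 and, in lowest terms, both the
    numerator and the denominator are perfect squares."""
    if q < 0:
        return False
    n, d = q.numerator, q.denominator
    return isqrt(n) ** 2 == n and isqrt(d) ** 2 == d

# <a, b> = <2, -18>: here -ab = 36 is a square, so the form represents 0
# (indeed 2*3^2 - 18*1^2 = 0), while <2, 3> gives -ab = -6, not a square.
assert is_rational_square(Fraction(-2 * -18, 1))
assert not is_rational_square(Fraction(-2 * 3, 1))
```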
Case n = 3: Let f(x, y, z) = z^2 − ax^2 − by^2, where a and b are nonzero square-free integers with |a| ≤ |b|. We know (a, b)p = 1 for all p ≤ ∞ and wish to show (a, b) = 1. We proceed by induction on m = |a| + |b|. The base case m = 2 has a = ±1 and b = ±1, in which case (a, b)∞ = 1 implies that either a or b is 1, and therefore (a, b) = 1.

We now suppose m ≥ 3, and that the result has been proven for all smaller m. For each prime p | b there is a primitive solution (x0, y0, z0) ∈ Z_p^3 to z^2 − ax^2 − by^2 = 0. We must have p | (z0^2 − ax0^2), since p | b, but we cannot have p | x0, since then we would have p | z0, contradicting primitivity. So x0 ∈ Z_p^× and a ≡ (z0/x0)^2 is a square modulo p. This holds for every prime p | b, and b is square-free, so a is a square modulo b.

It follows that a + bb′ = t^2 for some t, b′ ∈ Z with |t| ≤ |b/2|. This implies (a, bb′) = 1, since bb′ = t^2 − a is the norm of t + √a in Q(√a). Therefore

(a, b) = (a, b)(a, bb′) = (a, b^2 b′) = (a, b′).

We also have (a, bb′)p = 1, and therefore (a, b′)p = (a, b)p = 1, for all p ≤ ∞. But

|b′| = |(t^2 − a)/b| ≤ |t^2/b| + |a/b| ≤ |b|/4 + 1 < |b|,

so |a| + |b′| < m, and the inductive hypothesis implies (a, b′) = 1. Thus (a, b) = 1, as desired.
Case n = 4: Let f = ⟨a1, a2, a3, a4⟩ and let S consist of the primes p | 2a1a2a3a4 and ∞. Then ai ∈ Z_p^× for all p ∉ S. For each p ∈ S there exists tp ∈ Q_p^× such that ⟨a1, a2⟩p represents tp and ⟨a3, a4⟩p represents −tp (we can assume tp ≠ 0: if 0 is represented by both forms, so is every element of Qp). By Lemma 11.11, there is a rational number t and a prime p0 ∉ S such that t ∈ tp Q_p^{×2} for all p ∈ S and |t|p = 1 for all p ∉ S ∪ {p0}.

The forms ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p represent 0 for all p ∉ S ∪ {p0}, because all such p are odd and ai, ±t ∈ Z_p^×, so (a1, a2)p = 1 = (t, −a1a2)p and (a3, a4)p = 1 = (−t, −a3a4)p, and we may apply Corollary 11.4. Since t ∈ tp Q_p^{×2} for all p ∈ S, the forms ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p also represent 0 for all p ∈ S. Thus ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p represent 0 for all p ≠ p0, and by Corollary 11.5, also for p = p0. By the inductive hypothesis ⟨a1, a2, −t⟩ and ⟨a3, a4, t⟩ both represent 0, therefore ⟨a1, a2, a3, a4⟩ represents 0.
Case n ≥ 5: Let f = ⟨a1, . . . , an⟩, and let S be the set of primes for which ⟨a3, . . . , an⟩p does not represent 0. The set S is finite, by Corollary 11.2. If S is empty then ⟨a3, . . . , an⟩, and therefore f, represents 0 by the inductive hypothesis, so we assume S is not empty. For each p ∈ S pick tp ∈ Q_p^× represented by ⟨a1, a2⟩p, say a1 xp^2 + a2 yp^2 = tp, such that ⟨a3, . . . , an⟩p represents −tp (such a tp exists since f represents 0 over Qp and, as above, we can always pick tp ≠ 0).

By the weak approximation theorem there exist x, y ∈ Q that are simultaneously close enough to all the xp, yp ∈ Qp so that t = a1 x^2 + a2 y^2 is close enough to all the tp to guarantee that t ∈ tp Q_p^{×2} for all p ∈ S (for p < ∞ the square class depends on at most the first three nonzero p-adic digits, and over R = Q∞ we can ensure that x and y have the same signs as x∞ and y∞).[1] It follows that ⟨t, a3, . . . , an⟩p represents 0 for all p ∈ S, and since ⟨a3, . . . , an⟩p represents 0 for all p ∉ S, so does ⟨t, a3, . . . , an⟩p. Thus ⟨t, a3, . . . , an⟩p represents 0 for all p, and by the inductive hypothesis, ⟨t, a3, . . . , an⟩ represents 0. Therefore ⟨a3, . . . , an⟩ represents −t = −a1 x^2 − a2 y^2, hence ⟨a1, . . . , an⟩ represents 0.

[1] Equivalently, the set of squares Q_p^{×2} is an open subset of Q_p^×, hence so is every square class tp Q_p^{×2}.
MIT OpenCourseWare
http://ocw.mit.edu
18.782 Introduction to Arithmetic Geometry
Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-782-introduction-to-arithmetic-geometry-fall-2013/07649a23c61675ef75f12a8b77367e93_MIT18_782F13_lec11.pdf |
6.241 Dynamic Systems and Control
Lecture 2: Least Square Estimation
Readings: DDV, Chapter 2
Emilio Frazzoli
Aeronautics and Astronautics
Massachusetts Institute of Technology
February 7, 2011
E. Frazzoli (MIT)
Lecture 2: Least Squares Estimation
Feb 7, 2011
1 / 9
Outline
1 Least Squares Estimation
Least Squares Estimation
Consider a system of m equations in n unknowns, with m > n, of the form

y = Ax.

Assume that the system is inconsistent: there are more equations than unknowns, and the equations are not linear combinations of one another. In these conditions, there is no x such that y − Ax = 0. However, one can write e = y − Ax, and find the x that minimizes ‖e‖.
In particular, the problem

min_x ‖e‖^2 = min_x ‖y − Ax‖^2

is a least squares problem. The optimal x is the least squares estimate.
Computing the Least-Square Estimate
The set M := {z ∈ Rm : z = Ax, x ∈ Rn} is a subspace of Rm, called the
range of A, R(A), i.e., the set of all vectors that can be obtained by linear
combinations of the columns of A.
Recall the projection theorem. Now we are looking for the element of M that is "closest" to y, in terms of the 2-norm. We know the solution is such that

e = (y − Ax̂) ⊥ R(A).

In particular, if ai is the i-th column of A, it is also the case that

(y − Ax̂) ⊥ R(A) ⇔ ai′(y − Ax̂) = 0, i = 1, . . . , n
⇔ A′(y − Ax̂) = 0
⇔ A′Ax̂ = A′y.

A′A is an n × n matrix; is it invertible? If it were, then at this point it is easy to recover the least-squares solution as

x̂ = (A′A)^{−1} A′y.
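For a concrete m × 2 example, the normal equations A′Ax̂ = A′y can be solved by hand. A sketch fitting a line (this illustrates the formula for a two-column A; names are ours, and it is not a general-purpose solver):

```python
def lstsq(A, y):
    """Solve the normal equations (A'A) x = A'y for an m x 2 matrix A,
    given as a list of rows; returns (x1, x2)."""
    g11 = sum(r[0] * r[0] for r in A)
    g12 = sum(r[0] * r[1] for r in A)
    g22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * yi for r, yi in zip(A, y))
    b2 = sum(r[1] * yi for r, yi in zip(A, y))
    det = g11 * g22 - g12 * g12          # invertible iff columns independent
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

# Fit y = c0 + c1*t through points lying exactly on the line y = 1 + 2t:
ts = [0.0, 1.0, 2.0, 3.0]
A = [[1.0, t] for t in ts]
y = [1.0 + 2.0 * t for t in ts]
c0, c1 = lstsq(A, y)
assert abs(c0 - 1.0) < 1e-9 and abs(c1 - 2.0) < 1e-9
```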
The Gram product
Let us take a more abstract look at this problem, e.g., to address the case that the data vector y is infinite-dimensional.

Given an array of nA vectors A = [a1| . . . |anA], and an array of nB vectors B = [b1| . . . |bnB], both from an inner product space V, define the Gram product ⟨A, B⟩ as the nA × nB matrix whose (i, j) entry is ⟨ai, bj⟩.

For the usual Euclidean inner product in an m-dimensional space, ⟨A, B⟩ = A′B.

Symmetry and linearity of the inner product imply symmetry and linearity of the Gram product.
The Least Squares Estimation Problem
Consider again the problem of computing

min_{x ∈ R^n} ‖y − Ax‖ = min_{ŷ ∈ R(A)} ‖y − ŷ‖, where e = y − Ax.

Here y can be an infinite-dimensional vector, as long as n is finite. We assume that the columns of A = [a1, a2, . . . , an] are independent.
Lemma (Gram matrix)
The columns of a matrix A are independent ⇔ ⟨A, A⟩ is invertible.

Proof: If the columns are dependent, then there is η ≠ 0 such that Aη = Σ_j aj ηj = 0. But then each entry of ⟨A, A⟩η equals Σ_j ⟨ai, aj⟩ηj = ⟨ai, Aη⟩ = 0, by the linearity of the inner product. That is, ⟨A, A⟩η = 0, and hence ⟨A, A⟩ is not invertible.
Conversely, if ⟨A, A⟩ is not invertible, then ⟨A, A⟩η = 0 for some η ≠ 0. In other words η′⟨A, A⟩η = ‖Aη‖^2 = 0, and hence Aη = 0.
The Projection theorem and least squares estimation 1

y has a unique decomposition y = y1 + y2, where y1 ∈ R(A) and y2 ∈ R⊥(A).

To find this decomposition, let y1 = Aα, for some α ∈ R^n. Then, ensure that y2 = y − y1 ∈ R⊥(A). For this to be true,

⟨ai, y − Aα⟩ = 0, i = 1, . . . , n,

i.e., ⟨A, y − Aα⟩ = 0. Rearranging, we get

⟨A, A⟩α = ⟨A, y⟩,

and, if the columns of A are independent,

α = ⟨A, A⟩^{−1} ⟨A, y⟩.
The Projection theorem and least squares estimation 2
Decompose e = e1 + e2 similarly (e1 ∈ R(A), and e2 ∈ R⊥(A)). Note ‖e‖^2 = ‖e1‖^2 + ‖e2‖^2. Rewrite e = y − Ax as

e1 + e2 = y1 + y2 − Ax,

i.e.,

e2 − y2 = y1 − e1 − Ax.

Each side must be 0, since they lie in orthogonal subspaces!

e2 = y2: can't do anything about it.
e1 = y1 − Ax = A(α − x): minimize by choosing x = α. In other words

x̂ = ⟨A, A⟩^{−1} ⟨A, y⟩.
Examples

If y, e ∈ R^m, and it is desired to minimize ‖e‖^2 = e′e = Σ_{i=1}^m |ei|^2, then

x̂ = (A′A)^{−1} A′y.

(If the columns of A are mutually orthogonal, A′A is diagonal, and inversion is easy.)

If y, e ∈ R^m, and it is desired to minimize e′Se, where S is a Hermitian, positive-definite matrix, then

x̂ = (A′SA)^{−1} A′Sy.

Note that if S is diagonal, then e′Se = Σ_{i=1}^m sii |ei|^2, i.e., we are minimizing a weighted least squares criterion. A large sii penalizes the i-th component of the error more relative to the others.

In a general stochastic setting, the weight matrix S should be related to the noise covariance, i.e.,

S = (E[ee′])^{−1}.
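The weighted criterion e′Se with diagonal S can be sketched the same way as the unweighted case: down-weighting a measurement removes its influence on the estimate. (Helper name is ours; a two-column illustration, not a general solver.)

```python
def weighted_lstsq(A, y, w):
    """Solve (A'SA) x = A'Sy for diagonal S = diag(w) and an m x 2 matrix A."""
    g11 = sum(wi * r[0] * r[0] for r, wi in zip(A, w))
    g12 = sum(wi * r[0] * r[1] for r, wi in zip(A, w))
    g22 = sum(wi * r[1] * r[1] for r, wi in zip(A, w))
    b1 = sum(wi * r[0] * yi for r, yi, wi in zip(A, y, w))
    b2 = sum(wi * r[1] * yi for r, yi, wi in zip(A, y, w))
    det = g11 * g22 - g12 * g12
    return ((g22 * b1 - g12 * b2) / det, (g11 * b2 - g12 * b1) / det)

# One outlier at t = 3; giving it weight ~0 recovers the clean line y = 1 + 2t.
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
y = [1.0, 3.0, 5.0, 100.0]
c0, c1 = weighted_lstsq(A, y, [1.0, 1.0, 1.0, 1e-9])
assert abs(c0 - 1.0) < 1e-6 and abs(c1 - 2.0) < 1e-6
```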
MIT OpenCourseWare
http://ocw.mit.edu
6.241J / 16.338J Dynamic Systems and Control
Spring 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms . | https://ocw.mit.edu/courses/6-241j-dynamic-systems-and-control-spring-2011/076ac20d5b8bda672a8bcee8d3e95438_MIT6_241JS11_lec02.pdf |
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 46, NO. 12, DECEMBER 1998
1619
Upper Bounds on the Bit-Error Rate of
Optimum Combining in Wireless Systems
Jack H. Winters, Fellow, IEEE, and Jack Salz, Member, IEEE
Abstract—This paper presents upper bounds on the bit-error
rate (BER) of optimum combining in wireless systems with
multiple cochannel interferers in a Rayleigh fading environment.
We present closed-form expressions for the upper bound on
the bit-error rate with optimum combining, for any number of
antennas and interferers, with coherent detection of BPSK and
QAM signals, and differential detection of DPSK. We also present
bounds on the performance gain of optimum combining over
maximal ratio combining. These bounds are asymptotically tight
with decreasing BER, and results show that the asymptotic gain
is within 2 dB of the gain as determined by computer simulation
for a variety of cases at a ��0� BER. The closed-form expressions
for the bound permit rapid calculation of the improvement with
optimum combining for any number of interferers and antennas,
as compared with the CPU hours previously required by Monte
Carlo simulation. Thus these bounds allow calculation of the
performance of optimum combining under a variety of conditions
where it was not possible previously, including analysis of the
outage probability with shadow fading and the combined effect
of adaptive arrays and dynamic channel assignment in mobile
radio systems.
Index Terms— Bit-error rate, optimum combining, Rayleigh
fading, smart antennas.
I. INTRODUCTION
ANTENNA arrays with optimum combining combat multipath fading of the desired signal and suppress interfering signals, thereby increasing both the performance and capacity of wireless systems. With optimum combining, the received
signals are weighted and combined to maximize the signal-to-
interference-plus-noise ratio (SINR) at the receiver. Optimum
combining yields superior performance over maximal ratio
combining, whereby the signals are combined to maximize
signal-to-noise ratio, in interference-limited systems. However,
while with maximal ratio combining the bit-error rate can
be expressed in closed form [1], with optimum combining
a closed-form expression is available only with one interferer
[2], [3]. With multiple interferers, Monte Carlo simulation has
been used [3]–[5], but this requires on the order of CPU hours
even with just a few interferers. Thus the improvement of
optimum combining has only been studied for a few simple
cases, and detailed comparisons (e.g., in terms of outage
probability) have not been done.

Paper approved by N. C. Beaulieu, the Editor for Wireless Communication
Theory of the IEEE Communications Society. Manuscript received September
21, 1993; revised November 28, 1996. This paper was presented in part at
the 1994 IEEE Vehicular Technology Conference, Stockholm, Sweden, June
8–10, 1994.
J. H. Winters is with AT&T Labs–Research, Red Bank, NJ 07701 USA.
J. Salz, retired, was with AT&T Labs–Research, Crawford Hill Laboratory,
Holmdel, NJ 07733 USA.
Publisher Item Identifier S 0090-6778(98)09388-X.

Fig. 1. Block diagram of an M-element adaptive array.
In [6], we showed that, with M antenna elements, the
received signals can be combined to eliminate L < M
interferers in the output signal while obtaining an (M − L)-fold
diversity improvement, i.e., the performance of maximal ratio
combining with M − L antennas and no interference. However,
this "zero-forcing" solution gives far lower output SINR than
optimum combining in most cases of interest and cannot be
used when L ≥ M.
In this paper we present a closed-form expression for the upper bound on the bit-error rate (BER) with optimum combining
in wireless systems. We assume flat fading across the channel
and independent Rayleigh fading of the desired and interfering
signals at each antenna.1 Equations are presented for the
upper bound on the BER for coherent detection of quadrature
amplitude modulated (QAM) and binary phase-shift-keyed
(BPSK) signals, and for differential detection of differential
phase-shift-keyed (DPSK) signals. From these equations, a
lower bound on the improvement of optimum combining over
maximal ratio combining is derived.
In Section II we derive the upper bound on the BER. In
Section III we compare the upper bound to Monte Carlo
simulation results. A summary and conclusions are presented
in Section IV.
II. UPPER BOUND DERIVATION
Fig. 1 shows a block diagram of an M-element adaptive
array. The complex baseband signal received by the ith
antenna element in the kth symbol interval is multiplied
by a controllable complex weight, and the weighted signals
are summed to form the array output signal.
1 As shown in [7], the gain of optimum combining is not significantly
degraded with fading correlation up to about 0.5. Thus our bounds, based on | https://ocw.mit.edu/courses/18-996-random-matrix-theory-and-its-applications-spring-2004/0791574bd66d763e89d3c6572616d996_wintsal_tc.pdf |
independent fading, are reasonably accurate and useful even in environments
with fading correlation up to this level.
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 46, NO. 12, DECEMBER 1998
With optimum combining, the weights are chosen to maximize
the output SINR, which also minimizes the mean-square
error (MSE), which is given by [8]

MSE = (1 + u_d† R⁻¹ u_d)⁻¹   (1)

where R is the received interference-plus-noise correlation
matrix given by

R = σ² I + Σ_{j=1}^{L} u_j u_j†   (2)

I is the identity matrix, σ² is the noise power, u_d and
the u_j are the desired and jth interfering signal propagation
vectors, respectively, and the superscript † denotes complex
conjugate transpose. Here we have assumed the same average
received power for the desired signal at each antenna (that
is, microdiversity rather than macrodiversity) and that the
noise and interfering signals are uncorrelated, and, without
loss of generality, have normalized the received signal power,
averaged over the fading, to unity. Note that the MSE varies at
the fading rate.

Also, note that with only noise at the receiver, σ² is the
variance of the noise normalized to the received desired signal
power, and from (4) and (5) the bound becomes

(6)

where γ is the received SINR, while the actual BER is that
given in [1]. Thus even without interference, the bound
differs from the actual BER, and this difference increases as
the received SINR decreases.
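As a rough numerical sketch of these quantities (the variable names and the max-SINR weight solution w = R⁻¹u_d are standard adaptive-array results assumed here, not taken verbatim from the paper), the output SINR of optimum combining can be compared with maximal ratio combining for one Rayleigh channel realization:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, sigma2 = 4, 2, 0.1          # antennas, interferers, noise power

# i.i.d. Rayleigh (complex Gaussian) propagation vectors, unit average power
u_d = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
U_i = (rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))) / np.sqrt(2)

# Interference-plus-noise correlation matrix: R = sigma^2 I + sum_j u_j u_j^H
R = sigma2 * np.eye(M) + U_i @ U_i.conj().T

def sinr(w):
    # Output SINR = |w^H u_d|^2 / (w^H R w)
    return (abs(w.conj() @ u_d) ** 2 / (w.conj() @ R @ w)).real

w_oc = np.linalg.solve(R, u_d)    # optimum (max-SINR / MMSE-direction) weights
w_mrc = u_d                        # maximal ratio combining weights

# Optimum combining is never worse than MRC, and its SINR equals u_d^H R^-1 u_d
assert sinr(w_oc) >= sinr(w_mrc) - 1e-12
assert abs(sinr(w_oc) - (u_d.conj() @ np.linalg.solve(R, u_d)).real) < 1e-9
```

With interference present (L > 0), the gap between the two SINRs is exactly the kind of improvement the paper's bounds quantify in closed form.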
For coherent detection of BPSK or QAM, the BER is
bounded by [9]

(3)

where the expected value is taken over the fading parameters
of the desired and interfering signals, and the constant in
the exponent is the variance of the BPSK or QAM symbol
levels (with different values for BPSK and quaternary
phase-shift keying (QPSK)). For differential detection of
DPSK, assuming Gaussian noise and interference,² the BER
is given by [1]

(4)

Thus the BER expression for both cases differs only by a
constant, and we will now consider the common expectation
term. As shown in the Appendix, this term can be upper-bounded by

(5)

where the bound involves the determinant of R, and λ_i is the
ith eigenvalue of R.

Since (5) is the key inequality in our bound (and is the only
inequality we use in determining the bound for differential
detection of DPSK), let us examine its accuracy. The bound
is tight if the λ_i's are large, and since the λ_i's are proportional
to the interference signal powers, the bound is tight for
large received SINR, i.e., low BER's. Although BER ≤ 1/2
in all cases, the BER as given by the bound may exceed 1/2
for small received SINR. Thus, with small received SINR,
BER's greater than 1/2 may occasionally be averaged into the
average BER, reducing the tightness of the bound.

Let us consider the case of interference only. In this case,
the determinant of R, which is given by (2), may also be
expressed as

det R = Σ_π (±) R_{1,π(1)} R_{2,π(2)} ⋯ R_{M,π(M)}   (7)

where R_{ij} is the (i, j)th element of R, the sum is extended
over all permutations π of the column indices, the "+" sign
is assigned for even permutations (i.e., an even number of
swappings in the permutation), and the "−" sign for odd
permutations. Now, using (7), the bound can be written as

(8)

where the average power of each interferer is normalized to
the desired signal power. Similarly, from (7), it can be shown
that

(9)

(10)

where the sum is over all sets of positive integers satisfying
the constraint noted above; for example, in one case there
are 6 such sets (see Table I). Each coefficient is obtained by
summing the coefficients for similar terms in the permutation
expansion and can be determined as shown below. Since each
coefficient is an integer, (10) can also be expressed as

(11)

² Since the stronger the interference, the more that optimum combining
suppresses it, with the Gaussian assumption we overestimate the probability
of strong interference. Note that this is consistent with the derivation of an
upper bound on the BER.
WINTERS AND SALZ: UPPER BOUNDS ON THE BER OF OPTIMUM COMBINING IN WIRELESS SYSTEMS
TABLE I
VALUES OF THE COEFFICIENTS

To determine the coefficients, first note that if all interferers
have equal power, then (11) becomes

(12)

where the coefficients of (12) are closely related to those
of (11): from [6], (12) is a polynomial in the common
normalized interferer power, with order given by the number
of antennas. This result is not only useful when all interferers
have equal power, but also serves as a consistency check on
our calculated values of the coefficients.

The values of the coefficients were generated using a
computer program that examines every permutation in (7) for
a given number of antennas and interferers and counts the
number of each type of term. Tables I and II list these values.
Values for higher numbers of antennas can also be calculated;
however, since the amount of computer time needed to
generate the coefficients increases exponentially with the
number of antennas, our program could only generate these
values in a reasonable amount of computer time for the cases
listed (beyond these, a hundred CPU hours on a SPARCstation 20
would be required).

TABLE II
VALUES OF THE COEFFICIENTS

From (3), the upper bound on the BER with coherent
detection of BPSK or QAM is now given by

(13)

and from (4), the upper bound on the BER with differential
detection of DPSK is given by

(14)

For the case of noise with interferers, consider the noise as
an infinite number of weak interferers with total power equal
to the noise: let the number of interferers grow without bound
while the power of each shrinks, holding the total added power
equal to the noise power. Then (15) and (16) follow. Therefore,
with noise, the BER bound is the same as in (13) and (14),
but with the noise included as an interferer. In this case, if we
define the received desired signal-to-noise ratio and the
per-interferer signal-to-noise ratios accordingly, then (14)
becomes [similarly for (13)]

(17)
Since the factor outside the brackets in (17) is the bound
with maximal ratio combining, the term in the brackets is the
improvement of optimum combining over maximal ratio
combining based on the BER bound. Defining the gain of
optimum combining as the reduction in the required
signal-to-noise ratio for a given BER, from (17) this gain in
decibels is given by

Gain (dB)   (18)

This gain is therefore independent of the desired signal
power (because the bound is asymptotically tight as the
signal-to-noise ratio grows). However, this is the gain of the
BER bound with optimum combining over the BER bound
with maximal ratio combining. Since the required
signal-to-noise ratio for a given BER with maximal ratio
combining is less than the bound, the true gain may differ
from (18), and to obtain a bound on the gain, the gain in (18)
must be reduced accordingly. For example, with differential
detection of DPSK, to obtain a bound the gain given in (18)
is reduced by a factor; note that as the BER decreases, this
factor reduces to one and the gain approaches (18). Thus
we will refer to (18) as the asymptotic gain.
III. COMPARISON TO EXACT THEORY AND SIMULATION
In this section, we compare the bound to theoretical results
and to Monte Carlo simulation results.
Fig. 2 compares theoretical results (from [1]–[3]) for the
gain to the asymptotic gain (18) versus BER with coherent
detection of BPSK. Results are generated for interference
levels of 3 and 10 dB. In all cases the gain monotonically
decreases to the asymptotic gain as the BER decreases. The
gain approaches the asymptotic gain more slowly with
decreasing BER for larger gains, and, at low BER's, the
accuracy of the asymptotic gain also decreases with larger
gains. Thus the accuracy of the asymptotic gain decreases as
the SINR required for a given BER with optimum combining
decreases, as predicted by the approximation in Section II.
Fig. 2. Gain versus BER for coherent detection of BPSK—comparison of
analytical results to the asymptotic gain.
Fig. 3. Gain for 1, 2, and 6 equal-power interferers versus
signal-to-noise ratio of each interferer—comparison of analytical and Monte
Carlo simulation results with coherent detection of BPSK [5] to the asymptotic
gain.
Fig. 3 compares theoretical and Monte Carlo simulation [5]
results for the gain to the asymptotic gain with 1, 2, and 6
equal-power interferers. Results are plotted versus the
signal-to-noise ratio of each interferer, for coherent detection
of BPSK at a 10⁻³ BER.³ In all cases, the asymptotic gain has
the same shape as the gain and is within 1.7, 1.0, and 0.4 dB
for the three cases. Since optimum combining gives the
largest gain when the interference power is concentrated in
one interferer and the least gain when the interference power
is equally divided among many interferers, these represent
the best and worst cases for the gain in an interference-limited
cellular system. Thus from the results in Fig. 3, we would
expect the asymptotic gain to be within 0.4–1.7 dB of the
actual gain for all cases in such cellular systems.

³ This BER was used because the results in [5] were obtained for this BER.
As shown in [5], the gain does not change significantly for BER's between
10⁻² and 10⁻³, the range of interest in most mobile radio systems.
These cases include interference
scenarios that cover the range of worst to best cases for the
gain of optimum combining in cellular systems.
The bound is most accurate with differential detection of
DPSK and high SINR, corresponding to low BER and a
few antennas. Because of the 2-dB accuracy, the bound is
most useful where the optimum combining improvement is
the largest, which is the case of most interest. The closed-
form expression for the bound permits rapid calculation of
the improvement with optimum combining for any number
of interferers and antennas, as compared with the CPU hours
previously required by Monte Carlo simulation. These bounds
allow calculation of the performance of optimum combining
under a variety of conditions where it was not possible
previously, including analysis of the outage probability with
shadow fading and the combined effect of adaptive arrays and
dynamic channel assignment in mobile radio systems.
APPENDIX

Diagonalizing R by a unitary transformation Q, we obtain

R = Q Λ Q†   (19)

where Λ denotes an M × M matrix with nonzero elements
only on the diagonal, the λ_i's. Let z = Q† u_d. Then
(20)–(24) follow. Since, with independent Rayleigh fading at
each antenna, the elements of u_d are independent and
identically distributed (i.i.d.) complex Gaussian random
variables, the elements of z are also i.i.d. complex Gaussian
random variables with the same mean and variance.
Furthermore, the z's are independent of the λ's. Thus we can
average over the desired and interfering signal vectors
separately, i.e.,

(25)
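The diagonalization step can be sanity-checked numerically. This sketch (our own notation, not the paper's) verifies that for a Hermitian R, the determinant of I + R equals the product of (1 + λ_i) over its eigenvalues, the determinant-eigenvalue identity the appendix bound relies on:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 5
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
R = A @ A.conj().T                 # Hermitian positive semidefinite, like (2)

lam = np.linalg.eigvalsh(R)        # real eigenvalues, ascending
lhs = np.linalg.det(np.eye(M) + R).real
rhs = np.prod(1.0 + lam)

assert np.isclose(lhs, rhs)        # det(I + R) = prod_i (1 + lambda_i)
assert np.all(lam >= -1e-9)        # the lambda_i's are nonnegative
```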
Fig. 4. Gain versus the number of antennas with two and six equal-power interferers—comparison of Monte Carlo simulation results with coherent detection
of BPSK [3] to the asymptotic gain.

Now, consider the lower bound on the gain obtained from
the BER bound (17), as compared to the asymptotic gain.
Without interference, differential detection of DPSK with
maximal ratio combining requires 13.3 dB
(theoretically [10]) for a 10⁻³ BER, while the BER bound
(17) gives 13.5 dB. Thus the lower bound on the gain
(from (17)) at a 10⁻³ BER is 0.2 dB less than the asymptotic
gain for any interference scenario—in particular, the lower
bound on the gain is 0.2 dB less than the results shown in
Fig. 3. Similarly, coherent detection of BPSK with maximal
ratio combining requires 11.1 dB for a 10⁻³
BER, while the BER bound (13) gives 15.0 dB. Thus the
bound is most accurate with differential detection of DPSK
and low BER's.

Fig. 4 compares Monte Carlo simulation results [3] for the
gain to the asymptotic gain for two and six equal-power
interferers. Results are plotted versus the number of antennas,
with 3 dB signal-to-noise ratio for all interferers and coherent
detection of BPSK at a 10⁻³ BER. Again the asymptotic
gain has the same shape as the simulation results. The cases
include both many more interferers than antennas and many
more antennas than interferers, but in all cases the asymptotic
gain is within 1.8 dB of simulation results.
IV. CONCLUSIONS

In this paper we have presented upper bounds on the bit-error rate (BER) of optimum combining in wireless systems
with multiple cochannel interferers in a Rayleigh fading environment. We presented closed-form expressions for the upper
bound on the bit-error rate with optimum combining, for any
number of antennas and interferers, with coherent detection of
BPSK and QAM signals, and differential detection of DPSK.
We also presented bounds on the performance gain of optimum
combining over maximal ratio combining and showed that
these bounds are asymptotically tight with decreasing BER.
Results showed that the asymptotic gain is within 2 dB of
the gain as determined by computer simulation for a variety
of cases at a 10⁻³ BER.
Since the z_i's are complex Gaussian random variables with
zero mean and unit variance, (26) and (27) follow. Since the
λ_i's are nonnegative, (28) holds and, therefore, (29), where
|R| denotes the determinant of R.

REFERENCES
[7] J. Salz and J. H. Winters, “Effect of fading correlation on adaptive arrays
in digital wireless communications,” IEEE Trans. Veh. Technol., vol. 43,
pp. 1049–1057, Nov. 1994.
[8] R. A. Monzingo and T. W. Miller, Introduction to Adaptive Arrays.
New York: Wiley, 1980.
[9] G. J. Foschini and J. Salz, “Digital communications over fading radio
channels,” Bell Syst. Tech. J., vol. 62, pp. 429–456, Feb. 1983.
[10] J. H. Winters, “Switched diversity with feedback for DPSK mobile radio
systems,” IEEE Trans. Veh. Technol., vol. VT-32, pp. 134–150, Feb.
1983.
Jack H. Winters (S’77–M’81–SM’88–F’96) received the B.S.E.E. degree
from the University of Cincinnati, Cincinnati, OH, in 1977 and the M.S. and
the Ph.D. degrees in electrical engineering from The Ohio State University,
Columbus, in 1978 and 1981, respectively.
He has been with AT&T Bell Laboratories, now AT&T Labs–Research,
since 1981, where he is in the Wireless Systems Research Department. He has
studied signal processing techniques for increasing the capacity and reducing
signal distortion in fiber optic, mobile radio, and indoor radio systems, and
is currently studying adaptive arrays and equalization for indoor and mobile
radio.
Dr. Winters is a member of Sigma Xi.
[1] W. C. Jakes Jr. et al., Microwave Mobile Communications. New York:
Wiley, 1974.
[2] V. M. Bogachev and I. G. Kiselev, “Optimum combining of signals in
space-diversity reception,” Telecommun. Radio Eng., vol. 34/35, no. 10,
pp. 83, Oct. 1980.
[3] J. H. Winters, “Optimum combining in digital mobile radio with
cochannel interference,” IEEE J. Select. Areas Commun., vol. SAC-2,
no. 4, July 1984.
[4] ——, “Optimum combining for indoor radio systems with multiple
users,” IEEE Trans. Commun., vol. COM-35, no. 11, Nov. 1987.
[5] ——, “Signal acquisition and tracking with adaptive arrays in the
digital mobile radio system IS-54 with flat fading,” IEEE Trans. Veh.
Technol., Nov. 1993.
[6] J. H. Winters, J. Salz, and R. D. Gitlin, “The impact of antenna
diversity on the capacity of wireless communication systems,” IEEE
Trans. Commun., Apr. 1994.
Jack Salz (S’59–M’89) received the B.S.E.E. degree in 1955, the M.S.E.
degree in 1956, and the Ph.D. degree in 1961, all in electrical engineering,
from the University of Florida, Gainesville, FL.
He joined AT&T Bell Laboratories in 1956, where he first worked on
the electronic switching system. From 1968 to 1981, he supervised a group
engaged in theoretical work in data communications. During the academic year
1967–1968, he was on leave from AT&T Bell Laboratories as a Professor of
electrical engineering at the University of Florida. In the spring of 1981, he
was a Visiting Lecturer at Stanford University, Stanford, CA. In the spring
of 1983, he was a Mackay Lecturer at the University of California, Berkeley.
In 1988, he held the Shirly and Burt Harris Chair in Electrical Engineering
at Technion–Israel Institute of Technology, Haifa, Israel. In 1992, he became
an AT&T Bell Laboratories Fellow. He retired from AT&T in 1995 and is
currently splitting his time between Lucent–Bell Labs, Bellcore, the University
of California at Berkeley, and the Technion. | https://ocw.mit.edu/courses/18-996-random-matrix-theory-and-its-applications-spring-2004/0791574bd66d763e89d3c6572616d996_wintsal_tc.pdf |
Electricity and Magnetism
• The x in 8.02x
• Course Organization
• The Beginning
Feb 6 2002

From Amber to the Radio...
600 BC: ελεχτρον (amber) → 1632: Galileo → Now
Observation → Prediction → Physical Law

From Amber to the Radio...
600 BC: ελεχτρον (amber) → 1632: Galileo → 1791: Coulomb → 1830: Gauss → 1831: Faraday → 1873: Maxwell → 1887: Hertz → Now

8.02x
• Lecture Demos (me)
• Experiments (YOU!)

Let's start from the beginning!
600 BC: ελεχτρον (amber) → Now
20.110/5.60 Fall 2005, Lecture #3

EXPANSIONS, ENERGY, ENTHALPY

Isothermal Gas Expansion (ΔT = 0):

gas (p_1, V_1, T) = gas (p_2, V_2, T)

Irreversibly (many ways possible):

(1) Set p_ext = 0:

w^(1) = −∫_{V_1}^{V_2} p_ext dV = 0

(2) Set p_ext = p_2:

w^(2) = −∫_{V_1}^{V_2} p_2 dV = −p_2 (V_2 − V_1)

Note, work is negative: the system expands against the surroundings. (On the p–V diagram, −w^(2) is the rectangular area p_2 (V_2 − V_1) between V_1 and V_2.)

20.110J / 2.772J / 5.601J Thermodynamics of Biomolecular Systems. Instructors: Linda G. Griffith, Kimberly Hamad-Schifferli, Moungi G. Bawendi, Robert W. Field
(3) Carry out the change in two steps:

gas (p_1, V_1, T) = gas (p_3, V_3, T) = gas (p_2, V_2, T),   p_1 > p_3 > p_2

w^(3) = −∫_{V_1}^{V_3} p_3 dV − ∫_{V_3}^{V_2} p_2 dV = −p_3 (V_3 − V_1) − p_2 (V_2 − V_3)

More work is delivered to the surroundings in this case.

(4) Reversible change, p = p_ext throughout:

w_rev = −∫_{V_1}^{V_2} p dV

Maximum work delivered to the surroundings for an isothermal gas expansion is obtained using a reversible path.

For an ideal gas:

w_rev = −∫_{V_1}^{V_2} (nRT/V) dV = −nRT ln(V_2/V_1) = nRT ln(p_2/p_1)
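Cases (1)-(4) above can be checked numerically. In this sketch (the mole number, temperature, and volumes are arbitrary illustrative values), the reversible isothermal path delivers the most work to the surroundings, i.e., w_rev is the most negative:

```python
import math

R = 8.314          # molar gas constant, J/(mol K)
n, T = 1.0, 300.0
V1, V2 = 0.010, 0.020   # m^3
V3 = 0.014              # intermediate volume for the two-step path

p = lambda V: n * R * T / V       # ideal-gas pressure along the isotherm
p2, p3 = p(V2), p(V3)

w1 = 0.0                                 # (1) expansion against vacuum
w2 = -p2 * (V2 - V1)                     # (2) p_ext = p2 throughout
w3 = -p3 * (V3 - V1) - p2 * (V2 - V3)    # (3) two steps
w_rev = -n * R * T * math.log(V2 / V1)   # (4) reversible path

# More steps toward reversibility -> more work delivered to surroundings
assert w_rev < w3 < w2 < w1 == 0.0
```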
The Internal Energy U

dU = đq + đw   (First Law)

Along a path: dU = C_path dT − p_ext dV

And with U(T, V):

dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV

Some frequent constraints:

• Reversible (p = p_ext)  ⇒  dU = đq_rev + đw_rev = đq_rev − p dV
• Isolated  ⇒  đq = đw = 0
• Adiabatic  ⇒  đq = 0  ⇒  dU = đw = −p_ext dV (= −p dV if also reversible)
• Constant V  ⇒  đw = 0  ⇒  dU = đq_V

But also, at constant V (dV = 0),

dU = (∂U/∂T)_V dT  ⇒  đq_V = (∂U/∂T)_V dT  ⇒  (∂U/∂T)_V = C_V   ← very important result!!

so đq_V = C_V dT, and

dU = C_V dT + (∂U/∂V)_T dV   ← what is (∂U/∂V)_T?
Joule Free Expansion of a Gas (to get (∂U/∂V)_T)

gas | vacuum:   gas (p_1, T_1, V_1) = gas (p_2, T_2, V_2)

Adiabatic: q = 0. Expansion into vacuum (p_ext = 0): w = 0.

Since q = w = 0  ⇒  dU or ΔU = 0 (constant U).

Recall dU = C_V dT + (∂U/∂V)_T dV = 0, so

(∂U/∂V)_T dV = −C_V dT   (at constant U)

(∂U/∂V)_T = −C_V (∂T/∂V)_U   ← measure (ΔT/ΔV)_U in the Joule experiment!

Joule did this. Define

lim_{ΔV→0} (ΔT/ΔV)_U = (∂T/∂V)_U ≡ η_J   ← Joule coefficient

∴ dU = C_V dT − C_V η_J dV

• For an ideal gas  ⇒  η_J = 0 exactly, so dU = C_V dT always for an ideal gas: U(T) only depends on T.

The internal energy of an ideal gas depends only on temperature.

Consequences:

⇒ ΔU = 0 for all isothermal expansions or compressions of ideal gases
⇒ ΔU = ∫ C_V dT for any ideal gas change in state
Enthalpy H(T, p):   H ≡ U + pV

Chemical reactions and biological processes usually take place under
constant pressure and with reversible pV work. Enthalpy turns out
to be an especially useful function of state under those conditions.

gas (p, T_1, V_1) = [reversible, const. p] = gas (p, T_2, V_2)

ΔU = q + w = q_p − p ΔV
ΔU + p ΔV = q_p
Δ(U + pV) = q_p   ← define H ≡ U + pV

⇒ ΔH = q_p for a reversible constant-p process

Choose H(T, p):

dH = (∂H/∂T)_p dT + (∂H/∂p)_T dp

What are (∂H/∂T)_p and (∂H/∂p)_T?

• (∂H/∂T)_p  ⇒  for a reversible process at constant p (dp = 0), dH = đq_p and dH = (∂H/∂T)_p dT, so đq_p = (∂H/∂T)_p dT. But đq_p = C_p dT, ∴ (∂H/∂T)_p = C_p.
• (∂H/∂p)_T  ⇒  Joule-Thomson expansion

porous partition (throttle); adiabatic, q = 0

gas (p_1, T_1) = gas (p_2, T_2)

w = p_1 V_1 − p_2 V_2  ⇒  ΔU = q + w = p_1 V_1 − p_2 V_2

∴ ΔU + Δ(pV) = 0  ⇒  ΔH = Δ(U + pV) = 0

Joule-Thomson is a constant-enthalpy process.

dH = C_p dT + (∂H/∂p)_T dp = 0  ⇒  C_p dT = −(∂H/∂p)_T dp

⇒ (∂H/∂p)_T = −C_p (∂T/∂p)_H   ← can measure (ΔT/Δp)_H

Define lim_{Δp→0} (ΔT/Δp)_H = (∂T/∂p)_H ≡ μ_JT   ← Joule-Thomson coefficient

∴ (∂H/∂p)_T = −μ_JT C_p   and   dH = C_p dT − μ_JT C_p dp
For an ideal gas: U(T) and pV = nRT, so

H ≡ U(T) + pV = U(T) + nRT

which only depends on T, with no p dependence: H(T).

⇒ (∂H/∂p)_T = 0 and μ_JT = 0 for an ideal gas.

For an ideal gas, C_p = C_V + R (per mole):

C_p = (∂H/∂T)_p,   C_V = (∂U/∂T)_V,   H = U + pV,   pV = RT

(∂H/∂T)_p = (∂U/∂T)_p + R = (∂U/∂T)_V + (∂U/∂V)_T (∂V/∂T)_p + R

(∂U/∂V)_T = 0 for an ideal gas

∴ C_p = C_V + R
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.438 Algorithms For Inference
Fall 2014
9 Forward-backward algorithm, sum-product on
factor graphs
The previous lecture introduced belief propagation (sum-product), an efficient inference algorithm for tree structured graphical models. In this lecture, we specialize it
further to the so-called hidden Markov model (HMM), a model which is very useful
in practice for problems with temporal structure.
9.1 Example: convolution codes
We motivate our discussion of HMMs with a kind of code for communication called a
convolution code. In general, the problem of communication is that the sender would
like to send a message m, represented as a bit string, to a receiver. The message may
be corrupted along the way, so we need to introduce redundancy into the message so
that it can be reconstructed accurately even in the presence of noise. To do this, the
sender sends a coded message b over a noisy channel. The channel introduces some
noise (e.g. by flipping random bits). The receiver receives the “received message” y
and then applies a decoding procedure to get the decoded message mˆ . A schematic
is shown in Figure 1. Clearly, we desire a coding scheme where mˆ = m with high
probability, b is not much larger than m, and mˆ can be efficiently computed from y .
We now discuss one example of a coding scheme, called a convolution code. Suppose the message m consists of N bits. The coded message b will consist of 2N − 1
bits, alternating between the following:
• The odd-numbered bits b2i−1 repeat the message bits mi exactly.
• The even-numbered bits b2i are the XOR of message bits mi and mi+1, denoted mi ⊕ mi+1.
The ratio of the lengths of m and b is called the rate of the code, so this convolution
code is a rate 1
2 code, i.e. for every coded message bit, it can convey 1
2 message bit.
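The encoding rule above translates directly into code; a minimal sketch (bit lists, with b built as described: N message bits interleaved with the N − 1 XOR bits):

```python
def conv_encode(m):
    """Encode message bits m into the coded message b:
    odd-numbered bits repeat m_i; even-numbered bits are m_i XOR m_{i+1}."""
    b = []
    for i in range(len(m)):
        b.append(m[i])                 # b_{2i-1} = m_i
        if i + 1 < len(m):
            b.append(m[i] ^ m[i + 1])  # b_{2i} = m_i XOR m_{i+1}
    return b

print(conv_encode([1, 0, 1, 1]))  # [1, 1, 0, 1, 1, 0, 1]
```

Decoding under the noisy channel is exactly the inference problem the rest of the lecture attacks with sum-product.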
We assume an error model called a binary symmetric channel: each of the bits of
the coded message is independently flipped with probability ε. We can represent this
as a directed graphical model as shown in Figure 2. Note that from the receiver’s
perspective, only the yi’s are observed, and the task is to infer the mi’s.
In order to perform inference, we must convert this graph into an undirected
graphical model. Unfortunately, the straightforward construction, where we moralize
the graph, does not result in a tree structure, because of the cliques over mi, mi+1, and
b2i. Instead, we coarsen the representation by combining nodes into “supernodes.” In
particular, we will combine all of the adjacent message bits into variables mimi+1, and
Figure 1: A schematic representation of the problem setup for convolution codes.
Figure 2: Our convolution code can be represented as a directed graphical model.
we will combine pairs of adjacent received message bits y2i−1y2i, as shown in Figure
3. This results in a tree-structured directed graph, and therefore an undirected tree
graph — now we can perform sum-product.
9.2 Hidden Markov models
Observe that the graph in Figure 3 is Markov in its hidden states. More generally, a hidden Markov model (HMM) is a graphical model with the structure shown in Figure 4. Intuitively, the variables xi represent a state which evolves over time and which we don't get to observe, so we refer to them as the hidden state. The variables yi are signals which depend on the state at the same time step, and in most applications are observed, so we refer to them as observations.
From the definition of directed graphical models, we see that the HMM represents the factorization property

P(x1, . . . , xN , y1, . . . , yN ) = P(x1) · ∏_{i=2}^{N} P(xi | xi−1) · ∏_{j=1}^{N} P(yj | xj).    (1)

Observe that we can convert this to the undirected representation shown in Figure 4(b) by taking each of the terms in this product to be a potential. This allows us to
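Equation (1) can be evaluated directly once the three distributions are given; a minimal sketch with dictionary-valued distributions (the particular numbers are illustrative, not from the lecture):

```python
def hmm_joint(x, y, p_init, p_trans, p_obs):
    """P(x, y) = P(x1) * prod_i P(x_i | x_{i-1}) * prod_j P(y_j | x_j),
    i.e. equation (1) evaluated for one configuration of hidden states and observations."""
    p = p_init[x[0]]
    for i in range(1, len(x)):
        p *= p_trans[x[i - 1]][x[i]]   # transition term P(x_i | x_{i-1})
    for j in range(len(y)):
        p *= p_obs[x[j]][y[j]]         # observation term P(y_j | x_j)
    return p

# Illustrative two-state HMM parameters.
p_init = {0: 0.5, 1: 0.5}
p_trans = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.1, 1: 0.9}}
p_obs = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
print(hmm_joint([0, 0], [0, 0], p_init, p_trans, p_obs))  # 0.5 * 0.9 * 0.8 * 0.8 ≈ 0.288
```

Summing this joint over all hidden-state sequences is what the forward-backward algorithm does efficiently instead of by brute force.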
3.044 MATERIALS PROCESSING
LECTURE 3
We will often be comparing heat transfer steps/processes:
When can we neglect one and focus on the other?
Resistance:
   10 > (LA/kA) / (LB/kB) > 0.1

   ratio above 10:  "B" conducts fast, cannot sustain a gradient
   ratio below 0.1: "A" conducts fast, cannot sustain a gradient
Reduce Dimensionality:

   ∂T/∂t = α ∇²T,   T (t, x, y, z)

1. Steady State: ∂T/∂t = 0
2. No Thermal Gradients: ∇T = 0, T = T (t) ONLY, so ∂T/∂t = ....
Date: February 15th, 2012.
In general, for solid / "fluid" interfaces: T2 ≠ Tf
- constant T, B.C. is not appropriate
- fluid cannot always remove heat at the rate it is delivered
How is heat transferred / removed in the fluid?
- conduction: heat moves, atoms sit still
- convection: atoms flow away, carrying heat with them
1. natural convection (T interacts w/ gravity)
2. forced convection (mechanically driven flow)
- radiation: photons carries heat away
What are the proper B.C.?
1. T2 ≠ Tf
2. @ x = L, specify flux:

   q [W/m²] = h (T2 − Tf)

   where h is the heat transfer coefficient [W/(m² K)]: the hotter the material is with respect to the fluid, the faster heat will flow.

Step 1: Solve

   ∂T/∂t = 0 = α ∂²T/∂x²

Step 2: B.C.
   Θ = (T − T1)/(T2 − T1) = x/L = χ, where T2 is unknown

@ x = L:  qcond = qconv

   −k ∂T/∂x = h(T2 − Tf)

Step 3: Solve for ∂T/∂x

   (T − T1)/(T2 − T1) = x/L  ⇒  T = T1 + (x/L)(T2 − T1)  ⇒  ∂T/∂x = (T2 − T1)/L

Plug into −k ∂T/∂x = h(T2 − Tf):

   −k (T2 − T1)/L = h(T2 − Tf)

   (k/L) T1 + h Tf = (h + k/L) T2

   T2 = ((k/L) T1 + h Tf) / (h + k/L)
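The unknown surface temperature T2 from the flux-matching boundary condition, T2 = ((k/L) T1 + h Tf) / (h + k/L), is easy to sanity-check numerically; a minimal sketch (the property values are illustrative):

```python
def surface_temp(T1, Tf, k, h, L):
    """T2 = ((k/L)*T1 + h*Tf) / (h + k/L): steady-state surface temperature
    where conduction to the surface balances convection into the fluid."""
    return ((k / L) * T1 + h * Tf) / (h + k / L)

# Limiting behavior: a huge h pins T2 to the fluid temperature Tf,
# a tiny h (insulating boundary layer) leaves T2 near the interior T1.
print(surface_temp(1000.0, 25.0, 50.0, 1e9, 0.01))   # close to 25
print(surface_temp(1000.0, 25.0, 50.0, 1e-9, 0.01))  # close to 1000
```

These two limits are exactly the constant-temperature and insulated boundary conditions the lecture says are special cases.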
Plug into T = T1 + (x/L)(T2 − T1):

   T = T1 + (x/L) · h(Tf − T1) / (h + k/L)

   Θ = (T − T1)/(Tf − T1) = (x/L) · h / (h + k/L) = (x/L) · (hL/k) / (1 + hL/k)

Biot Number:

   Bi = hL/k = (L/k) / (1/h)

where L/k is the conductive resistance and 1/h is the convective resistance: Bi is dimensionless, the ratio of resistances.
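The Biot-number comparison can be scripted; a minimal sketch (the 0.1 and 10 cutoffs mirror the resistance-ratio rule at the top of the lecture and are a common convention, an assumption here rather than something stated on this slide):

```python
def biot(h, L, k):
    """Bi = h*L/k = (L/k) / (1/h): conductive over convective resistance."""
    return h * L / k

def limiting_regime(Bi):
    # Small Bi: conduction in the solid is fast, so the solid stays nearly
    # isothermal and convection limits the heat flow.  Large Bi: the reverse.
    if Bi < 0.1:
        return "convection-limited (solid nearly isothermal)"
    if Bi > 10:
        return "conduction-limited (surface sits near the fluid temperature)"
    return "mixed: both resistances matter"

Bi = biot(h=100.0, L=0.005, k=200.0)
print(Bi, limiting_regime(Bi))  # 0.0025 -> convection-limited
```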
Three Important Cases:
Generalize:

1. Imperfect interfaces:

   qin = qout = h(T2⁺ − T2⁻), where 1/h = interface resistance

2. Geometry:

   hL/k → What is L?

   L ≈ volume / surface area, a characteristic dimension

Examples:
1. plate heated on one side: L = thickness
2. plate heated on both sides: L = half thickness
3. cylinder: L = πR²l / (2πRl) = R/2
4. sphere (or other 3D shape): L = (4/3)πR³ / (4πR²) = R/3
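The characteristic dimension L ≈ volume / surface area reproduces the cylinder and sphere cases above; a minimal sketch:

```python
import math

def char_length(volume, area):
    """L ~ volume / (heat-transfer surface area), the characteristic dimension."""
    return volume / area

R, l = 0.02, 0.5
cylinder = char_length(math.pi * R**2 * l, 2 * math.pi * R * l)       # = R/2
sphere = char_length(4.0 / 3.0 * math.pi * R**3, 4 * math.pi * R**2)  # = R/3
print(cylinder, sphere)  # 0.01 and about 0.00667 for R = 0.02
```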
MIT OpenCourseWare
http://ocw.mit.edu
3.044 Materials Processing
Spring 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/3-044-materials-processing-spring-2013/081f3c79abb2de69274656cde699ce78_MIT3_044S13_Lec03.pdf |
6.251J / 15.081J Introduction to Mathematical Programming

Lecture 2: Geometry of Linear Optimization I
Outline

1. What is the central problem?
2. Standard Form.
3. Preliminary Geometric Insights.
4. Geometric Concepts (Polyhedra, "Corners").
5. Equivalence of algebraic and geometric concepts.
Central Problem

minimize c′x
subject to:
   ai′x ≥ bi,  i ∈ M1
   ai′x ≤ bi,  i ∈ M2
   ai′x = bi,  i ∈ M3
   xj ≥ 0,  j ∈ N1
   xj free,  j ∈ N2
Standard Form

minimize c′x
subject to  Ax = b
            x ≥ 0

Characteristics:
• Minimization problem
• Equality constraints
• Non-negative variables
Transformations

• max c′x  ⇔  − min (−c′x)
[Figure: a feasible set in the (x1, x2) plane with cost vectors c = (1,0), c = (−1,−1), c = (0,1), c = (1,1)]

• The optimal cost is −∞ and no feasible solution is optimal.
• The feasible set is empty.
Polyhedra

Definitions

• The set {x | a′x = b} is called a hyperplane.
• The set {x | a′x ≥ b} is called a halfspace.
• The intersection of finitely many halfspaces is called a polyhedron.
• A polyhedron P is a convex set, i.e., if x, y ∈ P and 0 ≤ λ ≤ 1, then λx + (1 − λ)y ∈ P.
[Figure: hyperplanes ai′x = bi and the corresponding halfspaces a′x ≤ b and a′x ≥ b, whose intersection forms a polyhedron]
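Membership in a polyhedron and the convexity property are easy to check numerically; a minimal sketch (the constraint data and test points are illustrative):

```python
def in_polyhedron(x, A, b, tol=1e-9):
    """Membership in P = {x : a_i'x >= b_i for every row a_i of A}."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) >= bi - tol
               for row, bi in zip(A, b))

# P: the nonnegative quadrant intersected with x1 + x2 <= 4
# (written as -x1 - x2 >= -4 so every constraint has the >= form).
A = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.0, -4.0]

x, y, lam = [0.0, 3.0], [3.0, 0.0], 0.25
z = [lam * xi + (1 - lam) * yi for xi, yi in zip(x, y)]  # convex combination
print(in_polyhedron(x, A, b), in_polyhedron(y, A, b), in_polyhedron(z, A, b))  # True True True
```

The convex combination of any two feasible points stays feasible, which is exactly the convexity claim in the definitions above.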
Equality holds for i ∈ I: ai′x = bi. Since {ai | i ∈ I} spans ℝⁿ, the system ai′x = bi, i ∈ I, has a unique solution x = x*.
MIT OpenCourseWare
http://ocw.mit.edu
6.251J / 15.081J Introduction to Mathematical Programming
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-251j-introduction-to-mathematical-programming-fall-2009/0820cfa3e02c9bbfce75399c24e618de_MIT6_251JF09_lec02.pdf |
15.082 and 6.855J
Fall 2010
Network Optimization J.B. Orlin
WELCOME!
Welcome to 15.082/6.855J
Introduction to Network Optimization
Instructor: James B. Orlin
TA: David Goldberg
Textbook: Network Flows: Theory, Algorithms,
and Applications by Ahuja, Magnanti, and Orlin
referred to as AMO
2
Quick Overview
Next: The Koenigsberg Bridge Problem
Introduces Networks and Network Algorithms
Some subject management issues
Network flows and applications
Computational Complexity
Overall goal of today’s lecture: set the tone for the rest of
the subject
provide background
provide motivation
handle some class logistics
3
On the background of students
Requirement for this class
Either Linear Programming (15.081J)
or Data Structures
Mathematical proofs
The homework exercises usually call for
proofs.
The midterms will not require proofs.
For those who have not done many proofs
before, the TA will provide guidance
4
Some aspects of the class
Fondness for Powerpoint animations
Cold-calling as a way to speed up learning of the
algorithms
Talking with partners (the person next to you in the classroom)
Class time: used for presenting theory,
algorithms, applications
mostly outlines of proofs illustrated by
examples (not detailed proofs)
detailed proofs are in the text
5
The Bridges of Koenigsberg: Euler 1736
“Graph Theory” began in 1736
Leonhard Euler
Visited Koenigsberg
People wondered whether it is possible to take
a walk, end up where you started from, and
cross each bridge in Koenigsberg exactly
once
Generally it was believed to be impossible
6
The Bridges of Koenigsberg: Euler 1736

[Figure: the four land masses A, B, C, D of Koenigsberg connected by seven bridges, numbered 1–7]

Is it possible to start in A, cross over each bridge exactly once, and end up back in A?
The Bridges of Koenigsberg: Euler 1736

[Figure: the same map, with the land masses A, B, C, D drawn as nodes]

Conceptualization: Land masses are "nodes".
The Bridges of Koenigsberg: Euler 1736

[Figure: the map redrawn as a graph, with the seven bridges 1–7 as arcs]

Conceptualization: Bridges are "arcs."
The Bridges of Koenigsberg: Euler 1736

[Figure: the Koenigsberg graph with nodes A, B, C, D and arcs 1–7]

Is there a "walk" starting at A and ending at A and passing through each arc exactly once?
Notation and Terminology

Network terminology as used in AMO.

[Figure: a 4-node graph with arcs a–e, drawn both as an undirected network and as a directed network]

Network G = (N, A)
Node set N = {1, 2, 3, 4}
Arc Set A = {(1,2), (1,3), (3,2), (3,4), (2,4)}
In an undirected graph, (i,j) = (j,i)
Path: Example: 5, 2, 3, 4 (or 5, c, 2, b, 3, e, 4).
• No node is repeated.
• Directions are ignored.

Directed Path: Example: 1, 2, 5, 3, 4 (or 1, a, 2, c, 5, d, 3, e, 4).
• No node is repeated.
• Directions are important.

Cycle (or circuit or loop): 1, 2, 3, 1 (or 1, a, 2, b, 3, e).
• A path with 2 or more nodes, except that the first node is the last node.
• Directions are ignored.

Directed Cycle: (1, 2, 3, 4, 1) or 1, a, 2, b, 3, c, 4, d, 1.
• No node is repeated.
• Directions are important.

[Figure: a 5-node example network with arcs a–e illustrating each definition]

Walks

[Figure: the same 5-node network]

Walks are paths that can repeat nodes and arcs.
Example of a directed walk: 1-2-3-5-4-2-3-5
A walk is closed if its first and last nodes are the same.
A closed walk is a cycle except that it can repeat nodes and arcs.
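These definitions translate directly into code; a minimal sketch (the arc set is hypothetical, chosen to be consistent with the directed walk 1-2-3-5-4-2-3-5 above):

```python
def is_directed_walk(seq, arcs):
    """A directed walk: every consecutive pair must be an arc; repeats are allowed."""
    return all((u, v) in arcs for u, v in zip(seq, seq[1:]))

def is_directed_path(seq, arcs):
    """A directed path is a directed walk that repeats no node."""
    return is_directed_walk(seq, arcs) and len(set(seq)) == len(seq)

arcs = {(1, 2), (2, 3), (3, 5), (5, 4), (4, 2)}
walk = [1, 2, 3, 5, 4, 2, 3, 5]
print(is_directed_walk(walk, arcs), is_directed_path(walk, arcs))  # True False
```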
The Bridges of Koenigsberg: Euler 1736

[Figure: the Koenigsberg graph with nodes A, B, C, D and arcs 1–7]

Is there a "walk" starting at A and ending at A and passing through each arc exactly once?

Such a walk is called an eulerian cycle.
Adding two bridges creates such a walk

[Figure: the Koenigsberg graph with two added bridges, 8 joining A and C and 9 joining B and D]

Here is the walk.

A, 1, B, 5, D, 6, B, 4, C, 8, A, 3, C, 7, D, 9, B, 2, A

Note: the number of arcs incident to B is twice the number of times that B appears on the walk.
On Eulerian Cycles

[Figure: the augmented Koenigsberg graph; every node now has even degree (A: 4, B: 6, C: 4, D: 4)]

The degree of a node in an undirected graph is the number of incident arcs.

Theorem. An undirected graph has an eulerian cycle if and only if
(1) every node degree is even and
(2) the graph is connected (that is, there is a path from each node to each other node).
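The theorem immediately explains Koenigsberg; a minimal sketch (bridge endpoints read off the bridge walk on the earlier slide):

```python
from collections import Counter

# The seven original bridges, as (land mass, land mass) pairs.
bridges = [("A", "B"), ("A", "B"), ("A", "C"), ("B", "C"),
           ("B", "D"), ("B", "D"), ("C", "D")]

degree = Counter()
for u, v in bridges:
    degree[u] += 1
    degree[v] += 1

print(dict(degree))  # {'A': 3, 'B': 5, 'C': 3, 'D': 3}: every degree is odd
print(all(d % 2 == 0 for d in degree.values()))  # False: no eulerian cycle exists
```

Every land mass has odd degree, so condition (1) fails and the walk the townspeople wanted is impossible.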
More on Euler's Theorem

Necessity of two conditions:
• Any eulerian cycle "visits" each node an even number of times
• Any eulerian cycle shows the network is connected
  caveat: nodes of degree 0

Sufficiency of the condition
• Assume the result is true for all graphs with fewer than |A| arcs.
• Start at some node, and take a walk until a cycle C is found.

[Figure: a walk through nodes 1, 5, 4, 7, 3 closing into a cycle]
More on Euler's Theorem

Sufficiency of the condition
• Start at some node, and take a walk until a cycle C is found.
• Consider G' = (N, A\C)
  – the degree of each node is even
  – each component is connected
• So, G' is the union of Eulerian cycles
• Connect G' into a single eulerian cycle by adding C.

[Figure: the cycle C through nodes 5, 4, 7, 3 spliced back into the rest of the graph]
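The sufficiency argument (peel off a cycle C, recurse on G′ = (N, A\C), splice the pieces together) is essentially Hierholzer's algorithm; a minimal iterative sketch for undirected multigraphs, assuming every degree is even and the graph is connected:

```python
def eulerian_cycle(adj):
    """adj: node -> list of neighbors (with multiplicity, both directions listed).
    Returns a closed walk using every edge exactly once (Hierholzer's algorithm)."""
    g = {u: list(vs) for u, vs in adj.items()}  # mutable copy of the adjacency lists
    stack = [next(iter(g))]
    cycle = []
    while stack:
        u = stack[-1]
        if g[u]:                 # extend the walk along any unused edge
            v = g[u].pop()
            g[v].remove(u)       # remove the same edge from the other endpoint
            stack.append(v)
        else:                    # dead end: back up, emitting the cycle
            cycle.append(stack.pop())
    return cycle

# Koenigsberg with the two added bridges (degrees A:4, B:6, C:4, D:4, all even).
adj = {"A": ["B", "B", "C", "C"],
       "B": ["A", "A", "C", "D", "D", "D"],
       "C": ["A", "A", "B", "D"],
       "D": ["B", "B", "B", "C"]}
tour = eulerian_cycle(adj)
print(len(tour))  # 10 = number of edges (9) + 1, and tour[0] == tour[-1]
```

Unlike the inductive proof sketch, this runs in O(m) time, which is the efficient algorithm the next slide says the proof does not directly provide.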
Comments on Euler's theorem

1. It reflects how proofs are done in class, often in outline form, with key ideas illustrated.
2. However, this proof does not directly lead to an efficient algorithm. (More on this in two lectures.)
3. Usually we focus on efficient algorithms.
15.082/6.855J Subject Goals:
1. To present students with a knowledge of the
state-of-the art in the theory and practice of
solving network flow problems.
A lot has happened since 1736
2. To provide students with a rigorous analysis of
network flow algorithms.
computational complexity & worst case
analysis
3. To help each student develop his or her own
intuition about algorithm development and
algorithm analysis.
20
Homework Sets and Grading

Homework Sets
• 6 assignments
• 4 points per assignment
• lots of practice problems with solutions

Grading
• homework: 24 points
• project: 16 points
• Midterm 1: 30 points
• Midterm 2: 30 points
Class discussion
Have you seen network models elsewhere?
Do you have any specific goals in taking this
subject?
22
Mental break
Which nation gave women the right to vote first?
New Zealand.
Which Ocean goes to the deepest depths?
Pacific Ocean
What is northernmost land on earth?
Cape Morris Jesup in Greenland
Where is the World's Largest Aquarium?
Epcot Center in Orlando, FL
23
Mental break
What country has not fought in a war since 1815?
Switzerland
What does the term Prima Donna mean in Opera?
The leading female singer
What fruit were Hawaiian women once forbidden by
law to eat?
The coconut
What’s the most common non-contagious disease in
the world?
Tooth decay
24
Three Fundamental Flow Problems

• The shortest path problem
• The maximum flow problem
• The minimum cost flow problem

The shortest path problem

[Figure: a small directed network with arc lengths, an origin s, and a destination t]

Consider a network G = (N, A) in which there is an origin node s and a destination node t.
standard notation: n = |N|, m = |A|
What is the shortest path from s to t?
The Maximum Flow Problem

Directed Graph G = (N, A).
• Source s
• Sink t
• Capacities uij on arc (i,j)

Maximize the flow out of s, subject to
Flow out of i = Flow into i, for i ≠ s or t.

[Figure: A Network with Arc Capacities (and the maximum flow)]
Representing the Max Flow as an LP

[Figure: the same network, each arc labeled (capacity, flow): s→1 (10, 9), s→2 (6, 6), 1→2 (1, 1), 1→t (8, 8), 2→t (10, 7)]

Flow out of i − Flow into i = 0 for i ≠ s or t.

For this network:

max v
s.t.  xs1 + xs2 = v
      −xs1 + x12 + x1t = 0
      −xs2 − x12 + x2t = 0
      −x1t − x2t = −v
      0 ≤ xij ≤ uij for all (i,j)

In general:

max v
s.t.  ∑j xsj = v
      ∑j xij − ∑j xji = 0 for each i ≠ s or t
      −∑i xit = −v
      0 ≤ xij ≤ uij for all (i,j)
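The optimal value of this LP can be cross-checked combinatorially; a minimal sketch of an augmenting-path (Edmonds–Karp style) computation on the slide's network, with capacities as read off the figure:

```python
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict (u, v) -> u_ij.  Returns the value of a maximum s-t flow,
    found by repeatedly sending flow along BFS augmenting paths in the residual graph."""
    residual = dict(capacity)
    for (u, v) in list(capacity):
        residual.setdefault((v, u), 0)          # reverse arcs start at 0
    nodes = {u for u, _ in residual} | {v for _, v in residual}
    value = 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:        # BFS for an augmenting path
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual.get((u, v), 0) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return value
        path, v = [], t                         # recover the path s -> t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[e] for e in path)    # bottleneck capacity
        for (u, v) in path:
            residual[(u, v)] -= aug
            residual[(v, u)] += aug
        value += aug

cap = {("s", "1"): 10, ("s", "2"): 6, ("1", "2"): 1, ("1", "t"): 8, ("2", "t"): 10}
print(max_flow(cap, "s", "t"))  # 15, matching the flow shown on the slide (9 + 6 out of s)
```

The LP and the combinatorial algorithm agree, which is the point of writing max flow as a linear program.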
Min Cost Flows

[Figure: a five-node network in which each arc carries a linear cost and a capacity, e.g. $4, 10]

Flow out of i − Flow into i = b(i)

Each arc has a linear cost and a capacity.

min ∑i,j cij xij
s.t.  ∑j xij − ∑j xji = b(i) for each i
      0 ≤ xij ≤ uij for all (i,j)

Covered in detail in Chapter 1 of AMO
Where Network Optimization Arises
Transportation Systems
Transportation of goods over transportation networks
Scheduling of fleets of airplanes
Manufacturing Systems
Scheduling of goods for manufacturing
Flow of manufactured items within inventory systems
Communication Systems
Design and expansion of communication systems
Flow of information across networks
Energy Systems, Financial Systems, and much more
30
Next topic: computational complexity
What is an efficient algorithm?
How do we measure efficiency?
“Worst case analysis”
but first …
31
Measuring Computational Complexity

Consider the following algorithm for adding two m × n matrices A and B with coefficients a( , ) and b( , ).

begin
  for i = 1 to m do
    for j = 1 to n do
      c(i,j) := a(i,j) + b(i,j)
end

What is the running time of this algorithm?
Let's measure it as precisely as we can as a function of n and m.
Is it 2nm, or 3nm, or what?
• Worst case versus average case
• How do we measure the running time?
• What are the basic steps that we should count?
Compute the running time precisely.

Operation         Number (as a function of m,n)
Additions
Assignments
Comparisons
Multiplications
Towards Computational Complexity
1. We will ignore running time constants.
2. Our running times will be stated in terms of
relevant problem parameters, e.g., nm.
3. We will measure | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/084aaba151e144f852e66099c7ea1213_MIT15_082JF10_lec01.pdf |
.
2. Our running times will be stated in terms of
relevant problem parameters, e.g., nm.
3. We will measure everything in terms of worst
case or most pessimistic analysis (performance
guarantees.)
4. All arithmetic operations are assumed to take
one step,
(or a number of steps that is bounded by a
constant).
34
A Simpler Metric for Running Time.

Operation      Number (as a function of m,n)
Additions      ≤ c1 mn for some c1 and m, n ≥ 1    O(mn) steps
Assignments    ≤ c2 mn for some c2 and m, n ≥ 1    O(mn) steps
Comparisons    ≤ c3 mn for some c3 and m, n ≥ 1    O(mn) steps
TOTAL          ≤ c4 mn for some c4 and m, n ≥ 1    O(mn) steps
Simplifying Assumptions and Notation
MACHINE MODEL: Random Access Machine
(RAM).
This is the computer model that everyone is used
to. It allows the use of arrays, and it can select
any element of an array or matrix in O(1) steps.
c(i,j) := a(i,j) + b(i,j).
Integrality Assumption. All numbers are integral
(unless stated otherwise.)
36
Size of a problem
The size of a problem is the number of bits
needed to represent the problem.
The size of the n × m matrix A is not nm.
If each matrix element has K bits, the size is
nmK
e.g., if 2^107 < max aij < 2^108, then K = 108.
K = O( log (amax)).
37
Polynomial Time Algorithms
We say that an algorithm runs in polynomial time
if the number of steps taken by an algorithm on
any instance I | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/084aaba151e144f852e66099c7ea1213_MIT15_082JF10_lec01.pdf |
ynomial Time Algorithms
We say that an algorithm runs in polynomial time
if the number of steps taken by an algorithm on
any instance I is bounded by a polynomial in the
size of I.
We say that an algorithm runs in exponential time
if it does not run in polynomial time.
Example 1: finding the determinant of a matrix
can be done in O(n^3) steps.
This is polynomial time.
38
Polynomial Time Algorithms
Example 2: We can determine if n is prime by dividing n by
every integer less than n.
This algorithm is exponential time.
The size of the instance is log n
The running time of the algorithm is O(n).
Side note: there is a polynomial time algorithm for
determining if n is prime.
Almost all of the algorithms presented in this class will be
polynomial time.
One can find an Eulerian cycle (if one exists) in O(m) steps.
There is no known polynomial time algorithm for finding a
min cost traveling salesman tour
39
On polynomial vs exponential time
We contrast two algorithms, one that takes 30,000 n^3 steps, and one that takes 2^n steps.
Suppose that we could carry out 1 billion steps per second.
# of nodes    30,000 n^3 steps    2^n steps
n = 30        0.81 seconds        1 second
n = 40        1.92 seconds        17 minutes
n = 50        3.75 seconds        12 days
n = 60        6.48 seconds        31 years
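The polynomial column follows from dividing step counts by the machine rate; a minimal sketch (the exponential column's year figures depend on rounding conventions, so only the growth is checked here):

```python
def poly_seconds(n, rate=1e9):
    """Time for 30,000 n^3 steps at `rate` steps per second."""
    return 30000 * n**3 / rate

def exp_seconds(n, rate=1e9):
    """Time for 2^n steps at `rate` steps per second."""
    return 2**n / rate

for n in (30, 40, 50, 60):
    print(n, poly_seconds(n))   # 0.81, 1.92, 3.75, 6.48 seconds
print(exp_seconds(60) / (3600 * 24 * 365))  # decades of runtime at 1e9 steps/sec
```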
On polynomial vs. exponential time

Suppose that we could carry out 1 trillion steps per second, and instantaneously eliminate 99.9999999% of all solutions as not worth considering

# of nodes    1,000 n^10 steps    2^n steps
n = 70        2.82 seconds        1 second
n = 80        10.74 seconds       17 minutes
n = 90        34.86 seconds       12 days
n = 100       100 seconds         31 years
Overview of today’s lecture
Eulerian cycles
Network Definitions
Network Applications
Introduction to computational complexity
42
Upcoming Lectures
Lecture 2: Review of Data Structures
even those with data structure backgrounds
are encouraged to attend.
Lecture 3. Graph Search Algorithms.
how to determine if a graph is connected
and to label a graph
and more
43
MIT OpenCourseWare
http://ocw.mit.edu
15.082J / 6.855J / ESD.78J Network Optimization
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/084aaba151e144f852e66099c7ea1213_MIT15_082JF10_lec01.pdf |
The method of characteristics applied to
quasi-linear PDEs
18.303 Linear Partial Differential Equations
Matthew J. Hancock
Fall 2006
1 Motivation
[Oct 26, 2005]
Most of the methods discussed in this course: separation of variables, Fourier
Series, Green’s functions (later) can only be applied to linear PDEs. However, the
method of characteristics can be applied to a form of nonlinear PDE.
1.1 Traffic flow
Ref: Myint-U & Debnath §12.6
Consider the idealized flow of traffic along a one-lane highway. Let ρ (x, t) be the traffic density at (x, t). The total number of cars in x1 ≤ x ≤ x2 at time t is

N (t) = ∫_{x1}^{x2} ρ (x, t) dx    (1)
Assume the number of cars is conserved, i.e. no exits. Then the rate of change of the number of cars in x1 ≤ x ≤ x2 is given by

dN/dt = rate in at x1 − rate out at x2
      = ρ (x1, t) V (x1, t) − ρ (x2, t) V (x2, t)
      = −∫_{x1}^{x2} ∂(ρV)/∂x dx    (2)
where V (x, t) is the velocity of the cars at (x, t). Combining (1) and (2) gives

∫_{x1}^{x2} ( ∂ρ/∂t + ∂(ρV)/∂x ) dx = 0

and since x1, x2 are arbitrary, the integrand must be zero at all x,

∂ρ/∂t + ∂(ρV)/∂x = 0    (3)
We assume, for simplicity, that velocity V depends on density ρ, via

V (ρ) = c (1 − ρ/ρmax)

where c = max velocity, ρ = ρmax indicates a traffic jam (V = 0 since everyone is stopped), ρ = 0 indicates open road and cars travel at c, the speed limit (yeah right). The PDE (3) becomes

∂ρ/∂t + c (1 − 2ρ/ρmax) ∂ρ/∂x = 0    (4)

We introduce the following normalized variables

u = ρ/ρmax,   t̃ = ct

into the PDE (4) to obtain (dropping tildes),

ut + (1 − 2u) ux = 0    (5)

The PDE (5) is called quasi-linear because it is linear in the derivatives of u. It is NOT linear in u (x, t), though, and this will lead to interesting outcomes.
2 General first-order quasi-linear PDEs

Ref: Guenther & Lee §2.1, Myint-U & Debnath §12.1, 12.2

The general form of quasi-linear PDEs is

A ∂u/∂x + B ∂u/∂t = C    (6)

where A, B, C are functions of u, x, t. The initial condition u (x, 0) is specified at t = 0,

u (x, 0) = f (x)    (7)

We will convert the PDE to a sequence of ODEs, drastically simplifying its solution. This general technique is known as the method of characteristics and is useful for finding analytic and numerical solutions. To solve the PDE (6), we note that

(A, B, C) · (ux, ut, −1) = 0.    (8)

Recall from vector calculus that the normal to the surface f (x, y, z) = 0 is ∇f. To make the analogy here, t replaces y, f (x, t, z) = u (x, t) − z and ∇f = (ux, ut, −1). Thus, a plot of z = u (x, t) gives the surface f (x, t, z) = 0. The vector (ux, ut, −1) is the normal to the solution surface z = u (x, t). From (8), the vector (A, B, C) is the tangent to this solution surface.

The IC u (x, 0) = f (x) is a curve in the u − x plane. For any point on the initial curve, we follow the vector (A, B, C) to generate a curve on the solution surface, called a characteristic curve of the PDE. Once we find all the characteristic curves, we have a complete description of the solution u (x, t).
2.1 Method of characteristics

We represent the characteristic curves parametrically,

x = x (r; s),   t = t (r; s),   u = u (r; s),

where s labels where we start on the initial curve (i.e. the initial value of x at t = 0). The parameter r tells us how far along the characteristic curve. Thus (x, t, u) are now thought of as trajectories parametrized by r and s. The semi-colon indicates that s is a parameter to label different characteristic curves, while r governs the evolution of the solution along a particular characteristic.

From the PDE (8), at each point (x, t), a particular tangent vector to the solution surface z = u (x, t) is

(A (x, t, u), B (x, t, u), C (x, t, u)).

Given any curve (x (r; s), t (r; s), u (r; s)) parametrized by r (s acts as a label only), the tangent vector is

(∂x/∂r, ∂t/∂r, ∂u/∂r).
For a general curve on the surface z = u (x, t), the tangent vector (A, B, C) will be different than the tangent vector (xr, tr, ur). However, we choose our curves (x (r; s), t (r; s), u (r; s)) so that they have tangents equal to (A, B, C),

∂x/∂r = A,   ∂t/∂r = B,   ∂u/∂r = C    (9)

where (A, B, C) depend on (x, t, u), in general. We have written partial derivatives to denote differentiation with respect to r, since x, t, u are functions of both r and s. However, since only derivatives in r are present in (9), these equations are ODEs! This has greatly simplified our solution method: we have reduced the solution of a PDE to solving a sequence of ODEs.
Figure 1: Plot of f (x).
The ODEs (9) are solved in conjunction with initial conditions specified at r = 0. We are free to choose the value of r at t = 0; for simplicity we take r = 0 at t = 0. Thus t (0; s) = 0. Since x changes with r, we choose s to denote the initial value of x (r; s) along the x-axis (when t = 0) in the space-time domain. Thus the initial values (at r = 0) are

x (0; s) = s,   t (0; s) = 0,   u (0; s) = f (s).    (10)
3 Example problem

[Oct 28, 2005]

Consider the following quasi-linear PDE,

∂u/∂t + (1 + cu) ∂u/∂x = 0,   u (x, 0) = f (x)

where c = ±1 and the initial condition f (x) is

f (x) = 1,       |x| > 1
        2 − |x|, |x| ≤ 1

      = 1,      x < −1
        2 + x,  −1 ≤ x ≤ 0
        2 − x,  0 < x ≤ 1
        1,      x > 1

The function f (x) is sketched in Figure 1. To find the parametric solution, we can write the PDE as

(1, 1 + cu, 0) · (∂u/∂t, ∂u/∂x, −1) = 0
Thus the parametric solution is defined by the ODEs

dt/dr = 1,   dx/dr = 1 + cu,   du/dr = 0

with initial conditions at r = 0,

t = 0,   x = s,   u = u (x, 0) = u (s, 0) = f (s).

Integrating the ODEs and imposing the ICs gives

t (r; s) = r,   u (r; s) = f (s),   x (r; s) = (1 + cf (s)) r + s = (1 + cf (s)) t + s
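The parametric solution can be traced numerically, one characteristic at a time; a minimal sketch (f as defined above, c = 1):

```python
def f(s):
    """Initial condition: 1 outside |s| <= 1, a tent of height 2 inside."""
    return 2.0 - abs(s) if abs(s) <= 1 else 1.0

def characteristic_point(s, t, c=1):
    """Point (x, u) on the characteristic labeled s at time t:
    x = (1 + c f(s)) t + s, with u = f(s) constant along the characteristic."""
    return (1 + c * f(s)) * t + s, f(s)

print(characteristic_point(0.5, 0.0))  # (0.5, 1.5): on the initial curve
print(characteristic_point(0.5, 1.0))  # (3.0, 1.5): carried right at speed 1 + f(s)
```

Plotting u against x for many values of s at a fixed t gives the solution profile, which is how the next subsection's shock formation shows up as characteristics crossing.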
3.1 Validity of solution and break-down (shock formation)

To find the time ts and position xs when and where a shock first forms, we find the Jacobian:

J = ∂(x, t)/∂(r, s) = det [ xr  xs ; tr  ts ] = (∂x/∂r)(∂t/∂s) − (∂x/∂s)(∂t/∂r) = 0 − (cf′(s) r + 1) = −(cf′(s) t + 1)

Shocks occur (the solution breaks down) where J = 0, i.e. where

t = −1 / (cf′(s))

The first shock occurs at

ts = min ( −1 / (cf′(s)) )

In this course, we will not consider what happens after the shock. You can find more about this in §12.9 of Myint-U & Debnath. We now take cases for c = ±1.
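The shock condition ts = min(−1/(c f′(s))) can be scanned numerically over s; a minimal sketch using the tent-shaped f from the example and a centered finite difference for f′ (the sampling grid is an assumption):

```python
def f(s):
    return 2.0 - abs(s) if abs(s) <= 1 else 1.0

def first_shock_time(c, svals, ds=1e-6):
    """ts = min over s of -1/(c f'(s)), keeping only positive (future) times."""
    times = []
    for s in svals:
        fp = (f(s + ds) - f(s - ds)) / (2 * ds)  # centered difference for f'(s)
        if c * fp < 0:                           # characteristics converge here
            times.append(-1.0 / (c * fp))
    return min(times) if times else float("inf")

svals = [i / 100 for i in range(-300, 301)]      # sample s in [-3, 3]
print(first_shock_time(+1, svals))  # ~1.0, from the region where f'(s) = -1
print(first_shock_time(-1, svals))  # ~1.0, from the region where f'(s) = +1
```

This matches the hand computation below: the first shock forms at ts = 1 for either sign of c.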
For c = 1, since min f′(s) = −1, we have

ts = −1 / min f′(s) = 1

Any of the characteristics where f′(s) = min f′(s) = −1 can be used to find the location of the shock at ts = 1. For e.g., with s = 1/2, the location of the shock at ts = 1 is

xs = (1 + f(1/2)) · 1 + 1/2 = (1 + 3/2) + 1/2 = 3.

Any other value of s where f′(s) = −1 will give the same xs.
For c = −1, since max f′(s) = 1, we have

ts = 1 / max f′(s) = 1

Any of the characteristics where f′(s) = max f′(s) = 1 can be used to find the location of the shock at ts = 1. For e.g., with s = −1/2, the location of the shock at ts = 1 is

xs = (1 − f(−1/2)) · 1 − 1/2 = (1 − 3/2) − 1/2 = −1.

Any other value of s where f′(s) = 1 will give the same xs.
3.2 Solution Method (plotting u(x,t))
Since r = t, we can rewrite the solution as being parametrized | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/085f9ac605e03e9c72328a7239d11d10_quasi.pdf |