is exactly αO .
Dilworth’s Theorem now links the number of chains partitioning T to αO . The final part of the
proof recovers a covering family of cycles from the chains in T .
The Proof: For the given ordering O, let S = {v1, . . . , vk} denote the maximum size cyclic stable set, with corresponding enumeration of V = {v1, . . . , vk, . . . , vn}. We note that since S is, in
particular, a stable set, we can permute its elements as we wish within the given ordering.
We form an acyclic digraph D′ = (V′, A′) from D as follows. Let V′ = {v1, . . . , vn, v′1, . . . , v′k} (we duplicate the elements of S) so that |V′| = n + k. Next, if (v, w) ∈ A is a forward arc, then (v, w) ∈ A′. If (w, vi) ∈ A is a backward arc into a vertex of S (i.e., if vi ∈ S) then (w, v′i) ∈ A′. Note that by our choice of enumeration, any arcs into vi, i ≤ k, must be backward. Therefore the digraph D′ is acyclic. It is illustrated in Figure 2.
Figure 3: This figure illustrates the directed acyclic graph D′ we obtain from splitting the vertices in S and drawing arcs as explained above.
In order to use Dilworth's Theorem, we need to have a poset T. We obtain a poset T from the acyclic digraph D′ by considering the transitive closure of D′. Since the sets {v1, . . . , vk} and {v′1, . . . , v′k} have no incoming and no outgoing arcs, respectively, they are both antichains in T. This is also evident from Figure 2. We show that they are in fact maximum size antichains. Consider any antichain I. As the ordering is valid, for any vertex, there exists a directed cycle of index 1 going through it. This translates into a chain in the poset going from vi to v′i for any 1 ≤ i ≤ k. This means that an antichain I cannot contain both vi and v′i. Let ID be the elements of the original digraph D corresponding to I.
By renumbering the elements of S (recall that we can permute the elements of S within the given ordering) we can assume that v′1, . . . , v′l ∈ I and vl+1, . . . , vk ∈ I. Now rotate the enumeration to obtain {ṽ1, . . . , ṽn} so that ṽ1 = vl+1. Since I is an antichain, and since the digraph vertices {v1, . . . , vl} corresponding to the poset elements {v′1, . . . , v′l} at the "top" of the poset T have been rotated to be the last elements of the enumeration, there are no forward paths between any two elements of ID. Therefore, by Lemma 2, ID is a cyclic stable set. Therefore ID, and consequently I, can have size at most equal to the size of S, that is, αO. We have thus shown that the size of the largest antichain in T is equal to the cyclic stability number αO of D.
Now consider the minimal partitioning set of chains in the poset T, call these P1, . . . , Pk (where k = |S| = αO). Each chain Pi is a chain from vi to v′σ(i), for some permutation σ of {1, . . . , k}. By a slight abuse of notation, we also use Pi to refer to the directed path in D from vi to vσ(i) (or cycle if σ(i) = i). We note that by construction of T, there is exactly one backward arc in each path Pi, namely, the last arc to vσ(i). These paths cover the vertex set V. Now, the cycles in the permutation
σ correspond to cycles in D. For example, if (12) is a cycle in σ, i.e., if σ(1) = 2 and σ(2) = 1,
then joining the paths P1 and P2 we have a cycle from v1 to v1. We note that these cycles may in
fact intersect. Since the cycles merely need to cover the vertex set V , distinct cycles can intersect.
We need to take care that the same cycle does not intersect itself. If σ happens to be the identity
permutation, σ(i) = i, then each path is a cycle and cannot intersect itself, and hence the proof is
complete. If this is not the case, then a cycle in D obtained by joining together the paths Pi that
correspond to a cycle of σ may in fact intersect itself. Suppose that i and j are in the same cycle
of σ and the paths Pi and Pj intersect, say in v. We can then replace the paths Pi and Pj by two other paths P′i and P′j (obtained by switching from one to the other at v) which together cover the same vertices and which correspond to a new permutation σ′ with σ′(i) = σ(j) and σ′(j) = σ(i). Now the number of cycles in the permutation has increased by one, and we can repeat this process until no cycle in D (corresponding to each cycle of the permutation σ) intersects itself.
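The switching step on σ is easy to sketch (an illustrative snippet, not part of the original notes; the helper names are ours, and the permutation is stored as a dict):

```python
def switch(sigma, i, j):
    """The splitting step: sigma'(i) = sigma(j), sigma'(j) = sigma(i)."""
    s = dict(sigma)
    s[i], s[j] = sigma[j], sigma[i]
    return s

def num_cycles(sigma):
    """Count the cycles of a permutation given as a dict."""
    seen, count = set(), 0
    for start in sigma:
        if start not in seen:
            count += 1
            v = start
            while v not in seen:
                seen.add(v)
                v = sigma[v]
    return count

# A single 4-cycle (1 2 3 4); switching at i = 1, j = 3 splits it into (1 4)(2 3).
sigma = {1: 2, 2: 3, 3: 4, 4: 1}
sigma2 = switch(sigma, 1, 3)
print(num_cycles(sigma), num_cycles(sigma2))  # 1 2
```

As in the proof, one switch between two indices of the same cycle raises the number of cycles by exactly one.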
Since the cycle splitting procedure does not change the total index of the cycles, we know that
the total index equals the minimal number of chains required to partition T . But by above, this is
exactly the size of the maximum cyclic stable set, and therefore
αO = min_{{C1,...,Cp} covering V} Σi iO(Ci),

which is what we wanted to prove. □
3 Cyclic Stable Set Polytope
In this section, we follow some recent (unpublished) work of A. Sebő, and define the cyclic stable set polytope of a strongly connected digraph D, with a given valid ordering O. Define the polytope P as follows.
P = { x : x(C) ≤ iO(C), ∀ directed cycles C;  xv ≥ 0, ∀ v ∈ V }.
We show in this section that the polytope P is totally dual integral (TDI) (see lecture 5 for more
on TDI systems of inequalities).
Given a cyclic stable set S (cyclic stable with respect to the given ordering), let x^S denote its incidence vector, i.e., x^S_v = 1 if v ∈ S, and 0 otherwise. Then in fact x^S ∈ P. Indeed, consider any directed cycle C. Since S is cyclic stable, C always enters S via a backward arc, and therefore the number of backward arcs of C is at least the cardinality of its intersection with S:

(# backward arcs in C) = iO(C) ≥ |C ∩ S|,

or, equivalently, x^S(C) ≤ iO(C).
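For concreteness, iO(C) can be computed directly from a position map for the ordering; the following small sketch (the function and variable names are ours, not from the notes) checks the inequality on a four-vertex cycle:

```python
def index_of_cycle(cycle, pos):
    """iO(C): the number of backward arcs on C, i.e. arcs (v, w) with pos[w] <= pos[v]."""
    n = len(cycle)
    return sum(1 for k in range(n)
               if pos[cycle[(k + 1) % n]] <= pos[cycle[k]])

pos = {'v1': 1, 'v2': 2, 'v3': 3, 'v4': 4}   # the ordering O
C = ['v1', 'v2', 'v3', 'v4']                 # directed cycle v1 -> v2 -> v3 -> v4 -> v1
S = {'v1'}                                   # a one-element cyclic stable set
print(index_of_cycle(C, pos), len(S & set(C)))  # 1 1, so iO(C) >= |C ∩ S|
```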
Since we have shown that the incidence vector of every cyclic stable set belongs to P, we have:
αO ≤ max Σ_{v∈V} xv
        s.t. x(C) ≤ iO(C), ∀C
             xv ≥ 0, ∀v ∈ V
By linear programming duality, and then by observing that the optimum value of a minimization
can only increase if we add constraints, we have
αO ≤ max Σ_{v∈V} xv
        s.t. x(C) ≤ iO(C), ∀C
             xv ≥ 0, ∀v ∈ V

    = min Σ_C iO(C) yC
        s.t. Σ_{C : v∈C} yC ≥ 1, ∀v ∈ V
             yC ≥ 0, ∀C

    ≤ min Σ_C iO(C) yC
        s.t. Σ_{C : v∈C} yC ≥ 1, ∀v ∈ V
             yC ∈ {0, 1}, ∀C.
But this last quantity is exactly the minimum total index of a cycle cover of V , and thus by the
Bessy–Thomassé Theorem, the final quantity equals αO. Therefore equality must hold throughout.
Recall that in order to prove that the description of P is TDI, we must show that for all integral
objective functions w (wv ∈ Z), the dual linear program
min Σ_C iO(C) yC
s.t. Σ_{C : v∈C} yC ≥ wv, ∀v ∈ V
     yC ≥ 0, ∀C

has an integral solution whenever its value is finite. We note that we have just proved this statement
for the special case wv = 1. We note also that if we have wv ≤ 0, we can replace this wv by 0
without affecting the feasible region of the dual linear program. Therefore, we can assume that we
have wv ∈ Z+.
We now construct a strongly connected digraph D′ = (V′, A′), with valid ordering O′, as follows. Let V′ consist of wv copies {xv,1, . . . , xv,wv} of each vertex v (recall that wv is a positive integer). If (v, u) ∈ A, then (xv,i, xu,j) ∈ A′ for every i ≤ wv and j ≤ wu. From our reasoning above, we know that the linear program associated to the digraph D′ (now we have wv = 1 for every v ∈ V′) produces an integral solution that corresponds to a maximum size cyclic stable set in D′. Note that if xv,i is in the stable set S′ for D′, then we can also take xv,j to be in S′ for any j ≤ wv. Therefore any maximum size cyclic stable set S′ in D′ naturally corresponds to a cyclic stable set S in D. Moreover, |S′| = w^T x^S. Conversely, if S is a cyclic stable set in D, then the set S′ of all copies of the vertices in S is a cyclic stable set in D′, with |S′| = w^T x^S. Therefore, given any vector w with wv ∈ Z+, the linear program with objective function w^T x has an integral optimal solution. Therefore P is totally dual integral, as we wished to show.
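The copy construction for D′ can be sketched in a few lines (illustrative code with our own names, not from the notes):

```python
def blow_up(arcs, w):
    """Build the arc set of D': w[v] copies of each vertex v, with every copy
    of v joined to every copy of u for each original arc (v, u)."""
    return [((v, i), (u, j))
            for (v, u) in arcs
            for i in range(1, w[v] + 1)
            for j in range(1, w[u] + 1)]

A = [('a', 'b'), ('b', 'a')]   # a strongly connected digraph on two vertices
w = {'a': 2, 'b': 3}           # positive integral objective weights
A2 = blow_up(A, w)
print(len(A2))  # each arc (v, u) contributes w[v] * w[u] arcs: 2*3 + 3*2 = 12
```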
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.685 Electric Machines
Class Notes 1: Electromagnetic Forces
© 2003 James L. Kirtley Jr.
1 Introduction
Figure 1: Form of Electric Machine
This section of notes discusses some of the fundamental processes involved in electric machinery.
In the section on energy conversion processes we examine the two major ways of estimating elec-
tromagnetic forces: those involving thermodynamic arguments (conservation of energy) and field
methods (Maxwell’s Stress Tensor). But first it is appropriate to introduce the topic by describing
a notional rotating electric machine.
Electric machinery comes in many different types and a strikingly broad range of sizes, from
those little machines that cause cell ’phones and pagers to vibrate (yes, those are rotating electric
machines) to turbine generators with ratings upwards of a Gigawatt. Most of the machines with
which we are familiar are rotating, but linear electric motors are widely used, from shuttle drives in
weaving machines to equipment handling and amusement park rides. Currently under development
are large linear induction machines to be used to launch aircraft. It is our purpose in this subject
to develop an analytical basis for understanding how all of these different machines work. We start,
however, with a picture of perhaps the most common of electric machines.
2 Electric Machine Description:
Figure 1 is a cartoon drawing of a conventional induction motor. This is a very common type
of electric machine and will serve as a reference point. Most other electric machines operate in
a fashion which is the same as the induction machine or which differs in ways which are easy to relate to the induction machine.
Most (but not all!) machines we will be studying have essentially this morphology. The rotor
of the machine is mounted on a shaft which is supported on some sort of bearing(s). Usually, but
not always, the rotor is inside. I have drawn a rotor which is round, but this does not need to be
the case. I have also indicated rotor conductors, but sometimes the rotor has permanent magnets
either fastened to it or inside, and sometimes (as in Variable Reluctance Machines) it is just an
oddly shaped piece of steel. The stator is, in this drawing, on the outside and has windings. With
most of the machines we will be dealing with, the stator winding is the armature, or electrical
power input element. (In DC and Universal motors this is reversed, with the armature contained
on the rotor: we will deal with these later).
In most electrical machines the rotor and the stator are made of highly magnetically permeable
materials: steel or magnetic iron. In many common machines such as induction motors the rotor
and stator are both made up of thin sheets of silicon steel. Punched into those sheets are slots
which contain the rotor and stator conductors.
Figure 2 is a picture of part of an induction machine distorted so that the air-gap is straightened
out (as if the machine had infinite radius). This is actually a convenient way of drawing the machine
and, we will find, leads to useful methods of analysis.
Figure 2: Windings in Slots
What is important to note for now is that the machine has an air gap g which is relatively
small (that is, the gap dimension is much less than the machine radius r). The air-gap also has a
physical length l. The electric machine works by producing a shear stress in the air-gap (with of
course side effects such as production of “back voltage”). It is possible to define the average air-
gap shear stress, which we will refer to as τ . Total developed torque is force over the surface area
times moment (which is rotor radius):
T = 2πr²ℓ⟨τ⟩
Power transferred by this device is just torque times speed, which is the same as force times
surface velocity, since surface velocity is u = rΩ:
Pm = ΩT = 2πrℓ⟨τ⟩u
If we note that active rotor volume is Vr = πr²ℓ, the ratio of torque to volume is just:

T/Vr = 2⟨τ⟩
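These scaling relations are easy to check numerically; the parameter values below are made up for illustration and are not from the notes:

```python
import math

r, l, tau, omega = 0.1, 0.2, 20e3, 2 * math.pi * 50  # 10 cm radius, 20 kPa shear, 50 Hz

T = 2 * math.pi * r**2 * l * tau      # T = 2*pi*r^2*l*<tau>
Pm = omega * T                        # P = Omega*T = 2*pi*r*l*<tau>*u, with u = r*Omega
Vr = math.pi * r**2 * l               # active rotor volume
print(T / Vr / tau)                   # torque per unit rotor volume is 2*<tau>, so this ≈ 2
```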
Now, determining what can be done in a volume of machine involves two things. First, it is
clear that the volume we have calculated here is not the whole machine volume, since it does not
include the stator. The actual estimate of total machine volume from the rotor volume is actually
quite complex and detailed and we will leave that one for later. Second, we need to estimate the
value of the useful average shear stress. Suppose both the radial flux density Br and the stator
surface current density Kz are sinusoidal flux waves of the form:
Br = √2 B0 cos(pθ − ωt)
Kz = √2 K0 cos(pθ − ωt)
Note that this assumes these two quantities are exactly in phase, or oriented to ideally produce
torque, so we are going to get an “optimistic” bound here. Then the average value of surface
traction is:
⟨τ⟩ = (1/2π) ∫0^{2π} Br Kz dθ = B0 K0
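The average can be verified numerically (B0, K0 and p below are illustrative values we chose, not from the notes):

```python
import math

B0, K0, p = 1.2, 40e3, 2      # peak flux density, peak surface current density, pole pairs
N = 10000
# average of Br*Kz = 2*B0*K0*cos^2(p*theta - omega*t) over theta, evaluated at t = 0
avg = sum(2 * B0 * K0 * math.cos(p * 2 * math.pi * k / N) ** 2 for k in range(N)) / N
print(avg / (B0 * K0))        # ≈ 1: the average traction is B0*K0
```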
The magnetic flux density that can be developed is limited by the characteristics of the magnetic
materials (iron) used. Current densities are a function of technology and are typically limited by
how much effort can be put into cooling and the temperature limits of insulating materials.
In
practice, the range of shear stress encountered in electric machinery technology is not terribly
broad: ranging from a few kPa in smaller machines to about 100 kPa in very large, well cooled
machines.
It is usually said that electric machines are torque producing devices, meaning that they are
defined by this shear stress mechanism and by physical dimensions. Since power is torque times
rotational speed, high power density machines necessarily will have high shaft speeds. Of course
there are limits on rotational speed as well, arising from centrifugal forces which limit tip velocity.
Our first step in understanding how electric machinery works is to understand the mechanisms
which produce forces of electromagnetic origin.
3 Energy Conversion Process:
In a motor the energy conversion process can be thought of in simple terms. In “steady state”,
electric power input to the machine is just the sum of electric power inputs to the different phase
terminals:
Pe = Σi vi ii

Mechanical power is torque times speed:

Pm = T Ω
Figure 3: Energy Conversion Process
And the sum of the losses is the difference:
Pd = Pe − Pm
It will sometimes be convenient to employ the fact that, in most machines, dissipation is small
enough to approximate mechanical power with electrical power. In fact, there are many situations in
which the loss mechanism is known well enough that it can be idealized away. The “thermodynamic”
arguments for force density take advantage of this and employ a “conservative” or lossless energy
conversion system.
3.1 Energy Approach to Electromagnetic Forces:
Figure 4: Conservative Magnetic Field System
To start, consider some electromechanical system which has two sets of “terminals”, electrical
and mechanical, as shown in Figure 4. If the system stores energy in magnetic fields, the energy
stored depends on the state of the system, defined by (in this case) two of the identifiable variables:
flux (λ), current (i) and mechanical position (x). In fact, with only a little reflection, you should
be able to convince yourself that this state is a single-valued function of two variables and that the
energy stored is independent of how the system was brought to this state.
Now, all electromechanical converters have loss mechanisms and so are not themselves conser-
vative. However, the magnetic field system that produces force is, in principle, conservative in the
sense that its state and stored energy can be described by only two variables. The “history” of the
system is not important.
It is possible to choose the variables in such a way that electrical power into this conservative
system is:
Pe = vi = i dλ/dt
Similarly, mechanical power out of the system is:
Pm = fe dx/dt
The difference between these two is the rate of change of energy stored in the system:
dWm/dt = Pe − Pm
It is then possible to compute the change in energy required to take the system from one state to
another by:
Wm(a) − Wm(b) = ∫b^a (i dλ − fe dx)

where the two states of the system are described by a = (λa, xa) and b = (λb, xb).
If the energy stored in the system is described by two state variables, λ and x, the total
differential of stored energy is:
dWm = (∂Wm/∂λ) dλ + (∂Wm/∂x) dx

and it is also:

dWm = i dλ − fe dx
So that we can make a direct equivalence between the derivatives and:
fe = − ∂Wm/∂x
In the case of rotary, as opposed to linear, motion, torque T e takes the place of force f e and
angular displacement θ takes the place of linear displacement x. Note that the product of torque
and angle has the same units as the product of force and distance (both have units of work, which
in the International System of units is Newton-meters or Joules).
In many cases we might consider a system which is electrically linear, in which case inductance is a function only of the mechanical position x. In this case, assuming that the energy integral is carried out from λ = 0 (so that the part of the integral carried out over x is zero),

λ(x) = L(x) i

Wm = ∫0^λ λ/L(x) dλ = (1/2) λ²/L(x)
This makes

fe = − (1/2) λ² ∂/∂x (1/L(x))
Note that this is numerically equivalent to
fe = (1/2) i² ∂L(x)/∂x
This is true only in the case of a linear system. Note that substituting L(x)i = λ too early in the
derivation produces erroneous results: in the case of a linear system it produces a sign error, but
in the case of a nonlinear system it is just wrong.
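This equivalence can be checked numerically for a made-up linear inductance L(x) (the function and the numbers below are our own illustration, not from the notes):

```python
def L(x, a=1e-3, g=1e-3):
    return a / (x + g)                       # an illustrative gap-dependent inductance

def d(f, x, h=1e-9):
    return (f(x + h) - f(x - h)) / (2 * h)   # central-difference derivative

x, i = 2e-3, 3.0
lam = L(x) * i                               # lambda = L(x)*i
f_energy   = -0.5 * lam**2 * d(lambda t: 1 / L(t), x)  # -(1/2) lambda^2 d(1/L)/dx
f_coenergy =  0.5 * i**2   * d(L, x)                   # +(1/2) i^2 dL/dx
print(f_energy, f_coenergy)                  # numerically equal (both ≈ -500 here)
```

Substituting λ = L(x)i before differentiating would flip the sign of the first expression, which is exactly the hazard the text warns about.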
3.1.1 Example: simple solenoid
Consider the magnetic actuator shown in cartoon form in Figure 5. The actuator consists of
a circular rod of ferromagnetic material (very highly permeable) that can move axially (the x-
direction) inside of a stationary piece, also made of highly permeable material. A coil of N turns
carries a current I. The rod has a radius R and spacing from the flat end of the stator is the
variable dimension x. At the other end there is a radial clearance between the rod and the stator
g. Assume g ≪ R. If the axial length of the radial gaps is ℓ = R/2, the area of the radial gaps is the same as the area of the gap between the rod and the stator at the variable gap.
Figure 5: Solenoid Actuator
The permeance of the variable width gap is:

P1 = µ0 π R² / x

and the permeance of the radial clearance gap is, if the gap dimension is small compared with the radius:

P2 = 2 µ0 π R ℓ / g = µ0 π R² / g

The inductance of the coil system is:

L = N² / (R1 + R2) = N² P1 P2 / (P1 + P2) = µ0 π R² N² / (x + g)
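That the series combination of the two permeances collapses to the closed form above is easy to verify numerically (the dimensions below are illustrative choices of ours):

```python
import math

mu0 = 4e-7 * math.pi
R, N, g, x = 0.01, 100, 0.5e-3, 1.0e-3
l = R / 2                                   # axial length of the radial gap

P1 = mu0 * math.pi * R**2 / x               # variable-width gap permeance
P2 = 2 * mu0 * math.pi * R * l / g          # radial clearance gap (= mu0*pi*R^2/g)
L_series = N**2 * P1 * P2 / (P1 + P2)       # the two gaps act in series
L_closed = mu0 * math.pi * R**2 * N**2 / (x + g)
print(L_series / L_closed)                  # ≈ 1
```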
Magnetic energy is:

Wm = ∫0^λ i dλ = (1/2) λ²/L(x) = λ² (x + g) / (2 µ0 π R² N²)
And then, of course, force of electric origin is:

fe = − ∂Wm/∂x = − (λ²/2) d/dx (1/L(x))

Here that is easy to carry out:

d/dx (1/L) = 1 / (µ0 π R² N²)

So that the force is:

fe(x) = − λ² / (2 µ0 π R² N²)
Given that the system is to be excited by a current, we may at this point substitute for flux:
λ = L(x) i = µ0 π R² N i / (x + g)

and then total force may be seen to be:

fe = − µ0 π R² N² i² / (2 (x + g)²)
The force is ‘negative’ in the sense that it tends to reduce x, or to close the gap.
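As a numeric sanity check of the closed-form force (with illustrative dimensions of our choosing), compare it with a finite-difference derivative of the coenergy (1/2) L(x) i²:

```python
import math

mu0 = 4e-7 * math.pi
R, N, g, i = 0.01, 100, 0.5e-3, 2.0

def L(x):
    return mu0 * math.pi * R**2 * N**2 / (x + g)

x, h = 1.0e-3, 1e-7
f_closed = -mu0 * math.pi * R**2 * N**2 * i**2 / (2 * (x + g)**2)
f_numeric = 0.5 * i**2 * (L(x + h) - L(x - h)) / (2 * h)  # current-driven: f = d((1/2)L i^2)/dx
print(f_closed, f_numeric)   # both negative: the force acts to close the gap
```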
3.1.2 Multiply Excited Systems
There may be (and in most electric machine applications there will be) more than one source of
electrical excitation (more than one coil). In such systems we may write the conservation of energy
expression as:
dWm = Σk ik dλk − fe dx
which simply suggests that electrical input to the magnetic field energy storage is the sum (in this
case over the index k) of inputs from each of the coils. To find the total energy stored in the
system it is necessary to integrate over all of the coils (which may and in general will have mutual
inductance).
Of course, if the system is conservative, Wm(λ1, λ2, . . . , x) is uniquely specified and so the actual
path taken in carrying out this integral will not affect the value of the resulting energy.
Wm = ∫ i · dλ

3.1.3 Coenergy
We often will describe systems in terms of inductance rather than its reciprocal, so that current,
rather than flux, appears to be the relevant variable.
It is convenient to derive a new energy variable, which we will call co-energy, by:

W′m = Σi λi ii − Wm

and in this case it is quite easy to show that the energy differential is (for a single mechanical variable) simply:

dW′m = Σk λk dik + fe dx

so that force produced is:

fe = ∂W′m/∂x
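For a single, electrically linear coil the relation between energy and co-energy reduces to a quick check (the numbers are illustrative, not from the notes):

```python
L, i = 0.02, 3.0
lam = L * i                    # lambda = L*i for a linear coil
Wm = 0.5 * lam**2 / L          # energy, expressed in its natural variable lambda
Wm_co = lam * i - Wm           # W'm = sum_i lambda_i i_i - Wm
print(Wm_co, 0.5 * L * i**2)   # co-energy equals (1/2) L i^2; both ≈ 0.09 here
```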
3.2 Example: Synchronous Machine
Figure 6: Cartoon of Synchronous Machine
Consider a simple electric machine as pictured in Figure 6 in which there is a single winding
on a rotor (call it the field winding and a polyphase armature with three identical coils spaced at
uniform locations about the periphery. We can describe the flux linkages as:
λa = La ia + Lab ib + Lab ic + M cos(pθ) if
λb = Lab ia + La ib + Lab ic + M cos(pθ − 2π/3) if
λc = Lab ia + Lab ib + La ic + M cos(pθ + 2π/3) if
λf = M cos(pθ) ia + M cos(pθ − 2π/3) ib + M cos(pθ + 2π/3) ic + Lf if
It is assumed that the flux linkages are sinusoidal functions of rotor position. As it turns out,
many electrical machines work best (by many criteria such as smoothness of torque production)
if this is the case, so that techniques have been developed to make those flux linkages very nearly
sinusoidal. We will see some of these techniques in later chapters of these notes. For the moment,
we will simply assume these dependencies. In addition, we assume that the rotor is magnetically
’round’, which means the stator self inductances and the stator phase to phase mutual inductances
are not functions of rotor position. Note that if the phase windings are identical (except for their
If there are three uniformly spaced
angular position), they will have identical self inductances.
windings the phase-phase mutual inductances will all be the same.
Now, this system can be simply described in terms of coenergy. With multiple excitation it
is important to exercise some care in taking the coenergy integral (to ensure that it is taken over
a valid path in the multi-dimensional space). In our case there are actually five dimensions, but
only four are important since we can position the rotor with all currents at zero so there is no
contribution to coenergy from setting rotor position. Suppose the rotor is at some angle θ and that
the four currents have values ia0, ib0, ic0 and if 0. One of many correct path integrals to take would
be:
W′m = ∫0^{ia0} La ia dia
    + ∫0^{ib0} (Lab ia0 + La ib) dib
    + ∫0^{ic0} (Lab ia0 + Lab ib0 + La ic) dic
    + ∫0^{if0} (M cos(pθ) ia0 + M cos(pθ − 2π/3) ib0 + M cos(pθ + 2π/3) ic0 + Lf if) dif

The result is:

W′m = (La/2)(ia0² + ib0² + ic0²) + Lab (ia0 ib0 + ia0 ic0 + ic0 ib0)
    + M if0 (ia0 cos(pθ) + ib0 cos(pθ − 2π/3) + ic0 cos(pθ + 2π/3))
    + (Lf/2) if0²
Since there are no variations of the stator inductances with rotor position θ, torque is easily
given by:
Te = ∂W′m/∂θ = − p M if0 (ia0 sin(pθ) + ib0 sin(pθ − 2π/3) + ic0 sin(pθ + 2π/3))
3.2.1 Current Driven Synchronous Machine
Now assume that we can drive this thing with currents:
ia0 = Ia cos(ωt)
ib0 = Ia cos(ωt − 2π/3)
ic0 = Ia cos(ωt + 2π/3)
if0 = If
and assume the rotor is turning at synchronous speed:
pθ = ωt + δi
Noting that cos x sin y = (1/2) sin(y − x) + (1/2) sin(y + x), we find the torque expression above to be:
Te = − p M Ia If [ ((1/2) sin δi + (1/2) sin(2ωt + δi))
    + ((1/2) sin δi + (1/2) sin(2ωt + δi − 4π/3))
    + ((1/2) sin δi + (1/2) sin(2ωt + δi + 4π/3)) ]

The sine functions on the left add and the ones on the right cancel, leaving:

Te = − (3/2) p M Ia If sin δi
And this is indeed one way of looking at a synchronous machine, which produces steady torque
if the rotor speed and currents all agree on frequency. Torque is related to the current torque angle
δi. As it turns out such machines are not generally run against current sources, but we will take
up actual operation of such machines later.
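The cancellation of the 2ωt terms can be verified numerically with the torque expression above (the parameter values are illustrative choices of ours):

```python
import math

p, M, Ia, If, omega, delta = 2, 0.1, 10.0, 5.0, 2 * math.pi * 60, 0.4

def Te(t):
    # instantaneous torque with balanced currents and p*theta = omega*t + delta_i
    pth = omega * t + delta
    ia = Ia * math.cos(omega * t)
    ib = Ia * math.cos(omega * t - 2 * math.pi / 3)
    ic = Ia * math.cos(omega * t + 2 * math.pi / 3)
    return -p * M * If * (ia * math.sin(pth)
                          + ib * math.sin(pth - 2 * math.pi / 3)
                          + ic * math.sin(pth + 2 * math.pi / 3))

steady = -1.5 * p * M * Ia * If * math.sin(delta)
print(Te(0.0) - steady, Te(0.123) - steady)  # both ≈ 0: torque is constant in time
```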
4 Field Descriptions: Continuous Media
While a basic understanding of electromechanical devices is possible using the lumped parameter
approach and the principle of virtual work as described in the previous section, many phenomena in
electric machines require a more detailed understanding which is afforded by a continuum approach.
In this section we consider a fields-based approach to energy flow using Poynting’s Theorem and
then a fields based description of forces using the Maxwell Stress Tensor. These techniques will
both be useful in further analysis of what happens in electric machines.
4.1 Field Description of Energy Flow: Poynting's Theorem
Start with Faraday's Law:

∇ × E = − ∂B/∂t

and Ampere's Law:

∇ × H = J
Multiplying the first of these by H and the second by E and taking the difference:

H · ∇ × E − E · ∇ × H = ∇ · (E × H) = − H · ∂B/∂t − E · J

On the left of this expression is the divergence of electromagnetic energy flow:

S = E × H
Here, S is the celebrated Poynting flow which describes power in an electromagnetic field system. (The units of this quantity are watts per square meter in the International System). On the right hand side are two terms: H · ∂B/∂t is the rate of change of magnetic stored energy. The second term, E · J, looks a lot like power dissipation. We will discuss each of these in more detail. For the moment, however, note that the divergence theorem of vector calculus yields:

∫_volume ∇ · S dv = ∮ S · n da
that is, the volume integral of the divergence of the Poynting energy flow is the same as the Poynting
energy flow over the surface of the volume in question. This integral becomes:
∮ S · n da = − ∫_volume (E · J + H · ∂B/∂t) dv
which is simply a realization that the total energy flow into a region of space is the same as the
volume integral over that region of the rate of change of energy stored plus the term that looks like
dissipation. Before we close this, note that, if there is motion of any material within the system, we
can use the empirical expression for transformation of electric field between observers moving with
respect to each other. Here the 'primed' frame is moving with respect to the 'unprimed' frame
with the velocity v:

E′ = E + v × B
This transformation describes, for example, the motion of a charged particle such as an electron
under the influence of both electric and magnetic fields. Now, if we assume that there is material
motion in the system we are observing and if we assign ~v to be the velocity of that material, so that
~E′ is measured in a frame in which thre is no material motion (that is the frame of the material
itself), the product of electric field and current density becomes:
E · J = (E′ − v × B) · J = E′ · J − (v × B) · J = E′ · J + v · (J × B)
In the last step we used the fact that in a scalar triple product the order of the scalar (dot)
and vector (cross) products can be interchanged and that reversing the order of terms in a vector
(cross) product simply changes the sign of that product. Now we have a ready interpretation for
what we have calculated:
(cid:16)
(cid:17)
(cid:17)
(cid:17)
If the ’primed’ coordinate system is actually the frame of material motion,
′
~E
·
~J =
1
σ |
~J
2
|
which is easily seen to be dissipation and is positive definite if material conductivity σ is positive.
The last term is obviously conversion of energy from electromagnetic to mechanical form:

$$\vec{v} \cdot \left( \vec{J} \times \vec{B} \right) = \vec{v} \cdot \vec{F}$$

where we have now identified force density to be:

$$\vec{F} = \vec{J} \times \vec{B}$$
This is the Lorentz Force Law, which describes the interaction of current with magnetic field
to produce force. It is not, however, the complete story of force production in electromechanical
systems. As we learned earlier, changes in geometry which affect magnetic stored energy can also
produce force. Fortunately, a complete description of electromechanical force is possible using only
magnetic fields and that is the topic of our next section.
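The scalar-triple-product manipulation above is easy to verify numerically. The following sketch (my own illustration, with random field values) checks that $\vec E\cdot\vec J = \vec E'\cdot\vec J + \vec v\cdot(\vec J\times\vec B)$ when $\vec E' = \vec E + \vec v\times\vec B$:

```python
# Numeric check of the scalar-triple-product step: with E' = E + v x B,
# we should find E.J = E'.J + v.(J x B) for arbitrary vectors.
import random

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

random.seed(1)
E, J, v, B = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
Ep = [e + c for e, c in zip(E, cross(v, B))]   # E' = E + v x B
lhs = dot(E, J)
rhs = dot(Ep, J) + dot(v, cross(J, B))
print(abs(lhs - rhs) < 1e-12)
```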
4.2 Field Description of Forces: Maxwell Stress Tensor
Forces of electromagnetic origin, because they are transferred by electric and magnetic fields, are
the result of those fields and may be calculated once the fields are known. In fact, if a surface can
be established that fully encases a material body, the force on that body can be shown to be the
integral of force density, or traction over that surface.
The traction τ derived by taking the cross product of surface current density and flux density
on the air-gap surface of a machine (above) actually makes sense in view of the empirically derived
Lorentz Force Law, given a (vector) current density and a (vector) flux density. This is actually
enough to describe the forces we see in many machines, but since electric machines have permeable
magnetic material and since magnetic fields produce forces on permeable material even in the
absence of macroscopic currents it is necessary to observe how force appears on such material. A
suitable empirical expression for force density is:
$$\vec{F} = \vec{J} \times \vec{B} - \frac{1}{2} \left( \vec{H} \cdot \vec{H} \right) \nabla \mu$$

where $\vec{H}$ is the magnetic field intensity and $\mu$ is the permeability.
Now, note that current density is the curl of magnetic field intensity, so that:
$$\vec{F} = \left( \nabla \times \vec{H} \right) \times \mu \vec{H} - \frac{1}{2} \left( \vec{H} \cdot \vec{H} \right) \nabla \mu$$

And, since:

$$\left( \nabla \times \vec{H} \right) \times \vec{H} = \left( \vec{H} \cdot \nabla \right) \vec{H} - \frac{1}{2} \nabla \left( \vec{H} \cdot \vec{H} \right)$$

force density is:

$$\vec{F} = \mu \left( \vec{H} \cdot \nabla \right) \vec{H} - \frac{1}{2} \mu \nabla \left( \vec{H} \cdot \vec{H} \right) - \frac{1}{2} \left( \vec{H} \cdot \vec{H} \right) \nabla \mu = \mu \left( \vec{H} \cdot \nabla \right) \vec{H} - \nabla \left( \frac{\mu}{2} \left( \vec{H} \cdot \vec{H} \right) \right)$$
This expression can be written by components: the component of force in the i’th dimension is:
$$F_i = \mu \sum_k H_k \frac{\partial H_i}{\partial x_k} - \frac{\partial}{\partial x_i} \left( \frac{\mu}{2} \sum_k H_k^2 \right)$$

The first term can be written as:

$$\mu \sum_k H_k \frac{\partial H_i}{\partial x_k} = \sum_k \frac{\partial}{\partial x_k} \left( \mu H_k H_i \right) - H_i \sum_k \frac{\partial}{\partial x_k} \mu H_k$$
The last term in this expression is easily shown to be divergence of magnetic flux density, which is
zero:
$$\nabla \cdot \vec{B} = \sum_k \frac{\partial}{\partial x_k} \mu H_k = 0$$

Using this, we can write force density in a more compact form as:

$$F_k = \sum_i \frac{\partial}{\partial x_i} \left( \mu H_i H_k - \frac{\mu}{2} \delta_{ik} \sum_n H_n^2 \right)$$
where we have used the Kronecker delta δik = 1 if i = k, 0 otherwise.
Note that this force density is in the form of the divergence of a tensor:
$$F_k = \sum_i \frac{\partial}{\partial x_i} T_{ik} \qquad \text{or} \qquad \vec{F} = \nabla \cdot \underline{\underline{T}}$$

In this case, force on some object that can be surrounded by a closed surface can be found by
using the divergence theorem:

$$\vec{f} = \int_{\text{vol}} \vec{F} \, dv = \int_{\text{vol}} \nabla \cdot \underline{\underline{T}} \, dv = \oint \underline{\underline{T}} \cdot \vec{n} \, da$$

or, if we note surface traction to be $\tau_i = \sum_k T_{ik} n_k$, where $\vec{n}$ is the surface normal vector, then the
total force in direction $i$ is just:

$$f_i = \oint_s \tau_i \, da = \oint \sum_k T_{ik} n_k \, da$$
The interpretation of all of this is less difficult than the notation suggests. This field description
of forces gives us a simple picture of surface traction, the force per unit area on a surface. If we
just integrate this traction over the area of some body we get the whole force on the body.
Note one more thing about this notation. Sometimes when subscripts are repeated as they are
here the summation symbol is omitted. Thus we would write $\tau_i = \sum_k T_{ik} n_k = T_{ik} n_k$.
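As a concrete illustration of this bookkeeping (a sketch with hypothetical field values, not from the notes), the following builds $T_{ik} = \mu (H_i H_k - \tfrac{1}{2}\delta_{ik}|H|^2)$ and evaluates the traction $\tau_i = \sum_k T_{ik} n_k$ on a surface; the shear component reproduces $\mu H_x H_y$:

```python
import math

MU0 = 4e-7 * math.pi   # free-space permeability

def traction(H, n, mu=MU0):
    """tau_i = sum_k T_ik n_k with T_ik = mu*(H_i*H_k - 0.5*delta_ik*|H|^2)."""
    H2 = sum(h * h for h in H)
    T = [[mu * (H[i] * H[k] - (0.5 * H2 if i == k else 0.0))
          for k in range(3)] for i in range(3)]
    return [sum(T[i][k] * n[k] for k in range(3)) for i in range(3)]

H = (1000.0, 2000.0, 0.0)           # A/m, hypothetical field at the surface
tau = traction(H, (0.0, 1.0, 0.0))  # surface with +y normal
print(tau[0], MU0 * H[0] * H[1])    # shear component equals mu0*Hx*Hy
```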
4.3 Example: Linear Induction Machine
Figure 7 shows a highly simplified picture of a single sided linear induction motor. This is not how
most linear induction machines are actually built, but it is possible to show through symmetry
arguments that the analysis we can carry out here is actually valid for other machines of this class.
This machine consists of a stator (the upper surface) which is represented as a surface current on
the surface of a highly permeable region. The moving element consists of a thin layer of conducting
material on the surface of a highly permeable region. The moving element (or ’shuttle’) has a
velocity u with respect to the stator and that motion is in the x direction. The stator surface
current density is assumed to be:
$$K_z = \operatorname{Re} \left\{ \underline{K}_z e^{j(\omega t - kx)} \right\}$$
[Figure 7: Simple model of a single-sided linear induction machine: a stator surface current $K_z$ on a highly permeable surface faces, across an air gap $g$, a shuttle carrying surface conductivity $\sigma_s$ and surface current $K_s$, moving with velocity $u$ in the $x$ direction.]
Note that we are ignoring some important effects, such as those arising from finite length of the
stator and of the shuttle. Such effects can be quite important, but we will leave those until later,
as they are what make linear motors interesting.
Viewed from the shuttle, for which the coordinate in the direction of motion is $x' = x - ut'$, the
relative frequency is:

$$\omega t - kx = (\omega - ku) t' - kx' = \omega_s t' - kx'$$
Now, since the shuttle surface can support a surface current and is excited by magnetic fields
which are in turn excited by the stator currents, it is reasonable to assume that the form of rotor
current is the same as that of the stator:
$$K_s = \operatorname{Re} \left\{ \underline{K}_s e^{j(\omega_s t' - kx')} \right\}$$
Ampere’s Law is, in this situation:
$$g \frac{\partial H_y}{\partial x} = K_z + K_s$$
which is, in complex amplitudes:

$$\underline{H}_y = -\frac{\underline{K}_z + \underline{K}_s}{jkg}$$

The y-component of Faraday's Law is, assuming the problem is uniform in the z-direction:

$$-j\omega_s \underline{B}_y = jk \underline{E}'_z$$

or

$$\underline{E}'_z = -\frac{\omega_s}{k} \mu_0 \underline{H}_y$$

A bit of algebraic manipulation yields expressions for the complex amplitudes of rotor surface
current and gap magnetic field:
$$\underline{K}_s = -\frac{j \frac{\mu_0 \omega_s \sigma_s}{k^2 g}}{1 + j \frac{\mu_0 \omega_s \sigma_s}{k^2 g}} \underline{K}_z$$

$$\underline{H}_y = \frac{j}{kg} \frac{\underline{K}_z}{1 + j \frac{\mu_0 \omega_s \sigma_s}{k^2 g}}$$
To find surface traction, the Maxwell Stress Tensor can be evaluated at a surface just below the
stator (on this surface the x-directed magnetic field is simply $H_x = K_z$). Thus the traction is

$$\tau_x = T_{xy} = \mu_0 H_x H_y$$

and the average of this is:

$$\langle \tau_x \rangle = \frac{\mu_0}{2} \operatorname{Re} \left\{ \underline{H}_x \underline{H}_y^* \right\}$$

This is:

$$\langle \tau_x \rangle = \frac{\mu_0}{2} \frac{1}{kg} \frac{\frac{\mu_0 \omega_s \sigma_s}{k^2 g}}{1 + \left( \frac{\mu_0 \omega_s \sigma_s}{k^2 g} \right)^2} \left| \underline{K}_z \right|^2$$

Now, if we consider electromagnetic power flow (Poynting's Theorem) in the y-direction:
$$S_y = E_z H_x$$

And since in the frame of the shuttle $\underline{E}'_z = -\frac{\omega_s}{k} \mu_0 \underline{H}_y$,

$$\langle S'_y \rangle = -\frac{\omega_s}{k} \langle \tau_x \rangle$$

Similarly, evaluated in the frame of the stator:

$$\langle S_y \rangle = -\frac{\omega}{k} \langle \tau_x \rangle$$
This shows what we already suspected: the electromagnetic power flow from the stator is the
force density on the shuttle times the wave velocity. The electromagnetic ower flow into the shuttle
is the same force density times the ’slip’ velocity. The difference between these two is the power
converted to mechanical form and it is the force density times the shuttle velocity.
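The complex amplitudes above are easy to cross-check numerically. The sketch below (hypothetical machine numbers, not from the notes) verifies that the closed forms for $\underline K_s$ and $\underline H_y$ satisfy Ampere's law, and that the two expressions for the average traction agree:

```python
import math

# Cross-check of the rotor-current and gap-field amplitudes, with
# hypothetical machine numbers.
MU0 = 4e-7 * math.pi
k, g, sigma_s = 10.0, 2e-3, 1.0e4        # wavenumber, gap, surface conductivity
omega_s = 2 * math.pi * 5.0              # slip frequency
Kz = 1000.0                              # stator surface current amplitude, A/m

alpha = MU0 * omega_s * sigma_s / (k**2 * g)
Ks = -1j * alpha / (1 + 1j * alpha) * Kz        # rotor surface current
Hy = (1j / (k * g)) * Kz / (1 + 1j * alpha)     # gap field

# Ampere's law in complex amplitudes: Hy = -(Kz + Ks)/(j*k*g)
Hy_ampere = -(Kz + Ks) / (1j * k * g)
# Average traction, two equivalent forms (Hx = Kz just below the stator):
tau_direct = 0.5 * MU0 * (Kz * Hy.conjugate()).real
tau_formula = 0.5 * MU0 / (k * g) * alpha / (1 + alpha**2) * abs(Kz)**2
print(abs(Hy_ampere - Hy) < 1e-9 * abs(Hy), abs(tau_direct - tau_formula) < 1e-6)
```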
4.4 Rotating Machines
The use of this formulation in rotating machines is a bit tricky because, at least formally, directional
vectors must have constant identity if an integral of forces is to become a total force. In cylindrical
coordinates, of course, the directional vectors are not of constant identity. However, with care and
understanding of the direction of traction and how it is integrated we can make use of the MST
approach in rotating electric machines.
Now, if we go back to the case of a circular cylinder and are interested in torque, it is pretty
clear that we can compute the circumferential force by noting that the normal vector to the cylinder
is just the radial unit vector, and then the circumferential traction must simply be:
τθ = µ0HrHθ
Assuming that there are no fluxes inside the surface of the rotor, simply integrating this over
the surface gives azimuthal force. In principle this is the same as surrounding the surface of the
rotor by a continuum of infinitely small boxes, one surface just outside the rotor and with a normal
facing outward, the other surface just inside with normal facing inward. (Of course the MST is
zero on this inner surface). Then multiplying by radius (moment arm) gives torque. The last step
is to note that, if the rotor is made of highly permeable material, the azimuthal magnetic field just
outside the rotor is equal to surface current density.
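For a uniform average circumferential traction the integration is trivial; a quick sketch with hypothetical numbers (my own illustration): the traction acts over the cylinder surface $2\pi R L$ and the moment arm is $R$.

```python
import math

# Torque from uniform circumferential traction on a rotor surface.
R, L = 0.05, 0.1          # rotor radius and active length, m (hypothetical)
tau_theta = 1.0e4         # average circumferential traction, N/m^2 (hypothetical)
torque = 2 * math.pi * R * L * tau_theta * R   # T = (2*pi*R*L) * tau * R
print(torque)             # about 15.7 N*m
```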
5 Generalization to Continuous Media
Now, consider a system with not just a multiplicity of circuits but a continuum of current-carrying
paths. In that case we could identify the co-energy as:
$$W'_m = \int_{\text{area}} \lambda(\vec{a}) \, d\vec{J} \cdot d\vec{a}$$
where that area is chosen to cut all of the current carrying conductors. This area can be picked to
be perpendicular to each of the current filaments since the divergence of current is zero. The flux λ
is calculated over a path that coincides with each current filament (such paths exist since current
has zero divergence). Then the flux is:
$$\lambda(\vec{a}) = \int \vec{B} \cdot d\vec{n}$$

Now, if we use the vector potential $\vec{A}$, for which the magnetic flux density is:

$$\vec{B} = \nabla \times \vec{A}$$

the flux linked by any one of the current filaments is:

$$\lambda(\vec{a}) = \oint \vec{A} \cdot d\vec{\ell}$$

where $d\vec{\ell}$ is the path around the current filament. This implies directly that the coenergy is:

$$W'_m = \int_{\text{area}} \int_J \oint \vec{A} \cdot d\vec{\ell} \, dJ \cdot d\vec{a}$$
Now: it is possible to make dℓ coincide with d~a and be parallel to the current filaments, so that:
$$W'_m = \int_{\text{vol}} \vec{A} \cdot d\vec{J} \, dv$$

5.1 Permanent Magnets
Permanent magnets are becoming an even more important element in electric machine systems.
Often systems with permanent magnets are approached in a relatively ad-hoc way, made equivalent
to a current that produces the same MMF as the magnet itself.
The constitutive relationship for a permanent magnet relates the magnetic flux density B to
magnetic field $\vec{H}$ and the property of the magnet itself, the magnetization $\vec{M}$:

$$\vec{B} = \mu_0 \left( \vec{H} + \vec{M} \right)$$
Now, the effect of the magnetization is to act as if there were a current (called an amperian current)
with density:
$$\vec{J}^* = \nabla \times \vec{M}$$
Note that this amperian current “acts” just like ordinary current in making magnetic flux density.
Magnetic co-energy is:
$$W'_m = \int_{\text{vol}} \vec{A} \cdot \nabla \times d\vec{M} \, dv$$
Next, note the vector identity
$$\nabla \cdot \left( \vec{C} \times \vec{D} \right) = \vec{D} \cdot \left( \nabla \times \vec{C} \right) - \vec{C} \cdot \left( \nabla \times \vec{D} \right)$$
Then, noting that $\vec{B} = \nabla \times \vec{A}$:

$$W'_m = -\int_{\text{vol}} \nabla \cdot \left( \vec{A} \times d\vec{M} \right) dv + \int_{\text{vol}} \left( \nabla \times \vec{A} \right) \cdot d\vec{M} \, dv$$

Now, applying the divergence theorem to the first integral:

$$W'_m = -\oint \vec{A} \times d\vec{M} \cdot d\vec{s} + \int_{\text{vol}} \vec{B} \cdot d\vec{M} \, dv$$
The first of these integrals (closed surface) vanishes if it is taken over a surface just outside the
magnet, where M is zero. Thus the magnetic co-energy in a system with only a permanent magnet
source is
$$W'_m = \int_{\text{vol}} \vec{B} \cdot d\vec{M} \, dv$$
Adding current carrying coils to such a system is done in the obvious way.
MIT OpenCourseWare
http://ocw.mit.edu
6.685 Electric Machines
Fall 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-685-electric-machines-fall-2013/06deccf046b565a8295e5ada36b252df_MIT6_685F13_chapter1.pdf |
8.022 (E&M) – Lecture 6
Topics:
- More on capacitors
- Mini-review of electrostatics
- (almost) all you need to know for Quiz 1
Last time…
- Capacitor: system of charged conductors
- Capacitance: C = Q/V
  - It depends only on geometry
- Energy stored in capacitor: U = Q²/(2C) = ½ CV²
  - In agreement with energy associated with the electric field
- Let's now apply what we have learned…

G. Sciolla – MIT, 8.022 – Lecture 6
Wimshurst machine and Leyden Jars (E1)
- A Wimshurst machine is used to charge 2 Leyden Jars
- Leyden Jars are simple cylindrical capacitors (inner conductor, insulator, outer conductor)
- What happens when we connect the inner and the outer surface?
- Why?
Dissectible Leyden Jar (E2)
- A Wimshurst machine is used to charge a Leyden Jar
- Where is the charge stored?
  - On the conductors?
  - On the dielectric?
- Take apart the capacitor and short the conductors
  - Nothing happens!
- Now reassemble it
  - Bang!
- Why?
  - Because it's "easier" for the charges to stay on the dielectric when we take the
    conductors apart, or the energy stored would have to change:
    U = Q²/(2C), and moving the plates away C would decrease → U would increase
Capacitors and dielectrics
- Parallel plates capacitor: C = Q/V = Q/(Ed) = A/(4πd)
- Add a dielectric between the plates:
  - The dielectric's molecules are not spherically symmetric
  - Electric charges are not free to move
  → E (= 4πσ between the plates) will pull + and − charges apart and orient them // E
- E_dielectric is opposite to E_capacitor
  - Given Q → V decreases
  - Given V → Q increases
  → C increases!
Energy is stored in capacitors (E6)
- A 100 µF oil-filled capacitor is charged to 4 kV
- What happens if we discharge it through a 12" long iron wire?
- How much energy is stored in the capacitor?
  - U = ½ CV² = 800 J. Big!
- Resistance of iron wire: very small, but >> than the rest of the circuit
  → All the energy is dumped on the wire in a small time
  → Huge currents! → Huge temperatures! → The wire will explode!
Capacitors in series
- Let's connect 2 capacitors C1 and C2 in series (A — C1 — B — C2), with voltage V across the pair
- The conductor between them starts electrically neutral → Q1 = Q2 = Q
- What is the total capacitance C of the new system?

  V = V1 + V2 = Q1/C1 + Q2/C2  →  1/C = V/Q = 1/C1 + 1/C2

- In general: C = (Σ_i 1/C_i)^(−1)
Capacitors in parallel
- Let's connect 2 capacitors C1 and C2 in parallel, with voltage V across both
- What is the total capacitance C of the new system?

  V = V1 = V2 and Q = Q1 + Q2  →  C = Q/V = (Q1 + Q2)/V = C1 + C2

- In general: C = Σ_{i=1}^{N} C_i
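The two combination rules can be captured in a couple of lines (an illustrative sketch):

```python
# Series and parallel combination rules for capacitors, as derived above.
def c_series(*caps):
    return 1.0 / sum(1.0 / c for c in caps)

def c_parallel(*caps):
    return sum(caps)

# Two equal capacitors: series halves the capacitance, parallel doubles it.
print(c_series(2.0, 2.0), c_parallel(2.0, 2.0))   # 1.0 4.0
```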
Application
- Why are capacitors useful?
  - …among other things…
  - They can store large amounts of energy and release it in a very short time
  - Energy stored: U = ½ CV²
  - The larger the capacitance, the larger the energy stored at a given V
- How to increase the capacitance?
  - Modify geometry
    - For parallel plates capacitors C = A/(4πd): increase A or decrease d
  - Add a dielectric in between the plates
  - Add capacitors in parallel
Bank of capacitors (E7)
- Bank of 12 × 80 µF capacitors in parallel
- Total capacitance: 960 µF
- Discharged on a 60 W light bulb when capacitors are charged at:
  V = 100 V, 200 V, 300 V
- What happens?
  - Energy stored in capacitor is U = ½ CV²
  → V = V0 : 2V0 : 3V0 → U = U0 : 4U0 : 9U0
  - R is the same → time of discharge will not change with V
  - The power will increase by a factor 9! (P = RI² and I = V/R)
- Will the bulb survive?
  - Remember: the light bulb is designed for 120 V…
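The scaling claimed in the demo follows directly from U = ½CV² (a quick check):

```python
# Energy stored in the 12 x 80 uF parallel bank at the three demo voltages.
C = 12 * 80e-6                      # 960 uF total

def energy(V):
    return 0.5 * C * V**2           # joules

U1, U2, U3 = energy(100.0), energy(200.0), energy(300.0)
print(U1, U2 / U1, U3 / U1)         # 4.8 J, then ratios 4.0 and 9.0
```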
Review of Electrostatics for Quiz 1
Disclaimer:
- Can we review all of the electrostatics in less than 1 hour?
  - No, but we will try anyway…
  - Only main concepts will be reviewed
  - Review main formulae and tricks to solve the various problems
  - No time for examples
- Go back to recitation notes or Psets and solve problems again
The very basic: Coulomb's law

$$\vec{F}_2 = \frac{q_1 q_2}{|r_{21}|^2} \hat{r}_{21}$$

where F2 is the force that the charge q2 feels due to q1.
NB: this is in principle the only thing you have to remember:
all the rest follows from this and the superposition principle
The very basic: Superposition principle

For point charges q1, ..., qN acting on a charge Q:

$$\vec{F}_Q = \sum_{i=1}^{N} \frac{q_i Q}{|r_i|^2} \hat{r}_i$$

For a continuous charge distribution of density ρ over a volume V:

$$\vec{F}_Q = \int_V \frac{dq \, Q}{|r|^2} \hat{r} = \int_V \frac{\rho \, dV \, Q}{|r|^2} \hat{r}$$
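Superposition is mechanical to implement (a sketch in Gaussian units, as the notes use):

```python
import math

def coulomb_force(Q, rQ, charges):
    """Net force on charge Q at rQ from point charges [(q_i, r_i), ...]
    (Gaussian units: F = qQ/r^2 along the separation vector)."""
    F = [0.0, 0.0, 0.0]
    for q, r in charges:
        d = [a - b for a, b in zip(rQ, r)]
        dist = math.sqrt(sum(x * x for x in d))
        for n in range(3):
            F[n] += Q * q * d[n] / dist**3
    return F

# Two equal charges placed symmetrically about the test charge cancel:
F = coulomb_force(1.0, (0.0, 0.0, 0.0),
                  [(1.0, (1.0, 0.0, 0.0)), (1.0, (-1.0, 0.0, 0.0))])
print(F)   # [0.0, 0.0, 0.0]
```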
The Importance of Superposition
Extremely important because it allows us to transform complicated
problems into sums of small, simple problems that we know how to
solve.
Example: [figure: a column of + charges with some positions empty]
Calculate the force F exerted by this distribution of charges on the
test charge q
Electric Field and Electric Potential
- Solving problems in terms of F_Coulomb is not always convenient
  - F depends on the probe charge q
  - We get rid of this dependence introducing the Electric Field

$$\vec{E} = \frac{\vec{F}_q}{q} = \frac{Q}{|r|^2} \hat{r}$$

- Advantages and disadvantages of E
  - E describes the properties of space due to the presence of charge Q ☺
  - It's a vector → hard integrals when applying superposition… ☹
- Introduce Electric Potential φ
  - φ(P) is the work done to move a unit charge from infinity to P(x,y,z)

$$\phi(x,y,z) = -\int_\infty^P \vec{E} \cdot d\vec{s}$$

  NB: true only when φ(∞) = 0
- Advantages: superposition still holds, but simpler calculation (scalar) ☺
Energy associated with E
- Moving charges in E requires work:

$$W_{1 \to 2} = -\int_1^2 \vec{F}_C \cdot d\vec{s} \qquad \text{where} \quad \vec{F}_{Coulomb} = \frac{Qq}{r^2} \hat{r}$$

- NB: the integral is independent of path: the force is conservative!
- Assembling a system of charges costs energy. This is the energy
  stored in the electric field:

$$U = \frac{1}{2} \int_{\text{volume with charges}} \rho \phi \, dV = \int_{\text{entire space}} \frac{E^2}{8\pi} \, dV$$
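A numeric sketch (my own check, Gaussian units): for a point charge, integrating E²/8π over the space outside radius a reproduces q²/(2a), the energy needed to assemble charge q on a sphere of radius a.

```python
import math

def field_energy_outside(q, a, rmax=1.0e4, n=200000):
    """Midpoint-rule integral of (E^2/8pi)*4pi*r^2 dr = q^2/(2 r^2) dr, a..rmax."""
    h = (rmax - a) / n
    total = 0.0
    for i in range(n):
        r = a + (i + 0.5) * h
        total += q * q / (2.0 * r * r) * h
    return total

q, a = 1.0, 1.0
print(field_energy_outside(q, a), q * q / (2 * a))  # both close to 0.5
```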
Electrostatics problems
- In electrostatics there are 3 different ways of describing a problem:
  ρ(x,y,z), E(x,y,z), φ(x,y,z)
- Solving most problems consists in going from one formulation to
  another. All you need to know is: how?
From ρ → E
- For a point charge:

$$\vec{E} = \frac{q}{|r|^2} \hat{r}$$

- General case (superposition principle):

$$\vec{E} = \int_V d\vec{E} = \int_V \frac{dq}{|r|^2} \hat{r}$$

  Solving this integral may not be easy…
- Special cases: look for symmetry and thank Mr. Gauss who solved the integrals for you
- Gauss's Law:

$$\Phi_E = \oint_S \vec{E} \cdot d\vec{A} = 4\pi Q_{enc} = \int_V 4\pi\rho \, dV$$

- N.B.: Gauss's law is always true but not always useful:
  symmetry is needed!
- Main step: choose the "right" gaussian surface so that
  E is constant on the surface of integration
From ρ → φ
- For a point charge: φ = q/r
  (NB: implicit hypothesis: φ(infinity) = 0)
- General case (superposition principle):

$$\phi = \int_V \frac{dq}{r}$$

  The problem is simpler than for E (only scalars involved) but not trivial…
- Special cases: if symmetry allows, use Gauss's law to extract E and then integrate E to
  get φ:

$$\phi_2 - \phi_1 = -\int_1^2 \vec{E} \cdot d\vec{s}$$

- N.B.: The force is conservative → the result is the same for any path, but
  choosing a simple one makes your life much easier…
From φ to E and ρ
Easy! No integration needed!
- From φ to E:

$$\vec{E} = -\nabla\phi$$

  - One derivative is all it takes but… make sure you choose the best
    coordinate system
  - You will not lose points but you will waste time…
- From φ to ρ:
  - Poisson tells you how to get from potential to charge distributions directly:

$$\nabla^2 \phi = -4\pi\rho$$

  - Uncomfortable with the Laplacian? Get there in 2 steps:
    - First calculate E: E = −∇φ
    - Then use the differential form of Gauss's law: ∇·E = 4πρ
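Quick numeric confirmation (an illustrative sketch): central finite differences on φ = q/r recover E = q r̂/r².

```python
import math

q = 1.0

def phi(r):
    return q / math.sqrt(r[0]**2 + r[1]**2 + r[2]**2)

def minus_grad(f, r, h=1e-6):
    """E = -grad(phi) by central finite differences."""
    E = []
    for i in range(3):
        rp, rm = list(r), list(r)
        rp[i] += h
        rm[i] -= h
        E.append(-(f(rp) - f(rm)) / (2 * h))
    return E

r = [1.0, 2.0, 2.0]                  # |r| = 3
E_numeric = minus_grad(phi, r)
E_exact = [x / 27.0 for x in r]      # q*rhat/r^2 = r/|r|^3
print(E_numeric, E_exact)
```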
Thoughts about φ and E
- The potential φ is always continuous
- E is not always continuous: it can "jump"
  - When we have surface charge distributions
  - Remember problem #1 in Pset 2
→ When solving problems always check for consistency!
Summary: how to move between ρ(x,y,z), E(x,y,z) and φ(x,y,z)
- ρ → φ: φ = ∫_V dq/r
- φ → ρ: ∇²φ = −4πρ (Poisson)
- ρ → E: Gauss's law (integral form)
- E → ρ: ∇·E = 4πρ
- E → φ: φ2 − φ1 = −∫_1^2 E·ds
- φ → E: E = −∇φ
Conductors
- Properties:
  - Surfaces of conductors are equipotential
  - E (field lines) always perpendicular to the surface
  - E_inside = 0
  - E_surface = 4πσ
- What's the most useful info?
  - E_inside = 0, because it comes in handy in conjunction with Gauss's law to solve
    problems of charge distributions inside conductors.
- Example: concentric cylindrical shells
  - Charge +Q deposited on the inner shell
  - No charge deposited on the external shell
  - What is E between the 2 shells?
  - −Q induced on the inner surface of the outer cylinder
  - +Q induced on the outer surface of the outer cylinder
E due to Charges and Conductors
- How to find E created by charges near conductors?
- Uniqueness theorem:
  - A solution that satisfies the boundary conditions is THE solution
  - Be creative and think of a distribution of point charges that will create the
    same field lines
- Example: method of images
Capacitors
- Capacitance
  - Two oppositely charged conductors kept at a potential difference V will
    have capacitance C = Q/V
  - NB: capacitance depends only on the geometry!
- Energy stored in capacitor: U = Q²/(2C) = ½ CV²
- What should you remember?
  - Parallel plate capacitor: very well
  - Be able to derive the other standard geometries
Conclusion
- Material for Quiz #1:
  - Up to this lecture (Purcell chapters 1/2/3)
- Next lecture:
  - Charges in motion: currents
  - NB: currents are not included in Quiz 1!
2 General results of representation theory
2.1 Subrepresentations in semisimple representations
Let A be an algebra.
Definition 2.1. A semisimple (or completely reducible) representation of A is a direct sum of
irreducible representations.
Example. Let V be an irreducible representation of A of dimension n. Then Y = End(V ),
with action of A by left multiplication, is a semisimple representation of A, isomorphic to nV (the
direct sum of n copies of V ). Indeed, any basis v1, ..., vn of V gives rise to an isomorphism of
representations End(V ) ≃ nV , given by x ↦ (xv1, ..., xvn).
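The isomorphism in the example can be seen concretely for matrices (a small numeric sketch, not from the text): the j-th column of ax equals a applied to the j-th column of x, so left multiplication on End(V) decomposes into n independent copies of the action on V.

```python
# Plain-list matrix helpers keep the sketch dependency-free.
def matmul(a, x):
    n = len(a)
    return [[sum(a[i][k] * x[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(a, v):
    return [sum(a[i][k] * v[k] for k in range(len(v))) for i in range(len(a))]

def column(x, j):
    return [row[j] for row in x]

a = [[1, 2], [3, 4]]       # an element of A acting on V = k^2
x = [[5, 6], [7, 8]]       # an element of the representation End(V)
ax = matmul(a, x)
# Column j of a*x equals a applied to column j of x (v_j = standard basis):
ok = all(column(ax, j) == matvec(a, column(x, j)) for j in range(2))
print(ok)   # True
```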
Remark. Note that by Schur's lemma, any semisimple representation V of A is canonically
identified with ⊕_X Hom_A(X, V ) ⊗ X, where X runs over all irreducible representations of A. Indeed,
we have a natural map f : ⊕_X Hom_A(X, V ) ⊗ X → V , given by g ⊗ x ↦ g(x), x ∈ X, g ∈ Hom_A(X, V ),
and it is easy to verify that this map is an isomorphism.
We'll see now how Schur's lemma allows us to classify subrepresentations in finite dimensional
semisimple representations.

Proposition 2.2. Let Vi, 1 ≤ i ≤ m, be irreducible finite dimensional pairwise nonisomorphic
representations of A, and W be a subrepresentation of V = ⊕_{i=1}^m niVi. Then W is isomorphic to
⊕_{i=1}^m riVi, ri ≤ ni, and the inclusion θ : W → V is a direct sum of inclusions θi : riVi → niVi given
by multiplication of a row vector of elements of Vi (of length ri) by a certain ri-by-ni matrix Xi
with linearly independent rows: θ(v1, ..., vri ) = (v1, ..., vri )Xi.
Proof. The proof is by induction in n := Σ_{i=1}^m ni. The base of induction (n = 1) is clear. To perform
the induction step, let us assume that W is nonzero, and fix an irreducible subrepresentation
P ⊂ W . Such P exists (Problem 1.20).² Now, by Schur's lemma, P is isomorphic to Vi for some i,
and the inclusion θ|_P : P → V factors through niVi, and upon identification of P with Vi is given
by the formula v ↦ (vq1, ..., vqni ), where ql ∈ k are not all zero.

Now note that the group Gi = GLni (k) of invertible ni-by-ni matrices over k acts on niVi
by (v1, ..., vni ) ↦ (v1, ..., vni )gi (and by the identity on njVj , j ≠ i), and therefore acts on the
set of subrepresentations of V , preserving the property we need to establish: namely, under the
action of gi, the matrix Xi goes to Xigi, while Xj , j ≠ i, don't change. Take gi ∈ Gi such that
(q1, ..., qni )gi = (1, 0, ..., 0). Then W gi contains the first summand Vi of niVi (namely, it is P gi),
hence W gi = Vi ⊕ W ′, where W ′ ⊂ n1V1 ⊕ ... ⊕ (ni − 1)Vi ⊕ ... ⊕ nmVm is the kernel of the projection
of W gi to the first summand Vi along the other summands. Thus the required statement follows
from the induction assumption.
Remark 2.3. In Proposition 2.2, it is not important that k is algebraically closed, nor does it matter
that V is finite dimensional. If these assumptions are dropped, the only change needed is that the
entries of the matrix Xi are no longer in k but in Di = EndA(Vi), which is, as we know, a division
algebra. The proof of this generalized version of Proposition 2.2 is the same as before (check it!).
2Another proof of the existence of P , which does not use the finite dimensionality of V , is by induction in n.
Namely, if W itself is not irreducible, let K be the kernel of the projection of W to the first summand V1. Then
K is a subrepresentation of (n1 − 1)V1 ⊕ ... ⊕ nmVm, which is nonzero since W is not irreducible, so K contains an
irreducible subrepresentation by the induction assumption.
2.2 The density theorem
Let A be an algebra over an algebraically closed field k.
Corollary 2.4. Let V be an irreducible finite dimensional representation of A, and v1, ..., vn
V there exists an element a
be any linearly independent vectors. Then for any w1, ..., wn
such that avi = wi.
�
V
A
�
�
Proof. Assume the contrary. Then the image of the map A → nV given by a ↦ (av1, ..., avn) is a proper subrepresentation, so by Proposition 2.2 it corresponds to an r-by-n matrix X, r < n. Thus, taking a = 1, we see that there exist vectors u1, ..., ur ∈ V such that (u1, ..., ur)X = (v1, ..., vn). Let (q1, ..., qn) be a nonzero vector such that X(q1, ..., qn)^T = 0 (it exists because r < n). Then (u1, ..., ur)X(q1, ..., qn)^T = 0, i.e. Σ qivi = 0, a contradiction with the linear independence of the vi.
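For A = Mat_2(R) acting on V = R², the corollary can be verified by direct linear algebra (a numerical sketch, not part of the proof; the matrix names are ours): finding a with avi = wi amounts to solving a matrix equation.

```python
import numpy as np

# Numerical sketch of Corollary 2.4 for A = Mat_2(R) acting on V = R^2:
# given linearly independent v1, v2 (columns of Vmat) and arbitrary targets
# w1, w2 (columns of Wmat), the element a = Wmat Vmat^{-1} of A sends vi to wi.
Vmat = np.array([[1.0, 1.0],
                 [0.0, 1.0]])   # columns v1, v2, linearly independent
Wmat = np.array([[2.0, 0.0],
                 [3.0, -1.0]])  # columns w1, w2, arbitrary
a = Wmat @ np.linalg.inv(Vmat)
assert np.allclose(a @ Vmat, Wmat)  # a v_i = w_i for i = 1, 2
```

Here invertibility of Vmat is exactly the linear independence hypothesis of the corollary.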
Theorem 2.5. (the Density Theorem). (i) Let V be an irreducible finite dimensional representation of A. Then the map δ : A → End V is surjective.

(ii) Let V = V1 ⊕ ... ⊕ Vr, where the Vi are irreducible pairwise nonisomorphic finite dimensional representations of A. Then the map ⊕_{i=1}^{r} δi : A → ⊕_{i=1}^{r} End(Vi) is surjective.
Proof. (i) Let B be the image of A in End(V ). We want to show that B = End(V ). Let c ∈ End(V ), let v1, ..., vn be a basis of V , and let wi = cvi. By Corollary 2.4, there exists a ∈ A such that avi = wi. Then a maps to c, so c ∈ B, and we are done.

(ii) Let Bi be the image of A in End(Vi), and B be the image of A in ⊕_i End(Vi). Recall that as a representation of A, ⊕_{i=1}^{r} End(Vi) is semisimple: it is isomorphic to ⊕_{i=1}^{r} diVi, where di = dim Vi. Then by Proposition 2.2, B = ⊕_i Bi. On the other hand, (i) implies that Bi = End(Vi). Thus (ii) follows.
2.3 Representations of direct sums of matrix algebras
In this section we consider representations of algebras A = ⊕_{i=1}^{r} Mat_{di}(k) for any field k.

Theorem 2.6. Let A = ⊕_{i=1}^{r} Mat_{di}(k). Then the irreducible representations of A are V1 = k^{d1}, ..., Vr = k^{dr}, and any finite dimensional representation of A is a direct sum of copies of V1, ..., Vr.
In order to prove Theorem 2.6, we shall need the notion of a dual representation.
Definition 2.7. (Dual representation) Let V be a representation of any algebra A. Then the dual representation V* is the representation of the opposite algebra A^op (or, equivalently, right A-module) with the action

(f · a)(v) := f(av).
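For A = Mat_d(k) acting on V = k^d, Definition 2.7 can be made concrete (an illustration, not part of the text): identifying V* with row vectors, the right action f · a is the row vector fa, and the identity (BC)^T = C^T B^T, which makes the transpose an isomorphism Mat_d(k) → Mat_d(k)^op, is easy to check numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
B, C = rng.standard_normal((2, 3, 3))   # two random 3x3 matrices

# (BC)^T = C^T B^T: the transpose reverses products, i.e. it is an
# isomorphism Mat_d(k) -> Mat_d(k)^op.
assert np.allclose((B @ C).T, C.T @ B.T)

# Dual action (f . a)(v) = f(av): with f a row vector on V = R^3,
# f . a is simply the row vector f a, by associativity of matrix products.
f = rng.standard_normal(3)
a = rng.standard_normal((3, 3))
v = rng.standard_normal(3)
assert np.isclose((f @ a) @ v, f @ (a @ v))
```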
Proof of Theorem 2.6. First, the given representations are clearly irreducible, as for any v ≠ 0, w ∈ Vi, there exists a ∈ A such that av = w. Next, let X be an n-dimensional representation of A. Then X* is an n-dimensional representation of A^op. But (Mat_{di}(k))^op ≅ Mat_{di}(k), with isomorphism φ(X) = X^T, as (BC)^T = C^T B^T. Thus, A ≅ A^op, and X* may be viewed as an n-dimensional representation of A. Define the map

θ : A ⊕ ... ⊕ A (n copies) → X*

by

θ(a1, ..., an) = a1y1 + ... + anyn,

where {yi} is a basis of X*. θ is clearly surjective, as k ⊂ A. Thus, the dual map θ* : X → A^{n*} is injective. But A^{n*} ≅ A^n as representations of A (check it!). Hence, Im θ* ≅ X is a subrepresentation of A^n. Next, Mat_{di}(k) = diVi as a representation of A, so A = ⊕_{i=1}^{r} diVi and A^n = ⊕_{i=1}^{r} ndiVi. Hence by Proposition 2.2, X = ⊕_{i=1}^{r} miVi, as desired.
Exercise. The goal of this exercise is to give an alternative proof of Theorem 2.6, not using
any of the previous results of Chapter 2.
Let A1, A2, ..., An be n algebras with units 11, 12, ..., 1n, respectively. Let A = A1 ⊕ A2 ⊕ ... ⊕ An. Clearly, 1i1j = δij1i, and the unit of A is 1 = 11 + 12 + ... + 1n.

For every representation V of A, it is easy to see that 1iV is a representation of Ai for every i ∈ {1, 2, ..., n}. Conversely, if V1, V2, ..., Vn are representations of A1, A2, ..., An, respectively, then V1 ⊕ V2 ⊕ ... ⊕ Vn canonically becomes a representation of A (with (a1, a2, ..., an) ∈ A acting on V1 ⊕ V2 ⊕ ... ⊕ Vn as (v1, v2, ..., vn) ↦ (a1v1, a2v2, ..., anvn)).
(a) Show that a representation V of A is irreducible if and only if 1iV is an irreducible representation of Ai for exactly one i ∈ {1, 2, ..., n}, while 1iV = 0 for all the other i. Thus, classify the irreducible representations of A in terms of those of A1, A2, ..., An.
(b) Let d ∈ N. Show that the only irreducible representation of Mat_d(k) is k^d, and every finite dimensional representation of Mat_d(k) is a direct sum of copies of k^d.
Hint: For every (i, j) ∈ {1, 2, ..., d}², let Eij ∈ Mat_d(k) be the matrix with 1 in the ith row of the jth column and 0's everywhere else. Let V be a finite dimensional representation of Mat_d(k). Show that V = E11V ⊕ E22V ⊕ ... ⊕ EddV, and that φi : E11V → EiiV, v ↦ Ei1v, is an isomorphism for every i ∈ {1, 2, ..., d}. For every v ∈ E11V, denote S(v) = span(E11v, E21v, ..., Ed1v). Prove that S(v) is a subrepresentation of V isomorphic to k^d (as a representation of Mat_d(k)), and that v ∈ S(v). Conclude that V = S(v1) ⊕ S(v2) ⊕ ... ⊕ S(vk), where {v1, v2, ..., vk} is a basis of E11V.

(c) Conclude Theorem 2.6.
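The computations in the hint all rest on the matrix-unit relations Eij Ekl = δjk Eil and Σi Eii = 1, which can be sanity checked numerically (an illustration only; d = 3 is arbitrary):

```python
import numpy as np

d = 3

def E(i, j):
    """Matrix unit: 1 in row i, column j (0-indexed), zeros elsewhere."""
    m = np.zeros((d, d))
    m[i, j] = 1.0
    return m

# E_ij E_kl = delta_{jk} E_il, the relation behind the hint's computations.
for i in range(d):
    for j in range(d):
        for k in range(d):
            for l in range(d):
                expected = E(i, l) if j == k else np.zeros((d, d))
                assert np.allclose(E(i, j) @ E(k, l), expected)

# The E_ii are orthogonal idempotents summing to the identity, which is why
# every representation V decomposes as E_11 V + ... + E_dd V.
assert np.allclose(sum(E(i, i) for i in range(d)), np.eye(d))
```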
2.4 Filtrations
Let A be an algebra. Let V be a representation of A. A (finite) filtration of V is a sequence of subrepresentations 0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V.
Lemma 2.8. Any finite dimensional representation V of an algebra A admits a finite filtration 0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V such that the successive quotients Vi/Vi−1 are irreducible.
Proof. The proof is by induction in dim(V ). The base is clear, and only the induction step needs to be justified. Pick an irreducible subrepresentation V1 ⊂ V, and consider the representation U = V/V1. Then by the induction assumption U has a filtration 0 = U0 ⊂ U1 ⊂ ... ⊂ Un−1 = U such that the Ui/Ui−1 are irreducible. Define Vi for i ≥ 2 to be the preimages of Ui−1 under the tautological projection V → V/V1 = U. Then 0 = V0 ⊂ V1 ⊂ V2 ⊂ ... ⊂ Vn = V is a filtration of V with the desired property.
2.5 Finite dimensional algebras

Definition 2.9. The radical of a finite dimensional algebra A is the set of all elements of A which act by 0 in all irreducible representations of A. It is denoted Rad(A).
Proposition 2.10. Rad(A) is a two-sided ideal.
Proof. Easy.
Proposition 2.11. Let A be a finite dimensional algebra.
(i) Let I be a nilpotent two-sided ideal in A, i.e., I^n = 0 for some n. Then I ⊂ Rad(A).
(ii) Rad(A) is a nilpotent ideal. Thus, Rad(A) is the largest nilpotent two-sided ideal in A.
Proof. (i) Let V be an irreducible representation of A. Let v ∈ V. Then Iv ⊂ V is a subrepresentation. If Iv ≠ 0 then Iv = V, so there is x ∈ I such that xv = v. Then x^n v = v, while x^n ∈ I^n = 0, a contradiction. Thus Iv = 0, so I acts by 0 in V and hence I ⊂ Rad(A).

(ii) Let 0 = A0 ⊂ A1 ⊂ ... ⊂ An = A be a filtration of the regular representation of A by subrepresentations such that Ai+1/Ai are irreducible. It exists by Lemma 2.8. Let x ∈ Rad(A). Then x acts on Ai+1/Ai by zero, so x maps Ai+1 to Ai. This implies that Rad(A)^n = 0, as desired.
Theorem 2.12. A finite dimensional algebra A has only finitely many irreducible representations
Vi up to isomorphism, these representations are finite dimensional, and
A/Rad(A) ≅ ⊕_i End Vi.
Proof. First, for any irreducible representation V of A, and for any nonzero v ∈ V, Av ≠ 0 is a finite dimensional subrepresentation of V. (It is finite dimensional as A is finite dimensional.) As V is irreducible and Av ≠ 0, V = Av and V is finite dimensional.

Next, suppose we have non-isomorphic irreducible representations V1, V2, ..., Vr. By Theorem 2.5, the homomorphism

⊕_i δi : A → ⊕_i End Vi

is surjective. So r ≤ Σ_i dim End Vi ≤ dim A. Thus, A has only finitely many non-isomorphic irreducible representations (at most dim A).

Now, let V1, V2, ..., Vr be all non-isomorphic irreducible finite dimensional representations of A. By Theorem 2.5, the homomorphism

⊕_i δi : A → ⊕_i End Vi

is surjective. The kernel of this map, by definition, is exactly Rad(A).
Corollary 2.13. Σ_i (dim Vi)² ≤ dim A, where the Vi's are the irreducible representations of A.
Proof. As dim End Vi = (dim Vi)², Theorem 2.12 implies that dim A − dim Rad(A) = Σ_i dim End Vi = Σ_i (dim Vi)². As dim Rad(A) ≥ 0, Σ_i (dim Vi)² ≤ dim A.
Example 2.14. 1. Let A = k[x]/(x^n). This algebra has a unique irreducible representation, which
is a 1-dimensional space k, in which x acts by zero. So the radical Rad(A) is the ideal (x).
2. Let A be the algebra of upper triangular n by n matrices. It is easy to check that the
irreducible representations of A are Vi, i = 1, ..., n, which are 1-dimensional, and any matrix x acts
by xii. So the radical Rad(A) is the ideal of strictly upper triangular matrices (as it is a nilpotent
ideal and contains the radical). A similar result holds for block-triangular matrices.
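For Example 2.14.2 the dimension bookkeeping of Theorem 2.12 and Corollary 2.13 can be checked directly: dim A = n(n+1)/2, the strictly upper triangular matrices have dimension n(n−1)/2, and there are n one-dimensional irreducibles (an arithmetic sanity check, not part of the text):

```python
# Dimension bookkeeping for A = upper triangular n x n matrices (Example 2.14.2):
# dim A = n(n+1)/2, Rad(A) = strictly upper triangular of dimension n(n-1)/2,
# and the n irreducible representations are all 1-dimensional.
for n in range(1, 10):
    dim_A = n * (n + 1) // 2
    dim_rad = n * (n - 1) // 2
    sum_sq_dims = n * 1 ** 2              # n irreducibles, each of dimension 1
    assert dim_A - dim_rad == sum_sq_dims  # dim A - dim Rad(A) = sum (dim Vi)^2
    assert sum_sq_dims <= dim_A            # Corollary 2.13
```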
Definition 2.15. A finite dimensional algebra A is said to be semisimple if Rad(A) = 0.
Proposition 2.16. For a finite dimensional algebra A, the following are equivalent:
1. A is semisimple.
2. Σ_i (dim Vi)² = dim A, where the Vi's are the irreducible representations of A.

3. A ≅ ⊕_i Mat_{di}(k) for some di.

4. Any finite dimensional representation of A is completely reducible (that is, isomorphic to a direct sum of irreducible representations).

5. A is a completely reducible representation of A.
Proof. As dim A − dim Rad(A) = Σ_i (dim Vi)², clearly dim A = Σ_i (dim Vi)² if and only if Rad(A) = 0. Thus, (1) ⇔ (2).

Next, by Theorem 2.12, if Rad(A) = 0, then clearly A ≅ ⊕_i Mat_{di}(k) for di = dim Vi. Thus, (1) ⇒ (3). Conversely, if A ≅ ⊕_i Mat_{di}(k), then by Theorem 2.6, Rad(A) = 0, so A is semisimple. Thus (3) ⇒ (1).

Next, (3) ⇒ (4) by Theorem 2.6. Clearly (4) ⇒ (5). To see that (5) ⇒ (3), let A = ⊕_i niVi. Consider End_A(A) (endomorphisms of A as a representation of A). As the Vi's are pairwise non-isomorphic, by Schur's lemma, no copy of Vi in A can be mapped to a distinct Vj. Also, again by Schur's lemma, End_A(Vi) = k. Thus, End_A(A) ≅ ⊕_i Mat_{ni}(k). But End_A(A) ≅ A^op by Problem 1.22, so A^op ≅ ⊕_i Mat_{ni}(k). Thus, A ≅ (⊕_i Mat_{ni}(k))^op = ⊕_i Mat_{ni}(k), as desired.
2.6 Characters of representations
Let A be an algebra and V a finite-dimensional representation of A with action δ. Then the character of V is the linear function νV : A → k given by

νV(a) = tr|V(δ(a)).

If [A, A] is the span of commutators [x, y] := xy − yx over all x, y ∈ A, then [A, A] ⊂ ker νV. Thus, we may view the character as a mapping νV : A/[A, A] → k.
Exercise. Show that if W ⊂ V are finite dimensional representations of A, then νV = νW + νV/W.
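The exercise comes down to the fact that in a basis adapted to W ⊂ V the operators act by block triangular matrices, and the trace of a block triangular matrix is the sum of the traces of its diagonal blocks. A numerical illustration (block sizes chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
top = rng.standard_normal((2, 2))    # action on the subrepresentation W
quot = rng.standard_normal((3, 3))   # induced action on the quotient V/W
off = rng.standard_normal((2, 3))    # off-diagonal block

# In a basis adapted to W <= V, the action on V is block upper triangular,
# so tr_V = tr_W + tr_{V/W}, i.e. characters are additive: nu_V = nu_W + nu_{V/W}.
full = np.block([[top, off],
                 [np.zeros((3, 2)), quot]])
assert np.isclose(np.trace(full), np.trace(top) + np.trace(quot))
```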
Theorem 2.17. (i) Characters of (distinct) irreducible finite-dimensional representations of A are
linearly independent.
(ii) If A is a finite-dimensional semisimple algebra, then these characters form a basis of
(A/[A, A])*.
Proof. (i) If V1, ..., Vr are nonisomorphic irreducible finite-dimensional representations of A, then the map δV1 ⊕ ... ⊕ δVr : A → End V1 ⊕ ... ⊕ End Vr is surjective by the density theorem, so νV1, ..., νVr are linearly independent. (Indeed, if Σ λi νVi(a) = 0 for all a ∈ A, then Σ λi Tr(Mi) = 0 for all Mi ∈ End_k Vi. But each tr(Mi) can range independently over k, so it must be that λ1 = ... = λr = 0.)
(ii) First we prove that [Mat_d(k), Mat_d(k)] = sl_d(k), the set of all matrices with trace 0. It is clear that [Mat_d(k), Mat_d(k)] ⊂ sl_d(k). If we denote by Eij the matrix with 1 in the ith row of the jth column and 0's everywhere else, we have [Eij, Ejm] = Eim for i ≠ m, and [Ei,i+1, Ei+1,i] = Eii − Ei+1,i+1. Now {Eim : i ≠ m} ∪ {Eii − Ei+1,i+1} forms a basis in sl_d(k), so indeed [Mat_d(k), Mat_d(k)] = sl_d(k), as claimed.
By semisimplicity, we can write A = Mat_{d1}(k) ⊕ ... ⊕ Mat_{dr}(k). Then [A, A] = sl_{d1}(k) ⊕ ... ⊕ sl_{dr}(k), and A/[A, A] ≅ k^r. By Theorem 2.6, there are exactly r irreducible representations of A (isomorphic to k^{d1}, ..., k^{dr}, respectively), and therefore r linearly independent characters on the r-dimensional vector space A/[A, A]. Thus, the characters form a basis.
2.7 The Jordan-Hölder theorem

We will now state and prove two important theorems about representations of finite dimensional algebras: the Jordan-Hölder theorem and the Krull-Schmidt theorem.
Theorem 2.18. (Jordan-Hölder theorem). Let V be a finite dimensional representation of A, and let 0 = V0 ⊂ V1 ⊂ ... ⊂ Vn = V and 0 = V0′ ⊂ V1′ ⊂ ... ⊂ Vm′ = V be filtrations of V, such that the representations Wi := Vi/Vi−1 and Wi′ := Vi′/Vi−1′ are irreducible for all i. Then n = m, and there exists a permutation ε of 1, ..., n such that Wε(i) is isomorphic to Wi′.
Proof. First proof (for k of characteristic zero). The character of V obviously equals the sum of characters of the Wi, and also the sum of characters of the Wi′. But by Theorem 2.17, the characters of irreducible representations are linearly independent, so the multiplicity of every irreducible representation W of A among the Wi and among the Wi′ is the same. This implies the theorem.³
Second proof (general). The proof is by induction on dim V. The base of induction is clear, so let us prove the induction step. If W1 = W1′ (as subspaces), we are done, since by the induction assumption the theorem holds for V/W1. So assume W1 ≠ W1′. In this case W1 ∩ W1′ = 0 (as W1, W1′ are irreducible), so we have an embedding f : W1 ⊕ W1′ → V. Let U = V/(W1 ⊕ W1′), and let 0 = U0 ⊂ U1 ⊂ ... ⊂ Up = U be a filtration of U with simple quotients Zi = Ui/Ui−1 (it exists by Lemma 2.8). Then we see that:

1) V/W1 has a filtration with successive quotients W1′, Z1, ..., Zp, and another filtration with successive quotients W2, ..., Wn.

2) V/W1′ has a filtration with successive quotients W1, Z1, ..., Zp, and another filtration with successive quotients W2′, ..., Wm′.

By the induction assumption, this means that the collection of irreducible representations with multiplicities W1, W1′, Z1, ..., Zp coincides on one hand with W1, ..., Wn, and on the other hand, with W1′, ..., Wm′. We are done.
The Jordan-Hölder theorem shows that the number n of terms in a filtration of V with irreducible successive quotients does not depend on the choice of a filtration, and depends only on V. This number is called the length of V. It is easy to see that n is also the maximal length of a filtration of V in which all the inclusions are strict.

The sequence of the irreducible representations W1, ..., Wn enumerated in the order they appear from some filtration of V as successive quotients is called a Jordan-Hölder series of V.

³This proof does not work in characteristic p because it only implies that the multiplicities of Wi and Wi′ are the same modulo p, which is not sufficient. In fact, the character of the representation pV, where V is any representation, is zero.
2.8 The Krull-Schmidt theorem
Theorem 2.19. (Krull-Schmidt theorem) Any finite dimensional representation of A can be uniquely
(up to an isomorphism and order of summands) decomposed into a direct sum of indecomposable
representations.
Proof. It is clear that a decomposition of V into a direct sum of indecomposable representations exists, so we just need to prove uniqueness. We will prove it by induction on dim V. Let V = V1 ⊕ ... ⊕ Vm = V1′ ⊕ ... ⊕ Vn′. Let is : Vs → V, is′ : Vs′ → V, ps : V → Vs, ps′ : V → Vs′ be the natural maps associated to these decompositions. Let χs = p1 is′ ps′ i1 : V1 → V1. We have Σ_{s=1}^{n} χs = 1. Now we need the following lemma.
Lemma 2.20. Let W be a finite dimensional indecomposable representation of A. Then

(i) Any homomorphism χ : W → W is either an isomorphism or nilpotent;

(ii) If χs : W → W, s = 1, ..., n are nilpotent homomorphisms, then so is χ := χ1 + ... + χn.
Proof. (i) Generalized eigenspaces of χ are subrepresentations of W, and W is their direct sum. Thus, χ can have only one eigenvalue λ. If λ is zero, χ is nilpotent, otherwise it is an isomorphism.

(ii) The proof is by induction in n. The base is clear. To make the induction step (n − 1 to n), assume that χ is not nilpotent. Then by (i) χ is an isomorphism, so Σ_{i=1}^{n} χ⁻¹χi = 1. The morphisms χ⁻¹χi are not isomorphisms, so by (i) they are nilpotent. Thus 1 − χ⁻¹χn = χ⁻¹χ1 + ... + χ⁻¹χn−1 is an isomorphism, which is a contradiction with the induction assumption.
By the lemma, we find that for some s, χs must be an isomorphism; we may assume that s = 1. In this case, V1′ = Im(p1′i1) ⊕ Ker(p1i1′), so since V1′ is indecomposable, we get that f := p1′i1 : V1 → V1′ and g := p1i1′ : V1′ → V1 are isomorphisms.

Let B = ⊕_{j>1} Vj and B′ = ⊕_{j>1} Vj′; then we have V = V1 ⊕ B = V1′ ⊕ B′. Consider the map h : B → B′ defined as a composition of the natural maps B → V → B′ attached to these decompositions. We claim that h is an isomorphism. To show this, it suffices to show that Ker h = 0 (as h is a map between spaces of the same dimension). Assume that v ∈ Ker h ⊂ B. Then v ∈ V1′. On the other hand, the projection of v to V1 is zero, so gv = 0. Since g is an isomorphism, we get v = 0, as desired.

Now by the induction assumption, m = n, and Vj ≅ Vε(j)′ for some permutation ε of 2, ..., n. The theorem is proved.
Exercise. Let A be the algebra of real-valued continuous functions on R which are periodic
with period 1. Let M be the A-module of continuous functions f on R which are antiperiodic with
period 1, i.e., f(x + 1) = −f(x).

(i) Show that A and M are indecomposable A-modules.

(ii) Show that A is not isomorphic to M but A ⊕ A is isomorphic to M ⊕ M.
Remark. Thus, we see that in general, the Krull-Schmidt theorem fails for infinite dimensional modules. However, it still holds for modules of finite length, i.e., modules M such that any filtration of M has length bounded above by a certain constant l = l(M).
2.9 Problems
Problem 2.21. Extensions of representations. Let A be an algebra, and V, W be a pair of representations of A. We would like to classify representations U of A such that V is a subrepresentation of U, and U/V = W. Of course, there is an obvious example U = V ⊕ W, but are there any others?
Suppose we have a representation U as above. As a vector space, it can be (non-uniquely) identified with V ⊕ W, so that for any a ∈ A the corresponding operator δU(a) has the block triangular form

δU(a) = ( δV(a)  f(a)
          0      δW(a) ),

where f : A → Hom_k(W, V) is a linear map.
(a) What is the necessary and sufficient condition on f(a) under which δU(a) is a representation? Maps f satisfying this condition are called (1-)cocycles (of A with coefficients in Hom_k(W, V)). They form a vector space denoted Z¹(W, V).
(b) Let X : W → V be a linear map. The coboundary of X, dX, is defined to be the function A → Hom_k(W, V) given by dX(a) = δV(a)X − XδW(a). Show that dX is a cocycle, which vanishes if and only if X is a homomorphism of representations. Thus coboundaries form a subspace B¹(W, V) ⊂ Z¹(W, V), which is isomorphic to Hom_k(W, V)/Hom_A(W, V). The quotient Z¹(W, V)/B¹(W, V) is denoted Ext¹(W, V).
(c) Show that if f, f′ ∈ Z¹(W, V) and f − f′ ∈ B¹(W, V) then the corresponding extensions U, U′ are isomorphic representations of A. Conversely, if θ : U → U′ is an isomorphism such that

θ(a) = ( 1V  *
         0   1W )

then f − f′ ∈ B¹(W, V). Thus, the space Ext¹(W, V) "classifies" extensions of W by V.
(d) Assume that W, V are finite dimensional irreducible representations of A. For any f ∈ Ext¹(W, V), let Uf be the corresponding extension. Show that Uf is isomorphic to Uf′ as representations if and only if f and f′ are proportional. Thus isomorphism classes (as representations) of nontrivial extensions of W by V (i.e., those not isomorphic to W ⊕ V) are parametrized by the projective space PExt¹(W, V). In particular, every extension is trivial if and only if Ext¹(W, V) = 0.
Problem 2.22. (a) Let A = C[x1, ..., xn], and Va, Vb be one-dimensional representations in which the xi act by ai and bi, respectively (ai, bi ∈ C). Find Ext¹(Va, Vb) and classify 2-dimensional representations of A.
(b) Let B be the algebra over C generated by x1, ..., xn with the defining relations xixj = 0 for
all i, j. Show that for n > 1 the algebra B has infinitely many non-isomorphic indecomposable
representations.
Problem 2.23. Let Q be a quiver without oriented cycles, and PQ the path algebra of Q. Find irreducible representations of PQ and compute Ext¹ between them. Classify 2-dimensional representations of PQ.
Problem 2.24. Let A be an algebra, and V a representation of A. Let δ : A → End V. A formal deformation of V is a formal series

δ̃ = δ0 + tδ1 + ... + t^n δn + ...,

where δi : A → End(V) are linear maps, δ0 = δ, and δ̃(ab) = δ̃(a)δ̃(b).
If b(t) = 1 + b1t + b2t² + ..., where bi ∈ End(V), and δ̃ is a formal deformation of δ, then bδ̃b⁻¹ is also a deformation of δ, which is said to be isomorphic to δ̃.
(a) Show that if Ext1(V, V ) = 0, then any deformation of δ is trivial, i.e., isomorphic to δ.
(b) Is the converse to (a) true? (consider the algebra of dual numbers A = k[x]/x2).
Problem 2.25. The Clifford algebra. Let V be a finite dimensional complex vector space equipped with a symmetric bilinear form (, ). The Clifford algebra Cl(V) is the quotient of the tensor algebra TV by the ideal generated by the elements v ⊗ v − (v, v)1, v ∈ V. More explicitly, if xi, 1 ≤ i ≤ N, is a basis of V and (xi, xj) = aij then Cl(V) is generated by the xi with defining relations

xixj + xjxi = 2aij,  xi² = aii.

Thus, if (, ) = 0, Cl(V) = ∧V.
(i) Show that if (, ) is nondegenerate then Cl(V) is semisimple, and has one irreducible representation of dimension 2^n if dim V = 2n (so in this case Cl(V) is a matrix algebra), and two such representations if dim(V) = 2n + 1 (i.e., in this case Cl(V) is a direct sum of two matrix algebras).
Hint. In the even case, pick a basis a1, ..., an, b1, ..., bn of V in which (ai, aj) = (bi, bj) = 0, (ai, bj) = δij/2, and construct a representation of Cl(V) on S := ∧(a1, ..., an) in which bi acts as "differentiation" with respect to ai. Show that S is irreducible. In the odd case the situation is similar, except there should be an additional basis vector c such that (c, ai) = (c, bi) = 0, (c, c) = 1, and the action of c on S may be defined either by (−1)^degree or by (−1)^(degree+1), giving two representations S+, S− (why are they non-isomorphic?). Show that there are no other irreducible representations by finding a spanning set of Cl(V) with 2^(dim V) elements.
(ii) Show that Cl(V ) is semisimple if and only if (, ) is nondegenerate. If (, ) is degenerate, what
is Cl(V )/Rad(Cl(V ))?
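For dim V = 2 with the standard form (xi, xj) = δij, the Pauli matrices furnish a concrete 2-dimensional representation of Cl(V), in line with part (i) (dimension 2^n with n = 1). A quick check of the defining relations (an illustration, not the construction of the hint):

```python
import numpy as np

# Pauli matrices: a representation of Cl(C^2) for the standard form
# (x_i, x_j) = delta_ij, so the defining relations read
# x_i x_j + x_j x_i = 2 delta_ij and x_i^2 = 1.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
gens = [sx, sy]

for i, g in enumerate(gens):
    for j, h in enumerate(gens):
        anti = g @ h + h @ g
        expected = 2 * np.eye(2) if i == j else np.zeros((2, 2))
        assert np.allclose(anti, expected)
```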
2.10 Representations of tensor products
Let A, B be algebras. Then A ⊗ B is also an algebra, with multiplication (a1 ⊗ b1)(a2 ⊗ b2) = a1a2 ⊗ b1b2.
Exercise. Show that Mat_m(k) ⊗ Mat_n(k) ≅ Mat_mn(k).
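The isomorphism in the exercise is realized by the Kronecker product a ⊗ b ↦ kron(a, b), which is multiplicative and lands in Mat_{mn}(k); a dimension count (m²n² on both sides) then gives the isomorphism. A numerical check of the multiplicativity (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 3
a1, a2 = rng.standard_normal((2, m, m))
b1, b2 = rng.standard_normal((2, n, n))

# kron realizes Mat_m x Mat_n -> Mat_{mn} and respects the product on A (x) B:
# (a1 (x) b1)(a2 (x) b2) = (a1 a2) (x) (b1 b2).
lhs = np.kron(a1, b1) @ np.kron(a2, b2)
rhs = np.kron(a1 @ a2, b1 @ b2)
assert np.allclose(lhs, rhs)
assert np.kron(a1, b1).shape == (m * n, m * n)   # target is Mat_{mn}
```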
The following theorem describes irreducible finite dimensional representations of A ⊗ B in terms of irreducible finite dimensional representations of A and those of B.
Theorem 2.26. (i) Let V be an irreducible finite dimensional representation of A and W an irreducible finite dimensional representation of B. Then V ⊗ W is an irreducible representation of A ⊗ B.

(ii) Any irreducible finite dimensional representation M of A ⊗ B has the form (i) for unique V and W.
Remark 2.27. Part (ii) of the theorem typically fails for infinite dimensional representations;
e.g. it fails when A is the Weyl algebra in characteristic zero. Part (i) also may fail. E.g. let
A = B = V = W = C(x). Then (i) fails, as A ⊗ B is not a field.
Proof. (i) By the density theorem, the maps A → End V and B → End W are surjective. Therefore, the map A ⊗ B → End V ⊗ End W = End(V ⊗ W) is surjective. Thus, V ⊗ W is irreducible.

(ii) First we show the existence of V and W. Let A′, B′ be the images of A, B in End M. Then A′, B′ are finite dimensional algebras, and M is a representation of A′ ⊗ B′, so we may assume without loss of generality that A and B are finite dimensional.
In this case, we claim that Rad(A ⊗ B) = Rad(A) ⊗ B + A ⊗ Rad(B). Indeed, denote the latter by J. Then J is a nilpotent ideal in A ⊗ B, as Rad(A) and Rad(B) are nilpotent. On the other hand, (A ⊗ B)/J = (A/Rad(A)) ⊗ (B/Rad(B)), which is a product of two semisimple algebras, hence semisimple. This implies J ⊇ Rad(A ⊗ B). Altogether, by Proposition 2.11, we see that J = Rad(A ⊗ B), proving the claim.

Thus, we see that

(A ⊗ B)/Rad(A ⊗ B) = A/Rad(A) ⊗ B/Rad(B).
Now, M is an irreducible representation of (A ⊗ B)/Rad(A ⊗ B), so it is clearly of the form M = V ⊗ W, where V is an irreducible representation of A/Rad(A) and W is an irreducible representation of B/Rad(B), and V, W are uniquely determined by M (as all of the algebras involved are direct sums of matrix algebras).
MIT OpenCourseWare
http://ocw.mit.edu
18.712 Introduction to Representation Theory
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/18-712-introduction-to-representation-theory-fall-2010/0721b66f53c3c196ce86d6d867514442_MIT18_712F10_ch2.pdf |
6.895 Essential Coding Theory
September 13, 2004
Lecture 2
Lecturer: Madhu Sudan
Scribe: Joungkeun Lim
1 Overview
We consider the problem of communication in which a source wishes to transmit information to a receiver. The transmission is conducted through a channel, which may introduce errors into the information, depending on its type. In this model, we will introduce Shannon's coding theorem, which shows that, depending on the properties of the source and the channel, the probability that the receiver restores the original data exhibits a threshold behavior.
2 Shannon’s theory of information
In this section we will discuss the main result from Shannon's 1948 paper, which founded the theory of information.
There are three entities in Shannon’s model:
• Source : The party which produces information by a probabilistic process.
• Channel : The means of passing information from source to receiver. It may generate errors while
transporting the information.
• Receiver : The party which receives the information and tries to figure out information at source’s
end.
There are two options for the channel: "noisy" and "noiseless".
• Noisy channel : A channel that flips some bits of the information sent across it. The bits that flip are determined by a probabilistic process.
• Noiseless channel : A channel that perfectly transmits the information from source to receiver
without any error.
The source will generate and encode its message, and send it to the receiver through the channel. When the message arrives, the receiver will decode the message. We want to find an encoding-decoding scheme which makes it possible for the receiver to restore the exact message the source sent. Shannon's theorem states the conditions under which such a restoration can be conducted with high probability.
2.1 Shannon’s coding theorem
Theorem 1 (Shannon’s coding theorem)
There exist positive real values, the capacity C and the rate R, satisfying the following. If R < C then information transmission is feasible (coding theorem). If R > C then information transmission is not feasible (converse of the coding theorem).
The capacity C and the rate R are values associated with a channel and a source, respectively. The general way to compute these two values is a bit complicated. To get a better understanding, we will start with simple examples of Shannon's model, one in the noiseless model and one in the noisy model.
2.2 Preliminaries

Before studying the examples, we study the properties of the binary entropy function and Chernoff bounds, which play crucial roles in the analyses in later chapters.
Definition 2 For p ∈ [0, 1], the binary entropy function is defined as follows.

H(p) = p log2(1/p) + (1 − p) log2(1/(1 − p)).
H(p) is a concave function and attains its maximum value 1 at p = 1/2. The following property of H(p) is used in later chapters.
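The entropy function is simple to compute directly; here is a quick sketch in Python (the function name and test values are ours):

```python
import math

def H(p: float) -> float:
    """Binary entropy: H(p) = p*log2(1/p) + (1-p)*log2(1/(1-p))."""
    if p in (0.0, 1.0):
        return 0.0  # by convention, 0 * log2(1/0) = 0
    return p * math.log2(1.0 / p) + (1.0 - p) * math.log2(1.0 / (1.0 - p))

print(H(0.5))          # maximum value 1 at p = 1/2
print(H(0.1), H(0.9))  # symmetric about p = 1/2
```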
• Let Bn(0, r) be the ball of radius r (in Hamming distance) centered at 0 in {0, 1}^n. Then
V(r, n) = Vol(Bn(0, r)) = Σ_{i=0}^{r} (n choose i) ≤ 2^{H(r/n)·n}.
Hence Vol(pn, n) ≤ 2^{H(p)·n}.
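We can check the volume bound numerically for small parameters (a sketch; the helper names and parameters are our own):

```python
import math

def H(p: float) -> float:
    """Binary entropy function."""
    if p in (0.0, 1.0):
        return 0.0
    return p * math.log2(1 / p) + (1 - p) * math.log2(1 / (1 - p))

def ball_volume(r: int, n: int) -> int:
    """V(r, n) = |B_n(0, r)| = number of n-bit strings of Hamming weight <= r."""
    return sum(math.comb(n, i) for i in range(r + 1))

n, r = 100, 30
vol = ball_volume(r, n)
print(math.log2(vol), H(r / n) * n)   # log-volume vs. the entropy bound
assert vol <= 2 ** (H(r / n) * n)     # V(r, n) <= 2^{H(r/n) n} for r <= n/2
```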
Lemma 3 (Chernoff bounds)
If X1, X2, . . . , Xn are independent random variables in [0, 1] with E[Xi] = p, then
Pr[ |(1/n) Σ_{i=1}^{n} Xi − p| > ε ] ≤ 2^{−(ε²/2)·n}.
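A quick simulation illustrating the bound for Bernoulli variables (parameters are our own choices; the bound is far from tight here):

```python
import random

random.seed(0)
n, p, eps, trials = 1000, 0.3, 0.1, 2000

# Count how often the sample mean of n Bernoulli(p) variables deviates from p by more than eps.
deviations = 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n
    if abs(mean - p) > eps:
        deviations += 1

empirical = deviations / trials
bound = 2 ** (-(eps ** 2 / 2) * n)   # 2^{-(eps^2/2) n} = 2^{-5}
print(empirical, "<=", bound)
```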
2.3 An example of the noiseless model
The source produces a sequence of bits such that each bit is 0 with probability 1 − p and 1 with probability p, where p ≤ 1/2. The source produces one bit per unit of time. Since the channel is noiseless, it transmits to the receiver exactly the bits given by the source. The channel is allowed to transmit C bits per unit of time. In this case, the rate of the source is given by the entropy function H(p), and the capacity value is the number of bits transmitted through the channel per unit of time. When n is the amount of time for which we use the channel, Shannon's coding theorem is expressed as follows.
Theorem 4 (Shannon's noiseless coding theorem)
If C > H(p), then there exist an encoding function En and a decoding function Dn such that Pr[receiver figures out what the source produced] ≥ 1 − exp(−n).
Also, if C < H(p), then for every encoding function En and decoding function Dn, Pr[receiver figures out what the source produced] ≤ exp(−n).
2.4 An example of noisy model
The source produces a sequence of bits such that each bit is 0 with probability 1/2 and 1 with probability 1/2. The source produces R bits per unit of time, where R < 1. Since the channel is noisy, it flips each bit according to a probabilistic process; in this example, the channel flips each bit with probability p. Also, the channel transmits one bit per unit of time. In this case, the rate R is the number of bits produced by the source per unit of time, and the capacity C is given as 1 − H(p). Then Shannon's coding theorem is expressed as follows.
Theorem 5 (Shannon's noisy coding theorem)
If R < 1 − H(p), then there exist an encoding function En and a decoding function Dn such that Pr[receiver figures out what the source produced] ≥ 1 − exp(−n).
Also, if R > 1 − H(p), then for every encoding function En and decoding function Dn, Pr[receiver figures out what the source produced] ≤ exp(−n).
We prove the first part of the theorem using the probabilistic method, and give an idea of the proof of the second part.
Proof (First part)
Let k be the number of bits produced by the source, so k = R · n. Since R < 1 − H(p), there exists ε > 0 such that R < 1 − H(p + ε). For this ε, let r = n(p + ε) = n · p′, where p′ = p + ε. Now we can restate the theorem as follows.
If R = k/n < 1 − H(p), then there exist functions E : {0, 1}^k → {0, 1}^n and D : {0, 1}^n → {0, 1}^k such that Pr_{η∼BSCp,n, m∼Uk}[m ≠ D(E(m) + η)] ≤ exp(−n), where Uk is the uniform distribution on k-bit strings and BSCp,n is the distribution on n-bit strings in which each bit is 1 with probability p and 0 with probability 1 − p.
Pick the encoding function E : {0, 1}^k → {0, 1}^n at random, and let the decoding function D : {0, 1}^n → {0, 1}^k work as follows: given a string y ∈ {0, 1}^n, find the m ∈ {0, 1}^k such that Δ(y, E(m)) is minimized; this m is the value of D(y). Fix m ∈ {0, 1}^k and fix E(m) as well. Since E is randomly chosen, E(m′) is still random when m′ ≠ m. Let y be the value that the receiver acquires. In order for D(y) ≠ m, at least one of the following two events must occur:
• There exists some m′ ≠ m such that E(m′) ∈ B(y, r).
• y ∉ B(E(m), r).
If neither of the above events happens, then m is the unique message such that E(m) is within distance r of y, and so D(y) = m.
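A toy simulation of this scheme (all parameters are our own; with such small k and n the theorem's guarantees are only loosely visible, but the random encoder and minimum-distance decoder are exactly as described):

```python
import itertools
import random

random.seed(1)
k, n, p = 3, 15, 0.05          # rate R = k/n = 0.2, well below 1 - H(0.05)
msgs = list(itertools.product([0, 1], repeat=k))

# Random encoding E : {0,1}^k -> {0,1}^n
E = {m: tuple(random.randint(0, 1) for _ in range(n)) for m in msgs}

def D(y):
    """Minimum-distance decoding: the m minimizing the Hamming distance to E(m)."""
    return min(msgs, key=lambda m: sum(a != b for a, b in zip(E[m], y)))

trials, errors = 1000, 0
for _ in range(trials):
    m = random.choice(msgs)
    y = tuple(bit ^ (random.random() < p) for bit in E[m])  # BSC_p noise
    if D(y) != m:
        errors += 1
print("empirical decoding error rate:", errors / trials)
```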
We prove that the events above happen with low | https://ocw.mit.edu/courses/6-895-essential-coding-theory-fall-2004/074857398da1986d173fe74fd902aab0_lect02.pdf |
that E(m) is within a distance
of r from y and so D(y) = m.
We prove that each of these events happens with low probability. The second event happens when the error η = y − E(m) has more than n(p + ε) one bits; by the Chernoff bound we will have
Pr[y ∉ B(E(m), r)] ≤ 2^{−(ε²/2)n}.
For the first event, fix y and an m′ ≠ m, and consider the event that E(m′) ∈ B(y, r). Since E(m′) is random, the probability of this event is exactly Vol(B(y, r))/2^n. Using
Vol(B(y, p′n)) ≤ 2^{H(p′)n},
we have
Pr[E(m′) ∈ B(y, r)] ≤ 2^{H(p′)n − n}.
Using the union bound, we get Pr[∃ m′ ∈ {0, 1}^k s.t. E(m′) ∈ B(y, r)] ≤ 2^{k + H(p′)n − n}. Since R = k/n < 1 − H(p′), we have 2^{k + H(p′)n − n} = exp(−n), so the probability that the first event happens is also bounded by exp(−n).
Hence the probability that at least one of the above two events happens is bounded by exp(−n) when m and E(m) are fixed. Therefore, for the random E and its associated D, the probability is still bounded. By the probabilistic method, there exists an encoding E and associated decoding D such that the probability that either of the two events happens is bounded by exp(−n).
Here we give a brief sketch of the proof of the second part of the theorem. The decoding function partitions the universe {0, 1}^n into 2^k regions. By the Chernoff bound, Pr[number of 1 bits in the error < pn] is low. Hence when E(m) is transmitted from the source, the corrupted value y that arrives at the receiver has a spread-out distribution around E(m). This means the region that covers most of the possible values of y is much larger than the one of the 2^k regions that contains E(m). This makes the decoding inaccurate.
Applet Exploration: Trigonometric Identity
Start by opening the Trigonometric Identity applet from the Mathlets Gallery.
This mathlet illustrates sinusoidal functions and the trigonometric identity
a cos(ωt) + b sin(ωt) = A cos(ωt − φ), where a + ib = Ae^{iφ}.
That is, (A, φ) are the polar coordinates of (a, b).
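The identity is easy to verify numerically; a brief sketch in Python (the values of a, b, and ω are chosen arbitrarily):

```python
import cmath
import math

a, b, omega = 3.0, 4.0, 2.0
A, phi = cmath.polar(complex(a, b))  # polar coordinates of (a, b): A = 5, phi = atan2(4, 3)

# Both sides of the identity agree at a handful of sample times.
for t in (0.0, 0.37, 1.0, 2.5):
    lhs = a * math.cos(omega * t) + b * math.sin(omega * t)
    rhs = A * math.cos(omega * t - phi)
    assert abs(lhs - rhs) < 1e-12
print(A, phi)
```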
The sinusoidal function A cos(ωt − φ) is drawn here in red. A and φ
are the amplitude and phase lag of the sinusoid. They are both controlled
by sliders.
1. The phase lag φ measures how many radians the sinusoid falls behind the standard sinusoid, which we take to be the cosine. So when φ = π/2 you have the sine function. Verify this in the applet.
2. The final parameter is ω, the angular frequency. High frequency means
the waves come faster. Frequency zero means constant. Play with the ω
slider and understand this statement. Return the angular frequency to 2.
3. The trigonometric identity shows the remarkable fact that the sum of any two sinusoidal functions of the same frequency is again a sinusoid of the same frequency.
Use the a and b sliders to select coefficients for cos(ωt) and sin(ωt). The
a slider modifies the yellow cosine curve in the window at bottom and the
b slider modifies the blue sine curve. Notice that the sum of a cos(t) and
b sin(t) is displayed in the top window in green (which is a combination of
blue and yellow). There it is! - the linear combination is again sinusoidal,
or at least appears to be.
4. The window at the right shows the two complex numbers a + ib and Ae^{iφ}. The sinusoidal identity says that the green and red sinusoids will coincide exactly when the complex numbers a + ib and Ae^{iφ} coincide. Verify this on the applet by picking values of A and φ, and then adjusting a and b until the green and red sinusoids are the same.
MIT OpenCourseWare
http://ocw.mit.edu
18.03SC Differential Equations
Fall 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
18.782 Introduction to Arithmetic Geometry
Lecture #11
Fall 2013
10/10/2013
11.1 Quadratic forms over Qp
The Hasse-Minkowski theorem reduces the problem of determining whether a quadratic form
f over Q represents 0 to the problem of determining whether f represents zero over Qp for
all p ≤ ∞. At first glance this might not seem like progress, since there are infinitely many p
to check, but in fact we only need to check p = 2, p = ∞ and a finite set of odd primes.
Theorem 11.1. Let p be an odd prime and let f be a diagonal quadratic form of dimension n > 2 with coefficients a1, . . . , an ∈ Zp×. Then f represents 0 over Qp.
Proof. The equation f(x1, . . . , xn) ≡ 0 mod p is a homogeneous equation of degree 2 in n > 2 variables over Fp. It follows from the Chevalley-Warning theorem that it has a non-trivial solution (y1, . . . , yn) over Fp ≅ Z/pZ. Assume without loss of generality that y1 ≠ 0, and let g(y) be the univariate polynomial g(y) = f(y, y2, . . . , yn) over Zp. Then g(y1) ≡ 0 mod p and g′(y1) = 2a1y1 ≢ 0 mod p, so by Hensel's lemma there is a root z1 of g(y) over Zp. We then have f(z1, y2, . . . , yn) = g(z1) = 0, so f represents 0 over Qp.
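The Chevalley-Warning step can be checked by brute force for small odd primes (a sketch; the helper and the test forms are our own):

```python
from itertools import product

def has_nontrivial_zero(coeffs, p):
    """Does a1*x1^2 + ... + an*xn^2 = 0 (mod p) have a solution with some xi != 0?"""
    return any(
        any(xs) and sum(c * x * x for c, x in zip(coeffs, xs)) % p == 0
        for xs in product(range(p), repeat=len(coeffs))
    )

# Chevalley-Warning: degree 2 in n = 3 > 2 variables always gives a nontrivial zero
# when the coefficients are units mod p.
for p in (3, 5, 7, 11):
    for coeffs in ((1, 1, 1), (1, 2, 2), (1, 1, 2)):
        assert has_nontrivial_zero(coeffs, p)

# With only n = 2 variables the conclusion can fail: x^2 + y^2 = 0 mod 3.
print(has_nontrivial_zero((1, 1), 3))
```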
Corollary 11.2. Every quadratic form of dimension n > 2 over Q represents 0 over Qp for all but finitely many primes p.
Proof. In diagonal form the coefficients a1, . . . , an lie in Zp× for all odd p ∤ a1 · · · an.
For quadratic forms of dimension n ≤ 2, we note that a nondegenerate unary form never represents 0, and the nondegenerate form ax² + by² represents 0 if and only if −ab is a square (this holds over any field). But when −ab is not a square it may still be the case that ax² + by² represents a given nonzero element t, and having a criterion for identifying such t will be useful in our proof of the Hasse-Minkowski theorem.
Lemma 11.3. The nondegenerate quadratic form ax² + by² over Qp represents t ∈ Qp× if and only if (a, b)p = (t, −ab)p.
Proof. Since t ≠ 0, the equation ax² + by² = t has a non-trivial solution in Qp if and only if (a/t)x² + (b/t)y² = 1 has a solution, which is equivalent to (a/t, b/t)p = 1. We have
(a/t, b/t)p = (at, bt)p = (a, bt)p(t, bt)p = (a, b)p(a, t)p(t, bt)p = (a, b)p(t, abt)p = (a, b)p(t, abt)p(t, −t)p = (a, b)p(t, −ab)p,
where we have used that the Hilbert symbol is symmetric, bilinear, invariant on square classes, and satisfies (x, −x)p = 1. Thus (a/t, b/t)p = 1 if and only if (a, b)p(t, −ab)p = 1, which is equivalent to (a, b)p = (t, −ab)p since both are ±1.
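For odd p, the Hilbert symbol can be computed from Legendre symbols via the classical formula (standard, though not derived in these notes): writing a = p^α u and b = p^β v with u, v units, (a, b)p = (−1)^{αβ(p−1)/2} (u|p)^β (v|p)^α. A sketch:

```python
def legendre(u, p):
    """Legendre symbol (u|p) for an odd prime p not dividing u."""
    return 1 if pow(u % p, (p - 1) // 2, p) == 1 else -1

def hilbert_odd(a, b, p):
    """Hilbert symbol (a, b)_p for nonzero integers a, b and an odd prime p."""
    alpha = 0
    while a % p == 0:
        a //= p
        alpha += 1
    beta = 0
    while b % p == 0:
        b //= p
        beta += 1
    sign = -1 if (alpha * beta * ((p - 1) // 2)) % 2 else 1
    return sign * legendre(a, p) ** beta * legendre(b, p) ** alpha

# Properties used in the proof above: symmetry, invariance on square classes,
# and (x, -x)_p = 1.
p = 7
for a in (1, 2, 3, 5, 7, 10, 21):
    for b in (1, 2, 3, 5, 7, 10, 21):
        assert hilbert_odd(a, b, p) == hilbert_odd(b, a, p)
        assert hilbert_odd(25 * a, b, p) == hilbert_odd(a, b, p)  # 25 is a square
    assert hilbert_odd(a, -a, p) == 1
print("checks passed")
```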
Corollary 11.4. The nondegenerate form ax² + by² + cz² over Qp represents 0 if and only if (a, b)p = (−c, −ab)p.
Proof. By the lemma, it suffices to show that ax² + by² + cz² represents 0 if and only if the binary form ax² + by² represents −c. The reverse implication is clear (set z = 1). For the forward implication, if ax0² + by0² + cz0² = 0 then either z0 ≠ 0, in which case a(x0/z0)² + b(y0/z0)² = −c, or z0 = 0, in which case ax² + by² represents 0 and therefore every element of Qp, including −c.
Andrew V. Sutherland
Corollary 11.5. A ternary quadratic form over Q that represents 0 over all but at most
one completion of Q represents 0 over every completion of Q.
Proof. The corollary is trivially true if the form is degenerate and otherwise it follows from
the product formula for Hilbert symbols and the corollary above.
11.2 Approximation
We now prove two approximation theorems that we will need to prove the Hasse-Minkowski
theorem for Q. These are quite general theorems that have many applications, but we will
state them in a particularly simple form that suffices for our purposes here. Before proving
them we first note/recall that Q is dense in Qp and Z is dense in Zp.
Theorem 11.6. Let p ≤ ∞ be any prime of Q. Under the metric d(x, y) = |x − y|p, the
set Q is dense in Qp and the set Z is dense in Zp.
Proof. We know that Q∞ = R is the completion of Q, and we proved that Qp is (isomorphic to) the completion of Q for p < ∞; any field is dense in its completion (this follows immediately from the definition). We note that the completion Z∞ = Z (any Cauchy sequence of integers must be eventually constant), and for p < ∞ we can apply the fact that Zp = {x ∈ Qp : |x|p ≤ 1} and Z = {x ∈ Q : |x|p ≤ 1}.
Theorem 11.7 (Weak approximation). Let S be a finite set of primes p ≤ ∞, and for each p ∈ S let xp ∈ Qp be given. Then for every ε > 0 there exists x ∈ Q such that
|x − xp|p < ε
for all p ∈ S. Equivalently, the image of Q in Π_{p∈S} Qp is dense under the product topology.
Proof. If S has cardinality 1 we can apply Theorem 11.6, so we assume S contains at least 2 primes. For any particular prime p ∈ S, we claim that there is a yp ∈ Q such that |yp|p > 1 and |yp|q < 1 for q ∈ S − {p}. Indeed, let P be the product of the finite primes in S, and for each p < ∞ choose r ∈ Z>0 so that p^{−r}P < 1. Then define
yp = P if p = ∞, and yp = p^{−r}P otherwise.
We now note that for any q ∈ S,
lim_{n→∞} |yp^n|q = ∞ if q = p, and lim_{n→∞} |yp^n|q = 0 if q ≠ p.
It follows that for each q ∈ S,
lim_{n→∞} yp^n/(1 + yp^n) = 1 with respect to | |q for q = p, and 0 with respect to | |q for q ≠ p,
since lim_{n→∞} |1 − yp^n/(1 + yp^n)|p = lim_{n→∞} |1/(1 + yp^n)|p = 0 and lim_{n→∞} |yp^n/(1 + yp^n)|q = 0 for q ≠ p. For each n ∈ Z>0 define
zn = Σ_{p∈S} xp · yp^n/(1 + yp^n).
Then lim_{n→∞} zn = xp with respect to | |p for each p ∈ S. So for any ε > 0 there is an n for which x = zn satisfies |x − xp|p < ε for all p ∈ S.
Theorem 11.8 (Strong approximation). Let S be a finite set of primes p < ∞, and for each p ∈ S let xp ∈ Zp be given. Then for every ε > 0 there exists x ∈ Z such that
|x − xp|p < ε
for all p ∈ S. Equivalently, the image of Z in Π_{p∈S} Zp is dense under the product topology.
Proof. Fix ε > 0. By Theorem 11.6, for each xp we can pick yp ∈ Z≥0 so that |yp − xp|p < ε. Let n be a positive integer such that p^{−n} < ε for all p ∈ S. By the Chinese remainder theorem, there exists x ∈ Z such that x ≡ yp mod p^n for all p ∈ S, and for this x we have |x − xp|p < ε for all p ∈ S.
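The proof is constructive; a sketch of the CRT step in Python (the helper and the example residues are our own):

```python
from math import prod

def crt(residues, moduli):
    """x with x = r_i (mod m_i) for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)  # pow(., -1, m) is the modular inverse
    return x % M

# Approximate x_p = y_p to within p^{-4} for S = {2, 3, 5}:
S, n = (2, 3, 5), 4
y = {2: 13, 3: 5, 5: 7}
x = crt([y[p] for p in S], [p ** n for p in S])
for p in S:
    assert (x - y[p]) % p ** n == 0  # so |x - y_p|_p <= p^{-4}
print(x)
```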
Remark 11.9. In more general settings it is natural to consider the infinite product of all the rings of p-adic integers
Ẑ = Π_{p<∞} Zp.
Recall that for infinite products, the product topology is defined using a basis of open sets that consists of sequences (Up), where each Up is an open subset of Zp, and for all but finitely many p we have Up = Zp. It follows from Theorem 11.8 that the image of Z in Ẑ is dense.
There is another way to define Ẑ, which is to consider the inverse system of rings (Z/nZ), where n ranges over all positive integers and we have reduction maps from Z/mZ to Z/nZ whenever n | m (note that we now have an infinite acyclic graph of maps, not just a linear chain). The inverse limit
Ẑ = lim← Z/nZ
is called the profinite completion of Z. One can show that these two definitions of Ẑ are canonically isomorphic. So a more pithy statement of Theorem 11.8 is that Z is dense in its profinite completion (this statement applies to profinite completions in general).
Remark 11.10. Note the difference between weak and strong approximation. With weak approximation we obtain a rational number x that is p-adically close to xp for each p in a finite set S, but we have no control on |x|p for p ∉ S. With strong approximation we obtain a rational number (in fact an integer) x that is p-adically close to xp for each p ∈ S and also satisfies |x|p ≤ 1 for all p ∉ S, except the prime p = ∞; in order to apply the CRT we may need to make |x|∞ very large. More generally, we could allow ∞ ∈ S if we grant ourselves the freedom to make |x|p0 large for one prime p0 ∉ S; in this case x would be a rational number, not an integer, but its denominator would be divisible by no primes other than p0, so that x ∈ Zp for all p ≠ p0. This is characteristic of strong approximation theorems: we obtain an element whose absolute value is bounded at all but one prime.
The following lemma follows from the strong approximation theorem and Dirichlet’s
theorem on primes in arithmetic progressions: for any relatively prime integers a and b there
are infinitely many primes congruent to a mod b.
Lemma 11.11. Let S be a finite set of primes p ≤ ∞, and for each p ∈ S let xp ∈ Qp× be given. Then there exists an x ∈ Q such that
(i) x ∈ xpQp×² for each p ∈ S;
(ii) |x|p = 1 for all but at most one finite prime p0 ∉ S.
Proof. Let S0 = S − {∞}, and define the rational number
y = ± Π_{p∈S0} p^{vp(xp)},
where the sign of y is negative if ∞ ∈ S and x∞ < 0, and positive otherwise. Then |y|p = |xp|p for all p ∈ S0, and it follows that for each p ∈ S0 we have y = up xp for some up ∈ Zp×. By the strong approximation theorem there exists an integer z ≡ up mod p^{ep} for all p ∈ S0, where ep = 1 for odd p and ep = 3 for p = 2. It follows that z ∈ upQp×² for all p ∈ S0, since the square class of up depends only on its reduction mod p^{ep}.
The integers z and m = Π_{p∈S0} p^{ep} are relatively prime, so it follows from Dirichlet's theorem that there are infinitely many primes congruent to z mod m. Let p0 be the least such prime. Then p0 ∈ zQp×² for all p ∈ S0, and x = p0 y satisfies both (i) and (ii).
11.3 Proof of the Hasse-Minkowski theorem
Before proving the Hasse-Minkowski theorem for Q we make one final remark. The definition
of the Hilbert symbol we gave in the last lecture makes sense over any field, in particular Q,
and the proofs of Lemma 10.2 and Corollary 10.3 still apply. In the proof below we use
(a, b) to denote the Hilbert symbol of a, b ∈ Q×.
Theorem 11.12 (Hasse-Minkowski). A quadratic form over Q represents 0 if and only if
it represents 0 over every completion of Q.
Proof. The forward implication is clear; we only need to prove the reverse implication. So let f be a quadratic form over Q that represents 0 over every completion of Q. We may assume without loss of generality that f is a diagonal form a1x1² + · · · + anxn², which we denote ⟨a1, . . . , an⟩. We write ⟨a1, . . . , an⟩p to denote the same form over Qp. If any ai = 0, then f clearly represents 0 over Q (set xi = 1 and xj = 0 for j ≠ i), so we assume f is nondegenerate and proceed by induction on its dimension n.
Case n = 1: The theorem holds trivially (f cannot represent 0 over any Qp).
Case n = 2: The form ⟨a, b⟩p represents 0 if and only if −ab is square in Qp. Thus vp(−ab) ≡ 0 mod 2 for all p < ∞ and −ab > 0. It follows that −ab is square in Q, and therefore ⟨a, b⟩ represents 0.
Case n = 3: Let f(x, y, z) = z² − ax² − by², where a and b are nonzero square-free integers with |a| ≤ |b|. We know (a, b)p = 1 for all p ≤ ∞ and wish to show (a, b) = 1. We proceed by induction on m = |a| + |b|. The base case m = 2 has a = ±1 and b = ±1, in which case (a, b)∞ = 1 implies that either a or b is 1, and therefore (a, b) = 1.
We now suppose m ≥ 3, and that the result has been proven for all smaller m. For each prime p | b there is a primitive solution (x0, y0, z0) ∈ Zp³ to z² − ax² − by² = 0. We must have p | (z0² − ax0²), since p | b, but we cannot have p | x0, since then we would have p | z0, contradicting primitivity. So x0 ∈ Zp× and a = (z0/x0)² is a square modulo p. This holds for every prime p | b, and b is square-free, so a is a square modulo b.
It follows that a + bb′ = t² for some t, b′ ∈ Z with |t| ≤ |b/2|. This implies (a, bb′) = 1, since bb′ = t² − a is the norm of t + √a in Q(√a). Therefore
(a, b) = (a, b)(a, bb′) = (a, b²b′) = (a, b′).
We also have (a, bb′)p = 1, and therefore (a, b′)p = (a, b)p = 1, for all p ≤ ∞. But
|b′| = |(t² − a)/b| ≤ |t²/b| + |a/b| ≤ |b|/4 + 1 < |b|,
so |a| + |b′| < m, and the inductive hypothesis implies (a, b′) = 1. Thus (a, b) = 1, as desired.
Case n = 4: Let f = ⟨a1, a2, a3, a4⟩ and let S consist of the primes p | 2a1a2a3a4 and ∞. Then ai ∈ Zp× for all p ∉ S. For each p ∈ S there exists tp ∈ Qp× such that ⟨a1, a2⟩p represents tp and ⟨a3, a4⟩p represents −tp (we can assume tp ≠ 0: if 0 is represented by both forms, so is every element of Qp). By Lemma 11.11, there is a rational number t and a prime p0 ∉ S such that t ∈ tpQp×² for all p ∈ S and |t|p = 1 for all p ∉ S ∪ {p0}.
The forms ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p represent 0 for all p ∉ S ∪ {p0}, because all such p are odd and ai, ±t ∈ Zp×, so (a1, a2)p = 1 = (t, −a1a2)p and (a3, a4)p = 1 = (−t, −a3a4)p, and we may apply Corollary 11.4. Since t ∈ tpQp×² for all p ∈ S, the forms ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p also represent 0 for all p ∈ S. Thus ⟨a1, a2, −t⟩p and ⟨a3, a4, t⟩p represent 0 for all p ≠ p0, and by Corollary 11.5, also for p = p0. By the inductive hypothesis ⟨a1, a2, −t⟩ and ⟨a3, a4, t⟩ both represent 0, therefore ⟨a1, a2, a3, a4⟩ represents 0.
Case n ≥ 5: Let f = ⟨a1, . . . , an⟩. Let S be the set of primes for which ⟨a3, . . . , an⟩p does not represent 0. The set S is finite, by Corollary 11.2. If S is empty then ⟨a3, . . . , an⟩, and therefore f, represents 0 by the inductive hypothesis, so we assume S is not empty. For each p ∈ S pick tp ∈ Qp× represented by ⟨a1, a2⟩, say a1xp² + a2yp² = tp, such that ⟨a3, . . . , an⟩p represents −tp (such a tp exists since f represents 0 over Qp and, as above, we can always pick tp ≠ 0).
By the weak approximation theorem there exist x, y ∈ Q that are simultaneously close enough to all the xp, yp ∈ Qp so that t = a1x² + a2y² is close enough to all the tp to guarantee that t ∈ tpQp×² for all p ∈ S (for p < ∞ the square class only depends on at most the first three nonzero p-adic digits, and over R = Q∞ we can ensure that x and y have the same signs as x∞ and y∞).¹ It follows that ⟨t, a3, . . . , an⟩p represents 0 for all p ∈ S, and since ⟨a3, . . . , an⟩p represents 0 for all p ∉ S, so does ⟨t, a3, . . . , an⟩p. Thus ⟨t, a3, . . . , an⟩p represents 0 for all p, and by the inductive hypothesis, ⟨t, a3, . . . , an⟩ represents 0. Therefore ⟨a3, . . . , an⟩ represents −t = −a1x² − a2y², hence ⟨a1, . . . , an⟩ represents 0.
¹Equivalently, the set of squares Qp×² is an open subset of Qp×, hence so is every square class tpQp×².
6.241 Dynamic Systems and Control
Lecture 2: Least Square Estimation
Readings: DDV, Chapter 2
Emilio Frazzoli
Aeronautics and Astronautics
Massachusetts Institute of Technology
February 7, 2011
E. Frazzoli (MIT)
Lecture 2: Least Squares Estimation
Feb 7, 2011
1 / 9
Outline
1 Least Squares Estimation
Least Squares Estimation
Consider a system of m equations in n unknowns, with m > n, of the form
y = Ax.
Assume that the system is inconsistent: there are more equations than unknowns, and the equations are not linear combinations of one another. In these conditions, there is no x such that y − Ax = 0. However, one can write e = y − Ax and find the x that minimizes ‖e‖.
In particular, the problem
min_x ‖e‖² = min_x ‖y − Ax‖²
is a least squares problem. The optimal x is the least squares estimate.
Computing the Least-Square Estimate
The set M := {z ∈ Rm : z = Ax, x ∈ Rn} is a subspace of Rm, called the
range of A, R(A), i.e., the set of all vectors that can be obtained by linear
combinations of the columns of A.
Recall the projection theorem. Now we are looking for the element of M that is "closest" to y in terms of the 2-norm. We know the solution is such that
e = (y − Ax̂) ⊥ R(A).
In particular, if ai is the i-th column of A, it is also the case that
(y − Ax̂) ⊥ R(A) ⇔ ai′(y − Ax̂) = 0, i = 1, . . . , n ⇔ A′(y − Ax̂) = 0 ⇔ A′Ax̂ = A′y.
A′A is an n × n matrix; is it invertible? If it were, then at this point it is easy to recover the least-squares solution as
x̂ = (A′A)⁻¹A′y.
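A numerical sketch of the normal equations in NumPy (the data is random and the names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))  # m = 8 equations, n = 3 unknowns
y = rng.standard_normal(8)

# Solve the normal equations A'A x = A'y.
x_hat = np.linalg.solve(A.T @ A, A.T @ y)

# The residual is orthogonal to the range of A ...
e = y - A @ x_hat
assert np.allclose(A.T @ e, 0.0)

# ... and the answer matches the library least-squares solver.
assert np.allclose(x_hat, np.linalg.lstsq(A, y, rcond=None)[0])
print(x_hat)
```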
The Gram product
Let us take a more abstract look at this problem, e.g., to address the case in which the data vector y is infinite-dimensional.
Given an array of nA vectors A = [a1| . . . |anA] and an array of nB vectors B = [b1| . . . |bnB], both from an inner product space V, define the Gram product ⟨A, B⟩ as the nA × nB matrix whose (i, j) entry is ⟨ai, bj⟩.
For the usual Euclidean inner product in an m-dimensional space, ⟨A, B⟩ = A′B.
Symmetry and linearity of the inner product imply symmetry and linearity of the Gram product.
The Least Squares Estimation Problem
Consider again the problem of computing
min_{x∈Rⁿ} ‖y − Ax‖ = min_{ŷ∈R(A)} ‖y − ŷ‖,
where e = y − Ax. Here y can be an infinite-dimensional vector, as long as n is finite.
We assume that the columns of A = [a1, a2, . . . , an] are independent.
Lemma (Gram matrix)
The columns of a matrix A are independent ⇔ ⟨A, A⟩ is invertible.
Proof— If the columns are dependent, then there is η ≠ 0 such that Σj aj ηj = 0, i.e., Aη = 0. But then Σj ⟨ai, aj⟩ηj = 0 by the linearity of the inner product. That is, ⟨A, A⟩η = 0, and hence ⟨A, A⟩ is not invertible.
Conversely, if ⟨A, A⟩ is not invertible, then ⟨A, A⟩η = 0 for some η ≠ 0. In other words η′⟨A, A⟩η = 0, and hence Aη = 0.
The Projection theorem and least squares estimation 1
y has a unique decomposition y = y1 + y2, where y1 ∈ R(A) and y2 ∈ R⊥(A).
To find this decomposition, let y1 = Aα for some α ∈ Rⁿ. Then ensure that y2 = y − y1 ∈ R⊥(A). For this to be true,
⟨ai, y − Aα⟩ = 0, i = 1, . . . , n,
i.e., ⟨A, y − Aα⟩ = 0. Rearranging, we get
⟨A, A⟩α = ⟨A, y⟩;
if the columns of A are independent,
α = ⟨A, A⟩⁻¹⟨A, y⟩.
The Projection theorem and least squares estimation 2
Decompose e = e1 + e2 similarly (e1 ∈ R(A), and e2 ∈ R⊥(A)). Note that ‖e‖² = ‖e1‖² + ‖e2‖².
Rewrite e = y − Ax as
e1 + e2 = y1 + y2 − Ax, i.e., e2 − y2 = y1 − e1 − Ax.
Each side must be 0, since the two sides lie in orthogonal subspaces!
e2 = y2: we can't do anything about it.
e1 = y1 − Ax = A(α − x): minimized by choosing x = α. In other words,
x̂ = ⟨A, A⟩⁻¹⟨A, y⟩.
Examples
If y, e ∈ Rᵐ and it is desired to minimize ‖e‖² = e′e = Σ_{i=1}^{m} |ei|², then
x̂ = (A′A)⁻¹A′y.
(If the columns of A are mutually orthogonal, A′A is diagonal, and inversion is easy.)
If y, e ∈ Rᵐ and it is desired to minimize e′Se, where S is a Hermitian, positive-definite matrix, then
x̂ = (A′SA)⁻¹A′Sy.
Note that if S is diagonal, then e′Se = Σ_{i=1}^{m} sii|ei|², i.e., we are minimizing a weighted least squares criterion. A large sii penalizes the i-th component of the error more relative to the others.
In a general stochastic setting, the weight matrix S should be related to the noise covariance, i.e.,
S = (E[ee′])⁻¹.
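A sketch of the weighted case in NumPy (random data; it also checks the equivalence with ordinary least squares on data rescaled by S^{1/2}):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 2))
y = rng.standard_normal(10)
S = np.diag(rng.uniform(0.5, 2.0, size=10))  # diagonal positive weights s_ii

# Weighted least squares: minimize e'Se  =>  x_hat = (A'SA)^{-1} A'S y.
x_w = np.linalg.solve(A.T @ S @ A, A.T @ S @ y)

# Equivalent to ordinary least squares on sqrt(S) A, sqrt(S) y.
W = np.sqrt(S)
x_ols = np.linalg.lstsq(W @ A, W @ y, rcond=None)[0]
assert np.allclose(x_w, x_ols)
print(x_w)
```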
MIT OpenCourseWare
http://ocw.mit.edu
6.241J / 16.338J Dynamic Systems and Control
Spring 2011
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms . | https://ocw.mit.edu/courses/6-241j-dynamic-systems-and-control-spring-2011/076ac20d5b8bda672a8bcee8d3e95438_MIT6_241JS11_lec02.pdf |
IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 46, NO. 12, DECEMBER 1998
1619
Upper Bounds on the Bit-Error Rate of
Optimum Combining in Wireless Systems
Jack H. Winters, Fellow, IEEE, and Jack Salz, Member, IEEE
Abstract—This paper presents upper bounds on the bit-error
rate (BER) of optimum combining in wireless systems with
multiple cochannel interferers in a Rayleigh fading environment.
We present closed-form expressions for the upper bound on
the bit-error rate with optimum combining, for any number of
antennas and interferers, with coherent detection of BPSK and
QAM signals, and differential detection of DPSK. We also present
bounds on the performance gain of optimum combining over
maximal ratio combining. These bounds are asymptotically tight
with decreasing BER, and results show that the asymptotic gain
is within 2 dB of the gain as determined by computer simulation
for a variety of cases at a 10−2 BER. The closed-form expressions
for the bound permit rapid calculation of the improvement with
optimum combining for any number of interferers and antennas,
as compared with the CPU hours previously required by Monte
Carlo simulation. Thus these bounds allow calculation of the
performance of optimum combining under a variety of conditions
where it was not possible previously, including analysis of the
outage probability with shadow fading and the combined effect
of adaptive arrays and dynamic channel assignment in mobile
radio systems.
Index Terms— Bit-error rate, optimum combining, Rayleigh
fading, smart antennas.
I. INTRODUCTION
ANTENNA arrays with optimum combining combat multi-
path fading of the desired signal and suppress interfering
signals, thereby increasing both the performance and capacity
of wireless systems. With optimum combining, the received
signals are weighted and combined to maximize the signal-to-
interference-plus-noise ratio (SINR) at the receiver. Optimum
combining yields superior performance over maximal ratio
combining, whereby the signals are combined to maximize
signal-to-noise ratio, in interference-limited systems. However,
while with maximal ratio combining the bit-error rate can
be expressed in closed form [1], with optimum combining
a closed-form expression is available only with one interferer
[2], [3]. With multiple interferers, Monte Carlo simulation has
been used [3]–[5], but this requires on the order of CPU hours
even with just a few interferers. Thus the improvement of
optimum combining has only been studied for a few simple
Paper approved by N. C. Beaulieu, the Editor for Wireless Communication
Theory of the IEEE Communications Society. Manuscript received September
21, 1993; revised November 28, 1996. This paper was presented in part at
the 1994 IEEE Vehicular Technology Conference, Stockholm, Sweden, June
8–10, 1994.
J. H. Winters is with AT&T Labs–Research, Red Bank, NJ 07701 USA.
J. Salz, retired, was with AT&T Labs–Research, Crawford Hill Laboratory,
Holmdel, NJ 07733 USA.
Publisher Item Identifier S 0090-6778(98)09388-X.
Fig. 1. Block diagram of an M-element adaptive array.
cases, and detailed comparisons (e.g., in terms of outage
probability) have not been done.
In [6], we showed that, with M antenna elements, the received signals
can be combined to eliminate L interferers in the output signal while
obtaining an (M − L)-fold diversity improvement, i.e., the performance of
maximal ratio combining with M − L antennas and no interference. However,
this "zero-forcing" solution gives far lower output SINR than optimum
combining in most cases of interest and cannot be used when L ≥ M.
In this paper we present a closed-form expression for the up
per bound on the bit-error rate (BER) with optimum combining
in wireless systems. We assume flat fading across the channel
and independent Rayleigh fading of the desired and interfering
signals at each antenna.1 Equations are presented for the
upper bound on the BER for coherent detection of quadrature
amplitude modulated (QAM) and binary phase-shift-keyed
(BPSK) signals, and for differential detection of differential
phase-shift-keyed (DPSK) signals. From these equations, a
lower bound on the improvement of optimum combining over
maximal ratio combining is derived.
In Section II we derive the upper bound on the BER. In
Section III we compare the upper bound to Monte Carlo
simulation results. A summary and conclusions are presented
in Section IV.
II. UPPER BOUND DERIVATION
Fig. 1 shows a block diagram of an M-element adaptive array. The
complex baseband signal received by the ith antenna element in the kth
symbol interval is multiplied by a controllable complex weight, and the
weighted signals are summed to form the array output signal.
1 As shown in [7], the gain of optimum combining is not significantly
degraded with fading correlation up to about 0.5. Thus our bounds, based on
independent fading, are reasonably accurate and useful even in environments
with fading correlation up to this level.
0090–6778/98$10.00 © 1998 IEEE
With optimum combining, the weights are chosen to maximize the output
SINR, which also minimizes the mean-square error (MSE), which is given
by [8]

(1)

where R is the received interference-plus-noise correlation matrix given by

(2)

in which I is the identity matrix, σ2 is the noise power, and ud and uk are
the desired and kth interfering signal propagation vectors, respectively; the
superscript † denotes complex conjugate transpose. Here we have assumed
the same average received power for the desired signal at each antenna (that
is, microdiversity rather than macrodiversity) and that the noise and
interfering signals are uncorrelated, and, without loss of generality, have
normalized the received signal power, averaged over the fading, to unity.
Note that the MSE varies at the fading rate.

Also, note that with only noise at the receiver, R = σ2I, where σ2 is the
variance of the noise normalized to the received desired signal power, and
from (4) and (5) the bound becomes

(6)

where γ is the received SINR, while the actual BER is that given in [1].
Thus even without interference, the bound differs from the actual BER, and
this difference increases as the received SINR decreases.
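As a numerical illustration of the kind of fading average underlying these expressions, the sketch below averages the conditional DPSK error rate (1/2)e^(−γ) over Rayleigh fading with N-branch maximal ratio combining and checks it against the known closed form (1/2)(1 + Γ)^(−N); the branch count and mean SNR are illustrative values, not parameters from the paper.

```python
import numpy as np

# Monte Carlo average of the conditional DPSK BER (1/2)exp(-gamma) over
# Rayleigh fading with N-branch maximal ratio combining.
# N and the mean branch SNR are illustrative choices, not values from the paper.
rng = np.random.default_rng(2)
N, mean_snr, trials = 3, 10.0, 200_000

# MRC output SINR = sum of N i.i.d. exponentially distributed branch SNRs
gamma = rng.exponential(scale=mean_snr, size=(trials, N)).sum(axis=1)
ber_mc = np.mean(0.5 * np.exp(-gamma))

# closed form: E[(1/2)exp(-gamma)] = (1/2) (1 + mean_snr)^(-N)
ber_exact = 0.5 * (1.0 + mean_snr) ** (-N)
assert abs(ber_mc - ber_exact) < 0.2 * ber_exact
```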
Let us consider the case of interference only. In this case R, which is
given by (2), may also be expressed as

(7)

where rij is the (i, j)th element of R and uki is the ith element of uk; the
sum is extended over all permutations of the indices, with the "+" sign
assigned for even permutations (i.e., an even number of swaps of the indices
in the permutation) and the "−" sign for odd permutations. Now

(8)

where pk is the average power of the kth interferer normalized to the desired
signal power.

For coherent detection of BPSK or QAM, the BER is bounded by [9]

(3)

where the expected value is taken over the fading parameters of the desired
and interfering signals, and σs2 is the variance of the BPSK or QAM symbol
levels (taking the appropriate values for BPSK and quaternary phase-shift
keying (QPSK), respectively). For differential detection of DPSK, assuming
Gaussian noise and interference,2 the BER is given by [1]

(4)

Thus the BER expressions for the two cases differ only by a constant, and
we will now consider their common term. As shown in the Appendix, this
term can be upper-bounded by

(5)

where |R| denotes the determinant of R, and λi is the ith eigenvalue of R.
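The expansion described for (7) is the Leibniz permutation formula for a determinant. As a sanity sketch (on an arbitrary small matrix, not the paper's correlation matrix), it can be implemented directly and checked against a library routine:

```python
import numpy as np
from itertools import permutations

# Sum over all permutations, with "+" for even and "-" for odd permutations,
# as in the expansion described for (7). This is the Leibniz determinant formula.
def det_by_permutations(R):
    n = R.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # sign: count inversions; an even count gives +1, an odd count -1
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = -1.0 if inversions % 2 else 1.0
        term = 1.0
        for i in range(n):
            term *= R[i, perm[i]]
        total += sign * term
    return total

R = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
assert np.isclose(det_by_permutations(R), np.linalg.det(R))
```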
Since (5) is the key inequality in our bound (and is the only inequality
we use in determining the bound for differential detection of DPSK), let us
examine its accuracy. The bound is tight if the λi's are large and, since the
λi's are proportional to the interference signal powers, the bound is tight
for large received SINR, i.e., low BER's. However, although the true BER
never exceeds 1/2, the BER as given by the bound may exceed 1/2 for small
received SINR. Thus with small received SINR, occasionally BER's greater
than 1/2 may be averaged into the average BER, reducing the tightness of
the bound.
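The remark that the eigenvalues scale with the interference powers can be illustrated numerically. The sketch below assumes the standard form R = σ2I + Σk pk uk uk† with random propagation vectors; the array size, interferer count, and powers are illustrative choices, not the paper's setup.

```python
import numpy as np

# Build R = sigma^2 I + sum_k p_k u_k u_k^H for random propagation vectors
# and watch the dominant eigenvalues grow with the interference powers.
# All values here are illustrative, not taken from the paper.
rng = np.random.default_rng(3)
M, L, sigma2 = 4, 2, 1.0   # antennas, interferers, noise power

U = (rng.normal(size=(M, L)) + 1j * rng.normal(size=(M, L))) / np.sqrt(2)

for scale in (1.0, 10.0, 100.0):
    powers = scale * np.ones(L)
    R = sigma2 * np.eye(M) + (U * powers) @ U.conj().T   # Hermitian PSD + noise
    lam = np.linalg.eigvalsh(R)                          # ascending eigenvalues
    print(scale, np.round(lam, 2))   # the top-L eigenvalues track the powers
```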
(9)
(10)
and
, e.g.,
set
where the sum is over all sets of positive integers
that exist such that
For example, when
that
, there are 6 sets of
, with
.
such
(see Table I). All sets are of the form
, except for the
.
th set
is obtained by summing the
can
for
for
.
antennas. Note that
with
coefficients (
be determined as shown below. | https://ocw.mit.edu/courses/18-996-random-matrix-theory-and-its-applications-spring-2004/0791574bd66d763e89d3c6572616d996_wintsal_tc.pdf |
for
.
antennas. Note that
with
coefficients (
be determined as shown below.
’s) for similar terms in
is an integer coefficient corresponding to the
Since
, and
(10) can also be expressed as
when
,
2 Since the stronger the interference, the more that optimum combining
suppresses it, with the Gaussian assumption we overestimate the probability
of strong interference. Note that this is consistent with the derivation of an
upper bound on the BER.
(11)
TABLE I
VALUES OF THE COEFFICIENTS
To determine the coefficients, first note that if all interferers have equal
power, then (11) becomes

(12)

where the coefficients of (12) and the coefficients defined above are closely
related; from [6], they are the coefficients of the corresponding polynomial
expansion. This result is not only useful when all interferers have equal
power, but also serves as a consistency check on our calculated values of
the coefficients.

The values of the coefficients were generated using a computer program
to examine every permutation in (7) for a given number of antennas and
determine the number of each type of term. Tables I and II list these values
for the cases considered. Note that only a limited number of terms exist for
small numbers of antennas; the terms for higher numbers of antennas can
also be easily calculated.