IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 46, NO. 12, DECEMBER 1998

TABLE II
VALUES OF … FOR … AND …

… calculated to …. Values for …, …, and … terms also exist for …, and values for higher … can also be easily calculated. However, since the amount of computer time to generate the values of … increases exponentially with …, our program could only generate these values in a reasonable amount of computer time for up to … (where a hundred CPU hours on a SPARCstation 20 would be required).

From (3), the upper bound on the BER with coherent detection of BPSK or QAM is now given by

(13)

and from (4), the upper bound on the BER with differential detection of DPSK is given by

(14)

For the case of noise with … interferers, consider the noise as an infinite number of weak interferers with total power equal to the noise. That is, let … and let …. Then,

(15)

(16)

and …. Therefore, with noise, the BER bound is the same as in (13) and (14), but with … including the noise. In this case, if we define the received desired signal-to-noise ratio as … and the …th interferer signal-to-noise ratio as …, then (14) becomes [similarly for (13)]

(17)

Since … is the bound with maximal ratio combining, the term in the brackets is the improvement of optimum combining over maximal ratio combining based on the BER bound.
Defining the gain of optimum combining as the reduction in the required … for a given BER, from (17), this gain in decibels is given by

Gain (dB) = …    (18)

This gain is therefore independent of the desired signal power (because the bound is asymptotically tight as …). However, this is the gain of the BER bound with optimum combining over the BER bound with maximal ratio combining. Since the required … for a given BER with maximal ratio combining is less than the bound, the true gain may differ from (18), and to obtain a bound on the gain, the gain in (18) must be reduced accordingly. For example, with differential detection of DPSK, to obtain a bound the gain given in (18) is reduced by the factor …. Note that as …, this factor reduces to one and the gain approaches (18). Thus we will refer to (18) as the asymptotic gain.
III. COMPARISON TO EXACT THEORY AND SIMULATION
In this section, we compare the bound to theoretical results for … and simulation results for ….
Fig. 2 compares theoretical results (from [1]–[3]) for the gain to the asymptotic gain (18) versus BER with coherent detection of BPSK. Results are generated for …, …, and … = 3 and 10 dB. In all cases the gain monotonically decreases to the asymptotic gain as the BER decreases. The gain approaches the asymptotic gain more slowly with decreasing BER for larger …, and also, at low BER's, the accuracy of the asymptotic gain decreases with higher …. Thus the accuracy of the asymptotic gain decreases as the … required for a given BER with optimum combining decreases, as predicted by the approximation in Section II.
Fig. 2. Gain versus BER for coherent detection of BPSK—comparison of analytical results to the asymptotic gain.

Fig. 3. Gain with … for 1, 2, and 6 equal-power interferers versus signal-to-noise ratio of each interferer—comparison of analytical and Monte Carlo simulation results with coherent detection of BPSK [5] to the asymptotic gain.
Fig. 3 compares theoretical and Monte Carlo simulation [5] results for the gain to the asymptotic gain with … and …, …, and 6. Results are plotted versus …, where all interferers have equal power, for coherent detection of BPSK at a … BER.³ In all cases, the asymptotic gain has the same shape as the gain and is within 1.7 dB for …, 1.0 dB for …, and 0.4 dB for …. Since optimum combining gives the largest gain when the interference power is concentrated in one interferer and the least gain when the interference power is equally divided among many interferers, … represent the best and worst cases for the gain in an interference-limited cellular system. Thus from the results in Fig. 3, we would expect the asymptotic gain to be within 0.4–1.7 dB of the actual gain for all cases in cellular systems with … and ….

³ This BER was used because the results in [5] were obtained for this BER. As shown in [5], the gain does not change significantly for BER's between … and …, the range of interest in most mobile radio systems.
WINTERS AND SALZ: UPPER BOUNDS ON THE BER OF OPTIMUM COMBINING IN WIRELESS SYSTEMS
of cases at a 10 BER. These cases include interference scenarios that cover the range of worst to best cases for the gain of optimum combining in cellular systems with ….
The bound is most accurate with differential detection of
DPSK and high SINR, corresponding to low BER and a
few antennas. Because of the 2-dB accuracy, the bound is
most useful where the optimum combining improvement is
the largest, which is the case of most interest. The closed-form expression for the bound permits rapid calculation of
the improvement with optimum combining for any number
of interferers and antennas, as compared with the CPU hours
previously required by Monte Carlo simulation. These bounds
allow calculation of the performance of optimum combining
under a variety of conditions where it was not possible
previously, including analysis of the outage probability with
shadow fading and the combined effect of adaptive arrays and
dynamic channel assignment in mobile radio systems.
APPENDIX

Diagonalizing … by a unitary transformation …, we obtain

(19)

where … denotes an … matrix with nonzero elements only on the diagonal, or … and …. Let …. Then … and

(20)

(21)

(22)

(23)

(24)
Since, with independent Rayleigh fading at each antenna, the elements of … are independent and identically distributed (i.i.d.) complex Gaussian random variables, the elements of … are also i.i.d. complex Gaussian random variables with the same mean and variance. Furthermore, the …'s are independent of the …'s. Thus we can average over the desired and interfering signal vectors separately, i.e.,

(25)
Fig. 4. Gain versus … with two and six equal-power interferers—comparison of Monte Carlo simulation results with coherent detection of BPSK [3] to the asymptotic gain.
Now, consider the lower bound on the gain obtained from the BER bound (17), as compared to the asymptotic gain. Without interference, differential detection of DPSK with maximal ratio combining requires … = 13.3 dB (theoretically [10]) for a … BER, while the BER bound (17) gives 13.5 dB. Thus the lower bound on the gain (from (17)) at a 10 BER is 0.2 dB less than the asymptotic gain for any interference scenario—in particular, the lower bound on the gain is 0.2 dB less than the results shown in
Fig. 3. Similarly, coherent detection of BPSK with maximal ratio combining requires … = 11.1 dB for a 10 BER, while the BER bound (13) gives 15.0 dB. Thus the bound is most accurate with differential detection of DPSK and low BER's.

Fig. 4 compares Monte Carlo simulation results [3] for the gain to the asymptotic gain for …. Results are plotted versus … with … = 3 dB for all interferers and coherent detection of BPSK at a 10 BER. Again the asymptotic gain has the same shape as the simulation results. The cases include both many more interferers than antennas and many more antennas than interferers, but in all cases the asymptotic gain is within 1.8 dB of simulation results.
IV. CONCLUSIONS
In this paper we have presented upper bounds on the bit-error rate (BER) of optimum combining in wireless systems with multiple cochannel interferers in a Rayleigh fading environment. We presented closed-form expressions for the upper
bound on the bit-error rate with optimum combining, for any
number of antennas and interferers, with coherent detection of
BPSK and QAM signals, and differential detection of DPSK.
We also presented bounds on the performance gain of optimum
combining over maximal ratio combining and showed that
these bounds are asymptotically tight with decreasing BER.
Results showed that the asymptotic gain is within 2 dB of
the gain as determined by computer simulation for a variety
Since the …'s are complex Gaussian random variables with zero mean and unit variance, … and

(26)

(27)

Since the …'s are nonnegative, … and, therefore,

(28)

(29)

where … denotes the determinant of ….

REFERENCES
[7] J. Salz and J. H. Winters, “Effect of fading correlation on adaptive arrays
in digital wireless communications,” IEEE Trans. Veh. Technol., vol. 43,
pp. 1049–1057, Nov. 1994.
[8] R. A. Monzingo and T. W. Miller, Introduction to Adaptive Arrays.
New York: Wiley, 1980.
[9] G. J. Foschini and J. Salz, “Digital communications over fading radio
channels,” Bell Syst. Tech. J., vol. 62, pp. 429–456, Feb. 1983.
[10] J. H. Winters, “Switched diversity with feedback for DPSK mobile radio
systems,” IEEE Trans. Veh. Technol., vol. VT-32, pp. 134–150, Feb.
1983.
Jack H. Winters (S’77–M’81–SM’88–F’96) received the B.S.E.E. degree
from the University of Cincinnati, Cincinnati, OH, in 1977 and the M.S. and
the Ph.D. degrees in electrical engineering from The Ohio State University,
Columbus, in 1978 and 1981, respectively.
He has been with AT&T Bell Laboratories, now AT&T Labs–Research,
since 1981, where he is in the Wireless Systems Research Department. He has
studied signal processing techniques for increasing the capacity and reducing
signal distortion in fiber optic, mobile radio, and indoor radio systems, and
is currently studying adaptive arrays and equalization for indoor and mobile
radio.
Dr. Winters is a member of Sigma Xi.
[1] W. C. Jakes Jr. et al., Microwave Mobile Communications. New York:
Wiley, 1974.
[2] V. M. Bogachev and I. G. Kiselev, “Optimum combining of signals in
space-diversity reception,” Telecommun. Radio Eng., vol. 34/35, no. 10,
pp. 83, Oct. 1980.
[3] J. H. Winters, “Optimum combining in digital mobile radio with
cochannel interference,” IEEE J. Select. Areas Commun., vol. SAC-2,
no. 4, July 1984.
[4] ——, “Optimum combining for indoor radio systems with multiple users,” IEEE Trans. Commun., vol. COM-35, no. 11, Nov. 1987.
[5] ——, “Signal acquisition and tracking with adaptive arrays in the digital mobile radio system IS-54 with flat fading,” IEEE Trans. Veh. Technol., Nov. 1993.
[6] J. H. Winters, J. Salz, and R. D. Gitlin, “The impact of antenna
diversity on the capacity of wireless communication systems,” IEEE
Trans. Commun., Apr. 1994.
Jack Salz (S’59–M’89) received the B.S.E.E. degree in 1955, the M.S.E.
degree in 1956, and the Ph.D. degree in 1961, all in electrical engineering,
from the University of Florida, Gainesville, FL.
He joined AT&T Bell Laboratories in 1956, where he first worked on
the electronic switching system. From 1968 to 1981, he supervised a group
engaged in theoretical work in data communications. During the academic year
1967–1968, he was on leave from AT&T Bell Laboratories as a Professor of
electrical engineering at the University of Florida. In the spring of 1981, he
was a Visiting Lecturer at Stanford University, Stanford, CA. In the spring
of 1983, he was a Mackay Lecturer at the University of California, Berkeley.
In 1988, he held the Shirley and Burt Harris Chair in Electrical Engineering
at Technion–Israel Institute of Technology, Haifa, Israel. In 1992, he became
an AT&T Bell Laboratories Fellow. He retired from AT&T in 1995 and is
currently splitting his time between Lucent–Bell Labs, Bellcore, the University
of California at Berkeley, and the Technion.
Electricity and Magnetism
• The x in 8.02x
• Course Organization
• The Beginning
Feb 6 2002

From Amber to the Radio...
• 600 BC: ελεχτρον (amber)
• 1632: Galileo
• 1791: Coulomb
• 1830: Gauss
• 1831: Faraday
• 1873: Maxwell
• 1887: Hertz
• Now

Observation → Prediction → Physical Law

8.02x
• Lecture Demos (me)
• Experiments (YOU!)

Let's start from the beginning!
20.110/5.60 Fall 2005    Lecture #3    page 1

EXPANSIONS, ENERGY, ENTHALPY

Isothermal Gas Expansion (ΔT = 0)

gas (p1, V1, T) = gas (p2, V2, T)

Irreversibly (many ways possible)

(1) Set pext = 0:

w(1) = −∫_{V1}^{V2} pext dV = 0

(2) Set pext = p2:

w(2) = −∫_{V1}^{V2} p2 dV = −p2 (V2 − V1)

Note, work is negative: system expands against surroundings.

[Figure: p–V diagrams of paths (1) and (2); the shaded area −w(2) is the rectangle p2 (V2 − V1).]

20.110J / 2.772J / 5.601J Thermodynamics of Biomolecular Systems
Instructors: Linda G. Griffith, Kimberly Hamad-Schifferli, Moungi G. Bawendi, Robert W. Field
(3) Carry out change in two steps:

gas (p1, V1, T) = gas (p3, V3, T) = gas (p2, V2, T),    p1 > p3 > p2

w(3) = −∫_{V1}^{V3} p3 dV − ∫_{V3}^{V2} p2 dV = −p3 (V3 − V1) − p2 (V2 − V3)

More work is delivered to the surroundings in this case.

[Figure: p–V diagram; −w(3) is the area of the two rectangles under p3 and p2.]

(4) Reversible change, p = pext throughout:

w_rev = −∫_{V1}^{V2} p dV

Maximum work delivered to surroundings for isothermal gas expansion is obtained using a reversible path.

For ideal gas:

w_rev = −∫_{V1}^{V2} (nRT/V) dV = −nRT ln(V2/V1) = nRT ln(p2/p1)
The Internal Energy U

dU = đq + đw    (First Law)

dU = C dT − pext dV    (along a path)

And U(T, V) ⇒

dU = (∂U/∂T)_V dT + (∂U/∂V)_T dV

Some frequent constraints:

• Reversible ⇒ dU = đq_rev + đw_rev = đq_rev − p dV    (p = pext)
• Isolated ⇒ đq = đw = 0
• Adiabatic ⇒ đq = 0 ⇒ dU = đw = −p dV    (reversible)
• Constant V ⇒ đw = 0 ⇒ dU = đq_V

But also, at constant V (dV = 0):

dU = (∂U/∂T)_V dT ⇒ đq_V = (∂U/∂T)_V dT ⇒ (∂U/∂T)_V = C_V    very important result!!

đq_V = C_V dT

So

dU = C_V dT + (∂U/∂V)_T dV    ← what is this?
Joule Free Expansion of a Gas    (to get (∂U/∂V)_T)

[Figure: rigid adiabatic container, gas | vacuum]

gas (p1, T1, V1) = gas (p2, T2, V2)

Adiabatic: q = 0. Expansion into vacuum (pext = 0): w = 0.

Since q = w = 0 ⇒ dU or ΔU = 0    (constant U)

Recall

dU = C_V dT + (∂U/∂V)_T dV = 0

(∂U/∂V)_T dV = −C_V dT ⇒ (∂U/∂V)_T = −C_V (∂T/∂V)_U

(ΔT/ΔV)_U: measure in Joule experiment! Joule did this.

lim_{ΔV→0} (ΔT/ΔV)_U = (∂T/∂V)_U ≡ η_J    ← Joule coefficient

∴ dU = C_V dT − η_J C_V dV

• For ideal gas ⇒ η_J = 0 exactly, so dU = C_V dT always for an ideal gas: U(T) only depends on T.

The internal energy of an ideal gas depends only on temperature.

Consequences ⇒

ΔU = 0 for all isothermal expansions or compressions of ideal gases

ΔU = ∫ C_V dT for any ideal gas change in state
Enthalpy H(T, p),    H ≡ U + pV

Chemical reactions and biological processes usually take place under constant pressure and with reversible pV work. Enthalpy turns out to be an especially useful function of state under those conditions.

gas (p, T1, V1) = gas (p, T2, V2)    (reversible, constant p)

ΔU = q + w = q_p − p ΔV

ΔU + p ΔV = q_p

Δ(U + pV) = q_p    (define U + pV as H)

H ≡ U + pV ⇒ ΔH = q_p    for a reversible constant-p process

Choose H(T, p) ⇒

dH = (∂H/∂T)_p dT + (∂H/∂p)_T dp

What are (∂H/∂T)_p and (∂H/∂p)_T?

• (∂H/∂T)_p ⇒ for a reversible process at constant p (dp = 0),

dH = đq_p and dH = (∂H/∂T)_p dT ⇒ đq_p = (∂H/∂T)_p dT

but đq_p = C_p dT also, ∴ (∂H/∂T)_p = C_p
• (∂H/∂p)_T ⇒ Joule-Thomson expansion

[Figure: porous partition (throttle); adiabatic, q = 0]

gas (p1, T1) = gas (p2, T2)

w = p1 V1 − p2 V2 ⇒ ΔU = q + w = p1 V1 − p2 V2

∴ ΔU + Δ(pV) = 0 ⇒ ΔH = Δ(U + pV) = 0

Joule-Thomson is a constant-enthalpy process.

dH = C_p dT + (∂H/∂p)_T dp = 0 ⇒ C_p dT = −(∂H/∂p)_T dp

⇒ (∂H/∂p)_T = −C_p (∂T/∂p)_H    ← can measure (ΔT/Δp)_H

Define lim_{Δp→0} (ΔT/Δp)_H = (∂T/∂p)_H ≡ μ_JT    ← Joule-Thomson coefficient

∴ (∂H/∂p)_T = −μ_JT C_p    and    dH = C_p dT − μ_JT C_p dp
For an ideal gas: U(T), pV = nRT

H ≡ U(T) + pV = U(T) + nRT

⇒ H(T) only depends on T, no p dependence

⇒ (∂H/∂p)_T = 0 and μ_JT = 0 for an ideal gas

For an ideal gas, C_p = C_V + R:

C_p = (∂H/∂T)_p,    C_V = (∂U/∂T)_V,    H = U + pV with pV = RT

(∂H/∂T)_p = (∂U/∂T)_p + R = C_V + (∂U/∂V)_T (∂V/∂T)_p + R

(∂U/∂V)_T = 0 for ideal gas

∴ C_p = C_V + R
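The relation C_p = C_V + R can be checked with central finite differences; the monatomic form U = (3/2)RT below is an illustrative assumption, used only so that U(T) and H(T) are concrete:

```python
R = 8.314  # J/(mol K)

def U(T):                  # molar internal energy, monatomic ideal gas (assumed)
    return 1.5 * R * T

def H(T):                  # H = U + pV = U + RT for one mole of ideal gas
    return U(T) + R * T

# central finite differences for Cv = (dU/dT)_V and Cp = (dH/dT)_p
T, dT = 300.0, 1e-3
Cv = (U(T + dT) - U(T - dT)) / (2 * dT)
Cp = (H(T + dT) - H(T - dT)) / (2 * dT)

assert abs(Cp - Cv - R) < 1e-6   # Cp - Cv = R
```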
Massachusetts Institute of Technology
Department of Electrical Engineering and Computer Science
6.438 Algorithms For Inference
Fall 2014
9 Forward-backward algorithm, sum-product on
factor graphs
The previous lecture introduced belief propagation (sum-product), an efficient inference algorithm for tree-structured graphical models. In this lecture, we specialize it
further to the so-called hidden Markov model (HMM), a model which is very useful
in practice for problems with temporal structure.
9.1 Example: convolution codes
We motivate our discussion of HMMs with a kind of code for communication called a
convolution code. In general, the problem of communication is that the sender would
like to send a message m, represented as a bit string, to a receiver. The message may
be corrupted along the way, so we need to introduce redundancy into the message so
that it can be reconstructed accurately even in the presence of noise. To do this, the
sender sends a coded message b over a noisy channel. The channel introduces some
noise (e.g. by flipping random bits). The receiver receives the “received message” y
and then applies a decoding procedure to get the decoded message mˆ . A schematic
is shown in Figure 1. Clearly, we desire a coding scheme where mˆ = m with high
probability, b is not much larger than m, and mˆ can be efficiently computed from y .
We now discuss one example of a coding scheme, called a convolution code. Suppose the message m consists of N bits. The coded message b will consist of 2N − 1 bits, alternating between the following:
• The odd-numbered bits b_{2i−1} repeat the message bits m_i exactly.
• The even-numbered bits b_{2i} are the XOR of message bits m_i and m_{i+1}, denoted m_i ⊕ m_{i+1}.
The ratio of the lengths of m and b is called the rate of the code, so this convolution code is a rate-1/2 code, i.e., for every coded message bit, it can convey 1/2 message bit.
We assume an error model called a binary symmetric channel : each of the bits of
the coded message is independently flipped with probability ε. We can represent this
as a directed graphical model as shown in Figure 2. Note that from the receiver’s
perspective, only the yi’s are observed, and the task is to infer the mi’s.
In order to perform inference, we must convert this graph into an undirected
graphical model. Unfortunately, the straightforward construction, where we moralize
the graph, does not result in a tree structure, because of the cliques over mi, mi+1, and
b2i. Instead, we coarsen the representation by combining nodes into “supernodes.” In
particular, we will combine all of the adjacent message bits into variables mimi+1, and
Figure 1: A schematic representation of the problem setup for convolution codes.
Figure 2: Our convolution code can be represented as a directed graphical model.
we will combine pairs of adjacent received message bits y2i−1y2i, as shown in Figure
3. This results in a tree-structured directed graph, and therefore an undirected tree
graph — now we can perform sum-product.
9.2 Hidden Markov models
Observe that the graph in Figure 3 is Markov in its hidden states. More generally, a
hidden Markov model (HMM) is a graphical model with the structure shown in Figure
4. Int | https://ocw.mit.edu/courses/6-438-algorithms-for-inference-fall-2014/07fd05499a8596682dedce6fd0a229c3_MIT6_438F14_Lec9.pdf |
hidden states. More generally, a
hidden Markov model (HMM) is a graphical model with the structure shown in Figure
4. Intuitively, the variables xi represent a state which evolves over time and which
we don’t get to observe, so we refer to them as the hidden state. The variables yi are
signals which depend on the state at the same time step, and in most applications
are observed, so we refer to them as observations.
From the definition of directed graphical models, we see that the HMM represents
the factorization property

P(x_1, …, x_N, y_1, …, y_N) = P(x_1) ∏_{i=2}^{N} P(x_i | x_{i−1}) ∏_{j=1}^{N} P(y_j | x_j).    (1)
Observe that we can convert this to the undirected representation shown in Figure 4
(b) by taking each of the terms in this product to be a potential. This allows us to
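The factorization (1) can be evaluated directly. A sketch with an illustrative two-state HMM (the numbers in `pi`, `A`, `B` are assumptions, not from the lecture):

```python
from itertools import product

# illustrative two-state HMM: pi[x1], A[x_prev][x], B[x][y]
pi = [0.6, 0.4]
A  = [[0.7, 0.3], [0.2, 0.8]]
B  = [[0.9, 0.1], [0.5, 0.5]]

def joint(x, y):
    """P(x, y) = P(x1) prod_i P(xi | x_{i-1}) prod_j P(yj | xj), as in (1)."""
    p = pi[x[0]]
    for t in range(1, len(x)):
        p *= A[x[t - 1]][x[t]]
    for t in range(len(y)):
        p *= B[x[t]][y[t]]
    return p

# sanity check: the joint sums to 1 over all (x, y) sequences of length 3
N = 3
total = sum(joint(x, y) for x in product([0, 1], repeat=N)
                        for y in product([0, 1], repeat=N))
assert abs(total - 1.0) < 1e-12
```

Each factor here becomes one potential of the undirected representation in Figure 4(b).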
3.044 MATERIALS PROCESSING
LECTURE 3
We will often be comparing heat transfer steps/processes:
When can we neglect one and focus on the other?
Resistance:
LA
10 > kA
LB
kB
> 0.1
⇒
10 :
0.1 :
“B”conducts fast, cannot sustain a gradient
“A”conducts fast, cannot sustain a gradient
Reduce Dimensionality:

∂T/∂t = α ∇²T,    T(t, x, y, z)

1. Steady State: ∂T/∂t = 0
2. No Thermal Gradients: ∇T = 0, T = T(t) only, and ∂T/∂t = …
Date: February 15th, 2012.
In general, for solid / “fluid” interfaces: T2 ≠ Tf
- constant T, B.C. is not appropriate
- fluid cannot always remove heat at the rate it is delivered
How is heat transferred / removed in the fluid?
- conduction: heat moves, atoms sit still
- convection: atoms flow away, carrying heat with them
1. natural convection (T interacts w/ gravity)
2. forced convection (mechanically driven flow)
- radiation: photons carry heat away
What are the proper B.C.?

1. T2 ≠ Tf
2. @ x = L, specify flux: q = h (T2 − Tf)

where q is the heat flux [W/m²] and h is the heat transfer coefficient [W/(m² K)]; the hotter the material is with respect to the fluid, the faster heat will flow.
∂T/∂t = 0 = α ∂²T/∂x²

Step 1: Solve
Step 2: B.C.
Θ = (T − T1)/(T2 − T1) = x/L, where T2 is unknown

@ x = L:    q_cond = q_conv

−k ∂T/∂x = h (T2 − Tf)
Step 3: Solve for ∂T/∂x

(T − T1)/(T2 − T1) = x/L ⇒ T = T1 + (x/L)(T2 − T1) ⇒ ∂T/∂x = (T2 − T1)/L

Plug into −k ∂T/∂x = h (T2 − Tf):

−(k/L)(T2 − T1) = h (T2 − Tf)

(h + k/L) T2 = (k/L) T1 + h Tf

T2 = ((k/L) T1 + h Tf) / (h + k/L)
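The resulting expression for T2 can be sketched in code and checked in its two limits (the parameter values below are illustrative, not from the lecture):

```python
def surface_temp(T1, Tf, h, k, L):
    """Steady-state surface temperature from -k dT/dx = h (T2 - Tf)."""
    return ((k / L) * T1 + h * Tf) / (h + k / L)

T1, Tf, k, L = 500.0, 300.0, 50.0, 0.01   # assumed values (SI units)

# h -> infinity: convection dominates and T2 -> Tf
assert abs(surface_temp(T1, Tf, 1e9, k, L) - Tf) < 1e-2
# h -> 0: no heat removed and T2 -> T1
assert abs(surface_temp(T1, Tf, 1e-9, k, L) - T1) < 1e-2
```

For intermediate h, T2 lies between T1 and Tf, which is the regime the Biot number below quantifies.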
Plug into T = T1 + (x/L)(T2 − T1):

T − T1 = (x/L) · h (Tf − T1) / (h + k/L)

Θ = (T − T1)/(Tf − T1) = (h x / k) / (1 + h L / k)

Biot Number: Bi = h L / k

dimensionless, ratio of resistances: L/k is the conductive resistance and 1/h is the convective resistance.
Three Important Cases:

[Figure: three limiting cases of the Biot number.]
Generalize:

1. Imperfect interfaces: q_in = q_out = h (T2⁺ − T2⁻), where 1/h = interface resistance

2. Geometry: Bi = h L / k → what is L? L ≈ volume / surface area, a characteristic dimension
Examples:

1. plate heated on one side: L = thickness
2. plate heated on both sides: L = half thickness
3. cylinder: L = πR²l / (2πRl) = R/2
4. sphere (or other 3D shape): L = (4/3)πR³ / (4πR²) = R/3
MIT OpenCourseWare
http://ocw.mit.edu
3.044 Materials Processing
Spring 2013
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/3-044-materials-processing-spring-2013/081f3c79abb2de69274656cde699ce78_MIT3_044S13_Lec03.pdf |
The possible outcomes of a linear programming problem:

• There exists a unique optimal solution.
• There exist multiple optimal solutions; in this case, the set of optimal solutions can be either bounded or unbounded.
• The optimal cost is −∞, and no feasible solution is optimal.
• The feasible set is empty.

[Figure: a feasible set in the (x1, x2) plane with cost vectors c = (1, 0), c = (−1, −1), c = (0, 1), and c = (1, 1), one for each case.]
Polyhedra

Definitions

• The set {x | a′x = b} is called a hyperplane.
• The set {x | a′x ≥ b} is called a halfspace.
• The intersection of many halfspaces is called a polyhedron.
• A polyhedron is a convex set, i.e., if x, y ∈ P, then λx + (1 − λ)y ∈ P for all λ ∈ [0, 1].
[Figure: a polyhedron P with marked points A, B, C, D, E and a hyperplane {y | c′y = c′x} touching P at a corner.]
Then n hyperplanes are tight, but the constraints are not linearly independent.

Intuition: a basic feasible solution is a point at which n inequalities are tight and the corresponding equations are linearly independent.

P = {x | Ax ≥ b}

• a1, …, am: the rows of A
• For x ∈ P, let I = {i | ai′x = bi} be the set of tight (active) constraints at x.
Definition: x is a basic feasible solution if the subspace spanned by {ai, i ∈ I} is all of Rⁿ.
Degeneracy

• If |I| = n, then the ai, i ∈ I, are linearly independent: x is a nondegenerate basic feasible solution.
• If |I| > n, the solution x is degenerate.
MIT OpenCourseWare
http://ocw.mit.edu
6.251J / 15.081J Introduction to Mathematical Programming
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/6-251j-introduction-to-mathematical-programming-fall-2009/0820cfa3e02c9bbfce75399c24e618de_MIT6_251JF09_lec02.pdf |
15.082 and 6.855J
Fall 2010
Network Optimization J.B. Orlin
WELCOME!
Welcome to 15.082/6.855J
Introduction to Network Optimization
Instructor: James B. Orlin
TA: David Goldberg
Textbook: Network Flows: Theory, Algorithms,
and Applications by Ahuja, Magnanti, and Orlin
referred to as AMO
2
Quick Overview
Next: The Koenigsberg Bridge Problem
Introduces Networks and Network Algorithms
Some subject management issues
Network flows and applications
Computational Complexity
Overall goal of today’s lecture: set the tone for the rest of
the subject
provide background
provide motivation
handle some class logistics
3
On the background of students
Requirement for this class
Either Linear Programming (15.081J)
or Data Structures
Mathematical proofs
The homework exercises usually call for
proofs.
The midterms will not require proofs.
For those who have not done many proofs
before, the TA will provide guidance
4
Some aspects of the class
Fondness for Powerpoint animations
Cold-calling as a way to speed up learning of the
algorithms
Talking with partners (the person next to you
in the classroom.)
Class time: used for presenting theory,
algorithms, applications
mostly outlines of proofs illustrated by
examples (not detailed proofs)
detailed proofs are in the text
5
The Bridges of Koenigsberg: Euler 1736
“Graph Theory” began in 1736
Leonhard Euler
Visited Koenigsberg
People wondered whether it is possible to take
a walk, end up where you started from, and
cross each bridge in Koenigsberg exactly
once
Generally it was believed to be impossible
6
The Bridges of Koenigsberg: Euler 1736
[Figure: map of Koenigsberg with land masses A, B, C, D and bridges 1–7]
Is it possible to start in A, cross over each bridge
exactly once, and end up back in A?
7
The Bridges of Koenigsberg: Euler 1736
[Figure: map of Koenigsberg with land masses A, B, C, D and bridges 1–7]
Conceptualization: Land masses are “nodes”.
8
The Bridges of Koenigsberg: Euler 1736
[Figure: land masses A, B, C, D drawn as nodes, with bridges 1–7 connecting them]
Conceptualization: Bridges are “arcs.”
9
The Bridges of Koenigsberg: Euler 1736
[Figure: the same graph with bridges 1–7 drawn as arcs]
Is there a “walk” starting at A and ending at A and
passing through each arc exactly once?
10
Notation and Terminology
Network terminology as used in AMO.
[Figures: two drawings on nodes 1–4 with arcs a–e: one undirected, one directed]
An Undirected Graph or
Undirected Network
A Directed Graph or
Directed Network
Network G = (N, A)
Node set N = {1, 2, 3, 4}
Arc Set A = {(1,2), (1,3), (3,2), (3,4), (2,4)}
In an undirected graph, (i,j) = (j,i)
11
Path: Example: 5, 2, 3, 4.
(or 5, c, 2, b, 3, e, 4)
•No node is repeated.
•Directions are ignored.
Directed Path . Example: 1, 2, 5, 3, 4
(or 1, a, 2, c, 5, d, 3, e, 4)
•No node is repeated.
•Directions are important.
Cycle (or circuit or loop)
1, 2, 3, 1. (or 1, a, 2, b, 3, e)
•A path with 2 or more nodes, except
that the first node is the last node.
•Directions are ignored.
Directed Cycle: (1, 2, 3, 4, 1) or
1, a, 2, b, 3, c, 4, d, 1
•No node is repeated.
•Directions are important.
[Figures: example graphs on nodes 1–5 with arcs a–e illustrating the path, directed path, cycle, and directed cycle above]
Walks
[Figure: directed graph on nodes 1–5 with arcs a–e]
Walks are paths that can repeat nodes and arcs
Example of a directed walk: 1-2-3-5-4-2-3-5
A walk is closed if its first and last nodes are the
same.
A closed walk is a cycle except that it can repeat
nodes and arcs.
13
The Bridges of Koenigsberg: Euler 1736
[Figure: graph with nodes A, B, C, D and arcs 1–7]
Is there a “walk” starting at A and ending at A and
passing through each arc exactly once?
Such a walk is called an eulerian cycle.
14
Adding two bridges creates such a walk
[Figure: the graph with two added bridges, arcs 8 and 9]
Here is the walk.
A, 1, B, 5, D, 6, B, 4, C, 8, A, 3, C, 7, D, 9, B, 2, A
Note: the number of arcs incident to B is twice
the number of times that B appears on the walk.
15
On Eulerian Cycles
[Figure: the graph with each node labeled by its degree]
The degree of a node in an undirected graph is the number of incident arcs.
Theorem. An undirected graph has an eulerian
cycle if and only if
(1) every node degree is even and
(2) the graph is connected (that is, there is a path
from each node to each other node).
More on Euler’s Theorem
Necessity of two conditions:
Any eulerian cycle “visits” each node an even
number of times
Any eulerian cycle shows the network is connected
caveat: nodes of degree 0
Sufficiency of the condition
Assume the result is true for all graphs with fewer
than |A| arcs.
Start at some node, and take a walk until a cycle C is
found.
[Figure: walking from a node until a cycle C is found]
17
More on Euler’s Theorem
Sufficiency of the condition
Start at some node, and take a walk until a cycle C is
found.
Consider G’ = (N, A\C)
the degree of each node is even
each component is connected
So, G’ is the union of Eulerian cycles
Connect G’ into a single eulerian cycle by adding C.
18
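The splicing argument above (walk until a cycle is found, then connect the remaining Eulerian pieces) is essentially Hierholzer's algorithm, which runs in O(m) time. A sketch on a small hypothetical even-degree multigraph (not the Koenigsberg example):

```python
def eulerian_cycle(adj):
    """adj: node -> list of neighbors (undirected multigraph, all degrees even)."""
    g = {u: list(vs) for u, vs in adj.items()}  # mutable copy of the adjacency lists
    stack = [next(iter(g))]                     # start anywhere
    cycle = []
    while stack:
        v = stack[-1]
        if g[v]:                  # walk along an unused edge
            w = g[v].pop()
            g[w].remove(v)        # drop the reverse copy of the edge
            stack.append(w)
        else:                     # stuck: v is finished, back it out
            cycle.append(stack.pop())
    return cycle                  # visits every edge exactly once

adj = {"A": ["B", "D", "C", "C"], "B": ["A", "C"],
       "C": ["B", "D", "A", "A"], "D": ["C", "A"]}
cyc = eulerian_cycle(adj)
print(cyc[0] == cyc[-1], len(cyc))  # True 7 -- a closed walk over all 6 edges
```

The backtracking step is where the smaller Eulerian cycles of G' get spliced into the final cycle.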
Comments on Euler’s theorem
1.
It reflects how proofs are done in class, often in
outline form, with key ideas illustrated.
2. However, this proof does not directly lead to an
efficient algorithm. (More on this in two
lectures.)
3. Usually we focus on efficient algorithms.
19
15.082/6.855J Subject Goals:
1. To present students with a knowledge of the
state-of-the art in the theory and practice of
solving network flow problems.
A lot has happened since 1736
2. To provide students with a rigorous analysis of
network flow algorithms.
computational complexity & worst case
analysis
3. To help each student develop his or her own
intuition about algorithm development and
algorithm analysis.
20
Homework Sets and Grading
Homework Sets
6 assignments
4 points per assignment
lots of practice problems with solutions
Grading
homework: 24 points
Project: 16 points
Midterm 1: 30 points
Midterm 2: 30 points
21
Class discussion
Have you seen network models elsewhere?
Do you have any specific goals in taking this
subject?
22
Mental break
Which nation gave women the right to vote first?
New Zealand.
Which Ocean goes to the deepest depths?
Pacific Ocean
What is northernmost land on earth? | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/084aaba151e144f852e66099c7ea1213_MIT15_082JF10_lec01.pdf |
Cape Morris Jesup in Greenland
Where is the World’s Largest Aquarium?
Epcot Center in Orlando, FL
23
Mental break
What country has not fought in a war since 1815?
Switzerland
What does the term Prima Donna mean in Opera?
The leading female singer
What fruit were Hawaiian women once forbidden by
law to eat?
The coconut
What’s the most common non-contagious disease in
the world?
Tooth decay
24
Three Fundamental Flow Problems
The shortest path problem
The maximum flow problem
The minimum cost flow problem
25
The shortest path problem
[Figure: a network with arc lengths]
Consider a network G = (N, A) in which there is
an origin node s and a destination node t.
standard notation: n = |N|, m = |A|
What is the shortest path from s to t?
26
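Shortest-path algorithms come later in the subject, but for orientation, here is one standard way to answer the question: Dijkstra's algorithm with a heap. The small graph and its arc lengths below are made up for illustration, not read off the slide's figure.

```python
import heapq

def dijkstra(adj, s):
    """adj: node -> list of (neighbor, length); returns shortest distances from s."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, v = heapq.heappop(pq)
        if d > dist.get(v, float("inf")):
            continue                      # stale queue entry
        for w, length in adj.get(v, []):
            nd = d + length
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(pq, (nd, w))
    return dist

adj = {"s": [(1, 1), (2, 4)], 1: [(2, 2), ("t", 6)], 2: [("t", 2)]}
dist = dijkstra(adj, "s")
print(dist["t"])  # 5, via s-1-2-t
```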
The Maximum Flow Problem
Directed Graph G = (N, A).
Source s
Sink t
Capacities uij on arc (i,j)
Maximize the flow out of s, subject to
Flow out of i = Flow into i, for i ≠ s or t.
[Figure: a network with arc capacities and a maximum flow]
A Network with Arc Capacities (and the maximum flow)
Representing the Max Flow as an LP
[Figure: the max flow network; each arc labeled (capacity, flow): s→1 (10, 9), s→2 (6, 6), 1→2 (1, 1), 1→t (8, 8), 2→t (10, 7)]

Flow out of i − Flow into i = 0, for i ≠ s or t.

For this network:

max v
s.t. xs1 + xs2 = v
     −xs1 + x12 + x1t = 0
     −xs2 − x12 + x2t = 0
     −x1t − x2t = −v
     0 ≤ xij ≤ uij for all (i,j)

In general:

max v
s.t. ∑j xsj = v
     ∑j xij − ∑j xji = 0 for each i ≠ s or t
     −∑i xit = −v
     0 ≤ xij ≤ uij for all (i,j)
28
Min Cost Flows
[Figure: a network with supplies/demands b(i) at nodes 1–5; one arc labeled with cost $4 and capacity 10]

Each arc has a linear cost and a capacity.
Flow out of i − Flow into i = b(i).

min ∑i,j cij xij
s.t. ∑j xij − ∑j xji = b(i) for each i
     0 ≤ xij ≤ uij for all (i,j)
Covered in detail in Chapter 1 of AMO
29
Where Network Optimization Arises
Transportation Systems
Transportation of goods over transportation networks
Scheduling of fleets of airplanes
Manufacturing Systems
Scheduling of goods for manufacturing
Flow of manufactured items within inventory systems
Communication Systems
Design and expansion of communication systems
Flow of information across networks
Energy Systems, Financial Systems, and much more
30
Next topic: computational complexity
What is an efficient algorithm?
How do we measure efficiency?
“Worst case analysis”
but first …
31
Measuring Computational Complexity
Consider the following algorithm for adding two m × n
matrices A and B with coefficients a( , ) and b( , ).
begin
  for i = 1 to m do
    for j = 1 to n do
      c(i,j) := a(i,j) + b(i,j)
end
What is the running time of this algorithm?
Let’s measure it as precisely as we can as a function of n and m.
Is it 2nm, or 3nm, or what?
Worst case versus average case
How do we measure the running time?
What are the basic steps that we should count?
32
Compute the running time precisely.
Operation Number (as a function of m,n)
Additions
Assignments
Comparisons
Multiplications
33
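One way to fill in the table is to instrument the loop and count. The sketch below counts only the additions and the assignments to c(i,j); loop-index bookkeeping and comparisons would add further terms proportional to mn, but nothing larger.

```python
def add_matrices(a, b):
    """Add two m x n matrices, counting the arithmetic as we go."""
    m, n = len(a), len(a[0])
    c = [[0] * n for _ in range(m)]
    adds = assigns = 0
    for i in range(m):
        for j in range(n):
            c[i][j] = a[i][j] + b[i][j]
            adds += 1      # one addition per entry
            assigns += 1   # one assignment per entry
    return c, adds, assigns

c, adds, assigns = add_matrices([[1, 2, 3], [4, 5, 6]],
                                [[6, 5, 4], [3, 2, 1]])
print(adds, assigns)  # 6 6 -- each is exactly mn, so O(mn) total
```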
Towards Computational Complexity
1. We will ignore running time constants.
2. Our running times will be stated in terms of
relevant problem parameters, e.g., nm.
3. We will measure everything in terms of worst
case or most pessimistic analysis (performance
guarantees.)
4. All arithmetic operations are assumed to take
one step,
(or a number of steps that is bounded by a
constant).
34
A Simpler Metric for Running Time.
Operation Number (as a function of m,n)
Additions ≤ c1 mn for some c1 and m, n ≥ 1
O(mn) steps
Assignments ≤ c2 mn for some c2 and m, n ≥ 1
O(mn) steps
Comparisons ≤ c3 mn for some c3 and m, n ≥ 1
O(mn) steps
TOTAL ≤ c4 mn for some c4 and m, n ≥ 1
O(mn) steps
35
Simplifying Assumptions and Notation
MACHINE MODEL: Random Access Machine
(RAM).
This is the computer model that everyone is used
to. It allows the use of arrays, and it can select
any element of an array or matrix in O(1) steps.
c(i,j) := a(i,j) + b(i,j).
Integrality Assumption. All numbers are integral
(unless stated otherwise.)
36
Size of a problem
The size of a problem is the number of bits
needed to represent the problem.
The size of the n × m matrix A is not nm.
If each matrix element has K bits, the size is nmK.
e.g., if 2^107 < max aij < 2^108, then K = 108.
K = O(log(amax)).
37
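The claim about K can be checked directly: Python integers report the number of bits needed to write them.

```python
a = 2**107 + 12345            # any a with 2^107 < a < 2^108
K = a.bit_length()            # bits needed to represent a
print(K)                      # 108

n, m = 3, 5                   # a small hypothetical matrix shape
print(n * m * K)              # 1620 -- the size in bits of an n x m matrix of such entries
```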
Polynomial Time Algorithms
We say that an algorithm runs in polynomial time
if the number of steps taken by an algorithm on
any instance I is bounded by a polynomial in the
size of I.
We say that an algorithm runs in exponential time
if it does not run in polynomial time.
Example 1: finding the determinant of a matrix
can be done in O(n3) steps.
This is polynomial time.
38
Polynomial Time Algorithms
Example 2: We can determine if n is prime by dividing n by
every integer less than n.
This algorithm is exponential time.
The size of the instance is log n
The running time of the algorithm is O(n).
Side note: there is a polynomial time algorithm for
determining if n is prime.
Almost all of the algorithms presented in this class will be
polynomial time.
One can find an Eulerian cycle (if one exists) in O(m) steps.
There is no known polynomial time algorithm for finding a
min cost traveling salesman tour
39
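A sketch of the trial-division test from Example 2. The loop performs up to n divisions, which is exponential in the input size log2(n); polynomial-time primality tests exist, as the slide notes.

```python
def is_prime_trial(n):
    """Trial division: O(n) divisions, exponential in the log2(n)-bit input size."""
    if n < 2:
        return False
    for d in range(2, n):
        if n % d == 0:
            return False
    return True

print(is_prime_trial(101), is_prime_trial(91))  # True False  (91 = 7 * 13)
```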
On polynomial vs exponential time
We contrast two algorithms, one that takes 30,000 n^3 steps and one that takes 2^n steps.
Suppose that we could carry out 1 billion steps per
second.
# of nodes    30,000 n^3 steps    2^n steps
n = 30        0.81 seconds        1 second
n = 40        1.92 seconds        17 minutes
n = 50        3.75 seconds        12 days
n = 60        6.48 seconds        31 years
40
On polynomial vs. exponential time
Suppose that we could carry out 1 trillion steps
per second, and instantaneously eliminate
99.9999999% of all solutions as not worth
considering
# of nodes    1,000 n^10 steps    2^n steps
n = 70        2.82 seconds        1 second
n = 80        10.74 seconds       17 minutes
n = 90        34.86 seconds       12 days
n = 100       100 seconds         31 years
41
Overview of today’s lecture
Eulerian cycles
Network Definitions
Network Applications
Introduction to computational complexity
42
Upcoming Lectures
Lecture 2: Review of Data Structures
even those with data structure backgrounds
are encouraged to attend.
Lecture 3. Graph Search Algorithms.
how to determine if a graph is connected
and to label a graph
and more
43
MIT OpenCourseWare
http://ocw.mit.edu
15.082J / 6.855J / ESD.78J Network Optimization
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/084aaba151e144f852e66099c7ea1213_MIT15_082JF10_lec01.pdf |
The method of characteristics applied to
quasi-linear PDEs
18.303 Linear Partial Differential Equations
Matthew J. Hancock
Fall 2006
1 Motivation
[Oct 26, 2005]
Most of the methods discussed in this course: separation of variables, Fourier
Series, Green’s functions (later) can only be applied to linear PDEs. However, the
method of characteristics can be applied to a form of nonlinear PDE.
1.1 Traffic flow
Ref: Myint-U & Debnath §12.6
Consider the idealized flow of traffic along a one-lane highway. Let ρ (x, t) be the
traffic density at (x, t). The total number of cars in x1 ≤ x ≤ x2 at time t is
N(t) = ∫_{x1}^{x2} ρ(x, t) dx    (1)
Assume the number of cars is conserved, i.e. no exits. Then the rate of change of the
number of cars in x1 ≤ x ≤ x2 is given by
dN/dt = rate in at x1 − rate out at x2
      = ρ(x1, t) V(x1, t) − ρ(x2, t) V(x2, t)
      = −∫_{x1}^{x2} ∂(ρV)/∂x dx    (2)
where V (x, t) is the velocity of the cars at (x, t). Combining (1) and (2) gives
∫_{x1}^{x2} ( ∂ρ/∂t + ∂(ρV)/∂x ) dx = 0
and since x1, x2 are arbitrary, the integrand must be zero at all x,
∂ρ/∂t + ∂(ρV)/∂x = 0    (3)
We assume, for simplicity, that velocity V depends on density ρ, via

V(ρ) = c (1 − ρ/ρmax)

where c = max velocity, ρ = ρmax indicates a traffic jam (V = 0 since everyone is
stopped), ρ = 0 indicates open road and cars travel at c, the speed limit (yeah right).
The PDE (3) becomes

∂ρ/∂t + c (1 − 2ρ/ρmax) ∂ρ/∂x = 0    (4)

We introduce the following normalized variables

u = ρ/ρmax,   t̃ = ct

into the PDE (4) to obtain (dropping tildes),

ut + (1 − 2u) ux = 0    (5)
The PDE (5) is called quasi-linear because it is linear in the derivatives of u. It
is NOT linear in u (x, t), though, and this will lead to interesting outcomes.
2 General first-order quasi-linear PDEs
Ref: Guenther & Lee §2.1, Myint-U & Debnath §12.1, 12.2
The general form of quasi-linear PDEs is
A ∂u/∂x + B ∂u/∂t = C    (6)
where A, B, C are functions of u, x, t. The initial condition u (x, 0) is specified at
t = 0,
u(x, 0) = f(x)    (7)
We will convert the PDE to a sequence of ODEs, drastically simplifying its solution. This general technique is known as the method of characteristics and is useful
for finding analytic and numerical solutions. To solve the PDE (6), we note that
(A, B, C) · (ux, ut, −1) = 0.    (8)
Recall from vector calculus that the normal to the surface f (x, y, z) = 0 is ∇f .
To make the analogy here, t replaces y, f (x, t, z) = u (x, t) − z and ∇f = (ux, ut, −1).
Thus, a plot of z = u (x, t) gives the surface f (x, t, z) = 0. The vector (ux, ut, −1) is
the normal to the solution surface z = u (x, t). From (8), the vector (A, B, C) is the
tangent to this solution surface.
The IC u (x, 0) = f (x) is a curve in the u − x plane. For any point on the initial
curve, we follow the vector (A, B, C) to generate a curve on the solution surface,
called a characteristic curve of the PDE. Once we find all the characteristic curves,
we have a complete description of the solution u (x, t).
2.1 Method of characteristics
We represent the characteristic curves parametrically,
x = x (r; s) ,
t = t (r; s) ,
u = u (r; s) ,
where s labels where we start on the initial curve (i.e. the initial value of x at t = 0).
The parameter r tells us how far along the characteristic curve. Thus (x, t, u) are now
thought of as trajectories parametrized by r and s. The semi-colon indicates that s
is a parameter to label different characteristic curves, while r governs the evolution
of the solution along a particular characteristic.
From the PDE (8), at each point (x, t), a particular tangent vector to the solution
surface z = u (x, t) is
(A (x, t, u) , B (x, t, u) , C (x, t, u)) .
Given any curve (x (r; s) , t (r; s) , u (r; s)) parametrized by r (s acts as a label only),
the tangent vector is (∂x/∂r, ∂t/∂r, ∂u/∂r).
For a general curve on the surface z = u (x, t), the tangent vector (A, B, C) will
be different than the tangent vector (xr, tr, ur). However, we choose our curves
(x (r; s) , t (r; s) , u (r; s)) so that they have tangents equal to (A, B, C),
∂x/∂r = A,   ∂t/∂r = B,   ∂u/∂r = C    (9)
where (A, B, C) depend on (x, t, u), in general. We have written partial derivatives
to denote differentiation with respect to r, since x, t, u are functions of both r and
s. However, since only derivatives in r are present in (9), these equations are ODEs!
This has greatly simplified our solution method: we have reduced the solution of a
PDE to solving a sequence of ODEs.
Figure 1: Plot of f(x).
We solve the ODEs (9) in conjunction with some initial conditions specified at r = 0. We
are free to choose the value of r at t = 0; for simplicity we take r = 0 at t = 0. | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/085f9ac605e03e9c72328a7239d11d10_quasi.pdf |
at t = 0; for simplicity we take r = 0 at t = 0. Thus
t (0; s) = 0. Since x changes with r, we choose s to denote the initial value of x (r; s)
along the x-axis (when t = 0) in the space-time domain. Thus the initial values (at
r = 0) are
x(0; s) = s,   t(0; s) = 0,   u(0; s) = f(s).    (10)
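The reduction to ODEs can be exercised numerically. Below is a forward-Euler sketch for the example problem that follows, where A = 1 + cu, B = 1, C = 0 with c = 1; since du/dr = 0, u stays constant along the trajectory and the integration reproduces the exact characteristic x = (1 + f(s)) t + s up to floating-point error.

```python
def f(x):
    # initial profile of the example in Section 3
    return 2 - abs(x) if abs(x) <= 1 else 1.0

def characteristic(s, r_end, steps=1000, c=1.0):
    """Forward-Euler integration of dx/dr = 1 + c*u, dt/dr = 1, du/dr = 0."""
    h = r_end / steps
    x, t, u = float(s), 0.0, f(s)
    for _ in range(steps):
        x += h * (1 + c * u)
        t += h
        # u += h * 0 -- C = 0, so u is constant along the characteristic
    return x, t, u

x, t, u = characteristic(s=0.5, r_end=1.0)
# exact values: x = (1 + f(0.5)) * 1 + 0.5 = 3.0, u = f(0.5) = 1.5
```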
3 Example problem
[Oct 28, 2005]
Consider the following quasi-linear PDE,
∂u/∂t + (1 + cu) ∂u/∂x = 0,    u(x, 0) = f(x)
where c = ±1 and the initial condition f (x) is
f(x) = { 1,        |x| > 1
         2 − |x|,  |x| ≤ 1 }

     = { 1,      x < −1
         2 + x,  −1 ≤ x ≤ 0
         2 − x,  0 < x ≤ 1
         1,      x > 1 }
The function f (x) is sketched in Figure 1. To find the parametric solution, we can
write the PDE as
(1, 1 + cu, 0) · (∂u/∂t, ∂u/∂x, −1) = 0
Thus the parametric solution is defined by the ODEs
dt/dr = 1,   dx/dr = 1 + cu,   du/dr = 0
with initial conditions at r = 0,
t = 0,
x = s,
u = u (x, 0) = u (s, 0) = f (s) .
Integrating the ODEs and imposing the ICs gives
t(r; s) = r
u(r; s) = f(s)
x(r; s) = (1 + cf(s)) r + s = (1 + cf(s)) t + s
3.1 Validity of solution and break-down (shock formation)
To find the time ts and position xs when and where a shock first forms, we find the
Jacobian:
J = ∂(x, t)/∂(r, s) = det [ xr  xs ; tr  ts ] = (∂x/∂r)(∂t/∂s) − (∂x/∂s)(∂t/∂r)
  = 0 − (cf′(s) r + 1) = −(cf′(s) t + 1)
Shocks occur (the solution breaks down) where J = 0, i.e. where
t = −1 / (c f′(s))

The first shock occurs at

ts = min_s ( −1 / (c f′(s)) )
In this course, we will not consider what happens after the shock. You can find more
about this in §12.9 of Myint-U & Debnath. We now take cases for c = ±1.
For c = 1, since min f ′ (s) = −1, we have
ts = −1 / min_s f′(s) = 1
Any of the characteristics where f ′ (s) = min f ′ (s) = −1 can be used to find the
location of the shock at ts = 1. For e.g., with s = 1/2, the location of the shock at
ts = 1 is
xs = (1 + f(1/2)) ts + 1/2 = (1 + (2 − 1/2)) · 1 + 1/2 = 3.

Any other value of s where f′(s) = −1 will give the same xs.
For c = −1, since max f ′ (s) = 1, we have
ts = 1 / max_s f′(s) = 1
Any of the characteristics where f ′ (s) = max f ′ (s) = 1 can be used to find the
location of the shock at ts = 1. For e.g., with s = −1/2, the location of the shock at
ts = 1 is
xs = (1 − f(−1/2)) · 1 − 1/2 = (1 − (2 − 1/2)) − 1/2 = −1.

Any other value of s where f′(s) = 1 will give the same xs.
3.2 Solution Method (plotting u(x,t))
Since r = t, we can rewrite the solution as being parametrized by time t and the
marker s of the initial value of x:
x (t; s) = (1 + cf (s)) t + s,
u (; s) = f (s)
We have written u (; s) to make clear that u depends only on the parameter s. In
other words, u is constant along characteristics!
To solve for the density u at a fixed time t = t0, we (1) choose values for s, (2)
compute x (t0; s), u (; s) at these s values and (3) plot u (; s) vs. x (t0; s). Since f (s)
is piecewise linear in s (i.e. composed of lines), x is therefore piecewise linear in s,
and hence at any given time, u = f (s) is piecewise linear in x. Thus, to find the
solution, we just | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/085f9ac605e03e9c72328a7239d11d10_quasi.pdf |
f (s) is piecewise linear in x. Thus, to find the
solution, we just need to follow the positions of the intersections of the lines in f (s)
(labeled by s = −1, 0, 1) in time. We then plot the positions of these intersections
along with their corresponding u value in the u vs. x plane and connect the dots to
obtain a plot of u (x, t). Note that for c = 1, the s = −1, 0, 1 characteristics are given
by
s = −1 : x = (1 + cf (−1)) t − 1 = 2t − 1
s = 0 : x = (1 + cf (0)) t + 0 = 3t
s = 1 : x = (1 + cf (1)) t + 1 = 2t + 1
These are plotted in Figure 2. The following tables are useful as a plotting aid:
t = 1/2:
  s = −1: u = 1, x = 0
  s = 0:  u = 2, x = 3/2
  s = 1:  u = 1, x = 2

t = ts = 1:
  s = −1: u = 1, x = 1
  s = 0:  u = 2, x = 3
  s = 1:  u = 1, x = 3
A plot of u (x, 1/2) is made by plotting the three points (x, u) from the table for
t = 1/2 and connecting the dots (see middle plot in Figure 3). Similarly, u (x, ts) =
u (x, 1) is plotted in the last plot of Figure 3.
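The plotting-aid tables can be generated straight from the parametric solution (shown here for c = 1):

```python
def f(x):
    return 2 - abs(x) if abs(x) <= 1 else 1

def point(s, t, c=1):
    """Position and height (x, u) of the characteristic labeled s at time t."""
    return (1 + c * f(s)) * t + s, f(s)

for t in (0.5, 1.0):
    print(t, [point(s, t) for s in (-1, 0, 1)])
```

At t = 1 the s = 0 and s = 1 characteristics land at the same x = 3: that collision is the shock.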
Repeating the above steps for c = −1, the s = −1, 0, 1 characteristics are given
by
s = −1 : x = (1 − f (−1)) t − 1 = −1
s = 0 : x = (1 − f (0)) t + 0 = −t
s = 1 : x = (1 − f (1)) t + 1 = 1
These are plotted in Figure 4. We then construct the tables:
t = 1/2:
  s = −1: u = 1, x = −1
  s = 0:  u = 2, x = −1/2
  s = 1:  u = 1, x = 1

t = ts = 1:
  s = −1: u = 1, x = −1
  s = 0:  u = 2, x = −1
  s = 1:  u = 1, x = 1
As before, plots of u (x, 1/2) and u (x, 1) are made by plotting the three points (x, u)
from the tables and connecting the dots. See middle and bottom plots in Figure
5. Note that for c = 1 the wave front steepened, while for c = −1 the wave tail
steepened. This is easy to understand by noting how the speed changes relative to
the height u of the wave. When c = 1, the local wave speed 1 + u is larger for higher
parts of the wave. Hence the crest catches up | https://ocw.mit.edu/courses/18-303-linear-partial-differential-equations-fall-2006/085f9ac605e03e9c72328a7239d11d10_quasi.pdf |
speed 1 + u is larger for higher
parts of the wave. Hence the crest catches up with the trough ahead of it, and the
shock forms on the front of the wave. When c = −1, the local wave speed 1 − u is
larger for higher parts of the wave; hence the tail catches up with the crest, and the
shock forms on the back of the wave.
4 Solution to traffic flow problem
[Oct 31, 2005]
The traffic flow PDE (5) is
ut + (1 − 2u) ux = 0
(11)
Figure 2: Plot of characteristics for c = 1.
Figure 3: Plot of u(x, t0) with c = 1 for t0 = 0, 0.5 and 1.
Figure 4: Plot of characteristics with c = −1.
Figure 5: Plot of u(x, t0) with c = −1 for t0 = 0, 0.5 and 1.
and has form (6) with (A, B, C) = (1 − 2u, 1, 0). The characteristic curves satisfy (9)
and (10)
∂x/∂r = 1 − 2u,   x(0) = s,
∂t/∂r = 1,        t(0) = 0,
∂u/∂r = 0,        u(0) = f(s).
Integrating gives the parametric equations
t = r + c1,
u = c2,
x = (1 − 2u) r + c3 = (1 − 2c2) r + c3
Imposing the ICs gives c1 = 0, c2 = f (s), c3 = s, so that
t = r,
u = f (s) ,
x = (1 − 2f (s)) r + s = (1 − 2f (s)) t + s
(12)
We can now write
x (t; s) = (1 − 2f (s)) t + s,
u (; s) = f (s)
Again, the traffic density u is constant along characteristics. Note that this would
change if, for example, there was a source/sink term in the traffic flow equation (11),
i.e.
ut + (1 − 2u) ux = h(x, t, u)
where h(x, t, u) models the traffic loss / gain to exits and on-ramps at various positions.
4.1 Example : Light traffic heading into heavier traffic
Consider light traffic heading into heavy traffic, and model the initial density as
u(x, 0) = f(x) = { α,                 x ≤ 0
                   (3/4 − α) x + α,   0 ≤ x ≤ 1
                   3/4,               x ≥ 1 }    (13)
where 0 ≤ α ≤ 3/4. The lightness of traffic is parametrized by α. We consider the
case of light traffic α = 1/6 and moderate traffic α = 1/3.
From (12), the characteristics are [DRAW]
x = { (1 − 2α) t + s,                   s ≤ 0
      (1 − 2α − 2 (3/4 − α) s) t + s,   0 ≤ s ≤ 1
      −t/2 + s,                         s ≥ 1 }

For α = 1/6, we have

x = { (2/3) t + s,             s ≤ 0
      (2/3 − (7/6) s) t + s,   0 ≤ s ≤ 1
      −t/2 + s,                s ≥ 1 }

For α = 1/3, we have

x = { (1/3) t + s,             s ≤ 0
      (1/3 − (5/6) s) t + s,   0 ≤ s ≤ 1
      −t/2 + s,                s ≥ 1 }
Again, for fixed times t = t0, plotting the solution amounts to choosing an appropriate
range of values for s, in this case −2 ≤ s ≤ 2 would suffice, and then plotting the
resulting points u (t0, s) versus x (t0, s) in the xu-plane.
The transformation (r, s) → (x, t) is non-invertible if the determinant of the Jacobian matrix is zero,

∂(x, t)/∂(r, s) = det [ xr  xs ; tr  ts ] = det [ 1 − 2f(s)   −2f′(s) r + 1 ; 1   0 ]
                = 2f′(s) r − 1 = 0.    (14)
Solving for r and noting that t = r gives the time when the determinant becomes
zero,
t = r = 1 / (2f′(s)).    (15)
Since times in this problem are positive t > 0, then shocks occur if f ′ (s) > 0 for some
s. The first such time where shocks occur is
tshock = 1 / (2 max {f′(s)}).    (16)
In the example above, the time when a shock first occurs is given by substituting
(13) into (16),
tshock = 1 / (2 max {f′(s)}) = 1 / (2 (3/4 − α)).
Thus, lighter traffic (smaller α) leads to shocks sooner! The position of the shock at
tshock is given by
xshock = (1 − 2α) tshock = (1/2 − α) / (3/4 − α).
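The two cases worked in this section can be tabulated exactly with rational arithmetic:

```python
from fractions import Fraction as F

def shock(alpha):
    # t_shock = 1/(2(3/4 - alpha)),  x_shock = (1/2 - alpha)/(3/4 - alpha)
    slope_max = F(3, 4) - alpha          # max f'(s) for the profile (13)
    return 1 / (2 * slope_max), (F(1, 2) - alpha) / slope_max

t_light, x_light = shock(F(1, 6))        # light traffic:    t = 6/7, x = 4/7
t_mod, x_mod = shock(F(1, 3))            # moderate traffic: t = 6/5, x = 2/5
print(t_light < t_mod)                   # True: lighter traffic shocks sooner
```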
6.034 Artificial Intelligence. Copyright © 2004 by Massachusetts Institute of Technology.
6.034 Notes: Section 2.4
Slide 2.4.1
So far, we have looked at three any-path algorithms, depth-first and breadth-first, which are
uninformed, and best-first, which is heuristically guided.
Slide 2.4.2
Now, we will look at the first algorithm that searches for optimal paths, as defined by a "path
length" measure. This uniform cost algorithm is uninformed about the goal, that is, it does not use
any heuristic guidance.
Slide 2.4.3
This is the simple algorithm we have been using to illustrate the various searches. As before, we will
see that the key issues are picking paths from Q and adding extended paths back in.
Slide 2.4.4
We will continue to use the algorithm but (as we will see) the use of the Visited list conflicts with
optimal searching, so we will leave it out for now and replace it with something else later.
Slide 2.4.5
Why can't we use a Visited list in connection with optimal searching? In the earlier searches, the use
of the Visited list guaranteed that we would not do extra work by re-visiting or re-expanding states.
It did not cause any failures then (except possibly of intuition).
Slide 2.4.6
But, using the Visited list can cause an optimal search to overlook the best path. A simple example
will illustrate this.
Slide 2.4.7
Clearly, the shortest path (as determined by sum of link costs) to G is (S A D G) and an optimal
search had better find it.
Slide 2.4.8
However, on expanding S, A and D are Visited, which means that the extension from A to D would
never be generated and we would miss the best path. So, we can't use a Visited list; nevertheless, we
still have | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/089f994f6a760c3c0b34e43dcd18ea60_ch2_search2.pdf |
to D would
never be generated and we would miss the best path. So, we can't use a Visited list; nevertheless, we
still have the problem of multiple paths to a state leading to wasted work. We will deal with that
issue later, since it can get a bit complicated. So, first, we will focus on the basic operation of
optimal searches.
Slide 2.4.9
The first, and most basic, algorithm for optimal searching is called uniform-cost search. Uniform-
cost is almost identical in implementation to best-first search. That is, we always pick the best node
on Q to expand. The only, but crucial, difference is that instead of assigning the node value based on
the heuristic value of the node's state, we will assign the node value as the "path length" or "path
cost", a measure obtained by adding the "length" or "cost" of the links making up the path.
Slide 2.4.10
To reiterate, uniform-cost search uses the total length (or cost) of a path to decide which one to
expand. Since we generally want the least-cost path, we will pick the node with the smallest path
cost/length. By the way, we will often use the word "length" when talking about these types of
searches, which makes intuitive sense when we talk about the pictures of graphs. However, we mean
any cost measure (like length) that is positive and greater than 0 for the link between any two states.
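The pick-cheapest-path loop just described can be sketched in Python. This is an illustrative sketch, not the course's code; the graph below encodes the example from these slides, with link costs chosen to match the path lengths quoted later (the D-C and B-G costs are assumptions, since the text never pins them down).

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Pick the cheapest partial path on Q, expand it, repeat.

    graph maps a state to a list of (neighbor, link_cost) pairs, with
    every link cost > 0.  We stop only when a goal path is EXPANDED
    (popped from Q), not when it first appears on Q."""
    q = [(0, [start])]                 # priority queue of (path_cost, path)
    while q:
        cost, path = heapq.heappop(q)  # cheapest pending path
        state = path[-1]
        if state == goal:
            return cost, path
        for nbr, c in graph.get(state, []):
            heapq.heappush(q, (cost + c, path + [nbr]))
    return None

# Link costs consistent with the path lengths quoted in the slides;
# the D-C and B-G costs are assumptions.
graph = {'S': [('A', 2), ('B', 5)],
         'A': [('C', 2), ('D', 4)],
         'B': [('D', 1), ('G', 5)],
         'D': [('C', 3), ('G', 2)]}
```

Here `uniform_cost_search(graph, 'S', 'G')` returns `(8, ['S', 'A', 'D', 'G'])`: the shortest path, of length 8. (Ties are broken by comparing the path lists; the slides break ties by queue order, which yields the same answer here.)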
Slide 2.4.11
The path length is the SUM of the length associated with the links in the path. For example, the path
from S to A to C has total length 4, since it includes two links, each of length 2.
Slide 2.4.12
The path from S to B to D to G has length 8 since it includes links of length 5 (S-B), 1 (B-D) and 2
(D-G).
Slide 2.4.13
Similarly for S-A-D-C.
Slide 2.4.14
Given this, let's simulate the behavior of uniform-cost search on this simple directed graph. As usual
we start with a single node containing just the start state S. This path has zero length. Of course, we
choose this path for expansion.
Slide 2.4.15
This generates two new entries on Q; the path to A has length 2 and the one to B has length 5. So,
we pick the path to A to expand.
Slide 2.4.16
This generates two new entries on the queue. The new path to C is the shortest path on Q, so we
pick it to expand.
Slide 2.4.17
Since C has no descendants, we add no new paths to Q and we pick the best of the remaining paths,
which is now the path to B.
Slide 2.4.18
The path to B is extended to D and G and the path to D from B is tied with the path to D from A.
We are using order in Q to settle ties and so we pick the path from B to expand. Note that at this
point G has been visited but not expanded.
Slide 2.4.19
Expanding D adds paths to C and G. Now the earlier path to D from A is the best pending path and
we choose it to expand.
Slide 2.4.20
This adds a new path to G and a new path to C. The new path to G is the best on the Q (at least tied
for best) so we pull it off Q.
Slide 2.4.21
And we have found our shortest path (S A D G) whose length is 8.
Slide 2.4.22
Note that once again we are not stopping on first visiting (placing on Q) the goal. We stop when the
goal gets expanded (pulled off Q).
Slide 2.4.23
In uniform-cost search, it is imperative that we only stop when G is expanded and not just when it is
visited. Until a path is first expanded, we do not know for a fact that we have found the shortest path
to the state.
Slide 2.4.24
In the any-path searches we chose to do the same thing, but that choice was motivated at the time
simply by consistency with what we HAVE to do now. In the earlier searches, we could have
chosen to stop when visiting a goal state and everything would still work fine (actually better).
Slide 2.4.25
Note that the first path that visited G was not the eventually chosen optimal path to G. This explains
our unwillingness to stop on first visiting G in the example we just did.
Slide 2.4.26
It is very important to drive home the fact that what uniform-cost search is doing (if we focus on the
sequence of expanded paths) is enumerating the paths in the search tree in order of their path cost.
The green numbers next to the tree on the left are the total path cost of the path to that state. Since,
in a tree, there is a unique path from the root to any node, we can simply label each node by the
length of that path.
Slide 2.4.27
So, for example, the trivial path from S to S is the shortest path.
Slide 2.4.28
Then the path from S to A, with length 2, is the next shortest path.
Slide 2.4.29
Then the path from S to A to C, with length 4, is the next shortest path.
Slide 2.4.30
Then comes the path from S to B, with length 5.
Slide 2.4.31
Followed by the path from S to A to D, with length 6.
Slide 2.4.32
And the path from S to B to D, also with length 6.
Slide 2.4.33
And, finally the path from S to A to D to G with length 8. The other path (S B D G) also has length
8.
Slide 2.4.34
This gives us the path we found. Note that the sequence of expansion corresponds precisely to path-
length order, so it is not surprising we find the shortest path.
6.034 Notes: Section 2.5
Slide 2.5.1
Now, we will turn our attention to what is probably the most popular search algorithm in AI, the
A* algorithm. A* is an informed, optimal search algorithm. We will spend quite a bit of time
going over A*; we will start by contrasting it with uniform-cost search.
Slide 2.5.2
Uniform-cost search as described so far is concerned only with expanding short paths; it pays no
particular attention to the goal (since it has no way of knowing where it is). UC is really an
algorithm for finding the shortest paths to all states in a graph rather than being focused on reaching
a particular goal.
Slide 2.5.3
We can bias UC to find the shortest path to the goal that we are interested in by using a heuristic
estimate of remaining distance to the goal. This, of course, cannot be the exact path distance (if we
knew that we would not need much of a search); instead, it is a stand-in for the actual distance that
can give us some guidance.
Slide 2.5.4
What we can do is to enumerate the paths by order of the SUM of the actual path length and the
estimate of the remaining distance. Think of this as our best estimate of the TOTAL distance to the
goal. This makes more sense if we want to generate a path to the goal preferentially to short paths
away from the goal.
Slide 2.5.5
We call an estimate that always underestimates the remaining distance from any node an
admissible (heuristic) estimate.
Slide 2.5.6
In order to preserve the guarantee that we will find the shortest path by expanding the partial paths
based on the estimated total path length to the goal (like in UC without an expanded list), we must
ensure that our heuristic estimate is admissible. Note that straight-line distance is always an
underestimate of path-length in Euclidean space. Of course, by our constraint on distances, the
constant function 0 is always admissible (but useless).
Slide 2.5.7
UC using an admissible heuristic is known as A* (A star). It is a very popular search method in AI.
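A minimal A* sketch in Python (illustrative, not the course's code): it is identical to uniform-cost search except that the queue is ordered by path cost plus the heuristic estimate. The graph's link costs and most of the heuristic values are assumptions; h(A) = 2 and h(B) = 3 match the values quoted in the walk-through below, and the rest are chosen to be admissible.

```python
import heapq

def astar(graph, h, start, goal):
    """Uniform-cost search ordered by path_cost + h(state), h admissible."""
    q = [(h(start), 0, [start])]          # (estimated total, cost so far, path)
    while q:
        est, cost, path = heapq.heappop(q)
        state = path[-1]
        if state == goal:                 # stop when the goal is expanded
            return cost, path
        for nbr, c in graph.get(state, []):
            heapq.heappush(q, (cost + c + h(nbr), cost + c, path + [nbr]))
    return None

# Hypothetical admissible estimates: h(A)=2 and h(B)=3 are from the slides,
# the others are assumptions consistent with the assumed link costs.
h = {'S': 0, 'A': 2, 'B': 3, 'C': 2, 'D': 1, 'G': 0}.get
graph = {'S': [('A', 2), ('B', 5)],
         'A': [('C', 2), ('D', 4)],
         'B': [('D', 1), ('G', 5)],
         'D': [('C', 3), ('G', 2)]}
```

With these values, `astar(graph, h, 'S', 'G')` returns `(8, ['S', 'A', 'D', 'G'])`, the same shortest path as uniform-cost search but with the search biased toward the goal.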
Slide 2.5.8
Let's look at a quick example of the straight-line distance underestimate for path length in a graph.
Consider the following simple graph, which we are assuming is embedded in Euclidean space, that
is, think of the states as city locations and the length of the links are proportional to the driving
distance between the cities along the best roads.
Slide 2.5.9
Then, we can use the straight-line (airline) distances (shown in red) as an underestimate of the actual
driving distance between any city and the goal. The best possible driving distance between two
cities cannot be better than the straight-line distance. But, it can be much worse.
Slide 2.5.10
Here we see that the straight-line estimate between B and G is very bad. The actual driving distance
is much longer than the straight-line underestimate. Imagine that B and G are on different sides of
the Grand Canyon, for example.
Slide 2.5.11
To understand why an underestimate of remaining distance may help reach the goal
faster, it may help to visualize the behavior of UC in a simple example.
Imagine that the states in a graph represent points in a plane and the connectivity is to nearest
neighbors. In this case, UC will expand nodes in order of distance from the start point. That is, as
time goes by, the expanded points will be located within expanding circular contours centered on the
start point. Note, however, that points heading away from the goal will be treated just the same as
points that are heading towards the goal.
Slide 2.5.12
If we add in an estimate of the straight-line distance to the goal, the points expanded will be
bounded contours that keep constant the sum of the distance from the start and the distance to the
goal, as suggested in the figure. What the underestimate has done is to "bias" the search towards the
goal.
Slide 2.5.13
Let's walk through an example of A*, that is, uniform-cost search using a heuristic which is an
underestimate of remaining cost to the goal. In this example we are focusing on the use of the
underestimate. The heuristic we will be using is similar to the earlier one but slightly modified to be
admissible.
We start at S as usual.
Slide 2.5.14
And expand to A and B. Note that we are using the path length + underestimate and so the S-A path
has a value of 4 (length 2, estimate 2). The S-B path has a value of 8 (5 + 3). We pick the path to A.
Slide 2.5.15
Expand to C and D and pick the path with shorter estimate, to C.
Slide 2.5.16
C has no descendants, so we pick the shorter path (to D).
Slide 2.5.17
Then a path to the goal has the best value. However, there is another path that is tied, the S-B path.
It is possible that this path could be extended to the goal with a total length of 8 and we may prefer
that path (since it has fewer states). We have assumed here that we will ignore that possibility; in
some other circumstances, that may not be appropriate.
Slide 2.5.18
So, we stop with a path to the goal of length 8.
Slide 2.5.19
It is important to realize that not all heuristics are admissible. In fact, the rather arbitrary heuristic
values we used in our best-first example are not admissible given the path lengths we later assigned.
In particular, the value for D is bigger than its distance to the goal and so this set of distances is not
everywhere an underestimate of distance to the goal from every node. Note that the (arbitrary) value
assigned for S is also an overestimate but this value would have no ill effect since at the time S is
expanded there are no alternatives.
Slide 2.5.20
Although it is easy and intuitive to illustrate the concept of a heuristic by using the notion of straight-
line distance to the goal in Euclidean space, it is important to remember that this is by no means the
only example.
Take solving the so-called 8-puzzle, in which the goal is to arrange the pieces as in the goal state on
the right. We can think of a move in this game as sliding the "empty" space to one of its nearest
vertical or horizontal neighbors. We can help steer a search to find a short sequence of moves by
using a heuristic estimate of the moves remaining to the goal.
One admissible estimate is simply the number of misplaced tiles. No move can get more than one
misplaced tile into place, so this measure is a guaranteed underestimate and hence admissible.
Slide 2.5.21
We can do better if we note that, in fact, each move can at best decrease by one the
"Manhattan" (aka Taxicab, aka rectilinear) distance of a tile from its goal.
So, the sum of these distances for each misplaced tile is also an underestimate. Note that it is always
a better (larger) underestimate than the number of misplaced tiles. In this example, there are 7
misplaced tiles (all except tile 2), but the Manhattan distance estimate is 17 (4 for tile 1, 0 for tile 2,
2 for tile 3, 3 for tile 4, 1 for tile 5, 3 for tile 6, 1 for tile 7 and 3 for tile 8). | https://ocw.mit.edu/courses/6-034-artificial-intelligence-spring-2005/089f994f6a760c3c0b34e43dcd18ea60_ch2_search2.pdf |
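Both estimates are easy to compute. A Python sketch (illustrative only; states are 9-tuples read row-major, with 0 for the blank, and the goal layout in the test is a hypothetical one, not necessarily the slide's):

```python
def manhattan(state, goal):
    """Sum of Manhattan (rectilinear) distances of each tile from its goal
    position; the blank (0) is ignored.  Admissible: each move slides one
    tile one square, so it can decrease this sum by at most 1."""
    pos = {tile: (i // 3, i % 3) for i, tile in enumerate(goal)}
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        r, c = i // 3, i % 3
        gr, gc = pos[tile]
        total += abs(r - gr) + abs(c - gc)
    return total

def misplaced(state, goal):
    """Number of misplaced tiles: a weaker (never larger) admissible estimate."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)
```

Since every misplaced tile is at Manhattan distance at least 1 from its goal square, `manhattan` always dominates `misplaced`, which is why it steers the search better.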
18.417 Introduction to Computational Molecular Biology
Lecture 13: October 21, 2004
Lecturer: Ross Lippert
Scribe: Eitan Reich
Editor: Peter Lee
13.1 Introduction
We have been looking at algorithms to find optimal scoring alignments of a query text
Q with a database T . While these algorithms have reasonable asymptotic runtimes,
namely O(|Q||T |), they are impractical for larger databases such as genomes. To
improve the runtime of our algorithms, we will sacrifice the optimality requirement
and instead use smart heuristics and statistical significance data to find “close to
optimal” alignments and quantify how significant such alignments are.
13.2 Inexact Matching
The inexact matching problem is to find good alignments while allowing for limited
discrepancies in the form of substitutions, insertions and deletions. The problem can
be stated in the following two ways, which are slight variations of each other:
Definition 1 (k-mismatch l-word problem) Given distance limit k, word length
l, query string q, and text string t, return all pairs of integers (i, j) such that
d(qi · · · qi+l , tj · · · tj+l) <= k, where d is the Hamming distance function.
Definition 2 (S-scoring l-word problem) Given score requirement S, scoring
function σ, word length l, query string q, and text string t, return all pairs of integers
(i, j) such that σ(qi · · · qi+l, tj · · · tj+l) >= S, where σ(x, y) := Σi σ(xi, yi).
13.3 Pigeonhole Principle
A key insight in the inexact matching problem is that wherever there is a good
approximate alignment, there will be smaller exact alignments. For example, if there
is an alignment of 9 bases with at most 1 mismatch, there must be an exact alignment
of at least 4 bases. The pigeonhole principle is used to generalize this idea and quantify
exactly what type of exact alignments we can be guaranteed to find, given the type
of inexact alignment.

Figure 13.1: Good matches must contain exact matches.

To locate good approximate matches, we instead look for the exact matches that we
would expect to find in longer inexact alignments. These exact matches can then be
extended to produce longer matches that may no longer be exact, but may still be
within the given discrepancy threshold. More specifically, we
can locate exact matches and extend them, using the k-mismatch algorithm:
1. We know that wherever there is a k-mismatch l-word, there is at least an exact
match of length s where s = ⌊l/(k + 1)⌋.
2. We can look for potential alignment locations by finding all s-words that match
exactly between the query string and the text.
3. These s-words can then be extended to the left and the right to find an l-word
within k mismatches. To do this extension correctly takes O(l²) time, although
many methods don't actually achieve this run-time.
Or, using the S-scoring algorithm, as follows for the S-scoring problem:
1. We know that wherever there is an S-scoring l-word match, there must be some
s-word match with score threshold T (from exact matching)
2. To locate potential high-scoring locations, we form T-scoring neighborhoods of
the s-words in the query text.
3. All neighborhood words in T are then found by exact matching.
4. Each of these s-words can then be extended to the left and right to find an
l-word with score at least S.
To extend exact matching s-words to the left and right to produce approximately
matching l-words, we can use either ungapped extension, or gapped extension.
Using ungapped extension, the exact matches are extended to the left and right
without allowing any insertions or deletions. If two bases do not match, there is
simply a mismatch that counts towards the k-mismatch limit or will penalize the score
in the S-scoring problem.

Figure 13.2: Ungapped extension.

Using gapped extension, the exact matches are extended to the left and right, allowing
mismatches, insertions and deletions. While the exact matching s-word that we begin
with has no insertions or deletions, the approximate match that is produced by extension
allows for such imperfections, which will count toward the mismatch limit or penalize
the score in the S-scoring problem.
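The k-mismatch seed-and-extend recipe above can be sketched in Python with ungapped extension. This is a brute-force illustration, not BLAST's implementation, and the function name is mine:

```python
def k_mismatch_hits(q, t, l, k):
    """Pigeonhole seed-and-extend sketch for the k-mismatch l-word problem.

    Any l-word pair with <= k mismatches contains an exact run of length
    s = l // (k + 1), so we index the s-words of t, look up each s-word
    of q, and check every (ungapped) l-window covering each exact hit."""
    s = l // (k + 1)
    index = {}                                   # s-word -> positions in t
    for j in range(len(t) - s + 1):
        index.setdefault(t[j:j+s], []).append(j)
    hits = set()
    for i in range(len(q) - s + 1):
        for j in index.get(q[i:i+s], []):
            # slide an l-window over every alignment containing this seed hit
            for off in range(s - l, 1):
                a, b = i + off, j + off
                if a < 0 or b < 0 or a + l > len(q) or b + l > len(t):
                    continue
                mism = sum(x != y for x, y in zip(q[a:a+l], t[b:b+l]))
                if mism <= k:
                    hits.add((a, b))
    return hits
```

The real methods extend each seed hit outward incrementally rather than rescoring whole windows, which is where the O(l²) bound (and the shortcuts that avoid it) comes from.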
13.4 BLAST
BLAST, or Basic Local Alignment Search Tool, is the successor to two simpler tools:
FASTA, a nucleotide alignment tool, and FASTP, a protein sequence alignment tool.
Figure 13.3: Gapped extension.
Like its predecessors, BLAST works by starting with a seed and then, using an
extension heuristic, finding approximate matches from shorter exact matches. What is so
innovative about BLAST, however, is its incorporation of statistical measures along
with alignment results to tell how statistically significant an alignment is. In
addition, given a query and text string of any length, BLAST can find maximal scoring
pairs (MSPs) very efficiently. While BLAST originally just returned MSPs, it now
also returns alignments after extension.
Fact 1 (Altschul-Karlin statistical result) If our query string has length n and
our text has length m, then the expected number of MSPs with score greater than or
equal to S is:

E(S) = Kmn e^(−λS)

where K is a constant and λ is a normalizing factor that is a positive root of the
following equation:

Σ_{x,y∈Σ} p_x p_y e^(λσ(x,y)) = 1

where p_x is the frequency of character x from our alphabet Σ and σ is our scoring
function.
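The normalizing factor λ can be found numerically: the left-hand side of the constraint equals 1 at λ = 0 and grows without bound, so when the expected score is negative a simple bisection on λ > 0 finds the positive root. A Python sketch (function names are mine); for uniform DNA frequencies with +1/−1 match/mismatch scores the root works out to λ = ln 3:

```python
import math

def karlin_lambda(p, sigma, hi=10.0, iters=80):
    """Positive root of sum_{x,y} p_x p_y exp(lam * sigma(x, y)) = 1.

    p maps each character to its background frequency; sigma is the scoring
    function.  Assumes a negative expected score and at least one positive
    score, so a unique positive root exists below `hi`."""
    def f(lam):
        return sum(px * py * math.exp(lam * sigma(x, y))
                   for x, px in p.items() for y, py in p.items()) - 1.0
    lo = 1e-9                          # f < 0 just above 0, f > 0 at hi
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return (lo + hi) / 2.0

def expected_msps(K, m, n, lam, S):
    """Karlin-Altschul expectation: E(S) = K * m * n * exp(-lam * S)."""
    return K * m * n * math.exp(-lam * S)
```

For the uniform +1/−1 case the constraint reduces to 0.25 e^λ + 0.75 e^(−λ) = 1, whose positive solution is e^λ = 3.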
Fact 2 (Chen-Stein statistical result) The number of MSPs forms a Poisson
distribution with average value E(S).
13.4.1 Variations
• BLASTn: nucleotide to nucleotide database alignment
• BLASTp: protein to protein database alignment
• BLASTx: translated nucleotide to protein database alignment
• tBLASTn: protein to translated nucleotide database alignment
• tBLASTx: translated nucleotide to translated nucleotide database alignment
13.5 Filtration
The idea behind filtration is to find seeds that will locate every MSP.
Given a proposed alignment, we can produce a binary string where there is a 1
whenever the two strings match and a 0 if they do not. This binary string gives us
the matching rate of the alignment. An MSP is an alignment whose associated binary
string has a high proportion of 1s. For example, here is an MSP of length 20 where
more than 70% of the bases are aligned as matches:
Q a a t c t t g c g a g a c c a a t g g c a c t t
T c t t c c t g c g g g a c c t a t c c c a c a a
= 0 0 1 1 0 1 1 1 1 0 1 1 1 1 0 1 1 0 0 1 1 1 0 0
If, to locate MSPs, we look for an alignment that would produce an exact match of 4
bases (as we might have derived using the pigeonhole principle), we would be using
the seed 1111, which corresponds to finding a location in the binary string where
there are 4 consecutive 1s. We can see that if we choose our seed to be too small (for
example choosing 11 as our seed), we would hit too many locations in the string to
be a statistically significant alignment, since many short exact matches can occur by
chance and have nothing to do with the existence of an MSP. On the other hand, if
we choose our seed to be too long, we will miss many MSPs because there may still
be slight mismatches in an MSP that would cause the seed to reject that location.
A creative idea to allow the seed to account for slight mismatches, but at the same
time not pick up too many alignments that are occurring merely due to chance is to
used gap seeds.
Instead of looking for 1111, we can look for 11011. We can see the effectiveness of
the gapped seed as opposed to the consecutively spaced seed in the example because
the consecutively spaced seed misses a lot of locations in the MSP and almost fails to
hit it altogether. With the gapped seed however, a small amount of error is allowed
by introducing a dont care bit, and there are now many more hits on the MSP
to ensure that it isn’t missed. The idea of using gapped seeds rather than simple
consecutively spaced seeds increases the effectiveness of MSP search methods by not
allowing | https://ocw.mit.edu/courses/18-417-introduction-to-computational-molecular-biology-fall-2004/08cd50dd8f78529b51b90101f2617f36_lecture_13.pdf |
allowing random noise to produce too many hits while at the same time ensuring that
MSPs are hit. | https://ocw.mit.edu/courses/18-417-introduction-to-computational-molecular-biology-fall-2004/08cd50dd8f78529b51b90101f2617f36_lecture_13.pdf |
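A spaced-seed scan over a match string is short enough to write out directly. An illustrative Python sketch (function name is mine); the match string in the test is the binary string from the example above:

```python
def seed_hits(match_string, seed):
    """Positions where a spaced seed hits a binary match string.

    `seed` is a string of '1' (must match) and '0' (don't care); the seed
    hits at position i if every '1' in the seed lines up with a '1' in
    the match string."""
    hits = []
    n = len(seed)
    for i in range(len(match_string) - n + 1):
        if all(m == '1' for m, s in zip(match_string[i:i+n], seed)
               if s == '1'):
            hits.append(i)
    return hits
```

On the example's match string, the consecutive seed 1111 hits in only two places, while the gapped seed 11011 hits in three, illustrating how the don't-care bit tolerates isolated mismatches.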
6.895 Theory of Parallel Systems
Lecture 2
Cilk, Matrix Multiplication, and Sorting
Lecturer: Charles Leiserson
Lecture Summary
1. Parallel Processing With Cilk
This section provides a brief introduction to the Cilk language and how Cilk schedules and executes
parallel processes.
2. Parallel Matrix Multiplication
This section shows how to multiply matrices efficiently in parallel.
3. Parallel Sorting
This section describes a parallel implementation of the merge sort algorithm.
1 Parallel Processing With Cilk
We need a systems background to implement and test our parallel systems theories. This section gives
an introduction to the Cilk parallel-programming language. It then gives some background on how Cilk
schedules parallel processes and examines the role of race conditions in Cilk.
1.1 Cilk
This section introduces Cilk. Cilk is a version of C that runs in a parallel-processing environment. It uses
the same syntax as C with the addition of the keywords spawn, sync, and cilk. For example, the Fibonacci
function written in Cilk looks like this:
cilk int fib(int n)
{
    if (n < 2) return n;
    else
    {
        int x, y;
        x = spawn fib(n-1);
        y = spawn fib(n-2);
        sync;
        return (x + y);
    }
}
Cilk is a faithful extension of C, in that if the Cilk keywords are elided from a Cilk program, the result is
a C program which implements the Cilk semantics.
A function preceded by cilk is defined as a Cilk function. For example,
cilk int fib(int n);
defines a Cilk function called fib. Functions defined without the cilk keyword are typical C functions.
A function call preceded by the spawn keyword tells the Cilk compiler that the function call can be made
Figure 1: Call stack of an executing process. Boxes represent stack frames. (The two
panels show frames being pushed onto and popped from the procedure stack, and a
frame being stolen.)
asynchronously in a concurrent thread. The sync keyword forces the current thread to wait for asynchronous
function calls made from the current context to complete.
Cilk keywords introduce several idiosyncracies into the C syntax. A Cilk function cannot be called with
normal C calling conventions – it must be called with spawn and waited for with sync. The spawn keyword
can only be applied to a Cilk function. The spawn keyword cannot occur within the context of a C function.
Refer to the Cilk manual for more details.
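For readers more comfortable with Python, the spawn/sync pattern is loosely analogous to submitting work to a thread pool and waiting on the resulting futures. This is only an analogy (names are mine; Cilk's compiler and work-stealing scheduler work very differently), and only the first recursive call is handed to the pool so the sketch cannot deadlock on a bounded pool:

```python
from concurrent.futures import ThreadPoolExecutor

def fib(n):
    """Ordinary serial helper."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

def fib_parallel(pool, n):
    if n < 2:
        return n
    x = pool.submit(fib, n - 1)      # roughly "x = spawn fib(n-1)"
    y = fib(n - 2)                   # continue in the current thread
    return x.result() + y            # roughly "sync;" then "return x + y"
```

For example, `fib_parallel(pool, 10)` returns 55, with `fib(9)` potentially computed on another thread.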
1.2 Parallel-Execution Model
This section examines how Cilk runs processes in parallel. It introduces the concepts of work-sharing and
work-stealing and then outlines Cilk’s implementation of the work-stealing algorithm.
Cilk processes are scheduled using an online greedy scheduler. The performance bounds of the online
scheduler are close to the optimal offline scheduler. We will look at provable bounds on the performance of
the online scheduler later in the term.
Cilk schedules processes using the principle of work-stealing rather than work-sharing. Work-sharing is
where a thread is scheduled to run in parallel whenever the runtime makes an asynchronous function call.
Work-stealing, in contrast, is where a processor looks around for work whenever it becomes idle.
To better explain | https://ocw.mit.edu/courses/6-895-theory-of-parallel-systems-sma-5509-fall-2003/08fd01c65bec5dd4805a5088973f6e20_lecture2.pdf |
To better explain how Cilk implements work-stealing, let us first examine the call stack of a vanilla C
program running on a single processor. In figure 1, the stack grows downward. Each stack frame contains
local variables for a function. When a function makes a function call, a new stack frame is pushed onto the
stack (added to the bottom) and when a function returns, its stack frame is popped from the stack. The
call stack maintains synchronization between procedures and functions that are called.
In the work-sharing scheme, when a function is spawned, the scheduler runs the spawned thread in
parallel with the current thread. This has the benefit of maximizing parallelism. Unfortunately, the cost of
setting up new threads is high and should be avoided.
Work-stealing, on the other hand, only branches execution into parallel threads when a processor is
idle. This has the benefit of executing with precisely the amount of parallelism that the hardware can take
advantage of. It minimizes the number of new threads that must be setup. Work-stealing is the lazy way to
put off work for parallel execution until parallelism actually occurs. It has the benefit of running with the
same efficiency as a serial program in a uniprocessor environment.
Another way to view the distinction between work-stealing and work-sharing is in terms of how the
scheduler walks the computation graph. Work-sharing branches as soon and as often as possible, walking
the computation graph with a breadth-first search. Work-stealing only branches when necessary, walking
the graph with a depth-first search.
Cilk’s implementation of work-stealing avoids running threads that are likely to share variables by
scheduling threads to run from the other end of the call stack. When a processor is idle, it chooses a random processor
Figure 2: Example of a call stack shown as a cactus stack, and the views of the stack as seen by each procedure.
Boxes represent stack frames.
and finds the sleeping stack frame that is closest to the base of that processor’s stack and executes it. This
way, Cilk always parallelizes code execution at the oldest possible code branch.
1.3 Cactus Stack
Cilk uses a cactus stack to implement C’s rule for sharing of function-local variables. A cactus stack is
a parallel stack implemented as a tree. A push maps to following a branch to a child and a pop maps to
returning to the parent in a tree. For example, the cactus tree in figure 2 represents the call stack constructed
by a call to A in the following code:
void A(void)
{
B();
C();
}
void B(void)
{
D();
E();
}
void C(void)
{
F();
}
void D(void) {}
void E(void) {}
void F(void) {}
Cilk has the same rules for pointers as C. Pointers to local variables can be passed downwards in the
call stack. Pointers can be passed upward only if they reference data stored on the heap (allocated with
malloc). In other words, a stack frame can only see data stored in the current and in previous stack frames.
Functions cannot return references to local variables.
The complete call tree is shown in figure 2. Each procedure sees a different view of the call stack based
on how it is called. For example, B sees a call stack of A followed by B, D sees a call stack of A followed by
B followed by D and so on. When procedures are run in parallel by Cilk, the running threads operate on
their view of the call stack. The stack maintained by each process is a reference to the actual call stack, not
a copy of it. Cilk maintains coherence among call stacks that contain the same frames using methods that
we will discuss later.
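The parent-pointer structure behind a cactus stack can be modeled in a few lines of Python. This is an illustrative model only, not Cilk's implementation; it reproduces the views of the stack from Figure 2:

```python
class Frame:
    """A node in a cactus stack: each frame points to its caller, so the
    'stack' any procedure sees is the chain of parent links to the root.
    Pushing creates a child; popping returns to the parent."""
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent

    def push(self, name):
        return Frame(name, self)

    def view(self):
        """The call stack as this frame sees it, root first."""
        frame, stack = self, []
        while frame:
            stack.append(frame.name)
            frame = frame.parent
        return stack[::-1]

# The call tree from Figure 2: A calls B and C; B calls D and E; C calls F.
A = Frame('A')
B, C = A.push('B'), A.push('C')
D, E, F = B.push('D'), B.push('E'), C.push('F')
```

Because sibling frames share their parent chain, D's view is A-B-D and F's view is A-C-F: each thread sees its own stack, yet the common frames are stored only once.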
1.4 Race Conditions
The single most prominent reason that parallel computing is not widely deployed today is because of race
conditions. Identifying and debugging race conditions in parallel code is hard. Once a race condition has been
found, no methodology currently exists to write a regression test to ensure that the bug is not reintroduced
during future development. For these reasons, people do not write and deploy parallel code unless they
absolutely must. This section examines an example race condition in Cilk.
Consider the following code:
cilk int foo(void)
{
int x = 0;
spawn bar(&x);
spawn bar(&x);
sync;
return x;
}
cilk void bar(int *p)
{
*p += 1;
}
If this were serial code, we would expect foo to return 2. What value is returned by foo in the parallel
case? Assume the increment performed by bar is implemented with assembly that looks like this:
read x
add
write x
Then, the parallel execution looks like the following:
bar 1:
read x (1)
add
write x (2)
bar 2:
read x (3)
add
write x (4)
where bar 1 and bar 2 run concurrently. On a single processor, the steps are executed (1) (2) (3) (4) and
foo returns 2 as expected. In the parallel case, however, the execution could occur in the order (1) (3) (2)
(4), in which case foo would return 1. The simple code exhibits a race condition. Cilk has a tool called the
Nondeterminator which can be used to help check for race conditions.
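The lost-update interleaving can be replayed deterministically. This Python sketch (an illustration, not the Nondeterminator) models bar's read/add/write steps explicitly and executes both the serial schedule (1)(2)(3)(4) and the racy schedule (1)(3)(2)(4):

```python
# Model each increment as an explicit read step and a write step, then
# replay a chosen interleaving of the two "threads" deterministically.
def run_schedule(order):
    x = 0
    local = {}                     # per-thread register holding the read value
    for thread, step in order:
        if step == "read":
            local[thread] = x      # read x
        else:
            x = local[thread] + 1  # add, then write x
    return x

serial = [(1, "read"), (1, "write"), (2, "read"), (2, "write")]  # (1)(2)(3)(4)
racy   = [(1, "read"), (2, "read"), (1, "write"), (2, "write")]  # (1)(3)(2)(4)

print(run_schedule(serial))  # 2, as in the serial execution
print(run_schedule(racy))    # 1: the second write clobbers the first
```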
2 Matrix Multiplication and Merge Sort
In this section we explore multithreaded algorithms for matrix multiplication and array sorting. We also
analyze the work and critical path-lengths. From these measures, we can compute the parallelism of the
algorithms.
2.1 Matrix Multiplication
To multiply two n × n matrices in parallel, we use a recursive algorithm. This algorithm uses the following
formulation, where matrix A multiplies matrix B to produce a matrix C:
[ C11  C12 ]   [ A11  A12 ]   [ B11  B12 ]   [ A11B11 + A12B21   A11B12 + A12B22 ]
[ C21  C22 ] = [ A21  A22 ] · [ B21  B22 ] = [ A21B11 + A22B21   A21B12 + A22B22 ]
This formulation expresses an n × n matrix multiplication as 8 multiplications and 4 additions of (n/2) ×
(n/2) submatrices. The multithreaded algorithm Mult performs the above computation when n is a power
of 2. Mult uses the subroutine Add to add two n × n matrices.
Mult(C, A, B, n)
if n = 1
then C[1, 1] ← A[1, 1] · B[1, 1]
else
allocate a temporary matrix T [1 . . n,1 . . n]
partition A, B, C and T into (n/2) × (n/2) submatrices
spawn Mult(C11, A11, B11, n/2)
spawn Mult(C12, A11, B12, n/2)
spawn Mult(C21, A21, B11, n/2)
spawn Mult(C22, A21, B12, n/2)
spawn Mult(T11, A12, B21, n/2)
spawn Mult(T12, A12, B22, n/2)
spawn Mult(T21, A22, B21, n/2)
spawn Mult(T22, A22, B22, n/2)
sync
spawn Add(C, T, n)
sync
Add(C, T, n)
if n = 1
then C[1, 1] ← C[1, 1] + T [1, 1]
else partition C and T into (n/2) × (n/2) submatrices
spawn Add(C11, T11, n/2)
spawn Add(C12, T12, n/2)
spawn Add(C21, T21, n/2)
spawn Add(C22, T22, n/2)
sync
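A serial Python sketch of the same divide-and-conquer scheme (the spawns become ordinary recursive calls, which is exactly the one-processor execution whose running time defines the work). As with Mult, n is assumed to be a power of 2:

```python
# Recursive matrix multiply: 8 recursive multiplications into C and T,
# followed by the Add step C <- C + T. Serial stand-in for Mult/Add.
def mult(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M, r, c):  # extract an (n/2) x (n/2) submatrix
        return [row[c:c + h] for row in M[r:r + h]]
    A11, A12, A21, A22 = quad(A, 0, 0), quad(A, 0, h), quad(A, h, 0), quad(A, h, h)
    B11, B12, B21, B22 = quad(B, 0, 0), quad(B, 0, h), quad(B, h, 0), quad(B, h, h)
    C = [[mult(A11, B11), mult(A11, B12)], [mult(A21, B11), mult(A21, B12)]]
    T = [[mult(A12, B21), mult(A12, B22)], [mult(A22, B21), mult(A22, B22)]]
    # Add step: C <- C + T, stitching the four quadrants back together.
    out = [[0] * n for _ in range(n)]
    for qr in range(2):
        for qc in range(2):
            for i in range(h):
                for j in range(h):
                    out[qr * h + i][qc * h + j] = C[qr][qc][i][j] + T[qr][qc][i][j]
    return out

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(mult(A, B))  # [[19, 22], [43, 50]]
```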
The analysis of the algorithms in this section requires the use of the Master Theorem. We state the
Master Theorem here for convenience.
Figure 3: Critical path. The squiggles represent two different code paths. The circle is another code path.
Theorem 1 (Master Theorem) Let a ≥ 1 and b > 1 be constants, let f (n) be a function, and let T (n) be
defined on the nonnegative integers by the recurrence
T (n) = aT (n/b) + f (n),
where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) can be bounded asymptotically as follows.
1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a) lg^k n) for some constant k ≥ 0, then T(n) = Θ(n^(log_b a) lg^(k+1) n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a f(n/b) ≤ c f(n) for some constant c < 1 and
all sufficiently large n, then T(n) = Θ(f(n)).
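As a quick illustrative check of case 1 (not part of the lecture), the recurrence T(n) = 4T(n/2) + 1 should grow like n^(log_2 4) = n², so doubling n should roughly quadruple T(n):

```python
# Numeric sanity check: T(n) = 4T(n/2) + 1 falls in case 1, so the
# ratio T(2n)/T(n) should approach 2^2 = 4 as n grows.
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else 4 * T(n // 2) + 1

ratios = [T(2 * n) / T(n) for n in (2 ** 8, 2 ** 10, 2 ** 12)]
print(ratios)  # each ratio is very close to 4
```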
We begin by analyzing the work for Mult. The work is the running time of the algorithm on one
processor, which we compute by solving the recurrence relation for the serial equivalent of the algorithm.
We note that the matrix partitioning in Mult and Add takes O(1) time, as it requires only a constant
number of indexing operations. For the subroutine Add, the work at the top level (denoted A1(n)) then
consists of the work of 4 problems of size n/2 plus a constant factor, which is expressed by the recurrence
A1(n) = 4A1(n/2) + Θ(1)    (1)
      = Θ(n²).             (2)
We solve this recurrence by invoking case 1 of the Master Theorem. Similarly, the recurrence for the work
of Mult (denoted M1(n)):
M1(n) = 8M1(n/2) + Θ(n²)    (3)
      = Θ(n³).              (4)
We also solve this recurrence with case 1 of the Master Theorem. The work is the same as for the traditional
triply-nested-loop serial algorithm.
The critical-path length is the maximum path length through a computation, as illustrated by figure
3. For Add, all subproblems have the same critical-path length, and all are executed in parallel. The
critical-path length (denoted A∞(n)) is a constant plus the critical-path length of one subproblem, and is
represented by the recurrence (solved by case 2 of the Master Theorem):

A∞(n) = A∞(n/2) + Θ(1)    (5)
      = Θ(lg n).           (6)

Using this result, the critical-path length for Mult (denoted M∞(n)) is

M∞(n) = M∞(n/2) + Θ(lg n)    (7)
      = Θ(lg² n),            (8)
by case 2 of the Master Theorem. From the work and critical-path length, we compute the parallelism:
M1(n)/M∞(n) = Θ(n³ / lg² n).    (9)
As an example, if n = 1000, the parallelism ≈ 10⁷. In practice, multiprocessor systems don't have more than
≈ 64,000 processors, so the algorithm has more than adequate parallelism.
In fact, it is possible to trade parallelism for an algorithm that runs faster in practice. Mult may
run slower than an in-place algorithm because of the hierarchical structure of memory. We introduce a
new algorithm, Mult-Add, that trades parallelism in exchange for eliminating the need for the temporary
matrix T .
Mult-Add(C, A, B, n)
if n = 1
then C[1, 1] ← C[1, 1] + A[1, 1] · B[1, 1]
else partition A, B, and C into (n/2) × (n/2) submatrices
spawn Mult-Add(C11, A11, B11, n/2)
spawn Mult-Add(C12, A11, B12, n/2)
spawn Mult-Add(C21, A21, B11, n/2)
spawn Mult-Add(C22, A21, B12, n/2)
sync
spawn Mult-Add(C11, A12, B21, n/2)
spawn Mult-Add(C12, A12, B22, n/2)
spawn Mult-Add(C21, A22, B21, n/2)
spawn Mult-Add(C22, A22, B22, n/2)
sync
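A serial Python sketch of the in-place idea: every call accumulates A·B into C (C ← C + A·B), so no temporary matrix is needed. Offsets into the full arrays stand in for the submatrix views; n is assumed to be a power of 2:

```python
# In-place accumulating multiply: C <- C + A*B, with (row, col) offsets
# standing in for submatrix views, so no temporary matrix T is allocated.
def mult_add(C, A, B, n, cr=0, cc=0, ar=0, ac=0, br=0, bc=0):
    if n == 1:
        C[cr][cc] += A[ar][ac] * B[br][bc]
        return
    h = n // 2
    # First phase: the four "C_ij += A_i1 * B_1j" subproblems.
    mult_add(C, A, B, h, cr,     cc,     ar,     ac,     br,     bc)
    mult_add(C, A, B, h, cr,     cc + h, ar,     ac,     br,     bc + h)
    mult_add(C, A, B, h, cr + h, cc,     ar + h, ac,     br,     bc)
    mult_add(C, A, B, h, cr + h, cc + h, ar + h, ac,     br,     bc + h)
    # Second phase (after the sync): "C_ij += A_i2 * B_2j".
    mult_add(C, A, B, h, cr,     cc,     ar,     ac + h, br + h, bc)
    mult_add(C, A, B, h, cr,     cc + h, ar,     ac + h, br + h, bc + h)
    mult_add(C, A, B, h, cr + h, cc,     ar + h, ac + h, br + h, bc)
    mult_add(C, A, B, h, cr + h, cc + h, ar + h, ac + h, br + h, bc + h)

C = [[0, 0], [0, 0]]
mult_add(C, [[1, 2], [3, 4]], [[5, 6], [7, 8]], 2)
print(C)  # [[19, 22], [43, 50]]
```

The two phases mirror the two spawn groups separated by a sync: writes to the same C quadrant never run concurrently.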
The work for Mult-Add (denoted M′1(n)) is the same as the work for Mult, M′1(n) = Θ(n³). Since
the algorithm now executes four recursive calls in parallel followed in series by another four recursive calls
in parallel, the critical-path length (denoted M′∞(n)) is

M′∞(n) = 2M′∞(n/2) + Θ(1)    (10)
       = Θ(n)                (11)

by case 1 of the Master Theorem. The parallelism is now

M′1(n)/M′∞(n) = Θ(n²).    (12)
When n = 1000, the parallelism ≈ 10⁶, which is still quite high.
The naive algorithm (M′′) that computes n² dot-products in parallel yields the following theoretical
results:

M′′1(n) = Θ(n³)                  (13)
M′′∞(n) = Θ(lg n)                (14)
⇒ Parallelism = Θ(n³ / lg n).    (15)
Although it does not use temporary storage, it is slower in practice due to less memory locality.
2.2 Sorting
In this section, we consider a parallel algorithm for sorting an array. We start by parallelizing the code for
Merge-Sort while using the traditional linear time algorithm Merge to merge the two sorted subarrays.
Merge-Sort(A, p, r)
if p < r
then q ← ⌊(p + r)/2⌋
spawn Merge-Sort(A, p, q)
spawn Merge-Sort(A, q + 1, r)
sync
Merge(A, p, q, r)
Since the running time of Merge is Θ(n), the work (denoted T1(n)) for Merge-Sort is
T1(n) = 2T1(n/2) + Θ(n)    (16)
      = Θ(n lg n)          (17)
by case 2 of the Master Theorem. The critical-path length (denoted T∞(n)) is the critical-path length of
one of the two recursive spawns plus that of Merge:
T∞(n) = T∞(n/2) + Θ(n)    (18)
      = Θ(n)              (19)
by case 3 of the Master Theorem. The parallelism, T1(n)/T∞(n) = Θ(lg n), is not scalable. The bottleneck is
the linear-time Merge. We achieve better parallelism by designing a parallel version of Merge.
P-Merge(A[1..l], B[1..m], C[1..n])
if m > l
then spawn P-Merge(B[1 . . m], A[1 . . l], C[1 . . n])
elseif n = 1
then C[1] ← A[1]
elseif l = 1
then if A[1] ≤ B[1]
then C[1] ← A[1]; C[2] ← B[1]
else C[1] ← B[1]; C[2] ← A[1]
else find j such that B[j] ≤ A[l/2] ≤ B[j + 1] using binary search
spawn P-Merge(A[1..(l/2)], B[1..j], C[1..(l/2 + j)])
spawn P-Merge(A[(l/2 + 1)..l], B[(j + 1)..m], C[(l/2 + j + 1)..n])
sync
P-Merge puts the elements of arrays A and B into array C in sequential order, where n = l + m. The
algorithm finds the median of the larger array and uses it to partition the smaller array. Then, it recursively
merges the lower portions and the upper portions of the arrays. The operation of the algorithm is illustrated
in figure 4.
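A serial Python sketch of the same merging strategy (the two recursive merges are the parallel spawns; for simplicity this version returns a new list rather than writing into C):

```python
# P-Merge sketch: split the larger array at its median, binary-search the
# split point in the smaller array, and recursively merge the two halves.
import bisect

def p_merge(A, B):
    if len(A) < len(B):
        A, B = B, A                     # ensure A is the larger array
    if len(A) == 0:
        return []                       # both arrays empty
    if len(A) == 1:
        return sorted(A + B)            # at most two elements in total
    half = len(A) // 2
    pivot = A[half - 1]                 # median element of the larger array
    j = bisect.bisect_right(B, pivot)   # B[:j] <= pivot < B[j:]
    # The two recursive merges here are the two spawns in the pseudocode.
    return p_merge(A[:half], B[:j]) + p_merge(A[half:], B[j:])

print(p_merge([1, 4, 7, 9], [2, 3, 8]))  # [1, 2, 3, 4, 7, 8, 9]
```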
We begin by analyzing the critical-path length of P-Merge. The critical-path length is equal to the
maximum critical-path length of the two spawned subproblems plus the work of the binary search. The
binary search completes in Θ(lg m) time, which is Θ(lg n) in the worst case. For the subproblems, half of A
is merged with all of B in the worst case. Since l ≥ n/2, at least n/4 elements are merged in the smaller
subproblem. That leaves at most 3n/4 elements to be merged in the larger subproblem. Therefore, the
critical-path is
T∞(n) ≤ T∞(3n/4) + O(lg n)
      = O(lg² n)

by case 2 of the Master Theorem.

[Figure 4: Find where the middle element of A goes into B. The boxes represent arrays.]

To analyze the work of P-Merge, we set up a recurrence by using the
observation that each subproblem operates on αn elements, where 1/4 ≤ α ≤ 3/4. Thus, the work satisfies
the recurrence
T1(n) = T1(αn) + T1((1 − α)n) + O(lg n).
We shall show that T1(n) = Θ(n) by using the substitution method. We take T1(n) ≤ an − b lg n as our
inductive assumption, for constants a, b > 0. We have
T1(n) ≤ aαn − b lg(αn) + a(1 − α)n − b lg((1 − α)n) + Θ(lg n)
      = an − b(lg(αn) + lg((1 − α)n)) + Θ(lg n)
      = an − b(lg α + lg n + lg(1 − α) + lg n) + Θ(lg n)
      = an − b lg n − (b(lg n + lg(α(1 − α))) − Θ(lg n))
      ≤ an − b lg n,
since we can choose b large enough so that b(lg n + lg(α(1 − α))) dominates Θ(lg n). We can also pick a large
enough to satisfy the base conditions. Thus, T1(n) = Θ(n), which is the same as the work for the ordinary
Merge. Reanalyzing the Merge-Sort algorithm, with P-Merge replacing Merge, we find that the work
remains the same, but the critical-path length is now
T∞(n) = T∞(n/2) + Θ(lg² n)    (20)
      = Θ(lg³ n)              (21)
by case 2 of the Master Theorem. The parallelism is now Θ(n lg n)/Θ(lg3 n) = Θ(n/ lg2 n). By using a more
clever algorithm, a parallelism of Ω(n/ lg n) can be achieved.
While it is important to analyze the theoretical bounds of algorithms, it is also necessary that the
algorithms perform well in practice. One shortcoming of Merge-Sort is that it is not in-place. An
in-place parallel version of Quick-Sort exists, which performs better than Merge-Sort in practice.
Additionally, while we desire a large parallelism, it is good practice to design algorithms that scale down
as well as up. We want the performance of our parallel algorithms when running on one processor to compare
well with the performance of the serial version of the algorithm. The best sorting algorithm to date requires
only 20% more work than the serial equivalent. Coming up with a dynamic multithreaded algorithm which
works well in practice is a good research project.
15.082J, 6.855J, and ESD.78J
September 21, 2010
Eulerian Walks, Flow Decomposition, and Transformations
Eulerian Walks in Directed Graphs in O(m) time.
Step 1. Create a breadth first search tree into node
1. For j not equal to 1, put the arc out of j in T
last on the arc list A(j).
Step 2. Create an Eulerian cycle by starting a walk
at node 1 and selecting arcs in the order they
appear on the arc lists.
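A runnable Python sketch of the two steps, assuming the input digraph is Eulerian (connected, with in-degree equal to out-degree at every node):

```python
# Step 1: BFS tree *into* the root (BFS on reversed arcs), then put each
# node's tree arc last on its arc list A(j).
# Step 2: walk from the root, taking arcs in list order until none remain.
from collections import deque

def eulerian_cycle(arcs, root=1):
    out = {}
    for i, j in arcs:
        out.setdefault(i, []).append(j)
        out.setdefault(j, [])
    rev = {v: [] for v in out}
    for i, j in arcs:
        rev[j].append(i)
    tree = {}                          # tree[j] = tree arc out of j, toward root
    seen, queue = {root}, deque([root])
    while queue:
        v = queue.popleft()
        for u in rev[v]:
            if u not in seen:
                seen.add(u)
                tree[u] = v
                queue.append(u)
    for j, t in tree.items():          # tree arc goes last on A(j)
        out[j].remove(t)
        out[j].append(t)
    walk, v = [root], root
    ptr = {u: 0 for u in out}
    while ptr[v] < len(out[v]):
        nxt = out[v][ptr[v]]
        ptr[v] += 1
        walk.append(nxt)
        v = nxt
    return walk

arcs = [(1, 2), (2, 3), (3, 1), (1, 3), (3, 2), (2, 1)]
walk = eulerian_cycle(arcs)
print(walk)  # [1, 2, 3, 2, 1, 3, 1] — every arc is traversed exactly once
```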
Proof of Correctness
Relies on the following observation and invariant:
Observation: The walk will terminate at node 1.
Whenever the walk visits node j for j ≠ 1, the walk
has traversed one more arc entering node j than
leaving node j.
Invariant: If the walk has not traversed the tree arc
for node j, then there is a path from node j to
node 1 consisting of nontraversed tree arcs.
Eulerian Cycle
Animation
Eulerian Cycles in undirected graphs
Strategy: reduce to the directed graph problem as
follows:
Step 1. Use dfs to partition the arcs into disjoint
cycles
Step 2. Orient each arc along its directed cycle.
Afterwards, for all i, the number of arcs entering
node i is the same as the number of arcs leaving
node i.
Step 3. Run the algorithm for finding Eulerian
Cycles in directed graphs
Flow Decomposition and Transformations
Flow Decomposition
Removing Lower Bounds
Removing Upper Bounds
Node splitting
Arc flows: an arc flow x is a vector x satisfying, for each node i,
b(i) = ∑j xij − ∑j xji.
We are not focused on upper and lower bounds on x for now.
Flows along Paths
Usual: represent flows in terms of flows in arcs.
Alternative: represent a flow as the sum of flows
in paths and cycles.
[Figure: two units of flow along the path P, and one unit of flow around the cycle C.]
Properties of Path Flows
Let P be a directed path.
Let Flow(δ, P) be a flow of δ units in each arc of the path P.

[Figure: Flow(2, P) for a path P.]

Observation. If P is a path from s to t, then Flow(δ, P) sends δ units of flow from s to t, and has
conservation of flow at other nodes.
Property of Cycle Flows
If p is a cycle, then sending one unit of flow along
p satisfies conservation of flow everywhere.
[Figure: one unit of flow around a cycle.]
Representations as Flows along Paths and Cycles
Let P be a collection of paths; let f(P) denote the flow in path P.
Let C be a collection of cycles; let f(C) denote the flow in cycle C.
One can convert the path and cycle flows into an
arc flow x as follows: for each arc (i,j) ∈ A
xij = ∑P∋(i,j) f(P) + ∑C∋(i,j) f(C)
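A small Python sketch of this conversion, representing each path or cycle as a node sequence (a cycle repeats its first node at the end):

```python
# Convert path and cycle flows into an arc flow x: for each arc (i, j),
# accumulate f(P) over paths containing it and f(C) over cycles containing it.
def to_arc_flow(path_flows, cycle_flows):
    x = {}
    for seq, f in path_flows + cycle_flows:
        for arc in zip(seq, seq[1:]):
            x[arc] = x.get(arc, 0) + f
    return x

paths  = [([1, 2, 3], 2)]            # two units along path 1-2-3
cycles = [([2, 3, 2], 1)]            # one unit around cycle 2-3-2
print(to_arc_flow(paths, cycles))    # {(1, 2): 2, (2, 3): 3, (3, 2): 1}
```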
Flow Decomposition
x:     initial flow
y:     updated flow
G(y):  subgraph with arcs (i, j) with yij > 0 and incident nodes
f(P):  flow around path P (during the algorithm)
P:     paths with flow in the decomposition
C:     cycles with flow in the decomposition

INVARIANT
xij = yij + ∑P∋(i,j) f(P) + ∑C∋(i,j) f(C)
Initially, x = y and f = 0.
At end, y = 0, and f gives the flow decomposition.
Deficit and Excess Nodes
Let x be a flow (not necessarily feasible)
If the flow out of node i exceeds the flow into node
i, then node i is a deficit node.
Its deficit is ∑j xij - ∑k xki.
If the flow out of node i is less than the flow into
node i, then node i is an excess node.
Its excess is -∑j xij + ∑k xki.
If the flow out of node i equals the flow into node i,
then node i is a balanced node.
Flow Decomposition Algorithm
Step 0. Initialize: y := x; f := 0; P := ∅ ; C:= ∅;
Step 1. Select a deficit node j in G(y). If no deficit node exists,
select a node j with an incident arc in G(y);
Step 2. Carry out depth first search from j in G(y) until finding a
directed cycle W in G(y) or a path W in G(y) from s to a node t
with excess in G(y).
Step 3.
1. Let Δ = capacity of W in G(y). (See next slide)
2. Add W to the decomposition with f(W) = Δ.
3. Update y (subtract flow in W) and excesses and deficits
4.
If y ≠ 0, then go to Step 1
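A compact Python sketch of the decomposition loop (dict-based and deliberately simple, so it does not meet the running-time bounds of the complexity analysis; it assumes a valid nonnegative integer-valued flow):

```python
# Repeatedly walk forward along positive arcs from a deficit node (or any
# node on a positive arc if none remains) until reaching an excess node
# (a path) or revisiting a node (a cycle), then peel off its capacity.
def decompose(x):
    y = dict(x)                       # remaining flow
    nodes = set()
    for (i, j) in x:
        nodes.update((i, j))
    def imbalance(i):                 # deficit if positive, excess if negative
        return (sum(v for (a, b), v in y.items() if a == i)
                - sum(v for (a, b), v in y.items() if b == i))
    result = []
    while any(v > 0 for v in y.values()):
        deficits = sorted(i for i in nodes if imbalance(i) > 0)
        if deficits:
            start = deficits[0]
        else:                         # a circulation: start on any positive arc
            start = next(i for (i, j), v in y.items() if v > 0)
        walk = [start]
        while True:
            i = walk[-1]
            j = next(b for (a, b), v in y.items() if a == i and v > 0)
            if j in walk:             # found a directed cycle
                walk = walk[walk.index(j):] + [j]
                delta = min(y[a] for a in zip(walk, walk[1:]))
                break
            walk.append(j)
            if imbalance(j) < 0:      # reached an excess node: a path
                delta = min([imbalance(start), -imbalance(j)]
                            + [y[a] for a in zip(walk, walk[1:])])
                break
        for a in zip(walk, walk[1:]):
            y[a] -= delta
        result.append((walk, delta))
    return result

print(decompose({(1, 2): 2, (2, 3): 3, (3, 2): 1}))
# [([1, 2, 3], 2), ([2, 3, 2], 1)]: a path of 2 units, then a cycle of 1 unit
```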
Capacities of Paths and Cycles
The capacity of C is the minimum arc flow on C with respect to the flow y.
The capacity of P is denoted as D(P, y) = min[ def(s), excess(t), min (xij : (i,j) ∈ P) ].

[Figure: an example network with a cycle C of capacity 4 and a path P from a node s with deficit 3 to a node t with excess 2, giving P capacity 2.]

Flow Decomposition
Animation
Complexity Analysis
Select initial node:
O(1) per path or cycle, assuming that we
maintain a set of supply nodes and a set of
balanced nodes incident to a positive flow arc
Find cycle or path
O(n) per path or cycle since finding the next
arc in depth first search takes O(1) steps.
Update step
O(n) per path or cycle
Complexity Analysis (continued)
Lemma. The number of paths and cycles found in the
flow decomposition is at most m + n – 1.
Proof.
In the update step for a cycle, at least one of
the arcs has its capacity reduced to 0, and the arc is
eliminated.
In an update step for a path, either an arc is
eliminated, or a deficit node has its deficit reduced to
0, or an excess node has its excess reduced to 0.
(Also, there is never a situation with exactly one
node whose excess or deficit is non-zero).
Conclusion
Flow Decomposition Theorem. Any non-negative
feasible flow x can be decomposed into the
following:
i. the sum of flows in paths directed from deficit
nodes to excess nodes, plus
ii. the sum of flows around directed cycles.
It will always have at most n + m paths and cycles.
Remark. The decomposition usually is not unique.
Corollary
A circulation is a flow with the property that the
flow in is the flow out for each node.
Flow Decomposition Theorem for circulations. Any
non-negative feasible flow x can be decomposed
into the sum of flows around directed cycles.
It will always have at most m cycles.
An application of Flow Decomposition
Consider a feasible flow where the supply of node 1 is
n-1, and the supply of every other node is -1.
Suppose the arcs with positive flow have no cycle.
Then the flow can be decomposed into unit flows
along paths from node 1 to node j for each j ≠ 1.
A flow and its decomposition

[Figure: a network on nodes 1–6 with b(1) = 5 and b(j) = −1 for j ≠ 1, together with a feasible flow.]
The decomposition of flows yields the paths:
1-2, 1-3, 1-3-4, 1-3-4-5, and 1-3-4-6.
There are no cycles in the decomposition.
Application to shortest paths
To find a shortest path from node 1 to each other
node in a network, find a minimum cost flow in
which b(1) = n-1 and b(j) = -1 for j ≠ 1.
The flow decomposition gives the shortest paths.
Other Applications of Flow Decomposition
Reformulations of Problems.
There are network flow models that use path
and cycle based formulations.
Multicommodity Flows
Used in proving theorems
Can be used in developing algorithms
The min cost flow problem (again)
The minimum cost flow problem
uij = capacity of arc (i,j).
cij = unit cost of flow sent on (i,j).
xij = amount shipped on arc (i,j)
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki = bi for all i ∈ N
and 0 ≤ xij ≤ uij for all (i,j) ∈ A.
The model seems very limiting
• The lower bounds are 0.
• The supply/demand constraints must be satisfied
exactly
• There are no constraints on the flow entering or
leaving a node.
We can model each of these constraints using
transformations.
• In addition, we can transform a min cost flow problem into an equivalent problem with no upper bounds.
Eliminating Lower Bound on Arc Flows
Suppose that there is a lower bound lij on the arc flow in
(i,j)
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki = bi for all i ∈ N
and lij ≤ xij ≤ uij for all (i,j) ∈ A.

Then let yij = xij − lij, so that xij = yij + lij:

Minimize ∑ cij(yij + lij)
subject to ∑j (yij + lij) − ∑k (yki + lki) = bi for all i ∈ N
and lij ≤ yij + lij ≤ uij for all (i,j) ∈ A.

Then simplify the expressions.
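Carrying out the simplification (a standard step, spelled out here): the constant ∑ cij·lij can be dropped from the objective, and the supplies and bounds shift:

```latex
\begin{align*}
\text{Minimize } & \textstyle\sum_{(i,j)} c_{ij} y_{ij}
  && \text{(the constant } \textstyle\sum_{(i,j)} c_{ij} l_{ij} \text{ is dropped)}\\
\text{subject to } & \textstyle\sum_j y_{ij} - \sum_k y_{ki}
  = b_i - \textstyle\sum_j l_{ij} + \sum_k l_{ki} && \text{for all } i \in N,\\
& 0 \le y_{ij} \le u_{ij} - l_{ij} && \text{for all } (i,j) \in A.
\end{align*}
```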
Allowing inequality constraints
Minimize ∑ cijxij
subject to ∑j xij − ∑k xki ≤ bi for all i ∈ N
and lij ≤ xij ≤ uij for all (i,j) ∈ A.
Let B = ∑i bi . For feasibility, we need B ≥ 0
Create a “dummy node” n+1, with bn+1 = -B. Add arcs
(i, n+1) for i = 1 to n, with ci,n+1 = 0. Any feasible
solution for the original problem can be transformed
into a feasible solution for the new problem by
sending excess flow to node n+1.
Node Splitting
[Figure: a flow x on a small network; the arc numbers are capacities.]
Suppose that we want to add the constraint that the
flow into node 4 is at most 7.
Method: split node 4 into two nodes, say 4’ and 4”
[Figure: the transformed network, with node 4 split into nodes 4′ and 4′′ joined by an arc of capacity 7.]

Flow x′ can be obtained from flow x, and vice versa.
Eliminating Upper Bounds on Arc Flows
The minimum cost flow problem
Min ∑ cijxij
s.t. ∑j xij − ∑k xki = bi for all i ∈ N.
and 0 ≤ xij ≤ uij for all (i,j) ∈ A.
[Figure: before and after the transformation. Before: arc (i, j) carries flow xij = 5 with capacity uij = 20, with b(i) = 7 and b(j) = −2. After: the arc is replaced by a new node ⟨i, j⟩ and two arcs carrying flows xij = 5 and uij − xij = 15.]
Summary
1. Efficient implementation of finding an Eulerian cycle.
2. Flow decomposition theorem.
3. Transformations that can be used to incorporate constraints into minimum cost flow problems.
MIT OpenCourseWare
http://ocw.mit.edu
15.082J / 6.855J / ESD.78J Network Optimization
Fall 2010
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/15-082j-network-optimization-fall-2010/0911d1fe02b68af5127a53657f25b5a8_MIT15_082JF10_lec04.pdf |
18.156 Lecture Notes
Lecture 7
Lecturer: Larry Guth
Trans.: Cole Graham
February 20, 2015
In Lecture 6 we developed the following continuity method for proving isomorphisms on Banach spaces:
Proposition 1. Let X and Y be Banach spaces, let I be a connected subset of R, and let Lt : X → Y be a
continuous family of operators with t ∈ I. If Lt0 is an isomorphism for some t0 ∈ I, and there exists λ > 0
such that ‖Ltx‖Y ≥ λ ‖x‖X for all x ∈ X and all t ∈ I, then Lt is an isomorphism for all t ∈ I.
We will now use α-Hölder norm estimates related to Schauder's inequality to establish an isomorphism
theorem for the elliptic Dirichlet problem on discs. Let L be an elliptic operator satisfying the usual
hypotheses, i.e.

Lu = ∑_{i,j} aij ∂i∂j u,

with ‖aij‖Cα(B̄1) ≤ β and 0 < λ ≤ eig({aij}) ≤ Λ < ∞. Define the map L̄ : C2,α(B̄1) → Cα(B̄1) × C2,α(∂B1)
by L̄u := (Lu, u|∂B1). The principal result of this lecture is:

Theorem 1. If L obeys the usual hypotheses then L̄ is an isomorphism.
We may restate this result as follows:

Corollary 1. For all f ∈ Cα(B̄1) and all ϕ ∈ C2,α(∂B1) there exists a unique u ∈ C2,α(B̄1) such that
Lu = f on B1 and u|∂B1 = ϕ.
To establish Theorem 1, we verify that ∆̄ is an isomorphism, and show that Lt := (1 − t)∆ + tL satisfies
the hypotheses of Proposition 1. To prove both these statements we will rely heavily on the following version
of Schauder's inequality:
Theorem 2 (Global Schauder). Suppose u ∈ C2,α(B̄1) and L satisfies the usual hypotheses. Let f := Lu
and ϕ := u|∂B1. Then

‖u‖C2,α(B̄1) ≤ C(n, α, λ, Λ, β) [ ‖f‖Cα(B̄1) + ‖ϕ‖C2,α(∂B1) ].    (1)
The Banach spaces involved in this bound, namely C2,α(B̄1), Cα(B̄1), and C2,α(∂B1), motivate the
definition of the map L̄. Indeed, we have defined the map L̄ : C2,α(B̄1) → Cα(B̄1) × C2,α(∂B1) because (1)
is precisely the form of quantitative injectivity required to apply Proposition 1 to the family Lt. We also use
Theorem 2 to show:

Proposition 2. ∆̄ is an isomorphism.
Proof. From the preceding lecture, it is sufficient to show that ∆̄ is surjective and satisfies an injectivity
estimate of the form found in Proposition 1. To prove surjectivity, fix f ∈ Cα(B̄1) and ϕ ∈ C2,α(∂B1).
Extend f to F ∈ Cα_c(Rn). Define w := F ∗ Γn, where Γn is the fundamental solution to the Laplacian
considered in earlier lectures. Then w ∈ C2,α(Rn) and ∆w = f on B1. However, there is no reason to
expect that w|∂B1 = ϕ. To rectify this issue, use the Poisson kernel to find v ∈ C2,α(B̄1) such that ∆v = 0
on B1 and v|∂B1 = ϕ − w|∂B1. Set u = v + w ∈ C2,α(B̄1). Then ∆u = ∆v + ∆w = f on B1 and
u|∂B1 = v|∂B1 + w|∂B1 = ϕ. Hence ∆̄u = (f, ϕ), so ∆̄ is surjective. Theorem 2 shows that

‖f‖Cα(B̄1) + ‖ϕ‖C2,α(∂B1) ≥ C(n, α, 1, 1, 1)⁻¹ ‖u‖C2,α(B̄1),

so ‖∆̄u‖ ≥ λ ‖u‖ for all u ∈ C2,α(B̄1), with λ = C(n, α, 1, 1, 1)⁻¹ > 0. As we showed in the previous lecture,
together with surjectivity this estimate proves that ∆̄ is an isomorphism.
Proof of Theorem 1. Consider the operator Lt for t ∈ [0, 1]. Because ‖aij‖Cα(B̄1) ≤ β for all i, j,

‖(1 − t)δij + t aij‖Cα(B̄1) ≤ (1 − t) + tβ ≤ β′,

where β′ := max{β, 1}. Similarly, we must have

eig({(1 − t)δij + t aij}) ⊂ [(1 − t) + tλ, (1 − t) + tΛ] ⊂ [λ′, Λ′],

where λ′ := min{λ, 1} and Λ′ := max{Λ, 1}. Hence the operators Lt obey regularity and spectral bounds
which are uniform in t for t ∈ [0, 1]. Theorem 2 therefore implies that

‖Ltu‖Cα(B̄1) + ‖u|∂B1‖C2,α(∂B1) ≥ C(n, α, λ′, Λ′, β′)⁻¹ ‖u‖C2,α(B̄1)

for all u ∈ C2,α(B̄1) and all t ∈ [0, 1]. By Proposition 1, this regularity combined with Proposition 2 is
sufficient to establish Theorem 1.
In summary, we used explicit formulæ involving Γn and the Poisson kernel to establish the surjectivity
of ∆̄, and then used injectivity bounds furnished by the global Schauder inequality to conclude that ∆̄ and L̄
are in fact isomorphisms.
It remains to verify the global Schauder inequality. We will read through the proof and fill in details
for homework. The essential difference between the global and interior Schauder inequalities lies in the
treatment of region boundaries. In the interior Schauder inequality proven previously, C2,α regularity of u
on a ball is controlled by C0 regularity of Lu on a larger ball. Global Schauder replaces regularity on a
larger domain with regularity on the boundary. Unsurprisingly therefore, the proof of global Schauder relies
on a form of Korn's inequality which accounts for behavior near boundaries:

Theorem 3 (Boundary Korn). Let H := {x ∈ Rn : xn > 0} denote the upper half space. Let u ∈ C2,α_c(H̄)
be such that u = 0 on ∂H. Then [∂²u]Cα(H) ≤ C(n, α)[∆u]Cα(H).
As with the standard Korn inequality, the proof of Theorem 3 is divided into two parts:
1. Find a formula for ∂i∂ju in terms of ∆u
2. Bound the integral in the formula to obtain an operator estimate on the map ∆u ↦ ∂i∂ju.
To approach the first part of the proof, let u ∈ C2,α_c(H̄), and extend ∆u to F : Rn → R by setting

F(x1, . . . , xn) = −∆u(x1, . . . , xn−1, −xn) when xn < 0.

Proposition 3. u = F ∗ Γn on H.
Proof. Let w = F ∗ Γn. By the symmetry of Γn and the antisymmetry of F in xn, w = 0 when xn = 0.
That is, w vanishes on ∂H. Just as in previous work, w(x) → 0 as |x| → ∞ and ∆w = F on H. Hence
∆(u − w) = 0 on H, u − w = 0 on ∂H, and u − w → 0 as |x| → ∞. Applying the maximum principle to
ever larger semidiscs, we see that u = w on H.
The same arguments from the proof of the standard Korn inequality show that

∂i∂ju(x) = lim_{ε→0+} ∫_{|y|>ε} F(x − y) ∂i∂jΓn(y) dy + (1/n) δij F(x)

for all x ∈ H. Define the operator TεF(x) := ∫_{|y|>ε} F(x − y) ∂i∂jΓn(y) dy and the integral kernel
K := ∂i∂jΓn. On the homework we will complete the operator norm part of the proof of boundary Korn:
Proposition 4. If F ∈ Cα_c(H) + Cα_c(H⁻) (but F is permitted to be discontinuous on ∂H) and
ε < min{xn, x̄n} with x, x̄ ∈ H, then

|TεF(x) − TεF(x̄)| ≤ C(n, α) |x − x̄|^α ([F]Cα(H) + [F]Cα(H⁻)).
As in the proof of standard Korn, cancellation properties of K are crucial to the proof of this operator
estimate. For standard Korn we used the fact that ∫_{Sr} K = 0 for every radius r. This fact is not sufficient
for boundary Korn, however, because spheres centered at x or x̄ in H will intersect ∂H, where we have no
control on F. To fix this, we note that K enjoys even stronger cancellation:
Proposition 5. If Hr ⊂ Sr is any hemisphere, then ∫_{Hr} K = 0.

Proof. Γn is even, and hence so is its second derivative ∂i∂jΓn = K. The substitution y ↦ −y then shows
that

∫_{Hr} K = (1/2) ∫_{Sr} K = 0.
Now to prove Proposition 4 we may divide the integral TεF (x) into three rough regions:
1. ε < |y| < xn, where K cancels on whole spheres.
2. xn < |y| < R for some large R, which | https://ocw.mit.edu/courses/18-156-differential-analysis-ii-partial-differential-equations-and-fourier-analysis-spring-2016/091bf989a474217c56f80bbb84cead6e_MIT18_156S16_lec7.pdf |