space S is the set of 2^k possible heads-tails
sequences.
- If X is the random sequence (so X is a random variable), then
for each x ∈ S we have P{X = x} = 2^{-k}.
- In information theory it's quite common to use log to mean
log₂ instead of logₑ. We follow that convention in this lecture.
In particular, this means t... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
- This can be interpreted as the expectation of (−log pᵢ). The
value (−log pᵢ) is the "amount of surprise" when we see xᵢ.
Shannon entropy
- Shannon: famous MIT student/faculty member, wrote "A
Mathematical Theory of Communication" in 1948.
- Goal is to define a notion of how much we "expect to learn"
from a ra... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
by
H(X) = Σ_{i=1}^n pᵢ(−log pᵢ) = −Σ_{i=1}^n pᵢ log pᵢ.
- This can be interpreted as the expectation of (−log pᵢ). The
value (−log pᵢ) is the "amount of surprise" when we see xᵢ.
- Can learn the animal with H(X) questions on average.
- In general: expect H(X) questions if the probabilities are powers of 2.
Other... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
following animals:
x          P{X = x}   −log P{X = x}
Dog        1/4        2
Cat        1/4        2
Cow        1/8        3
Pig        1/16       4
Squirrel   1/16       4
Mouse      1/16       4
Owl        1/16       4
Sloth      1/32       5
Hippo      1/32       5
Yak        1/32       5
Zebra      1/64       6
Rhino      1/64       6
- Can learn the animal with H(X) questions on average.
- In general: expect H(X) questions if pr... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
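As a quick numerical sanity check (a Python sketch; the probabilities are exactly those in the animal table above), the entropy of this distribution works out to 3.03125 bits:

```python
import math

# Probabilities from the animal table above (note they sum to 1).
probs = [1/4, 1/4, 1/8, 1/16, 1/16, 1/16, 1/16, 1/32, 1/32, 1/32, 1/64, 1/64]

def entropy(ps):
    """H(X) = -sum_i p_i * log2(p_i), in bits."""
    return -sum(p * math.log2(p) for p in ps if p > 0)

# Each p_i is a power of 2, so -log2(p_i) is the whole number of yes/no
# questions an optimal strategy spends on that animal; H is the average.
print(entropy(probs))  # 3.03125
```

Since every probability here is a power of 2, the expected number of questions in an optimal twenty-questions strategy equals H(X) exactly.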
1, what is H(X )?
- What is H(X) if X is a geometric random variable with
parameter p = 1/2?
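The geometric question can be answered numerically (a Python sketch; the series Σ k·2^{−k} is summed directly rather than in closed form):

```python
# For X geometric with p = 1/2: P{X = k} = 2^{-k} for k = 1, 2, ...
# so -log2 P{X = k} = k and H(X) = sum over k of k * 2^{-k}.
H = sum(k * 2 ** -k for k in range(1, 200))  # the tail past k = 200 is negligible
print(round(H, 10))  # 2.0
```

So the entropy is 2 bits, even though X takes infinitely many values.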
Other examples
- Again, if a random variable X takes the values x₁, x₂, . . . , xₙ
with positive probabilities p₁, p₂, . . . , pₙ then we define the
entropy of X by
H(X) = Σ_{i=1}^n pᵢ(−log pᵢ) = −Σ_{i=1}^n pᵢ log pᵢ.
... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
, Y ) (viewed as a
random variable itself).
- Claim: if X and Y are independent, then
H(X, Y) = H(X) + H(Y).
Why is that?
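Because independence gives −log p(xᵢ, yⱼ) = −log p_X(xᵢ) − log p_Y(yⱼ), the surprises add. A small numerical check (Python; the two marginal distributions are made up for illustration):

```python
import math
from itertools import product

def H(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

px = [1/2, 1/4, 1/4]   # marginal of X (illustrative)
py = [1/8, 7/8]        # marginal of Y (illustrative)

# Joint mass function under independence: p(x, y) = p_X(x) * p_Y(y).
joint = [a * b for a, b in product(px, py)]

print(H(joint))  # equals H(px) + H(py)
assert abs(H(joint) - (H(px) + H(py))) < 1e-12
```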
Entropy for a pair of random variables
- Consider random variables X, Y with joint mass function
p(xᵢ, yⱼ) = P{X = xᵢ, Y = yⱼ}.
- Then we write
H(X, Y) = −Σᵢ Σⱼ p(xᵢ, yⱼ) log p(xᵢ, yⱼ). | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
is equivalent to a twenty questions strategy.
Coding values by bit sequences
- David Huffman (as an MIT student) published "A Method for
the Construction of Minimum-Redundancy Codes" in 1952.
- If X takes four values A, B, C, D we can code them by:
A ↔ 00
B ↔ 01
C ↔ 10
D ↔ 11
- No sequence in the code is an exten... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
Or by
A ↔ 0
B ↔ 10
C ↔ 110
D ↔ 111
- No sequence in the code is an extension of another.
- What does 100111110010 spell?
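Because no codeword is a prefix of another, a greedy left-to-right scan decodes uniquely. A Python sketch of that decoding for the second code above:

```python
# The prefix code above: greedy decoding is unambiguous.
decode_map = {"0": "A", "10": "B", "110": "C", "111": "D"}

def decode(bits):
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in decode_map:          # a complete codeword has been read
            out.append(decode_map[cur])
            cur = ""
    assert cur == "", "leftover bits do not form a codeword"
    return "".join(out)

print(decode("100111110010"))  # BADCAB
```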
Coding values by bit sequences
- If X takes f... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
, let X take values x₁, . . . , x_N with probabilities
p(x₁), . . . , p(x_N). Then if a valid coding of X assigns nᵢ bits
to xᵢ, we have
Σ_{i=1}^N nᵢ p(xᵢ) ≥ H(X) = −Σ_{i=1}^N p(xᵢ) log p(xᵢ).
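A numerical instance of the bound, using the 4-value prefix code from earlier (A↔0, B↔10, C↔110, D↔111, so the lengths are 1, 2, 3, 3) under an illustrative power-of-2 distribution, where the bound is tight:

```python
import math

p = [1/2, 1/4, 1/8, 1/8]   # illustrative distribution (powers of 2)
n = [1, 2, 3, 3]           # codeword lengths for A, B, C, D

H = -sum(pi * math.log2(pi) for pi in p)
expected_bits = sum(ni * pi for ni, pi in zip(n, p))

print(expected_bits, H)    # 1.75 1.75 -- equality, since n_i = -log2 p_i
assert expected_bits >= H
```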
- Data compression: let X₁, X₂, . . . , Xₙ be i.i.d. instances of X.
Do there exist encoding schemes such that the expected
numbe... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
p(xᵢ) log p(xᵢ).
- Yes. Consider the space of Nⁿ possibilities. Use the "rounding to a
power of 2" trick. Expect to need at most H(X)n + 1 bits.
Twenty questions theorem
- Noiseless coding theorem: the expected number of questions
you need is always at least the entropy.
- Note: the expected number of questions is the entropy ... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
n (assuming n is sufficiently large)?
- Yes. Consider the space of Nⁿ possibilities. Use the "rounding to a
power of 2" trick. Expect to need at most H(X)n + 1 bits.
Outline
Entropy
Noiseless coding theory
Conditional entropy
- But now let's not assume t... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
H(X, Y) = −Σᵢ Σⱼ p(xᵢ, yⱼ) log p(xᵢ, yⱼ).
- But now let's not assume they are independent.
- This is just the entropy of the conditional distribution. Recall
that p(xᵢ|yⱼ) = P{X = xᵢ|Y = yⱼ}.
- We similarly define H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ). This is
the expected amount of conditional entropy that there will ... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
Conditional entropy
- Let's again consider random variables X, Y with joint mass
function p(xᵢ, yⱼ) = P{X = xᵢ, Y = yⱼ} and write
H(X, Y) = −Σᵢ Σⱼ p(xᵢ, yⱼ) log p(xᵢ, yⱼ).
- But now let's not assume they are independent.
- We can define a conditional entropy of X given Y = yⱼ by
H_{Y=yⱼ}(X) = −... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
H_{Y=yⱼ}(X) = −Σᵢ p(xᵢ|yⱼ) log p(xᵢ|yⱼ) and
H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ).
- In words, the expected amount of information we learn when
discovering (X, Y) is equal to the expected amount we learn when
discovering Y plus the expected amount when we subsequently
discover X (given our knowledge of Y).
- To prove this property,... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
H(Y) − Σᵢ Σⱼ p(xᵢ, yⱼ) log p(xᵢ|yⱼ) = H(Y) + H_Y(X).
Properties of conditional entropy
- Definitions: H_{Y=yⱼ}(X) = −Σᵢ p(xᵢ|yⱼ) log p(xᵢ|yⱼ) and
H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ).
- Important property one: H(X, Y) = H(Y) + H_Y(X).
- In words, the expected amount of information we learn when
discovering (X, Y) ... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
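The chain rule H(X, Y) = H(Y) + H_Y(X) can be checked numerically on a small dependent joint distribution (Python; the joint table is made up for illustration):

```python
import math

def H(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

# rows index x values, columns index y values; entries sum to 1
joint = [[1/2, 1/8],
         [1/8, 1/4]]

py = [sum(row[j] for row in joint) for j in range(2)]        # marginal of Y
HXY = H([p for row in joint for p in row])                   # H(X, Y)
# H_Y(X) = sum_j p_Y(y_j) H_{Y=y_j}(X), with p(x_i|y_j) = p(x_i, y_j)/p_Y(y_j)
HYX = sum(py[j] * H([row[j] / py[j] for row in joint]) for j in range(2))

print(HXY, H(py) + HYX)  # the two numbers agree
assert abs(HXY - (H(py) + HYX)) < 1e-12
```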
Jensen's inequality,
H(X) = E(v) = E(Σⱼ p_Y(yⱼ) vⱼ) ≥ Σⱼ p_Y(yⱼ) E(vⱼ) = H_Y(X).
Properties of conditional entropy
- Definitions: H_{Y=yⱼ}(X) = −Σᵢ p(xᵢ|yⱼ) log p(xᵢ|yⱼ) and
H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ).
- In words, the expected amount of information we learn when
discovering X after having di... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
vⱼ = {p_X(x₁|yⱼ), p_X(x₂|yⱼ), . . . , p_X(xₙ|yⱼ)}
as j ranges over possible values. By the (vector version of)
Jensen's inequality,
H(X) = E(v) = E(Σⱼ p_Y(yⱼ) vⱼ) ≥ Σⱼ p_Y(yⱼ) E(vⱼ) = H_Y(X).
Properties of conditional entropy
- Definitions: H_{Y=yⱼ}(X) = −Σᵢ p(xᵢ|yⱼ) log p(xᵢ|yⱼ) and
H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ).
- ... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
we would learn when
discovering X before knowing anything about Y .
- Proof: note that E(p₁, p₂, . . . , pₙ) := −Σᵢ pᵢ log pᵢ is concave.
Properties of conditional entropy
- Definitions: H_{Y=yⱼ}(X) = −Σᵢ p(xᵢ|yⱼ) log p(xᵢ|yⱼ) and
H_Y(X) = Σⱼ H_{Y=yⱼ}(X) p_Y(yⱼ).
- Important property two: H_Y(X)... | https://ocw.mit.edu/courses/18-600-probability-and-random-variables-fall-2019/32dce72d547d449d4cfe2012a8297ba2_MIT18_600F19_lec33.pdf |
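Property two is the statement that conditioning cannot increase entropy: H_Y(X) ≤ H(X). A quick numerical check (Python; the dependent joint table is made up for illustration):

```python
import math

def H(ps):
    return -sum(p * math.log2(p) for p in ps if p > 0)

joint = [[1/2, 1/8],     # rows = x values, columns = y values
         [1/8, 1/4]]     # (illustrative; X and Y are dependent)

px = [sum(row) for row in joint]
py = [sum(row[j] for row in joint) for j in range(2)]
HYX = sum(py[j] * H([row[j] / py[j] for row in joint]) for j in range(2))

print(HYX, H(px))        # H_Y(X) is strictly smaller here
assert HYX <= H(px) + 1e-12
```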
6.S897/HST.956 Machine Learning for Healthcare
Lecture 10: Application of Machine Learning to Cardiac Imaging
Instructors: David Sontag, Peter Szolovits
1 Background
This lecture was a guest lecture by Rahul Deo, the lead investigator of the One Brave Idea project at Brigham
and Women’s Hospital. Rahul is also Adj... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
exercise. One crucial aspect of cardiac function is that
the body must maintain extremely rhythmic beating of the heart, a not inconsequential task given that the
average human heart generates a total of more than 2 billion heartbeats over a lifetime.
2.2 Structure of the Heart
Figure 1 above gives an overview of t... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
— Lec10 — 1
Courtesy of OpenStax. Used under CC BY.
Figure 1: The major chambers, valves, and blood vessels of the human heart.
mechanical systems, allowing one to see how events in an EKG align with the physical state of the heart, as
shown in Figure 2.
2.4 Cardiac Diseases
Given the complex structure of the hea... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
.
Can be used, for example, to diagnose myocardial infarction.
6.S897/HST.956 Machine Learning for Healthcare — Lec10 — 2
© Julian Andrés Betancur Acevedo. All rights reserved. This content is excluded from our Creative Commons license.
For more information, see https://ocw.mit.edu/help/faq-fair-use/
Figure 2: An exa... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
high cost. As
a result, doctors are often stuck with the stuff that they already know something about.
6.S897/HST.956 Machine Learning for Healthcare — Lec10 — 3
4 Where’s the Data?
4.1 How is Medical Imaging Data Stored
DICOM is the major international standard for storing imaging information. Image/video files are s... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
high quality cardiac images is that, as the patient breathes, the
chest wall and the heart are both continuously moving. Thus, high quality scans need to get enough
temporal frequency on their data acquisition so that the movement of the heart doesn’t affect the imaging.
Another solution to image corruption resulting... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
physicians are
6.S897/HST.956 Machine Learning for Healthcare — Lec10 — 4
Figure 3: Comparison of Various Imaging Techniques.
generally really fast at most of these, with an experienced radiologist capable of diagnosing disease based on
images in less than 2 minutes.
Most of the initial successes in medical image clas... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
, at the very least, catch some missed diagnoses.
5.1.2 Explaining the Diagnosis
Another challenge arises in the tension between predictive accuracy and descriptive accuracy. As a general
rule, medicine is very demanding on descriptive accuracy while simultaneously being inflexible on predictive
accuracy. As a result, i... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
manually draw a line between two ends of a structure in an image in
order for the machine to measure the distance between them based on the acquired data.
From a machine learning perspective, once again research has propelled several critical advancements in
this area. A certain architecture known as U-Net seems to ... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
cholesterol, and blood
sugar). Thus, by the time they start treatment they are too far along the disease timeline, leading to costly
treatments that may not be very effective. We see that patients who die from cardiovascular diseases die
shortly after developing symptoms, such as dyspnea and angina. See Figure 4.
6... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
4 Zhang, Deo, et al.'s Automated Approach for Echo Interpretation
An echo study is typically a collection of up to 70 videos of the heart taken over multiple cardiac cycles and
focusing on different viewpoints. The heart is visualized from >10 different views, and still images are typically included to enab... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
benefits of automated interpretation will be muted
4. Pharmaceutical companies have high motivation to perform high frequency serial imaging to assess
whether there are any benefits to medications in clinical trials, and an accurate scalable quantification
will be needed for this.
5. Surveillance of daily studies may be usef... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
reflect many pathways found in diverse cell types, such as
autophagy, phagocytosis, and free radical dissipation.
Third, we should focus on cell morphology rather than genomics. This takes advantage of the computer
vision advances that are able to characterize subtle distinctions between cell types and states at low ... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
gorithms.
Question 2: Usually people begin treatment after visiting the doctor for the first time. How do you trust
the one visit when you go to the doctor to determine if you should go on medication or not?
Answer 2: There are noisy point estimates, and it's hard to determine precisely whether the timing is
right ... | https://ocw.mit.edu/courses/6-s897-machine-learning-for-healthcare-spring-2019/32e3387bbd9f6ea7df9f9195337ed5b4_MIT6_S897S19_lec10note.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.641 Electromagnetic Fields, Forces, and Motion, Spring 2005
Please use the following citation format:
Markus Zahn, 6.641 Electromagnetic Fields, Forces, and Motion, Spring
2005. (Massachusetts Institute of Technology: MIT OpenCourseWare).
http://ocw.mit.edu (accessed MM DD, ... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
m_B = e ħ/(2 m_e) = e h/(4 π m_e) ≈ 9.3 × 10⁻²⁴ amp·m²
Bohr magneton m_B (smallest unit of magnetic moment)
Imagine all Bohr magnetons in a sphere of radius R aligned. Net magnetic moment is
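The quoted figure can be reproduced from standard constants (a Python sketch; the constant values are CODATA figures, not taken from the original notes):

```python
import math

e = 1.602176634e-19     # elementary charge [C]
h = 6.62607015e-34      # Planck constant [J s]
m_e = 9.1093837015e-31  # electron mass [kg]

m_B = e * h / (4 * math.pi * m_e)   # = e*hbar / (2*m_e)
print(m_B)  # about 9.27e-24 A m^2, matching the ~9.3e-24 quoted above
```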
m = m_B ρ (A₀/M₀) (4/3) π R³
Avogadro's number A₀ = 6.023 × 10²⁶ molecules per kilogram-mole
Total mass
of s... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
E = p/(4 π ε₀ r³) [2 cos θ î_r + sin θ î_θ]   (multiply top & bottom by µ₀)
Analogy: p → µ₀ m
P = N p ⇒ M = N m,  N = # of magnetic dipoles / volume
Polarization → Magnetization
II. Maxwell’s Equations with Magnetization
EQS: ∇ · (ε₀ E) = ρ_u − ∇ · P
MQS: ∇ · (µ₀ H) = −... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
MQS Equations
σ_sm = −n · [µ₀ (Mᵃ − Mᵇ)]
∇ × H = J
∇ × E = −∂/∂t [µ₀ (H + M)]
B = µ₀ (H + M)   Magnetic flux density B has units of Teslas (1 Tesla = 10,000 Gauss)
∇ · B = 0
n · [Bᵃ − Bᵇ] = 0
∇ × E = −∂B/∂t
∇ × H = J
v = dλ/dt,  λ = ∫_S B · da  (total flux)
III. Mag... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
H_z = −(M₀/2) { (z − d/2)/[R² + (z − d/2)²]^(1/2) − (z + d/2)/[R² + (z + d/2)²]^(1/2) },  z > d/2
H_z = −(M₀/2) { 2 + (z − d/2)/[R² + (z − d/2)²]^(1/2) − (z + d/2)/[R² + (z + d/2)²]^(1/2) },  −d/2 < z <
... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
∮_C H · dl = Hφ 2πr = N₁ i₁ ⇒ Hφ = N₁ i₁/(2 π r) ≈ N₁ i₁/(2 π R)
Φ ≈ B π w²/4
λ = N₂ Φ = N₂ B π w²/4
[Circuit labels: primary N₁, R₁, i₁; secondary N₂, R₂, C₂, i₂; voltages V_H, V₂, V_v]
Courtesy of Hermann A. Haus and James R. Melcher. Used with permission.
V_H = i₁ R₁ = R₁ Hφ 2 π R / N₁
(V_H = horizontal voltage to the oscilloscope)
v = i R + V... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
Solving Approach, by Markus Zahn, 1987. Used with permission.
In iron core: lim_{µ→∞} B = µH finite ⇒ H = 0
∮_C H · dl = H s = N i ⇒ H = N i/s
Φ = µ₀ H D d = µ₀ D d N i/s
∮_S B · da = 0
λ = N Φ = (µ₀ D d N²/s) i ⇒ L = λ/i = µ₀ D d N²/s
6.641, Electromagnetic Fields, Forces, and Mo... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
... = 1/(µ₁ a₁ s₂ + µ₂ a₂ s₁)
B. Reluctance In Parallel
∮_C H · dl = H₁ s = H₂ s = N i ⇒ H₁ = H₂ = N i/s
Φ = (µ₁ H₁ a₁ + µ₂ H₂ a₂) D = N i (R₁ + R₂)/(R₁ R₂) = N i (P₁ + P₂)
P₁ = 1/R₁ ;  P₂ = 1/R₂ ;  P = 1/R   [Permeances, analogous to Conductance]
VII. Transformers
(Ideal)
Courtesy of... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
L₁ = N₁² L₀ ,  L₂ = N₂² L₀ ,  M = N₁ N₂ L₀ ,  L₀ = µ A/l = 1/R
M = [L₁ L₂]^(1/2)
v₁ = dλ₁/dt = L₁ di₁/dt − M di₂/dt = N₁ L₀ [N₁ di₁/dt − N₂ di₂/dt]
v₂ = dλ₂/dt = +M di₁/dt − L₂ di₂/dt = N₂ L₀ [N₁ di₁/dt − N₂ di₂/dt]
v1... | https://ocw.mit.edu/courses/6-641-electromagnetic-fields-forces-and-motion-spring-2005/32f92e8162acbc3041e3ac32861394bb_lecture8.pdf |
Topic 2 Notes
Jeremy Orloff
2 Analytic functions
2.1 Introduction
The main goal of this topic is to define and give some of the important properties of complex analytic
functions. A function f(z) is analytic if it has a complex derivative f′(z). In general, the rules for
computing derivatives will be familiar to you from ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
to 0. There are lots of ways to do this. For example,
if we let Δz go to 0 along the x-axis then Δy = 0 while Δx goes to 0. In this case, we would have
f′(0) = lim_{Δz→0} Δx/Δx = 1.
On the other hand, if we let Δz go to 0 along the positive y-axis then
f′(0) = lim_{Δz→0} −iΔy/(iΔy) = −1.
2 ANALYTIC FUNCTIONS
The limits... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
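The direction-dependence can also be seen numerically. A Python sketch, assuming the function under discussion is f(z) = z̄ (complex conjugation), whose difference quotients at 0 are exactly the two directional limits above:

```python
# Difference quotients of f(z) = conj(z) at 0 along two directions.
def quotient(dz):
    f = lambda z: z.conjugate()
    return (f(dz) - f(0)) / dz

print(quotient(1e-8).real)   # 1.0  (along the x-axis)
print(quotient(1e-8j).real)  # -1.0 (along the y-axis)
# Different limits along different directions, so f'(0) does not exist.
```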
point
shown is on the boundary, so every disk around it contains points outside the region.
Left: an open region; right: not an open region
2.4 Limits and continuous functions
Definition. If f(z) is defined on a punctured disk around z₀ then we say
lim_{z→z₀} f(z) = w₀
if f(z) goes to w₀ no matter what direction z approaches z₀.
The figu... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
• If w₂ ≠ 0 then lim_{z→z₀} f(z)/g(z) = w₁/w₂.
• If h(z) is continuous and defined on a neighborhood of w₁ then lim_{z→z₀} h(f(z)) = h(w₁).
(Note: we will give the official definition of continuity in the next section.)
We won't give a proof of these properties. As a challenge, you can try to supply it using the formal
definition... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
(i) A polynomial
P(z) = a₀ + a₁ z + a₂ z² + … + aₙ zⁿ
is continuous on the entire plane. Reason: it is clear that each power (x + iy)ᵏ is continuous as a
function of (x, y).
(ii) The exponential function is continuous on the entire plane. Reason:
e^z = e^{x+iy} = eˣ cos(y) + i eˣ sin(y).
So both the real and imaginary parts are clearly con... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
as follows.
A sequence of points {zₙ} goes to infinity if |zₙ| goes to infinity. This "point at infinity" is approached
in any direction we go. All of the sequences shown in the figure below are growing, so they all go
to the (same) "point at infinity".
Various sequences all going to infinity.
2 ANALYTIC FUNC... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
as |z| gets large. Write z = re^{iθ}, then
|zⁿ| = |z|ⁿ = rⁿ
2.5.2 Stereographic projection from the Riemann sphere
This is a lovely section and we suggest you read it. However it will be a while before we use it in
18.04.
One w... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
2.6 Derivatives
The definition of the complex derivative of a complex function is similar to that of a real derivative
of a real function: For a function f(z) the derivative at z₀ is defined as
f′(z₀) = lim_{z→z₀} (f(z) − f(z₀))/(z − z₀),
provided, of course, that the limit exists. If the limit exists we say f is analytic at z₀ or differentia... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
f′ + g′.
• Quotient rule: (f(z)/g(z))′ = (f′g − f g′)/g²
• Chain rule: (g(f(z)))′ = g′(f(z)) f′(z)
• Inverse rule: (f⁻¹(z))′ = 1/f′(f⁻¹(z))
To give you the flavor of these arguments we'll prove the product rule.
(f(z)g(z))′ = lim_{z→z₀} (f(z)g(z) − f(z₀)g(z₀))/(z − z₀)
= lim_{z→z₀} ((f(z) − f(z₀))g(z) + f(z₀)(g(z) − g(z₀)))/(z − z₀)
= f′(z₀)g(z₀) + f(z₀)g′(z₀). | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
i.e. the derivative of f holding y constant.
2.7.2 The Cauchy-Riemann equations
The Cauchy-Riemann equations use the partial derivatives of u and v to allow us to do two things:
first, to check if f has a complex derivative and second, to compute that derivative. We start by
stating the equations as a theorem.
Theorem 2.10. ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
little faster.)
f′(z) = lim_{Δy→0} (f(z + iΔy) − f(z))/(iΔy)
= lim_{Δy→0} ((u(x, y + Δy) + iv(x, y + Δy)) − (u(x, y) + iv(x, y)))/(iΔy)
= lim_{Δy→0} (1/i)(u(x, y + Δy) − u(x, y))/Δy + (v(x, y + Δy) − v(x, y))/Δy
= v_y(x, y) − i u_y(x, y)
We have found two different representations of f′(z) in terms of the partials of u and v. If we put them
together we have the Cauchy-Riema... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
u_x = eˣ cos(y), u_y = −eˣ sin(y)
v_x = eˣ sin(y), v_y = eˣ cos(y)
We see that u_x = v_y and u_y = −v_x, so the Cauchy-Riemann equations are satisfied. Thus, e^z is
differentiable and
(e^z)′ = u_x + i v_x = eˣ cos(y) + i eˣ sin(y) = e^z.
Example 2.12. Use the Cauchy-Riemann equations to show that f(z) = z̄ is not differentiable.
Solution: f(x + iy) = x − iy, so u(x, y) = x, v(x, y) = −y. Taking partial derivati... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
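The Cauchy-Riemann check for e^z can be spot-checked numerically with central differences (a Python sketch; the sample point 0.3 + 1.1i is arbitrary):

```python
import cmath

def partials(f, x, y, h=1e-6):
    """Central-difference partials of u = Re f and v = Im f at (x, y)."""
    ux = (f(complex(x + h, y)).real - f(complex(x - h, y)).real) / (2 * h)
    uy = (f(complex(x, y + h)).real - f(complex(x, y - h)).real) / (2 * h)
    vx = (f(complex(x + h, y)).imag - f(complex(x - h, y)).imag) / (2 * h)
    vy = (f(complex(x, y + h)).imag - f(complex(x, y - h)).imag) / (2 * h)
    return ux, uy, vx, vy

ux, uy, vx, vy = partials(cmath.exp, 0.3, 1.1)
print(abs(ux - vy), abs(uy + vx))  # both near 0: u_x = v_y and u_y = -v_x
```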
i.e., f′(z) is just the Jacobian of (u, v).
For me, it is easier to remember the Jacobian than the Cauchy-Riemann equations. Since ′() is a
complex number I can use the matrix representation in Equation 1 to remember the Cauchy-Riemann
equations!
2.8 Cauchy-Riemann all... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
we'll have to do four things.
1. Define how to compute it.
2. Specify a branch (if necessary) giving its range.
3. Specify a domain (with branch cut if necessary) where it is analytic.
4. Compute its derivative.
Most often, we can compute the derivatives of a function using the algebraic rules like the quotient
ru... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
polynomials P(z)/Q(z) is called a rational function.
If we assume that P and Q have no common roots, then:
Domain = C − {roots of Q}
f′(z) = (P′Q − PQ′)/Q².
7. sin(z), cos(z)
Definition. cos(z) = (e^{iz} + e^{−iz})/2,  sin(z) = (e^{iz} − e^{−iz})/(2i)
(By Euler's formula we know this is consistent with cos(x) and sin(x) when z = x is real.)
Domain: these fun... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
Definition.
cosh(z) = (e^z + e^{−z})/2,  sinh(z) = (e^z − e^{−z})/2
Domain: these functions are entire.
(cosh(z))′ = sinh(z),  (sinh(z))′ = cosh(z)
Other key properties of cosh and sinh:
- cosh²(z) − sinh²(z) = 1
- For real x, cosh(x) is real and positive, sinh(x) is real.
- cosh(z) = cos(iz), sinh(z) = −i sin(iz).
10. log() (See Topic 1.)
Definition... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
1/√(1 − z²).
Choosing a branch is tricky because both the square root and the log require choices. We will
look at this more carefully in the future.
For now, the following discussion and figure are for your amusement.
Sine (likewise cosine) is not a 1-1 function, so if we want sin⁻¹(z) to be single-valued then we
... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
Since we showed directly that the derivative exists for all z, the function must be entire.
4. P(z) (polynomial). Since a polynomial is a sum of monomials, the formula for the derivative
follows from the derivative rule for sums and the case f(z) = zⁿ. Likewise the fact that P(z)
is entire.
5. f(z) = 1/z. This ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
entire functions, so is f(z) = e^{z²}. The
chain rule gives us
f′(z) = e^{z²} (2z).
Example 2.15. Let f(z) = e^z and g(z) = 1/z. f(z) is entire and g(z) is analytic everywhere but 0.
So f(g(z)) is analytic except at 0 and
(f(g(z)))′ = f′(g(z)) g′(z) = e^{1/z} · (−1/z²).
Example 2.16. Let h(z) = 1/(e^z − 1). Clearly h is entire except where the denomi... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
real and ≤ 0 ⇔ z is real and ≥ 1.
So √(1 − z) is analytic on the region (see figure below)
C − {x ≥ 1, y = 0}
Note. A different branch choice for √ would lead to a different region where √(1 − z) is analytic.
The figure below shows the domains with branch cuts for this example.
domain fo... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
2.11.1 Limits of sequences
Intuitively, we say a sequence of complex numbers z₁, z₂, … converges to z if for large n, zₙ is really
close to z. To be a little more precise, if we put a small circle of radius ε around z then eventually the
sequence should stay inside the circle. Let's refer to thi... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
So all we have to do is pick n large enough. Since this can clearly be done we have ... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
|f(z) − w₀| < ε.
This says exactly that as z gets closer (within δ) to z₀ we have f(z) is close (within ε) to w₀. Since ε
can be made as small as we want, f(z) must go to w₀.
Remarks.
1. Using the punctured disk (also called a deleted neighborhood) means that f(z) does not have
to be defined at z₀, and if it is, f(z₀) does not necessarily equal... | https://ocw.mit.edu/courses/18-04-complex-variables-with-applications-spring-2018/330e301bd727c7bdaa679cf44cb75fe3_MIT18_04S18_topic2.pdf |
18.405J/6.841J: Advanced Complexity Theory
Spring 2016
Lecture 12: Randomized Communicat... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
= P(x, y, r) with high prob-
ability, and let π denote the set of all strings of randomness r. We would like to construct a
private-randomness protocol to solve f .
If we knew that |π| ≤ poly(n)poly(1/δ) ⇐⇒ |r| = O(log n + log 1/δ), then we are done by trivial
simulation, as Alice can generate private random bits and s... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
2^{2n} · 2e^{−2δ²t} < 1,
so there must exist some choice of r1, . . . rt such that, for all (x, y), P (cid:48) fails with probability at most
ε + δ.
Now, π′ has only t = O(n/δ²) = poly(n)poly(1/δ) different random strings, so we can sample from
it using O(log n + log 1/δ) bits of randomness. Thus, we can simulate P′... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
by the minimax theorem or LP duality.
In analyzing deterministic communication complexity, we analyzed partitions into monochromatic
rectangles. Now, we want to consider partitions into “almost” monochromatic rectangles. We now
define a way to measure this.
2
Definition 5 (Discrepancy). Let f : X × Y → {0, 1}. Then, the... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
Let IPₙ(x, y) = Σ_{i=1}^n xᵢyᵢ mod 2. Then, IPₙ ∉ BPP^cc. In particular, R(IPₙ) = Ω(n).
Proof. To apply our theorems now, we must pick a distribution µ to work on. In general, picking
µ is the art of proving bounds on randomized communication complexity.
In our case, the distribution is easy: let µ be uniform over {0, 1}n × {0, 1}n. We... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
indicator vectors of S and T. Thus, we have
R(IPₙ) ≥ R^pub(IPₙ) = R^pub_{1/3}(IPₙ) ≥ D^µ_{1/3}(IPₙ) ≥ log(1/(3 · 2^{−n/2})) = n/2 − O(1).
Thus, we have shown that IPₙ ∉ BPP^cc, which gives us a bound on randomized communication
complexity.
MIT OpenCourseWare
https://ocw.mit.edu
18.405J / 6.841J Advanced Complexity Theory
... | https://ocw.mit.edu/courses/18-405j-advanced-complexity-theory-spring-2016/3311a00f4875153695469046bc4615c1_MIT18_405JS16_Random.pdf |
6.096 Introduction to C++
Massachusetts Institute of Technology
January 10, 2011
John Marrero
Lecture 4 Notes: Arrays and Strings
1 Arrays
So far we have used variables to store values in memory for later reuse. We now explore a
means to store multiple values together as one unit, the array.
An array is a fixed ... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
#include <iostream>
using namespace std;

int main() {
    int arr[4];
    cout << "Please enter 4 integers:" << endl;
    for(int i = 0; i < 4; i++)
        cin >> arr[i];
    cout << "Values in array are now:";
    for(int i = 0; i < 4; i++)
        cout << " " << arr[i];
    cout << endl;
    return 0;
}
Note that when accessing an arr... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
1][dimension2];
The array will have dimension1 x dimension2 elements of the same type and can be thought of
as an array of arrays. The first index indicates which of dimension1 subarrays to access, and
then the second index accesses one of dimension2 elements within that subarray. Initialization
and access thus work... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
in memory. Declaring int arr[2][4]; is the same thing as declaring
int arr[8];.
2 Strings
String literals such as “Hello, world!” are actually represented by C++ as a sequence of
characters in memory. In other words, a string is simply a character array and can be
manipulated as such.
Consider the following prog... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
[++i]) {
    if(isalpha(current))
        cout << (char)(isupper(current) ? tolower(current) : current);
    else if(ispunct(current))
        cout << ' ';
This example uses the isalpha, isupper, ispunct, and tolower functions from the cctype
library. The is- functions check whether a given character is an alphabetic character, an
uppercase... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
giving I'm a string!.
You are encouraged to read the documentation on these and any other libraries of interest to
learn what they can do and how to use a particular function properly. (One source is
http://www.cplusplus.com/reference/.)
MIT OpenCourseWare
http://ocw.mit.edu
6.096 Introduction to C++
January (I... | https://ocw.mit.edu/courses/6-096-introduction-to-c-january-iap-2011/33183276121549190ef4f8017b06b1b6_MIT6_096IAP11_lec04.pdf |
MIT OpenCourseWare
http://ocw.mit.edu
6.006 Introduction to Algorithms
Spring 2008
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
Lecture 5
Hashing I: Chaining, Hash Functions
6.006 Spring 2008
Lecture 5: Hashing I: Chaining, Hash Functions
Lecture Overview
• D... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
= 1
• new docdist7 uses dictionaries instead of sorting:
def inner_product(D1, D2):
    sum = 0.0
    for key in D1:
        if key in D2:
            sum += D1[key]*D2[key]

⇒ optimal Θ(n) document distance, assuming dictionary ops take O(1) time
PS2
How close is chimp DNA to human DNA?
= Longest common substring of two strings
e.g. ALG... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
Solution 2: hashing (verb from ‘hache’ = hatchet, Germanic)
• Reduce universe U of all keys (say, integers) down to reasonable size m for table
• idea: m ≈ n, n = |k|, k = set of keys in dictionary
• hash function h : U → {0, 1, . . . , m − 1}
Figure 2: Mapping keys to a table... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
performance is O(1) if α = O(1), i.e., m = Ω(n).
[Figure: keys k1, . . . , k4 from U mapped to table slots h(k1), h(k2), h(k4)]
Hash Functions
Division Method:
h(k) = k mod m
• k1 and k2 collide when k1 ≡ k2 (mod m), i.e., when m divides |k1 − k2|
• fine if keys you store are unif... | https://ocw.mit.edu/courses/6-006-introduction-to-algorithms-spring-2008/3319dd2c718ac545917c4212920b1b2e_lec5.pdf |
6.776
High Speed Communication Circuits
Lecture 1
Communication Systems Overview
Profs. Hae-Seung Lee and Michael H. Perrott
Massachusetts Institute of Technology
February 1, 2005
Copyright © 2005 by H.-S. Lee and M. H. Perrott
Modulation Techniques
• Amplitude Modulation (AM)
  - Standard AM
  - Double-sideband (D...
- Envelope detector can no longer be used for receiver
• The carrier frequency tone that carries no information is removed: less transmit power required for same transmitter SNR (compared to standard AM)
H.-S. Lee & M.H. Perrott
MIT OCW
DSB Spectra
• Impulse in DC portion of baseband signal is now go...
bandwidth is reduced 2x: more bandwidth
efficient
Quadrature Modulation (QAM)
• Takes advantage of coherent receiver’s sensitivity to phase alignment with transmitter local oscillator
  - We essentially have two orthogonal transmission channels (I
  - Transmit two independent bas...
/Q channels
• Uses decision boundaries to evaluate value of data at each time instant
• I/Q signals may be binary or multi-bit
  - Multi-bit shown above
Advantages of Digital Modulation
• Allows information to be “packetized”
- Can compress information in time a... | https://ocw.mit.edu/courses/6-776-high-speed-communication-circuits-spring-2005/33315df81345b772b4100897ea80454d_lec1.pdf |
systems
- Linear power amps more power consuming than nonlinear ones
• Constant-envelope modulation allows nonlinear power amp
- Lower power consumption possible
Simplified Implementation for Constant-Envelope
[Block diagram: baseband input → baseband-to-RF modulation → power amp → transmit filter → …]
Two different methods of dealing with transmit/receive of
a given user
- Frequency-division duplexing
- Time-division duplexing
Frequency-Division Duplexing (Full-duplex)
[Figure: transmitter (TX) and receiver (RX) share one antenna through a duplexer; the transmit band and receive band occupy separate frequency ranges]
• Separate frequency channe...
Code-Division Multiple Access (CDMA)
[Figure: separate transmitters multiply data signals x1(t), x2(t) (symbol period Td) by PN sequences PN1(t), PN2(t) (chip period Tc); the transmit signals y1(t), y2(t) combine in freespace into y(t)]
• Assign a unique code sequence to each transmitter
• Data values are encoded in transmitter output stream by varying the polarity of the...
[Figure: data spectra Sx1(f), Sx2(f) of width ~1/Td are spread by PN1(t), PN2(t) into transmit spectra of width ~1/Tc; after correlation at the receiver, a lowpass filter recovers r(t) at the data bandwidth ~1/Td]
• CDMA transmitters broaden data spectra by encoding them onto chip sequences (“spread-spectrum”)
• CDMA receiver correlates...
Lee, et. al.
Pulsed UWB
• Data encoded in impulse train
• Multipath can be exploited
• No narrowband filters (RF or baseband) needed in transceivers
• Extremely tight time-synchronization is essential
Pictures Courtesy of R. Blazquez, et. al.
Voting classifiers, training error of boosting
Support vector machines (SVM)
Generalization error of SVM
One dimensional concentration inequalities.
Bennett's inequality
Bernstein's inequality
Hoeffding, Hoeffding-Chernoff, and Khinchine inequalities
Vapnik-Ch...
...the generalization error of voting classifiers
Bounds in terms of sparsity
Bounds in terms of sparsity (example)
Martingale-difference inequalities
Comparison inequality for Rademacher processes
Application of Martingale inequalities. Generalized Martingale ineq...
...y. Tensorization of Laplace transform
Application of the entropy tensorization technique
Stein's method for concentration inequalities
Lecture 02
Voting classifiers, training error of boosting.
18.465
In this lecture we consider the classification problem, i.e. Y = {−1, +1}.
Consider a fam... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
2) update weight for each i:

   wt+1(i) = wt(i) e^{−αt Yi ht(Xi)} / Zt,

   where

   Zt = Σ_{i=1}^n wt(i) e^{−αt Yi ht(Xi)},   αt = (1/2) ln((1 − εt)/εt) > 0

3) t = t + 1
end
Output the final classifier: f = sign(Σ_{t=1}^T αt ht(x)).
Theorem 2.1. Let γt = 1/2 − εt (how much better ... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
(1/n) Σ_{i=1}^n e^{−Yi Σ_{t=1}^T αt ht(Xi)} = (Π_{t=1}^T Zt) Σ_{i=1}^n wT+1(i) = Π_{t=1}^T Zt,

since the weights wT+1(i) sum to 1. Moreover,

Zt = Σ_{i=1}^n wt(i) e^{−αt Yi ht(Xi)}
   = e^{+αt} Σ_{i=1}^n wt(i) I(ht(Xi) ≠ Yi) + e^{−αt} Σ_{i=1}^n wt(i) (1 − I(ht(Xi) ≠ Yi))
   = e^{αt} εt + e^{−αt} (1 − εt).
Minimize over... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
over the choice of hyperplanes the minimal distance from the data to the hyperplane:

max_H min_i d(xi, H),

where d(xi, H) = yi(ψ · xi + b). Hence, the problem is formulated as maximizing the margin m:

max_{ψ,b} min_i yi(ψ · xi + b).

Rewriting,

yi(ψ′ · xi + b′) = yi(ψ · xi + b)/m ≥ 1,   ψ′ = ψ/m, ...
... = (1/2) Σ_{i,j} αi αj yi yj xi · xj − Σ_{i,j} αi αj yi yj xi · xj − b Σ_i αi yi + Σ_i αi
    = Σ_i αi − (1/2) Σ_{i,j} αi αj yi yj xi · xj.
The above expression has to be maximized with respect to αi, αi ≥ 0, which is a Quadratic Programming problem.
Hence, we have ψ = Σ_{i=1}^n αi yi xi.
Kuhn-Tucker condition: αi ≠ 0 ⇔ yi(ψ · xi + b) − 1 = 0.
Throwin... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
the decision function becomes
sign(Σ_i αi yi xi · x + b) = sign(Σ_i αi yi K(xi, x) + b)
Lecture 04
Generalization error of SVM.
18.465
Assume we have samples z1 = (x1, y1), . . . , zn = (xn, yn) as well as a new sample zn+1. The classifier trained
on the data z1, . . . , zn is fz1,...,zn .
The error of this c... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
obtain a bound on the generalization ability of an algorithm, it’s enough to obtain a bound
on its leave-one-out error. We now prove such a bound for SVMs. Recall that the solution of SVM is
ϕ = Σ_{i=1}^{n+1} α0_i yi xi.
Theorem 4.1.

    L.O.O.E. ≤ min(# support vect., D²/m²) / (n + 1)
where D is the diameter of a ball co... | https://ocw.mit.edu/courses/18-465-topics-in-statistics-statistical-learning-theory-spring-2007/336b787dc798473a4fa55da69591b190_toc.pdf |
‖ϕ‖² = Σ_i α0_i (yi ϕ · xi)
     = Σ_i α0_i (yi(ϕ · xi + b) − 1) + Σ_i α0_i − b Σ_i α0_i yi
     = Σ_i α0_i,

since each term α0_i (yi(ϕ · xi + b) − 1) vanishes by the Kuhn-Tucker condition and Σ_i α0_i yi = 0; this quantity is bounded by D²/m².
We now prove Lemma 4.1. Let u ∗ v = K(u, v) be the dot product of u and v, and ‖u‖ = (K(u, u))^{1/2} be the corresponding L2 norm. Given x1, . . . , xn+1 ∈ Rd and y1, . . .