make up the four tiers in the ToBI labelling window. Unlike the speech displays, these boxes are text-writable; this is where you will type in the ToBI transcription labels. The top white box
is the Tone tier, and the third box is the Break Index tier. These two tiers represent the core
ToBI analysis. The Tone tier is the part of the transcription that corresponds most closely to a
phonological analysis of the utterance's intonation pattern. It consists of labels for distinctive
pitch events, transcribed as a sequence of high (H) and low (L) tones marked with diacritics
indicating their intonational function. Tones function either as prominence markers called pitch
accents, as parts of pitch accents, or as boundary-related events called phrase accents and boundary tones, which mark the edges of two types of phrases. These categories are based on the
work of Janet Pierrehumbert (1980) and joint work by Mary Beckman and Janet Pierrehumbert
(1986, 1988).
The Break-Index tier captures the prosodic grouping of the words in an utterance by labelling the
end of each word for the subjective strength of its association with the next word, on a scale from
0 (for weakest perceived boundary/strongest perceived conjoining, as in doncha for don’t you) to
4 (for the most disjoint boundary, i.e. at the end of the highest-level intonationally marked
phrase). These categories of association strength or “break indices” are based on work by Mari
Ostendorf, Patti Price, Stefanie Shattuck-Hufnagel, and their associates (Price et al., 1991). The
two highest break indices (3 and 4) are equated with two levels of prosodic groupings (phrases)
that are marked intonationally; additional higher-level groupings of these Intonational Phrases
are not marked in the ToBI system.
The Orthographic tier is the third white box. It contains a straightforward transcription of all of
the words in the utterance, in ordinary English orthography. The word transcriptions are aligned
with their locations in the speech waveform. For labellers using Praat or a similar labelling
computer application, the convention is to place the orthographic label for a word between two marks that delineate the approximate time interval in the signal that corresponds to the utterance of that word, and to place <SIL> to mark any silence between words.[2] The orthographic tier is
arguably not part of any core prosodic analysis, except inasmuch as the labels on this tier can be
used to interface the transcription to dictionary entries which do indicate such things as which
syllable is likely to be most stressed in each word, prosodic information which is not otherwise
included in the ToBI system (more on this below). This tier also helps the labeller to keep track
of which time regions in the waveform, spectrogram, and f0 track correspond to which words in
the utterance.
The Miscellaneous tier is the bottom white box in this display. It is essentially a 'comment' tier
that can be used to mark events such as breaths, coughs, laughter, long silences and other non-
speech events. These are traditionally marked with angle brackets (e.g. <cough>). Like the
orthographic tier, it can include events that are arguably not part of prosody per se. However,
many events that are typically marked on the Miscellaneous tier are important for interpreting the
analyses on the Tone tier and Break-Index tier, because they disrupt the smooth rhythm of the
utterance or interrupt the intonation contour. Labels on this tier usually mark the beginning and
end of an event interval; one exception is the label ‘disfl’, which often stands alone to flag the
occurrence of a perceived disfluency of some type.
1.2. Guiding principles
As the preceding discussion shows, ToBI does not try to transcribe all aspects of prosody, or
even all aspects that are amenable to symbolic transcription. In deciding what to include and
what to leave out, we are guided by three principles. First, we want to be able to distinguish in
our transcription all of the categorically distinct intonation patterns and prosodic units of the
language (in this case, Mainstream American English (MAE); see Jun 2005 for ToBI systems for
other languages and dialects). Second, we do not transcribe aspects of prosody which are more
amenable to continuous-valued quantitative measures than to the categorical divisions of a
symbolic transcription, such as the slope of the changing f0 curve. Finally, we do not want to
squander the user's energies in transcribing even categorical aspects of prosody which are
predictable from other parts of the transcription or from auxiliary tools, such as dictionaries, that
can be used to determine the location of lexical stress within words.
The categorical aspects of prosody which we try to capture completely (according to the first
principle) are of two types. The first is the prosodic structure -- the alternating rhythm of more
and less prominent words and syllables and the grouping of words into prosodic constituents of
various sizes -- and the second is the intonation pattern -- the sequence of contrastive pitch events that we call pitch accents, phrase accents, and boundary tones, and that determines the f0 contour of the utterance. A basic assumption of the ToBI approach is that both of these goals can be met using the same inventory of elements.
[2] In older transcriptions using xwaves™, the orthographic label had to be aligned to the end of the word rather than spanning the whole word interval. Other popular programs for displaying and labelling speech are wavesurfer (http://www.speech.kth.se/wavesurfer) and emu (http://emu.sourceforge.net/emu-tobi.shtml).
[The next two paragraphs contain further discussion of what is not captured by a ToBI
transcription; read them if you are curious about this question. Otherwise, skip directly to
section 2.0 below.]
An example of the non-categorical aspects of prosody which we leave out (in accordance with
the second principle) is the local tempo of each word in the utterance, which we feel could be
more accurately and directly captured by some quantitative measure such as normalized segment
duration (e.g., Campbell, 1992) than by any symbolic transcription such as an arbitrary division
into, say, categories `1', `2', and `3' (for `slow', `medium', and `fast' tempi).
A categorical aspect of prosody which we leave out (in accordance with the third principle)
because it should be fairly predictable is the marking of the lexically stressed and unstressed
syllables within each word. By this level of stress we mean the word-internal alternation
between more stressed and less stressed syllables, where the relative prominence of any pair of
syllables is fairly fixed and can be thought of as inherent to the word's dictionary entry.
MIT OpenCourseWare
http://ocw.mit.edu
6.013/ESD.013J Electromagnetics and Applications, Fall 2005
Please use the following citation format:
Markus Zahn, 6.013/ESD.013J Electromagnetics and Applications, Fall
2005. (Massachusetts Institute of Technology: MIT OpenCourseWare).
http://ocw.mit.edu (accessed MM DD, YYYY). License: Creative
Commons Attribution-Noncommercial-Share Alike.
Note: Please use the actual date you accessed this material in your citation.
For more information about citing these materials or our Terms of Use, visit:
http://ocw.mit.edu/terms
6.013 - Electromagnetics and Applications
Fall 2005
Lecture 9 - Oblique Incidence of Electromagnetic Waves
Prof. Markus Zahn
October 6, 2005
I. Wave Propagation at an Arbitrary Angle
From Electromagnetic Field Theory: A Problem Solving Approach, by Markus Zahn, 1987. Used with permission.
z' = x sin(θ) + z cos(θ)

kz' = kx x + kz z,   kx = k sin(θ),  kz = k cos(θ),  k = ω√(εμ)

Ē(x, z, t) = Re[ Ê e^(j(ωt − kz')) īy ] = Re[ Ê e^(j(ωt − kx x − kz z)) īy ]

∇ × Ē = −jωμ H̄  ⇒  Ĥ̄ = −(1/(jωμ)) ∇ × Ē̂ = −(1/(jωμ)) [ −īx ∂Ey/∂z + īz ∂Ey/∂x ]
      = −(1/(jωμ)) [ jkz Ê īx − jkx Ê īz ] e^(−j(kx x + kz z))
      = −(Ê/η) [ cos(θ) īx − sin(θ) īz ] e^(−j(kx x + kz z))
k̄ × (k̄ × Ê̄) = k̄(k̄ · Ê̄) − Ê̄(k̄ · k̄) = ωμ k̄ × Ĥ̄ = −ω²εμ Ê̄

using the identity Ā × (B̄ × C̄) = B̄(Ā · C̄) − C̄(Ā · B̄) and k̄ · Ê̄ = 0:

|k̄|² = kx² + ky² + kz² = ω²εμ
Ŝ̄ = ½ Ê̄ × Ĥ̄*,   Ĥ̄ = (1/(ωμ)) k̄ × Ê̄

Ŝ̄ = ½ Ê̄ × (1/(ωμ)) (k̄ × Ê̄)* = (1/(2ωμ)) [ k̄(Ê̄ · Ê̄*) − Ê̄*(Ê̄ · k̄) ],   with Ê̄ · k̄ = 0

Ŝ̄ = k̄ |Ê̄|² / (2ωμ)   (Ŝ̄ in the direction of k̄)
II. Oblique Incidence Onto a Perfect Conductor
A. Ē Field Parallel to Interface (TE - Transverse Electric)

Ēi = Re[ Êi e^(j(ωt − kxi x − kzi z)) īy ]
H̄i = Re[ (Êi/η)(−cos(θi) īx + sin(θi) īz) e^(j(ωt − kxi x − kzi z)) ]
kxi = k sin(θi),  kzi = k cos(θi),  k = ω√(εμ),  η = √(μ/ε)

Ēr = Re[ Êr e^(j(ωt − kxr x + kzr z)) īy ]
H̄r = Re[ (Êr/η)(cos(θr) īx + sin(θr) īz) e^(j(ωt − kxr x + kzr z)) ]
kxr = k sin(θr),  kzr = k cos(θr)

From Electromagnetic Field Theory: A Problem Solving Approach, by Markus Zahn, 1987. Used with permission.

Boundary conditions at the conductor surface require that

Êy(x, z = 0) = 0 = Êyi(x, z = 0) + Êyr(x, z = 0) = Êi e^(−j kxi x) + Êr e^(−j kxr x)

Ĥz(x, z = 0) = 0 = Ĥzi(x, z = 0) + Ĥzr(x, z = 0) = (1/η)[ Êi e^(−j kxi x) sin(θi) + Êr e^(−j kxr x) sin(θr) ]

kxi = kxr ⇒ sin(θi) = sin(θr) ⇒ θi = θr   (angle of incidence = angle of reflection)

and Êr = −Êi. With Êi = Ei (real),

Ey(x, z, t) = Re[ Êi ( e^(−j kz z) − e^(+j kz z) ) e^(j(ωt − kx x)) ] = 2Ei sin(kz z) sin(ωt − kx x)

H̄(x, z, t) = Re[ (Êi/η)( cos(θ)(−e^(−j kz z) − e^(+j kz z)) īx + sin(θ)(e^(−j kz z) − e^(+j kz z)) īz ) e^(j(ωt − kx x)) ]
           = (2Ei/η)[ −cos(θ) cos(kz z) cos(ωt − kx x) īx + sin(θ) sin(kz z) sin(ωt − kx x) īz ]
B. H̄ Field Parallel to Interface (TM - Transverse Magnetic)

Ē(x, z, t) = 2Ei [ cos(θ) sin(kz z) sin(ωt − kx x) īx − sin(θ) cos(kz z) cos(ωt − kx x) īz ]

H̄(x, z, t) = Re[ (Êi/η)( e^(−j kz z) + e^(+j kz z) ) e^(j(ωt − kx x)) īy ] = (2Ei/η) cos(kz z) cos(ωt − kx x) īy

The surface current and surface charge on the conductor are

Kx(x, z = 0) = Hy(x, z = 0) = (2Ei/η) cos(ωt − kx x)

σs(x, z = 0) = −ε Ez(x, z = 0) = 2ε Ei sin(θ) cos(ωt − kx x)

Check: Conservation of Charge

∇Σ · K̄ + ∂σs/∂t = 0   (∇Σ · is the surface divergence)   ⇒   ∂Kx/∂x + ∂σs/∂t = 0

<S̄> = ½ Re[ Ê̄ × Ĥ̄* ] = (2Ei²/η) sin(θ) cos²(kz z) īx
III. Oblique Incidence Onto a Dielectric
From Electromagnetic Field Theory: A Problem Solving Approach, by Markus Zahn, 1987. Used with permission.
A. TE (Ē ∥ Interface) Waves

Ēi = Re[ Êi e^(j(ωt − kxi x − kzi z)) īy ]
H̄i = Re[ (Êi/η1)(−cos(θi) īx + sin(θi) īz) e^(j(ωt − kxi x − kzi z)) ]

Ēr = Re[ Êr e^(j(ωt − kxr x + kzr z)) īy ]
H̄r = Re[ (Êr/η1)(cos(θr) īx + sin(θr) īz) e^(j(ωt − kxr x + kzr z)) ]

Ēt = Re[ Êt e^(j(ωt − kxt x − kzt z)) īy ]
H̄t = Re[ (Êt/η2)(−cos(θt) īx + sin(θt) īz) e^(j(ωt − kxt x − kzt z)) ]
θi = θr,   k1 sin(θi) = k2 sin(θt)

sin(θt) = (k1/k2) sin(θi) = ((ω/c1)/(ω/c2)) sin(θi) = (c2/c1) sin(θi)   (Snell's Law)
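As a quick numeric sanity check of Snell's law above (not part of the lecture; the wave speeds and the 30° incidence angle are invented illustrative values):

```python
# Snell's law check: sin(theta_t) = (c2/c1) sin(theta_i).
# Illustrative values: wave slows to half speed in medium 2.
import math

c1, c2 = 3.0e8, 1.5e8              # invented wave speeds (m/s)
theta_i = math.radians(30.0)       # invented incidence angle
theta_t = math.asin((c2 / c1) * math.sin(theta_i))
```

With c2 < c1 the transmitted ray bends toward the normal, so θt < θi.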
Index of refraction:  n = c0/c = √(εμ/(ε0μ0)) = √(εr μr)

sin(θt) = (n1/n2) sin(θi)

Reflection Coefficient:   R = Êr/Êi = (η2 cos(θi) − η1 cos(θt)) / (η2 cos(θi) + η1 cos(θt))

Transmission Coefficient: T = Êt/Êi = 2η2 cos(θi) / (η2 cos(θi) + η1 cos(θt))

B. Brewster's Angle of No Reflection

R = 0  ⇒  η2 cos(θi) = η1 cos(θt)

Squaring and using sin(θt) = (c2/c1) sin(θi):

η2²(1 − sin²(θi)) = η1²(1 − sin²(θt)) = η1²(1 − (c2²/c1²) sin²(θi))

Solving for sin²(θi), with η² = μ/ε and c² = 1/(εμ):

sin²(θi) = sin²(θB) = (1 − ε2μ1/(ε1μ2)) / (1 − (μ1/μ2)²)

θB is called the Brewster angle. There is no Brewster angle for TE polarization if μ1 = μ2.
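The Brewster result above can be checked numerically: at θB the TE reflection coefficient should vanish. This sketch is not from the lecture; the material constants (ε1 = ε2 = 1, μ2 = 4μ1) are invented so that a TE Brewster angle exists:

```python
# TE Brewster angle: sin^2(theta_B) = (1 - eps2*mu1/(eps1*mu2)) / (1 - (mu1/mu2)^2),
# then verify R = (eta2 cos(theta_i) - eta1 cos(theta_t)) / (eta2 cos(theta_i) + eta1 cos(theta_t)) = 0.
import math

eps1, mu1 = 1.0, 1.0
eps2, mu2 = 1.0, 4.0               # mu1 != mu2, so a TE Brewster angle exists

sin2_B = (1 - eps2 * mu1 / (eps1 * mu2)) / (1 - (mu1 / mu2) ** 2)
theta_B = math.asin(math.sqrt(sin2_B))

c1, c2 = 1 / math.sqrt(eps1 * mu1), 1 / math.sqrt(eps2 * mu2)
theta_t = math.asin((c2 / c1) * math.sin(theta_B))   # Snell's law

eta1, eta2 = math.sqrt(mu1 / eps1), math.sqrt(mu2 / eps2)
R = (eta2 * math.cos(theta_B) - eta1 * math.cos(theta_t)) / \
    (eta2 * math.cos(theta_B) + eta1 * math.cos(theta_t))
```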
C. Critical Angle of No Power Transmission

If c2 > c1, sin(θt) = (c2/c1) sin(θi) can be greater than 1:

θi = θc ⇒ sin(θt) = 1 ⇒ sin(θc) = c1/c2   (real solution for θc if c1 < c2)

θc is called the critical angle. At the critical angle, θt = π/2 ⇒ kzt = k2 cos(θt) = 0.

For θi > θc,  sin(θt) > 1 ⇒ cos(θt) = −j√(sin²(θt) − 1) ⇒ kzt = k2 cos(θt) = −jα

Ēt = Re[ Êt e^(j(ωt − kxt x)) e^(−αz) īy ]
H̄t = Re[ (Êt/η2)(−cos(θt) īx + sin(θt) īz) e^(j(ωt − kxt x)) e^(−αz) ]

These are non-uniform plane waves.

<Sz> = −½ Re[ Êy Ĥx* ] = −½ Re[ (Êt Êt*/η2)(−cos(θt))* e^(−2αz) ] = 0

since cos(θt) = −jα/k2 is purely imaginary: no time-average power crosses the interface.
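A small numeric illustration of the critical angle and the evanescent decay rate above (not from the lecture; the wave speeds, frequency, and incidence angle are invented):

```python
# Critical angle sin(theta_c) = c1/c2; beyond it, kzt = -j*alpha with
# alpha = k2*sqrt(sin^2(theta_t) - 1). Illustrative values: c2 = 2*c1.
import math

c1, c2 = 1.0, 2.0                     # invented wave speeds
theta_c = math.asin(c1 / c2)          # critical angle (30 degrees here)

omega = 1.0                           # invented frequency
k2 = omega / c2
theta_i = math.radians(45.0)          # beyond the critical angle
sin_t = (c2 / c1) * math.sin(theta_i) # > 1: transmitted wave is evanescent
alpha = k2 * math.sqrt(sin_t ** 2 - 1)
```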
6.270: AUTONOMOUS ROBOT DESIGN COMPETITION
LECTURE 3: Advanced Techniques
• Assignment 2: General Comments
• More on sensors
• Servos
• RF receiver
• Robot control and state machines
• Threads
• Assignment 3 handed out
Delinquent Teams
• Assignment 2 teams not finished:
– 11, 14, 15, 18, 22, 40, 55
• Assignment 3 handed out today, due
tonight!!
Rules Clarifications
• Are prices measured in 1-u or 100-u quantities?
– We’ll use 100-u quantities for pricing purposes
• Can we use rubber bands to add friction with game
balls?
– Yes
• Can we disable an opponent?
– You cannot intentionally damage or flip
– You can drive into the other robot or push them around
• Can we cut apart the baseplate and glue it back
together?
– Yes, but things glued together are not structural
Power Usage
• Your HandyBoard has 8 rechargeable 1.5 volt batteries built in
• They don’t last too long when driving
actuators
• Next week, we’ll give you high capacity
lead acid batteries from Hawker to power
actuators
Some 6.002/8.02 Lovin’
• Voltage, Current, Resistance: V = I · R
– Resistance: ohms (Ω)
– Current: amps (A)
– Voltage: volts (V)
• Power: P = I · V
[Figure: voltage source v driving current i through resistance R]
Shorting the Batteries
• v = 6 V, R = 0.02 Ω ⇒ i = 300 A
  – Household wiring rated 15 A
• p = 1800 W
  – Thirty 60-watt light bulbs
• Lesson: ensure battery leads are well-insulated!
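The short-circuit numbers above follow directly from Ohm's law; a one-line check using the slide's values:

```python
# Ohm's law check for the battery-shorting slide: i = v/R, p = i*v.
v, R = 6.0, 0.02
i = v / R    # about 300 A: twenty times a 15 A household circuit
p = i * v    # about 1800 W: thirty 60-watt light bulbs
```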
Phototransistors
• It’s an art
• Need to figure out an effective way of reading the color
off the board or object
– Factors: glossiness, ambient lighting
– It’s not really color; it’s grayscale
– Contest night
– Wear and tear of contest board
– Can’t rely on just light provided by the world alone
Providing Your Own Light
• LEDs are polarized, and you must use a resistor
• Light dispersion
  – What is the best way to position the LEDs?
  – For color detection, look for reflection at an angle, not perpendicular to the surface
Providing Your Own Light
• Turning LED on and off gives more info
• Use FET to turn multiple LEDs on and off
using the digital output
• See handouts page for FET datasheet
Wiring Multiple LEDs
• Use a separate resistor for each LED
[Schematic: each LED fed from vdd through its own 150 Ω, ¼ watt resistor, switched by the digital output]
Shielding
• Control the light made available to the
sensor
• Help focus what the sensor is looking at
• Cardboard, heat shrink (black), electrical
tape
• Some things aren’t as opaque as you think
• Calibration (and 60-second set-up time)
Distance Sensor
• Follow hookup instructions in the notes
Distance Sensor on the HB
• Distance sensor provides variable voltage output
• Must disconnect internal pull-up resistor
[Schematic: 5 V VDD with internal 47 kΩ pull-up on the IN pin feeding the ADC; GND]
Disconnecting the 47 kΩ Pull-Up
• Remove main HB PCB from the plastic case
• Analog Inputs 2, 4, and 5 can be modified
• Cut traces (make sure you know where!)
[Photos: board locations of analog inputs 2, 4, and 5]
Distance Sensor
• Range: 15-150 cm
– 6-60 in
Distance Sensor
• You probably don't need more than 3
• But if you're really that needy, cut port 0 or 1
[Schematic: analog ports IN_16 through IN_23; IN_0 shown with 5 V VDD, 47 kΩ pull-up feeding the ADC, and GND]
The Gyroscope
• Gives you a rate of rotation
• You can integrate to get a position
– Usually accurate to within a couple of degrees
over 60 seconds
• Example code given on contestant’s
information page
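The integrate-to-get-position idea above can be sketched as follows (a hypothetical Python sketch, not the course's example code; the function name, sample rate, and bias handling are invented):

```python
# Minimal rate-gyro integration: heading is the sum of (rate - bias) * dt
# over all samples. Rates in deg/s, dt in seconds.
def integrate_heading(rate_samples, dt, bias=0.0):
    heading = 0.0
    for rate in rate_samples:
        heading += (rate - bias) * dt
    return heading

# 1 second of turning at 90 deg/s, sampled at 100 Hz -> about 90 degrees
heading = integrate_heading([90.0] * 100, 0.01)
```

Subtracting a measured bias before integrating is what keeps the drift mentioned above to a few degrees over a match.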
Gyroscope Considerations
• Reducing drift and inaccuracy
– Correct gyroscope data when you know what
it must be
• Backed against a wall
– Use relative positioning
• Make turns based on a change of 90 degrees,
rather than turning to 270 absolute degrees
Gyroscope Usage
• You can have ONE gyroscope
– 5 sensor points
– Talk to us about how to hook it up
– We will not replace broken gyroscopes
– Use any analog sensor port
The RF Receiver
• Lets us give you
information during
the match:
– Start/end of match
– Vote tally
– Position of robots
Using the RF receiver
• First thing in your code:
– rf_team = YOURTEAMNUMBER;
– rf_enable();
• This will disable the IC interface
– To turn on your handyboard without enabling RF,
hold START while turning on
• Plug your telephone cable into the HandyBoard
and the RF receiver
Start/stop of match
• Use the function start_machine()
(described in the course notes)
• Will automatically start your robot when
the match begins, and stop it when the
match ends
Voting Information
• rf_vote_red
• rf_vote_green
• rf_vote_winner
– You need some way of determining a winner
when there is a tie
– All automagically updated
Position Information
• rf_x0, rf_y0
• rf_x1, rf_y1
– Tell you the x and y coordinates of the two robots
– No guarantees about which of the two robots you will
be
• Consistent during the match, but not across matches
– Also automagically updated
Position Information
• rf_x0, rf_y0
• rf_x1, rf_y1
  – Approximately 8000 units per foot
  – Center of table is (0,0)
[Figure: table coordinate system with x and y axes through the center]
Position Information
• How do we determine the position
information?
– You’ll be required to put a colored swatch
(which we provide) on top of your robot
– We look at the table and find the swatches
– More details later
The Bigger Picture
• You have tools:
– Sensors
– Actuators
– Mechanical chassis
– Task-specific mechanical devices
– Processor
• How to put it all together?
The Bigger Picture
• Combining Sensors
– Servo + distance sensor
– Servo + beacon
– Beacon + distance sensor
• Do you even need sensors?
– Wall following / going straight
– Making precise turns
What Are the Sensors Doing?
• They prevent you from dead reckoning
• What matters is where the robot is, not
where it thinks it is
• Provide information to make decisions
The AI: How to Code a Robot
• Programming language is easy; programming
style is difficult, especially with a team (any
6.170 alums?)
• Some patterns have emerged in regards to
having an effective coding style
– Finite State Machines
– Control
– Coding Techniques
Finite State Machines
• What is a finite state machine (FSM)?
– Defines what the robot should do at a given point in
time
– Each state has predefined outputs
– Transitions to other states depend on inputs
• Why?
– Effective way of thinking about your strategy
– Define what to do for any combination of inputs
Implementing a State Machine
• Each action is a state
– Moving forward
– Turning
• Actuators are outputs of the FSM
• Sensor inputs determine next state
Example FSM
[State diagram: states "Orient Robot", "Move Straight", "Collect Ball", "Release Ball", "180-degree Turn", and "Wall Follow Forward"; transitions labelled "ball not detected", "detect opposing robot", and "detect scoring area"]
Coding an FSM
• While loops
– Continue an action until input is received
• Multithreading
– Processes that determine the inputs
– Processes that determine outputs and state transitions
• Don’t do it the 6.111 PAL 20V10 way
– Don’t need a variable to keep track of what state you’re in
– Instead think conceptually; think before you code
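The while-loop FSM style described above can be sketched as follows (in Python rather than the course's IC; the state names echo the example diagram, and the sensor check is an invented stub):

```python
# Each state is a function that runs until a transition condition holds,
# then returns the next state function -- no explicit state variable needed.
visited = []            # record of states entered, for illustration only

def ball_detected():
    return True         # stub standing in for a real sensor check

def orient_robot():
    visited.append("orient")
    while not ball_detected():
        pass            # e.g. keep turning in place
    return move_straight

def move_straight():
    visited.append("move straight")
    return collect_ball

def collect_ball():
    visited.append("collect ball")
    return None         # terminal state in this sketch

def run_fsm(start_state):
    state = start_state
    while state is not None:
        state = state()

run_fsm(orient_robot)
```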
FSM Issues
• Inputs
– Check only those that matter at that state
– Determine what is important
• Storing State
– Make your robot smarter
• Use the state as well as the inputs to determine action
• Store last actions in state variables
– Helpful if robot gets disoriented
Driving Straight
• Drive mechanism
• Line following
• Shaft encoding
• Wall following
[Figures: differential, steering, and synchro drive layouts; a left/middle/right sensor table mapping on/off readings to LEFT!, RIGHT!, or straight actions; a phototransistor-LED breakbeam pair; a wall-following sketch]
Drive Mechanisms
• Differential Drive
• Synchro Drive (servos)
• Rack-and-Pinion Drive (car)
• Independent Drive (gearboxes; Assignment 2)
[Figures: differential, synchro, steering-wheel, and combined steering-and-drive-wheel layouts]
Line Following
• Use set of light sensors to look at color under robot
• Set of lines and contrasts on board
• Follow contrast
[Table: left/middle/right sensor readings (on/off) mapped to actions: LEFT!, RIGHT!, straight, or n/a]

Line Following
• Handling the n/a states:
if prev_state == hard_right
then keep turning right
if prev_state == hard_left
keep turning left
if prev_state == right
turn left
if prev_state == left
turn right
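The prev_state fallback above, written out as a function (a hypothetical Python sketch; the course uses IC, and the function name and default case are invented):

```python
# When the line sensors give an ambiguous ("n/a") reading, fall back on the
# last known state, exactly as in the slide's rules.
def recover_from_na(prev_state):
    if prev_state == "hard_right":
        return "keep turning right"
    if prev_state == "hard_left":
        return "keep turning left"
    if prev_state == "right":
        return "turn left"
    if prev_state == "left":
        return "turn right"
    return "go straight"   # assumption: default when there is no history
```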
Control Systems
• Robots are deaf, dumb, and blind
– Only capable of following explicit instructions
• Control systems required to create desired
motions
Open Loop Control
• Simply a set of sequential instructions
• Does not rely on external inputs
– Dead reckoning / using timing
• Errors accumulate
Feedback Control
• Sense environment to correct errors
• Avoid dead reckoning
Shaft Encoding
• Breakbeam sensor + pulley
• Count interruptions to find revolutions
• Useful for:
  – Driving straight
  – Turning
  – Moving a specific distance
  – Better than timing (doesn't rely on battery charge)
[Figure: phototransistor-LED breakbeam pair]
Shaft Encoding
• Works better on some ports:
– Ports 7 and 8 have hardware counters (faster, more
accurate)
– Others use software counters
– If you need more than 2, try using ports 2-6
• Both wheels may not turn at same speed
• Use revolutions for feedback
• Determine difference in speed and adjust
• Hint: place encoder high in gear train
Pseudo-Code
if (right encoder value - left encoder value) > 100 ticks
    slow down right wheel or speed up left wheel
if (left encoder value - right encoder value) > 100 ticks
    slow down left wheel or speed up right wheel
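A runnable version of the pseudo-code above (a hypothetical Python sketch: the 100-tick threshold is from the slide, while the power-step size and function signature are invented; it takes the "slow down" branch of each rule):

```python
# Balance the two drive wheels using encoder tick counts as feedback.
def adjust_power(left_ticks, right_ticks, left_pwr, right_pwr,
                 step=5, threshold=100):
    if right_ticks - left_ticks > threshold:
        right_pwr -= step      # right wheel is ahead: slow it down
    elif left_ticks - right_ticks > threshold:
        left_pwr -= step       # left wheel is ahead: slow it down
    return left_pwr, right_pwr
```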
Wall Following
• Easy way to go straight
• Simple to implement
– Bump sensors on side
– Distance sensors
while (…) {
    if (sensor hit)
        steer away from wall
    else
        steer towards wall
}
[Figure: robot following along a wall]
Driving Straight—Advantages
and Disadvantages
• Shaft encoding
– Relies on initial alignment
– Relatively fast
– Can be tricked by slipping
• Line following
– Robust
– Relatively slow
• Wall following
– Requires continuous stretch of wall
– Can be fast
Code Implementation
• Start on paper
• Use functions and comments
– Code is then legible for everyone on your
team and for us (impounding)
Programming Methodology
• Top-down programming
– Good for initial design
– Overall view without details
• Bottom-up programming
– Good for code creation
– Allows individual testing of functions
Programming Methodology
• Figure out the actions you want to take
• Figure out the functions you need
• Implement
• Test
• Integrate into other code
• Repeat
Testing and Debugging
• Most important part of the design
• Significant testing is necessary to do well
– Things will break
– Things will happen that you don’t expect
– Try to see these things in advance
• Test and debug incrementally
Hints
• Test sensors before mounting
• Test small pieces of code before
combining into larger procedures
• Use the LCD screen
• Remember mechanical reliability
Error Detection
• Your robot will mess up
• How can it find out what’s wrong?
• Timeouts are key
Timeouts
• Detect when robot is stuck in a state
– Probably waiting for input – bump into wall,
light reading
• Force out of stuck state
– Error correcting routines
Error Correction
• Try again, harder
• Back up, try again
• Wiggle around
• Guess what it should try next
• Skip to next part of routine
• Line following: what to do about the n/a states
– In this case, using an FSM may help you figure out
what to do
Quick Note on Threads
• What is a Thread?
  – Separate task running at the same time
  – Allows you to multi-task
    • Motors run and watch if a sensor is pressed
• How does one processor run two threads?
  – Executes a process certain number of ticks (ms)
  – Processor switches from one thread to another
The Methods for Threading
• int start_process(function_call(),
[TICKS], [STACK_SIZE]);
– Default run is 5 ticks, or 5 ms
– Stack size is by default 256 bytes
– Returns process ID (pid) of the new process
– You shouldn’t need to pass ticks or stack_size
• int kill_process(int pid)
– Returns 0 (process was destroyed), 1 (process not
found)
Interacting in IC
• kill_all
– Kill all currently running processes
• ps
– Prints out list of process status
– Provides:
• Process ID
• Status code
• Program counter
• Stack pointer
• Stack pointer origin
• Number of ticks
• Name of function that is currently executing
• Refer to Handy Board manual for more information
Example
main() {
while (true) {
go forward
wait until sensor
pressed
go backward
wait until sensor
pressed
}
}
main() {
while (true) {
while (vote is tied)
play tone 1
while (red is winning)
play tone 2
while (green is winning)
play tone 3
}
}
Example
move() {
while (true) {
go forward
wait until sensor
pressed
go backward
wait until sensor
pressed
}
}
watch_vote() {
while (true) {
while (vote is tied)
play tone 1
while (red is winning)
play tone 2
while (green is winning)
play tone 3
}
}
Example
void move() { … }
void watch_vote() { … }
void main() {
int move_pid;
int watch_vote_pid;
move_pid = start_process(move());
watch_vote_pid = start_process(watch_vote());
sleep(60);
kill_process(watch_vote_pid);
kill_process(move_pid);
}
Why Was the Example Easy?
• Threads are independent of each other
• Do not share any common variables, or
common information
• Did not attempt to communicate or
change each other’s state
How Threads Can Communicate
• Communicate through global variables
• Variables declared above and outside of all
functions are global variables (like C)
• One thread can use the global variable
that another thread is changing
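The global-variable pattern above, sketched in Python rather than the HandyBoard's IC (the flag name and the timing are invented; the point is only that one thread reads what another writes):

```python
# One thread sets a shared global flag; the main thread observes the change.
import threading
import time

ball_seen = False        # global shared between threads

def watcher():
    global ball_seen
    time.sleep(0.05)     # stand-in for polling a sensor
    ball_seen = True

t = threading.Thread(target=watcher)
t.start()
t.join()                 # wait for the watcher thread to finish
```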
For the Contest
• You will be using threads, even if you
don’t know it
– We provide start code that makes sure that
you start and stop at the right times:
start_machine()
• See Appendix A for more details
Thread Tips
• Outside of start_machine(), you most likely won’t need threads
– Work around threads with control statements: for, while,
if…then…else, return, break
• Don’t use reset_system_timer()
– Our start system code depends on the timer
• Don't use sleep(); use while loops. Instead of sleep(3.0):
  float start_time = seconds();
  while (seconds() - start_time < 3.0) {
      /* check for anything (like sensor inputs) */
      if (you_really_need_to_leave_the_while_loop)
          break;
  }
Your Winning Strategy
• Sufficient sensors and AI to determine location of robot
• Be able to react to potential problems that the robot
might face
• Be aware of your limitations
– Amount of LEGO
– Power and speed of the motors
– Robot size
– Time of the round (60 seconds)
– How long until Tuesday, January 25, 5:00 pm
Your Winning Strategy
• Reliability and robustness are the keys
– 90% reliability means 43% chance of not
failing in 8 rounds
– KISS
– Leave a lot of time for testing and debugging
• Impossible to counter every opposing
strategy, so don’t try
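Checking the reliability arithmetic above: 90% per-round reliability compounded over 8 rounds.

```python
# Probability of surviving all 8 rounds with 90% per-round reliability.
p_survive_all = 0.9 ** 8    # about 0.43
```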
Assignment 3
• Due Friday night (TONIGHT!) at 11:45
pm
• One task to complete:
1. Romeo and Juliet
• Pick up assignment after lecture
Assignment 4
• Due Tuesday night (January 11) at 11:45
pm
• Two tasks to complete:
1. Discuss with your Organizer/TA pair your
strategy
2. Submit a one-page write-up of intentions
What’s Next
• No workshops today
• Monday, January 10, and Tuesday, January 11
– Workshop 5 – Servos, Sensors, and Shaft Encoders
• Using analog sensors
• Servo – the other motor
• Shaft encoding with breakbeam sensor
• Gyroscopes
– Workshop 6 – Advanced LEGO
• Using the unique pieces
• Interesting gadgets
– Workshop 7 – Code & Sensors II: Advanced Techniques
• Open vs. closed loop control
• Line following
• Don’t forget to sign up for workshops in lab!
Good LUCK!
Heinrich Hencky (1885-1952)
• Natural logarithmic strain measure: ε(t) = ln(L(t)/L0)
• Biography:
  – High School (Humanistic Gymnasium), Speyer am Rhein, Germany
  – Technical University, Munich; Dipl. Eng. 1908
  – Technical University, Darmstadt; D. Eng. 1913
  – Professor of Mechanical Engineering, M.I.T. 1930-1933. Office 1-321
From the 1930-31 M.I.T. Course Catalog:
Courtesy of MIT. Used with permission.
© source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/help/faq-fair-use/.
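A small numeric comparison of the Hencky strain above with the ordinary engineering strain (L − L0)/L0, for a sample stretched to twice its rest length (illustrative values):

```python
# Hencky (true) strain ln(L/L0) versus engineering strain (L - L0)/L0.
import math

L0, L = 1.0, 2.0
hencky = math.log(L / L0)        # ln 2, about 0.693
engineering = (L - L0) / L0      # 1.0
```

The two measures agree for small deformations but diverge at large stretch, which is why the logarithmic measure is preferred for large strains.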
1So What is a Complex Fluid?
• Complex fluids possess an underlying microstructure that can be affected by (and
in turn then affect) a flow field
• Examples include:
Polymer solutions, polymer melts, liquid crystals
Foams, gels, bubbly-liquids,
Suspensions, emulsions, slurries, mud..
Food stuffs, paints, adhesives and other consumer products
• Basically everything except air, oil, water!
• These fluids violate Newton’s viscosity law:
•Rheology: study of the material properties of
complex fluids in specified/known flow fields
•Non-Newtonian Fluid Dynamics: self-consistent
solutions of cons. of mass, momentum PLUS a
constitutive model (rheological equation of state)
τyx = μ ∂vx/∂y   (more generally τ = μ{∇v + (∇v)ᵀ})
Important Non-Newtonian Fluid Effects
• Shear thinning (rate-dependence of viscosity)
“inelastic”, “generalized Newtonian fluids”
η(γ̇) ≡ τyx/γ̇
• Elasticity (normal stress differences)
“Second order fluids” (SOF)
Boger fluids
Ψ1(γ̇) ≡ (τ11 − τ22)/γ̇²
• Fluid Memory (stress relaxation)
Relaxation time λ
[figure: shear stress vs. time, relaxing after a step strain γ0]
G(t) = τ12(t)/γ0 ~ G0 e^(−t/λ)
Natural Time Scale of Complex Fluids
• Natural time scale: λmaterial ≈ 100 sec (Silly Putty)
• The Deborah number is a dimensionless measure of the material's relaxation time
compared with the time scale of the deformation:
De = (Relaxation time of the material)/(The timescale of the process) = λmaterial/Tprocess
• De < 1: Viscous Liquid;  De ~ 1: Viscoplastic Solid;  De ~ 10: Elastic Solid;  De ~ 1000: Brittle Solid
http://www.sillyputty.com
Images courtesy of Cambridge Polymer Group
© MIT; Harold Edgerton Strobe Lab.
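The definition above is a one-line ratio; the following sketch (my own illustration, not from the slides; the process times are made up) tabulates the Silly-Putty regimes listed on the slide:

```python
# Deborah number De = lambda_material / T_process and the qualitative
# regimes from the slide (regime thresholds are approximate).
def deborah(relaxation_time_s, process_time_s):
    return relaxation_time_s / process_time_s

def regime(De):
    if De < 1:
        return "viscous liquid"
    elif De < 10:
        return "viscoplastic solid"
    elif De < 1000:
        return "elastic solid"
    return "brittle solid"

lam = 100.0  # relaxation time of ~100 s, as quoted on the slide
for T in (1e4, 100.0, 10.0, 0.1):  # slow flow ... strobe-fast impact
    De = deborah(lam, T)
    print(f"T_process = {T:g} s -> De = {De:g} ({regime(De)})")
```

The same material thus behaves as a liquid or a brittle solid depending only on how fast it is deformed.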
MIT OpenCourseWare
https://ocw.mit.edu
2.341J / 10.531J Macromolecular Hydrodynamics
Spring 2016
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/2-341j-macromolecular-hydrodynamics-spring-2016/09d1e70558b8678c9712932a5fc55beb_MIT2_341JS16_Lec02-slides.pdf |
8.701
Introduction to Nuclear
and Particle Physics
Markus Klute - MIT
0. Introduction
0.5 Early History and
People in Nuclear and
Particle Physics
Early Developments in Nuclear & Particle Physics
~1820s: geologists and biologists have come to believe that the Earth is much older than tens of
thousands of years, perhaps hundreds of millions of years. Classical thermodynamic calculations
contradict these estimates and challenge evolution and the Origin of Species.
1895: Wilhelm Röntgen discovers X-rays
Charles Darwin
1809-1882
Wilhelm Roentgen
1845-1923
And the first X-ray
image of a human hand,
1895. X-rays were used
for medical purposes as
early as 1897.
Lord Kelvin
1824-1907
Both images © Source unknown. All rights
reserved. This content is excluded from our
Creative Commons license. For more
information, see https://ocw.mit.edu/fairuse.
These images are in the public domain.
Early Developments in Nuclear & Particle Physics
1896: Henri Becquerel discovers radiation from uranium
1897: Ernest Rutherford discovers ɑ and β rays in experiments with uranium
Henri Becquerel
1852-1908
1897: J.J. Thomson discovers the electron
1898: Marie and Pierre Curie propose the new term “radioactivity” for
material which emit rays. They discovered that thorium emits “uranic rays”
and also discovered the new elements polonium and radium.
Ernest Rutherford
1871-1937
Photos of Ernest Rutherford, Henri Becquerel, Marie Curie and Pierre Curie © Source
unknown. All rights reserved. This content is excluded from our Creative Commons
license. For more information, see https://ocw.mit.edu/fairuse.
Photo of J.J. Thomson is in the public domain.
J.J. Thomson
1856-1940
Marie Curie
1867-1934
Pierre Curie
1859-1906
Early Developments in Nuclear & Particle Physics
1899: Paul Villard discovers a third component of radiation from uranium
and calls them ɣ rays.
1901: The Curies measure the energy emitted by radioactive elements and discover that one gram of
radium gives off the incredible amount of 140 calories per hour.
1903: Rutherford is first to make the connection to the puzzle of the age of the Earth by suggesting that a
small amount of heat added by radioactive decays keeps the Earth geologically active. They come to
the conclusion that the Earth might well be a few billion years old.
1905: Einstein’s annus mirabilis with E=mc2
1906: Rutherford discovers that ɑ-particles turn into helium when stopped
Paul Villard
1860-1934
This photo is in the public domain.
Albert Einstein
1879-1955
© Source unknown. All rights reserved. This content is
excluded from our Creative Commons license. For more
information, see https://ocw.mit.edu/fairuse.
Early Developments in Nuclear & Particle Physics
1909: Marsden and Geiger, students of Rutherford, perform experiments
bombarding a gold foil with ɑ-particles. Rutherford proposes a “solar system”
model of the atom, in which the atom is essentially empty space with a
very small and dense nucleus
1919: Rutherford, by bombarding nitrogen with ɑ-particles, produces a proton
and oxygen: the first human-engineered nuclear reaction
1930: Dirac combines relativity and quantum mechanics, with the so-called
Dirac equation as a consequence. The equation predicts the existence of
negative-energy states of electrons, predicting the existence of antimatter
Hans Geiger
1882-1945
Both photos © Source unknown. All rights reserved. This content is
excluded from our Creative Commons license. For more information,
see https://ocw.mit.edu/fairuse.
Ernest Marsden
1889-1970
Paul Dirac
1902-1984
This photo is in the public domain.
Early Developments in Nuclear & Particle Physics
1931: Pauli and Fermi propose that β decay produces two particles
sharing the kinetic energy, assuming a very light neutral particle which cannot
be easily detected - the neutrino
1932: Chadwick detects neutrons directly in experiments with beryllium and
ɑ-particles
1932: Anderson discovers the positron in tracks on photographic plates which | https://ocw.mit.edu/courses/8-701-introduction-to-nuclear-and-particle-physics-fall-2020/09d46adb32b9af809d01687fc861ee9b_MIT8_701f20_lec0.5.pdf |
look like electrons but curve in the “wrong” direction
All of the photos © Source unknown. All rights reserved. This content is excluded
from our Creative Commons license. For more information, see https://ocw.mit.edu/
fairuse.
Wolfgang Pauli
1900-1958
Enrico Fermi
1901-1954
James Chadwick
1905-1991
Carl Anderson
1905-1991
Early Developments in Nuclear & Particle Physics
1935: Yukawa proposes that neutrons and protons in nuclei are held together by a strong force
1938: Bethe calculates in detail how nuclear fusion, rather than nuclear fission, can power the Sun.
He proposes a three-step sequence called the proton-proton chain
Hideki Yukawa
1907-1981
1938: Meitner and Hahn bombard uranium with neutrons and discover nuclear fission.
Lise Meitner
1878-1968
Otto Hahn
1876-1968
All of the photos © Source unknown. All rights reserved. This content is excluded from our Creative Commons license. For more information, see https://ocw.mit.edu/fairuse.
Hans Bethe
1906-2005
MIT OpenCourseWare
https://ocw.mit.edu
8.701 Introduction to Nuclear and Particle Physics
Fall 2020
For information about citing these materials or our Terms of Use, visit: https://ocw.mit.edu/terms. | https://ocw.mit.edu/courses/8-701-introduction-to-nuclear-and-particle-physics-fall-2020/09d46adb32b9af809d01687fc861ee9b_MIT8_701f20_lec0.5.pdf |
def buildMenu(names, values, calories):
    """names, values, calories lists of same length.
    names a list of strings
    values and calories lists of numbers
    returns list of Foods"""
    menu = []
    for i in range(len(values)):
        menu.append(Food(names[i], values[i],
                         calories[i]))
    return menu
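The Food class that buildMenu instantiates is not shown in this excerpt; the following is a hypothetical minimal sketch consistent with the getCost/getValue calls made by the greedy code below (the layout is my assumption, not the lecture's exact class):

```python
# Hypothetical sketch of the Food class that buildMenu constructs;
# getValue/getCost match the calls greedy makes on menu items.
class Food(object):
    def __init__(self, name, value, calories):
        self.name = name
        self.value = value
        self.calories = calories
    def getValue(self):
        return self.value
    def getCost(self):
        # calories play the role of "cost" in this knapsack problem
        return self.calories
    def __str__(self):
        return self.name + ': <' + str(self.value) \
               + ', ' + str(self.calories) + '>'

wings = Food('wings', 90, 365)
print(wings)
```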
def greedy(items, maxCost, keyFunction):
    """Assumes items a list, maxCost >= 0,
    keyFunction maps elements of items to numbers"""
    itemsCopy = sorted(items, key = keyFunction,
                       reverse = True)
    result = []
    totalValue, totalCost = 0.0, 0.0
    for i in range(len(itemsCopy)):
        if (totalCost+itemsCopy[i].getCost()) <= maxCost:
            result.append(itemsCopy[i])
            totalCost += itemsCopy[i].getCost()
            totalValue += itemsCopy[i].getValue()
    return (result, totalValue)
def testGreedy(items, constraint, keyFunction):
    taken, val = greedy(items, constraint, keyFunction)
    print('Total value of items taken =', val)
    for item in taken:
        print('   ', item)
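Putting the pieces together, here is a self-contained sketch (with made-up data and a stand-in Item class rather than the lecture's Food) showing how different keyFunction choices change what greedy picks:

```python
# Self-contained illustration: the same greedy code run with two
# different key functions; Item supplies the getValue/getCost
# interface that greedy expects.
class Item(object):
    def __init__(self, name, value, cost):
        self.name, self.value, self.cost = name, value, cost
    def getValue(self):
        return self.value
    def getCost(self):
        return self.cost

def greedy(items, maxCost, keyFunction):
    itemsCopy = sorted(items, key=keyFunction, reverse=True)
    result, totalValue, totalCost = [], 0.0, 0.0
    for item in itemsCopy:
        if totalCost + item.getCost() <= maxCost:
            result.append(item)
            totalCost += item.getCost()
            totalValue += item.getValue()
    return (result, totalValue)

items = [Item('a', 6, 3), Item('b', 7, 10), Item('c', 8, 12)]
_, byValue = greedy(items, 13, Item.getValue)       # takes c only -> 8.0
_, byDensity = greedy(items, 13,
                      lambda i: i.getValue() / i.getCost())  # a, b -> 13.0
print('by value:', byValue, ' by density:', byDensity)
```

Greedy by raw value grabs the single biggest item, while greedy by value density fits two cheaper items under the same budget and does better here; neither ordering is optimal in general.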
def testGreedys(maxUnits):
    print('Use greedy by value to allocate', maxUnits,
          'calories')
    testGre
AN EXPOSITION OF BRETAGNOLLE AND MASSART’S
PROOF OF THE KMT THEOREM FOR THE UNIFORM
EMPIRICAL PROCESS
R. M. Dudley
February 23, 2005
Preface
These lecture notes, part of a course given in Aarhus, August 1999, treat the classical
empirical process defined in terms of empirical distribution functions. A proof, expanding on
one in a 1989 paper by Bretagnolle and Massart, is given for the Komlós–Major–Tusnády result
on the speed of convergence of the empirical process to a Brownian bridge in the supremum
norm.
Herein “A := B” means A is defined by B, whereas “A =: B” means B is defined by A.
Richard Dudley
Cambridge, Mass., August 24, 1999
Contents
1 Empirical distribution functions: the KMT theorem
1.1 Introduction
1.2 Statements: the theorem and Tusnády's lemmas
1.3 Stirling's formula: Proof of Lemma 1.5
1.4 Proof of Lemma 1.4
1.5 Proof of Lemma 1.2
1.6 Inequalities for the separate processes
1.7 Proof of Theorem 1.1
1.8 Another way of defining the KMT construction
Chapter 1
Empirical distribution functions:
the KMT theorem
1.1
Introduction
Let U [0, 1] be the uniform distribution on [0, 1] and U its distribution function. Let X1, X2 , . . .
be independent and identically distributed random variables with law U . Let Fn(t) be the
empirical distribution function based on X1, X2, . . . , Xn,

Fn(t) := (1/n) ∑_{j=1}^n 1{Xj ≤ t},

and αn(t) the corresponding empirical process, i.e., αn(t) = √n (Fn(t) − t), t ∈ [0, 1]. Here
αn may be called the classical empirical process. Recall that a Brownian bridge is a Gaussian
stochastic process B(t), 0 ≤ t ≤ 1, with EB(t) = 0 and EB(t)B(u) = t(1 − u) for 0 ≤ t ≤ u ≤ 1.
Donsker (1952) proved (neglecting measurability problems) that αn(t) converges in law to
a Brownian bridge B(t) with respect to the sup norm. Komlós, Major, and Tusnády (1975)
stated a sharp rate of convergence, namely that on some probability space there exist Xi i.i.d.
U [0, 1] and Brownian bridges Bn such that

P( sup_{0≤t≤1} |√n (αn(t) − Bn(t))| > x + c log n ) < K e^{−λx}    (1.1)

for all n and x, where c, K, and λ are positive absolute constants. Komlós, Major and Tusnády
(KMT) formulated a construction giving a joint distribution of αn and Bn, and this construc-
tion has been accepted by later workers. But Komlós, Major and Tusnády gave hardly any
proof for (1.1). Csörgő and Révész (1981) sketched a method of proof of (1.1) based on lemmas
of G. Tusnády, Lemmas 1.2 and 1.4 below. The implication from Lemma 1.4 to 1.2 is not dif-
ficult, but Csörgő and Révész did not include a proof of Lemma 1.4. Bretagnolle and Massart
(1989) gave a proof of the lemmas and of the inequality (1.1) with specific constants, Theorem
1.1 below. Bretagnolle and Massart's proof was rather compressed and some readers have
had difficulty following it. Csörgő and Horváth (1993), pp. 116-139, expanded the proof while
making it more elementary and gave a proof of Lemma 1.4 for n ≥ n0 where n0 is at least 100.
The purpose of the present chapter is to give a detailed and in some minor details corrected
version of the original Bretagnolle and Massart proof of the lemmas for all n, overlapping in
part with the Csörgő and Horváth proof, then to prove (1.1) for some constants, as given by
Bretagnolle and Massart and largely following their proof.
Mason and van Zwet (1987) gave another proof of the inequality (1.1) and an extended
form of it for subintervals 0 ≤ t ≤ d/n with 1 ≤ d ≤ n and log n replaced by log d, without
Tusnády's inequalities and without specifying the constants c, K, λ. Some parts of the proof
sketched by Mason and van Zwet are given in more detail by Mason (1998).
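The objects above are easy to simulate. The sketch below (my illustration, not part of the notes) computes sup_t |αn(t)|, i.e. the Kolmogorov–Smirnov statistic √n sup_t |Fn(t) − t|, whose large-n law is, by Donsker's theorem, that of sup_t |B(t)|:

```python
# Simulation sketch: the classical empirical process for U[0,1] samples.
# sup_t |alpha_n(t)| is attained at the order statistics, where F_n jumps.
import math
import random

def empirical_process_sup(n):
    """Return sup_t |alpha_n(t)| = sqrt(n) * sup_t |F_n(t) - t|."""
    xs = sorted(random.random() for _ in range(n))
    d = 0.0
    for j, x in enumerate(xs, start=1):
        # F_n jumps from (j-1)/n to j/n at x; check both one-sided gaps
        d = max(d, abs(j / n - x), abs(x - (j - 1) / n))
    return math.sqrt(n) * d

random.seed(0)
sups = [empirical_process_sup(500) for _ in range(200)]
mean_sup = sum(sups) / len(sups)
# By Donsker's theorem this should be close to E sup|B|, about 0.87.
print(f"mean of sup|alpha_n| over 200 runs: {mean_sup:.3f}")
```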
Acknowledgments. I am very grateful to Evarist Giné, David Mason, Jon Wellner, and Uwe
Einmahl for conversations and correspondence on the topic.
1.2 Statements: the theorem and Tusnády's lemmas
The main result of the present chapter is:
Theorem 1.1. (Bretagnolle and Massart) The approximation (1.1) of the empirical process
by the Brownian bridge holds with c = 12, K = 2 and λ = 1/6 for n ≥ 2.
The rest of this chapter will give a proof of the theorem. In a preprint, Rio (1991, Theorem
5.1) states in place of (1.1)

P( sup_{0≤t≤1} |√n (αn(t) − Bn(t))| > ax + b log n + γ log 2 ) < K e^{−x}    (1.2)
for n ≥ 8 where a = 3.26, b = 4.86, γ = 2.70, and K = 1. This implies that for n ≥ 8, (1.1)
holds with c = 5.76, K = 1, and λ = 1/3.26, where all three constants are better than in
Theorem 1.1.
Tusnády's lemmas are concerned with approximating symmetric binomial distributions by
normal distributions. Let B(n, 1/2) denote the symmetric binomial distribution for n trials.
Thus if Bn has this distribution, Bn is the number of successes in n independent trials with
probability 1/2 of success on each trial. For any distribution function F and 0 < t < 1 let
F^{−1}(t) := inf{x : F (x) ≥ t}. Here is one of Tusnády's lemmas (Lemma 4 of Bretagnolle and
Massart (1989)).
Lemma 1.2. Let Φ be the standard normal distribution function and Y a standard normal
random variable. Let Φn be the distribution function of B(n, 1/2) and set Cn := Φn^{−1}(Φ(Y )) − n/2.
Then

|Cn| ≤ 1 + (√n/2)|Y |,    (1.3)

|Cn − (√n/2)Y | ≤ 1 + Y²/8.    (1.4)
Recall the following well known and easily checked facts:
Theorem 1.3. Let X be a real random variable with distribution function F .
(a) If F is continuous then F (X) has a U [0, 1] distribution.
(b) For any F , if V has a U [0, 1] distribution then F −1(V ) has distribution function F .
Thus Φ(Y ) has a U [0, 1] distribution and Φn^{−1}(Φ(Y )) has distribution B(n, 1/2). Lemma 1.2
will be shown (by a relatively short proof) to follow from:
Lemma 1.4. Let Y be a standard normal variable and let βn be a binomial random variable
with distribution B(n, 1/2). Then for any integer j such that 0 ≤ j ≤ n and n + j is even, we
have

P (βn ≥ (n + j)/2) ≥ P (√n Y /2 ≥ n(1 − √(1 − j/n))),    (1.5)

P (βn ≥ (n + j)/2) ≤ P (√n Y /2 ≥ (j − 2)/2).    (1.6)
Remarks. The restriction that n + j be even is not stated in the formulation of the lemma
by Bretagnolle and Massart (1989), but n + j is always even in their proof. If (1.6) holds for
n + j even it also holds directly for n + j odd, but the same is not clear for (1.5). It turns out
that only the case n + j even is needed in the proof of Lemma 1.2, so I chose to restrict the
statement to that case.
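The coupling of Lemma 1.2 is concrete enough to check numerically. The sketch below (an illustration, not part of the proof) draws a standard normal Y, forms Cn = Φn^{−1}(Φ(Y )) − n/2 using an exact binomial quantile, and tests (1.3) and (1.4) on every draw:

```python
# Numerical sanity check of the Tusnady coupling bounds (1.3)-(1.4).
import math
import random

def normal_cdf(y):
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def binom_quantile(n, t):
    """Phi_n^{-1}(t) = inf{k : P(B(n,1/2) <= k) >= t}."""
    cdf = 0.0
    for k in range(n + 1):
        cdf += math.comb(n, k) / 2.0 ** n
        if cdf >= t:
            return k
    return n  # guard against floating-point shortfall at t near 1

random.seed(1)
n = 25
for _ in range(1000):
    y = random.gauss(0.0, 1.0)
    c = binom_quantile(n, normal_cdf(y)) - n / 2
    assert abs(c) <= 1 + (math.sqrt(n) / 2) * abs(y) + 1e-9          # (1.3)
    assert abs(c - (math.sqrt(n) / 2) * y) <= 1 + y * y / 8 + 1e-9   # (1.4)
print("bounds (1.3) and (1.4) hold on all simulated draws")
```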
The following form of Stirling's formula with remainder is used in the proof of Lemma 1.4.

Lemma 1.5. Let n! = (n/e)^n √(2πn) An where An = 1 + βn/(12n), which defines An and βn
for n = 1, 2, · · ·. Then βn ↓ 1 as n → ∞.
1.3 Stirling’s formula: Proof of Lemma 1.5
It can be checked directly that β1 > β2 > · · · > β8 > 1. So it suffices to prove the lemma
for n ≥ 8. We have An = exp((12n)^{−1} − θn/(360n³)) where 0 < θn < 1, see Whittaker and
Watson (1927), p. 252 or Nanjundiah (1959). Then by Taylor's theorem with remainder,

An = (1 + 1/(12n) + 1/(288n²) + φn e^{1/(12n)}/(6(12n)³)) exp(−θn/(360n³))

where 0 < φn < 1. Next,

βn+1 ≤ 12(n + 1)[exp(1/(12(n + 1))) − 1] ≤ 1 + 1/(24(n + 1)) + e^{1/(12(n+1))}/(6(12(n + 1))²),

from which lim sup_{n→∞} βn ≤ 1, and

βn = 12n[An − 1] ≥ 12n[(1 + 1/(12n) + 1/(288n²)) exp(−1/(360n³)) − 1].
Using e^{−x} ≥ 1 − x gives

βn ≥ 12n[1/(12n) + 1/(288n²) − (1/(360n³))(1 + 1/(12n) + 1/(288n²))]
   = 1 + 1/(24n) − (1/(30n²))(1 + 1/(12n) + 1/(288n²)).
Thus lim inf_{n→∞} βn ≥ 1 and βn → 1 as n → ∞. To prove βn ≥ βn+1 for n ≥ 8 it will suffice
to show that

1 + 1/(24(n + 1)) + e^{1/108}/(6 · 144n²) ≤ 1 + 1/(24n) − (1/(30n²))(1 + 1/96 + 1/(288 · 64))

or

e^{1/108}/(6 · 144n²) + (1/(30n²))(1 + 1/96 + 1/(288 · 64)) ≤ 1/(24n(n + 1))

or that 0.035/n² ≤ 1/[24n(n + 1)] or 0.84 ≤ 1 − 1/(n + 1), which holds for n ≥ 8, proving that
βn decreases with n. Since its limit is 1, Lemma 1.5 is proved. □
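Lemma 1.5 is also easy to confirm numerically; the following sketch (not part of the proof) computes βn from exact factorials and checks that it decreases toward 1:

```python
# Check of Lemma 1.5: with n! = (n/e)^n sqrt(2 pi n) A_n and
# A_n = 1 + beta_n/(12 n), beta_n should decrease to 1 from above.
import math

def beta(n):
    A_n = math.factorial(n) / ((n / math.e) ** n * math.sqrt(2 * math.pi * n))
    return 12 * n * (A_n - 1)

betas = [beta(n) for n in range(1, 60)]
assert all(b1 > b2 for b1, b2 in zip(betas, betas[1:]))  # strictly decreasing
assert betas[-1] > 1.0                                   # limit 1 from above
print(f"beta_1 = {betas[0]:.6f}, beta_59 = {betas[-1]:.6f}")
```

Here β1 comes out just below the constant 1.013251 used as the bound 1⁺ in Section 1.4.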
1.4 Proof of Lemma 1.4
First, (1.5) will be proved. For any i = 0, 1, · · · , n such that n + i is even, let k := (n + i)/2
so that k is an integer, n/2 ≤ k ≤ n, and i = 2k − n. Let pni := P (βn = (n + i)/2) = P (βn =
k) = (n choose k)/2^n and xi := i/n. Define pni := 0 for n + i odd. The factorials in (n choose k) will be
approximated via Stirling's formula with correction terms as in Lemma 1.5. To that end, let

CS(u, v, w, x, n) := (1 + u/(12n)) / ((1 + v/[6n(1 − x)])(1 + w/[6n(1 + x)])).

By Lemma 1.5, we can write for 0 ≤ i < n and n + i even

pni = CS(xi, n) √(2/(πn)) exp(−ng(xi)/2 − (1/2) log(1 − xi²))    (1.7)

where g(x) := (1 + x) log(1 + x) + (1 − x) log(1 − x) and CS(xi, n) := CS(βn, βn−k, βk, xi, n).
By Lemma 1.5 and since k ≥ n/2,

1⁺ := 1.013251 ≥ 12(e(2π)^{−1/2} − 1) = β1 ≥ βn−k ≥ βk ≥ βn > 1.

Thus, for x := xi, by clear or easily checked monotonicity properties,

CS(x, n) ≤ CS(βn, βk, βk, x, n) = (1 + βn/(12n)) (1 + βk/(3n(1 − x²)) + βk²/(36n²(1 − x²)))^{−1}
         ≤ CS(βn, βk, βk, 0, n) ≤ CS(βn, βn, βn, 0, n)
         ≤ CS(1, 1, 1, 0, n) = (1 + 1/(12n)) (1 + 1/(3n) + 1/(36n²))^{−1}.

It will be shown next that log(1 + y) − 2 log(1 + 2y) ≤ −3y + 7y²/2 for y ≥ 0. Both sides
vanish for y = 0. Differentiating and clearing fractions, we get a clearly true inequality. Setting
y := 1/(12n) then gives

log CS(xi, n) ≤ −1/(4n) + 7/(288n²).    (1.8)
To get a lower bound for CS(x, n) we have by an analogous string of inequalities

CS(x, n) ≥ (1 + 1/(12n)) [1 + 1⁺/(3n(1 − x²)) + (1⁺)²/(36n²(1 − x²))]^{−1}.    (1.9)
The inequality (1.5) to be proved can be written as

∑_{i=j}^n pni ≥ 1 − Φ(2√n(1 − √(1 − j/n))).    (1.10)

When j = 0 the result is clear. When n ≤ 4 and j = n or n − 2 the result can be checked from
tables of the normal distribution. Thus we can assume from here on

n ≥ 5.    (1.11)
√
CASE I. Let j2 ≥ 2n, in other words xj ≥ 2/n. Recall that for t > 0 we have P (Y > t) ≤
(t 2π)−1 exp(−t2/2), e.g. Dudley (1993), Lemma 12.1.6(a). Then (1.10) follows easily when
j = n and n ≥ 5. To prove it for j = n − 2 it is enough to show
n(2 − log 2) − 4 2n + log(n + 1) + 4 + log[2 2π( n − 2)] ≥ 0, n ≥ 5.
√
√ √
√
The left side is increasing in n for n ≥ 5 and is ≥ 0 at n = 5.
For 5 ≤ n ≤ 7 we have (n − 4)2 < 2n, so we can assume in the present case that 2n ≤ j2 ≤
(n − 4)2 and n ≥ 8. Let yi := 2 n(1 − 1 − i/n). Then it will suffice to show
√
(cid:13)
pni ≥
(cid:14)
yi+2
yi
φ(u)du, i = j, j + 2, · · · , n − 4,
(1.12)
where φ is the standard normal density function. Let
fn(x) | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
fn(x) := √(n/(2π(1 − x))) exp(−2n(1 − √(1 − x))²).    (1.13)

By the change of variables u = 2√n(1 − √(1 − x)), (1.12) becomes

pni ≥ ∫_{xi}^{xi+2} fn(x) dx.    (1.14)
Clearly fn > 0. To see that fn(x) is decreasing in x for √(2/n) ≤ x ≤ 1 − 4/n, note that

2(1 − x)f′n/fn = 1 − 4n[√(1 − x) − (1 − x)],

so fn is decreasing where √(1 − x) − (1 − x) > 1/(4n). We have √y − y ≥ y for y ≤ 1/4, so
√y − y > 1/(4n) for 1/(4n) < y ≤ 1/4. Let y := 1 − x. Also √(1 − x) − (1 − x) > x/4 for
x < 8/9, so √(1 − x) − (1 − x) > 1/(4n) for 1/n < x < 8/9. Thus √(1 − x) − (1 − x) > 1/(4n)
for 1/n < x < 1 − 1/(4n), which includes the desired range. Thus to prove (1.14) it will be
enough to show that

pni ≥ (2/n)fn(xi), i = j, j + 2, · · · , n − 4.    (1.15)

So by (1.7) it will be enough to show that for √(2/n) ≤ x ≤ 1 − 4/n and n ≥ 8,

CS(x, n)(1 + x)^{−1/2} exp[n{4(1 − √(1 − x))² − g(x)}/2] ≥ 1.    (1.16)

Let

J(x) := 4(1 − √(1 − x))² − g(x).    (1.17)
Then J is increasing for 0 < x < 1, since its first and second derivatives are both 0 at 0, while
its third derivative is easily checked to be positive on (0, 1). In light of (1.9), to prove (1.16) it
suffices to show that

e^{nJ(x)/2} (1 + 1/(12n)) ≥ √(1 + x) [1 + 1⁺/(3n(1 − x²)) + (1⁺)²/(36n²(1 − x²))].    (1.18)

When x ≤ 1 − 4/n and n ≥ 8 the right side is less than 1.5, using first 1 + x ≤ 2, next
x ≤ 1 − 4/n, and lastly n ≥ 8. For x ≥ 0.55 and n ≥ 8 the left side is larger than 1.57, so
(1.18) is proved for x ≥ 0.55. We will next need the inequality

J(x) ≥ x³/2 + 7x⁴/48, 0 ≤ x ≤ 0.55.    (1.19)
√
To check this one can calculate J ( | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
≤ x ≤ 0.55.
4
(1.19)
√
To check this one can calculate J (0) = J (cid:4)(0) = J (cid:4)(cid:4)(0) = 0, J (3)(0) = 3, J (4) (0) = 7/2, so that
the right side of (1.19) is the Taylor series of J around 0 through fourth order. One then shows
straightforwardly that J (5) (x) > 0 for 0 ≤ x < 1.
It follows since nx2 ≥ 2 and n ≥ 8 that nJ (x)/2 ≥ x/2 + 7/24n. Let K(x)
:=
:= (K(x) − 1)/x2 . We will next see that κ(·) is decreasing
exp(x/2)/ 1 + x and κ(x)
on [0, 1]. To show κ(cid:4) ≤ 0 is equivalent to ex/2[4 + 4x − x2] ≥ 4(1 + x)3/2, which is true at
x = 0. Differentiating, we would like to show ex/2[6 − x2/2] ≥ 6 1 + x, or squaring that and
multiplying by 4, ex(144 − 24x2 + x4) ≥ 144(1 + x). This is true at x = 0. Differentiating, we
would like to prove ex(144 − 48x − 24x2 + 4x3 + x4) ≥ 144. Using ex ≥ 1 + x and algebra gives
this result for 0 ≤ x ≤ 1.
(cid:13)
√
It follows that K(x) ≥ 1 + 0.3799/n when
2/n ≤ x ≤ 0.55. It remains to show that for
x ≤ 0 | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
3799/n when
2/n ≤ x ≤ 0.55. It remains to show that for
x ≤ 0.55,
e^{7/(24n)} (1 + 1/(12n)) (1 + 0.3799/n) ≥ 1 + 1⁺/(3n(1 − x²)) + (1⁺)²/(36n²(1 − x²)).

At x = 0.55 the right side is less than 1 + 0.543/n, so Case I is completed since 0.543 ≤
1/12 + 0.3799 + 7/24.
CASE II. The remaining case is j < √(2n). For any integer k, P (βn ≥ k) = 1 − P (βn ≤ k − 1). For
k = (n + j)/2 we have k − 1 = (n + j − 2)/2. If n is odd, then P (βn ≥ n/2) = 1/2 = P (Y ≥ 0).
If n is even, then P (βn ≥ n/2) − pn0/2 = 1/2 = P (Y ≥ 0). So, since pn0 = 0 for n odd, (1.5)
is equivalent to

(1/2)pn0 + ∑_{0<i≤j−2} pni ≤ P (0 ≤ Y ≤ 2√n(1 − √(1 − j/n))).    (1.20)

Given j < √(2n), a family I0, I1, · · · , IK of adjacent intervals will be defined such that for n odd,

pni ≤ P (√n Y /2 ∈ Ik) with i = 2k + 1, 0 ≤ k ≤ K := (j − 3)/2,    (1.21)

while for n even,

pni ≤ P (√n Y /2 ∈ Ik) with i = 2k, 1 ≤ k ≤ K := (j − 2)/2,    (1.22)

and

pn0/2 ≤ P (√n Y /2 ∈ I0).    (1.23)

In either case,

I0 ∪ I1 ∪ · · · ∪ IK ⊂ [0, n(1 − √(1 − j/n))].    (1.24)
The intervals will be defined by

δk+1 := (k + 1)/n + k(k + 1/2)(k + 1)/n^{3/2}, k ≥ 0,    (1.25)

∆k+1 := δk+1 + k + 1/2 = δk+1 + (i + 1)/2, i = 2k, n even,    (1.26)

∆k+1 := δk+1 + k + 1 = δk+1 + (i + 1)/2, i = 2k + 1, n odd,    (1.27)

Ik := [∆k, ∆k+1] with ∆0 = 0.    (1.28)

It will be shown that I0, I1, · · · , IK defined by (1.25) through (1.28) satisfy (1.21) through
(1.24). Recall that n ≥ 5 (1.11) and xi := i/n.
Proof of (1.24). It needs to be shown that ∆K+1 ≤ n(1 − √(1 − xj)). Since j < √(2n), we have
K ≤ j/2 − 1 < √(n/2) − 1 and

δK+1 ≤ (K + 1)/n + K(K + 1/2)/(√2 n) ≤ xj/2 + n xj²/(4√2).

We have ∆K+1 = n xj/2 − 1/2 + δK+1. It will be shown next that

1 − √(1 − x) ≥ x/2 + x²/8, 0 ≤ x ≤ 1.    (1.29)

The functions and their first derivatives agree at 0 while the second derivative of the left side
is clearly larger.

It then remains to prove that

1/2 + n xj²(1/8 − 1/(4√2)) − xj/2 ≥ 0.

This is true since n xj² ≤ 2 and xj ≤ (2/8)^{1/2} = 1/2, so (1.24) is proved.
Proof of (1.21)-(1.23). First it will be proved that

pni ≤ √(2/(πn)) exp(−1/(4n) + 7/(288n²) − (n − 1)i²/(2n²) + (i/n)^{2n}/(2n(1 − i²/n²))).    (1.30)

In light of (1.7) and (1.8), it is enough to prove, for x := i/n, that

−[ng(x) + log(1 − x²) − (n − 1)x²]/2 ≤ x^{2n}/(2n(1 − x²)).    (1.31)
It is easy to verify that for 0 ≤ t < 1,

g(t) = (1 + t) log(1 + t) + (1 − t) log(1 − t) = ∑_{r=1}^∞ t^{2r}/(r(2r − 1)).

Thus the left side of (1.31) can be expanded as ∑_{r≥2} x^{2r}(1 − n/(2r − 1))/(2r) = A + B where
A = ∑_{r=2}^{n−1} and B = ∑_{r≥n}. We have

d²A/dx² = ∑_{2≤r≤(n+1)/2} (2r − n − 1)(x^{2r−2} − x^{2n−2r})

which is ≤ 0 for 0 ≤ x ≤ 1. Since A = dA/dx = 0 for x = 0 we have A ≤ 0 for 0 ≤ x ≤ 1.
Then, 2nB ≤ x^{2n}/(1 − x²), so (1.30) is proved.
We have for n ≥ 5 and x ≤ (√(2n) − 2)/n that x^{2n}/(1 − x²) < 10⁻³, since n ↦ (√(2n) − 2)/n
is decreasing in n for n ≥ 8 and the statement can be checked for n = 5, 6, 7, 8. So (1.30) yields

pni ≤ √(2/(πn)) exp[−0.249/n + 7/(288n²) − (n − 1)i²/(2n²)].    (1.32)

Next we will need:
Lemma 1.6. For any 0 ≤ a < b and a standard normal variable Y ,

P (Y ∈ [a, b]) ≥ √(1/(2π)) (b − a) exp[−a²/4 − b²/4] φ(a, b)    (1.33)

where φ(a, b) := [4/(b² − a²)] sinh[(b² − a²)/4] ≥ 1.

Proof. Since the Taylor series of sinh around 0 has all coefficients positive, and (sinh u)/u is an
even function, clearly (sinh u)/u ≥ 1 for any real u. The conclusion of the lemma is equivalent
to

((a + b)/2) ∫_a^b exp(−u²/2) du ≥ exp(−a²/2) − exp(−b²/2).    (1.34)

Letting x := b − a and v := u − a we need to prove

(a + x/2) ∫_0^x exp(−av − v²/2) dv ≥ 1 − exp(−ax − x²/2).
This holds for x = 0. Taking derivatives of both sides and simplifying, we would like to show

∫_0^x exp(−av − v²/2) dv ≥ x exp(−ax − x²/2).

This also holds for x = 0, and differentiating both sides leads to a clearly true inequality, so
Lemma 1.6 is proved. □
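Lemma 1.6 also admits a direct numerical check (an illustration only, not a substitute for the proof): compare P (Y ∈ [a, b]) with the bound (1.33) on a grid of endpoints.

```python
# Grid check of the Gaussian interval lower bound (1.33).
import math

def normal_cdf(y):
    return 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))

def lower_bound(a, b):
    u = (b * b - a * a) / 4.0
    phi_ab = math.sinh(u) / u  # = [4/(b^2 - a^2)] sinh[(b^2 - a^2)/4] >= 1
    return ((b - a) / math.sqrt(2 * math.pi)
            * math.exp(-(a * a + b * b) / 4.0) * phi_ab)

for i in range(40):
    a = 0.1 * i
    for j in range(1, 40):
        b = a + 0.1 * j
        assert normal_cdf(b) - normal_cdf(a) >= lower_bound(a, b) - 1e-12
print("bound (1.33) verified on the grid")
```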
For the intervals Ik, Lemma 1.6 yields

P (√n Y /2 ∈ Ik) ≥ √(2/(πn)) φk exp[−(∆k+1² + ∆k²)/n + log(∆k+1 − ∆k)]    (1.35)

where φk := φ(2∆k/√n, 2∆k+1/√n). The aim is to show that the ratio of the bounds (1.35)
over (1.32) is at least 1.
First consider the case k = 0. If n is even, this means we want to prove (1.23). Using (1.32) and (1.35) and φ₀ ≥ 1, it suffices to show that

0.249/n − 7/288n² − 1/4n − 1/n² − 1/n³ + log(1 + 2/n) ≥ 0.

Since log(1 + u) ≥ u − u²/2 for u ≥ 0 by taking a derivative, it will be enough to show that

(E)_n := 1.999/n − 3/n² − 7/288n² − 1/n³ ≥ 0,

and it is easily checked that n(E)_n > 0 since n ≥ 5.
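The positivity of n(E)_n is easy to confirm by direct computation; a small Python check (illustrative only, not part of the original proof):

```python
def E_n(n):
    # (E)_n := 1.999/n − 3/n² − 7/(288 n²) − 1/n³
    return 1.999/n - 3/n**2 - 7/(288*n**2) - 1/n**3

# n(E)_n = 1.999 − (3 + 7/288)/n − 1/n² is increasing in n and already positive at n = 5
assert all(n * E_n(n) > 0 for n in range(5, 10001))
```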
If n is odd, then (1.32) applies for i = 2k + 1 = 1 and we have ∆₀ = 0, ∆₁ = δ₁ + 1 = 1 + 1/n so (1.35) yields

P(√n Y/2 ∈ I₀) ≥ √(2/πn) exp[−(1 + 1/n)²/n + log(1 + 1/n)].

Using log(1 + u) ≥ u − u²/2 again, the desired inequality can be checked since n ≥ 5. This completes the case k = 0.
Now suppose k ≥ 1. In this case, i < √(2n) − 2 implies n ≥ 10 for n even and n ≥ 13 for n odd. Let s_k := δ_k + δ_{k+1} and d_k := δ_{k+1} − δ_k. Then for i as in the definition of ∆_{k+1},

∆_{k+1} + ∆_k = i + s_k,   (1.36)

∆_{k+1} − ∆_k = 1 + d_k,   (1.37)

s_k = (2k + 1)/n + (2k³ + k)/n^{3/2},   (1.38)

and

d_k = 1/n + 3k²/n^{3/2}.   (1.39)

From the Taylor series of sinh around 0 one easily sees that (sinh u)/u ≥ 1 + u²/6 for all u. Letting u := (∆²_{k+1} − ∆²_k)/n ≥ i/n gives

log φ_k ≥ log(1 + i²/6n²).   (1.40)

We have

d_k ≤ 3/(2√n)   (1.41)

since 2k ≤ √(2n) − 2 and n ≥ 10. Next we have another lemma:
Lemma 1.7. log(1 + x) ≥ λx for 0 ≤ x ≤ α for each of the pairs (α, λ) = (0.207, 0.9), (0.195, 0.913), (0.14, 0.93), (0.04, 0.98).

Proof. Since x ↦ log(1 + x) is concave (equivalently, we are proving 1 + x ≥ e^{λx}, where the latter function is convex), it suffices to check the inequalities at the endpoints, where they hold. □
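Since the proof reduces each pair to its right endpoint x = α (the endpoint x = 0 is trivial), the four endpoint inequalities can be verified directly; an illustrative Python check, added here and not part of the original notes:

```python
import math

pairs = [(0.207, 0.9), (0.195, 0.913), (0.14, 0.93), (0.04, 0.98)]
for alpha, lam in pairs:
    # endpoint check at x = α; concavity of log(1 + x) handles 0 < x < α
    assert math.log(1 + alpha) >= lam * alpha
```

The pairs are quite tight: for (0.195, 0.913) the two sides differ only by about 10⁻⁴.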
Lemma 1.7 and (1.40) then give

log φ_k ≥ 0.98 i²/6n²   (1.42)

since i²/(6n²) ≤ 1/3n ≤ 0.04, n ≥ 10. Next,
Lemma 1.8. We have log(∆_{k+1} − ∆_k) ≥ λ d_k where λ = 0.9 when n is even and n ≥ 20, λ = 0.93 when n is odd and n ≥ 25, and λ = 0.913 when k = 1 and n ≥ 10. Only these cases are possible (for k ≥ 1).

Proof. If n is even and k ≥ 2, then 4 ≤ i = 2k < √(2n) − 2 implies n ≥ 20. If n is odd and k ≥ 2, then 5 ≤ i = 2k + 1 < √(2n) − 2 implies n ≥ 25. So only the given cases are possible.
We have k ≤ k_n := √(n/2) − 1 for n even or k_n := √(n/2) − 3/2 for n odd. Let d(n) := 1/n + 3k_n²/n^{3/2} and t := 1/√n. It will be shown that d(n) is decreasing in n, separately for n even and odd. For n even we would like to show that 3t/2 + (1 − 3√2)t² + 3t³ is increasing for 0 ≤ t ≤ 1/√20, and in fact its derivative is > 0.04. For n odd we would like to show that 3t/2 + (1 − 9/√2)t² + 27t³/4 is increasing. We find that its derivative has no real roots and so is always positive as desired.
Since d(·) is decreasing for n ≥ 20, its maximum for n even, n ≥ 20 is at n = 20 and we find it is less than 0.207 so Lemma 1.7 applies to give λ = 0.9. Similarly for n odd and n ≥ 25 we have the maximum d(25) < 0.14 and Lemma 1.7 applies to give λ = 0.93.

If k = 1 then n ↦ n^{−1} + 3/n^{3/2} is clearly decreasing. Its value at n = 10 is less than 0.195 and Lemma 1.7 applies with λ = 0.913. So Lemma 1.8 is proved. □
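The stated maxima d(20) < 0.207, d(25) < 0.14, and the k = 1 value at n = 10 can be confirmed numerically; a Python sketch of the check (the function names are ours, not the notes'):

```python
import math

def d_even(n):
    # d(n) with k_n = √(n/2) − 1 (n even)
    kn = math.sqrt(n / 2) - 1
    return 1/n + 3 * kn**2 / n**1.5

def d_odd(n):
    # d(n) with k_n = √(n/2) − 3/2 (n odd)
    kn = math.sqrt(n / 2) - 1.5
    return 1/n + 3 * kn**2 / n**1.5

assert d_even(20) < 0.207
assert d_odd(25) < 0.14
assert 1/10 + 3/10**1.5 < 0.195        # the k = 1 case at n = 10
# monotone decrease, as proved via the cubic in t = 1/√n
assert all(d_even(n) > d_even(n + 2) for n in range(20, 200, 2))
assert all(d_odd(n) > d_odd(n + 2) for n in range(25, 201, 2))
```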
It will next be shown that for n ≥ 10

s_k ≤ n^{−1} + k/√n.   (1.43)

By (1.38) this is equivalent to 2/√n + (2k² + 1)/n ≤ 1. Since k ≤ √(n/2) − 1 one can check that (1.43) holds for n ≥ 14. For n = 10, 11, 12, 13 note that k is an integer, in fact k ≤ 1, and (1.43) holds.
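The equivalent form of (1.43) is easy to test by machine; an illustrative Python check (not part of the original argument):

```python
import math

def ok(n, k):
    # (1.43) reduces, via (1.38), to 2/√n + (2k² + 1)/n ≤ 1
    return 2 / math.sqrt(n) + (2 * k * k + 1) / n <= 1

# worst (real) k = √(n/2) − 1 for n ≥ 14
assert all(ok(n, math.sqrt(n / 2) - 1) for n in range(14, 2000))
# integer k ≤ 1 for n = 10, 11, 12, 13
assert all(ok(n, k) for n in (10, 11, 12, 13) for k in (0, 1))
```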
After some calculations, letting s := s_k and d := d_k and noting that

∆²_{k+1} + ∆²_k = (1/2)[(∆_{k+1} − ∆_k)² + (∆_k + ∆_{k+1})²],

to show that the ratio of (1.35) to (1.32) is at least 1 is equivalent to showing that
−d/n − is/n − s²/2n − d²/2n − 1/2n − 7/288n² − i²/2n² + 0.249/n + log(1 + d) + log φ_k ≥ 0.   (1.44)
Proof of (1.44). First suppose that n is even and n ≥ 20 or n is odd and n ≥ 25. Apply the bound (1.41) for d²/2n, (1.42) for log φ_k, (1.43) for s and Lemma 1.8 for log(1 + d). Apply the exact value (1.39) of d in the d/n and λd terms. We assemble together terms with factors k², k and no factor of k, getting a lower bound A for (1.44) of the form

A := α[k²/n^{3/2}] − 2β[k/n^{5/4}] + γ[1/n]   (1.45)
where, if n is even, so i = 2k and λ = 0.9, we get

α = 0.7 − [2.5 − 2(0.98)/3]/√n − 3/n,

β = n^{−3/4} + n^{−5/4}/2,

γ = 0.649 − [17/8 + 7/288]/n − 1/2n².
Note that for each fixed n, A is 1/n times a quadratic in k/n^{1/4}. Also, α and γ are increasing in n while β is decreasing. Thus for n ≥ 20 the supremum of β² − αγ is attained at n = 20, where it is < −0.06. So the quadratic has no real roots and since α > 0 it is always positive, thus (1.44) holds.
When n is odd, i = 2k + 1, λ = 0.93 and n ≥ 25. We get a lower bound A for (1.44) of the same form (1.45) where now

α = 0.79 − [2.5 − 2(0.98)/3]/√n − 3/n,

β = 1/2n^{1/4} + 2(1 − 0.98/6)/n^{3/4} + 1/2n^{5/4},

γ = 0.679 − (3.625 + 7/288 − 0.98/6)/n − 1/2n².

For the same reasons, the supremum of β² − αγ for n ≥ 25 is now attained at n = 25 and is negative (less than −0.015), so the conclusion (1.44) again holds.
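The two discriminant claims can be confirmed by plugging in n = 20 and n = 25; a hedged Python check (illustrative only, variables named after the text's α, β, γ):

```python
import math

# even case at n = 20: claim β² − αγ < −0.06
n = 20
alpha = 0.7 - (2.5 - 2*0.98/3) / math.sqrt(n) - 3/n
beta = n**-0.75 + n**-1.25 / 2
gamma = 0.649 - (17/8 + 7/288) / n - 1/(2*n*n)
assert beta**2 - alpha * gamma < -0.06

# odd case at n = 25: claim β² − αγ < −0.015
n = 25
alpha = 0.79 - (2.5 - 2*0.98/3) / math.sqrt(n) - 3/n
beta = 1/(2*n**0.25) + 2*(1 - 0.98/6)/n**0.75 + 1/(2*n**1.25)
gamma = 0.679 - (3.625 + 7/288 - 0.98/6) / n - 1/(2*n*n)
assert beta**2 - alpha * gamma < -0.015
assert alpha > 0
```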
It remains to consider the case k = 1 where n is even and n ≥ 10 or n is odd and n ≥ 13. Here instead of bounds for s_k and d_k we use the exact values (1.38) and (1.39) for k = 1. We still use the bounds (1.42) for log φ_k and Lemma 1.8 for log(1 + d_k). When n is even, i = 2k = 2, and we obtain a lower bound A′ for (1.44) of the form a₁/n + a₂/n^{3/2} + · · · . All terms n^{−2} and beyond have negative coefficients. Applying the inequality −n^{−(3/2)−α} ≥ −n^{−3/2}·10^{−α} for n ≥ 10 and α = 1/2, 1, · · · , I found a lower bound A′ ≥ 0.662/n − 1.115/n^{3/2} > 0 for n ≥ 10. The same method for n odd gave A′ ≥ 0.662/n − 1.998/n^{3/2} > 0 for n ≥ 13. The proof of (1.5) is complete.
Proof of (1.6). For n odd, (1.6) is clear when j = 1, so we can assume j ≥ 3. For n even, (1.6) is clear when j = 2. We next consider the case j = 0. By symmetry we need to prove that p_{n0} ≤ P(√n|Y|/2 ≤ 1). This can be checked from a normal table for n = 2. For n ≥ 4 we have p_{n0} ≤ √(2/πn) by (1.32). The integral of the standard normal density from −2/√n to 2/√n is clearly larger than the length of the interval times the density at the endpoints, namely 2√(2/πn) exp(−2/n). Since exp(−2/n) ≥ 1/2 for n ≥ 4 the proof for n even and j = 0 is done.
We are left with the cases j ≥ 3. For j = n, we have p_{nn} = 2^{−n} and can check the conclusion for n = 3, 4 from a normal table. Let φ be the standard normal density. We have the inequality, for t > 0,

P(Y ≥ t) ≥ ψ(t) := φ(t)[t^{−1} − t^{−3}],   (1.46)

Feller (1968), p. 175. Feller does not give a proof. For completeness, here is one:

ψ(t) = −∫_t^∞ ψ′(x) dx = ∫_t^∞ φ(x)(1 − 3x^{−4}) dx ≤ P(Y ≥ t).
To prove (1.6) via (1.46) for j = n ≥ 5 we need to prove

1/2^n ≤ φ(t_n) t_n^{−1}(1 − t_n^{−2})

where t_n := (n − 2)/√n. Clearly n ↦ t_n is increasing. For n ≥ 5 we have 1 − t_n^{−2} ≥ 4/9 and (2π)^{−1/2} e^{2−2/n}·4/9 ≥ 0.878. Thus it suffices to prove

n(log 2 − 0.5) + 0.5 log n − log(n − 2) + log(0.878) ≥ 0, n ≥ 5.

This can be checked for n = 5, 6 and the left side is increasing in n for n ≥ 6, so (1.6) for j = n ≥ 5 follows.
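The base cases and the monotonicity of the left side are easily checked by machine; a small illustrative Python check (not part of the original proof):

```python
import math

def f(n):
    # left side of the displayed inequality for j = n ≥ 5
    return n*(math.log(2) - 0.5) + 0.5*math.log(n) - math.log(n - 2) + math.log(0.878)

assert f(5) >= 0 and f(6) >= 0
# increasing for n ≥ 6
assert all(f(n + 1) > f(n) for n in range(6, 200))
```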
So it will suffice to prove p_{ni} ≤ P(√n Y/2 ∈ [(i − 2)/2, i/2]) for j ≤ i < n. From (1.30) and Lemma 1.6, and the bound φ_k ≥ 1, it will suffice to prove, for x := i/n,

−(n − 1)x²/2 + x^{2n}/(2n(1 − x²)) − 1/4n + 7/288n² ≤ −n[(x − 2/n)² + x²]/4

where 3/n ≤ x ≤ 1 − 2/n. Note that 2n(1 − x²) ≥ 4. Thus it is enough to prove that

x − x²/2 − x^{2n}/4 ≥ 3/4n + 7/288n²

for 3/n ≤ x ≤ 1 and n ≥ 5, which holds since the function on the left is concave, and the inequality holds at the endpoints. Thus (1.6) and Lemma 1.4 are proved. □
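Since the left side is concave in x, checking the two endpoints x = 3/n and x = 1 suffices; an illustrative Python confirmation (added here, not in the original):

```python
def lhs(x, n):
    # the concave function x − x²/2 − x^{2n}/4
    return x - x*x/2 - x**(2*n)/4

for n in range(5, 101):
    rhs = 3/(4*n) + 7/(288*n*n)
    assert lhs(3/n, n) >= rhs   # endpoint x = 3/n
    assert lhs(1, n) >= rhs     # endpoint x = 1, where lhs = 1/4
```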
1.5 Proof of Lemma 1.2
Let G(x) be the distribution function of a normal random variable Z with mean n/2 and variance n/4 (the same mean and variance as for B(n, 1/2)). Let B(k, n, 1/2) := ∑_{0≤i≤k} (n choose i) 2^{−n}. Lemma 1.4 directly implies

G(√(2kn) − n/2) ≤ B(k, n, 1/2) ≤ G(k + 1) for k ≤ n/2.   (1.47)
Specifically, letting k := (n − j)/2, (1.6) implies

B(k, n, 1/2) ≤ P(Z ≥ n − k − 1) = P(k + 1 ≥ n − Z) = G(k + 1)

since n − Z has the same distribution as Z. (1.5) implies

B(k, n, 1/2) ≥ P(n/2 + (√n/2) Y ≤ −n/2 + √(2kn)) = G(√(2kn) − n/2).

Let

η := Φ_n^{−1}(G(Z)).   (1.48)
This definition of η from Z is called a quantile transformation. By Theorem 1.3, G(Z) has a U[0, 1] distribution and η a B(n, 1/2) distribution. It will be shown that

Z − 1 ≤ η ≤ Z + (Z − n/2)²/2n + 1 if Z ≤ n/2,   (1.49)

and

Z − (Z − n/2)²/2n − 1 ≤ η ≤ Z + 1 if Z ≥ n/2.   (1.50)
Define a sequence of extended real numbers −∞ = c_{−1} < c₀ < c₁ < · · · < c_n = +∞ by G(c_k) = B(k, n, 1/2). Then one can check that η = k on the event A_k := {ω : c_{k−1} < Z(ω) ≤ c_k}. By (1.47), G(c_k) = B(k, n, 1/2) ≤ G(k + 1) for k ≤ n/2. So, on the set A_k for k ≤ n/2 we have Z − 1 ≤ c_k − 1 ≤ k = η. Note that for n even, n/2 < c_{n/2} while for n odd, n/2 = c_{(n−1)/2}. So the left side of (1.49) is proved.
If Y is a standard normal random variable with distribution function Φ and density φ then Φ(−x) ≤ φ(x)/x for x > 0, e.g. Dudley (1993), Lemma 12.1.6(a). So we have

P(Z ≤ −n/2) = P(n/2 + (√n/2) Y ≤ −n/2) = P((√n/2) Y ≤ −n) = Φ(−2√n) ≤ e^{−2n}/(2√(2πn)) < 1/2^n.
So G(−n/2) < G(c₀) = 2^{−n} and −n/2 < c₀. Thus if Z ≤ −n/2 then η = 0. Next note that Z + (Z − n/2)²/2n = (Z + n/2)²/2n ≥ 0 always. Thus the right side of (1.49) holds when Z ≤ −n/2 and whenever η = 0. Now assume that Z ≥ −n/2. By (1.47), for 1 ≤ k ≤ n/2
G((2(k − 1)n)^{1/2} − n/2) ≤ B(k − 1, n, 1/2) = G(c_{k−1}),

from which it follows that (2(k − 1)n)^{1/2} − n/2 ≤ c_{k−1} and

k − 1 ≤ (c_{k−1} + n/2)²/2n.   (1.51)

The function x ↦ (x + n/2)² is clearly increasing for x ≥ −n/2 and thus for x ≥ c₀. Applying (1.51) we get on the set A_k for 1 ≤ k ≤ n/2

η = k ≤ (Z + n/2)²/2n + 1 = Z + (Z − n/2)²/2n + 1.
Since P (Z ≤ n/2) = 1/2 ≤ P (η ≤ n/2), and η is a non-decreasing function of Z, Z ≤ n/2
implies η ≤ n/2. So (1.49) is proved.
It will be shown next that (η, Z) has the same joint distribution as (n − η, n − Z). It is
clear that η and n − η have the same distribution and that Z and n − Z do. We have for each
k = 0, 1, · · · , n, n − η = k if and only if η = n − k if and only if cn−k−1 < Z ≤ cn−k. We need to
show that this is equivalent to ck−1 ≤ n−Z < ck , in other words n−ck < Z ≤ n−ck−1. Thus we
want to show that cn−k−1 = n−ck for each k. It is easy to check that G(n−ck ) = P (Z ≥ ck ) =
1 − G(ck ) while G(ck ) = B(k, n, 1/2) and G(cn−k−1) = B(n − k − 1, n, 1/2) = 1 − B(k, n, 1/2).
The statement about joint distributions follows. (1.49) thus implies (1.50).
Some elementary algebra, (1.49) and (1.50) imply

|η − Z| ≤ 1 + (Z − n/2)²/2n   (1.52)

and since Z < n/2 implies η ≤ n/2 and Z > n/2 implies η ≥ n/2,

|η − n/2| ≤ 1 + |Z − n/2|.   (1.53)

Letting Z = (n + √n Y)/2 and noting that then G(Z) ≡ Φ(Y), (1.48), (1.52), and (1.53) imply Lemma 1.2 with C_n = η − n/2. □
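The quantile transformation and the bounds (1.49) and (1.50) can be exercised numerically. The Python sketch below (an illustrative check we add here, with our own helper names; it is not part of the original proof) builds G, the binomial quantile Φ_n^{−1}, and tests both inequalities on a grid of Z values:

```python
import math

def check(n):
    s = math.sqrt(n) / 2                      # standard deviation of Z
    def G(z):                                 # N(n/2, n/4) distribution function
        return 0.5 * math.erfc(-(z - n/2) / (s * math.sqrt(2)))
    cdf, acc = [], 0.0                        # B(k, n, 1/2), k = 0..n
    for k in range(n + 1):
        acc += math.comb(n, k) / 2**n
        cdf.append(acc)
    def eta(z):                               # η = Φ_n^{-1}(G(Z))
        u = G(z)
        for k in range(n + 1):
            if cdf[k] >= u:
                return k
        return n
    z = -1.0
    while z < n + 1:
        e = eta(z)
        if z <= n/2:
            assert z - 1 <= e <= z + (z - n/2)**2/(2*n) + 1    # (1.49)
        if z >= n/2:
            assert z - (z - n/2)**2/(2*n) - 1 <= e <= z + 1    # (1.50)
        z += 0.137
    return True

assert check(12) and check(21)
```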
1.6 Inequalities for the separate processes
We will need facts providing a modulus of continuity for the Brownian bridge and something
similar for the empirical process (although it is discontinuous). Let h(t) := +∞ if t ≤ −1
and

h(t) := (1 + t) log(1 + t) − t, t > −1.   (1.54)
Lemma 1.9. Let ξ be a binomial random variable with parameters n and p. Then for any x ≥ 0 and m := np we have

P(ξ − m ≥ x) ≤ inf_{s>0} e^{−sx} E e^{s(ξ−m)} = (m/(m + x))^{m+x} ((n − m)/(n − m − x))^{n−m−x}.   (1.55)

If p ≤ 1/2 then bounds for the right side of (1.55) give

P(ξ ≥ m + x) ≤ exp(−(m/(1 − p)) h(x(1 − p)/m))   (1.56)

and

P(ξ ≤ m − x) ≤ exp(−x²/[2np(1 − p)]).   (1.57)
Proof. The first inequality in (1.55) is clear. Let E(k, n, p) denote the probability of at least k successes in n independent trials with probability p of success on each trial, and B(k, n, p) the probability of at most k successes. According to Chernoff's inequalities (Chernoff, 1954), we have with q := 1 − p

E(k, n, p) ≤ (np/k)^k (nq/(n − k))^{n−k} if k ≥ np,

and symmetrically

B(k, n, p) ≤ (np/k)^k (nq/(n − k))^{n−k} if k ≤ np.

These inequalities hold for k not necessarily an integer; for this and the equality in (1.55) see also Hoeffding (1963). Then for p ≤ 1/2, (1.56) is a consequence proved by Bennett (1962), see also Shorack and Wellner (1986, p. 440, (3)), and (1.57) is a consequence proved by Okamoto (1958) and extended by Hoeffding (1963). □
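The chain of bounds in Lemma 1.9 can be spot-checked against an exact binomial tail. The Python sketch below (illustrative only; it uses the forms (1.55)–(1.57) as reconstructed above, on one sample parameter choice) verifies that the exact tail is below the Chernoff bound, which in turn is below the Bennett-type bound:

```python
import math

def h(t):
    return (1 + t) * math.log(1 + t) - t

n, p = 30, 0.3
q, m = 1 - p, n * p          # m = np = 9
def pmf(k):
    return math.comb(n, k) * p**k * q**(n - k)

for x in (2.0, 4.0, 6.0):
    upper = sum(pmf(k) for k in range(int(m + x), n + 1))
    chernoff = (m/(m+x))**(m+x) * ((n-m)/(n-m-x))**(n-m-x)   # (1.55)
    bennett = math.exp(-(m/q) * h(x*q/m))                    # (1.56)
    okamoto = math.exp(-x*x / (2*n*p*q))                     # (1.57)
    lower = sum(pmf(k) for k in range(0, int(m - x) + 1))
    assert upper <= chernoff <= bennett
    assert lower <= okamoto
```

Note (1.56) is stated for p ≤ 1/2, which holds here with p = 0.3.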
Let F_n be an empirical distribution function for the uniform distribution on [0, 1] and α_n(t) := √n(F_n(t) − t), 0 ≤ t ≤ 1, the corresponding empirical process. The previous lemma extends via martingales to a bound for the empirical process on intervals.
Lemma 1.10. For any b with 0 < b ≤ 1/2 and x > 0,

P(sup_{0≤t≤b} |α_n(t)| > x/√n) ≤ 2 exp(−(nb/(1 − b)) h(x(1 − b)/(nb))) ≤ 2 exp(−nb(1 − b) h(x/(nb))).   (1.58)
Remark. The bound given by (1.58) is Lemma 2 of Bretagnolle and Massart (1989). Lemma 1.2 of Csörgő and Horváth (1993), p. 116, has instead the bound 2 exp(−nb h(x/(nb))). This does not follow from Lemma 1.10, while the converse implication holds by (1.83) below, but I could not follow Csörgő and Horváth's proof of their form.
Proof. From the binomial conditional distributions of multinomial variables we have for 0 ≤ s ≤ t < 1

E(F_n(t)|F_n(u), u ≤ s) = E(F_n(t)|F_n(s)) = F_n(s) + ((t − s)/(1 − s))(1 − F_n(s)) = ((1 − t)/(1 − s)) F_n(s) + (t − s)/(1 − s),

from which it follows directly that

E((F_n(t) − t)/(1 − t) | F_n(u), u ≤ s) = (F_n(s) − s)/(1 − s),

in other words, the process (F_n(t) − t)/(1 − t), 0 ≤ t < 1, is a martingale in t (here n is fixed). Thus, α_n(t)/(1 − t), 0 ≤ t < 1, is also a martingale, and for any real s the process exp(sα_n(t)/(1 − t)) is a submartingale, e.g. Dudley (1993), 10.3.3(b). Then
P(sup_{0≤t≤b} α_n(t) > x/√n) ≤ P(sup_{0≤t≤b} α_n(t)/(1 − t) > x/√n),

which for any s > 0 equals

P(sup_{0≤t≤b} exp(sα_n(t)/(1 − t)) > exp(sx/√n)).

By Doob's inequality (e.g. Dudley (1993), 10.4.2, for a finite sequence increasing up to a dense set) the latter probability is

≤ inf_{s>0} exp(−sx/√n) E exp(sα_n(b)/(1 − b)) ≤ exp(−(nb/(1 − b)) h(x(1 − b)/(nb)))
by Lemma 1.9, (1.56). In the same way, by (1.57) we get

P(sup_{0≤t≤b} (−α_n(t)) > x/√n) ≤ exp(−x²(1 − b)/(2nb)).   (1.59)

It is easy to check that h(u) ≤ u²/2 for u ≥ 0, so the first inequality in Lemma 1.10 follows. It is easily shown by derivatives that h(qy) ≥ q²h(y) for y ≥ 0 and 0 ≤ q ≤ 1. For q = 1 − b, the bound in (1.58) then follows. □

We next have a corresponding inequality for the Brownian bridge.
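The two elementary properties of h used just above are easy to sample numerically; an illustrative Python check (not part of the original notes):

```python
import math

def h(t):
    # h(t) = (1 + t) log(1 + t) − t, as in (1.54)
    return (1 + t) * math.log(1 + t) - t

us = [i * 0.05 for i in range(1, 201)]
# h(u) ≤ u²/2 for u ≥ 0
assert all(h(u) <= u * u / 2 for u in us)
# h(qy) ≥ q² h(y) for 0 ≤ q ≤ 1, y ≥ 0
qs = [i * 0.1 for i in range(1, 10)]
assert all(h(q * y) >= q * q * h(y) for q in qs for y in us)
```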
Lemma 1.11. Let B(t), 0 ≤ t ≤ 1, be a Brownian bridge, 0 < b < 1 and x > 0. Let Φ be the standard normal distribution function. Then

P(sup_{0≤t≤b} B(t) > x) = 1 − Φ(x/√(b(1 − b))) + exp(−2x²)[1 − Φ((1 − 2b)x/√(b(1 − b)))].   (1.60)

If 0 < b ≤ 1/2, then for all x > 0,

P(sup_{0≤t≤b} B(t) > x) ≤ exp(−x²/(2b(1 − b))).   (1.61)
Proof. Let X(t), 0 ≤ t < ∞, be a Wiener process. For some real α and value of X(1) let β := X(1) − α. It will be shown that for any real α and y

P{sup_{0≤t≤1} X(t) − αt > y | X(1)} = 1_{β>y} + exp(−2y(y − β))1_{β≤y}.   (1.62)

Clearly, if β > y then sup_{0≤t≤1} X(t) − αt > y (let t = 1). Suppose β ≤ y. One can apply a reflection argument as in the proof of Dudley (1993), Proposition 12.3.3, where details are given on making such an argument rigorous. Let X(t) = B(t) + tX(1) for 0 ≤ t ≤ 1, where B(·) is a Brownian bridge. We want to find P(sup_{0≤t≤1} B(t) + βt > y). But this is the same as P(sup_{0≤t≤1} Y(t) > y | Y(1) = β) for a Wiener process Y. For β ≤ y, the probability that sup_{0≤t≤1} Y(t) > y and β ≤ Y(1) ≤ β + dy is the same by reflection as P(2y − β ≤ Y(1) ≤ 2y − β + dy). Thus the desired conditional probability, for the standard normal density φ, is φ(2y − β)/φ(β) = exp(−2y(y − β)) as stated. So (1.62) is proved.
We can write the Brownian bridge B as W(t) − tW(1), 0 ≤ t ≤ 1, for a Wiener process W. Let W₁(t) := b^{−1/2}W(bt), 0 ≤ t < ∞. Then W₁ is a Wiener process. Let η := W(1) − W(b). Then η has a normal N(0, 1 − b) distribution and is independent of W₁(t), 0 ≤ t ≤ 1. Let γ := √b((1 − b)W₁(1) − √b η)/x. We have

P(sup_{0≤t≤b} B(t) > x | η, W₁(1)) = P(sup_{0≤t≤1} (W₁(t) − (bW₁(1) + √b η)t) > x/√b | η, W₁(1)).

Now the process W₁(t) − (bW₁(1) + √b η)t, 0 ≤ t ≤ 1, has the same distribution as a Wiener process Y(t), 0 ≤ t ≤ 1, given that Y(1) = (1 − b)W₁(1) − √b η. Thus by (1.62) with α = 0,

P(sup_{0≤t≤b} B(t) > x | η, W₁(1)) = 1_{γ>1} + 1_{γ≤1} exp(−2x²(1 − γ)/b).   (1.63)
Thus, integrating gives

P(sup_{0≤t≤b} B(t) > x) = P(γ > 1) + exp(−2x²/b) E[exp(2x²γ/b) 1_{γ≤1}].

From the definition of γ it has a N(0, b(1 − b)/x²) distribution. Since x is constant, the latter integral with respect to γ can be evaluated by completing the square in the exponent and yields (1.60).
We next need the inequality, for x ≥ 0,

1 − Φ(x) ≤ (1/2) exp(−x²/2).   (1.64)

This is easy to check via the first derivative for 0 ≤ x ≤ √(2/π). On the other hand we have the inequality 1 − Φ(x) ≤ φ(x)/x, x > 0, e.g. Dudley (1993), 12.1.6(a), which gives the conclusion for x ≥ √(2/π).

Applying (1.64) to both terms of (1.60) gives (1.61), so the Lemma is proved. □
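Both (1.64) and the resulting domination of the exact value (1.60) by the bound (1.61) can be checked numerically; an illustrative Python sketch (added here, not part of the original proof):

```python
import math

def Phi(x):
    return 0.5 * math.erfc(-x / math.sqrt(2))

xs = [i * 0.1 for i in range(0, 61)]
# (1.64): 1 − Φ(x) ≤ ½ exp(−x²/2) for x ≥ 0 (equality at x = 0)
assert all(1 - Phi(x) <= 0.5 * math.exp(-x*x/2) + 1e-15 for x in xs)

# right side of (1.60) ≤ exp(−x²/(2b(1 − b))) for 0 < b ≤ 1/2
for b in (0.05, 0.1, 0.25, 0.5):
    r = math.sqrt(b * (1 - b))
    for x in xs[1:]:
        exact = (1 - Phi(x / r)) + math.exp(-2*x*x) * (1 - Phi((1 - 2*b)*x / r))
        assert exact <= math.exp(-x*x / (2*b*(1-b))) + 1e-15
```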
1.7 Proof of Theorem 1.1
For the Brownian bridge B(t), 0 ≤ t ≤ 1, it is well known that for any x > 0

P(sup_{0≤t≤1} |B(t)| ≥ x) ≤ 2 exp(−2x²),

e.g. Dudley (1993), Proposition 12.3.3. It follows that

P(√n sup_{0≤t≤1} |B(t)| ≥ u) ≤ 2 exp(−u/3)

for u ≥ n/6. We also have |α₁(t)| ≤ 1 for all t and
P(sup_{0≤t≤1} |α_n(t)| ≥ x) ≤ D exp(−2x²),   (1.65)

which is the Dvoretzky-Kiefer-Wolfowitz inequality with a constant D. Massart (1990) proved (1.65) with the sharp constant D = 2. Earlier Hu (1985) proved it with D = 4√2. D = 6 suffices for present purposes. Given D, it follows that for u ≥ n/6,

P(√n sup_{0≤t≤1} |α_n(t)| ≥ u) ≤ D exp(−u/3).
For x < 6 log 2, we have 2e^{−x/6} > 1 so the conclusion of Theorem 1.1 holds. For x > n/3 − 12 log n, u := (x + 12 log n)/2 > n/6 so the left side of (1.1) is bounded above by (2 + D)n^{−2}e^{−x/6}. We have (2 + D)n^{−2} ≤ 2 for n ≥ 2 and D ≤ 6.
Thus it will be enough to prove Theorem 1.1 when
6 log 2 ≤ x ≤ n/3 − 12 log n.
(1.66)
The function t ↦ t/3 − 12 log t is decreasing for t < 36, increasing for t > 36. Thus one can check that for (1.66) to be non-vacuous is equivalent to
n ≥ 204.
(1.67)
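The threshold n ≥ 204 in (1.67) is easy to confirm by direct search; an illustrative Python check (not part of the original notes):

```python
import math

def nonvacuous(n):
    # (1.66) is non-vacuous iff n/3 − 12 log n ≥ 6 log 2
    return n/3 - 12*math.log(n) >= 6*math.log(2)

assert min(n for n in range(2, 5000) if nonvacuous(n)) == 204
```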
Let N be the largest integer such that 2^N ≤ n, so that ν := 2^N ≤ n < 2ν. Let Z be a ν-dimensional normal random variable with independent components, each having mean 0 and variance λ := n/ν. For integers 0 ≤ i < m let A(i, m) := {i + 1, · · · , m}. For any two vectors a := (a₁, · · · , a_ν) and b := (b₁, · · · , b_ν) in R^ν, we have the usual inner product (a, b) := ∑_{i=1}^ν a_i b_i. For any subset D ⊂ A(0, ν) let 1_D be its indicator function as a member of R^ν. For any integers j = 0, 1, 2, · · · and k = 0, 1, · · · , let

I_{j,k} := A(2^j k, 2^j (k + 1)),   (1.68)
let e_{j,k} be the indicator function of I_{j,k} and for j ≥ 1, let e′_{j,k} := e_{j−1,2k} − e_{j,k}/2. Then one can easily check that the family E := {e′_{j,k} : 1 ≤ j ≤ N, 0 ≤ k < 2^{N−j}} ∪ {e_{N,0}} is an orthogonal basis of R^ν with (e_{N,0}, e_{N,0}) = ν and (e′_{j,k}, e′_{j,k}) = 2^{j−2} for each of the given j, k. Let W_{j,k} := (Z, e_{j,k}) and W′_{j,k} := (Z, e′_{j,k}). Then since the elements of E are orthogonal it follows that the random variables W′_{j,k} for 1 ≤ j ≤ N, 0 ≤ k < 2^{N−j} and W_{N,0} are independent normal with

EW′_{j,k} = EW_{N,0} = 0, Var(W′_{j,k}) = λ2^{j−2}, Var(W_{N,0}) = λν.   (1.69)
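The claimed orthogonality and norms of the family E can be verified by explicit construction in a small case. The Python sketch below (illustrative only, using N = 3 so ν = 8; the helper names are ours) builds the vectors and checks the inner products:

```python
# verify the orthogonal basis E for N = 3, ν = 8
N = 3
nu = 2 ** N

def e(j, k):
    # indicator of I_{j,k} = {2^j k + 1, ..., 2^j (k+1)} as a vector in R^ν
    return [1.0 if 2**j * k < i + 1 <= 2**j * (k + 1) else 0.0 for i in range(nu)]

def eprime(j, k):
    # e'_{j,k} := e_{j−1,2k} − e_{j,k}/2, i.e. +1/2 on the left half, −1/2 on the right half
    a, b = e(j - 1, 2 * k), e(j, k)
    return [x - y / 2 for x, y in zip(a, b)]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

basis = [eprime(j, k) for j in range(1, N + 1) for k in range(2 ** (N - j))] + [e(N, 0)]
assert len(basis) == nu                      # a full basis of R^ν
for i in range(len(basis)):
    for j in range(i + 1, len(basis)):
        assert abs(dot(basis[i], basis[j])) < 1e-12      # pairwise orthogonal
assert dot(e(N, 0), e(N, 0)) == nu                       # (e_{N,0}, e_{N,0}) = ν
for j in range(1, N + 1):
    for k in range(2 ** (N - j)):
        assert abs(dot(eprime(j, k), eprime(j, k)) - 2 ** (j - 2)) < 1e-12
```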
Recalling the notation of Lemma 1.2, let Φ_n be the distribution function of a binomial B(n, 1/2) random variable, with inverse Φ_n^{−1}. Now let G_m(t) := Φ_m^{−1}(Φ(t)).
We will begin defining the construction that will connect the empirical process with a Brownian bridge. Let

U_{N,0} := n   (1.70)

and then recursively as j decreases from j = N to j = 1,

U_{j−1,2k} := G_{U_{j,k}}((2^{2−j}/λ)^{1/2} W′_{j,k}), U_{j−1,2k+1} := U_{j,k} − U_{j−1,2k},   (1.71)

k = 0, 1, · · · , 2^{N−j} − 1. Note that by (1.69), (2^{2−j}/λ)^{1/2} W′_{j,k} has a standard normal distribution, so Φ of it has a U[0, 1] distribution. It is easy to verify successively for j = N, N − 1, · · · , 0 that the random vector {U_{j,k}, 0 ≤ k < 2^{N−j}} has a multinomial distribution with parameters n, 2^{j−N}, · · · , 2^{j−N}. Let X := (U_{0,0}, U_{0,1}, · · · , U_{0,ν−1}). Then the random vector X has a multinomial distribution with parameters n, 1/ν, · · · , 1/ν. The random vector X is equal in distribution to
{n(Fn((k + 1)/ν) − Fn(k/ν)), 0 ≤ k ≤ ν − 1},
(1.72)
while for a Wiener process W, Z is equal in distribution to

{√n(W((k + 1)/ν) − W(k/ν)), 0 ≤ k ≤ ν − 1}.   (1.73)
Without loss of generality, we can assume that the above equalities in distribution are actual equalities for some uniform empirical distribution function F_n and Wiener process W = W_n. Specifically, consider a vector of i.i.d. uniform random variables (x₁, · · · , x_n) ∈ R^n such that

F_n(t) := (1/n) ∑_{j=1}^n 1_{x_j ≤ t}

and note that W has sample paths in C[0, 1]. Both R^n and C[0, 1] are separable Banach spaces. Thus one can let (x₁, · · · , x_n) and W be conditionally independent given the vectors in (1.72) and (1.73) which have the joint distribution of X and Z, by the Vorob'ev-Berkes-Philipp theorem, see Berkes and Philipp (1979), Lemma A1. Then we define a Brownian bridge by B_n(t) := W_n(t) − tW_n(1) and the empirical process α_n(t) := √n(F_n(t) − t), 0 ≤ t ≤ 1. By our choices, we then have
{n(F_n(j/ν) − j/ν)}_{j=0}^ν = {∑_{i=0}^{j−1} (X_i − n/ν)}_{j=0}^ν   (1.74)

and

{√n B_n(j/ν)}_{j=0}^ν = {(∑_{i=0}^{j−1} Z_i) − (j/ν) ∑_{r=0}^{ν−1} Z_r}_{j=0}^ν.   (1.75)
Theorem 1.1 will be proved for the given B_n and α_n. Specifically, we want to prove

P₀ := P(sup_{0≤t≤1} |α_n(t) − B_n(t)| > (x + 12 log n)/√n) ≤ 2 exp(−x/6).   (1.76)
It will be shown that αn(j/ν) and Bn(j/ν) are not too far apart for j = 0, 1, · · · , ν while
the increments of the processes over the intervals between the lattice points j/ν are also not
too large.
Let C := 0.29. Let M be the least integer such that

C(x + 6 log n) ≤ λ2^{M+1}.   (1.77)

Since n ≥ 204 by (1.67) and λ < 2 this implies M ≥ 2. We have by definition of M and (1.66)

2^M ≤ λ2^M ≤ C(x + 6 log n) ≤ Cn/3 < 0.1·2^{N+1} < 2^{N−2}

so M ≤ N − 3.
For each t ∈ [0, 1], let π_M(t) be the nearest point of the grid {i/2^{N−M}, 0 ≤ i ≤ 2^{N−M}}, or if there are two nearest points, take the smaller one. Let D := X − Z and D(m) := ∑_{i=1}^m D_i. Let C′ := 0.855 and define

Θ := {U_{j,k} ≤ λ(1 + C′)2^j whenever M + 1 < j ≤ N, 0 ≤ k < 2^{N−j}}
  ∩ {U_{j,k} ≥ λ(1 − C′)2^j whenever M < j ≤ N, 0 ≤ k < 2^{N−j}}.
Then

P₀ ≤ P₁ + P₂ + P₃ + P(Θ^c)   (1.78)

where

P₁ := P(sup_{0≤t≤1} |α_n(t) − α_n(π_M(t))| > 0.28(x + 6 log n)/√n),   (1.79)

P₂ := P(sup_{0≤t≤1} |B_n(t) − B_n(π_M(t))| > 0.22(x + 6 log n)/√n),

and, recalling (1.74) and (1.75),

P₃ := 2^{N−M} max_{m∈A(M)} P({|D(m) − (m/ν)D(ν)| > 0.5x + 9 log n} ∩ Θ),   (1.80)

where A(M) := {k2^M : k = 1, 2, · · ·} ∩ A(0, ν).
First we bound P(Θ^c). Since by (1.71) U_{j,k} = U_{j−1,2k} + U_{j−1,2k+1}, we have

Θ^c ⊂ ⋃_{0≤k<2^{N−M−2}} {U_{M+2,k} > (1 + C′)λ2^{M+2}} ∪ ⋃_{0≤k<2^{N−M−1}} {U_{M+1,k} < (1 − C′)λ2^{M+1}}.

Since U_{M+2,k} and U_{M+1,k} are binomial random variables, Lemma 1.9 gives

P(Θ^c) ≤ 2^{N−M−1} [exp(−λ2^{M+2} h(C′)) + exp(−λ2^{M+1} h(−C′))].

Now 2h(C′) ≥ 0.5823 ≥ h(−C′) ≥ 0.575 (note that C′ has been chosen to make 2h(C′) and h(−C′) approximately equal). By definition of M (1.77), λ2^{M+1} ≥ C(x + 6 log n), and 0.575C > 1/6, so

P(Θ^c) ≤ 2^{−M} exp(−x/6).   (1.81)

Next, to bound P₁ and P₂. Let b := 2^{M−N−1} ≤ 1/2. Since α_n(t) has stationary increments, we can apply Lemma 1.10. Let u := x + 6 log n. We have by definition of M (1.77)

nb = n2^{M−N−1} < Cu/2.   (1.82)

By (1.66), u < n/3 so b < C/6. Recalling (1.54), note that h′(t) ≡ log(1 + t). Thus h is increasing. For any given v > 0 it is easy to check that

y ↦ y h(v/y) is decreasing for y > 0.   (1.83)
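The numerical facts about C′ = 0.855 and C = 0.29 used for (1.81) are tight and worth confirming; an illustrative Python check (not part of the original proof):

```python
import math

def h(t):
    return (1 + t) * math.log(1 + t) - t

Cp = 0.855   # C′
C = 0.29
assert 2 * h(Cp) >= 0.5823
assert 0.5823 >= h(-Cp) >= 0.575
assert 0.575 * C > 1/6
```

Indeed 2h(C′) ≈ 0.58235 and h(−C′) ≈ 0.57500, so the two quantities are nearly equal, as the text remarks.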
Lemma 1.10 gives

P₁ ≤ 2^{N−M+2} exp(−nb(1 − b) h(0.28u/(nb))) < 2^{N−M+2} exp(−(C/2)(1 − C/6) u h(0.28·(2/C)))

by (1.83) and (1.82) and since 1 − b > 1 − C/6, so one can calculate

P₁ ≤ 2^{N−M+2} e^{−u/6} ≤ 2^{2−M} λ^{−1} exp(−x/6).   (1.84)
The Brownian bridge also has stationary increments, so Lemma 1.11, (1.61) and (1.82) give

P₂ ≤ 2^{N−M+2} exp(−(0.22u)²/(2nb)) ≤ 2^{N−M+2} exp(−(0.22)²u/C) ≤ 2^{2−M} λ^{−1} e^{−x/6}   (1.85)

since (0.22)²/C > 1/6.
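The constants behind (1.84) and (1.85) can be confirmed directly; an illustrative Python check (not part of the original notes):

```python
import math

def h(t):
    return (1 + t) * math.log(1 + t) - t

C = 0.29
# for P₁: (C/2)(1 − C/6) h(0.28·2/C) must exceed 1/6
assert (C / 2) * (1 - C / 6) * h(0.56 / C) > 1/6
# for P₂: (0.22)²/C must exceed 1/6
assert 0.22**2 / C > 1/6
```

Both margins are small (about 0.0018 and 0.0002 respectively), which explains the particular choices 0.28, 0.22 and C = 0.29.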
It remains to bound P₃. Fix m ∈ A(M). A bound is needed for

P₃(m) := P({|D(m) − (m/ν)D(ν)| > 0.5x + 9 log n} ∩ Θ).   (1.86)
For each j = 1, · · · , N take k(j) such that m ∈ I_{j,k(j)}. By the definition (1.68) of I_{j,k}, k(M) = m2^{−M} − 1 and k(j) = [k(j − 1)/2] for j = 1, · · · , N, where [x] is the largest integer ≤ x. From here on each double subscript j, k(j) will be abbreviated to the single subscript j, e.g. e_j := e_{j,k(j)} and e′_j := e′_{j,k(j)}. The following orthogonal expansion holds in E:

1_{A(0,m)} = (m/ν) e_{N,0} + ∑_{M<j≤N} c_j e′_j,   (1.87)

where 0 ≤ c_j ≤ 1 for M < j ≤ N. To see this, note that 1_{A(0,m)} ⊥ e′_{j,k} for j ≤ M since 2^M is a divisor of m. Also, 1_{A(0,m)} ⊥ e′_{j,k} for k ≠ k(j) since 1_{A(0,m)} has all 0's or all 1's on the set where e′_{j,k} has non-zero entries, half of which are +1/2 and the other half −1/2. In an orthogonal expansion f = ∑_j c_j f_j we always have c_j = (f, f_j)/‖f_j‖² where ‖v‖² := (v, v). We have ‖e′_j‖ = 2^{(j−2)/2}. Now, (1_{A(0,m)}, e′_j) is as large as possible when the components of e′_j equal +1/2 only for indices ≤ m, and then the inner product equals 2^{j−2}, so |c_j| ≤ 1 as stated. The m/ν factor is clear.
We next have
$$
e_j = 2^{j-N} e_{N,0} + \sum_{i>j} (-1)^{s(i,j,m)}\, 2^{j+1-i}\, e'_i, \tag{1.88}
$$
where $s(i,j,m) = 0$ or $1$ for each $i, j, m$, so that the corresponding factors are $\pm 1$, the signs being immaterial in what follows. Let $\Delta_j := (D, e'_j)$. Then from (1.87),
$$
\Bigl| D(m) - \frac{m}{\nu} D(\nu) \Bigr| \;\le\; \sum_{M<j\le N} |\Delta_j|. \tag{1.89}
$$
Recall that $W'_j = (Z, e'_j)$ (see between (1.68) and (1.69)) and $D = X - Z$. Let $\xi_j := (2^{2-j}/\lambda)^{1/2}\, W'_j$ for $M < j \le N$. Then by (1.69) and the preceding statement, $\xi_{M+1}, \dots, \xi_N$ are i.i.d. standard normal random variables. We have $U_{j,k} = (X, e_{j,k})$ for all $j$ and $k$ from the definitions. Then $U_j = (X, e_j)$. Let $U'_j := (X, e'_j)$. By (1.71) and Lemma 1.2, (1.4),
$$
|U'_j - U_j\,\xi_j/2| \;\le\; 1 + \xi_j^2/8. \tag{1.90}
$$
Let
$$
C_4 := \frac{\sqrt{2}\,\bigl(1 + C'(\sqrt{2}+1)\bigr)}{4},
$$
and for each $M$ let $c_M := 1/(4 C_4 2^{M/2})$. Then for any real number $x$, we have $x(1 - C_4 2^{M/2} x) \le c_M$. It follows that
$$
\sum_{M<j\le N} L_j \;\le\; \sum_{M<j\le N} \bigl( C_3 \xi_j^2 + c_M C_2 2^{-j/2} \bigr)
\;\le\; C_2\, c_M\, 2^{-(M+1)/2}/(1 - 2^{-1/2}) + \sum_{M<j\le N} C_3 \xi_j^2
\;\le\; \frac{C_2\, 2^{-M}}{\sqrt{2}\,\sqrt{1+C'}} + \sum_{M<j\le N} C_3 \xi_j^2.
$$
Thus, combining (1.91) and (1.94) we get on $\Theta$
$$
\sum_{M<j\le N} |\Delta_j| \;\le\; N + \Bigl(\frac{1}{8} + C_3\Bigr) \sum_{M<j\le N} \xi_j^2. \tag{1.95}
$$
We have $E\exp(t\xi^2) = (1-2t)^{-1/2}$ for $t < 1/2$ and any standard normal variable $\xi$, such as $\xi_j$ for each $j$. Since $\xi_{M+1}, \dots, \xi_N$ are independent we get
$$
E\Bigl[\exp\Bigl(\frac{1}{3}\sum_{M<j\le N} |\Delta_j|\Bigr)\, 1_\Theta\Bigr]
\;\le\; e^{N/3}\Bigl(1 - \frac{2}{3}\Bigl(\frac{1}{8} + C_3\Bigr)\Bigr)^{(M-N)/2}
\;\le\; e^{N/3}\, 2^{1.513(N-M)} \;\le\; 2^{2N-1.5M}.
$$
Markov's inequality and (1.89) then yield
$$
P_3(m) \;\le\; e^{-x/6}\, n^{-3}\, 2^{2N-1.5M}.
$$
Thus
$$
P_3 \;\le\; e^{-x/6}\, n^{-3}\, 2^{3N-2.5M} \;\le\; 2^{-2.5M} e^{-x/6}. \tag{1.96}
$$
Collecting (1.81), (1.84), (1.85) and (1.96) we get that
$$
P_0 \;\le\; \bigl(2^{3-M}\lambda^{-1} + 2^{-M} + 2^{-2.5M}\bigr)\, e^{-x/6}.
$$
By (1.77) and (1.67), and since $x \ge 6\log 2$ (1.66) and $M \ge 2$, it follows that Theorem 1.1 holds. $\square$
1.8 Another way of defining the KMT construction
Now, here is an alternate description of the KMT construction as given in the previous section.
For any Hilbert space H, the isonormal process is a stochastic process L indexed by H such that the joint distributions of L(f) for f ∈ H are normal (Gaussian) with mean 0 and covariance given by the inner product in H: EL(f)L(g) = (f, g). Since the inner product is a nonnegative definite bilinear form, such a process exists. Moreover, we have:
Lemma 1.12. For any Hilbert space H, an isonormal process L on H is linear, that is, for
any f, g ∈ H and constant c, L(cf + g) = cL(f ) + L(g) almost surely.
Proof. The variable L(cf + g) − cL(f) − L(g) clearly has mean 0, and by a short calculation one can show that its variance is also 0, so it is 0 almost surely. $\square$
The Wiener process (Brownian motion) is a Gaussian stochastic process Wt defined for
t ≥ 0 with mean 0 and covariance EWsWt = min(s, t). One can obtain a Wiener process
easily from an isonormal process as follows. Let H be the Hilbert space L2([0, ∞), λ) where λ
is Lebesgue measure. Let Wt := L(1[0,t]). This process is Gaussian, has mean 0 and clearly
has the correct covariance. Historically, the Wiener process was defined first, and then $L(f)$ was defined only for the particular Hilbert space $L^2([0,\infty))$ by way of a "stochastic integral" $L(f) = \int_0^\infty f(t)\,dW_t$, which generally doesn't exist as an ordinary integral but is defined as a limit in probability, approximating $f$ in $L^2$ by step functions. Defining $L$ first seems much easier.
The Brownian bridge process, as has been treated throughout this chapter, is a Gaussian
stochastic process Bt defined for 0 ≤ t ≤ 1 with mean 0 and covariance EBtBu = t(1 − u) for
0 ≤ t ≤ u ≤ 1. Given a Wiener process Wt, it is easy to see that Bt = Wt − tW1 for 0 ≤ t ≤ 1
defines a Brownian bridge.
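The identity $B_t = W_t - tW_1$ is easy to test by simulation. The sketch below is not from the text (the function name and parameters are mine): it draws independent Gaussian increments of $W$ over $[0,t]$, $[t,u]$, $[u,1]$ and checks the bridge covariance $EB_tB_u = t(1-u)$ empirically.

```python
import math
import random

def bridge_cov_estimate(t=0.25, u=0.75, n_paths=40000, seed=0):
    """Monte Carlo estimate of E[B_t B_u] for the bridge B_t = W_t - t W_1."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_paths):
        # Independent Gaussian increments of W over [0,t], [t,u], [u,1].
        w_t = rng.gauss(0.0, math.sqrt(t))
        w_u = w_t + rng.gauss(0.0, math.sqrt(u - t))
        w_1 = w_u + rng.gauss(0.0, math.sqrt(1.0 - u))
        b_t, b_u = w_t - t * w_1, w_u - u * w_1   # Brownian bridge values
        acc += b_t * b_u
    return acc / n_paths

# Theory: E[B_t B_u] = t(1-u) = 0.0625 for t = 0.25, u = 0.75.
assert abs(bridge_cov_estimate() - 0.0625) < 0.01
```

With 40,000 paths the standard error of the estimate is about 0.001, so the 0.01 tolerance is comfortable.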
For j = 0, 1, 2, ..., and k = 1, ..., 2j let Ij,k be the open interval ((k − 1)/2j , k/2j ). Let
Tj,k be the “triangle function” defined as 0 outside Ij,k, 1 at the midpoint (2k − 1)/2j+1, and
:= f at k/2r for
linear in between. For a function f :
k = 0, 1, ..., 2r and linear in between. Let
(cid:5)
[0, 1] (cid:10)→ R and r = 0, 1, ..., let [f ]r
(cid:6)
fj,k := Wj,k(f ) := f
2k − 1
2j+1
(cid:7) (cid:5)
f
(cid:6)
k − 1
2j
1
2
−
(cid:5) (cid:6)(cid:8)
k
2j
.
+ f
Lemma 1.13. If f is affine, that is f (t) ≡ a + bt where a and b are constants, then fj,k = 0
for all j and k.
Proof. One can check this easily if $f$ is a constant or if $f(t) \equiv t$; then use linearity of the operation $W_{j,k}$ on functions for each $j$ and $k$. $\square$
Lemma 1.14. For any $f : [0,1] \mapsto \mathbb{R}$, $r = 0, 1, \dots$, and $0 \le t \le 1$,
$$
[f]_r(t) = f(0) + t\,[f(1) - f(0)] + \sum_{j=0}^{r-1} \sum_{k=1}^{2^j} f_{j,k}\, T_{j,k}(t),
$$
where the sum is defined as 0 for $r = 0$.
Proof. For r = 0 we have f (0) + t[f (1) − f (0)] = f (0) when t = 0, f (1) when t = 1, and
the function is linear in between, so it equals [f ]0. Then by Lemma 1.13 and linearity of the
operations Wj,k we can assume in the proof for r ≥ 1 that f (0) = f (1) = 0.
For $r = 1$ we have $f_{0,1}T_{0,1}(t) = 0 = f(t)$ for $t = 0$ or 1 and $f(1/2)$ for $t = 1/2$, with linearity in between, so $f_{0,1}T_{0,1} = [f]_1$, proving the case $r = 1$. Then, by induction on $r$, we can apply the same argument on each interval $I_{r,k}$, $k = 1, \dots, 2^r$, to prove the lemma. $\square$
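Lemma 1.14 can be checked numerically. In this sketch (all names mine, plain Python), `W_jk` computes the coefficients $f_{j,k}$, `T_jk` evaluates the triangle functions, and the partial expansion $[f]_r$ is verified to agree with $f$ at every dyadic point $k/2^r$.

```python
def W_jk(f, j, k):
    # f_{j,k} = f((2k-1)/2^{j+1}) - (f((k-1)/2^j) + f(k/2^j)) / 2
    return f((2*k - 1) / 2**(j + 1)) - 0.5 * (f((k - 1) / 2**j) + f(k / 2**j))

def T_jk(t, j, k):
    # Triangle function: 0 outside I_{j,k}, 1 at the midpoint, linear between.
    left, mid, right = (k - 1) / 2**j, (2*k - 1) / 2**(j + 1), k / 2**j
    if t <= left or t >= right:
        return 0.0
    return (t - left) / (mid - left) if t <= mid else (right - t) / (right - mid)

def partial_expansion(f, r, t):
    # [f]_r(t) = f(0) + t (f(1) - f(0)) + sum_{j<r} sum_k f_{j,k} T_{j,k}(t)
    s = f(0.0) + t * (f(1.0) - f(0.0))
    for j in range(r):
        for k in range(1, 2**j + 1):
            s += W_jk(f, j, k) * T_jk(t, j, k)
    return s

f = lambda t: t * t * (1.0 - t)   # any continuous test function
r = 4
for k in range(2**r + 1):         # [f]_r interpolates f at level-r dyadics
    t = k / 2**r
    assert abs(partial_expansion(f, r, t) - f(t)) < 1e-12
```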
The following is clear since a continuous function on [0, 1] is uniformly continuous:
Lemma 1.15. If f is continuous on [0, 1] then [f ]r converges to f uniformly as r → ∞.
It follows that for any $f \in C[0,1]$,
$$
f(t) = f(0) + t\,[f(1) - f(0)] + \sum_{j=0}^{\infty} \sum_{k=1}^{2^j} f_{j,k}\, T_{j,k}(t),
$$
where the sum converges uniformly on $[0,1]$. Thus, the sequence of functions
$$
1,\ t,\ T_{0,1},\ T_{1,1},\ T_{1,2},\ \dots,\ T_{j,1},\ \dots,\ T_{j,2^j},\ T_{j+1,1},\ \dots
$$
is known as the Schauder basis of C[0, 1]. This basis fits well with a simple relation between
the Brownian motion or Wiener process Wt, t ≥ 0, and the Brownian bridge Bt, 0 ≤ t ≤ 1,
given by Bt = Wt − tW1, 0 ≤ t ≤ 1. Both processes are 0 at 0, and their Schauder expansions
differ only in the linear “t” term where W. has the coefficient W1 and B. has the coefficient 0,
by the following fact:
Lemma 1.16. Wj,k(B.) = Wj,k(W.) for all j = 0, 1, ... and k = 1, ..., 2j .
Proof. We need only note that Wj,k(·) is a linear operation on functions for each j and k and
�
Wj,k(tW1) = 0 by Lemma 1.13.
Lemma 1.17. The random variables Wj,k(B.) for j = 0, 1, ... and k = 1, ..., 2j are independent
with distribution N (0, 2−j−2).
Proof. We have by the previous lemma
$$
W_{j,k}(B_\cdot) = W_{j,k}(W_\cdot) = W_{(2k-1)/2^{j+1}} - \tfrac{1}{2}\bigl[W_{(k-1)/2^j} + W_{k/2^j}\bigr]
= L(1_{[0,(2k-1)/2^{j+1}]}) - \tfrac{1}{2}\bigl[L(1_{[0,(k-1)/2^j]}) + L(1_{[0,k/2^j]})\bigr],
$$
which by linearity of the isonormal process $L$, Lemma 1.12, equals $L(g_{j,k})$ where
$$
g_{j,k} := 1_{[0,(2k-1)/2^{j+1}]} - \tfrac{1}{2}\bigl[1_{[0,(k-1)/2^j]} + 1_{[0,k/2^j]}\bigr]
= \tfrac{1}{2}\,1_{((k-1)/2^j,\,(2k-1)/2^{j+1}]} - \tfrac{1}{2}\,1_{((2k-1)/2^{j+1},\,k/2^j]}.
$$
(These functions gj,k, multiplied by some constants, are known as Haar functions.) To finish
the proof of Lemma 1.17 we will use the following:
Lemma 1.18. The functions $g_{j,k}$ and $g_{j',k'}$ are orthogonal in $L^2([0,1])$ (with Lebesgue measure) unless $(j,k) = (j',k')$.

Proof. If $j = j'$, the functions $g_{j,k}$ are orthogonal for different $k$ since they are supported on non-overlapping intervals $I_{j,k}$. If $j \ne j'$, say $j' < j$, then $g_{j,k}$ is 0 outside of $I_{j,k}$, equal to $1/2$ on the left half of it and $-1/2$ on the right half, while $g_{j',k'}$ is constant on that interval, so the functions are orthogonal, proving Lemma 1.18. $\square$
Returning to the proof of Lemma 1.17, we have that $L$ of orthogonal functions are independent normal variables with mean 0, and $E(L(f)^2) = \|f\|^2$, where
$$
\|g_{j,k}\|^2 = \int_0^1 g_{j,k}(t)^2\,dt = 2^{-j-2},
$$
since $g_{j,k}^2$ equals $1/4$ on an interval of length $2^{-j}$ and is 0 elsewhere. So Lemma 1.17 is proved. $\square$
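Both facts used here, the orthogonality of the $g_{j,k}$ (Lemma 1.18) and $\|g_{j,k}\|^2 = 2^{-j-2}$, can be verified exactly in rational arithmetic, since the $g_{j,k}$ are step functions on dyadic intervals. The sketch below is mine, not part of the text.

```python
from fractions import Fraction

def g_jk(t, j, k):
    """g_{j,k}: +1/2 on the left half of I_{j,k}, -1/2 on the right half,
    0 elsewhere, with the half-open conventions of the indicator definitions."""
    left = Fraction(k - 1, 2**j)
    mid = Fraction(2*k - 1, 2**(j + 1))
    right = Fraction(k, 2**j)
    if left < t <= mid:
        return Fraction(1, 2)
    if mid < t <= right:
        return Fraction(-1, 2)
    return Fraction(0)

def inner(j1, k1, j2, k2, levels=5):
    # Exact integral over [0,1] of g_{j1,k1} g_{j2,k2}: both factors are
    # constant on each dyadic cell of length 2^{-levels}, so midpoint
    # sampling on that grid gives the integral exactly.
    n = 2**levels
    total = Fraction(0)
    for i in range(1, n + 1):
        t = Fraction(2*i - 1, 2*n)          # midpoint of each grid cell
        total += g_jk(t, j1, k1) * g_jk(t, j2, k2) * Fraction(1, n)
    return total

assert inner(2, 1, 2, 1) == Fraction(1, 2**4)   # ||g_{2,1}||^2 = 2^{-j-2}
assert inner(2, 1, 2, 2) == 0                   # same level, different k
assert inner(1, 1, 2, 1) == 0                   # different levels
```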
There are other ways of expanding functions on [0, 1] beside Schauder bases, for example,
Fourier series. Fourier series have the advantage that the terms in the series are orthogonal
functions with respect to Lebesgue measure on [0, 1]. The Schauder basis functions are not
orthogonal, for example the constant function 1 is not orthogonal to any of the other functions
in the sequence, and the functions are all nonnegative, so those whose supports overlap are
non-orthogonal. However, the Schauder functions are indefinite integrals of constant multiples
of the orthogonal functions gj,k or equivalently constant multiples of Haar functions, and it
turns out that the indefinite integral fits well with the processes we are considering, as in the
above proof. In a sense, the Wiener process Wt is the indefinite integral of the isonormal
process L via Wt = L(1[0,t]).
Let $\Phi_m$ be the distribution function of the binomial bin$(m, 1/2)$ distribution: $\Phi_m(x) := 0$ for $x < 0$, $\Phi_m(x) := \sum_{j=0}^{k} \binom{m}{j} 2^{-m}$ for $k \le x < k+1$, $k = 0, 1, \dots, m-1$, and $\Phi_m(x) := 1$ for $x \ge m$. For a function $F$ from $\mathbb{R}$ into itself let $F^{\leftarrow}(y) := \inf\{x : F(x) \ge y\}$, as in Lemma 1.3. Let $H(t|m) := \Phi_m^{\leftarrow}(t)$ for $0 < t < 1$.
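$\Phi_m$ and the quantile $H(\cdot|m)$ are straightforward to implement; the sketch below (my code, standard library only) builds the bin$(m,1/2)$ CDF and the generalized inverse $F^{\leftarrow}(y) = \inf\{x : F(x) \ge y\}$.

```python
from math import comb

def binom_cdf(m):
    """Return the list F[k] = Phi_m(k) = sum_{j<=k} C(m,j) 2^{-m}, k = 0..m."""
    cdf, acc = [], 0.0
    for j in range(m + 1):
        acc += comb(m, j) / 2**m
        cdf.append(acc)
    return cdf

def H(t, m):
    """H(t|m) = Phi_m^{<-}(t) = inf{x : Phi_m(x) >= t}, for 0 < t < 1."""
    for k, F in enumerate(binom_cdf(m)):
        if F >= t:
            return k
    return m

# bin(4, 1/2): probabilities (1,4,6,4,1)/16; CDF 1/16, 5/16, 11/16, 15/16, 1.
assert H(0.5, 4) == 2            # first k with Phi_4(k) >= 1/2
assert H(0.05, 4) == 0
assert H(0.99, 4) == 4
```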
Now to proceed with the KMT construction, for a given $n$, let $B(n)$ be a Brownian bridge process. Let $V_{0,1} := n$. Let $V_{1,1} := H(\Phi(2W_{0,1}(B(n)))|n)$ and $V_{1,2} := V_{0,1} - V_{1,1}$. By Lemma 1.17, $2W_{0,1}(B(n))$ has law $N(0,1)$, thus $\Phi$ of it has law $U[0,1]$ by Lemma 1.3(a), and $V_{1,1}$ has law bin$(n, 1/2)$ by Lemma 1.3(b). We will define empirical distribution functions $U_n$ for the $U[0,1]$ distribution recursively over dyadic rationals, beginning with $U_n(0) = 0$, $U_n(1) = 1$, and $U_n(1/2) = V_{1,1}/n$. These values have their correct distributions so far. Now given $V_{j-1,k}$ for some $j \ge 2$ and all $k = 1, \dots, 2^{j-1}$, let
$$
V_{j,2k-1} := H\bigl(\Phi(2^{(j+1)/2}\, W_{j-1,k}(B(n)))\,\big|\,V_{j-1,k}\bigr)
$$
and $V_{j,2k} := V_{j-1,k} - V_{j,2k-1}$. This completes the recursive definition of the $V_{j,i}$. Then $W_{j-1,k}(B(n))$ has law $N(0, 2^{-j-1})$ by Lemma 1.17, so $2^{(j+1)/2}$ times it has law $N(0,1)$, and $\Phi$ of the product has law $U[0,1]$ by Lemma 1.3(a), so $V_{j,2k-1}$ has law bin$(V_{j-1,k}, 1/2)$ by Lemma 1.3(b). Let $U_n(1/4) := V_{2,1}/n$, $U_n(3/4) := U_n(1/2) + V_{2,2}/n$, and so on. Then $U_n(k/2^j)$ for $k = 0, 1, \dots, 2^j$ have their correct joint distribution, and when taken for all $j = 1, 2, \dots$, they uniquely define $U_n$ on $[0,1]$ by monotonicity and right-continuity, which has all the properties of an empirical distribution function for $U[0,1]$.
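The recursion for the counts $V_{j,k}$ can be sketched directly. The code below is mine (reusing a binomial quantile `H` as above); each split $V_{j,2k-1} \sim \mathrm{bin}(V_{j-1,k}, 1/2)$ is produced from an i.i.d. $N(0,1)$ draw pushed through $\Phi$ and $H$, which by Lemma 1.17 is distributionally the same as using $2^{(j+1)/2}W_{j-1,k}(B(n))$. The counts at every dyadic level sum to $n$.

```python
import random
from itertools import accumulate
from math import comb, erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def H(t, m):
    """Binomial quantile: inf{k : Phi_m(k) >= t} for bin(m, 1/2)."""
    acc = 0.0
    for k in range(m + 1):
        acc += comb(m, k) / 2**m
        if acc >= t:
            return k
    return m

def kmt_counts(n, levels, seed=0):
    """Dyadic counts: rows[j] = [V_{j,1}, ..., V_{j,2^j}], each row summing
    to n.  Each split V_{j,2k-1} ~ bin(V_{j-1,k}, 1/2) is generated from an
    N(0,1) draw via the quantile transform, as in the text's recursion."""
    rng = random.Random(seed)
    rows = [[n]]
    for _ in range(levels):
        cur = []
        for v in rows[-1]:
            left = H(Phi(rng.gauss(0.0, 1.0)), v)
            cur.extend([left, v - left])
        rows.append(cur)
    return rows

rows = kmt_counts(n=100, levels=4)
for row in rows:
    assert sum(row) == 100 and all(v >= 0 for v in row)
# U_n at the dyadic points k/2^4: cumulative sums of the finest row, over n.
Un = [s / 100 for s in accumulate(rows[-1])]
assert abs(Un[-1] - 1.0) < 1e-12
```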
With the help of Lemma 1.2, one can show that the Schauder coefficients of the empirical
process αn := n1/2(Un − U ), where U is the U [0, 1] distribution function, are close to those
of B(n). Lemma 1.2 has to be applied not only for the given n but also for n replaced by Vj,k,
and that creates some technical problems. For the present, the proof in the previous section
is not rewritten here in terms of the present construction.
REFERENCES
Bennett, George W. (1962). Probability inequalities for the sum of bounded random vari-
ables. J. Amer. Statist. Assoc. 57, 33–45.
Berkes, I., and Philipp, W. (1979). Approximation theorems for independent and weakly
dependent random vectors. Ann. Probab. 7, 29-54.
B | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
ms for independent and weakly
dependent random vectors. Ann. Probab. 7, 29-54.
Bretagnolle, J., and Massart, P. (1989). Hungarian constructions from the nonasymptotic
viewpoint. Ann. Probab. 17, 239–256.
Chernoff, H. (1952). A measure of efficiency for tests of a hypothesis based on the sum of
observations. Ann. Math. Statist. 23, 493-507.
Csörgő, M., and Horváth, L. (1993). Weighted Approximations in Probability and Statistics. Wiley, Chichester.

Csörgő, M., and Révész, P. (1981). Strong Approximations in Probability and Statistics. Academic, New York.
Donsker, Monroe D. (1952). Justification and extension of Doob’s heuristic approach to
the Kolmogorov-Smirnov theorems. Ann. Math. Statist. 23, 277–281.
Dudley, Richard M. (1984). A Course on Empirical Processes. Ecole d’´et´e de probabilit´es
de St.-Flour, 1982. Lecture Notes in Math. 1097, 1-142, Springer.
Dudley, R. M. (2002). Real Analysis and Probability. Second ed., Cambridge University
Press.
Feller, William (1968). An Introduction to Probability Theory and Its Applications. Vol. 1,
3d ed. Wiley, New York.
Hoeffding, W. (1963). Probability inequalities for sums of bounded random variables. J.
Amer. Statist. Assoc. 58, 13-30.
Hu, Inchi (1985). A uniform bound for the | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
. Statist. Assoc. 58, 13-30.
Hu, Inchi (1985). A uniform bound for the tail probability of Kolmogorov-Smirnov statis-
tics. Ann. Statist. 13, 821-826.
Komlós, J., Major, P., and Tusnády, G. (1975). An approximation of partial sums of independent RV's and the sample DF. I. Z. Wahrscheinlichkeitstheorie verw. Gebiete 32, 111-131.
Mason, D. M. (1998). Notes on the KMT Brownian bridge approximation to the uniform empirical process. Preprint.
Mason, D. M., and van Zwet, W. (1987). A refinement of the KMT inequality for the
uniform empirical process. Ann. Probab. 15, 871-884.
Massart, P. (1990). The tight constant in the Dvoretzky-Kiefer-Wolfowitz inequality. Ann.
Probab. 18, 1269-1283.
Nanjundiah, T. S. (1959). Note on Stirling’s formula. Amer. Math. Monthly 66, 701-703.
Okamoto, Masashi (1958). Some Inequalities Relating to the Partial Sum of Binomial
Probabilities. Ann. Inst. Statist. Math. 10, 29-35.
Rio, E. (1991). Local invariance principles and its application to density estimation.
Pr´epubl Math. Univ. Paris-Sud 91-71.
Rio, E. (1994). Local invariance principles and their application to density estimation.
Probab. Theory Related Fields 98, 21-45. | https://ocw.mit.edu/courses/18-465-topics-in-statistics-nonparametrics-and-robustness-spring-2005/0a472bc75921bd9a7a12c37bb261d572_bretagn_massart.pdf |
and their application to density estimation.
Probab. Theory Related Fields 98, 21-45.
Shorack, G., and Wellner, J. A. (1986). Empirical Processes with Applications to Statistics.
Wiley, New York.
Whittaker, E. T., and Watson, G. N. (1927). Modern Analysis, 4th ed., Cambridge Univ.
Press, Repr. 1962.
15.093 Optimization Methods
Lecture 3: The Simplex Method
1 Outline (Slide 1)

- Reduced costs
- Optimality conditions
- Improving the cost
- Unboundedness
- The Simplex algorithm
- The Simplex algorithm on degenerate problems
2 Matrix View (Slide 2)

min c′x
s.t. Ax = b
     x ≥ 0

x = (x_B, x_N), where x_B are the basic variables and x_N the non-basic variables, and A = [B, N]. Then

Ax = b  ⟹  B x_B + N x_N = b
        ⟹  x_B + B⁻¹N x_N = B⁻¹b
        ⟹  x_B = B⁻¹b − B⁻¹N x_N
2.1 Reduced Costs (Slide 3)

z = c_B′ x_B + c_N′ x_N
  = c_B′ (B⁻¹b − B⁻¹N x_N) + c_N′ x_N
  = c_B′ B⁻¹b + (c_N′ − c_B′ B⁻¹N) x_N

c̄_j = c_j − c_B′ B⁻¹ A_j   (reduced cost of variable j)

(Slide 4)
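These formulas translate directly into code. The sketch below is mine, with a small made-up LP in equality form (not an instance from the slides): it computes x_B = B⁻¹b and the reduced costs c̄ = c − c_B′B⁻¹A for the slack basis.

```python
import numpy as np

# min c'x  s.t. Ax = b, x >= 0 -- a small illustrative instance.
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])

basic = [2, 3]                  # 0-based indices of the slack columns
B = A[:, basic]
x_B = np.linalg.solve(B, b)     # x_B = B^{-1} b  (basic feasible solution)

# Reduced costs c_bar = c - c_B' B^{-1} A, via the dual-style vector
# y' = c_B' B^{-1}, i.e. solving B' y = c_B instead of inverting B.
y = np.linalg.solve(B.T, c[basic])
c_bar = c - y @ A

assert np.allclose(x_B, [4.0, 6.0])
assert np.allclose(c_bar[basic], 0.0)   # reduced costs of basic vars vanish
```

Solving with `B.T` rather than forming B⁻¹ explicitly is the usual numerically preferable choice.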
2.2 Optimality Conditions

Theorem:
- x BFS associated with basis B
- c̄ reduced costs

Then
- If c̄ ≥ 0 ⟹ x optimal
- x optimal and non-degenerate ⟹ c̄ ≥ 0
2.3 Proof

- y arbitrary feasible solution
- d = y − x ⟹ Ax = Ay = b ⟹ Ad = 0
  ⟹ B d_B + Σ_{i∈N} A_i d_i = 0
  ⟹ d_B = − Σ_{i∈N} B⁻¹ A_i d_i
  ⟹ c′d = c_B′ d_B + Σ_{i∈N} c_i d_i = Σ_{i∈N} (c_i − c_B′ B⁻¹ A_i) d_i = Σ_{i∈N} c̄_i d_i ≥ 0,

since d_i = y_i ≥ 0 and c̄_i ≥ 0 for i ∈ N; hence c′y ≥ c′x and x is optimal.
θ* = min{2, 1, 4} = 1   (each ratio is x_B(i)/(−d_B(i)) with d_B(i) = −1),

l = 6 (A_6 exits the basis).

New solution (Slide 16):
y = (1, 0, 3, 0, 1, 0, 3)

New basis B = (A_1, A_3, A_5, A_7) (Slide 17):

B =
| 1 1 0 0 |
| 1 0 1 0 |
| 0 1 0 0 |
| 0 1 0 1 |

B⁻¹ =
|  1 0 −1 0 |
|  0 0  1 0 |
| −1 1  1 0 |
|  0 0 −1 1 |

c̄ = c − c_B′ B⁻¹ A = (0, 4, 0, −1, 0, 3, 0)

Need to continue; column A_4 enters the basis.
6 Correctness (Slide 18)

θ* = min_{i=1,…,m : d_B(i)<0} ( − x_B(i)/d_B(i) ) = − x_B(l)/d_B(l)

Theorem
- B̄ = { A_B(i) : i ≠ l } ∪ { A_j } is a basis
- y = x + θ* d is a BFS associated with basis B̄.
7 The Simplex Algorithm (Slides 19-20)

1. Start with basis B = [A_B(1), …, A_B(m)] and a BFS x.
2. Compute c̄_j = c_j − c_B′ B⁻¹ A_j.
   - If c̄_j ≥ 0 for all j, x optimal; stop.
   - Else select j with c̄_j < 0.
3. Compute u = −d_B = B⁻¹ A_j.
   - If u ≤ 0 ⟹ cost unbounded; stop.
4. Else θ* = min_{1≤i≤m : u_i>0} x_B(i)/u_i = x_B(l)/u_l.
5. Form a new basis by replacing A_B(l) with A_j.
6. Set y_j = θ* and y_B(i) = x_B(i) − θ* u_i.
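The steps above assemble into a compact revised-simplex loop. The sketch below is mine, not the lecture's code; it uses a smallest-subscript entering rule and is tried on a small made-up LP.

```python
import numpy as np

def simplex(A, b, c, basic):
    """Primal simplex for min c'x s.t. Ax = b, x >= 0, from a feasible basis.
    Returns (x, basic) at an optimum, or None if the cost is unbounded.
    Entering variable: smallest subscript with negative reduced cost."""
    m, n = A.shape
    while True:
        B = A[:, basic]
        x_B = np.linalg.solve(B, b)                  # current BFS values
        y = np.linalg.solve(B.T, c[basic])           # y' = c_B' B^{-1}
        c_bar = c - y @ A                            # reduced costs
        j = next((j for j in range(n) if c_bar[j] < -1e-9), None)
        if j is None:                                # c_bar >= 0: optimal
            x = np.zeros(n)
            x[basic] = x_B
            return x, basic
        u = np.linalg.solve(B, A[:, j])              # u = B^{-1} A_j
        if np.all(u <= 1e-9):
            return None                              # cost unbounded below
        theta, l = min((x_B[i] / u[i], i) for i in range(m) if u[i] > 1e-9)
        basic[l] = j                                 # A_B(l) exits, A_j enters

# Made-up instance: min -3x1 - 2x2 with x1 + x2 <= 4, 2x1 + x2 <= 6,
# in equality form with slacks x3, x4; start from the slack basis
# (0-based column indices [2, 3]).
A = np.array([[1.0, 1.0, 1.0, 0.0],
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([-3.0, -2.0, 0.0, 0.0])
x, basis = simplex(A, b, c, basic=[2, 3])
assert np.allclose(x, [2.0, 2.0, 0.0, 0.0]) and abs(c @ x + 10.0) < 1e-9
```

For this instance the optimum x = (2, 2, 0, 0) with cost −10 is reached in two pivots from the slack basis.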
7.1 Finite Convergence (Slide 21)

Theorem: If
- P = {x | Ax = b, x ≥ 0} ≠ ∅, and
- every BFS is non-degenerate,

then
- the Simplex method terminates after a finite number of iterations;
- at termination, we have an optimal basis B, or we have a direction d with Ad = 0, d ≥ 0, c′d < 0, and the optimal cost is −∞.
7.2 Degenerate problems (Slide 22)

- θ* can equal zero (why?) ⟹ y = x, although B̄ ≠ B.
- Even if θ* > 0, there might be a tie in
  min_{1≤i≤m : u_i>0} x_B(i)/u_i ⟹ next BFS degenerate.
- Finite termination not guaranteed; cycling is possible.
7.3 Avoiding Cycling (Slide 23)

- Cycling can be avoided by carefully selecting which variables enter and exit the basis.
- Example (Bland's rule): among all variables with c̄_j < 0, pick the one with the smallest subscript; among all variables eligible to exit the basis, pick the one with the smallest subscript.
MIT OpenCourseWare
http://ocw.mit.edu
15.093J / 6.255J Optimization Methods
Fall 2009
For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
3. The rational Cherednik algebra
3.1. Definition and examples. Above we have made essential use of the commutation
relations between operators x ∈ h∗, g ∈ G, and Da, a ∈ h. This makes it natural to consider
the algebra generated by these operators.
Definition 3.1. The rational Cherednik algebra associated to (G, h) is the algebra Hc(G, h) generated inside A = Rees(CG ⋉ D(h_reg)) by the elements x ∈ h∗, g ∈ G, and Da(c, ℏ), a ∈ h. If t ∈ C, then the algebra Ht,c(G, h) is the specialization of Hc(G, h) at ℏ = t.

Proposition 3.2. The algebra Hc is the quotient of the algebra CG ⋉ T(h ⊕ h∗)[ℏ] (where T denotes the tensor algebra) by the ideal generated by the relations
[x, x′] = 0,   [y, y′] = 0,   [y, x] = ℏ(y, x) − Σ_{s∈S} c_s (y, α_s)(x, α_s^∨) s,

where x, x′ ∈ h∗ and y, y′ ∈ h.
Proof. Let us denote the algebra defined in the proposition by H′_c = H′_c(G, h). Then according to the results of the previous sections, we have a surjective homomorphism φ : H′_c → Hc defined by the formula φ(x) = x, φ(g) = g, φ(y) = Dy(c, ℏ).

Let us show that this homomorphism is injective. For this purpose assume that y_i is a basis of h, and x_i is the dual basis of h∗. Then it is clear from the relations of H′_c that H′_c is spanned over C[ℏ] by the elements

g ∏_{i=1}^{r} y_i^{m_i} ∏_{i=1}^{r} x_i^{n_i}.    (3.1)

Thus it remains to show that the images of the elements (3.1) under the map φ, i.e. the elements

g ∏_{i=1}^{r} D_{y_i}(c, ℏ)^{m_i} ∏_{i=1}^{r} x_i^{n_i},

are linearly independent. But this follows from the obvious fact that the symbols of these elements in CG ⋉ C[h∗ × h_reg][ℏ] are linearly independent. The proposition is proved. □
Remark 3.3. 1. Similarly, one can define the universal algebra H(G, h), in which both ℏ and c are variables. (So this is an algebra over C[ℏ, c].) It has two equivalent definitions similar to the above.
2. It is more convenient to work with algebras defined by generators and relations than
with subalgebras of a given algebra generated by a given set of elements. Therefore, from
now on we will use the statement of Proposition 3.2 as a definition of the rational Cherednik
algebra Hc. According to Proposition 3.2, this algebra comes with a natural embedding
Θc : Hc ↪ Rees(CG ⋉ D(h_reg)), defined by the formula x ↦ x, g ↦ g, y ↦ Dy(c, ℏ). This embedding is called the Dunkl operator embedding.
Example 3.4. 1. Let G = Z₂, h = C. In this case c reduces to one parameter, and the algebra Ht,c is generated by elements x, y, s with defining relations

s² = 1,   sx = −xs,   sy = −ys,   [y, x] = t − 2cs.
2. Let G = S_n, h = Cⁿ. In this case there is also only one complex parameter c, and the algebra Ht,c is the quotient of S_n ⋉ C⟨x₁, …, x_n, y₁, …, y_n⟩ by the relations

[x_i, x_j] = [y_i, y_j] = 0,   [y_i, x_j] = c s_ij (i ≠ j),   [y_i, x_i] = t − c Σ_{j≠i} s_ij.

Here C⟨E⟩ denotes the free algebra on a set E, and s_ij is the transposition of i and j.
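For G = Z₂ the relation [y, x] = t − 2cs can be checked concretely in the polynomial representation, where s acts by x ↦ −x, x by multiplication, and y by the Dunkl operator t∂_x − (c/x)(1 − s) (one standard normalization). The sketch below is mine; operators act on coefficient lists in the monomial basis x^n.

```python
t, c = 1.0, 0.7   # arbitrary illustrative parameter values

# A polynomial in C[x] is a coefficient list: p[n] = coefficient of x^n.
def op_x(p):                       # multiplication by x
    return [0.0] + p

def op_s(p):                       # (s.p)(x) = p(-x)
    return [(-1) ** n * a for n, a in enumerate(p)]

def op_y(p):                       # Dunkl operator  t p'(x) - (c/x)(p - s.p)
    # On monomials: y.x^n = (t n - c(1 - (-1)^n)) x^{n-1}, and y.1 = 0.
    return [p[n] * (t * n - c * (1 - (-1) ** n)) for n in range(1, len(p))]

def eq(p, q, L=12):                # compare coefficient lists up to padding
    p = p + [0.0] * (L - len(p))
    q = q + [0.0] * (L - len(q))
    return all(abs(a - b) < 1e-12 for a, b in zip(p, q))

for n in range(6):                 # check [y, x] = t - 2 c s on x^0 .. x^5
    p = [0.0] * n + [1.0]
    lhs_1 = op_y(op_x(p))          # y x . p
    lhs_2 = op_x(op_y(p))          # x y . p
    lhs = [a - b for a, b in zip(lhs_1 + [0.0] * 12, lhs_2 + [0.0] * 12)]
    rhs = [t * a - 2 * c * b for a, b in zip(p, op_s(p))]
    assert eq(lhs, rhs)
```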
3.2. The PBW theorem for the rational Cherednik algebra. Let us put a filtration
on Hc by setting deg y = 1 for y ∈ h and deg x = deg g = 0 for x ∈ h∗, g ∈ G. Let gr(Hc)
denote the associated graded algebra of Hc under this filtration, and similarly for Ht,c. We
have a natural surjective homomorphism
j=i
For t ∈ C, it specializes to surjective homomorphisms
ξ : CG � C[h ⊕ h∗][�] → gr(Hc).
ξt : CG � C[h ⊕ h∗] → gr(Ht,c).
Proposition 3.5 (The PBW theorem for rational Cherednik algebras). The maps ξ and ξt
are isomorphisms.
Proof. The statement is equivalent to the claim that the elements (3.1) are a basis of Ht,c, which follows from the proof of Proposition 3.2. □
Remark 3.6. 1. We have H0,0 = CG ⋉ C[h ⊕ h∗] and H1,0 = CG ⋉ D(h).

2. For any λ ∈ C∗, the algebra Ht,c is naturally isomorphic to Hλt,λc.

3. The Dunkl operator embedding Θc specializes to embeddings

Θ0,c : H0,c ↪ CG ⋉ C[h∗ × h_reg],   given by x ↦ x, g ↦ g, y ↦ D⁰_a,

and

Θ1,c : H1,c ↪ CG ⋉ D(h_reg),   given by x ↦ x, g ↦ g, y ↦ D_a.

So H0,c is generated by x, g, D⁰_a, and H1,c is generated by x, g, D_a.
Since Dunkl operators map polynomials to polynomials, the map Θ1,c defines a representation of H1,c on C[h]. This representation is called the polynomial representation of H1,c.

3.3. The spherical subalgebra. Let e ∈ CG be the symmetrizer, e = |G|⁻¹ Σ_{g∈G} g. We have e² = e.
Definition 3.7. Bc := eHce is called the spherical subalgebra of Hc. The spherical subalgebra of Ht,c is Bt,c := Bc/(ℏ − t) = eHt,ce.
Note that

e (CG ⋉ D(h_reg)) e = D(h_reg)^G,   e (CG ⋉ C[h_reg × h∗]) e = C[h_reg × h∗]^G.

Therefore, the restriction gives the embeddings Θ1,c : B1,c ↪ D(h_reg)^G and Θ0,c : B0,c ↪ C[h∗ × h_reg]^G. In particular, we have

Proposition 3.8. The spherical subalgebra B0,c is commutative and does not have zero divisors. Also B0,c is finitely generated.

Proof. The first statement is clear from the above. The second statement follows from the fact that gr(B0,c) = B0,0 = C[h × h∗]^G, which is finitely generated by Hilbert's theorem. □

Corollary 3.9. Mc = Spec B0,c is an irreducible affine algebraic variety.

Proof. Directly from the definition and the proposition. □
We also obtain
Proposition 3.10. Bc is a flat quantization (non-commutative deformation) of B0,c over C[ℏ].
So B0,c carries a Poisson bracket {·, ·} (thus Mc is a Poisson variety), and Bc is a quantization of the Poisson bracket, i.e. if a, b ∈ Bc and a0, b0 are the corresponding elements in B0,c, then

[a, b]/ℏ ≡ {a0, b0} (mod ℏ).
Definition 3.11. The Poisson variety Mc is called the Calogero-Moser space of G, h with
parameter c.
3.4. The localization lemma. Let H^loc_{t,c} = Ht,c[δ⁻¹] be the localization of Ht,c as a module over C[h] with respect to the discriminant δ (a polynomial vanishing to the first order on each reflection plane). Define also B^loc_{t,c} = e H^loc_{t,c} e.

Proposition 3.12.
(i) For t ≠ 0 the map Θt,c induces an isomorphism of algebras H^loc_{t,c} → CG ⋉ D(h_reg), which restricts to an isomorphism B^loc_{t,c} → D(h_reg)^G.
(ii) The map Θ0,c induces an isomorphism of algebras H^loc_{0,c} → CG ⋉ C[h∗ × h_reg], which restricts to an isomorphism B^loc_{0,c} → C[h∗ × h_reg]^G.

Proof. This follows immediately from the fact that the Dunkl operators have poles only on the reflection hyperplanes. □
Since gr(B0,c) = B0,0 = C[h∗ ⊕ h]^G, we get the following geometric corollary.

Corollary 3.13.
(i) The family of Poisson varieties Mc is a flat deformation of the Poisson variety M0 := (h × h∗)/G. In particular, Mc is smooth outside of a subset of codimension 2.
(ii) We have a natural map βc : Mc → h/G, such that βc⁻¹(h_reg/G) is isomorphic to (h_reg × h∗)/G. The Poisson structure on Mc is obtained by extension of the symplectic Poisson structure on (h_reg × h∗)/G.
Example 3.14. Let W = Z₂, h = C. Then B0,c = ⟨x², xp, p² − c²/x²⟩. Let X := x², Z := xp and Y := p² − c²/x². Then Z² − XY = c². So Mc is isomorphic to the quadric Z² − XY = c² in the 3-dimensional space, and it is smooth for c ≠ 0.
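Since the generators commute at t = 0, the relation Z² − XY = c² can be sanity-checked by substituting numbers (a trivial check, mine):

```python
# X = x^2, Z = xp, Y = p^2 - c^2/x^2, so Z^2 - XY = x^2 p^2 - x^2 p^2 + c^2 = c^2.
for x, p, c in [(1.3, -0.4, 0.9), (2.0, 5.0, 0.1)]:
    X, Z, Y = x * x, x * p, p * p - c * c / (x * x)
    assert abs(Z * Z - X * Y - c * c) < 1e-9
```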
3.5. Category O for rational Cherednik algebras. From the PBW theorem, we see that
H1,c = Sh∗ ⊗ CG ⊗ Sh. It is similar to the structure of the universal enveloping algebra of a
simple Lie algebra: U (g) = U (n−)⊗U (h)⊗U (n+). Namely, the subalgebra CG plays the role
of the Cartan subalgebra, and the subalgebras Sh∗ and Sh play the role of the positive and
negative nilpotent subalgebras. This similarity allows one to define and study the category
O analogous to the Bernstein-Gelfand-Gelfand category O for simple Lie algebras.
Definition 3.15. The category Oc(G, h) is the category of modules over H1,c(G, h) which
are finitely generated over Sh∗ and locally finite under Sh (i.e., for M ∈ Oc(G, h), ∀v ∈ M ,
(Sh)v is finite dimensional).
If M is a locally finite (Sh)G-module, then
M = ⊕_{λ∈h∗/G} M_λ,   where   M_λ = {v ∈ M | ∀p ∈ (Sh)^G, ∃N s.t. (p − λ(p))^N v = 0}

(notice that h∗/G = Specm (Sh)^G).
Proposition 3.16. Mλ are H1,c-submodules.
Proof. Note first that we have an isomorphism µ : H1,c(G, h) ∼ H1,c
by xa
Suppose P = P (x1, . . . , | https://ocw.mit.edu/courses/18-735-double-affine-hecke-algebras-in-representation-theory-combinatorics-geometry-and-mathematical-physics-fall-2009/0a639a83911449ff554a28b7772cb49f_MIT18_735F09_ch03.pdf |