id stringlengths 1 6 | url stringlengths 16 1.82k | content stringlengths 37 9.64M |
|---|---|---|
17200 | https://www.ck12.org/flexi/algebra-ii/average-and-instantaneous-rates-of-change/how-to-find-equations-of-tangent-lines/ | Flexi answers - How to find equations of tangent lines? | CK-12 Foundation
Subjects Explore
Donate
Sign InSign Up
All Subjects
Algebra II
Rate of Change
Question
How to find equations of tangent lines?
Flexi Says:
The equations of tangent lines to a curve can be found through calculus, specifically by using derivatives. Here is a general method on how to do it:
Steps:
The first step is to know the equation of the function. If for example, you have the equation y=f(x), then this is the function whose tangent line we are looking for.
Take the derivative of the function f(x) to get f′(x). This will give you the slope of the tangent line at any point x.
Substitute the x-value of the point at which the tangent line touches the curve into f′(x) to get the slope of the tangent line at that point.
We know that the equation of a line is given by y=m x+c, where m is the slope of the line and c is the y-intercept. In the case of the tangent line, we know the slope, m (it is f′(x)), and we know that the line passes through a point on the function, so we can use the point-slope form of the equation of line : y−y 1=m(x−x 1), where (x 1,y 1) is the point at which the tangent line touches the curve, and m is the slope of the tangent line in that point (which is f′(x 1)).
With the x-coordinate of the point of tangency and the slope of the tangent line, we can now find the equation of the tangent line.
For example, for a function f(x)=x 3 and you want to find the tangent line at the point where x=2. Taking the derivative of f(x), we get f′(x)=3 x 2. Plugging x=2 we find that the slope, m, is 12. The y-value at x=2 in the function is 2 3=8. So, point of tangency is (2,8). With all these information we can find the equation of the line to be y−8=12(x−2).
Analogy / Example
Try Asking:
How do you calculate the rate of change and initial value?How to find antiderivative of square root?The sides of a right triangle increase in such a way that dz/dt=1 and dx/dt=3dy/dt. At the instant when x=12 and y=5, what is the value of dy/dt? x and y are the sides, z is the hypotenuse.
How can Flexi help?
By messaging Flexi, you agree to our Terms and Privacy Policy |
17201 | https://byjus.com/maths/apollonius-theorem/ | In mathematics, theorems are the statements which have proven results based on the previously set statements, like theorems and generally confirmed statements like axioms. The theorems are known as the results which are proven to be accurate from the set of different axioms. This term is especially used in mathematics subject where these axioms are of numerical logic with the systems in the form of the question.
Statement and Proof of Apollonius’ Theorem
Medians are known to form the most important sets of the components in the geometry of triangles which are closely related to the triangle being independent of the geometric shapes. In Apollonius’ Theorem, the relation between the medians and the sides of the triangle are known. Apollonius’ theorem is a kind of theorem which relates to the length of the median of a triangle to the length of their sides.
Apollonius’ Theorem Statement
Statement- “the sum of squares of any of the two sides of a triangle equals to twice its square on half of the third side, along with the twice of its square on the median bisecting the third side”
If O is the midpoint of MN, one of the sides of the triangle (LMN), then prove that LN² + LM² = 2 {MO² + LO²}.
Apollonius’ Theorem Proof
Choose the origin of the rectangular form of the Cartesian coordinates at the point O and the x-axis coming along the sides MN and also OY as y – axis. If in case MN = 2a, then the coordinates of the points M, as well as N, are (a, 0) and (- a, 0) respectively. If coordinates of the point L are (b, c), then
LO² = (c – 0)² + (b – 0)² , (Since the coordinates of the point O are {0, 0})
= c² + b²;
LM² = (c – 0)² + (b + a) ² = c² + (a + b)²
MO² = (0 – 0)² + (- a – 0)² = a²
also, LN² = (c – 0) ² + (b – a) ² = c² + (a – b)²
Therefore, LN² + LM² = c² + (a + b) ² + c² + (b – a)²
= 2c² + 2 (a² + b²)
= 2(b² + c²) + 2a²
= 2LO² + 2MO²
= 2 (LO² + MO²).
= 2(MO² + LO²). {Hence Proved}
1,56,667
Comments
Leave a Comment Cancel reply
Register with BYJU'S & Download Free PDFs
Register with BYJU'S & Watch Live Videos |
17202 | https://math.stackexchange.com/questions/3318328/simplifying-the-boolean-expression-abc-abc-abc-abc | Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Simplifying the Boolean expression $A'BC' + AB'C + ABC' + ABC$
Ask Question
Asked
Modified 6 years, 1 month ago
Viewed 9k times
3
$\begingroup$
I have this Boolean to simplify.
$\bar AB\bar C + A\bar BC + AB\bar C + ABC$
I checked the final answer here. It gives the final simplification is $B\bar C + AC$
but I am confused here :
$\bar AB\bar C + A\bar BC + AB\bar C + ABC$
$\bar AB\bar C + A\bar BC + AB(\bar C+C)$
$\bar AB\bar C + A\bar BC + AB$
$\bar AB\bar C + A(\bar BC + B)$
$\bar AB\bar C + A(C + B)$
$\bar AB\bar C + AC + AB$
$B(\bar A\bar C + A) + AC$ (1st and 3rd considered)
$B(\bar C + A) + AC$
$B\bar C + AB + AC$
Here, How can I go further to simplify $B\bar C + AB + AC$ to $B\bar C + AC$
How to remove $AB$ ?
boolean-algebra
Share
edited Aug 9, 2019 at 15:59
test teamtest team
asked Aug 9, 2019 at 14:14
test teamtest team
13311 silver badge55 bronze badges
$\endgroup$
5
$\begingroup$ Hint: AB = ABC + ABC' $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 14:22:17 +00:00
Commented Aug 9, 2019 at 14:22
$\begingroup$ Are you allowed to use Karnaugh maps? $\endgroup$
Adrian Keister
– Adrian Keister
2019-08-09 14:23:06 +00:00
Commented Aug 9, 2019 at 14:23
1
$\begingroup$ Welcome to MSE. For some basic information about writing mathematics at this site see, e.g., basic help on mathjax notation, mathjax tutorial and quick reference, main meta site math tutorial and equation editing how-to. $\endgroup$
José Carlos Santos
– José Carlos Santos
2019-08-09 14:24:55 +00:00
Commented Aug 9, 2019 at 14:24
1
$\begingroup$ Don't really see the need for MathJax here. In fact, I personally prefer monospaced font for this @JoséCarlosSantos $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 14:25:59 +00:00
Commented Aug 9, 2019 at 14:25
$\begingroup$ Also oof my hint is just reversing some steps lol $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 14:28:19 +00:00
Commented Aug 9, 2019 at 14:28
Add a comment |
3 Answers 3
Reset to default
6
$\begingroup$
I suggest you swap $AB'C$ and $ABC'$ before simplifying:
$$\begin{align}&A'BC' + ABC' + AB'C + ABC \=& (A'+A)BC' + (B'+B)AC \=& BC' + AC. \end{align}$$
Share
answered Aug 9, 2019 at 14:25
MagmaMagma
6,4401313 silver badges2525 bronze badges
$\endgroup$
1
$\begingroup$ Yeah! I also did this. but the above alternate is the thing $\endgroup$
test team
– test team
2019-08-09 15:03:06 +00:00
Commented Aug 9, 2019 at 15:03
Add a comment |
3
$\begingroup$
Since we have AB = ABC' + ABC, it follows that
BC' + AB + AC = BC' + ABC' + ABC + AC = BC' + AC
The Karnaugh map is given by
from which it is visually clear that AB is covered by the other two, which lends itself to showing you can split AB into two parts and combine them with AC and BC'.
Share
edited Aug 9, 2019 at 15:21
answered Aug 9, 2019 at 14:48
Simply Beautiful ArtSimply Beautiful Art
76.8k1313 gold badges134134 silver badges301301 bronze badges
$\endgroup$
10
$\begingroup$ AB = ABC' + ABC is ok. But How this BC' + ABC' + ABC + AC simplify to BC' + AC ? $\endgroup$
test team
– test team
2019-08-09 15:00:08 +00:00
Commented Aug 9, 2019 at 15:00
$\begingroup$ BC' + ABC' = ... and ABC + AC = ... $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 15:01:28 +00:00
Commented Aug 9, 2019 at 15:01
$\begingroup$ Ah! Ok. Nice. But this is simplification. Then how can I think to expand (AB = ABC' + ABC). In simplification we think go further to simplify. $\endgroup$
test team
– test team
2019-08-09 15:05:57 +00:00
Commented Aug 9, 2019 at 15:05
$\begingroup$ There are algorithms for performing these kinds of reductions. I personally find Karnaugh maps very useful for seeing the end solution and how things combine. I can extend my answer if you want. $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 15:09:58 +00:00
Commented Aug 9, 2019 at 15:09
$\begingroup$ Thank you. The K-Map explanation is nice. But how can we think in examination? We only think how to simplify go further and further, then we're not to expand. $\endgroup$
test team
– test team
2019-08-09 15:29:01 +00:00
Commented Aug 9, 2019 at 15:29
| Show 5 more comments
0
$\begingroup$
I learned Veitch diagrams back in the early 70's and I find them easier to understand than the Karnaugh maps that are popular today. Here is an example using your problem. You fill in the squares and then see what they add up to. Note, the light gray area is $AC$ and the slightly darker gray is $BC'$. Note also how $B'$ spans part of $A$ and the rest of $B$ is under $A'$ so $AB$ is eliminated.
Share
edited Aug 9, 2019 at 16:36
answered Aug 9, 2019 at 16:30
poetasispoetasis
6,89522 gold badges1515 silver badges4242 bronze badges
$\endgroup$
4
$\begingroup$ New to Veitch diagram. And I'm unable to understand "Note also how B² spans part of A and the rest of B is under A² so AB is eliminated." $\endgroup$
test team
– test team
2019-08-09 17:05:31 +00:00
Commented Aug 9, 2019 at 17:05
$\begingroup$ I've seen Karnaugh maps that look $exactly$ like Veitch diagrams but the way they try to teach is more like the answer from Simply Beautiful Art and not intuitive to me. I hope you can see in my diagram that each square is filled with something-$A$, something-$B$, and something-$C$ that $maps$ to one of the terms of the original Boolean expression. Then we find that different terms are adjoined in one or more rectangles that represent new terms simpler than the originals. You can do the same with straight Boolean Algebra such as $(A'+B')'=AB$ but the visual aspect is easier to $see$. $\endgroup$
poetasis
– poetasis
2019-08-09 17:22:12 +00:00
Commented Aug 9, 2019 at 17:22
$\begingroup$ Eh, personally this is less intuitive, since the BC' is made up of two disconnected sections, for example. The Karnaugh map avoids this by ensuring adjacent squares differ by 1 variable, so you never have to do this (with the exception of wrapping from one side of the K-map to the other). $\endgroup$
Simply Beautiful Art
– Simply Beautiful Art
2019-08-09 21:10:59 +00:00
Commented Aug 9, 2019 at 21:10
$\begingroup$ I guess I'm just used to Veitch diagrams because I learned them so well when I was so much younger. In any case, I trust the Karnaugh maps were helpful. $\endgroup$
poetasis
– poetasis
2019-08-10 09:50:09 +00:00
Commented Aug 10, 2019 at 9:50
Add a comment |
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
boolean-algebra
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
1 Simplifying Boolean Function with Karnaugh Maps
Simplify Boolean Expression ABC' + A'BC + A'B'C'
0 Simplifying a logic function using boolean algebra
3 How to simplify the Boolean function $A'B'C + A'BC' + ABC + AB'C'$?
1 Simplifying Boolean functions
2 How does A'BC' + AB'C' + ABC' simplify to (A + B)C'?
1 How to simplify the Boolean expression $A'BC + AB'C' + A'B'C' + AB'C + AB$?
1 Simplify the Boolean expression A'BC + A'BD' + AB' + AC + ABC + ACD
2 Simplifying $A'B'C'D+A'B'CD+A'BC'D'+A'BC'D+A'BCD'+AB'C'D'+AB'C'D+ABC'D'+ABCD'+ABCD$ to $A'B'D+A'C'D+AB'C'+ABC+BD'$ (and another)
2 Simplify Boolean Expression A'B'C + A'BC + AB'C
Hot Network Questions
Is it possible that heinous sins result in a hellish life as a person, NOT always animal birth?
Why include unadjusted estimates in a study when reporting adjusted estimates?
How do I disable shadow visibility in the EEVEE material settings in Blender versions 4.2 and above?
Separating trefoil knot on torus
I'm having a hard time intuiting throttle position to engine rpm consistency between gears -- why do cars behave in this observed way?
Who is the target audience of Netanyahu's speech at the United Nations?
What’s the usual way to apply for a Saudi business visa from the UAE?
My dissertation is wrong, but I already defended. How to remedy?
Can a GeoTIFF have 2 separate NoData values?
How to rsync a large file by comparing earlier versions on the sending end?
What were "milk bars" in 1920s Japan?
What meal can come next?
An odd question
Bypassing C64's PETSCII to screen code mapping
The geologic realities of a massive well out at Sea
What happens if you miss cruise ship deadline at private island?
How can I get Remote Desktop (RD) to scale properly AND set maximum windowed size?
What can be said?
Analog story - nuclear bombs used to neutralize global warming
Passengers on a flight vote on the destination, "It's democracy!"
Another way to draw RegionDifference of a cylinder and Cuboid
Are there any world leaders who are/were good at chess?
Lingering odor presumably from bad chicken
RTC battery and VCC switching circuit
more hot questions
Question feed |
17203 | https://www.pnas.org/doi/10.1073/pnas.1820256116 | Systematic evasion of the restriction-modification barrier in bacteria
Christopher D. Johnston johnston@fredhutch.org, Sean L. Cotton, Susan R. Rittling, +3 , Jacqueline R. Starr, Gary G. Borisy johnston@fredhutch.org, Floyd E. Dewhirst, and Katherine P. Lemon -3Authors Info & Affiliations
Contributed by Gary G. Borisy, April 9, 2019 (sent for review November 28, 2018; reviewed by James J. Collins and George A. O’Toole)
May 16, 2019
116 (23) 11454-11459
00
Metrics
Total Views22,081
Last 12 Months1,570
Total Citations69
Last 12 Months18
View all metrics
Track CitationsAdd to Reading List
PDF
Significance
Genetic engineering is a powerful approach for discovering fundamental aspects of bacterial physiology, metabolism, and pathogenesis as well as for harnessing the capabilities of bacteria for human use. However, the full power of genetic engineering can only be applied to a few model organisms. Biological diversity and strain-level variation in restriction-modification systems are critical barriers keeping most bacteria beyond the full potential of genetics. We have designed a systematic approach to effectively evade restriction-modification systems and successfully applied this approach to a clinically relevant USA300 strain of the human pathogen Staphylococcus aureus. Our results demonstrate the simplicity and effectiveness of this stealth-by-engineering approach, which could enable microbial genetic system design not restrained by innate restriction-modification defense mechanisms.
Abstract
Bacteria that are recalcitrant to genetic manipulation using modern in vitro techniques are termed genetically intractable. Genetic intractability is a fundamental barrier to progress that hinders basic, synthetic, and translational microbiology research and development beyond a few model organisms. The most common underlying causes of genetic intractability are restriction-modification (RM) systems, ubiquitous defense mechanisms against xenogeneic DNA that hinder the use of genetic approaches in the vast majority of bacteria and exhibit strain-level variation. Here, we describe a systematic approach to overcome RM systems. Our approach was inspired by a simple hypothesis: if a synthetic piece of DNA lacks the highly specific target recognition motifs for a host’s RM systems, then it is invisible to these systems and will not be degraded during artificial transformation. Accordingly, in this process, we determine the genome and methylome of an individual bacterial strain and use this information to define the bacterium’s RM target motifs. We then synonymously eliminate RM targets from the nucleotide sequence of a genetic tool in silico, synthesize an RM-silent “SyngenicDNA” tool, and propagate the tool as minicircle plasmids, termed SyMPL (SyngenicDNA Minicircle Plasmid) tools, before transformation. In a proof-of-principle of our approach, we demonstrate a profound improvement (five orders of magnitude) in the transformation of a clinically relevant USA300 strain of Staphylococcus aureus. This stealth-by-engineering SyngenicDNA approach is effective, flexible, and we expect in future applications could enable microbial genetics free of the restraints of restriction-modification barriers.
Sign up for PNAS alerts.
Get alerts for new articles, or get an alert when an article is cited.
Manage alerts
Genetic engineering is a powerful approach for harnessing bacterial abilities and for discovering fundamental aspects of bacterial function. In recent years, the genetic toolkit at our disposal has massively expanded (1, 2). The application of these tools is largely limited to bacterial strains with high transformation efficiency (3). However, relative to the wealth and diversity of known bacterial species, there are currently only a small number of such highly genetically tractable strains. A strain that is not amenable to alterations of its genome or to the introduction of new genetic information during genetic engineering is termed genetically intractable. At present, genetic intractability is a pervasive and widespread problem across all fields of microbiology; the vast majority of bacteria that can be grown in a laboratory remain beyond the power of genetics for elucidating function or engineering for human use. Even within species that are genetically tractable, this tractability is often restricted to a small number of domesticated strains, while new primary isolates of the species with disparate phenotypic traits of interest are either poorly tractable or currently intractable. As a result, researchers have had to engage in expensive generation of ad hoc genetic systems for each distinct species, often with further laborious modifications for each distinct wild strain isolate.
In their natural environment, bacteria acquire new genetic information through horizontal gene transfer (HGT) by three distinct means: conjugation, transduction, and transformation. During conjugation, DNA is transferred from one organism to another by direct cell-to-cell contact. During transduction, DNA is carried by bacteriophages, viruses that invade by injecting DNA into host bacterial cells. These two processes involve multifaceted interactions requiring complex machinery and therefore are of limited value in modern bacterial genetics where DNA should ideally be easily and rapidly transferable into any given bacterial strain (4). During transformation, however, naked DNA is directly acquired and incorporated into the host genome by recombination with homologous sequences or, in the case of plasmids, by establishing a new episome (extrachromosomal DNA that replicates autonomously), resulting in genetic alteration of the cell. Genetic competence is the cellular state that enables bacteria to undergo natural transformation, a transient “window of opportunity” for DNA internalization (5). However, while there are over 6,600 validated cultured type strains of bacterial species (6) and ∼30,000 formally named species in pure culture (7), natural transformation and competence have been observed in only a handful of ∼80 bacterial species (5). This may even be an overestimation, as in several cases only a single report documents transformation, and molecular evidence of natural transformation is lacking (5). For all remaining cultivated bacterial species that are of interest, microbiologists must instead develop “artificial” transformation and individualized genetic systems, often at the strain level: a process continually stymied by genetically intractable phenotypes.
Restriction-modification (RM) systems are the most common underlying cause of genetic intractability in bacterial species. Found in ∼90% of sequenced bacterial genomes, RM systems enable bacteria to distinguish self- from nonself-DNA via two enzymatic activities: a restriction endonuclease (REase) and a modification methyltransferase (MTase). The REase recognizes the methylation status of DNA at a highly specific DNA target sequence and degrades unmethylated or inappropriately methylated (i.e., nonself) targets. Its cognate MTase protects the same target sequence across the host’s genome via addition of a methyl group, marking each site as self. RM systems originally evolved as defense mechanisms against invading xenogeneic DNA (8), which is primarily encountered during bacteriophage infection. Consequently, most, if not all, of the currently available approaches to overcome them during genetic engineering are inspired by bacteriophage antirestriction mechanisms (9, 10). Bacteriophage mechanisms that involve methyl-modification of the phage genome to subvert the host’s RM activities have already been translated into in vitro engineering approaches (9, 10). These can all be referred to as mimicry-by-methylation, as they essentially seek to modify the methylation pattern of a genetic tool to match the desired host and achieve molecular mimicry. There are two common mimicry-by-methylation approaches. (i) Methylate target sites on tools by using in vitro methylation with recombinant MTase enzymes (10), which are currently commercially available for only 37 of >450 known targets (11). (ii) Alternatively, achieve in vivo methylation by passaging a plasmid through a related strain that is either restriction enzyme-deficient (10) or a surrogate strain that has been extensively engineered to match the methylation profile of the strain of interest, i.e., the plasmid artificial modification (PAM) technique (12). Although these are very effective in some cases (13), owing to the labor-intensive and rigid nature of their underlying design they are often not readily adaptable to other strains due to RM system diversity (SI Appendix, Text S1) and, accordingly, are unsuitable for rapid application to a wide diversity of bacteria (10).
We therefore sought to design a versatile strategy to overcome RM barriers, one suitable for use in a broad range of bacterial species. The problem to be overcome is that in any given bacterial genus of interest the number of RM systems present and the target sequences recognized are hypervariable and highly species-specific, often even strain-specific (14). RM systems are also extremely diverse and can be differentiated into four types (type I, II, III, and IV), based on their recognized target and, also, subunit composition, cleavage position, cofactor requirements, and substrate specificity (15). Additionally, RM target motifs themselves vary greatly in sequence and length, ranging from 4 to 18 base pairs (bp), with >450 different motifs identified to date (11). It is clear, therefore, that a broadly applicable strategy to overcome RM barriers to genetic engineering will need to be adept at adjusting for RM system variation across different bacterial strains.
Importantly, all REase enzymes demonstrate exquisite specificity in target sequence recognition. This specificity is crucial, as REases are toxic to their hosts’ genome in the absence of their cognate MTases and, consequently, seldom deviate from their recognition sequence (15). In the context of bacterial genetic engineering, this is a critical weakness underpinning the effectiveness of all RM systems. To rapidly exploit the inherent weakness of high target specificity, we designed a stealth-based strategy to evade RM system activities entirely. Our approach was inspired by a simple hypothesis: if a piece of DNA lacks the highly specific target recognition motifs for a host’s RM systems, then it is invisible to these systems and will not be degraded upon artificial transformation. As RM defenses recognize genetic tools as xenogeneic DNA by virtue of the methylation status of highly specific target motifs (8), the systematic identification and elimination of such target motifs from the nucleotide sequence of a genetic tool should therefore facilitate the engineering of an artificial syngeneic DNA molecule that is RM-silent upon transformation. To succinctly encapsulate our approach, we coined the term “SyngenicDNA” (SI Appendix, Text S2).
One example of the tremendous effort, resources, and time it takes to expand genetic tractability is Staphylococcus aureus, a pathogen with significant relevance to public health, which accounts for over 10,000 deaths per year in the United States (16). Numerous papers describe mimicry-by-methylation approaches that seek to expand tractability to more clinically relevant strains, (e.g., refs. 17 and 18). Here, based on its public health importance, we selected S. aureus JE2, a derivative of the epidemic methicillin-resistant S. aureus (MRSA) USA300 LAC strain (19) to demonstrate proof-of-principle for our stealth-based approaches. We expect these approaches will be adopted by the broader microbiological community, enabling genetic system design no longer restrained by microbial restriction-modification defense mechanisms.
Results
Systematic Generation of SyngenicDNA-Based Genetic Tools.
There are four basic steps to produce SyngenicDNA-based genetic tools (Fig. 1): (i) target identification, (ii) in silico tool assembly, (iii) in silico sequence adaptation, and (iv) DNA synthesis and assembly. Target identification requires the delineation of each methylated site, with single-base resolution, across an entire bacterial genome (i.e., the methylome) and starts with single-molecule real-time (SMRT) genome and methylome sequencing (SMRTseq) (14). Using methylome data, we delineate each of the recognition motifs protected by the MTases of the host’s RM systems and infer the targets recognized and degraded by their cognate REases (SI Appendix, Text S3). This yields a concise list of a host microbes’ RM targets to be eliminated from the DNA sequence of a selected genetic tool.
Fig. 1.
In silico tool assembly requires complete annotation of a genetic tool’s sequence with respect to plasmid chassis, replication origins, antibiotic-resistance cassettes, promoters, repressors, terminators, and functional domains to avoid adverse changes to these structures during subsequent adaptation steps. Ideally, a complete and minimalistic genetic tool with previous demonstrable functionality in a genetically tractable strain is used for initial experiments, allowing for subsequent addition of DNA parts to increase functionality after successful transformation is achieved.
In silico sequence adaptation of the genetic tool is the most crucial step of the SyngenicDNA approach, and it is here where we exploit the intrinsic evolutionary weakness of high target-sequence specificity present in all RM systems. In this step, we first screen the complete nucleotide sequence of the genetic tool for the presence of RM targets identified by SMRTseq. We then recode the nucleotides of each RM target in silico to eliminate the target while preserving the functionality of the sequence. In noncoding regions, targets are removed, changing a single nucleotide (creating a SNP). In coding regions, the sequence of the target is removed using synonymous codon substitution. A single-nucleotide switch is generally sufficient to remove RM targets, but multiple switches can also be used. The preferential codon bias of the desired host is used to avoid introducing rare or unfavorable codons during the synonymous switch (SI Appendix, Text S4). Upon complete removal of all RM targets in silico, the recoded DNA sequence has been rendered RM-silent with respect to the host, termed SyngenicDNA, and is ready for de novo DNA synthesis.
Synthesis and assembly of RM-silent genetic tools is carried out using commercially available de novo DNA synthesis and standard assembly approaches, ensuring that any laboratory can construct SyngenicDNA tools. In commercial DNA synthesis, sequences are typically inserted into an Escherichia coli plasmid replicon and propagated to yield large amounts of the synthetic DNA. This E. coli replicon is convenient, but might include RM targets that could lead to degradation of the overall circular tool after transformation into the host strain. We have developed two solutions to this potential issue. One solution is to generate a SyngenicDNA E. coli plasmid backbone for each specific microbial host strain (Fig. 1B). However, in routine applications this will increase costs of SyngenicDNA synthesis, and, moreover, the E. coli replicon itself becomes redundant after propagation in E. coli, as it is typically nonfunctional in other bacterial species after transformation. Our alternative solution, therefore, is to remove the E. coli replicon entirely, using minicircle DNA technology, rather than recode it. This approach also increases flexibility because the same E. coli replicon can be used to generate tools for multiple different microbial strains.
SyngenicDNA Minicircle Plasmid Tools.
Minicircles (MCs) are minimalistic circular expression cassettes devoid of a plasmid backbone (20), which are primarily used in gene therapy applications to drive stable expression of transgenes in eukaryotic hosts. MCs are produced by attaching a parental plasmid (PP) to a transgene cassette, cultivating this construct in an E. coli host grown to high-cell density, inducing construct recombination to form an isolated transgene MC and a separate, automatically degraded, PP containing the E. coli replicon. MCs are then isolated by standard plasmid methods (20). Because any DNA sequence can take the place of the transgene, we hypothesized that MC technology could be repurposed to carry entire microbial plasmids and facilitate the removal of superfluous E. coli replicons from shuttle vectors. We demonstrated that the incorporation of SyngenicDNA sequences into a PP allowed us to create SyngenicDNA Minicircle Plasmid (SyMPL, pronounced “simple”) tools (SI Appendix, Fig. S1). SyMPL tools include replication, selection, and functional domains for operation in a specific non-E. coli host, but lack an E. coli replicon despite being isolated at high concentrations from the MC-producing E. coli strain. In our SyMPL strategy, we attach a synthesized (and assembled) SyngenicDNA tool to the nonSyngenicDNA E. coli PP and propagate this construct in an MC-producing E. coli strain. The induction of MCs via recombination, with concurrent induction of a specific endonuclease that eliminates the PP, allows for easy isolation of a minimalistic SyngenicDNA-based genetic tool ready to transform into the desired host strain (SI Appendix, Fig. S1C).
The majority of laboratory E. coli strains, including the MC-producing E. coli host used in this study, contain three active MTases (Dam, Dcm, and HsdM) that introduce methylation modifications to specific target sites on the host genome (SI Appendix, Fig. S2). The Dam MTase modifies the adenine residue (m6A) within the sequence GATC, the Dcm MTase modifies the internal cytosine residue (m5C) of the sequence CCWGG (where W is A or T), and the HsdM MTase modifies the internal adenine residue (m6A) of the sequence AACN6GTGC. Therefore, plasmid tools propagated within such E. coli strains, including the minicircle (MC) producing strain (ZYCY10P3S2T), are modified at these target sequences.
The presence of methylated sites on SyngenicDNA-based tools could activate type IV RM systems upon artificial transformation. Generally, unintentional activation of methyl-targeting type IV systems is avoided by the propagation of plasmids within methyl-deficient E. coli strains such as JM110 (dam-, dcm-, hsdRMS+) or ER2796 (dam-, dcm-, hsdRMS-), thus preventing recognition and degradation via these systems. However, such methyl-free E. coli strains are unable to produce MCs since construction of the E. coli MC-producing strain (20) required complex engineering to stably express a set of inducible minicircle-assembly enzymes (the øC31-integrase and the I-SceI homing-endonuclease for induction of MC formation and degradation of the parental plasmid replicon, respectively).
Accordingly, when we repurposed MC technology for bacterial applications, it was also necessary to engineer E. coli MC-producer strains that generate various forms of methylation-free MCs (SI Appendix, Figs. S3–S5). Although a completely methylation-free MC producer could be required when working against type IV systems targeting both adenine- and cytosine-methylated DNA, bacterial RM systems exist with targets that specifically match the E. coli Dam MTase motif (GATC), such as Pin25611FII in Prevotella intermedia (14). These systems digest unmethylated Dam sites on genetic tools propagated within methyl-free strains; hence, Dam methylation is protective in these cases. Therefore, we created a suite of E. coli strains capable of producing distinct types of methyl-free MC DNA to account for the inherent variation of RM systems in bacteria and maximize the applicability of our SyMPL approach. We applied iterative CRISPR-Cas9 genome editing to sequentially delete MTase genes from the original E. coli MC-producer strain (dam+, dcm+, hsdM+) (SI Appendix, Fig. S4). These strains produce methylcytosine-free MC DNA (E. coli JMC1; dam+, dcm-, hsdM+), methylcytosine- and methyladenine-free MC DNA except for Dam methylation (E. coli JMC2; dam+, dcm-, hsdM-), and completely methyl-free MC DNA (E. coli JMC3; dam-, dcm-, hsdM-). Depending upon the type IV RM systems identified within a desired bacterial host, one of these strains can be selected and utilized for production of SyMPL tools.
Application of SyngenicDNA and SyMPL Approaches to a Bacterial Pathogen.
RM systems are a known critical barrier to genetic engineering in most strains of Staphylococcus aureus (21). Based on its public health importance, we selected S. aureus JE2, a derivative of the epidemic methicillin-resistant S. aureus (MRSA) USA300 LAC strain (19), to demonstrate the efficacy of our stealth-by-engineering approaches. First, we determined the methylome of JE2 using SMRT sequencing and identified this strain’s RM targets. SMRTseq and REBASE analysis of JE2 confirmed the presence of two type-I RM systems recognizing the bipartite target sequences AGGN5GAT and CCAYN6TGT (the modified base within each motif is underlined, and n = any base) (SI Appendix, Table S1) and a type-IV system, previously shown to target cytosine methylation within the sequence SCNGS (where S = C or G) (21).
We then applied our SyngenicDNA approach to the E. coli–S. aureus shuttle vector pEPSA5 (Fig. 2 A and B). The pEPSA5 plasmid (SI Appendix, Text S4 and Fig. S1) contains a 2.5-kb E. coli replicon (ampicillin-resistance gene with a p15a origin for autonomous replication) and a 4.3-kb S. aureus replicon (chloramphenicol-resistance gene, pC194-derived origin, and a xylose repressor protein gene, xylR) (SI Appendix, Fig. S6A). The S. aureus replicon is nonfunctional when pEPSA5 is maintained and propagated within E. coli, and vice versa. Therefore, we modified S. aureus JE2 RM targets occurring within the coding region of the pEPSA5 E. coli replicon with synonymous substitutions adhering to E. coli codon bias. We synthesized, assembled, and propagated pEPSA5SynJE2 (Fig. 2C), a variant of pEPSA5 that differed by only six nucleotides (99.91% identical at nucleotide level), eliminating three RM target motifs present in the original sequence. We demonstrated an ∼70,000-fold (P = 7.76 × 10−306) increase in transformation efficiency (cfu/µg DNA), using the entirely RM-silent pEPSA5SynJE2Dcm- (propagated in dcm- E. coli), compared with the original pEPSA5 plasmid (propagated in dcm+ E. coli) (Fig. 2D and SI Appendix, Text S5).
Fig. 2.
Subsequently, we sought to determine whether a further increase in transformation efficiency could be achieved using the SyMPL (minicircle) approach. We used the dcm- strains E. coli ER2796 and E. coli JMC1 to carry out the minicircle (MC) experiments independently of the type IV system in S. aureus JE2. We generated a SyngenicDNA pEPSA5 minicircle for JE2 (pEPSA5SynJE2MC); 38% smaller than pEPSA5 and free of the original E. coli replicon (Fig. 3A and SI Appendix, Fig. S7).
Fig. 3.
Most of the S. aureus JE2 RM system targets present on pEPSA5 are in the E. coli replicon (type I: n = 2, and type IV: n = 8) with only a single type I target in the S. aureus replicon (SI Appendix, Fig. S6A); thus the MC approach eliminates two of three type I targets. We investigated (i) whether the SyMPL approach achieves equal or perhaps even greater efficiency than the SyngenicDNA approach and (ii) whether removal of all type I targets is required to achieve appreciable gains in transformation efficiency (compared with a partially SyngenicDNA plasmid that has a single type I target remaining). The original plasmid pEPSA5 (Dcm+) was included in experiments only as a control for accurate final comparison of efficiencies and was not considered a primary comparison. The pEPSA5SynJE2MC variant achieved ∼2 × 107 transformants per microgram DNA, a further 3.5-fold increase (P = 1.78 × 10−9) in transformation efficiency over pEPSA5SynJE2 and a >100,000-fold increase (P = 1.97 × 10−284) compared with the original unmodified pEPSA5 plasmid (propagated in dcm+ E. coli) (Fig. 3B and SI Appendix, Tables S2 and S3).
In SyMPL experiments, by reducing the overall size of MC plasmids, we also increased the number of S. aureus replicons present within the micrograms of DNA used for transformations compared with the micrograms used for full-length plasmids. Increasing the yield of functional replicons per microgram of DNA might be an additional advantage of the MC approach. Thus, to more accurately compare transformation efficiencies between MCs and full-length plasmids, we performed a secondary analysis to adjust the transformation efficiencies from cfu/µg DNA to cfu/pmol DNA (Fig. 3C and SI Appendix, Table S4). On a cfu/pmol DNA basis, the MC variant pEPSA5MCDcm- achieved a 436-fold increase in transformation efficiency over the original pEPSA5Dcm- (P ≤ 1.0 × 10−306). The increase could be due to the elimination of the two type I motifs along with the E. coli replicon in the MC variant (SI Appendix, Fig. S7), or the smaller MCs passing more readily through reversible pores formed in the S. aureus cell envelope during electroporation, or a combination of both. The relatively small 2.3-fold (P = 1.29 × 10−4) increase in transformation efficiency achieved by MC variant pEPSA5SynJE2MC over the plasmid pEPSA5SynJE2, both of which are completely RM-silent in JE2, favors the first possibility. In contrast, pEPSA5MC and pEPSA5SynJE2MC differed only by the presence or absence of a single type I target, respectively (Fig. 3A). Eliminating this single-target sequence resulted in a modest 1.5-fold (P = 1.01−14) increase in transformation efficiency. Importantly, this suggests that in future applications of the SyngenicDNA approach, if a single target exists in an unadaptable region of DNA, such as an origin of replication or a promoter, its inclusion on an otherwise RM-silent plasmid might have minimal impact on the overall transformation efficiency.
Discussion
We report the development of an approach to circumvent the most common cause of genetic intractability, RM barriers, during microbial genetic engineering. In contrast to current mimicry-by-methylation approaches, ours involves stealth-by-engineering (SI Appendix, Fig. S8). We identify the precise targets of the RM systems within a poorly tractable (or intractable) bacterial strain, eliminate these targets from the DNA sequence template of a genetic tool in silico, via single-nucleotide polymorphisms (SNPs) or synonymous nucleotide modifications, and synthesize a tailor-made version of the tool that is RM-silent with respect to that specific host. This stealth-based SyngenicDNA approach allows for simple reworking of currently available genetic tools and DNA parts to permit them to efficiently operate in bacteria with active RM defenses. Additionally, we have repurposed minicircle technology to generate SyngenicDNA Minicircle Plasmid (SyMPL) tools, which are free from components required for propagation in E. coli but superfluous in the target host. Using a clinically relevant USA300 strain of S. aureus, we have demonstrated the profound improvement in transformation efficiency that can be achieved by systematic evasion of RM systems using these SyngenicDNA and SyMPL approaches.
In future applications, we expect that SyngenicDNA will be most readily applied to genetic tools that are functional in tractable strains, to modify them for use in related strains that are currently intractable or poorly tractable due to RM barriers, e.g., a newly emerging epidemic strain (22) or a newly recognized strain with biotechnological potential. In addition, SyngenicDNA could also facilitate synthetic biology approaches aimed at modular design/assembly of new genetic tools for intractable species where no genetically accessible strain is available (14). Synthetic biology focuses on the construction of biological parts that can be understood, designed, and tuned to meet specific criteria, with the underlying principle that genetic tools should be minimalistic, constructed of modularized parts, and sequence-optimized to allow for compatibility. Standardized formats for genetic tool assembly already exist to facilitate the simple implementation of synthetic tools and distribution of physical parts between different laboratories (23). However, owing to RM systems variation between different strains of the same bacterial species (14), the design of reusable DNA parts that require physical reassembly for different bacteria is generally not applicable for intractable or poorly tractable strains with active RM systems. SyngenicDNA and SyMPL approaches should change that.
We adopted the core principles of synthetic biology, modularity and compatibility, but also accounted for variation in bacterial RM systems between strains by removing the need for physical assembly of reused parts propagated in other bacterial species. Because SyngenicDNA-based genetic tools require DNA synthesis de novo in the later step, the in silico tool assembly step could be utilized to augment plasmid backbones with additional useful parts (e.g., antibiotic-resistance cassettes, promoters, repressors, terminators, and functional domains, such as transposons or fluorescent markers) or create new tools. Additionally, because there is no requirement for a laboratory to physically obtain template DNA for PCR amplification of these additional parts, researchers would only need access to the publicly available DNA sequences of new parts to integrate them into a SyngenicDNA-based genetic tool, which could then be synthetized de novo in context. Notably, compatible replication origins and accessory elements for many cultivable bacterial phyla can be obtained from (i) the NCBI Plasmid Genome database, containing >50,000 complete DNA sequences of bacterial plasmids and associated genes (24), or (ii) the ACLAME database (25) (A Classification of Mobile genetic Elements), which maintains an extensive collection of mobile genetic elements including microbial plasmids from various sources.
In addition to impeding the biotechnological and commercial development of “probiotic” bacterial species (26) and the use of bacteria within industrial biofuel production or industrial processes (27), the limited genetic tractability of many major disease-causing bacteria of relevance to clinical and public health obstructs research in multiple fields. Our SyngenicDNA and SyMPL methods are effective, flexible, and we expect can now be applied to a wide range of bacteria to circumvent innate RM barriers, the most common underlying cause of genetic intractability (SI Appendix, Text S6). Finally, the fundamental methodology developed here will also likely be useful for evasion of other microbial defense mechanisms if they rely on distinct target recognition sequences to discriminate self- from nonself-DNA.
Materials and Methods
Microbial Strains and Reagents.
E. coli NEBalpha competent cells were purchased from New England Biolabs (NEB). E. coli ER2796 was provided by the laboratory of Rich Roberts (NEB). E. coli MC (ZYCY10P3S2T; original minicircle-producing strain) was purchased from System Biosciences (SBI). A full list of reagents is provided in SI Appendix.
SMRTseq and RM System Identification.
SMRTseq was carried out on a PacBioRSII (Pacific Biosciences) with P6/C4 chemistry at Johns Hopkins Deep Sequencing and Microarray Core Facility. Additional details on SMRTseq and RM system identification are in SI Appendix.
Bioinformatics and SyngenicDNA Adaptation in Silico.
DNA sequence analysis and manipulations were performed using the Seqbuilder program of the DNASTAR software package (DNASTAR). Details of the bioinformatic tools used for adaptation are provided in SI Appendix.
DNA Synthesis and Assembly of SyngenicDNA Plasmids.
A SyngenicDNA variant of the pEPSA5 plasmid (pEPSA5Syn) was assembled by replacing a 3.05-kb fragment of the original plasmid, encompassing three JE2 RM target sites, with a de novo synthesized DNA fragment that was RM-silent with respect to S. aureus JE2 (Fig. 2 and SI Appendix, Fig. S6). Details of assembly protocols are provided in SI Appendix.
Genome Editing of E. coli MC-Producer Strain.
A CRISPR-Cas9/λ-Red multigene editing strategy was used for scarless MTase gene deletions in E. coli MC (ZYCY10P3S2T). Details on construction of a modified anhydrotetracycline inducible CRISPR-Cas9/λ-Red system and subsequent genome editing of the E. coli MC strain are provided in SI Appendix.
Production of SyMPL Tools.
The 4.3-kb S. aureus replicons of both pEPSA5 plasmids (pEPSA5 and the pEPSA5SynJE2) were PCR-amplified and spliced to the MC parental plasmid (pMC; Systems Biosciences) to form pEPSA5P and pEPSA5SynJE2P. Primers and full details are provided in SI Appendix.
S. aureus Transformations.
Full details of competent cell preparations and electroporation protocols are provided in SI Appendix.
Statistical Analysis and Data Availability.
Statistical analyses were carried out using Graphpad Prism (version 7.04; GraphPad Software) and Stata version 12.1 (StataCorp. 2011. Stata Statistical Software: Release 12. College Station, TX: StataCorp LP). Means with SE (SEM) are presented in each graph. Full details on statistical analyses and data availability are provided in SI Appendix.
Data Availability
Data deposition: Genome/methylome sequence data for Escherichia coli MC_Forsyth and Staphylococcus aureus USA300 JE2_Forsyth have been deposited to the REBASE database, (under strain nos. 21741 and 21742, respectively).
Acknowledgments
The authors wish to acknowledge the assistance of Richard J. Roberts (New England Biolabs) during REBASE analysis and Isabel Fernandez Escapa (Forsyth Institute) for technical input and assistance during Staphylococcal transformations. This research is supported by the National Institute of Dental and Craniofacial Research (NIDCR) of the National Institutes of Health (NIH) and the NIH Office of the Director (OD) under NIH Director’s Transformative Research Award R01DE027850, and under Award NIDCR R01DE022586, and by the National Institute of Allergy and Infectious Diseases under Award R01AI101018. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Supporting Information
Appendix (PDF)
Supporting Information
Download
3.18 MB
References
1
Jiang W, Bikard D, Cox D, Zhang F, Marraffini LA (2013) RNA-guided editing of bacterial genomes using CRISPR-Cas systems. Nat Biotechnol 31:233–239.
Go to reference
Crossref
PubMed
Google Scholar
2
van Opijnen T, Bodi KL, Camilli A (2009) Tn-seq: High-throughput parallel sequencing for fitness and genetic interaction studies in microorganisms. Nat Methods 6:767–772.
Go to reference
Crossref
PubMed
Google Scholar
3
Selle K, Barrangou R (2015) Harnessing CRISPR-Cas systems for bacterial genome editing. Trends Microbiol 23:225–232.
Go to reference
Crossref
PubMed
Google Scholar
4
Aune TEV, Aachmann FL (2010) Methodologies to increase the transformation efficiencies and the range of bacteria that can be transformed. Appl Microbiol Biotechnol 85:1301–1313.
Go to reference
Crossref
PubMed
Google Scholar
5
Johnston C, Martin B, Fichant G, Polard P, Claverys JP (2014) Bacterial transformation: Distribution, shared mechanisms and divergent control. Nat Rev Microbiol 12:181–196.
Crossref
PubMed
Google Scholar
a [...] “window of opportunity” for DNA internalization
b [...] observed in only a handful of ∼80 bacterial species
c [...] evidence of natural transformation is lacking
6
Floyd MM, Tang J, Kane M, Emerson D (2005) Captured diversity in a culture collection: Case study of the geographic and habitat distributions of environmental isolates held at the American type culture collection. Appl Environ Microbiol 71:2813–2823.
Go to reference
Crossref
PubMed
Google Scholar
7
Dykhuizen D (2005) Species numbers in bacteria. Proc Calif Acad Sci 56(Suppl 1):62–71.
Go to reference
PubMed
Google Scholar
8
Vasu K, Nagamalleswari E, Nagaraja V (2012) Promiscuous restriction is a cellular defense strategy that confers fitness advantage to bacteria. Proc Natl Acad Sci USA 109:E1287–E1293.
Crossref
PubMed
Google Scholar
a [...] as defense mechanisms against invading xenogeneic DNA
b [...] methylation status of highly specific target motifs
9
Tock MR, Dryden DT (2005) The biology of restriction and anti-restriction. Curr Opin Microbiol 8:466–472.
Crossref
PubMed
Google Scholar
a [...] inspired by bacteriophage antirestriction mechanisms
b [...] been translated into in vitro engineering approaches
10
Suzuki H (November 28, 2012) Host-mimicking strategies in DNA methylation for improved bacterial transformation. Methylation-From DNA, RNA and Histones to Diseases and Treatment, 10.5772/51691.
Crossref
Google Scholar
a [...] inspired by bacteriophage antirestriction mechanisms
b [...] been translated into in vitro engineering approaches
c [...] in vitro methylation with recombinant MTase enzymes
d [...] strain that is either restriction enzyme-deficient
e [...] for rapid application to a wide diversity of bacteria
11
Roberts RJ, Vincze T, Posfai J, Macelis D (2015) REBASE–A database for DNA restriction and modification: Enzymes, genes and genomes. Nucleic Acids Res 43:D298–D299.
Crossref
PubMed
Google Scholar
a [...] available for only 37 of >450 known targets
b [...] (bp), with >450 different motifs identified to date
12
Yasui K, et al. (2009) Improvement of bacterial transformation efficiency using plasmid artificial modification. Nucleic Acids Res 37:e3.
Go to reference
Crossref
PubMed
Google Scholar
13
Costa SK, Donegan NP, Corvaglia AR, François P, Cheung AL (2017) Bypassing the restriction system to improve transformation of Staphylococcus epidermidis. J Bacteriol 199:e00271-17.
Go to reference
Crossref
PubMed
Google Scholar
14
Johnston CD, Skeete CA, Fomenkov A, Roberts RJ, Rittling SR (2017) Restriction-modification mediated barriers to exogenous DNA uptake and incorporation employed by Prevotella intermedia. PLoS One 12:e0185234.
Crossref
PubMed
Google Scholar
a [...] highly species-specific, often even strain-specific
b [...] (SMRT) genome and methylome sequencing (SMRTseq)
c [...] Prevotella intermedia
d [...] where no genetically accessible strain is available
e [...] different strains of the same bacterial species
15
Vasu K, Nagaraja V (2013) Diverse functions of restriction-modification systems in addition to cellular defense. Microbiol Mol Biol Rev 77:53–72.
Crossref
PubMed
Google Scholar
a [...] cofactor requirements, and substrate specificity
b [...] seldom deviate from their recognition sequence
16
Lee BY, et al. (2013) The economic burden of community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA). Clin Microbiol Infect 19:528–536.
Go to reference
Crossref
PubMed
Google Scholar
17
Monk IR, Foster TJ (2012) Genetic manipulation of Staphylococci-breaking through the barrier. Front Cell Infect Microbiol 2:49.
Go to reference
Crossref
PubMed
Google Scholar
18
Jones MJ, Donegan NP, Mikheyeva IV, Cheung AL (2015) Improving transformation of Staphylococcus aureus belonging to the CC1, CC5 and CC8 clonal complexes. PLoS One 10:e0119487.
Go to reference
Crossref
PubMed
Google Scholar
19
Fey PD, et al. (2013) A genetic resource for rapid and comprehensive phenotype screening of nonessential Staphylococcus aureus genes. MBio 4:e00537-12.
Crossref
PubMed
Google Scholar
a [...] (MRSA) USA300 LAC strain
b [...] (MRSA) USA300 LAC strain
20
Kay MA, He CY, Chen ZY (2010) A robust system for production of minicircle DNA vectors. Nat Biotechnol 28:1287–1289.
Crossref
PubMed
Google Scholar
a [...] expression cassettes devoid of a plasmid backbone
b [...] MCs are then isolated by standard plasmid methods
c [...] MC-producing strain
21
Sadykov MR (2016) Restriction-modification systems as a barrier for genetic manipulation of Staphylococcus aureus. Methods Mol Biol 1373:9–23.
Crossref
PubMed
Google Scholar
a [...] Staphylococcus aureus
b [...] NGS (where S = C or G)
22
Manara S, et al. (2018) Whole-genome epidemiology, characterisation, and phylogenetic reconstruction of Staphylococcus aureus strains in a paediatric hospital. Genome Med 10:82.
Go to reference
Crossref
PubMed
Google Scholar
23
Silva-Rocha R, et al. (2013) The standard European vector architecture (SEVA): A coherent platform for the analysis and deployment of complex prokaryotic phenotypes. Nucleic Acids Res 41:D666–D675.
Go to reference
Crossref
PubMed
Google Scholar
24
Shintani M, Sanchez ZK, Kimbara K (2015) Genomics of microbial plasmids: Classification and identification based on replication and transfer systems and host taxonomy. Front Microbiol 6:242.
Go to reference
Crossref
PubMed
Google Scholar
25
Leplae R, Lima-Mendez G, Toussaint A (2010) ACLAME: A CLAssification of mobile genetic elements, update 2010. Nucleic Acids Res 38:D57–D61.
Go to reference
Crossref
PubMed
Google Scholar
26
Sun Z, Baur A, Zhurina D, Yuan J, Riedel CU (2012) Accessing the inaccessible: Molecular tools for bifidobacteria. Appl Environ Microbiol 78:5035–5042.
Go to reference
Crossref
PubMed
Google Scholar
27
Leang C, Ueki T, Nevin KP, Lovley DR (2013) A genetic system for Clostridium ljungdahlii: A chassis for autotrophic production of biocommodities and a model homoacetogen. Appl Environ Microbiol 79:1102–1109.
Go to reference
Crossref
PubMed
Google Scholar
Information & Authors
Information
Published in
Proceedings of the National Academy of Sciences
Vol. 116 | No. 23
June 4, 2019
PubMed: 31097593
Classifications
Biological Sciences
Microbiology
Copyright
Copyright © 2019 the Author(s). Published by PNAS. This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
Data Availability
Data deposition: Genome/methylome sequence data for Escherichia coli MC_Forsyth and Staphylococcus aureus USA300 JE2_Forsyth have been deposited to the REBASE database, (under strain nos. 21741 and 21742, respectively).
Submission history
Published online: May 16, 2019
Published in issue: June 4, 2019
Keywords
restriction modification
genetic intractability
SyngenicDNA
minicircle
transformation
Acknowledgments
The authors wish to acknowledge the assistance of Richard J. Roberts (New England Biolabs) during REBASE analysis and Isabel Fernandez Escapa (Forsyth Institute) for technical input and assistance during Staphylococcal transformations. This research is supported by the National Institute of Dental and Craniofacial Research (NIDCR) of the National Institutes of Health (NIH) and the NIH Office of the Director (OD) under NIH Director’s Transformative Research Award R01DE027850, and under Award NIDCR R01DE022586, and by the National Institute of Allergy and Infectious Diseases under Award R01AI101018. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Authors
Affiliations
Christopher D. Johnston1 johnston@fredhutch.org
Vaccine and Infectious Disease Division, Fred Hutchinson Cancer Research Center, Seattle, WA 98109;
The Forsyth Institute, Cambridge, MA 02142;
Harvard School of Dental Medicine, Boston, MA 02115;
View all articles by this author
Sean L. Cotton
The Forsyth Institute, Cambridge, MA 02142;
View all articles by this author
Susan R. Rittling
The Forsyth Institute, Cambridge, MA 02142;
Harvard School of Dental Medicine, Boston, MA 02115;
View all articles by this author
Jacqueline R. Starr
The Forsyth Institute, Cambridge, MA 02142;
Harvard School of Dental Medicine, Boston, MA 02115;
View all articles by this author
Gary G. Borisy1 johnston@fredhutch.org
The Forsyth Institute, Cambridge, MA 02142;
Harvard School of Dental Medicine, Boston, MA 02115;
View all articles by this author
Floyd E. Dewhirst
The Forsyth Institute, Cambridge, MA 02142;
Harvard School of Dental Medicine, Boston, MA 02115;
View all articles by this author
Katherine P. Lemon
The Forsyth Institute, Cambridge, MA 02142;
Division of Infectious Diseases, Boston Children’s Hospital, Harvard Medical School, Boston, MA 02115
View all articles by this author
Notes
1
To whom correspondence may be addressed. Email: johnston@fredhutch.org or gborisy@forsyth.org.
Author contributions: C.D.J., G.G.B., F.E.D., and K.P.L. designed research; C.D.J. and S.L.C. performed research; C.D.J., S.R.R., and J.R.S. analyzed data; and C.D.J. and K.P.L. wrote the paper.
Reviewers: J.J.C., Massachusetts Institute of Technology; and G.A.O., Geisel School of Medicine at Dartmouth.
Competing Interests
Conflict of interest statement: C.D.J. discloses that he has filed and is inventor on pending patent applications (USSN: 62/408,693 and 62/802,016) entitled “Compositions and methods for evading bacterial defense mechanisms” and “Production of differentially methylated DNA in E. coli,” respectively, relating to the SyngenicDNA and SyMPL methodologies developed and applied in this paper.
Metrics & Citations
Metrics
Article usage
Views
Citations
No data available.
0
0
Total
6 Months
12 Months
Total number of
views and citations
Note: The article usage is presented with a three- to four-day delay and will update daily once available. Due to ths delay, usage data will not appear immediately following publication. Citation information is sourced from Crossref Cited-by service.
Altmetrics
See more details
Picked up by 10 news outlets
Posted by 20 X users
Referenced in 3 patents
Reddited by 1
Mentioned in 1 Q&A threads
209 readers on Mendeley
Citations
Cite this article
C.D. Johnston,
S.L. Cotton,
S.R. Rittling,
J.R. Starr,
G.G. Borisy,
F.E. Dewhirst,
[...]
& K.P. Lemon,
Systematic evasion of the restriction-modification barrier in bacteria, Proc. Natl. Acad. Sci. U.S.A.
116 (23) 11454-11459,
(2019).
Copied!
Copying failed.
Export the article citation data by selecting a format from the list below and clicking Export.
Cited by
Loading...
View Options
View options
PDF
Download this article as a PDF file.
PDF
eReader
View this article with eReader.
eReader
Figures
Fig. 1.
Schematic representation of the SyngenicDNA approach. (A) Identification of RM system target motifs by SMRTseq. Methylome analysis of polymerase kinetics during sequencing permits detection of methylated sites at single-nucleotide resolution across the genome, revealing the exact motifs targeted by innate RM systems (indicated by colored nucleotides; N is any nucleotide) (kinetic trace image adapted from www.pacb.com). (B) Assembly in silico of a genetic tool with a desired functionality, followed by screening for the presence of RM target sequences and sequence adaptation, using SNPs or synonymous codon substitutions in coding regions, to create an RM-silent template which is synthetized de novo to assemble a SyngenicDNA tool. (C) Artificial transformation of the bacterium of interest target bacterium. Inappropriately methylated target motifs of the original genetic tool are recognized as nonself-DNA and degraded by RM systems. In contrast, the SyngenicDNA variant retains the form and functionality of the genetic tool, but is uniquely designed at the nucleotide level to evade the RM systems and can operate as desired within the target bacterial host.
Go to Figure
Fig. 2.
The SyngenicDNA approach applied to Staphylococcus aureus JE2. (A) JE2 maintains two type I RM systems and a type IV restriction system. REase (HsdR and SauUSI) and MTase (HsdM) genes are shown in red and blue, respectively. Specificity subunit (HsdS) genes are shown in yellow. RM systems and their corresponding target motifs were identified by SMRTseq and REBASE analysis. (B) Construction of pEPSA5SynJE2, an RM-silent variant of the pEPSA5 plasmid tailored to JE2. Six nucleotide substitutions (two synonymous codon substitutions and four SNPs) eliminated all type I RM system targets from pEPSA5 sequence. (C) Plasmid propagation scheme. E. coli host strains produce DNA susceptible (DH5α; Dcm+) or resistant (E. coli ER2796; Dcm-) to the JE2 type IV restriction system. (D) Comparison of plasmid transformation efficiency (cfu/µg DNA) with pEPSA5 and the SyngenicDNA variant pEPSA5SynJE2.
Go to Figure
Fig. 3.
The SyMPL approach applied to Staphylococcus aureus JE2. (A) Propagation of minicircles (pEPSA5MC and pEPSA5SynJE2MC) lacking Dcm-methylated sites within SyMPL-producer strain E. coli JMC1. (B) Comparison of SyngenicDNA and pEPSA5-based SyMPL plasmid transformation efficiency (cfu/µg DNA) with JE2. (C) Secondary analysis of SyngenicDNA and pEPSA5-based SyMPL plasmid transformation efficiencies in cfu/pmol DNA. Data are means ± SEM from nine independent experiments (three biological replicates with three technical replicates each).
Go to Figure
Tables
Media
Share
Share
Share article link
Copied!
Copying failed.
Share on social media
FacebookX (formerly Twitter)LinkedInGmailemail
References
References
1
Jiang W, Bikard D, Cox D, Zhang F, Marraffini LA (2013) RNA-guided editing of bacterial genomes using CRISPR-Cas systems. Nat Biotechnol 31:233–239.
Go to reference
Crossref
PubMed
Google Scholar
2
van Opijnen T, Bodi KL, Camilli A (2009) Tn-seq: High-throughput parallel sequencing for fitness and genetic interaction studies in microorganisms. Nat Methods 6:767–772.
Go to reference
Crossref
PubMed
Google Scholar
3
Selle K, Barrangou R (2015) Harnessing CRISPR-Cas systems for bacterial genome editing. Trends Microbiol 23:225–232.
Go to reference
Crossref
PubMed
Google Scholar
4
Aune TEV, Aachmann FL (2010) Methodologies to increase the transformation efficiencies and the range of bacteria that can be transformed. Appl Microbiol Biotechnol 85:1301–1313.
Go to reference
Crossref
PubMed
Google Scholar
5
Johnston C, Martin B, Fichant G, Polard P, Claverys JP (2014) Bacterial transformation: Distribution, shared mechanisms and divergent control. Nat Rev Microbiol 12:181–196.
Crossref
PubMed
Google Scholar
a [...] “window of opportunity” for DNA internalization
b [...] observed in only a handful of ∼80 bacterial species
c [...] evidence of natural transformation is lacking
6
Floyd MM, Tang J, Kane M, Emerson D (2005) Captured diversity in a culture collection: Case study of the geographic and habitat distributions of environmental isolates held at the American type culture collection. Appl Environ Microbiol 71:2813–2823.
Go to reference
Crossref
PubMed
Google Scholar
7
Dykhuizen D (2005) Species numbers in bacteria. Proc Calif Acad Sci 56(Suppl 1):62–71.
Go to reference
PubMed
Google Scholar
8
Vasu K, Nagamalleswari E, Nagaraja V (2012) Promiscuous restriction is a cellular defense strategy that confers fitness advantage to bacteria. Proc Natl Acad Sci USA 109:E1287–E1293.
Crossref
PubMed
Google Scholar
a [...] as defense mechanisms against invading xenogeneic DNA
b [...] methylation status of highly specific target motifs
9
Tock MR, Dryden DT (2005) The biology of restriction and anti-restriction. Curr Opin Microbiol 8:466–472.
Crossref
PubMed
Google Scholar
a [...] inspired by bacteriophage antirestriction mechanisms
b [...] been translated into in vitro engineering approaches
10
Suzuki H (November 28, 2012) Host-mimicking strategies in DNA methylation for improved bacterial transformation. Methylation-From DNA, RNA and Histones to Diseases and Treatment, 10.5772/51691.
Crossref
Google Scholar
a [...] inspired by bacteriophage antirestriction mechanisms
b [...] been translated into in vitro engineering approaches
c [...] in vitro methylation with recombinant MTase enzymes
d [...] strain that is either restriction enzyme-deficient
e [...] for rapid application to a wide diversity of bacteria
11
Roberts RJ, Vincze T, Posfai J, Macelis D (2015) REBASE–A database for DNA restriction and modification: Enzymes, genes and genomes. Nucleic Acids Res 43:D298–D299.
Crossref
PubMed
Google Scholar
a [...] available for only 37 of >450 known targets
b [...] (bp), with >450 different motifs identified to date
12
Yasui K, et al. (2009) Improvement of bacterial transformation efficiency using plasmid artificial modification. Nucleic Acids Res 37:e3.
Go to reference
Crossref
PubMed
Google Scholar
13
Costa SK, Donegan NP, Corvaglia AR, François P, Cheung AL (2017) Bypassing the restriction system to improve transformation of Staphylococcus epidermidis. J Bacteriol 199:e00271-17.
Go to reference
Crossref
PubMed
Google Scholar
14
Johnston CD, Skeete CA, Fomenkov A, Roberts RJ, Rittling SR (2017) Restriction-modification mediated barriers to exogenous DNA uptake and incorporation employed by Prevotella intermedia. PLoS One 12:e0185234.
Crossref
PubMed
Google Scholar
a [...] highly species-specific, often even strain-specific
b [...] (SMRT) genome and methylome sequencing (SMRTseq)
c [...] Prevotella intermedia
d [...] where no genetically accessible strain is available
e [...] different strains of the same bacterial species
15
Vasu K, Nagaraja V (2013) Diverse functions of restriction-modification systems in addition to cellular defense. Microbiol Mol Biol Rev 77:53–72.
Crossref
PubMed
Google Scholar
a [...] cofactor requirements, and substrate specificity
b [...] seldom deviate from their recognition sequence
16
Lee BY, et al. (2013) The economic burden of community-associated methicillin-resistant Staphylococcus aureus (CA-MRSA). Clin Microbiol Infect 19:528–536.
Go to reference
Crossref
PubMed
Google Scholar
17
Monk IR, Foster TJ (2012) Genetic manipulation of Staphylococci-breaking through the barrier. Front Cell Infect Microbiol 2:49.
Go to reference
Crossref
PubMed
Google Scholar
18
Jones MJ, Donegan NP, Mikheyeva IV, Cheung AL (2015) Improving transformation of Staphylococcus aureus belonging to the CC1, CC5 and CC8 clonal complexes. PLoS One 10:e0119487.
Go to reference
Crossref
PubMed
Google Scholar
19
Fey PD, et al. (2013) A genetic resource for rapid and comprehensive phenotype screening of nonessential Staphylococcus aureus genes. MBio 4:e00537-12.
Crossref
PubMed
Google Scholar
a [...] (MRSA) USA300 LAC strain
b [...] (MRSA) USA300 LAC strain
20
Kay MA, He CY, Chen ZY (2010) A robust system for production of minicircle DNA vectors. Nat Biotechnol 28:1287–1289.
Crossref
PubMed
Google Scholar
a [...] expression cassettes devoid of a plasmid backbone
b [...] MCs are then isolated by standard plasmid methods
c [...] MC-producing strain
21
Sadykov MR (2016) Restriction-modification systems as a barrier for genetic manipulation of Staphylococcus aureus. Methods Mol Biol 1373:9–23.
Crossref
PubMed
Google Scholar
a [...] Staphylococcus aureus
b [...] NGS (where S = C or G)
22
Manara S, et al. (2018) Whole-genome epidemiology, characterisation, and phylogenetic reconstruction of Staphylococcus aureus strains in a paediatric hospital. Genome Med 10:82.
Go to reference
Crossref
PubMed
Google Scholar
23
Silva-Rocha R, et al. (2013) The standard European vector architecture (SEVA): A coherent platform for the analysis and deployment of complex prokaryotic phenotypes. Nucleic Acids Res 41:D666–D675.
Go to reference
Crossref
PubMed
Google Scholar
24
Shintani M, Sanchez ZK, Kimbara K (2015) Genomics of microbial plasmids: Classification and identification based on replication and transfer systems and host taxonomy. Front Microbiol 6:242.
Go to reference
Crossref
PubMed
Google Scholar
25
Leplae R, Lima-Mendez G, Toussaint A (2010) ACLAME: A CLAssification of mobile genetic elements, update 2010. Nucleic Acids Res 38:D57–D61.
Go to reference
Crossref
PubMed
Google Scholar
26
Sun Z, Baur A, Zhurina D, Yuan J, Riedel CU (2012) Accessing the inaccessible: Molecular tools for bifidobacteria. Appl Environ Microbiol 78:5035–5042.
Go to reference
Crossref
PubMed
Google Scholar
27
Leang C, Ueki T, Nevin KP, Lovley DR (2013) A genetic system for Clostridium ljungdahlii: A chassis for autotrophic production of biocommodities and a model homoacetogen. Appl Environ Microbiol 79:1102–1109.
Go to reference
Crossref
PubMed
Google Scholar
View full text|Download PDF
Figure title goes here
Download figure
Go to figure location within the article
xrefBack.goTo
Request permissions
Authors Info & Affiliations
Further reading in this issue
Research ArticleMay 17, 2019
### A human organoid system that self-organizes to recapitulate growth and differentiation of a benign mammary tumor
Stefan Florian,
Yoshiko Iwamoto,
Margaret Coughlin,
Ralph Weissleder,
[...]
Timothy J. Mitchison,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Research ArticleMay 21, 2019
### A glycan shield on chimpanzee CD4 protects against infection by primate lentiviruses (HIV/SIV)
Cody J. Warren,
Nicholas R. Meyerson,
Alex C. Stabell,
Will T. Fattor,
Gregory K. Wilkerson,
[...]
Sara L. Sawyer,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Research ArticleMay 21, 2019
### Self-organizing motors divide active liquid droplets
Kimberly L. Weirich,
Kinjal Dasbiswas,
Thomas A. Witten,
Suriyanarayanan Vaikuntanathan,
[...]
Margaret L. Gardel,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Trending
Research ArticleDecember 30, 2013
### Bodily maps of emotions
Emotions coordinate our behavior and physiological states during survival-salient
events and pleasurable interactions. Even though we are often consciously aware of
our current emotional state, such as anger or happiness, the mechanisms giving ...Emotions are often felt in the body, and somatosensory feedback has been proposed
to trigger conscious emotional experiences. Here we reveal maps of bodily sensations
associated with different emotions using a unique topographical self-report method.
In ...
Lauri Nummenmaa,
Enrico Glerean,
Riitta Hari,
[...]
Jari K. Hietanen,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Research ArticleJune 11, 2018
### Neural network retuning and neural predictors of learning success associated with cello training
In sophisticated auditory–motor learning such as musical instrument learning, little
is understood about how brain plasticity develops over time and how the related individual
variability is reflected in the neural architecture. In a ...The auditory and motor neural systems are closely intertwined, enabling people to
carry out tasks such as playing a musical instrument whose mapping between action
and sound is extremely sophisticated. While the dorsal auditory stream has been shown
to ...
Indiana Wollman,
Virginia Penhune,
Melanie Segado,
Thibaut Carpentier,
[...]
Robert J. Zatorre,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Research ArticleAugust 4, 2025
### The entities enabling scientific fraud at scale are large, resilient, and growing rapidly
Numerous recent scientific and journalistic investigations demonstrate that systematic
scientific fraud is a growing threat to the scientific enterprise. In large measure
this has been attributed to organizations known as research paper mills. ...Science is characterized by collaboration and cooperation, but also by uncertainty,
competition, and inequality. While there has always been some concern that these pressures
may compel some to defect from the scientific research ethos—i.e., fail to make ...
Reese A. K. Richardson,
Spencer S. Hong,
Jennifer A. Byrne,
Thomas Stoeger,
[...]
Luís A. Nunes Amaral,
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
Recommended articles
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory
This is an addendum toComplex viscosity of helical and doubly helical polymeric liquids from general rigid bead-rod theory |
17204 | https://en.cppreference.com/w/cpp/language/integer_literal.html | cppreference.com
Create account
Integer literal
From cppreference.com
< cpp | language
Allows values of integer type to be used in expressions directly.
| |
| Contents 1 Syntax 2 Explanation 3 The type of the literal 4 Notes 5 Example 6 Defect reports 7 References 8 See also |
[edit] Syntax
An integer literal has the form
| | | | | | | | | | |
--- --- --- --- --- |
| | | | | | | | | | |
| decimal-literal integer-suffix (optional) | (1) | |
| | | | | | | | | | |
| octal-literal integer-suffix (optional) | (2) | |
| | | | | | | | | | |
| hex-literal integer-suffix (optional) | (3) | |
| | | | | | | | | | |
| binary-literal integer-suffix (optional) | (4) | (since C++14) |
| | | | | | | | | | |
where
decimal-literal is a non-zero decimal digit (1, 2, 3, 4, 5, 6, 7, 8, 9), followed by zero or more decimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
octal-literal is the digit zero (0) followed by zero or more octal digits (0, 1, 2, 3, 4, 5, 6, 7)
hex-literal is the character sequence 0x or the character sequence 0X followed by one or more hexadecimal digits (0, 1, 2, 3, 4, 5, 6, 7, 8, 9, a, A, b, B, c, C, d, D, e, E, f, F)
binary-literal is the character sequence 0b or the character sequence 0B followed by one or more binary digits (0, 1)
integer-suffix, if provided, may contain one or both of the following (if both are provided, they may appear in any order:
: unsigned-suffix (the character u or the character U)
one of
: long-suffix (the character `l` or the character `L`)
| | |
--- |
| long-long-suffix (the character sequence ll or the character sequence LL) | (since C++11) |
| | |
--- |
| size-suffix (the character z or the character Z) | (since C++23) |
| | |
--- |
| Optional single quotes (') may be inserted between the digits as a separator; they are ignored when determining the value of the literal. | (since C++14) |
An integer literal (as any literal) is a primary expression.
[edit] Explanation
1) Decimal integer literal (base 10).
2) Octal integer literal (base 8).
3) Hexadecimal integer literal (base 16, the letters 'a' through 'f' represent values (decimal) 10 through 15).
4) Binary integer literal (base 2).
The first digit of an integer literal is the most significant.
Example. The following variables are initialized to the same value:
int d = 42;
int o = 052;
int x = 0x2a;
int X = 0X2A;
int b = 0b101010; // C++14
Example. The following variables are also initialized to the same value:
unsigned long long l1 = 18446744073709550592ull; // C++11
unsigned long long l2 = 18'446'744'073'709'550'592llu; // C++14
unsigned long long l3 = 1844'6744'0737'0955'0592uLL; // C++14
unsigned long long l4 = 184467'440737'0'95505'92LLU; // C++14
[edit] The type of the literal
The type of the integer literal is the first type in which the value can fit, from the list of types which depends on which numeric base and which integer-suffix was used:
| Suffix | Decimal bases | Binary, octal, or hexadecimal bases |
---
| (no suffix) | int long int long long int (since C++11) | int unsigned int long int unsigned long int long long int (since C++11) unsigned long long int (since C++11) |
| u or U | unsigned int unsigned long int unsigned long long int (since C++11) | unsigned int unsigned long int unsigned long long int (since C++11) |
| l or L | long int unsigned long int (until C++11) long long int (since C++11) | long int unsigned long int long long int (since C++11) unsigned long long int (since C++11) |
| both l/L and u/U | unsigned long int unsigned long long int (since C++11) | unsigned long int unsigned long long int (since C++11) |
| ll or LL | long long int (since C++11) | long long int (since C++11) unsigned long long int (since C++11) |
| both ll/LL and u/U | unsigned long long int (since C++11) | unsigned long long int (since C++11) |
| z or Z | the signed version of std::size_t (since C++23) | the signed version of std::size_t (since C++23) std::size_t (since C++23) |
| both z/Z and u/U | std::size_t (since C++23) | std::size_t (since C++23) |
If the value of the integer literal that does not have size-suffix(since C++23) is too big to fit in any of the types allowed by suffix/base combination and the compiler supports an extended integer type (such as __int128) which can represent the value of the literal, the literal may be given that extended integer type — otherwise the program is ill-formed.
[edit] Notes
Letters in the integer literals are case-insensitive: 0xDeAdBeEfU and 0XdeadBEEFu represent the same number (one exception is the long-long-suffix, which is either ll or LL, never lL or Ll)(since C++11).
There are no negative integer literals. Expressions such as -1 apply the unary minus operator to the value represented by the literal, which may involve implicit type conversions.
In C prior to C99 (but not in C++), unsuffixed decimal values that do not fit in long int are allowed to have the type unsigned long int.
| | |
--- |
| When used in a controlling expression of #if or #elif, all signed integer constants act as if they have type std::intmax_t and all unsigned integer constants act as if they have type std::uintmax_t. | (since C++11) |
Due to maximal munch, hexadecimal integer literals ending in e and E, when followed by the operators + or -, must be separated from the operator with whitespace or parentheses in the source:
auto x = 0xE+2.0; // error
auto y = 0xa+2.0; // OK
auto z = 0xE +2.0; // OK
auto q = (0xE)+2.0; // OK
Otherwise, a single invalid preprocessing number token is formed, which causes further analysis to fail.
| Feature-test macro | Value | Std | Feature |
--- --- |
| __cpp_binary_literals | 201304L | (C++14) | Binary literals |
| __cpp_size_t_suffix | 202011L | (C++23) | Literal suffixes for std::size_t and its signed version |
[edit] Example
Run this code
```
include
include
include
int main()
{
std::cout << 123 << '\n'
<< 0123 << '\n'
<< 0x123 << '\n'
<< 0b10 << '\n'
<< 12345678901234567890ull << '\n'
<< 12345678901234567890u << '\n'; // the type is unsigned long long
// even without a long long suffix
// std::cout << -9223372036854775808 << '\n'; // error: the value
// 9223372036854775808 cannot fit in signed long long, which is the
// biggest type allowed for unsuffixed decimal integer literal
std::cout << -9223372036854775808u << '\n'; // unary minus applied to unsigned
// value subtracts it from 2^64, this gives 9223372036854775808
std::cout << -9223372036854775807 - 1 << '\n'; // correct way to calculate
// the value -9223372036854775808
if __cpp_size_t_suffix >= 202011L // C++23
static_assert(std::is_same_v<decltype(0UZ), std::size_t>);
static_assert(std::is_same_v<decltype(0Z), std::make_signed_t<std::size_t>>);
endif
}
```
123
83
291
2
12345678901234567890
12345678901234567890
9223372036854775808
-9223372036854775808
[edit] Defect reports
The following behavior-changing defect reports were applied retroactively to previously published C++ standards.
| DR | Applied to | Behavior as published | Correct behavior |
--- --- |
| CWG 2698 | C++23 | an integer literal with size-suffix could have an extended integer type | ill-formed if too large |
[edit] References
C++23 standard (ISO/IEC 14882:2024):
: 5.13.2 Integer literals [lex.icon]
C++20 standard (ISO/IEC 14882:2020):
: 5.13.2 Integer literals [lex.icon]
C++17 standard (ISO/IEC 14882:2017):
: 5.13.2 Integer literals [lex.icon]
C++14 standard (ISO/IEC 14882:2014):
: 2.14.2 Integer literals [lex.icon]
C++11 standard (ISO/IEC 14882:2011):
: 2.14.2 Integer literals [lex.icon]
C++98 standard (ISO/IEC 14882:1998):
: 2.13.1 Integer literals [lex.icon]
[edit] See also
| | |
--- |
| user-defined literals(C++11) | literals with user-defined suffix[edit] |
| C documentation for integer constant | |
Retrieved from " |
17205 | https://www.ck12.org/section/integers-%3A%3Aof%3A%3A-grade-6-teaching-tips-%3A%3Aof%3A%3A-ck-12-middle-school-math-grade-6-teachers-edition/ | Elementary Math
Grade 1
Grade 2
Grade 3
Grade 4
Grade 5
Math 6
Math 7
Math 8
Algebra I
Geometry
Algebra II
Math 6
Math 7
Math 8
Algebra I
Geometry
Algebra II
Probability & Statistics
Trigonometry
Math Analysis
Precalculus
Calculus
What's the difference?
Grade K to 5
Earth Science
Life Science
Physical Science
Biology
Chemistry
Physics
Advanced Biology
FlexLets
Math FlexLets
Science FlexLets
English
Writing
Spelling
Social Studies
Economics
Geography
Government
History
World History
Philosophy
Sociology
More
Astronomy
Engineering
Health
Photography
Technology
College
College Algebra
College Precalculus
Linear Algebra
College Human Biology
The Universe
Adult Education
Basic Education
High School Diploma
High School Equivalency
Career Technical Ed
English as 2nd Language
Country
Bhutan
Brasil
Chile
Georgia
India
Translations
Spanish
Korean
Deutsch
Chinese
Greek
Polski
EXPLORE
Flexi
A FREE Digital Tutor for Every Student
FlexBooks 2.0
Customizable, digital textbooks in a new, interactive platform
FlexBooks
Customizable, digital textbooks
Schools
FlexBooks from schools and districts near you
Study Guides
Quick review with key information for each concept
Adaptive Practice
Building knowledge at each student’s skill level
Simulations
Interactive Physics & Chemistry Simulations
PLIX
Play. Learn. Interact. eXplore.
CCSS Math
Concepts and FlexBooks aligned to Common Core
NGSS
Concepts aligned to Next Generation Science Standards
Certified Educator
Stand out as an educator. Become CK-12 Certified.
Webinars
Live and archived sessions to learn about CK-12
Other Resources
CK-12 Resources
Concept Map
Testimonials
CK-12 Mission
Meet the Team
CK-12 Helpdesk
FlexLets
Know the essentials.
Pick a Subject
Donate
Sign Up
HomeMathematicsIntegers
Integers
Difficulty Level: Basic | Created by: CK-12
Last Modified: Aug 19, 2014
This eleventh chapter Comparing Integers takes students from Geometry into the world of numbers. These numbers are new and are negative. This expands the world of numbers the students have been exposed to thus far. Integers are building block for Algebra. This Teaching Tips flexbook is designed to assist you, the teacher in structuring and designing each lesson. For each section of the chapter, there will be information on pacing, goals, review skills needed and teaching strategies as you guide your students into the world of mathematics.
Lessons
The following lessons are part of this chapter.
Comparing Integers
Adding Integers
Subtracting Integers
Multiplying Integers
Dividing Integers
The Coordinate Plane
Transformations
Surveys and Data Displays
Pacing
When planning, the pacing of the chapter guides our work. We can think of each day as a 45 minute class period. If you are in a school which features block scheduling, then you can combine two days together to equal one class period.
Comparing Integers – 1 Day
Adding Integers – 1 Day
Subtracting Integers – 2 Days
Multiplying Integers – 1 Day
Dividing Integers – 1 Day
The Coordinate Plane – 1 Day
Transformations – 1 Day
Surveys and Data Displays – 1 Day
Day 9 – Review
Day 10 - Test
Comparing Integers
Goal
The goals of this lesson can be found in the following objectives.
Write integers representing situations of increase/decrease, profit/loss, above/below, etc.
Identify opposites of given integers
Compare and order integers on a number line.
Compare and order positive and negative fractions and decimals.
Relevant Review
Review whole numbers as counting numbers. Show a number line to the students starting with zero and expanding to the positive counting numbers. The number line will be expanding to include negative numbers during this lesson.
Study Skill Tip
When using the number line to teach positive and negative numbers to the students, ask the students to draw a number line that includes both positive and negative numbers into their notebooks.
Be sure to have students copy the following vocabulary words down in their notebooks.
Integers
: the set of whole numbers and their opposites
Negative Numbers
: numbers that are less than zero
Positive Numbers
: numbers that are greater than zero
Zero
: is a part of the set of integers, but is neither positive or negative
Teaching Strategies
The real world story problem The Pen Pal Project can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
The world of negative numbers opens up a whole new world for the students. Up until this time, they have been working with counting numbers and with parts of numbers including fractions and decimals. Now they are going to begin to think about negative numbers or values that really don’t exist. A negative is below zero. This is a new way of thinking for the students.
There are many instances where negative numbers are talked about in real world situations. Use the examples in the text to make this real for the students. They may be able to brainstorm some new examples as well. Always allow time for the students to assimilate new information by making connections to the material that is real for them. Encourage them to share these realizations.
Be sure to review the symbols that we use to compare values (<, >,=, etc). Ask students to think about numbers and their opposites. This shows them that every number has a positive and a negative value associated with it.
A tricky concept is that the closer a negative number is to zero, the larger it is. Take time to repeat this many times and give students many opportunties for mistakes. They will want it to be larger because the digits are larger, this shows a lack of true understanding of the concept of a negative number. Use number lines to help you teach this to the students.
Fractions and decimals can work into this as well. Review ordering positive fractions and positive decimals. Then bring in the negatives as opposites. This will help students as they think about negative fractions and decimals. Use number lines to help make the ordering and comparing clearer.
Repeat and reaffirm that the closer a negative value is to zero the larger the value. This can’t be repeated too many times. It is a common place for error.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Adding Integers
Goal
The goals of this lesson can be found in the following objectives.
Find sums of integers on a number line.
Identify absolute values of integers.
Add integers with the same sign.
Add integers with different signs.
Relevant Review
Review identifying integers on a number line. Remind students that the closer a negative value is to zero, the larger it is.
Study Skill Tip
Be sure to give students a number line to work with at their desks. You can take a strip of cardboard and have each of them create a number line. They can do this with a ruler and then use the number line through all of the work that they are doing.
Be sure to have students copy down the following vocabulary words in their notebooks.
Integer
: the set of whole number and their opposites. Positive and negative whole numbers are integers.
Absolute Value
: the number of units that an integer is from zero. The sign does not make a difference-absolute value is the number of units.
Teaching Strategies
The real world story problem The Trouble with Time Zones can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
When working through this lesson, review the language of sums meaning addition, and use the words “losses” and “gains” when talking about adding negative and positive numbers. This will help students to make a real – world connection to integers.
Absolute value is a topic where students often make mistakes. They forget that an absolute value is neither negative or positive, it is the distance that a value is from zero. Be sure to review and repeat this statement many times so that students really have an understanding of it.
This lesson moves from a concrete example of adding integers to an abstract one. The concrete way of adding is through the use of the number line. The abstract method is through the mental math of adding the integers. Some students will be able to easily make this leap, while others will need more practice with the number line.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a Khan Academy video that teaches students how to add integers with different signs.
- This is a Khan Academy video on how to find the absolute value of integers.
Subtracting Integers
Goal
The goals of this lesson can be found in the following objectives.
Find differences of integers on a number line.
Subtract integers with the same sign.
Subtract integers with different signs.
Solve real-world problems involving sums and differences of integers.
Relevant Review
Review adding integers with the same signs and with different signs. Use a number line to review how to use one to find sums. This review is essential since this lesson will build on it with subtraction.
Review absolute value, the symbol for absolute value, and how to find the absolute value of a number by figuring out the distance that the number is from zero.
Study Skill Tip
Be sure to have students use a number line when beginning their work with subtraction. This could be the same number line that they created on strips of poster board in the last lesson.
Be sure to have students copy down the following vocabulary words in their notebooks.
Sum
: the result of an addition problem.
Difference
: the result of a subtraction problem.
Teaching Strategies
The real world story problem The Football Game can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class. This problem could also be created on the blackboard with a football field. Then students could actually work with losses and gains on a larger format. More problems could also be created using this theme.
There are several great websites with interactive games for students to practice working with integers. Here are a couple of them.
- There are several math games on this site for students to choose from and practice with.
- This is a fun brain game site with many games for students to help them build their math skills.
When there is a key point in the text, be sure to have students write this key point down in their notebooks. Subtraction is a topic where students often struggle. Because of this, students will need more help. It is recommended that you take two days to teach this lesson. On the first day, work with number lines and subtraction. Then on the second day, you can complete the other material in the lesson. Working in a slow way will help students and will help prevent some reteaching.
Pay attention to the helpful hints in the colored text boxes. One key for this lesson is that subtracting an integer is the same as adding it’s opposite.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a yourteacher.com video on how to subtract integers. It begins with subtracting integers by using a number line.
Multiplying Integers
Goal
The goals of this lesson can be found in the following objectives.
Find products of positive integers.
Find products of positive and negative integers.
Find products of negative integers.
Evaluate numerical and algebraic expressions involving integer multiplication.
Relevant Review
Review identifying integers on a number line. Review adding and subtracting integers using a number line. Here are five examples to help you to review.
Study Skill Tip
Be sure to have students copy down the following vocabulary words in their notebooks.
Product
: the result of a multiplication problem.
Integer
: the set of whole numbers and their opposites.
Commutative Property of Multiplication
: a property that states that it doesn’t matter which order you multiply terms. The product will be the same.
Numerical Expression
: an expression that contains multiple numbers and operations.
Algebraic Expression
: an expression that contains numbers, variables and operations.
Rules are an important part of multiplying integers. When a text box presents itself, stop the lesson and take the time to have students copy these notes down in their notebooks.
Teaching Strategies
The real world story problem Welcome to Jafakids can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
Multiplying integers becomes easier when the students know their times tables. If students don’t know their times tables, then it is important that you take the time to review these tables on a daily basis.
When working with more than two integers, remind students of the order of operations. There are many sets of parentheses in the review problems. Be sure to have students do the work in parentheses first. Working with more than one operation at a time can be challenging for students. Be sure to review adding and subtracting integers so that when the three operations, adding, subtracting and multiplying, are combined that students feel confident in their ability to solve problems.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a video from yourteacher.com on how to multiply integers.
- These are math tutor videos where students can select the topic that they want to review. Operations with integers are one option on this website.
Dividing Integers
Goal
The goals of this lesson can be found in the following objectives.
Find quotients of positive integers.
Find quotients of positive and negative integers.
Find quotients of negative integers.
Evaluate numerical and algebraic expressions involving integer division.
Relevant Review
Review multiplying integers and the rules associated with multiplying integers. Because division is the opposite of multiplication, the same rules are going to apply to both operations.
As always, take time to review the times tables. They will help the students when working with multiplication and division of integers.
Study Skill Tip
Be sure to have students copy down these vocabulary words in their notebooks.
Quotient
: the answer from a division problem
Integer
: the set of whole numbers and their opposites
Inverse Operation
: the opposite of a given operation
Fraction Bar
: the line used to divide the numerator and the denominator of a fraction-also means division
Numerical expression
: an expression that combines integers and operations
Algebraic expression
: an expression that combines variables, integers and operations.
Teaching Strategies
The real world story problem The History Test can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
The same rules for multiplying integers apply when we work with dividing integers. Even though these rules were presented in the last lesson, take the time to review the rules for the students. Students will need to be able to apply these rules in their work with integers.
Evaluating numeric and algebraic expressions is challenging for students because it combines more than one operation. Notice that there is a fraction bar used to signify division. You will need to explain this to the students. Students will need to understand that when they see this bar that they are going to divide.
Again, remind students to follow the order of operations. If a problem has multiplication and division in it, they will need to multiply integers and then divide them.
Some students may become discouraged when they see the variables in the last example. You can remind students that all they are doing is substituting values for those variables. It is just an added step, then they can perform the operations with integers as they have always done.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a math made easy video on dividing integers. It uses a fraction bar to signify division which is excellent reinforcement for the text.
The Coordinate Plane
Goal
The goals of this lesson can be found in the following objectives.
Graph ordered pairs of integer coordinates as points in all four quadrants.
Graph geometric figures given coordinates of vertices.
Locate places on maps using integer coordinates.
Describe paths between given points as integer translations.
Relevant Review
Review identifying integers as positive and negative. Review identifying positive and negative integers on a number line.
Study Skill Tip
Be sure to have students copy down the following vocabulary words in their notebooks.
Quadrant
: the four sections of a coordinate grid
Origin
: the place where the and axis’ meet at (0, 0)
Ordered Pair
: the and value used to locate a point on a coordinate grid
axis
: the horizontal axis on the coordinate grid
axis
: the vertical axis on the coordinate grid
Coordinates
: the and value of an ordered pair
Longitude
: vertical measure of degrees on a map
Latitude
: horizontal measure of degrees on a map
Teaching Strategies
The real world story problem The Map can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
In an earlier lesson, students were introduced to graphing in one quadrant. When this happened, they were introduced to the axis and to the axis. This is important to review before beginning this lesson. You are going to take this information and build on it. Once you have reviewed one quadrant, draw out or use an overhead transparency to show students a complete coordinate grid. You can explain this as the coordinate plane.
Once it has been presented, ask the students to describe the grid. Students should identify positive and negative values both on the horizontal and vertical axes. Be sure to remind students which axis is horizontal and which is vertical. Some students will need this reminder.
Always encourage students to start at the origin. Students with spatial challenges will be lost otherwise. If they start at the origin, then go horizontal and then vertical, it will cut back on the number of errors that students will have.
This is a fun lesson. Many students really enjoy working on the coordinate plane. There are also many patterns students can create as they practice graphing and identifying values.
- This is a website where students can play a math game where they identify aliens on the coordinate plane.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a Khan Academy video on the coordinate plane and graphing on the coordinate plane.
- This is a Khan Academy video on identifying quadrants on the coordinate plane.
Transformations
Goal
The goals of this lesson can be found in the following objectives.
Identify transformations in the coordinate plane as translations (slides), reflections (flips) and rotations (turns).
Graph paired transformations of figures given coordinates of vertices.
Describe transformations as and coordinate changes.
Identify equivalent transformations with different coordinate changes.
Relevant Review
Review moving on the coordinate plane. You will need to review the location of the and axes so that students are able to take what they have learned and apply it to this next lesson on transformations.
Review ordered pairs and that the first value is the value and the second is the value.
Remind students to always begin at the origin and move from there to plot the point.
Study Skill Tip
Be sure to have students copy down the following vocabulary words in their notebooks.
Transformation
: when a figure is moved in some way on the coordinate plane, the way that the figure is moved is called a transformation.
Translation
: a slide, this when a figure slides on the coordinate plane from one place to the next
Reflection
: the flip of a figure. Figures can be reflected over the axis or over the axis
Rotation
: a turn. A figure can be turned in various directions on the coordinate plane.
Equivalent
: another name for equal
Teaching Strategies
The real world story problem The Clubhouse can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class.
There are three different types of transformations that are taught in this lesson. First, students work on identifying the three types, then they are going to design and manipulate each type. You can use the nicknames for each type of transformation, for example that a translation is called a slide, but be sure to use the nickname in connection with the real name of the transformation. In this way students will connect the accurate term with the nickname.
Sometimes manipulating the transformations can be difficult for students. To do this in a hands – on way, you can give the students different shaped figures, such as triangles, cut out of construction paper. Students can take these triangles and move them on the coordinate plane to practice sliding them, rotating them and flipping them.
After practicing this in a hands – on way, you can move to the examples in the text where students are only working with the coordinates of the vertices of each figure.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a Brightstorm video on all of the different types of transformations. There are also links on this website to show each of the transformations in detail.
Surveys and Data Displays
Goal
The goals of this lesson can be found in the following objectives.
Collect and organize real-world survey data.
Choose an appropriate data display.
Distinguish which data displays are more effective for a specific purpose.
Analyze and interpret statistical survey data.
Relevant Review
Review the four part problem - solving plan from earlier lessons on problem – solving.
Study Skill Tip
Be sure to have students copy down the different data displays and what they are best used for. This information should be documented in their notebooks.
Ways to Display Data
Bar Graph – A bar graph displays the frequency of data or how often data occurs.
Double Bar Graph – Compares the frequency of two sets of data.
Line Graph – shows how data changes over time.
Double Line Graph – compares how two sets of data changes over time.
Circle Graph – shows a percentage out of a whole.
Teaching Strategies
The real world story problem The Bedtime Survey can be used as an introduction to the lesson. The solution could be used to close the lesson. It could also be used for a day dedicated to problem solving. This would be a day where the problem would be presented and solved on the same day. There is flexibility in the way that these real world problems are integrated into the class. This story problem can be used as the model for the lesson. It is an example of a survey that can easily be conducted in the class as well. Use data that is real and relevant to the students so that the surveys and data displays that are created have value for the students.
Practice Problems and Time to Practice
You will see practice problems throughout each lesson. As concepts change, there are practice problems provided for the students to work with. These problems could be used as a review at the end of the day or as a warm – up for the next day. In other words, you teach addition and subtraction on Day 1, so on Day 2, the practice problems for addition and subtraction are given for the students to solve on Day 2.
Technology Integration
- This is a Khan Academy video on surveys and samples.
- This is a teacher tube video on youtube that uses the Boston Red Sox to teach about data displays.
Notes/Highlights
| Color | Highlighted Text | Notes | |
--- --- |
| | Please Sign In to create your own Highlights / Notes | | |
Currently there are no resources to be displayed.
Description
No description available here...
Difficulty Level
Basic
Tags
variable expressions, patterns, measurement, equations, decimals, fractions, proportions, ratios, percents, functions, statistics, probability, geometry, multiplication, addition, subtraction, operations with numbers, CK.MAT.ENG.TE.1.Math-Grade-6.1, (15 more)
Subjects
mathematics
Grades
6
Standards Correlations
-
Concept Nodes
License
CC BY NC
Language
English
| Cover Image | Attributions |
--- |
| | License: CC BY-NC |
Date Created
Mar 07, 2014
Last Modified
Aug 19, 2014
.
Show Details ▼
Related Content
Add and Subtract Integers
Read
+ Adding Between 10 and -10
PLIX
Reviews
Was this helpful?
Yes
No
0% of people thought this content was helpful.
00
Back to the top of the page ↑
Oops, looks like cookies are disabled on your browser. Click on this link to see how to enable them.
Student Sign Up
Are you a teacher?
Having issues? Click here
By signing up, I confirm that I have read and agree to the Terms of use and Privacy Policy
Already have an account?
No Results Found
Your search did not match anything in . |
17206 | https://www.youtube.com/watch?v=G5wlKltW7pM | First Derivative Test
The Organic Chemistry Tutor
9880000 subscribers
8328 likes
Description
639658 views
Posted: 4 Mar 2018
This calculus video tutorial provides a basic introduction into the first derivative test. The first derivative test can be used to locate any relative extrema in a function. When the first derivative changes from negative to positive, a relative maximum is present. When the first derivative changes from positive to negative, a relative minimum is present. The process is simplified if you create a sign chart using a number line.
Derivative Applications - Free Formula Sheet:
Calculus 1 Final Exam Review:
Derivative Applications - Video Lessons:
Final Exam and Test Prep Videos:
158 comments
Transcript:
in this video we're going to talk about how to use the first derivative test to determine the relative extrema of a function so here is a relative minimum and this is a relative maximum and this graph has no relative minimum or maximum so let's talk about these graphs now for the first one on the upper left corner notice that of the left side of the minimum the function is decreasing so it's decreasing when the first derivative is negative and here the function is increasing it's increasing when the first derivative is positive so according to the first derivative test if the sign of the first derivative changes from negative to positive then you have a minimum in order to have a maximum the function has to be increasing first and then decreasing later so then the first derivative has to change from positive to negative now for the other two graphs the sign will not change notice that the function is increasing and then it stops the slope is zero at this point and then it increases again so notice that the sign doesn't change and for the second one it's always decreasing so therefore the sign doesn't change either so if the sign of the first derivative does not change there is no relative maximum or relative minimum but if it changes from negative to positive then according to the first derivative test a minimum exists or if it changes from positive to negative then there's a local maximum or a relative maximum now let's work on some practice problems consider the function f of x is equal to x squared minus four x plus five so go ahead and use the first derivative test to determine the relative extrema of this function so the first thing we need to do is find the first derivative the derivative of x squared is 2x and the derivative of 4x is 4 and for a constant is 0. once you have the first derivative set it equal to 0 i'm going to factor out 2 and so i could see that my critical number is x equals 2. if you set the factor x minus 2 equal to 0 then x is equal to 2. now once you have the critical number make a sign chart so draw a number line and put the critical number on it and then plug in some test points so to the right of two let's say at three if we plug it into the first derivative will give us a positive number or a negative number three minus two is positive now if we plug in a number less than one i mean less than two like one let's see what's going to happen one minus two is negative so that will give us a negative result so notice that the first derivative it changes from negative to positive so on the left side the function is decreasing and on the right side is increasing and so this correlates to a relative minimum so we have a relative minimum at x equals two now to find the relative minimum value you could plug it back into the function and get the y coordinate so f of two is equal to two squared minus four times two plus five two squared is four four times two is eight four minus eight is negative four plus five that's one so as an ordered pair x is two y is one so these are the coordinates of the relative minimum so the relative minimum is located at x equals two but it has a value of one now let's try this one let's say that f of x is equal to x cubed minus 12x go ahead and use the first derivative test to determine all of the relative extrema in this function so we're going to follow the same process we're going to find the first derivative set it equal to zero and get the critical points so the derivative of x cubed is 3x squared and the derivative of 12x is 12. now let's take out the gcf which is 3. 3x squared divided by 3 is x squared negative 12 divided by 3 is negative 4. and so we could factor x squared minus four using the difference of squares technique so the square root of x squared is x the square root of four is two one is going to be positive and the other will be negative so we have two critical numbers positive two and negative two so now let's create a sign chart and let's put the critical numbers in ascendant order so let's pick a number that's greater than two let's try three three plus two is positive three minus two is positive if you multiply two positive numbers it will give you a positive result now if we pick a number that's between negative two and two let's try zero for example zero plus two is positive and zero minus two is negative a positive number times a negative number that will give you a negative number and then if we pick a number that's less than negative two like negative three negative three plus two that's negative one so that's negative negative three minus two is another negative number if you multiply two negative numbers you'll get a positive number now let's get rid of some stuff so which of these two critical points is the relative maximum and which one is the relative minimum so let's consider negative two the function is increasing and then it decreases so that is a maximum and for positive two it's a decrease and it's negative first and then it's increasing it's positive later that is a minimum and that's a quick and simple way to determine the relative extrema of the function so now let's get the y coordinates so let's start with f of positive two two to the third that's two times two times two that's eight and twelve times two is twenty four so 8 minus 24 is negative 16 which is a relatively low number so that can correspond to the minimum now let's try f of negative 2 so it's a negative 2 to the third minus 12 times negative 2. negative 2 times negative 2 times negative 2 that's negative 8 negative 12 times negative 2 is positive 24 so this gives us positive 16. so the maximum is located at x equals negative 2 but it has a value of 16 and the minimum is positive 2 negative 16. so we can see that the minimum has a much lower value than the maximum and so that's it for this particular example that's how you can use the first derivative test to determine the relative extrema of this function let's work on one more example given the function 3x to the fourth minus 8x cubed plus 6x squared find the relative extrema of this function so just like before let's start with the first derivative the derivative of x to the fourth is 4x cubed and the derivative of x cubed is 3x squared and for x squared the derivative of that is two x so three times four is twelve eight times three is twenty four and six times two is twelve so let's set it equal to zero and let's begin by taking out the gcf the greatest common factor is 12x so 12x cubed divided by 12x is x squared negative 24x squared divided by 12x is negative 2x and then 12x divided by itself is one so how can we factor this trinomial so we need to find two numbers that multiply to one but add up to negative two and so that's going to be negative one and negative one so this is gonna be x minus 1 times x minus 1. so the first derivative is going to be 12x times x minus 1 squared so i'm just going to rewrite that up here so let's set each factor equal to zero so if we set 12x equal to zero the first critical number is x is zero and if we set x minus one squared equal to zero we can clearly see that when x is one the whole thing will equal zero so we have two critical numbers zero and one in this example now let's make a sign chart and let's put the critical numbers in ascendant order so the critical numbers are zero and one now let's pick a number that's greater than one let's try two so twelve times two that's a positive number and two minus 1 squared is going to be positive now what do you think the next one is going to be what sign should we put here so we're traveling across this particular critical number and notice that the multiplicity is even if the multiplicity is even it will not change sign but for this one the multiplicity is odd so it's going to change the sign across the zero which means this is going to be negative if you plug in let's say 0.5 12 times 0.5 is positive 0.5 minus 1 is a negative once you square it it's positive if you multiply 2 positive numbers you'll get a positive result but let's say if you plug in negative 1 negative 1 minus 1 is negative but squared is positive 12 times negative 1 is negative a negative times a positive will give us the negative result and so you could use the multiplicities to figure out the rest if you know the first sign so you don't have to use three test points only one now looking at this particular critical number it changes from negative to positive so we have a minimum at zero and at one it's increasing and then it increases again so we don't have a minimum or maximum at one so we just have a minimum at zero and to find the y value if we plug in zero into this there's no constants everything contains an x so y would also be zero so the minimum is located at the origin it's zero comma zero for this particular example and so that's it for this video now you know how to use the first derivative test to determine the relative extrema of a function you |
17207 | https://jocg.org/index.php/jocg/article/download/2927/2620/7220 | JoCG 5(1), 86–103, 2014 86 Journal of Computational Geometry jocg.org
MINIMUM CONVEX PARTITIONS AND MAXIMUM EMPTY POLYTOPES ∗
Adrian Dumitrescu, † Sariel Har-Peled, ‡ and Csaba D. T´ oth §
Abstract. Let S be a set of n points in Rd. A Steiner convex partition is a tiling of conv( S) with empty convex bodies. For every integer d, we show that S admits a Steiner convex partition with at most ⌈(n − 1) /d ⌉ tiles. This bound is the best possible for points in general position in the plane, and it is best possible apart from constant factors in every fixed dimension d ≥ 3. We also give the first constant-factor approximation algorithm for computing a minimum Steiner convex partition of a planar point set in general position. Establishing a tight lower bound for the maximum volume of a tile in a Steiner convex partition of any n points in the unit cube is equivalent to a famous problem of Danzer and Rogers. It is conjectured that the volume of the largest tile is ω(1 /n ). Here we give a (1 − ε)-approximation algorithm for computing the maximum volume of an empty convex body amidst n given points in the d-dimensional unit box [0 , 1] d.
Keywords : Steiner convex partition, Horton set, epsilon-net, lattice polytope, approxima-tion algorithm.
1 Intro du tion Let S be a set of n ≥ d + 1 points in Rd, d ≥ 2. A convex body C is empty if its interior is disjoint from S. A convex partition of S is a partition of the convex hull conv( S) into empty convex bodies (called tiles ) such that the vertices of the tiles are in S. In a Steiner convex partition of S the vertices of the tiles are arbitrary: they can be points in S or Steiner points . For instance, any triangulation of S is a convex partitions of S, where the convex bodies are simplices, and so conv( S) can be always partitioned into O(n⌊d/ 2⌋) empty convex tiles [ 13 ]. In this paper, we study the minimum number of tiles that a Steiner convex partition of every n points in Rd admits, and the maximum volume of a single tile for a given point set. The research is motivated by a longstanding open problem by Danzer and Rogers [ 2,
∗A preliminary version of this paper appeared in the Proceedings of the 13th Scandinavian Symposium and Workshops on Algorithm Theory, Helsinki, Finland, July, 2012. The first author was partially supported by NSF grant DMS-1001667, the second author was partially supported by NSF AF award CCF-0915984, and the third author was partially supported by NSERC grant RGPIN 35586.
†Department of Computer Science, University of Wisconsin–Milwaukee, WI 53201-0784, USA.
dumitres@uwm.edu
‡Department of Computer Science, University of Illinois at Urbana–Champaign, Urbana, IL 61801-2302, USA. sariel@cs.uiuc.edu
§Department of Mathematics, California State University, Northridge, Los Angeles, CA 91330-8313; and Department of Computer Science, Tufts University, Medford, MA 02155, USA. cdtoth@acm.org JoCG 5(1), 86–103, 2014 87 Journal of Computational Geometry jocg.org
6, 10 , 19 , 35 ]: What is the maximum volume of an empty convex body C ⊂ [0 , 1] d that can be found amidst any set S ⊂ [0 , 1] d of n points in a unit cube? The current best bounds are Ω(1 /n ) and O(log n/n ), respectively (for a fixed d). The lower bound, for instance, can be deduced by decomposing the unit cube by n parallel hyperplanes, each containing at least one point, into at most n + 1 empty convex bodies. The upper bound is tight apart from constant factors for n randomly and uniformly distributed points in the unit cube. It is suspected that the largest volume is ω(1 /n ) in any dimension d ≥ 2, i.e., the ratio between this volume and 1 /n tends to ∞.For a convex body C in Rd, denote by vol( C) the Lebesgue measure of C, i.e., its area when d = 2, or its volume when d ≥ 3.
Minimum numb er of tiles in a onvex pa rtition. A minimum convex partition of S is a convex partition of S with a minimum number of tiles. Denote this number by fd(S). Further define (by slightly abusing notation)
fd(n) = max {fd(S) : S ⊂ Rd, |S| = n}.
Similarly define a minimum Steiner convex partition of S as one with a minimum number of tiles and let gd(S) denote this number. We also define
gd(n) = max {gd(S) : S ⊂ Rd, |S| = n}.
There has been substantial work on estimating f2(n), and computing f2(S) for a given set S in the plane. It has been shown successively that f2(n) ≤ 10 n−18
7 by Neumann-Lara et al. [ 34 ], f2(n) ≤ 15 n−24
11 by Knauer and Spillner [ 29 ], and f2(n) ≤ 4n−6
3 for n ≥ 6by Sakai and Urrutia [ 36 ]. From the other direction, Garc´ ıa-L´ opez and Nicol´ as [ 20 ] proved that f2(n) ≥ 12 n−22
11 , for n ≥ 4, thereby improving an earlier lower bound f2(n) ≥ n + 2 by Aichholzer and Krasser [ 1]. Knauer and Spillner [ 29 ] have also obtained a 30
11 -factor approximation algorithm for computing a minimum convex partition for a given set S ⊂ R2,no three of which are collinear. There are also a few exact algorithms, including three fixed-parameter algorithms [ 17 , 21 , 38 ]. The state of affairs is much different in regard to Steiner convex partitions. As pointed out in [ 15 ], no corresponding results are known for the variant with Steiner points. Here we take the first steps in this direction, and obtain the following results.
Theorem 1. For n ≥ d + 1 , we have gd(n) ≤ ⌈ n−1
d
⌉. For d = 2 , this bound is the best possible, that is, g2(n) = ⌈(n − 1) /2⌉; and for every fixed d ≥ 2, we have gd(n) = Ω( n).
We say that a set of points in Rd is in general position if every k-dimensional affine subspace contains at most k + 1 points for 0 ≤ k < d . We show that in the plane every Steiner convex partition for n points in general position, i of which lie in the interior of the convex hull, has Ω( i) tiles. This leads to a simple constant-factor approximation algorithm.
Theorem 2. Given a set S of n points in general position in the plane, a ratio 3 ap-proximation of a minimum Steiner convex partition of S can be computed in O(n log n)
time. JoCG 5(1), 86–103, 2014 88 Journal of Computational Geometry jocg.org
The average volume of a tile in a Steiner convex partition of n points in the unit cube [0 , 1] d is an obvious lower bound for the maximum possible volume of a tile, and for the maximum volume of any empty convex body C ⊂ [0 , 1] d. The lower bound gd(n) = Ω( n)in Theorem 1 shows that the average volume of a tile is O(1 /n ) in some instances, where the constant of proportionality depends only on the dimension. This implies that a simple “averaging” argument is not a viable avenue for finding a solution to the problem of Danzer and Rogers.
Maximum empt y p olytop e among n p oints in a unit ub e. In the second part of the paper, we consider the following problem: Given a set of n points in a rectangular box
B in Rd, find a maximum-volume empty convex body C ⊂ B. Since the ratio between volumes is invariant under affine transformations, we may assume without loss of generality that B = [0 , 1] d. We therefore have the problem of computing a maximum volume empty convex body C ⊂ [0 , 1] d for a set of n points in [0 , 1] d. It can be argued that the maximum volume empty convex body is a polytope, however, the number and location of its vertices is unknown and this represents the main difficulty. For d = 2 there is a polynomial-time exact algorithm (see Section 5) while for d ≥ 3 we are not aware of any exact algorithm. Thus the problem of finding faster approximations naturally suggests itself. There exist exact algorithms for some related problems. Eppstein et al. [ 16 ] find the maximum area empty convex k-gon with vertices among n points in O(kn 3) time, if it exists. As a byproduct, a maximum area empty convex polygon with vertices among
n given points can be computed exactly in O(n4) time with their dynamic programming algorithm. The running time was subsequently improved to O(n3 log n) by Fischer [ 18 ] and then to O(n3) by Bautista-Santiago et al. [ 9]. By John’s ellipsoid theorem [ 32 ], the maximum volume empty ellipsoid in [0 , 1] d
gives a 1 /d d-approximation. Here we present a (1 − ε)-approximation for a maximum volume empty convex body Copt by first guessing a good approximation of the bounding hyperrectangle of Copt of minimum volume, and then finding a sufficiently close approx-imation of Copt inside it. We obtain the following two approximation algorithms. The planar algorithm runs in quadratic time in n, however, the running time degrades with the dimension.
Theorem 3. Given a set S of n points in [0 , 1] 2 and parameter ε > 0, one can compute an empty convex body C ⊆ [0 , 1] 2, such that vol( C) ≥ (1 − ε)vol( Copt ). The running time of the algorithm is O(ε−6n2).
Theorem 4. Given a set S of n points in [0 , 1] d, d ≥ 3, and a parameter ε > 0, one can compute an empty convex body C ⊆ [0 , 1] d, such that vol( C) ≥ (1 − ε)vol( Copt ). The running time of the algorithm is
exp
(
O
(
ε−d(d−1) /(d+1) log ε−1))
n1+ d(d−1) /2 log d n.
As far as the problem of Danzer and Rogers is concerned, one need not consider convex sets—it suffices to consider simplices—and for simplices the problems considered are much simpler. Specifically, every convex body C in Rd, d ≥ 2, contains a simplex TJoCG 5(1), 86–103, 2014 89 Journal of Computational Geometry jocg.org
of volume vol( T ) ≥ vol( C) /(d + 2) d [30 ]. That is, for fixed d, the largest empty simplex amidst n points in the unit cube [0 , 1] d yields a constant-factor approximation of the largest volume convex body (polytope) amidst the same n points. Consequently, the asymptotic dependencies on n of the volumes of the largest empty simplex and convex body are the same. For d = 2 there is a polynomial-time exact algorithm for computing the largest empty triangle amidst n points in [0 , 1] 2 (see Section 5) while for d ≥ 3 we are not aware of any exact algorithm for computing the largest empty simplex amidst n points in [0 , 1] d.
Related work. Decomposing polygonal domains into convex sub-polygons has been also studied extensively. We refer to the article by Keil [ 26 ] for a survey of results up to the year 2000. For instance, when the polygon may contain holes, obtaining a minimum convex partition is NP-hard, regardless of whether Steiner points are allowed. For polygons without holes, Chazelle and Dobkin [ 12 ] obtained an O(n + r3) time algorithm for the problem of decomposing a polygon with n vertices, r of which are reflex, into convex parts, with Steiner points permitted. Keil [ 26 ] notes that although there are an infinite number of possible locations for the Steiner points, a dynamic programming approach is amenable to obtain an exact (optimal) solution; see also [ 27 , 37 ]. Fevens et al. [ 17 ] designed a polynomial time algorithm for computing a minimum convex partition for a given set of n points in the plane if the points are arranged on a constant number of convex layers. The problem of minimizing the total Euclidean length of the edges of a convex partition has been also considered. Grantson and Levcopoulos [ 20 ], and Spillner [ 38 ] proved that the shortest convex partition and Steiner convex partition problems are fixed parameter tractable, where the parameter is the number of points of P
lying in the interior of conv( P ). Dumitrescu and T´ oth [ 15 ] proved that every set of n points in R2 admits a Steiner convex partition which is at most O(log n/ log log n) times longer than the minimum spanning tree, and this bound cannot be improved. Without Steiner points, the best upper bound for the ratio of the minimum length of a convex partition and the length of a minimum spanning tree (MST) is O(n) [ 28 ]. A largest area convex polygon contained in a given (non-convex) polygon with n
vertices can be found by the algorithm of Chang and Yap [ 11 ] in O(n7) time. The problem is known as the potato-peeling problem . On the other hand, a largest area triangle contained in a simple polygon with n vertices, can be found by the algorithm of Melissaratos and Souvaine [ 33 ] in O(n4) time. Hall-Holt et al. [ 22 ] compute a constant approximation in time O(n log n). The same authors show how to compute a (1 − ε)-approximation of the largest fat triangle inside a simple polygon (if it exists) in time O(n). Given a triangulated polygon (with possible holes) with n vertices, Aronov et al. [ 4] compute the largest area convex polygon respecting the triangulation edges in O(n2) time. For finding a maximum volume empty axis-parallel box amidst n points in [0 , 1] d,Backer and Keil [ 5] reported an algorithm with worst-case running time of O(nd log d−2 n). An empty axis-aligned box whose volume is at least (1 −ε) of the maximum can be computed in O
(( 8ed
ε2
)d n log d n
)
time by the algorithm of Dumitrescu and Jiang [ 14 ]. Lawrence and Morris [ 31 ] studied the minimum integer kd(n) such that the com-plement Rd \ S of any n-element set S ⊂ Rd, not all in a hyperplane, can be covered by JoCG 5(1), 86–103, 2014 90 Journal of Computational Geometry jocg.org
kd(n) convex sets. They prove kd(n) = Ω(log n/d log log n). It is known that covering the complement of n uniformly distributed points in [0 , 1] d requires Ω( n/d log n) convex sets, which follows from the upper bound in the problem of Danzer and Rogers.
2 Combinato rial b ounds In this section we prove Theorem 1. We start with the upper bound. The following simple algorithm returns a Steiner convex partition with at most ⌈(n − 1) /d ⌉ tiles for any n points in Rd.Algorithm A1 :
Step 1. Compute the convex hull R ← conv( S) of S. Let A ⊆ S be the set of hull vertices, and let B = S \ A denote the remaining points.
Step 2. Compute conv( B), and let H be the supporting hyperplane of an arbitrary ( d−1)-dimensional face of conv( B). Denote by H+ the halfspace that contains B, and
H− = Rd \ H+. The hyperplane H contains d points of B, and it decomposes R into two convex bodies: R ∩ H− is empty and R ∩ H+ contains all points in B \ H. Update
B ← B \ H and R ← R ∩ H+.
Step 3. Repeat Step 2 with the new values of R and B until B is the empty set. (If
|B| < d , then any supporting hyperplane of B completes the partition.)
31 2
Figure 1: Steiner convex partitions with Steiner points drawn as hollow circles. Left: A Steiner convex partition of a set of 13 points. Middle: A Steiner partition of a set of 12 points into three tiles. Right: A Steiner partition of the same set of 12 points into 4 tiles, generated by Algorithm
A1 (the labels reflect the order of execution).
It is obvious that the algorithm generates a Steiner convex partition of S. An illustration of Algorithm A1 on a small planar example appears in Figure 1 (right). Let h
and i denote the number of hull and interior points of S, respectively, so that n = h + i.Each hyperplane used by the algorithm removes d interior points of S (with the possible exception of the last round if i is not a multiple of d). Hence the number of convex tiles is 1 + ⌈i/d ⌉, and we have 1 + ⌈i/d ⌉ = ⌈(i + d)/d ⌉ ≤ ⌈ (n − 1) /d ⌉, as required.
Lo wer b ound in the plane. A matching lower bound in the plane is given by the following construction. For n ≥ 3, let S = A ∪ B, where A is a set of 3 non-collinear points in the plane, and B is a set of n − 3 points that form a regular ( n − 3)-gon in the interior JoCG 5(1), 86–103, 2014 91 Journal of Computational Geometry jocg.org
of conv( A), so that conv( S) = conv( A) is a triangle. If n = 3, then conv( S) is an empty triangle, and g2(S) = 1 = ⌈(n − 1) /2⌉. If 4 ≤ n ≤ 5, S is not in convex position, and so
g2(S) ≥ 2 = ⌈(n − 1) /2⌉. Suppose now that n ≥ 6. Consider an arbitrary convex partition of S. Let o be a point in the interior of conv( B) such that the lines os , s ∈ S, do not contain any edges of the tiles. Refer to Figure 2.For each point s ∈ B, choose a reference point r(s) ∈ R2 on the ray −→os in conv( A) \ conv( B)sufficiently close to point s, and lying in the interior of a tile. Note that the convex tile containing o cannot contain any reference points. We claim that any tile contains at most 2 reference points. This immediately implies g2(S) ≥ 1 + ⌈(n − 3) /2⌉ = ⌈(n − 1) /2⌉.Suppose, to the contrary, that a tile τ contains 3 reference points r1, r 2, r 3, corre-sponding to the points s1, s 2, s 3. Refer to Figure 2. Note that o cannot be in the interior of
o s1
s2
s3
r1
r2
r3
Figure 2: Lower bound construction in R2.
τ , otherwise τ would contain all points s1, s 2, s 3 in its interior. Hence conv( {o, s 1, s 2, s 3}) is a quadrilateral, and conv( {o, r 1, r 2, r 3}) is also a quadrilateral, since the reference points are sufficiently close to the corresponding points in B. We may assume w.l.o.g. that vertices of conv( {o, s 1, s 2, s 3}) are o, s1, s2, s3 in counterclockwise order. Then s2 lies in the interior of conv( {r1, r 2, r 3}). Hence the tile containing r1, r2, and r3, must contain point s2 in its interior, a contradiction. We conclude that every tile τ contains at most 2 reference points, as required.
Lo wer b ounds fo r d ≥ 3. A similar construction works in for any d ≥ 2, but the lower bound no longer matches the upper bound gd(n) ≤ ⌈ (n − 1) /d ⌉ for d ≥ 3. Recall that a Horton set [25 ] is a set S of n points in the plane such that the convex hull of any 7 points is non-empty. Valtr [ 39 ] generalized Horton sets to Rd. For every d ∈ N,there exists a minimal integer h(d) with the property that for every n ∈ N there is a set S
of n points in general position in Rd such that the convex hull of any h(d) + 1 points in S
is non-empty. It is known that h(2) = 6, and Valtr proved that h(3) ≤ 22, and in general that h(d) ≤ 2d−1(N (d − 1) + 1), where N (k) is the product of the first k primes. We construct a set S of n ≥ d + 1 points in Rd as follows. Let S = A ∪ B, where
A is a set of d + 1 points in general position in Rd, and B is a generalized Horton set of JoCG 5(1), 86–103, 2014 92 Journal of Computational Geometry jocg.org
n − (d + 1) points in the interior of conv( A), such that the interior of any h(d) + 1 points from B contains some point in B.Consider an arbitrary Steiner convex partition of S. Every point b ∈ B is in the interior of conv( S), and so it lies on the boundary of at least 2 convex tiles. For each b ∈ B,place two reference points in the interiors of 2 distinct tiles incident to b. Every tile contains at most h(d) reference points. Indeed, if a tile contains h(d) + 1 reference points, then it is incident to h(d) + 1 points in B, and some point of B lies in the interior of the convex hull of these points, a contradiction. There are 2( n − d − 1) reference points, and every tile contains at most h(d) of them. So the number of tiles is at least ⌈2( n − d − 1) /h (d)⌉. Consequently, for every fixed d ≥ 2, we have gd(n) = Ω( n).
3 App ro ximating the minimum Steiner onvex pa rtition in R2
In this section we prove Theorem 2 by showing that our simple-minded algorithm A1 from Section 2 achieves a constant-factor approximation in the plane if the points in S are in general position.
Approximation ratio. Recall that algorithm A1 computes a Steiner convex partition of conv( S) into at most 1 + ⌈i/ 2⌉ parts, where i stands for the number of interior points of S.If i = 0, the algorithm computes an optimal partition, i.e., ALG = OPT = 1. Assume now that i ≥ 1. Consider an optimal Steiner convex partition Π of S with OPT tiles. We construct a planar multigraph G = ( V, E ) as follows. The faces of G are the convex tiles and the exterior of conv( S) (the outer face). The vertices V are the points in the plane incident to at least 3 faces (counting the outer face as well). Since i ≥ 1, G is non-empty and we have |V | ≥ 2. Each edge in E is a Jordan arc on the common boundary of two faces. An edge between two bounded faces is a straight line segment, and so it contains at most two interior points of S. An edge between the outer face and a bounded face is a convex arc, containing hull points from S. Double edges are possible if two vertices of the outer face are connected by a straight line edge and a curve edge along the boundary—in this case these two parallel edges bound a convex face. No loops are possible in G. Since Π is a convex partition, G is connected. Let v, e, and f , respectively, denote the number of vertices, edges, and bounded (convex) faces of G; in particular, f = OPT. By Euler’s formula for planar multigraphs, we have v − e + f = 1, that is, f = e − v + 1. By construction, each vertex of G is incident to at least 3 edges, and every edge is incident to two vertices. Therefore, 3 v ≤ 2e, or v ≤ 2e/ 3. Consequently, f = e − v + 1 ≥ e − 2e/ 3 + 1 = e/ 3 + 1. Since S is in general position, each straight-line edge of G contains at most 2 interior points from S. Curve edges along the boundary do not contain interior points. Hence each edge in E is incident to at most two interior points in S, thus i ≤ 2e. Substituting this into the previous inequality on f
yields OPT = f ≥ e/ 3 + 1 ≥ i/ 6 + 1. Comparing this lower bound with the upper bound ALG ≤ ⌈ i/ 2⌉+1, we conclude that ALG
OPT ≤ ⌈i/ 2⌉ + 1
i/ 6 + 1 ≤ 3 i + 3
i + 6 < 3, and the approximation ratio of 3 follows. JoCG 5(1), 86–103, 2014 93 Journal of Computational Geometry jocg.org
Tightness of the app ro ximation ratio. We first show that the ratio 3 established above is tight for Algorithm A1 . We construct a planar point set S as follows. Refer to Fig-ure 3 (left). Consider a large (say, hexagonal) section of a hexagonal lattice. Place Steiner vertices at the lattice points, and place two points in S on each lattice edge. Slightly per-turb the lattice, and add a few more points in S near the boundary, and a few more Steiner points, so as to obtain a Steiner convex partition of S with no three points collinear. Denote by v, e, and f , the elements of the planar multigraph G as before. Since we consider a large lattice section, we have v, e, f → ∞ . We write a ∼ b, whenever a/b → 1. As before, we have f + v = e + 1, and since each non-boundary edge is shared by two convex faces, we have e ∼ 6f / 2 = 3 f . By construction, i ∼ 2e ∼ 6f , hence f ∼ i/ 6. Therefore the convex partition constructed above has f ∼ i/ 6, while Algorithm A1 constructs one with about
i/ 2 faces. Letting e → ∞ , then i → ∞ , and the ratio ALG /OPT approaches 3 in the limit: ALG /OPT ∼ (i/ 2) /(i/ 6) = 3.
Figure 3: Left: two points on each edge of a section of a perturbed hexagonal lattice in R2, and four extra vertices of a bounding box. Right: Points in general position on a saddle surface in R3.
Time analysis. Algorithm A1 can be implemented to run in O(n log n) time for a set S of
n points in the plane. We employ the semi-dynamic (delete only) convex hull data structure of Hershberger and Suri [ 24 ]. This data structure supports point deletion in O(log n) time, and uses O(n) space and O(n log n) preprocessing time. We maintain the boundary of a convex polygon R in a binary search tree, a set B ⊂ S of points lying in the interior of
R, and the convex hull conv( B) with the above semi-dynamic data structure [ 24 ]. Initially,
R = conv( S), which can be computed in O(n log n) time; and B ⊂ S is the set of interior points. In each round of the algorithm, consider the supporting line H of an arbitrary edge
e of conv( B) such that B lies in the halfplane H+. The two intersection points of H with the boundary of R can be computed in O(log n) time. At the end of the round, we can update B ← B \H and conv( B) in O(k log n) time, where k is the number of points removed from B; and we can update R ← R ∩ H+ in O(log n) time. Every point is removed from B
exactly once, and the number of rounds is at most ⌈(n − 3) /2⌉, so the total update time is
O(n log n) throughout the algorithm. JoCG 5(1), 86–103, 2014 94 Journal of Computational Geometry jocg.org
Rema rk. Interestingly enough, in dimensions 3 and higher, Algorithm A1 does not give a constant-factor approximation. For every integer n, one can construct a set S of n points in general position in R3 such that i = n − 4 of them lie in the interior of conv( S), but the minimum Steiner convex partition has only O(√n) tiles. In contrast, Algorithm A1
computes a Steiner partition with i/ 3 = ( n − 4) /3 convex tiles. We first construct the convex tiles, and then describe the point set S. Specifically, S
consists of 4 points of a large tetrahedron, and 3 points in general position on the common boundary of certain pairs of adjacent tiles. Let k = ⌈√(n − 4) /3⌉. Place ( k + 1) 2 Steiner points ( a, b, a 2 − b2) on the saddle surface z = x2 − y2 for pairs of integers ( a, b ) ∈ Z2, −⌊ k/ 2⌋ ≤ a, b ≤ ⌈ k/ 2⌉. The four points
{(x, y, x 2 − y2) : x ∈ { a, a + 1 }, y ∈ { b, b + 1 }} form a parallelogram for every ( a, b ) ∈ Z2,
−⌊ k/ 2⌋ ≤ a, b ≤ ⌈ k/ 2⌉ − 1. Refer to Figure 3 (right). These parallelograms form a terrain over the region {(x, y ) : −⌊ k/ 2⌋ ≤ x, y ≤ ⌈ k/ 2⌉} . Note that no two parallelograms are coplanar. Subdivide the space below this terrain by vertical planes x = a, −⌊ k/ 2⌋ ≤ a ≤⌈k/ 2⌉. Similarly, subdivide the space above this terrain by planes y = b, −⌊ k/ 2⌋ ≤ b ≤⌈k/ 2⌉. We obtain 2 k interior-disjoint convex regions, k above and k below the terrain, such that the common boundary of a region above and a region below is a parallelogram of the terrain. The points in R3 that do not lie above or below the terrain can be covered by 4 convex wedges. Enclose the terrain in a sufficiently large tetrahedron T . Clip the 2 k convex regions and the 4 wedges into the interior of T . These 2 k + 4 convex bodies tile T . Choose 3 noncollinear points of S in each of the k2 parallelograms, such that no 4 points are coplanar and no 2 are collinear with vertices of T . Let the point set S be the set of 4 vertices of the large tetrahedron T and the 3 k2 points selected from the parallelograms.
4 App ro ximating the maximum empt y onvex b o dy Let S be a set of points in the unit cube [0 , 1] d ⊆ Rd. Our task is to approximate the largest convex body C ⊆ [0 , 1] d that contains no points of S in its interior. Let Copt = Copt (S)denote this body.
4.1 App ro ximation by the dis rete hull In the following, assume that m > 0 is some integer, and consider the grid point set
G(m) =
{
(i1, . . . , i d)/m
∣∣∣ i1, . . . , i d ∈ { 0, 1, . . . , m }
}
.
Let S ⊆ [0 , 1] d be a point set, and let Copt be the corresponding largest empty convex body in [0 , 1] d. Given a grid G(m), we call conv( Copt ∩ G (m)) the discrete hull of Copt [23 ]. We need the following easy lemma.
Lemma 1. Let C ⊆ [0 , 1] d be a convex body and D = conv( C ∩ G (m)) . Then we have
vol( C) − vol( D) = O(1 /m ), where the constant of proportionality depends only on d.JoCG 5(1), 86–103, 2014 95 Journal of Computational Geometry jocg.org
Proof. Consider a point p ∈ C, and the cube p + [ −2, 2] d/m centered at p with side length 4/m . If this cube is contained in C, then all grid points of the grid cell of G(m) containing
p are in C, and p lies in D. Therefore, for every point p ∈ C \ D, the cube p + [ −2, 2] d/m is not contained in C. By convexity, at least one of the vertices of the cube p + [ −2, 2] d/m lies outside of C. Therefore, the distance from p to the boundary of C is at most the distance from p to a corner of this cube, which is 2 √d/m .It follows that all the points in the corridor C \ D are at distance at most 2 √d/m
from the boundary of C. The volume of the boundary of C is bounded from above by the volume of the boundary of the unit cube, namely 2 d. As such, the volume of this corridor is vol( ∂C ) 2 √d/m ≤ (2 d)(2 √d/m ) = O(d3/2/m ). For a fixed d, this is O(1 /m ), as claimed.
Lemma 1 implies that if vol( Copt ) ≥ ρ, in order to obtain a (1 − ε)-approximation, we can concentrate our search on convex polytopes that have their vertices at grid points in G(m), where m = O(1 /(ερ )). If ρ is a constant, then the maximum volume empty lattice polytope in G(m) with m = O(1 /ε ) is an (1 − ε)-approximation for Copt . However, for arbitrary vol( Copt ) = Ω(1 /n ), a much finer grid would be necessary to achieve this approximation.
4.2 An initial brute fo r e app roa hIn this section we present approximation algorithms (for all d) relying on Lemma 1 alone, approximating the maximum volume empty polytope by a lattice polytope in a sufficiently fine lattice (grid). We shall refine our technique in Subsections 4.3 and 4.4 .For the plane, we take advantage of the existence of an efficient solution for a related search problem. Refining a natural dynamic programming approach by Eppstein et al. [ 16 ]and Fischer [ 18 ], Bautista-Santiago et al. [ 9] obtained the following result.
Lemma 2 ([ 9]) . Given a set S of m points and a set Q of O(m) points in the plane, one can compute a convex polygon with the largest area with vertices in S that does not contain any point of Q in its interior in O(m3) time.
Remark. The algorithm has the same running time if Q is a set of O(m) forbidden rectangles. The combination of Lemmas 1 and 2 readily yields an approximation algorithm for the plane, whose running time depends on vol( Copt ).
Lemma 3. Given a set S ⊆ [0 , 1] 2 of n points, such that vol( Copt ) ≥ ρ, and a parameter
ε > 0, one can compute an empty convex body C ⊆ [0 , 1] 2 such that vol( C) ≥ (1 −ε)vol( Copt ).The running time of the algorithm is O(n + 1 /(ερ )6).Proof. Consider the grid G(m) with m = O(1 /(ερ )). By Lemma 1 we can restrict our search to a grid polygon. Going a step further, we mark all the grid cells containing points of S as forbidden. Arguing as in Lemma 1, one can show that the area of the largest convex grid polygon avoiding the forbidden cells is at least vol( Copt ) − c/m , where c is a constant. JoCG 5(1), 86–103, 2014 96 Journal of Computational Geometry jocg.org
We now restrict our attention to the task of finding a largest polygon. We have a set Q of O(m2) grid points that might be used as vertices of the grid polygon, and a set of
O(m2) grid cells that cannot intersect the interior of the computed polygon. By Lemma 2,a largest empty polygon can be found in O(m6) time. Setting m = O(1 /(ερ )), we get an algorithm with overall running time O(n + 1 /(ερ )6).
For dimensions d ≥ 3, we are not aware of any analogue of the dynamic programming algorithm in Lemma 2. Instead, we use a brute force approach that enumerates all feasible subsets of a sufficiently fine grid.
Lemma 4. Given a set S ⊆ [0 , 1] d of n points such that vol( Copt ) ≥ ρ, and a parameter
ε > 0, one can compute an empty convex body C ⊆ [0 , 1] d, such that vol( C) ≥ (1 −
ε)vol( Copt ). The running time of the algorithm is O(n) + exp (O(md(d−1) /(d+1) log m)) ,where m = O(1 /(ερ )) and d is fixed. Proof. Consider the grid G(m) with m = O(1 /(ερ )). Let X be the set of vertices of all grid cells of G(m) that contain some point from S (i.e., 2 d vertices per cell). Note that |X| =
O(md). Andrews [ 3] proved that a convex lattice polytope of volume V has O(V (d−1) /(d+1) )vertices. Hence a convex lattice polytope in G(m) has O(md(d−1) /(d+1) ) vertices. By the well-known inequality ∑ki=0
(ni
) ≤ ( en
k )k, the number of subsets of size O(md(d−1) /(d+1) )from G(m) is
O(md(d−1) /(d+1) )
∑
i=0
(md
i
)
≤
(
m2d/ (d+1) )O(md(d−1) /(d+1) )
≤ exp
(
O(md(d−1) /(d+1) log m)
)
.
For each such candidate subset G of size O(md(d−1) /(d+1) ), test whether conv( G) is empty of points from X. For each point in X, the containment test reduces to a linear pro-gram that can be solved in time polynomial in m. Returning the subset with the largest hull volume found yields the desired approximation. The runtime of the algorithm is exp (O(md(d−1) /(d+1) log m)) .
B´ arany and Vershik [ 8] proved that there are exp (O(md(d−1) /(d+1) )) convex lattice polytopes in G(m). If the polytopes can also be enumerated in this time (as in the planar case [ 7]), then the runtime in Lemma 4 reduces accordingly.
4.3 A faster app ro ximation in the plane If Copt is long and skinny (e.g., ρ is close to 1 /n ), then the uniform grid G(m) we used in Lemmas 3 and 4 is unsuitable for finding a (1 − ε)-approximation efficiently. Instead, we employ a rotated and stretched grid (an affine copy of G(m)) that has similar orientation and aspect ratio as Copt . This overcomes one of the main difficulties in obtaining a good approximation. Since we do not know the shape and orientation of Copt , we guess these parameters via the minimum area rectangle containing Copt .JoCG 5(1), 86–103, 2014 97 Journal of Computational Geometry jocg.org
Lemma 5. Given a set S ⊆ [0 , 1] 2 of n points such that vol( Copt ) ≥ ρ, and a parameter
ε > 0, one can compute an empty convex body C ⊆ [0 , 1] 2 such that vol( C) ≥ (1 −ε)vol( Copt ).The running time of the algorithm is O(ρ−1(n + ρ−1ε−6)) .Proof. The idea is to first guess a rectangle R that contains Copt such that vol( Copt ) is at least a constant fraction of the area of vol( R), and then to apply Lemma 3 to the rectangle
R (as the unit square) to get the desired approximation. Let B0 be the minimum area rectangle (of arbitrary orientation) that contains Copt ;see Figure 4 (left). We guess an approximate copy of B0. In particular, we guess the lengths of the two sides of B0 (up to a factor of 2) and the orientation of B0 (up to an angle of
O(1 /n )), and then try to position a scaled copy of the guessed rectangle so that that it fully contains Copt .
Copt
B0
B1
B1
B2
oxy
a b kB 2
Figure 4: Left: Copt , a minimum area rectangle B0, Copt ⊆ B0, and a minimum area rectangle
B1, B0 ⊆ B1, with canonical side lengths and the same orientation as B0. Right: Rectangle B1, a rotated copy B2 with the closest canonical orientation, and a minimum area scaled copy kB 2 such that B1 ⊆ kB 2.
Assume for convenience that n ≥ 10. We now show that vol( Copt ) ≥ √2/n , using Theorem 1. Augment the point set S with the four corners of the unit square [0 , 1] 2 into a set of n + 4 points. By Theorem 1, the augmented point set has a Steiner convex partition into at most g2(n + 4) = ⌈ n+4 −1
2
⌉ tiles. The area of the largest tile is at least that of the average tile in this partition, that is, vol( Copt ) ≥ 1/⌈ n+4 −1
2
⌉ ≥ 2
n+4
≥ 2√2n =
√2
n
, for n ≥ 10. Therefore, we may assume that vol( Copt ) ≥ ρ ≥ √2/n .Denote by a and b the lengths of the two sides of B0, where a ≤ b. It is clear that
b ≤ √2, the diameter of the unit square. We also have a = vol( B0) /b ≥ vol( Copt ) /b ≥√2/(bn ) ≥ 1/n , hence the aspect ratio of B0 is b/a ≤ √2/a ≤ √2n.Assume now that 2 i−1ρ ≤ vol( Copt ) < 2iρ for some i = 1 , 2, . . . , ⌈log 2(√2n)⌉. If we want to guess the aspect ratio of B0 up to a factor of two, we need to consider only O(log ρ−1)possibilities. Indeed, we consider the canonical aspect ratios 2j for j = 0 , . . . , ⌈log 2(√2/ρ )⌉−
i, and canonical side lengths 2(i+j)/2√ρ and 2 (i−j)/2√ρ. Let B1 be a minimum area rectangle JoCG 5(1), 86–103, 2014 98 Journal of Computational Geometry jocg.org
with canonical side lengths and the same orientation as B0, so that B0 ⊆ B1.The orientation of a rectangle is given by the angle between one side and the x-axis. We approximate the orientation of B0 by canonical orientations α = rπ/ (5 · 2j ), for r =0, 1, . . . , 5·2j −1. Let B2 be a congruent copy of B1 rotated clockwise to the nearest canonical orientation about the center of B1. We show that B1 ⊂ 2B2, i.e., a scaled copy of B2 contains
B1. Let k ≥ 1 be the minimum scale factor such that B1 ⊆ kB 2. Refer to Figure 4 (right). Denote by o the common center of B1 and B2, let x be a vertex of B1 on the boundary of
kB 2, and let y be the corresponding vertex of kB 2. Clearly, sin( ∠xoy ) ≤ π/ (5 · 2j ) since we rotate by at most π/ (5 · 2j ). The aspect ratio of the rectangle kB 2 is cot( ∠oyx ) = 2 j . Since
∠oyx < π/ 4, we have sin( ∠oyx ) = tan( ∠oyx ) cos( ∠oyx ) ≥ 2−j cos π
4
= 2 −j−1/2 > π/ (5 · 2j ). The law of sines yields |ox | > |xy |; and we have |ox | + |xy | > |oy | by the triangle inequality. If follows that |oy | < 2|ox |, and so k ≤ 2 suffices. Summing over all possible areas, canonical aspect ratios, and orientations, the number of possibilities is
⌈log 2(√2/ρ )⌉∑
i=0
⌈log 2(√2/ρ )⌉−i
∑
j=0
5 · 2j ≤⌈log 2(√2/ρ )⌉∑
i=0
10 · 2⌈log 2(√2/ρ )⌉−i
≤ 20 · 2⌈log 2(√2/ρ )⌉ = O(ρ−1).
So far we have guessed the canonical side lengths and orientation of B2, however, we do not know its location in the plane. If a translated copy B2 + v of B2 intersects Copt ,then 3 B2 + v contains it, since Copt ⊆ B0 ⊆ B1 ⊆ 2B2. Consider an arbitrary tiling of the plane with translates of B2. By a packing argument, only O(1 /ρ ) translates intersect the unit square [0 , 1] 2. One of these translates, say B2 + v, intersects Copt , and hence the rectangle R = 3 B2 + v contains Copt .We can apply Lemma 3 to the rectangle R (as the unit square) to get the desired approximation. Specifically, let T : R2 → R2 be an affine transformation that maps R into the unit square [0 , 1] 2, and apply Lemma 3 for the point set T (S ∩R) and T (R∩[0 , 1] 2). The grid G(m) clipped in T (R ∩ [0 , 1] 2) corresponds to a stretched and rotated grid in R; each grid cell of G(m) is stretched to a rectangle with the same aspect ratio as R. The convex polygon Copt occupies a constant fraction of the area of R, and so the resulting running time is O(n1 + 1 /ε 6), where n1 is the number of points in R. Note that the algorithm of Lemma 3 partitions R into a grid with O(1 /ε 2) cells. The approximation algorithm only cares about which cells are empty and which are not. Since the algorithm of Lemma 3 is repeated for all possible positions of R, the overall running time is O(ρ−1(n + ρ−1ε−6)) , where the first factor of ρ−1 counts possible areas, canonical aspect ratios, and orientations, and the second factor of ρ−1 inside the parenthesis counts possible positions of the rectangle R.
Remark. If ρ = Ω(1) the running time of this planar algorithm is linear in n.Since ρ = Ω(1 /n ), the running time of the algorithm in Lemma 5 is bounded by
O(ε−6n2). We summarize our result for the plane in the following. JoCG 5(1), 86–103, 2014 99 Journal of Computational Geometry jocg.org
Theorem 3. Given a set S of n points in [0 , 1] 2 and a parameter ε > 0, one can compute an empty convex body C ⊆ [0 , 1] 2, such that vol( C) ≥ (1 − ε)vol( Copt ). The running time of the algorithm is O(ε−6n2).
4.4 A faster app ro ximation in higher dimensions Given a set S ⊆ [0 , 1] d of n points and a parameter ε > 0, we compute an empty convex body
C ⊆ [0 , 1] d such that vol( C) ≥ (1 −ε)vol( Copt ). Similarly to the algorithm in Subsection 4.3 ,we guess a hyperrectangle R that contains Copt such that vol( Copt ) is at least a constant fraction of vol( R); and then apply Lemma 4 to R (as the hypercube) to obtain the desired approximation. Consider a hyperrectangle B0 of minimum volume (and arbitrary orientation) that contains Copt . The d edges incident to a vertex of a hyperrectangle B are pairwise orthog-onal. We call these d directions the axes of B; and the orientation of B is the set of its axes. We next enumerate all possible discretized hyperrectangles of volume Ω(1 /n ), guess-ing the lengths of their axes, their orientations, and their locations as follows: Guess the length of every axis up to a factor of 2. Since the minimum length of an axis in our case is Ω(1 /n ) and the maximum is √d, the number of possible lengths to be considered is O(log d n). Let B1 be a hyperrectangle of minimum volume with canonical side lengths and the same orientation as B0 such that B0 ⊆ B1.We can discretize the orientation of a hyperrectangle as follows. We spread a dense set of points on the sphere of directions, with angular distance O(1 /n ) between any point on the sphere and its closest point in the chosen set. O(nd−1) points suffice for this purpose. We try each point as the direction of the first axis of the hyperrectangle, and then generate the directions of the remaining axes analogously in the orthogonal hyperplane for the chosen direction. Overall, this generates O(n∑d−1
i=1 i) = O(nd(d−1) /2) possibilities. Successively replace each axis of B1 by an approximate axis that makes an angle at most α = 1 /(cn ) with its corresponding axis, where c = c(d) is a constant depending on
d. Let B2 be a congruent copy of B1 obtained in this way. If c = c(d) is sufficiently small, then B1 ⊆ 2B2.Consider a tiling of Rd with translates of B2. Note that only O(1 /vol( Copt )) = O(n)translates intersect the unit cube [0 , 1] d. One of these translates B2 + v intersects Copt ,and then the hyperrectangle R = 3 B2 + v contains Copt . Since Copt (S) takes a constant fraction of the volume of R, we can deploy Lemma 4 in this case, and get the desired (1 − ε)-approximation in exp (O(ε−d(d−1) /(d+1) log ε−1)) time. Putting everything together, we obtain the following.
Theorem 4 Given a set S of n points in [0 , 1] d, d ≥ 3, and a parameter ε > 0, one can compute an empty convex body C ⊆ [0 , 1] d, such that vol( C) ≥ (1 −ε)vol( Copt ). The running time of the algorithm is
exp
(
O
(
ε−d(d−1) /(d+1) log ε−1))
n1+ d(d−1) /2 log d n. JoCG 5(1), 86–103, 2014 100 Journal of Computational Geometry jocg.org
Rema rk. Consider a set S of n points in Rd. The approximation algorithm we have presented can be modified to approximate the largest empty tile, i.e., the largest empty convex body contained in conv( S), rather than [0 , 1] d. The running time is slightly worse, since we need to take the boundary of conv( S) into account. We omit the details.
5 Con lusions In this section we briefly outline two exact algorithms for finding the largest area empty convex polygon and the largest area empty triangle amidst n points in the unit square. At the end we list a few open problems.
La rgest area onvex p olygon. Let S ⊂ U = [0 , 1] 2, where |S| = n. Let T be the set of four vertices of U . Observe that the boundary of an optimal convex body, Copt , contains at least two points from S ∪ T . By convexity, the midpoint of one of these O(n2) segments lies in Copt . For each such midpoint m, create a weakly simple polygon Pm by connecting each point p ∈ S to the boundary of the square along the ray mp . The polygon Pm has O(n)vertices and is empty of points from S in its interior. Then apply the algorithm of Chang and Yap [ 11 ] for the potato-peeling problem (mentioned in Section 1) in these O(n2) weakly simple polygons. The algorithm computes a largest area empty convex polygon contained in a given (non-convex) polygon with n vertices in O(n7) time. Finally, return the largest convex polygon obtained in this way. The overall running time is O(n9). The running time can be reduced to O(n8 log n) as follows. Instead of considering the O(n2) midpoints, compute a set P of O(n log n) points so that every convex set of area at least 2 /(n + 4) contains at least one of these points. In particular, Copt contains a point from P . The set P can be computed by starting with a O(n) × O(n) grid, and then computing an ε-net for it, where ε = O(1 /n ), using discrepancy [ 32 ]. The running time of this deterministic procedure is roughly O(n2), and the running time of the overall algorithm improves to O(n7 · n log n) = O(n8 log n).
La rgest area empt y triangle. The same reduction can be used for finding largest area empty triangle contained in U , resulting in O(n2) weakly simple polygons Pm. Then the algorithm of Melissaratos and Souvaine [ 33 ] for finding a largest area triangle contained in a polygon is applied to each of these O(n2) polygons. The algorithm finds such a triangle in O(n4) time, given a polygon with n vertices. Finally, return the largest triangle obtained in this way. The overall running time is O(n6). Via the ε-net approach (from the previous paragraph) the running time of the algorithm improves to O(n4 · n log n) = O(n5 log n).
Op en questions. Interesting questions remain open regarding the structure of optimal Steiner convex partitions and the computational complexity of computing such partitions. JoCG 5(1), 86–103, 2014 101 Journal of Computational Geometry jocg.org
Other questions relate to the problem of finding the largest empty convex body in the presence of points. (1) Is there a polynomial-time algorithm for computing a minimum Steiner convex parti-tion of a given set of n points in Rd? Is there one for points in the plane? (2) Is there a constant-factor approximation algorithm for the minimum Steiner convex partition of an arbitrary point set in Rd (without the general position restriction)? Is there one for points in the plane? (3) For d > 2, the running time of our approximation algorithm for the maximum empty polytope has a factor of the form nO(d2). It seems natural to conjecture that this term can be reduced to nO(d). Another issue of interest is extending Lemma 2 to higher dimensions for a faster overall algorithm. (4) Given n points in [0 , 1] d, the problem of finding the largest convex body in [0 , 1] d that contains up to k (outlier) points naturally suggests itself and appears to be also quite challenging.
A kno wledgement. The authors thank Joe Mitchell for helpful discussions regarding the exact algorithms in Section 5, in particular for suggesting the reduction of the maximum-area-empty-convex-body problem to the potato-peeling problem. Many thanks also go to Sergio Cabello and Maria Saumell for pointing us to the recent results of Bautista-Santiago et al. [ 9] and for suggesting logarithmic factor improvements in the running time of the approximation algorithm in Section 4.3 .
Referen es O. Aichholzer and H. Krasser, The point set order type data base: A collection of applications and results, in Proc. 13th Canadian Conf. on Comput. Geom. , Waterloo, ON, Canada, 2001, pp. 17–20. N. Alon, I. B´ ar´ any, Z. F¨ uredi, and D. Kleitman, Point selections and weak ε-nets for convex hulls, Combinatorics, Probability & Computing 1 (1992), 189–200. G. E. Andrews, A lower bound for the volumes of strictly convex bodies with many boundary points, Transactions of the AMS 106 (1963), 270–279. B. Aronov, M. van Kreveld, M. L¨ offler, and R. I. Silveira, Peeling meshed potatoes,
Algorithmica 60(2) (2011), 349–367. J. Backer and M. Keil, The mono- and bichromatic empty rectangle and square prob-lems in all dimensions, in Proc. 9th Latin American Sympos. on Theoretical Informatics ,vol. 6034 of LNCS, Springer, 2010, pp. 14–25. R. P. Bambah and A. C. Woods, On a problem of Danzer, Pacific J. Math. 37(2)
(1971), 295–301. JoCG 5(1), 86–103, 2014 102 Journal of Computational Geometry jocg.org
I. B´ ar´ any and J. Pach, On the number of convex lattice polygons, Combinatorics, Probability & Computing 1 (1992), 295–302. I. B´ ar´ any and A. M. Vershik, On the number of convex lattice polytopes, Geometric and Functional Analysis 2 (1992), 381–393. C. Bautista-Santiago, J. M. D´ ıaz-B´ a˜ nez, D. Lara, P. P´ erez-Lantero, J. Urrutia, and I. Ventura, Computing optimal islands, Oper. Res. Lett. 39 (4) (2011), 246–251. J. Beck and W. Chen, Irregularities of Distributions , vol. 89 of Cambridge Tracts in Mathematics, Cambridge University Press, Cambridge, 1987. J.-S. Chang and C.-K. Yap, A polynomial solution for the potato-peeling problem,
Discrete Comput. Geom. 1 (1986), 155–182. B. Chazelle and D. P. Dobkin, Decomposing a polygon into its convex parts, in Proc. 11th Sympos. Theory of Computing , ACM Press, 1979, pp. 38–48. J. A. De Loera, J. Rambau, and F. Santos, Triangulations: Structures for Algorithms and Applications, vol. 25 of Algorithms and Computation in Mathematics , Springer, 2010. A. Dumitrescu and M. Jiang, On the largest empty axis-parallel box amidst n points,
Algorithmica 66(2) (2013), 225–248. A. Dumitrescu and Cs. D. T´ oth, Minimum weight convex Steiner partitions, Algorith-mica 60(3) (2011), 627–652. D. Eppstein, M. Overmars, G. Rote, and G. Woeginger, Finding minimum area k-gons,
Discrete Comput. Geom. 7(1) (1992), 45–58. T. Fevens, H. Meijer, and D. Rappaport, Minimum convex partition of a constrained point set, Discrete Appl. Math. 109(1-2) (2001), 95–107. P. Fischer, Sequential and parallel algorithms for finding a maximum convex polygon,
Comput. Geom. Theory Appl. 7 (1997), 187–200. Z. F¨ uredi and J. Pach, Traces of finite sets: extremal problems and geometric ap-plications, in Extremal Problems for Finite Sets (P. Frankl, Z. F¨ uredi, G. Katona, D. Mikl´ os, editors) , vol. 3 of Bolyai Society Mathematical Studies, Budapest, 1994, pp. 251–282. J. Garc´ ıa-L´ opez and C. Nicol´ as, Planar point sets with large minimum convex par-titions, in Proc. 22nd European Workshop on Comput. Geom. , Delphi, Greece, 2006, pp. 51–54. M. Grantson and C. Levcopoulos, A fixed-parameter algorithm for the minimum num-ber convex partition problem, in Proc. Japanese Conf. on Discrete Comput. Geom. ,vol. 3742 of LNCS, Springer, 2005, pp. 83–94. JoCG 5(1), 86–103, 2014 103 Journal of Computational Geometry jocg.org
O. A. Hall-Holt, M. J. Katz, P. Kumar, J. S. B. Mitchell, and A. Sityon, Finding large sticks and potatoes in polygons, in Proc. 17th ACM-SIAM Sympos. Discrete Algorithms , 2006, pp. 474–483. S. Har-Peled, An output sensitive algorithm for discrete convex hulls, Comput. Geom. Theory Appl. 10 (1998), 125–138. J. Hershberger and S. Suri, Applications of a semi-dynamic convex hull algorithm, BIT
32(2) (1992), 249–267. J. Horton, Sets with no empty convex 7-gons, Canadian Math. Bulletin 26 (1983), 482–484. J. M. Keil, Polygon partition, in Handbook of Computational Geometry (J.-R. Sack, J. Urrutia, editors) , Elsevier, 2000, pp. 491–518. J. M. Keil and J. Snoeyink, On the time bound for convex partition of simple polygons,
Internat. J. Comput. Geom. Appl. 12 (2002), 181–192. D. G. Kirkpatrick, A note on Delaunay and optimal triangulations, Inform. Proc. Lett.
10(3) (1980), 127–128. C. Knauer and A. Spillner, Approximation algorithms for the minimum convex parti-tion problem, in Proc. SWAT , vol. 4059 of LNCS, Springer, 2006, pp. 232–241. M. Lassak, Approximation of convex bodies by inscribed simplices of maximum volume,
Contributions to Algebra and Geometry 52(2) (2011), 389–394. J. Lawrence and W. Morris, Finite sets as complements of finite unions of convex sets,
Discrete Comput. Geom. 42(2) (2009), 206–218. J. Matouˇ sek, Lectures on Discrete Geometry , Springer, 2002. E. A. Melissaratos and D. L. Souvaine, Shortest paths help solve geometric optimization problems in planar regions, SIAM J. Comput. 21(4) (1992), 601–638. V. Neumann-Lara, E. Rivera-Campo, and J. Urrutia, A note on convex partitions of a set of points in the plane, Graphs and Combinatorics 20(2) (2004), 223–231. J. Pach and G. Tardos, Piercing quasi-rectangles—on a problem of Danzer and Rogers,
J. of Combinatorial Theory, Series A 119 (2012), 1391–1397. T. Sakai and J. Urrutia, Convex partitions of point sets in the plane, in Proc. 7th Japan Conf. on Comput. Geom. and Graphs (Kanazawa, 2009) , JAIST. T. Shermer, Recent results in art galleries, Proceedings of the IEEE 80(9) (1992), 1384–1399. A. Spillner, A fixed parameter algorithm for optimal convex partitions, J. Discrete Algorithms 6(4) (2008), 561–569. P. Valtr, Sets in Rd with no large empty convex subsets, Discrete Mathematics 108(1-3) (1992), 115–124. |
17208 | https://en.wikipedia.org/wiki/Equation_xy_%3D_yx | Jump to content
Search
Contents
(Top)
1 History
2 Positive real solutions
2.1 Explicit form
2.2 Parametric form
3 Other real solutions
4 Similar graphs
4.1 Equation x√y = y√x
4.2 Equation logx(y) = logy(x)
5 References
6 External links
Equation xy = yx
العربية
Deutsch
Français
한국어
Հայերեն
Italiano
日本語
Русский
Українська
Edit links
Article
Talk
Read
Edit
View history
Tools
Actions
Read
Edit
View history
General
What links here
Related changes
Upload file
Permanent link
Page information
Cite this page
Get shortened URL
Download QR code
Print/export
Download as PDF
Printable version
In other projects
Wikidata item
Appearance
From Wikipedia, the free encyclopedia
In general, exponentiation fails to be commutative
In general, exponentiation fails to be commutative. However, the equation has an infinity of solutions, consisting of the line and a smooth curve intersecting the line at , where is Euler's number. The only integer solution that is on the curve is .
History
[edit]
The equation is mentioned in a letter of Bernoulli to Goldbach (29 June 1728). The letter contains a statement that when the only solutions in natural numbers are and although there are infinitely many solutions in rational numbers, such as and . The reply by Goldbach (31 January 1729) contains a general solution of the equation, obtained by substituting A similar solution was found by Euler.
J. van Hengel pointed out that if are positive integers with , then therefore it is enough to consider possibilities and in order to find solutions in natural numbers.
The problem was discussed in a number of publications. In 1960, the equation was among the questions on the William Lowell Putnam Competition, which prompted Alvin Hausner to extend results to algebraic number fields.
Positive real solutions
[edit]
: Main source:
Explicit form
[edit]
An infinite set of trivial solutions in positive real numbers is given by Nontrivial solutions can be written explicitly using the Lambert W function. The idea is to write the equation as and try to match and by multiplying and raising both sides by the same value. Then apply the definition of the Lambert W function to isolate the desired variable.
Where in the last step we used the identity .
Here we split the solution into the two branches of the Lambert W function and focus on each interval of interest, applying the identities:
:
:
:
:
Hence the non-trivial solutions are:
Parametric form
[edit]
Nontrivial solutions can be more easily found by assuming and letting Then
Raising both sides to the power and dividing by , we get
Then nontrivial solutions in positive real numbers are expressed as the parametric equation
The full solution thus is
Based on the above solution, the derivative is for the pairs on the line and for the other pairs can be found by which straightforward calculus gives as:
for and
Setting or generates the nontrivial solution in positive integers,
Other pairs consisting of algebraic numbers exist, such as and , as well as and .
The parameterization above leads to a geometric property of this curve. It can be shown that describes the isocline curve where power functions of the form have slope for some positive real choice of . For example, has a slope of at which is also a point on the curve
The trivial and non-trivial solutions intersect when . The equations above cannot be evaluated directly at , but we can take the limit as . This is most conveniently done by substituting and letting , so
Thus, the line and the curve for intersect at x = y = e.
As , the nontrivial solution asymptotes to the line . A more complete asymptotic form is
Other real solutions
[edit]
An infinite set of discrete real solutions with at least one of and negative also exist. These are provided by the above parameterization when the values generated are real. For example, , is a solution (using the real cube root of ). Similarly an infinite set of discrete solutions is given by the trivial solution for when is real; for example .
Similar graphs
[edit]
Equation x√y = y√x
[edit]
The equation produces a graph where the line and curve intersect at . The curve also terminates at (0, 1) and (1, 0), instead of continuing on to infinity.
The curved section can be written explicitly as
This equation describes the isocline curve where power functions have slope 1, analogous to the geometric property of described above.
The equation is equivalent to as can be seen by raising both sides to the power Equivalently, this can also be shown to demonstrate that the equation is equivalent to .
Equation logx(y) = logy(x)
[edit]
The equation produces a graph where the curve and line intersect at (1, 1). The curve becomes asymptotic to 0, as opposed to 1; it is, in fact, the positive section of y = 1/x.
References
[edit]
^ a b Lóczi, Lajos. "On commutative and associative powers". KöMaL. Archived from the original on 2002-10-15. Translation of: "Mikor kommutatív, illetve asszociatív a hatványozás?" (in Hungarian). Archived from the original on 2016-05-06.
^ a b c Singmaster, David. "Sources in recreational mathematics: an annotated bibliography. 8th preliminary edition". Archived from the original on April 16, 2004.
^ a b c d Sved, Marta (1990). "On the Rational Solutions of xy = yx" (PDF). Mathematics Magazine. 63: 30–33. doi:10.1080/0025570X.1990.11977480. Archived from the original (PDF) on 2016-03-04.
^ a b c d Dickson, Leonard Eugene (1920), "Rational solutions of xy = yx", History of the Theory of Numbers, vol. II, Washington, p. 687{{citation}}: CS1 maint: location missing publisher (link)
^ van Hengel, Johann (1888). "Beweis des Satzes, dass unter allen reellen positiven ganzen Zahlen nur das Zahlenpaar 4 und 2 für a und b der Gleichung ab = ba genügt". Pr. Gymn. Emmerich. JFM 20.0164.05.
^ Gleason, A. M.; Greenwood, R. E.; Kelly, L. M. (1980), "The twenty-first William Lowell Putnam mathematical competition (December 3, 1960), afternoon session, problem 1", The William Lowell Putnam mathematical competition problems and solutions: 1938-1964, MAA, p. 59, ISBN 0-88385-428-7
^ "21st Putnam 1960. Problem B1". 20 Oct 1999. Archived from the original on 2008-03-30.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
^ Hausner, Alvin (November 1961). "Algebraic Number Fields and the Diophantine Equation mn = nm". The American Mathematical Monthly. 68 (9): 856–861. doi:10.1080/00029890.1961.11989781. ISSN 0002-9890.
External links
[edit]
"Rational Solutions to x^y = y^x". CTK Wiki Math. Archived from the original on 2021-08-15. Retrieved 2016-04-14.
"x^y = y^x - commuting powers". Arithmetical and Analytical Puzzles. Torsten Sillke. Archived from the original on 2015-12-28.
dborkovitz (2012-01-29). "Parametric Graph of x^y=y^x". GeoGebra.
OEIS sequence A073084 (Decimal expansion of −x, where x is the negative solution to the equation 2^x = x^2)
Retrieved from "
Categories:
Diophantine equations
Recreational mathematics
Hidden categories:
CS1 Hungarian-language sources (hu)
CS1: unfit URL
CS1 maint: location missing publisher
CS1 maint: bot: original URL status unknown
Articles with short description
Short description is different from Wikidata
Equation xy = yx
Add topic |
17209 | https://www.youtube.com/watch?v=jmfcKJgDwgk | A22 A-level Maths: Moments video 4 : Ladders, Normal Reactions, Friction & Limiting Equilibrium
N.I. Maths Tutor
2400 subscribers
8 likes
Description
620 views
Posted: 20 Feb 2025
A-Level Maths | Ladder Problems & Moments | Normal Reactions, Friction & Limiting Equilibrium
In this A-Level Mechanics tutorial, we tackle ladder problems using moments and forces. Learn how to calculate normal reactions on the floor and wall, handle smooth and rough walls, and determine the maximum point a person can climb before the ladder slips.
📌 What You’ll Learn:
✅ How to calculate normal reaction forces at the floor and wall
✅ The difference between smooth and rough walls in equilibrium
✅ Limiting equilibrium and the role of friction in preventing slipping
✅ How to determine the maximum climbing height before a ladder loses stability
✅ Step-by-step worked examples with exam-style questions
This video is perfect for students studying CCEA, Edexcel, AQA, OCR, WJEC, and Leaving Cert A-Level Maths Mechanics. Strengthen your understanding of moments, friction, and equilibrium for exam success!
🔔 Subscribe for more A-Level Maths tutorials!
ALevelMaths #Mechanics #Moments #Ladders #Equilibrium #Friction #Forces #Physics #MathsRevision #ExamPrep #CCEA #Edexcel #AQA #OCR #WJEC #LeavingCert
Transcript:
we're going to look at this video at rigid bodies and in particular we're going to look at ladder problems so this is where you've got a ladder resting against a wall so have we look at the notes there posm if you want to read what's written there we're going to look at a couple of situations uh first of all we're going to look here at if a ladder is resting against a Smooth Wall you can see there is just a normal reaction if it's a Smooth Wall there's no friction had there been had had there not been a Smooth Wall then friction would be acting in this direction here and that would be to prevent uh the ladder sliding downwards so there would be a friction here I'm going to call it F1 okay likewise if the surface the ground is smooth if the ground is smooth then there's a normal reaction going up here if the ground is if the ground was rough the ladder would move in this direction which means friction has to act in this direction uh to uh prevent uh motion okay to prevent the lighting the latter sliding so that's what you're going to have another strange we situation here you can imagine if we extend this B diagram we bit had that been just an edge um your uh your ladder is rested against a an edge there so you imagine trying to climb up on the roof or whatever you doing uh your force is coming out perpendicular to your ladder cuz it's really as if that is a a beam or a rod that that is uh resting against and then you've got your nor normal reaction uh going up here as well okay let's have a look at a a question so this question says this question says a uniform ladder a uniform ladder AB of mass 10 kg and length 4 M rest with his upper end a against a smooth vertical wall and B on a smooth horizontal ground so there um smooth horizontal ground uh a light horizontal string which has the end attached to B and the other end attached to the attached to wall keeps a ladder in equilibrium so that's what this thing is here this is a tension in a string which is attached to B pulling it in towards the wall keeps this ladder in equilibrium at an angle of 40° to horizontal the vertical plane containing a ladder and uh sorry vertical plane containing a ladder in the string is at right angles to the wall find the tension T in the string and the normal reactions at the point A and B and they have labeled this all uh for you so let's uh do a couple of we extra things we need to on so a bit like your um planes your inclined planes that's going to be 40° in here and we'll do and this is a hard bit put uh the people find by remember the old Zed angles if that angle was 40 in here this angle is going to be in 40 in here so that's going to be 40° and then we're good to go I think we've got everything we need to uh we need to do this so to do these questions there's only three things you can do remember same as in the previous videos you can resolve vertically you can resolve horizontally and you can take moments so resolving vertically first of all going up you've got R and going down you've got 10ng so straight away you can find out uh what your R is so your R is 10 9.8 so it's going to be 98 Newtons resolve horizontally you've got your SN is going to be equal to T nothing we can do with that unfortunately and then a really good place to take moments about is this B because that means this Force here R which is acting through b and this Force t which is acting through b can be ignored so we're going to take moments about B and we're going going to do clockwise equals anticlockwise Okay so your clockwise moment your clockwise moment if you can imagine here my clockwise moment would be this Force which is going in this direction that would be my clockwise moment so that would be my um clockwise would be that s to pull that s round to that dash line it's going to be S sin 40 and its perpendicular distance is 4 away from B then the empty an clockwise moment so I'll do a green color here for this anticlockwise moment is going to be this one here uh that's going to be uh my anticlockwise moment so that's going to be uh 10g uh cos 40 and that's 2 and then a little bit of working out then s is going to be equal to 10 G cos 40 2 / by and I'll just write that as 4 sin 40 and then you can work out your s once you get your s then you can get your t as well so let me just work out what that is uh my SN is equal to uh works out to be and I've done this already that was going to be 58.4 Newtons to three sigfigs therefore my T is going to be the same it's going to be 58.4 Newtons to three Sig fix okay next example next example we have got says a here with me a second it says a ladder 5 m long rests on rough ground in uh against a smooth vertical wall uh inclined at uh thigh to the vertical such as s thigh is 3 over5 it has a mass of 20 kg and its center of gravity located 2 m away from uh from the lower end of the ladder a man of mass 80 kg man of mass 80 kg stand on a ladder 3 m away from the lower end and what we've got to do is find the first of all the frictional force on the ground and we'll worry about the rest of the we minutes okay first thing we need to do is do a half decent diagram so we've got that you've got your a you got your B you've got a normal reaction here uh R2 you've got your uh your normal reaction here I'll call that R1 and the angle that this thing is acting at it was it's inclined at an angle of thigh to the vertical okay so bit different than this one thigh is in here uh which means all means then your thigh is going to be in here and then what else are we going to have we've got the uh center of gravity is located uh 2 m away from a lower end so 2 m away we're going to have to draw our vertical line going down so that's our mass of your ladder which is 20 G Newtons and we've got a man of ma of mass 80 kg stands the ladder 3 m away so it was 2 m to get the here sorry 2 m up to there and then another 1 M uh to get up to the next bit and that's this man and he was 80 kg so 80 g Newtons is his weight and then uh this is a wee bit wear down by again this is a strange question very rarely asked this way but if you imagine here that was our thigh that's also going to be thigh with that vertical so the thigh is actually going in here and in here that's where that we uh letter is uh going Greek letter is going in for this case right uh what was the total length of this ladder it was 5 m long so that's going to be 2 m 2 m for this bit so we've got every every Dimension we need I would also Mark this in here and this is tricky then um just to show you how we get this this angle here would be 90 minus thigh so then this angle in here is also going to be I'll do it in blue it's the one we want it's also going to be thigh as well so that's we've got all of those ones we are ready to go right it also gives you it tells you if you ever get a question where it tells you in this form here bear with me a second if it tells you that uh s of whatever the angle is is equal to what was it where on Earth was it was 3 over 5 you don't actually work out the angle what you would do is draw a right angle triangle and S there's your thigh s is 3 over 5 and that's going to be four and that's going to be COS of thigh is equal to uh 4 over 5 and then that is uh that is in uh you ready to go okay now all you can do is you can resolve vertically resolve hor horizontally and that's it oh forgot to put on my frictional force there's my frictional force F here my frictional force going there so find a frictional force on the ground okay let's get cracking on this and see what we're going to do so uh we will resolve or well resolve vertically first of all you're going to get going up you've got your R2 and that's going to be equal to 100 G so R2 is going to be equal to 980 uh Newtons okay resolve horizontally R1 is equal to F and then we go from there uh right so how do on Earth do we find our uh how on Earth do we find our missing forces in we're going to have to take moments and again we're going to take moments about this point and quite often that is the case the one that has got two forces uh around it coming out of it uh you take moments about that point it makes life so much easier so we going to take moments about B and we're going to do clockwise equal to your anticlockwise so I'm trying to keep this diagram on the screen so we can see it so we take moments about B clockwise the only clockwise Force you're going to have is this one here so clockwise you're going to have this Force here pulled in this direction so that is going to be clockwise is going to be r 1 C thigh which we now know the value of cost thigh and then that is how far away is it it is 5 m away that's only clockwise Force the two anticlockwise forces in youve got I'll do these in this the red color you've got this one first of all uh I'll do it one at a time here so bear with me a second so that would have been um 20 g s thigh and that's times two and then the other one I'll do it in this horrible yellow color n you going to have this one here uh so that is going to be um 8G 8G 8G sin thigh and then that's times uh times three I don't know why my pan has changed here but it has okay right uh so if we work with that then we've got all those values so we just want to put all those things in so 4 over 5 cost th remember was 4 5 so that's going to be R1 4 5 5 that's just going to be 4 R1 and then is equal to and then over here we've got our uh 20g if you do all that that's going to be uh that's going to work out to be 20 9.8 R 3 5 2 + 80 9.8 3 5 3 if you work that out you will get 64 6.4 divide across by your four you get R1 is R1 is uh 41 R1 is uh 41 1.6 Newtons and your F then remember is the same as r 1 so f is 4116 uh Newtons okay next part of this uh question [Music] then next part of this question uh says uh the least possible value of the coefficient of friction between the ladder and the ground okay so run out of space here but sure uh so for part uh for Part B we want to say then uh coefficient of friction his least if friction is acting at its Max and what that means I it's acting f is equal to Mu okay so that means in this case I your F that you've just worked out your F Max was equal to whatever you just worked out at 41.6 so that means 41.6 is equal to my R2 which I worked out which was 980 so that's mu 980 and then you can work out then your mu works out to be 21 over 50 which is 4 uh 42 okay now part C it says uh find the magnitude and direction of the resultant uh reaction at the ground okay so that is this bit here we're looking at so we're going to have to find the magnitude and Direction so you've got your um at B you've got a force going up of 980 remember and a force going left of 4011.6 so we've got to find the magnitude and direction of this uh this Force so let's just draw a diagram so this is at B you had your Force going left which was four 41.6 Newtons your Force going up which was 980 and you want to find out what the the nor the the resultant force is and the angle that it's acting so it's just Pythagoras so R is going to be equal to 4 4116 SAR + 980 squar and the square root so R works out to be 106 2.9 Newtons to 1 decimal this and then tan Theta is equal to 980 over 41.6 therefore Theta is equal to 67.2 uh degrees the one decimal place okay and I would just make a we note that that is above the horizontal okay right excellent that's another we example done clearly didn't get out on the space that's cuz this thing zooming all over place right uh the last one then is last type of example we're going to look at is when somebody is climbing up a a a ladder so well or not at the safety Ascend the top to the top of the ladder will depend upon the magnitude of the frictional force which acts at the foot of the ladder um uh if a ladder is found to be in limiting equilibrium that means it's on the point of slipping basically so when a person's put a part of the way up the ladder then if they ascend any further then the ladder will slip to determine how far uh how far a ladder may be ascended consider a situation when the climber X is at a climber is at a distance X of the ladder and the ladder is in limiting equilibrium okay A little diagram needed for this one and I don't think I've got very much space to do this one again okay so um we have got a diagram for this one here it says a uniform ladder of mass uh 30 and length five rest against a smooth vertical wall with its lower end on a rough uh floor uh rough ground with coefficient friction two fists the ladder is inclined at 6° horizontal find how far a man of mass can Ascend the ladder without it slipping so uh here we draw our ladder you should have drawn these out earlier there's my point a at the top my B at the bottom um right now notice here I've instead of calling it friction I've called it m because it's acting at its maximum already it's a uniform ladder so your its mass which was 30 kg is acting vertically down that's 60° at 60 de at 60° um you've also got the man I'm going to going to imagine the man has gone past the middle point he may not have gone that far but I'm going to guess that he has um the mathematics will work it out for us if he has or not we will figure that out the distance here it was 2.5 to the 2.5 m to the center it was x m here and that's your normal reaction R and to call this one going horizontally I'm going to call this s and that also this angle in here would be 60 as well okay so that's an S not a five okay that's uh what we've got we've also got mu is equal to4 okay right so the MU R I haven't written it in here but the MU is the friction are is M as it's acting at its maximum so only thing we can do remember we can resolve vertic vertically resolving vertically you're going to get R is equal to R is equal to if this thing will let me right R is equal to 110 G so R is equal to 17 uh8 Newtons okay okay your SN is equal to the MU so s o we can get S straight away s is going to be equal to .4 a 107.8 SN works out to be for oh sorry that's should have been apologies that that wasn't 107.8 it was 107 1078 just it was 1078 so then my S is 4 31.2 Newtons okay and the only other thing we've got to do then is uh take moments and again we're going to take moments about B clockwise equal to anticlockwise I'm squeezing Us in apologies so clockwise what we've got going clockwise is this what we've got going clockwise is this and that's going to be then 5 s s of 60 anticlockwise n uh we have got anticlockwise n we have got this and we have got this okay so anticlockwise uh let's write down x one first it was 880 G uh 80 g and it's going to be cos 60 uh time x and then also we've got 30g and it's cos 60 again but it was 2.5 away okay I would do this most of this in my uh calculator uh we know our S I should have written in my S there as well uh right so uh I would do most of this in my calculator let me just uh write that all down again I'll write that down over here run space so instead of the the S I'm going to write what s was s was uh 4 31.2 and then that was times sin of 60 is equal to 80g my cos of 60 x + 30g C of 60 uh time 2.5 okay so we can tidy this up it's a bit of a mess here but you can do this you use your calculator here your calculator can do this all so do 5 43 1.2 sin of 60 we' bring the third the 30 G cos 60 2.5 bring it across and then that whole thing divides by 80 g COS of 60 and that will give me my X so you can get your X then and your X is 3 83 and that's going to be meters to three Sig fix okay and that's it done so you can see didn't get that squeezed on and give itself enough space uh but you can see really what we are are doing there okay right last one for us to do says a uniform ladder rest in limiting equ equilibrium with its top end against a rough wall and its lower end again rough rough horizontal floor okay great so we got friction on both if the coefficient friction at the top and the fit of the ladder are both 2/3 or sorry are 2/3 and one4 respectively draw a force diagram and hence find where they uh find the angle which the ladder makes with the floor now it gives us a be clue it says let the ladder be 2 W and the weight be 2 L sorry sorry and the L and the weight b w and we'll go from there right it's in limiting equilibrium so as well so we've got our frictions acting at Max for both a decent diagram and I'll SC on Bel this if need to be right uh right what we've got first of all I'll do we dash line up here uh oh here a bit smaller right I've got my friction up up at the Top If that's my end a I've got 2/3 R2 where this is R2 coming out horizontally to be dash line here uh I've also got very bad ladder there it's not very straight at all that ladder would like to climb it uh I've also got my angle which is Theta which we don't know we've got the weight of this ladder which we call W that's Theta that is also Theta that is L and that is L and that is R1 and going this way uh that is one4 or one uh right okay so what we all all we can do here we resolve vertically we going resolve horizontally we can take moments so we're going to resolve vertically first of all and it's going to be a bit of a mess here this one I'd say going to feel like it uh so let's just get cracken and we're going to Res olve vertically so resolving vertically you're going to have R1 is going up what else is going up we've got 2/3 R2 and that equals W so no help to us really uh we'll call that equation one we will definitely be coming back to that we're going to resolve horizontally and you're going to get that your R2 is equal to4 uh R1 that's all you've got um I'll call that I'll put that back in I'll say put that back into the first equation therefore I could say that is going to be um R1 plus then it's 2/3 A4 R1 will be equal to W and just do that in your calculator do 1 + 2/3 a/4 and what you're going to get is 76 so 7 / 6 R1 is equal to w um okay and the same idea then that means uh or you could also say your W then is equal to um you can convert that into r2s then so if we um how do we do that sorry how do we do that uh we want to get that into r2s as well we know that uh R2 is equal to a half R1 so that means R1 is the same as 4 R2 oh sorry qu R1 so R1 is equal to 4 R2 so you're going to have to say then it is equal to 7 / 6 4 R2 so w as well could be equal to and that is just going to be 14 over 3 R2 Okay so we've got quite a bit already um we can work all of these things out and we're going to we're going to take moments about the B to do this so they do this and take moments about the B here's our point B down here we're going to just take moments about B and again clockwise equals anticlockwise so clockwise we have got in green we have got this uh going this way so what we have got then going clockwise we have got 2 L and that 2 L is multiplying and this is where the tricky bit is 2 L is multiplying both uh the R2 sin Theta and it is also multiplying 2 over 3 R 2 cos Theta now where that comes from if you imagine this angle in here would be 90 minus Theta this angle in here would also be Theta so that's a tricky one there so we've got to pull it round to that dash line as well so that's very tricky uh so that's what you've got and then anticlockwise in your anticlockwise one's a little bit easier you've only got one in actual fact and it's this one so anticlockwise is just W cos Theta W cos Theta uh L and then we're going to go from here so we'll do a we bit of canceling out first before we do anything else so that's going to be 2 R2 sin Theta + 4 / 3 R2 cos Theta is equal to W cos Theta and then what we would be best to do here is to make write your W in terms of R2 so we're going to write that again as 2 R2 sin Theta + 4 3 R2 cos Theta is equal to then 14 over 3 R2 cos Theta right now what we're going to do is we can get rid of the uh we can get rid of the r2s as well and we're going to also so we'll just cancel out the RT twos so you can imagine they all just going and then what we're also going to do is we're going to grip all of the cosses on one side so I'm going to leave two whoops I'm going to leave my two uh sin Theta on the left so I'm going to leave my two sin Theta on the left and on the right I've got my 14 over3 cos Theta then - 4 over 3 cos Theta so I've got 2 sin Theta is equal to just 10 over 3 cos Theta which then means uh you're going to have 2 / 10 over 3 oops done not run my round we want to get S over cos remember so we want to bring that the COS over to the left hand side so we get S over cos which going to give us 10 which is going to 10 over 3 / 2 which means tan Theta is equal to 5 over 3 and then do shift and tan of that and you got 59 uh 0° to 3 Sig fig that is very very difficult so have we look at all of that uh you would be able to get that fit onto your page I couldn't uh but you can see here our method resolve vertically uh resolve horizontally and what that gave you then was a relationship between the uh R1 and R2 and then I gave you a Rel I give you an ability to write your W in terms of R2 and then we could then do clockwise equal take Moments by B clockwise equals anticlockwise what happens is the L's consolate first of all we then convert our W to R2 and then we can cancel out our r2s and then bring all the sign's to one side all the CES to one side remember sign to ver by cost gives you tan and then we can get our angle in once we get there that's all we had to do all we had to do he says I was horrendous |
17210 | https://atcddd.fhi.no/atc_ddd_index/?code=L04AF&showdescription=no | ATCDDD - ATC/DDD Index
Home
ATC/DDD application form
Order ATC Index
WHO Centre
Contact us
Log in
News
ATC/DDD Index
Updates included in the ATC/DDD Index
ATC/DDD methodology
ATC
DDD
Lists of temporary ATC/DDDs and alterations
ATC/DDD alterations, cumulative lists
ATC/DDD Index and Guidelines
Use of ATC/DDD
Courses
Meetings/open session
Deadlines
Links
Postal address:
Norwegian Institute of Public Health
WHO Collaborating Centre for Drug Statistics Methodology
Postboks 222 Skøyen
0213 Oslo
Norway
Visiting/delivery address:
Myrens verksted 6H
0473 Oslo
Norway
Tel:+47 21 07 81 60
E-mail: whocc@fhi.no
Copyright/Disclaimer
New searchShow text from Guidelines
L ANTINEOPLASTIC AND IMMUNOMODULATING AGENTS
L04 IMMUNOSUPPRESSANTS
L04A IMMUNOSUPPRESSANTS
L04AF Janus-associated kinase (JAK) inhibitors
ATC code Name DDD U Adm.R Note
L04AF01tofacitinib10 mg O
L04AF02baricitinib3 mg O
L04AF03upadacitinib15 mg O
L04AF04filgotinib0.2 g O
L04AF05itacitinib
L04AF06peficitinib
L04AF07deucravacitinib6 mg O
L04AF08ritlecitinib50 mg O
List of abbreviations
Last updated: 2024-12-27 |
17211 | https://www.fertur-travel.com/blog/2018/gold-museum-lima/14096/ | Friday 1st August 2025
Peru Travel Blog
Fertur Peru Travel Blog
Gold Museum in Lima: Pre-Columbian Gold Artifacts and Arms
Updated on August 25, 2023 by Peter Stein in Destinations, Lima with 0 Comments
Peru’s deep history of advanced civilizations is one of the most compelling aspects of the country; even before the arrival of the Incas (argued by some to have been as great as the Roman Empire), modern-day Peru was a tapestry of long-established peoples who were accomplished in pottery, stone-working, metallurgy, and many other dimensions of civilization. One of the best ways to take advantage of a visit to Peru to learn more about this rich history is to visit the Gold Museum in Lima (Museo Oro del Peru).
You Might Also Like: Take Lima Private Tours With English-Speaking Tour Guides
What is the Gold Museum in Lima?
Formally known as the Gold Museum of Peru and Arms of the World, the gold museum was created more than 50 years ago from the private collection of Miguel Mujica Gallo, a Peruvian diplomat and politician. The collection is enormous, boasting more than seven thousand artifacts, and is valued at around $10 million.
The Collection
The collection has two main components: Inca gold artifacts (as well as some artifacts from before the Inca) and weapons from all over the world. It includes many pieces from the Inca Empire, who were famed for their beautiful gold artifacts, as well as pieces from pre-Inca cultures such as the Sican and the Moche peoples. The Sican and Moche civilizations (as well as the Chimu and Vicus, also featured in the museum) were not quite as sophisticated in their metallurgical techniques as the Inca, but nonetheless produced some truly awe-inspiring works of gold.
Miguel Mujica Gallo was also an avid collector of weapons from around the world, which are also on display in the museum. The collection of arms spans more than three thousand years of history, and includes the sword of King Ferdinand VII of Spain.
Where is the Gold Museum, and how do I visit?
If you have a Lima vacation coming up (or are looking to plan one), visiting the Gold Museum is easy.
It’s located in Surco, the largest district in the city (in fact, it’s right around the corner from the U.S. Embassy!). The opening hours are 10:30am, seven days a week, and the price of admission is 33 soles (around $10) for an adult and 16 soles (about $5) for children, seniors, and students.
If you’re interested in the Gold Museum, but also everything else that Lima has to offer, Fertur Peru has a three-day, two-night package for travelers with a couple of days to spend in Lima. It includes transfers to and from the airport and a full historic tour of Lima, spanning the pre-Columbian history, the colonial Spanish history, and the modern history of Peru. It also includes a visit to the Gold Museum.
Go to top
You can find more information on our three-day Lima package page, or fill out the form below and one of our tour coordinators can put together a customized itinerary for you.
Sharing is caring!
Tags: cultural immersion, Lima
Like Pin It Share
Authored by: Peter Stein
Peter is an avid traveler who is exploring Peru, far and wide, and sharing what he discovers with Fertur Peru Travel and its clients.
Previous Article
Next Article
Related Posts
Peru wins World’s Best Culinary Destination prize at World Travel Awards
Mandatory tour guides and fixed routes coming soon for Machu Picchu
Crackdown on streakers and nude posers at Machu Picchu
Travelers flocked to Machu Picchu in record numbers in 2013
UPDATE: Hiram Bingham road access to Machu Picchu
Girl power in ancient Peru of the Moche
Extra! Extra! Crowds through the clouds to Choquequirao
Treasure of Waqrapukara – a lesser known Inca sanctuary
Leave a Reply
This site uses Akismet to reduce spam. Learn how your comment data is processed. |
17212 | https://chem.libretexts.org/Bookshelves/General_Chemistry/Map%3A_General_Chemistry_(Petrucci_et_al.)/07%3A_Thermochemistry/7.8%3A_Standard_Enthalpies_of_Formation | 7.8.1
7.8.1
7.8.2
7.8.2
7.8.3
7.8.3
ΔHof
H2(g)
Cl2(g)
H2(g)+Cl2(g)→HCl(g)
HCl
H2(g)
Cl2(g)
12H2(g)+12Cl2(g)→HCl(g)
Mg(s)
C(s,graphite)
O2(g)
Mg(s)+C(s,graphite)+O2(g)→MgCO3(s)
Mg(s)+C(s,graphite)+32O2(g)→MgCO3(s)
C(s,graphite)+H2(g)+O2(g)→CH3(CH2)14CO2H(s)
16C(s,graphite)+16H2(g)+O2(g)⟶CH3(CH2)14CO2H(s)
Na(s)+12Cl2(g)→NaCl(s)
H2(g)+18S8(s)+2O2(g)→H2SO4(l)
2C(s)+O2(g)+2H2(g)⟶CH3CO2H(l)
Skip to main content
7.8: Standard Enthalpies of Formation
Last updated
: Jul 12, 2023
Save as PDF
7.7: Indirect Determination of ΔH - Hess's Law
7.9: Fuels as Sources of Energy
Page ID
: 24207
( \newcommand{\kernel}{\mathrm{null}\,})
Learning Objectives
To understand Enthalpies of Formation and be able to use them to calculate Enthalpies of Reaction
One way to report the heat absorbed or released by chemical reactions would be to compile a massive set of reference tables that list the enthalpy changes for all possible chemical reactions, which would require an incredible amount of effort. Fortunately, Hess’s law allows us to calculate the enthalpy change for virtually any conceivable chemical reaction using a relatively small set of tabulated data, starting from the elemental forms of each atom at 25 oC and 1 atm pressure.
Enthalpy of formation (ΔHfΔHf) is the enthalpy change for the formation of 1 mol of a compound from its component elements, such as the formation of carbon dioxide from carbon and oxygen. The formation of any chemical can be as a reaction from the corresponding elements:
elements→compound
elements→compound
which in terms of the the Enthalpy of formation becomes
ΔHrxn=ΔHf
ΔHrxn=ΔHf(7.8.1)
For example, consider the combustion of carbon:
C(s)+O2(g)⟶CO2(g)
C(s)+O2(g)⟶CO2(g)
then
ΔHrxn=ΔHf[CO2(g)]
ΔHrxn=ΔHf[CO2(g)]
The sign convention for ΔHf is the same as for any enthalpy change: ΔHf<0ΔHf<0 if heat is released when elements combine to form a compound and ΔHf>0ΔHf>0 if heat is absorbed.
The sign convention is the same for all enthalpy changes: negative if heat is released by the system and positive if heat is absorbed by the system.
Standard Enthalpies of Formation
The magnitude of ΔH for a reaction depends on the physical states of the reactants and the products (gas, liquid, solid, or solution), the pressure of any gases present, and the temperature at which the reaction is carried out. To avoid confusion caused by differences in reaction conditions and ensure uniformity of data, the scientific community has selected a specific set of conditions under which enthalpy changes are measured. These standard conditions serve as a reference point for measuring differences in enthalpy, much as sea level is the reference point for measuring the height of a mountain or for reporting the altitude of an airplane.
The standard conditions for which most thermochemical data are tabulated are a pressure of 1 atmosphere (atm) for all gases and a concentration of 1 M for all species in solution (1 mol/L). In addition, each pure substance must be in its standard state, which is usually its most stable form at a pressure of 1 atm at a specified temperature. We assume a temperature of 25°C (298 K) for all enthalpy changes given in this text, unless otherwise indicated. Enthalpies of formation measured under these conditions are called standard enthalpies of formation (ΔHofΔHof) The enthalpy change for the formation of 1 mol of a compound from its component elements when the component elements are each in their standard states. The standard enthalpy of formation of any element in its most stable form is zero by definition.
The standard enthalpy of formation of any element in its standard state is zero by definition.
For example, although oxygen can exist as ozone (O3), atomic oxygen (O), and molecular oxygen (O2), O2 is the most stable form at 1 atm pressure and 25°C. Similarly, hydrogen is H2(g), not atomic hydrogen (H). Graphite and diamond are both forms of elemental carbon, but because graphite is more stable at 1 atm pressure and 25°C, the standard state of carbon is graphite (Figure 7.8.17.8.1). Therefore, O2(g)O2(g), H2(g)H2(g), and graphite have ΔHofΔHof values of zero.
The standard enthalpy of formation of glucose from the elements at 25°C is the enthalpy change for the following reaction:
6C(s,graphite)+6H2(g)+3O2(g)→C6H12O6(s)ΔHof=−1273.3kJ
6C(s,graphite)+6H2(g)+3O2(g)→C6H12O6(s)ΔHof=−1273.3kJ(7.8.2)
It is not possible to measure the value of ΔHoofΔHoof for glucose, −1273.3 kJ/mol, by simply mixing appropriate amounts of graphite, O2O2, and H2H2 and measuring the heat evolved as glucose is formed since the reaction shown in Equation 7.8.27.8.2 does not occur at a measurable rate under any known conditions. Glucose is not unique; most compounds cannot be prepared by the chemical equations that define their standard enthalpies of formation. Instead, values of ΔHoofΔHoof are obtained using Hess’s law and standard enthalpy changes that have been measured for other reactions, such as combustion reactions. Values of ΔHofΔHof for an extensive list of compounds are given in Table T1. Note that ΔHofΔHof values are always reported in kilojoules per mole of the substance of interest. Also notice in Table T1 that the standard enthalpy of formation of O2(g) is zero because it is the most stable form of oxygen in its standard state.
Example 7.8.17.8.1: Enthalpy of Formation
For the formation of each compound, write a balanced chemical equation corresponding to the standard enthalpy of formation of each compound.
HCl(g)HCl(g)
MgCO3(s)MgCO3(s)
CH3(CH2)14CO2H(s)CH3(CH2)14CO2H(s) (palmitic acid)
Given:
: compound formula and phase.
Asked for:
: balanced chemical equation for its formation from elements in standard states
Strategy:
: Use Table T1 to identify the standard state for each element. Write a chemical equation that describes the formation of the compound from the elements in their standard states and then balance it so that 1 mol of product is made.
Solution:
: To calculate the standard enthalpy of formation of a compound, we must start with the elements in their standard states. The standard state of an element can be identified in Table T1: by a ΔHof value of 0 kJ/mol.
Hydrogen chloride contains one atom of hydrogen and one atom of chlorine. Because the standard states of elemental hydrogen and elemental chlorine are H2(g) and Cl2(g), respectively, the unbalanced chemical equation is
H2(g)+Cl2(g)→HCl(g)
Fractional coefficients are required in this case because ΔHof values are reported for 1 mol of the product, HCl. Multiplying both H2(g) and Cl2(g) by 1/2 balances the equation:
12H2(g)+12Cl2(g)→HCl(g)
The standard states of the elements in this compound are Mg(s), C(s,graphite), and O2(g). The unbalanced chemical equation is thus
Mg(s)+C(s,graphite)+O2(g)→MgCO3(s)
This equation can be balanced by inspection to give
Mg(s)+C(s,graphite)+32O2(g)→MgCO3(s)
Palmitic acid, the major fat in meat and dairy products, contains hydrogen, carbon, and oxygen, so the unbalanced chemical equation for its formation from the elements in their standard states is as follows:
C(s,graphite)+H2(g)+O2(g)→CH3(CH2)14CO2H(s)
There are 16 carbon atoms and 32 hydrogen atoms in 1 mol of palmitic acid, so the balanced chemical equation is
16C(s,graphite)+16H2(g)+O2(g)⟶CH3(CH2)14CO2H(s)
Exercise 7.8.17.8.1
For the formation of each compound, write a balanced chemical equation corresponding to the standard enthalpy of formation of each compound.
NaCl(s)NaCl(s)
H2SO4(l)H2SO4(l)
CH3CO2H(l)CH3CO2H(l) (acetic acid)
Answer a
: Na(s)+12Cl2(g)→NaCl(s)
Answer b
: H2(g)+18S8(s)+2O2(g)→H2SO4(l)
Answer c
: 2C(s)+O2(g)+2H2(g)⟶CH3CO2H(l)
Definition of Heat of Formation Reactions:
Standard Enthalpies of Reaction
Tabulated values of standard enthalpies of formation can be used to calculate enthalpy changes for any reaction involving substances whose ΔHofΔHof values are known. The standard enthalpy of reaction ΔHorxnΔHorxn is the enthalpy change that occurs when a reaction is carried out with all reactants and products in their standard states. Consider the general reaction
aA+bB→cC+dD
aA+bB→cC+dD(7.8.3)
where AA, BB, CC, and DD are chemical substances and aa, bb, cc, and dd are their stoichiometric coefficients. The magnitude of ΔHοΔHο is the sum of the standard enthalpies of formation of the products, each multiplied by its appropriate coefficient, minus the sum of the standard enthalpies of formation of the reactants, also multiplied by their coefficients:
ΔHorxn=[cΔHof(C)+dΔHof(D)]⏟products−[aΔHof(A)+bΔHof(B)]⏟reactants
ΔHorxn=[cΔHof(C)+dΔHof(D)]products−[aΔHof(A)+bΔHof(B)]reactants (7.8.4)
More generally, we can write
ΔHorxn=∑mΔHof(products)−∑nΔHof(reactants)
ΔHorxn=∑mΔHof(products)−∑nΔHof(reactants)(7.8.5)
where the symbol ∑∑ means “sum of” and mm and nn are the stoichiometric coefficients of each of the products and the reactants, respectively. “Products minus reactants” summations such as Equation 7.8.57.8.5 arise from the fact that enthalpy is a state function. Because many other thermochemical quantities are also state functions, “products minus reactants” summations are very common in chemistry; we will encounter many others in subsequent chapters.
"Products minus reactants" summations are typical of state functions.
To demonstrate the use of tabulated ΔHο values, we will use them to calculate ΔHrxnΔHrxn for the combustion of glucose, the reaction that provides energy for your brain:
C6H12O6(s)+6O2(g)→6CO2(g)+6H2O(l)
C6H12O6(s)+6O2(g)→6CO2(g)+6H2O(l)(7.8.6)
Using Equation 7.8.57.8.5, we write
ΔHof={6ΔHof[CO2(g)]+6ΔHof[H2O(g)]}−{ΔHof[C6H12O6(s)]+6ΔHof[O2(g)]}
ΔHof={6ΔHof[CO2(g)]+6ΔHof[H2O(g)]}−{ΔHof[C6H12O6(s)]+6ΔHof[O2(g)]}(7.8.7)
From Table T1, the relevant ΔHοf values are ΔHοf[CO2(g)] = -393.5 kJ/mol, ΔHοf[H2O(l)] = -285.8 kJ/mol, and ΔHοf[C6H12O6(s)] = -1273.3 kJ/mol. Because O2(g) is a pure element in its standard state, ΔHοf[O2(g)] = 0 kJ/mol. Inserting these values into Equation 7.8.77.8.7 and changing the subscript to indicate that this is a combustion reaction, we obtain
ΔHocomb=[6(−393.5kJ/mol)+6(−285.8kJ/mol)]−[−1273.3+6(0kJmol)]=−2802.5kJ/mol
ΔHocomb=[6(−393.5kJ/mol)+6(−285.8kJ/mol)]−[−1273.3+6(0kJmol)]=−2802.5kJ/mol(7.8.8)(7.8.9)
As illustrated in Figure 7.8.2, we can use Equation 7.8.8 to calculate ΔHοf for glucose because enthalpy is a state function. The figure shows two pathways from reactants (middle left) to products (bottom). The more direct pathway is the downward green arrow labeled ΔHοcomb. The alternative hypothetical pathway consists of four separate reactions that convert the reactants to the elements in their standard states (upward purple arrow at left) and then convert the elements into the desired products (downward purple arrows at right). The reactions that convert the reactants to the elements are the reverse of the equations that define the ΔHοf values of the reactants. Consequently, the enthalpy changes are
ΔHo1=ΔHof[glucose(s)]=−1molglucose(1273.3kJ1molglucose)=+1273.3kJΔHo2=6ΔHof[O2(g)]=6molO2(0kJ1molO2)=0kJ
Recall that when we reverse a reaction, we must also reverse the sign of the accompanying enthalpy change (Equation 7.8.4 since the products are now reactants and vice versa.
The overall enthalpy change for conversion of the reactants (1 mol of glucose and 6 mol of O2) to the elements is therefore +1273.3 kJ.
The reactions that convert the elements to final products (downward purple arrows in Figure 7.8.2) are identical to those used to define the ΔHοf values of the products. Consequently, the enthalpy changes (from Table T1) are
ΔHo3=ΔHof[CO2(g)]=6molCO2(393.5kJ1molCO2)=−2361.0kJΔHo4=6ΔHof[H2O(l)]=6molH2O(−285.8kJ1molH2O)=−1714.8kJ
The overall enthalpy change for the conversion of the elements to products (6 mol of carbon dioxide and 6 mol of liquid water) is therefore −4075.8 kJ. Because enthalpy is a state function, the difference in enthalpy between an initial state and a final state can be computed using any pathway that connects the two. Thus the enthalpy change for the combustion of glucose to carbon dioxide and water is the sum of the enthalpy changes for the conversion of glucose and oxygen to the elements (+1273.3 kJ) and for the conversion of the elements to carbon dioxide and water (−4075.8 kJ):
ΔHocomb=+1273.3kJ+(−4075.8kJ)=−2802.5kJ
This is the same result we obtained using the “products minus reactants” rule (Equation 7.8.5) and ΔHοf values. The two results must be the same because Equation 7.8.12 is just a more compact way of describing the thermochemical cycle shown in Figure 7.8.1.
Example 7.8.2: Heat of Combustion
Long-chain fatty acids such as palmitic acid (CH3(CH2)14CO2H) are one of the two major sources of energy in our diet (ΔHof =−891.5 kJ/mol). Use the data in Table T1 to calculate ΔHοcomb for the combustion of palmitic acid. Based on the energy released in combustion per gram, which is the better fuel — glucose or palmitic acid?
Given: compound and ΔHοf values
Asked for: ΔHοcomb per mole and per gram
Strategy:
After writing the balanced chemical equation for the reaction, use Equation 7.8.5 and the values from Table T1 to calculate ΔHοcomb the energy released by the combustion of 1 mol of palmitic acid.
Divide this value by the molar mass of palmitic acid to find the energy released from the combustion of 1 g of palmitic acid. Compare this value with the value calculated in Equation 7.8.8 for the combustion of glucose to determine which is the better fuel.
Solution:
A To determine the energy released by the combustion of palmitic acid, we need to calculate its ΔHοf. As always, the first requirement is a balanced chemical equation:
C16H32O2(s)+23O2(g)→16CO2(g)+16H2O(l)
Using Equation 7.8.5 (“products minus reactants”) with ΔHοf values from Table T1 (and omitting the physical states of the reactants and products to save space) gives
ΔHocomb=∑mΔHof(products)−∑nΔHof(reactants)=[16(−393.5kJ/molCO2)+16(−285.8kJ/molH2O)]−[−891.5kJ/molC16H32O2+23(0kJ/molO2)]=−9977.3kJ/mol
This is the energy released by the combustion of 1 mol of palmitic acid.
B The energy released by the combustion of 1 g of palmitic acid is
ΔHocombpergram=(9977.3kJ1mol)(1mol256.42g)=−38.910kJ/g
As calculated in Equation 7.8.8, (ΔH^o_f) of glucose is −2802.5 kJ/mol. The energy released by the combustion of 1 g of glucose is therefore
ΔHocombpergram=(−2802.5kJ1mol)(1mol180.16g)=−15.556kJ/g
The combustion of fats such as palmitic acid releases more than twice as much energy per gram as the combustion of sugars such as glucose. This is one reason many people try to minimize the fat content in their diets to lose weight.
Exercise 7.8.2: Water–gas shift reaction
Use Table T1 to calculate ΔHorxn for the water–gas shift reaction, which is used industrially on an enormous scale to obtain H2(g):
CO(g)+H2O(g)⟶CO2(g)+H2(g)
Answer
: −41.2 kJ/mol
We can also measure the enthalpy change for another reaction, such as a combustion reaction, and then use it to calculate a compound’s ΔHοf which we cannot obtain otherwise. This procedure is illustrated in Example 7.8.3.
Example 7.8.3: Tetraethyllead
Beginning in 1923, tetraethyllead [(C2H5)4Pb] was used as an antiknock additive in gasoline in the United States. Its use was completely phased out in 1986 because of the health risks associated with chronic lead exposure. Tetraethyllead is a highly poisonous, colorless liquid that burns in air to give an orange flame with a green halo. The combustion products are CO2(g), H2O(l), and red PbO(s). What is the standard enthalpy of formation of tetraethyllead, given that ΔHοf is −19.29 kJ/g for the combustion of tetraethyllead and ΔHοf of red PbO(s) is −219.0 kJ/mol?
Given: reactant, products, and ΔHοcomb values
Asked for: ΔHοf of the reactants
Strategy:
Write the balanced chemical equation for the combustion of tetraethyl lead. Then insert the appropriate quantities into Equation 7.8.5 to get the equation for ΔHοf of tetraethyl lead.
Convert ΔHοcomb per gram given in the problem to ΔHοcomb per mole by multiplying ΔHοcomb per gram by the molar mass of tetraethyllead.
Use Table T1 to obtain values of ΔHοf for the other reactants and products. Insert these values into the equation for ΔHοf of tetraethyl lead and solve the equation.
Solution:
A The balanced chemical equation for the combustion reaction is as follows:
2(C2H5)4Pb(l)+27O2(g)→2PbO(s)+16CO2(g)+20H2O(l)
Using Equation 7.8.5 gives
ΔHocomb=[2ΔHof(PbO)+16ΔHof(CO2)+20ΔHof(H2O)]−[2ΔHof((C2H5)4Pb)+27ΔHof(O2)]
Solving for ΔHof[(C2H5)4Pb] gives
ΔHof((C2H5)4Pb)=ΔHof(PbO)+8ΔHof(CO2)+10ΔHof(H2O)−272ΔHof(O2)−ΔHocomb2
The values of all terms other than ΔHof[(C2H5)4Pb] are given in Table T1.
B The magnitude of ΔHocomb is given in the problem in kilojoules per gram of tetraethyl lead. We must therefore multiply this value by the molar mass of tetraethyl lead (323.44 g/mol) to get ΔHocomb for 1 mol of tetraethyl lead:
ΔHocomb=(−19.29kJg)(323.44gmol)=−6329kJ/mol
Because the balanced chemical equation contains 2 mol of tetraethyllead, ΔHorxn is
ΔHorxn=2mol(C2H5)4Pb(−6329kJ1mol(C2H5)4Pb)=−12,480kJ
C Inserting the appropriate values into the equation for ΔHof[(C2H5)4Pb] gives
ΔHof[(C2H4)4Pb]=[1molPbO×219.0kJ/mol]+[8molCO2×(−393.5kJ/mol)]+[10molH2O×(−285.8kJ/mol)]+[−27/2molO2)×0kJ/molO2][12,480.2kJ/mol(C2H5)4Pb]=−219.0kJ−3148kJ−2858kJ−0kJ+6240kJ=15kJ/mol
Exercise 7.8.3
Ammonium sulfate, (NH4)2SO4, is used as a fire retardant and wood preservative; it is prepared industrially by the highly exothermic reaction of gaseous ammonia with sulfuric acid:
2NH3(g)+H2SO4(aq)→(NH4)2SO4(s)
The value of ΔHorxn is -179.4 kJ/mole H2SO4. Use the data in Table T1 to calculate the standard enthalpy of formation of ammonium sulfate (in kilojoules per mole).
Answer
: −1181 kJ/mol
Calculating DH° using DHf°:
Summary
The standard state for measuring and reporting enthalpies of formation or reaction is 25 oC and 1 atm.
The elemental form of each atom is that with the lowest enthalpy in the standard state.
The standard state heat of formation for the elemental form of each atom is zero.
The enthalpy of formation (ΔHf) is the enthalpy change that accompanies the formation of a compound from its elements. Standard enthalpies of formation (ΔHof) are determined under standard conditions: a pressure of 1 atm for gases and a concentration of 1 M for species in solution, with all pure substances present in their standard states (their most stable forms at 1 atm pressure and the temperature of the measurement). The standard heat of formation of any element in its most stable form is defined to be zero. The standard enthalpy of reaction (ΔHorxn) can be calculated from the sum of the standard enthalpies of formation of the products (each multiplied by its stoichiometric coefficient) minus the sum of the standard enthalpies of formation of the reactants (each multiplied by its stoichiometric coefficient)—the “products minus reactants” rule. The enthalpy of solution (ΔHsoln) is the heat released or absorbed when a specified amount of a solute dissolves in a certain quantity of solvent at constant pressure.
Contributors and Attributions
Modified by Joshua Halpern (Howard University)
7.7: Indirect Determination of ΔH - Hess's Law
7.9: Fuels as Sources of Energy |
17213 | https://www.geeksforgeeks.org/dsa/p-vs-np-problems/ | P vs NP Problems - GeeksforGeeks
Skip to content
Tutorials
Python
Java
DSA
ML & Data Science
Interview Corner
Programming Languages
Web Development
CS Subjects
DevOps
Software and Tools
School Learning
Practice Coding Problems
Courses
DSA / Placements
ML & Data Science
Development
Cloud / DevOps
Programming Languages
All Courses
Tracks
Languages
Python
C
C++
Java
Advanced Java
SQL
JavaScript
Interview Preparation
GfG 160
GfG 360
System Design
Core Subjects
Interview Questions
Interview Puzzles
Aptitude and Reasoning
Data Science
Python
Data Analytics
Complete Data Science
Dev Skills
Full-Stack Web Dev
DevOps
Software Testing
CyberSecurity
Tools
Computer Fundamentals
AI Tools
MS Excel & Google Sheets
MS Word & Google Docs
Maths
Maths For Computer Science
Engineering Mathematics
Switch to Dark Mode
Sign In
DSA Tutorial
Array
Strings
Linked List
Stack
Queue
Tree
Graph
Searching
Sorting
Recursion
Dynamic Programming
Binary Tree
Binary Search Tree
Heap
Hashing
Sign In
▲
Open In App
P vs NP Problems
Last Updated : 23 Jul, 2025
Comments
Improve
Suggest changes
1 Like
Like
Report
In the world of computers and math, there's this puzzling question: Can every problem we quickly check be solved quickly too? We have two categories: P for problems with quick solutions, and NP for problems where checking is fast, but solving might not be. This article explores these P vs NP problems, trying to understand why some things are easy to check but hard to figure out, and why it matters for how computers work.
What is P problems?
Polynomial time problems, commonly known asP problems.The solution of the problem can be found in polynomial time.
Example:Linear search, whose time complexity is O(N), where N is the input size.
Key characteristics of P problems:
| Characteristic | Description |
--- |
| Definition | Problems in the P complexity class are decision problems that can be efficiently solved by a deterministic Turing machine in polynomial time. |
| Time Complexity | Solutions can be found in polynomial time, meaning the time required for computation grows at most as a polynomial function of the input size. |
| Algorithmic Solutions | Efficient algorithms exist for solving P problems, making them computationally tractable and practical for a wide range of applications. |
| Polynomial Verification | The correctness of a solution can be verified in polynomial time, ensuring that the proposed solution is correct without significant computational effort. |
| Examples | Examples of P problems include sorting algorithms, searching algorithms, and various problems with known efficient solutions. |
What is NP problems?
Nondeterministicpolynomial-time problems, commonly known as NP problems. These problems have the special property that, once a potential solution is provided, its correctness can be verified quickly. However, finding the solution itself may be computationally difficult.
Example: A well-known example of NP problems is prime factorization. We can verify a factor of an integer in polynomial time. However, we don’t know any polynomial time algorithm to factorize a given integer.
Key characteristics of NP problems:
| Characteristic | Description |
--- |
| Definition | Nondeterministic Polynomial time problems are computational problems for which solutions can be verified efficiently in polynomial time. |
| Verification | Once a potential solution is provided, its correctness can be verified quickly using a nondeterministic algorithm in polynomial time. |
| Solution Computation | While verification is efficient, finding a solution may be computationally challenging, with no known polynomial time algorithm for general instances. |
| Example | The Traveling Salesman Problem (TSP)is a classic NP problem. |
Difference between P vs NP problems:
Here's a detailed explanation of the differences between P and NP problems:
| Feature | P Problems | NP Problems |
---
| Solvability | Efficiently solvable in polynomial time. | Efficient verification, solution may not be found efficiently. |
| Time Complexity | Polynomial time algorithms are known. | Efficient verification algorithms are known, but efficient solution algorithms are not guaranteed. |
| Nature of Solutions | Solutions can be found efficiently. | Solutions, once proposed, can be verified efficiently. |
| Decision or Optimization | Often decision problems (yes/no answers). | Can be decision or optimization problems. |
| Known Relationship | P is a subset of NP. | It is unknown whether NP is a proper subset of P or if they are equal. |
| Practical Implications | Problems in P are considered efficiently solvable in practice. | NP problems are considered computationally hard; no efficient algorithm is currently known, making them potentially impractical for large instances. |
| P vs NP Question | P vs NP question is open. | One of the most significant open problems in computer science is whether P equals NP. If P equals NP, all problems in NP would also be in P. |
| Example | Sorting, searching, shortest path problems. | Traveling Salesman, Boolean Satisfiability. |
Related Articles:
NP-Hard Class
P, NP, CoNP, NP hard and NP complete | Complexity Classes
Introduction to NP-Complete Complexity Classes
Difference between NP hard and NP complete problem
Comment
More info
C
code_r
Follow
1
Improve
Article Tags :
DSA
NP Complete
Complexity-analysis
Explore
DSA Fundamentals
Logic Building Problems 2 min readAnalysis of Algorithms 1 min read
Data Structures
Array Data Structure 3 min readString in Data Structure 2 min readHashing in Data Structure 2 min readLinked List Data Structure 2 min readStack Data Structure 2 min readQueue Data Structure 2 min readTree Data Structure 2 min readGraph Data Structure 3 min readTrie Data Structure 15+ min read
Algorithms
Searching Algorithms 2 min readSorting Algorithms 3 min readIntroduction to Recursion 14 min readGreedy Algorithms 3 min readGraph Algorithms 3 min readDynamic Programming or DP 3 min readBitwise Algorithms 4 min read
Advanced
Segment Tree 2 min readBinary Indexed Tree or Fenwick Tree 15 min readSquare Root (Sqrt) Decomposition Algorithm 15+ min readBinary Lifting 15+ min readGeometry 2 min read
Interview Preparation
Interview Corner 3 min readGfG160 3 min read
Practice Problem
GeeksforGeeks Practice - Leading Online Coding Platform 6 min readProblem of The Day - Develop the Habit of Coding 5 min read
Like 1
Corporate & Communications Address:
A-143, 7th Floor, Sovereign Corporate Tower, Sector- 136, Noida, Uttar Pradesh (201305)
Registered Address:
K 061, Tower K, Gulshan Vivante Apartment, Sector 137, Noida, Gautam Buddh Nagar, Uttar Pradesh, 201305
Company
About Us
Legal
Privacy Policy
Contact Us
Advertise with us
GFG Corporate Solution
Campus Training Program
Explore
POTD
Job-A-Thon
Community
Blogs
Nation Skill Up
Tutorials
Programming Languages
DSA
Web Technology
AI, ML & Data Science
DevOps
CS Core Subjects
Interview Preparation
GATE
Software and Tools
Courses
IBM Certification
DSA and Placements
Web Development
Programming Languages
DevOps & Cloud
GATE
Trending Technologies
Videos
DSA
Python
Java
C++
Web Development
Data Science
CS Subjects
Preparation Corner
Aptitude
Puzzles
GfG 160
DSA 360
System Design
@GeeksforGeeks, Sanchhaya Education Private Limited, All rights reserved
Improvement
Suggest changes
Suggest Changes
Help us improve. Share your suggestions to enhance the article. Contribute your expertise and make a difference in the GeeksforGeeks portal.
Create Improvement
Enhance the article with your expertise. Contribute to the GeeksforGeeks community and help create better learning resources for all.
Suggest Changes
min 4 words, max Words Limit:1000
Thank You!
Your suggestions are valuable to us.
What kind of Experience do you want to share?
Interview ExperiencesAdmission ExperiencesCareer JourneysWork ExperiencesCampus ExperiencesCompetitive Exam Experiences
Login Modal | GeeksforGeeks
Log in
New user ?Register Now
Continue with Google
or
Username or Email Password
[x] Remember me
Forgot Password
Sign In
By creating this account, you agree to ourPrivacy Policy&Cookie Policy.
Create Account
Already have an account ?Log in
Continue with Google
or
Username or Email Password Institution / Organization Sign Up
Please enter your email address or userHandle.
Back to Login
Reset Password |
17214 | https://forum.wordreference.com/threads/creer-que-subjunctive.636336/ | Creer + que + subjunctive | WordReference Forums
WordReference.comLanguage Forums
ForumsRules/Help/FAQHelp/FAQ
MembersCurrent visitors
Interface Language
Dictionary search:
Log inRegister
What's newSearch
Search
[x] Search titles and first posts only
[x] Search titles only
By:
Search Advanced search…
Rules/Help/FAQHelp/FAQ
MembersCurrent visitors
Interface Language
Menu
Log in
Register
Install the app
Install
How to install the app on iOS
Follow along with the video below to see how to install our site as a web app on your home screen.
Note: This feature may not be available in some browsers.
Spanish-English / Español-Inglés
Spanish-English Grammar / Gramática Español-Inglés
Creer + que + subjunctive
Thread starterCristina Allende
Start dateSep 7, 2007
C
Cristina Allende
Senior Member
US, English
Sep 7, 2007
#1
Hello! In my Spanish class a couple days ago we had to answer a few questions over a short story we had read. One of the questions asked the following:
¿Qué crees que sea el mensaje del autor?
My teacher from last year (when we learned about subjunctive situations) said that whenever you use creer que, it should be followed with the indicative, not the subjunctive. Is this rule always correct?Would this sentence be correct or not?
¿Qué crees que es el mensaje del autor?
Thank you!
Christina
David
Banned
Volcán, Chiriquí, Panamá
English - US
Sep 7, 2007
#2
I don´t think it is always correct. All the following sound right to me:
Creo que el mejor de los cuentos es "El tonel de amontillado..."
No creo que "El tonel de amontillado" sea el mejor de los cuentos de Poe.
¿Crees que el mejor cuento de Poe es"El tonel de amontillado"? Si no lo es, ¿cuál crees que seamejor?
Creo que si hay.
No creo que haya.
Él cree que hay, pero yo dudo que haya.
unspecified
Senior Member
Boston, MA, USA
English, USA
Sep 7, 2007
#3
I think in that case, it's acceptable either way (indicative or subjunctive); it just changes the meaning slightly:
¿Qué crees que sea el mensaje del autor?
What do you think the author's message might have been?
¿Qué crees que es el mensaje del autor?What do you think the author's message is?
But, wait to see what a native-speaker says. Until then, hope this helps!
Estefanía Perdomo
Senior Member
Miranda.
Venezuela, Español.
Sep 7, 2007
#4
No concuerdo mucho con lo de la regla, miren lo que se debatió en otro foro, este es sólo un post interesante del hilo, léanlo.
Saludos.
Estefanía.
C
Cristina Allende
Senior Member
US, English
Sep 7, 2007
#5
David said:
Creo que el mejor de los cuentos es "El tonel de amontillado..."
No creo que "El tonel de amontillado" sea el mejor de los cuentos de Poe.
¿Crees que el mejor cuento de Poe es"El tonel de amontillado"? Si no lo es, ¿cuál crees que seamejor?
Creo que si hay.
No creo que haya.
Él cree que hay, pero yo dudo que haya.
Click to expand...
Yes, but the rule we learned in class was that after "no creer que..." (in the negative) you usually use subjunctive, but after "creer que" (in the affirmative) you use indicative. Her explanation was that in the mind of English speakers, when we use "I think..." we are uncertain. However, in the mind of Spanish speakers, when they use "Creo que..." they are certain. It's hard to explain exactly how she phrased it, but she made it sound like Spanish speakers just have a different perception about uncertainty versus certainty.
O
Outsider
Senior Member
Portuguese (Portugal)
Sep 7, 2007
#6
I understand the contrast that the author is trying to make. The subjunctive gives a more reserved nuance to the sentence.
¿Qué crees que es el mensaje del autor? --> What do you think the author's message is?
¿Qué crees que sea el mensaje del autor? --> What do you think the author's message might be?
However, I'm not sure that this distinction is accepted by standard Spanish grammar.
C
Cristina Allende
Senior Member
US, English
Sep 7, 2007
#7
For the idea of "it might be..." could you use poder instead of using the subjunctive?
¿Qué crees que puede ser el mensaje del autor?
Or does that not make any sense?
O
Outsider
Senior Member
Portuguese (Portugal)
Sep 7, 2007
#8
Indeed you can.
Remember that there often isn't a one-to-one correspondence between English and Spanish constructions.
C
Cristina Allende
Senior Member
US, English
Sep 7, 2007
#9
So, if I was going to answer this question, would I say "Creo que el mensaje sea..." or "Creo que el mensage es..." or even "Creo que el mensaje puede ser..."?
Which do you hear more often? (not just in this context, but in the general category of creer (affirmative) + que + verb form).
J
Jeromed
Banned
USA, English
Sep 7, 2007
#10
Cristina :
<>
Jerome:
True, but that's in declarative sentences (Él cree que Juan vendrá)
In a question, there's no certainty involved, since the teacher doesn't know what the students think.
J
Jeromed
Banned
USA, English
Sep 7, 2007
#11
Cristina Allende said:
So, if I was going to answer this question, would I say "Creo que el mensaje sea..." or "Creo que el mensage es..." or even "Creo que el mensaje puede ser..."?
Which do you hear more often? (not just in this context, but in the general category of creer (affirmative) + que + verb form).
Click to expand...
Creo que el mensaje es / Creo que el mensaje puede ser both sound fine to me.
Creo que el mensaje sea does not.
C
Cristina Allende
Senior Member
US, English
Sep 7, 2007
#12
So when a Spanish speaker is asking a question that has the word "think" in it, then do they mostly use it with subjunctive or indicative?
Where do you think Ralph is right now?
¿Dónde crees que esté Ralph ahora?
Why do you think that fence is there?
¿Por qué crees que esté esa cerca por ahí?
(and yes, I was going for the American colors, there!)
Christina
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#13
Cristina Allende said:
Yes, but the rule we learned in class was that after "no creer que..." (in the negative) you usually use subjunctive, but after "creer que" (in the affirmative) you use indicative. Her explanation was that in the mind of English speakers, when we use "I think..." we are uncertain.
Click to expand...
There is no rule that justifies the use of the subjunctive here, as far as I know. Anyway, regarding that rule about "certain" and "not certain" things, I believe it is not very useful. Look:
Creo que viene - Viene, creo.
I think he's coming - He's coming, I think.
No creo que venga - Viene, no creo.
I don't think he's coming - He's coming, I don't think.
Coincidence? Check for the "indicative to declare" rule instead.
Cristina Allende said:
¿Dónde crees que~~esté~~está Ralph ahora?
¿Por qué crees que~~esté~~está esa cerca por ahí?
Click to expand...
The use of subjunctive in interrogative sentences is marginal, and of little use. Keep the indicative here.
J
Jeromed
Banned
USA, English
Sep 7, 2007
#14
Cristina Allende said:
So when a Spanish speaker is asking a question that has the word "think" in it, then do they mostly use it with subjunctive or indicative?
Where do you think Ralph is right now?
¿Dónde crees que esté Ralph ahora?
Why do you think that fence is there?
¿Por qué crees que esté esa cerca por ahí?
(and yes, I was going for the American colors, there!)
Christina
Click to expand...
Esté (subjunctive)---> might be
Está (indicative) ---> is
Both are possible in interrogative sentences, although I believe that the subjunctive is more common. I'll let the native speakers confirm this.
O
Outsider
Senior Member
Portuguese (Portugal)
Sep 7, 2007
#15
lazarus1907 said:
Anyway, regarding that rule about "certain" and "not certain" things, I believe it is not very useful. Look:
Creo que viene - Viene, creo.
I think he's coming - He's coming, I think.
No creo que venga - Viene, no creo.
I don't think he's coming - He's coming, I don't think.
Click to expand...
Technically, though, "I don't think he's coming" and "I think he's not coming" are different ideas, Lazarus...
Even though the former is often employed with the sense of the latter in practice.
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#16
Jeromed said:
Both are possible in interrogative sentences, although I believe that the subjunctive is more common.
Click to expand...
I disagree.
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#17
Outsider said:
Technically, though, "I don't think he's coming" and "I think he's not coming" are different ideas, Lazarus...
Even though the former is often employed with the sense of the latter in practice.
Click to expand...
Could you elaborate here? I am not talking just about ideas, but syntaxis, semantics, and natural communication althogether. Those sentences convey ALMOST the same meaning, but they are not the same... even for natives, if you analize them properly. The proof is quite simple, actually.
I was just turning the subordinate clauses into main sentences just to prove my point (actually, inspired by Bull). There is an extensive literature showing that those two sentences are more different than they appear to be, and that there is a justification for the Spanish subjunctive.
J
Jeromed
Banned
USA, English
Sep 7, 2007
#18
lazarus1907 said:
I disagree.
Click to expand...
With the assertion that they're both possible, or with the one about the subjunctive being more common?
If the latter, I'll let native speakers from the Americas comment. I might be wrong; or it might be a Latin American usage.
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#19
Jeromed said:
With the assertion that they're both possible, or with the one about the subjunctive being more common?
Click to expand...
You said that you believe that the subjunctive is more common here. I don't agree about that point. Check as many sources as you like; I'll be most surprised if you're right, since my American's writers database seems to confirm my proposal.
O
Outsider
Senior Member
Portuguese (Portugal)
Sep 7, 2007
#20
lazarus1907 said:
Could you elaborate here?
Click to expand...
Sure:
"I don't think he's coming" --> I am not certain that he's coming. He may or may not be coming.
"I think he's not coming" --> I am positive that he's not coming.
The first sentence still allows for some doubt; the second does not.
lazarus1907 said:
Those sentences convey ALMOST the same meaning, but they are not the same... even for natives, if you analize them properly. The proof is quite simple, actually.
Click to expand...
Isn't that why I said, too?
J
Jeromed
Banned
USA, English
Sep 7, 2007
#21
lazarus1907 said:
You said that you believe that the subjunctive is more common here. I don't agree about that point. Check as many sources as you like; I'll be most surprised if you're right, since my American's writers database seems to confirm my proposal.
Click to expand...
OK. I won't argue with you, since you seem to be so certain (and I am not).
panjabigator
Senior Member
San Francisco
Am. English
Sep 7, 2007
#22
lazarus1907 said:
There is no rule that justifies the use of the subjunctive here, as far as I know. Anyway, regarding that rule about "certain" and "not certain" things, I believe it is not very useful. Look:
Creo que viene - Viene, creo.
I think he's coming - He's coming, I think.
No creo que venga - Viene, no creo.
I don't think he's coming - He's coming, I don't think.
Coincidence? Check for the "indicative to declare" rule instead.
The use of subjunctive in interrogative sentences is marginal, and of little use. Keep the indicative here.
Click to expand...
Hi Lazarus,
Could you explain a little further about why the usage of the subjunctive is marginal with interrogatives? When would its usage be appropriate in an interrogative sentence?
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#23
Outsider said:
Sure:
"I don't think he's coming" --> I am not certain that he's coming. He may or may not be coming.
"I think he's not coming" --> I am positive that he's not coming.
Click to expand...
Those sentences are different for the same reasons the subjunctive and the indicative are used: In the fist one no declaration is made; in the second one, a declaration is made.
L
lazarus1907
Senior Member
Lincoln, England
Spanish, Spain
Sep 7, 2007
#24
panjabigator said:
Could you explain a little further about why the usage of the subjunctive is marginal with interrogatives? When would its usage be appropriate in an interrogative sentence?
Click to expand...
I shouldn't have said that. Anyway, there are cases in imperative and interrogative sentences (direct and indirect style) where the subjunctive "appears" to override the general rule. Please, email me if you really want some examples.
A
aleCcowaN
Senior Member
Castellano - Argentina
Sep 8, 2007
#25
Cristina Allende said:
Hello! In my Spanish class a couple days ago we had to answer a few questions over a short story we had read. One of the questions asked the following:
¿Qué crees que sea el mensaje del autor?
My teacher from last year (when we learned about subjunctive situations) said that whenever you use creer que, it should be followed with the indicative, not the subjunctive. Is this rule always correct?Would this sentence be correct or not?
¿Qué crees que es el mensaje del autor?
Thank you!
Christina
Click to expand...
Sentences like "¿Qué crees que es el mensaje del autor?"look to me like a machine translation departing from English, mainly because of the "qué", and also because of the "es". Maybe I've read all the posts slightly but I didn't find any criticism on the use of "qué".
It could be:
¿Cuál es el mensaje del autor? (The author cast a message. Now, you have to find it.)
¿Cuál crees que es el mensaje del autor? (The author cast a message clearly. He or her confirmed it, or scholars agree on it. Now, you have to find that message. Similar to the previous one, it is less general: there's a message and we are interested in what you think, and possibly in you aiming the target)
¿Cuál crees que sea el mensaje del autor? (The author cast a message. Whether he or her has confirmed it or not, whether the scholars agree on it or not, is not the issue now. We are interested in your opinion and reasoning -or the language skills you show, say, in a language course-, not your accuracy ---> the subjunctive here shouts: Give us the best of what is in your mind about the point, later we'll compare that with some standard which might be even the real author's message).
The other alternatives:
¿Qué es el mensaje del autor? Answer: The message of the author.
¿Qué crees que es el mensaje del autor? Answer: A message
¿Qué crees que sea el mensaje del autor? Answer: A message of peace, a message of love, a message of hate, a message of hope ....
But with Spanish of the United States and neighbor zones reached by US TV air broadcasts, you never know.
[You're welcome and encouraged to correct and criticize my English]
C
Cristina Allende
Senior Member
US, English
Sep 8, 2007
#26
So, if you use "cuál" instead of "qué," would you still use the subjunctive in the sentence or not? What is the most common way of expressing this question?
A
aleCcowaN
Senior Member
Castellano - Argentina
Sep 8, 2007
#27
Cristina Allende said:
So, if you use "cuál" instead of "qué," would you still use the subjunctive in the sentence or not? What is the most common way of expressing this question?
Click to expand...
I think the problem is which actually is the question. Speaking from my culture, education, age, etc. if I asked you "¿Cuál crees que es el mensaje del autor?", I'd be asking you to give me that piece of information: what you hold in your mind as a summary and conclusion about some message given by the author.
If I asked you "¿Cuál crees que sea el mensaje del autor?", I'd be encouraging you to tell me what you think the message could be and explain me it in detail, including the way you used to reach your conclusion, and all other speculation you may have done on the issue.
As a rule of thumb, indicative calls for facts (including your conclusion as a simple fact) and subjunctive makes room for you (you subject, as the generator of that conclusion). Following this, that's why are much more common the forms "creer + que + indicative" and "no + creer + que + subjunctive", because what you think-believe-feel_deeply is in fact a fact to everyone else (thus indicative), and what you don't thik-believe-feel_deeply isn't a fact in your mind and it is a fact of lacking of a fact in everyone else's minds. You surely are aware of this and that's why you find the form "si + creer + que + subjuntive" weird or wrong. There's maybe only one situation that justifies this form: you are more interested in the subject than in the fact (the fact = what the subject thinks).
Clean slate start: If I were a teacher, gave my students a text and wanted them to discuss and/or reflect on it and then promote them to talk aloud or write down their opinions, reasonings and conclusion, because as a teacher I need to know the mental mechanics of my students and evaluate their skills in different areas and general proficiency in order for me to correct, propose, show, find ways, and so on, I would encourage this by asking them "¿Cuál creen que sea el mensaje del autor? ¡El debate está abierto!"
About "qué" and "cuál", that's other issue.
K
koala_au
New Member
italia
Sep 8, 2007
#28
listen I studied this rule at the uni
creo+que+indicativo
no creo+que+subjuntivo
u must put the subjuntivo just in the sentence negative while if the sentence is positive u must put the indicative
it s the rule of subjetive sentences in spanish
L
labrapalabras
Senior Member
Currently in Mexico, DF
Mexico, Spanish
Sep 8, 2007
#29
What a great thread! The subjunctive (both in Spanish and in English) is a fascinating subject. However...
De entrada, me parece importante señalar algo que nadie ha comentado: la pregunta original del maestro "¿qué crees que sea el mensaje del autor?" me suena rara. Me parece más correcto decir: "¿cuál crees que sea el mensaje del autor?". La razón gramatical es que "mensaje" es cuantificable/contable y no una acción, pues sí se puede decir "¿qué crees que sea lo que quiere?".
En cuanto a lo del subjuntivo. La regla planteada me aprece que funciona en muchos casos, pero no se le debe seguir a rajatabla. El subjuntivo está relacioado con el modo "irrealis" de la lengua, se utiliza para hablar de cosas que no se saben con certeza, que se suponen o imaginan. Actualmente, en el español, el subjuntivo está simplificándose y sufriendo trasformaciones. De ahí que ambos usos (¿cuál crees que sea el mensaje?; ¿cuál crees que es el mensaje?) puedan escucharse en hablantes nativos. La razón por la cual se usa el indicativo en las frases afrimativas (yo creo que el mensaje es...) es porque se está hablando de la realidad de una creencia, más allá de los criterios de verdad para confirmarla.
Jaja, espero haber ayudado en algo. Es un tema interesante.
G
Gerhardus
New Member
Argentina
English
Dec 12, 2009
#30
Cristina Allende said:
Hello! In my Spanish class a couple days ago we had to answer a few questions over a short story we had read. One of the questions asked the following:
¿Qué crees que sea el mensaje del autor?
My teacher from last year (when we learned about subjunctive situations) said that whenever you use creer que, it should be followed with the indicative, not the subjunctive. Is this rule always correct?Would this sentence be correct or not?
¿Qué crees que es el mensaje del autor?
Thank you!
Christina
Click to expand...
Since this is a QUESTION, it may take subjunctive, example:
¿Qué crees que es ..... (what do you think is ...)
¿Qué crees que sea ... (what do you think might be ...)
see:
dilema
Senior Member
Madrid (España)
Spain-spanish
Dec 12, 2009
#31
Cristina Allende said:
Hello! In my Spanish class a couple days ago we had to answer a few questions over a short story we had read. One of the questions asked the following:
¿Qué crees que sea el mensaje del autor?
My teacher from last year (when we learned about subjunctive situations) said that whenever you use creer que, it should be followed with the indicative, not the subjunctive. Is this rule always correct?Would this sentence be correct or not?
¿Qué crees que es el mensaje del autor?
Thank you!
Christina
Click to expand...
En España sólo oirías la versión con indicativo. Es posible que en algunos países de América se use el subjuntivo para preguntas de ese tipo (yo lo he oído en alguna telenovela ¿mexicana, venezolana?).
En cuanto al uso de qué o cuál, depende de qué es lo que el profesor esté queriendo preguntar.
El qué está bien si lo que busca es una respuesta del tipo: creo que es un mensaje de esperanza/provocador/político/una justificación... (en definitiva, una definición de la naturaleza del mensaje)
Si lo que busca es que se describa el mensaje, lo adecuado hubiera sido un cuál, en mi opinión.
D
duncandhu
Senior Member
Annapolis, MD, USA
United Kingdom, English
Dec 14, 2009
#32
I agree with koala_au on this:
I was taught that "verbos de la cabeza", i.e. those verbs linked to perception, e.g. ver, oír, parecer, creer, pensar etc go with indicative when positive, and subjunctive when negative, hence:
Creo que + ind.
No creo que + subj.
Veo que + ind.
No veo que + subj.
Es que + ind.
No es que + subj.
etc.
But I'm not sure about what the rule is if it's a question, as various people have said already.
Saludos
Duncan
VivaReggaeton88
Senior Member
Santa Ana, Costa Rica / New York, NY
US/EEUU; English/Inglés
Dec 14, 2009
#33
Lazarus:
Why is 'No creo que venga.' wrong?
X
xlover
New Member
Agramonte, Cuba
Urdu, English, Spanish, Punjabi, Sindhi
Dec 14, 2009
#34
[B said:
¿Qué crees que sea el mensaje del autor?[/B]
Is this rule always correct?Would this sentence be correct or not?
Click to expand...
No siempre...
... Comó se va a explicarle (how you'll explian it?).... What you think that it would be a snake ?
Milton Sand
Senior Member
Bucaramanga, Colombia
Español (Colombia)
Dec 14, 2009
#35
Hi,
No, xlover, he didn't say that was wrong, he say that the conversion to independent clauses was not possible (green ticks are mine):
lazarus1907 said:
No creo que venga - Viene, no creo.
I don't think he's coming - He's coming, I don't think.
Click to expand...
Regards,
slazenger14
Senior Member
United States - Columbus, Ohio
American English
Dec 14, 2009
#36
Gerhardus said:
Since this is a QUESTION, it may take subjunctive, example:
¿Qué crees que es ..... (what do you think is ...)
¿Qué crees que sea ... (what do you think might be ...)
see:
Click to expand...
Coincido con Gerhardus. Explicó los varios sentidos del español al inglés super bien.
H
H Bookbinder
New Member
Northern England
Spanish / Andalusia
Nov 19, 2011
#37
Cristina Allende said:
Yes, but the rule we learned in class was that after "no creer que..." (in the negative) you usually use subjunctive, but after "creer que" (in the affirmative) you use indicative. Her explanation was that in the mind of English speakers, when we use "I think..." we are uncertain. However, in the mind of Spanish speakers, when they use "Creo que..." they are certain. It's hard to explain exactly how she phrased it, but she made it sound like Spanish speakers just have a different perception about uncertainty versus certainty.
Click to expand...
I agree with Cristina.
L
lynk25
New Member
English - United States
Feb 10, 2012
#38
Outsider said:
Sure:
"I don't think he's coming" --> I am not certain that he's coming. He may or may not be coming.
"I think he's not coming" --> I am positive that he's not coming.
The first sentence still allows for some doubt; the second does not.
Click to expand...
I wanted to quickly add on to this. ^This is the same way I was taught subjunctive for Spanish.
This is how I think about it:
"No pienso que venga" -- I'm not sure if he's coming or not
"Pienso que no viene" -- Even if I am wrong, this is still what I think, and I believe that it is true.
I am still a little bit confused about subjunctive though. I understand how it functions in sentences, but I'm not so sure how it works in questions. For example, i'm reading a novel for my spanish class. One line reads "Pero, Angelina, ¿cuánto crees que tarden las cartas? Tardan mucho...". It makes sense for the verb to be in the subjunctive when it's "no creer", but why is the second verb in the subjunctive when it's in the question?
slazenger14
Senior Member
United States - Columbus, Ohio
American English
Feb 10, 2012
#39
lynk25 said:
I wanted to quickly add on to this. ^This is the same way I was taught subjunctive for Spanish.
This is how I think about it:
"No pienso que venga" -- I'm not sure if he's coming or not
"Pienso que no viene" -- Even if I am wrong, this is still what I think, and I believe that it is true.
I am still a little bit confused about subjunctive though. I understand how it functions in sentences, but I'm not so sure how it works in questions. For example, i'm reading a novel for my spanish class. One line reads "Pero, Angelina, ¿cuánto crees que tarden las cartas? Tardan mucho...". It makes sense for the verb to be in the subjunctive when it's "no creer", but why is the second verb in the subjunctive when it's in the question?
Click to expand...
The reason is that you are expressing a degree of doubt upon asking the question.
¿Tú realmente piensas que ellos vayan a la fiesta? <-- You are questioning the fact that they are going to the party or at least expressing a degree doubt over the whole subject.
Maybe a native speaker of Spanish can explain in better, but this is how I understand it. Also, I hear the indicative used more in the subordinate clause, though, at least around the people I speak with.
M
maretto
Member
Español (España)
Feb 10, 2012
#40
¿Qué crees que sea el mensaje del autor?
¿Qué crees que es el mensaje del autor?
A mi estas dos frases me suenan muy raro. Yo diría ¿Cuál crees que es el mensaje del autor?
You must log in or register to reply here.
Share:
BlueskyLinkedInWhatsAppEmailShareLink
Spanish-English / Español-Inglés
Spanish-English Grammar / Gramática Español-Inglés
WR styleSystemLightDark
English (EN-us)
Contact us
Terms and rules
Privacy policy
Help
RSS
Community platform by XenForo®© 2010-2025 XenForo Ltd.
Back
TopBottom |
17215 | https://math.stackexchange.com/questions/15778/what-is-the-minimum-value-of-a-such-that-xa-geq-lnx-for-all-x-0 | Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
What is the minimum value of $a$ such that $x^a \geq \ln(x)$ for all $x > 0$?
Ask Question
Asked
Modified 14 years, 9 months ago
Viewed 422 times
10
$\begingroup$
This is probably just elementary, but I don't know how to do it. I would like to find the minimum value of $a$ such that $x^a \geq \ln(x)$ for all $x > 0$. Numerically, I have found that this minimum value lies between 0.365 and 0.37 (i.e., $x^{0.37} > \ln(x)$ for all $x > 0$, but $x^{0.365}$ is not). Is there any analytical way to find out exactly this minimum value?
EDIT: Based on the received answers, I finally came up with my own one as follows.
Consider the function $f(x) = x^a - \ln(x).$ This function is convex in $x$, and hence, achieves the unique minimum as $x^$ such that $f'(x^) = 0.$ Solving that equation yields $$f_{\mathrm{min}} = \min\limits_{x>0} f(x) = \frac{\ln(a)+1}{a}.$$
Now, by letting $f_{min} = 0$, we get the desired value $a^ = 1/e.$
Thank everyone for the answers!
calculus
real-analysis
Share
edited Dec 29, 2010 at 1:35
Arturo Magidin
419k6060 gold badges864864 silver badges1.2k1.2k bronze badges
asked Dec 28, 2010 at 23:36
cresmooncresmoon
32722 silver badges88 bronze badges
$\endgroup$
1
$\begingroup$ $1/e$ is one of those values that you recognize numerically after a while. $\endgroup$
asmeurer
– asmeurer
2010-12-29 06:09:43 +00:00
Commented Dec 29, 2010 at 6:09
Add a comment |
2 Answers 2
Reset to default
7
$\begingroup$
$a = \max_{x>0} \frac{\ln(\ln(x))}{\ln(x)} = 1/e$.
Share
answered Dec 28, 2010 at 23:49
lhflhf
222k2020 gold badges254254 silver badges585585 bronze badges
$\endgroup$
1
$\begingroup$ Got it! Thanks, lhf. $\endgroup$
cresmoon
– cresmoon
2010-12-29 01:17:09 +00:00
Commented Dec 29, 2010 at 1:17
Add a comment |
5
$\begingroup$
Consider the minimum of the function $f(x)=x^{1/e}-\ln(x)$.
EDIT: As a second step, verify that $x^a < \ln(x)$ at $x=(1/a)^{1/a}$, for any $0
Share
edited Dec 29, 2010 at 0:22
answered Dec 28, 2010 at 23:49
Shai CovoShai Covo
24.4k22 gold badges5050 silver badges7373 bronze badges
$\endgroup$
3
1
$\begingroup$ This minimum occurs at $e^e \approx 15.1543$ and is 0. $\endgroup$
Ross Millikan
– Ross Millikan
2010-12-28 23:52:26 +00:00
Commented Dec 28, 2010 at 23:52
$\begingroup$ Since the minimum is $0$, $x^{1/e} \geq \ln(x)$ for all $x>0$. $\endgroup$
Shai Covo
– Shai Covo
2010-12-29 00:01:01 +00:00
Commented Dec 29, 2010 at 0:01
$\begingroup$ Thanks for the answer, Shai. $\endgroup$
cresmoon
– cresmoon
2010-12-29 01:19:17 +00:00
Commented Dec 29, 2010 at 1:19
Add a comment |
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
calculus
real-analysis
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
1 Function for slider with minimum, maximum and "midpoint" value
Find maximum and minimum value of function $f(x,y,z)=y+z$ on the circle
Find the value of a such that F(a) achieves its minimum value
7 does a convex polynomial always reach its minimum value?
0 Weierstrass theorem (extreme value theorem) for $f(x) = x^2-\sin(x)$
1 What is the maximum and minimum value of the following function?
Hot Network Questions
how do I remove a item from the applications menu
Why, really, do some reject infinite regresses?
Who is the target audience of Netanyahu's speech at the United Nations?
Can a state ever, under any circumstance, execute an ICC arrest warrant in international waters?
Lingering odor presumably from bad chicken
Implications of using a stream cipher as KDF
Spectral Leakage & Phase Discontinuites
Passengers on a flight vote on the destination, "It's democracy!"
Does the curvature engine's wake really last forever?
Switch between math versions but without math versions
Find non-trivial improvement after submitting
How to use \zcref to get black text Equation?
Are there any alternatives to electricity that work/behave in a similar way?
My dissertation is wrong, but I already defended. How to remedy?
Does the mind blank spell prevent someone from creating a simulacrum of a creature using wish?
Two calendar months on the same page
What meal can come next?
Calculating the node voltage
An odd question
Is direct sum of finite spectra cancellative?
What "real mistakes" exist in the Messier catalog?
How to start explorer with C: drive selected and shown in folder list?
Why is the fiber product in the definition of a Segal spaces a homotopy fiber product?
How to rsync a large file by comparing earlier versions on the sending end?
more hot questions
Question feed |
17216 | https://med.libretexts.org/Bookshelves/Anatomy_and_Physiology/Anatomy_and_Physiology_(Boundless)/16%3A_Cardiovascular_System_-_Blood/16.5%3A_Hemostasis/16.5A%3A_Overview_of_Hemostasis | 16.5A: Overview of Hemostasis - Medicine LibreTexts
Skip to main content
Table of Contents menu
search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode
school Campus Bookshelves
menu_book Bookshelves
perm_media Learning Objects
login Login
how_to_reg Request Instructor Account
hub Instructor Commons
Search
Search this book
Submit Search
x
Text Color
Reset
Bright Blues Gray Inverted
Text Size
Reset
+-
Margin Size
Reset
+-
Font Type
Enable Dyslexic Font - [x]
Downloads expand_more
Download Page (PDF)
Download Full Book (PDF)
Resources expand_more
Periodic Table
Physics Constants
Scientific Calculator
Reference expand_more
Reference & Cite
Tools expand_more
Help expand_more
Get Help
Feedback
Readability
x
selected template will load here
Error
This action is not available.
chrome_reader_mode Enter Reader Mode
16.5: Hemostasis
16: Cardiovascular System - Blood
{ }
{ "16.5A:_Overview_of_Hemostasis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5B:_Vascular_Spasm" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5C:_Platelet_Plug_Formation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5D:_Coagulation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5E:_Role_of_Vitamin_K" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5F:_Clot_Retraction_and_Repair" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5G:_Fibrinolysis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
{ "16.1:_Overview_of_Blood" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.2:_Red_Blood_Cells" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.3:_White_Blood_Cells" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.4:_Platelets" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.5:_Hemostasis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16.6:_Transfusions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
Sat, 05 Oct 2024 22:51:24 GMT
16.5A: Overview of Hemostasis
7798
7798
Delmar Larsen
{ }
Anonymous
Anonymous User
2
false
false
[ "article:topic", "license:ccbysa", "showtoc:no" ]
[ "article:topic", "license:ccbysa", "showtoc:no" ]
Search site Search Search Go back to previous article
Sign in
Username Password Sign in
Sign in
Sign in
Forgot password
Contents
1. Home
2. Bookshelves
3. Anatomy and Physiology
4. Anatomy and Physiology (Boundless)
5. 16: Cardiovascular System - Blood
6. 16.5: Hemostasis
7. 16.5A: Overview of Hemostasis
Expand/collapse global location
Anatomy and Physiology (Boundless)
Front Matter
1: Introduction to Anatomy and Physiology
2: The Chemical Building Blocks of Life
3: Organization at the Cellular Level
4: Organization at the Tissue Level
5: Integumentary System
6: Skeletal System
7: Skeletal System - Parts of the Skeleton
8: Joints
9: Muscular System
10: Overview of the Nervous System
11: Central Nervous System
12: Peripheral Nervous System
13: Special Senses
14: Autonomic Nervous System
15: Endocrine System
16: Cardiovascular System - Blood
17: Cardiovascular System: The Heart
18: Cardiovascular System: Blood Vessels
19: Lymphatic System
20: Immune System
21: Respiratory System
22: Digestive System
23: Nutrition and Metabolism
24: Urinary System
25: Body Fluids and Acid-Base Balance
26: The Reproductive System
27: Human Development and Pregnancy
28: Reproduction, Chromosomes, and Meiosis
30: APPENDIX B: Development and Aging of the Organ Systems
29: APPENDIX A: Diseases, Injuries, and Disorders of the Organ Systems
Back Matter
16.5A: Overview of Hemostasis
Last updated Oct 5, 2024
Save as PDF
16.5: Hemostasis
16.5B: Vascular Spasm
Page ID 7798
( \newcommand{\kernel}{\mathrm{null}\,})
Table of contents
1. Learning Objectives
2. Key Points
3. Key Terms
4. Steps of Hemostasis
5. Vasoconstriction
6. Platelet Plug Formation
7. Coagulation Cascade
Hemostasis is the natural process that stops blood loss when an injury occurs.
Learning Objectives
Explain the steps involved in hemostasis
Key Points
Hemostasis is the natural process that stops blood loss when an injury occurs.It involves three steps: (1) vascular spasm ( vasoconstriction ); (2) platelet plug formation; and (3) coagulation.
Vasoconstriction is a reflex in which blood vessels narrow to increase blood pressure.
Next, platelet plug formation involves the activation, aggregation, and adherence of platelets into a plug that serves as a barrier against blood flow.
Coagulation involves a complex cascade in which a fibrin mesh is cleaved from fibrinogen.
Fibrin acts as a “molecular glue” during clot formation, holding the platelet plug together.
Key Terms
hemostasis: The process of slowing and stopping the flow of blood to initiate wound healing.
coagulation: The process by which blood forms gelatinous clots.
heparin: A fibrinolytic molecule expressed on endothelial cells or produced as a blood thinner medicine. It prevents activation of platelets and clotting factors.
Hemostasis is the natural process in which blood flow slows and a clot forms to prevent blood loss during an injury, with hemo- meaning blood, and stasis meaning stopping. During hemostasis, blood changes from a fluid liquid to a gelatinous state.
Steps of Hemostasis
Hemostasis includes three steps that occur in a rapid sequence: (1) vascular spasm, or vasoconstriction, a brief and intense contraction of blood vessels; (2) formation of a platelet plug; and (3) blood clotting or coagulation, which reinforces the platelet plug with fibrin mesh that acts as a glue to hold the clot together. Once blood flow has ceased, tissue repair can begin.
Angiogenesis Generates New Blood Vessels: Blood vessel with an erythrocyte (red blood cell) within its lumen, endothelial cells forming its tunica intima or inner layer, and pericytes forming its tunica adventitia (outer layer).
Vasoconstriction
Intact blood vessels are central to moderating blood’s clotting tendency. The endothelial cells of intact vessels prevent clotting by expressing a fibrinolytic heparin molecule and thrombomodulin, which prevents platelet aggregation and stops the coagulation cascade with nitric oxide and prostacyclin. When endothelial injury occurs, the endothelial cells stop secretion of coagulation and aggregation inhibitors and instead secrete von Willebrand factor, which causes platelet adherence during the initial formation of a clot. The vasoconstriction that occurs during hemostasis is a brief reflexive contraction that causes a decrease in blood flow to the area.
Platelet Plug Formation
Platelets create the “platelet plug” that forms almost directly after a blood vessel has been ruptured. Within twenty seconds of an injury in which the blood vessel’s epithelial wall is disrupted, coagulation is initiated. It takes approximately sixty seconds until the first fibrin strands begin to intersperse among the wound. After several minutes, the platelet plug is completely formed by fibrin.
Contrary to popular belief, clotting of a skin injury is not caused by exposure to air, but by platelets adhering to and being activated by collagen in the blood vessels’ endothelium. The activated platelets then release the contents of their granules, which contain a variety of substances that stimulate further platelet activation and enhance the hemostatic process.
When the lining of a blood vessel breaks and endothelial cells are damaged, revealing subendothelial collagen proteins from the extracellular matrix, thromboxane causes platelets to swell, grow filaments, and start clumping together, or aggregating. Von Willebrand factor causes them to adhere to each other and the walls of the vessel. This continues as more platelets congregate and undergo these same transformations. This process results in a platelet plug that seals the injured area. If the injury is small, the platelet plug may be able to form within several seconds.
Coagulation Cascade
If the platelet plug is not enough to stop the bleeding, the third stage of hemostasis begins: the formation of a blood clot. Platelets contain secretory granules. When they stick to the proteins in the vessel walls, they degranulate, thus releasing their products, which include ADP (adenosine diphosphate), serotonin, and thromboxane A2 (which activates other platelets).
First, blood changes from a liquid to a gel. At least 12 substances called clotting factors or tissue factors take part in a cascade of chemical reactions that eventually create a mesh of fibrin within the blood. Each of the clotting factors has a very specific function. Prothrombin, thrombin, and fibrinogen are the main factors involved in the outcome of the coagulation cascade. Prothrombin and fibrinogen are proteins that are produced and deposited in the blood by the liver.
When blood vessels are damaged, vessels and nearby platelets are stimulated to release a substance called prothrombin activator, which in turn activates the conversion of prothrombin, a plasma protein, into an enzyme called thrombin. This reaction requires calcium ions. Thrombin facilitates the conversion of a soluble plasma protein called fibrinogen into long, insoluble fibers or threads of the protein, fibrin. Fibrin threads wind around the platelet plug at the damaged area of the blood vessel, forming an interlocking network of fibers and a framework for the clot. This net of fibers traps and helps hold platelets, blood cells, and other molecules tight to the site of injury, functioning as the initial clot. This temporary fibrin clot can form in less than a minute and slows blood flow before platelets attach.
Next, platelets in the clot begin to shrink, tightening the clot and drawing together the vessel walls to initiate the process of wound healing. Usually, the whole process of clot formation and tightening takes less than a half hour.
Vasoconstriction: Microvessel showing an erythrocyte (E), a tunica intima of endothelial cells, and a tunica adventitia of pericytes.
16.5A: Overview of Hemostasis is shared under a CC BY-SA license and was authored, remixed, and/or curated by LibreTexts.
Back to top
16.5: Hemostasis
16.5B: Vascular Spasm
Was this article helpful?
Yes
No
Recommended articles
16.5B: Vascular Spasm
16.5C: Platelet Plug Formation
16.5D: Coagulation
16.5E: Role of Vitamin K
Article typeSection or PageLicenseCC BY-SAShow TOCno
Tags
This page has no tags.
© Copyright 2025 Medicine LibreTexts
Powered by CXone Expert ®
?
The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org.
Support Center
How can we help?
Contact Support Search the Insight Knowledge Base Check System Status×
contents readability resources tools
☰
16.5: Hemostasis
16.5B: Vascular Spasm |
17217 | https://phys.libretexts.org/Bookshelves/University_Physics/Book%3A_Introductory_Physics_-_Building_Models_to_Describe_Our_World_(Martin_Neary_Rinaldo_and_Woodman)/11%3A_Rotational_dynamics/11.06%3A_Moment_of_Inertia | 11.6: Moment of Inertia - Physics LibreTexts
Skip to main content
Table of Contents menu
search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode
school Campus Bookshelves
menu_book Bookshelves
perm_media Learning Objects
login Login
how_to_reg Request Instructor Account
hub Instructor Commons
Search
Search this book
Submit Search
x
Text Color
Reset
Bright Blues Gray Inverted
Text Size
Reset
+-
Margin Size
Reset
+-
Font Type
Enable Dyslexic Font - [x]
Downloads expand_more
Download Page (PDF)
Download Full Book (PDF)
Resources expand_more
Periodic Table
Physics Constants
Scientific Calculator
Reference expand_more
Reference & Cite
Tools expand_more
Help expand_more
Get Help
Feedback
Readability
x
selected template will load here
Error
This action is not available.
chrome_reader_mode Enter Reader Mode
11: Rotational dynamics
Introductory Physics - Building Models to Describe Our World (Martin et al.)
{ }
{ "11.01:Rotational_kinematic_vectors" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.02:_Rotational_dynamics_for_a_single_particle" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.03:_Torque" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.04:_Rotation_about_an_axis_versus_rotation_about_a_point." : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.05:_Rotational_dynamics_for_a_solid_object" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.06:_Moment_of_Inertia" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.07:_Equilibrium" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.08:_Summary" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.09:_Thinking_about_the_material" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11.10:_Sample_problems_and_solutions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
{ "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_The_Scientific_Method_and_Physics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_Comparing_Model_and_Experiment" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_Describing_Motion_in_One_Dimension" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Describing_Motion_in_Multiple_Dimensions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Newtons_Laws" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Applying_Newtons_Laws" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Work_and_energy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Potential_Energy_and_Conservation_of_Energy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Gravity" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Linear_Momentum_and_the_Center_of_Mass" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Rotational_dynamics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Rotational_Energy_and_Momentum" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_Simple_Harmonic_Motion" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_Waves" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "15:_Fluid_Mechanics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16:_Electric_Charges_and_Fields" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "17:_Gauss_Law" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "18:_Electric_potential" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "19:_Electric_Current" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "20:_Electric_Circuits" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "21:_The_Magnetic_Force" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "22:_Source_of_Magnetic_Field" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "23:_Electromagnetic_Induction" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "24:_The_Theory_of_Special_Relativity" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "25:_Vectors" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "26:_Calculus" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "27:_Guidelines_for_lab_related_activities" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "28:_The_Python_Programming_Language" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
Thu, 28 Mar 2024 17:58:59 GMT
11.6: Moment of Inertia
19441
19441
Delmar Larsen
{ }
Anonymous
Anonymous
2
false
false
[ "article:topic", "Moment of Inertia", "license:ccbysa", "showtoc:no", "licenseversion:40", "source@ "authorname:martin-neary-rinaldo-woodlan" ]
[ "article:topic", "Moment of Inertia", "license:ccbysa", "showtoc:no", "licenseversion:40", "source@ "authorname:martin-neary-rinaldo-woodlan" ]
Search site Search Search Go back to previous article
Sign in
Username Password Sign in
Sign in
Sign in
Forgot password
Expand/collapse global hierarchy
1. Home
2. Bookshelves
3. University Physics
4. Introductory Physics - Building Models to Describe Our World (Martin et al.)
5. 11: Rotational dynamics
6. 11.6: Moment of Inertia
Expand/collapse global location
11.6: Moment of Inertia
Last updated Mar 28, 2024
Save as PDF
11.5: Rotational dynamics for a solid object
11.7: Equilibrium
Page ID 19441
( \newcommand{\kernel}{\mathrm{null}\,})
Table of contents
1. Example 11.6.1
1. Solution
The parallel axis theorem
Example 11.6.2
Solution
Discussion
In order to model how an object rotates about an axis, we use Newton’s Second Law for rotational dynamics:
(11.6.1)τ→ext=Iα→
where τ→ext is the net external torque exerted on the object about the axis of rotation, α→ is the angular acceleration of the object, and I is the moment of inertia of the object (about the axis). If we consider the object as being made of many particles of mass m i each located at a position r→i relative to the axis of rotation, the moment of inertia is defined as:
(11.6.2)I=∑i m ir i 2
Consider, for example, the moment of inertia of a uniform rod of mass M and length L that is rotated about an axis perpendicular to the rod that pass through one of the ends of the rod, as depicted in Figure 11.6.1.
Figure 11.6.1: A rod of length L and mass M being rotated about an axis perpendicular to the rod that goes through one of its ends.
We introduce the linear mass density of the rod, λ, as the mass per unit length:
(11.6.3)λ=M L
We model the rod as being made of many small mass elements of mass Δm, of length Δr, at a location r i, as illustrated in Figure 11.6.1. Using the linear mass density, the mass element, Δm, has a mass of:
(11.6.4)Δm=λΔr
The rod is made of many such mass elements, and the moment of inertia of the rod is thus given by:
(11.6.5)I=∑i Δmr i 2=∑i λΔrr i 2
If we take the limit in which the length of the mass element is infinitesimally small (Δr→dr) the sum can be written as an integral over the dimension of the rod:
(11.6.6)I=∫0 L λr i 2 d r=1 3λL 3=1 3(M L)L 3=1 3ML 2
where we re-expressed the linear mass density in terms of the mass and length of the rod. In general, we can write the moment of inertia of a continuous object as:
(11.6.7)I=∫r 2 d m
where dm is a small mass element that makes up the object, r is the distance from that mass element to the axis of rotation, and the integral is over the dimension of the object. As we did above, we would usually set up this integral so that dm is expressed in terms of r so that we can take an integral over r.
Example 11.6.1
Calculate the moment of inertia of a uniform thin ring of mass M and radius R, rotated about an axis that goes through its center and is perpendicular to the disk.
Solution
We take a small mass element dm of the ring, as shown in Figure 11.6.2.
Figure 11.6.2: A small mass element on a ring.
The moment of inertia is given by:
(11.6.8)I=∫d mr 2
In this case, each mass element around the ring will be the same distance away from the axis of rotation. The value r 2 in the integral is a constant over the whole ring, and so can be taken out of the integral:
(11.6.9)I=∫d mr 2=R 2∫d m
where we used the fact that the ring has a radius R, so the distance r of each mass element to the axis of rotation is R. The integral:
(11.6.10)∫d m
just means “sum all of the mass elements, dm”, and is thus equal to M, the total mass of the ring. The moment of inertia of the ring is thus:
(11.6.11)I=R 2∫d m=MR 2
The parallel axis theorem
The moment of inertia of a solid object can be difficult to calculate, especially if the object is not symmetric. The parallel axis theorem allows us to determine the moment of inertia of an object about an axis, if we already know the moment of inertia of the object about an axis that is parallel and goes through the center of mass of the object.
Consider an object for which we know the moment of inertia, I CM, about an axis that goes through the object’s center of mass. We define a coordinate system such that the origin is located at the center of mass, and the z axis is parallel to the axis about which we know the moment of inertia, as illustrated in Figure 11.6.3.
Figure 11.6.3: An object with a coordinate system whose origin is at the object’s center of mass, and for which we know the moment of inertia about the z axis. We wish to determine the object’s moment of inertia through a second axis, parallel to the z axis, but located a distance h away from the center of mass.
We wish to determine the moment of inertia for the object for an axis that is parallel to the z axis, but goes through a point with coordinates (x 0,y 0) located a distance h away from the center of mass. The moment of inertia about an axis parallel to the z axis and that goes through that point, I h is given by:
(11.6.12)I h=∑i m ir i 2
where m i is a mass element of the object located at a distance r i from the axis of rotation. If the mass element is located at a position (x i,y i) relative to the center of mass, we can write the distance r i in terms of the position of the mass element, and of the position of the axis of rotation:
(11.6.13)r i 2=(x i−x 0)2+(y i−y 0)2=x i 2−2x ix 0+x 0 2+y i 2−2y iy 0+y 0 2
Note that:
(11.6.14)x 0 2+y 0 2=h 2
The moment of inertia, I h, can thus be written as:
(11.6.15)I h=∑i m ir i 2=∑i(m i(x i 2+y i 2)−2x 0m ix i−2y 0m iy i+m ih 2)=∑i m i(x i 2+y i 2)+h 2∑i m i−2x 0∑i m ix i−2y 0∑i m iy i
where we broke the sum up into several sums, and factored constant terms (h, x 0, y 0) out of the sums, since these constants do not depend on which mass element we are considering. The first term is the moment of inertia about the center of mass, since x i 2+y i 2 is the distance to the center of mass. The second term is h 2 times the total mass of the object, since the sum of all the m i is just the mass, M, of the object. Now consider the term:
(11.6.16)−2x 0∑i m ix i
The sum, ∑m ix i is the numerator in the definition of the x coordinate of the center of mass! The sum is thus zero, because we choose the origin to be located at the center of mass. The last two terms in the sum are thus identically zero, because they correspond to the x and y coordinates of the center of mass!
We can thus write the parallel axis theorem:
(11.6.17)I h=I CM+Mh 2
where I CM is the moment of inertia of an object of mass M about an axis that goes through the center of mass and, I h, is the moment of inertia about a second axis that is parallel to the first and a distance h away.
Example 11.6.2
In the previous section, we calculated the moment of inertia of a rod of length L and mass M through an axis that is perpendicular to the rod and through one of its ends, and found that it was given by:
(11.6.18)I=1 3ML 2
What is the moment of inertia of the rod about an axis that is perpendicular to the rod and goes through its center of mass?
Solution
In this case, we know the moment of inertia through an axis that does not go through the center of mass. The center of mass is located a distance h=L/2 away from the point about which we know the moment of inertia, I h.
Using the parallel axis theorem, we can find the moment of inertia through the center of mass:
(11.6.19)I CM=I h−Mh 2=1 3ML 2−M(L 2)2=1 12ML 2
Discussion
We find that the moment of inertia about the center of mass is smaller than the moment of inertia about the end of the rod. This makes sense because when rotating the rod about its end, more of its mass is further away from the axis of rotation, which results in a larger moment of inertia.
This page titled 11.6: Moment of Inertia is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Ryan D. Martin, Emma Neary, Joshua Rinaldo, and Olivia Woodman via source content that was edited to the style and standards of the LibreTexts platform.
Back to top
11.5: Rotational dynamics for a solid object
11.7: Equilibrium
Was this article helpful?
Yes
No
Recommended articles
8.4: Solving Statics ProblemsWhen solving static problems, you need to identify all forces and torques, confirm directions, solve equations, and check the results.
12.5: Kinetic TheoryPressure is explained by kinetic theory as arising from the force exerted by molecules or atoms impacting on the walls of a container.
5.4: Moment of InertiaIn analog with mass representing the inertia of a body undergoing linear acceleration, we’ll identify this quantity as the inertia of a body undergoin...
5.6: Angular MomentumWe define the angular momentum L as the rotational counterpart of momentum.
9.1: Rotational Kinetic Energy, and Moment of Inertia
Article typeSection or PageAuthorRyan D. Martin, Emma Neary, Joshua Rinaldo, and Olivia WoodmanLicenseCC BY-SALicense Version4.0Show TOCno
Tags
Moment of Inertia
source@
© Copyright 2025 Physics LibreTexts
Powered by CXone Expert ®
?
The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org.
Support Center
How can we help?
Contact Support Search the Insight Knowledge Base Check System Status× |
17218 | https://www.merriam-webster.com/thesaurus/wealth | Definition of wealth
as in assets
the total of one's money and property
her wealth increased to the point where she could afford several luxurious homes
personal property
deep pockets
reserve
nest egg
Antonyms & Near Antonyms
liabilities
debts
indebtedness
2
as in loads
a considerable amount
a wealth of advice from all quarters on how they should spend their lottery winnings
Synonyms & Similar Words
loads
plenty
ton
abundance
slew
dozen
quantity
lot
chunk
deal
pile
bunch
raft
bundle
stack
hundred
myriad
gobs
heap
plenitude
all kinds (of)
profusion
mountain
volume
boatload
multiplicity
mass
truckload
reams
bucket
basketful
quite a bit
good deal
fistful
much
oodles
yard
store
pot
shipload
scads
sheaf
thousands
bushel
pack
carload
sight
plentitude
lashings
mess
wad
barrel
peck
spate
potful
lashins
plateful
surplus
passel
score
flood
excess
multitude
surfeit
overabundance
superabundance
plethora
rash
plague
embarrassment
sea
epidemic
bonanza
superfluity
host
overflow
oversupply
crowd
redundancy
horde
bevy
army
flock
swarm
deluge
throng
million
overage
overkill
mob
press
herd
legion
overmuch
drove
crush
cram
zillion
trillion
gazillion
jillion
kazillion
Antonyms & Near Antonyms
taste
handful
pittance
touch
strain
suspicion
grain
spot
bit
hint
shade
ounce
trace
sprinkling
shadow
modicum
glimmer
pinch
peanuts
ray
molecule
streak
speck
mite
fragment
mouthful
atom
dram
scruple
particle
sprinkle
dab
ace
whit
scrap
smattering
shred
lick
mote
dot
iota
nip
crumb
scintilla
granule
portion
jot
tad
smidgeon
smidgen
poverty
little
tittle
smidgin
lack
section
smidge
scarcity
drop
piece
dash
morsel
driblet
absence
shortage
shot
paucity
flyspeck
famine
fleck
dearth
smatter
deficiency
deficit
want
inadequacy
insufficiency
nubbin
scarceness
meagerness
scantiness
undersupply
skimpiness
scantness
3
as in abundance
an amount or supply more than sufficient to meet one's needs
a wealth of documentation to support her thesis
Synonyms & Similar Words
abundance
plenty
plenitude
plethora
superabundance
embarrassment of riches
feast
cornucopia
competence
surplus
plentitude
competency
richness
adequacy
sufficiency
liberality
opulence
fertility
excess
amplitude
surfeit
overdose
fecundity
fruitfulness
overflow
oversupply
superfluity
luxuriance
copiousness
lavishness
overkill
superfluousness
redundancy
ampleness
Antonyms & Near Antonyms
deficiency
poverty
scarcity
insufficiency
inadequacy
paucity
undersupply
sterility
infertility
barrenness
Example Sentences
Examples are automatically compiled from online sources to
show current usage.
Read More
Opinions expressed in the examples do not represent those of Merriam-Webster or its editors.
Send us feedback.
Recent Examples of wealth
Homeownership has long been seen as the cornerstone of the American Dream and the creation of generational wealth — and there’s more work to do to make sure that the family home is a gift to future generations, not a burden or a heartbreak.
—Marco Villegas, New York Daily News, 11 Aug. 2025
As for where their wealth stands, Jeff Bezos and Lauren Sanchez likely signed a prenup and were legally married in the United States before their wedding on June 28, 2025, according to Page Six.
—Lea Veloso, StyleCaster, 11 Aug. 2025
By 2030, there will be a $30 trillion generational wealth transfer into the hands of women and Gen Z, and Gen Z, the largest generation ever, will account for all consumer spending growth over the next ten years.
—Lori Cashman, Fortune, 10 Aug. 2025
The median wealth of Black households ($24,520) was about one-tenth the median wealth of white ones, at $250,400.
—Andrea Riquier, USA Today, 10 Aug. 2025
See All Example Sentences for wealth
Recent Examples of Synonyms for wealth
assets
abundance
capital
fortune
ton
money
slew
dozen
Noun
In the complaint, Aaron asks that neither party be awarded alimony and that both parties' community assets and debts are in accordance with Nevada law.
—
Ingrid Vasquez,
People.com,
14 Mar. 2025
The dots represented arts organizations, creative businesses, studios, galleries, performance venues, and cultural and historic assets.
—
Jim Woods,
Chicago Tribune,
14 Mar. 2025
Definition of assets
Noun
With an abundance of metrics available, leaders often face the challenge of determining which data points truly matter.
—
Matthew Gantner,
Forbes.com,
15 Aug. 2025
There’s an abundance of choice, but what is Ridd most looking forward to about this year’s instalment?
—
Lily Ford,
HollywoodReporter,
14 Aug. 2025
Definition of abundance
Noun
Federal troops are patrolling the National Mall and neighborhoods across Washington while President Donald Trump’s administration exerts extraordinary power over law enforcement in the nation’s capital.
—
John Seewer,
Chicago Tribune,
16 Aug. 2025
The 12th-century castle was commissioned by William the Conqueror, and more than 900 Norman artifacts have been loaned by the British Museum to create its first medieval gallery outside of the UK capital.
—
Maureen O'Hare,
CNN Money,
16 Aug. 2025
Definition of capital
Noun
Immigration is not a zero-sum game: more immigration can boost local and national economies while improving the fortunes of people seeking better lives.
—
Alexander Kustov,
Foreign Affairs,
12 Aug. 2025
George, meanwhile, came out on top after coming dangerously close to losing his life, railroad company and fortune this season after overleveraging his businesses and being unable to make a few crucial deals.
—
Saman Shafiq,
USA Today,
11 Aug. 2025
Definition of fortune
Noun
The Eagles have a ton of talent returning from last season’s 10-2 team that lost in the second round of the playoffs to Tampa Bay Tech.
—
Chris Hays,
The Orlando Sentinel,
20 Aug. 2025
The Israelis did kill a ton of innocent people in Lebanon, though.
—
Isaac Chotiner,
New Yorker,
20 Aug. 2025
Definition of ton
Noun
The airport sued over that issue, and a court ruled in its favor, entering a temporary injunction directing city officials to use the sales tax money to pay off the bonds.
—
Arkansas Online,
Arkansas Online,
19 Aug. 2025
But in many ways, this collapse was baked into its spectacular rise, when a flood of dumb money, pollyannaish entrepreneurs, and hungry journalists rushed to build an industry that would soon turn into a house of cards.
—
Eric Benson,
Rolling Stone,
18 Aug. 2025
Definition of money
Noun
Exposure to air pollution from burning fossil fuels is associated with a slew of health issues including heart disease, cancer and respiratory illnesses.
—
Sarah Henry,
The Courier-Journal,
18 Aug. 2025
New Mexico: After losing Mendenhall and almost every notable player, Idaho coach Jason Eck comes in with a slew of talented FCS transfers.
—
Chris Vannini,
New York Times,
18 Aug. 2025
Definition of slew
Noun
More: Ascension Wisconsin plans to outsource staffing of its ICU doctors, raising concerns about patient safety
Ascension Wisconsin has more than a dozen hospitals in southeast Wisconsin and the Fox Valley.
—
Sarah Volpenhein,
jsonline.com,
15 Aug. 2025
The Tennessean reached out to a dozen school districts across Tennessee to ask about taxes and deductions applied to the bonuses.
—
Rachel Wegner,
Nashville Tennessean,
15 Aug. 2025
Definition of dozen
Browse Nearby Words
weal
wealth
wealthily
See all Nearby Words
Cite this Entry
“Wealth.” Merriam-Webster.com Thesaurus, Merriam-Webster, Accessed 26 Aug. 2025.
Copy Citation
More from Merriam-Webster on wealth
Nglish: Translation of wealth for Spanish Speakers
Britannica.com: Encyclopedia article about wealth
Last Updated:
- Updated example sentences
Popular in Grammar & Usage
See More
### 31 Useful Rhetorical Devices
### Merriam-Webster’s Great Big List of Words You Love to Hate
### How to Use Em Dashes (—), En Dashes (–) , and Hyphens (-)
### The Difference Between 'i.e.' and 'e.g.'
### Why is '-ed' sometimes pronounced at the end of a word?
See More
Popular in Wordplay
See More
### Great Big List of Beautiful and Useless Words, Vol. 3
### Our Best Historical Slang Terms
### 'Za' and 9 Other Words to Help You Win at SCRABBLE
### 12 Words Whose History Will Surprise You
### More Words with Remarkable Origins
See More
Popular
See More
### 31 Useful Rhetorical Devices
### Great Big List of Beautiful and Useless Words, Vol. 3
### Our Best Historical Slang Terms
See More |
17219 | https://www.reddit.com/r/askmath/comments/1ebq3mz/is_93_or_3/ | Is √9=3 or ±3? : r/askmath
Skip to main contentIs √9=3 or ±3? : r/askmath
Open menu Open navigationGo to Reddit Home
r/askmath A chip A close button
Log InLog in to Reddit
Expand user menu Open settings menu
Go to askmath
r/askmath
r/askmath
This subreddit is for questions of a mathematical nature. Please read the subreddit rules below before posting.
208K Members Online
•1 yr. ago
NeonExist
Is √9=3 or ±3?
Algebra
So a student came to me today and asked why I wrote the √9 as just 3 and not ±3. I gave some fluffy on the spot answer but it has now haunted me for the entire day. Who is correct here? I explained that if we start with x 2=9 then our answer is ±√9 which gives us ±3, but because we've started our equation at x= √9, that negative answer is removed.
Any assistance on this would be great!
Read more
Archived post. New comments cannot be posted and votes cannot be cast.
Share
Related Answers Section
Related Answers
Explain why √9 is 3 and not ±3
Best explanations for square roots in math
Clarify square root of negative numbers
Tips for mastering algebraic equations
Exploring the beauty of fractals in nature
New to Reddit?
Create your account and connect with a world of communities.
Continue with Email
Continue With Phone Number
By continuing, you agree to ourUser Agreementand acknowledge that you understand thePrivacy Policy.
Public
Anyone can view, post, and comment to this community
0 0
Top Posts
Reddit reReddit: Top posts of July 25, 2024
Reddit reReddit: Top posts of July 2024
Reddit reReddit: Top posts of 2024
Reddit RulesPrivacy PolicyUser AgreementAccessibilityReddit, Inc. © 2025. All rights reserved.
Expand Navigation Collapse Navigation |
17220 | https://www.statpearls.com/point-of-care/search?q=Relapsed+Or+Refractory+Multiple+Myeloma | Relapsed Or Refractory Multiple Myeloma | StatPearls | Treatment & Management
PhysicianPhysician Board ReviewsPhysician Associate Board ReviewsCMELifetime CMEFree CMEMATE and DEA ComplianceCPD
StudentUSMLE® Step 1USMLE® Step 2USMLE® Step 3COMLEX-USA® Level 1COMLEX-USA® Level 2COMLEX-USA® Level 396 Medical School ExamsStudent Resource Center
NCLEX - RNNCLEX - LPN/LVN/PN24 Nursing Exams
Nurse PractitionerAPRN/NP Board ReviewsCNS Certification ReviewsCE - Nurse PractitionerFREE CE
NurseRN Certification ReviewsCE - NurseFREE CE
PharmacistPharmacy Board Exam PrepCE - Pharmacist
AlliedAllied Health Exam PrepDentist ExamsCE - Social WorkerCE - Dentist
Point of Care
Institutional Sales
Student Resources
Search
CME 0.0
Sign-upLogin
PhysicianPhysician Board ReviewsPhysician Associate Board ReviewsCMELifetime CMEFree CMEMATE and DEA ComplianceCPD
StudentUSMLE® Step 1USMLE® Step 2USMLE® Step 3COMLEX-USA® Level 1COMLEX-USA® Level 2COMLEX-USA® Level 396 Medical School ExamsStudent Resource Center
NCLEX - RNNCLEX - LPN/LVN/PN24 Nursing Exams
Nurse PractitionerAPRN/NP Board ReviewsCNS Certification ReviewsCE - Nurse PractitionerFREE CE
NurseRN Certification ReviewsCE - NurseFREE CE
PharmacistPharmacy Board Exam PrepCE - Pharmacist
AlliedAllied Health Exam PrepDentist ExamsCE - Social WorkerCE - Dentist
Point of Care
Free CME/CE
Tools
Total credits available for redeeming
0.0 hours
Redeem
StatPearls Answer :
...on-free survival benefit.Ixazomib was approved based on the results of the phase III TOURMALINE-MM1 study, a worldwide double-blind and randomized study done onpatientswithrelapsed/refractory multiple myeloma.FDA Approved Indications Ixazomib is currently FDA-approved to treat relapsed or refractory multiple myeloma. Ixazomib
Articles
Showing 9147 articles
Show: 10
Show: 10 Show: 25 Show: 50 Show: 100
Relapsed and Refractory Multiple Myeloma
According to the criteria developed by the International Myeloma Working Group (IMWG), relapsedrefractory MM (RRMM) is defined as a progressive disease, poor response despite treatment, progression within 60 days of the most recent treatment in a patient who had achieved remission, the absence of at least minimal response (MR), or primary refractory MM.
Introduction
Etiology
Epidemiology
Pathophysiology
History and Physical
Evaluation
Treatment / Management
Differential Diagnosis
Surgical Oncology
Radiation Oncology
Pertinent Studies and Ongoing Trials
Toxicity and Adverse Effect Management
Staging
Prognosis
Complications
Postoperative and Rehabilitation Care
Consultations
Deterrence and Patient Education
Enhancing Healthcare Team Outcomes
Ixazomib
FDA Approved Indications Ixazomib is currently FDA-approved to treat relapsedorrefractorymultiple
Indications
Mechanism of Action
Administration
Adverse Effects
Contraindications
Monitoring
Toxicity
Enhancing Healthcare Team Outcomes
Bortezomib
For previously relapsing multiplemyeloma, bortezomib dosing is 1.3 mg/m^2 X 1 IV on days 1, 4, 8, and 11 of a 21-day cycle.
Indications
Mechanism of Action
Administration
Adverse Effects
Contraindications
Monitoring
Toxicity
Enhancing Healthcare Team Outcomes
Bispecific Antibody Toxicity
CRS' incidence was 27% to 59% in various clinical trials, but only 1% to 3.5% were grade 3 or higher.
Introduction
Etiology
Epidemiology
Pathophysiology
History and Physical
Evaluation
Treatment / Management
Differential Diagnosis
Prognosis
Complications
Deterrence and Patient Education
Pearls and Other Issues
Enhancing Healthcare Team Outcomes
Immunotherapy
This antigen is often a tumor-associated antigen (TAA) or a cancer-specific antigen.
Indications
Mechanism of Action
Administration
Adverse Effects
Contraindications
Enhancing Healthcare Team Outcomes
Amiloride
Preclinical studies suggest that amiloride can induce apoptosis in multiplemyeloma cell lines.
Indications
Mechanism of Action
Administration
Adverse Effects
Contraindications
Monitoring
Toxicity
Enhancing Healthcare Team Outcomes
Bence-Jones Protein
In 1939 Longworth applied electrophoresis for the first time in the study of multiple
Introduction
Specimen Collection
Procedures
Indications
Normal and Critical Findings
Complications
Clinical Significance
Zoledronate
FDA-Approved Indications Zoledronate is approved by the US Food and Drug Administration (FDA) for various indications, including the prevention and treatment of osteoporosis in males and postmenopausal females, glucocorticoid-induced osteoporosis, Paget disease of bone, hypercalcemia of malignancy, multiplemyeloma, and solid tumor bone metastases.
Indications
Mechanism of Action
Administration
Adverse Effects
Contraindications
Monitoring
Toxicity
Enhancing Healthcare Team Outcomes
Cryoglobulinemia
Type I:Type I cryoglobulinemia has monoclonal immunoglobulins, typically IgG or IgM, and develops in the setting of lymphoproliferative or hematologic disorders of B-cell lineage, such as multiplemyeloma, Waldenström macroglobulinemia, chronic lymphocytic leukemia, or protein-secreting monoclonal gammopathies such as monoclonal gammopathy of undetermined significance (MGUS).
Introduction
Etiology
Epidemiology
Pathophysiology
Histopathology
History and Physical
Evaluation
Treatment / Management
Differential Diagnosis
Prognosis
Complications
Consultations
Deterrence and Patient Education
Pearls and Other Issues
Enhancing Healthcare Team Outcomes
Vertebral Augmentation
VCFs are also associated with a 5-fold increased risk of subsequent adjacent or distant fractures, as well as significantly elevated mortality rates within the first year.
Introduction
Anatomy and Physiology
Indications
Contraindications
Equipment
Personnel
Preparation
Technique or Treatment
Complications
Clinical Significance
Enhancing Healthcare Team Outcomes
Nursing, Allied Health, and Interprofessional Team Interventions
Nursing, Allied Health, and Interprofessional Team Monitoring
1-10 of 9147 articles
1 of 915
}
×
StatPearls Is Part Of The Inc. 5000 Fastest Growing Companies
Become a Contributor
Information
Help & FAQs
About us
Contact us
Privacy Policy
Legal
Refund policy
Editorial Policy
Education
Physician CME
Nurse Practitioner CE
Nurse CE
FREE CME/CE
Contact
Institutional Sales
Feel free to get in touch with us and send a message
support@statpearls.com
Copyright © 2025 StatPearls
Use the mouse wheel to zoom in and out, click and drag to pan the image
×
Get a F ree StatPearls Question-Of-The-Day
For Your Specialty.
Send IT |
17221 | https://www.princeton.edu/~hhalvors/teaching/phi312_f2016/sets-new.pdf | The Category of Sets Hans Halvorson October 16, 2016 1 Introduction The aim of metatheory is to theorize about theories. For simplicity, let’s use M to denote this hypothetical theory about theories. Thus, M is not the object of our study, it is the tool we will use to study other other theories. And yet, it might be helpful to give you a sort of user’s manual for M. That’s the aim of this chapter.
Let’s think about what we hope to do with M.
We want to be able to talk about theories, which are supposed to be “collections of things,” or better, “structured collections of things.” In the first instance, we will think of theo-ries as structured collections of sentences.1 What’s more, sentences themselves are structured collections of symbols. Fortunately, we won’t need to press the inquiry further into the question of the nature of symbols. It will suffice to assume that there are enough symbols, and that there is some primitive notion of identity of symbols. For example, I assume that you understand that “p” is the same symbol as “p”, and is different from “q”.
Fortunately, much of the theory we need was worked out in previous gener-ations. At the beginning of the 20th century, much effort was spent on formu-lating a theory of abstract collections (i.e. sets) that would be adequate to serve as a foundation for all of mathematics. Amazingly, the theory of sets can be formalized in predicate logic with only one symbol of non-logical vocabulary, a binary relation symbol “∈”. In the resulting first-order theory — usually called Zermelo-Frankel set theory — the quantifiers can be thought of as ranging over sets, and the relation symbol ∈can be used to define further notions such as subset, Cartesian products of sets, functions from one set to another, etc..
1I don’t mean to be begging the question here about what a theory is. We could just as well think of a theory as a structured collection of models. And just as sentences can be broken down into smaller components until we reach undefined primitives, so models can be broken down into smaller components until we reach undefined primitives. In both cases, metatheory bottoms out in undefined primitives. We can call these primitives “symbols,” or “sets,” or anything else we want. But the name we choose doesn’t affect the inferences we’re permitted to draw.
1 Set theory can be presented informally (sometimes called “naive set theory”), or formally (“axiomatic set theory”). In both cases, the relation ∈is primitive.
However, we’re going to approach things from a different angle.
We’re not concerned as much with what sets are, but with what we can do with them.
Thus, I’ll present a version of ETCS, the elementary theory of the category of sets.2 Here “elementary theory” indicates that this theory can be formalized in elementary (i.e. first-order) logic. The phrase “category of sets” indicates that this theory treats the collection of sets as a structured object — a category consisting of sets and functions between them.
Axiom 1: Sets is a category Sets is a category, i.e. it consists of two kinds of things: objects, which we call sets, and arrows, which we call functions. To say that Sets is a category means that: 1. Every function f has a domain set d0f and a codomain set d1f. We write f : X →Y to indicate that X = d0f and Y = d1f.
2. Compatible functions can be composed. For example, if f : X →Y and g : Y →Z are functions, then g ◦f : X →Z is a function. (We frequently abbreviate g ◦f as gf.) 3. Composition of functions is associative: h ◦(g ◦f) = (h ◦g) ◦f when all these compositions are defined.
4. For each set X, there is a function 1X : X →X that acts as a left and right identity relative to composition.
Discussion. If our goal was to formalize ETCS rigorously in first-order logic, we might use two-sorted logic, with one sort for sets, and one sort for functions.
The primitive vocabulary of this theory would include symbols ◦, d0, d1, 1, but it would not include the symbol ∈. In other words, containment is not a primitive notion of ETCS.
Set theory makes frequent use of bracket notation, such as: {n ∈N | n > 17}.
These symbols should be read as, “the set of n in N such that n > 17.” Similarly, {x, y} designates a set consisting of elements x and y. But so far, we have no 2All credit to William Lawvere for introducing this approach to set theory. For a gentle introduction, see his Sets for Mathematics.
2 rules for reasoning about such sets. In the following sections, we will gradually add axioms until it becomes clear which rules of inference are permitted vis-a-vis sets.
Suppose for a moment that we understand the bracket notation, and suppose that X and Y are sets. Then given an element x ∈X, and an element y ∈Y , we can take the set {x, {x, y}} as an “ordered pair” consisting of x and y. The pair is ordered because x and y play asymmetric roles: the element x occurs by itself, as well as with the element y. If we could then gather together these ordered pairs into a single set, we would designate it by X ×Y , which we call the Cartesian product of X and Y . The Cartesian product construction should be familiar from high school mathematics. For example, the plane (with x and y coordinates) is the Cartesian product of two copies of the real number line.
In typical presentations of set theory, the existence of product sets is derived from other axioms. Here we will proceed in the opposite direction: we will take the notion of a product set as primitive.
Axiom 2: Cartesian products For any two sets X and Y , there is a set X×Y , and functions π0 : X×Y → X and π1 : X × Y →Y , such that: for any other set Z and functions f : Z →X and g : Z →Y , there is a unique function ⟨f, g⟩: Z →X × Y such that π0⟨f, g⟩= f and π1⟨f, g⟩= g.
Here the angle brackets ⟨f, g⟩are not intended to indicate anything about the internal structure of the denoted function. This notation is chosen merely to indicate that ⟨f, g⟩is uniquely determined by f and g.
The defining conditions of a product set can be visualized by means of an arrow diagram.
Z X × Y X Y f g ⟨f,g⟩ π0 π1 Here each node represents a set, and arrows between nodes represent functions.
The dashed arrow is meant to indicate that the axiom asserts the existence of such an arrow (dependent on the existence of the other arrows in the diagram).
Discussion. There is a close analogy between the defining conditions of a Carte-sian product and the introduction and elimination rules for conjunction. If φ∧ψ is a conjunction, then there are arrows (i.e. derivations) φ∧ψ →φ and φ∧ψ →ψ.
That’s the ∧elimination rule.3 Moreover, for any sentence θ, if there are deriva-3Here I’m intentionally being ambiguous between the relation ⊢and the connective →.
3 tions θ →φ and θ →ψ, then there is a unique derivation θ →φ ∧ψ. That’s the ∧introduction rule.
Definition. Let γ and γ′ be paths of arrows in a diagram that begin and end at the same node. We say that γ and γ′ commute just in case the composition of the functions along γ is equal to the composition of the functions along γ′. We say that the diagram as a whole commutes just in case any two paths between nodes are equal. Thus, for example, the product diagram above commutes.
The functions π0 : X × Y →X and π1 : X × Y →Y are typically called projections of the product. What features do these projections have? Before we say more on that score, let’s pause to talk about features of functions.
You will recall from secondary school that functions can be one-to-one, or onto, or continuous, etc..
For bare sets, there is no notion of continuity of functions, per se. And, with only the first two axioms in place, we do not yet have the means to define what it means for a function to be one-to-one or onto.
Indeed, recall that a function f : X →Y is typically said to be one-to-one just in case f(x) = f(y) implies x = y for any two “points” x and y of X. But we don’t yet have a notion of points!
Nonetheless, there are point-free surrogates for the notions of being one-to-one and onto.
Definition. A function f : X →Y is said to be a monomorphism just in case for any two functions g, h : Z ⇒X, if fg = fh then g = h.
Definition. A function f : X →Y is said to be a epimorphism just in case for any two functions g, h : Y →Z, if gf = hf then g = h.
We will frequently say, “. . . is monic” as shorthand for “. . . is a monomor-phism,” and “. . . is epi” for “. . . is an epimorphism.” Definition. A function f : X →Y is said to be an isomorphism just in case there is a function g : Y →X such that gf = 1X and fg = 1Y . If there is an isomorphism f : X →Y , we say that X and Y are isomorphic, and we write X ∼ = Y .
Exercise 1.1. Show the following: 1. If gf is monic, then f is monic.
2. If fg is epi, then f is an epi.
3. If f and g are monic, then gf is monic.
4. If f and g are epi, then gf is epi.
5. If f is an isomorphism, then f is epi and monic.
The analogy is more clear if we use the symbol →the latter.
4 Proposition 1.2. Suppose that both (W, p0, p1) and (Z, q0, q1) are Cartesian products of X and Y . Then there is an isomorphism f : W →Z such that q0f = p0 and q1f = p1.
Proof. Since (Z, q0, q1) is a Cartesian product, there is a unique function ⟨p0, p1⟩: W →Z such that q0⟨p0, p1⟩= p0 and q1⟨p0, p1⟩= p1. There is a similar func-tion ⟨q0, q1⟩: Z →W. We claim that these functions are inverse to each other.
Indeed, q0 ◦⟨p0, p1⟩◦⟨q0, q1⟩= p0⟨q0, q1⟩= q0, and similarly, q1 ◦⟨p0, p1⟩◦⟨q0, q1⟩= q1. Thus, by the uniqueness clause in the definition of Cartesian products, ⟨p0, p1⟩◦⟨q0, q1⟩= 1Z.
A similar argument shows that ⟨q0, q1⟩◦⟨p0, p1⟩= 1W .
Therefore, f = ⟨p0, p1⟩is the requisite isomorphism.
Definition. If X is a set, we let δ : X →X × X denote the unique arrow ⟨1X, 1X⟩given by the definition of X × X. We call δ the diagonal of X, or the equality relation on X. Note that δ is monic, since π0δ = 1X is monic.
Definition. Suppose that f : W →Y and g : X →Z are functions. Consider the following diagram: W W × X X Y Y × Z Z f q0 q1 f×g g π0 π1 We let f × g = ⟨fq0, gq1⟩be the unique function from W × X to Y × Z such that π0(f × g) = fq0, π1(f × g) = gq1.
Proposition 1.3. Suppose that f : A →B and g : B →C are functions. Then 1X × (g ◦f) = (1X × g) ◦(1X × f).
Proof. The following diagram commutes: X X × A A X X × B B X X × C C 1X 1X×f f 1X 1X×g g Thus, (1X × g) ◦(1X × f) has the defining properties of 1X × (g ◦f).
5 Exercise 1.4. Show that 1X × 1Y = 1X×Y .
Definition. Let X be a fixed set. Then X induces two mappings, as follows: 1. A mapping Y 7→X × Y of sets to sets.
2. A mapping f 7→1X × f of functions to functions. That is, if f : Y →Z is a function, then 1X × f : X × Y →X × Z is a function.
By the previous results, the second mapping is compatible with the composition structure on arrows. In this case, we call the pair of mappings a functor from Sets to Sets.
Exercise 1.5. Suppose that f : X →Y is a function. Show that the following diagram commutes.
X Y X × X Y × Y f δX δY f×f We will now recover the idea that sets consist of points by requiring the existence of a single-point set 1, which plays the privileged role of determining identity of functions.
Axiom 3: Terminal Object There is a set 1 with the following two features: 1. For any set X, there is a unique function X 1 βX In this case, we say that 1 is a terminal object for Sets.
2. For any sets X and Y , and functions f, g : X ⇒Y , if f ◦x = g ◦x for all functions x : 1 →X, then f = g. In this case, we say that 1 is a separator for Sets.
Exercise 1.6. Show that if X and Y are terminal objects in a category, then X ∼ = Y .
Definition. We write x ∈X to indicate that x : 1 →X is a function. In this case, we say that x is an element of X, and we write x ∈X. If f : X →Y is a function, we sometimes write f(x) for f ◦x. With this notation, the statement that 1 is a separator says: f = g is and only if f(x) = g(x), for all x ∈X.
6 Discussion. In ZF set theory, equality between functions is completely deter-mined by equality between sets. Indeed, in ZF, functions f, g : X ⇒Y are defined to be certain subsets of X × Y ; and subsets of X × Y are defined to be equal just in case they contain the same elements. In the ETCS approach to set theory, equality between functions is primitive, and Axiom 3 stipulates that this equality can be detected by checking elements.
Some might see this difference as arguing in favor of ZF: it is more parsimo-nious, because it derives f = g from something more fundamental. However, the defender of ETCS might claim in reply that her theory defines x ∈y from something more fundamental. Which is really more fundamental, equality be-tween arrows (functions), or containment of objects (sets)? We’ll leave that for other philosophers to think about.
Exercise 1.7. Show that any function x : 1 →X is monic.
Proposition 1.8. A set X has exactly one element if and only if X ∼ = 1.
Proof. The terminal object 1 has exactly one element, since there is a unique function 1 →1.
Suppose now that X has exactly one element x : 1 →X. We will show that X is a terminal object. First, for any set Y , there is a function x ◦βY from Y to X. Now suppose that f, g are functions from Y to X such that f ̸= g. By Axiom 3, there is an element y ∈Y such that fy ̸= gy. But then X has more than one element, a contradiction. Therefore, there is a unique function from Y to X, and X is a terminal object.
Proposition 1.9. In any category with Cartesian products, if 1 is a terminal object then X × 1 ∼ = X, for any object X.
Proof. Let π0 : X × 1 →X and π1 : X × 1 →1 be the projections. Then ⟨1X, βX⟩is a function from X to X × 1. We claim that π0 is a left and right inverse for ⟨1X, βX⟩. Since π0 ◦⟨1X, βX⟩= 1X, π0 is a left inverse for ⟨1X, βX⟩. To show that π0 is a right inverse for ⟨1X, βX⟩, we use the fact that 1X×1 is the unique function from X × 1 to itself such that π0 ◦1X×1 = π0, and π1 ◦1X×1 = π1. But we also have, π0 ◦(⟨1X, βX⟩◦π0) = π0, and since π1 is the unique function from X × 1 to 1, π1 ◦(⟨1X, βX⟩◦π0) = π1.
Thus, ⟨1X, βX⟩◦π0 = 1X×1, and π0 is right inverse to ⟨1X, βX⟩.
Proposition 1.10. Let a and b be elements of X × Y . Then a = b if and only if π0(a) = π0(b) and π1(a) = π1(b).
7 Proof. Suppose that π0(a) = π0(b) and π1(a) = π1(b).
By the uniqueness property of the product, there is a unique function c : 1 →X × Y such that π0(c) = π0(a) and π1(c) = π1(a). Since a and b both satisfy this property, a = b.
Note. The previous proposition justifies the use of the notation X × Y = {⟨x, y⟩| x ∈X, y ∈Y }.
Here the identity condition for ordered pairs is given by ⟨x, y⟩= ⟨x′, y′⟩ iff x = x′ and y = y′.
Proposition 1.11. Let (X × Y, π0, π1) be the Cartesian product of X and Y .
If Y is non-empty, then π0 is an epimorphism.
Proof. Suppose that Y is non-empty, and that y : 1 →Y is an element. Let βX : X →1 be the unique map, and let f = y ◦βX. Then ⟨1X, f⟩: X →X × Y such that π0⟨1X, f⟩= 1X. Since 1X is epi, π0 is epi.
Definition. We say that f : X →Y is injective just in case: for any x, y ∈X if f(x) = f(y), then x = y. Written more formally: ∀x∀y[f(x) = f(y) →x = y] Note. “Injective” is synonymous with “one-to-one”.
Exercise 1.12. Let f : X →Y be a function. Show that if f is monic, then f is injective.
Proposition 1.13. Let f : X →Y be a function. If f is injective then f is monic.
Proof. Suppose that f is injective, and let g, h : A →X be functions such that f ◦g = f ◦h. Then for any a ∈A, we have f(g(a)) = f(h(a)). Since f is injective, g(a) = h(a). Since a was an arbitrary element of A, Axiom 3 entails that g = h. Therefore, f is monic.
Definition. Let f : X →Y be a function. We say that f is surjective just in case: for each y ∈Y , there is an x ∈X such that f(x) = y. Written formally: ∀y∃x[f(x) = y] And in diagrammatic form: 1 X Y x y f Note. “Surjective” is synonymous with “onto”.
8 Exercise 1.14. Show that if f : X →Y is surjective then f is an epimorphism.
We will eventually establish that all epimorphisms are surjective. However, first we need a couple more axioms. Given a set X, and some definable condition φ on X, we would like to be able to construct a subset consisting of those elements in X that satisfy φ. The usual notation here is {x ∈X | φ(x)}, which we read as, “the x in X such that φ(x).” But the important question is: which features φ do we allow? As an example of a definable condition φ, consider the condition of, “having the same value under the functions f and g,” that is, φ(x) just in case f(x) = g(x). We call the subset {x ∈X | f(x) = g(x)} the equalizer of f and g.
Axiom 4: Equalizers Suppose that f, g : X ⇒Y are functions. Then there is a set E and a function m : E →X with the following property: fm = gm, and for any other set F and function h : F →X, if fh = gh, then there is a unique function k : F →E such that mk = h.
E X Y F m f g k h We call (E, m) an equalizer of f and g.
Exercise 1.15. Suppose that (E, m) and (E′, m′) are both equalizers of f and g. Show that there is an isomorphism k : E →E′.
Definition. Let A, B, C be sets, and let f : A →C and g : B →C be functions.
We say that g factors through f just in case there is a function h : B →A such that fh = g.
Exercise 1.16. Let f, g : X ⇒Y , and let m : E →X be the equalizer of f and g. Let x ∈X. Show that x factors through m if and only if f(x) = g(x).
Proposition 1.17. In any category, if (E, m) is the equalizer of f and g. Then m is a monomorphism.
Proof. Let x, y : Z →E such that mx = my. Since fmx = gmx, there is a unique arrow z : Z →E such that mz = mx. Since both mx = mx and my = mx, it follows that x = y. Therefore, m is monic.
Definition. Let f : X →Y be a function.
We say that f is a regular monomorphism just in case f is the equalizer of a pair of arrows g, h : Y ⇒Z.
9 Exercise 1.18. Show that if f is an epimorphism and a regular monomorphism, then f is an isomorphism.
In other approaches to set theory, one uses ∈to define a relation of inclusion between sets: X ⊆Y ⇐ ⇒∀x(x ∈X →x ∈Y ).
We cannot define this exact notion in our approach since, for us, elements are attached to some particular set. However, for typical applications, every set under consideration will come equipped with a canonical monomorphism m : X →U, where U is some fixed set.
Thus, it will suffice to consider a relativized notion.
Definition. A subobject or subset of a set X is a set B and a monomorphism m : B →X, called the inclusion of B in X. Given two subsets m : B →X and n : A →X, we say that B is a subset of A (relative to X), written B ⊆X A just in case there is a function k : B →A such that nk = m. When no confusion can result, we omit X and write B ⊆A.
Let m : B →Y be monic, and let f : X →Y . Consider the following diagram: f −1(B) X × B Y k fp0 mp1 where f −1(B) is defined as the equalizer of fπ0 and mp1. Intuitively, we have f −1(B) = {⟨x, y⟩∈X × B | f(x) = y} = {⟨x, y⟩∈X × Y | f(x) = y and y ∈B} = {x ∈X | f(x) ∈B}.
Now we verify that f −1(B) is a subset of X.
Proposition 1.19. The function p0k : f −1(B) →X is monic.
Proof. To simplify notation, let E = f −1(B).
Let x, y : Z →E such that p0kx = p0ky. Then fp0kx = fp0ky, and hence mp1kx = mp1ky. Since m is monic, p1kx = p1ky. Thus, kx = ky. (The identity of a function into X × B is determined by the identity of its projections onto X and B.) Since k is monic, x = y. Therefore, p0k is monic.
Definition. Let m : B →X be a subobject, and let x : 1 →X. We say that x ∈B just in case x factors through m as follows: B 1 X m x Proposition 1.20. Let A ⊆B ⊆X. If x ∈A then x ∈B.
10 Proof.
A B 1 X X k x 1X Recall that x ∈f −1(B) means: x : 1 →X factors through the inclusion of f −1(B) in X. Consider the following diagram: 1 f −1(B) B X Y x p m∗ m f First look just at the lower-right square. This square commutes, in the sense that following the arrows from f −1(B) clockwise gives the same answer as following the arrows from f −1(B) counterclockwise. The square has another property: for any set Z, and functions g : Z →X and h : Z →B, there is a unique function k : Z →f −1(B) such that m∗k = g and pk = h. [To understand better, draw a picture!] When a commuting square has this property, then it’s said to be a pullback.
Proposition 1.21. Let f : X →Y , and let B ⊆Y . Then x ∈f −1(B) if and only if f(x) ∈B.
Proof. If x ∈f −1(B), then there is an arrow ˆ x : 1 →f −1(B) such that m∗ˆ x = x.
Thus, fx = mpˆ x, which entails that the element f(x) ∈Y factors through B, i.e. f(x) ∈B. Conversely, if f(x) ∈B, then since the square is a pullback, x : 1 →X factors through f −1(B), i.e. x ∈f −1(B).
Definition. Given functions f : X →Z and g : Y →Z, we define X ×Z Y = {⟨x, y⟩∈X × Y | f(x) = g(y)}.
In other words, X ×Z Y is the equalizer of fπ0 and gπ1. The set X ×Z Y , together with the functions π0 : X ×Z Y →X and π1 : X ×Z Y →Y is called the pullback of f and g, alternatively the fibered product of f and g.
The pullback of f and g has the following distinguishing property: for any set A, and functions h : A →X and k : A →Y such that fh = gk, there is a 11 unique function j : A →X ×Z Y such that π0j = h and π1j = k.
A X ×Z Y Y X Z h k π0 π1 g f The following is an interesting special case of a pullback.
Definition. Let f : X →Y be a function. Then the kernel pair of f is the pullback X ×Y X, with projections p0 : X ×Y X →X and p1 : X ×Y X →X.
Intuitively, X ×Y X is the relation, “having the same image under f.” Written in terms of braces, X ×Y X = {⟨x, x′⟩∈X × X | f(x) = f(x′)}.
In particular, f is injective if and only if, “having the same image under f” is coextensive with the equality relation on X. That is, X×Y X = {⟨x, x⟩| x ∈X}, which is the diagonal of X.
Exercise 1.22. Let f : X →Y be a function, and let p0, p1 : X ×Y X ⇒X be the kernel pair of f. Show that the following are equivalent: 1. f is a monomorphism.
2. p0 and p1 are isomorphisms.
3. p0 = p1.
2 Truth values and subsets Axiom 5: Truth-value object There is a set Ωwith the following features: 1. Ωhas exactly two elements, which we denote by t : 1 →Ωand f : 1 →Ω.
2. For any set X, and subobject m : B →X, there is a unique function χB : X →Ωsuch that the following diagram is a pullback: B 1 X Ω m t χB 12 In other words, B = {x ∈X | χB(x) = t}.
Intuitively speaking, the first part of Axiom 5 says that Ωis a two-element set, say Ω= {f, t}.
The second part of Axiom 5 says that Ωclassifies the subobjects of a set X. That is, each subobject m : B →X corresponds to a unique characteristic function χB : X →{f, t} such that χB(x) = t if and only if x ∈B.
The terminal object 1 is a set with one element. Thus, it should be the case that 1 has two subsets, the empty set and 1 itself.
Proposition 2.1. The terminal object 1 has exactly two subobjects.
Proof. By Axiom 5, subobjects of 1 correspond to functions 1 →Ω, that is, to elements of Ω. By Axiom 5, Ωhas exactly two elements. Therefore, 1 has exactly two subobjects.
Obviously the function t : 1 →Ωcorresponds to the subobject id1 : 1 →1.
Can we say more about the subobject m : A →1 corresponding to the function f : 1 →Ω? Intuitively, we should have A = {x ∈1 | t = f}, in other words, the empty set. To confirm this intuition, consider the pullback diagram: 1 A 1 1 Ω x m k t f Note that m and k must both be the unique function from A to 1, that is m = k = βA. Suppose that A is nonempty, i.e. there is a function x : 1 →A.
Then βA ◦x is the identity 1 →1, and since the square commutes, t = f, a contradiction. Therefore, A has no elements.
Exercise 2.2. Show that Ω× Ωhas exactly four elements.
We now use the existence of a truth-value object in Sets to demonstrate further properties of functions.
Definition. Let f : X →Y be a function. Then f is said to be a regular monomorphism just in case there are functions g, h : Y ⇒Z such that f is the equalizer of g and h.
Exercise 2.3. Show that in any category, if f : X →Y is a regular monomor-phism, then f is monic.
Proposition 2.4. Every monomorphism between sets is regular, i.e. an equal-izer of a pair of parallel arrows.
13 Proof. Let m : B →X be monic. By Axiom 5, the following is a pullback diagram: B 1 X Ω m t χB A straightforward verification shows that m is the equalizer of X βX →1 t →Ωand χB : X →Ω. Therefore, m is regular monic.
Students with some background in mathematics might assume that if a func-tion f : X →Y is both a monomorphism and an epimorphism, then it is an isomorphism. However, that isn’t true in all categories! [For example, it’s not true in the category of monoids.] Nonetheless, Sets is a special category, and in this case we have the result: Proposition 2.5. In Sets, if a function is both a monomorphism and an epi-morphism, then it is an isomorphism.
Proof. In any category, if m is regular monic and epi, then m is an isomorphism (Exercise 1.18).
Definition. Let f : X →Y be a function, and let y ∈Y . The fiber over y is the subset f −1{y} of X given by the following pullback: f −1{y} 1 X Y y f Proposition 2.6. Let p : X →Y . If p is not a surjection, then there is a y0 ∈Y such that the fiber p−1{y0} is empty.
Proof. Since p is not a surjection, there is a y0 ∈Y such that for all x ∈X, p(x) ̸= y0. Now consider the pullback: 1 p−1{y0} 1 X Y z m y0 p If there were a morphism z : 1 →p−1{y0}, then we would have p(m(z)) = y0, a contradiction. Therefore, p−1{y0} is empty.
Proposition 2.7. In Sets, epimorphisms are surjective.
14 Proof. Suppose that p : X →Y is not a surjection. Then there is a y0 ∈Y such that for all x ∈X, p(x) ̸= y0. Since 1 is terminal, the morphism y0 : 1 →Y is monic. Consider the following diagram: 1 p−1{y0} 1 1 X Y Ω x y0 t p g Here g is the characteristic function of {y0}; by Axiom 5, g is the unique function that makes the right hand square a pullback. Let x ∈X be arbitrary. If we had g(p(x)) = t, then there would be an element x′ ∈p−1{y0}, in contradiction with the fact that the latter is empty (Proposition 2.6). By Axiom 5, either g(p(x)) = t or g(p(x)) = f; therefore, g(p(x)) = f. Now let h be the composite Y →1 f →Ω. Then, for any x ∈X, we have h(p(x)) = f. Since g ◦p and h ◦p agree on arbitrary x ∈X, we have g ◦p = h ◦p. Since g ̸= h, it follows that p is not an epimorphism.
In a general category, there is no guarantee that an epimorphism pulls back to an epimorphism. However, in Sets, we have the following: Proposition 2.8. In Sets, the pullback of an epimorphism is an epimorphism.
Proof. Suppose that f : Y →Z is epi, and let x ∈X. Consider the pullback diagram: 1 ∗ Y X Z x y q0 q1 f g By Proposition 2.7, f is surjective. In particular, there is a y ∈Y such that f(y) = g(x). Since the diagram is a pullback, there is a unique ⟨x, y⟩: 1 →∗ such that q0⟨x, y⟩= x and q1⟨x, y⟩= y. Therefore, q0 is surjective, and hence epi.
Proposition 2.9. If f : X →Y and g : W →Z are epimorphisms, then so is f × g : X × W →Y × Z.
Proof. Since f × g = (f × 1) ◦(1 × g), it will suffice to show that f × 1 is epi 15 when f is epi. Now, the following diagram is a pullback: X × W X Y × W Y p0 f×1 f p0 By Proposition 2.8, if f is epi, then f × 1 is epi.
Suppose that f : X →Y is a function, and that p0, p1 : X ×Y X ⇒X is the kernel pair of f.
Suppose also that h : E →Y is a function, that q0, q1 : E ×Y E ⇒E is the kernel pair of h, and that g : X ↠E is an epimorphism. Then there is a unique function b : X ×Y X →E ×Y E, such that q0b = gp0 and q1b = gp1.
X ×Y X X Y E ×Y E E b p0 p1 g f q0 q1 h An argument similar to the one above shows that b is an epimorphism. We will use this fact below to describe the properties of epimorphisms in Sets.
3 Relations 3.1 Equivalence relations and equivalence classes A relation R on a set X is a subset of X × X; i.e. a set of ordered-pairs. A relation is said to be an equivalence relation just in case it is reflexive, symmetric, and transitive. One particular way that equivalence relations on X arise is from functions with X as domain: given a function f : X →Y , let say that ⟨x, y⟩∈R just in case f(x) = f(y). [Sometimes we say that, “x and y lie in the same fiber over Y .”] Then R is an equivalence relation on X.
Given an equivalence relation R on X, and some element x ∈X, let [x] = {y ∈X | ⟨x, y⟩∈R} denote the set of all elements of X that are equivalent to X. We say that [x] is the equivalence class of x. It’s straightforward to show that for any x, y ∈X, either [x] = [y] or [x] ∩[y] = ∅. Moreover, for any x ∈X, we have x ∈[x]. Thus the equivalence classes form a partition of X into disjoint subsets.
We’d like now to be able to talk about the set of these equivalence classes, i.e. something that might intuitively be written as {[x] | x ∈X}. The follow-ing axiom guarantees the existence of such a set, called X/R, and a canonical mapping q : X →X/R that takes each element x ∈X to its equivalence class [x] ∈X/R.
Our next axiom guarantees the existence of the set of equivalence classes.
16 Axiom 6: Equivalence classes Let R be an equivalence relation on X. Then there is a set X/R, and a function q : X →X/R with the properties: 1. ⟨x, y⟩∈R if and only if q(x) = q(y).
2. For any set Y and function f : X →Y that is constant on equivalence classes, there is a unique function f : X/R →Y such that f ◦q = f.
X Y X/R f q f Here f is constant on equivalence classes just in case f(x) = f(y) whenever ⟨x, y⟩∈R.
An equivalence relation R can be thought of as a subobject of X × X, i.e.
a subset of ordered pairs. Accordingly, there are two functions p0 : R →X and p1 : R →X given by: p0⟨x, y⟩= x and p1⟨x, y⟩= y. Then condition (1) in the above axiom says that q ◦p0 = q ◦p1. And condition (2) says that for any function f : X →Y such that f ◦p0 = f ◦p1, there is a unique function f : X/R →Y such that f ◦q = f. In this case, we say that q is a coequalizer of p0 and p1.
Exercise 3.1. Show that in any category, coequalizers are unique up to iso-morphism.
Exercise 3.2. Show that in any category, a coequalizer is an epimorphism.
Exercise 3.3. For a function f : X →Y , let R = {⟨x, y⟩∈X × X | f(x) = f(y)}. That is, R is the kernel pair of f. Show that R is an equivalence relation.
Definition. A function f : X →Y is said to be a regular epimorphism just in case f is a coequalizer.
Exercise 3.4. Show that in any category, if f : X →Y is both a monomorphism and a regular epimorphism, then f is an isomorphism.
Proposition 3.5. Every epimorphism in Sets is regular. In particular, every epimorphism is the coequalizer of its kernel pair.
Proof. Let f : X →Y be an epimorphism. Let p0, p1 : X ×Y X ⇒X be the kernel pair of f. By Axiom 6, the coequalizer g : X →E of p0 and p1 exists; and since f also coequalizes p0 and p1, there is a unique function m : E →Y 17 such that f = mg.
X ×Y X X Y E ×Y E E p0 p1 b f g q0 q1 m Here E ×Y E is the kernel pair of m. Since mgp0 = fp0 = fp1 = mgp1, there is a unique function b : X ×Y X →E ×Y E such that gp0 = q0b and gp1 = q1b.
By the considerations at the end of the previous section, b is an epimorphism.
Furthermore, q0b = gp0 = gp1 = q1b, and therefore q0 = q1. By Exercise 1.22, m is a monomorphism. Since f = mg, and f is epi, m is also epi. Therefore, by Proposition 2.5, m is an isomorphism.
This last proposition actually shows that Sets is what is known as a regular category. In general, a category C is said to be regular just in case it has all finite limits, if coequalizers of kernel pairs exist, and if regular epimorphisms are stable under pullback. Now, it’s known that if a category has products and equalizers, then it has all finite limits. Thus Sets has all finite limits. Our most recent axiom says that Sets has coequalizers of kernel pairs. And finally, all epimorphisms in Sets are regular, and epimorphisms in Sets are stable under pullback; therefore, regular epimorphisms are stable under pullback.
Regular categories have several nice features that will prove quite useful. In the remainder of this section, we will discuss one such features: factorization of functions into a regular epimorphism followed by a monomorphism.
3.2 The epi-monic factorization Let f : X →Y be a function, and let p0, p1 : X ×Y X ⇒X be the kernel pair of f. By Axiom 6, the kernel pair has a coequalizer g : X ↠E. Since f also coequalizes p0 and p1, there is a unique function m : E →Y such that f = mg.
X ×Y X X Y E p0 p1 f g m An argument similar to the one in Proposition 3.5 shows that m is a monomor-phism. Thus, (E, m) is a subobject of Y , which we call the image of X under f, and we write E = f(X). The pair (g, m) is called the epi-monic factoriza-tion of f. Since epis are surjections, and monics are injections, (g, m) can also be called the surjective-injective factorization.
18 Definition. Suppose that A is a subset of X, in particular, n : A →X is monic.
Then f ◦n : A →Y , and we let f(A) denote the image of A under f ◦n.
A f(A) X Y n f We also use the suggestive notation f(A) = ∃f(A) = {y ∈Y | ∃x ∈A.f(x) = y}.
Proposition 3.6. Let f : X →Y be a function, and let A be a subobject of X.
The image f(A) is the smallest subobject of Y through which f factors.
Proof. Let e : X →Q and m : Q →Y be the epi-monic factorization of f.
Suppose that n : B →Y is a subobject, and that f factors through n, say f = ng. Consider the following diagram.
E X Y Q B p0 p1 f e g k n Then ngp0 = fp0 = fp1 = ngp1, since p0, p1 is the kernel pair of f. Since n is monic, gp0 = gp1, i.e. g coequalizes p0 and p1. Since e : X →Q is the coequalizer of p0 and p1, there is a unique function k : Q →B such that ke = g.
By uniqueness of the epi-monic factorization, nk = m. Therefore, Q ⊆B.
Proposition 3.7. For any A ⊆X and B ⊆Y , we have A ⊆f −1(B) if and only if ∃f(A) ⊆B.
Proof. Suppose first that A ⊆f −1(B), in particular that k : A →f −1(B).
Consider the following diagram: A ∃f(A) f −1(B) B X Y k e j m∗ m f By definition, je is the epi-mono factorization of fm∗k. Since fm∗k also factors through m : B →Y , we have ∃f(A) ⊆B, by Proposition 3.6.
Suppose now that ∃f(A) ⊆B. Using the fact that the lower square in the diagram is a pullback, we see that there is an arrow k : A →f −1(B) such that m∗k is the inclusion of A in X. That is, A ⊆f −1(B).
Exercise 3.8. Use the previous result to show that A ⊆f −1(∃f(A)), for any subset A of X.
19 3.3 Functional relations Definition. A relation R ⊆X × Y is said to be functional just in case for each x ∈X there is a unique y ∈Y such that ⟨x, y⟩∈R.
Definition. Suppose that f : X →Y is a function. We let graph(f) = {⟨x, y⟩| f(x) = y}.
Exercise 3.9. Show that graph(f) is a functional relation.
The following result is helpful for establishing the existence of arrows f : X →Y .
Proposition 3.10. Let R ⊆X × Y be a functional relation. Then there is a unique function f : X →Y such that R = graph(f).
The proof of this result is somewhat complicated, and we omit it (for the time being).
4 Colimits Axiom 7: Coproducts For any two sets X, Y , there is a set X ⨿Y and functions i0 : X →X ⨿Y and i1 : Y →X ⨿Y with the feature that for any set Z, and functions f : X →Z and g : Y →Z, there is a unique function f ⨿g : X ⨿Y →Z such that (f ⨿g) ◦i0 = f and (f ⨿g) ◦i1 = g.
Z X ⨿Y X Y f⨿g f i0 g i1 We call X ⨿Y the coproduct of X and Y . We call i0 and i1 the copro-jections of the coproduct.
Intuitively speaking, the coproduct X ⨿Y is the disjoint union of the sets X and Y . What we mean by “disjoint” here is that if X and Y share elements in common (which doesn’t make sense in our framework, but does in some frameworks), then these elements are dis-identified before the union is taken.
For example, in terms of elements, we could think of X ⨿Y as consisting of elements of the form ⟨x, 0⟩, with x ∈X, and elements of the form ⟨y, 1⟩, with y ∈Y . Thus, if x is contained in both X and Y , then X ⨿Y contains two separate copies of x, namely ⟨x, 0⟩and ⟨x, 1⟩.
20 We now show that that the inclusions i0 : X →X ⨿Y and i1 : Y →X ⨿Y do, in fact, have disjoint images.
Proposition 4.1. Coproducts in Sets are disjoint. In other words, if i0 : X → X ⨿Y and i1 : Y →X ⨿Y are the coprojections, then i0(x) ̸= i1(y) for all x ∈X and y ∈Y .
Proof. Suppose for reductio ad absurdum that i0(x) = i1(y). Let g : X →Ω be the unique map that factors through t : 1 →Ω. Let h : Y →Ωbe the unique map that factors through f : 1 →Ω. By the universal property of the coproduct, there is a unique function g ⨿h : X ⨿Y →Ωsuch that (g ⨿h)i0 = g and (g ⨿h)i1 = h. Thus, we have t = g(x) = (g ⨿h)i0x = (g ⨿h)i1y = h(y) = f, a contradiction.
Therefore, i0(x) ̸= i1(y), and the ranges of i0 and i1 are disjoint.
Proposition 4.2. The coprojections i0 : X →X ⨿Y and i1 : Y →X ⨿Y are monomorphisms.
Proof. We will show that i0 is monic; the result then follows by symmetry.
Suppose first that X has no elements.
Then i0 is trivially injective, hence monic by Proposition 1.13. Suppose now that X has an element x : 1 →X.
Let g = x ◦βY , where βY : Y →1. Then (1X ⨿g)i0 = 1X, and Exercise 1.1 entails that i0 is monic.
Proposition 4.3. The coprojections are jointly surjective. That is, for each z ∈X ⨿Y , either there is an x ∈X such that z = i0(x), or there is a y ∈Y such that z = i1(y).
Proof. Suppose for reductio ad absurdum that z is neither in the image of i0 nor in the image of i1. Let g : (X ⨿Y ) →Ωbe the characteristic function of {z0}. Then for all x ∈X, g(i0(x)) = f. And for all y ∈Y , g(i1(y)) = f. Now let h : (X ⨿Y ) →Ωbe the constant f function, i.e. h(z) = f for all z ∈X ⨿Y .
Then gi0 = hi0 and gi1 = hi1. Since functions from X ⨿Y are determined by their coprojections, g = h, a contradiction. Therefore, all z ∈X ⨿Y are either in the range of i0 or in the range of i1.
Proposition 4.4. The function t ⨿f : 1 ⨿1 →Ωis an isomorphism.
Proof. Consider the diagram: Ω 1 ⨿1 1 1 t⨿f t i0 f i1 21 Then t ⨿f is monic since every element of 1 ⨿1 factors through either i0 or i1 (Proposition 4.3), and since t ̸= f. Furthermore, t ⨿f is epi since t and f are the only elements of Ω. By Proposition 2.5, t ⨿f is an isomorphism.
Proposition 4.5. Let X be a set, and let B be a subset of X. Then the inclusion B ⨿X\B →X is an isomorphism.
Proof. Using the fact that Ωis Boolean, for every x ∈X, either x ∈B or x ∈ X\B. Thus the inclusion B ⨿X\B →X is a bijection, hence an isomorphism.
Axiom 8: Empty set There is a set ∅with the following properties: 1. For any set X, there is a unique function ∅ X αX In this case, we say that ∅is an initial object in Sets.
2. ∅is empty, i.e. there is no function x : 1 →∅.
Exercise 4.6. Show that in any category with coproducts, if A is an initial object, then X ⨿A ∼ = X, for any object X.
Proposition 4.7. Any function f : X →∅is an isomorphism.
Proof. Since ∅has no elements, f is trivially surjective. We now claim that X has no elements. Indeed, if x : 1 →X is an element of X, then f(x) is an element of ∅. Since X has no elements, f is trivially injective. By Proposition 2.5, f is an isomorphism.
Proposition 4.8. A set X has no elements if and only if X ∼ = ∅.
Proof. By Axiom 8, the set ∅has no elements. Thus if X ∼ = ∅, then X has no elements.
Suppose now that X has no elements. Since ∅is an initial object, there is a unique arrow αX : ∅→X. Since X has no elements, αX is trivially surjective.
Since ∅has no elements, αX is trivially injective. By Proposition 2.5, f is an isomorphism.
22 5 Sets of functions and sets of subsets One distinctive feature of the category of sets is its ability to model almost any mathematical construction. One such construction is gathering together old things into a new set. For example, given two sets A and X, can we form a set XA of all functions from A to X? Similarly, given a set X, can we form a set PX of all subsets of X?
As usual, we won’t be interested in hard questions about what it takes to be a set. Rather, we’re interested in hypothetical questions: if such a set existed, what would it be like? The crucial features of XA seem to be captured by the following axiom: Axiom 9: Exponential objects Suppose that A and X are sets. Then there is a set XA, and a function eX : A × XA →X such that for any set Z, and function f : A × Z →X, there is a unique function f ♯: Z →XA such that eX ◦(1A × f ♯) = f.
A × XA X A × Z eX 1×f ♯ f The set XA is called an exponential object, and the function f ♯: Z → XA is called the transpose of f : A × Z →X.
The way to remember this axiom is to think of Y X as the set of functions from X to Y , and to think of e : X × Y X →Y as a meta-function that takes an element f ∈Y X, and an element x ∈X, and returns the value e(f, x) = f(x).
For this reason, e : X ×Y X →Y is sometimes called the evaluation function.
Note further that if f : X × Z →Y is a function, then for each z ∈Z, f(−, z) is a function from X →Y . In other words, f corresponds uniquely to a function from Z to functions from Y to X. This latter function is the transpose f ♯: Z →Y X of f.
We have written Axiom 9 in first-order fashion, but it might help to think of it as stating that there is a one-to-one correspondence between two sets: hom(X × Z, Y ) ∼ = hom(Z, Y X), where hom(A, B) is though of as the set of functions from A to B. As a particular 23 case, when Z = 1, the terminal object, we have hom(X, Y ) ∼ = hom(1, Y X).
In other words, elements of Y X in the “internal sense” correspond to elements of hom(X, Y ) in the “external sense.” Consider now the following special case of the above construction: A × XA XA A × XA eX 1×e♯ eX Thus, e♯ X = 1XA.
Definition. Suppose that g : Y →Z is a function. We let gA : XA →Y A denote the transpose of the function: A × Y A Y Z eY g That is, gA = (g ◦eY )♯, and the following diagram commutes: A × ZA Z A × Y A Y eZ 1×gA eY g Proposition 5.1. Let f : A × X →Y and g : Y →Z be functions. Then (g ◦f)♯= gA ◦f ♯.
Proof. Consider the following diagram: A × ZA Z A × Y A A × X Y eZ 1×gA eY 1×(g◦f)♯ 1×f ♯ f g The bottom triangle commutes by the definition of f ♯. The upper right triangle commutes by the definition of gA.
And the outer square commutes by the definition of (g ◦f)♯. It follows that eZ ◦(1 × (gA ◦f ♯)) = g ◦f, and hence gA ◦f ♯= (g ◦f)♯.
24 Consider now the following particular case: A × (A × X)A A × X A × X e 1×p 1 Here p = 1♯is the unique function such that e(1A × p) = 1A×X. Intuitively, we can think of p as the function that takes an element x ∈X, and returns the function px : A →A × X such that px(a) = ⟨a, x⟩. Thus, (1 × p)⟨a, x⟩= ⟨a, px⟩, and e(1 × p)⟨a, x⟩= px(a) = ⟨a, x⟩.
Definition. Suppose that f : Z →XA is a function. We define f ♭: Z ×A →X to be the following composite function: A × Z A × XA X 1×f eX Proposition 5.2. Let f : X →Y and g : Y →ZA be functions.
Then (g ◦f)♭= g♭◦(1A × f).
Proof. By definition, (g ◦f)♭= eX ◦(1 × (g ◦f)) = eX ◦(1 × g) ◦(1 × f) = g♭◦(1 × f).
Proposition 5.3. For any function f : A × Z →X, we have (f ♯)♭= f.
Proof. By the definitions, we have (f ♯)♭= eX ◦(1 × f ♯) = f.
Proposition 5.4. For any function f : Z →XA, we have (f ♭)♯= f.
Proof. By definition, (f ♭)♯is the unique function such that eX ◦(1×(f ♭)♯) = f ♭.
But also eX ◦(1 × f) = f ♭. Therefore, (f ♭)♯= f.
Proposition 5.5. For any set X, we have X1 ∼ = X.
Proof. Let e : 1 × X1 →X be the evaluation function from Axiom 9. We claim that e is a bijection. Recall that there is a natural isomorphism i : 1 × 1 →1.
Consider the following diagram: 1 × X1 X 1 × 1 1 e 1×x♯ i x That is, for any element x : 1 →X, there is a unique element x♯of X1 such that e(1 × x♯) = x. Thus, e is a bijection, and X ∼ = 1 × X1 is isomorphic to X.
25 Proposition 5.6. For any set X, we have X∅∼ = 1.
Proof. Elements of X∅correspond functions ∅→X. There is exactly one such function, hence X∅has exactly one element x : 1 →X∅. Thus, x is a bijection, and X∅∼ = 1.
Proposition 5.7. For any sets A, X, Y , we have (X × Y )A ∼ = XA × Y A.
Proof. An elegant proof of this proposition would note that (−)A is a func-tor, and is right adjoint to the functor A × (−). Since right adjoints preserve products, (X × Y )A ∼ = XA × Y A. Nonetheless, we will go into further detail.
By uniqueness of Cartesian products, it will suffice to show that (X × Y )A is a Cartesian product of XA and Y A, with projections πA 0 and πA 1 . Let Z be an arbitrary set, and let f : Z →XA and g : Z →Y A be functions. Now take γ = ⟨f ♭, g♭⟩♯, where f ♭: A × Z →X and g♭: A × Z →Y .
Z (X × Y )A XA Y A f g γ πA 0 πA 1 We claim that πA 0 γ = f and πA 1 γ = g. Indeed, πA 0 ◦γ = πA 0 ◦⟨f ♭, g♭⟩♯= (π0 ◦⟨f ♭, g♭⟩)♯= (f ♭)♯= f.
Thus, πA 0 γ = f, and similarly, πA 1 γ = g.
Suppose now that h : Z →(X ×Y )A such that πA 0 h = f and πA 1 h = g. Then f = πA 0 ◦(h♭)♯= (π0 ◦h♭)♯.
Hence, π0 ◦h♭= f ♭, and similarly, π1 ◦h♭= g♭. That is, h♭= ⟨f ♭, g♭⟩, and h = ⟨f ♭, g♭⟩♯= γ.
Proposition 5.8. For any sets A, X, Y , we have A × (X ⨿Y ) ∼ = (A × X) ⨿ (A × Y ).
Proof. Even without Axiom 9, there is always a canonical function from (A × X) ⨿(A × Y ) to A × (X ⨿Y ), namely φ := (1A × i0) ⨿(1A × i1), where i0 and i1 are the coproduct inclusions of X ⨿Y . That is, φ ◦j0 = 1A × i0, and φ ◦j1 = 1A × i1, where j0 and j1 are the coproduct inclusions of (A × X) ⨿(A × Y ).
26 X ⨿Y X A × (X ⨿Y ) Y A × X (A × X) ⨿(A × Y ) A × Y i0 p1 i1 1A×i0 j0 q1 φ j1 1A×i1 r1 We will show that Axiom 9 entails that φ is invertible.
Let g : A×(X ⨿Y ) →A×(X ⨿Y ) be the identity, i.e. g = 1A×(X⨿Y ). Then g♯: X ⨿Y →(A × (X ⨿Y ))A is the unique function such that e(1A × g♯) = g.
By Proposition 5.2, (g♯◦i0)♭= g ◦(1A × i0) = 1A × i0.
Similarly, (g♯◦i1)♭= 1A × i1. Thus, g♯= (1A × i0)♯⨿(1A × i1)♯.
We also have (1A × i0)♯= (φ ◦j0)♯= φA ◦j♯ 0, and (1A × i1)♯= φA ◦j♯ 1. Hence g♯= (φA ◦j♯ 0) ⨿(φA ◦j♯ 1) = φA ◦(j♯ 0 ⨿j♯ 1).
Now for the inverse of φ, we take ψ = (j♯ 0 ⨿j♯ 1)♭.
((A × X) ⨿(A × Y ))A X ⨿Y X Y j♯ 0⨿j♯ 1 i0 j♯ 0 j♯ 1 i1 It then follows that (φ ◦ψ)♯= φA ◦(j♯ 0 ⨿j♯ 1) = g♯, and therefore φ ◦ψ = 1A×(X⨿Y ). Similarly, (ψ ◦φ ◦j0)♯= ψA ◦(φ ◦j0)♯= ψA ◦g♯◦i0 = ψ♯◦i0 = j♯ 0.
Thus, ψ ◦φ ◦j0 = j0, and a similar calculation shows that ψ ◦φ ◦j1 = j1. It follows that ψ ◦φ = 1(A×X)⨿(A×Y ). Thus, ψ is a two-sided inverse for φ, and A × (X ⨿Y ) is isomorphic to (A × X) ⨿(A × Y ).
Definition (Powerset). If X is a set, we let PX = ΩX.
Intuitively speaking, PX is the set of all subsets of X. For example, if X = {a, b}, then PX = {∅, {a}, {b}, {a, b}}. More rigorously, each element of ΩX corresponds to a function 1 →ΩX, which in turn corresponds to a function X ∼ = 1 × X →Ω, which corresponds to a subobject of X. Thus, we can think of PX as another name for Sub(X), although Sub(X) is not really an object in Sets.
27 6 Cardinality Summary: When mathematics was rigorized in the 19th century, one of the important advances was a rigorous definition of “infinite set.” It came as some-thing of a suprise that there are different sizes of infinity, and that some infinite sets (e.g. the real numbers) are strictly larger than the natural numbers. In this section, we define “finite” and “infinite.” We then add an axiom which says there is a specific set N that behaves like the natural numbers; in particular, N is infinite. Finally, we show that the powerset PX of a set X is always larger than X.
Definition. A set X is said to be finite if and only if for any function m : X → X, if m is monic, then m is an isomorphism. A set X is said to be infinite if and only if there is a function m : X →X that is monic and not surjective.
We are already guaranteed the existence of finite sets: for example, the terminal object 1 is finite, as is the subobject classifier Ω.
But the axioms we have stated thus far do not guarantee the existence of any infinite sets. We won’t know that there are infinite sets until we add the “natural number object” axiom below.
Definition. We say that Y is at least as large as X, written |X| ≤|Y |, just in case there is a monomorphism m : X →Y .
Proposition 6.1. |X| ≤|X ⨿Y |.
Proof. Proposition 4.2 shows that i0 : X →X ⨿Y is monic.
Proposition 6.2. If Y is non-empty, then |X| ≤|X × Y |.
Proof. Consider the function ⟨1X, f⟩: X →X × Y , where f : X →1 →Y .
Axiom 10: Natural Number Object There is an object N, and functions z : 1 →N and s : N →N such that for any other set X with functions q : 1 →X and f : X →X, there is a unique function u : N →X such that the following diagram commutes: 1 N N X X z q s u u f The set N is called a natural number object.
28 Exercise 6.3. Let N ′ be a set, and let z′ : 1 →N ′ and s′ : N ′ →N ′ be functions that satisfy the conditions in the axiom above.
Show that N ′ is isomorphic to N.
Proposition 6.4. z ⨿s : 1 ⨿N →N is an isomorphism.
Proof. Let i0 : 1 →1 ⨿N and i1 : N →1 ⨿N be the coproduct inclusions.
Using the NNO axiom, there is a unique function g : N →1 ⨿N such that the following diagram commutes: 1 N N 1 ⨿N 1 ⨿N z i0 s g g i1z⨿i1s We will show that g is a two-sided inverse of z⨿s. To this end, we first establish that g ◦s = i1. Consider the following diagram: N N 1 N N 1 ⨿N 1 ⨿N s s s z sz i1z g s g i1z⨿i1s The lower triangle commutes because of the commutativity of the previous diagram. Thus, the entire diagram commutes. The outer triangle and square would also commute with i1 in place of g ◦s. By the NNO axiom, g ◦s = i1.
Now, to see that (z ⨿s) ◦g = 1N, note first that (z ⨿s) ◦g ◦z = (z ⨿s) ◦i0 = z.
Furthermore, (z ⨿s) ◦g ◦s = (z ⨿s) ◦i1 = s.
Thus the NNO axiom entails that (z ⨿s) ◦g = idN.
Finally, to see that g ◦(z ⨿s) = id1⨿N, we calculate: g ◦(z ⨿s) ◦i0 = g ◦z = i0.
Furthermore, g ◦(z ⨿s) ◦i1 = g ◦s = i1.
Therefore, g ◦(z ⨿s) = id1⨿N. This establishes that g is a two-sided inverse of z ⨿s, and 1 ⨿N is isomorphic to N.
Proposition 6.5. The function s : N →N is injective, but not surjective.
Thus, N is infinite.
29 Proof. By Proposition 4.2, the function i1 : N →1 ⨿N is monic. Since the images of i0 and i1 are disjoint, i0 is not surjective. Since z⨿s is an isomorphism, (z ⨿s) ◦i1 = s is monic, but not surjective. Therefore, N is infinite.
Proposition 6.6. If m : B →X is a nonempty subobject, then there is an epimorphism f : X →B.
Proof. Since B is nonempty, there is a function g : X\B →B. By Proposition 4.5, B ∼ = B ⨿X\B. Finally, 1B ⨿g : B ⨿X\B →B is an epimorphism, since 1B is an epimorphism.
Definition. We say that a set X is countable just in case there is an epimor-phism f : N →X, where N is the natural numbers.
Proposition 6.7. N × N is countably infinite.
Sketch of proof. We will give two arguments: one quick, and one slow (but hopefully more illuminating). For the quick argument, define a function g : N × N →N by g(x, y) = 2x3y. If ⟨x, y⟩̸= ⟨x′, y′⟩, then either x ̸= x′ or y ̸= y′.
In either case, unique factorizability of integers gives 2x3y ̸= 2x′3y′. Therefore, g : N × N →N is monic. Since N × N is nonempty, Proposition 6.6 entails that there is an epimorphism f : N ↠N × N. Therefore, N × N is countable.
Now for the slow argument. Imagine writing down all elements in N × N in an infinite table, whose first few elements look like this: ⟨0, 0⟩ ⟨1, 0⟩ ⟨2, 0⟩ · · · ⟨0, 1⟩ ⟨1, 1⟩ ⟨2, 1⟩ · · · ⟨0, 2⟩ ⟨1, 2⟩ ⟨2, 2⟩ · · · .
.
.
.
.
.
.
.
.
.
.
.
Now imagine running a thread diagonally through the numbers: begin with ⟨0, 0⟩, then move down to ⟨1, 0⟩and up to ⟨0, 1⟩, then over to ⟨2, 0⟩and down its diagonal, etc.. This process defines a function f : N →N × N whose first few values are f(0) = ⟨0, 0⟩ f(1) = ⟨0, 1⟩ f(2) = ⟨1, 0⟩ .
.
.
It is not difficult to show that f is surjective, and so N × N is countable.
Exercise 6.8. Show that if A and B are countable then A ∪B is countable.
We’re now going to show that exponentiation allows us to construct sets of larger and larger size. In the case of finite sets A and X, it’s easy to see that the following equation holds: |AX| = |A||X|, 30 where |X| denotes the number of elements in X.
In particular, ΩX can be thought of as the set of binary sequences indexed by X. We’re now going to show that for any set X, the set ΩX is larger than X.
Definition. Let g : A →A be a function. We say that a ∈A is a fixed point of g just in case g(a) = a. We say that A has the fixed point property just in case any function g : A →A has a fixed point.
Proposition 6.9. Let A and X be sets. If there is a surjective function p : X →AX, then A has the fixed point property.
Proof. Suppose that p : X →AX is surjective.
That is, for any function f : X →A, there is an xf ∈X such that f = p(xf). Let ϕ = p♭, so that f = ϕ(xf, −). Now let g : A →A be any function. We need to show that g has a fixed point. Consider the function f : X →A defined by f = g ◦ϕ◦δX, where δX : X →X × X is the diagonal map. Then we have gϕ(x, x) = f(x) = ϕ(xf, x), for all x ∈X. In particular, gϕ(xf, xf) = ϕ(xf, xf), which means that a = ϕ(xf, xf) is a fixed point of g. Since g : A →A was arbitrary, it follows that A has the fixed point property.
Proposition 6.10. There is no surjective function X →ΩX.
Proof. The function Ω→Ωthat permutes t and f has no fixed points. The result then follows from Proposition 6.9.
Exercise 6.11. Show that there is an injective function X →ΩX. [The proof is easy if you simply think of ΩX as functions from X to {t, f}. For a bigger challenge, try to prove that it’s true using the definition of the exponential set ΩX.] Corollary 6.12. For any set X, the set PX of its subsets is strictly larger than X.
There are several other facts about cardinality that are important for certain parts of mathematics — in our case, they will be important for the study of topology. For example, if X is an infinite set, then the set FX of all finite sub-sets of a set X has the same cardinality as X. Similarly, a countable coproduct of countable sets is countable. However, these facts — well known from ZF set theory — are not obviously provable in ETCS.
Discussion. Intuitively speaking, XN is the set of all sequences with values in X. Thus, we should have something like XN ∼ = X × X × · · · However, we don’t have any axiom telling us that Sets has infinite products such as the one on the right hand side above. Can it be proven that XN satisfies the definition of an infinite product?
In other words, are there projections πi : XN →X which satisfy an appropriate universal property?
31 7 The Axiom of Choice Definition. Let f : X →Y be a function. We say that f is a split epimor-phism just in case there is a function s : Y →X such that fs = 1Y . In this case, we say that s is a section of f.
Exercise 7.1. Prove that if f is a split epimorphism, then f is a regular epi-morphism.
Exercise 7.2. Prove that if s is a section, then s is a regular monomorphism.
Axiom 11: Axiom of choice Every epimorphism in Sets has a section.
The name “axiom of choice” comes from a different formulation of this axiom, which says (roughly speaking) that for any set-indexed collection of sets, say {Xi | i ∈I}, we can choose one element from each set, say xi ∈Xi, and form a new set with these elements.
To translate that version of the axiom of choice into our version, suppose that the sets Xi are stacked side by side, and that f is the map that projects each x ∈Xi to the value i. Then a section s of f would be a map with domain I that returns an element s(i) ∈Xi for each i ∈I.
Further reading • C. Butz, “Regular categories and regular logic” LS/98/2/BRICS-LS-98-2.pdf • W. Lawvere, Sets for Mathematics. Cambridge University Press, 2003.
• W. Lawvere, “An elementary theory of the category of sets”
tac.mta.ca/tac/reprints/articles/11/tr11.pdf • T. Leinster, “Rethinking set theory” • ETCS in nlab 32 |
17222 | https://openstax.org/books/elementary-algebra-2e/pages/7-review-exercises | Ch. 7 Review Exercises - Elementary Algebra 2e | OpenStax
This website utilizes technologies such as cookies to enable essential site functionality, as well as for analytics, personalization, and targeted advertising purposes. Privacy Notice
Customize Reject All Accept All
Customize Consent Preferences
We use cookies to help you navigate efficiently and perform certain functions. You will find detailed information about all cookies under each consent category below.
The cookies that are categorized as "Necessary" are stored on your browser as they are essential for enabling the basic functionalities of the site. ...Show more
For more information on how Google's third-party cookies operate and handle your data, see:Google Privacy Policy
Necessary Always Active
Necessary cookies are required to enable the basic features of this site, such as providing secure log-in or adjusting your consent preferences. These cookies do not store any personally identifiable data.
Cookie oxdid
Duration 1 year 1 month 4 days
Description OpenStax Accounts cookie for authentication
Cookie campaignId
Duration Never Expires
Description Required to provide OpenStax services
Cookie __cf_bm
Duration 1 hour
Description This cookie, set by Cloudflare, is used to support Cloudflare Bot Management.
Cookie CookieConsentPolicy
Duration 1 year
Description Cookie Consent from Salesforce
Cookie LSKey-c$CookieConsentPolicy
Duration 1 year
Description Cookie Consent from Salesforce
Cookie renderCtx
Duration session
Description This cookie is used for tracking community context state.
Cookie pctrk
Duration 1 year
Description Customer support
Cookie _accounts_session_production
Duration 1 year 1 month 4 days
Description Cookies that are required for authentication and necessary OpenStax functions.
Cookie nudge_study_guides_page_counter
Duration 1 year 1 month 4 days
Description Product analytics
Cookie _dd_s
Duration 15 minutes
Description Zapier cookies that are used for Customer Support services.
Cookie ak_bmsc
Duration 2 hours
Description This cookie is used by Akamai to optimize site security by distinguishing between humans and bots
Cookie PHPSESSID
Duration session
Description This cookie is native to PHP applications. The cookie stores and identifies a user's unique session ID to manage user sessions on the website. The cookie is a session cookie and will be deleted when all the browser windows are closed.
Cookie m
Duration 1 year 1 month 4 days
Description Stripe sets this cookie for fraud prevention purposes. It identifies the device used to access the website, allowing the website to be formatted accordingly.
Cookie BrowserId
Duration 1 year
Description Sale Force sets this cookie to log browser sessions and visits for internal-only product analytics.
Cookie ph_phc_bnZwQPxzoC7WnmjFNOUQpcKsaDVg8TwnyoNzbClpIsD_posthog
Duration 1 year
Description Privacy-focused platform cookie
Cookie cookieyes-consent
Duration 1 year
Description CookieYes sets this cookie to remember users' consent preferences so that their preferences are respected on subsequent visits to this site. It does not collect or store any personal information about the site visitors.
Cookie _cfuvid
Duration session
Description Calendly sets this cookie to track users across sessions to optimize user experience by maintaining session consistency and providing personalized services
Cookie dmn_chk_
Duration Less than a minute
Description This cookie is set to track user activity across the website.
Cookie cookiesession1
Duration 1 year
Description This cookie is set by the Fortinet firewall. This cookie is used for protecting the website from abuse.
Functional
[x]
Functional cookies help perform certain functionalities like sharing the content of the website on social media platforms, collecting feedback, and other third-party features.
Cookie session
Duration session
Description Salesforce session cookie. We use Salesforce to drive our support services to users.
Cookie projectSessionId
Duration session
Description Optional AI-based customer support cookie
Cookie yt-remote-device-id
Duration Never Expires
Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos.
Cookie ytidb::LAST_RESULT_ENTRY_KEY
Duration Never Expires
Description The cookie ytidb::LAST_RESULT_ENTRY_KEY is used by YouTube to store the last search result entry that was clicked by the user. This information is used to improve the user experience by providing more relevant search results in the future.
Cookie yt-remote-connected-devices
Duration Never Expires
Description YouTube sets this cookie to store the user's video preferences using embedded YouTube videos.
Cookie yt-remote-session-app
Duration session
Description The yt-remote-session-app cookie is used by YouTube to store user preferences and information about the interface of the embedded YouTube video player.
Cookie yt-remote-cast-installed
Duration session
Description The yt-remote-cast-installed cookie is used to store the user's video player preferences using embedded YouTube video.
Cookie yt-remote-session-name
Duration session
Description The yt-remote-session-name cookie is used by YouTube to store the user's video player preferences using embedded YouTube video.
Cookie yt-remote-fast-check-period
Duration session
Description The yt-remote-fast-check-period cookie is used by YouTube to store the user's video player preferences for embedded YouTube videos.
Cookie yt-remote-cast-available
Duration session
Description The yt-remote-cast-available cookie is used to store the user's preferences regarding whether casting is available on their YouTube video player.
Analytics
[x]
Analytical cookies are used to understand how visitors interact with the website. These cookies help provide information on metrics such as the number of visitors, bounce rate, traffic source, etc.
Cookie hjSession
Duration 1 hour
Description Hotjar sets this cookie to ensure data from subsequent visits to the same site is attributed to the same user ID, which persists in the Hotjar User ID, which is unique to that site.
Cookie visitor_id
Duration 9 months 7 days
Description Pardot sets this cookie to store a unique user ID.
Cookie visitor_id-hash
Duration 9 months 7 days
Description Pardot sets this cookie to store a unique user ID.
Cookie _gcl_au
Duration 3 months
Description Google Tag Manager sets the cookie to experiment advertisement efficiency of websites using their services.
Cookie _ga
Duration 1 year 1 month 4 days
Description Google Analytics sets this cookie to calculate visitor, session and campaign data and track site usage for the site's analytics report. The cookie stores information anonymously and assigns a randomly generated number to recognise unique visitors.
Cookie _gid
Duration 1 day
Description Google Analytics sets this cookie to store information on how visitors use a website while also creating an analytics report of the website's performance. Some of the collected data includes the number of visitors, their source, and the pages they visit anonymously.
Cookie _fbp
Duration 3 months
Description Facebook sets this cookie to display advertisements when either on Facebook or on a digital platform powered by Facebook advertising after visiting the website.
Cookie ga
Duration 1 year 1 month 4 days
Description Google Analytics sets this cookie to store and count page views.
Cookie pardot
Duration past
Description The pardot cookie is set while the visitor is logged in as a Pardot user. The cookie indicates an active session and is not used for tracking.
Cookie pi_pageview_count
Duration Never Expires
Description Marketing automation tracking cookie
Cookie pulse_insights_udid
Duration Never Expires
Description User surveys
Cookie pi_visit_track
Duration Never Expires
Description Marketing cookie
Cookie pi_visit_count
Duration Never Expires
Description Marketing cookie
Cookie cebs
Duration session
Description Crazyegg sets this cookie to trace the current user session internally.
Cookie gat_gtag_UA
Duration 1 minute
Description Google Analytics sets this cookie to store a unique user ID.
Cookie vuid
Duration 1 year 1 month 4 days
Description Vimeo installs this cookie to collect tracking information by setting a unique ID to embed videos on the website.
Performance
[x]
Performance cookies are used to understand and analyze the key performance indexes of the website which helps in delivering a better user experience for the visitors.
Cookie hjSessionUser
Duration 1 year
Description Hotjar sets this cookie to ensure data from subsequent visits to the same site is attributed to the same user ID, which persists in the Hotjar User ID, which is unique to that site.
Advertisement
[x]
Advertisement cookies are used to provide visitors with customized advertisements based on the pages you visited previously and to analyze the effectiveness of the ad campaigns.
Cookie test_cookie
Duration 15 minutes
Description doubleclick.net sets this cookie to determine if the user's browser supports cookies.
Cookie YSC
Duration session
Description Youtube sets this cookie to track the views of embedded videos on Youtube pages.
Cookie VISITOR_INFO1_LIVE
Duration 6 months
Description YouTube sets this cookie to measure bandwidth, determining whether the user gets the new or old player interface.
Cookie VISITOR_PRIVACY_METADATA
Duration 6 months
Description YouTube sets this cookie to store the user's cookie consent state for the current domain.
Cookie IDE
Duration 1 year 24 days
Description Google DoubleClick IDE cookies store information about how the user uses the website to present them with relevant ads according to the user profile.
Cookie yt.innertube::requests
Duration Never Expires
Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen.
Cookie yt.innertube::nextId
Duration Never Expires
Description YouTube sets this cookie to register a unique ID to store data on what videos from YouTube the user has seen.
Uncategorized
[x]
Other uncategorized cookies are those that are being analyzed and have not been classified into a category as yet.
Cookie donation-identifier
Duration 1 year
Description Description is currently not available.
Cookie abtest-identifier
Duration 1 year
Description Description is currently not available.
Cookie __Secure-ROLLOUT_TOKEN
Duration 6 months
Description Description is currently not available.
Cookie _ce.s
Duration 1 year
Description Description is currently not available.
Cookie _ce.clock_data
Duration 1 day
Description Description is currently not available.
Cookie cebsp_
Duration session
Description Description is currently not available.
Cookie lpv218812
Duration 1 hour
Description Description is currently not available.
Reject All Save My Preferences Accept All
Skip to ContentGo to accessibility pageKeyboard shortcuts menu
Log in
Elementary Algebra 2e
Review Exercises
Elementary Algebra 2eReview Exercises
Contents Contents
Highlights
Table of contents
Preface
1 Foundations
2 Solving Linear Equations and Inequalities
3 Math Models
4 Graphs
5 Systems of Linear Equations
6 Polynomials
7 Factoring
Introduction
7.1 Greatest Common Factor and Factor by Grouping
7.2 Factor Trinomials of the Form x 2+bx+c
7.3 Factor Trinomials of the Form ax 2+bx+c
7.4 Factor Special Products
7.5 General Strategy for Factoring Polynomials
7.6 Quadratic Equations
Chapter Review
Exercises
Review Exercises
Practice Test
8 Rational Expressions and Equations
9 Roots and Radicals
10 Quadratic Equations
Answer Key
Index
Search for key terms or text.
Close
Review Exercises
7.1 Greatest Common Factor and Factor by Grouping
Find the Greatest Common Factor of Two or More Expressions
In the following exercises, find the greatest common factor.
363.
42,60 42,60 42,60
450,420 450,420 450,420
365.
90,150,105 90,150,105 90,150,105
60,294,630 60,294,630 60,294,630
Factor the Greatest Common Factor from a Polynomial
In the following exercises, factor the greatest common factor from each polynomial.
367.
24 x−42 24 x−42 24 x−42
35 y+84 35 y+84 35 y+84
369.
15 m 4+6 m 2 n 15 m 4+6 m 2 n 15 m 4+6 m 2 n
24 p t 4+16 t 7 24 p t 4+16 t 7 24 p t 4+16 t 7
Factor by Grouping
In the following exercises, factor by grouping.
371.
a x−a y+b x−b y a x−a y+b x−b y a x−a y+b x−b y
x 2 y−x y 2+2 x−2 y x 2 y−x y 2+2 x−2 y x 2 y−x y 2+2 x−2 y
373.
x 2+7 x−3 x−21 x 2+7 x−3 x−21 x 2+7 x−3 x−21
4 x 2−16 x+3 x−12 4 x 2−16 x+3 x−12 4 x 2−16 x+3 x−12
375.
m 3+m 2+m+1 m 3+m 2+m+1 m 3+m 2+m+1
5 x−5 y−y+x 5 x−5 y−y+x 5 x−5 y−y+x
7.2 Factor Trinomials of the form x 2+b x+c x 2+b x+c x 2+b x+c
Factor Trinomials of the Form x 2+b x+c x 2+b x+c x 2+b x+c
In the following exercises, factor each trinomial of the form x 2+b x+c x 2+b x+c x 2+b x+c.
377.
u 2+17 u+72 u 2+17 u+72 u 2+17 u+72
a 2+14 a+33 a 2+14 a+33 a 2+14 a+33
379.
k 2−16 k+60 k 2−16 k+60 k 2−16 k+60
r 2−11 r+28 r 2−11 r+28 r 2−11 r+28
381.
y 2+6 y−7 y 2+6 y−7 y 2+6 y−7
m 2+3 m−54 m 2+3 m−54 m 2+3 m−54
383.
s 2−2 s−8 s 2−2 s−8 s 2−2 s−8
x 2−3 x−10 x 2−3 x−10 x 2−3 x−10
Factor Trinomials of the Form x 2+b x y+c y 2 x 2+b x y+c y 2 x 2+b x y+c y 2
In the following examples, factor each trinomial of the form x 2+b x y+c y 2 x 2+b x y+c y 2 x 2+b x y+c y 2.
385.
x 2+12 x y+35 y 2 x 2+12 x y+35 y 2 x 2+12 x y+35 y 2
u 2+14 u v+48 v 2 u 2+14 u v+48 v 2 u 2+14 u v+48 v 2
387.
a 2+4 a b−21 b 2 a 2+4 a b−21 b 2 a 2+4 a b−21 b 2
p 2−5 p q−36 q 2 p 2−5 p q−36 q 2 p 2−5 p q−36 q 2
7.3 Factoring Trinomials of the form a x 2+b x+c a x 2+b x+c a x 2+b x+c
Recognize a Preliminary Strategy to Factor Polynomials Completely
In the following exercises, identify the best method to use to factor each polynomial.
389.
y 2−17 y+42 y 2−17 y+42 y 2−17 y+42
12 r 2+32 r+5 12 r 2+32 r+5 12 r 2+32 r+5
391.
8 a 3+72 a 8 a 3+72 a 8 a 3+72 a
4 m−m n−3 n+12 4 m−m n−3 n+12 4 m−m n−3 n+12
Factor Trinomials of the Form a x 2+b x+c a x 2+b x+c a x 2+b x+c with a GCF
In the following exercises, factor completely.
393.
6 x 2+42 x+60 6 x 2+42 x+60 6 x 2+42 x+60
8 a 2+32 a+24 8 a 2+32 a+24 8 a 2+32 a+24
395.
3 n 4−12 n 3−96 n 2 3 n 4−12 n 3−96 n 2 3 n 4−12 n 3−96 n 2
5 y 3+25 y 2−70 y 5 y 3+25 y 2−70 y 5 y 3+25 y 2−70 y
Factor Trinomials Using the “ac” Method
In the following exercises, factor.
397.
2 x 2+9 x+4 2 x 2+9 x+4 2 x 2+9 x+4
3 y 2+17 y+10 3 y 2+17 y+10 3 y 2+17 y+10
399.
18 a 2−9 a+1 18 a 2−9 a+1 18 a 2−9 a+1
8 u 2−14 u+3 8 u 2−14 u+3 8 u 2−14 u+3
401.
15 p 2+2 p−8 15 p 2+2 p−8 15 p 2+2 p−8
15 x 2+x−2 15 x 2+x−2 15 x 2+x−2
403.
40 s 2−s−6 40 s 2−s−6 40 s 2−s−6
20 n 2−7 n−3 20 n 2−7 n−3 20 n 2−7 n−3
Factor Trinomials with a GCF Using the “ac” Method
In the following exercises, factor.
405.
3 x 2+3 x−36 3 x 2+3 x−36 3 x 2+3 x−36
4 x 2+4 x−8 4 x 2+4 x−8 4 x 2+4 x−8
407.
60 y 2−85 y−25 60 y 2−85 y−25 60 y 2−85 y−25
18 a 2−57 a−21 18 a 2−57 a−21 18 a 2−57 a−21
7.4 Factoring Special Products
Factor Perfect Square Trinomials
In the following exercises, factor.
409.
25 x 2+30 x+9 25 x 2+30 x+9 25 x 2+30 x+9
16 y 2+72 y+81 16 y 2+72 y+81 16 y 2+72 y+81
411.
36 a 2−84 a b+49 b 2 36 a 2−84 a b+49 b 2 36 a 2−84 a b+49 b 2
64 r 2−176 r s+121 s 2 64 r 2−176 r s+121 s 2 64 r 2−176 r s+121 s 2
413.
40 x 2+360 x+810 40 x 2+360 x+810 40 x 2+360 x+810
75 u 2+180 u+108 75 u 2+180 u+108 75 u 2+180 u+108
415.
2 y 3−16 y 2+32 y 2 y 3−16 y 2+32 y 2 y 3−16 y 2+32 y
5 k 3−70 k 2+245 k 5 k 3−70 k 2+245 k 5 k 3−70 k 2+245 k
Factor Differences of Squares
In the following exercises, factor.
417.
81 r 2−25 81 r 2−25 81 r 2−25
49 a 2−144 49 a 2−144 49 a 2−144
419.
169 m 2−n 2 169 m 2−n 2 169 m 2−n 2
64 x 2−y 2 64 x 2−y 2 64 x 2−y 2
421.
25 p 2−1 25 p 2−1 25 p 2−1
1−16 s 2 1−16 s 2 1−16 s 2
423.
9−121 y 2 9−121 y 2 9−121 y 2
100 k 2−81 100 k 2−81 100 k 2−81
425.
20 x 2−125 20 x 2−125 20 x 2−125
18 y 2−98 18 y 2−98 18 y 2−98
427.
49 u 3−9 u 49 u 3−9 u 49 u 3−9 u
169 n 3−n 169 n 3−n 169 n 3−n
Factor Sums and Differences of Cubes
In the following exercises, factor.
429.
a 3−125 a 3−125 a 3−125
b 3−216 b 3−216 b 3−216
431.
2 m 3+54 2 m 3+54 2 m 3+54
81 x 3+3 81 x 3+3 81 x 3+3
7.5 General Strategy for Factoring Polynomials
Recognize and Use the Appropriate Method to Factor a Polynomial Completely
In the following exercises, factor completely.
433.
24 x 3+44 x 2 24 x 3+44 x 2 24 x 3+44 x 2
24 a 4−9 a 3 24 a 4−9 a 3 24 a 4−9 a 3
435.
16 n 2−56 m n+49 m 2 16 n 2−56 m n+49 m 2 16 n 2−56 m n+49 m 2
6 a 2−25 a−9 6 a 2−25 a−9 6 a 2−25 a−9
437.
5 r 2+22 r−48 5 r 2+22 r−48 5 r 2+22 r−48
5 u 4−45 u 2 5 u 4−45 u 2 5 u 4−45 u 2
439.
n 4−81 n 4−81 n 4−81
64 j 2+225 64 j 2+225 64 j 2+225
441.
5 x 2+5 x−60 5 x 2+5 x−60 5 x 2+5 x−60
b 3−64 b 3−64 b 3−64
443.
m 3+125 m 3+125 m 3+125
2 b 2−2 b c+5 c b−5 c 2 2 b 2−2 b c+5 c b−5 c 2 2 b 2−2 b c+5 c b−5 c 2
7.6 Quadratic Equations
Use the Zero Product Property
In the following exercises, solve.
445.
(a−3)(a+7)=0(a−3)(a+7)=0(a−3)(a+7)=0
(b−3)(b+10)=0(b−3)(b+10)=0(b−3)(b+10)=0
447.
3 m(2 m−5)(m+6)=0 3 m(2 m−5)(m+6)=0 3 m(2 m−5)(m+6)=0
7 n(3 n+8)(n−5)=0 7 n(3 n+8)(n−5)=0 7 n(3 n+8)(n−5)=0
Solve Quadratic Equations by Factoring
In the following exercises, solve.
449.
x 2+9 x+20=0 x 2+9 x+20=0 x 2+9 x+20=0
y 2−y−72=0 y 2−y−72=0 y 2−y−72=0
451.
2 p 2−11 p=40 2 p 2−11 p=40 2 p 2−11 p=40
q 3+3 q 2+2 q=0 q 3+3 q 2+2 q=0 q 3+3 q 2+2 q=0
453.
144 m 2−25=0 144 m 2−25=0 144 m 2−25=0
4 n 2=36 4 n 2=36 4 n 2=36
Solve Applications Modeled by Quadratic Equations
In the following exercises, solve.
455.
The product of two consecutive numbers is 462 462 462. Find the numbers.
The area of a rectangular shaped patio 400 400 400 square feet. The length of the patio is 9 9 9 feet more than its width. Find the length and width.
PreviousNext
Order a print copy
Citation/Attribution
This book may not be used in the training of large language models or otherwise be ingested into large language models or generative AI offerings without OpenStax's permission.
Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.
Attribution information
If you are redistributing all or part of this book in a print format, then you must include on every physical page the following attribution:
Access for free at
If you are redistributing all or part of this book in a digital format, then you must include on every digital page view the following attribution:
Access for free at
Citation information
Use the information below to generate a citation. We recommend using a citation tool such as this one.
Authors: Lynn Marecek, MaryAnne Anthony-Smith, Andrea Honeycutt Mathis
Publisher/website: OpenStax
Book title: Elementary Algebra 2e
Publication date: Apr 22, 2020
Location: Houston, Texas
Book URL:
Section URL:
© Jul 8, 2025 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.
Our mission is to improve educational access and learning for everyone.
OpenStax is part of Rice University, which is a 501(c)(3) nonprofit. Give today and help us reach more students.
Help
Contact Us
Support Center
FAQ
OpenStax
Press
Newsletter
Careers
Policies
Accessibility Statement
Terms of Use
Licensing
Privacy Policy
Manage Cookies
© 1999-2025, Rice University. Except where otherwise noted, textbooks on this site are licensed under a Creative Commons Attribution 4.0 International License.
Advanced Placement® and AP® are trademarks registered and/or owned by the College Board, which is not affiliated with, and does not endorse, this site. |
17223 | https://eyewiki.org/Lasers_(surgery) | Please note: This website includes an accessibility system. Press Control-F11 to adjust the website to people with visual disabilities who are using a screen reader; Press Control-F10 to open an accessibility menu.
Popup heading
Create account
Log in
Page
Discussion
View form
View source
View history
Lasers (surgery)
From EyeWiki
Jump to:navigation, search
All content on Eyewiki is protected by copyright law and the Terms of Service. This content may not be reproduced, copied, or put into any artificial intelligence program, including large language and generative AI models, without permission from the Academy.
Article initiated by:
Abdulrahman F. AlBloushi, MD, Marwan A. Abouammoh, MD
All contributors:
Leo A. Kim, MD, PhD, Marwan A. Abouammoh, MD, Harikrishnan Vannadil, MD, MS (Ophthal), Christina Y. Weng, MD, MBA, Francisco Samaniego, Jennifer I Lim MD, Michael Javaheri, MD
Assigned editor:
Michael Javaheri, MD
Review:
Assigned status Update Pending
by Michael Javaheri, MD on January 27, 2025.
| |
| add |
| Contributing Editors: | add |
Laser is an abbreviation for (Light Amplification by Stimulated Emission of Radiation). The concept of ocular therapy using light was published first by Meyer-Schwickerath, who used the sunlight to treat patients with ocular melanoma in 1949. On the other hand, many experiments on retinal damage from sunlight were performed in the late 1800's, but they are not published.
Contents
1 Laser Properties
2 Principles of Laser Emission
3 Laser System and Media
4 Laser- Tissue Interaction
4.1 Photothermal (photocoagulation and photovaporization)
4.2 Photochemical (photoablation and photoradiation)
4.3 Photoionizing (photodisruption)
5 Laser Types in Retina
5.1 Argon blue-green laser (70% blue (488 nm) and 30% green(514nm))
5.2 Frequency-doubled Nd-YAG Laser (532 nm)
5.3 Krypton red (647 nm)
5.4 Diode laser (805-810 nm)
6 Laser-Tissue Absorption in the Retina
6.1 Melanin
6.2 Macular xanthophyll
6.3 Hemoglobin
7 Laser delivery systems
7.1 Slit lamp
7.2 Indirect ophthalmoscope
7.3 Endophotocoagulation
7.4 Micropulse laser therapy
8 Lenses Used for Laser Delivery
9 Panretinal Photocoagulation for Treatment of Proliferative Diabetic Retinopathy
10 Treatment of Diabetic Macular Edema with Laser Photocoagulation
11 Transpupillary Thermotherapy (TTT)
12 Laser Photocoagulation in Branch and Central Retinal Vein Occlusions
13 Laser Photocoagulation in Age-Related Macular Degeneration and Related Diseases
13.1 Thermal Laser Photocoagulation
13.2 Photodynamic Therapy (PDT)
14 Additional Resources
15 References
Laser Properties
Monochromatic, Coherent, & Collimated
Lasers have properties to produce highly monochromatic coherent beam that is collimated and has limited divergence. Monochromatic electromagnetic wave means that it has single wavelength eliminating chromatic aberration. Coherence of lasers is classified as either spatial or temporal. Spatial coherence allows precise focusing of the laser beam to widths as small as a few microns, while temporal coherence allows selection of specific monochromatic wavelengths within a single laser or a group of lasers. Practically, spatial coherence, allow extremely small burns to pathologic tissue, with minimal disturbance to surrounding normal tissue; on the other hand, temporal coherence allows treatment of specific tissue sites by selecting laser wavelengths that are preferentially absorbed by these tissue sites.
Principles of Laser Emission
Atoms are composed of a positively charged nucleus and negatively charged electrons at various energy levels. Light is composed of individual packets of energy, called photons. Electrons can jump from one orbit to another by either absorbing energy and moving to a higher level (excited state), or emitting energy and transitioning to a lower level. Such transitions can be accompanied by absorption or spontaneous emission of a photon.
“Stimulated Emission” is the interaction of an atom in the excited state with a passing photon leading to photon emission. The emitted photon by the atom in this process will have the same phase, direction of propagation, and wavelength as the “stimulating photon”. The “stimulating photon” does not lose energy during this interaction it simply causes the emission and continues on. For this stimulated emission to occur more frequently, the optical material should have more atoms in an excited state than in a lower state.
Laser System and Media
The lasering medium is contained in an optical cavity (resonator) with mirrors at both ends, which reflect the light into the cavity and thereby circulate the photons through the lasing material multiple times to efficiently stimulate emission of radiation from excited atoms.
One of the mirrors is partially transmitting, thereby allowing a fraction of the laser beam to emerge. The lasing medium can be Solid (e.g, Ruby laser, neodymium-doped yttrium aluminum-garnet (Nd:YAG) ), Liquid : (e.g Fluorescent dye ) or Gas (e.g, Argon , Krypton ).
Lasers can be pumped by continuous discharge lamps and by pulsed flash lamps. Laser pulse durations can vary from femtoseconds to infinity.
Laser- Tissue Interaction
Laser-tissue interactions can occur in several ways:
Photothermal (photocoagulation and photovaporization)
Photothermal effects include photocoagulation and photovaporization. In photocoagulation, absorption of light by the target tissue results in a temperature rise, which causes denaturization of proteins. Typically, argon, krypton, diode (810nm) and Frequency doubled ND:YAG lasers cause this type of effect. Photovaporization occurs when higher energy laser light is absorbed by the target tissue, resulting in vaporization of both intracellular and extracellular water. The advantage of this type of tissue response is that adjacent blood vessels are also treated, resulting in a bloodless surgical field. The carbon dioxide laser, with its wavelength in the far infrared (10,600 nm), uses this method of action.
Photochemical (photoablation and photoradiation)
Photochemical effects include photoradiation and photoablation. In photoradiation, intravenous administration of photosensitizing agent, which is taken up by the target tissue, causes sensitization of the target tissue. Exposure of this sensitized tissue to red laser light (690 nm) induces the formation of cytotoxic free radicals. Photoablation occurs when high-energy laser wavelengths in the far ultraviolet (< 350 nm) region of the spectrum and are used to break long-chain tissue polymers into smaller volatile fragments. The exposure times in the photoablation process is usually much shorter (nanoseconds) compared to photoradiation. Photodynamic therapy (PDT) is an example of photoradiation therapy while Excimer laser is a photoablative process.
Photoionizing (photodisruption)
In Photoionization high-energy light (1064 nm) is deposited over a short interval to target tissue, stripping electrons from the molecules of that tissue which then rapidly expands, causing an acoustic shock wave that disrupts the treated tissue. The ND:YAG laser works via a photodisruptive mechanism.
Laser Types in Retina
Argon blue-green laser (70% blue (488 nm) and 30% green(514nm))
Absorbed selectively at retinal pigment epithelial layer (RPE), hemoglobin pigments, choriocapillaries, inner and outer nuclear layer of the retina. It coagulates tissues between the choriocapillaris and inner nuclear layer. The main adverse effects of these lasers are high intraocular scattering, macular damage in photocoagulation near the fovea, and choroidal neovascularization (if Bruch's membrane is ruptured).
Frequency-doubled Nd-YAG Laser (532 nm)
Highly absorbed by hemoglobin, melanin in retinal pigment epithelium and trabecular meshwork. It can be used either continuously or in pulsed mode.
PASCAL (Pattern Scan Laser) is one such type of laser that incorporates semi-Automated multiple pattern, short pulse, multiple shots with precise burn in very short duration using frequency-doubled Nd-YAG Laser (532 nm). It is commonly used nowadays in treatment of many retinal conditions (proliferative diabetic retinopathy, diabetic macular edema, vein occlusions etc.). It has many advantages when compared with conventional single spot laser, as it is produced at a very short duration (10-20 msec) compared to (100-200 msec) of conventional single spot one which leads to less collateral retinal damage. Other advantages include relatively stable scar size, less destructive same efficiency. It also permits the application of different patterns that gives more regular spots on retina with less duration.
Krypton red (647 nm)
Well absorbed by melanin and can pass through hemoglobin which makes it suitable for treatment of subretinal neovascular membrane. It also has low intraocular scattering with good penetration through media opacity or edematous retina and has ability to coagulate the choriocapillaries and the choroid.
Diode laser (805-810 nm)
It is well absorbed by melanin. The near to infrared spectrum (near invisible) makes it more comfortable to use due to absence of flashes of light. It has very deep penetration through the retina and choroid making it the laser of choice in treatment of Retinopathy of Prematurity (ROP) and some types of retina lesions. It is also used via trans-scleral route to treat the ciliary body in some cases of refractory glaucoma.
Laser-Tissue Absorption in the Retina
Melanin
Found mainly in the RPE (Retinal pigment epithelium) and choroid, and absorbs mainly wavelength between 400-700 nm. The longer the wavelength of light, the more the melanin is penetrated. For example, Diode laser with wavelength of 810 nm can penetrate deeply into the choroid.
Macular xanthophyll
Located in the inner and outer plexiform retinal layers. It protects the photoreceptors from short-wavelength light damage, but can be damaged by blue light which is why Argon green is preferred in macular photocoagulation over Argon blue.
Hemoglobin
Absorption varies according to oxygen saturation. It absorbs yellow, green, and blue wavelengths, but red light is absorbed poorly. Thus, macular lasers may, uncommonly, damage retinal vessels.
Laser delivery systems
Slit lamp
It is the most popular and common delivery system. Laser settings such as power, spot size and exposure time can be changed easily.
Indirect ophthalmoscope
Commonly used via a fiberoptic cable to deliver diode or argon lasers. It is ideal in the treatment of peripheral retina e.g. peripheral breaks and cases of retinopathy of prematurity. The spot size is altered by the dioptric effect of the condensing lens used. It may even vary depending on the refractive status of the eye (i.e. in hyperopic eyes the spot size will be smaller, and in myopic eyes it will be larger).
Endophotocoagulation
It delivered mainly argon green and diode lasers. Often used during retinal detachment repair following pars plana vitrectomy and extrusion of the subretinal fluid or in the surgical treatment of proliferative diabetic retinopathy.
Micropulse laser therapy
Micropulse laser describes a method of retinal laser delivery and can be applied with lasers of different wavelengths, such as 532 nm, 577 nm, or 810 nm. This type of delivery essentially divides the treatment into repeated microsecond impulses with intervals separating these where the retinal tissue is allowed to cool down. The laser power is set to a low level, and in general, the spots are not visible on the retina; the intention is to treat the retina on a subclinical basis while avoiding thermal damage to the underlying retina that can occur with conventional photocoagulation. While this type of laser therapy appears to be safe, its efficacy continues to be debated.
Lenses Used for Laser Delivery
Selection of lens depend on many factors include, desired field of view, amount of magnification, area to be treated, and ophthalmologist preference. The commonly used contact lenses for panretinal and focal/grid retinal photocoagulation are listed in table 1 and 2. It is important to remember that most of the commonly used lenses magnify the image size; thus, the laser spot size on the machine must be set accordingly.
Table 1. Contact Lenses used for PRP
| Lens | Image Magnification | Laser Spot Magnification | Field of View |
| Goldmann 3-mirror | 0.93x | 1.08x | 140◦ |
| Mainster Widefield | 0.68x | 1.5x | 118-127◦ |
| Mainster PRP 165 | 0.51x | 1.96x | 165-180◦ |
| Volk Quadraspheric | 0.51x | 1.97x | 120-144◦ |
| Volk Super Quad 160 | 0.50x | 2.00x | 160-165◦ |
Table 2. Contact Lenses used for Focal/Grid Lasers
| Lens | Image Magnification | Laser Spot Magnification | Field of View |
| Goldmann 3-mirror | 0.93x | 1.08x | 140◦ |
| Mainster standard | 0.96x | 1.05x | 90-121◦ |
| Mainster high magnification | 1.25x | 0.8x | 75-88◦ |
| Ocular PDT 1.6X | 0.63x | 1.6x | 120-133◦ |
| Volk area centralis | 1.06x | 0.94x | 70-84◦ |
Panretinal Photocoagulation for Treatment of Proliferative Diabetic Retinopathy
The Diabetic Retinopathy Study (DRS) established panretinal photocoagulation (PRP) as an effective treatment for high risk PDR which includes eyes with one of the three of the following risk factors: NVD greater than 1/3 disc area, any NVD with vitreous hemorrhage or NVE greater than half a disc area with preretinal or vitreous hemorrhage. The Early Treatment Diabetic Retinopathy Study (ETDRS) recommended careful follow-up without PRP for mild or moderate nonproliferative diabetic retinopathy.
Laser settings for conventional retinal laser photocoagulation for diabetic retinopathy is typically performed with a continuous wave (cw) laser at 514 or 532 nm with exposure durations from 100 to 200 ms, spot sizes from 100 to 500 µm, and powers from 250 to 750 mW. If Patterned scanning laser is used, it typically utilizes settings of 532 nm wavelength, 200 µm spot size, 20 ms duration, and powers from 300 to 750 mW.
Area of treatment reaches approximately 1300–1500 500 micron-sized burns spaced between one half and 1 burn width apart, beginning temporally just outside the vascular arcades and 3-disc diameters temporal to the macula, and extending to or just beyond the equator. Some providers prefer to divide treatment into two or more sessions while others elect to perform treatment in a single session. On the nasal side of the fundus, burns begin about 1-disc diameter nasal to the optic disc and also extend to or just beyond the equator. However, specific regimens vary by practitioner.
Treatment of Diabetic Macular Edema with Laser Photocoagulation
The ETDRS recommended macular laser for Clinically significant macular edema (CSME) which was defined as any of the following based on stereoscopic fundus examination:
Retinal thickening within 500 µm of the foveal center;
Hard exudates within 500 µm of the foveal center associated with adjacent retinal thickening; or
Retinal thickening more than 1 disc area in size within 1 disc diameter from the foveal center
Focal photocoagulation is directed to microaneurysms more than 500 µm away from the foveal center. Treatment up to 300 µm from the foveal center is allowed if vision is 20/40 or worse. Grid photocoagulation is applied to areas of diffuse leakage and capillary non-perfusion on fluorescein angiography.
Focal laser setting is a 50 to 100 µm spot size, 50 to 100 ms pulse duration, and power titrated to barely whiten the microaneurysm. Grid laser setting is a 50 to 200 µm spot size, 50 to 100 ms pulse duration, and power titrated to achieve mild burn intensities.
Transpupillary Thermotherapy (TTT)
A more intense, destructive modality, this is occasionally used for the treatment of choroidal melanomas, retinoblastoma, subfoveal choroidal neovascular membranes (CNVM) and other ocular tumors. TTT involves long exposures (~60 s) of a large spot (1.2–3 mm) at low irradiance (~10 W/cm2), using a near-infrared Diode (810 nm) laser that is thought to induce intralesional hyperthermia and subsequent vascular occlusion and lesion shrinkage.
Laser Photocoagulation in Branch and Central Retinal Vein Occlusions
In branch or central vein occlusions, retinal hypoxia occurs in the distribution of the occluded veins and may elicit a neovascular response in the affected area. Sector or panretinal photocoagulation is then the treatment of choice. It has been shown that macular grid laser photocoagulation can be used to treat persistent macular edema (> 3 months, vision worse than 20/40) resulting from branch vein occlusion with improvement of vision in some cases, although anti-vascular endothelial growth factor (anti-VEGF) intravitreal injections have become the standard of care.
Laser Photocoagulation in Age-Related Macular Degeneration and Related Diseases
Thermal Laser Photocoagulation
Before the era of intravitreal injections, thermal photocoagulation with the argon blue-green laser or krypton red was the first-line of treatment for exudative age-related macular degeneration (AMD) in cases of extrafoveal CNVM. However, treatment of subfoveal and juxtafoveal lesions usually yielded a dense scotoma with a high recurrence rates for the CNVM.
Photodynamic Therapy (PDT)
PDT is a form of selective laser therapy, leading to closure of the choroidal neovascular process and other active proliferating vessels, while leaving normal retinal tissue unharmed. It was first described to treat exudative AMD and studied in the VIP/TAP clinical trials, although the gold standard in treating exudative AMD has since become anti-VEGF therapy. . It works by photoradiation mechanism in which previously hematoporphyrin derivative and currently verteporfin (Visudyne) is used as a photosensitizing agent followed by local application of light in the absorption spectrum of that agent (i.e. 689 nm). This will release free radicals that destroy endothelial cells causing closure of hyperproliferative vessels, as in an actively growing tumor, or in an area of active choroidal neovascularization. It remains a useful adjuvant therapy for other intraocular vascular disorders, as well as posterior segment neoplasms. PDT is now frequently used for cutaneous and subcutaneous tumors.
Treatment with PDT should be guided by a recent fluorescein angiography or indocyanine green study. A light dose of 50 J/cm2 (full-fluence PDT), or 25 J/cm2 (half-fluence PDT) has been described for the treatment of choroidal neovascularization of various conditions, in addition to other choroidal vascular pathologies, including chronic central serous choroidopathy, polypoidal choroidal vasculopathy, as well as choroidal neoplasms, such as circumscribed choroidal hemangiomas.
Additional Resources
References
↑ Gawecki M. Micropulse Laser Treatment of Retinal Diseases. J Clin Med. 2019 Feb; 8(2): 242.
↑ Photodynamic therapy of subfoveal choroidal neovascularization in age-related macular degeneration with verteporfin. Two year results of 2 randomised clinical trials—TAP report 2. Arch Ophthalmol 2001;119:198–207.
Palanker DV, Blumenkranz MS, Marmor MF, Fifty years of ophthalmic laser therapy, Arch Ophthalmol. 2011 Dec;129(12):1613-9. doi: 10.1001/archophthalmol.2011.293.
Retina and Vitreous, section 7. Basic and Clinical Science Course, AAO, 2013-2014.
Lasers in Ophthalmology, Basic, Diagnostic, and Surgical Aspects : a Review, by Franz Fankhauser and Sylwia Kwasniewska, 2003.
Step by steps, laser in Ophthalmology,by Bikas Bahattacharyya, 2009.
Daniel Palanker, PhD, www.AAO.org, comprehensive ophthalmology Lasers Basic properties , 2013.
The Diabetic Retinopathy Study Research Group. Photocoagulation treatment of proliferative diabetic retinopathy. Clinical application of Diabetic Retinopathy Study (DRS) findings, DRS Report Number 8. Ophthalmology. 1981;88:583–600.
Early Treatment Diabetic Retinopathy Study Research Group. Early photocoagulation for diabetic retinopathy. ETDRS report number 9. Ophthalmology. 1991;98 (5 Suppl):766-785.
Muqit MM, Marcellino GR, Henson DB, Young LB, Turner GS, Stanga PE. Pascal panretinal laser ablation and regression analysis in proliferative diabetic retinopathy: Manchester Pascal Study Report 4. Eye. 2011;25(11):1447-1456.
Early Treatment Diabetic Retinopathy Study Research Group. Photocoagulation for diabetic macular edema. Early Treatment Diabetic Retinopathy Study report number 1. Arch Ophthalmol.1985;103(12):1796-1806.
Early Treatment Diabetic Retinopathy Study Research Group. Treatment techniques and clinical guidelines for photocoagulation of diabetic macular edema. Early Treatment Diabetic Retinopathy Study Report Number 2. Ophthalmology. 1987;94(7):761-774.
T, Maurage CA, Mordon S. Transpupillary thermotherapy (TTT) with short duration laser exposures induce heat shock protein (HSP) hyperexpression on choroidoretinal layers. Lasers Surg Med. 2003;33(2):102-107.
Complications of Age-related Macular Degeneration Prevention Trial (CAPT) Research Group. Risk factors for choroidal neovascularization and geographic atrophy in the complications of age-related macular degeneration prevention trial. Ophthalmology. 2008;115(9):1474-1479.
Moo-Young GA: Lasers in opthalmology, In High-tech medicine. West J Med 1985 Dec; 143:745-750.
Verteporfin in Phododynamic Therapy Study Group. Verteporfin Therapy of subfoveal choroidal neovascularisation in age-related macular degeneration: Two-year results of a randomised clinical trial including lesions with occult with no classic choroidal neovascularisation—Verteporfin in photodynamic therapy report 2. Am J Ophthalmol 2001;131:541–60.
TAP study group. Photodynamic therapy of subfoveal choroidal neovascularisation in age-related macular degeneration with verteporfin. Two year results of 2 randomised clinical trials—TAP report 2. Arch Ophthalmol 2001;119:198–207.
Retrieved from "
The Academy uses cookies to analyze performance and provide relevant personalized content to users of our website.
Categories:
Articles
Retina/Vitreous |
17224 | https://www.physics.unlv.edu/~jeffery/astro/mechanics/pendulum_conical.html | Caption: A diagram of "a conical pendulum in motion. The pendulum bob moves in a horizontal circle with constant speed v, θ = suspension angle, r = radius of bob's circular motion, h = vertical height of suspension above the plane of the bob's motion, L = length of the wire connecting the bob to the suspension point, T = wire's tension force acting on the bob, and mg=weight of the bob." (Slightly edited.)
The conical pendulum in motion is a bit like a planet. In both cases, there is a body perpetually falling toward the center, but it keeps missing because it has sideways motion or, in other words, angular momentum.
The analysis of the conical pendulum in motion is straightforward:
1. To satisfy Newton's 2nd law of motion (AKA F=ma) for a case of uniform circular motion, the horizontal component of the tension force must equal the centripetal force: Tsin(θ) = mv2/r ,
where the centripetal force is actually NOT a force, but rather "ma" of "F=ma" (i.e., Newton's 2nd law of motion (AKA F=ma)). The Tsin(θ) is the net horizontal force in "F=ma".
Note the bob is in acceleration in the horizontal direction since it is NOT in straight line motion. The net horizontal force Tsin(θ) is the cause of the acceleration.
2. To satisfy Newton's 2nd law of motion for the vertical direction where there is NOacceleration, there must be balanced forces: Tcos(θ) = mg which gives T = mg/cos(θ) .
3. Substituting T = mg/cos(θ) in the first formula above, we find mgtan(θ) = mv2/r = m(2πr/P)2/r = m[(4(π2)r)/P2] ,
where v = 2πr/P and P is period. Now we get the formulae θ = arctan[ ( 4(π2)r )/(gP2) ] , where arctan is the function arctangent.
P = sqrt[ ( 4(π2)r )/(gtan(θ)) ] .
Formulae for other variables can be derived, of course.
4. We can devise a fiducial value formula that is convenient for analyzing classroom demonstations using fiducial values: θ = 45°, tan(θ = 45°) = 1, r = 1 m, 4(π2) ≅ 40, and Earth's surface gravitational field fiducial value g = 9.8 m/s2 ≅ 10 m/s2. The fiducial value formula is P = ( 2.00708992 ... s) sqrt[ (r/(1 m))/[(g/(9.8 m/s2))(tan(θ)/1)] ] .
So if the demonstrator chooses r ≅ 1 m and θ ≅ 45°, the period P ≅ 2 s.
If the demonstrator chooses r ≅ 0.25 m and θ ≅ 45°, the period P ≅ 1 s.
5. The analysis above assumed an ideal case where there are NOresistive forces (e.g., air drag and friction at the suspension point) and NO driving forces.
In real demonstrations, there are both. In fact, the demonstrator, trying to maintain a uniform motion, adjusts their conical swinging driving force to roughly compensate for the resistive forces.
Credit/Permission: Don (AKA User:CosineKitty), 2009 / Public domain.
Image link: Wikimedia Commons.
Local file: local link: pendulum_conical.html.
File: Mechanics file: pendulum_conical.html. |
17225 | https://resourcecenter.byupathway.org/math/m14-07 | Languages
Find the X and Y Intercepts of a Line Using Algebra
... Math > Slopes, Intercepts, Equation of a Straight Line > Find the X and Y Intercepts of a Line Using Algebra
### Introduction
In this lesson, you will learn how to find the x- and y-intercepts using algebra.
This video illustrates the lesson material below. Watching the video is optional.
Find the X and Y-Intercepts of a Line using Algebra (08:37 mins) | Transcript
### Find the X- and Y-Intercepts Using Algebra
When the equation is written in the slope-intercept form ((y=mx+b)), you can find the y-intercept by looking at the equation. The value of b is the y-intercept. This is because the y-intercept is when the x value equals 0.
\begin{align} y&=mx+b &\color{navy}\small\text{Slope-intercept form}\\ y&=m(0)+b &\color{navy}\small\text{Substitute x=0}\\ y&=0+b &\color{navy}\small\text{Multiply the slope & 0}\\ y&=b &\color{navy}\small\text{Simplify so y = b}\end{align}
So, when (x = 0), the y-intercept is b.
To find the x-intercept, set (y = 0) and solve the equation for x. This is because when (y = 0) the line crosses the x-axis.
Example 1Use the following equation to find the x-intercept and the y-intercept: (y=\frac{3}{4}x-2).
The x-intercept is found whenever (y=0)
The y-intercept is found whenever (x=0)
Start by finding the x-intercept:\begin{align}y&=\frac{3}{4}x-2 &\color{navy}\small\text{Given equation}\\0&=\frac{3}{4}x-2 &\color{navy}\small\text{Set y = 0 to find the x-intercept}\\0 \color{green}\mathbf{+2} &=\frac{3}{4}x-2 \color{green}\mathbf{+2} &\color{navy}\small\text{Additive inverse}\\2 &=\frac{3}{4}x &\color{navy}\small\text{Simply both sides}\\2 \color{green}\mathbf{ (\frac{4}{3})} &=\frac{3}{4}x \color{green}\mathbf{(\frac{4}{3})} &\color{navy}\small\text{Multiply both sides by the reciprocal}\\\frac{8}{3} &= x &\color{navy}\small\text{Simplify both sides}\end{align}
The x-intercept is ((\frac{8}{3},0)), or 2.67.
Now, calculate the y-intercept:
\begin{align}y&=\frac{3}{4}x-2 &\color{navy}\small\text{Given equation}\\y&=\frac{3}{4}(0)-2 &\color{navy}\small\text{Set x = 0 to find the y-intercept}\\y&=0-2 &\color{navy}\small\text{Multiply (\frac{3}{4} (0)) = 0}\\y&=-2 &\color{navy}\small\text{Simplify by subtracting}\end{align}
In the equation (y=mx+b), the b in the equation is the y-intercept.
The y-intercept is -2 which can be written as an ordered pair ((0, -2)).
Example 2Use the following equation to find the x-intercept and the y-intercept: (3y-6x=12).
In Example 1, the equation was in slope-intercept form. This equation is in a different format. However, you can still use the same principles to find the x- and y-intercepts. Remember:
The x-intercept is found whenever (y=0)
The y-intercept is found whenever (x=0)
Start by finding the x-intercept:
\begin{align}3y - 6x&=12 &\color{navy}\small\text{Given equation}\\3(0) - 6x&=12 &\color{navy}\small\text{Set y = 0 to find the x-intercept}\\-6x & =12 &\color{navy}\small\text{Multiply 3(0) = 0}\\\frac{-6x}{\color{green}\mathbf{-6}} &=\frac{12}{\color{green}\mathbf{-6}} &\color{navy}\small\text{Divide both sides by -6}\\x &=-2 &\color{navy}\small\text{Simplify both sides}\end{align}
The x-intercept is ((-2,0)).
Now, calculate the y-intercept:
\begin{align}3y - 6x&=12 &\color{navy}\small\text{Given equation}\\3y - 6(0)&=12 &\color{navy}\small\text{Set x = 0 to find the y-intercept}\\3y & =12 &\color{navy}\small\text{Multiply -6(0) = 0}\\\frac{3y}{\color{green}\mathbf{3}} &=\frac{12}{\color{green}\mathbf{3}} &\color{navy}\small\text{Divide both sides by 3}\\y &=4 &\color{navy}\small\text{Simplify both sides}\end{align}
### Things to Remember
To find x-intercept, set (y = 0) and solve for x. The point will be ((x, 0)).
To find y-intercept, set (x = 0) and solve for y. The point will be ((0, y)).
Practice Problems
1. Find the y-intercept of the line: (
Solution
x
Solution: (y=-9)
)({\text{y}}=-3{\text{x}}-9)2. Find the x-intercept of the line: (
Solution
x
Solution: (x=3)
)({\text{y}}=-4{\text{x}}+12)3. Find the y-intercept of the line: (
Solution
x
Solution: (y=9)Details:To find the y-intercept, set ({\color{Green}x = 0}) and solve for y. The point will be ((0,y)):(y − 9 = 3{\color{Green}x})Substitute ({\color{Green}0}) in for ({\color{Green}x}):(y − 9 = 3 {\color{Green}(0)})Multiply 3 times 0, which gives you:(y − 9 = {\color{Green}0})Then add 9 to both sides to isolate y:(y − 9 {\color{Green}+9} = 0 {\color{Green}+9})Which gives you:(y = 9)So the y-intercept is ((0,9)).
)({\text{y}} − 9 = 3x)4. Find the x-intercept of the line: (
Solution
x
Solution: (x=6)
)({\text{y}} + 12 = 2x)5. Find the y-intercept of the line: (
Solution
x
Solution: (y=-4)
)({\text{x}}+6{\text{y}}=-24)6. Find the x-intercept of the line: (
Solution
x
Solution: (x=-4)Details:To find the x-intercept:, set ({\color{Green}y = 0}) and solve for x. The point will be ((x,0)):(5{\text{x}}+4{\text{y}}=-20)Substitute ({\color{Green}0}) in for y:(5{\text{x}}+4{\color{Green}(0)}=-20)Multiply 4 times 0 which gives you:(5{\text{x}}+{\color{Green}0}=-20)Add 5x to 0:(\frac{1}{5}(5{\text{x}})) = (5{\text{x}}=-20)Then multiply both sides by (\dfrac{1}{5}) (or divide both sides by 5, both will give you the same solution):(-20\dfrac{1}{5})Which gives you:({\text{x}}=-4)So the x-intercept is ((-4,0)).
)(5{\text{x}}+4{\text{y}}=-20)
Interpreting Slope
How to Find the Slope of a Line Between Two Points |
17226 | https://artofproblemsolving.com/wiki/index.php/Nine-point_circle?srsltid=AfmBOooE5rAwEB9JsOWhR7L12Q22KHfrTNcJT0GSZNFhdCJcQtC8O1Yn | Art of Problem Solving
Nine-point circle - AoPS Wiki
Art of Problem Solving
AoPS Online
Math texts, online classes, and more
for students in grades 5-12.
Visit AoPS Online ‚
Books for Grades 5-12Online Courses
Beast Academy
Engaging math books and online learning
for students ages 6-13.
Visit Beast Academy ‚
Books for Ages 6-13Beast Academy Online
AoPS Academy
Small live classes for advanced math
and language arts learners in grades 2-12.
Visit AoPS Academy ‚
Find a Physical CampusVisit the Virtual Campus
Sign In
Register
online school
Class ScheduleRecommendationsOlympiad CoursesFree Sessions
books tore
AoPS CurriculumBeast AcademyOnline BooksRecommendationsOther Books & GearAll ProductsGift Certificates
community
ForumsContestsSearchHelp
resources
math training & toolsAlcumusVideosFor the Win!MATHCOUNTS TrainerAoPS Practice ContestsAoPS WikiLaTeX TeXeRMIT PRIMES/CrowdMathKeep LearningAll Ten
contests on aopsPractice Math ContestsUSABO
newsAoPS BlogWebinars
view all 0
Sign In
Register
AoPS Wiki
ResourcesAops Wiki Nine-point circle
Page
ArticleDiscussionView sourceHistory
Toolbox
Recent changesRandom pageHelpWhat links hereSpecial pages
Search
Nine-point circle
Triangle ABC with the nine-point circle in light orange
The nine-point circle (also known as Euler's circle or Feuerbach's circle) of a given triangle is a circle which passes through 9 "significant" points:
The three feet of the altitudes of the triangle.
The three midpoints of the edges of the triangle.
The three midpoints of the segments joining the vertices of the triangle to its orthocenter. (These points are sometimes known as the Euler points of the triangle.)
"The nine-point circle is tangent to the incircle, has a radius equal to half the circumradius, and its center is the midpoint of the segment connecting the orthocenter and the circumcenter." -hankinjg
That such a circle exists is a non-trivial theorem of Euclidean geometry.
The center of the nine-point circle is the nine-point center and is usually denoted .
The nine-point circle is tangent to the incircle, has a radius equal to half the circumradius, and its center is the midpoint of the segment connecting the orthocenter and the circumcenter, upon which the centroid also falls.
It's also denoted Kimberling center .
Contents
1 First Proof of Existence
2 Second Proof of Existence
3 Common Euler circle
4 See also
First Proof of Existence
Since is the midpoint of and is the midpoint of , is parallel to . Using similar logic, we see that is also parallel to . Since is the midpoint of and is the midpoint of , is parallel to , which is perpendicular to . Similar logic gives us that is perpendicular to as well. Therefore is a rectangle, which is a cyclic figure. The diagonals and are diagonals of the circumcircle. Similar logic to the above gives us that is a rectangle with a common diagonal to . Therefore the circumcircles of the two rectangles are identical. We can also gain that rectangle is also on the circle.
We now have a circle with the points , , , , , and on it, with diameters , , and . We now note that . Therefore , , and are also on the circle. We now have a circle with the midpoints of the sides on it, the three midpoints of the segments joining the vertices of the triangle to its orthocenter on it, and the three feet of the altitudes of the triangle on it. Therefore, the nine points are on the circle, and the nine-point circle exists.
Second Proof of Existence
We know that the reflections of the orthocenter about the sides and about the midpoints of the triangle's sides lie on the circumcircle (side proof, midpoint proof), as do the vertices of the triangle. So, consider the homothety centered at with ratio . It maps the circumcircle of (and those 9 points) to a circle, including mapping the vertices of the triangle to its Euler points (by definition). This is the nine-point circle.
Common Euler circle
Let an acute-angled triangle with orthocenter be given.
be the point on opposite
Points and such that is a parallelogram. The line intersects at the points and
Prove that triangles and has common Euler (nine-point) circle.
Proof Denote is midpoint
Let’s consider Circumcenter of point is the midpoint point is the midpoint
Denote the centroid of
is the centroid of
Denote the midpoint of is the midpoint of
is the centroid of
Point is the circumcenter of is the orthocenter of
The triangles and has common circumcircle and common center of Euler circle (the midpoint of ) therefore these triangles has the common Euler circle.
vladimir.shelomovskii@gmail.com, vvsss
See also
Kimberling center
Center line
Evans point
Euler line
This article is a stub. Help us out by expanding it.
Retrieved from "
Categories:
Stubs
Definition
Art of Problem Solving is an
ACS WASC Accredited School
aops programs
AoPS Online
Beast Academy
AoPS Academy
About
About AoPS
Our Team
Our History
Jobs
AoPS Blog
Site Info
Terms
Privacy
Contact Us
follow us
Subscribe for news and updates
© 2025 AoPS Incorporated
© 2025 Art of Problem Solving
About Us•Contact Us•Terms•Privacy
Copyright © 2025 Art of Problem Solving
Something appears to not have loaded correctly.
Click to refresh. |
17227 | https://artofproblemsolving.com/wiki/index.php/2016_AMC_10A_Problems/Problem_20?srsltid=AfmBOoo3SNHomhZ_ji-pyo-NmfZr9F-qLggls7q2ajW4qKviFgmvisFD | Art of Problem Solving
2016 AMC 10A Problems/Problem 20 - AoPS Wiki
Art of Problem Solving
AoPS Online
Math texts, online classes, and more
for students in grades 5-12.
Visit AoPS Online ‚
Books for Grades 5-12Online Courses
Beast Academy
Engaging math books and online learning
for students ages 6-13.
Visit Beast Academy ‚
Books for Ages 6-13Beast Academy Online
AoPS Academy
Small live classes for advanced math
and language arts learners in grades 2-12.
Visit AoPS Academy ‚
Find a Physical CampusVisit the Virtual Campus
Sign In
Register
online school
Class ScheduleRecommendationsOlympiad CoursesFree Sessions
books tore
AoPS CurriculumBeast AcademyOnline BooksRecommendationsOther Books & GearAll ProductsGift Certificates
community
ForumsContestsSearchHelp
resources
math training & toolsAlcumusVideosFor the Win!MATHCOUNTS TrainerAoPS Practice ContestsAoPS WikiLaTeX TeXeRMIT PRIMES/CrowdMathKeep LearningAll Ten
contests on aopsPractice Math ContestsUSABO
newsAoPS BlogWebinars
view all 0
Sign In
Register
AoPS Wiki
ResourcesAops Wiki 2016 AMC 10A Problems/Problem 20
Page
ArticleDiscussionView sourceHistory
Toolbox
Recent changesRandom pageHelpWhat links hereSpecial pages
Search
2016 AMC 10A Problems/Problem 20
Contents
1 Problem
2 Solution 1
3 Solution 2
4 Solution 3 (Casework)
5 Video Solution by OmegaLearn
6 Video Solution
7 Video Solution 2
8 See Also
Problem
For some particular value of , when is expanded and like terms are combined, the resulting expression contains exactly terms that include all four variables and , each to some positive power. What is ?
Solution 1
All the desired terms are in the form , where (the part is necessary to make stars and bars work better.) Since , , , and must be at least ( can be ), let , , , and , so . Now, we use stars and bars (also known as ball and urn) to see that there are or solutions to this equation. We notice that , which leads us to guess that is around these numbers. This suspicion proves to be correct, as we see that , giving us our answer of
Note: An alternative is instead of making the transformation, we "give" the variables 1, and then proceed as above.
~ Mathkiddie(minor edits by vadava_lx).
Solution 2
By the Hockey Stick Identity, the number of terms that have all raised to a positive power is . We now want to find some such that . As mentioned above, after noticing that , and some trial and error, we find that , giving us our answer of
~minor edits by vadava_lx
Solution 3 (Casework)
The terms are in the form , where . The problem becomes distributing identical balls to different boxes such that each of the boxes has at least ball. The balls in a row have gaps among them. We are going to put or divisors into those gaps. There are cases of how to put the divisors.
Case : Put 4 divisors into gaps. It corresponds to each of has at least one term. There are terms.
Case : Put 3 divisors into gaps. It corresponds to each of has at least one term. There are terms.
So, there are terms. , and since we have
~isabelchen...
Video Solution by OmegaLearn
~ pi_is_3.15
Video Solution
Video Solution 2
with 5 Stars and Bars examples preceding the solution. Time stamps in description to skip straight to solution.
~IceMatrix
See Also
2016 AMC 10A (Problems • Answer Key • Resources)
Preceded by
Problem 19Followed by
Problem 21
1•2•3•4•5•6•7•8•9•10•11•12•13•14•15•16•17•18•19•20•21•22•23•24•25
All AMC 10 Problems and Solutions
These problems are copyrighted © by the Mathematical Association of America, as part of the American Mathematics Competitions.
Retrieved from "
Art of Problem Solving is an
ACS WASC Accredited School
aops programs
AoPS Online
Beast Academy
AoPS Academy
About
About AoPS
Our Team
Our History
Jobs
AoPS Blog
Site Info
Terms
Privacy
Contact Us
follow us
Subscribe for news and updates
© 2025 AoPS Incorporated
© 2025 Art of Problem Solving
About Us•Contact Us•Terms•Privacy
Copyright © 2025 Art of Problem Solving
Something appears to not have loaded correctly.
Click to refresh. |
17228 | http://www.studyphysics.ca/newnotes/20/unit01_kinematicsdynamics/chp02_intro/lesson05.htm | Lesson 5: Expressing Error in Measurements
Anytime an experiment is conducted, a certain degree of uncertainty must be expected. There are basically three reasons you might have an error in a measurement.
physical errors in the measuring device
Example 1: Your thermometer was dropped and has small air bubbles in it.
improper or sloppy use of measuring device
Example 2: When you used your thermometer, you measured the values in Fahrenheit instead of Celsius.
ambient conditions (temperature, pressure, etc.)
Example 3: Measuring the length of a piece of wood outdoors in the winter using a metal ruler, you forget that metal contracts in the cold making the ruler shorter.
Whatever the scale (units) on a measuring device, the error you should record is one half of the smallest division.
Often this is stated as the "possible error" in the measurement.
Example 4: If you measure the length of a pencil using a regular ruler (they usually have 1 mm divisions) and find that it is 102 mm long, you should write down…
102 ± 0.5 mm
The "plus-or-minus" (±) means "give-or-take" this much as the possible error. The length of the pencil could be as little as 101.5 mm, or as much as 102.5 mm.
At times, there may be an accepted value for a measurement (verified in laboratories with very high standards).
Some of these numbers are on the back of your data sheet. They are usually given with three sig digs.
It is often useful to compare your measurement to this accepted value in order to evaluate how accurate you were.
Calculating Errors
There are three common ways to calculate your error: absolute error, percentage difference, and percentage error.
Absolute Error is when you subtract the accepted value from your measured value…
Absolute Error = Measured Value - Accepted Value
A positive answer means you are over the accepted value.
A negative answer means you are under the accepted value
Percentage Error is the most common way of measuring an error, and often the most easy to understand.
Percentage Error = Absolute Error / Accepted Value
So, if you measured a pencil to be 102mm long, and an independent lab with high tech equipment measured it as 104mm, the percentage error is…
Percentage Error = (102mm - 104mm) / (104mm) = -0.02
Which means you got a -2% error. The minus sign just means that you were under the accepted value.
In high school labs, don’t be surprised if you obtain errors of 25%. The important part is, can you explain your errors!
Percentage Difference is useful if you have two measurements you’ve taken and you wish to see how different they are as a percentage. This is handy when you do not have an accepted value to compare to.
Percentage Difference = (difference in measurements) / (average of measurements)
Don’t confuse this with percentage error. Here we have two measurements you’ve made, but no "accepted value." For example, you measure the length of a desk twice and get the numbers 1.15m and 1.13m.
Percentage Difference = (1.15-1.13) / [( 1.15+1.13)/2] = 0.0175
A percent difference of 1.75%. |
17229 | https://economics.stackexchange.com/questions/53484/is-there-a-standard-term-for-the-elasticity-of-an-isoquant | production function - Is there a standard term for the elasticity of an isoquant? - Economics Stack Exchange
Join Economics
By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy.
Sign up with Google
OR
Email
Password
Sign up
Already have an account? Log in
Skip to main content
Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Loading…
Tour Start here for a quick overview of the site
Help Center Detailed answers to any questions you might have
Meta Discuss the workings and policies of this site
About Us Learn more about Stack Overflow the company, and our products
current community
Economics helpchat
Economics Meta
your communities
Sign up or log in to customize your list.
more stack exchange communities
company blog
Log in
Sign up
Economics
Home
Questions
Unanswered
AI Assist Labs
Tags
Chat
Users
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
Is there a standard term for the elasticity of an isoquant?
Ask Question
Asked 2 years, 10 months ago
Modified2 years, 10 months ago
Viewed 53 times
This question shows research effort; it is useful and clear
0
Save this question.
Show activity on this post.
Isoquants - the level sets of a production function f f - are very useful in microeconomics. For example, if we hold all but two inputs fixed, then the isoquant is a plane curve that quantifies the substitutability of those two inputs. Is there a standard term for the elasticity E E of an isoquant? This seems like a natural quantity to consider, since it tells you that a reduction of input 1 by a small percentage can be made up for by an increase of input 2 by E E times that percentage in order to hold output constant.
I know about two closely related concepts, but neither of them seem to be quite this:
The marginal rate of technical substitution (MRTS) gives the (negative) slope (i.e. derivative) of the isoquant, but not its elasticity.
The elasticity of substitution is similar, but (I believe) more complicated. As I understand it, it's the elasticity of the ratio of the ratio of inputs to the MRTS. This notion seems way more complicated than just the elasticity of the isoquant, which seems much more intuitive to me.
Note that the elasticity of an isoquant of two inputs x x and y y equals
∂y∂x x y=−M R T S x y=−∂f∂x∂f∂y x y=−∂f∂x∂f∂y x f y f,∂y∂x x y=−M R T S x y=−∂f∂x∂f∂y x y=−∂f∂x∂f∂y x f y f,
which is the (negative) ratio of the partial output elasticities with respect to the two inputs. So maybe there's no special term for this isoquant elasticity, since it's simply a ratio of two other standard quantities.
production-function
elasticity
marginal
elasticity-of-substitution
Share
Share a link to this question
Copy linkCC BY-SA 4.0
Improve this question
Follow
Follow this question to receive notifications
edited Nov 17, 2022 at 23:59
tparkertparker
asked Nov 13, 2022 at 19:44
tparkertparker
790 5 5 silver badges 19 19 bronze badges
Add a comment|
1 Answer 1
Sorted by: Reset to default
This answer is useful
2
Save this answer.
Show activity on this post.
The OP has correctly defined the "isoquant elasticity", for which one could argue that it should be the basic definition of "elasticity of substitution".
The reason why Hicks originally defined the "elasticity of substitution" by the much more complicated formula that we know, is that this more complicated formula involves essentially prices (through the MRS and optimizing behavior) and hence it has to do with the function of markets and economic optimization and choices proper, while the much more intuitive and simpler "isoquant elasticity" is in principle a purely "technical" metric, related to technology, even if we consider a broader concept of "technology" than machines and recipes. And economics, even in production theory, is not about "production engineering"... It so happens that I have been sketching a paper on exactly this metric for some time now, and whether and how it could be useful... maybe in business administration, but no so much in economics.
Share
Share a link to this answer
Copy linkCC BY-SA 4.0
Improve this answer
Follow
Follow this answer to receive notifications
edited Nov 18, 2022 at 19:49
answered Nov 17, 2022 at 23:05
Alecos PapadopoulosAlecos Papadopoulos
34.3k 1 1 gold badge 52 52 silver badges 120 120 bronze badges
Add a comment|
Your Answer
Thanks for contributing an answer to Economics Stack Exchange!
Please be sure to answer the question. Provide details and share your research!
But avoid …
Asking for help, clarification, or responding to other answers.
Making statements based on opinion; back them up with references or personal experience.
Use MathJax to format equations. MathJax reference.
To learn more, see our tips on writing great answers.
Draft saved
Draft discarded
Sign up or log in
Sign up using Google
Sign up using Email and Password
Submit
Post as a guest
Name
Email
Required, but never shown
Post Your Answer Discard
By clicking “Post Your Answer”, you agree to our terms of service and acknowledge you have read our privacy policy.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
production-function
elasticity
marginal
elasticity-of-substitution
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
7When do workers versus capital owners share of income increase?
0Is elasticity of substitution defined for non-homogeneous production functions?
3Different elasticities of substitution
2CRS, Homothetic Functions, and constant MRTS
2How to derive the input demand functions from a perfect substitutes production function
Hot Network Questions
Can a GeoTIFF have 2 separate NoData values?
Alternatives to Test-Driven Grading in an LLM world
Interpret G-code
How do trees drop their leaves?
My dissertation is wrong, but I already defended. How to remedy?
Does the curvature engine's wake really last forever?
Does a Linux console change color when it crashes?
Are there any world leaders who are/were good at chess?
Does the mind blank spell prevent someone from creating a simulacrum of a creature using wish?
How can the problem of a warlock with two spell slots be solved?
Calculating the node voltage
Is it safe to route top layer traces under header pins, SMD IC?
Can a cleric gain the intended benefit from the Extra Spell feat?
How long would it take for me to get all the items in Bongo Cat?
How to home-make rubber feet stoppers for table legs?
Checking model assumptions at cluster level vs global level?
Origin of Australian slang exclamation "struth" meaning greatly surprised
What "real mistakes" exist in the Messier catalog?
Determine which are P-cores/E-cores (Intel CPU)
Why include unadjusted estimates in a study when reporting adjusted estimates?
Discussing strategy reduces winning chances of everyone!
Numbers Interpreted in Smallest Valid Base
What were "milk bars" in 1920s Japan?
How do you emphasize the verb "to be" with do/does?
Question feed
Subscribe to RSS
Question feed
To subscribe to this RSS feed, copy and paste this URL into your RSS reader.
Why are you flagging this comment?
It contains harassment, bigotry or abuse.
This comment attacks a person or group. Learn more in our Code of Conduct.
It's unfriendly or unkind.
This comment is rude or condescending. Learn more in our Code of Conduct.
Not needed.
This comment is not relevant to the post.
Enter at least 6 characters
Something else.
A problem not listed above. Try to be as specific as possible.
Enter at least 6 characters
Flag comment Cancel
You have 0 flags left today
Economics
Tour
Help
Chat
Contact
Feedback
Company
Stack Overflow
Teams
Advertising
Talent
About
Press
Legal
Privacy Policy
Terms of Service
Your Privacy Choices
Cookie Policy
Stack Exchange Network
Technology
Culture & recreation
Life & arts
Science
Professional
Business
API
Data
Blog
Facebook
Twitter
LinkedIn
Instagram
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547
By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
Accept all cookies Necessary cookies only
Customize settings
Cookie Consent Preference Center
When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer.
Cookie Policy
Accept all cookies
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information.
Cookies Details
Performance Cookies
[x] Performance Cookies
These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance.
Cookies Details
Functional Cookies
[x] Functional Cookies
These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly.
Cookies Details
Targeting Cookies
[x] Targeting Cookies
These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device.
Cookies Details
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Necessary cookies only Confirm my choices |
17230 | https://mathwithfriends.org/easy-activities-for-practicing-slope/ | Easy Activities for Practicing Slope - Math With Friends
Skip to content
Math With Friends
Home
Blog
Teachers Pay Teachers
Youtube
Pinterest
About Me
Contact
Privacy Policy
Math With Friends
Navigation Menu
Navigation Menu
Home
Blog
Teachers Pay Teachers
Youtube
Pinterest
About Me
Contact
Privacy Policy
Easy Activities for Practicing Slope
by mathwithfriends@yahoo.com
Slope is one of the key concepts when learning about Linear Functions. Because of this, extra caution should be taken when selecting assignments for your students to work on. Today we will look at three easy activities for practicing slope that are guaranteed to get the most bang for your buck!
Alright, let’s dive in!
Easy Activities for Practicing Slope
Activity #1 – Partner Check Activity
Partner Check Activities are one of my all-time favorite types of practice for a variety of reasons. To summarize, students are paired together and given a worksheet with two columns – Partner A and Partner B. Each partner has a different problem, but they should get the same answer! This allows them to self-check as they work along. If they disagree, they switch problems and try to discover the error. This allows students to collaborate and work at their own pace. It also frees you up to work with the students that really need the help! Check out this activity!
With so many different scenarios that students need to find slope in, the best way is to mix it up. In this assignment students will be presented with points, tables, and standard form equations. This forces them to think about slope in a variety of ways! If you are looking for activities focusing on just points, just tables, or a mixture, check these others out!
Easy Activities for Practicing Slope
Activity #2 – Blooket
Blookets are one of my favorite electronic forms of informal assessment. They are flexible, quick, and most importantly: fun! Check this one out! I like this blooket because it has students practice finding slope from graphs and identifying the type of slope.
It also provides students with easily visible points to lower the difficulty level. This makes it more approachable and fun. Speaking of fun, we haven’t discussed the most important question! Which mode is best for this topic?
Since the questions should be quick, I have to suggest Gold Quest. Students answer questions and open chests which earn them gold! Occasionally they will be able to steal gold from other players! I can’t tell you how much students love stealing gold from their friends. Sometimes I will play as well and they love stealing my money! This is a home-run of an activity.
Easy Activities for Practicing Slope
Activity #3 – Desmos Activity
For our last activity we’ll be looking at this Desmos Activity Builder. Desmos Activities are a great way to get your students visually engaged, allow them to go at their own pace, and provide instant feedback. This activity begins with a quick recap of rise over run and then gets them practicing.
They’ll start by dragging points to create the given slope. Check marks will be seen when they get it right!
After a few of those slides, they will then match slopes with given graphs. It’s a perfect way of wrapping up their prior practice! You’ll be able to see exactly where all your students are by watching on the teacher overview page.
The next few slides are about y-intercepts, which you may want to do or may not want to do depending on the lesson. Even without these slides, this is a great way to get students actively engaged and learning at their own pace.
Well, I hope you enjoyed these ideas and found one or two you think will fit in your classroom! As always, email me at MathWithFriends@yahoo.com with any questions!
Neve | Powered by WordPress
We are using cookies to give you the best experience on our website.
You can find out more about which cookies we are using or switch them off in settings.
Accept
Close GDPR Cookie Settings
Privacy Overview
Strictly Necessary Cookies
Powered by GDPR Cookie Compliance
Privacy Overview
This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Strictly Necessary Cookies
Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.
Enable or Disable Cookies- [x] Enabled Disabled
If you disable this cookie, we will not be able to save your preferences. This means that every time you visit this website you will need to enable or disable cookies again.
Enable All Save Settings |
17231 | https://math.stackexchange.com/questions/165109/polar-equation-of-y-2 | calculus - Polar equation of $y = 2$ - Mathematics Stack Exchange
Join Mathematics
By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy.
Sign up with Google
OR
Email
Password
Sign up
Already have an account? Log in
Skip to main content
Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Loading…
Tour Start here for a quick overview of the site
Help Center Detailed answers to any questions you might have
Meta Discuss the workings and policies of this site
About Us Learn more about Stack Overflow the company, and our products
current community
Mathematics helpchat
Mathematics Meta
your communities
Sign up or log in to customize your list.
more stack exchange communities
company blog
Log in
Sign up
Home
Questions
Unanswered
AI Assist Labs
Tags
Chat
Users
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
Polar equation of y=2 y=2
Ask Question
Asked 13 years, 3 months ago
Modified9 years, 7 months ago
Viewed 9k times
This question shows research effort; it is useful and clear
0
Save this question.
Show activity on this post.
Maybe I do not understand what is going on here but I cannot get the right answer.
y=2 y=2
y 2+x 2=r 2 y 2+x 2=r 2
4+0 2=r 2 4+0 2=r 2
r=2 r=2
y=r sin θ y=r sinθ
1=sin θ 1=sinθ
θ=π/2 θ=π/2
This is wrong but I do not see anything wrong with my logic.
calculus
polar-coordinates
Share
Share a link to this question
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this question to receive notifications
edited Jul 1, 2012 at 0:38
Henry T. Horton
19k 6 6 gold badges 65 65 silver badges 80 80 bronze badges
asked Jul 1, 2012 at 0:33
user138246 user138246
Add a comment|
3 Answers 3
Sorted by: Reset to default
This answer is useful
5
Save this answer.
Show activity on this post.
Just because y=2 y=2, it does not mean that x=0 x=0. In particular, you want that, for any x x, y=2 y=2, but x x is not 0 0. Since you already know that
y=2 y=2
and that to change to polar coordinates, you can use
y=ρ sin θ y=ρ sinθ
,
we plug in y=2 y=2, which gives
2=ρ sin θ 2=ρ sinθ
or
2 csc θ=ρ 2 cscθ=ρ
Note that we don't consider the variable x x and
x=ρ cos θ x=ρ cosθ
, since it suffices to use only the y y-equation above, because it fully describes the dependence between θ θ and ρ ρ.
In particular note the radius can't be always 2 2, else you would get a circle! Everytime you state something such as x=0 x=0 or r=2 r=2, think the implications it carries, and it might help you to spot the flaw.
Share
Share a link to this answer
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this answer to receive notifications
answered Jul 1, 2012 at 0:39
Pedro♦Pedro
126k 19 19 gold badges 239 239 silver badges 406 406 bronze badges
4
Where does csc come from and what does p represent differently than r?user138246 –user138246 2012-07-01 00:40:17 +00:00 Commented Jul 1, 2012 at 0:40
csc θ cscθ is the cosecant function, which is simply 1 sin θ 1 sinθ. ρ ρ is rho, the greek r r which is usually used in polar coordinates for the ρ ρ adius. =)Pedro –Pedro♦ 2012-07-01 00:41:33 +00:00 Commented Jul 1, 2012 at 0:41
So all you did was divide everything by sin?user138246 –user138246 2012-07-01 00:43:17 +00:00 Commented Jul 1, 2012 at 0:43
In polar coordinates, one wirtes ρ ρ as a function of θ θ, that is ρ=ρ(θ)ρ=ρ(θ) just as we write y y as a function of x x in cartesian coordinates, y=y(x)y=y(x). That's why I wrote the equation explicitly.Pedro –Pedro♦ 2012-07-01 00:44:51 +00:00 Commented Jul 1, 2012 at 0:44
Add a comment|
This answer is useful
4
Save this answer.
Show activity on this post.
Remember that y=r sin(θ)y=r sin(θ). Hence,
r sin(θ)=2 r sin(θ)=2
is the equation in polar coordinates of the straight line y=2 y=2.
Share
Share a link to this answer
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this answer to receive notifications
answered Jul 1, 2012 at 0:34
user17762 user17762
Add a comment|
This answer is useful
0
Save this answer.
Show activity on this post.
Polar equation is
y=r sin θ=2=c o n s t a n t y=r sinθ=2=c o n s t a n t
Using this equation you computed correctly only one point.
x=0,y=2;θ=π/2,r=2 x=0,y=2;θ=π/2,r=2
y y is same for all θ θ and x x is a variable.
Share
Share a link to this answer
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this answer to receive notifications
answered Feb 21, 2016 at 11:49
NarasimhamNarasimham
42.5k 7 7 gold badges 46 46 silver badges 112 112 bronze badges
Add a comment|
You must log in to answer this question.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Report this ad
Related
5Laplacian in polar coordinates
1Express this polar equation in cartesian form
2How to convert from cartesian to polar equation
0Laplacian from cartesian to polar
1Finding the slope in a polar equation
0Converting polar equation to rectangular equation
1How to convert ∬f x y z(x,y,z)d y d z∬f x y z(x,y,z)d y d z to polar coordinates?
1Equation of off centre circle in polar coordinates
Hot Network Questions
Is there a way to defend from Spot kick?
Should I let a player go because of their inability to handle setbacks?
Is it ok to place components "inside" the PCB
ICC in Hague not prosecuting an individual brought before them in a questionable manner?
Interpret G-code
Is direct sum of finite spectra cancellative?
Direct train from Rotterdam to Lille Europe
Discussing strategy reduces winning chances of everyone!
Implications of using a stream cipher as KDF
How can the problem of a warlock with two spell slots be solved?
Is it safe to route top layer traces under header pins, SMD IC?
PSTricks error regarding \pst@makenotverbbox
For every second-order formula, is there a first-order formula equivalent to it by reification?
The rule of necessitation seems utterly unreasonable
Suggestions for plotting function of two variables and a parameter with a constraint in the form of an equation
Why are LDS temple garments secret?
Clinical-tone story about Earth making people violent
A time-travel short fiction where a graphologist falls in love with a girl for having read letters she has not yet written… to another man
Another way to draw RegionDifference of a cylinder and Cuboid
What is the feature between the Attendant Call and Ground Call push buttons on a B737 overhead panel?
Program that allocates time to tasks based on priority
Matthew 24:5 Many will come in my name!
Checking model assumptions at cluster level vs global level?
An odd question
Question feed
Subscribe to RSS
Question feed
To subscribe to this RSS feed, copy and paste this URL into your RSS reader.
Why are you flagging this comment?
It contains harassment, bigotry or abuse.
This comment attacks a person or group. Learn more in our Code of Conduct.
It's unfriendly or unkind.
This comment is rude or condescending. Learn more in our Code of Conduct.
Not needed.
This comment is not relevant to the post.
Enter at least 6 characters
Something else.
A problem not listed above. Try to be as specific as possible.
Enter at least 6 characters
Flag comment Cancel
You have 0 flags left today
Mathematics
Tour
Help
Chat
Contact
Feedback
Company
Stack Overflow
Teams
Advertising
Talent
About
Press
Legal
Privacy Policy
Terms of Service
Your Privacy Choices
Cookie Policy
Stack Exchange Network
Technology
Culture & recreation
Life & arts
Science
Professional
Business
API
Data
Blog
Facebook
Twitter
LinkedIn
Instagram
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547
By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
Accept all cookies Necessary cookies only
Customize settings
Cookie Consent Preference Center
When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer.
Cookie Policy
Accept all cookies
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information.
Cookies Details
Performance Cookies
[x] Performance Cookies
These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance.
Cookies Details
Functional Cookies
[x] Functional Cookies
These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly.
Cookies Details
Targeting Cookies
[x] Targeting Cookies
These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device.
Cookies Details
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Necessary cookies only Confirm my choices |
17232 | https://nrich.maths.org/problems/triangle-incircle-iteration | Skip to main content
Triangle incircle iteration
Keep constructing triangles in the incircle of the previous triangle. What happens?
Age
14 to 16
Challenge level
Exploring and noticing
Working systematically
Conjecturing and generalising
Visualising and representing
Reasoning, convincing and proving
Being curious
Being resourceful
Being resilient
Being collaborative
Problem
Image
Start with any triangle. Draw its inscribed circle (the circle which just touches each side of the triangle). Draw the triangle which has its vertices at the points of contact between your original triangle and its incircle. Now keep repeating this process starting with the new triangle to form a sequence of nested triangles and circles. What happens to the triangles?
If the angles in the first triangle are , and prove that the angles in the second triangle are given (in degrees) by
where takes the values , and . Choose some triangles, investigate this iteration numerically and try to give reasons for what happens.
Investigate what happens if you reverse this process (triangle to circumcircle to triangle...)
Getting Started
Have you noticed that, whatever triangle you start with, the
nested triangles get more and more like equilateral triangles? In
fact this convergence is very rapid.
The question tells you how to show why this happens by
considering iterative sequences.
Student Solutions
Meg offers this solution:
The tangent of a circle is at right-angles to the radius of the
circle. That is, if you join the centre point of the circle to a
point where the circle meets the outer triangle, it makes an angle
of with the side of the triangle.
Image
The bisector of the angles of triangle will all pass through
the centre of the circle.
Image
From this we know that
and
Hence
Now consider triangle XOZ. This triangle is isosceles, so
By similar arguments
and Hence the
new angles of the triangle are
Hence
It follows that .
A similar argument can be followed for and . If you
continue drawing triangles within circles, the angles will decrease
as shown here:
Triangle 1: a
Triangle 2:
Triangle 3: =
Triangle 4:
When you continue this iteration, you can demonstrate that the
term becomes less and less significant, and the sum of the rest of
the terms tends to 60 degrees. Hence the triangle tends to an
equilateral triangle.
Teachers' Resources
Why do this problem?
This problem leads to a result which is easy to guess visually
but not so easy to prove.
Geometrical, numerical, and algebraic ideas can all be used to
reach a solution, and properties of averaging can also come out of
the problem. Numerical patterns can be investigated using a
spreadsheet.
Possible approach
Use the interactivity (or accurately construct some triangles
with their inscribed circles) and see what happens to the angles in
the nested triangles.In order to see why this is happening, it's
important to make sure everyone knows that the centre of the
inscribed circle is at the point where the angle bisectors of the
original triangle meet, and that radii meet tangents at a right
angle. This information can be used to write expressions for the
three angles in the new triangle in terms of the original
angles.
Students could now create a spreadsheet which allows them to
input three angles which sum to 180 degrees and use their
expressions to work out the three new angles. By continuing the
sequence, the angles quickly converge.
Key questions
What seems to be happening to the angles in each new triangle
that we draw?
How can we calculate the angles of each new triangle if we
know the original angles?
Possible extension
Investigate the sequence and so on
to explain why the angles converge to their limit.
The three new angles are each the mean of a pair of the
original angles. In general, what happens if you keep finding the
mean of pairs of numbers to give three new numbers?
Possible support
Work with numerical examples and try to explain the patterns
formed. |
17233 | https://arxiv.org/abs/2204.01031 | [2204.01031] Extremals for $α$-Strichartz inequalities
Skip to main content
We gratefully acknowledge support from the Simons Foundation, member institutions, and all contributors.Donate
>math> arXiv:2204.01031
Help | Advanced Search
Search
GO
quick links
Login
Help Pages
About
Mathematics > Classical Analysis and ODEs
arXiv:2204.01031 (math)
[Submitted on 3 Apr 2022 (v1), last revised 18 Apr 2022 (this version, v2)]
Title:Extremals for $α$-Strichartz inequalities
Authors:Boning Di, Dunyan Yan
View a PDF of the paper titled Extremals for $\alpha$-Strichartz inequalities, by Boning Di and Dunyan Yan
View PDF
Abstract:A necessary and sufficient condition on the precompactness of extremal sequences for one dimensional $\alpha$-Strichartz inequalities, equivalently $\alpha$-Fourier extension estimates, is established based on the profile decomposition arguments. One of our main tools is an operator-convergence dislocation property consequence which comes from the van der Corput Lemma. Our result is valid in asymmetric cases as well. In addition, we obtain the existence of extremals for non-endpoint $\alpha$-Strichartz inequalities.
Comments:31 pages. Some small typos fixed
Subjects:Classical Analysis and ODEs (math.CA); Analysis of PDEs (math.AP); Functional Analysis (math.FA)
MSC classes:42A38 (Primary), 35B38, 35Q41 (Secondary)
Cite as:arXiv:2204.01031 [math.CA]
(or arXiv:2204.01031v2 [math.CA] for this version)
Focus to learn more
arXiv-issued DOI via DataCite
Related DOI:
Focus to learn more
DOI(s) linking to related resources
Submission history
From: Boning Di [view email]
[v1] Sun, 3 Apr 2022 08:42:59 UTC (29 KB)
[v2] Mon, 18 Apr 2022 23:10:27 UTC (29 KB)
Full-text links:
Access Paper:
View a PDF of the paper titled Extremals for $\alpha$-Strichartz inequalities, by Boning Di and Dunyan Yan
View PDF
TeX Source
Other Formats
view license
Current browse context:
math.CA
<prev | next>
new | recent | 2022-04
Change to browse by:
math
math.AP
math.FA
References & Citations
NASA ADS
Google Scholar
Semantic Scholar
export BibTeX citation Loading...
BibTeX formatted citation
×
Data provided by:
Bookmark
Bibliographic Tools
Bibliographic and Citation Tools
[x] Bibliographic Explorer Toggle
Bibliographic Explorer (What is the Explorer?)
[x] Connected Papers Toggle
Connected Papers (What is Connected Papers?)
[x] Litmaps Toggle
Litmaps (What is Litmaps?)
[x] scite.ai Toggle
scite Smart Citations (What are Smart Citations?)
Code, Data, Media
Code, Data and Media Associated with this Article
[x] alphaXiv Toggle
alphaXiv (What is alphaXiv?)
[x] Links to Code Toggle
CatalyzeX Code Finder for Papers (What is CatalyzeX?)
[x] DagsHub Toggle
DagsHub (What is DagsHub?)
[x] GotitPub Toggle
Gotit.pub (What is GotitPub?)
[x] Huggingface Toggle
Hugging Face (What is Huggingface?)
[x] Links to Code Toggle
Papers with Code (What is Papers with Code?)
[x] ScienceCast Toggle
ScienceCast (What is ScienceCast?)
Demos
Demos
[x] Replicate Toggle
Replicate (What is Replicate?)
[x] Spaces Toggle
Hugging Face Spaces (What is Spaces?)
[x] Spaces Toggle
TXYZ.AI (What is TXYZ.AI?)
Related Papers
Recommenders and Search Tools
[x] Link to Influence Flower
Influence Flower (What are Influence Flowers?)
[x] Core recommender toggle
CORE Recommender (What is CORE?)
Author
Venue
Institution
Topic
About arXivLabs
arXivLabs: experimental projects with community collaborators
arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.
Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.
Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs.
Which authors of this paper are endorsers? | Disable MathJax (What is MathJax?)
About
Help
Contact
Subscribe
Copyright
Privacy Policy
Web Accessibility Assistance
arXiv Operational Status
Get status notifications via email or slack |
17234 | https://health.ri.gov/infectious-diseases/rsv | Official State of Rhode Island website
RSV
Respiratory syncytial virus, or RSV, is a common respiratory virus that infects the nose, throat, and lungs. RSV symptoms make it difficult to distinguish it from the common cold or other respiratory viruses like the flu or COVID-19. RSV spreads in the fall and winter along with other respiratory viruses. It usually peaks in December and January. Track Rhode Island RSV data trends
Most people recover from RSV in a week or two, but infants and adults who are older or who have certain risk factors are more likely to develop severe RSV and need hospitalization. Learn more about these at-risk populations in the next section.
Learn more about RSV in infants and young children
Learn more about RSV in adults
At-Risk Populations
RSV does not usually cause severe illness in healthy adults and children. However, some people with RSV infection, especially infants younger than age 6 months and adults who are older or have certain risk factors, can become very sick and may need to be hospitalized. RSV is the leading cause of infant hospitalization in the United States.
RSV can also cause more severe illness such as bronchiolitis (inflammation of the small airways in the lungs) and pneumonia (infection of the lungs). It is the most common cause of bronchiolitis and pneumonia in children younger than age 1.
Childrenat greatest risk for severe illness from RSV include:
Adultsat highest risk for severe RSV disease include:
For the complete list of medical conditions and risk factors for severe RSV disease, see RSV Clinical Overview.
Prevention
It's common to get sick from respiratory viruses such as RSV, COVID-19, and flu, especially in the fall and winter. To learn how to protect yourself others from respiratory virus illnesses, visit health.ri.gov/respiratory viruses.
Because older adults, infants, and young children are at higher risk for getting very sick from RSV, the CDC recommends RSV vaccines or immunization for these groups. To find COVID-19, flu, RSV, and pneumococcal pneumonia vaccines based on your location and vaccine type, visit VaxAssist.
Vaccines for Adults
Immunizations to Protect Infants
The CDC recommends all babies be protected from severe RSV by one of two immunization options: a maternal RSV vaccine given to the mother during pregnancy or an RSV antibody given to your baby. Most babies do not need both. These products are not treatments for a child who already has RSV infection.
Care
Antiviral medication is not routinely recommended to fight RSV. Most RSV infections go away on their own in a week or two. But RSV can cause serious illness in some people.
Take steps to relieve symptoms
Featured Links
Data
Respiratory Syncytial Virus (RSV) Surveillance Data
Respiratory Virus Data
Resources
Guidance, Recommendations
Toolkits
External Resources
Video
Data
Department of Health
3 Capitol Hill
Providence, RI 02908
Email us
Directions
Phone: 401-222-5960
After Hours Phone: 401-276-8046
RI Relay 711
Contact us
Monday - Friday
8:30 AM - 4:30 PM
Food Protection 8:30 AM - 4 PM
Parking restrictions until 3 PM
Social Media Comment Policy
FOR EMPLOYEES
Center for Vital Records
Simpson Hall
6 Harrington Road
Cranston, RI 02920
Email us
Phone: 401-222-5960
After Hours Phone: 401-276-8046
RI Relay 711
Monday - Friday
Vital Records 7:30 AM - 3:30 PM
State Health Laboratories
50 Orms Street,
Providence, RI 02908
Email us
Phone: 401-222-5600
RI Relay 711
Monday - Friday
8:30 AM - 4:30 PM
Office of the State Medical Examiner
900 Highland Corporate Drive,
Building 3, Cumberland, RI 02864
Email us
Phone: 401-222-5500
RI Relay 711
Monday - Friday
8:30 AM - 4:30 PM
© 2025
RI.gov. This page last updated on September 22nd, 2025. Accessibility, data, and privacy policies | Top of page |
17235 | https://www.teachoo.com/subjects/cbse-maths/class-11th/ch9-11th-sequences-and-series/ | Class 6 Maths (Ganita Prakash)
Class 6 Maths (old NCERT)
Class 6 Science (Curiosity)
Class 6 Science (old NCERT)
Class 6 Social Science
Class 6 English
Class 7 Maths (Ganita Prakash)
Class 7 Maths (old NCERT)
Class 7 Science (Curiosity)
Class 7 Science (old NCERT)
Class 7 Social Science
Class 7 English
Class 8 Maths (Ganita Prakash)
Class 8 Maths (old NCERT)
Class 8 Science (Curiosity)
Class 8 Science (old NCERT)
Class 8 Social Science
Class 8 English
Class 9 Maths
Class 9 Science
Class 9 Social Science
Class 9 English
Class 10 Maths
Class 10 Science
Class 10 Social Science
Class 10 English
Class 11 Maths
Class 11 Computer Science (Python)
Class 11 English
Class 12 Maths
Class 12 English
Class 12 Economics
Class 12 Accountancy
Class 12 Physics
Class 12 Chemistry
Class 12 Biology
Class 12 Computer Science (Python)
Class 12 Physical Education
GST and Accounting Course
Excel Course
Tally Course
Finance and CMA Data Course
Payroll Course
Learn English
Learn Excel
Learn Tally
Learn GST (Goods and Services Tax)
Learn Accounting and Finance
GST Demo
GST Tax Invoice Format
Accounts Tax Practical
Tally Ledger List
Remove Ads
Login
Chapter 8 Class 11 Sequences and Series
Master Chapter 8 Class 11 Sequences and Series with comprehensive NCERT Solutions, Practice Questions, MCQs, Sample Papers, Case Based Questions, and Video lessons.
Start Learning Now
Updated for new NCERT - 2026-2026 Edition.
Solutions of Chapter 8 Sequences and Series of Class 11 NCERT book available free. All exercise questions, examples, miscellaneous are done step by step with detailed explanation for your understanding.
In this Chapter we learn about Sequences
Sequence is any group of numbers with some pattern.
Like 2, 4, 8, 16, 32, 64, 128, 256, ....
Or
1, 2, 3, 4, 5, 6, 7, 8
In this chapter we learn
What a sequence is - and what is finite, infinite sequence, terms of a sequence
What a series is - it is the sum of a sequence
Denoting Sum by sigma Σ, and what it means
What is an AP (Arithmetic Progression) is, and finding its nth term and sum
Inserting AP between two numbers
What is Arithmetic Mean (AM), and how to find it
What is a GP (Geometric Progression) is, and finding its nth term and sum
Inserting GP between two numbers
What is Geometric Mean (GM), and how to find it
Relationship between AM and GM
Sum of special series
Finding sum of series when nth term is given
Finding sum of series when nth term is not given
Check out the solutions below by clicking an exercise, or a topic
Serial order wise
Ex 8.1
Start Learning---Chapter-9/category/Ex-9.1/ "Master Ex 8.1 step-by-step") Ex 8.2
Start Learning Examples
Start Learning-an--2n-5-(ii)-an--n-3-4/category/Examples/ "Master Examples step-by-step") Miscellaneous
Start Learning--f(x)-f(y)-such-that--f(1)--3--find-n/category/Miscellaneous/ "Master Miscellaneous step-by-step") Arithmetic Progression
Start Learning Sum of Series
Start Learning
Concept wise
Finding Sequences
Start Learning-an--2n-5-(ii)-an--n-3-4/category/Finding-Sequences/ "Master Finding Sequences step-by-step") Arithmetic Progression (AP): Formulae based
Start Learning--Formulae-based/ "Master Arithmetic Progression (AP): Formulae based step-by-step") Arithmetic Progression (AP): Statement
Start Learning--Statement/ "Master Arithmetic Progression (AP): Statement step-by-step") Arithmetic Progression (AP): Calculation based/Proofs
Start LearningQ/category/Arithmetic-Progression-(AP)--Calculation-based-Proofs/ "Master Arithmetic Progression (AP): Calculation based/Proofs step-by-step") Inserting AP between two numbers
Start Learning Arithmetic Mean (AM)
Start Learning/ "Master Arithmetic Mean (AM) step-by-step") Geometric Progression(GP): Formulae based
Start Learning--Formulae-based/ "Master Geometric Progression(GP): Formulae based step-by-step") Geometric Progression(GP): Statement
Start Learning--Statement/ "Master Geometric Progression(GP): Statement step-by-step") Geometric Progression(GP): Calculation based/Proofs
Start Learning/category/Geometric-Progression(GP)--Calculation-based-Proofs/ "Master Geometric Progression(GP): Calculation based/Proofs step-by-step") Inserting GP between two numbers
Start Learning Geometric Mean (GM)
Start Learning/ "Master Geometric Mean (GM) step-by-step") AM and GM (Arithmetic Mean And Geometric mean)
Start Learning/ "Master AM and GM (Arithmetic Mean And Geometric mean) step-by-step") AP and GP mix questions
Start Learning Finding sum from series
Start Learning Finding sum from nth number
Start Learning/category/Finding-sum-from-nth-number/ "Master Finding sum from nth number step-by-step")
Why Learn This With Teachoo?
Updated for new NCERT - 2026-2026 Edition.
Solutions of Chapter 8 Sequences and Series of Class 11 NCERT book available free. All exercise questions, examples, miscellaneous are done step by step with detailed explanation for your understanding.
In this Chapter we learn about Sequences
Sequence is any group of numbers with some pattern.
Like 2, 4, 8, 16, 32, 64, 128, 256, ....
Or
1, 2, 3, 4, 5, 6, 7, 8
In this chapter we learn
What a sequence is - and what is finite, infinite sequence, terms of a sequence
What a series is - it is the sum of a sequence
Denoting Sum by sigma Σ, and what it means
What is an AP (Arithmetic Progression) is, and finding its nth term and sum
Inserting AP between two numbers
What is Arithmetic Mean (AM), and how to find it
What is a GP (Geometric Progression) is, and finding its nth term and sum
Inserting GP between two numbers
What is Geometric Mean (GM), and how to find it
Relationship between AM and GM
Sum of special series
Finding sum of series when nth term is given
Finding sum of series when nth term is not given
Check out the solutions below by clicking an exercise, or a topic
Hi, it looks like you're using AdBlock :(
Displaying ads are our only source of revenue. To help Teachoo create more content, and view the ad-free version of Teachooo... please purchase Teachoo Black subscription.
Join Teachoo Black at ₹39 only
Please login to view more pages. It's free :)
Teachoo gives you a better experience when you're logged in. Please login :)
Login |
17236 | https://www.albert.io/blog/how-to-do-midpoint-riemann-sum-and-other-approximations/ | Skip to content
➜
AP® Calculus AB-BC
How to Do Midpoint Riemann Sum and Other Approximations
The Albert Team
Last Updated On:
Exact integrals are beautiful, yet many real‐world shapes refuse to give tidy antiderivatives. Therefore, approximating area under a curve becomes essential in physics, economics, and life sciences. AP® Calculus Objectives LIM-5.A.1–5.A.4 expect mastery of these numerical techniques. This review walks through four classic methods—left, right, how to do midpoint Riemann sum, and trapezoidal Riemann sums—then touches on error analysis and calculator shortcuts.
Start practicing AP® Calculus AB-BC on Albert now!
What We Review
Why Approximate?
Imagine tracking the temperature of lake water every hour from sunrise to noon. The graph is jagged, and an algebraic formula might not exist. However, the area beneath that curve equals accumulated heat energy.
Whenever antiderivatives are messy or unknown, numerical sums explain how to evaluate definite integrals quickly and accurately.
Riemann Sums at a Glance
A Riemann sum chops an interval [a,b] into n subintervals of width \Delta x. For each subinterval, a sample height is chosen and multiplied by width.
Key symbols
\Delta x = \dfrac{b-a}{n} for uniform partitions
x_i = endpoint or midpoint used
f(x_i) = height of rectangle or trapezoid
Non-uniform widths also work, yet AP® examples usually stay uniform.
Left Riemann Sum
Definition
The left Riemann sum uses the left endpoint of each subinterval.
L_n = \sum_{i=0}^{n-1} f(x_i)\Delta x
Step-by-Step Example
Approximate the integral of f(x)=x^2 on [0,2] with n=4.
Width: \Delta x = \dfrac{2-0}{4}=0.5
Left endpoints: 0, 0.5, 1.0, 1.5
Heights: 0², 0.5², 1², 1.5² → 0, 0.25, 1, 2.25
Multiply and add:
L_4 = (0+0.25+1+2.25)\times0.5 = 1.75
Graphing these rectangles shows gaps above the curve; therefore, for an increasing function, the left sum underestimates.
Right Endpoint Approximation
Definition
The right Riemann sum uses right endpoints.
R_n = \sum_{i=1}^{n} f(x_i)\Delta x
Trigonometric Example
Approximate \int_{0}^{\pi} \sin xdx with n=3.
\Delta x = \dfrac{\pi-0}{3} = \dfrac{\pi}{3}
Right endpoints: \dfrac{\pi}{3}, \dfrac{2\pi}{3}, \pi
Heights: \sin\left(\dfrac{\pi}{3}\right)=\dfrac{\sqrt3}{2}, \sin\left(\dfrac{2\pi}{3}\right)=\dfrac{\sqrt3}{2}, \sin(\pi)=0
Sum:
R_3 = \left(\dfrac{\sqrt3}{2}+\dfrac{\sqrt3}{2}+0\right)\dfrac{\pi}{3} = \dfrac{\sqrt3\pi}{3}
Because \sin x is increasing then decreasing, some rectangles stand above and others below. However, over the whole interval error remains moderate. For a strictly increasing function, the right sum overestimates.
Ready to boost your AP® scores? Explore our plans and pricing here!
How to Do a Midpoint Riemann Sum
Why Midpoints Work Better
Midpoint rectangles straddle the curve, canceling symmetrical errors. Consequently, the midpoint method often doubles accuracy compared with left or right sums.
Formula
M_n = \sum_{i=1}^{n} f\left(\dfrac{x_{i-1}+x_i}{2}\right)\Delta x
Detailed Example: Nonlinear Function
Approximate \int_{1}^{5} \sqrt{x}dx using n=4.
Width: \Delta x = \dfrac{5-1}{4}=1
Subintervals: [1,2], [2,3], [3,4], [4,5]
Midpoints: 1.5, 2.5, 3.5, 4.5
Create a quick table
| | |
--- |
| Midpoint | f(x)=\sqrt{x} |
| 1.5 | \sqrt{1.5}\approx1.225 |
| 2.5 | \sqrt{2.5}\approx1.581 |
| 3.5 | \sqrt{3.5}\approx1.871 |
| 4.5 | \sqrt{4.5}\approx2.121 |
Multiply and add:
M_4 = 1(1.225+1.581+1.871+2.121)\approx6.80
The exact integral equals \dfrac{2}{3}(5^{3/2}-1^{3/2})\approx6.82, so the midpoint error is tiny.
Trapezoidal Riemann Sum
Bridging Rectangles and Trapezoids
The trapezoidal Riemann sum averages left and right heights, producing trapezoids that hug slanted segments better.
Formula
T_n = \dfrac{\Delta x}{2}\left[f(x_0)+2\sum_{i=1}^{n-1}f(x_i)+f(x_n)\right]
Exponential Decay Example
Approximate \int_{0}^{3} e^{-x}dx with n=3.
\Delta x = 1
Function values:
f(0)=1, f(1)=e^{-1}, f(2)=e^{-2}, f(3)=e^{-3}
Plug in:
T_3 = \dfrac{1}{2}\left[1 + 2(e^{-1}+e^{-2}) + e^{-3}\right] \approx 0.550
Quick Reference Chart
| | |
--- |
| Term | Meaning / Key Feature |
| Riemann Sum | Sum of “height × width” that approximates an integral |
| \Delta x | Subinterval width: (b – a)/n |
| Left Riemann Sum | Uses left endpoints; underestimates if f increasing |
| Right Endpoint Approximation | Uses right endpoints; overestimates if f increasing |
| Midpoint Rule | Uses midpoints |
| Trapezoidal Riemann Sum | Average of left and right heights |
| Concavity | Sign of f''(x) predicts over/under behavior |
Putting It All Together
Choosing among methods depends on speed and accuracy. Midpoint and trapezoidal rules offer superior precision for the same n, yet left or right sums sometimes appear in free-response because they are quicker to draw. Therefore, always state the partition, label \Delta x, and identify sample points, demonstrating how to do midpoint Riemann sum when appropriate. Showing this structure earns partial credit even if arithmetic slips.
Remember: approximation is a professional tool, not a shortcut. Engineers and statisticians rely on it daily.
Conclusion
Approximating area under a curve unlocks answers when exact calculus stalls. The left Riemann sum, right endpoint approximation, how to do midpoint Riemann sum, and trapezoidal Riemann sum each have clear formulas, predictable errors, and real-world value. Practice each method on polynomials, exponentials, and trig graphs. Mastery now will make the AP® exam—and future science courses—feel more manageable.
Sharpen Your Skills for AP® Calculus AB-BC
Are you preparing for the AP® Calculus exam? We’ve got you covered! Try our review articles designed to help you confidently tackle real-world math problems. You’ll find everything you need to succeed, from quick tips to detailed strategies. Start exploring now!
6.1 Exploring Accumulations of Change
6.3 Riemann Sums, Summation Notation, and Definite Integral Notation
Need help preparing for your AP® Calculus AB-BC exam?
Albert has hundreds of AP® Calculus AB-BC practice questions, free responses, and an AP® Calculus AB-BC practice test to try out.
Start practicing AP® Calculus AB-BC on Albert now!
Interested in a school license?
Bring Albert to your school and empower all teachers with the world's best question bank for: ➜ SAT® & ACT® ➜ AP® ➜ ELA, Math, Science, & Social Studies ➜ State assessments Options for teachers, schools, and districts.
EXPLORE OPTIONS
Popular Posts
AP® Score Calculators
Simulate how different MCQ and FRQ scores translate into AP® scores
AP® Review Guides
The ultimate review guides for AP® subjects to help you plan and structure your prep.
Core Subject Review Guides
Review the most important topics in Physics and Algebra 1.
SAT® Score Calculator
See how scores on each section impacts your overall SAT® score
ACT® Score Calculator
See how scores on each section impacts your overall ACT® score
Grammar Review Hub
Comprehensive review of grammar skills
AP® Posters
Download updated posters summarizing the main topics and structure for each AP® exam. |
17237 | https://www.ixl.com/math/grade-5/add-subtract-multiply-and-divide-decimals | SKIP TO CONTENT
Time to get in the zone!
Your teacher would like you to focus on skills in .
Let's pick a skill from these categories.
GG.1Add, subtract, multiply, and divide decimalsNZG
Share video
You are watching a video preview. Become a member to get full access!
You've reached the end of this video preview,
but the learning doesn't have to stop!
Join IXL today!
Become a member
Incomplete answer
You did not finish the question. Do you want to go back to the question?
or
Multiply:
3.1
×
15
ref_doc_title.
Excellent! You got that right!
or
Jumping to level 1 of 1
Excellent!
Now entering the Challenge Zone—are you ready?
Teacher tools
Work it out
Not feeling ready yet? These can help:
FF.6Division with decimal quotients
Division with decimal quotients - Fifth grade J9Z
DD.6Multiply two decimals: products up to hundredths
Multiply two decimals: products up to hundredths - Fifth grade FLL
AA.6Add and subtract decimal numbers
Add and subtract decimal numbers - Fifth grade 7VJ
Lesson: Adding decimals
Learn how to add decimals!
Lesson: Adding decimals
Lesson: Subtracting decimals
Learn how to subtract decimals!
Lesson: Subtracting decimals
Lesson: Multiplying decimals
Learn how to multiply decimals!
Lesson: Multiplying decimals
Lesson: Dividing decimals
Learn how to divide decimals!
Lesson: Dividing decimals
1 in 4 students uses IXL
for academic help and enrichment.
Pre-K through 12th grade
Sign up now |
17238 | https://www.sussex.ac.uk/study/careers/graduate-jobs/how-to-become/how-to-become-a-barrister | How to become a barrister : Choosing a career : ... : Study with us : University of Sussex
Cookies on our website
We use some essential cookies to make this website work.
We'd like to set additional cookies to understand how you use our site so we can improve it for everyone. Also, we'd like to serve you some cookies set by other services to show you relevant content.
View our privacy policy.
Reject all Accept all
University of Sussex
Home
Skip to main content
Accessibility
Staff
Current students
Open navigation menuClose navigation menu
Go up a section
Choosing a career
How to become a barrister
How to become a solicitor
How to become a psychologist
How to become a politician or work in government
How to become a teacher
How to become a journalist
How to become a writer
How to become an entrepreneur
Main menu
Homepage
Study with us
About us
News & events
International
Research
Collaborate
Alumni
Quick links
Undergraduate Open Days
Masters Open Days
Current students
Current staff
Jobs at Sussex
Staff search
Cookies / Privacy / Disclaimer
Study with us
Undergraduate
Courses
Foundation years
Thinking about university
Choose Sussex
Undergraduate Open Days
Clearing
Undergraduate Applicant Hub
Access to Sussex
Summer School
Masters
Courses
Online courses
How to apply
Masters Applicant Hub
PhD
Degrees
Guide to PhD study
Sussex Researcher School
How to apply
Contact us
Register for updates
Request a prospectus
Ask us a question
Chat to our students and staff
Visits and Open Days
Open Days and events
Campus tours
School and college visits
Virtual tour
Fees and funding
Tuition fees
Funding and scholarships
Cost of living
International students
Meet us at an event
Information by country
University preparation courses
Study Abroad at Sussex
English language courses
Visas and immigration
Student life
What's on campus
Brighton
Studying at Sussex
Student support
Accommodation
Careers
Terms and conditions
About us
About us
Rankings and figures
Our structure
Strategy and funding
Jobs
Term dates
How to get here
Our campus
Creating a sustainable university
Culture, equality and inclusion
Sussex in the community
Teachers' conferences
Improving access to higher education: widening participation
Recruit our graduates and students
Office for Students transparency information
Modern Slavery Act Statement
Working with the University: Procurement
Contact us
News & events
News
Features
Twelve women in Academia
Quantum computing Laboratory
Events
Graduation
The Sussex lectures
Science talks at Sussex
Media Centre
Contact us
Press releases
Our experts
Filming guidelines
Blogs and podcasts
International
International
International students
Global engagement
International research
Research
Research
Centres of Excellence
Explore our research
About our research
Research centres and groups
Our schools
Find a researcher
Publications
News
Contact us
Collaborate
Business and entrepreneurs
Expertise, consulting and research for business
Access to public funds for business
Investment and license opportunities
Business incubation services
Degree apprenticeships at Sussex
Meet the team
Public and third sector
Expertise, consulting and research for public and third sector
Access to public funds for public and third sector
Other
National and international policy makers
Citizen and the community
Recruitment and short courses
Conferences and facilities
Networks and events
Alumni
Alumni
News and events
Sussex people
Get involved
Alumni community
Global network
Falmer magazine
Support us
Search
Search US All People
Search US
Home
Study with us
Your future career
Finding a graduate job
Choosing a career
Current location: How to become a barrister
How to become a barrister
Find out how to become a barrister in the UK. You can also see advice about becoming a solicitor.
Information is correct as of March 2025. Before deciding whether to pursue a career, you’re advised to contact your careers service for the most up-to-date guidance.
How do you become a barrister
Barristers are legal specialists who work in courts of law representing clients. They work on criminal cases in magistrates or crown courts, civil cases, employment tribunals or other specialist areas.
Already a Sussex student? See our barrister sector guide.
Qualifications required
To become a barrister, you can:
study an accredited undergraduate law degree and pass with at least an upper second-class (2.1) or above
take a Bar training course - sometimes called Bar Vocational Course (BVC), Bar Practice Course (BPC) or Bar Training Course (BTC) - Some undergraduate law degrees will already include the vocational component’
complete a year long period of practical experience known as ‘Pupillage’ in a barristers’ chambers.
Your next step is to secure tenancy as a self-employed barrister in chambers or join a practice or agency such as the Crown Prosecution Service.
In addition to the above, later on in your career, you may also choose to study for a Masters-level Law degree and/or a PhD in Law.
If you haven’t studied a degree yet, and you’re considering one, browse our related subject areas at Sussex (you should check your course is accredited by the correct body):
Law
Criminology.
How to become a barrister without a law degree
If you have a degree in another subject you can still train to become a barrister. If you have a 2.2 or above you'll be able to go on to a:
graduate-entry Law degree
MA in Law
Graduate Diploma in Law (GDL)
You'll then need to take one of the newBar vocational courses.
How to get a law degree
To get on to a law degree you usually need three A-levels and a range of GCSEs.
Your subjects don’t have to be law-related but it might be helpful to take courses with strong research and communication elements, such as law, history, geography, politics, the sciences and languages.
Taking these kind of subjects may also help you when writing about your decision to study a law degree in your personal statement.
See our guide to writing a UCAS personal statement and writing a Masters personal statement.
Skills required
You will need to be:
analytical
accurate, logical and methodical
a good problem solver
confident
meet deadlines and work well under pressure
a good communicator.
Careers website Prospects has good advice about becoming a lawyerand a job profile of a barrister.
Earning potential for a barrister
£22k starting salary for a fully qualified barrister (but this depends on your employer)
£90k average salary (but this depends on your employer).
40 hours a week (but hours are variable with some weekend and evening work)
5 years is how long it takes to become a barrister although this does depend on whether you decide to study a postgraduate degree or if your first degree was not in Law
salary based on information from careers website Prospects.
How to get experience as a barrister
It's important to get as much work experience as possible if you want to become a barrister.
You could try:
completing a mini pupillage in a barrister’s chamber- this is the most common type of experience
marshalling a judge- contact the clerk of the court for opportunities
sitting in the public gallery at criminal court hearings so you can witness proceedings
getting a part-time job at a solicitors or law firm
joining the Sussex Law Clinicwhere you'll develop legal, professional and vocational skills by providing free legal advice and education to the local community
joining a law society while you’re at university
working in a court in roles such as a court usher.
Other career paths in law
See some of the other legal roles/careers you can go into in the UK:
How to become a solicitor in the UK
Solicitors give legal support to clients and usually work for organisations, government, private practices or within the court system.It usually takes around six years to qualify as a solicitor.
Qualifications
You usually need:
an accredited undergraduate law degree, which you may need to pass with at least a lower second-class (2:2) or above
a Legal Practice Course.
You may then need to do a two-year training contract with a law firm.
If you already have an undergraduate degree in a subject that is not law, you may be able to do a conversion course.
Studying at Sussex
If you choose to study your degree at Sussex, you’ll benefit from:
careers support for up to three years after you graduate
the chance to join our alumni network Sussex Connect
mentoring schemes so you get real-world advice, support and experience while you study.
What do you want to do next?
study with us Explore our courses at Sussex
support Find out more about careers
school Find out about the School of Law, Politics and Sociology
visit us Come to an Open Day
Back to previous menu
Choosing a career
How to become a barrister
How to become a solicitor
How to become a psychologist
How to become a politician or work in government
How to become a teacher
How to become a journalist
How to become a writer
How to become an entrepreneur
X
Facebook
Instagram
YouTube
Vimeo
Tiktok
Linkedin
University of Sussex
Sussex House, Falmer
Brighton, BN1 9RH
United Kingdom
Course and application enquiries:
Ask us a question
General enquiries:
+44 (0)1273 606755
information@sussex.ac.uk
Copyright © 2025, University of Sussex
Accessibility
Privacy
Cookies
Back to top |
17239 | https://math.stackexchange.com/questions/3803593/prove-sum-i-1n-sqrta-i-ge-n-1-sum-i-0n-frac1-sqrta-i | inequality - prove $\sum_{i=1}^{n}\sqrt{a_i}\ge (n-1)\sum_{i=0}^{n}\frac{1}{\sqrt{a_i}}$ - Mathematics Stack Exchange
Join Mathematics
By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy.
Sign up with Google
OR
Email
Password
Sign up
Already have an account? Log in
Skip to main content
Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Loading…
Tour Start here for a quick overview of the site
Help Center Detailed answers to any questions you might have
Meta Discuss the workings and policies of this site
About Us Learn more about Stack Overflow the company, and our products
current community
Mathematics helpchat
Mathematics Meta
your communities
Sign up or log in to customize your list.
more stack exchange communities
company blog
Log in
Sign up
Home
Questions
Unanswered
AI Assist Labs
Tags
Chat
Users
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
prove ∑n i=1√a i≥(n−1)∑n i=0 1√a i [duplicate]
Ask Question
Asked 5 years, 1 month ago
Modified5 years, 1 month ago
Viewed 267 times
This question shows research effort; it is useful and clear
4
Save this question.
Show activity on this post.
This question already has answers here:
Prove that √x 1+√x 2+⋯+√x n≥(n−1)(1√x 1+1√x 2+⋯+1√x n) (2 answers)
Closed 5 years ago.
prove n∑i=1√a i≥(n−1)n∑i=1 1√a i if n∑i=1 1 1+a i=1
My try: i tried substituiting y i=1 1+a i thus ∑y i=1
also rearranging inequality we have to prove n∑i=1(√a i+1√a i)≥n n∑i=1 1√a i
putting value in terms of y i n∑i=1 1√y i(1−y i)≥n n∑i=1√y i√1−y i.i am stuck now i dont know which inequality to use.i tried using A-M≥ H-M inequality.
source ' inequalities A mathematical Olympiad approach'
inequality
summation
contest-math
rearrangement-inequality
Share
Share a link to this question
Copy linkCC BY-SA 4.0
Cite
Follow
Follow this question to receive notifications
edited Aug 26, 2020 at 8:09
Michael Rozenberg
208k 31 31 gold badges 171 171 silver badges 294 294 bronze badges
asked Aug 26, 2020 at 5:53
Hari Ramakrishnan SudhakarHari Ramakrishnan Sudhakar
12.1k 2 2 gold badges 16 16 silver badges 53 53 bronze badges
13
Is the index i=0 correct for the RHS?user –user 2020-08-26 06:36:52 +00:00 Commented Aug 26, 2020 at 6:36
@user Most likely not. Summation should go from 1 to n on both sides and it should involve a family of strictly positive reals indexed over the natural interval [1,n].ΑΘΩ –ΑΘΩ 2020-08-26 06:42:14 +00:00 Commented Aug 26, 2020 at 6:42
@user typo corrected!Hari Ramakrishnan Sudhakar –Hari Ramakrishnan Sudhakar 2020-08-26 06:45:19 +00:00 Commented Aug 26, 2020 at 6:45
1 @Gary not only that, but we could do with more detailed references: where does this inequality come from? what is its background?ΑΘΩ –ΑΘΩ 2020-08-26 06:52:29 +00:00 Commented Aug 26, 2020 at 6:52
1 There are several solutions on AoPS: approach0.xyz/search/…. – E.g. here: artofproblemsolving.com/community/c6h33608p208903 and here: artofproblemsolving.com/community/c6h5592Martin R –Martin R 2020-08-26 07:06:21 +00:00 Commented Aug 26, 2020 at 7:06
|Show 8 more comments
4 Answers 4
Sorted by: Reset to default
This answer is useful
2
Save this answer.
Show activity on this post.
n∑i=i 1 1+a i=1 ...(i)
So a i>0 for all i∈(1,n) and for n>1.
We have to prove n∑i=1√a i≥(n−1)n∑i=1 1√a i
or prove, n∑i=1 1+a i√a i≥n n∑i=1 1√a i
or prove, (n∑i=1 1+a i√a i)(n∑i=1 1 1+a i)≥n n∑i=1 1+a i√a i 1 1+a i ...(ii)
WLOG, assume, a i≤a i+1 for 1≤i≤(n−1).
Say, f(a i)=1+a i√a i=√a i+1√a i. So, for all a i>0, f(a i)=f(1 a i)
Say, g(a i)=1 1+a i. We can see g(a i+1)≤g(a i).
We can see g(a i) is non-increasing. We now need to prove that f(a i) is non-decreasing. With that (ii) holds good as per Chebyshev's inequality.
It is easy to see that the function is non-decreasing for a i>1. It is also easy to see that a i can be <1 only for one value of i, at max, due to given condition (i).
Say, a 1<1. We also know that 1 1+a 2≤1−1 1+a 1=1 1+1 a 1.
So, a 2≥1 a 1 and f(a 1)=f(1 a 1)≤f(a 2)≤...≤f(a n)
With this, we prove that (ii) holds good as per Chebyshev's inequality.
Share
Share a link to this answer
Copy linkCC BY-SA 4.0
Cite
Follow
Follow this answer to receive notifications
answered Aug 26, 2020 at 9:18
Math LoverMath Lover
52.2k 3 3 gold badges 24 24 silver badges 46 46 bronze badges
1
1 I lost at first that important detail assuming f not decreasing as a simple condition. I've now rielaborated my answer including that. Thanks user –user 2020-08-26 10:26:52 +00:00 Commented Aug 26, 2020 at 10:26
Add a comment|
This answer is useful
2
Save this answer.
Show activity on this post.
Rearrangement works again!
Let a i=n∑i=1 x i−x i x i, where i∈{1,2,...,n−1} and x 1, x 2,...,x n are positives.
Thus, a n=n∑i=1 x i−x n x n and we need to prove that: n∑i=1√n∑i=1 x i−x i x i≥(n−1)n∑i=1√x i n∑i=1 x i−x i and since the last inequality is homogeneous, we can assume that n∑i=1 x i=n and we need to prove that: n∑i=1√n−x i x i≥(n−1)n∑i=1√x i n−x i or n∑i=1(√n−x i x i−(n−1)√x i n−x i)≥0 or n∑i=1 1−x i√x i(n−x i)≥0. Now, let x 1≤x 2≤...≤x n.
Thus, for i>j we have 1−x i≤1−x j and 1√x i(n−x i)≤1√x j(n−x j) because the last it's x j(n−x j)≥x i(n−x i) or (x i−x j)(n−x i−x j)≥0, which is obvious.
Thus, (1−x 1,1−x 2,...,1−x n) and (1√x 1(n−x 1),1√x 2(n−x 2),...,1√x n(n−x n)) have the same ordering and by Chebyshov we obtain: n∑i=1 1−x i√x i(n−x i)≥1 n n∑i=1(1−x i)n∑i=1 1√x i(n−x i)=0 and we are done!
Share
Share a link to this answer
Copy linkCC BY-SA 4.0
Cite
Follow
Follow this answer to receive notifications
edited Aug 26, 2020 at 8:12
answered Aug 26, 2020 at 7:11
Michael RozenbergMichael Rozenberg
208k 31 31 gold badges 171 171 silver badges 294 294 bronze badges
1
2 @Explicitly what is the family x in relation to the given family a? how do you introduce it, how do you justify its existence? From the point of view of full rigour, that would be the only observation I have to make regarding your argument.ΑΘΩ –ΑΘΩ 2020-08-26 07:24:17 +00:00 Commented Aug 26, 2020 at 7:24
Add a comment|
This answer is useful
2
Save this answer.
Show activity on this post.
We have that
n∑i=1√a i≥(n−1)n∑i=1 1√a i⟺n∑i=1(√a i+1√a i)≥n n∑i=1 1√a i
and since
n∑i=1 1 1+a i=n∑i=1 1√a i 1√a i+√a i=1
by Chebyshev's Inequality we obtain
n∑i=1(√a i+1√a i)⋅n∑i=1 1√a i 1√a i+√a i≥n n∑i=1 1√a i
indeed assuming wlog a i not decreasing by x=√a i we have that
f(x)=f(1 x)=x+1 x⟹f′(x)=1−1 x 2 is not decreasing for x≥1
g(x)=1 x x+1 x=1 x 2+1 is not increasing
moreover we have that
1 1+a 1+1 1+a 2≤1⟺a 1 a 2≥1⟺√a 1 a 2≥1⟺√a 2≥1√a 1
therefore if a 1≤1 we have that
f(√a 1)=f(1√a 1)≤f(√a 2)
and the condition for the application of the inequality is preserved.
Share
Share a link to this answer
Copy linkCC BY-SA 4.0
Cite
Follow
Follow this answer to receive notifications
edited Aug 26, 2020 at 10:25
answered Aug 26, 2020 at 8:43
useruser
164k 14 14 gold badges 84 84 silver badges 157 157 bronze badges
Add a comment|
This answer is useful
0
Save this answer.
Show activity on this post.
For this solution, I will only use this inequality, AM-GM Inequality and C-S. We need to prove that: n∑i=1√a i≥(n−1)n∑i=1 1√a i if n∑i=1 1 1+a i=1
Case I: Assume a i≥1 for all i till n
1=n∑i=1 1 1+a i≥n 2 n+∑n i=0 a i⇒n∑i=0 a i≥n(n−1)⇒n+n∑i=0 a i≥n(n−1)+n
⇒2 n∑i=0√a i≥n(n−1)+n
As ∑n i=0√a i≥n,
n∑i=0√a i≥n(n−1)
Now it's enough to prove that
n(n−1)≥(n−1)n∑i=1 1√a i⇒n≥n∑i=1 1√a i which is obvious for this case.
Case II:
Assume that some a i≥1 and others ≤1.
Note that, as mentioned in @MathLover's solution, only one a i can be ≤1. Let a 1 be less than 1 and rename others a 2,...,a n where 1≤k≤n.
Just to reduce this case hence the solution, the cases that arise in this case are left, as resolving them is very easy, in the sense that they are similar to resolving case I as before. (and the other is very easy) Just assume 1 1+a 1=p,0≤p≤1 for one case in this case, and n∑i=2 1 1+a i=1−p,0≤1−p≤1 for the other. Just prove the original inequality for these two cases, and sum up the hypotheses and the inequalities to get what was desired. Done!
Share
Share a link to this answer
Copy linkCC BY-SA 4.0
Cite
Follow
Follow this answer to receive notifications
edited Aug 27, 2020 at 5:31
answered Aug 26, 2020 at 12:36
Book Of FlamesBook Of Flames
1,536 1 1 gold badge 9 9 silver badges 17 17 bronze badges
8
And what about if some a i is <1 and others >1?user –user 2020-08-26 13:18:53 +00:00 Commented Aug 26, 2020 at 13:18
@user Now is my solution correct? I updated it on your advice.Book Of Flames –Book Of Flames 2020-08-26 14:02:08 +00:00 Commented Aug 26, 2020 at 14:02
I'll take a look to that!user –user 2020-08-26 14:06:19 +00:00 Commented Aug 26, 2020 at 14:06
case 2 is not valid given the condition ∑1 1+a i=1. Also, in case 3, you can have only one a i less than 1.Math Lover –Math Lover 2020-08-26 16:04:15 +00:00 Commented Aug 26, 2020 at 16:04
1 @MathLover Yes.... You are right. I decide to reduce my solution to case I and add your point, but not today. It's late night. I will do it tomorrow.Book Of Flames –Book Of Flames 2020-08-26 16:18:11 +00:00 Commented Aug 26, 2020 at 16:18
|Show 3 more comments
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
inequality
summation
contest-math
rearrangement-inequality
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Linked
17Prove that √x 1+√x 2+⋯+√x n≥(n−1)(1√x 1+1√x 2+⋯+1√x n)
0Question involving Inequality
Related
7Proving that ∑n i=1 a i 1−a i≤∑n i=1 b i 1−b i
17Prove that √x 1+√x 2+⋯+√x n≥(n−1)(1√x 1+1√x 2+⋯+1√x n)
11Typical Olympiad Inequality? If ∑n i a i=n with a i>0, then ∑n i=1(a 3 i+1 a 2 i+1)4≥n
2prove ∑c y c a 3 b≥a b+b c+c a if a,b,c>0
2How to prove this inequality ∑n r=1 a r√n−1 1−a r≥∑n r=1√a r
5prove that (∑n i=1 a i)(∑n i=1 a n−1 i)≤n∏n i=1 a i+(n−1)∑n i=1 a n i.
4Minimizing ∑2023 i=1[(a i+a i+1)(1 a i+1 a i+1)+1 a i a i+1], where the a i are a permutation of {1,...,2024}
0Prove that (∑n i=1 λ i a i)2∑n i=1 λ i b i≥min
Hot Network Questions
в ответе meaning in context
How can blood fuel space travel?
Analog story - nuclear bombs used to neutralize global warming
What's the expectation around asking to be invited to invitation-only workshops?
A time-travel short fiction where a graphologist falls in love with a girl for having read letters she has not yet written… to another man
Do sum of natural numbers and sum of their squares represent uniquely the summands?
Is existence always locational?
Discussing strategy reduces winning chances of everyone!
How can the problem of a warlock with two spell slots be solved?
Matthew 24:5 Many will come in my name!
Why are LDS temple garments secret?
Riffle a list of binary functions into list of arguments to produce a result
Is it possible that heinous sins result in a hellish life as a person, NOT always animal birth?
Is direct sum of finite spectra cancellative?
Can I go in the edit mode and by pressing A select all, then press U for Smart UV Project for that table, After PBR texturing is done?
Bypassing C64's PETSCII to screen code mapping
Proof of every Highly Abundant Number greater than 3 is Even
Is it safe to route top layer traces under header pins, SMD IC?
Do we declare the codomain of a function from the beginning, or do we determine it after defining the domain and operations?
Does the curvature engine's wake really last forever?
Why include unadjusted estimates in a study when reporting adjusted estimates?
Should I let a player go because of their inability to handle setbacks?
Alternatives to Test-Driven Grading in an LLM world
Is it ok to place components "inside" the PCB
Why are you flagging this comment?
It contains harassment, bigotry or abuse.
This comment attacks a person or group. Learn more in our Code of Conduct.
It's unfriendly or unkind.
This comment is rude or condescending. Learn more in our Code of Conduct.
Not needed.
This comment is not relevant to the post.
Enter at least 6 characters
Something else.
A problem not listed above. Try to be as specific as possible.
Enter at least 6 characters
Flag comment Cancel
You have 0 flags left today
Mathematics
Tour
Help
Chat
Contact
Feedback
Company
Stack Overflow
Teams
Advertising
Talent
About
Press
Legal
Privacy Policy
Terms of Service
Your Privacy Choices
Cookie Policy
Stack Exchange Network
Technology
Culture & recreation
Life & arts
Science
Professional
Business
API
Data
Blog
Facebook
Twitter
LinkedIn
Instagram
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547 |
17240 | https://www.thepaystubs.com/blog/tax/how-to-calculate-total-revenue?srsltid=AfmBOorM-gqlrDeeG6rhnueVsFKM7mYbBJPuma19bBSUo4KrQn5oUoJY | Chat We Will Attend To You In Seconds
+1 (855) 906-2266
Email
Preview our variety of stub samples and create a stub using your
favorite
How To Calculate Total Revenue - The Full Guide
As anyone who works in accounting or runs a business will know, there are a million ways to view revenue and profit numbers; using financial formulae, spreadsheets, and financial statements. These values can be broken down and analyzed from numerous angles. However, one of the most critical factors to any production is your total revenue income or the total amount that your company brings in before expenses are considered.
When looking for past total revenue figures, your income statement should always be your first port of call. This figure should be at the top of any income statement and is easy to find. If, however, you are looking to predict or forecast what your total revenue income will be in the future, your income statement will not be that useful. Maybe you are trying to figure out if it would be a good idea to lower your prices or offer a discount on services, in this case, the best tool to use would be the total revenue formula. This will help you forecast profits for your business with just a few simple steps!
Table Of Contents
The Formula For Total Revenue
The absolute best way to simply calculate how much revenue you will make is through this formula. To forecast how much profit a business will make in the future, typically you might just rely on your income statement to get a baseline of how much your company has sold historically, this can often lead to relying heavily on formulae. Another key factor businesses often monitor is the interest rate forecast, as changes in interest rates can directly impact borrowing costs and consumer spending, which in turn can influence revenue projections. Instead, you can use this simple calculation for forecasting purposes, it can save time and is generally more effective
The formula for calculating total revenue looks like this;
TOTAL REVENUE = QUANTITY SOLD X PRICE
For example, a baker might sell a loaf of bread for $8 per loaf, if he regularly sells 100 loaves each month his total revenue will be $800 a month (100 x 8 = 800)
Now, where this formula becomes especially useful is if a company, for example, the baker again, decides to lower his prices to $6.50 to try and increase sales, he can predict how much his total revenue will drop if he still sells the same amount of bread (100 x 6.50 = 650) so his total revenue per month would drop to $650 if his sales remain the same.
You can also use the same formula to determine how many more loaves of bread the baker would need to sell at the reduced price to match his previous revenue for the month. This is done by dividing his previous revenue by the new lower price.
So, (800 / 6.50 = 123) the baker would need to sell 123 loaves of bread at this discounted price in order to match what he had previously earned in total revenue. This decision, of course, relies on the baker or any business owner having a strong knowledge of the market in which they sell, as lowering the price to increase sales is a bold move and means many more products will have to be sold to see an actual profit. It is also vital that you know your target audience for any product you sell as marketing and other selling methods for your product will rely on this knowledge.
Also read: Is Depreciation an Operating Expense?
Also read: Check Your EIDL Loan Status
Your Income Statement and What It Shows You About Total Revenue
You can also find the total revenue for your business on your income statement. Each income statement is essentially a finalized history of how your business has performed over a period of time. The statement can range from monthly, to quarterly to yearly, but it is often advised that monthly statements are easier to keep track of revenue and profits.
In comparison to other statements and financial documents, an income statement is fairly easy to read as all the information is presented in a very straightforward and standard way. The main sections of any income statement are as follows;
Total Revenue - This is usually the first section of the statement and one of the most important. It will tell you the total amount of money that your business has brought in over this period. It can be from your main or primary source of income, as well as any other sources of income you may have.
Cost Of Goods Sold - The COGS will break down the cost of producing the goods that your business sells during this accounting period. Anything that is essential to the production of your product or goods, which can range from staff, materials, software, office supplies, etc belongs in this category.
Operating Expenses - This section differs from the Cost Of Goods Sold as it breaks down everything you spend on operating your business over this accounting period. This could be warehouse or office rent, marketing expenses, or even office snacks. Now, when you subtract operating expenses from gross profit, you will get your net income - or what some people just know as profit.
Also read: How to Remove IC Systems From Your Credit Report
Why Is Total Revenue So Important
There are so many factors to consider when pricing your products and services. If we look at the example of the baker again - if he does go ahead and discount the price of his bread, will the cost of producing each loaf go down as the volume increases? Would he maybe need to hire an assistant baker to keep up with the demand? Will this slow down production time?
All of these questions and many more need to be factored in when making these vital business decisions. The Total Revenue Formula gives you, as a business owner, the best place to start when considering pricing.
If your total revenue numbers look successful, you can then begin to consider other possible expenses like software, office supplies, rent, etc. Total revenue will always give you a starting point for your business and it is for this main reason that it is invaluable to any company or business.
Our pay stub generator produces high-quality and reliable pay stubs. It is wise to have this process nailed down, especially in times of disaster.
Also read: Snap Finance Review
Also read: Stop Enhanced Recovery Company And Remove From Your Credit Report
Frequently Asked Questions
Can total revenue be negative?
How do you calculate total revenue for multiple products?
How do you calculate total revenue?
How is total revenue related to price elasticity?
What factors can affect total revenue?
What is the difference between total revenue and marginal revenue?
What is the difference between total revenue and profit?
What is total revenue?
Why is it important to calculate total revenue?
Try our instant paystub generation tool.
Flip through our templates page
to chose your best match and receive your stub instantly.
After graduating from McCombs School of Business in Texas, James joined ThePayStubs as a CPA to make sure the numbers we provide our clients are correct.
Read More
Ever seen the word arrears on a bill or bank statement and wondered what arrears meaning is? In simple terms, it means you owe money that you didn’t pay on time. If...
Read More
Full-time jobs often have a standard number of hours per week. Many companies use 40 hours as full-time. Others may use 35 or even 30 hours as the benchmark for full-time....
Read More
You're bleeding money and don't even know it.
Most people glance at their paycheck amount and move on...
...but hidden in those cryptic deduction codes could be...
Read More
Great Service
First time creating a stub. Customer support was
AMAZING. I had a few self-induced issues and customer support was there from start to end.
Brandon Wilson
Great Service
First time creating a stub. Customer support was
AMAZING. I had a few self-induced issues and customer support was there from start to end.
Brandon Wilson
© Copyright 2025 - All Rights Reserved
Need Help? Chat with us and we'll help you fill the form.
Brett Hello! Don't hesitate to reach out if you have any questions. I'm just a message away!
We respond immediately |
17241 | https://www.aakash.ac.in/important-concepts/chemistry/oxidation-state-of-p-block-elements | Call Now
Popular Cities
Other Cities
Classroom top menu
User account menu
Aakash Main Menu
Breadcrumb
Oxidation State of p-Block Elements - Oxidation States, Inert Pair Effect and Chemical Reactivities of Groups 13, 14, 15, 16, 17 and 18 Elements
On special occasions we generally paint the interiors or the exteriors of our house. Also at times, we first strip the wall of the old paints or the rusty surfaces. The term ‘oxidation’ is quite relatable to this. Stripping the paint off a surface is like removing electrons from an atom and thus making it undergo oxidation! On the contrary, painting a surface with new colours is like adding electrons from outside, to an atom. In both cases, the final state (electron count) of the atom decides its oxidation state.
Every element in the periodic table has a specific range of possible oxidation states. It is important to understand the oxidation states of every element so as to determine the extent and manner of its chemical reactivity! At this point, we shall try to understand the oxidation states of the elements of p-Block. So, without any further ado, let’s get started!
TABLE OF CONTENTS
What is Oxidation state?
Oxidation state is a fundamental concept in chemistry, and is particularly important in transition metal chemistry, as d-block elements often have a wide range of stable oxidation states. The oxidation state of an atom within a molecule is usually considered to be the formal charge on the atom if hypothetically all of the molecules are composed of ions.
Oxidation numbers are assigned to atoms in a rather arbitrary fashion to designate electron transfer in oxidation-reduction reactions. They represent the charges that atoms would have if the electrons were assigned according to an arbitrary set of rules.
According to IUPAC, the oxidation state of the element is defined as a measure of the degree of oxidation of an atom in a substance.
Introduction to Oxidation State in p-Block Elements
Group 13 to 18 of the periodic table of elements constitute the p-Block. p-Block contains metals, metalloids as well as non–metals.
Exception: Helium, 1s2
Oxidation States of Group 13 Elements and Inert pair Effect
Boron is a typical non-metal, Aluminium is a metal, but shows many chemical similarities to boron. Gallim, Indium and Thallium are almost exclusively metallic in character.
Valence shell electronic configuration: ns2 np1
General oxidation states exhibited: +1 and +3
| | | |
---
| Elements of Group 13 | Atomic Number | Electronic Configuration |
| Boron (B) | 5 | [He]2s2 2p1 |
| Aluminium (Al) | 13 | [Ne]3s2 3p1 |
| Gallium (Ga) | 31 | [Ar]3d10 4s2 4p1 |
| Indium (In) | 49 | [Kr]4d10 5s2 5p1 |
| Thallium (Pb) | 81 | [Xe]4f14 5d10 6s2 6p1 |
Elements of Group 13
Atomic Number
Electronic Configuration
Boron (B)
5
[He]2s2 2p1
Aluminium (Al)
13
[Ne]3s2 3p1
Gallium (Ga)
31
[Ar]3d10 4s2 4p1
Indium (In)
49
[Kr]4d10 5s2 5p1
Thallium (Pb)
81
[Xe]4f14 5d10 6s2 6p1
Inert Pair Effect
Oxidation States of Group 14 Elements and Inert Pair Effect
The general electronic configuration of Group 14 elements is ns2 np2. These elements have 2 electrons in the outermost p - orbitals. The elements have four electrons, in their outermost shell.
| | | |
---
| Elements of Group 14 | Atomic Number | Electronic Configuration |
| Carbon (C) | 6 | [He]2s2 2p2 |
| Silicon (Sn) | 14 | [Ne]3s2 3p2 |
| Germanium (Ge) | 32 | [Ar]3d10 4s2 4p2 |
| Tin (Sn) | 50 | [Kr]4df10 5s2 5p2 |
| Lead (Pb) | 82 | [Xe]4f14 5d10 6s2 6p2 |
Elements of Group 14
Atomic Number
Electronic Configuration
Carbon (C)
6
[He]2s2 2p2
Silicon (Sn)
14
[Ne]3s2 3p2
Germanium (Ge)
32
[Ar]3d10 4s2 4p2
Tin (Sn)
50
[Kr]4df10 5s2 5p2
Lead (Pb)
82
[Xe]4f14 5d10 6s2 6p2
Inert Pair Effect
Oxidation States of Group 15 Elements and Chemical Reactivity
The valence shell electronic configuration of elements of group 15 is ns2 np3. So, the elements here can either lose 5 electrons or gain 3.
| | | |
---
| Elements of Group 15 | Atomic Number | Electronic Configuration |
| Nitrogen (N) | 7 | [He]2s2 2p3 |
| Phosphorus (P) | 15 | [Ne]3s2 3p3 |
| Arsenic (As) | 33 | [Ar]3d10 4s2 4p3 |
| Antimony (Sb) | 51 | [Kr]4d10 5s2 5p3 |
| Bismuth (Bi) | 83 | [Xe]4f14 5d10 6s2 6p3 |
Elements of Group 15
Atomic Number
Electronic Configuration
Nitrogen (N)
7
[He]2s2 2p3
Phosphorus (P)
15
[Ne]3s2 3p3
Arsenic (As)
33
[Ar]3d10 4s2 4p3
Antimony (Sb)
51
[Kr]4d10 5s2 5p3
Bismuth (Bi)
83
[Xe]4f14 5d10 6s2 6p3
Special cases: Nitrogen exhibits a large number of oxidation states, from -3 to +5 when it reacts with oxygen.
3HNO2 → HNO3 + H2O + 2NO
Oxidation States of Group 16 Elements and Chemical Reactivity
The possible oxidation states of this group is -2, +2, +4,+6. Down the group, the tendency to exhibit -2 oxidation state decreases. Polonium hardly shows -2 oxidation state.
| | | |
---
| Elements of Group 16 | Atomic Number | Electronic Configuration |
| Oxygen (O) | 8 | [He]2s2 2p4 |
| Sulphur (S) | 16 | [Ne]3s2 3p4 |
| Selenium (Se) | 34 | [Ar]3d10 4s2 4p4 |
| Tellurium (Te) | 52 | [Kr]4d10 5s2 5p4 |
| Polonium (Po) | 84 | [Xe]4f10 5d10 6s2 6p4 |
Elements of Group 16
Atomic Number
Electronic Configuration
Oxygen (O)
8
[He]2s2 2p4
Sulphur (S)
16
[Ne]3s2 3p4
Selenium (Se)
34
[Ar]3d10 4s2 4p4
Tellurium (Te)
52
[Kr]4d10 5s2 5p4
Polonium (Po)
84
[Xe]4f10 5d10 6s2 6p4
Oxidation States of Group 17 Elements and Chemical Reactivity
| | | |
---
| Elements of Group 17 | Atomic Number | Electronic Configuration |
| Fluorine (F) | 9 | [He]2s2 2p5 |
| Chlorine (Cl) | 17 | [Ne]3s2 3p5 |
| Bromine (Br) | 35 | [Ar]3d10 4s2 4p5 |
| Iodine (I) | 53 | [Kr]4d10 5s2 5p5 |
| Astatine (At) | 85 | [Xe]4f14 5d10 6s2 6p5 |
Elements of Group 17
Atomic Number
Electronic Configuration
Fluorine (F)
9
[He]2s2 2p5
Chlorine (Cl)
17
[Ne]3s2 3p5
Bromine (Br)
35
[Ar]3d10 4s2 4p5
Iodine (I)
53
[Kr]4d10 5s2 5p5
Astatine (At)
85
[Xe]4f14 5d10 6s2 6p5
Oxidation States of Group 18 Elements and Chemical Reactivity
Group 18 elements have a stable electronic configuration i.e., ns2 np6 with completely filled orbitals. Due to completely filled orbitals and complete octet configuration, these elements do not have a tendency to lose, gain or share electrons. Hence, they have zero valency and mostly exist as monatomic gases.
Xenon, however, exhibits higher oxidation states, as the paired electrons of the valence shell can be promoted to the higher empty d-orbitals upon excitation by absorption of energy. Fluorine and oxygen being the two strongly electronegative atoms share the unpaired electrons of xenon and form covalent compounds with it. E.g., XeF2, XeF4, XeF6, XeO3 and XeOF4.
Practice Problems
Q1. Which gas is obtained during the disproportionation reaction of HNO2?
A) NO2
B) NO
C) O2
D) N2
Answer: The oxidation state of nitrogen in HNO2 is +3. Nitrogen disproportionates in acidic medium when it shows oxidation states between +1 to +4.
3HNO2 → HNO3 + H2O + 2NO
So, option B) is the correct answer.
Q2. Which of the following non-metals does not show a high positive oxidation state?
A) Fluorine
B) Iodine
C) Oxygen
D) Chlorine
Answer: Fluorine is the most electronegative element and since it is unable to expand octet due to absence of d-orbitals, it cannot show higher oxidation states. It can exhibit only -1 oxidation state.
So, option A) is the correct answer.
Q3. Which of the following is a good oxidising agent?
A) PbCl4
B) SnCl2
C) PbCl2
D) None of the above
Answer: For elements like Sn and Pb, d and f-orbitals are filled with electrons. Since the shielding ability of d and f orbitals are very poor, the nuclear charge that seeps through attracts the s-orbital closer to the nucleus. This makes the s orbital reluctant to bond, thereby only the p-electrons involved in bonding.
Therefore for Pb, +2 oxidation state is stabler than the +4 oxidation state. So, Pb4+ is a very good oxidising agent.
So, option A) is the correct answer.
Q4. What is the oxidation state of Xe in XeOF4?
A) +4
B) +6
C) O
D) +8
Answer: Let oxidation state of Xe be x.
The oxidation state of fluorine is -1 and that of oxygen is -2
So, x + ( - 2) + 4( - 1) = 0
⇒ x - 6 = 0
∴ x = + 6
Thus, the oxidation state of Xe in the given compound is +6.
So, option B) is the correct answer.
Frequently Asked Questions - FAQ
Question 1. Which p-block elements show the inert pair effect?
Answer: The inert-pair effect is only shown by the elements which have inner electrons in d- and f-orbitals influencing their outermost s-orbital electrons by poorly shielding the outer electrons and increasing the effective nuclear charge on them. Inert pair effect is generally exhibited by some heavier nucleus p-block elements [such as, Tl, Sn, Pb, Bi, Po etc. ].
For example, the inert pair effect among Group 14 and Group 15 elements. Sn2+, Pb2+, Sb3+ and Bi3+ which are the lower oxidation states of the elements are formed because of the inert pair effect. When the outer shell s-electrons remain paired, the oxidation state is lower than the characteristic oxidation state of a particular group.
Question 2. What is the maximum oxidation state of interhalogens?
Answer: The maximum oxidation state for interhalogens is +7. In IF7, fluorine exists as -1 and iodine exists in +7 oxidation state.
Question 3. What is the effect of increase in oxidation state of a particular halogen atom in an oxoacid of halogen?
Answer: On increasing the oxidation state of a particular halogen atom, the acidic character of corresponding oxoacid increases. This can be explained on the basis of stability of conjugate bases by resonance and charge stabilisation. For example: The acidic strength of oxoacids of chlorine increases in the order:
HClO < HClO2 < HClO3< HClO4
The charge stabilisation is in the order : ClO¯ < ClO2- < ClO3- < ClO4-
Question 4. Give some examples of polyhalide ions.
Answer: Triatomic iodide is an important polyhalide. I3- is obtained by reacting diatomic iodine with iodide ion. Some other polyhalide anions are ICl2-, ICl4- and polyhalonium cations areClF2+, Cl2F+ , BrF2+, IF2+ etc.
Related Topics
| | |
--- |
| Calcium Carbonate | Alkali Metals |
| Ammonia | Sodium Hydrogen Carbonate |
| Calcium Oxide | Potassium |
Calcium Carbonate
Alkali Metals
Ammonia
Sodium Hydrogen Carbonate
Calcium Oxide
Potassium
NEET Related Links
NEET Exam
NEET Exam Dates
NEET Exam pattern
NEET Syllabus
NEET Eligibility Criteria
NEET Application
NEET UG Counselling
NEET FAQ
NEET UG Result
NEET Cut Off
JEE MAIN Related Links
JEE Main
JEE Main Rank Predictor
JEE Main College Predictor
JEE Main Exam Dates
JEE Main Exam pattern
JEE Main Application
JEE Main Eligibility Criteria
JEE Main Syllabus
JEE Main Physics Syllabus
JEE Main Maths Syllabus
JEE Main Chemistry Syllabus
JEE Main Admit Card
JEE Main Counselling
JEE Main marks vs rank vs percentile
JEE Advanced Related Links
JEE Advanced Exam Dates
JEE Advanced Application
JEE Advanced Eligibility Criteria
JEE Advanced Syllabus
JEE Advanced Maths Syllabus
JEE Advanced Physics Syllabus
JEE Advanced Chemistry Syllabus
JEE Advanced Exam Result
JEE Advanced Exam Dates
JEE Advanced Registration Dates
CUET Related Links
CUET Eligibility Criteria
CUET Admit Card
CUET Exam Pattern
CUET FAQs
CUET Counselling
CUET Syllabus
CUET Result
CUET Answer Key
CUET Preparation
CUET CUTOFF
CUET Application Form
Important Topics
Physics Important Topics
Anemometer : Measurement of Wind Speed
Loudness of Sound
Unit of Heat
Derivation Of Lens Maker Formula
Wave Theory of Light
What is the Scattering of Light
Difference between Land Breeze and Sea Breeze
Relation Between Critical Angle And Refractive Index
Relation Between Group Velocity And Phase Velocity
Thermal Stress
Principle Of Calorimetry
Types Of Wind
Infrared Radiation
Alpha Decay1
Centripetal Acceleration
Biogas Energy
Tracing the Path of a Ray of Light Passing Through a Rectangular Glass Slab
Unit of Conductivity
Resistor
LCR Circuit
Coherent Sources
Rigid Bodies
Angular Displacement
Unit of Momentum
Chandrasekhar Limit
Electromagnets
Viscosity
Einstein's Explanationx
Sea Breeze and Land Breeze
Kinetic Energy
Van De Graaff Generator
Kinetics and Kinematics
Sphygmomanometer
Unit of Inductance
Law Equipartition Energy
Periodic Motion
Determination of Focal Length of Concave Mirror and Convex Mirror
Optical Density
Characteristics of a Transistor
Scintillation Counter
Difference Between Earthing And Grounding
White Light
Optical Fiber
Relative Speed
Unit of Work
Coefficient of Viscosity
Neutrons, Isotopes, Isotones And Isobars
Electric Circuit
Difference Between Scalar and Vector
Difference Between Two Stroke and Four Stroke Engines
Unit of Velocity
Zener Diode
Gamma Rays - Electromagnetic Spectrum
Permittivity and Permeability
Tidal Energy
Difference Between Resistance and Resistivity
Uses of Vernier Calipers
Human Eye Function
Value of Gravitational Constant
Value of Electron
Difference Between LCD and LED
Pinhole Camera
Single Slit Diffraction
Difference Between Circuit Switching and Packet Switching
Wheatstone Bridge
Unit of Resistance
Scalar and vector
Density of Water
Cyclotron
Unit of Specific Resistance
Types of Cables
Modern Physics
Uses of Concave Mirror
Magnetic Moment
Law of Conservation of Momentum Derivation
Magnet
Rectilinear Motion of Particles
Difference Between Mass and Weight
Refraction of Light
Types of Motors
Types of Gears
Solar Energy and Photovoltaic Cell
Escape Velocity and Orbital velocity
Natural Sources of Energy
Difference Between Frequency Modulation and Amplitude Modulation
Fick’s Law of Diffusion
Faraday Electromagnetic Induction Experiment
Prism Dispersion
Continuous Charge Distribution
Capacitor Types
Maxwell's Relations
Law of Conservation of Charge
Kinetic Theory of Gases Assumptions
Reflection of Light Image
Chemistry Important Topics
Disaccharides
Periodic Table Elements
Physical and Chemical Properties of Carbon
Rutherford Atomic Model
Synthetic Polymers
Water Cycle Process
Gattermann Reaction
Mannich Reaction
Reactivity Series
Difference between isotropic and anisotropic
Introduction to p-Block elements
Plant fibres
Inert Gases: Uses
Benzoin Condensation
Difference between primary cell and secondary cell
Partition Chromatography
Applications of colloids
Displacement Reactions
Handpicking
Biogas: Uses of Biogas
Acid Rain
Types of Chemical Reactions
SN2 Reaction Mechanism
Complexometric Titration
markovnikov rule
Kohlrausch Law
Electrode
Valency Chart
Order of Reaction
Band Theory
Difference Between Rusting and Corrosion
Potentiometric Titration
Frenkel Defect
Deforestation
Stephen Reaction Mechanism
Gattermann – Koch Reaction Mechanism
Electronegativity
Industrial Waste: Types
Ethylene
Charge to Mass Ratio of an Electron
Fehling Solution
Photochemical Reactions
Second Order Reaction
Etard Reaction
Forms of water
Preparation of Acetanilide
Metallic Minerals and Non-Metallic Minerals
Schottky Defect
Calcium sulphate
Sodium carbonate
Rubber
Ethyl acetate
Conductometric Titration
Resorcinol
Difference Between Evaporation and Condensation
States of Matter
Boric Acid
Types of Minerals
Finkelstein Reaction
Electronegativity Chart
Difference between Solid, Liquid and Gas in tabular form
Aromaticity
Drawbacks of Rutherford's Atomic Model
Thermosetting Polymers
Preparation of Alkanes
Examples of Bases
Suzuki Coupling Reaction
Standard Electrode Potential
Disadvantages of Plastics
Reducing Agent
Van't Hoff Factor
Hard Water and Soft Water
Gay Lussac's Law
Slaked Lime
Borax
Tetravalency of Carbon
Boyle's Law
Ellingham Diagram
Crystal Defects : Point Defects
HVZ Reaction (Hell-Volhard-Zelinsky Reaction)
Classification of Drugs
Preparation of Sulfuric Acid
Lucas test
Xenon Difluoride
Bravais Lattice
Classification of Oxides
Elimination Reaction
preparation of polythene, Teflon and polyacrylonitrile
Difference between cations and anions
Uses of Limestone
Fructose
Azeotropes
Lewis Acid and Base
Emulsification
Dielectric properties of solids
Zwitterion
Birch Reduction Mechanism
Oxidation and Reduction
Electrophilic Substitution Reaction
Suspensions
Aromatic Compounds
Ores and Minerals
Adsorption theory of heterogeneous catalysis
Wittig Reaction
Difference between evaporation and boiling
Intensive And Extensive Properties Of Matter
Electronic Configuration of Group 16 Elements
Difference Between Elements and Atoms
Sodium Oxide
Gypsum
Difference Between Alloy and Composite
Chemical Equations
Thorium
Lead Acid Battery
Isotopes of Hydrogen
Figure Rules
Electrophilic Addition Reactions Of Alkenes
Isothermal Expansion of an Ideal Gas
Potassium Chloride - KCl
Aluminum Oxide
Difference Between Acetic Acid and Glacial Acetic Acid
Difference Between Polar and Nonpolar
Difference Between Baking Powder and Baking Soda
Test for Phenolic group
To prepare colloidal solution of starch
Uses of Mica
Pi Bonds
Potassium Chlorate
Schmidt Reaction
High density Polyethylene
Huckel's Rule
Discovery of Proton
Aluminium Ore: Extraction of Aluminium
Benzene Hexachloride
Caustic Potash or Potasium Hydroxide
Electromagnetic Radiation - Wave Nature
Atomic Number and Mass Number, Isotopes and Isobars
Preparation of Potash Alum
Beckmann Rearrangement
Heterogeneous Equilibrium
Butane
Factors Affecting Electrolytic Conductance
Scandium
Homogeneous Equilibrium
Phenols Nomenclcature
Galvanic Corrosion
Chlorine Trifluoride
Robinson Annulation
Copper
Water
Math Important Topics
Area of Triangle
Area Segment Circle
Circumcenter of a Triangle
Cm to inch converter
Semicircle
Surface Area of a Sphere
Trigonometric Ratios
Difference Between Constants and Variables
Tables of 2 to 30
The volume of a Cylinder
Trapezium
Similar Triangles
Laws of Exponents
Decimal to Binary
Geometry Symbols
Isosceles Triangle
Numbers in Words
Angle Between Two Planes
Addition and Subtraction of Integers
Natural Numbers and Whole Numbers
Is 91 a Prime Number?
Difference Between Cube and Cuboid
Distributive Property
Difference Between Rhombus and Parallelogram
Skip Counting
Trigonometry Functions of Sum and Difference of Angles
Logarithm Table
Data Organization
PEMDAS
Secant of a Circle
Vector Space
Angle Bisector Theorem
Mutually Exclusive Events
Statistical Inference
Difference Between Area and Volume
How to Find Prime Numbers?
3D Shapes
Involute
Similarity of Triangles
Simple Equations Application
Types of Polygon
Volume of a Prism
Missing Numbers
Sec 90
Derivatives
Multiple of 9
Metric System of Measurement
General Equation of a Line
Multiplicative Inverse
Linear Differential Equations
Inverse Cosine
Comparing Quantities using Percentage
Biology Important Topics
Apoplast
Bryophyta
Food Web
Fragmentation
Gemmules
Grassland Adaptations
Gymnosperms
Harmful Microorganisms
Homeostasis
Human Body Anatomy
Invertebrates
Life process
Light-dependent Reactions
Lipids
Living Things
Types of Soil
Difference Between Sea and Ocean
Difference Between Turtles and Tortoises
Difference Between Wildlife Sanctuary and National Park
Modes of Plant Reproduction
Ecological Pyramid and Its Types
Double Fertilization in Angiosperms
Fertilisation in Plants
Five Kingdoms Classification
Ganongs Potometer
Health and Hygiene
Herbivores and Carnivores
Vertebrates and Invertebrates
Morphology of Leaves
Modifications of Root
Diffusion
Osmosis
Prokaryotic Cells
Reproduction
Flora And Fauna
Difference Between Rabi And Kharif Crops
Types of Pollution
Prokaryotic And Eukaryotic Cells
Living And Non Living Things
The Nucleus
Irrigation
Endoplasmic Reticulum
Between Data And Information
Mitochondria
Nitrogen Cycle
Diagram of Animal Cell
Law of Segregation Law of Dominance
Sources Animal Plant Products
Leaves Morphology Types Modification
Mitosis and Meiosis
How Do Organisms Reproduce
Diagram of Stomata
Soil Erosion
Plasmolysis
Urine Formation Osmoregulation
Porifera
Flagella
Amoeba
Aerobic Respiration
Pteridophyta
Arteries And Veins Difference
Reflex Action
Stomata
Cell Wall
Diagram of Neuron
A Guide To Composition And Function Of Lymph
Difference Between Ligaments And Tendons
Insectivorous Plants
Nutrition Modes Living Organisms
Mendel Laws Of Inheritance
Nutrition In Amoeba
Calorific Value
Air Pollution Control
Fertilization In Plants
Binary Fission
Omnivores
Saprophytes
Asexual Reproduction
Deficiency Diseases
Light Reaction Vs Dark Reaction
Global Warming
Wildlife Sanctuary
Viviparous Oviparous Embryo Development
Food Preservation Methods Food Poisoning
Crops
Biofertilizers
Biodiversity Conservation
Difference Between Endocrine And Exocrine Glands
Diagram Of Digestive System
Crop Production And Management
Types Of Pollination
Aakash Educational Services Limited
Registered office Address:
No. 5/2, 2nd Floor, Kundanahalli Gate, Varthur Road,
Opposite SKR Kalyana Mandapa, Whitefield, Bengaluru 560037
CIN: U80300KA2007PLC150057
Telephone: 0124-4168300
Fax: +91-11–47623472
Email: corporate@aesl.in
ABOUT US
COURSES & SCHOLARSHIPS
NEET
JEE MAIN
JEE ADVANCED
BOARD EXAMS
NCERT
TEXT BOOK SOLUTIONS
OUR IMPORTANT CONCEPTS
STUDENT AREA
OTHERS
Classroom Footer bottom menu
Call Now
Let's Chat
Share |
17242 | https://allen.in/dn/qna/642544341 | Find the shortest distance between the line y=x-2 and the parabola y=x^2+3x+2.
Courses
NEET
Class 11th
Class 12th
Class 12th Plus
JEE
Class 11th
Class 12th
Class 12th Plus
Class 6-10
Class 6th
Class 7th
Class 8th
Class 9th
Class 10th
View All Options
Online Courses
Distance Learning
Hindi Medium Courses
International Olympiad
Test Series
NEET
Class 11th
Class 12th
Class 12th Plus
JEE (Main+Advanced)
Class 11th
Class 12th
Class 12th Plus
JEE Main
Class 11th
Class 12th
Class 12th Plus
Classroom
Results
NEET
2025
2024
2023
2022
JEE
2025
2024
2023
2022
Class 6-10
Scholarships NEW
TALLENTEX
AOSAT
ALLEN E-Store
More
ALLEN for Schools
About ALLEN
Blogs
News
Careers
Request a call back
Book home demo
Login
HomeClass 12MATHSFind the shortest distance between the l...
Find the shortest distance between the line y=x-2 and the parabola y=x 2+3 x+2.
To view this video, please enable JavaScript and consider upgrading to a web browser thatsupports HTML5 video
Video Player is loading.
Play Video
Play Skip Backward Skip Forward
Mute
Current Time 0:00
/
Duration-:-
Loaded: 0%
Stream Type LIVE
Seek to live, currently behind live LIVE
Remaining Time-0:00
1x
Playback Rate
2x
1.5x
1x, selected
0.5x
0.25x
Chapters
Chapters
Descriptions
descriptions off, selected
Captions
captions settings, opens captions settings dialog
captions off, selected
Audio Track
Picture-in-Picture Fullscreen
This is a modal window.
Beginning of dialog window. Escape will cancel and close the window.
Text Color Opacity Text Background Color Opacity Caption Area Background Color Opacity
Font Size Text Edge Style Font Family
Reset Done
Close Modal Dialog
End of dialog window.
Text Solution
AI Generated Solution
To find the shortest distance between the line y=x−2 and the parabola y=x 2+3 x+2, we can follow these steps: ### Step 1: Identify the equations We have the line: y=x−2 and the parabola: ...
Show More
|
Share
Topper's Solved these Questions
PARABOLA
CENGAGE ENGLISH|Exercise ILLUSTRATION 5.51|1 Videos View Playlist
PARABOLA
CENGAGE ENGLISH|Exercise ILLUSTRATION 5.52|1 Videos View Playlist
PARABOLA
CENGAGE ENGLISH|Exercise ILLUSTRATION 5.49|1 Videos View Playlist
PAIR OF STRAIGHT LINES
CENGAGE ENGLISH|Exercise Numberical Value Type|5 Videos View Playlist
PERMUTATION AND COMBINATION
CENGAGE ENGLISH|Exercise Comprehension|8 Videos View Playlist
Similar Questions
Explore conceptually related problems
Find the area included between the line y=x and the parabola x^2=4y .
Watch solution
Find the shortest distance between the line x - y +1 = 0 and the curve y^2 = x.
Watch solution
The shortest distance between the line x=y and the curve y^(2)=x-2 is
Watch solution
Find the shortest distance between two parabolas y^(2)=x-2 and x^(2)=y-2
Watch solution
Find the shortest distance of the point (0, c) from the parabola y=x^2 , where 0lt=clt=5 .
Watch solution
Find the shortest distance of the point (0, c) from the parabola y=x^2 , where 0lt=clt=5 .
Watch solution
Find the shortest distance of the point (0, c) from the parabola y=x^2 , where 0lt=clt=5 .
Watch solution
The equation of the line of the shortest distance between the parabola y^(2)=4x and the circle x^(2)+y^(2)-4x-2y+4=0 is
Watch solution
Find the shortest distance between the z-axis and the line, x+y+2z-3=0,2x+3y+4z-4=0.
Watch solution
Find the shortest distance between the lines , (x-1)/2=(y-3)/4=(z+2)/1 and 3x-y-2z+4=0=2x+y+z+1 .
Watch solution
CENGAGE ENGLISH-PARABOLA-ILLUSTRATION 5.50
Find the shortest distance between the line y=x-2 and the parabola y=x...03:58 | Playing Now
HomeProfile |
17243 | https://arxiv.org/abs/1303.3777 | [1303.3777] Alternating Traps in Muller and Parity Games
Skip to main content
We gratefully acknowledge support from the Simons Foundation, member institutions, and all contributors.Donate
>cs> arXiv:1303.3777
Help | Advanced Search
Search
GO
quick links
Login
Help Pages
About
Computer Science > Logic in Computer Science
arXiv:1303.3777 (cs)
[Submitted on 13 Mar 2013 (v1), last revised 23 Jul 2013 (this version, v2)]
Title:Alternating Traps in Muller and Parity Games
Authors:A. Grinshpun, P. Phalitnonkiat, S. Rubin, A. Tarfulea
View a PDF of the paper titled Alternating Traps in Muller and Parity Games, by A. Grinshpun and 3 other authors
View PDF
Abstract:Muller games are played by two players moving a token along a graph; the winner is determined by the set of vertices that occur infinitely often. The central algorithmic problem is to compute the winning regions for the players. Different classes and representations of Muller games lead to problems of varying computational complexity. One such class are parity games; these are of particular significance in computational complexity, as they remain one of the few combinatorial problems known to be in NP and co-NP but not known to be in P. We show that winning regions for a Muller game can be determined from the alternating structure of its traps. To every Muller game we then associate a natural number that we call its trap-depth; this parameter measures how complicated the trap structure is. We present algorithms for parity games that run in polynomial time for graphs of bounded trap depth, and in general run in time exponential in the trap depth.
Comments:27 pages, 1 figure
Subjects:Logic in Computer Science (cs.LO); Data Structures and Algorithms (cs.DS); Computer Science and Game Theory (cs.GT)
ACM classes:F.2.2
Cite as:arXiv:1303.3777 [cs.LO]
(or arXiv:1303.3777v2 [cs.LO] for this version)
Focus to learn more
arXiv-issued DOI via DataCite
Submission history
From: Andrey Grinshpun [view email]
[v1] Wed, 13 Mar 2013 18:11:16 UTC (50 KB)
[v2] Tue, 23 Jul 2013 07:32:01 UTC (54 KB)
Full-text links:
Access Paper:
View a PDF of the paper titled Alternating Traps in Muller and Parity Games, by A. Grinshpun and 3 other authors
View PDF
TeX Source
Other Formats
view license
Current browse context:
cs.LO
<prev | next>
new | recent | 2013-03
Change to browse by:
cs
cs.DS
cs.GT
References & Citations
NASA ADS
Google Scholar
Semantic Scholar
DBLP - CS Bibliography
listing | bibtex
A. Grinshpun
Andrey Grinshpun
P. Phalitnonkiat
Pakawat Phalitnonkiat
Sasha Rubin
…
export BibTeX citation Loading...
BibTeX formatted citation
×
Data provided by:
Bookmark
Bibliographic Tools
Bibliographic and Citation Tools
[x] Bibliographic Explorer Toggle
Bibliographic Explorer (What is the Explorer?)
[x] Connected Papers Toggle
Connected Papers (What is Connected Papers?)
[x] Litmaps Toggle
Litmaps (What is Litmaps?)
[x] scite.ai Toggle
scite Smart Citations (What are Smart Citations?)
Code, Data, Media
Code, Data and Media Associated with this Article
[x] alphaXiv Toggle
alphaXiv (What is alphaXiv?)
[x] Links to Code Toggle
CatalyzeX Code Finder for Papers (What is CatalyzeX?)
[x] DagsHub Toggle
DagsHub (What is DagsHub?)
[x] GotitPub Toggle
Gotit.pub (What is GotitPub?)
[x] Huggingface Toggle
Hugging Face (What is Huggingface?)
[x] Links to Code Toggle
Papers with Code (What is Papers with Code?)
[x] ScienceCast Toggle
ScienceCast (What is ScienceCast?)
Demos
Demos
[x] Replicate Toggle
Replicate (What is Replicate?)
[x] Spaces Toggle
Hugging Face Spaces (What is Spaces?)
[x] Spaces Toggle
TXYZ.AI (What is TXYZ.AI?)
Related Papers
Recommenders and Search Tools
[x] Link to Influence Flower
Influence Flower (What are Influence Flowers?)
[x] Core recommender toggle
CORE Recommender (What is CORE?)
Author
Venue
Institution
Topic
About arXivLabs
arXivLabs: experimental projects with community collaborators
arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.
Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.
Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs.
Which authors of this paper are endorsers? | Disable MathJax (What is MathJax?)
About
Help
Contact
Subscribe
Copyright
Privacy Policy
Web Accessibility Assistance
arXiv Operational Status
Get status notifications via email or slack |
17244 | https://nph.onlinelibrary.wiley.com/doi/10.1111/nph.17072 | Opens in a new window Opens an external website Opens an external website in a new window
This website utilizes technologies such as cookies to enable essential site functionality, as well as for analytics, personalization, and targeted advertising. To learn more, view the following link: Privacy Policy
Volume 232, Issue 3 pp. 1123-1158
Viewpoints
Free Access
Root traits as drivers of plant and ecosystem functioning: current understanding, pitfalls and future research needs
Grégoire T. Freschet,
Corresponding Author
Grégoire T. Freschet
gregoire.freschet@sete.cnrs.fr
orcid.org/0000-0002-8830-3860
Station d’Ecologie Théorique et Expérimentale, CNRS, 2 route du CNRS, Moulis, 09200 France
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Author for correspondence: email gregoire.freschet@sete.cnrs.fr
Search for more papers by this author
Catherine Roumet,
Catherine Roumet
orcid.org/0000-0003-1320-9770
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Search for more papers by this author
Louise H. Comas,
Louise H. Comas
orcid.org/0000-0002-1674-4595
USDA-ARS Water Management and Systems Research Unit, 2150 Centre Avenue, Bldg D, Suite 320, Fort Collins, CO, 80526 USA
Search for more papers by this author
Monique Weemstra,
Monique Weemstra
orcid.org/0000-0002-6994-2501
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Search for more papers by this author
A. Glyn Bengough,
A. Glyn Bengough
orcid.org/0000-0001-5472-3077
The James Hutton Institute, Invergowrie, Dundee, DD2 5DA UK
School of Science and Engineering, University of Dundee, Dundee, DD1 4HN UK
Search for more papers by this author
Boris Rewald,
Boris Rewald
orcid.org/0000-0001-8098-0616
Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, Vienna, 1190 Austria
Search for more papers by this author
Richard D. Bardgett,
Richard D. Bardgett
orcid.org/0000-0002-5131-0127
Department of Earth and Environmental Sciences, The University of Manchester, Manchester, M13 9PT UK
Search for more papers by this author
Gerlinde B. De Deyn,
Gerlinde B. De Deyn
orcid.org/0000-0003-4823-6912
Soil Biology Group, Wageningen University, Wageningen, 6700 AA the Netherlands
Search for more papers by this author
David Johnson,
David Johnson
orcid.org/0000-0003-2299-2525
Department of Earth and Environmental Sciences, The University of Manchester, Manchester, M13 9PT UK
Search for more papers by this author
Jitka Klimešová,
Jitka Klimešová
orcid.org/0000-0003-0123-3263
Department of Functional Ecology, Institute of Botany CAS, Dukelska 135, Trebon, 37901 Czech Republic
Search for more papers by this author
Martin Lukac,
Martin Lukac
orcid.org/0000-0002-8535-6334
School of Agriculture, Policy and Development, University of Reading, Reading, RG6 6EU UK
Faculty of Forestry and Wood Sciences, Czech University of Life Sciences Prague, Prague, 165 00 Czech Republic
Search for more papers by this author
M. Luke McCormack,
M. Luke McCormack
orcid.org/0000-0002-8300-5215
Center for Tree Science, Morton Arboretum, 4100 Illinois Rt. 53, Lisle, IL, 60532 USA
Search for more papers by this author
Ina C. Meier,
Ina C. Meier
orcid.org/0000-0001-6500-7519
Plant Ecology, University of Goettingen, Untere Karspüle 2, Göttingen, 37073 Germany
Functional Forest Ecology, University of Hamburg, Haidkrugsweg 1, Barsbüttel, 22885 Germany
Search for more papers by this author
Loïc Pagès,
Loïc Pagès
orcid.org/0000-0002-2476-6401
UR 1115 PSH, Centre PACA, site Agroparc, INRAE, Avignon Cedex 9, 84914 France
Search for more papers by this author
Hendrik Poorter,
Hendrik Poorter
orcid.org/0000-0001-9900-2433
Plant Sciences (IBG-2), Forschungszentrum Jülich GmbH, Jülich, D-52425 Germany
Department of Biological Sciences, Macquarie University, North Ryde, NSW, 2109 Australia
Search for more papers by this author
Iván Prieto,
Iván Prieto
orcid.org/0000-0001-5549-1132
Departamento de Conservación de Suelos y Agua, Centro de Edafología y Biología Aplicada del Segura – Consejo Superior de Investigaciones Científicas (CEBAS-CSIC), Murcia, 30100 Spain
Search for more papers by this author
Nina Wurzburger,
Nina Wurzburger
orcid.org/0000-0002-6143-0317
Odum School of Ecology, University of Georgia, 140 E. Green Street, Athens, GA, 30602 USA
Search for more papers by this author
Marcin Zadworny,
Marcin Zadworny
orcid.org/0000-0002-7352-5786
Institute of Dendrology, Polish Academy of Sciences, Parkowa 5, Kórnik, 62-035 Poland
Search for more papers by this author
Agnieszka Bagniewska-Zadworna,
Agnieszka Bagniewska-Zadworna
orcid.org/0000-0003-2828-1505
Department of General Botany, Institute of Experimental Biology, Faculty of Biology, Adam Mickiewicz University, Uniwersytetu Poznańskiego 6, Poznań, 61-614 Poland
Search for more papers by this author
Elison B. Blancaflor,
Elison B. Blancaflor
orcid.org/0000-0001-6115-9670
Noble Research Institute, LLC, 2510 Sam Noble Parkway, Ardmore, OK, 73401 USA
Search for more papers by this author
Ivano Brunner,
Ivano Brunner
orcid.org/0000-0003-3436-995X
Forest Soils and Biogeochemistry, Swiss Federal Research Institute WSL, Zürcherstr. 111, Birmensdorf, 8903 Switzerland
Search for more papers by this author
Arthur Gessler,
Arthur Gessler
orcid.org/0000-0002-1910-9589
Forest Dynamics, Swiss Federal Research Institute WSL, Zürcherstr. 111, Birmensdorf, 8903 Switzerland
Institute of Terrestrial Ecosystems, ETH Zurich, Zurich, 8092 Switzerland
Search for more papers by this author
Sarah E. Hobbie,
Sarah E. Hobbie
orcid.org/0000-0001-5159-031X
Department of Ecology, Evolution and Behavior, University of Minnesota, St Paul, MN, 55108 USA
Search for more papers by this author
Colleen M. Iversen,
Colleen M. Iversen
orcid.org/0000-0001-8293-3450
Environmental Sciences Division and Climate Change Science Institute, Oak Ridge National Laboratory, Oak Ridge, TN, 37831 USA
Search for more papers by this author
Liesje Mommer,
Liesje Mommer
orcid.org/0000-0002-3775-0716
Plant Ecology and Nature Conservation Group, Department of Environmental Sciences, Wageningen University and Research, PO box 47, Wageningen, 6700 AA the Netherlands
Search for more papers by this author
Catherine Picon-Cochard,
Catherine Picon-Cochard
orcid.org/0000-0001-7728-8936
Université Clermont Auvergne, INRAE, UREP, Clermont-Ferrand, 63000 France
Search for more papers by this author
Johannes A. Postma,
Johannes A. Postma
orcid.org/0000-0002-5222-6648
Plant Sciences (IBG-2), Forschungszentrum Jülich GmbH, Jülich, D-52425 Germany
Search for more papers by this author
Laura Rose,
Laura Rose
orcid.org/0000-0003-4523-4145
Station d’Ecologie Théorique et Expérimentale, CNRS, 2 route du CNRS, Moulis, 09200 France
Search for more papers by this author
Peter Ryser,
Peter Ryser
orcid.org/0000-0002-9495-9508
Laurentian University, 935 Ramsey Lake Road, Sudbury, ON, P3E 2C6 Canada
Search for more papers by this author
Michael Scherer-Lorenzen,
Michael Scherer-Lorenzen
orcid.org/0000-0001-9566-590X
Geobotany, Faculty of Biology, University of Freiburg, Schänzlestr. 1, Freiburg, 79104 Germany
Search for more papers by this author
Nadejda A. Soudzilovskaia,
Nadejda A. Soudzilovskaia
orcid.org/0000-0002-9584-2109
Environmental Biology Department, Institute of Environmental Sciences, CML, Leiden University, Leiden, 2333 CC the Netherlands
Search for more papers by this author
Tao Sun,
Tao Sun
Institute of Applied Ecology, Chinese Academy of Sciences, Shenyang, 110016 China
Search for more papers by this author
Oscar J. Valverde-Barrantes,
Oscar J. Valverde-Barrantes
orcid.org/0000-0002-7327-7647
Institute of Environment, Department of Biological Sciences, Florida International University, Miami, FL, 33199 USA
Search for more papers by this author
Alexandra Weigelt,
Alexandra Weigelt
orcid.org/0000-0001-6242-603X
Systematic Botany and Functional Biodiversity, Institute of Biology, Leipzig University, Johannisallee 21-23, Leipzig, 04103 Germany
Search for more papers by this author
Larry M. York,
Larry M. York
orcid.org/0000-0002-1995-9479
Noble Research Institute, LLC, 2510 Sam Noble Parkway, Ardmore, OK, 73401 USA
Search for more papers by this author
Alexia Stokes,
Alexia Stokes
orcid.org/0000-0002-2276-0911
INRA, AMAP, CIRAD, IRD, CNRS, University of Montpellier, Montpellier, 34000 France
Search for more papers by this author
Grégoire T. Freschet,
Corresponding Author
Grégoire T. Freschet
gregoire.freschet@sete.cnrs.fr
orcid.org/0000-0002-8830-3860
Station d’Ecologie Théorique et Expérimentale, CNRS, 2 route du CNRS, Moulis, 09200 France
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Author for correspondence: email gregoire.freschet@sete.cnrs.fr
Search for more papers by this author
Catherine Roumet,
Catherine Roumet
orcid.org/0000-0003-1320-9770
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Search for more papers by this author
Louise H. Comas,
Louise H. Comas
orcid.org/0000-0002-1674-4595
USDA-ARS Water Management and Systems Research Unit, 2150 Centre Avenue, Bldg D, Suite 320, Fort Collins, CO, 80526 USA
Search for more papers by this author
Monique Weemstra,
Monique Weemstra
orcid.org/0000-0002-6994-2501
Centre d’Ecologie Fonctionnelle et Evolutive, Université de Montpellier, CNRS, EPHE, IRD, Univ Paul Valéry Montpellier 3, Montpellier, 34293 France
Search for more papers by this author
A. Glyn Bengough,
A. Glyn Bengough
orcid.org/0000-0001-5472-3077
The James Hutton Institute, Invergowrie, Dundee, DD2 5DA UK
School of Science and Engineering, University of Dundee, Dundee, DD1 4HN UK
Search for more papers by this author
Boris Rewald,
Boris Rewald
orcid.org/0000-0001-8098-0616
Department of Forest and Soil Sciences, University of Natural Resources and Life Sciences, Vienna, 1190 Austria
Search for more papers by this author
Richard D. Bardgett,
Richard D. Bardgett
orcid.org/0000-0002-5131-0127
Department of Earth and Environmental Sciences, The University of Manchester, Manchester, M13 9PT UK
Search for more papers by this author
Gerlinde B. De Deyn,
Gerlinde B. De Deyn
orcid.org/0000-0003-4823-6912
Soil Biology Group, Wageningen University, Wageningen, 6700 AA the Netherlands
Search for more papers by this author
David Johnson,
David Johnson
orcid.org/0000-0003-2299-2525
Department of Earth and Environmental Sciences, The University of Manchester, Manchester, M13 9PT UK
Search for more papers by this author
… See all authors
First published: 07 November 2020
Citations: 463
Summary
The effects of plants on the biosphere, atmosphere and geosphere are key determinants of terrestrial ecosystem functioning. However, despite substantial progress made regarding plant belowground components, we are still only beginning to explore the complex relationships between root traits and functions. Drawing on the literature in plant physiology, ecophysiology, ecology, agronomy and soil science, we reviewed 24 aspects of plant and ecosystem functioning and their relationships with a number of root system traits, including aspects of architecture, physiology, morphology, anatomy, chemistry, biomechanics and biotic interactions. Based on this assessment, we critically evaluated the current strengths and gaps in our knowledge, and identify future research challenges in the field of root ecology. Most importantly, we found that belowground traits with the broadest importance in plant and ecosystem functioning are not those most commonly measured. Also, the estimation of trait relative importance for functioning requires us to consider a more comprehensive range of functionally relevant traits from a diverse range of species, across environments and over time series. We also advocate that establishing causal hierarchical links among root traits will provide a hypothesis-based framework to identify the most parsimonious sets of traits with the strongest links on functions, and to link genotypes to plant and ecosystem functioning.
1 Introduction
Plants are powerful ecosystem engineers. Extending both above and belowground, sometimes to a great height and depth, they shape the biosphere and its interactions with the uppermost lithosphere, the hydrosphere and the atmosphere (de Kroon & Visser, 2003; Schenk & Jackson, 2005). Taken together, the effects of plants on the biosphere, atmosphere and geosphere are key determinants of terrestrial ecosystem functioning. Belowground, plant roots and their symbionts are central to the maintenance of multiple ecosystem functions (Bardgett et al., 2014; Freschet & Roumet, 2017): roots play key roles in the transformation and circulation of elements and mineral/organic compounds across the spheres (Prieto et al., 2012; Freschet et al., 2018b), and particularly in the formation, maintenance and stabilisation of soils (Daynes et al., 2013; Dignac et al., 2017). Thus, an advanced mechanistic understanding of the effects of root systems on ecosystem functions has numerous potential applications, such as designing plant mixtures for nutrient retention in agrosystems, for stabilisation of hillslopes, and so on (Stokes et al., 2009; Lavorel et al., 2013; Martin & Isaac, 2015).
Root systems, among other plant parts, display a tremendous diversity of forms and properties (Kutschera, 1960; Robinson et al., 2003; Bodner et al., 2013; Iversen et al., 2017). In recent decades, parallel developments in many areas of root research (e.g. morphology, physiology, architecture, biomechanics and anatomy, among others) have brought considerable advances in our understanding of the diversity in root traits and their contribution to plant and ecosystem functioning (Freschet et al., 2020). Such advances are key to strengthening the foundations of current dominating theoretical frameworks, often built on data from the same few easily measurable traits (McGill et al., 2006; Reich, 2014). For example, recent attempts to assemble a diverse set of trait data from a range of disciplines in root science permitted researchers to move from a single root economics spectrum (Reich, 2014; Roumet et al., 2016) that only poorly explained root trait variation and its impact on plant performance (Weemstra et al., 2016), towards a multidimensional ‘root economics space’ that further integrates aspects of symbiotic associations and is supported by traits closely related to functioning (Bergmann et al., 2020). However, despite major progress, numerous gaps remain in our understanding of trait–functioning relationships and we still lack a comprehensive overview of available knowledge that bridges research fields.
Here, sharing expertise from a range of fields in root research, we first consolidate recent advances in our understanding of demonstrated relationships between root traits and plant or ecosystem functioning (Section 2; see Tables 1, 2, and Fig. 1 for an overview of this broad assessment). Additionally, two examples are more comprehensively assessed in order to illustrate the multiple direct and indirect roles of root traits as drivers of: (1) plant functioning, with an investigation of the relationships between root traits and plant nitrogen (N) uptake capacity; and (2) ecosystem functioning, by examining the linkages between root traits and soil reinforcement against shallow landslides (see Tables 3, 4, and Fig. 2 for an overview of these comprehensive assessments). Based on this two-step assessment, we critically evaluated the current strengths and gaps in our knowledge, and identified research challenges for the future. Specifically, we address three main research avenues that offer the potential to improve our understanding of trait–function relationships. First, we consider the importance of using an informed selection of traits for exploring root trait–functioning relationships and discuss how sets of currently understudied traits may provide more insights than common, easy-to-measure traits (Section 3). We then discuss how our understanding of trait–trait relationships and hierarchies among traits can help us to advance our knowledge of the synergistic or antagonistic effects of different traits on plant and ecosystem functioning, and lead us one step further in linking genotypes to function (Section 4). Next, we address the opportunities and pitfalls when generalising trait–functioning patterns across plant species, growth forms, environmental contexts, and temporal and spatial scales (Section 5). Our two examples of plant and ecosystem functioning are woven through the remainder of this paper to illustrate our purpose.
Table 1.
Broad, multidisciplinary assessment of theoretical and demonstrated links between belowground traits and 15 aspects of plant functioning.
| Plant functions | | | | |
--- ---
| Trait importance | Belowground traits | Entity of interest | References (examples of) | Rationale |
| Soil space occupancy (from explorative to exploitative strategies) | | | | This function includes both exploration and exploitation strategies (whose traits generally trade-off). |
| | Maximum rooting depth (explorative) | Whole root system | Thorup-Kristensen (2001); Maeght et al. (2013); Fan et al. (2017) | Reflects the potential range of soil layers colonised by roots. |
| | Lateral rooting extent (explorative) | Whole root system | Schenk & Jackson (2002a) | Reflects the potential area of ground colonised by roots. |
| | Horizontal and vertical root distribution index (explorative) | Whole root system, fine roots | Gale & Gringal (1987); Jackson et al. (1996) | A homogeneous distribution below the soil surface and across depths is typical of an explorative rather than an exploitative strategy. |
| | Root length density (exploitative) | Whole root system, fine roots | Eissenstat (1992); Robinson et al. (1994); Reich et al. (1998); Ravenek et al. (2016) | Increases the spatial coverage of a given soil volume. |
| | Root mass fraction (exploitative) | Whole root system, fine roots | Poorter et al. (2012); Freschet et al. (2015) | Increases the proportional investment of plants towards the root system or specific parts of the root system. |
| | Specific root length (explorative or exploitative) | Whole root system, fine roots | Bauhus & Messier (1999); Ostonen et al. (2007) | Increases the length of root exploring or exploiting the soil per unit root mass invested. |
| | Root branching density (exploitative or exploitative) | Whole root system, absorptive roots | Wiersum (1958); Fitter & Stickland (1991); Larigauderie & Richards (1994); Eissenstat et al. (2015); Zhao et al. (2018); Lynch et al. (2019) | Typically increases with soil resource patchiness as very thin roots tend to proliferate (strong increase in root branching density) in nutrient-rich hotspots. While higher branching density increases local soil exploitation, lower branching might enable larger soil volume exploration. |
| | Root elongation rate (explorative or exploitative) | First-order roots | Forde & Lorenzo (2001); Rewald & Leuschner (2009); Eissenstat et al. (2015) | On pioneer roots, measures the capacity of root systems to send roots to depth (explorative). On absorptive roots, characterises the capacity of root systems to respond to fluctuating resource availability (exploitative). |
| | Time of root growth initiation | First-order roots | Langlois et al. (1983); Eissenstat & Caldwell (1988) | Measures the capacity of root systems to preoccupy soil patches before competitors. |
| | Root branching angle (explorative) | Highest order roots | Trachsel et al. (2013); Lynch (2013); Miguel et al. (2015) | Larger (i.e. steeper) root branching angles promote exploration of deep soil and increase soil volume explored in conditions of competition with neighbouring plants. |
| | Persistence of connection between ramets (explorative) | Rhizomes, stolons, shoot-bearing roots | Jónsdóttir & Watson (1997); Weiser et al. (2016); Klimešová et al. (2018) | Longer lifespan of rhizomes and shoot-bearing roots enables sharing of resources among ramets in a clone over longer period and larger area and enables also longer on-spot occupancy. Longer persistence of connections is also generally related to longer root lifespan. |
| | Lateral spread (explorative) | Rhizomes, stolons, shoot-bearing roots | Weiser & Smyčka (2015); Klimešová et al. (2018) | The longer lateral spread by clonal growth organ (stolon, rhizome) the farther away (from older roots) new roots must be established. |
| Plant N acquisition | | | | |
| | See traits associated with ‘Soil space occupancy’ (+) | Whole root system, absorptive roots | Maire et al. (2009); Simon et al. (2017); Freschet et al. (2018) | Most ‘Soil space occupancy’ traits can be important for this function as they determine the temporal and spatial localisation of roots in soil and the efficiency of soil exploration and exploitation. |
| | Net N uptake rate (+) | Whole root system, absorptive roots | Garnier (1991); Poorter et al. (1991); Garnier et al. (1998) | When measured on short time periods (from hours to days), this measure primarily represents plant N uptake. Over longer periods (days to months) this measure also takes into account N loss due to leaching, herbivory and senescence. |
| | Michaelis–Menten constant (Km) (+) | Whole root system, absorptive roots | Robinson et al. (1994); Miller et al. (2007); Grassein et al. (2015) | The Km is a measure of the affinity of a transport system for its substrate; the lower the Km the faster nutrients can be taken up at low availability. |
| | Ability to fix N2 (+) | Nodules | Sprent (2009); Afkhami et al. (2018); Tedersoo et al. (2018) | Provides N to the plant from atmospheric source N2 via microbial root symbionts. |
| | Nitrogen fixation rate (+) | Nodules | Carlsson & Huss-Danell (2003); Batterman et al. (2013a); Yelenik et al. (2013); Ament et al. (2018) | Increases the rate of atmospheric N acquisition. |
| | Mycorrhizal association type | Absorptive roots | Read & Perez-Moreno (2003); Read et al. (2004); Lambers et al. (2009); Phillips et al. (2013); Liese et al. (2018); Pellitier & Zak (2018) | Different mycorrhizal types have different enzymatic capacities and ability to explore soil volumes and thereby different abilities to take up N. Also, arbuscular mycorrhizal (AM), ectomycorrhizal (ECM), and ericoid mycorrhizal (ERM) fungi represent a gradient from limited saprotrophic capabilities and greater reliance on inorganic N as primary N source to the ability to produce extracellular enzymes and greater use of increasingly complex organic N forms. |
| | Root hair length and density (+) | Absorptive roots | Robinson & Rorison (1987); Freschet et al. (2018) | Root hairs increase the absorptive surface area of nonwoody roots, which is important for N uptake as well as uptake of other nutrients. |
| | Ratio of absorptive to transport roots (+/−) | Fine roots | Schneider et al. (2017); Zadworny et al. (2017) | Increases N uptake rate. However, root cortical senescence can also increase N reallocation from senescing tissue and reduce root respiration and root N requirements. |
| | Mycorrhizal colonisation intensity (+) | Absorptive roots | Miller et al. (1995); Hodge et al. (2003); Treseder (2013) | Mycorrhizal fungi are physiologically and morphologically well adapted to acquire N from soil. The colonisation intensity provides a first approximation of the association between the plant and mycorrhizal partner. However, it should be noted that there is still significant variation in the potential benefit provided by the fungi based on the fungi identity, the total hyphal production and the local environmental context. |
| | Root cortical aerenchyma (+/−) | Absorptive roots | Postma & Lynch (2011); Schneider et al. (2017) | Decreases radial N transport but increases nutrient uptake efficiency by decreasing metabolic costs. |
| | Maximum net uptake capacity (Imax) (+) | Absorptive roots | Robinson et al. (1994); Garnier et al. (1998); Grassein et al. (2018) | Imax represents a potential rate at nonlimiting substrate availability that might, however, not be fully expressed under in situ conditions. |
| | Mycorrhizal hyphal length (+) | Absorptive roots | Miller et al. (1995); Chen et al. (2016); McCormack & Iversen (2019) | The hyphal length associated with a colonised root provides a closer approximation of both the potential benefit and cost of the mycorrhizal symbiosis than colonisation intensity alone. |
| | Specific root respiration (+) | Absorptive roots | Poorter et al. (1991); Reich et al. (1998); Rewald et al. (2016) | Root respiration is related simultaneously to maintenance, growth and nutrient uptake of roots and is therefore inconsistently linked to nutrient uptake. It also varies with N form. |
| | Root N concentration (+) | Absorptive roots | Loqué & von Wirén (2004); Grassein et al. (2015); Grassein et al. (2018) | Root N is involved in all metabolic processes related to N uptake but is also stored in roots and included in root defence compounds and is therefore inconsistently linked to nutrient uptake. |
| Plant P acquisition | | | | |
| | See traits associated with ‘Soil space occupancy’ (+) | Whole root system, absorptive roots | Lynch et al. (2011); Laliberté et al. (2015); Ros et al. (2018) | Most ‘Soil space occupancy’ traits can be important for this function. |
| | Net P-uptake rate (+) | Whole root system, absorptive roots | Itoh (1987); Föhse et al. (1988) | When measured on short time periods (from hours to days), this measure primarily represents plant P uptake. Over longer periods (days to months) this measure also takes into account P loss due to leaching, herbivory and senescence. |
| | Mycorrhizal association type | Absorptive roots | Read (1991); Read & Perez-Moreno (2003); Philipps et al. (2013); Lambers et al. (2009) | Distinct mycorrhizal types have differing capacity to extract P from soils. AM fungi have greater influence on plant P acquisition (representing up to 90% of plant P uptake) than ECM fungi (up to 70%). AM fungal extramatrical mycelia express specific transporters to take up Pi from the periarbuscular space (i.e. they bypass roots). ECM and ERM fungi can access organic forms of P that are not available to AM fungi. |
| | Ability to grow cluster and dauciform roots (+) | First-order roots | Neumann & Martinoia (2002); Shane et al. (2006) | Cluster and dauciform roots are specialised organs efficient in mining P from nutrient-impoverished soils. |
| | Root hair length and density (+) | Absorptive roots | Wissuwa & Ae (2001); Brown et al. (2013); Haling et al. (2013) | Root hairs can be more effective than mycorrhiza in facilitating P acquisition. |
| | Ratio of absorptive to transport roots (+/−) | Fine roots | Schneider et al. (2017); Zadworny et al. (2017) | Increases P-uptake rate. However, root cortical senescence can also increase P reallocation from senescing tissue and reduce root respiration and root P requirements. |
| | Rhizospheric phytase and phosphatase activity (+) | First-order roots | Spohn & Kuzyakov (2013); Meier et al. (2015) | Roots can release (acid) phosphatases (sometimes phytases) directly or exude organic substances that act as substrate for microorganisms which in turn produce phytases and (acid or alkaline) phosphatases. |
| | Mycorrhizal colonisation intensity (+) | Absorptive roots | Treseder (2013); Elumeeva et al. (2018) | Mycorrhizal fungi are physiologically and morphologically better adapted than roots to extract P from soils thereby increasing host plant nutrient concentrations. |
| | Mycorrhizal genetic diversity (+) | Absorptive roots | Plassard & Dell (2010); Plassard et al. (2011); Köhler et al. (2018) | P-uptake efficiency increases with increasing ECM fungi species richness and diversity. Increased ECM fungi diversity is associated with greater variability in soil exploration types among ECM fungi species, which increases the explored soil volume for P. |
| | Root cortical aerenchyma (+/−) | Absorptive roots | Hu et al. (2014); Schneider et al. (2017) | Decreases radial P transport but increases nutrient uptake efficiency by decreasing metabolic costs. |
| | Root exudation rate (+) | Absorptive roots | Lopez-Bucio et al. (2000); Lambers et al. (2012); Ryan et al. (2014); Zhang et al. (2016) | Excretion of acidifying/chelating compounds (e.g. citric acid, malic acid) enhances the solubility of inorganic P, although evidence exists mostly for Proteaceae and crops. |
| | Mycorrhizal hyphal length (+) | Absorptive roots | Miller et al. (1995); Laliberté et al. (2015); Chen et al. (2016); McCormack & Iversen (2019) | The amount of hyphal length associated with a colonised root provides a closer approximation of both the potential benefit and cost of the mycorrhizal symbiosis than colonisation intensity alone. |
| | Michaelis–Menten constant (Km) (+) | Whole root system, absorptive roots | Itoh (1987); Lambers et al. (2006) | The Km is a measure of the affinity of a transport system for its substrate; the lower the Km the faster nutrients can be taken up at low availability. However, the diffusion of inorganic phosphate in soil is the key limiting factor for P uptake so that kinetic parameters of the P-uptake system may have only small effects on the overall uptake capacity of plants. |
| Plant water acquisition | | | | |
| | See traits associated with ‘Soil space occupancy’ (+) | Whole root system, absorptive roots | Fort et al. (2017); Chitra-Tarak et al. (2019) | Most ‘Soil space occupancy’ traits can be important for this function, especially in soils with heterogeneous water distribution. |
| | Root hair length and density (+) | Absorptive roots | Segal et al. (2008); Carminati et al. (2017) | Improve the contact of roots with water films of soil particles. |
| | Cortical thickness (−) | Absorptive roots | Huang & Eissenstat (2000); Comas et al. (2012) | Thinner cortex resulting in less impedance to water movement towards the stele. |
| | Fraction of passage cells in exodermis (+) | Absorptive roots | Enstone & Peterson (1992, 1996); Huang et al. (1995); Peterson & Waite (1996) | Higher density of passage cells enhances water movement towards the stele. |
| | Mycorrhizal colonisation intensity (+) | Absorptive roots | Augé et al. (2001); Querejeta et al. (2003, 2012); Prieto et al. (2016) | Allows water transfer to the plant and improves root contact with the soil. |
| | Hydraulic conductance (+) | Whole root system, fine roots | Muhsin & Zwiazek (2002); Eldhuset et al. (2013); Zadworny et al. (2018) | Increases the potential flow of water from the roots to upper parts of the plant. |
| | Vulnerability to embolism (−) | Whole root system | Domec et al. (2006) | Occurrence of embolism limits the potential flow of water from roots to upper parts of the plant |
| | Type and frequency of root entities | Whole root system | North (2004); Draye et al. (2010); Rewald et al. (2011, 2012); Ahmed et al. (2018) | Distribution of the root hydraulic properties between root entities determines root system hydraulic architecture. |
| | See traits associated with ‘Soil water holding capacity’ (+) | Fine roots | Feddes et al. (2001) | Soil water holding capacity acts as a buffer against periodic rainfall events, particularly in places where rainfall evens are irregular. |
| | Suberin concentration (−) | Whole root system | Steudle & Peterson (1998); Schreiber et al. (2005); Gambetta et al. (2013) | Not only deposition of suberin lamellae but also chemical composition of suberin would affect radial water flow from cell to cell (i.e. decrease root hydraulic conductivity). |
| | Xylem lumen area (+) | Whole root system, fine roots | Hummel et al. (2007); Valenzuela-Estrada et al. (2008); Long et al. (2013); Kong et al. (2014) | Greater conduit lumen area may exhibit enhanced hydraulic conductance. |
| | Aquaporin expression (+) | Absorptive roots | Johnson et al. (2014) | Facilitates radial, symplastic conductance of water. |
| | Lignin concentration (−) | Absorptive roots | Ranathunge et al. (2003, 2004); Naseer et al. (2012) | Lignins may act as apoplastic barriers limiting radial water transport across roots. |
| Root penetration of soil | | | | |
| | Root growth pressure (+) | First-order roots | Dexter (1987); Clark & Barraclough (1999) | Root growth pressure is essential to root penetration, although there is limited evidence of its variation as a trait. |
| | Mean root diameter (+) | First-order roots | Materechera et al. (1992) | Thicker roots are generally better at penetrating hard soils to greater depth. |
| | Number of main root axes (+) | Whole root system | Jakobson & Dexter (1987); Landl et al. (2017) | In structured soils containing many cracks and biopores, plants with many main axes may penetrate more effectively. |
| | Root buckling resistance (+) | First-order roots | Clark et al. (2008); Burr-Hersey et al. (2017) | Species and genotypes differ substantially in their ability to penetrate hard soils without buckling or altering their growth trajectory. |
| | Root cap friction coefficient (−) | First-order roots | Bengough & McKenzie (1997); Iijima et al. (2003) | Sloughing of root border cells and root exudate production decreases the mechanical resistance to root growth and aids root penetration. |
| Plant nutrient and C conservation | | | | |
| | Lifespan (+) | Whole root system, rhizomes | McCormack et al. (2012); Liu et al. (2016); Klimešová et al. (2018) | Decreases losses associated with root turnover. |
| | Root resorption efficiency and proficiency (+) | Whole root system, absorptive roots | Gordon & Jackson (2000); Freschet et al. (2010) | Decreases losses associated with root senescence. |
| | Specific respiration rate (−) | Whole root system | Walk et al. (2006); Rewald et al. (2014, 2016) | Respiration is a major driver of C loss. |
| | Ratio of absorptive to transport roots (−) | Fine roots | Lynch (2019); Schneider et al. (2017) | Root cortical senescence reduces metabolic maintenance costs. |
| | See traits associated with ‘Plant protection against pathogens and herbivory’ (+) | Whole root system | Kaplan et al. (2008); Moore & Johnson (2017) | Traits providing ‘Plant protection against pathogens and herbivory’ are important for this function. |
| | Root tissue density (+) | Whole root system | Ryser (1996); Liu et al. (2016); Bumb et al. (2018); Lynch (2019) | Increases root lifespan, plant mechanical resistance and decreases plant palatability. Evidence gathered above ground for leaf tissue density (e.g. leaf dry matter content) theoretically applies belowground. However, reduced tissue density due to aerenchyma formation, increase in cortical cell sizes or decrease in cortical cell numbers may also reduce metabolic costs. |
| Plant storage | | | | |
| | Ability to produce storage structures (+) | Tubers, rhizomes, tap roots, corms, bulbs | Klimešová et al. (2018); Pausas et al. (2018) | Substantially improves the overall capacity of plants to store C and nutrients. |
| | Total belowground carbohydrate storage (+) | Tubers, rhizomes, tap roots, corms, bulbs | Janeček & Klimešová (2014); Martínez-Vilalta et al. (2016) | Storage in specialised organs represents the largest part of C storage and, by contrast to storage in other types of roots, represents an active storage strategy rather than passive accumulation due to limitation of growth (e.g. by nutrients, cold). |
| Plant regeneration | | | | |
| | Bud bank size (+) | Whole root system, rhizomes | Klimešová & Klimeš (2007); Ott et al. (2019) | Belowground bud bank allows plant regeneration after aboveground disturbance |
| | Depth of buds in bud bank (+) | Whole root system, rhizomes | Lubbe & Henry (2019); Ott et al. (2019) | Deeper buds are more resistant to disturbance like fire or ploughing. On the other hand, deeper buds require more resource storage and time to produce new aboveground shoots. |
| | See traits associated with ‘Plant storage’ (+) | Tubers, rhizomes, tap roots, corms, bulbs | de Moraes et al. (2016) | Most ‘Plant storage’ traits can be important for this function. Storage organs support regrowth of new aboveground parts. |
| | Ability to produce adventitious shoots on roots (+) | Whole root system | Klimešová et al. (2017a) | Adaptation of plants to soil disturbance (numerous perennial weeds of arable land possess ability to resprout from roots). Some species may produce adventitious shoots spontaneously, some only in response to disturbance. |
| Plant lateral spread and belowground dispersal | | | | |
| | Ability to produce rhizomes (+) | Rhizomes | Groff & Kaplan (1988) | Rhizomes (belowground stems with adventitious roots) allow the colonisation of new ground while relying to some extent on resources from well-established ramets. |
| | Lateral spread (explorative) | Rhizomes, stolons, shoot-bearing roots | Weiser & Smyčka (2015); Klimešová et al. (2018) | The longer lateral spread by clonal growth organ (stolon, rhizome) the farther away (from older roots) new roots must be established. |
| | Ability to produce adventitious roots (+) | Whole root system, rhizomes | Groff & Kaplan (1988) | Facilitates establishment of new rooted areas along belowground (rhizomes) or aboveground (stolons, decumbent shoots) stems and splitting a clone to physically independent parts. |
| | Lateral rooting extent (explorative) | Whole root system | Schenk & Jackson (2002a) | Reflects the potential area of ground colonised by roots. |
| | Ability to produce adventitious shoots on roots (+) | Shoot-bearing roots | Groff & Kaplan (1988); Klimešová et al. (2017a) | Common among species of dry and disturbed areas to extend plant spread and to overcome bud bank limitation. |
| | See traits associated with ‘Plant storage’ (+) | Tubers, rhizomes, tap roots, corms, bulbs | de Moraes et al. (2016); Klimešová et al. (2017b) | Most ‘Plant storage’ traits can be important for this function as they often serve both functions. |
| | See traits associated with ‘Root penetration force in soil’ (+) | Root and rhizome apices | Klimešová et al. (2012) | Facilitates movement of roots and rhizomes into new areas. |
| | Persistence of connection between ramets (+) | Rhizomes, shoot-bearing roots | Jónsdóttir & Watson (1997) | Longer lifespan of rhizomes and shoot-bearing roots enables sharing of resources among ramets in a clone over a longer period and larger area and enables longer on-spot occupancy. |
| Initiation and establishment of mycorrhizal symbioses | | | | This function refers to mycorrhizal fungi as well as pathogenic hyphae. |
| | Root cortex thickness (+) | Absorptive roots | Brundrett (2002); Comas et al. (2012); Zadworny et al. (2016); Kong et al. (2017) | Larger parenchyma cortex enhances associations with mycorrhizal colonisation by providing larger space for mycorrhizal fungal hyphae and arbuscules. |
| | See traits associated with ‘Plant P acquisition’ (−) | Absorptive roots | Oldroyd (2013); Raven et al. (2018) | Most ‘Plant P acquisition’ traits can be important for this function. Plants with higher P acquisition capacities and therefore higher P status are less likely to establish symbioses. |
| | Root cortex area fraction (+) | Absorptive roots | Comas et al. (2012); Burton et al. (2013); Gu et al. (2014); Valverde-Barrantes et al. (2016) | A large cortex area fraction theoretically implies a higher possibility for connection to symbionts by providing larger space for mycorrhizal fungal hyphae and arbuscules. |
| | Fraction of passage cells in exodermis (+) | Absorptive roots | Kamula et al. (1994); Peterson & Enstone (1996); Sharda & Koide (2008); Zadworny & Eissenstat (2011) | Exodermal passage cells provide the major penetration sites for the colonisation of mycorrhizal and pathogenic hyphae. |
| | Concentration of compounds controlling the degree of colonisation: e.g. lignin, suberin, phenolic compounds, phytohormones, ‘reactive oxygen species’, branching factors (−) | Absorptive roots | Nicholson & Hammerschmidt (1992); Matern et al. (1995); Fester & Hause 2005; López-Ráez et al. (2010) | Roots contain and produce antifungal compounds (i.e. lignin deposition, suberisation, high tannin content and ‘reactive oxygen species’) that control fungi (pathogenic and mycorrhizal) entry and development. |
| | Carbon translocation to symbionts (+) | Whole root system | Tuomi et al. (2001); Hogberg & Hogberg (2002); Hobbie (2006); Nehls et al. (2010) | Symbiosis establishment require plant resources such as photosynthetically assimilated carbon; the symbiosis affects the rate of photosynthesis and influences the carbon assimilation and allocation |
| Plant protection against pathogens and herbivory | | | | |
| | Secondary metabolites (alkaloids, glucosinolates, phenolics, terpenoids, furanocoumarins, cardenolides) (+) | Whole root system, absorptive roots | Zangerl & Rutledge (1996); Bezemer et al. (2004); Kaplan et al. (2008); Rasmann et al. (2009); Moore & Johnson (2018) | Decreases plant palatability. |
| | Mycorrhizal colonisation intensity (+) | Absorptive roots | Newsham et al. (1995); Jung et al. (2012); Babikova et al. (2014) | Provides protection against some herbivores and pathogens. |
| | Fraction of passage cells in exodermis (−) | Absorptive roots | Kamula et al. (1994) | Exodermal passage cells provide the major penetration sites for the colonisation of pathogenic fungi. |
| | See traits associated with ‘Plant resistance to uprooting’ (+) | Whole root system | Ennos (2000); Burylo et al. (2009) | Prevents uprooting during grazing by aboveground herbivores and total root system disruption during grazing by belowground herbivores. |
| | Root lignin concentration (+) | Whole root system, absorptive roots | Johnson et al. (2010) | Lignin concentration and composition contribute to root toughness acting as an effective barrier to root herbivory. |
| | Root silica and calcium oxalate content (+) | Absorptive roots | Korth et al. (2006); Park et al. (2009); Moore & Johnson (2017) | These deposits are hard and can abrade insect mouthparts and reduce the digestibility of food via a physical action. |
| | Root tissue density (+) | Whole root system, absorptive roots | Bumb et al. (2018) | Decreases plant palatability. Evidence gathered above ground for leaf tissue density (e.g. leaf dry matter content) theoretically applies belowground. |
| | Root N concentration (−) | Whole root system, absorptive roots | Brown & Gange (1990); Dawson et al. (2002); Agrawal et al. (2006) | Low levels of N limit the nutritional value of the root tissue, as evidenced above ground. |
| | Root hair length and density (+) | Absorptive roots | Johnson et al. (2016); Moore & Johnson (2017) | Root hairs offer some protection by preventing very small herbivores from reaching and penetrating the root epidermis or by providing refuge for natural enemies of herbivores such as entomopathogenic nematodes. |
| Plant resistance to vertical uprooting | | | | This applies particularly to herbaceous species (e.g. under conditions of large herbivore grazing). |
| | Root length density (+) | Whole root system | Ennos (1989) | Particularly important across a range of soil horizons. Increasing root length augments the pull-out resistance up to a critical root axis length, above which roots will break in tension instead of slipping out of the soil. |
| | Root mass fraction (+) | Whole root system | Ennos (1993) | Low investment in belowground parts increases chances of uprooting. |
| | Root branching density (+) | Whole root system | Dupuy et al. (2005a); Devkota et al. (2006); Burylo et al. (2009) | The tensile force required to uproot whole plants is positively related to the root branching density and number of root tips per unit volume of soil. |
| | Tensile strength (+) | Whole root system | Ennos & Pellerin (2000); Chimungu et al. (2015); Mao et al. (2018) | An estimation of total anchorage strength can be obtained by summing the basal tensile strengths of all the roots. |
| | Modulus of elasticity (+) | Whole root system | Mao et al. (2018) | If a root has a high elastic modulus, it will be able to deform further without failing under a given load, thus improving plant anchorage. |
| | Ability to produce rhizomes (+) | Rhizomes | Bankhead et al. (2017) | The force required to cause rhizome failure can be high, thus improving overall plant anchorage. |
| | Lateral rooting extent (+) | Whole root system | Ennos (1989); Mickovski et al. (2005) | Lateral roots increase the weight of the root–soil plate enmeshed by roots. Increasing root length augments the pull-out resistance up to a critical root axis length, above which roots will break in tension instead of slipping out of the soil. |
| | Specific root length (+) | Whole root system | Ennos (1993); Edmaier et al. (2015) | High specific root length often implies more numerous thinner roots improving anchorage whereas low specific root length implies less but thicker roots. |
| Plant resistance to overturning | | | | This function applies particularly to tree species (e.g. under conditions of lateral wind loading). |
| | Root area ratio (+) | Whole root system | Dupuy et al. (2005b) | The greater the root area ratio of coarse and fine roots crossing the potential failure zone (edges of soil–root plate), the more the soil shear strength is increased around the root–soil plate. |
| | Vertical root length distribution index (+) | Whole root system | Bruce et al. (2006); Fourcaud et al. (2008) | Deeper root systems are better anchored because the anchorage force provided by roots is proportional to their length up to a critical length, beyond which roots will break before more distal regions are stretched. |
| | Root length density (+) | Whole root system | Danquechin Dorval et al. (2016) | The higher the density of roots, whether tap, sinker or lateral roots, the greater the resistance to overturning. |
| | Root mass fraction (+) | Whole root system | Danquechin Dorval et al. (2016) | Proportionally low investment in belowground parts increases chances of overturning. |
| | Root bending strength (+) | Lateral roots, sinker roots | Nicoll & Ray (1996); Stokes & Mattheck (1996) | Increases resistance to failure due to root bending during lateral sway. |
| | Presence of sinker roots along lateral roots (+) | Lateral roots | Danjon et al. (2005) | Sinker roots capture a mass of soil and so increase the weight of the root–soil plate. During lateral sway, a heavier root–soil plate will improve resistance to overturning. |
| | Presence of a taproot (+) | Taproot | Ennos (1993); Fourcaud et al. (2008); Burylo et al. (2010); Yang et al. (2017) | If shallow lateral roots are growing horizontally from the taproot, then the taproot constitutes the main root element that contributes to anchorage rigidity. Longer taproots anchor the plant better in soil. |
| Plant tolerance to waterlogging | | | | |
| | Presence of aerenchyma tissue (+) | Absorptive roots, adventitious roots, rhizomes | Kohl et al. (1996); Colmer (2003); Colmer & Voesenek (2009); Abiko et al. (2012); Sauter (2013) | Improves root tissue oxygenation by conducting air along the roots (and rhizomes). |
| | Presence of pneumatophores (+) | Pneumatophores | Purnobasuki & Suzuki (2005); Zhang et al. (2015); da Ponte et al. (2019) | Pneumatophores (i.e. aerial roots) are morpho-anatomical adaptations of roots with negative geotropism that emerge above the water surface to take up oxygen. |
| | Root tissue porosity (+) | Whole root system | Gibberd et al. (2001); Purnobasuki & Suzuki (2004); Ding et al. (2017); Striker & Colmer (2017) | Enhances the internal movements of gases and increases root oxygenation in anaerobic soils. |
| | Tolerance to high ethanol concentration (+) | Whole root system | Jackson et al. (1982); Boamfa et al. (2005); Maricle et al. (2014) | Ethanol toxicity is a prime cause of the injury and death of flooded plants. |
| | Fine-root regrowth rate (+) | Whole root system | Vidoz et al. (2010); Luo et al. (2011); Sauter 2013; Dawood et al. (2014) | Adventitious roots functionally replace primary root systems that may deteriorate during flooding due to oxygen deficiency. |
| | Specific root respiration (−) | Whole root system | Moog & Brugemann (1998); Nakamura & Nakamura (2016) | Reduces root oxygen requirements. |
| Plant resistance to and avoidance of drought | | | | |
| | Critical tension for conduit collapse (+) | Whole root system | Hacke et al. (2001) | Decrease the risk of conduit collapse during drought. |
| | See traits associated with ‘Plant water acquisition’ (+) | Whole root system, absorptive roots | Brunner et al. (2015) | Most ‘plant water acquisition’ traits, including ‘soil space occupancy’ traits, are important for plant resistance to drought. |
| | See traits associated with ‘Plant regeneration’ (+) | Whole root system, tubers, rhizomes, tap roots, corms, bulbs | Qian et al. (2017) | Plant regeneration capacity provides plants with the ability to survive intense drought periods despite the loss of aboveground biomass. |
| | See traits associated with ‘Plant storage’ (+) | Tubers, rhizomes, tap roots, corms, bulbs | de Moraes et al. (2016) | Most ‘Plant storage’ traits can be important for this function. Storage organs support regrowth of new aboveground parts. |
Trait importance (colour code): dark blue, function in at least some environmental conditions trait of prime importance for performing the plant function in at least some environmental conditions; medium blue, trait of secondary importance in at least some environmental conditions; light blue, ecosystems dominated by arbuscular trait of potential but unknown importance due to missing or low experimental evidence. Refers to traits whose measurement protocols are described in Freschet et al. (2020). (+) vs (−) refers to the positive or negative effect of one trait on the function, respectively. (explorative) vs (exploitative) refers to traits that increase the overall volume of soil explored or improve the exploitation of a more limited volume of soil, respectively. ‘Entity of interest’ refers to a range of plant belowground parts as described in Freschet et al. (2020). The full list of references is available as Supporting Information Notes S1.
Table 2.
Broad, multidisciplinary assessment of theoretical and demonstrated links between belowground traits and nine aspects of ecosystem functioning.
| Ecosystem processes and properties | | | | |
--- ---
| Trait importance | Belowground traits | Entity of interest | References (examples of) | Rationale |
| Ecosystem C cycling | | | | This process includes C inputs, losses, retention and transformation. Its complexity may not be meaningfully simplified into traits that accelerate vs decelerate the element cycling. |
| | See traits associated with ‘Soil space occupancy’ | Whole root system, absorptive roots, rhizomes | Jastrow et al. (1998); Jobbágy & Jackson (2000); Rasse et al. (2005); De Deyn et al. (2008); Wang et al. (2010); Clemmensen et al. (2013); Pérès et al. (2013); Cornelissen et al. (2014); Liao et al. (2014); Gould et al. (2016); Poirier et al. (2018) | Most ‘Soil space occupancy’ traits can be important for this process because they determine the location (i.e. biotic and abiotic conditions) of root effects on soil, influences the amount of contact surface between roots and soil (e.g. physical enmeshment of soil aggregates), influences the amount of root-derived C inputs to soil (e.g. litter, exudates), soil moisture and nutrient availability. |
| | Mycorrhizal association type | Absorptive roots | Langley et al. (2006); Phillips et al. (2013); Averill et al. (2014); Soudzilovskaia et al. (2015, 2019) | Ecosystems dominated by arbuscular mycorrhizal (AM), ericoid mycorrhizal (ERM) and ectomycorrhizal (ECM) fungi plants are characterised by different carbon and mineral nutrient cycles due to the different enzymatic capacities of the symbionts. Ecosystems dominated by plants in symbiosis with ECM fungi store 70% more C in soils than ecosystems dominated by AM-associated plants. |
| | Specific root respiration | Absorptive roots | Bond-Lamberty et al. (2004); Reich et al. (2008); Bardgett et al. (2014) | The contribution of root respiration represents on average 40–50% of the total soil CO2 efflux but varies strongly among species. |
| | Mycorrhizal colonisation intensity | Absorptive roots | Rillig et al. (2001); Gleixner et al. (2002); Kögel-Knabner (2002); Allen et al. (2003); Langley & Hungate (2003); Fernandez et al. (2016); Poirier et al. (2018) | Mycorrhizal fungi synthesise hydrophobic and recalcitrant compounds, such as chitin and melanin, respectively, which are discussed to be less biodegradable and to accumulate in soils (at least in ecosystems experiencing cold climates). |
| | Root lifespan and turnover | Whole root system, fine roots | Jackson et al. (1997); Fan & Guo (2010); McCormack et al. (2015); Klimešová et al. (2018) | Root lifespan regulates the quantity and quality of root-derived organic matter transferred into the soil organic matter pool. Fine roots and low-order roots, which have a short lifespan and turnover rapidly, represent a substantial input of C into the soil. |
| | Root litter mass loss rate | Whole root system, fine roots | Silver & Miya (2001); Zhang & Wang (2015); See et al. (2019) | Determines the rate at which C from litter is released into the atmosphere or enters the soil in the form of particulate organic matter or dissolved organic matter. |
| | Root exudation rate | Fine roots | Tisdall & Oades (1982); Kuzyakov (2010); Phillips et al. (2011); Keiluweit et al. (2015); Tückmantel et al. (2017); Henneron et al. (2020) | Enhanced root exudation increases the microbial activity and accelerates the breakdown of soil organic matter in the rhizosphere (priming effect). Meanwhile root exudates can act as binding agents to stabilise soil aggregates and thus enhance the stabilisation of occluded soil organic matter. |
| | Root hair length and density | First-order roots | Gould et al. (2016); Poirier et al. (2018) | Root hairs can physically attach soil particles and contribute to the formation of stable soil aggregates enriched in C. |
| | Mycorrhizal hyphal length | Absorptive roots | Miller & Jastrow (1990); Degens (1997); Wilson et al. (2009); Wu et al. (2014) | Increased hyphal length leads to greater enmeshment of soil particles and increases soil aggregate stability and soil organic C stabilisation. |
| | Ability to fix N | Nodules | Cole et al. (1995); Binkley (2005); Kaye et al. (2000); Fornara & Tilman (2008); De Deyn et al. (2011) | The biological fixation of N2 by N2-fixing root-symbiotic bacteria generally increases the plant belowground and aboveground primary productivity. The presence of N2-fixing species also tends to increase soil organic C accumulation. |
| | Root branching density | Absorptive roots | Poirier et al. (2018) | A high branching density contributes to stabilising soil aggregates through enmeshment of soil particles and higher production of exudates by root tips. |
| | See traits associated with ‘Hydraulic redistribution’ (+) | | Domec et al. (2010) | Affects topsoil organic matter and litter decomposition. |
| Ecosystem N cycling | | | | This process includes N inputs, losses, retention and transformation. Its complexity may not be meaningfully simplified into traits that accelerate vs decelerate the element cycling. |
| | See traits associated with ‘Soil space occupancy’ | Whole root system, absorptive roots | Fornara et al. (2011); Abalos et al. (2014); De Vries et al. (2016) | Most ‘Soil space occupancy’ traits can be important for this process. The density and distribution of roots determines the location of root exudates, litter inputs and nutrient uptake. |
| | See traits associated with ‘Plant N acquisition’ | Whole root system, absorptive roots | van der Krift & Berendse (2001); Scherer-Lorenzen et al. (2003); Personeni & Loiseau (2005); Batterman et al. (2013b); Leroux et al. (2013); Moreau et al. (2019) | Most traits associated with ‘Plant N acquisition’ can be important for this process. The capacity of plants to acquire N from soil, and compete with microorganisms, across a range of locations in the soil influences N cycling. |
| | Root lifespan and turnover | Whole root system, fine roots | Jackson et al. (1997); Fan & Guo (2010); McCormack et al. (2015) | Influences the input of litter (and N-containing compounds) into the soil. |
| | Root litter nutrient release rate | Whole root system, fine roots | Parton et al. (2007) | Determines the rate at which N2 is transferred from litter to soil. |
| | Root N concentration | Whole root system, absorptive roots | Hobbie et al. (2006); Parton et al. (2007); Legay et al. (2014); Cantarel et al. (2015); Thion et al. (2016) | Root N is positively related to litter N release rate (lower N immobilisation rate), N mineralisation and nitrification (e.g. archaeal ammonia oxidisers are more abundant in the rhizosphere of high N roots than low N roots). |
| | Mycorrhizal association type | Absorptive roots | Phillips et al. (2013); Lin et al. (2017); Wurzburger & Brookshire (2017); Zhu et al. (2018) | Ecosystems dominated by AM, ERM and ECM plants are characterised by different C and mineral nutrient cycles due to the different enzymatic capacities of the symbionts. AM, ECM, and ERM fungi represent a gradient from limited saprotrophic capabilities and greater reliance on inorganic N as primary N source to the ability to produce extracellular enzymes and greater use of increasingly complex organic N forms. |
| | Root exudation rate | Fine roots | Phillips et al. (2011); Meier et al. (2017); Moreau et al. (2019) | Enhanced root exudation increases soil microbial activity and accelerates the breakdown of (fast-cycling) organic N forms in the rhizosphere. Roots can exude/secrete nitrification and denitrification inhibitors. |
| Ecosystem P cycling | | | | |
| | See traits associated with ‘Plant P acquisition’ | Whole root system, absorptive roots | Lambers et al. (2008); Ros et al. (2018) | Most traits associated with ‘Plant P acquisition’, including traits associated with ‘Soil space occupancy’, can be important for this process. The capacity of plants to acquire P from soil, with or without mycorrhizal symbiosis, across a range of locations in the soil influences P cycling. |
| | Root lifespan and turnover | Whole root system, fine roots | Jackson et al. (1997); Fan & Guo (2010); McCormack et al. (2015) | Influences the input of litter (and P-containing compounds) into the soil. |
| | Root litter nutrient release rate | Whole root system, fine roots | Fujii & Takeda (2010) | Determines the rate at which P is transferred from litter to soil. |
| | Root P concentration | Whole root system, fine roots | Seastedt (1988); McGrath et al. (2000); Manzoni et al. (2010) | Can be a major driver of soil P availability in P-limited soils. |
| Soil water holding capacity | | | | |
| | See traits associated with ‘Ecosystem C cycling’ | Fine roots | Rillig & Mummey (2006); Poirier et al. (2018) | Root and mycorrhizal traits favouring C accumulation in soil and improving soil aggregate stability, improve soil water holding capacity. |
| | Root mass and length density (+) | Fine roots | Noguchi et al. (1997) | After death and decay, roots leave empty galleries and pores favourable to water retention. Roots also contribute to organic matter accumulation, which increases soil water holding capacity. |
| | Root turnover (+) | Fine roots | Noguchi et al. (1997); Perillo et al. (1999) | After death and decay, roots leave empty galleries and pores favourable to water retention. Roots also contribute to organic matter accumulation, which increases soil water holding capacity. |
| | Mean root diameter | First-order roots | Norton et al. (2004); Ghestem et al. (2011); Soto-Gomez et al. (2018) | Larger roots leave larger pores that, depending on the context, may be favourable or detrimental to water retention. |
| Bedrock weathering | | | | |
| | Root exudation rate (+) | Fine roots | Ochs et al. (1993); Hinsinger (1998); Phillips et al. (2009); He et al. (2012); Houben & Sonnet (2012) | Exudation of organic acids and enzymes by roots enhance bedrock weathering. Additionally, C flux to the rhizosphere stimulates the weathering activity of the root microbiome. |
| | Maximum rooting depth (+) | Whole root system | Richter & Markewitz (1995); Schwinning (2010); Maeght et al. (2013) | Deep-rooted species are most likely to reach bedrock. |
| | Root mass and length density (+) | | Hinsinger et al. (1992) | Increases root overall impact on bedrock. |
| | See traits associated with ‘Root penetration of soil’ (+) | Whole root system | Bengough (2012); Kolb et al. (2012) | Root growth pressures may help to extend cracks in weathering bedrock. Root elongation within a rock crack depends on the balance of axial and radial pressures. |
| | Mycorrhizal association type | Fine roots | Taylor et al. (2009); Pawlik et al. (2016) | There is stronger evidence for bedrock weathering from ECM activity than this of AM. |
| | Mycorrhizal fungi identity | Fine roots | Jongmans et al. (1997); Hoffland et al. (2004); Schwinning (2010) | Due to differences in rates of chemical exudation, hyphal production and exploration distances among species of mycorrhizal fungi, species identity is likely to be an important determinant for faster or slower weathering rates. |
| | Root secondary growth (+) | Whole root system | Misra et al. (1986); Richter & Markewitz (1995) | The radial force widening a crack is the product of the radial pressure and the contact area of root surface in the crack. |
| Hydraulic redistribution | | | | |
| | Diverse root growth angles (+) | Whole root system | Hultine et al. (2003a,b) Scholz et al. (2008); Siqueira et al. (2008) | Extensive distribution of roots in higher and lower soil horizons allows connection between wet and drier soil layers. |
| | Maximum rooting depth (+) | Whole root system | Burgess et al. (1998, 2000); Scholz et al. (2008); Maeght et al. (2013) | Presence of roots at depth allows access to wetter soil layers in soils experiencing drying of the upper horizons, which is critical for hydraulic lift. |
| | See traits associated with ‘Plant resistance and survival to drought’ (+) | Whole root system, absorptive roots | Domec et al. (2004); McElrone et al. (2007); Warren et al. (2008); Grigg et al. (2010); Prieto et al. (2012a, 2014) | Most traits favouring ‘Plant resistance to drought’, including traits favouring ‘Plant water acquisition’, will contribute to maintaining a functional root system during periods of soil drying and therefore allow hydraulic redistribution. |
| | See traits associated with ‘Plant water acquisition’ (+) | Whole root system, absorptive roots | Egerton-Warburton et al. (2008); Prieto et al. (2012b) | Root and mycorrhizal traits favouring ‘plant water acquisition’ increase water flow through the root system. |
| | Vertical root mass distribution index (+) | Whole root system | Schenk & Jackson (2002a,b) | High proportion of roots in deeper soil horizons may reinforce hydraulic lift. |
| | Root turnover (+) | Absorptive roots | Espeleta et al. (2004) | Determines the presence of active roots in soil layers that absorb and redistribute water. |
| | See traits associated with ‘Plant lateral spread and belowground dispersal’ (+) | Whole root system | Jónsdóttir & Watson (1997); Stuefer (1998) | Redistribution of water can occur in the horizontal plane via plant clonal connectors. |
| Ecosystem evapotranspiration | | | | |
| | See traits associated with ‘Plant water acquisition’ (+) | Whole root system, absorptive roots | Nepstad et al. (1994); Augé et al. (2008); Fort et al. (2017) | Most traits associated with ‘Plant water acquisition’ facilitate the transfer of water from the soil to the plant and favour evapotranspiration. |
| | See traits associated with ‘Hydraulic redistribution’ (+) | | Domec et al. (2010) | Facilitates the transfer of water from deep soils to shallower soil horizons. |
| Soil interparticle cohesion | | | | This property relates to soil surficial erosion. |
| | See traits associated with ‘Soil space occupancy’ (+) | Whole root system, fine roots | Angers & Caron (1998); Gould et al. (2016); Poirier et al. (2018) | Most traits increasing ‘Soil space occupancy’ contribute to stabilising soil macroaggregates through entanglement of soil particles, production of exudates, binding and compressing soil particles, and root-induced wetting and drying cycles. |
| | Root exudation rate (+) | Fine roots | Carminati et al. (2016); Baumert et al. (2018); Poirier et al. (2018) | Exudates (especially polysaccharides and cations) act as binding agents to initiate microaggregate formation and stabilise macroaggregates. Exudates also clog aggregate pores and induce water repellence. |
| | Mycorrhizal colonisation intensity (+) | Absorptive roots | Rillig et al. (2015); Poirier et al. (2018) | Mycorrhizal fungi produce exopolymers and proteins that glue and bind soil particles. The release of hydrophobins by ECM increases aggregate hydrophobicity. In addition, hypha enmesh soil fine particles within micro- and macroaggregates. |
| | Root hemicellulose content (+) | Whole root system | Poirier et al. (2018) | Hemicellulose contains pentoses and uronic acids that stabilise soil aggregates. |
| | Root suberin content (+) | Whole root system | Bachman et al. (2008); Poirier et al. (2018) | Suberin increases aggregate hydrophobicity and soil water repellency. |
| | See traits associated with ‘Plant water acquisition’ (+) | Whole root system, absorptive roots | Czarnes et al. (2000) | Soil interparticle cohesion is affected by wetting-drying cycles that increase the strength of organic binding agents. |
| Soil reinforcement against shallow landslides | | | | |
| | Maximum rooting depth (+) | Whole root system | van Beek et al. (2005) | Deep growing roots are more likely to cross the potential soil shear surface (zone within the soil where failure initiates), which enhances soil reinforcement. |
| | Vertical root length distribution index (+) | Whole root system | Ghestem et al. (2014) | A greater number of branched roots below the shear plane will enhance root anchorage and so improve soil shear resistance. |
| | Root area ratio (+) | Whole root system | Wu (1976); Bischetti et al. (2005); Mao et al. (2012) | The greater the root area ratio of coarse and fine roots crossing the potential failure zone, the more the soil shear strength is increased, thus improving soil reinforcement. |
| | Root length density (+) | Whole root system | Ennos (1993); Stokes et al. (2009) | Increasing root length augments the pull-out resistance up to a critical length, from which roots will break in tension instead of slipping out of the soil. |
| | Root branching angle (+) | Tap and sinker roots | Ghestem et al. (2014) | Vertically oriented roots increase soil shear resistance. |
| | Tensile strength (+) | Whole root system | Chimungu et al. (2015); Giadrossich et al. (2017, 2019); Mao et al. (2018) | A higher tensile strength will enable a root to mobilise its full strength as it is pulled out of soil, thereby increasing soil shear strength. |
| | Modulus of elasticity (+) | Whole root system | Cohen et al. (2009); Mao et al. (2018) | Roots with large elastic modulus can remain anchored in soil, even after soil failure has occurred, thus holding vegetation in place and retarding or preventing mass substrate failure. |
| | Root bending resistance (+) | Tap, sinker and lateral roots | Goodman et al. (2001) | During a landslide, thick structural roots act like soil nails that bend, preventing soil collapse, before breaking. |
| | See traits associated with ‘Plant water acquisition’ | Whole root system, absorptive roots | Boldrin et al. (2017) | Rapid water acquisition will maintain soil in a drier state that offers greater resistance to deformation. |
Trait importance (colour code): dark blue, trait of prime importance for performing the ecosystem process or property in at least some environmental conditions; medium blue, trait of secondary importance in at least some environmental conditions; light blue, trait of potential but unknown importance due to missing or low experimental evidence. Refers to traits whose measurement protocols are described in Freschet et al. (2020). (+) vs (−) refers to the positive or negative effect of one trait on the function, respectively. ‘Entity of interest’ refers to a range of plant belowground parts as described in Freschet et al. (2020). The full list of references is available as Supporting Information Notes S1.
Table 3.
Overview of studies testing the relationships between root traits and plant nitrogen (N) uptake capacity.
| Reference | Function measured | Units | Method used | Temporal scale | Spatial scale | Root or plant traits measured (and relationship found) | Root entities | Number of species | Growth forms | Biome |
--- --- --- --- ---
| Bowsher et al. (2016) | Short-term net uptake rate | µmol N g−1 root h−1 | 15N tracers of NH4+ and NO3− | 30 h | Pot | Specific root length (ns), root tissue density (ns) | Whole root system | 6 | Forbs (6) | |
| Craine et al. (2003) | Long-term net uptake rate | mg N kg−1 | Soil NH4+ and NO3− sampling | | Pot | Fine-root mass density (+), coarse root mass density (ns) | Fine roots (< 2 mm) | 11 | Graminoids (6), forbs (3), Legumes (2) | Temperate |
| de Vries & Bardgett (2016) | Long-term net uptake rate | kg N ha−1 | 15N tracers of NH4+ and NO3− | 48 h | Pot | Root biomass (+), specific root length (ns), root tissue density (−), root N concentration (ns) | Whole root system | 24 | Graminoids (12), forbs (12) | Temperate |
| Dybzinski et al. (2019) | Long-term net uptake rate | g N m−2 d−1 | Whole plant N increment | 95–110 d | Pot | Fine-root mass (ns in 62% of cases; linear + in 5% of cases; saturated in 33% of cases) | Fine roots (< 1 mm) | 18 | Graminoids (5), Forbs (2) Tree (11) | |
| Dybzinski et al. (2019) | Long-term net uptake rate | g N m−2 d−1 | Whole plant N increment | | Field | Fine-root mass (ns: 5 studies; linear +: 2 studies) | Fine roots (< 1 mm) | 7 | Tree (7) | |
| Ficken & Wright (2019) | Long-term net uptake rate | mg N g−1 leaf | 15N tracers of NH4+ and NO3− | 10 d | Pot | Root tip number per biomass (+), fine : coarse root volume (+), leaf N content (+) | Whole root system | 4 | Shrub (3), Tree (1) | Temperate |
| Freschet et al. (2018) | Short-term net uptake rate | µg N m−1 root h−1 and µg N g−1 root h−1 | 15N tracers of NH4+ and NO3− | 6 h | Pot | Root mass fraction (ns), deep root fraction (ns), specific root length (ns), root hair length (ns), root interbranch distance (ns), root N concentration (ns), leaf mass fraction (ns), specific leaf area (ns), maximum leaf photosynthetic capacity (ns), plant height (−) | Absorptive roots | 9 | Graminoids (3), forbs (3), Legumes (3) | Temperate |
| Garnier et al. (1989) | Long-term net uptake rate | mg N g−1 root d−1 | Whole plant N increment | 28 d | Hydroponics | Relative growth rate (+) | Whole root system | 14 | Graminoids (4), forbs (10) | Temperate |
| Garnier et al. (1989) | Short-term net uptake rate | mg N g root−1 d−1 | NO3− depletion in nutrient solution | 90 min | Hydroponics | Relative growth rate (+) | Whole root system | 7 | Graminoids (2), forbs (5) | Temperate |
| Garnier (1991) | Long-term net uptake rate | mg N g root−1 d−1 | Whole plant N increment | 17–28 d | Hydroponics | Relative growth rate (+) | Whole root system | 21 | Graminoids (9), forbs (12) | Temperate |
| Grassein et al. (2015) | Imax, Km | µmol N g−1 root h−1 | 15N tracers of NH4+ | 5 min | Common garden | Imax NH4+: Specific root length (ns), root dry matter content (ns), root N concentration (+), specific leaf area (+), leaf dry matter content (ns), shoot N content (ns), shoot : root ratio (+) Km NH4+: Specific root length (ns), root dry matter content (ns), root N concentration (ns), specific leaf area (ns), leaf dry matter content (+), shoot N content (ns), shoot : root ratio (ns) | Whole root system | 8 | Graminoids (8) | Temperate |
| Grassein et al. (2015) | Imax, Km | µmol N g−1 root h−1 | 15N tracers of NO3− | 5 min | Common garden | Imax NO3−: Specific root length (ns), root dry matter content (ns), root N concentration (ns), specific leaf area (+), leaf dry matter content (ns), shoot N concentration (ns), shoot : root ratio (ns) Km NO3−: Specific root length (ns), root dry matter content (ns), root N concentration (ns), specific leaf area (−), leaf dry matter content (+), shoot N concentration (ns), shoot : root ratio (ns) | Whole root system | 8 | Graminoids (8) | Temperate grassland |
| Grassein et al. (2018) | Imax | nmol N g−1 root h−1 | 15N tracers of NH4 + | 1 h | Field + excised roots | Specific root length (ns), root dry matter content (−), root N concentration (+), specific leaf area (ns), leaf dry matter content (ns), leaf N content (+) | Absorptive roots | 3 | Graminoids (3) | Temperate |
| Grassein et al. (2018) | Imax | nmol N g−1 root h−1 | 15N tracers of NO3 − | 1 h | Field + excised roots | Specific root length (+), root dry matter content (−), root N concentration (+), specific leaf area (ns), leaf dry matter content (ns), leaf N content (+) | Absorptive roots | 3 | Graminoids (3) | Temperate grassland |
| Hodge (2003) | Long-term net uptake rate | mg N (14N + 15N) | 15N tracers of labelled15N shoot material | 22 d | Pot | In mixtures: Root length (+), mycorrhizal inoculation (+) In monocultures: Root length (ns), mycorrhizal inoculation (ns) | Whole root system | 2 | Graminoids (1), forbs (1) | Temperate |
| Hodge et al. (1998) | Long-term net uptake rate | µg N | 15N tracers of labelled organic material in patches | 39 d | Pot | Root biomass (ns), root length (ns) | Whole root system | 5 | Graminoids (5) | Temperate |
| Hodge et al. (1999) | Long-term net uptake rate | µg N | 15N tracers of labelled organic material in patches | 56 d | Pot | Root length density (+) | Whole root system | 2 | Graminoids (2) | Temperate grasslands |
| Hong et al. (2018) | Short-term net uptake rate | µg N m−1 root h−1 | 15N tracers of NH4+ or NO3− or Glycine or (NH4+ + NO3− + Glycine) | 24 h | Field | Root surface area (+), specific root length (+), root diameter (−), root biomass (−) | Whole root system | 10 | Graminoids (3), forbs (4), legumes (3) | Alpine grassland |
| Kulmatiski et al. (2017) | Short-term net uptake rate | % cm−1 root | 15N tracers of NH4+ and NO3− | 72 h | Field | Root biomass (ns) | Whole root system (absorptive) | 5 | Graminoids (3), forbs (1), Shrub (1) | |
| Larson & Funk (2016) | Long-term net uptake rate | µg N d−1 | Whole plant N increment | 28–58 d | Pot | Root growth rate (+), root elongation rate (+), root mass fraction (−), specific root length (+), root diameter (−) | Whole root system (absorptive) | 18 | Graminoids (4), forbs (7), Trees (7) | Temperate |
| Leffler et al. (2013) | Short-term net uptake rate | µg N g−1 root h−1 | 15N tracers of NO3− | 2 h | Pot | Root mass (+), root length (+), specific root length (+) | Whole root system | 5 | Graminoids (5) | Temperate |
| Levang-Brilz & Biondini (2003) | Long-term net uptake rate | g N m−2 root d−1 | Whole plant N increment | 60–90 d | Pot | Root : shoot ratio (+), relative growth rate (saturated relationship) | Whole root system | 55 | Graminoids (17), forbs (29), legumes (7), shrubs (2) | Temperate |
| Liu & Kleunen (2019) | Long-term net uptake rate | g N g−1 root d−1 | Whole plant N increment | 26 d | Pot | Root mass fraction (−) | Whole root system (absorptive) | 41 | Graminoids (11), forbs (26), Legumes (4) | Temperate |
| Ma et al. (2018) | Short-term net uptake rate | µg N g−1 root h−1 | 15N tracers of NH4+ and NO3− | 90 min | Field | Specific root length (ns), root diameter (ns) | | 17 | Trees (17) | Grassland, boreal, temperate, subtropical, tropical |
| Ma et al. (2018) | Long-term net uptake rate | µg N g−1 root h−1 | Whole plant N increment | 7 d | Field | Specific root length (ns), root diameter (ns) | | 17 | Trees (17) | Grassland, boreal, temperate, subtropical, tropical |
| Maire et al. (2009) | Imax, Km | mg N g−1 root h−1 | NH4+ and NO3− depletion in nutrient solution | 90 min | Common garden | Imax: Root dry mass (ns), root area (−), leaf N concentration (+) | Absorptive roots | 13 | Graminoids (13) | Temperate |
| Maire et al. (2009) | Long-term net uptake rate | g N m−3 y−1 | Shoot plant N increment | 209–212 d | Common garden | Root dry mass (+), leaf N concentration (ns) | Absorptive roots | 13 | Graminoids (13) | Temperate |
| Osone et al. (2008) | Long-term net uptake rate | g N g−1 root d−1 | Whole plant N increment | | Pot | Relative growth rate (+), root : shoot ratio (−), specific leaf area (+), leaf N concentration per area (−), net assimilation rate (+) | Whole root system (absorptive) | 11 | Forbs (6), Trees (5) | |
| Poorter et al. (1991) | Long-term net uptake rate | nmol N g root−1 d−1 | Whole plant N increment | 17 d | Hydroponics | Relative growth rate (+) | Whole root system | 24 | Graminoids (11), forbs (13) | Temperate |
| Ravenek et al. (2016) | Short-term net uptake rate | µmol N m−1 root h−1 | Li and Rb uptake rate (surrogate tracers) | 46 h | Pot | Relative growth rate (ns), selective root placement (ns), root length density (ns), specific root length (ns) | Absorptive roots | 8 | Graminoids (4), forbs (4) | Temperate |
| Reich et al. (1998) | Long-term net uptake rate | mg N g−1 root d−1 | Whole plant N increment | 61 d | Pot | Specific root length (+), root length ratio (+), root respiration (+), relative growth rate (+) | Whole root system | 9 | Trees (9) | Boreal forest |
| Robinson et al. (1991) | Long-term net uptake rate | | Whole plant N increment | 97 d | Pot | Root length density (ns) | Whole root system (absorptive) | 1 (13 karyo-types) | Graminoids (1) | Temperate |
| Wiesler & Horst (1994) | Long-term net uptake rate | kg N ha−1 | Soil NO3− depletion and shoot uptake rate | | Field | Root length density (+) | Whole root system | 1 (10 maize cultivars) | Graminoids (1:crop) | Temperate |
| Zerihun & Bassirirad (2001) | Imax, Km | µmol N g−1 root h−1 | NO3− and NH4+ depletion | 12 h | Pot | Imax NH4+: Relative growth rate (+), biomass allocation (ns) Km NH4+: Relative growth rate (+) | Fine (< 1 mm) & coarse roots | 6 | Trees (6) | Temperate |
The full list of references is available as Supporting Information Notes S1.
Table 4.
Overview of studies testing the relationships between root traits and soil reinforcement against shallow landslides.
| References | Property measured (units) | Method used | Soil type | Soil moisture content | Root or plant traits measured (and relationship found) | Root entities | Number of species | Growth forms | Biomes |
--- --- --- --- --- |
| Docker & Hubble (2008) | Increase in shear stress in soil matrix due to roots (kPa) | In situ testing with a large shear box (ranged from 0.4 × 0.4 to 0.5 × 0.5 at the base and 0.21–0.44 m in height) | Brown loam and sandy loam | Saturated | Root area ratio of roots crossing the shear plane (+) | Whole root system | 4 | Trees (4) | Subtropical rainforest, subtropical dry forest |
| Fan & Chen (2010) | Soil matrix shear strength (kPa) | In situ testing with a large shear box (0.3 × 0.3 × 0.2 m) | Clayey and sandy soils | 12–14% | Cumulated tensile strength of all roots per unit area of soil (+), cumulated tensile strength of all roots crossing the shear plane (+) | Roots < 20 mm | 5 | Trees (5) | Tropical rainforest |
| Ghestem et al. (2014); Veylon et al. (2015) | Tangential shear stress at yield point (kPa) of soil matrix | Laboratory testing with a large shear box (0.5 × 0.5 × 0.3 m) | Alluvial silty clay | 9–21% | Roots crossing the shear plane: cross-sectional area of coarse roots (+), number of coarse roots (+), fine-root mass (+), number of coarse root branches per unit length (ns), coarse root length (ns), coarse root diameter, coarse root volume (ns), fine-root mass density (ns) | Whole root system | 3 | Trees (3) | Subtropical rainforest |
| | | | | | Roots above the shear plane: coarse root length (+), number of coarse root branches per unit length (+), number of coarse roots (ns), cross-sectional area of coarse roots (ns), diameter of coarse roots (ns), coarse root volume (ns), fine-root mass (ns), fine-root mass density (ns) | | | | |
| | | | | | Roots below the shear plane: number of coarse root branches per unit length (+), coarse root volume (+), fine-root mass (+), number of coarse roots (ns), cross-sectional area of coarse roots (ns), coarse root diameter (ns), coarse root length (ns), fine-root mass density (ns). | | | | |
| Normaniza et al. (2008); Ali & Osman (2008) | Soil matrix shear strength (kPa) | Laboratory testing with a large shear box (0.3 × 0.3 × 0.2 m) | Silty sand | Not known | Root length density (+), root diameter (+) | Whole root system | 3 | Trees (2), Shrubs (1) | Tropical rainforest |
| Wu et al. (1988) | Force applied to shear soil matrix (N) | In situ testing with a large shear box (0.6 × 0.3 × 0.3 m) | Sandy silt, gravelly silt, silty clay and sand | Partially saturated | Cumulated tensile force of all roots crossing the shear plane (+ in sandy and gravelly silt soils only, two species). Cumulated tensile force of all roots crossing the shear plane (ns in silty clay, one species). | Whole root system | 3 | Trees (3) | Boreal forest, temperate forest |
The full list of references is available as Supporting Information Notes S1.
2 An overview of trait–functioning relationships: rationale and limitations
To explore relationships between root traits and functions, we performed a broad, multidisciplinary assessment of empirical and demonstrated links between belowground traits and plant and ecosystem functioning (Tables 1, 2). To do so, we first identified 15 key plant functions (Table 1) and nine ecosystem processes and properties (Table 2) based on their relevance to the functioning of natural and managed ecosystems. Drawing on literature in the fields of plant physiology, ecophysiology, ecology and soil science, we considered reviews and empirical studies where both root traits and functions were measured or conceptualised. We considered traits relevant for 16 research fields (as distinguished in Freschet et al., 2020; Fig. 1d), taking in aspects of root system architecture, physiology, morphology, anatomy, chemistry, biomechanics and biotic interactions. For each function we report: (1) the root trait measured and its relationship to a function (positive or negative); (2) belowground plant entities (e.g. root type, see Freschet et al., 2020) on which the trait would be most relevant to measure; and (3) contextual information explaining the rationale for and degree of confidence in the relationship (Tables 1, 2).
Trait selection was motivated by both the presence of a defined mechanistic relationship and empirical observations under controlled conditions or in situ. However, Tables 1 and 2 are not consolidated accounts of demonstrated evidence. Most studies reported here cover only a handful of species; as such, they may rely on fortuitous relationships resulting from interactions among traits (as discussed in Section 3) and on context-dependent observations that may not be widely generalisable across multiple species and biomes (see Section 4). In addition, we stress that Tables 1 and 2 represent neither an exhaustive list of important traits nor all relevant references, but rather a broad overview of current knowledge where most relationships await confirmation. Highlighted key studies are provided to guide the audience to further reading.
Due to the limited, often contradictory state of current knowledge of root trait–functioning relationships, we do not attempt to estimate the importance of the relationship but merely indicate current evidence for its existence (i.e. a single trait impacts a specific function). Our understanding of results from past studies is particularly limited by a range of methodological issues. This includes the absence of a purposeful selection of complementary traireview of potentially relevant mechanisms orts and root entities (see Section 3), the lack of accounting for trait covariation and hierarchy (see Section 4), or the lack of knowledge on the influence of genetic diversity, environmental variation and scaling across temporal and spatial scales (see Section 5).
Despite these limitations, Tables 1 and 2 are useful because they provide an indication of the range of empirical and theoretical relationships between belowground traits and plant and ecosystem functioning across research fields; link these relationships to selected references and standardised trait measurement protocols (as described in the handbook of root traits, see Freschet et al., 2020); and highlight a number of rarely considered traits in order to connect different fields.
3 Trait selection
3.1 Measuring a complementary range of traits: are we focusing on the right ones?
Recent decades have seen the rise of approaches using a few easily measurable traits to capture plant and ecosystem functioning. Given the difficulties associated with specialised measurements of some key physiological, anatomical or chemical traits, most local-scale studies, which later feed global-scale analyses, make use of ‘soft’ traits (i.e. easily measurable traits, often vaguely related to a single or a number of functions) only, rather than a range of soft and ‘hard’ traits (i.e. those difficult to measure, but often more closely related to a precise function) selected on the basis of a comprehensive review of potentially relevant mechanisms or processes (but see, for instance, Maire et al., 2009; Belluau & Shipley, 2018; Freschet et al., 2018a; Ros et al., 2018). For example, at the time of this survey, the FRED database (the most extensive fine-root trait database to date; Iversen et al., 2017) comprises a large number of observations for the five traits most easily measured (c. 5000 entries for root N concentration and classical morphological traits such as root diameter and specific root length, c. 3200 for root tissue density), whereas there were only c. 320 entries for indicators of root N uptake (e.g. net ion-uptake rate, maximum net ion-uptake rate) and c. 220 observations for indicators of root exudation (e.g. acid phosphatase activity, carbon exudation rate). Here, we stress that if a trait is widely measured, it does not necessarily mean that it is of key functional importance. During the construction of Tables 1 and 2, many trait–functioning relationships appeared indirect, vaguely justified and/or poorly tested and led us to question the broad relevance of those traits most commonly measured for plant and ecosystem functioning. Moreover, Tables 1, 2 and Fig. 1 underscore that most ecosystem functions are likely to have been influenced by a wider range of traits than typically assumed (McCormack et al., 2017; Freschet et al., 2020). In this respect, our review strengthens the idea that the search for simplified and generalisable patterns should not be at the expense of the mechanistic understanding of trait–functioning relationships (Shipley et al., 2016; Belluau & Shipley, 2018). As such, we hope that Tables 1 and 2 will stimulate a debate on the merits of the classical notion that we must, by necessity, choose between studying few traits with a clear ‘functional importance’ or many easily measured traits (Belluau & Shipley, 2018; Freschet et al., 2018a).
Root N concentration, one of the most frequently measured root traits, provides a good illustration of the common discrepancy between the frequency of trait measurements and their functional importance. Clearly important for ecosystem N cycling, root N is often measured for its presumed role in determining the overall metabolic activity (Reich et al., 2008; Roumet et al., 2016) and, by extension, may be assumed to scale up proportionally with specific root uptake activities (e.g. Grassein et al., 2018). For example, aboveground we know that leaf N concentration is strongly linked to leaf photosynthetic capacity, with more than 60% of leaf N contained in leaf photosynthetic compartments (Evans & Seemann, 1989). However, although root N concentration is a good predictor of specific root respiration (Reich et al., 2008, 67 species, R2 = 0.69; Roumet et al., 2016, 73 species, R2 = 0.25) (Fig. 2), the multiple functional roles of root N (including nutrient uptake and assimilation, but also transport, defence compounds and stored N) imply that its use as an indicator of specific activities may remain highly speculative (Table 1; Fig. 2a).
Specific root length (SRL) serves as another example of a very commonly measured but little understood soft trait. It is typically interpreted as a large root surface (i.e. equivalent to specific root area) at a low cost of root construction, and is therefore assumed to mirror specific leaf area (Reich, 2014) and acts as a gauge for soil resource uptake efficiency (Ostonen et al., 2007). However, while this description is true, it is strongly reductive. First, it is not so much the surface of roots that would matter for belowground resource uptake, but rather the volume of soil under influence of the root (e.g. the nitrate depletion zone around the root, or the frequency of root encounters during the flow of solutes in the soil), which depends more strongly on the length of roots deployed rather than its surface. SRL may thus be better referred to as a proxy for the volume of soil under influence by the root, and will most often be more closely related to soil resource uptake efficiency than specific root area. Second, it is rarely considered that cheaply constructed roots may have a much shorter lifespan (Ryser, 1996), and therefore, as a system, may have limited ability for long-term resource uptake, unless this trait is combined with a high root turnover rate. Third, SRL is a composite trait determined by the variation in root diameter and root tissue density (Fig. 2) and hence under the control of complex internal plant construction trade-offs (Kong et al., 2014; Poorter & Ryser, 2015). Fourth, it remains poorly understood to what extent there exists a trade-off between SRL and root mass fraction (Freschet et al., 2015a; Weemstra et al., 2020; Fig. 2) and mycorrhizal colonisation (McCormack & Iversen, 2019; Bergmann et al., 2020), and how SRL acts in synergy with root hairs (Forde & Lorenzo, 2001) and root branching density (Eissenstat et al., 2015) to change the volume of soil explored or exploited by roots (sensu Lambers et al., 2008; Freschet & Roumet, 2017). A close inspection of these aspects is needed to resolve why SRL has sometimes been found to positively correlate with N uptake rates across species (Reich et al., 1998; Larson & Funk, 2016; Grassein et al., 2018; Hong et al., 2018, 30 species), but not in other cases (Grassein et al., 2015; Bowsher et al., 2016; Ravenek et al., 2016; Ma et al., 2018; Freschet et al., 2018a, 48 species).
Root N concentration and SRL are just two examples of traits in which a more correct, mechanistic framing is key to truly understanding the link between traits and plant and ecosystem functioning. This issue may be inherent to belowground plant ecology, where the relevance of many soft traits was presumed based on the mechanistic understanding of their aboveground counterparts, but with little scrutiny of their actual functional significance belowground. Ultimately, the identification of key traits for plant and ecosystem functioning needs to come from larger sets of measurements in future studies that include both soft and hard traits.
3.2 Estimating the relative importance of traits
Furthering our mechanistic knowledge of trait–functioning relationships requires not only the identification of traits that are relevant for a function (see Tables 1, 2), but also a consideration of the relative importance of these traits for the function. The relative importance of traits identified in Tables 1 and 2 is sometimes not known and often assumed, but rarely tested. To complicate the picture further, there is ample evidence from case studies that environmental conditions interact to shuffle the relative importance of traits for individual functions, possibly due to variations in costs and benefits of a given plant strategy. For example, the relative importance of a plant’s ability to fix N2 in symbiosis with microbes strongly increases as soil N availability decreases, while in turn N2 fixation becomes increasingly constrained as soil P availability decreases (Batterman et al., 2013b). In a second example at the ecosystem level, efficient root hydraulic conductance can rapidly dry wet soil in climates with discontinuous rain events (e.g. Boldrin et al., 2017), and therefore help protect against shallow landslides. However, in climates with prolonged rainy seasons and with soils that are close to saturation for long periods of time, the efficiency of this trait is lost and the mechanical traits become more efficient at reinforcing soil (Kim et al., 2017). Also, there can be distinct thresholds in the ability of traits to serve functions. For example, along a gradient of soil P availability, the dominant plant species strategy tends to shift from the reliance on thin roots at high P levels, towards increasing reliance on high root hair density and mycorrhizal symbiosis at low P levels, and eventually towards the use of highly specialised structures such as cluster roots on severely P-impoverished soils (Lambers et al., 2008).
In summary, few studies to date have quantified comprehensive sets of relevant root traits across a range of species with contrasting ability to perform a function, or replicated such setups along environmental gradients (but see first attempts by Belluau & Shipley, 2018; Freschet et al., 2018a; Ros et al., 2018; Henneron et al., 2020). Moreover, most studies do not measure the actual function of interest, but more easily measurable proxies for the function (e.g. ‘long-term N accumulation in plants’ rather than ‘long-term uptake rate’; ‘centrifuge model estimate’ instead of ‘in situ measurement’ of soil shear reinforcement; see Tables 3, 4). Although measuring the actual function often proves challenging, additional efforts may be needed to improve the relatedness of our proxies to their functions.
3.3 Considering multiple root types
To fully appreciate and understand the impact of root traits on plant or ecosystem functioning, consideration of what portion of the root system and root types are involved is needed (McCormack et al., 2015a; Klimešová et al., 2018). Different parts of a root system may be important for distinct aspects of plant and soil functioning (Freschet & Roumet, 2017). For example, when studying the contribution of vegetation to soil reinforcement against shallow landslides, studying the entire root system is key to capturing the distribution of root diameters that cross the multiple potential shear (rupture) surfaces along a slope (Stokes et al., 2009). Thick structural roots act like soil nails, preventing soil collapse due to their mass, bending strength and stiffness. Thin and fine roots anchor plants to deeper soil layers (beneath the shear surface) and need to be strong when held under tension. Although several geotechnical models have considered the contribution of roots, irrespective of root types, to the reinforcement of potential shear planes that lie parallel to the soil surface (Table 4), these models generally overestimate slope stability, highlighting the need to better differentiate between the effects of distinct root types (Schwarz et al., 2010; Mao et al., 2014).
With respect to N uptake by wheat (Triticum aestivum), average rates of uptake per unit length of root may be only a small proportion of predicted uptake rates (Robinson et al., 1991), probably due to a combination of physiological differences between individual roots and spatial clustering of root distribution. Using ion-selective microelectrode techniques, the most rapid N uptake was indeed found between 0 and 40 mm behind the root tip, decreasing between 40 and 60 mm (Plassard et al., 2002; Miller & Cramer, 2004). However, this longitudinal decrease may represent only a two- to three-fold difference in uptake rate, with transporter gene expression studies suggesting that mature parts of the root remain significant sites of uptake (Miller & Cramer, 2004; Hawkins et al., 2014). In maize (Zea mays), a nondestructive method was developed to fit small chambers around short root segments in hydroponics in order to measure starting and ending nitrate concentration to calculate net influx, which allowed simultaneous measurements of several root types and positions along the roots. By comparing 15-d-old and 20-d-old plants, this study showed that maximum uptake rate may increase as the plant N demand increases, and that variation for this rate exists among lateral roots, basal roots, and shoot-borne roots (York et al., 2016). Overall, despite growing knowledge on how root anatomy differs across root orders, much remains uncertain about how N uptake varies, and how this might differ among herbaceous and woody species (e.g. Hawkins et al., 2014) and across environments (Gessler et al., 2005).
These examples illustrate that much effort is required to further our knowledge of how various plant parts relate to specific functions. The spatial distribution of specific type of roots in the soil, and their ability to perform their function, is clearly dependent on the attributes of the rest of the root system. Focusing on trait–functioning relationships of a single root type may provide an incomplete picture of plant functioning and effects on ecosystem functions.
3.4 Towards widespread consideration of other types of traits
Our overview of root trait–functioning relationships (Tables 1, 2), and the visual illustration of their interconnections (Fig. 1), suggests that many understudied traits may be crucial for a range of plant functions and ecosystem properties and processes. Three categories of traits are frequently highlighted (Fig. 1d): those associated with mycorrhizal associations, belowground allocation and the spatial distribution of roots. More specifically, among other traits, mycorrhizal association type and colonisation intensity, root length density and root mass fraction, root branching density, root hair length and density, vertical root distribution index and maximum rooting depth are particularly represented in our synthesis of trait–functioning relationships (Fig. 1a–c). Described below, these traits can impact plant and ecosystem functioning in several ways:
(1) The reliance of plants on different ‘types of mycorrhizal fungi’ (e.g. Read & Perez-Moreno, 2003; Phillips et al., 2013) and the ‘intensity of root colonisation’ serve as excellent indicators of the degree to which a plant makes the trade-off between relying on its own functional capability or on symbioses with fungal partners (Kong et al., 2019; McCormack & Iversen, 2019; Bergmann et al., 2020). Such critical determinants of plant resource acquisition and conservation strategies are also increasingly recognised as key drivers of a range of ecosystem properties and processes (Soudzilovskaia et al., 2019).
(2) ‘Root mass fraction’ (or fine-root mass fraction, rhizome mass fraction, etc.) depicts the relative investment of biomass to specific belowground parts and therefore is a key trait in determining plant performance (Wilson, 1988; Poorter et al., 2012). In association with SRL and total plant biomass, fine-root mass fraction determines total plant investment in fine-root length, which is a key determinant of the potential biophysical interactions between plants and the soil matrix.
(3) ‘Root branching density’ is increasingly recognised as a key determinant of root system architecture, with high branching density being typical of more clustered root systems favouring soil particle enmeshment and localised soil resource mining, and pre-emption against competitors (Forde & Lorenzo, 2001; Hodge, 2004). Low branching density, on the other hand, favours soil exploration (Eissenstat et al., 2015) and may thus be most effective for the uptake of very mobile soil resources such as nitrate and water (e.g. Pedersen et al., 2010).
(4) Maintaining high ‘root hair length and density’ is a very efficient way for plants to enhance root contact with soil particles (Carminati et al., 2017), facilitate root anchorage and penetration into dense soils (Haling et al., 2013; Choi & Cho, 2019), as well as reinforcing root interactions with the soil matrix (e.g. for resource uptake, exudation, connection to N2-fixing symbionts, Holz et al., 2018).
(5) ‘Vertical root distribution index’ and ‘maximum rooting depth’ are additional descriptors of plant strategies to occupy soil volume and explore different horizons of soil (Freschet et al., 2020). The localisation of roots in soil has straightforward implications for the interactions between plants and soil and the transfer of elements, impacting plant resource acquisition and the recycling or sequestration of organic compounds along the soil profile (Jobbágy & Jackson, 2000; Poirier et al., 2018; Mackay et al., 2019).
While this set of traits merits further attention, the primary purpose of drawing up this subjective and nonexhaustive list is to emphasise that many dimensions of root effects on plant and ecosystem functioning require further consideration. Several hard traits important for a range of functions (including some of the above-mentioned traits) present methodological challenges that limit their use (e.g. their study is labour intensive, is not feasible in the field, requires complex equipment or implies known measurement inaccuracies). These challenges hinder conceptual formalisation and testing of trait–functioning relationships, particularly in connection to other traits. Some of these important, but particularly challenging traits include physiological traits such as: (1) root exudation rate, (2) root exudate composition, (3) root respiration, (4) root enzymatic activities, (5) root nutrient absorption (and the synergistic role of mycorrhizal fungi) and (6) root nutrient resorption processes, which are important determinants of nutrient uptake and cycling, chemical and anatomical aspects of (7) root resistance to pathogens, and (8) root resistance to mechanical stresses, and aspects of (9) root persistence and turnover in soils that further impact soil nutrient and ecosystem carbon cycling. While an exhaustive review of recent advances in the measurement of these hard traits is beyond the scope of this synthesis, we emphasise that a range of studies are already bringing improvements that facilitate such challenging measurements (see for instance Phillips et al., 2008 for soluble root exudates; Lak et al., 2020, for specific root respiration; Griffiths et al., 2021, for multiple ion-uptake phenotyping; Arnaud et al., 2019, for in situ root imaging).
4 Trait–trait relationships
4.1 Considering trait inter-relations
The individual effects of root traits on plant and ecosystem functioning are not easy to single out (Lavorel & Garnier, 2002; Lavorel et al., 2013) (Figs 1, 2). The range of trade-offs and synergies typically observed among traits (Poorter et al., 2013; Roumet et al., 2016; Weemstra et al., 2016) suggests that plant internal (construction or evolutionary) constraints are likely to limit the number of possible adaptations of plants to environmental conditions. Fig. 2 provides a range of examples in which causal relationships between traits during tissue construction lead to trade-offs (e.g. mycorrhizal colonisation intensity typically covaries negatively with SRL owing to the opposite effects of root cortex area fraction on the two traits) or synergies (e.g. root bending resistance and root elastic modulus typically covary positively owing to the strong positive influence of lignin and cellulose concentrations on these two traits). The result of these constraints can be seen at both the intraspecies and interspecies levels. Therefore, a change in the expression of one trait may have several direct consequences for the expression of other traits. This network of inter-relations, as depicted in Fig. 2(a,b) by causal relationships and trait covariation connectors, is often very complex.
In addition, any trait that helps alleviate a limitation or adapt to a stress, changes the strength of the limitation/stress signal, which may reduce the need for other trait adjustments (Freschet et al., 2018a). As an example of this, Freschet et al. (2015a) showed that in a given environment most plant species tend to achieve similar levels of root length per mass of plant by developing either high SRL or high root mass fraction (depicted as causal links in Fig. 2a). This observation holds across several levels of soil N availability (Freschet et al., 2015b). In this context, it appears reasonable to assume that under nonextreme resource limitation or stress conditions, different combinations of root trait values (e.g. high SRL and root hair length and density vs high root mass fraction and mycorrhizal colonisation intensity) may result in a similar outcome with regard to plant function (e.g. N uptake capacity) (Marks & Lechowicz, 2006; Weemstra et al., 2020).
The variability of plant growth strategies also implies a range of interactions between root traits, with nonadditive effects on plant and ecosystem functioning. For example, a species with a deep root system, high reliance on mycorrhiza and low litter decomposability may have a strong positive effect on soil organic carbon stock via deep soil carbon sequestration, whereas a similar species with shallow rooting may have only a marginal effect on soil carbon (e.g. Clemmensen et al., 2013). Similarly, although a deep rooting species may improve resistance to landslides and water uptake at depth, its effect will be noticeable only if a substantial amount of roots is found at depth (e.g. if it has a high index of vertical root distribution) (Stokes et al., 2009).
Overall, it remains largely unknown whether syndromes of traits (i.e. consistent patterns of trait combinations; Bergmann et al., 2020) or syndromes of plastic trait adjustments (i.e. consistent patterns of plastic changes; Freschet et al., 2018) occur along well characterised resource limitation, stress or disturbance gradients, or whether observed trends are mostly context dependent (e.g. species specific, community specific). The identification of such syndromes may eventually help us summarise the covariation of trait values and their (antagonistic, additive, or synergistic) effects on plant and ecosystem function (Lavorel & Grigulis, 2012; Herben et al., 2018). It would also help us discriminate between mechanistic and fortuitous trait–functioning relationships. Much remains to be done to evaluate the existence and consistency of such inter-relationships. First, only a few causal relationships and indirect covariations (as depicted by black and orange connectors, respectively, in Fig. 2) between root traits have been identified (and even less so across traits from the entire plant). Second, our knowledge is biased towards the aforementioned set of widely studied root traits (Fig. 2). Third, in complex natural environments, plants are subjected to many co-occurring environmental factors whose interaction is likely to drive trait expression in multiple directions simultaneously (e.g. Kumordzi et al., 2019; Zhou et al., 2019). This may limit the value of our knowledge of syndromes of traits and plastic trait adjustments recorded across single environmental gradients. Indeed, these trait and environment integrations significantly influence function and fitness landscapes in multidimensional space (York et al., 2013), but more data are required to fully appreciate the complex relationships that are in place.
4.2 Accounting for causal relationships among traits
The functional or categorical grouping of individual root traits as illustrated by the horizontal dimension of Fig. 2 is useful for enhancing our understanding of plant and ecosystem function (McCormack et al., 2017). At the same time, it is important to consider the causal relationships (or hierarchy) among traits, as represented by the vertical dimension of the same figure (Fig. 2: vertical dimension; Rogers & Benfey, 2015). Many traits, referred to as composite traits, can be broken down into component (i.e. underlying) traits. For example, SRL emerges from the interaction between root diameter and root tissue density, which are themselves influenced by root cortex thickness and stele diameter (Fig. 2a). Root tissue density is further determined by the cortex and stele anatomical and chemical traits (Kong et al., 2019). Composite traits are particularly used because they are seen as concise indicators of plant functioning and often have the most direct influence (i.e. mechanistic link) on ecosystem functioning (see Fig. 2). The drawback is that composite traits are under the influence of several component traits that do not necessarily vary in synchrony (e.g. Poorter & Ryser, 2015), and adjustments to even simple environmental gradients may therefore often be unpredictable. SRL is one key example of a trait that may be important for N uptake, but whose response to changes in N availability is highly variable (e.g. Poorter & Ryser, 2015; Freschet et al., 2018a), owing to contrasting responses of its component traits to N availability.
An understanding of causal links (or more generally, hierarchical organisation) between root traits is useful for the following three purposes. First, it provides a mechanistic basis (i.e. the hypothesis-based framework) to interpret the outcome of statistical model selection procedures (i.e. the set of variables retained in multivariate models linking traits to functions) or structural equation models. As such, an understanding of trait hierarchical organisation will hold the key to the identification of the most parsimonious sets of traits with strongest influence on the functions. As an example, for defining root N uptake capacity, root diameter is mostly important due to its effect on SRL and its covariation with cortex area fraction (Fig. 2). Second, knowledge of hierarchical relationships aids the identification of component traits whose influence spans several composite traits. With respect to plant N uptake capacity, cortex thickness is one such trait. It was shown to enhance the potential for roots to host mycorrhizal fungi, which is beneficial for root–fungi associations (Kong et al., 2017, 2019). Cortex thickness also influences root diameter and root tissue density, which together determine SRL (Fig. 2). Therefore, despite being rarely measured, root cortex thickness underlies two of the most widely studied and measured morphological root traits and is of critical importance for the capacity to develop mycorrhizal symbiosis. Third, another key aspect of identifying component traits is the likelihood of being directly linked to plant genes. A better understanding of the component traits’ genetics – and its regulation under given environmental conditions – will not only provide an evolutionary explanation of key (composite) traits and their selection, but may also foster breeding for root traits beneficial to a specific plant function. In this context, it would be useful to further differentiate between ‘genuine’ composite traits, that are composed by traits with different (sets of) quantitative-trait loci (QTLs) responsible for each different component trait, from ‘integrated’ composite traits where the underlying component traits are varying in a coordinated way as determined by pleiotropic, highly linked QTLs or tight hormonal control, with nuances between those two extremes.
5 Generalising across scales
5.1 Generalising across species, plant growth forms and biomes
Our review of conceptual, experimental and observational studies of 24 aspects of plant and ecosystem functioning (Tables 1, 2), and two detailed examples (Fig. 2a,b; Tables 3, 4), emphasises that the current knowledge of trait–functioning relationships relies on highly variable numbers of observations covering the range of traits and functions. The majority of these relationships are based on relatively few species from a narrow range of plant growth forms and most have not been replicated along environmental gradients or across contrasting climates and soil types. Some trait–functioning relationships have been established in the field, while others come from pot monoculture or common garden experiments. In this context, generalising these relationships is hazardous. As discussed earlier, different sets of species or growth forms may display different syndromes of traits, which further vary along gradients, and may therefore display different trait–function relationships. With respect to direct measurements of soil reinforcement to protect against landslides, only two studies could be found that consider more than three species, and virtually all studies consider only one growth form in one location (Table 4). Regarding N uptake capacity, most studies target herbaceous species at the same growth stage and are often grown in hydroponics or pot conditions (Table 3), which questions whether knowledge gained from these highly simplified systems can be generalised to natural systems.
Overall, large differences have been observed across contrasting environmental contexts, such as across biomes. With regard to plant N uptake, we know for example that the importance of N-fixation strongly decreases from early successional to late successional forests of the temperate biome, whereas its importance remains high along similar gradients in the tropics (Batterman et al., 2013a). Mycorrhizal effects on N uptake also vary strongly across biomes, with ectomycorrhizal fungi transferring less N to their hosts in biomes at higher latitudes than in tropical forests (Mayor et al., 2015). Such examples illustrate that results gained in one system are unlikely to be directly generalisable to other systems.
Generally, further research bridging species from different plant growth forms and growing in contrasting environmental contexts is strongly needed to better inform our knowledge of trait–function relationships.
5.2 Meeting the challenge of up-scaling
Understanding the linkages between functional traits and plant and ecosystem functioning is often most critical at large spatial scales (e.g. entire agroecosystems or natural ecosystems, Suding et al., 2008; Martin & Isaac, 2015). Several functional trait-based up-scaling approaches have been proposed to link plant traits to ecosystem functioning, including the community-weighted-mean trait approach (Lavorel & Garnier, 2002; Violle et al., 2007) and the pooled-species approach (Klumpp & Soussana, 2009; Prieto et al., 2016). In the former, species are individually sampled (or nondestructively analysed), plant traits are measured at the level of individual species and a community trait value is calculated by weighting the trait values measured by the proportion that each species represents in the community (e.g. in terms of biomass or ground area cover). In the latter, pools of plants are sampled (or nondestructively analysed) over a given soil surface area or soil volume and a community trait value is directly measured. In both instances, appropriate sampling resolution is key to capture a mix of plant organs representative of the community (see for example Ottaviani et al., 2020), as biotic and abiotic variations occur at multiple spatial scales, (e.g. changing spatial trophic networks, soil properties, Tscharntke et al., 2005).
Whether in the community-weighted-mean trait approach or pooled-species trait approach, effects on ecosystem functioning are typically assumed to be proportional to abundance (which can be expressed per unit root biomass, length or surface) to determine the functioning of the whole system (Grime, 1998; Garnier et al., 2004). However, there are multiple reasons why such an approach can only capture parts of the plant community and ecosystem functioning. Depending on the system: (1) diversity effects, including competition, complementarity and facilitation, can add to the effect of species taken individually (e.g. Hodge, 2003; Santonja et al., 2017; Mahaut et al., 2020); (2) some subordinate species can produce disproportionate effects on ecosystem functioning (Mariotte, 2014); (3) interactions across multiple trophic levels can drive plant community and ecosystem functions (Lavorel et al., 2013); (4) the relative importance of traits shifts depending on the environmental context (e.g. Lambers et al., 2008); (5) small-scale to large-scale heterogeneity in ecosystem composition and function can maintain substantial levels of ecosystem function across all scales (Tscharntke et al., 2005); and (6) feedbacks between biotic and abiotic components, critical for ecosystem functioning and stability (Veldhuis et al., 2018), are not apparent by considering the biotic components alone. For these reasons, scaling up from species traits or pooled-species traits to ecosystem-level functioning must be done with caution, and especially so in natural and seminatural systems where biotic and abiotic interactions are even more complex than in low-diversity agricultural fields.
Nonetheless, the endeavour of up-scaling from traits to community and ecosystem yields multiple benefits. Most importantly, it provides a mechanistic framework (using or generating causal hypotheses for observed relationships) to test the contributions of traits (from species or pooled species) to community and ecosystem functioning (Lavorel & Grigulis, 2012). Up-scaling also has the potential to fill in the gap between the small-scale mechanistic understanding of reduced systems and the large-scale integrative, but mostly descriptive, assessments. In that respect, the community-weighted-mean trait and pooled-species trait approaches represent complementary approaches to tackle the problem at different levels of reductionism. Both approaches have advantages and drawbacks. Clearly, the pooled-species approach is far less time consuming when studying roots and limits the biases associated with estimations of root abundance (Ottaviani et al., 2020) and root species identification. As such, this approach would generally help to integrate aspects of both spatial and temporal variation in community trait–ecosystem functioning relationships (as discussed below), especially in ecosystems with large numbers of dominant and subordinate species. However, with respect to ecological modelling, as plant community composition varies across geographical location and time, such measurements of community traits are unlikely to be reused to predict ecosystem functioning from the crossing of community traits with species composition databases; as such, species-level traits might be preferred.
So far, few studies have tested to what extent the knowledge gained on the linkages between single-species or pooled-species functional traits and plant and ecosystem functioning can be used to infer such relationships in complex ecosystems (Garnier et al., 2004; Vile et al., 2006; Hales, 2018; De Long et al., 2019). Taking the example of root trait effects on plant N uptake, empirical studies most often measure the physiological ability of distinct species to take up N under controlled conditions (hydroponics or pot experiments, e.g. Maire et al., 2009; Grassein et al., 2015), or quantify community-wide N uptake based on the budgeting approach (e.g. Finzi et al., 2007),15N labelling (Hong et al., 2018) or even molecular approaches quantifying gene expression (e.g. Kraiser et al., 2011). However, between these two extremes, few studies have attempted to explicitly relate ecosystem-scale measurements to individual species trait values (but see Gessler et al., 1998; Craine et al., 2002; Soussana et al., 2005, 2012, for attempts with planted grass and tree species).
Interestingly, the reverse approach of down-scaling has sometimes been used successfully, starting from the observation of major differences in functioning between systems, and tracking back the causes to individual root traits. As an example, ectomycorrhizal vs arbuscular mycorrhizal dominated forests give rise to differences in coupled carbon–nitrogen cycling (see Phillips et al., 2013; Wurzburger & Brookshire, 2017; Zhu et al., 2018). Nonetheless, a species-level approach of root trait–soil function relationships would be useful to further identify the set of mechanistic linkages involved (Wurzburger & Clemmensen, 2018).
Another major challenge of up-scaling lies in the adequate characterisation of plant community or ecosystem functioning at large scales. For example, soil reinforcement by roots at small scales (e.g. soil cores) is often used to predict resistance to landslides at the hillslope scale, using geotechnical slope-scale models (e.g. Genet et al., 2010), but validation of models in the field is usually limited. Although it is possible to perform controlled, slope-scale experimental tests (e.g. Schwarz et al., 2010), and to physically model scaled slopes in the laboratory (that reproduce the stress distribution obtained in large-scale slopes, Sonnenberg et al., 2010; Liang et al., 2017), the logistical problems involved render these tests extremely complex to carry out. Nonetheless, while these field and laboratory experiments are useful for testing realistic slopes to ultimate failure, not all important processes or failure mechanisms that operate in the field may be captured. For that reason, future studies need to take particular care to consider the best possible proxies for up-scaling and understanding ecosystem functioning.
Another way to consider scaling belowground trait data within an ecosystem or globally is to improve the representation of root form and function in terrestrial biosphere models (Warren et al., 2015). Simulation modelling translates mechanistic understanding to mathematical relations that can be explored in silico (Marshall-Colon et al., 2017). Such models range from the simulation of explicit three-dimensional root architecture and surrounding soil matrix (Dunbabin et al., 2013), to more simple models scaling up trait measurements to the whole plant (Weemstra et al., 2020), agricultural systems (Rosenzweig et al., 2013) or at a global level (Warren et al., 2015). In recent years, several syntheses have called for an appropriate conceptualisation of roots and their role in ecosystem functioning in terrestrial biosphere models (Smithwick et al., 2014; Iversen et al., 2015; McCormack et al., 2015b). This approach, sometimes referred to as ‘model experiment’ integration (or Mod-Ex) combines current empirical understanding with model conceptualisation, parameterisation and validation in an iterative process to improve model representation of the natural world. While much work remains to be done, empirical input into the ways in which models aggregate or generalise across root functional types or plant species, and the way in which models implicitly or explicitly represent root function, can have large impacts on our understanding of ecosystem processes (Zhu et al., 2016; McCormack et al., 2017). In the context of crop breeding, for example, many combinations of root traits can be considered in various environments with regard to their effect on a particular function. These combinations can be validated across a restrained set of real cases, and be used for prioritising future research directions, similar to the use of digital prototyping in manufacturing (York, 2019). In global biosphere and climate studies, simulation models can also aid the prioritisation of research through sensitivity analyses, for example by identifying key traits whose variation have large consequences for the function of interest (McCormack et al., 2015b). But, most importantly, when tested against empirical data, the results of simulations can discriminate between diverse theoretical models, or reveal (structural or context-dependent) gaps in our mechanistic representation of trait–functioning relationships (Song et al., 2017).
5.3 Considering spatial and temporal variation
A range of methodologies has been developed aboveground, such as eddy covariance towers or remote sensing, that provide large amounts of data on certain plant traits and ecosystem functions at an ecosystem scale and across space and time. However, such approaches have low resolution regarding aspects of spatial variability in functioning and are unlikely to extend to belowground traits. Generally, there is growing evidence of strong small-scale variability in root traits (Defrenne et al., 2019; Kumordzi et al., 2019) that may lead to substantial small-scale variability in functioning. Given the overarching importance of soil properties and biotic (e.g. plant–plant; plant–microbes) interactions, and their typically high heterogeneity at small spatial scales (Jackson & Caldwell, 1993; Ettema & Wardle, 2002), root trait–function relationships might differ strongly over short distances (e.g. centimetres or metres). To date, it remains unclear how the spatial assemblage of species and root traits on the small scale might relate to the effects estimated by species averages. Spatially aggregated data may contain little information on the range of trait values occurring within the plant community, the relative abundance of each value, or the existence of several groups of contrasting trait values (e.g. bimodal distributions of trait values), which hampers our ability to understand their consequences for the functioning of ecosystems (e.g. Valencia et al., 2015; Violle et al., 2017).
Similarly, soil properties vary with depth (especially when contrasting soil horizons occur) and characterisation of the relative importance of roots and root traits at different depths is therefore necessary to accurately link them to plant and soil functioning (Germon et al., 2016; Fort et al., 2017; Chitra-Tarak et al., 2018). For example, the capacity for N acquisition generally decreases with soil depth due to a decline in the availability of soil N (Wiesler & Horst, 1994; Tückmantel et al., 2017). These patterns can differ across soil types and plant species: in alpine grasslands on Cambisol, the uptake of N was found to decline sharply from 67% in the top 5 cm of soil to 33% in the 5–15 cm layer below (Schleuss et al., 2015), whereas it was only 44% in the top 30 cm, 32% at 30–60 cm and 24% in the 60–120 cm layer for maize in an agricultural field on Luvisol (Wiesler & Horst, 1994). Additionally, changes in trait values typically occur across contrasting soil horizons (McCormack et al., 2017; Trocha et al., 2017) including, for example, the typical patterns of declining root density (Jackson et al., 1996) and physiological activity (Göransson et al., 2008; Tückmantel et al., 2017) with depth. As a consequence, most ecologists assume that (physiological, morphological, etc.) trait measurements made on roots from the topsoil are likely to adequately estimate plant N uptake capacity when N resource is concentrated in the topsoil. However, there are many reasons why such an approximation may be inadequate. First, strong competition for N in the topsoil might make root investment in deeper horizons more profitable, as sometimes observed in biodiversity studies (e.g. Mueller et al., 2013), resulting in more evenly distributed resource uptake across the soil profile. Second, soil N availability interacts with other soil resources, particularly water. Seasonal fluctuations of soil water availability across the soil profile following, for example, changes in water table level and precipitation patterns may reverse the N availability gradient along the profile (Prieto et al., 2012). As such, a good characterisation of spatial variations in soil properties (vertically, but also sometimes horizontally; Březina et al., 2019), integrated over long periods of time, might be needed to guide a sound root sampling design (and the measure of, for example, physiological and morphological traits) from the range of soil layers that matters for N uptake. Also, architectural traits or traits representing (vertical and horizontal) root distribution may be important predictors of the match between root presence and N availability (Freschet et al., 2020).
There is also growing evidence that, in parallel with seasonal changes in environmental conditions such as soil resource availability (Chitra-Tarak et al., 2018; Březina et al., 2019) or soil organism community composition and activity (Bardgett et al., 2005), root trait values vary temporally at both the species and community levels (e.g. Picon-Cochard et al., 2012; Zadworny et al., 2015). For example, seasonal changes in carbohydrate concentration of belowground organs affect plant resprouting ability during some parts of the growing season in temperate regions, a feature often used to improve the efficiency of mechanical control of weeds (Sosnová et al., 2014). Many root traits are also dependent on the stage of root system development (e.g. architectural traits such as root branching density, coarse root to fine root ratio; Freschet et al., 2020) and root age (e.g. Volder et al., 2005). Within a single root axis of maize, for example, tensile strength can vary by about 1.5 orders of magnitude, being greatest in the older root tissue far from the root apex (Loades et al., 2015). This phenomenon is particularly true for woody species, whose architecture and size can change dramatically during their life, with many consequences for trait values and their impact on ecosystem functioning. The importance of ontogenetic stage however also applies to herbaceous species (both annual and perennial) even after reaching maturity, for example due to changes in resource accumulation in roots or rhizomes. Additionally, root phenology differs strongly among species (McCormack et al., 2014), growth forms (Blume-Werry et al., 2016) and biomes (Abramoff & Finzi, 2015). In extreme cases, some species may display no or few absorptive roots at specific times of the year, with periodic flushes of new relatively short-lived fine roots at times of resource availability, as seen in arid climates (Liu et al., 2016). In cold climates with short growing season, however, species with long-lived overwintering root systems may be more successful than species with autumn-senescing root systems that are produced for each growing season anew (Courchesne et al., 2020). Similarly, long-lived roots and rhizomes may contribute better to soil reinforcement against landslides than ephemeral roots by providing a more consistent contribution to improve soil strength.
A better understanding of root phenology is therefore key to the meaningful measurement of root trait values (in relation to the focal function) and our understanding of temporal variation in root trait effects on plant and ecosystem functioning. The timing of root sampling must be carefully considered to match the period when the focal function is most relevant. For example, in ecosystems defined by high seasonality, measuring root traits at the peak of plant productivity (sometimes halfway between the seasonal increase and reduction in growth activities) may be a reasonable benchmark for approximating the relationship between root traits and plant nutrient uptake capacity. However, the timing of nutrient uptake is rarely examined (but see Trinder et al., 2012; Jesch et al., 2018; Dovrat & Sheffer, 2019) and may not be directly proportional to plant growth rate. Furthermore, some studies suggest that N can be taken up as soon as it is available (Jackson et al., 2008), suggesting that a good match between plant uptake capacity and the temporality of N fluxes is of critical importance for N uptake (e.g. Edwards & Jefferies, 2010). Regarding the capacity of plants to provide resistance against landslides, it would be best to measure root traits at the time of the year when landslides are most frequent, e.g. when soil is saturated, during the rainy season (in tropical systems) or during snow melt (Stokes et al., 2009), or to differentiate between relationships measured at different times of the year.
Another consideration relates to the temporal variation in species composition within ecosystems, for example during succession or in response to changes in land use. Plant effects on ecosystem functioning can last for long periods after changes in plant community composition have occurred (Fraterrigo et al., 2005) and mismatches between traits and function are therefore likely to be observed in rapidly changing ecosystems (Foster et al., 2003). In the same way, plant species and their root systems, that established first at a location may not only influence the rooting patterns of other species, but also disproportionately drive the observed relationships between traits and functioning (Delory et al., 2019).
In summary, knowledge of spatial and temporal variation in root traits and their effects, over different spatial and temporal scales, is especially needed to allow more informed recommendation on the location and timing of measurements. Hierarchical spatial sampling and sequential sampling would provide invaluable information on the spatial and temporal fluctuation of root traits and their impact on ecosystem functioning.
5.4 Of intraspecific vs interspecific variation and the use of databases
Ecologists have identified and measured phenotypic traits in a wide variety of species, either under laboratory conditions or in the field. Various attempts have been made to include these data into comprehensive/inclusive databases considering plant traits per se (Kleyer et al., 2008; Iversen et al., 2017; Kattge et al., 2020) as well as the plant symbiotic relationships with mycorrhizal fungi (Soudzilovskaia et al., 2020) and with N2-fixing bacteria (Tedersoo et al., 2018). Although belowground traits are still strongly underrepresented in global compilations, especially regarding organs other than fine roots (Klimešová et al., 2018), such databases represent a large amount of trait data that can be related to vegetation composition (Bruelheide et al., 2019) and climate and soil maps. Consequently, relationships between root traits and ecosystem functioning can now be addressed on the global scale (e.g. See et al., 2019). However, in global analyses, one trait value per species is generally considered and averaged over all available data, under the assumption that the average will be a good reflection of the ‘inherent’ trait for a given species. This generalisation is made even though trait expression is adjusted to the specific environmental condition that plants experience (Valladares et al., 2006). Root trait values can strongly differ between plants grown in laboratory vs field experiments (Poorter et al., 2016), for instance as a consequence of different environmental conditions (Li et al., 2017; Kumordzi et al., 2019), along gradients of plant diversity or density with different types of plant–plant interactions (Salahuddin et al., 2018), or with changing interactions between trophic levels (Huber et al., 2016). Ostonen et al. (2007) showed for example that intraspecific variation of SRL can be as high as 10-fold across a large environmental gradient. Not accounting for such differences between sites may be one of the key reasons for low predictability of trait–functioning relationships in functional ecology (Shipley et al., 2016).
Due to the potential for large differences between traits and their level of intraspecific variability, getting a clearer view on which traits are most variable or invariant would be critical for data reuse in syntheses of trait–functioning relationships (Funk et al., 2017; McCormack et al., 2017). For aboveground traits, intraspecific variation has only recently begun to be properly addressed across large numbers of species (e.g. Siefert et al., 2015). For root traits, it may be some time before we have a good insight into the contribution of genetic and environmental factors to trait variation (Klimešová et al., 2017). The complexity of the issue is increased further when one considers the importance of genotype–genotype interactions of plants and root–microbial symbionts, which can also have substantial effects on trait expression and key functions (Johnson et al., 2012). Overall, while the characterisation of trait intraspecific variability is critical, it must be stressed that a good characterisation of phenotypic traits also depends on a good characterisation of environmental conditions experienced by plants. This is especially true below ground where the small-scale heterogeneity of soils limits the value of large-scale databases (Freschet et al., 2017).
6 Conclusion and perspectives
Our overview of root trait–functioning relationships has raised seven main insights:
Belowground traits with the widest importance in plant and ecosystem functioning are not necessarily those that are the most commonly measured. Meanwhile, the relevance of commonly measured (soft) traits to plant and ecosystem functioning is often indirect and insubstantial, or requires further testing.
Assessing the relative importance of traits for functioning requires quantifying a comprehensive range of functionally relevant traits (on different root entities), including hard traits, from a diverse range of species, as well as replication across environmental gradients or contrasting environmental contexts.
Establishing causal links between root traits provides a mechanistic basis (i.e. the hypothesis-based framework) to interpret the outcome of statistical model selection procedure (i.e. the set of variables retained in multivariate models linking traits to functions) or structural equation models. As such, it holds the key to identifying the most parsimonious sets of traits with strongest influence on the functions.
Accounting for causal relationships among traits is key to identifying the component traits that link most strongly with a limited set of genes on the one hand, and plant or ecosystem functioning on the other, and therefore to inform us of potential linkages between genotypes and functioning.
Investigating syndromes of traits and syndromes of trait plastic adjustments will help us identify the linkages between ‘soft’ and ‘hard’ traits, in order to demonstrate when and to what extent ‘soft’ traits can confidently be used as proxies for ‘hard’ traits.
Our ability to scale up from root, to plant, to species, to community and ecosystem functioning requires more critical investigation and comprehensive experimental/empirical tests, and, in some cases, the incorporation of spatiotemporal variation as well as belowground process conceptualisation and testing within the framework of terrestrial biosphere models.
Accounting for (the often large) intraspecific variation in trait–functioning relationships in global models requires databases with well contextualised data (e.g. locally measured soil parameters).
Another major contribution of this synthesis lies in the broad overview of root trait–function relationships gathered within Tables 1 and 2. These tables give an overview of both the range of effects that root traits can have on ecosystem functioning and the range of traits potentially required to adequately capture the effects of roots on most plant functions and ecosystem properties and processes. They provide key references on multiple topics, which should benefit to all who want to broaden their view of root ecology. These tables further highlight several functionally important, but rarely considered traits from various research fields.
Overall, this synthesis represents a close companion to the recent description of standardised measurement protocols for a substantial set of root traits (Freschet et al., 2020). These two syntheses elucidate connections between the multiple and at times secluded fields of root ecological research and, as such, are meant to inspire novel multidisciplinary approaches. They should encourage researchers more familiar with aboveground aspects of plant ecology to integrate belowground concepts into their vision of trait–functioning relationships. While we purposely limited our review to belowground aspects only, we cannot stress enough that these relationships should be considered for entire plants, whenever possible, as plant impacts on (plant and ecosystem) functioning often rely on the integration of both above- and belowground traits.
Finally, this synthesis brings together a range of arguments that call for the design of more comprehensive studies. Studies tackling some, if not all, of the above recommendations can be designed that limit the fortuitous, indirect and context-dependent nature of gathered results (as opposed to studies measuring a few traits on a few species in one single context). We believe that such a set of recommendations will be instrumental in moving towards an integrated, mechanistic knowledge of trait–functioning relationships and open the way to safe applications for ecosystem and agroecosystem management. Achieving a more mechanistic understanding of multivariate trait–functioning relationships will further help us strengthen (or reconsider) the foundations of current dominating theoretical frameworks, often built on data from only a few soft traits.
7 Acknowledgements
This work was supported by the New Phytologist Foundation via financial support to the 25th New Phytologist Workshop ‘Root traits as predictors of plant and soil functions: Aggregating current knowledge to build better foundations for root ecological science’, held in January 2019 in Montpellier, France. We also acknowledge support from Camille Noûs and the Cogitamus Laboratory. JK was supported by the Grant agency of the Czech Republic (19-13103S). MZ was supported by the Institute of Dendrology, Polish Academy of Sciences. CMI was supported by the Biological and Environmental Research programme in the United States Department of Energy’s Office of Science. We are grateful for the suggestions of three anonymous reviewers.
8 Author contributions
GTF initiated and coordinated the writing of the manuscript. GTF, CR, MW and AS organised the workshop and chaired the sessions, with help from LHC. GTF, CR, LHC, MW, AGB, BR, RDB, GBDD, DJ, JK, ML, MLM, ICM, LP, HP, IP, NW, MZ and AS participated in the workshop. GTF, CR, AS, MW, AGB, LHC, BR, HP, JK, IP, NW, CMI and LMY drafted some parts of the manuscript and all authors, including AB-Z, EBB, IB, AG, SEH, LM, CP-C, JAP, LR, PR, MS-L, NAS, TS, OJV-B and AW, contributed to the writing of the manuscript.
Supporting Information
| Filename | Description |
--- |
| nph17072-sup-0001-NotesS1.pdfPDF document, 298.6 KB | Notes S1 Full list of references for papers cited in Tables 1–4. Please note: Wiley Blackwell are not responsible for the content or functionality of any Supporting Information supplied by the authors. Any queries (other than missing material) should be directed to the New Phytologist Central Office. |
Please note: The publisher is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.
References
Abramoff RZ, Finzi AC. 2015. Are above- and below-ground phenology in sync? New Phytologist 205: 1054–1061.
10.1111/nph.13111
PubMed Web of Science® Google Scholar
Arnaud M, Baird AJ, Morris PJ, Harris A, Huck JJ. 2019. EnRoot: a narrow-diameter, inexpensive and partially 3D-printable minirhizotron for imaging fine root production. Plant Methods 15: 101.
10.1186/s13007-019-0489-6
PubMed Web of Science® Google Scholar
Bardgett RD, Bowman WD, Kaufmann R, Schmidt SK. 2005. A temporal approach to linking aboveground and belowground ecology. Trends in Ecology & Evolution 20: 634–641.
10.1016/j.tree.2005.08.005
PubMed Web of Science® Google Scholar
Bardgett RD, Mommer L, De Vries FT. 2014. Going underground: root traits as drivers of ecosystem processes. Trends in Ecology & Evolution 29: 692–699.
10.1016/j.tree.2014.10.006
PubMed Web of Science® Google Scholar
Batterman SA, Hedin LO, van Breugel M, Ransijn J, Craven DJ, Hall JS. 2013a. Key role of symbiotic dinitrogen fixation in tropical forest secondary succession. Nature 502: 224–227.
10.1038/nature12525
CAS PubMed Web of Science® Google Scholar
Batterman SA, Wurzburger N, Hedin LO. 2013b. Nitrogen and phosphorus interact to control tropical symbiotic N2 fixation: a test in Inga punctata. Journal of Ecology 101: 1400–1408.
10.1111/1365-2745.12138
CAS Web of Science® Google Scholar
Belluau M, Shipley B. 2018. Linking hard and soft traits: physiology, morphology and anatomy interact to determine habitat affinities to soil water availability in herbaceous dicots. PLoS ONE 13: e0193130.
10.1371/journal.pone.0193130
PubMed Web of Science® Google Scholar
Bergmann J, Weigelt A, van der Plas F, Laughlin DC, Kuyper TW, Guerrero-Ramirez N, Valverde-Barrantes OJ, Bruelheide H, Freschet GT, Iversen CMet al. 2020. The fungal collaboration gradient dominates the root economics space in plants. Science. Advances 6: eaba3756.
10.1126/sciadv.aba3756
CAS PubMed Web of Science® Google Scholar
Blume-Werry G, Wilson SD, Kreyling J, Milbau A. 2016. The hidden season: growing season is 50% longer below than above ground along an arctic elevation gradient. New Phytologist 209: 978–986.
10.1111/nph.13655
CAS PubMed Web of Science® Google Scholar
Bodner G, Leitner D, Nakhforoosh A, Sobotik M, Moder K, Kaul H-P. 2013. A statistical approach to root system classification. Frontiers in Plant Science 4: 292.
10.3389/fpls.2013.00292
PubMed Web of Science® Google Scholar
Boldrin D, Leung AK, Bengough AG. 2017. Correlating hydrologic reinforcement of vegetated soil with plant traits during establishment of woody perennials. Plant and Soil 416: 437–451.
10.1007/s11104-017-3211-3
CAS Web of Science® Google Scholar
Bowsher AW, Mason CM, Goolsby EW, Donovan LA. 2016. Fine root tradeoffs between nitrogen concentration and xylem vessel traits preclude unified whole-plant resource strategies in Helianthus. Ecology and Evolution 6: 1016–1031.
10.1002/ece3.1947
PubMed Web of Science® Google Scholar
Březina S, Jandová K, Pecháčková S, Hadincová V, Skálová H, Krahulec F, Herben T. 2019. Nutrient patches are transient and unpredictable in an unproductive mountain grassland. Plant Ecology 220: 111–123.
10.1007/s11258-019-00906-3
Web of Science® Google Scholar
Bruelheide H, Dengler J, Jiménez-Alfaro B, Purschke O, Hennekens SM, Chytrý M, Pillar VD, Jansen F, Kattge J, Sandel Bet al. 2019. sPlot – a new tool for global vegetation analyses. Journal of Vegetation Science 30: 161–186.
10.1111/jvs.12710
Web of Science® Google Scholar
Carminati A, Benard P, Ahmed MA, Zarebanadkouki M. 2017. Liquid bridges at the root–soil interface. Plant and Soil 417: 1–15.
10.1007/s11104-017-3227-8
CAS Web of Science® Google Scholar
Chitra-Tarak R, Ruiz L, Dattaraja HS, Mohan Kumar MS, Riotte J, Suresh HS, McMahon SM, Sukumar R. 2018. The roots of the drought: hydrology and water uptake strategies mediate forest-wide demographic response to precipitation. Journal of Ecology 106: 1495–1507.
10.1111/1365-2745.12925
Web of Science® Google Scholar
Choi H-S, Cho H-T. 2019. Root hairs enhance Arabidopsis seedling survival upon soil disruption. Scientific Reports 9: 11181.
10.1038/s41598-019-47733-0
PubMed Web of Science® Google Scholar
Clemmensen KE, Bahr A, Ovaskainen O, Dahlberg A, Ekblad A, Wallander H, Stenlid J, Finlay RD, Wardle DA, Lindahl BD. 2013. Roots and associated fungi drive long-term carbon sequestration in boreal forest. Science 339: 1615–1618.
10.1126/science.1231923
CAS PubMed Web of Science® Google Scholar
Courchesne DN, Wilson AZ, Ryser P. 2020. Regional distribution patterns of wetland monocots with different root turnover strategies are associated with local variation in soil temperature. New Phytologist 226: 86–97.
10.1111/nph.16328
PubMed Web of Science® Google Scholar
Craine JM, Tilman D, Wedin D, Reich P, Tjoelker M, Knops J. 2002. Functional traits, productivity and effects on nitrogen cycling of 33 grassland species. Functional Ecology 16: 563–574.
10.1046/j.1365-2435.2002.00660.x
Web of Science® Google Scholar
Daynes CN, Field DJ, Saleeba JA, Cole MA, McGee PA. 2013. Development and stabilisation of soil structure via interactions between organic matter, arbuscular mycorrhizal fungi and plant roots. Soil Biology and Biochemistry 57: 683–694.
10.1016/j.soilbio.2012.09.020
CAS Web of Science® Google Scholar
De Long JR, Jackson BG, Wilkinson A, Pritchard WJ, Oakley S, Mason KE, Stephan JG, Ostle NJ, Johnson D, Baggs EMet al. 2019. Relationships between plant traits, soil properties and carbon fluxes differ between monocultures and mixed communities in temperate grassland. Journal of Ecology 107: 1704–1719.
10.1111/1365-2745.13160
CAS PubMed Web of Science® Google Scholar
Defrenne CE, McCormack ML, Roach WJ, Addo-Danso SD, Simard SW. 2019. Intraspecific fine-root trait-environment relationships across interior Douglas-fir forests of western Canada. Plants 8: 199.
10.3390/plants8070199
CAS Web of Science® Google Scholar
Delory BM, Weidlich EWA, von Gillhaussen P, Temperton VM. 2019. When history matters: the overlooked role of priority effects in grassland overyielding. Functional Ecology 33: 2369–2380.
10.1111/1365-2435.13455
Web of Science® Google Scholar
Dignac M-F, Derrien D, Barré P, Barot S, Cecillon L, Chenu C, Chevallier T, Freschet GT, Garnier P, Guenet B et al. 2017. Increasing soil C storage: mechanisms, effects of agricultural practices and proxies. Agronomy for Sustainable Development 37: 1–27.
10.1007/s13593-017-0421-2
CAS Web of Science® Google Scholar
Dovrat G, Sheffer E. 2019. Symbiotic dinitrogen fixation is seasonal and strongly regulated in water-limited environments. New Phytologist 221: 1866–1877.
10.1111/nph.15526
CAS PubMed Web of Science® Google Scholar
Dunbabin VM, Postma JA, Schnepf A, Pagès L, Javaux M, Wu L, Leitner D, Chen YL, Rengel Z, Diggle AJ. 2013. Modelling root–soil interactions using three-dimensional models of root growth, architecture and function. Plant and Soil 372: 93–124.
10.1007/s11104-013-1769-y
CAS Web of Science® Google Scholar
Edwards KA, Jefferies RL. 2010. Nitrogen uptake by Carex aquatilis during the winter–spring transition in a low Arctic wet meadow. Journal of Ecology 98: 737–744.
10.1111/j.1365-2745.2010.01675.x
CAS Web of Science® Google Scholar
Eissenstat DM, Kucharski JM, Zadworny M, Adams TS, Koide RT. 2015. Linking root traits to nutrient foraging in arbuscular mycorrhizal trees in a temperate forest. New Phytologist 208: 114–124.
10.1111/nph.13451
PubMed Web of Science® Google Scholar
Ettema CH, Wardle DA. 2002. Spatial soil ecology. Trends in Ecology & Evolution 17: 177–183.
10.1016/S0169-5347(02)02496-5
Web of Science® Google Scholar
Evans JR, Seemann JR. 1989. The allocation of protein nitrogen in the photosynthetic apparatus: costs, consequences, and control. In: W Briggs, R Alan, eds. Towards a broad understanding of photosynthesis. New York, NY, USA, 183–205.
Google Scholar
Finzi AC, Norby RJ, Calfapietra C, Gallet-Budynek A, Gielen B, Holmes WE, Hoosbeek MR, Iversen CM, Jackson RB, Kubiske ME et al. 2007. Increases in nitrogen uptake rather than nitrogen-use efficiency support higher rates of temperate forest productivity under elevated CO2. Proceedings of the National Academy of Sciences, USA 104: 14014–14019.
10.1073/pnas.0706518104
CAS PubMed Web of Science® Google Scholar
Forde B, Lorenzo H. 2001. The nutritional control of root development. Plant and Soil 232: 51–68.
10.1023/A:1010329902165
CAS Web of Science® Google Scholar
Fort F, Volaire F, Guilioni L, Barkaoui K, Navas M-L, Roumet C. 2017. Root traits are related to plant water-use among rangeland Mediterranean species. Functional Ecology 31: 1700–1709.
10.1111/1365-2435.12888
Web of Science® Google Scholar
Foster D, Swanson F, Aber J, Burke I, Brokaw N, Tilman D, Knapp A. 2003. The importance of land-use legacies to ecology and conservation. BioScience 53: 77–88.
10.1641/0006-3568(2003)053[0077:TIOLUL]2.0.CO;2
Web of Science® Google Scholar
Fraterrigo JM, Turner MG, Pearson SM, Dixon P. 2005. Effects of past land use on spatial heterogeneity of soil nutrients in southern Appalachian forests. Ecological Monographs 75: 215–230.
10.1890/03-0475
Web of Science® Google Scholar
Freschet GT, Kichenin E, Wardle DA. 2015a. Explaining within-community variation in plant biomass allocation: a balance between organ biomass and morphology above vs below ground? Journal of Vegetation Science 26: 431–440.
10.1111/jvs.12259
Web of Science® Google Scholar
Freschet GT, Pagès L, Iversen CM, Comas LH, Rewald B, Roumet C, Klimešová J, Zadworny M, Poorter H, Postma JA et al. 2020. A starting guide to root ecology: strengthening ecological concepts and standardizing root classification, sampling, processing and trait measurements. {hal-02918834}, [WWW document] URL
Google Scholar
Freschet GT, Roumet C. 2017. Sampling roots to capture plant and soil functions. Functional Ecology 31: 1506–1518.
10.1111/1365-2435.12883
Web of Science® Google Scholar
Freschet GT, Swart EM, Cornelissen JHC. 2015b. Integrated plant phenotypic responses to contrasting above- and below-ground resources: key roles of specific leaf area and root mass fraction. New Phytologist 206: 1247–1260.
10.1111/nph.13352
CAS PubMed Web of Science® Google Scholar
Freschet GT, Valverde-Barrantes OJ, Tucker CM, Craine JM, McCormack ML, Violle C, Fort F, Blackwood CB, Urban-Mead KRU, Iversen CM et al. 2017. Climate, soil and plant functional types as drivers of global fine-root trait variation. Journal of Ecology 105: 1182–1196.
10.1111/1365-2745.12769
Web of Science® Google Scholar
Freschet GT, Violle C, Bourget MY, Scherer-Lorenzen M, Fort F. 2018a. Allocation, morphology, physiology, architecture: the multiple facets of plant above and belowground responses to resource stress. New Phytologist 219: 1338–1352.
10.1111/nph.15225
PubMed Web of Science® Google Scholar
Freschet GT, Violle C, Roumet C, Garnier E. 2018b. Interactions between soil and vegetation: structure of plant communities and soil functioning. In: P Lemanceau, M Blouin, eds. Soils within the critical zone: ecology. London, UK: ISTE Editions, 83–104.
10.1002/9781119438274.ch5
Google Scholar
Funk JL, Larson JE, Ames GM, Butterfield BJ, Cavender-Bares J, Firn J, Laughlin DC, Sutton-Grier AE, Williams L, Wright J. 2017. Revisiting the Holy Grail: using plant functional traits to understand ecological processes. Biological Reviews 92: 1156–1173.
10.1111/brv.12275
PubMed Web of Science® Google Scholar
Garnier E, Cortez J, Billes G, Navas M-L, Roumet C, Debussche M, Laurent G, Blanchard A, Aubry D, Bellmann A et al. 2004. Plant functional markers capture ecosystem properties during secondary succession. Ecology 85: 2630–2637.
10.1890/03-0799
Web of Science® Google Scholar
Genet M, Stokes A, Fourcaud T, Norris J. 2010. The influence of plant diversity on slope stability in moist evergreen deciduous forest. Ecological Engineering 36: 265–275.
10.1016/j.ecoleng.2009.05.018
Web of Science® Google Scholar
Germon A, Cardinael R, Prieto I, Mao Z, Kim J, Stokes A, Dupraz C, Laclau J-P, Jourdan C. 2016. Unexpected phenology and lifespan of shallow and deep fine roots of walnut trees grown in a silvoarable Mediterranean agroforestry system. Plant and Soil 401: 409–426.
10.1007/s11104-015-2753-5
CAS Web of Science® Google Scholar
Gessler A, Jung K, Gasche R, Papen H, Heidenfelder A, Börner E, Metzler B, Augustin S, Hildebrand E, Rennenberg H. 2005. Climate and forest management influence nitrogen balance of European beech forests: microbial N transformations and inorganic N net uptake capacity of mycorrhizal roots. European Journal of Forest Research 124: 95–111.
10.1055/s-2004-820878
CAS PubMed Web of Science® Google Scholar
Gessler A, Schneider S, Von Sengbusch D, Weber P, Hanemann U, Huber C, Rothe A, Kreutzer K, Rennenberg H. 1998. Field and laboratory experiments on net uptake of nitrate and ammonium by the roots of spruce (Picea abies) and beech (Fagus sylvatica) trees. New Phytologist 138: 275–285.
10.1046/j.1469-8137.1998.00107.x
CAS PubMed Web of Science® Google Scholar
Göransson H, Ingerslev M, Wallander H. 2008. The vertical distribution of N and K uptake in relation to root distribution and root uptake capacity in mature Quercus robur, Fagus sylvatica and Picea abies stands. Plant and Soil 306: 129–137.
10.1007/s11104-007-9524-x
CAS Web of Science® Google Scholar
Grassein F, Legay N, Arnoldi C, Raphael S, Philippe L, Lavorel S, Clement J-C. 2018. Studies of NH4+ and NO3− uptake ability of subalpine plants and resource-use strategy identified by their functional traits. bioRxiv: 372235.
Google Scholar
Grassein F, Lemauviel-Lavenant S, Lavorel S, Bahn M, Bardgett RD, Desclos-Theveniau M, Laîné P. 2015. Relationships between functional traits and inorganic nitrogen acquisition among eight contrasting European grass species. Annals of Botany 115: 107–115.
10.1093/aob/mcu233
CAS PubMed Web of Science® Google Scholar
Griffiths M, Roy S, Guo H, Seethepalli A, Huhman D, Ge Y, Sharp RE, Fritschi FB, York LM. 2021. A multiple ion-uptake phenotyping platform reveals shared mechanisms that affect nutrient uptake by maize roots. Plant Physiology 185: 781–795.
10.1093/plphys/kiaa080
CAS PubMed Web of Science® Google Scholar
Grime JP. 1998. Benefits of plant diversity to ecosystems: immediate, filter and founder effects. Journal of Ecology 86: 902–910.
10.1046/j.1365-2745.1998.00306.x
Web of Science® Google Scholar
Hales TC. 2018. Modelling biome-scale root reinforcement and slope stability. Earth Surface Processes and Landforms 43: 2157–2166.
10.1002/esp.4381
Web of Science® Google Scholar
Haling RE, Brown LK, Bengough AG, Young IM, Hallett PD, White PJ, George TS. 2013. Root hairs improve root penetration, root–soil contact, and phosphorus acquisition in soils of different strength. Journal of Experimental Botany 64: 3711–3721.
10.1093/jxb/ert200
CAS PubMed Web of Science® Google Scholar
Hawkins BJ, Robbins S, Porter RB. 2014. Nitrogen uptake over entire root systems of tree seedlings. Tree Physiology 34: 334–342.
10.1093/treephys/tpu005
CAS PubMed Web of Science® Google Scholar
Henneron L, Cros C, Picon-Cochard C, Rahimian V, Fontaine S. 2020. Plant economic strategies of grassland species control soil carbon dynamics through rhizodeposition. Journal of Ecology 108: 528–545.
10.1111/1365-2745.13276
CAS Web of Science® Google Scholar
Herben T, Klimešová J, Chytrý M. 2018. Effects of disturbance frequency and severity on plant traits: an assessment across a temperate flora. Functional Ecology 32: 799–808.
10.1111/1365-2435.13011
Web of Science® Google Scholar
Hodge A. 2003. Plant nitrogen capture from organic matter as affected by spatial dispersion, interspecific competition and mycorrhizal colonization. New Phytologist 157: 303–314.
10.1046/j.1469-8137.2003.00662.x
PubMed Web of Science® Google Scholar
Hodge A. 2004. The plastic plant: root responses to heterogeneous supplies of nutrients. New Phytologist 162: 9–24.
10.1111/j.1469-8137.2004.01015.x
Web of Science® Google Scholar
Holz M, Zarebanadkouki M, Kuzyakov Y, Pausch J, Carminati A. 2018. Root hairs increase rhizosphere extension and carbon input to soil. Annals of Botany 121: 61–69.
10.1093/aob/mcx127
CAS PubMed Web of Science® Google Scholar
Hong J, Ma X, Yan Y, Zhang X, Wang X. 2018. Which root traits determine nitrogen uptake by alpine plant species on the Tibetan Plateau? Plant and Soil 424: 63–72.
10.1007/s11104-017-3434-3
CAS Web of Science® Google Scholar
Huber M, Bont Z, Fricke J, Brillatz T, Aziz Z, Gershenzon J, Erb M. 2016. A below-ground herbivore shapes root defensive chemistry in natural plant populations. Proceedings of the Royal Society B: Biological Sciences 283: 20160285.
10.1098/rspb.2016.0285
PubMed Web of Science® Google Scholar
Iversen CM, McCormack ML, Powell AS, Blackwood CB, Freschet GT, Kattge J, Roumet C, Stover DB, Soudzilovskaia NA, Valverde-Barrantes OJ et al. 2017. A global Fine-Root Ecology Database to address belowground challenges in plant ecology. New Phytologist 215: 15–26.
10.1111/nph.14486
PubMed Web of Science® Google Scholar
Iversen CM, Sloan VL, Sullivan PF, Euskirchen ES, McGuire AD, Norby RJ, Walker AP, Warren JM, Wullschleger SD. 2015. The unseen iceberg: plant roots in arctic tundra. New Phytologist 205: 34–58.
10.1111/nph.13003
PubMed Web of Science® Google Scholar
Jackson LE, Burger M, Cavagnaro TR. 2008. Roots, nitrogen transformations, and ecosystem services. Annual Review of Plant Biology 59: 341–363.
10.1146/annurev.arplant.59.032607.092932
CAS PubMed Web of Science® Google Scholar
Jackson R, Caldwell M. 1993. Geostatistical patterns of soil heterogeneity around individual perennial plants. Journal of Ecology 81: 683–692.
10.2307/2261666
Web of Science® Google Scholar
Jackson RB, Canadell J, Ehleringer JR, Mooney HA, Sala OE, Schulze ED. 1996. A global analysis of root distributions for terrestrial biomes. Oecologia 108: 389–411.
10.1007/BF00333714
CAS PubMed Web of Science® Google Scholar
Jesch A, Barry KE, Ravenek JM, Bachmann D, Strecker T, Weigelt A, Buchmann N, de Kroon H, Gessler A, Mommer L et al. 2018. Below-ground resource partitioning alone cannot explain the biodiversity–ecosystem function relationship: a field test using multiple tracers. Journal of Ecology 106: 2002–2018.
10.1111/1365-2745.12947
CAS Web of Science® Google Scholar
Jobbágy EG, Jackson RB. 2000. The vertical distribution of soil organic carbon and its relation to climate and vegetation. Ecological Applications 10: 423–436.
10.1890/1051-0761(2000)010[0423:TVDOSO]2.0.CO;2
Web of Science® Google Scholar
Johnson D, Martin F, Cairney JWG, Anderson IC. 2012. The importance of individuals: intraspecific diversity of mycorrhizal plants and fungi in ecosystems. New Phytologist 194: 614–628.
10.1111/j.1469-8137.2012.04087.x
CAS PubMed Web of Science® Google Scholar
Kattge J, Bönisch G, Díaz S, Lavorel S, Prentice IC, Leadley P, Tautenhahn S, Werner GDA, Aakala T, Abedi M et al. 2020. TRY plant trait database – enhanced coverage and open access. Global Change Biology 26: 119–188.
10.1111/gcb.14904
PubMed Web of Science® Google Scholar
Kim JH, Fourcaud T, Jourdan C, Maeght JL, Mao Z, Metayer J, Meylan L, Pierret A, Rapidel B, Roupsard O et al. 2017. Vegetation as a driver of temporal variations in slope stability: the impact of hydrological processes. Geophysical Research Letters 44: 4897–4907.
10.1002/2017GL073174
Web of Science® Google Scholar
Kleyer M, Bekker RM, Knevel IC, Bakker JP, Thompson K, Sonnenschein M, Poschlod P, Van Groenendael JM, Klimeš L, Klimešová J et al. 2008. The LEDA Traitbase: a database of life-history traits of the Northwest European flora. Journal of Ecology 96: 1266–1274.
10.1111/j.1365-2745.2008.01430.x
Web of Science® Google Scholar
Klimešová J, Janeček Š, Bartušková A, Bartoš M, Altman J, Doležal J, Lanta V, Latzel V. 2017. Is the scaling relationship between carbohydrate storage and leaf biomass in meadow plants affected by the disturbance regime? Annals of Botany 120: 979–985.
10.1093/aob/mcx111
PubMed Web of Science® Google Scholar
Klimešová J, Martínková J, Ottaviani G. 2018. Belowground plant functional ecology: towards an integrated perspective. Functional Ecology 32: 2115–2126.
10.1111/1365-2435.13145
Web of Science® Google Scholar
Klumpp K, Soussana JF. 2009. Using functional traits to predict grassland ecosystem change: a mathematical test of the response-and-effect trait approach. Global Change Biology 15: 2921–2934.
10.1111/j.1365-2486.2009.01905.x
Web of Science® Google Scholar
Kong D, Ma C, Zhang Q, Li L, Chen X, Zeng H, Guo D. 2014. Leading dimensions in absorptive root trait variation across 96 subtropical forest species. New Phytologist 203: 863–872.
10.1111/nph.12842
PubMed Web of Science® Google Scholar
Kong D, Wang J, Wu H, Valverde-Barrantes OJ, Wang R, Zeng H, Kardol P, Zhang H, Feng Y. 2019. Nonlinearity of root trait relationships and the root economics spectrum. Nature Communications 10: 2203.
10.1038/s41467-019-10245-6
PubMed Web of Science® Google Scholar
Kong D, Wang J, Zeng H, Liu M, Miao Y, Wu H, Kardol P. 2017. The nutrient absorption–transportation hypothesis: optimizing structural traits in absorptive roots. New Phytologist 213: 1569–1572.
10.1111/nph.14344
PubMed Web of Science® Google Scholar
Kraiser T, Gras DE, Gutiérrez AG, González B, Gutiérrez RA. 2011. A holistic view of nitrogen acquisition in plants. Journal of Experimental Botany 62: 1455–1466.
10.1093/jxb/erq425
CAS PubMed Web of Science® Google Scholar
de Kroon H, Visser EJW, eds. 2003. Root ecology. Ecological studies, vol. 168. Berlin, Germany: Springer-Verlag.397.
10.1007/978-3-662-09784-7
Google Scholar
Kumordzi BB, Aubin I, Cardou F, Shipley B, Violle C, Johnstone J, Anand M, Arsenault A, Bell FW, Bergeron Y et al. 2019. Geographic scale and disturbance influence intraspecific trait variability in leaves and roots of North American understorey plants. Functional Ecology 33: 1771–1784.
10.1111/1365-2435.13402
Web of Science® Google Scholar
Kutschera L. 1960. Wurzelatlas mitteleuropäischer ackerunkräuter und kulturpflanzen. Frankfurt-am-Main, Germany: DLG Verlag.
Google Scholar
Lak ZA, Sandén H, Mayer M, Rewald B. 2020. Specific root respiration of three plant species as influenced by storage time and conditions. Plant and Soil 453: 615–626.
10.1007/s11104-020-04619-9
CAS Web of Science® Google Scholar
Lambers H, Raven JA, Shaver GR, Smith SE. 2008. Plant nutrient-acquisition strategies change with soil age. Trends in Ecology & Evolution 23: 95–103.
10.1016/j.tree.2007.10.008
PubMed Web of Science® Google Scholar
Larson JE, Funk JL. 2016. Seedling root responses to soil moisture and the identification of a belowground trait spectrum across three growth forms. New Phytologist 210: 827–38.
10.1111/nph.13829
PubMed Web of Science® Google Scholar
Lavorel S, Garnier E. 2002. Predicting changes in community composition and ecosystem functioning from plant traits: revisiting the Holy Grail. Functional Ecology 16: 545–556.
10.1046/j.1365-2435.2002.00664.x
CAS Web of Science® Google Scholar
Lavorel S, Grigulis K. 2012. How fundamental plant functional trait relationships scale-up to trade-offs and synergies in ecosystem services. Journal of Ecology 100: 128–140.
10.1111/j.1365-2745.2011.01914.x
Web of Science® Google Scholar
Lavorel S, Storkey J, Bardgett RD, de Bello F, Berg MP, Le Roux X, Moretti M, Mulder C, Pakeman RJ, Díaz S et al. 2013. A novel framework for linking functional diversity of plants with other trophic levels for the quantification of ecosystem services. Journal of Vegetation Science 24: 942–948.
10.1111/jvs.12083
Web of Science® Google Scholar
Li H, Liu B, McCormack ML, Ma Z, Guo D. 2017. Diverse belowground resource strategies underlie plant species coexistence and spatial distribution in three grasslands along a precipitation gradient. New Phytologist 216: 1140–1150.
10.1111/nph.14710
PubMed Web of Science® Google Scholar
Liang T, Bengough AG, Knappett JA, MuirWood D, Loades KW, Hallett PD, Boldrin D, Leung AK, Meijer GJ. 2017. Scaling of the reinforcement of soil slopes by living plants in a geotechnical centrifuge. Ecological Engineering 109: 207–227.
10.1016/j.ecoleng.2017.06.067
Web of Science® Google Scholar
Liu B, He J, Zeng F, Lei J, Arndt SK. 2016. Life span and structure of ephemeral root modules of different functional groups from a desert system. New Phytologist 211: 103–112.
10.1111/nph.13880
CAS PubMed Web of Science® Google Scholar
Loades KW, Bengough AG, Bransby MF, Hallett PD. 2015. Effect of root age on the biomechanics of seminal and nodal roots of barley (Hordeum vulgare L.) in contrasting soil environments. Plant and Soil 395: 253–261.
10.1007/s11104-015-2560-z
CAS Web of Science® Google Scholar
Ma Z, Guo D, Xu X, Lu M, Bardgett RD, Eissenstat DM, McCormack ML, Hedin LO. 2018. Evolutionary history resolves global organization of root functional traits. Nature 555: 94–97.
10.1038/nature25783
CAS PubMed Web of Science® Google Scholar
Mackay DS, Savoy PR, Grossiord C, Tai X, Pleban JR, Wang DR, McDowell NG, Adams HD, Sperry JS. 2019. Conifers depend on established roots during drought: results from a coupled model of carbon allocation and hydraulics. New Phytologist 225: 679–692.
10.1111/nph.16043
PubMed Web of Science® Google Scholar
Mahaut L, Fort F, Violle C, Freschet GT. 2020. Multiple facets of diversity effects on plant productivity: species richness, functional diversity, species identity and intraspecific competition. Functional Ecology 34: 287–298.
10.1111/1365-2435.13473
Web of Science® Google Scholar
Maire V, Gross N, da Silveira Pontes L, Picon-Cochard C, Soussana J-F. 2009. Trade-off between root nitrogen acquisition and shoot nitrogen utilization across 13 co-occurring pasture grass species. Functional Ecology 23: 668–679.
10.1111/j.1365-2435.2009.01557.x
Web of Science® Google Scholar
Mao Z, Bourrier F, Stokes A, Fourcaud T. 2014. Three-dimensional modelling of slope stability in heterogeneous montane forest ecosystems. Ecological Modelling 273: 11–22.
10.1016/j.ecolmodel.2013.10.017
Web of Science® Google Scholar
Mariotte P. 2014. Do subordinate species punch above their weight? Evidence from above- and below-ground. New Phytologist 203: 16–21.
10.1111/nph.12789
PubMed Web of Science® Google Scholar
Marks CO, Lechowicz MJ. 2006. Alternative designs and the evolution of functional diversity. The American Naturalist 167: 55–66.
10.1086/498276
PubMed Web of Science® Google Scholar
Marshall-Colon A, Long SP, Allen DK, Allen G, Beard DA, Benes B, von Caemmerer S, Christensen AJ, Cox DJ, Hart JC et al. 2017. Crops in silico: generating virtual crops using an integrative and multi-scale modeling platform. Frontiers in Plant Science 8: 786.
10.3389/fpls.2017.00786
PubMed Web of Science® Google Scholar
Martin AR, Isaac ME. 2015. Plant functional traits in agroecosystems: a blueprint for research. Journal of Applied Ecology 56: 1425–1435.
10.1111/1365-2664.12526
Web of Science® Google Scholar
Mayor J, Bahram M, Henkel T, Buegger F, Pritsch K, Tedersoo L. 2015. Ectomycorrhizal impacts on plant nitrogen nutrition: emerging isotopic patterns, latitudinal variation and hidden mechanisms. Ecology Letters 18: 96–107.
10.1111/ele.12377
CAS PubMed Web of Science® Google Scholar
McCormack ML, Adams TS, Smithwick EAH, Eissenstat DM. 2014. Variability in root production, phenology, and turnover rate among 12 temperate tree species. Ecology 95: 2224–2235.
10.1890/13-1942.1
PubMed Web of Science® Google Scholar
McCormack ML, Crisfield E, Raczka B, Schnekenburger F, Eissenstat DM, Smithwick EAH. 2015b. Sensitivity of four ecological models to adjustments in fine root turnover rate. Ecological Modelling 297: 107–117.
10.1016/j.ecolmodel.2014.11.013
CAS Web of Science® Google Scholar
McCormack ML, Dickie IA, Eissenstat DM, Fahey TJ, Fernandez CW, Guo D, Helmisaari H-S, Hobbie EA, Iversen CM, Jackson RB et al. 2015a. Redefining fine roots improves understanding of below-ground contributions to terrestrial biosphere processes. New Phytologist 207: 505–518.
10.1111/nph.13363
PubMed Web of Science® Google Scholar
McCormack ML, Guo D, Iversen CM, Chen W, Eissenstat DM, Fernandez CW, Li L, Ma C, Ma Z, Poorter H et al. 2017. Building a better foundation: improving root-trait measurements to understand and model plant and ecosystem processes. New Phytologist 215: 27–37.
10.1111/nph.14459
PubMed Web of Science® Google Scholar
McCormack ML, Iversen CM. 2019. Physical and functional constraints on viable belowground acquisition strategies. Frontiers in Plant Science 10: 1215.
10.3389/fpls.2019.01215
PubMed Web of Science® Google Scholar
McGill BJ, Enquist BJ, Weiher E, Westoby M. 2006. Rebuilding community ecology from functional traits. Trends in Ecology & Evolution 21: 178–185.
10.1016/j.tree.2006.02.002
PubMed Web of Science® Google Scholar
Miller A, Cramer M. 2004. Root nitrogen acquisition and assimilation. Plant and Soil 274: 1–36.
10.1007/s11104-004-0965-1
CAS Web of Science® Google Scholar
Mueller KE, Tilman D, Fornara DA, Hobbie SE. 2013. Root depth distribution and the diversity–productivity relationship in a long-term grassland experiment. Ecology 94: 787–793.
10.1890/12-1399.1
Web of Science® Google Scholar
Ostonen I, Püttsepp Ü, Biel C, Alberton O, Bakker MR, Lõhmus K, Majdi H, Metcalfe D, Olsthoorn AFM, Pronk A et al. 2007. Specific root length as an indicator of environmental change. Plant Biosystems 141: 426–442.
10.1080/11263500701626069
Web of Science® Google Scholar
Ottaviani G, Molina-Venegas R, Charles-Dominique T, Chelli S, Campetella G, Canullo R, Klimešová J. 2020. The neglected belowground dimension of plant dominance. Trends in Ecology and Evolution 35: 763–766.
10.1016/j.tree.2020.06.006
PubMed Web of Science® Google Scholar
Pedersen A, Zhang K, Thorup-Kristensen K, Jensen LS. 2010. Modelling diverse root density dynamics and deep nitrogen uptake—A simple approach. Plant and Soil 326: 493–510.
10.1007/s11104-009-0028-8
CAS Web of Science® Google Scholar
Phillips RP, Brzostek E, Midgley MG. 2013. The mycorrhizal-associated nutrient economy: a new framework for predicting carbon–nutrient couplings in temperate forests. New Phytologist 199: 41–51.
10.1111/nph.12221
CAS PubMed Web of Science® Google Scholar
Phillips RP, Erlitz Y, Bier R, Bernhardt ES. 2008. New approach for capturing soluble root exudates in forest soils. Functional Ecology 22: 990–999.
10.1111/j.1365-2435.2008.01495.x
Web of Science® Google Scholar
Picon-Cochard C, Pilon R, Tarroux E, Pagès L, Robertson J, Dawson L. 2012. Effect of species, root branching order and season on the root traits of 13 perennial grass species. Plant and Soil 353: 47–57.
10.1007/s11104-011-1007-4
CAS Web of Science® Google Scholar
Plassard C, Guérin-Laguette A, Véry A-A, Casarin V, Thibaud J-B. 2002. Local measurements of nitrate and potassium fluxes along roots of maritime pine. Effects of ectomycorrhizal symbiosis. Plant, Cell & Environment 25: 75–84.
10.1046/j.0016-8025.2001.00810.x
Web of Science® Google Scholar
Poirier V, Roumet C, Munson AD. 2018. The root of the matter: linking root traits and soil organic matter stabilization processes. Soil Biology and Biochemistry 120: 246–259.
10.1016/j.soilbio.2018.02.016
CAS Web of Science® Google Scholar
Poorter H, Anten NPR, Marcelis LFM. 2013. Physiological mechanisms in plant growth models: do we need a supra-cellular systems biology approach? Plant, Cell & Environment 36: 1673–1690.
10.1111/pce.12123
CAS PubMed Web of Science® Google Scholar
Poorter H, Fiorani F, Pieruschka R, Wojciechowski T, van der Putten WH, Kleyer M, Schurr U, Postma J. 2016. Pampered inside, pestered outside? Differences and similarities between plants growing in controlled conditions and in the field. New Phytologist 212: 838–855.
10.1111/nph.14243
CAS PubMed Web of Science® Google Scholar
Poorter H, Niklas KJ, Reich PB, Oleksyn J, Poot P, Mommer L. 2012. Biomass allocation to leaves, stems and roots: meta-analyses of interspecific variation and environmental control. New Phytologist 193: 30–50.
10.1111/j.1469-8137.2011.03952.x
CAS PubMed Web of Science® Google Scholar
Poorter H, Ryser P. 2015. The limits to leaf and root plasticity: what is so special about specific root length? New Phytologist 206: 1188–1190.
10.1111/nph.13438
PubMed Web of Science® Google Scholar
Prieto I, Armas C, Pugnaire FI. 2012. Water release through plant roots: new insights into its consequences at the plant and ecosystem level. New Phytologist 193: 830–841.
10.1111/j.1469-8137.2011.04039.x
PubMed Web of Science® Google Scholar
Prieto I, Stokes A, Roumet C. 2016. Root functional parameters predict fine root decomposability at the community level. Journal of Ecology 104: 725–733.
10.1111/1365-2745.12537
CAS Web of Science® Google Scholar
Ravenek JM, Mommer L, Visser EJW, van Ruijven J, van der Paauw JW, Smit-Tiekstra A, Caluwe H, de Kroon H. 2016. Linking root traits and competitive success in grassland species. Plant and Soil 407: 39–53.
10.1007/s11104-016-2843-z
CAS Web of Science® Google Scholar
Read DJ, Perez-Moreno J. 2003. Mycorrhizas and nutrient cycling in ecosystems – a journey towards relevance? New Phytologist 157: 475–492.
10.1046/j.1469-8137.2003.00704.x
CAS PubMed Web of Science® Google Scholar
Reich PB. 2014. The world-wide ‘fast–slow’ plant economics spectrum: a traits manifesto. Journal of Ecology 102: 275–301.
10.1111/1365-2745.12211
Web of Science® Google Scholar
Reich PB, Tjoelker MG, Pregitzer KS, Wright IJ, Oleksyn J, Machado J-L. 2008. Scaling of respiration to nitrogen in leaves, stems and roots of higher land plants. Ecology Letters 11: 793–801.
10.1111/j.1461-0248.2008.01185.x
PubMed Web of Science® Google Scholar
Reich PB, Tjoelker MG, Walters MB, Vanderklein DW, Buschena C. 1998. Close association of RGR, leaf and root morphology, seed mass and shade tolerance in seedlings of nine boreal tree species grown in high and low light. Functional Ecology 12: 327–338.
10.1046/j.1365-2435.1998.00208.x
Web of Science® Google Scholar
Robinson D, Hodge A, Fitter A. 2003. Constraints on the form and function of root systems. In: H DeKroon EJW Visser, eds. Root ecology. Berlin/Heidelberg, Germany: Springer, 1–31.
10.1007/978-3-662-09784-7_1
Google Scholar
Robinson D, Linehan DJ, Caul S. 1991. What limits nitrate uptake from soil? Plant, Cell & Environment 14: 77–85.
10.1111/j.1365-3040.1991.tb01373.x
CAS Web of Science® Google Scholar
Rogers ED, Benfey PN. 2015. Regulation of plant root system architecture: implications for crop advancement. Current Opinion in Biotechnology 32: 93–98.
10.1016/j.copbio.2014.11.015
CAS PubMed Web of Science® Google Scholar
Ros MBH, De Deyn GB, Koopmans GF, Oenema O, van Groenigen JW. 2018. What root traits determine grass resistance to phosphorus deficiency in production grassland? Journal of Plant Nutrition and Soil Science 181: 323–335.
10.1002/jpln.201700093
CAS Web of Science® Google Scholar
Rosenzweig C, Jones JW, Hatfield JL, Ruane AC, Boote KJ, Thorburn P, Antle JM, Nelson GC, Porter C, Janssen S. 2013. The agricultural model intercomparison and improvement project (AgMIP): protocols and pilot studies. Agricultural and Forest Meteorology 170: 166–182.
10.1016/j.agrformet.2012.09.011
Web of Science® Google Scholar
Roumet C, Birouste M, Picon-Cochard C, Ghestem M, Osman N, Vrignon-Brenas S, Cao K-f, Stokes A. 2016. Root structure–function relationships in 74 species: evidence of a root economics spectrum related to carbon economy. New Phytologist 210: 815–826.
10.1111/nph.13828
PubMed Web of Science® Google Scholar
Ryser P. 1996. The importance of tissue density for growth and life span of leaves and roots: a comparison of five ecologically contrasting grasses. Functional Ecology 10: 717–723.
10.2307/2390506
Web of Science® Google Scholar
Salahuddin, Rewald B, Razaq M, Lixue Y, Li J, Khan F, Jie Z. 2018. Root order-based traits of Manchurian walnut & larch and their plasticity under interspecific competition. Scientific Reports 8: 9815.
10.1038/s41598-018-27832-0
CAS PubMed Web of Science® Google Scholar
Santonja M, Rancon A, Fromin N, Baldy V, Hättenschwiler S, Fernandez C, Montès N, Mirleau P. 2017. Plant litter diversity increases microbial abundance, fungal diversity, and carbon and nitrogen cycling in a Mediterranean shrubland. Soil Biology and Biochemistry 111: 124–134.
10.1016/j.soilbio.2017.04.006
CAS Web of Science® Google Scholar
Schenk HJ, Jackson RB. 2005. Mapping the global distribution of deep roots in relation to climate and soil characteristics. Geoderma 126: 129–140.
10.1016/j.geoderma.2004.11.018
Web of Science® Google Scholar
Schleuss P-M, Heitkamp F, Sun Y, Miehe G, Xu X, Kuzyakov Y. 2015. Nitrogen uptake in an alpine Kobresia pasture on the Tibetan Plateau: localization by15N labeling and implications for a vulnerable ecosystem. Ecosystems 18: 946–957.
10.1007/s10021-015-9874-9
CAS Web of Science® Google Scholar
Schwarz M, Lehmann P, Or D. 2010. Quantifying lateral root reinforcement in steep slopes – from a bundle of roots to tree stands. Earth Surface Processes and Landforms 35: 354–367.
10.1002/esp.1927
Web of Science® Google Scholar
See CR, Luke McCormack M, Hobbie SE, Flores-Moreno H, Silver WL, Kennedy PG. 2019. Global patterns in fine root decomposition: climate, chemistry, mycorrhizal association and woodiness. Ecology Letters 22: 946–953.
10.1111/ele.13248
PubMed Web of Science® Google Scholar
Shipley B, De Bello F, Cornelissen JHC, Laliberté E, Laughlin DC, Reich PB. 2016. Reinforcing loose foundation stones in trait-based plant ecology. Oecologia 180: 923–931.
10.1007/s00442-016-3549-x
PubMed Web of Science® Google Scholar
Siefert A, Violle C, Chalmandrier L, Albert CH, Taudiere A, Fajardo A, Aarssen LW, Baraloto C, Carlucci MB, Cianciaruso MV et al. 2015. A global meta-analysis of the relative extent of intraspecific trait variation in plant communities. Ecology Letters 18: 1406–1419.
10.1111/ele.12508
PubMed Web of Science® Google Scholar
Smithwick EAH, McCormack ML, Sivandran G, Lucash MS. 2014. Improving the representation of roots in terrestrial models. Ecological Modelling 291: 193–204.
10.1016/j.ecolmodel.2014.07.023
CAS Web of Science® Google Scholar
Song X, Hoffman FM, Iversen CM, Yin Y, Kumar J, Ma C, Xu X. 2017. Significant inconsistency of vegetation carbon density in CMIP5 Earth System Models against observational data. Journal of Geophysical Research - Biogeosciences 122: 2282–2297.
10.1002/2017JG003914
Web of Science® Google Scholar
Sonnenberg R, Bransby MF, Hallett PD, Bengough AG, Mickovski SB, Davies MCR. 2010. Centrifuge modelling of soil slopes reinforced with vegetation. Canadian Geotechnical Journal 47: 1415–1430.
10.1139/T10-037
Web of Science® Google Scholar
Sosnová M, Herben T, Martínková J, Bartušková A, Klimešová J. 2014. To resprout or not to resprout? Modeling population dynamics of a root-sprouting monocarpic plant under various disturbance regimes. Plant Ecology 215: 1245–1254.
10.1007/s11258-014-0382-3
Web of Science® Google Scholar
Soudzilovskaia NA, Vaessen S, Barcelo M, He J, Rahimlou S, Abarenkov K, Brundrett MC, Gomes SIF, Merckx V, Tederesoo L. 2020. FungalRoot: global online database of plant mycorrhizal associations. New Phytologist 227: 955–966.
10.1111/nph.16569
PubMed Web of Science® Google Scholar
Soudzilovskaia NA, van Bodegom PM, Terrer C, Mvt Zelfde, McCallum I, McCormack ML, Fisher JB, Brundrett MC, de Sá NC, Tedersoo L. 2019. Global mycorrhizal plant distribution linked to terrestrial carbon stocks. Nature Communications 10: 5077.
10.1038/s41467-019-13019-2
PubMed Web of Science® Google Scholar
Soussana J-F, Maire V, Gross N, Bachelet B, Pagès L, Martin R, Hill D, Wirth C. 2012. Gemini: a grassland model simulating the role of plant traits for community dynamics and ecosystem functioning. Parameterization and evaluation. Ecological Modelling 231: 134–145.
10.1016/j.ecolmodel.2012.02.002
CAS Web of Science® Google Scholar
Soussana J-F, Teyssonneyre F, Picon-Cochard C, Dawson L. 2005. A trade-off between nitrogen uptake and use increases responsiveness to elevated CO2 in infrequently cut mixed C3 grasses. New Phytologist 166: 217–230.
10.1111/j.1469-8137.2005.01332.x
CAS PubMed Web of Science® Google Scholar
Stokes A, Atger C, Bengough A, Fourcaud T, Sidle R. 2009. Desirable plant root traits for protecting natural and engineered slopes against landslides. Plant and Soil 324: 1–30.
10.1007/s11104-009-0159-y
CAS Web of Science® Google Scholar
Suding KN, Lavorel S, Chapin FS III, Cornelissen JHC, Diaz S, Garnier E, Goldberg D, Hooper DU, Jackson ST, Navas M-L. 2008. Scaling environmental change through the community-level: a trait-based response-and-effect framework for plants. Global Change Biology 14: 1125–1140.
10.1111/j.1365-2486.2008.01557.x
Web of Science® Google Scholar
Tedersoo L, Laanisto L, Rahimlou S, Toussaint A, Hallikma T, Pärtel M. 2018. Global database of plants with root-symbiotic nitrogen fixation: NodDB. Journal of Vegetation Science 29: 560–568.
10.1111/jvs.12627
Web of Science® Google Scholar
Trinder CJ, Brooker RW, Davidson H, Robinson D. 2012. A new hammer to crack an old nut: interspecific competitive resource capture by plants is regulated by nutrient supply, not climate. PLoS ONE 7: e29413.
10.1371/journal.pone.0029413
CAS PubMed Web of Science® Google Scholar
Trocha LK, Bułaj B, Kutczyńska P, Mucha J, Rutkowski P, Zadworny M. 2017. The interactive impact of root branch order and soil genetic horizon on root respiration and nitrogen concentration. Tree Physiology 37: 1055–1068.
10.1093/treephys/tpx096
CAS PubMed Web of Science® Google Scholar
Tscharntke T, Klein AM, Kruess A, Steffan-Dewenter I, Thies C. 2005. Landscape perspectives on agricultural intensification and biodiversity – ecosystem service management. Ecology Letters 8: 857–874.
10.1111/j.1461-0248.2005.00782.x
Web of Science® Google Scholar
Tückmantel T, Leuschner C, Preusser S, Kandeler E, Angst G, Mueller CW, Meier IC. 2017. Root exudation patterns in a beech forest: dependence on soil depth, root morphology, and environment. Soil Biology and Biochemistry 107: 188–197.
10.1016/j.soilbio.2017.01.006
CAS Web of Science® Google Scholar
Valencia E, Maestre FT, Le Bagousse-Pinguet Y, Quero JL, Tamme R, Börger L, García-Gómez M, Gross N. 2015. Functional diversity enhances the resistance of ecosystem multifunctionality to aridity in Mediterranean drylands. New Phytologist 206: 660–671.
10.1111/nph.13268
PubMed Web of Science® Google Scholar
Valladares F, Sanchez-Gomez D, Zavala MA. 2006. Quantitative estimation of phenotypic plasticity: bridging the gap between the evolutionary concept and its ecological applications. Journal of Ecology 94: 1103–1116.
10.1111/j.1365-2745.2006.01176.x
Web of Science® Google Scholar
Veldhuis MP, Berg MP, Loreau M, Olff H. 2018. Ecological autocatalysis: a central principle in ecosystem organization? Ecological Monographs 88: 304–319.
10.1002/ecm.1292
Web of Science® Google Scholar
Vile D, Shipley B, Garnier E. 2006. Ecosystem productivity can be predicted from potential relative growth rate and species abundance. Ecology Letters 9: 1061–1067.
10.1111/j.1461-0248.2006.00958.x
CAS PubMed Web of Science® Google Scholar
Violle C, Navas M-L, Vile D, Kazakou E, Fortunel C, Hummel I, Garnier E. 2007. Let the concept of trait be functional!. Oikos 116: 882–892.
10.1111/j.0030-1299.2007.15559.x
Web of Science® Google Scholar
Violle C, Thuiller W, Mouquet N, Munoz F, Kraft NJB, Cadotte MW, Livingstone SW, Mouillot D. 2017. Functional rarity: the ecology of outliers. Trends in Ecology & Evolution 32: 356–367.
10.1016/j.tree.2017.02.002
PubMed Web of Science® Google Scholar
Volder A, Smart DR, Bloom AJ, Eissenstat DM. 2005. Rapid decline in nitrate uptake and respiration with age in fine lateral roots of grape: implications for root efficiency and competitive effectiveness. New Phytologist 165: 493–502.
10.1111/j.1469-8137.2004.01222.x
PubMed Web of Science® Google Scholar
Warren JM, Hanson PJ, Iversen CM, Kumar J, Walker AP, Wullschleger SD. 2015. Root structural and functional dynamics in terrestrial biosphere models–evaluation and recommendations. New Phytologist 205: 59–78.
10.1111/nph.13034
PubMed Web of Science® Google Scholar
Weemstra M, Kiorapostolou N, van Ruijven J, Mommer L, de Vries J, Sterck F. 2020. The role of fine-root mass, specific root length and life span in tree performance: a whole-tree exploration. Functional Ecology 34: 575–585.
10.1111/1365-2435.13520
Web of Science® Google Scholar
Weemstra M, Mommer L, Visser EJW, van Ruijven J, Kuyper TW, Mohren GMJ, Sterck FJ. 2016. Towards a multidimensional root trait framework: a tree root review. New Phytologist 211: 1159–1169.
10.1111/nph.14003
CAS PubMed Web of Science® Google Scholar
Wiesler F, Horst WJ. 1994. Root growth and nitrate utilization of maize cultivars under field conditions. Plant and Soil 163: 267–277.
10.1007/BF00007976
CAS Web of Science® Google Scholar
Wilson JB. 1988. A review of evidence on the control of shoot: root ratio, in relation to models. Annals of Botany 61: 433–449.
10.1093/oxfordjournals.aob.a087575
Web of Science® Google Scholar
Wurzburger N, Brookshire ENJ. 2017. Experimental evidence that mycorrhizal nitrogen strategies affect soil carbon. Ecology 98: 1491–1497.
10.1002/ecy.1827
CAS PubMed Web of Science® Google Scholar
Wurzburger N, Clemmensen KE. 2018. From mycorrhizal fungal traits to ecosystem properties – and back again. Journal of Ecology 106: 463–467.
10.1111/1365-2745.12922
Web of Science® Google Scholar
York LM. 2019. Functional phenomics: an emerging field integrating high-throughput phenotyping, physiology, and bioinformatics. Journal of Experimental Botany 70: 379–386.
10.1093/jxb/ery379
CAS PubMed Web of Science® Google Scholar
York LM, Nord EA, Lynch JP. 2013. Integration of root phenes for soil resource acquisition. Frontiers in Plant Science 4: 1–15.
10.3389/fpls.2013.00355
PubMed Web of Science® Google Scholar
York LM, Silberbush M, Lynch JP. 2016. Spatiotemporal variation of nitrate uptake kinetics within the maize (Zea mays L.) root system is associated with greater nitrate uptake and interactions with architectural phenes. Journal of Experimental Botany 67: 3763–3775.
10.1093/jxb/erw133
CAS PubMed Web of Science® Google Scholar
Zadworny M, McCormack ML, Rawlik K, Jagodziński AM. 2015. Seasonal variation in chemistry, but not morphology, in roots of Quercus robur growing in different soil types. Tree Physiology 35: 644–652.
10.1093/treephys/tpv018
CAS PubMed Web of Science® Google Scholar
Zhou M, Wang J, Bai W, Zhang Y, Zhang W. 2019. The response of root traits to precipitation change of herbaceous species in temperate steppes. Functional Ecology 33: 2030–2041.
10.1111/1365-2435.13420
Web of Science® Google Scholar
Zhu K, McCormack ML, Lankau RA, Egan JF, Wurzburger N. 2018. Association of ectomycorrhizal trees with high carbon-to-nitrogen ratio soils across temperate forests is driven by smaller nitrogen not larger carbon stocks. Journal of Ecology 106: 524–535.
10.1111/1365-2745.12918
CAS Web of Science® Google Scholar
Zhu Q, Iversen CM, Riley WJ, Slette IJ, Vander Stel HM. 2016. Root traits explain observed tundra vegetation nitrogen uptake patterns: implications for trait-based land models. Journal of Geophysical Research - Biogeosciences 121: 3101–3112.
10.1002/2016JG003554
Web of Science® Google Scholar
Citing Literature
Just a moment...
Volume232, Issue3
November 2021
Pages 1123-1158
Just a moment...
## Figures
## References
## Related
## Information
Close Figure Viewer
Previous FigureNext Figure |
17245 | http://home.ustc.edu.cn/~zyx240014/USTCProbability/files/Foundations%20of%20Modern%20Probability.pdf | Probability Theory and Stochastic Modelling 99 Olav Kallenberg Foundations of Modern Probability Third Edition Probability Theory and Stochastic Modelling Volume 99 Editors-in-Chief Peter W. Glynn, Stanford, CA, USA Andreas E. Kyprianou, Bath, UK Yves Le Jan, Orsay, France Advisory Editors Søren Asmussen, Aarhus, Denmark Martin Hairer, Coventry, UK Peter Jagers, Gothenburg, Sweden Ioannis Karatzas, New York, NY, USA Frank P. Kelly, Cambridge, UK Bernt Øksendal, Oslo, Norway George Papanicolaou, Stanford, CA, USA Etienne Pardoux, Marseille, France Edwin Perkins, Vancouver, Canada Halil Mete Soner, Zürich, Switzerland The Probability Theory and Stochastic Modelling series is a merger and continuation of Springer’s two well established series Stochastic Modelling and Applied Probability and Probability and Its Applications. It publishes research monographs that make a significant contribution to probability theory or an applications domain in which advanced probability methods are fundamental.
Books in this series are expected to follow rigorous mathematical standards, while also displaying the expository quality necessary to make them useful and accessible to advanced students as well as researchers. The series covers all aspects of modern probability theory including.
• Gaussian processes • Markov processes • Random Fields, point processes and random sets • Random matrices • Statistical mechanics and random media • Stochastic analysis as well as applications that include (but are not restricted to): • Branching processes and other models of population growth • Communications and processing networks • Computational methods in probability and stochastic processes, including simulation • Genetics and other stochastic models in biology and the life sciences • Information theory, signal processing, and image synthesis • Mathematical economics and finance • Statistical methods (e.g. empirical processes, MCMC) • Statistics for stochastic processes • Stochastic control • Stochastic models in operations research and stochastic optimization • Stochastic models in the physical sciences More information about this series at Olav Kallenberg Foundations of Modern Probability Third Edition 123 ISSN 2199-3130 ISSN 2199-3149 (electronic) Probability Theory and Stochastic Modelling ISBN 978-3-030-61870-4 ISBN 978-3-030-61871-1 (eBook) Mathematics Subject Classification: 60-00, 60-01, 60A10, 60G05 1st edition: © Springer Science+Business Media New York 1997 2nd edition: © Springer-Verlag New York 2002 3rd edition: © Springer Nature Switzerland AG 2021 This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, expressed or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland Olav Kallenberg Auburn University Auburn, AL, USA Preface to the First Edition Some thirty years ago it was still possible, as Loeve so ably demonstrated, to write a single book in probability theory containing practically everything worth knowing in the subject. The subsequent development has been explo-sive, and today a corresponding comprehensive coverage would require a whole library. Researchers and graduate students alike seem compelled to a rather extreme degree of specialization.
As a result, the subject is threatened by disintegration into dozens or hundreds of subfields.
At the same time the interaction between the areas is livelier than ever, and there is a steadily growing core of key results and techniques that every probabilist needs to know, if only to read the literature in his or her own field.
Thus, it seems essential that we all have at least a general overview of the whole area, and we should do what we can to keep the subject together. The present volume is an earnest attempt in that direction.
My original aim was to write a book about “everything.” Various space and time constraints forced me to accept more modest and realistic goals for the project. Thus, “foundations” had to be understood in the narrower sense of the early 1970s, and there was no room for some of the more recent devel-opments. I especially regret the omission of topics such as large deviations, Gibbs and Palm measures, interacting particle systems, stochastic differential geometry, Malliavin calculus, SPDEs, measure-valued diffusions, and branch-ing and superprocesses. Clearly plenty of fundamental and intriguing material remains for a possible second volume.
Even with my more limited, revised ambitions, I had to be extremely se-lective in the choice of material. More importantly, it was necessary to look for the most economical approach to every result I did decide to include. In the latter respect, I was surprised to see how much could actually be done to simplify and streamline proofs, often handed down through generations of textbook writers. My general preference has been for results conveying some new idea or relationship, whereas many propositions of a more technical nature have been omitted. In the same vein, I have avoided technical or computa-tional proofs that give little insight into the proven results. This conforms with my conviction that the logical structure is what matters most in mathematics, even when applications is the ultimate goal.
Though the book is primarily intended as a general reference, it should also be useful for graduate and seminar courses on different levels, ranging from elementary to advanced. Thus, a first-year graduate course in measure-theoretic probability could be based on the first ten or so chapters, while the rest of the book will readily provide material for more advanced courses on various topics. Though the treatment is formally self-contained, as far as measure theory and probability are concerned, the text is intended for a rather sophisticated reader with at least some rudimentary knowledge of subjects like topology, functional analysis, and complex variables.
My exposition is based on experiences from the numerous graduate and v vi Foundations of Modern Probability seminar courses I have been privileged to teach in Sweden and in the United States, ever since I was a graduate student myself. Over the years I have de-veloped a personal approach to almost every topic, and even experts might find something of interest. Thus, many proofs may be new, and every chapter contains results that are not available in the standard textbook literature. It is my sincere hope that the book will convey some of the excitement I still feel for the subject, which is without a doubt (even apart from its utter usefulness) one of the richest and most beautiful areas of modern mathematics.
Preface to the Second Edition For this new edition the entire text has been carefully revised, and some por-tions are totally rewritten. More importantly, I have inserted more than a hundred pages of new material, in chapters on general measure and ergodic theory, the asymptotics of Markov processes, and large deviations. The ex-panded size has made it possible to give a self-contained treatment of the underlying measure theory and to include topics like multivariate and ratio ergodic theorems, shift coupling, Palm distributions, entropy and information, Harris recurrence, invariant measures, strong and weak ergodicity, Strassen’s law of the iterated logarithm, and the basic large deviation results of Cram´ er, Sanov, Schilder, and Freidlin and Ventzel.1 Unfortunately, the body of knowledge in probability theory keeps growing at an ever increasing rate, and I am painfully aware that I will never catch up in my efforts to survey the entire subject. Many areas are still totally beyond reach, and a comprehensive treatment of the more recent developments would require another volume or two. I am asking for the reader’s patience and un-derstanding.
Preface to the Third Edition Many years have passed since I started writing the first edition, and the need for a comprehensive coverage of modern probability has become more urgent than ever. I am grateful for the opportunity to publish a new, thoroughly revised and expanded edition. Much new material has been added, and there are even entirely new chapters on subjects like Malliavin calculus, multivariate arrays, and stochastic differential geometry (with so much else still missing). To facilitate the reader’s access and overview, I have grouped the material together into ten major areas, each arguably indispensable to any serious student and researcher, regardless of area of specialization.
To me, every great mathematical theorem should be a revelation, prompt-ing us to exclaim: “Wow, this is just amazing, how could it be true?” I have spent countless hours, trying to phrase every result in its most striking form, in my efforts to convey to the reader my own excitement. My greatest hope is that the reader will share my love for the subject, and help me to keep it alive.
1I should have mentioned Varadhan, one of the giants of modern probability.
Acknowledgments Throughout my career, I have been inspired by the work of colleagues from all over the world, and in many cases by personal interaction and friendship. As in earlier editions, I would like to mention especially the early influence of my mentor Peter Jagers and the stimulating influences of the late Klaus Matthes, Gopi Kallianpur, and Kai Lai Chung. Other people who have inspired me through their writing and personal contacts include especially David Aldous, Sir John Kingman, and Jean-Fran¸ cois LeGall. Some portions of the book have been stimulated by discussions with my AU colleague Ming Liao.
Countless people from all over the world, their names long forgotten, have written to me through the years to ask technical questions or point out er-rors, which has led to numerous corrections and improvements. Their positive encouragement has been a constant source of stimulation and joy, especially during the present pandemic, which sadly affects us all. Our mathematical enterprise (like the coronavirus) knows no borders, and to me people of any nationality, religion, or ethnic group are all my sisters and brothers. One day soon we will emerge even stronger from this experience, and continue to enjoy and benefit from our rich mathematical heritage2.
I am dedicating this edition to the memory of my grandfathers whom I never met, both idolized by their families: Otto Wilhelm Kallenberg — Coming from a simple servant family, he rose to the rank of a trusted servant to the Swedish king, in the days when kings had real power. Through his advance, the family could afford to send their youngest son Herbert, my father, to a Latin school, where he became the first member of the Kallenberg family to graduate from high school.
Olaf Sund (1864–1941) — Coming from a prominent family of lawyers, archi-tects, scientists, and civil servants, and said to be the brightest child of a big family, he opted for a simple life as Lensmann in the rural community of Steigen north of the arctic circle. He died tragically during the brave resistance to the Nazi occupation of Norway during WWII. His youngest daughter Marie Kristine, my mother, became the first girl of the Sund family to graduate from high school.
Olav Kallenberg October 2020 2To me the great music, art, and literature belong to the same category of cultural treasures that we hope will survive any global crises or onslaughts of selfish nationalism.
Every page of this book is inspired by the great music from Bach to Prokofiev, the art from Rembrandt to Chagall, and the literature from Dante to Con Fu. What a privilege to be alive!
vii Words of Wisdom and Folly ♣“A mathematician who argues from probabilities in geometry is not worth an ace” — Socrates (on the demands of rigor in mathematics) ♣“[We will travel a road] full of interest of its own. It familiarizes us with the measurement of variability, and with curious laws of chance that apply to a vast diversity of social subjects” — Francis Galton (on the wondrous world of probability) ♣“God doesn’t play dice” [i.e., there is no randomness in the universe] — Albert Einstein (on quantum mechanics and causality) ♣“It might be possible to prove certain theorems, but they would not be of any interest, since in practice one could not verify whether the assump-tions are fulfilled” — ´ Emile Borel (on why bother with probability) ♣“[The stated result] is a special case of a very general theorem [the strong Markov property]. The measure [theoretic] ideas involved are somewhat glossed over in the proof, in order to avoid complexities out of keeping with the rest of this paper” — Joseph L. Doob (on why bother with generality or mathematical rigor) ♣“Probability theory [has two hands]: On the right is the rigorous [tech-nical work]; the left hand . . . reduces problems to gambling situations, coin-tossing, motions of a physical particle” — Leo Breiman (on proba-bilistic thinking) ♣“There are good taste and bad taste in mathematics just as in music, lit-erature, or cuisine, and one who dabbles in it must stand judged thereby” — Kai Lai Chung (on the art of writing mathematics) ♣“The traveler often has the choice between climbing a peak or using a cable car” — William Feller (on the art of reading mathematics) ♣“A Catalogue Aria of triumphs is of less benefit [to the student] than an indication of the techniques by which such results are achieved” — David Williams (on seductive truths and the need of proofs) ♣“One needs [for stochastic integration] a six months course [to cover only] the definitions. What is there to do?” — Paul-Andr´ e Meyer (on the dilemma of modern math education) ♣“There were very many [bones] in the open valley; and lo, they were very dry. And [God] said unto me, ‘Son of man, can these bones live?’ And I answered, ‘O Lord, thou knowest.’ ” — Ezekiel 37:2–3 (on the reward of hard studies, as quoted by Chris Rogers and David Williams) ix Contents Introduction and Reading Guide — — — 1 I. Measure Theoretic Prerequisites — — — 7 1. Sets and functions, measures and integration 9 2. Measure extension and decomposition 33 3. Kernels, disintegration, and invariance 55 II. Some Classical Probability Theory — — — 79 4. Processes, distributions, and independence 81 5. Random sequences, series, and averages 101 6. Gaussian and Poisson convergence 125 7. Infinite divisibility and general null arrays 147 III. Conditioning and Martingales — — — 161 8. Conditioning and disintegration 163 9. Optional times and martingales 185 10. Predictability and compensation 207 IV. Markovian and Related Structures — — — 231 11. Markov properties and discrete-time chains 233 12. Random walks and renewal processes 255 13. Jump-type chains and branching processes 277 V. Some Fundamental Processes — — — 295 14. Gaussian processes and Brownian motion 297 15. Poisson and related processes 321 16. Independent-increment and L´ evy processes 345 17. Feller processes and semi-groups 367 VI. Stochastic Calculus and Applications — — — 393 18. Itˆ o integration and quadratic variation 395 19. Continuous martingales and Brownian motion 417 20. Semi-martingales and stochastic integration 439 21. Malliavin calculus 465 xi VII. Convergence and Approximation — — — 487 22. Skorohod embedding and functional convergence 489 23. Convergence in distribution 505 24. Large deviations 529 VIII. Stationarity, Symmetry and Invariance — — — 555 25. Stationary processes and ergodic theorems 557 26. Ergodic properties of Markov processes 587 27. Symmetric distributions and predictable maps 611 28. Multi-variate arrays and symmetries 631 IX. Random Sets and Measures — — — 659 29. Local time, excursions, and additive functionals 661 30. Random measures, smoothing and scattering 685 31. Palm and Gibbs kernels, local approximation 709 X. SDEs, Diffusions, and Potential Theory — — — 733 32. Stochastic equations and martingale problems 735 33. One-dimensional SDEs and diffusions 751 34. PDE connections and potential theory 771 35. Stochastic differential geometry 801 Appendices — — — 827 1. Measurable maps — 827, 2. General topology — 828, 3. Linear spaces — 831, 4. Linear operators — 835, 5. Function and measure spaces — 839, 6. Classes and spaces of sets — 843, 7. Differential geometry — 847 Notes and References — — — 853 Bibliography — — — 889 Indices — — — 919 Authors 919, Topics 925, Symbols 943 Foundations of Modern Probability xii Introduction and Reading Guide In my youth, a serious student of probability theory1 was expected to mas-ter the basic notions and results in all areas of the subject2. Since then the development has been explosive, leading to a rather extreme fragmentation and specialization. I believe that it remains essential for any serious probabilist to have a good overview of the entire subject. Everything in this book, from the overall organization to the choice of notation and displays of theorems, has been prepared with this goal in mind.
The first thing you need to know is that the study of any more advanced text in mathematics3 requires an approach different from the usual one. When reading a novel you start reading from page 1, and after a few days or weeks you come to the end, at which time you will be familiar with all the characters and have a good overview of the plot. This approach never works in math, except for the most elementary texts. Instead it is crucial to adopt a top-down approach, where you first try to acquire a general overview, and then gradually work your way down to the individual theorems, until finally you reach the level of proofs and their logical structure.
The inexperienced reader may feel tempted to skip the proofs, trying in-stead a few of the exercises. I would rather suggest the opposite. It is from the proofs you learn how to do math, which may be regarded as a major goal of the graduate studies. If you forgot the precise conditions in the statement of a theorem, you can always look them up, but the only way to learn how to prove your own theorems is to gather experience by studying dozens or hundreds of proofs. Here again it is important to adopt a top-down approach, always starting to look for the crucial ideas that make the proof ‘work’, and then gradually breaking down the argument into minor details and eventually perhaps some calculation. Some details are often left to the reader4, suggesting an abundance of useful and instructive exercises, to identify and fill in all the little gaps implicit in the text.
The book has now been divided into ten parts of 3–4 chapters each, and I think it is essential for any serious probabilist to be familiar with at least some material in each of those parts. Every part is preceded by a short introduction, where I indicate the contents of the included chapters and make suggestions about the choice of material to study in further detail. The precise selection may be less important, since you can always return to the omitted material when need arises. Every chapter also begins with a short introduction, high-lighting some of the main results and indicating connections to material in 1henceforth regarded as synonymous with the theory of stochastic processes 2When I was a student, we had no graduate courses, only a reading list, and I remember studying a whole summer for my last oral exam, covering every theorem and proof in Lo eve (1963) plus half of Doob (1953), totaling some 1,000 pages.
3What is advanced to some readers may be elementary to others, depending on background.
4or else even the simplest proof would look like a lengthy computer program, and you would lose your overview 1 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 2 Foundations of Modern Probability other chapters. You never need to worry about making your selection self-contained, provided you are willing occasionally to use theorems whose proofs you are not yet familiar with5. A similar remark applies to the axiom of choice and its various equivalents, which are used freely without comments6.
−−− Let me conclude with some remarks about the contents of the various parts.
Part I gives a rather complete account of the basic measure theory needed throughout the remainder of the book. In my experience, even a year-long course in real analysis may be inadequate for our purposes, since many theo-rems of special importance in probability theory may not even be mentioned in standard real analysis texts, where the emphasis is often rather different.
When teaching graduate courses in advanced probability, then regardless of background7 of the students enrolled, I am always beginning with a few weeks of general measure theory, where the students will also get used to my top-down approach.
Part II provides the measure-theoretic foundations of probability theory, involving the notions of processes, distributions, and independence. Here we also develop the classical limit theory for random sums and averages, including the law of large numbers and the central limit theorem, along with a wealth of extensions and ramifications. Finally, we give a comprehensive treatment of the classical limit theory for null arrays of random variables and vectors, which will later play a basic role for the discussion of L´ evy and related processes. Much of the discussion is based on the theory of characteristic functions8 and Laplace transforms, which will later regain importance in connection with continuous martingales, random measures, and potential theory.
Modern probability theory can be said to begin with the closely related theories of conditioning, martingales, and compensation, which form the sub-jects of Part III. The importance of this material can hardly be overstated, since methods based on conditioning are constantly used throughout probabil-ity theory, and martingale theory provides some of the basic tools for proving a wide range of limit theorems and path properties. Indeed, the convergence and regularization theorem for continuous-time martingales has been regarded as the single most important result in all of probability theory. The theory of compensators, based on the powerful Doob–Meyer decomposition, plays an equally basic role in the study of processes with jump discontinuities.
Beside martingales, the classes of Markov and related processes represent 5A prime example is the existence of Lebesgue measure, underlying much of modern probability theory, which is easy to believe and may be accepted on faith, though all standard proofs I am aware of, some presented in Chapter 2, are quite tricky and non-intuitive.
6Purists may note that we are implicitly adopting ZFC as the axiomatic foundation of modern probability theory. Nobody else ever needs to worry.
7I am no longer surprised when the students tell me things like ‘Fubini’s theorem we didn’t have time to cover’.
8It is a mistake to dismiss the theories of characteristics functions and classical null arrays, once regarded as core probability, as a technical nuisance that we can safely avoid.
Introduction and Reading Guide 3 the most important dependence structures in probability theory, generalizing the deterministic notion of dynamical systems. Though first introduced in Part IV, they will form a recurrent theme throughout the remainder of the book.
After a general discussion of the general Markov property, we specialize to the classical theories of discrete- and continuous-time Markov chains, paying special attention to the theories of random walks and renewal theory, before providing a brief discussion of branching processes. The results in this part, many of fundamental importance in their own right, may also serve as moti-vations for the theories of L´ evy and Feller processes in later chapters, as well as for the regenerative processes and their local time.
The Brownian motion and Poisson processes9 constitute the basic building blocks of probability theory, whose theory is covered by the initial chapters of Part V. The remaining chapters deal with the equally fundamental L´ evy and Feller processes. To see the connections, we note that Feller processes form a broad class of Markov processes, general enough for most applications, and yet regular enough for technical convenience. Specializing to the space-homogeneous case yields the class of L´ evy processes, which may also be charac-terized as processes with stationary, independent increments, thus forming the continuous-time counterparts of random walks. The classical L´ evy-Khinchin formula gives the basic representation of L´ evy processes in terms of a Brown-ian motion and a Poisson process describing the jump structure. All facets of this theory are clearly of fundamental importance.
Stochastic calculus is another core area10 of modern probability that we can’t live without. The subject, here covered by Part VI, can be regarded as a continuation of the martingale theory from Part III. In fact, the celebrated Itˆ o formula is essentially a transformation rule for continuous or more general semi-martingales. We begin with the relatively elementary case of continu-ous integrators, analysing a detail some important consequences for Brownian motion, before moving on to the more subtle theory for processes with discon-tinuities. Still more subtle is the powerful stochastic analysis on Wiener space, also known as Malliavin calculus.
Every serious probability student needs to be well acquainted with the theory of convergence in distribution in function spaces, covered by Part VII.
Here the most natural and intuitive approach is via the powerful Skorohod embedding, which yields the celebrated Donsker theorem and its ramifications, along with various functional versions of the law of the iterated logarithm.
More general results may occasionally require some quite subtle compactness arguments based on Prohorov’s theorem, extending to a powerful theory for distributional convergence of random measures. Somewhat similar in spirit is the theory of large deviations, which may be regarded as a subtle extension of the weak law of large numbers.
Stationarity, defined as invariance in distribution under shifts, may be re-9The theory of Poisson and related processes has often been neglected by textbook au-thors. Its fundamental importance should be clear from material in later chapters.
10It is often delegated to separate courses, though it truly belongs to the basic package.
4 Foundations of Modern Probability garded as yet another basic dependence structure11 of probability theory, beside those of martingales and Markov processes. Its theory with ramifications are the subjects of Part VIII. Here the central result is the powerful ergodic theo-rem, generalizing the strong law of large numbers, which admits some equally remarkable extensions to broad classes of Markov processes. This part also contains a detailed discussion of some other invariance structures, leading in particular to the powerful predictable sampling and mapping theorems.
Part IX begins with a detailed exposition of excursion theory, a powerful extension of renewal theory, involving a basic description of the entire excur-sion structure of a regenerative process in terms of a Poisson process, on the time scale given by the associated local time. Three totally different approaches to local time are discussed, each providing its share of valuable insight. The remaining chapters of this part deal with various aspects of random measure theory12, including some basic results for Palm measures and a variety of fun-damental limit theorems for particle systems.
We may also mention some important connections to statistical mechanics.
The final Part X begins with a detailed discussion of stochastic differential equations (SDEs), which play the same role in the presence of noise as do the ODEs for deterministic dynamical systems. Of special importance is the char-acterization of solutions in terms of martingale problems. The solutions to the simplest SDEs are continuous strong Markov processes, even called diffusions, and the subsequent chapter includes a detailed account of such processes. We will also highlight the close connection between Markov processes and poten-tial theory, showing in particular how some problems in classical potential theory admit solutions in terms of a Brownian motion. We conclude with an introduction to the beautiful and illuminating theory of stochastic differential geometry.
11This is another subject that has often been omitted from basic graduate-text packages, where it truly belongs.
12Yet another neglected topic. Random measures arguably constitute an area of fundamen-tal importance. Though they have been subject to an intense development for three quarters of a century, their theory remains virtually unknown among mainstream probabilists.
Introduction and Reading Guide 5 We conclude with a short list of some commonly used notation. A more comprehensive list will be found at the end of the book.
N = {1, 2, . . .}, Z+ = {0, 1, 2, . . .}, R+ = [0, ∞), ¯ R = [−∞, ∞], (S, S, ˆ S): localized Borel space, classes of measurable or bounded sets, S+: class of S-measurable functions f ≥0, S(n): non-diagonal part of Sn, MS, ˆ MS: classes of locally finite or normalized measures on S, NS, ˆ NS: classes of integer-valued and bounded measures in MS, M∗ S, N ∗ S: classes of diffuse measures in MS and simple ones in NS, G, λ: measurable group with Haar measure, Lebesgue measure on R, δsB = 1B(s) = 1{s ∈B}: unit mass at s and indicator function of B, μf = f dμ, (f · μ)g = μ(fg), (μ ◦f −1)g = μ(g ◦f), 1Bμ = 1B · μ, μ(n): for μ ∈N ∗ S, the restriction of μn to S(n), (θrμ)f = μ(f ◦θr) = μ(ds)f(rs), (ν ⊗μ)f = ν(ds) μs(dt)f(s, t), (νμ)f = ν(ds) μs(dt)f(t), (μ ∗ν)f = μ(dx) ν(dy)f(x + y), (Eξ)f = E(ξf), E(ξ|F)g = E(ξg|F), L( ), L( | )s, L( ∥)s: distribution, conditional or Palm distribution, Cξf = E μ≤ξ f(μ, ξ −μ) with bounded μ, ⊥ ⊥, ⊥ ⊥F: independence and conditional independence given F, d =, d →: equality and convergence in distribution, w →, v →, u →: weak, vague, and uniform convergence, wd − →, vd − →: weak or vague convergence in distribution, ∥f∥, ∥μ∥: supremum of |f| and total variation of μ.
I. Measure Theoretic Prerequisites Modern probability theory is technically a branch of real analysis, and any serious student needs a good foundation in measure theory and basic functional analysis, before moving on to the probabilistic aspects. Though a solid back-ground in classical real analysis may be helpful, our present emphasis is often somewhat different, and many results included here may be hard to find in the standard textbook literature. We recommend the reader to be thoroughly familiar with especially the material in Chapter 1. The hurried or impatient reader might skip Chapters 2–3 for the moment, and return for specific topics when need arises.
−−− 1. Sets and functions, measures and integration. This chapter con-tains the basic notions and results of measure theory, needed throughout the remainder of the book. Statements used most frequently include the monotone-class theorem, the monotone and dominated convergence theorems, Fubini’s theorem, criteria for Lp-convergence, and the H¨ older and Minkowski inequali-ties. The reader should also be familiar with the notion of Borel spaces, which provides the basic setting throughout the book.
2. Measure extension and decomposition. Here we include some of the more advanced results of measure theory of frequent use, such as the exis-tence of Lebesgue and related measures, the Lebesgue decomposition and dif-ferentiation theorem, atomic decompositions of measures, the Radon–Nikodym theorem, and the Riesz representation. We also explore the relationship be-tween signed measures on the line and functions of locally bounded variation.
Though a good familiarity with the results is crucial, many proofs are rather technical and might be skipped on the first way through.
3. Kernels, disintegration, and invariance. Though kernels are hardly mentioned in most real analysis texts, they play a fundamental role throughout probability theory. Any serious reader needs to be well familiar with the basic kernel properties and operations, as well as with the existence theorems for single and dual disintegration. The quoted results on invariant measures and disintegration, including the theory of Haar measures, will be needed only for special purposes and might be skipped on a first reading, with a possible return when need arises.
7 Chapter 1 Sets and Functions, Measures and Integration Sigma-fields, measurable functions, monotone-class theorem, Polish and Borel spaces, convergence and limits, measures and integration, mono-tone and dominated convergence, transformation of integrals, null sets and completion, product measures and Fubini’s theorem, H¨ older and Minkowski inequalities, Lp-convergence, L2-projection, regularity and approximation Though originating long ago from some simple gambling problems, probability theory has since developed into a sophisticated area of real analysis, depending profoundly on some measure theory already for the basic definitions.
This chapter covers most of the standard measure theory needed to get started.
Soon you will need more, which is why we include two further chapters to cover some more advanced topics. At that later stage, even some basic functional analysis will play an increasingly important role, where the underlying notions and results are summarized without proofs in Appendices 3–4.
In this chapter we give an introduction to basic measure theory, includ-ing the notions of σ-fields, measurable functions, measures, and integration.
Though our treatment, here and in later chapters, is essentially self-contained, we do rely on some basic notions and results from elementary real analysis, including the notions of limits and continuity, and some metric topology in-volving open and closed sets, completeness, and compactness. Many results covered here will henceforth be used in subsequent chapters without explicit reference. This applies especially to the monotone and dominated convergence theorems, Fubini’s theorem, and the H¨ older and Minkowski inequalities.
Though a good background in real analysis may be an advantage, the hur-ried reader is warned against skipping the measure-theoretic chapters without at least a quick review of the included results, many of which may be hard to find in the standard textbook literature. This applies in particular to the monotone-class theorem and the notion of Borel spaces in this chapter, the atomic decomposition in Chapter 2, and the theory of kernels, disintegration, and invariant measures in Chapter 3.
To fix our notation, we begin with some elementary notions from set theory.
For subsets A, Ak, B, . . . of an abstract space S, recall the definitions of union A ∪B or k Ak, intersection A ∩B or k Ak, complement Ac, and difference A \ B = A ∩Bc. The latter is said to be proper if A ⊃B. The symmetric 9 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 10 Foundations of Modern Probability difference of A and B is given by A△B = (A \ B) ∪(B \ A). Among basic set relations, we note in particular the distributive laws A ∩∪ k Bk = ∪ k (A ∩Bk), A ∪∩ k Bk = ∩ k (A ∪Bk), and de Morgan’s laws ∪ k Ak c = ∩ k Ac k, ∩ k Ak c = ∪ k Ac k, valid for arbitrary (not necessarily countable) unions and intersections. The latter formulas allow us to convert any relation involving unions (intersections) into the dual formula for intersections (unions).
We define a σ-field 1 in S as a non-empty collection S of subsets of S closed under countable unions and intersections2 as well as under complementation.
Thus, if A, A1, A2, . . . ∈S, then also Ac, k Ak, and k Ak lie in S. In partic-ular, the whole space S and the empty set ∅belong to every σ-field. In any space S, there is a smallest σ-field {∅, S} and a largest one 2S, consisting of all subsets of S. Note that every σ-field S is closed under monotone limits, so that if A1, A2, . . . ∈S with An ↑A or An ↓A, then also A ∈S. A measurable space is a pair (S, S), where S is a space and S is a σ-field in S.
For any class of σ-fields in S, the intersection, but usually not the union, is again a σ-field. If C is an arbitrary class of subsets of S, there is a smallest σ-field in S containing C, denoted by σ(C) and called the σ-field generated or induced by C. Note that σ(C) can be obtained as the intersection of all σ-fields in S containing C. We endow a metric or topological space S with its Borel σ-field BS generated by the topology3 in S, unless a σ-field is otherwise specified.
The elements of BS are called Borel sets. When S is the real line R, we often write B instead of BR.
More primitive classes than σ-fields often arise in applications. A class C of subsets of a space S is called a π-system if it is closed under finite intersections, so that A, B ∈C implies A ∩B ∈C. Furthermore, a class D is a λ-system if it contains S and is closed under proper differences and increasing limits. Thus, we require S ∈D, that A, B ∈D with A ⊃B implies A \ B ∈D, and that A1, A2, . . . ∈D with An ↑A implies A ∈D.
The following monotone-class theorem is useful to extend an established property or relation from a class C to the generated σ-field σ(C). An application of this result is referred to as a monotone-class argument.
Theorem 1.1 (monotone classes, Sierpi´ nski) For any π-system C and λ-system D in a space S, we have C ⊂D ⇒ σ(C) ⊂D.
1also called a σ-algebra 2For a field or algebra, we require closure only under finite set operations.
3class of open subsets 1. Sets and Functions, Measures and Integration 11 Proof: First we note that a class C is a σ-field iffit is both a π-system and a λ-system. Indeed, if C has the latter properties, then A, B ∈C implies Ac = S \ A ∈C and A ∪B = (Ac ∩Bc)c ∈C. Next let A1, A2, . . . ∈C, and put A = n An. Then Bn ↑A, where Bn = k≤n Ak ∈C, and so A ∈C. Finally, n An = ( n Ac n)c ∈C.
For the main assertion, we may clearly assume that D = λ(C), defined as the smallest λ-system containing C. It suffices to show that D is also a π-system, since it is then a σ-field containing C and therefore contains the smallest σ-field σ(C) with this property. Thus, we need to show that A ∩B ∈D whenever A, B ∈D.
The relation A ∩B ∈D holds when A, B ∈C, since C is a π-system con-tained in D. We proceed by extension in two steps. First we fix any B ∈C, and define SB = {A ⊂S; A ∩B ∈D}.
Then SB is a λ-system contain-ing C, and so it contains the smallest λ-system D with this property. Thus, A ∩B ∈D for any A ∈D and B ∈C. Next we fix any A ∈D, and define S′ A = {B ⊂S; A ∩B ∈D}. As before, we note that even S′ A contains D, which yields the desired property.
2 For any family of spaces St, t ∈T, we define the Cartesian product Xt∈TSt as the class of all collections {st; t ∈T}, where st ∈St for all t.
When T = {1, . . . , n} or T = N = {1, 2, . . .}, we often write the product space as S1 × · · · × Sn or S1 × S2 × · · · , respectively, and if St = S for all t, we use the notation ST, Sn, or S∞. For topological spaces St, we endow XtSt with the product topology, unless a topology is otherwise specified.
Now assume that each space St is equipped with a σ-field St. In XtSt we may then introduce the product σ-field t St, generated by all one-dimensional cylinder sets4 At × Xs̸=tSs, where t ∈T and At ∈St. As before, we write in appropriate cases S1 ⊗· · · ⊗Sn, S1 ⊗S2 ⊗· · · , ST, Sn, S∞.
Lemma 1.2 (product σ-field) For any separable metric spaces S1, S2, . . . , we have B(S1 × S2 × · · ·) = BS1 ⊗BS2 ⊗· · · .
Thus, for countable products of separable metric spaces, the product and Borel σ-fields agree. In particular, BRd = (BR)d = Bd is the σ-field generated by all rectangular boxes I1 × · · · × Id, where I1, . . . , Id are arbitrary real inter-vals. This special case can also be proved directly.
Proof: The assertion can be written as σ(C1) = σ(C2), where C1 is the class of open sets in S = S1 × S2 ×· · · and C2 is the class of cylinder sets of the form S1 × · · · × Sk−1 × Bk × Sk+1 × · · · with Bk ∈BSk. Now let C be the subclass of such cylinder sets, with Bk open in Sk. Then C is a topological base in S, and since S is separable, every set in C1 is a countable union of sets in C. Hence, 4Note the analogy with the definition of product topologies.
12 Foundations of Modern Probability C1 ⊂σ(C). Furthermore, the open sets in Sk generate BSk, and so C2 ⊂σ(C).
Since trivially C lies in both C1 and C2, we get σ(C1) ⊂σ(C) ⊂σ(C2) ⊂σ(C) ⊂σ(C1), and so equality holds throughout, proving the asserted relation.
2 Every point mapping f : S →T induces a set mapping f −1 : 2T →2S in the opposite direction, given by f −1B = s ∈S; f(s) ∈B , B ⊂T.
Note that f −1 preserves the basic set operations, in the sense that, for any subsets B and Bk of T, f −1Bc = (f −1B)c, f −1∪n Bn = ∪n f −1Bn, f −1∩n Bn = ∩n f −1Bn.
(1) We show that f −1 also preserves σ-fields, in both directions. For convenience, we write f −1C = f −1B; B ∈C , C ⊂2T.
Lemma 1.3 (induced σ-fields) For any mapping f between measurable spaces (S, S) and (T, T ), we have (i) S′ = f −1T is a σ-field in S, (ii) T ′ = B ⊂T; f −1B ∈S is a σ-field in T.
Proof: (i) If A, A1, A2, . . . ∈S′, there exist some sets B, B1, B2, . . . ∈T with A = f −1B and An = f −1Bn for all n. Since T is a σ-field, the sets Bc, n Bn, and n Bn all belong to T , and (1) yields Ac = (f −1B)c = f −1Bc, ∪n An = ∪n f −1Bn = f −1∪n Bn, ∩n An = ∩n f −1Bn = f −1∩n Bn, which all lie in f −1T = S′.
(ii) Let B, B1, B2, . . . ∈T ′, so that f −1B, f −1B1, f −1B2, . . . ∈S. Using (1) and the fact that S is a σ-field, we get f −1Bc = (f −1B)c, f −1∪n Bn = ∪n f −1Bn, f −1∩n Bn = ∩n f −1Bn, which all belong to S. Thus, Bc, n Bn, and n Bn all lie in T ′.
2 1. Sets and Functions, Measures and Integration 13 For any measurable spaces (S, S) and (T, T ), a mapping f : S →T is said to be S/T -measurable or simply measurable5 if f −1T ⊂S, that is, if f −1B ∈S for every B ∈T . It is enough to verify the defining condition for a suitable subclass: Lemma 1.4 (measurable functions) Let f be a mapping between measurable spaces (S, S) and (T, T ), and let C ⊂2T with σ(C) = T . Then f is S/T -measurable ⇔ f −1C ⊂S.
Proof: Let T ′ = {B ⊂T; f −1B ∈S}. Then C ⊂T ′ by hypothesis, and T ′ is a σ-field by Lemma 1.3 (ii). Hence, T ′ = σ(T ′) ⊃σ(C) = T , which shows that f −1B ∈S for all B ∈T .
2 Lemma 1.5 (continuity and measurability) For a map f between topological spaces S, T with Borel σ-fields S, T , we have f is continuous ⇒ f is S/T -measurable.
Proof: Let S′ and T ′ be the classes of open sets in S and T. Since f is continuous and S = σ(S′), we have f −1T ′ ⊂S′ ⊂S. By Lemma 1.4 it follows that f is S/σ(T ′)-measurable. It remains to note that σ(T ′) = T .
2 We insert a result about subspace topologies and σ-fields, needed in Chapter 23. Given a class C of subsets of S and a set A ⊂S, we define A ∩C = {A ∩C; C ∈C}.
Lemma 1.6 (subspaces) Let (S, ρ) be a metric space with topology T and Borel σ-field S, and let A ⊂S. Then the induced topology and Borel σ-field in (A, ρ) are given by TA = A ∩T , SA = A ∩S.
Proof: The natural embedding πA : A →S is continuous and hence mea-surable, and so A ∩T = π−1 A T ⊂TA and A ∩S = π−1 A S ⊂SA. Conversely, for any B ∈TA, we define G = (B ∪Ac)o, where the complement and interior are with respect to S, and note that B = A ∩G. Hence, TA ⊂A ∩T , and therefore SA = σ(TA) ⊂σ(A ∩T ) ⊂σ(A ∩S) = A ∩S, where the operation σ(·) refers to the subspace A.
2 As with continuity, even measurability is preserved by composition.
5Note the analogy with the definition of continuity in terms of topologies on S, T.
14 Foundations of Modern Probability Lemma 1.7 (composition) For maps f : S →T and g : T →U between the measurable spaces (S, S), (T, T ), (U, U), we have f, g are measurable ⇒ h = g ◦f is S/U -measurable.
Proof: Let C ∈U, and note that B ≡g−1C ∈T since g is measurable.
Noting that (g◦f)−1 = f −1◦g−1, and using the fact that even f is measurable, we get h−1C = (g ◦f)−1C = f −1g−1C = f −1B ∈S.
2 For many results in measure and probability theory, it is convenient first to give a proof for the real line, and then extend to more general spaces. Say that the measurable spaces S and T are Borel isomorphic, if there exists a bijection f : S ↔T such that both f and f −1 are measurable. A measurable space S is said to be Borel if it is Borel isomorphic to a Borel set in [0, 1]. We show that every Polish space6 endowed with its Borel σ-field is Borel.
Theorem 1.8 (Polish and Borel spaces) For any Polish space S, we have (i) S is homeomorphic to a Borel set in [0, 1]∞, (ii) S is Borel isomorphic to a Borel set in [0, 1].
Proof: (i) Fix a complete metric ρ in S.
We may assume that ρ ≤1, since we can otherwise replace ρ by the equivalent metric ρ ∧1, which is again complete. Since S is separable, we may choose a dense sequence x1, x2, . . . ∈S.
Then the mapping x → ρ(x, x1), ρ(x, x2), . . . , x ∈S, defines a homeomorphic embedding of S into the compact space K = [0, 1]∞, and we may regard S as a subset of K. In K we introduce the metric d(x, y) = Σ n 2−n|xn −yn|, x, y ∈K, and define ¯ S as the closure of S in K.
Writing |Bε x|ρ for the ρ -diameter of the d-ball Bε x = {y ∈S; d(x, y) < ε} in S, we define Un(ε) = x ∈¯ S; |Bε x|ρ < n−1 , ε > 0, n ∈N, and put Gn = ε Un(ε). The Gn are open in ¯ S, since x ∈Un(ε) and y ∈¯ S with d(x, y) < ε/2 implies y ∈Un(ε/2), and S ⊂Gn for each n by the equivalence of the metrics ρ and d. This gives S ⊂˜ S ⊂¯ S, where ˜ S = n Gn.
For any x ∈˜ S, we may choose some x1, x2, . . . ∈S with d(x, xn) →0. By the definitions of Un(ε) and Gn, the sequence (xk) is Cauchy even for ρ, and 6A topological space is said to be Polish if it is separable and admits a complete metrization.
1. Sets and Functions, Measures and Integration 15 so by completeness ρ(xk, y) →0 for some y ∈S. Since ρ and d are equivalent on S, we have even d(xk, y) →0, and therefore x = y. This gives ˜ S ⊂S, and so ˜ S = S. Finally, ˜ S is a Borel set in K, since the Gn are open subsets of the compact set ¯ S.
(ii) Write 2∞for the countable product {0, 1}∞, and let B denote the subset of binary sequences with infinitely many zeros. Then x = Σn xn2−n defines a 1−1 correspondence between I = [0, 1) and B. Since x →(x1, x2, . . .) is clearly bi-measurable, I and B are Borel isomorphic, written as I ∼B. Furthermore, Bc = 2∞\ B is countable, so that Bc ∼N, which implies B ∪N ∼B ∪N ∪(−N) ∼B ∪Bc ∪N = 2∞∪N, and hence I ∼B ∼2∞. This gives I∞∼(2∞)∞∼2∞∼I, and the assertion follows by (i).
2 For any Borel space (S, S), we may specify a localizing sequence Sn ↑S in S, and say that a set B ⊂S is bounded, if B ⊂Sn for some n ∈N. The ring of bounded sets B ∈S will be denoted by ˆ S. If ρ is a metric generating S, we may choose ˆ S to consist of all ρ -bounded sets. In locally compact spaces S, we may choose ˆ S as the class of sets B ∈S with compact closure ¯ B.
By a dissection system in S we mean a nested sequence of countable par-titions (Inj) ⊂ˆ S generating S, such that for any fixed n ∈N, every bounded set is covered by finitely many sets Inj. For topological spaces S, we require in addition that, whenever s ∈G ⊂S with G open, we have s ∈Inj ⊂G for some n, j ∈N. A class C ⊂ˆ S is said to be dissecting, if every open set G ⊂S is a countable union of sets in C, and every set B ∈ˆ S is covered by finitely many sets in C.
To state the next result, we note that any collection of functions fi : Ω →Si, i ∈I, defines a mapping f = (fi) from Ω to XiSi, given by f(ω) = fi(ω); i ∈I , ω ∈Ω.
(2) We may relate the measurability of f to that of the coordinate mappings fi.
Lemma 1.9 (coordinate functions) For any measurable spaces (Ω, A), (Si, Si) and functions fi : Ω →Si, i ∈I, define f = (fi) : Ω →XiSi. Then these conditions are equivalent: (i) f is A / i Si -measurable, (ii) fi is A/Si -measurable for every i ∈I.
Proof: Use Lemma 1.4 with C equal to the class of cylinder sets Ai×Xj̸=iSj, for arbitrary i ∈I and Ai ∈Si.
2 16 Foundations of Modern Probability Changing our perspective, let the fi in (2) map Ω into some measurable spaces (Si, Si). In Ω we may then introduce the generated or induced σ-field σ(f) = σ{fi; i ∈I}, defined as the smallest σ-field in Ω making all the fi measurable. In other words, σ(f) is the intersection of all σ-fields A in Ω, such that fi is A/St-measurable for every i ∈I. In this notation, the functions fi are clearly measurable with respect to a σ-field A in Ω iffσ(f) ⊂A. Further note that σ(f) agrees with the σ-field in Ω generated by the collection {f −1 i Si; i ∈ I}.
For functions on or into a Euclidean space Rd, measurability is understood to be with respect to the Borel σ-field Bd. Thus, a real-valued function f on a measurable space (S, S) is measurable iff{s; f(s) ≤x} ∈S for all x ∈R. The same convention applies to functions into the extended real line ¯ R = [−∞, ∞] or the extended half-line ¯ R+ = [0, ∞], regarded as compactifications of R and R+ = [0, ∞), respectively. Note that B¯ R = σ{B, ±∞} and B¯ R+ = σ{BR+, ∞}.
For any set A ⊂S, the associated indicator function7 1A : S →R is defined to equal 1 on A and 0 on Ac. For sets A = {s ∈S; f(s) ∈B}, it is often convenient to write 1{·} instead of 1{·}. When S is a σ-field in S, we note that 1A is S-measurable iffA ∈S.
Linear combinations of indicator functions are called simple functions. Thus, every simple function f : S →R has the form f = c11A1 + · · · + cn1An, where n ∈Z+ = {0, 1, . . .}, c1, . . . , cn ∈R, and A1, . . . , An ⊂S. Here we may take c1, . . . , cn to be the distinct non-zero values attained by f, and define Ak = f −1{ck} for all k. This makes f measurable with respect to a given σ-field S in S iffA1, . . . , An ∈S.
The class of measurable functions is closed under countable limiting oper-ations: Lemma 1.10 (bounds and limits) If the functions f1, f2, . . .: (S, S) →¯ R are measurable, then so are the functions sup n fn, inf n fn, lim sup n→∞fn, lim inf n→∞fn.
Proof: To see that supn fn is measurable, write s; sup n fn(s) ≤t = ∩ n≥1 s; fn(s) ≤t = ∩ n≥1f −1 n [−∞, t] ∈S, and use Lemma 1.4. For the other three cases, write infn fn = −supn(−fn), and note that lim sup n→∞fn = inf n≥1 sup k≥n fk, lim inf n→∞fn = sup n≥1 inf k≥n fk.
2 7The term characteristic function should be avoided, since it has a different meaning in probability.
1. Sets and Functions, Measures and Integration 17 Since fn →f ifflim supn fn = lim infn fn = f, it follows easily that both the set of convergence and the possible limit are measurable. This extends to functions taking values in more general spaces: Lemma 1.11 (convergence and limits) Let f1, f2, . . . be measurable functions from a measurable space (Ω, A) into a metric space (S, ρ). Then (i) ω; fn(ω) converges ∈A when S is separable and complete, (ii) fn →f on Ω implies that f is measurable.
Proof: (i) Since S is complete, convergence of fn is equivalent to the Cauchy convergence lim n→∞sup m≥n ρ(fm, fn) = 0, where the left-hand side is measurable by Lemmas 1.2, 1.5, and 1.10.
(ii) If fn →f, we have g ◦fn →g ◦f for any continuous function g: S →R, and so g◦f is measurable by Lemmas 1.5 and 1.10. Fixing any open set G ⊂S, we may choose some continuous functions g1, g2, . . . : S →R+ with gn ↑1G, and conclude from Lemma 1.10 that 1G ◦f is measurable. Thus, f −1G ∈A for all G, and so f is measurable by Lemma 1.4.
2 Many results in measure theory can be proved by a simple approximation, based on the following observation.
Lemma 1.12 (simple approximation) For any measurable function f ≥0 on (S, S), there exist some simple, measurable functions f1, f2, . . . : S →R+ with fn ↑f.
Proof: We may define fn(s) = 2−n[2nf(s)] ∧n, s ∈S, n ∈N.
2 To illustrate the method, we show that the basic arithmetic operations are measurable.
Lemma 1.13 (elementary operations) If the functions f, g : (S, S) →R are measurable, then so are the functions (i) fg and af + bg, a, b ∈R, (ii) f/g when g ̸= 0 on S.
Proof: By Lemma 1.12 applied to f± = (±f) ∨0 and g± = (±g) ∨0, we may approximate by simple measurable functions fn →f and gn →g. Here afn +bgn and fngn are again simple measurable functions. Since they converge to af + bg and fg, respectively, the latter functions are again measurable by Lemma 1.10. The same argument applies to the ratio f/g, provided we choose gn ̸= 0.
18 Foundations of Modern Probability An alternative argument is to write af + bg, fg, or f/g as a composition ψ ◦ϕ, where ϕ = (f, g) : S →R2, and ψ(x, y) is defined as ax + by, xy, or x/y, repectively. The desired measurability then follows by Lemmas 1.2, 1.5, 1.9, and 1.10. In case of ratios, we may use the continuity of the mapping (x, y) →x/y on R × (R \ {0}).
2 We proceed with a functional representation of measurable functions. Given some functions f, g on a common space Ω, we say that f is g -measurable if the induced σ-fields are related by σ(f) ⊂σ(g).
Lemma 1.14 (functional representation, Doob) Let f, g be measurable func-tions from (Ω, A) into some measurable spaces (S, S), (T, T ), where S is Borel.
Then these conditions are equivalent: (i) f is g-measurable, (ii) f = h ◦g for a measurable mapping h: T →S.
Proof: Since S is Borel, we may let S ∈B[0,1]. By a suitable modification of h, we may further reduce to the case where S = [0, 1]. If f = 1A for a g-measurable set A ⊂Ω, Lemma 1.3 yields a set B ∈T with A = g−1B. Then f = 1A = 1B ◦g, and we may choose h = 1B. The result extends by linearity to any simple, g-measurable function f. In general, Lemma 1.12 yields some simple, g-measurable functions f1, f2, . . . with 0 ≤fn ↑f, and we may choose some associated T -measurable functions h1, h2, . . .: T →[0, 1] with fn = hn◦g.
Then h = supn hn is again T -measurable by Lemma 1.10, and we have h ◦g = supn hn) ◦g = supn (hn ◦g) = supn fn = f.
2 Given a measurable space (S, S), we say that a function μ : S →¯ R+ is countably additive if μ ∪ k≥1Ak = Σ k≥1 μAk, A1, A2, . . . ∈S disjoint.
(3) A measure on (S, S) is defined as a countably additive set function μ: S →¯ R+ with μ∅= 0. The triple (S, S, μ) is then called a measure space. From (3) we note that any measure is finitely additive and non-decreasing. This in turn implies the countable sub-additivity μ ∪ k≥1Ak ≤Σ k≥1 μAk, A1, A2, . . . ∈S.
We note some basic continuity properties: Lemma 1.15 (continuity) For any measure μ on (S, S) and sets A1, A2, . . . ∈ S, we have (i) An ↑A ⇒μAn ↑μA, (ii) An ↓A, μA1 < ∞⇒μAn ↓μA.
1. Sets and Functions, Measures and Integration 19 Proof: For (i), we may apply (3) to the differences Dn = An \ An−1 with A0 = ∅. To get (ii), apply (i) to the sets Bn = A1 \ An.
2 The simplest measures on a measurable space (S, S) are the unit point masses or Dirac measures δs, s ∈S, given by δsA = 1A(s). A measure of the form a δs is said to be degenerate. For any countable set A = {s1, s2, . . .}, we may form the associated counting measure μ = Σn δsn. More generally, we may form countable linear combinations of measures on S, as follows.
Proposition 1.16 (sums of measures) For any measures μ1, μ2, . . . on (S, S), the sum μ = Σn μn is again a measure.
Proof: First we note that, for any array of constants cjk ≥0, j, k ∈N, Σ jΣ k cjk = Σ kΣ j cjk.
(4) This is obvious for finite sums. In general, we have for any m, n ∈N Σ jΣ k cjk ≥Σ j ≤m Σ k≤n cjk = Σ k≤n Σ j ≤m cjk.
Letting m →∞and then n →∞, we obtain (4) with inequality ≥. The reverse relation follows by symmetry, and the equality follows.
Now consider any disjoint sets A1, A2, . . . ∈S. Using (4) and the countable additivity of each μn, we get μ∪k Ak = Σ n μ n∪k Ak = Σ nΣ k μnAk = Σ kΣ n μnAk = Σ k μAk.
2 The last result is essentially equivalent to the following: Corollary 1.17 (monotone limits) For any σ-finite measures μ1, μ2, . . . on (S, S), we have μn ↑μ or μn ↓μ ⇒ μ is a measure on S.
Proof: First let μn ↑μ. For any n ∈N, there exists a measure νn such that μn = μn−1 + νn, where μ0 = 0. Indeed, let A1, A2, . . . ∈A be disjoint with μn−1Ak < ∞for all k, and define νn,k = μn −μn−1 on each Ak. Then μ = Σn νn = Σn,k νn,k is a measure on S by Proposition 1.16.
If instead μn ↓μ, then define νn = μ1 −μn as above, and note that νn ↑ν for some measure ν ≤μ1. Hence, μn = μ1 −νn ↓μ1 −ν, which is again a measure.
2 20 Foundations of Modern Probability Any measurable mapping f between two measurable spaces (S, S) and (T, T ) induces a mapping of measures on S into measures on T. More precisely, given any measure μ on (S, S), we may define a measure μ ◦f −1 on (T, T ) by (μ ◦f −1)B = μ(f −1B) = μ s ∈S; f(s) ∈B , B ∈T .
Here the countable additivity of μ ◦f −1 follows from that for μ, together with the fact that f −1 preserves unions and intersections.
It is often useful to identify a measure-determining class C ⊂S, such that a measure on S is uniquely determined by its values on C.
Lemma 1.18 (uniqueness) Let μ, ν be bounded measures on a measurable space (S, S), and let C be a π-system in S with S ∈C and σ(C) = S. Then μ = ν ⇔ μA = νA, A ∈C.
Proof: Assuming μ = ν on C, let D be the class of sets A ∈S with μA = νA.
Using the condition S ∈C, the finite additivity of μ and ν, and Lemma 1.15, we see that D is a λ-system. Moreover, C ⊂D by hypothesis. Hence, Theorem 1.1 yields D ⊃σ(C) = S, which means that μ = ν. The converse assertion is obvious.
2 For any measure μ on (S, S) and set A ∈S, the mapping ν : B →μ(A∩B) is again a measure on (S, S), called the restriction of μ to A and denoted by 1A μ. A measure μ on S is said to be σ-finite, if S is a countable union of disjoint sets An ∈S with μAn < ∞. Since μ = Σ n1Anμ, such a μ is even s-finite, in the sense of being a countable sum of bounded measures. A measure μ on a localized Borel space S is said to be locally finite if μB < ∞for all B ∈ˆ S, and we write MS for the class of such measures μ.
For any measure μ on a topological space S, we define the support supp μ as the set of points s ∈S, such that μB > 0 for every neighborhood B of s.
Note that supp μ is closed and hence Borel measurable.
Lemma 1.19 (support) For any measure μ on a separable, complete metric space S, we have μ(supp μ)c = 0.
Proof: Suppose that instead μ(supp μ)c > 0. Since S is separable, the open set (supp μ)c is a countable union of closed balls of radius < 1. Choose one of them B1 with μB1 > 0, and continue recursively to form a nested sequence of balls Bn of radius < 2−n, such that μBn > 0 for all n. The associated centers xn ∈Bn form a Cauchy sequence, and so by completeness we have convergence xn →x. Then x ∈ n Bn ⊂(supp μ)c and x ∈supp μ, a contradiction proving our claim.
2 1. Sets and Functions, Measures and Integration 21 Our next aim is to define the integral μf = f dμ = f(ω) μ(dω) of a real-valued, measurable function f on a measure space (S, S, μ). First take f to be simple and non-negative, hence of the form c11A1 + · · · + cn1An for some n ∈Z+, A1, . . . , An ∈S and c1, . . . , cn ∈R+, and define8 μf = c1 μA1 + · · · + cn μAn.
Using the finite additivity of μ, we may check that μf is independent of the choice of representation of f.
It is further clear that the integration map f →μf is linear and non-decreasing, in the sense that μ(af + bg) = a μf + b μg, a, b ≥0, f ≤g ⇒ μf ≤μg.
To extend the integral to general measurable functions f ≥0, use Lemma 1.12 to choose some simple measurable functions f1, f2, . . . with 0 ≤fn ↑f, and define μf = limn μfn. We need to show that the limit is independent of the choice of approximating sequence (fn): Lemma 1.20 (consistence) Let f, f1, f2, . . . and g be measurable functions on a measure space (S, S, μ), where all but f are simple. Then 0 ≤fn ↑f 0 ≤g ≤f ⇒μg ≤lim n→∞μfn.
Proof: By the linearity of μ, it is enough to take g = 1A for some A ∈S.
Fixing any ε > 0, we define An = ω ∈A; fn(ω) ≥1 −ε , n ∈N.
Then An ↑A, and so μfn ≥(1 −ε) μAn ↑(1 −ε) μA = (1 −ε) μg.
It remains to let ε →0.
2 The linearity and monotonicity extend immediately to arbitrary f ≥0, since fn ↑f and gn ↑g imply afn + bgn ↑af + bg, and if also f ≤g, then fn ≤(fn ∨gn) ↑g. We prove a basic continuity property of the integral.
Theorem 1.21 (monotone convergence, Levi) For any measurable functions f, f1, f2 . . . on (S, S, μ), we have 0 ≤fn ↑f ⇒ μfn ↑μf.
8The convention 0 · ∞= 0 applies throughout measure theory.
22 Foundations of Modern Probability Proof: For every n we may choose some simple measurable functions gnk, with 0 ≤gnk ↑fn as k →∞. The functions hnk = g1k ∨· · · ∨gnk have the same properties and are further non-decreasing in both indices. Hence, f ≥lim k→∞hkk ≥lim k→∞hnk = fn ↑f, and so 0 ≤hkk ↑f. Using the definition and monotonicity of the integral, we obtain μf = lim k→∞μhkk ≤lim k→∞μfk ≤μf.
2 The last result yields the following key inequality.
Lemma 1.22 (Fatou) For any measurable functions f1, f2, . . . ≥0 on (S, S, μ), we have lim inf n→∞μfn ≥μ lim inf n→∞fn.
Proof: Since fm ≥infk≥n fk for all m ≥n, we have inf k≥n μfk ≥μ inf k≥n fk, n ∈N.
Letting n →∞, we get by Theorem 1.21 lim inf k→∞μfk ≥lim n→∞μ inf k≥n fk = μ lim inf k→∞fk.
2 A measurable function f on (S, S, μ) is said to be integrable if μ|f| < ∞.
Writing f = g −h for some integrable functions g, h ≥0 (e.g., as f+ −f−with f± = (±f) ∨0), we define μf = μg −μh. It is easy to see that the extended integral is independent of the choice of representation f = g −h, and that μf satisfies the basic linearity and monotonicity properties, now with any real coefficients.
The last lemma leads to a general condition, other than the monotonicity in Theorem 1.21, allowing us to take limits under the integral sign, in the sense that fn →f ⇒ μfn →μf.
By the classical dominated convergence theorem, this holds when |fn| ≤g for a measurable function g ≥0 with μg < ∞. The same argument yields a more powerful extended version: Theorem 1.23 (extended dominated convergence, Lebesgue) Let f, f1, f2, . . .
and g, g1, g2, . . . ≥0 be measurable functions on (S, S, μ). Then fn →f |fn| ≤gn →g μgn →μg < ∞ ⎫ ⎬ ⎭⇒μfn →μf.
1. Sets and Functions, Measures and Integration 23 Proof: Applying Fatou’s lemma to the functions gn ± fn ≥0, we get μg + lim inf n→∞(±μfn) = lim inf n→∞μ(gn ± fn) ≥μ(g ± f) = μg ± μf.
Subtracting μg < ∞from each side gives μf ≤lim inf n→∞μfn ≤lim sup n→∞μfn ≤μf.
2 Next we show how integrals are transformed by measurable maps.
Lemma 1.24 (substitution) Given a measure space (Ω, μ), a measurable space S, and some measurable maps f : Ω →S and g: S →R, we have μ(g ◦f) = (μ ◦f −1)g, (5) whenever either side exists9.
Proof: For indicator functions g, (5) reduces to the definition of μ ◦f −1.
From here on, we may extend by linearity and monotone convergence to any measurable function g ≥0. For general g it follows that μ|g ◦f| = (μ◦f −1)|g|, and so the integrals in (5) exist at the same time. When they do, we get (5) by taking differences on both sides.
2 For another basic transformation of measures and integrals, fix a measur-able function f ≥0 on a measure space (S, S, μ), and define a function f · μ on S by (f · μ)A = μ(1Af) = A f dμ, A ∈S, where the last equality defines the integral over A. Then clearly ν = f · μ is again a measure on (S, S). We refer to f as the μ -density of ν. The associated transformation rule is the following.
Lemma 1.25 (chain rule) For any measure space (S, S, μ) and measurable functions f : S →R+ and g: S →R, we have μ(fg) = (f · μ)g, whenever either side exists.
Proof: As before, we may first consider indicator functions g, and then extend in steps to the general case.
2 Given a measure space (S, S, μ), we say that A ∈S is μ -null or simply null if μA = 0. A relation between functions on S is said to hold almost everywhere with respect to μ (written as a.e. μ or μ -a.e.) if it holds for all s ∈S outside a μ -null set. The following frequently used result explains the relevance of null sets.
9meaning that if one side exists, then so does the other and the two are equal 24 Foundations of Modern Probability Lemma 1.26 (null sets and functions) For any measurable function f ≥0 on a measure space (S, S, μ), we have μf = 0 ⇔ f = 0 a.e. μ.
Proof: This is obvious when f is simple. For general f, we may choose some simple measurable functions fn with 0 ≤fn ↑f, and note that f = 0 a.e. ifffn = 0 a.e. for every n, that is, iffμfn = 0 for all n. Since the latter integrals converge to μf, the last condition is equivalent to μf = 0.
2 The last result shows that two integrals agree when the integrands are a.e.
equal. We may then allow the integrands to be undefined on a μ -null set. It is also clear that the conclusions of Theorems 1.21 and 1.23 remain valid, if the hypotheses are only fulfilled outside a null set.
In the other direction, we note that if two σ-finite measures μ, ν are related by ν = f ·μ for a density f, then the latter is μ -a.e. unique, which justifies the notation f = dν/dμ. It is further clear that any μ -null set is also a null set for ν. For measures μ, ν with the latter property, we say that ν is absolutely continuous with respect to μ, and write ν ≪μ. The other extreme case is when μ, ν are mutually singular, written as μ ⊥ν, in the sense that μA = 0 and νAc = 0 for some set A ∈S.
Given any measure space (S, S, μ) and σ-field F ⊂S, we define the μ-completion of F in S as the σ-field F μ = σ(F, Nμ), where Nμ is the class of subsets of arbitrary μ-null sets in S. The description of Fμ can be made more explicit: Lemma 1.27 (completion) For any measure space (Ω, A, μ), σ-field F ⊂A, Borel space (S, S), and function f : Ω →S, these conditions are equivalent: (i) f is Fμ-measurable, (ii) f = g a.e. μ for an F-measurable function g.
Proof: Beginning with indicator functions, let G be the class of subsets A ⊂Ω with A△B ∈Nμ for some B ∈F. Then A \ B and B \ A again lie in Nμ, and so G ⊂F μ. Conversely, Fμ ⊂G since both F and Nμ are trivially contained in G. Combining the two relations gives G = Fμ, which shows that A ∈F μ iff1A = 1B a.e. for some B ∈F.
In general we may take S = [0, 1]. For any Fμ-measurable function f, we may choose some simple Fμ-measurable functions fn with 0 ≤fn ↑f. By the result for indicator functions, we may next choose some simple F-measurable functions gn, such that fn = gn a.e. for all n. Since a countable union of null sets is again null, the function g = lim supn gn has the desired property.
2 Every measure μ on (S, S) extends uniquely to the σ-field Sμ. Indeed, for any A ∈Sμ, Lemma 1.27 yields some sets A± ∈S with A−⊂A ⊂A+ and μ(A+ \ A−) = 0, and any extension satisfies μA = μA±. With this choice, it is easy to check that μ remains a measure on Sμ.
1. Sets and Functions, Measures and Integration 25 We proceed to construct product measures and establish conditions allow-ing us to change the order of integration. This requires a technical lemma of independent importance.
Lemma 1.28 (sections) For any measurable spaces (S, S), (T, T ), measur-able function f : S × T →R+, and σ-finite measure μ on S, we have (i) f(s, t) is S-measurable in s ∈S for fixed t ∈T, (ii) f(s, t) μ(ds) is T -measurable in t ∈T.
Proof: We may take μ to be bounded. Both statements are obvious when f = 1A with A = B × C for some B ∈S and C ∈T , and they extend by monotone-class arguments to any indicator functions of sets in S ⊗T . The general case follows by linearity and monotone convergence.
2 We may now state the main result for product measures, known as Fubini’s theorem10.
Theorem 1.29 (product measure and iterated integral, Lebesgue, Fubini, To-nelli) For any σ-finite measure spaces (S, S, μ), (T, T , ν), we have (i) there exists a unique measure μ ⊗ν on (S × T, S ⊗T ), such that (μ ⊗ν)(B × C) = μB · νC, B ∈S, C ∈T , (ii) for any measurable function f : S × T →R with (μ ⊗ν)|f| < ∞, we have (μ ⊗ν)f = μ(ds) f(s, t) ν(dt) = ν(dt) f(s, t) μ(ds).
Note that the iterated integrals11 in (ii) are well defined by Lemma 1.28, although the inner integrals νf(s, ·) and μf(·, t) may fail to exist on some null sets in S and T, respectively.
Proof: By Lemma 1.28, we may define (μ ⊗ν)A = μ(ds) 1A(s, t) ν(dt), A ∈S ⊗T , (6) which is clearly a measure on S × T satisfying (i). By a monotone-class ar-gument there is at most one such measure. In particular, (6) remains true with the order of integration reversed, which proves (ii) for indicator functions f. The formula extends by linearity and monotone convergence to arbitrary measurable functions f ≥0.
10This is a subtle result of measure theory. The elementary proposition in calculus for double integrals of continuous functions goes back at least to Cauchy.
11Iterated integrals should be read from right to left, so that the inner integral on the right becomes an integrand in the next step. This notation saves us from the nuisance of awkward parentheses, or from relying on confusing conventions for multiple integrals.
26 Foundations of Modern Probability In general we note that (ii) holds with f replaced by |f|. If (μ⊗ν)|f| < ∞, it follows that NS = {s ∈S; ν|f(s, ·)| = ∞} is a μ -null set in S, whereas NT = {t ∈T; μ|f(·, t)| = ∞} is a ν -null set in T. By Lemma 1.26 we may redefine f(s, t) = 0 when s ∈NS or t ∈NT. Then (ii) follows for f by sub-traction of the formulas for f+ and f−.
2 We call μ⊗ν in Theorem 1.29 the product measure of μ and ν. Iterating the construction, we may form product measures μ1 ⊗. . . ⊗μn = k μk satisfying higher-dimensional versions of (ii). If μk = μ for all k, we often write the product measure as μ⊗n or μn.
By a measurable group we mean a group G endowed with a σ-field G, such that the group operations in G are G-measurable. If μ1, . . . , μn are σ-finite measures on G, we may define the convolution μ1 ∗· · · ∗μn as the image of the product measure μ1 ⊗· · · ⊗μn on Gn under the iterated group operation (x1, . . . , xn) →x1 · · · xn.
The convolution is said to be associative if (μ1 ∗ μ2) ∗μ3 = μ1 ∗(μ2 ∗μ3) whenever both μ1 ∗μ2 and μ2 ∗μ3 are σ-finite, and commutative if μ1 ∗μ2 = μ2 ∗μ1.
A measure μ on G is said to be right- or left-invariant if μ ◦θ−1 r = μ for all r ∈G, where θr denotes the right or left shift s →s r or s →rs. When G is Abelian, the shift is also called a translation. On product spaces G × T, the translations are defined as mappings of the form θr : (s, t) →(s + r, t).
Lemma 1.30 (convolution) On a measurable group (G, G), the convolution of σ-finite measures is associative. On Abelian groups it is also commutative, and (i) (μ ∗ν)B = μ(B −s) ν(ds) = ν(B −s) μ(ds), B ∈G, (ii) if μ = f · λ and ν = g · λ for an invariant measure λ, then μ ∗ν has the λ-density (f ∗g)(s) = f(s −t) g(t) λ(dt) = f(t) g(s −t) λ(dt), s ∈G.
Proof: Use Fubini’s theorem.
2 For any measure space (S, S, μ) and constant p > 0, let Lp = Lp(S, S, μ) be the class of measurable functions f : S →R satisfying ∥f∥p ≡ μ|f|p 1/p < ∞.
Theorem 1.31 (H¨ older and Minkowski inequalities) For any measurable func-tions f, g on a measure space (S, S, μ), we have (i) ∥fg∥r ≤∥f∥p ∥g∥q, p, q, r > 0 with p−1 + q−1 = r−1, (ii) ∥f + g∥p ≤∥f∥p + ∥g∥p, p ≥1, 1. Sets and Functions, Measures and Integration 27 (iii) ∥f + g∥p ≤ ∥f∥p p + ∥g∥p p 1/p, p ∈(0, 1].
Proof: (i) We may clearly take r = 1 and ∥f∥p = ∥g∥q = 1. Since p−1 + q−1 = 1 implies (p −1)(q −1) = 1, the equations y = xp−1 and x = yq−1 are equivalent for x, y ≥0. By calculus, |fg| ≤ |f| 0 xp−1dx + |g| 0 yq−1dy = p−1|f|p + q−1|g|q, and so ∥fg∥1 ≤p−1 |f|p dμ + q−1 |g|q dμ = p−1 + q−1 = 1.
(ii) For p > 1, we get by (i) with q = p/(p −1) and r = 1 ∥f + g∥p p ≤ |f| |f + g|p−1dμ + |g| |f + g|p−1dμ ≤∥f∥p ∥f + g∥p−1 p + ∥g∥p ∥f + g∥p−1 p .
(iii) Clear from the concavity of |x|p.
2 Claim (ii) above is sometimes needed in the following extended form, where for any f as in Theorem 1.29 we define ∥f∥p(s) = ν|f(s, ·)|p 1/p, s ∈S.
Corollary 1.32 (extended Minkowski inequality) For μ, ν, f as in Theorem 1.29, suppose that μf(·, t) exists for t ∈T a.e. ν. Then ∥μf∥p ≤μ∥f∥p, p ≥1.
Proof: Since |μf| ≤μ|f|, we may assume that f ≥0 and ∥μf∥p ∈(0, ∞).
For p > 1, we get by Fubini’s theorem and H¨ older’s inequality ∥μf∥p p = ν(μf)p = ν μf(μf)p−1 = μν f(μf)p−1 ≤μ∥f∥p (μf)p−1 q = μ∥f∥p ∥μf∥p−1 p .
Now divide by ∥μf∥p−1 p . The proof for p = 1 is similar but simpler.
2 In particular, Theorem 1.31 shows that ∥·∥p becomes a norm for p ≥1, if we identify functions that agree a.e. For any p > 0 and f, f1, f2, . . . ∈Lp, we write fn →f in Lp if ∥fn −f∥p →0, and say that (fn) is Cauchy in Lp if ∥fm −fn∥p →0 as m, n →∞.
28 Foundations of Modern Probability Lemma 1.33 (completeness) Let the sequence (fn) be Cauchy in Lp, where p > 0. Then there exists an f ∈Lp such that ∥fn −f∥p →0.
Proof: Choose a sub-sequence (nk) ⊂N with Σk∥fnk+1 −fnk∥p∧1 p < ∞. By Lemma 1.31 and monotone convergence we get ∥Σk|fnk+1 −fnk| ∥p∧1 p < ∞, and so Σk|fnk+1 −fnk| < ∞a.e. Hence, (fnk) is a.e. Cauchy in R, and so Lemma 1.11 yields fnk →f a.e. for some measurable function f. By Fatou’s lemma, ∥f −fn∥p ≤lim inf k→∞∥fnk −fn∥p ≤sup m≥n ∥fm −fn∥p →0, n →∞, which shows that fn →f in Lp.
2 We give a useful criterion for convergence in Lp.
Lemma 1.34 (Lp-convergence) Let f, f1, f2, . . . ∈Lp with fn →f a.e., where p > 0. Then ∥fn −f∥p →0 ⇔ ∥fn∥p →∥f∥p.
Proof: If fn →f in Lp, we get by Lemma 1.31 ∥fn∥p∧1 p −∥f∥p∧1 p ≤∥fn −f∥p∧1 p →0, and so ∥fn∥p →∥f∥p. Now assume instead the latter condition, and define gn = 2p |fn|p + |f|p , g = 2p+1|f|p.
Then gn →g a.e. and μgn →μg < ∞by hypotheses.
Since also |gn| ≥ |fn −f|p →0 a.e., Theorem 1.23 yields ∥fn −f∥p p = μ|fn −f|p →0.
2 Taking p=q=2 and r=1 in Theorem 1.31 (i) yields the Cauchy inequality12 ∥fg∥1 ≤∥f∥2 ∥g∥2.
In particular, the inner product ⟨f, g⟩= μ(fg) exists for f, g ∈L2 and satisfies |⟨f, g⟩| ≤∥f∥2∥g∥2. From the obvious bi-linearity of the inner product, we get the parallelogram identity ∥f + g∥2 + ∥f −g∥2 = 2 ∥f∥2 + 2 ∥g∥2, f, g ∈L2.
(7) Two functions f, g ∈L2 are said to be orthogonal, written as f ⊥g, if ⟨f, g⟩= 0. Orthogonality between two subsets A, B ⊂L2 means that f ⊥g for all f ∈A and g ∈B. A subspace M ⊂L2 is said to be linear if f, g ∈M and a, b ∈R imply af + b g ∈M, and closed if fn →f in L2 for some fn ∈M implies f ∈M.
12also known as the Schwarz or Buniakovsky inequality 1. Sets and Functions, Measures and Integration 29 Theorem 1.35 (projection) For a closed, linear subspace M ⊂L2, any func-tion f ∈L2 has an a.e. unique decomposition f = g + h with g ∈M, h ⊥M.
Proof: Fix any f ∈L2, and define r = inf{∥f −g∥; g ∈M}. Choose g1, g2, . . . ∈M with ∥f −gn∥→r. Using the linearity of M, the definition of r, and (7), we get as m, n →∞ 4 r2 + ∥gm −gn∥2 ≤ 2f −gm −gn 2 + ∥gm −gn∥2 = 2 ∥f −gm∥2 + 2 ∥f −gn∥2 →4 r2.
Thus, ∥gm −gn∥→0, and so the sequence (gn) is Cauchy in L2. By Lemma 1.33 it converges toward some g ∈L2, and since M is closed, we have g ∈M.
Noting that h = f −g has norm r, we get for any l ∈M r2 ≤∥h + tl∥2 = r2 + 2 t⟨h, l⟩+ t2∥l∥2, t ∈R, which implies ⟨h, l⟩= 0. Hence, h ⊥M, as required.
To prove the uniqueness, let g′ + h′ be another decomposition with the stated properties. Then both g −g′ ∈M and g −g′ = h′ −h ⊥M, and so g −g′ ⊥g −g′, which implies ∥g −g′∥2 = ⟨g −g′, g −g′⟩= 0, and hence g = g′ a.e.
2 We proceed with a basic approximation of sets. Let F, G denote the classes of closed and open subsets of S.
Lemma 1.36 (regularity) For any bounded measure μ on a metric space S with Borel σ-field S, we have μB = sup F⊂B μF = inf G⊃B μG, B ∈S, with F, G restricted to F, G, respectively.
Proof: For any G ∈G there exist some closed sets Fn ↑G, and by Lemma 1.15 we get μFn ↑μG.
This proves the statement for B belonging to the π-system G of open sets. Letting D be the class of sets B with the stated property, we further note that D is a λ-system. Hence, Theorem 1.1 yields D ⊃σ(G) = S.
2 The last result leads to a basic approximation property for functions.
Lemma 1.37 (approximation) Consider a metric space S with Borel σ-field S, a bounded measure μ on (S, S), and a constant p > 0. Then the bounded, continuous functions on S are dense in Lp(S, S, μ). Thus, for any f ∈Lp, there exist some bounded, continuous functions f1, f2, . . .: S →R with ∥fn −f∥p →0.
30 Foundations of Modern Probability Proof: If f = 1A with A ⊂S open, we may choose some continuous func-tions fn with 0 ≤fn ↑f, and then ∥fn −f∥p →0 by dominated convergence.
The result extends by Lemma 1.36 to any A ∈S.
The further extension to simple, measurable functions is immediate. For general f ∈Lp, we may choose some simple, measurable functions fn →f with |fn| ≤|f|.
Since |fn −f|p ≤2p+1|f|p, we get ∥fn −f∥p →0 by dominated convergence.
2 Next we show how the pointwise convergence of measurable functions is almost uniform. Here ∥f∥A = sups∈A |f(s)|.
Lemma 1.38 (near uniformity, Egorov) Let f, f1, f2, . . . be measurable func-tions on a finite measure space (S, S, μ), such that fn →f on S. Then there exist some sets A1, A2, . . . ∈S satifying μAc k →0, ∥fn −f∥Ak →0, k ∈N.
Proof: Define Ar,n = ∩ k≥n s ∈S; |fk(s) −f(s)| < r−1 , r, n ∈N.
Letting n →∞for fixed r, we get Ar,n ↑S and hence μAc r,n →0. Given any ε > 0, we may choose n1, n2, . . . ∈N so large that μAc r,nr < ε2−r for all r.
Writing A = rAr,nr, we get μAc ≤μ∪r Ac r,nr < εΣ r 2−r = ε, and we note that fn →f uniformly on A.
2 Combining the last two results, we may show that every measurable func-tion is almost continuous.
Lemma 1.39 (near continuity, Lusin) Consider a measurable function f and a bounded measure μ on a compact metric space S with Borel σ-field S. Then there exist some continuous functions f1, f2, . . . on S, such that μ x; fn(x) ̸= f(x) →0.
Proof: We may clearly take f to be bounded. By Lemma 1.37 we may choose some continuous functions g1, g2, . . . on S, such that μ|gk −f| ≤2−k.
By Fubini’s theorem, we get μΣ k |gk −f| = Σ k μ|gk −f| ≤Σ k 2−k = 1, and so Σk|gk −f| < ∞a.e., which implies gk →f a.e. By Lemma 1.38, we may next choose some A1, A2, . . . ∈S with μAc n →0, such that the conver-gence is uniform on every An. Since each gk is uniformly continuous on S, 1. Sets and Functions, Measures and Integration 31 we conclude that f is uniformly continuous on each An. By Tietze’s exten-sion theorem13, the restriction f|An has then a continuous extension fn to S. 2 Exercises 1. Prove the triangle inequality μ(A△C) ≤μ(A△B) + μ(B△C). (Hint: Note that 1A△B = |1A −1B|.) 2. Show that Lemma 1.10 fails for uncountable index sets. (Hint: Show that every measurable set depends on countably many coordinates.) 3. For any space S, let μA denote the cardinality of the set A ⊂S. Show that μ is a measure on (S, 2S).
4.
Let K be the class of compact subsets of some metric space S, and let μ be a bounded measure such that infK∈K μKc = 0.
Show that for any B ∈BS, μB = supK∈K∩B μK.
5. Show that any absolutely convergent series can be written as an integral with respect to counting measure on N. State series versions of Fatou’s lemma and the dominated convergence theorem, and give direct elementary proofs.
6. Give an example of integrable functions f, f1, f2, . . . on a probability space (S, S, μ), such that fn →f but μfn ̸→μf.
7. Let μ, ν be σ-finite measures on a measurable space (S, S) with sub-σ-field F.
Show that if μ ≪ν on S, it also holds on F. Further show by an example that the converse may fail.
8. Fix two measurable spaces (S, S) and (T, T ), a measurable function f : S →T, and a measure μ on S with image ν = μ ◦f −1. Show that f remains measurable with respect to the completions Sμ and T ν.
9.
Fix a measure space (S, S, μ) and a σ-field T ⊂S, let Sμ denote the μ-completion of S, and let T μ be the σ-field generated by T and the μ-null sets of Sμ. Show that A ∈T μ iffthere exist some B ∈T and N ∈Sμ with A△B ⊂N and μN = 0. Also, show by an example that T μ may be strictly greater than the μ-completion of T .
10.
State Fubini’s theorem for the case where μ is σ-finite and ν is counting measure on N. Give a direct proof of this version.
11. Let f1, f2, . . . be μ-integrable functions on a measurable space S, such that g = Σk fk exists a.e., and put gn = Σk≤n fk. Restate the dominated convergence theorem for the integrals μgn in terms of the functions fk, and compare with the result of the preceding exercise.
12. Extend Theorem 1.29 to the product of n measures.
13.
Let M ⊃N be closed linear subspaces of L2.
Show that if f ∈L2 has projections g onto M and h onto N, then g has projection h onto N.
14. Let M be a closed linear subspace of L2, and let f, g ∈L2 with M-projections ˆ f and ˆ g. Show that ⟨ˆ f, g⟩= ⟨f, ˆ g⟩= ⟨ˆ f, ˆ g⟩.
13A continuous function on a closed subset of a normal topological space S has a continuous extension to S.
32 Foundations of Modern Probability 15. Show that if μ ≪ν and νf = 0 with f ≥0, then also μf = 0. (Hint: Use Lemma 1.26.) 16. For any σ-finite measures μ1 ≪μ2 and ν1 ≪ν2, show that μ1 ⊗ν1 ≪μ2 ⊗ν2.
(Hint: Use Fubini’s theorem and Lemma 1.26.) Chapter 2 Measure Extension and Decomposition Outer measure, Carath´ eodory extension, Lebesgue measure, shift and ro-tation invariance, Hahn and Jordan decompositions, Lebesgue decompo-sition and differentiation, Radon–Nikodym theorem, Lebesgue–Stieltjes measures, finite-variation functions, signed measures, atomic decompo-sition, factorial measures, right and left continuity, absolutely continu-ous and singular functions, Riesz representation The general measure theory developed in Chapter 1 would be void and mean-ingless, unless we could establish the existence of some non-trivial, countably additive measures. In fact, much of modern probability theory depends implic-itly on the existence of such measures, needed already to model an elementary sequence of coin tossings. Here we prove the equivalent existence of Lebesgue measure, which will later allow us to establish the more general Lebesgue– Stieltjes measures, and a variety of discrete- and continuous-time processes throughout the book.
The basic existence theorems in this and the next chapter are surprisingly subtle, and the hurried or impatient reader may feel tempted to skip this material and move on to the probabilistic parts of the book. However, he/she should be aware that much of the present theory underlies the subsequent probabilistic discussions, and any serious student should be prepared to return for reference when need arises.
Our first aim is to construct Lebesgue measure, using the powerful approach of Carath´ eodory based on outer measures. The result will lead, via the Daniell– Kolmogorov theorem in Chapter 8, to the basic existence theorem for Markov processes in Chapter 11. We proceed to the correspondence between signed measures and functions of locally finite variation, of special importance for the theory of semi-martingales and general stochastic integration. A further high point is the powerful Riesz representation, which will enable us in Chapter 17 to construct Markov processes with a given generator, via resolvents and associated semi-groups of transition operators. We may further mention the Radon–Nikodym theorem, relevant to the theory of conditioning in Chapter 8, and Lebesgue’s differentiation theorem, instrumental for proving the general ballot theorem in Chapter 25.
We begin with an ingenious technical result, which will play a crucial role for our construction of Lebesgue measure in Theorem 2.2, and for the proof of Riesz’ representation Theorem 2.25. By an outer measure on a space S 33 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 34 Foundations of Modern Probability we mean a non-decreasing, countably sub-additive set function μ : 2S →¯ R+ with μ∅= 0. Given an outer measure μ on S, we say that a set A ⊂S is μ -measurable if μE = μ(E ∩A) + μ(E ∩Ac), E ⊂S.
(1) Note that the inequality ≤holds automatically by sub-additivity. The follow-ing result gives the basic measure construction from outer measures.
Theorem 2.1 (restriction of outer measure, Carath´ eodory) Let μ be an outer measure on S, and write S for the class of μ-measurable sets. Then S is a σ-field and the restriction of μ to S is a measure.
Proof: Since μ∅= 0, we have for any set E ⊂S μ(E ∩∅) + μ(E ∩S) = μ∅+ μE = μE, which shows that ∅∈S. Also note that trivially A ∈S implies Ac ∈S.
Next let A, B ∈S. Using (1) for A and B together with the sub-additivity of μ, we get for any E ⊂S μE = μ(E ∩A) + μ(E ∩Ac) = μ E ∩A ∩B + μ E ∩A ∩Bc + μ(E ∩Ac) ≥μ E ∩(A ∩B) + μ E ∩(A ∩B)c , which shows that even A∩B ∈S. It follows easily that S is a field. If A, B ∈S are disjoint, we also get by (1) for any E ⊂S μ E ∩(A ∪B) = μ E ∩(A ∪B) ∩A + μ E ∩(A ∪B) ∩Ac = μ(E ∩A) + μ(E ∩B).
(2) Finally, let A1, A2, . . . ∈S be disjoint, and put Un = k≤n Ak and U = n Un. Using (2) recursively along with the monotonicity of μ, we get μ(E ∩U) ≥μ(E ∩Un) = Σ k≤n μ(E ∩Ak).
Letting n →∞and combining with the sub-additivity of μ, we obtain μ(E ∩U) = Σ k μ(E ∩Ak).
(3) Taking E = S, we see in particular that μ is countably additive on S. Noting that Un ∈S and using (3) twice, along with the monotonicity of μ, we also get μE = μ(E ∩Un) + μ(E ∩U c n) ≥Σ k≤n μ(E ∩Ak) + μ(E ∩U c) →μ(E ∩U) + μ(E ∩U c), which shows that U ∈S. Thus, S is a σ-field.
2 Much of modern probability relies on the existence of non-trivial, countably additive measures. Here we prove that the elementary notion of interval length can be extended to a measure λ on R, known as Lebesgue measure.
2. Measure Extension and Decomposition 35 Theorem 2.2 (Lebesgue measure, Borel) There exists a unique measure λ on (R, B), such that λ[a, b] = b −a, a ≤b.
For the proof, we first need to extend the set of lengths |I| of real intervals I to an outer measure on R. Then define λA = inf {Ik} Σ k |Ik|, A ⊂R, (4) where the infimum extends over all countable covers of A by open intervals I1, I2, . . . . We show that (4) provides the desired extension.
Lemma 2.3 (outer Lebesgue measure) The function λ in (4) is an outer mea-sure on R, satisfying λI = |I| for every interval I.
Proof: The set function λ is clearly non-negative and non-decreasing with λ∅= 0. To prove the countable sub-additivity, let A1, A2, . . . ⊂R be arbitrary.
For any ε > 0 and n ∈N, we may choose some open intervals In1, In2, . . . such that An ⊂∪k Ink, λAn ≥Σ k |Ink| −ε2−n, n ∈N.
Then ∪n An ⊂∪n∪k Ink, λ∪n An ≤Σ nΣ k |Ink| ≤Σ n λAn + ε, and the desired relation follows as we let ε →0.
To prove the second assertion, we may take I = [a, b] for some finite a < b.
Since I ⊂(a −ε, b + ε) for every ε > 0, we get λI ≤|I| + 2ε, and so λI ≤ |I|. As for the reverse relation, we need to prove that if I ⊂ k Ik for some open intervals I1, I2, . . . , then |I| ≤Σk |Ik|. By the Heine–Borel theorem1, I remains covered by finitely many intervals I1, . . . , In, and it suffices to show that |I| ≤Σk≤n |Ik|. This reduces the assertion to the case of finitely many covering intervals I1, . . . , In.
The statement is clearly true for a single covering interval. Proceeding by induction, assume the truth for n −1 covering intervals, and turn to the case of covering by I1, . . . , In. Then b belongs to some Ik = (ak, bk), and so the interval I′ k = I \ Ik is covered by the remaining intervals Ij, j ̸= k. By the induction hypothesis, we get |I| = b −a ≤(b −ak) + (ak −a) ≤|Ik| + |I′ k| ≤|Ik| + Σ j ̸=k |Ij| = Σ j |Ij|, as required.
2 We show that the class of measurable sets in Lemma 2.3 contains all Borel sets.
1A set in Rd is compact iffit is bounded and closed.
36 Foundations of Modern Probability Lemma 2.4 (measurability of intervals) Let λ denote the outer measure in Lemma 2.3. Then the interval (−∞, a] is λ-measurable for every a ∈R.
Proof: For any set E ⊂R and constant ε > 0, we may cover E by some open intervals I1, I2, . . . , such that λE ≥Σn |In| −ε. Writing I = (−∞, a] and using the sub-additivity of λ and Lemma 2.3, we get λE + ε ≥Σ n |In| = Σ n |In ∩I| + Σ n |In ∩Ic| = Σ n λ(In ∩I) + Σ n λ(In ∩Ic) ≥λ(E ∩I) + λ(E ∩Ic).
Since ε was arbitrary, it follows that I is λ -measurable.
2 Proof of Theorem 2.2: Define λ as in (4). Then Lemma 2.3 shows that λ is an outer measure satisfying λI = |I| for every interval I. Furthermore, Theorem 2.1 shows that λ is a measure on the σ-field S of all λ-measurable sets.
Finally, Lemma 2.4 shows that S contains all intervals (−∞, a] with a ∈R. Since the latter sets generate the Borel σ-field B, we have B ⊂S.
To prove the uniqueness, consider any measure μ with the stated properties, and put In = [−n, n] for n ∈N. Using Lemma 1.18, with C as the class of intervals, we see that λ(B ∩In) = μ(B ∩In), B ∈B, n ∈N.
Letting n →∞and using Lemma 1.15, we get λB = μB for all B ∈B, as required.
2 Before proceeding to a more detailed study of Lebesgue measure, we state a general extension theorem that can be proved by essentially the same argu-ments. Here a non-empty class I of subsets of a space S is called a semi-ring, if for any I, J ∈I we have I ∩J ∈I, and the set I \ J is a finite union of disjoint sets I1, . . . , In ∈I.
Theorem 2.5 (extension, Carath´ eodory) Let μ be a finitely additive, count-ably sub-additive set function on a semi-ring I ⊂2S, such that μ∅= 0. Then μ extends to a measure on σ(I).
Proof: Define a set function μ∗on 2S by μ∗A = inf {Ik} Σ k μIk, A ⊂S, where the infimum extends over all covers of A by sets I1, I2, . . . ∈I, and take μ∗A = ∞when no such cover exists. Proceeding as in the proof of Lemma 2.3, we see that μ∗is an outer measure on S. To see that μ∗extends μ, fix any I ∈I, and consider an arbitrary cover I1, I2, . . . ∈I of I. Using the sub-additivity and finite additivity of μ, we get 2. Measure Extension and Decomposition 37 μ∗I ≤μI ≤Σ k μ(I ∩Ik) ≤Σ k μIk, which implies μ∗I = μI. By Theorem 2.1, it remains to show that every set I ∈I is μ∗-measurable. Then let A ⊂S be covered by some sets I1, I2, . . . ∈I with μ∗A ≥Σk μIk −ε, and proceed as in the proof of Lemma 2.4, noting that In \ I is a finite disjoint union of sets Inj ∈I, and hence μ(In \ I) = Σj μInj by the finite additivity of μ.
2 For every d ∈N, we may use Theorem 1.29 to form the d -dimensional Lebesgue measure λd = λ ⊗· · · ⊗λ on Rd, generalizing the elementary notions of area and volume. We show that λd is invariant under translations (or shifts), as well as under arbitrary rotations, and that either invariance characterizes λd up to a constant factor.
Theorem 2.6 (invariance of Lebesgue measure) Let μ be a locally finite mea-sure on a product space Rd × S, where (S, S) is a localized Borel space. Then these conditions are equivalent: (i) μ = λd ⊗ν for some locally finite measure ν on S, (ii) μ is invariant under translations of Rd, (iii) μ is invariant under rigid motions of Rd.
Proof, (ii) ⇒(i): Assume (ii). Write I for the class of intervals I = (a, b] with rational endpoints. Then for any I1, . . . , Id ∈I and C ∈S with νC < ∞, μ I1 × · · · × Id × C = |I1| · · · |Id| νC = (λd ⊗ν) I1 × · · · × Id × C .
For fixed I2, . . . , Id and C, this extends by monotonicity to arbitrary intervals I1, and then, by the uniqueness in Theorem 2.2, to any B1 ∈B. Proceeding recursively in d steps, we get for arbitrary B1, . . . , Bd ∈B μ B1 × · · · × Bd × C = (λd ⊗ν) B1 × · · · × Bd × C , which yields (i) by the uniqueness in Theorem 1.29.
(i) ⇒(ii)−(iii): Let μ = λd ⊗ν. For any h = (h1, . . . , hd) ∈Rd, define the shift θh : Rd →Rd by θhx = x + h for all x ∈Rd. Then for any intervals I1, . . . , Id and sets C ∈S, μ I1 × · · · × Id × C = |I1| · · · |Id| νC = μ ◦θ−1 h I1 × · · · × Id × C , where θh(x, s) = (x + h, s). As before, it follows that μ = μ ◦θ−1 h .
To see that μ is also invariant under orthogonal transformations ψ on Rd, we note that for any x, h ∈Rd 38 Foundations of Modern Probability (θh ◦ψ)x = ψx + h = ψ x + ψ−1h = ψ(x + h′) = (ψ ◦θh′)x, where h′ = ψ−1h. Since μ is shift-invariant, we obtain μ ◦ψ−1 ◦θ−1 h = μ ◦θ−1 h′ ◦ψ−1 = μ ◦ψ−1, where ψ(x, s) = (ψx, s). Thus, even μ ◦ψ−1 is shift-invariant and hence of the form λd ⊗ν′. Writing B for the unit ball in Rd, we get for any C ∈S λdB · ν′C = μ ◦ψ−1(B × C) = μ ψ−1B × C = μ(B × C) = λdB · νC.
Dividing by λdB gives ν′C = νC, and so ν′ = ν, which implies μ ◦ψ−1 = μ. 2 We show that any integrable function on Rd is continuous in a suitable average sense.
Lemma 2.7 (mean continuity) Let f be a measurable function on Rd with λd|f| < ∞. Then lim h→0 f(x + h) −f(x) dx = 0.
Proof: By Lemma 1.37 and a simple truncation, we may choose some con-tinuous functions f1, f2, . . . with bounded supports such that λd|fn −f| →0.
By the triangle inequality, we get for n ∈N and h ∈Rd f(x + h) −f(x) dx ≤ fn(x + h) −fn(x) dx + 2λd|fn −f|.
Since the fn are bounded, the right-hand side tends to 0 by dominated conver-gence, as h →0 and then n →∞.
2 By a signed measure on a localized Borel space (S, S) we mean a function ν : ˆ S →R, such that ν n Bn = Σn νBn for any disjoint sets B1, B2, . . . ∈ˆ S, where the series converges absolutely. Say that the measures μ, ν on (S, S) are (mutually) singular and write μ ⊥ν, if there exists an A ∈S with μA = νAc = 0. Note that A may not be unique. We state the basic decomposition of a signed measure into positive components.
Theorem 2.8 (Hahn decomposition) For any signed measure ν on S, there exist some unique, locally finite measures ν± ≥0 on S, such that ν = ν+ −ν−, ν+ ⊥ν−.
2. Measure Extension and Decomposition 39 Proof: We may take ν to be bounded. Put c = sup{νA; A ∈S}, and note that if A, A′ ∈S with νA ≥c −ε and νA′ ≥c −ε′, then ν(A ∪A′) = νA + νA′ −ν(A ∩A′) ≥(c −ε) + (c −ε′) −c = c −ε −ε′.
Choosing A1, A2, . . . ∈S with νAn ≥c−2−n, we get by iteration and countable additivity ν ∪ k>nAk ≥c −Σ k>n 2−k = c −2−n, n ∈N.
Define A+ = n k>n Ak and A−= Ac +. Using the countable additivity again, we get νA+ = c. Hence, for sets B ∈S, νB = νA+ −ν(A+ \ B) ≥0, B ⊂A+, νB = ν(A+ ∪B) −νA+ ≤0, B ⊂A−.
We may then define the measures ν+ and ν−by ν+B = ν(B ∩A+), ν−B = −ν(B ∩A−), B ∈S.
To prove the uniqueness, suppose that also ν = μ+ −μ−for some positive measures μ+ ⊥μ−. Choose a set B+ ∈S with μ−B+ = μ+Bc + = 0. Then ν is both positive and negative on the sets A+ \ B+ and B+ \ A+, and therefore ν = 0 on A+△B+. Hence, for any C ∈S, μ+C = μ+(B+ ∩C) = ν(B+ ∩C) = ν(A+ ∩C) = ν+C, which shows that μ+ = ν+. Then also μ−= μ+ −ν = ν+ −ν = ν−.
2 The last result yields the existence of the maximum μ ∨ν and minimum μ ∧ν of two σ-finite measures μ and ν.
Corollary 2.9 (maximum and minimum) For any σ-finite measures μ, ν on S, (i) there exist a largest measure μ ∧ν and a smallest one μ ∨ν satisfying μ ∧ν ≤μ, ν ≤μ ∨ν, (ii) the measures in (i) satisfy ( μ −μ ∧ν ) ⊥( ν −μ ∧ν ), μ ∧ν + μ ∨ν = μ + ν.
40 Foundations of Modern Probability Proof: We may take μ and ν to be bounded. Writing ρ+ −ρ−for the Hahn decomposition of μ −ν, we put μ ∧ν = μ −ρ+, μ ∨ν = μ + ρ−.
2 For any measures μ, ν on (S, S), we say that ν is absolutely continuous with respect to μ and write ν ≪μ, if μA = 0 implies νA = 0 for all A ∈S. We show that any σ-finite measure has a unique decomposition into an absolutely continuous and a singular component, where the former has a basic integral representation.
Theorem 2.10 (Lebesgue decomposition, Radon–Nikodym theorem) Let μ, ν be σ-finite measures on S. Then (i) ν = νa + νs for some unique measures νa ≪μ and νs ⊥μ, (ii) νa = f · μ for a μ-a.e. unique measurable function f ≥0 on S.
Two lemmas will be needed for the proof.
Lemma 2.11 (closure) Consider some measures μ, ν and measurable func-tions f1, f2, . . . ≥0 on S, and put f = supn fn. Then fn · μ ≤ν, n ∈N ⇔ f · μ ≤ν.
Proof: First let f · μ ≤ν and g · μ ≤ν, and put h = f ∨g. Writing A = {f ≥g}, we get h · μ = 1Ah · μ + 1Ach · μ = 1Af · μ + 1Acg · μ ≤1Aν + 1Acν = ν.
Thus, we may assume that fn ↑f. But then ν ≥fn · μ ↑f · μ by monotone convergence, and so f · μ ≤ν.
2 Lemma 2.12 (partial density) Let μ, ν be bounded measures on S with μ ̸⊥ν.
Then there exists a measurable function f ≥0 on S, such that μf > 0, f · μ ≤ν.
Proof: For any n ∈N, we introduce the signed measure χn = ν −n−1μ. By Theorem 2.8 there exists a set A+ n ∈S with complement A− n such that ±χn ≥0 on A± n . Since the χn are non-decreasing, we may assume that A+ 1 ⊂A+ 2 ⊂· · · .
Writing A = n A+ n and noting that Ac = n A− n ⊂A− n , we obtain νAc ≤νA− n = χnA− n + n−1μA− n ≤n−1μS →0, 2. Measure Extension and Decomposition 41 and so νAc = 0. Since μ ̸⊥ν, we get μA > 0. Furthermore, A+ n ↑A implies μA+ n ↑μA > 0, and we may choose n so large that μA+ n > 0. Putting f = n−11A+ n , we obtain μf = n−1μA+ n > 0 and f · μ = n−11A+ n · μ = 1A+ n · ν −1A+ n · χn ≤ν.
2 Proof of Theorem 2.10: We may take μ and ν to be bounded. Let C be the class of measurable functions f ≥0 on S with f · μ ≤ν, and define c = sup{μf; f ∈C}. Choose f1, f2, . . . ∈C with μfn →c. Then f ≡supn fn ∈C by Lemma 2.11, and μf = c by monotone convergence. Define νa = f · μ and νs = ν −νa, and note that νa ≪μ. If νs ̸⊥μ, then Lemma 2.12 yields a measurable function g ≥0 with μg > 0 and g · μ ≤νs. But then f + g ∈C with μ(f + g) > c, which contradicts the definition of c. Thus, νs ⊥μ.
To prove the uniqueness of νa and νs, suppose that also ν = ν′ a + ν′ s for some measures ν′ a ≪μ and ν′ s ⊥μ. Choose A, B ∈S with νsA = μAc = ν′ sB = μBc = 0. Then clearly νs(A ∩B) = ν′ s(A ∩B) = νa(Ac ∪Bc) = ν′ a(Ac ∪Bc) = 0, and so νa = 1A∩B · νa = 1A∩B · ν = 1A∩B · ν′ a = ν′ a, νs = ν −νa = ν −ν′ a = ν′ s.
To see that f is a.e. unique, suppose that also νa = g·μ for some measurable function g ≥0. Writing h = f −g and noting that h · μ = 0, we get μ|h| = {h>0} h dμ − {h<0} h dμ = 0, and so h = 0 a.e. by Lemma 1.26.
2 We insert a simple corollary needed in Chapter 25.
Corollary 2.13 (splitting) Consider some finite measure spaces (S, S, μ) and (T, T , ν) and a measurable map f : S →T, such that ν ≤μ ◦f −1. Then there exists a measure μ′ ≤μ on S with ν = μ′ ◦f −1.
Proof: Put μ′ = (g ◦f) · μ with g = dν/d(μ ◦f −1), and use Lemma 1.24. 2 Next we extend Theorem 2.2 to a basic correspondence between locally finite measures and non-decreasing functions on R.
42 Foundations of Modern Probability Theorem 2.14 (Lebesgue–Stieltjes measures) There is a 1−1 correspondence between all locally finite measures μ on R and all non-decreasing, right-contin-uous functions F with F(0) = 0, given by μ(a, b] = F(b) −F(a), −∞< a < b < ∞.
(5) Proof: For any locally finite measure μ on R, we define a function F on R by F(x) = μ(0, x], x ≥0, −μ(x, 0], x < 0.
Then F is right-continuous and non-decreasing with F(0) = 0, and as such it is clearly uniquely determined by (5).
Conversely, given any F as stated, we define its left-continuous inverse g: R →¯ R by g(t) = inf s ∈R; F(s) ≥t , t ∈R.
Since g is again non-decreasing, the set g−1(−∞, s] is an extended interval for each s ∈R, and so g is measurable by Lemma 1.4. We may then define a measure μ on ¯ R by μ = λ ◦g−1, where λ denotes Lebesgue measure on R.
Noting that g(t) ≤x ifft ≤F(x), we get for any a < b μ(a, b] = λ t; g(t) ∈(a, b] = λ F(a), F(b) = F(b) −F(a).
Thus, the restriction of μ to R satisfies (5). The uniqueness of μ may be proved in the same way as for λ in Theorem 2.2.
2 We now specialize Theorem 2.10 to the case where μ equals Lebesgue mea-sure and ν is any locally finite measure on R, defined as in Theorem 2.14 in terms of a non-decreasing, right-continuous function F. The Lebesgue decom-position and Radon–Nikodym property may be expressed in terms of F as F = Fa + Fs = f · λ + Fs, (6) where Fa and Fs correspond to the absolutely continuous and singular compo-nents of ν, respectively, and we assume that Fa(0) = 0. Here f · λ denotes the function x 0 f(t) dt, where the Lebesgue density f : R →R+ is locally integrable.
The following result extends the fundamental theorem of calculus for Riemann integrals of continuously differentiable functions—the fact that differentiation and integration are mutually inverse operations.
Theorem 2.15 (differentiation, Lebesgue) Let F be a non-decreasing, right-continuous function on R.
Then F is a.e. differentiable, and the Lebesgue decomposition and derivative of F are related by F = f · λ + Fs ⇔ F ′ = f a.e. λ.
2. Measure Extension and Decomposition 43 Thus, the two parts of the fundamental theorem generalize to (f · λ)′ = f a.e., F ′ · λ = Fa.
In other words, the density of an integral can still be a.e. recovered through differentiation, whereas integration of a derivative yields only the absolutely continuous component of the underlying function. In particular, F is absolutely continuous iffF ′ · λ = F −F(0) and singular iffF ′ = 0 a.e.
The last result extends trivially to any difference F = F+−F−between non-decreasing, right-continuous functions F+ and F−. However, it fails for more general functions, where the derivative may not even exist. A prime example is given by the paths of a Brownian motion, as will be seen in Corollary 14.10.
Two lemmas will be helpful to prove the last theorem.
Lemma 2.16 (interval selection) For any class I of open intervals with union G satisfying λG < ∞, there exist some disjoint sets I1, . . . , In ∈I with Σ k |Ik| ≥λG/4.
Proof: By Lemma 1.36 we may choose a compact set K ⊂G with λK ≥ 3 λG/4.
By compactness, we may next cover K by finitely many intervals J1, . . . , Jm ∈I. Now define recursively some I1, I2, . . . , where Ik is the longest interval Jr not yet chosen, such that Jr ∩Ij = ∅for all j < k. The selection terminates when no such interval exists.
If an interval Jr is not selected, it must intersect a longer interval Ik. Writing ˆ Ik for the interval centered at Ik with length 3|Ik|, we obtain K ⊂∪r Jr ⊂∪k ˆ Ik, and so 3 λG/4 ≤λK ≤λ∪k ˆ Ik ≤Σ k |ˆ Ik| = 3Σ k |Ik|.
2 Lemma 2.17 (differentiation on null sets) Let F(x) ≡μ(0, x] for a locally finite measure μ on R. Then for any set A ∈B, μA = 0 ⇒ F ′ = 0 a.e. 1Aλ.
Proof: For any δ > 0, Lemma 1.36 yields an open set Gδ ⊃A with μGδ < δ.
Define Aε = x ∈A; lim sup h→0 μ(x −h, x + h) h > ε , ε > 0, and note that the Aε are measurable, since the lim sup may be taken along the rationals. For every x ∈Aε there exists an interval I = (x −h, x + h) ⊂ Gδ with 2μI > ε|I|, and we note that the class Iε,δ of such intervals covers Aε. Hence, Lemma 2.16 yields some disjoint sets I1, . . . , In ∈Iε,δ satisfying Σk |Ik| ≥λAε/4. Then 44 Foundations of Modern Probability λAε ≤4 Σ k |Ik| ≤8 ε−1Σ k μIk ≤8 μGδ/ε < 8 δ/ε, and as δ →0 we get λAε = 0. Thus, lim sup μ(x −h, x + h)/h ≤ε a.e. λ on A, and the assertion follows since ε is arbitrary.
2 Proof of Theorem 2.15: Since Fs ′ = 0 a.e. λ by Lemma 2.17, we may assume that F = f · λ. Define F+ ′ (x) = lim sup h→0 h−1 F(x + h) −F(x) , F− ′ (x) = lim inf h→0 h−1 F(x + h) −F(x) , and note that F+ ′ = 0 a.e. on the set {f = 0} = {x; f(x) = 0} by Lemma 2.17.
Applying this result to the function Fr = (f −r)+ · λ for arbitrary r ∈R, and noting that f ≤(f −r)+ + r, we get F+ ′ ≤r a.e. on {f ≤r}. Thus, for r restricted to the rationals, λ{f < F+ ′} = λ∪r f ≤r < F+ ′ ≤Σ r λ f ≤r < F+ ′ = 0, which shows that F+ ′ ≤f a.e. Applying this to the function −F = (−f) · λ yields F− ′ = −(−F)′ + ≥f a.e. Thus, F+ ′ = F− ′ = f a.e., and so F ′ exists a.e.
and equals f.
2 For a localized Borel space (S, S), let MS be the class of locally finite measures on S. It becomes a measurable space in its own right, when endowed with the σ-field induced by the evaluation maps πB : μ →μB, B ∈S. In particular, the class of probability measures on S is a measurable subset of MS.
To state the basic atomic decomposition of measures μ ∈MS, recall that δsB = 1B(s), and write Mc S for the class of diffuse2 measures μ ∈MS, where μ{s} = 0 for all s ∈S. Put ¯ Z+ = {0, 1, . . . ; ∞}.
Theorem 2.18 (atomic decomposition) For any measures μ ∈MS with S a localized Borel space, we have (i) μ = α + Σ k≤κ βk δσk, for some α ∈Mc S, κ ∈¯ Z+, β1, β2, . . . > 0, and distinct σ1, σ2, . . . ∈S, (ii) μ is Z+-valued iff(i) holds with α = 0 and βk ∈N for all k, (iii) the representation in (i) is unique apart from the order of terms.
2or non-atomic 2. Measure Extension and Decomposition 45 Proof: (i) By localization we may assume that μS < ∞. Since S is Borel we may let S ∈B[0,1], and by an obvious embedding we may even take S = [0, 1].
Define F(x) = μ[0, x] as in Theorem 2.14, and note that the possible atoms of μ correspond uniquely to the jumps of F. The latter may be listed in non-increasing order as a sequence of pairs {x, ΔF(x)}, equivalent to the set of atom locations and sizes (σk, βk) of μ. By Corollary 1.17, subtraction of the atoms yields a measure α, which is clearly diffuse.
(ii) Let μ be Z+-valued, and suppose that βk / ∈N for some k. Assuming S = [0, 1], we may choose some open balls Bn ↓{σk}. Then μBn ↓βk, and so μBn / ∈Z+ for large n. The contradiction shows that βk ∈N for all k. To see that even α = 0, suppose that instead α ̸= 0. Writing A = S \ k{σk}, we get μA > 0, and so we may choose an s ∈supp (1Aμ) and some balls Bn ↓{s}.
Then 0 < μ(A ∩Bn) ↓0, which yields another contradiction.
(iii) Consider two such decompositions μ = α + Σ k≤κ βk δσk = α′ + Σ k≤κ′β′ k δσ′ k.
Pairing offthe atoms in the two sums until no such terms remain, we are left with the equality α = α′.
2 Let NS be the class of Z+-valued measures μ ∈MS, also known as point measures on S. Say that μ is simple if all coefficients βk equal 1, so that μ is the counting measure on its support. The class of simple point measures on S is denoted by N ∗ S. Given a measure μ ∈MS, we write μc for the diffuse component of μ.
For point measures μ ∈NS, we write μ∗for the simple measure on the support of μ, obtained by reducing all βk to 1.
Theorem 2.19 (measurability) For any localized Borel space S and measures μ ∈MS, we have (i) the coefficients α, κ, (βk), (σk) in Theorem 2.18 can be chosen to be measurable functions of μ, (ii) the classes Mc S, NS, N ∗ S are measurable subsets of MS, (iii) the maps μ →μc and μ →μ∗are measurable on MS and NS, (iv) the class of degenerate measures β δσ is measurable, (v) the mapping (μ, ν) →μ ⊗ν is a measurable function on M2 S.
Proof: (i) We may take S = [0, 1) and assume that μS < ∞. Put Inj = 2−n[j −1, j) for j ≤2n, n ∈N, and define γnj = μInj and τnj = 2−nj. For any constants b > a > 0, let (βnk, σnk) with k ≤κn be the pairs (γnj, τnj) with a ≤γnj < b, listed in the order of increasing τnj. Since μBn ↓μ{x} for any open balls Bn ↓{x}, we note that βn1, βn2, . . . ; σn1, σn2, . . . → β1, β2, . . . ; σ1, σ2, . . . , 46 Foundations of Modern Probability where the βk and σk are atom sizes and positions of μ with a ≤βk < b and increasing σk. The measurability on the right now follows by Lemma 1.11.
To see that the atomic component of μ is measurable, it remains to partition (0, ∞) into countably many intervals [a, b). The measurability of the diffuse part α now follows by subtraction.
(ii)−(iv): This is clear from (i), since the mentioned classes and operations can be measurably expressed in terms of the coefficients.
(v) Note that (μ ⊗ν)(B × C) is measurable for any B, C ∈S, and extend by a monotone-class argument.
2 Let S(n) denote the non-diagonal part of Sn, consisting of all n -tuples (s1, . . . , sn) with distinct components sk. When μ ∈N ∗ S, we may define the factorial measure μ(n) as the restriction of the product measure μn to S(n).
The definition for general μ ∈NS is more subtle. Write ˆ NS for the class of bounded measures in NS, and put μ(1) = μ.
Lemma 2.20 (factorial measures) For any measure μ = Σ i∈I δσi in ˆ NS, these conditions are equivalent and define some measures μ(n) on Sn: (i) μ(n) = Σ i∈I(n)δσi1,...,σin, (ii) μ(m+n)f = μ(m)(ds) μ −Σ i≤m δsi (n)(dt) f(s, t), (iii) Σ n≥0 1 n! μ(n)(ds) f Σ i≤n δsi = Σ ν ≤μ f(ν).
Proof: When μ is simple, (i) agrees with the elementary definition. Since also μ(1) = μ, we may take (i) as our definition. We also put μ(0) = 1 for consistency. Since the μ(n) are uniquely determined by both (ii) and (iii), it remains to show that each of those relations follows from (i).
(ii) Write Ii = I \ {i1, . . . , im} for any i = (i1, . . . , im) ∈I(m). Then by (i), μ(m)(ds) μ −Σ i≤m δsi (n)(dt) f(s, t) = Σ i∈I(m) μ −Σ k≤m δσik (n)(dt) f σi1, . . . , σik, t = Σ i∈I(m) Σ j ∈Ii δσj (n)(dt) f σi1, . . . , σik, t = Σ i∈I(m) Σ j ∈Ii(n)f σi1, . . . , σim; σj1, . . . , σjn = Σ i∈I(m+n)f σi1, . . . , σim+n = μ(m+n)f.
(iii) Given an enumeration μ = Σi∈I δσi with I ⊂N, put ˆ I(n) = {i ∈In; i1 < · · · < in} for n > 0. Let Pn be the class of permutations of 1, . . . , n. Using the atomic representation of μ and the tetrahedral decomposition of I(n), we get 2. Measure Extension and Decomposition 47 μ(n)(ds) f Σ i≤n δsi = Σ i∈I(n)f Σ k≤n δσik = Σ i∈ˆ I(n) Σ π ∈Pn f Σ k≤n δσπ◦ik = n! Σ i∈ˆ I(n)f Σ k≤n δσik = n! Σ ν ≤μ {f(ν); ∥ν∥= n}.
Now divide by n! and sum over n.
2 We now explore the relationship between signed measures and functions of bounded variation. For any function F on R, we define the total variation on the interval [a, b] as ∥F∥b a = sup {tk} Σ k F(tk) −F(tk−1) , where the supremum extends over all finite partitions a = t0 < t1 < · · · < tn = b. The positive and negative variations of F are defined by similar expressions, though with |x| replaced by x± = (±x) ∨0, so that x = x+ −x−and |x| = x+ + x−. We also write Δb aF = F(b) −F(a).
We begin with a basic decomposition of functions of locally finite variation, corresponding to the Hahn decomposition in Theorem 2.8.
Theorem 2.21 (Jordan decomposition) For any function F on R of locally finite variation, (i) F = F+ −F−for some non-decreasing functions F+ and F−, and ∥F∥t s ≤Δt sF+ + Δt sF−, s < t, (ii) equality holds in (i) iffΔt sF± agree with the positive and negative varia-tions of F on (s, t].
Proof: For any s < t, we have (Δt sF)+ = (Δt sF)−+ Δt sF, |Δt sF| = (Δt sF)+ + (Δt sF)− = 2 (Δt sF)−+ Δt sF.
Summing over the intervals in an arbitrary partition s = t0 < t1 < · · · < tn = t and taking the supremum of each side, we obtain Δt sF+ = Δt sF−+ Δt sF, ∥F∥t s = 2 Δt sF−+ Δt sF = Δt sF+ + Δt sF−, where F±(x) denote the positive and negative variations of F on [0, x] (apart from the sign when x < 0). Thus, F = F(0)+F+ −F−, and the stated relation 48 Foundations of Modern Probability holds with equality. If also F = G+−G−for some non-decreasing functions G±, then (Δt sF)± ≤Δt sG±, and so Δt sF± ≤Δt sG±. Thus, ∥F∥t s ≤Δt sG+ + Δt sG−, with equality iffΔt sF± = Δt sG±.
2 We turn to another useful decomposition of finite-variation functions.
Theorem 2.22 (left and right continuity) For any function F on R of locally finite variation, (i) F = Fr + Fl, where Fr is right-continuous with left-hand limits and Fl is left-continuous with right-hand limits, (ii) if F is right-continuous, then so are the minimal components F± in The-orem 2.21.
Proof: (i) By Proposition 2.21 we may take F to be non-decreasing. The right- and left-hand limits F ±(s) then exist at every point s, and we note that F −(s) ≤F(s) ≤F +(s). Further note that F has at most countably many jump discontinuities. For t > 0, we define Fl(t) = Σ s∈[0, t) F +(s) −F(s) , Fr(t) = F(t) −Fl(t).
When t ≤0 we need to take the negative of the corresponding sum on (t, 0].
It is easy to check that Fl is left-continuous while Fr is right-continuous, and that both functions are non-decreasing.
(ii) Let F be right-continuous at a point s. If ∥F∥t s →c > 0 as t ↓s, we may choose t −s so small that ∥F∥t s < 4c/3. Next we may choose a partition s = t0 < t1 < · · · < tn = t of [s, t], such that the corresponding F-increments δk satisfy Σk |δk| > 2c/3. By the right continuity of F at s, we may assume t1 −s to be small enough that δ1 = |F(t1) −F(s)| < c/3. Then ∥F∥t t1 > c/3, and so 4c/3 > ∥F∥t s = ∥F∥t1 s + ∥F∥t t1 > c + c/3 = 4c/3, a contradiction. Hence c = 0. Assuming F± to be minimal, we obtain Δt sF± ≤∥F∥t s →0, t ↓s.
2 Justified by the last theorem, we may choose our finite-variation functions F to be right-continuous, which enables us to prove a basic correspondence with locally finite signed measures. We also consider the jump part and atomic decompositions F = F c + F d and ν = νd + νa, where for functions F or signed measures ν on R+ we define F d t = Σ s≤t ΔFs, νa = Σ t≥0 ν{t} δt.
2. Measure Extension and Decomposition 49 Theorem 2.23 (signed measures and finite-variation functions) Let F be a right-continuous function on R of locally finite variation. Then (i) there exists a unique signed measure ν on R, such that ν(s, t] = Δt sF, s < t, (ii) the Hahn decomposition ν = ν+ −ν−and Jordan decomposition F = F+ −F−into minimal components are related by ν±(s, t] ≡Δt sF±, s < t, (iii) the atomic decomposition ν = νd + νa and jump part decomposition F = F c + F d are related by νd(s, t] ≡Δt sF c, νa(s, t] ≡Δt sF d.
Proof: (i) The positive and negative variations F± are right-continuous by Proposition 2.22. Hence, Proposition 2.14 yields some locally finite measures μ± on R with μ±(s, t] ≡Δt sF±, and we may take ν = μ+ −μ−.
(ii) Choose an A ∈S with ν+Ac = ν−A = 0. For any B ∈B, we get μ+B ≥μ+(B ∩A) ≥ν(B ∩A) = ν+(B ∩A) = ν+B, which shows that μ+ ≥ν+. Then also μ−≥ν−. If the equality fails on some interval (s, t], then ∥F∥t s = μ+(s, t] + μ−(s, t] > ν+(s, t] + ν−(s, t], contradicting Proposition 2.21. Hence, μ± = ν±.
(iii) This is elementary, given the atomic decomposition in Theorem 2.18. 2 A function F : R →R is said to be absolutely continuous, if for any a < b and ε > 0 there exists a δ > 0, such that for any finite collection of disjoint intervals (ak, bk] ⊂(a, b] with Σk |bk −ak| < δ, we have Σk |F(bk)−F(ak)| < ε.
In particular, every absolutely continuous function is continuous with locally finite variation.
Given a function F of locally finite variation, we say that F is singular, if for any a < b and ε > 0 there exists a finite set of disjoint intervals (ak, bk] ⊂(a, b], such that Σ k |bk −ak| < ε, ∥F∥b a < Σ k |F(bk) −F(ak)| + ε.
A locally finite, signed measure ν on R is said to be absolutely continuous or singular, if the components ν± in the associated Hahn decomposition satisfy ν± ≪λ or ν± ⊥λ, respectively. We proceed to relate the notions of absolute continuity and singularity for functions and measures.
50 Foundations of Modern Probability Theorem 2.24 (absolutely continuous and singular functions) Let F be a right-continuous function of locally finite variation on R, and let ν be the signed measure on R with ν(s, t] ≡Δt sF. Then (i) F is absolutely continuous iffν ≪λ, (ii) F is singular iffν ⊥λ.
Proof: If F is absolutely continuous or singular, then the corresponding property holds for the total variation function ∥F∥x a with arbitrary a, and hence also for the minimal components F± in Proposition 2.23. Thus, we may take F to be non-decreasing, so that ν is a positive and locally finite measure on R.
First let F be absolutely continuous. If ν ̸≪λ, there exists a bounded interval I = (a, b) with subset A ∈B, such that λA = 0 but νA > 0. Taking ε = νA/2, choose a corresponding δ > 0, as in the definition of absolute continuity. Since A is measurable with outer Lebesgue measure 0, we may next choose an open set G with A ⊂G ⊂I such that λG < δ. But then νA ≤νG < ε = νA/2, a contradiction. This shows that ν ≪λ.
Next let F be singular, and fix any bounded interval I = (a, b]. Given any ε > 0, there exist some Borel sets A1, A2, . . . ⊂I, such that λAn < ε 2−n and νAn →νI. Then B = n An satisfies λB < ε and νB = νI. Next we may choose some Borel sets Bn ⊂I with λBn →0 and νBn = νI. Then C = n Bn satisfies λC = 0 and νC = νI, which shows that ν ⊥λ on I.
Conversely, let ν ≪λ, so that ν = f ·λ for some locally integrable function f ≥0. Fix any bounded interval I, put An = {x ∈I; f(x) > n}, and let ε > 0 be arbitrary. Since νAn →0 by Lemma 1.15, we may choose n so large that νAn < ε/2. Put δ = ε/2n. For any Borel set B ⊂I with λB < δ, we obtain νB = ν(B ∩An) + ν(B ∩Ac n) ≤νAn + n λB < 1 2 ε + n δ = ε.
In particular, this applies to any finite union B of intervals (ak, bk] ⊂I, and so F is absolutely continuous.
Finally, let ν ⊥λ. Fix any finite interval I = (a, b], and choose a Borel set A ⊂I such that λA = 0 and νA = νI. For any ε > 0 there exists an open set G ⊃A with λG < ε. Letting (an, bn) denote the connected components of G and writing In = (an, bn], we get Σn |In| < ε and Σn ν(I ∩In) = νI. This shows that F is singular.
2 We conclude with yet another basic extension theorem.3 Here the under-lying space S is taken to be a locally compact, second countable, Hausdorff topological space (abbreviated as lcscH). Let G, F, K be the classes of open, closed, and compact sets in S, and put ˆ G = {G ∈G; ¯ G ∈K}. Let ˆ C+ = ˆ C+(S) be the class of continuous functions f : S →R+ with compact support, defined 3Further basic measure extensions appear in Chapters 3 and 8.
2. Measure Extension and Decomposition 51 as the closure of the set {x ∈S; f(x) > 0}. By U ≺f ≺V we mean that f ∈ˆ C+ with 0 ≤f ≤1, such that f = 1 on U, supp f ⊂V o.
A positive linear functional on ˆ C+ is defined as a mapping μ : ˆ C+ →R+, such that μ(f + g) = μf + μg, f, g ∈ˆ C+.
This clearly implies the homogeneity μ(cf) = c μf for any f ∈ˆ C+ and c ∈R+.
A measure μ on the Borel σ-field S = BS is said to be locally finite4, if μK < ∞ for every K ∈K. We show how any positive linear functional can be extended to a measure.
Theorem 2.25 (Riesz representation) Let μ be a positive linear functional on ˆ C+(S), where S is lcscH. Then μ extends uniquely to a locally finite measure on S.
Several lemmas are needed for the proof, beginning with a simple topolog-ical fact.
Lemma 2.26 (partition of unity) For a compact set K ⊂S covered by the open sets G1, . . . , Gn, we may choose some functions f1, . . . , fn ∈ˆ C+(S) with fk ≺Gk, such that Σk fk = 1 on K.
Proof: For any x ∈K, we may choose some k ≤n and V ∈ˆ G with x ∈V and ¯ V ⊂Gk. By compactness, K is covered by finitely many such sets V1, . . . , Vm. For every k ≤n, let Uk be the union of all sets Vj with ¯ Vj ⊂Gk.
Then ¯ Uk ⊂Gk, and so we may choose g1, . . . , gn ∈ˆ C+ with Uk ≺gk ≺Gk.
Define fk = gk(1 −g1) · · · (1 −gk−1), k = 1, . . . , n.
Then fk ≺Gk for all k, and by induction f1 + · · · + fn = 1 −(1 −g1) · · · (1 −gn).
It remains to note that Πk(1 −gk) = 0 on K since K ⊂ k Uk.
2 By an inner content on an lcscH space S we mean a non-decreasing function μ : G →¯ R+, finite on ˆ G, such that μ is finitely additive and countably sub-additive, and also satisfies the inner continuity μG = sup μU; U ∈ˆ G, ¯ U ⊂G , G ∈G.
(7) Lemma 2.27 (inner approximation) For a positive linear functional μ on ˆ C+(S), an inner content ν on S is given by νG = sup μf; f ≺G , G ∈G.
4also called a Radon measure 52 Foundations of Modern Probability Proof: Note that ν is non-decreasing with ν∅= 0, and that νG < ∞for bounded G. Moreover, ν is inner continuous in the sense of (7).
To see that ν is countably sub-additive, fix any G1, G2, . . . ∈G, and let f ≺ k Gk. The compactness implies f ≺ k≤n Gk for a finite n, and Lemma 2.26 yields some functions gk ≺Gk, such that Σk gk = 1 on supp f.
The products fk = gkf satisfy fk ≺Gk and Σk fk = f, and so μf = Σ k≤n μfk ≤Σ k≤n νGk ≤Σ k≥1 νGk.
Since f ≺ k Gk was arbitrary, we obtain ν k Gk ≤Σk νGk, as required.
To see that ν is finitely additive, fix any disjoint sets G, G′ ∈G. If f ≺G and f ′ ≺G′, then f + f ′ ≺G ∪G′, and so μf + μf ′ = μ(f + f ′) ≤ν(G ∪G′) ≤νG + νG′.
Taking the supremum over all f and f ′ gives νG + νG′ = ν(G ∪G′), as re-quired.
2 An outer measure μ on S is said to be regular, if it is finitely additive on G and satisfies the outer and inner regularity μA = inf μG; G ∈G, G ⊃A , A ⊂S, (8) μG = sup μK; K ∈K, K ⊂G , G ∈G.
(9) Lemma 2.28 (outer approximation) Any inner content μ on S can be ex-tended to a regular outer measure.
Proof: We may define the extension by (8), since the right-hand side equals μA when A ∈G. By the finite additivity on G, we have 2 μ∅= μ∅< ∞, which implies μ∅= 0. To prove the countable sub-additivity, fix any A1, A2, . . . ⊂S.
For any ε > 0, we may choose some G1, G2, . . . ∈G with Gn ⊃An and μGn ≤μAn + ε2−n. Since μ is sub-additive on G, we get μ∪n An ≤μ∪n Gn ≤Σ n μGn ≤Σ n μAn + ε.
The desired relation follows since ε was arbitrary. Thus, the extension is an outer measure on S. Finally, the inner regularity in (9) follows from (7) and the monotonicity of μ.
2 Lemma 2.29 (measurability) If μ is a regular outer measure on S, then every Borel set in S is μ -measurable.
2. Measure Extension and Decomposition 53 Proof: Fix any F ∈F and A ⊂G ∈G. By the inner regularity in (9), we may choose G1, G2, . . . ∈G with ¯ Gn ⊂G \ F and μGn →μ(G \ F). Since μ is non-decreasing and finitely additive on G, we get μG ≥μ(G \ ∂Gn) = μGn + μ(G \ ¯ Gn) ≥μGn + μ(G ∩F) →μ(G \ F) + μ(G ∩F) ≥μ(A \ F) + μ(A ∩F).
The outer regularity in (8) gives μA ≥μ(A \ F) + μ(A ∩F), F ∈F, A ⊂S.
Hence, every closed set is measurable, and the measurability extends to σ(F) = BS = S by Theorem 2.1.
2 Proof of Theorem 2.25: Construct an inner content ν as in Lemma 2.27, and conclude from Lemma 2.28 that ν admits an extension to a regular outer measure on S. By Theorem 2.1 and Lemma 2.29, the restriction of the latter to S = BS is a Radon measure on S, here still denoted by ν.
To see that μ = ν on ˆ C+, fix any f ∈ˆ C+. For any n ∈N and k ∈Z+, let f n k (x) = nf(x) −k + ∧1, Gn k = {nf > k} = {f n k > 0}.
Noting that ¯ Gn k+1 ⊂{f n k = 1} and using the definition of ν and the outer regularity in (8), we get for appropriate k νf n k+1 ≤νGn k+1 ≤μf n k ≤ν ¯ Gn k ≤νf n k−1.
Writing G0 = Gn 0 = {f > 0} and noting that nf = Σk f n k , we obtain n νf −νG0 ≤n μf ≤n νf + ν ¯ G0.
Here ν ¯ G0 < ∞since G0 is bounded. Dividing by n and letting n →∞gives μf = νf.
To prove the asserted uniqueness, let μ and ν be locally finite measures on S with μf = νf for all f ∈ˆ C+. By an inner approximation, we have μG = νG for every G ∈G, and so a monotone-class argument yields μ = ν.
2 54 Foundations of Modern Probability Exercises 1. Show that any countably additive set function μ ≥0 on a field F with μ∅= 0 extends to a measure on σ(F). Further show that the extension is unique whenever μ is bounded.
2.
Construct d -dimensional Lebesgue measure λd directly, by the method of Theorem 2.2. Then show that λd = λd.
3. Derive the existence of d -dimensional Lebesgue measure from Riesz’ represen-tation theorem, using basic properties of the Riemann integral.
4. Let λ denote Lebesgue measure on R+, and fix any p > 0. Show that the class of step functions with bounded support and finitely many jumps is dense in Lp(λ).
Generalize to Rd +.
5. Show that if μ1 = f1 · μ and μ2 = f2 · μ, then μ1 ∨μ2 = (f1 ∨f2) · μ and μ1 ∧μ2 = (f1 ∧f2) · μ. In particular, we may take μ = μ1 + μ2. Extend the result to sequences μ1, μ2, . . . .
6. Given any family μi, i ∈I, of σ-finite measures on a measurable space S, prove the existence of a largest measure μ = n μn, such that μ ≤μi for all i ∈I. Further show that if the μi are bounded by some σ-finite measure ν, there exists a smallest measure ˆ μ = n μi, such that μi ≤ˆ μ for all i. (Hint: Use Zorn’s lemma.) 7. For any bounded, signed measure ν on (S, S), prove the existence of a smallest measure |ν| such that |νA| ≤|ν|A for all A ∈S. Further show that |ν| = ν+ + ν−, where ν± are the components in the Hahn decomposition of ν.
Finally, for any bounded, measurable function f on S, show that |νf| ≤|ν| |f|.
8. Extend the last result to complex-valued measures χ = μ + iν, where μ and ν are bounded, signed measures on (S, S). Introducing the complex-valued Radon– Nikodym density f = dχ/d(|μ| + |ν|), show that |χ| = |f| · (|μ| + |ν|).
Chapter 3 Kernels, Disintegration, and Invariance Kernel criteria and properties, composition and product, disintegration of measures and kernels, partial and iterated disintegration, Haar mea-sure, factorization and modularity, orbit measures, invariant measures and kernels, invariant disintegration, asymptotic and local invariance So far we have mostly dealt with standard measure theory, typically covered by textbooks in real analysis, though here presented with a slightly different emphasis to suit our subsequent needs. Now we come to some special topics of fundamental importance in probability theory, which are rarely treated even in more ambitious texts.
The hurried reader may again feel tempted to bypass this material and move on to the probabilistic parts of the book. This may be fine as far as the classical probability is concerned. However, already when coming to condi-tioning and compensation, and to discussions of the general Markov property and continuous-time chains, the serious student may feel the need to return for reference and to catch up with some basic ideas. The need may be even stronger as he/she moves on to the later and more advanced chapters.
Our plans are to begin with a discussion of kernels, which are indispens-able in the contexts of conditioning, Markov processes, random measures, and many other areas. They often arise by disintegration of suitable measures on a product space. A second main theme is the theory of invariant measures and disintegrations, of importance in the context of stationary distributions and associated Palm measures. We conclude with a discussion of locally and asymptotically invariant measures, needed in connection with Poisson and Cox convergence of particle systems.
Given some measurable spaces (S, S) and (T, T ), we define a kernel μ: S → T as a measurable function μ from S to the space of measures on T. In other words, μ is a function of two variables s ∈S and B ∈T , such that μ(s, B) is S-measurable in s for fixed B and a measure in B for fixed s. For a simple example, we note that the set of Dirac measures δs, s ∈S, can be regarded as a kernel δ : S →S.
Just as every measure μ on S can be identified with a linear functional f →μf on S+, we may identify a kernel μ : S →T with a linear operator A: T+ →S+, given by Af(s) = μsf, μsB = A1B(s), s ∈S, (1) 55 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 56 Foundations of Modern Probability for any f ∈T+ and B ∈T . When μ = δ, the associated operator is simply the identity map I on S+. In general, we often denote a kernel and its associated operator by the same letter.
When T is a localized Borel space, we usually require kernels μ: S →T to be locally finite, in the sense that μs is locally finite for every s. The following result characterizes the locally finite kernels from S to T.
Lemma 3.1 (kernel criteria) For any Borel spaces S, T, consider a function μ : S →MT with associated operator A as in (1). Then for any dissecting semi-ring I ⊂ˆ T , these conditions are equivalent: (i) μ is a kernel from S to T, (ii) μ is a measurable function from S to MT, (iii) μsI is S-measurable for every I ∈I, (iv) the operator A maps T+ into S+.
Furthermore, (v) for any function f : S →T, the mapping s →δf(s) is a kernel from S to T ifff is measurable, (vi) the identity map on MS is a kernel from MS to S.
Proof, (i) ⇔(ii): Since MT is generated by all projections πB : μ →μB with B ∈T , we see that (ii) ⇒(i). The converse holds by Lemma 1.4.
(i) ⇔(iii): Since trivially (i) ⇒(iii), it remains to show that (iii) ⇒(i). Then note that I is π-system, whereas the sets B ∈ˆ T with measurable μsB form a λ-system containing I. The assertion then follows by a routine application of the monotone-class Theorem 1.1.
(i) ⇔(iv): For f = 1B, we note that Af(s) = μsB. Thus, (i) is simply the property (iv) restricted to indicator functions. The extension to general f ∈T+ follows immediately by linearity and monotone convergence.
(v) This holds since s; δf(s)B = 1 = f −1B for all B ∈T .
(vi) The identity map on MS trivially satisfies (ii).
2 A kernel μ: S →T is said to be finite if μsT < ∞for all s ∈S, s -finite if it is a countable sum of finite kernels, and σ-finite if it satisfies μsfs < ∞for some measurable function f > 0 on S × T, where fs = f(s, ·). It is called a probability kernel if μsT ≡1 and a sub-probability kernel when μsT ≤1. We list some basic properties of s - and σ -finite kernels.
Lemma 3.2 (kernel properties) Let the kernel μ: S →T be s-finite. Then (i) for any measurable function f ≥0 on S ⊗T, the function μsfs is S-measurable, and νs = fs · μs is again an s-finite kernel from S to T, (ii) for any measurable function f : S × T →U, the function νs = μs ◦f −1 s is an s-finite kernel from S to U, 3. Kernels, Disintegration, and Invariance 57 (iii) μ = Σn μn for some sub-probability kernels μ1, μ2, . . .: S →T, (iv) μ is σ-finite iffμ = Σn μn for some finite, mutually singular kernels μ1, μ2, . . .: S →T, (v) μ is σ-finite iffthere exists a measurable function f > 0 on S × T with μsfs = 1{μs ̸= 0} for all s ∈S, (vi) if μ is uniformly σ-finite, it is also σ-finite, and (v) holds with f(s, t) = h(μs, t) for some measurable function h > 0 on MS × T, (vii) if μ is a probability kernel and T is Borel, there exists a measurable function f : S × [0, 1] →T with μs = λ ◦f −1 s for all s ∈S.
A function f as in (v) is called a normalizing function of μ.
Proof: (i) Since μ is s -finite and all measurability and measure proper-ties are preserved by countable summations, we may assume that μ is finite.
The measurability of μsfs follows from the kernel property when f = 1B for some measurable rectangle B ⊂S × T. It extends by Theorem 1.1 to ar-bitrary B ∈S × T , and then by linearity and monotone convergence to any f ∈(S ⊗T )+. Fixing f and applying the stated measurability to the functions 1Bf for arbitrary B ∈T , we see that fs ·μs is again measurable, hence a kernel from S to T. The s -finiteness is clear if we write f = Σnfn for some bounded f1, f2, . . . ∈(S ⊗T )+.
(ii) Since μ is s -finite and the measurability, measure property, and inverse mapping property are all preserved by countable summations, we may again take μ to be finite. The measurability of f yields f −1B ∈S ⊗T for every B ∈U. Since f −1 s B = {t ∈T; (s, t) ∈f −1B} = (f −1B)s, we see from (i) that (μs ◦f −1 s )B = μs(f −1B)s is measurable, which means that νs = μs ◦f −1 s is a kernel. Since ∥νs∥= ∥μs∥< ∞for all s ∈S, the kernel ν is again finite.
(iii) Since μ is s -finite and N2 is countable, we may assume that μ is finite.
Putting ks = [∥μs∥] + 1, we note that μs = Σnk−1 s 1{n ≤ks} μs, where each term is clearly a sub-probability kernel.
(iv) First let μ be σ-finite, so that μsfs < ∞for some measurable function f > 0 on S × T. Then (i) shows that νs = fs · μs is a finite kernel from S to T, and so μs = gs · νs with g = 1/f. Putting Bn = 1{n −1 < g ≤n} for n ∈N, we get μs = Σn(1Bng)s · νs, where each term is a finite kernel from S to T.
The terms are mutually singular since B1, B2, . . . are disjoint.
Conversely, let μ = Σn μn for some mutually singular, finite kernels μ1, μ2, . . . . By singularity we may choose a measurable partition B1, B2, . . . of S × T, such that μn is supported by Bn for each n. Since the μn are finite, we may further choose some measurable functions fn > 0 on S × T, such that μnfn ≤ 2−n for all n. Then f = Σn1Bnfn > 0, and μf = Σ m,n μm1Bnfn = Σ n μnfn 58 Foundations of Modern Probability ≤Σ n 2−n = 1, which shows that μ is σ-finite.
(v) If μ is σ-finite, then μsgs < ∞for some measurable function g > 0 on S ×T. Putting f(s, t) = g(s, t)/μsgs with 0/0 = 1, we note that μsfs = 1{μs ̸= 0}. The converse assertion is clear from the definitions.
(vi) The first assertion holds by (iv). Now choose a measurable partition B1, B2, . . . of T with μsBn < ∞for all s and n, and define g(s, t) = Σ n 2−n μsBn ∨1 −11Bn(t), s ∈S, t ∈T.
The function f(s, t) = g(s, t)/μsgs has the desired properties.
(vii) Since T is Borel, we may assume that T ∈B[0,1]. The function f(s, r) = inf x ∈[0, 1]; μs[0, x] ≥r , s ∈S, r ∈[0, 1], is product measurable on S×[0, 1], since the set {(s, r); μs[0, x] ≥r} is measur-able for each x by Lemma 1.13, and the infimum can be restricted to rational x. For s ∈S and x ∈[0, 1], we have λ ◦f −1 s [0, x] = λ r ∈[0, 1]; f(s, r) ≤x = μs[0, x], and so λ ◦f −1 s = μs on [0, 1] for all s by Lemma 4.3 below. In particular, fs(r) ∈T a.e. λ for every s ∈S. On the exceptional set {fs(r) / ∈T}, we may redefine fs(r) = t0 for any fixed t0 ∈T.
2 For any s -finite kernels μ : S →T and ν : S × T →U, we define their composition and product as the kernels μ ⊗ν : S →T × U and μν : S →U, given by (μ ⊗ν)sf = μs(dt) νs,t(du) f(t, u), (2) (μν)sf = μs(dt) νs,t(du) f(u) = (μ ⊗ν)s(1T ⊗f).
Note that μν equals the projection of μ ⊗ν onto U. Define Aμf(s) = μsf, and write ˆ μ = δ ⊗μ.
Lemma 3.3 (composition and product) For any s-finite kernels μ : S →T and ν : S × T →U, (i) μ ⊗ν and μν are s-finite kernels from S to T × U and U, respectively, (ii) μ ⊗ν is σ-finite whenever this holds for μ, ν, (iii) the kernel operations μ ⊗ν and μν are associative, (iv) μˆ ν = μ ⊗ν and ˆ μˆ ν = (μ ⊗ν)ˆ , (v) for any kernels μ: S →T and ν : T →U, we have Aμν = AμAν.
3. Kernels, Disintegration, and Invariance 59 Proof: (i) By Lemma 3.2 (i) applied twice, the inner integral in (2) is S ⊗T -measurable and the outer integral is S-measurable. The countable additivity holds by repeated monotone convergence. This proves the kernel property of μ ⊗ν, and the result for μν is an immediate consequence.
(ii) If μ and ν are σ-finite, we may choose some measurable functions f > 0 on S × T and g > 0 on S × T × U with μsfs ≤1 and νs,tgs,t ≤1 for all s and t. Then (2) yields (μ ⊗ν)s(fg)s ≤1.
(iii) Consider the kernels μ: S →T, ν : S ×T →U, and ρ: S ×T ×U →V .
Letting f ∈(T ⊗U ⊗V)+ be arbitrary and using (2) repeatedly, we get μ ⊗(ν ⊗ρ) s f = μs(dt) (ν ⊗ρ)s,t(du dv) f(t, u, v) = μs(dt) νs,t(du) ρs,t,u(dv) f(t, u, v) = (μ ⊗ν)s(dt du) ρs,t,u(dv) f(t, u, v) = (μ ⊗ν) ⊗ρ s f, which shows that μ ⊗(ν ⊗ρ) = (μ ⊗ν) ⊗ρ.
Projecting onto V yields μ(νρ) = (μν)ρ.
(iv) Consider any kernels μ: S →T and ν : S × T →U, and note that μˆ ν and μ ⊗ν are both kernels from S to T × U. For any f ∈(T ⊗U)+, we have (μˆ ν)sf = μs(dt) δs,t(ds′dt′) νs′,t′(du) f(t′, u) = μs(dt) νs,t(du) f(t, u) = (μ ⊗ν)sf, which shows that μˆ ν = μ ⊗ν. Hence, by (ii) ˆ μˆ ν = (δ ⊗μ) ⊗ν = δ ⊗(μ ⊗ν) = (μ ⊗ν)ˆ.
(v) For any f ∈U+, we have Aμνf(s) = (μν)sf = (μν)s(du) f(u) = μs(dt) νt(du) f(u) = μs(dt) Aνf(t) = Aμ(Aνf)(s) = (AμAν)f(s), which implies Aμνf = (AμAν)f, and hence Aμν = AμAν.
2 60 Foundations of Modern Probability We turn to the reverse problem of representing a given measure ρ on S ×T in the form ν ⊗μ, for some measure ν on S and kernel μ : S →T.
The resulting disintegration may be regarded as an infinitesimal decomposition of ρ into measures δs⊗μs. Any σ-finite measure ν ∼ρ(·×T) is called a supporting measure of ρ, and we refer to μ as the associated disintegration kernel.
Theorem 3.4 (disintegration) Let ρ be a σ-finite measure on S × T, where T is Borel. Then (i) ρ = ν ⊗μ for a σ-finite measure ν ∼ρ(· × T) ≡ˆ ρS and a σ-finite kernel μ: S →T, (ii) the μs are ν-a.e. unique up to normalizations, and they are a.e. bounded iffˆ ρS is σ-finite, (iii) when ˆ ρ is σ-finite and ν = ˆ ρS, we may choose the μs to be probability measures on T.
Proof: (i) If ρ is σ-finite and ̸= 0, we may choose g ∈(S × T)+ such that ρ = g · ˜ ρ for some probability measure ˜ ρ on S × T. If ˜ ρ = ν ⊗˜ μ for some measure ν on S and kernel ˜ μ: S × T, then Lemma 3.2 (i) shows that μ = g · ˜ μ is a σ-finite kernel from S to T, and so for any f ≥0, (ν ⊗μ)f = μ ⊗(g · ˜ μ) f = (μ ⊗˜ μ)(fg) = ˜ ρ(fg) = ρf, which shows that ρ = ν ⊗μ. We may then assume that ρ is a probability measure on S × T. Since T is Borel, we may further take T = R.
Choosing ν = ˆ ρ and putting νr = ρ(· × (−∞, r]), we get νr ≤ν for all r ∈R, and so Theorem 2.10 yields νr = fr · ν for some measurable functions fr : S →[0, 1]. Here fr is non-decreasing in r ∈Q with limits 0 and 1 at ±∞, outside a fixed ν -null set N ⊂S. Redefining fr = 0 on N, we can make those properties hold identically. For the right-continuous versions Fs,t = infr>t fr with t ∈R, we see by monotone convergence that Fr = fr a.e. for all r ∈Q.
Now Proposition 2.14 yields some probability measures μ(s, ·) on R with distribution functions F(s, t). Here μ(s, B) is measurable in s ∈S for each B ∈T by a monotone-class argument, which means that μ is a kernel from S to T. Furthermore, we have for any A ∈S and r ∈Q ρ A × (−∞, r] = A ν(ds) Fr(s) = A ν(ds) μs(−∞, r], which extends by a monotone-class argument and monotone convergence to the general disintegration formula ρf = (ν ⊗μ)f, for arbitrary f ∈(S × T )+.
(ii) Suppose that ρ = ν ⊗μ = ν′ ⊗μ′ with ν ∼ν′ ∼ρ(· × T). Then 3. Kernels, Disintegration, and Invariance 61 g · ρ = ν ⊗(g · μ) = ν′ ⊗(g · μ′), g ≥0, which allows us to consider only bounded measures ρ. Since also (h · ν) ⊗μ = ν′ ⊗(h · μ′) for any h ∈S+, we may further assume that ν = ν′. Then A ν(ds) μsB −μ′ sB = 0, A ∈S, B ∈T , which implies μsB = μ′ sB a.e. ν. Since T is Borel, a monotone-class argument yields μs = μ′ s a.e. ν.
2 The following partial disintegration will be needed in Chapter 31.
Corollary 3.5 (partial disintegration) Let ν, ρ be σ-finite measures on S and S×T, respectively, where T is Borel. Then there exists an a.e. unique maximal kernel μ: S →T, ν ⊗μ ≤ρ.
Here the maximality of μ means that, whenever μ′ is a kernel satisfying ν ⊗μ′ ≤ρ, we have μ′ s ≤μs for s ∈S a.e. ν.
Proof: Since ρ is σ-finite, we may choose a σ-finite measure γ on S with γ ∼ρ(· × T). Consider its Lebesgue decomposition γ = γa + γs with respect to ν, as in Theorem 2.10, so that γa ≪ν and γs ⊥ν.
Choose A ∈S with νAc = γsA = 0, and let ρ′ denote the restriction of ρ to A × T. Then ρ′(· × T) ≪ν, and so Theorem 3.4 yields a σ-finite kernel μ : S →T with ν ⊗μ = ρ′ ≤ρ.
Now consider any kernel μ′ : S →T with ν ⊗μ′ ≤ρ. Since νAc = 0, we have ν ⊗μ′ ≤ρ′ = ν ⊗μ. Since μ and μ′ are σ-finite, there exists a measurable function g > 0 on S × T, such that the kernels ˜ μ = g · μ and ˜ μ′ = g · μ′ satisfy ∥˜ μs∥∨∥˜ μ′ s∥≤1. Then ν ⊗˜ μ′ ≤ν ⊗˜ μ, and so ˜ μ′ sB ≤˜ μsB, s ∈S a.e. ν, for every B ∈S. This extends by a monotone-class argument to μ′ s ≤μs a.e. ν, which shows that μ is maximal. In particular, μ is a.e. unique.
2 We often need to disintegrate kernels. Here the previous construction still applies, except that now we need a product-measurable version of the Radon– Nikodym theorem, and we must check that the required measurability is pre-served throughout the construction. Since the simplest approach is based on martingale methods, we postpone the proof until Theorem 9.27.
Corollary 3.6 (disintegration of kernels) Consider some σ-finite kernels ν : S →T and ρ: S →T × U with νs ∼ρs(· × U) for all s ∈S, where T, U are Borel. Then there exists a σ-finite kernel μ: S × T →U, ρ = ν ⊗μ.
62 Foundations of Modern Probability We turn to the subject of iterated disintegration, needed for some condi-tional constructions in Chapters 8 and 31. To explain the notation, we consider some σ-finite measures μ1, μ2, μ3, μ12, μ13, μ23, μ123 on products of the Borel spaces S, T, U. They are said to form a projective family, if all relations of the form μ1 ∼μ12(· × T) or μ12 ∼μ123(· × U) are satisfied1. Then Theorem 3.4 yields some disintegrations μ12 = μ1 ⊗μ2|1 ∼ = μ2 ⊗μ1|2, μ13 = μ1 ⊗μ3|1, μ123 = μ12 ⊗μ3|12 = μ1 ⊗μ23|1 ∼ = μ2 ⊗μ13|2, for some σ-finite kernels μ2|1, μ3|12, μ12|3, . . . between appropriate spaces, where ∼ = denotes equality apart from the order of component spaces. If μ2|1 ∼μ23|1(·× U) a.e. μ1, we may proceed to form some iterated disintegration kernels such as μ3|2|1, where μ23|1 = μ2|1 ⊗μ3|2|1 a.e. μ1. We show that the required support properties hold automatically, and that the iterated disintegration kernels μ3|2|1 and μ3|1|2 are a.e. equal and agree with the single disintegration kernel μ3|12.
Theorem 3.7 (iterated disintegration) Consider a projective set of σ-finite measures μ1, μ2, μ3, μ12, μ13, μ23, μ123 on products of Borel spaces S, T, U, and form the associated disintegration kernels μ1|2, μ2|1, μ3|1, μ3|12. Then (i) μ2|1 ∼μ23|1(· × U) a.e. μ1, μ1|2 ∼μ13|2(· × U) a.e. μ2, (ii) μ3|12 = μ3|2|1 ∼ = μ3|1|2 a.e. μ12, (iii) if μ13 = μ123(· × T × ·), then also μ3|1 = μ2|1 μ3|1|2 a.e. μ1.
Proof: (i) We may assume that μ123 ̸= 0. Fixing a probability measure ˜ μ123 ∼μ123, we form the projections ˜ μ1, ˜ μ2, ˜ μ3, ˜ μ12, ˜ μ13, ˜ μ23 of ˜ μ123 onto S, T, U, S × T, S × U, T × U, respectively. Then for any A ∈S ⊗T , ˜ μ12A = ˜ μ123(A × U) ∼μ123(A × U) ∼μ12A, which shows that ˜ μ12 ∼μ12. A similar argument gives ˜ μ1 ∼μ1 and ˜ μ2 ∼μ2.
Hence, Theorem 2.10 yields some measurable functions p1, p2, p12, p123 > 0 on S, T, S × T, S × T × U, satisfying ˜ μ1 = p1 · μ1, ˜ μ2 = p2 · μ2, ˜ μ12 = p12 · μ12, ˜ μ123 = p123 · μ123.
Inserting those densities into the disintegrations 1For functions or constants a, b ≥0, the relation a ∼b means that a = 0 iffb = 0.
3. Kernels, Disintegration, and Invariance 63 ˜ μ12 = ˜ μ1 ⊗˜ μ2|1 ∼ = ˜ μ2 ⊗˜ μ1|2, ˜ μ123 = ˜ μ1 ⊗˜ μ23|1 ∼ = ˜ μ2 ⊗˜ μ13|2, (3) we obtain μ12 = μ1 ⊗ p2|1 · ˜ μ2|1 ∼ = μ2 ⊗ p1|2 · ˜ μ1|2 , μ123 = μ1 ⊗ p23|1 · ˜ μ23|1 ∼ = μ2 ⊗ p13|2 · ˜ μ13|2 , where p2|1 = p1 p12 , p1|2 = p2 p12 , p23|1 = p1 p123 , p13|2 = p2 p123 .
Comparing with the disintegrations of μ12 and μ123, and invoking the unique-ness in Theorem 3.4, we obtain a.e.
˜ μ2|1 ∼μ2|1, ˜ μ1|2 ∼μ1|2, ˜ μ23|1 ∼μ23|1, ˜ μ13|2 ∼μ13|2.
(4) Furthermore, we get from (3) ˜ μ1 ⊗˜ μ2|1 = ˜ μ12 = ˜ μ123(· × U) = ˜ μ1 ⊗˜ μ23|1(· × U), and similarly with subscripts 1 and 2 interchanged, and so the mentioned uniqueness yields a.e.
˜ μ2|1 = ˜ μ23|1(· × U), ˜ μ1|2 = ˜ μ13|2(· × U).
(5) Combining (4) and (5), we get a.e.
μ2|1 ∼˜ μ2|1 = ˜ μ23|1(· × U) ∼μ23|1(· × U), and similarly with 1 and 2 interchanged.
(ii) By (i) and Theorem 3.6, we have a.e.
μ23|1 = μ2|1 ⊗μ3|2|1, μ13|2 = μ1|2 ⊗μ3|1|2, for some product-measurable kernels μ3|2|1 and μ3|1|2. Combining the various disintegrations and using the commutativity in Lemma 3.3 (ii), we get μ12 ⊗μ3|12 = μ123 = μ1 ⊗μ23|1 = μ1 ⊗μ2|1 ⊗μ3|2|1 = μ12 ⊗μ3|2|1, and similarly with 1 and 2 interchanged. It remains to use the a.e. uniqueness in Theorem 3.4.
64 Foundations of Modern Probability (iii) The stated hypothesis yields μ1 ⊗μ3|1 = μ13 = μ123(· × T × ·) = μ1 ⊗μ23|1(T × ·), and so by (ii) and the uniqueness in Theorem 3.4, we have a.e.
μ3|1 = μ23|1(T × ·) = μ2|1 ⊗μ3|2|1(T × ·) = μ2|1 μ3|2|1 = μ2|1 μ3|1|2.
2 A group G with associated σ-field G is said to be measurable, if the group operations r →r−1 and (r, s) →rs on G are G -measurable. Defining the left and right shifts θr and ˜ θr on G by θrs = ˜ θsr = rs, r, s ∈G, we say that a measure λ on G is left-invariant if λ ◦θ−1 r = λ for all r ∈G. A σ-finite, left-invariant measure λ ̸= 0 is called a Haar measure on G. Writing ˜ f(r) = f(r−1), we may define a corresponding right-invariant measure ˜ λ by ˜ λf = λ ˜ f.
We call G a topological group if it is endowed with a topology rendering the group operations continuous. When G is lcscH 2, it becomes a measurable group when endowed with the Borel σ-field G. We state the basic existence and uniqueness theorem for Haar measures.
Theorem 3.8 (Haar measure) For any lcscH group G, (i) there exists a left-invariant, locally finite measure λ ̸= 0 on G, (ii) λ is unique up to a normalization, (iii) when G is compact, λ is also right-invariant.
Proof (Weil): For any f, g ∈ˆ C+ we define |f|g = inf Σk ck, where the infimum extends over all finite sets of constants c1, . . . , cn ≥0, such that f(x) ≤Σ k≤n ck g(skx), x ∈G, for some s1, . . . , sn ∈G. By compactness we have |f|g < ∞when g ̸= 0, and since |f|g is non-decreasing and translation invariant in f, it satisfies the sub-additivity and homogeneity properties |f + f ′|g ≤|f|g + |f ′|g, |cf|g = c|f|g, (6) as well as the inequalities ∥f∥ ∥g∥≤|f|g ≤|f|h|h|g.
(7) We may normalize |f|g by fixing an f0 ∈ˆ C+ \ {0} and putting λgf = |f|g/|f0|g, f, g ∈ˆ C+, g ̸= 0.
2locally compact, second countable, Hausdorff 3. Kernels, Disintegration, and Invariance 65 From (6) and (7) we note that λg(f + f ′) ≤λgf + λgf ′, λg(cf) = cλgf, |f0|−1 f ≤λgf ≤|f|f0.
(8) Conversely, λg is nearly super-additive in the following sense: Lemma 3.9 (near super-additivity) For any f, f ′ ∈ˆ C+ and ε > 0, there exists an open set U ̸= ∅with λgf + λgf ′ ≤λg(f + f ′) + ε, 0 ̸= g ≺U.
Proof: Fix any h ∈ˆ C+ with h = 1 on supp(f + f ′), and define for δ > 0 fδ = f + f ′ + δh, hδ = f/fδ, h′ δ = f ′/fδ, so that hδ, h′ δ ∈ˆ C+. By compactness, we may choose a neighborhood U of the identity element ι ∈G satisfying hδ(x) −hδ(y) < δ, h′ δ(x) −h′ δ(y) < δ, x−1y ∈U.
(9) Now assume 0 ̸= g ≺U, and let fδ(x) ≤Σk ck g(skx) for some s1, . . . , sn ∈ G and c1, . . . , cn ≥0. Since g(skx) ̸= 0 implies skx ∈U, (9) yields f(x) = fδ(x) hδ(x) ≤Σ k ck g(skx) hδ(x) ≤Σ k ck g(skx) hδ(s−1 k ) + δ , and similarly for f ′. Noting that hδ + h′ δ ≤1, we get |f|g + |f ′|g ≤Σ k ck(1 + 2δ).
Taking the infimum over all dominating sums for fδ and using (6), we obtain |f|g + |f ′|g ≤|fδ|g(1 + 2δ) ≤ |f + f ′|g + δ|h|g (1 + 2δ).
Now divide by |f0|g, and use (8) to obtain λgf + λgf ′ ≤ λg(f + f ′) + δλgh (1 + 2δ) ≤λg(f + f ′) + 2δ|f + f ′|f0 + δ(1 + 2δ)|h|f0, which tends to λg(f + f ′) as δ →0.
2 66 Foundations of Modern Probability End of proof of Theorem 3.8: We may regard the functionals λg as elements of the product space Λ = (R+) ˆ C+. For any neighborhood U of ι, let ΛU be the closure in Λ of the set {λg; 0 ̸= g ≺U}. Since λgf ≤|f|f0 < ∞for all f ∈ˆ C+ by (8), the ΛU are compact by Tychonov’s theorem3. Furthermore, the finite intersection property holds for the family {ΛU; ι ∈U}, since U ⊂V implies ΛU ⊂ΛV . We may then choose an element λ ∈ U ΛU, here regarded as a functional on ˆ C+. From (8) we note that λ ̸= 0.
To see that λ is linear, fix any f, f ′ ∈ˆ C+ and a, b ≥0, and choose some g1, g2, . . . ∈ˆ C+ with supp gn ↓{ι}, such that λgnf →λf, λgnf ′ →λf ′, λgn( af + bf ′ ) →λ( af + bf ′ ).
By (8) and Lemma 3.9 we obtain λ(af + bf ′) = a λf + b λf ′. Thus, λ is a non-trivial, positive linear functional on ˆ C+, and so by Theorem 2.25 it extends uniquely to a locally finite measure on S. The invariance of the functionals λg clearly carries over to λ.
Now consider any left-invariant, locally finite measure λ ̸= 0 on G. Fixing a right-invariant, locally finite measure μ ̸= 0 and a function h ∈ˆ C+ \ {0}, we define p(x) = h(y−1x) μ(dy), x ∈G, and note that p > 0 on G. Using the invariance of λ and μ along with Fubini’s theorem, we get for any f ∈ˆ C+ (λh)(μf) = h(x) λ(dx) f(y) μ(dy) = h(x) λ(dx) f(yx) μ(dy) = μ(dy) h(x)f(yx) λ(dx) = μ(dy) h(y−1x)f(x) λ(dx) = f(x) λ(dx) h(y−1x) μ(dy) = λ(fp).
Since f was arbitrary, we obtain (λh)μ = p · λ or equivalently λ/λh = p−1 · μ.
The asserted uniqueness now follows, since the right-hand side is independent of λ. If S is compact, we may choose h ≡1 to obtain λ/λS = μ/μS.
2 From now on, we make no topological or other assumptions on G, beyond the existence of a Haar measure. The uniqueness and basic properties of such measures are covered by the following result, which also provides a basic fac-torization property. For technical convenience we consider s-finite (rather than σ-finite) measures, defined as countable sums of bounded measures. Note that most of the basic computational rules, including Fubini’s theorem, remain valid 3Products of compact spaces are again compact.
3. Kernels, Disintegration, and Invariance 67 for s -finite measures, and that s -finiteness has the advantage of being preserved by measurable maps.
Theorem 3.10 (factorization and modularity) Consider a measurable space S and a measurable group G with Haar measure λ. Then (i) for any s -finite, G-invariant measure μ on G × S, there exists a unique, s -finite measure ν on S satisfing μ = λ ⊗ν, (ii) μ, ν are simultaneously σ-finite, (iii) there exists a measurable homomorphism Δ: G →(0, ∞) with ˜ λ = Δ · λ, λ ◦˜ θ−1 r = Δrλ, r ∈G, (iv) when ∥λ∥< ∞we have Δ ≡1, and λ is also right-invariant with ˜ λ = λ.
In particular, (i) shows that any s -finite, left-invariant measure on G is proportional to λ. The dual mapping ˜ Δ is known as the modular function of G. The use of s -finite measures leads to some significant simplifications in subsequent proofs.
Proof: (i) Since λ is σ-finite, we may choose a measurable function h > 0 on G with λh = 1, and define Δr = λ(h ◦˜ θr) ∈(0, ∞], r ∈G.
Using Fubini’s theorem (three times), the G -invariance of μ and λ, and the definitions of Δ and ˜ λ, we get for any f ≥0 μ(fΔ) = μ(dr ds) f(r, s) λ(dp) h(pr) = λ(dp) μ(dr ds) f p−1r, s h(r) = μ(dr ds) h(r) λ(dp) f(p−1, s) = ˜ λ(dp) μ(dr ds) h(r) f(p, s).
Choosing μ = λ for a singleton S, we get ˜ λf = λ(fΔ), and so ˜ λ = Δ · λ.
Therefore λ = ˜ Δ · ˜ λ = ˜ ΔΔ · λ, hence ˜ ΔΔ = 1 a.e. λ, and finally Δ ∈(0, ∞) a.e.
λ. Thus, for general S, μ(fΔ) = λ(dp) Δ(p) μ(dr ds) h(r) f(p, s).
Since Δ > 0, we have μ(· × S) ≪λ, and so Δ ∈(0, ∞) a.e. μ(· × S), which yields the simplified formula μf = λ(dp) μ(dr ds) h(r) f(p, s), showing that μ = λ ⊗ν with νf = μ(h ⊗f).
68 Foundations of Modern Probability (ii) When μ is σ-finite, we have μf < ∞for some measurable function f > 0 on G × S, and so by Fubini’s theorem νf(r, ·) < ∞for r ∈G a.e. λ, which shows that even ν is σ-finite. The reverse implication is obvious.
(iii) For every r ∈G, the measure λ ◦˜ θ−1 r is left-invariant since θp and ˜ θr commute for all p, r ∈G. Hence, (i) yields λ ◦˜ θ−1 r = crλ for some constants cr > 0. Applying both sides to h gives cr = Δ(r). The homomorphism relation Δ(p q) = Δ(p)Δ(q) follows from the reverse semi-group property ˜ θp q = ˜ θq ◦˜ θp, and the measurability holds by the measurability of the group operation and Fubini’s theorem.
(iv) Statement (iii) yields ∥λ∥= Δr∥λ∥for all r ∈G, and so Δ ≡1 when ∥λ∥< ∞. Then (iii) gives ˜ λ = λ and λ ◦˜ θ−1 r = λ for all r ∈G, which shows that λ is also right-invariant.
2 Given a group G and a space S, we define an action of G on S as a mapping (r, s) →rs from G × S to S, such that p(rs) = (p r)s and ιs = s for all p, r ∈G and s ∈S, where ι denotes the identity element of G. If G and S are measurable spaces, we say that G acts measurably on S if the action map is product-measurable. The shifts θr and projections πs are defined by rs = θrs = πsr, and the orbit containing s is given by πsG = {rs; r ∈G}.
The orbits form a partition of S, and we say that the action is transitive if πsG ≡S. For s, s′ ∈S, we write s ∼s′ if s and s′ belong to the same orbit, so that s = rs′ for some r ∈G. If G acts on both S and T, its joint action on S × T is given by r(s, t) = (rs, rt).
For any group G acting on S, we say that a subset B ⊂S is G-invariant if B ◦θ−1 r = B for all r ∈G, and a function f on S is G-invariant if f ◦θr = f.
When S is measurable with σ-field S, the class of G -invariant sets in S is again a σ-field, denoted by IS. For a transitive group action we have IS = {∅, S}, and every invariant function is a constant. When the group action is measurable, a measure ν on S is said to be G-invariant if ν ◦θ−1 r = ν for all r ∈G.
When G is a measurable group with Haar measure λ, acting measurably on S, we say that G acts properly4 on S, if there exists a normalizing function g > 0 on S, such that g is measurable and satisfies λ(g ◦πs) < ∞for all s ∈S.
We may then define a kernel ϕ on S by ϕs = λ ◦π−1 s λ(g ◦πs) , s ∈S.
(10) Lemma 3.11 (orbit measures) Let G be a measurable group with Haar mea-sure λ, acting properly on S. Then the measures ϕs are σ-finite and satisfy (i) ϕs = ϕs′, s ∼s′, ϕs ⊥ϕs′, s ̸∼s′, (ii) ϕs ◦θ−1 r = ϕs = ϕ rs, r ∈G, s ∈S.
4Not to be confused with the topological notion of proper group action.
3. Kernels, Disintegration, and Invariance 69 Proof: The ϕs are σ-finite since ϕsg ≡1. The first equality in (ii) holds by the left-invariance of λ, and the second holds since λ ◦π−1 rs = Δrλ ◦π−1 s by Lemma 3.10 (ii). The latter equation yields ϕs = ϕs′ when s ∼s′. The orthogonality holds for s ̸∼s′, since ϕs is confined to the orbit πs(G), which is universally measurable by Theorem A1.2 (i).
2 We show that any s -finite, G -invariant measure on S is a unique mixture of orbit measures. An invariant measure ν on S is said to be ergodic if νI∧νIc = 0 for all I ∈IS.
It is further said to be extreme, if for any decomposition ν = ν′ + ν′′ with invariant ν′ and ν′′, all three measures are proportional.
Theorem 3.12 (invariant measures) Let G be a measurable group with Haar measure λ, acting properly on S, and define ϕ by (10) in terms of a normalizing function g > 0 on S. Then there is a 1−1 correspondence between all s-finite, invariant measures ν on S and all s-finite measures μ on the range ϕ(S), given by (i) ν = ν(ds) g(s) ϕs = m μ(dm), (ii) μf = ν(ds) g(s) f(ϕs), f ≥0.
For such μ, ν, we have (iii) μ, ν are simultaneously σ-finite, (iv) ν is ergodic or extreme iffμ is degenerate, (v) there exists a σ-finite, G-invariant measure ˜ ν ∼ν.
Proof, (i)−(ii): Let ν be s -finite and G -invariant. Using (10), Fubini’s theorem, and Theorem 3.10 (iii), we get for any f ∈S+ ν(ds) g(s) ϕsf = ν(ds) g(s) λ(f ◦πs) λ(g ◦πs) = λ(dr) ν(ds) g(s) f(rs) λ(g ◦πs) = λ(dr) ν(ds) g(r−1s) f(s) λ(g ◦πr−1s) = λ(dr) Δr ν(ds) g(r−1s) f(s) λ(g ◦πs) = λ(dr) ν(ds) g(rs) f(s) λ(g ◦πs) = ν(ds) f(s) = νf, which proves the first relation in (i). The second relation follows with μ as in (ii), by the substitution rule for integrals.
70 Foundations of Modern Probability Conversely, let ν = m μ(dm) for an s -finite measure μ on ϕ(S).
For measurable M ⊂MS we have A ≡ϕ−1M ∈IS by Lemma 3.11, and so ϕs(g; A) = (λ ◦π−1 s )(1Ag) λ(g ◦πs) = 1A(s) = 1M(ϕs).
Hence, (g · ν) ◦ϕ−1M = ν(g; A) = μ(dm) m(g; A) = M μ(dm) = μM, which shows that μ is uniquely given by (ii) and is therefore s-finite.
(iii) Use (i)−(ii) and the facts that g > 0 and ϕs ̸= 0.
(iv) Since μ is unique, ν is extreme iffμ is degenerate. For any I ∈IS, ϕsI ∧ϕsIc = λG λ(g ◦πs) 1I(s) ∧1Ic(s) = 0, s ∈S, which shows that the ϕs are ergodic. Conversely, let ν = m μ(dm) with μ non-degenerate, so that μM ∧μM c ̸= 0 for a measurable subset M ⊂MS.
Then I = ϕ−1M ∈IS with νI ∧νIc ̸= 0, which shows that ν is non-ergodic.
Hence, ν is ergodic iffμ is degenerate.
(v) Choose a bounded measure ˆ ν ∼ν, and define ˜ ν = ˆ νϕ. Then ˜ ν is in-variant by Lemma 3.11, and ˜ ν ∼(g · ν)ϕ = ν by (ii). It is also σ-finite, since ˜ νg = ˆ νϕg = ˆ νS < ∞by (i).
2 When G acts measurably on S and T, we say that a kernel μ : S →T is invariant5 if μ rs = μs ◦θ−1 r , r ∈G, s ∈S.
The definition is motivated by the following observation: Lemma 3.13 (invariant kernels) Let the group G act measurably on the Borel spaces S, T, and consider a σ-finite, invariant measure ν on S and a kernel μ: S →T. Then these conditions are equivalent: (i) the measure ρ = ν ⊗μ is jointly invariant on S × T, (ii) for any r ∈G, the kernel μ is r-invariant in s ∈S a.e. ν.
Proof, (ii) ⇒(i): Assuming (ii), we get for any measurable f ≥0 on S ×T (ν ⊗μ) ◦θ−1 r f = (ν ⊗μ)(f ◦θr) = ν(ds) μs(dt) f(rs, rt) = ν(ds) μs ◦θ−1 r (dt) f(rs, t) 5If we write μ ◦θ−1 r = θrμ, the defining condition simplifies to μ rs = θrμs.
3. Kernels, Disintegration, and Invariance 71 = ν(ds) μrs(dt) f(rs, t) = ν(ds) μs(dt) f(s, t) = (ν ⊗μ)f.
(i) ⇒(ii): Assuming (i) and fixing r ∈G, we get by the same calculation μs ◦θ−1 r (dt) f(rs, t) = μrs(dt) f(rs, t), s ∈S a.e. ν, which extends to (ii) since f is arbitrary.
2 We turn to the reverse problem of invariant disintegration.
Theorem 3.14 (invariant disintegration) Let G be a measurable group with Haar measure λ, acting properly on S and measurably on T, where S, T are Borel, and consider a σ-finite, jointly G-invariant measure ρ on S × T. Then (i) ρ = ν ⊗μ for a σ-finite, invariant measure ν on S and an invariant kernel μ: S →T, (ii) when S = G we may choose ν = λ, in which case μ is given uniquely, for any B ∈G with λB = 1, by μrf = B×S ρ(dp ds) f(rp−1s), r ∈G, f ≥0.
We begin with part (ii), which may be proved by a simple skew factorization.
Proof of (ii): On G × S we define the mappings ϑ(r, s) = (r, rs), θr(p, s) = (rp, rs), θ′ r(p, s) = (rp, s), where p, r ∈G and s ∈S. Since clearly ϑ−1 ◦θr = θ′ r ◦ϑ−1, the joint G-invariance of ρ gives ρ ◦ϑ ◦θ′ r −1 = ρ ◦θ−1 r ◦ϑ = ρ ◦ϑ, r ∈G, where ϑ = (ϑ−1)−1. Hence, ρ ◦ϑ is invariant under shifts of G alone, and so ρ ◦ϑ = λ ⊗ν by Theorem 3.10 for a σ-finite measure ν on S, which yields the stated formula for the invariant kernel μr = ν ◦θ−1 r . To see that ρ = λ ⊗μ, let f ≥0 be measurable on G × S, and write (λ ⊗μ)f = λ(dr) ν ◦θ−1 r (ds) f(r, s) = λ(dr) ν(ds) f(r, rs) = (λ ⊗ν) ◦ϑ−1 f = ρf.
2 To prepare for the proof of part (i), we begin with the special case where S contains G as a factor.
72 Foundations of Modern Probability Corollary 3.15 (repeated disintegration) Let G be a measurable group with Haar measure λ, acting measurably on the Borel spaces S, T, and consider a σ-finite, jointly invariant measure ρ on G × S × T. Then (i) ρ = λ ⊗ν ⊗μ for some invariant kernels ν : G →S and μ: G × S →T, (ii) νr ≡ˆ ν ◦θ−1 r and μr,s ≡ˆ μr−1s ◦θ−1 r for some measure ˆ ν on S and kernel ˆ μ: S →T, (iii) the measure ˆ ν ⊗ˆ μ on S × T is unique, (iv) we may choose ν as any invariant kernel G →S with λ ⊗ν ∼ρ(· × T).
When G acts measurably on S, its action on G × S is proper, and so for any s-finite, jointly G-invariant measure β on G × S, Theorem 3.12 yields a σ-finite, G-invariant measure ˜ β ∼β. Hence, ˜ β = λ ⊗ν by Theorem 3.14 (ii) for an invariant kernel ν : G →S.
Proof: By Theorem 3.14 (ii) we have ρ = (λ⊗χ)◦ϑ−1 for a σ-finite measure χ on S × T, where ϑ(r, s, t) = (r, rs, rt) for all (r, s, t) ∈G × S × T. Since T is Borel, Theorem 3.4 yields a further disintegration χ = ˆ ν ⊗ˆ μ in terms of a measure ˆ ν on S and a kernel ˆ μ : S →T, generating some invariant kernels ν and μ as in (ii). Then (i) may be verified as before, and (iii) holds by the uniqueness of χ.
(iv) For any kernel ν′ as stated, write ˆ ν′ for the generating measure on S.
Since λ × ν′ ∼λ × μ, we have ˆ ν′ ∼ˆ ν, and so χ = ˆ ν′ ⊗ˆ μ′ for some kernel ˆ μ′ : S →T. Now continue as before.
2 We also need some technical facts: Lemma 3.16 (invariance sets) Let G be a group with Haar measure λ, acting measurably on S, T. Fix a kernel μ : S →T and a measurable, jointly G-invariant function f on S × T. Then these subsets of S and S × T are G-invariant: A = s ∈S; μrs = μs ◦θ−1 r , r ∈G , B = (s, t) ∈S × T; f(rs, t) = f(ps, t), (r, p) ∈G2 a.e. λ2 .
Proof: For any s ∈A and r, p ∈G, we get μrs ◦θ−1 p = μs ◦θ−1 r ◦θ−1 p = μs ◦θ−1 p r = μp rs, which shows that even rs ∈A. Conversely, rs ∈A implies s = r−1(rs) ∈A, and so θ−1 r A = A.
Now let (s, t) ∈B. Then Theorem 3.10 (ii) yields (qs, t) ∈B for any q ∈G, and the invariance of λ gives f q−1rqs, t = f q−1p qs, t , (r, p) ∈G2 a.e. λ2, 3. Kernels, Disintegration, and Invariance 73 which implies (qs, qt) ∈B by the joint G -invariance of f. Conversely, (qs, qt) ∈ B implies (s, t) = q−1(qs, qt) ∈B, and so θ−1 q B = B.
2 Proof of Theorem 3.14 (i): The measure ρ(· × T) is clearly s -finite and G -invariant on S. Since G acts properly on S, Theorem 3.12 yields a σ-finite, G -invariant measure ν on S.
Applying Corollary 3.15 to the G -invariant measures λ ⊗ρ on G × S × T and λ ⊗ν on G × S, we obtain a G -invariant kernel β : G × S →T satisfying λ ⊗ρ = λ ⊗ν ⊗β.
Now introduce the kernels βr(p, s) = β(rp, s) and write fr(p, s, t) = f(r−1p, s, t) for any measurable function f ≥0 on G × S × T. By the invariance of λ, we have for any r ∈G λ ⊗ν ⊗βr f = λ ⊗ν ⊗β fr = (λ ⊗ρ)fr = (λ ⊗ρ)f, and so λ ⊗ρ = λ ⊗ν ⊗βr. Hence, by the a.e. uniqueness β(rp, s) = β(p, s), (p, s) ∈G × S a.e. λ ⊗ν, r ∈G.
Let A be the set of all s ∈S satisfying β(rp, s) = β(p, s), (r, p) ∈G2 a.e. λ2, and note that νAc = 0 by Fubini’s theorem. By Lemma 3.10 (ii), the defining condition is equivalent to β(r, s) = β(p, s), (r, p) ∈G2 a.e. λ2, and so for any g, h ∈G+ with λg = λh = 1, (g · λ)β(·, s) = (h · λ)β(·, s) = μ(s), s ∈A.
(11) To extend this to an identity, we may redefine β(r, s) = 0 when s ∈Ac.
This will not affect the disintegration of λ ⊗ρ, and by Lemma 3.16 it even preserves the G -invariance of β. Fixing a g ∈G+ with λg = 1, we get for any f ≥0 on S × T ρf = (λ ⊗ρ)(g ⊗f) = λ ⊗ν ⊗β (g ⊗f) = ν ⊗(g · λ)β f = (ν ⊗μ)f, and so ρ = ν ⊗μ. Now combine (11) with the G -invariance of β and λ to get μs ◦θ−1 r = λ(dp) g(p) β(p, s) ◦θ−1 r = λ(dp) g(p) β(rp, rs) = λ(dp) g(r−1p) β(p, rs) = μrs, 74 Foundations of Modern Probability which shows that μ is G -invariant.
2 We turn to some notions of asymptotic and local invariance, of importance in Chapters 25, 27, and 30–31. The uniformly bounded measures μn on Rd are said to be asymptotically invariant if μn −μn ∗δx →0, x ∈Rd.
(12) Lemma 3.17 (asymptotic invariance) Let μ1, μ2, . . . be uniformly bounded, asymptotically invariant measures on Rd. Then (i) (12) holds uniformly for bounded x, (ii) ∥μn −μn ∗ν∥→0 for any distribution ν on Rd, (iii) the singular components μ′′ n of μn satisfy ∥μ′′ n∥→0.
Proof: (ii) For any distribution ν, we get by dominated convergence μn −μn ∗ν = μn −μn ∗δx ν(dx) ≤ μn −μn ∗δx ν(dx) →0.
(iii) When ν is absolutely continuous, so is μn ∗ν for all n, and we get ∥μ′′ n∥≤ μn −μn ∗ν →0.
(i) Writing λh for the uniform distribution on [0, h]d and noting that ∥μ∗ν∥ ≤∥μ∥∥ν∥, we get for any x ∈Rd with |x| ≤r μn −μn ∗δx ≤ μn −μn ∗λh + μn ∗λh −μn ∗λh ∗δx + μn ∗λh ∗δx −μn ∗δx ≤2 μn −μn ∗λh + λh −λh ∗δx ≤2 μn −μn ∗λh + 2 rh−1d 1/2, which tends to 0 for fixed r, as n →∞and then h →∞.
2 Next we say that the measures μn on Rd are weakly asymptotically invari-ant, if they are uniformly bounded and such that the convolutions μn ∗ν are asymptotically invariant for every bounded measure ν ≪λd on Rd, so that μn ∗ν −μn ∗ν ∗δx →0, x ∈Rd.
(13) Lemma 3.18 (weak asymptotic invariance) For any uniformly bounded mea-sures μ1, μ2, . . . on Rd, these conditions are equivalent: (i) the μn are weakly asymptotically invariant, (ii) μn(Ih + x) −μn(Ih + x + hei) dx →0, h > 0, i ≤d.
3. Kernels, Disintegration, and Invariance 75 (iii) μn ∗ν −μn ∗ν′ →0 for all probability measures ν, ν′ ≪λd.
In (ii) it suffices to consider dyadic h = 2−k and to replace the integration by summation over (hZ)d.
Proof, (i) ⇔(ii): Letting ν = f · λd and ν′ = f ′ · λd, we note that μ ∗(ν −ν′) ≤∥μ∥∥ν −ν′∥ = ∥μ∥∥f −f ′∥1.
By Lemma 1.37 it is then enough in (13) to consider measures ν = f · λd, where f is continuous with bounded support. We may also take r = hei for some h > 0 and i ≤d, and by uniform continuity we may restrict h to dyadic values 2−k. By uniform continuity, we may next approximate f in L1 by sim-ple functions fn over the cubic grids In in Rd of mesh size 2−n, which implies the equivalence with (ii). The last assertion is yet another consequence of the uniform continuity.
(i) ⇔(iii): Condition (13) is clearly a special case of (iii). To show that (13) ⇒(iii), we may approximate as in (ii) to reduce to the case of finite sums ν = Σj ajλmj and ν′ = Σj bjλmj, where Σj aj = Σj bj = 1, and λmj denotes the uniform distribution over the cube Imj = 2−m[j −1, j] in Rd. Assuming (i) and writing h = 2−m, we get μn ∗ν −μn ∗ν′ ≤ μn ∗ν −μn ∗λm0 + μn ∗ν′ −μn ∗λm0 ≤Σ j (aj + bj) μn ∗λmj −μn ∗λm0 = Σ j (aj + bj) μn ∗λm0 ∗δjh −μn ∗λm0 →0, by the criterion in (ii).
2 Now let γ be a random vector in Rd with distribution μ, and write μt = L(γt), where γt = tγ. Then μ is said to be locally invariant, if the μt are weakly asymptotically invariant as t →∞. We might also say that μ is strictly locally invariant, if the μt are asymptotically invariant in the strict sense. However, the latter notion gives nothing new: Theorem 3.19 (local invariance) Let μ′ ≪μ be locally finite measures on Rd. Then (i) μ strictly locally invariant ⇔μ ≪λd, (ii) μ locally invariant ⇒μ′ locally invariant.
Proof: (i) Let μ be strictly locally invariant with singular component μ′′.
Then Lemma 3.17 yields ∥μ′′∥= ∥μ′′ ◦S−1 t ∥→0 as t →∞, where Stx = tx, which implies μ′′ = 0 and hence μ ≪λd.
76 Foundations of Modern Probability Conversely, let μ = f ·λd for some measurable function f ≥0. Then Lemma 1.37 yields some continuous functions fn ≥0 with bounded supports such that ∥f −fn∥1 →0. Writing μt = μ ◦S−1 t and h = 1/t, we get for any r ∈Rd μt −μt ∗δr = f(x) −f(x + hr) dx ≤ fn(x) −fn(x + hr) dx + 2 ∥f −fn∥1, which tends to 0 as h →0 and then n →∞. Since r was arbitrary, the strict local invariance follows.
(ii) We may take μ to be bounded, so that its local invariance reduces to weak asymptotic invariance of the measures μt = μ ◦S−1 t . Since μ′ ≪μ is locally finite, we may further choose μ′ = f · μ for some μ -integrable function f ≥0 on Rd. Then Lemma 1.37 yields some continuous functions fn with bounded supports, such that fn →f in L1(μ).
Writing μ′ n = fn · μ and μ′ n,t = μ′ n ◦S−1 t , we get μ′ n,t ∗ν −μ′ t ∗ν ≤ μ′ n,t −μ′ t + μ′ n −μ′ = (fn −f) · μ = μ|fn −f| →0, and so it suffices to verify (13) for the measures μ′ n,t. Thus, we may henceforth choose μ′ = f · μ for a continuous function f with bounded support.
To verify (13), let r and ν be such that the measures ν and ν ∗δr are both supported by the ball Ba 0. Writing μ′ t = (f · μ)t = (f · μ) ◦S−1 t = (f ◦St) · (μ ◦S−1 t ) = ft · μt, ν −ν ∗δr = g · λd −(g ◦θ−r) · λd = Δg · λd, we get for any bounded, measurable function h on Rd μ′ t ∗ν −μ′ t ∗ν ∗δr h = (ft · μt) ∗(Δg · λd) h = ft(x) μt(dx) Δg(y) h(x + y) dy = ft(x) μt(dx) Δg(y −x) h(y) dy = h(y) dy ft(x) Δg(y −x) μt(dx) ≈ h(y) ft(y) dy Δg(y −x) μt(dx) = μt ∗(Δg · λd) fth.
3. Kernels, Disintegration, and Invariance 77 Letting m be the modulus of continuity of f and putting mt = m(r/t), we may estimate the approximation error in the fifth step by |h(y)| dy ft(x) −ft(y) Δg(y −x) μt(dx) ≤mt μt(dx) h(y) Δg(y −x) dy ≤mt ∥h∥∥μt∥ |Δg(y)| dy ≤2 mt ∥h∥∥μ∥∥ν∥→0, by the uniform continuity of f. It remains to note that ft · μt ∗(Δg · λd) ≤∥f∥ μt ∗ν −μt ∗ν ∗δt →0, by the local invariance of μ.
2 Exercises 1. Let μ1, μ2, . . . be kernels between two measurable spaces S, T. Show that the function μ = Σn μn is again a kernel.
2. Fix a function f between two measurable spaces S, T, and define μ(s, B) = 1B ◦f(s). Show that μ is a kernel ifff is measurable.
3. Prove the existence and uniqueness of Lebesgue measure on Rd from the cor-responding properties of Haar measures.
4. Show that if the group G is compact, then every measurable group action is proper. Then show how the formula for the orbit measures simplifies in this case, and give a corresponding version of Theorem 3.12.
5. Use the previous exercise to find the general form of the invariant measures on a sphere and on a spherical cylinder.
6. Extend Lemma 3.10 (i) to the context of general invariant measures.
7. Extend the mean continuity in Lemma 2.7 to general invariant measures.
8. Give an example of some measures μ1, μ2, . . . that are weakly but not strictly asymptotically invariant.
9. Give an example of a bounded measure μ on R that is locally invariant but not absolutely continuous.
II. Some Classical Probability Theory Armed with the basic measure-theoretic machinery from Part I, we proceed with the foundations of general probability theory. After giving precise defini-tions of processes, distributions, expectations, and independence, we introduce the various notions of probabilistic convergence, and prove some of the classi-cal limit theorems for sums and averages, including the law of large numbers and the central limit theorem. Finally, a general compound Poisson approxi-mation will enable us to give a short, probabilistic proof of the classical limit theorem for general null arrays. The material in Chapter 4, along with at least the beginnings of Chapters 5−6, may be regarded as essential core material, constantly needed for the continued study, whereas a detailed reading of the remaining material might be postponed until a later stage.
−−− 4. Processes, distributions, and independence. Here the probabilis-tic notions of processes, distributions, expected values, and independence are defined in terms of standard notions of measure theory from Chapter 1. We proceed with a detailed study of independence criteria, 0−1 laws, and regu-larity conditions for processes. Finally, we prove the existence of sequences of independent random variables, and show how distributions on Rd are induced by their distribution functions.
5. Random sequences, series, and averages. Here we examine the relationship between convergence a.s., in probability, and in Lp, and give con-ditions for uniform integrability.
Next, we develop convergence criteria for random series and averages, including the strong laws of large numbers. Fi-nally, we study the basic properties of convergence in distribution, with special emphasis on the continuous mapping and approximation theorems as well as the Skorohod coupling.
6.
Gaussian and Poisson convergence. Here we prove the classical Poisson and Gaussian limit theorems for null arrays of random variables or vectors, including the celebrated central limit theorem. The theory of regu-lar variation enables us to identify the domain of Gaussian attraction. The mentioned results also yield precise criteria for the weak law of large numbers.
Finally, we study the relationship between weak compactness and tightness, in the special case of distributions on Euclidean spaces.
7. Infinite divisibility and general null arrays. Here we establish the representation of infinitely divisible distributions on Rd, involving a Gaussian component and a compound Poisson integral, material needed for the theory of L´ evy processes and essential for a good understanding of Feller processes and general semi-martingales. The associated convergence criteria, along with a basic compound Poisson approximation, yield a short, non-technical approach to the classical limit theorem for general null arrays.
79 Chapter 4 Processes, Distributions, and Independence Random elements and processes, finite-dimensional distributions, expec-tation and covariance, moments and tails, Jensen’s inequality, indepen-dence, pairwise independence and grouping, product measures and con-volution, iterated expectation, Kolmogorov and Hewitt–Savage 0−1 laws, Borel–Cantelli lemma, replication and Bernoulli sequences, kernel rep-resentation, H¨ older continuity, multi-variate distributions Armed with the basic notions and results of measure theory from previous chapters, we may now embark on our study of proper probability theory. The dual purposes of this chapter are to introduce some basic terminology and notation and to prove some fundamental results, many of which will be needed throughout the remainder of the book.
In modern probability theory, it is customary to relate all objects of study to a basic probability space (Ω, A, P), which is simply a normalized measure space. Random variables may then be defined as measurable functions ξ on Ω, and their expected values as integrals E ξ = ξ dP. Furthermore, independence between random quantities reduces to a kind of orthogonality between the induced sub-σ-fields. The reference space Ω is introduced only for technical convenience, to provide a consistent mathematical framework, and its actual choice plays no role1. Instead, our interest focuses on the induced distributions L(ξ) = P ◦ξ−1 and their associated characteristics.
The notion of independence is fundamental for all areas of probability the-ory.
Despite its simplicity, it has some truly remarkable consequences.
A particularly striking result is Kolmogorov’s 0−1 law, stating that every tail event associated with a sequence of independent random elements has prob-ability 0 or 1. Thus, any random variable that depends only on the ‘tail’ of the sequence is a.s. a constant. This result and the related Hewitt–Savage 0−1 law convey much of the flavor of modern probability: though the individual elements of a random sequence are erratic and unpredictable, the long-term behavior may often conform to deterministic laws and patterns. A major ob-jective is then to uncover the latter. Here the classical Borel–Cantelli lemma is one of the simplest but most powerful tools available.
To justify our study, we need to ensure the existence2 of the random objects 1except in some special contexts, such as in the modern theory of Markov processes 2The entire theory relies in a subtle way on the existence of non-trivial, countably additive measures. Assuming only finite additivity wouldn’t lead us very far.
81 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 82 Foundations of Modern Probability under discussion. For most purposes it suffices to use the Lebesgue unit interval ([0, 1], B, λ) as our basic probability space. In this chapter, the existence will be proved only for independent random variables with prescribed distributions, whereas a more general discussion is postponed until Chapter 8. As a key step, we may use the binary expansion of real numbers to construct a so-called Bernoulli sequence, consisting of independent random digits 1 or 0 with probabilities p and 1−p, respectively.3 The latter may be regarded as a discrete-time counterpart of the fundamental Poisson processes, to be introduced and studied in Chapter 15.
The distribution of a random process X is determined by its finite-dimen-sional distributions, which are not affected by a change on a null set of each variable Xt. It is then natural to look for versions of X with suitable regularity properties. As another striking result, we provide a moment condition ensuring the existence of a H¨ older continuous modification of the process. Regulariza-tions of various kind are important throughout modern probability theory, as they may enable us to deal with events depending on the values of a process at uncountably many times.
To begin our systematic exposition of the theory, we fix an arbitrary prob-ability space (Ω, A, P), where P is a probability measure with total mass 1. In the probabilistic context, the sets A ∈A are called events, and PA = P(A) is called the probability of A. In addition to results valid for all measures, there are properties requiring the boundedness or normalization of P, such as the relation PAc = 1 −PA and the fact that An ↓A implies PAn →PA.
Some infinite set operations have a special probabilistic significance. Thus, for any sequence of events A1, A2, . . . ∈A, we may consider the sets {An i.o.} where An happens infinitely often, and {An ult.} where An happens ultimately (for all but finitely many n). Those occurrences are events in their own right, formally expressible in terms of the An as {An i.o.} = Σ n 1An = ∞ = ∩ n ∪ k≥nAk, (1) {An ult.} = Σ n 1Ac n < ∞ = ∪ n ∩ k≥nAk.
(2) The argument ω ∈Ω is usually omitted from our notation, when there is no risk for confusion.
Thus, for example, the expression {Σn 1An = ∞} is a convenient shorthand form of the unwieldy {ω ∈Ω; Σn 1An(ω) = ∞}.
The indicator functions4 of the events in (1) and (2) may be expressed as 1{An ult.} = lim inf n→∞1An ≤lim sup n→∞1An = 1{An i.o.}.
Applying Fatou’s lemma to the functions 1An and 1Ac n yields P{An ult.} ≤lim inf n→∞PAn 3Its existence is equivalent to that of Lebesgue measure, hence highly non-trivial, as we have seen in Chapter 2.
4For typographical convenience, we are often writing 1{·} instead of 1{·}.
4. Processes, Distributions, and Independence 83 ≤lim sup n→∞PAn ≤P{An i.o.}.
Using the continuity and sub-additivity of P, we further see from (1) that P{An i.o.} = lim n→∞P ∪ k≥n Ak ≤lim n→∞Σ k≥n PAk.
When Σn PAn < ∞, the right-hand side becomes 0, and we get P{An i.o.} = 0. The resulting implication constitutes the easy part of the Borel–Cantelli lemma, to be revisited in Theorem 4.18.
Any measurable mapping ξ of Ω into some measurable space (S, S) is called a random element in S. If B ∈S, then {ξ ∈B} = ξ−1B ∈A, and we may consider the associated probabilities P{ξ ∈B} = P(ξ−1B) = (P ◦ξ−1)B, B ∈S.
The set function L(ξ) = P ◦ξ−1 is a probability measure on the range space S of ξ, called the distribution or law of ξ. We often use the term distribution as synonymous to probability measure, even when no generating random element has been introduced.
Random elements are of interest in a wide variety of spaces. A random element in S is called a random variable when S = R, a random vector when S = Rd, a random sequence when S = R∞, a random (or stochastic5) process when S is a function space, and a random measure or set when S is a class of measures or sets, respectively.
A metric or topological space S will be endowed with its Borel σ-field BS, unless a σ-field is otherwise specified. For any separable metric space S, Lemma 1.2 shows that ξ = (ξ1, ξ2, . . .) is a random element in S∞iffξ1, ξ2, . . . are random elements in S.
For any measurable space (S, S), a subset A ⊂S becomes a measurable space in its own right, when endowed with the σ-field A∩S = {A∩B; B ∈S}.
In particular, Lemma 1.6 shows that if S is a metric space with Borel σ-field S, then A ∩S is the Borel σ-field in A. Any random element in (A, A ∩S) may be regarded, alternatively, as a random element in S. Conversely, if ξ is a random element in S with ξ ∈A a.s.6 for some A ∈S, then ξ = η a.s. for some random element η in A.
Fixing a measurable space (S, S) and an index set T, we write ST for the class of functions f : T →S, and let ST denote the σ-field in ST generated by all evaluation maps πt : ST →S, t ∈T, given by πtf = f(t). If X : Ω →U ⊂ST, then clearly Xt = πt ◦X maps Ω into S. Thus, X may also be regarded as a function X(t, ω) = Xt(ω) from T × Ω to S.
Lemma 4.1 (measurability) For any measurable space (S, S), index set T, subset U ⊂ST, and function X : Ω →U, these conditions are equivalent: 5We regard the words random and stochastic as synonymous.
6almost surely or with probability 1 84 Foundations of Modern Probability (i) X is U ∩ST-measurable, (ii) Xt : Ω →S is S-measurable for every t ∈T.
Proof: Since X is U-valued, its U ∩ST-measurability is equivalent to mea-surability with respect to ST. The result now follows, by Lemma 1.4, from the fact that ST is generated by the mappings πt.
2 A mapping X with the properties in Lemma 4.1 is called an S-valued (ran-dom) process on T with paths in U. By the lemma it is equivalent to regard X as a collection of random elements Xt in the state space S.
For any random elements ξ and η in a common measurable space, the equality ξ d = η means that ξ and η have the same distribution, or L(ξ) = L(η).
If X is a random process on an index set T, the associated finite-dimensional distributions are given by μ t1,...,tn = L(Xt1, . . . , Xtn), t1, . . . , tn ∈T, n ∈N.
The distribution of a process is determined by the set of finite-dimensional distributions: Proposition 4.2 (finite-dimensional distributions) For S, T, U as in Lemma 4.1, let X, Y be processes on T with paths in U. Then X d = Y iff (Xt1, . . . , Xtn) d = (Yt1, . . . , Ytn), t1, . . . , tn ∈T, n ∈N.
(3) Proof: Assume (3), let D be the class of sets A ∈ST with P{X ∈A} = P{Y ∈A}, and let C consist of all sets A = {f ∈ST; (ft1, . . . , ftn) ∈B}, t1, . . . , tn ∈T, B ∈Sn, n ∈N.
Then C is a π-system while D a λ-system, and C ⊂D by hypothesis. Hence, ST = σ(C) ⊂D by Theorem 1.1, which means that X d = Y .
2 For any random vector ξ = (ξ1, . . . , ξd) in Rd, we define the associated distribution function F by F(x1, . . . , xd) = P ∩ k≤d{ξk ≤xk}, x1, . . . , xd ∈R.
We note that F determines the distribution of ξ.
Lemma 4.3 (distribution functions) Let ξ, η be random vectors in Rd with distribution functions F, G. Then ξ d = η ⇔ F = G.
4. Processes, Distributions, and Independence 85 Proof: Use Theorem 1.1.
2 The expected value, expectation, or mean of a random variable ξ, written as7 E ξ, is defined as E ξ = Ω ξ dP = R x (P ◦ξ−1)(dx), (4) whenever either integral exists. The last equality then holds by Lemma 1.24.
By the same result, we get for any random elements ξ in a measurable space S, and for measurable functions f : S →R Ef(ξ) = Ω f(ξ) dP = S f(s) {P ◦ξ−1}(ds) = R x P ◦(f ◦ξ)−1 (dx), (5) provided that at least one of the three integrals exists. Integrals over a mea-surable subset A ⊂Ω are often denoted by E(ξ; A) = E(ξ 1A) = A ξ dP, A ∈A.
For any random variable ξ and constant p > 0, the integral E|ξ|p = ∥ξ∥p p is called the p -th absolute moment of ξ. By H¨ older’s inequality (or by Jensen’s inequality in Lemma 4.5) we have ∥ξ∥p ≤∥ξ∥q for p ≤q, so that the corre-sponding Lp-spaces are non-increasing in p. If ξ ∈Lp and either p ∈N or ξ ≥0, we may further define the p -th moment of ξ as E ξp.
We give a useful relationship between moments and tail probabilities.
Lemma 4.4 (moments and tails) For any random variable ξ ≥0 and con-stant p > 0, we have E ξp = p ∞ 0 P{ξ > t} tp−1dt = p ∞ 0 P{ξ ≥t} tp−1dt.
Proof: By Fubini’s theorem and calculus, E ξp = p E ξ 0 tp−1dt = p E ∞ 0 1{ξ > t} tp−1dt = p ∞ 0 P{ξ > t} tp−1dt.
The proof of the second expression is similar.
2 A random vector ξ = (ξ1, . . . , ξd) or process X = (Xt) is said to be inte-grable, if integrability holds for every component ξk or value Xt, in which case 7We omit the customary square brackets, as in E[ξ], when there is no risk for confusion.
86 Foundations of Modern Probability we may write E ξ = (E ξ1, . . . , E ξd) or EX = (EXt). Recall that a function f : Rd →R is said to be convex if f cx + (1 −c)y ≤cf(x) + (1 −c)f(y), x, y ∈Rd, c ∈[0, 1].
(6) This may be written as f(E ξ) ≤Ef(ξ), where ξ is a random vector in Rd with P{ξ = x} = 1 −P{ξ = y} = c. The following extension to integrable random vectors is known as Jensen’s inequality.
Lemma 4.5 (convex maps, H¨ older, Jensen) For any integrable random vector ξ in Rd and convex function f : Rd →R, we have Ef(ξ) ≥f(E ξ).
Proof: By a version of the Hahn–Banach theorem8, the convexity condition (6) is equivalent to the existence for every s ∈Rd of a supporting affine function hs(x) = ax + b with f ≥hs and f(s) = hs(s). Taking s = E ξ gives Ef(ξ) ≥E hs(ξ) = hs(E ξ) = f(E ξ).
2 The covariance of two random variables ξ, η ∈L2 is given by Cov(ξ, η) = E (ξ −E ξ)(η −E η) = E ξη −(E ξ)(E η).
It is clearly bi-linear, in the sense that Cov Σ j ≤m aj ξj, Σ k≤n bk ηk = Σ j ≤m Σ k≤n ajbk Cov(ξj, ηk).
Taking ξ = η ∈L2 gives the variance Var(ξ) = Cov(ξ, ξ) = E(ξ −Eξ)2 = E ξ2 −(E ξ)2, and Cauchy’s inequality yields Cov(ξ, η) ≤ Var(ξ) Var(η) 1/2.
Two random variables ξ, η ∈L2 are said to be uncorrelated if Cov(ξ, η) = 0.
For any collection of random variables ξt ∈L2, t ∈T, the associated covariance function ρs,t = Cov(ξs, ξt), s, t ∈T, is non-negative definite, in the sense that Σ ij aiaj ρti,tj ≥0 for any n ∈N, t1, . . . tn ∈T, and a1, . . . , an ∈R.
This is clear if we write 8Any two disjoint, convex sets in Rd can be separated by a hyperplane.
4. Processes, Distributions, and Independence 87 Σ i,j aiaj ρti,tj = Σ i,j aiaj Cov(ξti, ξtj) = Var Σ i ai ξti ≥0.
The events At ∈A, t ∈T, are said to be independent, if for any distinct indices t1, . . . , tn ∈T, P ∩ k≤n Atk = Π k≤n PAtk.
(7) More generally, we say that the families Ct ⊂A, t ∈T, are independent9, if independence holds between the events At for any At ∈Ct, t ∈T. Finally, the random elements ξt, t ∈T, are said to be independent if independence holds between the generated σ-fields σ(ξt), t ∈T. Pairwise independence between the objects A and B, ξ and η, or B and C is often denoted by A⊥ ⊥B, ξ⊥ ⊥η, or B⊥ ⊥C, respectively.
The following result is often useful to extend the independence property.
Lemma 4.6 (extension) Let Ct, t ∈T, be independent π-systems. Then the independence extends to the σ-fields Ft = σ(Ct), t ∈T.
Proof: We may assume that Ct ̸= ∅for all t.
Fix any distinct indices t1, . . . , tn ∈T, and note that (7) holds for arbitrary Atk ∈Ctk, k = 1, . . . , n.
For fixed At2, . . . , Atn, we introduce the class D of sets At1 ∈A satisfying (7).
Then D is a λ-system containing Ct1, and so D ⊃σ(Ct1) = Ft1 by Theorem 1.1.
Thus, (7) holds for arbitrary At1 ∈Ft1 and Atk ∈Ctk, k = 2, . . . , n. Proceeding recursively in n steps, we obtain the desired extension to arbitrary Atk ∈Ftk, k = 1, . . . , n.
2 An immediate consequence is the following basic grouping property. Here and below, we write F ∨G = σ(F, G), FS = ∨ t∈SFt = σ{Ft; t ∈S}.
Corollary 4.7 (grouping) Let Ft, t ∈T, be independent σ-fields, and let T be a disjoint partition of T. Then the independence extends to the σ-fields FS = ∨ t∈SFt, S ∈T .
Proof: For any S ∈T , let CS be the class of finite intersections of sets in t∈S Ft. Then the classes CS are independent π-systems, and the independence extends by Lemma 4.6 to the generated σ-fields FS.
2 Though independence between more than two σ-fields is clearly stronger than pairwise independence, the full independence may be reduced in various ways to the pairwise version. Given a set T, we say that a class T ⊂2T is separating, if for any s ̸= t in T there exists a set S ∈T , such that exactly one of the elements s and t lies in S.
9not to be confused with independence within each class 88 Foundations of Modern Probability Lemma 4.8 (pairwise independence) Let the Fn or Ft be σ-fields on Ω in-dexed by N or T. Then (i) F1, F2, . . . are independent iff σ(F1, . . . , Fn)⊥ ⊥Fn+1, n ∈N, (ii) for any separating class T ⊂2T, the Ft are independent iff FS ⊥ ⊥FSc, S ∈T .
Proof: The necessity of the two conditions follows from Corollary 4.7. As for the sufficiency, we consider only part (ii), the proof for (i) being similar.
Under the stated condition, we need to show that, for any finite subset S ⊂T, the σ-fields Fs, s ∈S, are independent. Assume the statement to be true for |S| ≤n, where |S| denote the cardinality of S. Proceeding to the case where |S| = n + 1, we may choose U ∈T such that S′ = S ∩U and S′′ = S \ U are non-empty. Since FS′⊥ ⊥FS′′, we get for any sets As ∈Fs, s ∈S, P ∩ s∈SAs = P ∩ s∈S′As P ∩ s∈S′′As = Π s∈S PAs, where the last relation follows from the induction hypothesis.
2 A σ-field F is said to be P-trivial if PA = 0 or 1 for every A ∈F. Note that a random element is a.s. a constant iffits distribution is a degenerate probability measure.
Lemma 4.9 (triviality and degeneracy) For a σ-field F on Ω, these condi-tions are equivalent: (i) F is P-trivial, (ii) F ⊥ ⊥F, (iii) every F-measurable random variable is a.s. a constant.
Proof, (i) ⇔(ii): If F⊥ ⊥F, then for any A ∈F we have PA = P(A ∩A) = (PA)2, and so PA = 0 or 1. Conversely, if F is P-trivial, then for any A, B ∈F we have P(A ∩B) = PA ∧PB = PA · PB, which means that F⊥ ⊥F.
(i) ⇔(iii): Assume (iii). Then 1B is a.s. a constant for every B ∈F, and so PB = 0 or 1, proving (i). Conversely, (i) yields Ft ≡P{ξ ≤t} = 0 or 1 for every F-measurable random variable ξ, and so ξ = sup{t; Ft = 0} a.s., proving (iii).
2 We proceed with a basic relationship between independence and product measures.
Lemma 4.10 (product measures and independence) Let ξ1, . . . , ξn be random elements with distributions μ1, . . . , μn, in some measurable spaces S1, . . . , Sn.
Then these conditions are equivalent: (i) the ξk are independent, 4. Processes, Distributions, and Independence 89 (ii) ξ = (ξ1, . . . , ξn) has distribution μ1 ⊗· · · ⊗μn.
Proof: Assuming (i), we get for any measurable product set B = B1 ×· · ·× Bn P{ξ ∈B} = Π k≤n P{ξk ∈Bk} = Π k≤n μkBk = ⊗ k≤nμk B.
This extends by Theorem 1.1 to arbitrary sets in the product σ-field, proving (ii). The converse is obvious.
2 Combining the last result with Fubini’s theorem, we obtain a useful method of calculating expected values, in the context of independence. A more general version is given in Theorem 8.5.
Lemma 4.11 (iterated expectation) Let ξ, η be independent random elements in some measurable spaces S, T, and consider a measurable function f : S × T →R with E{E|f(s, η)|}s=ξ < ∞. Then Ef(ξ, η) = E Ef(s, η) s=ξ .
Proof: Let μ and ν be the distributions of ξ and η, respectively. Assuming f ≥0 and writing g(s) = Ef(s, η), we get by Lemma 1.24 and Fubini’s theorem Ef(ξ, η) = f(s, t) (μ ⊗ν)(ds dt) = μ(ds) f(s, t) ν(dt) = g(s) μ(ds) = Eg(ξ).
For general f, this applies to the function |f|, and so E|f(ξ, η)| < ∞. The desired relation then follows as before.
2 In particular, we get for any independent random variables ξ1, . . . , ξn E Π k ξk = Π k E ξk, Var Σ k ξk = Σ kVar(ξk), whenever the expressions on the right exist.
For any random elements ξ, η in a measurable group G, the product ξη is again a random element in G. We give a connection between independence and the convolutions of Lemma 1.30.
Corollary 4.12 (convolution) Let ξ, η be independent random elements in a measurable group G with distributions μ, ν. Then L(ξη) = μ ∗ν.
90 Foundations of Modern Probability Proof: For any measurable set B ⊂G, we get by Lemma 4.10 and the definition of convolution P{ξη ∈B} = (μ ⊗ν) (x, y) ∈G2; xy ∈B = (μ ∗ν)B.
2 For any σ-fields F1, F2, . . . , we introduce the associated tail σ-field T = ∩ n ∨ k>nFk = ∩ n σ{Fk; k > n}.
The following remarkable result shows that T is trivial when the Fn are inde-pendent. An extension appears in Corollary 9.26.
Theorem 4.13 (Kolmogorov’s 0−1 law) Let F1, F2, . . . be σ-fields with asso-ciated tail σ-field T . Then F1, F2, . . . independent ⇒ T trivial.
Proof: For each n ∈N, define Tn = k>n Fk, and note that F1, . . . , Fn, Tn are independent by Corollary 4.7. Hence, so are the σ-fields F1, . . . , Fn, T , and then also F1, F2, . . . , T . By the same theorem, we obtain T0⊥ ⊥T , and so T ⊥ ⊥T . Thus, T is P-trivial by Lemma 4.9.
2 We consider some simple illustrations: Corollary 4.14 (sums and averages) Let ξ1, ξ2, . . . be independent random variables, and put Sn = ξ1 + · · · + ξn. Then (i) the sequence (Sn) is a.s. convergent or a.s. divergent, (ii) the sequence (Sn/n) is a.s. convergent or a.s. divergent, and its possible limit is a.s. a constant.
Proof: Define Fn = σ{ξn}, n ∈N, and note that the associated tail σ-field T is P-trivial by Theorem 4.13. Since the sets where (Sn) and (Sn/n) converge are T -measurable by Lemma 1.10, the first assertions follows. The last one is clear from Lemma 4.9.
2 By a finite permutation of N we mean a bijective map p: N →N, such that pn = n for all but finitely many n. For any space S, a finite permutation p of N induces a permutation Tp on S∞, given by Tp(s) = s ◦p = (sp1, sp2, . . .), s = (s1, s2, . . .) ∈S∞.
A set I ⊂S∞is said to be symmetric or permutation invariant if T −1 p I ≡{s ∈S∞; s ◦p ∈I} = I, for every finite permutation p of N. For a measurable space (S, S), the sym-metric sets I ∈S∞form a σ-field I ⊂S∞, called the permutation invariant σ-field in S∞.
We may now state the second basic 0−1 law, which applies to sequences of random elements that are independent and identically distributed, abbreviated as i.i.d..
4. Processes, Distributions, and Independence 91 Theorem 4.15 (Hewitt–Savage 0−1 law) Let ξ = (ξk) be an infinite sequence of random elements in a measurable space S, and let I denote the permutation invariant σ-field in S∞. Then ξ1, ξ2, . . . i.i.d.
⇒ ξ−1I trivial.
Our proof is based on a simple approximation. Write A△B = (A \ B) ∪(B \ A), and note that P(A△B) = P(Ac△Bc) = E|1A −1B|, A, B ∈A.
(8) Lemma 4.16 (approximation) For any σ-fields F1 ⊂F2 ⊂· · · and set A ∈ n Fn, we may choose A1, A2, . . . ∈∪ n Fn, P(A △An) →0.
Proof: Define C = n Fn, and let D be the class of sets A ∈ n Fn with the stated property. Since C is a π-system and D is a λ-system containing C, Theorem 1.1 yields n Fn = σ(C) ⊂D.
2 Proof of Theorem 4.15: Define μ = L(ξ), put Fn = Sn ×S∞, and note that I ⊂S∞= n Fn. For any I ∈I, Lemma 4.16 yields some Bn ∈Sn, such that the corresponding cylinder sets In = Bn × S∞satisfy μ(I△In) →0. Put ˜ In = Sn × Bn × S∞. Then the symmetry of μ and I yields μ˜ In = μIn →μI and μ(I△˜ In) = μ(I△In) →0, and so by (8) μ I△(In ∩˜ In) ≤μ(I△In) + μ(I△˜ In) →0.
Since moreover In⊥ ⊥˜ In under μ, we get μI ←μ(In ∩˜ In) = (μIn)(μ˜ In) →(μI)2.
Thus, μI = (μI)2, and so P ◦ξ−1I = μI = 0 or 1.
2 Again we list some easy consequences. Here a random variable ξ is said to be symmetric if ξ d = −ξ.
Corollary 4.17 (random walk) Let ξ1, ξ2, . . . be i.i.d., non-degenerate random variables, and put Sn = ξ1 + · · · + ξn. Then (i) P{Sn ∈B i.o.} = 0 or 1 for every B ∈B, (ii) lim supn Sn = ∞a.s. or = −∞a.s., (iii) lim supn(±Sn) = ∞a.s. when the ξn are symmetric.
92 Foundations of Modern Probability Proof: (i) This holds by Theorem 4.15, since for any finite permutation p of N we have xp1 + · · · + xpn = x1 + · · · + xn for all but finitely many n.
(ii) By Theorem 4.15 and Lemma 4.9, we have lim supn Sn = c a.s. for some constant c ∈¯ R = [−∞, ∞], and so a.s.
c = lim sup n→∞Sn+1 = lim sup n→∞(Sn+1 −ξ1) + ξ1 = c + ξ1.
Since |c| < ∞would imply ξ1 = 0 a.s., contradicting the non-degeneracy of ξ1, we have |c| = ∞.
(iii) Writing c = lim sup n→∞Sn ≥lim inf n→∞Sn = −lim sup n→∞(−Sn) = −c, we get −c ≤c ∈{±∞}, which implies c = ∞.
2 From a suitable 0−1 law it is often easy to see that a given event has probability 0 or 1. To distinguish between the two cases is often much harder.
The following classical result, known as the Borel–Cantelli lemma, is sometimes helpful, especially when the events are independent. A more general version appears in Corollary 9.21.
Theorem 4.18 (Borel, Cantelli) For any A1, A2, . . . ∈A, Σ n PAn < ∞ ⇒ P{An i.o.} = 0, and equivalence holds when the An are independent.
The first assertion was proved earlier as an application of Fatou’s lemma.
Using expected values yields a more transparent argument.
Proof: If Σn PAn < ∞, we get by monotone convergence E Σ n 1An = Σ nE1An = Σ nPAn < ∞.
Thus, Σn 1An < ∞a.s., which means that P{An i.o.} = 0.
Now let the An be independent with Σn PAn = ∞. Since 1 −x ≤e−x for all x, we get P ∪ k≥nAk = 1 −P ∩ k≥nAc k = 1 −Π k≥n PAc k = 1 −Π k≥n (1 −PAk) ≥1 −Π k≥n exp(−PAk) = 1 −exp −Σ k≥n PAk = 1.
4. Processes, Distributions, and Independence 93 Hence, as n →∞, 1 = P ∪ k≥nAk ↓P ∩ n ∪ k≥nAk = P{An i.o.}, and so the probability on the right equals 1.
2 For many purposes, it suffices to use the Lebesgue unit interval ([0, 1], B[0, 1], λ) as our basic probability space. In particular, the following result ensures the existence on [0, 1] of some independent random variables ξ1, ξ2, . . .
with arbitrarily prescribed distributions. The present statement is only pre-liminary. Thus, we will remove the independence assumption in Theorem 8.21, prove an extension to arbitrary index sets in Theorem 8.23, and eliminate the restriction on the spaces in Theorem 8.24.
Theorem 4.19 (existence, Borel) For any probability measures μ1, μ2, . . . on the Borel spaces S1, S2, . . . , there exist some independent random elements ξ1, ξ2, . . . on ([0, 1], λ) with distributions μ1, μ2, . . . .
As a consequence, there exists a probability measure μ on S1 × S2 × · · · satisfying μ ◦(π1, . . . , πn)−1 = μ1 ⊗· · · ⊗μn, n ∈N.
For the proof, we begin with two special cases of independent interest.
By a Bernoulli sequence with rate p we mean a sequence of i.i.d. random variables ξ1, ξ2, . . . with P{ξn = 1} = p = 1 −P{ξn = 0}. We further say that a random variable ξ is uniformly distributed on [0, 1], written as U(0, 1), if its distribution L(ξ) equals Lebesgue measure λ on [0, 1]. Every number x ∈[0, 1] has a binary expansion r1, r2, . . . ∈{0, 1} satisfying x = Σ n rn2−n, where for uniqueness we require that Σ nrn = ∞when x > 0. We give a simple construction of Bernoulli sequences on the Lebesgue unit interval.
Lemma 4.20 (Bernoulli sequence) Let ξ be a random variable in [0, 1] with binary expansion η1, η2, . . . . Then these conditions are equivalent: (i) ξ is U(0, 1), (ii) the ηn form a Bernoulli sequence with rate 1 2.
Proof, (i) ⇒(ii): If ξ is U(0, 1), then P j≤n{ηj = kj} = 2−n for all k1, . . . , kn ∈{0, 1}. Summing over k1, . . . , kn−1 gives P{ηn = k}= 1 2 for k = 0 or 1. A similar calculation yields the asserted independence.
(ii) ⇒(i): Assume (ii). Letting ξ′ be U(0, 1) with binary expansion η′ 1, η′ 2, . . . , we get (ηn) d = (η′ n), and so ξ = Σ n ηn 2−n d = Σ n η′ n 2−n = ξ′.
2 Next we show how a single U(0, 1) random variable can be used to generate a whole sequence.
94 Foundations of Modern Probability Lemma 4.21 (replication) There exist some measurable functions f1, f2, . . .
on [0, 1] such that (i) ⇒(ii), where (i) ξ is U(0, 1), (ii) ξn = fn(ξ), n ∈N, are i.i.d. U(0, 1).
Proof: Let b1(x), b2(x), . . . be the binary expansion of x ∈[0, 1], and note that the bk are measurable. Rearranging the bk into a two-dimensional array hnj, n, j ∈N, we define fn(x) = Σ j 2−j hnj(x), x ∈[0, 1], n ∈N.
By Lemma 4.20, the random variables bk(ξ) form a Bernoulli sequence with rate 1 2, and the same result shows that the variables ξn = fn(ξ) are U(0, 1).
They are further independent by Corollary 4.7.
2 We finally need to construct a random element with specified distribution from a given randomization variable. The result is stated in an extended form for kernels, to meet our needs in Chapters 8, 11, 22, and 27–28.
Lemma 4.22 (kernel representation) Consider a probability kernel μ: S →T with T Borel, and let ξ be U(0, 1). Then there exists a measurable function f : S × [0, 1] →T, such that L f(s, ξ) = μ(s, ·), s ∈S.
Proof: We may choose T ∈B[0,1], from where we can easily reduce to the case of T = [0, 1]. Define f(s, t) = sup x ∈[0, 1]; μ(s, [0, x]) < t , s ∈S, t ∈[0, 1], (9) and note that f is product measurable on S × [0, 1], since the set {(s, t); μ(s, [0, x]) < t} is measurable for each x by Lemma 1.13, and the supremum in (9) can be restricted to rational x. If ξ is U(0, 1), we get for x ∈[0, 1] P f(s, ξ) ≤x = P ξ ≤μ(s, [0, x]) = μ(s, [0, x]), and so by Lemma 4.3 f(s, ξ) has distribution μ(s, ·).
2 Proof of Theorem 4.19: By Lemma 4.22 there exist some measurable func-tions fn : [0, 1] →Sn with λ ◦f −1 n = μn. Letting ξ be the identity map on [0, 1] and choosing ξ1, ξ2, . . . as in Lemma 4.21, we note that the functions ηn = fn(ξn), n ∈N, have the desired joint distribution.
2 We turn to some regularizations and path properties of random processes.
Say that two processes X, Y on a common index set T are versions of one another if Xt = Yt a.s. for every t ∈T. When T = Rd or R+, two continuous or 4. Processes, Distributions, and Independence 95 right-continuous versions X, Y of a given process are clearly indistinguishable, in the sense that X ≡Y a.s., so that the entire paths agree outside a fixed null set.
For any mapping f between two metric spaces (S, ρ) and (S′, ρ′), we define the modulus of continuity wf = w(f, ·) by wf(r) = sup ρ′(fs, ft); s, t ∈S, ρ(s, t) ≤r , r > 0, so that f is uniformly continuous iffwf(r) →0 as r →0. Say that f is H¨ older continuous of order p, if 10 wf(r) < ⌢rp as r →0. The stated property is said to hold locally if it is valid on every bounded set.
A simple moment condition ensures the existence of a H¨ older-continuous version of a given process on Rd. Important applications are given in Theorems 14.5, 29.4, and 32.3, and a related tightness criterion appears in Corollary 23.7.
Theorem 4.23 (moments and H¨ older continuity, Kolmogorov, Loeve, Chent-sov) Let X be a process on Rd with values in a complete metric space (S, ρ), such that E ρ(Xs, Xt) a < ⌢|s −t|d+b, s, t ∈Rd, (10) for some constants a, b > 0. Then a version of X is locally H¨ older continuous of order p, for every p ∈(0, b/a).
Proof: It is clearly enough to consider the restriction of X to [0, 1]d. Define Dn = 2−n(k1, . . . , kd); k1, . . . , kn ∈{1, . . . , 2n} , n ∈N, and put ξn = max ρ(Xs, Xt); s, t ∈Dn, |s −t| = 2−n , n ∈N.
Since (s, t) ∈D2 n; |s −t| = 2−n ≤d 2 d n, n ∈N, we get by (10) for any p ∈(0, b/a) E Σ n (2 p n ξn)a = Σ n 2 ap n E ξa n < ⌢Σ n 2 ap n 2 d n (2−n)d+b = Σ n 2 (ap−b) n < ∞.
The sum on the left is then a.s. convergent, and so ξn < ⌢2 −p n a.s.
Now any points s, t ∈ n Dn with |s −t| ≤2−m can be connected by a piecewise linear path, which for every n ≥m involves at most 2d steps between nearest neighbors in Dn. Thus, for any r ∈[2−m−1, 2−m], sup ρ(Xs, Xt); s, t ∈∪nDn, |s −t| ≤r < ⌢Σ n≥m ξn < ⌢Σ n≥m 2 −p n < ⌢2 −p m < ⌢rp, 10For functions f, g > 0, we mean by f < ⌢g that f ≤c g for some constant c < ∞.
96 Foundations of Modern Probability showing that X is a.s. H¨ older continuous on n Dn of order p.
In particular, X agrees a.s. on n Dn with a continuous process Y on [0, 1]d, and we note that the H¨ older continuity of Y on n Dn extends with the same order p to the entire cube [0, 1]d. To show that Y is a version of X, fix any t ∈[0, 1]d, and choose t1, t2, . . . ∈ n Dn with tn →t. Then Xtn = Ytn a.s. for each n. Since also Xtn P →Xt by (10) and Ytn →Yt a.s. by continuity, we get Xt = Yt a.s.
2 Path regularity can sometimes be established by comparison with a regular process: Lemma 4.24 (transfer of regularity) Let X d = Y be random processes on T with values in a separable metric space S, where the paths of Y belong to a set U ⊂ST that is Borel for the σ-field U = (BS)T ∩U. Then X has a version with paths in U.
Proof: For clarity we may write ˜ Y for the path of Y , regarded as a random element in U. Then ˜ Y is Y -measurable, and Lemma 1.14 yields a measurable mapping f : ST →U with ˜ Y = f(Y ) a.s. Define ˜ X = f(X), and note that ( ˜ X, X) d = (˜ Y , Y ). Since the diagonal in S2 is measurable, we get in particular P{ ˜ Xt = Xt} = P{˜ Yt = Yt} = 1, t ∈T.
2 We conclude with a characterization of distribution functions on Rd, re-quired in Chapter 6. For any vectors x = (x1, . . . , xd) and y = (y1, . . . , yd), write x ≤y for the component-wise inequality xk ≤yk, k = 1, . . . , d, and similarly for x < y. In particular, the distribution function F of a probability measure μ on Rd is given by F(x) = μ{y; y ≤x}. Similarly, let x ∨y denote the componentwise maximum. Put 1 = (1, . . . , 1) and ∞= (∞, . . . , ∞).
For any rectangular box (x, y] = {u; x < u ≤y} = (x1, y1] × · · · × (xd, yd], we note that μ(x, y] = Σ u s(u)F(u), where s(u) = (−1)p with p = Σk 1{uk = yk}, and the summation extends over all vertices u of (x, y]. Writing F(x, y] for the stated sum, we say that F has non-negative increments if F(x, y] ≥0 for all pairs x < y. We further say that F is right-continuous if F(xn) →F(x) as xn ↓x, and proper if F(x) →1 or 0 as mink xk →±∞, respectively.
We show that any function with the stated properties determines a proba-bility measure on Rd.
Theorem 4.25 (distribution functions) For functions F : Rd →[0, 1], these conditions are equivalent: (i) F is the distribution function of a probability measure μ on Rd, 4. Processes, Distributions, and Independence 97 (ii) F is right-continuous and proper with non-negative increments.
Proof: For any F as in (ii), the set function F(x, y] is clearly finitely addi-tive. Since F is proper, we have also F(x, y] →1 as x →−∞and y →∞, i.e., as (x, y] ↑(−∞, ∞) = Rd. Hence, for every n ∈N there exists a probability measure μn on (2−nZ)d with Z = {. . . , −1, 0, 1, . . .}, such that μn{2−nk} = F 2−n(k −1), 2−nk , k ∈Zd, n ∈N.
The finite additivity of F(x, y] yields μm 2−m(k −1, k] = μn 2−m(k −1, k] , k ∈Zd, m < n in N.
(11) By (11) we can split the Lebesgue unit interval ([0, 1], B[0, 1], λ) recursively to construct some random vectors ξ1, ξ2, . . . with distributions μ1, μ2, . . . , sat-isfying ξm −2−m < ξn ≤ξm, m < n.
In particular, ξ1 ≥ξ2 ≥· · · ≥ξ1 −1, and so ξn converges pointwise to some random vector ξ. Define μ = λ ◦ξ−1.
To see that μ has distribution function F, we conclude from the properness of F that λ ξn ≤2−nk = μn(−∞, 2−nk] = F(2−nk), k ∈Zd, n ∈N.
Since also ξn ↓ξ a.s., Fatou’s lemma yields for dyadic x ∈Rd λ{ξ < x} = λ{ξn < x ult.} ≤lim inf n→∞λ{ξn < x} ≤F(x) = lim sup n→∞λ{ξn ≤x} ≤λ{ξn ≤x i.o.} ≤λ{ξ ≤x}, and so F(x) ≤λ{ξ ≤x} ≤F(x + 2−n1), n ∈N.
Letting n →∞and using the right-continuity of F, we get λ{ξ ≤x} = F(x), which extends to any x ∈Rd, by the right-continuity of both sides.
2 We also need a version for unbounded measures, extending the one-dimen-sional Theorem 2.14: Corollary 4.26 (unbounded measures) For any right-continuous function F on Rd with non-negative increments, there exists a measure μ on Rd with μ(x, y] = F(x, y], x ≤y in Rd.
98 Foundations of Modern Probability Proof: For any a ∈Rd, we may apply Theorem 4.25 to suitably normalized versions of the function Fa(x) = F(a, a ∨x] to obtain a measure μa on [a, ∞) with μa(a, x] = F(a, x] for all x > a. Then clearly μa = μb on (a ∨b, ∞) for any a, b, and so the set function μ = supa μa is a measure with the required property.
2 Exercises 1. Give an example of two processes X, Y with different distributions, such that Xt d = Yt for all t.
2. Let X, Y be {0, 1} -valued processes on an index set T. Show that X d = Y iff P{Xt1 + · · · + Xtn > 0} = P{Yt1 + · · · + Ytn > 0} for all n ∈N and t1, . . . , tn ∈T.
3. Let F be a right-continuous function of bounded variation with F(−∞) = 0.
For any random variable ξ, show that EF(ξ) = P{ξ ≥t} F(dt). (Hint: First let F be the distribution function of a random variable η⊥ ⊥ξ, and use Lemma 4.11.) 4. Given a random variable ξ ∈L1 and a strictly convex function f on R, show that Ef(ξ) = f(E ξ) iffξ = E ξ a.s.
5. Let ξ = Σj aj ξj and η = Σj bj ηj, where the sums converge in L2. Show that Cov(ξ, η) = Σ ij aibj Cov(ξi, ηj), where the double series is absolutely convergent.
6. Let the σ-fields Ft,n, t ∈T, n ∈N, be non-decreasing in n for fixed t and independent in t for fixed n. Show that the independence extends to the σ-fields Ft = n Ft,n.
7. For every t ∈T, let ξt, ξt 1, ξt 2, . . . be random elements in a metric space St with ξt n →ξt a.s., such that the ξt n are independent in t for fixed n ∈N. Show that the independence extends to the limits ξt. (Hint: First show that EΠt∈S ft(ξt) = Πt∈S Eft(ξt) for any bounded, continuous functions ft on St, and for finite subsets S ⊂T.) 8. Give an example of three events that are pairwise independent but not inde-pendent.
9. Give an example of two random variables that are uncorrelated but not inde-pendent.
10. Let ξ1, ξ2, . . . be i.i.d. random elements with distribution μ in a measurable space (S, S). Fix a set A ∈S with μA > 0, and put τ = inf{k; ξk ∈A}. Show that ξτ has distribution μ(· |A) = μ(· ∩A)/μA.
11. Let ξ1, ξ2, . . . be independent random variables with values in [0, 1]. Show that EΠn ξn = Πn E ξn. In particular, P n An = Πn PAn for any independent events A1, A2, . . . .
12. For any random variables ξ1, ξ2, . . . , prove the existence of some constants c1, c2, . . . > 0, such that the series Σ n cn ξn converges a.s.
13. Let ξ1, ξ2, . . . be random variables with ξn →0 a.s. Prove the existence of a measurable function f ≥0 with f > 0 outside 0, such that Σ n f(ξn) < ∞a.s. Show that the conclusion fails if we assume only L1-convergence.
14.
Give an example of events A1, A2, . . . , such that P{An i.o.} = 0 while Σ n PAn = ∞.
4. Processes, Distributions, and Independence 99 15. Extend Lemma 4.20 to a correspondence between U(0, 1) random variables ϑ and Bernoulli sequences ξ1, ξ2, . . . with rate p ∈(0, 1).
16. Give an elementary proof of Theorem 4.25 for d = 1. (Hint: Define ξ = F −1(ϑ) where ϑ is U(0, 1), and note that ξ has distribution function F.) 17.
Let ξ1, ξ2, . . . be random variables with P{ξn ̸= 0 i.o.} = 1.
Prove the existence of some constants cn ∈R, such that P{|cnξn| > 1 i.o.} = 1. (Hint: Note that P Σk≤n |ξk| > 0 →1.) Chapter 5 Random Sequences, Series, and Averages Moments and tails, convergence a.s. and in probability, sub-sequence cri-terion, continuity and completeness, convergence in distribution, tight-ness and uniform integrability, convergence of means, Lp-convergence, random series and averages, positive and symmetric terms, variance criteria, three-series criterion, strong laws of large numbers, empirical distributions, portmanteau theorem, continuous mapping and approxi-mation, Skorohod coupling, representation and measurability of limits Here our first aim is to introduce and compare the basic modes of convergence of random quantities. For random elements ξ and ξ1, ξ2, . . . in a metric or topo-logical space S, the most commonly used notions are those of almost sure con-vergence (ξn →ξ a.s.) and convergence in probability (ξn P →ξ), corresponding to the general notions of convergence a.e. and in measure, respectively. When S = R we also have the concept of Lp-convergence, familiar from Chapter 1.
Those three notions are used throughout this book. For a special purpose in Chapter 10, we also need the notion of weak L1-convergence.
Our second major theme is to study the very different notion of conver-gence in distribution (ξn d →ξ), defined by the condition Ef(ξn) →Ef(ξ) for all bounded, continuous functions f on S. This is clearly equivalent to weak convergence of the associated distributions μn = L(ξn) and μ = L(ξ), written as μn w →μ and defined by the condition μnf →μf for every f as above. In this chapter we will establish only the most basic results of weak convergence theory, such as the ‘portmanteau’ theorem, the continuous mapping and ap-proximation theorems, and the Skorohod coupling. Our development of the general theory continues in Chapters 6 and 23, and further distributional limit theorems of various kind appear throughout the remainder of the book.
Our third main theme is to characterize the convergence of series Σk ξk and averages n−cΣk≤n ξk, where ξ1, ξ2, . . . are independent random variables and c is a positive constant. The two problems are related by the elementary Kro-necker lemma, and the main results are the basic three-series criterion and the strong law of large numbers. The former result is extended in Chapter 9 to the powerful martingale convergence theorem, whereas extensions and refinements of the latter result are proved in Chapters 22 and 25. The mentioned theorems are further related to certain weak convergence results presented in Chapters 6–7.
101 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 102 Foundations of Modern Probability Before embarking on our systematic study of the various notions of conver-gence, we consider a couple of elementary but useful inequalities.
Lemma 5.1 (moments and tails, Bienaym´ e, Chebyshev, Paley & Zygmund) For any random variable ξ ≥0 with 0 < E ξ < ∞, we have (1 −r)2 + (E ξ)2 E ξ2 ≤P ξ > rE ξ ≤1 r, r > 0.
The upper bound is often referred to as the Chebyshev or Markov inequality.
When E ξ2 < ∞, we get in particular the classical estimate P |ξ −E ξ| > ε ≤ε−2 Var(ξ), ε > 0.
Proof: We may clearly assume that E ξ = 1.
The upper bound then follows as we take expectations in the inequality r1{ξ > r} ≤ξ. To get the lower bound, we note that for any r, t > 0, t21{ξ > r} ≥(ξ −r)(2t + r −ξ) = 2 ξ(r + t) −r(2 t + r) −ξ2.
Taking expected values, we get for r ∈(0, 1) t2P{ξ > r} ≥2(r + t) −r(2 t + r) −E ξ2 ≥2 t(1 −r) −E ξ2.
Now choose t = E ξ2/(1 −r).
2 For any random elements ξ and ξ1, ξ2, . . . in a separable metric space (S, ρ), we say that ξn converges in probability to ξ and write ξn P →ξ if lim n→∞P ρ(ξn, ξ) > ε = 0, ε > 0.
Lemma 5.2 (convergence in probability) For any random elements ξ, ξ1, ξ2, . . . in a separable metric space (S, ρ), these conditions are equivalent: (i) ξn P →ξ, (ii) E ρ(ξn, ξ) ∧1 →0, (iii) for any sub-sequence N ′ ⊂N, we have ξn →ξ a.s. along a further sub-sequence N ′′ ⊂N ′.
In particular, ξn →ξ a.s. implies ξn P →ξ, and the notion of convergence in probability depends only on the topology, regardless of the metrization ρ.
Proof: The implication (i) ⇒(ii) is obvious, and the converse holds by Chebyshev’s inequality. Now assume (ii), and fix an arbitrary sub-sequence N ′ ⊂N. We may then choose a further sub-sequence N ′′ ⊂N ′ such that E Σ n∈N′′ ρ(ξn, ξ) ∧1 = Σ n∈N′′E ρ(ξn, ξ) ∧1 < ∞, 5. Random Sequences, Series, and Averages 103 where the equality holds by monotone convergence. The series on the left then converges a.s., which implies ξn →ξ a.s. along N ′′, proving (iii).
Now assume (iii). If (ii) fails, there exists an ε > 0 such that E{ρ(ξn, ξ)∧1} > ε along a sub-sequence N ′ ⊂N. Then (iii) yields ξn →ξ a.s. along a further sub-sequence N ′′ ⊂N ′, and so by dominated convergence E{ρ(ξn, ξ) ∧1} →0 along N ′′, a contradiction proving (ii).
2 For a first application, we show that convergence in probability is preserved by continuous mappings.
Lemma 5.3 (continuous mapping) For any separable metric spaces S, T, let ξ, ξ1, ξ2, . . . be random elements in S, and let the mapping f : S →T be measurable and a.s. continuous at ξ. Then ξn P →ξ ⇒ f(ξn) P →f(ξ).
Proof: Fix any sub-sequence N ′ ⊂N. By Lemma 5.2 we have ξn →ξ a.s.
along a further sub-sequence N ′′ ⊂N ′, and so by continuity f(ξn) →f(ξ) a.s.
along N ′′. Hence, f(ξn) P →f(ξ) by Lemma 5.2.
2 For any separable metric spaces (Sk, ρk), we may introduce the product space S = S1 × S2 × · · · endowed with the product topology, admitting the metrization ρ(x, y) = Σ k 2−k ρk(xk, yk) ∧1 , x, y ∈S.
(1) Since BS = k BSk by Lemma 1.2, a random element in S is simply a sequence of random elements in Sk, k ∈N.
Lemma 5.4 (random sequences) Let ξ = (ξ1, ξ2, . . .) and ξn = (ξn 1 , ξn 2 , . . .), n ∈N, be random elements in S1 × S2 × · · · , for some separable metric spaces S1, S2, . . . . Then ξn P →ξ ⇔ ξn k P →ξk in Sk, k ∈N.
Proof: With ρ as in (1), we get for any n ∈N E ρ(ξn, ξ) ∧1 = E ρ(ξn, ξ) = Σ k 2−kE ρk(ξn k , ξk) ∧1 .
Thus, by dominated convergence, E ρ(ξn, ξ) ∧1 →0 ⇔ E ρk(ξn k , ξk) ∧1 →0, k ∈N.
2 Combining the last two lemmas, we may show how convergence in proba-bility is preserved by the basic arithmetic operations.
Corollary 5.5 (elementary operations) For any random variables ξn P →ξ and ηn P →η, we have 104 Foundations of Modern Probability (i) a ξn + b ηn P →a ξ + b η, a, b ∈R, (ii) ξnηn P →ξη, (iii) ξn/ηn P →ξ/η when η, η1, η2, . . . ̸= 0 a.s.
Proof: By Lemma 5.4 we have (ξn, ηn) P →(ξ, η) in R2, and so (i) and (ii) follow by Lemma 5.3. To prove (iii), we may apply Lemma 5.3 to the function f : (x, y) →(x/y)1{y ̸= 0}, which is clearly a.s. continuous at (ξ, η).
2 We turn to some associated completeness properties. For any random ele-ments ξ1, ξ2, . . . in a separable metric space (S, ρ), we say that (ξn) is Cauchy (convergent) in probability if ρ(ξm, ξn) P →0 as m, n →∞, in the sense that E{ρ(ξm, ξn) ∧1} →0.
Lemma 5.6 (completeness) Let ξ1, ξ2, . . . be random elements in a separable, complete metric space (S, ρ). Then these conditions are equivalent: (i) (ξn) is Cauchy in probability, (ii) ξn P →ξ for a random element ξ in S.
Similar results hold for a.s. convergence.
Proof: The a.s. case is immediate from Lemma 1.11. Assuming (ii), we get E ρ(ξm, ξn) ∧1 ≤E ρ(ξm, ξ) ∧1 + E ρ(ξn, ξ) ∧1 →0, proving (i).
Now assume (i), and define nk = inf n ≥k; sup m≥n E{ρ(ξm, ξn) ∧1} ≤2−k , k ∈N.
The nk are finite and satisfy E Σ k ρ(ξnk, ξnk+1) ∧1 ≤Σ k 2−k < ∞, and so Σk ρ(ξnk, ξnk+1) < ∞a.s. The sequence (ξnk) is then a.s. Cauchy and converges a.s. toward a measurable limit ξ. To prove (ii), write E ρ(ξm, ξ) ∧1 ≤E ρ(ξm, ξnk) ∧1 + E ρ(ξnk, ξ) ∧1 , and note that the right-hand side tends to zero as m, k →∞, by the Cauchy convergence of (ξn) and dominated convergence.
2 For any probability measures μ and μ1, μ2, . . . on a metric space (S, ρ) with Borel σ-field S, we say that μn converges weakly to μ and write μn w →μ, if μnf →μf for every f ∈ˆ CS, defined as the class of bounded, continuous functions f : S →R. For any random elements ξ and ξ1, ξ2, . . . in S, we further say that ξn converges in distribution to ξ and write ξn d →ξ if L(ξn) w →L(ξ), 5. Random Sequences, Series, and Averages 105 i.e., if Ef(ξn) →Ef(ξ) for all f ∈ˆ CS. The latter mode of convergence clearly depends only on the distributions, and the ξn and ξ need not even be defined on the same probability space. To motivate the definition, note that xn →x in a metric space S ifff(xn) →f(x) for all continuous functions f : S →R, and that L(ξ) is determined by the integrals Ef(ξ) for all f ∈ˆ CS.
We give a connection between convergence in probability and distribution.
Lemma 5.7 (convergence in probability and distribution) Let ξ, ξ1, ξ2, . . . be random elements in a separable metric space (S, ρ). Then ξn P →ξ ⇒ ξn d →ξ, with equivalence when ξ is a.s. a constant.
Proof: Let ξn P →ξ. For any f ∈ˆ CS we need to show that Ef(ξn) → Ef(ξ). If the convergence fails, there exists a sub-sequence N ′ ⊂N such that infn∈N′ |Ef(ξn) −Ef(ξ)| > 0. Then Lemma 5.2 yields ξn →ξ a.s. along a further sub-sequence N ′′ ⊂N ′. By continuity and dominated convergence, we get Ef(ξn) →Ef(ξ) along N ′′, a contradiction.
Conversely, let ξn d →s ∈S. Since ρ(x, s) ∧1 is a bounded and continu-ous function of x, we get E{ρ(ξn, s)∧1} →E{ρ(s, s)∧1} = 0, and so ξn P →s. 2 A T-indexed family of random vectors ξt in Rd is said to be tight, if lim r→∞sup t∈T P |ξt| > r = 0.
For sequences (ξn), this is clearly equivalent to lim r→∞lim sup n→∞P |ξn| > r = 0, (2) which is often easier to verify. Tightness plays an important role for the com-pactness methods developed in Chapters 6 and 23. For the moment, we note only the following simple connection with weak convergence.
Lemma 5.8 (weak convergence and tightness) For random vectors ξ, ξ1, ξ2, . . . in Rd, we have ξn d →ξ ⇒ (ξn) is tight.
Proof: Fix any r > 0, and define f(x) = {1 −(r −|x|)+}+. Then lim sup n→∞P |ξn| > r ≤lim n→∞Ef(ξn) = Ef(ξ) ≤P |ξ| > r −1 .
Here the right-hand side tends to 0 as r →∞, and (2) follows.
2 We further note the following simple relationship between tightness and convergence in probability.
106 Foundations of Modern Probability Lemma 5.9 (tightness and convergence in probability) For random vectors ξ1, ξ2, . . . in Rd, these conditions are equivalent: (i) (ξn) is tight, (ii) cn ξn P →0 for any constants cn ≥0 with cn →0.
Proof: Assume (i), and let cn →0. Fixing any r, ε > 0, and noting that cn r ≤ε for all but finitely many n ∈N, we get lim sup n→∞P |cn ξn| > ε ≤lim sup n→∞P |ξn| > r .
Here the right-hand side tends to 0 as r →∞, and so P{|cn ξn| > ε} →0.
Since ε was arbitrary, we get cn ξn P →0, proving (ii). If instead (i) is false, we have infk P{|ξnk| > k} > 0 for a sub-sequence (nk) ⊂N. Putting cn = sup{k−1; nk ≥n}, we note that cn →0, and yet P{|cnkξnk| > 1} ̸→0. Thus, even (ii) fails.
2 We turn to a related notion for expected values. The random variables ξt, t ∈T, are said to be uniformly integrable if lim r→∞sup t∈T E |ξt|; |ξt| > r = 0, (3) where E(ξ; A) = E(1Aξ). For sequences (ξn) in L1, this is clearly equivalent to lim r→∞lim sup n→∞E |ξn|; |ξn| > r = 0.
(4) Condition (3) holds in particular if the ξt are Lp-bounded for some p > 1, in the sense that supt E|ξt|p < ∞. To see this, it suffices to write E |ξt|; |ξt| > r ≤r−p+1E|ξt|p, r, p > 0.
We give a useful criterion for uniform integrability. For motivation, we note that if ξ is an integrable random variable, then E(|ξ|; A) →0 as PA →0, by Lemma 5.2 and dominated convergence, in the sense that lim ε→0 sup PA<ε E(|ξ|; A) = 0.
Lemma 5.10 (uniform integrability) For any family of random variables ξt, t ∈T, these conditions are equivalent: (i) the ξt are uniformly integrable, (ii) sup t∈T E|ξt| < ∞and lim PA→0 sup t∈T E(|ξt|; A) = 0.
Proof: Assuming (i), we may write E(|ξt|; A) ≤rPA + E |ξt|; |ξt| > r , r > 0.
5. Random Sequences, Series, and Averages 107 Here the second part of (ii) follows as we let PA →0 and then r →∞. To prove the first part, we may take A = Ω and choose r > 0 large enough.
Conversely, assume (ii). By Chebyshev’s inequality, we get as r →∞ suptP{|ξt| > r} ≤r−1suptE|ξt| →0, and (i) follows from the second part of (ii) with A = {|ξt| > r}.
2 The relevance of uniform integrability for the convergence of moments is clear from the following result, which also provides a weak convergence version of Fatou’s lemma.
Lemma 5.11 (convergence of means) For any random variables ξ, ξ1, ξ2, . . .
≥0 with ξn d →ξ, we have (i) E ξ ≤lim infn E ξn, (ii) E ξn →E ξ < ∞⇔the ξn are uniformly integrable.
Proof: For any r > 0, the function x →x ∧r is bounded and continuous on R+. Thus, lim inf n→∞E ξn ≥lim n→∞E(ξn ∧r) = E(ξ ∧r), and the first assertion follows as we let r →∞. Next assume (4), and note in particular that E ξ ≤lim infn E ξn < ∞. For any r > 0, we get E ξn −E ξ ≤ E ξn −E(ξn ∧r) + E(ξn ∧r) −E(ξ ∧r) + E(ξ ∧r) −E ξ .
Letting n →∞and then r →∞, we obtain E ξn →E ξ. Now assume instead that E ξn →E ξ < ∞. Keeping r > 0 fixed, we get as n →∞ E ξn; ξn > r ≤E ξn −ξn ∧(r −ξn)+ →E ξ −ξ ∧(r −ξ)+ .
Since x∧(r−x)+ ↑x as r →∞, the right-hand side tends to zero by dominated convergence, and (4) follows.
2 We may now prove some useful criteria for convergence in Lp.
Theorem 5.12 (Lp-convergence) For a fixed p > 0, let ξ1, ξ2, . . . ∈Lp with ξn P →ξ. Then these conditions are equivalent: (i) ∥ξn −ξ∥p →0, (ii) ∥ξn∥p →∥ξ∥p < ∞, (iii) the variables |ξn|p are uniformly integrable.
Conversely, (i) implies ξn P →ξ.
108 Foundations of Modern Probability Proof: First let ξn →ξ in Lp. Then ∥ξn∥p →∥ξ∥p by Lemma 1.31, and Lemma 5.1 yields for any ε > 0 P |ξn −ξ| > ε = P |ξn −ξ|p > εp ≤ε−p ∥ξn −ξ∥p p →0, which shows that ξn P →ξ. We may henceforth assume that ξn P →ξ. In par-ticular, |ξn|p d →|ξ|p by Lemmas 5.3 and 5.7, and so (ii) ⇔(iii) by Lemma 5.11. Next assume (ii). If (i) fails, there exists a sub-sequence N ′ ⊂N with infn∈N′ ∥ξn −ξ∥p > 0. Then Lemma 5.2 yields ξn →ξ a.s. along a further sub-sequence N ′′ ⊂N ′, and so Lemma 1.34 gives ∥ξn −ξ∥p →0 along N ′′, a contradiction. Thus, (ii) ⇒(i), and so all three conditions are equivalent.
2 We consider yet another mode of convergence for random variables. Letting ξ, ξ1, . . . ∈Lp for a p ∈[1, ∞), we say that ξn →ξ weakly in Lp if E ξnη →E ξη for every η ∈Lq, where p−1 + q−1 = 1. Taking η = |ξ|p−1sgn ξ gives ∥η∥q = ∥ξ∥p−1 p , and so by H¨ older’s inequality, ∥ξ∥p p = E ξη = lim n→∞E ξnη ≤∥ξ∥p−1 p lim inf n→∞∥ξn∥p, which implies ∥ξ∥p ≤lim infn ∥ξn∥p.
Now recall that any L2-bounded sequence has a sub-sequence converging weakly in L2. The following related criterion for weak compactness in L1 will be needed in Chapter 10.
Lemma 5.13 (weak L1-compactness1, Dunford) Every uniformly integrable sequence of random variables has a sub-sequence converging weakly in L1.
Proof: Let (ξn) be uniformly integrable. Define ξk n = ξn1{|ξn| ≤k}, and note that (ξk n) is L2-bounded in n for fixed k. By the compactness in L2 and a diagonal argument, there exist a sub-sequence N ′ ⊂N and some random variables η1, η2, . . . such that ξk n →ηk holds weakly in L2 and then also in L1, as n →∞along N ′ for fixed k.
Now ∥ηk −ηl∥1 ≤lim infn ∥ξk n−ξl n∥1, and by uniform integrability the right-hand side tends to zero as k, l →∞. Thus, the sequence (ηk) is Cauchy in L1, and so it converges in L1 toward some ξ. By approximation it follows easily that ξn →ξ weakly in L1 along N ′.
2 We turn to some convergence criteria for random series, beginning with an important special case.
Proposition 5.14 (series with positive terms) For any independent random variables ξ1, ξ2, . . . ≥0, Σ n ξn < ∞a.s.
⇔ Σ n E(ξn ∧1) < ∞.
1The converse statement is also true, but it is not needed in this book.
5. Random Sequences, Series, and Averages 109 Proof: The right-hand condition yields E Σn(ξn ∧1) < ∞by Fubini’s theorem, and so Σn(ξn ∧1) < ∞a.s. In particular, Σn 1{ξn > 1} < ∞a.s., and so the series Σn(ξn ∧1) and Σn ξn differ by at most finitely many terms, which implies Σn ξn < ∞a.s.
Conversely, let Σn ξn < ∞a.s. Then also Σn(ξn ∧1) < ∞a.s., and so we may assume that ξn ≤1 for all n. Since 1 −x ≤e−x ≤1 −a x for x ∈[0, 1] with a = 1 −e−1, we get 0 < E exp −Σ n ξn = Π n Ee−ξn ≤Π n (1 −a Eξn) ≤Π n exp(−a Eξn) = exp −aΣ n Eξn , and so ΣnEξn < ∞.
2 To deal with more general series, we need the following strengthened version of Chebyshev’s inequality. A further extension appears as Proposition 9.16.
Lemma 5.15 (maximum inequality, Kolmogorov) Let ξ1, ξ2, . . . be indepen-dent random variables with mean 0, and put Sn = Σk≤n ξk. Then P supn|Sn| > r ≤r−2Σ nVar(ξn), r > 0.
Proof: We may assume that Σ nE ξ2 n < ∞. Writing τ = inf{n; |Sn| > r} and noting that Sk1{τ = k}⊥ ⊥(Sn −Sk) for k ≤n, we get Σ k≤n E ξ2 k = E S2 n ≥Σ k≤n E S2 n; τ = k ≥Σ k≤n E S2 k; τ = k + 2E Sk(Sn −Sk); τ = k = Σ k≤n E S2 k; τ = k ≥r2P{τ ≤n}.
As n →∞, we obtain Σ k E ξ2 k ≥r2P{τ < ∞} = r2P supk|Sk| > r .
2 The last result yields a sufficient condition for a.s. convergence of random series with independent terms. Precise criteria will be given in Theorem 5.18.
Lemma 5.16 (variance criterion for series, Khinchin & Kolmogorov) Let ξ1, ξ2, . . . be independent random variables with mean 0. Then Σ nVar(ξn) < ∞ ⇒ Σ n ξn converges a.s.
110 Foundations of Modern Probability Proof: Write Sn = ξ1 + · · · + ξn. By Lemma 5.15 we get for any ε > 0 P sup k≥n |Sn −Sk| > ε ≤ε−2 Σ k≥n E ξ2 k.
Hence, as n →∞, sup h,k≥n |Sh −Sk| ≤2 sup k≥n |Sn −Sk| P →0, and Lemma 5.2 yields the corresponding a.s. convergence along a sub-sequence.
Since the supremum on the left is non-increasing in n, the a.s. convergence ex-tends to the entire sequence, which means that (Sn) is a.s. Cauchy convergent.
Thus, Sn converges a.s. by Lemma 5.6.
2 We turn to a basic connection between series with positive and symmetric terms. By ξn P →∞we mean that P{ξn > r} →1 for every r > 0.
Theorem 5.17 (positive and symmetric terms) Let ξ1, ξ2, . . . be independent, symmetric random variables. Then these conditions are equivalent: (i) Σ n ξn converges a.s., (ii) Σ n ξ2 n < ∞a.s., (iii) Σ nE ξ2 n ∧1 < ∞.
If the conditions fail, then Σ k≤n ξk P →∞.
Proof: The equivalence (ii) ⇔(iii) holds by Proposition 5.14. Next assume (iii), and conclude from Lemma 5.16 that Σn ξn1{|ξn| ≤1} converges a.s. From (iii) and Fubini’s theorem we have also Σn1{|ξn| > 1} < ∞a.s. Hence, the series Σn ξn1{|ξn| ≤1} and Σn ξn differ by at most finitely many terms, and so even the latter series converges a.s. This shows that (i) ⇐(ii) ⇔(iii). It remains to show that |Sn| P →∞when (ii) fails, since the former condition yields |Sn| →∞a.s. along a sub-sequence, contradicting (i). Here Sn = ξ1 + · · · + ξn as before.
Thus, suppose that (ii) fails. Then Kolmogorov’s 0−1 law yields Σn ξ2 n = ∞ a.s. First let |ξn| = cn be non-random for every n. If the cn are unbounded, then for every r > 0 we may choose a sub-sequence n1, n2, . . . ∈N such that cn1 > r and cnk+1 > 2cnk for all k. Then clearly P Σj≤k ξnj ∈I ≤2−k for every interval I of length 2r, and so by convolution P{|Sn| ≤r} ≤2−k for all n ≥nk, which implies P{|Sn| ≤r} →0.
Next let cn ≤c < ∞for all n. Choosing a > 0 so small that cos x ≤e−a x2 for |x| ≤1, we get for 0 < |t| ≤c−1 0 ≤Ee itSn = Π k≤n cos(t ck) ≤Π k≤n exp −a t2c2 k = exp −a t2 Σ k≤n c2 k →0.
5. Random Sequences, Series, and Averages 111 Anticipating the elementary Lemma 6.1 below, we get again P{|Sn| ≤r} →0 for each r > 0.
For general ξn, choose some independent i.i.d. random variables ϑn with P{ϑn = ±1} = 1 2, and note that the sequences (ξn) and (ϑn|ξn|) have the same distribution. Letting μ be the distribution of the sequence (|ξn|), we get by Lemma 4.11 P |Sn| > r = P Σ k≤n ϑk xk > r μ(dx), r > 0.
Here the integrand tends to 0 for μ-almost every sequence x = (xn), by the result for constant |ξn|, and so the integral tends to 0 by dominated conver-gence.
2 We may now give precise criteria for convergence, a.s. or in distribution, of a series of independent random variables. Write Var(ξ; A) = Var(ξ1A).
Theorem 5.18 (three-series criterion, Kolmogorov, L´ evy) Let ξ1, ξ2, . . . be independent random variables. Then Σn ξn converges a.s. iffit converges in distribution, which holds iff (i) Σ nP |ξn| > 1 < ∞, (ii) Σ nE ξn; |ξn| ≤1 converges, (iii) Σ nVar ξn; |ξn| ≤1 < ∞.
Our proof requires some simple symmetrization inequalities. Say that m is a median of the random variable ξ if P{ξ > m} ∨P{ξ < m} ≤ 1 2.
A symmetrization of ξ is defined as a random variable of the form ˜ ξ = ξ−ξ′, where ξ′⊥ ⊥ξ with ξ′ d = ξ. For symmetrizations of the random variables ξ1, ξ2, . . . , we require the same properties for the whole sequences (ξn) and (ξ′ n).
Lemma 5.19 (symmetrization) Let ˜ ξ be a symmetrization of a random vari-able ξ with median m. Then for any r > 0, 1 2 P |ξ −m| > r ≤P |˜ ξ| > r ≤2 P |ξ| > r/2 .
Proof: Let ˜ ξ = ξ −ξ′ as above, and write ξ −m > r, ξ′ ≤m ∪ ξ −m < −r, ξ′ ≥m ⊂ |˜ ξ| > r ⊂ |ξ| > r/2 ∪ |ξ′| > r/2 .
2 We also need a simple centering lemma.
Lemma 5.20 (centering) For any random variables ξ, η, ξ1, ξ2, . . . and con-stants c1, c2, . . . , we have ξn d →ξ ξn + cn d →η ⇒ cn →some c.
112 Foundations of Modern Probability Proof: Let ξn d →ξ. If cn →±∞along a sub-sequence N ′ ⊂N, then clearly ξn + cn P →±∞along N ′, which contradicts the tightness of ξn + cn. Thus, the cn are bounded. Now assume that cn →a and cn →b along two sub-sequences N1, N2 ⊂N. Then ξn + cn d →ξ + a along N1, while ξn + cn d →ξ + b along N2, and so ξ + a d = ξ + b. Iterating this yields ξ + n(b −a) d = ξ for every n ∈Z, which is only possible when a = b. Thus, all limit points of (cn) agree, and cn converges.
2 Proof of Theorem 5.18: Assume (i)−(iii), and define ξ′ n = ξn1{|ξn| ≤1}.
By (iii) and Lemma 5.16 the series Σn(ξ′ n −E ξ′ n) converges a.s., and so by (ii) the same thing is true for Σn ξ′ n. Finally, P{ξn ̸= ξ′ n i.o.} = 0 by (i) and the Borel–Cantelli lemma, and so Σn(ξn −ξ′ n) has a.s. finitely many non-zero terms. Hence, even Σn ξn converges a.s.
Conversely, suppose that Σn ξn converges in distribution. By Lemma 5.19 the sequence of symmetrized partial sums Σk≤n ˜ ξk is tight, and so Σn ˜ ξn converges a.s. by Theorem 5.17.
In particular, ˜ ξn →0 a.s.
For any ε > 0 we obtain Σn P{|˜ ξn| > ε} < ∞by the Borel–Cantelli lemma.
Hence, Σn P{|ξn −mn| > ε} < ∞by Lemma 5.19, where m1, m2, . . . are medians of ξ1, ξ2, . . . . Using the Borel–Cantelli lemma again, we get ξn −mn →0 a.s.
Now let c1, c2, . . . be arbitrary with mn −cn →0. Then even ξn −cn →0 a.s. Putting ηn = ξn1{|ξn −cn| ≤1}, we get a.s. ξn = ηn for all but finitely many n, and similarly for the symmetrized variables ˜ ξn and ˜ ηn. Thus, even Σn ˜ ηn converges a.s. Since the ˜ ηn are bounded and symmetric, Theorem 5.17 yields Σn Var(ηn) = 1 2 Σn Var(˜ ηn) < ∞. Thus, Σn (ηn −Eηn) converges a.s.
by Lemma 5.16, as does the series Σn (ξn −Eηn). Comparing with the dis-tributional convergence of Σn ξn, we conclude from Lemma 5.20 that Σn Eηn converges. In particular, Eηn →0 and ηn −Eηn →0 a.s., and so ηn →0 a.s., whence ξn →0 a.s. Then mn →0, and so we may take cn = 0 in the previous argument, and (i)−(iii) follow.
2 A sequence of random variables ξ1, ξ2, . . . with partial sums Sn is said to obey the strong law of large numbers, if Sn/n converges a.s. to a constant. The weak law is the corresponding property with convergence in probability. The following elementary proposition enables us to convert convergence results for random series into laws of large numbers.
Lemma 5.21 (series and averages, Kronecker) For any a1, a2, . . . ∈R and c > 0, Σ n≥1 n−c an converges ⇒ n−c Σ k≤n ak →0 Proof: Let Σn bn = b with bn = n−can.
By dominated convergence as n →∞, Σ k≤n bk −n−c Σ k≤n ak = Σ k≤n 1 −(k/n)c bk 5. Random Sequences, Series, and Averages 113 = c Σ k≤n bk 1 k/n xc−1dx = c 1 0 xc−1dx Σ k≤nx bk →b c 1 0 xc−1dx = b, and the assertion follows since the first term on the left tends to b.
2 The following simple result illustrates the method.
Corollary 5.22 (variance criterion for averages, Kolmogorov) Let ξ1, ξ2, . . .
be independent random variables with mean 0, and fix any c > 0. Then Σ n≥1 n−2 c Var(ξn) < ∞ ⇒ n−c Σ k≤n ξk →0 a.s.
Proof: Since Σn n−c ξn converges a.s. by Lemma 5.16, the assertion follows by Lemma 5.21.
2 In particular, we see that if ξ, ξ1, ξ2, . . . are i.i.d. with E ξ = 0 and E ξ2 < ∞, then n−cΣk≤n ξk →0 a.s. for any c > 1 2. The statement fails for c = 1 2, as we see by taking ξ to be N(0, 1).
The best possible normalization will be given in Corollary 22.8. The next result characterizes the stated convergence for arbitrary c > 1 2. For c = 1 we recognize the strong law of large numbers.
Corresponding criteria for the weak law are given in Theorem 6.17.
Theorem 5.23 (strong laws of large numbers, Kolmogorov, Marcinkiewicz & Zygmund) Let ξ, ξ1, ξ2, . . . be i.i.d. random variables, put Sn = Σk≤n ξk, and fix a p ∈(0, 2). Then n−1/p Sn converges a.s. iffthese conditions hold, depend-ing on the value of p: • for p ∈(0, 1]: ξ ∈Lp, • for p ∈(1, 2): ξ ∈Lp and E ξ = 0.
In that case, the limit equals E ξ when p = 1 and is otherwise equal to 0.
Proof: Assume E|ξ|p < ∞and also, for p ≥1, that E ξ = 0.
Define ξ′ n = ξn1{|ξn| ≤n1/p}, and note that by Lemma 4.4 Σ n P ξ′ n ̸= ξn = Σ n P |ξ|p > n ≤ ∞ 0 P |ξ|p > t dt = E|ξ|p < ∞.
Hence, the Borel–Cantelli lemma yields P{ξ′ n ̸= ξn i.o.} = 0, and so ξ′ n = ξn for all but finitely many n ∈N a.s.
It is then equivalent to show that n−1/pΣk≤n ξ′ k →0 a.s.
By Lemma 5.21 it suffices to prove instead that Σn n−1/p ξ′ n converges a.s.
114 Foundations of Modern Probability For p < 1, this is clear if we write E Σ n n−1/p |ξ′ n| = Σ n n−1/pE |ξ|; |ξ| ≤n1/p < ⌢ ∞ 0 t−1/pE |ξ|; |ξ| ≤t1/p dt = E |ξ| ∞ |ξ|p t−1/p dt < ⌢E |ξ|p < ∞.
If instead p > 1, then by Theorem 5.18 it suffices to prove that Σn n−1/pE ξ′ n converges and Σn n−2/p Var(ξ′ n) < ∞. Since E ξ′ n = −E(ξ; |ξ| > n1/p), we have for the former series Σ n n−1/p |E ξ′ n| ≤Σ n n−1/pE |ξ|; |ξ| > n1/p ≤ ∞ 0 t−1/pE |ξ|; |ξ| > t1/p dt = E |ξ| |ξ|p 0 t−1/p dt < ⌢E |ξ|p < ∞.
For the latter series, we obtain Σ n n−2/p Var(ξ′ n) ≤Σ n n−2/p E(ξ′ n)2 = Σ n n−2/p E ξ2; |ξ| ≤n1/p < ⌢ ∞ 0 t−2/p E ξ2; |ξ| ≤t1/p dt = E ξ2 ∞ |ξ|p t−2/p dt < ⌢E |ξ|p < ∞.
When p = 1, we have E ξ′ n = E(ξ; |ξ| ≤n) →0 by dominated convergence.
Thus, n−1Σk≤n E ξ′ k →0, and we may prove instead that n−1Σk≤n ξ′′ k →0 a.s., where ξ′′ n = ξ′ n−E ξ′ n. By Lemma 5.21 and Theorem 5.18 it is then enough to show that Σn n−2 Var(ξ′ n) < ∞, which may be seen as before.
Conversely, suppose that n−1/pSn = n−1/pΣk≤n ξk converges a.s. Then ξn n1/p = Sn n1/p −(n −1 n ) 1/p Sn−1 (n −1)1/p →0 a.s., and in particular P |ξn|p > n i.o. = 0. Hence, by Lemma 4.4 and the Borel– Cantelli lemma, E|ξ|p = ∞ 0 P |ξ|p > t dt ≤1 + Σ n≥1 P |ξ|p > n < ∞.
For p > 1, the direct assertion yields n−1/p(Sn −n E ξ) →0 a.s., and so n1−1/pE ξ converges, which implies E ξ = 0.
2 5. Random Sequences, Series, and Averages 115 For a simple application of the law of large numbers, consider an arbitrary sequence of random variables ξ1, ξ2, . . . , and define the associated empirical distributions as the random probability measures ˆ μn = n−1Σk≤n δξk.
The corresponding empirical distribution functions ˆ Fn are given by ˆ Fn(x) = ˆ μn(−∞, x] = n−1 Σ k≤n 1{ξk ≤x}, x ∈R, n ∈N.
Proposition 5.24 (empirical distribution functions, Glivenko, Cantelli) Let ξ1, ξ2, . . . be i.i.d. random variables with distribution function F and empirical distribution functions ˆ F1, ˆ F2, . . . . Then lim n→∞supx ˆ Fn(x) −F(x) = 0 a.s.
(5) Proof: The law of large numbers yields ˆ Fn(x) →F(x) a.s. for every x ∈R.
Now fix any finite partition −∞= x1 < x2 < · · · < xm = ∞.
By the monotonicity of F and ˆ Fn, supx ˆ Fn(x) −F(x) ≤max k ˆ Fn(xk) −F(xk) + max k F(xk+1) −F(xk) .
Letting n →∞and refining the partition indefinitely, we get in the limit lim sup n→∞ supx ˆ Fn(x) −F(x) ≤supxΔF(x) a.s., which proves (5) when F is continuous.
For general F, let ϑ1, ϑ2, . . . be i.i.d. U(0, 1), and define ηn = g(ϑn) for each n, where g(t) = sup{x; F(x) < t}. Then ηn ≤x iffϑn ≤F(x), and so (ηn) d = (ξn). We may then assume that ξn ≡ηn. Writing ˆ G1, ˆ G2, . . . for the empirical distribution functions of ϑ1, ϑ2, . . . , we see that also ˆ Fn = ˆ Gn ◦F.
Writing A = F(R) and using the result for continuous F, we get a.s.
supx ˆ Fn(x) −F(x) = sup t∈A ˆ Gn(t) −t ≤sup t∈[0,1] ˆ Gn(t) −t →0.
2 We turn to a more systematic study of convergence in distribution. Though for the moment we are mostly interested in distributions on Euclidean spaces, it is crucial for future applications to consider the more general setting of an abstract metric space. In particular, the theory is applied in Chapter 23 to random elements in various function spaces. For a random elements ξ in a metric space S with Borel σ-field S, let Sξ denote the class of sets B ∈S with ξ / ∈∂B a.s., called the ξ-continuity sets.
116 Foundations of Modern Probability Theorem 5.25 (portmanteau2 theorem, Alexandrov) Let ξ, ξ1, ξ2, . . . be ran-dom elements in a metric space (S, S) with classes G, F of open and closed sets. Then these conditions are equivalent: (i) ξn d →ξ, (ii) lim inf n→∞P{ξn ∈G} ≥P{ξ ∈G}, G ∈G, (iii) lim sup n→∞P{ξn ∈F} ≤P{ξ ∈F}, F ∈F, (iv) P{ξn ∈B} →P{ξ ∈B}, B ∈Sξ.
Proof: Assume (i), and fix any G ∈G.
Letting f be continuous with 0 ≤f ≤1G, we get Ef(ξn) ≤P{ξn ∈G}, and (ii) follows as we let n →∞ and then f ↑1G. The equivalence (ii) ⇔(iii) is clear from taking complements.
Now assume (ii)−(iii). For any B ∈S, P{ξ ∈Bo} ≤lim inf n→∞P{ξn ∈B} ≤lim sup n→∞P{ξn ∈B} ≤P{ξ ∈¯ B}.
Here the extreme members agree when ξ / ∈∂B a.s., and (iv) follows.
Conversely, assume (iv), and fix any F ∈F. Write F ε = {s ∈S; ρ(s, F) ≤ ε}. Then the sets ∂F ε ⊂{s; ρ(s, F) = ε} are disjoint, and so ξ / ∈∂F ε for almost every ε > 0. For such an ε, we may write P{ξn ∈F} ≤P{ξn ∈F ε}, and (iii) follows as we let n →∞and then ε →0. Finally, assume (ii), and let f ≥0 be continuous. By Lemma 4.4 and Fatou’s lemma, Ef(ξ) = ∞ 0 P f(ξ) > t dt ≤ ∞ 0 lim inf n→∞P f(ξn) > t dt ≤lim inf n→∞ ∞ 0 P f(ξn) > t dt = lim inf n→∞Ef(ξn).
(6) Now let f be continuous with |f| ≤c < ∞. Applying (6) to the functions c±f yields Ef(ξn) →Ef(ξ), which proves (i).
2 We insert an easy application to subspaces, needed in Chapter 23.
Corollary 5.26 (sub-spaces) Let ξ, ξ1, ξ2, . . . be random elements in a sub-space A of a metric space (S, ρ). Then ξn d →ξ in (A, ρ) ⇔ ξn d →ξ in (S, ρ) 2From the French word portemanteau = ‘big suitcase’.
Here it is just a collection of criteria, traditionally grouped together.
5. Random Sequences, Series, and Averages 117 Proof: Since ξ, ξ1, ξ2, . . . ∈A, condition (ii) of Theorem 5.25 is equivalent to lim inf n→∞P ξn ∈A ∩G ≥P ξ ∈A ∩G , G ⊂S open.
By Lemma 1.6, this agrees with condition (ii) of Theorem 5.25 for the subspace A.
2 Directly from the definitions, it is clear that convergence in distribution is preserved by continuous mappings. The following more general statement is a key result of weak convergence theory.
Theorem 5.27 (continuous mapping, Mann & Wald, Prohorov, Rubin) For any metric spaces S, T and set C ∈S, consider some measurable functions f, f1, f2, . . .: S →T satisfying sn →s ∈C ⇒ fn(sn) →f(s).
Then for any random elements ξ, ξ1, ξ2, . . . in S, ξn d →ξ ∈C a.s.
⇒ fn(ξn) d →f(ξ).
In particular, we see that if f : S →T is a.s. continuous at ξ, then ξn d →ξ ⇒ f(ξn) d →f(ξ).
Proof: Fix any open set G ⊂T, and let s ∈f −1G∩C. By hypothesis there exist an integer m ∈N and a neighborhood B of s, such that fk(s′) ∈G for all k ≥m and s′ ∈B. Thus, B ⊂ k≥m f −1 k G, and so f −1G ∩C ⊂∪ m ∩ k≥m f −1 k G o.
Writing μ, μ1, μ2, . . . for the distributions of ξ, ξ2, ξ2, . . . , we get by Theorem 5.25 μ(f −1G) ≤μ∪ m ∩ k≥m f −1 k G o = sup m μ ∩ k≥m f −1 k G o ≤sup m lim inf n→∞μn ∩ k≥m f −1 k G o ≤lim inf n→∞μn(f −1 n G).
Then the same theorem yields μn ◦f −1 n w →μ ◦f −1, which means that fn(ξn) d → f(ξ).
2 For a simple consequence, we note a useful randomization principle: Corollary 5.28 (randomization) For any metric spaces S, T, consider some probability kernels μ, μ1, μ2, . . .: S →T satisfying sn →s in S ⇒ μn(sn, ·) w →μ(s, ·) in MT.
Then for any random elements ξ, ξ1, ξ2, . . . in S, ξn d →ξ ⇒ E μn(ξn, ·) w →E μ(ξ, ·) in MT.
118 Foundations of Modern Probability Proof: For any bounded, continuous function f on T, the integrals μf and μnf are bounded, measurable functions on S, such that sn →s implies μnf(sn) →μf(s).
Hence, Theorem 5.27 yields μnf(ξn) d →μf(ξ), and so E μnf(ξn) →E μf(ξ). The assertion follows since f was arbitrary.
2 We turn to an equally important approximation theorem. Here the idea is to prove ξn d →ξ by approximating ξn ≈ηn and ξ ≈η, where ηn d →η. The required convergence will then follow, provided the former approximations hold uniformly in n.
Theorem 5.29 (approximation) Let ξ, ξn and ηk, ηk n be random elements in a separable metric space (S, ρ). Then ξn d →ξ, whenever (i) ηk d →ξ, (ii) ηk n d →ηk as n →∞, k ∈N, (iii) lim k→∞lim sup n→∞E ρ(ηk n, ξn) ∧1 = 0.
Proof: For any closed set F ⊂S and constant ε > 0, we have P{ξn ∈F} ≤P ηk n ∈F ε + P ρ(ηk n, ξn) > ε , where F ε = {s ∈S; ρ(s, F) ≤ε}. By Theorem 5.25, we get as n →∞ lim sup n→∞P{ξn ∈F} ≤P{ηk ∈F ε} + lim sup n→∞P ρ(ηk n, ξn) > ε .
Letting k →∞, we conclude from Theorem 5.25 and (iii) that lim sup n→∞P{ξn ∈F} ≤P{ξ ∈F ε}.
Here the right-hand side tends to P{ξ ∈F} as ε →0. Since F was arbitrary, Theorem 5.25 yields ξn d →ξ.
2 We may now consider convergence in distribution of random sequences.
Theorem 5.30 (random sequences) Let ξ = (ξ1, ξ2, . . .) and ξn = (ξ1 n, ξ2 n, . . .), n ∈N, be random elements in S1 × S2 × · · · , for some separable metric spaces S1, S2, . . . . Then ξn d →ξ ifffor any functions fk ∈ˆ CSk, E f1(ξ1 n) · · · fm(ξm n ) →E f1(ξ1) · · · fm(ξm) , m ∈N.
(7) In particular, ξn d →ξ follows from the finite-dimensional convergence ξ1 n, . . . , ξm n d → ξ1, . . . , ξm , m ∈N.
(8) If the sequences ξ and ξn have independent components, it suffices that ξk n d →ξk for each k.
5. Random Sequences, Series, and Averages 119 Proof: The necessity is clear from the continuity of the projections s →sk.
To prove the sufficiency, we first assume that (7) holds for a fixed m. Writing S′ k = {B ∈BSk; ξk / ∈∂B a.s.} and applying Theorem 5.25 m times, we obtain P (ξ1 n, . . . , ξm n ) ∈B →P (ξ1, . . . , ξm) ∈B , (9) for any set B = B1×· · ·×Bm with Bk ∈S′ k for all k. Since the Sk are separable, we may choose some countable bases Ck ⊂S′ k, so that C1 × · · · × Cm becomes a countable base in S1 × · · · × Sm. Hence, any open set G ⊂S1 × · · · × Sm is a countable union of measurable rectangles Bj = B1 j × · · · × Bm j with Bk j ∈S′ k for all k. Since the S′ k are fields, we may easily reduce to the case of disjoint sets Bj. By Fatou’s lemma and (9), lim inf n→∞P (ξ1 n, . . . , ξm n ) ∈G = lim inf n→∞Σ j P (ξ1 n, . . . , ξm n ) ∈Bj ≥Σ j P (ξ1, . . . , ξm) ∈Bj = P (ξ1, . . . , ξm) ∈G , and so (8) holds by Theorem 5.25.
To see that (8) implies ξn d →ξ, fix any ak ∈Sk, k ∈N, and note that the mapping (s1, . . . , sm) →(s1, . . . , sm, am+1, am+2, . . .) is continuous on S1 × · · · × Sm for each m ∈N. By (8) it follows that ξ1 n, . . . , ξm n , am+1, . . .
d → ξ1, . . . , ξm, am+1, . . .
, m ∈N.
(10) Writing ηm n and ηm for the sequences in (10) and letting ρ be the metric in (1), we also note that ρ(ξ, ηm) ≤2−m and ρ(ξn, ηm n ) ≤2−m for all m and n. The convergence ξn d →ξ now follows by Theorem 5.29.
2 For distributional convergence of some random objects ξ1, ξ2, . . . , the joint distribution of the elements ξn is clearly irrelevant. This suggests that we look for a more useful dependence, which may lead to simpler and more transparent proofs.
Theorem 5.31 (coupling, Skorohod, Dudley) Let ξ, ξ1, ξ2, . . . be random el-ements in a separable metric space (S, ρ), such that ξn d →ξ. Then there exist some random elements ˜ ξ, ˜ ξ1, ˜ ξ2, . . . in S with ˜ ξ d = ξ, ˜ ξn d = ξn; ˜ ξn →˜ ξ a.s.
Our proof involves families of independent random elements with specified distributions, whose existence is ensured in general by Corollary 8.25 below.
When S is complete, we may rely on the more elementary Theorem 4.19.
Proof: First take S = {1, . . . , m}, and put pk = P{ξ = k} and pn k = P{ξn = k}. Letting ϑ⊥ ⊥ξ be U(0, 1), we may construct some random elements ˜ ξn d = ξn, 120 Foundations of Modern Probability such that ˜ ξn = k whenever ξ = k and ϑ ≤pn k/pk. Since pn k →pk for each k, we get ˜ ξn →ξ a.s.
For general S, fix any p ∈N, and choose a partition of S into sets B1, B2, . . .
∈Sξ of diameter < 2−p. Next choose m so large that P{ξ ̸∈ k≤m Bk} < 2−p, and put B0 = k≤m Bc k. For k = 0, . . . , m, define κ = k when ξ ∈Bk and κn = k when ξn ∈Bk, n ∈N. Then κn d →κ, and the result for finite S yields some ˜ κn d = κn with ˜ κn →κ a.s. We further introduce some independent random elements ζk n in S with distributions L(ξn | ξn ∈Bk), and define ˜ ξp n = Σk ζk n1{˜ κn = k}, so that ˜ ξp n d = ξn for each n.
By construction, we have ρ(˜ ξp n, ξ) > 2−p ⊂{˜ κn ̸= κ} ∪{ξ ∈B0}, n, p ∈N.
Since ˜ κn →κ a.s. and P{ξ ∈B0} < 2−p, there exists for every p some np ∈N with P ∪ n≥np ρ(˜ ξp n, ξ) > 2−p < 2−p, p ∈N, and we may further assume that n1 < n2 < · · · . Then the Borel–Cantelli lemma yields a.s. supn≥np ρ(˜ ξp n, ξ) ≤2−p for all but finitely many p. Defining ηn = ˜ ξp n for np ≤n < np+1, we note that ξn d = ηn →ξ a.s.
2 We conclude with a result on the functional representation of limits, needed in Chapters 18 and 32. To motivate the problem, recall from Lemma 5.6 that, if ξn P →η for some random elements in a complete metric space S, then η = f(ξ) a.s. for some measurable function f : S∞→S, where ξ = (ξn).
Since f depends on the distribution μ of ξ, a universal representation would be of the form η = f(ξ, μ). For certain purposes, we need to choose a measurable version of the latter function. To allow constructions by repeated approximation in probability, we consider the more general case where ηn P →η for some random elements ηn = fn(ξ, μ).
For a precise statement, let ˆ MS be the space of probability measures μ on S, endowed with the σ-field induced by the evaluation maps μ →μB, B ∈BS.
Proposition 5.32 (representation of limits) Consider a complete metric space (S, ρ), a measurable space U, and some measurable functions f1, f2, . . . : U × ˆ MU →S.
Then there exist a measurable set A ⊂ ˆ MU and a function f : U × A →S such that, for any random element ξ in U with L(ξ) = μ, fn(ξ, μ) P →some η ⇔ μ ∈A, in which case we can choose η = f(ξ, μ).
Proof: For sequences s = (s1, s2, . . .) in S, define l(s) = limk sk when the limit exists and put l(s) = s∞otherwise, where s∞∈S is arbitrary.
By Lemma 1.11, l is a measurable mapping from S∞to S. Next consider a sequence η = (η1, η2, . . .) of random elements in S, and put ν = L(η). Define 5. Random Sequences, Series, and Averages 121 n1, n2, . . . as in the proof of Lemma 5.6, and note that each nk = nk(ν) is a measurable function of ν. Let C be the set of measures ν with nk(ν) < ∞ for all k, and note that ηn converges in probability iffν ∈C. Introduce the measurable function g(s, ν) = l sn1(ν), sn2(ν), . . . , s = (s1, s2, . . .) ∈S∞, ν ∈ˆ MS∞.
If ν ∈C, the proof of Lemma 5.6 shows that ηnk(ν) converges a.s., and so ηn P →g(η, ν).
Now let ηn = fn(ξ, μ) for a random element ξ in U with distribution μ and some measurable functions fn. We need to show that ν is a measurable function of μ. But this is clear from Lemma 3.2 (ii), applied to the kernel κ(μ, ·) = μ from ˆ MU to U and the function F = (f1, f2, . . .): U × ˆ MU →S∞.
2 For a simple consequence, we consider limits in probability of measurable processes. The resulting statement will be needed in Chapter 18.
Corollary 5.33 (measurability of limits, Stricker & Yor) Let X1, X2, . . . be S-valued, measurable processes on T, for a complete metric space S and a measurable space T. Then there exists a measurable set A ⊂T, such that Xn t P →some Xt ⇔ t ∈A, in which case we can choose X to be product measurable on A × Ω.
Proof: Define ξt = (X1 t , X2 t , . . .) and μt = L(ξt). By Proposition 5.32 there exist a measurable set C ⊂PS∞and a measurable function f : S∞× C →S, such that Xn t converges in probability iffμt ∈C, in which case Xn t P →f(ξt, μt).
It remains to note that the mapping t →μt is measurable, which is clear from Lemmas 1.4 and 1.28.
2 Exercises 1.
Let ξ1, . . . , ξn be independent, symmetric random variables.
Show that P (Σk ξk)2 ≥rΣk ξ2 k ≥(1 −r)2/3 for all r ∈(0, 1). (Hint: Reduce by Lemma 4.11 to the case of non-random |ξk|, and use Lemma 5.1.) 2.
Let ξ1, . . . , ξn be independent, symmetric random variables.
Show that P{maxk |ξk| > r} ≤2 P{|S| > r} for all r > 0, where S = Σk ξk. (Hint: Let η be the first term ξk where maxk |ξk| is attained, and check that (η, S −η) d = (η, η −S).) 3. Let ξ1, ξ2, . . . be i.i.d. random variables with P{|ξn| > t} > 0 for all t > 0.
Prove the existence of some constants c1, c2, . . . , such that cn ξn →0 in probability but not a.s.
4. Show that a family of random variables ξt is tight, iffsupt Ef(|ξt|) < ∞for some increasing function f : R+ →R+ with f(∞) = ∞.
5. Consider some random variables ξn, ηn, such that (ξn) is tight and ηn P →0.
Show that even ξn ηn P →0.
122 Foundations of Modern Probability 6. Show that the random variables ξt are uniformly integrable, iffsupt Ef(|ξt|) < ∞for some increasing function f : R+ →R+ with f(x)/x →∞as x →∞.
7. Show that the condition supt E|ξt| < ∞in Lemma 5.10 can be omitted if A is non-atomic.
8. Let ξ1, ξ2, . . . ∈L1. Show that the ξn are uniformly integrable, iffthe condition in Lemma 5.10 holds with supn replaced by lim supn.
9. Deduce the dominated convergence theorem from Lemma 5.11.
10. Show that if (|ξt|p) and (|ηt|p) are uniformly integrable for some p > 0, then so is (|aξt + bηt|p) for any a, b ∈R. (Hint: Use Lemma 5.10.) Use this fact to derive Proposition 5.12 from Lemma 5.11.
11. Give examples of random variables ξ, ξ1, ξ2, . . . ∈L2, such that ξn →ξ a.s.
but not in L2, in L2 but not a.s., or in L1 but not in L2.
12.
Let ξ1, ξ2, . . . be independent random variables in L2.
Show that Σ n ξn converges in L2, iffΣ n E ξn and Σ n Var(ξn) both converge.
13. Give an example of some independent, symmetric random variables ξ1, ξ2, . . . , such that Σ n ξn converges a.s. but Σ n|ξn| = ∞a.s.
14. Let ξn, ηn be symmetric random variables with |ξn| ≤|ηn|, such that the pairs (ξn, ηn) are independent. Show that Σ n ξn converges whenever Σ n ηn does.
15.
Let ξ1, ξ2, . . . be independent, symmetric random variables.
Show that E{(Σ n ξn)2 ∧1} ≤Σ n E(ξ2 n ∧1) whenever the latter series converges.
(Hint: Integrate over the sets where supn |ξn| ≤1 or > 1, respectively.) 16. Consider some independent sequences of symmetric random variables ξk, η1 k, η2 k, . . . with |ηn k| ≤|ξk| such that Σk ξk converges, and suppose that ηn k P →ηk for each k. Show that Σk ηn k P →Σk ηk. (Hint: Use a truncation based on the preceding exercise.) 17. Let Σn ξn be a convergent series of independent random variables. Show that the sum is a.s. independent of the order of terms iffΣn |E(ξn; |ξn| ≤1)| < ∞.
18. Let the random variables ξnj be symmetric and independent for each n. Show that Σj ξnj P →0 iffΣj E(ξ2 nj ∧1) →0.
19. Let ξn d →ξ and an ξn d →ξ for a non-degenerate random variable ξ and some constants an > 0. Show that an →1. (Hint: Turning to sub-sequences, we may assume that an →a.) 20. Let ξn d →ξ and anξn + bn d →ξ for some non-degenerate random variable ξ, where an > 0. Show that an →1 and bn →0. (Hint: Symmetrize.) 21. Let ξ1, ξ2, . . . be independent random variables such that anΣk≤n ξk converges in probability for some constants an →0. Show that the limit is degenerate.
22. Show that Theorem 5.23 fails for p = 2. (Hint: Choose the ξk to be indepen-dent and N(0, 1).) 23. Let ξ1, ξ2, . . . be i.i.d. and such that n−1/pΣk≤n ξk is a.s. bounded for some p ∈(0, 2). Show that E|ξ1|p < ∞. (Hint: Argue as in the proof of Theorem 5.23.) 24. For any p ≤1, show that the a.s. convergence in Theorem 5.23 remains valid in Lp. (Hint: Truncate the ξk.) 5. Random Sequences, Series, and Averages 123 25. Give an elementary proof of the strong law of large numbers when E|ξ|4 < ∞.
(Hint: Assuming E ξ = 0, show that EΣn (Sn/n)4 < ∞.) 26. Show by examples that Theorem 5.25 fails without the stated restrictions on the sets G, F, B.
27. Use Theorem 5.31 to give a simple proof of Theorem 5.27 when S is separable.
Generalize to random elements ξ, ξn in Borel sets C, Cn, respectively, assuming only fn(xn) →f(x) for xn ∈Cn and x ∈C with xn →x. Extend the original proof to that case.
28.
Give a short proof of Theorem 5.31 when S = R.
(Hint: Note that the distribution functions Fn, F satisfy F −1 n →F −1 a.e. on [0, 1].) Chapter 6 Gaussian and Poisson Convergence Characteristic functions and Laplace transforms, equi-continuity and tightness, linear projections, null arrays, Poisson convergence, positive and symmetric terms, central limit theorem, local CLT, Lindeberg con-dition, Gaussian convergence, weak laws of large numbers, domain of Gaussian attraction, slow variation, Helly’s selection theorem, vague and weak convergence, tightness and weak compactness, extended conti-nuity theorem This is yet another key chapter, dealing with issues of fundamental importance throughout modern probability. Our main focus will be on the Poisson and Gaussian distributions, along with associated limit theorems, extending the classical central limit theorem and related Poisson approximation.
The importance of the mentioned distributions extends far beyond the present context, as will gradually become clear throughout later chapters. In-deed, the Gaussian distributions underlie the construction of Brownian motion, arguably the most important process of the entire subject, whose study leads in turn into stochastic calculus. They also form a basis for the multiple Wiener– Itˆ o integrals, of crucial importance in Malliavin calculus and other subjects.
Similarly, the Poisson distributions underlie the construction of Poisson pro-cesses, which appear as limit laws for a wide variety of particle systems.
Throughout this chapter, we will use the powerful machinery of character-istic functions and Laplace transforms, which leads to short and transparent proofs of all the main results. Methods based on such functions also extend far beyond the present context. Thus, characteristic functions are used to derive the basic time-change theorems for continuous martingales, and they underlie the use of exponential martingales, which are basic for the Girsanov theorems and related applications of stochastic calculus. Likewise, Laplace transforms play a basic role throughout random-measure theory, and underlie the notion of potentials, of utmost importance in advanced Markov-process theory.
To begin with the basic definitions, consider a random vector ξ in Rd with distribution μ. The associated characteristic function ˆ μ is given by ˆ μ(t) = eitxμ(dx) = E eitξ, t ∈Rd, where tx denotes the inner product t1x1 + · · · + tdxd. For distributions μ on Rd +, it is often more convenient to consider the Laplace transform ˜ μ, given by ˜ μ(u) = e−uxμ(dx) = E e−uξ, u ∈Rd +.
125 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 126 Foundations of Modern Probability Finally, for distributions μ on Z+, it may be preferable to use the (probability) generating function ψ, given by ψ(s) = Σ n≥0 snP{ξ = n} = E s ξ, s ∈[0, 1].
Formally, ˜ μ(u) = ˆ μ(iu) and ˆ μ(t) = ˜ μ(−it), and so the functions ˆ μ and ˜ μ essentially agree apart from domain. Furthermore, the generating function ψ is related to the Laplace transform ˜ μ by ˜ μ(u) = ψ(e−u) or ψ(s) = ˜ μ(−log s).
Though the characteristic function always exists, it can’t always be extended to an analytic function on the complex plane.
For any distribution μ on Rd, the characteristic function ϕ = ˆ μ is clearly uniformly continuous with |ϕ(t)| ≤ϕ(0) = 1.
It is also Hermitian in the sense that ϕ(−t) = ¯ ϕ(t), where the bar denotes complex conjugation. If ξ has characteristic function ϕ, then the linear combination a ξ = a1ξ1+· · ·+ad ξd has characteristic function t →ϕ(ta). Also note that, if ξ and η are independent random vectors with characteristic functions ϕ and ψ, then the characteristic function of the pair (ξ, η) equals the tensor product ϕ ⊗ψ : (s, t) →ϕ(s)ψ(t).
In particular, ξ + η has characteristic function ϕ ψ, and for the symmetrized variable ξ −ξ′ it equals |ϕ|2.
Whenever applicable, the quoted statements carry over to Laplace trans-forms and generating functions.
The latter functions have the further ad-vantage of being positive, monotone, convex, and analytic—properties that simplify many arguments.
We list some simple but useful estimates involving characteristic functions.
The second inequality was used already in the proof of Theorem 5.17, and the remaining relations will be useful in the sequel to establish tightness.
Lemma 6.1 (tail estimates) For probability measures μ on R, (i) μ x; |x| ≥r ≤r 2 2/r −2/r(1 −ˆ μt) dt, r > 0, (ii) μ[−r, r] ≤2 r 1/r −1/r |ˆ μt| dt, r > 0.
If μ is supported by R+, then also (iii) μ[r, ∞) ≤2 1 −˜ μ(r−1) , r > 0.
Proof: (i) Using Fubini’s theorem and noting that sin x ≤x/2 for x ≥2, we get for any c > 0 c −c(1 −ˆ μt) dt = μ(dx) c −c(1 −eitx) dt = 2c 1 −sin cx cx μ(dx) ≥c μ x; |cx| ≥2 , and it remains to take c = 2/r.
6. Gaussian and Poisson Convergence 127 (ii) Write 1 2 μ[−r, r] ≤2 1 −cos(x/r) (x/r)2 μ(dx) = r μ(dx) (1 −r|t|)+ eixtdt = r (1 −r|t|)+ ˆ μt dt ≤r 1/r −1/r |ˆ μt| dt.
(iii) Noting that e−x < 1 2 when x ≥1, we get for any t > 0 1 −˜ μt = (1 −e−tx) μ(dx) ≥1 2 μ x; tx ≥1 .
2 Recall that a family of probability measures μa on Rd is said to be tight if lim r→∞sup a μa x; |x| > r = 0.
We may characterize tightness in terms of characteristic functions.
Lemma 6.2 (equi-continuity and tightness) For a family of probability mea-sures μa on Rd or Rd + with characteristics functions ˆ μa or Laplace transforms ˜ μa, we have (i) (μa) is tight in Rd ⇔(ˆ μa) is equi-continuous at 0, (ii) (μa) is tight in Rd + ⇔(˜ μa) is equi-continuous at 0.
In that case, (ˆ μa) or (˜ μa) is uniformly equi-continuous on Rd or Rd +.
Proof: The sufficiency is immediate from Lemma 6.1, applied separately in each coordinate. To prove the necessity, let ξa denote a random vector with distribution μa, and write for any s, t ∈Rd | ˆ μa(s) −ˆ μa(t) | ≤E eis ξa −eit ξa = E 1 −ei(t−s) ξa ≤2 E (t −s) ξa ∧1 .
If {ξa} is tight, then by Lemma 5.9 the right-hand side tends to 0 as t−s →0, uniformly in a, and the asserted uniform equi-continuity follows. The proof for Laplace transforms is similar.
2 For any probability measures μ, μ1, μ2, . . . on Rd, recall that the weak con-vergence μn w →μ is defined by μnf →μf for any bounded, continuous function f on Rd, where μf denotes the integral fdμ. The usefulness of characteristic functions is mainly due to the following basic result.
128 Foundations of Modern Probability Theorem 6.3 (characteristic functions, L´ evy) Let μ, μ1, μ2, . . . be probability measures on Rd or Rd + with characteristic functions ˆ μ, ˆ μ1, ˆ μ2, . . . or Laplace transforms ˜ μ, ˜ μ1, ˜ μ2, . . . . Then (i) μn w →μ on Rd ⇔ˆ μn(t) →ˆ μ(t), t ∈Rd, (ii) μn w →μ on Rd + ⇔˜ μn(u) →˜ μ(u), u ∈Rd +, in which case ˆ μn →ˆ μ or ˜ μn →˜ μ uniformly on every bounded set.
In particular, a probability measure on Rd or Rd + is uniquely determined by its characteristic function or Laplace transform, respectively. For the proof of Theorem 6.3, we need some special cases of the Stone–Weierstrass approxima-tion theorem. Here [0, ∞] denotes the compactification of R+.
Lemma 6.4 (uniform approximation, Weierstrass) (i) Any continuous function f : Rd →R with period 2π in each coordinate admits a uniform approximation by linear combinations of cos kx and sin kx with k ∈Zd +.
(ii) Any continuous function g : [0, ∞]d →R+ admits a uniform approxima-tion by linear combinations of functions e−kx with k ∈Zd +.
Proof of Theorem 6.3: We consider only (i), the proof of (ii) being similar.
If μn w →μ, then ˆ μn(t) →ˆ μ(t) for every t, by the definition of weak convergence.
By Lemmas 5.8 and 6.2, the latter convergence is uniform on every bounded set.
Conversely, let ˆ μn(t) →ˆ μ(t) for every t. By Lemma 6.1 and dominated convergence, we get for any a ∈Rd and r > 0 lim sup n→∞μn x; |ax| > r ≤lim n→∞ r 2 2/r −2/r 1 −ˆ μn(ta) dt = r 2 2/r −2/r 1 −ˆ μ(ta) dt.
Since ˆ μ is continuous at 0, the right-hand side tends to 0 as r →∞, which shows that the sequence (μn) is tight. Given any ε > 0, we may then choose r > 0 so large that μn{|x| > r} ≤ε for all n and μ{|x| > r} ≤ε.
Now fix any bounded, continuous function f : Rd →R, say with |f| ≤c < ∞. Let fr denote the restriction of f to the ball {|x| ≤r}, and extend fr to a continuous function ˜ f on Rd with | ˜ f| ≤c and period 2πr in each coordinate.
By Lemma 6.4 there exists a linear combination g of the functions cos(kx/r) and sin(kx/r), k ∈Zd +, such that | ˜ f −g| ≤ε. Writing ∥· ∥for the supremum norm, we get for any n ∈N μnf −μng ≤μn{|x| > r}∥f −˜ f∥+ ∥˜ f −g∥ ≤(2c + 1) ε, 6. Gaussian and Poisson Convergence 129 and similarly for μ. Thus, μnf −μf ≤ μng −μg + 2 (2c + 1) ε, n ∈N.
Letting n →∞and then ε →0, we obtain μnf →μf. Since f was arbitrary, this proves that μn w →μ.
2 The next result often enables us to reduce the d -dimensional case to that of dimension 1.
Corollary 6.5 (one-dimensional projections, Cram´ er & Wold) For any ran-dom vectors ξ, ξ1, ξ2, . . . in Rd or Rd +, we have (i) ξn d →ξ in Rd ⇔t ξn d →t ξ, t ∈Rd, (ii) ξn d →ξ in Rd + ⇔u ξn d →u ξ, u ∈Rd +.
In particular, the distribution of a random vector ξ in Rd or Rd + is uniquely determined by those of all linear combinations tξ with t ∈Rd or Rd +, respec-tively.
Proof: If t ξn d →t ξ for all t ∈Rd, then Eeit ξn →Eeit ξ by the definition of weak convergence, and so ξn d →ξ by Theorem 6.3, proving (i). The proof of (ii) is similar.
2 We now apply the continuity Theorem 6.3 to prove some classical limit theorems, beginning with the case of Poisson convergence. To motivate the introduction of the associated distribution, consider for each n ∈N some i.i.d.
random variables ξn1, . . . , ξnn with distribution P{ξnj =1} = 1 −P{ξnj =0} = cn, n ∈N, and assume that n cn →c < ∞. Then the sums Sn = ξn1 + . . . + ξnn have generating functions ψn(s) = 1 −(1 −s) cn n →e−c (1−s) = e−c Σ n≥0 cnsn n! , s ∈[0, 1].
The limit ψ(s) = e−c(1−s) is the generating function of the Poisson distribution with parameter c, the distribution of a random variable η with probabilities P{η = n} = e−ccn/n! for n ∈Z+. Note that η has expected value E η = ψ′(1) = c. Since ψn →ψ, Theorem 6.3 yields Sn d →η.
To state some more general instances of Poisson convergence, we need to introduce the notion of a null array1. By this we mean a triangular array of 1Its elements are also said to be uniformly asymptotically negligible, abbreviated as u.a.n.
130 Foundations of Modern Probability random variables or vectors ξnj, 1 ≤j ≤mn, n ∈N, that are independent for each n and satisfy supjE |ξnj| ∧1 →0.
(1) Informally, this means that ξnj P →0 as n →∞, uniformly in j. When ξnj ≥0 for all n and j, we may allow the mn to be infinite.
The ‘null’ property can be described as follows in terms of characteristic functions or Laplace transforms: Lemma 6.6 (null arrays) Let (ξnj) be a triangular array in Rd or Rd + with characteristic functions ˆ μnj or Laplace transforms ˜ μnj. Then (i) (ξnj) is null in Rd ⇔supj|1 −ˆ μnj(t)| →0, t ∈Rd, (ii) (ξnj) is null in Rd + ⇔infj ˜ μnj(u) →1, u ∈Rd +.
Proof: Relation (1) holds iffξn,jn P →0 for all sequences (jn). By Theorem 6.3 this is equivalent to ˆ μn,jn(t) →1 for all t and (jn), which is in turn equiv-alent to the condition in (i). The proof of (ii) is similar.
2 We may now give a general criterion for Poisson convergence of the row sums in a null array of integer-valued random variables. The result will be extended in Theorem 7.14 to more general limiting distributions and in Theorem 30.1 to the context of point processes.
Theorem 6.7 (Poisson convergence) Let (ξnj) be a null array of Z+-valued random variables, and let ξ be a Poisson variable with mean c. Then Σj ξnj d →ξ iff (i) Σj P{ξnj > 1} →0, (ii) Σj P{ξnj = 1} →c.
Here (i) is equivalent to (i′) supj ξnj ∨1 P →1, and when Σj ξnj is tight, (i) holds iffevery limit is Poisson.
We need an elementary lemma. Related approximations will also be used in Chapters 7 and 30.
Lemma 6.8 (sums and products) Consider a null array of constants cnj ≥0, and fix any c ∈[0, ∞]. Then Π j (1 −cnj) →e−c ⇔ Σ j cnj →c.
6. Gaussian and Poisson Convergence 131 Proof: Since supj cnj < 1 for large n, the first relation is equivalent to Σj log(1 −cnj) →−c, and the assertion follows since log(1 −x) = −x + o(x) as x →0.
2 Proof of Theorem 6.7: Let ψnj be the generating function of ξnj. By Theo-rem 6.3, the convergence Σj ξnj d →ξ is equivalent to Πj ψnj(s) →e−c(1−s) for arbitrary s ∈[0, 1], which holds by Lemmas 6.6 and 6.8 iff Σ j 1 −ψnj(s) →c(1 −s), s ∈[0, 1].
(2) By an easy computation, the sum on the left equals (1 −s)Σ j P{ξnj > 0} + Σ k>1 (s −sk)Σ j P{ξnj = k} = T1 + T2, (3) and we also note that s(1 −s)Σ j P{ξnj > 1} ≤T2 ≤sΣ j P{ξnj > 1}.
(4) Assuming (i)−(ii), we note that (2) follows from (3) and (4). Now assume (2). Then for s = 0 we get Σj P{ξnj > 0} →c, and so in general T1 →c(1−s).
But then (2) implies T2 →0, and (i) follows by (4). Finally, (ii) follows by subtraction.
To see that (i) ⇔(i′), we note that P supj ξnj ≤1 = Π j P{ξnj ≤1} = Π j 1 −P{ξnj > 1} .
By Lemma 6.8, the right-hand side tends to 1 iffΣj P{ξnj > 1} →0, which is the stated equivalence.
To prove the last assertion, put cnj = P{ξnj > 0}, and write E exp −Σ j ξnj −P supj ξnj > 1 ≤E exp −Σ j (ξnj ∧1) = Π j E exp −(ξnj ∧1) = Π j 1 −(1 −e−1) cnj ≤Π j exp −(1 −e−1) cnj = exp −(1 −e−1)Σ j cnj .
If (i) holds and Σj ξnj d →η, then the left-hand side tends to Ee−η > 0, and so the sums cn = Σj cnj are bounded. Hence, cn converges along a sub-sequence N ′ ⊂N toward a constant c. But then (i)−(ii) hold along N ′, and the first assertion shows that η is Poisson with mean c.
2 Next we consider some i.i.d. random variables ξ1, ξ2, . . . with P{ξk = ±1} = 1 2, and write Sn = ξ1 + · · · + ξn. Then n−1/2Sn has characteristic function ϕn(t) = cosn(n−1/2 t) = 1 −1 2 t2n−1 + O(n−2) n →e−t2/2 = ϕ(t).
132 Foundations of Modern Probability By a classical computation, the function e−x2/2 has Fourier transform ∞ −∞eitxe−x2/2dx = (2π)1/2e−t2/2, t ∈R.
Hence, ϕ is the characteristic function of a probability measure on R with density (2π)−1/2e−x2/2. This is the standard normal or Gaussian distribution N(0, 1), and Theorem 6.3 shows that n−1/2Sn d →ζ, where ζ is N(0, 1). The notation is motivated by the facts that Eζ = 0 and Var(ζ) = 1, where the former relation is obvious by symmetry and latter follows from Lemma 6.9 below. The general Gaussian law N(m, σ2) is defined as the distribution of the random variable η = m+σζ, which has clearly mean m and variance σ2. From the form of the characteristic functions together with the uniqueness property, we see that any linear combination of independent Gaussian random variables is again Gaussian.
More general limit theorems may be derived from the following technical result.
Lemma 6.9 (Taylor expansion) Let ϕ be the characteristic function of a ran-dom variable ξ with E|ξ|n < ∞. Then ϕ(t) = n Σ k=0 (it)kE ξk k!
+ o(tn), t →0.
Proof: Noting that |eit −1| ≤|t| for all t ∈R, we get recursively by dominated convergence ϕ(k)(t) = E(iξ)keitξ, t ∈R, 0 ≤k ≤n.
In particular, ϕ(k)(0) = E(iξ)k for k ≤n, and the result follows from Taylor’s formula.
2 The following classical result, known as the central limit theorem, explains the importance of the normal distributions.
The present statement is only preliminary, and more general versions will be obtained by different methods in Theorems 6.13, 6.16, and 6.18.
Theorem 6.10 (central limit theorem, Lindeberg, L´ evy) Let ξ1, ξ2, . . . be i.i.d.
random variables with mean 0 and variance 1, and let ζ be N(0, 1). Then n−1/2 Σ k≤n ξk d →ζ.
Proof: Let the ξk have characteristic function ϕ. By Lemma 6.9, the char-acteristic function of n−1/2Sn equals ϕn(t) = ϕ(n−1/2t) n = 1 −1 2 t2n−1 + o(n−1) n →e−t2/2, 6. Gaussian and Poisson Convergence 133 where the convergence holds as n →∞for fixed t.
2 We will also establish a stronger density version, needed in Chapter 30.
Here we introduce the smoothing densities ph(x) = (πh)−dΠ i≤d 1 −cos hxi /x2 i , x = (x1, . . . , xd) ∈Rd, with characteristic functions ˆ ph(t) = Π i≤d 1 −|ti|/h +, t = (t1, . . . , td) ∈Rd.
Say that a distribution on Rd is non-lattice if it is not supported by any shifted, proper sub-group of Rd.
Theorem 6.11 (density convergence, Stone) Let μ be a non-lattice distri-bution on Rd with mean 0 and covariances δij, and write ϕ for the standard normal density on Rd. Then lim n→∞nd/2 μ∗n ∗ph −ϕ∗n = 0, h > 0.
Proof: Since |∂iϕ∗n| < ⌢n−1−d/2 for all i ≤d, the assertion is trivially fulfilled for the standard normal distribution. It is then enough to prove that, for any measures μ and ν as stated, lim n→∞nd/2 (μ∗n −ν∗n) ∗ph = 0, h > 0.
By Fourier inversion, μ∗n −ν∗n ∗ph(x) = (2π)−d eixt ˆ ph(t) ˆ μn(t) −ˆ νn(t) dt, and so we get with Ih = [−h, h]d (μ∗n −ν∗n) ∗ph ≤(2π)−d Ih ˆ μn(t) −ˆ νn(t) dt.
It remains to show that the integral on the right declines faster than n−d/2.
For notational convenience we may take d = 1, the general case being similar.
By a standard Taylor expansion, as in Lemma 6.9, we have ˆ μn(t) = 1 −1 2 t2 + o(t2) n = exp −1 2 n t2(1 + o(1)) = e−nt2/2 1 + n t2o(1) , and similarly for ˆ νn(t), and so ˆ μn(t) −ˆ νn(t) = e−nt2/2 n t2o(1).
134 Foundations of Modern Probability Here clearly n1/2 e−nt2/2 n t2 dt = t2e−t2/2 dt < ∞, and since μ and ν are non-lattice, we have |ˆ μ|∨|ˆ ν| < 1, uniformly on compacts in R \ {0}. Writing Ih = Iε ∪(Ih \ Iε), we get for any ε > 0 n1/2 Ih ˆ μn(t) −ˆ νn(t) dt < ⌢rε + h n1/2mn ε, where rε →0 as ε →0, and mε < 1 for all ε > 0. The desired convergence now follows, as we let n →∞and then ε →0.
2 We now examine the relationship between null arrays of symmetric and pos-itive random variables. In this context, we also derive criteria for convergence toward Gaussian and degenerate limits, respectively.
Theorem 6.12 (positive or symmetric terms) Let (ξnj) be a null array of symmetric random variables, and let ζ be N(0, c) for some c ≥0. Then Σ j ξnj d →ζ ⇔ Σ j ξ2 nj P →c, where convergence holds iff (i) Σj P |ξnj| > ε →0, ε > 0, (ii) Σj E ξ2 nj ∧1 →c.
Here (i) is equivalent to (i′) supj |ξnj| P →0, and when Σj ξnj or Σj ξ2 nj is tight, (i) holds iffevery limit is Gaussian or degenerate, respectively.
The necessity of (i) is remarkable and plays a crucial role in our proof of the more general Theorem 6.16. It is instructive to compare the present statement with the corresponding result for random series in Theorem 5.17. Further note the extended version in Proposition 7.10.
Proof: First let Σj ξnj d →ζ. By Theorem 6.3 and Lemmas 6.6 and 6.8, it is equivalent that Σ j E 1 −cos(t ξnj) →1 2 c t2, t ∈R, (5) where the convergence is uniform on every bounded interval. Comparing the integrals of (5) over [0, 1] and [0, 2], we get Σj Ef(ξnj) →0, where f(0) = 0 and f(x) = 3 −4 sin x x + sin 2x 2x , x ∈R \ {0}.
Here f is continuous with f(x) →3 as |x| →∞, and f(x) > 0 for x ̸= 0.
Indeed, the latter relation is equivalent to 8 sin x −sin 2x < 6 x for x > 0, 6. Gaussian and Poisson Convergence 135 which is obvious when x ≥π/2 and follows by differentiation twice when x ∈(0, π/2). Writing g(x) = infy>x f(y) and letting ε > 0 be arbitrary, we get Σ j P |ξnj| > ε ≤Σ j P f(ξnj) > g(ε) ≤Σ j Ef(ξnj) g(ε) →0, which proves (i).
If instead Σj ξ2 nj P →c, the corresponding symmetrized variables ηnj satisfy Σj ηnj P →0, and so ΣjP{|ηnj| > ε} →0 as before. By Lemma 5.19 it follows that ΣjP{|ξ2 nj −mnj| > ε} →0, where the mnj are medians of ξ2 nj, and since supj mnj →0, condition (i) follows again. Using Lemma 6.8, we further note that (i) ⇔(i′). Thus, we may henceforth assume that (i) is fulfilled.
Next we note that, for any t ∈R and ε > 0, Σ j E 1 −cos(t ξnj); |ξnj| ≤ε = 1 2 t2 1 −O(t2ε2) Σ j E ξ2 nj; |ξnj| ≤ε .
Assuming (i), the equivalence between (5) and (ii) now follows as we let n →∞ and then ε →0. To prove the corresponding result for the variables ξ2 nj, we may write instead, for any t, ε > 0, Σ j E 1 −e−t ξ2 nj; ξ2 nj ≤ε = t 1 −O(t ε) Σ j E ξ2 nj; ξ2 nj ≤ε , and proceed as before. This completes the proof of the first assertion.
Finally, assume that (i) holds and Σj ξnj d →η. Then the same relation holds for the truncated variables ξnj1{|ξnj| ≤1}, and so we may assume that |ξnj| ≤1 for all j and k. Define cn = Σj Eξ2 nj. If cn →∞along a sub-sequence, then the distribution of c−1/2 n Σj ξnj tends to N(0, 1) by the first assertion, which is impossible by Lemmas 5.8 and 5.9. Thus, (cn) is bounded and converges along a sub-sequence. By the first assertion, Σj ξnj then tends to a Gaussian limit, and so even η is Gaussian.
2 We may now prove a classical criterion for Gaussian convergence, involving normalization by second moments.
Theorem 6.13 (Gaussian variance criteria, Lindeberg, Feller) Let (ξnj) be a triangular array with E ξnj = 0 and Σj Var(ξnj) →1, and let ζ be N(0, 1).
Then these conditions are equivalent: (i) Σj ξnj d →ζ and supjVar(ξnj) →0, (ii) Σj E ξ2 nj; |ξnj| > ε →0, ε > 0.
Here (ii) is the celebrated Lindeberg condition. Our proof is based on two elementary lemmas.
136 Foundations of Modern Probability Lemma 6.14 (product comparison) For any complex numbers z1, . . . , zn and z′ 1, . . . , z′ n of modulus ≤1, we have Π k zk −Π k z′ k ≤Σ k |zk −z′ k|.
Proof: For n = 2 we get |z1z2 −z′ 1z′ 2| ≤|z1z2 −z′ 1z2| + |z′ 1z2 −z′ 1z′ 2| ≤|z1 −z′ 1| + |z2 −z′ 2|, and the general result follows by induction.
2 Lemma 6.15 (Taylor expansion) For any t ∈R and n ∈Z+, eit− n Σ k=0 (it)k k!
≤2|t|n n!
∧ |t|n+1 (n + 1)! .
Proof: Letting hn(t) denote the difference on the left, we get hn(t) = i t 0 hn−1(s)ds, t > 0, n ∈Z+.
Starting from the obvious relations |h−1| ≡1 and |h0| ≤2, we get by induction |hn−1(t)| ≤|t|n/n! and |hn(t)| ≤2|t|n/n!.
2 Returning to the proof of Theorem 6.13, we consider at this point only the sufficiency of condition (ii), needed for the proof of the main Theorem 6.16.
The necessity will be established after the proof of that theorem.
Proof of Theorem 6.13, (ii) ⇒(i): Write cnj = E ξ2 nj and cn = Σj cnj. First note that for any ε > 0, supj cnj ≤ε2 + supjE ξ2 nj; |ξnj| > ε ≤ε2 + Σ j E ξ2 nj; |ξnj| > ε , which tends to 0 under (ii), as n →∞and then ε →0.
Now introduce some independent random variables ζnj with distributions N(0, cnj), and note that ζn = Σj ζnj is N(0, cn). Hence, ζn d →ζ. Letting ϕnj and ψnj denote the characteristic functions of ξnj and ζnj, respectively, it remains by Theorem 6.3 to show that Πj ϕnj −Πj ψnj →0. Then conclude from Lemmas 6.14 and 6.15 that, for fixed t ∈R, Π j ϕnj(t) −Π j ψnj(t) ≤Σ j ϕnj(t) −ψnj(t) ≤Σ j ϕnj(t) −1 + 1 2 t2cnj + Σ j ψnj(t) −1 + 1 2 t2cnj < ⌢Σ jE ξ2 nj(1 ∧|ξnj|) + Σ j E ζ2 nj(1 ∧|ζnj|).
6. Gaussian and Poisson Convergence 137 For any ε > 0, we have Σ j E ξ2 nj(1 ∧|ξnj|) ≤εΣ j cnj + Σ j E ξ2 nj; |ξnj| > ε , which tends to 0 by (ii), as n →∞and then ε →0. Further note that Σ j E ζ2 nj(1 ∧|ζnj|) ≤Σ j E|ζnj|3 = Σ j c3/2 nj E|ξ|3 < ⌢cn supjc1/2 nj →0, by the first part of the proof.
2 We may now derive precise criteria for convergence to a Gaussian limit.
Note the striking resemblance with the three-series criterion in Theorem 5.18.
A far-reaching extension of the present result is obtained by different methods in Chapter 7. As before, we write Var(ξ; A) = Var(ξ1A).
Theorem 6.16 (Gaussian convergence, Feller, L´ evy) Let (ξnj) be a null ar-ray of random variables, and let ζ be N(b, c) for some constants b, c. Then Σj ξnj d →ξ iff (i) Σj P |ξnj| > ε →0, ε > 0, (ii) Σj E ξnj; |ξnj| ≤1 →b, (iii) Σj Var ξnj; |ξnj| ≤1 →c.
Here (i) is equivalent to (i′) supj |ξnj| P →0, and when Σj ξnj is tight, (i) holds iffevery limit is Gaussian.
Proof: To see that (i) ⇔(i′), we note that P supj|ξnj| > ε = 1 −Π j 1 −P{|ξnj| > ε} , ε > 0.
Since supj P{|ξnj| > ε} →0 under both conditions, the assertion follows by Lemma 6.8.
Now let Σj ξnj d →ζ. Introduce medians mnj and symmetrizations ˜ ξnj of the variables ξnj, and note that mn ≡supj |mnj| →0 and Σj ˜ ξnj d →˜ ζ, where ˜ ζ is N(0, 2c). By Lemma 5.19 and Theorem 6.12, we get for any ε > 0 Σ j P |ξnj| > ε ≤Σ j P |ξnj −mnj| > ε −mn ≤2 Σ j P |˜ ξnj| > ε −mn →0.
Thus, we may henceforth assume condition (i) and hence that supj |ξnj| P →0.
Then Σj ξnj d →η is equivalent to Σj ξ′ nj d →η, where ξ′ nj = ξnj1{|ξnj| ≤1}, 138 Foundations of Modern Probability and so we may further let |ξnj| ≤1 a.s. for all n and j. Then (ii) and (iii) reduce to bn ≡Σj E ξnj →b and cn ≡Σj Var(ξnj) →c, respectively.
Write bnj = E ξnj, and note that supj |bnj| →0 by (i). Assuming (ii)−(iii), we get Σj ξnj −bn d →ζ −b by Theorem 6.13, and so Σj ξnj d →ζ. Conversely, Σj ξnj d →ζ implies Σj ˜ ξnj d →˜ ζ, and (iii) follows by Theorem 6.12. But then Σj ξnj −bn d →ζ −b, whence by Lemma 5.20 the bn converge toward some b′.
This gives Σj ξnj d →ζ + b′ −b, and so b′ = b, which means that even (ii) is fulfilled.
It remains to prove, under condition (i), that any limiting distribution is Gaussian. Then assume Σj ξnj d →η, and note that Σj ˜ ξnj d →˜ η, where ˜ η is a symmetrization of η. If cn →∞along a sub-sequence, then c−1/2 n Σj ˜ ξnj tends to N(0, 2) by the first assertion, which is impossible by Lemma 5.9. Thus, (cn) is bounded, and so the convergence cn →c holds along a sub-sequence. But then Σnj ξnj −bn tends to N(0, c), again by the first assertion, and Lemma 5.20 shows that even bn converges toward a limit b. Hence, Σnj ξnj tends to N(b, c), which is then the distribution of η.
2 Proof of Theorem 6.13, (i) ⇒(ii): The second condition in (i) shows that (ξnj) is a null array. Furthermore, we have for any ε > 0 Σ j Var ξnj; |ξnj| ≤ε ≤Σ j E ξ2 nj; |ξnj| ≤ε ≤Σ j E ξ2 nj →1.
By Theorem 6.16 even the left-hand side tends to 1, and (ii) follows.
2 Using Theorem 6.16, we may derive the ultimate versions of the weak laws of large numbers. Note that the present conditions are only slightly weaker than those for the strong laws in Theorem 5.23.
Theorem 6.17 (weak laws of large numbers) Let ξ, ξ1, ξ2, . . . be i.i.d. random variables, put Sn = Σk≤n ξk, and fix a p ∈(0, 2). Then n−1/p Sn converges in probability iffthese conditions hold as r →∞, depending on the value of p: • for p ̸= 1: rp P{|ξ| > r} →0, • for p = 1: rP{|ξ| > r} →0 and E(ξ; |ξ| ≤r) →some c ∈R.
In that case, the limit equals c when p = 1, and is otherwise equal to 0.
Proof: By Theorem 6.16 applied to the null array of random variables ξnj = n−1/p ξj, j ≤n, the stated convergence is equivalent to (i) n P |ξ| > n1/p ε →0, ε > 0, (ii) n1−1/p E ξ; |ξ| ≤n1/p →c, (iii) n1−2/p Var ξ; |ξ| ≤n1/p →0.
6. Gaussian and Poisson Convergence 139 Here (i) is equivalent to rpP{|ξ| > r} →0, by the monotonicity of P{|ξ| > r1/p}. Furthermore, Lemma 4.4 yields for any r > 0 rp−2 Var ξ; |ξ| ≤r ≤rpE (ξ/r)2 ∧1 = rp 1 0 P |ξ| ≥r √ t dt, rp−1 E ξ; |ξ| ≤r ≤rpE |ξ/r| ∧1 = rp 1 0 P |ξ| ≥rt dt.
Since t−a is integrable on [0, 1] for any a < 1, we get by dominated convergence (i) ⇒(iii), and also (i) ⇒(ii) with c = 0 when p < 1.
If instead p > 1, we see from (i) and Lemma 4.4 that E|ξ| = ∞ 0 P |ξ| > r dr < ⌢ ∞ 0 1 ∧r−p dr < ∞.
Then E(ξ; |ξ| ≤r) →E ξ, and (ii) yields E ξ = 0. Moreover, (i) implies rp−1E |ξ|; |ξ| > r = rpP |ξ| > r + rp−1 ∞ r P |ξ| > t dt →0.
Assuming in addition that E ξ = 0, we obtain (ii) with c = 0.
When p = 1, condition (i) yields E |ξ|; n < |ξ| ≤n + 1 < ⌢n P |ξ| > n →0.
Hence, under (i), condition (ii) is equivalent to E(ξ; |ξ| ≤r) →c.
2 For a further extension of the central limit theorem in Proposition 6.10, we characterize convergence toward a Gaussian limit of suitably normalized partial sums from a single i.i.d. sequence. Here a non-decreasing function L ≥0 is said to vary slowly at ∞, if supx L(x) > 0 and L(cx) ∼L(x) as x →∞for each c > 0. This holds in particular when L is bounded, but it is also true for many unbounded functions, such as for log(x ∨1).
Theorem 6.18 (domain of Gaussian attraction, L´ evy, Feller, Khinchin) Let ξ, ξ1, ξ2, . . . be i.i.d., non-degenerate random variables, and let ζ be N(0, 1).
Then these conditions are equivalent: (i) there exist some constants an and mn, such that an Σ k≤n (ξk −mn) d →ζ, (ii) the function L(x) = E ξ2; |ξ| ≤x varies slowly at ∞.
We may then take mn ≡E ξ. Finally, (i) holds with an ≡n−1/2 and mn ≡0 iffE ξ = 0 and E ξ2 = 1.
140 Foundations of Modern Probability Even other stable distributions may occur as limits, though the convergence criteria are then more complicated. Our proof of Theorem 6.18 is based on a technical result, where for every m ∈R we define Lm(x) = E (ξ −m)2; |ξ −m| ≤x , x > 0.
Lemma 6.19 (slow variation, Karamata) Let ξ be a non-degenerate random variable, such that L0 varies slowly at ∞. Then so does the function Lm for every m ∈R, and as x →∞, x2−p E |ξ|p; |ξ| > x L0(x) →0, p ∈[0, 2).
(6) Proof: Fix any constant r ∈(1, 22−p), and choose x0 > 0 so large that L0(2x) ≤rL0(x) for all x ≥x0. For such an x, we get with summation over n ≥0 x2−pE |ξ|p; |ξ| > x = x2−p Σ n E |ξ|p; |ξ|/x ∈(2n, 2n+1] ≤Σ n 2 (p−2)nE ξ2; |ξ|/x ∈(2n, 2n+1] ≤Σ n 2 (p−2)n(r −1) rnL0(x) = (r −1)L0(x) 1 −2 p−2 r .
Here (6) follows, as we divide by L0(x) and let x →∞and then r →1.
In particular, E|ξ|p < ∞for all p < 2. If even E ξ2 < ∞, then E(ξ −m)2 < ∞, and the first assertion is obvious. If instead E ξ2 = ∞, we may write Lm(x) = E ξ2; |ξ −m| ≤x + m E m −2 ξ; |ξ −m| ≤x .
Here the last term is bounded, and the first term lies between the bounds L0(x ± m) ∼L0(x). Thus, Lm(x) ∼L0(x), and the slow variation of Lm fol-lows from that of L0.
2 Proof of Theorem 6.18: Let L vary slowly at ∞. Then so does Lm with m = E ξ by Lemma 6.19, and so we may take Eξ = 0. Now define cn = 1 ∨sup x > 0; nL(x) ≥x2 , n ∈N, and note that cn ↑∞. From the slow variation of L it is further clear that cn < ∞for all n, and that moreover nL(cn) ∼c2 n. In particular, cn ∼n1/2 iff L(cn) ∼1, i.e., iffVar(ξ) = 1.
We shall verify the conditions of Theorem 6.16 with b = 0, c = 1, and ξnj = ξj/cn, j ≤n. Beginning with (i), let ε > 0 be arbitrary, and conclude from Lemma 6.19 that n P |ξ/cn| > ε ∼c2 nP{|ξ| > cnε} L(cn) ∼c2 nP{|ξ| > cnε} L(cnε) →0.
6. Gaussian and Poisson Convergence 141 Recalling that E ξ = 0, we get by the same lemma n E ξ/cn; |ξ/cn| ≤1 ≤n c−1 n E |ξ|; |ξ| > cn ∼ cnE |ξ|; |ξ| > cn L(cn) →0, (7) which proves (ii). To obtain (iii), we note that by (7) n Var ξ/cn; |ξ/cn| ≤1 = n c−2 n L(cn) −n E ξ/cn; |ξ| ≤cn 2 →1.
Hence, Theorem 6.16 yields (i) with an = c−1 n and mn ≡0.
Now assume (i) for some constants an and mn. Then a corresponding result holds for the symmetrized variables ˜ ξ, ˜ ξ1, ˜ ξ2, . . . , with constants an/ √ 2 and 0, and so we may assume that c−1 n Σk≤n ˜ ξk d →ζ.
Here clearly cn →∞, and moreover cn+1 ∼cn, since even c−1 n+1Σk≤n ˜ ξk d →ζ by Theorem 5.29.
Now define for x > 0 ˜ T(x) = P{|˜ ξ| > x}, ˜ L(x) = E ˜ ξ 2; |˜ ξ| ≤x , ˜ U(x) = E ˜ ξ 2 ∧x2 .
Then Theorem 6.16 yields n ˜ T(cnε) →0 for all ε > 0, and also n c−2 n ˜ L(cn) →1.
Thus, c2 n ˜ T(cnε)/˜ L(cn) →0, which extends by monotonicity to x2 ˜ T(x) ˜ U(x) ≤x2 ˜ T(x) ˜ L(x) →0, x →∞.
Next define for any x > 0 T(x) = P{|ξ| > x}, U(x) = E( ξ2∧x2).
By Lemma 5.19, we have T(x+|m|) ≤2 ˜ T(x) for any median m of ξ. Further-more, we get by Lemmas 4.4 and 5.19 ˜ U(x) = x2 0 P{˜ ξ 2 > t} dt ≤2 x2 0 P{4 ξ2 > t} dt = 8 U(x/2).
Hence, as x →∞, L(2x) −L(x) L(x) ≤ 4 x2 T(x) U(x) −x2 T(x) ≤ 8 x2 ˜ T(x −|m|) 8−1 ˜ U(2x) −2x2 ˜ T(x −|m|) →0, 142 Foundations of Modern Probability which shows that L is slowly varying.
Finally, let n−1/2Σ k≤n ξk d →ζ. By the previous argument with cn = n1/2 we get ˜ L(n1/2) →2, which implies E ˜ ξ2 = 2 and hence Var(ξ) = 1. But then n−1/2Σ k≤n (ξk −E ξ) d →ζ, and so by comparison E ξ = 0.
2 We return to the general problem of characterizing weak convergence of probability measures μn on Rd in terms of the associated characteristic func-tions ˆ μn or Laplace transforms ˜ μn. Suppose that ˆ μn or ˜ μn converges toward a continuous limit ϕ, which is not known in advance to be a characteristic function or Laplace transform. To conclude that μn tends weakly toward some measure μ, we need an extended version of Theorem 6.3, whose proof requires a compactness argument.
Here let Md be the space of locally finite measures on Rd, endowed with the vague topology generated by the evaluation maps πf : μ →μf = fdμ for all f ∈ˆ C+, where ˆ C+ is the class of continuous functions f ≥0 on Rd with bounded support. Thus, μn converges vaguely to μ, written as μn v →μ, iff μnf →μf for all f ∈ˆ C+. If the μn are probability measures, then clearly ∥μ∥= μRd ≤1. We prove a version of Helly’s selection theorem, showing that the set of probability measures on Rd is vaguely relatively sequentially compact.
Theorem 6.20 (vague compactness, Helly) Every sequence of probability mea-sures on Rd has a vaguely convergent sub-sequence.
Proof: Fix some probability measures μ1, μ2, . . . on Rd, and let F1, F2, . . .
be the corresponding distribution functions. Write Q for the set of rational numbers. By a diagonal argument, Fn converges on Qd along a sub-sequence N ′ ⊂N toward a limit G, and we may define F(x) = inf G(r); r ∈Qd, r > x , x ∈Rd.
(8) Since each Fn has non-negative increments, so has G and hence also F. We further see from (8) and the monotonicity of G that F is right-continuous.
Hence, Corollary 4.26 yields a measure μ on Rd with μ(x, y] = F(x, y] for any bounded rectangular box (x, y] ⊂Rd, and we need to show that μn v →μ along N ′.
Then note that Fn(x) →F(x) at every continuity point x of F. By the monotonicity of F, there exists a countable, dense set D ⊂R, such that F is continuous on C = (Dc)d. Then μnU →μU for every finite union U of rectangular boxes with corners in C, and for any bounded Borel set B ⊂Rd, a simple approximation yields μBo ≤lim inf n→∞μnB ≤lim sup n→∞μnB ≤μ ¯ B.
(9) 6. Gaussian and Poisson Convergence 143 For any bounded μ-continuity set B, we may consider functions f ∈ˆ C+ sup-ported by B, and show as in Theorem 5.25 that μnf →μf, proving that μn v →μ.
2 If μn v →μ for some probability measures μn on Rd, we may still have ∥μ∥< 1, due to an escape of mass to infinity. To exclude this possibility, we need to assume that (μn) is tight.
Lemma 6.21 (vague and weak convergence) Let μ1, μ2, . . . be probability mea-sures on Rd, such that μn v →μ for a measure μ. Then these conditions are equivalent: (i) (μn) is tight, (ii) ∥μ∥= 1, (iii) μn w →μ.
Proof, (i) ⇔(ii): By a simple approximation, the vague convergence yields (9) for every bounded Borel set B, and in particular for the balls Br = {x ∈ Rd; |x| < r}, r > 0. If (ii) holds, then μBr →1 as r →∞, and the first inequality gives (i). Conversely, (i) implies lim supn μnBr →1, and the last inequality yields (ii).
(i) ⇔(iii): Assume (i), and fix any bounded continuous function f : Rd →R.
For any r > 0, we may choose a gr ∈ˆ C+ with 1Br ≤gr ≤1, and note that μnf −μf ≤ μnf −μnfgr + μnfgr −μfgr + μfgr −μf ≤ μnfgr −μfgr + ∥f∥(μn + μ)Bc 0.
Here the right-hand side tends to zero as n →∞and then r →∞, and so μnf →μf, proving (iii). The converse was proved in Lemma 5.8.
2 Combining the last two results, we may prove the equivalence of tightness and weak sequential compactness. A more general version appears in Theo-rem 23.2, which forms the starting point for a theory of weak convergence on function spaces.
Corollary 6.22 (tightness and weak compactness) For any probability mea-sures μ1, μ2, . . . on Rd, these conditions are equivalent: (i) (μn) is tight, (ii) every sub-sequence of (μn) has a weakly convergent further sub-sequence.
Proof, (i) ⇒(ii): By Theorem 6.20, every sub-sequence has a vaguely con-vergent further sub-sequence. If (μn) is tight, then by Lemma 6.21 the conver-gence holds even in the weak sense.
144 Foundations of Modern Probability (ii) ⇒(i): Assume (ii). If (i) fails, we may choose some nk →∞and ε > 0, such that μnkBc k > ε for all k ∈N. By (ii) we have μnk w →μ along a sub-sequence N ′ ⊂N, for a probability measure μ. Then (μnk; k ∈N ′) is tight by Lemma 5.8, and in particular there exists an r > 0 with μnkBc r ≤ε for all k ∈N ′. For k > r this is a contradiction, and (i) follows.
2 We may now prove the desired extension of Theorem 6.3.
Theorem 6.23 (extended continuity theorem, L´ evy, Bochner) Let μ1, μ2, . . .
be probability measures on Rd with characteristics functions ˆ μn. Then these conditions are equivalent: (i) ˆ μn(t) →ϕ(t), t ∈Rd, where ϕ is continuous at 0, (ii) μn w →μ for a probability measure μ on Rd.
In that case, μ has characteristic function ϕ.
Similar statements hold for distributions μn on Rd + with Laplace transforms ˜ μn.
Proof, (ii) ⇒(i): Clear by Theorem 6.3.
(i) ⇒(ii): Assuming (i), we see from the proof of Theorem 6.3 that (μn) is tight. Then Corollary 6.22 yields μn w →μ along a sub-sequence N ′ ⊂N, for a probability measure μ. The convergence extends to N by Theorem 6.3, proving (ii). The proof for Laplace transforms is similar.
2 Exercises 1.
Show that if ξ, η are independent Poisson random variables, then ξ + η is again Poisson. Also show that the Poisson property is preserved by convergence in distribution.
2. Show that any linear combination of independent Gaussian random variables is again Gaussian. Also show that the Gaussian property is preserved by convergence in distribution.
3. Show that ϕr(t) = (1 −t/r)+ is a characteristic functions for every r > 0.
(Hint: Calculate the Fourier transform ˆ ψr of the function ψr(t) = 1{|t| ≤r}, and note that the Fourier transform ˆ ψ2 r of ψ∗2 r is integrable. Now use Fourier inversion.) 4. Let ϕ be a real, even function that is convex on R+ and satisfies ϕ(0) = 1 and ϕ(∞) ∈[0, 1]. Show that ϕ is the characteristic function of a symmetric distribution on R. In particular, ϕ(t) = e−|t|c is a characteristic function for every c ∈[0, 1]. (Hint: Approximate by convex combinations of functions ϕr as above, and use Theorem 6.23.) 5. Show that if ˆ μ is integrable, then μ has a bounded, continuous density. (Hint: Let ϕr be the triangular density above. Then ( ˆ ϕr)ˆ= 2πϕr, and so e−itu ˆ μt ˆ ϕr(t) dt = 2π ϕr(x −u) μ(dx). Now let r →0.) 6. Show that a distribution μ is supported by a set a Z + b iff|ˆ μt| = 1 for some t ̸= 0.
6. Gaussian and Poisson Convergence 145 7. Give a direct proof of the continuity theorem for generating functions of dis-tributions on Z+. (Hint: Note that if μn v →μ for some distributions on R+, then ˜ μn →˜ μ on (0, ∞).) 8. The moment-generating function of a distribution μ on R is given by ˜ μt = etxμ(dx). Show that if ˜ μt < ∞for all t in a non-degenerate interval I, then ˜ μ is analytic in the strip {z ∈C; ℜz ∈Io}. (Hint: Approximate by measures with bounded support.) 9.
Let μ, μ1, μ2, . . . be distributions on R with moment-generating functions ˜ μ, ˜ μ1, ˜ μ2, . . . , such that ˜ μn →˜ μ < ∞on a non-degenerate interval I. Show that μn w →μ. (Hint: If μn v →ν along a sub-sequence N ′, then ˜ μn →˜ ν on Io along N′, and so ˜ ν = ˜ μ on I. By the preceding exercise, we get νR = 1 and ˆ ν = ˆ μ. Thus, ν = μ.) 10. Let μ, ν be distributions on R with finite moments xnμ(dx) = xnν(dx) = mn, where Σn tn|mn|/n! < ∞for some t > 0. Show that μ = ν. (Hint: The absolute moments satisfy the same relation for any smaller value of t, and so the moment-generating functions exist and agree on (−t, t).) 11. For each n ∈N, let μn be a distribution on R with finite moments mk n, k ∈N, such that limn mk n = ak for some constants ak with Σk tk|ak|/k! < ∞for a t > 0.
Show that μn w →μ for a distribution μ with moments ak. (Hint: Each function xk is uniformly integrable with respect to the measures μn. In particular, (μn) is tight.
If μn w →ν along a sub-sequence, then ν has moments ak.) 12. Given a distribution μ on R × R+, introduce the Fourier–Laplace transform ϕ(s, t) = eisx−tyμ(dx dy), where s ∈R and t ≥0. Prove versions for ϕ of Theorems 6.3 and 6.23.
13.
Consider a null array of random vectors ξnj = (ξ1 nj, . . . , ξd nj) in Zd +, let ξ1, . . . , ξd be independent Poisson variables with means c1, . . . , cd, and put ξ = (ξ1, . . . , ξd). Show thatΣj ξnj d →ξ iffΣjP{ξk nj = 1} →ck for all k andΣjP{Σk ξk nj > 1} →0. (Hint: Introduce some independent random variables ηk nj d = ξk nj, and note that Σj ξnj d →ξ iffΣj ηnj d →ξ.) 14.
Consider some random variables ξ⊥ ⊥η with finite variance, such that the distribution of (ξ, η) is rotationally invariant. Show that ξ is centered Gaussian.
(Hint: Let ξ1, ξ2, . . . be i.i.d. and distributed as ξ, and note that n−1/2Σk≤n ξk has the same distribution for all n. Now use Proposition 6.10.) 15. Prove a multi-variate version of the Taylor expansion in Lemma 6.9.
16. Let μ have a finite n -th moment mn. Show that ˆ μ is n times continuously differentiable and satisfies ˆ μ(n) 0 = inmn.
(Hint: Differentiate n times under the integral sign.) 17. For μ and mn as above, show that ˆ μ(2n) 0 exists iffm2n < ∞. Also characterize the distributions μ where ˆ μ(2n−1) 0 exists. (Hint: For ˆ μ′′ 0, proceed as in the proof of Proposition 6.10, and use Theorem 6.18. For ˆ μ′ 0, use Theorem 6.17. Extend by induction to n > 1.) 18. Let μ be a distribution on R+ with moments mn. Show that ˜ μ(n) 0 = (−1)nmn, whenever either side exists and is finite. (Hint: Prove the statement for n = 1, and extend by induction.) 146 Foundations of Modern Probability 19. Deduce Proposition 6.10 from Theorem 6.13.
20. Let the random variables ξ and ξnj be such as in Theorem 6.13, and assume that Σj E|ξnj|p →0 for a p > 2. Show that Σj ξnj d →ξ.
21. Extend Theorem 6.13 to random vectors in Rd, with the condition Σj E ξ2 nj → 1 replaced by Σj Cov(ξnj) →a, with ξ as N(0, a), and with ξ2 nj replaced by |ξnj|2.
(Hint: Use Corollary 6.5 to reduce to dimension 1.) 22. Show that Theorem 6.16 remains true for random vectors in Rd, with Var(ξnj; |ξnj| ≤1) replaced by the corresponding covariance matrix. (Hint: If a, a1, a2, . . .
are symmetric, non-negative definite matrices, then an →a iffu′anu →u′au for all u ∈Rd. To see this, use a compactness argument.) 23. Show that Theorems 6.7 and 6.16 remain valid for possibly infinite row-sums Σj ξnj. (Hint: Use Theorem 5.17 or 5.18, together with Theorem 5.29.) 24. Let ξ, ξ1, ξ2, . . . be i.i.d. random variables. Show that n−1/2Σk≤n ξk converges in probability iffξ = 0 a.s. (Hint: Use condition (iii) in Theorem 6.16.) 25. Let ξ1, ξ2, . . . be i.i.d. μ, and fix any p ∈(0, 2). Find a μ such that n−1/pΣk≤nξk →0 in probability but not a.s.
26. Let ξ1, ξ2, . . . be i.i.d., and let p > 0 be such that n−1/pΣk≤n ξk →0 in probability but not a.s. Show that lim supn n−1/p |Σk≤n ξk| = ∞a.s. (Hint: Note that E|ξ1|p = ∞.) 27. Give an example of a distribution on R with infinite second moment, belong-ing to the domain of attraction of the Gaussian law. Also find the corresponding normalization.
Chapter 7 Infinite Divisibility and General Null Arrays Compound Poisson distributions and approximation, i.i.d. and null ar-rays, infinitely divisible distributions, L´ evy measure, characteristics, L´ evy– Khinchin formula, convergence and closure, one-dimensional criteria, positive and symmetric terms, general null arrays, limit laws and con-vergence criteria, sums and extremes, diffuse distributions The fundamental roles of Gaussian and Poisson distributions should be clear from Chapter 6. Here we consider the more general and equally basic family of infinitely divisible distributions, which may be defined as distributional limits of linear combinations of independent Poisson and Gaussian random variables.
Such distributions constitute the most general limit laws appearing in the clas-sical limit theorems for null arrays. They further appear as finite-dimensional distributions of L´ evy processes—the fundamental processes with stationary, independent increments, treated in Chapter 16—which may be regarded as prototypes of general Markov processes.
The special criteria for convergence toward Poisson and Gaussian distribu-tions, derived by simple analytic methods in Chapter 6, will now be extended to the case of general infinitely divisible limits. Though the use of some ana-lytic tools is still unavoidable, the present treatment is more probabilistic in flavor, and involves as crucial steps a centering at truncated means followed by a compound Poisson approximation. This approach has the advantage of providing some better insight into even the Gaussian and Poisson results, pre-viously obtained by elementary but more technical estimates.
Since the entire theory is based on approximations by compound Poisson distributions, we begin with some characterizations of the latter. Here the basic representation is in terms of Poisson processes, discussed more extensively in Chapter 15. Here we need only the basic definition in terms of independent, Poisson distributed increments.
Lemma 7.1 (compound Poisson distributions) Let ξ be a random vector in Rd, fix any bounded measure ν ̸= 0 on Rd \ {0}, and put c = ∥ν∥and ˆ ν = ν/c.
Then these conditions are equivalent: (i) ξ d = Xκ for a random walk X = (Xn) in Rd based on ˆ ν, where κ⊥ ⊥X is Poisson distributed with E κ = c, (ii) ξ d = x η(dx) for a Poisson process η on Rd \ {0} with E η = ν, 147 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 148 Foundations of Modern Probability (iii) log E eiuξ = ( eiux −1) ν(dx), u ∈Rd.
The measure ν is then uniquely determined by L(ξ).
Proof, (i) ⇒(iii): Writing EeiuX1 = ϕ(u), we get EeiuXκ = Σ n≥0 e−c cn n! {ϕ(u)}n = e−cec ϕ(u) = ec{ϕ(u)−1}, and so log EeiuXκ = c ϕ(u) −1 = c eiuxˆ ν(dx) −1 = ( eiux −1) ν(dx).
(ii) ⇒(iii): To avoid repetitions, we defer the proof until Lemma 15.2 (ii), which remains valid with −f replaced by the function f(x) = ix.
(iii) ⇒(i)−(ii): Clear by the uniqueness in Theorem 6.3.
2 The distributions in Lemma 7.1 are said to be compound Poisson, and the underlying measure ν is called the characteristic measure of ξ. We will use the compound Poisson distributions to approximate the row sums in triangular arrays (ξnj), where the ξnj are understood to be independent in j for fixed n.
Recall that (ξnj) is called a null array if ξnj P →0 as n →∞, uniformly in j.
It is further called an i.i.d. array if the ξnj are i.i.d. in j for fixed n, and the successive rows have lengths mn →∞.
For any random vector ξ with distribution μ, we introduce an associated compound Poisson random vector ˜ ξ with characteristic measure μ. For a tri-angular array ξnj, the corresponding compound Poisson vectors ˜ ξij are again assumed to have row-wise independent entries. By ξn d ∼ηn we mean that, if either side converges in distribution along a sub-sequence, then so does the other along the same sequence, and the two limits agree.
Proposition 7.2 (compound Poisson approximation) Let (ξnj) be a trian-gular array in Rd with associated compound Poisson array (˜ ξnj). Then the equivalence Σj ξnj d ∼Σj ˜ ξnj holds under each of these conditions: (i) (ξnj) is a null array in Rd +, (ii) (ξnj) is a null array with ξnj d = −ξnj, (iii) (ξnj) is an i.i.d. array.
The stated equivalence fails for general null arrays, where a modified version is given in Theorem 7.11. For the proof, we first need to show that the ‘null’ property holds even in case (iii). This requires a simple technical lemma: 7. Infinite Divisibility and General Null Arrays 149 Lemma 7.3 (i.i.d. and null arrays) Let (ξnj) be an i.i.d. array in Rd, such that the sequence ξn = Σj ξnj is tight. Then ξn1 P →0.
Proof: It is enough to take d = 1. By Lemma 6.2 the functions ˆ μn(u) = Eeiuξn are equi-continuous at 0, and so ˆ μn = eψn on a suitable interval I = [−a, a], for some continuous functions ψn with |ψn| ≤ 1 2. Writing ˆ μnj(r) = Eeirξnj for j ≤mn, we conclude by continuity that ˆ μnj = eψn/mn →1, uni-formly on I. Proceeding as in Lemma 6.1 (i), we get for any ε ≤a−1 a −a 1 −ˆ μnj(r) dr = 2r (1 −sin ax ax ) μnj(dx) ≥2r 1 −sin aε aε μnj{|x| ≥ε}, and so P{|ξnj| ≥ε} →0 as n →∞, which implies ξnj P →0.
2 For the main proof, we often need the elementary fact er −1 ∼r as r →0, (1) which holds when r ↑0 or r ↓0, and even when r →0 in the complex plane.
Proof of Proposition 7.2: Let ˆ μnj or ˜ μnj be the characteristic functions or Laplace transforms of ξnj. Since the latter form a null array, even in case (iii) by Lemma 7.3, Lemma 6.6 ensures that, for every r > 0, there exists an m ∈N such that ˆ μnj ̸= 0 or ˜ μnj ̸= 0 on the ball Br 0 for all n > m. Then ˆ μnj = eψnj or ˜ μnj = e−ϕnj, for some continuous functions ψnj and ϕnj, respectively.
(i) For any u ∈Rd +, −log Ee−uξnj = −log ˜ μnj(u) = ϕnj(u), −log Ee−u˜ ξnj = 1 −˜ μnj(u) = 1 −e−ϕnj(u), and we need to show that Σ j ϕnj(u) ≈Σ j 1 −e−ϕnj(u) , u ∈Rd +, in the sense that if either side converges to a positive limit, then so does the other and the limits are equal. Since all functions are positive, this is clear from (1).
(ii) For any u ∈Rd, log Eeiuξnj = log ˆ μnj(u) = ψnj(u), log Eeiu˜ ξnj = ˆ μnj(u) −1 = eψnj(u) −1, 150 Foundations of Modern Probability and we need to show that Σ j ψnj(u) ≈Σ j eψnj(u) −1 , u ∈Rd, in the sense of simultaneous convergence to a common finite limit. Since all functions are real and ≤0, this follows from (1).
(iii) Writing ψnj = ψn, we may reduce our claim to exp mn ψn(u) ≈exp mn eψn(u) −1 , u ∈Rd, in the sense of simultaneous convergence to a common complex number. This follows from (1).
2 A random vector ξ or its distribution L(ξ) is said to be infinitely divisible, if for every n ∈N we have ξ d = Σj ξnj for some i.i.d. random vectors ξn1, . . . , ξnn.
Obvious examples of infinitely divisible distributions include the Gaussian and Poisson laws, which is clear by elementary calculations. More generally, we see from Lemma 7.1 that any compound Poisson distribution is infinitely divisible.
The following approximation is fundamental: Corollary 7.4 (compound Poisson limits) For a random vector ξ in Rd, these conditions are equivalent: (i) ξ is infinitely divisible, (ii) ξn d →ξ for some compound Poisson random vectors ξn.
Proof, (i) ⇒(ii): Under (i) we have ξ d = Σj ξnj for an i.i.d. array (ξnj).
Choosing an associated compound Poisson array (˜ ξnj), we get j ˜ ξnj d →ξ by Lemma 7.2 (iii), and we note that ξn = Σj ˜ ξnj is again compound Poisson.
(ii) ⇒(i): Assume (ii) for some ξn with characteristic measures νn. For ev-ery m ∈N, we may write ξn d = Σj≤m ξnj for some i.i.d. compound Poisson vec-tors ξnj with characteristic measures νn/m. Then the sequence (ξn1, . . . , ξnm) is tight by Lemma 6.2, and so (ξn1, . . . , ξnm) d →(ζ1, . . . , ζm) as n →∞along a sub-sequence, where the ζj are again i.i.d. with sum Σj ζj d = ξ. Since m was arbitrary, (i) follows.
2 We proceed to characterize the general infinitely divisible distributions. For a special case, we may combine the Gaussian and compound Poisson distribu-tions and use Lemma 7.1 (ii) to form a variable ξ d = b+ζ + x η(dx), involving a constant vector b, a centered Gaussian random vector ζ, and an independent Poisson process η on Rd. For extensions to unbounded ν = Eη, we need to apply a suitable centering, to ensure convergence of the Poisson integral. This suggests the more general representation: 7. Infinite Divisibility and General Null Arrays 151 Theorem 7.5 (infinitely divisible distributions, L´ evy, Itˆ o) (i) A random vector ξ in Rd is infinitely divisible iff ξ d = b + ζ + |x|≤1 x (η −Eη)(dx) + |x|>1 x η(dx), for a constant vector b ∈Rd, a centered Gaussian random vector ζ with Cov(ζ) = a, and an independent Poisson process η on Rd \ {0} with intensity ν satisfying (|x|2 ∧1) ν(dx) < ∞.
(ii) A random vector ξ in Rd + is infinitely divisible iff ξ d = a + x η(dx), for a constant vector a ∈Rd + and a Poisson process η on Rd + \ {0} with intensity ν satisfying (|x| ∧1) ν(dx) < ∞.
The triple (a, b, ν) or pair (a, ν) is then unique, and any choice with the stated properties may occur.
This shows that an infinitely divisible distribution on Rd or Rd + is uniquely characterized by the triple (a, b, ν) or pair (a, ν), referred to as the charac-teristics of ξ or L(ξ). The intensity measure ν of the Poisson component is called the L´ evy measure of ξ or L(ξ). The representations of Theorem 7.5 can also be expressed analytically in terms of characteristic functions or Laplace transforms, which leads to some celebrated formulas: Corollary 7.6 (L´ evy–Khinchin representations, Kolmogorov, L´ evy) Let ξ be an infinitely divisible random vector in Rd or Rd + with characteristics (a, b, ν) or (a, ν). Then (i) for Rd-valued ξ we have E eiuξ = eψ(u), u ∈Rd, where ψ(u) = iu′b −1 2 u′au + eiu′x −1 −iu′x 1{|x| ≤1} ν(dx), (ii) for Rd +-valued ξ we have E e−uξ = e−ϕ(u), u ∈Rd +, where ϕ(u) = ua + (1 −e−ux) ν(dx).
The stated representations will be proved together with the associated con-vergence criteria, stated below. Given a characteristic triple (a, b, ν), as above, we define for any h > 0 the truncated versions ah = a + |x|≤h xx′ν(dx), bh = b − h<|x|≤1 x ν(dx), where h<|x|≤1 = − 1<|x|≤h when h > 1. In the positive case, we put instead ah = a + x≤h x ν(dx). Let Rd denote the one-point compactification of Rd.
152 Foundations of Modern Probability Theorem 7.7 (convergence and closure) (i) The class of infinitely divisible distributions on Rd is closed under weak convergence.
(ii) Let ξn and ξ be infinitely divisible in Rd with characteristics (an, bn, νn) and (a, b, ν), and fix any h > 0 with ν ∂Bh 0 = 0. Then ξn d →ξ iff1 ah n →ah, bh n →bh, νn v →ν on Rd \ {0}.
(iii) Let ξn and ξ be infinitely divisible in Rd + with characteristics (an, νn) and (a, ν), and fix any h > 0 with ν ∂Bh 0 = 0. Then ξn d →ξ iff ah n →ah, νn v →ν on Rd + \ {0}.
We first consider the one-dimensional case, which allows some significant simplifications. The characteristic exponent ψ in Theorem 7.6 may then be written as ψ(r) = i c r + eirx −1 − i rx 1 + x2 1 + x2 x2 ˜ ν(dx), r ∈R, (2) where ˜ ν(dx) = σ2δ0(dx) + x2 1 + x2 ν(dx), c = b + x 1 + x2 −x 1{|x| ≤1} ν(dx), and the integrand in (2) is defined by continuity as −1 2 r2 when x = 0.
For infinitely divisible distributions on R+, we define instead ˜ ν(dx) = a δ0 + (1 −e−x) ν(dx), so that the characteristic exponent ϕ in Theorem 7.6 becomes ϕ(r) = 1 −e−rx 1 −e−x ˜ ν(dx), r ≥0, (3) where the integrand is interpreted as r when x = 0. We refer to the pair (c, ˜ ν) or measure ˜ ν as the modified characteristics of ξ or L(ξ).
Lemma 7.8 (one-dimensional criteria) (i) Let ξn and ξ be infinitely divisible in R with modified characteristics (cn, ˜ νn) and (c, ˜ ν). Then ξn d →ξ ⇔ cn →c, ˜ νn w →˜ ν.
(ii) Let ξn and ξ be infinitely divisible in R+ with modified characteristics ˜ νn and ˜ ν. Then ξn d →ξ ⇔ ˜ νn w →˜ ν.
1Here νn v →ν means that νnf →νf for all f ∈ˆ C+(Rd\{0}), the class of continuous func-tions f ≥0 on Rd with a finite limit as |x| →∞, and such that f(x) = 0 in a neighborhood of 0. The meaning in (iii) is similar.
7. Infinite Divisibility and General Null Arrays 153 The last two statements will first be proved with ‘infinitely divisible’ re-placed by representable, meaning that ξn and ξ are random vectors allowing representations as in Theorem 7.5. Some earlier lemmas will then be used to deduce the stated versions.
Proof of Lemma 7.8, representable case: (ii) If ˜ νn w →ν, then ϕn →ϕ by the continuity in (3), and so the associated Laplace transforms satisfy ˆ μn →ˆ μ, which implies μn w →μ by Theorem 6.3. Conversely, μn w →μ implies ˆ μn →ˆ μ, and so ϕn →ϕ, which yields Δϕn →Δϕ, where Δϕ(r) = ϕ(r + 1) −ϕ(r) = e−rx ˜ ν(dx).
Using Theorem 6.3, we conclude that ˜ νn w →˜ ν.
(i) If cn →c and ˜ νn w →˜ ν, then ψn →ψ by the boundedness and continuity of the integrand in (2), and so ˆ μn →ˆ μ, which implies μn w →μ by Theorem 6.3.
Conversely, μn w →μ implies ˆ μn →ˆ μ uniformly on bounded intervals, and so ψn →ψ in the same sense. Now define χ(r) = 1 −1 ψ(r) −ψ(r + s) ds = 2 eirx(1 −sin x x )1 + x2 x2 ˜ ν(dx), and similarly for χn, where the interchange of integrations is justified by Fu-bini’s theorem. Then χn →χ, and so by Theorem 6.3 (1 −sin x x )1 + x2 x2 ˜ νn(dx) w →(1 −sin x x )1 + x2 x2 ˜ ν(dx).
Since the integrand is continuous and bounded away from 0, it follows that ˜ νn w →˜ ν. This implies convergence of the integral in (2), and so by subtraction cn →c.
2 Proof of Theorem 7.7, representable case: For bounded measures mn and m on R, we note that mn w →m iffmn v →m on ¯ R\{0} and mn(−h, h) →m(−h, h) for some h > 0 with m{±h} = 0. Thus, for distributions μ and μn on R, we have ˜ νn w →˜ ν iffνn v →ν on ¯ R\{0} and ah n →ah for every h > 0 with ν{±h} = 0.
Similarly, ˜ νn w →˜ ν holds for distributions μ and μn on R+ iffνn v →ν on (0, ∞] and ah n →ah for all h > 0 with ν{h} = 0. Thus, (iii) follows immediately from Lemma 7.8. To obtain (ii) from the same lemma when d = 1, it remains to note that the conditions bh n →bh and cn →c are equivalent when ˜ νn w →˜ ν and ν{±h} = 0, since |x −x(1 + x2)−1| ≤|x|3.
Turning to the proof of (ii) when d > 1, we first assume that νn v →ν on Rd \ {0}, and that ah n →ah and bh n →bh for some h > 0 with ν{|x| = h} = 0. To prove μn w →μ, it suffices by Corollary 6.5 to show that, for any one-dimensional projection πu : x →u′x with u ̸= 0, μn ◦π−1 u w →μ ◦π−1 u . Then fix any k > 0 with ν{|u′x| = k} = 0, and note that μ ◦π−1 u has the associated characteristics νu = ν ◦π−1 u and 154 Foundations of Modern Probability au,k = u′ahu + (u′x)2 1(0,k](|u′x|) −1(0,h](|x|) ν(dx), bu,k = u′bh + u′x 1(1,k](|u′x|) −1(1,h](|x|) ν(dx).
Let au,k n , bu,k n , and νu n denote the corresponding characteristics of μn◦π−1 u . Then νu n v →νu on ¯ R \ {0}, and furthermore au,k n →au,k and bu,k n →bu,k. The desired convergence now follows from the one-dimensional result.
Conversely, let μn w →μ. Then μn ◦π−1 u w →μ ◦π−1 u for every u ̸= 0, and the one-dimensional result yields νu n v →νu on ¯ R \ {0}, as well as au,k n →au,k and bu,k n →bu,k for any k > 0 with ν{|u′x| = k} = 0. In particular, the sequence (νnK) is bounded for every compact set K ⊂Rd \ {0}, and so the sequences (u′ah nu) and (u′bh n) are bounded for any u ̸= 0 and h > 0. In follows easily that (ah n) and (bh n) are bounded for every h > 0, and therefore all three sequences are relatively compact.
Given a sub-sequence N ′ ⊂N, we have νn v →ν′ along a further sub-sequence N ′′ ⊂N ′, for some measure ν′ satisfying (|x|2 ∧1)ν′(dx) < ∞. Fixing any h > 0 with ν′{|x| = h} = 0, we may choose yet another sub-sequence N ′′′, such that even ah n and bh n converge toward some limits a′ and b′. The direct assertion then yields μn w →μ′ along N ′′′, where μ′ is infinitely divisible with characteris-tics determined by (a′, b′, ν′). Since μ′ = μ, we get ν′ = ν, a′ = ah, and b′ = bh.
Thus, the convergence remains valid along the original sequence.
2 Proof of Theorem 7.5: Directly from the representation, or even easier by Corollary 7.6, we see that any representable random vector ξ is infinitely divis-ible. Conversely, if ξ is infinitely divisible, Corollary 7.4 yields ξn d →ξ for some compound Poisson random vectors ξn, which are representable by Lemma 7.1.
Then even ξ is representable, by the preliminary version of Theorem 7.7 (i). 2 Proof of Theorem 7.7, general case: Since infinite divisibility and repre-sentability are equivalent by Theorem 7.5, the proof in the representable case remains valid for infinitely divisible random vectors ξn and ξ.
2 We may now complete the classical limit theory for sums of independent random variables from Chapter 6, beginning with the case of general i.i.d.
arrays (ξnj).
Corollary 7.9 (i.i.d.-array convergence) Let (ξnj) be an i.i.d. array in Rd, let ξ be infinitely divisible in Rd with characteristics (a, b, ν), and fix any h > 0 with ν{|x| = h} = 0. Then Σj ξnj d →ξ iff (i) mn L(ξn1) v →ν on Rd \ {0}, (ii) mn E ξn1ξ′ n1; |ξn1| ≤h →ah, (iii) mn E ξn1; |ξn1| ≤h →bh.
7. Infinite Divisibility and General Null Arrays 155 Proof: For an i.i.d. array (ξnj) in Rd, the associated compound Poisson array (˜ ξnj) is infinitely divisible with characteristics an = 0, bn = E ξnj; |ξnj| ≤1 , νn = L(ξnj), and so the row sums ξn = Σj ξnj are infinitely divisible with characteristics (0, mnbn, mnμn). The assertion now follows by Theorems 7.2 and 7.7.
2 The following extension of Theorem 6.12 clarifies the connection between null arrays with positive and symmetric terms. Here we define p2(x) = x2.
Theorem 7.10 (positive and symmetric terms) Let (ξnj) be a null array of symmetric random variables, and let ξ, η be infinitely divisible with character-istics (a, 0, ν) and (a, ν ◦p−1 2 ), respectively, where ν is symmetric and a ≥0.
Then Σ j ξnj d →ξ ⇔ Σ j ξ2 nj d →η.
Proof: Define μnj = L(ξnj), and fix any h > 0 with ν{|x| = h} = 0. By Lemma 7.2 and Theorem 7.7 (i), we have Σj ξnj d →ξ iff Σ j μnj v →ν on ¯ R \ {0}, Σ j E ξ2 nj; |ξnj| ≤h →a + |x|≤h x2ν(dx), whereas Σj ξ2 nj d →η iff Σ j μnj ◦p−1 2 v →ν ◦p−1 2 on (0, ∞], Σ j E ξ2 nj; ξ2 nj ≤h2 →a + y≤h2 y (ν ◦p−1 2 )(dy).
The two sets of conditions are equivalent by Lemma 1.24.
2 The convergence problem for general null arrays is more delicate, since a compound Poisson approximation as in Proposition 7.2 applies only after a centering at truncated means, as specified below.
Proposition 7.11 (compound Poisson approximation) Let (ξnj) be a null ar-ray of random vectors in Rd, fix any h > 0, and define bnj = E ξnj; |ξnj| ≤h , n, j ∈N.
(4) Then Σ j ξnj d ∼Σ j (ξnj −bnj) ∼+ bnj .
(5) Our proof will be based on a technical estimate: Lemma 7.12 (uniform summability) Let (ξnj) be a null array with truncated means bnj as in (4), and let ϕnj be the characteristic functions of the vectors ηnj = ξnj −bnj. Then convergence of either side of (5) implies lim sup n→∞Σ j 1 −ϕnj(u) < ∞, u ∈Rd.
156 Foundations of Modern Probability Proof: The definitions of bnj, ηnj, and ϕnj yield 1 −ϕnj(u) = E 1 −eiu′ηnj + iu′ηnj1 |ξnj| ≤h −iu′bnjP |ξnj| > h .
Putting an = Σ j E ηnjη′ nj; |ξnj| ≤h , pn = Σ j P |ξnj| > h , we get by Lemma 6.15 Σ j 1 −ϕnj(u) < ⌢ 1 2 u′anu + (2 + |u|) pn.
It is then enough to show that (u′anu) and (pn) are bounded.
Assuming convergence on the right of (5), the stated boundedness follows easily from Theorem 7.7, together with the fact that maxj |bnj| →0. If instead Σj ξnj d →ξ, we may introduce an independent copy (ξ′ nj) of the array (ξnj) and apply Proposition 7.2 and Theorem 7.7 to the symmetric random variables ζu nj = u′ξnj −u′ξ′ nj. Then for any h′ > 0, lim sup n→∞Σ j P |ζu nj| > h′ < ∞, (6) lim sup n→∞Σ j E (ζu nj)2; |ζu nj| ≤h′ < ∞.
(7) The boundedness of pn follows from (6) and Lemma 5.19. Next we note that (7) remains true with the condition |ζu nj| ≤h′ replaced by |ξnj| ∨|ξ′ nj| ≤h.
Using the independence of ξnj and ξ′ nj, we get 1 2 Σ j E (ζu nj)2; |ξnj| ∨|ξ′ nj| ≤h] = Σ j E (u′ηnj)2; |ξnj| ≤h P |ξnj| ≤h −Σ j E u′ηnj; |ξnj| ≤h 2 ≥u′anu minjP |ξnj| ≤h −Σ j u′bnjP |ξnj| > h 2.
Here the last sum is bounded by pn maxj(u′bnj)2 →0, and the minimum on the right tends to 1. The boundedness of (u′anu) now follows by (7).
2 Proof of Proposition 7.11: By Lemma 6.14 it suffices to show that Σ j ϕnj(u) −exp ϕnj(u) −1 →0, u ∈Rd, where ϕnj is the characteristic function of ηnj. This follows from Taylor’s for-mula, together with Lemmas 6.6 and 7.12.
2 In particular, we may now identify the possible limits.
Corollary 7.13 (null-array limits, Feller, Khinchin) Let (ξnj) be a null array of random vectors in Rd, such that Σj ξnj d →ξ for a random vector ξ. Then ξ is infinitely divisible.
7. Infinite Divisibility and General Null Arrays 157 Proof: The random vectors ˜ ηnj in Lemma 7.11 are infinitely divisible, and so the same thing is true for the sums Σj (˜ ηnj −bnj). The infinite divisibility of ξ then follows by Theorem 7.7 (i).
2 To obtain explicit convergence criteria for general null arrays in Rd, it re-mains to combine Theorem 7.7 with Proposition 7.11.
The present result extends Theorem 6.16 for Gaussian limits and Corollary 7.9 for i.i.d. arrays.
For convenience, we write Cov(ξ; A) for the covariance matrix of the random vector 1A ξ.
Theorem 7.14 (null-array convergence, Doeblin, Gnedenko) Let (ξnj) be a null array of random vectors in Rd, and let ξ be infinitely divisible with char-acteristics (a, b, ν).
Then for fixed h > 0 with ν{|x| = h} = 0, we have Σj ξnj d →ξ iff (i) Σj L(ξnj) v →ν on Rd \ {0}, (ii) Σj Cov ξnj; |ξnj| ≤h →ah, (iii) Σj E ξnj; |ξnj| ≤h →bh.
Proof: Define for any n and j anj = Cov ξnj; |ξnj| ≤h , bnj = E ξnj; |ξnj| ≤h .
By Theorems 7.7 and 7.11, we have Σj ξnj d →ξ iff (i′) Σj L(ηnj) v →ν on Rd \ {0}, (ii′) Σj E ηnjη′ nj; |ηnj| ≤h →ah, (iii′) Σj bnj + E ηnj; |ηnj| ≤h →bh.
Here (i) ⇔(i′) since maxj |bnj| →0. By (i) and the facts that maxj |bnj| →0 and ν{|x| = h} = 0, we further see that the sets {|ηnj| ≤h} in (ii′) and (iii′) may be replaced by {|ξnj| ≤h}. To prove (ii) ⇔(ii′), we note that by (i) Σ j anj −E ηnj η′ nj; |ξnj| ≤h ≤ Σ j bnj b′ njP |ξnj| > h < ⌢maxj|bnj|2 Σ j P |ξnj| > h →0.
Similarly, (iii) ⇔(iii′) because Σ j E ηnj; |ξnj| ≤h = Σ j bnjP |ξnj| > h ≤maxj|bnj| Σ j P |ξnj| > h →0.
2 When d = 1, the first condition in Theorem 7.14 admits some interesting probabilistic interpretations, one of which involves the row-wise extremes. For random measures η and ηn on R \ {0}, the convergence ηn vd − →η on ¯ R \ {0} means that ηnf d →ηf for all f ∈ˆ C+(¯ R \ {0}).
158 Foundations of Modern Probability Theorem 7.15 (sums and extremes) Let (ξnj) be a null array of random variables with distributions μnj, and define ηn = Σ j δ ξnj, α± n = maxj ± ξnj , n ∈N.
Fix a L´ evy measure ν on R \ {0}, let η be a Poisson process on R \ {0} with E η = ν, and put α± = sup x ≥0; η{±x} > 0 .
Then these conditions are equivalent: (i) Σj μnj v →ν on ¯ R \ {0}, (ii) ηn d →η on ¯ R \ {0}, (iii) α± n d →α±.
Though (i) ⇔(ii) is immediate from Theorem 30.1 below, we include a di-rect, elementary proof.
Proof: Condition (i) holds iff Σ j μnj(x, ∞) →ν(x, ∞), Σ j μnj(−∞, −x) →ν(−∞, −x), for all x > 0 with ν{±x} = 0. By Lemma 6.8, the first condition is equivalent to P{α+ n ≤x} = Π j 1 −P{ξnj > x} →e−ν(x,∞) = P{α+ ≤x}, which holds for all continuity points x > 0 iffα+ n d →α+. Similarly, the second condition holds iffα− n d →α−. Thus, (i) ⇔(iii).
To show that (i) ⇒(ii), we may write the latter condition in the form Σ j f(ξnj) d →ηf, f ∈ˆ C+(¯ R \ {0}).
(8) Here the variables f(ξnj) form a null array with distributions μnj ◦f −1, and ηf is compound Poisson with characteristic measure ν ◦f −1. By Theorem 7.7 (ii), (8) is then equivalent to the conditions Σ j μnj ◦f −1 v →ν ◦f −1 on (0, ∞], (9) lim ε→0 lim sup n→∞Σ j f(x)≤ε f(x) μnj(dx) = 0.
(10) Now (9) follows immediately from (i). To deduce (10), it suffices to note that the sum on the left is bounded by Σj μnj(f ∧ε) →ν(f ∧ε).
Finally, assume (ii). By a simple approximation, ηn(x, ∞) d →η(x, ∞) for any x > 0 with ν{x} = 0. In particular, for such an x, 7. Infinite Divisibility and General Null Arrays 159 P{α+ n ≤x} = P ηn(x, ∞) = 0 →P η(x, ∞) = 0 = P{α+ ≤x}, and so α+ n d →α+. Similarly α− n d →α−, which proves (iii).
2 The characteristic pair or triple clearly contains important information about an infinitely divisible distribution. In particular, we have the follow-ing classical result: Proposition 7.16 (diffuse distributions, Doeblin) Let ξ be an infinitely di-visible random vector in Rd with characteristics (a, b, ν). Then L(ξ) diffuse ⇔ a ̸= 0 or ∥ν∥= ∞.
Proof: If a = 0 and ∥ν∥< ∞, then ξ is compound Poisson apart from a shift, and so μ = L(ξ) is not diffuse. When either condition fails, it does so for at least one coordinate projection, and we may take d = 1. If a > 0, the diffuseness is obvious by Lemma 1.30. Next let ν be unbounded, say with ν(0, ∞) = ∞. For every n ∈N we may then write ν = νn + ν′ n, where ν′ n is supported by (0, n−1) and has total mass log 2. For μ we get a corresponding decomposition μn ∗μ′ n, where μ′ n is compound Poisson with L´ evy measure ν′ n and μ′ n{0} = 1 2. For any x ∈R and ε > 0, we get μ{x} ≤μn{x}μ′ n{0} + μn[x −ε, x) μ′ n(0, ε] + μ′ n(ε, ∞) ≤1 2 μn[x −ε, x] + μ′ n(ε, ∞).
Letting n →∞and then ε →0, and noting that μ′ n w →δ0 and μn w →μ, we get μ{x} ≤1 2 μ{x} by Theorem 5.25, and so μ{x} = 0.
2 Exercises 1. Prove directly from definitions that a compound Poisson distribution is in-finitely divisible, and identify the corresponding characteristics. Also do the same verification using generating functions.
2. Prove directly from definitions that every Gaussian distribution in Rd is in-finitely divisible. (Hint: Write a Gaussian vector in Rd in the form ξ = a ζ + b for a suitable matrix a and vector b, where ζ is standard Gaussian in Rd. Then compute the distribution of ζ1 + · · · + ζn by a simple convolution.) 3. Prove the infinite divisibility of the Poisson and Gaussian distributions from Theorems 6.7 and 6.13.
(Hint: Apply the mentioned theorems to suitable i.i.d.
arrays.) 4. Show that a distribution μ on R with characteristic function ˆ μ(u) = e−|u|p for a p ∈(0, 2] is infinitely divisible. Then find the form of the associated L´ evy measure.
160 Foundations of Modern Probability (Hint: Note that μ is symmetric p -stable, and conclude that ν(dx) = cp|x|−p−1dx for a constant cp > 0.) 5. Show that the Cauchy distribution μ with density π−1(1 + x2)−1 is infinitely divisible, and find the associated L´ evy measure. (Hint: Check that μ is symmetric 1-stable, and use the preceding exercise. It remains to find the constant c1.) 6. Extend Proposition 7.10 to null arrays of spherically symmetric random vectors in Rd.
7. Derive Theorems 6.7 and 6.12 from Lemma 7.2 and Theorem 7.7.
8. Show by an example that Theorem 7.11 fails without the centering at trun-cated means. (Hint: Without the centering, condition (ii) of Theorem 7.14 becomes ΣjE(ξnjξ′ nj; |ξnj| ≤h) →ah.) 9. If ξ is infinitely divisible with characteristics (a, b, ν) and p > 0, show that E|ξ|p < ∞iff |x|>1 |x|p ν(dx) < ∞. (Hint: If ν has bounded support, then E|ξ|p < ∞for all p. It is then enough to consider compound Poisson distributions, for which the result is elementary.) 10. Prove directly that a Z+-valued random variable ξ is infinitely divisible (on Z+) iff−log Esξ = Σk(1−sk) νk, s ∈(0, 1], for a unique, bounded measure ν = (νk) on N. (Hint: Assuming L(ξ) = μ∗n n , use the inequality 1 −x ≤e−x to show that the sequence (nμn) is tight on N. Then nμn w →ν along a sub-sequence, for a bounded measure ν on N. Finally note that −log(1−x) ∼x as x →0. As for the uniqueness, take differences and use the uniqueness theorem for power series.) 11. Prove directly that a random variable ξ ≥0 is infinitely divisible iff−log Ee−uξ = ua + (1 −e−ux) ν(dx), u ≥0, for some unique constant a ≥0 and measure ν on (0, ∞) with (|x| ∧1) < ∞.
(Hint: If L(ξ) = μ∗n n , note that the measures χn(dx) = n(1 −e−x) μn(dx) are tight on R+. Then χn w →χ along a sub-sequence, and we may write χ(dx) = a δ0(dx) + (1 −e−x) ν(dx). The desired representation now follows as before. As for the uniqueness, take differences and use the uniqueness theorem for Laplace transforms.) 12. Prove directly that a random variable ξ is infinitely divisible iffψu = log Eeiuξ exists and is given by Corollary 7.6 (i) for some unique constants a ≥0 and b and a measure ν on R \ {0} with (x2 ∧1) ν(dx) < ∞. (Hint: Proceed as in Lemma 7.8.) III. Conditioning and Martingales Modern probability theory can be said to begin with the theory of condi-tional expectations and distributions, along with the basic properties of martin-gales. This theory also involves the technical machinery of filtrations, optional and predictable times, predictable processes and compensators, all of which are indispensable for the further developments. The material of Chapter 8 is constantly used in all areas of probability theory, and should be thoroughly mastered by any serious student of the subject. The same thing can be said about at least the early parts of Chapter 9, including the basic properties of optional times and discrete-time martingales. The more advanced continuous-time theory, along with results for predictable processes and compensators in Chapter 10, might be postponed for a later study.
−−− 8. Conditioning and disintegration. Here conditional expectations are introduced by the intuitive method of Hilbert space projection, which leads easily to the basic properties. Of equal importance is the notion of conditional distributions, along with the powerful disintegration and transfer theorems, constantly employed in all areas of modern probability theory.
The latter result yields in particular the fundamental Daniell–Kolmogorov theorem, en-suring the existence of processes with given finite-dimensional distributions.
9. Optional times and martingales. Here we begin with the basic prop-erties of optional times with associated σ-fields, along with a brief discussion of random time change. Martingales are then introduced as projective sequences of random variables, and we give short proofs of their basic properties, includ-ing the classical maximum inequalities, the optional sampling theorem, and the fundamental convergence and regularization theorems. After discussing a range of ramifications and applications, we conclude with some extensions to continuous-time sub-martingales.
10. Predictability and compensation. Here we introduce the classes of predictable times and processes, and give a complete, probabilistic proof of the fundamental Doob–Meyer decomposition. The latter result leads to the equally basic notion of compensator of a random measure on a product space R+ × S, and to a variety of characterizations and time-change reductions. We conclude with the more advanced theory of discounted compensators and associated predictable mapping theorems.
161 Chapter 8 Conditioning and Disintegration Conditional expectation, conditional variance and covariance, local prop-erty, uniform integrability, conditional probabilities, conditional distri-butions and disintegration, conditional independence, chain rule, pro-jection and orthogonality, commutation criteria, iterated conditioning, extension and transfer, stochastic equations, coupling and randomiza-tion, existence of sequences and processes, extension by conditioning, infinite product measures Modern probability theory can be said to begin with the notions of condi-tioning and disintegration.
In particular, conditional expectations and dis-tributions are needed already for the definitions of martingales and Markov processes, the two basic dependence structures beyond independence and sta-tionarity. In many other areas throughout probability theory, conditioning is used as a universal tool, needed to describe and analyze systems involving randomness. The notion may be thought of in terms of averaging, projection, and disintegration—alternative viewpoints that are all essential for a proper understanding.
In all but the most elementary contexts, conditioning is defined with re-spect to a σ-field rather than a single event. Here the resulting quantity is not a constant but a random variable, measurable with respect to the condition-ing σ-field. The idea is familiar from elementary constructions of conditional expectations E(ξ| η), for random vectors (ξ, η) with a nice density, where the expected value becomes a function of η. This corresponds to conditioning on the generated σ-field F = σ(η).
General conditional expectations are traditionally constructed by means of the Radon–Nikodym theorem. However, the simplest and most intuitive approach is arguably via Hilbert space projection, where E(ξ | F) is defined for any ξ ∈L2 as the orthogonal projection of ξ onto the linear subspace of F-measurable random variables.
The existence and basic properties of the L2-version extend by continuity to arbitrary ξ ∈L1. The orthogonality yields E{ξ−E(ξ | F)}ζ = 0 for any bounded, F-measurable random variable ζ, which leads immediately to the familiar averaging characterization of E(ξ | F) as a version of the density d(ξ · P)/dP on the σ-field F.
The conditional expectation is only defined up to a P-null set, in the sense that any two versions agree a.s. We may then look for versions of the condi-tional probabilities P(A | F) = E(1A| F) that combine into a random probabil-ity measure on Ω. In general, such regular versions exist only for A restricted to a sufficiently nice sub- σ-field. The basic case is for random elements ξ in 163 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 164 Foundations of Modern Probability a Borel space S, where the conditional distribution L(ξ | F) exists as an F-measurable random measure on S. Assuming in addition that F = σ(η) for a random element η in a space T, we may write P{ξ ∈B | η} = μ(η, B) for a probability kernel μ : T →S, which leads to a decomposition of the distri-bution of (ξ, η) according to the values of η. The result is formalized by the powerful disintegration theorem—an extension of Fubini’s theorem of constant use in subsequent chapters, especially in combination with the strong Markov property.
Conditional distributions can be used to establish the basic transfer theo-rem, often needed to convert a distributional equivalence ξ d = f(η) into an a.s.
representation ξ = f(˜ η), for a suitable choice of ˜ η d = η. The latter result leads in turn to the fundamental Daniell–Kolmogorov theorem, which guarantees the existence of a random sequence or process with specified finite-dimensional distributions.
A different approach yields the more general Ionescu Tulcea extension, where the measure is specified by a sequence of conditional distri-butions.
Further topics discussed in this chapter include the notion of conditional independence, which is fundamental for both Markov processes and exchange-ability, and also plays an important role in connection with SDEs in Chapter 32, and for the random arrays treated in Chapter 28. Especially useful for various applications is the elementary but powerful chain rule. We finally call attention to the local property of conditional expectations, which leads in par-ticular to simple and transparent proofs of the optional sampling and strong Markov properties.
Returning to our construction of conditional expectations, fix a probability space (Ω, A, P) and an arbitrary sub- σ-field F ⊂A. Introduce in L2 = L2(A) the closed linear subspace M, consisting of all random variables η ∈L2 that agree a.s. with elements of L2(F). By the Hilbert space projection Theorem 1.35, there exists for every ξ ∈L2 an a.s. unique random variable η ∈M with ξ−η ⊥M, and we define EFξ = E(ξ | F) as an arbitrary F-measurable version of η.
The L2-projection EF is easily extended to L1, as follows: Theorem 8.1 (conditional expectation, Kolmogorov) For any σ-field F ⊂A, there exists an a.s. unique linear operator EF : L1 →L1(F), such that (i) E EFξ; A = E(ξ; A), ξ ∈L1, A ∈F.
The operators EF have these further properties, whenever the corresponding expressions exist for the absolute values: (ii) ξ ≥0 ⇒EFξ ≥0 a.s., (iii) E|EFξ| ≤E|ξ|, (iv) 0 ≤ξn ↑ξ ⇒EFξn ↑EFξ a.s., (v) EFξη = ξ EFη a.s. when ξ is F-measurable, (vi) E ξ EFη = E η EFξ = E(EFξ)(EFη), 8. Conditioning and Disintegration 165 (vii) EFEGξ = EFξ a.s., F ⊂G.
In particular, EFξ = ξ a.s. iffξ has an F-measurable version, and EFξ = E ξ a.s. when ξ⊥ ⊥F. We often refer to (i) as the averaging property, to (ii) as the positivity, to (iii) as the L1-contractivity, to (iv) as the monotone conver-gence property, to (v) as the pull-out property, to (vi) as the self-adjointness, and to (vii) as the tower property1. Since the operator EF is both self-adjoint by (vi) and idempotent by (vii), it may be thought of as a generalized projec-tion on L1.
The first assertion is an immediate consequence of Theorem 2.10. However, the following projection approach is more elementary and intuitive, and it has the further advantage of leading easily to properties (ii)−(vii).
Proof of Theorem 8.1: For ξ ∈L2, define EFξ by projection as above. Then for any A ∈F we get ξ −EFξ ⊥1A, and (i) follows. Taking A = {EFξ ≥0}, we get in particular E EFξ = E EFξ; A −E EFξ; Ac = E(ξ; A) −E(ξ; Ac) ≤E|ξ|, proving (iii). The mapping EF is then uniformly L1-continuous on L2. Since L2 is dense in L1 by Lemma 1.12 and L1 is complete by Lemma 1.33, the operator EF extends a.s. uniquely to a linear and continuous mapping on L1.
Properties (i) and (iii) extend by continuity to L1, and Lemma 1.26 shows that EFξ is a.s. determined by (i). If ξ ≥0, we may combine (i) for A = {EFξ ≤0} with Lemma 1.26 to obtain EFξ ≥0 a.s., which proves (ii). If 0 ≤ξn ↑ξ, then ξn →ξ in L1 by dominated convergence, and so by (iii) we have EFξn →EFξ in L1. Since EFξn is a.s. non-decreasing in n by (ii), Lemma 5.2 yields the corresponding a.s. convergence, which proves (iv).
Property (vi) is obvious when ξ, η ∈L2, and it extends L1 by (iv). To prove (v), we see from the characterization in (i) that EFξ = ξ a.s. when ξ is F-measurable. In general we need to show that E( ξ η; A) = E ξEFη; A , A ∈F, which follows immediately from (vi). Finally, (vii) is obvious for ξ ∈L2 since L2(F) ⊂L2(G), and it extends to the general case by means of (iv).
2 Using conditional expectations, we can define conditional variances and covariances in the obvious way by Var(ξ | F) = EF(ξ −EFξ)2 = EFξ2 −(EFξ)2, Cov(ξ, η | F) = EF(ξ −EFξ)(η −EFη) = EFξη −(EFξ)(EFη), 1also called the chain rule 166 Foundations of Modern Probability as long as the right-hand sides exist. Using the shorthand notation VarF(ξ) and CovF(ξ, η), we note the following computational rules, which may be compared with the simple rule E ξ = E(F Fξ) for expected values: Lemma 8.2 (conditional variance and covariance) For any random variables ξ, η ∈L2 and σ-field F, we have (i) Var(ξ) = E[ VarF(ξ) ] + Var(EFξ), (ii) Cov(ξ, η) = E[ CovF(ξ, η) ] + Cov(EFξ, EFη).
Proof: We need to prove only (ii), since (i) is the special case with ξ = η.
Then write Cov(ξ, η) = E(ξη) −(Eξ)(Eη) = EEF(ξη) −(EEFξ)(EEFη) = E CovF(ξ, η) + (EFξ)(EFη) −(EEFξ)(EEFη) = E CovF(ξ, η) + E(EFξ)(EFη) −(EEFξ)(EEFη) = E CovF(ξ, η) + Cov(EFξ, EFη).
2 Next we show that the conditional expectation EFξ is local in both ξ and F, an observation that simplifies many proofs. Given two σ-fields F and G, we say that F = G on A if A ∈F ∩G and A ∩F = A ∩G.
Lemma 8.3 (local property) Consider some σ-fields F, G ⊂A and functions ξ, η ∈L1, and let A ∈F ∩G. Then F = G, ξ = η a.s. on A ⇒ EFξ = EGη a.s. on A.
Proof: Since 1AEFξ and 1AEGη are F ∩G-measurable, we get B ≡A ∩ {EFξ > EGη} ∈F ∩G, and the averaging property yields E EFξ; B = E(ξ; B) = E(η; B) = E EGη; B .
Hence, EFξ ≤EGη a.s. on A by Lemma 1.26. The reverse inequality follows by the symmetric argument.
2 The following technical result plays an important role in Chapter 9.
Lemma 8.4 (uniform integrability, Doob) For any ξ ∈L1, the conditional expectations E(ξ |F), F ⊂A, are uniformly integrable.
Proof: By Jensen’s inequality and the self-adjointness property, E |EFξ|; A ≤E EF| ξ|; A = E |ξ| P FA , A ∈A.
8. Conditioning and Disintegration 167 By Lemma 5.10 we need to show that this tends to zero as PA →0, uniformly in F. By dominated convergence along sub-sequences, it is then enough to show that P FnAn P →0 for any σ-fields Fn ⊂A and sets An ∈A with PAn →0, which is clear since EP FnAn = PAn →0.
2 The conditional probability of an event A ∈A, given a σ-field F, is defined as P FA = EF1A or P(A |F) = E(1A|F), A ∈A.
Thus, P FA is the a.s. unique random variable in L1(F) satisfying E P FA; B = P(A ∩B), B ∈F.
Note that P FA = PA a.s. iffA⊥ ⊥F, and that P FA = 1A a.s. iffA is a.s. F-measurable. The positivity of EF gives 0 ≤P FA ≤1 a.s., and the monotone convergence property yields P F ∪n An = Σ n P FAn a.s., A1, A2, . . . ∈A disjoint.
(1) Still the random set function P F may not be a measure, in general, since the exceptional null set in (1) may depend on the sequence (An).
For random elements η in a measurable space (S, S), we define η -condi-tioning as conditioning with respect to the induced σ-field σ(η), so that E ηξ = Eσ(η)ξ, P ηA = P σ(η)A, or E(ξ | η) = E{ξ | σ(η)}, P(A | η) = P{A | σ(η)}.
By Lemma 1.14, the η -measurable function Eη ξ may be represented as f(η) for some measurable function f on S, determined a.e. L(η) by the averaging property E f(η); η ∈B = E ξ; η ∈B , B ∈S.
In particular, f depends only on the distribution of (ξ, η). The case of P ηA is similar. Conditioning on a σ-field F is the special case where η is the identity map (Ω, A) →(Ω, F).
Motivated by (1), we may hope to construct some measure-valued versions of the functions P F and P η. Then recall from Chapter 3 that, for any mea-surable spaces (T, T ) and (S, S), a kernel μ : T →S is defined as a function μ : T × S →¯ R+, such that μ(t, B) is T -measurable in t ∈T for fixed B and a measure in B ∈S for fixed t. In particular, μ is a probability kernel if μ(t, S) = 1 for all t.
Random measures are simply kernels on the basic probability space Ω.
Now fix a σ-field F ⊂A and a random element η in a measurable space (T, T ). By a (regular) conditional distribution of η, given F, we mean an F-measurable probability kernel μ = L(η | F): Ω →T, such that μ(ω, B) = P{η ∈B | F}ω a.s., ω ∈Ω, B ∈T .
168 Foundations of Modern Probability The idea is to choose versions of the conditional probabilities on the right that combine for each ω into a probability measure in B. More generally, for any random element ξ in a measurable space (S, S), we define a conditional distribution of η, given ξ, as a random measure of the form μ(ξ, B) = P{η ∈B | ξ} a.s., B ∈T , (2) for a probability kernel μ : S →T.
In the extreme cases where η is F-measurable or independent of F, we note that P{η ∈B |F} has the regular version 1{η ∈B} or P{η ∈B}, respectively.
To ensure the existence of conditional distributions L(η | F), we need to impose some regularity conditions on the space T. The following existence and disintegration property is a key result of modern probability.
Theorem 8.5 (conditional distributions, disintegration) Let ξ, η be random elements in S, T, where T is Borel. Then L(ξ, η) = L(ξ) ⊗μ for a probability kernel μ: S →T, where μ is unique a.e. L(ξ) and satisfies (i) L(η | ξ) = μ(ξ, ·) a.s., (ii) E f(ξ, η) ξ = μ(ξ, dt) f(ξ, t) a.s., f ≥0.
Proof: The stated disintegration with associated uniqueness hold by The-orem 3.4. Properties (i) and (ii) then follow by the averaging characterization of conditional expectations.
2 Taking differences, we may extend (ii) to suitable real-valued functions f.
When (S, S) = (Ω, F), the kernel μ becomes an F-measurable random measure ζ on T. Letting ξ be an F-measurable random element in a space T, we get by (ii) E f(ξ, η) F = ζ(dt) f(ξ, t), f ≥0.
(3) Part (ii) may also be written in integrated form as Ef(ξ, η) = E μ(ξ, dt) f(ξ, t), f ≥0, (4) and similarly for (3). When ξ ⊥ ⊥η we may choose μ(ξ, ·) ≡L(η), in which case (4) reduces to the identity of Lemma 4.11.
Applying (4) to functions of the form f(ξ), we may extend many properties of ordinary expectations to a conditional setting. In particular, such extensions hold for the Jensen, H¨ older, and Minkowski inequalities. The first of those yields the Lp-contractivity EFξ p ≤∥ξ∥p, ξ ∈Lp, p ≥1.
Considering conditional distributions of entire sequences (ξ, ξ1, ξ2, . . .), we may further derive conditional versions of the basic continuity properties of ordinary integrals.
We list some simple applications of conditional distributions.
8. Conditioning and Disintegration 169 Lemma 8.6 (conditioning criteria) For any random elements ξ, η in Borel spaces S, T, where S = [0, 1] in (ii), we have (i) ξ⊥ ⊥η ⇔L(ξ) = L(ξ | η) a.s. ⇔L(η) = L(η | ξ) a.s., (ii) ξ is a.s. η-measurable ⇔ξ = E(ξ | η) a.s.
Proof: (i) Compare (4) with Lemma 4.11.
(ii) Use Theorem 8.1 (v).
2 In case of independence, we have some further useful properties.
Lemma 8.7 (independence case) For any random elements ξ⊥ ⊥η in S, T and a measurable function f : S × T →U, where S, T, U are Borel, define X = f(ξ, η). Then (i) L(X| ξ) = L f(x, η) x=ξ a.s., (ii) X is a.s. ξ-measurable ⇔(X, ξ)⊥ ⊥η.
Proof: (i) For any measurable function g ≥0 on S, we get by (4) and Lemma 8.6 (i) E g(X); ξ ∈B = E E 1B(x)(g ◦f)(x, η) x=ξ, and so E g(X) ξ = E (g ◦f)(x, η) x=ξ a.s.
(ii) If X is ξ-measurable, then so is the pair (X, ξ) by Lemma 1.9, and the relation ξ⊥ ⊥η yields (X, ξ)⊥ ⊥η. Conversely, assuming (X, ξ)⊥ ⊥η and using (i) and Lemma 8.6 (i), we get a.s.
L(X, ξ) = L(X, ξ | η) = L f(ξ, y), ξ y=η, so that (X, ξ) d = f(ξ, y), ξ , y ∈T a.s. L(η).
Fixing a y with equality and applying the resulting relation to the set A = {(z, x); z = f(x, y)}, which is measurable in U × S by the measurability of the diagonal in U 2, we obtain P X = f(ξ, y) = P (X, ξ) ∈A = P f(ξ, y), ξ ∈A = P(Ω) = 1.
Hence, X = f(ξ, y) a.s., which shows that X is a.s. ξ-measurable.
2 Next we show how distributional invariance properties are preserved under suitable conditioning. Here a mapping T on S is said to be bi-measurable, if it is a measurable bijection with a measurable inverse.
170 Foundations of Modern Probability Lemma 8.8 (conditional invariance) For any random elements ξ, η in a Borel space S and a bi-measurable mapping T on S, we have T(ξ, η) d = (ξ, η) ⇒ L(Tξ | η), Tξ d = L(ξ | η), ξ .
Proof: Write L(ξ | η) = μ(η, ·), and note that μ depends only on L(ξ, η).
Since also σ(Tη) = σ(η) by the bi-measurability of T, we get a.s.
L(Tξ | η) = L(Tξ | Tη) = μ(Tη, ·), and so L(Tξ | η), Tξ = {μ(Tη, ·), Tξ} d = {μ(η, ·), ξ} = {L(ξ | η), ξ}.
2 The notion of independence between σ-fields Ft, t ∈T, extends immedi-ately to the conditional setting. Thus, we say that the Ft are conditionally independent given a σ-field G, if for any distinct indices t1, . . . , tn ∈T, n ∈N, we have P G∩ k≤nBk = Π k≤n P GBk a.s., Bk ∈Ftk, k = 1, . . . , n.
This reduces to ordinary independence when G is the trivial σ-field {Ω, ∅}. The pairwise version of conditional independence is denoted by ⊥ ⊥G. Conditional independence involving events At or random elements ξt, t ∈T, is defined as before in terms of the induced σ-fields σ(At) or σ(ξt), respectively, and the notation involving ⊥ ⊥carries over to this case.
In particular, any F-measurable random elements ξt are conditionally in-dependent given F. If instead the ξt are independent of F, their conditional independence given F is equivalent to ordinary independence between the ξt.
By Theorem 8.5, every general statement or formula involving independence between countably many random elements in Borel spaces has a conditional counterpart. For example, Lemma 4.8 shows that the σ-fields F1, F2, . . . are conditionally independent, given a σ-field G, iff (F1, . . . , Fn)⊥ ⊥ G Fn+1, n ∈N.
More can be said in the conditional case, and we begin with a basic char-acterization. Here and below, F, G, . . . with or without subscripts denote sub-σ-fields of A, and F ∨G = σ(F, G).
Theorem 8.9 (conditional independence, Doob) For any σ-fields F, G, H, we have F⊥ ⊥ G H ⇔ P F∨G = P G a.s. on H.
8. Conditioning and Disintegration 171 Proof: Assuming the second condition and using the tower and pull-out properties of conditional expectations, we get for any F ∈F and H ∈H P G(F ∩H) = E GP F∨G(F ∩H) = E G P F∨GH; F = E G P GH; F = (P GF) (P GH), showing that F⊥ ⊥G H. Conversely, assuming F⊥ ⊥G H and using the tower and pull-out properties, we get for any F ∈F, G ∈G, and H ∈H E P GH; F ∩G = E (P GF) (P GH); G = E P G(F ∩H); G = P(F ∩G ∩H).
By a monotone-class argument, this extends to E P GH; A = P(H ∩A), A ∈F ∨G, and the stated condition follows by the averaging characterization of P F∨GH. 2 The following simple consequence is needed in Chapters 27–28.
Corollary 8.10 (contraction) For any random elements ξ, η, ζ, we have (ξ, η) d = (ξ, ζ) σ(η) ⊂σ(ζ) ⇒ξ⊥ ⊥ η ζ.
Proof: For any measurable set B, we note that the variables M1 = P{ξ ∈B | η}, M2 = P{ξ ∈B | ζ}, form a bounded martingale with M1 d = M2. Then E(M2 −M1)2 = EM 2 2 − EM 2 1 = 0, and so M1 = M2 a.s., which implies ξ⊥ ⊥η ζ by Theorem 8.9.
2 The last theorem yields some further useful properties. Let ¯ G denote the completion of G with respect to the basic σ-field A, generated by G and the family of null sets N = {N ⊂A; A ∈A, PA = 0}.
Corollary 8.11 (extension and inclusion) For any σ-fields F, G, H, (i) F⊥ ⊥ G H ⇔F⊥ ⊥ G (G, H), (ii) F⊥ ⊥ G F ⇔F ⊂¯ G.
172 Foundations of Modern Probability Proof: (i) By Theorem 8.9, both relations are equivalent to P(F| G, H) = P(F| G) a.s., F ∈F.
(ii) If F⊥ ⊥G F, Theorem 8.9 yields 1F = P(F| F, G) = P(F| G) a.s., F ∈F, which implies F ⊂¯ G. Conversely, the latter relation implies P(F| G) = P(F| ¯ G) = 1F = P(F| F, G) a.s., F ∈F, and so F⊥ ⊥G F by Theorem 8.9.
2 The following basic result is often applied in both directions.
Theorem 8.12 (chain rule) For any σ-fields G, H, F1, F2, . . . , these condi-tions are equivalent: (i) H⊥ ⊥ G (F1, F2, . . .), (ii) H ⊥ ⊥ G, F1, . . . , Fn Fn+1, n ≥0.
In particular, we often need to employ the equivalence H⊥ ⊥ G (F, F′) ⇔ H⊥ ⊥ G F, H ⊥ ⊥ G, F F′.
Proof: Assuming (i), we get by Theorem 8.9 for any H ∈H and n ≥0 P H G, F1, . . . , Fn = P(H | G) = P H G, F1, . . . , Fn+1 , and (ii) follows by another application of Theorem 8.9.
Assuming (ii) instead, we get by Theorem 8.9 for any H ∈H P H G, F1, . . . , Fn = P H G, F1, . . . , Fn+1 , n ≥0.
Summing over n < m gives P(H | G) = P H G, F1, . . . , Fm , m ≥1, and so Theorem 8.9 yields H⊥ ⊥G (F1, . . . , Fm) for all m ≥1, which extends to (i) by a monotone-class argument.
2 The last result is also useful to establish ordinary independence. Then take G = {∅, Ω} in Theorem 8.12 to get H⊥ ⊥(F1, F2, . . .) ⇔ H ⊥ ⊥ F1, . . . , Fn Fn+1, n ≥0.
8. Conditioning and Disintegration 173 Reasoning with conditional expectations and independence is notoriously subtle and non-intuitive. It may then be helpful to translate the probabilis-tic notions into their geometric counterparts in suitable Hilbert spaces. For any σ-field F on Ω, we introduce the centered 2 sub-space L2 0(F) ⊂L2 of F-measurable random variables ξ with E ξ = 0 and E ξ2 < ∞. For sub-spaces F ⊂L2 0, let πF denote projection onto F, and write F ⊥for the orthogonal complement of F. Define F ⊥G by the orthogonality E(ξη) = 0 of all vari-ables ξ ∈F and η ∈G, and write F ⊖G = F ∩G⊥and F ∨G = span(F, G).
When F ⊥G, we may also write F ∨G = F ⊕G. We further introduce the conditional orthogonality F ⊥ G H ⇔ πG⊥F ⊥πG⊥H.
Theorem 8.13 (conditional independence and orthogonality) Let F, G, H be σ-fields on Ω generating the centered sub-spaces F, G, H ⊂L2 0. Then F⊥ ⊥ G H ⇔ F ⊥ G H.
By Theorem 8.1 (vi), the latter condition may also be written as πG⊥F ⊥H or F ⊥πG⊥H. For the trivial σ-field G = {∅, Ω}, we get the useful equivalence F⊥ ⊥H ⇔ F ⊥H, (5) which can also be verified directly.
Proof: Using Theorem 8.9 and Lemmas A4.5 and A4.7, we get F⊥ ⊥ G H ⇔ (πF∨G −πG)H = 0 ⇔ π(F∨G) ⊖GH = 0 ⇔ (F ∨G) ⊖G ⊥H ⇔ πG⊥F ⊥H, where the last equivalence follows from the fact that, by Lemma A4.5, F ∨G = (πGF ⊕πG⊥F) ∨G = πG⊥F ∨G = πG⊥F ⊕G.
2 For a simple illustration, we note the following useful commutativity crite-rion, which follows immediately from Theorem 8.13 and Lemma A4.8.
Corollary 8.14 (commutativity) For any σ-fields F, G, we have EFEG = EGEF = EF∩G ⇔ F⊥ ⊥ F ∩G G.
2Using L2 0(F) instead of L2(F) has some technical advantages. In particular, it makes the trivial σ-field {∅, Ω} correspond to the null space 0 ⊂L2.
174 Foundations of Modern Probability Conditional distributions can sometimes be constructed by suitable itera-tion. To explain the notation, let ξ, η, ζ be random elements in some Borel spaces S, T, U. Since S × U and T × U are again Borel, there exist some probability kernels μ: S →T × U and μ′ : T →S × U, such that a.s.
L(η, ζ | ξ) = μ(ξ, ·), L(ξ, ζ | η) = μ′(η, ·), corresponding to the dual disintegrations L(ξ, η, ζ) = L(ξ) ⊗μ ∼ = L(η) ⊗μ′.
For fixed s or t, we may regard μs and μ′ t as probability measures in their own right, and repeat the conditioning, leading to disintegrations of the form μs = ¯ μs ⊗νs, μ′ t = ¯ μ′ t ⊗ν′ t, where ¯ μs = μs(· × U) and ¯ μ′ t = μ′ t(· × U), for some kernels νs : T →U and ν′ t : S →U.
It is suggestive to write even the latter relations in terms of conditioning, as in μs ˜ ζ ∈· ˜ η = νs(˜ η, ·), μ′ t ˜ ζ ∈· ˜ ξ = ν′ t(˜ ξ, ·), for the coordinate variables ˜ ξ, ˜ η, ˜ ζ in S, T, U.
Since all spaces are Borel, Corollary 3.6 allows us to choose νs and ν′ t as kernels S ×T →U. With a slight abuse of notation, we may write the iterated conditioning in the form P(· | η | ξ)s,t = P(· | ξ)s (· | η)t, P(· | ξ | η)t,s = P(· | η)t (· | ξ)s.
Substituting s = ξ and t = η, and putting F = σ(ξ), G = σ(η), and H = σ(ζ), we get some (ξ, η) -measurable random probability measures on (Ω, H), here written suggestively as (PF)G = P(· | η | ξ)ξ,η , (PG)F = P(· | ξ | η)η,ξ .
Using this notation, we may state the basic properties of iterated condi-tioning in a striking form. Here we say that a σ-field F is Borel generated, if F = σ(ξ) for some random element ξ in a Borel space.
Theorem 8.15 (iterated conditioning) For any Borel generated σ-fields F, G, H, we have a.s. on H (i) (PF)G = (PG)F = PF∨G, (ii) PF = EF(PF)G.
8. Conditioning and Disintegration 175 Part (i) must not be confused with the elementary commutativity in Corol-lary 8.14. Similarly, we must not confuse (ii) with the elementary tower prop-erty in Theorem 8.1 (vii), which may be written as PF = EFPF∨G a.s.
Proof: Writing F = σ(ξ), G = σ(η), H = σ(ζ), for some random elements in Borel spaces S, T, U, and putting μ1 = L(ξ), μ2 = L(η), μ12 = L(ξ, η), μ13 = L(ξ, ζ), μ123 = L(ξ, η, ζ), we get from Theorem 3.7 the relations μ3|2|1 = μ3|12 ∼ = μ3|1|2 a.e. μ12, μ3|1 = μ2|1 μ3|12 a.e. μ1, which are equivalent to (i) and (ii). We can also obtain (ii) directly from (i), by noting that EF(PF)G = EFPF∨G = PF, by the tower property in Theorem 8.1.
2 Regular conditional distributions can be used to construct random elements with desired properties. This may require an extension of the basic probability space. By an extension of (Ω, A, P) we mean a product space (ˆ Ω, ˆ A) = (Ω × S, A ⊗S), equipped with a probability measure ˆ P satisfying ˆ P(· × S) = P.
Any random element ξ on Ω may be regarded as a function on ˆ Ω. Thus, we simply replace ξ by the random element ˆ ξ(ω, s) = ξ(ω) on ˆ Ω, which has the same distribution. For extensions of this type, we retain our original notation and write P and ξ instead of ˆ P and ˆ ξ.
We begin with an elementary extension suggested by Theorem 8.5. The result is needed for various constructions in Chapters 27–28.
Lemma 8.16 (extension) For any probability kernel μ : S →T and random elements ξ, ζ in S, U, there exists a random element η in T, defined on a suitable extension of the probability space, such that L(η | ξ, ζ) = μ(ξ, ·) a.s., η⊥ ⊥ ξ ζ.
Proof: Put (ˆ Ω, ˆ A) = (Ω×T, A⊗T ), where T denotes the σ-field in T, and define a probability measure ˆ P on ˆ Ω by ˆ PA = E 1A(·, t) μ(ξ, dt), A ∈ˆ A.
176 Foundations of Modern Probability Then clearly ˆ P(· × T) = P, and the random element η(ω, t) ≡t on ˆ Ω satisfies ˆ L(η |A) = μ(ξ, ·) a.s. In particular, Proposition 8.9 yields η⊥ ⊥ξ A.
2 Most constructions require only a single randomization variable, defined as a U(0, 1) random variable ϑ, independent of all previously introduced random elements and σ-fields. We always assume the basic probability space to be rich enough to support any randomization variables we need. This involves no essential loss of generality, since the condition becomes fulfilled after a simple extension of the original space. Thus, we may take ˆ Ω = Ω × [0, 1], ˆ A = A ⊗B[0,1], ˆ P = P ⊗λ, where λ denotes Lebesgue measure on [0, 1]. Then ϑ(ω, t) ≡t is U(0, 1) on ˆ Ω and ϑ⊥ ⊥A. By Lemma 4.21 we may use ϑ to produce a whole sequence of independent randomization variables ϑ1, ϑ2, . . . , if needed.
The following powerful and commonly used result shows how a probabilis-tic structure can be transferred between different contexts through a suitable randomization.
Theorem 8.17 (transfer) Let ξ, η, ζ be random elements in S, T, U, where T is Borel. Then for any ˜ ξ d = ξ, there exists a random element ˜ η in T with (i) (˜ ξ, ˜ η) d = (ξ, η), (ii) ˜ η⊥ ⊥ ˜ ξ ζ.
Proof: (i) By Theorem 8.5, there exists a probability kernel μ : S →T satisfying μ(ξ, B) = P{η ∈B | ξ} a.s., B ∈T .
Next, Lemma 4.22 yields a measurable function f : S × [0, 1] →T, such that for any U(0, 1) random variable ϑ⊥ ⊥(˜ ξ, ζ), L f(s, ϑ) = μ(s, ·), s ∈S.
Defining ˜ η = f(˜ ξ, ϑ) and using Lemmas 1.24 and 4.11 together with Theorem 8.5, we get for any measurable function g: S × T →R+ E g(˜ ξ, ˜ η) = E g ˜ ξ, f(˜ ξ, ϑ) = E g ξ, f(ξ, u) du = E g(ξ, t) μ(ξ, dt) = Eg(ξ, η), which shows that (˜ ξ, ˜ η) d = (ξ, η).
(ii) By Theorem 8.12 we have ϑ⊥ ⊥˜ ξ ζ , and so Corollary 8.11 yields (˜ ξ, ϑ)⊥ ⊥˜ ξ ζ, which implies ˜ η⊥ ⊥˜ ξ ζ.
2 The last result can be used to transfer representations of random objects: 8. Conditioning and Disintegration 177 Corollary 8.18 (stochastic equations) For any random elements ξ, η in Borel spaces S, T and a measurable map f : T →S with ξ d = f(η), we may choose ˜ η with ˜ η d = η, ξ = f(˜ η) a.s.
Proof: By Theorem 8.17 there exists a random element ˜ η in T with (ξ, ˜ η) d = {f(η), η}. In particular, ˜ η d = η and {ξ, f(˜ η)} d = {f(η), f(η)}. Since the diagonal in S2 is measurable, we get P ξ = f(˜ η) = P f(η) = f(η) = 1, and so ξ = f(˜ η) a.s.
2 This leads in particular to a useful extension of Theorem 5.31.
Corollary 8.19 (extended Skorohod coupling) Let f, f1, f2, . . . be measurable functions from a Borel space S to a Polish space T, and let ξ, ξ1, ξ2, . . . be random elements in S with fn(ξn) d →f(ξ). Then we may choose ˜ ξ, ˜ ξ1, ˜ ξ2, . . .
with ˜ ξ d = ξ, ˜ ξn d = ξn, fn(˜ ξn) →f(˜ ξ) a.s.
Proof: By Theorem 5.31 there exist some η d = f(ξ) and ηn d = fn(ξn) with ηn →η a.s. By Corollary 8.18 we may further choose ˜ ξ d = ξ and ˜ ξn d = ξn, such that a.s. f(˜ ξ) = η and fn(˜ ξn) = ηn for all n. But then fn(˜ ξn) →f(˜ ξ) a.s.
2 We proceed to clarify the relationship between conditional independence and randomizations. Important applications appear in Chapters 11, 13, 27– 28, and 32.
Proposition 8.20 (conditional independence by randomization) Let ξ, η, ζ be random elements in S, T, U, where S is Borel. Then these conditions are equivalent: (i) ξ ⊥ ⊥ η ζ, (ii) ξ = f(η, ϑ) a.s. for a measurable function f : T ×[0, 1] →S and a U(0, 1) random variable ϑ⊥ ⊥(η, ζ).
Proof: The implication (ii) ⇒(i) holds by the argument for Theorem 8.17 (ii). Now assume (i), and let ϑ⊥ ⊥(η, ζ) be U(0, 1). Then Theorem 8.17 yields a measurable function f : T × [0, 1] →S, such that the random element ˜ ξ = f(η, ϑ) satisfies ˜ ξ d = ξ and (˜ ξ, η) d = (ξ, η). We further see from the sufficiency part that ˜ ξ⊥ ⊥η ζ. Hence, Proposition 8.9 yields L{˜ ξ | η, ζ} = L( ˜ ξ | η) = L( ξ | η) = L{ξ | η, ζ}, 178 Foundations of Modern Probability and so (˜ ξ, η, ζ) d = (ξ, η, ζ). By Theorem 8.17, we may next choose ˜ ϑ d = ϑ with (ξ, η, ζ, ˜ ϑ) d = (˜ ξ, η, ζ, ϑ). In particular, ˜ ϑ⊥ ⊥(η, ζ) and {ξ, f(η, ˜ ϑ)} d = {˜ ξ, f(η, ϑ)}.
Since ˜ ξ = f(η, ϑ), and the diagonal in S2 is measurable, we get ξ = f(η, ˜ ϑ) a.s., which is the required relation with ˜ ϑ in place of ϑ.
2 The transfer theorem can be used to construct random sequences or pro-cesses with given finite-dimensional distributions. For any measurable spaces S1, S2, . . . , we say that a sequence of distributions3 μn on S1 × · · · × Sn, n ∈N, is projective if μn+1(· × Sn+1) = μn, n ∈N.
(6) Theorem 8.21 (existence of random sequences, Daniell) For any projective sequence of distributions μn on S1×· · ·×Sn, n ∈N, where S2, S3, . . . are Borel, there exist some random elements ξn in Sn, n ∈N, such that L(ξ1, . . . , ξn) = μn, n ∈N.
Proof: By Lemmas 4.10 and 4.21 there exist some independent random variables ξ1, ϑ2, ϑ3, . . . , such that L(ξ1) = μ1 and the ϑn are i.i.d. U(0, 1).
We proceed recursively to construct ξ2, ξ3, . . . with the stated properties, such that each ξn is a measurable function of ξ1, ϑ2, . . . , ϑn.
Once ξ1, . . . , ξn are constructed, let L(η1, . . . , ηn+1) = μn+1. Then the projective property yields (ξ1, . . . , ξn) d = (η1, . . . , ηn), and so by Theorem 8.17 we may form ξn+1 as a mea-surable function of ξ1, . . . , ξn, ϑn+1, such that (ξ1, . . . , ξn+1) d = (η1, . . . , ηn+1).
This completes the recursion.
2 Next we show how a given process can be extended to any unbounded index set. The result is stated in an abstract form, designed to fulfill our needs in especially Chapters 12 and 17. Let ι denote the identity map on any space.
Corollary 8.22 (projective limits) Consider some Borel spaces S, S1, S2, . . .
and measurable maps πn : S →Sn and πn k : Sn →Sk, k ≤n, such that πn k = πm k ◦πn m, k ≤m ≤n.
(7) Let ¯ S be the class of sequences (s1, s2, . . .) ∈S1 × S2 × · · · with πn ksn = sk for all k ≤n, and let the map h: ¯ S →S be measurable with (π1, π2, . . .) ◦h = ι on ¯ S. Then for any distributions μn on Sn with μn ◦(πn k)−1 = μk, k ≤n ∈N, (8) there exists a distribution μ on S with μ ◦π−1 n = μn for all n.
Proof: Introduce the measures ¯ μn = μn ◦(πn 1 , . . . , πn n)−1, n ∈N, (9) 3probability measures 8. Conditioning and Disintegration 179 and conclude from (7) and (8) that ¯ μn+1(· × Sn+1) = μn+1 ◦ πn+1 1 , . . . , πn+1 n −1 = μn+1 ◦(πn+1 n )−1 ◦(πn 1 , . . . , πn n)−1 = μn ◦(πn 1 , . . . , πn n)−1 = ¯ μn.
By Theorem 8.21 there exists a measure ¯ μ on S1 × S2 × · · · with ¯ μ ◦(¯ π1, . . . , ¯ πn)−1 = ¯ μn, n ∈N, (10) where ¯ π1, ¯ π2, . . . denote the coordinate projections in S1 × S2 × · · · .
From (7), (9), and (10) we see that ¯ μ is restricted to ¯ S, which allows us to define μ = ¯ μ ◦h−1. It remains to note that μ ◦π−1 n = ¯ μ ◦(πnh)−1 = ¯ μ ◦¯ π−1 n = ¯ μn ◦¯ π−1 n = μn ◦(πn n)−1 = μn.
2 We often need a version of Theorem 8.21 for processes on a general index set T. Then for any collection of measurable spaces (St, St), t ∈T, we define (SI, SI) = t∈I(St, St), I ⊂T. For any random elements ξt in St, write ξI for the restriction of the process (ξt) to the index set I.
Now let ˆ T and ¯ T be the classes of finite and countable subsets of T, respec-tively. A family of probability measures μI, I ∈ˆ T or ¯ T, is said to be projective if μJ(· × SJ\I) = μI, I ⊂J in ˆ T or ¯ T.
(11) Theorem 8.23 (existence of processes, Kolmogorov) For any Borel spaces St, t ∈T, consider a projective family of probability measures μI on SI, I ∈ˆ T.
Then there exist some random elements Xt in St, t ∈T, such that L(XI) = μI, I ∈ˆ T.
Proof: The product σ-field ST in ST is generated by all coordinate projec-tions πt, t ∈T, and hence consists of all countable cylinder sets B × ST\U, B ∈SU, U ∈¯ T. For every U ∈¯ T, Theorem 8.21 yields a probability measure μ U on SU with μ U(· × SU\I) = μI, I ∈ˆ U, and Proposition 4.2 shows that the family μU, U ∈¯ T, is again projective. We may then define a function μ: ST →[0, 1] by μ(· × ST\U) = μ U, U ∈¯ T.
To show that μ is countably additive, consider any disjoint sets A1, A2, . . . ∈ST.
For every n we have An = Bn ×ST\Un for some Un ∈¯ T and Bn ∈SUn. Writing U = n Un and Cn = Bn × SU\Un, we get 180 Foundations of Modern Probability μ∪n An = μ U ∪n Cn = Σ n μ U Cn = Σ n μAn.
We may now define the process X = (Xt) as the identity map on the proba-bility space (ST, ST, μ).
2 If the projective sequence in Theorem 8.21 is defined recursively in terms of conditional distributions, then no regularity condition is needed on the state spaces. For a precise statement, define the composition μ ⊗ν of two kernels μ and ν as in Chapter 3.
Theorem 8.24 (extension by conditioning, Ionescu Tulcea) For any measur-able spaces (Sn, Sn) and probability kernels μn : S1 × · · · × Sn−1 →Sn, n ∈N, there exist some random elements ξn in Sn, n ∈N, such that L(ξ1, . . . , ξn) = μ1 ⊗· · · ⊗μn, n ∈N.
Proof: Put Fn = S1 ⊗· · · ⊗Sn and Tn = Sn+1 × Sn+2 × · · · , and note that the class C = n(Fn × Tn) is a field in T0 generating the σ-field F∞. Define an additive function μ on C by μ(A × Tn) = (μ1 ⊗· · · ⊗μn)A, A ∈Fn, n ∈N, (12) which is clearly independent of the representation C = A × Tn. We need to extend μ to a probability measure on F∞. By Theorem 2.5, it is then enough to show that μ is continuous at ∅.
For any C1, C2, . . . ∈C with Cn ↓∅, we need to show that μCn →0.
Renumbering if necessary, we may assume for every n that Cn = An × Tn with An ∈Fn. Now define f n k = (μk+1 ⊗· · · ⊗μn)1An, k ≤n, (13) with the understanding that f n n = 1An for k = n. By Lemmas 3.2 and 3.3, each f n k is an Fk-measurable function on S1 × · · · × Sk, and (13) yields f n k = μk+1f n k+1, 0 ≤k < n.
(14) Since Cn ↓∅, the functions f n k are non-increasing in n for fixed k, say with limits gk. By (14) and dominated convergence, gk = μk+1 gk+1, k ≥0.
(15) Combining (12) and (13) gives μCn = f n 0 ↓g0. If g0 > 0, then (15) yields an s1 ∈S1 with g1(s1) > 0. Continuing recursively, we may construct a sequence ¯ s = (s1, s2, . . .) ∈T0, such that gn(s1, . . . , sn) > 0 for all n. Then 8. Conditioning and Disintegration 181 1Cn(¯ s) = 1An(s1, . . . , sn) = f n n(s1, . . . , sn) ≥gn(s1, . . . , sn) > 0, and so ¯ s ∈ n Cn, which contradicts the assumption Cn ↓∅. Thus, g0 = 0, which means that μCn →0.
2 In particular, we may choose some independent random elements with ar-bitrarily prescribed distributions. The result extends the elementary Theorem 4.19.
Corollary 8.25 (infinite product measures, Lomnicki & Ulam) For any prob-ability spaces (St, St, μt), t ∈T, there exist some independent random elements ξt in St with L(ξt) = μt, t ∈T.
Proof: For countable subsets I ⊂T, the associated product measures μI = t∈I μt exist by Theorem 8.24. Now proceed as in the proof of The-orem 8.23.
2 Exercises 1. Show that (ξ, η) d = (ξ′, η) iffP( ξ ∈B | η) = P( ξ′ ∈B | η) a.s. for any measur-able set B.
2. Show that EFξ = EGξ a.s. for all ξ ∈L1 iff¯ F = ¯ G.
3. Show that the averaging property implies the other properties of conditional expectations listed in Theorem 8.1.
4. State the probabilistic counterpart of Lemma A4.9, and give a direct proof.
5. Let 0 ≤ξn ↑ξ and 0 ≤η ≤ξ, where ξ1, ξ2, . . . , η ∈L1, and fix a σ-field F.
Show that EFη ≤supn EFξn. (Hint: Apply the monotone convergence property to EF(ξn ∧η).) 6. For any [0, ∞]-valued random variable ξ, define EFξ = supn EF(ξ ∧n). Show that this extension of EF satisfies the monotone convergence property. (Hint: Use the preceding result.) 7. Show that the above extension of EF remains characterized by the averaging property, and that EFξ < ∞a.s. iffthe measure ξ · P = E(ξ; ·) is σ-finite on F.
Extend EFξ to random variables ξ such that the measure |ξ| · P is σ-finite on F.
8. Let ξ1, ξ2, . . . be [0, ∞]-valued random variables, and fix any σ-field F. Show that lim infn EFξn ≥EF lim infn ξn a.s.
9. For any σ-field F, and let ξ, ξ1, ξ2, . . . be random variables with ξn →ξ and EF supn |ξn| < ∞a.s. Show that EFξn →EFξ a.s.
10. Let the σ-field F be generated by a partition A1, A2, . . . ∈A of Ω. For any ξ ∈L1, show that E(ξ |F) = E(ξ |Ak) = E(ξ; Ak)/PAk on Ak whenever PAk > 0.
11. For any σ-field F, event A, and random variable ξ ∈L1, show that E(ξ |F, 1A) = E(ξ; A |F)/P(A |F) a.s. on A.
182 Foundations of Modern Probability 12. Let the random variables ξ1, ξ2, . . . ≥0 and σ-fields F1, F2, . . . be such that E(ξn|Fn) P →0. Show that ξn P →0. (Hint: Consider the random variables ξn ∧1.) 13. Let (ξ, η) d = (˜ ξ, ˜ η), where ξ ∈L1. Show that E(ξ | η) d = E(˜ ξ |˜ η). (Hint: If E(ξ | η) = f(η), then E(˜ ξ |˜ η) = f(˜ η) a.s.) 14. Let (ξ, η) be a random vector in R2 with probability density f, put F(y) = f(x, y) dx, and let g(x, y) = f(x, y)/F(y). Show that P(ξ ∈B | η) = B g(x, η) dx a.s.
15. Use conditional distributions to deduce the monotone and dominated conver-gence theorems for conditional expectations from the corresponding unconditional results.
16. Assume EFξ d = ξ for a ξ ∈L1. Show that ξ is a.s. F-measurable. (Hint: Choose a strictly convex function f with Ef(ξ) < ∞, and apply the strict Jensen inequality to the conditional distributions.) 17. Assuming (ξ, η) d = (ξ, ζ), where η is ζ-measurable, show that ξ⊥ ⊥η ζ. (Hint: Show as above that P(ξ ∈B | η) d = P(ξ ∈B| ζ), and deduce the corresponding a.s.
equality.) 18. Let ξ be a random element in a separable metric space S. Show that L(ξ |F) is a.s. degenerate iffξ is a.s. F-measurable. (Hint: Reduce to the case where L(ξ |F) is degenerate everywhere and hence equal to δη for an F-measurable random element η in S. Then show that ξ = η a.s.) 19.
Give a direct proof of the equivalence (5).
Then state the probabilistic counterpart of Lemma A4.7, and give a direct proof.
20. Show that if G ⊥ ⊥FnH for some increasing σ-fields Fn, then G ⊥ ⊥F∞H.
21. Assuming ξ⊥ ⊥ηζ and γ⊥ ⊥(ξ, η, ζ), show that ξ⊥ ⊥η,γζ and ξ⊥ ⊥η(ζ, γ).
22. Extend Lemma 4.6 to the context of conditional independence. Also show that Corollary 4.7 and Lemma 4.8 remain valid for conditional independence, given a σ-field H.
23. Consider any σ-field F and random element ξ in a Borel space, and define η = L(ξ |F). Show that ξ⊥ ⊥η F.
24. Let ξ, η be random elements in a Borel space S. Prove the existence of a measurable function f : S ×[0, 1] →S and a U(0, 1) random variable γ⊥ ⊥η such that ξ = f(η, γ) a.s. (Hint: Choose f with {f(η, ϑ), η} d = (ξ, η) for any U(0, 1) random variable ϑ⊥ ⊥(ξ, η), and then let (γ, ˜ η) d = (ϑ, η) with (ξ, η) = {f(γ, ˜ η), ˜ η} a.s.) 25. Let ξ, η be random elements in Borel spaces S, T such that ξ = f(η) a.s. for a measurable function f : T →S. Show that η = g(ξ, ϑ) a.s. for a measurable function g: S × [0, 1] →T and an U(0, 1) random variable ϑ⊥ ⊥ξ. (Hint: Use Theorem 8.17.) 26. Suppose that the function f : T →S above is injective. Show that η = h(ξ) a.s. for a measurable function h : S →T. (Hint: Show that in this case η⊥ ⊥ϑ, and define h(x) = λg(x, ·).) Compare with Theorem A1.1.
27. Let ξ, η be random elements in a Borel space S. Show that we can choose a random element ˜ η in S with (ξ, η) d = (ξ, ˜ η) and η⊥ ⊥ξ ˜ η.
28. Let the probability measures P, Q on (Ω, A) be related by Q = ξ · P for a random variable ξ ≥0, and fix a σ-field F ⊂A. Show that Q = EP (ξ |F) · P on F.
8. Conditioning and Disintegration 183 29. Assume as before that Q = ξ ·P on A, and let F ⊂A. Show that EQ(η |F) = EP (ξη |F)/EP (ξ |F) a.s. Q for any random variable η ≥0.
Chapter 9 Optional Times and Martingales Filtrations, strictly and weakly optional times, closure properties, op-tional evaluation and hitting, augmentation, random time change, sub-and super-martingales, centering and convex maps, optional sampling and stopping, martingale transforms, maximum and upcrossing inequal-ities, sub-martingale convergence, closed martingales, limits of condi-tional expectations, regularization of sub-martingales, increasing limits of super-martingales The importance of martingales and related topics can hardly be exaggerated.
Indeed, filtrations and optional times as well as a wide range of sub- and super-martingales are constantly used in all areas of modern probability. They appear frequently throughout the remainder of this book.
In discrete time, a martingale is simply a sequence of integrable random variables centered at successive conditional means, a centering that can always be achieved by the elementary Doob decomposition. More precisely, given any discrete filtration F = (Fn), defined as an increasing sequence of σ-fields in Ω, we say that the random variables M0, M1, . . . form a martingale with respect to F if E(Mn | Fn−1) = Mn−1 a.s. for all n. A special role is played by the uniformly integrable martingales, which can be represented in the form Mn = E(ξ |Fn) for some integrable random variables ξ.
Martingale theory owes its usefulness to a number of powerful general re-sults, such as the optional sampling theorem, the sub-martingale convergence theorem, and a variety of maximum inequalities.
Applications discussed in this chapter include extensions of the Borel–Cantelli lemma and Kolmogorov’s 0−1 law. Martingales can also be used to establish the existence of measurable densities and to give a short proof of the law of large numbers.
Much of the discrete-time theory extends immediately to continuous time, thanks to the fundamental regularization theorem, which ensures that every continuous-time martingale with respect to a right-continuous filtration has a right-continuous version with left-hand limits. The implications of this result extend far beyond martingale theory. In particular, it will enable us in Chapters 16–17 to obtain right-continuous versions of independent-increment and Feller processes.
The theory of continuous-time martingales is continued in Chapters 10, 18– 20, and 35 with studies of quadratic variation, random time-change, integral representations, removal of drift, additional maximum inequalities, and various decomposition theorems. Martingales also play a basic role for especially the Skorohod embedding in Chapter 22, the stochastic integration in Chapters 18 185 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 186 Foundations of Modern Probability and 20, and the theories of Feller processes, SDEs, and diffusions in Chapters 17 and 32–33.
As for the closely related notion of optional times, our present treatment is continued with a more detailed study in Chapter 10. Optional times are fun-damental not only for martingale theory, but also for various models involving Markov processes. In the latter context they appear frequently throughout the remainder of the book.
To begin our systematic exposition of the theory, we fix an arbitrary index set T ⊂¯ R. A filtration on T is defined as a non-decreasing family of σ-fields Ft ⊂A, t ∈T. We say that a process X on T is adapted to F = (Ft) if Xt is Ft-measurable for every t ∈T. The smallest filtration with this property, given by Ft = σ{Xs; s ≤t}, t ∈T, is called the induced or generated filtration.
Here ‘smallest’ is understood in the sense of set inclusion for each t.
A random time is defined as a random element τ in ¯ T = T ∪{sup T}.
We say that τ is F-optional 1 if {τ ≤t} ∈Ft for every t ∈T, meaning that the process Xt = 1{τ ≤t} is adapted2. When T is countable, it is clearly equivalent that {τ = t} ∈Ft for every t ∈T.
With every optional time τ we associate a σ-field Fτ = A ∈A; A ∩{τ ≤t} ∈Ft, t ∈T .
We list some basic properties of optional times and corresponding σ-fields.
Lemma 9.1 (optional times) For any optional times σ, τ, (i) σ ∨τ and σ ∧τ are again optional, (ii) Fσ ∩{σ ≤τ} ⊂Fσ∧τ = Fσ ∩Fτ, (iii) Fσ ⊂Fτ on {σ ≤τ}, (iv) τ is Fτ-measurable, (v) if τ ≡t ∈¯ T, then τ is optional with Fτ = Ft.
Proof: (i) For any t ∈T, {σ ∨τ ≤t} = {σ ≤t} ∩{τ ≤t} ∈Ft, {σ ∧τ > t} = {σ > t} ∩{τ > t} ∈Ft.
(ii) For any A ∈Fσ and t ∈T, A ∩{σ ≤τ} ∩{τ ≤t} = A ∩{σ ≤t} ∩{τ ≤t} ∩ σ ∧t ≤τ ∧t , which belongs to Ft since σ ∧t and τ ∧t are both Ft-measurable. Hence, Fσ ∩{σ ≤τ} ⊂Fτ.
1Optional times are often called stopping times.
2We often omit the prefix ‘F-’ when there is no risk for confusion.
9. Optional Times and Martingales 187 The first relation now follows as we replace τ by σ ∧τ. Replacing σ and τ by the pairs (σ ∧τ, σ) and (σ ∧τ, τ) gives Fσ∧τ ⊂Fσ ∩Fτ. To prove the reverse relation, we note that for any A ∈Fσ ∩Fτ and t ∈T, A ∩ σ ∧τ ≤t = A ∩{σ ≤t} ∪ A ∩{τ ≤t} ∈Ft, whence A ∈Fσ∧τ.
(iii) Clear from (ii).
(iv) Applying (ii) to the pair (τ, t) gives {τ ≤t} ∈Fτ for all t ∈T, which extends immediately to any t ∈R. Now use Lemma 1.4.
(v) Clearly Fτ = Fτ ∩{τ ≤t} ⊂Ft. Conversely, let A ∈Ft and s ∈T.
When s ≥t we get A ∩{τ ≤s} = A ∈Ft ⊂Fs, whereas for s < t we have A ∩{τ ≤s} = ∅∈Fs. Thus, A ∈Fτ, proving that Ft ⊂Fτ.
2 Given a filtration F on R+, we may define a new filtration F+ by F+ t = u>t Fu, t ≥0, and say that F is right-continuous if F+ = F. In particular, F+ is right-continuous for any filtration F. We say that a random time τ is weakly F-optional if {τ < t} ∈Ft for every t > 0. Then τ + h is clearly F-optional for every h > 0, and we may define Fτ+ = h>0 Fτ+h. When the index set is Z+, we take F+ = F and make no difference between strictly and weakly optional times.
The notions of optional and weakly optional times agree when F is right-continuous: Lemma 9.2 (weakly optional times) A random time τ is weakly F-optional iffit is F+-optional, in which case Fτ+ = F+ τ = A ∈A; A ∩{τ < t} ∈Ft, t > 0 .
(1) Proof: For any t ≥0, we note that {τ ≤t} = ∩ r>t{τ < r}, {τ < t} = ∪ r<t{τ ≤r}, (2) where r may be restricted to the rationals. If A ∩{τ ≤t} ∈Ft+ for all t, we get by (2) for any t > 0 A ∩{τ < t} = ∪ r<t A ∩{τ ≤r} ∈Ft.
Conversely, if A∩{τ < t} ∈Ft for all t, then (2) yields for any t ≥0 and h > 0 A ∩{τ ≤t} = ∩ r∈(t, t + h) A ∩{τ < r} ∈Ft+h, and so A ∩{τ ≤t} ∈Ft+. For A = Ω this proves the first assertion, and for general A ∈A it proves the second relation in (1).
To prove the first relation, we note that A ∈Fτ+ iffA ∈Fτ+h for each h > 0, that is, iffA∩{τ +h ≤t} ∈Ft for all t ≥0 and h > 0. But this is equiv-alent to A∩{τ ≤t} ∈Ft+h for all t ≥0 and h > 0, hence to A∩{τ ≤t} ∈Ft+ 188 Foundations of Modern Probability for every t ≥0, which means that A ∈F+ τ .
2 We have seen that the maximum and minimum of two optional times are again optional. The result extends to countable collections, as follows.
Lemma 9.3 (closure properties) For any random times τ1, τ2, . . . and filtra-tion F on R+ or Z+, we have (i) if the τn are F-optional, then so is τ = supn τn, (ii) if the τn are weakly F-optional, then so is σ = infn τn, and F+ σ = ∩n F+ τn.
Proof: To prove (i) and the first assertion in (ii), we write {τ ≤t} = ∩n {τn ≤t}, {σ < t} = ∪n {τn < t}, (3) where the strict inequalities may be replaced by ≤for the index set T = Z+.
To prove the second assertion in (ii), we note that F+ σ ⊂ n F+ τn by Lemma 9.1. Conversely, assuming A ∈ n F+ τn, we get by (3) for any t ≥0 A ∩{σ < t} = A ∩∪n{τn < t} = ∪n A ∩{τn < t} ∈Ft, with the indicated modification for T = Z+. Thus, A ∈F + σ .
2 In particular, part (ii) of the last result is useful for the approximation of optional times from the right.
Lemma 9.4 (discrete approximation) For any weakly optional time τ in ¯ R+, there exist some countably-valued optional times τn ↓τ.
Proof: We may define τn = 2−n[2nτ + 1], n ∈N.
Then τn ∈2−n¯ N for all n, and τn ↓τ. The τn are optional, since {τn ≤k 2−n} = {τ < k 2−n} ∈Fk2−n k, n ∈N.
2 We may now relate the optional times to random processes. Say that a pro-cess X on R+ is progressively measurable or simply progressive, if its restriction to Ω×[0, t] is Ft ⊗B[0,t] -measurable for every t ≥0. Note that any progressive process is adapted by Lemma 1.28. Conversely, we may approximate from the left or right to show that any adapted and left- or right-continuous process is progressive. A set A ⊂Ω × R+ is said to be progressive if the corresponding indicator function 1A has this property, and we note that the progressive sets form a σ-field.
9. Optional Times and Martingales 189 Lemma 9.5 (optional evaluation) Consider a filtration F on an index set T, a process X on T with values in a measurable space (S, S), and an optional time τ in T. Then Xτ is Fτ-measurable under each of these conditions: (i) T is countable and X is adapted, (ii) T = R+ and X is progressive.
Proof: In both cases, we need to show that Xτ ∈B, τ ≤t ∈Ft, t ≥0, B ∈S.
This is clear in case (i), if we write {Xτ ∈B} = ∪ s≤t Xs ∈B, τ = s ∈Ft, B ∈S.
In case (ii) it is enough to show that Xτ∧t is Ft-measurable for every t ≥0.
We may then take τ ≤t and prove instead that Xτ is Ft-measurable. Writing Xτ = X ◦ψ with ψ(ω) = {ω, τ(ω)}, we note that ψ is measurable from Ft to Ft ⊗B[0,t], whereas X is measurable on Ω × [0, t] from Ft ⊗B[0,t] to S. The required measurability of Xτ now follows by Lemma 1.7.
2 For any process X on R+ or Z+ and a set B in the range space of X, we introduce the hitting time τB = inf t > 0; Xt ∈B .
We often need to certify that τB is optional. The following elementary result covers some common cases.
Lemma 9.6 (hitting times) For a filtration F on T = R+ or Z+, let X be an F-adapted process on T with values in a measurable space (S, S), and let B ∈S. Then τB is weakly optional under each of these conditions: (i) T = Z+, (ii) T = R+, S is a metric space, B is closed, and X is continuous, (iii) T = R+, S is a topological space, B is open, and X is right-continuous.
Proof: (i) Write {τB ≤n} = ∪ k∈[1, n]{Xk ∈B} ∈Fn, n ∈N.
(ii) Letting ρ be the metric on S, we get for any t > 0 {τB ≤t} = ∪ h>0 ∩ n∈N ∪ r∈Q ∩[h, t] ρ(Xr, B) ≤n−1 ∈Ft.
(iii) By Lemma 9.2, it suffices to write {τB < t} = ∪ r∈Q ∩(0, t){Xr ∈B} ∈Ft, t > 0.
2 For special purposes, we need the following more general but much deeper result, known as the debut theorem. Here and below, a filtration F is said to be complete, if the basic σ-field A is complete and each Ft contains all P-null sets in A.
190 Foundations of Modern Probability Theorem 9.7 (first entry, Doob, Hunt) Let the set A ⊂R+×Ω be progressive for a right-continuous, complete filtration F. Then the time τ(ω) = inf{t ≥0; (t, ω) ∈A} is F-optional.
Proof: Since A is progressive, A ∩[0, t) ∈Ft ⊗B[0,t] for every t > 0. Noting that {τ < t} is the projection of A ∩[0, t) onto Ω, we get {τ < t} ∈Ft by Theorem A1.2, and so τ is optional by Lemma 9.2.
2 For applications of this and other results, we may need to extend a given filtration F on R+, to make it both right-continuous and complete. Writing A for the completion of A, put N = {A ∈A; PA = 0} and define Ft = σ{Ft, N}. Then F = ( Ft) is the smallest complete extension of F. Similarly, F+ = (Ft+) is the smallest right-continuous extension of F. We show that the two extensions commute and can be combined into a smallest right-continuous and complete version, known as the (usual) augmentation of F.
Lemma 9.8 (augmented filtration) Any filtration F on R+ has a smallest right-continuous, complete extension G, given by Gt = Ft+ = F t+, t ≥0.
(4) Proof: First we note that Ft+ ⊂ F t+ ⊂ F t+, t ≥0.
Conversely, let A ∈ F t+ . Then A ∈ F t+h for every h > 0, and so as in Lemma 1.27 there exist some sets Ah ∈Ft+h with P(A △Ah) = 0. Now choose hn →0, and define A′ = {Ahn i.o.}. Then A′ = Ft+ and P(A △A′) = 0, and so A ∈Ft+. Thus, F t+ ⊂Ft+, which proves the second relation in (4).
In particular, the filtration G in (4) contains F and is both right-continuous and complete. For any filtration H with those properties, we have Gt = Ft+ ⊂Ht+ = Ht+ = Ht, t ≥0, which proves the required minimality of G.
2 The σ-fields Fτ arise naturally in connection with a random time change: Proposition 9.9 (random time-change) Let X ≥0 be a non-decreasing, right-continuous process, adapted to a right-continuous filtration F, and define τs = inf t > 0; Xt > s , s ≥0.
Then (i) the τs form a right-continuous process of optional times, generating a right-continuous filtration Gs = Fτs, s ≥0, (ii) if X is continuous and τ is F-optional, then Xτ is G-optional with Fτ ⊂GXτ, 9. Optional Times and Martingales 191 (iii) when X is strictly increasing, we have Fτ = GXτ.
In case (iii), we have in particular Ft = GXt for all t, so that the processes (τs) and (Xt) play symmetric roles.
Proof, (i)−(ii): The times τs are optional by Lemmas 9.2 and 9.6, and since (τs) is right-continuous, so is (Gs) by Lemma 9.3. If X is continuous, then Lemma 9.1 yields for any F-optional time τ > 0 and set A ∈Fτ A ∩{Xτ ≤s} = A ∩{τ ≤τs} ∈ Fτs = Gs, s ≥0.
For A = Ω it follows that Xτ is G-optional, and for general A we get A ∈GXτ.
Thus, Fτ ⊂GXτ. Both statements extend by Lemma 9.3 to arbitrary τ.
(iii) For any A ∈GXt with t > 0, we have A ∩{t ≤τs} = A ∩{Xt ≤s} ∈ Gs = Fτs, s ≥0, and so A ∩ t ≤τs ≤u ∈Fu, s ≥0, u > t.
Taking the union over all s ∈Q+ gives3 A ∈Fu, and as u ↓t we get A ∈ Ft+ = Ft. Hence, Ft = GXt, which extends as before to t = 0. By Lemma 9.1, we obtain for any A ∈GXτ A ∩{τ ≤t} = A ∩{Xτ ≤Xt} ∈ GXt = Ft, t ≥0, and so A ∈Fτ. Thus, GXτ ⊂Fτ, and so the two σ-fields agree.
2 To motivate the definition of martingales, fix a filtration F on an index set T and a random variable ξ ∈L1, and introduce the process Mt = E(ξ | Ft), t ∈T.
Then M is clearly integrable (for each t) and adapted, and the tower property of conditional expectations yields Ms = E(Mt| Fs) a.s., s ≤t.
(5) Any integrable and adapted process M satisfying (5) is called a martingale with respect to F, or an F-martingale. When T = Z+, it is enough to require (5) for t = s + 1, so that the condition becomes E ΔMn Fn−1 = 0 a.s., n ∈N, (6) 3Recall that Q+ denotes the set of rational numbers r ≥0.
192 Foundations of Modern Probability where ΔMn = Mn −Mn−1. A martingale in Rd is a process M = (M 1, . . . , M d) where M 1, . . . , M d are one-dimensional martingales.
We also consider the cases where (5) or (6) are replaced by corresponding inequalities. Thus, we define a sub-martingale as an integrable and adapted process X satisfying Xs ≤E(Xt| Fs) a.s., s ≤t.
(7) Reversing the inequality sign4 yields the notion of a super-martingale. In par-ticular, the mean is non-decreasing for sub-martingales and non-increasing for super-martingales.
Given a filtration F on Z+, we say that a random sequence A = (An) with A0 = 0 is predictable with respect to F or simply F-predictable, if An is Fn−1 -measurable for every n ∈N, so that the shifted sequence (θA)n = An+1 is adapted. The following elementary result, known as the Doob decomposi-tion, is often used to derive results for sub-martingales from the corresponding martingale versions. An extension to continuous time is proved in Chapter 10.
Lemma 9.10 (centering, Doob) An integrable, F-adapted process X on Z+ has an a.s. unique decomposition M + A, where M is an F-martingale and A is F-predictable with A0 = 0. In particular, X is a sub-martingale iffA is a.s.
non-decreasing.
Proof: If X = M + A for some processes M and A as stated, then clearly ΔAn = E(ΔXn|Fn−1) a.s. for all n ∈N, and so An = Σ k≤n E ΔXk Fk−1 a.s., n ∈Z+, (8) which proves the required uniqueness. In general, we may define a predictable process A by (8). Then M = X −A is a martingale, since E ΔMn Fn−1 = E ΔXn Fn−1 −ΔAn = 0 a.s., n ∈N.
2 Next we show how the martingale and sub-martingale properties are pre-served by suitable transformations.
Lemma 9.11 (convex maps) Consider a random sequence M = (Mn) in Rd and a convex function f : Rd →R, such that X = f(M) is integrable. Then X is a sub-martingale under each of these conditions: (i) M is a martingale in Rd, (ii) M is a sub-martingale in R and f is non-decreasing.
4The sign conventions are suggested by analogy with sub- and super-harmonic functions.
Note that sub-martingales tend to be increasing whereas super-martingales tend to be de-creasing. For branching processes the conventions are the opposite.
9. Optional Times and Martingales 193 Proof: (i) By the conditional version of Jensen’s inequality, we have for s ≤t f(Ms) = f E(Mt| Fs) ≤E f(Mt) Fs a.s.
(9) (ii) Here the first relation in (9) becomes f(Ms) ≤f{E(Mt| Fs)}, and the conclusion remains valid.
2 The last result is often applied with f(x) = |x|p for some p ≥1, or with f(x) = x+ = x ∨0 when d = 1. We turn to a basic version of the powerful optional sampling theorem. An extension to continuous-time sub-martingales appears as Theorem 9.30. Say that an optional time τ is bounded if τ ≤u a.s.
for some u ∈T. This always holds when T has a last element.
Theorem 9.12 (optional sampling, Doob) Let M be a martingale on a count-able index set T with filtration F, and consider some optional times σ, τ, where τ is bounded. Then Mτ is integrable, and Mσ∧τ = E(Mτ| Fσ) a.s.
Proof: By Lemmas 8.3 and 9.1, we get for any t ≤u in T E(Mu| Fτ) = E(Mu| Ft) = Mt = Mτ a.s. on {τ = t}, and so E(Mu| Fτ) = Mτ a.s. whenever τ ≤u a.s. If σ ≤τ ≤u, then Fσ ⊂Fτ by Lemma 9.1, and we get a.s.
E(Mτ| Fσ) = E E(Mu| Fτ) Fσ = E(Mu| Fσ) = Mσ.
Further note that E(Mτ| Fσ) = Mτ a.s. when τ ≤σ ∧u. In general, using Lemmas 8.3 and 9.1, we may combine the two special cases to get a.s.
E(Mτ| Fσ) = E(Mτ| Fσ∧τ) = Mσ∧τ on {σ ≤τ}, E(Mτ|Fσ) = E(Mσ∧τ| Fσ) = Mσ∧τ on {σ > τ}.
2 In particular, the martingale property is preserved by a random time change: Corollary 9.13 (random-time change) Consider an F-martingale M and a non-decreasing family of bounded optional times τs taking countably many val-ues. Then the process N is a G -martingale, where Ns = Mτs, Gs = Fτs.
This leads in turn to a useful martingale criterion.
194 Foundations of Modern Probability Corollary 9.14 (martingale criterion) Let M be an integrable, adapted pro-cess on T. Then these conditions are equivalent: (i) M is a martingale, (ii) EMσ = EMτ for any bounded, optional times σ, τ taking countably many values in T.
In (ii) it is enough to consider optional times taking at most two values.
Proof: If s < t in T and A ∈Fs, then τ = s1A + t1Ac is optional, and so 0 = EMt −EMτ = EMt −E(Ms; A) −E(Mt; Ac) = E(Mt −Ms; A).
Since A is arbitrary, it follows that E(Mt −Ms| Fs) = 0 a.s.
2 The following predictable transformation of martingales is basic for the theory of stochastic integration.
Corollary 9.15 (martingale transform) Let M be a martingale on an index set T with filtration F, fix an optional time τ taking countably many values, and let η be a bounded, Fτ-measurable random variable. Then we may form another martingale Nt = η (Mt −Mt∧τ), t ∈T.
Proof: The integrability holds by Theorem 9.12, and the adaptedness is clear if we replace η by η 1{τ ≤t} in the expression for Nt.
Now fix any bounded, optional time σ taking countably many values. By Theorem 9.12 and the pull-out property of conditional expectations, we get a.s.
E(Nσ| Fτ) = η E Mσ −Mσ∧τ Fτ = η (Mσ∧τ −Mσ∧τ) = 0, and so ENσ = 0. Thus, N is a martingale by Lemma 9.14.
2 In particular, the martingale property is preserved by optional stopping, in the sense that the stopped process M τ t = Mτ∧t is a martingale whenever M is a martingale and τ is an optional time taking countably many values. More generally, we may consider predictable step processes of the form Vt = Σ k≤n ηk1{t > τk}, t ∈T, where τ1 ≤· · · ≤τn are optional times and each ηk is a bounded, Fτk-measurable random variable. For any process X, the associated elementary stochastic integral (V · X)t ≡ t 0 Vs dXs = Σ k≤n ηk(Xt −Xt∧τk), t ∈T, 9. Optional Times and Martingales 195 is a martingale by Corollary 9.15, whenever X is a martingale and each τk takes countably many values. In discrete time, we may take V to be a bounded, predictable sequence, in which case (V · X)n = Σ k≤n Vk ΔXk, n ∈Z+.
The result for martingales extends in an obvious way to sub-martingales X, provided that V ≥0.
We proceed with some basic martingale inequalities, beginning with some extensions of Kolmogorov’s maximum inequality from Lemma 5.15.
Theorem 9.16 (maximum inequalities, Bernstein, L´ evy) Let X be a sub-martingale on a countable index set T. Then for any r ≥0 and u ∈T, (i) r P sup t≤u Xt ≥r ≤E Xu; sup t≤u Xt ≥r ≤EX+ u , (ii) r P supt|Xt| ≥r ≤3 suptE|Xt|.
Proof: (i) By dominated convergence it is enough to consider finite index sets, so we may take T = Z+.
Define τ = u ∧inf{t; Xt ≥r} and B = {maxt≤u Xt ≥r}. Then τ is an optional time bounded by u, and we have B ∈Fτ and Xτ ≥r on B. Hence, Lemma 9.10 and Theorem 9.12 yield r PB ≤E(Xτ; B) ≤E(Xu; B) ≤EX+ u .
(ii) For the Doob decomposition X = M + A, relation (i) applied to −M yields r P min t≤u Xt ≤−r ≤r P min t≤u Mt ≤−r ≤EM − u = EM + u −EMu ≤EX+ u −EX0 ≤2 max t≤u E|Xt|.
In remains to combine with (i).
2 We turn to a powerful norm inequality. For processes X on an index set T, we define X∗ t = sup s≤t |Xs|, X∗= sup t∈T |Xt|.
Theorem 9.17 (Lp-inequality, Doob) Let M be a martingale on a countable index set T, and fix any p, q > 1 with p−1 + q−1 = 1. Then ∥M ∗ t ∥p ≤q ∥Mt∥p, t ∈T.
196 Foundations of Modern Probability Proof: By monotone convergence, we may take T = Z+. If ∥Mt∥p < ∞, then ∥Ms∥p < ∞for all s ≤t by Jensen’s inequality, and so we may assume that 0 < ∥M ∗ t ∥p < ∞. Applying Theorem 9.16 to the sub-martingale |M|, we get r P{M ∗ t > r} ≤E |Mt|; M ∗ t > r , r > 0.
Using Lemma 4.4, Fubini’s theorem, and H¨ older’s inequality, we obtain ∥M ∗ t ∥p p = p ∞ 0 P{M ∗ t > r} rp−1dr ≤p ∞ 0 E |Mt|; M ∗ t > r rp−2 dr = p E |Mt| M∗ t 0 rp−2 dr = q E |Mt| M ∗(p−1) t ≤q ∥Mt∥p M ∗(p−1) t q = q ∥Mt∥p ∥M ∗ t ∥p−1 p .
Now divide by the last factor on the right.
2 The next inequality is needed to prove the basic convergence theorem. For any function f : T →R and constants a < b, we define the number of [a, b] -crossings of f up to time t as the supremum of all n ∈Z+, such that there exist times s1 < t1 < s2 < t2 < · · · < sn < tn ≤t in T with f(sk) ≤a and f(tk) ≥b for all k. This supremum may clearly be infinite.
Lemma 9.18 (upcrossing inequality, Doob, Snell) Let X be a sub-martingale on a countable index set T, and let N b a(t) be the number of [a, b]-crossings of X up to time t. Then EN b a(t) ≤E(Xt −a)+ b −a , t ∈T, a < b in R.
Proof: As before we may take T = Z+. Since Y = (X −a)+ is again a sub-martingale by Lemma 9.11, and the [a, b] -crossings of X correspond to [0, b−a] -crossings of Y , we may take X ≥0 and a = 0. Then define recursively the optional times 0 = τ0 ≤σ1 < τ1 < σ2 < · · · by σk = inf n ≥τk−1; Xn = 0 , τk = inf n ≥σk; Xn ≥b , k ∈N, and introduce the predictable process Vn = Σ k≥1 1 σk < n ≤τk , n ∈N.
Since (1 −V ) · X is again a sub-martingale by Corollary 9.15, we get E (1 −V ) · X t ≥E (1 −V ) · X 0 = 0, t ≥0.
Since also (V · X)t ≥b N b 0(t), we obtain 9. Optional Times and Martingales 197 b EN b 0(t) ≤E(V · X)t ≤E(1 · X)t = EXt −EX0 ≤EXt.
2 We may now prove the fundamental convergence theorem for sub-martin-gales. Say that a process X is Lp-bounded if supt ∥Xt∥p < ∞.
Theorem 9.19 (convergence and regularity, Doob) Let X be an L1-bounded sub-martingale on a countable index set T. Then outside a fixed P-null set, X converges along every increasing or decreasing sequence in T.
Proof: By Theorem 9.16 we have X∗< ∞a.s., and Lemma 9.18 shows that X has a.s. finitely many upcrossings of every interval [a, b] with rational a < b. Then X has clearly the asserted property, outside the P-null set where any of these countably many conditions fails.
2 We consider an interesting and useful application.
Proposition 9.20 (one-sided bound) Let M be a martingale on Z+ with ΔM ≤ c a.s. for a constant c < ∞. Then a.s.
{Mn converges} = sup n Mn < ∞ .
Proof: Since M −M0 is again a martingale, we may take M0 = 0. Consider the optional times τm = inf n; Mn ≥m , m ∈N.
The processes M τm are again martingales by Corollary 9.15. Since M τm ≤m+c a.s., we have E|M τm| ≤2(m + c) < ∞, and so M τm converges a.s. by Theorem 9.19. Hence, M converges a.s. on supnMn < ∞ = ∪m {τm = ∞} ⊂∪m {M = M τm}.
The reverse implication is obvious, since every convergent sequence in R is bounded.
2 The last result yields a useful extension of the Borel–Cantelli lemma from Theorem 4.18: Corollary 9.21 (extended Borel–Cantelli lemma, L´ evy) For a filtration F on Z+, let An ∈Fn, n ∈N. Then a.s.
{An i.o.} = Σ n P(An| Fn−1) = ∞ .
198 Foundations of Modern Probability Proof: The sequence Mn = Σ k≤n 1Ak −P(Ak| Fk−1) , n ∈Z+, is a martingale with |ΔMn| ≤1, and so by Proposition 9.20 the events {supn(±Mn) = ∞} agree a.s., which implies P{Mn →∞} = P{Mn →−∞} = 0.
Hence, we have a.s.
{An i.o.} = Σ n1An = ∞ = Σ n P(An|Fn−1) = ∞ .
2 A martingale M is said to be closed if u = sup T belongs to T, in which case Mt = E(Mu| Ft) a.s. for all t ∈T. When u ̸∈T, we say that M is closable if it can be extended to a martingale on ¯ T = T ∪{u}. If Mt = E(ξ | Ft) for a variable ξ ∈L1, we may clearly choose Mu = ξ. We give some general criteria for closability. An extension to continuous-time sub-martingales appears as part of Theorem 9.30.
Theorem 9.22 (uniform integrability and closure, Doob) For a martingale M on an unbounded index set T, these conditions are equivalent: (i) M is uniformly integrable, (ii) M is closable at sup T, (iii) M is L1-convergent at sup T.
Under those conditions, M may be closed by the limit in (iii).
Proof: Clearly (ii) ⇒(i) by Lemma 8.4 and (i) ⇒(iii) by Theorem 9.19 and Proposition 5.12. Now let Mt →ξ in L1 as t →u ≡sup T. Using the L1-contractivity of conditional expectations, we get as t →u for fixed s Ms = E(Mt|Fs) →E(ξ |Fs) in L1.
Thus, Ms = E(ξ |Fs) a.s., and we may take Mu = ξ.
This shows that (iii) ⇒(ii).
2 For comparison, we consider the case of Lp-convergence for p > 1.
Corollary 9.23 (Lp-convergence) Let M be a martingale on an unbounded index set T, and fix any p > 1. Then these conditions are equivalent: (i) M converges in Lp, (ii) M is Lp-bounded.
9. Optional Times and Martingales 199 Proof: We may clearly assume that T is countable. If M is Lp-bounded, it converges in L1 by Theorem 9.19. Since |M|p is uniformly integrable by Theorem 9.17, the convergence extends to Lp by Proposition 5.12. Conversely, if M converges in Lp, it is Lp-bounded by Lemma 9.11.
2 We turn to the convergence of martingales of the special form Mt = E(ξ |Ft), as t increases or decreases along a sequence. Without loss of generality, we may take the index set T to be unbounded above or below, and define respectively F∞= ∨ t∈T Ft, F−∞= ∩ t∈T Ft.
Theorem 9.24 (conditioning limits, Jessen, L´ evy) Let F be a filtration on a countable index set T ⊂R, unbounded above or below. Then for any ξ ∈L1, we have as t →±∞ E(ξ | Ft) →E(ξ | F±∞) a.s. and in L1.
Proof: By Theorems 9.19 and 9.22, the martingale Mt = E(ξ | Ft) converges a.s. and in L1 as t →±∞, where the limit M±∞may clearly be taken to be F±∞-measurable. To obtain M±∞= E(ξ | F±∞) a.s., we need to verify that E(M±∞; A) = E(ξ; A), A ∈F±∞.
(10) Then note that, by the definition of M, E(Mt; A) = E(ξ; A), A ∈Fs, s ≤t.
(11) This clearly remains true for s = −∞, and as t →−∞we get the ‘minus’ version of (10). To get the ‘plus’ version, let t →∞in (11) for fixed s, and extend by a monotone-class argument to arbitrary A ∈F∞.
2 In particular, we note the following special case.
Corollary 9.25 (L´ evy) For any filtration F on Z+, P(A | Fn) →1A a.s., A ∈F∞.
For a simple application, we prove an extension of Kolmogorov’s 0−1 law in Theorem 4.13. Here a relation F1 ⊂F2 between two σ-fields is said to hold a.s. if A ∈F1 implies A ∈¯ F2, where ¯ F2 denotes the completion of F2.
Corollary 9.26 (tail σ-field) Let the σ-fields F1, F2, . . . be conditionally in-dependent, given G. Then ∩ n≥1 σ Fn, Fn+1, . . . ⊂G a.s.
200 Foundations of Modern Probability Proof: Let T be the σ-field on the left, and note that T ⊥ ⊥G (F1 ∨· · · ∨Fn) by Theorem 8.12. Using Theorem 8.9 and Corollary 9.25, we get for any A ∈T P(A | G) = P A G, F1, . . . , Fn →1A a.s., which shows that T ⊂G a.s.
2 The last theorem yields a short proof of the strong law of large numbers.
Here we put Sn = ξ1 + . . . + ξn for some i.i.d. random variables ξ1, ξ2, . . . in L1, and define F−n = σ{Sn, Sn+1, . . .}. Then F−∞is trivial by Theorem 4.15, and E(ξk| F−n) = E(ξ1| F−n) a.s. for every k ≤n, since (ξk, Sn, Sn+1, . . .) d = (ξ1, Sn, Sn+1, . . .). Hence, Theorem 9.24 yields n−1Sn = E n−1Sn F−n = n−1Σ k≤n E(ξk| F−n) = E(ξ1| F−n) →E(ξ1| F−∞) = E ξ1.
For a further application of Theorem 9.24, we prove a kernel version of the regularization Theorem 8.5, needed in Chapter 32.
Theorem 9.27 (kernel densities) Consider a probability kernel μ: S →T × U, where T, U are Borel. Then the densities ν(s, t, B) = μ(s, dt × B) μ(s, dt × U), s ∈S, t ∈T, B ∈U, (12) have versions combining into a probability kernel ν : S × T →U.
Proof: We may take T, U to be Borel subsets of R, in which case μ can be regarded as a probability kernel from S to R2. Letting In be the σ-field in R generated by the intervals Ink = 2−n[(k −1), k), k ∈Z, we define Mn(s, t, B) = Σ k μ(s, Ink × B) μ(s, Ink × U) 1{t ∈Ink}, s ∈S, t ∈T, B ∈B, subject to the convention 0/0 = 0. Then Mn(s, ·, B) is a version of the density in (12) on the σ-field In, and for fixed s and B it is also a martingale with respect to μ(s, · × U). By Theorem 9.24 we get Mn(s, ·, B) →ν(s, ·, B) a.e.
μ(s, · × U). Thus, ν has the product-measurable version ν(s, t, B) = lim sup n→∞Mn(s, t, B), s ∈S, t ∈T, B ∈U.
To construct a kernel version of ν, we may proceed as in the proof of Theo-rem 3.4, noting that in each step the exceptional (s, t) -set A lies in S ⊗T , with sections As = {t ∈T; (s, t) ∈A} satisfying μ(s, As × U) = 0 for all s ∈S.
2 9. Optional Times and Martingales 201 To extend the previous theory to continuous time, we need to form suitably regular versions of the various processes. The following closely related regular-izations may then be useful. Say that a process X on R+ is right-continuous with left-hand limits (abbreviated5 as rcll) if Xt = Xt+ for all t ≥0, and the left-hand limits Xt−exist and are finite for all t > 0. For any process Y on Q+, we define a process Y + by (Y +)t = Yt+, t ≥0, whenever the right-hand limits exist.
Theorem 9.28 (regularization, Doob) For any F-sub-martingale X on R+ with restriction Y to Q+, we have (i) Y + exists and is rcll outside a fixed P-null set A, and Z = 1AcY + is a sub-martingale with respect to the augmented filtration F+, (ii) when F is right-continuous, X has an rcll version iffEX is right-con-tinuous, hence in particular when X is a martingale.
The proof requires an extension of Theorem 9.22 to suitable sub-martingales.
Lemma 9.29 (uniform integrability) Let X be a sub-martingale on Z−. Then these conditions are equivalent: (i) X is uniformly integrable, (ii) EX is bounded.
Proof: Let EX be bounded. Form the predictable sequence αn = E ΔXn Fn−1 ≥0, n ≤0, and note that E Σ n≤0 αn = EX0 −inf n≤0 EXn < ∞.
Hence, Σ n αn < ∞a.s., and so we may define An = Σ k≤n αk, Mn = Xn −An, n ≤0.
Since EA∗< ∞, and M is a martingale closed at 0, both A and M are uni-formly integrable.
2 Proof of Theorem 9.28: (i) By Lemma 9.11 the process Y ∨0 is L1-bounded on bounded intervals, and so the same thing is true for Y . Thus, by Theorem 9.19 the right- and left-hand limits Yt± exist outside a fixed P-null set A, and so Z = 1AcY + is rcll. Also note that Z is adapted to F+.
To prove that Z is a sub-martingale for F+, fix any times s < t, and choose sn ↓s and tn ↓t in Q+ with sn < t. Then Ysm ≤E(Ytn|Fsm) a.s. for all m and n, and Theorem 9.24 yields Zs ≤E(Ytn|Fs+) a.s. as m →∞. Since Ytn →Zt in L1 by Lemma 9.29, it follows that 5The French acronym c adlag is often used, even in English texts.
202 Foundations of Modern Probability Zs ≤E(Zt| Fs+) = E(Zt| ¯ Fs+) a.s.
(ii) For any t < tn ∈Q+, (EX)tn = E(Ytn), Xt ≤E(Ytn| Ft) a.s., and as tn ↓t we get by Lemma 9.29 and the right-continuity of F (EX)t+ = EZt, Xt ≤E(Zt| Ft) = Zt a.s.
(13) If X has a right-continuous version, then clearly Zt = Xt a.s. Hence, (13) yields (EX)t+ = EXt, which shows that EX is right-continuous. If instead EX is right-continuous, then (13) gives E|Zt −Xt| = EZt −EXt = 0, and so Zt = Xt a.s., which means that Z is a version of X.
2 Justified by the last theorem, we may henceforth take all sub-martingales to be rcll, unless otherwise specified, and also let the underlying filtration be right-continuous and complete. Most of the previously quoted results for sub-martingales on a countable index set extend immediately to continuous time.
In particular, this is true for the convergence Theorem 9.19 and the inequalities in Theorem 9.16 and Lemma 9.18. We proceed to show how Theorems 9.12 and 9.22 extend to sub-martingales in continuous time.
Theorem 9.30 (optional sampling and closure, Doob) Let X be an F-sub-martingale on R+, where X and F are right-continuous, and consider some optional times σ, τ, where τ is bounded. Then Xτ is integrable, and Xσ∧τ ≤E(Xτ| Fσ) a.s.
(14) This extends to unbounded times τ iffX+ is uniformly integrable.
Proof: Introduce the optional times σn = 2−n[2nσ+1] and τn = 2−n[2nτ+1], and conclude from Lemma 9.10 and Theorem 9.12 that Xσm∧τn ≤E Xτn Fσm a.s., m, n ∈N.
As m →∞, we get by Lemma 9.3 and Theorem 9.24 Xσ∧τn ≤E(Xτn| Fσ) a.s., n ∈N.
(15) By the result for the index sets 2−n Z+, the random variables X0; . . . , Xτ2, Xτ1 form a sub-martingale with bounded mean, and hence are uniformly integrable by Lemma 9.29. Thus, (14) follows as we let n →∞in (15).
9. Optional Times and Martingales 203 If X+ is uniformly integrable, then X is L1-bounded and hence converges a.s. toward some X∞∈L1. By Proposition 5.12 we get X+ t →X+ ∞in L1, and so E(X+ t | Fs) →E(X+ ∞| Fs) in L1 for each s. Letting t →∞along a sequence, we get by Fatou’s lemma Xs ≤lim t→∞E(X+ t | Fs) −lim inf t→∞E(X− t | Fs) ≤E(X+ ∞| Fs) −E(X− ∞| Fs) = E(X∞| Fs).
Approximating as before, we obtain (14) for arbitrary σ, τ.
Conversely, the stated condition yields the existence of an X∞∈L1 with Xs ≤E(X∞| Fs) a.s. for all s > 0, and so X+ s ≤E(X+ ∞| Fs) a.s. by Lemma 9.11. Hence, X+ is uniformly integrable by Lemma 8.4.
2 For a simple application, we consider the hitting probabilities of a contin-uous martingale. The result will be useful in Chapters 18, 22, and 33.
Corollary 9.31 (first hit) Let M be a continuous martingale with M0 = 0 and P{M ∗> 0} > 0, and define τx = inf{t > 0; Mt = x}. Then for a < 0 < b, P τa < τb M ∗> 0 ≤ b b −a ≤P τa ≤τb M ∗> 0 .
Proof: Since τ = τa ∧τb is optional by Lemma 9.6, Theorem 9.30 yields EMτ∧t = 0 for all t > 0, and so by dominated convergence EMτ = 0. Hence, 0 = a P{τa < τb} + b P{τb < τa} + E M∞; τ = ∞ ≤a P{τa < τb} + b P τb ≤τa, M ∗> 0 = b P{M ∗> 0} −(b −a) P{τa < τb}, which implies the first inequality. The second one follows as we take comple-ments.
2 The next result plays a crucial role in Chapter 17.
Lemma 9.32 (absorption) For any right-continuous super-martingale X ≥ 0, we have X = 0 a.s. on [τ, ∞), where τ = inf t ≥0; Xt ∧Xt−= 0 .
Proof: By Theorem 9.28 the process X remains a super-martingale with respect to the right-continuous filtration F+. The times τn = inf{t ≥0; Xt < n−1} are F+-optional by Lemma 9.6, and the right-continuity of X yields Xτn ≤n−1 on {τn < ∞}. Hence, Theorem 9.30 yields E Xt; τn ≤t ≤E Xτn; τn ≤t ≤n−1, t ≥0, n ∈N.
204 Foundations of Modern Probability Noting that τn ↑τ, we get by dominated convergence E(Xt; τ ≤t) = 0, and so Xt = 0 a.s. on {τ ≤t}. The assertion now follows, as we apply this result to all t ∈Q+ and use the right-continuity of X.
2 Finally we show how the right-continuity of an increasing sequence of super-martingales extends to the limit. The result is needed in Chapter 34.
Theorem 9.33 (increasing super-martingales, Meyer) Let X1 ≤X2 ≤· · · be right-continuous super-martingales with supn EXn 0 < ∞. Then Xt = supn Xn t is again an a.s. right-continuous super-martingale.
Proof (Doob): By Theorem 9.28 we may take the filtration to be right-continuous. The super-martingale property carries over to X by monotone convergence. To prove the asserted right-continuity, we may take X1 to be bounded below by an integrable random variable, since we can otherwise con-sider the processes obtained by optional stopping at the times m∧inf{t; X1 t < −m}, for arbitrary m > 0.
For fixed ε > 0, let T be the class of optional times τ with lim sup u ↓t |Xu −Xt| ≤2ε, t < τ, and put p = infτ∈T Ee−τ. Choose σ1, σ2, . . . ∈T with Ee−σn →p , and note that σ ≡supn σn ∈T with Ee−σ = p . We need to show that σ = ∞a.s. Then introduce the optional times τn = inf t > σ; |Xn t −Xσ| > ε , n ∈N, and put τ = lim supn τn. Noting that |Xt −Xσ| = lim inf n→∞|Xn t −Xσ| ≤ε, t ∈[σ, τ), we obtain τ ∈T .
By the right-continuity of Xn, we have |Xn τn −Xσ| ≥ε on {τn < ∞} for every n. Furthermore, on the set A = {σ = τ < ∞}, lim inf n→∞Xn τn ≥supk lim n→∞Xk τn = supk Xk σ = Xσ , and so lim inf n→∞Xn τn ≥Xσ + ε on A.
Since A ∈Fσ by Lemma 9.1, we get by Fatou’s lemma, optional sampling, and monotone convergence E(Xσ + ε; A) ≤E lim inf n→∞Xn τn; A ≤lim inf n→∞E(Xn τn; A) ≤lim n→∞E(Xn σ; A) = E(Xσ; A).
Thus, PA = 0, and so τ > σ a.s. on {σ < ∞}. Since p > 0 would yield the contradiction Ee−τ < p, we have p = 0, and so σ = ∞a.s.
2 9. Optional Times and Martingales 205 Exercises 1. For any optional times σ, τ, show that {σ = τ} ∈Fσ ∩Fτ and Fσ = Fτ on {σ = τ}. However, Fτ and F∞may differ on {τ = ∞}.
2. Show that if σ, τ are optional times on the time scale R+ or Z+, then so is σ + τ.
3. Give an example of a random time that is weakly optional but not optional.
(Hint: Let F be the filtration induced by the process Xt = ϑ t with P{ϑ = ±1} = 1 2, and take τ = inf{t; Xt > 0}.) 4.
Fix a random time τ and a random variable ξ in R \ {0}.
Show that the process Xt = ξ 1{τ ≤t} is adapted to a given filtration F iffτ is F-optional and ξ is Fτ -measurable. Give corresponding conditions for the process Yt = ξ 1{τ < t}.
5. Let P be the class of sets A ∈R+ × Ω such that the process 1A is progressive.
Show that P is a σ-field and that a process X is progressive iffit is P-measurable.
6. Let X be a progressive process with induced filtration F, and fix any optional time τ < ∞. Show that σ{τ, Xτ} ⊂Fτ ⊂F+ τ ⊂σ{τ, Xτ+h} for every h > 0. (Hint: The first relation becomes an equality when τ takes only countably many values.) Note that the result may fail when P{τ = ∞} > 0.
7. Let M be an F-martingale on a countable index set, and fix an optional time τ.
Show that M −M τ remains a martingale, conditionally on Fτ. (Hint: Use Theorem 9.12 and Lemma 9.14.) Extend the result to continuous time.
8. Show that any sub-martingale remains a sub-martingale with respect to the induced filtration.
9.
Let X1, X2, . . . be sub-martingales such that the process X = supn Xn is integrable. Show that X is again a sub-martingale. Also show that lim supn Xn is a sub-martingale when even supn|Xn| is integrable.
10. Show that the Doob decomposition of an integrable random sequence X = (Xn) depends on the filtration, unless X is a.s. X0 -measurable. (Hint: Compare the filtrations induced by X and by the sequence Yn = (X0, Xn+1).) 11.
Fix a random time τ and a random variable ξ ∈L1, and define Mt = ξ 1{τ ≤t}. Show that M is a martingale with respect to the induced filtration F iff E(ξ; τ ≤t | τ > s) = 0 for any s < t. (Hint: The set {τ > s} is an atom of Fs.) 12. Let F, G be filtrations on the same probability space. Show that every F-martingale is a G-martingale iffFt ⊂Gt ⊥ ⊥Ft F∞for every t ≥0. (Hint: For the necessity, consider F-martingales of the form Ms = E(ξ |Fs) with ξ ∈L1(Ft).) 13. For any rcll super-martingale X ≥0 and constant r ≥0, show that r P{supt Xt ≥r} ≤EX0.
14. Let M be an L2-bounded martingale on Z+. Mimicing the proof of Lemma 5.16, show that Mn converges a.s. and in L2.
15. Give an example of a martingale that is L1-bounded but not uniformly inte-grable. (Hint: Every positive martingale is L1-bounded.) 16. For any optional times σ, τ with respect to a right-continuous filtration F, show that EFσ and EFτ commute on L1 with product EFσ∧τ , and that Fσ⊥ ⊥Fσ∧τ Fτ.
(Hint: The first condition holds by Theorem 9.30, and the second one since Fσ∧τ = 206 Foundations of Modern Probability Fσ on {σ ≤τ} and Fσ∧τ = Fτ on {τ ≤σ} by Lemma 9.1.. The two conditions are also seen to be equivalent by Corollary 8.14.) 17.
Let ξn →ξ in L1.
For any increasing σ-fields Fn, show that E(ξn|Fn) →E(ξ |F∞) in L1.
18. Let ξ, ξ1, ξ2, . . . ∈L1 with ξn ↑ξ a.s. For any increasing σ-fields Fn, show that E(ξn| Fn) →E(ξ | F∞) a.s. (Hint: Note that supm E(ξ−ξn|Fm) P →0 by Proposition 9.16, and use the monotonicity.) 19. Show that any right-continuous sub-martingale is a.s. rcll.
20. Given a super-martingale X ≥0 on Z+ and some optional times τ0 ≤τ1 ≤· · · , show that the sequence (Xτn) is again a super-martingale. (Hint: Truncate the times τn, and use the conditional Fatou lemma.) Show by an example that the result fails for sub-martingales.
21. For any random time τ ≥0 and right-continuous filtration F = (Ft), show that the process Xt = P(τ ≤t | Ft) has a right-continuous version.
(Hint: Use Theorem 9.28 (ii).) Chapter 10 Predictability and Compensation Predictable times and processes, strict past, accessible and totally inac-cessible times, decomposition of optional times, Doob–Meyer decomposi-tion, natural and predictable processes, dual predictable projection, pre-dictable martingales, compensation and decomposition of random mea-sures, ql-continuous filtrations, induced and discounted compensators, fundamental martingale, product moments, predictable maps The theory of martingales and optional times from the previous chapter leads naturally into a general theory of processes, of fundamental importance for various advanced topics of stochastic calculus, random measure theory, and many other subjects throughout the remainder of this book. Here a crucial role is played by the notions of predictable processes and associated predictable times, forming basic subclasses of adapted processes and optional times.
Among the many fundamental results proved in this chapter, we note in particular the celebrated Doob–Meyer decomposition, a continuous-time coun-terpart of the elementary Doob decomposition from Lemma 9.10, here proved directly1 by a discrete approximation based on Dunford’s characterization of weak compactness, combined with Doob’s ingenious approximation of inacces-sible times. The result may be regarded as a probabilistic counterpart of some basic decompositions in potential theory, as explained in Chapter 34.
Even more important for probabilists are the applications to increasing processes, leading to the notion of compensator of a random measure or point process on a product space R+×S, defined as a predictable random measure on the same space, providing the instantaneous rate of increase of the underlying process. In particular, we will see in Chapter 15 how the compensator can be used to reduce a fairly general point process on R+ to Poisson, in a similar way as a continuous martingale can be time-changed into a Brownian motion.
A suitable transformation leads to the equally important notion of discounted compensator, needed in Chapters 15 and 27 to establish some general mapping theorems.
The present material is related in many ways to material discussed else-where. Apart from the already mentioned connections, we note in particular the notions of tangential processes, general semi-martingales, and stochastic integration in Chapters 16, 20, and 35, the quasi-left continuity of Feller pro-cesses in Chapter 17, and the predictable mapping theorems in Chapters 15, 19, and 27.
1The standard proof, often omitted, is based on a subtle use of capacity theory.
207 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 208 Foundations of Modern Pobability We take all random objects in this chapter to be defined on a fixed prob-ability space (Ω, A, P) with a right-continuous and complete filtration F. A random time τ in [0, ∞] is said to be predictable, if it is announced by some optional times τn ↑τ with τn < τ a.s. on {τ > 0} for all n. Note that pre-dictable times are optional by Lemma 9.3 (i). With any optional time τ we may associate the σ-field Fτ−generated by F0 and the classes Ft ∩{τ > t} for all t > 0, representing the strict past of τ. The following properties of predictable times and associated σ-fields may be compared with similar results for optional times and σ-fields Fτ in Lemmas 9.1 and 9.3.
Lemma 10.1 (predictable times and strict past) For random times σ, τ, τ1, τ2, . . . in [0, ∞], we have (i) Fσ ∩{σ < τ} ⊂Fτ−⊂Fτ when σ, τ are optional, (ii) {σ < τ} ∈Fσ−∩Fτ−for optional σ and predictable τ, (iii) n Fτn = Fτ−when τ is predictable and announced by (τn), (iv) if τ1, τ2, . . . are predictable, so is τ = supn τn with n Fτn−= Fτ−, (v) if σ, τ are predictable, so is σ ∧τ with Fσ∧τ−⊂Fσ−∩Fτ−.
Proof: (i) For any A ∈Fσ, we have A ∩{σ < τ} = ∪ r∈Q+ A ∩{σ ≤r} ∩{r < τ} ∈Fτ−, since the intersections on the right are generators of Fτ−, and so Fσ ∩{σ < τ} ∈Fτ−. The second relation holds since all generators of Fτ−lie in Fτ.
(ii) If τ is announced by (τn), then (i) yields {τ ≤σ} = {τ = 0} ∪∩n {τn < σ} ∈Fσ−.
(iii) For any A ∈Fτn, we get by (i) A = A ∩{τn < τ} ∪ A ∩{τn = τ = 0} ∈Fτ−, and so n Fτn ⊂Fτ−. Conversely, (i) yields for any t ≥0 and A ∈Ft A ∩{τ > t} = ∪n A ∩{τn > t} ∈∨ n Fτn− ⊂∨ n Fτn, which shows that Fτ−⊂ n Fτn.
(iv) If each τn is announced by σn1, σn2, . . . , then τ is announced by the optional times σn = σ1n ∨· · · ∨σnn and is therefore predictable. By (i) we get for all n Fσn = F0 ∪∪ k≤n Fσn ∩{σn < τk} ⊂∨ k≤nFτk−.
10. Predictability and Compensation 209 Combining with (i) and (iii) gives Fτ−= ∨ n Fσn ⊂∨ n Fτn− = ∨ n,k Fσnk ⊂Fτ−, and so equality holds throughout, proving our claim.
(v) If σ and τ are announced by σ1, σ2, . . . and τ1, τ2, . . . , then σ ∧τ is an-nounced by the optional times σn∧τn, and is therefore predictable. Combining (iii) above with Lemma 9.1 (ii), we get Fσ∧τ−= ∨ n Fσn∧τn = ∨ n (Fσn ∩Fτn) ⊂∨ n Fσn ∩∨ n Fτn = Fσ−∩Fτ−.
2 On the product space Ω × R+ we may introduce the predictable σ-field P, generated by all continuous, adapted processes on R+. The elements of P are called predictable sets, and the P-measurable functions on Ω × R+ are called predictable processes. Note that every predictable process is progressive. We provide some useful characterizations of the predictable σ-field.
Lemma 10.2 (predictable σ-field) The predictable σ-field is generated by each of these classes of sets or processes: (i) F0 × R+ and all sets A × (t, ∞) with A ∈Ft, t ≥0, (ii) F0 × R+ and all intervals (τ, ∞) with optional τ, (iii) all left-continuous, adapted processes.
Proof: Let P1, P2, P3 be the σ-fields generated by the classes in (i)−(iii).
Since continuous functions are left-continuous, we have trivially P ⊂P3. To obtain P3 ⊂P1, we note that any left-continuous process X can be approxi-mated by the processes Xn t = X010,1 + k≥1 Xk/n1(k,k+1](nt), t ≥0.
Next, P1 ⊂P2 holds since the time tA = t · 1A + ∞· 1Ac is optional for any t ≥0 and A ∈Ft. Finally, we get P2 ⊂P since, for any optional time τ, the process 1(τ,∞) can be approximated by the continuous, adapted processes Xn t = {n(t −τ)+} ∧1, t ≥0.
2 Next we examine the relationship between predictable processes and the σ-fields Fτ−. Similar results for progressive processes and the σ-fields Fτ were obtained in Lemma 9.5.
Lemma 10.3 (predictable times and processes) (i) For any optional time τ and predictable process X, the random variable Xτ1{τ < ∞} is Fτ−-measurable.
210 Foundations of Modern Pobability (ii) For any predictable time τ and Fτ−-measurable random variable α, the process Xt = α 1{τ ≤t} is predictable.
Proof: (i) If X = 1A×(t,∞) for some t > 0 and A ∈Ft, then clearly Xτ1{τ < ∞} = 1 = A ∩{t < τ < ∞} ∈Fτ−.
This extends by a monotone-class argument and a subsequent approximation, first to predictable indicator functions, and then to the general case.
(ii) We may clearly take α to be integrable. For any announcing sequence (τn) of τ, we define Xn t = E(α | Fτn) 1{τn ∈(0, t)} + 1{τn = 0} , t ≥0.
The Xn are left-continuous and adapted, hence predictable. Moreover, Xn → X on R+ a.s. by Theorem 9.24 and Lemma 10.1 (iii).
2 An optional time τ is said to be totally inaccessible, if P{σ = τ < ∞} = 0 for every predictable time σ. An accessible time may then be defined as an optional time τ, such that P{σ = τ < ∞} = 0 for every totally inaccessible time σ. For any random time τ, we introduce the associated graph [τ] = (t, ω) ∈R+ × Ω; τ(ω) = t , which allows us to express the previous conditions on σ and τ as [σ] ∩[τ] = ∅ a.s. For any optional time τ and set A ∈Fτ, the time τA = τ1A + ∞· 1Ac is again optional and called the restriction of τ to A. We consider a basic decomposition of optional times. Related decompositions of random measures and martingales are given in Theorems 10.17 and 20.16.
Proposition 10.4 (decomposition of optional time) For any optional time τ, (i) there exists an a.s. unique set A ∈Fτ ∩{τ < ∞}, such that τA is accessible and τAc is totally inaccessible, (ii) there exist some predictable times τ1, τ2, . . . with [τA] ⊂ n[τn] a.s.
Proof: Define p = sup (τn) P ∪n{τ = τn < ∞}, (1) where the supremum extends over all sequences of predictable times τn. Com-bining sequences where the probability in (1) approaches p, we may construct a sequence (τn) for which the supremum is attained.
For such a maximal sequence, we define A as the union in (1).
To see that τA is accessible, let σ be totally inaccessible. Then [σ]∩[τn] = ∅ a.s. for every n, and so [σ] ∩[τA] = ∅a.s. If τAc is not totally inaccessible, then P{τAc = τ0 < ∞} > 0 for some predictable time τ0, which contradicts the maximality of τ1, τ2, . . . . This shows that A has the desired property.
10. Predictability and Compensation 211 To prove that A is a.s. unique, let B be another set with the stated prop-erties. Then τA\B and τB\A are both accessible and totally inaccessible, and so τA\B = τB\A = ∞a.s., which implies A = B a.s.
2 We turn to the celebrated Doob–Meyer decomposition, a cornerstone of modern probability. By an increasing process we mean a non-decreasing, right-continuous, and adapted process A with A0 = 0. It is said to be integrable if EA∞< ∞. Recall that all sub-martingales are taken to be right-continuous.
Local sub-martingales and locally integrable processes are defined by localiza-tion, in the usual way.
Theorem 10.5 (Doob–Meyer decomposition, Meyer, Dol´ eans) For an adapted process X, these conditions are equivalent: (i) X is a local sub-martingale, (ii) X = M + A a.s. for a local martingale M and a locally integrable, non-decreasing, predictable process A with A0 = 0.
The processes M and A are then a.s. unique.
Note that the decomposition depends in a crucial way on the underlying filtration F. We often refer to A as the compensator of X, and write A = ˆ X.
Several proofs of this result are known, most of which seem to rely on the deep section theorems. Here we give a more elementary proof, based on weak compactness in L1 and an approximation of totally inaccessible times. For clarity, we divide the proof into several lemmas.
Let (D) be the class of measurable processes X, such that the family {Xτ} is uniformly integrable, where τ ranges over the set of finite optional times.
We show that it is enough to consider sub-martingales of class (D).
Lemma 10.6 (uniform integrability) A local sub-martingale X with X0 = 0 is locally of class (D).
Proof: First reduce to the case where X is a true sub-martingale. Then introduce for each n the optional time τ = n ∧inf{t > 0; |Xt| > n}. Here |Xτ| ≤n ∨|Xτ|, which is integrable by Theorem 9.30, and so Xτ is of class (D).
2 An increasing process A is said to be natural, if it is integrable and such that E ∞ 0 (ΔMt) dAt = 0 for any bounded martingale M. We first establish a preliminary decomposition, where the compensator A is claimed to be natural rather than predictable.
Lemma 10.7 (preliminary decomposition, Meyer) Any sub-martingale X of class (D) admits a decomposition M + A, where (i) M is a uniformly integrable martingale, (ii) A is a natural, increasing process.
212 Foundations of Modern Pobability Proof (Rao): We may take X0 = 0. Introduce the n-dyadic times tn k = k2−n, k ∈Z+, and define for any process Y the associated differences Δn kY = Ytn k+1 − Ytn k. Let An t = Σ k<2nt E Δn kX Ftn k , t ≥0, n ∈N, and note that M n = X −An is a martingale on the n-dyadic set.
Writing τ n r = inf{t; An t > r} for n ∈N and r > 0, we get by optional sampling, for any n-dyadic time t, 1 2 E An t ; An t > 2r ≤E(An t −An t ∧r) ≤E An t −An τ n r ∧t = E Xt −Xτ n r ∧t = E Xt −Xτ n r ∧t; An t > r .
(2) By the martingale property and uniform integrability, we further obtain rP{An t > r} ≤EAn t = EXt < ⌢1, and so the probability on the left tends to zero as r →∞, uniformly in t and n. Since the variables Xt −Xτ n r ∧t are uniformly integrable by (D), the same property holds for An t by (2) and Lemma 5.10. In particular, the sequence (An ∞) is uniformly integrable, and each M n is a uniformly integrable martingale.
By Lemma 5.13 there exists a random variable α ∈L1(F∞), such that An ∞→α weakly in L1 along a sub-sequence N ′ ⊂N. Define Mt = E X∞−α Ft , A = X −M, and note that A∞= α a.s. by Theorem 9.24. For dyadic t and bounded random variables ξ, the martingale and self-adjointness properties yield E (An t −At) ξ = E (Mt −M n t ) ξ = E E M∞−M n ∞ Ft ξ = E(M∞−M n ∞) E(ξ | Ft) = E(An ∞−α) E(ξ | Ft) →0, as n →∞along N ′. Thus, An t →At weakly in L1 for dyadic t. In particular, we get for any dyadic s < t 0 ≤E An t −An s; At −As < 0 →E (At −As) ∧0 ≤0.
Thus, the last expectation vanishes, and so At ≥As a.s. By right-continuity it follows that A is a.s. non-decreasing. Also note that A0 = 0 a.s., since An 0 = 0 for all n.
To see that A is natural, consider any bounded martingale N, and conclude from Fubini’s theorem and the martingale properties of N and An −A = M −M n that 10. Predictability and Compensation 213 E (N∞An ∞) = Σ k E (N∞Δn kAn) = Σ k E (Ntn k Δn kAn) = Σ k E (Ntn k Δn kA) = E Σ k (Ntn k Δn kA).
Using weak convergence on the left and dominated convergence on the right, and combining with Fubini’s theorem and the martingale property of N, we get E ∞ 0 Nt−dAt = E (N∞A∞) = Σ k E (N∞Δn kA) = Σ k E (Ntn k+1Δn kA) = E Σ k (Ntn k+1Δn kA) →E ∞ 0 Nt dAt.
Hence, E ∞ 0 (ΔNt) dAt = 0, as required.
2 To complete the proof of Theorem 10.5, we need to show that the process A in the last lemma is predictable. This will be inferred from the following ingenious approximation of totally inaccessible times.
Lemma 10.8 (approximation of inaccessible times, Doob) For a totally in-accessible time τ, put τn = 2−n[2nτ], and let Xn be a right-continuous version of the process P{τn ≤t | Ft}. Then lim n→∞sup t≥0 Xn t −1{τ ≤t} = 0 a.s.
(3) Proof: Since τn ↑τ, we may assume that X1 t ≥X2 t ≥· · · ≥1{τ ≤t} for all t ≥0.
Then Xn t = 1 for t ∈[τ, ∞), and on the set {τ = ∞} we have X1 t ≤P(τ < ∞| Ft) →0 a.s. as t →∞by Theorem 9.24.
Thus, supn |Xn t −1{τ ≤t}| →0 a.s. as t →∞. To prove (3), it is then enough to show that for every ε > 0, the optional times σn = inf t ≥0; Xn t −1{τ ≤t} > ε , n ∈N, tend a.s. to infinity. The σn are clearly non-decreasing, say with limit σ. Note that either σn ≤τ or σn = ∞for all n.
By optional sampling, Theorem 8.5, and Lemma 9.1, we have Xn σ 1{σ < ∞} = P τn ≤σ < ∞ Fσ →P τ ≤σ < ∞ Fσ = 1{τ ≤σ < ∞}.
Hence, Xn σ →1{τ ≤σ} a.s. on {σ < ∞}, and so by right continuity we have on the latter set σn < σ for large enough n. Thus, σ is predictable and announced by the times σn ∧n.
214 Foundations of Modern Pobability Applying the optional sampling and disintegration theorems to the optional times σn, we obtain ε P{σ < ∞} ≤ε P{σn < ∞} ≤E Xn σn; σn < ∞ = P τn ≤σn < ∞ = P τn ≤σn ≤τ < ∞ →P{τ = σ < ∞} = 0, where the last equality holds since τ is totally inaccessible. Thus, σ = ∞a.s. 2 It follows easily that A has only accessible jumps: Lemma 10.9 (accessibility of jumps) For a natural, increasing process A and a totally inaccessible time τ, we have ΔAτ = 0 a.s. on {τ < ∞}.
Proof: Rescaling if necessary, we may assume that A is a.s. continuous at dyadic times. Define τn = 2−n[2nτ]. Since A is natural, we have E ∞ 0 P τn > t Ft dAt = E ∞ 0 P τn > t Ft− dAt, and since τ is totally inaccessible, Lemma 10.8 yields E Aτ−= E ∞ 0 1{τ > t} dAt = E ∞ 0 1{τ ≥t} dAt = EAτ.
Hence, E(ΔAτ; τ < ∞) = 0, and so ΔAτ = 0 a.s. on {τ < ∞}.
2 We may now show that A is predictable.
Lemma 10.10 (Dol´ eans) Any natural, increasing process is predictable.
Proof: Fix a natural increasing process A. Consider a bounded martingale M and a predictable time τ < ∞announced by σ1, σ2, . . . . Then M τ −M σk is again a bounded martingale, and since A is natural, we get by dominated convergence E(ΔMτ)(ΔAτ) = 0. In particular, we may take Mt = P(B | Ft) with B ∈Fτ. By optional sampling, we have Mτ = 1B and Mτ−←Mσk = P(B | Fσk) →P(B | Fτ−).
10. Predictability and Compensation 215 Thus, ΔMτ = 1B −P(B | Fτ−), and so E(ΔAτ; B) = E ΔAτ P(B | Fτ−) = E E ΔAτ Fτ− ; B .
Since B ∈Fτ was arbitrary, we get ΔAτ = E(ΔAτ| Fτ−) a.s., and so the process A′ t = (ΔAτ)1{τ ≤t} is predictable by Lemma 10.3 (ii). It is also natural, since for any bounded martingale M E (ΔAτ ΔMτ) = E ΔAτ E(ΔMτ| Fτ−) = 0.
An elementary construction yields {t > 0; ΔAt > 0} ⊂ n[τn] a.s. for some optional times τn < ∞, where by Proposition 10.4 and Lemma 10.9 we may choose the latter to be predictable. Taking τ = τ1 in the previous argument, we conclude that the process A1 t = (ΔAτ1)1{τ1 ≤t} is both natural and pre-dictable. Repeating the argument for the process A −A1 with τ = τ2 and proceeding by induction, we see that the jump component Ad of A is pre-dictable. Since A −Ad is continuous and hence predictable, the predictability of A follows.
2 To prove the uniqueness, we need to show that every predictable martin-gale of integrable variation is a constant. For continuous martingales, this will be seen by elementary arguments in Proposition 18.2 below. The two ver-sions are in fact equivalent, since every predictable martingale is continuous by Proposition 10.16 below.
Lemma 10.11 (predictable martingale) Let M be a martingale of integrable variation. Then M is predictable ⇔ M is a.s. constant.
Proof: On the predictable σ-field P, we introduce the signed measure μB = E ∞ 0 1B(t) dMt, B ∈P, where the inner integral is an ordinary Lebesgue–Stieltjes integral. The mar-tingale property shows that μ vanishes for the sets B = F ×(t, ∞) with F ∈Ft.
By Lemma 10.2 and a monotone-class argument, it follows that μ = 0 on P.
Since M is predictable, the same thing is true for the process ΔMt = Mt−Mt−, and then also for the sets J± = {t > 0; ±ΔMt > 0}. Thus, μJ± = 0, and so ΔM = 0 a.s., which means that M is a.s. continuous. But then Mt ≡M0 a.s.
by Proposition 18.2 below.
2 Proof of Theorem 10.5: The sufficiency is obvious, and the uniqueness holds by Lemma 10.11. It remains to show that any local sub-martingale X has the stated decomposition. By Lemmas 10.6 and 10.11, we may take X to be of 216 Foundations of Modern Pobability class (D). Then Lemma 10.7 yields X = M + A for a uniformly integrable martingale M and a natural increasing process A, and the latter is predictable by Lemma 10.10.
2 The two properties in Lemma 10.10 are in fact equivalent.
Theorem 10.12 (natural and predictable processes, Dol´ eans) Let A be an integrable, increasing process. Then A is natural ⇔ A is predictable.
Proof: If an integrable, increasing process A is natural, it is also pre-dictable by Lemma 10.10. Now let A be predictable. By Lemma 10.7, we have A = M + B for a uniformly integrable martingale M and a natural in-creasing process B, and Lemma 10.10 shows that B is predictable. But then A = B a.s. by Lemma 10.11, and so A is natural.
2 The following criterion is essentially implicit in earlier proofs.
Lemma 10.13 (dual predictable projection) Let X, Y be locally integrable, increasing processes, where Y is predictable. Then X has compensator Y iff E V dX = E V dY, V ≥0 predictable.
Proof: By localization, we may take X, Y to be integrable. Then Y is the compensator of X iffM = Y −X is a martingale, which holds iffEMτ = 0 for every optional time τ. This is equivalent to the stated relation for V = 1[0,τ], and the general result follows by a straightforward monotone-class argument. 2 We may now establish the fundamental relationship between predictable times and processes.
Theorem 10.14 (predictable times and processes, Meyer) For any optional time τ, these conditions are equivalent: (i) τ is predictable, (ii) the process 1{τ ≤t} is predictable, (iii) E ΔMτ = 0 for any bounded martingale M.
Proof (Chung & Walsh): Since (i) ⇒(ii) by Lemma 10.3 (ii) and (ii) ⇔(iii) by Theorem 10.12, it remains to show that (iii) ⇒(i).
Then introduce the martingale Mt = E(e−τ| Ft) and super-martingale Xt = e−τ∧t −Mt = E e−τ∧t −e−τ Ft ≥0, t ≥0, and note that Xτ = 0 a.s. by optional sampling. Letting σ = inf{t ≥0; Xt−∧ Xt = 0}, we see from Lemma 9.32 that {t ≥0; Xt = 0} = σ, ∞) a.s., and in 10. Predictability and Compensation 217 particular σ ≤τ a.s. Using optional sampling again, we get E(e−σ −e−τ) = EXσ = 0, and so σ = τ a.s. Hence, Xt ∧Xt−> 0 a.s. on [0, τ). Finally, (iii) yields EXτ−= E(e−τ −Mτ−) = E(e−τ −Mτ) = EXτ = 0, and so Xτ−= 0. It is now clear that τ is announced by the optional times τn = inf{t; Xt < n−1}.
2 To illustrate the power of the last result, we give a short proof of the following useful statement, which can also be proved directly.
Corollary 10.15 (predictable restriction) For any predictable time τ and set A ∈Fτ−, the restriction τA is again predictable.
Proof: The process 1A1{τ ≤t} = 1{τA ≤t} is predictable by Lemma 10.3, and so the time τA is predictable by Theorem 10.14.
2 Using the last theorem, we may show that predictable martingales are con-tinuous.
Proposition 10.16 (predictable martingale) For a local martingale M, M is predictable ⇔ M is a.s. continuous Proof: The sufficiency is clear by definitions. To prove the necessity, we note that for any optional time τ, M τ t = Mt 1[0,τ + Mτ 1(τ,∞)(t), t ≥0.
Thus, the predictability is preserved by optional stopping, and so we may take M to be is a uniformly integrable martingale. Now fix any ε > 0, and introduce the optional time τ = inf{t > 0; |ΔMt| > ε}. Since the left-continuous version Mt−is predictable, so is the process ΔMt, as well as the random set A = {t > 0; |ΔMt| > ε}. Hence, the interval [τ, ∞) = A ∪(τ, ∞) has the same property, and so τ is predictable by Theorem 10.14. Choosing an announcing sequence (τn), we conclude that, by optional sampling, martingale convergence, and Lemmas 10.1 (iii) and 10.3 (i), Mτ−←Mτn = E(Mτ|Fτn) →E(Mτ|Fτ−) = Mτ.
Thus, τ = ∞a.s., and ε being arbitrary, it follows that M is a.s. continuous. 2 We turn to a more refined decomposition of increasing processes, extending the decomposition of optional times in Proposition 10.4. For convenience here 218 Foundations of Modern Pobability and below, any right-continuous, non-decreasing process X on R+ starting at 0 may be identified with a random measure ξ on (0, ∞), via the relationship in Theorem 2.14, so that ξ[0, t] = Xt for all t ≥0. We say that ξ is adapted or predictable2 if the corresponding property holds for X. The compensator of ξ is defined as the predictable random measure ˆ ξ associated with the compensator ˆ X of X.
An rcll process X or filtration F is said to be quasi-left continuous3 (ql-continuous for short) if ΔXτ = 0 a.s. on {τ < ∞} or Fτ = Fτ−, respectively, for every predictable time τ. We further say that X has accessible jumps, if ΔXτ = 0 a.s. on {τ < ∞} for every totally inaccessible time τ. Accordingly, a random measure ξ on (0, ∞) is said to be ql-continuous if ξ{τ} = 0 a.s. for every predictable time τ, and accessible if it is purely atomic with ξ{τ} = 0 a.s.
for every totally inaccessible time τ, where μ{∞} = 0 for measures μ on R+.
Theorem 10.17 (decomposition of random measure) Let ξ be an adapted ran-dom measure on (0, ∞) with compensator ˆ ξ. Then (i) ξ has an a.s. unique decomposition ξ = ξc + ξq + ξa, where ξc is diffuse, ξq + ξa is purely atomic, ξq is ql-continuous, and ξa is accessible, (ii) ξa is a.s. supported by n[τn], for some predictable times τ1, τ2, . . . with disjoint graphs, (iii) when ξ is locally integrable, ξc + ξq has compensator (ˆ ξ)c, and ξ is ql-continuous iffˆ ξ is a.s. continuous.
Proof: Subtracting the predictable component ξc, we may take ξ to be purely atomic. Put η = ˆ ξ and At = η(0, t]. Consider the locally integrable process Xt = s≤t(ΔAs ∧1) with compensator ˆ X, and define Aq t = t+ 0 1{Δ ˆ Xs = 0} dAs, Aa t = At −Aq t, t ≥0.
For any predictable time τ < ∞, the graph [τ] is again predictable by Theorem 10.14, and so by Theorem 10.13, E(ΔAq τ ∧1) = E ΔXτ; Δ ˆ Xτ = 0 = E Δ ˆ Xτ; Δ ˆ Xτ = 0 = 0, which shows that Aq is ql-continuous.
Now let τn0 = 0 for n ∈N, and define recursively the random times τnk = inf t > τn,k−1; Δ ˆ Xt ∈2−n(1, 2] , n, k ∈N.
2Then ξB is clearly measurable for every B ∈B, so that ξ is indeed a random measure in the sense of Chapters 15, 23, and 29–31 below.
3This horrible term is well established.
10. Predictability and Compensation 219 They are predictable by Theorem 10.14, and {t > 0; ΔAa t > 0} = n,k[τnk] a.s. by the definition of Aa. Thus, for any totally inaccessible time τ, we have ΔAa τ = 0 a.s. on {τ < ∞}, which shows that Aa has accessible jumps.
If A = Bq + Ba is another decomposition with the stated properties, then Y = Aq −Bq = Ba −Aa is ql-continuous with accessible jumps, and so Lemma 10.4 gives ΔYτ = 0 a.s. on {τ < ∞} for any optional time τ, which means that Y is a.s. continuous. Since it is also purely discontinuous, we get Y = 0 a.s., which proves the asserted uniqueness.
When A is locally integrable, we may write instead Aq = 1{Δ ˆ A = 0} · A, and note that ( ˆ A)c = 1{Δ ˆ A = 0} · ˆ A. For any predictable process V ≥0, we get by Theorem 10.13 E V dAq = E 1{Δ ˆ A = 0} V dA = E 1{Δ ˆ A = 0} V d ˆ A = E V d( ˆ A)c, which shows that Aq has compensator ( ˆ A)c.
2 By the compensator of an optional time τ we mean the compensator of the associated jump process Xt = 1{τ ≤t}. We may characterize the different categories of optional times in terms of their compensators.
Corollary 10.18 (compensation of optional time) Let τ be an optional time with compensator A. Then (i) τ is predictable iffA is a.s. constant apart from a possible unit jump, (ii) τ is accessible iffA is a.s. purely discontinuous, (iii) τ is totally inaccessible iffA is a.s. continuous, (iv) τ has the accessible part τD, where D = {ΔAτ > 0, τ < ∞}.
Proof: (i) If τ is predictable, so is the process Xt = 1{τ ≤t} by Theorem 10.14, and hence A = X a.s. Conversely, if At = 1{σ ≤t} for an optional time σ, the latter is predictable by Theorem 10.14, and Lemma 10.13 yields P{σ = τ < ∞} = E ΔXσ; σ < ∞ = E ΔAσ; σ < ∞ = P{σ < ∞} = EA∞= EX∞ = P{τ < ∞}.
Thus, τ = σ a.s., and so τ is predictable.
(ii) Clearly τ is accessible iffX has accessible jumps, which holds by The-orem 10.17 iffA = Ad a.s.
(iii) The time τ is totally inaccessible iffX is ql-continuous, which holds by Theorem 10.17 iffA = Ac a.s.
220 Foundations of Modern Pobability (iv) Use (ii) and (iii).
2 Next we characterize quasi-left continuity for filtrations and martingales.
Proposition 10.19 (ql-continuous filtration, Meyer) For a filtration F, these conditions are equivalent: (i) every accessible time is predictable, (ii) Fτ−= Fτ on {τ < ∞} for all predictable times τ, (iii) ΔMτ = 0 a.s. on {τ < ∞} for any martingale M and predictable time τ.
If the basic σ-field in Ω is F∞, then Fτ−= Fτ on {τ = ∞} for any optional time τ, and the relation in (ii) extends to all of Ω.
Proof, (i) ⇒(ii): Let τ be a predictable time, and fix any B ∈Fτ∩{τ < ∞}.
Then [τB] ⊂[τ], and so τB is accessible, hence by (i) even predictable. The process Xt = 1{τB ≤t} is then predictable by Theorem 10.14, and since Xτ1{τ < ∞} = 1 τB ≤τ < ∞ = 1B, Lemma 10.3 (i) yields B ∈Fτ−.
(ii) ⇒(iii): Fix a martingale M, and let τ be a bounded, predictable time announced by (τn). Using (ii) and Lemma 10.1 (iii), we get as before Mτ−←Mτn = E(Mτ|Fτn) →E(Mτ|Fτ−) = E(Mτ|Fτ) = Mτ, and so Mτ−= Mτ a.s.
(iii) ⇒(i): If τ is accessible, Proposition 10.4 yields some predictable times τn with [τ] ⊂ n[τn] a.s. By (iii) we have ΔMτn = 0 a.s. on {τn < ∞} for every martingale M and all n, and so ΔMτ = 0 a.s. on {τ < ∞}. Hence, τ is predictable by Theorem 10.14.
2 The following inequality will be needed in Chapter 20.
Proposition 10.20 (norm relation, Garsia, Neveu) Let A be a right- or left-continuous, predictable, increasing process, and let ζ ≥0 be a random variable such that a.s.
E A∞−At Ft ≤E(ζ | Ft), t ≥0.
(4) Then ∥A∞∥p ≤p ∥ζ∥p, p ≥1.
In the left-continuous case, predictability is clearly equivalent to adapted-ness. The proper interpretation of (4) is to take E(At| Ft) ≡At, and choose right-continuous versions of the martingales E(A∞| Ft) and E(ζ| Ft). For a right-continuous A, we may clearly take ζ = Z∗, where Z is the super-mar-tingale on the left of (4). We also note that if A is the compensator of an 10. Predictability and Compensation 221 increasing process X, then (4) holds with ζ = X∞.
Proof: We may take A to be right-continuous, the left-continuous case being similar but simpler. We can also take A to be bounded, since we may otherwise replace A by the process A ∧u for arbitrary u > 0, and let u →∞ in the resulting formula. For each r > 0, the random time τr = inf{t; At ≥r} is predictable by Theorem 10.14. By optional sampling and Lemma 10.1, we note that (4) remains valid with t replaced by τr−. Since τr is Fτr−-measurable by the same lemma, we obtain E A∞−r; A∞> r ≤E A∞−r; τr < ∞ ≤E A∞−Aτr−; τr < ∞ ≤E(ζ; τr < ∞) ≤E(ζ; A∞≥r).
Writing A∞= α and letting p−1+q−1 = 1, we get by Fubini’s theorem, H¨ older’s inequality, and some calculus ∥α∥p p = p2 q−1E α 0 (α −r) rp−2 dr = p2 q−1 ∞ 0 E α −r; α > r rp−2 dr ≤p2 q−1 ∞ 0 E ζ; α ≥r rp−2 dr = p2 q−1E ζ α 0 rp−2 dr = p E ζαp−1 ≤p ∥ζ∥p ∥α∥p−1 p .
If ∥α∥p > 0, we may finally divide both sides by ∥α∥p−1 p .
2 We can’t avoid the random measure point of view when turning to processes on a product space R+ × S, where S is a localized Borel space. Here a random measure is simply a locally finite kernel ξ : Ω →(0, ∞) × S, and we say that ξ is adapted, predictable, or locally integrable if the corresponding properties hold for the real-valued processes ξtB = ξ{(0, t] × B} with B ∈ˆ S fixed. The compensator of ξ is defined as a predictable random measure ˆ ξ on (0, ∞) × S, such that ˆ ξtB is a compensator of ξtB for every B ∈ˆ S.
For an alternative approach, suggested by Lemma 10.13, we say that a process V on R+×S is predictable if it is P×S -measurable, where P denotes the predictable σ-field on R+. The compensator ˆ ξ may then be characterized as the a.s. unique, predictable random measure on (0, ∞)×S, such that E ξV = E ˆ ξV for every predictable process V ≥0 on R+ × S.
Theorem 10.21 (compensation of random measure, Grigelionis, Jacod) Let ξ be a locally integrable, adapted random measure on (0, ∞) × S, where S 222 Foundations of Modern Pobability is Borel. Then there exists an a.s. unique predictable random measure ˆ ξ on (0, ∞) × S, such that for any predictable process V ≥0 on R+ × S, E V dξ = E V dˆ ξ.
Again we emphasize that the compensator ˆ ξ depends in a crucial way on the choice of underlying filtration F. Our proof relies on a technical lemma, which can be established by straightforward monotone-class arguments.
Lemma 10.22 (predictable random measures) (i) For any predictable random measure ξ and predictable process V ≥0 on (0, ∞) × S, the process V · ξ is again predictable.
(ii) For any predictable process V ≥0 on (0, ∞)×S and predictable, measure-valued process ρ on S, the process Yt = Vt,s ρt(ds) is again predictable.
Proof of Theorem 10.21: Since ξ is locally integrable, we may choose a predictable process V > 0 on R+ × S such that E V dξ < ∞. If the random measure ζ = V · ξ has compensator ˆ ζ, then Lemma 10.22 shows that ξ has compensator ˆ ξ = V −1 · ˆ ζ. Thus, we may henceforth assume that Eξ{(0, ∞) × S} = 1.
Put η = ξ(· × S). Using the kernel composition of Chapter 3, we may introduce the probability measure μ = P ⊗ξ on Ω × R+ × S with projection ν = P ⊗η onto Ω × R+. Applying Theorem 3.4 to the restrictions of μ and ν to the σ-fields P ⊗S and P, respectively, we obtain a probability kernel ρ: (Ω × R+, P) →(S, S) satisfying μ = ν ⊗ρ, or P ⊗ξ = P ⊗η ⊗ρ on (Ω × R+ × S, P × S).
Letting ˆ η be the compensator of η, we may introduce the random measure ˆ ξ = ˆ η ⊗ρ on R+ × S.
To see that ˆ ξ is the compensator of ξ, we note that ˆ ξ is predictable by Lemma 10.22 (i). For any predictable process V ≥0 on R+ × S, the process Ys = Vs,t ρt(ds) is again predictable by Lemma 10.22 (ii). By Theorem 8.5 and Lemma 10.13, we get E V dˆ ξ = E ˆ η(dt) Vs,t ρt(ds) = E η(dt) Vs,t ρt(ds) = E V dξ.
Finally note that ˆ ξ is a.s. unique by Lemma 10.13.
2 The theory simplifies when the filtration F is induced by ξ, in the sense of being the smallest right-continuous, complete filtration making ξ adapted, in which case we call ˆ ξ the induced compensator. Here we consider only the special case of a single point mass ξ = δ τ,χ, where τ is a random time with associated mark χ in S.
10. Predictability and Compensation 223 Proposition 10.23 (induced compensator) Let (τ, ζ) be a random element in (0, ∞] × S with distribution μ, where S is Borel. Then ξ = δ τ,ζ has the induced compensator ˆ ξtB = (0, t∧τ] μ(dr × B) μ([r, ∞] × S), t ≥0, B ∈S.
(5) Proof: The process ηtB on the right of (5) is clearly predictable for every B ∈S. It remains to show that Mt = ξtB −ηtB is a martingale, hence that E(Mt −Ms; A) = 0 for any s < t and A ∈Fs. Since Mt = Ms on {τ ≤s}, and the set {τ > s} is a.s. an atom of Fs, it suffices to show that E(Mt −Ms) = 0, or EMt ≡0. Then use Fubini’s theorem to get E ηtB = E (0,t∧τ] μ(dr × B) μ([r, ∞] × S) = (0,∞] μ(dx) (0,t∧x] μ(dr × B) μ([r, ∞] × S) = (0,t] μ(dr × B) μ([r, ∞] × S) [r,∞] μ(dx) = μ{(0, t] × B} = E ξtB.
2 Now return to the case of a general filtration F: Theorem 10.24 (discounted compensator) For any adapted pair (τ, χ) in (0, ∞] × S with compensator η, there exists a unique, predictable random mea-sure ζ on (0, τ] × S with ∥ζ∥≤1, satisfying (5) with μ replaced by ζ. Writing Zt = 1 −¯ ζt, we have a.s.
(i) ζ = Z−· η, and Z is the unique solution of Z = 1 −Z−· ¯ η, (ii) Zt = exp(−¯ ηc t)Π s≤t (1 −Δ¯ ηs), t ≥0, (iii) Δ¯ η < 1 on [0, τ), ≤1 on [τ], (iv) Z is non-increasing and ≥0 with Zτ−> 0, (v) Y = Z−1 satisfies d Yt = Yt d ¯ ηt on {Zt > 0}.
The process in (ii) is a special case of Dol´ eans’ exponential in Chapter 20.
Though the subsequent discussion involves some stochastic differential equa-tions, the present versions are elementary, and there is no need for the general theory in Chapters 18–20 and 32–33.
Proof: (iii) The time σ = inf{t ≥0; Δ¯ ηt ≥1} is optional, by an elementary approximation. Hence, the random interval (σ, ∞) is predictable, and so the same thing is true for the graph [σ] = {Δ¯ η ≥1} \ (σ, ∞). Thus, P{τ = σ} = E ¯ ξ[σ] = E ¯ η[σ].
Since also 224 Foundations of Modern Pobability 1{τ = σ} ≤1{σ < ∞} ≤¯ η[σ], by the definition of σ, we obtain 1{τ = σ} = ¯ η[σ] = Δ¯ ησ a.s. on {σ < ∞}, which implies Δ¯ ηt ≤1 and τ = σ a.s. on the same set.
(i)−(ii): If (5) holds with μ = ζ for some random measure ζ on (0, τ] × S, then the process Zt = 1 −¯ ζt satisfies d ¯ ηt = Z−1 t−d ¯ ζt = −Z−1 t−d Zt, and so d Zt = −Zt d ¯ ηt, which implies Zt −1 = −(Z · ¯ η)t, or Z = 1 −Z−· ¯ η.
Conversely, any solution Z to the latter equation yields a measure solving (5) with B = S, and so (5) has the general solution ζ = Z−· η. Furthermore, the equation Z = 1 −Z−· ¯ η has the unique solution (ii), by an elementary special case of Theorem 20.8 below. In particular, Z is predictable, and the predictability of ζ follows by Lemma 10.22.
(iv) Since 1 −Δ¯ η ≥0 a.s. by (iii), (ii) shows that Z is a.s. non-increasing with Z ≥0. Since also t Δ¯ ηt ≤¯ ητ < ∞a.s., and supt<τ Δ¯ ηt < 1 a.s. by (iii), we have Zτ−> 0.
(v) By an elementary integration by parts, we get on the set {t ≥0; Zt > 0} 0 = d(Zt Yt) = Zt−d Yt + Yt d Zt.
Using the chain rule for Stieltjes integrals along with equation Z = 1 −Z−· ¯ η, we obtain d Yt = −Z−1 t−Yt d Zt = Z−1 t−Yt Zt−d ¯ ηt = Yt d ¯ ηt.
2 The power of the discounted compensator is mainly due to the following result and its sequels, which will play key roles in Chapters 15 and 27. Letting τ, χ, η, ζ, Z be such as in Theorem 10.24, we introduce a predictable process V on R+ × S with ζ|V | < ∞a.s., and form the processes4 Ut,x = Vt,x + Z−1 t t+ 0 V d ζ, t ≥0, x ∈S, (6) Mt = Uτ,χ1{τ ≤t} − t+ 0 Ud η, t ≥0, (7) whenever these expressions exist.
Proposition 10.25 (fundamental martingale) For τ, χ, η, ζ, Z as in Theo-rem 10.24, let V be a predictable process on R+ × S with ζ|V | < ∞a.s. and ζV = 0 a.s. on {Zτ = 0}. Then for U, M as above, we have 4Recall our convention 0/0=0.
10. Predictability and Compensation 225 (i) M exists on [0, ∞] and satisfies M∞= Vτ,χ a.s., (ii) E |Uτ,χ| < ∞implies5 E Vτ,χ = 0, and M is a uniformly integrable mar-tingale with ∥M ∗∥p < ⌢∥Vτ,χ∥p for all p > 1.
Proof: (i) Write Y = Z−1. Using the conditions on V , the definition of ζ, and Theorem 10.24 (iv), we obtain η|V | = ζ(Y−|V |) ≤Yτ−ζ|V | < ∞.
Next, we see from (6) and Theorem 10.24 (v) that η|U −V | ≤ζ|V | ¯ ηY = (Yτ −1) ζ|V | < ∞, whenever Zτ > 0. If instead Zτ = 0, we have η|U −V | ≤ζ|V | τ− 0 Y d¯ η = (Yτ−−1) ζ|V | < ∞.
In either case U is η -integrable, and M is well defined.
Now let t ≥0 with Zt > 0. Using (6), Theorem 10.24 (v), Fubini’s theorem, and the definition of ζ, we get for any x ∈S t+ 0 (U −V ) dη = t+ 0 Ys d¯ ηs s+ 0 V dζ = t+ 0 dYs s+ 0 V dζ = t+ 0 (Yt −Ys−) Vs,y dζs,y = Ut,x −Vt,x − t+ 0 V dη.
Simplifying and combining with (6), we get t+ 0 Udη = Ut,x −Vt,x = Yt t+ 0 V dζ.
To extend to general t, suppose that Zτ = 0. Using the previous version of (8), the definition of ζ, and the conditions on V , we get τ+ 0 Udη = τ− 0 Udη + [τ] Udη = Yτ− τ− 0 V dζ + [τ] V dη = Yτ− τ+ 0 V dζ = 0 = Uτ,x −Vτ,x, 5Thus, a mere integrability condition on U implies a moment identity for V .
226 Foundations of Modern Pobability which shows that (8) is generally true. In particular, Vτ,χ = Uτ,χ − τ+ 0 Udη = Mτ = M∞.
(ii) If E |Uτ,χ| < ∞, then E η|U| < ∞by compensation, which shows that M is uniformly integrable. For any optional time σ, the process 1[0,σ] U is again predictable and Eη -integrable, and so by the compensation property and (7), EMσ = E Uτ,χ; τ ≤σ −E σ+ 0 Udη = E δτ,χ(1[0,σ] U) −E η(1[0,σ] U) = 0, which shows that M is a martingale. Thus, in view of (i), EVτ,χ = EM∞= EM0 = 0.
Furthermore, by Doob’s inequality, ∥M ∗∥p < ⌢∥M∞∥p = ∥Vτ,χ∥p, p > 1.
2 We also need a multi-variate version of the last result. Here the optional times τ1, . . . , τn are said to be orthogonal, if they are a.s. distinct, and the atomic parts of their compensators have disjoint supports.
Corollary 10.26 (product moments) For every j ≤m, consider an adapted pair (τj, χj) in (0, ∞) × Sj and a predictable process Vj on R+ × Sj, where the τj are orthogonal. Define Zj, ζj, Uj as in Theorem 10.24 and (6), and suppose that for all j ≤m, (i) ζj|Vj| < ∞, ζjVj = 0 a.s. on {Zj(τj) = 0}, (ii) E |Uj(τj, χj)| < ∞, E |Vj(τj, χj)|pj < ∞, for some p1, . . . , pm > 0 with Σj p−1 j ≤1. Then (iii) E Π j ≤m Vj(τj, χj) = 0.
Proof: Let the pairs (τj, χj) have compensators η1, . . . , ηm, and define the martingales M1, . . . , Mm as in (7). Fix any i ̸= j in {1, . . . , m}, and choose some predictable times σ1, σ2, . . . as in Theorem 10.17, such that {t > 0; Δ¯ ηt > 0} = k[σk] a.s. By compensation and orthogonality, we have for any k ∈N E δσk[τj] = P{τj = σk} = E δτj[σk] = E ηj[σk] = 0.
Summing over k gives ηk[τj] = 0 a.s., which shows that (ΔMi)(ΔMj) = 0 a.s.
for all i ̸= j. Integrating repeatedly by parts, we conclude that M = j Mj is 10. Predictability and Compensation 227 a local martingale. Letting p−1 = Σj p−1 j ≤1, we get by H¨ older’s inequality, Proposition 10.25 (ii), and the various hypotheses ∥M ∗∥1 ≤∥M ∗∥p ≤Π j ∥M ∗ j ∥pj < ⌢Π j ∥Vτj,χj∥pj < ∞.
Thus, M is a uniformly integrable martingale, and so by Lemma 10.25 (i), E Π j Vj(τj, χj) = E Π j Mj(∞) = EM(∞) = EM(0) = 0.
2 This yields a powerful predictable mapping theorem based on the dis-counted compensator. Important extensions and applications appear in Chap-ters 15 and 27.
Theorem 10.27 (predictable mapping) Suppose that for all j ≤m, (i) (τj, χj) is adapted in (0, ∞) × Kj with discounted compensator ζj, (ii) Yj is a predictable map of R+ × Kj into a probability space (Sj, μj), (iii) the τj are orthogonal and ζj ◦Y −1 j ≤μj a.s.
Then the variables γj = Yj(τj, χj) are independent with distributions6 μj.
Proof: For fixed Bj ∈Sj, consider the predictable processes Vj(t, x) = 1Bj ◦Yj(t, x) −μjBj, t ≥0, x ∈Kj, j ≤m.
By definitions and hypotheses, t+ 0 Vj dζj = t+ 0 1Bj(Yj) dζj −μjBj 1 −Zj(t) ≤Zj(t) μjBj.
Since changing from Bj to Bc j affects only the sign of Vj, the two versions combine into −Zj(t) μjBc j ≤ t+ 0 Vj dζj ≤Zj(t) μjBj.
In particular, |ζjVj| ≤Zj(τj) a.s., and so ζjVj = 0 a.s. on {Zj(τj) = 0}.
Defining Uj as in (6) and using the previous estimates, we get −1 ≤−1Bc j ◦Yj = Vj −μjBc j ≤Uj ≤Vj + μjBj = 1Bj ◦Yj ≤1, 6Thus, the mere inequality in (iii) implies the equality L(γj) = μj.
228 Foundations of Modern Pobability which implies |Uj| ≤1. Letting ∅̸= J ⊂{1, . . . , m} and applying Corollary 10.26 with pj = |J| for all j ∈J, we obtain E Π j ∈J 1Bj(γj) −μjBj = E Π j ∈J Vj(τj, χj) = 0.
(8) We may now use induction on |J| to prove that P ∩ j ∈J{γj ∈Bj} = Π j ∈J μjBj, J ⊂{1, . . . , m}.
(9) For |J| = 1 this holds by (8). Now assume (9) for all |J| < k, and proceed to a J with |J| = k. Expanding the product in (8), and applying the induc-tion hypothesis to each term involving at least one factor μjBj, we see that all terms but one reduce to ± j∈J μjBj, whereas the remaining term equals P j∈J{γj ∈Bj}. Thus, (9) remains true for J, which completes the induc-tion. By a monotone-class argument, the formula for J = {1, . . . , m} extends to L(γ1, . . . , γm) = j μj.
2 Exercises 1. Show by an example that the σ-fields Fτ and Fτ−may differ. (Hint: Take τ to be a constant.) 2. Give examples of optional times that are predictable, accessible but not pre-dictable, and totally inaccessible. (Hint: Use Corollary 10.18.) 3.
Give an example that a right-continuous, adapted process that is not pre-dictable. (Hint: Use Theorem 10.14.) 4. Given a Brownian motion B on [0, 1], let F be the filtration induced by Xt = (Bt, B1). Find the Doob–Meyer decomposition B = M + A on [0, 1), and show that A has a.s. finite variation on [0, 1].
5. For any totally inaccessible time τ, show that supt |P(τ ≤t+ε| Ft)−1{τ ≤t}| →0 a.s. as ε →0. Derive a corresponding result for the compensator. (Hint: Use Lemma 10.8.) 6. Let the process X be adapted and rcll. Show that X is predictable iffit has accessible jumps and ΔXτ is Fτ−-measurable for every predictable time τ < ∞.
(Hint: Use Proposition 10.17 and Lemmas 10.1 and 10.3.) 7. Show that the compensator A of a ql-continuous, local sub-martingale is a.s.
continuous. (Hint: Note that A has accessible jumps. Use optional sampling at an arbitrary predictable time τ < ∞with announcing sequence (τn).) 8. Show that any general inequality involving an increasing process A and its compensator ˆ A remains valid in discrete time.
(Hint: Embed the discrete-time process and filtration into continuous time.) 9. Let ξ be a binomial process on R+ with E ξ = nμ for a diffuse probability measure μ. Show that if ξ has points τ1 < · · · < τn, then ξ has induced compensator ˆ ξ = Σ k≤n1[τk−1,τk) μk for some non-random measures μ1, . . . , μn, to be identified.
10. Let ξ be a point process on R+ with ∥ξ∥= n and with induced compensator of the form ˆ ξ = Σ k≤n1[τk−1,τk) μk for some non-random measures μ1, . . . , μn. Show 10. Predictability and Compensation 229 that L(ξ) is uniquely determined by those measures. Under what condition is ξ a binomial process?
11. Determine the natural compensator of a renewal process ξ based on a measure μ. Under what conditions on μ has ξ accessible atoms, and when is ξ ql-continuous?
12. Let τ1 < τ2 < · · · form a renewal process based on a diffuse measure μ. Find the associated discounted compensators ζ1, ζ2, . . . .
13. Let ξ be a binomial process with ∥ξ∥= n, based on a diffuse probability measure μ on R+. Find the natural compensator ˆ ξ of ξ. Also find the discounted compensators ζ1, . . . , ζn of the points τ1 < · · · < τn of ξ.
IV. Markovian and Related Structures Here we introduce the notion of Markov processes, representing another basic dependence structure of probability theory and playing a prominent role throughout the remainder of the book. After a short discussion of the general Markov property, we establish the basic convergence and invariance properties of discrete- and continuous-time Markov chains. In the continuous-time case, we focus on the structure of pure jump-type processes, and include a brief dis-cussion of branching processes. Chapter 12 is devoted to a detailed study of random walks and renewal processes, exploring in particular some basic recur-rence properties and a two-sided version of the celebrated renewal theorem. A hurried reader might concentrate on especially the beginnings of Chapters 11 and 13, along with selected material from Chapter 12.
−−− 11.
Markov properties and discrete-time chains.
The beginning of this chapter, exploring the nature of the general Markov property, may be regarded as essential core material. After discussing in particular the role of transition kernels and the significance of space and time homogeneity, we prove an elementary version of the strong Markov property. We conclude with a dis-cussion of the basic recurrence and ergodic properties of discrete-time Markov chains, including criteria for the existence of invariant distributions and con-vergence of transition probabilities.
12. Random walks and renewal processes. After establishing the re-currence dichotomy of random walks in Rd and giving criteria for recurrence and transience, we proceed to a study of fluctuation properties in one dimension, involving the notions of ladder times and heights and criteria for divergence to infinity. The remainder of the chapter deals with the main results of renewal theory, including the stationary version of a given renewal process, a two-sided version of the basic renewal theorem, and solutions of the renewal equation.
The mentioned subjects will prepare for the related but more sophisticated theories of L´ evy processes, local time, and excursion theory in later chapters.
13. Jump-type chains and branching processes. Here we begin with a detailed study of pure jump-type Markov processes, leading to a description in terms of a rate kernel, determining both the jump structure and the na-ture of holding times. This material is of fundamental importance and may serve as an introduction to the theory of Feller processes and their genera-tors. After discussing the recurrence and ergodic properties of continuous-time Markov chains, we conclude with a brief introduction to branching processes, of importance for both theory and applications.
231 Chapter 11 Markov Properties and Discrete-Time Chains Extended Markov property, transition kernels, finite-dimensional distri-butions, Chapman–Kolmogorov relation, existence, invariance and in-dependence, random dynamical systems, iterated optional times, strong Markov property, space and time homogeneity, stationarity and invari-ance, strong homogeneity, occupation times, recurrence, periodicity, ex-cursions, irreducible chains, convergence dichotomy, mean recurrence times, absorption Markov processes are without question the most important processes of modern probability, and various aspects of their theory and applications will appear throughout the remainder of this book. They may be thought of informally as random dynamical systems, which explains their fundamental importance for the subject. They will be considered in discrete and continuous time, and in a wide variety of spaces.
To make the mentioned description precise, we may fix an arbitrary Borel space S and filtration F. An adapted process X in S is said to be Markov, if for any times s < t we have Xt = fs,t(Xs, ϑs,t) a.s. for some measurable functions fs,t and U(0, 1) random variables ϑs,t⊥ ⊥Fs.
The stated condition is equivalent to the less transparent conditional independence Xt⊥ ⊥XsFs. The process is said to be time-homogeneous if we can choose fs,t ≡f0,t−s, and space-homogeneous (in Abelian groups) if fs,t(x, ·) ≡fs,t(0, ·) + x. The evolution is formally described in terms of a family of transition kernels μs,t(x, ·) = L{fs,t(x, ϑ)}, satisfying an a.s. version of the Chapman–Kolmogorov relation μs,t μt,u = μs,u. In the usual axiomatic setup, we assume the latter equation to hold identically.
This chapter is devoted to the most basic parts of Markov process theory.
Space homogeneity is shown to be equivalent to independence of the incre-ments, which motivates our discussion of random walks and L´ evy processes in Chapters 12 and 16. In the time-homogeneous case, we prove a primitive form of the strong Markov property, and show how the result simplifies when the process is also space-homogeneous. We further show how invariance of the initial distribution implies stationarity of the process, which motivates our treatment of stationary processes in Chapter 25.
The general theory of Markov processes is more advanced and will not be continued until Chapter 17, where we develop the basic theory of Feller processes. In the meantime we will consider several important sub-classes, such 233 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 234 Foundations of Modern Probability as the pure jump-type processes in Chapter 13, Brownian motion and related processes in Chapters 14 and 19, and the mentioned random walks and L´ evy processes in Chapters 12 and 16. A detailed study of diffusion processes appears in Chapters 32–33, and further aspects of Brownian motion are considered in Chapters 29, 34–35.
To begin our systematic study of Markov processes, consider an arbitrary time scale T ⊂R equipped with a filtration F = (Ft), and fix a measurable space (S, S). An S-valued process X on T is called a Markov process, if it is adapted to F and such that Ft ⊥ ⊥ Xt Xu, t ≤u in T.
(1) Just as for the martingale property, the Markov property depends on the choice of filtration, with the weakest version obtained for the filtration induced by X.
The simple property (1) may be strengthened as follows.
Lemma 11.1 (extended Markov property) The Markov property (1) extends to1 Ft ⊥ ⊥ Xt {Xu; u ≥t}, t ∈T.
(2) Proof: Fix any t = t0 ≤t1 ≤· · · in T. Then (1) yields Ftn⊥ ⊥XtnXtn+1 for every n ≥0, and so by Theorem 8.12 Ft ⊥ ⊥ Xt0, . . . , Xtn Xtn+1, n ≥0.
By the same result, this is equivalent to Ft ⊥ ⊥ Xt (Xt1, Xt2, . . .), and (2) follows by a monotone-class argument.
2 For any times s ≤t in T, we assume the existence of some regular condi-tional distributions μs,t(Xs, B) = P(Xt ∈B | Xs) = P(Xt ∈B | Fs) a.s., B ∈S, (3) where the transition kernels μs,t exist by Theorem 8.5 when S is Borel. We may further introduce the one-dimensional distributions νt = L(Xt), t ∈T.
When T begins at 0, we show that the distribution of X is uniquely determined by the kernels μs,t together with the initial distribution ν0.
For a precise statement, it is convenient to use the kernel operations in-troduced in Chapter 3.
Recall that if μ, ν are kernels on S, the kernels μ ⊗ν : S →S2 and μν : S →S are given, for any s ∈S and B ∈S2 or S, by 1Using the shift operators θt, this becomes Ft⊥ ⊥Xt θtX for all t ∈T. Informally, the past and future are conditionally independent, given the present.
11. Markov Properties and Discrete-Time Chains 235 (μ ⊗ν)(s, B) = μ(s, dt) ν(t, du) 1B(t, u), (μν)(s, B) = (μ ⊗ν)(s, S × B) = μ(s, dt) ν(t, B).
Proposition 11.2 (finite-dimensional distributions) Let X be a Markov pro-cess on T with one-dimensional distributions νt and transition kernels μs,t.
Then for any t0 ≤· · · ≤tn in T, (i) L Xt0, . . . , Xtn = νt0 ⊗μt0,t1 ⊗· · · ⊗μtn−1,tn, (ii) L Xt1, . . . , Xtn Ft0 = μt0,t1 ⊗· · · ⊗μtn−1,tn (Xt0, ·).
Proof: (i) This is obvious for n = 0. Proceeding by induction, assume the statement with n replaced by n−1, and fix any bounded, measurable function f on Sn+1. Since Xt0, . . . , Xtn−1 are Ftn−1-measurable, we get by Theorem 8.5 and the induction hypothesis Ef(Xt0, . . . , Xtn) = E E f(Xt0, . . . , Xtn) | Ftn−1 = E f(Xt0, . . . , Xtn−1, x) μtn−1,tn(Xtn−1, dx) = νt0 ⊗μt0,t1 ⊗· · · ⊗μtn−1,tn f, as desired. This completes the induction.
(ii) From (i) we get for any B ∈S and C ∈Sn P (Xt0, . . . , Xtn) ∈B × C = B νt0(dx) μt0,t1 ⊗· · · ⊗μtn−1,tn (x, C) = E μt0,t1 ⊗· · · ⊗μtn−1,tn (Xt0, C); Xt0 ∈B , and the assertion follows by Theorem 8.1 and Lemma 11.1.
2 An obvious consistency requirement leads to the basic Chapman–Kolmo-gorov relation between the transition kernels. Say that two kernels μ, μ′ agree a.s. if μ(x, ·) = μ′(x, ·) for almost every x.
Corollary 11.3 (Chapman, Smoluchovsky) For a Markov process in a Borel space S, we have μs,u = μs,t μt,u a.s. νs, s ≤t ≤u.
Proof: By Proposition 11.2, we have a.s. for any B ∈S μs,u(Xs, B) = P Xu ∈B Fs = P (Xt, Xu) ∈S × B Fs = (μs,t ⊗μt,u)(Xs, S × B) = (μs,t μt,u)(Xs, B).
Since S is Borel, we may choose a common null set for all B.
2 236 Foundations of Modern Probability We will henceforth require the stated relation to hold identically, so that μs,u = μs,t μt,u, s ≤t ≤u.
(4) Thus, we define a Markov process by condition (3), in terms of some transition kernels μs,t satisfying (4).
In the discrete-time case of T = Z+, the latter relation imposes no restriction, since we may then start from any versions of the kernels μn = μn−1,n, and define μm,n = μm+1 · · · μn for arbitrary m < n.
Given such a family of transition kernels μs,t and an arbitrary initial dis-tribution ν, we show that an associated Markov process exists.
Theorem 11.4 (existence, Kolmogorov) Consider a time scale T starting at 0, a Borel space (S, S), a probability measure ν on S, and a family of probability kernels μs,t on S, s ≤t in T, satisfying (4). Then there exists an S-valued Markov process X on T with initial distribution ν and transition kernels μs,t.
Proof: Consider the probability measures νt1,...,tn = νμt0,t1 ⊗· · · ⊗μtn−1,tn, 0 = t0 ≤t1 ≤· · · ≤tn, n ∈N.
To see that the family (νt0,...,tn) is projective, let B ∈Sn−1 be arbitrary, and define for any k ∈{1, . . . , n} the set Bk = (x1, . . . , xn) ∈Sn; (x1, . . . , xk−1, xk+1, . . . , xn) ∈B .
Then (4) yields νt1,...,tnBk = νμt0,t1 ⊗· · · ⊗μtk−1,tk+1 ⊗· · · ⊗μtn−1,tn B = νt1,...,tk−1,tk+1,...,tnB, as desired. By Theorem 8.23 there exists an S-valued process X on T with L(Xt1, . . . , Xtn) = νt1,...,tn, t1 ≤· · · ≤tn, n ∈N, (5) and in particular L(X0) = ν0 = ν.
To see that X is Markov with transition kernels μs,t, fix any times s1 ≤ · · · ≤sn = s ≤t and sets B ∈Sn and C ∈S, and conclude from (5) that P (Xs1, . . . , Xsn, Xt) ∈B × C = νs1,...,sn,t(B × C) = E μs,t(Xs, C); (Xs1, . . . , Xsn) ∈B .
Letting F be the filtration induced by X, we get by a monotone-class argument P Xt ∈C; A = E μs,t(Xs, C); A , A ∈Fs, and so P{Xt ∈C | Fs} = μs,t(Xs, C) a.s.
2 11. Markov Properties and Discrete-Time Chains 237 Now let (G, G) be a measurable group with identity element ι. Recall that a kernel μ on G is said to be invariant if μr = μι ◦θ−1 r ≡θrμι, meaning that μ(r, B) = μ(ι, r−1B), r ∈G, B ∈G.
An G-valued Markov process is said to be space-homogeneous if its transition kernels μs,t are invariant. We further say that a process X in G has independent increments, if for any times t0 ≤· · · ≤tn the increments X−1 tk−1Xtk are mutually independent and independent of X0. More generally, given a filtration F on T, we say that X has F-independent increments, if X is adapted to F and such that X−1 s Xt ⊥ ⊥Fs for all s ≤t in T. Note that the elementary notion of independence corresponds to the case where F is induced by X. For Abelian groups G, we may use an additive notation and write the increments in the usual way as Xt −Xs.
Theorem 11.5 (space homogeneity and independence) Let X be an F-adapted process on T with values in a measurable group G. Then these conditions are equivalent: (i) X is a G-homogeneous F-Markov process, (ii) X has F-independent increments.
In that case, X has transition kernels μs,t(r, B) = P X−1 s Xt ∈r−1B , r ∈G, B ∈G, s ≤t in T.
(6) Proof, (i) ⇒(ii): Let X be F-Markov with transition kernels μs,t(r, B) = μs,t(r−1B), r ∈G, B ∈G, s ≤t in T.
(7) Then Theorem 8.5 yields for any s ≤t in T and B ∈G P X−1 s Xt ∈B Fs = P Xt ∈XsB Fs = μs,t(Xs, XsB) = μs,tB.
Thus, X−1 s Xt is independent of Fs with distribution μs,t, and (6) follows by means of (7).
(ii) ⇒(i): Let X−1 s Xt be independent of Fs with distribution μs,t. Defining the associated kernel μs,t by (7), we get by Theorem 8.5 for any s, t, and B as before P Xt ∈B Fs = P X−1 s Xt ∈X−1 s B Fs = μs,t(X−1 s B) = μs,t(Xs, B), and so X is Markov with the invariant transition kernels in (7).
2 238 Foundations of Modern Probability We now specialize to the time-homogeneous case, where T = R+ or Z+, and the transition kernels are of the form μs,t = μt−s, so that P Xt ∈B Fs = μt−s(Xs, B) a.s., B ∈S, s ≤t in T.
Introducing the initial distribution ν = L(X0), we get by Proposition 11.2 L(Xt0, . . . , Xtn) = νμt0 ⊗μt1−t0 ⊗· · · ⊗μtn−tn−1, L Xt1, . . . , Xtn Ft0 = μt1−t0 ⊗· · · ⊗μtn−tn−1 (Xt0, ·).
The Chapman–Kolmogorov relation now reduces to the semi-group property μs+t = μs μt, s, t ∈T, which is again assumed to hold identically.
The following result suggests that we think of a discrete-time Markov pro-cess as a stochastic dynamical system.
Let ϑ be a generic U(0, 1) random variable.
Proposition 11.6 (random dynamical system) Let X be a process on Z+ with values in a Borel space S. Then these conditions are equivalent: (i) X is Markov with transition kernels μn on S, (ii) there exist some measurable functions f1, f2, . . .: S × [0, 1] →S and i.i.d.
U(0, 1) random variables ϑ1, ϑ2, . . . ⊥ ⊥X0, such that Xn = fn(Xn−1, ϑn) a.s., n ∈N.
In that case μn(s, ·) = L{fn(s, ϑ)} a.s. L(Xn−1), and we may choose fn ≡f iffX is time-homogeneous.
Proof, (ii) ⇒(i): Assume (ii), and define μn(s, ·) = L{fn(s, ϑ)}. Writing F for the filtration induced by X, we get by Theorem 8.5 for any B ∈S P Xn ∈B Fn−1 = P fn(Xn−1, ϑn) ∈B Fn−1 = λ t; fn(Xn−1, t) ∈B = μn(Xn−1, B), proving (i).
(i) ⇒(ii): Assume (i). By Lemma 4.22 we may choose the associated func-tions fn as above. Let ˜ ϑ1, ˜ ϑ2, . . . be i.i.d. U(0, 1) and independent of ˜ X0 d = X0, and define recursively ˜ Xn = fn( ˜ Xn−1, ˜ ϑn) for n ∈N. Then ˜ X is Markov as before with transition kernels μn. Hence, ˜ X d = X by Proposition 11.2, and so Theorem 8.17 yields some random variables ϑn with {X, (ϑn)} d = { ˜ X, (˜ ϑn)}.
Now (ii) follows, since the diagonal in S2 is measurable. The last assertion is clear by construction.
2 11. Markov Properties and Discrete-Time Chains 239 Now fix a transition semi-group (μt) on a Borel space S. For any prob-ability measure ν on S, Theorem 11.4 yields an associated Markov process Xν, and Proposition 4.2 shows that the associated distribution Pν is uniquely determined by ν. Note that Pν is a probability measure on the path space (ST, ST). For ν = δx, we write Px instead of Pδx. Integration with respect to Pν or Px is denoted by Eν or Ex, respectively.
Lemma 11.7 (mixtures) The measures Px form a probability kernel from S to ST, and for any initial distribution ν, PνA = S Px(A) ν(dx), A ∈ST.
Proof: The measurability of PxA and the stated formula are obvious for cylinder sets of the form A = (πt1, . . . , πtn)−1B. The general case follows easily by a monotone-class argument.
2 Rather than considering one Markov process Xν for every initial distribu-tion ν, we may introduce a single canonical process X, defined as the identity mapping on the path space (ST, ST), equipped with the different probability measures Pν. Then Xt agrees with the evaluation map πt : x →xt on ST, which is measurable by the definition of ST. For our present purposes, it suffices to endow the path space ST with the canonical filtration F induced by X. On ST we further introduce the shift operators θt : ST →ST, t ∈T, given by (θtx)s = xs+t, s, t ∈T, x ∈ST, which are clearly measurable with respect to ST. In the canonical case we note that θtX = θt = X ◦θt.
Optional times with respect to a Markov process are often constructed recursively in terms of shifts on the underlying path space. Thus, for any pair of optional times σ and τ on the canonical space, we may consider the random time γ = σ + τ ◦θσ, which is understood to be infinite when σ = ∞. Under weak restrictions on space and filtration, we show that γ is again optional.
Here CS and DS denote the spaces of continuous or rcll functions, respectively, from R+ to S.
Proposition 11.8 (iterated optional times) For a metric space S, let σ, τ be optional times on the canonical space S∞, CS, or DS, endowed with the right-continuous, induced filtration. Then we may form an optional time γ = σ + τ ◦θσ.
Proof: Since σ ∧n + τ ◦θσ∧n ↑γ, Lemma 9.3 justifies taking σ to be bounded. Let X be the canonical process with induced filtration F. Since X is F+-progressive, Xσ+s = Xs ◦θσ is F+ σ+s -measurable for every s ≥0 by 240 Foundations of Modern Probability Lemma 9.5. Hence, for fixed t ≥0, all sets A = {Xs ∈B} with s ≤t and B ∈S satisfy θ−1 σ A ∈F + σ+t. Since the latter sets form a σ-field, we get θ−1 σ Ft ⊂F+ σ+t, t ≥0.
(8) Now fix any t ≥0, and note that {γ < t} = ∪ r∈Q ∩(0, t) σ < r, τ ◦θσ < t −r .
(9) For every r ∈(0, t) we have {τ < t−r} ∈Ft−r, and so θ−1 σ {τ < t−r} ∈F + σ+t−r by (8), and Lemma 9.2 gives σ < r, τ ◦θσ < t −r = σ + t −r < t ∩θ−1 σ τ < t −r ∈Ft.
Thus, (9) yields {γ < t} ∈Ft, and so γ is F+-optional by Lemma 9.2.
2 We may now extend the Markov property to suitable optional times. The present statement is only preliminary, and stronger versions will be established, under appropriate conditions, in Theorems 13.1, 14.11, and 17.17.
Proposition 11.9 (strong Markov property) Let X be a time-homogeneous Markov process on T = R+ or Z+, and let τ be an optional time taking count-ably many values. Then in the general and canonical cases, we have respectively (i) P θτX ∈A Fτ = PXτA a.s. on {τ < ∞}, (ii) Eν(ξ ◦θτ| Fτ) = EXτξ, Pν -a.s. on {τ < ∞}, for any set A ∈ST, distribution ν on S, and bounded or non-negative random variable ξ.
Since {τ < ∞} ∈Fτ, formulas (i)−(ii) make sense by Lemma 8.3, even though θτX and PXτ are only defined for τ < ∞.
Proof: By Lemmas 8.3 and 9.1, we may take τ = t to be finite and non-random. For sets of the form A = (πt1, . . . , πtn)−1B, t1 ≤· · · ≤tn, B ∈Sn, n ∈N, (10) Proposition 11.2 yields P θtX ∈A Ft = P (Xt+t1, . . . , Xt+tn) ∈B Ft = μt1 ⊗μt2−t1 ⊗· · · ⊗μtn−tn−1 (Xt, B) = PXtA, which extends to arbitrary A ∈ST by a monotone-class argument.
For canonical X, (i) is equivalent to (ii) with ξ = 1A, since in that case ξ ◦θτ = 1{θτX ∈A}. The result extends to general ξ by linearity and mono-tone convergence.
2 When X is both space- and time-homogeneous, we can state the strong Markov property without reference to the family (Px). For notational clarity, we consider only processes in Abelian groups, the general case being similar.
11. Markov Properties and Discrete-Time Chains 241 Theorem 11.10 (space and time homogeneity) Let X be a space- and time-homogeneous Markov process in a measurable Abelian group S. Then (i) PxA = P0(A −x), x ∈S, A ∈ST, (ii) the strong Markov property holds at an optional time τ < ∞, iffXτ is a.s. Fτ-measurable and X −X0 d = θτX −Xτ ⊥ ⊥Fτ.
Proof: (i) By Proposition 11.2, we get for sets A as in (10) PxA = Px ◦(πt1, . . . , πtn)−1B = μt1 ⊗μt2−t1 ⊗· · · ⊗μtn−tn−1 (x, B) = μt1 ⊗μt2−t1 ⊗· · · ⊗μtn−tn−1 (0, B −x) = P0 ◦(πt1, . . . , πtn)−1(B −x) = P0(A −x), which extends to general A by a monotone-class argument.
(ii) Assume the strong Markov property at τ. Letting A = π−1 0 B with B ∈S, we get 1B(Xτ) = PXτ{π0 ∈B} = P Xτ ∈B Fτ a.s., and so Xτ is a.s. Fτ-measurable. By (i) and Theorem 8.5, P θτX −Xτ ∈A Fτ = PXτ(A + Xτ) = P0A, A ∈ST, (11) which shows that θτX −Xτ⊥ ⊥Fτ with distribution P0. Taking τ = 0, we get in particular L(X −X0) = P0, and the asserted properties follow.
Now assume the stated conditions. To deduce the strong Markov property at τ, let A ∈ST be arbitrary, and conclude from (i) and Theorem 8.5 that P θτX ∈A Fτ = P θτX −Xτ ∈A −Xτ Fτ = P0(A −Xτ) = PXτA.
2 If a time-homogeneous Markov process X has initial distribution ν, the distribution at time t ∈T equals νt = νμt, or νtB = ν(dx) μt(x, B), B ∈S, t ∈T.
A distribution ν is said to be invariant for the semi-group (μt) if νt is indepen-dent of t, so that νμt = ν for all t ∈T. We also say that a process X on T is stationary, if θtX d = X for all t ∈T. The two notions2 are related as follows.
2Stationarity and invariance are often confused. Here the distinction is of course crucial.
242 Foundations of Modern Probability Lemma 11.11 (stationarity and invariance) Let X be a time-homogeneous Markov process on T with transition kernels μt and initial distribution ν. Then X is stationary ⇔ ν is invariant for (μt).
Proof: If ν is invariant, then Proposition 11.2 yields (Xt+t1, . . . , Xt+tn) d = (Xt1, . . . , Xtn), t, t1 ≤· · · ≤tn in T, and so X is stationary by Proposition 4.2. The converse is obvious.
2 The strong Markov property has many applications. We begin with a clas-sical maximum inequality for random walks, which will be useful in Chapter 23.
Proposition 11.12 (maximum inequality, Ottaviani) Let ξ1, ξ2, . . . be i.i.d.
random variables with E ξi = 0 and E ξ2 i = 1, and put Xn = Σ i≤n ξi. Then P X∗ n ≥2 r√n ≤ P |Xn| ≥r√n 1 −r−2 , r > 1, n ∈N.
Proof: Put c = r√n, and define τ = inf{k ∈N; |Xk| ≥2c}. By the strong Markov property at τ and Theorem 8.5, P{|Xn| ≥c} ≥P |Xn| ≥c, X∗ n ≥2c ≥P τ ≤n, |Xn −Xτ| ≤c ≥P X∗ n ≥2 c min k≤n P |Xk| ≤c , and Chebyshev’s inequality yields min k≤n P |Xk| ≤c ≥min k≤n 1 −k c−2 ≥ 1 −n c−2 = 1 −r−2.
2 Similar ideas can be used to prove the following more sophisticated result, which will find important applications in Chapters 16 and 27.
Corollary 11.13 (uniform convergence) For r ∈Q+, let Xr be rcll processes in Rd, such that the increments in r are independent and satisfy (Xr−Xs)∗ t P →0 as r, s →∞for fixed t > 0. Then there exists an rcll process X in Rd, such that (Xr −X)∗ t →0 a.s., t > 0.
Proof: Fixing a t > 0, we can choose a sequence rn →∞in Q+, such that (Xrm −Xrn)∗ t →0 a.s., m, n →∞.
By completeness there exists a process X in Rd with (Xrn −X)∗ t →0 a.s., and we note that X is again a.s. rcll. Now define Y r = Xr −X, and note that 11. Markov Properties and Discrete-Time Chains 243 (Y r)∗ t P →0. Fixing any ε, r > 0 and a finite set A ⊂[r, ∞) ∩Q, and putting σ = inf{s ∈A; (Y s)∗ t > 2ε}, we get as in Proposition 11.12 P{(Y r)∗ t > ε} ≥ P (Y r)∗ t > ε, max s∈A (Y s)∗ t > 2ε ≥ P σ < ∞, (Y r −Y s)∗ t ≤ε ≥ P max s∈A (Y s)∗ t > 2ε min s∈A P (Y r −Y s)∗ t ≤ε , which extends immediately to A = [r, ∞) ∩Q. Solving for the first factor on the right gives P sup s≥r(Y s)∗ t > 2ε ≤ P{(Y r)∗ t > ε} 1 −sups≥rP (Y r −Y s)∗ t > ε , and so sups≥r(Y s)∗ t P →0 which implies (Xr −X)∗ t = (Y r)∗ t →0 a.s. The asser-tion now follows since t > 0 was arbitrary.
2 We proceed to clarify the nature of the strong Markov property of a process X at a finite optional time τ. The condition is clearly a combination of the properties Fτ ⊥ ⊥ Xτ θτX, L( θτX | Xτ) = PXτ a.s.
Though the latter relation—here referred to as strong homogeneity—appears to be weaker than the strong Markov property, the two conditions are in fact essentially equivalent. Here we fix any right-continuous filtration F on R+.
Theorem 11.14 (strong homogeneity) Let X be an F-adapted, rcll process in a separable metric space (S, ρ), and consider a probability kernel (Px): S → DS. Then these conditions are equivalent: (i) X is strongly homogeneous at every bounded, optional time τ, (ii) X satisfies the strong Markov property at every optional time τ < ∞.
Our proof is based on a 0−1 law for absorption, involving the sets I = x ∈D; xt ≡x0 , A = s ∈S; P s I = 1 .
(12) Lemma 11.15 (absorption) For any X as in Theorem 11.14 (i) and optional times τ < ∞, we have PXτI = 1I(θτX) = 1A(Xτ) a.s.
Proof: We may clearly take τ to be bounded, say by n ∈N. Fix any h > 0, and partition S into disjoint Borel sets B1, B2, . . . of diameter < h. For each k ∈N, define τk = n ∧inf t > τ; ρ(Xτ, Xt) > h on {Xτ ∈Bk}, (13) and put τk = τ otherwise. The times τk are again bounded and optional, and clearly {Xτk ∈Bk} ⊂ Xτ ∈Bk, sup t∈[τ,n] ρ(Xτ, Xt) ≤h .
(14) 244 Foundations of Modern Probability Using property (i) and (14), we get as n →∞and h →0 E PXτIc; θτX ∈I = Σ k E PXτIc; θτX ∈I, Xτ ∈Bk ≤Σ k E PXτkIc; Xτk ∈Bk = Σ k P θτkX / ∈I, Xτk ∈Bk ≤Σ k P θτX / ∈I, Xτ ∈Bk, sup t∈[τ,n] ρ(Xτ, Xt) ≤h →P θτX / ∈I, sup t≥τ ρ(Xτ, Xt) = 0 = 0, and so PXτI = 1 a.s. on {θτX ∈I}. Since also EPXτI = P{θτX ∈I} by (i), we obtain the first of the displayed equalities. The second one follows by the definition of A.
2 Proof of Theorem 11.14: Assume (i), and define I and A as in (12). To prove (ii) on {Xτ ∈A}, fix any times t1 < · · · < tn and Borel sets B1, . . . , Bn, write B = k Bk, and conclude from (i) and Lemma 11.15 that P ∩k {Xτ+tk ∈Bk} Fτ = P{Xτ ∈B | Fτ} = 1{Xτ ∈B} = P{Xτ ∈B | Xτ} = PXτ{x0 ∈B} = PXτ∩k {xtk ∈Bk}, which extends to (ii) by a monotone-class argument.
To prove (ii) on {Xτ / ∈A}, we may take τ ≤n a.s., and divide Ac into disjoint Borel sets Bk of diameter < h. Fix any F ∈Fτ with F ⊂{Xτ / ∈A}.
For each k ∈N, define τk as in (13) on the set F c ∩{Xτ ∈Bk}, and let τk = τ otherwise. Note that (14) remains true on F c. Using (i), (14), and Lemma 11.15, we get as n →∞and h →0 L(θτX ; F) −E(PXτ; F) = Σ k E 1{θτX ∈·} −PXτ; Xτ ∈Bk, F = Σ k E 1{θτkX ∈·} −PXτk; Xτk ∈Bk, F = Σ k E 1{θτkX ∈·} −PXτk; Xτk ∈Bk, F c ≤Σ k P Xτk ∈Bk; F c ≤Σ k P Xτ ∈Bk, sup t∈[τ,n] ρ(Xτ, Xt) ≤h →P Xτ / ∈A, sup t≥τ ρ(Xτ, Xt) = 0 = 0, which shows that the left-hand side is zero.
2 We conclude with some asymptotic and invariance properties for discrete-time Markov chains. First we consider the successive visits to a fixed state 11. Markov Properties and Discrete-Time Chains 245 y ∈S. Assuming the process to be canonical, we introduce the hitting time τy = inf{n ∈N; Xn = y}, and define recursively τ k+1 y = τ k y + τy ◦θτ k y , k ∈Z+, starting with τ 0 y = 0. We further introduce the occupation times κy = sup k; τ k y < ∞ = Σ n≥1 1{Xn = y}, y ∈S, and show how the distribution of κy can be expressed in terms of the hitting probabilities rxy = Px{τy < ∞} = Px{κy > 0}, x, y ∈S.
Lemma 11.16 (occupation times) For any x, y ∈S and k ∈N, (i) Px{κy ≥k} = Px{τ k y < ∞} = rxy rk−1 yy , (ii) Exκy = rxy 1 −ryy .
Proof: (i) By the strong Markov property, we have for any k ∈N Px τ k+1 y < ∞ = Px τ k y < ∞, τy ◦θτ k y < ∞ = Px{τ k y < ∞} Py{τy < ∞} = ryy Px{τ k y < ∞}, and the second relation follows by induction on k. The first relation holds since κy ≥k iffτ k y < ∞.
(ii) By (i) and Lemma 4.4, we have Exκy = Σ k≥1 Px{κy ≥k} = Σ k≥1 rxy rk−1 yy = rxy 1 −ryy .
2 In particular, we get for x = y Px{κx ≥k} = Px{τ k x < ∞} = rk xx, k ∈N.
Thus, under Px, the number of visits to x is either a.s. infinite or geometri-cally distributed with mean Exκx + 1 = (1 −rxx)−1 < ∞. This leads to a corresponding division of S into recurrent and transient states.
Recurrence can often be deduced from the existence of an invariant distri-bution. Here and below we write pn xy = μn(x, {y}).
Lemma 11.17 (invariant distributions and recurrence) Let ν be an invariant distribution on S. Then for any x ∈S, ν{x} > 0 ⇒ x is recurrent.
246 Foundations of Modern Probability Proof: By the invariance of ν, 0 < ν{x} = ν(dy) pn yx, n ∈N.
(15) Thus, by Lemma 11.16 and Fubini’s theorem, ∞= Σ n≥1 ν(dy) pn yx = ν(dy) Σ n≥1 pn yx = ν(dy) ryx 1 −rxx ≤ 1 1 −rxx .
Hence, rxx = 1, and so x is recurrent.
2 The period dx of a state x is defined as the greatest common divisor of the set {n ∈N; pn xx > 0}, and we say that x is aperiodic if dx = 1.
Lemma 11.18 (periodicity) If a state x ∈S has period d < ∞, we have pnd xx > 0 for all but finitely many n.
Proof: Put C = {n ∈N; pnd xx > 0}, and conclude from the Chapman– Kolmogorov relation that C is closed under addition. Since C has greatest common divisor 1, the generated additive group equals Z. In particular, there exist some n1, . . . , nk ∈C and z1, . . . , zk ∈Z with Σj zj nj = 1.
Writing m = n1Σj |zj| nj, we note that any number n ≥m can be represented, for suitable h ∈Z+ and r ∈{0, . . . , n1 −1}, as n = m + h n1 + r = h n1 + Σ j ≤k n1|zj| + rzj nj ∈C.
2 For every x ∈S, we define the excursions of X from x by Yn = Xτx ◦θτ n x , n ∈Z+, as long as τ n x < ∞. To allow infinite excursions, we may introduce an ex-traneous element Δ / ∈S, and define Yn = ¯ Δ ≡(Δ, Δ, . . .) whenever τ n x = ∞.
Conversely, X may be recovered from the Yn through the formulas τn = Σ k<n inf t > 0; Yk(t) = x , (16) Xt = Yn(t −τn), t ∈[τn, τn+1), n ∈Z+.
(17) The distribution νx = Px ◦Y −1 0 is called the excursion law at x. When x is recurrent and ryx = 1, Proposition 11.9 shows that Y1, Y2, . . . are i.i.d. νx under Py. The result extends to the general case, as follows.
Lemma 11.19 (excursions) Let X be a discrete-time Markov process in a Borel space S, and fix any x ∈S. Then there exist some independent processes Y0, Y1, . . . in S, all but Y0 with distribution νx, such that X is a.s. given by (16) and (17).
11. Markov Properties and Discrete-Time Chains 247 Proof: Put ˜ Y0 d = Y0, and let ˜ Y1, ˜ Y2, . . . be independent of ˜ Y0 and i.i.d. νx.
Construct associated random times ˜ τ0, ˜ τ1, . . . as in (16), and define a process ˜ X as in (17). By Corollary 8.18, it is enough to show that X d = ˜ X. Writing κ = sup n ≥0; τn < ∞ , ˜ κ = sup n ≥0; ˜ τn < ∞ , it is equivalent to show that (Y0, . . . , Yκ, ¯ Δ, ¯ Δ, . . .) d = ˜ Y0, . . . , ˜ Y˜ κ, ¯ Δ, ¯ Δ, . . .
.
(18) Using the strong Markov property on the left and the independence of the ˜ Yn on the right, it is easy to check that both sides are Markov processes in SZ+ ∪{¯ Δ} with the same initial distribution and transition kernel. Hence, (18) holds by Proposition 11.2.
2 We now consider discrete-time Markov chains X on the time scale Z+, taking values in a countable state space S. Here the Chapman–Kolmogorov relation becomes pm+n ik = Σ j pm ij pn jk, i, k ∈S, m, n ∈N, (19) where pn ij = μn(i, {j}), i, j ∈S. This may be written in matrix form as pm+n = pmpn, which shows that the matrix pn of n -step transition probabilities is simply the n-th power of the matrix p = p1, justifying our notation. Regarding the initial distribution ν as a row vector (νi), we may write the distribution at time n as νpn.
Now define rij = Pi{τj < ∞} as before, where τj = inf{n > 0; Xn = j}. A Markov chain in S is said to be irreducible if rij > 0 for all i, j ∈S, so that every state can be reached from any other state. For irreducible chains, all states have the same recurrence and periodicity properties: Proposition 11.20 (irreducible chains) For an irreducible Markov chain, (i) either all states are recurrent or all are transient, (ii) all states have the same period, (iii) if ν is invariant, then νi > 0 for all i.
For the proof of (i) we need the following lemma.
Lemma 11.21 (recurrence classes) For a recurrent i ∈S, define Si = {j ∈S; rij > 0}. Then (i) rjk = 1 for all j, k ∈Si, (ii) all states in Si are recurrent.
248 Foundations of Modern Probability Proof: By the recurrence of i and the strong Markov property, we get for any j ∈Si 0 = Pi τj < ∞, τi ◦θτj = ∞ = Pi{τj < ∞} Pj{τi = ∞} = rij(1 −rji).
Since rij > 0 by hypothesis, we obtain rji = 1. Fixing any m, n ∈N with pm ij, pn ji > 0, we get by (19) Ejκj ≥Σ s>0 pm+n+s jj ≥Σ s>0 pn ji ps ii pm ij = pn ji pm ij Eiκi = ∞, and so j is recurrent by Lemma 11.16. Reversing the roles of i and j gives rij = 1. Finally, we get for any j, k ∈Si rjk ≥Pj τi < ∞, τk ◦θτi < ∞ = rjirik = 1.
2 Proof of Proposition 11.20: (i) Use Lemma 11.21.
(ii) Fix any i, j ∈S, and choose m, n ∈N with pm ij, pn ji > 0. By (19), pm+h+n jj ≥pn ji ph ii pm ij, h ≥0.
For h = 0 we get pm+n jj > 0, and so3 dj|(m + n). Hence, in general ph ii > 0 implies dj|h, and we get dj ≤di. Reversing the roles of i and j yields the opposite inequality.
(iii) Fix any i ∈S. Choosing j ∈S with νj > 0, and then n ∈N with pn ji > 0, we see from (15) that even νi > 0.
2 We turn to the basic limit theorem for irreducible Markov chains. Related results appear in Chapters 13, 17, and 33. Write u →for convergence in the total variation norm ∥· ∥, and let ˆ MS be the class of probability measures on S.
Theorem 11.22 (convergence dichotomy, Markov, Kolmogorov, Orey) For an irreducible, aperiodic Markov chain in S, one of these cases occurs: (i) There exists a unique invariant distribution ν, the latter satisfies νi > 0 for all i ∈S, and as n →∞, Pμ ◦θ−1 n u →Pν, μ ∈ˆ MS.
(20) (ii) No invariant distribution exists, and as n →∞, pn ij →0, i, j ∈S.
(21) 3Here m|n (pronounced m divides n) means that n is divisible by m.
11. Markov Properties and Discrete-Time Chains 249 Markov chains satisfying (i) are clearly recurrent, whereas those in (ii) may be either recurrent or transient. This suggests a further division of the irreducible, aperiodic, and recurrent Markov chains into positive-recurrent and null-recurrent ones, depending on whether (i) or (ii) occurs.
We shall prove Theorem 11.22 by a simple coupling4 argument. For our present purposes, an elementary coupling by independence is sufficient.
Lemma 11.23 (coupling) Let X, Y be independent Markov chains in S, T with transition matrices (pii′) and (qjj′), respectively. Then (i) (X, Y ) is a Markov chain in S×T with transition matrix rij, i′j′ = pii′ qjj′, (ii) if X and Y are irreducible, aperiodic, then so is (X, Y ), (iii) for X, Y as in (ii) with invariant distributions μ, ν, the chain (X, Y ) is recurrent.
Proof: (i) Using Proposition 11.2, we may compute the finite-dimensional distributions of (X, Y ) for an arbitrary initial distribution μ ⊗ν on S × T.
(ii) For X, Y as stated, fix any i, i′ ∈S and j, j′ ∈T. Then Proposition 11.18 yields rn ij, i′j′ = pn ii′ qn jj′ > 0 for all but finitely many n ∈N, and so (X, Y ) has again the stated properties.
(iii) Since the product measure μ ⊗ν is clearly invariant for (X, Y ), the assertion follows by Proposition 11.17.
2 The point of the construction is that, if the coupled processes eventually meet, their distributions are asymptotically the same.
Lemma 11.24 (strong ergodicity) For an irreducible, recurrent Markov chain in S2 with transition matrix pii′ pjj′, we have as n →∞ Pμ ◦θ−1 n −Pν ◦θ−1 n →0, μ, ν ∈ˆ MS.
(22) Proof (Doeblin): Let X, Y be independent with distributions Pμ, Pν. By Lemma 11.23 the pair (X, Y ) is again Markov with respect to the induced filtration F, and by Proposition 11.9 it satisfies the strong Markov property at every finite optional time τ. Choosing τ = inf{n ≥0; Xn = Yn}, we get for any measurable set A ⊂S∞ P θτX ∈A Fτ = PXτA = PYτA = P θτY ∈A Fτ .
4The idea is to study the limiting behavior of a process X by approximating with a process ˜ X whose asymptotic behavior is known, thus replacing a technical analysis of distri-butional properties by a simple pathwise comparison. The method is frequently employed in subsequent chapters.
250 Foundations of Modern Probability In particular, (τ, Xτ, θτX) d = (τ, Xτ, θτY ). Putting ˜ Xn = Xn for n ≤τ and ˜ Xn = Yn otherwise, we obtain ˜ X d = X, and so for any A as above, P{θnX ∈A} −P{θnY ∈A} = P{θn ˜ X ∈A} −P{θnY ∈A} = P θn ˜ X ∈A, τ > n −P θnY ∈A, τ > n ≤P{τ > n} →0.
2 The next result ensures the existence of an invariant distribution. Here a coupling argument is again useful.
Lemma 11.25 (existence) If (21) fails, then an invariant distribution exists.
Proof: Suppose that (21) fails, so that lim supn pn i0,j0 > 0 for some i0, j0 ∈S.
By a diagonal argument, we have pn i0,j →cj along a sub-sequence N ′ ⊂N for some constants cj with cj0 > 0, where 0 < Σj cj ≤1 by Fatou’s lemma.
To extend the convergence to arbitrary i, let X, Y be independent pro-cesses with the same transition matrix (pij), and conclude from Lemma 11.23 that (X, Y ) is an irreducible Markov chain on S2 with transition probabilities qij,i′j′ = pii′pjj′. If (X, Y ) is transient, then Proposition 11.16 yields Σ n (pn ij)2 = Σ n qn ii,jj < ∞, i, j ∈S, and (21) follows. The pair (X, Y ) is then recurrent, and Lemma 11.24 gives pn ij −pn i0,j →0 for all i, j ∈I. Hence, pn ij →cj along N ′ for all i, j.
Next conclude from the Chapman–Kolmogorov relation that pn+1 ik = Σ j pn ij pjk = Σ j pij pn jk, i, k ∈S.
Letting n →∞along N ′, and using Fatou’s lemma on the left and dominated convergence on the right, we get Σ j cj pjk ≤Σ j pij ck = ck, k ∈S.
(23) Summing over k gives Σj cj ≤1 on both sides, and so (23) holds with equality. Thus, (ci) is invariant, and we may form the invariant distribution νi = ci/Σj cj.
2 Proof of Theorem 11.22: If no invariant distribution exists, then (21) holds by Lemma 11.25. Now let ν be an invariant distribution, and note that νi > 0 for all i by Proposition 11.20. By Lemma 11.23 the coupled chain in Lemma 11.24 is irreducible and recurrent, and so (22) holds for any initial distribution μ, and (20) follows since Pν ◦θ−1 n = Pν by Lemma 11.11. If even ν′ is invariant, then (20) yields Pν′ = Pν, and so ν′ = ν.
2 The limits in Theorem 11.22 may be expressed as follows in terms of the mean recurrence times Ejτj.
11. Markov Properties and Discrete-Time Chains 251 Theorem 11.26 (mean recurrence times, Kolmogorov) For a Markov chain in S and states i, j ∈S with j aperiodic, we have as n →∞ p n ij →Pi{τj < ∞} Ejτj .
(24) Proof: First let i = j. If j is transient, then pn jj →0 and Ejτj = ∞, and so (24) is trivially true. If instead j is recurrent, then the restriction of X to the set Sj = {i; rji > 0} is irreducible and recurrent by Lemma 11.21, and aperiodic by Proposition 11.20. Hence, pn jj converges by Theorem 11.22.
To identify the limit, define Ln = sup k ∈Z+; τ k j ≤n = n Σ k=1 1{Xk = j}, n ∈N.
The τ n j form a random walk under Pj, and so the law of large numbers yields L(τ n j ) τ n j = n τ n j → 1 Ejτj a.s. Pj.
By the monotonicity of Lk and τ n j , we obtain Ln/n →(Ejτj)−1 a.s. Pj. Since Ln ≤n, we get by dominated convergence 1 n n Σ k=1 pk jj = EjLn n → 1 Ejτj , and (24) follows.
Now let i ̸= j. Using the strong Markov property, the disintegration theo-rem, and dominated convergence, we get pn ij = Pi{Xn = j} = Pi τj ≤n, (θτjX)n−τj = j = Ei p n−τj jj ; τj ≤n →Pi{τj < ∞} Ejτj .
2 Exercises 1. Let X be a process with Xs⊥ ⊥Xt{Xu, u ≥t} for all s < t. Show that X is Markov with respect to the induced filtration.
2.
Show that the Markov property is preserved under time-reversal, but the possible space or time homogeneity is not, in general. (Hint: Use Lemma 11.1, and consider a random walk starting at 0.) 3. Let X be a Markov process in a space S, and fix a measurable function f on S. Show by an example that the process Yt = f(Xt) need not be Markov. (Hint: Let X be a simple symmetric random walk in Z, and take f(x) = [x/2].) 4.
Let X be a Markov process in R with transition functions μt satisfying μt(x, B) = μt(−x, −B). Show that the process Yt = |Xt| is again Markov.
252 Foundations of Modern Probability 5. Fix any process X on R+, and define Yt = Xt = {Xs∧t; s ≥0}. Show that Y is Markov with respect to the induced filtration.
6. Consider a random element ξ in a Borel space and a filtration F with F∞⊂ σ{ξ}. Show that the measure-valued process Xt = L(ξ |Ft) is Markov. (Hint: Note that ξ⊥ ⊥Xt Ft for all t.) 7. Let X be a time-homogeneous Markov process in a Borel space S. Prove the existence of some measurable functions fh : S×[0, 1] →S, h ≥0, and U(0, 1) random variables ϑt,h⊥ ⊥Xt, t, h ≥0, such that Xt+h = fh(Xt, ϑt,h) a.s. for all t, h ≥0.
8. Let X be a time-homogeneous, rcll Markov process in a Polish space S. Prove the existence of a measurable function f : S × [0, 1] →D R+,S and some U(0, 1) ran-dom variables ϑt⊥ ⊥Xt such that θtX = f(Xt, ϑt) a.s. Extend the result to optional times taking countably many values.
9. Let X be a process on R+ with state space S, and define Yt = (Xt, t), t ≥0.
Show that X, Y are simultanously Markov, and that Y is then time-homogeneous.
Give a relation between the transition kernels for X, Y . Express the strong Markov property of Y at a random time τ in terms of the process X.
10. Let X be a discrete-time Markov process in S with invariant distribution ν.
Show that Pν{Xn ∈B i.o.} ≥νB for every measurable set B ⊂S. Use the result to give an alternative proof of Proposition 11.17. (Hint: Use Fatou’s lemma.) 11. Fix an irreducible Markov chain in S with period d. Show that S has a unique partition into subsets S1, . . . , Sd, such that pij = 0 unless i ∈Sk and j ∈Sk+1 for some k ∈{1, . . . , d}, with addition defined modulo d.
12. Let X be an irreducible Markov chain with period d, and define S1, . . . , Sd as above. Show that the restrictions of (Xnd) to S1, . . . , Sd are irreducible, aperiodic and either all positive recurrent or all null recurrent. In the former case, show that the original chain has a unique invariant distribution ν. Further show that (20) holds iffμSk = 1/d for all k. (Hint: If (Xnd) has an invariant distribution νk on Sk, then νk+1 j = Σi νk i pij form an invariant distribution in Sk+1.) 13. Given a Markov chain X in S, define the classes Ci as in Lemma 11.21. Show that if j ∈Ci but i ̸∈Cj for some i, j ∈S, then i is transient. If instead i ∈Cj for every j ∈Ci, show that Ci is irreducible (so that the restriction of X to Ci is an irreducible Markov chain). Further show that the irreducible sets are disjoint, and that every state outside the irreducible sets is transient.
14. For any Markov chain, show that (20) holds iffΣj |pn ij −νj| →0 for all i.
15.
Let X be an irreducible, aperiodic Markov chain in N.
Show that X is transient iffXn →∞a.s. under every initial distribution, and null recurrent iffthe same divergence holds in probability but not a.s.
16. For every irreducible, positive recurrent subset Sk ⊂S, there exists a unique invariant distribution νk restricted to Sk, and every invariant distribution is a convex combination Σk ck νk.
17. Show that a Markov chain in a finite state space S has at least one irreducible set and one invariant distribution. (Hint: Starting from any i0 ∈S, choose i1 ∈Ci0, i2 ∈Ci1, etc. Then n Cin is irreducible.) 18. Let X⊥ ⊥Y be Markov processes with transition kernels μs,t and νs,t. Show that (X, Y ) is again Markov with transition kernels μs,t(x, ·) ⊗νs,t(y, ·). (Hint: Compute 11. Markov Properties and Discrete-Time Chains 253 the finite-dimensional distributions from Proposition 11.2, or use Proposition 8.12 with no computations.) 19.
Let X⊥ ⊥Y be irreducible Markov chains with periods d1, d2.
Show that Z = (X, Y ) is irreducible iffd1, d2 are relatively prime, and that Z has then period d1d2.
20. State and prove a discrete-time version of Theorem 11.14. Further simplify the continuous-time proof when S is countable.
Chapter 12 Random Walks and Renewal Processes Recurrence dichotomy and criteria, transience for d ≥3, Wald equa-tions, first maximum and last return, duality, ladder times and heights, fluctuations, Wiener–Hopf factorization, ladder distributions, bounded-ness and divergence, stationary renewal process, occupation measure, two-sided renewal theorem, asymptotic delay, renewal equation Before turning to the continuous-time Markov chains, we consider the special case of space- and time-homogeneous processes in discrete time, where the in-crements are independent, identically distributed (i.i.d.), and thus determined by the single one-step distribution μ. Such processes are known as random walks. The corresponding continuous-time processes are the L´ evy processes, to be discussed in Chapter 16. For simplicity, we consider only random walks in Euclidean spaces, though the definition makes sense in an arbitrary measurable group.
Thus, a random walk in Rd is defined as a discrete-time random process X = (Xn) evolving by i.i.d. steps ξn = ΔXn = Xn −Xn−1. For most purposes we may take X0 = 0, so that Xn = ξ1+. . .+ξn for all n. Though random walks may be regarded as the simplest of all Markov chains, they exhibit many basic features of the more general discrete-time processes, and hence may serve as a good introduction to the general subject. We shall further see how random walks enter naturally into the description and analysis of certain continuous-time phenomena.
Some basic facts about random walks have already been noted in previous chapters. Thus, we proved some simple 0−1 laws in Chapter 4, and in Chapters 5–6 we established the ultimate versions of the law of large numbers and the central limit theorem, both of which deal with the asymptotic behavior of n−cXn for suitable constants c > 0. More sophisticated limit theorems of this kind will be derived in Chapters 22–23 and 30, often through approximation by a Brownian motion or a more general L´ evy process.
Random walks in Rd are either recurrent or transient, and our first aim is to derive recurrence criteria in terms of the transition distribution μ. We proceed with some striking connections between maxima and return times, anticipating the arcsine laws in Chapters 14 and 22. This is followed by a detailed study of ladder times and heights for one-dimensional random walks, culminating with the Wiener–Hopf factorization and Baxter’s formula. Finally, we prove a two-sided version of the renewal theorem, describing the asymptotic behavior of the occupation measure and its intensity for a transient random walk.
255 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 256 Foundations of Modern Pobability In addition to the mentioned connections, we note the relevance of re-newal theory for the study of continuous-time Markov chains, as considered in Chapter 13. Renewal processes are simple instances of regenerative sets, to be studied in full generality in Chapter 29, in connection with local time and excursion theory.
To begin our systematic discussion of random walks, assume as before that Xn = ξ1 + · · · + ξn for all n ∈Z+, where the ξn are i.i.d. random vectors in Rd. The distribution of X is then determined by the common distribution μ = L(ξn) of the increments. By the effective dimension of X we mean the dimension of the linear subspace spanned by the support of μ. For most pur-poses, we may assume that the effective dimension agrees with the dimension of the underlying space, since we may otherwise restrict our attention to the generated subspace.
A random walk X in Rd is said to be recurrent if lim infn→∞|Xn| = 0 a.s.
and transient if |Xn| →∞a.s. In terms of the occupation measure ξ = Σn δXn, transience means that ξ is a.s. locally finite, and recurrence that ξBε 0 = ∞a.s.
for every ε > 0, where Br x = {y; |x −y| < r}. We define the recurrence set A of X as the set of points x ∈Rd with ξBε x = ∞a.s. for every ε > 0.
We now state the basic dichotomy for random walks in Rd. Here the inten-sity (measure) Eξ of ξ is defined by (Eξ)f = E(ξf).
Theorem 12.1 (recurrence dichotomy) Let X be a random walk in Rd with occupation measure ξ and recurrence set A. Then exactly one of these cases occurs: (i) X is recurrent with A = supp E ξ, which is then a closed subgroup of Rd, (ii) X is transient and E ξ is locally finite.
Proof: The event |Xn| →∞has probability 0 or 1 by Kolmogorov’s 0−1 law. When |Xn| →∞a.s., the Markov property at time m yields for any m, n ∈N and r > 0 P |Xm| < r, inf k≥n |Xm+k| ≥r ≥P |Xm| < r, inf k≥n |Xm+k −Xm| ≥2r = P{|Xm| < r} P inf k≥n |Xk| ≥2r .
Noting that the event on the left can occur for at most n different values of m, we get P inf k≥n |Xk| ≥2r Σ m≥1 P{|Xm| < r} < ∞, n ∈N.
Since |Xk| →∞a.s., the probability on the left is positive for large enough n, and so for r > 0 E ξBr 0 = E Σ m≥1 1{|Xm| < r} = Σ m≥1 P{|Xm| < r} < ∞, 12. Random Walks and Renewal Processes 257 which shows that Eξ is locally finite.
Next let |Xn| ̸→∞a.s., so that P{|Xn| < r i.o.} > 0 for some r > 0.
Covering Br 0 by finitely many balls G1, . . . , Gn of radius ε/2, we conclude that P{Xn ∈Gk i.o.} > 0 for at least one k. By the Hewitt–Savage 0−1 law, the latter probability is in fact 1. Thus, the optional time τ = inf{n; Xn ∈Gk} is a.s. finite, and so the strong Markov property yields 1 = P Xn ∈Gk i.o. ≤P |Xτ+n −Xτ| < ε i.o. = P |Xn| < ε i.o. = P{ξBε 0 = ∞}, which shows that 0 ∈A, and X is recurrent.
The set S = supp Eξ clearly contains A. To prove the converse, fix any x ∈S and ε > 0. By the strong Markov property at σ = inf{n ≥0; |Xn −x| < ε/2} and the recurrence of X, we get P{ξBε x = ∞} = P |Xn −x| < ε i.o. ≥P σ < ∞, |Xσ+n −Xσ| < ε/2 i.o. = P{σ < ∞} P |Xn| < ε/2 i.o. > 0.
By the Hewitt–Savage 0−1 law, the probability on the left then equals 1, which means that x ∈A. Thus, S ⊂A ⊂S, and so in this case A = S.
The set S is clearly a closed additive semi-group in Rd. To see that in this case it is even a group, it remains to show that x ∈S implies −x ∈S. Defining σ as before and using the strong Markov property, along with the fact that x ∈S ⊂A, we get P{ξBε −x = ∞} = P |Xn + x| < ε i.o. = P |Xσ+n −Xσ + x| < ε i.o. ≥P |Xσ −x| < ε, |Xn| < ε/2 i.o. = 1, which shows that −x ∈A = S.
2 We give some simple sufficient conditions for recurrence.
Theorem 12.2 (recurrence for d ≤2) A random walk X in Rd is recurrent under each of these conditions: • for d = 1: n−1Xn P →0, • for d = 2: Eξ1 = 0 and E|ξ1|2 < ∞.
For d = 1 we recognize the weak law of large numbers, which was char-acterized in Theorem 6.17. In particular, the condition holds when Eξ1 = 0.
258 Foundations of Modern Pobability By contrast, Eξ1 ∈(0, ∞] implies Xn →∞a.s. by the strong law of large numbers, so in that case X is transient.
Our proof of Theorem 12.2 is based on the following scaling relation1.
Lemma 12.3 (scaling) For a random walk X in Rd, Σ n≥0 P{|Xn| ≤rε} < ⌢rd Σ n≥0 P{|Xn| ≤ε}, r ≥1, ε > 0.
Proof: The ball Brε 0 may be covered by balls G1, . . . , Gm of radius ε/2, where m < ⌢rd. Introducing the optional times τk = inf{n; Xn ∈Gk}, k = 1, . . . , m, we see from the strong Markov property that Σ n P{|Xn| ≤rε} ≤Σ k,n P{Xn ∈Gk} ≤Σ k,n P |Xτk+n −Xτk| ≤ε; τk < ∞ = Σ k P{τk < ∞} Σ n P{|Xn| ≤ε} < ⌢rd Σ n P{|Xn| ≤ε}.
2 Proof of Theorem 12.2 (Chung & Ornstein): (d = 1) Fix any ε > 0 and r ≥1, and conclude from Lemma 12.3 that Σ n P{|Xn| ≤ε} > ⌢r−1Σ n P{|Xn| ≤rε} = ∞ 0 P |X[rt]| ≤rε dt.
Since the integrand on the right tends to 1 as r →∞, the integral tends to ∞ by Fatou’s lemma, and the recurrence of X follows by Theorem 12.1.
(d = 2) We may take X to have effective dimension 2, since the 1-dimen-sional case is already covered by (i). Then the central limit theorem yields n−1/2Xn d →ζ for a non-degenerate normal random vector ζ. In particular, P{|ζ| ≤c} > ⌢c2 for bounded c > 0. Fixing any ε > 0 and r ≥1, we conclude from Lemma 12.3 that Σ n P{|Xn| ≤ε} > ⌢r−2Σ n P{|Xn| ≤rε} = ∞ 0 P |X[r2t]| ≤rε dt.
As r →∞, we get by Fatou’s lemma Σ n P{|Xn| ≤ε} > ⌢ ∞ 0 P |ζ| ≤εt−1/2 dt > ⌢ε2 ∞ 1 t−1dt = ∞, and the recurrence follows again by Theorem 12.1.
2 An exact recurrence criterion can be given in terms of the characteristic function ˆ μ of μ. Here we write Bε = Bε 0 for an open ε-ball around the origin.
1Here a < ⌢b means that a ≤c b for some constant c > 0.
12. Random Walks and Renewal Processes 259 Theorem 12.4 (recurrence criterion, Chung & Fuchs) Let X be a random walk in Rd based on a distribution μ, and fix an ε > 0. Then X is recurrent iff sup 0<r<1 Bε ℜ 1 1 −rˆ μt dt = ∞.
(1) The proof is based on a classical identity.
Lemma 12.5 (Parseval) Let μ, ν be probability measures on Rd with charac-teristic functions ˆ μ, ˆ ν. Then ˆ μ dν = ˆ ν dμ.
Proof: Use Fubini’s theorem.
2 Proof of Theorem 12.4: The function f(s) = (1−|s|)+ has Fourier transform ˆ f(t) = 2 t−2(1 −cos t), and so the tensor product f ⊗d(s) = k≤d f(sk) on Rd has Fourier transform ˆ f ⊗d(t) = k≤d ˆ f(tk). Writing μ∗n = L(Xn), we get by Lemma 12.5 for any a > 0 and n ∈Z+ ˆ f ⊗d(x/a) μ∗n(dx) = ad f ⊗d(at) ˆ μn t dt.
By Fubini’s theorem, it follows that for any r ∈(0, 1) ˆ f ⊗d(x/a) Σ n≥0 rnμ∗n(dx) = ad f ⊗d(at) 1 −rˆ μt dt.
(2) Now assume that (1) fails. Putting δ = ε−1d1/2, we get by (2) Σ n P{|Xn| < δ} = Σ n μ∗n(Bδ) < ⌢ ˆ f ⊗d(x/δ)Σ n μ∗n(dx) = δd sup r<1 f ⊗d(δt) 1 −rˆ μt dt < ⌢ε−d sup r<1 Bε dt 1 −rˆ μt < ∞, and so X is transient by Theorem 12.1.
To prove the converse, we note that ˆ f ⊗d has Fourier transform (2π)df ⊗d.
Hence, (2) remains true with f and ˆ f interchanged, apart from a factor (2π)d on the left. If X is transient, then for any ε > 0 with δ = ε−1d 1/2, sup r<1 Bε dt 1 −rˆ μt < ⌢sup r<1 ˆ f ⊗d(t/ε) 1 −rˆ μt dt < ⌢εd f ⊗d(εx)Σ n μ∗n(dx) ≤εd Σ n μ∗n(Bδ) < ∞.
2 260 Foundations of Modern Pobability When μ is symmetric in the sense that ξ1 d = −ξ1, then ˆ μ is real valued, and the last criterion becomes Bε dt 1 −ˆ μt = ∞.
By a symmetrization of X we mean a random walk ˜ X = X −X′, where X′ is an independent copy of X. We may relate the recurrence behavior of X to that of ˜ X: Corollary 12.6 (symmetrization) Let X be a random walk in Rd with sym-metrization ˜ X. Then X is recurrent ⇒ ˜ X is recurrent.
Proof: Since clearly (ℜz)(ℜz−1) ≤1 for any z ̸= 0 in C, we get ℜ 1 1 −rˆ μ2 ≤ 1 1 −r ℜˆ μ2 ≤ 1 1 −r|ˆ μ|2.
Thus, if ˜ X is transient, then so is the random walk (X2n) by Theorem 12.4.
But then |X2n| →∞a.s. by Theorem 12.1, and so |X2n+1| →∞a.s. By combination, |Xn| →∞a.s., which means that X is transient.
2 The following sufficient conditions for recurrence or transience are often useful for applications: Corollary 12.7 (sufficient conditions) Let X be a random walk in Rd, and fix any ε > 0. Then (i) Bε ℜ 1 1 −ˆ μt dt = ∞ ⇒ X is recurrent, (ii) Bε dt 1 −ℜˆ μt < ∞ ⇒ X is transient.
Proof: (i) Under the stated condition, Fatou’s lemma yields for any rn ↑1 lim inf n→∞ Bε ℜ 1 1 −rnˆ μ ≥ Bε lim n→∞ℜ 1 1 −rnˆ μ = Bε ℜ 1 1 −ˆ μ = ∞.
Thus, (1) holds and X is recurrent.
(ii) Let μ be such as stated. Decreasing ε if necessary, we may assume that ℜˆ μ ≥0 on Bε. Then as before, Bε ℜ 1 1 −rˆ μ ≤ Bε 1 1 −r ℜˆ μ ≤ Bε 1 1 −ℜˆ μ < ∞, and so (1) fails. Thus, X is transient.
2 We may now supplement Theorem 12.2 with some conclusive information for d ≥3.
12. Random Walks and Renewal Processes 261 Theorem 12.8 (transience for d ≥3) Let X be a random walk of effective dimension d. Then d ≥3 ⇒ X is transient.
Proof: We may take the symmetrized distribution to be d -dimensional, since μ is otherwise supported by a hyper-plane outside the origin, and the transience follows by the strong law of large numbers. By Corollary 12.6 it is enough to prove that the symmetrized random walk ˜ X is transient, and so we may assume that μ is symmetric. Considering the conditional distributions on Br and Bc r for large enough r > 0, we may write μ as a convex combination c μ1 + (1 −c) μ2, where μ1 is symmetric and d -dimensional with bounded support. Writing (rij) for the covariance matrix of μ1, we get as in Lemma 6.9 ˆ μ1(t) = 1 −1 2 Σ i,j rij titj + o(|t|2), t →0.
Since the matrix (rij) is positive definite, it follows that 1 −ˆ μ1(t) > ⌢|t|2 for small enough |t|, say for t ∈Bε. A similar relation then holds for ˆ μ, and so Bε dt 1 −ˆ μt < ⌢ Bε dt |t|2 < ⌢ ε 0 rd−3dr < ∞.
Thus, X is transient by Theorem 12.4.
2 For optional times τ, some moments of Xτ are the same as if 2 X⊥ ⊥τ. Given a discrete filtration F, we say that X is an F-random walk, if it is adapted to F and such that Fk⊥ ⊥ξk+1 for all k.
Theorem 12.9 (Wald equations) Let Xn = ξ1 + · · · + ξn be an F-random walk with E ξ1 = μ and Var(ξ1) = σ2, and let τ be an F-optional time. Then (i) for ξ1 ∈L1 and E τ < ∞, E Xτ = μ E τ, (ii) for μ = 0, σ2 < ∞, and E τ < ∞, E X2 τ = σ2E τ, (iii) for X⊥ ⊥τ, σ2 < ∞, and E τ 2 < ∞, Var(Xτ) = σ2E τ + μ2 Var(τ).
Proof: (i) First let ξk ≥0 a.s. for all k. Since {τ ≥k}⊥ ⊥ξk for all k, we get by Fubini’s theorem and Lemma 4.4 2This illustrates the idea of decoupling, where the distribution of a random pair (ξ, η) is compared with that of a pair (˜ ξ, ˜ η) with independent ˜ ξ d = ξ and ˜ η d = η. Further instances appear in Chapters 15–16 and 27–28.
262 Foundations of Modern Pobability E Xτ = E Σ k ξk1{τ ≥k} = Σ k E(ξk; τ ≥k) = Σ k E ξk P{τ ≥k} = μΣ k P{τ ≥k} = μ Eτ.
The general result follows by subtraction of the formulas for random walks with terms ξ± k = (±ξk) ∨0.
(ii) First let τ be bounded, so that all sums below are finite. Since μ = 0 and ξj1{τ ≥k}⊥ ⊥ξk when j < k, we get by Lemma 4.4 E X2 τ = E Σ k ξk1{τ ≥k} 2 = E Σ k ξ2 k 1{τ ≥k} + 2 E Σ j <k ξj ξk1{τ ≥k} = Σ k E ξ2 k; τ ≥k + 2 Σ j <k E ξj ξk; τ ≥k = Σ k E ξ2 k P{τ ≥k} + 2 Σ j <k E ξj; τ ≥k E ξk = σ2Σ k P{τ ≥k} = σ2Eτ.
Similarly, we get for m ≤n E Xτ∧m −Xτ∧n 2 = E n Σ k=m+1 ξk1{τ ≥k} 2 = σ2 n Σ k=m+1 P{τ ≥k} = σ2 E(τ ∧n) −E(τ ∧m) →0.
Hence, Xτ∧n →Xτ holds both a.s. and in L2, and the relation EX2 τ∧n = σ2E(τ ∧n) extends in the limit to EX2 τ = σ2Eτ.
(iii) Using Lemma 8.2 (i), and noting that L(ζ, τ| τ) = L(ζ, t) t=τ when ζ⊥ ⊥τ, we get Var(Xτ) = E Var(Xτ| τ) + Var E(Xτ| τ) = E τ Var(ξ1) + Var(τ Eξ1) = σ2Eτ + μ2 Var(τ).
2 We proceed with a detailed study of one-dimensional random walks Xn = ξ1 + · · · + ξn, n ∈Z+. Say that X is simple if |ξ1| = 1 a.s. For a simple, symmetric random walk X we note that un ≡P{X2n = 0} = 2−2n 2n n , n ∈Z+.
(3) First we give a surprising connection between the un and the times of last return to the origin.
Theorem 12.10 (last return, Feller) Let X be a simple, symmetric random walk in Z, put σn = max{k ≤n; X2k = 0}, and define un by (3). Then P{σn = k} = uk un−k, 0 ≤k ≤n.
12. Random Walks and Renewal Processes 263 The proof is based on a classical symmetry property, which will also appear in a continuous-time version as Lemma 14.14.
Lemma 12.11 (reflection principle, Andr´ e) For any symmetric random walk X and optional time τ, we have ˜ X d = X, where ˜ Xn = Xn∧τ −(Xn −Xn∧τ), n ≥0.
Proof: We may clearly assume that τ < ∞a.s. Writing X′ n = Xτ+n −Xτ, n ∈Z+, we get by the strong Markov property X d = X′⊥ ⊥(Xτ, τ), and by symmetry −X d = X. Hence, by combination (−X′, Xτ, τ) d = (X′, Xτ, τ), and the assertion follows by suitable assembly.
2 Proof of Theorem 12.10: The Markov property at time 2k yields P{σn = k} = P{X2k = 0} P{σn−k = 0}, 0 ≤k ≤n, which reduces the proof to the case of k = 0. Thus, it remains to show that P X2 ̸= 0, . . . , X2n ̸= 0 = P{X2n = 0}, n ∈N.
By the Markov property at time 1, the left-hand side equals 1 2 P min k<2n Xk = 0 + 1 2 P max k<2n Xk = 0 = P{M2n−1 = 0}, where Mn = maxk≤n Xk. Using Lemma 12.11 with τ = inf{k; Xk = 1}, we get 1 −P{M2n−1 = 0} = P{M2n−1 ≥1} = P M2n−1 ≥1, X2n−1 ≥1 + P M2n−1 ≥1, X2n−1 ≤0 = P{X2n−1 ≥1} + P{X2n−1 ≥2} = 1 −P{X2n−1 = 1} = 1 −P{X2n = 0}.
2 We continue with a striking connection between the maximum of a symmet-ric random walk and the last return probabilities in Theorem 12.10. Related results for Brownian motion and more general random walks will appear in Theorems 14.16 and 22.11.
Theorem 12.12 (first maximum, Sparre-Andersen) Let X be a random walk based on a symmetric, diffuse distribution, and put Mn = max k≤n Xk, τn = min k ≥0; Xk = Mn .
Define σn as in Proposition 12.10, in terms of a simple, symmetric random walk. Then τn d = σn for every n ≥0.
264 Foundations of Modern Pobability Here and below, we will use the relation (X1, . . . , Xn) d = Xn −Xn−1, . . . , Xn −X0 , n ∈N, (4) valid for any random walk X.
The formula is obvious from the fact that (ξ1, . . . , ξn) d = (ξn, . . . , ξ1).
Proof of Theorem 12.12: By the symmetry of X together with (4), we have vk ≡P{τk = 0} = P{τk = k}, k ≥0.
(5) Hence, the Markov property at time k yields P{τn = k} = P{τk = k} P{τn−k = 0} = vk vn−k, 0 ≤k ≤n.
(6) Clearly σ0 = τ0 = 0. Proceeding by induction, let σk d = τk and hence uk = vk for all k < n. Comparing (6) with Theorem 12.10 gives P{σn = k} = P{τn = k}, 0 < k < n, which extends to 0 ≤k ≤n by (5). Thus, σn d = τn.
2 For a one-dimensional random walk X, the ascending ladder times τ1, τ2, . . .
are given recursively by τn = inf k > τn−1; Xk > Xτn−1 , n ∈N, (7) starting with τ0 = 0. The associated ascending ladder heights are defined as the random variables Xτn, n ∈N, where X∞may be interpreted as ∞. In a similar way, we define the descending ladder times τ − n and heights Xτ − n , n ∈N.
The times τn and τ − n are clearly optional. By the strong Markov property, the pairs (τn, Xτn) and (τ − n , Xτ − n ) form possibly terminating random walks in ¯ R2.
Replacing the relation Xk > Xτn−1 in (7) by Xk ≥Xτn−1, we obtain the weak ascending ladder times σn and heights Xσn. Similarly, we may introduce the weak descending ladder times σ− n and heights Xσ− n . The mentioned sequences are connected by a pair of simple but powerful duality3 relations.
Lemma 12.13 (duality) Let η, η′, ζ, ζ′ be the occupation measures of the sequences (Xτn), (Xσn), {Xn; n < τ − 1 }, and {Xn; n < σ− 1 }, respectively. Then E η = E ζ′, E η′ = E ζ.
3Duality methods are often used in probability.
Further notable instances appear in Chapters 21 and 31.
12. Random Walks and Renewal Processes 265 Proof: By (4) we have for any B ∈B(0,∞) and n ∈N P X1 ∧· · · ∧Xn−1 > 0, Xn ∈B = P X1 ∨· · · ∨Xn−1 < Xn ∈B = Σ k P τk = n, Xτk ∈B .
(8) Summing over n ≥1 gives E ζ′B = E ηB, and the first assertion follows. The proof of the second assertion is similar.
2 The last result yields some remarkable information. Thus, for a simple, symmetric random walk, the expected number of visits to an arbitrary state k ̸= 0, before the first return to 0, is constant and equal to 1. In particular, the mean recurrence time is infinite, and so X is a null-recurrent Markov chain.
The asymptotic behavior of a random walk is related to the expected values of the ladder times: Proposition 12.14 (fluctuations and mean ladder times) For a non-degen-erate random walk X in R, exactly one of these cases occurs: (i) Xn →∞a.s. and Eτ1 < ∞, (ii) Xn →−∞a.s. and Eτ − 1 < ∞, (iii) lim supn(±Xn) = ∞a.s. and E σ1 = E σ− 1 = ∞.
Proof: By Corollary 4.17 there are only the three possibilities Xn →∞a.s., Xn →−∞a.s., and lim supn(±Xn) = ∞a.s. In the first case, σ− n < ∞for finitely many n, say for n < κ < ∞. Then κ is geometrically distributed, and so Eτ1 = Eκ < ∞by Lemma 12.13. The proof in case (ii) is similar. In case (iii), all variables τn and τ − n are finite, and Lemma 12.13 yields Eσ1 = Eσ− 1 = ∞. 2 Next we explore the relationship between the asymptotic behavior of a random walk and the expected values of X1 and Xτ1. Define Eγ = Eγ+ −Eγ− whenever Eγ+ ∧Eγ−< ∞.
Proposition 12.15 (fluctuations and mean ladder heights) Let X be a non-degenerate random walk in R. Then (i) EX1 = 0 ⇒lim supn(±Xn) = ∞a.s., (ii) EX1 ∈(0, ∞] ⇒Xn →∞a.s. and EXτ1 = Eτ1 EX1, (iii) EX+ 1 = EX− 1 = ∞⇒EXτ1 = −EXτ − 1 = ∞.
Part (i) is clear from Theorem 12.2 (i). It can also be obtained more di-rectly, as follows.
Proof: (i) By symmetry, we may assume that lim supn Xn = ∞a.s. If Eτ1 < ∞, we may apply the law of large numbers to each of the three ratios in the equation Xτn τn τn n = Xτn n , n ∈N, 266 Foundations of Modern Pobability to get 0 = EX1Eτ1 = EXτ1 > 0. The contradiction shows that Eτ1 = ∞, and so lim infn Xn = −∞by Proposition 12.14.
(ii) Here Xn →∞a.s. by the law of large numbers, and the formula EXτ1 = Eτ1 EX1 follows as before.
(iii) This is clear from the relations Xτ1 ≥X+ 1 and Xτ − 1 ≤−X− 1 .
2 We proceed with a celebrated factorization, providing some more detailed information about the distributions of ladder times and heights. Here we write χ± for the possibly defective distributions of the pairs (τ1, Xτ1) and (τ − 1 , Xτ − 1 ), respectively, and let ψ± denote the corresponding distributions of (σ1, Xσ1) and (σ− 1 , Xσ− 1 ). Put χ± n = χ±({n}×·) and ψ± n = ψ±({n}×·), and define a measure χ0 on N by χ0 n = P X1 ∧· · · ∧Xn−1 > 0 = Xn = P X1 ∨· · · ∨Xn−1 < 0 = Xn , n ∈N, where the second equality holds by (4).
Theorem 12.16 (Wiener–Hopf factorization) For a random walk in R based on a distribution μ, we have (i) δ0 −δ1 ⊗μ = (δ0 −χ+) ∗(δ0 −ψ−) = (δ0 −ψ+) ∗(δ0 −χ−), (ii) δ0 −ψ± = (δ0 −χ±) ∗(δ0 −χ0).
Note that the convolutions in (i) are defined on the space Z+ × R, whereas those in (ii) can be regarded as defined on Z+. Alternatively, we may regard χ0 as a measure on N×{0}, and think of all convolutions as defined on Z+ ×R.
Proof: (i) Define the measures ρ1, ρ2, . . . on (0, ∞) by ρnB = P X1 ∧· · · ∧Xn−1 > 0, Xn ∈B = E Σ k 1 τk = n, Xτk ∈B , n ∈N, B ∈B(0,∞), (9) where the second equality holds by (8). Put ρ0 = δ0, and regard the sequence ρ = (ρn) as a measure on Z+×(0, ∞). Noting that the corresponding measures on R equal ρn + ψ− n and using the Markov property at time n −1, we get ρn + ψ− n = ρn−1 ∗μ = ρ ∗(δ1 ⊗μ) n, n ∈N.
(10) Applying the strong Markov property at τ1 to the second expression in (9), we see that also ρn = n Σ k=1 χ+ k ∗ρn−k = (χ+ ∗ρ)n, n ∈N.
(11) 12. Random Walks and Renewal Processes 267 Recalling the values at zero, we get from (10) and (11) ρ + ψ−= δ0 + ρ ∗(δ1 ⊗μ), ρ = δ0 + χ+ ∗ρ.
Eliminating ρ between the two equations yields the first relation in (i), and the second relation follows by symmetry.
(ii) Since the restriction of ψ+ to (0, ∞) equals ψ+ n −χ0 n, we get for any B ∈B(0,∞) χ+ n −ψ+ n + χ0 n B = P max k<n Xk = 0, Xn ∈B .
Decomposing the event on the right according to the time of first return to 0, we get χ+ n −ψ+ n + χ0 n = n−1 Σ k=1 χ0 k χ+ n−k = (χ0 ∗χ+)n, n ∈N, and so χ+ −ψ+ + χ0 = χ0 ∗χ+, which is equivalent to the ‘plus’ version of (ii).
The ‘minus’ version follows by symmetry.
2 The preceding factorization yields an explicit formula for the joint distri-bution of first ladder time and height.
Theorem 12.17 (ladder distributions, Sparre-Andersen, Baxter) Let X be a random walk in R. Then for |s| < 1 and u ≥0, we have E sτ1 exp(−uXτ1) = 1 −exp −Σ n≥1 sn n E e−uXn; Xn > 0 .
(12) A similar relation holds for (σ1, Xσ1), with Xn > 0 replaced by Xn ≥0.
Proof: Consider the mixed generating and characteristic functions ˆ χ+ s,t = E sτ1exp itXτ1 , ˆ ψ− s,t = E sσ− 1 exp itXσ− 1 , and note that the first relation in Theorem 12.16 (i) is equivalent to 1 −s ˆ μt = (1 −ˆ χ+ s,t)(1 −ˆ ψ− s,t), |s| < 1, t ∈R.
Taking logarithms and expanding in Taylor series, we obtain Σ n n−1(s ˆ μt)n = Σ n n−1(ˆ χ+ s,t)n + Σ n n−1( ˆ ψ− s,t)n.
For fixed s ∈(−1, 1), this equation is of the form ˆ ν = ˆ ν+ + ˆ ν−, where ν and ν± are bounded signed measures on R, (0, ∞), and (−∞, 0], respectively. By the uniqueness theorem for characteristic functions, we get ν = ν+ + ν−. In particular, ν+ equals the restriction of ν to (0, ∞). Thus, the corresponding Laplace transforms agree, and (12) follows by summation of a Taylor series for the logarithm. A similar argument yields the formula for (σ1, Xσ1).
2 The last result yields expressions for the probability that a random walk stays negative or non-positive, as well as criteria for its divergence to −∞.
268 Foundations of Modern Pobability Corollary 12.18 (boundedness and divergence) For a random walk X in R, (i) P{τ1 = ∞} = 1 Eσ− 1 = exp −Σ n≥1 n−1P{Xn > 0} , (ii) P{σ1 = ∞} = 1 Eτ − 1 = exp −Σ n≥1 n−1P{Xn ≥0} , and the a.s. divergence Xn →−∞is equivalent to each of the conditions: Σ n≥1 n−1P{Xn > 0} < ∞, Σ n≥1 n−1P{Xn ≥0} < ∞.
Proof: The last expression for P{τ1 = ∞} follows from (12) with u = 0, as we let s →1. Similarly, the formula for P{σ1 = ∞} is obtained from the version of (12) for the pair (σ1, Xσ1). In particular, P{τ1 = ∞} > 0 iffthe series in (i) converges, and similarly for the condition P{σ1 = ∞} > 0 in terms of the series in (ii). Since both conditions are equivalent to Xn →−∞a.s., the last assertion follows. Finally, the first equalities in (i) and (ii) are obtained most easily from Lemma 12.13, if we note that the number of strict or weak ladder times τn < ∞or σn < ∞is geometrically distributed.
2 We turn to a detailed study of the occupation measure ξ = Σ n≥0 δXn of a transient random walk in R, based on the transition and initial distribu-tions μ and ν. Recall from Theorem 12.1 that the associated intensity measure E ξ = ν ∗Σ n μ∗n is locally finite. By the strong Markov property, the sequence (Xτ+n −Xτ) has the same distribution for every finite optional time τ. Thus, a similar invariance holds for the occupation measure, and the associated inten-sities agree. A renewal is then said to occur at time τ, and the whole subject is known as renewal theory. When μ and ν are supported by R+, we refer to ξ as a renewal process based on μ and ν, and to E ξ as the associated renewal measure. The standard choice is to take ν = δ0, though in general we may also allow a non-degenerate delay distribution ν.
The occupation measure ξ is clearly a random measure on R, defined as a kernel from Ω to R. By Lemma 15.1 below, the distribution of a random measure on R+ is determined by the distributions of the integrals ξf = f dξ, for all f belonging to the space ˆ C+(R+) of continuous functions f : R+ →R+ with bounded support. For any measure μ on R and constant t ∈R we define θtμ = μ◦θ−1 t with θtx = x+t, so that (θtμ)B = μ(B −t) and (θtμ)f = μ(f ◦θt) for any measurable set B ⊂R and function f ≥0. A random measure ξ is said to be stationary if θtξ d = ξ on R+.
When ξ is a renewal process based on a transition distribution μ, the delayed process η = δα∗ξ is called a stationary version of ξ, if it is stationary on R+ with the same spacing distribution μ. We show that such a version exists whenever μ has finite mean, in which case ν is uniquely determined by μ. Recall that λ denotes Lebesgue measure on R+.
Theorem 12.19 (stationary renewal process) Let ξ be a renewal process based on a distribution μ on R+ with mean m > 0. Then 12. Random Walks and Renewal Processes 269 (i) ξ has a stationary version η on R+ iffm < ∞, (ii) the distribution of η is unique with Eη = m−1λ and delay distribution ν[0, t] = m−1 t 0 μ(s, ∞) ds, t ≥0, (iii) ν = μ iffξ is a stationary Poisson process on R+.
Proof, (i)−(ii): For a delayed renewal process ξ based on μ and ν, Fubini’s theorem yields E ξ = E Σ n≥0 δXn = Σ n≥0 L(Xn) = Σ n≥0 ν ∗μ∗n = ν + μ ∗Σ n≥0 ν ∗μ∗n = ν + μ ∗E ξ, and so ν = (δ0 −μ) ∗Eξ. If ξ is stationary, then Eξ is shift invariant, and Theorem 2.6 yields Eξ = c λ for some constant c > 0. Thus, ν = c (δ0 −μ) ∗λ, and the asserted formula follows with m−1 replaced by c. As t →∞, we get 1 = c m by Lemma 4.4, which implies m < ∞and c = m−1.
Conversely, let m < ∞, and choose ν as stated. Then Eξ = ν ∗Σ n≥0 μ∗n = m−1(δ0 −μ) ∗λ ∗Σ n≥0 μ∗n = m−1λ ∗ Σ n≥0 μ∗n −Σ n≥1 μ∗n = m−1λ, which shows that Eξ is invariant. By the strong Markov property, the shifted random measure θtξ is again a renewal process based on μ, say with delay distribution νt. Since E θtξ = θtEξ, we get as before νt = (δ0 −μ) ∗(θtEξ) = (δ0 −μ) ∗Eξ = ν, which implies θtξ d = ξ, showing that ξ is stationary.
(iii) By Theorem 13.6 below, ξ is a stationary Poisson process on R+ with rate c > 0, iffit is a delayed renewal process based on the exponential distribu-tion with mean c−1, in which case μ = ν. Conversely, let ξ be a stationary re-newal process based on a common distribution μ = ν with mean m < ∞. Then the tail probability f(t) = μ(t, ∞) satisfies the differential equation mf ′+f = 0 with initial condition f(0) = 1, and so f(t) = e−t/m, which shows that μ = ν is exponential with mean m. As before, ξ is then a stationary Poisson process with rate m−1.
2 The last result yields a similar statement for the occupation measure of a general random walk.
270 Foundations of Modern Pobability Corollary 12.20 (stationary occupation measure) Let ξ be the occupation measure of a random walk X in R based on distributions μ, ν, where μ has mean m ∈(0, ∞), and ν is defined as in Theorem 12.19 in terms of the lad-der height distribution ˜ μ and its mean ˜ m. Then ξ is stationary on R+ with intensity m−1.
Proof: Since Xn →∞a.s., Propositions 12.14 and 12.15 show that the ladder times τn and heights Hn = Xτn have finite mean, and by Proposition 12.19 the renewal process ζ = Σ n δHn is stationary for the prescribed choice of ν. Fixing t ≥0 and putting σt = inf{n ∈Z+; Xn ≥t}, we note in particular that Xσt −t has distribution ν. By the strong Markov property at σt, the sequence Xσt+n −t, n ∈Z+, has then the same distribution as X. Since Xk < t for k < σt, we get θtξ d = ξ on R+, which proves the asserted stationarity.
To identify the intensity, let ξn be the occupation measure of the sequence Xk −Hn, τn ≤k < τn+1, and note that Hn⊥ ⊥ξn d = ξ0 for each n by the strong Markov property. Hence, Fubini’s theorem yields E ξ = E Σ n≥0 ξn ∗δHn = Σ n≥0 E(δHn ∗E ξn) = E ξ0 ∗E Σ n≥0 δHn = E ξ0 ∗E ζ.
Noting that Eζ = ˜ m−1λ by Proposition 12.19, that E ξ0(0, ∞) = 0, and that ˜ m = m Eτ1 by Proposition 12.15, we get on R+ E ξ = E ξ0R− ˜ m λ = Eτ1 ˜ m λ = m−1λ.
2 We proceed to study the asymptotic behavior of the occupation measure ξ and its intensity Eξ. Under weak restrictions on μ, we show that θt ξ ap-proaches the stationary version ˜ ξ, whereas θt Eξ is asymptotically proportional to Lebesgue measure. For simplicity, we assume that the mean of μ exists in ¯ R = [−∞, ∞].
We will use the standard notation for convergence on measure spaces.4 Thus, for locally finite measures ν, ν1, ν2, . . . on R+, the vague convergence νn v →ν means that νnf →νf for all f ∈ˆ C+(R+). Similarly, for random measures ξ, ξ1, ξ2, . . . on R+, the distributional convergence ξn vd − →ξ is defined by the condition ξnf d →ξf for every f ∈ˆ C+(R+). A measure μ on R is said to be non-arithmetic, if the additive subgroup generated by supp μ is dense in R.
Theorem 12.21 (two-sided renewal theorem, Blackwell, Feller & Orey) Let ξ be the occupation measure of a random walk X in R based on distributions μ and ν, where μ is non-arithmetic with mean m ∈¯ R \ {0}. When m ∈(0, ∞), let ˜ ξ be the stationary version in Proposition 12.20, and otherwise put ˜ ξ = 0.
Then as t →∞, 4For a detailed discussion of such convergence, see Chapter 23 and Appendix 5.
12. Random Walks and Renewal Processes 271 (i) θt ξ vd − →˜ ξ, (ii) θtE ξ v →E ˜ ξ = (m−1 ∨0) λ.
Our proof will be based on two further lemmas. When m < ∞, the crucial step is to prove convergence of the ladder height distributions of the sequences X −t. This will be accomplished by a coupling argument: Lemma 12.22 (asymptotic delay) For X as in Theorem 12.21 with m ∈ (0, ∞), let νt be the distribution of the first ladder height ≥0 of the sequence X −t. Then νt w →˜ ν as t →∞.
Proof: Let α and α′ be independent random variables with distributions ν and ˜ ν. Choose some i.i.d. sequences (ξk)⊥ ⊥(ϑk) independent of α and α′, such that L(ξk) = μ and P{ϑk = ±1} = 1 2. Then Mn = α′ −α −Σ k≤n ϑk ξk, n ∈Z+, is a random walk based on a non-arithmetic distribution with mean 0. By Theorems 12.1 and 12.2, the range {Mn} is then a.s. dense in R, and so for any ε > 0 the optional time σ = inf{n ≥0; Mn ∈[0, ε]} is a.s. finite.
Next define ϑ′ k = ±ϑk, with plus sign iffk > τ, and note as in Lemma 12.11 that {α′, (ξk, ϑ′ k)} d = {α′, (ξk, ϑk)}. Let κ1, κ2, . . . and κ′ 1, κ′ 2, . . . be the values of k where ϑk = 1 or ϑ′ k = 1, respectively. The sequences Xn = α + Σ j ≤n ξκj, X′ n = α′ + Σ j ≤n ξκ′ j, n ∈Z+, are again random walks based on μ with initial distributions ν and ν′, respec-tively. Writing σ± = Σ k≤τ (±ϑk ∨0), we note that X′ σ−+n −Xσ++n = Mτ ∈[0, ε], n ∈Z+.
Putting β = X∗ σ+ ∨X′ σ− ∗, and considering the first entries of X and X′ into the interval [t, ∞), we obtain for any r ≥ε ˜ ν[ε, r] −P{β ≥t} ≤νt[0, r] ≤˜ ν[0, r + ε] + P{β ≥t}.
Letting t →∞and then ε →0, and noting that ˜ ν{0} = 0 by stationarity, we get νt[0, r] →˜ ν[0, r], which implies νt w →˜ ν.
2 To prove (i) ⇒(ii) in the main theorem, we need the following technical result, which also plays a crucial role when m = ∞.
Lemma 12.23 (uniform integrability) Let ξ be the occupation measure of a transient random walk X in Rd with arbitrary initial distribution, and fix a bounded set B ∈Bd. Then the random variables ξ(B + x), x ∈Rd, are uni-formly integrable.
272 Foundations of Modern Pobability Proof: Fix any x ∈Rd, and put τ = inf{t ≥0; Xn ∈B + x}. Writing η for the occupation measure of an independent random walk starting at 0, we get by the strong Markov property ξ(B + x) d = η(B + x −Xτ) 1{τ < ∞} ≤η(B −B).
Finally, note that Eη(B −B) < ∞by Theorem 12.1, since X is transient. 2 Proof of Theorem 12.21 (m < ∞): By Lemma 12.23 it is enough to prove (i). If m < 0, then Xn →−∞a.s. by the law of large numbers, and so θtξ = 0 for sufficiently large t, and (i) follows. If instead m ∈(0, ∞), then νt w →˜ ν by Lemma 12.22, and we may choose some random variables αt and α with distributions νt and ν, respectively, such that αt →α a.s. We also introduce the occupation measure ξ0 of an independent random walk starting at 0.
Now let f ∈ˆ C+(R+), and extend f to R by putting f(x) = 0 for x < 0.
Since ˜ ν ≪λ, we have ξ0{−α} = 0 a.s. Hence, by the strong Markov property and dominated convergence, (θtξ)f d = f(αt + x) ξ0(dx) → f(α + x) ξ0(dx) d = ˜ ξf.
(m = ∞): Here it is clearly enough to prove (ii).
The strong Markov property yields E ξ = ν ∗Eχ ∗Eζ, where χ is the occupation measure of the ladder height sequence of (Xn −X0) and ζ is the occupation measure of the same process, prior to the first ladder time. Here EζR−< ∞by Proposition 12.14, and so by dominated convergence it suffices to show that θtEχ v →0.
Since the ladder heights have again infinite mean by Proposition 12.15, we may assume instead that ν = δ0, and let μ be an arbitrary distribution on R+ with mean m = ∞.
Put I = [0, 1], and note that E ξ(I + t) is bounded by Lemma 12.23.
Define b = lim supt E ξ(I + t) and choose some tk →∞with E ξ(I + tk) →b.
Subtracting the bounded measures μ∗j for j < n, we get (μ∗n ∗E ξ)(I +tk) →b for all n ∈Z+. Using the reverse Fatou lemma, we obtain for any B ∈BR+ lim inf k→∞E ξ(I −B + tk) μ∗nB ≥lim inf k→∞ B E ξ(I −x + tk) μ∗n(dx) = b −lim sup k→∞ Bc E ξ(I −x + tk) μ∗n(dx) ≥b − Bc lim sup k→∞E ξ(I −x + tk) μ∗n(dx) ≥b − Bc b μ∗n(dx) = b μ∗nB.
Since n was arbitrary, the ‘lim inf’ on the left is then ≥b for every B with E ξB > 0.
12. Random Walks and Renewal Processes 273 Now fix any h > 0 with μ(0, h] > 0, and write J = [0, a] with a = h + 1.
Noting that E ξ[r, r + h] > 0 for all r ≥0, we get lim inf k→∞E ξ(J + tk −r) ≥b, r ≥a.
Since δ0 = (δ0 −μ) ∗E ξ, we further note that 1 = tk 0 μ(tk −x, ∞) E ξ(dx) ≥Σ n≥1 μ(na, ∞) E ξ(J + tk −na).
As k →∞, we get 1 ≥b Σ n≥1 μ(na, ∞) by Fatou’s lemma. Here the sum diverges since m = ∞, which implies θtE ξ v →0.
2 The preceding theory may be used to study the renewal equation F = f + F ∗μ, which often arises in applications. Here the convolution F ∗μ is defined by (F ∗μ)t = t 0 F(t −s) μ(ds), t ≥0, whenever the integrals on the right exist. Under suitable regularity conditions, the stated equation has the unique solution F = f ∗¯ μ, where ¯ μ denotes the renewal measure Σ n≥0 μ∗n. Additional conditions ensure convergence at ∞of the solution F.
The precise statements require some further terminology. By a regular step function we mean a function on R+ of the form ft = Σ j ≥1 aj1[j−1,j)(t/h), t ≥0, (13) where h > 0 and a1, a2, . . . ∈R. A measurable function f on R+ is said to be directly Riemann integrable, if λ|f| < ∞, and there exist some regular step functions f ± n with f − n ≤f ≤f + n and λ(f + n −f − n ) →0.
Theorem 12.24 (renewal equation) Consider a distribution μ ̸= δ0 on R+ with associated renewal measure ¯ μ and a locally bounded, measurable function f on R+. Then (i) equation F = f +F ∗μ has the unique, locally bounded solution F = f ∗¯ μ, (ii) when f is directly Riemann integrable and μ is non-arithmetic with mean m, we have Ft →m−1λf, t →∞.
Proof: (i) Iterating the renewal equation gives F = Σ k<n f ∗μ∗k + F ∗μ∗n, n ∈N.
(14) Since μ∗n[0, t] →0 as n →∞for fixed t ≥0 by the weak law of large numbers, we have F ∗μ∗n →0 for any locally bounded F. If even f is locally bounded, then by (14) and Fubini’s theorem, F = Σ k≥0 f ∗μ∗k = f ∗Σ k≥0 μ∗k = f ∗¯ μ.
274 Foundations of Modern Pobability Conversely, f + f ∗¯ μ ∗μ = f ∗¯ μ, which shows that F = f ∗¯ μ solves the given equation.
(ii) Let μ be non-arithmetic. If f is a regular step function as in (13), then by Theorem 12.21 and dominated convergence, we get as t →∞ Ft = t 0 f(t −s) ¯ μ(ds) = Σ j ≥1 aj ¯ μ (0, h] + t −jh →m−1hΣ j ≥1 aj = m−1λf.
In general, we may introduce some regular step functions f ± n with f − n ≤f ≤f + n and λ(f + n −f − n ) →0, and note that (f − n ∗¯ μ)t ≤Ft ≤(f + n ∗¯ μ)t, t ≥0, n ∈N.
Letting t →∞and then n →∞, we obtain Ft →m−1λf.
2 Exercises 1. Show that if X is recurrent, then so is the random walk (Xnk) for every k ∈N.
(Hint: If (Xnk) is transient, then so is (Xnk+j) for every j > 0.) 2. For any non-degenerate random walk X in Rd, show that |Xn| P →∞. (Hint: Use Lemma 6.1.) 3. Let X be a random walk in R, based on a symmetric, non-degenerate dis-tribution with bounded support.
Show that X is recurrent.
(Hint: Recall that lim supn(±Xn) = ∞a.s.) 4.
Show that the accessible set A equals the closed semi-group generated by supp μ. Also show by examples that A may or may not be a group.
5. Let ν be an invariant measure on the accessible set of a recurrent random walk in Rd. Show by examples that E ξ may or may not be of the form ∞· ν.
6. Show that a non-degenerate random walk in Rd has no invariant distribution.
(Hint: If ν is invariant, then μ ∗ν = ν.) 7.
Show by examples that the conditions in Theorem 12.2 are not necessary.
(Hint: For d = 2, consider mixtures of N(0, σ2) and use Lemma 6.19.) 8. Let X be a random walk in R based on the symmetric, p -stable distribution with characteristic function e−|t|p. Show that X is recurrent for p ≥1 and transient for p < 1.
9.
Let X be a random walk in R2 based on the distribution μ2, where μ is symmetric p -stable. Show that X is recurrent for p = 2 and transient for p < 2.
10. Let μ = c μ1 + (1 −c) μ2 for some symmetric distributions μ1, μ2 on Rd and a constant c ∈(0, 1). Show that a random walk based on μ is recurrent iffrecurrence holds for random walks based on μ1 and μ2.
11. Let μ = μ1 ∗μ2, where μ1, μ2 are symmetric distributions on Rd. Show that if a random walk based on μ is recurrent, then so are the random walks based on 12. Random Walks and Renewal Processes 275 μ1, μ2. Also show by an example that the converse is false. (Hint: For the latter part, let μ1, μ2 be supported by orthogonal sub-spaces.) 12. For a symmetric, recurrent random walk in Zd, show that the expected number of visits to an accessible state k ̸= 0 before return to the origin equals 1. (Hint: Compute the distribution, assuming a probability p for return before visit to k.) 13. Use Proposition 12.14 to show that any non-degenerate random walk in Zd has an infinite mean recurrence time. Compare with the preceding problem.
14. Show how part (i) of Proposition 12.15 can be strengthened by means of Theorems 6.17 and 12.2.
15. For a non-degenerate random walk in R, show that lim supn Xn = ∞a.s. iff σ1 < ∞a.s., and that Xn →∞a.s. iffE σ1 < ∞. In both conditions, note that σ1 can be replaced by τ1.
16. Let ξ be a renewal process based on a non-arithmetic distribution on R+.
Show that sup{t > 0; E ξ[t, t + ε] = 0} < ∞for any ε > 0. (Hint: Mimic the proof of Proposition 11.18.) 17. Let μ be a distribution on Z+, such that the group generated by supp μ equals Z. Show that Proposition 12.19 remains valid with ν{n} = c−1μ(n, ∞), n ≥0, and prove a corresponding version of Proposition 12.20.
18. Let ξ be the occupation measure of a random walk in Z based on a distribution μ with mean m ∈¯ R \ {0}, such that the group generated by supp μ equals Z. Show as in Theorem 12.21 that Eξ{n} →m−1 ∨0.
19. Prove the renewal theorem for random walks in Z+ from the ergodic theorem for discrete-time Markov chains, and conversely. (Hint: Given a distribution μ on N, construct a Markov chain X in Z+ with Xn+1 = Xn + 1 or 0, such that the recurrence times at 0 are i.i.d. μ. Note that X is aperiodic iffZ is the smallest group containing supp μ.) 20. Fix a distribution μ on R with symmetrization ˜ μ. Note that if ˜ μ is non-arithmetic, then so is μ. Show by an example that the converse is false.
21. Simplify the proof of Lemma 12.22, in the case where even the symmetrization ˜ μ is non-arithmetic. (Hint: Let ξ1, ξ2, . . . and ξ′ 1, ξ′ 2, . . . be i.i.d. μ, and define ˜ Xn = α′ −α +Σ k≤n (ξ′ k −ξk).) 22.
Show that any monotone, Lebesgue integrable function on R+ is directly Riemann integrable.
23. State and prove the counterpart of Corollary 12.24 for arithmetic distribu-tions.
24. Let (ξn), (ηn) be independent i.i.d. sequences with distributions μ, ν, put Xn = Σ k≤n(ξk + ηk), and define U = n≥0 [Xn, Xn + ξn+1). Show that Ft = P{t ∈ U} satisfies the renewal equation F = f + F ∗μ ∗ν with ft = μ(t, ∞). For μ, ν with finite means, show also that Ft converges as t →∞, and identify the limit.
25. Consider a renewal process ξ based on a non-arithmetic distribution μ with mean m < ∞, fix any h > 0, and define Ft = P{ξ[t, t + h] = 0}.
Show that F = f + F ∗μ with ft = μ(t + h, ∞). Further show that Ft converges as t →∞, and identify the limit. (Hint: Consider the first point of ξ in (0, t), if any.) 26. For ξ as above, let τ = inf{t ≥0; ξ[t, t + h] = 0}, and put Ft = P{τ ≤t}.
276 Foundations of Modern Pobability Show that Ft = μ(h, ∞) + h∧t 0 μ(ds) Ft−s, or F = f + F ∗μh, where μh = 1[0,h] · μ and f ≡μ(h, ∞).
Chapter 13 Jump-Type Chains and Branching Processes Strong Markov property, renewals in Poisson process, rate kernel, em-bedded Markov chain, existence and explosion, compound and pseudo-Poisson processes, backward equation, invariant distribution, conver-gence dichotomy, mean recurrence times, Bienaym´ e process, extinction and asymptotics, binary splitting, Brownian branching tree, genealogy and Yule tree, survival rate and distribution, diffusion approximation After our detailed study of discrete-time Markov chains in Chapter 11, we turn our attention to the corresponding processes in continuous time, where the paths are taken to be piecewise constant apart from isolated jumps. The evolution of the process is then governed by a rate kernel α, determining both the rate at which transitions occur and the associated transition probabilities.
For bounded α we get a pseudo-Poisson process, which may be described as a discrete-time Markov chain with transition times given by an independent, homogeneous Poisson process.
Of special interest is the space-homogeneous case of compound Poisson processes, where the underlying Markov chain is a random walk. In Chapter 17 we show how every Feller process can be approximated in a natural way by a sequence of pseudo-Poisson processes, characterized by the boundedness of their generators. A similar compound Poisson approximation of general L´ evy processes plays a basic role in Chapter 16.
The chapter ends with a brief introduction to branching processes in dis-crete and continuous time. In particular, we study the extinction probability and asymptotic population size, and identify the ancestral structure of a criti-cal Brownian branching tree in terms of a suitable Yule process. For a critical Bienaym´ e process, we further derive the asymptotic survival rate and distri-bution, and indicate how a suitably scaled version can be approximated by a Feller diffusion. The study of spatial branching processes will be resumed in Chapter 30.
A process X in a measurable space (S, S) is said to be of pure jump-type, if its paths are a.s. right-continuous and constant apart from isolated jumps. We may then denote the jump times of X by τ1, τ2, . . . , with the understanding that τn = ∞if there are fewer than n jumps. By a simple approximation based on Lemma 9.3, the times τn are optional with respect to the right-continuous filtration F = (Ft) induced by X. For convenience we may choose X to be the identity map on the canonical path space Ω. When X is Markov, we write Px 277 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 278 Foundations of Modern Probability for the distribution with initial state x, and note that the mapping x →Px is a kernel from (S, S) to (Ω, F∞).
We begin our study of pure jump-type Markov processes by proving an extension of the elementary strong Markov property in Proposition 11.9. A further extension appears as Theorem 17.17.
Theorem 13.1 (strong Markov property, Doob) A pure jump-type Markov process satisfies the strong Markov property at every optional time.
Proof: For any optional time τ, we may choose some optional times σn ≥ τ + 2−n taking countably many values, such that σn →τ a.s. When A ∈ Fτ ∩{τ < ∞} and B ∈F∞, we get by Proposition 11.9 P θσnX ∈B; A = E PXσnB; A .
(1) The right-continuity of X yields P{Xσn ̸= Xτ} →0. If B depends on finitely many coordinates, it is also clear that P {θσnX ∈B} △{θτX ∈B} →0, n →∞.
Hence, (1) remains true for such sets B with σn replaced by τ, and the relation extends to the general case by a monotone-class argument.
2 We now examine the structure of a general pure jump-type Markov process.
Here the crucial step is to determine the distributions associated with the first jump, occurring at time τ1. A random variable γ ≥0 is said to be exponentially distributed if P{γ > t} = ect, t ≥0, for a constant c > 0, in which case E γ = ∞ 0 e−ctdt = c−1. Say that a state x ∈S is absorbing if Px{X ≡x} = Px{τ1 = ∞} = 1.
Lemma 13.2 (first jump) Let X be a pure jump-type Markov process, and fix a non-absorbing state x. Then under Px, (i) τ1 is exponentially distributed, (ii) θτ1X⊥ ⊥τ1.
Proof: (i) Put τ1 = τ. By the Markov property at fixed times, we get for any s, t ≥0 Px τ > s + t = Px τ > s, τ ◦θs > t = Px{τ > s} Px{τ > t}.
This is a Cauchy equation in s and t, whose only non-increasing solutions are of the form Px{τ > t} = e−ct with c ∈[0, ∞]. Since x is non-absorbing and τ > 0 a.s., we have c ∈(0, ∞), and so τ is exponentially distributed with parameter c.
13. Jump-Type Chains and Branching Processes 279 (ii) By the Markov property at fixed times, we get for any B ∈F∞ Px τ > t, θτX ∈B = Px τ > t, (θτX) ◦θt ∈B = Px{τ > t} Px θτX ∈B , which shows that τ⊥ ⊥θτX.
2 Writing X∞= x when X eventually gets absorbed at x, we define the rate function c and jump transition kernel μ by c(x) = (Exτ1)−1, μ(x, B) = Px{Xτ1 ∈B}, x ∈S, B ∈S, combined into a rate kernel α = c μ, or α(x, B) = c(x) μ(x, B), x ∈S, B ∈S.
The required measurability follows from that for the kernel (Px). Assuming in addition that μ(x, ·) = δx when α(x, ·) = 0, we may reconstruct μ from α, so that μ becomes a measurable function of α.
We proceed to show how X can be represented in terms of a discrete-time Markov chain and an independent sequence of exponential random variables.
In particular, we will see how the distributions Px are uniquely determined by the rate kernel α. In the sequel, we always assume the existence of the required randomization variables.
Theorem 13.3 (embedded Markov chain) For a pure jump-type Markov pro-cess X with rate kernel α = c μ, there exist a Markov chain Y on Z+ with transition kernel μ and some i.i.d., exponential random variables γ1, γ2, . . . ⊥ ⊥Y with mean 1, such that a.s. for all n ∈Z+, (i) Xt = Yn, t ∈[τn, τn+1), (ii) τn = n Σ k=1 γk c(Yk−1) .
Proof: Let τ1, τ2, . . . be the jump times of X, and put τ0 = 0, so that (i) holds with Yn = Xτn for n ∈Z+.
Introduce some i.i.d., exponentially distributed random variables γ′ 1, γ′ 2, . . . ⊥ ⊥X with mean 1, and define for n ∈N γn = (τn −τn−1) c(Yn) 1{τn−1 < ∞} + γ′ n 1 c(Yn) = 0 .
By Lemma 13.2, we get for any t ≥0, B ∈S, and x ∈S with c(x) > 0 Px γ1 > t, Y1 ∈B = Px τ1c(x) > t, Y1 ∈B = e−tμ(x, B), 280 Foundations of Modern Probability which clearly remains true when c(x) = 0. By the strong Markov property, we obtain for any n, a.s. on {τn < ∞}, Px γn+1 > t, Yn+1 ∈B Fτn = PYn γ1 > t, Y1 ∈B = e−tμ(Yn, B).
(2) The strong Markov property also gives τn+1 < ∞a.s. on the set {τn < ∞, c(Yn) > 0}. Arguing recursively, we get {c(Yn) = 0} = {τn+1 = ∞} a.s., and (ii) follows. The same relation shows that (2) remains a.s. true on {τn = ∞}, and in both cases we may replace Fτn by Gn = Fτn ∨σ{γ′ 1, . . . , γ′ n}. Thus, the pairs (γn, Yn) form a discrete-time Markov chain with the desired transition kernel. By Proposition 11.2, the latter property along with the initial distri-bution determine uniquely the joint distribution of Y and (γn).
2 In applications we are often faced with the converse problem of construct-ing a Markov process X with a given rate kernel α. Then write α(x, B) = c(x) μ(x, B) for some rate function c : S →R+ and transition kernel μ on S, such that μ(x, ·) = δx when c(x) = 0 and otherwise μ(x, {x}) = 0. If X does exist, it may clearly be constructed as in Theorem 13.3. The construction fails when ζ ≡supn τn < ∞, in which case an explosion is said to occur at time ζ.
Theorem 13.4 (synthesis) For a finite kernel α = c μ on S with α(x, {x}) ≡0, let Y be a Markov chain in S with transition kernel μ, and let γ1, γ2, . . .
⊥ ⊥Y be i.i.d., exponential random variables with mean 1. Then these conditions are equivalent: (i) Σ nγn/c(Yn−1) = ∞a.s. under every initial distribution for Y , (ii) there exists a pure jump-type Markov process on R+ with rate kernel α.
Proof: Assuming (i), let Px be the joint distribution of the sequences Y = (Yn) and Γ = (γn) when Y0 = x. Regarding (Y, Γ) as the identity map on the canonical space Ω = (S × R+)∞, we may construct X from (Y, Γ) as in Theorem 13.3, where Xt = x0 is arbitrary for t ≥supn τn, and introduce the filtrations G = (Gn) induced by (Y, γ) and F = (Ft) induced by X. We need to verify the Markov property Lx( θtX | Ft) = LXt(X) for the distributions under Px and PXt, since the rate kernel can then be identified via Theorem 13.3.
For any t ≥0 and n ∈Z+, put κ = sup{k; τk ≤t}, β = (t −τn) c(Yn), and define T m(Y, Γ) = (Yk, γk+1); k ≥m , (Y ′, Γ′) = T n+1(Y, Γ), γ′ = γn+1.
Since clearly Ft = Gn ∨σ{γ′ > β} on {κ = n}, 13. Jump-Type Chains and Branching Processes 281 it suffices by Lemma 8.3 to prove that Lx (Y ′, Γ′); γ′ −β > r Gn, γ′ > β = LYn T(Y, Γ); γ1 > r .
Noting that (Y ′, Γ′)⊥ ⊥Gn(γ′, β) since γ′⊥ ⊥(Gn, Y ′, Γ′), we see that the left-hand side equals Lx (Y ′, Γ′); γ′ −β > r Gn Px{γ′ > β | Gn} = Lx (Y ′, Γ′) Gn Px{γ′ −β > r | Gn} Px{γ′ > β | Gn} = (PYn ◦T −1) e−r, as required.
2 To complete the picture, we need a convenient criterion for non-explosion.
Proposition 13.5 (explosion) For any rate kernel α and initial state x, let (Yn) and (τn) be such as in Theorem 13.3. Then a.s.
τn →∞ ⇔ Σ n 1 c(Yn) = ∞.
(3) In particular, τn →∞a.s. when x is recurrent for (Yn).
Proof: Write βn = {c(Yn−1)}−1. Noting that Ee−uγn = (1 + u)−1 for all u ≥0, we get by Theorem 13.3 (ii) and Fubini’s theorem E( e−uζ| Y ) = Π n (1 + uβn)−1 = exp −Σ n log(1 + uβn) a.s.
(4) Since 1 2(r ∧1) ≤log(1 + r) ≤r for all r > 0, the series on the right converges for every u > 0 iffΣ n βn < ∞. Letting u →0 in (4), we get by dominated convergence P( ζ < ∞| Y ) = 1 Σ n βn < ∞ a.s., which implies (3). If x is visited infinitely often, the series Σ n βn has infinitely many terms c−1 x > 0, and the last assertion follows.
2 The simplest and most basic pure jump-type Markov processes are the homogeneous Poisson processes, which are space and time homogeneous pro-cesses of this kind starting at 0 and proceeding by unit jumps. Here we define a stationary Poisson process on R+ with rate c > 0 as a simple point process ξ with stationary, independent increments, such that ξ[0, t) is Poisson distributed with mean c t. To prove the existence, we may use generating functions to see that the Poisson distributions μt with mean t satisfy the semi-group property μs ∗μt = μs+t. The desired existence now follows by Theorem 8.23.
Theorem 13.6 (stationary Poisson process, Bateman) Let ξ be a simple point process on R+ with points τ1 < τ2 < · · · , and put τ0 = 0. Then these conditions are equivalent: 282 Foundations of Modern Probability (i) ξ is a stationary Poisson process, (ii) ξ has stationary, independent increments, (iii) the variables τn −τn−1, n ∈N, are i.i.d., exponentially distributed.
In that case ξ has rate c = (E τ1)−1.
Proof, (i) ⇒(ii): Clear by definition.
(ii) ⇒(iii): Under (ii), Theorem 11.5 shows that Xt = ξ[0, t], t ≥0, is a space- and time-homogeneous Markov process of pure jump type, and so by Lemma 13.2, τ1 is exponential and independent of θτ1ξ. Since θτ1ξ d = ξ by Theorem 13.1, we may proceed recursively, and (iii) follows.
(iii) ⇒(i): Assuming (iii) with E τ1 = c−1, we may choose a stationary Poisson process η with rate c and points σ1 < σ2 < · · · , and conclude as before that (σn) d = (τn). Hence, ξ = Σ n δτn d = Σ n δσn = η.
2 By a pseudo-Poisson process in a measurable space S we mean a process of the form X = Y ◦N a.s., where Y is a discrete-time Markov process in S and N⊥ ⊥Y is an homogeneous Poisson process. Letting μ be the transition kernel of Y and writing c for the constant rate of N, we may construct a kernel α(x, B) = c μ x, B \ {x} , x ∈S, B ∈S, (5) which is measurable since μ(x, {x}) is a measurable function of x. We may characterize pseudo-Poisson processes in terms of the rate kernel: Proposition 13.7 (pseudo-Poisson process, Feller) For a process X in a Borel space S, these conditions are equivalent: (i) X = Y ◦N a.s. for a discrete-time Markov chain Y in S with transition kernel μ and a Poisson process N⊥ ⊥Y with constant rate c > 0, (ii) X is a pure jump-type Markov process with bounded rate kernel α.
In that case, we may choose α and (μ, c) to be related by (5).
Proof, (i) ⇒(ii): Assuming (i), write τ1, τ2, . . . for the jump times of N, and let F be the filtration induced by the pair (X, N). As in Theorem 13.4, we see that X is F-Markov. To identify the rate kernel α, fix any initial state x, and note that the first jump of X occurs at the time τn when Yn first leaves x. For each transition of Y , this happens with probability px = μ(x, {x}c). By Proposition 15.3, the time until first jump is then exponentially distributed with parameter c px. If px > 0, the position of X after the first jump has distribution p−1 x μ(x, · \ {x}). Thus, α is given by (5).
(ii) ⇒(i): Assuming (ii), put rx = α(x, S) and c = supx rx, and note that the kernel μ(x, ·) = c−1 α(x, ·) + (c −rx) δx , x ∈S, 13. Jump-Type Chains and Branching Processes 283 satisfies (5). Thus, if X′ = Y ′ ◦N ′ is a pseudo-Poisson process based on μ and c, then X′ is again Markov with rate kernel α, and so X d = X′. Hence, Corollary 8.18 yields X = Y ◦N a.s. for a pair (Y, N) d = (Y ′, N ′).
2 If the underlying Markov chain Y is a random walk in a measurable Abelian group S, then X = Y ◦N is called a compound Poisson process. Here X − X0 ⊥ ⊥X0, the jump sizes are i.i.d., and the jump times form an independent, homogeneous Poisson process. Thus, the distribution of X −X0 is determined by the characteristic measure1 ν = c μ, where c is the rate of the jump-time process and μ is the common jump distribution.
Compound Poisson processes may be characterized analytically in terms of the rate kernel, and probabilistically in terms of the increments of the process.
A kernel α on S is said to be invariant if αx = α0◦θ−1 x or α(x, B) = α(0, B−x) for all x and B. We also say that a process X in S has independent increments if Xt −Xs ⊥ ⊥{Xr; r ≤s} for all s < t.
Corollary 13.8 (compound Poisson process) For a pure jump-type process X in a measurable Abelian group, these conditions are equivalent: (i) X is Markov with invariant rate kernel, (ii) X has stationary, independent increments, (iii) X is compound Poisson.
Proof: If a pure jump-type Markov process is space-homogeneous, then its rate kernel is clearly invariant, and the converse follows from the representa-tion in Theorem 13.3. Thus, (i) ⇔(ii) by Proposition 11.5. Next, Theorem 13.3 yields (i) ⇒(iii), and the converse follows by Theorem 13.4.
2 We now derive a combined differential and integral equation for the tran-sition kernels μt. An abstract version of this result appears in Theorem 17.6.
For any measurable and suitably integrable function f : S →R, we define Ttf(x) = f(y) μt(x, dy) = Exf(Xt), x ∈S, t ≥0.
Theorem 13.9 (backward equation, Kolmogorov) Let α be the rate kernel of a pure jump-type Markov process in S, and fix a bounded, measurable function f : S →R. Then Ttf(x) is continuously differentiable in t for fixed x, and ∂ ∂tTtf(x) = α(x, dy) Ttf(y) −Ttf(x) , t ≥0, x ∈S.
(6) 1also called L´ evy measure, as in the general case of Chapter 16 284 Foundations of Modern Probability Proof: Put τ = τ1, and let x ∈S and t ≥0. By the strong Markov property at σ = τ ∧t and Theorem 8.5, Ttf(x) = Exf(Xt) = Exf (θσX)t−σ = ExTt−σf(Xσ) = f(x) Px{τ > t} + Ex Tt−τf(Xτ); τ ≤t = f(x) e−t cx + t 0 e−s cxds α(x, dy) Tt−sf(y), and so e t cxTtf(x) = f(x) + t 0 e s cxds α(x, dy) Tsf(y).
(7) Here the use of the disintegration theorem is justified by the fact that X(ω, t) is product measurable on Ω × R+, by the right continuity of the paths.
From (7) we see that Ttf(x) is continuous in t for each x, and so by dom-inated convergence the inner integral on the right is continuous in s. Hence, Ttf(x) is continuously differentiable in t, and (6) follows by an easy computa-tion.
2 Next we show how the invariant distributions of a pure jump-type Markov process are related to those of the embedded Markov chain.
Proposition 13.10 (invariant measures) Let the processes X, Y be related as in Theorem 13.3, and fix a probability measure ν on S with c dν < ∞.
Then ν is invariant for X ⇔ c · ν is invariant for Y.
Proof: By Theorem 13.9 and Fubini’s theorem, we have for any bounded measurable function f : S →R Eνf(Xt) = f(x) ν(dx) + t 0 ds ν(dx) α(x, dy) Tsf(y) −Tsf(x) .
Thus, ν is invariant for X iffthe second term on the right is identically zero.
Now (6) shows that Ttf(x) is continuous in t, and by dominated convergence this remains true for the integral It = ν(dx) α(x, dy) Ttf(y) −Ttf(x) , t ≥0.
Thus, the condition becomes It ≡0. Since f is arbitrary, it is enough to take t = 0. Our condition then reduces to (να)f ≡ν(cf) or (c · ν)μ = c · ν, which means that c · ν is invariant for Y .
2 We turn to a study of pure jump-type Markov processes in a countable state space S, also called continuous-time Markov chains. Here the kernels μt are determined by the transition functions p t ij = μt(i, {j}). The connectivity properties are simpler than in the discrete case, and the notion of periodicity has no continuous-time counterpart.
13. Jump-Type Chains and Branching Processes 285 Lemma 13.11 (positivity) Consider a continuous-time Markov chain in S with transition functions p t ij. Then for fixed i, j ∈S, exactly one of these cases occurs: (i) p t ij > 0, t > 0, (ii) p t ij = 0, t ≥0.
In particular, p t ii > 0 for all t and i.
Proof: Let q = (qij) be the transition matrix of the embedded Markov chain Y in Theorem 13.3. If qn ij = Pi{Yn = j} = 0 for all n ≥0, then clearly 1{Xt ̸= j} ≡1 a.s. Pi, and so p t ij = 0 for all t ≥0. If instead qn ij > 0 for some n ≥0, there exist some states i = i0 and i1, . . . , in = j with qik−1,ik > 0 for k = 1, . . . , n. Since the distribution of (γ1, . . . , γn+1) has the positive density Π k≤n+1 e−xk > 0 on Rn+1 + , we obtain for any t > 0 pt ij ≥P n Σ k=1 γk cik−1 ≤t < n+1 Σ k=1 γk cik−1 n Π k=1 qik−1,ik > 0.
Noting that p0 ii = q0 ii = 1, we get in particular p t ii > 0 for all t ≥0.
2 A continuous-time Markov chain is said to be irreducible if p t ij > 0 for all i, j ∈S and t > 0. This clearly holds iffthe underlying discrete-time process Y in Theorem 13.3 is irreducible, in which case sup t > 0; Xt = j < ∞ ⇔ sup n > 0; Yn = j < ∞.
Thus, when Y is recurrent, the sets {t; Xt = j} are a.s. unbounded under Pi for all i ∈S; otherwise they are a.s. bounded. The two possibilities are again referred to as recurrence and transience, respectively.
The basic ergodic Theorem 11.22 for discrete-time Markov chains has the following continuous-time counterpart. Further extensions are considered in Chapter 17 and 26. Recall that u →means convergence in total variation, and let ˆ MS be the class of probability measures on S.
Theorem 13.12 (convergence dichotomy, Kolmogorov) For an irreducible, continuous-time Markov chain in S, exactly one of these cases occurs: (i) there exists a unique invariant distribution ν, the latter satisfies νi > 0 for all i ∈S, and as t →∞we have Pμ ◦θ−1 t u →Pν, μ ∈ˆ MS, (ii) no invariant distribution exists, and as t →∞, p t ij →0, i, j ∈S.
Proof: By Lemma 13.11, the discrete-time chain Xnh, n ∈Z+, is irreducible and aperiodic. Suppose that (Xnh) is positive recurrent for some h > 0, say with invariant distribution ν. Then the chain (Xnh′) is positive recurrent for every h′ of the form 2−mh, and by the uniqueness in Theorem 11.22 it has 286 Foundations of Modern Probability the same invariant distribution. Since the paths are right-continuous, a simple approximation shows that ν is invariant even for the original process X.
For any distribution μ on S, we have Pμ ◦θ−1 t −Pν = Σ i μi Σ j (pt ij −νj)Pj ≤Σ i μi Σ j pt ij −νj .
Thus, (i) will follow by dominated convergence, if we can show that the inner sum on the right tends to zero. This is clear if we put n = [t/h] and r = t−nh, and note that by Theorem 11.22, Σ k pt ik −νk ≤Σ j, k pnh ij −νj pr jk = Σ j pnh ij −νj →0.
It remains to consider the case where (Xnh) is null-recurrent or transient for every h > 0. Fixing any i, k ∈S and writing n = [t/h] and r = t −nh as before, we get pt ik = Σ j pr ij pnh jk ≤pnh ik + Σ j ̸=i pr ij = pnh ik + (1 −pr ii), which tends to zero as t →∞and then h →0, by Theorem 11.22 and the continuity of pt ii.
2 As in discrete time, condition (ii) of the last theorem holds for any transient Markov chain, whereas a recurrent chain may satisfy either condition. Recur-rent chains satisfying (i) and (ii) are again referred to as positive-recurrent and null-recurrent, respectively. Note that X may be positive recurrent even if the embedded, discrete-time chain Y is null-recurrent, and vice versa. On the other hand, X clearly has the same asymptotic properties as the discrete-time processes (Xnh), h > 0.
For any j ∈S, we now introduce the first exit and recurrence times γj = inf{t > 0; Xt ̸= j}, τj = inf{t > γj; Xt = j}.
As for Theorem 11.26 in the discrete-time case, we may express the asymptotic transition probabilities in terms of the mean recurrence times Ejτj. To avoid trite exceptions, we consider only non-absorbing states.
Theorem 13.13 (mean recurrence times, Kolmogorov) For a continuous-time Markov chain in S and states i, j ∈S with j non-absorbing, we have as t →∞ p t ij →Pi{τj < ∞} cj Ejτj .
(8) 13. Jump-Type Chains and Branching Processes 287 Proof: We may take i = j, since the general statement will then follow as in the proof of Theorem 11.26. If j is transient, then 1{Xt = j} →0 a.s. Pj, and so by dominated convergence p t jj = Pj{Xt = j} →0. This agrees with (8), since in this case Pj{τj = ∞} > 0. Turning to the recurrent case, let Sj be the class of states i accessible from j. Then Sj is clearly irreducible, and so p t jj converges by Theorem 13.12.
To identify the limit, define Lj t = λ s ≤t; Xs = j = t 0 1{Xs = j} ds, t ≥0, and let τ n j be the time of n -th return to j. Letting m, n →∞with |m−n| ≤1 and using the strong Markov property and the law of large numbers, we get a.s. Pj Lj(τ m j ) τ n j = Lj(τ m j ) m · n τ n j · m n →Ejγj Ejτj = 1 cjEjτj .
By the monotonicity of Lj it follows that t−1Lj t →(cjEjτj)−1 a.s. Hence, by Fubini’s theorem and dominated convergence, 1 t t 0 ps jj ds = EjLj t t → 1 cjEjτj , and (8) follows.
2 We conclude with some applications to branching processes and their diffu-sion approximations. By a Bienaym´ e process2 we mean a discrete-time branch-ing process X = (Xn), where Xn represents the total number of individuals in generation n.
Given that Xn = m, the individuals in the n -th genera-tion give rise to independent families of progeny Y1, . . . , Ym starting at time n, each distributed like the original process X with X0 = 1. This clearly defines a discrete-time Markov chain taking values in Z+. We say that X is critical when E1X1 = 1, and super- or sub-critical when E1X1 > 1 or < 1, respectively.3 To avoid some trite exceptions, we assume that both possibilities X1 = 0 and X1 > 1 have positive probabilities.
Theorem 13.14 (Bienaym´ e process) Let X be a Bienaym´ e process with E1X1 = μ and Var1(X1) = σ2, and put f(s) = E1sX1. Then (i) P1{Xn →0} is the smallest s ∈[0, 1] with f(s) = s, and so Xn →0 a.s.
⇔ μ ≤1, (ii) when μ < ∞, we have μ−nXn →γ a.s. for a random variable γ ≥0, 2also called a Galton–Watson process 3The conventions of martingale theory are the opposite. Thus, a branching process is super-critical iffit is a sub-martingale, and sub-critical iffit is a super-martingale.
288 Foundations of Modern Probability (iii) when4 σ2 < ∞, we have E1γ = 1 ⇔μ > 1, and {γ = 0} = {Xn →0} a.s. P1.
The proof is based on some simple algebraic facts: Lemma 13.15 (recursion and moments) Let X be a Bienaym´ e process with E1X1 = μ and Var1(X1) = σ2. Then (i) the functions f (n)(s) = E1sXn on [0, 1] satisfy f (m+n)(s) = f (m) ◦f (n)(s), m, n ∈N, (ii) E1Xn = μn, and when σ2 < ∞we have Var1(Xn) = n σ2, μ = 1, σ2μn−1 μn−1 μ−1 , μ ̸= 1.
Proof: (i) By the Markov and independence properties of X, we have f (m+n)(s) = E1sXm+n = E1EXmsXn = E1(E1sXn)Xm = f (m)(E1sXn) = f (m) ◦f (n)(s).
(ii) Write μn = E1Xn and σ2 n = Var1(Xn), whenever those moments exist.
The Markov and independence properties yield μm+n = E1Xm+n = E1EXmXn = E1(Xm E1Xn) = (E1Xm)(E1Xn) = μm μn, and so by induction μn = μn. Furthermore, Theorem 12.9 (iii) gives σ2 m+n = σ2 m μn + μ2 m σ2 n.
The variance formula is clearly true for n = 1.
Proceeding by induction, suppose it holds for a given n. Then for n + 1 we get when μ ̸= 1 σ2 n+1 = σ2 n μ + μ2 n σ2 = μ σ2 μn−1 μn −1 μ −1 + μ2n σ2 = σ2 μn μ −1 μn −1 + μn (μ −1) = σ2 μn μn+1 −1 μ −1 , as required. When μ = 1, the proof simplifies to 4This assertion remains true under the weaker condition E1(X1 log X1) < ∞.
13. Jump-Type Chains and Branching Processes 289 σ2 n+1 = σ2 n + σ2 = n σ2 + σ2 = (n + 1) σ2.
2 Proof of Theorem 13.14: (i) Noting that P1{Xn = 0} = E10Xn = f (n)(0), we get by Lemma 13.15 (i) P1{Xn+1 = 0} = f P1{Xn = 0} .
Since also P1{X0 = 0} = 0 and P1{Xn = 0} ↑P{Xn →0}, we conclude that the extinction probability q is the smallest root in [0, 1] of the equation f(s) = s. Now f is strictly convex with f ′(1) = μ and f(0) ≥0, and so the equation f(s) = s has the single root 1 when μ ≤1, whereas for μ > 1 it has two roots q < 1 and 1.
(ii) Since E1X1 = μ, the Markov and independence properties yield E(Xn+1 | X0, . . . , Xn) = Xn(E1X1) = μ Xn, n ∈N.
Dividing both sides by μn+1, we conclude that the sequence Mn = μ−nXn is a martingale. Since it is non-negative and hence L1-bounded, it converges a.s.
by Theorem 9.19.
(iii) If μ ≤1, then Xn →0 a.s. by (i), and so E1γ = 0. If instead μ > 1 and σ2 < ∞, then Lemma 13.15 (ii) yields Var(Mn) = σ2 1 −μ−n μ(μ −1) → σ2 μ(μ −1) < ∞, and so (Mn) is uniformly integrable, which implies E1γ = E1M0 = 1. Further-more, P1{γ = 0} = E1PX1{γ = 0} = E1 P1{γ = 0} X1 = f P1{γ = 0} , which shows that even P1{γ = 0} is a root of the equation f(s) = s.
It can’t be 1 since E1γ = 1, so in fact P1{γ = 0} = P1{Xn →0}. Since also {Xn →0} ⊂{γ = 0}, the two events agree a.s.
2 The simplest case is when X proceeds by binary splitting, so that the off-spring distribution is confined to the values 0 and 2. To obtain a continuous-time Markov chain, we may choose the life lengths until splitting or death to be independent and exponentially distributed with constant mean. This makes X = (Xt) a birth & death process with rates nλ and nμ, and we say that X is critical of rate 1 if λ = μ = 1. We also consider the Yule process, where λ = 1 and μ = 0, so that X is non-decreasing and no deaths occur.
290 Foundations of Modern Probability Theorem 13.16 (binary splitting) Let X be a critical, unit rate binary-split-ting process, and define pt = (1 + t)−1. Then (i) P1{Xt > 0} = pt, t > 0, (ii) P1 Xt = n Xt > 0 = pt(1 −pt)n−1, t > 0, n ∈N, (iii) for fixed t > 0, the ancestors of Xt at times s ≤t form a Yule process Zt s, s ∈[0, t], with rate function (1 + t −s)−1.
Proof, (i) The generating functions F(s, t) = E1sXt, s ∈[0, 1], t > 0, satisfy Kolmogorov’s backward equation ∂ ∂t F(s, t) = 1 −F(s, t) 2, F(s, 0) = s, which has the unique solution F(s, t) = s + t −st 1 + t −st = qt + p2 ts 1 −qts = qt + Σ n≥1 p2 t qn−1 t sn, where qt = 1 −pt. In particular, P1{Xt = 0} = F(0, t) = qt.
(ii) For any n ∈N, P1 Xt = n Xt > 0 = P1{Xt = n} P1{Xt > 0} = p2 t qn−1 t pt = pt qn−1 t .
(iii) Assuming X0 = 1, we get P{Zt s = 1} = EP Zt s = 1 Xs = E Xs pt−s qXs−1 t−s = Σ n≥1 n p2 s qn−1 s pt−s qn−1 t−s = p2 s pt−s Σ n≥1 n(qs qt−s)n−1 = p2 s pt−s (1 −qs qt−s)2 = 1 + t −s (1 + t)2 .
Writing τ1 = inf{s ≤t; Zt s > 1}, we conclude that P τ1 > s Xt > 0 = P Zt s = 1 Xt > 0 = 1 + t −s 1 + t , 13. Jump-Type Chains and Branching Processes 291 which shows that τ1 has the defective probability density (1 + t)−1 on [0, t].
By Proposition 10.23, the associated compensator has density (1 + t −s)−1.
The Markov and branching properties carry over to the process Zt s by Theorem 8.12. Using those properties and proceeding recursively, we see that Zt s is a Yule process in s ∈[0, t] with rate function (1 + t −s)−1.
2 Now let the individual particles perform independent Brownian motions5 in Rd, so that the discrete branching process X = (Xt) turns into a random branching tree ξ = (ξt).
For fixed t > 0, even the ancestral processes Zt s combine into a random branching tree ζt = (ζt s) on [0, t].
Corollary 13.17 (Brownian branching tree) Let the processes ξt form a crit-ical, unit rate Brownian branching tree in Rd, and define pt = (1 + t)−1. Then for s ≤s + h = t, (i) the ancestors of ξt at time s form a ph-thinning ζt s of ξs, (ii) ξt is a sum of conditionally independent clusters of age h, rooted at the points of ζt s, (iii) for fixed t > 0, the processes ζt s, s ∈[0, t], form a Brownian Yule tree with rate function (1 + t −s)−1.
Similar results hold asymptotically for critical Bienaym´ e processes.
Theorem 13.18 (survival rate and distribution, Kolmogorov, Yaglom) Let X be a critical Bienaym´ e process with Var1(X1) = σ2 ∈(0, ∞). Then as n →∞, (i) n P1{Xn > 0} →2 σ−2, (ii) P1 n−1Xn > r Xn > 0 →e−2 rσ−2, r ≥0.
Our proof is based on an elementary approximation of generating functions: Lemma 13.19 (generating functions) For X as above, define fn(s) = E1sXn.
Writing σ2 = 2 c we have as n →∞, uniformly in s ∈[0, 1), 1 n 1 1 −fn(s) − 1 1 −s →c.
Proof (Spitzer): For the restrictions to Z+ of a binary splitting process X, we get by Theorem 13.16 P1{X1 = k} = (1 −p)2 pk−1, k ∈N, for a constant p ∈(0, 1), and an easy calculation yields 1 1 −fn(s) − 1 1 −s = n p 1 −p = n c.
5For the notions of Brownian motion and p -thinnings, see Chapters 14–15.
292 Foundations of Modern Probability Thus, equality holds in the stated relation with p and c related by p = c 1 + c ∈(0, 1).
For general X, fix any constants c1, c2 > 0 with c1 < c < c2, and let X(1), X(2) be associated binary splitting processes. By elementary estimates there exist some constants n1, n2 ∈Z, such that the associated generating functions satisfy f (1) n+n1 ≤fn ≤f (2) n+n2, n > |n1| ∨|n2|.
Combining this with the identities for f (1) n , f (2) n , we obtain c1 (n + n1) n ≤1 n 1 1 −fn(s) − 1 1 −s ≤c2 (n + n2) n .
Here let n →∞, and then c1 ↑c and c2 ↓c.
2 Proof of Theorem 13.18: (i) Taking s = 0 in Lemma 13.19 gives fn(0) →1.
(ii) Writing sn = e−r/n and using the uniformity in Lemma 13.19, we get 1 n 1 −fn(sn) ≈ 1 n(1 −sn) + c →1 r + c, where a ≈b means a −b →0. Since also n{1 −fn(0)} →c−1 by (i), we obtain E1 e−rXn/n Xn > 0 = fn(sn) −fn(0) 1 −fn(0) = 1 − n 1 −fn(sn) 1 −fn(0) →1 − c r−1 + c = 1 1 + c r , recognized as the Laplace transform of the exponential distribution with mean c−1. Now use Theorem 6.3.
2 We finally state a basic approximation property of Bienaym´ e branching processes, anticipating some weak convergence and SDE theory from Chapters 23 and 31–32.
Theorem 13.20 (diffusion approximation, Feller) For a critical Bienaym´ e process X with Var1(X1) = 1, consider versions X(n) with X(n) 0 = n, and put Y (n) t = n−1X(n) [nt], t ≥0.
Then Y (n) sd − →Y in D R+, where Y satisfies the SDE dYt = Y 1/2 t dBt, Y0 = 1.
(9) 13. Jump-Type Chains and Branching Processes 293 Note that uniqueness in law holds for (9) by Theorem 33.1 below. By The-orem 33.3 we have even pathwise uniqueness, which implies strong existence by Theorem 32.14. The solutions are known as squared Bessel processes of order 0.
Proof (outline): By Theorems 7.2 and 13.18 we have Y (n) t d ∼˜ Y (n) t for fixed t > 0, where the variables ˜ Y (n) t are compound Poisson with characteristic measures ν(n) t w →νt, and ν2t(dx) = t−2e−x/tdx, x, t > 0.
Hence, Lemma 7.8 yields Y (n) t d →Yt, where the limit is infinitely divisible in R+ with characteristics (0, νt). By suitable scaling, we obtain a similar convergence of the general transition kernels L(Y (n) s+t | Y (n) s )x, except that the limiting law μt(x, ·) is now infinitely divisible with characteristics (0, x νt). We also note that the time-homogeneous Markov property carries over to the limit.
By Lemma 7.1 (i) we may write Yt d = ξt 1 + · · · + ξt κt, t > 0, where the ξt k are independent, exponential random variables with mean t/2, and κt is independent and Poisson distributed with mean 2/t. In particular, EYt = (Eκt)(Eξt 1) = 1, and Theorem 12.9 (iii) yields Var(Yt) = E κt Var(ξt 1) + Var(κt) (E ξt 1)2 = 2 t t 2 2 + 2 t t 2 2 = t.
By Theorem 23.11 below, the processes Y (n) are tight in D R+. Since the jumps of Y (n) are a.s. of order n−1, every limiting process Y has continuous paths.
The previous moment calculations suggest that Y is then a diffusion process with drift 0 and diffusion rate σ2(x) = x, which leads to the stated SDE (de-tails omitted). The asserted convergence now follows by Theorem 17.28.
2 Exercises 1. Prove the implication (iii) ⇒(i) in Theorem 13.6 by direct computation. (Hint: Note that {Nt ≥n} = {τn ≤t} for all t and n.) 2. Give a non-computational proof of the implication (i) ⇒(iii) in Theorem 13.6, using the reverse implication. (Hint: Note as in Theorem 2.19 that τ1, τ2, . . . are measurable functions of ξ.) 3. Prove the implication (i) ⇒(iii) in Theorem 13.6 by direct computation.
4. Show that the Poisson process is the only renewal process that is also a Markov process.
5.
Let Xt = ξ[0, t] for a binomial process ξ with ∥ξ∥= n based on a diffuse probability measure μ on R+. Show that we can choose μ such that X becomes a Markov process, and determine the corresponding rate kernel.
294 Foundations of Modern Probability 6. For a pure jump-type Markov process in S, show that Px{τ2 ≤t} = o(t) for all x ∈S. Also note that the bound can be strengthened to O(t2) if the rate function is bounded, but not in general. (Hint: Use Lemma 13.2 and dominated convergence.) 7. Show that a transient, discrete-time Markov chain Y can be embedded into an exploding (resp., non-exploding) continuous-time chain X. (Hint: Use Propositions 11.16 and 13.5.) 8. In Corollary 13.8, use the measurability of the mapping X = Y ◦N to derive the implication (iii) ⇒(i) from its converse. (Hint: Proceed as in the proof of Theorem 13.6.) 9. Consider a pure jump-type Markov process in (S, S) with transition kernels μt and rate kernel α. For any x ∈S and B ∈S, show that α(x, B) = ˙ μ0(x, B \ {x}).
(Hint: Apply Theorem 13.9 with f = 1B\{x}, and use dominated convergence.) 10. Use Theorem 13.9 to derive a system of differential equations for the transition functions pij(t) of a continuous-time Markov chain. (Hint: Take f(i) = δij for fixed j.) 11. Give an example of a positive recurrent, continuous-time Markov chain, such that the embedded discrete-time chain is null-recurrent, and vice versa. (Hint: Use Proposition 13.10.) 12. Prove Theorem 13.12 by a direct argument, mimicking the proof of Theorem 11.22.
13. Let X be a Bienaym´ e process as in Theorem 13.14. Determine the extinction probability and asymptotic growth rate if instead we start with m individuals.
14. Let X = (Xt) be a binary-splitting process as in Theorem 13.16, and define Yn = Xnh for fixed h > 0. Show that Y = (Yn) is a critical Bienaym´ e process, and find the associated offspring distribution and asymptotic survival rate.
15. Show that the binary-splitting process in Theorem 13.16 is a pure jump-type Markov process, and calculate the associated rate kernel.
16. For a critical Bienaym´ e process with Var1(X1) = σ2 < ∞, find the asymptotic behavior of Pm{Xn > 0} and Pm{n−1Xn > r} as n →∞for fixed m ∈N.
17. How will Theorem 13.20 change if we assume instead that Var1(X1) = σ2 > 0?
V. Some Fundamental Processes The Brownian motion and Poisson processes constitute the basic building blocks of probability theory, and both classes of processes enjoy a wealth of remarkable properties of constant use throughout the subject. L´ evy processes are continuous-time counterparts of random walks, representable as mixtures of Brownian and Poisson processes. They may also be regarded as prototypes of Feller processes, which form a broad class of Markov processes, restricted only by some natural regularity assumptions that simplify the analysis. Most of the included material is of fundamental importance, though some later parts of especially Chapters 16–17 are more advanced and might be postponed until a later stage of study.
−−− 14. Gaussian processes and Brownian motion. After a preliminary study of general Gaussian processes, we introduce Brownian motion and derive some of its basic properties, including the path regularity, quadratic variation, reflection principle, the arcsine laws, and the laws of the iterated logarithm.
The chapter concludes with a study of multiple Wiener–Itˆ o integrals and the associated chaos expansion of Gaussian functionals. Needless to say, the basic theory of Brownian motion is of utmost importance.
15. Poisson and related processes. The basic theory of Poisson and related processes is equally important. Here we note in particular the mapping and marking theorems, the relations to binomial processes, and the roles of Cox transforms and thinnings. The independence properties lead to a Poisson representation of random measures with independent increments. We also note how a simple point process can be transformed to Poisson through a random time change derived from the compensator.
16. Independent-increment and L´ evy processes. Here we note how processes with independent increments are composed of Gaussian and Pois-son variables, reducing in the time-homogeneous case to mixtures of Brownian motions and Poisson processes. The resulting processes may be regarded as both continuous-time counterparts of random walks and as space-homogeneous Feller processes. After discussing some related approximation theorems, we conclude with an introduction to tangential processes.
17.
Feller processes and semi-groups. The Feller processes form a general class of Markov processes, broad enough to cover most applications of interest, and yet restricted by some regularity conditions ensuring technical flexibility and convenience. Here we study the underlying semigroup theory, prove the general path regularity and strong Markov property, and establish some basic approximation and convergence theorems. Though considerably more advanced, this material is clearly of fundamental importance.
295 Chapter 14 Gaussian Processes and Brownian Motion Covariance and independence, rotational symmetry, isonormal Gaus-sian process, independent increments, Brownian motion and bridge, scal-ing and inversion, Gaussian Markov processes, quadratic variation, path irregularity, strong Markov and reflection properties, Bessel processes, maximum process, arcsine and uniform laws, laws of the iterated log-arithm, Wiener integral, spectral and moving-average representations, Ornstein–Uhlenbeck process, multiple Wiener–Itˆ o integrals, chaos expan-sion of variables and processes Here we initiate the study of Brownian motion, arguably the single most im-portant object of modern probability. Indeed, we will see in Chapters 22–23 how the Gaussian limit theorems of Chapter 6 extend to pathwise approxi-mations of broad classes of random walks and discrete-time martingales by a Brownian motion. In Chapter 19 we show how every continuous local mar-tingale can be represented as a time-changed Brownian motion. Similarly, we will see in Chapters 32–33 how large classes of diffusion processes may be con-structed from Brownian motion by various pathwise transformations. Finally, the close relationship between Brownian motion and classical potential theory is explored in Chapter 34.
The easiest construction of Brownian motion is via an isonormal Gaussian process on L2(R+), whose existence is a consequence of the characteristic spher-ical symmetry of multi-variate Gaussian distributions. Among the many basic properties of Brownian motion, this chapter covers the H¨ older continuity and existence of quadratic variation, the strong Markov and reflection properties, the three arcsine laws, and the law of the iterated logarithm.
The values of an isonormal Gaussian process on L2(R+) can be identified with integrals of L2-functions with respect to an associated Brownian motion.
Many processes of interest have representations in terms of such integrals, and in particular we consider spectral and moving-average representations of sta-tionary Gaussian processes. More generally, we introduce the multiple Wiener– Itˆ o integrals ζnf of functions f ∈L2(Rn +), and establish the fundamental chaos expansion of Brownian L2-functionals.
The present material is related to practically every other chapter in the book. Thus, we refer to Chapter 6 for the definition of Gaussian distributions and the basic Gaussian limit theorem, to Chapter 8 for the transfer theorem, to Chapter 9 for properties of martingales and optional times, to Chapter 11 297 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 298 Foundations of Modern Probability for basic facts about Markov processes, to Chapter 12 for similarities with random walks, to Chapter 27 for some basic symmetries, and to Chapter 15 for analogies with the Poisson process.
Our study of Brownian motion per se is continued in Chapter 19 with the basic recurrence or transience dichotomy, some further invariance properties, and a representation of Brownian martingales. Brownian local time and ad-ditive functionals are studied in Chapter 29. In Chapter 34 we consider some basic properties of Brownian hitting distributions, and examine the relation-ship between excessive functions and additive functionals of Brownian motion.
A further discussion of multiple integrals and chaos expansions appears in Chapters 19 and 21.
To begin with some basic definitions, we say that a process X on a parame-ter space T is Gaussian, if the random variable c1Xt1 +· · ·+cnXtn is Gaussian1 for every choice of n ∈N, t1, . . . , tn ∈T, and c1, . . . , cn ∈R. This holds in particular if the Xt are independent Gaussian random variables. A Gaussian process X is said to be centered if EXt = 0 for all t ∈T. We also say that the processes Xk on Tk, k ∈I, are jointly Gaussian, if the combined process X = {Xk t ; t ∈Tk, k ∈I} is Gaussian. The latter condition is certainly fulfilled when the Gaussian processes Xk are independent.
Some simple facts clarify the fundamental role of the covariance function.
As usual, we take all distributions to be defined on the σ-field generated by all evaluation maps.
Lemma 14.1 (covariance and independence) Let X and Xk, k ∈I, be jointly Gaussian processes on an index set T. Then (i) the distribution of X is uniquely determined by the functions mt = EXt, rs,t = Cov(Xs, Xt), s, t ∈T, (ii) the processes Xk are independent2 iff Cov(Xj s, Xk t ) = 0, s ∈Tj, t ∈Tk, j ̸= k in I.
Proof: (i) If the processes X, Y are Gaussian with the same mean and co-variance functions, then the mean and variance agree for the variables c1Xt1 + · · · + cnXtn and c1Yt1 + · · · + cnYtn for any c1, . . . , cn ∈R and t1, . . . , tn ∈T, n ∈N. Since the latter are Gaussian, their distributions must then agree.
Hence, the Cram´ er–Wold theorem yields (Xt1, . . . , Xtn) d = (Yt1, . . . , Ytn) for any t1, . . . , tn ∈T, n ∈N, and so X d = Y by Proposition 4.2.
(ii) Assume the stated condition. To prove the asserted independence, we may take I to be finite. Introduce some independent processes Y k, k ∈I, with the same distributions as the Xk, and note that the combined processes 1also said to be normally distributed 2This (mutual) independence must not be confused with independence between the values of Xk = (Xk t ) for each k.
14. Gaussian Processes and Brownian Motion 299 X = (Xk) and Y = (Y k) have the same means and covariances. Hence, their joint distributions agree by part (i). In particular, the independence between the processes Y k implies the corresponding property for the processes Xk. 2 The multi-variate Gaussian distributions are characterized by a simple sym-metry property: Proposition 14.2 (spherical symmetry, Maxwell) For independent random variables ξ1, . . . , ξd with d ≥2, these conditions are equivalent: (i) L(ξ1, . . . , ξd) is spherically symmetric, (ii) the ξk are i.i.d. centered Gaussian.
Proof, (i) ⇒(ii): Assuming (i), let ϕ be the common characteristic function of ξ1, . . . , ξd. Noting that −ξ1 d = ξ1, we see that ϕ is real and symmetric. Since also s ξ1 + t ξ2 d = ξ1 √ s2 + t2, we obtain the functional equation ϕ(s) ϕ(t) = ϕ √ s2 + t2 , s, t ∈R.
This is a Cauchy equation for the function ψ(x) = ϕ(|x|1/2), and we get ϕ(t) = e−c t2 for a constant c, which is positive since |ϕ| ≤1.
(ii) ⇒(i): Assume (ii). By scaling we may take the ξk to be i.i.d. N(0, 1).
Then (ξ1, . . . , ξd) has the joint density Π k (2π)−1/2 e−x2 k/2 = (2π)−d/2 e−|x|2/2, x ∈Rd, and the spherical symmetry follows from that for |x|2 = x2 1 + · · · + x2 d.
2 In infinite dimensions, the Gaussian property is essentially a consequence of the spherical symmetry alone, without any requirement of independence.
Theorem 14.3 (unitary invariance, Schoenberg, Freedman) For an infinite sequence of random variables ξ1, ξ2, . . . , these conditions are equivalent: (i) L(ξ1, . . . , ξn) is spherically symmetric for every n ∈N, (ii) the ξk are conditionally i.i.d. N(0, σ2), given a random variable σ2 ≥0.
Proof: Under (i) the ξk are exchangeable, and so by de Finetti’s Theorem 27.2 below they are conditionally μ -i.i.d. for a random probability measure μ on R. By the law of large numbers, μB = lim n→∞n−1 Σ k≤n 1{ξk ∈B} a.s., B ∈B, and in particular μ is a.s. {ξ3, ξ4, . . .} -measurable. By spherical symmetry, we have for any orthogonal transformation T on R2 P (ξ1, ξ2) ∈B ξ3, . . . , ξn = P T(ξ1, ξ2) ∈B ξ3, . . . , ξn , B ∈BR2.
300 Foundations of Modern Probability As n →∞, we get μ2 = μ2 ◦T −1 a.s. Applying this to a countable, dense set of mappings T, we see that the exceptional null set can be chosen to be inde-pendent of T. Thus, μ2 is a.s. spherically symmetric, and so μ is a.s. centered Gaussian by Proposition 14.2. It remains to take σ2 = x2μ(dx).
2 Now fix a real, separable Hilbert space3 H with inner product ⟨·, ·⟩. By an isonormal Gaussian process on H we mean a centered Gaussian process ζh, h ∈H, such that E(ζh ζk) = ⟨h, k⟩for all h, k ∈H. For its existence, fix any ortho-normal basis (ONB) h1, h2, . . . ∈H, and let ξ1, ξ2, . . . be i.i.d. N(0, 1).
Then for any element h = Σ j bj hj we define ζh = Σ j bj ξj, where the series converges a.s. and in L2, since Σ j b2 j < ∞. Note that ζ is centered Gaussian, and also linear, in the sense that ζ(ah + bk) = a ζh + b ζk a.s., h, k ∈H, a, b ∈R.
If also j = Σ j cj hj, we obtain E(ζh ζk) = Σ i,j bi cj E(ξi ξj) = Σ i bi ci = ⟨h, k⟩.
By Lemma 14.1, the stated conditions determine uniquely the distribution of ζ. In particular, the symmetry in Proposition 14.2 extends to a distributional invariance of ζ under unitary transformations on H.
The Gaussian distributions arise naturally in the context of processes with independent increments. A similar Poisson criterion is given in Theorem 15.10.
Theorem 14.4 (independent increments, L´ evy) Let X be a continuous pro-cess in Rd with X0 = 0. Then these conditions are equivalent: (i) X has independent increments, (ii) X is Gaussian, and there exist some continuous functions b in Rd and a in R d2, where a has non-negative definite increments, such that L(Xt −Xs) = N(bt −bs, at −as), s < t.
Proof, (i)⇒(ii): Fix any s < t in R+ and u ∈Rd. For every n ∈N, divide the interval [s, t] into n sub-intervals of equal length, and let ξn1, . . . , ξnn be the corresponding increments of uX.
Then maxj |ξnj| →0 a.s. since X is continuous, and so the sum u(Xt −Xs) = Σj ξnj is Gaussian by Theorem 6.16.
Since the increments of X are independent, the entire process X is then Gaussian. Writing bt = EXt and at = Cov(Xt), we get by linearity and independence E(Xt −Xs) = EXt −EXs = bt −bs, Cov(Xt −Xs) = Cov(Xt) −Cov(Xs) = at −as, s < t.
3See Appendix 3 for some basic properties of Hilbert spaces.
14. Gaussian Processes and Brownian Motion 301 The continuity of X yields Xs d →Xt as s →t, and so bs →bt and as →at.
Thus, both functions are continuous.
(ii)⇒(i): Use Lemma 14.1.
2 It is not so obvious that continuous processes such as in Theorem 14.4 really exist. If X has stationary, independent increments, the mean and covariance functions would clearly be linear. For d = 1 the simplest choice is to take b = 0 and at = t, so that Xt −Xs is N(0, t −s) for all s < t. We prove the existence of such a process and estimate its local modulus of continuity.
(i) B has stationary, independent increments, (ii) Bt is N(0, t) for every t ≥0, (iii) B is a.s. locally H¨ older continuous of order p, for every p ∈(0, 1 2).
In Theorem 14.18 we will see that the stated order of H¨ older continuity can’t be improved. Related results are given in Proposition 14.10 and Lemma 22.7 below.
Proof: Let ζ be an isonormal Gaussian process on L2(R+, λ), and introduce a process Bt = ζ1[0,t], t ≥0. Since indicator functions of disjoint intervals are orthogonal, the increments of B are uncorrelated and hence independent.
Furthermore, ∥1(s,t]∥2 = t −s for any s ≤t, and so Bt −Bs is N(0, t −s). For any s ≤t we get Bt −Bs d = Bt−s d = (t −s)1/2B1, (1) whence E|Bt −Bs|p = (t −s)p/2E|B1|p < ∞, p > 0.
The asserted H¨ older continuity now follows by Theorem 4.23.
2 A process B as in Theorem 14.5 is called a (standard) Brownian motion4.
A d -dimensional Brownian motion is a process B = (B1, . . . , Bd) in Rd, where B1, . . . , Bd are independent, one-dimensional Brownian motions. By Proposi-tion 14.2, the distribution of B is invariant under orthogonal transformations of Rd. Furthermore, any continuous process X in Rd with stationary indepen-dent increments and X0 = 0 can be written as Xt = b t + aBt for some vector b and matrix a.
Other important Gaussian processes can be constructed from a Brownian motion. For example, we define a Brownian bridge as a process on [0, 1] dis-tributed as Xt = Bt −tB1, t ∈[0, 1]. An easy computation shows that X has covariance function rs,t = s(1 −t), 0 ≤s ≤t ≤1.
4More appropriately, it is also called a Wiener process.
Theorem 14.5 (Brownian motion, Thiele, Bachelier, Wiener) There exists a process B on R+ with B0 = 0, such that 302 Foundations of Modern Probability The Brownian motion and bridge have many nice symmetry properties.
For example, if B is a Brownian motion, then so is −B, as well as the process r−1B(r2t) for any r > 0.
The latter transformations, known as Brownian scaling, are especially useful. We also note that, for any u > 0, the processes Bu±t −Bu are Brownian motions on R+ and [0, u], respectively. If B is instead a Brownian bridge, then so are the processes −Bt and B1−t.
We list some less obvious invariance properties. Further, possibly random mappings preserving the distribution of a Brownian motion or bridge are ex-hibited in Theorem 14.11, Lemma 14.14, and Proposition 27.16.
Lemma 14.6 (scaling and inversion) (i) If B is a Brownian motion, then so is the process tB 1/t, whereas these processes are Brownian bridges: (1 −t)B t/(1−t), tB (1−t)/t, t ∈(0, 1), (ii) If B is a Brownian bridge, then so is the process B 1−t, whereas these processes are Brownian motions: (1 + t)B t/(1+t), (1 + t)B 1/(1+t), t ≥0.
Proof: Since the mentioned processes are centered Gaussian, it suffices by Lemma 14.1 to verify that they have the required covariance functions. This is clear from the expressions s∧t and (s∧t)(1−s∨t) for the covariance functions of the Brownian motion and bridge.
2 Combining Proposition 11.5 with Theorem 14.4, we note that any space-and time-homogeneous, continuous Markov process in Rd has the form aBt + b t + c for a Brownian motion B in Rd, a d × d matrix a, and some vectors b, c ∈Rd. We turn to a general characterization of Gaussian Markov processes.
As before, we use the convention 0/0 = 0.
Proposition 14.7 (Markov property) Let X be a Gaussian process on an index set T ⊂R, and put rs,t = Cov(Xs, Xt). Then X is Markov iff rs,u = rs,t rt,u rt,t , s ≤t ≤u in T.
(2) If X is also stationary on T = R, then rs,t = a e−b |s−t| for some constants a ≥0 and b ∈[0, ∞].
Proof: Subtracting the means, if necessary, we may take EXt ≡0. Fixing any t ≤u in T, we may choose a ∈R such that X′ u ≡Xu −aXt ⊥Xt. Then a = rt,u/rt,t when rt,t ̸= 0, and if rt,t = 0 we may take a = 0. By Lemma 14.1 we get X′ u⊥ ⊥Xt.
Now let X be Markov, and fix any s ≤t.
Then Xs⊥ ⊥XtXu, and so Xs⊥ ⊥XtX′ u.
Since also Xt⊥ ⊥X′ u by the choice of a, Proposition 8.12 yields Xs⊥ ⊥X′ u. Hence, rs,u = a rs,t, and (2) follows as we insert the expression for a.
14. Gaussian Processes and Brownian Motion 303 Conversely, (2) implies Xs ⊥X′ u for all s ≤t, and so Ft⊥ ⊥X′ u by Lemma 14.1, where Ft = σ{Xs; s ≤t}. Then Proposition 8.12 yields Ft⊥ ⊥XtXu, which is the required Markov property of X at t.
If X is stationary, we have rs,t = r|s−t|, 0 = r|s−t|, and (2) reduces to the Cauchy equation r0 rs+t = rs rt, s, t ≥0, allowing the only bounded solutions rt = a e−bt.
2 A continuous, centered Gaussian process X on R with covariance function rs,t = e−|t−s| is called a stationary Ornstein–Uhlenbeck process5. Such a process can be expressed as Xt = e−tB(e2t), t ∈R, in terms of a Brownian motion B.
The previous result shows that the Ornstein–Uhlenbeck process is essentially the only stationary Gaussian process that is also a Markov process.
We consider some basic path properties of Brownian motion.
Lemma 14.8 (level sets) For a Brownian motion or bridge B, λ t; Bt = u = 0 a.s., u ∈R.
Proof: Defining Xn t = B[nt]/n, t ∈R+ or [0, 1], n ∈N, we note that Xn t →Bt for every t.
Since the processes Xn are product-measurable on Ω × R+ or Ω × [0, 1], so is B. Hence, Fubini’s theorem yields E λ t; Bt = u = P{Bt = u} dt = 0, u ∈R.
2 Next we show that Brownian motion has locally finite quadratic variation.
Extensions to general semi-martingales appear in Proposition 18.17 and Corol-lary 20.15. A sequence of partitions is said to be nested, if each of them is a refinement of the previous one.
Theorem 14.9 (quadratic variation, L´ evy) Let B be a Brownian motion, and fix any t > 0. Then for any partitions 0 = tn,0 < tn,1 < · · · < tn,kn = t with hn ≡maxk(tn,k −tn,k−1) →0, we have ζn ≡Σ k Btn,k −Btn,k−1 2 →t in L2.
(3) The convergence holds a.s. when the partitions are nested.
Proof (Doob): To prove (3), we see from the scaling property Bt −Bs d = |t −s|1/2B1 that Eζn = Σ kE Btn,k −Btn,k−1 2 = Σ k( tn,k −tn,k−1) EB2 1 = t, Var(ζn) = Σ kVar Btn,k −Btn,k−1 2 = Σ k( tn,k −tn,k−1)2 Var(B2 1) ≤hn t EB4 1 →0.
5In Chapters 21 and 32 we use a slightly different normalization.
304 Foundations of Modern Probability For nested partitions, the a.s. convergence follows once we show that the sequence (ζn) is a reverse martingale, that is E ζn−1 −ζn ζn, ζn+1, . . .
= 0 a.s., n ∈N.
(4) Inserting intermediate partitions if necessary, we may assume that kn = n for all n. Then there exist some t1, t2, . . . ∈[0, t], such that the n-th partition has division points t1, . . . , tn. To verify (4) for a fixed n, we introduce an auxiliary random variable ϑ⊥ ⊥B with P{ϑ = ±1} = 1 2, and replace B by the Brownian motion B′ s = Bs∧tn + ϑ (Bs −Bs∧tn), s ≥0.
Since the sums ζn, ζn+1, . . . for B′ agree with those for B, whereas ζn−ζn−1 is re-placed by ϑ(ζn−ζn−1), it suffices to show that E{ϑ(ζn−ζn−1) | ζn, ζn+1, . . .} = 0 a.s. This is clear from the choice of ϑ, if we first condition on ζn−1, ζn, . . . . 2 The last result shows that the linear variation of B is locally unbounded, which is why we can’t define the stochastic integral V dB as an ordinary Stieltjes integral, prompting us in Chapter 18 to use a more sophisticated approach. We give some more precise information about the irregularity of paths, supplementing the regularity property in Theorem 14.5.
Proposition 14.10 (path irregularity, Paley, Wiener, & Zygmund) Let B be a Brownian motion in R, and fix any p > 1 2. Then a.s.
(i) B is nowhere6 H¨ older continuous of order p, (ii) B is nowhere differentiable.
Proof: (i) Fix any p > 1 2 and m ∈N. If Bω is H¨ older continuous of order p at a point tω ∈[0, 1], then for s ∈[0, 1] with |s −t| ≤mh we have |ΔhBs| ≤|Bs −Bt| + |Bs+h −Bt| < ⌢(mh)p < ⌢hp.
Applying this to m fixed, disjoint intervals of length h between the bounds t ± mh, we see that the probability of the stated H¨ older continuity is bounded by expressions of the form h−1 P |Bh| ≤c hp m = h−1 P |B1| ≤c hp−1/2 m < ⌢h−1hm(p−1/2), for some constants c > 0. Choosing m > (p −1 2)−1, we see that the right-hand side tends to 0 as h →0.
6Thus, for any ω outside a P-null set, there is no point t ≥0 where the path B(ω, ·) has the stated regularity property. This is clearly stronger than the corresponding statement for fixed points t ≥0 or intervals I ⊂R+. Since the stated events may not be measurable, the results should be understood in the sense of inner measure or completion.
14. Gaussian Processes and Brownian Motion 305 (ii) If Bω is differentiable at a point tω ≥0, it is Lipschitz continuous at t, hence H¨ older continuous of order 1, which contradicts (i) for p = 1.
2 Proposition 11.5 shows that Brownian motion B is a space-homogeneous Markov process with respect to its induced filtration. If the Markov property holds for a more general filtration F = (Ft) —i.e., if B is F-adapted and such that the process B′ t = Bs+t −Bs is independent of Fs for each s ≥0 —we say that B is a Brownian motion with respect to F or an F-Brownian motion. In particular, we may take Ft = Gt ∨N, t ≥0, where G is the filtration induced by B and N = σ{N ⊂A; A ∈A, PA = 0}. With this choice, F becomes right-continuous by Corollary 9.26.
We now extend the Markov property of B to suitable optional times. A more general version of this result appears in Theorem 17.17. As in Chapter 9 we write F+ t = Ft+.
Theorem 14.11 (strong Markov property, Hunt) For an F-Brownian motion B in Rd and an F+-optional time τ < ∞, we have these equivalent properties: (i) B has the strong F +-Markov property at τ, (ii) the process B′ t = Bτ+t −Bτ satisfies B d = B′⊥ ⊥F+ τ .
Proof: (ii) As in Lemma 9.4, there exist some optional times τn →τ with τn ≥τ + 2−n taking countably many values.
Then F+ τ ⊂ n Fτn by Lemmas 9.1 and 9.3, and so by Proposition 11.9 and Theorem 11.10, each process Bn t = Bτn+t −Bτn, t ≥0, is a Brownian motion independent of F+ τ .
The continuity of B yields Bn t →B′ t a.s. for every t. For any A ∈F + τ and t1, . . . , tk ∈R+, k ∈N, and for bounded, continuous functions f : Rk →R, we get by dominated convergence E f(B′ t1, . . . , B′ tk); A = Ef(Bt1, . . . , Btk) · PA.
The general relation L(B′; A) = L(B) · PA now follows by a straightforward extension argument.
2 When B is a Brownian motion in Rd, the process X d = |B| is called a Bessel process of order d. More general Bessel processes are obtained as solutions to suitable SDEs. We show that |B| inherits the strong Markov property from B.
Corollary 14.12 (Bessel processes) For an F-Brownian motion B in Rd, |B| is a strong F+-Markov process.
Proof: By Theorem 14.11, it is enough to show that |B + x| d = |B + y| whenever |x| = |y|. Then choose an orthogonal transformation T on Rd with Tx = y, and note that |B + x| = |T(B + x)| = |TB + y| d = |B + y|.
2 306 Foundations of Modern Probability The strong Markov property can be used to derive the distribution of the maximum of Brownian motion up to a fixed time. A stronger result will be obtained in Corollary 29.3.
Proposition 14.13 (maximum process, Bachelier) Let B be a Brownian mo-tion in R, and define Mt = sups≤t Bs, t ≥0. Then Mt d = Mt −Bt d = |Bt|, t ≥0.
Our proof relies on the following continuous-time counterpart of Lemma 12.11, where the given Brownian motion B is compared with a suitably reflected version ˜ B.
Lemma 14.14 (reflection principle) For a Brownian motion B and an asso-ciated optional time τ, we have B d = ˜ B with ˜ Bt = Bt∧τ −(Bt −Bt∧τ), t ≥0.
Proof: It suffices to compare the distributions up to a fixed time t, and so we may assume that τ < ∞. Define Bτ t = Bτ∧t and B′ t = Bτ+t −Bτ. By Theorem 14.11, the process B′ is a Brownian motion independent of (τ, Bτ).
Since also −B′ d = B′, we get (τ, Bτ, B′) d = (τ, Bτ, −B′), and it remains to note that Bt = Bτ t + B′ (t−τ)+, ˜ Bt = Bτ t −B′ (t−τ)+, t ≥0.
2 Proof of Proposition 14.13: By scaling we may take t = 1. Applying Lemma 14.14 with τ = inf{t; Bt = x} gives P M1 ≥x, B1 ≤y = P ˜ B1 ≥2x −y , x ≥y ∨0.
By differentiation, the joint distribution of (M1, B1) has density −2 ϕ′(2x−y), where ϕ denotes the standard normal density. Changing variables, we conclude that (M1, M1 −B1) has density −2 ϕ′(x + y), x, y ≥0. In particular, M1 and M1 −B1 have the same marginal density 2 ϕ(x) on R+.
2 To prepare for the next major result, we prove another elementary sample path property.
Lemma 14.15 (local extremes) The local maxima and minima of a Brownian motion or bridge are a.s. distinct.
Proof: For a Brownian motion B and any intervals I = [a, b] and J = [c, d] with b < c, we may write sup t∈J Bt −sup t∈I Bt = sup t∈J (Bt −Bc) + (Bc −Bb) −sup t∈I (Bt −Bb).
14. Gaussian Processes and Brownian Motion 307 Here the distribution of second term on the right is diffuse, which extends by independence to the whole expression. In particular, the difference on the left is a.s. non-zero. Since I and J are arbitrary, this proves the result for local maxima. The proofs for local minima and the mixed case are similar.
The result for the Brownian bridge Bo follows from that for Brownian motion, since the distributions of the two processes are mutually absolutely continuous on every interval [0, t] with t < 1. To see this, construct from B and Bo the corresponding ‘bridges’ Xs = Bs −s t−1Bt Ys = B o s −s t−1B o t , s ∈[0, t], and check that Bt ⊥ ⊥X d = Y ⊥ ⊥Bo t . The stated equivalence now follows since N(0, t) ∼N(0, t(1 −t)) for any t ∈[0, 1).
2 The arcsine law may be defined7 as the distribution of ξ = sin2 α when α is U(0, 2π). Its name derives from the fact that P{ξ ≤t} = P | sin α| ≤ √ t = 2 π arcsin √ t, t ∈[0, 1].
Note that the arcsine distribution is symmetric about 1 2, since ξ = sin2 α d = cos2 α = 1 −sin2 α = 1 −ξ.
We consider three basic functionals of Brownian motion, which are all arcsine distributed.
Theorem 14.16 (arcsine laws, L´ evy) For a Brownian motion B on [0, 1] with maximum M1, these random variables are all arcsine distributed: τ1 = λ{t; Bt > 0}, τ2 = inf t; Bt = M1 , τ3 = sup{t; Bt = 0}.
The relations τ1 d = τ2 d = τ3 may be compared with the discrete-time ana-logues in Theorem 12.12 and Corollary 27.8. In Theorems 16.18 and 22.11, the arcsine laws are extended by approximation to appropriate random walks and L´ evy processes.
Proof (OK): To see that τ1 d = τ2, let n ∈N, and conclude from Corollary 27.8 below that n−1 Σ k≤n 1{Bk/n > 0} d = n−1min k ≥0; Bk/n = max j≤n Bj/n .
7Equivalently, we may take α to be U(0, π) or U(0, π/2).
308 Foundations of Modern Probability By Lemma 14.15 the right-hand side tends a.s. to τ2 as n →∞. To see that the left-hand side converges to τ1, we note that by Lemma 14.8 λ t ∈[0, 1]; Bt > 0 + λ t ∈[0, 1]; Bt < 0 = 1 a.s.
It remains to note that, for any open set G ⊂[0, 1], lim inf n→∞n−1 Σ k≤n 1G(k/n) ≥λG.
In case of τ2, fix any t ∈[0, 1], let ξ and η be independent N(0, 1), and let α be U(0, 2π). Using Proposition 14.13 and the circular symmetry of the distribution of (ξ, η), we get P{τ2 ≤t} = P sup s≤t (Bs −Bt) ≥sup s≥t (Bs −Bt) = P |Bt| ≥|B1 −Bt| = P t ξ2 ≥(1 −t) η2 = P η2 ξ2 + η2 ≤t = P sin2 α ≤t .
In case of τ3, we write instead P{τ3 < t} = P sup s≥t Bs < 0 + P inf s≥t Bs > 0 = 2 P sup s≥t (Bs −Bt) < −Bt = 2 P |B1 −Bt| < Bt = P |B1 −Bt| < |Bt| = P{τ2 ≤t}.
2 The first two arcsine laws have simple counterparts for the Brownian bridge: Proposition 14.17 (uniform laws) For a Brownian bridge B with maximum M1, these random variables are U(0, 1): τ1 = λ{t; Bt > 0}, τ2 = inf t; Bt = M1 .
Proof: The relation τ1 d = τ2 may be proved in the same way as for Brownian motion. To see that τ2 is U(0, 1), write (x) = x −[x], and consider for each u ∈[0, 1] a process Bu t = B(u+t) −Bu, t ∈[0, 1]. Here clearly Bu d = B for each u, and the maximum of Bu occurs at (τ2 −u). Hence, Fubini’s theorem yields for any t ∈[0, 1] 14. Gaussian Processes and Brownian Motion 309 P{τ2 ≤t} = 1 0 P (τ2 −u) ≤t du = E λ u; (τ2 −u) ≤t = t.
2 From Theorem 14.5 we see that t−cBt →0 a.s. as t →0 for fixed c ∈[0, 1 2).
The following classical result gives the exact growth rate of Brownian motion at 0 and ∞. Extensions to random walks and renewal processes appear in Corollaries 22.8 and 22.14, and a functional version is given in Theorem 24.18.
Theorem 14.18 (laws of the iterated logarithm, Khinchin) For a Brownian motion B in R, we have a.s.
lim sup t→0 Bt 2t log log(1/t) = lim sup t→∞ Bt 2t log log t = 1.
Proof: Since the Brownian inversion ˜ Bt = tB1/t of Lemma 14.6 converts the two formulas into one another, it is enough to prove the result for t →∞.
Then note that as u →∞ ∞ u e−x2/2 dx ∼u−1 ∞ u x e−x2/2 dx = u−1e−u2/2.
Letting Mt = sups≤t Bs and using Proposition 14.13, we hence obtain, uni-formly in t > 0, P Mt > u t1/2 = 2P Bt > u t1/2 ∼(2/π)1/2 u−1e−u2/2.
Writing ht = (2t log log t)1/2, we get for any r > 1 and c > 0 P M(rn) > c h(rn−1) < ⌢n−c2/r(log n)−1/2, n ∈N.
Fixing c > 1 and choosing r < c2, we see from the Borel–Cantelli lemma that P lim sup t→∞ (Bt/ht) > c ≤P M(rn) > c h(rn−1) i.o. = 0, which implies lim supt→∞(Bt/ht) ≤1 a.s.
To prove the reverse inequality, we may write P B(rn) −B(rn−1) > c h(rn) > ⌢n−c2r/(r−1)(log n)−1/2, n ∈N.
Taking c = {(r −1)/r}1/2, we get by the Borel–Cantelli lemma lim sup t→∞ Bt −Bt/r ht ≥lim sup n→∞ B(rn) −B(rn−1) h(rn) ≥ r −1 r 1/2 a.s.
310 Foundations of Modern Probability The previous upper bound yields lim supt→∞(−Bt/r/ht) ≤r−1/2, and so by combination lim sup t→∞ Bt ht ≥(1 −r−1)1/2 −r−1/2 a.s.
Letting r →∞, we obtain lim supt→∞(Bt/ht) ≥1 a.s.
2 In Theorem 14.5 we constructed a Brownian motion B from an isonormal Gaussian process ζ on L2(R+, λ) by setting Bt = ζ 1[0,t] a.s. for all t ≥0.
Conversely, given a Brownian motion B on R+, Theorem 8.17 ensures the ex-istence of a corresponding isonormal Gaussian process ζ. Since every function h ∈L2(R+, λ) can be approximated by simple step functions, as in the proof of Lemma 1.37, the random variables ζh are a.s. unique. They can also be constructed directly from B, as suitable Wiener integrals h dB. Since the latter may fail to exist in the pathwise Stieltjes sense, a different approach is required.
For a first step, consider the class S of simple step functions of the form ht = Σ j ≤n aj1(tj−1,tj](t), t ≥0, where n ∈Z+, 0 = t0 < · · · < tn, and a1, . . . , an ∈R. For any h ∈S, we define the integral ζh in the obvious way as ζh = ∞ 0 ht dBt = Bh = Σ j ≤n aj(Btj −Btj−1), which is clearly centered Gaussian with variance E(ζh)2 = Σ j ≤n a2 j(tj −tj−1) = ∞ 0 h2 t dt = ∥h∥2, where ∥h∥denotes the norm in L2(R+, λ). Thus, the integration h →ζh = h dB defines a linear isometry from S ⊂L2(R+, λ) into L2(Ω, P).
Since S is dense in L2(R+, λ), the integral extends by continuity to a linear isometry h →ζh = h dB from L2(λ) to L2(P). The variable ζh is again centered Gaussian for every h ∈L2(λ), and by linearity the whole process h →ζh is Gaussian. By a polarization argument, the integration preserves inner products, in the sense that E(ζh ζk) = ∞ 0 ht kt dt = ⟨h, k⟩, h, k ∈L2(λ).
We consider two general representations of stationary Gaussian processes in terms of Wiener integrals ζh. Here a complex notation is convenient. By a complex, isonormal Gaussian process on a (real) Hilbert space H, we mean a process ζ = ξ + iη on H such that ξ and η are independent, real, isonormal 14. Gaussian Processes and Brownian Motion 311 Gaussian processes on H.
For any f = g + ih with g, h ∈H, we define ζf = ξg −ηh + i(ξh + ηg).
Now let X be a stationary, centered Gaussian process on R with covariance function rt = E XsXs+t, s, t ∈R.
Then r is non-negative definite, and it is further continuous whenever X is continuous in probability, in which case Bochner’s theorem yields a unique spectral representation rt = ∞ −∞eitxμ(dx), t ∈R, in terms of a bounded, symmetric spectral measure μ on R.
We may derive a similar spectral representation of the process X itself. By a different argument, the result extends to suitable non-Gaussian processes.
As usual, we take the basic probability space to be rich enough to support the required randomization variables.
Proposition 14.19 (spectral representation, Stone, Cram´ er) Let X be an L2-continuous, stationary, centered Gaussian process on R with spectral measure μ. Then there exists a complex, isonormal Gaussian process ζ on L2(μ), such that Xt = ℜ ∞ −∞eitxdζx a.s., t ∈R.
(5) Proof: Writing Y for the right-hand side of (5), we obtain E YsYt = E cos sx dξx −sin sx dηx cos tx dξx −sin tx dηx = cos sx cos tx −sin sx sin tx μ(dx) = cos(s −t)x μ(dx) = ei(s−t)xμ(dx) = rs−t.
Since X, Y are both centered Gaussian, Lemma 14.1 yields Y d = X. Since X and ζ are further continuous and defined on the separable spaces L2(X) and L2(μ), they may be regarded as random elements in suitable Borel spaces. The a.s. representation (5) then follows by Theorem 8.17.
2 Our second representation requires a suitable regularity condition on the spectral measure μ. Write λ for Lebesgue measure on R.
Proposition 14.20 (moving average representation) Let X be an L2-contin-uous, stationary, centered Gaussian process on R with spectral measure μ ≪λ.
Then there exist an isonormal Gaussian process ζ on L2(λ) and a function f ∈L2(λ), such that Xt = ∞ −∞ft−s dζs a.s., t ∈R.
(6) 312 Foundations of Modern Probability Proof: Choose a symmetric density g ≥0 of μ, and define h = √g. Since h lies in L2(λ), it admits a Fourier transform in the sense of Plancherel fs = ˆ hs = (2π)−1/2 lim a→∞ a −a eisxhx dx, s ∈R, (7) which is again real and square integrable. For each t ∈R, the function kx = e−itxhx has Fourier transform ˆ ks = fs−t, and so by Parseval’s identity, rt = ∞ −∞eitx h2 x dx = ∞ −∞hx¯ kx dx = ∞ −∞fsfs−t ds.
(8) Now let ζ be an isonormal Gaussian process on L2(λ). For f as in (7), define a process Y on R by the right-hand side of (6). Then (8) gives E YsYs+t = rt for all s, t ∈R, and so Y d = X by Lemma 14.1. Again, Theorem 8.17 yields the desired a.s. representation of X.
2 For an example, we consider a moving average representation of the sta-tionary Ornstein–Uhlenbeck process. Then let ζ be an isonormal Gaussian process on L2(R, λ), and introduce the process Xt = √ 2 t −∞es−t dζs, t ≥0, which is clearly centered Gaussian. Furthermore, rs,t = E XsXt = 2 s∧t −∞eu−s eu−t du = e−|t−s|, s, t ∈R, as desired. The Markov property of X is clear from the fact that Xt = es−tXs + √ 2 t s eu−t dζu, s ≤t.
We turn to a discussion of multiple Wiener–Itˆ o integrals8 ζn with respect to an isonormal Gaussian process ζ on a separable, infinite-dimensional Hilbert space H.
Without loss of generality, we may choose H to be of the form L2(S, μ), so that the tensor product H⊗n may be identified with L2(Sn, μ⊗n), where μ⊗n denotes the n-fold product measure μ⊗· · ·⊗μ. The tensor product k≤n hk = h1 ⊗· · · ⊗hn of h1, . . . , hn ∈H is then equivalent to the function h1(t1) · · · hn(tn) on Sn. Recall that for any ONB h1, h2, . . . in H, the products j≤n hrj with r1, . . . , rn ∈N form an ONB in H⊗n.
We begin with the basic characterization and existence result for the mul-tiple integrals ζn.
8They are usually denoted by In.
However, we often need a notation exhibiting the dependence on ζ, such as in the higher-dimensional versions of Theorem 28.19 14. Gaussian Processes and Brownian Motion 313 Theorem 14.21 (multiple stochastic integrals, Wiener, Itˆ o) Let ζ be an iso-normal Gaussian process on a separable Hilbert space H. Then for every n ∈N there exists a unique, continuous, linear map ζn : H⊗n →L2(P), such that a.s.
ζn ⊗ k≤nhk = Π k≤n ζhk, h1, . . . , hn ∈H orthogonal.
Here the uniqueness means that ζnh is a.s. unique for every h, and the linearity that ζn(af + bg) = a ζnf + b ζng a.s., a, b ∈R, f, g ∈H⊗n.
In particular we have ζ1h = ζh a.s., and for consistency we define ζ0 as the identity mapping on R.
For the proof, we may take H = L2(I, λ) with I = [0, 1]. Let En be the class of elementary functions of the form f = Σ j ≤m cj ⊗ k≤n1Ak j , (9) where the sets A1 j, . . . , An j ∈BI are disjoint for each j ∈{1, . . . , m}.
The indicator functions 1Ak j are then orthogonal for fixed j, and we need to take ζnf = Σ j ≤m cj Π k≤n ζAk j, (10) where ζA = ζ1A. By the linearity in each factor, ζnf is a.s. independent of the choice of representation (9) for f.
To extend ζn to the entire space L2(In, λ⊗n), we need some further lemmas.
For any function f on In, we introduce the symmetrization ˜ f(t1, . . . , tn) = (n!)−1 Σ p∈Pn f(tp1, . . . , tpn), t1, . . . , tn ∈R+, where Pn is the set of permutations of {1, . . . , n}. We begin with the basic L2-structure, which will later carry over to general f.
Lemma 14.22 (isometry) The elementary integrals ζnf in (10) are orthogo-nal for different n and satisfy E(ζnf)2 = n! ∥˜ f∥2 ≤n! ∥f∥2, f ∈En.
(11) Proof: The second relation in (11) follows from Minkowski’s inequality. To prove the remaining assertions, we first reduce to the case where all sets Ak j belong to a fixed collection of disjoint sets B1, B2, . . . . For any finite index sets J ̸= K in N, we get E Π j ∈J ζBj Π k∈K ζBk = Π j ∈J ∩K E(ζBj)2 Π j ∈J△ K EζBj = 0, which proves the asserted orthogonality. Since clearly ⟨f, g⟩= 0 when f and g involve different index sets, it also reduces the proof of the isometry in (11) to the case where all terms in f involve the same sets B1, . . . , Bn, though 314 Foundations of Modern Probability in possibly different order.
Since ζnf = ζn ˜ f, we may further assume that f = k 1Bk. Then E(ζnf)2 = Π k E(ζBk)2 = Π k λBk = ∥f∥2 = n! ∥˜ f∥2, where the last relation holds since, in the present case, the permutations of f are orthogonal.
2 To extend the integral, we need to show that the set of elementary functions is dense in L2(λ⊗n).
Lemma 14.23 (approximation) The set En is dense in L2(λ⊗n).
Proof: By a standard argument based on monotone convergence and a monotone-class argument, any function f ∈L2(λ⊗n) can be approximated by linear combinations of products k≤n 1Ak, and so it suffices to approximate functions f of the latter type. For each m, we then divide I into 2m intervals Imj of length 2−m, and define fm = f ◦Σ j1,. . . ,jn ⊗ k≤n1Im,jk, (12) where the summation extends over all collections of distinct indices j1, . . . , jn ∈{1, . . . , 2m}. Here fm ∈En for each m, and the sum in (12) tends to 1 a.e.
λ⊗n. Thus, by dominated convergence, fm →f in L2(λ⊗n).
2 So far we have defined the integral ζn as a uniformly continuous mapping on a dense subset of L2(λ⊗n). By continuity it extends immediately to all of L2(λ⊗n), with preservation of both linearity and the norm relations in (11). To complete the proof of Theorem 14.21, it remains to show that ζn k≤n hk = Πk ζhk for orthogonal h1, . . . , hn ∈L2(λ).
This follows from the following technical result, where for any f ∈L2(λ⊗n) and g ∈L2(λ) we write (f ⊗1 g)(t1, . . . , tn−1) = f(t1, . . . , tn) g(tn) dtn.
Lemma 14.24 (recursion) For any f ∈L2(λ⊗n) and g ∈L2(λ) with n ∈N, we have ζn+1(f ⊗g) = ζnf · ζg −n ζn−1 ˜ f ⊗1 g .
(13) Proof: By Fubini’s theorem and Cauchy’s inequality, we have ∥f ⊗g∥= ∥f∥∥g∥, ∥˜ f ⊗1 g∥≤∥˜ f∥∥g∥≤∥f∥∥g∥, and so both sides of (13) are continuous in probability in f and g. Hence, it suffices to prove the formula for f ∈En and g ∈E1. By the linearity of each side, we may next reduce to the case of functions f = k≤n 1Ak and g = 1A, 14. Gaussian Processes and Brownian Motion 315 where A1, . . . , An are disjoint and either A ∩ k Ak = ∅or A = A1. In the former case, we have ˜ f ⊗1 g = 0, and (13) follows from the definitions. In the latter case, (13) becomes ζn+1 A2 × A2 × · · · × An = (ζA)2 −λA ζA2 · · · ζAn.
(14) Approximating 1A2 as in Lemma 14.23 by functions fm ∈E2 with support in A2, we see that the left-hand side equals (ζ2A2)(ζA2) · · · (ζAn), which reduces the proof of (14) to the two-dimensional version ζ2A2 = (ζA)2 −λA. Here we may divide A for each m into 2m subsets Bmj of measure ≤2−m, and note as in Theorem 14.9 and Lemma 14.23 that (ζA)2 = Σ i (ζBmi)2 + Σ i̸=j (ζBmi)(ζBmj) →λA + ζ2A2 in L2.
2 We proceed to derive an explicit representation of the integrals ζn in terms of Hermite polynomials p0, p1, . . . , defined as orthogonal polynomials of degrees 0, 1, . . . with respect to the standard normal distribution N(0, 1). This deter-mines the pn up to normalizations, which we choose for convenience to give leading coefficients 1. The first few polynomials are then p0(x) = 1, p1(x) = x, p2(x) = x2 −1, p3(x) = x3 −3x, . . . .
Theorem 14.25 (polynomial representation, Itˆ o) Let ζ be an isonormal Gaus-sian process on a separable Hilbert space H with associated multiple Wiener–Itˆ o integrals ζ1, ζ2, . . . . Then for any ortho-normal elements h1, . . . , hm ∈H and integers n1, . . . , nm ≥1 with sum n, we have ζn⊗ j ≤mh ⊗nj j = Π j ≤m pnj(ζhj).
Using the linearity of ζn and writing ˆ h = h/∥h∥, we see that the stated formula is equivalent to the factorization ζn⊗ j ≤mh ⊗nj j = Π j ≤m ζnjh ⊗nj j , h1, . . . , hk ∈H orthogonal, (15) together with the representation of the individual factors ζnh⊗n = ∥h∥npn(ζˆ h), h ∈H \ {0}.
(16) Proof: We prove (15) by induction on n. Assuming equality for all integrals of order ≤n, we fix any ortho-normal elements h, h1, . . . , hm ∈H and integers k, n1, . . . , nm ∈N with sum n + 1, and write f = j≤m h ⊗nj j . By Lemma 14.24 and the induction hypothesis, we get ζn+1 f ⊗h⊗k = ζn f ⊗h⊗(k−1) · ζh −(k −1) ζn−1 f ⊗h⊗(k−2) = (ζn−k+1f) ζk−1h⊗(k−1) · ζh −(k −1) ζk−2h⊗(k−2) = (ζn−k+1f)(ζkh⊗k).
316 Foundations of Modern Probability Using the induction hypothesis again, we obtain the desired extension to ζn+1.
It remains to prove (16) for an arbitrary element h ∈H with ∥h∥= 1.
Then conclude from Lemma 14.24 that ζn+1h⊗(n+1) = (ζnh⊗n)(ζh) −n ζn−1h⊗(n−1), n ∈N.
Since ζ01 = 1 and ζ1h = ζh, we see by induction that ζnh⊗n is a polyno-mial in ζh of degree n with leading coefficient 1. By the definition of Hermite polynomials, it remains to show that the integrals ζnh⊗n with different n are orthogonal, which holds by Lemma 14.22.
2 Given an isonormal Gaussian process ζ on a separable Hilbert space H, we introduce the space L2(ζ) = L2(Ω, σ{ζ}, P) of ζ-measurable random variables ξ with Eξ2 < ∞. The n -th polynomial chaos Pn is defined as the closed linear sub-space generated by all polynomials of degree ≤n in the random variables ζh, h ∈H. For every n ∈Z+, we also introduce the n -th homogeneous chaos Hn, consisting of all integrals ζnf, f ∈H⊗n.
The relationship between the mentioned spaces is clarified by the following result, where we write ⊕for direct sum and ⊖for orthogonal complement.
Theorem 14.26 (chaos decomposition, Wiener) Let ζ be an isonormal Gaus-sian process on a separable Hilbert space H, with associated polynomial and homogeneous chaoses Pn and Hn, respectively. Then (i) the Hn are orthogonal, closed, linear sub-spaces of L2(ζ), satisfying Pn = n ⊕ k=0 Hk, n ∈Z+; L2(ζ) = ∞ ⊕ n=0 Hn, (ii) for any ξ ∈L2(ζ), there exist some unique symmetric elements fn ∈ H⊗n, n ≥0, such that ξ = Σ n≥0 ζnfn a.s.
In particular, we note that H0 = P0 = R and Hn = Pn ⊖Pn−1, n ∈N.
Proof: (i) The properties in Lemma 14.22 extend to arbitrary integrands, and so the spaces Hn are mutually orthogonal, closed, linear sub-spaces of L2(ζ).
From Lemma 14.23 or Theorem 14.25 we see that also Hn ⊂Pn.
Conversely, let ξ be a polynomial of degree n in the variables ζh. We may then choose some ortho-normal elements h1, . . . , hm ∈H, such that ξ is a polynomial in ζh1, . . . , ζhm of degree n. Since any power (ζhj)k is a linear combination of the variables p0(ζhj), . . . , pk(ζhj), Theorem 14.25 shows that ξ is a linear combination of multiple integrals ζkf with k ≤n, which means that ξ ∈ k≤nHk. This proves the first relation.
To prove the second relation, let ξ ∈L2(ζ)⊖ nHn. In particular, ξ⊥(ζh)n for every h ∈H and n ∈Z+.
Since Σ n|ζh|n/n! = e|ζh| ∈L2, the series eiζh = Σ n(iζh)n/n! converges in L2, and we get ξ⊥eiζh for every h ∈H. By linearity of the integral ζh, we hence obtain for any h1, . . . , hn ∈H, n ∈N, 14. Gaussian Processes and Brownian Motion 317 E ξ exp Σ k≤n iukζhk = 0, u1, . . . , un ∈R.
Applying Theorem 6.3 to the distributions of (ζh1, . . . , ζhn) under the bounded measures μ± = E(ξ±; ·), we conclude that E ξ; (ζh1, . . . , ζhn) ∈B = 0, B ∈B n, which extends by a monotone-class argument to E( ξ; A) = 0 for arbitrary A ∈σ{ζ}. Since ξ is ζ -measurable, it follows that ξ = E( ξ| ζ) = 0 a.s.
(ii) By (i) any element ξ ∈L2(ζ) has an orthogonal expansion ξ = Σ n≥0 ζnfn = Σ n≥0 ζn˜ fn, for some elements fn ∈H⊗n with symmetrizations ˜ fn, n ∈Z+. Now assume that also ξ = Σ n ζngn. Projecting onto Hn and using the linearity of ζn, we get ζn(gn −fn) = 0. By the isometry in (11) it follows that ∥˜ gn −˜ fn∥= 0, and so ˜ gn = ˜ fn.
2 We may often write the decomposition in (ii) as ξ = ΣnJnξ, where Jnξ = ζnfn. We proceed to derive a Wiener chaos expansion in L2(ζ, H), stated in terms of the WI-integrals ζnfn(·, t) for suitable functions fn on T n+1.
Corollary 14.27 (chaos expansion of processes) Let ζ be an isonormal Gaus-sian process on H = L2(T, μ), and let X ∈L2(ζ, H). Then there exist some functions fn ∈L2(μn+1), n ≥0, each symmetric in the first n arguments, such that (i) Xt = Σ n ζnfn(·, t) in L2(ζ, H), (ii) ∥X∥2 = Σ n n! ∥fn∥2.
Proof: We may regard X as an element in the space L2(ζ) ⊗H, admitting an ONB of the form (αi ⊗hj), for suitable ONB’s α1, α2, . . . ∈L2(ζ) and h1, h2, . . . ∈H. This yields an orthogonal expansion X = Σj ξj ⊗hj in terms of some elements ξj ∈L2(ζ). Applying the classical Wiener chaos expansion to each ξj, we get an orthogonal decomposition X = Σ n,j Jnξj ⊗hj = Σ n,j ζngn,j ⊗hj, (17) for some symmetric functions gn,j ∈L2(μn) satisfying ∥X∥2 = Σ n,j ∥ζngn,j∥2 = Σ n,j n! ∥gn,j∥2.
(18) Regarding the hj as functions in L2(μ), we may now introduce the functions fn(s, t) = Σ j gn,j(s) hj(t) in L2(μn+1), s ∈T n, t ∈T, n ≥0, which may clearly be chosen to be product measurable on T n+1 and symmetric in s ∈T n. The relations (i) and (ii) are obvious from (17) and (18).
2 Next we will see how the chaos expansion is affected by conditioning.
318 Foundations of Modern Probability Corollary 14.28 (conditioning) Let ξ ∈L2(ζ) with chaos expansion ξ = Σ nζnfn. Then E( ξ | 1Aζ) = Σ n≥0 ζn(1⊗n A fn) a.s., A ∈T .
Proof: By Lemma 14.23 we may choose ξ = ζnf with f ∈En, and by linearity we may take f = j 1Bj for some disjoint sets B1, . . . , Bn ∈T .
Writing ξ = Π j ≤n ζBj = Π j ≤n ζ(Bj ∩A) + ζ(Bj ∩Ac) = Σ J ∈2n Π j ∈J ζ(Bj ∩A) Π k∈Jcζ(Bk ∩Ac), and noting that the conditional expectation of the last product vanishes by independence unless Jc = ∅, we get E( ξ | 1Aζ) = Π j ≤n ζ(Bj ∩A) = ζn⊗ j ≤n1Bj∩A = ζn(1⊗n A f).
2 We also need a related property for processes adapted to a Brownian mo-tion.
Corollary 14.29 (adapted processes) Let B be a Brownian motion generated by an isonormal Gaussian process ζ on L2(R+, λ). Then a process X ∈L2(ζ, λ) is B-adapted iffit can be represented as in Corollary 14.27, with fn(·, t) sup-ported by [0, t]n for every t ≥0 and n ≥0.
Proof: The sufficiency is obvious. Now let X be adapted. We may then apply Corollary 14.28 for every fixed t ≥0.
Alternatively, we may apply Corollary 14.27 with ζ replaced by 1[0,t]ζ, and use the a.e. uniqueness of the representation. In either case, it remains to check that the resulting functions 1⊗n [0,t]fn(·, t) remain product measurable.
2 Exercises 1. Let ξ1, . . . , ξn be i.i.d. N(m, σ2). Show that the random variables ¯ ξ = n−1Σk ξk and s2 = (n−1)−1Σk (ξk−¯ ξ)2 are independent, and that (n−1)s2 d = Σk<n(ξk−m)2.
(Hint: To avoid calculations, use the symmetry in Proposition 14.2.) 2. For a Brownian motion B, put tnk = k 2−n, and define ξ0,k = Bk −Bk−1 and ξnk = Btn,2k−1−1 2 (Btn−1,k−1 + Btn−1,k), k, n ≥1. Show that the ξnk are independent Gaussian. Use this fact to construct a Brownian motion from a sequence of i.i.d.
N(0, 1) random variables.
14. Gaussian Processes and Brownian Motion 319 3. Let B be a Brownian motion on [0, 1], and define Xt = Bt −tB1. Show that X⊥ ⊥B1. Use this fact to express the conditional distribution of B, given B1, in terms of a Brownian bridge.
4. Combine the transformations in Lemma 14.6 with the Brownian scaling B′ t = c−1B(c2t) to construct a class of transformations preserving the distribution of a Brownian bridge.
5. Show that the Brownian bridge is an inhomogeneous Markov process. (Hint: Use the transformations in Lemma 14.6 or verify the condition in Proposition 14.7.) 6.
Let B = (B1, B2) be a Brownian motion in R2, and choose the tnk as in Theorem 14.9. Show that Σk(B1 tn,k −B1 tn,k−1)(B2 tn,k −B2 tn,k−1) →0 in L2 or a.s., respectively. (Hint: Reduce to the case of quadratic variation.) 7. Use Theorem 9.28 to construct an rcll version B of Brownian motion. Then show as in Theorem 14.9 that B has quadratic variation [B]t ≡t, and conclude that B is a.s. continuous.
8. For a Brownian motion B, show that inf{t > 0; Bt > 0} = 0 a.s. (Hint: By Kolmogorov’s 0−1 law, the stated event has probability 0 or 1. Alternatively, use Theorem 14.18.) 9. For a Brownian motion B, define τa = inf{t > 0; Bt = a}. Compute the density of the distribution of τa for a ̸= 0, and show that E τa = ∞. (Hint: Use Proposition 14.13.) 10. For a Brownian motion B, show that Zt = exp(cBt −1 2c2t) is a martingale for every c. Use optional sampling to compute the Laplace transform of τa above, and compare with the preceding result.
13. Show that the local maxima of a Brownian motion are a.s. dense in R, and the corresponding times are a.s. dense in R+. (Hint: Use the preceding result.) 14. Show by a direct argument that lim supt t−1/2Bt = ∞a.s. as t →0 and ∞, where B is a Brownian motion. (Hint: Use Kolmogorov’s 0−1 law.) 15. Show that the local law of the iterated logarithm for Brownian motion remains valid for the Brownian bridge.
16. For a Brownian motion B in Rd, show that the process |B| satisfies the law of the iterated logarithm at 0 and ∞.
17. Let ξ1, ξ2, . . . be i.i.d. N(0, 1). Show that lim supn(2 log n)−1/2ξn = 1 a.s.
18. For a Brownian motion B, show that Mt = t−1Bt is a reverse martingale, and conclude that t−1Bt →0 a.s. and in Lp, p > 0, as t →∞. (Hint: The limit is degenerate by Kolmogorov’s 0−1 law.) Deduce the same result from Theorem 25.9.
19. For a Brownian bridge B, show that Mt = (1 −t)−1Bt is a martingale on [0, 1). Check that M is not L1-bounded.
20. Let ζn be the n-fold Wiener–Itˆ o integral w.r.t. Brownian motion B on R+.
Show that the process Mt = ζn(1[0,t]n) is a martingale. Express M in terms of B, and compute the expression for n = 1, 2, 3. (Hint: Use Theorem 14.25.) 21. Let ζ1, . . . , ζn be independent, isonormal Gaussian processes on a separable Hilbert space H. Show that there exists a unique continuous linear mapping k ζk from H⊗n to L2(P) such that k ζk k hk = Πk ζkhk a.s. for all h1, . . . , hn ∈H.
Also show that k ζk is an isometry.
Chapter 15 Poisson and Related Processes Random measures, Laplace transforms, binomial processes, Poisson and Cox processes, transforms and thinnings, mapping and marking, mixed Poisson and binomial processes, existence and randomization, Cox and thinning uniqueness, one-dimensional uniqueness criteria, independent increments, symmetry, Poisson reduction, random time change, Poisson convergence, positive, symmetric, and centered Poisson integrals, double Poisson integrals, decoupling Brownian motion and the Poisson processes are universally recognized as the basic building blocks of modern probability. After studying the former process in the previous chapter, we turn to a detailed study of Poisson and related processes. In particular, we construct Poisson processes on bounded sets as mixed binomial processes, and derive a variety of Poisson characterizations in terms of independence, symmetry, and renewal properties. A randomization of the underlying intensity measure leads to the richer class of Cox processes.
We also consider related randomizations of general point processes, obtainable through independent mappings of the individual points. In particular, we will see how such transformations preserve the Poisson property.
For most purposes, it is convenient to regard Poisson and other point pro-cesses as integer-valued random measures. The relevant parts of this chapter then serve as an introduction to general random measure theory.
Here we will see how Cox processes and thinnings can be used to derive some general uniqueness criteria for simple point processes and diffuse random measures.
The notions and results of this chapter form a basis for the corresponding convergence theory in Chapters 23 and 30, where typically Poisson and Cox processes arise as distributional limits.
We also stress the present significance of the compensators and predictable processes from Chapter 10. In particular, we may use a predictable trans-formation to reduce a ql-continuous simple point process on R+ to Poisson, in a similar way as a continuous local martingale can be time-changed into a Brownian motion. This gives a basic connection to the stochastic calculus of Chapters 18–19. Using discounted compensators, we can even go beyond the ql-continuous case.
We finally recall the fundamental role of compound Poisson processes for the theory of L´ evy processes and their distributions in Chapters 7 and 16, and for the excursion theory in Chapter 29. In such connections, we often encounter Poisson integrals of various kind, whose existence is studied toward the end of this chapter. We also characterize the existence of double Poisson integrals of 321 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 322 Foundations of Modern Probability the form ξ2f or ξηf, which provides a connection to the multiple Wiener–Itˆ o integrals in Chapters 14, 19, and 21.
Given a localized Borel space S, we define a random measure on S as a locally finite kernel from the basic probability space (Ω, A, P) into S. Thus, ξ(ω, B) is a locally finite measure in B ∈S for fixed ω and a random variable in ω ∈Ω for fixed B. Equivalently, we may regard ξ as a random element in the space MS of locally finite measures on S, endowed with the σ-field generated by all evaluation maps πB : μ →μB with B ∈S. The intensity Eξ of ξ is the measure on S defined by (Eξ)B = E(ξB) on S.
A point process on S is defined as an integer-valued random measure ξ.
Alternatively, it may be regarded as a random element in the space NS ⊂MS of all locally finite, integer-valued measures on S. By Theorem 2.18, it may be written as ξ = Σk≤κ δσk for some random elements σ1, σ2, . . . in S and κ in ¯ Z+, and we say that ξ is simple if the σk are all distinct. For a general ξ, we may eliminate the possible multiplicities to form a simple point process ξ∗, equal to the counting measure on the support of ξ. Its measurability as a function of ξ is ensured by Theorem 2.19.
We begin with some basic uniqueness criteria for random measures. Stronger results for simple point processes and diffuse random measures are given in The-orem 15.8, and related convergence criteria appear in Theorem 23.16. Recall that ˆ S+ denotes the class of measurable functions f ≥0 with bounded support.
For countable classes I ⊂S, we write ˆ I+ for the family of simple, I-measurable functions f ≥0 on S. Dissecting systems were defined in Chapter 1.
Lemma 15.1 (uniqueness) Let ξ, η be random measures on S, and fix any dissecting semi-ring I ⊂ˆ S. Then ξ d = η under each of the conditions (i) ξf d = ηf, f ∈ˆ I+, (ii) E e−ξf = E e−ηf, f ∈ˆ S+.
When S is Polish, we may take f ∈ˆ CS or f ≤1 in (i)−(ii).
Proof: (i) By the Cram´ er–Wold Theorem 6.5, the stated condition implies (ξI1, . . . , ξIn) d = (ηI1, . . . , ηIn), I1, . . . , In ∈I, n ∈N.
By a monotone-class argument in MS, we get ξ d = η on the σ-field ΣI in MS generated by all projections πI : μ →μI, I ∈I. By a monotone-class argument in S, πB remains ΣI-measurable for every B ∈ˆ S. Hence, ΣI contains the σ-field generated by the latter projections, and so it agrees with the basic σ-field on MS, which shows that ξ d = η. The statement for Polish spaces follows by a simple approximation.
(ii) Use the uniqueness Theorem 6.3 for Laplace transforms.
2 A random measure ξ on a Borel space S is said to have independent incre-ments, if the random variables ξB1, . . . , ξBn are independent for disjoint sets 15. Poisson and Related Processes 323 B1, . . . , Bn ∈ˆ S. By a Poisson process on S with intensity measure μ ∈MS we mean a point process ξ on S with independent increments, such that ξB is Poisson distributed with mean μB for every B ∈ˆ S. By Lemma 15.1 the stated conditions define a distribution for ξ, determined by the intensity measure μ.
More generally, for any random measure η on S, we say that a point process ξ is a Cox process1 directed by η, if it is conditionally Poisson given η with E(ξ| η) = η a.s. Choosing η = α μ for some measure μ ∈MS and random variable α ≥0, we get a mixed Poisson process based on μ and α.
Given a probability kernel ν : S →T and a point measure μ = Σkδsk, we define a ν -transform of μ as a point process η = Σkδτk on T, where the τk are independent random elements in T with distributions νsk, assumed to be such that η becomes a.s. locally finite. The distribution of η is clearly a kernel Pμ on the appropriate subset of NT. Replacing μ by a general point process ξ on S, we may define a ν -transform η of ξ by L(η | ξ) = Pξ, as long as this makes η a.s. locally finite.
By a ρ -randomization of ξ we mean a transform with respect to the kernel νs = δs ⊗ρs from S to S × T, where ρ is a probability kernel from S to T. In particular, we may form a uniform randomization by choosing ρ to be Lebesgue measure λ on [0, 1]. We may also think of a randomization as a ρ -marking of ξ, where independent random marks of distributions ρs are attached to the points of ξ with locations s. We further consider p -thinnings of ξ, obtained by independently deleting the points of ξ with probability 1 −p.
A binomial process2 on S is defined as a point process of the form ξ = Σ i≤n δσi, where σ1, . . . , σn are independent random elements in S with a com-mon distribution μ. The name is suggested by the fact that ξB is binomially distributed for every B ∈S. Replacing n by a random variable κ⊥ ⊥(σi) in Z+ yields a mixed binomial process.
To examine the relationship between the various processes, it is often helpful to use the Laplace transforms of random measures ξ on S, defined for any measurable function f ≥0 on the state space S by ψξ(f) = Ee−ξf. Note that ψξ determines the distribution L(ξ) by Lemma 15.1. Here we list some basic formulas, where we recall that a kernel ν between measurable spaces S and T may be regarded as an operator between the associated function spaces, given by νf(s) = ν(s, dt)f(t).
Lemma 15.2 (Laplace transforms) Let f ∈ˆ S+ with S Borel. Then a.s.
(i) for a mixed binomial process ξ based on μ and κ, E(e−ξf| κ) = (μe−f)κ, (ii) for a Cox process ξ directed by η, E(e−ξf| η) = exp −η(1 −e−f) , 1also called a doubly stochastic Poisson process 2Binomial processes have also been called sample or Bernoulli processes, or, when S ⊂R, processes with the order statistics property.
324 Foundations of Modern Probability (iii) for a ν-transform ξ of η, E(e−ξf| η) = exp η log νe−f , (iv) for a p -thinning ξ of η, E(e−ξf| η) = exp −η log 1 −p(1 −e−f) .
Throughout, we use the compact notation explained in previous chapters. For example, in a more explicit notation the formula in part (iii) becomes3 E exp − f(s) ξ(ds) η = exp log e−f(t) νs(dt) η(ds) .
Proof: (i) Let ξ = Σk≤n δσk, where σ1, . . . , σn are i.i.d. μ and n ∈Z+. Then Ee−ξf = E expΣ k −f(σk) = Π kEe−f(σk) = (μe−f)n.
The general result follows by conditioning on κ.
(ii) For a Poisson random variable κ with mean c, we have Esκ = e−c Σ k≥0 ck k! sk = e−cecs = e−c(1−s), s ∈(0, 1].
(1) Hence, if ξ is a Poisson process with intensity μ, we get for any function f = Σk≤n ck1Bk with disjoint B1, . . . , Bn ∈ˆ S and constant c1, . . . , cn ≥0 Ee−ξf = E expΣ k −ck ξBk = Π kE (e−ck)ξBk = Π k exp −μBk(1 −e−ck) = expΣ k −μBk(1 −e−ck) = exp −μ(1 −e−f) , which extends by monotone and dominated convergence to arbitrary f ∈S+.
Here the general result follows by conditioning on η.
(iii) First let η = Σk δsk be non-random in NS. Choosing some independent random elements τk in T with distributions νsk, we get Ee−ξf = E expΣ k{−f(τk)} = Π kEe−f(τk) = Π kνske−f = expΣ k log νske−f = exp η log νe−f .
3Such transliterations are henceforth left to the reader, who may soon see the advantage of our compact notation.
15. Poisson and Related Processes 325 Again, the general result follows by conditioning on η.
(iv) This is easily deduced from (iii). For a direct proof, let η = Σk δsk as before, and put ξ = Σk ϑkδsk for some independent random variables ϑk in {0, 1} with Eϑk = p(sk). Then Ee−ξf = E expΣk −ϑkf(sk) = Π kEe−ϑkf(sk) = Π k 1 −p(sk) 1 −e−f(sk) = expΣk log 1 −p(sk) 1 −e−f(sk) = exp η log 1 −p (1 −e−f) , which extends by conditioning to general η.
2 We proceed with some basic closure properties for Cox processes and point process transforms. In particular, we note that the Poisson property is pre-served by randomizations.
Theorem 15.3 (mapping and marking, Bartlett, Doob, Pr´ ekopa, OK) Let μ : S →T and ν : T →U be probability kernels between the Borel spaces S, T, U.
(i) For a Cox process ξ directed by a random measure η on S, consider a locally finite μ-transform ζ⊥ ⊥ξ η of ξ. Then ζ is a Cox process on U directed by ημ.
(ii) For a μ-transform ξ of a point process η on S, consider a locally finite ν-transform ζ⊥ ⊥ξ η of ξ. Then ζ is a μν-transform of η.
Recall that the kernel μν : S →U is given by (μν)sf = (μν)s(du) f(u) = μs(dt) νt(du) f(u).
Proof: (i) By Proposition 8.9 and Lemma 15.2 (ii) and (iii), we have E(e−ζf| η) = E E e−ζf ξ, η η = E E(e−ζf| ξ) η = E exp ξ log μe−f η = exp −η(1 −μe−f) = exp −ημ(1 −e−f) .
Now use Lemmas 15.1 (ii) and 15.2 (ii).
326 Foundations of Modern Probability (ii) Using Lemma 15.2 (iii) twice, we get E(e−ζf| η) = E E(e−ζf| ξ) η = E exp ξ log νe−f η = exp η log μνe−f .
Now apply Lemmas 15.1 (ii) and 15.2 (ii).
2 We turn to a basic relationship between mixed Poisson and binomial pro-cesses. The equivalence (i) ⇔(ii) is elementary for bounded S, and leads in that case to an easy construction of the general Poisson process. The signifi-cance of mixed Poisson and binomial processes is further clarified by Theorem 15.14 below. Write 1Bμ = 1B · μ for the restriction of μ to B.
Theorem 15.4 (mixed Poisson and binomial processes, Wiener, Moyal, OK) For a point process ξ on S, these conditions are equivalent: (i) ξ is a mixed Poisson or binomial process, (ii) 1B ξ is a mixed binomial process for every B ∈ˆ S.
In that case, (iii) 1B ξ ⊥ ⊥ ξB 1Bcξ, B ∈ˆ S.
Proof, (i) ⇒(ii): First let ξ be a mixed Poisson process based on μ and ρ. Since 1B ξ is again mixed Poisson and based on 1Bμ and ρ, we may take B = S and ∥μ∥= 1. Now consider a mixed binomial process η based on μ and κ, where κ is conditionally Poisson given ρ with E(κ| ρ) = ρ. Using (1) and Lemma 15.2 (i)−(ii), we get for any f ∈S+ Ee−ηf = E(μe−f)κ = EE (μe−f)κ ρ = E exp −ρ(1 −μe−f) = E exp −ρμ(1 −e−f) = Ee−ξf, and so ξ d = η by Lemma 15.1.
Next, let ξ be a mixed binomial process based on μ and κ. Fix any B ∈S with μB > 0, and consider a mixed binomial process η based on ˆ μB = 1Bμ/μB and β, where β is conditionally binomially distributed given κ, with parameters κ and μB. Letting f ∈S+ be supported by B and using Lemma 15.2 (i) twice, we get Ee−ηf = E ˆ μBe−fβ = EE ˆ μBe−fβ κ = E 1 −μB 1 −ˆ μBe−f κ = E(μe−f)κ = Ee−ξf, 15. Poisson and Related Processes 327 and so 1B ξ d = η by Lemma 15.1.
(ii) ⇒(i): Let 1Bξ be mixed binomial for every B ∈ˆ S. Fix a localizing se-quence Bn ↑S in ˆ S, so that the restrictions 1Bnξ are mixed binomial processes based on some probability measures μn. Since the μn are unique, we see as above that μm = 1Bmμn/μnBm whenever m ≤n, and so there exists a measure μ ∈MS with μn = 1Bnμ/μBn for all n ∈N.
When ∥μ∥< ∞, we may first normalize to ∥μ∥= 1. Since snk k →sn as sk →s ∈(0, 1) and nk →n ∈¯ N, we get for any f ∈ˆ S+ with μf > 0, by Lemma 15.2 (i) and monotone and dominated convergence, Ee−ξf ←Ee−ξnf = E(μne−f)ξBn →E(μe−f)ξS.
Choosing f = ε1Bn and letting ε ↓0, we get ξS < ∞a.s., and so the previous calculation applies to arbitrary f ∈ˆ S+, which shows that ξ is a mixed binomial process based on μ.
Next let ∥μ∥= ∞. If f ∈ˆ S+ is supported by Bm for a fixed m ∈N, then for any n ≥m, Ee−ξf = E(μne−f)ξBn = E 1 −μ(1 −e−f) μBn μBnαn , where αn = ξBn/μBn. By Theorem 6.20, we have convergence αn d →α in [0, ∞] along a sub-sequence. Noting that (1 −m−1 n )mnxn →e−x as mn →∞ and 0 ≤xn →x ∈[0, ∞], we get by Theorem 5.31 Ee−ξf = E exp −αμ(1 −e−f) .
As before, we get α < ∞a.s., and so by monotone and dominated convergence, we may extend the displayed formula to arbitrary f ∈ˆ S+. By Lemmas 15.1 and 15.2 (ii), ξ is then distributed as a mixed Poisson process based on μ and α.
(iii) We may take ∥μ∥< ∞, since the case of ∥μ∥= ∞will then follow by a martingale or monotone-class argument. Then consider any mixed binomial process ξ = Σk≤κ δσk, where the σk are i.i.d. μ and independent of κ. For any B ∈S, we note that 1B ξ and 1Bcξ are conditionally independent bino-mial processes based on 1Bμ and 1Bcμ, respectively, given the variables κ and ϑk = 1{k ≤κ, σk ∈B}, k ∈N. Thus, 1B ξ is conditionally a binomial process based on μB and ξB, given 1Bcξ, ξB, and ϑ1, ϑ2, . . . . Since the conditional dis-tribution depends only on ξB, we obtain 1B ξ⊥ ⊥ξB (1Bcξ, ϑ1, ϑ2, . . .), and the assertion follows.
2 Using the easy necessity part of Theorem 15.4, we may prove the existence of Cox processes and transforms. The result also covers the cases of binomial processes and randomizations.
328 Foundations of Modern Probability Theorem 15.5 (Cox and transform existence) (i) For any random measure η on S, there exists a Cox process ξ directed by η.
(ii) For any point process ξ on S and probability kernel ν : S →T, there exists a ν-transform ζ of ξ.
Proof: (i) First let η = μ be non-random with μS ∈(0, ∞). By Corollary 8.25 we may choose a Poisson distributed random variable κ with mean μS and some independent i.i.d. random elements σ1, σ2, . . . in S with distribution μ/μS. By Theorem 15.4, the random measure ξ = Σj≤κ δσj is then Poisson with intensity μ.
For general μ ∈Mμ, we may choose a partition of S into disjoint subsets B1, B2, . . . ∈ˆ S, such that μBk ∈(0, ∞) for all k. As before, there exists for every k a Poisson process ξk on S with intensity μk = 1Bkμ, and by Corollary 8.25 we may choose the ξk to be independent. Writing ξ = Σk ξk and using Lemma 15.2 (i), we get for any measurable function f ≥0 on S Ee−ξf = Π kEe−ξkf = Π k exp −μk(1 −e−f) = exp −Σ k μk(1 −e−f) = exp −μ(1 −e−f) .
Using Lemmas 15.1 (ii) and 15.2 (i), we conclude that ξ is a Poisson process with intensity μ.
Now let ξμ be a Poisson process with intensity μ. Then for any m1, . . . , mn ∈ Z+ and disjoint B1, . . . , Bn ∈ˆ S, P ∩ k≤n ξμBk = mk = Π k≤n e−μBk(μBk)mk/mk!, which is a measurable function of μ. The measurability extends to arbitrary sets Bk ∈ˆ S, since the general probability on the left is a finite sum of such products. Now the sets on the left form a π-system generating the σ-field in NS, and so Theorem 1.1 shows that Pμ = L(ξμ) is a probability kernel from MS to NS. Then Lemma 8.16 ensures the existence, for any random measure η on S, of a Cox process ξ directed by η.
(ii) First let μ = Σk δsk be non-random in NS. Then Corollary 8.25 yields some independent random elements τk in T with distributions νsk, making ζμ = Σk δτk a ν-transform of μ. Letting B1, . . . , Bn ∈T and s1, . . . , sn ∈(0, 1), we get by Lemma 15.2 (iii) E Π k sζμBk k = E exp ζμΣ k1Bk log sk = exp μ log ν expΣ k1Bk log sk = exp μ log ν Π k (sk)1Bk.
15. Poisson and Related Processes 329 Using Lemma 3.2 (i) twice, we see that ν k(sk)1Bk is a measurable function on S for fixed s1, . . . , sn, and so the right-hand side is a measurable func-tion of μ. Differentiating mk times with respect to sk for each k and setting s1 = · · · = sn = 1, we conclude that the probability P k{ζμBk = mk} is a measurable function of μ for any m1, . . . , mn ∈Z+. As before, Pμ = L(ζμ) is then a probability kernel from NS to NT, and the general result follows by Lemma 8.16.
2 We proceed with some basic properties of Poisson and Cox processes.
Lemma 15.6 (Cox simplicity and integrability) Let ξ be a Cox process di-rected by a random measure η on S. Then (i) ξ is a.s. simple ⇔ η is a.s. diffuse, (ii) {ξf < ∞} = η(f ∧1) < ∞ a.s. for all f ∈S+.
Proof: (i) First reduce by conditioning to the case where ξ is Poisson with intensity μ. Since both properties are local, we may also assume that ∥μ∥∈ (0, ∞). By a further conditioning based on Theorem 15.4, we may then take ξ to be a binomial process based on μ, say ξ = Σk≤n δσk, where the σk are i.i.d.
μ. Then Fubini’s theorem yields P{σi = σj} = μ{s} μ(ds) = Σ s(μ{s})2, i ̸= j, which shows that the σk are a.s. distinct iffμ is diffuse.
(ii) Conditioning on η and using Theorem 8.5, we may again reduce to the case where ξ is Poisson with intensity μ. For any f ∈S+, we get as 0 < r →0 exp −μ(1 −e−rf) = Ee−rξf →P{ξf < ∞}.
Since 1 −e−f ≍f ∧1, we have μ(1 −e−rf) ≡∞when μ(f ∧1) = ∞, whereas μ(1 −e−rf) →0 by dominated convergence when μ(f ∧1) < ∞.
2 The following uniqueness property will play an important role below.
Lemma 15.7 (Cox and thinning uniqueness, Krickeberg) Let ξ, ˜ ξ be Cox pro-cesses on S directed by some random measures η, ˜ η, or p -thinnings of some point processes η, ˜ η, for a fixed p ∈(0, 1]. Then ξ d = ˜ ξ ⇔ η d = ˜ η.
Proof: The implications to the left are obvious. To prove the implication to the right for the Cox transform, we may invert Lemma 15.2 (ii) to get for f ∈ˆ S+ Ee−ηf = E exp ξ log(1 −f) , f < 1.
330 Foundations of Modern Probability In case of p -thinnings, we may use Lemma 15.2 (iv) instead to get Ee−ηf = E exp ξ log 1 −p−1(1 −e−f) , f < −log(1 −p).
In both cases, the assertion follows by Lemma 15.1 (ii).
2 Using Cox and thinning transforms, we may partially strengthen the general uniqueness criteria of Theorem 15.1. Related convergence criteria are given in Chapter 23 and 30.
Theorem 15.8 (one-dimensional uniqueness criteria, M¨ onch, Grandell, OK) Let S be a Borel space with a dissecting ring U ⊂ˆ S. Then (i) for any point processes ξ, η on S, we have ξ∗d = η∗iff P{ξU = 0} = P{ηU = 0}, U ∈U, (ii) for any simple point processes or diffuse random measures ξ, η on S, we have ξ d = η ifffor a fixed c > 0, E e−c ξU = E e−c ηU, U ∈U, (iii) for a simple point process or diffuse random measure ξ and an arbitrary random measure η on S, we have ξ d = η iff ξU d = ηU, U ∈U.
Proof: (i) Let C be the class of sets {μ; μU = 0} in NS with U ∈U, and note that C is a π-system, since for any B, C ∈U, {μB = 0} ∩{μC = 0} = μ(B ∪C) = 0 ∈C, by the ring property of U.
Furthermore, the sets M ∈BNS with P{ξ ∈ M} = P{η ∈M} form a λ -system D, which contains C by hypothesis. Hence, Theorem 1.1 (i) yields D ⊃σ(C), which means that ξ d = η on σ(C).
Since the ring U is dissecting, it contains a dissecting system (Inj), and for any μ ∈NS we have Σ j μ(U ∩Inj) ∧1 →μ∗U, U ∈U, since the atoms of μ are ultimately separated by the sets Inj. Since the sum on the left is σ(C)-measurable, so is the limit μ∗U for every U ∈U. By the proof of Lemma 15.1, the measurability extends to the map μ →μ∗, and we conclude that ξ∗d = η∗.
(ii) First let ξ, η be diffuse.
Letting ˜ ξ, ˜ η be Cox processes directed by c ξ, c η, respectively, we get by conditioning and hypothesis P{˜ ξU = 0} = E e−c ξU = E e−c ηU = P{˜ ηU = 0}, U ∈U.
15. Poisson and Related Processes 331 Since ˜ ξ, ˜ η are a.s. simple by Lemma 15.6 (i), part (i) yields ˜ ξ d = ˜ η, and so ξ d = η by Lemma 15.7 (i).
Next let ξ, η be simple point processes. For p = 1 −e−c, let ˜ ξ, ˜ η be p -thinnings of ξ, η, respectively. Since Lemma 15.2 (iv) remains true when 0 ≤ f ≤∞, we may take f = ∞· 1U for any U ∈U to get P{˜ ξU = 0} = E exp ξ log(1 −p 1U) = E exp ξU log(1 −p) = Ee−c ξ, and similarly for ˜ η in terms of η. Hence, the simple point processes ˜ ξ, ˜ η satisfy the condition in (i), and so ˜ ξ d = ˜ η, which implies ξ d = η.
(iii) First let ξ be a simple point process.
The stated condition yields ηU ∈Z+ a.s. for every U ∈U, and so η is a point process by Lemma 2.18.
By (i) we get ξ d = η∗, and in particular ηU d = ξU d = η∗U for all U ∈U, which implies η = η∗since the ring U is covering.
Next let η be a.s. diffuse. Letting ˜ ξ, ˜ η be Cox processes directed by ξ, η, respectively, we get ˜ ξU d = ˜ ηU for any U ∈U by Lemma 15.2 (ii). Since ˜ ξ is simple by Lemma 15.6 (i), we conclude as before that ˜ ξ d = ˜ η, and so ξ d = η by Lemma 15.7 (i).
2 The last result yields a nice characterization of Poisson processes: Corollary 15.9 (Poisson criterion, R´ enyi) Let ξ be a random measure on S with Eξ{s} ≡0 a.s., and fix a dissecting ring U ⊂ˆ S. Then these conditions are equivalent: (i) ξ is a Poisson process, (ii) ξU is Poisson distributed for every U ∈U.
Proof: Assume (ii). Since λ = Eξ ∈MS, Theorem 15.5 yields a Poisson process η on S with Eη = λ. Here ξU d = ηU for all U ∈U, and since η is simple by Corollary 15.6 (i), we get ξ d = η by Theorem 15.8 (iii), proving (i). 2 Much of the previous theory extends to processes with marks. Given some Borel spaces S and T, we define a T-marked point process on S as a point process ξ on S × T with ξ({s} × T) ≤1 for all s ∈S.
Say that ξ has independent increments, if the point processes ξ(B1 × ·), . . . , ξ(Bn × ·) on T are independent for any disjoint sets B1, . . . , Bn ∈ˆ S.
We also say that ξ is a Poisson process, if ξ is Poisson in the usual sense on the product space S×T. We may now characterize Poisson processes in terms of the independence property, extending Theorem 13.6. The result plays a crucial role in Chapters 16 and 29. A related characterization in terms of compensators is given in Corollary 15.17.
For future purposes, we consider a slightly more general case. For any S-marked point process ξ on T, we introduce the discontinuity set D = {t ∈T; 332 Foundations of Modern Probability E ξ({t} × S) > 0}, and say that ξ is an extended Poisson process, if it is an ordinary Poisson process on Dc, and the contributions ξt to the points t ∈D are independent point measures on S with mass ≤1. Note that L(ξ) is then uniquely determined by E ξ.
Theorem 15.10 (independent-increment point processes, Bateman, Copeland & Regan, Itˆ o, Kingman) For an S-marked point process ξ on T, these condi-tions are equivalent: (i) ξ has independent T-increments, (ii) ξ is an extended Poisson process on T × S.
Proof (OK): Subtracting the fixed discontinuities, we may reduce to the case where E ξ({t} × S) ≡0. Here we first consider a simple point process ξ on T = [0, 1] with E ξ{t} ≡0. Define a set function ρB = −log Ee−ξB, B ∈ˆ T , and note that ρB ≥0 for all B since ξB ≥0 a.s.
If ξ has independent increments, then for disjoint B, C ∈ˆ T , ρ(B ∪C) = −log Ee−ξB−ξC = −log Ee−ξBEe−ξC = ρB + ρC, which shows that ρ is finitely additive. It is even countably additive, since Bn ↑B in ˆ T implies ρBn ↑ρB by monotone and dominated convergence. It is further diffuse, since ρ{t} = −log Ee−ξ{t} = 0 for all t ∈T.
Now Theorem 15.5 yields a Poisson process η on T, such that E η = c−1ρ with c = 1 −e−1. For any B ∈ˆ S, we get by Lemma 15.2 (ii) Ee−ηB = exp −c−1ρ(1 −e−1B) = e−ρB = Ee−ξB.
Since ξ, η are simple by hypothesis and Lemma 15.6 (i), we get ξ d = η by Theorem 15.8 (ii).
In the marked case, fix any B ∈BT×S, and define a simple point process on T by η = 1Bξ(· × S). Then the previous case shows that ξB = ∥η∥is Poisson distributed, and so ξ is Poisson by Corollary 15.9.
2 The last result extends to a representation of random measures with inde-pendent increments4. A extension to general processes on R+ will be given in Theorem 16.3.
4also said to be completely random 15. Poisson and Related Processes 333 Corollary 15.11 (independent-increment random measures, Kingman) A ran-dom measure ξ on S has independent increments iffa.s.
ξB = αB + ∞ 0 x η(B × dx), B ∈S, (2) for a fixed measure α ∈MS and an extended Poisson process η on S × (0, ∞) with ∞ 0 (x ∧1) E η(B × dx) < ∞, B ∈ˆ S.
(3) Proof (OK): On S × (0, ∞) we define a point process η = Σs δs, ξ{s}, where the required measurability follows by a simple approximation.
Since η has independent S-increments and η {s} × (0, ∞) = 1 ξ{s} > 0 ≤1, s ∈S, Theorem 15.10 shows that η is a Poisson process. Subtracting the atomic part from ξ, we obtain a diffuse random measure α satisfying (2), which has again independent increments and hence is a.s. non-random by Theorem 6.12. Next, Lemma 15.2 (i) yields for any B ∈S and r > 0 −log E exp −r ∞ 0 x η(B × dx) = ∞ 0 (1 −e−rx) Eη(B × dx).
As r →0, we see by dominated convergence that ∞ 0 x η(B × dx) < ∞a.s. iff (3) holds.
2 We summarize the various Poisson criteria, to highlight some remarkable equivalences. The last condition anticipates Corollary 15.17 below.
Corollary 15.12 (Poisson criteria) Let ξ be a simple point process on S with E ξ{s} = 0 for all s ∈S. Then these conditions are equivalent 5: (i) ξ is a Poisson process, (ii) ξ has independent increments, (iii) ξB is Poisson distributed for every B ∈ˆ S.
When ξ is stationary on S = R+, it is further equivalent that (iv) ξ is a renewal process with exponential holding times, (v) ξ has induced compensator cλ for a constant c ≥0.
Proof: Here (i) ⇔(ii) by Theorem 15.10 and (i) ⇔(iii) by Corollary 15.9.
For stationary ξ, the equivalence of (i)–(ii), (iv) was proved in Theorem 13.6. 2 In the multi-variate case, we have a further striking equivalence: 5The equivalence (ii) ⇔(iii) is remarkable, since both conditions are usually required to define a Poisson process.
334 Foundations of Modern Probability Corollary 15.13 (multi-variate Poisson criteria) Let ξ be a point process on S × T with a simple, locally finite S-projection ¯ ξ = ξ(· × T), such that E ¯ ξ{s} = 0 for all s ∈S. Then these conditions are equivalent: (i) ξ is a Poisson process on S × T, (ii) ξ has independent S-increments, (iii) ξ is a ν-randomization of a Poisson process ¯ ξ on S, for a probability kernel ν : S →T.
Proof: Since (iii) ⇒(ii) is obvious and (ii) ⇒(i) by Theorem 15.10, it re-mains to prove that (i) ⇒(iii). Then let ξ be Poisson with intensity λ = E ξ, and note that ¯ ξ is again Poisson with intensity ¯ λ = λ(· × T). By Theorem 3.4 there exists a probability kernel ν : S →T such that λ = ¯ λ ⊗ν. Letting η be a ν -randomization of ¯ ξ, we see from Theorem 15.3 (i) that η is again Poisson with intensity λ. Hence, (ξ, ¯ ξ) d = (η, ¯ ξ), and so L(ξ| ¯ ξ) = L(η| ¯ ξ), which shows that even ξ is a ν -randomization of ¯ ξ.
2 We may next characterize the mixed Poisson and binomial processes by a basic symmetry property. Related results for more general processes appear in Theorems 27.9 and 27.10. Given a random measure ξ and a diffuse measure λ on S, we say that ξ is λ -symmetric if ξ ◦f −1 d = ξ for every λ -preserving map f on S.
Theorem 15.14 (symmetry, Davidson, Matthes et al., OK) Let S be a Borel space with a diffuse measure λ ∈MS. Then (i) a simple point process ξ on S is λ-symmetric iffit is a mixed Poisson or binomial process based on λ, (ii) a diffuse random measure ξ on S is λ-symmetric iffξ = α λ a.s. for a random variable α ≥0.
In both cases, this holds6 iffL(ξB) depends only on λB.
Proof: (i) If S = [0, 1], then ∥ξ∥is invariant under λ -preserving transfor-mations, and so ξ remains conditionally λ -symmetric, given ∥ξ∥. We may then reduce by conditioning to the case of a constant ∥ξ∥= n, so that ξ = Σk≤n δσk for some random variables σ1 < · · · < σn in [0, 1]. Now consider any sub-intervals I1 < · · · < In of [0, 1], where I < J means that s < t for all s ∈I and t ∈J. By contractability, the probability P ∩k {σk ∈Ik} = P ∩k {ξIk > 0} is invariant under individual shifts of I1, . . . , In, subject to the mentioned order restriction. Thus, L(σ1, . . . , σn) is shift invariant on the set Δn = {s1 < · · · < 6In particular, the notions of exchangeability and contractability are equivalent for simple or diffuse random measures on a finite or infinite interval.
Note that the corresponding statements for finite sequences or processes on [0, 1] are false. See Chapters 27–28.
15. Poisson and Related Processes 335 sn} ⊂[0, 1]n, and hence equals n! λn on Δn, which means that ξ is a binomial process on [0, 1] based on λ and n. If instead S = R+, the restrictions 1[0,t]ξ are mixed binomial processes based on 1[0,t]λ, and so ξ is mixed Poisson on R+ by Theorem 15.4.
(ii) We may take S = [0, 1]. As in (i), we may further reduce by conditioning to the case where ∥ξ∥= 1 a.s. Then Eξ = λ by contractability, and for any intervals I1 < I2 in I, the moment Eξ2(I1 × I2) is invariant under individual shifts of I1 and I2, so that Eξ2 is proportional to λ2 on Δ2. Since ξ is diffuse, we have Eξ2 = 0 on the diagonal {s1 = s2} in [0, 1]2. Hence, by symmetry and normalization, Eξ2 = λ2 on [0, 1]2, and so Var(ξB) = E(ξB)2 −(EξB)2 = λ2B2 −(λB)2 = 0, B ∈B[0,1].
Then ξB = λB a.s. for all B, and so ξ = λ a.s. by a monotone-class argument.
Now suppose that L(ξB) depends only on λB, and define Ee−rξB = ϕr(λB), B ∈S, r ≥0.
For any λ -preserving map f on S, we get Ee−rξ◦f−1B = ϕr ◦λ(f −1B) = ϕr(λB) = Ee−rξB, and so ξ(f −1B) d = ξB by Theorem 6.3.
Then Theorem 15.8 (iii) yields ξ ◦f −1 d = ξ, which shows that ξ is λ -symmetric.
2 A Poisson process ξ on R+ is said to be stationary with rate c ≥0 if Eξ = c λ. Then Proposition 11.5 shows that Nt = ξ[0, t], t ≥0, is a space-and time-homogeneous Markov process.
We show how a point process on R+, subject to a suitable regularity condition, can be transformed to Poisson by a predictable map. This leads to various time-change formulas for point processes, similar to those for continuous local martingales in Chapter 19.
When ξ is an S-marked point process on (0, ∞), it is automatically locally integrable, which ensures the existence of the associated compensator ˆ ξ. Recall that ξ is said to be ql-continuous if ξ([τ] × S) = 0 a.s. for every predictable time τ, so that the process ˆ ξtB is a.s. continuous for every B ∈ˆ S.
Theorem 15.15 (predictable mapping to Poisson) For any Borel spaces S, T and measure μ ∈MT, let ξ be a ql-continuous, S-marked point process on (0, ∞) with compensator ˆ ξ, and consider a predictable map Y : Ω×R+×S →T.
Then ˆ ξ ◦Y −1 = μ a.s.
⇒ η = ξ ◦Y −1 is Poisson (μ).
336 Foundations of Modern Probability In particular, any simple, ql-continuous point process ξ on R+ is unit rate Poisson, on the time scale given by the compensator ˆ ξ. Note the analogy with the time-change reduction of continuous local martingales to Brownian motion in Theorem 19.4.
Proof: For disjoint sets B1, . . . , Bn ∈BT with finite μ-measure, we need to show that ηB1, . . . , ηBn are independent Poisson variables with means μB1, . . . , μBn. Then define for each k ≤n the processes Jk t = S t+ 0 1Bk(Ys,x) ξ(ds dx), ˆ Jk t = S t 0 1Bk(Ys,x) ˆ ξ(ds dx).
Here ˆ Jk ∞= μBk < ∞a.s. by hypothesis, and so the Jk are simple, integrable point processes on R+ with compensators ˆ Jk. For fixed u1, . . . , un ≥0, we put Xt = Σ k≤n ukJk t −(1 −e−uk) ˆ Jk t , t ≥0.
The process Mt = e−Xt has bounded variation and finitely many jumps, and so by an elementary change of variables, Mt −1 = Σ s≤t Δe−Xs − t 0 e−XsdXc s = Σ k≤n t+ 0 e−Xs−(1 −e−uk) d ˆ Jk s −Jk s .
Since the integrands on the right are bounded and predictable, M is a uniformly integrable martingale, and so EM∞= 1. Thus, E exp −Σ k uk ηBk = exp −Σ k (1 −e−uk) μBk , and the assertion follows by Theorem 6.3.
2 We also have a multi-variate time-change result, similar to Proposition 19.8 for continuous local martingales. Say that the simple point processes ξ1, . . . , ξn are orthogonal if Σ k ξk{s} ≤1 for all s.
Corollary 15.16 (time-change reduction to Poisson, Papangelou, Meyer) Let ξ1, . . . , ξn be orthogonal, ql-continuous point processes on (0, ∞) with a.s. un-bounded compensators ˆ ξ1, . . . , ˆ ξn, and define Yk(t) = inf s > 0; ˆ ξk[0, s] > t , t ≥0, k ≤n.
Then the processes ηk = ξk ◦Y −1 k on R+ are independent, unit-rate Poisson.
Proof: The processes ηk are Poisson by Theorem 15.15, since each Yk is predictable and ˆ ξk[0, Yk(t)] = ˆ ξk s ≥0; ˆ ξk[0, s] ≤t = t a.s., t ≥0, by the continuity of ˆ ξk. The multi-variate statement follows from the same theorem, if we apply the predictable map Y = (Y1, . . . , Yn) to the marked 15. Poisson and Related Processes 337 point process ξ = (ξ1, . . . , ξn) on (0, ∞) × {1, . . . , n} with compensator ˆ ξ = (ˆ ξ1, . . . , ˆ ξn).
2 From Theorem 15.15 we see that a ql-continuous, marked point process ξ is Poisson iffits compensator is a.s. non-random, similarly to the characterization of Brownian motion in Theorem 19.3. The following more general statement may be regarded as an extension of Theorem 15.10.
Theorem 15.17 (extended Poisson criterion, Watanabe, Jacod) Let ξ be an S-marked, F-adapted point process on (0, ∞) with compensator ˆ ξ, where S is Borel. Then these conditions are equivalent: (i) ˆ ξ = μ is a.s. non-random, (ii) ξ is extended Poisson with E ξ = μ.
Proof (OK): It is enough to prove that (i) ⇒(ii). Then write μ = ν + Σ k (δtk ⊗μk), ξ = η + Σ k (δtk ⊗ξk), where t1, t2, . . . > 0 are the times with μ({tk}×S) > 0. Here ν is the restriction of μ to the remaining set Dc × S with D = k{tk}, η is the corresponding restriction of ξ with E η = ν, and the ξk are point processes on S with ∥ξk∥≤1 and E ξk = μk. We need to show that η, ξ1, ξ2, . . . are independent, and that η is a Poisson process.
Then for any measurable A ⊂Dc × S with νA < ∞, we define Xt = t+ 0 1A(r, s) ξ(dr ds), Nt = e−ˆ Xt1{Xt = 0}.
As in Theorem 15.15, we may check that N is a uniformly integrable martingale with N0 = 1 and N∞= e νA1{ηA = 0}. For any B1, B2, . . . ∈S, we further introduce the bounded martingales M k t = (ξk −μk)Bk1{t ≥tk}, t ≥0, k ∈N.
Since N and M 1, M 2, . . . are mutually orthogonal, the process M n t = Nt Π k≤n M k t , t ≥0, is again a uniformly integrable martingale, and so EM 0 ∞= 1 and EM n 0 = 0 for all n > 0. Proceeding by induction, as in the proof of Theorem 10.27, we conclude that E Π k≤n ξkBk; ηA = 0 = e−νA Π k≤n μkBk, n ∈N.
The desired joint distribution now follows by Corollary 15.9.
2 We may use discounted compensators to extend the previous time-change results beyond the ql-continuous case. To avoid distracting technicalities, we consider only unbounded point processes.
338 Foundations of Modern Probability Theorem 15.18 (time-change reduction to Poisson) Let ξ = Σj δτj be a sim-ple, unbounded, F-adapted point process on (0, ∞) with compensator η, let ϑ1, ϑ2, . . . ⊥ ⊥F be i.i.d. U(0, 1), and define ρt = 1 −Σ j ϑj 1{t = τj}, Yt = ηc t −Σ s≤t log 1 −ρs Δηs , t ≥0.
Then ˜ ξ = ξ ◦Y −1 is a unit rate Poisson process on R+.
Proof: Write σ0 = 0, and let σj = Yτj for j > 0.
We claim that the differences σj −σj−1 are i.i.d. and exponentially distributed with mean 1. Since the τj are orthogonal, it is enough to consider σ1. Letting G be the right-continuous filtration induced by F and the pairs (σj, ϑj), we note that (τ1, ϑ1) has G -compensator ˜ η = η ⊗λ on [0, τ1] × [0, 1]. Write ζ for the associated discounted version with projection ¯ ζ onto R+, and put Zt = 1 −¯ ζ(0, t]. Define the G -predictable processes U and V = e−U on R+ × [0, 1] by Ut,r = ηc t −Σ s<t log(1 −Δηs) −log(1 −r Δηt), Vt,r = exp(−ηc t)Π s<t (1 −Δηs) (1 −r Δηt) = Zt−(1 −rΔηt), where the last equality holds by Theorem 10.24 (ii). For any random variable γ with distribution function F, we have Ft−= P{Fγ < Ft−} ≤P{Fγ ≤Ft} = Ft, t ∈R.
Thus, ζ ◦V −1 ≤λ a.s. on [0, 1], and so V (τ1, 1 −ϑ1) = e−σ1 is U(0, 1) by Theorem 10.27, which yields the required distribution for σ1.
2 The last result yields an asymptotic version of the Poisson characterization in Corollary 15.17.
Corollary 15.19 (Poisson convergence, Brown) Let ξ1, ξ2, . . . be simple point processes on (0, ∞) with compensators ˆ ξn, and let ξ be a unit rate Poisson process on R+. Then7 ˆ ξn[0, t] P →t, t > 0 ⇒ ξn vd − →ξ.
Proof (OK): For any sub-sequence N ′ ⊂N, we have a.s. ˆ ξn[0, t] →t for all t ≥0 along a further sub-sequence N ′′. In particular, supt≤c ˆ ξn{t} →0 a.s. for all c > 0. Writing ξn = Σj δτnj and putting σnj = Yn(τnj) with Yn as before, we get a.s. for every j ∈N | τnj −σnj| ≤ τnj −ηn[0, τnj] + ηn[0, τnj] −σnj →0.
7For the convergence vd − →, see Chapter 23.
15. Poisson and Related Processes 339 Letting n →∞along N ′′, we get ξn = Σ jδτnj vd − →Σ jδσnj d = ξ, which yields the required convergence.
2 Poisson integrals of various kind occur frequently in applications. Here we give criteria for the existence of the integrals ξf, (ξ −ξ′)f, and (ξ −μ)f, where ξ and ξ′ are independent Poisson processes with a common intensity μ. In each case, the integral may be defined as a limit in probability of elementary integrals ξfn, (ξ −ξ′)fn, or (ξ −μ)fn, respectively, where the fn are bounded with compact support and such that |fn| ≤|f| and fn →f. We say that the integral of f exists, if the appropriate limit exists and is independent of the choice of approximating functions fn.
Proposition 15.20 (Poisson integrals) Let ξ⊥ ⊥ξ′ be Poisson processes on S with a common diffuse intensity μ. Then for measurable functions f on S, we have (i) ξf exists ⇔μ(|f| ∧1) < ∞, (ii) (ξ −ξ′)f exists ⇔μ(f 2 ∧1) < ∞, (iii) (ξ −μ)f exists ⇔μ(f 2 ∧|f|) < ∞.
In each case, the existence is equivalent to tightness of the corresponding set of approximating elementary integrals.
Proof: (i) This is a special case of Lemma 15.6 (ii).
(ii) For counting measures ν = Σk δsk we form a symmetrization ˜ ν = Σk ϑkδsk, where ϑ1, ϑ2, . . . are i.i.d. random variables with P{ϑk = ±1} = 1 2.
By Theorem 5.17, the series ˜ νf converges a.s. iffνf 2 < ∞, and otherwise |˜ νfn| P →∞for any bounded approximations fn = 1Bnf with Bn ∈ˆ S. The result extends by conditioning to any point process ν with symmetric ran-domization ˜ ν. Now Proposition 15.3 exhibits ξ −ξ′ as such a randomization of the Poisson process ξ+ξ′, and (i) yields (ξ+ξ′)f 2 < ∞a.s. iffμ(f 2∧1) < ∞.
(iii) Write f = g+h, where g = f1{|f| ≤1} and h = f1{|f| > 1}. First let μg2 + μ|h| = μ(f 2 ∧|f|) < ∞. Since clearly E(ξf −μf)2 = μf 2, the integral (ξ −μ)g exists. Since also ξh exists by (i), even (ξ −μ)f = (ξ −μ)g + ξh −μh exists.
Conversely, if (ξ −μ)f exists, then so does (ξ −ξ′)f, and (ii) yields μg2 + μ{h ̸= 0} = μ(f 2 ∧1) < ∞.
The existence of (ξ −μ)g now follows by the direct assertion, and trivially even ξh exists.
The existence of μh = (ξ −μ)g + ξh −(ξ −μ)f then follows, and so μ|h| < ∞.
2 The multi-variate case is more difficult, and so we consider only positive double integrals ξ2f and ξηf, where ξ2 = ξ ⊗ξ and ξη = ξ ⊗η. Write f1 = μ2 ˆ f and f2 = μ1 ˆ f, where ˆ f = f ∧1 and μi denotes μ -integration in the i-th coordinate. Say that a function f on S2 is non-diagonal if f(x, x) ≡0.
340 Foundations of Modern Probability Theorem 15.21 (double Poisson integrals, OK & Szulga) Let ξ⊥ ⊥η be Pois-son processes on S with a common intensity μ, and consider a measurable, non-diagonal function f ≥0 on S2. Then ξηf < ∞a.s. iffξ2f < ∞a.s., which holds iff (i) f1 ∨f2 < ∞a.e. μ, (ii) μ{f1 ∨f2 > 1} < ∞, (iii) μ2 ˆ f; f1 ∨f2 ≤1 < ∞.
Note that this result covers even the case where E ξ ̸= E η. For the proof, we may clearly take μ to be diffuse, and whenever convenient we may even choose S = R+ and μ = λ. Define ψ(x) = 1 −e−x.
Lemma 15.22 (Poisson moments) Let ξ be Poisson with Eξ = μ. Then for any measurable set B ⊂S and function f ≥0 on S or S2, we have (i) E ψ(ξf) = ψ μ(ψ ◦f) , (ii) P{ξB > 0} = ψ(μB), (iii) E (ξf)2 = μf 2 + (μf)2, E ξηf = μ2f, (iv) E (ξηf)2 = μ2f 2 + μ(μ1f)2 + μ(μ2f)2 + (μ2f)2.
Proof: (i)−(ii): Use Lemma 15.2 (ii) with η = μ.
(iii) When μf < ∞, the first relation becomes Var(ξf) = μf 2, which holds by linearity, independence, and monotone convergence. For the second rela-tion, use Fubini’s theorem.
(iv) Use (iii) and Fubini’s theorem to get E(ξηf)2 = E{ξ(ηf)}2 = Eμ1(ηf)2 + E(μ1ηf)2 = μ1E(ηf)2 + E{η(μ1f)}2 = μ2f 2 + μ1(μ2f)2 + μ2(μ1f)2 + (μ2f)2.
2 Lemma 15.23 (tail estimate) For measurable f : S2 →[0, 1] with μ2f < ∞, we have P ξηf > 1 2 μ2f > ⌢ψ μ2f 1 + f ∗ 1 ∨f ∗ 2 .
Proof: We may clearly take μ2f > 0. By Lemma 15.22 (iii)−(iv), we have E ξηf = μ2f and E(ξηf)2 ≤(μ2f)2 + μ2f + f ∗ 1μf1 + f ∗ 2μf2 = (μ2f)2 + (1 + f ∗ 1 + f ∗ 2) μ2f, and so by Lemma 5.1, P ξηf > 1 2 μ2f ≥ (1 −1 2)2 (μ2f)2 (μ2f)2 + (1 + f ∗ 1 + f ∗ 2) μ2f > ⌢ 1 + 1 + f ∗ 1 ∨f ∗ 2 μ2f −1 > ⌢ψ μ2f 1 + f ∗ 1 ∨f ∗ 2 .
2 15. Poisson and Related Processes 341 Lemma 15.24 (decoupling) For non-diagonal, measurable functions f ≥0 on S2, we have E ψ(ξ2f) ≍E ψ(ξηf).
Proof: Here it is helpful to take μ = λ on R+. We may further assume that f is supported by the wedge {s < t} in R2 +. It is then equivalent to show that E (V · ξ)∞∧1 ≍E (V · η)∞∧1 , where Vt = ξf(·, t) ∧1. Since V is predictable with respect to the filtration induced by ξ, η, the random time τ = inf t ≥0; (V · η)t > 1 is optional, and so the process 1{τ > t} is again predictable. Noting that ξ and η are both compensated by λ, we get E (V · ξ)∞; τ = ∞ ≤E(V · ξ)τ = E(V · λ)τ = E(V · η)τ ≤E (V · η)∞∧1 + 2 P{τ < ∞}, and so E (V · ξ)∞∧1 ≤E (V · ξ)∞; τ = ∞ + P{τ < ∞} ≤E (V · η)∞∧1 + 3 P{τ < ∞} ≤4 E (V · η)∞∧1 .
The same argument applies with the roles of ξ and η interchanged.
2 Proof of Theorem 15.21: Since ξ ⊗η is a.s. simple, we have ξηf < ∞iff ξη ˆ f < ∞a.s., which allows us to take f ≤1. First assume (i)−(iii). Since by (i) E ξη f; f1 ∨f2 = ∞ ≤Σ i μ2(f; fi = ∞) = 0, we may assume that f1, f2 < ∞. Then Lemma 15.20 (i) gives ηf(s, ·) < ∞ and ξf(·, s) < ∞a.s. for all s ≥0. Furthermore, (ii) implies ξ{f1 > 1} < ∞ and η{f2 > 1} < ∞a.s. By Fubini’s theorem, we get a.s.
ξη f; f1 ∨f2 > 1 ≤ξ ηf; f1 > 1 + η ξf; f2 > 1 < ∞, which allows us to assume that even f1, f2 ≤1. Then (iii) yields E ξηf = μ2f < ∞, which implies ξηf < ∞a.s.
Conversely, suppose that ξηf < ∞a.s. for a function f : S2 →[0, 1]. By Lemma 15.22 (i) and Fubini’s theorem, E ψ μ ψ(t ηf) = E ψ(t ξηf) →0, t ↓0, 342 Foundations of Modern Probability which implies μ ψ(t ηf) →0 a.s. and hence ηf < ∞a.e. μ ⊗P. By Lemma 15.20 and Fubini’s theorem we get f1 = μ2f < ∞a.e. μ, and the symmetric argument yields f2 = μ1f < ∞a.e. This proves (i).
Next, Lemma 15.22 (i) yields on the set {f1 > 1} E ψ(ηf) = ψ μ2(ψ ◦f) ≥ψ (1 −e−1)f1 ≥ψ(1 −e−1) ≡c > 0.
Hence, for any measurable set B ⊂{f1 > 1}, E μ 1 −ψ(ηf); B ≤(1 −c) μB, and so by Chebyshev’s inequality, P μ ψ(ηf) < 1 2 c μB ≤P μ{1 −ψ(ηf); B} > (1 −1 2 c) μB ≤ E μ 1 −ψ(ηf); B (1 −1 2 c) μB ≤1 −c 1 −1 2 c .
Since B was arbitrary, we conclude that P μ ψ(ηf) ≥1 2 c μ{f1 > 1} ≥1 −1 −c 1 −1 2 c = c 2 −c > 0.
Noting that μ ψ(ηf) < ∞a.s. by Lemma 15.20 and Fubini’s theorem, we obtain μ{f1 > 1} < ∞. Combining with a similar result for f2, we obtain (ii).
Applying Lemma 15.23 to the function f 1{f1 ∨f2 ≤1} gives P ξηf > 1 2 μ2(f; f1 ∨f2 ≤1) > ⌢ψ 1 2 μ2(f; f1 ∨f2 ≤1) , which implies (iii), since the opposite statement would yield the contradiction P{ξηf = ∞} > 0.
To extend the result to the integral ξ2f, we see from Lemma 15.24 that E ψ(t ξ2f) ≍E ψ(t ξηf), t > 0.
Letting t →0 gives P{ξ2f = ∞} ≍P{ξηf = ∞}, which implies ξ2f < ∞a.s. iffξηf < ∞a.s.
2 Exercises 1. Let ξ be a point process on a Borel space S. Show that ξ = Σk δτk for some random elements τk in S ∪{Δ}, where Δ / ∈S is arbitrary. Extend the result to general random measures. (Hint: We may assume that S = R+.) 15. Poisson and Related Processes 343 2. Show that two random measures ξ, η are independent iffEe−ξf−ηg = Ee−ξf× Ee−ηg for all measurable f, g ≥0. When ξ, η are simple point processes, prove the equivalence of P{ξB + ηC = 0} = P{ξB = 0}P{ηC = 0} for any B, C ∈S. (Hint: Regard (ξ, η) as a random measure on 2 S.) 3. Let ξ1, ξ2, . . . be independent Poisson processes with intensity measures μ1, μ2, . . . , such that the measure μ = Σk μk is σ-finite. Show that ξ = Σk ξk is again Poisson with intensity measure μ.
4. Show that the classes of mixed Poisson and binomial processes are preserved by randomization.
5.
Let ξ be a Cox process on S directed by a random measure η, and let f be a measurable mapping into a space T, such that η ◦f−1 is a.s. σ-finite. Prove directly from definitions that ξ ◦f−1 is a Cox process on T directed by η ◦f−1.
Derive a corresponding result for p -thinnings.
Also show how the result follows from Proposition 15.3.
6. Use Proposition 15.3 to show that (iii) ⇒(ii) in Corollary 13.8, and use Theorem 15.10 to prove the converse.
7. Consider a p -thinning η of ξ and a q -thinning ζ of η with ζ⊥ ⊥η ξ. Show that ζ is a p q -thinning of ξ.
8. Let ξ be a Cox process directed by η or a p -thinning of η with p ∈(0, 1), and fix two disjoint sets B, C ∈S. Show that ξB⊥ ⊥ξC iffηB⊥ ⊥ηC. (Hint: Compute the Laplace transforms. The ‘if’ assertions can also be obtained from Proposition 8.12.) 9. Use Lemma 15.2 to derive expressions for P{ξB = 0} when ξ is a Cox process directed by η, a μ-randomization of η, or a p -thinning of η.
(Hint: Note that Ee−t ξB →P{ξB = 0} as t →∞.) 10. Let ξ be a p -thinning of η, where p ∈(0, 1). Show that ξ, η are simultaneously Cox. (Hint: Use Lemma 15.7.) 11. (Fichtner) For a fixed p ∈(0, 1), let η be a p -thinning of a point process ξ on S. Show that ξ is Poisson iffη ⊥ ⊥ξ −η. (Hint: Extend by iteration to arbitrary p. Then a uniform randomization of ξ on S × [0, 1] has independent increments in the second variable, and the result follows by Theorem 19.3.) 12. Use Theorem 15.8 to give a simplified proof of Theorem 15.4, in the case of simple ξ.
13. Derive Theorem 15.4 from Theorem 15.14. (Hint: Note that ξ is symmetric on S iffit is symmetric on Bn for every n.
If ξ is simple, the assertion follows immediately from Theorem 15.14. Otherwise, apply the same result to a uniform randomization on S × [0, 1].) 14. For ξ as in Theorem 15.14, show that P{ξB = 0} = ϕ(μB) for a completely monotone function ϕ. Conclude from the Hausdorff–Bernstein characterization and Theorem 15.8 that ξ is a mixed Poisson or binomial process based on μ.
15.
Show that the distribution of a simple point process ξ on R may not be determined by the distributions of ξI for all intervals I. (Hint: If ξ is restricted to {1, . . . , n}, then the distributions of all ξI give Σk≤n k(n −k + 1) ≤n3 linear relations between the 2 n −1 parameters.) 16. Show that the distribution of a point process may not be determined by the one-dimensional distributions. (Hint: If ξ is restricted to {0, 1} with ξ{0}∨ξ{1} ≤n, 344 Foundations of Modern Probability then the one-dimensional distributions give 4n linear relations between the n(n + 2) parameters.) 17. Show that Lemma 15.1 remains valid with B1, . . . , Bn restricted to an arbi-trary pre-separating class C, as defined in Chapter 23 or Appendix 6. Also show that Theorem 15.8 holds with B restricted to a separating class. (Hint: Extend to the case where C = {B ∈S; (ξ + η)∂B = 0 a.s.}. Then use monotone-class arguments for sets in S and MS.) 18. Show that Theorem 15.10 may fail without the condition Eξ({s} × K) ≡0.
19. Give an example of a non-Poisson point process ξ on S such that ξB is Poisson for every B ∈S. (Hint: Take S = {0, 1}.) 20. Describe the marked point processes ξ with independent increments, in the presence of possible fixed atom sites. (Hint: Note that there are at most countably many fixed discontinuities, and that the associated component of ξ is independent of the remaining part. Now use Proposition 5.14.) 21. Describe the random measures ξ with independent increments, in the presence of possible fixed atom sites. Write the general representation as in Corollary 15.11, for a suitably generalized Poisson process η.
22. Show by an example that Theorem 15.15 may fail without the assumption of ql-continuity.
23. Use Theorem 15.15 to show that any ql-continuous simple point process ξ on (0, ∞) with a.s. unbounded compensator ˆ ξ can be time-changed into a stationary Poisson process η w.r.t. the correspondingly time-changed filtration. Also extend the result to possibly bounded compensators, and describe the reverse map η →ξ.
24. Extend Corollary 15.16 to possibly bounded compensators. Show that the result fails in general, when the compensators are not continuous.
25. For ξ1, . . . , ξn as in Corollary 15.16 with compensators ˆ ξ1, . . . , ˆ ξn, let Y1, . . . , Yn be predictable maps into T such that the measures ˆ ξk ◦Y −1 k = μk are non-random in MT . Show that the processes ξk ◦Y −1 k are independent Poisson μ1, . . . , μn.
26. Prove Theorem 15.20 (i), (iii) by means of characteristic functions.
27. Let ξ⊥ ⊥η be Poisson processes on S with E ξ = E η = μ, and let f1, f2, . . . : S →R be measurable with ∞> μ(f2 n ∧1) →∞. Show that |(ξ −η)fn| P →∞. (Hint: Consider the symmetrization ˜ ν of a fixed measure ν ∈NS with νf2 n →∞, and argue along sub-sequences, as in the proof of Theorem 5.17.) 28. Give conditions for ξ2f < ∞a.s. when ξ is Poisson with E ξ = λ and f ≥0 on R2 +. (Hint: Combine Theorems 15.20 and 15.21.) Chapter 16 Independent-Increment and L´ evy Processes Infinitely divisible distributions, independent-increment processes, L´ evy processes and subordinators, orthogonality and independence, stable L´ evy processes, first-passage times, coupling of L´ evy processes, approximation of random walks, arcsine laws, time change of stable integrals, tangential existence and comparison Just as random walks are the most basic Markov processes in discrete time, so also the L´ evy processes constitute the most fundamental among continuous-time Markov processes, as they are precisely the processes of this kind that are both space- and time-homogeneous. They may also be characterized as processes with stationary, independent increments, which suggests that they be thought of simply as random walks in continuous time. Basic special cases are given by Brownian motion and homogeneous Poisson processes.
Dropping the stationarity assumption leads to the broader class of processes with independent increments, and since many of the basic properties are the same, it is natural to extend our study to this larger class. Under weak regular-ity conditions, their paths are right-continuous with left-hand limits (rcll), and the one-dimensional distributions agree with the infinitely divisible laws stud-ied in Chapter 7. As the latter are formed by a combination of Gaussian and compound Poisson distributions, the general independent-increment processes are composed of Gaussian processes and suitable Poisson integrals.
The classical limit theorems of Chapters 6–7 suggest similar approximations of random walks by L´ evy processes, which may serve as an introduction to the general weak convergence theory in Chapters 22–23. In this context, we will see how results for random walks or Brownian motion may be extended to a more general continuous-time context. In particular, two of the arcsine laws derived for Brownian motion in Chapter 14 remain valid for general symmetric L´ evy processes.
Independent-increment processes may also be regarded as the simplest spe-cial cases of semi-martingales, to be studied in full generality in Chapter 20.
We conclude the chapter with a discussion of tangential processes, where the idea is to approximate a general semi-martingale X by a process ˜ X with condi-tionally independent increments, such that X and ˜ X have similar asymptotic properties. This is useful, since the asymptotic behavior of the latter processes can often be determined by elementary methods.
345 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 346 Foundations of Modern Probability To resume our discussion of general processes with independent increments, we say that an Rd-valued process X is continuous in probability1 if Xs P →Xt whenever s →t. This clearly prevents the existence of fixed discontinuities.
To examine the structure of processes with independent increments, we may first choose regular versions of the paths. Recall that X is said to be rcll 2, if it has finite limits Xt± from the right and left at every point t ≥0, and the former equals Xt. Then the only possible discontinuities are jumps, and we say that X has a fixed jump at a time t > 0 if P{Xt ̸= Xt−} > 0.
Lemma 16.1 (infinitely divisible distributions, de Finetti) For a random vec-tor ξ in Rd, these conditions are equivalent: (i) ξ is infinitely divisible, (ii) ξ d = X1 for a process X in Rd with stationary, independent increments and X0 = 0.
We may then choose X to be continuous in probability, in which case (iii) L(X) is uniquely determined by L(ξ), (iv) ξ ∈Rd + a.s. ⇔Xt ∈Rd + a.s., t ≥0.
Proof: To prove (i) ⇔(ii), let ξ be infinitely divisible. Using the represen-tation in Theorem 7.5, we can construct the finite-dimensional distributions of an associated process X in Rd with stationary, independent increments, and then invoke Theorem 8.23 for the existence of an associated process. Assertions (iii)−(iv) also follow from the same theorem.
2 We first describe the processes with stationary, independent increments: Theorem 16.2 (L´ evy processes and subordinators, L´ evy, Itˆ o) Let the process X in Rd be continuous in probability with X0 = 0. Then X has stationary, independent increments iffit has an rcll version, given for t ≥0 by Xt = b t + σBt + t 0 |x|≤1 x (η −E η)(ds dx) + t 0 |x|>1 x η(ds dx), where b ∈Rd, σ is a d × d matrix, B is a Brownian motion in Rd, and η is an independent Poisson process on R+ × (Rd \ {0}) with E η = λ ⊗ν, where (|x|2 ∧1) ν(dx) < ∞. When X is Rd +-valued, the representation simplifies to Xt = a t + t 0 x η(ds dx) a.s., t ≥0, where a ∈Rd +, and η is a Poisson process on R+ ×(Rd + \{0}) with E η = λ⊗ν, where (|x| ∧1) ν(dx) < ∞. The triple (b, a, ν) with a = σ′σ or pair (a, ν) is then unique, and any choice with the stated properties may occur.
1also said to be stochastically continuous 2right-continuous with left-hand limits; the French acronym c adlag is also common, even in English texts 16. Independent-Increment and L´ evy Processes 347 A process X in Rd with stationary, independent increments is called a L´ evy processes, and when X is R+-valued it is also called a subordinator 3. Further-more, the measure ν is called the L´ evy measure of X. The result follows easily from the theory of infinitely divisible distributions: Proof: Given Theorem 7.5, it is enough to show that X has an rcll version.
This being obvious for the first two terms as well as for the final integral term, it remains to prove that the first double integral Mt has an rcll version. By Theorem 9.17 we have M ε t →Mt uniformly in L2 as ε →0, where M ε t is the sum of compensated jumps of size |x| ∈(ε, 1], hence a martingale. Proceeding along an a.s. convergent sub-sequence, we see that the rcll property extends to the limit.
2 The last result can also be proved by probabilistic methods. We will de-velop the probabilistic approach in a more general setting, where we drop the stationarity assumption and even allow some fixed discontinuities. Given an rcll process X in Rd, we define the associated jump point process η on (0, ∞) × (Rd \ {0} by η = Σ t>0 δ t,ΔXt = Σ t>0 1 (t, ΔXt) ∈· , (1) where the summations extend over all times t > 0 with ΔXt = Xt −Xt−̸= 0.
Theorem 16.3 (independent-increment processes, L´ evy, Itˆ o, Jacod, OK) Let the process X in Rd be rcll in probability with independent increments and X0 = 0. Then (i) X has an rcll version with jump point process η, given for t ≥0 by Xt = bt + Gt + t 0 |x|≤1 x (η −E η)(ds dx) + t 0 |x|>1 x η(ds dx), (2) where b is an rcll function in Rd with b0 = 0, G is a continuous, centered Gaussian process in Rd with independent increments and G0 = 0, and η is an independent, extended Poisson process on R′ + × (Rd \ {0}) with4 t 0 (|x|2 ∧1) E η′(ds dx) < ∞, t > 0, (3) (ii) for non-decreasing X, the representation simplifies to Xt = at + t 0 x η(ds dx) a.s., t ≥0, where a is a non-decreasing, rcll function in Rd + with a0 = 0, and η is an extended Poisson process on R′ + × (Rd + \ {0}) with t 0 (|x| ∧1) E η(ds dx) < ∞, t > 0.
3This curious term comes from the fact that the composition of a Markov process X with a subordinator T yields a new Markov process Y = X ◦T.
4Here we need to replace η by a slightly modified version η′, as explained below.
348 Foundations of Modern Probability Both representations are a.s. unique, and any processes η and G with the stated properties may occur.
Y r t = t 0 r<|x|≤1 x (η −E η)(ds dx) + t 0 |x|>1 x η(ds dx), t ≥0, with associated symmetrizations ˜ Y r t .
Proposition 16.4 (Poisson component) For any stochastically continuous Poisson process η on R+ × (Rd \ {0}), these conditions are equivalent: (i) the family {˜ Y r t ; r > 0} is tight for every t ≥0, (ii) t 0 (|x|2 ∧1) E η(ds dx) < ∞, t > 0, (iii) there exists an rcll process Y in Rd with (Y r −Y )∗ t →0 a.s., t ≥0.
Proof: Since clearly (iii) ⇒(i), it is enough to prove (i) ⇒(ii) ⇒(iii).
(i) ⇒(ii): Assume (i), and fix any t > 0. By Lemma 6.2 we can choose a > 0, such that Eeiu ˜ Y r t ≥1 2 for all r < 1 and u ≤a. Then by Lemma 15.2 and Fubini’s theorem, −1 2 a 0 log E eiu ˜ Y r t du = a 0 du |x|>r(1 −cos ux) E ηt(dx) = a |x|>r 1 −sin ax ax E ηt(dx) ≈ a |x|>r(|ax|2 ∧1) E ηt(dx).
Now take the lim sup as r →0.
(ii) ⇒(iii): By Lemma 15.22, we have Var|ηf| = E η|f|2 for suitably bounded functions f. Hence as r, r′ →0, E |Y r t −Y r′ t |2 = Var t 0 r<|x|<r′ x η(ds dx) = t 0 r<|x|<r′ |x|2Eη(ds dx) →0.
Now the differences Y r −Y r′ are martingales, and so by Doob’s inequality (Y r −Y r′)∗ t P →0, t > 0, r, r′ →0.
Since the r-increments of Y r are further independent, Lemma 11.13 yields an rcll process Y with (Y r −Y )∗ t →0 a.s., t > 0, r →0 along Q+.
First we need to clarify the meaning of the jump component in (i) and the associated integrability condition. Here we consider separately the continuous Poisson component and the process of fixed discontinuities. For the former, let η be a stochastically continuous Poisson process on R+ × (Rd \ {0}, and introduce the elementary processes 16. Independent-Increment and L´ evy Processes 349 We may finally extend the convergence to R+, since Y r is a.s. right continuous in r > 0, uniformly on [0, t].
2 As for the fixed discontinuities, let D ⊂(0, ∞) be the countable supporting set and write η = Σ t∈D δ t,ξt, where the ξt are independent random vectors in Rd, and we interpret δ t,0 as 0. Fixing any finite sets Dn ↑D and putting Dn t = Dn ∩[0, t], we introduce the elementary pure jump-type processes Y n t = Σ s ∈Dn t ξs −E {ξs; |ξs| ≤1} , t ≥0, n ∈N, with associated symmetrizations ˜ Y n t . Put Dt = [0, t] ∩D.
Proposition 16.5 (fixed discontinuities) Let the ξt be independent random vectors in Rd indexed by a countable set D ⊂(0, ∞), such that Σs≤t1{|ξs| > 1} < ∞a.s. for all t > 0. Then these conditions are equivalent: (i) the family {˜ Y n t ; n > 0} is tight for every t ≥0, (ii) Σ s ∈Dt Var{ξs; |ξs| ≤1} + P{|ξs| > 1} < ∞, t > 0, (iii) there exists an rcll process Y in Rd with Y n t →Yt a.s., t ≥0.
Proof: Here again (iii) ⇒(i), and so it remains to prove (i) ⇒(ii) ⇒(iii).
(i) ⇒(ii): Fix any t > 0. By Theorem 4.18 we may assume that |ξt| ≤1 for all t. Then Theorem 5.17 yields Σ s ∈Dt Var(ξs) = 1 2 Σ s ∈Dt Var(˜ ξs) < ∞.
(ii) ⇒(iii): Under (i), we have convergence Y n t →Yt a.s. for every t ≥0 by Theorem 5.18, and we also note that the limiting process Y is rcll in prob-ability. The existence of a version with rcll paths then follows by Proposition 16.6 below.
2 We proceed to establish the pathwise regularity.
Proposition 16.6 (regularization) For a process X in Rd with independent increments, these conditions are equivalent: (i) X is rcll in probability, (ii) X has a version with rcll paths.
This requires a simple analytic result.
Lemma 16.7 (complex exponentials) For any constants x1, x2, . . . ∈Rd, these conditions are equivalent: (i) xn converges in Rd, (ii) eiuxn converges in C for almost every u ∈Rd.
350 Foundations of Modern Probability Proof: Assume (ii). Fix a standard Gaussian random vector ζ in Rd, and note that exp{it ζ(am −an)} →1 a.s. as m, n →∞for fixed t ∈R. Then by dominated convergence E exp{it ζ(am −an)} →1, and so ζ(am −an) P →0 by Theorem 6.3, which implies am −an →0. Thus, the sequence (an) is Cauchy, and (i) follows.
2 Proof of Proposition 16.6: Assume (i). In particular, Xt P →0 as t →0, and so ϕt(u) →1 for every u ∈Rd, where ϕt(u) = EeiuXt, t ≥0, u ∈Rd.
For fixed u ∈Rd we may then choose a tu > 0, such that ϕt(u) ̸= 0 for all t ∈[0, tu). Then the process M u t = eiuXt ϕt(u), t < tu, u ∈Rd, exists and is a martingale in t, and so by Theorem 9.28 it has a version with rcll paths. Since ϕt(u) has the same property, we conclude that even eiuXt has an rcll version. By a similar argument, every s > 0 has a neighborhood Is, such that eiuXt has an rcll version on Is, except for a possible jump at s. The rcll property then holds on all of Is, and by compactness we conclude that eiuXt has an rcll version on the entire domain R+.
Now let A be the set of pairs (ω, u) ∈Ω × Rd, such that eiuXt has right and left limits along Q at every point. Expressing this in terms of upcrossings, we see that A is product measurable, and the previous argument shows that all u -sections Au have probability 1. Hence, by Fubini’s theorem, P-almost every ω -section Aω has full λd-measure. This means that, on a set Ω′ of probabil-ity 1, eiuXt has the stated continuity properties for almost every u. Then by Lemma 16.7, Xt itself has the same continuity properties on Ω′. Now define ˜ Xt ≡Xt+ on Ω′, and put ˜ Xt ≡0 otherwise. Then ˜ X is clearly rcll, and by (i) we have ˜ Xt = Xt a.s. for all t.
2 To separate the jump component from the remainder of X, we need the following independence criterion: Lemma 16.8 (orthogonality and independence) Let (X, Y ) be an rcll pro-cesses in (Rd)2 with independent increments and X0 = Y0 = 0, where Y is an elementary step process. Then (ΔX)(ΔY ) = 0 a.s.
⇒ X⊥ ⊥Y.
Proof: By a transformation of the jump sizes, we may assume that Y has locally integrable variation. Now introduce as before the characteristic functions ϕt(u) = EeiuXt, ψt(v) = EeivYt, t ≥0, u, v ∈Rd, and note that both are ̸= 0 on some interval [0, tu,v), where tu,v > 0 for all u, v ∈Rd. On the same interval, we can then introduce the martingales 16. Independent-Increment and L´ evy Processes 351 M u t = eiuXt ϕt(u), N v t = eivYt ψt(v), t < tu,v, u, v ∈Rd, where N v has again integrable variation on every compact subinterval. Fixing any t ∈(0, tu,v) and writing h = t/n with n ∈N, we get by the martingale property and dominated convergence E M u t N v t −1 = E Σ k≤n M u kh −M u (k−1)h N v kh −N v (k−1)h = E t 0 M u [sn+1−]h −M u [sn−]h dN v s →E t 0 (ΔM u s ) dN v s = E Σ s≤t (ΔM u s )(ΔN v s ) = 0.
This shows that E M u t N v t = 1, and so E eiuXt+ivYt = E eiuXtE eivYt, t < tu,v, u, v ∈Rd.
By a similar argument, along with the independence (ΔXt, ΔYt)⊥ ⊥(X −ΔXt, Y −ΔYt), t > 0, every point r > 0 has a neighborhood Iu,v, such that E eiuXt s+ivY t s = E eiuXt s E eivY t s , s, t ∈Iu,v, u, v ∈Rd, where Xt s = Xt −Xs and Y t s = Yt −Ys. By a compactness argument, the same relation holds for all s, t ≥0, and since u, v were arbitrary, Theorem 6.3 yields Xt s ⊥ ⊥Y t s for all s, t ≥0. Since (X, Y ) has independent increments, the inde-pendence extends to any finite collection of increments, and we get X⊥ ⊥Y . 2 The last lemma allows us to eliminate the compensated random jumps, which leaves us with a process with only non-random jumps. It remains to show that the latter process is Gaussian.
Proposition 16.9 (Gaussian processes with independent increments) Let X be an rcll process in Rd with independent increments and X0 = 0. Then these conditions are equivalent: (i) X has only non-random jumps, (ii) X is Gaussian of the form Xt = bt + Gt a.s., t ≥0, where G is a continuous, centered, Gaussian process with independent increments and G0 = 0, and b is an rcll function in Rd with b0 = 0.
352 Foundations of Modern Probability Proof: We need to prove only (i) ⇒(ii), so assume (i). Let ˜ X be a sym-metrization of X, and note that ˜ X has again independent increments. It is also is a.s. continuous, since the only discontinuities of X are constant jumps, which are canceled by the symmetrization. Hence, ˜ X is symmetric Gaussian by Theorem 14.4.
Now fix any t > 0, and write ξnj and ˜ ξnj for the increments of X, ˜ X over the n sub-intervals of length t/n. Since maxj |˜ ξnj| →0 a.s. by compactness, the ˜ ξnj form a null array in Rd, and so by Theorem 6.12, ΣjP{|˜ ξnj| > ε} →0, ε > 0.
Letting mnj be component-wise medians of ξnj, we see from Lemma 5.19 that the differences ξ′ nj = ξnj −mnj again form a null array satisfying ΣjP{|ξ′ nj| > ε} →0, ε > 0.
(4) Putting mn = Σjmnj, we may write Xt = Σj ξ′ nj + mn = Σj ξ′ nj + (n|mn|) mn n|mn|, n ∈N, where the n(1 + |mn|) terms on the right again form a null array in Rd, satis-fying a condition like (4). Then Xt is Gaussian by Theorem 6.16. A similar argument shows that all increments Xt −Xs are Gaussian, and so the entire process X has the same property, and the difference G = X −EX is centered Gaussian. It is further continuous, since X has only constant jumps, which all go into the mean bt = EXt = Xt −Gt. The latter is clearly non-random and rcll, and the independence of the increments carries over to G.
2 Proof of Theorem 16.3 (OK): By Proposition 16.6 we may assume that X has rcll paths.
Then introduce the associated jump point process η on R+ ×(Rd \{0}), which clearly inherits the property of independent increments.
By Theorem 15.10 the process η is then extended Poisson, and hence can be decomposed into a stochastically continuous Poisson process η1 and a process η2⊥ ⊥η1 with fixed discontinuities, supported by a countable set D ⊂R+.
Let Y r 1 , Y n 2 be the associated approximating processes in Propositions 16.4 and 16.5, and note that by Lemma 16.8 the pairs (Y r 1 , Y n 2 ) are independent from the remaining parts of X. Factoring the characteristic functions, we see from Lemma 6.2 that the corresponding families of symmetrized processes ˜ Y r 1 , ˜ Y n 2 are tight at fixed times t > 0. Hence, Propositions 16.4 and 16.5 yield the stated integrability conditions for E η adding up to (3), and there exist some limiting processes Y1, Y2, adding up to the integral terms in (2). The previous independence property clearly extends to Y ⊥ ⊥(X −Y ), where Y = Y1 + Y2.
Since the jumps of X are already contained in Y and the continuous cen-tering in Y1 gives no new jumps, the only remaining jumps of X −Y are the fixed jumps arising from the centering in Y2. Thus, Proposition 19.2 yields X −Y = G + b, where G is continuous Gaussian with independent increments and b is non-random, rcll.
2 16. Independent-Increment and L´ evy Processes 353 Of special interest is the case where X is a semi-martingale: Corollary 16.10 (semi-martingales with independent increments, Jacod, OK) Let X be an rcll process with independent increments. Then (i) X = Y + b, where Y is a semi-martingale and b is non-random rcll, (ii) X is a semi-martingale iffb has locally finite variation, (iii) when X is a semi-martingale, (3) holds with E η′ replaced by E η.
Proof: (i) We need to show that all terms making up X −b are semi-mart-ingales. Then note that G and the first integral term are martingales, whereas the second integral term represents a process of isolated jumps.
(ii) Part (i) shows that X is a semi-martingale iffb has the same property, which holds by Corollary 20.21 below iffb has locally finite variation.
(iii) By the definition of η, the only discontinuities of b are the fixed jumps E(ξt; |ξt| ≤1). Since b is a semi-martingale, its quadratic variation is locally finite, and we get Dt |x|≤1 |x|2η(ds dx) = Σ s ∈Dt E |ξs|2; |ξs| ≤1 = Σ s ∈Dt Var{ξt; |ξt| ≤1} + |E{ξt; |ξt| ≤1}|2 < ∞.
2 By Proposition 11.5, a L´ evy process X is Markov for the induced filtration G = (Gt), with the translation-invariant transition kernels μt(x, B) = μt(0, B −x) = P{Xt ∈B −x}, t ≥0.
More generally, given any filtration F, we say that a process X is L´ evy with respect to F or simply F-L´ evy, if it is adapted to F and such that (Xt −Xs) ⊥ ⊥Fs for all s < t. In particular, we may choose the filtration Ft = Gt ∨N, t ≥0, N = σ{N ⊂A; A ∈A, PA = 0}, which is right-continuous by Corollary 9.26. Just as for Brownian motion in Theorem 14.11, we see that when X is F-L´ evy for a right-continuous, complete filtration F, it is a strong Markov process, in the sense that the process X′ = θτX −Xτ satisfies X d = X′⊥ ⊥Fτ for every optional time τ < ∞.
Turning to some symmetry conditions, we say that a process X on R+ is self-similar, if for every r > 0 the scaled process Xrt, t ≥0, has the same distribution as sX for a constant s = h(r) > 0. Excluding the trivial case of E|Xt| ≡0, we see that h satisfies the Cauchy equation h(rs) = h(r) h(s). If 354 Foundations of Modern Probability X is right-continuous, then h is continuous, and all solutions are of the form h(r) = rα for some α ∈R.
A L´ evy process X in R is said to be strictly stable if it is self-similar, and weakly stable if it is self-similar apart from a centering, so that for every r > 0 the process Xrt has the same distribution as sXt + b t for some s and b. In either case, the symmetrized version X −X′ is strictly stable with the same parameter α, and so s is again of the form rα. Since clearly α > 0, we may introduce the stability index p = α−1, and say that X is strictly or weakly p -stable. The terminology carries over to random variables or vectors with distribution L(X1).
Corollary 16.11 (stable L´ evy processes) Let X be a non-degenerate L´ evy process in R with characteristics (a, b, ν).
Then X is weakly p-stable for a p > 0 iffone of these conditions holds: (i) p = 2 and ν = 0, (ii) p ∈(0, 2), a = 0, and ν(dx) = c±|x|−p−1dx on R± for some c± ≥0.
For subordinators, it is further equivalent that (iii) p ∈(0, 1) and ν(dx) = c x−p−1dx on (0, ∞) for a c > 0.
In (i) we recognize a scaled Brownian motion with possible drift. An impor-tant case of (ii) is the symmetric, 1-stable L´ evy process, known under suitable normalization as a Cauchy process.
Proof: Writing Sr : x →rx for r > 0, we note that the processes X(rpt) and rX have characteristics rp(a, b, ν) and (r2a, rb, ν ◦S−1 r ), respectively. Since the latter are determined by the distributions, it follows that X is weakly p -stable iffrpa = r2a and rpν = ν ◦S−1 r for all r > 0. In particular, a = 0 when p ̸= 2.
Writing F(x) = ν[x, ∞) or ν(−∞, −x], we also note that rpF(rx) = F(x) for all r, x > 0, so that F(x) = x−pF(1), which yields a density of the stated form.
The condition (x2∧1) ν(dx) < ∞implies p ∈(0, 2) when ν ̸= 0. When X ≥0 we have the stronger requirement (x ∧1) ν(dx) < ∞, so in this case p < 1. 2 If X is weakly p -stable with p ̸= 1, it becomes strictly p -stable through a suitable centering. In particular, a weakly p -stable subordinator is strictly stable iffits drift component vanishes, in which case X is simply said to be stable.
We show how stable subordinators may arise naturally even in the context of continuous processes. Given a Brownian motion B in R, we introduce the maximum process Mt = sups≤t Bs with right-continuous inverse Tr = inf t ≥0; Mt > r = inf t ≥0; Bt > r , r ≥0.
Theorem 16.12 (first-passage times, L´ evy) For a Brownian motion B, the process T is a 1 2-stable subordinator with L´ evy measure ν(dx) = (2π)−1/2x−3/2dx, x > 0.
16. Independent-Increment and L´ evy Processes 355 Proof: By Lemma 9.6, the random times Tr are optional with respect to the right-continuous filtration F induced by B. By the strong Markov property of B, the process θrT −Tr is then independent of FTr and distributed as T. Since T is further adapted to the filtration (FTr), it follows that T has stationary, independent increments and hence is a subordinator.
To see that T is 1 2 -stable, fix any c > 0, put ˜ Bt = c−1B(c2 t), and define ˜ Tr = inf t ≥0; ˜ Bt > r . Then Tc r = inf t ≥0; Bt > c r = c2 inf t ≥0; ˜ Bt > r = c2 ˜ Tr.
By Proposition 16.11, the L´ evy measure of T has a density of the form a x−3/2, x > 0, and it remains to identify a. Then note that the process Xt = exp uBt −1 2 u2t , t ≥0, is a martingale for any u ∈R. In particular, EXτr∧t = 1 for any r, t ≥0, and since clearly Bτr = r, we get by dominated convergence E exp −1 2 u2Tr = e−ur, u, r ≥0.
Taking u = √ 2 and comparing with Corollary 7.6, we obtain √ 2 a = ∞ 0 (1 −e−x) x−3/2dx = 2 ∞ 0 e−xx−1/2dx = 2 √π, which shows that a = (2π)−1/2.
2 If we add a negative drift to a Brownian motion, the associated maximum process M becomes bounded, so that T = M −1 terminates by a jump to infinity. Here it becomes useful to allow subordinators with infinite jumps.
By an extended subordinator we mean a process of the form Xt ≡Yt + ∞· 1{t ≥ζ} a.s., where Y is an ordinary subordinator and ζ is an independent, exponentially distributed random variable, so that X is obtained from Y by exponential killing. The representation in Theorem 16.3 remains valid in the extended case, except for a positive mass of ν at ∞.
The following characterization is needed in Chapter 29.
Lemma 16.13 (extended subordinators) Let X be a non-decreasing, right-continuous process in [0, ∞] with X0 = 0 and induced filtration F. Then X is an extended subordinator iff L Xs+t −Xs Fs = L(Xt) a.s. on {Xs < ∞}, s, t > 0.
(5) Proof: Writing ζ = inf{t; Xt = ∞}, we get from (5) the Cauchy equation P{ζ > s + t} = P{ζ > s}P{ζ > t}, s, t ≥0, (6) 356 Foundations of Modern Probability which implies that ζ is exponentially distributed with mean m ∈(0, ∞]. Next define μt = L(Xt |Xt < ∞), t ≥0, and conclude from (5) and (6) that the μt form a semigroup under convolution. By Theorem 11.4 there exists a corre-sponding process Y with stationary, independent increments. From the right-continuity of X, it follows that Y is continuous in probability. Hence, Y has a subordinator version. Now choose ˜ ζ d = ζ with ˜ ζ⊥ ⊥Y , and let ˜ X denote the process Y killed at ˜ ζ. Comparing with (5), we note that ˜ X d = X. By Theorem 8.17 we may assume that even X = ˜ X a.s., which means that X is an extended subordinator. The converse assertion is obvious.
2 The weak convergence of infinitely divisible laws extends to a pathwise approximation of the corresponding L´ evy processes.
Theorem 16.14 (coupling of L´ evy processes, Skorohod) For any L´ evy pro-cesses X, X1, X2, . . . in Rd with Xn 1 d →X1, there exist some processes ˜ Xn d = Xn with sup s≤t ˜ Xn s −Xs P →0, t ≥0.
For the proof, we first consider two special cases.
Lemma 16.15 (compound Poisson case) Theorem 16.14 holds for compound Poisson processes X, X1, X2, . . . with characteristic measures ν, ν1, ν2, . . . sat-isfying νn w →ν.
Proof: Adding positive masses at the origin, we may assume that ν and the νn have the same total mass, which may then be reduced to 1 through a suitable scaling. If ξ1, ξ2, . . . and ξn 1 , ξn 2 , . . . are associated i.i.d. sequences, we get (ξn 1 , ξn 2 , . . .) d →(ξ1, ξ2, . . .) by Theorem 5.30, and by Theorem 5.31 we may strengthen this to a.s. convergence. Letting N be an independent, unit-rate Poisson process and defining Xt = Σj≤Nt ξj and Xn t = Σj≤Nt ξn j , we obtain (Xn −X)∗ t →0 a.s. for all t ≥0.
2 Lemma 16.16 (case of small jumps) Theorem 16.14 holds for L´ evy processes Xn satisfying EXn ≡0, 1 ≥(ΔXn)∗ 1 P →0.
Proof: Since (ΔXn)∗ 1 P →0, we may choose some constants hn →0 with mn = h−1 n ∈N, such that w(Xn, 1, hn) P →0.
By the stationarity of the increments, we get w(Xn, t, hn) P →0 for all t ≥0. Further note that X is centered Gaussian by Theorem 7.7. As in Theorem 22.20 below, we may then form some processes Y n d = (Xn [mnt]hn), such that (Y n −X)∗ t P →0 for all t ≥0.
By Corollary 8.18, we may further choose some processes ˜ Xn d = Xn with Y n ≡˜ Xn [mnt]hn a.s. Letting n →∞for fixed t ≥0, we obtain 16. Independent-Increment and L´ evy Processes 357 E ( ˜ Xn −X)∗ t ∧1 ≤E (Y n −X)∗ t ∧1 + E w(Xn, t, hn) ∧1 →0.
2 Proof of Theorem 16.14: The asserted convergence is clearly equivalent to ρ( ˜ Xn, X) →0, where ρ denotes the metric ρ(X, Y ) = ∞ 0 e−tE (X −Y )∗ t ∧1 dt.
For any h > 0, we may write X = Lh + M h + Jh, Xn = Ln,h + M n,h + Jn,h, with Lh t ≡bh t and Ln,h t ≡bh n t, where M h and M n,h are martingales containing the Gaussian components and all centered jumps of size ≤h, whereas the pro-cesses Jh and Jn,h contain all the remaining jumps. Write G for the Gaussian component of X, and note that ρ(M h, G) →0 as h →0 by Proposition 9.17.
For any h > 0 with ν{|x| = h} = 0, Theorem 7.7 yields bh n →bh and νh n w →νh, where νh and νh n denote the restrictions of ν and νn, respectively, to the set {|x| > h}. The same theorem gives ah n →a as n →∞and then h →0, so under those conditions M n,h 1 d →G1.
Now fix any ε > 0. By Lemma 16.16, we may choose some constants h, r > 0 and processes ˜ M n,h d = M n,h, such that ρ(M h, G) ≤ε and ρ( ˜ M n,h, G) ≤ε for all n > r. Under the additional assumption ν{|x| = h} = 0, Lemma 16.15 yields an r′ ≥r and some processes ˜ Jn,h d = Jn,h independent of ˜ M n,h, such that ρ( ˜ Jh, ˜ Jn,h) ≤ε for all n > r′. We may finally choose r′′ ≥r′ so large that ρ(Lh, Ln,h) ≤ε for all n > r′′. Then the processes ˜ Xn ≡Ln,h + ˜ M n,h + ˜ Jn,h d = Xn, n ∈N, satisfy ρ(X, ˜ Xn) ≤4 ε for all n > r′′.
2 Combining Theorem 16.14 with Corollary 7.9, we get a similar approxi-mation of random walks, extending the result for Gaussian limits in Theorem 22.20. A slightly weaker result is obtained by different methods in Theorem 23.14.
Corollary 16.17 (L´ evy approximation of random walks) Consider a L´ evy process X and some random walks S1, S2, . . . in Rd, such that Sn kn d →X1 for some constants kn →∞, and let N be an independent, unit-rate Poisson process. Then there exist some processes Xn with Xn d = (Sn ◦Nknt), sup s≤t |Xn s −Xs| P →0, t ≥0.
Using the last result, we may extend the first two arcsine laws of Theorem 14.16 to symmetric L´ evy processes, where symmetry means −X d = X.
358 Foundations of Modern Probability Theorem 16.18 (arcsine laws) Let X be a symmetric L´ evy process in R with X1 ̸= 0 a.s. Then these random variables are arcsine distributed: τ1 = λ t ≤1; Xt > 0 , τ2 = inf t ≥0; Xt ∨Xt−= sup s≤1 Xs .
(7) Proof: Introduce the random walk Sn k = Xk/n, let N be an independent, unit-rate Poisson process, and define Xn t = Sn ◦Nnt. Then Corollary 16.17 yields some processes ˜ Xn d = Xn with ( ˜ Xn −X)∗ 1 P →0. Define τ n 1 , τ n 2 as in (7) in terms of Xn, and conclude from Lemmas 22.12 and 7.16 that τ n i d →τi for i = 1, 2.
Now define σn 1 = N −1 n Σ k≤Nn 1{Sn k > 0}, σn 2 = N −1 n min k; Sn k = max j≤Nn Sn j .
Since t−1Nt →1 a.s. by the law of large numbers, we have supt≤1 |n−1Nnt −t| →0 a.s., and so σn 2 −τ n 2 →0 a.s. Applying the same law to the sequence of holding times in N, we further note that σn 1 −τ n 1 P →0. Hence, σn i d →τi for i = 1, 2. Now σn 1 d = σn 2 by Corollary 27.8, and Theorem 22.11 yields σn 2 d →sin2α where α is U(0, 2π). Hence, τ1 d = τ2 d = sin2α.
2 In Theorem 15.15, we saw how any ql-continuous, simple point process on R+ can be time-changed into a Poisson process. Similarly, we will see in The-orem 19.4 below how a continuous local martingale can be time-changed into a Brownian motion. Here we consider a third case of time-change reduction, involving integrals of p -stable L´ evy processes. Since the general theory relies on some advanced stochastic calculus from Chapter 20, we consider only the elementary case of p < 1.
Proposition 16.19 (time change of stable integrals, Rosi´ nski & Woyczy´ nski, OK) Let X be a strictly p-stable L´ evy process with p ∈(0, 1), and let the process V ≥0 be predictable and such that A = V p · λ is a.s. finite but unbounded.
Then (V · X) ◦τ d = X, τs = inf t; At > s , s ≥0.
Proof: Define a point process ξ on R+ × (R \ {0}) by ξB = Σs1B(s, ΔXs), and recall from Corollary 16.2 and Proposition 16.11 that ξ is Poisson with intensity λ ⊗ν, for some measures ν(dx) = c±|x|−p−1dx on R±. In particular, ξ has compensator ˆ ξ = λ ⊗ν. Now introduce on R+ × R the predictable map Ts,x = (As, xVs). Since A is continuous, we have {As ≤t} = {s ≤τt} and Aτt = t. Hence, Fubini’s theorem yields for any t, u > 0 (λ ⊗ν) ◦T −1 [0, t] × (u, ∞) = (λ ⊗ν) (s, x); As ≤t, x Vs > u 16. Independent-Increment and L´ evy Processes 359 = τt 0 ν x; x Vs > u ds = ν(u, ∞) τt 0 V p s ds = t ν(u, ∞), and similarly for the sets [0, t] × (−∞, −u). Thus, ˆ ξ ◦T −1 = ˆ ξ = λ ⊗ν a.s., and so Theorem 15.15 gives ξ ◦T −1 d = ξ. Finally, note that (V · X)τt = τt+ 0 x Vs ξ(ds dx) = ∞ 0 x Vs1{As ≤t} ξ(ds dx) = t+ 0 y (ξ ◦T −1)(dr dy), where the process on the right has the same distribution as X.
2 Anticipating our more detailed treatment in Chapter 20, we define a semi-martingale in Rd as an adapted process X with rcll paths, admitting a centering into a local martingale. Thus, subtracting all suitably centered jumps yields a continuous semi-martingale, in the sense of Chapter 18. Here the centering of small jumps amounts to a subtraction of the compensator ˆ η rather than the intensity Eη of the jump point process η, to ensure that the resulting process will be a local martingale. Note that such a centering depends on the choice of underlying filtration F.
The set of local characteristics of X is given by the triple (ˆ η, [M], A), where M is a continuous local martingale and A is a continuous, predictable process of locally finite variation, such that X is the sum of the centered jumps and the continuous semi-martingale M + A. We say that X is locally symmetric if ˆ η is symmetric and A = 0. Two semi-martingales X, Y are said to be F-tangential 5 if they have the same local characteristics with respect to F. For filtrations F, G on a common probability space, we say that G is a standard extension of F if Ft ⊂Gt ⊥ ⊥ Ft F, t ≥0.
These are the minimal conditions ensuring that G will preserve all conditioning and adaptedness properties of F.
Theorem 16.20 (tangential existence) Let X be an F-semi-martingale with local characteristics Y . Then there exists an ˜ F-semi-martingale ˜ X⊥ ⊥Y F with Y -conditionally independent increments, for a standard extension ˜ F ⊃F, such that X and ˜ X are ˜ F-tangential.
We will prove this only in the ql-continuous case where Y is continuous, in which case we may rely on some elementary constructions: 5This should not to be confused with the more elementary notion of tangent processes.
For two processes to be tangential, they must be semi-martingales with respect to the same filtration, which typically requires us to extend of the original filtration without affecting the local characteristics, a highly non-trivial step.
360 Foundations of Modern Probability Lemma 16.21 (tangential constructions) Let M be a continuous local F-martingale, and let ξ be an S-marked, F-adapted point process with continuous F-compensator η. Form a Cox process ˜ ξ⊥ ⊥η F directed by η and a Brownian motion B⊥ ⊥(˜ ξ, F), put ˜ M = B ◦[M], and let ˜ F be the filtration generated by (F, ˜ ξ, ˜ M). Then6 (i) ˜ F is a standard extension of F, (ii) ˜ M is a continuous local ˜ F-martingale with [ ˜ M] = [M] a.s., (iii) ξ, ˜ ξ have the same ˜ F-compensator η, (iv) η is both a (ξ, η)-compensator of ξ and a (˜ ξ, η)-compensator of ˜ ξ.
Proof: (i) Since ˜ ξ⊥ ⊥η F and B⊥ ⊥˜ ξ,η F, Theorem 8.12 yields (˜ ξ, B)⊥ ⊥η F, and so (˜ ξt, ˜ M t) ⊥ ⊥ η, [M]F, t ≥0.
Using Theorem 8.9 and the definitions of ˜ ξ and ˜ M, we further note that (˜ ξt, ˜ M t) ⊥ ⊥ ηt, [M]t(η, [M]), t ≥0.
Combining those relations and using Theorem 8.12, we obtain (˜ ξt, ˜ M t) ⊥ ⊥ ηt, [M]tF, t ≥0, which implies ˜ Ft⊥ ⊥Ft F for all t ≥0.
(ii) Since B⊥ ⊥(˜ ξ, F), we get B ⊥ ⊥ [M], ˜ Ms(F, ˜ ξ, ˜ M s), s ≥0, and so θs ˜ M ⊥ ⊥ [M], ˜ Ms ˜ Fs, s ≥0.
Combining with the relation θs ˜ M⊥ ⊥[M] ˜ M s and using Theorem 8.12, we obtain θs ˜ M⊥ ⊥ [M] ˜ Fs, s ≥0.
Localizing if necessary to ensure integrability, we get for any s ≤t the desired martingale property E ˜ Mt −˜ Ms ˜ Fs = E E ˜ Mt −˜ Ms ˜ Fs, [M] ˜ Fs = E E ˜ Mt −˜ Ms [M] ˜ Fs = 0, and the associated rate property E ˜ M 2 t −˜ M 2 s ˜ Fs = E E ˜ M 2 t −˜ M 2 s ˜ Fs, [M] ˜ Fs = E E ˜ M 2 t −˜ M 2 s [M] ˜ Fs = E E [M]t −[M]s [M] ˜ Fs .
6We may think of ˜ ξ as a Coxification of ξ and of ˜ M as a Brownification of M.
16. Independent-Increment and L´ evy Processes 361 (iii) Property (i) shows that η remains an ˜ F-compensator of ξ. Next, the relation ˜ ξ⊥ ⊥η F implies θt˜ ξ⊥ ⊥η,˜ ξt Ft. Combining with the Cox property and using Theorem 8.12, we get θt˜ ξ⊥ ⊥η(˜ ξt, Ft).
Invoking the tower property of conditional expectations and the Cox property of ˜ ξ, we obtain E( θr ˜ ξ | ˜ Ft) = E θt˜ ξ ˜ ξt, Ft = E E θt˜ ξ ˜ ξt, η, Ft ˜ ξt, Ft = E E( θt˜ ξ | η) ˜ ξt, Ft = E θtη | ˜ ξt, Ft = E( θtη | ˜ Ft).
Since η remains ˜ F-predictable, it is then an ˜ F-compensator of ˜ ξ.
(iv) The martingale properties in (ii) extend to the filtrations generated by (ξ, η) and ˜ ξ, η), respectively, by the tower property of conditional expecta-tions. Since η is continuous and adapted to both filtrations, it is both (ξ, η)-and (˜ ξ, η)-predictable. The assertions follow by combination of those proper-ties.
2 Proof of Theorem 16.20 for continuous η: Let ξ be the jump point process of X, and let η denote the F-compensator of ξ. Further, let M be the continu-ous martingale component of X, and let A be the predictable drift component of X, for a suitable truncation function. Define ˜ ξ, ˜ M, ˜ F as in Lemma 16.21, and construct an associated semi-martingale ˜ X by compensating the jumps given by ˜ ξ. Then ˜ X has the same local characteristics Y = ([M], η, A) as X, and so the two processes are ˜ F-tangential. Since ˜ ξ⊥ ⊥η F and B⊥ ⊥(˜ ξ, F), we have ˜ X⊥ ⊥Y F, and the independence properties of B show that ˜ X has condi-tionally independent increments.
2 Tangential processes have similar asymptotic properties at ∞. Here we state only the most basic result of this type. Say that the function ϕ: R+ →R+ has moderate growth, if it is non-decreasing and continuous with ϕ(0) = 0, and there exists a constant c > 0 such that ϕ(2x) ≤c ϕ(x) for all x > 0. In that case, there exists a function h > 0 on (0, ∞), such that ϕ(cx) ≤h(c) ϕ(x) for all c, x > 0. Basic examples include the power functions ϕ(x) = |x|p with p > 0, and the functions ϕ(x) = x ∧1 and ϕ(x) = 1 −e−x.
Theorem 16.22 (tangential comparison, Zinn, Hitchenko, OK) Let the pro-cesses X, Y be tangential and either increasing or locally symmetric. Then for any function ϕ: R+ →R+ of moderate growth, we have7 E ϕ(X∗) ≍E ϕ(Y ∗).
Our proof is based on some tail estimates: 7The domination constants are understood to depend only on ϕ.
362 Foundations of Modern Probability Lemma 16.23 (tail comparison) Let the processes X, Y be tangential and either increasing or locally symmetric. Then for any c, x > 0, (i) P (ΔX)∗> x ≤2 P (ΔY )∗> x , (ii) P{X∗> x} ≤3 P{Y ∗> c x} + 4 c.
Proof: (i) Let ξ, η be the jump point processes of X, Y . Fix any x > 0, and introduce the optional time τ = inf t > 0; |ΔYt| > x .
Since the set (0, τ] is predictable by Lemma 10.2, we get P (ΔX)∗> x ≤P{τ < ∞} + E ξ (0, τ] × [−x, x]c = P{τ < ∞} + E η (0, τ] × [−x, x]c = 2 P{τ < ∞} = 2 P (ΔY )∗> x .
(ii) Fix any c, x > 0. For increasing X, Y , form ˆ X, ˆ Y by omitting all jumps greater than cx, which clearly preserves the tangential relation. If X, Y are in-stead locally symmetric, we form ˆ X, ˆ Y by omitting all jumps of modulus > 2cx.
By Lemma 20.5 below and its proof, ˆ X, ˆ Y are then local L2-martingales with jumps a.s. bounded by 4cx. They also remain tangential, and the tangential relation carries over to the quadratic variation processes [ ˆ X], [ˆ Y ].
Now introduce the optional time τ = inf t > 0; |ˆ Yt| > c x .
For increasing X, Y , we have x P{ ˆ Xτ > x} ≤E ˆ Xτ = E ˆ Yτ = E ˆ Yτ−+ EΔˆ Yτ ≤3 c x.
If X, Y are instead locally symmetric, then using the Jensen and Bernstein– L´ evy inequalities in 4.5 and 9.16, the integration by parts formula in Theorem 20.6 below, and the tangential properties and bounds, we get x P{ ˆ X∗ τ > x} 2 ≤(E| ˆ Xτ|)2 ≤E ˆ X2 τ = E[ ˆ X]τ = E[ˆ Y ]τ = E ˆ Y 2 τ ≤ |ˆ Yτ−| + |Δˆ Yτ| 2 ≤(4 c x)2.
Thus, in both cases P{ ˆ X∗ τ > x} ≤4c.
Since Y ∗≥1 2 (ΔY )∗, we have 16. Independent-Increment and L´ evy Processes 363 (ΔX)∗≤2 c x ⊂{X = ˆ X}, {τ < ∞} ⊂{ˆ Y ∗> c x} ⊂{Y ∗> c x}.
Combining with the previous tail estimate and (i), we obtain P{X∗> x} ≤P (ΔX)∗> 2 c x + P{X∗> x} ≤2 P (ΔY )∗> 2 c x + P{τ < ∞} + P{ ˆ X∗ τ > x} ≤3 P{Y ∗> c x} + 4 c.
2 Proof of Theorem 16.22: For any c, x > 0, consider the optional times τ = inf t > 0; |Xt| > x , σ = inf t > 0; P{(θtY )∗> c x | Ft} > c .
Since θτX, θτY remain conditionally tangential given Fτ, Lemma 16.23 (ii) yields a.s. on {τ < σ} P (θτX)∗> x Fτ ≤3 P (θτY )∗> c x Fτ + 4c ≤7c, and since {τ < σ} ∈Fτ by Lemma 9.1, we get P X∗> 3 x, (ΔX)∗≤x, σ = ∞ ≤P (θτX)∗> x, τ < σ = E P{(θτX)∗> x | Fτ}; τ < σ ≤7c P{τ < ∞} = 7c P{X∗> x}.
Furthermore, Lemma 16.23 (i) yields P (ΔX)∗> x ≤2 P (ΔY )∗> x ≤2 P Y ∗> 1 2 x , and Theorem 9.16 gives P{σ < ∞} = P supt P (θtY )∗> c x Ft > c ≤P supt P Y ∗> 1 2 c x Ft > c ≤c−1P Y ∗> 1 2 c x .
Combining the last three estimates, we obtain P{X∗> 3 x} ≤P X∗> 3 x, (ΔX)∗≤x, σ = ∞ + P (ΔX)∗> x + P{σ < ∞} ≤7c P{X∗> x} + 2 P Y ∗> 1 2 x + c−1P Y ∗> 1 2 c x .
364 Foundations of Modern Probability Since ϕ is non-decreasing of moderate growth, so that ϕ(rx) ≤h(r) ϕ(x) for some function h > 0, we obtain (h−1 3 −7c)E ϕ(X∗) ≤E ϕ(X∗/3) −7c E ϕ(X∗) ≤2 E ϕ(2 Y ∗) + c−1E ϕ(2 Y ∗/c) ≤ 2h2 + c−1h2/c E ϕ(Y ∗).
Now choose c < (7h3)−1 to get E ϕ(X∗) < ⌢E ϕ(Y ∗).
2 Combining the last two theorems, we may simplify the tangential compari-son. Say that X, Y are weakly tangential if their local characteristics have the same distribution. Here there is no mention of associated filtrations, and the two processes may even be defined on different probability spaces.
Corollary 16.24 (extended comparison) Theorem 16.22 remains true when X and Y are weakly tangential.
Proof: Let X, Y be semi-martingales on some probability spaces Ω, Ω′ with filtrations F, G, and write A, B for the associated local characteristics.
By Theorem 16.20 we may choose some processes ˜ X, ˜ Y with conditionally independent increments, on suitable extensions of Ω, Ω′, such that X, ˜ X are ˜ F-tangential while Y, ˜ Y are ˜ G -tangential, for some standard extensions ˜ F of F and ˜ G of G. In particular, ˜ X, ˜ Y have the same local characteristics A, B.
Now suppose that A d = B, and conclude that also ˜ X d = ˜ Y since ˜ X, ˜ Y have conditionally independent increments. Assuming X, Y to be either increasing or locally symmetric and letting ϕ have moderate growth, we get by Theorem 16.22 E ϕ(X∗) ≍E ϕ( ˜ X∗) = E ϕ(˜ Y ∗) ≍E ϕ(Y ∗).
2 Exercises 1.
Give an example of a process, whose increments are independent but not infinitely divisible. (Hint: Allow fixed discontinuities.) 2. Show that a non-decreasing process with independent increments has at most countably many fixed discontinuities, and that the associated component is indepen-dent of the remaining part. Then write the representation as in Theorem 16.3 for a suitably generalized Poisson process η.
3. Given a convolution semi-group of distributions μt on Rd, construct a L´ evy process X with L(Xt) = μt for all t ≥0, starting from a suitable Poisson process and an independent Brownian motion. (Hint: Use Lemma 4.24 and Theorems 8.17 and 8.23.) 4. Let X be a process with stationary, independent increments and X0 = 0. Show that X has a version with rcll paths. (Hint: Use the previous exercise.) 16. Independent-Increment and L´ evy Processes 365 5. For a L´ evy process X of effective dimension d ≥3, show that |Xt| →∞a.s.
as t →∞. (Hint: Define τ = inf{t; |Xt| > 1}, and iterate to form a random walk (Sn). Show that the latter has the same effective dimension as X, and use Theorem 12.8.) 6.
Let X be a real L´ evy process, and fix any p ∈(0, 2).
Show that t−1/pXt converges a.s. iffE|X1|p < ∞and either p ≤1 or EX1 = 0. (Hint: Define a random walk (Sn) as before, show that S1 satisfies the same moment condition as X1, and apply Theorem 5.23.) 7. Show that a real L´ evy process X is a subordinator iffX1 ≥0 a.s.
8. Let X be a real process as in Theorem 16.3. Show that if X ≥0 a.s., then G = 0 a.s. and ν is restricted to (0, ∞).
9. Let X be a weakly p -stable L´ evy process. Show that when p ̸= 1, we may choose c ∈R such that the process Xt −ct becomes strictly p -stable. Explain why such a centering fails for p = 1.
10. Show that a L´ evy process is symmetric p -stable for a p ∈(0, 2] iffE eiuXt = e−ct|u|p for a constant c ≥0. Similarly, show that a subordinator Y is strictly p -stable for a p ∈(0, 1) iffE e−uYt = e−ctup for a constant c ≥0. In each case, find the corresponding characteristics. (Hint: Derive a scaling property for the characteristic exponent.) 11. Let X be a symmetric, p -stable L´ evy process and let T be a strictly q -stable subordinator, for some p ∈(0, 2] and q ∈(0, 1). Show that Y = X ◦T is a symmetric p q -stable L´ evy process. (Hint: Check that Y is again a L´ evy process, and calculate E eiuYt.) 12. Give an example of two tangential ql-continuous, simple point processes ξ, η, such that η is Cox while ξ is not.
13. Give an example two tangential continuous martingales M, N, such that N has conditionally independent increments while M has not.
14. Let M, N be tangential continuous martingales. Show that ∥M ∗∥p ≍∥N ∗∥p for all p > 0. (Hint: Use Theorem 18.7.) 15. Let M, N be tangential martingales. Show that ∥M∗∥p ≍∥N ∗∥p for all p ≥1.
(Hint: Note that [M], [N] are again tangential, and use Theorems 16.22 and 20.12.) Chapter 17 Feller Processes and Semi-groups Transition semi-groups, pseudo-Poisson processes, Feller properties, re-solvents and generator, forward and backward equations, Yosida ap-proximation, closure and cores, L´ evy processes, Hille–Yosida theorem, positive-maximum principle, compactification, existence and regulariza-tion, strong Markov property, discontinuity sets, Dynkin’s formula, char-acteristic operator, diffusions and elliptic operators, convergence of Feller processes, approximation of Markov chains, quasi-left continuity As stressed before, Markov processes are among the most basic processes of modern probability. After studying several special cases in previous chapters, we now turn to a detailed study of the broad class of Feller processes. Those are Markov processes general enough to cover most applications of interest, yet restricted by some regularity conditions that allow for a reasonable flexibility, leading to a rich arsenal of basic tools and powerful properties.
The crucial new idea is to regard the transition kernels as operators Tt on an appropriate function space. The Chapman–Kolmogorov relation then turns into the semi-group property Ts Tt = Ts+t, which suggests a formal representa-tion Tt = etA in terms of a generator A. Under suitable regularity conditions— the so-called Feller properties—it is indeed possible to define a generator A describing the infinitesimal evolution of the underlying process X. Under fur-ther conditions, X will be shown to have continuous paths iffA extends an elliptic differential operator. In general, the powerful Hille–Yosida theorem provides precise conditions for the existence of a Feller process corresponding to a given operator A.
Using the basic regularity theorem for sub-martingales from Chapter 9, we show that every Feller process has a right-continuous version with left-hand limits. Given this fundamental result, it is straightforward to extend the strong Markov property to arbitrary Feller processes. We also explore some profound connections with martingale theory. Finally, we establish a general continuity theorem for Feller processes, and deduce a corresponding approximation of discrete-time Markov chains by diffusions and other continuous-time Markov processes, anticipating some weak convergence results from Chapter 23.
To clarify the connection between transition kernels and operators, let μ be an arbitrary probability kernel on a measurable space (S, S). The associated transition operator T is given by Tf(x) = (Tf)(x) = μ(x, dy) f(y), x ∈S, (1) 367 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 368 Foundations of Modern Probability where f : S →R is assumed to be measurable and either bounded or non-negative. Approximating f by simple functions and using monotone conver-gence, we see that Tf is again a measurable function on S.
We also note that T is a positive contraction operator, in the sense that 0 ≤f ≤1 implies 0 ≤Tf ≤1. A special role is played by the identity operator I, corresponding to the kernel μ(x, ·) ≡δx. The importance of transition operators for the study of Markov processes is due to the following simple fact.
Lemma 17.1 (semi-group property) For each t ≥0, let μt be a probability kernel on S with associated transition operator Tt. Then for any s, t ≥0, μs+t = μs μt ⇔ Ts+t = Ts Tt.
Proof: For any B ∈S, we have Ts+t1B(x) = μs+t(x, B) and (Ts Tt)1B(x) = Ts(Tt1B)(x) = μs(x, dy) (Tt1B)(y) = μs(x, dy) μt(y, B) = (μs μt)(x, B).
Thus, the Chapman–Kolmogorov relation on the left is equivalent to Ts+t1B = (TsTt)1B for any B ∈S, which extends to the semi-group property Ts+t = Ts Tt, by linearity and monotone convergence.
2 By analogy with the situation for the Cauchy equation, we might hope to represent the semi-group in the form Tt = etA, t ≥0, for a suitable generator A. For this formula to make sense, the operator A must be suitably bounded, so that the exponential function can be defined through a Taylor expansion.
We consider a simple case where such a representation exists.
Proposition 17.2 (pseudo-Poisson processes) Let (Tt) be the transition semi-group of a pure jump-type Markov process in S with bounded rate kernel α.
Then Tt = etA, t ≥0, where for bounded measurable functions f : S →R, Af(x) = f(y) −f(x) α(x, dy), x ∈S.
Proof: Since α is bounded, we may write α(x, B) ≡c μ(x, B \ {x}) for a probability kernel μ and a constant c ≥0. By Proposition 13.7, X is then a pseudo-Poisson process of the form X = Y ◦N, where Y is a discrete-time Markov chain with transition kernel μ, and N is an independent Poisson process with fixed rate c. Letting T be the transition operator associated with μ, we get for any t ≥0 and f as stated 17. Feller Processes and Semi-groups 369 Ttf(x) = Exf(Xt) = Σ n≥0 Ex f(Yn); Nt = n = Σ n≥0 P{Nt = n} Exf(Yn) = Σ n≥0 e−ct(ct)n n!
T nf(x) = ect (T−I)f(x).
Hence, Tt = etA for all t ≥0, where Af(x) = c (T −I)f(x) = c f(y) −f(x) μ(x, dy) = f(y) −f(x) α(x, dy).
2 For the further analysis, we take S to be a locally compact, separable metric space, and write C0 = C0(S) for the class of continuous functions f : S →R with f(x) →0 as x →∞. We can make C0 into a Banach space by introducing the norm ∥f∥= supx |f(x)|. A semi-group of positive contraction operators Tt on C0 is called a Feller semi-group, if it has the additional regularity properties (F1) TtC0 ⊂C0, t ≥0, (F2) Ttf(x) →f(x) as t →0, f ∈C0, x ∈S.
In Theorem 17.6 we show that (F1)−(F2), together with the semi-group prop-erty, imply the strong continuity (F3) Ttf →f as t →0, f ∈C0.
To clarify the probabilistic significance of those conditions, we assume for simplicity that S is compact, and also that (Tt) is conservative, in the sense that Tt1 = 1 for all t. For every initial state x, we may introduce an associated Markov process Xx t , t ≥0, with transition operators Tt.
Lemma 17.3 (Feller properties) Let (Tt) be a conservative transition semi-group on a compact metric space (S, ρ). Then the Feller properties (F1)−(F3) are equivalent to, respectively, (i) Xx t d →Xy t as x →y, t ≥0, (ii) Xx t P →x as t →0, x ∈S, (iii) supx Ex ρ(Xs, Xt) ∧1 →0 as |s −t| →0.
Proof: The first two equivalences are obvious, so we prove only the third one. Then let the sequence f1, f2, . . . be dense in C = CS. By the compactness of S, we note that xn →x in S ifffk(xn) →fk(x) for each k. Thus, ρ is topologically equivalent to the metric ρ′(x, y) = Σ k≥1 2−k |fk(x) −fk(y)| ∧1 , x, y ∈S.
370 Foundations of Modern Probability Since S is compact, the identity mapping on S is uniformly continuous with respect to ρ and ρ′, and so we may assume that ρ = ρ′.
Next we note that, for any f ∈C, x ∈S, and t, h ≥0, Ex f(Xt) −f(Xt+h) 2 = Ex f 2 −2f Thf −Thf 2 (Xt) ≤ f 2 −2f Thf + Thf 2 ≤2∥f∥ f −Thf + f 2 −Thf 2 .
Assuming (F3), we get supx Ex|fk(Xs) −fk(Xt)| →0 as s −t →0 for fixed k, and so by dominated convergence supx Exρ(Xs, Xt) →0. Conversely, the latter condition yields Thfk →fk for each k, which implies (F3).
2 Our first aim is to construct the generator of a Feller semi-group (Tt) on C0. Since in general there is no bounded linear operator A satisfying Tt = etA, we need to look for a suitable substitute. For motivation, note that if p is a real-valued function on R+ with representation pt = e at, we can recover a from p by either a differentiation or an integration: t−1(pt −1) →a as t →0, ∞ 0 e−λtpt dt = (λ −a)−1, λ > 0.
The latter formula suggests that we introduce, for every λ > 0, the resolvent or potential Rλ, defined as the Laplace transform Rλf = ∞ 0 e−λt(Ttf) dt, f ∈C0.
Note that the integral exists, since Ttf(x) is bounded and right-continuous in t ≥0 for fixed x ∈S.
Theorem 17.4 (resolvents and generator) Let (Tt) be a Feller semi-group on C0 with resolvents Rλ, λ > 0. Then (i) the λRλ are injective contraction operators on C0, and λRλ →I holds strongly as λ →∞, (ii) the range D = RλC0 is independent of λ and dense in C0, (iii) there exists an operator A on C0 with domain D, such that R −1 λ = λ −A on D for every λ > 0, (iv) A and Tt commute on D for every t ≥0.
Proof, (i)−(iii): If f ∈C0, then (F1) yields Ttf ∈C0 for every t, and so by dominated convergence we have even Rλf ∈C0. To prove the stated contraction property, we may write for any f ∈C0 λRλf ≤λ ∞ 0 e−λt ∥Ttf∥dt ≤λ ∥f∥ ∞ 0 e−λtdt = ∥f∥.
17. Feller Processes and Semi-groups 371 A simple computation yields the resolvent equation Rλ −Rμ = (μ −λ)RλRμ, λ, μ > 0, (2) which shows that the operators Rλ commute and have a common range D. If f = R1g for some g ∈C0, we get by (2) as λ →∞ λRλf −f = (λRλ −I)R1g = (R1 −I)Rλg ≤λ−1 R1 −I ∥g∥→0.
The convergence extends by a simple approximation to the closure of D.
Now introduce the one-point compactification ˆ S = S∪{Δ} of S, and extend any f ∈C0 to ˆ C = C( ˆ S) by putting f(Δ) = 0. If ¯ D ̸= C0, then the Hahn– Banach theorem yields a bounded linear functional ϕ ̸≡0 on ˆ C, such that ϕR1f = 0 for all f ∈C0. By Riesz’ representation Theorem 2.25, ϕ extends to a bounded, signed measure on ˆ S. Letting f ∈C0 and using (F2), we get by dominated convergence as λ →∞ 0 = λϕRλf = ϕ(dx) ∞ 0 λe−λt Ttf(x) dt = ϕ(dx) ∞ 0 e−s Ts/λf(x) ds →ϕf, and so ϕ ≡0. The contradiction shows that D is dense in C0.
To see that the operators Rλ are injective, let f ∈C0 with Rλ0f = 0 for some λ0 > 0. Then (2) yields Rλf = 0 for every λ > 0, and since λRλf →f as λ →∞, we get f = 0. Hence, the inverses R−1 λ exist on D. Multiplying (2) by R−1 λ from the left and by R−1 μ from the right, we get on D the relation R−1 μ −R−1 λ = μ −λ. Thus, the operator A = λ −R−1 λ on D is independent of λ.
(iv) Note that Tt and Rλ commute for any t, λ > 0, and write Tt(λ −A)Rλ = Tt = (λ −A)RλTt = (λ −A) TtRλ.
2 The operator A in Theorem 17.4 is called the generator of the semi-group (Tt).
To emphasize the role of the domain D, we often say that (Tt) has generator (A, D). The term is justified by the following lemma.
Lemma 17.5 (uniqueness) A Feller semi-group is uniquely determined by its generator.
Proof: The operator A determines Rλ = (λ −A)−1 for all λ > 0. By the uniqueness theorem for Laplace transforms, it then determines the measure μf(dt) = Ttf(x)dt on R+ for all f ∈C0 and x ∈S. Since the density Ttf(x) is 372 Foundations of Modern Probability right-continuous in t for fixed x, the assertion follows.
2 We now show that any Feller semi-group is strongly continuous, and derive general versions of Kolmogorov’s forward and backward equations.
Theorem 17.6 (strong continuity, forward and backward equations) Let (Tt) be a Feller semi-group with generator (A, D). Then (i) (Tt) is strongly continuous and satisfies Ttf −f = t 0 TsAf ds, f ∈D, t ≥0, (ii) Ttf is differentiable at 0 ifff ∈D, in which case d dt(Ttf) = TtAf = ATtf, t ≥0.
For the proof we use the Yosida approximation Aλ = λARλ = λ(λRλ −I), λ > 0, (3) with associated semi-group T λ t = etAλ, t ≥0. Note that this is the transition semi-group of a pseudo-Poisson process with rate λ, based on the transition operator λRλ.
Lemma 17.7 (Yosida approximation) For any f ∈D, (i) Ttf −T λ t f ≤t Af −Aλf , t, λ > 0, (ii) Aλf →Af as λ →∞, (iii) T λ t f →Ttf as λ →∞for each f ∈C0, uniformly for bounded t ≥0.
Proof: By Theorem 17.4, we have Aλf = λRλAf →Af for any f ∈D.
For fixed λ > 0, we further note that h−1(T λ h −I) →Aλ in the norm topology as h →0. Now for any commuting contraction operators B and C, Bnf −Cnf ≤ Bn−1 + Bn−2C + · · · + Cn−1 Bf −Cf ≤n Bf −Cf .
Fixing any f ∈C0 and t, λ, μ > 0, we hence obtain as h = t/n →0 T λ t f −T μ t f ≤n T λ h f −T μ h f = t T λ h f −f h −T μ h f −f h →t Aλf −Aμf .
For f ∈D it follows that T λ t f is Cauchy convergent as λ →∞for fixed t. Since D is dense in C0, the same property holds for arbitrary f ∈C0. Denoting the limit by ˜ Ttf, we get in particular T λ t f −˜ Ttf ≤t Aλf −Af , f ∈D, t ≥0.
(4) 17. Feller Processes and Semi-groups 373 Thus, for each f ∈D we have T λ t f →˜ Ttf as λ →∞, uniformly for bounded t, which again extends to all f ∈C0.
To identify ˜ Tt, we may use the resolvent equation (2) to obtain, for any f ∈C0 and λ, μ > 0, ∞ 0 e−λt T μ t μRμf dt = (λ −Aμ)−1μRμf = μ λ + μRνf, (5) where ν = λμ(λ + μ)−1. As μ →∞we have ν →λ, and so Rνf →Rλ.
Furthermore, T μ t μRμf −˜ Ttf ≤ μRμf −f + T μ t f −˜ Ttf →0, and so by dominated convergence, (5) yields e−λt ˜ Ttfdt = Rλf. Hence, the semi-groups (Tt) and ( ˜ Tt) have the same resolvent operators Rλ, and so they agree by Lemma 17.5. In particular, (i) then follows from (4).
2 Proof of Theorem 17.6: The semi-group (T λ t ) is clearly norm continuous in t for each λ > 0, and the strong continuity of (Tt) follows by Lemma 17.7 as λ →∞. We further have h−1(T λ h −I) →Aλ as h ↓0. Using the semi-group relation and continuity, we obtain more generally d dtT λ t = AλT λ t = T λ t Aλ, t ≥0, which implies T λ t f −f = t 0 T λ s Aλf ds, f ∈C0, t ≥0.
(6) If f ∈D, Lemma 17.7 yields as λ →∞ T λ s Aλf −TsAf ≤ Aλf −Af + T λ s Af −TsAf →0, uniformly for bounded s, and so (i) follows from (6) as λ →∞. By the strong continuity of Tt, we may differentiate (i) to get the first relation in (ii). The second relation holds by Theorem 17.4.
Conversely, let h−1(Thf −f) →g for some functions f, g ∈C0. As h →0, we get ARλf ←Th −I h Rλf = Rλ Thf −f h →Rλg, and so f = (λ −A)Rλf = λRλf −ARλf = Rλ(λf −g) ∈D.
2 For a given generator A, it is often hard to identify the domain D, or the latter may simply be too large for convenient calculations. It is then useful to restrict A to a suitable sub-domain. An operator A with domain D on a Banach space B is said to be closed, if its graph G = {(f, Af); f ∈D} is a closed subset of B2. In general, we say that A is closable, if the closure ¯ G is 374 Foundations of Modern Probability the graph of a single-valued operator ¯ A, called the closure of A. Note that A is closable iffthe conditions D ∋fn →0 and Afn →g imply g = 0.
When A is closed, a core for A is defined as a linear sub-space D ⊂D, such that the restriction A|D has closure A. Note that A is then uniquely determined by A|D. We give conditions ensuring D ⊂D to be a core, when A is the generator of a Feller semi-group (Tt) on C0.
Lemma 17.8 (closure and cores) Let (A, D) be the generator of a Feller semi-group, and fix any λ > 0 and a sub-space D ⊂D. Then (i) (A, D) is closed, (ii) D is a core for A ⇔(λ −A)D is dense in C0.
Proof: (i) Let f1, f2, . . . ∈D with fn →f and Afn →g. Then (I −A)fn →f −g. Since R1 is bounded, we get fn →R1(f −g). Hence, f = R1(f −g) ∈ D, and (I −A)f = f −g or g = Af. Thus, A is closed.
(ii) If D is a core for A, then for any g ∈C0 and λ > 0 there exist some f1, f2, . . . ∈D with fn →Rλg and Afn →ARλg, and we get (λ −A)fn → (λ −A)Rλg = g. Thus, (λ −A)D is dense in C0.
Conversely, let (λ −A)D be dense in C0. To see that D is a core, fix any f ∈D. By hypothesis, we may choose some f1, f2, . . . ∈D with gn ≡(λ −A)fn →(λ −A)f ≡g.
Since Rλ is bounded, we obtain fn = Rλgn →Rλg = f, and thus Afn = λfn −gn →λf −g = Af.
2 A sub-space D ⊂C0 is said to be invariant under (Tt) if TtD ⊂D for all t ≥0. Note that for any subset B ⊂C0, the linear span of t TtB is an invariant sub-space of C0.
Proposition 17.9 (invariance and cores, Watanabe) Let (A, D) be the gen-erator of a Feller semi-group. Then every dense, invariant sub-space D ⊂D is a core for A.
Proof: By the strong continuity of (Tt), the operator R1 can be approxi-mated in the strong topology by some finite linear combinations L1, L2, . . . of the operators Tt. Now fix any f ∈D, and define gn = Lnf. Noting that A and Ln commute on D by Theorem 17.4, we get (I −A)gn = (I −A)Lnf = Ln(I −A)f →R1(I −A)f = f.
17. Feller Processes and Semi-groups 375 Since gn ∈D which is dense in C0, it follows that (I −A)D is dense in C0.
Hence, D is a core by Lemma 17.8.
2 The L´ evy processes in Rd are archetypes of Feller processes, and we pro-ceed to identify their generators1. Let C∞ 0 denote the class of all infinitely differentiable functions f on Rd, such that f and all its derivatives belong to C0 = C0(Rd).
Theorem 17.10 (L´ evy processes) Let Tt, t ≥0, be transition operators of a L´ evy process in Rd with characteristics (a, b, ν). Then (i) (Tt) is a Feller semi-group, (ii) C∞ 0 is a core for the generator A of (Tt), (iii) for any f ∈C∞ 0 and x ∈Rd, we have Af(x) = 1 2 aij ∂2 ijf(x) + bi ∂if(x) + f(x + y) −f(x) −yi ∂if(x) 1{|y| ≤1} ν(dy).
In particular, a standard Brownian motion in Rd has generator 1 2 Δ, and the uniform motion with velocity b ∈Rd has generator b ∇, both on the core C∞ 0 , where Δ and ∇denote the Laplace and gradient operators, respectively. The generator of the jump component has the same form as for the pseudo-Poisson processes in Proposition 17.2, apart from a compensation for small jumps by a linear drift term.
Proof of Theorem 17.10 (i), (iii): As t →0 we have μ∗[t−1] t w →μ1, and so Corollary 7.9 yields μt/t v →ν on Rd \ {0} and at,h ≡t−1 |x|≤h xx′ μt(dx) →ah, bt,h ≡t−1 |x|≤h x μt(dx) →bh, (7) for any h > 0 with ν{|x| = h} = 0. Now fix any f ∈C∞ 0 , and write t−1 Ttf(x) −f(x) = t−1 f(x + y) −f(x) μt(dy) = t−1 |y|≤h f(x + y) −f(x) −yi ∂if(x) −1 2 yiyj ∂2 ijf(x) μt(dy) + t−1 |y|>h f(x + y) −f(x) μt(dy) + bi t,h ∂if(x) + 1 2 aij t,h ∂2 ijf(x).
As t →0, the last three terms approach the expression in (iii), though with aij replaced by ah ij and with integration over {|x| > h}. To establish the required convergence, it remains to show that the first term on the right tends to zero 1Here and below, summation over repeated indices is understood.
376 Foundations of Modern Probability as h →0, uniformly for small t > 0. Now this is clear from (7), since the integrand is of order h|y|2 by Taylor’s formula. By the uniform boundedness of the derivatives of f, the convergence is uniform in x. Thus, C∞ 0 ⊂D by Theorem 17.6, and (iii) holds on C∞ 0 .
(ii) Since C∞ 0 is dense in C0, it suffices by Proposition 17.9 to show that it is also invariant under (Tt). Then note that, by dominated convergence, the differentiation operators commute with each Tt, and use condition (F1).
2 We proceed to characterize the linear operators A on C0, whose closures ¯ A are generators of Feller semi-groups.
Theorem 17.11 (generator criteria, Hille, Yosida) Let A be a linear operator on C0 with domain D. Then A is closable and its closure ¯ A is the generator of a Feller semi-group on C0, iffthese conditions hold: (i) D is dense in C0, (ii) the range of λ0 −A is dense in C0 for some λ0 > 0, (iii) f ∨0 ≤f(x) ⇒Af(x) ≤0, for any f ∈D and x ∈S.
Condition (iii) is known as the positive-maximum principle.
Proof: First assume that ¯ A is the generator of a Feller semi-group (Tt).
Then (i) and (ii) hold by Theorem 17.4. To prove (iii), let f ∈D and x ∈S with f + = f ∨0 ≤f(x). Then Ttf(x) ≤Ttf +(x) ≤∥Ttf +∥ ≤∥f +∥= f(x), t ≥0, and so h−1(Thf −f)(x) ≤0. As h →0, we get Af(x) ≤0.
Conversely, let A satisfy (i)−(iii). For any f ∈D, choose an x ∈S with |f(x)| = ∥f∥, and put g = f sgn f(x). Then g ∈D with g+ ≤g(x), and so (iii) yields Ag(x) ≤0. Thus, we get for any λ > 0 (λ −A)f ≥λg(x) −Ag(x) ≥λg(x) = λ∥f∥.
(8) To show that A is closable, let f1, f2, . . . ∈D with fn →0 and Afn →g. By (i) we may choose g1, g2, . . . ∈D with gn →g, and (8) yields (λ −A)(gm + λfn) ≥λ gm + λfn , m, n ∈N, λ > 0.
As n →∞, we get ∥(λ −A)gm −λg∥≥λ∥gm∥. Dividing by λ and letting λ →∞, we obtain ∥gm −g∥≥∥gm∥, which gives ∥g∥= 0 as m →∞. Thus, A is closable, and by (8) the closure ¯ A satisfies (λ −¯ A)f ≥λ∥f∥, λ > 0, f ∈dom( ¯ A).
(9) 17. Feller Processes and Semi-groups 377 Now let λn →λ > 0 and (λn −¯ A)fn →g for some f1, f2, . . . ∈dom( ¯ A). By (9) the sequence (fn) is then Cauchy, say with limit f ∈C0. The definition of ¯ A yields (λ −¯ A)f = g, and so g belongs to the range of λ −¯ A. Letting Λ be the set of constants λ > 0 such that λ −¯ A has range C0, it follows in particular that Λ is closed. If it can be shown to be open as well, then (ii) gives Λ = (0, ∞).
Then fix any λ ∈Λ, and conclude from (9) that λ −¯ A has a bounded inverse Rλ with norm ∥Rλ∥≤λ−1. For any μ > 0 with |λ −μ|∥Rλ∥< 1, we may form a bounded linear operator ˜ Rμ = Σ n≥0 (λ −μ)nRn+1 λ , and note that (μ −¯ A) ˜ Rμ = (λ −¯ A) ˜ Rμ −(λ −μ) ˜ Rμ = I.
In particular μ ∈Λ, which shows that λ ∈Λ o.
To prove the resolvent equation (2), we start from the identity (λ−¯ A)Rλ = (μ −¯ A)Rμ = I. A simple rearrangement yields (λ −¯ A)(Rλ −Rμ) = (μ −λ)Rμ, and (2) follows as we multiply from the left by Rλ. In particular, (2) shows that Rλ and Rμ commute for any λ, μ > 0.
Since Rλ(λ −¯ A) = I on dom( ¯ A) and ∥Rλ∥≤λ−1, we have for any f ∈ dom( ¯ A) as λ →∞ λRλf −f = Rλ ¯ Af ≤λ−1∥¯ Af∥→0.
From (i) and the contractivity of λRλ, it follows easily that λRλ →I in the strong topology. Now define Aλ as in (3), and let T λ t = etAλ. Arguing as in the proof of Lemma 17.7, we get T λ t f →Ttf for each f ∈C0, uniformly for bounded t, where the Tt form a strongly continuous family of contraction operators on C0, such that e−λt Tt dt = Rλ for all λ > 0. To deduce the semi-group property, fix any f ∈C0 and s, t ≥0, and note that as λ →∞, Ts+t −TsTt f = Ts+t −T λ s+t f + T λ s T λ t −Tt f + T λ s −Ts Ttf →0.
The positivity of the operators Tt will follow immediately, if we can show that Rλ is positive for each λ > 0. Then fix any function g ≥0 in C0, and put f = Rλg, so that g = (λ −¯ A)f.
By the definition of ¯ A, there exist some f1, f2, . . . ∈D with fn →f and Afn →¯ Af. If infx f(x) < 0, we have infx fn(x) < 0 for all sufficiently large n, and we may choose some xn ∈S with fn(xn) ≤fn ∧0. By (iii) we have Afn(xn) ≥0, and so infx (λ −A)fn(x) ≤(λ −A)fn(xn) ≤λfn(xn) = λ infxfn(x).
378 Foundations of Modern Probability As n →∞, we get the contradiction 0 ≤infxg(x) = infx (λ −¯ A)f(x) ≤λ infxf(x) < 0.
To see that ¯ A is the generator of the semi-group (Tt), we note that the opera-tors λ −¯ A are inverses of the resolvent operators Rλ.
2 The previous proof shows that, if an operator A on C0 satisfies the positive maximum principle in (iii), then it is dissipative, in the sense that ∥(λ−A)f∥≥ λ∥f∥for all f ∈dom(A) and λ > 0. This yields the following simple observa-tion, needed later.
Lemma 17.12 (maximality) Let (A, D) be the generator of a Feller semi-group on C0, and consider an extension of A to a linear operator (A′, D′) satisfying the positive-maximum principle. Then (A, D) = (A′, D′).
Proof: Fix any f ∈D′, and put g = (I −A′)f. Since A′ is dissipative and (I −A)R1 = I on C0, we get f −R1g ≤ (I −A′)(f −R1g) = g −(I −A)R1g = 0, and so f = R1g ∈D.
2 We proceed to show how a nice Markov process can be associated with every Feller semi-group (Tt). For the corresponding transition kernels μt to have total mass 1, we need the operators Tt to be conservative, in the sense that supf≤1 Ttf(x) = 1 for all x ∈S.
This can be achieved by a suitable extension.
Then introduce an auxiliary state2 Δ ̸∈S, and form the compactified space ˆ S = S∪{Δ}, where Δ is chosen as the point at infinity when S is non-compact, and otherwise is taken to be isolated from S. Note that any function f ∈C0 has a continuous extension to ˆ S, achieved by putting f(Δ) = 0. The original semi-group on C0 extends as follows to a conservative semi-group on ˆ C = C ˆ S.
Lemma 17.13 (compactification) Every Feller semi-group (Tt) on C0 extends to a conservative Feller semi-group ( ˆ Tt) on ˆ C, given by ˆ Ttf = f(Δ) + Tt f −f(Δ) , t ≥0, f ∈ˆ C.
Proof: It is straightforward to verify that ( ˆ Tt) is a strongly continuous semi-group on ˆ C. To show that the operators ˆ Tt are positive, fix any f ∈ˆ C with f ≥0, and note that g ≡f(Δ) −f ∈C0 with g ≤f(Δ). Hence, 2often called the coffin state or cemetery; this morbid terminology is well established 17. Feller Processes and Semi-groups 379 Ttg ≤Ttg+ ≤∥Ttg+∥ ≤∥g+∥≤f(Δ), and so ˆ Ttf = f(Δ) −Ttg ≥0. The contraction and conservation properties now follow from the fact that ˆ Tt1 = 1.
2 Next we construct an associated semi-group of Markov transition kernels μt on ˆ S, satisfying Ttf(x) = f(y) μt(x, dy), f ∈C0.
(10) Say that a state x ∈ˆ S is absorbing for (μt), if μt(x, {x}) = 1 for all t ≥0.
Proposition 17.14 (existence) For any Feller semi-group (Tt) on C0, there exists a unique semi-group of Markov transition kernels μt on ˆ S, satisfying (10) and such that Δ is absorbing for (μt).
Proof: For fixed x ∈S and t ≥0, the mapping f →ˆ Ttf(x) is a positive linear functional on ˆ C with norm 1. Hence, Riesz’ representation Theorem 2.25 yields some probability measures μt(x, ·) on ˆ S satisfying ˆ Ttf(x) = f(y) μt(x, dy), f ∈ˆ C, x ∈ˆ S, t ≥0.
(11) The measurability on the right is clear by continuity. By a standard approxima-tion followed by a monotone-class argument, we obtain the desired measurabil-ity of μt(x, B), for any t ≥0 and Borel set B ⊂ˆ S. The Chapman–Kolmogorov relation holds on ˆ S by Lemma 17.1. Formula (10) is a special case of (11), which also yields f(y) μt(Δ, dy) = ˆ Ttf(Δ) = f(Δ) = 0, f ∈C0, showing that Δ is absorbing. The uniqueness of (μt) is clear from the last two properties.
2 For any probability measure ν on ˆ S, Theorem 11.4 yields a Markov process Xν in ˆ S with initial distribution ν and transition kernels μt. Let Pν be the distribution of Xν with associated integration operator Eν. When ν = δx, we may write Px and Ex instead, for simplicity. We now extend Theorem 16.3 to a basic regularization theorem for Feller processes. For an rcll process X, we say that Δ is absorbing for X± if Xt = Δ or Xt−= Δ implies Xu = Δ for all u ≥t.
Theorem 17.15 (regularization, Kinney) Let X be a Feller process in ˆ S with initial distribution ν. Then (i) X has an rcll version ˜ X in ˆ S, such that Δ is absorbing for ˜ X±, (ii) if (Tt) is conservative and ν is restricted to S, we can choose ˜ X to be rcll even in S.
380 Foundations of Modern Probability For the proof, we introduce an associated class of super-martingales, to which we may apply the regularity theorems in Chapter 9. Let C+ 0 be the class of non-negative functions in C0.
Lemma 17.16 (resolvents and super-martingales) For any f ∈C+ 0 , the pro-cess Yt = e−tR1f(Xt), t ≥0, is a super-martingale under Pν for every ν.
Proof: Letting (Gt) be the filtration induced by X, we get for any t, h ≥0 E(Yt+h| Gt) = E e−t−hR1f(Xt+h) Gt = e−t−h ThR1f(Xt) = e−t−h ∞ 0 e−s Ts+hf(Xt) ds = e−t ∞ h e−s Tsf(Xt) ds ≤Yt.
2 Proof of Theorem 17.15: By Lemma 17.16 and Theorem 9.28, the process f(Xt) has a.s. right- and left-hand limits along Q+ for any f ∈D ≡dom(A).
Since D is dense in C0, the same property holds for every f ∈C0. By the separability of C0, we can choose the exceptional null set N to be independent of f. If x1, x2, . . . ∈ˆ S are such that f(xn) converges for every f ∈C0, then by compactness of ˆ S the sequence xn converges in the topology of ˆ S. Thus, on N c, the process X has right- and left-hand limits Xt± along Q+, and on N we may redefine X to be 0. Then clearly ˜ Xt = Xt+ is rcll. It remains to show that ˜ X is a version of X, so that Xt+ = Xt a.s. for each t ≥0. This holds since Xt+h P →Xt as h ↓0, by Lemma 17.3 and dominated convergence.
For any f ∈C0 with f > 0 on S, the strong continuity of (Tt) yields even R1f > 0 on S.
Applying Lemma 9.32 to the super-martingale Yt = e−tR1f( ˜ Xt), we conclude that X ≡Δ a.s. on the interval [ζ, ∞), where ζ = inf t ≥0; Δ ∈{ ˜ Xt, ˜ Xt−} . Discarding the exceptional null set, we can make this hold identically. If (Tt) is conservative and ν is restricted to S, then ˜ Xt ∈S a.s. for every t ≥0. Thus, ζ > t a.s. for all t, and hence ζ = ∞a.s., which may again be taken to hold identically. Then ˜ Xt and ˜ Xt−take values in S, and the stated regularity properties remain valid in S.
2 By the last theorem, we may choose Ω as the space of rcll functions in ˆ S such that Δ is absorbing, and let X be the canonical process on Ω. Processes with different initial distributions ν are then distinguished by their distributions Pν on Ω. The process X is clearly Markov under each Pν, with initial distribution ν and transition kernels μt, and it has the regularity properties of Theorem 17.15.
In particular, X ≡Δ on the interval [ζ, ∞), where ζ denotes the terminal time ζ = inf t ≥0; Xt = Δ or Xt−= Δ .
17. Feller Processes and Semi-groups 381 Let (Ft) be the right-continuous filtration generated by X, and put A = F∞= t Ft. As before, the shift operators θt on Ω are given by (θt ω)s = ωs+t, s, t ≥0.
The process X with associated distributions Pν, filtration F = (Ft), and shift operators θt is called the canonical Feller process with semi-group (Tt).
We now state a general version of the strong Markov property. The result extends the special versions obtained in Proposition 11.9 and Theorems 13.1 and 14.11. A further instant of this property is given in Theorem 32.11.
Theorem 17.17 (strong Markov property, Dynkin & Yushkevich, Blumen-thal) For any canonical Feller process X, initial distribution ν, optional time τ, and random variable ξ ≥0, we have Eν ξ ◦θτ Fτ = EXτξ a.s. Pν on {τ < ∞}.
Proof: By Lemmas 8.3 and 9.1, we may take τ < ∞. Let G be the filtration induced by X. Then the times τn = 2−n[2nτ + 1] are G -optional by Lemma 9.4, and Lemma 9.3 gives Fτ ⊂Gτn for all n. Thus, Proposition 11.9 yields Eν ξ ◦θτn; A = Eν EXτnξ; A , A ∈Fτ, n ∈N.
(12) To extend this to τ, we first take ξ = k≤m fk(Xtk) for some f1, . . . , fm ∈C0 and t1 < · · · < tm. Then ξ ◦θτn →ξ ◦θτ, by the right-continuity of X and continuity of f1, . . . , fm. Writing hk = tk −tk−1 with t0 = 0, and using Feller property (F1) and the right-continuity of X, we obtain EXτnξ = Th1 f1Th2 · · · (fm−1Thmfm) · · · (Xτn) →Th1 f1Th2 · · · (fm−1Thmfm) · · · (Xτ) = EXτξ.
Thus, (12) extends to τ by dominated convergence on both sides. Using stan-dard approximation and monotone-class arguments, we may finally extend the result to arbitrary ξ.
2 For a simple application, we get a useful 0−1 law: Corollary 17.18 (Blumenthal’s 0−1 law) For a canonical Feller process, we have PxA = 0 or 1, x ∈S, A ∈F0.
Proof: Taking τ = 0 in Theorem 17.17, we get for any x ∈S and A ∈F0 1A = Px(A|F0) = PX0A = PxA a.s. Px.
2 To appreciate the last result, recall that F0 = F0+. In particular, Px{τ = 0} = 0 or 1 for any state x ∈S and F-optional time τ.
The strong Markov property is often used in the following extended form.
382 Foundations of Modern Probability Corollary 17.19 (optional projection) For any canonical Feller process X, non-decreasing, adapted process Y , and random variable ξ ≥0, we have Ex ∞ 0 (EXtξ) dYt = Ex ∞ 0 (ξ ◦θt) dYt, x ∈S.
Proof: We may take Y0 = 0. Consider the right-continuous inverse τs = inf t ≥0; Yt > s , s ≥0, and note that the τs are optional by Lemma 9.6. By Theorem 17.17, we have Ex EXτsξ; τs < ∞ = Ex Ex(ξ ◦θτs|Fτs); τs < ∞ = Ex ξ ◦θτs; τs < ∞ .
Since τs < ∞iffs < Y∞, we get by integration Ex Y∞ 0 (EXτsξ) ds = Ex Y∞ 0 (ξ ◦θτs) ds, and the asserted formula follows by Lemma 1.24.
2 Next we show that any martingale on the canonical space of a Feller process X is a.s. continuous outside the discontinuity set of X. For Brownian motion, this also follows from the integral representation in Theorem 19.11.
Theorem 17.20 (discontinuity sets) Let X be a canonical Feller process with initial distribution ν, and let M be a local Pν-martingale. Then t > 0; ΔMt ̸= 0 ⊂ t > 0; Xt−̸= Xt a.s.
(13) Proof (Chung & Walsh): By localization, we may take M to be uniformly integrable and hence of the form Mt = E(ξ|Ft) for some ξ ∈L1. Let C be the class of random variables ξ ∈L1, such that the the generated process M satisfies (13). Then C is a linear sub-space of L1. It is further closed, since if M n t = E(ξn|Ft) with ∥ξn∥1 →0, then P supt|M n t | > ε ≤ε−1E|ξn| →0, ε > 0, and so supt |M n t | P →0.
Now let ξ = k≤n fk(Xtk) for some f1, . . . , fn ∈C0 and t1 < · · · < tn.
Writing hk = tk −tk−1, we note that Mt = Π k≤m fk(Xtk) Ttm+1−t gm+1(Xt), t ∈[tm, tm+1], (14) where gk = fk Thk+1 fk+1Thk+2(· · · Thnfn) · · · , k = 1, . . . , n, subject to obvious conventions when t < t1 and t > tn. Since Ttg(x) is jointly continuous in (t, x) for each g ∈C0, equation (14) defines a right-continuous 17. Feller Processes and Semi-groups 383 version of M satisfying (13), and so ξ ∈C. By a simple approximation, C then contains all indicator functions of sets k≤n{Xtk ∈Gk} with G1, . . . , Gn open. The result extends by a monotone-class argument to any X-measurable indicator function ξ, and a routine argument yields the final extension to L1. 2 A basic role in the theory is played by the processes M f t = f(Xt) −f(X0) − t 0 Af(Xs) ds, t ≥0, f ∈D.
Lemma 17.21 (Dynkin’s formula) For a Feller process X, (i) the processes M f are martingales under any initial distributions ν for X, (ii) for any bounded optional time τ, Exf(Xτ) = f(x) + Ex τ 0 Af(Xs) ds, x ∈S, f ∈D.
Proof: For any t, h ≥0, M f t+h −M f t = f(Xt+h) −f(Xt) − t+h t Af(Xs) ds = M f h ◦θt, and so by the Markov property at t and Theorem 17.6, Eν M f t+h Ft −M f t = Eν M f h ◦θt Ft = EXtM f h = 0.
Thus, M f is a martingale, and (ii) follows by optional sampling.
2 To prepare for the next major result, we introduce the optional times τh = inf t ≥0; ρ(Xt, X0) > h , h > 0, where ρ denotes the metric in S. Note that a state x is absorbing iffτh = ∞ a.s. Px for every h > 0.
Lemma 17.22 (escape times) When x ∈S is non-absorbing, we have Exτh < ∞for h > 0 small enough.
Proof: If x is not absorbing, then μt(x, Bε x) < p < 1 for some t, ε > 0, where Bε x = {y; ρ(x, y) ≤ε}. By Lemma 17.3 and Theorem 5.25, we may choose h ∈(0, ε] so small that μt(y, Bh x) ≤μt(y, Bε x) ≤p, y ∈Bh x.
Then Proposition 11.2 yields Px{τh ≥nt} ≤Px ∩ k≤n Xkt ∈Bh x ≤pn, n ∈Z+, and so by Lemma 4.4, 384 Foundations of Modern Probability Exτh = ∞ 0 P{τh ≥s}ds ≤t Σ n≥0 P{τh ≥nt} = t Σ n≥0 pn = t 1 −p < ∞.
2 We turn to a probabilistic description of the generator and its domain. Say that A is maximal within a class of linear operators, if it extends every member of the class.
Theorem 17.23 (characteristic operator, Dynkin) Let (A, D) be the genera-tor of a Feller process. Then (i) for f ∈D we have Af(x) = 0 when x is absorbing, and otherwise Af(x) = lim h→0 Exf(Xτh) −f(x) Exτh , (15) (ii) A is the maximal operator on C0 with those properties.
Proof: (i) Fix any f ∈D. If x is absorbing, then Ttf(x) = f(x) for all t ≥0, and so Af(x) = 0. For non-absorbing x, Lemma 17.21 yields instead Exf(Xτh∧t) −f(x) = Ex τh∧t 0 Af(Xs) ds, t, h > 0.
(16) By Lemma 17.22, we have Eτh < ∞for sufficiently small h > 0, and so (16) extends to t = ∞by dominated convergence. Relation (15) now follows from the continuity of Af, along with the fact that ρ(Xs, x) ≤h for all s < τh.
(ii) Since the positive-maximum principle holds for any extension of A with the stated properties, the assertion follows by Lemma 17.12.
2 In the special case of S = Rd, let ˆ C∞be the class of infinitely differentiable functions on Rd with bounded support. An operator (A, D) with D ⊃ˆ C∞is said to be local on ˆ C∞, if Af(x) = 0 when f vanishes in a neighborhood of x.
For a local generator A on ˆ C∞, the positive-maximum principle yields a local positive-maximum principle, asserting that if f ∈ˆ C∞has a local maximum ≥0 at a point x, then Af(x) ≤0.
We give a basic relationship between diffusion processes and elliptic differ-ential operators. This connection is further explored in Chapters 21, 32, and 34–35.
Theorem 17.24 (Feller diffusions and elliptic operators, Dynkin) Let (A, D) be the generator of a Feller process X in Rd with ˆ C∞⊂D. Then these condi-tions are equivalent: (i) X is continuous on [0, ζ), a.s. Pν for every ν, (ii) A is local on ˆ C∞.
17. Feller Processes and Semi-groups 385 In this case, there exist some functions aij, bi, c ∈CRd with c ≥0 and (aij) symmetric, non-negative definite, such that for any f ∈ˆ C∞and x ∈R+, Af(x) = 1 2 aij(x) ∂2 ijf(x) + bi(x) ∂if(x) −c(x)f(x).
(17) Here the matrix (aij) gives the diffusion rates of X, the vector (bi) repre-sents the drift rate of X, and c is the rate of killing3. For semi-groups of this type, we may choose Ω as the set of paths that are continuous on [0, ζ). The Markov process X is then referred to as a canonical Feller diffusion.
Proof: (i) ⇒(ii): Use Theorem 17.23.
(ii) ⇒(i): Assume (ii). Fix any x ∈Rd and 0 < h < m, and choose an f ∈ˆ C∞with f ≥0 and support {y; h ≤|y −x| ≤m}. Then Af(y) = 0 for all y ∈Bh x, and so Lemma 17.21 shows that f(Xt∧τh) is a martingale under Px.
By dominated convergence, we have Exf(Xτh) = 0, and since m was arbitrary, Px |Xτh −x| ≤h or Xτh = Δ = 1, x ∈Rd, h > 0.
By the Markov property at fixed times, we get for any initial distribution ν Pν ∩ t∈Q+ θ−1 t |Xτh −X0| ≤h or Xτh = Δ = 1, h > 0, which implies Pν sup t<ζ |ΔXt| ≤h = 1, h > 0, proving (i).
Now fix any x ∈Rd, and choose f x 0 , f x i , f x ij ∈ˆ C∞, such that for any y in a neighborhood of x, f x 0 (y) = 1, f x i (y) = yi −xi, f x ij(y) = (yi −xi)(yj −xj).
Putting c(x) = −Af x 0 (x), bi(x) = Af x i (x), aij(x) = Af x ij(x), we see that (17) holds locally when f ∈ˆ C∞agrees near x with a second-degree polynomial. In particular, choosing f0(y) = 1, fi(y) = yi, and fij(y) = yi yj for y near x, we obtain Af0(x) = −c(x), Afi(x) = bi(x) −xi c(x), Afij(x) = aij(x) + xi bj(x) + xj bi(x) −xixj c(x).
This shows that c, bi, and aij = aji are continuous.
Applying the local positive-maximum principle to f x 0 , we get c(x) ≥0. By the same principle applied to the function f = − uif x i 2 = −uiujf x ij, 3Instantaneous transfer to the ‘cemetery’ Δ ; this morbid terminology is again standard.
386 Foundations of Modern Probability we obtain uiuj aij(x) ≥0, which shows that (aij) is non-negative definite.
Finally, let f ∈ˆ C∞be arbitrary with a second-order Taylor expansion ˜ f around x. Then each of the functions gε ±(y) = ± f(y) −˜ f(y) −ε|x −y|2, ε > 0, has a local maximum 0 at x, and so Agε ±(x) = ± Af(x) −A ˜ f(x) −εΣ i aii(x) ≤0, ε > 0.
As ε →0 we get Af(x) = A ˜ f(x), showing that (17) is generally true.
2 We turn to a basic limit theorem for Feller processes, extending some results for L´ evy processes in Theorems 7.7 and 16.14.
Theorem 17.25 (convergence, Trotter, Sova, Kurtz, Mackeviˇ cius) Let X, X1, X2, . . . be Feller processes in S with semi-groups (Tt), (Tn,t) and generators (A, D), (An, Dn), respectively, and fix a core D for A. Then these conditions are equivalent: (i) for any f ∈D, there exist some fn ∈Dn with fn →f and Anfn →Af, (ii) Tn,t →Tt strongly for each t > 0, (iii) Tn,tf →Ttf for every f ∈C0, uniformly for bounded t > 0, (iv) Xn 0 d →X0 in S ⇒Xn sd − →X in D R+, ˆ S .
Our proof is based on two lemmas, the former extending Lemma 17.7.
Lemma 17.26 (norm inequality) Let (Tt), (T ′ t) be Feller semi-groups with generators (A, D), (A′, D′), respectively, where A′ is bounded. Then Ttf −T ′ tf ≤ t 0 (A −A′) Tsf ds, f ∈D, t ≥0.
(18) Proof: Fix any f ∈D and t > 0. Since (T ′ s) is norm continuous, Theorem 17.6 yields ∂ ∂s(T ′ t−sTsf) = T ′ t−s(A −A′)Tsf, 0 ≤s ≤t.
Here the right-hand side is continuous in s, by the strong continuity of (Ts), the boundedness of A′, the commutativity of A and Ts, and the norm continuity of (T ′ s). Hence, Ttf −T ′ tf = t 0 ∂ ∂s(T ′ t−sTsf) ds = t 0 T ′ t−s(A −A′)Tsf ds, and (18) follows by the contractivity of T ′ t−s.
2 We may next establish a continuity property for the Yosida approximations Aλ, Aλ n of the generators A, An, respectively.
17. Feller Processes and Semi-groups 387 Lemma 17.27 (continuity of Yosida approximation) Let (A, D), (An, Dn) be generators of some Feller semi-groups, satisfying (i) of Theorem 17.25. Then Aλ n →Aλ strongly for every λ > 0.
Proof: By Lemma 17.8, it is enough to show that Aλ nf →Aλf for every f ∈(λ−A)D. Then define g ≡Rλf ∈D. By (i) we may choose some gn ∈Dn with gn →g and Angn →Ag. Then fn ≡(λ −An)gn →(λ −A)g = f, and so Aλ nf −Aλf = λ2 Rλ nf −Rλf ≤λ2 Rλ n(f −fn) + λ2 Rλ nfn −Rλf ≤λ∥f −fn∥+ λ2∥gn −g∥→0.
2 Proof of Theorem 17.25, (i) ⇒(iii): Assume (i). Since D is dense in C0, we may take f ∈D. Then choose some functions fn as in (i), and conclude from Lemmas 17.7 and 17.26 that, for any n ∈N and t, λ > 0, Tn,tf −Ttf ≤ Tn,t(f −fn) + (Tn,t −T λ n,t)fn + T λ n,t(fn −f) + (T λ n,t −T λ t )f + (T λ t −Tt)f ≤2 ∥fn −f∥+ t (Aλ −A)f + t (An −Aλ n)fn + t 0 (Aλ n −Aλ)T λ s f ds.
(19) By Lemma 17.27 and dominated convergence, the last term tends to zero as n →∞. For the third term on the right, we get (An −Aλ n)fn ≤ Anfn −Af + (A −Aλ)f + (Aλ −Aλ n)f + Aλ n(f −fn) , which tends to ∥(A −Aλ)f∥by the same lemma. Hence, by (19), lim sup n→∞ sup t≤u Tn,tf −Ttf ≤2u (Aλ −A)f , u, λ > 0, and the desired convergence follows by Lemma 17.7, as we let λ →∞.
(iii) ⇒(ii): Obvious.
(ii) ⇒(i): Assume (ii), fix any f ∈D and λ > 0, and define g = (λ −A)f and fn = Rλ ng. By dominated convergence, fn →Rλg = f. Since (λ−An)fn = g = (λ −A)f, we also note that Anfn →Af, proving (i).
(iv) ⇒(ii): Assume (iv). We may take S to be compact and the semi-groups (Tt) and (Tn,t) to be conservative. We need to show that, for any f ∈C and 388 Foundations of Modern Probability t > 0, we have T n t f(xn) →Ttf(x) whenever xn →x in S. Then let X0 = x and Xn 0 = xn. By Lemma 17.3, the process X is a.s. continuous at t. Thus, (iv) yields Xn t d →Xt, and the desired convergence follows.
(i) – (iii) ⇒(iv): Assume (i) – (iii), and let Xn 0 d →X0. To obtain Xn fd − →X, it is enough to show that as n →∞, for any f0, . . . , fm ∈C and 0 = t0 < t1 · · · tm, E Π k≤m fk(Xn tk) →E Π k≤m fk(Xtk).
(20) This holds by hypothesis when m = 0. Proceeding by induction, we may use the Markov property to rewrite (20) in the form E Π k<m fk(Xn tk) · T n hmfm(Xn tm−1) →E Π k<m fk(Xtk) · Thmfm(Xtm−1), (21) where hm = tm −tm−1. Since (ii) implies T n hmfm →Thmfm, it is equivalent to prove (21) with T n hm replaced by Thm. The resulting condition is of the form (20) with m replaced by m −1. This completes the induction and shows that Xn fd − →X.
To strengthen the convergence to Xn d →X, it suffices by Theorems 23.9 and 23.11 to show that ρ(Xn τn, Xn τn+hn) P →0 for any finite optional times τn and positive constants hn →0.
By the strong Markov property, we may prove instead that ρ(Xn 0 , Xn hn) P →0 under any initial distributions νn. By the compactness of S and Theorem 23.2, we may then assume that νn w →ν for some probability measure ν. Fixing any f, g ∈C and noting that T n hng →g by (iii), we get E f(Xn 0 ) g(Xn hn) = E fT n hn g(Xn 0 ) →E fg(X0), where L(X0) = ν. Then (Xn 0 , Xn hn) d →(X0, X0) as before, and in particular ρ(Xn 0 , Xn hn) d →ρ(X0, X0) = 0, proving (iv).
2 From the last theorem and its proof, we may derive a similar approximation of discrete-time Markov chains. This extends the approximations of random walks in Corollary 16.17 and Theorem 23.14.
Theorem 17.28 (approximation of Markov chains) Let Y 1, Y 2, . . . be dis-crete-time Markov chains in S with transition operators U1, U2, . . . , and let X be a Feller process in S with semi-group (Tt) and generator A. Fix a core D for A, and let 0 < hn →0. Then conditions (i)−(iv) of Theorem 17.25 remain equivalent for the operators and processes An = h−1 n (Un −I), Tn,t = U [t/hn] n , Xn t = Y n [t/hn].
Proof, (i) ⇔(iv): Let N be an independent, unit-rate Poisson process, and note that the processes ˜ Xn t = Y n ◦Nt/hn are pseudo-Poisson with generators An. Then Theorem 17.25 yields (i) ⇔(iv) with Xn replaced by ˜ Xn. By the 17. Feller Processes and Semi-groups 389 strong law of large numbers for N, together with Theorem 5.29, we further note that (iv) holds simultaneously for the processes Xn and ˜ Xn.
(iv) ⇒(iii): Since X is a.s. continuous at fixed times, (iv) yields Xn tn d →Xt whenever tn →t, and the processes Xn and X start at fixed points xn →x in ˆ S. Hence, Tn,tnf(xn) →Ttf(x) for any f ∈ˆ C, and (iii) follows.
(iii) ⇒(ii): Trivial.
(ii) ⇒(i): As in the previous proof, we need to show that ˜ Rλ ng →Rλg for any λ > 0 and g ∈C0, where ˜ Rλ n = (λ −An)−1. Since (ii) yields Rλ ng →Rλg with Rλ n = e−λt Tn,t dt, it suffices to prove that (Rλ n −˜ Rλ n)g →0. Then note that λRλ ng −λ ˜ Rλ ng = Eg(Y n κn−1) −Eg(Y n ˜ κn−1), where the random variables κn and ˜ κn are independent of Y n and geometri-cally distributed with parameters pn = 1 −e−λhn and ˜ pn = λhn(1 + λhn)−1, respectively. Since pn ∼˜ pn, we have ∥L(κn) −L(˜ κn)∥→0, and the desired convergence follows by Fubini’s theorem.
2 We finally show that canonical Feller processes and their induced filtrations are quasi-left continuous.
Proposition 17.29 (ql-continuity of Feller processes, Blumenthal, Meyer) Let X be a canonical Feller process with arbitrary initial distribution, and fix an optional time τ. Then these conditions are equivalent: (i) τ is predictable, (ii) τ is accessible, (iii) Xτ−= Xτ a.s. on {τ < ∞}.
In particular we see that, when X is a.s. continuous, every optional time is predictable.
Proof, (ii) ⇒(iii): By Proposition 10.4, we may take τ to be finite and predictable. Now fix an announcing sequence (τn) and a function f ∈C0. By the strong Markov property, we get for any h > 0 E f(Xτn) −f(Xτn+h) 2 = E f 2 −2f Thf + Thf 2 (Xτn) ≤ f 2 −2f Thf + Thf 2 ≤2∥f∥ f −Thf + f 2 −Thf 2 .
Letting n →∞and then h ↓0, and using dominated convergence on the left and strong continuity on the right, we see that E{f(Xτ−) −f(Xτ)}2 = 0, which implies f(Xτ−) = f(Xτ) a.s. Applying this to a separating sequence of functions f1, f2, . . . ∈C0, we obtain Xτ−= Xτ a.s.
(iii) ⇒(i): By (iii) and Theorem 17.20, we have ΔMτ = 0 a.s. on {τ < ∞} for every martingale M, and so τ is predictable by Theorem 10.14.
390 Foundations of Modern Probability (i) ⇒(ii): Obvious.
2 Exercises 1. Show how the proofs of Theorems 17.4 and 17.6 can be simplified, if we assume (F3) instead of the weaker condition (F2).
2. Consider a pseudo-Poisson process X on S with rate kernel α. Give conditions ensuring X to be Feller.
3. Verify the resolvent equation (2), and conclude that the range of Rλ is inde-pendent of λ.
4. Show that a Feller semi-group (Tt) is uniquely determined by the resolvent operator Rλ for a fixed λ > 0. Interpret the result probabilistically in terms of an independent, exponentially distributed random variable with mean λ−1. (Hint: Use Theorem 17.4 and Lemma 17.5.) 5. Consider a discrete-time Markov process in S with transition operator T, and let τ be an independent random variable with a fixed geometric distribution. Show that T is uniquely determined by Exf(Xτ) for arbitrary x ∈S and f ≥0. (Hint: Apply the preceding result to the associated pseudo-Poisson process.) 6. Give a probabilistic description of the Yosida approximation T λ t in terms the original process X and two independent Poisson processes with rate λ.
7. Given a Feller diffusion semi-group, write the differential equation in Theorem 17.6 (ii), for suitable f, as a PDE for the function Ttf(x) on R+ × Rd. Also show that the backward equation of Theorem 13.9 is a special case of the same equation.
8. Consider a Feller process X and an independent subordinator T. Show that Y = X ◦T is again Markov, and that Y is L´ evy whenever this is true for X. If both T and X are stable, then so is Y . Find the relationship between the transition semi-groups, respectively between the indices of stability.
9. Consider a Feller process X and an independent renewal process τ0, τ1, . . . .
Show that Yn = Xτn is a discrete-time Markov process, and express its transition kernel in terms of the transition semi-group of X. Also show that Yt = X ◦τ[t] may fail to be Markov, even when (τn) is Poisson.
10. Let X, Y be independent Feller processes in S, T with generators A, B. Show that (X, Y ) is a Feller process in S ×T with generator extending ˜ A+ ˜ B, where ˜ A, ˜ B denote the natural extensions of A, B to C0(S × T).
11. Consider in S a Feller process with generator A and a pseudo-Poisson process with generator B. Construct a Markov process with generator A + B.
12.
Use Theorem 17.23 to show that the generator of Brownian motion in R extends A = 1 2Δ, on the set D of functions f ∈C2 0 with Af ∈C0.
13. Let Rλ be the λ-resolvent of Brownian motion in R. For any f ∈C0, put h = Rλf, and show by direct computation that λh−1 2h′′ = f. Conclude by Theorem 17.4 that 1 2Δ with domain D, defined as above, extends the generator A. Thus, A = 1 2Δ by the preceding exercise or by Lemma 17.12.
14. Show that if A is a bounded generator on C0, then the associated Markov process is pseudo-Poisson. (Hint: Note as in Theorem 17.11 that A satisfies the 17. Feller Processes and Semi-groups 391 positive-maximum principle. Next use Riesz’ representation theorem to express A in terms of bounded kernels, and show that A has the form of Proposition 17.2.) 15. Let X, Xn be processes as in Theorem 23.14. Show that if Xn t d →Xt for all t > 0, then Xn d →X in DR+,Rd, and compare with the stated theorem. Also prove a corresponding result for a sequence of L´ evy processes Xn. (Hint: Use Theorems 17.28 and 17.25, respectively.) VI. Stochastic Calculus and Applications Stochastic calculus is rightfully recognized as one of the central areas of modern probability. In Chapter 18 we develop the classical theory of Itˆ o inte-gration with respect to continuous semi-martingales, based on a detailed study of the quadratic variation process. Chapter 19 deals with some basic applica-tions to Brownian motion and related processes, including various martingale characterizations and transformations of martingales based on a random time change or a change of probability measure. In Chapter 20, the integration theory is extended to the more subtle case of semi-martingales with jump dis-continuities, and in the final Chapter 21 we develop some basic aspects of the Malliavin calculus for functionals on Wiener space. The material of the first two chapters is absolutely fundamental, whereas the remaining chapters are more advanced and could be deferred to a later stage of studies.
−−− 18. Itˆ o integration and quadratic variation. After a detailed study of the quadratic variation and covariation processes for continuous local martin-gales, we introduce the stochastic integral via an ingenious martingale charac-terization. This leads easily to some basic continuity and iteration properties, and to the fundamental Itˆ o formula, showing how continuous semi-martingales are transformed by smooth mappings.
19. Continuous martingales and Brownian motion. Using stochastic calculus, we derive some basic properties of Brownian motion and continuous local martingales. Thus, we show how the latter can be reduced to Brownian motions by a random time change, and how the drift of a continuous semi-martingale can be removed by a suitable change of measure. We further derive some integral representations of Brownian functionals and continuous martin-gales.
20. Semi-martingales and stochastic integration. Here we extend the stochastic calculus for continuous integrators to the context of semi-martingales with jump discontinuities. Important applications include some general decom-positions of semi-martingales, the Burkholder-type inequalities, the Bichteler– Dellacherie characterization of stochastic integrators, and a representation of the Dol´ eans exponential.
21. Malliavin calculus. Here we study the Malliavin derivative D and its dual D∗, along with the Ornstein–Uhlenbeck generator L.
Those three operators are closely related and admit simple representations in terms of mul-tiple Wiener–Itˆ o integrals. We further explain in what sense D∗extends the Itˆ o integral, derive a differentiation property for the latter, and establish the Clark–Ocone representation of Brownian functionals. We also indicate how the theory can be used to prove the existence and smoothness of densities.
393 Chapter 18 Itˆ o Integration and Quadratic Variation Local and finite-variation martingales, completeness, covariation, con-tinuity, norm comparison, uniform integrability, Cauchy-type inequali-ties, martingale integral, semi-martingales, continuity, dominated con-vergence, chain rule, optional stopping, integration by parts, approxi-mation of covariation, Itˆ o’s formula, local integral, conformal mapping, Fisk–Stratonovich integral, continuity and approximation, random time change, dependence on parameter, functional representation Here we initiate the study of stochastic calculus, arguably the most powerful tool of modern probability, with applications to a broad variety of subjects throughout the subject.
For the moment, we may only mention the time-change reduction and integral representation of continuous local martingales in Chapter 19, the Girsanov theory for removal of drift in the same chapter, the predictable transformations in Chapter 27, the construction of local time in Chapter 29, and the stochastic differential equations in Chapters 32–33.
In this chapter we consider only stochastic integration with respect to con-tinuous semi-martingales, whereas the more subtle case of integrators with possible jump discontinuities is postponed until Chapter 20, and an extension to non-anticipating integrators appears in Chapter 21. The theory of stochas-tic integration is inextricably linked to the notions of quadratic variation and covariation, already encountered in Chapter 14 in the special case of Brownian motion, and together the two notions are developed into a theory of amazing power and beauty.
We begin with a construction of the covariation process [M, N] of a pair of continuous local martingales M and N, which requires an elementary approxi-mation and completeness argument. The processes M ∗and [M] = [M, M] will be related by some useful continuity and norm relations, including the powerful BDG inequalities.
Given the quadratic variation [M], we may next construct the stochastic integral V dM for suitable progressive processes V , using a simple Hilbert space argument.
Combining with the ordinary Stieltjes integral V dA for processes A of locally finite variation, we may finally extend the integral to arbitrary continuous semi-martingales X = M + A. The continuity properties of quadratic variation carry over to the stochastic integral, and in conjunction with the obvious linearity they characterize the integration.
395 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 396 Foundations of Modern Probability A key result for applications is Itˆ o’s formula, which shows how semi-mart-ingales are transformed under smooth mappings. Though the present substi-tution rule differs from the elementary result for Stieltjes integrals, the two formulas can be brought into agreement by a simple modification of the in-tegral. We conclude the chapter with some special topics of importance for applications, such as the transformation of stochastic integrals under a ran-dom time-change, and the integration of processes depending on a parameter.
The present material may be thought of as a continuation of the martingale theory of Chapters 9–10. Though no results for Brownian motion are used explicitly in this chapter, the existence of the Brownian quadratic variation in Chapter 14 may serve as a motivation. We also need the representation and measurability of limits obtained in Chapter 5.
Throughout the chapter we take F = (Ft) to be a right-continuous and complete filtration on R+. A process M is said to be a local martingale, if it is adapted to F and such that the stopped and centered processes M τn −M0 are true martingales for some optional times τn ↑∞. By a similar localiza-tion we may define local L2-martingales, locally bounded martingales, locally integrable processes, etc. The required optional times τn are said to form a localizing sequence.
Any continuous local martingale may clearly be reduced by localization to a sequence of bounded, continuous martingales. Conversely, we see by dominated convergence that every bounded local martingale is a true martingale. The following useful result may be less obvious.
Lemma 18.1 (local martingales) For any process M and optional times τn ↑ ∞, these conditions are equivalent: (i) M is a local martingale, (ii) M τn is a local martingale for every n.
Proof, (i) ⇒(ii): If M is a local martingale with localizing sequence (σn) and τ is an arbitrary optional time, then the processes (M τ)σn = (M σn)τ are true martingales. Thus, M τ is again a local martingale with localizing sequence (σn).
(ii) ⇒(i): Suppose that each process M τn is a local martingale with local-izing sequence (σn k). Since σn k →∞a.s. for each n, we may choose some indices kn with P σn kn < τn ∧n ≤2−n, n ∈N.
Writing τ ′ n = τn ∧σn kn, we get τ ′ n →∞a.s. by the Borel–Cantelli lemma, and so the optional times τ ′′ n = infm≥n τ ′ m satisfy τ ′′ n ↑∞a.s. It remains to note that the processes M τ ′′ n = (M τ ′ n)τ ′′ n are true martingales.
2 Next we show that every continuous martingale of finite variation is a.s.
constant. An extension appears as Lemma 10.11.
18. Itˆ o Integration and Quadratic Variation 397 Proposition 18.2 (finite-variation martingales) Let M be a continuous local martingale. Then M has locally finite variation ⇔ M is a.s. constant.
Proof: By localization we may reduce to the case where M0 = 0 and M has bounded variation. In fact, let Vt denote the total variation of M on the interval [0, t], and note that V is continuous and adapted. For each n ∈N, we may then introduce the optional time τn = inf{t ≥0; Vt = n}, and note that M τn −M0 is a continuous martingale with total variation bounded by n.
We further note that τn →∞, and that if M τn = M0 a.s. for each n, then even M = M0 a.s. In the reduced case, fix any t > 0, write tn,k = kt/n, and conclude from the continuity of M that a.s.
ζn ≡Σ k≤n Mtn,k −Mtn,k−1 2 ≤Vt max k≤n Mtn,k −Mtn,k−1 →0.
Since ζn ≤V 2 t , which is bounded by a constant, it follows by the martingale property and dominated convergence that EM 2 t = Eζn →0, and so Mt = 0 a.s. for each t > 0.
2 Our construction of stochastic integrals depends on the quadratic variation and covariation processes, which therefore need to be constructed first. Here we use a direct approach, which has the further advantage of giving some insight into the nature of the basic integration-by-parts formula in Proposition 18.16.
An alternative but less elementary approach would be to use the Doob–Meyer decomposition in Chapter 10.
The construction utilizes predictable step processes of the form Vt = Σ k ξk1{t > τk} = Σ k ηk1(τk,τk+1](t), t ≥0, (1) where the τn are optional times with τn ↑∞a.s., and the ξk and ηk are Fτk-measurable random variables for all k ∈N. For any process X we consider the elementary integral process V · X, given as in Chapter 9 by (V · X)t ≡ t 0 V dX = Σ k ξk Xt −Xτk t = Σ k ηk Xt τk+1 −Xt τk , (2) where the series converge since they have only finitely many non-zero terms.
Note that (V ·X)0 = 0, and that V ·X inherits the possible continuity properties of X. It is further useful to note that V · X = V · (X −X0). The following simple estimate will be needed later.
Lemma 18.3 (martingale preservation and L2-bound) For any continuous L2-martingale M with M0 = 0 and predictable step process V with |V | ≤1, the process V · M is again an L2-martingale satisfying E(V · M)2 t ≤EM 2 t , t ≥0.
398 Foundations of Modern Probability Proof: First suppose that the sum in (1) has only finitely many non-zero terms. Then V ·M is a martingale by Corollary 9.15, and the L2-bound follows by the computation E(V · M)2 t = E Σ k η2 k M t τk+1 −M t τk 2 ≤E Σ k M t τk+1 −M t τk 2 = E M 2 t .
The estimate extends to the general case by Fatou’s lemma, and the martingale property then extends by uniform integrability.
2 Now consider the space M2 of all L2-bounded, continuous martingales M with M0 = 0, equipped with the norm ∥M∥= ∥M∞∥2. Recall that ∥M ∗∥2 ≤ 2 ∥M∥by Proposition 9.17.
Lemma 18.4 (completeness) Let M2 be the space of L2-bounded, continuous martingales with M0 = 0 and norm ∥M∥= ∥M∞∥2. Then M2 is complete and hence a Hilbert space.
Proof: For any Cauchy sequence M 1, M 2, . . . in M2, the sequence (M n ∞) is Cauchy in L2 and thus converges toward some ξ ∈L2. Introduce the L2-martingale Mt = E(ξ | Ft), t ≥0, and note that M∞= ξ a.s. since ξ is F∞-measurable. Hence, (M n −M)∗ 2 ≤2 ∥M n −M∥ = 2 M n ∞−M∞ 2 →0, and so ∥M n −M∥→0. Moreover, (M n −M)∗→0 a.s. along a sub-sequence, which implies that M is a.s. continuous with M0 = 0.
2 We may now establish the existence and basic properties of the quadratic variation and covariation processes [M] and [M, N]. Extensions to possibly discontinuous processes are considered in Chapter 20.
Theorem 18.5 (covariation) For any continuous local martingales M, N, there exists a continuous process [M, N] with locally finite variation and [M, N]0 = 0, such that a.s.
(i) MN −[M, N] is a local martingale, (ii) [M, N] = [N, M], (iii) [a M1 + b M2, N] = a [M1, N] + b [M2, N], (iv) [M, N] = [M −M0, N], (v) [M] = [M, M] is non-decreasing, (vi) [M τ, N] = [M τ, N τ] = [M, N]τ for any optional time τ.
The process [M, N] is determined a.s. uniquely by (i).
18. Itˆ o Integration and Quadratic Variation 399 Proof: The a.s. uniqueness of [M, N] follows from Proposition 18.2, and (ii)−(iii) are immediate consequences. If [M, N] exists with the stated proper-ties and τ is an optional time, then Lemma 18.1 shows that M τN τ −[M, N]τ is a local martingale, as is the process M τ(N −N τ) by Corollary 9.15. Hence, even M τN −[M, N]τ is a local martingale, and (vi) follows.
Furthermore, MN −(M −M0)N = M0N is a local martingale, which yields (iv) whenever either side exists. If both [M + N] and [M −N] exist, then 4MN − [M + N] −[M −N] = (M + N)2 −[M + N] − (M −N)2 −[M −N] is a local martingale, and so we may take [M, N] = ([M + N] −[M −N])/4.
It is then enough to prove the existence of [M] when M0 = 0.
First let M be bounded. For each n ∈N, let τ n 0 = 0, and define recursively τ n k+1 = inf t > τ n k ; |Mt −Mτ n k | = 2−n , k ≥0.
Note that τ n k →∞as k →∞for fixed n. Introduce the processes V n t = Σ k Mτ n k 1 t ∈(τ n k , τ n k+1] , Qn t = Σ k Mt∧τ n k −Mt∧τ n k−1 2.
The V n are bounded, predictable step processes, and clearly M 2 t = 2 (V n · M)t + Qn t , t ≥0.
(3) By Lemma 18.3, the integrals V n ·M are continuous L2-martingales, and since |V n −M| ≤2−n for each n, we have for m ≤n V m · M −V n · M = (V m −V n) · M ≤2−m+1∥M∥.
Hence, Lemma 18.4 yields a continuous martingale N with (V n ·M −N)∗ P →0.
The process [M] = M 2 −2N is again continuous, and by (3) we have Qn −[M] ∗= 2 N −V n · M ∗ P →0.
In particular, [M] is a.s. non-decreasing on the random time set T = {τ n k ; n, k ∈N}, which extends by continuity to the closure ¯ T. Also note that [M] is constant on each interval in ¯ T c, since this is true for M and hence also for every Qn. Thus, (v) follows.
In the unbounded case, define τn = inf{t > 0; |Mt|=n}, n ∈N.
The processes [M τn] exist as before, and clearly [M τm]τm = [M τn]τm a.s. for all m < n. Hence, [M τm] = [M τn] a.s. on [0, τm], and since τn →∞, there exists a non-decreasing, continuous, and adapted process [M], such that [M] = [M τn] 400 Foundations of Modern Probability a.s. on [0, τn] for every n. Here (M τn)2 −[M]τn is a local martingale for every n, and so M 2 −[M] is a local martingale by Lemma 18.1.
2 Next we establish a basic continuity property.
Proposition 18.6 (continuity) For any continuous local martingales Mn start-ing at 0, we have M ∗ n P →0 ⇔ [Mn]∞ P →0.
Proof: First let M ∗ n P →0.
Fix any ε > 0, and define τn = inf{t ≥0; |Mn(t)| > ε}, n ∈N. Write Nn = M 2 n −[Mn], and note that N τn n is a true mar-tingale on ¯ R+. In particular, E[Mn]τn ≤ε2, and so by Chebyshev’s inequality P [Mn]∞> ε ≤P{τn < ∞} + ε−1E[Mn]τn ≤P{M ∗ n > ε} + ε.
Here the right-hand side tends to zero as n →∞and then ε →0, which shows that [Mn]∞ P →0.
Conversely, let [Mn]∞ P →0. By a localization argument and Fatou’s lemma, we see that M is L2-bounded. Now proceed as before to get M ∗ n P →0.
2 Next we prove a pair of basic norm inequalities1 involving the quadratic variation, known as the BDG inequalities. Partial extensions to discontinuous martingales appear in Theorem 20.12.
Theorem 18.7 (norm comparison, Burkholder, Millar, Gundy, Novikov) For a continuous local martingale M with M0 = 0, we have2 EM ∗p ≍E[M]p/2 ∞, p > 0.
Proof: By optional stopping, we may take M and [M] to be bounded. Write M ′ = M −M τ with τ = inf{t; M 2 t = r}, and define N = (M ′)2 −[M ′]. By Corollary 9.31, we have for any r > 0 and c ∈(0, 2−p) P M ∗2 ≥4r −P [M]∞≥c r ≤P M ∗2 ≥4r, [M]∞< c r ≤P N > −c r, suptNt > r −c r ≤c P{N ∗> 0} ≤c P{M ∗2 ≥r}.
Multiplying by (p/2) rp/2−1 and integrating over R+, we get by Lemma 4.4 2−pEM ∗p −c−p/2E[M]p/2 ∞≤c EM ∗p, which gives the bound < ⌢with domination constant cp = c−p/2/(2−p −c).
1Recall that f ≍g means f ≤cg and g ≤cf for some constant c > 0.
2The domination constants are understood to depend only on p.
18. Itˆ o Integration and Quadratic Variation 401 Defining N as before with τ = inf{t; [M]t = r}, we get for any r > 0 and c ∈(0, 2−p/2−2) P [M]∞≥2 r −P M ∗2 ≥c r ≤P [M]∞≥2 r, M ∗2 < c r ≤P N < 4 c r, inftNt < 4 c r −r ≤4c P [M]∞≥r .
Integrating as before yields 2−p/2E[M]p/2 ∞−c−p/2EM ∗p ≤4c E[M]p/2 ∞, and the bound > ⌢follows with domination constant cp = c−p/2/(2−p/2−4 c). 2 We often need to certify that a given local martingale is a true martingale.
The last theorem yields a useful criterion.
Corollary 18.8 (uniform integrability) Let M be a continuous local martin-gale. Then M is a uniformly integrable martingale, whenever E |M0| + [M]1/2 ∞ < ∞.
Proof: By Theorem 18.7 we have EM ∗< ∞, and the martingale property follows by dominated convergence.
2 The basic properties of [M, N] suggest that we think of the covariation process as a kind of inner product.
A further justification is given by the following Cauchy-type inequalities.
Proposition 18.9 (Cauchy-type inequalities, Courr ege) For any continuous local martingales M, N and product-measurable processes U, V , we have a.s.
(i) [M, N] ≤ d [M, N] ≤ [M] [N] 1/2, (ii) t 0 UV d [M, N] ≤ U 2 · [M] t V 2 · [N] t 1/2, t ≥0.
Proof: (i) By Theorem 18.5 (iii) and (v), we have a.s. for any a, b ∈R and t > 0 0 ≤[aM + bN]t = a2 [M]t + 2ab [M, N]t + b2 [N]t.
By continuity we may choose a common exceptional null set for all a, b, and so [M, N]2 t ≤[M]t[N]t a.s. Applying this inequality to the processes M −M s and N −N s for any s < t, we obtain a.s.
[M, N]t −[M, N]s ≤ [M]t −[M]s [N]t −[N]s 1/2, (4) and by continuity we may again choose a common null set. Now let 0 = t0 < t1 < · · · < tn = t be arbitrary, and conclude from (4) and the classical Cauchy inequality that 402 Foundations of Modern Probability | [M, N]t| ≤Σ k [M, N]tk −[M, N]tk−1 ≤ [M]t [N]t 1/2.
It remains to take the supremum over all partitions of [0, t].
(ii) Writing dμ = d[M], dν = d[N], and dρ = |d[M, N]|, we conclude from (i) that (ρI)2 ≤(μI)(νI) a.s. for every interval I.
By continuity, we may choose the exceptional null set A to be independent of I. Letting G ⊂R+ be open with connected components Ik and using Cauchy’s inequality, we get on Ac ρG = Σ k ρIk ≤Σ k μIk νIk 1/2 ≤ Σ j μIj Σ k νIk 1/2 = μG · νG 1/2.
By Lemma 1.36, the last relation extends to any B ∈BR+.
Now fix any simple, measurable functions f = Σk ak1Bk and g = Σk bk1Bk.
Using Cauchy’s inequality again, we obtain on Ac ρ|fg| ≤Σ k |akbk| ρBk ≤Σ k |akbk| μBk νBk 1/2 ≤ Σ j a2 j μBj Σ k b2 k νBk 1/2 ≤ μf 2 νg21/2, which extends by monotone convergence to any measurable functions f, g on R+. By Lemma 1.35, we may choose f(t) = Ut(ω) and g(t) = Vt(ω) for any fixed ω ∈Ac.
2 Let E be the class of bounded, predictable step processes with jumps at finitely many fixed times. To motivate the construction of general stochastic integrals, and for subsequent needs, we derive a basic identity for elementary integrals.
Lemma 18.10 (covariation identity) For any continuous local martingales M, N and processes U, V ∈E, the integrals U·M and V ·N are again continuous local martingales, and we have U · M, V · N = ( UV ) · [M, N] a.s.
(5) Proof: We may clearly take M0 = N0 = 0. The first assertion follows by localization from Lemma 18.3. To prove (5), let Ut = Σ k≤n ξk1(tk,tk+1](t), where ξk is bounded and Ftk-measurable for each k. By localization we may take M, N, [M, N] to be bounded, so that M, N, and MN −[M, N] are martingales on ¯ R+. Then E ( U · M)∞N∞= E Σ j ξj Mtj+1 −Mtj Σ k Ntk+1 −Ntk = E Σ k ξk Mtk+1Ntk+1 −MtkNtk = E Σ k ξk [M, N]tk+1 −[M, N]tk = E U · [M, N] ∞.
18. Itˆ o Integration and Quadratic Variation 403 Replacing M, N by M τ, N τ for an arbitrary optional time τ, we get E ( U · M)τNτ = E( U · M τ)∞N τ ∞ = E U · [M τ, N τ] ∞ = E U · [M, N] τ.
By Lemma 9.14, the process ( U · M)N −U · [M, N] is then a martingale, and so [ U · M, N] = U · [M, N] a.s. The general formula follows by iteration.
2 To extend the stochastic integral V ·M to more general processes V , we take (5) to be the characteristic property. For any continuous local martingale M, let L(M) be the class of all progressive processes V , such that (V 2 · [M])t < ∞ a.s. for every t > 0.
Theorem 18.11 (martingale integral, Itˆ o, Kunita & Watanabe) For any con-tinuous local martingale M and process V ∈L(M), there exists an a.s. unique continuous local martingale V · M with (V · M)0 = 0, such that for any con-tinuous local martingale N, [V · M, N] = V · [M, N] a.s.
Proof: To prove the uniqueness, let M ′, M ′′ be continuous local martingales with M ′ 0 = M ′′ 0 = 0, such that [M ′, N] = [M ′′, N] = V · [M, N] a.s.
for all continuous local martingales N. Then by linearity [M ′ −M ′′, N] = 0 a.s. Taking N = M ′ −M ′′ gives [M ′ −M ′′] = 0 a.s., and so M ′ = M ′′ a.s. by Proposition 18.6.
To prove the existence, we first assume ∥V ∥2 M = E V 2 · [M] ∞< ∞. Since V is measurable, we get by Proposition 18.9 and Cauchy’s inequality E V · [M, N] ∞ ≤∥V ∥M∥N∥, N ∈M2.
The mapping N →E V · [M, N] ∞is then a continuous linear functional on M2, and so Lemma 18.4 yields an element V · M ∈M2 with E V · [M, N] ∞= E (V · M)∞N∞, N ∈M2.
Replacing N by N τ for an arbitrary optional time τ and using Theorem 18.5 and optional sampling, we get E V · [M, N] τ = E V · [M, N]τ ∞ = E V · [M, N τ] ∞ = E(V · M)∞Nτ = E(V · M)τNτ.
404 Foundations of Modern Probability Since V is progressive, Lemma 9.14 shows that V · [M, N] −(V · M)N is a martingale, which implies [V · M, N] = V · [M, N] a.s. The last relation extends by localization to any continuous local martingale N.
For general V , define τn = inf t > 0; V 2 · [M] t = n , n ∈N.
By the previous argument, there exist some continuous local martingales V · M τn such that, for any continuous local martingale N, V · M τn, N = V · [M τn, N] a.s., n ∈N.
(6) For m < n it follows that (V · M τn)τm satisfies the corresponding relation with [M τm, N], and so (V · M τn)τm = V · M τm a.s. Hence, there exists a continuous process V · M with (V · M)τn = V · M τn a.s. for all n, and Lemma 18.1 shows that V ·M is again a local martingale. Finally, (6) yields [V ·M, N] = V ·[M, N] a.s. on [0, τn] for each n, and so the same relation holds on R+.
2 By Lemma 18.10, the stochastic integral V · M of Theorem 18.11 extends the previously defined elementary integral. It is also clear that V · M is a.s.
bilinear in the pair (V, M), with the following basic continuity property.
Lemma 18.12 (continuity) For any continuous local martingales Mn and processes Vn ∈L(Mn), we have (Vn · Mn)∗ P →0 ⇔ V 2 n · [Mn] ∞ P →0.
Proof: Note that [Vn · Mn] = V 2 n · [Mn], and use Proposition 18.6.
2 Before continuing our study of basic properties, we extend the stochastic integral to a larger class of integrators. By a continuous semi-martingale we mean a process X = M + A, where M is a continuous local martingale and A is a continuous, adapted process with locally finite variation and A0 = 0.
The decomposition X = M + A is then a.s. unique by Proposition 18.2, and it is often referred to as the canonical decomposition of X. A continuous semi-martingale in Rd is defined as a process X = (X1, . . . , Xd), where X1, . . . , Xd are continuous semi-martingales in R.
Let L(A) be the class of progressive processes V , such that the processes (V · A)t = t 0 V dA exist as elementary Stieltjes integrals. For any continuous semi-martingale X = M + A, we write L(X) = L(M) ∩L(A), and define the X-integral of a process V ∈L(X) as the sum V ·X = V ·M +V ·A, which makes V ·X a continuous semi-martingale with canonical decomposition V ·M +V ·A.
For progressive processes V , it is further clear that V ∈L(X) ⇔ V 2 ∈L([M]), V ∈L(A).
Lemma 18.12 yields the following stochastic version of the dominated con-vergence theorem.
18. Itˆ o Integration and Quadratic Variation 405 Corollary 18.13 (dominated convergence) For any continuous semi-martin-gale X and processes U, V, V1, V2, . . . ∈L(X), we have Vn →V |Vn| ≤U ⇒ Vn · X −V · X ∗ t P →0, t ≥0.
Proof: Let X = M + A. Since U ∈L(X), we have U 2 ∈L([M]) and U ∈L(A).
By dominated convergence for ordinary Stieltjes integrals, we obtain a.s. (Vn −V )2 · [M] t + Vn · A −V · A ∗ t →0.
Here the former convergence implies (Vn · M −V · M)∗ t P →0 by Lemma 18.12, and the assertion follows.
2 We further extend the elementary chain rule of Lemma 1.25 to stochastic integrals.
Lemma 18.14 (chain rule) For any continuous semi-martingale X and pro-gressive processes U, V with V ∈L(X), we have (i) U ∈L(V · X) ⇔UV ∈L(X), and then (ii) U · (V · X) = ( UV ) · X a.s.
Proof: (i) Letting X = M + A, we have U ∈L(V · X) ⇔U 2 ∈L([ V · M]), U ∈L(V · A), UV ∈L(X) ⇔(UV )2 ∈L([M]), UV ∈L(A).
Since [ V · M] = V 2 · [M], the two pairs of conditions are equivalent.
(ii) The relation U · (V · A) = ( UV ) · A is elementary. To see that even U · (V · M) = (UV ) · M a.s., consider any continuous local martingale N, and note that ( UV ) · M, N = ( UV ) · [M, N] = U · V · [M, N] = U · [ V · M, N] = U · ( V · M), N .
2 Next we examine the behavior under optional stopping.
Lemma 18.15 (optional stopping) For any continuous semi-martingale X, process V ∈L(X), and optional time τ, we have a.s.
(V · X)τ = V · Xτ = (V 1[0,τ]) · X.
406 Foundations of Modern Probability Proof: The relations being obvious for ordinary Stieltjes integrals, we may take X = M to be a continuous local martingale. Then (V ·M)τ is a continuous local martingale starting at 0, and we have (V · M)τ, N = V · M, N τ = V · [M, N τ] = V · [M, N]τ = (V 1[0,τ]) · [M, N].
Thus, (V · M)τ satisfies the conditions characterizing the integrals V · M τ and (V 1[0,τ]) · M.
2 We may extend the definitions of quadratic variation and covariation to any continuous semi-martingales X = M +A and Y = N +B by putting [X] = [M] and [X, Y ] = [M, N]. As a crucial step toward a general substitution rule, we show how the covariation process can be expressed in terms of stochastic integrals.
For martingales X and Y , the result is implicit in the proof of Theorem 18.5.
Theorem 18.16 (integration by parts) For any continuous semi-martingales X, Y , we have a.s.
XY = X0Y0 + X · Y + Y · X + [X, Y ].
(7) Proof: We may take X = Y , since the general result will then follow by polarization. First let X = M ∈M2, and define V n and Qn as in the proof of Theorem 18.5. Then V n →M and |V n t | ≤M ∗ t < ∞, and so Corollary 18.13 yields (V n · M)t P →(M · M)t for each t ≥0. Now (7) follows as we let n →∞ in the relation M 2 = 2 V n · M + Qn, and it extends by localization to general continuous local martingales M with M0 = 0. If instead X = A, then (7) reduces to A2 = 2 A · A, which holds by Fubini’s theorem.
For general X we may take X0 = 0, since the formula for general X0 will then follow by an easy computation from the result for X −X0. In this case, (7) reduces to X2 = 2 X · X + [M]. Subtracting the formulas for M 2 and A2, it remains to show that AM = A · M + M · A a.s. Then fix any t > 0, put tn k = (k/n)t, and introduce for s ∈(tn k−1, tn k], k, n ∈N, the processes An s = Atn k−1, M n s = Mtn k.
Note that At Mt = (An · M)t + (M n · A)t, n ∈N.
Here (An · M)t P →(A · M)t by Corollary 18.13, and (M n · A)t →(M · A)t by dominated convergence for ordinary Stieltjes integrals.
2 Our terminology is justified by the following result, which extends Theo-rem 14.9 for Brownian motion. It also shows that [X, Y ] is a.s. measurably determined3 by X and Y .
3This is remarkable, since it is defined by martingale properties that depend on both probability measure and filtration.
18. Itˆ o Integration and Quadratic Variation 407 Proposition 18.17 (approximation of covariation, Fisk) For any continuous semi-martingales X, Y on [0, t] and partitions 0 = tn 0 < tn 1 < · · · < tn kn = t, n ∈N, with maxk(tn k −tn k−1) →0, we have ζn ≡Σ k Xtn k −Xtn k−1 Ytn k −Ytn k−1 P →[X, Y ]t.
(8) Proof: We may clearly take X0 = Y0 = 0. Introduce for s ∈(tn k−1, tn k], k, n ∈N, the predictable step processes Xn s = Xtn k−1, Y n s = Ytn k−1, and note that XtYt = (Xn · Y )t + (Y n · X)t + ζn, n ∈N.
Since Xn →X and Y n →Y , and also (Xn)∗ t ≤X∗ t < ∞and (Y n)∗ t ≤X∗ t < ∞, we get by Corollary 18.13 and Theorem 18.16 ζn P →XtYt −(X · Y )t −(Y · X)t = [X, Y ]t.
2 We turn to a version of Itˆ o’s formula, arguably the most important formula of modern probability.4 It shows that the class of continuous semi-martingales is closed under smooth maps, and exhibits the canonical decomposition of the image process in terms of the components of the original process. Extended versions appear in Corollaries 18.19 and 18.20, as well as in Theorems 20.7 and 29.5.
Write Ck = Ck(Rd) for the class of k times continuously differentiable functions on Rd. When f ∈C2, we write ∂if and ∂2 ijf for the first and second order partial derivatives of f. Here and below, summation over repeated indices is understood.
Theorem 18.18 (substitution rule, Itˆ o) For any continuous semi-martingale X in Rd and function f ∈C2(Rd), we have a.s.
f(X) = f(X0) + ∂if(X) · Xi + 1 2 ∂2 ijf(X) · [Xi, Xj].
(9) This may also be written in differential form as d f(X) = ∂if(X) dXi + 1 2 ∂2 ijf(X) d [Xi, Xj].
(10) It is suggestive to write d [Xi, Xj] = dXi dXj, and think of (10) as a second order Taylor expansion.
If X has canonical decomposition M + A, we get the corresponding de-composition of f(X) by substituting M i + Ai for Xi on the right side of (9).
When M = 0, the last term vanishes, and (9) reduces to the familiar substitu-tion rule for ordinary Stieltjes integrals. In general, the appearance of this Itˆ o 4Possible contenders might include the representation of infinitely divisible distributions, the polynomial representation of multiple WI-integrals, and the formula for the generator of a continuous Feller process.
408 Foundations of Modern Probability correction term shows that the rules of ordinary calculus fail for the Itˆ o integral.
Proof of Theorem 18.18: For notational convenience, we may take d = 1, the general case being similar. Then fix a continuous semi-martingale X in R, and let C be the class of functions f ∈C2 satisfying (9), now written in the form f(X) = f(X0) + f ′(X) · X + 1 2 f ′′(X) · [X].
(11) The class C is clearly a linear subspace of C2 containing the functions f(x) ≡1 and f(x) ≡x. We shall prove that C is closed under multiplication, and hence contains all polynomials.
Then suppose that (11) holds for both f and g, so that F = f(X) and G = g(X) are continuous semi-martingales. Using the definition of the integral, along with Proposition 18.14 and Theorem 18.16, we get (fg)(X) −(fg)(X0) = FG −F0G0 = F · G + G · F + [F, G] = F · g′(X) · X + 1 2 g′′(X) · [X] + G · f ′(X) · X + 1 2 f ′′(X) · [X] + f ′(X) · X, g′(X) · X = fg′ + f ′g (X) · X + 1 2 fg′′ + 2f ′g′ + f ′′g (X) · [X] = (fg)′(X) · X + 1 2 (fg)′′(X) · [X].
Now let f ∈C2 be arbitrary. By Weierstrass’ approximation theorem5, we may choose some polynomials p1, p2, . . . , such that sup|x|≤c |pn(x)−f ′′(x)| →0 for every c > 0. Integrating the pn twice yields some polynomials fn satisfying sup |x|≤c fn(x) −f(x) ∨ f ′ n(x) −f ′(x) ∨ f ′′ n(x) −f ′′(x) →0, c > 0.
In particular, fn(Xt) →f(Xt) for each t > 0.
Letting X have canonical decomposition M + A and using dominated convergence for ordinary Stieltjes integrals, we get for any t ≥0 f ′ n(X) · A + 1 2 f ′′ n(X) · [X] t → f ′(X) · A + 1 2 f ′′(X) · [X] t.
Similarly, {f ′ n(X) −f ′(X)}2 · [M])t →0 for all t, and so by Lemma 18.12 f ′ n(X) · M t P → f ′(X) · M t, t ≥0.
Thus, formula (11) for the polynomials fn extends in the limit to the same relation for f.
2 We also need a local version of the last theorem, involving stochastic in-tegrals up to the first time ζD when X exits a given domain D ⊂Rd. For 5Any continuous function on [0, 1] admits a uniform approximation by polynomials.
18. Itˆ o Integration and Quadratic Variation 409 continuous and adapted X, the time ζD is clearly predictable, in the sense of being announced by some optional times τn ↑ζD with τn < ζD a.s. on {ζD > 0} for all n. Indeed, writing ρ for the Euclidean metric in Rd, we may choose τn = inf t ∈[0, n]; ρ(Xt, Dc) ≤n−1 , n ∈N.
(12) We say that X is a semi-martingale on [0, ζD), if the stopped process Xτn is a semi-martingale in the usual sense for every n ∈N. To define the co-variation processes [Xi, Xj] on the interval [0, ζD), we require [Xi, Xj]τn = [(Xi)τn, (Xj)τn] a.s. for every n. Stochastic integrals with respect to X1, . . . , Xd are defined on [0, ζD) in a similar way.
Corollary 18.19 (local Itˆ o formula) For any domain D ⊂Rd, let X be a continuous semi-martingale on [0, ζD). Then (9) holds a.s. on [0, ζD) for every f ∈C2(D).
Proof: Choose some fn ∈C2(Rd) with fn(x) = f(x) when ρ(x, Dc) ≥n−1.
Applying Theorem 18.18 to fn(Xτn) with τn as in (12), we get (9) on [0, τn].
Since n was arbitrary, the result extends to [0, ζD).
2 By a complex, continuous semi-martingale we mean a process of the form Z = X+iY , where X, Y are real, continuous semi-martingales. The bilinearity of the covariation process suggests that we define the quadratic variation of Z as [Z] = [Z, Z] = X + iY, X + iY = [X] + 2i [X, Y ] −[Y ].
Let L(Z) be the class of processes W = U + iV with U, V ∈L(X) ∩L(Y ). For such a process W, we define the integral by W · Z = (U + iV ) · (X + iY ) = U · X −V · Y + i U · Y + V · X .
Corollary 18.20 (conformal mapping) Let f be an analytic function on a domain D ⊂C. Then (9) holds for every continuous semi-martingale Z in D.
Proof: Writing f(x + iy) = g(x, y) + ih(x, y) for any x + iy ∈D, we get g′ 1 + ih′ 1 = f ′, g′ 2 + ih′ 2 = if ′, and so by iteration g′′ 11 + ih′′ 11 = f ′′, g′′ 12 + ih′′ 12 = if ′′, g′′ 22 + ih′′ 22 = −f ′′.
Now (9) follows for Z = X + iY , as we apply Corollary 18.19 to the semi-mar-tingale (X, Y ) and the functions g, h.
2 410 Foundations of Modern Probability Under suitable regularity conditions, we may modify the Itˆ o integral so that it will obey the rules of ordinary calculus. Then for any continuous semi-martingales X, Y , we define the Fisk–Stratonovich integral by t 0 X ◦dY = (X · Y )t + 1 2 [X, Y ]t, t ≥0, (13) or, in differential form X ◦dY = XdY + 1 2 d[X, Y ], where the first term on the right is an ordinary Itˆ o integral. The point of this modification is that the substitution rule simplifies to d f(X) = ∂if(X) ◦dXi, conforming with the chain rule of ordinary calculus. We may also prove a version for FS -integrals of the chain rule in Proposition 18.14.
Theorem 18.21 (modified Itˆ o integral, Fisk, Stratonovich) The integral in (13) satisfies the computational rules of elementary calculus. Thus, a.s., (i) for any continuous semi-martingale X in Rd and function f ∈C3(Rd), f(Xt) = f(X0) + t 0 ∂if(X) ◦dXi, t ≥0, (ii) for any real, continuous semi-martingales X, Y, Z, X ◦(Y ◦Z) = (XY ) ◦Z.
Proof: (i) By Itˆ o’s formula, ∂if(X) = ∂if(X0) + ∂2 ijf(X) · Xj + 1 2 ∂3 ijkf(X) · [Xj, Xk].
Using Itˆ o’s formula again together with (5) and (13), we get 0 ∂if(X) ◦dXi = ∂if(X) · Xi + 1 2 [∂if(X), Xi] = ∂if(X) · Xi + 1 2 ∂2 ijf(X) · [Xj, Xi] = f(X) −f(X0).
(ii) By Lemma 18.14 and integration by parts, X ◦(Y ◦Z) = X · (Y ◦Z) + 1 2 [X, Y ◦Z] = X · (Y · Z) + 1 2 X · [Y, Z] + 1 2 [X, Y · Z] + 1 4 X, [Y, Z] = XY · Z + 1 2 [XY, Z] = XY ◦Z.
2 The more convenient substitution rule of Corollary 18.21 comes at a high price: The FS -integral does not preserve the martingale property, and it re-quires even the integrand to be a continuous semi-martingale, which forces us to impose the stronger regularity constraint on the function f in the substitu-tion rule.
Our next task is to establish a basic uniqueness property, justifying our reference to the process V · M in Theorem 18.11 as an integral.
18. Itˆ o Integration and Quadratic Variation 411 Theorem 18.22 (continuity) The integral V ·M in Theorem 18.11 is the a.s.
unique linear extension of the elementary stochastic integral, such that for any t > 0, V 2 n · [M] t P →0 ⇒ (Vn · M)∗ t P →0.
This follows immediately from Lemmas 18.10 and 18.12, together with the following approximation of progressive processes by predictable step processes.
Lemma 18.23 (approximation) For any continuous semi-martingale X = M + A and process V ∈L(X), there exist some processes V1, V2, . . . ∈E, such that a.s., simultaneously for all t > 0, (Vn −V )2 · [M] t + (Vn −V ) · A ∗ t →0.
(14) Proof: We may take t = 1, since we can then combine the processes Vn for disjoint, finite intervals to construct an approximating sequence on R+. It is further enough to consider approximations in the sense of convergence in probability, since the a.s. versions will then follow for a suitable sub-sequence.
This allows us to perform the construction in steps, first approximating V by bounded and progressive processes V ′, next approximating each V ′ by contin-uous and adapted processes V ′′, and finally approximating each V ′′ by pre-dictable step processes V ′′′.
The first and last steps being elementary, we may focus on the second step. Then let V be bounded. We need to construct some continuous, adapted processes Vn, such that (14) holds a.s. for t = 1. Since the Vn can be chosen to be uniformly bounded, we may replace the first term by (|Vn −V | · [M])1.
Thus, it is enough to establish the approximation (|Vn −V |·A)1 →0 when the process A is non-decreasing, continuous, and adapted with A0 = 0. Replacing At by At + t if necessary, we may even take A to be strictly increasing.
To form the required approximations, we may introduce the inverse process Ts = sup{t ≥0; At ≤s}, and define for all t, h > 0 V h t = h−1 t T(At−h) V dA = h−1 At (At−h)+ V (Ts) ds.
Then Theorem 2.15 yields V h ◦T →V ◦T as h →0, a.e. on [0, A1], and so by dominated convergence, 1 0 V h −V dA = A1 0 V h(Ts) −V (Ts) ds →0.
The processes V h are clearly continuous. To prove their adaptedness, we note that the process T(At −h) is adapted for every h > 0, by the definition of T. Since V is progressive, we further note that V · A is adapted and hence progressive. The adaptedness of (V · A)T(At−h) now follows by composition. 2 412 Foundations of Modern Probability Though the class L(X) of stochastic integrands is sufficient for most pur-poses, it is sometimes useful to allow integration of slightly more general pro-cesses. Given any continuous semi-martingale X = M + A, let ˆ L(X) denote the class of product-measurable processes V , such that (V −˜ V ) · [M] = 0 and (V −˜ V ) · A = 0 a.s. for some process ˜ V ∈L(X). For V ∈ˆ L(X) we define V · X = ˜ V · X a.s. The extension clearly enjoys all the previously established properties of stochastic integration.
We often need to see how semi-martingales, covariation processes, and stochastic integrals are transformed by a random time change. Then consider a non-decreasing, right-continuous family of finite optional times τs, s ≥0, here referred to as a finite random time change τ. If even F is right-continuous, so is the induced filtration Gs = Fτs, s ≥0, by Lemma 9.3. A process X is said to be τ-continuous, if it is a.s. continuous on R+ and constant on every interval [τs−, τs], s ≥0, where τ0−= X0−= 0 by convention.
Theorem 18.24 (random time change, Kazamaki) Let τ be a finite random time change with induced filtration G, and let X = M + A be a τ-continuous F-semi-martingale. Then (i) X ◦τ is a continuous G-semi-martingale with canonical decomposition M ◦τ + A ◦τ, such that [X ◦τ] = [X] ◦τ a.s., (ii) V ∈L(X) implies V ◦τ ∈ˆ L(X ◦τ) and (V ◦τ) · (X ◦τ) = (V · X) ◦τ a.s.
(15) Proof: (i) It is easy to check that the time change X →X ◦τ preserves continuity, adaptedness, monotonicity, and the local martingale property. In particular, X ◦τ is then a continuous G -semi-martingale with canonical de-composition M ◦τ + A ◦τ. Since M 2 −[M] is a continuous local martingale, so is the time-changed process M 2 ◦τ −[M] ◦τ, and we get [X ◦τ] = [M ◦τ] = [M] ◦τ = [X] ◦τ a.s.
If V ∈L(X), we also note that V ◦τ is product-measurable, since this is true for both V and τ.
(ii) Fixing any t ≥0 and using the τ-continuity of X, we get (1[0,t] ◦τ) · (X ◦τ) = 1[0,τ −1 t ] · (X ◦τ) = (X ◦τ)τ −1 t = (1[0,t] · X) ◦τ, which proves (15) when V = 1[0,t]. If X has locally finite variation, the result extends by a monotone-class argument and monotone convergence to arbitrary V ∈L(X). In general, Lemma 18.23 yields some continuous, adapted processes V1, V2, . . . , such that a.s.
18. Itˆ o Integration and Quadratic Variation 413 (Vn −V )2d[M] + (Vn −V ) dA →0.
By (15) the corresponding properties hold for the time-changed processes, and since the processes Vn ◦τ are right-continuous and adapted, hence progressive, we obtain V ◦τ ∈ˆ L(X ◦τ).
Now assume instead that the approximating processes V1, V2, . . . are pre-dictable step processes. Then (15) holds as before for each Vn, and the relation extends to V by Lemma 18.12.
2 Next we consider stochastic integrals of processes depending on a param-eter. Given a measurable space (S, S), we say that a process V on S × R+ is progressive6, if its restriction to S × [0, t] is S ⊗Bt ⊗Ft-measurable for every t ≥0, where Bt = B[0,t]. A simple version of the following result will be useful in Chapter 19.
Theorem 18.25 (dependence on parameter, Dol´ eans, Stricker & Yor) For measurable S, consider a continuous semi-martingale X and a progressive pro-cess Vs(t) on S × R+ such that Vs ∈L(X) for all s ∈S. Then the process Ys(t) = (Vs · X)t has a version that is progressive and a.s. continuous at each s ∈S.
Proof: Let X have canonical decomposition of M +A, and let the processes V n s on S × R+ be progressive and such that for any t ≥0 and s ∈S, (V n s −Vs)2 · [M] t + (V n s −Vs) · A ∗ t P →0.
Then Lemma 18.12 yields (V n s ·X −Vs ·X)∗ t P →0 for every s and t. Proceeding as in the proof of Proposition 5.32, we may choose a sub-sequence {nk(s)} ⊂N depending measurably on s, such that the same convergence holds a.s. along {nk(s)} for any s and t. Define Ys,t = lim supk(V nk s ·X)t whenever this is finite, and put Ys,t = 0 otherwise. If the processes (V n s ·X)t have progressive versions on S × R+ that are a.s. continuous for each s, then Ys,t is clearly a version of the process (Vs · X)t with the same properties. We apply this argument in three steps.
First we reduce to the case of bounded, progressive integrands by taking V n = V 1{|V | ≤n}. Next, we apply the transformation in the proof of Lemma 18.23, to reduce to the case of continuous and progressive integrands. Finally, we approximate any continuous, progressive process V by the predictable step processes V n s (t) = Vs(2−n[2nt]). The integrals V n s · X are then elementary, and the desired continuity and measurability are obvious by inspection.
2 We turn to the related topic of functional representations. For motivation, we note that the construction of a stochastic integral V ·X depends in a subtle way on the underlying probability measure P and filtration F. Thus, we cannot expect a general representation F(V, X) of the integral process V · X. In view 6short for progressively measurable 414 Foundations of Modern Probability of Proposition 5.32, we might still hope for a modified representation of the form F(μ, V, X), where μ = L(V, X). Even this may be too optimistic, since the canonical decomposition of X also depends on F.
Here we consider only a special case, sufficient for our needs in Chapter 32. Fixing any progressive functions σi j and b i of suitable dimension, defined on the path space CR+,Rd, we consider an adapted process X satisfying the stochastic differential equation dXi t = σi j(t, X) dBj t + bi(t, X) dt, (16) where B is a Brownian motion in Rr. A detailed discussion of such equations appears in Chapter 32. Here we need only the simple fact from Lemma 32.1 that the coefficients σi j(t, X) and bi(t, X) are again progressive. Write aij = σi kσj k.
Proposition 18.26 (functional representation) For any progressive functions σ, b, f of suitable dimension, there exists a measurable mapping F : ˆ M CR+, Rd × CR+, Rd →CR+, R, (17) such that whenever X solves (16) with L(X) = μ and f i(X) ∈L(Xi) for all i, we have f i(X) · Xi = F(μ, X) a.s.
Proof: From (16) we note that X is a semi-martingale with covariation processes [Xi, Xj] = aij(X)·λ and drift components bi(X)·λ. Hence, f i(X) ∈ L(Xi) for all i iffthe processes (f i)2aii(X) and f ibi(X) are a.s. Lebesgue in-tegrable, which holds in particular when f is bounded. Letting f1, f2, . . . be progressive with (f i n −f i)2aii(X) · λ + (f i n −f i) bi(X) · λ →0, (18) we get by Lemma 18.12 f i n(X) · Xi −f i(X) · Xi ∗ t P →0, t ≥0.
Thus, if f i n(X) · Xi = Fn(μ, X) a.s. for some measurable mappings Fn as in (17), Proposition 5.32 yields a similar representation for the limit f i(X) · Xi.
As in the previous proof, we may apply this argument in steps, reducing first to the case where f is bounded, next to the case of continuous f, and finally to the case where f is a predictable step function. Here the first and last steps are again elementary. For the second step, we may now use the simpler approximation fn(t, x) = n t (t−n−1)+ f(s, x)ds, t ≥0, n ∈N, x ∈CR+,Rd.
By Theorem 2.15 we have fn(t, x) →f(t, x) a.e. in t for each x ∈CR+,Rd, and (18) follows by dominated convergence.
2 18. Itˆ o Integration and Quadratic Variation 415 Exercises 1. Show that if M is a local martingale and ξ is an F0-measurable random variable, then the process Nt = ξMt is again a local martingale.
2. Show that every local martingale M ≥0 with EM0 < ∞is a super-martingale.
Also show by an example that M may fail to be a martingale. (Hint: Use Fatou’s lemma. Then take Mt = Xt/(1−t)+, where X is a Brownian motion starting at 1, stopped when it reaches 0.) 3. For a continuous local martingale M, show that M and [M] have a.s. the same intervals of constancy. (Hint: For any r ∈Q+, put τ = inf{t > r; [M]t > [M]r}.
Then Mτ is a continuous local martingale on [r, ∞) with quadratic variation 0, and so Mτ is a.s. constant on [s, τ]. Use a similar argument in the other direction.) 4. For any continuous local martingales Mn starting at 0 and associated optional times τn, show that (Mn)∗ τn P →0 iff[Mn]τn P →0. State the corresponding result for stochastic integrals.
5. Give examples of continuous semi-martingales X1, X2, . . . , such that X∗ n P →0, and yet [Xn]t ̸ P →0 for all t > 0. (Hint: Let B be a Brownian motion stopped at time 1, put Ak2−n = B(k−1)+2−n, and interpolate linearly. Define Xn = B −An.) 6. For a Brownian motion B and an optional time τ, show that EBτ = 0 when Eτ 1/2 < ∞and EB2 τ = Eτ when Eτ < ∞. (Hint: Use optional sampling and Theorem 18.7.) 7. Deduce the first inequality of Proposition 18.9 from Proposition 18.17 and the classical Cauchy inequality.
8. For any continuous semi-martingales X, Y , show that [X + Y ]1/2 ≤[X]1/2 + [Y ]1/2 a.s.
9. (Kunita & Watanabe) Let M, N be continuous local martingales, and fix any p, q, r > 0 with p−1 + q−1 = r−1. Show that ∥[M, N]t∥2 2r ≤∥[M]t∥p ∥[N]t∥q for all t > 0.
10. Let M, N be continuous local martingales with M0 = N0 = 0. Show that M⊥ ⊥N implies [M, N] ≡0 a.s. Also show by an example that the converse is false.
(Hint: Let M = U · B and N = V · B for a Brownian motion B and suitable U, V ∈L(B).) 11. Fix a continuous semi-martingale X, and let U, V ∈L(X) with U = V a.s. on a set A ∈F0. Show that U · X = V · X a.s. on A. (Hint: Use Proposition 18.15.) 12. For a continuous local martingale M, let U, U1, U2, . . . and V, V1, V2, . . . ∈ L(M) with |Un| ≤Vn, Un →U, Vn →V , and {(Vn −V ) · M}∗ t P →0 for all t > 0.
Show that (Un·M)t P →(U ·M)t for all t. (Hint: Write (Un−U)2 ≤2 (Vn−V )2+8 V 2, and use Theorem 1.23 and Lemmas 5.2 and 18.12.) 13. Let B be a Brownian bridge. Show that Xt = Bt∧1 is a semi-martingale on R+ with the induced filtration. (Hint: Note that Mt = (1 −t)−1Bt is a martingale on [0, 1), integrate by parts, and check that the compensator has finite variation.) 14. Show by an example that the canonical decomposition of a continuous semi-martingale may depend on the filtration. (Hint: Let B be a Brownian motion with induced filtration F, put Gt = Ft ∨σ(B1), and use the preceding result.) 416 Foundations of Modern Probability 15.
Show by stochastic calculus that t−pBt →0 a.s. as t →∞, where B is a Brownian motion and p > 1 2.
(Hint: Integrate by parts to find the canonical decomposition. Compare with the L1-limit.) 16. Extend Theorem 18.16 to a product of n semi-martingales.
17. Consider a Brownian bridge X and a bounded, progressive process V with 1 0 Vtdt = 0 a.s.
Show that E 1 0 V dX = 0.
(Hint: Integrate by parts to get 1 0 V dX = 1 0 (V −U)dB, where B is a Brownian motion and Ut = (1−t)−1 1 t Vsds.) 18. Show that Proposition 18.17 remains valid for any finite optional times t and tnk satisfying maxk(tnk −tn,k−1) P →0.
19. Let M be a continuous local martingale. Find the canonical decomposition of |M|p when p ≥2. For such a p, deduce the second relation in Theorem 18.7. (Hint: Use Theorem 18.18. For the last part, use H¨ older’s inequality.) 20. Let M be a continuous local martingale with M0 = 0 and [M]∞≤1. Show that P{supt Mt ≥r} ≤e−r2/2 for all r ≥0. (Hint: Consider the super-martingale Z = exp(cM −c2[M]/2) for a suitable c > 0.) 21. Let X, Y be continuous semi-martingales. Fix a t > 0 and some partitions (tnk) of [0, t] with maxk(tnk −tn,k−1) →0. Show that 1 2 Σk (Ytnk + Ytn,k−1)(Xtnk − Xtn,k−1) P →(Y ◦X)t. (Hint: Use Corollary 18.13 and Proposition 18.17.) 23. Say that a process is predictable if it is measurable with respect to the σ-field in R+ × Ω induced by all predictable step processes. Show that every predictable process is progressive. Conversely, given any progressive process X and constant h > 0, show that the process Yt = X(t−h)+ is predictable.
24. For a progressive process V and a non-decreasing, continuous, adapted process A, prove the existence of a predictable process ˜ V with |V −˜ V | · A = 0 a.s. (Hint: Use Lemma 18.23.) 25. Use the preceding statement to deduce Lemma 18.23. (Hint: Begin with predictable V , using a monotone-class argument.) 26. Construct the stochastic integral V · M by approximation from elementary integrals, using Lemmas 18.10 and 18.23. Show that the resulting integral satisfies the relation in Theorem 18.11. (Hint: First let M ∈M2 and E(V 2 · [M])∞< ∞, and extend by localization.) 27. Let (V, B) d = ( ˜ V , ˜ B) for some Brownian motions B, ˜ B on possibly different filtered probability spaces and some V ∈L(B), ˜ V ∈L( ˜ B). Show that (V, B, V ·B) d = ( ˜ V , ˜ B, ˜ V · ˜ B). (Hint: Argue as in the proof of Proposition 18.26.) 28. Let X be a continuous F-semi-martingale. Show that X remains conditionally a semi-martingale given F0, and that the conditional quadratic variation agrees with [X].
Also show that if V ∈L(X), where V = σ(Y ) for some continuous process Y and measurable function σ, then V remains conditionally X-integrable, and the conditional integral agrees with V · X. (Hint: Conditioning on F0 preserves martingales.) Chapter 19 Continuous Martingales and Brownian Motion Real and complex exponential martingales, characterization of Brown-ian motion, time-change reduction, harmonic and analytic maps, point polarity and transience, skew-product representation, orthogonality and independence, Brownian invariance, Brownian functionals and martin-gales, Gauss and Poisson reduction, iterated and multiple integrals, chaos and integral representation, change of measure, transformation of drift, preservation properties, Cameron–Martin theorem, uniform integrabil-ity, Wald’s identity, removal of drift Here we deal with a wide range of applications of stochastic calculus, the principal tools of which were introduced in the previous chapter. A recurrent theme is the notion of exponential martingales, which appear in both a real and a complex variety. Exploring the latter yields an easy access to L´ evy’s celebrated martingale characterization of Brownian motion, and to the basic time-change reduction of isotropic continuous local martingales to a Brownian motion. Applying the latter result to suitable compositions of a Brownian mo-tion with harmonic or analytic maps, we obtain important information about Brownian motion in Rd. Similar methods can be used to analyze a variety of other transformations leading to Gaussian processes.
As a further application of the exponential martingales, we derive stochastic integral representations of Brownian functionals and martingales, and exam-ine their relationship to the chaos expansions obtained by different methods in Chapter 14. In this context, we also note how the previously introduced multiple Wiener–Itˆ o integrals can be expressed as iterated single Itˆ o integrals.
A related problem, of crucial importance for Chapter 32, is to represent a continuous local martingale with absolutely continuous covariation process in terms of stochastic integrals with respect to a suitable Brownian motion.
As a final major topic, we examine the transformations induced by abso-lutely continuous changes of the probability measure. Here the density process turns out to be a real exponential martingale, and any continuous local mar-tingale remains a martingale under the new measure, apart from an additional drift term. This observation is useful for applications, where it is often em-ployed to remove the drift of a given semi-martingale. The appropriate change of measure then depends on the given process, and it becomes important to find effective criteria for a proposed exponential process to be a true martingale.
417 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 418 Foundations of Modern Probability The present exposition may be regarded as a continuation of our discus-sion of martingales and Brownian motion in Chapters 9 and 14, respectively.
Changes of time and measure are both important for the theory of stochastic differential equations, developed in Chapters 32–33. The time-change reduc-tions of continuous martingales have counterparts for the point processes ex-plored in Chapter 15, where the present Gaussian processes are replaced by suitable Poisson processes. The results about changes of measure are extended in Chapter 20 to the context of possibly discontinuous semi-martingales.
To begin our technical discussion of the mentioned ideas, we consider first the complex exponential martingales, which are closely related to the real versions appearing in Lemma 19.22.
Lemma 19.1 (complex exponential martingales) Let M be a real, continuous local martingale with M0 = 0. Then Zt = exp iMt + 1 2 [M]t , t ≥0, is a complex local martingale, satisfying Zt = 1 + i(Z · M)t a.s., t ≥0.
Proof: Applying Corollary 18.20 to the complex-valued semi-martingale Xt = iMt + 1 2 [M]t and the entire function f(z) = ez, we get dZt = Zt dXt + 1 2 d[X]t = Zt idMt + 1 2 d[M]t −1 2 d[M]t = iZt dMt.
2 We now explore the connection between continuous martingales and Gauss-ian processes. For a subset K of a Hilbert space H, let ¯ K denote the closed linear subspace generated by K.
Lemma 19.2 (isonormal martingales) For a Hilbert space H, let K ⊂H with ¯ K = H. Let M h, h ∈K, be continuous local martingales with M h 0 = 0, such that [M h, M k]∞= ⟨h, k⟩a.s., h, k ∈K.
(1) Then there exists an isonormal Gaussian process ζ ⊥ ⊥F0 on H, such that M h ∞= ζh a.s., h ∈K.
Proof: For any linear combination h = u1h1 + · · · + unhn, let Nt = u1M h1 t + · · · + unM hn t , t ≥0.
Then (1) yields [N]∞= Σ j,k ujuk M hj, M hk ∞ = Σ j,k ujuk ⟨hj, hk⟩= ∥h∥2.
19. Continuous Martingales and Brownian Motion 419 The process Z = exp(iN + 1 2[N]) is a.s. bounded, and so by Lemma 19.1 it is a uniformly integrable martingale. Writing ξ = N∞, we hence obtain for any A ∈F0 PA = E(Z∞; A) = E exp iN∞+ 1 2 [N]∞ ; A = E(eiξ; A) exp 1 2 ∥h∥2 .
Since u1, . . . , un were arbitrary, we conclude from Theorem 6.3 that the ran-dom vector (M h1 ∞, . . . , M hn ∞) is independent of F0 and centered Gaussian with covariances ⟨hj, hk⟩.
For any element h = Σi aihi, define ζh = Σi aiM hi ∞. To see that ζh is a.s.
independent of the representation of h, suppose that also h = Σi bihi. Writing ci = ai −bi, we get E Σ i aiM hi ∞−Σ i biM hi ∞ 2 = E Σ i ciM hi ∞ 2 = Σ i,j cicj ⟨hi, hj⟩ = Σ i cihi 2 = Σ i aihi −Σ i bihi 2 = 0, as desired. Note that ζ is centered Gaussian with E(ζh ζk) = ⟨h, k⟩. We may finally extend ζ by continuity to an isonormal Gaussian process on H.
2 For a first application, we may establish a celebrated martingale character-ization of Brownian motion.
Theorem 19.3 (Brownian criterion, L´ evy) For a continuous process B = (B1, . . . , Bd) in Rd with B0 = 0, these conditions are equivalent: (i) B is an F-Brownian motion, (ii) B is a local F-martingale with [Bi, Bj]t ≡δijt a.s.
Proof (OK): Assuming (ii), we introduce for fixed s < t the continuous local martingales M i r = Bi r∧t −Bi r∧s, r ≥s, i ≤d, and conclude from Lemma 19.2 that the differences Bi t −Bi s are i.i.d. N(0, t−s) and independent of Fs, which proves (i).
2 The last result suggests that we might transform a general continuous local martingale M into a Brownian motion through a suitable random time change.
In higher dimensions, this requires M to be isotropic, in the sense that a.s.
[M i] = [M j], [M i, M j] = 0, i ̸= j, which holds in particular when M is a Brownian motion in Rd. For continuous local martingales M in C, the condition is a.s. equivalent to [M] = 0, or 420 Foundations of Modern Probability [ ℜM] = [ℑM], [ ℜM, ℑM] = 0.
In the isotropic case, we refer to [M 1] = · · · = [M d] or [ ℜM] = [ℑM] as the rate process of M.
Though the proof is straightforward when [M]∞= ∞a.s., the general case requires a rather subtle extension of the filtered probability space. For two filtrations F, G on Ω, we say that G is a standard extension of F if Ft ⊂Gt ⊥ ⊥ Ft F, t ≥0.
This is precisely the condition needed to ensure that all adaptedness and con-ditioning properties will be preserved.
Theorem 19.4 (time-change reduction, Doeblin, Dambis, Dubins & Schwarz) Let M be an isotropic, continuous local F-martingale in Rd with M0 = 0, and define τs = inf t ≥0; [M 1]t > s , Gs = Fτs, s ≥0.
Then there exists a Brownian motion B in Rd w.r.t. a standard extension ˆ G of G, such that a.s.
B = M ◦τ on [ 0, [M 1]∞), M = B ◦[M 1].
Proof (OK): We may take d = 1, the proof for d > 1 being similar. Intro-duce a Brownian motion X⊥ ⊥F with induced filtration X, and put ˆ Gt = Gt∨Xt.
Since G⊥ ⊥X, it is clear that ˆ G is a standard extension of both G and X. In particular, X remains a Brownian motion under ˆ G. Now define Bs = Mτs + s 0 1{τr = ∞} dXr, s ≥0.
(2) Since M is τ-continuous by Proposition 18.6, Theorem 18.24 shows that the first term M ◦τ is a continuous G -martingale, hence also a ˆ G -martingale, with quadratic variation [M ◦τ]s = [M]τs = s ∧[M]∞, s ≥0.
The second term in (2) has quadratic variation s−s∧[M]∞, and the covariation vanishes since M ◦τ⊥ ⊥X. Thus, [B]s = s a.s., and so Theorem 19.3 shows that B is a ˆ G -Brownian motion. Finally, Bs = Mτs for s < [M]∞, which implies M = B ◦[M] a.s. by the τ-continuity of M.
2 In two dimensions, isotropic martingales arise naturally by composition of a complex Brownian motion B with a possibly multi-valued analytic function f.
For any continuous process X, we may clearly choose a continuous evolution of f(X), as long as X avoids all singularities of f. Similar results hold for harmonic functions, which is especially useful in dimensions d ≥3, where no analytic functions exist.
19. Continuous Martingales and Brownian Motion 421 Theorem 19.5 (harmonic and analytic maps, L´ evy) (i) For an harmonic function f on Rd, consider an isotropic, continuous local martingale M in Rd that a.s. avoids all singularities of f. Then f(M) is a local martingale satisfying [f(M) ] = |∇f(M)|2 · [M 1] a.s.
(ii) For an analytic function f, consider a complex, isotropic, continuous local martingale M that a.s. avoids all singularities of f. Then f(M) is again a complex, isotropic local martingale satisfying [ ℜf(M) ] = |f ′(M)|2 · [ ℜM ] a.s.
If B is a Brownian motion and f ′ ̸≡0, then [ ℜf(B)] is a.s. unbounded and strictly increasing.
Proof: (i) Since M is isotropic, we get by Corollary 18.19 f(M) = f(M0) + f ′ i · M i + 1 2 Δf(M) · [M 1].
Here the last term vanishes since f is harmonic, and so f(M) is a local mar-tingale. The isotropy of M also yields [f(M) ] = Σ i f ′ i(M) · M i = Σ i f ′ i(M) 2 · [M 1] = ∇f(M) 2 · [M 1].
(ii) Since f is analytic, Corollary 18.20 yields f(M) = f(M0) + f ′(M) · M + 1 2 f ′′(M) · [M].
Since M is isotropic, the last term vanishes, and we get [f(M) ] = f ′(M) · M = f ′(M) 2 · [M] = 0, which shows that f(M) is again isotropic. Writing M = X + iY and f ′(M) = U + iV , we get [ ℜf(M) ] = [ U · X −V · Y ] = ( U 2 + V 2) · [X] = |f ′(M)|2 · [ ℜM ].
Noting that f ′ has at most countably many zeros, unless it vanishes iden-tically, we obtain by Fubini’s theorem Eλ t ≥0; f ′(Bt) = 0 = ∞ 0 P f ′(Bt) = 0 dt = 0, 422 Foundations of Modern Probability and so [ ℜf(B) ] = |f ′(B)|2 · λ is a.s. strictly increasing. To see that it is also a.s. unbounded, we note that f(B) converges a.s. on the set {[ ℜf(B) ] < ∞}.
However, f(B) diverges a.s., since f is non-constant and the random walk B0, B1, . . . is recurrent by Theorem 12.2.
2 Combining the last two results, we may prove the polarity of single points when d ≥2, and the transience of the process when d ≥3. Note that the latter property is a continuous-time counterpart of Theorem 12.8 for random walks.
Both results are important for the potential theory developed in Chapter 34.
Define τa = inf{t > 0; Bt = a}.
Theorem 19.6 (point polarity and transience, L´ evy, Kakutani) For a Brow-nian motion B in Rd, we have (i) for d ≥2, τa = ∞a.s. for all a ∈Rd, (ii) for d ≥3, |Bt| →∞a.s. as t →∞.
Proof: (i) We may take d = 2, and hence choose B to be a complex Brownian motion. Using Theorem 19.5 (ii) with f(z) = ez, we see that M = eB is a conformal local martingale with unbounded rate [ ℜM ]. By Theorem 19.4 we have M −1 = X ◦[ ℜM ] a.s. for a Brownian motion X, and since M ̸= 0, it follows that X a.s. avoids −1. Hence, τ−1 = ∞a.s., and so by scaling and rotational symmetry we have τa = ∞a.s. for every a ̸= 0. To extend this to a = 0, we use the Markov property at h > 0 to obtain P0 τ0 ◦θh < ∞ = E0PBh{τ0 < ∞} = 0, h > 0.
As h →0, we get P0{τ0 < ∞} = 0, and so τ0 = ∞a.s.
(ii) Here we may take d = 3. For any a ̸= 0 we have τa = ∞a.s. by (i), and so by Theorem 19.5 (i) the process M = |B −a|−1 is a continuous local martingale. By Fatou’s lemma, M is then an L1-bounded super-martingale, and so by Theorem 9.19 it converges a.s. toward some random variable ξ. Since clearly Mt d →0 we have ξ = 0 a.s.
2 Combining (i) above with Theorem 17.11, we see that a complex, isotropic, continuous local martingale M with M0 = 0 avoids every fixed point a ̸= 0.
Thus, Theorem 19.5 (ii) applies to any analytic function f with only isolated singularities. Since f is allowed to be multi-valued, the result applies even to functions with essential singularities, such as to f(z) = log(1+z). For a simple application, we consider the windings of planar Brownian motion about a fixed point.
Corollary 19.7 (skew-product representation, Galmarino) Let B be a com-plex Brownian motion with B0 = 1, and choose a continuous version of V = arg B with V0 = 0. Then there exists a real Brownian motion Y ⊥ ⊥|B|, such that a.s.
Vt ≡Y ◦ |B|−2 · λ t, t ≥0.
19. Continuous Martingales and Brownian Motion 423 Proof: Using Theorem 19.5 (ii) with f(z) = log(1 + z), we see that Mt = log |Bt|+iVt is an isotropic martingale with rate [ ℜM ] = |B|−2·λ. Hence, The-orem 19.4 yields a complex Brownian motion Z = X +iY with M = Z ◦[ ℜM ] a.s., and the assertion follows.
2 For a non-isotropic, continuous local martingale M in Rd, there is no single random time change that reduces the process to a Brownian motion. How-ever, we may transform the components M i individually as in Theorem 19.4, to obtain a set of one-dimensional Brownian motions B1, . . . , Bd. If the latter processes are independent, they may be combined into a d -dimensional Brow-nian motion B = (B1, . . . , Bd). We show that the required independence arises automatically, whenever the original components M i are strongly orthogonal, in the sense that [M i, M j] = 0 a.s. for all i ̸= j.
Proposition 19.8 (multi-variate time change, Knight) Let M 1, M 2, . . . be strongly orthogonal, continuous local martingales starting at 0.
Then there exist some independent Brownian motions B1, B2, . . . with M k = Bk ◦[M k] a.s., k ∈N.
Proof: When [M k]∞= ∞a.s. for all k, the result is an easy conse-quence of Lemma 19.2. In general, choose some independent Brownian motions X1, X2, . . . ⊥ ⊥F with induced filtration X, and define Bk s = M k(τ k s ) + Xk (s −[M k]∞)+ , s ≥0, k ∈N.
Write ψt = −log(1 −t)+, and put Gt = Fψt + X(t−1)+, t ≥0. To check that B1, B2, . . . have the desired joint distribution, we may take the [M k] to be bounded. Then the processes N k t = M k ψt + Xk (t−1)+, t ≥0, are strongly orthogonal, continuous G -martingales with quadratic variations [N k]t = [M k]ψt + (t −1)+, t ≥0, and we note that Bk s = N k σk s , σk s = inf t ≥0; [N k]t > s .
The assertion now follows from the result for [M k]∞= ∞a.s.
2 We turn to some general invariance properties of a Brownian motion or bridge. Then for any processes U, V on I = R+ or [0, 1], we define1 λV = I Vt dt, ⟨U, V ⟩= I Ut Vt dt, ∥V ∥2 = ⟨V, V ⟩1/2.
1The inner product ⟨·, ·⟩must not be confused with the predictable covariation introduced in Chapter 20.
424 Foundations of Modern Probability A process B on [0, 1] is said to be a Brownian bridge w.r.t. a filtration F, if it is F-adapted and such that, conditionally on Ft for every t ∈[0, 1], it is a Brownian bridge from (t, Bt) to (1, 0). We anticipate the fact that an F-Brownian bridge in Rd is a continuous semi-martingale on [0, 1]. The associated martingale component is clearly a standard Brownian motion.
Theorem 19.9 (Brownian invariance) (i) Consider an F-Brownian motion X and some F-predictable processes V t on R+, t ≥0, such that ⟨V s, V t⟩= s ∧t a.s., s, t ≥0.
Then Yt = (V t · X)∞, t ≥0, is a version of a Brownian motion.
(ii) Consider an F-Brownian bridge X and some F-predictable processes V t on [0, 1], t ∈[0, 1], such that λV t = t, ⟨V s, V t⟩= s ∧t a.s., s, t ∈[0, 1].
Then Yt = (V t · X)1, t ∈[0, 1], is a version of a Brownian bridge.
The existence of the integrals V t·X should be regarded as part of the asser-tions. In case of (ii) we need to convert the relevant Brownian-bridge integrals into continuous martingales. Then for any Lebesgue integrable process V on [0, 1], we define ¯ Vt = (1 −t)−1 1 t Vs ds, t ∈[0, 1].
Lemma 19.10 (Brownian bridge integral) (i) Let X be an F-Brownian bridge with martingale component B, and con-sider an F-predictable process V on [0, 1], such that EF0 1 0 V 2 < ∞a.s.
and λV is F0-measurable. Then 1 0 Vt dXt = 1 0 (Vt −¯ Vt) dBt a.s.
(ii) For any processes U, V ∈L2[0, 1], we have ¯ U, ¯ V ∈L2[0, 1] and 1 0 (Ut −¯ Ut)(Vt −¯ Vt) dt = 1 0 Ut Vt dt −λU · λV.
Proof: (ii) We may take U = V , since the general case will then follow by polarization. Writing Rt = 1 t Vs ds = (1 −t)¯ Vt and integrating by parts, we get on [0, 1) ¯ V 2 t dt = (1 −t)−2R2 t dt = (1 −t)−1R2 t + 2 (1 −t)−1Rt Vt dt = (1 −t) ¯ V 2 t + 2 ¯ Vt Vt dt.
For bounded V , we conclude that 19. Continuous Martingales and Brownian Motion 425 1 0 (Vt −¯ Vt)2 dt = 1 0 V 2 t dt −(λV )2 = 1 0 (Vt −λV )2 dt, (3) and so by Minkowski’s inequality ∥¯ V ∥2 ≤∥V ∥2 + ∥V −¯ V ∥2 = ∥V ∥2 + ∥V −λV ∥2 ≤2 ∥V ∥2, which extends to the general case by monotone convergence. This gives ¯ V ∈L2, and the asserted relation follows from (3) by dominated convergence.
(i) The process Mt = Xt/(1−t) is clearly a continuous martingale on 0, 1), and so Xt = (1 −t)Mt is a continuous semi-martingale on the same interval.
Integrating by parts, we get for any t ∈[0, 1) dXt = (1 −t) dMt −Mt dt = dBt −Mt dt, and also t 0 VsMs ds = Mt t 0 Vs ds − t 0 dMs s 0 Vr dr = t 0 dMs 1 s Vr dr −Mt 1 t Vs ds = t 0 ¯ Vs dBs −Xt ¯ Vt.
Hence by combination, t 0 Vs dXs = t 0 (Vs −¯ Vs) dBs + Xt ¯ Vt, t ∈[0, 1).
(4) By Cauchy’s inequality and dominated convergence, we get as t →1 EF0|Xt ¯ Vt| 2 ≤EF0(M 2 t ) EF0 1 t |Vs| ds 2 ≤(1 −t) EF0(M 2 t ) EF0 1 t V 2 s ds = t EF0 1 t V 2 s ds →0, and so by dominated convergence Xt ¯ Vt P →0. By (4) and Corollary 18.13, it remains to note that ¯ V is B-integrable, which holds since ¯ V ∈L2 by (ii).
2 Proof of Theorem 19.9: (i) Use Lemma 19.2.
(ii) By Lemma 19.10 (i), we have a.s.
Yt = 1 0 (V t r −¯ V t r ) dBr, t ∈[0, 1], 426 Foundations of Modern Probability for the Brownian motion B = X −ˆ X. Furthermore, Lemma 19.10 (ii) yields 1 0 V s r −¯ V s r V t r −¯ V t r dr = 1 0 V s r V t r dr −λV s · λV t = s ∧t −st.
The assertion now follows by Lemma 19.2.
2 Next we consider a basic stochastic integral representation of martingales with respect to a Brownian filtration.
Theorem 19.11 (Brownian martingales) Let B = (B1, . . . , Bd) be a Brow-nian motion in Rd with complete, induced filtration F. Then every local F-martingale M is a.s. continuous, and there exist some (P × λ)-a.e. unique processes V 1, . . . , V d ∈L(B1), such that M = M0 + Σ k≤d V k · Bk a.s.
(5) The proof is based on a similar representation of Brownian functionals: Lemma 19.12 (Brownian functionals, Itˆ o) Let B = (B1, . . . , Bd) be a Brow-nian motion in Rd, and let ξ be a B-measurable random variable with E ξ = 0 and E ξ2 < ∞. Then there exist some (P ×λ)-a.e. unique processes V 1, . . . , V d ∈L(B1), such that 2 a.s.
E ∞ 0 |Vt|2 dt < ∞, ξ = Σ k≤d ∞ 0 V k dBk.
Proof (Dellacherie): Let H be the Hilbert space of B-measurable random variables ξ ∈L2 with Eξ = 0, and write K for the subspace of elements ξ admitting the desired representation Σk(V k · Bk)∞. For such a ξ we get E ξ2 = E Σk{(V k)2 · λ}∞, which implies the asserted uniqueness.
By the obvious completeness of L(B1), the same formula shows that K is closed. To obtain K = H, we need to show that any ξ ∈H ⊖K vanishes a.s.
Then fix any non-random functions u1, . . . , ud ∈L2(R). Put M = Σkuk·Bk, and define the process Z as in Lemma 19.1. Then Proposition 18.14 yields Z −1 = i Z · M = i Σ k≤d (Zuk) · Bk, and so ξ ⊥(Z∞−1), or E ξ exp iΣ k(uk · Bk)∞ = 0.
Specializing to step functions uk and using Theorem 6.3, we get E ξ; (Bt1, . . . , Btn) ∈C = 0, t1, . . . , tn ∈R+, C ∈Bn, n ∈N, 2The moment condition is essential, since the existence would otherwise be trivial and the uniqueness would fail. This is similar to the situation for the Skorohod embedding in Theorem 22.1.
19. Continuous Martingales and Brownian Motion 427 which extends by a monotone-class argument to E( ξ; A) = 0 for any A ∈F∞.
Thus, ξ = E( ξ | F∞) = 0 a.s.
2 Proof of Theorem 19.11: We may clearly take M0 = 0, and by suitable localization we may assume that M is uniformly integrable. Then M∞exists in L1(F∞) and may be approximated in L1 by some random variables ξ1, ξ2, . . . ∈ L2(F∞) with Eξn = 0. The martingales M n t = E( ξn| Ft) are a.s. continuous by Lemma 19.12, and Proposition 9.16 yields for any ε > 0 P (ΔM)∗> 2ε ≤P (M n −M)∗> ε ≤ε−1E| ξn −M∞| →0.
Hence, (ΔM)∗= 0 a.s., and so M is a.s. continuous. The remaining assertions now follow by localization from Lemma 19.12.
2 We turn to the converse problem of finding a Brownian motion B satisfying (5) for given processes V k. This result plays a crucial role in Chapter 32.
Theorem 19.13 (integral representation, Doob) Let M be a continuous local F-martingale in Rd with M0 = 0, such that [M i, M j] = Σ k≤n V i k V j k · λ a.s., i, j ≤d, for some F-progressive processes V i k, i ≤d, k ≤n. Then there exists a Brow-nian motion B = (B1, . . . , Bd) in Rd w.r.t. a standard extension G of F, such that M i = Σ k≤n V i k · Bk a.s., i ≤d.
Proof: For any t ≥0, let Nt and Rt be the null and range spaces of the matrix Vt, and write N ⊥ t and R⊥ t for their orthogonal complements. Denote the corresponding orthogonal projections by πNt, πRt, πN⊥ t , and πR⊥ t , respectively.
Note that Vt is a bijection from N ⊥ t to Rt, and write V −1 t for the inverse mapping from Rt to N ⊥ t .
All these mappings are clearly Borel-measurable functions of Vt, and hence again progressive.
Now introduce a Brownian motion X⊥ ⊥F in Rn with induced filtration X, and note that Gt = Ft ∨Xt, t ≥0, is a standard extension of both F and X.
Thus, V remains G -progressive, and the martingale properties of M and X remain valid for G. In Rn we introduce the local G -martingale B = V −1πR · M + πN · X.
The covariation matrix of B has density (V −1πR) V V ′ (V −1πR)′ + πNπ′ N = πN⊥π′ N⊥+ πNπ′ N = πN⊥+ πN = I, 428 Foundations of Modern Probability and so B is a Brownian motion by Theorem 19.3. Furthermore, the process πR⊥·M vanishes a.s., since its covariation matrix has density πR⊥V V ′π′ R⊥= 0.
Hence, Proposition 18.14 yields V · B = V V −1πR · M + V πN · Y = πR · M = (πR + πR⊥) · M = M.
2 The following joint extension of Theorems 15.15 and 19.2 will be needed in Chapter 27.
Proposition 19.14 (Gauss and Poisson reduction) Consider a continuous local martingale M in Rd, a ql-continuous, K-marked point process ξ on (0, ∞) with compensator ˆ ξ, a predictable process V : R+ × K →ˆ S, and a progressive process U : T × R+ →Rd, where K, S are Borel and ˆ S = S ∪{Δ} with Δ / ∈S.
Let the random measure μ on S and process ρ on T 2 be F0-measurable with μ = ˆ ξ ◦V −1, ρs,t = Σ i,j ∞ 0 U i s,rU j t,r d[M i, M j]r, s, t ∈T, and define the point process η on S and process X on T by η = ξ ◦V −1, Xt = Σ i ∞ 0 U i t,r dM i r, t ∈T.
Then conditionally on F0, we have (i) η and X are independent, (ii) η is Poisson on S with intensity measure μ, (iii) X is centered Gaussian with covariance function ρ.
Proof: For any constants c1, . . . , cm ∈R, times t1, . . . , tm ∈T, and disjoint sets B1, . . . , Bn ∈S with μBj < ∞, consider the processes Nt = Σ k ck Σ i t 0 U i tk,r dM i r, Y k t = S t+ 0 1Bk(Vs,x) ξ(ds dx), t ≥0, k ≤n.
Next define for any u1, . . . , un ≥0 the exponential local martingales Z0 t = exp iNt + 1 2[N, N]t , Zk t = exp −ukY k t + (1 −e−uk)ˆ Y k t , t ≥0, k ≤n, where the local martingale property holds for Z0 by Lemma 19.1 and for Z1, . . . , Zn by Theorem 20.8, applied to the processes Ak t = (1 −e−uk)(ˆ Y k t −Y k t ). The same property holds for the product Zt = Πk Zk t , since the Zk are strongly orthogonal by Theorems 20.4 and 20.6. Furthermore, N∞= Σ k ckXtk, Y k ∞= ηBk, [N, N]∞= Σ h,k chck Σ i,j ∞ 0 U i th,rU j tk,r d[M i, M j]r = Σ h,k chck ρth,tk, ˆ Y k ∞= S K 1Bk(Vs,x) ˆ ξ(ds dx) = μBk, k ≤n.
19. Continuous Martingales and Brownian Motion 429 The product Z remains a local martingale with respect to the conditional probability measure PA = P(· |A), for any A ∈F0 with PA > 0. Choosing A such that ρth,tk and μBk are bounded on A, we see that even Z becomes bounded on A, and hence is a uniformly integrable PA-martingale. In particu-lar, E(Z∞|A) = 1 or E(Z∞; A) = PA, which extends immediately to arbitrary A ∈F0. Hence, E(Z∞| F0) = 1, and we get E exp i Σ k ckXtk −Σ k uk ηBk F0 = exp −1 2 Σ h,k chck ρth,tk −Σ k (1 −e−uk) μBk .
By Theorem 6.3, the variables Xt1, . . . , Xtm and ηB1, . . . , ηBn have then the re-quired joint conditional distribution, and the assertion follows by a monotone-class argument.
2 For a similar purpose in Chapter 27, we also need a joint extension of Theorems 10.27 and 19.2, where the ordinary compensator is replaced by its discounted version.
Proposition 19.15 (discounted reduction maps) For j ∈N, let (τj, χj) be orthogonal, adapted pairs as in Theorem 10.27 with discounted compensators ζj, and let Yj : R+ × Kj →Sj be predictable with ζj ◦Y −1 j ≤μj a.s. for some F0-measurable random distributions μj on Sj. Put γj = Yj(τj, χj), and define X as in Lemma 19.14 for a continuous local martingale M in Rd and some predictable processes Ut, t ∈S, such that the process ρ on S2 is F0-measurable.
Then conditionally on F0, we have (i) X, γ1, γ2, . . . are independent, (ii) L(γj) = μj for all j, (iii) X is centered Gaussian with covariance function ρ.
Proof: For V1, . . . , Vm as before, we define the associated martingales M1, . . . , Mm as in Lemma 10.25. Further introduce the exponential local martingale Z = exp(iN + 1 2 [N]), where N is such as in the proof of Lemma 19.14. Then (Z −1)Πj Mj is conditionally a uniformly integrable martingale, and we get as in Theorem 10.27 EF0(Z∞−1)Π j 1Bj(γj) −μjBj = 0.
Combining with the same relation without the factor Z∞−1 and proceeding recursively, we get as before EF0 exp(iN∞)Π j1Bj(γj) = exp −1 2 [N]∞ Π j μjBj.
Applying Theorem 6.3 to the bounded measure ˜ P = Πj 1Bj(γj) · P F0, we conclude that P F0 X ∈A; γj ∈Bj, j ≤m = νρ(A)Π j μjBj, 430 Foundations of Modern Probability where νρ denotes the centered Gaussian distribution on RT with covariance function ρ.
2 Next we show how the multiple Wiener–Itˆ o integrals of Chapter 14 can be expressed as iterated single Itˆ o integrals. Then introduce the tetrahedral sets Δn = (t1, . . . , tn) ∈Rn +; t1 < · · · < tn , n ∈N.
Given a function f ∈L2(Rn +, λn), we write ˆ f = n! ˜ f1Δn, where ˜ f is the sym-metrization of f defined in Chapter 14. For any isonormal Gaussian process ζ on L2(λ), we may introduce an associated Brownian motion B on R+ satisfying Bt = ζ1[0,t] for all t ≥0.
Theorem 19.16 (multiple and iterated integrals) Let B be a Brownian mo-tion on R+ generated by an isonormal Gaussian process ζ on L2(λ). Then for any f ∈L2(λn), we have ζnf = dBtn dBtn−1· · · ˆ f(t1, . . . , tn) dBt1 a.s.
(6) Though a formal verification is easy, the existence of the integrals on the right depends in a subtle way on the choice of suitable versions in each step.
The existence of such versions is regarded as part of the assertion.
Proof: We prove by induction that the iterated integral V k tk+1,...,tn = dBtk dBtk−1· · · ˆ f(t1, . . . , tn) dBt1 exists for almost all tk+1, . . . , tn, and that V k has a version supported by Δn−k, which is progressive in the variable tk+1 for fixed tk+2, . . . , tn. We further need to establish the isometry E V k tk+1,...,tn 2 = · · · ˆ f(t1, . . . , tn) 2dt1 · · · dtk, (7) which allows us in the next step to define V k+1 tk+2,...,tn for almost all tk+2, . . . , tn.
The integral V 0 = ˆ f has clearly the stated properties. Now suppose that a version of the integral V k−1 tk,...,tn has been constructed with the desired properties.
For any tk+1, . . . , tn such that (7) is finite, Theorem 18.25 shows that the process Xk t,tk+1,...,tn = t 0 V k−1 tk,...,tn dBtk, t ≥0, has a progressive version that is a.s. continuous in t for fixed tk+1, . . . , tn. By Proposition 18.15 we obtain V k tk+1,...,tn = Xk tk+1,tk+1,...,tn a.s., tk+1, . . . , tn ≥0, and the progressivity carries over to V k, regarded as a process in tk+1 for fixed tk+2, . . . , tn. Since V k−1 is supported by Δn−k+1, we may choose Xk to be 19. Continuous Martingales and Brownian Motion 431 supported by R+ × Δn−k, which ensures that V k will be supported by Δn−k.
Finally, formula (7) for V k−1 yields E V k tk+1,...,tn 2 = E V k−1 tk,...,tn 2dtk = · · · ˆ f(t1, . . . , tn) 2dt1 · · · dtk.
To prove (6), we note that the right-hand side is linear and L2-continuous in f. Furthermore, the two sides agree for indicator functions of rectangles in Δn. The equality extends by a monotone-class argument to arbitrary indicator functions in Δn, and the further extension to L2(Δn) is immediate. It remains to note that ζnf = ζn˜ f = ζnˆ f for any f ∈L2(λn).
2 For a Brownian functional with mean 0 and finite variance, we have es-tablished both a chaos expansion in Theorem 14.26 and a stochastic integral representation in Lemma 19.12. We proceed to examine how the two formulas are related. For any f ∈L2(λn), we define ft(t1, . . . , tn−1) = f(t1, . . . , tn−1, t), and write ζn−1f(t) = ζn−1ft when ∥ft∥< ∞.
Proposition 19.17 (chaos and integral representations) Let ζ be an isonor-mal Gaussian process on L2(λ), generating a Brownian motion B, and let ξ ∈L2(ζ) with chaos expansion Σ n>0 ζnfn. Then a.s.
ξ = ∞ 0 Vt dBt, Vt = Σ n>0 ζn−1ˆ fn(t), t ≥0.
Proof: Proceeding as in the previous proof, we get for any m ∈N Σ n≥m E ζn−1ˆ fn(t) 2dt = Σ n≥m ∥ˆ fn∥2 = Σ n≥m E(ζnfn)2 < ∞.
(8) Since the integrals ζnf are orthogonal for different n, the series for Vt converges in L2 for almost every t ≥0. On the exceptional null set, we may redefine Vt = 0. Choosing progressive versions of the integrals ζn−1ˆ fn(t), as before, we see from the proof of Corollary 5.33 that even the sum V can be chosen to be progressive. Applying (8) with m = 1 gives V ∈L(B).
Using Theorem 19.16, we get by a formal calculation ξ = Σ n≥1 ζnfn = Σ n≥1 ζn−1ˆ fn(t) dBt = dBt Σ n≥1 ζn−1ˆ fn(t) = Vt dBt.
To justify the interchange of integration and summation, we may use (8) and let m →∞to obtain 432 Foundations of Modern Probability E dBt Σ n≥m ζn−1ˆ fn(t) 2 = Σ n≥m E ζn−1ˆ fn(t) 2dt = Σ n≥m E(ζnfn)2 →0.
2 Now let P, Q be probability measures on a common measurable space (Ω, A), endowed with a right-continuous and P-complete filtration (Ft). When Q ≪P on Ft, we write Zt for the corresponding density, so that Q = Zt · P on Ft. Since the martingale property depends on the choice of probability measure, we need to distinguish between P-martingales and Q -martingales.
Let EP denote integration with respect to P, and write F∞= t Ft.
We summarize the basic properties of such measure transformations. Recall from Theorem 9.28 that any F-martingale has an rcll version.
Lemma 19.18 (change of measure) Let P, Q be probability measures on a measurable space Ω with a right-continuous, P-complete filtration F, such that Q = Zt · P on Ft, t ≥0, for an adapted process Z. Then for any adapted process X, (i) Z is a P-martingale, (ii) Z is uniformly integrable ⇔Q ≪P on F∞, (iii) X is a Q-martingale ⇔XZ is a P-martingale.
When Z is right-continuous and X is rcll, we have also (iv) Q = Zτ · P on Fτ ∩{τ < ∞} for any optional time τ, (v) X is a local Q-martingale ⇔XZ is a local P-martingale, (vi) inf s≤t Zs > 0 a.s. Q, t > 0.
Proof: (iii) For any adapted process X, we note that Xt is Q -integrable iff Xt Zt is P -integrable. When this holds for all t, the Q -martingale property of X becomes A Xs dQ = A Xt dQ, A ∈Fs, s < t.
By the definition of Z, this is equivalent to EP XsZs; A = EP XtZt; A , A ∈Fs, s < t, which means that XZ is a P -martingale.
(i)−(ii): Choosing Xt ≡1 in (iii), we see that Z is a P -martingale. Now let Z be uniformly P -integrable, say with L1-limit Z∞. For any t < u and A ∈Ft, we have QA = EP(Zu; A). As u →∞we get QA = EP(Z∞; A), which extends to arbitrary A ∈F∞by a monotone-class argument. Thus, Q = Z∞·P on F∞. Conversely, if Q = ξ · P on F∞, then EPξ = 1, and the P -martingale Mt = EP( ξ |Ft) satisfies Q = Mt · P on Ft for every t. But then Zt = Mt a.s.
for all t, and Z is uniformly P -integrable with limit ξ.
(iv) By optional sampling, 19. Continuous Martingales and Brownian Motion 433 QA = EP Zτ∧t; A , A ∈Fτ∧t, t ≥0, and so Q A; τ ≤t = EP Zτ; A ∩{τ ≤t} , A ∈Fτ, t ≥0.
The assertion now follows by monotone convergence as t →∞.
(v) For any optional time τ, we need to show that Xτ is a Q -martingale iff(XZ)τ is a P -martingale. This may be seen as in (iv), once we note that Q = Zτ t · P on Fτ∧t for all t.
(vi) By Lemma 9.32 it is enough to show that Zt > 0 a.s. Q for every t > 0.
This is clear from the fact that Q{Zt = 0} = EP(Zt; Zt = 0) = 0.
2 The measure Q is usually not given at the outset, but needs to be con-structed from the martingale Z. This requires some regularity conditions on the underlying probability space.
Lemma 19.19 (existence) Let Z ≥0 be a martingale with Z0 = 1, on the canonical probability space (Ω, P) with induced filtration F, where Ω = D R+,S for a Polish space S. Then there exists a probability measure Q on Ω with Q = Zt · P on Ft, t ≥0.
Proof: For every t ≥0 we may introduce the probability measure Qt = Zt·P on Ft, which may be regarded as defined on D [0,t],S. Since the latter spaces are again Polish for the Skorohod topology, Corollary 8.22 yields a probability measure Q on D R+,S with projections Qt. The measure Q has clearly the re-quired properties.
2 We show how the drift of a continuous semi-martingale is transformed under a change of measure with a continuous density Z. An extension appears in Theorem 20.9.
Theorem 19.20 (transformation of drift, Girsanov, van Schuppen & Wong) Consider some probability measures P, Q and a continuous process Z, such that Q = Zt · P on Ft, t ≥0.
Then for any continuous local P-martingale M, we have the local Q-martingale ˜ M = M −Z−1 · [M, Z].
Proof: First let Z−1 be bounded on the support of [M].
Then ˜ M is a continuous P -semi-martingale, and so by Proposition 18.14 and an integration by parts, ˜ MZ −( ˜ MZ)0 = ˜ M · Z + Z · ˜ M + [ ˜ M, Z] = ˜ M · Z + Z · M −[M, Z] + [ ˜ M, Z] = ˜ M · Z + Z · M, 434 Foundations of Modern Probability which shows that ˜ MZ is a local P-martingale. Hence, by Lemma 19.18, ˜ M is a local Q -martingale.
For general M, define τn = inf{t ≥0; Zt < 1/n}, and note as before that ˜ M τn is a local Q -martingale for every n ∈N. Since τn →∞a.s. Q by Lemma 19.18, ˜ M is a local Q -martingale by Lemma 18.1.
2 Next we show how the basic notions of stochastic calculus are preserved by a change of measure. Then let [X]P be the quadratic variation of X under the probability measure P, write LP(X) for the class of processes V that are X-integrable under P, and let (V ·X)P be the corresponding stochastic integral.
Proposition 19.21 (preservation properties) Let Q = Zt · P on Ft for all t ≥0, where Z is continuous. Then (i) a continuous P-semi-martingale X is also a Q -semi-martingale, and [X]P = [X]Q a.s. Q, (ii) LP(X) ⊂LQ(X), and for any V ∈LP(X), (V · X)P = (V · X)Q a.s. Q, (iii) for any continuous local P-martingale M and process V ∈LP(M), (V · M) ∼= V · ˜ M a.s. Q, whenever either side exists.
Proof: (i) Consider a continuous P -semi-martingale X = M +A, where M is a continuous local P -martingale and A is a process of locally finite variation.
Under Q we may write X = ˜ M + Z−1 · [M, Z] + A, where ˜ M is the continuous local Q -martingale of Theorem 19.20, and we note that Z−1 · [M, Z] has locally finite variation, since Z > 0 a.s. Q by Lemma 19.18. Thus, X is also a semi-martingale under Q. The statement for [X] is now clear from Proposition 18.17.
(ii) If V ∈LP(X), then V 2 ∈LP([X]) and V ∈LP(A), and so the same relations hold under Q, and we get V ∈LQ( ˜ M + A). To get V ∈LQ(X), it remains to show that V ∈LQ(Z−1 · [M, Z]). Since Z > 0 under Q, it is equivalent to show that V ∈LQ([M, Z]), which is clear by Proposition 18.9, since [M, Z]Q = [ ˜ M, Z]Q and V ∈LQ( ˜ M).
(iii) Note that LQ(M) = LQ( ˜ M) as before. If V belongs to either class, then under Q, Proposition 18.14 yields the a.s. relations (V · M)∼= V · M −Z−1 · [V · M, Z] = V · M −V Z−1 · [M, Z] = V · ˜ M.
2 19. Continuous Martingales and Brownian Motion 435 In particular, Theorem 19.3 shows that if B is a P -Brownian motion in Rd, then ˜ B is a Brownian motion under Q, since both processes are continuous martingales with the same covariation process.
The preceding theory simplifies when P and Q are equivalent on each Ft, since in that case Z > 0 a.s. P by Lemma 19.18. If Z is also continuous, it can be expressed as an exponential martingale. More general processes of this kind are considered in Theorem 20.8.
Lemma 19.22 (real exponential martingales) A continuous process Z > 0 is a local martingale iffa.s.
Zt = E(M)t ≡exp Mt −1 2[M]t , t ≥0, (9) for a continuous local martingale M.
Here M is a.s. unique, and for any continuous local martingale N, [M, N] = Z−1 · [Z, N] a.s.
Proof: If M is a continuous local martingale, so is E(M) by Itˆ o’s formula.
Conversely, let Z > 0 be a continuous local martingale. Then Corollary 18.19 yields log Z −log Z0 = Z−1 · Z −1 2 Z−2 · [Z] = Z−1 · Z −1 2 [Z−1 · Z], and (9) follows with M = log Z0 +Z−1 ·Z. The last assertion is clear from this expression, and the uniqueness of M follows from Proposition 18.2.
2 The drift of a continuous semi-martingale can sometimes be eliminated by a suitable change of measure. Beginning with the case of a Brownian motion B with deterministic drift, we need to show that E(B) is a true martingale, which follows easily by a direct computation. By P ∼Q we mean that P ≪Q and Q ≪P. Let L2 loc be the class of measurable functions f : R+ →Rd, such that |f|2 is locally Lebesgue integrable.
For any f ∈L2 loc we define f · λ = (f 1 · λ, . . . , f d · λ), where the components on the right are ordinary Lebesgue integrals.
Theorem 19.23 (shifted Brownian motion, Cameron & Martin) Let B be a canonical Brownian motion in Rd with induced, complete filtration F, fix a continuous function h : R+ →Rd with h0 = 0, and put Ph = L(B + h). Then these conditions are equivalent: (i) Ph ∼P0 on Ft for all t ≥0, (ii) h = f · λ for an f ∈L2 loc.
Under those conditions, (iii) Ph = E(f · B)t · P0.
436 Foundations of Modern Probability Proof, (i) ⇒(ii)−(iii): Assuming (i), Lemma 19.18 yields a P0 -martingale Z > 0, such that Ph = Zt · P0 on Ft for all t ≥0. Here Z is a.s. continuous by Theorem 19.11, and by Lemma 19.22 it equals E(M) for a continuous local P0 -martingale M. Using Theorem 19.11 again, we note that M = V ·B = Σ iV i·Bi a.s. for some processes V i ∈L(B1), and in particular V ∈L2 loc a.s.
By Theorem 19.20, the process ˜ B = B −[B, M] = B −V · λ is a Ph -Brownian motion, and so under Ph the canonical process B has the semi-martingale decompositions B = ˜ B + V · λ = (B −h) + h.
Since the decomposition is a.s. unique by Proposition 18.2, we get V ·λ = h a.s.
Then also h = f · λ for some measurable function f on R+, and since clearly λ{t ≥0; Vt ̸= ft} = 0 a.s., we have even f ∈L2 loc, proving (ii). Furthermore, M = V · B = f · B a.s., and (iii) follows.
(ii) ⇒(i): Assume (ii). Since M = f ·B is a time-changed Brownian motion under P0, the process Z = E(M) is a P0 -martingale, and Lemma 19.19 yields a probability measure Q on CR+,Rd, such that Q = Zt · P0 on Ft for all t ≥0.
Moreover, Theorem 19.20 shows that the process ˜ B = B −[B, M] = B −h is a Q -Brownian motion, and so Q = Ph. In particular, this proves (i).
2 For more general semi-martingales, Theorem 19.20 and Lemma 19.22 sug-gest that we might remove the drift by a change of measure of the form Q = E(M)t · P on Ft for all t ≥0, where M is a continuous local martin-gale with M0 = 0. Then by Lemma 19.18 we need Z = E(M) to be a true martingale, which can often be verified through the following criterion.
Theorem 19.24 (uniform integrability, Novikov) Let M be a real, continuous local martingale with M0 = 0. Then E(M) is a uniformly integrable martingale, whenever E exp 1 2 [M]∞ < ∞.
This will first be proved in a special case.
Lemma 19.25 (Wald’s identity) For any real Brownian motion B and op-tional time τ, we have E eτ/2 < ∞ ⇒ E exp Bτ −1 2 τ = 1.
19. Continuous Martingales and Brownian Motion 437 Proof: First consider the hitting times τb = inf t ≥0; Bt = t −b , b > 0.
Since the τb remain optional with respect to the right-continuous, induced filtration, we may take B to be a canonical Brownian motion with associated distribution P = P0. Defining ht ≡t and Z = E(B), we see from Theorem 19.23 that Ph = Zt · P on Ft for all t ≥0. Since τb < ∞a.s. under both P and Ph, Lemma 19.18 yields E exp Bτb −1 2 τb = EZτb = E Zτb; τb < ∞ = Ph{τb < ∞} = 1.
The stopped process Mt ≡Zt∧τb is a positive local martingale, and by Fatou’s lemma it is also a super-martingale on [0, ∞]. Since clearly EM∞= EZτb = 1 = EM0, we see from the Doob decomposition that M is a true martingale on [0, ∞]. For τ as stated, we get by optional sampling 1 = EMτ = EZτ∧τb = E Zτ; τ ≤τb + E Zτb; τ > τb .
(10) Using the definition of τb and the condition on τ, we get as b →∞ E Zτb; τ > τb = e−bE eτb/2; τ > τb ≤e−bEeτ/2 →0, and so the last term in (10) tends to zero. Since also τb →∞, the first term on the right tends to EZτ by monotone convergence, and we get EZτ = 1. 2 Proof of Theorem 19.24: Since E(M) is a super-martingale on [0, ∞], it suffices to show that, under the stated condition, E E(M)∞= 1. Using Theo-rem 19.4 and Proposition 9.9, we may then reduce to the statement of Lemma 19.25.
2 This yields in particular a classical result for Brownian motion: Corollary 19.26 (drift removal, Girsanov) Consider a Brownian motion B and a progressive process V in Rd, such that E exp 1 2 |V |2 · λ ∞ < ∞.
Then (i) Q = E(V · B)∞· P is a probability measure, (ii) ˜ B = B −V · λ is a Q-Brownian motion.
Proof: Combine Theorems 19.20 and 19.24.
2 438 Foundations of Modern Probability Exercises 1. Let B be a real Brownian motion and let V ∈L(B). Show that X = V · B is a time-changed Brownian motion, express the required time change τ in terms of V , and verify that X is τ-continuous.
2. Consider a real Brownian motion B and some progressive processes V t ∈L(B), t ≥0. Give conditions on the V t for the process Xt = (V t · B)∞to be Gaussian.
3. Let M be a continuous local martingale in Rd, and let V 1, . . . , V d be predictable processes on S × R+, such that Σ ij ∞ 0 V i s,rV j t,r d[Mi, M j]r =ρs,t is a.s. non-random for all s, t ∈S. Show that the process Xs = i ∞ 0 V i s,r dM i r is Gaussian on S with covariance function ρ. (Hint: Use Lemma 19.2.) 4. For M in Theorem 19.4, let [M]∞= ∞a.s. Show that M is τ-continuous in the sense of Theorem 18.24, and conclude from Theorem 19.3 that B = M ◦τ is a Brownian motion. Also show for any V ∈L(M) that (V ◦τ) · B = (V · M) ◦τ a.s.
5. Deduce Theorem 19.3 for d = 1 from Theorem 22.17. (Hint: Proceed as above to construct a discrete-time martingale with jump sizes h. Let h →0, and use a version of Proposition 18.17.) 6. Let M be a real, continuous local martingale. Show that M converges a.s. on the set {supt Mt < ∞}. (Hint: Use Theorem 19.4.) 7. Let F, G be right-continuous filtrations, where G is a standard extension of F, and let τ be an F-optional time. Show that Fτ ⊂Gτ⊥ ⊥Fτ F. (Hint: Apply optional sampling to the uniformly integrable martingale Mt = E(ξ|Ft), for any ξ ∈L1(F∞).) 8. Let F, G be filtrations on a common probability space (Ω, A, P). Show that G is a standard extension of F iffevery F-martingale is also a G-martingale. (Hint: Consider martingales of the form Mt = E(ξ| Ft), where ξ ∈L1(F∞). Here Mt is Gt-measurable for all ξ iffFt ⊂Gt, and then Mt = E(ξ| Gt) a.s. for all ξ iffF⊥ ⊥FtGt by Proposition 8.9.) 9. Let M be a non-trivial, isotropic, continuous local martingale in Rd, and fix an affine transformation f on Rd. Show that even f(M) is isotropic ifff is conformal (i.e., the composition of a rigid motion with a change of scale).
10. Show that for any d ≥2, the path of a Brownian motion in Rd has Lebesgue measure 0. (Hint: Use Theorem 19.6 and Fubini’s theorem.) 11. Deduce Theorem 19.6 (ii) from Theorem 12.8. (Hint: Define τ = inf{t; |Bt| = 1}, and iterate the construction to form a random walk in Rd with steps of size 1.) 12. Give an example of a bounded, continuous martingale M ≥0 with M0 = 1, such that P{Mt = 0} > 0 for large t > 0. (Hint: We may take M to be a Brownian motion stopped at the levels 0 and 2.) 13. For any Brownian motion B and optional time τ < ∞, show that E exp(Bτ − 1 2 τ) ≤1, where the inequality may be strict.
(Hint: Truncate and use Fatou’s lemma. Note that t −2Bt →∞by the law of large numbers.) 14. In Theorem 19.23, give necessary and sufficient conditions for Ph ∼P0 on F∞, and find the associated density.
15. Show by an example that Lemma 19.25 may fail without a moment condition on τ. (Hint: Choose a τ > 0 with Bτ = 0.) Chapter 20 Semi-Martingales and Stochastic Integration Predictable covariation, L2-martingale and semi-martingale integrals, quadratic variation and covariation, substitution rule, Dol´ eans exponen-tial, change of measure, BDG inequalities, martingale integral, purely discontinuous semi-martingales, semi-martingale and martingale decom-positions, exponential super-martingales and inequalities, quasi-martin-gales, stochastic integrators In the last two chapters, we have seen how the Itˆ o integral with associated transformation rules leads to a powerful stochastic calculus for continuous semi-martingales, defined as sums of continuous local martingales and continuous processes of locally finite variation. This entire theory will now be extended to semi-martingales with possible jump discontinuities. In particular, the semi-martingales turn out to be precisely the processes that can serve as stochastic integrators with reasonable adaptedness and continuity properties.
The present theory is equally important but much more subtle, as it de-pends crucially on the Doob–Meyer decomposition from Chapter 10 and the powerful BDG inequalities for general local martingales.
Our construction of the general stochastic integral proceeds in three steps.
First we imitate the definition of the L2-integral V · M from Chapter 18, using a predictable version ⟨M, N⟩of the covariation process. A suitable truncation then allows us to extend the integral to arbitrary semi-martingales X and bounded, pre-dictable processes V . The ordinary covariation [X, Y ] can now be defined by the integration-by-parts formula, and we may use the general BDG inequalities to extend the martingale integral V · M to more general integrands V .
Once the stochastic integral is defined, we may develop a stochastic calculus for general semi-martingales.
In particular, we prove an extension of Itˆ o’s formula, solve a basic stochastic differential equation, and establish a general Girsanov-type theorem for absolutely continuous changes of the probability measure. The latter material extends the relevant parts of Chapters 18–19.
The stochastic integral and covariation process, along with the Doob–Meyer decomposition from Chapter 10, provide the basic tools for a more detailed study of general semi-martingales. In particular, we may now establish two general decompositions, similar to the decompositions of optional times and in-creasing processes in Chapter 10. We further derive some exponential inequal-ities for martingales with bounded jumps, characterize local quasi-martingales 439 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 440 Foundations of Modern Probability as special semi-martingales, and show that no continuous extension of the pre-dictable integral exists beyond the context of semi-martingales.
Throughout this chapter, let M2 be the class of uniformly square-integrable martingales, and note as in Lemma 18.4 that M2 is a Hilbert space for the norm ∥M∥= (EM 2 ∞)1/2. Let M2 0 be the closed linear subspace of martingales M ∈M2 with M0 = 0. The corresponding classes M2 loc and M2 0,loc are defined as the sets of processes M, such that the stopped versions M τn belong to M2 or M2 0, respectively, for suitable optional times τn →∞.
For every M ∈M2 loc, we note that M 2 is a local sub-martingale.
The corresponding compensator ⟨M⟩is called the predictable quadratic variation of M. More generally, we define1 the predictable covariation ⟨M, N⟩of two processes M, N ∈M2 loc as the compensator of MN, also obtainable through the polarization formula 4⟨M, N⟩= ⟨M + N⟩−⟨M −N⟩.
Note that ⟨M, M⟩= ⟨M⟩. If M and N are continuous, then clearly ⟨M, N⟩= [M, N] a.s. Here we list some further useful properties.
Proposition 20.1 (predictable covariation) For any M, N, M n ∈M2 loc, we have a.s.
(i) ⟨M, N⟩is symmetric, bilinear with ⟨M, N⟩= ⟨M −M0, N⟩, (ii) ⟨M⟩= ⟨M, M⟩is non-decreasing, (iii) |⟨M, N⟩| ≤ |d⟨M, N⟩| ≤⟨M⟩1/2⟨N⟩1/2, (iv) ⟨M, N⟩τ = ⟨M τ, N⟩= ⟨M τ, N τ⟩for any optional time τ, (v) ⟨M n⟩∞ P →0 ⇒(M n −M n 0 )∗ P →0.
Proof, (i)−(iv): Lemma 10.11 shows that ⟨M, N⟩is the a.s. unique pre-dictable process of locally integrable variation starting at 0, such that MN − ⟨M, N⟩is a local martingale. The symmetry and bilinearity follow immedi-ately, as does last property in (i), since MN0, M0N, and M0N0 are all local martingales. Moreover, we get (iii) as in Proposition 18.9, and (iv) as in The-orem 18.5.
(v) Here we may take M n 0 = 0 for all n. Let ⟨M n⟩∞ P →0, fix any ε > 0, and define τn = inf{t; ⟨M n⟩t ≥ε}. Since ⟨M n⟩is predictable, even τn is predictable by Theorem 10.14, say with announcing sequence τnk ↑τn. The latter may be chosen such that M n becomes an L2-martingale and (M n)2−⟨M n⟩a uniformly integrable martingale on every interval [0, τnk]. Then Proposition 9.17 yields E(M n)∗2 τnk < ⌢E(M n)2 τnk = E⟨M n⟩τnk ≤ε, 1The predictable covariation ⟨·, ·⟩must not be confused with the inner product appearing in Chapters 14, 21, and 35.
20. Semi-Martingales and Stochastic Integration 441 and as k →∞we get E(M n)∗2 τn−< ⌢ε. Now fix any δ > 0, and write P (M n)∗2 > δ ≤P{τn < ∞} + δ−1E(M n)∗2 τn− < ⌢P ⟨M n⟩∞≥ε + δ−1ε.
Here the right-hand side tends to zero as n →∞and then ε →0.
2 The predictable quadratic variation can be used to extend the Itˆ o integral from Chapter 18. Then let E be the class of bounded, predictable step processes V with jumps at finitely many fixed times. The corresponding integral V · X is referred to as an elementary predictable integral.
Given any M ∈M2 loc, let L2(M) be the class of predictable processes V , such that (V 2 · ⟨M⟩)t < ∞a.s. for all t > 0.
Assuming M ∈M2 loc and V ∈L2(M), we may form an integral process V · M belonging to M2 0,loc. In the following statement, we assume that M, N, Mn ∈M2 loc, and let U, V, Vn be predictable processes such that the appropriate integrals exist.
Theorem 20.2 (L2-martingale integral, Courrege, Kunita & Watanabe) The elementary predictable integral extends a.s. uniquely to a bilinear map of any M ∈M2 loc and V ∈L2(M) into a process V · M ∈M2 0,loc, such that a.s.
(i) V 2 n · ⟨Mn⟩ t P →0 ⇒ Vn · Mn ∗ t P →0, t > 0, (ii) ⟨V · M, N⟩= V · ⟨M, N⟩, N ∈M2 loc, (iii) U · (V · M) = (UV ) · M, (iv) Δ(V · M) = V (ΔM), (v) (V · M)τ = V · M τ = (V 1[0,τ]) · M for any optional time τ.
The integral V · M is a.s. uniquely determined by (ii).
The proof depends on an elementary approximation of predictable pro-cesses, similar to Lemma 18.23 in the continuous case.
Lemma 20.3 (approximation) Let V be a predictable process with |V |p ∈ L(A), where A is increasing and p ≥1. Then there exist some V1, V2, . . . ∈E with |Vn −V |p · A t →0 a.s., t > 0.
Proof: It is enough to establish the approximation (|Vn −V |p · A)t P →0.
By Minkowski’s inequality we may then approximate in steps, and by domi-nated convergence we may first take V to be simple. Then each term can be approximated separately, and so we may next choose V = 1B for a predictable set B. Approximating separately on disjoint intervals, we may finally assume that B ⊂Ω × [0, t] for some t > 0. The desired approximation then follows from Lemma 10.2 by a monotone-class argument.
2 442 Foundations of Modern Probability Proof of Theorem 20.2: As in Theorem 18.11, we may construct the integral V ·M as the a.s. unique process in M2 0,loc satisfying (ii). The mapping (V, M) → V ·M is clearly bilinear and extends the elementary predictable integral, just as in Lemma 18.10. Properties (iii) and (v) may be proved as in Propositions 18.14 and 18.15. The stated continuity follows immediately from (ii) and Proposition 20.1 (v). To prove the asserted uniqueness, it is then enough to apply Lemma 20.3 with A = ⟨M⟩and p = 2.
To prove (iv), we may apply Lemma 20.3 with At = ⟨M⟩t + Σ s≤t (ΔMs)2, t ≥0, to ensure the existence of some processes Vn ∈E, such that a.s.
Vn(ΔM) →V (ΔM), Vn · M −V · M ∗→0.
In particular, Δ(Vn · M) →Δ(V · M) a.s., and (iv) follows from the corre-sponding relation for the elementary integrals Vn ·M, once we have shown that Σs≤t(ΔMs)2 < ∞a.s. For the latter condition, we may take M ∈M2 0 and define tn,k = kt 2−n for k ≤2n. Then Fatou’s lemma yields E Σ s≤t (ΔMs)2 ≤E lim inf n→∞Σ k Mtn,k −Mtn,k−1 2 ≤lim inf n→∞E Σ k Mtn,k −Mtn,k−1 2 = EM 2 t < ∞.
2 A semi-martingale is defined as a right-continuous, adapted process X, admitting a decomposition M + A into a local martingale M and a process A of locally finite variation starting at 0. If the variation of A is even locally integrable, we can write X = (M+A−ˆ A)+ ˆ A, where ˆ A denotes the compensator of A. We may then choose A to be predictable, in which case the decomposition is a.s. unique by Propositions 18.2 and 10.16, and X is called a special semi-martingale with canonical decomposition M + A.
The L´ evy processes constitute basic examples of semi-martingales, and we note that a L´ evy process is a special semi-martingale iffits L´ evy measure ν satisfies (x2 ∧|x|) ν(dx) < ∞. From Theorem 10.5 we further see that any local sub-martingale is a special semi-martingale.
We may extend the stochastic integration to general semi-martingales. For the moment we consider only locally bounded integrands, which covers most applications of interest.
Theorem 20.4 (semi-martingale integral, Dol´ eans-Dade & Meyer) The L2-integral of Theorem 20.2 and the ordinary Lebesgue–Stieltjes integral extend a.s. uniquely to a bilinear map of any semi-martingale X and locally bounded, predictable process V into a semi-martingale V · X. The integration map also satisfies 20. Semi-Martingales and Stochastic Integration 443 (i) properties (iii)−(v) of Theorem 20.2, (ii) V ≥|Vn| →0 ⇒(Vn · X)∗ t P →0, t > 0, (iii) when X is a local martingale, so is V · X.
Our proof relies on a basic truncation: Lemma 20.5 (truncation, Dol´ eans-Dade, Jacod & M´ emin, Yan) Any local martingale M has a decomposition M ′ + M ′′ into local martingales, such that (i) M ′ has locally integrable variation, (ii) |ΔM ′′| ≤1 a.s.
Proof: Define At = Σ s≤t (ΔMs) 1 |ΔMs| > 1 2 , t ≥0.
By optional sampling, A has locally integrable variation. Let ˆ A be the com-pensator of A, and put M ′ = A −ˆ A and M ′′ = M −M ′. Then M ′ and M ′′ are again local martingales, and M ′ has locally integrable variation. Since also |ΔM ′′| ≤ ΔM −ΔA + |Δ ˆ A| ≤1 2 + |Δ ˆ A|, it remains to show that |Δ ˆ A| ≤1 2. Since the constructions of A and ˆ A commute with optional stopping, we may take M and M ′ to be uniformly integrable.
Since ˆ A is predictable, the times τ = n ∧inf{t; |Δ ˆ A| > 1 2} are predictable by Theorem 10.14, and it suffices to show that |Δ ˆ Aτ| ≤1 2 a.s. Since clearly E(ΔMτ| Fτ−) = E(ΔM ′ τ| Fτ−) = 0 a.s., Lemma 10.3 yields |Δ ˆ Aτ| = E ΔAτ Fτ− = E ΔMτ; |ΔMτ| > 1 2 Fτ− = E ΔMτ; |ΔMτ| ≤1 2 Fτ− ≤1 2.
2 Proof of Theorem 20.4: By Lemma 20.5 we may write X = M + A, where M is a local martingale with bounded jumps, hence a local L2-martingale, and A has locally finite variation. For any locally bounded, predictable process V , we may then define V ·X = V ·M +V ·A, where the first term is the integral in Theorem 20.2 and the second term is an ordinary Lebesgue–Stieltjes integral.
If V ≥|Vn| →0, we get by dominated convergence V 2 n · ⟨M⟩ t + Vn · A ∗ t →0, and so by Theorem 20.2 we have (Vn · X)∗ t P →0 for all t > 0.
For the uniqueness it suffices to prove that, if M = A is a local L2-martingale of locally finite variation, then V · M = V · A a.s. for every locally bounded, predictable process V , where V · M is the integral in Theorem 20.2 444 Foundations of Modern Probability and V ·A is an ordinary Stieltjes integral. The two integrals clearly agree when V ∈E. For general V , we may approximate as in Lemma 20.3 by processes Vn ∈E, such that a.s. (Vn −V )2 · ⟨M⟩ ∗+ |Vn −V | · A ∗→0.
Then for any t > 0, (Vn · M)t P →(V · M)t, (Vn · A)t →(V · A)t, and the desired equality follows.
(iii) By Lemma 20.5 and a suitable localization, we may assume that V is bounded and X has integrable variation A. By Lemma 20.3 we may next choose some uniformly bounded processes V1, V2, . . . ∈E, such that (|Vn −V |·A)t →0 a.s. for every t ≥0. Then (Vn · X)t →(V · X)t a.s. for all t, which extends to L1 by dominated convergence. Thus, the martingale property of Vn · X carries over to V · X.
2 For any semi-martingales X, Y , the left-continuous versions (X−)t = Xt− and (Y−)t = Yt−are locally bounded and predictable, and hence may serve as integrands in general stochastic integrals. We may then define2 the quadratic variation [X] and covariation [X, Y ] by the integration-by-parts formulas [X] = X2 −X2 0 −2 X−· X, [X, Y ] = X Y −X0 Y0 −X−· Y −Y−· X = 1 4 [X + Y ] −[X −Y ] , (1) so that in particular [X] = [X, X].
We list some further properties of the covariation.
Theorem 20.6 (covariation) For any semi-martingales X, Y , we have a.s.
(i) [X, Y ] is symmetric, bilinear with [X, Y ] = [X −X0, Y ], (ii) [X] = [X, X] is non-decreasing, (iii) | [X, Y ] | ≤ |d [X, Y ]| ≤[X]1/2 [Y ]1/2, (iv) Δ[X] = (ΔX)2, Δ[X, Y ] = (ΔX)(ΔY ), (v) [V · X, Y ] = V · [X, Y ] for locally bounded, predictable V , (vi) [Xτ, Y ] = [Xτ, Y τ] = [X, Y ]τ for any optional time τ, (vii) M, N ∈M2 loc ⇒[M, N] has compensator ⟨M, N⟩, (viii) [X, A]t = Σ s≤t (ΔXs)(ΔAs) when A has locally finite variation.
2Both covariation processes [X, Y ] and ⟨X, Y ⟩remain important, as is evident from, e.g., Theorems 20.9, 20.16, and 20.17.
20. Semi-Martingales and Stochastic Integration 445 Proof: (i) The symmetry and bilinearity of [X, Y ] are obvious from (1), and the last relation holds since [X, Y0] = 0.
(ii) Proposition 18.17 extends with the same proof to general semi-martin-gales. In particular, [X]s ≤[X]t a.s. for all s ≤t. By right-continuity, we can choose the exceptional null set to be independent of s and t, which means that [X] is a.s. non-decreasing.
(iii) Proceed as in case of Proposition 18.9.
(iv) By (1) and Theorem 20.2 (iv), Δ[X, Y ]t = Δ(X Y )t −Δ(X−· Y )t −Δ(Y−· X)t = Xt Yt −Xt−Yt−−Xt−(ΔYt) −Yt−(ΔXt) = (ΔXt)(ΔYt).
(v) For V ∈E the relation follows most easily from the extended version in Proposition 18.17. Also note that both sides are a.s. linear in V . Now let V, V1, V2, . . . be locally bounded and predictable with V ≥|Vn| →0. Then Vn · [X, Y ] →0 by dominated convergence, and Theorem 20.4 yields [Vn · X, Y ] = (Vn · X) Y −(Vn · X)−· Y −(Vn Y−) · X P →0.
A monotone-class argument yields an extension to arbitrary V .
(vi) Use (v) with V = 1[0,τ].
(vii) Since M−· N and N−· M are local martingales, the assertion follows from (1) and the definition of ⟨M, N⟩.
(viii) For step processes A, the stated relation follows from the extended version in Proposition 18.17. If instead ΔA ≤ε, we conclude from the same result and property (iii), along with the ordinary Cauchy inequality, that [X, A]2 t ∨ Σ s≤t (ΔXs)(ΔAs) 2 ≤[X]t [A]t ≤ε[X]t t 0 |dAs|.
The assertion now follows by a simple approximation.
2 We may now extend Itˆ o’s formula in Theorem 18.18 to a substitution rule for general semi-martingales. By a semi-martingale in Rd we mean a process X = (X1, . . . , Xd) composed by one-dimensional semi-martingales Xi. Let [Xi, Xj]c denote the continuous components of the finite-variation processes [Xi, Xj], and write ∂if and ∂2 ijf for the first and second order partial deriva-tives of f, respectively.
446 Foundations of Modern Probability Theorem 20.7 (substitution rule, Kunita & Watanabe) For any semi-mar-tingale X = (X1, . . . , Xd) in Rd and function f ∈C2(Rd), we have3 f(Xt) = f(X0) + t 0 ∂if(Xs−) dXi s + 1 2 t 0 ∂2 ijf(Xs−) d [Xi, Xj]c s + Σ s≤t Δf(Xs) −∂if(Xs−)(ΔXi s) .
(2) Proof: Let f ∈C2(Rd) satisfy (2), where the last term has locally finite variation. We show that (2) remains true for the functions gk(x) = xkf(x), k = 1, . . . , n. Then note that by (1), gk(X) = gk(X0) + Xk −· f(X) + f(X−) · Xk + [Xk, f(X)].
(3) Writing ˆ f(x, y) = f(x) −f(y) −∂if(y)(xi −yi), we get by (2) and Theorem 20.2 (iii) Xk −· f(X) = Xk −∂if(X−) · Xi + 1 2Xk −∂2 ijf(X−) · [Xi, Xj]c +Σs Xk s−ˆ f(Xs, Xs−).
(4) Next we see from (ii), (iv), (v), and (viii) of Theorem 20.6 that Xk, f(X) = ∂if(X−) · [Xk, Xi] + Σs (ΔXk s ) ˆ f(Xs, Xs−) = ∂if(X−) · [Xk, Xi]c + Σs (ΔXk s ){Δf(Xs)}.
(5) Inserting (4)−(5) into (3) and using the elementary formulas ∂i gk(x) = δikf(x) + xk ∂if(x), ∂2 ij gk(x) = δik ∂jf(x) + δjk ∂if(x) + xk ∂2 ijf(x), ˆ gk(x, y) = (xk −yk) f(x) −f(y) + yk ˆ f(x, y), followed by some simplification, we obtain the desired expression for gk(X).
Formula (2) is trivially true for constant functions, and it extends by in-duction and linearity to arbitrary polynomials. By Weierstrass’ theorem, a general function f ∈C2(Rd) can be approximated by polynomials, such that all derivatives up to second order tend to those of f, uniformly on every com-pact set. To prove (2) for f, it is then enough to show that the right-hand side tends to zero in probability, as f and its first and second order derivatives tend to zero, uniformly on compact sets.
For the integrals in (2), this holds by the dominated convergence property in Theorem 20.4, and it remains to consider the last term. Writing Bt = {x ∈Rd; |x| ≤X∗ t } and ∥g∥B = supB |g|, we get by Taylor’s formula in Rd Σ s≤t ˆ f(Xs, Xs−) < ⌢Σ i,j ∂2 ij f Bt Σ s≤t |ΔXs|2 ≤Σ i,j ∂2 ij f Bt Σ i [Xi]t →0.
3As always, summation over repeated indices is understood.
20. Semi-Martingales and Stochastic Integration 447 The same estimate shows that the last term has locally finite variation.
2 To illustrate the use of the general substitution rule, we consider a partial extension of Lemma 19.22 and Proposition 32.2 to general semi-martingales.
Theorem 20.8 (Dol´ eans exponential) For a semi-martingale X with X0 = 0, the equation Z = 1 + Z−· X has the a.s. unique solution Zt = E(X) ≡exp Xt −1 2 [X]c t Π s≤t (1 + ΔXs) e−ΔXs, t ≥0.
(6) The infinite product in (6) is a.s. absolutely convergent, since Σs≤t(ΔXs)2 ≤ [X]t < ∞. However, we may have ΔXs = −1 for some s > 0, in which case Z = 0 for t ≥s. The process E(X) in (6) is called the Dol´ eans exponential of X. For continuous X we get E(X) = exp(X −1 2[X]), conforming with the notation of Lemma 19.22. For processes A of locally finite variation, formula (6) simplifies to E(A) = exp(Ac t) Π s≤t (1 + ΔAs), t ≥0.
Proof of Theorem 20.8: To check that (6) is a solution, we may write Z = eY V , where Yt = Xt −1 2 [X]c t, Vt = Π s ≤t (1 + ΔXs) e−ΔXs.
Then Theorem 20.7 yields Zt −1 = (Z−· Y )t + (eY−· V )t + 1 2 (Z−· [X]c)t + Σ s≤t ΔZs −Zs−ΔXs −eYs−ΔVs .
(7) Now eY−· V = Σ eY−ΔV since V is of pure jump type, and furthermore ΔZ = Z−ΔX. Hence, the right-hand side of (7) simplifies to Z−· X, as desired.
To prove the uniqueness, let Z be an arbitrary solution, and put V = Ze−Y , where Y = X −1 2 [X]c as before. Then Theorem 20.7 gives Vt −1 = (e−Y−· Z)t −(V−· Y )t + 1 2 (V−· [X]c)t −(e−Y−· [X, Z]c)t + Σ s≤t ΔVs + Vs−ΔYs −e−Ys−ΔZs = (V−· X)t −(V−· X)t + 1 2 (V−· [X]c)t + 1 2 (V−· [X]c)t −(V−· [X]c)t + Σ s≤t ΔVs + Vs−ΔXs −Vs−ΔXs = Σ s≤t ΔVs.
Thus, V is purely discontinuous of locally finite variation. We may further compute ΔVt = Zt e−Yt −Zt−e−Yt− = (Zt−+ ΔZt) e−Yt−−ΔYt −Zt−e−Yt− = Vt− (1 + ΔXt) e−ΔXt −1 , 448 Foundations of Modern Probability which shows that Vt = 1 + (V−· A)t, At = Σ s≤t (1 + ΔXt) e−ΔXt −1 .
It remains to show that the homogeneous equation V = V−· A has the unique solution V = 0. Then define R t = (0,t] |dA|, and conclude from Theo-rem 20.7 and the convexity of the function x →xn that R n t = n (R n−1 − · R)t + Σ s≤t ΔR n s −n R n−1 s−ΔRs ≥n (R n−1 − · R)t.
(8) We prove by induction that V ∗ t ≤V ∗ t R n t n!
, t ≥0, n ∈Z+.
(9) This is obvious for n = 0, and if it holds for n −1, then by (8) V ∗ t = (V−· A)∗ t ≤V ∗ t (R n−1 − · R)t (n −1)!
≤V ∗ t R n t n!
, as required. Since R n t /n! →0 as n →∞, relation (9) yields V ∗ t = 0 for all t > 0.
2 The equation Z = 1 + Z−· X arises naturally in connection with changes of probability measure. Here we extend Theorem 19.20 to general local mar-tingales.
Theorem 20.9 (change of measure, van Schuppen & Wong) Consider some probability measures P, Q and a local martingale Z ≥0, such that Q = Zt · P on Ft for all t ≥0, and let M be a local P-martingale such that [M, Z] has locally integrable variation and P-compensator ⟨M, Z⟩. Then we have the local Q-martingale ˜ M = M −Z−1 −· ⟨M, Z⟩.
A lemma is needed for the proof.
Lemma 20.10 (integration by parts) For any semi-martingale X with X0 = 0 and predictable process A of locally finite variation, we have A X = A · X + X−· A a.s.
Proof: We need to show that ΔA · X = [A, X] a.s., which is equivalent to (0,t](ΔAs) dXs = Σ s≤t (ΔAs)(ΔXs), t ≥0, by Theorem 20.6 (viii). Since the series on the right is absolutely convergent by Cauchy’s inequality, we may reduce, by dominated convergence on each side, 20. Semi-Martingales and Stochastic Integration 449 to the case where A is constant apart from finitely many jumps. By Lemma 10.3 and Theorem 10.14, we may proceed to the case where A has at most one jump occurring at a predictable time τ. Introducing an announcing sequence (τn) and writing Y = ΔA · X, we get by Theorem 20.2 (v) Yτn∧t = 0 = Yt −Yt∧τ a.s., t ≥0, n ∈N.
Thus, even Y is constant, apart from a possible jump at τ. Finally, Theorem 20.2 (iv) yields ΔYτ = (ΔAτ)(ΔXτ) a.s. on {τ < ∞}.
2 Proof of Theorem 20.9: Let τn = inf{t; Zt < 1/n} for n ∈N, and note that τn →∞a.s. Q by Lemma 19.18. Hence, ˜ M is well defined under Q, and it suffices as in Lemma 19.18 to show that ( ˜ MZ)τn is a local P -martingale for every n. Writing m = for equality up to a local P -martingale, we see from Lemma 20.10 with X = Z and A = Z−1 −· ⟨M, Z⟩that, on every interval [0, τn], MZ m = [M, Z] m = ⟨M, Z⟩ = Z−· A m = AZ.
Thus, ˜ MZ = (M −A) Z m = 0, as required.
2 Using the last theorem, we may show that the class of semi-martingales is closed under absolutely continuous changes of probability measure. A special case of this result was previously obtained as part of Proposition 19.21.
Corollary 20.11 (preservation law, Jacod) If Q ≪P on Ft for all t > 0, then every P-semi-martingale is also a Q-semi-martingale.
Proof: Let Q = Zt · P on Ft for all t ≥0. We need to show that every local P -martingale M is a semi-martingale under Q. By Lemma 20.5, we may then take ΔM to be bounded, so that [M] is locally bounded. By Theorem 20.9 it suffices to show that [M, Z] has locally integrable variation, and by Theorem 20.6 (iii) it is then enough to prove that [Z]1/2 is locally integrable.
By Theorem 20.6 (iv), [Z]1/2 t ≤[Z]1/2 t−+ |ΔZt| ≤[Z]1/2 t−+ Z∗ t−+ |Zt|, t ≥0, and so the desired integrability follows by optional sampling.
2 Next we extend the BDG inequalities of Theorem 18.7 to general local martingales. Such extensions are possible only for exponents p ≥1.
Theorem 20.12 (norm comparison, Burkholder, Davis, Gundy) For any lo-cal martingale M with M0 = 0, we have4 E M ∗p ≍E [M]p/2 ∞, p ≥1.
(10) 4Here M ∗= supt |Mt|, and f ≍g means that f ≤cg and g ≤cf for some constant c > 0.
The domination constants are understood to depend only on p.
450 Foundations of Modern Probability In particular, we conclude as in Corollary 18.8 that M is a uniformly inte-grable martingale whenever E[M]1/2 ∞< ∞.
Proof for p = 1 (Davis): Exploiting the symmetry of the argument, we write M ♭and M ♯for the processes M ∗and [M]1/2, taken in either order.
Putting J = ΔM, we define At = Σ s≤t Js1 |Js| > 2J∗ s− , t ≥0.
Since |ΔA| ≤2ΔJ∗, we have ∞ 0 |dAs| = Σs|ΔAs| ≤2J∗≤4M ♯ ∞.
Writing ˆ A for the compensator of A and putting D = A −ˆ A, we get ED♭ ∞∨ED♯ ∞≤E ∞ 0 |dDs| < ⌢E ∞ 0 |dAs| < ⌢EM ♯ ∞.
(11) To get a similar estimate for N = M −D, we introduce the optional times τr = inf t; N ♯ t ∨J∗ t > r , r > 0, and note that P{N ♭ ∞> r} ≤P{τr < ∞} + P τr = ∞, N ♭ ∞> r ≤P{N ♯ ∞> r} + P{J∗> r} + P{N ♭ τr > r}.
(12) Arguing as in the proof of Lemma 20.5, we get |ΔN| ≤4J∗ −, and so N ♯ τr ≤N ♯ ∞∧ N ♯ τr−+ 4J∗ τr− ≤N ♯ ∞∧5r.
Since N 2 −[N] is a local martingale, we get by Chebyshev’s inequality or Proposition 9.16, respectively, r2P{N ♭ τr > r} < ⌢EN ♯2 τr < ⌢E(N ♯ ∞∧r)2.
Hence, by Fubini’s theorem and calculus, ∞ 0 P{N ♭ τr > r}dr < ⌢ ∞ 0 E(N ♯ ∞∧r)2 r−2dr < ⌢EN ♯ ∞.
Combining this with (11)−(12) and using Lemma 4.4, we get EN ♭ ∞= ∞ 0 P{N ♭ ∞> r}dr ≤ ∞ 0 P{N ♯ ∞> r} + P{J∗> r} + P{N ♭ τr > r} dr < ⌢EN ♯ ∞+ EJ∗< ⌢EM ♯ ∞.
20. Semi-Martingales and Stochastic Integration 451 It remains to note that EM ♭ ∞≤ED♭ ∞+ EN ♭ ∞.
2 Extension to p > 1 (Garsia): For any t ≥0 and B ∈Ft, we may apply (10) with p = 1 to the local martingale 1B(M −M t) to get a.s.
c−1 1 E [M −M t]1/2 ∞ Ft ≤E (M −M t)∗ ∞ Ft ≤c1E [M −M t]1/2 ∞ Ft .
Since [M]1/2 ∞−[M]1/2 t ≤[M −M t]1/2 ∞ ≤[M]1/2 ∞, M ∗ ∞−M ∗ t ≤(M −M t)∗ ∞ ≤2M ∗ ∞, Proposition 10.20 yields E(A∞−At | Ft) < ⌢E(ζ | Ft) with At = [M]1/2 t and ζ = M ∗, and also with At = M ∗ t and ζ = [M]1/2 ∞. Since ΔM ∗ t ≤Δ[M]1/2 t = |ΔMt| ≤[M]1/2 t ∧2M ∗ t , we have in both cases ΔAτ < ⌢E(ζ | Fτ) a.s. for any optional time τ, and so the cited condition remains fulfilled for the left-continuous version A−. Hence, Proposition 10.20 yields ∥A∞∥p < ⌢∥ζ∥p for every p ≥1, and (10) follows.
2 The last theorem allows us to extend the stochastic integral to a larger class of integrands. Then write M for the space of local martingales, and let M0 be the subclass of processes M with M0 = 0. For any M ∈M, let L(M) be the class of predictable processes V such that (V 2 · [M])1/2 is locally integrable.
Theorem 20.13 (martingale integral, Meyer) The elementary predictable in-tegral extends a.s. uniquely to a bilinear map of any M ∈M and V ∈L(M) into a process V · M ∈M0, satisfying (i) |Vn| ≤V in L(M) with (V 2 n · [M])t P →0 ⇒(Vn · M)∗ t P →0, (ii) properties (iii)−(v) of Theorem 20.2, (iii) [V · M, N] = V · [M, N] a.s., N ∈M.
The integral V · M is a.s. uniquely determined by (iii).
Proof: To construct the integral, we may reduce by localization to the case where E(M −M0)∗< ∞and E(V 2 · [M])1/2 ∞ < ∞. For each n ∈N, define Vn = V 1{|V | ≤n}. Then Vn · M ∈M0 by Theorem 20.4, and by Theorem 20.12 we have E(Vn · M)∗< ∞. Using Theorems 20.6 (v) and 20.12, Minkowski’s inequality, and dominated convergence, we obtain E Vm · M −Vn · M ∗< ⌢E (Vm −Vn) · M 1/2 ∞ = E (Vm −Vn)2 · [M] 1/2 ∞→0.
452 Foundations of Modern Probability Hence, there exists a process V · M with E(Vn · M −V · M)∗→0, and we note that V · M ∈M0 and E(V · M)∗< ∞.
To prove (iii), we note that the relation holds for each Vn by Theorem 20.6 (v). Since E[Vn · M −V · M]1/2 ∞→0 by Theorem 20.12, we get by Theorem 20.6 (iii) for any N ∈M and t ≥0 [Vn · M, N]t −[V · M, N]t 2 ≤ Vn · M −V · M t [N]t P →0.
(13) Next we see from Theorem 20.6 (iii) and (v) that t 0 Vn d[M, N] = t 0 d[Vn · M, N] ≤[Vn · M]1/2 t [N]1/2 t .
As n →∞, we get by monotone convergence on the left and Minkowski’s inequality on the right t 0 V d[M, N] ≤[V · M]1/2 t [N]1/2 t < ∞.
Hence, Vn · [M, N] →V · [M, N] by dominated convergence, and (iii) follows by combination with (13).
To see that V · M is determined by (iii), it remains to note that, if [M] = 0 a.s. for some M ∈M0, then M ∗= 0 a.s. by Theorem 20.12. To prove the stated continuity property, we may reduce by localization to the case where E(V 2 · [M])1/2 ∞< ∞. But then E(V 2 n · [M])1/2 ∞→0 by dominated convergence, and Theorem 20.12 yields E(Vn ·M)∗→0. To prove the uniqueness of the integral, it is enough to consider bounded integrands V . We may then approximate as in Lemma 20.3 by uniformly bounded processes Vn ∈E with {(Vn−V )2·[M]}∞ P → 0, and conclude that (Vn · M −V · M)∗ P →0.
Of the remaining properties in Theorem 20.2, assertion (iii) may be proved as before by means of (iii) above, whereas properties (iv)−(v) follow easily by truncation from the corresponding statements in Theorem 20.4.
2 A semi-martingale X = M + A is said to be purely discontinuous, if there exist some local martingales M 1, M 2, . . . of locally finite variation, such that E(M −M n)∗2 t →0 for every t > 0. The property is clearly independent of the choice of decomposition X = M + A. To motivate the terminology, note that any martingale M of locally finite variation can be written as M = M0+A−ˆ A, where At = Σs≤tΔMs and ˆ A is the compensator of A. Thus, M −M0 is then a compensated sum of jumps.
We proceed to establish a basic decomposition5 of a general semi-martingale X into continuous and purely discontinuous components, corresponding to the elementary decomposition of the quadratic variation [X] into a continuous part and a jump part.
5Not to be confused with the decomposition of functions of bounded variation in Theorem 2.23 (iii). The distinction is crucial for part (ii) below, which involves both notions.
20. Semi-Martingales and Stochastic Integration 453 Theorem 20.14 (semi-martingale decomposition, Yoeurp, Meyer) For any real semi-martingale X, we have (i) X = X0 + Xc + Xd a.s. uniquely, for a continuous local martingale Xc with Xc 0 = 0 and a purely discontinuous semi-martingale Xd, (ii) [Xc] = [X]c and [Xd] = [X]d a.s.
Proof: (i) It is enough to consider the martingale component in an arbitrary decomposition X = X0+M +A, and by Lemma 20.5 we may take M ∈M2 0,loc.
Then choose some optional times τn ↑∞with τ0 = 0, such that M τn ∈M2 0 for all n. It is enough to construct the desired decomposition for each process M τn −M τn−1, which reduces the discussion to the case of M ∈M2 0. Now let C and D be the classes of continuous and purely discontinuous processes in M2 0, and note that both are closed linear subspaces of the Hilbert space M2 0.
The desired decomposition will follow from Theorem 1.35, if we can show that D⊥⊂C.
First let M ∈D⊥. To see that M is continuous, fix any ε > 0, and put τ = inf{t; ΔMt > ε}. Define At = 1{τ ≤t}, let ˆ A be the compensator of A, and put N = A −ˆ A. Integrating by parts and using Lemma 10.13 gives 1 2 E ˆ A2 τ ≤E ˆ A d ˆ A = E ˆ A dA = E ˆ Aτ = EAτ ≤1.
Thus, N is L2-bounded and hence lies in D. For any bounded martingale M ′, we get E(M ′ ∞N∞) = E M ′dN = E (ΔM ′) dN = E (ΔM ′) dA = E ΔM ′ τ; τ < ∞ , where the first equality is obtained as in the proof of Lemma 10.7, the second holds by the predictability of M ′ −, and the third holds since ˆ A is predictable and hence natural. Letting M ′ →M in M2, we obtain 0 = E(M∞N∞) = E ΔMτ; τ < ∞ ≥ε P{τ < ∞}.
Thus, ΔM ≤ε a.s., and so ΔM ≤0 a.s. since ε is arbitrary. Similarly, ΔM ≥0 a.s., and the desired continuity follows.
Next let M ∈D and N ∈C, and choose martingales M n →M of locally finite variation. By Theorem 20.6 (vi) and (vii) and optional sampling, we get for any optional time τ 454 Foundations of Modern Probability 0 = E[M n, N]τ = E(M n τ Nτ) →E(MτNτ) = E[M, N]τ, and so [M, N] is a martingale by Lemma 9.14. Since it is also continuous by (14), Proposition 18.2 yields [M, N] = 0 a.s. In particular, E(M∞N∞) = 0, which shows that C ⊥D. The asserted uniqueness now follows easily.
(ii) By Theorem 20.6 (iv), we have for any M ∈M2 [M]t = [M]c t + Σ s≤t (ΔMs)2, t ≥0.
(14) If M ∈D, we may choose some martingales of locally finite variation M n →M.
By Theorem 20.6 (vii) and (viii), we get [M n]c = 0 and E[M n −M]∞→0.
For any t ≥0, we get by Minkowski’s inequality and (14) Σ s≤t (ΔM n s )2 1/2 − Σ s≤t (ΔMs)2 1/2 ≤ Σ s≤t ΔM n s −ΔMs 2 1/2 ≤[M n −M]1/2 t P →0, [M n]1/2 t −[M]1/2 t ≤[M n −M]1/2 t P →0.
Taking limits in (14) for the martingales M n, we get the same formula for M without the term [M]c t, which shows that [M] = [M]d.
Now consider any M ∈M2. Using the strong orthogonality [M c, M d] = 0, we get a.s.
[M]c + [M]d = [M] = [M c + M d] = [M c] + [M d], which shows that even [M c] = [M]c a.s. By the same argument combined with Theorem 20.6 (viii), we obtain [Xd] = [X]d a.s. for any semi-martingale X. 2 The last result yields an explicit formula for the covariation of two semi-martingales.
Corollary 20.15 (decomposition of covariation) For any semi-martingales X, Y , we have (i) Xc is the a.s. unique continuous local martingale M with M0 = 0, such that [X −M] is a.s. purely discontinuous, (ii) [X, Y ]t = [Xc, Y c]t + Σ s≤t (ΔXs)(ΔYs) a.s., t ≥0.
In particular, (V · X)c = V · Xc a.s. for any semi-martingale X and locally bounded, predictable process V , and [Xc, Y c] = [X, Y ]c a.s.
Proof: If M has the stated properties, then [(X −M)c] = [X −M]c = 0 a.s., and so (X −M)c = 0 a.s. Thus, X −M is purely discontinuous. Formula (ii) holds by Theorems 20.6 (iv) and 20.14 when X = Y , and the general result 20. Semi-Martingales and Stochastic Integration 455 follows by polarization.
2 The purely discontinuous component of a local martingale has a further decomposition, similar to the decompositions of optional times and increasing processes in Propositions 10.4 and 10.17.
Corollary 20.16 (martingale decomposition, Yoeurp) For any purely discon-tinuous, local martingale M, we have (i) M = M0 + M q + M a a.s. uniquely, where M q, M a ∈M0 are purely discontinuous, M q is ql-continuous, and M a has accessible jumps, (ii) t; ΔM a t ̸= 0 ⊂ n[τn] a.s. for some predictable times τ1, τ2, . . . with disjoint graphs, (iii) [M q] = [M]q and [M a] = [M]a a.s., (iv) ⟨M q⟩= ⟨M⟩c and ⟨M a⟩= ⟨M⟩d a.s. when M ∈M2 loc.
Proof: Form a locally integrable process At = Σ s≤t (ΔMs)2 ∧1 , t ≥0, with compensator ˆ A, and define M q = M −M0 −M a = 1 Δ ˆ At = 0 · M.
Then Theorem 20.4 yields M q, M a ∈M0 and ΔM q = 1{Δ ˆ A = 0}ΔM a.s., and M q and M a are purely discontinuous by Corollary 20.15. The proof may now be completed as in case of Proposition 10.17.
2 We illustrate the use of the previous decompositions by proving two expo-nential inequalities for martingales with bounded jumps.
Theorem 20.17 (exponential inequalities, OK & Sztencel) Let M be a local martingale with M0 = 0, such that |ΔM| ≤c for a constant c ≤1. Then (i) if [M]∞≤1 a.s., we have P{M ∗≥r} < ⌢exp − r2 2(1 + rc) , r ≥0, (ii) if ⟨M⟩∞≤1 a.s., we have P{M ∗≥r} < ⌢exp −r log (1 + rc) 2 c , r ≥0.
For continuous martingales M, both bounds reduce to e−r2/2, which can also be seen directly by more elementary methods. For the proof of Theorem 20.17 we need two lemmas, beginning with a characterization of certain pure jump-type martingales.
456 Foundations of Modern Probability Lemma 20.18 (accessible jump-type martingales) Let N be a pure jump-type process with integrable variation and accessible jumps. Then N is a martingale iff E ΔNτ Fτ− = 0 a.s., τ < ∞predictable.
Proof: Proposition 10.17 yields some predictable times τ1, τ2, . . . with dis-joint graphs such that {t > 0; ΔNt ̸= 0} ⊂ n[τn]. Under the stated condition, we get by Fubini’s theorem and Lemma 10.1, for any bounded optional time τ, ENτ = Σ n E ΔNτn; τn ≤τ = Σ n E E ΔNτn Fτn− ; τn ≤τ = 0, and so N is a martingale by Lemma 9.14. Conversely, given any uniformly in-tegrable martingale N and finite predictable time τ, we have a.s. E(Nτ| Fτ−) = Nτ−and hence E(ΔNτ| Fτ−) = 0.
2 Though for general martingales M the process Z = eM−[M]/2 of Lemma 19.22 may fail to be a martingale, it can often be replaced by a similar super-martingale.
Lemma 20.19 (exponential super-martingales) Let M be a local martingale with M0 = 0 and |ΔM| ≤c < ∞a.s., and define a = f(c) and b = g(c), where f(x) = −x + log (1 −x)+ x2 , g(x) = ex −1 −x x2 .
Then we have the super-martingales Xt = exp Mt −a [M]t , Yt = exp Mt −b ⟨M⟩t , t ≥0.
Proof: In case of X, we may clearly assume that c < 1. By Theorem 20.7 we get, in an obvious shorthand notation, (X−1 −· X)t = Mt −(a −1 2) [M]c t + Σ s≤t eΔMs−a(ΔMs)2 −1 −ΔMs .
Here the first term on the right is a local martingale, and the second term is non-increasing since a ≥1 2. To see that even the sum is non-increasing, we need to show that exp(x −ax2) ≤1 + x or f(−x) ≤f(c) whenever |x| ≤c, which is clear by a Taylor expansion of each side. Thus, (X−1 −) · X is a local super-martingale, which remains true for X−· (X−1 −· X) = X since X > 0. By Fatou’s lemma, X is then a true super-martingale.
In case of Y , Theorem 20.14 and Proposition 20.16 yield a decomposition M = M c + M q + M a, and Theorem 20.7 gives (Y −1 − · Y )t = Mt −b ⟨M⟩c t + 1 2 [M]c t + Σ s≤t eΔMs−b Δ⟨M⟩s −1 −ΔMs = Mt + b [M q]t −⟨M q⟩t −(b −1 2) [M]c t 20. Semi-Martingales and Stochastic Integration 457 + Σ s≤t eΔMs−b Δ⟨M⟩s −1 + ΔMs + b (ΔMs)2 1 + b Δ⟨M⟩s + Σ s≤t 1 + ΔM a s + b (ΔM a s )2 1 + b Δ⟨M a⟩s −1 −ΔM a s .
Here the first two terms on the right are martingales, and the third term is non-increasing since b ≥1 2. Even the first sum of jumps is non-increasing, since ex −1 −x ≤b x2 for |x| ≤c and ey ≤1 + y for y ≥0.
The last sum clearly defines a purely discontinuous process N with locally finite variation and accessible jumps. Fixing any finite predictable time τ, and writing ξ = ΔMτ and η = Δ⟨M⟩τ, we note that E 1 + ξ + b ξ2 1 + b η −1 −ξ ≤E 1 + ξ + b ξ2 −(1 + ξ)(1 + b η) = b E ξ2 −(1 + ξ) η ≤b (2 + c) Eξ2.
Since E Σ t (ΔMt)2 ≤E[M]∞ = E⟨M⟩∞≤1, we conclude that N has integrable total variation. Furthermore, Lemmas 10.3 and 20.18 show that a.s. E(ξ |Fτ−) = 0 and E( ξ2 | Fτ−) = E Δ[M]τ Fτ− = E( η |Fτ−) = η.
Thus, E 1 + ξ + b ξ2 1 + b η −1 −ξ Fτ− = 0, and Lemma 20.18 shows that N is a martingale. The proof may now be com-pleted as before.
2 Proof of Theorem 20.17: (i) Fix any u > 0, and conclude from Lemma 20.19 that the process Xu t = exp uMt −u2f(uc) [M]t , t ≥0, is a positive super-martingale. Since [M] ≤1 and Xu 0 = 1, we get for any r > 0 P sup t Mt > r ≤P sup t Xu t > exp ur −u2f(uc) ≤exp −ur + u2f(uc) .
(15) The function F(x) = 2xf(x) is clearly continuous and strictly increasing from [0, 1) onto R+. Since also F(x) ≤x/(1 −x), we have F −1(y) ≥y/(1 + y).
Choosing u = F −1(rc)/c in (15) gives P sup t Mt > r ≤exp −rF −1(rc) 2 c ≤exp − r2 2 (1 + rc) .
458 Foundations of Modern Probability It remains to combine with the same inequality for −M.
(ii) The function G(x) = 2x g(x) is clearly continuous and strictly increasing onto R+. Since also G(x) ≤ex −1, we have G−1(y) ≥log(1 + y). Then as before, P sup t Mt > r ≤exp −r G−1(rc) 2 c ≤exp −r log (1 + rc) 2 c , and the result follows.
2 By a quasi-martingale we mean an integrable, adapted, and right-continu-ous process X, such that sup π Σ k≤n E Xtk −E(Xtk+1|Ftk) < ∞, (16) where π is the set of finite partitions of R+ of the form 0 = t0 < t1 < · · · < tn < ∞, and we take tn+1 = ∞and X∞= 0 in the last term. In particular, (16) holds when X is the sum of an L1-bounded martingale and a process of integrable variation starting at 0. We show that this is close to the general situation. Here localization is defined in the usual way in terms of some optional times τn ↑∞.
Theorem 20.20 (quasi-martingales, Rao) A quasi-martingale is the differ-ence of two non-negative super-martingales. Thus, for a process X with X0 = 0, these conditions are equivalent: (i) X is a local quasi-martingale, (ii) X is a special semi-martingale.
Proof: For any t ≥0, let Pt be the class of partitions π of the interval [t, ∞) of the form t = t0 < t1 < · · · < tn, and define η± π = Σ k≤n E Xtk −E(Xtk+1|Ftk) ± Ft , π ∈Pt, where tn+1 = ∞and X∞= 0, as before. We show that η+ π and η− π are a.s.
non-decreasing under refinements of π ∈Pt. It is then enough to add one more division point u to π, say in the interval (tk, tk+1). Put α = Xtk −Xu and β = Xu −Xtk+1. By sub-additivity and Jensen’s inequality, we get the desired relation E E α + β Ftk ± Ft ≤E E(α | Ftk)± + E(β | Ftk)± Ft ≤E E(α | Ftk)± + E(β | Fu)± Ft .
Now fix any t ≥0, and conclude from (16) that m± t ≡supπ∈Pt Eη± π < ∞.
For every n ∈N, we may then choose πn ∈Pt with Eη± πn > m± t −n−1. The sequences (η± πn) being Cauchy in L1, they converge in L1 toward some limits 20. Semi-Martingales and Stochastic Integration 459 Y ± t . Further note that E|η± π −Y ± t | < n−1 whenever π is a refinement of πn.
Thus, η± π →Y ± t in L1 along the directed set Pt.
Next fix any s < t, let π ∈Pt be arbitrary, and define π′ ∈Ps by adding the point s to π. Then Y ± s ≥η± π′ = Xs −E(Xt|Fs) ± + E(η± π | Fs) ≥E(η± π | Fs).
Taking limits along Pt on the right, we get Y ± s ≥E(Y ± t |Fs) a.s., which means that the processes Y ± are super-martingales. By Theorem 9.28, the right-hand limits along the rationals Z± t = Y ± t+ then exist outside a fixed null set, and the processes Z± are right-continuous super-martingales.
For π ∈Pt, we have Xt = η+ π −η− π →Y + t −Y − t , and so Z+ t −Z− t = Xt+ = Xt a.s.
2 The following consequence is easy to believe but surprisingly hard to prove.
Corollary 20.21 (non-random semi-martingales) Let X be a non-random process in Rd. Then these conditions are equivalent: (i) X is a semi-martingale, (ii) X is rcll of locally finite variation.
Proof (OK): Clearly (ii) ⇒(i). Now assume (i) with d = 1. Then X is trivially a quasi-martingale, and so by Theorem 20.20 it is a special semi-martingale, hence X = M + A for a local martingale M and a predictable process A of locally finite variation. Since M = X −A is again predictable, it is continuous by Proposition 10.16. Then X and A have the same jumps, and so by subtraction we may take all processes to be continuous. Then [M] = [X −A] = [X], which is non-random by Proposition 18.17, and we see as in Theorem 19.3 that M is a Brownian motion on the deterministic time scale [X], hence has independent increments. Then A = X −M has the same property, and since it is also continuous of locally finite variation, it is a.s. non-random by Proposition 14.4. Then even M = X −A is a.s. non-random, hence 0 a.s., and so X = A, which has locally finite variation.
2 Now we show that semi-martingales are the most general integrators for which a stochastic integral with reasonable continuity properties can be de-fined. As before, let E be the class of bounded, predictable step processes with jumps at finitely many fixed times.
Theorem 20.22 (stochastic integrators, Bichteler, Dellacherie) A right-con-tinuous, adapted process X is a semi-martingale iff, for any V1, V2, . . . ∈E, ∥V ∗ n ∥∞→0 ⇒ (Vn · X)t P →0, t > 0.
Our proof is based on three lemmas, beginning with the crucial functional-analytic part of the argument.
460 Foundations of Modern Probability Lemma 20.23 (convexity and tightness) For any convex, tight set K ⊂L1(P), there exists a bounded random variable ρ > 0 with sup ξ∈K E(ρ ξ) < ∞.
Proof (Yan): Let B be the class of bounded, non-negative random variables, and define C = γ ∈B; sup ξ∈K E(γ ξ) < ∞ .
We claim that, for any γ1, γ2, . . . ∈C, there exists a γ ∈C with {γ > 0} = n{γn > 0}. Indeed, we may assume that γn ≤1 and supξ∈K E(γn ξ) ≤1, in which case we may choose γ = n 2−nγn. It is then easy to construct a ρ ∈C, such that P{ρ > 0} = supγ∈C P{γ > 0}. Clearly, {γ > 0} ⊂{ρ > 0} a.s., γ ∈C, (17) since we could otherwise choose a ρ′ ∈C with P{ρ′ > 0} > P{ρ > 0}.
To see that ρ > 0 a.s., suppose that instead P{ρ = 0} > ε > 0. Since K is tight, we may choose r > 0 so large that P{ξ > r} ≤ε for all ξ ∈K.
Then P{ξ −β > r} ≤ε for all ξ ∈K and β ∈B. By Fatou’s lemma, we get P{ζ > r} ≤ε for all ζ in the L1-closure Z = K −B. In particular, the random variable ζ0 = 2r1{ρ = 0} lies outside Z. Now Z is convex and closed, and so, by a version of the Hahn–Banach theorem, there exists a γ ∈(L1)∗= L∞ satisfying sup ξ∈K E(γ ξ) −inf β∈B E(γ β) ≤sup ζ∈Z E(γ ζ) < E(γ ζ0) = 2rE(γ; ρ = 0).
(18) Here γ ≥0, since we would otherwise get a contradiction by choosing β = b 1{γ < 0} for large enough b > 0. Hence, (18) reduces to sup ξ∈K E(γ ξ) < 2rE(γ; ρ = 0), which implies γ ∈C and E(γ; ρ = 0) > 0. But this contradicts (17), proving that indeed ρ > 0 a.s.
2 Two further lemmas are needed for the proof of Theorem 20.22.
Lemma 20.24 (tightness and boundedness) Let X be a right-continuous, adap-ted process, and let T be the class of optional times τ < ∞taking finitely many values. Then {Xτ; τ ∈T } is tight ⇒ X∗< ∞a.s.
Proof: By Lemma 9.4, any bounded optional time τ can be approximated from the right by optional times τn ∈T , so that Xτn →Xτ by right continuity.
Hence, Fatou’s lemma yields P |Xτ| > r ≤lim inf n→∞P |Xτn| > r , 20. Semi-Martingales and Stochastic Integration 461 and so the hypothesis remains true with T replaced by the class ˆ T of bounded optional times. By Lemma 9.6, the times τ t,n = t ∧inf s; |Xs| > n , t > 0, n ∈N, belong to ˆ T , and as n →∞we get P{X∗> n} = sup t>0 P{X∗ t > n} ≤sup τ∈ˆ T P |Xτ| > n →0.
2 Lemma 20.25 (scaling) For any finite random variable ξ, there exists a bounded random variable ρ > 0 with E|ρ ξ| < ∞.
Proof: We may take ρ = (|ξ| ∨1)−1.
2 Proof of Theorem 20.22: The necessity is clear from Theorem 20.4. Now assume the stated condition. By Lemma 5.9, it is equivalent to assume for each t > 0 that the family Kt = {(V · X)t; V ∈E1} is tight, where E1 = {V ∈E; |V | ≤1}. The latter family is clearly convex, and by the linearity of the integral the convexity carries over to Kt.
By Lemma 20.24 we have X∗< ∞a.s., and so Lemma 20.25 yields a probability measure Q ∼P, such that EQX∗ t = X∗ t dQ < ∞. In particular, Kt ⊂L1(Q), and we note that Kt remains tight with respect to Q. Hence, Lemma 20.23 yields a probability measure R ∼Q with bounded density ρ > 0, such that Kt is bounded in L1(R).
For any partition 0 = t0 < t1 < · · · < tn = t, we note that Σ k≤n ER Xtk −ER(Xtk+1|Ftk) = ER(V · X)t + ER|Xt|, (19) where Vs = Σ k<n sgn ER(Xtk+1|Ftk) −Xtk 1(tk,tk+1](s), s ≥0.
Since ρ is bounded and V ∈E1, the right-hand side of (19) is bounded by a constant. Hence, the stopped process Xt is a quasi-martingale under R. By Theorem 20.20 it is then a semi-martingale for R, and since P ∼R, Corollary 20.11 shows that Xt remains a semi-martingale for P. Since t is arbitrary, we conclude that X itself is a semi-martingale under P.
2 Exercises 1.
Construct the quadratic variation [M] of a local L2-martingale M directly as in Theorem 18.5, and prove a corresponding version of the integration-by-parts formula. Use [M] to define the L2-integral of Theorem 20.2.
2. Show that the approximation in Proposition 18.17 remains valid for general semi-martingales.
462 Foundations of Modern Probability 3. For a local martingale M starting at 0 and an optional time τ, use Theorem 20.12 to give conditions for the relations EMτ = 0 and EM 2 τ = [M]τ.
4. Give an example of some L2-bounded martingales Mn, such that M∗ n P →0 and yet ⟨Mn⟩∞ P →∞. (Hint: Consider compensated Poisson processes with large jumps.) 5.
Give an example of some martingales Mn, such that [Mn]∞ P →0 and yet M∗ n P →∞. (Hint: See the preceding problem.) 6. Show that ⟨Mn⟩∞ P →0 implies [Mn]∞ P →0.
7.
Give an example of a martingale M of bounded variation and a bounded, progressive process V , such that V 2 ·⟨M⟩= 0 and yet V ·M ̸= 0. Conclude that the L2-integral in Theorem 20.2 has no continuous extension to progressive integrands.
8. Show that any general martingale inequality involving the processes M, [M], and ⟨M⟩remains valid in discrete time. (Hint: Embed M and the associated discrete filtration into a martingale and filtration on R+.) 9. Show that the a.s. convergence in Theorem 5.23 remains valid in Lp. (Hint: Use Theorem 20.12 to reduce to the case where p < 1. Then truncate.) 10. Let G be an extension of the filtration F. Show that any F-adapted semi-martingale for G is also a semi-martingale for F. Also show by an example that the converse implication fails in general. (Hint: Use Theorem 20.22.) 11. Show that if X is a L´ evy process in R, then [X] is a subordinator. Express the characteristics of [X] in terms of those for X.
12. For a L´ evy process X, show that if X is p -stable, then [X] is strictly p/2 -stable.
Also prove the converse, in the case where X has positive or symmetric jumps. (Hint: Use Proposition 16.11.) 13. Extend Theorem 20.17 to the case where [M]∞≤a or ⟨M⟩≤a a.s. for some a ≥1. (Hint: Apply the original result to a suitably scaled process.) 14. For a L´ evy process X with L´ evy measure ν, show that X ∈M2 loc iffX ∈M2, and also iff x2ν(dx) < ∞, in which case ⟨X⟩t = t EX2 1. (Hint: Use Corollary 15.17.) 15. For a purely discontinuous local martingale M with positive jumps, show that M −M0 is a.s. determined by [M]. (Hint: For any such processes M, N with [M] = [N], apply Theorem 20.14 to M −N.) 16. Show that a semi-martingale X is ql-continuous or has accessible jumps iff [X] has the same property. (Hint: Use Theorem 20.6 (iv).) 17. Show that a semi-martingale X with |ΔX| ≤c < ∞a.s. is a special semi-martingale with canonical decomposition M + A satisfying |ΔA| ≤c a.s. In par-ticular, X is a continuous semi-martingale iffit has a decomposition M + A with M and A continuous. (Hint: Use Lemma 20.5, and note that |ΔA| ≤c a.s. implies |Δ ˆ A| ≤c a.s.) 18.
Show that a semi-martingale X is ql-continuous or has accessible jumps, iffit has a decomposition M + A where M and A have the same property. Also show that, for special semi-martingales, we may choose M + A to be the canonical decomposition of X. (Hint: Use Proposition 10.17 and Corollary 20.16, and refer to 20. Semi-Martingales and Stochastic Integration 463 the preceding exercise.) 19. Show that a semi-martingale X is predictable, iffit is a special semi-martingale with canonical decomposition M + A for a continuous M. (Hint: Use Proposition 10.16.) Chapter 21 Malliavin Calculus Malliavin derivative, integration by parts, closure, chain rule, diver-gence operator, chaos representations, norms and domains, conditioning and adaptation, factorization, commutation, covariance, local proper-ties, Ornstein–Uhlenbeck semi-group and generator, elliptic operators, second order chain rule, extension and differentiation of Itˆ o integrals, Brownian functionals, existence of smooth density, absolute continuity The stochastic calculus of variations, known as Malliavin calculus, is an infinite-dimensional calculus on Wiener space of amazing power and beauty. Here we develop only some basic results for the differentiation and related operators, along with their representations and existence criteria in terms of multiple Wiener-Itˆ o integrals. Already when developed that far, the theory is powerful enough to yield important new insights into various topics treated in earlier chapters. Unfortunately, we had to omit the more advanced aspects of the the-ory, including some fundamental Sobolev-type inequalities, with applications to solutions of SDEs, due to our obvious space limitations.
In this chapter we give a fairly complete treatment of the three basic opera-tors of the theory—the Malliavin derivative D, the adjoint divergence operator D∗, even known as the Skorohod integral, and the Ornstein–Uhlenbeck gener-ator L. Highlights of the theory include the explicit representations of those operators and their domains in terms of multiple Wiener–Itˆ o integrals, the re-markable identity L = −D∗D, an expression of L as a second order differential operator, and some surprising connections to ordinary Itˆ o integrals.
Most of the basic theory covered here relies on some standard functional analysis, combined with the theory of Wiener chaos expansions from Chapter 14. To ease the access for probabilists, I have replaced the sometimes peculiar traditional notation1 by a more standard usage in probability theory, writing ξ, η, . . . for random variables and X, Y, . . . for random processes, and similarly for some basic notions of functional analysis. I have also tried to clarify the customary confusion2 between the OU-semi-group and generator in the sense of Feller processes, and the corresponding notions of Malliavin calculus.
To begin our technical exposition of the theory, let ζ be an isonormal Gaus-sian process on a separable Hilbert space H, as defined in Chapter 14. Our first aim is to define a linear differentiation operator D on a suitable class 1where F, G, . . . are often used for random variables, u, v, . . . for random processes, δ with domain D(δ) for the adjoint of the operator D, etc.
2the former acting on deterministic functions, the latter on random variables 465 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 466 Foundations of Modern Probability of ζ-measurable random variables. Our basic requirements are the relations D(ζh) = h for all h ∈H, along with a suitable chain rule for differentiation.
For motivation, we consider first the class S of smooth random variables of the form ξ = f(ζ¯ h) = f(ζh1, . . . , ζhn), h1, . . . , hn ∈H, n ∈N, (1) for functions f ∈C1(Rn) with bounded, first order partial derivatives ∂if, where we write ¯ h = (h1, . . . , hn).
The condition D(ζhi) = hi, along with obvious demands of linearity and continuity, suggest that we define D ξ = Σ i≤n ∂if(ζ¯ h) hi, ξ ∈S.
(2) Then ⟨D ξ, h⟩= lim t→0 t−1 f ζ¯ h + t⟨¯ h, h⟩ −f(ζ¯ h) , h ∈H, which identifies ⟨D ξ, h⟩as the derivative of ξ in the direction of h. We may then think of D ξ as a generalized directional derivative of ξ.
Note that the differentiation operator D maps random variables ξ into random elements D ξ of H. In applications we often take H to be a function space L2(T, T , μ) whose elements are L2-functions h: T →R, so that random elements in H may be regarded as real-valued processes on T, and D ξ becomes a process D t ξ on T. In particular, we may take ζ to be an isonormal Gaussian process on L2([0, 1], λ) generating a Brownian motion B on [0, 1], so that D ξ becomes a process on [0, 1] with3 ∥D ξ∥2 = 1 0 (D t ξ)2dt < ∞a.s.
To state the basic existence theorem, recall that a linear operator A between two Banach spaces S, T is said to be closed, if its graph is closed in S × T, so that xn →x in S and Axn →y in T imply x ∈dom A and y = Ax.
An operator A : S →T is closable and hence admits a closed extension, if xn →0 in S and Axn →y in T imply y = 0. Let Lp(ζ, H) be the space of ζ -measurable random elements η in H with norm E(∥η∥p)1/p < ∞, and write ˆ C1(Rn) for the class of differentiable functions on Rn with bounded first derivatives. For convenience, we write ξ = (ξ1, . . . , ξn).
Theorem 21.1 (Malliavin derivative) Let ζ be an isonormal Gaussian pro-cess on a separable Hilbert space H. Then for any p ≥1 there exists a unique closed, linear operator Dp : Lp(ζ) →Lp(ζ, H) with domain Dp containing ζh for all h ∈H, such that for any ξ1, . . . , ξn ∈Dp, (i) Dp(ζh) = h, h ∈H, (ii) Dp(f ◦ξ) = Σ i≤n ∂if(ξ) Dp ξi, f ∈ˆ C1(Rn).
Note that for ξ ∈S, the chain rule in (ii) reduces to (2). In general it is understood that f(ξ) ∈Dp. For the proof, we define D on S by means of (2). The resulting operator from Lp(ζ) to Lp(ζ, H) turns out to be closable, and we may define Dp as the associated closure. We need to show that D is well-defined by (2) on the space S.
3In this chapter, the inner product ⟨·, ·⟩and norm ∥·∥are always taken in the underlying Hilbert space H, so that ∥D ξ∥becomes a random variable, not a constant.
21. Malliavin Calculus 467 Lemma 21.2 (consistency) For any ξ ∈S, the variable D ξ in (2) is a.s.
independent of the representation of ξ in (1).
Proof: We need to show that, if also ξ = g(ζ¯ k) = g(ζk1, . . . , ζkm), then Σ i≤n ∂if(ζ¯ h) hi = Σ j ≤m ∂i g(ζ¯ k) kj.
Here we may choose the kj to be ortho-normal with a linear span containing all hi. Assuming hi = j Tijkj for a matrix T, we obtain f ◦T(ζ¯ k) = g(ζ¯ k) a.s.
Since the variables ζki are i.i.d. N(0, 1) and the functions f◦T and g are contin-uous, it follows that f ◦T = g. By the elementary chain rule for differentiation, we get Σ j ∂j g(ζ¯ k) kj = Σ j ∂j(f ◦T)(ζ¯ k) kj = Σ i,j ∂i(f ◦T)(ζ¯ k) Tijkj = Σ i ∂if(ζ¯ h) hi.
2 We further need some elementary identities for the operator D on S.
Lemma 21.3 (integration by parts) Let ξ, η be smooth functionals of ζ, and let h ∈H. Then (i) E⟨D ξ, h⟩= E ξ(ζh), (ii) E ξη(ζh) = E ξ⟨D η, h⟩+ E η⟨D ξ, h⟩.
Proof: (i) We may choose ξ as in (1) with ortho-normal h1, . . . , hn ∈H, and take h = h1. Writing ϕ for the standard normal density on R, and noting that ϕ′(x) = −xϕ(x), we get by (2) and an elementary integration by parts E⟨D ξ, h⟩= ∂1f(x) ϕ⊗n(x) dx = f(x) ϕ⊗n(x) x1 dx = E ξ(ζh).
(ii) Here we may represent ξ, η as in (1) in terms of a common set of elements h1, . . . , hn ∈H. Using (2) and the elementary product rule for differ-entiation, we obtain D(ξη) = ξDη + ηDξ a.s., and so by (i) E ξη(ζh) = E !
D(ξη), h " = E !
ξDη + ηDξ, h " = E ξ⟨Dη, h⟩+ E η⟨Dξ, h⟩.
2 Next, we show that the operator D in (2) is closable.
468 Foundations of Modern Probability Lemma 21.4 (closure) For any p ≥1, the operator D on S is closable from Lp(ζ) to Lp(ζ, H), and hence extends to a closed operator (Dp, Dp) between those spaces.
Proof: Let ξ1, ξ2, . . . ∈S with ξn →0 in Lp, and such that Dξn →η in Lp(ζ, H). To show that η = 0 a.s., fix any h ∈H, and let β ∈S be such that β(ζh) is bounded. Using Lemma 21.3 (ii), we get by dominated convergence E β⟨η, h⟩←E β⟨D ξn, h⟩ = E β ξn(ζh) −ξn⟨Dβ, h⟩ →0, since β(ζh) and ⟨Dβ, h⟩are bounded. Hence, E β⟨η, h⟩= 0, and so by approx-imation ⟨η, h⟩= 0 a.s. Since h was arbitrary, we obtain η = 0 a.s.
2 This extends D to a closed operator Dp : Lp(ζ) →Lp(ζ, H), and we denote the associated domain by D1,p or simply Dp. Since clearly Dp = Dq on Dp ∩Dq for any p, q ≥1, we may henceforth write simply Dp = D. It remains to show that the extended version of D satisfies the required chain rule.
Lemma 21.5 (chain rule) Let ξ1, . . . , ξn ∈Dp with p ≥1, and let f ∈C1(Rn) with bounded first derivatives. Then even f(ξ) ∈Dp, and D(f ◦ξ) = Σ i≤n ∂if(ξ) D ξi, f ∈ˆ C1(Rn).
(3) Proof: First consider some smooth variables ξi = gi(ζ¯ h), represented in terms of a common set of elements h1, . . . , hn ∈H. Then f(ξ) = (f ◦¯ g)(ζ¯ h) is again smooth, where ¯ g = (g1, . . . , gm), and we get by (2) and the elementary chain rule D(f ◦ξ) = Σ j ∂j(f ◦¯ g)(ζ¯ h) hj = Σ i,j ∂if(ξ) ∂j gi(ξ) hj = Σ i ∂if(ξ) D ξi.
In the general case, we may choose some smooth variables ξ(n) i →ξi in Lp, such that also Dξ(n) i →Dξi in Lp(ζ, H) for all i. We may also choose some smooth functions fn →f, such that ∂ifn →∂if uniformly on compacts.
Note in particular that the fn have uniform linear growth. By dominated convergence it follows that fn(ξ(n)) →f(ξ) in Lp, and further that in Lp(ζ, H) D fn ◦ξ(n) = Σ i ∂ifn(ξ(n)) D ξ(n) i →Σ i ∂if(ξ) D ξi.
Since D is closed, we get f(ξ) ∈D1,p, and the last limit equals D(f ◦ξ), proving (3).
2 21. Malliavin Calculus 469 For any p ≥1, we define a norm on D1,p by ∥ξ∥1,p = E|ξ|p + E∥D ξ∥p1/p.
In particular, D1,2 is a Hilbert space with inner product ⟨ξ, η⟩1,2 = E ξη + E⟨D ξ, D η⟩.
Note that the space D1,p is reflexive for every p > 1, since it is isometrically isomorphic to the reflexive space Lp(ζ) × Lp(ζ, H). This yields a useful closure property: Lemma 21.6 (weak closure) Let ξ1, ξ2, . . . ∈D1,2 with ξn →ξ in Lp(ζ) for a p > 1. Then these conditions are equivalent: (i) ξ ∈D1,p, and D ξn →D ξ weakly in Lp(ζ, H), (ii) supn E∥D ξn∥p < ∞.
Proof, (i) ⇒(ii): Note that weak convergence in Lp implies Lp-boundedness, by the Banach-Steinhaus theorem.
(ii) ⇒(i): By (ii) the sequence (ξn) is bounded in the reflexive space D1,p.
Thus, (ξn) is weakly relatively compact in D1,p, and hence converges weakly in D1,p along a subsequence, say to some η ∈D1,p. Since also ξn →ξ in Lp(ζ), we have η = ξ a.s., and so ξ ∈D1,p, and ξn →ξ weakly in D1,p along the entire sequence. In particular, D ξn →D ξ weakly in Lp(ζ, H).
2 So far we have regarded the derivative D as an operator from Lp(ζ) to Lp(ζ, H). It is straightforward to extend D to an operator from Lp(ζ, K) to Lp(ζ, H ⊗K), for a possibly different separable Hilbert space K. Assuming as before that H = L2(μ) and K = L2(ν) for some σ-finite measures μ, ν on suitable spaces T, T ′, we may identify H ⊗K with L2(μ ⊗ν). For any random variables ξ1, . . . , ξm ∈D1,p and elements k1, . . . , km ∈K, we define D Σ i≤m ξiki = Σ i≤m D ξi ⊗ki.
In particular, we get for smooth variables ξi = fi(ζ¯ h) with ¯ h = (h1, . . . , hn) D Σ i≤m fi(ζh) ki = Σ i≤m Σ j ≤n ∂jfi(ζ¯ h) (hj ⊗ki).
As before, D extends to a closed operator from Lp(P ⊗ν) to Lp(P ⊗μ ⊗ν).
Iterating the construction, we may define the higher order derivatives Dm on Lp(ζ) by setting Dm+1ξ = D(Dmξ) with D1 = D. The associated domains Dm,p are defined recursively, and in particular Dm,2 is a Hilbert space with inner product ⟨ξ, η⟩m,2 = E(ξ η)+ m Σ i=1 E !
D iξ, D iη " H⊗i.
470 Foundations of Modern Probability For first order derivatives, the chaos decomposition in Theorem 14.26 yields explicit expressions of the L2-derivative D ξ and its domain D1,2, in terms of multiple Wiener-Itˆ o integrals. For elements in L2(ζ, H), we write ∥· ∥and ⟨·, ·⟩for the norm and inner product in H, so that the resulting quantities are random variables, not constants. This applies in particular to the norm ∥Dξ∥.
Theorem 21.7 (derivative of WI-integral) For an isonormal Gaussian pro-cess ζ on H = L2(T, μ), let ξ = ζnf with f ∈L2(μ⊗n) symmetric. Then (i) ξ ∈D1,2 with Dt ξ = n ζn−1f(·, t), t ∈T, (ii) E∥D ξ∥2 = n∥ξ∥2 = n(n!) ∥f∥2.
Proof: For any ONB h1, h2, . . . ∈H, Theorem 14.25 yields an orthogonal expansion of ζnf in terms of products ζn⊗ j ≤mh ⊗nj j = Π j ≤m pnj(ζhj), (4) where Σj≤m nj = n. Using the simple chain rule (2), for smooth functions of polynomial growth, together with the elementary fact that p′ n = n pn−1, we get D ζn⊗ j ≤mh ⊗nj j = Σ i≤m ni pni−1(ζhi) hiΠ j ̸=i pnj(ζhj), which is equivalent to Dtζnf = n ζn−1 ˜ f(t) for the symmetrization ˜ f of the ele-mentary integrand f in (4). This extends by linearity to any linear combination of such functions.
Using Lemma 14.22, along with the orthogonality of the various terms and factors, we obtain E D ζn⊗ j ≤mh ⊗nj j 2 = Σ i≤m n2 i ∥pni−1(ζhi)∥2Π j ̸=i pnj(ζhj) 2 = Σ i≤m n2 i (ni −1)! Π j ̸=i (nj)!
= Σ i≤m niΠ j ≤m (nj)!
= n Π j ≤m pnj(ζhj) 2 = n∥ζnf∥2, which extends by orthogonality to general linear combinations. This proves (i) and (ii) for such variables ξ, and the general result follows by the closure property of D.
2 Corollary 21.8 (chaos representation of D) For an isonormal Gaussian pro-cess ζ on a separable Hilbert space H, let ξ ∈L2(ζ) with chaos decomposition ξ = Σ n≥0 Jnξ. Then (i) ξ ∈D1,2 ⇔Σ n>0 n∥Jnξ∥2 < ∞, in which case (ii) D ξ = Σ n>0 D(Jnξ) in L2(ζ, H), 21. Malliavin Calculus 471 (iii) E∥D ξ∥2 = Σ n>0 E D(Jnξ) 2 = Σ n>0 n∥Jnξ∥2.
Proof: Use Theorems 14.26 and 21.7, along with the closure property of D.
2 Corollary 21.9 (vanishing derivative) For any ξ ∈D1,2, D ξ = 0 a.s.
⇔ ξ = E ξ a.s.
Proof: If ξ = E ξ, then D ξ = 0 by (2). Conversely, D ξ = 0 yields Jnξ = 0 for all n > 0 by Theorem 21.8 (iii), and so ξ = J0ξ = E ξ.
2 Corollary 21.10 (indicator variables) For any A ∈σ(ζ), 1A ∈D1,2 ⇔ PA ∈{0, 1}.
Proof: If PA ∈{0, 1}, then ξ = 1A is a.s. a constant, and so ξ ∈S ⊂D1,2.
Conversely, let 1A ∈D1,2. Applying the chain rule to the function f(x) = x2 on [0, 1] yields D1A = D12 A = 21AD1A, and so D1A = 0. Hence, Corollary 21.9 shows that 1A is a.s. a constant, which implies that either 1A = 0 a.s. or 1A = 1 a.s., meaning that PA ∈{0, 1}.
2 Corollary 21.11 (conditioning) Let ξ ∈D1,2, and fix any A ∈T . Then E(ξ | 1Aζ) ∈D1,2 and DtE ξ 1Aζ = E Dt ξ 1Aζ 1A(t) a.e.
Proof: Letting ξ = Σ nζnfn and using Corollary 14.28 and Theorem 21.7 twice, we get Dt E ξ 1Aζ = Dt Σ n ζn(fn 1⊗n A ) = Σ n n ζn−1 fn(·, t) 1⊗n A (·, t) = Σ n n ζn−1 fn(·, t) 1⊗(n−1) A 1A(t) = E Dt ξ 1Aζ 1A(t).
In particular, the last series converges, so that E(ξ | 1Aζ) ∈D1,2.
2 The last result yields immediately the following: Corollary 21.12 (null criterion) Let ξ ∈D1,2 be 1Aζ -measurable for an A ∈ T . Then Dt ξ = 0 a.e. on Ac × Ω.
Corollary 21.13 (adapted processes) Let B be a Brownian motion generated by an isonormal Gaussian process ζ on L2(R+, λ), and consider a B-adapted process X ∈D1,2(H). Then Xs ∈D1,2 for almost every s ≥0, and the process (s, t) →DtXs has a version such that, for fixed t ≥0, 472 Foundations of Modern Probability (i) DtXs = 0 for all s ≤t, (ii) DtXs is B-adapted in s ≥t.
Proof: (i) Use Corollary 21.12.
(ii) Apply Theorems 21.7 and 21.8 to the chaos representation in Corollary 14.29.
2 Next we introduce the divergence operator 4, defined as the adjoint D∗: L2(ζ, H) →L2(ζ) of the Malliavin derivative D on D1,2, in the sense of the duality relation E⟨D ξ, Y ⟩= ⟨ξ, D∗Y ⟩, valid for all ξ ∈D1,2 and suitable Y ∈L2(ζ, H), where the inner products are taken in L2(ζ, H) on the left and in L2(ζ) on the right. More precisely, the operator D∗and its domain D∗are given by: Lemma 21.14 (existence of D∗) For an isonormal Gaussian process ζ on H and element Y ∈L2(ζ, H), these conditions are equivalent: (i) there exists an a.s. unique element D∗Y ∈L2(ζ) with E⟨D ξ, Y ⟩= E(ξD∗Y ), ξ ∈D1,2, (ii) there exists a constant c > 0 such that E⟨D ξ, Y ⟩ 2 ≤c E|ξ|2, ξ ∈D1,2.
Proof: By (i) and Cauchy’s inequality, we get (ii) with c2 = E|D∗Y |2. Now assume (ii). Then the map ϕ(ξ) = E⟨D ξ, Y ⟩is a bounded linear functional on D1,2, and since the latter is a dense subset of L2(ζ), ϕ extends by continuity to all of L2(ζ). Hence, the Riesz–Fischer theorem yields ϕ(ξ) = E( ξγ) for an a.s. unique γ ∈L2(ζ), and (i) follows with D∗Y = γ.
2 For suitable elements X ∈L2(ζ, H), with chaos expansions as in Corollary 14.27 in terms of functions fn on T n+1, symmetric in the first n arguments, we may now express the divergence D∗X in terms of the full symmetrizations ˜ fn on T n+1. Since fn(s, t) is already symmetric in s ∈T n, we note that for all n ≥0, (n + 1) ˜ fn(s, t) = fn(s, t) + Σ i≤n fn s1, . . . , si−1, t, si+1, . . . , sn, si .
Theorem 21.15 (chaos representation of D∗) For an isonormal Gaussian process ζ on H = L2(μ), let X ∈L2(ζ, H) with Xt = Σ n≥0 ζnfn(·, t), where the fn ∈L2(μn+1) are symmetric in the first n arguments. Writing ˜ fn for the full symmetrizations of fn, we have (i) X ∈D∗ ⇔Σ n≥0 (n + 1)! ∥˜ fn∥2 < ∞, in which case 4also called the Skorohod integral and often denoted by δ or 21. Malliavin Calculus 473 (ii) D∗X = Σ n≥0 D∗(JnX) = Σ n≥0 ζn+1˜ fn in L2(ζ), (iii) ∥D∗X∥2 = Σ n≥0 ∥D∗(JnX)∥2 = Σ n≥0 (n + 1)! ∥˜ fn∥2.
Proof: For an n ≥1, let η = ζng with symmetric g ∈L2(μn).
Using Theorem 21.7 and the extension of Lemma 14.22, we get E⟨X, D η⟩= E Xt(D η)t μ(dt) = E Xt n ζn−1g(·, t) μ(dt) = Σ m nE ζmfm(·, t) ζn−1g(·, t) μ(dt) = n E ζn−1fn−1(·, t) ζn−1g(·, t) μ(dt) = n! !
fn−1(·, t), g(·, t) " μ(dt) = n! ⟨fn−1, g⟩= n! ⟨˜ fn−1, g⟩ = E ζn˜ fn−1 ζng = E η(ζn˜ fn−1).
Now let X ∈D∗. Comparing the above computation with Theorem 21.14 (i), we get E η ζn ˜ fn−1 = E η(D∗X), η ∈Hn, n ≥1, and so Jn(D∗X) = ζn ˜ fn−1, n ≥1, which remains true for n = 0, since E D∗X = E⟨D1, X⟩= 0 by Theorem 21.14 (i) and Corollary 21.9. Hence, D∗X = Σ n>0 Jn(D∗X) = Σ n>0 ζn ˜ fn−1 = Σ n≥0 ζn+1 ˜ fn, ∥D∗X∥2 = Σ n>0 Jn(D∗X) 2 = Σ n>0 ζn ˜ fn−1 2 = Σ n>0 n! ∥˜ fn−1∥2, which proves (ii)−(iii), along with the convergence of the series in (i).
Conversely, suppose that the series in (i) converges, so that Σ nζn ˜ fn−1 converges in L2. By the previous calculation, we get for any η ∈L2(ζ) E !
X, D(Jnη) " = E(Jnη) ζn ˜ fn−1 = E η ζn ˜ fn−1 .
If η ∈D1,2, we have Σ nD(Jnη) = Dη in L2(ζ, H) by Theorem 21.8, and so E⟨X, D η⟩= E η Σ n>0 ζn ˜ fn−1.
Hence, by Cauchy’s inequality, E⟨X, D η⟩ ≤∥η∥ Σ n>0 ζn ˜ fn−1 < ⌢∥η∥, η ∈D1,2, 474 Foundations of Modern Probability and X ∈D∗follows by Theorem 21.14.
2 Many results for D∗require a slightly stronger condition. Let ˆ D1,2 be the class of processes X ∈L2(P ⊗μ), such that Xt ∈D1,2 for almost every t, and the process DsXt on T 2 has a product-measurable version in L2(μ2). Note that ˆ D1,2 is a Hilbert space with norm ∥X∥2 1,2 = ∥X∥2 P⊗μ + ∥DX∥2 P⊗μ2, X ∈ˆ D1,2, and hence is isomorphic to D1,2(H).
Corollary 21.16 (sub-domain of D∗) For an isonormal Gaussian process ζ on L2(μ), the class ˆ D1,2 is a proper subset of D∗.
Partial proof: For the moment, we prove only the stated inclusion. Then let X ∈L2(T, μ) with a representation Xt = Σnζnfn(·, t), as in Lemma 14.27, where the functions fn ∈L2(μn+1) are symmetric in the first n arguments.
If X ∈ˆ D1,2, then DX ∈L2, and we get by Lemma 14.22, along with the vector-valued versions of Theorem 21.7 and Corollary 21.8, E∥DX∥2 = Σ n E D(JnX) 2 = Σ n n(n!) ∥fn∥2 > ⌢Σ n (n + 1)! ∥fn∥2 ≥Σ n (n + 1)! ∥˜ fn∥2, with summations over n ≥1. Hence, the series on the right converges, and so X ∈D∗by Theorem 21.15.
2 Now let SH be the class of smooth random elements in L2(ζ, H) of the form η = Σ i≤n ξihi, where ξ1, . . . , ξn ∈S and h1, . . . , hn ∈H.
Lemma 21.17 (divergence of smooth elements) The class SH is contained in D∗, and for any β = Σ i ξihi in SH, D∗β = Σ i ξi(ζhi) −Σ i ⟨D ξi, hi⟩.
Proof: By linearity, we may take β = ξ h for some ξ ∈S and h ∈H, which reduces the asserted formula to D∗(ξh) = ξ(ζh) −⟨D ξ, h⟩.
By the definition of D∗, we need to check that E ξ⟨D η, h⟩= E η ξ(ζh) −⟨D ξ, h⟩ , η ∈D1,2.
This holds by Lemma 21.3 (ii) for any η ∈S, and the general result follows by approximation.
2 The next result allows us to factor out scalar variables.
21. Malliavin Calculus 475 Lemma 21.18 (scalar factors) Let ξ ∈D1,2 and Y ∈D∗with (i) ξY ∈L2(ζ, H), (ii) ξD∗Y −⟨D ξ, Y ⟩∈L2(ζ).
Then ξY ∈D∗, and D∗(ξY ) = ξD∗Y −⟨D ξ, Y ⟩.
Proof: Using the elementary product rule for differentiation and the D/D∗-duality, we get for any ξ, η ∈S E⟨D η, ξY ⟩= E⟨Y, ξD η⟩ = E !
Y, D(ξη) −η D ξ " = E (D∗Y ) ξη −η ⟨Y, D ξ⟩ = !
η, ξD∗Y −⟨D ξ, Y ⟩ " , which extends to general ξ, η ∈D1,2 as stated, by (i)−(ii) and the closure of D.
Hence, Theorem 21.14 yields ξY ∈D∗, and the asserted formula follows.
2 This yields a simple factorization property: Corollary 21.19 (factorization) Let ξ ∈L2(ζ) be 1Acζ -measurable for an A ∈T . Then 1Aξ ∈D∗, and D∗(1Aξ) = ξ(ζA) a.s.
Proof: First let ξ ∈D1,2.
Using Lemmas 21.17 and 21.18, along with Corollary 21.12, we get D∗(1Aξ) = ξ D∗(1A) −⟨D ξ, 1A⟩ = ξ(ζA) −0 = ξ(ζA).
The general result follows since D1,2 is dense in L2(ζ) and D∗is closed, being the adjoint of a closed operator.
2 End of proof of Corollary 21.16: It remains to show that the stated inclu-sion is strict. Then let A, B ∈T be disjoint with μA ∧μB > 0, and define Y = 1Aξ with ξ = 1{ζB > 0}, so that Y ∈D∗by Corollary 21.19. On the other hand, Corollary 21.10 yields ξ / ∈D1,2 since P{ζB > 0} = 1 2. Noting that 1Aξ ∈ˆ D1,2 iffξ ∈D1,2 when μA > 0, we conclude that even Y / ∈ˆ D1,2.
2 We turn to a key relationship between D and D∗.
Theorem 21.20 (commutation) Let X ∈ˆ D1,2 be such that, for t ∈T a.e. μ, (i) the process s →DtXs belongs to D∗, (ii) the process t →D∗(DtXs) has a version in L2(P ⊗μ).
476 Foundations of Modern Probability Then D∗X ∈D1,2, and Dt(D∗X) = Xt + D∗(DtX), t ∈T.
Proof: By Corollary 14.27 we may write Xt = Σ nζnfn(·, t), where each fn is symmetric in the first n arguments. Since ˆ D1,2 ⊂D∗by Corollary 21.16, Theorem 21.15 yields D∗X = Σ n ζn+1 ˜ fn, Σ n (n + 1)! ∥˜ fn∥2 < ∞.
In particular, Jn(D∗X) = ζn ˜ fn−1, Jn(D∗X) 2 = n! ∥˜ fn−1∥2, and so Σ nn ∥Jn(DX)∥2 < ∞, which implies D∗X ∈D1,2 by Theorem 21.8.
Moreover, Theorem 21.7 yields Dt(D∗X) = DtΣ n ζn+1 ˜ fn = Σ n (n + 1)! ζn ˜ fn(·, t) = Σ n ζnfn(·, t) + Σ n,i ζnfn t1, . . . , ti−1, t, ti+1, . . . , tn, ti = Xt + Σ n≥0 ζngn(·, t), where gn t1, . . . , tn, t = Σ i≤n fn t1, . . . , ti−1, t, ti+1, . . . , tn, ti .
Now fix any s, t ∈T, and conclude from Theorem 21.7 that DtXs = Σ nζn−1fn(·, t, s). Then Theorem 21.15 yields D∗(DtXs) = Σ n≥0 ζng′ n(·, t), t ∈T, (5) where g′ n t1, . . . , tn, t = Σ i≤n fn t1, . . . , ti−1, tn, ti+1, . . . , tn−1, t, ti = Σ i≤n fn t1, . . . , ti−1, t, ti+1, . . . , tn−1, tn, ti = gn(·, t), since fn is symmetric in the first n variables. Hence, (5) holds with g′ n replaced by gn, and the asserted relation follows.
2 To state the next result, it is suggestive to write for any X, Y ∈ˆ D1,2 tr(DX · DY ) = (DsXt)(DtYs) μ⊗2(ds dt).
Corollary 21.21 (covariance) For any X, Y ∈ˆ D1,2, we have5 !
D∗X, D∗Y " = E⟨X, Y ⟩+ E tr(DX · DY ).
5Throughout this chapter, the inner product ⟨·, ·⟩must not be confused with the pre-dictable covariation in Chapter 20.
21. Malliavin Calculus 477 Proof: By a simple approximation, we may assume that X, Y have finite chaos decompositions. Then D∗X, D∗Y ∈D1,2, and we see from the D/D∗-duality, used twice, and Theorem 21.20 that !
D∗X, D∗Y " = E !
X, D(D∗Y ) " = E⟨X, Y ⟩+ E !
X, D∗(DY ) " = E⟨X, Y ⟩+ E tr(DX · DY ), where the last equality is clear if we write out the duality explicitly in terms of μ -integrals.
2 An operator T on a space of random variables ξ is said to be local if ξ = 0 a.s. on a set A ∈F implies Tξ = 0 a.e. on A. We show that D∗is local on ˆ D1,2 and D is local on D1,1. To make this more precise, we note that for any statement on Ω × T, the qualification ‘a.e.
’ is understood to be with respect to P ⊗μ.
Theorem 21.22 (local properties) For any set A ⊂Ω in F, we have (i) X ∈ˆ D1,2, 1AX = 0 a.e. ⇒1A(D∗X) = 0 a.s., (ii) ξ ∈D1,1, 1A ξ = 0 a.s. ⇒1A(D ξ) = 0 a.e.
Proof: (i) Choose a smooth function f : R →[0, 1] with f(0) = 1 and support in [−1, 1], and define fn(x) = f(nx). Let η ∈Sc be arbitrary. We can show that η fn(∥X∥2) ∈D1,2, and use the product and chain rules to get D η fn(∥X∥2) = fn(∥X∥2) D η + 2 η f ′ n(∥X∥2) DX · X.
Hence, the D/D∗-duality yields E η (D∗X) fn(∥X∥2) = E fn(∥X∥2) ⟨X, D η⟩ + 2 E η f ′ n(∥X∥2) !
DX · X, X " .
(6) As n →∞, we note that η (D∗X) fn(∥X∥2) →η (D∗X) 1{X = 0}, fn(∥X∥2) ⟨X, D η⟩→0, η f ′ n(∥X∥2) !
DX · X, X " →0, and (6) yields formally E η D∗X; X = 0 = 0.
(7) Since Sc is dense in L2(ζ), this implies D∗X = 0 a.s. on A ⊂{X = 0}, as required.
To justify (7), we note that fn(∥X∥2) ⟨X, Dη⟩ ≤∥f∥∞∥X∥H ∥Dη∥H, 478 Foundations of Modern Probability and η f ′ n(∥X∥2) !
DX · X, X " ≤|η| f ′ n(∥X∥2) ∥DX∥H2 ∥X∥2 ≤2 |η| ∥f ′∥∞∥DX∥H2, where both bounds are integrable. Hence, (7) follows from (6) by dominated convergence.
(ii) It is enough to prove the statement for bounded variables ξ ∈D1,1, since it will then follow for general ξ by the chain rule for D, applied to the variable arctan ξ. For f and fn as in (i), define gn(t) = t −∞fn(r) dr, and note that ∥gn∥∞≤n−1∥f∥1, and also Dgn(ξ) = fn(ξ)Dξ by the chain rule.
Now let Y ∈Sb(H) be arbitrary, and note that the D/D∗-duality remains valid in the form E(η D∗Y ) = E⟨Dη, Y ⟩, for any bounded variables η ∈D1,1.
Using the chain rule for D, the extended D/D∗-duality, and some easy esti-mates, we obtain E fn(ξ) ⟨D ξ, Y ⟩ = E !
D(gn ◦ξ), Y " = E gn(ξ) D∗Y ≤n−1∥f∥1E|D∗Y | →0, and so by dominated convergence E ⟨D ξ, Y ⟩; ξ = 0 = 0, which extends by suitable approximation to Dξ = 0 a.e. on A ⊂{ξ = 0}.
2 Since the operators D and D∗are local, we may extend their domains as follows. Say that a random variable ξ lies locally in D1,p and write ξ ∈D1,p loc, if there exist some measurable sets An ↑Ω with associated random variables ξn ∈D1,p, such that ξ = ξn on An. The definitions of Dm,p loc and ˆ D1,2 loc are similar.
To motivate our introduction of the third basic operator of the subject, let X be a standard Ornstein–Uhlenbeck process on H, defined as a centered Gaussian process on R × H with covariance function E(Xsh)(Xtk) = e−|t−s| ⟨h, k⟩, h, k ∈H, s, t ∈R.
This makes X a stationary Markov process on H, such that Xt is isonormal Gaussian on H for every t ∈R, whereas Xth is a real OU-process in t for each h ∈H with ∥h∥= 1. For the construction of X, fix any ONB h1, h2, . . . ∈H, let the Xthj be independent OU-processes in R, and extend to H by linearity and continuity6.
For the present purposes, we may put X0 = ζ, and define the associated transition operators Tt directly on L2(ζ) by Tt(f ◦ζ) = E f ◦Xt ζ , t ≥0, 6Here the possible path continuity of X plays no role.
21. Malliavin Calculus 479 for measurable functions f on RH with f(ζ) ∈L2. In fact, any ξ ∈L2(ζ) has such a representation by Lemma 1.14 or Theorem 14.26, and since Xt d = ζ, the right-hand side is a.s. independent of the choice of f.
To identify the operators Tt, we note that the Hermite polynomials pn, normalized as in Theorem 14.25, satisfy the identity exp xt −1 2 t2 = Σ n≥0 pn(x) tn n!, x, t ∈R.
(8) From the moving average representation in Chapter 14, we further note that for any t ≥0, Xt = e−tζ + √ 1 −e−2t ζt, (9) where ζ d = ζt ⊥ ⊥ζ.
Theorem 21.23 (Ornstein–Uhlenbeck semi-group) For a standard OU-pro-cess X on H with X0 = ζ, the transition operators Tt on L2(ζ) are given by Tt ξ = Σ n≥0 e−ntJnξ, ξ ∈L2(ζ), t ≥0.
Proof: By Proposition 4.2 and Corollary 6.5, it is enough to consider ran-dom variables of the form ξ = f(ζh) with ∥h∥= 1. By the uniqueness theorem for moment-generating functions, we may take f(x) = exp(rx −1 2 r2) for an arbitrary r ∈R. Using the representation (9) with a = e−t and b = √ 1 −e−2t, we obtain f(Xt) = exp rXth −1 2 r2 = exp ra ζh + rb ζt −1 2 r2 = exp ra ζh −1 2 r2a2 exp rb ζth −1 2 r2b2 .
In particular, we get for t = 0 by (8) and Theorem 14.25 ξ = f(ζ) = exp r ζh −1 2 r2 = Σ n≥0 pn(ζh) rn n!
= Σ n≥0 ζnh⊗h rn n!, which shows that Jnξ = ζnh⊗n rn/n!. For general t, we note that ζth is N(0, 1) with ζt ⊥ ⊥ζ, and conclude by the same calculation that E f(Xt) ζ = exp ra ζh −1 2 r2a2 = Σ n≥0 ζnh⊗h (ra)n n!
= Σ n≥0 e−ntJnξ.
2 The Tt clearly form a contraction semi-group on L2(ζ), and we proceed to determine the associated generator, in the sense of convergence in L2.
480 Foundations of Modern Probability Theorem 21.24 (Ornstein–Uhlenbeck generator) For a standard OU-process X on H with X0 = ζ, the generator L and its domain L ⊂L2(ζ) are given by (i) ξ ∈L ⇔Σ n>0 n2∥Jnξ∥2 < ∞, (ii) Lξ = Σ n>0 L(Jnξ) = −Σ n>0 n Jnξ in L2, (iii) ∥Lξ∥2 = Σ n>0 ∥L(Jnξ)∥2 = Σ n>0 n2∥Jnξ∥2.
Proof: Let an operator L with domain L be given by (i)−(ii), and let G denote the generator of (Tt) with domain G. First let ξ ∈L. Since Jn and Tt commute on L2(ζ) and n ≥t−1(1 −e−nt) →n as t →0 for fixed n, we get by dominated convergence as t →0 t−1 Tt ξ −ξ −Lξ 2 = Σ n n Jnξ −t−1 1 −e−nt Jnξ 2 = Σ n n −t−1 1 −e−nt 2 ∥Jn∥2 →0, and so ξ ∈G with Gξ = Lξ, proving that G = L on L ⊂G.
To show that G ⊂L, let ξ ∈G be arbitrary. Since the Jn act continuously on L2(ζ), we get as before Jn(Gξ) ←t−1 Tt(Jnξ) −Jnξ →−n Jnξ, and so Σ n n2∥Jnξ∥2 = Σ n Jn(Gξ) 2 = ∥Gξ∥2 < ∞, which shows that indeed ξ ∈L.
2 We may now establish a remarkable relationship between the three basic operators of Malliavin calculus: Theorem 21.25 (operator relation) The operators D, D∗, L are related by (i) ξ ∈L ⇔ξ ∈D1,2, D ξ ∈D∗, (ii) L = −D∗D on L.
Proof: First let ξ ∈D1,2 and D ξ ∈D∗. Then for any η ∈Hn, we get by the D/D∗-duality and Theorem 21.7 E η(D∗D ξ) = E⟨D η, D ξ⟩ = n E η(Jnξ), and so Jn(D∗D ξ) = n Jnξ, which shows that ξ ∈L with D∗Dξ = −Lξ.
It remains to show that L ⊂dom(D∗D), so let ξ ∈L be arbitrary. Then Theorem 21.24 yields Σ n n ∥Jnξ∥2 ≤Σ n n2∥Jnξ∥2 < ∞, and so ξ ∈D1,2 by Theorem 21.8. Next, Theorems 21.8 and 21.24 yield for any η ∈D1,2 21. Malliavin Calculus 481 E⟨D η, D ξ⟩= Σ n n E(Jnη)(Jnξ) = −E η(Lξ), and so D ξ ∈D∗by Theorem 21.14. Thus, we have indeed ξ ∈dom(D∗D). 2 We proceed to show that L behaves like a second order differential operator, when acting on smooth random variables. For comparison, recall from Chapter 32 that an OU-process in Rd solves the SDE dXt = √ 2 dBt −Xt dt, and hence that its generator A agrees formally with the expression for L below.
See also Theorem 17.24 for an equivalent formula in the context of general Feller diffusions.
Theorem 21.26 (L as elliptic operator) For smooth random variables ξ = f(ζ¯ h) with ¯ h = (h1, . . . , hn) ∈Hn, we have Lξ = Σ i, j ≤n ∂2 ij f(ζ¯ h) ⟨hi, hj⟩−Σ i≤n ∂if(ζ¯ h) ζhi.
Proof: Since ξ ∈D1,2, we have by (2) D ξ = Σ n ∂if(ζ¯ h) hi, so that in particular D ξ ∈SH ⊂D∗. Using (2) again, along with Lemma 21.17, we obtain D∗(D ξ) = Σ i≤n ∂if(ζ¯ h) ζhi −Σ i, j ≤n ∂2 ij f(ζ¯ h) ⟨hi, hj⟩, and the assertion follows by Theorem 21.25.
2 We may extend the last result to a chain rule for the OU-operator L, which may be regarded as a second order counterpart of the first order chain rule (3) for D.
Corollary 21.27 (second order chain rule) Let ξ1, . . . , ξn ∈D2,4, and let f ∈C2(Rn) with bounded first and second order partial derivatives.
Then f(ξ) ∈L with ξ = (ξ1, . . . , ξn), and we have L(f ◦ξ) = Σ i, j ≤n ∂2 ij f(ξ) !
D ξi, D ξj " + Σ i≤n ∂if(ξ) Lξi.
Proof: Approximate ξ in the norm ∥·∥2,4 by smooth random variables, and f by smooth functions in ˆ C(Rn). The result then follows by the continuity of L in the norm ∥· ∥2,2. We omit the details.
2 We now come to the remarkable fact that the divergence operator D∗, here introduced as the adjoint of the differentiation operator D, is a non-anticipating extension of the Itˆ o integral of stochastic calculus. This is why it is often regarded as an integral and may even be denoted by an integral sign.
482 Foundations of Modern Probability Here we take ζ to be an isonormal Gaussian process on H = L2(R+, λ), generating a Brownian motion B with Bt = ζ[0, t] a.s. for all t ≥0. Letting F be the filtration induced by B, we write L2 F(P ⊗λ) for the class of F-progressive processes X on R+ with E X2 t dt < ∞. For such a process X, we know from Chapter 18 that the Itˆ o integral Xt dBt exists.
Theorem 21.28 (D∗as extended Itˆ o integral) Let ζ be an isonormal process on L2(R+, λ), generating a Brownian motion B with induced filtration F. Then (i) L2 F(P ⊗λ) ⊂D∗, (ii) D∗X = ∞ 0 Xt dBt a.s., X ∈L2 F(P ⊗λ).
Proof: First let X be a predictable step process of the form Xt = Σ i≤m ξi1(ti,ti+1](t), t ≥0, where 0 ≤t1 < · · · < tm+1 and each ξi is an Fti-measurable L2-random variable.
By Corollary 21.19, we have X ∈D∗with ∞ 0 Xt dBt = Σ i≤m ξi Bti+1 −Bti = Σ i≤m D∗ ξi1(ti,ti+1] = D∗X.
(10) For general X ∈L2 F(P ⊗λ), Lemma 18.23 yields some predictable step pro-cesses Xn as above, such that Xn →X in L2(P ⊗λ). Applying (10) to the Xn and using Theorem 18.11, we get D∗Xn = ∞ 0 Xn t dBt → ∞ 0 Xt dBt in L2.
Hence, Theorem 21.14 yields7 X ∈D∗and D∗X = ∞ 0 Xt dBt.
2 We proceed with a striking differentiation property of the Itˆ o integral.
Theorem 21.29 (derivative of Itˆ o integral, Pardoux & Peng) Let ζ be an isonormal Gaussian process on L2([0, 1], λ) generating a Brownian motion B, and consider a B-adapted process X ∈L2(P ⊗λ). Then (i) X ∈ˆ D1,2 ⇔(X · B)1 ∈D1,2, in which case (ii) X · B ∈ˆ D1,2, and (X · B)t ∈D1,2, t ∈[0, 1], (iii) Ds t 0 Xr dBr = Xs1s≤t + t s (DsXr) dBr a.e., s, t ∈[0, 1].
7Indeed, the adjoint of a closed operator is always closed.
21. Malliavin Calculus 483 Proof: First let X ∈ˆ D1,2. Since for fixed t the process s →DtXs is square integrable by definition and adapted by Corollary 21.13, we have X ∈D∗by Theorem 21.28 (i). Furthermore, the isometry of the Itˆ o integral yields E 1 0 dt 1 t DtXs dBs 2 = 1 0 dt 1 t E(DtXs)2ds < ∞, and so X · B ∈ˆ D1,2.
By Theorems 21.20 and 21.28 (ii) we conclude that (X · B)1 ∈D1.2 for all t ∈[0, 1], and that (iii) holds for s ≤t. Equation (iii) remains true for s > t, since all three terms then vanish by Corollary 21.13 (i).
It remains to prove the implication ‘⇐’ in (i), so assume that (X·B)1 ∈D1,2.
Let Xn t denote the projection of Xt onto Pn = H0 ⊕· · ·⊕Hn. Then (Xn ·B)t is the projection of (X·)t onto Pn+1, and so (Xn ·B)1 →(X ·B)1 in the topology of D1,2. Applying (iii) to each process Xn and using the martingale property of the Itˆ o integral, we obtain 1 0 E Ds(Xn · B)1 2ds = 1 0 E|Xn s |2 ds + 1 0 ds s 0 E(DrXn s )2 dr ≥ 1 0 ds s 0 E(DrXn s )2 dr = E DXn 2 λ2.
By a similar estimate for DXm −DXn we conclude that DXn converges in L2(λ2), and so X ∈ˆ D1,2 by the closure of D. (Alternatively, we may use a version of Lemma 21.6.) 2 Next recall from Lemma 19.12 that any B-measurable random variable ξ ∈L2 with Eξ = 0 can be written as an Itˆ o integral ∞ 0 Xt dBt, for an a.e.
unique process X ∈L2 F(P ⊗λ). We may now give an explicit formula for the integrand X.
Theorem 21.30 (Brownian functionals, Clark, Ocone) Let B be a Brownian motion on [0, 1] with induced filtration F. Then for any ξ ∈D1,2, (i) ξ = E ξ + 1 0 E Dt ξ Ft dBt, (ii) E| ξ |p < ⌢|E ξ|p + 1 0 E|Dt ξ|p dt, p ≥2.
For the stated formula to make sense, we need to choose a progressively measurable version of the integrand. Such an optional projection of Dt ξ is easily constructed by a suitable approximation.
Proof: (i) Assuming ξ = Σ nζnfn, we get by Theorem 21.7 and Corollary 14.28 E Dt ξ Ft = Σ n>0 n E ζn−1fn(·, t) Ft = Σ n>0 n ζn−1 1[0,t]n−1fn(·, t) .
484 Foundations of Modern Probability The left-hand side is further adapted and square integrable, hence belongs to D∗by Theorem 21.28 (i). Applying D∗to the last sum and using Theorems 21.15 and 21.28 (ii), we conclude that 1 0 E Dt ξ Ft dBt = Σ n>0 ζnfn = ξ −Eξ, where the earlier factor n is canceled out by the symmetrization.
(ii) Using Jensen’s inequality (three times), along with Fubini’s theorem and the Burkholder inequality in Theorem 18.7, we get from (i) E| ξ |p −|E ξ|p < ⌢E 1 0 E Dt ξ Ft dBt p < ⌢E 1 0 E Dt ξ Ft 2dt p/2 ≤E 1 0 E Dt ξ Ft p dt ≤ 1 0 E|Dt ξ|p dt.
2 The Malliavin calculus can often be used to show the existence and possible regularity of a density. The following simple result illustrates the idea.
Theorem 21.31 (functionals with smooth density) For an isonormal Gaus-sian process ζ on H, let ξ ∈L2(ζ) with ξ ∈D1,2, D ξ ∈D∗, ∥D ξ∥> 0.
Then L(ξ) ≪λ with the bounded, continuous density8 g(x) = E D∗ ∥D ξ∥−2D ξ ; ξ > x , x ∈R.
Proof: Let f ∈ˆ C(R) be arbitrary, and put F(x) = x −∞f(y) dy. Then the chain rule in Theorem 21.1 yields F(ξ) ∈D1,2 and !
D(F ◦ξ), D ξ " = f(ξ) ∥D ξ∥2.
Using the D/D∗-duality and Fubini’s theorem, we obtain Ef(ξ) = E !
D(F ◦ξ), ∥D ξ∥−2D ξ " = E D∗ ∥D ξ∥−2D ξ F(ξ) = E D∗ ∥D ξ∥−2D ξ ξ −∞f(x) dx = E D∗ ∥D ξ∥−2D ξ ; ξ > x f(x) dx.
Since f was arbitrary, this gives the asserted density.
2 Weaker conditions will often ensure the mere absolute continuity, though we may then be unable to give an explicit formula for the density.
8Since ∥D ξ∥−2 is a random variable, not a constant, it can’t be pulled out in front of E or even of D∗.
21. Malliavin Calculus 485 Theorem 21.32 (absolute continuity) For an isonormal Gaussian process ζ on H, let ξ be a ζ-measurable random variable. Then ξ ∈D1,1 loc D ξ ̸= 0 a.s. ⇒L(ξ) ≪λ.
Proof: By localization, we may let ξ ∈D1,1, and we may further assume that μ = L(ξ) is supported by [−1, 1]. We need to show that if A ∈B[−1,1] with λA = 0, then μA = 0. As in Lemma 1.37, we may then choose some smooth functions fn : [−1, 1] →[0, 1] with bounded derivatives, such that fn →1A a.e.
λ + μ. Writing Fn(x) = x −1 fn(y) dy, we see by dominated convergence that Fn(x) →0 for all x. Since the Fn are again smooth with bounded derivatives, the chain rule in Theorem 21.1 yields Fn(ξ) ∈D1,1 with D(Fn ◦ξ) = fn(ξ) D ξ →1{ξ ∈A} D ξ a.s.
Since also Fn(ξ) →0, the closure property of D gives 1{ξ ∈A}D ξ = 0 a.s., and since D ξ ̸= 0 a.s. by hypothesis, we obtain μA = P{ξ ∈A} = 0.
2 Exercises 1. Show that when H = L2(T, μ) and ξ ∈D, we can choose the process Dt ξ to be product measurable on T × Ω.
2. Let H1 be the space of functions x ∈C0([0, 1]) of the form xt = t 0 ˙ xs ds with ˙ x ∈H = L2([0, 1]).
Show that the relation ⟨x, y⟩H1 = ⟨˙ x, ˙ y⟩H defines an inner product on H1, making it a Hilbert space isomorphic to H, and that the natural embedding H1 →C0 is injective and continuous.
3. Let B be a Brownian motion on [0, 1] generated by an isonormal Gaussian process ζ on L2([0, 1]), so that Bt = ζ1[0,t]. For any f ∈C∞(Rd) and t ∈[0, 1], show that the random variable ξ = f(Bt) is smooth.
4.
For ξ = f(Bt) as above and h ∈L2, show that ⟨D ξ, h⟩agrees with the directional derivative of ξ, in the direction of h · λ ∈H1. (Hint: Write ⟨Dξ, h⟩= f′(Bt)(h · λ)ti = (d/dε)ξ(B + εh · λ)ε=0.) 5. Let B be a Brownian motion generated by an isonormal Gaussian process ζ on L2([0, 1]). Show that for any t ∈[0, 1], n ∈N, and f ∈ˆ C∞, the random variable ξ = f(Bn t ) belongs to D1,2. (Hint: Note that Bn t has a finite chaos expansion. Then use Lemma 21.5 and Corollary 21.8.) 6. For a Brownian motion B, some fixed t1, . . . , tn, and a suitably smooth function f on Rd, define ξ = f(B1, . . . , Bn). Use Corollary 21.27 to calculate an expression for Lξ.
7. Let B be a Brownian motion generated by an isonormal Gaussian process ζ on L2([0, 1]), and fix any n ∈N. For X = Bn, show that X ∈D∗, and calculate an expression for D∗X. Check that X satisfies the conditions in Theorem 21.28, compute the corresponding Itˆ o integral 1 0 Xt dBt, and compare the two expressions.
(Hint: Use Theorem 21.15.) 486 Foundations of Modern Probability 8. For a Brownian motion B on [0, 1] generated by the isonormal process ζ, define ξ = Bt for a fixed t > 0. Calculate Dt ξ, and verify the expression in Theorem 21.30.
9. For a Brownian motion B and a fixed t > 0, use Theorem 21.31 to calculate an expression for the density of L(Bt), and compare with the known formula for the normal density.
VII. Convergence and Approximation Here our primary objective is to derive functional versions of the major limit theorems of probability theory, through the development of general embedding and compactness arguments. The intuitive Skorohod embedding in Chapter 22 leads easily to functional versions of the central limit theorem and the law of the iterated logarithm, and allows extensions to renewal and empirical processes, as well as to martingales with small jumps. Though the compactness approach in Chapter 23 is more subtle, as it depends on the development of compactness criteria in the underlying function or measure spaces, it has a much wider scope, as it applies even to processes with jumps and to random sets and measures.
In Chapter 24, we develop some powerful large deviation principles, providing precise exponential bounds for tail probabilities in a great variety of contexts.
The first two chapters contain essential core material, whereas the last chapter is more advanced and might be postponed.
−−− 22. Skorohod embedding and functional convergence. Here the key step is to express a centered random variable as the value of a Brownian motion B at a suitably bounded optional time. Iterating the construction yields an embedding of an entire random walk X into B. By estimating the associated approximation error, we can derive a variety of weak or strong limit theorems for X from the corresponding results for B, leading in particular to functional versions of the central limit theorem and the law of the iterated logarithm, with extensions to renewal and empirical processes. A similar embedding applies to martingales with uniformly small jumps.
23. Convergence in distribution. Here we develop the compactness ap-proach to convergence in distribution in function and measure spaces. The key result is Prohorov’s theorem, which shows how the required relative compact-ness is equivalent to tightness, under suitable conditions on the underlying space. The usefulness of the theory relies on our ability to develop efficient tightness criteria. Here we consider in particular the spaces D R+,S of functions in S with only jump discontinuities, and the space MS of locally finite mea-sures on S, leading to a wealth of useful convergence criteria.
24. Large deviations. The large deviation principles (LDPs) developed in this chapter can be regarded as refinements of the weak law of large numbers in a variety of contexts. The theory involves some general extension principles, leading to an extremely versatile and powerful body of methods and results, covering a wide range of applications. In particular, we include Schilder’s LDP for Brownian motion, Sanov’s LDP for empirical distributions, the Freidlin– Wentzel LDP for stochastic dynamical systems, and Strassen’s functional law of the iterated logarithm.
487 Chapter 22 Skorohod Embedding and Functional Convergence Embedding of random walk, Brownian martingales, moment identities and estimates, rate of continuity, approximations of random walk, func-tional central limit theorem, laws of the iterated logarithm, arcsine laws, approximation of renewal processes, empirical distribution functions, embedding and approximation of martingales, martingales with small jumps, time-scale comparison In Chapter 6 we used analytic methods to derive criteria for sums of inde-pendent random variables to be approximately Gaussian. Though this may remain the easiest approach to the classical limit theorems, such results are best understood when viewed in the context of some general approximation results for random processes. The aim of this chapter is to develop the purely probabilistic technique of Skorohod embedding to derive such functional limit theorems.
The scope of applications of the present method extends far beyond the classical limit theorems. This is because some of the most important function-als of a random process, such as the supremum up to a fixed time or the time at which the maximum occurs, can not be expressed in terms of the values at finitely many times. The powerful new technique allows us to extend some basic result for Brownian motion from Chapter 14, including the acsine laws and the law of the iterated logarithm, to random walks and related sequences.
Indeed, the technique of Skorohod embedding extends even to a wide class of martingales with small jumps.
From the statements for random walks, similar results can be deduced for various related processes. In particular, we prove a functional central limit theorem and a law of the iterated logarithm for renewal processes, and show how suitably normalized versions of the empirical distribution functions, based on a sample of i.i.d. random variables, can be approximated by a Brownian bridge.
In the simplest setting, we may consider a random walk (Xn), based on some i.i.d. random variables ξk with mean 0 and variance 1. Here we can prove the existence of a Brownian motion B and some optional times τ1 ≤τ2 ≤· · · , such that Xn = Bτn a.s. for every n. For applications, it is essential to choose the τn such that the differences Δτn become i.i.d. with finite mean. The path of the step process X[t] will then be close to that of B, and many results for Brownian motion carry over, at least approximately, to the random walk.
489 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 490 Foundations of Modern Probability The present account depends in many ways on material from previous chap-ters. Thus, we rely on the basic theory of Brownian motion, as set forth in Chapter 14. We also make frequent use of ideas and results from Chapter 9 on martingales and optional times. Finally, occasional references are made to Chapter 5 for empirical distributions, to Chapter 8 for the transfer theorem, to Chapter 12 for random walks and renewal processes, and to Chapter 15 for Poisson processes.
More general approximations and functional limit theorems will be obtained by different methods in Chapters 23–24 and 30. The present approximations of martingales with small jumps are closely related to the time-change results for continuous local martingales in Chapter 19.
To clarify the basic ideas, we begin with a detailed discussion of the classical Skorohod embedding of random walks. Say that a random variable ξ or its distribution is centered if E ξ = 0.
Theorem 22.1 (embedding of random walk, Skorohod) Let ξ1, ξ2, . . . be i.i.d., centered random variables, and put Xn = ξ1 + · · · + ξn. Then there exists a filtered probability space with a Brownian motion B and some optional times 0 = τ0 ≤τ1 ≤. . . with i.i.d. differences Δτn = τn −τn−1, such that (Bτn) d = (Xn), EΔτn = E ξ2 1, E(Δτn)2 ≤4E ξ4 1.
Here the moment conditions are crucial, since without them the statement would be trivially true and totally useless. In fact, we could then take B⊥ ⊥(ξn) and choose recursively τn = inf t ≥τn−1; Bt = Xn .
Our proof of Theorem 22.1 is based on some lemmas. First we list a few martingales associated with Brownian motion.
Lemma 22.2 (Brownian martingales) For a Brownian motion B, these pro-cesses are martingales: Bt, B2 t −t, B4 t −6 tB2 t + 3 t2.
Proof: Note that EBt = EB3 t = 0, EB2 t = t, EB4 t = 3 t2.
Write F for the filtration induced by B, let 0 ≤s ≤t, and recall that the process ˜ Bt = Bs+t −Bs is again a Brownian motion independent of Fs. Hence, E B2 t Fs = E B2 s + 2Bs ˜ Bt−s + ˜ B2 t−s Fs = B2 s + t −s.
Moreover, E B4 t Fs = E B4 s + 4B3 s ˜ Bt−s + 6 B2 s ˜ B2 t−s + 4Bs ˜ B3 t−s + ˜ B4 t−s Fs = B4 s + 6 (t −s)B2 s + 3 (t −s)2, and so E B4 t −6 tB2 t Fs = B4 s −6 sB2 s + 3 (s2 −t2).
2 By optional sampling, we may deduce some useful moment formulas.
22. Skorohod Embedding and Functional Convergence 491 Lemma 22.3 (moment relations) For any Brownian motion B and optional time τ such that Bτ is bounded, we have EBτ = 0, E τ = EB2 τ, E τ 2 ≤4EB4 τ.
(1) Proof: By optional stopping and Lemma 22.2, we get for any t ≥0 EBτ∧t = 0, E(τ ∧t) = EB2 τ∧t, (2) 3 E(τ ∧t)2 + EB4 τ∧t = 6 E(τ ∧t)B2 τ∧t.
(3) The first two relations in (1) follow from (2) by dominated and monotone convergence as t →∞. In particular, we have Eτ < ∞. We may then take limits even in (3), and conclude by dominated and monotone convergence, together with Cauchy’s inequality, that 3 E τ 2 + EB4 τ = 6 E τB2 τ ≤6 Eτ 2EB4 τ 1/2.
Writing r = (Eτ 2/EB4 τ)1/2, we get 3 r2 + 1 ≤6 r, which yields 3 (r −1)2 ≤2, and finally r ≤1 + (2/3)1/2 < 2.
2 Next we may write the centered distributions1 as mixtures of suitable two-point distributions.
For any a ≤0 ≤b, there exists a unique probability measure νa,b on {a, b} with mean 0, given by νa,b = δ0 when ab = 0, and otherwise νa,b = b δa −a δb b −a , a < 0 < b.
This defines a probability kernel ν : R−×R+ →R, where for mappings between measure spaces, measurability is defined in terms of the σ-fields generated by the evaluation maps πB : μ →μB, for arbitrary sets B in the underlying σ-field.
Lemma 22.4 (centered distributions as mixtures) For any centered distribu-tion μ on R, there exists a μ-measurable distribution ˜ μ on R−× R+, such that μ = ˜ μ(dx dy) νx,y.
Proof (Chung): Let μ± denote the restrictions of μ to R± \ {0}, define l(x) ≡x, and put c = l dμ+ = − l dμ−.
For any measurable function f : R →R+ with f(0) = 0, we get c fdμ = l dμ+ fdμ−− l dμ− fdμ+ = (y −x) μ−(dx) μ+(dy) f dνx,y, and so we may choose ˜ μ(dx dy) = μ{0} δ0,0(dx dy) + c−1(y −x) μ−(dx) μ+(dy).
1probability measures with mean 0 492 Foundations of Modern Probability The measurability of the mapping μ →˜ μ is clear by a monotone-class ar-gument, once we note that ˜ μ(A × B) is a measurable function of μ for any A, B ∈B.
2 The embedding in Theorem 22.1 may be constructed recursively, beginning with a single random variable ξ with mean 0.
Lemma 22.5 (embedding of random variable) For any Brownian motion B and centered distribution μ on R, choose a random pair (α, β)⊥ ⊥B with distri-bution ˜ μ as in Lemma 22.4, and define a random time τ and filtration F by τ = inf t ≥0; Bt ∈{α, β} , Ft = σ α, β; Bs, s ≤t .
Then τ is F-optional with L(Bτ) = μ, E τ = x2μ(dx), E τ 2 ≤4 x4μ(dx).
Proof: The process B is clearly an F-Brownian motion, and τ is F-optional as in Lemma 9.6 (ii). Using Lemma 22.3 and Fubini’s theorem, we get L(Bτ) = E L(Bτ | α, β) = E να,β = μ, E τ = E E(τ | α, β) = E x2να,β(dx) = x2μ(dx), E τ 2 = E E(τ 2 | α, β) ≤4 E x4να,β(dx) = 4 x4μ(dx).
2 Proof of Theorem 22.1: Let μ be the common distribution of the ξn. Intro-duce a Brownian motion B and some independent i.i.d. random pairs (αn, βn), n ∈N, with the distribution ˜ μ of Lemma 22.4, and define recursively some random times 0 = τ0 ≤τ1 ≤· · · by τn = inf t ≥τn−1; Bt −Bτn−1 ∈{αn, βn} , n ∈N.
Each τn is clearly optional for the filtration Ft = σ{αk, βk, k ≥1; Bt}, t ≥0, and B is an F-Brownian motion.
By the strong Markov property at τn, the process B(n) t = Bτn+t −Bτn is again a Brownian motion independent of Gn = σ{τk, Bτk; k ≤n}.
Since also (αn+1, βn+1)⊥ ⊥(B(n), Gn), we obtain (αn+1, βn+1, B(n)) ⊥ ⊥Gn, and so the pairs (Δτn, ΔBτn) are i.i.d. The remaining assertions now follow by Lemma 22.5.
2 Using the last theorem, we may approximate a centered random walk by a Brownian motion. As before, we assume the underlying probability space to be rich enough to support the required randomization variables.
22. Skorohod Embedding and Functional Convergence 493 Theorem 22.6 (approximation of random walk, Skorohod, Strassen) Let ξ1, ξ2, . . . be i.i.d. random variables with E ξi = 0 and E ξ2 i = 1, and put Xn = ξ1 + · · · + ξn. Then there exists a Brownian motion B, such that (i) t−1/2 sup s≤t X[s] −Bs P →0, t →∞, (ii) lim t→∞ X[t] −Bt 2t log log t = 0 a.s.
The proof of (ii) requires the following estimate.
Lemma 22.7 (rate of continuity) For a real Brownian motion B, lim r↓1 lim sup t→∞ sup t≤u≤rt |Bu −Bt| 2t log log t = 0 a.s.
Proof: Write h(t) = (2t log log t)1/2. It is enough to show that lim r↓1 lim sup n→∞ sup rn≤t≤rn+1 |Bt −Brn| h(rn) = 0 a.s.
(4) Proceeding as in the proof of Theorem 14.18, we get2 as n →∞for fixed r > 1 and c > 0 P sup t∈[rn,rn+1] Bt −Brn > ch(rn) < ⌢P B{rn(r −1)} > ch(rn) < ⌢n−c2/(r−1)( log n)−1/2.
Choosing c2 > r −1 and using the Borel–Cantelli lemma, we see that the lim sup in (4) is a.s. bounded by c, and the assertion follows as we let r →1. 2 For the main proof, we introduce the modulus of continuity w(f, t, h) = sup r,s≤t, |r−s|≤h |fr −fs|, t, h > 0.
Proof of Theorem 22.6: By Theorems 8.17 and 22.1, we may choose a Brownian motion B and some optional times 0 ≡τ0 ≤τ1 ≤· · · , such that Xn = Bτn a.s. for all n, and the differences τn −τn−1 are i.i.d. with mean 1.
Then τn/n →1 a.s. by the law of large numbers, and so τ[t]/t →1 a.s. Now (ii) follows by Lemma 22.7.
Next define δt = sup s≤t τ[s] −s , t ≥0, and note that the a.s. convergence τn/n →1 implies δt/t →0 a.s. Fixing any t, h, ε > 0, and using the scaling property of B, we get 2Recall that a < ⌢b means a ≤c b for some constant c > 0.
494 Foundations of Modern Probability P t−1/2 sup s≤t Bτ[s] −Bs > ε ≤P w B, t + th, th > ε t1/2 + P{δt > th} = P w(B, 1 + h, h) > ε + P{t−1δt > h}.
Here the right-hand side tends to zero as t →∞and then h →0, and (i) follows.
2 For an immediate application of the last theorem, we may extend the law of the iterated logarithm to suitable random walks.
Corollary 22.8 (law of the iterated logarithm, Hartman & Wintner) Let ξ1, ξ2, . . . be i.i.d. random variables with E ξi = 0 and E ξ2 i = 1, and put Xn = ξ1 + · · · + ξn. Then lim sup n→∞ Xn 2n log log n = 1 a.s.
Proof: Combine Theorems 14.18 and 22.6.
2 The first approximation in Theorem 22.6 yields a classical weak convergence result for random walks. Here we introduce the space D [0,1] of right-continuous functions on [0, 1] with left-hand limits (rcll). For the present purposes, we equip D [0,1] with the norm ∥x∥= supt |xt| and the σ-field D generated by all evaluation maps πt : x →xt. Since the norm is clearly D-measurable3, the same thing is true for the open balls B r x = {y; ∥x −y∥< r}, x ∈D [0,1], r > 0.
For a process X with paths in D [0,1] and a functional f : D [0,1] →R, we say that f is a.s. continuous at X if X ̸∈Df a.s., where Df denotes the set of functions4 x ∈D [0,1], such that f is discontinuous at the path x.
We may now state a version of the functional central limit theorem.
Theorem 22.9 (functional central limit theorem, Donsker) Let ξ1, ξ2, . . . be i.i.d. random variables with E ξi = 0 and E ξ2 i = 1, and define Xn t = n−1/2 Σ k≤nt ξk, t ∈[0, 1], n ∈N.
Let the functional f : D [0,1] →R be measurable and a.s. continuous at the paths of a Brownian motion B on [0, 1]. Then f(Xn) d →f(B).
This follows immediately from Theorem 22.6, together with the following lemma.
3Note, however, that D is strictly smaller than the Borel σ-field induced by the norm.
4The measurability of Df is irrelevant here, provided we interpret the a.s. condition in the sense of inner measure.
22. Skorohod Embedding and Functional Convergence 495 Lemma 22.10 (approximation and convergence) For n ∈N, let Xn, Yn, Y be rcll processes on [0, 1] such that Yn d = Y, n ∈N; ∥Xn −Yn∥ P →0, and let the functional f : D [0,1] →R be measurable and a.s. continuous at Y .
Then f(Xn) d →f(Y ).
Proof: Put T = Q ∩[0, 1]. By Theorem 8.17 there exist some processes X′ n on T, such that (X′ n, Y ) d = (Xn, Yn) on T for all n. Then each X′ n is a.s.
bounded with finitely many upcrossings of any non-degenerate interval, and so the process ˜ Xn(t) = X′ n(t+) exists a.s. with paths in D [0,1]. From the right continuity of paths, it is also clear that ( ˜ Xn, Y ) d = (Xn, Yn) on [0, 1] for every n.
To obtain the desired convergence, we note that ˜ Xn −Y d = ∥Xn −Yn∥ P →0, and hence f(Xn) d = f( ˜ Xn) P →f(Y ), as in Lemma 5.3.
2 In particular, we may recover the central limit theorem in Proposition 6.10 by taking f(x) = x1 in Theorem 22.9. We may also obtain results that go beyond the classical theory, such as by choosing f(x) = supt |xt|. For a less ob-vious application, we may extend the arcsine laws of Theorem 14.16 to suitable random walks. Recall that a random variable ξ is said to be arcsine distributed if ξ d = sin2 α, where α is U(0, 2π).
Theorem 22.11 (arcsine laws, Erd¨ os & Kac, Sparre-Andersen) Let ξ1, ξ2, . . .
be i.i.d., non-degenerate random variables, and put Xn = ξ1 + · · · + ξn. Define for n ∈N τ 1 n = n−1 Σ k≤n 1{Xk > 0}, τ 2 n = n−1 min k ≥0; Xk = max i≤n Xi , τ 3 n = n−1 max k ≤n; Xk Xn ≤0 , and let τ be arcsine distributed. Then (i) τ k n d →τ for k = 1, 2, 3 when E ξi = 0 and E ξ2 i < ∞, (ii) τ k n d →τ for k = 1, 2 when L(ξi) is symmetric.
For the proof, we introduce on D [0,1] the functionals f1(x) = λ t ∈[0, 1]; xt > 0 , f2(x) = inf t ∈[0, 1]; xt ∨xt−= sup s≤1 xs , f3(x) = sup t ∈[0, 1]; xt x1 ≤0 .
The following result is elementary.
496 Foundations of Modern Probability Lemma 22.12 (continuity of functionals) The functionals fi are measurable, and • f1 is continuous at x ⇔λ{t; xt = 0} = 0, • f2 is continuous at x ⇔xt ∨xt−has a unique maximum, • f3 is continuous at x, if all local extremes of xt or xt−on (0, 1] are ̸= 0.
Proof of Theorem 22.11: Clearly, τ i n = fi(Xn) for n ∈N and i = 1, 2, 3, where Xn t = n−1/2X[nt], t ∈[0, 1], n ∈N.
(i) By Theorems 14.16 and 22.9, it suffices to show that each fi is a.s.
continuous at B. Thus, we need to verify that B a.s. satisfies the conditions in Lemma 22.12. This is obvious for f1, since Fubini’s theorem yields E λ t ≤1; Bt = 0 = 1 0 P{Bt = 0}dt = 0.
The conditions for f2 and f3 follow easily from Lemma 14.15.
(ii) Since τ 1 n d = τ 2 n by Corollary 27.8, it is enough to consider τ 1 n. Then introduce an independent Brownian motion B, and define σε n = n−1 Σ k≤n 1 εBk + (1 −ε)Xk > 0 , n ∈N, ε ∈(0, 1].
By (i) together with Theorem 12.12 and Corollary 27.8, we have σε n d = σ1 n d →τ.
Since P{Xn = 0} →0, e.g. by Theorem 5.17, we also note that lim sup ε→0 σε n −τ 1 n ≤n−1 Σ k≤n 1{Xk = 0} P →0.
Hence, we may choose some εn →0 with σεn n −τ 1 n P →0, and Theorem 5.29 yields τ 1 n d →τ.
2 Results like Theorem 22.9 are called invariance principles, since the limiting distribution of f(Xn) is the same5 for all i.i.d. sequences (ξk) with Eξi = 0 and Eξ2 i = 1. This is often useful for applications, since a direct computation may be possible for a special choice of distribution, such as for P{ξk = ±1} = 1 2.
The approximations in Theorem 22.6 yield similar results for renewal pro-cesses, here regarded as non-decreasing step processes.
Theorem 22.13 (approximation of renewal process) Let N be a renewal pro-cess based on a distribution μ with mean 1 and variance σ2 ∈(0, ∞). Then there exists a Brownian motion B, such that (i) t−1/2 sup s≤t Ns −s −σBs P →0, t →∞, (ii) lim t→∞ Nt −t −σBt 2t log log t = 0 a.s.
5sometimes also referred to as universality 22. Skorohod Embedding and Functional Convergence 497 Proof: (ii) Let τ0, τ1, . . . be the renewal times of N, and introduce the random walk Xn = n −τn + τ0, n ∈Z+. Choosing a Brownian motion B as in Theorem 22.6, we get as n →∞ Nτn −τn −σBn 2n log log n = Xn −σBn 2n log log n →0 a.s.
Since τn ∼n a.s. by the law of large numbers, we may replace n in the denom-inator by τn, and by Lemma 22.7 we may further replace Bn by Bτn. Hence, Nt −t −σBt 2t log log t →0 a.s. along (τn).
By Lemma 22.7 it is enough to show that τn+1 −τn 2τn log log τn →0 a.s., which is easily seen from Theorem 22.6.
(i) From Theorem 22.6, we further obtain n−1/2 sup k≤n Nτk −τk −σBk = n−1/2 sup k≤n Xk −τ0 −σBk P →0, and by Brownian scaling, n−1/2w(B, n, 1) d = w(B, 1, n−1) →0.
It is then enough to show that n−1/2 sup k≤n τk −τk−1 −1 = n−1/2 sup k≤n Xk −Xk−1 P →0, which is again clear from Theorem 22.6.
2 Proceeding as in Corollary 22.8 and Theorem 22.9, we may deduce an associated law of the iterated logarithm and a weak convergence result.
Corollary 22.14 (limits of renewal process) Let N be a renewal process based on a distribution μ with mean 1 and variance σ2 < ∞, and let the functional f : D [0,1] →R be measurable and a.s. continuous at a Brownian motion B.
Then (i) lim sup t→∞ ±(Nt −t) 2t log log t = σ a.s., (ii) f(Xr) d →f(B) as r →∞, where Xr t = Nrt −rt σ√r , t ∈[0, 1], r > 0.
Part (ii) above yields a similar result for empirical distribution functions, based on a sequence of i.i.d. random variables. The asymptotic behavior can then be expressed in terms of a Brownian bridge.
498 Foundations of Modern Probability Theorem 22.15 (approximation of empirical distributions) Let ξ1, ξ2, . . . be i.i.d. random variables with ordinary and empirical distribution functions F and ˆ F1, ˆ F2, . . . . Then there exist some Brownian bridges B1, B 2, . . . , such that sup x n1/2 ˆ Fn(x) −F(x) −B n ◦F(x) P →0, n →∞.
(5) Proof: Arguing as in the proof of Proposition 5.24, we may reduce to the case where the ξn are U(0, 1), and F(t) ≡t on [0, 1]. Then clearly n1/2 ˆ Fn(t) −F(t) = n−1/2 Σ k≤n 1{ξk ≤t} −t , t ∈[0, 1].
Now introduce for every n an independent Poisson random variable κn with mean n, and conclude from Proposition 15.4 that N n t = Σ k≤κn 1{ξk ≤t}, t ∈[0, 1], is a homogeneous Poisson process on [0, 1] with rate n. By Theorem 22.13, there exist some Brownian motions W n on [0, 1] with sup t≤1 n−1/2(N n t −nt) −W n t P →0.
For the associated Brownian bridges B n t = W n t −tW n 1 , we get sup t≤1 n−1/2(N n t −tN n 1 ) −B n t P →0.
To deduce (5), it is enough to show that n−1/2 sup t≤1 Σ k≤|κn−n| 1{ξk ≤t} −t P →0.
(6) Here |κn −n| P →∞, e.g. by Proposition 6.10, and so (6) holds by Proposition 5.24 with n1/2 replaced by |κn −n|. It remains to note that n−1/2|κn −n| is tight, since E(κn −n)2 = n.
2 We turn to a martingale version of the Skorohod embedding in Theorem 22.1 with associated approximation in Theorem 22.6.
Theorem 22.16 (embedding of martingale) For a martingale (Mn) with M0 = 0 and induced filtration (Gn), there exists a Brownian motion B with associated optional times 0 = τ0 ≤τ1 ≤· · · , such that a.s. for all n ∈N, (i) Mn = B τn, (ii) E Δτn Fn−1 = E (ΔMn)2 Gn−1 , (iii) E (Δτn)2 Fn−1 ≤4 E (ΔMn)4 Gn−1 , where (Fn) denotes the filtration induced by the pairs (Mn, τn).
22. Skorohod Embedding and Functional Convergence 499 Proof: Let μ1, μ2, . . . be probability kernels satisfying L ΔMn| Gn−1 = μn(M1, . . . , Mn−1; · ) a.s., n ∈N.
(7) Since the Mn form a martingale, we may assume that μn(x; ·) has mean 0 for all x ∈Rn−1. Define the associated measures ˜ μn(x; ·) on R2 as in Lemma 22.4, and conclude from the measurability part of the lemma that ˜ μn is a probability kernel from Rn−1 to R2. Next choose some measurable functions fn : Rn →R2 as in Lemma 4.22, such that L{fn(x, ϑ)} = ˜ μn(x, ·) when ϑ is U(0, 1).
Now fix a Brownian motion B′ and some independent i.i.d. U(0, 1) random variables ϑ1, ϑ2, . . . . Let τ ′ 0 = 0, and define recursively some random variables αn, βn, τ ′ n for n ∈N by (αn, βn) = fn B′ τ ′ 1, . . . , B′ τ ′ n−1, ϑn , (8) τ ′ n = inf t ≥τ ′ n−1; B′ t −B′ τ ′ n−1 ∈{αn, βn} .
(9) Since B′ is a Brownian motion for the filtration Bt = σ{(B′)t, (ϑn)}, t ≥0, and each τ ′ n is B -optional, the strong Markov property shows that B(n) t = B′ τ ′ n+t − B′ τ ′ n is again a Brownian motion independent of F′ n = σ{τ ′ k, B′ τ ′ k; k ≤n}. Since also ϑn+1⊥ ⊥(B(n), F′ n), we have (B(n), ϑn+1)⊥ ⊥F ′ n. Writing G′ n = σ{B′ τ ′ k; k ≤n}, we conclude that Δτ ′ n+1, ΔB′ τ ′ n+1 ⊥ ⊥ G′ n F′ n.
(10) Using (8) and Theorem 8.5, we get L αn, βn G′ n−1 = ˜ μn B′ τ ′ 1, . . . , B′ τ ′ n−1; · .
(11) Since also B(n−1)⊥ ⊥(αn, βn, G′ n−1), we have B(n−1)⊥ ⊥G′ n−1(αn, βn), and B(n−1) is conditionally a Brownian motion. Applying Lemma 22.5 to the conditional distributions given G′ n−1, we get by (9), (10), and (11) L ΔB′ τ ′ n G′ n−1 = μn B′ τ ′ 1, . . . , B′ τ ′ n−1; · , (12) E Δτ ′ n F′ n−1 = E Δτ ′ n G′ n−1 = E (ΔB′ τ ′ n)2 G′ n−1 , (13) E (Δτ ′ n)2 F′ n−1 = E (Δτ ′ n)2 G′ n−1 ≤4 E (ΔB′ τ ′ n)4 G′ n−1 .
(14) Comparing (7) and (12), we get (B′ τ ′ n) d = (Mn). By Theorem 8.17, we may then choose a Brownian motion B with associated optional times τ1, τ2, . . . , such that B, (Mn), (τn) d = B′, (B′ τ ′ n), (τ ′ n) .
All a.s. relations between the objects on the right, including their conditional expectations with respect to appropriate induced σ-fields, remain valid for the 500 Foundations of Modern Probability objects on the left. In particular, (i) holds for all n, and (13)−(14) imply the corresponding formulas (ii)−(iii).
2 The last theorem allows us to approximate a martingale with small jumps by a Brownian motion. For discrete-time martingales M, we define the quadratic variation [M] and predictable variation ⟨M⟩by [M]n = Σ k≤n (ΔMk)2, ⟨M⟩n = Σ k≤n E (ΔMk)2 Fk−1 , in analogy with the continuous-time versions in Chapters 18 and 20.
Theorem 22.17 (approximation of martingales with small jumps) For n ∈ N, let M n be an Fn-martingale M n on Z+ satisfying M n 0 = 0, |ΔM n k | ≤1, supk|ΔM n k | P →0, (15) put ζn = [M n]∞, and define Xn t = Σ k ΔM n k 1 [M n]k ≤t , t ∈[0, 1], n ∈N.
Then (i) (Xn −Bn)∗ ζn∧1 P →0 for some Brownian motions Bn, (ii) claim (i) remains true with [M n] replaced by ⟨M n⟩, (iii) the third condition in (15) can be replaced by Σ k P |ΔM n k | > ε Fn k−1 P →0, ε > 0.
(16) For the proof, we need to show that the time scales given by the sequences (τ n k ), [M n], and ⟨M n⟩are asymptotically equivalent.
Lemma 22.18 (time-scale comparison) In Theorem 22.17, let M n k = Bn(τ n k ) a.s. for some Brownian motions Bn with associated optional times τ n k as in Theorem 22.16, and put κn t = inf k; [M n]k > t , t ≥0, n ∈N.
Then as n →∞for fixed t > 0, sup k≤κn t τ n k −[M n]k ∨ [M n]k −⟨M n⟩k P →0.
(17) Proof: By optional stopping, we may assume [M n] to be uniformly bounded, and take the supremum in (17) over all k. To handle the second difference in (17), we note that Dn = [M n] −⟨M n⟩is a martingale for each n. Using the martingale property, Proposition 9.17, and dominated convergence, we get E(Dn)∗2 < ⌢supkE(Dn k)2 = Σ k E(ΔDn k)2 = Σ k E E (ΔDn k)2 Fn k−1 ≤Σ k E E Δ[M n]k 2 Fn k−1 = E Σ k (ΔM n k )4 < ⌢E supk(ΔM n k )2 →0, 22. Skorohod Embedding and Functional Convergence 501 and so (Dn)∗ P →0. This clearly remains true if each sequence ⟨M n⟩is defined in terms of the filtration Gn induced by M n.
To complete the proof of (17), it is enough to show, for the latter versions of ⟨M n⟩, that (τ n −⟨M n⟩)∗ P →0. Then write T n for the filtration induced by the pairs (M n k , τ n k ), k ∈N, and conclude from Theorem 22.16 (ii) that ⟨M n⟩m = Σ k≤m E Δτ n k T n k−1 , m, n ∈N.
Hence, ˜ Dn = τ n −⟨M n⟩is a T n-martingale. Using Theorem 22.16 (ii)−(iii), we get as before E ( ˜ Dn)∗2 < ⌢sup k E ( ˜ Dn k)2 = Σ k E E (Δ ˜ Dn k)2 T n k−1 ≤Σ k E E (Δτ n k )2 T n k−1 < ⌢Σ k E E (ΔM n k )4 Gn k−1 = E Σ k (ΔM n k )4 < ⌢E sup k (ΔM n k )2 →0.
2 The sufficiency of (16) is a consequence of the following simple estimate.
Lemma 22.19 (Dvoretzky) For any filtration F on Z+ and sets An ∈Fn, n ∈N, we have P ∪n An ≤P Σ n P(An| Fn−1) > ε + ε, ε > 0.
Proof: Write ξn = 1An and ˆ ξn = P(An| Fn−1), fix any ε > 0, and define τ = inf{n; ˆ ξ1 + · · · + ˆ ξn > ε}. Then {τ ≤n} ∈Fn−1 for each n, and so E Σ n<τ ξn = Σ n E ξn; τ > n = Σ n E ˆ ξn; τ > n = E Σ n<τ ˆ ξn ≤ε.
Hence, P ∪n An ≤P{τ < ∞} + E Σ n<τ ξn ≤P Σ n ˆ ξn > ε + ε.
2 Proof of Theorem 22.17: To prove the result for the time scales [M n], we may reduce by optional stopping to the case where [M n] ≤2 for all n. For every n ∈N, we may choose a Brownian motion Bn with associated optional times τ n k , as in Theorem 22.16. Then (Xn −Bn)∗ ζn∧1 ≤w Bn, 1 + δn, δn , n ∈N, where δn = supk τ n k −[M n]k + (ΔM n k )2 , 502 Foundations of Modern Probability and so E Xn −Bn∗ ζn∧1 ∧1 ≤E w Bn, 1 + h, h ∧1 + P{δn > h}.
Since δn P →0 by Lemma 22.18, the right-hand side tends to zero as n →∞ and then h →0, and the assertion follows.
In case of the time scales ⟨M n⟩, define κn = inf{k; [M n] > 2}.
Then [M n]κn −⟨M n⟩κn P →0 by Lemma 22.18, and so P ⟨M n⟩κn < 1, κn < ∞ →0.
We may then reduce by optional stopping to the case where [M n] ≤3. The proof may now be completed as before.
2 Though the Skorohod embedding has no natural extension to higher di-mensions, we can still derive some multi-variate approximations by applying the previous results separately in each component. To illustrate the method, we show how suitable random walks in Rd can be approximated by continuous processes with stationary, independent increments. Extensions to more gen-eral limits are obtained by different methods in Corollary 16.17 and Theorem 23.14.
Theorem 22.20 (approximation of random walks in Rd) Let X1, X2, . . . be random walks in Rd with L(Xn mn) w →N(0, σσ′), for some d × d matrix σ and integers mn →∞, and define Y n t = Xn [mnt]. Then there exist some Brownian motions B1, B2, . . . in Rd, such that Y n −σBn ∗ t P →0, t ≥0.
Proof: Since by Theorem 6.16, max k≤mnt ΔXn k P →0, t ≥0, we may assume that |ΔXn k | ≤1 for all n and k. Subtracting the means, we may further take EXn k ≡0. Applying Theorem 22.17 in each coordinate yields w(Y n, t, h) P →0, as n →∞and then h →0. Furthermore, w(σB, t, h) →0 a.s. as h →0.
Applying Theorem 6.16 in both directions yields Y n tn d →σBt as tn →t.
Hence, by independence, (Y n t1, . . . , Y n tm) d →σ(Bt1, . . . , Btm) for all n ∈N and t1, . . . , tn ≥0, and so Y n d →σB on Q+ by Theorem 5.30. By Theorem 5.31, or even simpler by Corollary 8.19 and Theorem A5.4, there exist some rcll processes ˜ Y n d = Y n with ˜ Y n t →σBt a.s. for all t ∈Q+. For any t, h > 0, E ˜ Y n −σB ∗ t ∧1 ≤E max j≤t/h ˜ Y n jh −σBjh ∧1 + E w ˜ Y n, t, h ∧1 + E w(σB, t, h) ∧1 .
22. Skorohod Embedding and Functional Convergence 503 Multiplying by e−t, integrating over t > 0, and letting n →∞and then h →0 along Q+, we get by dominated convergence ∞ 0 e−tE ˜ Y n −σB ∗ t ∧1 dt →0.
By monotonicity, the last integrand tends to zero as n →∞, and so ( ˜ Y n−σB)∗ t P →0 for each t > 0. Now use Theorem 8.17.
2 Exercises 1. Show that Theorem 22.6 yields a purely probabilistic proof of the central limit theorem, with no need of characteristic functions.
2. Given a Brownian motion B, put τ0 = 0, and define recursively τn = inf{t > τn−1; |Bτn −Bτn−1| = 1}. Show that EΔτn = ∞for all n. (Hint: We may use Proposition 12.14.) 3. Construct some Brownian martingales as in Lemma 22.2 with leading terms B3 t and B5 t . Use multiple Wiener–Itˆ o integrals to give an alternative proof of the lemma.
For every n ∈N, find a martingale with leading term Bn t .
(Hint: Use Theorem 14.25.) 4. For a Brownian motion B with associated optional time τ < ∞, show that Eτ ≥EB2 τ. (Hint: Truncate τ and use Fatou’s lemma.) 5.
For Xn as in Corollary 22.8, show that the sequence of random variables (2n log log n)−1/2Xn, n ≥3, is a.s. relatively compact with set of limit points [−1, 1].
(Hint: Prove the corresponding property for Brownian motion, and use Theorem 22.6.) 6. Let ξ1, ξ2, . . . be i.i.d. random vectors in Rd with mean 0 and covariances δij.
Show that Corollary 22.8 remains true with Xn replaced by |Xn|. More precisely, show that the sequence (2n log log n)−1/2Xn, n ≥3, is relatively compact in Rd, with all limit points contained in the closed unit ball. (Hint: Apply Corollary 22.8 to the projections u · Xn for arbitrary u ∈Rd with |u| = 1.) 7. In Theorem 14.18, show that for any c ∈(0, 1) we may choose tn →∞, such that a.s. the limsup along (tn) equals c. Conclude that the set of limit points in the preceding exercise agrees with the closed unit ball in Rd.
8. Condition (16) clearly follows from Σ kE |ΔMn k | ∧1 Fn n−1 P →0. Show by an example that the converse implication is false. (Hint: Consider a sequence of random walks.) 9. Specialize Lemma 22.18 to random walks, and give a direct proof in that case.
10. For random walks, show that condition (16) is also necessary for the approx-imation in Theorem 22.17. (Hint: Use Theorem 6.16.) 11. Specialize Theorem 22.17 to random walks in R, and derive a corresponding extension of Theorem 22.9. Then derive a functional version of Theorem 6.13.
12. Specialize further to successive renormalizations of a single random walk Xn.
Then derive a limit theorem for the values at t = 1, and compare with Proposition 6.10.
504 Foundations of Modern Probability 13. In the second arcsine law of Theorem 22.11, show that the first maximum on [0, 1] can be replaced by the last one. Conclude that the associated times σn and τn satisfy τn −σn P →0. (Hint: Use the corresponding result for Brownian motion.
Alternatively, use the symmetry of both (Xn) and the arcsine distribution.) 14. Extend Theorem 22.11 to symmetric random walks satisfying a Lindeberg condition.
Further extend the results for τ 1 n and τ 2 n to random walks based on diffuse, symmetric distributions. Finally, show that the result for τ 3 n may fail in the latter case. (Hint: Consider the n−1-increments of a compound Poisson process based on the uniform distribution on [−1, 1], perturbed by a small diffusion term εnB, where B is an independent Brownian motion.) 15. In the context of Theorem 22.20 and for a Brownian motion B, show that we can choose Y n d = Xn with (Y n −σB)∗ t →0 a.s. for all t ≥0. Prove a corresponding version of Theorem 22.17. (Hint: Use Theorem 5.31 or Corollary 8.19.) Chapter 23 Convergence in Distribution Tightness and relative compactness, convergence and tightness in func-tion spaces, convergence of continuous and rcll processes, functional cen-tral limit theorem, moments and tightness, optional equi-continuity and tightness, approximation of random walks, tightness and convergence of random measures, existence via tightness, vague and weak convergence, Cox and thinning continuity, strong Cox continuity, measure-valued pro-cesses, convergence of random sets and point processes Distributional convergence is another of those indispensable topics, constantly used throughout modern probability. The basic notions were introduced al-ready in Chapter 5, and in Chapter 6 we proved some fundamental limit the-orems for sums of independent random variables. In the previous chapter we introduced the powerful method of Skorohod embedding, which enabled us to establish some functional versions of those results, where the Gaussian limits are replaced by entire paths of Brownian motion and related processes.
Despite the amazing power of the latter technique, its scope of applica-bility is inevitably restricted by the limited class of possible limit laws. Here we turn to a systematic study of general weak convergence theory, based on some fundamental principles of compactness in appropriate function and mea-sure spaces. In particular, some functional limit theorems, derived in the last chapter by cumbersome embedding and approximation techniques, will now be accessible by straightforward compactness arguments.
The key result is Prohorov’s theorem, which gives the basic connection be-tween tightness and relative distributional compactness. This result enables us to convert some classical compactness criteria into similar probabilistic ver-sions. In particular, the Arzel a–Ascoli theorem yields a corresponding criterion for distributional compactness of continuous processes. Similarly, an optional equi-continuity condition guarantees the appropriate compactness, for right-continuous processes with left-hand limits1. We will also derive some general criteria for convergence in distribution of random measures and sets, with spe-cial emphasis on the point process case.
The general criteria will be applied to some interesting concrete situations.
In addition to some already familiar results from Chapters 22, we will derive convergence criteria for Cox processes and thinnings of point processes. Further applications appear in other chapters, such as a general approximation result 1henceforth referred to as rcll processes 505 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 506 Foundations of Modern Probability for Markov chains in Chapter 17, various limit theorems for exchangeable pro-cesses and particle systems in Chapters 27 and 30, and a way of constructing weak solutions to SDEs in Chapter 32. To avoid overloading this chapter with technical detail, we outline some of the underlying analysis in Appendices 5–6.
Beginning with continuous processes, we fix some separable and complete metric spaces (T, d), (S, ρ), where T is also locally compact, and consider the space CT,S of continuous functions from T to S, endowed with the locally uniform topology. The associated notion of convergence, written as xn ul − →x, is given by sup t∈K ρ(xn,t, xt) →0, K ∈KT, (1) where KT denotes the class of compact sets K ⊂T. Since T is locally compact, we may choose K1, K2, . . . ∈KT with Ko n ↑T, and it is enough to verify (1) for all Kn. Using such a sequence, we may easily construct a separable and complete metrization of the topology. In Lemma A5.1 we show that the Borel σ-field in CT,S is generated by the evaluation maps πt : x →xt, t ∈T. This is significant for our purposes, since the random elements in CT,S are then precisely the continuous processes on T taking values in S.
Given any such processes X and X1, X2, . . . , we need to find tractable conditions for the associated convergence in distribution, written as Xn uld − →X.
This mode of convergence is clearly strictly stronger than the finite-dimensional convergence Xn fd − →X, defined by2 (Xn t1, . . . , Xn tm) d →(Xt1, . . . , Xtm), t1, . . . , tm ∈T, m ∈N.
(2) This is clear already in the non-random case, since the point-wise convergence xn t →xt in CT,S may not be locally uniform.
The classical way around this difficulty is to require the functions xn to be locally equi-continuous. Formally, we may impose suitable conditions on the associated local moduli of continuity wK(x, h) = sup ρ(xs, xt); s, t ∈K, d(s, t) ≤h , h > 0, K ∈KT.
(3) Using the Arzela–Ascoli compactness criterion, in the version of Theorem A5.2, we may easily characterize convergence in CT,S.
Though this case is quite elementary, it is discussed in detail below, since it sets the pattern for the more advanced criteria for distributional convergence, considered throughout the remainder of the chapter.
Lemma 23.1 (convergence in CT,S) Let x, x1, x2, . . . be continuous, S-valued functions on T, where S, T are separable, complete metric spaces and T is locally compact with a dense subset T ′. Then xn ul − →x in CT,S iff 2To avoid double subscripts, we may sometimes write xn, Xn as xn, Xn.
23. Convergence in Distribution 507 (i) xn t →xt for all t ∈T ′, (ii) lim h→0 lim sup n→∞ wK(xn, h) = 0, K ∈KT.
Proof: If xn ul − →x, then (i) holds by continuity and (ii) holds by Theorem A5.2. Conversely, assume (i)−(ii). Then (xn) is relatively compact by Theo-rem A5.2, and so for any sub-sequence N ′ ⊂N we have convergence xn ul − →y along a further sub-sequence N ′′. Then by continuity, xn(t) →y(t) along N ′′ for all t ∈T, and so (i) yields x(t) = y(t) for all t′ ∈T ′. Since x and y are continuous and T ′ is dense in T, we get x = y. Hence, xn ul − →x along N ′′, which extends to N since N ′ was arbitrary.
2 The approach for random processes is similar, except that here we need to replace the compactness criterion in Theorem A5.2 by a criterion for compact-ness in distribution. The key result is Prohorov’s Theorem 23.2 below, which relates compactness in a given metric space S to weak compactness3 in the associated space ˆ MS of probability measures on S. For most applications of interest, we may choose S to be a suitable function or measure space.
To make this precise, let Ξ be a family of random elements in a metric space S. Generalizing a notion from Chapter 5, we say that Ξ is tight if inf K∈KS sup ξ∈Ξ P{ξ / ∈K} = 0.
(4) We also say that Ξ is relatively compact in distribution, if every sequence of random elements ξ1, ξ2, . . . ∈Ξ has a sub-sequence converging in distribution toward a random element ξ in S.
We may now state the fundamental equivalence between tightness and rel-ative compactness, for random elements in sufficiently regular metric spaces.
This extends the version for Euclidean spaces in Proposition 6.22.
Theorem 23.2 (tightness and relative compactness, Prohorov) For a set Ξ of random elements in a metric space S, we have (i) ⇒(ii) with (i) Ξ is tight, (ii) Ξ is relatively compact in distribution, and equivalence holds when S is separable and complete.
In particular, we see that when S is separable and complete, a single random element ξ in S is tight, in the sense that infK P{ξ / ∈K} = 0. For sequences we may then replace the ‘sup’ in (4) by ‘lim sup’.
For the proof of Theorem 23.2, we need a simple lemma.
Recall from Lemma 1.6 that a random element in a sub-space of a metric space S may also be regarded as a random element in S.
3For all applications in this book, S and ˆ MS are both metric spaces, and compactness in the topological sense is equivalent to sequential compactness.
508 Foundations of Modern Probability Lemma 23.3 (tightness preservation) Tightness is preserved by continuous maps. Thus, if S is a metric space and Ξ is a set of random elements in a subset A ⊂S, we have Ξ is tight in A ⇒ Ξ is tight in S.
Proof: Compactness is preserved by continuous maps. This applies in par-ticular to the natural embedding I : A →S.
2 Proof of Theorem 23.2 (Varadarajan): For S = Rd, the result was proved in Proposition 6.22. Turning to the case of S = R∞, consider a tight sequence of random elements ξn = (ξn 1 , ξn 2 , . . .) in R∞. Writing ηn k = (ξn 1 , . . . , ξn k ), we see from Lemma 23.3 that the sequence (ηn k; n ∈N) is tight in Rk for each k ∈N.
For any sub-sequence N ′ ⊂N, a diagonal argument yields ηn k d →some ηk for all k ∈N, as n →∞along a further sub-sequence N ′′. The sequence {L(ηk)} is projective, since the coordinate projections are continuous, and so Theorem 8.21 yields a random sequence ξ = (ξ1, ξ2, . . .) with (ξ1, . . . , ξk) d = ηk for all k.
But then ξn fd − →ξ along N ′′, and so Theorem 5.30 gives ξn d →ξ along the same sequence.
Next let S ⊂R∞. If (ξn) is tight in S, then by Lemma 23.3 it remains tight as a sequence in R∞. Hence, for any sequence N ′ ⊂N, we have ξn d →some ξ in R∞along a further sub-sequence N ′′. To extend the convergence to S, it suffices by Lemma 5.26 to check that ξ ∈S a.s. Then choose some compact sets Km ⊂S with lim infn P{ξn ∈Km} ≥1 −2−m for all m ∈N. Since the Km remain closed in R∞, Theorem 5.25 yields P{ξ ∈Km} ≥lim sup n∈N′′ P{ξn ∈Km} ≥lim inf n→∞P{ξn ∈Km} ≥1 −2−m, and so ξ ∈ m Km ⊂S a.s.
Now let S be σ-compact. It is then separable and therefore homeomorphic to a subset A ⊂R∞. By Lemma 23.3, the tightness of (ξn) carries over to the image sequence (˜ ξn) in A, and by Lemma 5.26 the possible relative compactness of (˜ ξn) implies the same property for (ξn). This reduces the discussion to the previous case.
Now turn to the general case. If (ξn) is tight, there exist some compact sets Km ⊂S with lim infn P{ξn ∈Km} ≥1 −2−m. In particular, P{ξn ∈A} →1 with A = m Km, and so P{ξn = ηn} →1 for some random elements ηn in A.
Then (ηn) is again tight, even as a sequence in A, and since A is σ-compact, we see as before that (ηn) is relatively compact as a sequence in A. By Lemma 5.26 it remains relatively compact in S, and by Theorem 5.29 the relative compactness carries over to (ξn).
To prove the converse, let S be separable and complete, and suppose that (ξn) is relatively compact. For any r > 0, we may cover S by some open balls 23. Convergence in Distribution 509 B1, B2, . . . of radius r. Writing Gk = B1 ∪· · · ∪Bk, we claim that lim k→∞inf n P{ξn ∈Gk} = 1.
(5) If this fails, we may choose some integers nk ↑∞with supk P{ξnk ∈Gk} = c < 1. The relative compactness yields ξnk d →some ξ along a sub-sequence N ′ ⊂N, and so P{ξ ∈Gm} ≤lim inf k∈N′ P ξnk ∈Gm ≤c < 1, m ∈N.
As m →∞, we get the contradiction 1 < 1, proving (5).
Now take r = m−1, and write Gm k for the corresponding sets Gk. For any ε > 0, (5) yields some k1, k1, . . . ∈N with inf n P ξn ∈Gm km ≥1 −ε 2−m, m ∈N.
Writing A = m Gm km, we get infn P{ξn ∈A} ≥1 −ε. Since ¯ A is complete and totally bounded, hence compact, it follows that (ξn) is tight.
2 For applications of the last theorem, we need to find effective criteria for tightness. Beginning with the space CT,S, we may convert the classical Arzel a– Ascoli compactness criterion into a condition for tightness, stated in terms of the local moduli of continuity in (3). Note that wK(x, h) is continuous in x for fixed h > 0, and hence depends measurably on x.
Theorem 23.4 (tightness in CT,S, Prohorov) Let Ξ be a set of continuous, S-valued processes on T, where S, T are separable, complete metric spaces and T is locally compact with a dense subset T ′. Then Ξ is tight in CT,S iff (i) πtΞ is tight in S, t ∈T ′, (ii) lim h→0 sup X∈Ξ E wK(X, h) ∧1 = 0, K ∈KT, in which case (iii) ∪ t∈KπtΞ is tight in S, K ∈KT.
Proof: Let Ξ be tight in CT,S, so that for every ε > 0 there exists a compact set A ⊂CT,S with P{X / ∈A} ≤ε for all X ∈Ξ. Then for any K ∈KT and X ∈Ξ, P ∪ t∈K{Xt} / ∈∪ t∈KπtA ≤P{X / ∈A} ≤ε, X ∈Ξ, and (iii) follows since t∈K πtA is relatively compact by Theorem A5.2 (iii).
The same theorem yields an h > 0 with wK(x, h) ≤ε for all x ∈A, and so for any X ∈Ξ, E wK(X, h) ∧1 ≤ε + P wK(X, h) > ε ≤2ε, which implies (ii), since ε > 0 was arbitrary. Finally, note that (iii) ⇒(i).
510 Foundations of Modern Probability Conversely, assume (i)−(ii). Let t1, t2, . . . ∈T ′ be dense in T, and choose K1, K2, . . . ∈KT with Ko n ↑T. For any ε > 0, (i) yields some compact sets B1, B2, . . . ⊂S with P X(tm) / ∈Bm ≤2−mε, m ∈N, X ∈Ξ, (6) and by (ii) there exist some hnk > 0 with P wKn(X, hnk) > 2−k ≤2−n−kε, n, k ∈N, X ∈Ξ.
(7) By Theorem A5.2, the set Aε = ∩ m,n,k x ∈CT,S; x(tm) ∈Bm, wKn(x, hnk) ≤2−k is relatively compact in CT,S, and for any X ∈Ξ we get by (6) and (7) P{X / ∈Aε} ≤Σ n,k 2−n−kε + Σ m 2−mε = 2 ε.
Since ε > 0 was arbitrary, Ξ is then tight in CT,S.
2 We may now characterize convergence in distribution of continuous, S-valued processes on T, generalizing the elementary Lemma 23.1 for non-random functions. Write Xn uld − →X for convergence in distribution in CT,S with respect to the locally uniform topology.
Corollary 23.5 (convergence of continuous processes, Prohorov) Let X, X1, X2, . . . be continuous, S-valued processes on T, where S, T are separable, com-plete metric spaces and T is locally compact with a dense subset T ′. Then Xn uld − →X iff (i) Xn fd − →X on T ′, (ii) lim h→0 lim sup n→∞E wK(Xn, h) ∧1 = 0, K ∈KT.
(8) Proof: If Xn uld − →X, then (i) holds by Theorem 5.27 and (ii) by Theorems 23.2 and 23.4. Conversely, assume (i)−(ii). Then (Xn) is tight by Theorem 23.4, and so by Theorem 23.2 it is relatively compact in distribution. For any sub-sequence N ′ ⊂N, we get Xn uld − →some Y along a further sub-sequence N ′′.
Then Theorem 5.27 yields Xn fd − →Y along N ′′, and so by (i) we have X fd = Y on T ′, which extends to T by Theorem 5.27. Hence, X d = Y by Proposition 4.2, and so Xn uld − →X along N ′′, which extends to N since N ′ was arbitrary. 2 We illustrate the last result by proving a version of Donsker’s Theorem 22.9 in Rd. Since Theorem 23.5 applies only to processes with continuous paths, we need to replace the original step processes by suitably interpolated versions.
In Theorem 23.14 below, we will see how such an interpolation can be avoided.
23. Convergence in Distribution 511 Theorem 23.6 (functional central limit theorem, Donsker) Let ξ1, ξ2, . . . be i.i.d. random vectors in Rd with mean 0 and covariances δ ij, form the contin-uous processes Xn t = n−1/2 Σ k≤nt ξk + (nt −[nt]) ξ[nt]+1 , t ≥0, n ∈N, and let B be a Brownian motion in Rd. Then Xn uld − →B in CR+,Rd.
Proof: By Theorem 23.5 it is enough to prove convergence on [0, 1]. Clearly, Xn fd − →X by Proposition 6.10 and Corollary 6.5. Combining the former result with Lemma 11.12, we further get the rough estimate lim r→∞r2 lim sup n→∞P X∗ n ≥r√n = 0, which implies lim h→0 h−1 lim sup n→∞ sup t≤1 P sup 0≤r≤h |Xn t+r −Xn t | > ε = 0.
Now (8) follows easily, as we split [0, 1] into sub-intervals of length ≤h.
2 The Kolmogorov–Chentsov criterion in Theorem 4.23 can be converted into a sufficient condition for tightness in CRd,S. An important application appears in Theorem 32.9.
Theorem 23.7 (moment criteria and H¨ older continuity) Let X1, X2, . . . be continuous processes on Rd with values in a separable, complete metric space (S, ρ). Then (Xn) is tight in CRd,S, whenever (Xn 0 ) is tight in S and sup n E ρ(Xn s , Xn t ) a < ⌢|s −t| d+b, s, t ∈Rd, (9) for some a, b > 0. The limiting processes are then a.s. locally H¨ older continuous with exponent c, for any c ∈(0, b/a).
Proof: For every Xn, we may define the associated quantities ξnk as in the proof of Theorem 4.23, and we see that E ξa nk < ⌢2−kb. Hence, Lemma 1.31 yields for m, n ∈N wK(Xn, 2−m) a∧1 a < ⌢Σ k≥m ∥ξnk∥a∧1 a < ⌢Σ k≥m 2−kb/(a∨1) < ⌢2−mb/(a∨1), which implies (8). Condition (9) extends by Lemma 5.11 to any limiting pro-cess X, and the last assertion follows by Theorem 4.23.
2 For processes with possible jump discontinuities, the theory is similar but more complicated. Here we consider the space D R+,S of rcll4 functions on R+, 4right-continuous with left-hand limits 512 Foundations of Modern Probability taking values in a separable, complete metric space (S, ρ). For any ε, t > 0, such a function x has clearly at most finitely many jumps of size ρ(xt, xt−) > ε up to time t.
Though it still makes sense to consider the locally uniform topology with associated mode of convergence xn ul − →x, it is technically convenient and more useful for applications to allow appropriate time changes, given by increasing bijections on R+. Note that such functions λ are continuous and strictly in-creasing with λ0 = 0 and λ∞= ∞.
We define the Skorohod convergence xn s →x by the requirements λn ul − →ι, xn ◦λn ul − →x, for some increasing bijections λn on R+, where ι denotes the identity map on R+. This mode of convergence generates a Polish topology on D R+,S, known as Skorohod’s J1-topology, whose basic properties are listed in Lemma A5.3.
In particular, the associated Borel σ-field is again generated by the evaluation maps πt, and so the random elements in D R+,S are precisely the rcll processes in S.
The associated compactness criteria in Theorem A5.4 are analogous to the Arzea–Ascoli conditions in Theorem A5.2, except for being based on the mod-ified local moduli of continuity ˜ wt(x, h) = inf (Ik) max k sup r,s∈Ik ρ(xr, xs), t, h > 0, (10) where the infimum extends over all partitions of the interval [0, t) into sub-intervals Ik = [u, v), such that v −u ≥h when v < t. Note that ˜ wt(x, h) →0 as h →0 for fixed x ∈D R+,S and t > 0.
In order to derive criteria for convergence in distribution in DR+,S, corre-sponding to the conditions of Theorem 23.5 for processes in CT,S, we first need a convenient criterion for tightness.
Theorem 23.8 (tightness in D R+,S) Let Ξ be a set of rcll processes on R+ with values in a separable, complete metric space S, and fix a dense set T ⊂R+.
Then Ξ is tight in D R+,S iff (i) πtΞ is tight in S, t ∈T, (ii) lim h→0 sup X∈Ξ E ˜ wt(X, h) ∧1 = 0, t > 0, in which case (iii) ∪ s≤tπsΞ is tight in S, t ≥0.
Proof: Same as for Theorem 23.4, except that now we need to use Theorem A5.4 in place of Theorem A5.2.
2 We may now characterize convergence in distribution of S-valued, rcll pro-cesses on R+. In particular, the conditions simplify and lead to stronger conclu-sions, when the limiting process is continuous. Write Xn sd − →X and Xn uld − →X for convergence in distribution in D R+,S with respect to the Skorohod and lo-cally uniform topologies.
23. Convergence in Distribution 513 Theorem 23.9 (convergence of rcll processes, Skorohod) Let X, X1, X2, . . .
be rcll processes on R+ with values in a separable, complete metric space S.
Then (i) Xn sd − →X, iffXn fd − →X on a dense set T ⊂{t ≥0; Xt = Xt−a.s.} and lim h→0 lim sup n→∞E ˜ wt(Xn, h) ∧1 = 0, t > 0, (11) (ii) Xn uld − →X with X a.s. continuous, iffXn fd − →X on a dense set T ⊂R+ and lim h→0 lim sup n→∞E wt(Xn, h) ∧1 = 0, t > 0, (iii) for a.s. continuous X, Xn sd − →X ⇔ Xn uld − →X.
The present proof is more subtle than for continuous processes. For moti-vation and technical convenience, we begin with the non-random case.
Lemma 23.10 (convergence in D R+,S) Let x, x1, x2, . . . be rcll functions on R+ with values in a separable, complete metric space S. Then (i) xn s →x, iffxn t →xt for all t in a dense set T ⊂{t ≥0; xt = xt−} and lim h→0 lim sup n→∞ ˜ wt(xn, h) = 0, t > 0, (ii) xn ul − →x with x continuous, iffxn t →xt for all t in a dense set T ⊂R+ and lim h→0 lim sup n→∞ wt(xn, h) = 0, t > 0, (iii) for continuous x, xn s →x ⇔ xn ul − →x.
Proof: (i) If xn s →x, then xn t →xt for all t with xt = xt−, and the displayed condition holds by Theorem A5.4. Conversely, assume the stated conditions, and conclude from the same theorem that (xn) is relatively compact in D R+,S.
Then for any sub-sequence N ′ ⊂N, we have xn s →some y along a further sub-sequence N ′′. To see that x = y, fix any t > 0 with yt = yt−, and choose tk ↓t in T with xtk = xtk−. Then ρ(xt, yt) ≤ρ(xt, xtk) + ρ xtk, xn tk + ρ xn tk, yt , which tends to 0 as n →∞along N ′′ and then k →∞, by the convergence xn s →y, the point-wise convergence xn →x on T, and the right continuity of x. Hence, xt = yt on the continuity set of y. Since the latter set is dense in R+ and both functions are right-continuous, we obtain x = y, and so xn s →x along N ′′. This extends to N, since N ′ was arbitrary.
(iii) The claim ‘⇐’ is obvious. Now let xn s →x, so that λn ul − →ι and xn ◦λn ul − →x for some λn. Since x is continuous, we have also x ◦λn ul − →x, and so by combination ρ(xn ◦λn, x ◦λn) ul − →0, which implies xn ul − →x.
514 Foundations of Modern Probability (ii) Writing Jt(x) = sups≤t ρ(xs, xs−), we note that for any t, h > 0, ˜ wt(x, h) ∨Jt(x) ≤wt(x, h) ≤2 ˜ wt(x, h) + Jt(x).
(12) Now let xn ul − →x with x continuous. Then xn t →xt and Jt(xn) →0 for all t > 0, and so the stated conditions follow by (i) and (12). Conversely, assume the two conditions. Then by (12) and (i) we have xn s →x, and (12) also shows that Jt(xn) →0, which yields the continuity of x. By (iii) the convergence may then be strengthened to xn ul − →x.
2 Proof of Theorem 23.9: (i) Let Xn sd − →X.
Then Theorem 5.27 yields Xn fd − →X on {t ≥0; Xt = Xt−a.s.}. Furthermore, (Xn) is tight in D R+,S by Theorem 23.2, and (11) follows by Theorem 23.8 (ii).
Conversely, assume the stated conditions. Then (Xn,t) is tight in S for every t ∈T by Theorem 23.2, and so (Xn) is tight in D R+,S by Lemma 23.8. As-suming T to be countable and writing Xn,T = {Xn,t; t ∈T}, we see that even (Xn, Xn,T) is tight in D R+,S × ST. By Theorem 23.2 the sequence (Xn, Xn,T) is then relatively compact in distribution, and so for any sub-sequence N ′ ⊂N we have convergence (Xn, Xn,T) sd − →(Y, Z) along a further sub-sequence N ′′, for an S-valued process (Y, Z) on R+ ∪T. Since also Xn,T d →XT, we may take Z = XT. By Theorem 5.31, we may choose X′ n d = Xn such that a.s. X′ n s →Y and X′ n,t →Xt along N ′′ for all t ∈T. Hence, Lemma 23.10 (i) yields X′ n s →X a.s., and so Xn sd − →X along N ′′, which extends to N since N ′ was arbitrary.
(iii) Let Xn sd − →X for a continuous X. By Theorem 5.31 we may choose X′ n d = Xn with X′ n s →X a.s. Then Lemma 23.10 (iii) yields X′ n ul − →X a.s., which implies Xn uld − →X. The converse claim is again obvious.
(ii) Use (iii) and Theorem 5.31 to reduce to Lemma 23.10 (ii).
2 Tightness in D R+,S is often verified most easily by means of the following sufficient condition.
Given a process X, we say that a random time is X-optional if it is optional with respect to the filtration induced by X.
Theorem 23.11 (optional equi-continuity and tightness, Aldous) Let X1, X2, . . . be rcll processes in a metric space (S, ρ). Then (11) holds, if for any bounded Xn-optional times τn and positive constants hn →0, ρ Xn τn, Xn τn+hn P →0, n →∞.
(13) Our proof will be based on two lemmas, beginning with a restatement of condition (13).
23. Convergence in Distribution 515 Lemma 23.12 (equi-continuity) The condition in Theorem 23.11 holds iff lim h→0 lim sup n→∞ sup σ,τ E ρ(Xn σ, Xn τ ) ∧1 = 0, t > 0, (14) where the supremum extends over all Xn-optional times σ, τ ≤t with σ ≤τ ≤ σ + h.
Proof: Replacing ρ by ρ ∧1 if necessary, we may assume that ρ ≤1. The condition in Theorem 23.11 is then equivalent to lim δ→0 lim sup n→∞ sup τ≤t sup h∈[0,δ] Eρ(Xn τ , Xn τ+h) = 0, t > 0, where the outer supremum extends over all Xn-optional times τ ≤t. To deduce (14), suppose that 0 ≤τ −σ ≤δ. Then [τ, τ + δ] ⊂[σ, σ + 2δ], and so by the triangle inequality and a simple substitution, δ ρ(Xσ, Xτ) ≤ δ 0 ρ(Xσ, Xτ+h) + ρ(Xτ, Xτ+h) dh ≤ 2δ 0 ρ(Xσ, Xσ+h) dh + δ 0 ρ(Xτ, Xτ+h) dh.
Thus, sup σ,τ Eρ(Xσ, Xτ) ≤3 sup τ sup h∈[0,2δ] Eρ(Xτ, Xτ+h), where the suprema extend over all optional times τ ≤t and σ ∈[τ −δ, τ].
2 We also need the following elementary estimate.
Lemma 23.13 (exponential bound) For any random variables ξ1, . . . , ξn ≥0 with sum Xn, we have E e−Xn ≤e−nc + max k≤n P{ξk < c}, c > 0.
Proof: Let p be the maximum on the right. By the H¨ older and Chebyshev inequalities, Ee−Xn = E Π k e−ξk ≤Π k (Ee−nξk)1/n ≤ (e−nc + p)1/n n = e−nc + p.
2 Proof of Theorem 23.11: Again we may assume that ρ ≤1, and by suitable approximation we may extend condition (14) to weakly optional times σ and τ. For all n ∈N and ε > 0, we define recursively the weakly Xn-optional times σn k+1 = inf s > σn k; ρ(Xn σn k , Xn s ) > ε , k ∈Z+, starting with σn 0 = 0, and note that for m ∈N and t, h > 0, ˜ wt(Xn, h) ≤2ε + Σ k<m 1 σn k+1 −σn k < h, σn k < t + 1{σn m < t}.
(15) 516 Foundations of Modern Probability Writing νn(t, h) for the supremum in (14), we get by Chebyshev’s inequality and a simple truncation P σn k+1 −σn k < h, σn k < t ≤ε−1νn(t + h, h), k ∈N, t, h > 0, (16) and so by (14) and (15), lim h→0 lim sup n→∞E ˜ wt(Xn, h) ≤2ε + lim sup n→∞P{σn m < t}.
(17) Next we see from (16) and Lemma 23.13 that, for any c > 0, P σn m < t ≤etE e−σn m; σn m < t ≤et e−mc + ε−1νn(t + c, c) .
By (14) the right-hand side tends to 0 as m, n →∞and then c →0. Hence, the last term in (17) tends to 0 as m →∞, and (11) follows since ε is arbi-trary.
2 To illustrate the use of Theorem 23.11, we prove an extension of Theorem 23.6. A related result was obtained by different methods in Corollary 16.17.
An extension to Markov chains appears in Theorem 17.28.
Theorem 23.14 (approximation of random walks, Skorohod) Let Y 1, Y 2, . . .
be random walks in Rd, and form the rcll processes Xn t = Y n [mnt], t ≥0, n ∈N, where mn →∞. Then for any L´ evy process X in Rd, we have Xn 1 d →X1 ⇔ Xn sd − →X in D R+,Rd.
When X is a.s. continuous, this holds iffXn uld − →X.
Proof: By Corollary 7.9 we have Xn fd − →X, and so by Theorem 23.11 it is enough to show that |Xn τn+hn −Xn τn| P →0 for any optional times τn < ∞ and constants hn →0. By the strong Markov property of Y n, or alternatively by Theorem 27.7, we may reduce to the case where τn = 0 for all n. Thus, it suffices to show that Xn hn P →0 as hn →0, which again can be seen from Corollary 7.9.
2 Next we consider convergence in distribution of random measures on a separable, complete metric space S, defined as locally finite kernels ξ : Ω →S.
Thus, ξ(ω, B) is a measurable function of ω ∈Ω for fixed B and a measure in B ∈S for fixed ω, such that ξB < ∞a.s. for all B in the ring ˆ S ⊂S of bounded Borel sets. Equivalently, we may regard random measures as random elements in the space MS of locally finite measures μ on S, endowed with the σ-field generated by all evaluation maps πB : μ →μB with B ∈S.
23. Convergence in Distribution 517 On MS we introduce the vague topology, induced by the integration maps πf : μ →μf, for all f in the space ˆ CS of bounded, continuous functions f ≥0 with bounded support5. The corresponding notion of vague convergence μn v →μ is then given by μnf →μf for all f ∈ˆ CS. For bounded measures μ on S, we also consider the weak topology with associated weak convergence μn w →μ, given by μnf →μf for all bounded, continuous functions f ≥0 on S. The basic properties of the vague and weak topologies are summarized in Lemma A5.5, and associated compactness criteria are given in Theorem A5.6.
Using the latter, we may derive corresponding criteria for tightness of random measures: Theorem 23.15 (tightness of random measures, Matthes et al.) Let Ξ be a set of random measures on a separable, complete metric space S. Then Ξ is vaguely tight iff (i) lim r→∞sup ξ∈Ξ P{ξB > r} = 0, B ∈ˆ S, (ii) inf K∈K sup ξ∈Ξ E ξ(B \ K) ∧1 = 0, B ∈ˆ S.
When the ξ ∈Ξ are a.s. bounded, the set Ξ is weakly tight iff(i)−(ii) hold with B = S.
Proof: Let Ξ be vaguely tight. Then for any ε > 0, there exists a vaguely compact set A ⊂MS, such that P{ξ / ∈A} < ε for all ξ ∈Ξ. For fixed B ∈ˆ S, Theorem A5.6 yields r = supμ∈A μB < ∞, and so supξ∈Ξ P{ξB > r} < ε.
The same lemma yields a K ∈K with supμ∈A μ(B \ K) < ε, which implies supξ∈Ξ E{ξ(B \ K) ∧1} ≤2ε. Since ε was arbitrary, this proves the necessity of (i) and (ii).
Now assume (i)−(ii). Fix any s0 ∈S, and put Bn = {s ∈S; ρ(s, s0) < n}. For any ε > 0, we may choose some constants r1, r2, . . . > 0 and sets K1, K2, . . . ∈K, such that for all ξ ∈Ξ and n ∈N, P{ξBn > rn} < 2−nε, E ξ(Bn \ Kn) ∧1 < 2−2nε.
(18) Writing A for the set of measures μ ∈MS with μBn ≤rn, μ(Bn \ Kn) ≤2−n, n ∈N, we see from Theorem A5.6 that A is vaguely relatively compact. Noting that P ξ(Bn \ Kn) > 2−n ≤2nE ξ(Bn \ Kn) ∧1 , n ∈N, we get from (18) for any ξ ∈Ξ P{ξ / ∈A} ≤Σ n P{ξBn > rn} + Σ n 2nE ξ(Bn \ Kn) ∧1 < 2ε, 5Thus, we assume the supports of f to be metrically bounded. Requiring compact sup-ports yields a weaker topology.
518 Foundations of Modern Probability and since ε was arbitrary, we conclude that Ξ is tight.
2 We may now derive criteria for convergence in distribution with respect to the vague topology, written as ξn vd − →ξ and defined6 by L(ξn) vw − →L(ξ), where vw − →denotes weak convergence on ˆ MMS with respect to the vague topology on MS. Let ˆ Sξ be the class of sets B ∈ˆ S with ξ∂B = 0 a.s. A non-empty class I of sets I ⊂S is called a semi-ring, if it is closed under finite intersections, and such that every proper difference in I is a finite union of disjoint I -sets; it is called a ring if it is also closed under finite unions. Typical semi-rings in Rd are families of rectangular boxes I1 × · · · × Id. By ˆ I+ we denote the class of simple functions over I.
Theorem 23.16 (convergence of random measures, Harris, Matthes et al.) Let ξ, ξ1, ξ2, . . . be random measures on a separable, complete metric space S, and fix a dissecting semi-ring I ⊂ˆ Sξ. Then these conditions are equivalent: (i) ξn vd − →ξ, (ii) ξnf d →ξf for all f ∈ˆ CS or ˆ I+, (iii) Ee−ξnf →Ee−ξf for all f ∈ˆ CS or ˆ I+ with f ≤1.
Our proof relies on two simple lemmas. For any function f : S →R, let Df be the set of discontinuity points of f.
Lemma 23.17 (limits of integrals) Let ξ, ξ1, ξ2, . . . be random measures on S with ξn vd − →ξ, and let f ∈ˆ S+ with ξDf = 0 a.s. Then ξnf d →ξf.
Proof: By Theorem 5.27 it is enough to prove that, if μn v →μ with μDf = 0, then μnf →μf. By a suitable truncation and normalization, we may take all μn and μ to be probability measures. Then the same theorem yields μn◦f −1 w → μ ◦f −1, which implies μnf →μf since f is bounded.
2 Lemma 23.18 (continuity sets) Let ξ, η, ξ1, ξ2, . . . be random measures on S with ξn vd − →η, and fix a dissecting ring U and a constant r > 0. Then (i) ˆ Sη ⊃ˆ Sξ whenever lim inf n→∞E e−r ξnU ≥E e−r ξU, U ∈U, (ii) for point processes ξ, ξ1, ξ2, . . . , we may assume instead that lim inf n→∞P{ξnU = 0} ≥P{ξU = 0}, U ∈U.
6This mode of convergence involves the topologies on all three spaces S, MS, ˆ MMS, which must not be confused.
23. Convergence in Distribution 519 Proof: Fix any B ∈ˆ Sξ. Let ∂B ⊂F ⊂U with F ∈ˆ Sη and U ∈U, where F is closed. By (i) and Lemma 23.17, Ee−r η∂B ≥Ee−r ηF = lim n→∞Ee−r ξnF ≥lim inf n→∞Ee−r ξnU ≥Ee−r ξU.
Letting U ↓F and then F ↓∂B, we get Ee−r η∂B ≥Ee−r ξ∂B = 1 by dominated convergence. Hence, η∂B = 0 a.s., which means that B ∈ˆ Sη. The proof of (ii) is similar.
2 Proof of Theorem 23.16: Here (i) ⇒(ii) holds by Lemma 23.17 and (ii) ⇔ (iii) by Theorem 6.3. To prove (ii) ⇒(i), assume that ξnf d →ξf for all f ∈ˆ CS or ˆ I+, respectively. For any B ∈ˆ S we may choose an f with 1B ≤f, and so ξnB ≤ξnf is tight by the convergence of ξnf. Thus, (ξn) satisfies (i) of Theorem 23.15.
To prove (ii) of the same result, we may assume that ξ∂B = 0 a.s. A simple approximation yields (1Bξn)f d →(1Bξ)f for all f ∈ˆ CS of ˆ I+, and so we may take S = B, and let the measures ξn and ξ be a.s. bounded with ξnf d →ξf for all f ∈CS or I+. Here Corollary 6.5 yields (ξnf, ξnS) d →(ξf, ξS) for all such f, and so (ξnS ∨1)−1ξnf d →(ξS ∨1)−1ξf, which allows us to assume ∥ξn∥≤1 for all n. Then ξnf d →ξf implies Eξnf →Eξf for all f as above, and so Eξn v →Eξ by the definition of vague convergence. Here Lemma A5.6 yields infK∈K supn EξnKc = 0, which proves (ii) of Theorem 23.15 for the sequence (ξn).
The latter theorem shows that (ξn) is tight. If ξn d →η along a sub-sequence N ′ ⊂N for a random measure η on S, then Lemma 23.17 yields ξnf d →ηf for all f ∈ˆ CS or I+. In the former case, ηf d = ξf for all f ∈CS, which implies η d = ξ. In the latter case, Lemma 23.18 yields ˆ Sη ⊃ˆ Sξ ⊃I, and so ξnf d →ηf for all f ∈ˆ I+, which again implies η d = ξ. Since N ′ was arbitrary, we obtain ξn vd − →ξ.
2 Corollary 23.19 (existence of limit) Let ξ1, ξ2, . . . be random measures on an lcscH space S, such that ξnf d →αf for some random variables αf, f ∈ˆ CS.
Then ξn vd − →ξ for a random measure ξ on S satisfying ξf d = αf, f ∈ˆ CS.
Proof: Condition (ii) of Theorem 23.15 being void, the tightness of (ξn) follows already from the tightness of (ξnf) for all f ∈ˆ CS. Hence, Theorem 23.2 yields ξn vd − →ξ along a sub-sequence N ′ ⊂N, for a random measure ξ on S, and so ξnf d →ξf along N ′ for every f ∈ˆ CS. The latter convergence extends to N by hypothesis, and so ξn vd − →ξ by Theorem 23.16.
2 520 Foundations of Modern Probability Theorem 23.20 (weak and vague convergence) Let ξ, ξ1, ξ2, . . . be a.s. bounded random measures on S with ξn vd − →ξ. Then these conditions are equivalent: (i) ξn wd − →ξ, (ii) ξnS d →ξS, (iii) inf B∈ˆ S lim sup n→∞E ξnBc ∧1 = 0.
Proof, (i) ⇒(ii): Clear since 1 ∈CS.
(ii) ⇒(iii): If (iii) fails, then a diagonal argument yields a sub-sequence N ′ ⊂N with inf B∈ˆ S lim inf n∈N′ E ξnBc ∧1 > 0.
(19) Under (ii) the sequences (ξn) and (ξnS) are vaguely tight, whence so is the sequence of pairs (ξn, ξnS), which yields the convergence (ξn, ξnS) vd − →(˜ ξ, α) along a further sub-sequence N ′′ ⊂N ′. Since clearly ˜ ξ d = ξ, a transfer argument allows us to take ˜ ξ = ξ. Then for any B ∈ˆ Sξ, we get along N ′′ 0 ≤ξnBc = ξnS −ξnB d →α −ξB, and so ξB ≤α a.s., which implies ξS ≤α a.s. since B was arbitrary. Since also α d = ξS, we get E e−ξS −e−α = Ee−ξS −Ee−α = Ee−ξS −Ee−ξS = 0, proving that α = ξS a.s. Hence, for any B ∈ˆ Sξ, we have along N ′′ E ξnBc ∧1 = E (ξnS −ξnB) ∧1 →E (ξS −ξB) ∧1 = E ξBc ∧1 , which tends to 0 as B ↑S by dominated convergence. This contradicts (19), and (iii) follows.
(iii) ⇒(i): Writing Kc ⊂Bc ∪(B \ K) when K ∈K and B ∈ˆ S, and using the additivity of P and ξn and the sub-additivity of x ∧1 for x ≥0, we get lim sup n→∞E ξnKc ∧1 ≤lim sup n→∞ E ξnBc ∧1 + E ξn(B \ K) ∧1 ≤lim sup n→∞E ξnBc ∧1 + lim sup n→∞E ξn(B \ K) ∧1 .
Taking the infimum over K ∈K and then letting B ↑S, we get by Theorem 23.15 and (iii) inf K∈K lim sup n→∞E ξnKc ∧1 = 0.
(20) 23. Convergence in Distribution 521 Similarly, we have for any r > 1 lim sup n→∞P ξnS > r ≤lim sup n→∞ P ξnB > r −1 + P ξnBc > 1 ≤lim sup n→∞P ξnB > r −1 + lim sup n→∞E ξnBc ∧1 .
Letting r →∞and then B ↑S, we get by Theorem 23.15 and (iii) lim r→∞lim inf n→∞P ξnS > r = 0.
(21) By Theorem 23.15, we see from (20) and (21) that (ξn) is weakly tight. As-suming ξn wd − →η along a sub-sequence, we have also ξn vd − →η, and so η d = ξ.
Since the sub-sequence was arbitrary, (i) follows by Theorem 23.2.
2 For a first application, we prove some continuity properties of Cox pro-cesses and thinnings, extending the simple uniqueness properties in Lemma 15.7. Further limit theorems involving Cox processes and thinnings appear in Chapter 30.
Theorem 23.21 (Cox and thinning continuity) For n ∈N and a fixed p ∈ (0, 1], let ξn be a Cox process on S directed by a random measure ηn or a p -thinning of a point process ηn on S. Then ξn vd − →ξ ⇔ ηn vd − →η for some ξ, η, where ξ is a Cox process directed by η or a p -thinning of η, respectively. In that case also (ξn, ηn) vd − →(ξ, η) in M2 S.
Proof: Let ηn vd − →η for a random measure η on S. Then Lemma 15.2 yields for any f, g ∈ˆ CS E e−ξnf−ηng = E exp −ηn(1 −e−f + g) →E exp −η(1 −e−f + g) = E e−ξf−ηg, where ξ is a Cox process directed by η. Hence, Theorem 6.3 gives ξnf +ηng d → ξf + ηg for all such f and g, and so (ξn, ηn) vd − →(ξ, η) by Theorem 23.16.
Conversely, let ξn vd − →ξ for a point process ξ on S. Then the sequence (ξn) is vaguely tight, and so Theorem 23.15 yields for any B ∈ˆ S and u > 0 lim r→0 inf n≥1 Ee−r ξnB = sup K∈K inf n≥1 E exp −u ξn(B \ K) = 1.
Similar relations hold for (ηn) by Lemma 15.2, with r and u replaced by r′ = 1 −e−r and u′ = 1 −e−u, and so even (ηn) is vaguely tight by Theo-rem 23.15, which implies that (ηn) is vaguely relatively compact. If ηn vd − →η 522 Foundations of Modern Probability along a sub-sequence, then ξn vd − →ξ′ as before, where ξ′ is a Cox process di-rected by η. Since ξ′ d = ξ, the distribution of η is unique by Lemma 15.7, and so the convergence ηn vd − →η extends to the entire sequence. The proof for p -thinnings is similar.
2 We also consider a stronger, non-topological version of distributional con-vergence for random measures, defined as follows. Beginning with non-random measures μ and μ1, μ2, . . . ∈MS, we write μn u →μ for convergence in to-tal variation and μn ul − →μ for the corresponding local version, defined by 1Bμn u →1Bμ for all B ∈ˆ S. For random measures ξ and ξn on S, we may now define the convergence ξn ud − →ξ by L(ξn) u →L(ξ), and let ξn uld − →ξ be the corresponding local version, given by 1B ξn ud − →1B ξ for all B ∈ˆ S. Finally, we define the convergence ξn ulP − →ξ by ∥ξn −ξ∥B P →0 for all B ∈ˆ S, and similarly for the global version ξn uP − →ξ.
The last result has the following partial analogue for strong convergence.
Theorem 23.22 (strong Cox continuity) Let ξ, ξ1, ξ2, . . . be Cox processes directed by some random measures η, η1, η2, . . . on S. Then ηn ulP − →η ⇒ ξn uld − →ξ, with equivalence when η, η1, η2, . . . are non-random.
Proof: We may clearly take S to be bounded. First let η = λ and all ηn = λn be non-random. For any n ∈N, put ˆ λn = λ ∧λn, λ′ n = λ −ˆ λn, λ′′ n = λn −ˆ λn, so that λ = ˆ λn + λ′ n, λn = ˆ λn + λ′′ n, ∥λn −λ∥= ∥λ′ n∥+ ∥λ′′ n∥.
Letting ˆ ξn, ξ′ n, ξ′′ n be independent Poisson processes with intensities ˆ λn, λ′ n, λ′′ n, respectively, we note that ξ d = ˆ ξn + ξ′ n, ξn d = ˆ ξn + ξ′′ n.
Assuming λn u →λ, we get L(ξ) −L(ξn) ≤ L(ξ) −L(ˆ ξn) + L(ξn) −L(ˆ ξn) < ⌢P{ξ′ n ̸= 0} + P{ξ′′ n ̸= 0} = 1 −e−∥λ′ n∥ + 1 −e−∥λ′′ n∥ ≤∥λ′ n∥+ ∥λ′′ n∥ = ∥λ −λn∥→0, which shows that ξn ud − →ξ.
23. Convergence in Distribution 523 Conversely, let ξn ud − →ξ. Then for any B ∈S and n ∈N, P{ξnB = 0} ≥P{ξn = 0} →P{ξ = 0} = e−∥λ∥> 0.
Since (log x)′ = x−1 < ⌢1 on every interval [ε, 1] with ε > 0, we get ∥λ −λn∥< ⌢supB|λB −λnB| = supB log P{ξB = 0} −log P{ξnB = 0} < ⌢supB P{ξB = 0} −P{ξnB = 0} ≤∥L(ξ) −L(ξn)∥→0, which shows that λn u →λ.
Now let η and the ηn be random measures satisfying ηn uP − →η. To obtain ξn ud − →ξ, it is enough to show that, for any sub-sequence N ′ ⊂N, the desired convergence holds along a further sub-sequence N ′′. By Lemma 5.2 we may then assume that ηn u →η a.s. Letting the processes ξ and ξn be conditionally independent and Poisson distributed with intensities η and ηn, respectively, we conclude from the previous case that L(ξn | ηn) −L(ξ | η) →0 a.s., and so by dominated convergence, L(ξn) −L(ξ) = E L(ξn | ηn) −L(ξ | η) = sup |f|≤1 E E{f(ξn)| ηn} −E{f(ξ)| η} ≤sup |f|≤1 E E{f(ξn)| ηn} −E{f(ξ)| η} ≤E L(ξn| ηn) −L(ξ | η) →0, which shows that ξn ud − →ξ.
2 For measure-valued processes Xn with rcll paths, we may characterize tight-ness in terms of the real-valued projections Xn t f = f(s) Xn t (ds), f ∈ˆ C+.
Theorem 23.23 (measure-valued processes) Let X1, X2, . . . be vaguely rcll processes in MS, where S is lcscH. Then these conditions are equivalent: (i) (Xn) is vaguely tight in D R+,MS, (ii) (Xnf) is tight in D R+,R+ for every f ∈ˆ C+ S .
524 Foundations of Modern Probability Proof: Let (Xnf) be tight for every f ∈ˆ C+, and fix any ε > 0.
Let f1, f2, . . . be such as in Theorem A5.7, and choose some compact sets B1, B2, . . .
⊂DR+,R+ with P Xnfk ∈Bk ≥1 −ε2−k, k, n ∈N.
(22) Then A = k{μ; μfk ∈Bk} is relatively compact in D R+,MS, and (22) yields P{Xn ∈A} ≥1 −ε.
2 Next we consider convergence of random sets. Given an lcscH space S, let F, G, K be the classes of closed, open, and compact subsets of S. We endow F with the Fell topology, generated by the sets F ∈F; F ∩G ̸= ∅ , G ∈G, F ∈F; F ∩K = ∅ , K ∈K.
Some basic properties of this topology are summarized in Theorem A6.1. In particular, F is compact and metrizable, and the set {F; F ∩B = ∅} is uni-versally measurable for every B ∈ˆ S.
A random closed set in S is defined as a random element ϕ in F. Write ϕ ∩B = ϕB for any B ∈ˆ S, and note that the probabilities P{ϕB = ∅} are well defined. For any random closed set ϕ, we introduce the class ˆ Sϕ = B ∈ˆ S; P{ϕBo = ∅} = P{ϕ ¯ B = ∅} , which is separating by Lemma A6.2. We may now state the basic convergence criterion for random sets.
Theorem 23.24 (convergence of random sets, Norberg, OK) Let ϕ, ϕ1, ϕ2, . . . be random closed sets in an lcscH space S, and fix a separating class U ⊂ˆ Sϕ.
Then ϕn d →ϕ iff P{ϕnU = ∅} →P{ϕU = ∅}, U ∈U.
(23) In particular, we may take U = ˆ Sϕ.
Proof: Write h(B) = P{ϕB ̸= ∅}, hn(B) = P{ϕnB ̸= ∅}.
If ϕn d →ϕ, Theorem 5.25 yields h(Bo) ≤lim inf n→∞hn(B) ≤lim sup n→∞hn(B) ≤h( ¯ B), B ∈ˆ S, and so for any B ∈ˆ Sϕ we have hn(B) →h(B).
23. Convergence in Distribution 525 Now consider a separating class U satisfying (23). Fix any B ∈ˆ Sϕ, and conclude from (23) that, for any U, V ∈U with U ⊂B ⊂V , h(U) ≤lim inf n→∞hn(B) ≤lim sup n→∞hn(B) ≤h(V ).
(24) Since U is separating, we may choose some Uk, Vk ∈U with Uk ↑Bo and ¯ Vk ↓¯ B. Here clearly Uk ↑Bo ⇒{ϕUk ̸= ∅} ↑{ϕBo ̸= ∅} ⇒h(Uk) ↑h(Bo) = h(B), ¯ Vk ↓¯ B ⇒{ϕ¯ Vk ̸= ∅} ↓{ϕ ¯ B ̸= ∅} ⇒h(¯ Vk) ↓h( ¯ B) = h(B), where the finite-intersection property was used in the second line. In view of (24), we get hn(B) →h(B), and (23) follows with U = ˆ Sϕ.
Since F is compact, the sequence (ϕn) is relatively compact by Theorem 23.2. For any sub-sequence N ′ ⊂N, we obtain ϕn d →ψ along a further sub-sequence N ′′, for a random closed set ψ. Combining the direct statement with (23), we get P{ϕB = ∅} = P{ψB = ∅}, B ∈ˆ Sϕ ∩ˆ Sψ.
(25) Since ˆ Sϕ ∩ˆ Sψ is separating by Lemma A6.2, we may approximate as before to extend (25) to any compact set B. The class of sets {F; F ∩K = ∅} with compact K is clearly a π-system, and so a monotone-class argument yields ϕ d = ψ. Since N ′ is arbitrary, we obtain ϕn d →ϕ along N.
2 Simple point processes allow the dual descriptions as integer-valued random measures or locally finite random sets. The corresponding notions of conver-gence are different, and we proceed to clarify how they are related. Since the mapping μ →supp μ is continuous on NS, the convergence ξn vd − →ξ implies supp ξn d →supp ξ. Conversely, if E ξ and E ξn are locally finite, then by Theo-rem 23.24 and Proposition 23.25, we have ξn vd − →ξ whenever supp ξn d →supp ξ and Eξn v →Eξ. Here we give a precise criterion.
Theorem 23.25 (convergence of point processes) Let ξ, ξ1, ξ2, . . . be point processes on S with ξ simple, and fix any dissecting ring U ⊂ˆ Sξ and semi-ring I ⊂U. Then ξn vd − →ξ iff (i) P{ξnU = 0} →P{ξU = 0}, U ∈U, (ii) lim sup n→∞P{ξnI > 1} ≤P{ξI > 1}, I ∈I.
Proof: The necessity holds by Lemma 23.17. Now assume (i)−(ii). We may choose U and I to be countable. Define η(U) = ξU ∧1 and ηn(U) = ξnU ∧1 526 Foundations of Modern Probability for U ∈U. Taking repeated differences in (i), we get ηn d →η for the product topology on R U +. Hence, Theorem 5.31 yields some processes ˜ η d = η and ˜ ηn d = ηn, such that a.s. ˜ ηn(U) →˜ η(U) for all U ∈U. By a transfer argument, we may choose ˜ ξn d = ξn and ˜ ξ d = ξ, such that a.s.
˜ ηn(U) = ˜ ξnU ∧1, ˜ η(U) = ˜ ξU ∧1, U ∈U.
We may then assume that ηn(U) →η(U) for all U ∈U a.s.
Now fix an ω ∈Ω with ηn(U) →η(U) for all U ∈U, and let U ∈U be arbitrary. Splitting U into disjoint sets I1, . . . , Im ∈I with ξIk ≤1 for all k ≤m, we get ηn(U) →η(U) ≤ξU = Σ j ξIj = Σ j η(Ij) ←Σ j ηn(Ij) ≤Σ j ξnIj = ξnU, and so as n →∞, we have a.s.
lim sup n→∞ ξnU ∧1 ≤ξU ≤lim inf n→∞ξnU, U ∈U.
(26) Next we note that, for any h, k ∈Z+, k ≤h ≤1 c = {h > 1} ∪ h < k ∧2 = {k > 1} ∪ h = 0, k = 1 ∪ h > 1 ≥k , where all unions are disjoint. Substituting h = ξI and k = ξnI, and using (ii) and (26), we get lim n→∞P ξI < ξnI ∧2 = 0, I ∈I.
(27) Letting U ∈U be a union of disjoint sets I1, . . . , Im ∈I, we get by (27) lim sup n→∞P ξnU > ξU ≤lim sup n→∞P ∪j ξnIj > ξIj ≤P ∪j {ξIj > 1}.
By dominated convergence, we can make the right-hand side arbitrarily small, and so P{ξnU > ξU} →0. Combining with (26) gives P{ξnU ̸= ξU} →0, which means that ξnU P →ξU. Hence, ξnf P →ξf for every f ∈U+, and Theo-rem 23.16 (i) yields ξn vd − →ξ.
2 23. Convergence in Distribution 527 Exercises 1. Show that the two versions of Donsker’s theorem in Theorems 22.9 and 23.6 are equivalent.
2. Extend either version of Donsker’s theorem to random vectors in Rd.
3. For any metric space (S, ρ), show that if xn →x in D R+,S with x continuous, then sups≤t ρ(xn s , xs) →0 for every t ≥0. (Hint: Note that x is uniformly continuous on every interval [0, t].) 4.
For any separable, complete metric space (S, ρ), show that if Xn d →X in D R+,S with X continuous, we may choose Y n d = Xn with sups≤t ρ(Y n s , Xs) →0 a.s.
for every t ≥0. (Hint: Combine the preceding result with Theorems 5.31 and A5.4.) 5. Give an example where xn →x and yn →y in D R+,R, and yet (xn, yn) ̸→(x, y) in D R+,R2.
6.
Let f be a continuous map between the metric spaces S, T, where T is separable, complete.
Show that if Xn d →X in D R+,S, then f(Xn) d →f(X) in D R+,T . (Hint: By Theorem 5.27 it suffices to show that xn →x in D R+,S implies f(xn) →f(x) in D R+,T . Since A = {x, x1, x2, . . .} is relatively compact in D R+,S, Theorem A5.4 shows that Ut = s≤t πsA is relatively compact in S for every t > 0.
Hence, f is uniformly continuous on each Ut.) 7. Show by an example that the condition in Theorem 23.11 is not necessary for tightness. (Hint: Consider non-random processes Xn.) 8. Show that in Theorem 23.11 it is enough to consider optional times taking finitely many values. (Hint: Approximate from the right and use the right-continuity of paths.) 9. Let the process X on R+ be continuous in probability with values in a sep-arable, complete metric space (S, ρ).
Show that if ρ(Xτn, Xτn+hn) P →0 for any bounded optional times τn and constants hn →0, then X has an rcll version. (Hint: Approximate by suitable step processes, and use Theorems 23.9 and 23.11.) 10. Let X, X1, X2, . . . be L´ evy processes in Rd. Show that Xn d →X in D R+,Rd iff Xn 1 d →X1 in Rd. Compare with Theorem 16.14.
11. Examine how Theorem 23.15 simplifies when S is lcscH. (Hint: We may then omit condition (ii).) 12. Show that (ii)−(iii) of Theorem 23.16 remain sufficient if we replace ˆ Sξ by an arbitrary separating class. (Hint: Restate the conditions in terms of Laplace transforms, and extend to ˆ Sξ by a suitable approximation.) 13. Let η, η1, η2, . . . be λ -randomizations of some point processes ξ, ξ1, ξ2, . . . on a separable, complete space S. Show that ξn d →ξ iffηn d →η.
14. For random measures ξ, ξ1, ξ2, . . . on S, show that ξn uld − →ξ implies ξn vd − →ξ.
Further show by an example that the converse implication may fail.
15. Let X and Xn be random elements in D R+,Rd with Xn fd − →X, and such that uXn d →uX in D R+,R for every u ∈Rd. Show that Xn d →X. (Hint: Proceed as in Theorems 23.23 and A5.7.) 528 Foundations of Modern Probability 16. On an lcscH space S, let ξ, ξ1, ξ2, . . . be simple point processes with associated supports ϕ, ϕ1, ϕ2, . . . . Show that ξn vd − →ξ implies ϕn d →ϕ. Further show by an example that the reverse implication may fail.
17. For an lcscH space S, let U ⊂ˆ S be separating. Show that if K ⊂G with K compact and G open, there exists a U ∈U with K ⊂Uo ⊂¯ U ⊂G. (Hint: First choose B, C ∈ˆ S with K ⊂Bo ⊂¯ B ⊂Co ⊂¯ C ⊂G.) Chapter 24 Large Deviations Exponential rates, cumulant-generating function, Legendre–Fenchel trans-form, rate function, large deviations in Rd, Schilder’s theorem, regular-ization and uniqueness, large deviation principle, goodness, exponential tightness, weak and functional LDPs, continuous mapping, random se-quences, exponential equivalence, perturbed dynamical systems, empir-ical distributions, relative entropy, functional law of the iterated loga-rithm In many contexts throughout probability theory and its applications, we need to estimate the asymptotic rate of decline of various tail probabilities. Since such problems might be expected to lead to tedious technical calculations, we may be surprised and delighted to learn about the existence of a powerful general theory, providing such asymptotic rates with great accuracy. Indeed, in many cases of interest, the asymptotic rate turns out to be exponential, and the associated rate function can often be obtained explicitly from the underlying distributions.
In its simplest setting, the theory of large deviation provides exponential rates of convergence in the weak law of large numbers.
Here we consider some i.i.d. random variables ξ1, ξ2, . . . with mean m and cumulant-generating function Λ(u) = log Eeuξi < ∞, and write ¯ ξn = n−1Σ k≤nξk. For any x > m, we show that the tail probabilities P{¯ ξn > x} tend to 0 at an exponential rate I(x), given by the Legendre–Fenchel transform Λ∗of Λ. In higher dimensions, it is convenient to state the result in the more general form −n−1 log P{¯ ξn ∈B} →I(B) = inf x∈B I(x), where B is restricted to a suitable class of continuity sets. In this standard form of a large-deviation principle with rate function I, the result extends to a wide range of contexts throughout probability theory.
For a striking example of fundamental importance in statistical mechan-ics, we may mention Sanov’s theorem, which provides similar large deviation bounds for the empirical distributions of a sequence of i.i.d. random variables with a common distribution μ. Here the rate function I, defined on the space of probability measures ν on R, agrees with the relative entropy H(ν| μ). Another important case is Schilder’s theorem for a family of rescaled Brownian motions in Rd, where the rate function becomes I(x) = 1 2 ∥˙ x∥2 2 —the squared norm in the Cameron–Martin space first encountered in Chapter 19. The latter result can be used to derive the powerful Fredlin–Wentzell estimates for randomly 529 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 530 Foundations of Modern Probability perturbed dynamical systems. It also yields a short proof of Strassen’s law of the iterated logarithm, a stunning extension of the classical Khinchin law from Chapter 14.
Modern proofs of those and other large deviation results rely on some gen-eral extension principles, which explains the wide applicability of the present ideas.
In addition to some rather elementary and straightforward methods of continuity and approximation, we consider the more sophisticated and ex-tremely powerful principles of inverse continuous mapping and projective lim-its, both of which play crucial roles in ensuing applications. We may also stress the significance of the notion of exponential tightness, and of the essential equiv-alence between the setwise and functional formulations of the large-deviation principle.
Large deviation theory is arguably one of the most technical areas of modern probability. For a beginning student, it is then essential not to get distracted by topological subtleties or elaborate computations. Many results are therefore stated under simplifying assumptions. Likewise, we postpone our discussion of the general principles, until the reader has become well acquainted with some basic ideas in a simple and concrete setting. For those reasons, impor-tant applications appear both at the beginning and at the end of the chapter, separated by some more abstract discussions of general notions and principles.
Returning to the elementary setting of i.i.d. random variables ξ, ξ1, ξ2, . . . , we write Sn = Σ k≤nξk and ¯ ξn = Sn/n. If m = Eξ exists and is finite, then P{¯ ξn > x} →0 for all x > m by the weak law of large numbers. Under stronger moment conditions, the rate of convergence turns out to be expo-nential and can be estimated with great accuracy. This elementary but quite technical observation, along with its multi-dimensional counterpart, lie at the core of large-deviation theory, and provide both a general pattern and a point of departure for the more advanced developments. For motivation, we begin with some simple observations.
Lemma 24.1 (tail rate) For i.i.d. random variables ξ, ξ1, ξ2, . . . , (i) n−1 log P{¯ ξn ≥x} →supn n−1 log P{¯ ξn ≥x} ≡−h(x), x ∈R, (ii) h is [0, ∞]-valued, non-decreasing, and convex, (iii) h(x) < ∞⇔P{ξ ≥x} > 0.
Proof: (i) Writing pn = P{¯ ξn ≥x}, we get for any m, n ∈N pm+n = P Sm+n ≥(m + n)x ≥P Sm ≥mx, Sm+n −Sm ≥nx = pm pn.
Taking logarithms, we conclude that the sequence −log pn is sub-additive, and the assertion follows by Lemma 25.19.
(ii) The first two assertions are obvious. To prove the convexity, let x, y ∈R be arbitrary, and proceed as before to get 24. Large Deviations 531 P S2n ≥n(x + y) ≥P{Sn ≥nx} P{Sn ≥ny}.
Taking logarithms, dividing by 2n, and letting n →∞, we obtain h 1 2 (x + y) ≤1 2 h(x) + h(y) , x, y > 0.
(iii) If P{ξ ≥x} = 0, then P{¯ ξn ≥x} = 0 for all n, and so h(x) = ∞.
Conversely, (i) yields log P{ξ ≥x} ≤−h(x), and so h(x) = ∞implies P{ξ ≥x} = 0.
2 To identify the limit in Lemma 24.1, we need some further notation, here given for convenience directly in d dimensions. For any random vector ξ in Rd, we introduce the cumulant-generating function Λ(u) = Λξ(u) = log Eeuξ, u ∈Rd.
(1) Note that Λ is convex, since by H¨ older’s inequality we have for any u, v ∈Rd and p, q > 0 with p + q = 1 Λ(pu + qv) = log E exp (pu + qv) ξ ≤log (Eeuξ)p (Eevξ)q = p Λ(u) + q Λ(v).
The surface z = Λ(u) in Rd+1 is determined by the class of supporting hyper-planes1 of different slopes, and we note that the plane with slope x ∈Rd (or normal vector (1, −x)) has equation z + Λ∗(x) = xu, u ∈Rd, where Λ∗is the Legendre–Fenchel transform of Λ, given by Λ∗(x) = sup u∈Rd ux −Λ(u) , x ∈Rd.
(2) We can often compute Λ∗explicitly. Here we consider two simple cases, both needed below. The results may be proved by elementary calculus.
Lemma 24.2 (Gaussian and Bernoulli distributions) (i) When ξ = (ξ1, . . . , ξd) is standard Gaussian in Rd, Λ∗ ξ(x) ≡1 2 |x|2, x ∈Rd, (ii) when ξ ∈{0, 1} with P{ξ = 1} = p ∈(0, 1), Λ∗ ξ(x) = x log x p + (1 −x) log 1−x 1−p, x ∈[0, 1], ∞, x / ∈[0, 1].
1d -dimensional affine subspaces 532 Foundations of Modern Probability The function Λ∗is again convex, since for any x, y ∈Rd and for p, q as before, Λ∗(px + qy) = supu p ux −Λ(u) + q uy −Λ(u) ≤p supu ux −Λ(u) + q supu uy −Λ(u) = p Λ∗(x) + q Λ∗(y).
If Λ < ∞near the origin, then m = Eξ exists and agrees with the gradient ∇Λ(0). Thus, the surface z = Λ(u) has tangent hyper-plane z = mu at 0, and we conclude that Λ∗(m) = 0 and Λ∗(x) > 0 for x ̸= m. If ξ is truly d -dimensional, then Λ is strictly convex at 0, and Λ∗is finite and continuous near m. For d = 1, we sometimes need the corresponding one-sided statements, which are easily derived by dominated convergence.
The following key result identifies the function h in Lemma 24.1. For sim-plicity, we assume that m = Eξ exists in [−∞, ∞).
Theorem 24.3 (rate function, Cram´ er, Chernoff) Let ξ, ξ1, ξ2, . . . be i.i.d.
random variables with m = E ξ < ∞. Then n−1 log P ¯ ξn ≥x →−Λ∗(x), x ≥m.
(3) Proof: Using Chebyshev’s inequality and (1), we get for any u > 0 P ¯ ξn ≥x = P euSn ≥enux ≤e−nuxEeuSn = exp nΛ(u) −nux , and so n−1 log P ¯ ξn ≥x ≤Λ(u) −ux.
This remains true for u ≤0, since in that case Λ(u) −ux ≥0 for x ≥m.
Hence, by (2), we have the upper bound n−1 log P ¯ ξn ≥x ≤−Λ∗(x), x ≥m, n ∈N.
(4) To derive a matching lower bound, we first assume that Λ < ∞on R+.
Then Λ is smooth on (0, ∞) with Λ′(0+) = m and Λ′(∞) = ess sup ξ ≡b, and so for any a ∈(m, b) we can choose a u > 0 with Λ′(u) = a. Let η, η1, η2, . . .
be i.i.d. with distribution P{η ∈B} = e−Λ(u)E euξ; ξ ∈B , B ∈B.
(5) Then Λη(r) = Λξ(r + u) −Λξ(u), and so Eη = Λ′ η(0) = Λ′ ξ(u) = a. For any ε > 0, we get by (5) P ¯ ξn −a < ε = enΛ(u)E exp(−nu¯ ηn); |¯ ηn −a| < ε ≥exp nΛ(u) −nu(a + ε) P |¯ ηn −a| < ε .
(6) 24. Large Deviations 533 Here the last probability tends to 1 by the law of large numbers, and so by (2) lim inf n→∞n−1 log P ¯ ξn −a < ε ≥Λ(u) −u(a + ε) ≥−Λ∗(a + ε).
Fixing any x ∈(m, b) and putting a = x + ε, we get for small enough ε > 0 lim inf n→∞n−1 log P ¯ ξn ≥x ≥−Λ∗(x + 2ε).
Since Λ∗is continuous on (m, b) by convexity, we may let ε →0 and combine with (4) to obtain (3).
The result for x > b is trivial, since in that case both sides of (3) equal −∞.
If instead x = b < ∞, then both sides equal log P{ξ = b}, the left side by a simple computation and the right side by an elementary estimate. Finally, let x = m > −∞. Since the statement is trivial when ξ = m a.s., we may assume that b > m. For any y ∈(m, b), we have 0 ≥n−1 log P ¯ ξn ≥m ≥n−1P ¯ ξn ≥y →−Λ∗(y) > −∞.
Here Λ∗(y) →Λ∗(m) = 0 by continuity, and (3) follows for x = m. This completes the proof when Λ < ∞on R+.
The case where Λ(u) = ∞for some u > 0 may be handled by truncation.
Thus, for any r > m we consider the random variables ξr k = ξk ∧r. Writing Λr and Λ∗ r for the associated functions Λ and Λ∗, we get for x ≥m ≥Eξr n−1 log P ¯ ξn ≥x ≥n−1 log P ¯ ξr n ≥x →−Λ∗ r(x).
(7) Now Λr(u) ↑Λ(u) by monotone convergence as r →∞, and by Dini’s theorem2 the convergence is uniform on every compact interval where Λ < ∞. Since also Λ′ is unbounded on the set where Λ < ∞, it follows easily that Λ∗ r(x) →Λ∗(x) for all x ≥m. The required lower bound is now immediate from (7).
2 We may supplement Lemma 24.1 with a criterion for exponential decline of the tail probabilities P{¯ ξn ≥x}.
Corollary 24.4 (exponential rate) Let ξ, ξ1, ξ2, . . . be i.i.d. random variables with m = E ξ < ∞and b = ess sup ξ. Then (i) for any x ∈(m, b), the probabilities P ¯ ξn ≥x decrease exponentially iff Λ(ε) < ∞for some ε > 0, (ii) the exponential decline extends to x = b iffP{ξ = b} ∈(0, 1).
2When fn ↑f for some continuous functions fn, f on a metric space, the convergence is uniform on compacts.
534 Foundations of Modern Probability Proof: If Λ(ε) < ∞for some ε > 0, then Λ′(0+) = m by dominated con-vergence, and so Λ∗(x) > 0 for all x > m. If instead Λ = ∞on (0, ∞), then Λ∗(x) = 0 for all x ≥m. The statement for x = b is trivial.
2 The large deviation estimates in Theorem 24.3 are easily extended from intervals [x, ∞) to arbitrary open or closed sets, which leads to a large-deviation principle for i.i.d. sequences in R. To fulfill our needs in subsequent applications and extensions, we consider a version of the same result in Rd. Motivated by the last result, and to avoid some technical complications, we assume that Λ(u) < ∞for all u. Write Bo and ¯ B = B−for the interior and closure of a set B.
Theorem 24.5 (large deviations in Rd, Varadhan) Let ξ, ξ1, ξ2, . . . be i.i.d.
random vectors in Rd with Λ = Λξ < ∞. Then for any B ∈Bd, −inf x∈Bo Λ ∗(x) ≤lim inf n→∞n−1 log P ¯ ξn ∈B ≤lim sup n→∞ n−1 log P ¯ ξn ∈B ≤−inf x∈¯ B Λ ∗(x).
Proof: To derive the upper bound, fix any ε > 0. By (2) there exists for every x ∈Rd some ux ∈Rd, such that uxx −Λ(ux) > Λ∗(x) −ε ∧ε−1, and by continuity we may choose an open ball Bx around x, such that uxy > Λ(ux) + Λ∗(x) −ε ∧ε−1, y ∈Bx.
By Chebyshev’s inequality and (1), we get for any n ∈N P ¯ ξn ∈Bx ≤E exp uxSn −n inf uxy; y ∈Bx ≤exp −n {Λ∗(x) −ε} ∧ε−1 .
(8) Further note that Λ < ∞implies Λ∗(x) →∞as |x| →∞, at least when d = 1.
By Lemma 24.1 and Theorem 24.3, we may then choose r > 0 so large that n−1 log P |¯ ξn| > r ≤−1/ε, n ∈N.
(9) Now let B ⊂Rd be closed. Then the set {x ∈B; |x| ≤r} is compact and may be covered by finitely many balls Bx1, . . . , Bxm with centers xi ∈B. By (8) and (9), we get for any n ∈N P ¯ ξn ∈B ≤Σ i≤m P ¯ ξn ∈Bxi + P |¯ ξn| > r ≤Σ i≤m exp −n {Λ∗(xi) −ε} ∧ε−1 + e−n/ε ≤(m + 1) exp −n {Λ∗(B) −ε} ∧ε−1 , 24. Large Deviations 535 where Λ∗(B) = infx∈B Λ∗(x). Hence, lim sup n→∞ n−1 log P ¯ ξn ∈B ≤− Λ∗(B) −ε ∧ε−1, and the upper bound follows since ε was arbitrary.
Turning to the lower bound, we first assume that Λ(u)/|u| →∞as |u| → ∞. Fix any open set B ⊂Rd and a point x ∈B. By compactness and the smoothness of Λ, there exists a u ∈Rd with ∇Λ(u) = x. Let η, η1, η2, . . . be i.i.d. random vectors with distribution (5), and note as before that Eη = x.
For ε > 0 small enough, we get as in (6) P ¯ ξn ∈B ≥P ¯ ξn −x < ε ≥exp nΛ(u) −nux −nε|u| P |¯ ηn −x| < ε .
Hence, by the law of large numbers and (2), lim inf n→∞n−1 log P ¯ ξn ∈B ≥Λ(u) −ux −ε|u| ≥−Λ∗(x) −ε|u|.
It remains to let ε →0 and take the supremum over x ∈B.
To eliminate the growth condition on Λ, let ζ, ζ1, ζ2, . . . be i.i.d. standard Gaussian random vectors independent of ξ and the ξn. Then for any σ > 0 and u ∈Rd, we have by Lemma 24.2 (i) Λξ+σζ(u) = Λξ(u) + Λζ(σu) = Λξ(u) + 1 2 σ2|u|2 ≥Λξ(u), and in particular Λ∗ ξ+σζ ≤Λ∗ ξ. Since also Λξ+σζ(u)/|u| ≥σ2|u|/2 →∞, we note that the previous bound applies to ¯ ξn +σ¯ ζn. Now fix any x ∈B as before, and choose ε > 0 small enough that B contains a 2ε-ball around x. Then P ¯ ξn + σ¯ ζn −x < ε ≤P ¯ ξn ∈B + P σ|¯ ζn| ≥ε ≤2 P ¯ ξn ∈B ∨P σ|¯ ζn| ≥ε .
Applying the lower bound to the variables ¯ ξn + σ¯ ζn and the upper bound to ¯ ζn, we get by Lemma 24.2 (i) −Λ∗ ξ(x) ≤−Λ∗ ξ+σζ(x) ≤lim inf n→∞n−1 log P ¯ ξn + σ¯ ζn −x < ε ≤lim inf n→∞n−1 log P ¯ ξn ∈B ∨P σ|¯ ζn| ≥ε ≤lim inf n→∞n−1 log P ¯ ξn ∈B ∨(−ε2/2σ2).
The desired lower bound now follows, as we let σ →0 and then take the supre-mum over all x ∈B.
2 536 Foundations of Modern Probability We can also establish large-deviation results in function spaces. The fol-lowing theorem is basic and sets the pattern for more complex results. For convenience, we may write C = C[0,1],Rd and Ck 0 = {x ∈Ck; x0 = 0}. We also introduce the Cameron–Martin space H1, consisting of all absolutely con-tinuous functions x ∈C0 with a Radon–Nikodym derivative ˙ x ∈L2, so that ∥˙ x∥2 2 = 1 0 | ˙ xt|2dt < ∞.
Theorem 24.6 (large deviations of Brownian motion, Schilder) Let X be a d-dimensional Brownian motion on [0, 1]. Then for any Borel set B ⊂C[0,1],Rd, we have −inf x∈Bo I(x) ≤lim inf ε→0 ε2 log P{εX ∈B} ≤lim sup ε→0 ε2 log P{εX ∈B} ≤−inf x∈¯ B I(x), where I(x) = 1 2 ∥˙ x∥2 2, x ∈H1, ∞, x / ∈H1.
The proof requires a simple topological fact.
Lemma 24.7 (level sets) The level sets Lr below are compact in C[0,1],Rd: Lr = I−1[0, r] = x ∈H1; ∥˙ x∥2 2 ≤2r , r > 0.
Proof: Cauchy’s inequality gives |xt −xs| ≤ t s | ˙ xu| du ≤(t −s)1/2 ∥˙ x∥2, 0 ≤s < t ≤1, x ∈H1.
By the Arzel a–Ascoli Theorem A5.2, the set Lr is then relatively compact in C. It is also weakly compact in the Hilbert space H1 with norm ∥x∥= ∥˙ x∥2.
Thus, every sequence x1, x2, . . . ∈Lr has a sub-sequence that converges in both C and H1, say with limits x ∈C and y ∈Lr, respectively. For every t ∈[0, 1], the sequence xn(t) then converges in Rd to both x(t) and y(t), and we get x = y ∈Lr.
2 Proof of Theorem 24.6: To establish the lower bound, fix any open set B ⊂C. Since I = ∞outside H1, it suffices to prove that −I(x) ≤lim inf ε→0 ε2 log P{εX ∈B}, x ∈B ∩H1.
(10) Then note as in Lemma 1.37 that C2 0 is dense in H1, and also that ∥x∥∞≤ ∥˙ x∥1 ≤∥˙ x∥2 for any x ∈H1. Hence, for every x ∈B ∩H1, there exist some functions xn ∈B ∩C2 0 with I(xn) →I(x), and it suffices to prove (10) for x ∈B ∩C2 0.
For small enough h > 0, Theorem 19.23 yields P{εX ∈B} ≥P ∥εX −x∥∞< h = E E −( ˙ x/ε) · X 1; ∥εX∥∞< h .
(11) 24. Large Deviations 537 Integrating by parts gives log E −( ˙ x/ε) · X 1 = −ε−1 1 0 ˙ xt dXt −ε−2I(x) = −ε−1 ˙ x1X1 + ε−1 1 0 ¨ xt Xt dt −ε−2I(x), and so by (11), ε2 log P{εX ∈B} ≥−I(x) −h| ˙ x1| −h∥¨ x∥1 + ε2 log P ∥εX∥∞< h .
Now (10) follows as we let ε →0 and then h →0.
Turning to the upper bound, fix any closed set B ⊂C, and let Bh be the closed h -neighborhood of B.
Letting Xn be the n -segment, polygonal approximation of X with Xn(k/n) = X(k/n) for k ≤n, we note that P{εX ∈B} ≤P εXn ∈Bh + P ε∥X −Xn∥> h .
(12) Writing I(Bh) = inf{I(x); x ∈Bh}, we obtain P εXn ∈Bh ≤P I(εXn) ≥I(Bh) .
Here 2I(Xn) is a sum of nd variables ξ2 ik, where the ξik are i.i.d. N(0, 1), and so by Lemma 24.2 (i) and an interpolated version of Theorem 24.5, lim sup ε→0 ε2 log P εXn ∈Bh ≤−I(Bh).
(13) Next, we get by Proposition 14.13 and some elementary estimates P ε∥X −Xn∥> h ≤n P ε∥X∥> h√n/2 ≤2 nd P ε2ξ2 > h2n/4d , where ξ is N(0, 1). Applying Theorem 24.5 and Lemma 24.2 (i) again, we obtain lim sup ε→0 ε2 log P ε∥X −Xn∥> h ≤−h2n/8d.
(14) Combining (12), (13), and (14) gives lim sup ε→0 ε2 log P{εX ∈B} ≤−I(Bh) ∧(h2n/8d), and as n →∞we obtain the upper bound −I(Bh).
It remains to show that I(Bh) ↑I(B) as h →0.
Then fix any r > suph I(Bh). For every h > 0, we may choose an xh ∈Bh with I(xh) ≤r, and by Lemma 24.7 we may extract a convergent sequence xhn →x with hn →0, such that even I(x) ≤r. Since also x ∈ h Bh = B, we obtain I(B) ≤r, as required.
2 The last two theorems suggest the following abstraction. Letting ξε, ε > 0, be random elements in a metric space S with Borel σ-field S, we say that 538 Foundations of Modern Probability the family (ξε) satisfies a large-deviation principle (LDP) with rate function I : S →[0, ∞], if for any B ∈S, −inf x∈Bo I(x) ≤lim inf ε→0 ε log P{ξε ∈B} ≤lim sup ε→0 ε log P{ξε ∈B} ≤−inf x∈¯ B I(x).
(15) For sequences ξ1, ξ2, . . . , we require the same condition with the normalizing factor ε replaced by n−1. It is often convenient to write I(B) = infx∈B I(x).
Writing SI for the class {B ∈S; I(Bo) = I( ¯ B)} of I-continuity sets, we note that (15) yields the convergence lim ε→0 ε log P{ξε ∈B} = −I(B), B ∈SI.
(16) If ξ, ξ1, ξ2, . . . are i.i.d. random vectors in Rd with Λ(u) = Eeuξ < ∞for all u, then by Theorem 24.5 the averages ¯ ξn satisfy the LDP in Rd with rate function Λ∗. If instead X is a d -dimensional Brownian motion on [0, 1], then Theorem 24.6 shows that the processes ε1/2X satisfy the LDP in C[0,1],Rd, with rate function I(x) = 1 2 ∥˙ x∥2 2 for x ∈H1 and I(x) = ∞otherwise.
We show that the rate function I is essentially unique.
Lemma 24.8 (regularization and uniqueness) Let (ξε) satisfy an LDP in a metric space S with rate function I. Then (i) I can be chosen to be lower semi-continuous, (ii) the choice of I in (i) is unique.
Proof: (i) Assume (15) for some I. Then the function J(x) = lim inf y→x I(y), x ∈S, is clearly lower semi-continuous with J ≤I.
It is also easy to verify that J(G) = I(G) for all open sets G ⊂S. Thus, (15) remains true with I replaced by J.
(ii) Suppose that (15) holds for each of the lower semi-continuous functions I and J, and let I(x) < J(x) for some x ∈S. By the semi-continuity of J, we may choose a neighborhood G of x with J( ¯ G) > I(x). Applying (15) to both I and J, we get the contradiction −I(x) ≤−I(G) ≤lim inf ε→0 ε log P{ξε ∈G} ≤−J( ¯ G) < −I(x).
2 Justified by the last result, we may henceforth take the lower semi-continuity to be part of our definition of a rate function. (An arbitrary function I satis-fying (15) will then be called a raw rate function.) No regularization is needed in Theorems 24.5 and 24.6, since the associated rate functions Λ∗and I are 24. Large Deviations 539 already lower semi-continuous, the former as the supremum of a family of con-tinuous functions, the latter by Lemma 24.7.
It is sometimes useful to impose a slightly stronger regularity condition on the function I. Thus, we say that I is good, if the level sets I−1[0, r] = {x ∈S; I(x) ≤r} are compact (rather than just closed).
Note that the infimum I(B) = infx∈B I(x) is then attained for every closed set B ̸= ∅.
The rate functions in Theorems 24.5 and 24.6 are clearly both good.
A related condition on the family (ξε) is the exponential tightness inf K lim sup ε→0 ε log P ξε / ∈K = −∞, (17) where the infimum extends over all compact sets K ⊂S. We actually need only the slightly weaker condition of sequential exponential tightness, where (17) is only required along sequences εn →0. To simplify our exposition, we often omit the sequential qualification from our statements, and carry out the proofs under the stronger non-sequential hypothesis.
We finally say that (ξε) satisfies the weak LDP with rate function I, if the lower bound in (15) holds as stated, while the upper bound is only required for compact sets B. We list some relations between the mentioned properties.
Lemma 24.9 (goodness, exponential tightness, and weak LDP) Let ξε, ε > 0, be random elements in a metric space S. Then (i) if (ξε) satisfies an LDP with rate function I, then (16) holds, and the two conditions are equivalent when I is good, (ii) if the ξε are exponentially tight and satisfy a weak LDP with rate function I, then I is good, and (ξε) satisfies the full LDP, (iii) if S is Polish and (ξε) satisfies an LDP with rate function I, then I is good iff(ξε) is sequentially exponentially tight.
Proof: (i) Let I be good and satisfy (16).
Write Bh for the closed h -neighborhood of B ∈S. Since I is non-increasing on S, we have Bh / ∈SI for at most countably many h > 0. Hence, (16) yields for almost every h > 0 lim sup ε→0 ε log P{ξε ∈B} ≤lim ε→0 ε log P ξε ∈Bh = −I(Bh).
To see that I(Bh) ↑I( ¯ B) as h →0, suppose that instead suph I(Bh) < I( ¯ B).
Since I is good, we may choose for every h > 0 some xh ∈Bh with I(xh) = I(Bh), and then extract a convergent sequence xhn →x ∈¯ B with hn →0.
Then the lower semi-continuity of I yields the contradiction I( ¯ B) ≤I(x) ≤lim inf n→∞I(xhn) ≤sup h>0 I(Bh) < I( ¯ B), proving the upper bound. Next let x ∈Bo, and conclude from (16) that, for almost all sufficiently small h > 0, 540 Foundations of Modern Probability −I(x) ≤−I({x}h) = lim ε→0 ε log P ξε ∈{x}h ≤lim inf ε→0 ε log P{ξε ∈B}.
The lower bound now follows as we take the supremum over x ∈Bo.
(ii) By (17), we may choose some compact sets Kr satisfying lim sup ε→0 ε log P{ξε / ∈Kr} < −r, r > 0.
(18) For any closed set B ⊂S, we have P{ξε ∈B} ≤2 P ξε ∈B ∩Kr ∨P ξε / ∈Kr , r > 0, and so, by the weak LDP and (18), lim sup ε→0 ε log P{ξε ∈B} ≤−I(B ∩Kr) ∧r ≤−I(B) ∧r.
The upper bound now follows as we let r →∞. Applying the lower bound and (18) to the sets Kc r gives −I(Kc r) ≤lim sup ε→0 ε log P ξε / ∈Kr < −r, r > 0, and so I−1[0, r] ⊂Kr for all r > 0, which shows that I is good.
(iii) The sufficiency follows from (ii), applied to an arbitrary sequence εn → 0. Now let S be separable and complete, and assume that the rate function I is good. For any k ∈N, we may cover S by some open balls Bk1, Bk2, . . . of radius 1/k. Putting Ukm = j≤m Bkj, we have supm I(U c km) = ∞, since any level set I−1[0, r] is covered by finitely many sets Bkj. Now fix any sequence εn →0 and constant r > 0. By the LDP upper bound and the fact that P{ξεn ∈U c km} →0 as m →∞for fixed n and k, we may choose mk ∈N so large that P ξεn ∈U c k,mk ≤exp (−rk/εn), n, k ∈N.
Summing a geometric series, we obtain lim sup n→∞ εn log P ξεn ∈∪k U c k,mk ≤−r.
The asserted exponential tightness now follows, since the set k Uk,mk is totally bounded and hence relatively compact.
2 By analogy with weak convergence theory, we may look for a version of (16) for continuous functions.
Theorem 24.10 (functional LDP, Varadhan, Bryc) Let ξε, ε > 0, be random elements in a metric space S.
24. Large Deviations 541 (i) If (ξε) satisfies an LDP with rate function I, and f : S →R is continuous and bounded above, then Λf ≡lim ε→0 ε log E exp f(ξε)/ε = sup x∈S f(x) −I(x) .
(ii) If the ξε are exponentially tight and the limits Λf in (i) exist for all f ∈Cb, then (ξε) satisfies an LDP with the good rate function I(x) = sup f∈Cb f(x) −Λf , x ∈S.
Proof: (i) For every n ∈N, we can choose finitely many closed sets B1, . . . , Bm ⊂S, such that f ≤−n on j Bc j, and the oscillation of f on each Bj is at most n−1. Then lim sup ε→0 ε log Eef(ξε)/ε ≤max j≤m lim sup ε→0 ε log E ef(ξε)/ε; ξε ∈Bj ∨(−n) ≤max j≤m sup x∈Bj f(x) −inf x∈Bj I(x) ∨(−n) ≤max j≤m sup x∈Bj f(x) −I(x) + n−1 ∨(−n) = sup x∈S f(x) −I(x) + n−1 ∨(−n).
The upper bound now follows as we let n →∞. Next we fix any x ∈S with a neighborhood G, and write lim inf ε→0 ε log Eef(ξε)/ε ≥lim inf ε→0 ε log E ef(ξε)/ε; ξε ∈G ≥inf y∈G f(y) −inf y∈G I(y) ≥inf y∈G f(y) −I(x).
Here the lower bound follows as we let G ↓{x}, and then take the supremum over x ∈S.
(ii) Note that I is lower semi-continuous, as the supremum over a family of continuous functions. Since Λf = 0 for f = 0, it is also clear that I ≥0. By Lemma 24.9 (ii), it remains to show that (ξε) satisfies the weak LDP with rate function I. Then fix any δ > 0. For every x ∈S, we may choose a function fx ∈Cb satisfying fx(x) −Λfx > I(x) −δ ∧δ−1, and by continuity there exists a neighborhood Bx of x such that fx(y) > Λfx + I(x) −δ ∧δ−1, y ∈Bx.
By Chebyshev’s inequality, we get for any ε > 0 P{ξε ∈Bx} ≤E exp ε−1 fx(ξε) −inf {fx(y); y ∈Bx} ≤E exp ε−1 fx(ξε) −Λfx −{I(x) −δ} ∧δ−1 , 542 Foundations of Modern Probability and so by the definition of Λfx, lim sup ε→0 ε log P ξε ∈Bx ≤lim ε→0 ε log E exp fx(ξε)/ε −Λfx − I(x) −δ ∧δ−1 = − I(x) −δ ∧δ−1.
Now fix any compact set K ⊂S, and choose x1, . . . , xm ∈K such that K ⊂ i Bxi. Then lim sup ε→0 ε log P ξε ∈K ≤max i≤m lim sup ε→0 ε log P ξε ∈Bxi ≤−min i≤m I(xi) −δ ∧δ−1 ≤− I(K) −δ ∧δ−1.
The upper bound now follows as we let δ →0.
Next consider any open set G and element x ∈G. For any n ∈N, we may choose a continuous function fn : S →[−n, 0] such that fn(x) = 0 and fn = −n on Gc. Then −I(x) = inf f∈Cb Λf −f(x) ≤Λfn −fn(x) = Λfn = lim ε→0 ε log E exp fn(ξε)/ε ≤lim inf ε→0 ε log P ξε ∈G ∨(−n).
The lower bound now follows as we let n →∞, and then take the supremum over all x ∈G.
2 We proceed to show how the LDP is preserved by continuous mappings.
The following results are often referred to as direct and inverse contraction principles. Given a rate function I on S and a map f : S →T, we define the image J = I ◦f −1 on T as the function J(y) = I f −1{y} = inf I(x); f(x) = y , y ∈T.
(19) Note that the corresponding set functions are related by J(B) ≡inf y∈B J(y) = inf I(x); f(x) ∈B = I(f −1B), B ⊂T.
Theorem 24.11 (continuous mapping) Let f be a continuous map between two metric spaces S, T, and let ξε be random elements in S.
(i) If (ξε) satisfies an LDP in S with rate function I, then the images f(ξε) satisfy an LDP in T with the raw rate function J = I ◦f −1. Moreover, J is a good rate function on T, whenever the function I is good on S.
24. Large Deviations 543 (ii) (Ioffe) Let (ξε) be exponentially tight in S, let f be injective, and let the images f(ξε) satisfy a weak LDP in T with rate function J. Then (ξε) satisfies an LDP in S with the good rate function I = J ◦f.
Proof: (i) Since f is continuous, the set f −1B is open or closed whenever the corresponding property holds for B. Using the LDP for (ξε), we get for any B ⊂T −I(f −1Bo) ≤lim inf ε→0 ε log P ξε ∈f −1Bo ≤lim sup ε→0 ε log P ξε ∈f −1 ¯ B ≤−I(f −1 ¯ B), which proves the LDP for {f(ξε)} with the raw rate function J = I ◦f −1.
When I is good, we claim that J−1[0, r] = f I−1[0, r] , r ≥0.
(20) To see this, fix any r ≥0, and let x ∈I−1[0, r]. Then J ◦f(x) = I ◦f −1 ◦f(x) = inf I(u); f(u) = f(x) ≤I(x) ≤r, which means that f(x) ∈J−1[0, r]. Conversely, let y ∈J−1[0, r]. Since I is good and f is continuous, the infimum in (19) is attained at some x ∈S, and we get y = f(x) with I(x) ≤r. Thus, y ∈f(I−1[0, r]), which completes the proof of (20). Since continuous maps preserve compactness, (20) shows that the goodness of I carries over to J.
(ii) Here I is again a rate function, since the lower semi-continuity of J is preserved by composition with the continuous map f. By Lemma 24.9 (ii), it is then enough to show that (ξε) satisfies a weak LDP in S. To prove the upper bound, fix any compact set K ⊂S, and note that the image set f(K) is again compact, since f is continuous. Hence, the weak LDP for {f(ξε)} yields lim sup ε→0 ε log P ξε ∈K = lim sup ε→0 ε log P f(ξε) ∈f(K) ≤−J{f(K)} = −I(K).
Next we fix any open set G ⊂S, and let x ∈G be arbitrary with I(x) = r < ∞. Since (ξε) is exponentially tight, we may choose a compact set K ⊂S such that lim sup ε→0 ε log P ξε / ∈K < −r.
(21) The continuous image f(K) is compact in T, and so by (21) and the weak LDP for {f(ξε)}, 544 Foundations of Modern Probability −I(Kc) = −J f(Kc) ≤−J {f(K)}c ≤lim inf ε→0 ε log P f(ξε) / ∈f(K) ≤lim sup ε→0 ε log P ξε / ∈K < −r.
Since I(x) = r, we conclude that x ∈K.
As a continuous bijection from the compact set K onto f(K), the function f is a homeomorphism between the two sets with their subset topologies. Then Lemma 1.6 yields an open set G′ ⊂T, such that f(x) ∈f(G∩K) = G′∩f(K).
Noting that P f(ξε) ∈G′ ≤P ξε ∈G + P ξε / ∈K , and using the weak LDP for {f(ξε)}, we get −r = −I(x) = −J{f(x)} ≤lim inf ε→0 ε log P f(ξε) ∈G′ ≤lim inf ε→0 ε log P ξε ∈G ∨lim sup ε→0 ε log P ξε / ∈K .
Hence, by (21), −I(x) ≤lim inf ε→0 ε log P ξε ∈G , x ∈G, and the lower bound follows as we take the supremum over all x ∈G.
2 We turn to the powerful method of projective limits.
The following se-quential version is sufficient for our needs, and will enable us to extend the LDP to a variety of infinite-dimensional contexts. Some general background on projective limits is provided by Appendix 6.
Theorem 24.12 (random sequences, Dawson & G¨ artner) For any metric spaces S1, S2, . . . , let ξε = (ξn ε ) be random elements in S∞= S1×S2×· · · , such that for each n ∈N the vectors (ξ1 ε, . . . , ξn ε ) satisfy an LDP in Sn = S1×· · ·×Sn with the good rate function In. Then (ξε) satisfies an LDP in S∞with the good rate function I(x) = supnIn(x1, . . . , xn), x = (x1, x2, . . .) ∈S∞.
(22) Proof: For any m ≤n, we introduce the natural projections πn : S∞→Sn and πmn : Sn →Sm.
Since the πmn are continuous and the In are good, Theorem 24.11 yields Im = In ◦π−1 mn for all m ≤n, and so πmn(I−1 n [0, r]) ⊂ I−1 m [0, r] for all r ≥0 and m ≤n. Hence, for each r ≥0 the level sets I−1 n [0, r] form a projective sequence. Since they are also compact by hypothesis, and by (22), I−1[0, r] = ∩n π−1 n I−1 n [0, r], r ≥0, (23) the sets I−1[0, r] are compact by Lemma A6.4. Thus, I is again a good rate function.
24. Large Deviations 545 Now fix any closed set A ⊂S, and put An = πnA, so that πmnAn = Am for all m ≤n. Since the πmn are continuous, we have also πmnA− n ⊂A− m for m ≤n, which means that the sets A− n form a projective sequence. We claim that A = ∩n π−1 n A− n .
(24) Here the relation A ⊂π−1 n A− n is obvious. Now let x / ∈A. By the definition of the product topology, we may choose a k ∈N and an open set U ⊂Sk, such that x ∈π−1 k U ⊂Ac. It follows easily that πkx ∈U ⊂Ac k. Since U is open, we have even πkx ∈(A− k )c. Thus, x / ∈ n π−1 n A− n , which completes the proof of (24). The projective property carries over to the intersections A− n ∩I−1 n [0, r], and formulas (23) and (24) combine into the relation A ∩I−1[0, r] = ∩n π−1 n A− n ∩I−1 n [0, r] , r ≥0.
(25) Now let I(A) > r ∈R. Then A ∩I−1[0, r] = ∅, and by (25) and Lemma A6.4 we get A− n ∩I−1 n [0, r] = ∅for some n ∈N, which implies In(A− n ) ≥r.
Noting that A ⊂π−1 n An and using the LDP in Sn, we conclude that lim sup ε→0 ε log P ξε ∈A ≤lim sup ε→0 ε log P πnξε ∈An ≤−In(A− n ) ≤−r, The upper bound now follows as we let r ↑I(A).
Finally, fix an open set G ⊂S∞, and let x ∈G be arbitrary.
By the definition of the product topology, we may choose n ∈N and an open set U ⊂Sn such that x ∈π−1 n U ⊂G. The LDP in Sn yields lim inf ε→0 ε log P ξε ∈G ≥lim inf ε→0 ε log P πnξε ∈U ≥−In(U) ≥−In ◦πn(x) ≥−I(x), and the lower bound follows as we take the supremum over all x ∈G.
2 We consider yet another basic extension principle for the LDP, involving suitable approximations. The families of random elements ξε and ηε in a com-mon separable metric space (S, d) are said to be exponentially equivalent, if lim ε→0 ε log P d(ξε, ηε) > h = −∞, h > 0.
(26) The separability of S is only needed to ensure measurability of the pairwise distances d(ξε, ηε).
In general, we may replace (26) by a similar condition involving the outer measure.
Lemma 24.13 (approximation) Let ξε, ηε be exponentially equivalent random elements in a separable metric space S. Then (ξε) satisfies an LDP with the good rate function I, iffthe same LDP holds for (ηε).
546 Foundations of Modern Probability Proof: Let the LDP hold for (ξε) with rate function I. Fix any closed set B ⊂S, and write Bh for the closed h -neighborhood of B. Then P ηε ∈B ≤P ξε ∈Bh + P d(ξε, ηε) > h , and so by (26) and the LDP for (ξε), lim sup ε→0 ε log P ηε ∈B ≤lim sup ε→0 ε log P ξε ∈Bh ∨lim sup ε→0 ε log P d(ξε, ηε) > h ≤−I(Bh) ∨(−∞) = −I(Bh).
Since I is good, we have I(Bh) ↑I(B) as h →0, and the required upper bound follows.
Now fix any open set G ⊂S, and let x ∈G. If d(x, Gc) > h > 0, we may choose a neighborhood U of x such that U h ⊂G. Noting that P ξε ∈U ≤P ηε ∈G + P d(ξε, ηε) > h , we get by (26) and the LDP for (ξε) −I(x) ≤−I(U) ≤lim inf ε→0 ε log P ξε ∈U ≤lim inf ε→0 ε log P ηε ∈G ∨lim sup ε→0 ε log P d(ξε, ηε) > h = lim inf ε→0 ε log P ηε ∈G .
The required lower bound now follows, as we take the supremum over all x ∈G.
2 We turn to some important applications, illustrating the power of the gen-eral theory. First we study the perturbations of an ordinary differential equa-tion ˙ x = b(x) by a small noise term. More precisely, we consider the unique solutions Xε with Xε 0 = 0 of the d -dimensional SDEs3 dXt = ε1/2 dBt + b(Xt) dt, t ≥0, ε ≥0, (27) where B is a Brownian motion in Rd and b is a bounded and uniformly Lipschitz continuous mapping on Rd. Let H∞be the set of all absolutely continuous functions x: R+ →Rd with x0 = 0, such that ˙ x ∈L2.
Theorem 24.14 (perturbed dynamical systems, Freidlin & Wentzell) For any bounded, uniformly Lipschitz continuous function b: Rd →Rd, the solutions Xε to (27) with Xε 0 = 0 satisfy an LDP in CR+,Rd with the good rate function I(x) = 1 2 ∞ 0 ˙ xt −b(xt) 2dt, x ∈H∞.
(28) 3See Chapter 32 for a detailed discussion of such equations.
24. Large Deviations 547 Here it is understood that I(x) = ∞when x / ∈H∞. Note that the result for b = 0 extends Theorem 24.6 to processes on R+.
Proof: If B1 is a Brownian motion on [0, 1], then for every r > 0, the scaled process Br = Φ(B1) given by Br t = r1/2B1 t/r is a Brownian motion on [0, r]. Since Φ is continuous from C[0,1] to C[0,r], we see from Theorems 24.6 and 24.11 (i), combined with Lemma 24.7, that the processes ε1/2Br satisfy an LDP in C[0,r] with the good rate function Ir = I1 ◦Φ−1, where I1(x) = 1 2 ∥˙ x∥2 2 for x ∈H1 and I1(x) = ∞otherwise. Now Φ maps H1 onto Hr, and when y = Φ(x) with x ∈H1, we have ˙ xt = r1/2 ˙ yrt. Hence, a simple calculation yields Ir(y) = 1 2 r 0 | ˙ ys|2ds = 1 2 ∥˙ y∥2 2, which extends Theorem 24.6 to [0, r]. For the further extension to R+, write πnx for the restriction of a function x ∈CR+ to [0, n], and infer from Theorem 24.12 that the processes ε1/2B satisfy an LDP in CR+ with the good rate function I∞(x) = supn In(πnx) = 1 2 ∥˙ x∥2 2.
By an elementary version of Theorem 32.3, the integral equation xt = zt + t 0 b(xs) ds, t ≥0, (29) has a unique solution x = F(z) in C = CR+ for every z ∈C. Letting z1, z2 ∈C be arbitrary, and writing a for the Lipschitz constant of b, we note that the corresponding solutions xi = F(zi) satisfy x1 t −x2 t ≤ z1 −z2 + a t 0 x1 s −x2 s ds, t ≥0.
Hence, Gronwall’s Lemma 32.4 yields ∥x1 −x2∥≤∥z1 −z2∥ear on the interval [0, r], which shows that F is continuous. Using Schilder’s theorem on R+, along with Theorem 24.11 (i), we conclude that the processes Xε satisfy an LDP in CR+ with the good rate function I = I∞◦F −1. Now F is clearly bijective, and by (29) the functions z and x = F(z) lie simultaneously in H∞, in which case ˙ z = ˙ x −b(x) a.e. Thus, I is indeed given by (28).
2 Now consider a random element ξ with distribution μ, in an arbitrary metric space S. Introduce the cumulant-generating functional Λ(f) = log Eef(ξ) = log μef, f ∈Cb(S), with associated Legendre–Fenchel transform Λ∗(ν) = sup f∈Cb νf −Λ(f) , ν ∈ˆ MS, (30) where ˆ MS is the class of probability measures on S, endowed with the topol-ogy of weak convergence. Note that Λ and Λ∗are both convex, by the same argument as for Rd.
For any measures μ, ν ∈ ˆ MS, we define the relative entropy of ν with respect to μ by 548 Foundations of Modern Probability H(ν | μ) = ν log p = μ(p log p), ν ≪μ with ν = p · μ, ∞, ν ̸≪μ.
Since x log x is convex, the function H(ν | μ) is convex in ν for fixed μ, and Jensen’s inequality yields H(ν | μ) ≥μp log μp = νS log νS = 0, v ∈ˆ MS, with equality iffν = μ.
Now let ξ1, ξ2, . . . be i.i.d. random elements in S. The associated empirical distributions ηn = n−1 Σ k≤n δξk, n ∈N, may be regarded as random elements in ˆ MS, and we note that ηnf = n−1 Σ k≤n f(ξk), f ∈Cb(S), n ∈N.
In particular, Theorem 24.5 applies to the random vectors (ηnf1, . . . , ηnfm), for fixed f1, . . . , fm ∈Cb(S). The following result may be regarded as an infinite-dimensional version of Theorem 24.5. It also provides an important connection to statistical mechanics, via the entropy function.
Theorem 24.15 (large deviations of empirical distributions, Sanov) Let ξ1, ξ2, . . . be i.i.d. random elements with distribution μ in a Polish space S, and put Λ(f) = log μ ef. Then the associated empirical distributions η1, η2, . . . satisfy an LDP in ˆ MS with the good rate function Λ∗(ν) = H(ν | μ), ν ∈ˆ MS.
(31) A couple of lemmas will be needed for the proof.
Lemma 24.16 (relative entropy, Donsker & Varadhan) For any μ, ν ∈ˆ MS, (i) the supremum in (30) may be taken over all bounded, measurable func-tions f : S →R, (ii) then (31) extends to measures on any measurable space S.
Proof: (i) Use Lemma 1.37 and dominated convergence.
(ii) If ν ̸≪μ, then H(ν | μ) = ∞by definition. Choosing B ∈S with μB = 0 and νB > 0, and taking fn = n1B, we obtain νfn −log μefn = n νB →∞.
Thus, even Λ∗(ν) = ∞in this case, and it remains to prove (31) when ν ≪μ.
Assuming ν = p · μ and writing f = log p, we note that νf −log μef = ν log p −log μp = H(ν | μ).
If f = log p is unbounded, it may approximated by some bounded, measurable functions fn satisfying μefn →1 and νfn →νf, and we get Λ∗(ν) ≥H(ν | μ).
24. Large Deviations 549 To prove the reverse inequality, we first let S be finite and generated by a partition B1, . . . , Bn of S. Putting μk = μBk, νk = νBk, and pk = νk/μk, we may write our claim in the form g(x) ≡Σ k νkxk −logΣ k μkexk ≤Σ k νk log pk, where x = (x1, . . . , xn) ∈Rd is arbitrary. Here the function g is concave and satisfies ∇g(x) = 0 for x = (log p1, . . . , log pn), asymptotically when pk = 0 for some k. Thus, supx g(x) = g log p1, . . . , log pk = Σ k νk log pk.
To prove the general inequality νf −log μef ≤ν log p, we may choose f to be simple.
The generated σ-field F ⊂S is then finite, and we note that ν = μ(p |F) · μ on F. Using the result in the finite case, along with Jensen’s inequality for conditional expectations, we obtain νf −log μef ≤μ μ(p |F) log μ(p |F) ≤μ μ p log p F = ν log p.
2 Lemma 24.17 (exponential tightness) The empirical distributions ηn in The-orem 24.15 are exponentially tight in ˆ MS.
Proof: If B ∈S with P{ξ ∈B} = p ∈(0, 1), Theorem 24.3 and Lemmas 24.1 and 24.2 yield for any x ∈[p, 1] supn n−1 log P ηnB > x ≤−x log x p −(1 −x) log 1 −x 1 −p .
(32) In particular, the right-hand side tends to −∞as p →0 for fixed x ∈(0, 1).
Now fix any r > 0. By (32) and Theorem 23.2, we may choose some compact sets K1, K2, . . . ⊂S with P ηnKc k > 2−k ≤e−knr, k, n ∈N.
Summing over k gives lim sup n→∞n−1 log P ∪k ηnKc k > 2−k ≤−r, and it remains to note that the set M = ∩k ν ∈ˆ MS; νKc k ≤2−k is compact, by another application of Theorem 23.2.
2 Proof of Theorem 24.15: By Theorem 1.8, we can embed S as a Borel subset of a compact metric space K. The function space Cb(K) is separable, 550 Foundations of Modern Probability and we can choose a dense sequence f1, f2, . . . ∈Cb(K). For any m ∈N, the random vector {f1(ξ), . . . , fm(ξ)} has cumulant-generating function Λm(u) = log E exp Σ k≤m ukfk(ξ) = Λ ◦Σ k≤m ukfk, u ∈Rm, and so by Theorem 24.5 the random vectors (ηnf1, . . . , ηnfm) satisfy an LDP in Rm with the good rate function Λ∗ m. Then by Theorem 24.12, the infinite sequences (ηnf1, ηnf2, . . .) satisfy an LDP in R∞with the good rate function J = supm(Λ∗ m ◦πm), where πm denotes the natural projection of R∞onto Rm.
Since ˆ MK is compact by Theorem 23.2 and the mapping ν →(νf1, νf2, . . .) is a continuous injection of ˆ MK into R∞, Theorem 24.11 (ii) shows that the random measures ηn satisfy an LDP in ˆ MK with the good rate function IK(ν) = J νf1, νf2, . . .
= supm Λ∗ m νf1, . . . , νfm = sup m sup u∈Rm Σ k≤m ukνfk −Λ ◦Σ k≤m ukfk = sup f∈F νf −Λ(f) = sup f∈Cb νf −Λ(f) , (33) where F is the set of all linear combinations of f1, f2, . . . .
Next, we note that the natural embedding ˆ MS →ˆ MK is continuous, since for any f ∈Cb(K) the restriction of f to S belongs to Cb(S). Since it is also trivially injective, we see from Theorem 24.11 (ii) and Lemma 24.17 that the ηn satisfy an LDP even in ˆ MS, with a good rate function IS equal to the restric-tion of IK to ˆ MS. It remains to note that IS = Λ∗by (33) and Lemma 24.16. 2 We conclude with a remarkable application of Schilder’s Theorem 24.6.
Writing B for a standard Brownian motion in Rd, we introduce for each t > e the scaled process Xt s = Bst 2t log log t , s ≥0.
(34) Theorem 24.18 (functional law of the iterated logarithm, Strassen) For a Brownian motion B in Rd, define the processes Xt by (34). Then these state-ments are equivalent and hold outside a fixed P-null set: (i) the set of paths Xt with t ≥3 is relatively compact in CR+,Rd, with set of limit points as t →∞ K = x ∈H∞; ∥˙ x∥2 ≤1 , (ii) for any continuous function F : CR+,Rd →R, lim sup t→∞ F(Xt) = sup x∈K F(x).
24. Large Deviations 551 In particular, we may recover the classical law of the iterated logarithm in Theorem 14.18 by choosing F(x) = x1. Using Theorem 22.6, we can easily derive a correspondingly strengthened version for random walks.
Proof: The equivalence (i) ⇔(ii) being elementary, it is enough to prove (i). Noting that Xt d = (2 log log t)−1/2B and using Theorem 24.6, we get for any measurable set A ⊂CR+,Rd and constant r > 1 lim sup n→∞ log P Xrn∈A log n ≤lim sup t→∞ log P{Xt ∈A} log log t ≤−2 I( ¯ A), lim inf n→∞ log P Xrn∈A log n ≥lim inf t→∞ log P{Xt ∈A} log log t ≥−2 I(Ao), where I(x) = 1 2 ∥˙ x∥2 2 for x ∈H∞and I(x) = ∞otherwise. Hence, Σ n P Xrn∈A < ∞, 2 I( ¯ A) > 1, = ∞, 2 I(Ao) < 1.
(35) Now fix any r > 1, and let G ⊃K be open. Note that 2 I(Gc) > 1 by Lemma 24.7. By the first part of (35) and the Borel–Cantelli lemma, we have P{Xrn / ∈G i.o.} = 0, or equivalently 1G(Xrn) →1 a.s. Since G was arbitrary, it follows that ρ(Xrn, K) →0 a.s. for any metrization ρ of CR+,Rd. In particular, this holds with any c > 0 for the metric ρc(x, y) = ∞ 0 (x −y)∗ s ∧1 e−cs ds, x, y ∈CR+,Rd.
To extend the convergence to the entire family {Xt}, fix a path of B with ρ1(Xrn, K) →0, and choose some functions yrn ∈K with ρ1(Xrn, yrn) →0.
For any t ∈[rn, rn+1), the paths Xrn and Xt are related by Xt(s) = Xrn(t r−ns) rn log log rn t log log t 1/2 , s > 0.
Defining yt in the same way in terms of yrn, we note that also yt ∈K, since I(yt) ≤I(yrn). (The two H∞-norms would agree if the logarithmic factors were omitted.) Furthermore, ρr(Xt, yt) = ∞ 0 Xt −yt∗ s ∧1 e−rs ds ≤ ∞ 0 Xrn −yrn∗ rs ∧1 e−rs ds = r−1ρ1 Xrn, yrn →0.
Thus, ρr(Xt, K) →0. Since K is compact, we conclude that {Xt} is relatively compact, and that all its limit points as t →∞belong to K.
Now fix any y ∈K and u ≥ε > 0. By the established part of the theorem and Cauchy’s inequality, we have a.s.
552 Foundations of Modern Probability lim sup t→∞ Xt −y ∗ ε ≤sup x∈K (x −y)∗ ε ≤sup x∈K x∗ ε + y∗ ε ≤2 ε1/2.
(36) Write x∗ ε,u = sups∈[ε,u] |xs −xε|, and choose r > u/ε to ensure independence between the variables (Xrn −y)∗ ε,u. Applying the second case of (35) to the open set A = {x; (x−y)∗ ε,u < ε}, and using the Borel–Cantelli lemma together with (36), we obtain a.s.
lim inf t→∞ Xt −y ∗ u ≤lim sup t→∞ Xt −y ∗ ε + lim inf n→∞ Xrn −y ∗ ε,u ≤2 ε1/2 + ε.
As ε →0, we get lim inft(Xt −y)∗ u = 0 a.s., and so lim inft ρ1(Xt, y) ≤e−u a.s.
Letting u →∞, we obtain lim inft ρ1(Xt, y) = 0 a.s. Applying this result to a dense sequence y1, y2, . . . ∈K, we see that a.s. every element of K is a limit point as t →∞of the family {Xt}.
2 Exercises 1. For any random vector ξ and constant a in Rd, show that Λξ−a(u) = Λξ(u)−ua and Λ∗ ξ−a(x) = Λ∗ ξ(x + a).
2. For any random vector ξ in Rd and non-singular d × d matrix a, show that Λaξ(u) = Λξ(ua) and Λ∗ aξ(x) = Λ∗ ξ(a−1x).
3.
For any random vectors ξ⊥ ⊥η, show that Λξ,η(u, v) = Λξ(u) + Λη(v) and Λ∗ ξ,η(x, y) = Λ∗ ξ(x) + Λ∗ η(y).
4. Prove the claims of Lemma 24.2.
5. For a Gaussian vector ξ in Rd with mean m ∈Rd and covariance matrix a, show that Λ∗ ξ(x) = 1 2 (x −m)′a−1(x −m). Explain the interpretation when a is singular.
6. Let ξ be a standard Gaussian random vector in Rd. Show that the family ε1/2ξ satisfies an LDP in Rd with the good rate function I(x) = 1 2 |x|2. (Hint: Deduce the result along the sequence εn = n−1 from Theorem 24.5, and extend by monotonicity to general ε > 0.) 7. Use Theorem 24.11 (i) to deduce the preceding result from Schilder’s theorem.
(Hint: For x ∈H1, note that |x1| ≤∥˙ x∥2, with equality iffxt ≡tx1.) 8. Prove Schilder’s theorem on [0, T] by the same argument as for [0, 1].
9. Deduce Schilder’s theorem in C[0,n],Rd from the version in C[0,1],Rnd.
10. Let B be a Brownian bridge in Rd. Show that the processes ε1/2B satisfy an LDP in C[0,1],Rd with the good rate function I(x) = 1 2 ∥˙ x∥2 2 for x ∈H1 with x1 = 0 and I(x) = ∞otherwise. (Hint: Write Bt = Xt −tX1, where X is a Brownian motion in Rd, and use Theorem 24.11. Check that ∥˙ x−a∥2 is minimized for a = x1.) 11. Show that the exponential tightness and its sequential version are preserved by continuous mappings.
24. Large Deviations 553 12. Show that if the processes Xε, Y ε in CR+,Rd are exponentially tight, then so is any linear combination aXε + b Y ε. (Hint: Use the Arzela–Ascoli theorem.) 13. Prove directly from (27) that the processes Xε in Theorem 24.14 are expo-nentially tight. (Hint: Use Lemmas 24.7 and 24.9 (iii), along with the Arzel a–Ascoli theorem.) Derive the same result from the stated theorem.
14. Let ξε be random elements in a locally compact metric space S, satisfying an LDP with the good rate function I. Show that the ξε are exponentially tight, even in the non-sequential sense. (Hint: For any r > 0, there exists a compact set Kr ⊂S with I−1[0, r] ⊂Ko r. Now apply the LDP upper bound to the closed sets (Ko r)c ⊃Kc r.) 15. For any metric space S and lcscH space T, let Xε be random elements in CT,S, whose restrictions Xε K to an arbitrary compact set K ⊂T satisfy an LDP in CK,S with the good rate function IK. Show that the Xε satisfy an LDP in CT,S with the good rate function I = supK(IK ◦πK), where πK denotes the restriction map CT,S →CK,S.
16. Let ξkj be i.i.d. random vectors in Rd with Λ(u) = Eeuξkj < ∞for all u ∈Rd.
Show that the sequences ¯ ξn = n−1Σk≤n(ξk1, ξk2, . . .) satisfy an LDP in (Rd)∞with the good rate function I(x) = ΣjΛ∗(xj). Also derive an LDP for the associated random walks in Rd.
17. Let ξ be a sequence of i.i.d. N(0, 1) random variables. Use the preceding result to show that the sequences ε1/2ξ satisfy an LDP in R∞with the good rate function I(x) = 1 2 ∥x∥2 for x ∈l2 and I(x) = ∞otherwise. Also derive the statement from Schilder’s theorem.
18. Let ξ1, ξ2, . . . be i.i.d. random probability measures on a Polish space S. Derive an LDP in ˆ MS for the averages ¯ ξn = n−1Σk≤nξk. (Hint: Define Λ(f) = log Eeξkf, and proceed as in the proof of Sanov’s theorem.) 19. Show how the classical LIL4 in Theorem 14.18 follows from Theorem 24.18.
Also use the latter result to derive an LIL for the variables ξt = |B2t −Bt|, where B is a Brownian motion in Rd.
20. Use Theorem 24.18 to derive a corresponding LIL in C[0,1],Rd.
21. Use Theorems 22.6 and 24.18 to derive a functional LIL for random walks, based on i.i.d. random variables with mean 0 and variance 1. (Hint: For the result in CR+,R, replace the summation process S[t] by its linearly interpolated version, as in Corollary 23.6.) 22. Use Theorems 22.13 and 24.18 to derive a functional LIL for suitable renewal processes.
23.
Let B1, B2, . . . be independent Brownian motions in Rd.
Show that the sequence of paths Xn t = (2 log n)−1/2Bn t , n ≥2, is a.s. relatively compact in CR+,Rd with set of limit points K = {x ∈H∞; ∥˙ x∥2 ≤1}.
4law of the iterated logarithm VIII. Stationarity, Symmetry and Invariance Stationary and symmetric processes represent the third basic dependence structure of probability theory. In Theorem 25 we prove the pointwise and mean ergodic theorems in discrete and continuous time, along with various multi-variate and sub-additive extensions.
Some main results admit exten-sions to transition semi-groups of Markov processes, which leads in Chapter 26 to some powerful limit theorems for the latter. Here we also study weak and strong ergodicity as well as Harris recurrence. In Chapter 27 we prove the basic representations and limit theorems for exchangeable processes, as well as the predictable sampling and mapping theorems. The final Chapter 28 pro-vides representations of contractable, exchangeable, and rotatable arrays. For the novice we recommend especially Chapter 25 plus selected material from Chapters 26–27. Chapter 28 is more advanced and might be postponed.
−−− 25.
Stationary processes and ergodic theorems.
After a general discussion of stationary and ergodic sequences and processes, we prove the ba-sic discrete- and continuous-time ergodic theorems, along with various multi-variate and sub-additive extensions. Next we consider the basic ergodic de-composition and prove some powerful coupling theorems. Among applications we note some limit theorems for entropy and information and for random ma-trices. We also include a very general version of the classical ballot theorem.
26. Ergodic properties of Markov processes. Here we begin with the a.e. ergodic theorem for positive L1−L∞contractions on an abstract measure space, along with a related ratio ergodic theorem, leading to some powerful limit theorems for Markov processes. Next we characterize weak and strong ergodicity, and study the notion of Harris recurrence and the existence of in-variant measures for regular Feller processes, using tools from potential theory.
27. Symmetric distributions and predictable maps. Here the cen-tral result is the celebrated de Finetti-type representation of exchangeable and contractable sequences, along with various continuous-time extensions, allow-ing applications to sampling from a finite population. We further note the powerful predictable sampling and mapping theorems, where the original in-variance property is extended to predictable times and mappings.
28.
Multi-variate arrays and symmetries.
Here we establish the Aldous–Hoover-type coding representations of jointly exchangeable and con-tractable arrays of arbitrary dimension, and identify pairs of functions that can be used to represent the same array. After highlighting some special cases, we proceed to a representation of rotatable arrays in terms of Gaussian random variables, and conclude with an extension of Kingman’s paintbox representa-tion for exchangeable and related partitions.
555 Chapter 25 Stationary Processes and Ergodic Theory Stationarity and invariance, two-sided extension, invariance and almost invariance, ergodicity, maximal ergodic lemma, discrete and continuous-time ergodic theorems, varying functions, moment and maximum in-equalities, multi-variate ergodic theorem, commuting maps, convex av-erages, mean and sub-additive ergodic theorems, products of random ma-trices, ergodic decomposition, shift coupling and mixing criteria, group coupling, ballot theorems, entropy and information We now come to the subject of stationarity, the third basic dependence struc-ture of modern probability, beside those of martingales and Markov processes1.
Apart from their useful role in probabilistic modeling and in providing a natural setting for many general theorems, stationary processes often arise as limiting processes in a wide variety of contexts throughout probability.
In particu-lar, they appear as steady-state versions of various Markov and renewal-type processes. Their importance is further due to the existence of a range of funda-mental limit theorems and maximum inequalities, belonging to the basic tool kit of every probabilist.
A process is said to be stationary if its distribution is invariant2 under shifts.
A key result is Birkhoff’s ergodic theorem, which may be regarded as a strong law of large numbers for stationary sequences and processes. After proving the classical ergodic theorems in discrete and continuous time, we turn to the multi-variate versions of Zygmund and Wiener, the former in a setting for non-commutative maps on rectangular regions, the latter in the commutative case and involving averages over increasing families of convex sets. We further consider a version of Kingman’s sub-additive ergodic theorem, and discuss a basic application to random matrices.
In all the mentioned results, the limit is a random variable, measurable with respect to an appropriate invariant σ-field I. Of special importance is the ergodic case, where I is trivial and the limit reduces to a constant. For general stationary processes, we consider a decomposition of the distribution into ergodic components. We further consider some basic criteria for a suitable shift coupling of two processes, expressed in terms of the tail and invariant σ-fields T and I, respectively. Those results will be helpful to prove some ergodic 1not to mention the mere independence 2Stationarity and invariance are often confused. The a.s. invariance Tξ = ξ is clearly much stronger than the stationarity Tξ d = ξ or L(Tξ) = L(ξ).
557 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 558 Foundations of Modern Probability theorems in Chapters 27 and 31. We conclude with some general versions of the classical ballot theorem, and prove the basic limit theorem for entropy and information.
Our treatment of stationary sequences and processes is continued in Chap-ters 27 and 31 with some important applications and extensions of the present theory, including various ergodic theorems for Palm distributions. In Chapter 26, we show how the basic ergodic theorems allow extensions to suitable con-traction operators, which leads to a profound unification of the present theory with the ergodic theory for Markov transition operators. In this context, we also prove some powerful ratio ergodic theorems.
Returning to the basic notions of stationarity and invariance, we fix a gen-eral measurable space (S, S), often taken to be Borel. Given a measure μ and a measurable transformation T on S, we say that T preserves μ or is μ -preserving if μ ◦T −1 = μ. Thus, if ξ is a random element in S with distribution μ, then T is measure-preserving iffTξ d = ξ. In particular, we may consider a random sequence ξ = (ξ0, ξ1, . . .) in a measurable space (S′, S′), and let θ denote the shift on S = (S′)∞, given by θ(x0, x1, . . .) = (x1, x2, . . .). Then ξ is said to be stationary3 if θ ξ d = ξ. We show that the general situation is equivalent to this special case.
Lemma 25.1 (stationarity and invariance) For any random element ξ in S and measurable map T on S, we have (i) Tξ d = ξ iffthe sequence (T nξ) is stationary, (ii) for ξ as in (i), (f ◦T nξ) is stationary for every measurable function f, (iii) any stationary random sequence in S can be represented as in (ii).
Proof: Assuming Tξ d = ξ, we get θ(f ◦T nξ) = (f ◦T n+1ξ) = (f ◦T nTξ) d = (f ◦T nξ), and so (f ◦T nξ) is stationary. Conversely, if η = (η0, η1, . . .) is stationary, we may write ηn = π0(θnη) with π0(x0, x1, . . .) = x0, and note that θη d = η by the stationarity of η.
2 In particular, when ξ0, ξ1, . . . is a stationary sequence of random elements in a measurable space S, and f is a measurable map of S∞into a measurable space S′, we see that the random sequence ηn = f(ξn, ξn+1, . . .), n ∈Z+, 3The existence of a P-preserving transformation T on the underlying probability space Ω is often postulated as part of the definition, so that any random element ξ generates a stationary sequence ξn = ξ ◦T n. Here we prefer a purely probabilistic setting, where stationarity is defined directly in terms of the distributions of (ξn). Though the resulting statements are more general, most proofs remain essentially the same.
25. Stationary Processes and Ergodic Theory 559 is again stationary.
The definition of stationarity extends in an obvious way to random se-quences indexed by Z = {. . . , −1, 0, 1, . . .}. A technical advantage of the two-sided version is that the associated shift operators are invertible and form a group, rather than just a semi-group. We show that the two cases are essen-tially equivalent. Here we assume the existence of appropriate randomization variables, as explained in Chapter 8.
Lemma 25.2 (two-sided extension) Every stationary random sequence ξ0, ξ1, . . . in a Borel space can be extended to a two-sided stationary sequence (. . . , ξ−1, ξ0, ξ1, ξ2, . . .).
Proof: Letting ϑ1, ϑ2, . . . be i.i.d. U(0, 1) and independent of ξ = (ξ0, ξ1, . . .), we may construct the ξ−n recursively as functions of ξ and ϑ1, . . . , ϑn, such that (ξ−n, ξ−n+1, . . .) d = ξ for all n.
In fact, once ξ−1, . . . , ξ−n have been chosen, the existence of ξ−n−1 is clear from Theorem 8.17, if we note that (ξ−n, ξ−n+1, . . .) d = θξ. Finally, the extended sequence is stationary by Propo-sition 4.2.
2 Now fix a measurable transformation T on a measure space (S, S, μ), and let Sμ denote the μ -completion of S.
We say that a set I ⊂S is invari-ant if T −1I = I and almost invariant if T −1I = I a.e. μ, in the sense that μ(T −1I △I) = 0. Since inverse maps preserve the basic set operations, the classes I and I′ of invariant sets in S and almost invariant sets in Sμ form σ-fields in S, called the invariant and almost invariant σ-fields, respectively.
A measurable function f on S is said to be invariant if f ◦T ≡f, and almost invariant if f ◦T = f a.e. μ. We show how invariant or almost invariant sets and functions are related.
Lemma 25.3 (invariant sets and functions) Fix a measure μ and a measur-able transformation T on S, and let f be a measurable map of S into a Borel space U. Then (i) f is invariant iffit is I-measurable, (ii) f is almost invariant iffit is I′-measurable.
Proof: Since U is Borel, we may take U = R. If f is invariant or almost in-variant, then so is the set Ix = f −1(−∞, x) for any x ∈R, and so Ix ∈I or I′, respectively. Conversely, if f is measurable with respect to I or I′, then Ix ∈I or I′, respectively, for every x ∈R. Hence, the function fn(s) = 2−n[2nf(s)], s ∈S, is invariant or almost invariant for every n ∈N, and the invariance or almost invariance carries over to the limit f.
2 We proceed to clarify the relationship between the invariant and almost invariant σ-fields. Here Iμ denotes the μ -completion of I in Sμ, the σ-field generated by I and the μ -null sets in Sμ.
560 Foundations of Modern Probability Lemma 25.4 (almost invariance) For any distribution μ and μ-preserving transformation T on S, the invariant and almost invariant σ-fields I and I′ are related by I′ = Iμ.
Proof: If J ∈Iμ, there exists an I ∈I with μ(I△J) = 0. Since T is μ -preserving, we get μ T −1J △J ≤μ T −1J △T −1I + μ T −1I △I + μ(I △J) = μ ◦T −1(J △I) = μ(J △I) = 0, which shows that J ∈I′. Conversely, given any J ∈I′, we may choose a J′ ∈S with μ(J△J′) = 0, and put I = n k≥n T −nJ′. Then clearly I ∈I and μ(I△J) = 0, and so J ∈Iμ.
2 A measure-preserving map T on a probability space (S, S, μ) is said to be ergodic for μ or simply μ -ergodic, if the invariant σ-field I is μ -trivial, in the sense that μI = 0 or 1 for every I ∈I. Depending on viewpoint, we may prefer to say that μ is ergodic for T, or T-ergodic. The terminology carries over to any random element ξ with distribution μ, which is said to be ergodic whenever this is true for T or μ. Thus, ξ is ergodic iffP{ξ ∈I} = 0 or 1 for any I ∈I, i.e., if the σ-field Iξ = ξ−1I in Ω is P-trivial. In particular, a stationary sequence ξ = (ξn) is ergodic, if the shift-invariant σ-field is trivial for the distribution of ξ.
We show how the ergodicity of a random element ξ is related to the ergod-icity of the generated stationary sequence.
Lemma 25.5 (ergodicity) Let ξ be a random element in S with distribution μ, and let T be a μ -preserving map on S. Then ξ is T-ergodic ⇔ (T nξ) is θ-ergodic, in which case η = (f ◦T nξ) is θ-ergodic for every measurable function f on S.
Proof: Fix any measurable map f : S →S′, and define F = {f ◦T n; n ≥0}, so that F ◦T = θ ◦F. If I ⊂(S′)∞is θ -invariant, then T −1F −1I = F −1θ−1I = F −1I, and so F −1I is T-invariant in S. Assuming ξ to be ergodic, we obtain P{η ∈I} = P{ξ ∈F −1I} = 0 or 1, which shows that even η is ergodic.
Conversely, let the sequence (T nξ) be ergodic, and fix any T-invariant set I in S. Put F = {T n; n ≥0}, and define A = {s ∈S∞; sn ∈I i.o.}. Then I = F −1A and A is θ -invariant. Hence, P{ξ ∈I} = P{(T nξ) ∈A} = 0 or 1, which means that even ξ is ergodic.
2 We may now state the fundamental a.s. and mean ergodic theorem for sta-tionary sequences of random variables. Recall that (S, S) denotes an arbitrary measurable space, and write Iξ = ξ−1I for convenience.
25. Stationary Processes and Ergodic Theory 561 Theorem 25.6 (ergodic theorem, Birkhoff) Let ξ be a random element in S with distribution μ, and let T be a μ-preserving map4 on S with invariant σ-field I. Then for any measurable function f ≥0 on S, n−1 Σ k 0 ≥0.
Proof (Garsia): Put Mn = S1 ∨· · · ∨Sn. Assuming ξ to be defined on the canonical space R∞, we get Sk = ξ1 + Sk−1 ◦θ ≤ξ1 + (Mn ◦θ)+, k = 1, . . . , n.
Taking maxima yields Mn ≤ξ1+(Mn◦θ)+ for all n ∈N, and so by stationarity E ξ1; Mn > 0 ≥E Mn −(Mn ◦θ)+; Mn > 0 ≥E (Mn)+ −(Mn ◦θ)+ = 0.
Since Mn ↑supn Sn, the assertion follows by dominated convergence.
2 Proof of Theorem 25.6 (Yosida & Kakutani): First let f ∈L1, and put ηk = f(T k−1ξ) for convenience. Since E(η1| Iξ) is an invariant function of ξ by Lemmas 1.14 and 25.3, the sequence ζk = ηk −E(η1| Iξ) is again stationary.
Writing Sn = ζ1 + · · · + ζn, we define for any ε > 0 Aε = lim sup n→∞(Sn/n) > ε , ζε n = (ζn −ε)1Aε, and note that the sums Sε n = ζε 1 + · · · + ζε n satisfy supnSε n > 0 = supn(Sε n/n) > 0 = supn(Sn/n) > ε ∩Aε = Aε.
Since Aε ∈Iξ, the sequence (ζε n) is stationary, and Lemma 25.7 yields 0 ≤E ζε 1; supnSε n > 0 = E ζ1 −ε; Aε = E E(ζ1|Iξ); Aε −εPAε = −εPAε, 4When T is a P -preserving transformation directly on Ω, the result reduces to n−1 k 0 E 1A n−1 Σ k r , which tends to 0 as PA →0 and then r →∞. Hence, by Lemma 5.10, the p -th powers on the left are uniformly integrable, and the asserted Lp-convergence follows by Proposition 5.12.
Finally, let f ≥0 be arbitrary, and put E{f(ξ)| Iξ} = ¯ η. Conditioning on the event {¯ η ≤r} for an arbitrary r > 0, we see that (1) holds a.s. on {¯ η < ∞}.
Next, we have a.s. for any r > 0 lim inf n→∞n−1 Σ k≤n f(T kξ) ≥lim n→∞n−1 Σ k≤n f(T kξ) ∧r = E f(ξ) ∧r Iξ .
As r →∞, the right-hand side tends a.s. to ¯ η, by the monotone convergence property of conditional expectations. In particular, the left-hand side is a.s.
infinite on {¯ η = ∞}, as required.
2 Write I and T for the shift-invariant and tail σ-fields in R∞, respec-tively, and note that I ⊂T .
Thus, for any sequence of random variables ξ = (ξ1, ξ2, . . .), we have Iξ = ξ−1I ⊂ξ−1T . By Kolmogorov’s 0−1 law, the latter σ-field is trivial when the ξn are independent. If they are even i.i.d. and integrable, then Theorem 25.6 yields n−1(ξ1 + · · · + ξn) →Eξ1 a.s. and in L1, conforming with Theorem 5.23. Hence, the last theorem contains the strong law of large numbers.
We may often allow the function f = fn,k in Theorem 25.6 to depend on n or k. Here we consider a slightly more general situation.
Corollary 25.8 (varying functions, Maker) Let ξ be a random element in S with distribution μ, let T be a μ-preserving map on S with invariant σ-field I, and let f and fm,k be measurable functions on S. Then (i) if fm,k →f a.s. and supm,k |fm,k| ∈L1, we have as m, n →∞ n−1 Σ kr |fm,k|, and conclude from the same result that a.s.
lim sup m,n→∞ n−1 Σ k<n fm,k(T kξ) ≤lim n→∞n−1 Σ k<n gr(T kξ) = E gr(ξ) Iξ .
25. Stationary Processes and Ergodic Theory 563 Here gr(ξ) →0 a.s., and so by dominated convergence E{gr(ξ) | Iξ} →0 a.s.
(ii) Assuming f = 0, we get by Minkowski’s inequality and the invariance of μ n−1 Σ k<n fm,k ◦T k p ≤n−1 Σ k<n ∥fm,k∥p →0.
2 To extend the ergodic theorem to continuous time, consider a family of transformations Tt on S, t ≥0, satisfying the semi-group property Ts+t = TsTt.
The latter is called a flow if it is also measurable, in the sense that the map (x, t) →Ttx is product measurable from S × R+ to S. The invariant σ-field I now consists of all sets I ∈S with T −1 t I = I for all t. A random element ξ in S is said to be (Tt) -stationary if Ttξ d = ξ for all t ≥0.
Corollary 25.9 (continuous-time ergodic theorem) Let ξ be a random ele-ment in S with distribution μ, and let (Ts) be a μ-preserving flow 5 on S with invariant σ-field I. Then for any measurable function f ≥0 on S, lim t→∞t−1 t 0 f(Tsξ) ds = E f(ξ) Iξ a.s.
(2) When f ∈Lp(μ) for a p ≥1, the convergence extends to Lp.
Proof: In both cases we may assume that f ≥0. Writing Xs = f(Tsξ), we get by Jensen’s inequality and Fubini’s theorem E t−1 t 0 Xs ds p ≤E t−1 t 0 Xp s ds = t−1 t 0 EXp s ds = EXp 0 < ∞.
The required convergence now follows, as we apply Theorem 25.6 to the func-tion g(x) = 1 0 f(Tsx) ds and the discrete shift T = T1.
To identify the limit, take f ∈L1, and introduce the invariant version ¯ f(ξ) = lim r→∞lim sup n→∞ n−1 r+n r f(Tsξ) ds, which is also Iξ-measurable.
The stationarity of Tsξ yields EIξf(Tsξ) = EIξf(ξ) a.s. for all s ≥0. Using Fubini’s theorem, the L1-convergence in (2), and the contraction property of conditional expectations, we get as t →∞ EIξf(ξ) = t−1EIξ t 0 f(Tsξ) ds P →EIξ ¯ f(ξ) = ¯ f(ξ), as required. The result extends as before to arbitrary f ≥0.
2 5When (Ts) is a P-preserving flow directly on Ω, the result reduces to t−1 t 0(ξ ◦Ts) ds →E(ξ|I) a.s. for any random variable ξ ≥0. The present version is more general.
564 Foundations of Modern Probability We return to the case of a stationary sequence ξ1, ξ2, . . . of integrable ran-dom variables, and put Sn = Σk≤n ξk. Since Sn/n converges a.s. by Theorem 25.6, the maximum M = supn(Sn/n) is a.s. finite. The following result, re-lating the moments of ξ and M, is known as the dominated ergodic theorem.
Here we write log+ x = log(x ∨1), for convenience.
Proposition 25.10 (maximum inequalities, Hardy & Littlewood, Wiener) Let ξ = (ξk) be a stationary sequence of random variables, and put Sn = Σ k≤n ξk and M = supn(Sn/n). Then (i) E|M|p < ⌢E|ξ1|p, p > 1, (ii) E |M| logm + |M| < ⌢1 + E |ξ1| logm+1 + |ξ1| , m ≥0.
The proof requires a simple estimate related to Lemma 25.7.
Lemma 25.11 (tail estimate) When ξ = (ξk) is stationary in L1, r P supn (Sn/n) > 2r ≤E ξ1; ξ1 > r , r > 0.
Proof: For any r > 0, put ξr k = ξk1{ξk > r}, and note that ξk ≤ξr k + r.
Assuming ξ to be defined on the canonical space R∞, and writing An = Sn/n, we get An −2r = An ◦(ξ −2r) ≤An ◦(ξr −r), which implies M −2r ≤M ◦(ξr −r). Applying Lemma 25.7 to the sequence ξr −r, we obtain r P{M > 2r} ≤r P M ◦(ξr −r) > 0 ≤E ξr 1; M ◦(ξr −r) > 0 ≤E ξr 1 = E ξ1; ξ1 > r .
2 Proof of Proposition 25.10: We may clearly assume that ξ1 ≥0 a.s.
(i) By Lemma 25.11, Fubini’s theorem, and calculus, E M p = p E M 0 rp−1dr = p ∞ 0 P{M > r} rp−1dr ≤2p ∞ 0 E ξ1; 2 ξ1 > r rp−2 dr = 2p E ξ1 2ξ1 0 rp−2 dr = 2p (p −1)−1E ξ1(2 ξ1)p−1 < ⌢E ξp 1.
(ii) For m = 0, we may write 25. Stationary Processes and Ergodic Theory 565 EM −1 ≤E(M −1)+ = ∞ 1 P{M > r} dr ≤2 ∞ 1 E ξ1; 2 ξ1 > r r−1dr = 2 E ξ1 2 ξ1∨1 1 r−1dr = 2 E ξ1 log+ 2ξ1 ≤e + 2E ξ1 log 2 ξ1; 2 ξ1 > e < ⌢1 + E ξ1 log+ ξ1 .
For m > 0, we may write instead E M logm + M = ∞ 0 P M logm + M > r dr = ∞ 1 P{M > t} m logm−1 t + logm t dt ≤2 ∞ 1 E ξ1; 2 ξ1 > t m logm−1 t + logm t t−1dt = 2 E ξ1 log+ 2ξ1 0 mxm−1 + xm dx = 2 E ξ1 logm + 2 ξ1 + logm+1 + 2 ξ1 m + 1 ≤2e + 4 E ξ1 logm+12 ξ1; 2 ξ1 > e < ⌢1 + E ξ1 logm+1 + ξ1 .
2 Given a measure space (S, S, μ), we introduce for any m ≥0 the class L logmL(μ) of measurable functions f on S satisfying |f| logm + |f| dμ < ∞.
Note in particular that L log0 L = L1.
Using the maximum inequalities of Proposition 25.10, we may prove the following multi-variate version of The-orem 25.6, for possibly non-commuting, measure-preserving transformations T1, . . . , Td.
Theorem 25.12 (multi-variate ergodic theorem, Zygmund) Let ξ be a ran-dom element in S with distribution μ, let T1, . . . , Td be μ-preserving maps on S with invariant σ-fields I1, . . . , Id, and put Jk = ξ−1Ik. Then for any f ∈L logd−1L(μ), we have as n1, . . . , nd →∞ (n1 · · · nd)−1 Σ k1 <n1 · · · Σ kd <nd f T k1 1 · · · T kd d ξ →EJd · · · EJ1f(ξ) a.s.
(3) When f ∈Lp(μ) for a p ≥1, the convergence extends to Lp.
Proof: Since E{f(ξ) | Jk} = μ(f | Ik) ◦ξ a.s., e.g. by Theorem 25.6, we may choose ξ to be the identity map on S. For d = 1, the result reduces to Theorem 25.6. Now assume the statement to be true up to dimension d.
Proceeding by induction, consider any μ -preserving maps T1, . . . , Td+1 on S, and let f ∈L logd L. By the induction hypothesis, the d -dimensional version 566 Foundations of Modern Probability of (3) holds as stated, and we may write the result in the form fm →¯ f a.s., where m = (n1, . . . , nd). Iterating Proposition 25.10, we also note that μ supm |fm| < ∞. Hence, Corollary 25.8 (i) yields as m, n →∞ n−1 Σ k 0, a bounded set K ∈Bd, and a function h: K →{1, . . . , m}, there exists a finite subset M ⊂K, such that the sets Bh(x) + x, x ∈M, are disjoint and satisfy λdK ≤ 2d d Σ x∈M λdBh(x).
Proof: Put Cx = Bh(x)+x, and choose x1, x2, . . . ∈K recursively, as follows.
Once x1, . . . , xj−1 have been selected, choose xj ∈K with the largest possible h(x), such that Cxi ∩Cxj = ∅for all i < j. The construction terminates when no such xj exists. Put M = {xi}, and note that the sets Cx with x ∈M are disjoint. Now fix any y ∈K. By the construction of M, we have Cx ∩Cy ̸= ∅ for some x ∈M with h(x) ≥h(y), and so y ∈Bh(x) −Bh(y) + x ⊂Bh(x) −Bh(x) + x.
Hence, K ⊂ x∈M(Bh(x) −Bh(x) + x), and so by Lemma A6.5 (i), λdK ≤Σ x∈M λd Bh(x) −Bh(x) ≤ 2d d Σ x∈M λdBh(x).
2 Next we prove a multi-variate version of Lemma 25.11, stated for conve-nience in terms of random measures. For motivation, we note that the set function ηB = B f(Tsξ) ds in Theorem 25.14 is a stationary random measure on Rd, and that the intensity m of η, defined by the relation E η = mλd, is equal to Ef(ξ).
Lemma 25.16 (maximum inequality) Let ξ be a stationary random measure on Rd with intensity m, and let B1 ⊂B2 ⊂· · · be bounded, convex sets in Bd with λdB1 > 0. Then r P sup k ξBk λdBk > r ≤m 2d d , r > 0.
Proof: Fix any r, a > 0 and n ∈N, and define a process ν on Rd and a random set K in Sa = {x ∈Rd; |x| ≤a} by ν(x) = inf k ∈N; ξ(Bk + x) > r λdBk , x ∈Rd, K = x ∈Sa; ν(x) ≤n .
By Lemma 25.15 we may choose a finite, random subset M ⊂K, such that the sets Bν(x) + x, x ∈M, are disjoint and λdK ≤ 2d d Σx∈M λdBν(x). Writing b = sup{|x|; x ∈Bn}, we get 568 Foundations of Modern Probability ξSa+b ≥ x∈M ξ Bν(x) + x ≥r x∈M λdBν(x) ≥r 2d d −1 λdK.
Taking expectations and using Fubini’s theorem and the stationarity and mea-surability of ν, we obtain m 2d d λdSa+b ≥rEλdK = r Sa P ν(x) ≤n dx = rλdSa P max k≤n ξBk λdBk > r .
Now divide by λdSa, and then let a →∞and n →∞in this order.
2 We finally need an elementary Hilbert-space result. By a contraction on a Hilbert space H we mean a linear operator T, such that ∥Tx∥≤∥x∥for all x ∈H. For any linear subspace M ⊂H, write M ⊥for the orthogonal complement and ¯ M for the closure of M. The adjoint T ∗of an operator T is determined by the identity ⟨x, Ty⟩= ⟨T ∗x, y⟩, where ⟨·, ·⟩denotes the inner product in H.
Lemma 25.17 (invariant subspace) Let T be a set of contraction operators on a Hilbert space H, and write N for the T -invariant subspace of H. Then N ⊥⊂¯ R, R = span x −Tx; x ∈H, T ∈T .
Proof: If x ⊥R, ⟨x −T ∗x, y⟩= ⟨x, y −Ty⟩= 0, T ∈T , y ∈H, which implies T ∗x = x for every T ∈T . Hence, for any T ∈T , we have ⟨Tx, x⟩= ⟨x, T ∗x⟩= ∥x∥2, and so by contraction 0 ≤∥Tx −x∥2 = ∥Tx∥2 + ∥x∥2 −2⟨Tx, x⟩ ≤2∥x∥2 −2∥x∥2 = 0, which implies Tx = x. This gives R⊥⊂N, and so N ⊥⊂(R⊥)⊥= ¯ R.
2 Proof of Theorem 25.14: First let f ∈L1, and define ˜ Tsf = f ◦Ts, An = (λdBn)−1 Bn ˜ Tsds.
For any ε > 0, Lemma 25.17 yields a measurable decomposition f = f ε + Σ k≤m gε k −˜ Tskgε k + hε, 25. Stationary Processes and Ergodic Theory 569 where f ε ∈L2 is ˜ Ts-invariant for all s ∈Rd, the functions gε 1, . . . , gε m are bounded, and E|hε(ξ)| < ε. Here clearly Anf ε ≡f ε. Using Lemma A6.5 (ii), we get as n →∞for fixed k ≤m and ε > 0 An gε k −˜ Tskgε k ≤ λd (Bn + sk) △Bn λdBn ∥gε k∥ ≤2 1 + |sk|/r(Bn) d −1 ∥gε k∥→0.
Finally, Lemma 25.16 gives r P supn An|hε(ξ)| ≥r ≤ 2d d E|hε(ξ)| ≤ 2d d ε, r, ε > 0, and so supn An|hε(ξ)| P →0 as ε →0. In particular, lim infn Anf(ξ) < ∞a.s., which ensures that lim sup n→∞−lim inf n→∞ Anf(ξ) = lim sup n→∞−lim inf n→∞ Anhε(ξ) ≤2 supnAn|hε(ξ)| P →0.
Thus, the left-hand side vanishes a.s., and the required a.s. convergence follows.
When f ∈Lp for a p ≥1, the asserted Lp-convergence follows as before, by the uniform integrability of the powers |Anf(ξ)|p. We may now identify the limit, as in the proof of Corollary 25.9, and the a.s. convergence extends to arbitrary f ≥0, as in case of Theorem 25.6.
2 The Lp-version of Theorem 25.14 remains valid under weaker conditions.
For a simple extension, say that the distributions μn on Rd are asymptotically invariant if ∥μn −μn ∗δs∥→0 for every s ∈Rd, where ∥· ∥denotes the total variation norm. Note that the conclusion of Theorem 25.14 can be written as μnX →¯ X a.s., where μn = 1Bnλd λdBn , Xs = f(Ts ξ), ¯ X = E f(ξ) Iξ .
Corollary 25.18 (mean ergodic theorem) Let X be a stationary, measurable, and Lp-valued process on Rd, where p ≥1. Then for any asymptotically invari-ant distributions μn on Rd, μnX →¯ X ≡E Xs IX in Lp.
Proof: By Theorem 25.14 we may choose some distributions νm on Rd with νmX →¯ X in Lp. Using Minkowski’s inequality and its extension in Corollary 1.32, along with the stationarity of X, the invariance of ¯ X, and dominated convergence, we get as n →∞and then m →∞ μnX −¯ X p ≤ μnX −(μn ∗νm)X p + (μn ∗νm)X −¯ X p ≤ μn −μn ∗νm ∥X∥p + (δs ∗νm)X −¯ X p μn(ds) ≤∥X∥p μn −μn ∗δt νm(dt) + νmX −¯ X p →0.
2 570 Foundations of Modern Probability We turn to a sub-additive version of Theorem 25.6. For motivation and subsequent needs, we begin with a simple result for non-random sequences. A sequence c1, c2, . . . ∈R is said to be sub-additive, if cm+n ≤cm + cn for all m, n ∈N.
Lemma 25.19 (sub-additivity) For any sub-additive sequence c1, c2, . . . ∈R, lim n→∞ cn n = inf n cn n ∈[−∞, ∞).
Proof: Iterating the sub-additivity relation, we get for any k, n ∈N cn ≤[n/k]ck + cn−k[n/k] ≤[n/k]ck + c0 ∨· · · ∨ck−1, where c0 = 0. Noting that [n/k] ∼n/k as n →∞, we get lim supn(cn/n) ≤ ck/k for all k, and so inf n cn n ≤lim inf n→∞ cn n ≤lim sup n→∞ cn n ≤inf n cn n .
2 For two-dimensional arrays cjk, 0 ≤j < k, sub-additivity is defined by c0,n ≤c0,m + cm,n for all m < n, which reduces to the one-dimensional no-tion when cjk = ck−j for some sequence ck. Note that sub-additivity holds automatically for arrays of the form cjk = aj+1 + · · · + ak.
We now extend the ergodic theorem to sub-additive arrays of random vari-ables ξjk, 0 ≤j < k.
For motivation, recall from Theorem 25.6 that, if ξjk = ηj+1 + · · · + ηk for a stationary sequence of integrable random variables ηk, then ξ0,n/n converges a.s. and in L1. A similar result holds for general sub-additive arrays (ξjk), stationary under simultaneous shifts in the two indices, so that (ξj+1, k+1) d = (ξj,k). To allow a wider range of applications, we introduce the slightly weaker hypotheses ξk,2k, ξ2k,3k, . . .
d = ξ0,k, ξk,2k, . . .
, k ∈N, (5) ξk,k+1, ξk,k+2, . . .
d = ξ0,1, ξ0,2, . . .
, k ∈N.
(6) For convenience, we also restate the sub-additivity condition ξ0,n ≤ξ0,m + ξm,n, 0 < m < n.
(7) Theorem 25.20 (sub-additive ergodic theorem, Kingman, Liggett) Let (ξjk) be a sub-additive array of random variables with E ξ+ 0,1 < ∞, satisfying (5)−(6).
Then (i) n−1ξ0,n →¯ ξ a.s. in −∞, ∞), where E ¯ ξ = infn(E ξ0,n/n) ≡c, (ii) the convergence in (i) extends to L1 when c > −∞, (iii) ¯ ξ is a.s. constant when the sequences in (5) are ergodic.
25. Stationary Processes and Ergodic Theory 571 Proof: Put ξ0,n = ξn for convenience.
By (6) and (7) we have Eξ+ n ≤ nEξ+ 1 < ∞.
First let c > −∞, so that the variables ξm,n are integrable.
Iterating (7) gives ξn n ≤ [n/k] Σ j =1 ξ(j−1)k, jk/n + n Σ j =k[n/k]+1 ξj−1, j/n, n, k ∈N.
(8) By (5) the sequence ξ(j−1)k, jk, j ∈N, is stationary for fixed k, and so Theorem 25.6 yields n−1Σj≤n ξ(j−1)k, jk →¯ ξk a.s. and in L1, where E ¯ ξk = E ξk. Hence, the first term in (8) tends to ¯ ξk/k a.s. and in L1. Similarly, n−1Σj≤n ξj−1, j →¯ ξ1 a.s. and in L1, and so the second term in (8) tends in the same sense to 0.
Thus, the right-hand side converges a.s. and in L1 toward ¯ ξk/k, and since k is arbitrary, we get lim sup n→∞(ξn/n) ≤infn ¯ ξn/n ≡¯ ξ < ∞a.s.
(9) The variables ξ+ n /n are uniformly integrable by Proposition 5.12, and moreover E lim sup n→∞(ξn/n) ≤E ¯ ξ ≤infn E ¯ ξn/n = infn (E ξn/n) = c.
(10) To derive a lower bound, let κn⊥ ⊥(ξjk) be uniformly distributed over {1, . . . , n} for each n, and define ζn k = ξκn, κn+k, ηn k = ξκn+k −ξκn+k−1, k ∈N.
Then by (6), (ζn 1 , ζn 2 , . . .) d = (ξ1, ξ2, . . .), n ∈N.
(11) Moreover, ηn k ≤ξκn+k−1, κn+k d = ξ1 by (6) and (7), and so the variables (ηn k)+ are uniformly integrable.
On the other hand, the sequence E ξ1, E ξ2, . . . is sub-additive, and so by Lemma 25.19 we have as n →∞ E ηn k = n−1 E ξn+k −E ξk →infn (E ξn/n) = c, k ∈N.
(12) In particular, supn E|ηn k| < ∞, which shows that the sequence η1 k, η2 k, . . . is tight for each k. Hence, by Theorems 5.30, 6.20, and 8.21, there exist some random variables ζk and ηk such that ζn 1 , ζn 2 , . . . ; ηn 1 , ηn 2 , . . .
d → ζ1, ζ2, . . . ; η1, η2, . . .
(13) along a sub-sequence. Here (ζk) d = (ξk) by (11), and so by Theorem 8.17 we may assume that ζk = ξk for each k.
The sequence η1, η2, . . . is clearly stationary, and by Lemma 5.11 it is also integrable. From (7) we get 572 Foundations of Modern Probability ηn 1 + · · · + ηn k = ξκn+k −ξκn ≤ξκn, κn+k = ζn k , and so in the limit η1 + · · · + ηk ≤ξk a.s. Hence, Theorem 25.6 yields ξn/n ≥n−1 Σ k≤n ηk →¯ η a.s. and in L1, for some ¯ η ∈L1. In particular, the variables ξ− n /n are uniformly integrable, and so the same thing is true for ξn/n. Using Lemma 5.11 and the uniform integrability of the variables (ηn k)+, together with (10) and (12), we get c = lim sup n→∞E ηn 1 ≤E η1 = E ¯ η ≤E lim inf n→∞(ξn/n) ≤E lim sup n→∞(ξn/n) ≤E ¯ ξ ≤c.
Thus, ξn/n converges a.s., and by (9) the limit equals ¯ ξ.
Furthermore, by Lemma 5.11, the convergence holds even in L1, and E ¯ ξ = c. If the sequences in (5) are ergodic, then ¯ ξn = E ξn a.s. for each n, and we get ¯ ξ = c a.s.
Now assume instead that c = −∞. Then for each r ∈Z, the truncated array ξm,n ∨r(n −m), 0 ≤m < n, satisfies the hypotheses of the theo-rem with c replaced by cr = infn(E ξr n/n) ≥r, where ξr n = ξn ∨rn. Thus, ξr n/n = (ξn/n) ∨r converges a.s. toward a random variable ¯ ξr with mean cr, and so ξn/n →infr ¯ ξr ≡¯ ξ. Finally, E ¯ ξ ≤infr cr = c = −∞by monotone convergence.
2 For an application of the last result, we derive a celebrated ergodic theorem for products of random matrices.
Theorem 25.21 (random matrices, Furstenberg & Kesten) Let Xk = (ξk ij) be a stationary sequence of random d × d matrices, such that ξk ij > 0 a.s., E log ξk ij < ∞, i, j ≤d.
Then there exists a random variable η, independent of i, j, such that as n →∞, n−1 log X1 · · · Xn ij →η a.s. and in L1, i, j ≤d.
Proof: First let i = j = 1, and define ηm,n = log Xm+1 · · · Xn 11, 0 ≤m < n.
The array (−ηm,n) is clearly sub-additive and jointly stationary, and E|η0,1| < ∞by hypothesis. Further note that X1 · · · Xn 11 ≤d n−1Π k≤n max i,j ξk ij.
Hence, 25. Stationary Processes and Ergodic Theory 573 η0,n −(n −1) log d ≤Σ k≤n log max i,j ξk ij ≤Σ k≤nΣ i,j log ξk ij , and so n−1Eη0,n ≤log d + Σ i,j E log ξ1 ij < ∞.
Thus, by Theorem 25.20 and its proof, we have η0,n/n →η a.s. and in L1 for an invariant random variable η.
To extend the convergence to arbitrary i, j ∈{1, . . . , d}, we may write for any n ∈N ξ2 i1 X3 · · · Xn 11 ξn+1 1j ≤ X2 · · · Xn+1 ij ≤ ξ1 1i ξn+2 j1 −1 X1 · · · Xn+2 11.
Noting that n−1 log ξn ij →0 a.s. and in L1 by Theorem 25.6, and using the stationarity of (Xn) and the invariance of ξ, we obtain n−1 log (X2 · · · Xn+1)ij →η a.s. and in L1. The desired convergence now follows by stationarity.
2 We turn to the decomposition of an invariant distribution into ergodic com-ponents. For motivation, consider the setting of Theorem 25.6 or 25.14, and let S be Borel, to ensure the existence of regular conditional distributions.
Writing η = L(ξ | Iξ), we get L(ξ) = EL(ξ | Iξ) = Eη = m P{η ∈dm}.
(14) Furthermore, ηI = P ξ ∈I Iξ = 1{ξ ∈I} a.s., I ∈I, and so ηI = 0 or 1 a.s. for any fixed I ∈I. If we can choose the exceptional null set to be independent of I, then η is a.s. ergodic, and (14) gives the desired ergodic decomposition of μ = L(ξ). Though the suggested statement is indeed true, its proof requires a different approach.
Proposition 25.22 (ergodicity by conditioning, Farrell, Varadarajan) Let ξ be a random element in a Borel space S with distribution μ, and consider a measurable group G of μ-preserving maps Ts on S, s ∈Rd, with invariant σ-field I. Then η = L(ξ | Iξ) is a.s. invariant and ergodic under G.
For the proof, consider an increasing sequence of convex sets Bn ∈Bd with r(Bn) →∞, and introduce on S the probability kernels μn(x, A) = (λdBn)−1 Bn 1A(Tsx) ds, x ∈S, A ∈S, (15) along with the empirical distributions ηn = μn(ξ, ·). By Theorem 25.14, we have ηnf →ηf a.s. for every bounded, measurable function f on S, where η = L(ξ | Iξ). Say that a class C ⊂S is measure-determining, if every distribution on S is uniquely determined by its values on C.
574 Foundations of Modern Probability Lemma 25.23 (degenerate limit) Let A1, A2, . . . ∈S be measure-determining with empirical distributions ηn = μn(ξ, ·), where the μn are given by (15). Then ξ is ergodic whenever ηnAk →P{ξ ∈Ak} a.s., k ∈N.
Proof: By Theorem 25.14, we have ηnA →ηA ≡P( ξ ∈A | Iξ) a.s. for every A ∈S, and so by comparison ηAk = P{ξ ∈Ak} a.s. for all k. Since the Ak are measure-determining, we get η = L(ξ) a.s. Hence, for any I ∈I we have a.s.
P{ξ ∈I} = ηI = P ξ ∈I Iξ = 1I(ξ) ∈{0, 1}, which implies P{ξ ∈I} = 0 or 1.
2 Proof of Proposition 25.22: By the stationarity of ξ, we have for any A ∈S and s ∈Rd η ◦T −1 s A = P Tsξ ∈A Iξ = P ξ ∈A Iξ = ηA a.s.
Since S is Borel, we obtain η ◦T −1 s = η a.s. for every s. Now put C = [0, 1]d, and define ¯ η = C(η ◦T −1 s ) ds. Since η is a.s. invariant under shifts in Zd, the variable ¯ η is a.s. invariant under arbitrary shifts. Furthermore, Fubini’s theorem yields λd s ∈[0, 1]d; η ◦T −1 s = η = 1 a.s., and therefore ¯ η = η a.s. Thus, η is a.s. G-invariant.
Now choose a measure-determining sequence A1, A2, . . . ∈S, which exists since S is Borel. Noting that ηnAk →ηAk a.s. for every k by Theorem 25.14, we get by Theorem 8.5 η ∩k x ∈S; μn(x, Ak) →ηAk = P Iξ ∩k ηnAk →ηAk = 1 a.s.
Since η is a.s. a G-invariant distribution on S, Lemma 25.23 applies for every ω ∈Ω outside a P-null set, and we conclude that η is a.s. ergodic.
2 In (14) we saw that, for stationary ξ, the distribution μ = L(ξ) is a mixture of invariant, ergodic probability measures. We show that this representation is unique6 and characterizes the ergodic measures as extreme points in the convex set of invariant measures.
To explain the terminology, we say that a subset M of a linear space is convex if c m1 + (1 −c) m2 ∈M for all m1, m2 ∈M and c ∈(0, 1). In that case, an element m ∈M is extreme if for any m1, m2, c as above, the relation m = c m1 + (1 −c) m2 implies m1 = m2 = m. With any set of measures μ on a measurable space (S, S), we associate the σ-field generated by all evaluation maps πB : μ →μB, B ∈S.
6Under stronger conditions, a more explicit representation was obtained in Theorem 3.12.
25. Stationary Processes and Ergodic Theory 575 Theorem 25.24 (ergodic decomposition, Krylov & Bogolioubov) Let G = {Ts; s ∈Rd} be a measurable group of transformations on a Borel space S.
Then (i) the G-invariant distributions on S form a convex set M, whose extreme points are precisely the ergodic measures in M, (ii) every measure μ ∈M has a unique representation μ = m ν(dm), with ν restricted to the set of ergodic measures in M.
Proof: The set M is clearly convex, and Proposition 25.22 yields for every μ ∈M a representation μ = m ν(dm), where ν is a probability measure on the set of ergodic measures in M. To see that ν is unique, we introduce a regular conditional distribution η = μ( · | I) a.s. μ on S, and note that μnA →ηA a.s.
μ for all A ∈S by Theorem 25.14. Thus, for any A1, A2, . . . ∈S, we have m ∩k x ∈S; μn(x, Ak) →η(x, Ak) = 1 a.e. ν.
The same relation holds with η(x, Ak) replaced by mAk, since ν is restricted to the class of ergodic measures in M. Assuming the sets Ak to be measure-determining, we conclude that m{x; η(x, ·) = m} = 1 a.e. ν. Hence, for any measurable set A ⊂M, μ{η ∈A} = m{η ∈A} ν(dm) = 1A(m) ν(dm) = νA, which shows that ν = μ ◦η−1.
To prove the equivalence of ergodicity and extremality, fix any measure μ ∈M with ergodic decomposition m ν(dm). First let μ be extreme. If it is not ergodic, then ν is non-degenerate, and we have ν = c ν1+(1−c) ν2 for some ν1⊥ν2 and c ∈(0, 1). Since μ is extreme, we obtain m ν1(dm) = m ν2(dm), and so ν1 = ν2 by the uniqueness of the decomposition. The contradiction shows that μ is ergodic.
Next let μ be ergodic, so that ν = δμ, and let μ = c μ1 + (1 −c) μ2 with μ1, μ2 ∈M and c ∈(0, 1). If μi = m νi(dm) for i = 1, 2, then δμ = c ν1 + (1 −c) ν2 by the uniqueness of the decomposition. Hence, ν1 = ν2 = δμ, and so μ1 = μ2, which shows that μ is extreme.
2 We turn to some powerful coupling results, needed for the ergodic theory of Markov processes in Chapter 26 and for our discussion of Palm distributions in Chapter 31. First we consider pairs of measurable processes on R+, with values in an arbitrary measurable space S. In the associated path space, we introduce the invariant σ-field I and tail σ-field T = t Tt with Tt = σ(θt), where clearly I ⊂T . For any signed measure ν, write ∥ν∥A for the total variation of ν on the σ-field A.
576 Foundations of Modern Probability Theorem 25.25 (coupling on R+, Goldstein, Berbee, Aldous & Thorisson) Let X, Y be S-valued, product-measurable processes on R+. Then these condi-tions are equivalent: (i) X d = Y on T , (ii) (σ, θσX) d = (τ, θτY ) for some random times σ, τ ≥0, (iii) L(θtX) −L(θtY ) →0 as t →∞.
So are the conditions: (i′) X d = Y on I, (ii′) θσX d = θτY for some random times σ, τ ≥0, (iii′) 1 0 {L(θstX) −L(θstY )} ds →0 as t →∞.
When the path space is Borel, we may choose ˜ Y d = Y such that (ii) and (ii′) become, respectively, θτX = θτ ˜ Y , θσX = θτ ˜ Y a.s.
Proof for (i)−(iii): Let μ1, μ2 be the distributions of X, Y , and assume μ1 = μ2 on T .
Write U = SR+, and define a mapping p on R+ × U by p(s, x) = (s, θsx). Let C be the class of pairs (ν1, ν2) of measures on R+ × U with ν1 ◦p−1 = ν2 ◦p−1, ¯ ν1 ≤μ1, ¯ ν2 ≤μ2, (16) where ¯ νi = νi(R+ ×·), and regard C as partially ordered under component-wise inequality. Since by Corollary 1.17 every linearly ordered subset has an upper bound in C, Zorn’s lemma7 yields a maximal pair (ν1, ν2).
To see that ¯ ν1 = μ1 and ¯ ν2 = μ2, define μ′ i = μi −¯ νi, and conclude from the equality in (16) that μ′ 1 −μ′ 2 T = ¯ ν1 −¯ ν2 T ≤ ¯ ν1 −¯ ν2 Tn ≤2 ν1 (n, ∞) × U →0, (17) which implies μ′ 1 = μ′ 2 on T . Next, Corollary 2.13 yields some measures μn i ≤μ′ i satisfying μn 1 = μn 2 = μ′ 1 ∧μ′ 2 on Tn, n ∈N.
Writing νn i = δn ⊗μn i , we get ¯ νn i ≤μ′ i and νn 1 ◦p−1 = νn 2 ◦p−1, and so (ν1 + νn 1 , ν2 + νn 2 ) ∈C. Since (ν1, ν2) is maximal, we obtain νn 1 = νn 2 = 0, and so Corollary 2.9 gives μ′ 1 ⊥μ′ 2 on Tn for all n. Thus, μ′ 1An = μ′ 2Ac n = 0 for some sets An ∈Tn. Then also μ′ 1A = μ′ 2Ac = 0, where A = lim supn An ∈T . Since the μ′ i agree on T , we obtain μ′ 1 = μ′ 2 = 0, which means that ¯ νi = μi. Hence, Theorem 8.17 yields some random variables σ, τ ≥0, such that the pairs (σ, X) and (τ, Y ) have distributions ν1, ν2, and the desired coupling follows from the equality in (16).
7We don’t hesitate to use the Axiom of Choice, whenever it leads to short and transparent proofs.
25. Stationary Processes and Ergodic Theory 577 The remaining claims are easy. Thus, the relation (σ, θσX) d = (τ, θτY ) im-plies ∥μ1 −μ2∥Tn →0 as in (17), and the latter condition yields μ1 = μ2 on T . When the path space is Borel, the asserted a.s. coupling follows from the distributional version by Theorem 8.17.
2 To avoid repetitions, we postpone the proof for (i′)−(iii′) until after the proof of the next theorem, involving groups G of transformations on an arbi-trary measurable space (S, S).
Theorem 25.26 (group coupling, Thorisson) Let G be an lcscH group, acting measurably on a space S with G-invariant σ-field I. Then for any random elements ξ, η in S, these conditions are equivalent: (i) ξ d = η on I, (ii) γ ξ d = η for a random element γ in G.
Proof (OK): Let μ1, μ2 be the distributions of ξ, η. Define p : G × S →S by p(r, s) = rs, and let C be the class of pairs (ν1, ν2) of measures on G × S satisfying (16) with ¯ νi = νi(G × ·). As before, Zorn’s lemma yields a maximal element (ν1, ν2), and we claim that ¯ νi = μi for i = 1, 2.
To see this, let λ be a right-invariant Haar measure on G, which exists by Theorem 3.8. Since λ is σ-finite, we may choose a probability measure ˜ λ ∼λ, and define μ′ i = μi −¯ νi, χi = ˜ λ ⊗μ′ i, i = 1, 2.
By Corollary 2.13, there exist some measures ν′ i ≤χi satisfying ν′ 1 ◦p−1 = ν′ 2 ◦p−1 = χ1 ◦p−1 ∧χ2 ◦p−1.
Then ¯ ν′ i ≤μ′ i for i = 1, 2, and so (ν1 + ν′ 1, ν2 + ν′ 2) ∈C. The maximality of (ν1, ν2) yields ν′ 1 = ν′ 2 = 0, and so χ1 ◦p−1 ⊥χ2 ◦p−1 by Corollary 2.9. Thus, there exists a set A1 = Ac 2 ∈S with χi ◦p−1Ai = 0 for i = 1, 2. Since λ ≪˜ λ, Fubini’s theorem gives S μ′ i(ds) G 1Ai(gs) λ(dg) = (λ ⊗μ′ i) ◦p−1Ai = 0.
(18) By the right invariance of λ, the inner integral on the left is G-invariant and therefore I -measurable in s ∈S. Since also μ′ 1 = μ′ 2 on I by (16), equation (18) remains true with Ai replaced by Ac i. Adding the two versions gives λ⊗μ′ i = 0, and so μ′ i = 0. Thus, ¯ νi = μi for i = 1, 2.
Since G is Borel, Theorem 8.17 yields some random elements σ, τ in G, such that (σ, ξ) and (τ, η) have distributions ν1, ν2. By (16) we get σξ d = τη, and so the same theorem yields a random element ˜ τ in G with (˜ τ, σξ) d = (τ, τη). But then ˜ τ −1σξ d = τ −1τη = η, which proves the desired relation with γ = ˜ τ −1σ. 2 578 Foundations of Modern Probability Proof of Theorem 25.25 (i′)−(iii′): In the last proof, replace S by the path space U = SR+, G by the semi-group of shifts θt, t ≥0, and λ by Lebesgue measure on R+. Assuming X d = Y on I, we may proceed as before up to equation (18), which now takes the form U μ′ i(dx) ∞ 0 1Ai(θtx) dt = (λ ⊗μ′ i) ◦p−1Ai = 0.
(19) Writing fi(x) for the inner integral on the left, we note that for any h > 0, fi(θhx) = ∞ h 1Ai(θtx) dt = ∞ 0 1θ−1 h Ai(θtx) dt.
(20) Hence, (19) remains true with Ai replaced by θ−1 h Ai, and then also for the θ1-invariant sets ˜ A1 = lim sup n→∞ θ−1 n A1, ˜ A2 = lim inf n→∞θ−1 n A2, where n →∞along N. Since ˜ A1 = ˜ Ac 2, we may henceforth take the Ai in (19) to be θ1-invariant. Then so are the functions fi in view of (20). By the monotonicity of fi ◦θh, the fi are then θh-invariant for all h > 0 and therefore I -measurable. Arguing as before, we see that θσX d = θτY for some random variables σ, τ ≥0. The remaining assertions are again routine.
2 We turn to a striking relationship between the sample intensity ¯ ξ = E ξ[0, 1] Iξ of a stationary, a.s. singular random measure ξ on R+ and the corresponding maximum over increasing intervals. It may be compared with the more general but less precise maximum inequalities in Proposition 25.10 and Lemmas 25.11 and 25.16.
For the needs of certain applications, we also consider random measures ξ on [0, 1). Here ¯ ξ = ξ[0, 1) by definition, and stationarity is defined in the cyclic sense of shifts θt on [0, 1), where θts = s + t (mod 1), and correspondingly for sets and measures. Recall that ξ is said to be singular if its absolutely continuous component vanishes, which holds in particular for purely atomic measures ξ.
To justify the statement, we note that the singularity of a measure μ is a measurable property. Indeed, by Proposition 2.24, it is equivalent that the function Ft = μ[0, t] be singular. Now it is easy to check that the singularity of F can be described by countably many conditions, each involving the incre-ments of F over finitely many intervals with rational endpoints.
Theorem 25.27 (ballot theorem, Whitworth, Tak´ acs, OK) Let ξ be a sta-tionary, a.s. singular random measure on R+ or [0, 1). Then there exists a U(0, 1) random variable σ⊥ ⊥Iξ, such that σ sup t>0 t−1ξ[0, t] = ¯ ξ a.s.
(21) 25. Stationary Processes and Ergodic Theory 579 Proof: If ξ is stationary on [0, 1), the periodic continuation η = Σ n≤0 θnξ is clearly stationary on R+, and moreover Iη = Iξ and ¯ η = ¯ ξ. Furthermore, the elementary inequality x1 + · · · + xn t1 + · · · + tn ≤max k≤n xk tk , n ∈N, valid for arbitrary x1, x2, . . . ≥0 and t1, t2, . . . > 0, shows that supt t−1η[0, t] = supt t−1ξ[0, t]. Thus, it suffices to consider random measures on R+.
Then put Xt = ξ(0, t], and define At = inf s≥t (s −Xs), αt = 1 At = t −Xt , t ≥0.
(22) Noting that At ≤t−Xt and using the monotonicity of X, we get for any s < t As = inf r∈[s,t)(r −Xr) ∧At ≥(s −Xt) ∧At ≥(s −t + At) ∧At = s −t + At.
If A0 is finite, then so is At for every t, and we get by subtraction 0 ≤At −As ≤t −s on {A0 > −∞}, s < t.
(23) Thus, A is non-decreasing and absolutely continuous on {A0 > −∞}.
Now fix a singular path of X such that A0 is finite, and let t ≥0 be such that At < t −Xt. Then At + Xt± < t by monotonicity, and so by the left and right continuity of A and X, there exists an ε > 0 such that As + Xs < s −2ε, |s −t| < ε.
Then (23) yields s −Xs > As + 2ε > At + ε, |s −t| < ε, and by (22) it follows that As = At for |s −t| < ε.
In particular, A has derivative A′ t = 0 = αt at t.
On the complementary set D = {t ≥0; At = t −Xt}, Theorem 2.15 shows that both A and X are differentiable a.e., the latter with derivative 0, and we may form a set D′ by excluding the corresponding null sets. We may also exclude the countable set of isolated points of D. Then for any t ∈D′ we may choose some tn →t in D \ {t}. By the definition of D, Atn −At tn −t = 1 −Xtn −Xt tn −t , n ∈N, and as n →∞we get A′ t = 1 = αt.
Combining this with the result in the previous case, we see that A′ = α a.e. Since A is absolutely continuous, Theorem 2.15 yields 580 Foundations of Modern Probability At −A0 = t 0 αsds on {A0 > −∞}, t ≥0.
(24) Now Xt/t →¯ ξ a.s. as t →∞, by Corollary 30.10 below. When ¯ ξ < 1, (22) gives −∞< At/t →1 −¯ ξ a.s. Furthermore, At + Xt −t = inf s≥t (s −t) −(Xs −Xt) = inf s≥0 s −θt ξ(0, s] , and hence αt = 1 inf s≥0 s −θt ξ(0, s] = 0 , t ≥0.
Dividing (24) by t and using Corollary 25.9, we get a.s. on {¯ ξ < 1} P sup t>0 (Xt/t) ≤1 Iξ = P sup t≥0 (Xt −t) = 0 Iξ = P A0 = 0 Iξ = E α0 Iξ = 1 −¯ ξ.
Replacing ξ by rξ and taking complements, we obtain more generally P r sup t>0 (Xt/t) > 1 Iξ = r¯ ξ ∧1 a.s., r ≥0, (25) where the result for r¯ ξ ∈[1, ∞) follows by monotonicity.
When ¯ ξ ∈(0, ∞), we may define σ by (21). If instead ¯ ξ = 0 or ∞, we take σ = ϑ, where ϑ⊥ ⊥ξ is U(0, 1). In the latter case, (21) clearly remains true, since ξ = 0 a.s. on {¯ ξ = 0} and Xt/t →∞a.s. on {¯ ξ = ∞}.
To verify the distributional claim, we conclude from (25) and Theorem 8.5 that, on {¯ ξ ∈(0, ∞)}, P σ < r Iξ = P r sup t>0 (Xt/t) > ¯ ξ Iξ = r ∧1 a.s., r ≥0.
Since the same relation holds trivially when ¯ ξ = 0 or ∞, σ is conditionally U(0, 1) given Iξ, hence unconditionally U(0, 1) and independent of Iξ.
2 The last theorem yields a similar result in discrete time. Here (21) holds only with inequality and will be supplemented by an equality similar to (25).
For a stationary sequence ξ = (ξ1, ξ2, . . .) in R+ with invariant σ-field Iξ, we define ¯ ξ = E(ξ1| Iξ) a.s. On {1, . . . , n} we define stationarity in the obvious way in terms of addition modulo n, and put ¯ ξ = n−1Σ k ξk.
Corollary 25.28 (discrete-time ballot theorem) Let ξ = (ξ1, ξ2, . . .) be a finite or infinite, stationary random sequence in R+, and put Sn = Σ k≤n ξk. Then (i) there exists a U(0, 1) random variable σ⊥ ⊥Iξ, such that σ sup n>0 (Sn/n) ≤¯ ξ a.s., 25. Stationary Processes and Ergodic Theory 581 (ii) when the ξk are Z+-valued, we have also P sup n>0 (Sn −n) ≥0 Iξ = ¯ ξ ∧1 a.s.
Proof: (i) Arguing by periodic continuation, as before, we may reduce to the case of infinite sequences ξ. Now let ϑ⊥ ⊥ξ be U(0, 1), and define Xt = S[t+ϑ].
Then X has stationary increments, and we note that IX = Iξ and ¯ X = ¯ ξ. By Theorem 25.27 there exists a U(0, 1) random variable σ⊥ ⊥IX, such that a.s.
sup n>0 (Sn/n) = sup t>0 (S[t]/t) ≤sup t>0 (Xt/t) = ¯ ξ/σ.
(ii) If the ξk are Z+-valued, the same result yields a.s.
P sup n>0 (Sn −n) ≥0 Iξ = P sup t≥0 (Xt −t) > 0 Iξ = P sup t>0 (Xt/t) > 1 Iξ = P ¯ ξ > σ Iξ = ¯ ξ ∧1.
2 To state the next result, let ξ be a random element in a countable space S, and put pj = P{ξ = j}. For any σ-field F, we define the information I(j) and conditional information I(j |F) by I(j) = −log pj, I(j | F) = −log P ξ = j F , j ∈S.
For motivation, we note the additivity I ξ1, . . . , ξn = I(ξ1) + I(ξ2| ξ1) + · · · + I ξn ξ1, . . . , ξn−1 , (26) valid for any random elements ξ1, . . . , ξn in S. Next, we form the associated entropy H(ξ) = E I(ξ) and conditional entropy H(ξ|F) = E I(ξ|F), and note that H(ξ) = E I(ξ) = −Σ j pj log pj.
By (26), even H is additive, in the sense that H ξ1, . . . , ξn = H(ξ1) + H(ξ2| ξ1) + · · · + H ξn ξ1, . . . , ξn−1 .
(27) When the sequence (ξn) is stationary and ergodic with H(ξ0) < ∞, we show that the averages in (26) and (27) converge toward a common limit.
Theorem 25.29 (entropy and information, Shannon, McMillan, Breiman, Ionescu Tulcea) Let ξ = (ξk) be a stationary, ergodic sequence in a count-able space S, such that H(ξ0)< ∞. Then n−1I ξ1, . . . , ξn →H ξ0 ξ−1, ξ−2, . . .
a.s. and in L1.
582 Foundations of Modern Probability Note that H(ξ0) < ∞holds automatically when the state space is finite.
Our proof will be based on a technical estimate.
Lemma 25.30 (maximum inequality, Chung, Neveu) For any countably val-ued random variable ξ and discrete filtration (Fn), we have E supn I(ξ | Fn) ≤H(ξ) + 1.
Proof: Write pj = P{ξ = j} and η = supn I(ξ |Fn). For fixed r > 0, we introduce the optional times τj = inf n; I(j |Fn) > r = inf n; P ξ = j Fn < e−r , j ∈S.
By Lemma 8.3, P η > r, ξ = j = P τj < ∞, ξ = j = E P ξ = j Fτj ; τj < ∞ ≤e−rP{τj < ∞} ≤e−r.
Since the left-hand side is also bounded by pj, Lemma 4.4 yields E η = Σ j E η; ξ = j = Σ j ∞ 0 P η > r, ξ = j dr ≤Σ j ∞ 0 e−r ∧pj dr = Σ j pj 1 −log pj = H(ξ) + 1.
2 Proof of Theorem 25.29 (Breiman): We may choose ξ to be defined on the canonical space S∞. Then introduce the functions gk(ξ) = I ξ0 ξ−1, . . . , ξ−k+1 , g(ξ) = I ξ0 ξ−1, ξ−2, . . .
.
By (26), we may write the assertion as n−1 Σ k≤n gk(θkξ) →E g(ξ) a.s. and in L1.
(28) Here gk(ξ) →g(ξ) a.s. by martingale convergence, and E supk gk(ξ) < ∞by Lemma 25.30. Hence, (28) follows by Corollary 25.8.
2 25. Stationary Processes and Ergodic Theory 583 Exercises 1. State and prove some continuous-time, two-sided, and higher-dimensional ver-sions of Lemma 25.1.
2. For a stationary random sequence ξ = (ξ1, ξ2, . . .), show that the ξn are i.i.d.
iffξ1⊥ ⊥(ξ2, ξ3, . . .).
3. For a Borel space S, let X be a stationary array of random elements in S indexed by Nd. Prove the existence of a stationary array Y indexed by Zd such that X = Y a.s. on Nd.
4. Let X be a stationary process on R+ with values in a Borel space S. Prove the existence of a stationary process Y on R with X d = Y on R+. Strengthen this to an a.s. equality, when S is a complete metric space and X is right-continuous.
5.
Let ξ be a two-sided, stationary random sequence with restriction η to N.
Show that ξ, η are simultaneously ergodic. (Hint: For any measurable, invariant set I ∈SZ, there exists a measurable, invariant set I′ ∈SN with I = SZ−×I′ a.s. L(ξ).) 6. Establish two-sided and higher-dimensional versions of Lemmas 25.4 and 25.5, as well as of Theorem 25.9.
7. Say that a measure-preserving transformation T on a probability space (S, S, μ) is mixing if μ(A∩T −nB) →μA·μB for all A, B ∈S. Prove a counterpart of Lemma 25.5 for mixing. Also show that mixing transformations are ergodic. (Hint: For the latter assertion, take A = B to be invariant.) 8. Show that it suffices to verify the mixing property for sets in a generating π -system. Using this, prove that i.i.d. sequences are mixing under shifts.
9. For any a ∈R, define Ts = s + a (mod 1) on [0, 1]. Show that T fails to be mixing but is ergodic iffa ̸∈Q.
(Hint: For the ergodicity, let I ⊂[0, 1] be T-invariant. Then so is the measure 1Iλ. Since the points ka are dense in [0, 1], it follows that 1Iλ is invariant. Now use Theorem 2.6.) 10.
(Bohl, Sierpi´ nski, Weyl) For any a ̸∈Q, put μn = n−1Σ k≤nδka, where ka ∈[0, 1] is defined modulo 1. Show that μn w →λ. (Hint: Apply Theorem 25.6 to the map in the previous exercise.) 11. Show that the transformation Ts = 2s (mod 1) on [0, 1] is mixing. Also show how the map of Lemma 4.20 can be generated as in Lemma 25.1 by means of T.
12. Note that Theorem 25.6 remains true for invertible shifts T, with averages taken over increasing index sets [an, bn] with bn−an →∞. Show by an example that the a.s. convergence may fail without the monotonicity. (Hint: Consider an i.i.d.
sequence (ξn) and disjoint intervals [an, bn], and use the Borel–Cantelli lemma.) 13. Consider a one- or two-sided, stationary random sequence (ξn) in a measurable space (S, S), and fix any B ∈S. Show that a.s. either ξn ∈Bc for all n or ξn ∈B i.o. (Hint: Use Theorem 25.6.) 14. (von Neumann) Give a direct proof of the L2-version of Theorem 25.6. (Hint: Define a unitary operator U on L2(S) by Uf = f ◦T. Let M denote the U -invariant subspace of L2 and put A = I −U. Check that M⊥= ¯ RA, the closed range of A. By Theorem 1.35 it is enough to take f ∈M or f ∈RA.) Deduce the general Lp-version, and extend the argument to higher dimensions.
584 Foundations of Modern Probability 15. In the context of Theorem 25.24, show that the set of ergodic measures in M is measurable. (Hint: Use Lemma 3.2, Proposition 5.32, and Theorem 25.14.) 16. Prove a continuous-time version of Theorem 25.24.
17. Deduce Theorem 5.23 for p ≤1 from Theorem 25.20. (Hint: Take Xm,n = |Sn −Sm|p, and note that E|Sn|p = o(n) when p < 1.) 18. Let ξ = (ξ1, ξ2, . . .) be a stationary sequence of random variables, fix any B ∈BRd, and let κn be the number of indices k ∈{1, . . . , n−d} with (ξk, . . . , ξd) ∈B.
Use Theorem 25.20 to show that κn/n converges a.s. Deduce the same result from Theorem 25.6, by considering suitable sub-sequences.
19. Strengthen the inequality in Lemma 25.7 to E ξ1; supn(Sn/n) ≥0 ≥0.
(Hint: Apply the original result to the variables ξk + ε, and let ε →0.) 20. Extend Proposition 25.10 to stationary processes on Zd.
21. Extend Theorem 25.14 to averages over arbitrary rectangles An = [0, an1] × · · · × [0, and], such that anj →∞and supn(ani/anj) < ∞for all i ̸= j. (Hint: Note that Lemma 25.16 extends to this case.) 22. Derive a version of Theorem 25.14 for stationary processes X on Zd. (Hint: By a suitable randomization, construct an associated stationary process ˜ X on Rd, apply Theorem 25.14 to ˜ X, and estimate the error term as in Corollary 30.10.) 23. Give an example of two processes X, Y on R+, such that X d = Y on I but not on T .
24. Derive a version of Theorem 25.25 for processes on Z+. Also prove versions for processes on Rd + and Zd +.
25. Show that Theorem 25.25 (ii) implies a corresponding result for processes on R. (Hint: Apply Theorem 25.25 to the processes ˜ Xt = θtX and ˜ Y = θtY .) Also show how the two-sided statement follows from Theorem 25.26.
26.
For processes X on R+, define ˜ Xt = (Xt, t), and let ˜ I be the associated invariant σ-field.
Assuming X, Y to be measurable, show that X d = Y on T iff ˜ X d = ˜ Y on ˜ I. (Hint: Use Theorem 25.25.) 27. Show by an example that the conclusion of Theorem 25.27 may fail when ξ is not singular.
28. Give an example where the inequality in Corollary 25.28 is a.s. strict. (Hint: Examine the proof.) 29. (Bertrand, Andr´ e ) Show that if two candidates A, B in an election get the proportions p and 1−p of the votes, then the probability that A will lead throughout the ballot count equals (2p −1)+. (Hint: Apply Corollary 25.28, or use a direct combinatorial argument based on the reflection principle.) 30. Prove the second claim in Corollary 25.28 by a martingale argument, when ξ1, . . . , ξn are Z+-valued and exchangeable. (Hint: We may assume that Sn is non-random. Then the variables Mk = Sk/k form a reverse martingale, and the result follows by optional sampling.) 31. Prove that the convergence in Theorem 25.29 holds in Lp for arbitrary p > 0 when S is finite. (Hint: Show as in Lemma 25.30 that ∥supn I(ξ |Fn)∥p < ∞when ξ is S -valued, and use Corollary 25.8 (ii).) 25. Stationary Processes and Ergodic Theory 585 32. Show that H(ξ, η) ≤H(ξ) + H(η) for any ξ, η. (Hint: Note that H(η | ξ) ≤ H(η) by Jensen’s inequality.) 33. Give an example of a stationary Markov chain (ξn), such that H(ξ1) > 0 but H(ξ1| ξ0) = 0.
34. Give an example of a stationary Markov chain (ξn), such that H(ξ1) = ∞ but H(ξ1| ξ0) < ∞. (Hint: Choose the state space Z+ and transition probabilities pij = 0, unless j ∈{0, i + 1}.) Chapter 26 Ergodic Properties of Markov Processes Transition and contraction operators, operator ergodic theorem, maxi-mum inequalities, ratio ergodic theorem, filling operators and function-als, ratio limit theorem, space-time invariance, strong and weak ergodic-ity and mixing, tail and invariant triviality, Harris recurrence and ergod-icity, potential and resolvent operators, convergence dichotomy, recur-rence dichotomy, invariant measures, distributional and pathwise limits The primary aim of this chapter is to study the asymptotic behavior of broad classes of Markov processes. First, we will see how the ergodic theorems of the previous chapter admit some powerful extensions to suitable contraction operators, leading to some pathwise and distributional ratio limit theorems for fairly general Markov processes. Next, we establish some basic criteria for strong or weak ergodicity of conservative Markov semi-groups, and show that Harris recurrent Feller processes are strongly ergodic. Finally, we show that any regular Feller process is either Harris recurrent or uniformly transient, examine the existence of invariant measures, and prove some associated distri-butional and pathwise limit theorems. The latter analysis relies crucially on some profound tools from potential theory.
For a more detailed description, we first extend the basic ergodic theorem of Chapter 25 to suitable contraction operators on an arbitrary measure space, and establish a general operator version of the ratio ergodic theorem. The relevance of those results for the study of Markov processes is due to the fact that their transition operators are positive L1−L∞-contractions with respect to any invariant measure λ on the state space S. The mentioned results cover both the positive-recurrent case where λS < ∞, and the null-recurrent case where λS = ∞. Even more remarkably, the same ergodic theorems apply to both the transition probabilities and the sample paths, in both cases giving conclusive information about the asymptotic behavior.
Next we prove, for an arbitrary Markov process, that a certain condition of strong ergodicity is equivalent to the triviality of the tail σ-field, the constancy of all bounded, space-time invariant functions, and a uniform mixing condition.
We also consider a similar result where all four conditions are replaced by suitably averaged versions. For both sets of equivalences, one gets very simple and transparent proofs by applying the general coupling results of Chapter 25.
In order to apply the mentioned theorems to specific Markov processes, we need to find regularity conditions ensuring the existence of an invariant 587 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 588 Foundations of Modern Probability measure or the triviality of the tail σ-field. Here we consider a general class of Feller processes satisfying either a strong recurrence or a uniform transience condition. In the former case, we prove the existence of an invariant measure, required for the application of the mentioned ergodic theorems, and show that the space-time invariant functions are constant, which implies the mentioned strong ergodicity. Our proofs of the latter results depend on some potential theoretic tools, related to those developed in Chapter 17.
To set the stage for the technical developments, we consider a Markov transition operator T on an arbitrary measurable space (S, S). Note that T is positive, in the sense that f ≥0 implies Tf ≥0, and also that T1 = 1.
As before, we write Px for the distribution of a Markov process on Z+ with transition operator T, starting at x ∈S.
More generally, we define Pμ = Px μ(dx) for any measure μ on S. A measure λ on S is said to be invariant if λ Tf = λf for any measurable function f ≥0. Writing θ for the shift on the path space S∞, we define the associated operator ˜ θ by ˜ θf = f ◦θ.
For any p ≥1, an operator T on a measure space (S, S, μ) is called an Lp-contraction, if ∥Tf∥p ≤∥f∥p for every f ∈Lp. We further say that T is an L1−L∞-contraction, if it is an Lp-contraction for every p ∈[1, ∞]. The following result shows the relevance of the mentioned notions for the theory of Markov processes.
Lemma 26.1 (transition and contraction operators) Let T be a Markov tran-sition operator on (S, S) with invariant measure λ. Then (i) T is a positive L1−L∞-contraction on (S, λ), (ii) ˜ θ is a positive L1−L∞-contraction on (S∞, Pλ).
Proof: (i) Applying Jensen’s inequality to the transition kernel μ(x, B) = T1B(x) and using the invariance of λ, we get for any p ∈[1, ∞) and f ∈Lp ∥Tf∥p p = λ|μf|p ≤λμ|f|p = λ|f|p = ∥f∥p p.
The result for p = ∞is obvious.
(ii) Proceeding as in Lemma 11.11, we see that θ is a measure-preserving transformation on (S∞, Pλ). Hence, for any measurable function f ≥0 on S∞ and constant p ≥1, we have Pλ ˜ θf p = Pλ f ◦θ p = Pλ ◦θ−1 |f|p = Pλ|f|p.
The contraction property for p = ∞is again obvious.
2 Some crucial results from Chapter 25 carry over to the context of positive L1 −L∞-contractions on an arbitrary measure space.
First we consider an 26. Ergodic Properties of Markov Processes 589 operator version of Birkhoff’s ergodic theorem. To simplify the writing, we introduce the operators Sn = Σ k 0 ≥0, f ∈L1.
If T is even an L1−L∞-contraction, then also (ii) r μ{Mf > 2r} ≤μ(f; f > r), f ∈L1, r > 0, (iii) ∥Mf∥p < ⌢∥f∥p, f ∈Lp, p > 1.
Proof: (i) For any f ∈L1, write Mnf = S1f ∨· · · ∨Snf, and conclude that by positivity Skf = f + TSk−1f ≤f + T(Mnf)+, k = 1, . . . , n.
Hence, Mnf ≤f + T(Mnf)+ for all n, and so by positivity and contractivity, μ f; Mnf > 0 ≥μ Mnf −T(Mnf)+; Mnf > 0 ≥μ (Mnf)+ −T(Mnf)+ = (Mnf)+ 1 − T(Mnf)+ 1 ≥0.
As before, it remains to let n →∞.
(ii) Put fr = f1{f > r}. By the L∞-contractivity and positivity of An, Anf −2r ≤An(f −2r) ≤An(fr −r), n ∈N, which implies Mf −2r ≤M(fr −r). Hence, by part (i), r μ{Mf > 2r} ≤r μ M(fr −r) > 0 ≤μ fr; M(fr −r) > 0 ≤μfr = μ f; f > r .
590 Foundations of Modern Probability (iii) Here the earlier proof applies with only notational changes.
2 Proof of Theorem 26.2: Fix any f ∈L1. By dominated convergence, f may be approximated in L1 by functions ˜ f ∈L1 ∩L∞⊂L2. By Lemma 25.17 we may next approximate ˜ f in L2 by functions of the form ˆ f +(g −Tg), where ˆ f, g ∈L2 and T ˆ f = ˆ f. Finally, we may approximate g in L2 by functions ˜ g ∈L1 ∩L∞. Since T contracts L2, the functions ˜ g −T ˜ g will then approximate g −Tg in L2. Combining the three approximations, we have for any ε > 0 f = fε + (gε −Tgε) + hε + kε, (1) where fε ∈L2 with Tfε = fε, gε ∈L1 ∩L∞, and ∥hε∥2 ∨∥kε∥1 < ε.
Since fε is invariant, we have Anfε ≡fε. Next, we note that An(gε −Tgε) ∞= n−1 gε −T ngε ∞ ≤2n−1 ∥gε∥∞→0.
(2) Hence, lim sup n→∞Anf ≤fε + Mhε + Mkε < ∞a.e., and similarly for lim infn Anf. Combining the two estimates gives lim sup n→∞−lim inf n→∞ Anf ≤2M|hε| + 2M|kε|.
Now Lemma 26.3 yields for any ε, r > 0 M|hε| 2 < ⌢∥hε∥2 < ε, μ M|kε| > 2r ≤r−1∥kε∥1 < ε/r, and so M|hε| + M|kε| →0 a.e. as ε →0 along a suitable sequence. Thus, Anf converges a.e. toward a limit Af.
To prove that Af is T-invariant, we see from (1) and (2) that the a.e. lim-its Ahε and Akε exist and satisfy TAf −Af = (TA −A)(hε + kε). By the contraction property and Fatou’s lemma, the right-hand side tends a.e. to 0 as ε →0 along a sequence, and we get TAf = Af a.e.
2 The limit Af in the last theorem may be 0, in which case the a.s. con-vergence Anf →Af gives little information about the asymptotic behavior of Anf. For example, this happens when μS = ∞and T is the operator induced by a μ -preserving and ergodic transformation θ on S. Then Af is a constant, and the condition Af ∈L1 implies Af = 0. To get around this difficulty, we may instead compare the asymptotic behavior of Snf with that of Sng for a suitable reference function g ∈L1. This idea leads to a far-reaching and powerful extension of Birkhoff’s theorem.
Theorem 26.4 (ratio ergodic theorem, Chacon & Ornstein) Let T be a pos-itive L1-contraction on a measure space (S, S, μ), and let f ∈L1 and g ∈L1 +.
Then Snf Sn g converges a.e. on {S∞g > 0}.
26. Ergodic Properties of Markov Processes 591 Three lemmas are needed for the proof.
Lemma 26.5 (individual terms) In the context of Theorem 26.4, T nf Sn+1 g →0 a.e. on {S∞g > 0}.
Proof: We may take f ≥0. Fix any ε > 0, and define hn = T nf −ε Sn+1 g, An = {hn > 0}, n ≥0.
By positivity, hn = Thn−1 −εg ≤Th+ n−1 −εg, n ≥1.
Examining separately the cases An and Ac n, we conclude that h+ n ≤Th+ n−1 −ε1Ang, n ≥1, and so by contractivity, ε μ(g; An) ≤μ Th+ n−1 −μh+ n ≤μh+ n−1 −μh+ n .
Summing over n gives ε μ Σ n≥1 1Ang ≤μh+ 0 = μ(f −εg) ≤μf < ∞, which implies μ(g; An i.o.) = 0 and hence lim supn(T nf/Sn+1g) ≤ε a.e. on {g > 0}. Since ε was arbitrary, we obtain T nf/Sn+1g →0 a.e. on {g > 0}.
Applying this result to the functions T mf and T mg gives the same convergence on {Sm−1g = 0 < Smg}, for arbitrary m > 1.
2 To state the next result, we introduce the non-linear filling operator F on L1, given by Fh = Th+ −h−.
We may think of the sequence F nh as arising from successive attempts to fill a hole h−, where in each step we are only mapping matter that has not yet fallen into the hole. We also define Mnh = S1h ∨· · · ∨Snh.
Lemma 26.6 (filling operator) For any h ∈L1 and n ∈N, F n−1h ≥0 on {Mnh > 0}.
Proof: Writing hk = h+ + (Fh)+ + · · · + (F kh)+, we claim that hk ≥Sk+1h, k ≥0.
(3) This is clear for k = 0 since h+ = h + h−≥h. Proceeding by induction, let (3) be true for k = m ≥0. Using the induction hypothesis and the definitions of Sk, hk, and F, we get for m + 1 592 Foundations of Modern Probability Sm+2h = h + TSm+1h ≤h + Thm = h + Σ k≤m T(F kh)+ = h + Σ k≤m F k+1h + (F kh)− = h + Σ k≤m (F k+1h)+ −(F k+1h)−+ (F kh)− = h + hm+1 −h+ + h−−(F m+1h)−≤hm+1.
This completes the proof of (3).
If Mnh > 0 at some point in S, then Skh > 0 for some k ≤n, and so by (3) we have hk > 0 for some k < n. But then (F kh)+ > 0, and therefore (F kh)−= 0 for the same k. Since (F kh)−is non-increasing, it follows that (F n−1h)−= 0, and hence F n−1h ≥0.
2 To state our third and crucial lemma, write g ∈T1(f) for a given f ∈L1 +, if there exists a decomposition f = f1+f2 with f1, f2 ∈L1 + such that g = Tf1+f2.
Note in particular that f, g ∈L1 + implies U(f −g) = f ′−g for some f ′ ∈T1(f).
The classes Tn(f) are defined recursively by Tn+1(f) = T1{Tn(f)}, and we put T (f) = n Tn(f). Now introduce the functionals ψBf = sup μ(g; B); g ∈T (f) , f ∈L1 +, B ∈S.
Lemma 26.7 (filling functionals) For any f, g ∈L1 + and B ∈S, B ⊂ lim sup n→∞Sn(f −g) > 0 ⇒ ψBf ≥ψB g.
Proof: Fix any g′ ∈T (g) and c > 1. First we show that lim sup n→∞Sn(f −g) > 0 ⊂ lim sup n→∞Sn(cf −g′) > 0 a.e.
(4) Here we may assume that g′ ∈T1(g), since the general result then follows by iteration in finitely many steps. Letting g′ = r + Ts for some r, s ∈L1 + with r + s = g, we obtain Sn(cf −g′) = Σ k 0} by Lemma 26.5, we conclude that eventually Sn(cf −g′) ≥Sn(f −g) a.e. on the same set, and (4) follows.
Combining the given hypothesis with (4), we obtain the a.e. relation B ⊂ {M(cf −g′) > 0}. Now Lemma 26.6 yields F n−1(cf −g′) ≥0 on Bn ≡B ∩ Mn(cf −g′) > 0 , n ∈N.
26. Ergodic Properties of Markov Processes 593 Since Bn ↑B a.e. and F n−1(cf −g′) = f ′ −g′ for some f ′ ∈T (cf), we get 0 ≤μ F n−1(cf −g′); Bn = μ(f ′; Bn) −μ(g′; Bn) ≤ψB(cf) −μ(g′; Bn) →c ψBf −μ(g′; B), and so c ψBf ≥μ(g′; B). It remains to let c →1 and take the supremum over g′ ∈T (g).
2 Proof of Theorem 26.4: We may take f ≥0. On {S∞g > 0}, put α = lim inf n→∞ Snf Sng ≤lim sup n→∞ Snf Sng = β, and define α = β = 0 otherwise. Since Sng is non-decreasing, we have for any c > 0 {β > c} ⊂ lim sup n→∞ Sn(f −cg) Sng > 0, S∞g > 0 ⊂ lim sup n→∞Sn(f −cg) > 0 .
Writing B = {β = ∞, S∞g > 0}, we see from Lemma 26.7 that c ψBg = ψB(cg) ≤ψBf ≤μf < ∞, and as c →∞we get ψBg = 0. But then μ(T ng; B) = 0 for all n ≥0, and therefore μ(S∞g; B) = 0. Since S∞g > 0 on B, we obtain μB = 0, which means that β < ∞a.e.
Now define C = {α < a < b < β} for fixed b > a > 0. As before, C ⊂ lim sup n→∞Sn(f −bg) ∧lim sup n→∞Sn(ag −f) > 0 , and so by Lemma 26.7, b ψCg = ψC(bg) ≤ψCf ≤ψC(ag) = a ψCg < ∞, which implies ψCg = 0, and therefore μC = 0. Hence, μ{α < β} ≤Σ a<b μ α < a < b < β = 0, where the summation extends over all rational a < b, and so α = β a.e., which proves the asserted convergence.
2 To illustrate the use of the last theorem, we consider a striking application to discrete-time Markov processes. For such a process X on S and a measurable function f on S∞, we define Snf = Σk 0 , (ii) Ex Snf Ex Sng converges a.e. λ on x ∈S; Ex S∞g > 0 .
Proof: (i) By Lemma 26.1 (ii), we may apply Theorem 26.4 to the L1−L∞-contraction ˜ θ on (S∞, Pλ) induced by the shift θ, and the result follows.
(ii) Writing ˜ f(x) = Exf(X) and using the Markov property at k, we get Exf(θkX) = ExEXkf(X) = Ex ˜ f(Xk) = T k ˜ f(x), and so ExSnf = Σ k 0 , which translates immediately into the asserted state-ment.
2 Now consider a conservative, continuous-time Markov process on an ar-bitrary state space (S, S) with distributions Px and associated expectation operators Ex. On the canonical path space Ω = SR+, we introduce the shift operators θt and filtration F = (Ft). A bounded function f : S →R is said to be invariant or harmonic, if it is measurable and such that f(x) = Ttf(x) = Exf(Xt), x ∈S, t ≥0.
More generally, we say that a bounded function f : S × R+ →R is space-time invariant or harmonic, if it is measurable and satisfies f(x, s) = Exf(Xt, s + t), x ∈S, s, t ≥0.
(5) For motivation, we note that f is then invariant for the associated space-time process ˜ Xt = (Xt, s + t) in S × R+, where the second component is 26. Ergodic Properties of Markov Processes 595 deterministic, apart from a possibly random initial value s ≥0. Note that ˜ X is again a time-homogeneous Markov process with transition operators ˜ Ttf(x, s) = Exf(Xt, s + t). We need the following useful martingale connec-tion.
Lemma 26.9 (space-time invariance) For any bounded, measurable function f : S × R+ →R, these conditions are equivalent: (i) f is space-time invariant, (ii) Mt = f(Xt, s + t) is a Pμ-martingale for every μ and s ≥0.
Proof, (i) ⇒(ii): Assume (i), and let s, t, h ≥0. Using the Markov property of X, we get Eμ Mt+h Ft = Eμ f Xt+h, s + t + h Ft = EXtf Xh, s + t + h = f(Xt, s + t) = Mt.
(ii) ⇒(i): By (ii) with μ = δx, we have for any x ∈S and s, t ≥0 Exf(Xt, s + t) = ExMt = ExM0 = f(x, s).
2 The tail σ-field on Ω is defined as T = t Tt, where Tt = σ(θt) = σ{Xs; s ≥ t}. A σ-field G on Ω is said to be Pμ-trivial, if PμA = 0 or 1 for every A ∈G.
Writing P B μ = Pμ( · |B), we say that Pμ is mixing, if lim t→∞ Pμ ◦θ−1 t −P B μ ◦θ−1 t = 0, B ∈F∞with PμB > 0.
The following properties define the notion of strong ergodicity, as opposed to the weak ergodicity in Theorem 26.11.
Theorem 26.10 (strong ergodicity and mixing, Orey) For a conservative Markov semi-group on R+ or Z+ with distributions Pμ, these conditions are equivalent: (i) the tail σ-field T is Pμ-trivial for every μ, (ii) Pμ is mixing for every μ, (iii) every bounded, space-time invariant function is a constant, (iv) Pμ ◦θ−1 t −Pν ◦θ−1 t →0 as t →∞for all μ and ν.
First proof: By Theorem 25.25 (i), conditions (ii) and (iv) are equivalent to respectively (ii′) Pμ = P B μ on T for all μ and B, (iv′) Pμ = Pν on T for all μ, ν.
596 Foundations of Modern Probability It is then enough to prove that (ii′) ⇔(i) ⇒(iv′) and (iv) ⇒(iii) ⇒(i).
(i) ⇔(ii′): If PμA = 0 or 1, then clearly also P B μ A = 0 or 1, which shows that (i) ⇒(ii′). Conversely, let A ∈T be arbitrary with PμA > 0. Taking B = A in (ii′) gives PμA = (PμA)2, which implies PμA = 1.
(i) ⇒(iv′): Applying (i) to the distribution 1 2(μ + ν) gives PμA + PνA = 0 or 2 for every A ∈T , which implies PμA = PνA = 0 or 1.
(iv) ⇒(iii): Let f be bounded and space-time invariant. Using (iv) with μ = δx and ν = δy gives f(x, s) −f(y, s) = Exf(Xt, s + t) −Eyf(Xt, s + t) ≤∥f∥ Px ◦θ−1 t −Py ◦θ−1 t →0, which shows that f(x, s) = f(s) is independent of x. Then f(s) = f(s + t) by (5), and so f is a constant.
(iii) ⇒(i): Fix any A ∈T . Since A ∈Tt = σ(θt) for every t ≥0, we have A = θ−1 t At for some sets At ∈F∞, which are unique since the θt are surjective.
For any s, t ≥0, we have θ−1 t θ−1 s As+t = θ−1 s+tAs+t = A = θ−1 t At, and so θ−1 s As+t = At. Putting f(x, t) = PxAt and using the Markov property at time s, we get Exf(Xs, s + t) = ExPXsAs+t = ExPx θ−1 s As+t Fs = PxAt = f(x, t).
Thus, f is space-time invariant and hence a constant c ∈[0, 1]. By the Markov property at t and martingale convergence as t →∞, we have a.s.
c = f(Xt, t) = PXtAt = Pμ θ−1 t At Ft = Pμ(A | Ft) →1A, which implies PμA = c ∈{0, 1}. This shows that T is Pμ-trivial.
2 Second proof: We can avoid to use the rather deep Theorem 25.25 by giving direct proofs of the implications (i) ⇒(ii) ⇒(iv).
(i) ⇒(ii): Assuming (i), we get by reverse martingale convergence Pμ(· ∩B) −Pμ(·) Pμ(B) Tt = Eμ Pμ(B | Tt) −PμB; · Tt ≤Eμ Pμ(B | Tt) −PμB →0.
26. Ergodic Properties of Markov Processes 597 (ii) ⇒(iv): Let μ′ −ν′ be the Hahn decomposition of μ −ν, and choose B ∈S with μ′Bc = νB = 0. Writing χ = μ′ + ν′ and A = {X0 ∈B}, we get by (ii) Pμ ◦θ−1 t −Pν ◦θ−1 t = ∥μ′∥ P A χ ◦θ−1 t −P Ac χ ◦θ−1 t →0.
2 The invariant σ-field I on Ω consists of all events A ⊂Ω with θ−1 t A = A for all t ≥0. Note that a random variable ξ on Ω is I-measurable iffξ ◦θt = ξ for all t ≥0. The invariant σ-field I is clearly contained in the tail σ-field T .
We say that Pμ is weakly mixing if lim t→∞ 1 0 Pμ −P B μ ◦θ−1 st ds = 0, B ∈F∞with PμB > 0, with the understanding that θs = θ[s] when the time scale is discrete. We may now state a weak counterpart of Theorem 26.10.
Theorem 26.11 (weak ergodicity and mixing) For a conservative Markov semi-group on R+ or Z+ with distributions Pμ, these conditions are equiva-lent: (i) the invariant σ-field I is Pμ-trivial for every μ, (ii) Pμ is weakly mixing for every μ, (iii) every bounded, invariant function is a constant, (iv) 1 0 (Pμ −Pν) ◦θ−1 st ds →0 as t →∞for all μ, ν.
Proof: By Theorem 25.25 (ii), we note that (ii) and (iv) are equivalent to the conditions (ii′) Pμ = P B μ on I for all μ and B, (iv′) Pμ = Pν on I for all μ, ν.
Here (ii′) ⇔(i) ⇒(iv′) may be established as before, and it only remains to show that (iv) ⇒(iii) ⇒(i).
(iv) ⇒(iii): Let f be bounded and invariant. Then f(x) = Exf(Xt) = Ttf(x), and therefore f(x) = 1 0 Tstf(x) ds. Using (iv) gives f(x) −f(y) = 1 0 Tstf(x) −Tstf(y) ds ≤∥f∥ 1 0 (Px −Py) ◦θ−1 st ds →0, which shows that f is constant.
(iii) ⇒(i): Fix any A ∈I, and define f(x) = PxA. Using the Markov property at t and the invariance of A, we get Exf(Xt) = ExPXtA = ExPx θ−1 t A Ft = Pxθ−1 t A = PxA = f(x), 598 Foundations of Modern Probability which shows that f is invariant. By (iii) it follows that f equals a constant c ∈[0, 1]. Hence, by the Markov property and martingale convergence, we have a.s.
c = f(Xt) = PXtA = Pμ θ−1 t A Ft = Pμ(A | Ft) →1A, which implies PμA = c ∈{0, 1}. Thus, I is Pμ-trivial.
2 We now specialize to conservative Feller processes X with distributions Px, defined on an lcscH1 space S with Borel σ-field S. Say that the process is regular, if there exist a locally finite measure ρ on S and a continuous function (x, y, t) →pt(x, y) > 0 on S2 × (0, ∞), such that Px{Xt ∈B} = B pt(x, y) ρ(dy), x ∈S, B ∈S, t > 0.
The supporting measure ρ is then unique up to an equivalence, and supp ρ = S by the Feller property. A Feller process is said to be Harris recurrent, if it is regular with a supporting measure ρ satisfying ∞ 0 1B(Xt) dt = ∞a.s. Px, x ∈S, B ∈S with ρB > 0.
(6) Theorem 26.12 (Harris recurrence and ergodicity, Orey) For a Feller pro-cess X, we have X is Harris recurrent ⇒ X is strongly ergodic.
Proof: By Theorem 26.10 it suffices to prove that any bounded, space-time invariant function f : S × R+ →R is a constant. First we show for fixed x ∈S that f(x, t) is independent of t. Then assume that instead f(x, h) ̸= f(x, 0) for some h > 0, say f(x, h) > f(x, 0). Recall from Lemma 26.9 that M s t = f(Xt, s+t) is a Py-martingale for any y ∈S and s ≥0. In particular, the limit M s ∞exists a.s. along hQ, and we get a.s. Px Ex M h ∞−M 0 ∞ F0 = M h 0 −M 0 0 = f(x, h) −f(x, 0) > 0, which implies Px{M h ∞> M 0 ∞} > 0. We may then choose some constants a < b such that Px M 0 ∞< a < b < M h ∞ > 0.
(7) We also note that M s+h t ◦θs = f Xs+t, s + t + h = M h s+t, s, t, h ≥0.
(8) 1locally compact, second countable, and Hausdorff 26. Ergodic Properties of Markov Processes 599 Restricting s, t to h Q+, we define g(y, s) = Py ∩ t≥0 M s t ≤a < b ≤M s+h t , y ∈S, s ≥0.
Using (8) and the Markov property at s, we get a.s. Px for any r ≤s g(Xs, s) = PXs ∩ t≥0 M s t ≤a < b ≤M s+h t = P Fs x ∩ t≥0 M s t ◦θs ≤a < b ≤M s+h t ◦θs = P Fs x ∩ t≥s M 0 t ≤a < b ≤M h t ≥P Fs x ∩ t≥r M 0 t ≤a < b ≤M h t .
By martingale convergence we get, a.s. as s →∞along hQ and then r →∞, lim inf s→∞g(Xs, s) ≥lim inf t→∞1 M 0 t ≤a < b ≤M h t ≥1 M 0 ∞< a < b < M h ∞ , and so by (7), Px g(Xs, s) →1 ≥Px M 0 ∞< a < b < M h ∞ > 0.
(9) Now fix any non-empty, bounded, open set B ⊂S. Using (6) and the right-continuity of X, we note that lim sups 1B(Xs) = 1 a.s. Px, and so by (9) Px lim sup s→∞1B(Xs) g(Xs, s) = 1 > 0.
(10) Furthermore, we have by regularity ph(u, v) ∧p 2h(u, v) ≥ε > 0, u, v ∈B, (11) for some ε > 0. By (10) we may choose some y ∈B and s ≥0 with g(y, s) > 1 −1 2 ε ρB. Define for i = 1, 2 Bi = B \ u ∈S; f(u, s + ih) ≤a < b ≤f u, s + (i + 1)h .
Using (11), the definitions of Bi, M s, g, and the properties of y and s, we get ε ρBi ≤Py{Xih ∈Bi} ≤1 −Py f Xih, s + ih ≤a < b ≤f Xih, s + (i + 1)h = 1 −Py M s ih ≤a < b ≤M s+h ih ≤1 −g(y, s) < 1 2 ε ρB.
Thus, ρB1 + ρB2 < ρB, and there exists a u ∈B \ (B1 ∪B2). But this yields the contradiction a < b ≤f(u, s + 2h) ≤a, showing that f(x, t) = f(x) is indeed independent of t.
600 Foundations of Modern Probability To see that f(x) is also independent of x, assume that instead ρ x; f(x) ≤a ∧ρ x; f(x) ≥b > 0 for some a < b. Then by (6), the martingale Mt = f(Xt) satisfies ∞ 0 1{Mt ≤a} dt = ∞ 0 1{Mt ≥b} dt = ∞a.s.
(12) Writing ˜ M for the right-continuous version of M, which exists by Theorem 9.28 (ii), we get by Fubini’s theorem for any x ∈S sup u>0 Ex u 0 1{Mt ≤a} dt − u 0 1{ ˜ Mt ≤a} dt ≤ ∞ 0 Ex 1{Mt ≤a} −1{ ˜ Mt ≤a} dt ≤ ∞ 0 Px Mt ̸= ˜ Mt dt = 0, and similarly for the events Mt ≥b and ˜ Mt ≥b. Thus, the integrals on the left agree a.s. Px for all x, and so (12) remains true with M replaced by ˜ M. In particular, lim inf t→∞ ˜ Mt ≤a < b ≤lim sup t→∞ ˜ Mt a.s. Px.
But this is impossible, since ˜ M is a bounded, right-continuous martingale and hence converges a.s. The contradiction shows that f(x) = c a.e. ρ for some constant c ∈R. Then for any t > 0, f(x) = Exf(Xt) = f(y) pt(x, y) ρ(dy) = c, x ∈S, and so f(x) is indeed independent of x.
2 Our further analysis of regular Feller processes requires some potential the-ory. For any measurable functions f, h ≥0 on S, we define the h -potential Uhf of f by Uhf(x) = Ex ∞ 0 e−Ah t f(Xt) dt, x ∈S, where Ah denotes the elementary additive functional Ah t = t 0 h(Xs) ds, t ≥0.
When h is a constant a ≥0, we note that Uh = Ua agrees with the resolvent operator Ra of the semigroup (Tt), in which case Uaf(x) = Ex ∞ 0 e−atf(Xt) dt = ∞ 0 e−at Ttf(x) dt, x ∈S.
The classical resolvent equation extends to general h -potentials, as follows.
26. Ergodic Properties of Markov Processes 601 Lemma 26.13 (resolvent equation) Let f ≥0 and h ≥k ≥0 be measurable functions on S, where h is bounded. Then Uhh ≤1, and Ukf = Uhf + Uh(h −k) Ukf = Uhf + Uk(h −k) Uhf.
Proof: Define F = f(X), H = h(X), and K = k(X) for convenience. By Itˆ o’s formula for continuous functions of bounded variation, e−Ah t = 1 − t 0 e−Ah s Hs ds, t ≥0, (13) which implies Uhh ≤1. We also see from the Markov property of X that a.s.
Ukf(Xt) = EXt ∞ 0 e−Ak s Fs ds = EFt x ∞ 0 e−Ak s◦θt Fs+t ds = EFt x ∞ t e−Ak u+Ak t Fu du.
(14) Using (13) and (14) together with Fubini’s theorem, we get Uh(h −k) Ukf(x) = Ex ∞ 0 e−Ah t (Ht −Kt) dt ∞ t e−Ak u+Ak t Fu du = Ex ∞ 0 e−Ak u Fu du u 0 e−Ah t +Ak t (Ht −Kt) dt = Ex ∞ 0 e−Ak uFu 1 −e−Ah u+Ak u du = Ex ∞ 0 e−Ak u −e−Ah u Fu du = Ukf(x) −Uhf(x).
A similar calculation yields the same expression for Uk(h −k) Uhf(x).
2 For a simple application of the resolvent equation, we show that any bounded potential function Uhf is continuous.
Lemma 26.14 (boundedness and continuity) For a regular Feller process on S and any bounded, measurable functions f, h ≥0 on S, we have Uhf is bounded ⇒ Uhf is continuous.
Proof: Using Fatou’s lemma and the continuity of pt(·, y), we get for any time t > 0 and sequence xn →x in S lim inf n→∞Ttf(xn) = lim inf n→∞ pt(xn, y)f(y) ρ(dy) ≥ pt(x, y)f(y) ρ(dy) = Ttf(x).
602 Foundations of Modern Probability If f ≤c, the same relation applies to the function Tt(c −f) = c −Ttf, and so by combination Ttf is continuous. By dominated convergence, Uaf is then continuous for every a > 0.
Now let h ≤a. Applying the previous result to the bounded, measurable function (a −h) Uhf ≥0, we see that even Ua(a −h) Uhf is continuous. The continuity of Uhf now follows from Lemma 26.13, with h and k replaced by a and h.
2 We proceed with some useful estimates.
Lemma 26.15 (lower bounds) For a regular Feller process on S, there exist some continuous functions h, k : S →(0, 1], such that for any measurable function f ≥0 on S, (i) U2f(x) ≥ρ(kf) h(x), (ii) Uhf(x) ≥ρ(kf) Uhh(x).
Proof: Fix any compact sets K ⊂S and T ⊂(0, ∞) with ρK > 0 and λT > 0. Define uT a (x, y) = T e−at pt(x, y) dt, x, y ∈S, and note that for any measurable function f ≥0 on S, Uaf(x) ≥ uT a (x, y) f(y) ρ(dy), x ∈S.
Applying Lemma 26.13 to the constant functions 4 and 2 gives U2f(x) = U4f(x) + 2 U4U2f(x) ≥2 K uT 4 (x, y) ρ(dy) uT 2 (y, z) f(z) ρ(dz), and (i) follows with h(x) = 2 K uT 4 (x, y) ρ(dy) ∧1, k(x) = inf y∈K uT 2 (y, x) ∧1.
To deduce (ii), we may combine (i) with Lemma 26.13 for the functions 2 and h, to obtain Uhf(x) ≥Uh(2 −h) U2f(x) ≥UhU2f(x) ≥Uhh(x) ρ(kf).
The continuity of h is clear by dominated convergence. For the same reason, the function uT 2 is jointly continuous on S2. Since K is compact, the functions uT 2 (y, ·), y ∈K, are then equi-continuous on S, which yields the required con-tinuity of k. Finally, the relation h > 0 is obvious, whereas k > 0 holds by the 26. Ergodic Properties of Markov Processes 603 compactness of K.
2 Fixing a function h as in Lemma 26.15, we introduce the kernel QxB = Uh(h1B)(x), x ∈S, B ∈S, (15) and note that QxS = Uhh(x) ≤1 by Lemma 26.13.
Lemma 26.16 (convergence dichotomy) For h as in Lemma 26.15, define Q by (15). Then (i) when Uhh ̸≡1, there exists an r ∈(0, 1) with ∥QnS∥≤rn−1, n ∈N, (ii) when Uhh ≡1, there exists a Q-invariant distribution ν ∼ρ with Qn x −ν →0, x ∈S, and every σ-finite, Q-invariant measure on S is proportional to ν.
Proof: (i) Choose k as in Lemma 26.15, fix any a ∈S with Uhh(a) < 1, and define r = 1 −h(a) ρ hk (1 −Uhh) .
Note that ρ{hk (1−Uhh)} > 0, since Uhh is continuous by Lemma 26.14. Using Lemma 26.15 (i), we obtain 0 < 1 −r ≤h(a) ρk ≤U21(a) = 1 2.
Next, Lemma 26.15 (ii) yields (1 −r) Uhh = h(a) ρ hk (1 −Uhh) Uhh ≤Uhh(1 −Uhh).
Hence, Q2S = Uhh Uhh ≤r Uhh = r QS, and so by iteration QnS ≤rn−1QS ≤rn−1, n ∈N.
(ii) Introduce a measure ˜ ρ = hk · ρ on S. Since Uhh = 1, Lemma 26.15 (ii) yields ˜ ρB = ρ(hk 1B) ≤Uh(h 1B)(x) = QxB, B ∈S.
(16) Regarding ˜ ρ as a kernel, we have for any x, y ∈S and m, n ∈Z+ Qm x −Qn y ˜ ρk = Qm x S −Qn yS ˜ ρk = 0.
604 Foundations of Modern Probability Iterating this and using (16), we get as n →∞ Qn x −Qn+k y = (δx −Qk y)Qn = (δx −Qk y)(Q −˜ ρ)n ≤ δx −Qk y supz Qz −˜ ρ n ≤2 (1 −˜ ρS)n →0.
Hence, supx ∥Qn x −ν∥→0 for some set function ν on S, and we note that ν is a Q -invariant probability measure. By Fubini’s theorem, we have Qx ≪ρ for all x, and so ν = νQ ≪ρ. Conversely, Lemma 26.15 (ii) yields νB = νQ(B) = ν Uh(h1B) ≥ρ(hk1B), which shows that even ρ ≪ν.
Now consider any σ-finite, Q -invariant measure μ on S. By Fatou’s lemma, we get for any B ∈S μB = lim inf n→∞μQnB ≥μνB = μS νB.
Choosing B with μB < ∞and νB > 0, we obtain μS < ∞. By dominated convergence, we conclude that μ = μQn →μν = μS ν, which proves the as-serted uniqueness of ν.
2 We may now establish the basic recurrence dichotomy for regular Feller processes. Write U = U0, and say that X is uniformly transient if ∥U1K∥= supxEx ∞ 0 1K(Xt) dt < ∞, K ⊂S compact.
Theorem 26.17 (recurrence dichotomy) A regular Feller process is either Harris recurrent or uniformly transient.
Proof: Choose h, k as in Lemma 26.15 and Q, r, ν as in Lemma 26.16.
First assume Uhh ̸≡1. Letting a ∈(0, ∥h∥], we note that ah ≤(h∧a)∥h∥, and hence a Uh∧ah ≤Uh∧a(h ∧a) ∥h∥≤∥h∥.
(17) Furthermore, Lemma 26.13 yields Uh∧ah ≤Uhh + UhhUh∧ah = Q 1 + Uh∧ah .
Iterating this relation and using Lemma 26.16 (i) and (17), we get Uh∧ah ≤Σ k≤n Qk1 + QnUh∧ah ≤Σ k≤n rk−1 + rn−1 Uh∧ah ≤(1 −r)−1 + rn−1∥h∥/a.
26. Ergodic Properties of Markov Processes 605 Letting n →∞and then a →0, we conclude by dominated and monotone convergence that Uh ≤(1 −r)−1. Now fix any compact set K ⊂S. Since b ≡infK h > 0, we get for all x ∈S U1K(x) ≤b−1 Uh(x) ≤b−1(1 −r)−1 < ∞, which shows that X is uniformly transient.
Now assume that instead Uhh ≡1. Fix any measurable function f on S with 0 ≤f ≤h and ρf > 0, and put g = 1 −Uff. By Lemma 26.13, we get g = 1 −Uff = Uhh −Uhf −Uh(h −f)Uff = Uh(h −f)(1 −Uff) = Uh(h −f)g ≤Uhhg = Qg.
(18) Iterating this relation and using Lemma 26.16 (ii), we obtain g ≤Qng →νg, where ν ∼ρ is the unique Q -invariant distribution on S. Inserting this into (18) gives g ≤Uh(h −f)νg, and so by Lemma 26.15 (ii), νg ≤ν Uh(h −f) νg ≤ 1 −ρ(kf) νg.
Since ρ(kf) > 0, we obtain νg = 0, and so Uff = 1 −g = 1 a.e. ν ∼ρ.
Since Uff is continuous by Lemma 26.14 and supp ρ = S, we obtain Uff ≡1.
Taking expected values in (13), we conclude that Af ∞= ∞a.s. Px for every x ∈S. Now fix any compact set K ⊂S with ρK > 0. Since b ≡infK h > 0, we may choose f = b 1K, and the desired Harris recurrence follows.
2 A measure λ on S is said to be invariant for the semi-group (Tt), if λ(Ttf) = λf for all t > 0 and every measurable function f ≥0 on S. In the Harris recurrent case, the existence of an invariant measure λ can be inferred from Lemma 26.16.
Theorem 26.18 (invariant measure, Harris, Watanabe) For a Harris recur-rent Feller process on S with supporting measure ρ, (i) there exists a locally finite, invariant measure λ ∼ρ, (ii) every σ-finite, invariant measure agrees with λ up to a normalization.
To prepare for the proof, we first express the required invariance in terms of the resolvent operators.
Lemma 26.19 (invariance equivalence) Let (Tt) be a Feller semi-group on S with resolvent (Ua), and fix a locally finite measure λ on S and a constant c > 0. Then these conditions are equivalent: 606 Foundations of Modern Probability (i) λ is Tt-invariant for every t > 0, (ii) λ is a Ua-invariant for every a ≥c.
Proof, (i) ⇒(ii): Assuming (i), we get by Fubini’s theorem for any mea-surable function f ≥0 and constant a > 0 λ(Uaf) = ∞ 0 e−atλ(Ttf) dt = ∞ 0 e−atλf dt = a−1λf.
(19) (ii) ⇒(i): Assume (ii). Then for any measurable function f ≥0 on S with λf < ∞, the integrals in (19) agree for all a ≥c. Hence, by Theorem 6.3 the measures λ(Ttf) e−ct dt and λf e−ct dt agree on R+, which implies λ(Ttf) = λf for almost every t ≥0. By the semi-group property and Fubini’s theorem, we then obtain for any t ≥0 λ(Ttf) = c λUc(Ttf) = c λ ∞ 0 e−cs TsTtf ds = c ∞ 0 e−csλ(Ts+tf) ds = c ∞ 0 e−csλf ds = λf.
2 Proof of Theorem 26.18: (i) Let h, Q, ν be such as in Lemmas 26.15 and 26.16, and put λ = h−1 · ν. Using the definition of λ (twice), the Q -invariance of ν (three times), and Lemma 26.13, we get for any constant a ≥∥h∥and bounded, measurable function f ≥0 on S a λUaf = a ν(h−1Uaf) = a νUhUaf = ν Uhf −Uaf + UhhUaf = νUhf = ν(h−1f) = λf, which shows that λ is a Ua -invariant for every such a. By Lemma 26.19 it follows that λ is also (Tt) -invariant.
(ii) Consider any σ-finite, (Tt) -invariant measure λ′ on S. By Lemma 26.19, λ′ is even a Ua -invariant for every a ≥∥h∥. Now define ν′ = h · λ′. Letting f ≥0 be bounded and measurable on S and using Lemma 26.13, we get as before ν′Uh(hf) = λ′(hUh(hf)) = a λ′UahUh(hf) = a λ′ Ua(hf) −Uh(hf) + a UaUh(hf) = a λ′Ua(hf) = λ′(hf) = ν′f, 26. Ergodic Properties of Markov Processes 607 which shows that ν′ is Q -invariant. Hence, the uniqueness in Lemma 26.16 (ii) yields ν′ = c ν for some constant c ≥0, which implies λ′ = c λ.
2 A Harris recurrent Feller process is said to be positive-recurrent if the in-variant measure λ is bounded, and null-recurrent otherwise. In the former case, we may choose λ to be a probability measure on S. For any process X in S, the divergence Xt →∞a.s. or Xt P →∞means that 1K(Xt) →0 in the same sense for every compact set K ⊂S.
Theorem 26.20 (distributional limits) Let X be a regular Feller process in S. Then for any distribution μ on S, we have as t →∞ (i) when X is positive-recurrent with invariant distribution λ, P A μ ◦θ−1 t −Pλ →0, A ∈F∞with PμA > 0, (ii) when X is null-recurrent or transient, Xt Pμ − →∞.
Proof: (i) Since Pλ ◦θ−1 t = Pλ by Lemma 11.11, the assertion follows from Theorem 26.12, together with properties (ii) and (iv) of Theorem 26.10.
(ii) (null-recurrent case) For any compact set K ⊂S and constant ε > 0, define Bt = x ∈S; Tt1K(x) ≥μTt1K −ε , t > 0, and note that for any invariant measure λ, μTt1K −ε λBt ≤λ(Tt1K) = λK < ∞.
(20) Since μTt1K −Tt1K(x) →0 for all x ∈S by Theorem 26.12, we have lim inft Bt = S, and so λBt →∞by Fatou’s lemma. Hence, (20) yields lim supt μTt1K ≤ ε, and ε being arbitrary, we obtain Pμ{Xt ∈K}= μTt1K →0.
(ii) (transient case) Fix any compact set K ⊂S with ρK > 0, and conclude from the uniform transience of X that U1K is bounded. Hence, by the Markov property at t and dominated convergence, Eμ U1K(Xt) = EμEXt ∞ 0 1K(Xs) ds = Eμ ∞ t 1K(Xs) ds →0, which shows that U1K(Xt) Pμ − →0. Since U1K is strictly positive and also con-tinuous by Lemma 26.14, we conclude that Xt Pμ − →∞.
2 We conclude with a pathwise limit theorem for regular Feller processes.
Here ‘almost surely’ means a.s. Pμ for every initial distribution μ on S.
608 Foundations of Modern Probability Theorem 26.21 (pathwise limits) For a regular Feller process X in S, we have as t →∞ (i) when X is positive-recurrent with invariant distribution λ, t−1 t 0 f(θsX) ds →Eλf(X) a.s., f bounded, measurable, (ii) when X is null-recurrent, t−1 t 0 1K(Xs) ds →0 a.s., K ⊂S compact, (iii) when X is transient, Xt →∞a.s.
Proof: (i) From Lemma 11.11 and Theorems 26.10 (i) and 26.12, we note that Pλ is stationary and ergodic, and so the assertion holds a.s. Pλ by Corol-lary 25.9. Since the stated convergence is a tail event and Pμ = Pλ on T for any μ, the general result follows.
(ii) Since Pλ is shift-invariant with Pλ{Xs ∈K} = λK < ∞, the left-hand side converges a.e. Pλ by Theorem 26.2. By Theorems 26.10 and 26.12, the limit is a.e. a constant c ≥0. Using Fatou’s lemma and Fubini’s theorem, we get Eλc ≤lim inf t→∞ t−1 t 0 Pλ{Xs ∈K} ds = λK < ∞, which implies c = 0 since ∥Pλ∥= ∥λ∥= ∞. The general result follows from the fact that Pμ = Pν on T for any distributions μ and ν.
(iii) Fix any compact set K ⊂S with ρK > 0, and conclude from the Markov property at t > 0 that a.s. Pμ, U1K(Xt) = EXt ∞ 0 1K(Xr) dr = EFt μ ∞ t 1K(Xr) dr.
Using the tower property of conditional expectations, we get for any s < t Eμ U1K(Xt) Fs = EFs μ ∞ t 1K(Xr) dr ≤EFs μ ∞ s 1K(Xr) dr = U1K(Xs), which shows that U1K(Xt) is a super-martingale. Since it is also non-negative and right-continuous, it converges a.s. Pμ as t →∞, and the limit equals 0 a.s., since U1K(Xt) Pμ − →0 by the previous proof. Since U1K is strictly positive and continuous, it follows that Xt →∞a.s. Pμ.
2 26. Ergodic Properties of Markov Processes 609 Exercises 1. For a measure space (S, S, μ), let T be a positive, linear operator on L1 ∩L∞.
Show that if T is both an L1-contraction and an L∞-contraction, it is also an Lp-contraction for every p ∈[1, ∞]. (Hint: Prove a H¨ older-type inequality for T.) 2.
Extend Lemma 25.3 to any transition operators T on a measurable space (S, S). Thus, letting I be the class of sets B ∈S with T1B = 1B, show that an S-measurable function f ≥0 is T-invariant iffit is I-measurable.
3. Prove a continuous-time version of Theorem 26.2, for measurable semi-groups of positive L1−L∞-contractions. (Hint: Interpolate in the discrete-time result.) 4. Let (Tt) be a measurable, discrete- or continuous-time semi-group of positive L1−L∞-contractions on (S, S, ν), let μ1, μ2, . . . be asymptotically invariant distri-butions on Z+ or R+, and define An = Tt μn(dt). Show that Anf ν →Af for any f ∈L1(λ), where ν →denotes convergence in measure. (Hint: Proceed as in Theorem 26.2, using the contractivity together with Minkowski’s and Chebyshev’s inequalities to estimate the remainder terms.) 5. Prove a continuous-time version of Theorem 26.4. (Hint: Use Lemma 26.5 to interpolate in the discrete-time result.) 6. Derive Theorem 25.6 from Theorem 26.4. (Hint: Take g ≡1, and proceed as in Corollary 25.9 to identify the limit.) 7. Show that when f ≥0, the limit in Theorem 26.4 is strictly positive on the set {S∞f ∧S∞g > 0}.
8. Show that the limit in Theorem 26.4 is invariant, at least when T is induced by a measure-preserving map on S.
9.
Derive Lemma 26.3 (i) from Lemma 26.6.
(Hint: Note that if g ∈T (f) with f ∈L1 +, then μg ≤μf. Conclude that for any h ∈L1, μ{h; Mnh > 0} ≥ μ Un−1h; Mnh > 0 ≥0.) 10. Show that Brownian motion X in Rd is regular and strongly ergodic for every d ∈N, with an invariant measure that is unique up to a constant factor. Also show that X is Harris recurrent for d ≤2, uniformly transient for d ≥3.
11. Let X be a Markov process with associated space-time process ˜ X. Show that X is strongly ergodic in the sense of Theorem 26.10 iff˜ X is weakly ergodic in the sense of Theorem 26.11. (Hint: Note that a function is space-time invariant for X iffit is invariant for ˜ X.) 12. For a Harris recurrent process on R+ or Z+, every tail event is clearly a.s.
invariant. Show by an example that the statement may fail in the transient case.
13. State and prove discrete-time versions of Theorems 26.12, 26.17, and 26.18.
(Hint: The continuous-time arguments apply with obvious changes.) 14. Derive discrete-time versions of Theorems 26.17 and 26.18 from the corre-sponding continuous-time results.
15. Give an example of a regular Markov process that is weakly but not strongly ergodic. (Hint: For any strongly ergodic process, the associated space-time process has the stated property. For a less trivial example, consider a suitable super-critical branching process.) 610 Foundations of Modern Probability 16. Give examples of non-regular Markov processes with no invariant measure, with exactly one (up to a normalization), and with more than one.
17. Show that a discrete-time Markov process X and the corresponding pseudo-Poisson process Y have the same invariant measures. Further show that if X is regular then so is Y , but not conversely.
Chapter 27 Symmetric Distributions and Predictable Maps Asymptotically invariant sampling, exchangeable and contractable, mixed and conditionally i.i.d. sequences, coding representation and equivalence criteria, urn sequences and factorial measures, strong stationarity, pre-diction sequence, predictable sampling, optional skipping, sojourns and maxima, exchangeable and mixed L´ evy processes, compensated jumps, sampling processes, hyper-contraction and tightness, rotatable sequences and processes, predictable mapping Classical results in probability typically require both an independence and an invariance condition1. It is then quite remarkable that, under special circum-stances, the independence follows from the invariance assumption alone. We have already seen some notable instances of this phenomenon, such as in The-orems 11.5 and 11.14 for Markov processes and in Theorems 10.27 and 19.8 for martingales. Here and in the next chapter, we will study some broad classes of random phenomena, where a symmetry or invariance condition alone yields some very precise structural information, typically involving properties of in-dependence.
For a striking example, recall that an infinite sequence of random variables ξ = (ξ1, ξ2, . . .) is said to be stationary if θnξ d = ξ for every n ∈Z+, where θn denotes the n -step shift operator on R∞. Replacing n by an arbitrary optional time τ in Z+ yields the notion of strong stationarity, where θτξ d = ξ. We will show that ξ is strongly stationary iffit is conditionally i.i.d., given a suitable σ-field I. This statement is essentially equivalent to de Finetti’s theorem—the fact that ξ is exchangeable, defined as invariance in distribution under finite permutations, iffit is mixed i.i.d. Thus, apart from a randomization, a simple symmetry condition leads us back to the elementary setting of i.i.d. sequences.
This is by modern standards a quite elementary result that can be proved in just a few lines. The same argument yields the remarkable extension by Ryll-Nardzewski—the fact that the mere contractability2, where all sub-sequences are assumed to have the same distribution, suffices for ξ to be mixed i.i.d. Fur-ther highlights of the theory include the predictable sampling theorem, extend-ing of the exchangeablity property of a finite or infinite sequence to arbitrary 1We may think of i.i.d., such as in the CLT, LLN, and LIL.
2short for contraction invariance in distribution 611 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 612 Foundations of Modern Probability predictable permutations. In Chapters 14, 16, and 22, we saw how the latter result yields short proofs of the classical arcsine laws.
All the mentioned results have continuous-time counterparts. In particu-lar, a process on R+ with exchangeable or contractable increments is a mixture of L´ evy processes. The more subtle representation of exchangeable processes on [0, 1] will be derived from some asymptotic properties for sequences ob-tained by sampling without replacement from a finite population. Similarly, the predictable sampling theorem turns into the quite subtle predictable map-ping theorem—the invariance in distribution of an exchangeable process on R+ or [0, 1] under predictable and pathwise measure-preserving transformations.
Here some predictable mapping properties from Chapter 19, involving ordi-nary or discounted compensators, will be helpful for the proof.
The material in this chapter is related in many ways to other parts of the book. In particular, we note some links to various applications and extensions in Chapters 14–16, for results involving exchangeable sequences or processes.
The predictable sampling theorem is further related to results for random time change in Chapters 15–16 and 19.
To motivate our first main topic, we consider a simple limit theorem for multi-variate sampling from a stationary process. Here we consider a mea-surable process X on an index set T, taking values in a space S, and let τ = (τ1, τ2, . . .) be an independent sequence of random elements in T with joint distribution μ. Define the associated sampling sequence ξ = X ◦τ in S∞ by ξ = (ξ1, ξ2, . . .) = (Xτ1, Xτ2, . . .) henceforth referred to as a μ -sample from X.
The sampling distributions μ1, μ2, . . . on T ∞are said to be asymptotically invariant, if their projections onto T k are asymptotically invariant for every k ∈N. Recall that IX denotes the invariant σ-field of X, and note that the conditional distribution η = L(X0 | IX) exists by Theorem 8.5 when S is Borel.
Lemma 27.1 (asymptotically invariant sampling) Let X be a stationary, measurable process on T = R or Z with values in a Polish space S. For n ∈N let ξn be a μn-sample from X, where μ1, μ2, . . . are asymptotically invariant distributions on T ∞. Then ξn d →ξ in S∞, L(ξ) = E η∞with η = L(X0| IX).
Proof: Write ξ = (ξk) and ξn = (ξk n). Fix any asymptotically invariant distributions ν1, ν2, . . . on T, and let f1, . . . , fm be measurable functions on S bounded by ±1. Proceeding as in the proof of Corollary 25.18 (i), we get E Π k fk(ξk n) −E Π k fk(ξk) ≤E μn⊗k fk(X) −Π k ηfk ≤ μn −μn ∗νm r + E (νm r ∗δt)⊗k fk(X) −Π k ηfk μn(dt) ≤ μn −μn ∗δt νm r (dt) + Σ k suptE (νr ∗δt)fk(X) −ηfk .
27. Symmetric Distributions and Predictable Maps 613 Using the asymptotic invariance of μn and νr, together with Corollary 25.18 (i) and dominated convergence, we see that the right-hand side tends to 0 as n →∞and then r →∞. The assertion now follows by Theorem 5.30.
2 The last result leads immediately to a version of de Finetti’s theorem. For a precise statement, consider a finite or infinite random sequence ξ = (ξ1, ξ2, . . .) with index set I, and say that ξ is exchangeable if (ξk1, ξk2, . . .) d = (ξ1, ξ2, . . .), (1) for any finite3 permutation (k1, k2, . . .) of I. For infinite sequences ξ, we also consider the seemingly weaker property of contractability4, where I = N and (1) is only required for increasing sequences k1 < k2 < · · · . Then ξ is stationary, and every sub-sequence has the same distribution as ξ. By Lemma 27.1 we obtain L(ξ) = E η∞with η = L(ξ1| Iξ), here called the directing random measure of ξ. We may also give a stronger conditional statement. Recall that, for any random measure η on a measurable space (S, S), the associated σ-field is generated by the random variables ηB for arbitrary B ∈S.
Theorem 27.2 (exchangeable sequences, de Finetti, Ryll-Nardzewski) For an infinite random sequence ξ in a Borel space S, these conditions are equivalent: (i) ξ is exchangeable, (ii) ξ is contractable, (iii) ξ is mixed i.i.d., (iv) L(ξ | η) = η∞a.s. for a random distribution η on S.
Here η is a.s. unique and equal to L(ξ1| Iξ).
Since η∞is a.s. the distribution of an i.i.d. sequence in S based on the mea-sure η, (iv) means that ξ is conditionally i.i.d. η. Taking expectations of both sides in (iv), we obtain the seemingly weaker condition L(ξ) = E η∞, stating that ξ is mixed i.i.d. The latter condition implies that ξ is exchangeable, and so the two versions of (iii) are indeed equivalent.
First proof of Theorem 27.2 (OK): Since S is Borel, we may take S = [0, 1]. Write μn for the uniform distribution on the product set Πk{(k −1)n + 1, . . . , kn}. Assuming (ii), we see from Lemma 27.1 that L(ξ) = E η∞. More generally, we may extend (1) to 1A(ξ), ξk1, ξk2, . . . d = 1A(ξ), ξ1, ξ2, . . . , k1 < k2 < · · · , for any invariant set A ∈S∞. Applying Lemma 27.1 to the sequence of pairs {ξk, 1A(ξ)}, we get as before L(ξ; ξ ∈A) = E(η∞; ξ ∈A), and since η is Iξ-measurable, it follows that L(ξ | η) = η∞a.s.
3one affecting only finitely many elements 4short for contraction invariance in distribution, also called sub-sequence invariance, spreading invariance, or spreadability 614 Foundations of Modern Probability To see that η is a.s. unique, we may use the law of large numbers and Theorem 8.5 to obtain n−1Σ k≤n 1B(ξk) →ηB a.s., B ∈S.
2 We can also prove de Finetti’s theorem by a simple martingale argument: Second proof of Theorem 27.2 (Aldous): If ξ is contractable, then (ξm, θmξ) d = (ξk, θmξ) d = (ξk, θnξ), k ≤m ≤n.
Now write Tξ = n σ(θnξ). Using Lemma 8.10 followed by Theorem 9.24 as n →∞, we get a.s. for a fixed set B ∈S L(ξm | θmξ) = L(ξk | θmξ) = L(ξk | θnξ) →L(ξk | Tξ), which shows that the extreme members agree a.s. In particular, L(ξm | θmξ) = L(ξm | Tξ) = L(ξ1| Tξ) a.s.
Here the first relation gives ξm⊥ ⊥Tξ θmξ for all m ∈N, and so by iteration ξ1, ξ2, . . . are conditionally independent given Tξ. The second relation shows that the conditional distributions agree a.s. This proves that L(ξ | Tξ) = ν∞ with ν = L(ξ1| Tξ).
2 The nature of de Finetti’s criterion is further clarified by the following functional representation, here included to prepare for the much more subtle higher-dimensional versions in Chapter 28.
Corollary 27.3 (coding representation) Let ξ = (ξj) be an infinite random sequence in a Borel space S. Then ξ is exchangeable iff ξj = f(α, ζj), j ∈N, for a measurable function f : [0, 1]2 →S and some i.i.d. U(0, 1) random vari-ables α, ζ1, ζ2, . . . . The directing random measure ν of ξ is then given by νB = 1B ◦f(α, x) dx, B ∈S.
Proof: Let ξ be conditionally i.i.d. with directing random measure ν. For any i.i.d. U(0, 1) random variables ˜ α and ˜ ζ1, ˜ ζ2, . . . , a repeated use of Theorem 8.17 yields some measurable functions g, h, such that ˜ ν ≡g(˜ α) d = ν, (˜ ξ1, ˜ ν) ≡ h(˜ ν, ˜ ζ1), ˜ ν d = (ξ1, ν).
(2) 27. Symmetric Distributions and Predictable Maps 615 Writing f(s, x) = h{g(s), x} gives ˜ ξ1 = h(˜ ν, ˜ ζ1) = f g(˜ α), ˜ ζ1 = f(˜ α, ˜ ζ1), and we may define more generally ˜ ξj = f(˜ α, ˜ ζj) = h(˜ ν, ˜ ζj), j ∈N.
The last expression shows that the ˜ ξj are conditionally independent given ˜ ν, and by (2) their common conditional distribution equals ˜ ν. Hence, the sequence ˜ ξ = (˜ ξj) satisfies (˜ ν, ˜ ξ) d = (ν, ξ), and Corollary 8.18 yields a.s.
ν = g(α), ξj = h(ν, ζj), j ∈N, (3) for some i.i.d. U(0, 1) variables α and ζ1, ζ2, . . . .
To identify ν, let B ∈S be arbitrary, and use (3), Fubini’s theorem, and the definition of ν to get νB = P ξj ∈B ν = P h(ν, ζj) ∈B ν = 1B ◦h(ν, x) dx = 1B ◦f(α, x) dx.
2 The representing function f in Corollary 27.3 is far from unique. Here we state some basic equivalence criteria, motivating the two-dimensional versions in Theorem 28.3. The present proof is the same, apart from some obvious simplifications. To simplify the writing, put I = [0, 1].
Proposition 27.4 (equivalent coding functions, Hoover, OK) Fix any mea-surable functions f, g : I2 →S, and let α, ξ1, ξ2, . . . and α′, ξ′ 1, ξ′ 2, . . . be i.i.d.
U(0, 1). Then these conditions are equivalent 5: (i) we may choose (˜ α, ˜ ξ1, ˜ ξ2, . . .) d = (α, ξ1, ξ2, . . .) with f(α, ξi) = g(˜ α, ˜ ξi) a.s., i ≥1, (ii) f(α, ξi); i ≥1 d = g(α, ξi); i ≥1 , (iii) there exist some measurable functions T, T ′ : I →I and U, U ′ : I2 →I, preserving λ in the highest order arguments, such that f T(α), U(α, ξi) = g T ′(α), U ′(α, ξi) a.s., i ≥1, 5This result is often misunderstood: it is not enough in (iii) to consider λ -preserving transformations U, U ′ of a single variable, nor is (iv) true without the extra randomization variables α′, ξ′ i.
616 Foundations of Modern Probability (iv) there exist some measurable functions T : I2 →I and U : I4 →I, mapping λ2 into λ in the highest order arguments, such that f(α, ξi) = g T(α, α′), U(α, α′, ξi, ξ′ i) a.s., i ≥1.
Theorem 27.2 fails for finite sequences. Here we need instead to replace the inherent i.i.d. sequences by suitable urn sequences, generated by successive drawing without replacement from a finite set. To make this precise, fix a measurable space S, and consider a measure of the form μ = Σ k≤n δsk with s1, . . . , sn ∈S. As in Lemma 2.20, we introduce on Sn the associated factorial measure μ(n) = Σ p∈Pn δs◦p , where Pn is the set of permutations p = (p1, . . . , pn) of 1, . . . , n, and s ◦p = (sp1, . . . , spn).
Note that μ(n) is independent of the order of s1, . . . , sn and depends measurably on μ.
Lemma 27.5 (finite exchangeable sequences) Let ξ = (ξ1, . . . , ξn) be a finite random sequence in S, and put η = Σk δξk. Then ξ is exchangeable iff L(ξ | η) = η(n) n!
a.s.
Proof: Since η is invariant under permutations of ξ1, . . . , ξn, we have (ξ ◦ p, η) d = (ξ, η) for any p ∈Pn. Now introduce an exchangeable permutation π⊥ ⊥ξ of 1, . . . , n. Using Fubini’s theorem twice, we get for any measurable sets A and B in appropriate spaces P ξ ∈B, η ∈A = P ξ ◦π ∈B, η ∈A = E P ξ ◦π ∈B ξ ; η ∈A = E η(n)B/n! ; η ∈A .
2 Just as for martingales and Markov processes, we may relate the defining properties to a filtration F = (Fn).
Thus, a finite or infinite sequence of random elements ξ = (ξ1, ξ2, . . .) is said to be F-exchangeable, if it is F-adapted and such that, for every n ≥0, the shifted sequence θnξ = (ξn+1, ξn+2, . . .) is conditionally exchangeable given Fn. For infinite sequences ξ, the definition of F-contractability is similar.6 When F is the filtration induced by ξ, the stated properties reduce to the unqualified versions considered earlier.
An infinite, adapted sequence ξ is said to be strongly stationary or F-stationary if θτξ d = ξ for every optional time τ < ∞. By the prediction sequence of ξ we mean the set of conditional distributions πn = L( θnξ | Fn), n ∈Z+.
(4) 6Since both definitions can be stated without reference to regular conditional distribu-tions, no restrictions are needed on S.
27. Symmetric Distributions and Predictable Maps 617 The random probability measures π0, π1, . . . on S are said to form a measure-valued martingale, if (πnB) is a real-valued martingale for every measurable set B ⊂S. We show that strong stationarity is equivalent to exchangeability,7 and exhibit an interesting martingale connection.
Corollary 27.6 (strong stationarity) Let ξ be an infinite, F-adapted random sequence with prediction sequence π, taking values in a Borel space S. Then these conditions are equivalent: (i) ξ is F-exchangeable, (ii) ξ is F-contractable, (iii) ξ is F-stationary, (iv) π is a measure-valued F-martingale.
Proof: Conditions (i) and (ii) are equivalent by Theorem 27.2. Assuming (ii), we get a.s. for any B ∈S∞and n ∈Z+ E πn+1B Fn = P θn+1ξ ∈B Fn = P θnξ ∈B Fn = πnB, (5) which proves (iv). Conversely, (ii) follows by iteration from the second equality in (5), and so (ii) and (iv) are equivalent.
Next we note that (4) extends by Lemma 8.3 to πτB = P θτξ ∈B Fτ a.s., B ∈S∞, for any finite optional time τ. By Lemma 9.14, condition (iv) is then equivalent to P θτξ ∈B = EπτB = Eπ0B = P{ξ ∈B}, B ∈S∞, which is in turn equivalent to (iii).
2 The exchangeability property extends to a broad class of random permuta-tions. Say that an integer-valued random variable τ is predictable with respect to a given filtration F, if the shifted time (τ −1)+ is F-optional.
Theorem 27.7 (predictable sampling) Let ξ = (ξ1, ξ2, . . .) be a finite or infi-nite, F-exchangeable random sequence indexed by I. Then for any a.s. distinct, F-predictable times τ1, . . . , τn in I, we have (ξτ1, . . . , ξτn) d = (ξ1, . . . , ξn).
(6) 7This remains true for finite sequences ξ = (ξ1, . . . , ξn), if we define θkξ = (ξk+1, . . . , ξn, ξk, . . . , ξ1).
618 Foundations of Modern Probability Note that we do not require ξ to be infinite or the τk to be increasing. A simple special case is that of optional skipping, where I = N and τ1 < τ2 < · · · .
If τk ≡τ + k for some optional time τ < ∞, then (6) reduces to the strong stationarity in Proposition 27.6. For both applications and proof, it is useful to consider the inverse allocation sequence αj = inf k; τk = j , j ∈I.
When αj is finite, it gives the position of j in the permuted sequence (τk). The times τk are clearly predictable iffthe sequence (αj) is predictable in the sense of Chapters 9–10.
Proof of Theorem 27.7: First let ξ be indexed by In = {1, . . . , n}, so that (τ1, . . . , τn) and (α1, . . . , αn) are mutually inverse random permutations of In.
For each m ∈{0, . . . , n}, put αm j = αj for all j ≤m, and define recursively αm j+1 = min In\ {αm 1 , . . . , αm j } , m ≤j ≤n.
Then (αm 1 , . . . , αm n ) is a predictable, Fm−1-measurable permutation of In. Since also αm j = αm−1 j = αj whenever j < m, Theorem 8.5 yields for any measurable functions f1, . . . , fn ≥0 on S E Π j fαm j (ξj) = E E Π j fαm j (ξj) Fm−1 = E Π j n. The sequence (τk) may then be extended recursively to Im, through the formula τk+1 = min Im\ {τ1, . . . , τk} , k ≥n, (7) so that τ1, . . . , τm form a random permutation of Im. By induction based on (7), the times τn+1, . . . , τm are again predictable.
Hence, the previous case applies, and (6) follows.
Finally, let I = N. For each m ∈N, consider the predictable times τ m k = τk1{τk ≤m} + (m + k)1{τk > m}, k = 1, . . . , n, and conclude from the previous version of (6) that 27. Symmetric Distributions and Predictable Maps 619 ξτ m 1 , . . . , ξτ m n d = (ξ1, . . . , ξn).
(8) As m →∞we get τ m k →τk, and (6) follows from (8) by dominated conver-gence.
2 The last result yields a simple proof of yet another striking property of random walks in R, relating the first maximum to the number of positive values. This leads in turn to simple proofs of the arcsine laws in Theorems 14.16 and 22.11.
Corollary 27.8 (sojourns and maxima, Sparre-Andersen) Let ξ1, . . . , ξn be exchangeable random variables, and put Sk = ξ1 + · · · + ξk. Then Σ k≤n 1{Sk > 0} d = min k ≥0; Sk = max j≤n Sj .
Proof (OK): Put ˜ ξk = ξn−k+1 for k = 1, . . . , n, and note that the ˜ ξk remain exchangeable for the filtration Fk = σ{Sn, ˜ ξ1, . . . , ˜ ξk}, k = 0, . . . , n. Write ˜ Sk = ˜ ξ1 + · · · + ˜ ξk, and introduce the predictable permutation αk = k−1 Σ j ∈0 1{ ˜ Sj < Sn} + (n −k + 1)1{ ˜ Sk−1 ≥Sn}, k = 1, . . . , n.
Define ξ′ k = Σj ˜ ξj1{αj = k} for k = 1, . . . , n, and conclude from Theorem 27.7 that (ξ′ k) d = (ξk). Writing S′ k = ξ′ 1 + · · · + ξ′ k, we further note that min k ≥0; S′ k = maxj S′ j = n−1 Σ j =0 1{ ˜ Sj < Sn} = n Σ k=1 1{Sk > 0}.
2 Turning to the continuous-time case, we say that a process X in a topo-logical space is continuous in probability8 if Xs P →Xt as s →t. An Rd-valued process X on [0, 1] or R+ is said to be exchangeable or contractable, if it is continuous in probability with X0 = 0, and such that the increments Xt −Xs over disjoint intervals (s, t] of equal length form exchangeable or contractable sequences. Finally, we say that X has conditionally stationary, independent increments, given a σ-field I, if the stated property holds conditionally for any finite collection of intervals.
The following continuous-time version of Theorem 27.2 characterizes the exchangeable processes on R+; the harder finite-interval case is treated in The-orem 27.10.
The point process case was already considered separately, by different methods, in Theorem 15.14.
Proposition 27.9 (exchangeable processes on R+, B¨ uhlmann) Let X be an Rd-valued process on R+, continuous in probability with X0 = 0. Then these conditions are equivalent: 8also said to be stochastically continuous 620 Foundations of Modern Probability (i) X has exchangeable increments, (ii) X has contractable increments, (iii) X is a mixed L´ evy process, (iv) X is conditionally L´ evy, given a σ-field I.
Proof: The sufficiency of (iii)−(iv) being obvious, it suffices to show that the conditions are also necessary. Thus, let X be contractable. The increments ξnk over the dyadic intervals Ink = 2−n(k −1, k] are then contractable for fixed n, and so by Theorem 27.2 they are conditionally i.i.d. ηn for some random probability measure ηn on R.
Using Corollary 4.12 and the uniqueness in Theorem 27.2, we obtain η∗2n−m n = ηm a.s., m < n.
(9) Thus, for any m < n, the increments ξmk are conditionally i.i.d. ηm, given ηn.
Since the σ-fields σ(ηn) are a.s. non-decreasing by (9), Theorem 9.24 shows that the ξmk remain conditionally i.i.d. ηm, given I ≡σ{η0, η1, . . .}.
Now fix any disjoint intervals I1, . . . , In of equal length with associated increments ξ1, . . . , ξn, and approximate by disjoint intervals Im 1 , . . . , Im n of equal length with dyadic endpoints. For each m, the associated increments ξm k are conditionally i.i.d., given I.
Thus, for any bounded, continuous functions f1, . . . , fn, EI Π k≤n fk(ξm k ) = Π k≤n EIfk(ξm k ) = Π k≤n EIfk(ξm 1 ).
(10) Since X is continuous in probability, we have ξm k P →ξk for each k, and so (10) extends by dominated convergence to the original variables ξk. By suitable approximation and monotone-class arguments, we may finally extend the rela-tions to any measurable indicator functions fk = 1Bk.
2 We turn to the more difficult case of exchangeable processes on [0, 1]. To avoid some technical complications9, we will prove the following result only for d = 1.
Theorem 27.10 (exchangeable processes on [0, 1]) Let X be an Rd-valued process on [0, 1], continuous in probability with X0 = 0. Then X is exchangeable iffit has a version Xt = α t + σBt + Σ j ≥1 βj 1{τj ≤t} −t , t ∈[0, 1], (11) for a Brownian bridge B, some independent i.i.d. U(0, 1) random variables τ1, τ2, . . . , and an independent set10of coefficients α, σ, β1, β2, . . . with Σj|βj|2 < ∞a.s. The sum in (11) converges a.s., uniformly on [0, 1], toward an rcll process on [0, 1].
9The complications for d > 1 are similar to those in Theorem 7.7.
10Thus, B, τ1, τ2, . . . , {α, σ, (βj)} are independent; no claim is made about the dependence between α, σ, β1, β2, . . . .
27. Symmetric Distributions and Predictable Maps 621 In particular, a simple point process on [0, 1] is symmetric with respect to Lebesgue measure λ iffit is a mixed binomial process based on λ, as noted already in Theorem 15.14. Combining the present result with Theorem 27.9, we see that a continuous process X on R+ or [0, 1] with X0 = 0 is exchangeable iffXt ≡α t + σBt a.s., where B is a Brownian motion or bridge, respectively, and (α, σ) is an independent pair of random variables.
The coefficients α, σ, β1, β2, . . . in (11) are clearly a.s. unique and X-measur-able, apart from the order of the jump sizes βj. Thus, X determines a.s. the pair (α, ν), where ν is the a.s. bounded random measure on R given by ν = σ2 δ0 + Σ j β 2 j δβj.
(12) Since L(X) is also uniquely determined by L(α, ν), we say that a process X as in (11) is directed by (α, ν).
First we need to establish the convergence of the sum in (11).
Lemma 27.11 (uniform convergence) Let Y n t be the n-th partial sum of the series in (11). Then these conditions are equivalent: (i) Σj β 2 j < ∞a.s., (ii) Y n t converges a.s. for every t ∈[0, 1], (iii) there exists an rcll process Y on [0, 1], such that (Y n −Y )∗→0 a.s.
Proof: We may clearly take the variables βj to be non-random. Further note that trivially (iii) ⇒(ii).
(i) ⇔(ii): Here we may take the βj to be non-random.
Then for fixed t ∈(0, 1) the terms are independent and bounded with mean 0 and variance β 2 j t(1 −t), and so by Theorem 5.18 the series converges iffΣj β 2 j < ∞.
(i) ⇒(iii): The processes M n t = (1 −t)−1Y n t are L2-martingales on [0, 1), with respect to the filtration induced by the processes 1{τj ≤t}. By Doob’s inequality, we have for any m < n and t ∈[0, 1) E(Y n −Y m)∗2 t ≤E(M n −M m)∗2 t ≤4 E M n t −M m t 2 = 4 (1 −t)−2E Y n t −Y m t 2 ≤4 t(1 −t)−1 Σ j >m β 2 j , which tends to 0 as m →∞for fixed t.
By symmetry we have the same convergence for the time-reversed processes Y n 1−t, and so by combination (Y m −Y n)∗ 1 P →0, m, n →∞.
Statement (iii) now follows by Lemma 11.13.
2 622 Foundations of Modern Probability For the main proof, we consider first a continuity theorem for exchangeable processes of the form (11). Once the general representation is established, this yields a continuity theorem for general exchangeable processes on [0, 1], which is clearly of independent interest. The processes X are regarded as random elements in the function space D [0,1] of rcll functions on [0, 1], endowed with an obvious modification of the Skorohod topology, discussed in Chapter 23 and Appendix 5. We further regard the a.s. bounded random measures ν as random elements in the measure space MR endowed with the weak topology, equivalent to the vague topology on M¯ R.
Theorem 27.12 (limits of exchangeable processes) Let X1, X2, . . . be exchange-able processes as in (11) directed by the pairs (αn, νn). Then Xn sd − →X in D [0,1] ⇔ (αn, νn) wd − →(α, ν) in R × MR, where X is exchangeable and directed by (α, ν).
Here and below, we need an elementary but powerful tightness criterion.
Lemma 27.13 (hyper-contraction and tightness) Let the random variables ξ1, ξ2, . . . ≥0 and σ-fields F1, F2, . . . be such that E ξ2 n Fn ≤c E(ξn| Fn) 2 < ∞a.s., n ∈N, for a constant c > 0, and put ηn = E(ξn|Fn). Then (ξn) is tight ⇒ (ηn) is tight.
Proof: By Lemma 5.9, we need to show that rnηn P →0 whenever 0 ≤rn → 0. Then conclude from Lemma 5.1 that, for any p ∈(0, 1) and ε > 0, 0 < (1 −p)2 c−1 ≤P ξn ≥p ηn Fn ≤P rn ξn ≥p ε Fn + 1{rn ηn < ε}.
Here the first term on the right tends in probability to 0, since rnξn P →0 by Lemma 5.9. Hence, 1{rnηn < ε} P →1, which means that P{rnηn ≥ε} →0.
Since ε is arbitrary, we get rnηn P →0.
2 Proof of Theorem 27.12: First let (αn, νn) wd − →(α, ν). To show that Xn sd − → X for the corresponding processes in (11), it suffices by Lemma 5.28 to take all αn and νn to be non-random. Thus, we may restrict our attention to processes Xn with constant coefficients αn, σn, and βnj, j ∈N.
To prove that Xn fd − →X, we begin with four special cases. First we note that, if αn →α, then trivially αnt →αt uniformly on [0, 1]. Similarly, σn →σ implies σnB →σB in the same sense. Next suppose that αn = σn = 0 and βn,m+1 = βn,m+2 = · · · = 0 for a fixed m ∈N. We may then assume that even 27. Symmetric Distributions and Predictable Maps 623 α = σ = 0 and βm+1 = βm+2 = · · · = 0, and that βnj →βj for all j. Here the convergence Xn →X is obvious. We finally assume that αn = σn = 0 and α = β1 = β2 = · · · = 0. Then maxj |βnj| →0, and for any s ≤t, E(Xn s Xn t ) = s (1 −t)Σ k β 2 nk →s (1 −t) σ2 = E(XsXt).
(13) In this case, Xn fd − →X by Theorem 6.12 and Corollary 6.5. By independence, we may combine the four special cases into Xn fd − →X, whenever βj = 0 for all but finitely many j. To extend this to the general case, we may use Theorem 5.29, where the required uniform error estimate is obtained as in (13).
To strengthen the convergence to Xn sd − →X in D [0,1], it is enough to verify the tightness criterion in Theorem 23.11. Thus, for any Xn-optional times τn and positive constants hn →0 with τn + hn ≤1, we need to show that Xn τn+hn −Xn τn P →0. By Theorem 27.7 and a simple approximation, it is equivalent that Xn hn P →0, which is clear since E(Xn hn)2 = h2 n α2 n + hn (1 −hn) ∥νn∥→0.
To obtain the reverse implication, let Xn sd − →X in D [0,1] for some process X. Since αn = Xn 1 d →X1, the sequence (αn) is tight. Next define for n ∈N ηn = 2 Xn 1/2 −Xn 1 = 2 σnB1/2 + 2Σ j βnj 1{τj ≤1 2} −1 2 .
Then E(η2 n | νn) = σ2 n + Σ j β 2 nj = ∥νn∥, E(η4 n | νn) = 3 σ2 n + Σ j β 2 nj 2 −2Σ j β 4 nj ≤3 ∥νn∥2.
Since (ηn) is tight, even (νn) is tight by Lemmas 23.15 and 27.13, and so the same thing is true for the sequence of pairs (αn, νn).
The tightness implies relative compactness in distribution, and so every sub-sequence contains a further sub-sequence converging in R × M¯ R toward some random pair (α, ν). Since the measures in (12) form a vaguely closed subset of M¯ R, the limit ν has the same form for suitable σ and β1, β2, . . . . Then the direct assertion yields Xn sd − →Y with Y as in (11), and therefore X d = Y .
Since the coefficients in (11) are measurable functions of Y , the distribution of (α, ν) is uniquely determined by that of X. Thus, the limiting distribution is independent of sub-sequence, and the convergence (αn, νn) wd − →(α, ν) remains valid along N. Finally, we may use Corollary 8.18 to transfer the representation (22) to the original process X.
2 624 Foundations of Modern Probability Next we prove a functional limit theorem for partial sums of exchangeable random variables, extending the classical limit theorems for random walks in Theorems 22.9 and 23.14. Then for every n ∈N we consider some exchangeable random variables ξnj, j ≤mn, and form the summation processes Xn t = Σ j ≤mnt ξnj, t ∈[0, 1], n ∈N.
(14) Assuming mn →∞, we show that the Xn can be approximated by exchange-able processes as in (11). The convergence criteria may be stated in terms of the random variables and measures αn = Σ j ξnj, νn = Σ j ξ2 nj δ ξnj, n ∈N.
(15) When the αn and νn are non-random, this reduces to a functional limit theorem for sampling without replacement from a finite population, which is again of independent interest.
Theorem 27.14 (limits of sampling processes, Hagberg, OK) For n ∈N, let ξnj, j ≤mn →∞, be exchangeable random variables, and define the processes Xn with associated pairs (αn, νn) by (14) and (15). Then Xn sd − →X in D [0,1] ⇔ (αn, νn) wd − →(α, ν) in R × MR, where X is exchangeable on [0, 1] and directed by (α, ν).
Proof: Let τ1, τ2, . . . be i.i.d. U(0, 1) and independent of all ξnj, and define Y n t = Σ j ξnj1{τj ≤t} = αnt + Σ j ξnj 1{τj ≤t} −t , t ∈[0, 1].
Writing ˜ ξnk for the k-th jump from the left of Y n (including possible zero jumps when ξnj = 0), we note that (˜ ξnj) d = (ξnj) by exchangeability. Thus, ˜ Xn d = Xn, where ˜ Xn t is defined as in (14). Furthermore, d( ˜ Xn, Y n) →0 a.s. by Proposi-tion 5.24, where d is the metric in Theorem A5.4. Hence, by Theorem 5.29, it is equivalent to replace Xn by Y n. The assertion then follows by Theorem 27.12.
2 We are now ready to prove the main representation theorem for exchange-able processes on [0, 1]. As noted before, we consider only the case of d = 1.
Proof of Theorem 27.10: The sufficiency being obvious, it is enough to prove the necessity. Then let X have exchangeable increments. Introduce the step processes Xn t = X 2−n[2nt] , t ∈[0, 1], n ∈N, define νn as in (15) in terms of the jump sizes of Xn, and put αn ≡X1.
If the sequence (νn) is tight, then (αm, νn) wd − →(α, ν) along a sub-sequence, and Theorem 27.14 yields Xn sd − →Y along the same sub-sequence, where Y 27. Symmetric Distributions and Predictable Maps 625 has a representation as in (11). In particular, Xn fd − →Y , and so the finite-dimensional distributions of X and Y agree for dyadic times. The agreement extends to arbitrary times, since both processes are continuous in probability.
Then Lemma 4.24 shows that X has a version in D [0,1], whence Corollary 8.18 yields the desired representation.
To prove the required tightness of (νn), let ξnj denote the increments of Xn, put ζnj = ξnj −2−nαn, and note that ∥νn∥= Σ j ξ2 nj = Σ j ζ2 nj + 2−nα2 n.
(16) Writing ηn = 2Xn 1/2 −Xn 1 = 2X1/2 −X1, and noting that Σj ζnj = 0, we get the elementary estimates E(η4 n | νn) < ⌢Σ j ≥1 ζ4 nj + Σ i̸=j ζ2 ni ζ2 nj = Σ j ζ2 nj 2 < ⌢ E(η2 n | νn) 2.
Since ηn is independent of n, the sequence of sums Σj ζ2 nj is tight by Lemma 27.13, and the tightness of (νn) follows by (16).
2 Next we extend Theorem 14.3 to continuous time and higher dimensions.
Say that a random sequence ξ = (ξk) in Rd is rotatable11 if ξn U d = ξn for every finite sub-sequence ξn and orthogonal matrix U. A process X on R+ or [0, 1] is said to be rotatable, if it is continuous in probability with X0 = 0, and such that the increments over disjoint intervals of equal lengths are rotatable.
Proposition 27.15 (rotatable sequences and processes, Schoenberg, Freedman) (i) An infinite random sequence X = (ξi n) in Rd is rotatable iff ξi n = σi k ζk n a.s., i ≤d, n ∈N, for a random d × d matrix σ and some i.i.d. N(0, 1) random variables ζk n, k ≤d, n ∈N.
(ii) An Rd-valued process X = (Xi t) on I = [0, 1] or R+ is rotatable iff Xi t = σi k Bk t a.s., i ≤d, t ∈I, for a random d×d matrix σ and an Rd-valued Brownian motion B on I.
In both cases, σσ′ is X-measurable and L(X) is determined by L(σσ′).
Proof: (i) As in Theorem 14.3, the ξn are conditionally i.i.d. with a common distribution ν on Rd, such that all one-dimensional projections are Gaussian.
Letting ρ be the covariance matrix of ν, we may choose σ to be the non-negative square root of ρ, which yields the required representation. The last assertions are now obvious.
11short for rotation-invariant in distribution, also called spherical 626 Foundations of Modern Probability (ii) We may take I = [0, 1]. Then let h1, h2, . . . ∈L2[0, 1] be the Haar sys-tem of ortho-normal functions on [0, 1], and introduce the associated random vectors Yn = hn dX in Rd, n ∈N. Then the sequence Y = (Yn) is rotatable and hence representable as in (i), which translates into a representation of X as in (ii), for a Brownian motion B′ on the dyadic subset of [0, 1]. By Theorem 8.17, we may extend B′ to a Brownian motion B on the entire interval [0, 1], and since X is continuous in probability, the stated representation remains a.s.
valid for all t ∈[0, 1].
2 We turn to a continuous-time counterpart of the predictable sampling The-orem 27.7. Here we consider predictable mappings V on the index set I that are a.s. measure-preserving, in the sense that λ ◦V −1 = λ a.s. for Lebesgue measure λ on I = [0, 1] or R+. The associated transformations of a process X on I are given by (X ◦V −1)t = I 1{Vs ≤t} dXs, t ∈I, where the right-hand side is a stochastic integral of the predictable process Us = 1{Vs ≤t} with respect to the semi-martingale X. Note that if Xt = ξ[0, t] for a random measure ξ on I, the map t →(X◦V −1)t reduces to the distribution function of the transformed measure ξ ◦V −1, so that X ◦V −1 d = X becomes equivalent to ξ ◦V −1 d = ξ.
Theorem 27.16 (predictable invariance) Let X be an Rd-valued, F-exchange-able process on I = [0, 1] or R+, and let V be an F-predictable mapping on I.
Then λ ◦V −1 = λ a.s.
⇒ X ◦V −1 d = X.
This holds by definition when V is non-random. The result applies in par-ticular to L´ evy processes X. For a Brownian motion or bridge, we have the stronger statements in Theorem 19.9. With some effort, we can prove the result from the discrete-time version in Theorem 27.7 by a suitable approximation.
Here we use instead some general time-change results from Chapter 19, which were in turn based on ideas from Chapters 10 and 15. For clarity, we treat the cases of I = R+ or [0, 1] separately.
Proof for I = R+: Extending the original filtration if necessary, we may take the characteristics of X to be F0-measurable. First let X have isolated jumps. Then Xt = α t + σBt + t 0 x ξ(ds dx), t ≥0, for a Cox process ξ on R+ ×Rd directed by λ⊗ν and an independent Brownian motion B. We may also choose the triple (α, σ, ν) to be F0 -measurable and the pair (B, ξ) to be F-exchangeable.
Since ξ has compensator ˆ ξ = λ ⊗ν, we obtain ˆ ξ ◦V −1 = λ ⊗ν, which implies ξ ◦V −1 d = ξ by Lemma 19.14. Next define for every t ≥0 a predictable process Ut,r = 1{Vr ≤t}, r ≥0, and note that 27. Symmetric Distributions and Predictable Maps 627 ∞ 0 Us,rUt,r d[Bi, Bj]r = δij ∞ 0 1 Vr ≤s ∧t dr = (s ∧t) δij.
Using Lemma 19.14 again, we conclude that the Rd-valued process (B ◦V −1)t = ∞ 0 Ut,r dBr, t ≥0, is Gaussian with the same covariance function as B. The same result shows that ξ ◦V −1 and B ◦V −1 are conditionally independent given F0. Hence, (ξ, λ, B) ◦V −1, α, σ, ν d = ξ, λ, B, α, σ, ν , which implies X ◦V −1 d = X.
In the general case, we may write Xt = M ε t + (Xt −M ε t ), t ≥0, where M ε is the purely discontinuous local martingale formed by all compen-sated jumps of modulus ≤ε. Then (X −M ε)◦V −1 d = (X −M ε) as above, and it suffices to show that M ε t P →0 and (M ε ◦V −1)t P →0 as ε →0 for fixed t ≥0.
In the one-dimensional case, we may use the isometry property of stochastic L2-integrals in Theorem 20.2, along with the measure-preserving property of V , to see that EF0(M ε ◦V −1)2 t = EF0 ∞ 0 1{Vs ≤t} dM ε s 2 = EF0 ∞ 0 1{Vs ≤t} d⟨M ε⟩s, = EF0 ∞ 0 1{Vs ≤t} ds |x|≤ε x2ν(dx) = t |x|≤ε x2ν(dx) →0.
Hence, by Jensen’s inequality and dominated convergence, E M ε ◦V −12 t ∧1 ≤E EF0 M ε ◦V −12 t ∧1 →0, which implies (M ε ◦V −1)t P →0. Specializing to Vs ≡s, we get in particular M ε t P →0.
2 Proof for I = [0, 1]: Extending the original filtration if necessary, we may assume that the coefficients in the representation of X are F0 -measurable.
First truncate the sum of centered jumps in the representation of Theorem 27.10. Let Xn denote the remainder after the first n terms, and write Xn = Mn + ˆ Xn. Noting that tr[Mn]1 = Σj>n|βj|2 →0, and using the BDG inequali-ties in Theorem 20.12, we obtain (Mn◦V −1)t P →0 for every t ∈[0, 1]. Using the martingale property of (X1 −Xt)/(1 −t) and the symmetry of X, we further see that 628 Foundations of Modern Probability E 1 0 |d ˆ Xn| F0 < ⌢E(X∗ n| F0) < ⌢ tr[Xn] 1/2 1 = Σ j >n |βj|2 1/2 →0.
Hence, (Xn ◦V −1)t P →0 for all t ∈[0, 1], which reduces the proof to the case of finitely many jumps.
It is then enough to consider a jointly exchangeable pair, consisting of a marked point process ξ on [0, 1] and a Brownian bridge B in Rd. By a simple randomization, we may further reduce to the case where ξ has a.s. distinct marks. It is then equivalent to consider finitely many optional times τ1, . . . , τm, such that B and the point processes ξj = δτj are jointly exchangeable.
To apply Proposition 19.15, we may express the continuous martingale M as an integral with respect to the Brownian motion in Lemma 19.10, with the predictable process V replaced by the associated indicator functions U t r = 1{Vr ≤t}, r ∈[0, 1]. Then the covariations of the lemma become 1 0 U s r −¯ U s r U t r −¯ U t r dr = 1 0 U s r U t r dr −λU s · λU t = λU s∧t −λU s · λU t = s ∧t −st = E(BsBt), as required.
As for the random measures ξj = δτj, Proposition 10.23 yields the associated compensators ηj[0, t] = t∧τj 0 ds 1 −s = −log (1 −t ∧τj), t ∈[0, 1].
Since the ηj are diffuse, we get by Theorem 10.24 (ii) the corresponding dis-counted compensators ζj[0, t] = 1 −exp −ηj[0, t] = 1 −(1 −t ∧τj) = t ∧τj, t ∈[0, 1].
Thus, ζj = λ([0, τj] ∩·) ≤λ a.s., and so ζj ◦V −1 ≤λ ◦V −1 = λ a.s. Hence, Proposition 19.15 yields B ◦V −1, Vτ1, . . . , Vτm d = (B, τ1, . . . , τm), which implies X ◦V −1 d = X.
2 Exercises 1. Let μc be the joint distribution of τ1 < τ2 < · · · , where ξ = Σj δτj is a sta-tionary Poisson process on R+ with rate c > 0. Show that the μc are asymptotically invariant as c →0.
27. Symmetric Distributions and Predictable Maps 629 2. Let the random sequence ξ be conditionally i.i.d. η. Show that ξ is ergodic iff η is a.s. non-random.
3. Let ξ, η be random probability measures on a Borel space such that E ξ∞= E η∞. Show that ξ d = η. (Hint: Use the law of large numbers.) 4. Let ξ1, ξ2, . . . be contractable random elements in a Borel space S. Prove the existence of a measurable function f : [0, 1]2 →S and some i.i.d. U(0, 1) random variables ϑ0, ϑ1, . . . , such that ξn = f(ϑ0, ϑn) a.s. for all n. (Hint: Use Lemma 4.22, Proposition 8.20, and Theorems 8.17 and 27.2.) 5. Let ξ = (ξ1, ξ2, . . .) be an F-contractable random sequence in a Borel space S.
Prove the existence of a random measure η, such that for any n ∈Z+ the sequence θnξ is conditionally i.i.d. η, given Fn and η.
6. Describe the representation of Lemma 27.5, in the special case where ξ1, . . . , ξn are conditionally i.i.d. with directing random measure η.
7. Give an example of a finite, exchangeable sequence that is not mixed i.i.d. Also give an example of an exchangeable process on [0, 1] that is not mixed L´ evy.
8. For a simple point process or diffuse random measure ξ on [0, 1], show that ξ is exchangeable iffit is contractable.
9.
Say that a finite random sequence in S is contractable if all subsequences of equal length have the same distribution. Give an example of a finite random sequence that is contractable but not exchangeable. (Hint: We may take |S| = 3 and n = 2.) 10. Show that for |S| = 2, a finite or infinite sequence ξ in S is exchangeable iffit is contractable. (Hint: Regard ξ as a simple point process on {1, . . . , n} or N, and use Theorem 15.8.) 11. State and prove a continuous-time version of Lemma 27.6. (If no regularity conditions are imposed on the exchangeable processes in Theorem 27.9, we need to consider optional times taking countably many values.) 12. Let ξ1, . . . , ξn be exchangeable random variables, fix a Borel set B, and let τ1 < · · · < τν be the indices k ∈{1, . . . , n} with Σj<k ξj ∈B. Construct a random vector (η1, . . . , ηn) d = (ξ1, . . . , ξn) with ξτk = ηk a.s. for all k ≤ν. (Hint: Extend the sequence (τk) to k ∈(ν, n], and apply Theorem 27.7.) 13. Prove a version of Corollary 27.8 for the last maximum.
14. Use Proposition 27.16 to give direct proofs of the relation τ1 d = τ2 in Theorems 14.16 and 14.17. (Hint: Proceed as in Theorem 27.8.) 15.
Show that any Rd-valued, contractable process on R+ has a version with rcll paths. (Hint: Use the corresponding regularity property of L´ evy processes in Chapter 16.
Alternatively, use the the result for one-dimensional, exchangeable processes on [0, 1] in Theorem 27.10.) 16. Let ξ1, . . . , ξn be exchangeable random variables, and define Xt = Σ kξk1{τk ≤ t}, t ∈[0, 1], where τ1 < · · · , τn form a uniform binomial process on [0, 1]. Show that X is an exchangeable process on [0, 1], write it is the form of Theorem 27.10, and identify the characteristics of X.
17. Let X be a real, continuous, exchangeable process on R+ with t−1Xt →0 a.s.
Show that X is rotatable.
630 Foundations of Modern Probability 18. Describe the representation of Theorem 27.10, in the special case where X is a L´ evy process with characteristics (a, b, ν).
19. Specialize Theorem 27.14 to suitably normalized sequences of i.i.d. random variables, and compare with Corollary 23.6.
20. For continuous exchangeable processes X on [0, 1] or R+, show that Theorem 27.16 follows from Theorem 19.9. In what sense is the latter result more general?
(Hint: Note that in this case, Xt = αt + σBt for a Brownian motion or bridge B.) 21. For simple exchangeable point processes ξ on [0, 1] or R+, show that Theorem 27.16 follows from Theorem 10.27 or 15.15. (Hint: Note that ξ is exchangeable iffit is a mixed Poisson or binomial process based on λ.) 22. Let M be a continuous local martingale with [M]∞= ∞, and let ξ be a ql-continuous simple point process on R+ with unbounded compensator τξ. Time-change M and ξ as in Theorems 15.15 and 19.2 to form a Brownian motion B and a Poisson process η. Show that B⊥ ⊥η. (Hint: Use Lemma 19.14.) 23. Let F be the filtration induced by a Brownian motion B and some indepen-dent i.i.d. U(0, 1) times τ1, τ2, . . . , and let V1, V2, . . . be F-predictable, a.s. measure-preserving maps on [0, 1]. Show that the variables σk = Vk ◦τk are i.i.d. U(0, 1) and independent of B. (Hint: Use Lemma 19.15.) Chapter 28 Multi-variate Arrays and Symmetries Separately or jointly exchangeable arrays, contraction and extension prop-erties, coding representations, equivalent coding functions, coupling, con-ditional independence, shell σ-field, independent entries, invariant cod-ing, symmetric functions, inversion, conditioning and independence, shell-measurable and dissociated arrays, separate or joint rotatability, totally rotatable arrays, symmetric partitions Our survey of modern probability would be incomplete without a discussion of higher-dimensional random arrays, which appear frequently in a wide range of applications.
The most obvious use may be to describe the interactions between the nodes in a random graph or network, where Xij represents the interaction between nodes i and j. Here even higher order interactions such as Xijk are conceivable. The interactions may take values in an abstract space S, and we may allow the one-sided interactions i →j and j →i to be different, in general. The simplest example is the adjacency matrix between the nodes of a random graph.
For a totally different application, consider a Hilbert-space valued random process ξh, h ∈H, with associated inner products ρh,k = ⟨ξh, ξk⟩, for arbitrary h, k ∈H. Fixing an ortho-normal basis (ONB) h1, h2, . . . in H, we obtain a discrete array Xij = ρhi,hj. We may also think of X = (Xij) as the coordinate representation of an operator on H, representing an observable quantity in quantum mechanics. More mundane examples of random arrays appear fre-quently in statistics, such as in the context of U-statistics. We may further mention the case of random partitions of a fixed, countable set.
In all those applications, it is natural to impose suitable symmetry condi-tions. For the graph interactions, we may think of all nodes as equivalent, or we may enumerate them in random order, which leads to a condition of joint exchangeability of the interaction array X. If X instead codifies the interac-tions between two different kinds of nodes, those may be numbered separately, which suggests an assumption of separate exchangeability. For processes on a Hilbert space, we may think of all ortho-normal bases as equivalent, which leads to a condition of joint rotatability. Indeed, for operators in quantum me-chanics there is no preferred ONB, simply because the individual coordinate representations are not observable.
In this chapter, we derive representations of multi-variate random arrays with exchangeable, contractable, or rotatable symmetries. In the exchange-able case, there are no simple mixing representations in terms of i.i.d. arrays, and all higher-dimensional representations are instead extensions of the coding 631 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 632 Foundations of Modern Probability representation in Corollary 27.3. They yield in particular a remarkable exten-sion property, generalizing the classical equivalence between exchangeable and contractable sequences.
The higher-dimensional proofs are quite intricate, and involve a subtle use of notions and results for conditional independence from Chapter 8. The rotatable case is even harder, as it leads in general to representations in terms of multiple Wiener-Itˆ o integrals such as in Chapter 14. Here we discuss only the relatively elementary representations in two dimensions, which can be stated in terms of i.i.d. Gaussian random variables. We finally include a version of Kingman’s paintbox representation for exchangeable and related partitions.
To prepare for our more precise technical discussions, we say that a ran-dom array X = (Xk), indexed by vectors k = (k1, . . . , kd) ∈Nd, is sepa-rately exchangeable, if its distribution is invariant under arbitrary permuta-tions p1, . . . , pd in the d indices.
We also consider the weaker property of joint exchangeability, where the same invariance is only required for a single permutation p = (p1, p2, . . .) of N, so that Xk1,...,kd d = Xp(k1),...,p(kd), k ∈Nd.
(1) In the latter case, it is equivalent and more natural to consider arrays X indexed by the non-diagonal set N(d), where all indices k1, . . . , kd are different.
Replacing the permutations in (1) by sub-sequences p1 < p2 < · · · yields the even weaker notion of (joint) contractability1. Here it is equivalent and often preferable to consider arrays indexed by the tetrahedral set N↑d of increasing sequences k1 < . . . < kd, which may be identified with the corresponding sets of components ˜ k = {k1, . . . , kd}. Define ˆ Nd = ∪ k≤dNk, ˜ Nd = ∪ k≤dN↑k, ˆ N(d) = ∪ k≤dN(k).
By a U-array we mean an indexed set of i.i.d. U(0, 1) random variables2.
For any U-array ξ on ˜ Nd and vector k ∈N(d) or set J ∈N↑d, we write ˆ ξk = {ξ˜ h; h ⊂k}, ˆ ξJ = {ξI; I ⊂J}, where h ⊂k means that h is a sub-sequence of k, and ˜ h denotes the set of components of the vector h. Here the order of enumeration, important for subsequent representations, is determined by the order of the k-components.
Thus, the index set for d = 2 consists of the sets ∅, {k1}, {k2}, {k1, k2}, whereas for d = 3 we have the index set ∅, {k1}, {k2}, {k3}, {k1, k2}, {k1, k3}, {k2, k3}, {k1, k2, k3}.
The jointly exchangeable and contractable arrays can be characterized by the following coding representations, extending the one-dimensional version in Corollary 27.3. Its proof will take up much of the present chapter.
1Here the qualification ‘joint’ can be dropped, since for infinite arrays the notions of separate exchangeability or contractability are equivalent by Ryll-Nardzewski’s theorem.
2There is nothing special about the uniform distribution, and we could use any other diffuse distribution on a Borel space.
28. Multi-variate Arrays and Symmetries 633 Theorem 28.1 (jointly exchangeable and contractable arrays, Aldous, Hoover, OK) For a d-dimensional random array X in a Borel space S, we have (i) X = (Xk) is jointly exchangeable on ˆ N(d) iffthere exist a measurable function f : [0, 1]2d →S and a U-array ξ = (ξJ) on ˜ Nd, such that Xk = f(ˆ ξk) a.s., k ∈ˆ N(d), (ii) X = (XJ) is contractable on ˜ Nd iffit can be extended to a jointly ex-changeable array on ˆ N(d), and hence can be represented as in (i), (iii) the array in (ii) is exchangeable on ˜ Nd iffwe can choose the representing function f to be symmetric.
When d = 2, the representation in (i) becomes Xij = f(α, ξi, ξj, ζij), i ̸= j in N, (2) for a measurable function f : [0, 1]4 →S and some i.i.d. U(0, 1) random vari-ables α, ξi, and ζij = ζji. Part (i) yields a similar representation of separately exchangeable arrays, where the representing U-array is now indexed by ˆ Nd, and ˆ ξk is indexed by all sub-sequences h ⊂k. Thus, the index set for d = 2 has elements ∅, k1, k2, (k1, k2), where ∅denotes the empty sequence, whereas for d = 3 we have the index set ∅, k1, k2, k3, (k1, k2), (k1, k3), (k2, k3), (k1, k2, k3).
The representation in the separately exchangeable case is the same as before, apart from the dependencies between coding variables.
Corollary 28.2 (separately exchangeable arrays) An S-valued array X = (Xk) on Nd is separately exchangeable iffthere exist a measurable function f : [0, 1]2d →S and a U-array ξ = (ξh) on ˆ Nd, such that Xk = f(ˆ ξk) a.s., k ∈Nd.
(3) Thus, for d = 2 we get the representation Xij = f(α, ξi, ηj, ζij), i, j ∈N, (4) for a measurable function f : [0, 1]4 →S and some i.i.d. U(0, 1) random vari-ables α, ξi, ηj, ζij.
Proof: If X is separately exchangeable, it is also jointly exchangeable and hence can be represented as in Theorem 28.1 (i), in terms of a U-array ξ = (ξJ) on ˜ Nd. Now choose any disjoint, countable sets N1, . . . , Nd ⊂N, and let Y be the restriction of X to N1 × · · · × Nd. Then the representation of Y reduces to the form (3), and since X d = Y apart from index sets, Theorem 8.17 yields the same representation for X.
2 634 Foundations of Modern Probability Before proceeding to the proof of Theorem 28.1, we note that the repre-senting functions are far from unique. It then becomes important to determine when two functions f and g can be used to represent the same array. The answer differs in the three cases, and for convenience we consider only jointly exchangeable arrays of dimension 2, where the more explicit statements may clarify the basic ideas. See also the one-dimensional version in Lemma 27.4.
Put I = [0, 1] for convenience.
Theorem 28.3 (equivalent coding functions, Hoover, OK) For any measur-able functions f, g : I4 →S and i.i.d. U(0, 1) random variables α, ξi, ζij = ζji and α′, ξ′ i, ζ′ ij = ζ′ ji, these conditions are equivalent 3: (i) there exist some variables {˜ α, ˜ ξi, ˜ ζij; i ̸= j} d = {α, ξi, ζij; i ̸= j}, such that f(α, ξi, ξj, ζij) = g(˜ α, ˜ ξi, ˜ ξj, ˜ ζij) a.s., i ̸= j, (ii) f(α, ξi, ξj, ζij); i ̸= j d = g(α, ξi, ξj, ζij); i ̸= j , (iii) there exist some measurable functions T, T ′ : I →I, U, U ′ : I2 →I, V, V ′ : I4 →I, where V, V ′ are symmetric and all functions preserve λ in the highest order arguments, such that f T(α), U(α, ξi), U(α, ξj), V (α, ξi, ξj, ζij) = g T ′(α), U ′(α, ξi), U ′(α, ξj), V ′(α, ξi, ξj, ζij) , (iv) there exist some measurable functions T : I2 →I, U : I4 →I, V : I8 →I, where V is symmetric and all functions map λ2 into λ in the highest order arguments, such that f(α, ξi, ξj, ζij) = g T(α, α′), U(α, α′, ξi, ξ′ i), U(α, α′, ξj, ξ′ j), V (α, α′, ξi, ξ′ i, ξj, ξ′ j, ζij, ζ′ ij) .
Note that (iii) is expressed in terms of six functions T, T ′, U, U ′, V, V ′, whereas (iv) requires only the three functions T, U, V . On the other hand, we need4 in (iv) (but not in (iii)) some external randomization variables α′, ξ′ i, ζ′ ij = ζ′ ji.
3This result is often misunderstood: it is not enough in (iii) to consider measure-preserving transformations U, V or U ′, V ′ of a single variable, nor is (iv) true without the additional randomization variables α′, ξ′ i, ζ′ ij.
4This is essentially because a permutation on N is invertible, whereas a measure-preserving transformation on [0, 1] is not in general, which is why we can’t solve for f in (iii) without a suitable randomization.
28. Multi-variate Arrays and Symmetries 635 To prepare for the proofs of Theorems 28.1 and 28.3, we begin with some general lemmas of independent interest.
Lemma 28.4 (coupling) Let X, Y, Z be S-valued arrays with X⊥ ⊥Y Z. Then (i) if (X, Y ), (Y, Z) are exchangeable on ˆ N(d), so is (X, Y, Z), (ii) if (X, Y ), (Y, Z) are contractable on ˜ Qd, so is (X, Y, Z).
Proof: (i) Writing μ(Y, ·) = L(X | Y ), we get for any measurable sets A, B and permutation p on N P (X, Y, Z) ◦p ∈A × B = E P X ◦p ∈A (Y, Z) ◦p ; (Y, Z) ◦p ∈B = E P X ◦p ∈A Y ◦p ; (Y, Z) ◦p ∈B = E μ(Y ◦p, A); (Y, Z) ◦p ∈B = E μ(Y, A); (Y, Z) ∈B = E P{X ∈A | Y, Z}; (Y, Z) ∈B = P (X, Y, Z) ∈A × B , which extends immediately to (X, Y, Z) ◦p d = (X, Y, Z).
(ii) For any a < b in Q+ and t ∈Q, we define on Q the functions pa,t(x) = x + a 1{x > t}, pa = pa,0, pb a(x) = x + a (1 −b−1x) 1{0 < x < b}, and note that pb a ◦pb = pb. The contractability of (X, Y ) yields (X, Y ) ◦pb a d = (X, Y ), and so (X ◦pb, Y ) d = X ◦pb a ◦pb, Y ◦pb a = X ◦pb, Y ◦pb a .
By Corollary 8.10, we get (X ◦pb) ⊥ ⊥ Y ◦pb a Y, and since σ(Y ◦pb a) = σ(Y ◦pa) and σ(Y ◦pb; b > a) = σ(Y ◦pa), a monotone-class argument yields the extended relation (X ◦pa) ⊥ ⊥ Y ◦paY.
Since also (X ◦pa)⊥ ⊥Y (Z ◦pa) by hypothesis, Theorem 8.12 gives (X ◦pa) ⊥ ⊥ Y ◦pa(Z ◦pa).
Combining this with the hypothetical relations X⊥ ⊥Y Z, (X, Y ) ◦pa d = (X, Y ), (Y, Z) ◦pa d = (Y, Z), 636 Foundations of Modern Probability we see as in case (i) that (X, Y, Z) ◦pa d = (X, Y, Z), a ∈Q+.
Relabeling Q if necessary, we get the same relation for the more general map-pings pa,t. For any finite sets I, J ⊂Q with |I| = |J|, we may choose com-positions p and q of such functions satisfying p(I) = q(J), and the required contractability follows.
2 Lemma 28.5 (conditional independence) Let (X, ξ) be a contractable array on ˜ Zd with restriction (Y, η) to ˜ Nd, where ξ has independent entries. Then Y ⊥ ⊥ η ξ.
Proof: Putting pn(k) = k −n 1{k ≤0}, k ∈Z, n ∈N, we get by the contractability of (X, ξ) (Y, ξ ◦pn) d = (Y, ξ), n ∈N.
Since also σ{ξ ◦pn} ⊂σ{ξ}, Corollary 8.10 yields Y ⊥ ⊥ ξ ◦pnξ, n ∈N.
Further note that n σ{ξ ◦pn} = σ{η} a.s. by Corollary 9.26. The assertion now follows by martingale convergence as n →∞.
2 For any S-valued random array X on Nd, we introduce the associated shell σ-field S(X) = ∩n σ Xk; max i ki ≥n .
Proposition 28.6 (shell σ-field, Aldous, Hoover) Let the array X on Zd + be separately exchangeable under permutations on N, and write X+ for the restriction of X to Nd. Then X+ ⊥ ⊥ S(X+)(X \ X+).
Proof: When d = 1, X is just a random sequence (η, ξ1, ξ2, . . .) = (η, ξ), and S(X+) reduces to the tail σ-field T . Applying de Finetti’s theorem to both ξ and (ξ, η) gives L(ξ | μ) = μ∞, L(ξ | ν, η) = ν∞, for some random probability measures μ and ν on S. By the law of large numbers, μ and ν are T -measurable with μ = ν a.s., and so ξ⊥ ⊥T η by Corollary 8.10.
Assuming the truth in dimension d −1, we turn to the d -dimensional case.
Define an S∞-valued array Y on Zd−1 + by 28. Multi-variate Arrays and Symmetries 637 Ym = Xm,k; k ≥0 , m ∈Zd−1 + , and note that Y is separately exchangeable under permutations on N. Writing Y + for the restriction of Y to Nd−1 and X0 for the restriction of X to Nd−1×{0}, we get by the induction hypothesis (X+, X0) = Y +⊥ ⊥ S(Y +)(Y \ Y +) = X \ (X+, X0).
Since also S(Y +) ⊂S(X+) ∨σ(X0) ⊂σ(Y +), we obtain X+ ⊥ ⊥ S(X+), X0(X \ X+).
(5) Next we apply the result for d = 1 to the S∞-valued sequence Zk = Xm,k; m ∈Nd−1 , k ≥0, to obtain X+ = (Z \ Z0) ⊥ ⊥ S(Z) Z0 = X0.
Since clearly S(Z) ⊂S(X+) ⊂σ(X+), we conclude that X+⊥ ⊥ S(X+)X0.
Now combine with (5) and use Theorem 8.12 to get the desired relation.
2 We need only the following corollary. Given an array ξ on Zd + and a set I ⊂ Nd = {1, . . . , d}, we write ξI for the sub-array on the index set NI × {0}Ic, and put ˆ ξJ = (ξI; I ⊂J). Further define ξm = (ξI; |I| = m) and ˆ ξm = (ξI; |I| ≤ m), and similarly for arrays on ˆ N↑d. Recall that N↑d = (J ⊂N; |J| = d) and ˜ Nd = (J ⊂N; |J| ≤d).
Corollary 28.7 (independent entries) (i) Let ξ be a random array on Zd + with independent entries, separately ex-changeable under permutations on N, and define ηk = f(ˆ ξk), k ∈Zd +, for a measurable function f. Then even η has independent entries iff ηk ⊥ ⊥(ˆ ξk \ ξk), k ∈Zd +, (6) in which case ηI⊥ ⊥(ξ \ ξI), I ⊂Nd.
(7) (ii) Let ξ be a contractable array on ˜ Nd with independent entries, and define ηJ = f(ˆ ξJ), J ∈˜ Nd, (8) for a measurable function f. Then even η has independent entries iff ηJ⊥ ⊥(ˆ ξJ \ ξJ), J ∈˜ Nd, (9) in which case ηk⊥ ⊥(ξ \ ξk), k ≤d.
(10) 638 Foundations of Modern Probability Proof: (i) Assume (6). Since also ˆ ξk⊥ ⊥(ξ \ ˆ ξk), we get ηk⊥ ⊥(ξ \ ξk), and so ηk ⊥ ⊥ ξ \ ξk, ˆ η|k| \ ηk , k ∈Zd +, where |k| is the number of k -components in N. As in Lemma 4.8, this yields both (7) and the independence of the η -entries.
Conversely, let η have independent entries. Then S(ηI) is trivial for ∅̸= I ⊂Nd by Kolmogorov’s 0−1 law. Applying Lemma 28.6 to the ZI +-indexed array (ηI, ˆ ξI \ ξI), which is clearly separately exchangeable on NI, we obtain ηI⊥ ⊥(ˆ ξI \ ξI), and (6) follows.
(ii) Assume (9). Since also ˆ ξJ⊥ ⊥(ξ \ ˆ ξJ), we obtain ηJ⊥ ⊥(ξ \ ξJ), and so ηJ⊥ ⊥ ξ \ ξd, ˆ ηd \ ηJ , J ∈˜ Nd.
Thus, η has independent entries and satisfies (10).
Conversely, let η have independent entries.
Extend ξ to a contractable array on Q↑d, and extend η accordingly to Q↑d by means of (8). Define (ξ′ k, η′ k) = (ξ, η) r(1,k1),...,r(d,kd), k ∈Zd +, where r(i, k) = i + k(k + 1)−1. Then ξ′ is separately exchangeable on Nd with independent entries, and (8) yields η′ k = f(ˆ ξ′ k) for all k ∈Zd +. Since η′ ⊂η has again independent entries, (i) yields η′ k⊥ ⊥(ˆ ξ′ k \ξ′ k) for all k ∈Nd, or equivalently ηJ⊥ ⊥(ˆ ξJ \ ξJ) for suitable J ∈Q↑d. The general relation (9) now follows by the contractability of ξ.
2 Lemma 28.8 (coding) Let ξ = (ξj) and η = (ηj) be random arrays on a countable index set I, satisfying (ξi, ηi) d = (ξj, ηj), i, j ∈I, (11) ξj ⊥ ⊥ ηj (ξ \ ξj, η), j ∈I.
(12) Then there exist some measurable functions f, g, such that for any U-array ϑ⊥ ⊥(ξ, η) on I, the random variables ζj = g(ξj, ηj, ϑj), j ∈I, (13) form a U-array ζ⊥ ⊥η satisfying ξj = f(ηj, ζj) a.s., j ∈I.
(14) Proof: For any i ∈I, Theorem 8.17 yields a measurable function f, such that (ξi, ηi) d = f(ηi, ϑi), ηi .
The same theorem provides some measurable functions g and h, such that the random elements ζi = g(ξi, ηi, ϑi), ˜ ηi = h(ξi, ηi, ϑi) satisfy 28. Multi-variate Arrays and Symmetries 639 (ξi, ηi) = f(˜ ηi, ζi), ˜ ηi a.s., (˜ ηi, ζi) d = (ηi, ϑi).
In particular ˜ ηi = ηi a.s., and so (14) holds for j = i with ζi as in (13).
Furthermore, ζi is U(0, 1) and independent of ηi. The two statements extend by (11) to arbitrary j ∈I.
Combining (12) with the relations ϑ⊥ ⊥(ξ, η) and ϑj⊥ ⊥(ϑ \ ϑj), we get (ξj, ηj, ϑj)⊥ ⊥ ηj ξ \ ξj, η, ϑ \ ϑj , j ∈I, which implies ζj ⊥ ⊥ ηj (ζ \ ζj, η), j ∈I.
Since also ζj⊥ ⊥ηj, Theorem 8.12 yields ζj⊥ ⊥(ζ \ ζj, η). Then Lemma 4.8 shows that the array η and the elements ζj, j ∈I, are all independent. Thus, ζ is indeed a U-array independent of η.
2 Proposition 28.9 (invariant coding) Let G be a finite group acting measur-ably on a Borel space S, and let ξ, η be random elements in S satisfying r(ξ, η) d = (ξ, η), r η ̸= η a.s., r ∈G.
Then for any U(0, 1) random variable ϑ⊥ ⊥(ξ, η), (i) there exists a measurable function f : S×[0, 1] →S and a U(0, 1) random variable ζ⊥ ⊥η, such that a.s.
r ξ = f(r η, ζ), r ∈G, (ii) we may choose ζ = b(ξ, η, ϑ), for a measurable function b: S2 × [0, 1] → [0, 1] satisfying b(rx, ry, t) = b(x, y, t), x, y ∈S, r ∈G, t ∈[0, 1].
Our proof relies on a simple algebraic fact: Lemma 28.10 (invariant functions) Let G be a finite group acting measur-ably on a Borel space S, such that rs ̸= s for all r ∈G and s ∈S. Then there exists a measurable function h: S →G, such that (i) h(rs) r = h(s), r ∈G, s ∈S, (ii) for any map b: S →S, the function f(s) = h−1 s b(hs s), s ∈S, satisfies f(rs) = rf(s), r ∈G, s ∈S.
640 Foundations of Modern Probability Proof: (i) We may take S ⊂R. For any s ∈S, the elements rs are all different, and we may choose hs to be the element r ∈G maximizing rs. Then hrs is the element p ∈G maximizing p(rs), so that pr maximizes (pr)s, and hence hrsr = p r = hs.
(ii) Using (i) and the definition of f, we get for any r ∈G and s ∈S f(rs) = h−1 rs b(hrs rs) = h−1 rs b(hs s) = h−1 rs hs f(s) = r f(s).
2 Proof of Lemma 28.9: (i) Assuming rη ̸= η identically, we may define h as in Lemma 28.10. Writing ˜ ξ = hη ξ and ˜ η = hη η, we have for any r ∈G (˜ ξ, ˜ η) = hη(ξ, η) = hrη r(ξ, η).
Letting γ⊥ ⊥(ξ, η) be uniformly distributed in G, we get by Fubini’s theorem γ(ξ, η) d = (ξ, η), (γhη, ξ, η) d = (γ, ξ, η).
Combining those relations and using Lemma 28.10 (i), we obtain (η, ˜ ξ, ˜ η) = η, hη ξ, hη η d = γη, hγηγ ξ, hγηγ η = γη, hη ξ, hη η d = γhη η, hη ξ, hη η = (γ˜ η, ˜ ξ, ˜ η).
Since also γ˜ η⊥ ⊥˜ η ˜ ξ by the independence γ⊥ ⊥(ξ, η), we get η⊥ ⊥˜ η ˜ ξ. Hence, Propo-sition 8.20 yields a measurable function g: S ×[0, 1] →S and a U(0, 1) random variable ζ⊥ ⊥η, such that a.s.
hη ξ = ˜ ξ = g(˜ η, ζ) = g(hη η, ζ).
This shows that ξ = f(η, ζ) a.s., where f(s, t) = h−1 s g(hss, t), s ∈S, t ∈[0, 1], for a measurable extension of h to S. Using Lemma 28.10 (ii), we obtain a.s.
rξ = rf(η, ζ) = f(rη, ζ), r ∈G.
(ii) Defining rt = t for r ∈G and t ∈[0, 1], we have by (i) and the independence ζ⊥ ⊥η 28. Multi-variate Arrays and Symmetries 641 r(ξ, η, ζ) = (rξ, rη, ζ) d = (ξ, η, ζ), r ∈G.
Proceeding as in (i), we may choose a measurable function b: S2×[0, 1] →[0, 1] and a U(0, 1) random variable ϑ⊥ ⊥(ξ, η), such that a.s.
ζ = b(rξ, rη, ϑ) a.s., r ∈G.
Averaging over G, we may assume relation (ii) to hold identically. Letting ϑ′⊥ ⊥(ξ, η) be U(0, 1) and putting ζ′ = b(ξ, η, ϑ′), we get (ξ, η, ζ′) d = (ξ, η, ζ), and so (i) remains true with ζ replaced by ζ′.
2 We proceed with some elementary properties of symmetric functions. Note that if two arrays ξ, η are related by ηJ = f(ˆ ξJ), then ˆ ηJ = ˆ f(ˆ ξJ) with ˆ f xI; I ⊂Nd = f(xJ◦I; I ⊂N|J|); J ⊂Nd .
Lemma 28.11 (symmetric functions) Let ξ be a random array on ˜ Nd, and let f, g be measurable functions between suitable spaces. Then (i) ηJ = f(ˆ ξJ) on ˜ Nd ⇒ˆ ηJ = ˆ f(ˆ ξJ) on ˜ Nd, (ii) ηk = f(ˆ ξk) on ˆ N(d) with f symmetric ⇒ˆ ηk = ˆ f(ˆ ξk) on ˆ N(d), (iii) when f, g are symmetric, so is f ◦ˆ g, (iv) for exchangeable ξ and symmetric f, the array ηJ = f(ˆ ξJ) on N↑d is again symmetric.
Proof: (i) This just restates the definition.
(ii) Let k ∈N(d) and J ⊂Nd. By the symmetry of f and the definition of ˆ ξ, f ξk◦(J◦I); I ⊂N|J| = ξ(k◦J)◦I; I ⊂N|J| = f(ˆ ξk◦I).
Hence, by the definitions of ˆ ξ, ˆ f, η, ˆ η, ˆ f(ˆ ξk) = ˆ f ξk◦I; I ⊂Nd = f ξk◦(J◦I); I ⊂N|J| ; J ⊂Nd = f(ˆ ξk◦J); J ⊂Nd = ηk◦J; J ⊂Nd = ˆ ηk.
(iii) Put ηk = g(ˆ ξk) and h = f ◦ˆ g.
Applying (ii) to g and using the symmetry of f, we get for any vector k ∈N(d) with associated subset ˜ k ∈N↑d h(ˆ ξk) = f ◦ˆ g(ˆ ξk) = f(ˆ ηk) = f(ˆ η˜ k) = f ◦ˆ g(ˆ ξ˜ k) = h(ˆ ξ˜ k).
642 Foundations of Modern Probability (iv) Fix any J ∈N↑d and a permutation p of Nd. Using the symmetry of f, we get as in (ii) (η ◦p)J = ηp◦J = f(ˆ ξp◦J) = f ξ(p◦J)◦I; I ⊂Nd = f ξp◦(J◦I); I ⊂Nd = f (ξ ◦p)J◦I; I ⊂Nd = f (ξ ◦p) ˆ J .
Thus, η ◦p = g(ξ ◦p) for a suitable function g, and so the exchangeability of ξ carries over to η.
2 Proposition 28.12 (inversion) Let ξ be a contractable array on ˜ Nd with in-dependent entries, and fix a measurable function f, such that the array ηJ = f(ˆ ξJ), J ∈˜ Nd, (15) has independent entries. Then (i) there exist some measurable functions g, h, such that for any U-array ϑ⊥ ⊥ξ on ˜ Nd, the variables ζJ = g(ˆ ξJ, ϑJ), J ∈˜ Nd, (16) form a U-array ζ⊥ ⊥η satisfying ξJ = h(ˆ ηJ, ˆ ζJ) a.s., J ∈˜ Nd, (17) (ii) when f is symmetric, we can choose g, h with the same property.
Proof: (i) Write ξd, ˆ ξd for the restrictions of ξ to N↑d and ˜ Nd, respectively, and similarly for η, ζ, ϑ. For d = 0, Lemma 28.8 applied to the pair (ξ∅, η∅) yields some functions g, h, such that when ϑ∅⊥ ⊥ξ is U(0, 1), the variable ζ∅= g(ξ∅, ϑ∅) becomes U(0, 1) with ζ∅⊥ ⊥η∅and ξ∅= h(η∅, ζ∅) a.s.
Proceeding by recursion on d, let g, h be such that (16) defines a U-array ˆ ζd−1⊥ ⊥ˆ ηd−1 on ˜ Nd−1 satisfying (17) on ˜ Nd−1. To extend the construction to dimension d, we note that ξJ ⊥ ⊥ ˆ ξ′ J (ˆ ξd \ ξJ), J ∈N↑d, since ξ has independent entries. Then (15) yields ξJ ⊥ ⊥ ˆ ξ′ J, ηJ ˆ ξd \ ξJ, ˆ ηd , J ∈N↑d, and by contractability we see that (ˆ ξJ, ηJ) = (ξJ, ˆ ξ′ J, ηJ) has the same distri-bution for all J ∈N↑d. Hence, Lemma 28.8 yields some measurable functions gd, hd, such that the random variables ζJ = gd ˆ ξJ, ηJ, ϑJ , J ∈N↑d, (18) form a U-array ζd on N↑d satisfying ζd⊥ ⊥ ˆ ξd−1, ηd , (19) 28. Multi-variate Arrays and Symmetries 643 ξJ = hd ˆ ξ′ J, ηJ, ζJ a.s., J ∈N↑d.
(20) Inserting (15) into (18), and (17) with J ∈˜ Nd−1 into (20), we obtain ζJ = gd ˆ ξJ, f(ˆ ξJ), ϑJ , ξJ = hd ˆ h′(ˆ η′ J, ˆ ζ′ J), ηJ, ζJ a.s., J ∈N↑d, for a measurable function ˆ h′.
Thus, (16) and (17) remain valid on ˜ Nd for suitable extensions of g, h.
To complete the recursion, we need to show that ˆ ζd−1, ζd, ˆ ηd are indepen-dent. Then note that ˆ ϑd−1⊥ ⊥(ˆ ξd, ϑd), since ϑ is a U-array independent of ξ.
Hence, (15) and (16) yield ˆ ϑd−1⊥ ⊥ ˆ ξd−1, ηd, ζd .
Combining this with (19) gives ζd⊥ ⊥ ˆ ξd−1, ηd, ˆ ϑd−1 , and so by (15) and (16) we have ζd⊥ ⊥(ˆ ηd, ˆ ζd−1).
Now only ˆ ηd⊥ ⊥ˆ ζd−1 remains to be proved. Since ξ and η are related by (15) and have independent entries, Lemma 28.7 (ii) yields ηd⊥ ⊥ˆ ξd−1, which extends to ηd⊥ ⊥(ˆ ξd−1, ˆ ϑd−1) since ϑ⊥ ⊥(ξ, η). Hence, ηd⊥ ⊥(ˆ ηd−1, ˆ ζd−1) by (15) and (16).
Since also ˆ ηd−1⊥ ⊥ˆ ζd−1 by the induction hypothesis, we have indeed ˆ ηd⊥ ⊥ˆ ζd−1.
(ii) Here a similar recursive argument applies, except that now we need to invoke Lemmas 28.9 and 28.11 in each step, to ensure the desired symmetry of g, h. We omit the details.
2 For the next result, we say that an array is representable, if it can be represented as in Theorem 28.1.
Lemma 28.13 (augmentation) Suppose that all contractable arrays on ˜ Nd are representable, and let (X, ξ) be a contractable array on ˜ Nd, where ξ has independent entries. Then there exist a measurable function f and a U-array η⊥ ⊥ξ on ˜ Nd, such that XJ = f ˆ ξJ, ˆ ηJ a.s., J ∈˜ Nd.
Proof: Since (X, ξ) is representable, there exist some measurable functions g, h and a U-array ζ, such that a.s.
XJ = g(ˆ ζJ), ξJ = h(ˆ ζJ), J ∈˜ Nd.
(21) Then Lemma 28.12 yields a function k and a U-array η⊥ ⊥ξ, such that ζJ = k ˆ ξJ, ˆ ηJ a.s., J ∈˜ Nd.
Inserting this into (21) gives the desired representation of X with f = g ◦ˆ k. 2 −−− We are now ready to prove the main results, beginning with Theorem 28.1 (i). Here our proof is based on the following lemma.
644 Foundations of Modern Probability Lemma 28.14 (recursion) Suppose that all exchangeable arrays on ˆ N(d−1) are representable. Then any exchangeable array X on ˆ N(d) can be represented on ˆ N(d−1) in terms of a U-array ξ on ˜ Nd−1, such that the pair (X, ξ) is exchangeable and the arrays ˜ Xk = (Xh; h ∼k) satisfy ˜ Xk ⊥ ⊥ ˆ ξk X \ ˜ Xk, ξ , k ∈N(d).
(22) Proof: Let ¯ X be a stationary extension of X to ˆ Z(d). For r = (r1, . . . , rm) ∈ˆ Z(d), let r+ be the sub-sequence of elements rj > 0. For k ∈ˆ N(d−1) and r ∈ˆ Z(d) with r+ ∼(1, . . . , |k|), write k ◦r = (kr1, . . . , krm), where krj = rj when rj ≤0. On ˆ N(d−1), we introduce the array Yk = ¯ Xk◦r; r ∈ˆ Z(d), r+ ∼(1, . . . , |k|) , k ∈ˆ N(d−1), so that informally Yk = { ¯ Xh; h+ ∼k} with a specified order of enumeration.
The exchangeability of ¯ X extends to the pair (X, Y ).
For k = (k1, . . . , kd) ∈N(d), write ˆ Yk for the restriction of Y to sequences in ˜ k of length < d, and note that ( ˜ Xk, ˆ Yk) d = ˜ Xk, X \ ˜ Xk, Y , k ∈N(d), since the two sides are restrictions of the same exchangeable array ¯ X to se-quences in Z−∪{˜ k} and Z, respectively. Since also σ(ˆ Yk) ⊂σ(X \ ˜ Xk, Y ), Lemma 8.10 yields ˜ Xk ⊥ ⊥ ˆ Yk X \ ˜ Xk, Y , k ∈N(d).
(23) Since exchangeable arrays on ˆ N(d−1) are representable, there exist a U-array ξ on ˜ Nd−1 and a measurable function f, such that (Xk, Yk) = f(ˆ ξk), k ∈ˆ N(d−1).
(24) By Theorems 8.17 and 8.20 we may choose ξ⊥ ⊥X′,Y X, where X′ denotes the restriction of X to ˆ N(d−1), in which case (X, ξ) is again exchangeable by Lemma 28.4. The same conditional independence yields ˜ Xk ⊥ ⊥ Y, X \ ˜ Xk ξ, k ∈N(d).
(25) Using Theorem 8.12, we may combine (23) and (25) into ˜ Xk ⊥ ⊥ ˆ Yk X \ ˜ Xk, ξ , k ∈N(d), and (22) follows since ˆ Yk is ˆ ξk -measurable by (24).
2 Proof of Theorem 28.1 (i): The result for d = 0 is elementary and follows from Lemma 28.8 with η = 0.
Proceeding by induction, suppose that all exchangeable arrays on ˆ N(d−1) are representable, and consider an exchangeable array X on ˆ N(d). By Lemma 28.14, we may choose a U-array ξ on ˜ Nd−1 and a measurable function f satisfying Xk = f(ˆ ξk) a.s., k ∈ˆ N(d−1), (26) 28. Multi-variate Arrays and Symmetries 645 and such that (X, ξ) is exchangeable and the arrays ˜ Xk = (Xh; h ∼k) with k ∈N(d) satisfy (22).
Letting Pd be the group of permutations on Nd, we may write ˜ Xk = Xk◦p; p ∈Pd , ˆ ξk = ξk◦I; I ⊂Nd , k ∈N(d), where k ◦I = (ki; i ∈I). The pairs ( ˜ Xk, ˆ ξk) are equally distributed by the exchangeability of (X, ξ), and in particular ˜ Xk◦p, ˆ ξk◦p d = ( ˜ Xk, ˆ ξk), p ∈Pd, k ∈N(d).
Since ξ is a U-array, the arrays ˆ ξk◦p are a.s. different for fixed k, and so the associated symmetry groups are a.s. trivial. Hence, Lemma 28.9 yields some U(0, 1) random variables ζk⊥ ⊥ˆ ξk and a measurable function h, such that Xk◦p = h ˆ ξk◦p, ζk a.s., p ∈Pd, k ∈N(d).
(27) Now introduce a U-array ϑ⊥ ⊥ξ on N↑d, and define Yk = h(ˆ ξk, ϑ˜ k), ˜ Yk = Yk◦p; p ∈Pd , k ∈N(d).
(28) Comparing with (27) and using the independence properties of (ξ, ϑ), we get (˜ Yk, ˆ ξk) d = ( ˜ Xk, ˆ ξk), ˜ Yk ⊥ ⊥ ˆ ξk Y \ ˜ Yk, ξ , k ∈N(d).
By (22) it follows that ( ˜ Yk, ξ) d = ( ˜ Xk, ξ) for all k ∈N(d), and moreover ˜ Xk ⊥ ⊥ξ (X \ ˜ Xk), ˜ Yk ⊥ ⊥ξ (Y \ ˜ Yk), k ∈N(d).
Thus, the arrays ˜ Xk with different ˜ k are conditionally independent given ξ, and similarly for the arrays ˜ Yk. Letting X′ be the restriction of X to ˆ Nd−1, which is ξ-measurable by (26), we obtain (X, ξ) d = (X′, Y, ξ). Using (28) along with Corollary 8.18, we conclude that Xk = h(ˆ ξk, ηk) a.s., k ∈N(d), for a U-array η⊥ ⊥ξ on N↑d. Combining this with (26) yields the desired repre-sentation of X.
2 −−− Next we prove the equivalence criteria for jointly exchangeable arrays.
Proof of Theorem 28.3, (i) ⇔(ii): The implication (i) ⇒(ii) is obvious, and the converse follows from Theorem 8.17.
(iv) ⇒(i): We may write condition (iv) as 646 Foundations of Modern Probability f α, ξi, ξj, ζij = g α′′, ξ′′ i , ξ′′ j , ζ′′ ij a.s., i ̸= j, (29) where α′′ = T(¯ α), ξ′′ i = U(¯ α, ¯ ξi), ζ′′ ij = V ¯ α, ¯ ξi, ¯ ξj, ¯ ζij , with ¯ α = (α, α′), ¯ ξi = (ξi, ξ′ i), ¯ ζij = (ζij, ζ′ ij).
The symmetry of V and ¯ ζ yields ζ′′ ij = V ¯ α, ¯ ξi, ¯ ξj, ¯ ζij = V ¯ α, ¯ ξj, ¯ ξi, ¯ ζji = ζ′′ ji, i ̸= j.
Using the mapping property λ2 →λ of the functions T, U, V , and condition-ing first on the pair ¯ α and then on the array (¯ α, ¯ ξ), we see that the variables α′′, ξ′′ i , ζ′′ ij are again i.i.d. U(0, 1). Condition (i) now follows from (29).
(i) ⇒(iii): Assume (i), so that Xij = f α, ξi, ξj, ζij = g α′, ξ′ i, ξ′ j, ζ′ ij a.s., i ̸= j, (30) for some U-arrays (α, ξ, ζ) and (α′, ξ′, ζ′) on ˜ N2. Since the latter may be chosen to be conditionally independent given X, the combined array (¯ α, ¯ ξ, ¯ ζ) is again jointly exchangeable by Lemma 28.4 (i). Hence, it may be represented as in Theorem 28.1 (i) in terms of a U-array (α′′, ξ′′, ζ′′) on ˜ N2, so that a.s. for all i ̸= j ¯ α = ¯ T(α′′), ¯ ξi = ¯ U(α′′, ξ′′ i ), ¯ ζij = ¯ V α′′, ξ′′ i , ξ′′ j , ζ′′ ij , for some functions ¯ T = (T, T ′), ¯ U = (U, U ′), and ¯ V = (V, V ′) between suitable spaces. Since ¯ ζij = ¯ ζji, we may choose ¯ V to be symmetric, and by Lemma 28.7 (ii) we may choose T, U, V and T ′, U ′, V ′ to preserve λ in the highest order arguments. Now (iii) follows by substitution into (30).
(iii) ⇒(iv): Condition (iii) may be written as f α′, ξ′ i, ξ′ j, ζ′ ij = g α′′, ξ′′ i , ξ′′ j , ζ′′ ij a.s., i ̸= j, (31) where α′ = T ′(α), ξ′ i = U ′(α, ξi), ζ′ ij = V ′(α, ξi, ξj, ζij), (32) α′′ = T ′′(α), ξ′′ i = U ′′(α, ξi), ζ′′ ij = V ′′(α, ξi, ξj, ζij).
(33) By the measure-preserving property of T ′, U ′, V ′ and T ′′, U ′′, V ′′ along with Lemma 28.7, the triples (α′, ξ′, ζ′) and (α′′, ξ′′, ζ′′) are again U-arrays on ˜ N2.
By Lemma 28.12 we may solve for (α, ξ, ζ) in (32) to obtain α = T(¯ α′), ξi = U(¯ α′, ¯ ξ′ i), ζij = V ¯ α′, ¯ ξ′ i, ¯ ξ′ j, ¯ ζ′ ij , (34) for some measurable functions T, U, V between suitable spaces, where ¯ α′ = (α′, β), ¯ ξ′ i = (ξ′ i, ηi), ¯ ζ′ ij = (ζ′ ij, ϑij), for an independent U-array (β, η, ϑ). Substituting (34) into (33) and using (31), we get 28. Multi-variate Arrays and Symmetries 647 f α′, ξ′ i, ξ′ j, ζ′ ij = g ¯ T(¯ α′), ¯ U(¯ α′, ¯ ξ′ i), ¯ U(¯ α′, ¯ ξ′ j), ¯ V ¯ α′, ¯ ξ′ i, ¯ ξ′ j, ¯ ζ′ ij , where ¯ T, ¯ U, ¯ V result from composition of (T, U, V ) into (T ′′, U ′′, V ′′).
By Lemma 28.7 (ii) we can modify ¯ T, ¯ U, ¯ V to achieve the desired mapping prop-erty λ2 →λ, and by Lemma 28.12 (ii) we can choose V to be symmetric, so that ¯ V becomes symmetric as well, by Lemma 28.11 (iii).
2 −−− Next we prove Theorem 28.1 (ii) by establishing a representation5 as in (i).
Here the following construction plays a key role for the main proof. Define ˜ T ′ = ˜ T \ {∅} and Z−= −Z+ = Z \ N.
Lemma 28.15 (key construction) Let X be a contractable array on ˜ Z, and define for J ∈˜ N and n ∈N YJ = XI∪J; I ∈˜ Z′ − , Xn J = X{n}∪(J+n), Y n J = YJ+n, Y{n}∪(J+n) .
Then the pairs (X, Y ) and (Xn, Y n) are contractable on ˜ N, and the latter are equally distributed with Xn⊥ ⊥ Y n (X \ Xn), n ∈N.
(35) Proof: It is enough to prove (35), the remaining assertions being obvious.
Then write X = (Qn, Rn) for all n ∈N, where Qn denotes the restriction of X to ˜ N + n −1 and Rn = X \ Qn. Defining pn(k) = k −(n −1) 1{k < n}, k ∈Z, n ∈N, and using the contractability of X, we obtain (Qn, Rn) = X d = X ◦pn = (Qn, Rn ◦pn), and so Lemma 8.10 yields Rn⊥ ⊥Rn◦pnQn, which amounts to Y, X1, . . . , Xn−1⊥ ⊥ Y n Xn, Xn+1, . . . ; X∅ .
Replacing n by n + 1 and noting that Y n+1 ⊂Y n, we see that also Y, X1, . . . , Xn⊥ ⊥ Y n Xn+1, Xn+2, . . . ; X∅ .
Combining the last two relations, we get by Lemma 4.8 Xn ⊥ ⊥ Y n Y, X1, . . . , Xn−1, Xn+1, . . . ; X∅ = X \ Xn.
2 5No direct proof of the extension property is known.
648 Foundations of Modern Probability Lemma 28.16 (recursion) Suppose that all contractable arrays on ˜ Nd−1 are representable, and let X be a contractable array on ˜ Nd. Then there exists a U-array η on ˜ Nd−1, such that (X, η) is contractable with σ(X∅) ⊂σ(η∅), and the arrays Xn J = X{n}∪(J+n), ηn J = ηJ+n, J ∈˜ Nd−1, satisfy Xn⊥ ⊥ ηn X \ Xn, η , n ∈N.
(36) Proof: Defining Y as in Lemma 28.15 in terms of a contractable extension of X to ˜ Z, we note that the pair (X∅, Y ) is contractable on ˜ Nd−1. Then by hypothesis it has a representation X∅= f(η∅); YJ = g(ˆ ηJ), J ∈˜ Nd−1, (37) for some measurable functions f, g and a U-array η on ˜ Nd−1. By Theorems 8.17 and 8.21, we may extend the pairs (X, Y ) and (Y, η) to contractable arrays on ˜ Q. Choosing η ⊥ ⊥ X∅, Y X, (38) we see that (X, Y, η) becomes contractable on ˜ Q by Lemma 28.4.
By Theorem 8.12 and Lemma 28.15, we have Xn ⊥ ⊥ X∅, Y (X \ Xn), n ∈N.
Combining with (38) and using a conditional version of Lemma 4.8, we get Xn ⊥ ⊥ X∅, Y X \ Xn, η , n ∈N, and since X∅and Y are η-measurable by (37), it follows that Xn⊥ ⊥ η X \ Xn, η , n ∈N.
(39) By the contractability of (X, η), we may replace ˜ Q by the index sets ˜ Tε with Tε = k>0(k −ε, k], ε > 0. Letting ε →0, we see from Corollary 9.26 that (39) remains true with η restricted to ˜ Nd−1. Since also Xn⊥ ⊥ηn η by Lemma 28.5, (36) follows by Theorem 8.12.
2 Proof of Theorem 28.1 (ii): For d = 0, the result is elementary and follows from Lemma 28.8 with η = 0. Now suppose that all contractable arrays on ˜ Nd−1 are representable, and let X be a contractable array on ˜ Nd.
Choose a U-array η on ˜ Nd−1 as in Lemma 28.16, and extend η to ˜ Nd by attaching some independent U(0, 1) variables ηJ with |J| = d.
Then (X, η) remains contractable, the element X∅is η∅-measurable, and the arrays6 Xn J = X{n}∪(J+n), ηn J = ηJ+n, η{n}∪(J+n) , J ∈˜ Nd−1, n ∈N, satisfy Xn⊥ ⊥ ηn X \ Xn, η , n ∈N.
(40) 6Note that these definitions differ slightly from those in Lemma 28.16.
28. Multi-variate Arrays and Symmetries 649 The pairs (Xn, ηn) are equally distributed and inherit the contractability from (X, η). Furthermore, the elements of ηn are independent for fixed n and uniformly distributed on [0, 1]2. Hence, Corollary 28.13 yields a measurable function G and some U-arrays ζn⊥ ⊥ηn on ˜ Nd−1, such that Xn J = G(ˆ ηn J, ˆ ζn J ) a.s., J ∈˜ Nd−1, n ∈N.
(41) By Theorem 8.17, we may choose ζn = h Xn, ηn, ϑn , n ∈N, (42) for a U-sequence ϑ = (ϑn)⊥ ⊥(X, η) and a measurable function h, so that by Proposition 8.20, ζn ⊥ ⊥ Xn, ηn(η, ζ \ ζn), n ∈N.
Using (40) and (42), along with the independence of the ϑn, we get Xn⊥ ⊥ ηn (η, ζ \ ζn), n ∈N.
Combining the last two relations, we get by Theorem 8.12 ζn⊥ ⊥ ηn (η, ζ \ ζn), n ∈N.
Since also ζn⊥ ⊥ηn, the same result yields ζn⊥ ⊥(η, ζ \ ζn), n ∈N.
By Lemma 4.8, the ζn are then mutually independent and independent of η.
Now combine the arrays ζn into a single U-array ζ⊥ ⊥η on ˜ Nd \ {∅}, given by ζ{n}∪(J+n) = ζn J , J ∈˜ Nd−1, n ∈N, and extend ζ to ˜ Nd by attaching an independent U(0, 1) variable ζ∅. Then (41) becomes XJ = F(ˆ ηJ, ˆ ζJ) a.s., J ∈˜ Nd \ {∅}, (43) where F is given for k = 1, . . . , d by F (y, z)I; I ⊂Nk = G yI+1, (y, z){1}∪(I+1); I ⊂Nk−1 .
Since X∅is η∅-measurable, we may extend (43) to J = ∅, through a suitable extension of F to [0, 1]2, as in Lemma 1.14. By Lemma 28.8 with η = 0, we may finally choose a U-array ξ on ˜ Nd and a measurable function b: [0, 1] →[0, 1]2, such that (ηJ, ζJ) = b(ξJ) a.s., J ∈˜ Nd.
Then (ˆ ηJ, ˆ ζJ) = ˆ b(ˆ ξJ) a.s. for all J ∈˜ Nd, and a substitution into (43) yields the desired representation XJ = f(ˆ ξJ) a.s. with f = F ◦ˆ b.
2 −−− We turn to a more detailed study of separately exchangeable arrays X on N2. Define the shell σ-field of X as in Lemma 28.6. Say that X is dissociated if Xij; i ≤m, j ≤n ⊥ ⊥ Xij; i > m, j > n , m, n ∈N.
650 Foundations of Modern Probability Theorem 28.17 (conditioning and independence) Let X be the separately exchangeable array on N2 given by (4), write S for the shell σ-field of X, and let X−be an exchangeable extension of X to Z2 −. Then (i) S ⊂σ(α, ξ, η) a.s., (ii) X⊥ ⊥S (α, ξ, η), (iii) the Xij are conditionally independent given S, (iv) X is i.i.d. ⇔X⊥ ⊥(α, ξ, η), (v) X is dissociated ⇔X⊥ ⊥α, (vi) X is S-measurable ⇔X⊥ ⊥ζ, (vii) L(X | X−) = L(X | α) a.s.
Proof: (i) Use Corollary 9.26.
(ii) Apply Lemma 28.6 to the array X on N2, extended to Z2 + by X0,0 = α, Xi,0 = ξi, X0,j = ηj, i, j ∈N.
(iii) By Lemma 8.7, the Xij are conditionally independent given (α, ξ, η).
Now apply (i)−(ii) and Theorem 8.9.
(iv) If X is i.i.d., then S is trivial by Kolmogorov’s 0−1 law, and (ii) yields X⊥ ⊥(α, ξ, η). Conversely, X⊥ ⊥(α, ξ, η) implies L(X) = L(X | α, ξ, η) by Lemma 8.6, and so Lemma 8.7 yields X d = Y with Yij = g(ζij) = λ3f(·, ·, ·, ζij), i, j ∈N, which shows that the Yij, and then also the Xij, are i.i.d.
(v) Choose some disjoint, countable sets N1, N2, . . . ⊂N, and write Xn for the restriction of X to N 2 n. If X is dissociated, the sequence (Xn) is i.i.d.
Applying Theorem 27.2 to the exchangeable sequence of pairs (α, Xn), we obtain a.s.
L(X) = L(X1) = L(X1| α) = L(X| α), and so X⊥ ⊥α by Lemma 8.6. Conversely, X⊥ ⊥α implies L(X) = L(X | α) a.s.
by the same lemma, and so Lemma 8.7 yields X d = Y with Yij = g(ξi, ηj, ζij) = λf(·, ξi, ηj, ζij), i, j ∈N, which shows that Y , and then also X, is dissociated.
(vi) We may choose X to take values in [0, 1]. If X is S -measurable, then by (i)−(ii) and Lemma 8.7, 28. Multi-variate Arrays and Symmetries 651 X = E(X| S) = E(X| α, ξ, η), and so X is (α, ξ, η) -measurable, which implies X⊥ ⊥ζ.
Conversely, X⊥ ⊥ζ implies L(X) = L(X | ζ) a.s. by Lemma 8.6, and so Lemma 8.7 yields X d = Y with Yij = g(α, ξi, ηj) = λf(α, ξi, ηj, ·), i, j ∈N, which shows that Y is (α, ξ, η) -measurable. Applying (i)−(ii) to Y yields Y = E(Y | α, ξ, η) = E(Y | SY ), which shows that Y is SY -measurable. Hence, X d = Y is SX -measurable.
(vii) Define the arrays Xn as in (v), and use X−to extend the sequence (Xn) to Z−, so that the entire sequence is exchangeable on Z. Since X−is invariant under permutations on N, the exchangeability on N is preserved by conditioning on X−, and the Xn with n > 0 are conditionally i.i.d. by the law of large numbers. This remains true under conditioning on α in the sequence of pairs (Xn, α).
By the uniqueness in Theorem 27.2, the two conditional distributions agree a.s., and so by exchangeability L(X| X−) = L(X1| X−) = L(X1| α) = L(X| α).
2 Corollary 28.18 (representations in special cases, Aldous) Let X be a sepa-rately exchangeable array on N2 with shell σ-field S. Then (i) X is dissociated iffit has an a.s. representation Xij = f(ξi, ηj, ζij), i, j ∈N, (ii) X is S-measurable iffit has an a.s. representation Xij = f(α, ξi, ηj), i, j ∈N, (iii) X is dissociated and S-measurable iffit has an a.s. representation Xij = f(ξi, ηj), i, j ∈N.
Proof: (i) Use Theorem 28.17 (v).
(ii) Proceed as in the proof of Theorem 28.17 (iv).
(iii) Combine (i) and (ii).
2 −−− 652 Foundations of Modern Probability We turn to the subject of rotational symmetries. Given a real-valued ran-dom array X = (Xij) indexed by N2, we introduce the rotated version Yij = Σ h,k UihVjk Xhk, i, j ∈N, where U, V are unitary arrays on N2 affecting only finitely many coordinates, hence orthogonal matrices of finite order n, extended to i ∨j > n by Uij = Vij = δij. Writing Y = (U ⊗V )X for the mapping X →Y , we say that X is separately rotatable if (U ⊗V )X d = X for all U, V , and jointly rotatable if the same condition holds with U = V .
The representations of separately and jointly rotatable arrays may be stated in terms of G-arrays, defined as arrays of independent N(0, 1) random vari-ables.
Theorem 28.19 (rotatable arrays, Aldous, OK) Let X = (Xij) be a real-valued random array on N2. Then (i) X is separately rotatable iffa.s.
Xij = σ ζij + Σ k α k ξki ηkj, i, j ∈N, for some independent G-arrays (ξki), (ηkj), (ζij) and an independent set7 of random variables σ, α1, α2, . . . with Σ k α2 k < ∞a.s., (ii) X is jointly rotatable iffa.s.
Xij = ρ δij + σ ζij + σ′ζji + Σ h, k α hk ξhi ξkj −δij δhk , i, j ∈N, for some independent G-arrays (ξhi), (ζij) and an independent set 8 of random variables ρ, σ, σ′, and αhk with Σ h,k α2 hk < ∞a.s.
In case (i), we note the remarkable symmetry (Xij) d = (Xji). The centering term δij δhk in (ii) is needed to ensure convergence of the double sum9. The simpler representation in (i) is essentially a special case. Here we prove only part (i), which still requires several lemmas.
Lemma 28.20 (conditional moments) Let X be a separately rotatable array on N2 with rotatable extension X−to Z2 −. Then E |Xij|p X− < ∞a.s., p > 0, i, j ∈N.
Proof: For fixed i ∈Z, we have Xij = σi ζij for all j by Theorem 14.3, where σi ≥0 and the ζij are independent of σi and i.i.d. N(0, 1). By the stronger 7This means that the arrays ξ, η, ζ, (σ, α) are mutually independent; no independence is claimed between the variables σ, α1, α2, . . . .
8Here ξ, ζ, (ρ, σ, σ′, α) are independent; the dependence between ρ, σ, σ′, α is arbitrary.
9Comparing (ii) with Theorem 14.25, we recognize a double Wiener–Itˆ o integral. To see the general pattern, we need to go to higher dimensions.
28. Multi-variate Arrays and Symmetries 653 Theorem 27.15, we have in fact (ζij)⊥ ⊥X−for i ≤0 and j > 0, and since σi is X−-measurable by the law of large numbers, we get E |Xij|p X− = σp i E|ζij|p < ∞a.s., i ≤0, j ∈N.
Hence, by rotatability, E |Xij|p X− < ∞a.s., i j < 0, p > 0.
(44) Taking i = 1, we put ζ1j = ζj and X1j = σ1ζj = ξj. Writing E( · |X−) = E X−and letting n ∈N, we get by Cauchy’s inequality E X−|ξj|p = E X− ζ2 j σ2 1 p/2 = E X− ζ2 j ξ2 −1 + · · · + ξ2 −n ζ2 −1 + · · · + ζ2 −n p/2 ≤ E X− ζ2 j ζ2 −1 + · · · + ζ2 −n p E X− ξ2 −1 + · · · + ξ2 −n p 1/2 .
For the second factor on the right, we get a.s. by Minkowski’s inequality and (44) E X− ξ2 −1 + · · · + ξ2 −n p ≤np EX−|ξ−1|2p < ∞.
The first factor is a.s. finite for n > 2p, since its expected value equals E ζ2 j ζ2 −1 + · · · + ζ2 −n p = E|ζj|2p E ζ2 −1 + · · · + ζ2 −n −p < ⌢ ∞ 0 r−2p e−r2/2 rn−1 dr < ∞.
2 Lemma 28.21 (arrays of product type, Aldous) Let X be a separately rotat-able array of the form Xij = f(ξi, ηj), i, j ∈N, for a measurable function f on [0, 1]2 and some independent U-sequences (ξi) and (ηj). Then a.s.
Xij = Σ k ck ϕki ψkj, i, j ∈N, (45) for some independent G-arrays (ϕki), (ψkj) on N2 and constants ck with Σkc2 k < ∞.
Proof: Let ¯ X be a stationary extension of X to Z2 with restriction X−to Z2 −, and note that X−⊥ ⊥X by the independence of all ξi and ηj. Hence, Lemma 28.20 yields EX2 ij < ∞for all i, j, so that f ∈L2(λ2). By a standard Hilbert space result10, there exist some ortho-normal sequences (gk), (hk) in L2(λ) and constants ck > 0 with Σk c2 k < ∞, such that f(x, y) = Σ k ck gk(x) hk(y) in L2, x, y ∈[0, 1], and (45) follows with 10Cf. Lemma A3.1 in K(05), p. 450.
654 Foundations of Modern Probability ϕki = gk(ξi), ψkj = hk(ηj), i, j, k ∈N.
We need to show that the variables ϕk = gk(ξ) and ψk = hk(η) are i.i.d. N(0, 1), where ξ1 = ξ and η1 = η.
By Fubini’s theorem, 1 0 dxΣ k c2 k gk(x) 2 = Σ k c2 k ∥gk∥2 < ∞, and so the sum on the left converges a.e. λ, and we may modify the gk (along with f) on a null set, to ensure that Σ k c2 k gk(x) 2 < ∞, x ∈[0, 1].
We may then define Y (x) = f(x, η) = Σ k ck gk(x) ψk, x ∈[0, 1], with convergence in L2.
The array X = (Xij) remains rotatable in j un-der conditioning on the invariant variables ξi, and so by Lemma 8.7 (i) and Theorem 27.15 the variables Y (x1), . . . , Y (xn) are jointly centered Gaussian for x1, . . . , xn ∈[0, 1] a.e. λn. Defining a measurable map G : [0, 1] →l2 by G(x) = {ck gk(x)} and putting μ = λ ◦G−1, we see from Lemma 1.19 that λ x ∈[0, 1]; G(x) ∈supp μ = μ(supp μ) = 1.
We can then modify the functions gk (along with Y and f) on a null set, to make G(x) ∈supp μ hold identically. Since the set of Gaussian distributions on Rn is weakly closed, we conclude that Y is centered Gaussian.
Now let H1 be the linear subspace of l2 spanned by the sequences {ck gk(x)}, so that for any (yk) ∈H1 the sum Σk yk ψk converges in L2 to a centered Gaussian limit. If instead (yk)⊥H1 in l2, we have Σk ckyk gk(x) ≡0, which implies Σk ckyk gk = 0 in L2. Hence, ckyk ≡0 by ortho-normality, and so yk ≡0. This gives ¯ H1 = l2, and so the sums Σk yk ψk are centered Gaussian for all (yk) ∈l2. The ψk are then jointly centered Gaussian by Corollary 6.5.
For the remaining properties, interchange the roles of i and j, and use the ortho-normality and independence.
2 Lemma 28.22 (totally rotatable arrays, Aldous) Let X be a dissociated, sep-arately rotatable array on N2 with shell σ-field S, such that E(X| S) = 0. Then the Xij are i.i.d. centered Gaussian.
Proof (OK): For any rotation R of rows or columns and indices h ̸= k, we get by Lemma 8.8, Proposition 28.17 (iii), and the centering assumption E (RX)k S d = E Xk S = 0, E (RX)h(RX)k S d = E XhXk S = E(Xh | S) E(Xk | S) = 0.
28. Multi-variate Arrays and Symmetries 655 Letting T be the tail σ-field in either index, and noting that T ⊂S, we get by the tower property of conditioning E (RX)k T = 0, E (RX)h(RX)k T = 0.
Now L(X | T ) is a.s. Gaussian by Theorem 27.15. Since the Gaussian property is preserved by rotations, Lemma 14.1 yields L(RX | T ) = L(X | T ), and so X is conditionally rotatable in both indices, hence i.i.d. N(0, 1). Thus, Xij = σ ζij a.s. for a G-array (ζij) and an independent random variable σ ≥0.
Since X is dissociated, we get for h, k ∈N2 differing in both indices E σ2 E|ζh|·E|ζk| = E XhXk = E|Xh|·E|Xk| = (E σ)2 E|ζh|·E|ζk|, and so Eσ2 = (Eσ)2, which yields Var(σ) = 0, showing that σ is a constant. 2 Proof of Theorem 28.19 (i): If X is separately rotatable on N2, it is also separately exchangeable and hence representable as in (4), in terms of a U-array (α, ξ, η, ζ). The rotatability on N2 is preserved by conditioning on an exchangeable extension X−to Z2 −, and by Proposition 28.17 (vii) it is equiv-alent to condition on α, which makes X dissociated. By Theorem A1.3, it is enough to prove the representation under this conditioning, and so we may assume that X is representable as in Corollary 28.18 (i). Then E|Xij|p < ∞ for all p > 0 by Lemma 28.20.
Letting S be the shell σ-field of X, we may decompose X into the arrays X′ = E(X| S), X′′ = X −E(X| S), which are again separately rotatable by Lemma 8.8.
Writing g(a, x, y) = λf(a, x, y, ·), we have by Propositions 8.7 (i), 8.9, and 28.17 (i)−(ii) the a.s.
representations X′ ij = g(ξi, ηj), X′′ ij = f(ξi, ηj, ζij) −g(ξi, ηj), i, j ∈N, and in particular X′ and X′′ are again dissociated. By Lemma 28.21, X′ is representable as in (45). Furthermore, the X′′ ij are i.i.d. centered Gaussian by Lemma 28.22, since clearly E(X′′| S) = 0. Then X′⊥ ⊥X′′ by Proposition 28.17 (iv), and we may choose the underlying G-arrays to be independent.
2 −−− Exchangeable arrays can be used to model the interactions between nodes in a random graph or network, in which case Xij represents the directed inter-action between nodes i, j. For an adjacency array we have Xij ∈{0, 1}, where Xij = 1 when there is a directed link from i to j.
656 Foundations of Modern Probability We turn to the related subject of random partitions of a countable set I, represented by arrays Xij = κi 1{i ∼j}, i, j ∈I, where i ∼j whenever i, j belong to the same random subset of I, and κi = κj is an associated random mark, taking values in a Borel space S. Given a class P of injective maps on I, we say that X is P-symmetric if X ◦p d = X for all p ∈P, where (X ◦p)ij = Xpi,pj. In particular, the partition is exchangeable if the array X = (Xij) is jointly exchangeable.
We give an extended version of the celebrated paintbox representation.
Theorem 28.23 (symmetric partitions, Kingman, OK) Let P be a class of injective maps on a countable set I, and let the array X = (Xij) on I2 represent a random partition of I with marks in a Borel space S. Then X is P-symmetric iff Xij = b(ξi)1{ξi = ξj}, i, j ∈I, (46) for a measurable function b : R →S and a P-symmetric sequence of random variables ξ1, ξ2, . . . .
Though contractable arrays on N2 may not be exchangeable in general, the two properties are equivalent when X is the indicator array of a random partition on N. The equivalence fails for partitions of N2.
Corollary 28.24 (contractable partitions) Let X be a marked partition of N.
Then X is exchangeable ⇔ X is contractable.
Proof: Use Theorem 28.23, together with the equivalence of exchangeabil-ity and contractability for a single sequence (ξn).
2 Our proof of Theorem 28.23 is based on an elementary algebraic fact, where for any partition of N with associated indicator array rij = 1{i ∼j}, we define a mapping m(r): N →N by mj(r) = min i ∈N; i ∼j , j ∈N.
Lemma 28.25 (lead elements) For any indicator array r on N2 and injective map p on N, there exists an injection q on N, depending measurably on r and p, such that q ◦mj(r ◦p) = mpj(r), j ∈N.
Proof: Fix any indicator array r with associated partition classes A1, A2, . . . , listed in the order of first entry, and define Bk = p−1Ak, K = k ∈N; Bk ̸= ∅ , ak = min Ak, bk = min Bk, q(bk) = ak, k ∈K.
28. Multi-variate Arrays and Symmetries 657 For any j ∈N, we note that j ∈Bk = p−1Ak and hence pj ∈Ak for some k ∈K, and so q mj(r ◦p) = q(bk) = ak = mpj(r).
To extend q to an injective map on N, we need to check that |Jc| ≤|Ic|, where I = ak; k ∈K , J = bk; k ∈K .
Noting that |Bk| ≤|Ak| for all k since p is injective, we get |Jc| = ∪ k∈K Bk \ {bk} ≤∪ k∈K Ak \ {ak} ≤|Ic|.
2 Proof of Theorem 28.23: We may take I = N. The sufficiency is clear from (46), since (X ◦p)ij = Xpi,pj = b(ξpi) 1{ξpi = ξpj}, i, j ∈N.
To prove the necessity, let X be P-symmetric, put kj(r) = rjj, choose ϑ1, ϑ2, . . . ⊥ ⊥X to be i.i.d. U(0, 1), and define ηj = ϑ ◦mj(X), κj = kj(X) = Xjj, j ∈N.
(47) For any p ∈P, Lemma 28.25 yields a random injection q(X) on N, such that q(X) ◦mj(X ◦p) = m(X) ◦pj, j ∈N.
(48) Using (47) and (48), the P-symmetry of X, the independence ϑ⊥ ⊥X, and Fubini’s theorem (four times), we get for any measurable function f ≥0 on ([0, 1] × S)∞ Ef η ◦p, κ ◦p = Ef ϑ ◦m(X) ◦p, k(X) ◦p = Ef ϑ ◦q(X) ◦m(X ◦p), k(X ◦p) = E Ef ϑ ◦q(r) ◦m(r ◦p), k(r ◦p) r=X = E Ef ϑ ◦m(r ◦p), k(r ◦p) r=X = Ef ϑ ◦m(X ◦p), k(X ◦p) = E Ef t ◦m(X ◦p), k(X ◦p) t=ϑ = E Ef t ◦m(X), k(X) t=ϑ = Ef ϑ ◦m(X), k(X) = Ef(η, κ), which shows that (η, κ) ◦p d = (η, κ).
658 Foundations of Modern Probability Now choose a Borel isomorphism h: [0, 1] × S →B ∈B with inverse (b, c): B →[0, 1] × S, and note that the B -valued random variables ξj = h(κj, ηj) are again P-symmetric. Now (46) follows from ξi = ξj ⇔ (κi, ηi) = (κj, ηj) ⇔ ηi = ηj ⇔ mi(X) = mj(X) ⇔ i ∼j.
2 Exercises 1. Show how the de Finetti–Ryll-Nardzewski theorem can be obtained as a special case of Theorem 28.1.
2. Explain the difference between the representations in (2) and (4). Then write out explicitly the corresponding three-dimensional representations, and again explain in what way they differ.
3. State a coding representation of a single random element in a Borel space, and specify when two coding functions f, g give rise to random elements with the same distribution.
4. Show how the one-dimensional coding representation with associated equiva-lence criteria follow from the corresponding two-dimensional results.
5. Give an example of a measure-preserving map f on [0, 1] that is not invertible.
Then show how we can construct an inversion by introducing an extra randomization variable. (Hint: For a simple example, let f(x) = 2 x (mod 1) for x ∈[0, 1]. Then introduce a Bernoulli variable to choose between the intervals [0, 1 2] and ( 1 2, 1].) 6.
Let Xij = f(α, ξi, ηj) for some i.i.d. random variables α, ξ1, ξ2, . . ., and η1, η2, . . . . Show that the Xij cannot be i.i.d., non-degenerate. (Hint: This can be shown from either Corollary 28.18 or Theorem 28.3.) 7. Express the notions of separate and joint rotatability as invariance under uni-tary transformations on a Hilbert space H. Then write the double sum in Theorem 28.19 (ii) in terms of a double Wiener-Itˆ o integral of a suitable isonormal Gaus-sian process. Similarly, write the double sum in part (i) as a tensor product of two independent isonormal Gaussian processes on H.
8. Show that all sets in an exchangeable partition have cardinality 1 or ∞, and give the probabilities for a fixed element to belong to a set of each type. Further give the probability for two fixed elements to belong to the same set. (Hint: Use the paintbox representation.) 9. Say that a graph is exchangeable if the associated adjacency matrix is jointly exchangeable. Show that in an infinite, exchangeable graph, a.s. every node is either disconnected or has infinitely many links to other nodes. (Hint: Use Lemma 30.9.) 10. Say that two nodes in a graph are connected, if one can be reached from the other by a sequence of links.
Show that this defines an equivalence relation between sites. For an infinite, exchangeable graph, show that the associated partition is exchangeable. (Hint: Note that the partition is exchangeable and hence has a paintbox representation.) IX. Random Sets and Measures In Chapter 29 we show how the entire excursion structure of a regenerative process can be described by a Poisson process on the time scale given by an associated local time process. We further study the local time as a function of the space variable, and initiate the study of continuous additive functionals. In Chapter 30 we discuss the weak and strong Poisson or Cox convergence under superposition and scattering of point processes, highlighting the relationship between the asymptotic properties of the processes and smoothing properties of the underlying transforms. Finally, in Chapter 31 we explore the basic notion of Palm measures, first for stationary point processes on the real line, and then for general random measures on an abstract Borel space. In both cases, we prove some local approximations and develop a variety of duality relations. A duality argument also leads to the notion of Gibbs kernel, related to ideas in statistical mechanics. For a first encounter, we recommend especially a careful study of the semi-martingale local time and excursion theory in Chapter 29, along with some basic limit theorems from Chapter 30.
−−− 29. Local time, excursions, and additive functionals. This chapter provides three totally different but essentially equivalent approaches to local time. Our first approach is based on Tanaka’s formula in stochastic calculus and leads to an interpretation of local time as an occupation density. The sec-ond approach extends to a fundamental representation of the entire excursion structure of a regenerative process in terms of a Poisson process on the local time scale. The third approach is based on the potential theory of continuous additive functionals.
30. Random measures, smoothing and scattering. Here we begin with the weak or strong Poisson convergence, for superpositions of indepen-dent, uniformly small point processes. Next, we consider the Cox convergence of dissipative transforms of suitable point processes, depending on appropriate averaging and smoothing properties of the transforms. Here a core result is the celebrated theorem of Dobrushin, based on conditions of asymptotic invari-ance. We finally explore some Cox criteria of stationary line and flat processes, and study the basic cluster dichotomy for spatial branching processes.
31.
Palm and Gibbs kernels, local approximation.
Here we be-gin with a study of Palm measures of stationary point processes on Euclidean spaces, highlighting the dual relationship between stationarity in discrete and continuous time. Next, we consider the Palm measures of random measures on an abstract Borel space, and prove some basic local approximations. Further highlights include a conditioning approach to Palm measures and a dual ap-proach via conditional densities. Finally, we discuss the dual notions of Gibbs and Papangelou kernels, of significance in statistical mechanics.
659 Chapter 29 Local Time, Excursions, and Additive Functionals Semi-martingale local time, Tanaka’s formula, space-time regularity, oc-cupation measure and density, extended Itˆ o formula, regenerative sets and processes, excursion law, excursion local time and Poisson process, approximations of local time, inverse local time as a subordinator, Brow-nian excursion, Ray–Knight theorem, continuous additive functionals, Revuz measure, potentials, excessive functions, additive-functional local time, additive functionals of Brownian motion Local time is yet another indispensable notion of modern probability, playing key roles in both stochastic calculus and excursion theory, and also needed to represent continuous additive functionals of suitable Markov processes. Most remarkably, it appears as the occupation density of continuous semi-martin-gales, and it provides a canonical time scale for the family of excursions of a regenerative process.
In each of the mentioned application areas, there is an associated method of construction, which leads to three different but essentially equivalent ver-sions of the local time process. We begin with the semi-martingale local time, constructed from Tanaka’s formula, and leading to a useful extension of Itˆ o’s formula, and to an interpretation of local time as an occupation density. Next we develop excursion theory for processes that are regenerative at a fixed state, and prove the powerful Itˆ o representation, involving a Poisson process of ex-cursions on the time scale given by an associated local time. Among its many applications, we consider a version of the Ray–Knight theorem, describing the spatial variation of Brownian local time. Finally, we study continuous additive functionals (CAFs) and their potentials, prove the existence of local time at a regular point, and show that any CAF of a one-dimensional Brownian motion is a mixture of local times.
The beginning of the chapter may be regarded as a continuation of the stochastic calculus developed in Chapter 18.
The present excursion theory continues the elementary discussion of the discrete-time case in Chapter 12.
Though the theory of CAFs is formally developed for Feller processes, few results are needed from Chapter 17, beyond the strong Markov property and its integrated version in Corollary 17.19. Both semi-martingale local time and excursion theory will be useful in Chapter 33 to study one-dimensional SDEs and diffusions. Our discussion of CAFs of Brownian motion with associated potentials will continue at the end of Chapter 34.
661 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 662 Foundations of Modern Probability For the stochastic calculus approach to local time, let X be a continuous semi-martingale in R. The semi-martingale local time L0 of X at 0 may be defined by Tanaka’s formula L0 t = |Xt| −|X0| − t 0 sgn(Xs−) dXs, t ≥0, (1) where sgn(x−) = 1(0,∞)(x) −1(−∞,0. More generally, we define the local time Lx t at a point x ∈R as the local time at 0 of the process Xt −x. Note that the stochastic integral on the right exists, since the integrand is bounded and progressive. The process L0 is clearly continuous and adapted with L0 0 = 0.
For motivation, we note that if Itˆ o’s rule could be applied to the function f(x) = |x|, we would formally obtain (1) with L0 t = s≤t δ(Xs) d[X]s.
We state the basic properties of local time at a fixed point. Say that a non-decreasing function f is supported by a Borel set A, if the associated measure μ satisfies μAc = 0. The support of f is the smallest closed set with this property.
Theorem 29.1 (semi-martingale local time) Let X be a continuous semi-martingale with local time L0 at 0. Then a.s.
(i) L0 is non-decreasing, continuous, and supported by Ξ = t ≥0; Xt = 0 , (ii) L0 t = −|X0| −inf s≤t s 0 sgn(X−) dX ∨0, t ≥0.
The proof of (ii) depends on an elementary observation.
Lemma 29.2 (supporting function, Skorohod) Let f be a continuous function f on R+ with f0 ≥0. Then gt = −inf s≤t fs ∧0 = sup s≤t (−fs) ∨0, t ≥0, is the unique non-decreasing, continuous function g on R+ with g0 = 0, satis-fying h ≡f + g ≥0, 1{h > 0} dg = 0, Proof: The given function has clearly the stated properties. To prove the uniqueness, suppose that both g and g′ have the stated properties, and put h = f + g and h′ = f + g′. If gt < g′ t for some t > 0, define s = sup{r < t; gr = g′ r}, and note that h′ ≥h′ −h = g′ −g > 0 on (s, t]. Hence, g′ s = g′ t, and so 0 < g′ t −gt ≤g′ s −gs = 0, a contradiction.
2 Proof of Theorem 29.1: (i) For any constant h > 0, we may choose a convex function fh ∈C2 with fh(x) = −x, x ≤0, x −h, x ≥h.
29. Local Time, Excursions, and Additive Functionals 663 Then Itˆ o’s formula gives, a.s. for any t ≥0, Y h t ≡fh(Xt) −fh(X0) − t 0 f ′ h(Xs) dXs = 1 2 t 0 f ′′ h(Xs) d[X]s.
As h →0, we have fh(x) →|x| and f ′ h →sgn(x−). By Corollary 18.13 and dominated convergence, we get (Y h−L0)∗ t P →0 for every t > 0. Now (i) follows, since the processes Y h are non-decreasing with ∞ 0 1 Xs / ∈[0, h] dY h s = 0 a.s., h > 0.
(ii) This is clear from Lemma 29.2.
2 This leads in particular to a basic relationship between a Brownian motion, its maximum process, and its local time at 0. The result extends the more elementary Proposition 14.13.
Corollary 29.3 (local time and maximum process, L´ evy) Let L0 be the local time at 0 of a Brownian motion B, and define Mt = sups≤t Bs. Then L0, |B| d = (M, M −B).
Proof: Define B′ t = − s≤t sgn(Bs−) dBs, M ′ t = sup s≤t B′ s, and conclude from (1) and Theorem 29.1 (ii) that L0 = M ′ and |B| = L0−B′ = M ′ −B′. Further note that B′ d = B by Theorem 19.3.
2 For any continuous local martingale M, the associated local time Lx t has a jointly continuous version. More generally, we have for any continuous semi-martingale: Theorem 29.4 (spatial regularity, Trotter, Yor) Let X be a continuous semi-martingale with canonical decomposition M +A. Then the local time L = (Lx t ) of X has a version that is rcll in x, uniformly for bounded t, and satisfies Lx t −Lx− t = 2 t 0 1{Xs = x} dAs, x ∈R, t ∈R+.
Proof: By the definition of L, we have for any x ∈R and t ≥0 Lx t = |Xt −x| −|X0 −x| − t 0 sgn Xs −x − dMs − t 0 sgn Xs −x − dAs.
By dominated convergence, the last term has the required continuity proper-ties, with discontinuities given by the displayed formula. Since the first two 664 Foundations of Modern Probability terms are trivially continuous in x and t, it remains to show that the first integral term, henceforth denoted by Ix t , has a jointly continuous version.
By localization we may then assume that the processes X −X0, [M]1/2, and |dA| are all bounded by some constant c. Fixing any p > 2 and using Theorem 18.7, we get1 for any x < y E Ix −Iy ∗p t ≤2p E 1(x,y](X) · M ∗p t < ⌢E 1(x,y](X) · [M] p/2 t .
To estimate the integral on the right, put y −x = h, and choose f ∈C2 with f ′′ ≥2 · 1(x,y] and |f ′| ≤2h. Then by Itˆ o’s formula, 1(x,y · [M] ≤1 2f ′′(X) · [X] = f(X) −f(X0) −f ′(X) · X ≤4 c h + f ′(X) · M , and another use of Theorem 18.7 yields E f ′(X) · M ∗p/2 t < ⌢E {f ′(X)}2 · [M] p/4 t ≤(2 c h)p/2.
Combining the last three estimates gives E(Ix −Iy)∗p t < ⌢(c h)p/2, and since p/2 > 1, the desired continuity follows by Theorem 4.23.
2 We may henceforth take L to be a regularized version of the local time.
Given a continuous semi-martingale X, write ξt for the associated occupation measures, defined as below with respect to the quadratic variation process [X].
We show that ξt ≪λ a.s. for every t ≥0, and that the set of local times Lx t provides the associated densities. This also leads to a simultaneous extension of the Itˆ o and Tanaka formulas.
Since every convex function f on R has a non-decreasing and left-continuous left derivative f ′ −(x) = f ′(x−), Theorem 2.14 yields an associated measure μf on R satisfying μf[x, y) = f ′ −(y) −f ′ −(x) x ≤y in R.
More generally, when f is the difference of two convex functions, μf becomes a locally finite, signed measure.
Theorem 29.5 (occupation density, Meyer, Wang) Let X be a continuous semi-martingale with right-continuous local time L. Then outside a fixed P-null set, we have (i) for any measurable function f ≥0 on R, t 0 f(Xs) d[X]s = ∞ −∞f(x) Lx t dx, t ≥0, 1Recall the notation (X · Y )t = t 0 Xs dYs.
29. Local Time, Excursions, and Additive Functionals 665 (ii) when f is the difference between two convex functions, f(Xt) −f(X0) = t 0 f ′ −(X) dX + 1 2 ∞ −∞Lx t μf(dx), t ≥0.
In particular, Theorem 18.18 extends to functions f ∈C1 R, such that f ′ is absolutely continuous with density f ′′.
Note that (ii) remains valid for the left-continuous version of L, provided we replace f ′ −(X) by the right derivative f ′ +(X).
Proof: (ii) For f(x) ≡|x −a|, this reduces to the definition of La t . Since the formula is also trivially true for affine functions f(x) ≡ax + b, it extends by linearity to the case where μf is supported by a finite set. By linearity and suitable truncation, it remains to prove the formula when μf is positive with bounded support and f(−∞) = f ′(−∞) = 0. Then for any n ∈N, we define the functions gn(x) = f ′ 2−n[2nx]− , fn(x) = x −∞gn(u) du, x ∈R, and note that (ii) holds for all fn. As n →∞we get f ′ n(x−) = gn(x−) ↑f ′ −(x), and so by Corollary 18.13 we have f ′ n(X−)·X P →f ′ −(X)·X. Further note that fn →f by monotone convergence. It remains to show that Lx t μfn(dx) → Lx t μf(dx). Then let h be any bounded, right-continuous function on R, and note that μfnh = μfhn with hn(x) = h(2−n[2nx + 1]). Since hn →h, we get μfhn →μfh by dominated convergence.
(i) Comparing (ii) with Itˆ o’s formula, we obtain the stated relation for fixed t ≥0 and f ∈C. Since both sides define random measures on R, a simple approximation yields ξt = Lt · λ, a.s. for fixed t ≥0. By the continuity of each side, we may choose the exceptional null set to be independent of t.
If f ∈C1 with f ′ as stated, then (ii) applies with μf(dx) = f ′′(x) dx, and the last assertion follows by (i).
2 In particular, the occupation measure of X at time t, given by ηtA = t 0 1A(Xs) d[X]s, A ∈BR, t ≥0, (2) is a.s. absolutely continuous with density Lt. This leads to a spatial approx-imation of L, which may be compared with the temporal approximation in Proposition 29.12 below.
Corollary 29.6 (spatial approximation) Let X be a continuous semi-martin-gale in R with local time L and occupation measures ηt. Then outside a fixed P-null set, we have Lx t = lim h→0 h−1ηt[x, x + h), t ≥0, x ∈R.
666 Foundations of Modern Probability Proof: Use Theorem 29.5 and the right-continuity of L.
2 Local time processes also arise naturally in the context of regenerative pro-cesses. Here we consider an rcll process X in a Polish space S, adapted to a right-continuous and complete filtration F. Say that X is regenerative at a state a ∈S, if it satisfies the strong Markov property at a, so that for any optional time τ, L θτX Fτ = Pa a.s. on τ < ∞, Xτ = a , for some probability measure Pa on the path space D R+,S of X. In particular, a strong Markov process is regenerative at every point.
The associated random set Ξ = {t ≥0; Xt = a} is regenerative in the same sense. By the right-continuity of X, we note that Ξ ∋tn ↓t implies t ∈Ξ, which means that all points of ¯ Ξ \ Ξ are isolated from the right. In particular, the first-entry times τr = inf{t ≥r; t ∈Ξ} lie in Ξ, a.s. on {τr < ∞}. Since (¯ Ξ)c is open and hence a countable union of disjoint intervals (u, v), it also follows that Ξc is a countable union of intervals (u, v) or [u, v).
If we are only interested in regeneration at a fixed point a, we may further assume that X0 = a or 0 ∈Ξ a.s. The mentioned properties will be assumed below, even when we make no reference to an underlying process X.
We first classify the regenerative sets into five different categories. Recall that a set B ⊂R+ is said to be nowhere dense if ( ¯ B)o = ∅, and perfect if it is closed with no isolated points.
Theorem 29.7 (regenerative sets) For a regenerative set Ξ, exactly one of these cases occurs a.s.: (i) Ξ is locally finite, (ii) Ξ = R+, (iii) Ξ is a locally finite union of disjoint intervals with i.i.d., exponentially distributed lengths, (iv) Ξ is nowhere dense with no isolated points, and ¯ Ξ = supp(1Ξλ), (v) Ξ is nowhere dense with no isolated points, and λΞ = 0.
Case (iii) is excluded when Ξ is closed.
In case of (i), Ξ is a generalized renewal process based on a distribution μ on (0, ∞]. In cases (ii)−(iv), Ξ is known as a regenerative phenomenon, whose distribution is determined by the p -function p(t) = P{t ∈Ξ}. Our proof of Theorem 29.7 is based on the following dichotomies: Lemma 29.8 (local dichotomies) For a regenerative set Ξ, (i) either (¯ Ξ)o = ∅a.s., or Ξo = ¯ Ξ a.s., (ii) either a.s. all points of Ξ are isolated, or a.s. none of them is, (iii) either λΞ = 0 a.s., or supp(Ξ · λ) = ¯ Ξ a.s.
29. Local Time, Excursions, and Additive Functionals 667 Proof: We may take F to be the right-continuous, complete filtration in-duced by Ξ, which allows us to use a canonical notation. For any optional time τ, the regenerative property yields P{τ = 0} = E P(τ = 0 | F0); τ = 0 = (P{τ = 0})2, and so P{τ = 0} = 0 or 1. If σ is another optional time, then τ ′ = σ + τ ◦θσ is again optional by Proposition 11.8, and so for any h ≥0, P τ ′ −h ≤σ ∈Ξ = P τ ◦θσ ≤h, σ ∈Ξ = P{τ ≤h} P{σ ∈Ξ}, which implies L(τ ′ −σ | σ ∈Ξ) = P ◦τ −1. In particular, τ = 0 a.s. gives τ ′ = σ a.s. on {σ ∈Ξ}.
(i) The previous observations apply to the optional times τ = inf Ξc and σ = τr. If τ > 0 a.s., then τ ◦θτr > 0 a.s. on {τr < ∞}, and so τr ∈Ξo a.s. on the same set. Since the set {τr; r ∈Q+} is dense in ¯ Ξ, we obtain ¯ Ξ = Ξo a.s.
If instead τ = 0 a.s., then τ ◦θτr = 0 a.s. on {τr < ∞}, and so τr ∈Ξc a.s. on the same set. Hence, ¯ Ξ ⊂Ξc a.s., and so Ξc = R+ a.s. It remains to note that Ξc = (¯ Ξ)c, since Ξc is a disjoint union of intervals (u, v) or [u, v).
(ii) Put τ = inf(Ξ \ {0}). If τ = 0 a.s., then τ ◦θτr = 0 a.s. on {τr < ∞}.
Since every isolated point of Ξ equals τr for some r ∈Q+, it follows that Ξ has a.s. no isolated points. If instead τ > 0 a.s., we define recursively some optional times σn by σn+1 = σn + τ ◦θσn, starting with σ1 = τ. Then the σn form a renewal process based on L(τ), and so σn →∞a.s. by the law of large numbers. Thus, Ξ = {σn < ∞; n ∈N} a.s. with σ0 = 0, and a.s. all points of Ξ are isolated.
(iii) Let τ = inf{t > 0; (1Ξ · λ)t > 0}. If τ = 0 a.s., then τ ◦θτr = 0 a.s. on {τr < ∞}, and so τr ∈supp(1Ξλ) a.s. on the same set. Hence, ¯ Ξ ⊂supp(1Ξλ) a.s., and the two sets agree a.s. If instead τ > 0 a.s., then τ = τ + τ ◦θτ > τ a.s. on {τ < ∞}, which implies τ = ∞a.s., and so λΞ = 0 a.s.
2 Proof of Theorem 29.7: Using Lemma 29.8 (ii), we may eliminate case (i) where Ξ is a renewal process, which is clearly a.s. locally finite.
Next, we may use parts (i) and (iii) of the same lemma to separate out cases (iv) and (v). Excluding (i) and (iv)−(v), we see that Ξ is a.s. a locally finite union of intervals of i.i.d. lengths. Writing γ = inf Ξc and using the regenerative property at τs, we get P{γ > s + t} = P γ ◦θs > t, γ > s = P{γ > s} P{γ > t}, s, t ≥0.
Hence, the monotone function pt = P{γ > t} satisfies the Cauchy equation ps+t = ps pt with initial condition p0 = 1, and so pt ≡e−ct for some constant 668 Foundations of Modern Probability c ≥0. Here c = 0 corresponds to case (ii) and c > 0 to case (iii). If Ξ is closed, then γ ∈Ξ on {γ < ∞}, and the regenerative property at γ yields γ = γ + γ ◦θγ > γ a.s. on {γ < ∞}, which implies γ = ∞a.s., corresponding to case (ii).
2 The complement (¯ Ξ)c is a countable union of disjoint intervals (σ, τ), each supporting an excursion of X away from a. The shifted excursions Yσ,τ = θσXτ are rcll processes in their own right. Let D0 be the space of possible excursion paths, write l(x) for the length of excursion x ∈D0, and put Dh = {x ∈D0; l(x) > h}. Let κh be the number of excursions of X longer than h. We need some basic facts.
Lemma 29.9 (long excursions) Fix any h > 0, and allow even h ≥0 when the recurrence time is positive. Then exactly one of these cases occurs: (i) κh = 0 a.s., (ii) κh = ∞a.s., (iii) κh is geometrically distributed with mean mh ∈[1, ∞).
The excursions Y h j are i.i.d., and in case (iii) we have l(Y h κh) = ∞.
Proof: Let κt h be the number of excursions in Dh completed at time t ∈ [0, ∞], and note that κτt h > 0 when τt = ∞. Writing ph = P{κh > 0}, we get for any t > 0 ph = P{κτt h > 0} + P κτt h = 0, κh ◦θτt > 0 = P{κτt h > 0} + P{κτt h = 0} ph.
As t →∞, we obtain ph = ph + (1 −ph) ph, which implies ph = 0 or 1.
When ph = 1, let σ = σ1 be the right endpoint of the first Dh-excursion, and define recursively σn+1 = σn + σ1 ◦θσn, starting with σ0 = 0. Writing q = P{σ < ∞} and using the regenerative property at each σn, we obtain P{κh > n} = qn. Thus, q = 1 yields κh = ∞a.s., whereas for q < 1 the variable κh is geometrically distributed with mean mh = (1−q)−1, proving the first assertion. Even the last claim holds by regeneration at each σn.
2 Write ˆ h = inf{h > 0; κh = 0 a.s.}. For any h < ˆ h, let νh be the common distribution of all excursions in Dh. The measures νh may be combined into a single σ-finite measure ν : Lemma 29.10 (excursion law, Itˆ o) Let X be regenerative at a with ¯ Ξ a.s.
perfect. Then (i) there exists a measure ν on D0, such that for every h ∈(0, ˆ h), νDh ∈(0, ∞), νh = ν( · |Dh), (ii) ν is unique up to a normalization, (iii) ν is bounded iffthe recurrence time is a.s. positive.
29. Local Time, Excursions, and Additive Functionals 669 Proof (OK): (i) Fix any h ≤k in (0, ˆ h), and let Y 1 h , Y 2 h , . . . be such as in Lemma 29.9. Then the first excursion in Dk is the first process Y j h belonging to Dk, and since the Y j h are i.i.d. νh, we have νk = νh( · |Dk), 0 < h ≤k < ˆ h.
(3) Now fix any k ∈(0, ˆ h), and define ˜ νh = νh/νhDk for all h ∈(0, k]. Then (3) yields ˜ νh′ = ˜ νh(·∩Dh′) for any h ≤h′ ≤k, and so ˜ νh increases as h →0 toward a measure ν on D0 with ν(· ∩Dh) = ˜ νh for all h ≤k. For any h ∈(0, ˆ h), we get ν( · |Dh) = ˜ νh∧k( · |Dh) = νh∧k( · |Dh) = νh.
(ii) If ν′ is another measure with the stated properties, then ν(· ∩Dh) νDk = νh νhDk = ν′(· ∩Dh) ν′Dk , h ≤k < ˆ h.
Letting h →0 for fixed k, we get ν = rν′ with r = νDk/ν′Dk.
(iii) If the recurrence time is positive, then (3) remains true for h = 0, and we may take ν = ν0. Otherwise, let h ≤k in (0, ˆ h), and write κh,k for the number of Dh-excursions up to the first completed excursion in Dk. For fixed k we have κh,k →∞a.s. as h →0, since ¯ Ξ is perfect and nowhere dense.
Noting that κh,k is geometrically distributed with mean Eκh,k = (νhDk)−1 = ν(Dk|Dh) −1 = νDh/νDk, we get νDh →∞, which shows that ν is unbounded.
2 We turn to the fundamental theorem of excursion theory, describing the excursion structure of X in terms of a Poisson process ξ on R+ × D0 and a diffuse random measure ζ on R+. The measure ζ, with associated cumulative process Lt = ζ[0, t], is unique up to a normalization and will be referred to as the excursion local time of X at a, whereas ξ is referred to as the excursion point process of X at a.
Theorem 29.11 (excursion local time and point process, L´ evy, Itˆ o) Let the process X be regenerative at a with ¯ Ξ a.s. perfect. Then there exist a non-decreasing, continuous, adapted process L on R+ with support ¯ Ξ a.s., a Poisson process ξ on R+ × D0 with E ξ = λ ⊗ν, and a constant c ≥0, such that (i) Ξ · λ = c L a.s., (ii) the excursions of X with associated L-values are given a.s. by the re-striction of ξ to [0, L∞].
The product ν · L is a.s. unique.
670 Foundations of Modern Probability Proof (OK, beginning): When Eγ = c > 0, we define ν = ν0/c, and introduce a Poisson process ξ on R+ × D0 with intensity λ ⊗ν, say with points (σj, ˜ Yj), j ∈N. Putting σ0 = 0, we see from Theorem 13.6 that the differences ˜ γj = σj−σj−1 are i.i.d. exponentially distributed with mean m. By Proposition 15.3 (i), the processes ˜ Yj are further independent of the σj and i.i.d. ν0. Writing ˜ κ = inf{j; l(˜ Yj) = ∞}, we see from Theorem 29.7 and Lemma 29.9 that γj, Yj; j ≤κ d = ˜ γj, ˜ Yj; j ≤˜ κ , (4) where the quantities on the left are the holding times and subsequent excursions of X. By Theorem 8.17 we may redefine ξ to make (4) hold a.s. Then the stated conditions become fulfilled with L = 1Ξ · λ.
When Eγ = 0, we may again choose ξ to be Poisson λ ⊗ν, but now with ν as in Lemma 29.10. For any h ∈(0, ˆ h), we may enumerate the points of ξ in R+ ×Dh as (σj h, ˜ Y h j ), j ∈N, and define ˜ κh = inf{j; l(Y h j ) = ∞}. The processes ˜ Y h j are i.i.d. νh, and Lemma 29.9 yields γh j , Y h j ; j ≤κh d = ˜ γh j , ˜ Y h j ; j ≤˜ κh , h ∈(0, ˆ h).
(5) Since the excursions in Dh form nested sub-arrays, the entire collections have the same finite-dimensional distributions, and so by Theorem 8.17 we may redefine ξ to make (5) hold a.s.
Let τ j h be the right endpoint of the j-th excursion in Dh, and define Lt = inf σj h; h, j > 0, τ j h ≥t , t ≥0.
Then for any t ≥0 and h, j > 0, Lt < σj h ⇒ t ≤τ j h ⇒ Lt ≤σj h.
(6) To see that L is continuous, we may assume that (5) holds identically. Since ν is unbounded, we may further assume that the set {σj h; h, j > 0} is dense in the interval [0, L∞]. If ΔLt > 0, there exist some i, j, h > 0 with Lt−< σi h < σj h < Lt+. By (6) we get t −ε ≤τ i h < τ j h ≤t + ε for every ε > 0, which is impossible. Thus, ΔLt = 0 for all t.
To prove ¯ Ξ ⊂supp L a.s., we may further take ¯ Ξ ω to be perfect and nowhere dense for each ω ∈Ω. If t ∈¯ Ξ, then for every ε > 0 there exist some i, j, h > 0 with t −ε < τ i h < τ j h < t + ε. By (6) we get Lt−ε ≤σi h < σj h ≤Lt+ε, and so Lt−ε < Lt+ε for all ε > 0, which implies t ∈supp L.
2 In the perfect case, it remains to establish the a.s. relation Ξ · λ = c L for a constant c ≥0, and to show that L is a.s. unique and adapted. The former claim is a consequence of Theorem 29.13 below, whereas the latter statement follows immediately from the next theorem.
Most standard constructions of L are special cases of the following result, where ηtA denotes the number of excursions in a set A ∈D0 completed by time t ≥0, so that η is an adapted, measure-valued process on D0. The result may be compared with the global construction in Corollary 29.6.
29. Local Time, Excursions, and Additive Functionals 671 Proposition 29.12 (approximation) For sets A1, A2, . . . ∈D0 with finite measures νAn →∞, we have (i) sup t≤u ηtAn νAn −Lt P →0, u ≥0, (ii) the convergence holds a.s. when the An are increasing.
In particular, we have ηtDh/νDh →Lt a.s. as h →0 for fixed t, which shows that L is a.s. determined by the regenerative set Ξ.
Proof: Choose ξ as in Theorem 29.11, and put ξs = ξ([0, s] × ·). If the An are increasing, then for any unit rate Poisson process N on R+, we have (ξsAn) d = (Ns νAn) for all s ≥0. Since t−1Nt →1 a.s. by the law of large numbers, we get ξsAn νAn →s a.s., s ≥0.
By the monotonicity of each side, this may be strengthen to sup s≤r ξsAn νAn −s →0 a.s., r ≥0.
The desired convergence now follows, by the continuity of L and the fact that ξLt−≤ηt ≤ξLt for all t ≥0.
Dropping the monotonicity assumption, but keeping the condition νAn ↑∞ for convenience, we may choose a non-decreasing sequence A′ n ∈D0 such that νAn = νA′ n for all n. Then clearly (ξsAn) d = (ξsA′ n) for fixed n, and so sup s≤r ξsAn νAn −s d = sup s≤r ξsA′ n νA′ n −s →0 a.s., r ≥0, which implies the convergence P →0 on the left. The asserted convergence now follows as before.
2 The excursion local time L may be described most conveniently in terms of its right-continuous inverse Ts = L−1 s = inf t ≥0; Lt > s , s ≥0, which will be shown to be a generalized subordinator, defined as a non-decreas-ing L´ evy process with possibly infinite jumps. To state the next result, form Ξ′ ⊂Ξ by omitting all points of Ξ that are isolated from the right. Recall that l(u) denotes the length of an excursion path u ∈D0.
Theorem 29.13 (inverse local time) Let L, ξ, ν, c be such as in Theorem 29.11. Then a.s.
(i) T = L−1 is a generalized subordinator with characteristics (c, ν ◦l−1) and range Ξ′ in R+, (ii) Ts = c s + s+ 0 l(u) ξ(dr du), s ≥0.
672 Foundations of Modern Probability Proof: We may discard the P-null set where L fails to be continuous with support ¯ Ξ. If Ts < ∞for some s ≥0, then Ts ∈supp L = ¯ Ξ by the definition of T, and since L is continuous, we get Ts ̸∈¯ Ξ \ Ξ′. Thus, T(R+) ⊂Ξ′ ∪{∞} a.s. Conversely, let t ∈Ξ′. Then for any ε > 0 we have Lt+ε > Lt, and so t ≤T ◦Lt ≤t + ε. As ε →0, we get T ◦Lt = t. Thus, Ξ′ ⊂T(R+) a.s.
The times Ts are optional by the right-continuity of F. Furthermore, Propo-sition 29.12 shows that, as long as Ts < ∞, the process θsT −Ts is obtainable from θTsX by a measurable map independent of s.
Using the regenerative property at each Ts, we conclude from Lemma 16.13 that T is a generalized subordinator, hence admitting a representation as in Theorem 16.3. Since the jumps of T agree with the interval lengths in (¯ Ξ)c, we obtain (ii) for a constant c ≥0.
The double integral in (ii) equals x (ξs ◦l−1)(dx), which shows that T has L´ evy measure E(ξ1 ◦l−1) = ν ◦l−1. Taking s = Lt in (ii), we get a.s. for any t ∈Ξ′ t = T ◦Lt = c Lt + Lt+ 0 l(u) ξ(dr du) = c Lt + (Ξc · λ)t, and so by subtraction c Lt = (1Ξ · λ)t a.s., which extends by continuity to arbitrary t ≥0. Noting that a.s. T −1[0, t] = [0, Lt] or [0, Lt) for all t ≥0, since T is strictly increasing, we have a.s.
ξ[0, t] = Lt = λ ◦T −1[0, t], t ≥0, which extends to ξ = λ ◦T −1 a.s. by Lemma 4.3.
2 To justify our terminology, we show that the semi-martingale and excursion local times agree whenever both exist.
Proposition 29.14 (reconciliation) Let X be a continuous semi-martingale with local time L, and let X be regenerative at some a ∈R with P{La ∞̸= 0} > 0. Then (i) Ξ = {t; Xt = a} is a.s. perfect and nowhere dense, (ii) La is a version of the excursion local time at a.
Proof: The regenerative set Ξ = {t; Xt = a} is closed by the continuity of X. If Ξ were a countable union of closed intervals, then La would vanish a.s., contrary to our hypothesis, and so by Theorem 29.7 it is perfect and nowhere dense. Let L be a version of the excursion local time at a, and put T = L−1.
Define Ys = La ◦Ts for s < L∞, and put Ys = ∞otherwise. The continuity of La yields Ys± = La ◦Ts± for every s < L∞. If ΔTs > 0, then La ◦Ts−= La ◦Ts, since (Ts−, Ts) is an excursion interval of X and La is continuous with support in Ξ. Thus, Y is a.s. continuous on [0, L∞).
By Corollary 29.6 and Proposition 29.12, the processes θsY −Ys can be obtained from θTsX for all s < L∞by a common measurable map. By the 29. Local Time, Excursions, and Additive Functionals 673 regenerative property at a, Y is then a generalized subordinator, and so by Theorem 16.3 and the continuity of Y , we have Ys ≡c s a.s. on [0, L∞) for a constant c ≥0. For t ∈Ξ′ we have a.s. T ◦Lt = t, and therefore La t = La ◦(T ◦Lt) = (La ◦T) ◦Lt = c Lt, which extends to R+ since both extremes are continuous with support in Ξ. 2 For Brownian motion it is convenient to normalize local time by Tanaka’s formula, which leads to a corresponding normalization of the excursion law ν.
By the spatial homogeneity of Brownian motion, we may restrict our attention to excursions from 0. Then excursions of different lengths have the same dis-tribution, apart from a scaling. For a precise statement, we define the scaling operators Sr on D by (Srf)t = r1/2ft/r, t ≥0, r > 0, f ∈D.
Theorem 29.15 (Brownian excursion) Let ν be the normalized excursion law of Brownian motion. Then there exists a unique distribution ˆ ν on the set of excursions of length one, such that ν = (2π)−1/2 ∞ 0 ˆ ν ◦S−1 r r−3/2dr.
(7) Proof: By Theorem 29.13, the inverse local time L−1 is a subordinator with L´ evy measure ν ◦l−1, where l(u) denotes the length of u. Furthermore, L d = M by Corollary 29.3, where Mt = sups≤t Bs, and so by Theorem 16.12 the measure ν ◦l−1 has density (2π)−1/2 r−3/2, r > 0. As in Theorem 8.5 there exists a probability kernel (νr): (0, ∞) →D0, such that νr ◦l−1 ≡δr and ν = (2π)−1/2 ∞ 0 νrr−3/2dr, (8) and we note that the measures νr are unique a.e. λ.
For any r > 0, the process ˜ B = SrB is again a Brownian motion, and by Corollary 29.6 the local time of ˜ B equals ˜ L = SrL. If B has an excursion u ending at time t, then the corresponding excursion Sru of ˜ B ends at rt, and the local time for ˜ B at the new excursion equals ˜ Lrt = r1/2Lt. Thus, the excursion process ˜ ξ for ˜ B is obtained from the process ξ for B through the mapping Tr : (s, u) →(r1/2s, Sru). Since ˜ ξ d = ξ, each Tr leaves the intensity measure λ ⊗ν invariant, and we get ν ◦S−1 r = r1/2ν, r > 0.
(9) Combining (8) and (9), we get for any r > 0 ∞ 0 νx ◦S−1 r x−3/2 dx = r1/2 ∞ 0 νx x−3/2 dx = ∞ 0 νrx x−3/2 dx, 674 Foundations of Modern Probability and the uniqueness in (8) yields νx ◦S−1 r = νrx, x > 0 a.e. λ, r > 0.
By Fubini’s theorem, we may then fix an x = c > 0 with νc ◦S−1 r = νcr, r > 0 a.s. λ.
Defining ˆ ν = νc ◦S−1 1/c, we conclude that for almost every r > 0, νr = νc(r/c) = νc ◦S−1 r/c = νc ◦S−1 1/c ◦S−1 r = ˆ ν ◦S−1 r .
Substituting this into (8) yields equation (7).
If μ is another probability measure with the stated properties, then for almost every r > 0 we have μ ◦S−1 r = ˆ ν ◦S−1 r , and hence μ = μ ◦S−1 r ◦S−1 1/r = ˆ ν ◦S−1 r ◦S−1 1/r = ˆ ν.
Thus, ˆ ν is unique.
2 By the continuity of paths, an excursion of Brownian motion is either pos-itive or negative, and by symmetry the two possibilities have the same proba-bility 1 2 under ˆ ν. This leads to the further decomposition ˆ ν = 1 2(ˆ ν+ + ˆ ν−). A process with distribution ˆ ν+ is called a (normalized) Brownian excursion.
For subsequent needs, we insert a simple computational result.
Lemma 29.16 (height distribution) Let ν be the excursion law of Brownian motion. Then ν u ∈D0; supt ut > h = (2 h)−1, h > 0.
Proof: By Tanaka’s formula, the process M = 2B ∨0 −L0 = B + |B| −L0 is a martingale, and so for τ = inf{t ≥0; Bt = h} we get E L0 τ∧t = 2 E Bτ∧t ∨0 , t ≥0.
Hence, by monotone and dominated convergence, EL0 τ = 2E(Bτ ∨0) = 2 h.
On the other hand, Theorem 29.11 shows that L0 τ is exponentially distributed with mean (νAh)−1, where Ah = {u; supt ut ≥h}.
2 We proceed to show that Brownian local time has some quite amazing spatial properties.
29. Local Time, Excursions, and Additive Functionals 675 Theorem 29.17 (space dependence, Ray, Knight) Let B be a Brownian mo-tion with local time L, and put τ = inf{t > 0; Bt = 1}. Then the process St = L1−t τ on [0, 1] is a squared Bessel process of order 2.
Several proofs are known. Here we derive the result from the previously developed excursion theory.
Proof (Walsh): Fix any u ∈[0, 1], put σ = Lu τ, and let ξ± denote the Poisson processes of positive and negative excursions from u. Write Y for the process B, stopped when it first hits u. Then Y ⊥ ⊥(ξ+, ξ−) and ξ+⊥ ⊥ξ−, and so ξ+⊥ ⊥(ξ−, Y ). Since σ is ξ+-measurable, we obtain ξ+⊥ ⊥σ(ξ−, Y ), and hence ξ+ σ ⊥ ⊥σ(ξ− σ , Y ), which implies the Markov property of Lx τ at x = u.
To find the corresponding transition kernels, fix any x ∈[0, u), and write h = u −x. Put τ0 = 0, and let τ1, τ2, . . . be the right endpoints of excursions from x exceeding u. Further define ζk = Lx τk+1 −Lx τk, k ≥0, so that Lx τ = ζ0 + · · · + ζκ with κ = sup{k; τk ≤τ}. By Lemma 29.16, the variables ζk are i.i.d. and exponentially distributed with mean 2h. Since κ agrees with the number of completed u -excursions before time τ reaching x, and since σ⊥ ⊥ξ−, we further see that κ is conditionally Poisson σ/2h, given σ.
Next we show that (σ, κ)⊥ ⊥(ζ0, ζ1, . . .). Then define σk = Lu τk. Since ξ−is Poisson, we have (σ1, σ2, . . .)⊥ ⊥(ζ1, ζ2, . . .), and so (σ, σ1, σ2, . . .)⊥ ⊥(Y, ζ1, ζ2, . . .).
The desired relation now follows, since κ is a measurable function of (σ, σ1, σ2, . . .), and ζ0 depends measurably on Y .
For any s ≥0, we may now compute E exp(−sLu−h τ ) σ = E Ee−s ζ0κ+1 σ = E (1 + 2sh)−κ−1 σ = (1 + 2sh)−1 exp −s σ 1 + 2sh .
By the Markov property of Lx τ, the last relation is equivalent, via the substitu-tions u = 1 −t and 2s = (a −t)−1, to the martingale property, for every a > 0, of the process Mt = (a −t)−1 exp −L1−t τ 2(a −t) , t ∈[0, a).
Now let X be a squared Bessel process of order 2, and note that L1 τ = X0 = 0 by Theorem 29.4. By Corollary 14.12, the process X is again Markov. To see that X has the same transition kernel as L1−t τ , it is enough to show that, for any a > 0, the process M remains a martingale when L1−t τ is replaced by Xt. This is clear from Itˆ o’s formula, if we note that X is a weak solution to the SDE dXt = 2X1/2 t dBt + 2dt.
2 For an important application of the last result, we show that the local time is strictly positive on the range of the process.
676 Foundations of Modern Probability Corollary 29.18 (range and support) Let M be a continuous local martingale with local time L. Then outside a fixed P-null set, {Lx t > 0} = inf s≤t Ms < x < sup s≤t Ms , x ∈R, t ≥0.
Proof: By Corollary 29.6 and the continuity of L, we have Lx t = 0 for x outside the stated interval, except on a fixed P-null set. To see that Lx t > 0 otherwise, we may reduce by Theorem 19.3 and Corollary 29.6 to the case where M is a Brownian motion B. Letting τu = inf{t ≥0; Bt = u}, we see from Theorems 19.6 (i) and 18.16 that, outside a fixed P-null set, Lx τu > 0, 0 ≤x < u ∈Q+.
(10) If 0 ≤x < sups≤t Bs for some t and x, there exists a u ∈Q+ with x < u < sups≤t Bs. But then τu < t, and (10) yields Lx t ≥Lx τu > 0. A similar argument applies to the case where infs≤t Bs < x ≤0.
2 Our third approach to local times is via additive functionals and their potentials. Here we consider a canonical Feller process X with state space S, associated terminal time ζ, probability measures Px, transition operators Tt, shift operators θt, and filtration F. By a continuous, additive functional (CAF) of X we mean a non-decreasing, continuous, adapted process A with A0 = 0 and Aζ∨t ≡Aζ, satisfying As+t = As + At ◦θs a.s., s, t ≥0, (11) where the ‘a.s.’ without qualification means Px-a.s. for every x. By the conti-nuity of A, we may choose the exceptional null set to be independent of t. If it can also be chosen to be independent of s, then A is said to be perfect.
For a simple example, let f ≥0 be a bounded, measurable function on S, and consider the associated elementary CAF At = t 0 f(Xs) ds, t ≥0.
(12) More generally, given a CAF A and a function f as above, we may define a new CAF f · A by (f · A)t = s≤t f(Xs) dAs, t ≥0. A less trivial example is given by the local time of X at a fixed point x, whenever it exists in either sense discussed above.
For any CAF A and constant α ≥0, we may introduce the associated α-potential U α A(x) = Ex ∞ 0 e−αt dAt, x ∈S, and put U α Af = U α f·A. In the special case where At ≡t ∧ζ, we shall often write U αf = U α Af. Note in particular that U α A = U αf = Rαf for the A in (12). If α = 0, we may omit the superscript and write U = U 0 and UA = U 0 A. We proceed to show that a CAF is determined by its α-potential, whenever the latter is finite.
29. Local Time, Excursions, and Additive Functionals 677 Lemma 29.19 (potentials of additive functionals) Let A, B be continuous, additive functionals of a Feller process X, and fix any α ≥0. Then U α A = U α B < ∞ ⇒ A = B a.s.
Proof: Define Aα t = s≤t e−αs dAs, and conclude from (11) and the Markov property at t that, for any x ∈S, Ex Aα ∞ Ft −Aα t = e−αtEx Aα ∞◦θt Ft = e−αt U α A(Xt).
(13) Comparing with the same relation for B, we see that Aα −Bα is a continuous Px -martingale of finite variation, and so Aα = Bα a.s. Px by Proposition 18.2.
Since x was arbitrary, we get A = B a.s.
2 For any CAF A of Brownian motion in Rd, we may introduce the associated Revuz measure νA, given for any measurable function g ≥0 on Rd by νAg = ¯ E(g · A)1, where ¯ E = Ex dx. When A is defined by (12), we get in particular νAg = ⟨f, g⟩, where ⟨·, ·⟩denotes the inner product in L2(Rd). In general, we need to show that νA is σ-finite.
Lemma 29.20 (Revuz measure) For a continuous, additive functional A of Brownian motion X in Rd, the associated Revuz measure νA is σ-finite.
Proof: Fix any integrable function f > 0 on Rd, and define g(x) = Ex ∞ 0 e−t−Atf(Xt) dt, x ∈Rd.
Using Corollary 17.19, the additivity of A, and Fubini’s theorem, we get U 1 Ag(x) = Ex ∞ 0 e−t dAt EXt ∞ 0 e−s−Asf(Xs) ds = Ex ∞ 0 e−t dAt ∞ 0 e−s−As◦θtf(Xs+t) ds = Ex ∞ 0 eAt dAt ∞ t e−s−Asf(Xs) ds = Ex ∞ 0 e−s−Asf(Xs) ds s 0 eAt dAt = Ex ∞ 0 e−s 1 −e−As f(Xs) ds ≤E0 ∞ 0 e−sf(Xs + x) ds.
Hence, by Fubini’s theorem, e−1νAg ≤ U 1 Ag(x) dx ≤ dx E0 ∞ 0 e−sf(Xs + x) ds = E0 ∞ 0 e−sds f(Xs + x) dx = f(x)dx < ∞.
678 Foundations of Modern Probability The assertion now follows since g > 0.
2 Now let pt(x) denote the transition density (2πt)−d/2e−|x|2/2t of Brownian motion in Rd, and put uα(x) = ∞ 0 e−αtpt(x) dt. For any measure μ on Rd, we may introduce the associated α -potential U αμ(x) = uα(x −y) μ(dy). We show that the Revuz measure has the same potential as the underlying CAF.
Theorem 29.21 (potentials of Revuz measure, Hunt, Revuz) Let A be a con-tinuous, additive functional of Brownian motion in Rd with Revuz measure νA.
Then U α A = U ανA, α ≥0.
Proof: By monotone convergence we may take α > 0. By Lemma 29.20, we may choose some positive functions fn ↑1 with νfn·A1 = νAfn < ∞for each n, and by dominated convergence we have U α fn·A ↑U α A and U ανfn·A ↑U ανA.
Thus, we may further assume that νA is bounded. Then clearly U α A < ∞a.e.
Now fix any bounded, continuous function f ≥0 on Rd, and conclude by dominated convergence that U αf is again bounded and continuous. Writing h = n−1 for an arbitrary n ∈N, we get by dominated convergence and the additivity of A νAU αf = ¯ E 1 0 U αf(Xs) dAs = lim n→∞¯ E Σ j <n U αf(Xjh)Ah ◦θjh.
Noting that the operator U α is self-adjoint and using the Markov property, we may write the expression on the right as Σ j <n ¯ E U αf(Xjh)EXjhAh = n U αf(x) ExAh dx = n !
f, U αE. Ah " .
To estimate the function U αE. Ah on the right, it is enough to consider arguments x with U α A(x) < ∞.
Using the Markov property of X and the additivity of A, we get U αE. Ah(x) = Ex ∞ 0 e−αsEXsAh ds = Ex ∞ 0 e−αs(Ah ◦θs) ds = Ex ∞ 0 e−αs As+h −As ds = eαh −1 Ex ∞ 0 e−αsAs ds −eαhEx h 0 e−αsAs ds.
(14) Integrating by parts gives Ex ∞ 0 e−αsAs ds = α−1Ex ∞ 0 e−αt dAt = α−1 U α A(x).
29. Local Time, Excursions, and Additive Functionals 679 Thus, as n = h−1 →∞, the first term on the right of (14) tends in the limit to ⟨f, U α A⟩. The second term is negligible, since !
f, E. Ah " < ⌢ ¯ EAh = h νA1 →0.
Hence, !
U ανA, f " = νAU αf = ⟨U α A, f⟩, and since f is arbitrary, we obtain U α A = U ανA a.e.
To extend this to an identity, fix any h > 0 and x ∈Rd. Using the additivity of A, the Markov property at h, the a.e. version, Fubini’s theorem, and the Chapman–Kolmogorov relation, we get eαhEx ∞ h e−αsdAs = Ex ∞ 0 e−αsdAs ◦θh = Ex U α A(Xh) = Ex U ανA(Xh) = νA(dy) Exuα(Xh −y) = eαh νA(dy) ∞ h e−αsps(x −y) ds.
The required identity U α A(x) = U ανA(x) now follows by monotone convergence as h →0.
2 It follows easily that a CAF is determined by its Revuz measure.
Corollary 29.22 (uniqueness) Let A, B be continuous, additive functionals of a Brownian motion in Rd. Then A = B a.s.
⇔ νA = νB.
Proof: By Lemma 29.20, we may take νA to be bounded, so that U α A < ∞ a.e. for all α > 0. Now νA determines U α A by Theorem 29.21, and the proof of Lemma 29.19 shows that U α A determines A a.s. Px, whenever U α A(x) < ∞.
Since Px ◦X−1 h ≪λd for each h > 0, it follows that A ◦θh is a.s. unique, and it remains to let h →0.
2 We turn to the reverse problem of constructing a CAF associated with a given potential. To motivate the following definition, we may take expected values in (13) to get e−αt Tt U α A ≤U α A. A function f on S is said to be uniformly α-excessive, if it is bounded and measurable with 0 ≤e−αt Ttf ≤f for all t ≥0, and such that ∥Ttf −f∥→0 as t →0, where ∥·∥denotes the supremum norm.
Theorem 29.23 (excessive functions and additive functionals, Volkonsky) Let X be a Feller process in S, fix any α > 0, and let f ≥0 be a uniformly α-excessive function on S. Then there exists an a.s. unique, perfect, continuous, additive functional A of X, such that f = U α A.
680 Foundations of Modern Probability Proof: For any bounded, measurable function g on S, we get by Fubini’s theorem and the Markov property of X 1 2 Ex ∞ 0 e−αtg(Xt) dt 2 = Ex ∞ 0 e−αtg(Xt) dt ∞ 0 e−α(t+h)g(Xt+h) dh = Ex ∞ 0 e−2αtg(Xt) dt ∞ 0 e−αh Thg(Xt) dh = Ex ∞ 0 e−2αtg U αg(Xt) dt = ∞ 0 e−2αt Ttg U αg(x) dt ≤∥U αg∥ ∞ 0 e−αt Tt|g|(x) dt ≤∥U αg∥ U α|g| .
(15) Now introduce for each h > 0 the bounded, non-negative functions gh = h−1 f −e−αh Thf , fh = U αgh = h−1 h 0 e−αs Tsf ds, and define Ah(t) = t 0 gh(Xs) ds, Mh(t) = Aα h(t) + e−αtfh(Xt).
As in (13), we note that the processes Mh are martingales under Px for every x. Using the continuity of the Ah, we get by Proposition 9.17 and (15), for x ∈S and as h, k →0, Ex Aα h −Aα k ∗2 < ⌢Ex sup t∈Q+ Mh(t) −Mk(t) 2 + ∥fh −fk∥2 < ⌢Ex Aα h(∞) −Aα k(∞) 2 + ∥fh −fk∥2 < ⌢∥fh −fk∥∥fh + fk∥+ ∥fh −fk∥2 →0.
Hence, there exists a continuous process A independent of x, such that Ex(Aα h− Aα)∗2 →0 for every x.
For a suitable sequence hn →0, we have (Aα hn →Aα)∗→0 a.s. Px for all x, and it follows easily that A is a.s. a perfect CAF. Taking limits in the relation fh(x) = ExAα h(∞), we also note that f(x) = ExAα(∞) = U α A(x). Thus, A has α -potential f.
2 The last result can be used to construct local times. Then say that a CAF A is supported by a set B ⊂S, if its set of increase is a.s. contained in the closure of the set {t ≥0; Xt ∈B}. In particular, a non-zero, perfect CAF supported by a singleton set {x} is called a local time at x. This terminology is clearly consistent with our earlier definitions of local time. Writing τx = inf{t > 0; Xt = x}, we say that x is regular (for itself) if τx = 0 a.s. Px. By Proposition 29.8, this holds iff, Px-a.s., the random set Ξx = {t ≥0; Xt = x} has no isolated points.
29. Local Time, Excursions, and Additive Functionals 681 Theorem 29.24 (additive-functional local time, Blumenthal & Getoor) Let X be a Feller process in S, and fix any a ∈S. Then (i) X has a local time L at a iffa is regular, (ii) L is then a.s. unique up to a normalization, (iii) U 1 L(x) = U 1 L(a) Exe−τa < ∞, x ∈S.
Proof: (iii) Let L be a local time at a. Comparing with the renewal process L−1 n , n ∈Z+, we see that supx,t Ex(Lt+h −Lt) < ∞for every h > 0, which implies U 1 L(x) < ∞for all x. By the strong Markov property at τ = τa, we get for any x ∈S U 1 L(x) = Ex L1 ∞−L1 τ = Ex e−τ L1 ∞◦θτ = Ex e−τEaL1 ∞ = U 1 L(a) Exe−τ.
(ii) Use Lemma 29.19.
(i) Define f(x) = Exe−τ, and note that f is bounded and measurable. Since τ ≤t + τ ◦θt, we also see from the Markov property at t that, for any x ∈S, f(x) = Ex e−τ ≥e−tEx e−τ ◦θt = e−tExEXte−τ = e−tExf(Xt) = e−t Ttf(x).
Noting that σt = t + τ ◦θt is non-decreasing and tends to 0 a.s. Pa as t →0, by the regularity of a, we further obtain 0 ≤f(x) −e−h Thf(x) = Ex e−τ −e−σh ≤Ex e−τ −e−σh+τ = Ex e−τEa 1 −e−σh ≤Ea 1 −e−σh →0.
Thus, f is uniformly 1-excessive, and so by Theorem 29.23 there exists a perfect CAF L with U 1 L = f.
To see that L is supported by the singleton {a}, we may write Ex L1 ∞−L1 τ = Ex e−τEaL1 ∞ = Ex e−τEae−τ = Ex e−τ = ExL1 ∞, which implies L1 τ = 0 a.s. Hence, Lτ = 0 a.s., and so the Markov property yields Lσt = Lt a.s. for rational t. This shows that L has a.s. no point of 682 Foundations of Modern Probability increase outside the closure of {t ≥0; Xt = a}.
2 We may finally represent any CAF A of a one-dimensional Brownian motion as a unique mixture of local times. Recall that νA denotes the Revuz measure of A.
Theorem 29.25 (additive functionals of Brownian motion, Volkonsky, Mc-Kean & Tanaka) Let X be a Brownian motion in R with local time L. Then a process A is a continuous additive functional of X iffa.s.
At = ∞ −∞Lx t ν(dx), t ≥0, (16) for a locally finite measure ν on R. The latter is then unique and equal to νA.
Proof: For any measure ν, we may define an associated process A as in (16). If ν is locally finite, we see from the continuity of L and dominated convergence that A is a.s. continuous, hence a CAF. In the opposite case, ν is unbounded in every neighborhood of some point a ∈R. Under Pa and for any t > 0, the process Lx t is further a.s. continuous and strictly positive near x = a. Hence, At = ∞a.s. Pa, and A fails to be a CAF.
Next, we see from Fubini’s theorem and Theorem 29.5 that ¯ ELx 1 = dy EyLx 1 = E0 Lx−y 1 dy = 1.
Since Lx is supported by {x}, we get for any CAF A as in (16) νAf = ¯ E(f · A)1 = ¯ E ν(dx) 1 0 f(Xt) dLx t = f(x) ν(dx) ¯ ELx 1 = νf, which shows that ν = νA.
Now consider any CAF A. By Lemma 29.20, there exists a function f > 0 with νAf < ∞. The process Bt = Lx t νf·A(dx) = Lx t f(x) νA(dx), t ≥0, is then a CAF with νB = νf·A, and Corollary 29.22 yields B = f · A a.s. Thus, A = f −1 · B a.s., and (16) follows.
2 Exercises 1. Show that the set of increase of Brownian local time at 0 agrees a.s. with the zero set Ξ. Extend this to any continuous local martingale. (Hint: Apply Lemma 14.15 to the process sgn(B−) · B in Theorem 29.1.) 29. Local Time, Excursions, and Additive Functionals 683 2. (L´ evy) Let M be the maximum process of a Brownian motion B. Show that B can be measurably recovered from M −B. (Hint: Use Corollaries 29.3 and 29.6.) 3. Use Corollary 29.3 to give a simple proof of the relation τ2 d = τ3 in Theorem 14.16.
(Hint: Recall that the maximum is unique by Lemma 14.15.) Also use Proposition 27.16 to give a direct proof of the relation τ1 d = τ2. (Hint: Integrate separately over the positive and negative excursions of B, and use Lemma 14.15 to identify the minimum.) 4. Show that for any c ∈(0, 1 2), Brownian local time Lx t is a.s. H¨ older continuous in x with exponent c, uniformly for bounded t. Also show that the bound c < 1 2 is best possible. (Hint: Apply Theorem 4.23 to the estimate in the proof of Theorem 29.4. For the last assertion, use Theorem 29.17.) 5. Consider a continuous local martingale M = B◦[M] a.s. for a Brownian motion B. Show that if B has local time Lx t , then the local time of M at x equals Lx ◦[M].
(Hint: Use Theorem 29.5, and note that L ◦[M] is jointly continuous.) 6.
For a continuous semi-martingale X, show that t 0 f(Xs, s) d[X]s = dx × t 0 f(x, s) dLx s outside a fixed null set. (Hint: Extend Theorem 29.5 by a monotone-class argument.) 7.
Let Ξ be the zero set of Brownian motion B.
Use Proposition 29.12 and Theorem 29.15 to construct its local time L directly from Ξ. Also use Lemma 29.16 to construct L from the heights of the excursions of B. Finally, use Corollary 29.6 to construct L from the occupation measure of B.
8. Let η be the maximum of a Brownian excursion. Show that E η = (π/2)1/2.
(Hint: Use Theorem 29.15 and Lemmas 29.16 and 4.4.) 9. Let L be the continuous local time of a continuous local martingale M with [M]∞= ∞a.s. Show that a.s. Lx t →∞as t →∞, uniformly on compacts. (Hint: Reduce to the case of Brownian motion. Then use Corollary 29.18, the strong Markov property, and the law of large numbers.) 10. Show that the intersection of two regenerative sets is regenerative.
11. Let L be the local time of a regenerative set, and let τ be an independent, exponentially distributed time.
Show that Lτ is again exponentially distributed.
(Hint: Derive a Cauchy equation for the function P{Lτ > s}.) 12. For an unbounded regenerative set Ξ, show that L is a.s. determined by Ξ.
(Hint: Use the law of large numbers.) 13. Let Ξ be a non-trivial regenerative set. Show that c Ξ d = Ξ for all c > 0 iff the inverse local time is strictly stable.
14. Let X be a Feller process in R, and put Mt = sups≤t Xs. Show that the points of increase of M form a regenerative set. Also prove the same statement for the process X∗ t = sups≤t|Xs| when −X d = X.
15. Let X be a strictly stable L´ evy process, let Ξ be the set of increase of the process Mt = sups≤t Xs, and write L for the local time of Ξ. Assuming Ξ to be non-trivial, show that L−1 is strictly stable. Also prove the corresponding statement for X∗when X is symmetric.
16. Give an explicit construction of the process X in Theorem 29.11, based on the Poisson process ξ and the constant c. (Hint: Use Theorem 29.13 to construct 684 Foundations of Modern Probability the time scale.) 17. Show that the semi-martingale local time is preserved by a change of measure Q = Zt · P. Use this result to extend Corollary 29.18 to Brownian motion with a suitable drift. (Hint: Use Proposition 19.21 and Corollary 19.26.) 18. Show that the class of continuous additive functionals is preserved under a suitable change of measure Q = Zt · P. Use this result to extend Theorem 29.25 to a Brownian motion with drift.
Chapter 30 Random Measures, Smoothing and Scattering Null arrays of random measures, weak and strong Poisson convergence, compound Poisson approximation, dissipative scattering, strong equiv-alence and tightness, thinning limits, Cox criterion, 0−∞laws, weak and strong averaging, asymptotically invariant smoothing and scattering, convolution powers, constant-velocity transforms, independence and Cox property, line and flat processes, spanning and density criteria, cluster dichotomy, truncation and Palm tree criteria Much of the probability theory developed so far has dealt with processes on the real line, where the discussion has often centered around Brownian motion and related processes. Here and in the next chapter, we return to the equally important and often neglected1 theory of discrete point measures, where many results have connections to the class of Poisson and related processes.
The present treatment may be thought of as continuing our discussion of the latter processes in Chapter 15, though there are also strong connections with the elementary Poisson limit theorems in Chapter 6, the ordinary or discounted compensators in Chapter 10, and the weak convergence theory for random measures and point processes in Chapter 23. We have also seen the important role of random measures for the large deviation theory of Chapter 24 and the excursion theory of Chapter 29.
In this chapter, we will see how Poisson and Cox processes arise in vari-ous ways as limits of more general point processes. Here we begin with the basic weak and strong limit theorems for null arrays of point processes, ex-tending the results for integer-valued random variables from Chapter 6. Next, we consider the Cox convergence of suitable point processes under dissipative shifts of the individual points. In the stationary case, we will prove versions for random measures of the multi-variate ergodic theorem in Chapter 25, and establish some smoothing limits under convolution, including a general version of Dobrushin’s celebrated Cox convergence theorem. We conclude with con-ditions for the shift invariance of stationary line and flat processes, and study the cluster dichotomy for spatial branching processes, leading to a Palm tree criterion for stability of the generating cluster kernel.
1The discrete theory is often thought of by the ignorant as easier and more elementary, whereas the opposite is often true.
Here and in the next chapter, we will encounter an abundance of deep and powerful theorems.
685 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 686 Foundations of Modern Probability The natural setting of the present theory is in terms of random measures.
Recall that a random measure on a localized Borel space S is defined as a locally finite kernel ξ from the basic probability space (Ω, A, P) into S. Thus, ξ(ω, B) is a locally finite measure in B ∈S for fixed ω ∈Ω, and a random variable in ω ∈Ω for each B ∈ˆ S. Equivalently, we may regard ξ as a random element in the space MS of locally finite measures μ on S, endowed with the σ-field generated by all evaluation maps μ →μB with B ∈ˆ S. It is called a point process if ξB ∈Z+ a.s. for all B ∈ˆ S, in which case ξ = i δσi a.s. for some random variables σ1, σ2, . . . , and we say that ξ is simple if the σi are a.s.
distinct, so that sups ξ{s} ≤1 a.s.
The random measures ξnj with n, j ∈N are said to form a null array, if they are independent in j for fixed n and such that, for each B ∈ˆ S, the random variables ξnjB form a null array in the sense of Chapter 6. As in Chapter 23, we write vd − →for convergence in distribution with respect to the vague topology on MS, as defined and studied in Appendix 5. Dissecting systems are defined in Chapter 1 and semi-rings in Chapter 23.
We begin with some criteria for Poisson convergence under independent superpositions, extending the elementary Poisson limit Theorem 6.7.
Theorem 30.1 (weak Poisson convergence, Grigelionis) Let (ξnj) be a null array of point processes on S, let ξ be a Poisson process on S with E ξ = λ, and fix a dissecting semi-ring I ⊂ˆ Sλ. Then Σj ξnj vd − →ξ iff (i) Σj P{ξnjI = 1} →λI, I ∈I, (ii) Σj P{ξnjB > 1} →0, B ∈ˆ S.
Just as for the main results in Chapter 7, the easiest and most transparent proof is via a compound Poisson approximation. Given a random measure ξ on S, we introduce an associated compound Poisson process ˜ ξ = μ η(dμ) on S, where η is a Poisson process on MS with intensity Eη = L(ξ) = λ. Writing πf : μ →μf, we note that ˜ ξf = η πf, and so by Lemma 15.2 (ii), Ee−˜ ξf = Ee−η πf = exp −λ(1 −e−πf) = exp Ee−ξf −1 .
(1) For a null array (ξnj) of random measures, we choose the associated compound Poisson processes ˜ ξnj to be independent in j for each n. Then ˜ ξn = Σj ˜ ξnj is again compound Poisson with characteristic measure λn = Σj λnj, where λnj = L(ξnj). By vd ∼we mean that, whenever either side converges along a sub-sequence N ′ ⊂N, so does the other side, and the two limits agree.
Lemma 30.2 (compound Poisson approximation) Let (ξnj) be a null array of random measures on S with associated compound Poisson array (˜ ξnj). Then Σ j ξnj vd ∼Σ j ˜ ξnj.
30. Random Measures, Smoothing and Scattering 687 Proof: Write ξn = Σj ξnj and ˜ ξn = Σj ˜ ξnj. By Theorem 23.16, it is enough to show that Ee−ξnf ∼Ee−˜ ξnf for every f ∈ˆ CS, in the sense that convergence of either side implies that both sides converge to the same limit. By (1) we have Ee−˜ ξnf = Π j Ee−˜ ξnjf = Π j exp Ee−ξnjf −1 .
Writing pnj = 1 −Ee−ξnjf, we need to show that Πj e−pnj ∼Πj (1 −pnj).
Since supj pnj →0 by the ‘null’ property of (ξnjf) and Lemma 6.6, this holds by a first order Taylor expansion of Σj log(1 −pnj).
2 Proof of Theorem 30.1: Fix any disjoint sets I1, . . . , Im ∈I, and put U = k Ik. Writing ρn = Σj L(ξnj) = E Σj ˜ ξnj, we get for any k ≤m ρn μIk = μU = 1 = ρn{μIk = 1} −ρn μIk = 1 < μU →λIk, ρn Σk μIk > 1 = ρn{μU > 1} →0, which shows that ρn v →λ ⊗δ1. Since ξ and all Σj ˜ ξnj are compound Poisson, we obtain Σj ˜ ξnj d →ξ, and Σj ξnj d →ξ follows by Lemma 30.2. The reverse argument yields the necessity of (i) and (ii).
2 The last theorem may be compared with the following strong version, where we use notations like ul − →, uld − →, ulP − →, . . . , as explained in Chapter 23.
Theorem 30.3 (strong Poisson convergence, Matthes et al.) Let (ξnj) be a null array of point processes on S, and let η be a Poisson process on S with E η = λ. Then Σj ξnj uld − →η whenever (i) Σj E ξnj ul − →λ, (ii) Σj E ξnjB; ξnjB > 1 →0, B ∈ˆ S.
Lemma 30.4 (Poisson approximation) Let (ξnj) be a null array of point pro-cesses on a bounded space S, and consider some Poisson processes ηn on S with intensities λn = Σ j E ξnj; ∥ξnj∥= 1 , n ∈N.
Then Σj ξnj ud ∼ηn whenever Σ j P ∥ξnj∥> 1 →0, lim sup n→∞∥λn∥< ∞.
Proof: Putting λnj = E(ξnj; ∥ξnj∥= 1), we may choose some independent Poisson processes ηnj on S with E ηnj = λnj, so that ηn = Σj ηnj is again Poisson with E ηn = λn. Clearly P ∥ηnj∥= 1 = ∥λnj∥e−∥λnj∥ = ∥λnj∥+ O(∥λnj∥2), P ∥ηnj∥> 1 = 1 −(1 + ∥λnj∥) e−∥λ∥ = O(∥λnj∥2), 688 Foundations of Modern Probability and by Theorem 15.4, E ηnj ∥ηnj∥= 1 = E ξnj ∥ξnj∥= 1 .
Hence, L(ξnj) −L(ηnj) < ⌢ P ∥ξnj∥= 1 −P ∥ηnj∥= 1 + P ∥ξnj∥> 1 + P ∥ηnj∥> 1 < ⌢P ∥ξnj∥> 1 + ∥λnj∥2, and so by independence, L Σ j ξnj −L(ηn) ≤Σ j L(ξnj) −L(ηnj) < ⌢Σ j P ∥ξnj∥> 1 + ∥λnj∥2 ≤Σ j P ∥ξnj∥> 1 + ∥λnj∥supjP{ξnj ̸= 0}, which tends to 0 as n →∞, by the stated hypotheses.
2 Proof of Theorem 30.3: We may choose S to be bounded, since for general S we may then consider the restrictions to an arbitrary bounded set B ∈ˆ S.
Put ξn = Σj ξnj, and choose some Poisson processes ηn with intensities λn, as in Lemma 30.4. Then (ii) yields Σ j E ξnj −λn ≤Σ j E ∥ξn∥; ∥ξnj∥> 1 →0, and so λn u →λ by (i), which implies ηn ud − →η by Theorem 23.21 (ii). Fur-thermore, the hypotheses of Lemma 30.4 hold by (i) and (ii), and so ξn ud ∼ηn.
Hence, by combination, L(ξn) −L(η) ≤ L(ξn) −L(ηn) + L(ηn) −L(η) →0, which means that ξn ud − →η.
2 We turn to some limits under independent dispersal of the individual points.
Say that the kernels νn : T →S are dissipative if supt νn(t, B) →0 for all B ∈ˆ S. Define ξn ulP − →ξ by ∥ξn −ξ∥B P →0 for all B ∈ˆ S.
Theorem 30.5 (dissipative scatter) For n ∈N, let ξn be a νn-transform of a point process ηn on T, for some dissipative kernels νn : T →S. Then for suitable ξ, η, (i) ηnνn vd − →η ⇔ ξn vd − →ξ, (ii) ηnνn ulP − →η ⇒ ξn uld − →ξ, in which case ξ is distributed as a Cox process directed by η.
The proof of part (ii) requires a lemma.
30. Random Measures, Smoothing and Scattering 689 Lemma 30.6 (strong equivalence and tightness) For ηn, νn as in Theorem 30.5, let ζn be Cox processes directed by the random measures ηnνn. Then (i) (ξn), (ζn), (ηnνn) are simultaneously tight, (ii) ξn uld ∼ζn holds whenever either side is tight.
Proof: (i) For tn > 0, write sn = 1 −e−tn, so that tn = −log(1 −sn). By Lemma 15.2, we have for any B ∈ˆ S E exp −tnζnB = E exp −snηn νnB , E exp −tn ξnB = E exp ηn log 1 −snνnB .
When tn →0, which holds iffsn →0, all three conditions tn(ξnB) P →0, tn(ζnB) P →0, sn(ηnνnB) P →0, are equivalent, and the assertion follows by Lemma 5.9.
(ii) We may take S to be bounded, so that the sequence ∥ηnνn∥is tight.
To prove the equivalence ξn ud ∼ζn, it suffices for any sub-sequence N ′ ⊂N to prove the required convergence along a further sub-sequence N ′′. By tightness we may then assume that ∥ηnνn∥converges in distribution, and by Theorems 5.31 and 8.17 we may even let this convergence hold a.s. In particular, ∥ηnνn∥ is then a.s. bounded.
Assuming the appropriate conditional independence, we may now apply Lemma 30.4 to the conditional distributions of the sequences (ξn) and (ζn), given (ηn), to obtain L(ξn| ηn) −L(ζn| ηn) →0 a.s., which implies ξn ud ∼ζn, as in the proof of Theorem 23.21 (ii).
2 Proof of Theorem 30.5: (i) By Lemma 15.2 (iii), Ee−ξnf = E exp ηn log νne−f , f ∈S+, n ∈N.
Now let f be supported by a fixed set B ∈ˆ S, and write g = 1 −e−f. Noting that for t ∈[0, 1), t ≤−log(1 −t) ≤t + O(t2), we get for any n ∈N E exp −ρnηn νng ≤E exp(−ξnf) ≤E exp −ηn νng , where 1 ≤ρn →1 a.s.
If ηnνn vd − →η, then even ρnηnνn vd − →η, and so Ee−ξnf →Ee−η g.
Since f was arbitrary, Theorem 23.16 yields ξn vd − →ξ, where ξ is a Cox process directed by η.
690 Foundations of Modern Probability Conversely, let ξn vd − →ξ for a point process ξ on S. By Theorem 23.15 and Lemma 15.2, we get lim t→0 inf n Ee−t ξnB = sup K∈K inf n Ee−ξn(B\K) = 1, B ∈ˆ S.
Since Ee−t′ηnνnB ≥Ee−t ξnB with t′ = 1 −e−t by Lemma 15.2, similar condi-tions hold for the random measures ηnνn, which are then tight by Theorem 23.15. If ηnνn vd − →η along a sub-sequence N ′ ⊂N, the direct assertion yields ξn vd − →ξ′ along N ′, where ξ′ is a Cox process directed by η. Since ξ′ d = ξ, the distribution of η is unique by Lemma 15.7, and so the convergence ηnνn vd − →η extends to the original sequence.
(ii) If ηnνn ulP − →η, then (ηnνn) is tight, and so by Lemma 30.6 (ii) it suffices to show that ξn uld − →ξ, which holds by Theorem 23.21 (ii).
2 In particular, we get by Theorem 30.5 (i): Corollary 30.7 (thinning limits) For n ∈N, let ξn be a pn-thinning of a point process ηn on S, where pn →0. Then for suitable ξ, η, ξn vd − →ξ ⇔ pnηn vd − →η in which case ξ is distributed as a Cox process directed by η.
This yields an interesting characterization of Cox processes.
Corollary 30.8 (Cox criterion, Mecke) For a point process ξ on S, these conditions are equivalent: (i) ξ is a Cox process directed by a random measure η, (ii) for every p ∈(0, 1), ξ is distributed as a p-thinning of a point process ξp.
Then ξp is again Cox and directed by a random measure ηp d = η/p.
Proof: If ξ and ξp are Cox processes directed by η and η/p, respectively, then Proposition 15.3 shows that ξ is distributed as a p -thinning of ξp. Con-versely, if the stated condition holds for every p ∈(0, 1), then ξ is Cox by Theorem 30.7.
2 A random measure ξ on Rd is said to be stationary if θsξ d = ξ for every s ∈Rd, where the shift operators θs on MRd are defined by (θsμ)B = μ(B −s) or (θsμ)f = μ(f ◦θs). We begin with a general property of stationary random sets and measures.
Lemma 30.9 ( 0−∞laws) (i) For a stationary random measure ξ on R or Z, we have a.s.
ξ ̸= 0 ⇔ ξR± = ∞.
30. Random Measures, Smoothing and Scattering 691 (ii) For a stationary, random closed set ϕ in R or Z, we have a.s.
ϕ ̸= ∅ ⇔ sup (±ϕ) = ∞.
Proof: (i) By the stationarity of ξ and Fatou’s lemma, we have for any t ∈R and h, ε > 0 P ξ[t, t + h) > ε = lim sup n→∞P ξ[(n −1)h, nh) > ε ≤P ξ[(n −1)h, nh) > ε i.o. ≤P{ ξR+ = ∞}.
Letting ε →0, h →∞, and t →−∞in this order, we get P{ξ ̸= 0} ≤ P{ξR+ = ∞}. Since ξR+ = ∞implies ξ ̸= 0, the two events agree a.s. By symmetry we have also ξR−= ∞a.s. on {ξ ̸= 0}.
(ii) Here we may write instead P ϕ ∩[t, t + h) ̸= ∅ = lim sup n→∞P ϕ ∩[(n −1)h, nh) ̸= ∅ ≤P ϕ ∩[(n −1)h, nh) ̸= ∅i.o. ≤P{sup ϕ = ∞}.
The assertion follows as before, as we let h →∞and then t →−∞.
2 For a stationary random measure ξ on Rd, we define the invariant σ-field by Iξ = ξ−1I, where I denotes the σ-field of shift-invariant, measurable sets in MRd. We further define the sample intensity of ξ as the ¯ R+-valued ran-dom variable ¯ ξ = E(ξB | Iξ)/λdB, where B ∈Bd with λdB ∈(0, ∞). This expression is independent of B, by the stationarity of ξ and Theorem 2.6.
We state a version for random measures of the multi-variate ergodic The-orem 25.14. Recall that, for a convex set B ⊂Rd, r(B) denotes the radius of the largest open ball included in B. Put I1 = [0, 1]d.
Theorem 30.10 (strong averaging, Nguyen & Zessin) Let ξ be a stationary random measure on Rd, and let B1 ⊂B2 ⊂· · · be bounded, convex sets in Bd with r(Bn) →∞. Then ξBn λdBn →¯ ξ a.s.
For any p ≥1, the convergence extends to Lp whenever ξI1 ∈Lp.
Proof: By Fubini’s theorem, we have for any A, B ∈Bd B(θsξ)A ds = B ds 1A(t −s) ξ(dt) = ξ(dt) B 1A(t −s) ds = ξ(1A ∗1B).
692 Foundations of Modern Probability Assuming |A| = 1 and A ⊂Sa = {s; |s| < a}, and putting B+ = B + Sa and B−= (Bc + Sa)c, we note that also 1A ∗1B−≤1B ≤1A ∗1B+. Applying this to the sets B = Bn gives λdB− n λdBn · ξ(1A ∗1B− n ) λdB− n ≤ξBn λdBn ≤λdB+ n λdBn · ξ(1A ∗1B+ n ) λdB+ n .
Since r(Bn) →∞, Lemma A6.5 (ii) yields λdB± n /λdBn →1. Applying The-orem 25.14 to the function f(μ) = μA and the convex sets B± n , we obtain ξ(1A ∗1B± n ) λdB± n →EIξξA = ¯ ξ, in the appropriate sense.
2 Turning to some weak ergodic theorems, we define a weight function f : Rd →R+ as a probability density on Rd. A sequence of such functions fn is said to be asymptotically invariant, if the corresponding property holds for the measures fn · λd, in the sense of Chapter 3.
Theorem 30.11 (weak averaging) Let ξ be a stationary random measure on Rd, let f1, f2, . . . be asymptotically invariant weight functions on Rd. Then for any p > 1, (i) ξfn P →¯ ξ, (ii) ξfn →¯ ξ in L1 ⇔E ξI1 < ∞, (iii) ξfn →¯ ξ in Lp ⇔the (ξfn)p are uniformly integrable.
In particular, (iii) holds when ξI1 ∈Lp and fn < ⌢1Bn/λdBn for some sets Bn as in Theorem 30.10.
Proof: (ii) By Theorem 30.10, we may choose some weight functions gm such that ξgm →¯ ξ in L1. Using Minkowski’s inequality, the stationarity of ξ, the invariance of ¯ ξ, and dominated convergence, we get as n →∞and then m →∞ ξfn −¯ ξ 1 ≤ ξfn −ξ(fn ∗gm) 1 + ξ(fn ∗gm) −¯ ξ 1 ≤E ξ fn −fn ∗gm + ξ(gm ◦θs) −¯ ξ 1fn(s) ds ≤E ¯ ξ λd fn −fn ◦θt gm(t) dt + ξgm −¯ ξ 1 →0.
(i) The point processes ξr = 1{¯ ξ ≤r}ξ with r > 0 are clearly stationary with E ξrI1 = E ¯ ξr = E ¯ ξ; ¯ ξ ≤r ≤r, r > 0, and so by (ii) we get ξrfn P →¯ ξr, which implies ξfn P →¯ ξ on {¯ ξ < ∞}. Next, we introduce the product-measurable processes Xk(s) = 1 ξB1 s ≤k , s ∈Rd, k ∈N, 30. Random Measures, Smoothing and Scattering 693 and note that the random measures ξk = Xk ·ξ are again stationary with ¯ ξ < ⌢k a.s. As before, we get ξfn ≥ξkfn P →¯ ξk, k ∈N.
(2) Since Xk ↑1, we have ξk ↑ξ and even ¯ ξk ↑¯ ξ a.s., by successive monotone convergence. For any sub-sequence N ′ ⊂N, the convergence in (2) holds a.s.
along a further sub-sequence N ′′, and so lim infn ξfn ≥¯ ξ a.s. along N ′′. In particular, ξfn P →¯ ξ remains true on {¯ ξ = ∞}.
(iii) This follows from (i) by Theorem 5.12.
2 Next we consider smoothing by convolution. For the notions of weak and strong asymptotic invariance, see Chapter 3.
Theorem 30.12 (invariant smoothing) Let ξ be a stationary random measure on Rd with ¯ ξ < ∞a.s., and let ν1, ν2, . . . be distributions on Rd. Then (i) ξ ∗νn vP − →¯ ξλd when the νn are weakly asymptotically invariant, (ii) ξ ∗νn ulP − →¯ ξλd when the νn are asymptotically invariant.
Proof: (i) By a simple approximation, we can choose a measure-determining sequence of continuous and compactly supported weight functions fk = λh∗gk, where λh denotes the uniform distribution on Ih. By a compactness argument based on Theorem A5.6, we may ensure that the fk are even convergence determining for the limit λd. Since the functions νn ∗λh ∗gk are asymptotically invariant in n for fixed h and k, we get by Theorem 30.11 (i) (ξ ∗νn)fk = ξ ∗νn ∗λh gk = ξ νn ∗λh ∗gk P →¯ ξλdgk = ¯ ξ, which implies ξ ∗νn vP − →¯ ξλd by a sub-sequence argument.
(ii) Let the νn be asymptotically invariant.
Then they are also weakly asymptotically invariant, and so by (i) we have ξ ∗νn vP − →¯ ξλd. To see that even ξ ∗νn ulP − →¯ ξλd, it suffices to prove, for any sub-sequence N ′ ⊂N, that ξ ∗νn ul − →¯ ξλd a.s. along a further sub-sequence N ′′ ⊂N ′. It is then enough to show that ξ ∗νn v →¯ ξλd a.s. implies ξ ∗νn ul − →¯ ξλd a.s. along a sub-sequence.
Replacing ξ by ξ/¯ ξ, we may further assume that ¯ ξ = 1.
Now let ξ ∗νn v →λd a.s. Since the νn are asymptotically invariant, Lemma 3.17 yields ∥νn −νn ∗ρ∥→0 for every probability measure ρ on Rd. Choosing ρ = f ·λd for a continuous function f ≥0 with bounded support C and writing γh = |νn −νn ∗ρ|, we get for any B ∈Bd E ξ ∗νn −ξ ∗νn ∗ρ B = E ξ ∗γn B = ∥γn∥λdB = νn −νn ∗ρ λdB →0, 694 Foundations of Modern Probability and so ∥ξ ∗νn −ξ ∗νn ∗ρ∥B →0 a.s. as n →∞along a sub-sequence.
By the assumptions on f, the a.s. convergence ξ ∗νn v →λd yields a.s.
ξ ∗νn ∗f x →(λd ∗f)x = 1, x ∈Rd.
Furthermore, we have a.s. for any compact set B ⊂Rd lim sup n→∞ sup x∈B ξ ∗νn ∗f x ≤∥f∥lim sup n→∞(ξ ∗νn)(B −C) ≤∥f∥λd(B −C) < ∞.
Hence, by dominated convergence, ξ ∗νn ∗ρ −λd B = B ξ ∗νn ∗f x −1 dx →0.
Combining the two estimates, we get a.s. for any B ∈Bd ξ ∗νn −λd B ≤ ξ ∗νn −ξ ∗νn ∗ρ B + ξ ∗νn ∗ρ −λd B →0, which means that ξ ∗νn ul − →λd a.s.
2 Combining Theorems 30.5 and 30.12, we obtain a fundamental limit theo-rem for asymptotically invariant transforms of point process.
Corollary 30.13 (invariant scatter, Dobrushin) Let ξ be a stationary point process on Rd with ¯ ξ < ∞a.s., form some νn-transforms ξn of ξ based on distributions νn on Rd, and let ζ be a Cox process directed by ¯ ξλd. Then (i) ξn vd − →ζ when the νn are weakly asymptotically invariant, (ii) ξn uld − →ζ when the νn are asymptotically invariant.
Proof: (i) Use Theorems 30.5 (i) and 30.12 (i).
(ii) Use Theorems 30.5 (ii) and 30.12 (ii).
2 In particular we may let the points of ξ, thought of as particles in Rd, perform independent random walks based on a common distribution μ. Here we may give precise criteria for weak and strong asymptotic invariance. Let μ∗n denote the n -th convolution power of μ. Say that μ is of lattice type, if it is supported by a shifted, proper sub-group of Rd. By μ ⊥ν we mean that the measures μ and ν are mutually singular.
Theorem 30.14 (convolution powers, Maruyama) For a distribution μ on Rd, we have (i) the μ∗n are weakly asymptotically invariant ⇔μ is non-lattice, (ii) the μ∗n are asymptotically invariant ⇔μ∗m̸⊥λd for some m ∈N.
30. Random Measures, Smoothing and Scattering 695 Proof: (i) If μ is non-lattice, we may write μ = 1 2 (μ1 + μ2), where μ1 is non-lattice with bounded support. Then μ∗n = 2−n n Σ k=0 n k μ∗k 1 ∗μ∗(n−k) 2 = μ∗m 1 ∗αn,m + βn,m, where 1−∥αn,m∥= ∥βn,m∥→0 as n →∞for fixed m. Since ∥μ∗ν∥≤∥μ∥∥ν∥ for any signed measures μ and ν, any weak asymptotic invariance of μ∗n 1 would extend to μ∗n. This reduces the proof to the case of measures μ with bounded support. By a linear transformation combined with a shift, we may further assume that μ has mean 0 and covariances δij.
By Theorem 6.11 and the estimate |∂iϕ∗n| < ⌢n−1−d/2, we have nd/2 μ∗n ∗δx ∗ph −ϕ∗n →0, x ∈Rd, h > 0, and so for any finite convex combination p of functions δx ∗ph, nd/2 μ∗n ∗δx −μ∗n ∗p →0, x ∈Rd.
Writing ρ = p · λd, we have for any r > 0, x ∈Rd, and n ∈N μ∗n ∗δx −μ∗n ∗ρ < ⌢rd nd/2 μ∗n ∗δx −μ∗n ∗p + μ∗n ∗(ρ ∗δx + ρ) Ic r√n .
Applying the central limit theorem with ν = ϕ · λd, we get lim sup n→∞ μ∗n ∗δx −μ∗n ∗ρ < ⌢2 νIc r, and r > 0 being arbitrary, we conclude that ∥(μ∗n ∗δx −μ∗n) ∗ρ∥→0.
For any h > 0, let λh be the uniform distribution on [0, h]d. By an el-ementary approximation, we may choose some measures ρm as above with ∥ρm −λh∥→0, and so μ∗n ∗δx −μ∗n ∗λh ≤ μ∗n ∗δx −μ∗n ∗ρm + 2 ∥ρm −λh∥, which tends to 0 as n →∞and then m →∞. The weak asymptotic invariance of μ∗n now follows by Lemma 3.18. The necessity is obvious.
(ii) Write μ∗n = μ′ n + μ′′ n with μ′ n ≪λd ⊥μ′′ n, and note that ∥μ′ n∥is non-decreasing, since μ′ n∗μ ≪λd. If ∥μ′ k∥> 0 for some k, then ∥μ′′ nk∥≤∥μ′′ k∥n →0, and so ∥μ′′ n∥→0. In particular, the μ∗n are non-lattice. For any x ∈Rd, μ∗n −δx ∗μ∗n = μ∗k ∗ μ∗(n−k) −δx ∗μ∗(n−k) ≤ μ′ k ∗ μ∗(n−k) −δx ∗μ∗(n−k) + 2 ∥μ′′ k∥, which tends to 0 by part (i), as n →∞and then k →∞, proving the suffi-ciency of the stated condition. The necessity holds by Lemma 3.17.
2 696 Foundations of Modern Probability We turn to the special case where the individual points move independently with constant velocities. For d = 1, we may think of this as modeling the traffic flow along a highway. In a space-time diagram, the evolution is described by a line process in Rd+1, which gives a basic connection to stochastic geometry.
Corollary 30.15 (constant velocities, Breiman, Stone) Let ξ be a stationary point process on Rd with ¯ ξ < ∞a.s., form some processes ξt, t ≥0, by inde-pendently moving the points of ξ with fixed velocities of distribution μ, and let ζ be a Cox process directed by ¯ ξλd. Then as t →∞, (i) the ξt are vaguely tight, (ii) μ is diffuse ⇒all distributional limits are Cox, (iii) μ is locally invariant ⇒ξt vd − →ζ, (iv) μ ≪λd ⇒ξt uld − →ζ.
Proof: For a random variable γ with distribution μ, the measures μt = L(γt) are weakly asymptotically invariant iffμ is locally invariant. Further-more, Lemma 3.19 shows that the μt are strictly asymptotically invariant iff μ ≪λd. Now (i) and (ii) follow by Theorem 30.13.
For general μ, all ξt have clearly the same sample intensity ¯ ξ, and so the stated tightness follows by Theorem 23.15 and Chebyshev’s inequality. If μ is diffuse, a simple compactness argument shows that the μt are dissipative as t →∞, and so all distributional limits are Cox by Theorem 30.5.
2 When ξ is Poisson, the independence property of the last theorem is clearly preserved at all times t ≥0, since the entire particle system is then time stationary. The preservation fails in general: Corollary 30.16 (stationarity and independence) Let ξ be a stationary point process on Rd, and form the processes (ξt) as in Corollary 30.15 for a non-lattice distribution μ. Then for a fixed t > 0, these conditions are equivalent: (i) the velocities at time t are i.i.d. and independent of ξt, (ii) ξ is a Cox process directed by ¯ ξλd, (iii) the line process (ξs) is space-time stationary.
Proof, (i) ⇒(ii): Assuming (i) for a fixed t > 0, let ν′ t be the distribution of the associated displacements, and let ν′′ t be the distribution of the inverse shifts leading from ξt to ξ. Then the distribution of ξ is invariant under independent shifts with distribution μ = ν′ t ∗ν′′ t , which is again non-lattice. The invariance extends by iteration to displacements with distributions μ∗n, which are weakly asymptotically invariant by Theorem 30.14 (i). Now (ii) follows by Corollary 30.13 (i).
(ii) ⇒(iii): Use Theorem 15.3.
(iii) ⇒(i): Note that the independence is preserved by temporal shifts. 2 30. Random Measures, Smoothing and Scattering 697 The previous results give a connection between particle systems in Rd and line processes in Rd+1.
We turn to a study of more general flat processes.
By a k -flat in Rd we mean an affine subspace x of dimension k, obtained by translation of a k -dimensional linear subspace πx of Rd, referred to as the direction of x. Write Fk for the space of k -flats in Rd and Φk for the subspace of flats through the origin.
Any reasonable parametrization makes Fk a localized Borel space, which allows us to consider point processes or more general random measures on Fk, where the former flat processes represent countable families of random flats in Rd. We assume such processes to give a.s. finite mass to the set of flats intersecting an arbitrary bounded set A ⊂Rd.
Translations on Rd induce shifts θx on Fk, and we say that a random measure η on Fk is stationary if θxη d = η for all x, and a.s. invariant if θxη ≡η a.s. Under suitable regularity conditions, we show that a stationary random measure on Fk is a.s. invariant.2 By Theorem 31.18 below, this yields the remarkable conclusion that any stationary and sufficiently regular flat process is Cox. We consider two basic instances of the former statement.
Theorem 30.17 (spanning criterion, Davidson, Krickeberg, OK) Let η be a stationary random measure on the space Fk of k -flats in Rd, where k < d.
Then η is a.s. invariant whenever 3 (i) span (πx, πy) = Rd, (x, y) ∈F 2 k a.e. E η2, (ii) E η2 is locally finite.
Proof: Choose the parametrization x = (p, r) ∈Φk × Rd−k of Fk, where p denotes the direction πx and r is the perpendicular shift bringing πx to x, as specified by an appropriate sign convention. By (ii), Theorem 3.4 applies and yields a disintegration Eη2 = ν ⊗μ, in terms of a σ-finite measure ν on Φ2 k and a kernel μ: Φ2 k →R2(d−k). Thus, for measurable f ≥0 on F 2 k , E η2f = ν(dp dq) μp,q(dr ds)f(p, q, r, s).
Since η is stationarity, E η2 is jointly invariant under shifts in Fk. Since the kernel μ is a.e. unique and the invariance is determined by countably many conditions, the joint invariance of E η2 carries over to μp,q, for ν -almost all pairs (p, q) ∈Φ2 k. Condition (i) yields span(p, q) = Rd a.e. ν.
For non-exceptional pairs (p, q), any shift x ∈(p ∩q)⊥can be written uniquely as xp + xq, with xp ∈p and xq ∈q. Thus, μp,q is a.e. invariant under the shifts xq ⊗xp, hence for all separate shifts in the two coordinates, and so μp,q = cp,q λ2(d−k) by Theorem 2.6. Absorbing the constants cp,q into ν gives 2Stationarity and invariance are often confused; here the distinction is of course crucial.
3With some effort, we can replace (ii) by a first-order moment condition; it is not known whether it can be omitted altogether.
698 Foundations of Modern Probability the simplified disintegration E η2 = ν ⊗λ2(d−k). Thus, for any bounded Borel set B ⊂Fk and shift x ∈Rd, E η2 B × θ−1 x B = ν(dp dq) μp,q Bp × θ−1 x Bq = ν(dp dq) μp,q(Bp × Bq) = E η2B2, where Bp, Bq are the sections of B at p, q, respectively. Combining this with the joint invariance of E η2, we obtain a.s.
E ηB −η θ−1 x B 2 = E η2B2 + E η2 θ−1 x B 2 −2E η2 B × θ−1 x B = 0, and so ηB = η(θ−1 x B) a.s., which extends to η = θxη a.s., showing that η is a.s. invariant.
2 The last criterion can only be fulfilled when 2 k ≥d, which excludes the important case of line processes in Rd for d ≥3. The following criterion is more restrictive but applies to arbitrary k and d. Here we write λk for the unique, rotation invariant probability measure on the homogeneous space Φk. For any u ∈Φh, we say that a random measure η or process X is u -stationary, if it is stationary under shifts in u.
Theorem 30.18 (density criterion, Papangelou, OK) Let η be a u-stationary random measure on the space Fk of k-flats in Rd, where k < d and u ∈Φd−k+1 are fixed. Then η is a.s. invariant whenever η ◦π−1 ≪λk a.s..
This follows from a similar result for processes X ≥0 on Fk: Lemma 30.19 (stationarity and invariance) Let X be a u-stationary, product-measurable process on Fk, for fixed k < d and u ∈Φd−k+1. Then Xa = Xb a.s., a, b ∈π−1t, t ∈Φk a.e. λk.
Proof: For any t = t0 ∈Φk, we may choose some ortho-normal vectors x1, . . . , xd−k ∈Rd satisfying span x1, . . . , xd−k, t = Rd, x1, . . . , xd−k ̸⊥t.
(3) For fixed x1, . . . , xd−k, the set G of flats t ∈Φk satisfying (3) is clearly open with full λk -measure, containing in particular a compact neighborhood C of t0. The compact space Φk is covered by finitely many such sets C with possibly different x1, . . . , xd−k, and it is enough to consider the restriction of X to one 30. Random Measures, Smoothing and Scattering 699 of the sets π−1C. Equivalently, we may take X to be supported by π−1C for a fixed C.
Fixing x1, . . . , xd−k accordingly, and letting a, b ∈π−1t with t ∈C, we have a = b + x for a linear combination x of x1, . . . , xd−k, and by iteration we may assume x = r xi for some r ∈R and i ≤d −k. By symmetry we may take i = 1, and we may also choose b = t, so that a = t + r x1.
The flats t ∈G may now be parametrized as follows, relative to the vector x1. Writing ˆ x1 for the projection of x1 onto t, we put p = |ˆ x1| ∈(0, 1). By continuity, p is bounded away from 0 and 1 on the compact set C.
Next, we introduce the unique unit vector y ∈span(x1, ˆ x1) satisfying y ⊥x1 and ⟨y, ˆ x1⟩> 0. Finally, let s ∈Φk−1 be the orthogonal complement of ˆ x1 in t. The map t →(p, y, s) is 1−1 and measurable on G, and by symmetry the vectors p, y, s are independent under λk, say with joint distribution μ.
The image ˆ λk = λk ◦p−1 is absolutely continuous on [0, 1] with a positive density, and so for any ε ∈(0, ε0] with a fixed ε0 > 0, there exists a continuous function gε : C →Φk that maps each t = (p, y, s) into a flat t′ = (p′, y, s) with p′ > p and ˆ λk(p, p′) = ε. By independence, 1Cλk ◦g−1 ε = 1gεC λk, ε ∈(0, ε0).
(4) Let p0 be the maximum of p on C, and define B0 = {(p, y, s); p ≤p0}. By Lemma 1.37, any function f ∈L2(μ) supported by B0 can be approximated in L2 by continuous functions fn with the same support. Combining with (4), we get as n →∞for fixed ε ∈(0, ε0) f ◦gε −fn ◦gε 2 2 = (μ ◦g−1 ε )(f −fn)2 = ∥f −fn∥2 2 →0.
Furthermore, we get by continuity and dominated convergence lim ε→0 fn −fn ◦gε 2 →0, n ∈N.
Using Minkowski’s inequality, and letting ε →0 and then n →∞, we obtain f −f ◦gε 2 ≤ fn −fn ◦gε 2 + 2 ∥f −fn∥2 →0.
(5) Truncating if necessary, we may take X to be bounded. Since it is also product-measurable, it is measurable on Fk for fixed ω ∈Ω by Lemma 1.28, and so by (5) lim ε→0 μ(dt) X(t) −X ◦gε(t) 2 = 0.
Hence, by Fubini’s theorem and dominated convergence, μ(dt) E X(t) −X ◦gε(t) 2 = E μ(dt) X(t) −X ◦gε(t) 2 →0, 700 Foundations of Modern Probability and so we may choose some εn →0 with E X(t) −X ◦gεn(t) 2 →0, t ∈C a.e. μ, (6) say for t ∈C′.
For any t ∈C and r ∈R, the flats gε(t) and a = t + r x1 are non-random and lie in the (k + 1) -dimensional span of x1, y, s, and so their intersection is non-empty. Thus, the flat tn = gεn(t) intersects both t and a, with flats of intersection parallel to s.
Now choose recursively some unit vectors s1, . . . , sk−1, each uniformly dis-tributed in the orthogonal complement of the preceding ones. The generated subspace s′ has the same distribution as s under μ, and it is further a.s. linearly independent of u, which implies span(u, s) = Rd, t ∈C′ a.e. μ, say for t ∈C′′. For such a t, every flat in π−1s equals s + x for an x ∈u, and so by (6) and the u -stationarity of X, E X(a) −X(tn) 2 = E X(t) −X(tn) 2 = E X(t) −X ◦gεn(t) 2 →0.
Hence, Minkowski’s inequality yields in L2(P) X(a) −X(t) 2 ≤ X(a) −X(tn) 2 + X(tn) −X(t) 2 →0, and so X(a) = X(t) a.s., as long as t ∈C′′.
2 Proof of Theorem 30.18: Every flat x ∈Fk can be written uniquely as x = πx + π′x, where πx ∈Φk and π′x ∈(πx)⊥, the orthogonal complement of πx in Rd. We may think of the pair (πx, π′x) as a parametrization of x.
Now consider a random flat ϕ ∈Φd−k with distribution λd−k. Since ϕ is gen-erated by some ortho-normal vectors α1, . . . , αd−k, each uniformly distributed in the orthogonal complement of the preceding ones, we have a.s.
dim span(u⊥, ϕ) = dim u⊥+ dim ϕ = (k −1) + (d −k) = d −1, and so the intersection u⊥∩ϕ has a.s. dimension 0. If a vector y ∈ϕ is not in the ϕ -projection of u, then x ⊥y for every x ∈u, which means that y ∈u⊥.
Assuming y ∈u⊥∩ϕ = {0}, we obtain y = 0, which shows that the ϕ -projection of u agrees a.s. with ϕ itself. In particular, any measure ν ≪λd−k+1 on u has a ϕ -projection ν′ ≪λd−k a.s. on ϕ.
Let νε be the uniform distribution on the ε -ball in u around 0. For any r ∈ Φk, write νr ε for the orthogonal projection of νε onto the subspace r⊥∈Φd−k, so that νr ε ≪λd−k on Φd−k for r ∈Φk a.e. λk. Since η ◦π−1 ≪μ a.s., η admits a disintegration λk(dr) ηr a.s., and so 30. Random Measures, Smoothing and Scattering 701 η ∗νε = λk(dr) ηr ∗νr ε a.s., ε > 0, with convolutions in Rd on the left and in π−1r on the right, for each r ∈Φk.
The u -stationarity of η carries over to η ∗νε, whereas the absolute continuity of νr ε carries over to ηr ∗νr ε, and then also to the mixture η ∗νε.
For fixed ε > 0, a version of Theorem 2.15 yields a u -stationary, product-measurable process Xε ≥0 on Fk, such that η ∗νε = Xε · (λk ⊗λd−k) in a suitable parameter space Φk × Rd−k. By Lemma 30.19, Xε a = Xε b a.s., a, b ∈π−1r, r ∈Φk a.e. λk.
Since η ◦π−1 ≪λk, we may extend this to all r ∈Φk by redefining X = 0 on the exceptional λk -set, which gives Xa = Xb a.s. whenever πa = πb.
Now fix any bounded, measurable sets I, J ∈Fk with J = I + h for some translation h ∈Rd−k in the parameter space. Then E (η ∗νε)I −(η ∗νε)J = E I(Xs −Xs+h) ds ≤E I Xs −Xs+h ds = I E Xs −Xs+h ds = 0, with all integrations with respect to λk ⊗λd−k, and so (η ∗νε)I = (η ∗νε)J a.s.
Since I was arbitrary, η ∗νε is then a.s. invariant under every fixed shift. Since also η∗νε v →η as ε →0, the measure η is a.s. h -invariant for fixed h. Applying this to a dense sequence of shifts h1, h2, . . . , and using the vague continuity of η ∗δh, we may extend the invariance to arbitrary h, for ω ∈Ω outside a fixed P-null set.
2 We return to the subject of spatial branching processes from Chapter 13, except that now we allow an arbitrary dependence between branching and spatial motion, as codified by a cluster kernel on S, defined as a probability kernel ν : S →NS, where νs represents the offspring distribution of a particle at s ∈S. Iterating ν, we obtain a discrete-time branching process in S, where the individuals in each generation give rise to independent sets of progeny distributed according to ν. This clearly defines a time-homogeneous Markov chain ξ = (ξn) in NS. The kernel ν is said to be critical if Eδs∥ξ1∥= 1 for all s ∈S, where ∥ξn∥= ξnS.
For measurable groups S = G, we note as before that translations of G induce shifts on NG. If ξ0 is stationary on G and ν is shift invariant, then each ξn is again stationary under shifts in G, as is in fact that entire branching tree ξ = (ξn). In this case, the generation sizes ∥ξn∥, when finite, form a classical Bienaym´ e process.
We also consider the cluster invariance ξ1 d = ξ0, which by Lemma 11.11 makes the entire process ξ a space-time stationary Markov chain in NS. For processes ξ generated by a critical and shift-invariant cluster kernel ν on S = 702 Foundations of Modern Probability Rd, we may distinguish between two radically different kinds of asymptotic behavior, depending on the nature of ν. We say that a point process ξ on Rd is cluster-invariant, if the generated cluster process (ξn) is stationary in n. Note that (ξn) is vaguely tight when E ¯ ξ < ∞, since E ξn ≡E ξ.
Theorem 30.20 (cluster dichotomy, Matthes et al.) Given a critical, invari-ant cluster kernel ν on Rd, exactly one of these cases occurs, for all stationary point process ξ on Rd with E ¯ ξ < ∞generating some cluster processes (ξn): (i) whenever ξn vd − →η along a sub-sequence4, we have E η = E ξ, (ii) ξn vd − →0.
Furthermore, (i) is equivalent to (iii) for every c > 0 there exists a cluster-invariant process ξ with E ¯ ξ = c.
We say that ν is stable if (i) holds, and otherwise unstable. For an intuitive understanding, we note that the branching tends to give rise to increasingly heavier but rarer ‘clumps’, which in turn get dispersed by the spatial motion.
Here the scattering effect wins when ν is stable, whereas the clumping wins when ν is unstable. Further note that (i) holds by Lemma 5.11 iff(ξnB) is uniformly integrable for every B ∈Bd.
Our proof requires some preliminaries. Given an invariant cluster kernel ν on Rd, write ν∗n for the n -th iterate of ν, and consider a point process χn with distribution ν∗n, representing a cluster of age n rooted at the origin. For any point process ξ0 on Rd, let (ξn) be the generated branching process, and note that ξn is a sum of independent clusters of age n rooted at the points of ξ0. Let κr n be the number of such clusters in ξn charging the ball B r 0 . If ξ0 is stationary with intensity c, then clearly E κr n = c P χnB r x > 0 dx < ∞, n ∈N, r > 0.
(7) Lemma 30.21 (monotonicity) For a critical, invariant cluster kernel ν on Rd, we have (i) E κr n is non-increasing in n for fixed r > 0, (ii) the limit in (i) is either 0 for all r or > 0 for all r.
Proof: (i) We may form χn+1 by iterated clustering in n steps, starting from χ. Assuming χ to be simple, write χu n for the associated cluster rooted at u. Using the conditional independence, Fubini’s theorem, the invariance of λd, and the criticality of ν, we obtain P χn+1B r x > 0 dx = P χ(du) χu nB r x > 0 dx ≤ dx E χ(du) 1 χu nB r x > 0 4Under stronger regularity conditions, we have even ξn vd − →η for some cluster-invariant process η with E η = E ξ.
30. Random Measures, Smoothing and Scattering 703 = dx Eχ(du) P χnB r x−u > 0 = Eχ(du) P χnB r x−u > 0 dx = Eχ(du) P χnB r x > 0 dx = P χnB r x > 0 dx.
For general χ, the same argument applies to any uniform randomizations of the processes χn.
(ii) This is clear since Eχr n is non-decreasing in r and Br 0 is covered by finitely many balls B1 x.
2 To prove Theorem 30.20, we may relate the stated properties to the limits Lr = lim n→∞ P χnBr x > 0 dx, r > 0.
(8) With stability interpreted for the moment as property (iii) of Theorem 30.20, we have the following criterion: Lemma 30.22 (hitting criterion, Matthes et al.) Let ν be a critical, invariant cluster kernel on Rd. Then ν is stable ⇔ Lr > 0, r > 0, Partial proof of Theorem 30.20: Our first aim is to show that (ii) ⇔Lr ≡0, (iii) ⇔Lr > 0.
Since (ii) and (iii) are incompatible, it is then enough to prove the implica-tions to the left, which will also show that the two properties are exhaustive, establishing the dichotomy of Theorem 30.20 with (i) replaced by (iii).
(Lr ≡0) ⇒(ii): Consider a stationary point process ξ on Rd with intensity c ∈(0, ∞), generating a cluster process (ξn). If Lr ≡0, then (7) yields κr n P →0, and so ξn vd − →0, proving (ii).
(Lr ̸= 0) ⇒(iii): Letting ξ be Poisson with intensity 1, we introduce some point processes ηn with distributions L(ηn) = n−1 n Σ k=1 L(ξk), n ∈N, which are again stationary with intensity 1. By Theorem 23.15, we have con-vergence ηn vd − →ζ along a sub-sequence N ′ ⊂N, where ζ is again stationary, and E ζ ≤λd by Lemma 5.11. Since L(ηn) −L(ηn−1) ≤(n −1) 1 n −1 −1 n + 1 n = 2 n →0, we have even ηn−1 vd − →ζ along N ′, which shows that ζ is ν -invariant.
Now let Lr > 0 for all r > 0. Fix an r > 0 with ζ ∂B r 0 = 0 a.s., and write p = Lr > 0. Then Theorem 15.3 (i) yields 704 Foundations of Modern Probability P ξnB r 0 = 0 = exp − P χnB r x > 0 dx →e−p, and so P ηnB r 0 = 0 = n−1 n Σ k=1 P ξkB r 0 = 0 →e−p.
Letting n →∞along N ′, we obtain P{ζB r 0 = 0} = e−p < 1, which shows that E ζ ̸= 0. Since the intensities of ξ0 and ζ are proportional, property (iii) follows.
2 To complete the proof of Theorem 30.20, it remains to show that (i) ⇔(iii).
Since (i) and (ii) are incompatible, we have in fact (i) ⇒(iii), so we need only prove that (iii) ⇒(i). Here we need a version of the following technical result.
Lemma 30.23 (truncation criteria, Matthes et al.) Let ν be a critical, in-variant cluster kernel on Rd, and fix any r > 0. Then (i) ν is stable iff lim k→∞inf n≥1 E χr,k n = 1, (ii) ν is unstable iff E χr,k n →0, k > 0.
For the moment, we can only prove this with stability understood in the sense of condition (iii) in Theorem 30.20. Once the theorem is fully proven, the lemma will follow in its stated form.
Partial proof: Since the conditions on the right are incompatible while those on the left are exhaustive, it is enough to prove the necessity of the two conditions.
(i) Let ξ be a stationary, cluster-invariant point process on Rd with intensity in (0, ∞), generating a cluster process (ξn). By stationarity and truncation of ξn or χn, we have for any r, k, n > 0 E ξr,k = E ξr,k n ≤E χr,k n Eξ.
As k →∞, we get E ξr,k v →Eξ by monotone convergence, and so Eξ ≤lim k→∞inf n≥1 E χr,k n Eξ.
Since Eξ ≍λd, we may finally cancel the factor Eξ on each side.
(ii) Using Fubini’s theorem, the invariance of λd, and the equivalence of u ∈Br x and x ∈Br u, we get for any r, k, n > 0 P χnB r x > 0 dx ≥ P χr,k n B r x > 0 dx ≥k−1 Eχr,k n B r x dx = k−1E χr,k n (du) 1 x ∈B r u dx = k−1E χr,k n λdB r 0 .
30. Random Measures, Smoothing and Scattering 705 If ν is unstable, the left-hand side tends to 0 as n →∞, and the stated condi-tion follows.
2 We may now complete the main proof: Proof of Theorem 30.20, (iii) ⇒(i): By monotone convergence we may re-place ξ by ξr,k for some fixed r, k > 0, and by Lemma 30.23 we may also replace χn by χr,k n . Letting κn be the number of truncated clusters in ξn hitting B r 0 , we have ξnB r 0 ≤k κn, and so it suffices to show that (κn) is uniformly inte-grable. By conditional independence, κn is the total mass of a pn-thinning of the measure ξ′(dx) = ξ(−dx), where p r,k n (x) = P χr,k n B r x > 0 , x ∈Rd, and so E(κn)2 ≤E ξ′p r,k n 2 + E ξ′p r,k n .
Writing gr for the uniform probability density on B r 0 , we note that p r,k n ≤ p 2r,k n ∗gr, and so ξ′p r,k n ≤ξ′ p 2r,k n ∗gr = (ξ′ ∗gr)p 2r,k n < ⌢λdp 2r,k n < ⌢E ξnB 2r 0 < ⌢λdB 2r 0 .
Thus, E(κn)2 is bounded, and the required uniform integrability follows.
2 Putting νn = L(χn), we may write the cluster iteration as νm+n = νm ◦νn, m, n ∈N.
For every n ∈N, we introduce a point process ηn with the centered Palm distribution5 of νn, given by Ef(ηn) = E f θ−xχn χn(dx), n ∈N.
The corresponding reduced Palm distributions ν0 n = L(ηn−δ0) satisfy a similar recursive property: Lemma 30.24 (Palm recursion) Let νn = L(χn), n ∈N, with associated reduced, centered Palm distributions ν0 n. Then ν0 m+n = ν0 m ◦νn ∗ν0 n, m, n ∈N.
5Though no knowledge of general Palm measures is needed here, some basic ideas from Chapter 31 may be helpful for motivation.
706 Foundations of Modern Probability Proof: For any cluster kernels ν1, ν2 on Rd, we need to show that (ν1 ◦ν2)0 = ν0 1 ◦ν2 ∗ν0 2.
(9) When d = 0, the measures νi are determined by their generating functions fi(s) = E sχi on [0, 1], and the kernel composition ν = ν1 ◦ν2 becomes equiv-alent to f = f1 ◦f2. The reduced Palm measure ν0 i has generating function E ηisηi−1 = f ′(s), and (9) holds by the elementary chain rule for differentia-tion, in the form (f1 ◦f2)′ = (f ′ 1 ◦f2)f ′ 2. The general case follows by suitable conditioning.
2 Using the last result, we may construct recursively an increasing sequence of Palm trees ηn, along with a random walk ζ = (ζn) in Rd based on the distri-bution ρ = Eχ, such that each ηn is rooted at −ζn and satisfies Δηn⊥ ⊥ζn ηn−1.
We may express the truncation criteria of Lemma 30.23 in terms of the Palm tree η = (ηn). The results are useful to derive some explicit conditions for stability.6 Theorem 30.25 (Palm tree criteria) Let ν be a critical, invariant cluster kernel on Rd with generated Palm tree (ηn), and fix any r > 0. Then (i) ν is stable ⇔ supn ηnB r 0 < ∞a.s., (ii) ν is unstable ⇔ ηnB r 0 →∞a.s.
Proof: By the definitions of χr,k n and ηn, E χr,k n = E 1 χnB r x ≤k χn(dx) = P ηnB r 0 ≤k .
Using the monotonicity of ηn, we get as k →∞ infnE χr,k n = infnP ηnB r 0 ≤k = P ∩n ηnB r 0 ≤k = P η∞B r 0 ≤k →P η∞B r 0 < ∞ .
The stated criteria now follow by Lemma 30.23.
2 Exercises 1. Show by examples that Theorem 30.1 fails in both directions if we replace (i) by j E ξnj v →λ. (Hint: We may take λ = δ0 in R.) 6From this point on the theory becomes very technical, and we refrain from pursuing those implications.
30. Random Measures, Smoothing and Scattering 707 2. Show by examples that Theorem 30.1 fails in boths directions if we replace (i)−(ii) by the sole condition j P{ξnjI > 0} →λI for all I ∈I. (Hint: Again we may take λ = δ0.) 3. Derive Theorem 30.1 from Theorem 6.7. (Hint: First let μ be diffuse and use Theorem 23.25. Then extend to the general case by a suitable randomization.) 4. Show that Theorems 30.1 and 30.3 fail if we drop the null-array assumption.
5. Give an example of a null array (ξnj) of point processes on S and a Poisson process ξ, such that Σ j ξnj vd − →ξ but not Σ j ξnj uld − →ξ. (Hint: Take S = R with E ξ = λ, and choose the ξnj to be supported by Q.) 6. Show that the conditions in Theorem 30.5 (i) imply (ξn, ηnνn) vd − →(ξ, η), with ξ and η related as stated. State and prove a similar extension of Corollary 30.7.
7. For an rcll, exchangeable process X on R+, use Theorem 27.10 and Corollary 30.8 to show that the point process of jump sizes on [0, 1] is Cox. Also conclude from Theorem 30.7 and the law of large numbers that the point process of jump times and sizes is Cox with directing random measure of the form ν ⊗λ.
8. Give a second proof of Corollary 30.8, by approximating in the Laplace trans-forms of Lemma 15.2.
9. For a stationary sequence of random elements ξ1, ξ2, . . . in S and a set B ∈S, show that if ξn ∈B occurs for some n, then it holds for infinitely many n. (Hint: Use Lemma 30.9.) 10. Show that Lemma 30.9 can be strengthened to lim inft t−1ξ[0, t] > 0 a.s. on ξ ̸= 0. (Hint: Use Corollary 30.10.) 11. Give an example of a stationary, simple point process ξ on Rd with a.s. infinite sample intensity ¯ ξ.
12. Give an example of a stationary random measure ξ and some distributions νn on Rd, such that ξ ∗νn vP − →¯ ξλd but not ξ ∗νn ulP − →¯ ξλd. (Hint: Let ξ be a point process, and choose the νn to be weakly asymptotically invariant with support in Qd.) 13. Give an example of a stationary point process ξ on Rd with νn-transforms ξn and a Cox process ζ directed by ¯ ξλd, such that ξn vd − →ζ but not ξn uld − →ζ. (Hint: Choose the νn to be weakly asymptotically invariant with support in Qd.) 14. In the context of Corollary 30.15, show that we may have ξt vd − →ζ while ξt uld − →ζ fails. (Hint: Let ξ be of lattice type, and choose μ to be locally invariant with μ ⊥λd.) 15. Show that if the process ξ in Corollary 30.15 is stationary Poisson, then the generated process (ξt) is space-time stationary for every choice of μ. (Hint: Show that (ξt) is again Poisson, using Theorem 15.3.) 16. For the random measure η in Theorem 30.17, show that if E (η ◦π−1)2 ≪λ2 k for the homogeneous measure λk on Φd k, then the spanning condition holds when 2k ≥d but fails for 2k < d. Thus, for line processes in Rd, it holds when d = 2 but not for d ≥3. (Hint: The set where the condition fails has dimension < 2k.) 17.
Let (ξn) be a cluster process in Rd based on a critical, invariant cluster kernel ν, starting with a stationary Poisson process ξ0 with intensity 1. Show that the process of ancestors of ξn at time 0 is again stationary Poisson, and find its 708 Foundations of Modern Probability asymptotic rate as n →∞. Also find the asymptotic size distribution of the cluster components of ξn. (Hint: Use Theorem 13.18.) Chapter 31 Palm and Gibbs Kernels, Local Approximation Palm kernel, supporting measure, invariant Palm disintegration, unique-ness and inversion, time–cycle duality, local approximation, Palm av-erages, mixed Poisson and binomial processes, Poisson criterion, con-ditioning and iteration, Palm–density duality, reduced Palm measures, compound Cambell measure, Gibbs and Papangelou kernels, invariance and Cox property, Palm–Gibbs duality, inner and outer conditioning Palm measures may be regarded as extensions of regular conditional distri-butions, which explains their fundamental importance. Their study leads in particular to an amazingly rich theory of conditioning in point processes. We consider first the case of stationary random measures ξ, where there is only a single Palm measure, describing the local behavior of the entire process. Here a highlight of the theory is the fundamental theorem of Kaplan, giving a re-markable connection between stationary processes in discrete and continuous time.
Dropping the condition of stationarity, we get a whole family of Palm mea-sures, one for each point in the underlying space S, combining into a kernel on S formed by disintegration of the underlying Campbell measure1. Among the key results in the non-stationary case, we note the local approximations in Theorem 31.9, justifying the interpretation of the Palm distribution of a point process ξ at a point s ∈S as the conditional distribution, given that ξ has a point at s. We further note the iteration approach in Theorem 31.12, where the Palm construction may be simplified through a preliminary conditioning, and the duality relation in Theorem 31.13, which enables us to derive smoothness properties of the Palm kernel from the possible smoothness of an associated conditional intensity.
We also call attention to the higher order Palm measures of a point process, obtained by disintegration of a compound Campbell measure, and allowing an interpretation in terms of inner conditioning. This leads naturally to the dual notion of Gibbs kernel, describing the corresponding outer conditioning. We further derive some basic characterizations involving the Papangelou kernel, obtained by restricting the Gibbs kernel to individual point masses.
1Note the analogy with the disintegration approach to conditional distributions in The-orem 8.5.
709 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 710 Foundations of Modern Probability Random measures have already appeared in various contexts in earlier chap-ters, including the more comprehensive treatments in Chapters 15 and 30. Here we just recall the definition of a random measure on S as a locally finite kernel from the basic probability space Ω into S, and a point process as an integer-valued random measure, supported by a locally finite random set. Given a random measure ξ on S and a random element η in T, we define the associated Campbell measure Cξ,η on S × T by Cξ,ηf = E ξ(ds) f(s, η), f ≥0.
Theorem 31.1 (Palm disintegration) Consider a random measure ξ on S and a random element η in T, such that Cξ,η is σ-finite. Then (i) there exist a σ-finite measure ν ∼E ξ on S and a kernel L(η ∥ξ): S →T, such that Cξ,η = ν ⊗L(η ∥ξ), (ii) when E ξ = ν is σ-finite, we can choose L(η ∥ξ) to be a probability kernel.
Proof: Use Theorem 3.4.
2 The σ-finiteness of the Campbell measure is not a serious restriction, since we can always replace η by the pair (ξ, η), for which it holds automatically.
Putting μ = L(η ∥ξ), we can write the Palm disintegration as E ξ(ds) f(s, η) = ν(ds) μs(dt) f(s, t), f ≥0, just as for the conditional distributions in Theorem 8.5. Here L(η ∥ξ) is called the Palm kernel of η with respect to ξ, and its values L(η ∥ξ)s are known as Palm measures or, in case of (ii), as Palm distributions at the points s.
Furthermore, ν is called a supporting measure of ξ. When ξ = δτ for a random element τ in S, we may choose ν = E ξ = L(τ) to obtain L(η ∥ξ)s = L(η | τ)s, s ∈S a.e. L(τ), which justifies our notation and suggests that we think of the Palm measures as extensions of the regular conditional distributions from Chapter 8. Further justifications are provided by Theorems 31.5 and 31.9 below.
Now let G be a measurable group with Haar measure λ, acting measurably on the Borel spaces S and T. Given a random measure ξ on S and a random element η in T, we say that the pair (ξ, η) is G -stationary if θr(ξ, η) d = (ξ, η) for all r ∈G, where θrη = rη and θrξ = ξ ◦θ−1 r , so that (θrξ)B = ξ(θ−1 r B), (θrξ)f = ξ(f ◦θr).
If S = G, we assume ξB < ∞a.s. when λB < ∞, which makes λ an invariant supporting measure for ξ. In particular we may choose η = ξ, or replace η by the pair (ξ, η).
31. Palm and Gibbs Kernels, Local Approximation 711 If the pair (ξ, η) is jointly stationary with respect to a group G, we may choose the Palm disintegration to be G -invariant. When S = G, the entire Palm kernel L(η ∥ξ) is determined by the single measure L(η ∥ξ)ι, then referred to as the Palm measure of η with respect to ξ.
Theorem 31.2 (invariant Palm disintegration) Let G be a measurable group with Haar measure λ, acting properly on S and measurably on T, where S, T are Borel. Consider a random measure ξ on S and a random element η in T, where (ξ, η) is stationary and Cξ,η is σ-finite. Then (i) there exist a σ-finite, G-invariant measure ν ∼E ξ on S and a G-invar-iant kernel L(η ∥ξ): S →T, such that Cξ,η = ν ⊗L(η ∥ξ), (ii) when S = G we may choose ν = λ, in which case L(η ∥ξ) is given, for any B ∈G with λB = 1, by E f(η) ξ r = E B f rp−1η ξ(dp), r ∈G, f ≥0.
(1) Proof: Use Theorem 3.14.
2 When G = Rd with Lebesgue measure λd, the Palm definition in Theorem 31.2 (ii) is essentially a 1−1 correspondence, and we proceed to derive some useful inversion formulas. If ξ is a simple point process, then so is any ‘Palm version’ ˜ ξ with distribution L(ξ ∥ξ)0, and furthermore ˜ ξ{0} = 1. The inversion may then be stated in terms of the Voronoi cell around 0, given by Vμ = x ∈Rd; μB|x| x = 0 , μ ∈NRd, where B r x denotes the open ball around x of radius r > 0. When d = 1, the supporting points of μ may be enumerated in increasing order as tn(μ), subject to the convention t0(μ) ≤0 < t1(μ).
To avoid repetitions, we assume throughout that R or Rd acts measurably on the Borel space T. We further write E ξ = E(· ∥ξ)0, for convenience.
Theorem 31.3 (uniqueness and inversion) For a stationary pair of a random measure ξ on Rd and a random element η in T, the measure L(ξ, η; ξ ̸= 0) is determined by L(ξ, η ∥ξ)0, and for any measurable functions f ≥0 on T and g > 0 on Rd with ξg < ∞a.s., we have (i) E f(η); ξ ̸= 0 = E ξ f(θxη) ξ(g ◦θx) g(x) dx, (ii) when ξ is a simple point process on Rd, E f(η); ξ ̸= 0 = E ξ V0(ξ) f(θ−xη) dx, 712 Foundations of Modern Probability (iii) when ξ is a simple point process on R, E f(η); ξ ̸= 0 = E ξ t1(ξ) 0 f(θ−rη) dr.
Replacing η by (ξ, η) yields inversion formulas for the Palm measures L(ξ, η ∥ξ)0. A similar remark applies to subsequent theorems.
Proof: We may prove the stated formulas with η replaced by ζ = (ξ, η).
(i) Multiplying (1) by λdB and extending by a monotone-class argument, we get for any measurable function f ≥0 E ξ f(ζ, x) dx = E f θ−xζ, x ξ(dx), and so by substitution, E ξ f θxζ, x dx = E f(ζ, x) ξ(dx).
(2) In particular, we have for measurable f, g ≥0 E ξ f(θxζ) g(x) dx = Ef(ζ) ξg.
Now choose g > 0 with ξg < ∞a.s., so that ξg > 0 iffξ ̸= 0. The assertion follows as we replace f by the function h(μ, t) = f(μ, t) μg 1{μg > 0}, μ ∈MRd, t ∈T.
(ii) Apply (2) to the function h(μ, t, x) = f(μ, t) 1 μB|x| 0 = 0 .
Since (θxμ)B|x| 0 = 0 iff−x ∈V0(μ), and since a stationary point process ξ ̸= 0 has an a.s. unique point closest to 0, we get E ξ V0(ξ) f(θ−xζ) dx = E ξ f(θxζ) 1 (θxξ)B|x| 0 = 0 dx = E f(ζ) 1 ξB|x| 0 = 0 ξ(dx) = E f(ζ); ξ ̸= 0 .
(iii) Apply (2) to the function h(μ, t, r) = f(μ, t) 1 t0(μ) = r .
Since t0(θrξ) = r iff−r ∈[0, t1(ξ)), and since |t0(ξ)| < ∞a.s. when ξ ̸= 0 by Theorem 30.9, we get E ξ t1(ξ) 0 f(θ−rζ) dr = E ξ f(θrζ) 1 t0(θrξ) = r dr = E f(ζ) 1 t0(ξ) = r ξ(dr) = E f(ζ); ξ ̸= 0 .
2 31. Palm and Gibbs Kernels, Local Approximation 713 For pairs of a simple point process ξ on R and a random element η in T, we prove a striking relationship between stationarity in discrete and continuous time. Assuming ξ R± = ∞a.s., we say that the pair (ξ, η) is cycle-stationary if θ−τn(ξ, η) d = (ξ, η) for every n ∈Z, where τn = tn(ξ). Informally, the excursions of (ξ, η) between the points of ξ then form a stationary sequence. To make the distinction clear, we may now refer to ordinary stationarity on R as time stationarity.
Theorem 31.4 (time–cycle duality, Kaplan) For simple point processes ξ, ˜ ξ on R and random elements η, ˜ η in T, put ζ = (ξ, η) and ˜ ζ = (˜ ξ, ˜ η). Then the relations (i) ˜ Ef(˜ ζ) = E 1 0 f(θ−rζ) ξ(dr), (ii) Ef(ζ) = ˜ E t1(˜ ξ) 0 f(θ−r˜ ζ) dr, provide a 1−1 correspondence between all pseudo-distributions of time-station-ary pairs ζ with ξ ̸= 0 and cycle-stationary ones ˜ ζ with ˜ ξ{0} = 1 a.s. and ˜ E t1(˜ ξ) = 1.
Proof: Let ζ be stationary with ξ ̸= 0 a.s., put σk = tk(ξ), and define L(˜ ζ) by (i). Then for n ∈N and any bounded, measurable function f ≥0, n ˜ Ef(˜ ζ) = E n 0 f(θ−rζ) ξ(dr) = E Σ σk ∈(0, n) f θ−σkζ .
Writing τk = tk(˜ ξ), we get by a suitable substitution n ˜ Ef θ−τ1 ˜ ζ = E Σ σk ∈(0, n) f θ−σk+1ζ , and so by subtraction, ˜ Ef θ−τ1 ˜ ζ −˜ Ef(˜ ζ) ≤2 n−1∥f∥.
As n →∞we get ˜ Ef(θ−τ1 ˜ ζ) = ˜ Ef(˜ ζ), and so θ−τ1 ˜ ζ d = ˜ ζ, which means that ˜ ζ is cycle-stationary. In this case, (ii) holds by Theorem 31.3 (iii). Taking f ≡1 gives ˜ Et1(˜ ξ) = 1.
Conversely, let ˜ ζ be cycle-stationary with ˜ E t1(˜ ξ) = 1, and define L(ζ) by (ii). Then for n and f as above, n Ef(ζ) = E τn 0 f(θ−rζ) dr, and so for any h ∈R, n Ef(θ−hζ) = E τn 0 f θ−r−h˜ ζ dr = E τn+h h f θ−r˜ ζ dr, whence by subtraction, Ef(θ−hζ) −Ef(ζ) ≤2 n−1|h| ∥f∥.
714 Foundations of Modern Probability As n →∞, we get Ef(θ−hζ) = Ef(ζ), and so θ−hζ d = ζ, which means that ζ is stationary.
Now choose ˜ ζ′ = (˜ ξ′, ˜ η′) as in (i), with associated expectation operator ˜ E ′ satisfying ˜ E ′f(˜ ζ′) = E 1 0 f(θ−rζ) ξ(dr).
(3) Using Theorem 31.3 (iii) and comparing with (ii), we obtain ˜ E ′ t1(˜ ξ′) 0 f θ−r˜ ζ′ dr = ˜ E t1(˜ ξ) 0 f θ−r˜ ζ dr.
Replacing f(μ, t) by f θ−t0(μ)(μ, t) and noting that −t0(θ−rμ) = r when μ{0} = 1 and r ∈[0, t1(μ)), we get ˜ E ′t1(˜ ξ′) f(˜ ζ′) = ˜ E t1(˜ ξ) f(˜ ζ).
A further substitution yields ˜ E ′f(˜ ζ′) = ˜ Ef(˜ ζ), and so (i) holds by (3).
2 When ξ is a simple point process on Rd with locally finite intensity E ξ, we may think of the Palm distribution of (ξ, η) as the conditional distribution given that ξ{0} = 1, which further justifies our notation L(ξ, η ∥ξ)0.
The stated interpretation is suggested by the following result, which also provides an asymptotic formula for the hitting probabilities of small Borel sets. By Bn →{0} we mean that sup (|s|; s ∈Bn) →0, and u →denotes convergence in total variation norm ∥· ∥. The ‘hat’ in ˆ L or ˆ E denotes normalization.
Theorem 31.5 (approximations at 0, Korolyuk, Ryll-Nardzewski, K¨ onig & Matthes) Consider a stationary pair of a simple point process ξ on Rd and a random element η in T, and let B1, B2, . . . ∈Bd with λdBn > 0, Bn →{0}, and E ξB1 < ∞. When ξBn = 1, define σn by 1Bnξ = δσn. Then (i) P{ξBn = 1} ∼P{ξBn > 0} ∼E ξBn, (ii) L θ−σnη ξBn = 1 u →ˆ L(η ∥ξ)0, (iii) for any bounded, measurable, and shift-continuous function f on Nd, E f(η) ξBn > 0 →ˆ E f(η) ξ 0.
Proof: (i) Write L(ξ, η ∥ξ)0 = L(˜ ξ, ˜ η) for convenience. Since ˜ ξ{0} = 1 a.s., we have (θx˜ ξ)Bn > 0 iffx ∈Bn, and so by Theorem 31.3 (ii), P{ξBn > 0} E ξ[0, 1]d = ˆ E ξ V0(ξ) 1 (θ−xξ)Bn > 0 dx ≥ˆ E ξλd V0(ξ) ∩(−Bn) .
Dividing by λdBn and using Fatou’s lemma, we get lim inf n→∞ P{ξBn > 0} E ξBn ≥lim inf n→∞ ˆ E ξλd V0(ξ) ∩(−Bn) λdBn ≥ˆ E ξ lim inf n→∞ λd V0(ξ) ∩(−Bn) λdBn = 1.
31. Palm and Gibbs Kernels, Local Approximation 715 The assertions now follow from the elementary relations 2 · 1{k > 0} −k ≤1{k = 1} ≤1{k > 0} ≤k, k ∈Z+.
(ii) On T we introduce the measures μn = E Bn 1 θ−xη ∈· ξ(dx), νn = L θ−σnη; ξBn = 1 , and put mn = E ξBn and pn = P{ξBn = 1}. By (1) the stated total variation equals νn pn −μn mn ≤ νn pn −νn mn + νn mn −μn mn ≤pn 1 pn −1 mn + 1 mn |pn −mn| = 2 1 −pn mn , which tends to 0 in view of (i).
(iii) Here we write E f(η) ξBn > 0 −ˆ E f(η) ξ 0 ≤ E f(η) ξBn > 0 −E f(η) ξBn = 1 + E f(η) −f(θ−σnη) ξBn = 1 + E f(θ−σnη) ξBn = 1 −ˆ E f(η) ξ 0 .
By (i) and (ii), the first and last terms on the right tend to 0 as n →∞.
To estimate the second term, we introduce on T the bounded, measurable functions gε(t) = sup |x|<ε f(θ−xt) −f(t) , ε > 0, and conclude from (ii) that, for n large enough, E f(η) −f(θ−σnη) ξBn = 1 ≤E gε(θ−σnη) ξBn = 1 →ˆ E gε(η) ξ 0.
Since ˆ E gε(η) →0 as ε →0 by continuity and dominated convergence, the asserted convergence follows.
2 We turn to some general ergodic theorems for Palm distributions, showing how the mappings L(η) ↔L(˜ η) can be achieved by suitable averaging. Given a function f ∈T+ and a bounded measure ν ̸= 0 on Rd, we introduce the average ¯ fν(t) = ∥ν∥−1 f(θxt) ν(dx), t ∈T.
716 Foundations of Modern Probability Say that the weight functions (probability densities) g1, g2, . . . on Rd are asymp-totically invariant, if the corresponding property holds for the associated mea-sures gn · λd. We often omit the dot in f · μ.
Since such results apply to the proper Palm distributions, only when the sample intensity ¯ ξ is a.s. constant, we also need to introduce the modified Palm distributions ¯ L(η ∥ξ)0, defined for measurable functions f ≥0 by ¯ E f(η) ξ 0 = E ¯ ξ −1 I1 f(θ−xη) ξ(dx), whenever 0 < ¯ ξ < ∞a.s. For ergodic ξ we have ¯ ξ = E ¯ ξ a.s., and ¯ L(η ∥ξ)0 agrees with L(η ∥ξ)0. We may think of ¯ L(η ∥ξ)0 as the distribution of η when the pair (ξ, η) is shifted to a ‘typical’ point of ξ, an interpretation that fails for L(η ∥ξ)0 in general. The pair (˜ ξ, ˜ η) is called a modified Palm version of (ξ, η) if L(˜ ξ, ˜ η) = ¯ L(ξ, η ∥ξ)0.
Theorem 31.6 (Palm averaging) For a stationary pair of a random measure ξ on Rd with ¯ ξ ∈(0, ∞) a.s. and a random element η in T, consider a modified Palm version (˜ ξ, ˜ η). Then for any asymptotically invariant distributions μn or weight functions pn on Rd and for bounded, measurable functions f on T, we have (i) ¯ fμn(˜ η) P →Ef(η), (ii) ¯ fpnξ(η) P →Ef(˜ η).
The convergence holds a.s. when μn = 1Bnλd or pn = 1Bn, respectively, for some bounded, convex, increasing sets Bn ∈Bd with r(Bn) →∞.
Our proofs, here and below, are based on a remarkable coupling property.
Lemma 31.7 (shift coupling, Thorisson) For pairs (ξ, η) and (˜ ξ, ˜ η) as in Theorem 31.6, there exist some random elements σ, τ in Rd with η d = θσ˜ η, ˜ η d = θτη.
Proof: Let I be the invariant σ-field in MRd × T, put I1 = [0, 1]d, and note that E(ξI1 | Iξ,η) = ¯ ξ a.s. Then for any I ∈I, P{˜ η ∈I} = E ¯ ξ −1 I1 1 θ−xη ∈I ξ(dx) = E ξI1/¯ ξ; η ∈I = P{η ∈I}, which shows that η d = ˜ η on I. The assertions now follow by Theorem 25.26. 2 Proof of Theorem 31.6: (i) By Lemma 31.7, we may assume that ˜ η = θτη for some random vector τ in Rd. Using Theorem 25.18 (i) and the asymptotic invariance of the μn, we get ¯ fμn(˜ η) −Ef(η) ≤ μn −θτμn ∥f∥+ ¯ fμn(η) −Ef(η) P →0.
31. Palm and Gibbs Kernels, Local Approximation 717 The a.s. version follows in the same way from Theorem 25.14.
(ii) For fixed f, we may define a stationary random measure ξf on Rd by ξfB = B f(θ−xη) ξ(dx), B ∈Bd.
(4) Applying Theorem 25.18 to both ξ and ξf and using the representation in Theorem 31.2, we get with I1 = [0, 1]d ¯ fpnξ(η) = ξfpn λdpn · λdpn ξpn P → ¯ ξf ¯ ξ = E ξfI1 E ξI1 = Ef(˜ η).
The a.s. version follows by a similar argument, based on Theorem 30.10.
2 Taking expected values in Theorem 31.6, we get for bounded f the formulas E ¯ fμn(˜ η) →Ef(η), E ¯ fpnξ(η) →Ef(˜ η), which may be regarded as limit theorems for suitable space averages of the distributions of η and ˜ η. We show that both statements hold uniformly for bounded f. For a striking formulation, we may introduce the possibly defective distributions ¯ Lμ(˜ η) and ¯ Lp ξ(η), given for measurable functions f ≥0 by ¯ Lμ(˜ η)f = E ¯ fμ(˜ η), ¯ Lp ξ(η)f = E ¯ fp ξ(η).
Theorem 31.8 (distributional averages, Slivnyak, Z¨ ahle) For (ξ, η), (˜ ξ, ˜ η) as in Theorem 31.6, and for any asymptotically invariant distributions μn or weight functions pn on Rd, we have (i) ¯ Lμn(˜ η) u →L(η), (ii) ¯ Lpnξ(η) u →L(˜ η).
Proof: (i) By Lemma 31.7, we may assume that ˜ η = θτη for some random vector τ. Using Fubini’s theorem and the stationarity of η, we get for any measurable function f ≥0 ¯ Lμn(η)f = Ef(θxη) μn(dx) = Ef(η) = L(η)f.
Hence, by Fubini’s theorem and dominated convergence, ¯ Lμn(˜ η) −L(η) = ¯ Lμn(θτη) −¯ Lμn(η) ≤E 1{θxη ∈·} μn −θτμn (dx) ≤E μn −θτμn →0.
718 Foundations of Modern Probability (ii) Defining ξf by (4) with 0 ≤f ≤1, we get ξfpn = f(θ−xη) pn(x) ξ(dx) ≤ξpn, and so ¯ Lpnξ(η)f −L(˜ η)f = E ¯ fpnξ(η) −Ef(˜ η) ≤E ξfpn ξpn −ξfpn ¯ ξ ≤E 1 −ξpn ¯ ξ , where 0/0 = 0. Since ξpn/¯ ξ P →1 by Theorem 25.18, and E ξpn/¯ ξ = E E ξpn IX,ξ ¯ ξ = E( ¯ ξ/¯ ξ ) = 1, we get ξpn/¯ ξ →1 in L1 by Theorem 5.12, and the assertion follows.
2 We now drop the stationarity assumption and return to the setting of gen-eral random measures on a localized Borel space S, where the Palm measures are given by Theorem 31.1. Our first aim is to extend the point process approx-imations of Theorem 31.5 to local approximations at arbitrary points s ∈S.
As before, a ∼b means a/b →1 whereas a ≈b means a −b →0, and we write u ≈for the corresponding uniform approximation. Dissection systems are defined as in Chapter 1.
Theorem 31.9 (local approximations) Let ξ be a simple point process on a Borel space S with σ-finite intensity E ξ, and fix a dissection system2 I = (Inj) in S. Then for s ∈S a.e. E ξ, we have as B ↓{s} along I (i) P{ξB > 0} ∼P{ξB = 1} ∼E ξB, (ii) L 1B ξ ξB > 0 u ≈L 1B ξ ξB = 1 u ≈L(1B ξ ∥ξ)s, (iii) for any random variable η ≥0 such that E η ξ is σ-finite, E(η | ξB > 0) ≈E(η | ξB = 1) →E(η ∥ξ)s.
Proof: Putting In(s) = Inj when s ∈Inj, we may define some simple point processes ξn ≤ξ on S by ξnB = B 1 ξIn(s) = 1 ξ(ds) = Σ s∈B δs 1 ξ{s} = ξIn(s) = 1 , for any B ∈ˆ S. Since ξ is simple and (Inj) is separating, we have ξn ↑ξ, and so by monotone convergence E η ξn ↑E η ξ. Since also E η ξn ≤E η ξ ≪E ξ, where all three measures are σ-finite, Theorem 2.10 yields E η ξn = fn · E ξ for 2The result remains true with the same proof, if we replace the dissection system I = (Inj) by any standard differentiation basis, as defined in K(17).
31. Palm and Gibbs Kernels, Local Approximation 719 an increasing sequence of measurable functions fn ≥0. Assuming fn ↑f, we get by monotone convergence E η ξ = f · E ξ, and so for any B ∈S, B f(s) E ξ(ds) = (f · E ξ)B = E η ξB = B E ξ(ds) E(η ∥ξ)s, which implies f(s) = E(η ∥ξ)s a.e. E ξ.
For any n ∈N and s ∈S a.e. E ξ, we may choose B = In(s). Since ξnB ≤1 a.s., we get E η ξnB = E η; ξnB > 0 ≤E(η; ξB > 0) ≤E η ξB, and so E η ξnB E ξB ≤E(η; ξB > 0) E ξB ≤E η ξB E ξB .
For fixed n and s ∈S a.e. E ξ, the differentiation property gives fn(s) ≤lim inf B↓{s} E(η; ξB > 0) E ξB ≤lim sup B↓{s} E(η; ξB > 0) E ξB ≤f(s).
Since fn ↑f, we get as B ↓{s} along I E(η; ξB > 0) E ξB ≈E η ξB E ξB →f(s), s ∈S a.e. E ξ.
(5) Noting that 0 ≤1{k > 0} −1{k = 1} = 1{k > 1} ≤k −1{k > 0}, k ∈Z+, we see from (5) that 0 ≤E(η; ξB > 0) E ξB −E(η; ξB = 1) E ξB ≤E η ξB E ξB −E(η; ξB > 0) E ξB →0, and using (5) again gives E(η; ξB = 1) E ξB ≈E(η; ξB > 0) E ξB →f(s), s ∈S a.e. E ξ.
Taking η ≡1 gives (i), and dividing the two versions yields E(η; ξB = 1) P{ξB = 1} ≈E(η; ξB > 0) P{ξB > 0} →f(s), s ∈S a.e. E ξ.
which is equivalent to (iii).
720 Foundations of Modern Probability (ii) Using (i) along with some basic definitions and elementary estimates, we get L 1B ξ ξB = 1 −L 1B ξ ξ s = L σB ξB = 1 −L(τB) = E 1B ξ; ξB = 1 P(ξB = 1) −E 1B ξ E ξB ≤E(ξB; ξB > 1) E ξB + E ξB 1 P{ξB = 1} − 1 E ξB = 2 E ξB P{ξB = 1} −1 →0, and L 1B ξ ξB > 0 −L 1B ξ ξB = 1 ≤P{ξB > 1} P{ξB = 1} + P{ξB > 0} 1 P{ξB = 1} − 1 P{ξB > 0} = 2 P{ξB > 0} P{ξB = 1} −1 →0, and the assertion follows by combination.
2 We consider a special case where the Palm measures can be computed explicitly. Note that if ξ is a mixed Poisson process on S directed by ρ λ for a random variable ρ ≥0, then P{ξB = 0} = ϕ(λB), B ∈S, where ϕ(t) = Ee−t ρ denotes the Laplace transform of ρ. When 0 < λS < ∞, the same formula holds for a mixed binomial process ξ based on the probability measure ˆ λ = λ/λS and a Z+-valued random variable κ, provided we choose ϕ(t) = E (1 −t/λS)κ, 0 ≤t ≤λS.
In either case, ξ has Laplace functional Ee−ξf = ϕ λ(1 −e−f) , f ∈S+, (6) and so L(ξ) is uniquely determined by the pair (λ, ϕ), which justifies that we write M(λ, ϕ) for the distribution of the point process ξ in (6). For convenience, we allow the underlying probability measure P to be σ-finite, so that M(λ, −ϕ′) makes sense as the pseudo-distribution of a point process on S.
Theorem 31.10 (mixed Poisson and binomial processes) Let ξ be a mixed Poisson or binomial process on S with distribution M(λ, ϕ), where λ is σ-finite. Choosing λ as supporting measure for ξ, we have (i) L ξ −δs ξ s = M(λ, −ϕ′), s ∈S a.e. λ, 31. Palm and Gibbs Kernels, Local Approximation 721 (ii) L(ξ) ≡L ξ −δs ξ s ⇔ξ is Poisson (λ).
Proof: (i) First let E ξ be σ-finite. For any f ∈S+ and B ∈S with λf < ∞ and λB < ∞, we have by (6) E e−ξf−t ξB = ϕ λ 1 −e−f−t 1B , t ≥0.
Taking right derivatives at t = 0, we get by dominated convergence on each side E ξB e−ξf = −ϕ′ λ(1 −e−f) λ(1B e−f).
Hence, by disintegration, B λ(ds) E e−ξf ξ s = −ϕ′ λ(1 −e−f) B e−f(s) λ(ds).
Since B was arbitrary, we obtain E e−ξf ξ s = −ϕ′ λ(1 −e−f) e−f(s), s ∈S a.e. λ, which implies E e−(ξ−δs)f ξ s = E e−ξf ξ s ef(s) = −ϕ′ λ(1 −e−f) .
Equation (i) now follows by (6). For general E ξ, we may take f ≥ε1B for fixed ε > 0 to justify the differentiation. The resulting formulas extend by monotone convergence to arbitrary f ≥0.
(ii) The equation ϕ = −ϕ′ with initial condition ϕ(0) = 1 has the unique solution ϕ(t) = e−t.
2 A similar calculation yields a celebrated Poisson characterization: Corollary 31.11 (Poisson criterion, Slivnyak, Kerstan & Matthes, Mecke) Let ξ be a random measure on S with σ-finite intensity E ξ. Then ξ is Poisson iff L(ξ ∥ξ)s = L(ξ + δs), s ∈S a.e. E ξ.
To avoid any reference to Palm measures, we may write the stated condition in the integrated form3 E f(s, ξ) ξ(ds) = Ef(s, ξ + δs) E ξ(ds).
Proof: The necessity being part of Proposition 31.10, we need only prove the sufficiency. Then assume the stated condition, and put λ = E ξ. Fixing any f ≥0 with λf < ∞and writing ϕ(t) = Ee−t ξf, we get by dominated convergence 3This formula is sometimes referred to as the Mecke equation.
722 Foundations of Modern Probability −ϕ′(t) = E ξf e−t ξf = λ(ds) f(s) E e−t ξf ξ s = ϕ(t) λ(f e−tf).
Noting that ϕ(0) = 1 and ϕ(1) = Ee−ξf, we obtain −log Ee−ξf = 1 0 ϕ′(t) ϕ(t) dt = 1 0 λ(f e−tf) dt = λ 1 0 f e−tf dt = λ(1 −e−f), and so ξ is Poisson with intensity λ by Lemma 15.2.
2 It is often helpful to calculate the Palm distributions via a preliminary conditioning, similar to the approach in Theorem 8.15. Then consider a random measure ξ on S and some random elements η in T and ζ in U, where S, T, U are Borel and the Campbell measures Cξ,η and Cξ,ζ are σ-finite. Regarding ξ as a random element in the Borel space MS and fixing a supporting measure ρ of ξ, we may introduce the Palm measures and conditional distributions L(η, ζ ∥ξ)s = μ(s, ·), L(ξ, ζ | η)t = μ′(t, ·), in terms of some kernels μ: S →T × U and μ′ : T →MS × U. For fixed s, t, we may next form the Palm and conditional measures μs ˜ ζ ∈· ˜ η = νs(˜ η, ·), μ′ t ˜ ζ ∈· ˜ ξ s = ν′ t(s, ·), where the former conditioning makes sense, since the measures μs are a.e. σ-finite. By Corollary 3.6, we can choose ν, ν′ to be product-measurable, hence as kernels on S × T, in which case we may write suggestively P(· | η ∥ξ)s,t = P(· ∥ξ)s (· | η)t, P(· ∥ξ | η)t,s = P(· | η)t (· ∥ξ)s.
Finally, we may take t = η and put F = σ(η) and H = σ(ζ), to form on S the product-measurable, measure-valued processes PF(· ∥ξ)s = P(· ∥ξ | η)s,η, P(· ∥ξ)s F = {P(· | η ∥ξ)η,s.
Using this notation, we may state the iteration properties in the following suggestive form. In particular, part (iii) enables us to calculate the Palm kernel via a preliminary conditioning on η, which is especially useful in the contexts of randomizations and Cox processes.
Say that a σ-field F on Ω is Borel generated, if F = σ(β) for some random element β in a Borel space.
31. Palm and Gibbs Kernels, Local Approximation 723 Theorem 31.12 (iteration) Let ξ be a random measure on a Borel space S, and let F, H be Borel-generated σ-fields in Ω, such that Cξ,F and Cξ,H are σ-finite. Then for a fixed supporting measure ν ∼E ξ, we have on H (i) EF ξ is a.s. σ-finite, (ii) PF(· ∥ξ ) = P(· ∥ξ ) F a.e. Cξ,F, (iii) P(· ∥ξ )s = E PF(· ∥ξ )s ξ s, s ∈S a.e. E ξ.
Proof: Let F = σ(η) and H = σ(ζ) for some random elements η, ζ in the Borel spaces T, U, and introduce the measures μ1 = ν, μ2 = L(η), μ3 = L(ζ), μ12 = Cξ,η, μ13 = Cξ,ζ, μ123 = Cξ,η,ζ, which are all σ-finite. Since the support conditions of Theorem 3.7 are clearly fulfilled, the result yields μ3|2|1 ∼ = μ3|1|2 a.e. μ12, μ3|1 = μ2|1 μ3|1|2 a.e. μ1.
(7) Further note that μ12 = Cξ,η ∼ = L(η) ⊗EFξ = μ2 ⊗EFξ, and so the uniqueness in Theorem 3.4 yields μ1|2 = EFξ a.s. In particular this implies (i), and the relations (7) are equivalent to (ii) and (iii).
2 When looking for versions of the Palm measures with desired regularity properties, the following duality approach may be helpful. Then recall that, for any Borel spaces S, T, a σ-finite measure ρ on S × T admits the dual disintegrations ρ = ν ⊗μ ∼ = ν′ ⊗μ′, for some σ-finite measures ν on S and ν′ on T, and some kernels μ : S →T and μ′ : T →S. Here either disintegration may give useful information about the other one, as illustrated by the following simple case: Theorem 31.13 (Palm–density duality) Consider a random measure ξ on S and a random element η in T, where S, T are Borel, and fix a supporting measure ν of ξ. Then (i) E(ξ | η) and L(η ∥ξ) exist simultaneously as σ-finite kernels between S and T, (ii) for E(ξ | η) and L(η ∥ξ) as in (i), E(ξ | η) ≪ν a.s.
⇔ L(η ∥ξ)s ≪L(η) a.e. ν, in which case both kernels have product-measurable a.e. densities, 724 Foundations of Modern Probability (iii) for any measurable function f ≥0 on S × T, E(ξ | η) = f(·, η) · ν a.s.
⇔ L(η ∥ξ)s = f(s, ·) · L(η) a.e. ν.
Proof: (i) If E ξ is σ-finite, then for any B ∈S and C ∈T , E ξB; η ∈C = E E(ξB | η); η ∈C = C P{η ∈dt} E(ξB | η)t, which extends immediately to the dual disintegration Cξ,η ∼ = L(η) ⊗E(ξ | η).
(8) In general, ξ is a countable sum of bounded random measures, and we may use Fubini’s theorem to extend (8) to the general case, for a suitable version of E(ξ | η). In particular, the latter kernel has a σ-finite version iffCξ,η is σ-finite.
Since this condition is also equivalent to the existence of a σ-finite Palm kernel, the assertion follows.
(iii) Assuming E(ξ | η) = f(·, η) · ν a.s., we get for any B ∈S and C ∈T E ξB; η ∈C = E E(ξB | η); η ∈C = E B f(s, η) ν(ds) 1C(η) = B ν(ds) E f(s, η); η ∈C = B ν(ds) C P{η ∈dt} f(s, t), which implies L(η ∥ξ)s = f(s, ·) · L(η) a.e. ν. Conversely, the latter relation yields E ξB; η ∈C = B ν(ds) P η ∈C ξ s = B ν(ds) C P{η ∈dt} f(s, t) = B ν(ds) E f(s, η); η ∈C = E B f(s, η) ν(ds) 1C(η), which shows that E(ξB | η) = B f(s, η) ν(ds) = f(·, η) · ν B a.s., and therefore E(ξ | η) = f(·, η) · ν a.s.
(ii) Since S, T are Borel, Theorem 9.27 yields a product-measurable density in each case. The assertion now follows from (iii).
2 31. Palm and Gibbs Kernels, Local Approximation 725 Higher order Palm measures are defined in the obvious way as univariate Palm measures with respect to the product random measures ξn = ξ⊗n on Sn.
We are especially interested in the Palm distributions of ξ itself with respect to ξn, here written as L(ξ ∥ξn). When ξ is a simple point process with σ-finite moment measures E ξn, we note that P Π i ξ{si} = 1 ξ(n) s = 1, s ∈S(n) a.e. E ξn, where S(n) denotes the non-diagonal part of Sn. This suggests that we consider instead the reduced Palm measures L ξ′ ξ(n) s ≡L ξ −Σ i δsi ξ(n) s, s ∈S(n), formed by disintegration of the n -th order reduced Campbell measures C(n) ξ f = E ξ(n)(ds) f s; ξ −Σ i δsi , f ≥0 on S(n) × NS.
Since the latter are symmetric under permutation of coordinates in S(n), we may regard the reduced Palm measures P(ξ′ ∥ξ(n))s as functions of the associated point measures Σ i δsi. By Lemma 2.20 they can then be obtained, simultaneously for all n ∈N, by disintegration of the compound Campbell measure Cξf = Σ n≥0 C(n) ξ f/n!
= E Σ μ≤ξ f(μ, ξ −μ), where C(0) ξ = L(ξ) for consistency, and the last summation extends over all bounded point measures μ ≤ξ. More precisely, assuming ξ = Σ i∈J δσi for a countable index set J, we sum4 over all measures μ = Σ i∈I δσi with finite I ⊂J.
We now define the Gibbs kernel of ξ as the maximal random measure Γ = G(ξ) on ˆ NS satisfying E Γ(dμ) f(μ, ξ) ≤E Σ μ≤ξ f(μ, ξ −μ), (9) for all measurable functions f ≥0 on ˆ NS × NS. Such a partial disintegration kernel exists uniquely by Corollary 3.5.
Lemma 31.14 (boundedness and support) Let ξ be a point process on S with Gibbs kernel Γ. Then a.s.
(i) Γ{μS = ∞} = 0, (ii) Γ μB = μS = n < ∞, B ∈ˆ S, n ∈Z+, (iii) μ(supp ξ) Γ(dμ) = 0 when ξ is simple.
4The sum is clearly independent of the choice of atomic decomposition.
726 Foundations of Modern Probability Proof, (i)–(ii): For any B ∈ˆ S and m, n ∈Z+, E Γ μB = μS = n ; ξB = m ≤C νB = m × μB = μS = n = E 1 ξB = m + n Σ μ≤ξ 1 μB = μS = n = m + n n P ξB = m + n < ∞, and so Γ{νB = νS = n} < ∞a.s. on {ξB = m}.
(iii) Approximating supp m by means of dissections of S and using domi-nated convergence, we see that the function f(m, μ) = μ(supp m) is product-measurable on NS × ˆ NS. Hence, (9) yields for simple ξ E μ(supp ξ) Γ(dμ) ≤Cf = E Σ μ≤ξ μ supp(ξ −μ) = 0, and so the inner integral on the left vanishes a.s.
2 The definition of Γ is justified by some remarkable properties: Theorem 31.15 (Gibbs kernel) Let Γ = G(ξ) be the Gibbs kernel of a point process ξ on S. Then for any B ∈ˆ S and measurable f ≥0 on ˆ NS, (i) E Γ(dμ) f(ξ + μ) 1 ξB = μBc = 0 = E f(ξ); P(ξB = 0 | 1Bcξ) > 0 , (ii) Γ(· ; μBc = 0) = L 1Bξ 1Bcξ P ξB = 0 1Bcξ a.s. on {ξB = 0}, (iii) L 1Bξ 1Bcξ = Γ(· | μBc = 0) a.s. on {ξB = 0}.
In subsequent proofs, we shall often use the fact that P(A |F) > 0 a.s. on A for any A and F, which follows from P P(A |F) = 0; A = E P(A |F); P(A |F) = 0 = 0.
Proof: Define for fixed B ∈ˆ S M0 = μ; μB = 0, P ξB = 0 1Bcξ μ > 0 .
Then for any measurable M ⊂M0, C M × {μBc = 0} = E Σ μ≤ξ 1 ξ −μ ∈M, μBc = 0 = E Σ μ≤ξ 1 ξ −μ ∈M, P ξB = 0 1Bcξ > 0, μBc = 0 = P 1Bcξ ∈M, P ξB = 0 1Bcξ > 0 ≪E P ξB = 0 1Bcξ ; 1Bcξ ∈M = P ξB = 0, 1Bcξ ∈M ≤P{ξ ∈M}, 31. Palm and Gibbs Kernels, Local Approximation 727 and so C(· × {μBc = 0}) ≪L(ξ) on M0. Noting that P(ξB = 0 | 1Bcξ) > 0 a.s. on {ξB = 0} and using the maximality of G, we get for any B ∈ˆ S and measurable f ≥0 on NS E Γ(dμ) f(ξ + μ) 1 ξB = μBc = 0 = E Γ(dμ) f(ξ + μ) 1 ξB = μBc = 0, P ξB = 0 1Bcξ > 0 = C(dμ dν) f(μ + ν) 1 νB = μBc = 0, P ξB = 0 1Bcξ ν > 0 = E Σ μ≤ξ f(ξ) 1 ξB = μB, μBc = 0, P ξB = 0 1Bcξ > 0 = E f(ξ); P ξB = 0 1Bcξ > 0 .
(ii) Replacing f in (i) by the function g(μ) = f(1Bμ) 1M(1Bcμ), μ ∈NS, for measurable M ⊂NS and f ≥0 on ˆ NS, we get E f(1Bξ) 1M(1Bcξ); P ξB = 0 1Bcξ > 0 = E Γ(dμ) f(μ) 1M(ξ) 1 ξB = μBc = 0 = E G 1Bcξ, dμ f(μ) 1M(1Bcξ) 1 ξB = μBc = 0 .
Since M was arbitrary, we obtain a.s.
E f(1Bξ) 1Bcξ 1 P ξB = 0 | 1Bcξ > 0 = G 1Bcξ, dμ f(μ) 1{μBc = 0} P ξB = 0 1Bcξ , and since P(ξB = 0 | 1Bcξ) > 0 a.s. on {ξB = 0}, this gives 1NBΓf = E f(1Bξ) 1Bcξ P ξB = 0 1Bcξ a.s. on {ξB = 0}.
The assertion now follows since f was arbitrary.
(iii) By (ii), we have Γ{μBc = 0} = P ξB = 0 1Bcξ −1 > 0 a.s. on {ξB = 0}, and the assertion follows by division.
2 Of special importance is the restriction of the Gibbs kernel to the class of single point masses δs. Identifying δs with the point s ∈S yields a random measure η = g(ξ, ·) on S, known as the Papangelou kernel of ξ, which can also be defined directly as the maximal kernel satisfying the partial disintegration E η(ds) f(s, ξ) ≤E ξ(ds) f(s, ξ −δs).
(10) 728 Foundations of Modern Probability Corollary 31.16 (Papangelou kernel) The Papangelou kernel η of a point process ξ on S is a.s. locally finite, and for simple ξ we have η (supp ξ) = 0 a.s. Furthermore, for any B ∈ˆ S, (i) 1B η = E 1B ξ; ξB = 1 1Bcξ P ξB = 0 | 1Bcξ a.s. on {ξB = 0}, (ii) L τB 1Bcξ, ξB = 1 = 1B η ηB a.s. on {ξB = 0, ηB > 0}.
Proof: The local finiteness is clear, since by Lemma 31.14 (i), ηB = Γ μB = μS = 1 < ∞a.s., B ∈ˆ S.
When ξ is simple, part (ii) of the same lemma yields a.s.
η(supp ξ) = Γ μ(supp ξ) = μS = 1 = Γ(dμ) μ(supp ξ) = 0.
(i) By Theorem 31.15 (ii), we get for any C ⊂B in ˆ S ηC = Γ μC = μS = 1 = P ξC = ξB = 1 1Bcξ P ξB = 0 1Bcξ a.s. on {ξB = 0}.
(ii) By (i), we have ηB = P ξB = 1 1Bcξ P ξB = 0 | 1Bcξ a.s. on {ξB = 0}.
When this is positive, the assertion follows by division.
2 The theory simplifies under the classical regularity condition P ξB = 0 1Bcξ > 0 a.s., B ∈ˆ S, traditionally labeled by (Σ).
Lemma 31.17 (regularity) For a simple point process ξ on S satisfying (Σ), we have (i) the right-hand side of Theorem 31.15 (i) reduces to Ef(ξ), (ii) equality holds in (9) and (10), (iii) when A ∈σ(1Bcξ) a.s. on {ξB = 0}, we have PA = 1.
31. Palm and Gibbs Kernels, Local Approximation 729 Proof: (i) Obvious.
(ii) The equality in (9) holds by (i), and the equality in (10) follows.
(iii) The stated condition gives E P ξB = 0 1Bcξ ; Ac = P Ac ∩{ξB = 0} = 0, and (Σ) yields PAc = 0.
2 To illustrate the usefulness of the Papangelou kernel η, we show how in-variance of η may imply that the underlying point process ξ is Cox.
Theorem 31.18 (invariance and Cox property, Papangelou) Let ξ be a T-marked point process on R satisfying (Σ), let ν be a random measure on T, and define η = λ ⊗ν. Then these conditions are equivalent: (i) ξ is a Cox process directed by η, (ii) ξ has Papangelou kernel η.
Proof, (i) ⇒(ii): Let η be the Papangelou kernel of ξ. Assuming (i), and noting that ζ = λ ⊗ν is 1Bcξ -measurable for every bounded, measurable set B ∈R × T, we see from Corollary 31.16 that, a.s. on {ξB = 0}, ηB = P ξB = 1 1Bcξ P ξB = 0 1Bcξ = e−ζB ζB e−ζB = ζB.
Since also η = 0 a.s. on supp ξ by Lemma 31.14, we obtain η = ζ = λ ⊗ν a.s.
(ii) ⇒(i): Under (ii), Corollary 31.16 shows that L(1Bξ; ξB = 1 | 1Bcξ) is a.s. λ -invariant on {ξB = 0} for every bounded, measurable rectangle B ⊂R × T, which extends by Lemma 31.17 (iii) to all of Ω. By iteration it follows that the ordered points in L(1Bξ; ξB = n | 1Bcξ) have a shift-invariant joint distribution, and hence form a stationary binomial process on B. In other words, ξ is a mixed binomial process on every bounded, measurable set B, and Theorem 15.4 yields the desired Cox property.
2 When ξ is a.s. bounded, the compound Campbell measure becomes sym-metric in its two components, and the Palm and Gibbs kernels agree. This simple observation extends to a general relationship between the two kernels.
Here we write Q(μ, ·) for a version of the reduced Palm kernel of ξ, where μ ∈ˆ NS.
Proposition 31.19 (Palm–Gibbs duality) Let ξ be a point process on S with Gibbs kernel G and reduced Palm kernel Q, where ξS < ∞a.s. Then Q(ξ, {0}) > 0 a.s., and Γ = G(ξ, ·) = Q(ξ, ·) Q(ξ, {0}) a.s.
730 Foundations of Modern Probability Proof: When ξS < ∞a.s., the compound Campbell measure is symmetric, in the sense that C ˜ f = Cf with ˜ f(s, t) = f(t, s). Since L(ξ) ≪C(· × NS), we may choose an A ∈BNS with ξ ∈A a.s. and L(ξ) ∼C(· × NS) on A. Then on A × NS we have the dual disintegrations C = L(ξ) ⊗G ∼ = ν ⊗Q on A × NS, where ν is the supporting measure on NS associated with Q. The uniqueness in Theorem 3.4 yields Γ ≡G(ξ, ·) = p(ξ) Q(ξ, ·) a.s., (11) for a measurable function p > 0 on A. To determine p, we may apply Theorem 31.15 (ii) to the sets M = {0} and B = ∅to get a.s.
Γ{0} = P 1∅ξ = 0 1Sξ P ξ∅= 0 1Sξ = P(Ω | ξ) P(Ω | ξ) = 1.
Inserting this into (11) gives p(ξ) Q(ξ, {0}) = 1 a.s., and the assertion fol-lows.
2 We finally note how the duality between Palm and Gibbs kernels yields an inner conditioning property, similar to the outer conditioning in Theorem 31.15 (iii).
Corollary 31.20 (inner conditioning) Let ξ be a point process on S with reduced Palm kernel Q. Then for any B ∈ˆ S, we have Q1 B ξ{μB = 0} > 0 and L 1Bcξ 1B ξ = Q1 B ξ(· | μB = 0 ) a.s.
Proof: Applying Theorem 31.15 (ii) with ξ and Bc replaced by 1 B ξ and B, and using Proposition 31.19 with ξ replaced by 1 B ξ, we get a.s.
L 1Bcξ 1B ξ = ΓB 1 B ξ 1Bcμ μB = 0 = QB 1 B ξ 1Bcμ ∈· μB = 0 = Q1 B ξ 1Bcμ ∈· μB = 0 , where ΓB and QB denote the Gibbs and Palm kernels of the restriction 1 B ξ. 2 Exercises 1. Put ξ = δσ for a random element σ in S. Show that Theorem 31.1 reduces to the disintegration theorem for regular conditional distributions L(η | σ). Also give the Campbell measure Cξ,η in this case.
31. Palm and Gibbs Kernels, Local Approximation 731 2. Let G be a compact group with normalized Haar measure λ, acting measurably on S, T, consider a stationary random pair (σ, η) in S × T, and put ξ = δσ. Show that in this case the group actions are proper, and express the invariant Palm kernel in Theorem 31.2 in terms of conditional distributions. Also show that when S = G, (ii) gives an invariant representation of the conditional distributions L(η | σ)r.
3. Consider a stationary pair (ξ, η) on the d-dimensional torus T d, where ξ = δσ for a random element σ in T d, and extend by periodic continuation to a stationary pair (˜ ξ, ˜ η) on Rd. Express the inversion formulas of Theorem 31.3 in terms of conditional distributions L(η | σ)r.
4. For (ξ, η) as in Lemma 31.3, define a random measure ˜ ξ on the appropriate product space by ˜ ξ(A×B) = B 1A{θ−x(ξ, η)} ξ(ds). Show that ˜ ξ is again stationary under shifts in Rd.
5. Let ξ be a stationary renewal process based on a distribution μ, and describe the associated Palm measure as the distribution of a random sequence ˜ ξ. Explain in what sense ˜ ξ is cycle stationary. Conversely, given a stationary sequence ˜ ξ of positive random variables, explain how L(˜ ξ) arises as the Palm distribution of a stationary, simple point process ξ on R. Also state conditions on μ and ˜ ξ for the associated measures L(˜ ξ) and L(ξ) to be bounded.
6. Prove Theorem 31.5 (i) by an elementary argument, when the Bn are intervals in Rd.
(Hint: If an interval I is divided, for each n, into subintervals Inj with maxj |Inj| →0, then Σj1{ξInj = 1} →ξI a.s. Now take expected values and use dominated convergence.) 7. In the context of Theorem 31.8, show that Qξ,η = Q′ ξ,η iff¯ ξ = E ¯ ξ ∈(0, ∞) a.s. Also give examples where Qξ,η but not Q′ ξ,η exists as a probability measure, and conversely.
8. Show that Theorems 31.6 and 31.8 fail for ordinary Palm measures, unless ¯ ξ is a.s. a constant.
9. Show by examples that the statements in Theorem 31.9 (i)−(ii) fail when the point process ξ is not simple.
10. Find the Palm distribution of a Bernoulli sequence, and write the result in the form of Corollary 31.11. In other words, to obtain the Palm distribution of (ξn), we simply replace ξ0 by the value 1. Now apply Lemma 31.7. (Thus, a suitable random shift of (ξn) yields a sequence with the same distribution, except that the value at the origin in now 1.5) 11. Show how Theorem 31.12 can be used to find the Palm distribution of a Cox process or a mixed binomial process.
12. Explain the statements of Theorem 31.13, in the special cases where ξ is a Cox process directed by η or a mixed binomial process with ∥ξ∥= η.
13. Give an example of a simple point process that satisfies (Σ) and one that doesn’t. Describe how the conditions in Theorem 31.15 and Corollary 31.16 simplify under condition (Σ).
14. Since a compound Campbell measure of a bounded point process ξ is sym-metric in the two components, explain why the Gibbs kernel is based on a partial 5This is known as the paradox of an extra head.
732 Foundations of Modern Probability disintegration only, whereas the Palm kernel is obtained through a full disintegration.
Give an example of a bounded point process where the two kernels differ.
15. Since both the local approximation in Theorem 31.9 and the inner condition-ing in Corollary 31.20 describe Palm measures in terms of elementary conditioning, explain how they differ and why they lead to different interpretations of the Palm distributions.
X. SDEs, Diffusions, and Potential Theory Stochastic differential equations (SDEs) form a central topic of modern probability theory with a wealth of applications. In the autonomous case, the solutions provide probabilistic descriptions of continuous, strong Markov pro-cesses, known as diffusions. In particular, we explore the relations between weak and strong solutions, and characterize the solutions in terms of a mar-tingale problem. In the one-dimensional case, diffusions may be described in terms of a scale function and a speed measure. Potential theoretic tools are used throughout Markov process theory, and some classical problems in the area have beautiful solutions in terms of Brownian motion. Our final chapter on stochastic differential geometry deals primarily with martingales and semi-martingales in a general differentiable manifold. For beginners we recommend a careful study of Chapter 32 and selected parts of Chapter 33, whereas the remaining material is more advanced and might be postponed.
−−− 32. Stochastic equations and martingale problems. Using a Picard iteration, we may construct pathwise solutions to suitable SDEs, combining into a stochastic flow on the underlying space. Next, we examine the rela-tionship between weak solutions and the associated martingale problems, and explore the connections between weak and strong existence and uniqueness.
33.
One-dimensional SDEs and diffusions. For a one-dimensional SDE without a drift term, we give precise conditions for existence and unique-ness. We proceed with a detailed study of one-dimensional diffusions, leading to a complete description in terms of a scale function and a speed measure.
We further examine the recurrence and ergodic properties of diffusions on a natural scale, depending on the speed measure and nature of the endpoints.
34. PDE connections and potential theory. Here we explore some connections between Brownian motion and classical potential theory, including solutions to the three classical problems of electrostatics. Next, we explore the relationship between alternating capacities and random closed sets, as well as between excessive functions and super-martingales, leading to a probabilistic version of the classical Riesz decomposition.
35. Stochastic differential geometry. Here we consider semi-martin-gales and their covariation processes in a general differential manifold S, define martingales in terms of a connection, and explore criteria involving affine and convex functions.
We further introduce the local characteristics of a semi-martingale in S, and justify their definitions by some natural embedding and projection properties. Finally, we specialize to the diffusion case, and charac-terize Brownian motion and related processes in a Riemannian manifold.
733 Chapter 32 Stochastic Equations and Martingale Problems Progressive functions, drift and diffusion rates, Langevin equation, lin-ear equations, weak, strong, and functional solutions, stochastic flows, Picard iteration, explosion, uniqueness, path-wise and in law, martingale problems, weak existence and continuity, measurability and mixtures, strong Markov and Feller properties, transformation of drift, scaling, transfer of solutions Just as ordinary differential equations (ODEs) describe the evolution of suit-ably smooth dynamical systems, so the stochastic differential equations (SDEs) model the evolution of the random counterparts, the diffusion processes. The corresponding theories are in many ways analogous, except that the stochas-tic version involves an additional noise term, given by an Itˆ o-type stochastic integral with respect to a Brownian motion.
This leads to a close connec-tion between the present SDE theory and the stochastic calculus developed in Chapters 18–19, which in turn depends in a crucial way on the martingale theory of Chapter 9.
The study of SDEs and associated diffusion processes leads far beyond the mentioned analogy with classical dynamical systems. In fact, the coefficients of such an equation determine a possibly time-dependent elliptic operator A, as in Theorem 17.24, which suggests the associated martingale problem of finding a process X, such that the processes M f in Lemma 17.21 become martingales.
It turns out to be essentially equivalent for X to be a weak solution to the given SDE, as will be seen from the fundamental Theorem 32.7.
The general theory of SDEs involves both weak and strong solutions. Here it is important to distinguish between the notions of strong and weak existence, and the associated notions of pathwise uniqueness and uniqueness in law. In this connection, we will establish the powerful result of Yamada and Watanabe, asserting that weak existence and pathwise uniqueness imply strong existence and uniqueness in law. Under the same conditions, we can in fact express the general solution in functional form as X = F(X0, B) a.s.
The SDEs studied in this chapter are typically of the form dXi t = σi j(t, X) dBj t + bi(t, X) dt, (1) which may also be written in integrated form as Xi t = Xi 0 + Σ j t 0 σi j(s, X) dBj s + t 0 bi(s, X) ds, t ≥0.
(2) 735 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 736 Foundations of Modern Probability Here B = (B1, . . . , Br) is a Brownian motion in Rr with respect to a filtration F, and the solution X = (X1, . . . , Xd) is a continuous F-semi-martingale in Rd. Furthermore, the coefficients σ and b are progressive1 functions of suitable dimension, defined on the canonical path space CR+,Rd, equipped with the induced filtration Gt = σ{xs; s ≤t}, t ≥0. For convenience, we shall often refer to (1) as equation (σ, b).
For the integrals in (2) to exist, in the sense of Itˆ o and Lebesgue integration, X must fulfill the integrability conditions t 0 aij(s, X) + bi(s, X) ds < ∞a.s., t ≥0, (3) where aij = σi kσj k or a = σσ′, and the bars denote any norms in the spaces of d × d -matrices and d -vectors, respectively. For the existence and adaptedness of the right-hand side, we also need the integrands in (2) be progressive, which is ensured by the following result.
Lemma 32.1 (progressive functions) Let f be a progressive function on R+ × CR+,Rd for the induced filtration G, and let X be a continuous, F-adapted pro-cess in Rd. Then the process Yt = f(t, X) is F-progressive.
Proof: Fix any t ≥0. Since X is adapted, we note that πs(X) = Xs is Ft -measurable for every s ≤t, where πs(x) = xs on CR+,Rd. Since Gt = σ{πs; s ≤ t}, Lemma 1.4 shows that X is Ft/Gt -measurable. Hence, by Lemma 1.9, the mapping ϕ(s, ω) = {s, X(ω)} is (Bt ⊗Ft)/(Bt ⊗Gt) -measurable from [0, t] × Ω to [0, t] × CR+,Rd, where Bt = B[0, t]. Also note that f is (Bt ⊗Gt) -measurable on [0, t]×CR+,Rd since f is progressive. Then Lemma 1.7 shows that Y = f ◦ϕ is (Bt ⊗Ft)/B -measurable on [0, t] × Ω.
2 Equation (2) exhibits the solution process X as a semi-martingale in Rd with drift components bi(X)·λ and covariation processes [Xi, Xj] = aij(X)·λ, where aij(x) = aij(·, x) and bi(x) = bi(·, x). The densities a(t, X) and b(t, X) may be regarded as local characteristics of X at time t. Of special interest is the diffusion case, where σ and b have the form σ(t, x) = σ(xt), b(t, x) = b(xt), t ≥0, x ∈CR+,Rd, for some measurable functions on Rd. Then the local characteristics at time t depend only on the current position Xt of the process, and the progressivity holds automatically.
We will distinguish between strong and weak solutions to an SDE (σ, b).
For the former, we regard the filtered probability space (Ω, F, P) as given, along with an F-Brownian motion B and an F0-measurable random vector ξ.
A strong solution is then defined as an adapted process X with X0 = ξ a.s.
1short for progressively measurable 32. Stochastic Equations and Martingale Problems 737 satisfying (1). For a weak solution, only the initial distribution μ is given, and the solution consists of a triple (Ω, F, P), along with an F-Brownian motion B and an adapted process X with L(X0) = μ satisfying (1).
This leads to different notions of existence and uniqueness, for a given equation (σ, b). We say that weak existence holds for the initial distribution μ, if there exists a corresponding weak solution (Ω, F, P, B, X). By contrast, strong existence for the given μ means that, for any basic triple (F, B, ξ) with L(ξ) = μ, there exists a strong solution X with X0 = ξ a.s. Next, we say that uniqueness in law holds for the initial distribution μ, if all weak solutions X with L(X0) = μ have the same distribution. Finally, pathwise uniqueness for a given distribution μ means that, whenever X and Y are two solutions with X0 = Y0 = ξ a.s. and L(ξ) = μ, defined on a common filtered probability space with a given Brownian motion B, we have X = Y a.s.
One of the simplest SDEs is the Langevin equation dXt = dBt −Xt dt, (4) of importance for both theory and applications. Integrating by parts gives d etXt = et dXt + etXt dt = et dBt, which yields the explicit solution Xt = e−tX0 + t 0 e−(t−s) dBs, t ≥0, (5) recognized as an Ornstein–Uhlenbeck process. Conversely, the latter process is easily seen to satisfy (4). We further note that θtX d →Y as t →∞, where Y denotes the stationary version of the process, encountered in Chapters 14 and 21. We can also get a stationary version2 directly from (5) by choosing X0 to be N(0, 1 2) and independent of B.
We turn to a more general class of equations that can be solved explicitly.
A further extension appears in Theorem 20.8.
Proposition 32.2 (linear equations) Let U, V be continuous semi-martin-gales, and put Z = exp V −V0 −1 2[V ] . Then the equation dX = d U + Xd V has the unique solution X = Z X0 + Z−1 · U −[U, V ] .
(6) Proof: Define Y = X/Z. Integrating by parts and noting that dZ = Z d V , we get d U = dX −Xd V = Y dZ + Z dY + d [Y, Z] −Xd V = Z dY + d [Y, Z].
(7) 2For full agreement with Chapter 14, we need to replace B by B √ 2 in (4) and (5).
738 Foundations of Modern Probability In particular, [U, V ] = Z · [Y, V ] = [Y, Z].
Substituting this into (7) yields Z dY = d U −d [U, V ], which implies dY = Z−1d(U −[U, V ]). Now (6) follows as we integrate from 0 to t and note that Y0 = X0. Since all steps are reversible, the same argument shows that (6) is indeed a solution.
2 Though most SDEs can not be solved explicitly, we may still derive some general conditions for strong existence, pathwise uniqueness, and continuous dependence on initial conditions, using the classical method of Picard iteration, originally devised for ordinary differential equations.
Theorem 32.3 (strong solutions and stochastic flows, Itˆ o) Let σ, b be bounded, progressive functions satisfying a Lipschitz condition3 σ(w) −σ(w′) ∗ t + b(w) −b(w′) ∗ t < ⌢(w −w′)∗ t, t ≥0, (8) and consider a Brownian motion B in Rr w.r.t. a complete filtration F. Then there exists a jointly continuous process X = (Xx t ) on R+ × Rd, such that for any F0-measurable random vector ξ in Rd, equation (σ, b) has the a.s. unique solution Xξ starting at ξ.
For one-dimensional diffusion equations, a stronger result is established in Theorem 33.3. The solution process X = (Xx t ) on R+ × Rd is called the stochastic flow generated by B. Our proof is based on two lemmas, beginning with an elementary classical estimate.
Lemma 32.4 (Gronwall) Let f be a continuous function on R+, such that f(t) ≤a + b t 0 f(s) ds, t ≥0, (9) for some constants a, b ≥0. Then f(t) ≤a ebt, t ≥0.
Proof: We may write (9) as d dt e−bt t 0 f(s) ds ≤a e−bt, t ≥0.
Now integrate over [0, t] and combine with (9).
2 To state the next result, let SX denote the process given by the right-hand side of (2).
3Recall that < ⌢denotes inequality up to a constant factor.
32. Stochastic Equations and Martingale Problems 739 Lemma 32.5 (Lp-contraction) Let σ, b be bounded, progressive functions sat-isfying (8). Then for any p ≥2, there exists a non-decreasing function c ≥0 on R+ such that, for any continuous, adapted processes X, Y in Rd, E SX −SY ∗p t ≤2 E X0 −Y0 p + ct t 0 E(X −Y )∗p s ds, t ≥0.
Proof: By Theorem 18.7, condition (8), and Jensen’s inequality, E SX −SY ∗p t −2E X0 −Y0 p < ⌢E (σX −σY) · B ∗p t + E (bX −bY) · λ ∗p t < ⌢E | σX −σY |2 · λ p/2 t + E | bX −bY | · λ p t < ⌢E t 0 (X −Y )∗2 s ds p/2 + E t 0 (X −Y )∗ s ds p ≤ tp/2−1 + tp−1 t 0 E(X −Y )∗p s ds.
2 Proof of Theorem 32.3: To prove the existence, fix any F0 -measurable random vector ξ in Rd, put X0 t ≡ξ, and define recursively Xn = SXn−1 for n ≥1. Since σ, b are bounded, we have E(X1 −X0)∗2 t < ∞, and Lemma 32.5 yields E Xn+1 −Xn ∗2 t ≤ct t 0 E Xn −Xn−1 ∗2 s ds, t ≥0, n ≥1.
Hence, by induction, E Xn+1 −Xn ∗2 t ≤cn t tn n! E X1 −ξ ∗2 t < ∞, t, n ≥0.
For any k ∈N, we get sup n≥k Xn −Xk∗ t 2 ≤Σ n≥k Xn+1 −Xn∗ t 2 ≤ X1 −ξ ∗ t 2 n≥k cn t tn/n!
1/2 < ∞.
Thus, Lemma 5.6 yields a continuous, adapted process X with X0 = ξ, such that (Xn −X)∗ t →0 a.s. and in L2 for each t ≥0. To see that X solves equation (σ, b), we conclude from Lemma 32.5 that E Xn −SX ∗2 t ≤ct t 0 E Xn−1 −X ∗2 s ds, t ≥0.
As n →∞, we get E(X −SX)∗2 t = 0 for all t, which implies X = SX a.s.
Now consider two solutions X, Y with |X0 −Y0| ≤ε a.s. By Lemma 32.5, we get for any p ≥2 E(X −Y )∗p t ≤2 εp + ct t 0 E(X −Y )∗p s ds, t ≥0, 740 Foundations of Modern Probability and by Lemma 32.4 it follows that E(X −Y )∗p t ≤2 εp ectt, t ≥0.
(10) If X0 = Y0 a.s., we may take ε = 0 and conclude that X = Y a.s., which proves the asserted uniqueness. Letting Xx denote the solution X with X0 = x a.s., we get by (10) E Xx −Xy ∗p t ≤2 |x −y|p ectt, t ≥0.
Taking p > d and applying Theorem 4.23 for each T > 0 with the metric ρT(f, g) = (f −g)∗ T, we conclude that the process (Xx t ) has a jointly continuous version on R+ × Rd.
The construction shows that if X, Y are solutions with X0 = ξ and Y0 = η a.s., then X = Y a.s. on the set {ξ = η}. In particular, X = Xξ a.s. when ξ takes only countably many values. In general, ξ may be uniformly approxi-mated by some random vectors ξ1, ξ2, . . . in Qd, and (10) yields Xξn t →Xt in L2 for all t ≥0. Since also Xξn t →Xξ t a.s. by the continuity of the flow, it follows that Xt = Xξ t a.s.
2 For many SDEs, the solutions may explode after a finite time. To deal with this possibility, we introduce as in Chapter 17 an extra absorbing state4 Δ at infinity, so that the path space becomes CR+,Rd with Rd = Rd ∪{Δ}. Define ζn = inf{t; |Xt| ≥n} for each n, put ζ = supn ζn, and let Xt = Δ for t ≥ζ.
Given a Brownian motion B in Rr and an adapted process X in the extended path space, we say that X or the pair (X, B) solves equation (σ, b) on the interval [0, ζ), if Xt∧ζn = X0 + t∧ζn 0 σ(s, X) dBs + t∧ζn 0 b(s, X) ds, t ≥0, n ∈N.
(11) When ζ < ∞, we have |Xζn| →∞, and X is said to explode at time ζ.
Conditions for existence and uniqueness of possibly exploding solutions are obtainable from Theorem 32.3 by suitable localization. The following result can sometimes be used to exclude the possibility of explosion.
Proposition 32.6 (explosion) For solutions to equation (σ, b), no explosion occurs a.s. when σ(x)∗ t + b(x)∗ t < ⌢1 + x∗ t, t ≥0.
(12) Proof: By Proposition 18.15 we may assume that X0 is bounded. From (11) and (12), we get for suitable constants ct < ∞ EX∗2 t∧ζn ≤2 E|X0|2 + ct t 0 1 + EX∗2 s∧ζn ds, t ≥0, n ∈N, and so by Lemma 32.4 1 + EX∗2 t∧ζn ≤ 1 + 2 E|X0|2 exp(ct t) < ∞, t ≥0, n ∈N.
4often referred to as the coffin state or cemetery 32. Stochastic Equations and Martingale Problems 741 As n →∞we obtain EX∗2 t∧ζ < ∞, which implies ζ > t a.s.
2 Our next aim is to characterize weak solutions to equation (σ, b) by a mar-tingale property, involving only the solution process X. Then define M f t = f(Xt) −f(X0) − t 0 Asf(X) ds, t ≥0, f ∈ˆ C∞, (13) where the operators As are given by Asf(x) = 1 2 aij(s, x) f ′′ ij(xs) + bi(s, x) f ′ i(xs), s ≥0, f ∈ˆ C∞.
(14) In the diffusion case we may replace the integrand Asf(X) in (13) by the expression Af(Xs), where A denotes the elliptic operator Af(x) = 1 2 aij(x) f ′′ ij(x) + bi(x) f ′ i(x), f ∈ˆ C∞, x ∈Rd.
(15) A continuous process X in Rd, or its distribution P, is said to solve the local martingale problem for (a, b), if M f is a local martingale for every f ∈ˆ C∞.
For bounded a, b it is clearly equivalent that M f be a true martingale, and the original problem turns into a martingale problem. The (local) martingale problem for (a, b) with initial distribution μ is said to be well posed if it has a unique solution Pμ. For degenerate initial distributions δx, we may write Px instead of Pδx. We state the basic equivalence between weak solutions to an SDE and solutions to the associated local martingale problem. Here ˆ MS denotes the class of probability measures on S.
Theorem 32.7 (weak solutions and martingale problems, Stroock & Varad-han) For progressive functions σ, b and distributions P ∈ ˆ M(CR+,Rd), these conditions are equivalent: (i) equation (σ, b) has a weak solution with distribution P, (ii) P solves the local martingale problem for (σσ′, b).
Proof: Write a = σσ′. If (X, B) solves equation (σ, b), then [Xi, Xj] = σi k(X) · Bk, σj l (X) · Bl = σi kσj l (X) · [Bk, Bl] = aij(X) · λ.
By Itˆ o’s formula, we get for any f ∈ˆ C∞ d f(Xt) = f ′ i(Xt) dXi t + 1 2 f ′′ ij(Xt) d[Xi, Xj]t = f ′ i(Xt) σi j(t, X) dBj t + Atf(X) dt.
Hence, dM f t = f ′ i(Xt) σi j(t, X) dBj t , and so M f is a local martingale.
Conversely, let X solve the local martingale problem for (a, b). When f i n ∈ ˆ C∞with f i n(x) = xi for |x| ≤n, a localization argument shows that the processes M i t = Xi t −Xi 0 − t 0 bi(s, X) ds, t ≥0, (16) 742 Foundations of Modern Probability are continuous local martingales. Similarly, choosing f ij n ∈ˆ C∞with f ij n (x) = xixj for |x| ≤n, we may form the local martingales M ij = XiXj −Xi 0Xj 0 − Xiβj + Xjβi + αij · λ, where αij = aij(X) and βi = bi(X). Integrating by parts and using (16), we get M ij = Xi · Xj + Xj · Xi + [Xi, Xj] − Xiβj + Xjβi + αij · λ = Xi · M j + Xj · M i + [M i, M j] −αij · λ.
Here the last two terms on the right form a local martingale, and so by Propo-sition 18.2, [M i, M j]t = t 0 aij(s, X) ds, t ≥0.
Hence, Theorem 19.13 yields a Brownian motion B with respect to a standard extension of the original filtration, such that M i t = t 0 σi k(s, X) dBk s , t ≥0.
Substituting this into (16) yields (2), which means that the pair (X, B) solves equation (σ, b).
2 For subsequent needs, we note that the previous construction can be made measurable, in the following sense.
Lemma 32.8 (functional representation) For any progressive functions σ, b, there exists a measurable map F : ˆ M(CR+,Rd) × CR+,Rd × [0, 1] →CR+,Rr, such that when X solves the local martingale problem for (σσ′, b) with distri-bution P, and ϑ⊥ ⊥X is U(0, 1), we have (i) B = F(P, X, ϑ) is a Brownian motion in Rr, (ii) the pair (X, B) with induced filtration solves equation (σ, b).
Proof: In the previous construction of B, the only non-elementary step is the stochastic integration with respect to (X, Y ) in Theorem 19.13, where Y is an independent Brownian motion and the integrand is a progressive function of X, obtained by some elementary matrix algebra. Since the pair (X, Y ) is again a solution to a local martingale problem, Proposition 18.26 yields the desired functional representation.
2 Combining the martingale formulation with a compactness argument, we may deduce some general existence and continuity properties.
Theorem 32.9 (weak existence and continuity, Skorohod) Let σ, b be bounded, progressive functions, such that σ(t, ·) and b(t, ·) are continuous on CR+,Rd for every t ≥0. Then 32. Stochastic Equations and Martingale Problems 743 (i) the martingale problem for (σσ′, b) has a solution Pμ for every initial distribution μ, (ii) when the Pμ are unique, the mapping μ →Pμ is weakly continuous.
Proof: (i) For any ε > 0, t ≥0, and x ∈CR+,Rd, define σε(t, x) = σ{(t −ε)+, x}, bε(t, x) = b{(t −ε)+, x}, and let aε = σεσ′ ε.
Since σ, b are progressive, the processes σε(s, X) and bε(s, X), s ≤t, are measurable functions of X on [0, (t −ε)+].
Hence, a strong solution Xε to equation (σε, bε) may be constructed recursively on the intervals [(n −1)ε, nε], n ∈N, starting from an arbitrary random vector ξ⊥ ⊥B in Rd with distribution μ. In particular, Xε solves the martingale problem for the pair (aε, bε).
Applying Theorem 18.7 to equation (σε, bε) and using the boundedness of σ, b, we get for any p > 0 E sup 0≤r≤h Xε t+r −Xε t p < ⌢hp/2 + hp < ⌢hp/2, t, ε ≥0, h ∈[0, 1].
Taking p > 2d, we see from Corollary 23.7 that the family {Xε} is tight in CR+,Rd. By Theorem 23.2, we may then choose some εn →0, such that Xεn d →X for a suitable X. To see that X solves the martingale problem for (a, b), let f ∈ˆ C∞and s < t be arbitrary, and consider any bounded, continuous function g: C[0,s],Rd →R. We need to show that E f(Xt) −f(Xs) − t s Arf(X)dr g(X) = 0.
Then note that Xε satisfies the corresponding equation for the operators Aε r, constructed from the pair (aε, bε). Writing the two conditions as Eϕ(X) = 0 and Eϕε(Xε) = 0, respectively, it suffices by Theorem 5.27 to show that ϕε(xε) →ϕ(x), whenever xε →x in CR+,Rd.
This follows easily from the continuity conditions imposed on σ, b.
(ii) Assuming the solutions Pμ to be unique, let μn w →μ. Arguing as be-fore, we see that (Pμn) is tight, and so by Theorem 23.2 it is also relatively compact. If Pμn w →Q along a sub-sequence, we see as before that Q solves the martingale problem for (a, b) with initial distribution μ. Hence Q = Pμ, and the convergence extends to the original sequence.
2 The well-posedness of the local martingale problem for (a, b) may now be extended from degenerate to arbitrary initial distributions. This requires a basic measurability property, which will also be needed later.
Theorem 32.10 (measurability and mixtures, Stroock & Varadhan) Let a, b be progressive functions, such that for any x ∈Rd the local martingale problem for (a, b) with initial distribution δx has a unique solution Px. Then (i) the Px form a kernel from Rd to CR+,Rd, 744 Foundations of Modern Probability (ii) for any initial distribution μ, the associated local martingale problem has the unique solution Pμ = Px μ(dx).
Proof: By the proof of Theorem 32.7, it is enough to state the local mar-tingale problem in terms of functions f belonging to some countable subclass C ⊂ˆ C∞, consisting of suitably truncated versions of the coordinate functions xi and their products xixj. Now define P = ˆ M(CRd,Rd) and PM = {Px; x ∈Rd}, and write X for the canonical process in CR+,Rd. Let D be the class of mea-sures P ∈P with degenerate projections P ◦X−1 0 . Next let I consist of all measures P ∈P, such that X satisfies the integrability condition (3). Finally, put τ f n = inf{t; |M f t | ≥n}, and let L be the class of measures P ∈P, such that the processes M f,n t = M f(t ∧τ f n) exist and are martingales under P for all f ∈C and n ∈N. Then clearly PM = D ∩I ∩L.
(i) It suffices to show that PM is a measurable subset of P, since the de-sired measurability will then follow by Theorem A1.1 and Lemma 3.2. The measurability of D is clear from Theorem 2.18. Even I is measurable, since the integrals on the left of (3) are measurable by Fubini’s theorem. Finally, L ∩I is a measurable subset of I, since the defining condition is equivalent to countably many relations of the form E M f,n t −M f,n s ; F = 0 with f ∈C, n ∈N, s < t in Q+, and F ∈Fs.
(ii) For any probability measure μ on Rd, the measure Pμ = Px μ(dx) has clearly initial distribution μ, and the previous argument shows that even Pμ solves the local martingale problem for (a, b). To prove the uniqueness, let P be any measure with the stated properties. Then E M f,n t −M f,n s ; F X0 = 0 a.s. for all f, n, s < t, and F as above, and so P( · |X0) a.s. solves the local martingale problem with initial distribution δX0. Thus, P( · |X0) = PX0 a.s., and we get P = EPX0 = Px μ(dx) = Pμ. This extends the well-posedness to arbitrary initial distributions.
2 We return to the problem of constructing a Feller diffusion with generator A as in (15), solving a suitable SDE or the associated martingale problem. The following result may be regarded as a converse to Theorem 17.24.
Theorem 32.11 (strong Markov and Feller properties, Stroock & Varadhan) Let a, b be measurable functions on Rd, such that for any x ∈Rd the local martingale problem for (a, b) with initial distribution δx has a unique solution Px. Then (i) the family (Px) satisfies the strong Markov property, (ii) when a, b are bounded, continuous, Ttf(x) = Exf(Xt) defines a Feller semigroup on C0, and the operator A in (15) extends uniquely to the associated generator.
Proof: (i) By Theorem 32.10 it remains to prove that, for any state x ∈Rd and bounded optional time τ, Lx X ◦θτ Fτ = PXτ a.s.
32. Stochastic Equations and Martingale Problems 745 As in the previous proof, this is equivalent to countably many relations of the form Ex 1F M f,n t −M f,n s ◦θτ Fτ = 0 a.s.
(17) with s < t and F ∈Fs, where M f,n denotes the process M f stopped at τn = inf{t; |M f| ≥n}. Now θ−1 τ Fs ⊂Fτ+s by Lemma 9.5, and in the diffusion case, M f,n t −M f,n s ◦θτ = M f (τ+t)∧σn −M f (τ+s)∧σn, where σn = τ +τn ◦θτ, which is again optional by Proposition 11.8. Thus, (17) follows by optional sampling from the local martingale property of M f under Px.
(ii) Assume that a, b are also bounded and continuous, and define Ttf(x) = Exf(Xt). By Theorem 32.9, the function Ttf is continuous for every f ∈C0 and t > 0, and the path continuity implies that Ttf(x) is continuous in t for every x. To see that Ttf ∈C0, it remains to show that |Xx t | P →∞as |x| →∞, where Xx has distribution Px. This follows from the SDE by the boundedness of σ, b, if for 0 < r < |x| we write P |Xx t | < r ≤P Xx t −x > |x| −r ≤ E Xx t −x 2 (|x| −r)2 < ⌢ t + t2 (|x| −r)2, and let |x| →∞for fixed r, t. The last assertion is obvious from the uniqueness in law together with Theorem 17.23.
2 Uniqueness in law is usually harder to prove than the weak existence. Some fairly general uniqueness criteria will be obtained in Theorems 33.1 and 34.2.
For the moment, we will only exhibit some transformations that may simplify the problem. First we show how the drift may sometimes be eliminated by a change of probability measure.
Proposition 32.12 (transformation of drift) Let σ, b, c be progressive func-tions of suitable dimension, where c is bounded. Then (i) weak existence holds simultaneously for equations (σ, b) and (σ, b + σc), (ii) when c = σ′h for a progressive function h, uniqueness in law holds si-multaneously for the two equations.
Proof: (i) Let X be a weak solution to equation (σ, b), defined on the canonical space for (X, B) with induced filtration F and probability measure P.
Put V = c(X), and note that (V 2 · λ)t is bounded for each t. By Lemma 19.19 and Corollary 19.26, there exists a probability measure Q with Q = E(V ′·B)t·P on Ft for each t ≥0, and we note that ˜ B = B −V ·λ is a Q -Brownian motion.
Under Q, we further get by Proposition 19.21 X −X0 = σ(X) · ˜ B + V · λ + b(X) · λ = σ(X) · ˜ B + (b + σc)(X) · λ, 746 Foundations of Modern Probability which shows that X is a weak solution to the SDE (σ, b + σc). Since the same argument applies to equation (σ, b + σc) with c replaced by −c, we conclude that weak existence holds simultaneously for the two equations.
(ii) Let c = σ′h, and suppose that uniqueness in law holds for equation (σ, b + ah). Further, let (X, B) solve equation (σ, b) under both P and Q.
Choosing V and ˜ B as before, it follows that (X, ˜ B) solves equation (σ, b + σc), under the transformed distributions E(V ′ ·B)t ·P and E(V ′ ·B)t ·Q for (X, B).
By hypothesis, the latter measures have then the same X-marginal, and the stated condition shows that E(V ′ ·B) is X-measurable. Thus, the X-marginals agree even for P, Q, which proves the uniqueness in law for equation (σ, b).
Reversing the argument yields an implication in the other direction.
2 Next we show how an SDE of diffusion type can be transformed by a random time change. The method will be used systematically in Chapter 33, to analyze the one-dimensional case.
Proposition 32.13 (scaling) Let σ, b, c be measurable functions on Rd, where the range of c has compact closure in (0, ∞). Then weak existence and unique-ness in law hold simultaneously for equations (σ, b) and (c σ, c2b).
Proof: Let X solve the local martingale problem for the pair (a, b), and introduce the process V = c2(X) · λ with inverse (τs). By optional sampling, we note that M f τs, s ≥0, is again a local martingale, and the process Ys = Xτs satisfies M f τs = f(Ys) −f(Y0) − s 0 c2Af(Yr) dr.
Thus, Y solves the local martingale problem for (c2a, c2b).
Now let T be the map on CR+,Rd transforming X into Y , and write T ′ for the corresponding map based on c−1, so that T and T ′ are mutual inverses.
Applying the previous argument to both maps, we conclude that a measure P ∈ˆ M(CR+,Rd) solves the local martingale problem for (a, b) iffP ◦T −1 solves the corresponding problem for (c2a, c2b). Thus, both existence and uniqueness hold simultaneously for the two problems. By Theorem 32.7, the last state-ment translates immediately into a corresponding assertion for the SDEs.
2 We finally explore the relationship between weak and strong solutions. Un-der suitable conditions, we further prove the existence of a universal functional solution. To explain the terminology, let G be the filtration induced by the iden-tity map (ξ, B) on the canonical space Ω = Rd × CR+,Rr, so that Gt = σ{ξ, Bt), t ≥0, where Bt s = Bs∧t. Writing W r for the r-dimensional Wiener measure, we introduce for every μ ∈ˆ MRd the (μ⊗W r) -completion Gμ t of Gt. The universal completion ¯ Gt is defined as μ Gμ t , and we say that a function F : Rd × CR+,Rr →CR+,Rd (18) is universally adapted if it is adapted to the filtration ¯ G = ( ¯ Gt).
32. Stochastic Equations and Martingale Problems 747 Theorem 32.14 (existence, uniqueness, and functional solution, Yamada & Watanabe, OK) Let the functions σ, b be progressive and such that weak exis-tence and pathwise uniqueness hold for solutions to equation (σ, b) starting at fixed points. Then (i) strong existence and uniqueness in law hold for any initial distribution, (ii) there exists a measurable, universally adapted function F as in (18), such that every solution (X, B) to equation (σ, b) satisfies X = F(X0, B) a.s.
Note that the function F in (ii) can be chosen to be independent of initial distribution μ. A couple of lemmas are needed for the proof, beginning with a statement clarifying the relationship between adaptedness, strong existence, and functional solutions.
Lemma 32.15 (transfer of strong solution) Let (X, B) solve equation (σ, b), where X is adapted to the complete filtration induced by X0 and B. Then (i) X = F(X0, B) a.s. for a Borel measurable function F as in (18), (ii) for any basic triple (F, ˜ B, ξ) with ξ d = X0, the process ˜ X = F(ξ, ˜ B) is F-adapted, and the pair ( ˜ X, ˜ B) solves equation (σ, b).
Proof: By Lemma 1.14, we have X = F(X0, B) a.s. for some Borel mea-surable function F as stated. The same result yields for every t ≥0 a rep-resentation Xt = Gt(X0, Bt) a.s., and so F(X0, B)t = Gt(X0, Bt) a.s. Hence, ˜ Xt = Gt(ξ, ˜ Bt) a.s., and so ˜ X is F-adapted.
Since also ( ˜ X, ˜ B) d = (X, B), Proposition 18.26 shows that even the former pair solves equation (σ, b).
2 Even weak solutions can be transferred to any probability space with a specified Brownian motion: Lemma 32.16 (transfer of weak solution) Let (X, B) solve equation (σ, b), and let (F, ˜ B, ξ) be a basic triple with ξ d = X0. Then (i) there exists a process ˜ X with ˜ X⊥ ⊥ ξ, ˜ B F, ˜ X0 = ξ a.s., ( ˜ X, ˜ B) d = (X, B), (ii) the filtration G induced by ( ˜ X, F) is a standard extension of F, (iii) the pair ( ˜ X, ˜ B) with filtration G solves equation (σ, b).
Proof: (i) By Theorem 8.17 and Proposition 8.20, there exists a process ˜ X⊥ ⊥ξ, ˜ B F with ( ˜ X, ξ, ˜ B) d = (X, X0, B), and in particular ˜ X0 = ξ a.s.
(ii) Fix any t ≥0, and put ˜ B′ = ˜ B −˜ Bt.
Then ( ˜ Xt, ˜ Bt) ⊥ ⊥˜ B′, since the corresponding relation holds for (X, B), and so ˜ Xt⊥ ⊥ξ, ˜ Bt ˜ B′.
Since also ˜ Xt⊥ ⊥ξ, ˜ B F, Proposition 8.12 yields ˜ Xt⊥ ⊥ξ, ˜ Bt( ˜ B′, F), and hence ˜ Xt⊥ ⊥FtF. Then also ( ˜ Xt, Ft)⊥ ⊥FtF by Corollary 8.11, which means that Gt⊥ ⊥FtF.
748 Foundations of Modern Probability (iii) Since standard extensions preserve martingales, Theorem 19.3 shows that ˜ B remains a Brownian motion with respect to G. As in Proposition 18.26, we conclude that the pair ( ˜ X, ˜ B) solves equation (σ, b).
2 Proof of Theorem 32.14: For clarity, we begin with a special case.
1. Fixed initial distribution. Let (X, B) solve equation (σ, b) with initial distribution μ and filtration F. Then Lemma 32.16 yields a process Y ⊥ ⊥X0,B F with Y0 = X0 a.s., such that (Y, B) solves equation (σ, b) for the filtration G induced by (Y, F).
Since G is a standard extension of F, the pair (X, B) remains a solution for G, and the pathwise uniqueness yields X = Y a.s.
For every t ≥0, we have Xt⊥ ⊥X0,BXt and (Xt, Bt)⊥ ⊥(B −Bt), and so Xt⊥ ⊥X0,BtXt a.s. by Proposition 8.12. Thus, Corollary 8.11 (ii) shows that X is adapted to the complete filtration induced by (X0, B). Hence, Lemma 32.15 yields a measurable function Fμ with X = Fμ(X0, B) a.s., such that for any basic triple ( ˜ F, ˜ B, ξ) with ξ d = X0, the process ˜ X = Fμ(ξ, ˜ B) is ˜ F-adapted and solves equation (σ, b) along with ˜ B. In particular, ˜ X d = X since (ξ, ˜ B) d = (X0, B), and the pathwise uniqueness shows that ˜ X is the a.s. unique solution for the given triple ( ˜ F, ˜ B, ξ). This proves the uniqueness in law.
2. General case. The previous case yields uniqueness in law for solutions starting at fixed points, and Theorem 32.10 shows that the corresponding dis-tributions Px form a kernel from Rd to CR+,Rd.
Now Lemma 32.8 yields a measurable mapping G, such that for a process X with distribution Px and a U(0, 1) random variable ϑ⊥ ⊥X, the process B = G(Px, X, ϑ) is a Brownian motion in Rr, and the pair (X, B) solves equation (σ, b). Writing Qx for the distribution of (X, B), we see from Theorem 2.19 (v) and Lemma 3.2 (ii) that the mapping x →Qx is a kernel from Rd to CR+,Rd+r.
Changing the notation, write (X, B) for the canonical process in CR+,Rd+r.
The special case yields X = Fx(x, B) = Fx(B) a.s. Qx, and so Qx(X ∈· | B) = δFx(B) a.s., x ∈Rd.
(19) By Proposition 9.27, we may choose versions νx,w = Qx(X ∈· | B ∈dw), combining into a probability kernel ν : Rd × CR+,Rr →CR+,Rd.
Here (19) shows that νx,w is a.s. degenerate for each x, and since the set D of degenerate measures is measurable by Lemma 2.18, we can modify ν to get νx,wD ≡1.
Then νx,w = δF(x,w), x ∈Rd, w ∈CR+,Rr, (20) for some function F as in (18), which is product-measurable, by the kernel property of ν. Comparing (19) and (20) gives F(x, B) = Fx(B) a.s. for all x.
Now fix any probability measure μ on Rd, and conclude as in Theorem 32.10 that Pμ = Px μ(dx) solves the local martingale problem for (a, b) with initial distribution μ. Hence, equation (σ, b) has a solution (X, B) with distribution μ for X0. Since conditioning on F0 preserves martingales, the equation remains conditionally valid, given X0. The pathwise uniqueness in the degenerate case 32. Stochastic Equations and Martingale Problems 749 yields P{X = F(X0, B) | X0} = 1 a.s., and so X = F(X0, B) a.s. In particular, the pathwise uniqueness extends to any initial distribution μ.
Returning to the canonical setting, let (ξ, B) be the identity map on the canonical space Rd × CR+,Rr, endowed with the probability measure μ ⊗W r and induced complete filtration Gμ. By the result in the special case, equation (σ, b) has a Gμ -adapted solution X = Fμ(ξ, B) with X0 = ξ a.s., and the previous discussion yields even X = F(ξ, B) a.s. Hence, F is adapted to Gμ, and μ being arbitrary, the adaptedness extends to the universal completion ¯ Gt = μ Gμ t , t ≥0.
2 Exercises 1. Show that for any c ∈(0, 1), the stochastic flow Xx t in Theorem 32.3 is a.s.
H¨ older continuous in x with exponent c, uniformly for bounded x and t. (Hint: Apply Theorem 4.23 to the estimate in the proof of Theorem 32.3.) 2. Show that a process X in Rd is a Brownian motion iffthe process f(Xt) − 1 2 t 0Δf(Xs) ds is a martingale for every f ∈ˆ C∞. Compare with Theorem 19.3 and Lemma 17.21.
3. Show that a Brownian bridge in Rd satisfies the SDE dXt = dBt−(1−t)−1Xt dt on [0, 1) with initial condition X0 = 0.
Further show that if Xx is the solution starting at x, then Y x t = Xx t −(1−t)x is again a Brownian bridge. (Hint: Note that Mt = Xt/(1−t) is a martingale on [0, 1), and that Y x satisfies the same SDE as X.) 4. Solve the preceding SDE, using Proposition 32.2 to express the Brownian bridge in terms of a Brownian motion. Compare with previously known formulas.
5. Given two continuous semi-martingales U, V , show that the Fisk–Stratonovich SDE dX = dU + X ◦dV has the unique solution X = Z(X0 + Z−1 ◦U), where Z = exp(V −V0). (Hint: Use Corollary 18.21 and the chain rule for FS -integrals, or derive the result from Proposition 32.2.) 6. Under suitable conditions, show how a Fisk–Stratonovich SDE can be converted into an Itˆ o equation, and conversely. Also give a sufficient condition for the existence of a strong solution to an FS -equation.
7. Show that weak existence and uniqueness in law hold for the SDE dXt = sgn(Xt+) dBt with initial condition X0 = 0, whereas strong existence and pathwise uniqueness fail. (Hint: Show that any solution X is a Brownian motion, and define B = sgn(X+) · X. Note that both X and −X satisfy the given SDE.) 8. Show that weak existence holds for the SDE dXt = sgn(Xt) dBt with initial condition X0 = 0, whereas strong existence and uniqueness in law fail. (Hint: We may take X to be a Brownian motion or put X ≡0.) 9. Show that strong existence holds for the SDE dXt = 1{Xt ̸= 0} dBt with initial condition X0 = 0, whereas uniqueness in law fails. (Hint: Here X = B and X = 0 are both solutions.) 10. Show that a given process X may satisfy SDEs with different pairs (σσ′, b).
(Hint: For a trivial example, take X = 0, b = 0, and σ = 0 or σ(x) = sgn x.) 750 Foundations of Modern Probability 11. Construct a non-Markovian solution X to the SDE dXt = sgn(Xt) dBt. (Hint: Let X be a Brownian motion, stopped at the first visit to 0 after time 1. We might also choose X to be 0 on [0, 1] and a Brownian motion on [1, ∞).) 12. For X as in Theorem 32.3, construct an SDE in Rmd satisfied by the process (Xx1 t , . . . , Xxm t ) for arbitrary x1, . . . , xm ∈Rd. Conclude that L(X) is determined by L(Xx, Xy) for arbitrary x, y ∈Rd. (Hint: Note that L(Xx) is determined by (σσ′, b) and x, and apply this result to the m -point motion.) 13. Find two SDEs as in Theorem 32.3 with solutions X, Y , such that Xx d = Y x for all x but X ̸ d = Y . (Hint: We may choose dX = dB and dY = sgn(Y +) dB.) 14. For a diffusion equation (σ, b) as in Theorem 32.3, show that the distribution of the associated flow X determines Σj σi j(x) σk j (y) for arbitrary pairs i, k ∈{1, . . . , d} and x, y ∈Rd.
15.
Show that if weak existence holds for the SDE (σ, b), then the pathwise uniqueness can be strengthened to the corresponding property for solutions X, Y with respect to possibly different filtrations.
16. Assume that weak existence and the stronger version of pathwise uniqueness hold for the SDE (σ, b). Use Theorem 8.17 and Lemma 32.15 to prove existence for every μ of an a.s. unique functional solution Fμ(X0, B) with L(X0) = μ.
Chapter 33 One-Dimensional SDEs and Diffusions Removal of drift, weak existence and uniqueness, path-wise uniqueness, weak and strict comparison, hitting times, scale function, speed measure, time-change reduction, Green function, accessible and reflecting bound-aries, entrance and exit boundaries, entrance laws and Feller properties, ratio ergodic theorem, recurrence and ergodicity, strong ergodicity, pos-itive recurrence and invariant distribution We have already recognized martingales and Markov processes as the most im-portant dependence structures of modern probability. In both cases, Brownian motion is the prototype and most fundamental example. In fact, we proved in Theorem 19.4 that a real, continuous local martingale is nothing but a suit-ably time-changed Brownian motion. Similarly, we will show in Theorem 33.9 below that even a real and sufficiently regular continuous Markov process is essentially a Brownian motion, up to a natural transformation in both space and time.
Much more can be said in the one-dimensional case of continuous strong Markov processes—also known as diffusion processes—which motivates that we spend a whole chapter to explore their basic properties. We begin with a study of the basic diffusion equation dXt = σ(Xt) dBt + b(Xt) dt, (1) referred to below as equation (σ, b). Here we will see how the drift term can often be removed by a simple spatial transformation.
This reduces (1) to the simple equation dXt = σ(Xt) dBt, where precise criteria can be given for weak existence and uniqueness in law, in terms of the function σ. For more general σ and b, we may give conditions for pathwise uniqueness and strict or weak comparison. From Theorem 32.11 we know that, if weak existence and uniqueness in law hold for (1), the solution process X is a continuous strong Markov process. It is clearly also a semi-martingale.
We proceed with a study of general, one-dimensional diffusion processes X.
Our first aim is then to show how a spatial transformation, based on a suitable scale function, reduces X to a continuous local martingale, which is then said to be on a natural scale. In the latter case, we may next identify a suitable speed measure, determining the time-change reduction of X into a Brownian motion. The tools for this construction are purely probabilistic, as they are 751 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 752 Foundations of Modern Probability based on an analysis of the hitting times τa and τa,b of states a or interval endpoints a < b. The ergodic behavior of X also depends in a crucial way on the properties at the boundary points, which may be absorbing or reflecting, as well as of entrance or exit type.
Preparing for a more detailed discussion of equation (σ, b), we recall from Proposition 32.12 how the drift term can sometimes be eliminated through a change of the underlying probability measure. Under suitable regularity condi-tions on the coefficients, we may use the alternative approach of transforming the state space. Then assume that X solves (1), and put Yt = p(Xt) for some function p ∈C1, possessing an absolutely continuous derivative p′ with density p′′. By the generalized Itˆ o formula of Theorem 29.5, we have dYt = p′(Xt) dXt + 1 2 p′′(Xt) d[X]t = (σp′)(Xt) dBt + 1 2 σ2p′′ + b p′ (Xt) dt.
Here the drift term vanishes iffp solves the ordinary differential equation 1 2 σ2p′′ + b p′ = 0.
(2) When b/σ2 is locally integrable, (2) has the explicit solutions p′(x) = c exp −2 x 0 (b/σ2)(u) du , x ∈R, for an arbitrary constant c. The desired scale function p is then determined up to an affine transformation, and for c > 0 it is strictly increasing with a unique inverse p−1. A mapping by p reduces (1) to the form dYt = ˜ σ(Yt) dBt, where ˜ σ = (σp′) ◦p−1. Since the new equation is equivalent, it is clear that weak or strong existence or uniqueness hold simultaneously for the two equations.
Once the drift is removed, we are left with an equation of the form dXt = σ(Xt) dBt.
(3) Here exact criteria for weak existence and uniqueness in law may be given in terms of the singularity sets Sσ = x ∈R; x+ x−σ−2(y) dy = ∞ , Nσ = x ∈R; σ(x) = 0 .
Theorem 33.1 (existence and uniqueness, Engelbert & Schmidt) For equa-tion (3) with initial distribution μ, (i) weak existence holds for all μ ⇔Sσ ⊂Nσ, (ii) uniqueness in law then holds for all μ ⇔Sσ = Nσ.
We begin with a lemma, which will also be useful later on.
Given any measure ν on R, we may introduce the associated singularity set Sν = x ∈R; ν(x−, x+) = ∞ .
33. One-dimensional SDEs and Diffusions 753 Letting B be a one-dimensional Brownian motion with associated local time L, we may also introduce the additive functional As = Lx s ν(dx), s ≥0.
(4) Lemma 33.2 (singularity set) Let B be a Brownian motion with local time L and initial distribution μ, and define A by (4) for a measure ν on R. Then a.s. Pμ, inf s ≥0; As = ∞ = inf s ≥0; Bs ∈Sν .
Proof: Fix any t > 0, and let R be the event where Bs / ∈Sν on [0, t]. Noting that Lx t = 0 a.s. for x outside the range B[0, t], we get a.s. on R At = ∞ −∞Lx t ν(dx) ≤ν B[0, t] supxLx t < ∞, since B[0, t] is compact and Lx t is a.s. continuous and hence bounded.
Conversely, let Bs ∈Sν for some s < t. To show that At = ∞a.s. on this event, we may use the strong Markov property to reduce to the case where B0 = a is non-random in Sν. But then La t > 0 a.s. by Tanaka’s formula, and so the continuity of L yields for small enough ε > 0 At = ∞ −∞Lx t ν(dx) ≥ν a −ε, a + ε inf |x−a|<εLx t = ∞.
2 Proof of Theorem 33.1: (i) Let Sσ ⊂Nσ.
To prove the asserted weak existence, let Y be a Brownian motion with arbitrary initial distribution μ, and define ζ = inf{s ≥0; Ys ∈Sσ}. By Lemma 33.2, the additive functional As = s 0 σ−2(Yr) dr, s ≥0, (5) is continuous and strictly increasing on [0, ζ), and At = ∞for t > ζ. Further note that Aζ = ∞when ζ = ∞, whereas Aζ may be finite when ζ < ∞. In the latter case, A jumps from Aζ to ∞at time ζ.
Now introduce the inverse τt = inf s > 0; As > t , t ≥0.
(6) The process τ is clearly continuous and strictly increasing on [0, Aζ], and for t ≥Aζ we have τt = ζ. Further note that Xt = Yτt is a continuous local martingale, and t = Aτt = τt 0 σ−2(Yr) dr = t 0 σ−2(Xs) dτs, t < Aζ.
Hence, for t ≤Aζ, [X]t = τt = t 0 σ2(Xs) ds.
(7) 754 Foundations of Modern Probability Here both sides remain constant after time Aζ, since Sσ ⊂Nσ, and so (7) remains true for all t ≥0. Hence, Theorem 19.13 yields a Brownian motion B satisfying (3), which means that X is a weak solution with initial distribution μ.
Conversely, assume weak existence for any initial distribution. To show that Sσ ⊂Nσ, fix any x ∈Sσ, and choose a solution X with X0 = x. Since X is a continuous local martingale, Theorem 19.4 yields Xt = Yτt for a Brownian motion Y starting at x and a random time-change τ satisfying (7). For A as in (5) and t ≥0, we have Aτt = τt 0 σ−2(Yr) dr = t 0 σ−2(Xs) dτs = t 0 1 σ(Xs) > 0 ds ≤t.
(8) Since As = ∞for s > 0 by Lemma 33.2, we get τt = 0 a.s., and so Xt ≡x a.s.
But then x ∈Nσ by (7).
(ii) Let Nσ ⊂Sσ, and consider a solution X with initial distribution μ. As before, we may write Xt = Yτt a.s., where Y is a Brownian motion with initial distribution μ, and τ is a random time-change satisfying (7). Define A as in (5), put χ = inf{t ≥0; Xt ∈Sσ}, and note that τχ = ζ ≡inf{s ≥0; Ys ∈Sσ}.
Since Nσ ⊂Sσ, we get as in (8) Aτt = τt 0 σ−2(Ys) ds = t, t ≤χ.
Furthermore, As = ∞for s > ζ by Lemma 33.2, and so (8) yields τt ≤ζ a.s.
for all t, which means that τ remains constant after time χ. Thus, τ and A are related by (6), which shows that τ and then also X are measurable functions of Y . Since the distribution of Y depends only on μ, the same thing is true for X, which proves the asserted uniqueness in law.
Conversely, let Sσ be a proper subset of Nσ, and fix any x ∈Nσ \ Sσ. As before, we may form a solution Xt = Yτt starting at x, where Y is a Brownian motion starting at x, and τ is defined as in (6) from the process A in (5).
Since x / ∈Sσ, Lemma 33.2 gives A0+ < ∞a.s., and so τt > 0 a.s. for t > 0, which shows that X is a.s. non-constant. Since x ∈Nσ, (3) has also the trivial solution Xt ≡x. Thus, uniqueness in law fails for solutions starting at x.
2 Proceeding with a study of pathwise uniqueness, we return to equation (1), writing w(σ, ·) for the modulus of continuity of σ.
Theorem 33.3 (pathwise uniqueness, Skorohod, Yamada & Watanabe) Let σ, b be bounded, measurable functions on R, such that ε 0 w(σ, h) −2dh = ∞, ε > 0, (9) and either b is Lipschitz continuous or σ ̸= 0. Then pathwise uniqueness holds for equation (σ, b).
33. One-dimensional SDEs and Diffusions 755 The significance of (9) is clear from the following lemma, where for any semi-martingale Y we write Lx t (Y ) for the associated local time.
Lemma 33.4 (local time) Let Xi solve equation (σ, bi) for i = 1, 2, where σ satisfies (9). Then L0 X1 −X2 = 0 a.s.
Proof: Write Y = X1 −X2, Lx t = Lx t (Y ), and w(x) = w(σ, |x|). Using (1) and Theorem 29.5, we get for any t > 0 ∞ −∞w−2 x Lx t dx = t 0 w(Ys) −2d[Y ]s = t 0 σ(X1 s) −σ(X2 s) w X1 s −X2 s 2 ds ≤t < ∞.
By (1) and the right-continuity of L it follows that L0 t = 0 a.s.
2 Proof of Theorem 33.3 for σ ̸= 0: By Propositions 32.12 and 32.13 along with a simple localization argument, uniqueness in law holds for equation (σ, b) when σ ̸= 0. To prove the pathwise uniqueness, consider any two solutions X and Y with X0 = Y0 a.s. Using Tanaka’s formula, Lemma 33.4, and equation (σ, b), we get d(Xt ∨Yt) = dXt + d(Yt −Xt)+ = dXt + 1{Yt > Xt} d(Yt −Xt) = 1{Yt ≤Xt} dXt + 1{Yt > Xt} dYt = σ(Xt ∨Yt) dBt + b(Xt ∨Yt) dt, which shows that X ∨Y is again a solution.
The uniqueness in law gives X d = X ∨Y . Since X ≤X ∨Y , it follows that X = X ∨Y a.s., which implies Y ≤X a.s. The symmetric argument yields X ≤Y a.s.
2 The assertion for Lipschitz continuous b is a special case of the following comparison result.
Theorem 33.5 (weak comparison, Skorohod, Yamada) For i = 1, 2, let Xi solve equation (σ, bi), where σ satisfies (9) and b1 or b2 is Lipschitz continuous.
Then b1 ≥b2 X1 0 ≥X2 0 a.s. ⇒X1 ≥X2 a.s. on R+.
Proof: By symmetry we may take b1 to be Lipschitz continuous. Since X2 0 ≤X1 0 a.s., we get by Tanaka’s formula and Lemma 33.4 X2 t −X1 t + = t 0 1 X2 s > X1 s σ(X2 t ) −σ(X1 t ) dBt + t 0 1 X2 s > X1 s b2(X2 s) −b1(X1 s) ds.
756 Foundations of Modern Probability Using the martingale property of the first term, the Lipschitz continuity of b1, and the condition b2 ≤b1, we conclude that E X2 t −X1 t + ≤E t 0 1 X2 s > X1 s b1(X2 s) −b1(X1 s) ds < ⌢E t 0 1 X2 s > X1 s X2 s −X1 s ds = t 0 E X2 s −X1 s +ds.
Then E(X2 t −X1 t )+ = 0 by Gronwall’s lemma, and so X2 t ≤X1 t a.s.
2 By imposing stronger restrictions on the coefficients, we may strengthen the last conclusion to a strict inequality.
Theorem 33.6 (strict comparison) For i = 1, 2, let Xi solve equation (σ, bi), where σ is Lipschitz continuous and b1, b2 are continuous. Then b1 > b2 X1 0 ≥X2 0 a.s. ⇒X1 > X2 a.s. on (0, ∞).
Proof: Since the bi are continuous with b1 > b2, there exists a locally Lipschitz continuous function b on R with b1 > b > b2. By Theorem 32.3, equation (σ, b) has a solution X with X0 = X1 0 ≥X2 0 a.s., and it suffices to show that X1 > X > X2 a.s. on (0, ∞). This yields a reduction to the case where one of the functions bi is locally Lipschitz. By symmetry we may take that function to be b1.
By the Lipschitz continuity of σ and b1, we may introduce the continuous semi-martingales Ut = t 0 b1(X2 s) −b2(X2 s) ds, Vt = t 0 σ(X1 s) −σ(X2 s) X1 s −X2 s dBs + t 0 b1(X1 s) −b1(X2 s) X1 s −X2 s ds, subject to the convention 0/0 = 0, and we note that d X1 t −X2 t = dUt + X1 t −X2 t dVt.
Letting Z = exp V −1 2[V ] > 0, we get by Proposition 32.2 X1 t −X2 t = Zt X1 0 −X2 0 + Zt t 0 Z−1 s b1(X2 s) −b2(X2 s) ds, and the assertion follows since X1 0 ≥X2 0 a.s. and b1 > b2.
2 We turn to a systematic study of one-dimensional diffusions. By a diffusion in an interval I ⊂R we mean a continuous strong Markov process taking values in I. Termination is only permitted at open end-points of I. Define 33. One-dimensional SDEs and Diffusions 757 τy = inf{t ≥0; Xt = y}, and say that X is regular if Px{τy < ∞} > 0 for any x ∈Io and y ∈I. Further write τa,b = τa ∧τb.
Our first step is to transform the general diffusion process into a continu-ous local martingale, using a suitable change of scale. This corresponds to a removal of drift in the SDE (1).
Theorem 33.7 (scale function, Feller, Dynkin) Let X be a regular diffusion in an interval I ⊂R. Then (i) there exists a continuous, strictly increasing function p on I, such that p(Xτa,b) is a Px-martingale for every x ∈[a, b] ⊂I, (ii) an increasing function p is such as in (i) iff Px{τb < τa} = px −pa pb −pa , x ∈[a, b].
A function p with the stated property is called a scale function for X, and X is said to be defined on a natural scale if the scale function can be chosen to be linear. In general, the process Y = p(X) is a regular diffusion on a natural scale. We begin with a study of the functions pa,b(x) = Px{τb < τa}, ha,b(x) = Exτa,b, a ≤x ≤b, which play a basic role in the sequel.
Lemma 33.8 (hitting times) For any regular diffusion and constants a < b in I, we have (i) pa,b is continuous and strictly increasing on [a, b], (ii) ha,b is bounded on [a, b].
In particular, (ii) yields τa,b < ∞a.s. under Px for any a ≤x ≤b.
Proof: (i) First we show that Px{τb < τa} > 0 for any a < x < b. Then introduce the optional time σ1 = τa + τx ◦θτa, and define recursively σn+1 = σn + σ1 ◦θσn. By the strong Markov property, the σn form a random walk in [0, ∞] under each Px. If Px{τb < τa} = 0, we get τb ≥σn →∞a.s. Px, and so Px{τb = ∞} = 1, which contradicts the regularity of X.
Using the strong Markov property at τy, we next obtain Px{τb < τa} = Px{τy < τa} Py{τb < τa}, a < x < y < b.
(10) Since Px{τa < τy} > 0, we have Px{τy < τa} < 1, which shows that Px{τb < τa} is strictly increasing.
By symmetry it remains to prove that Py{τb < τa} is left-continuous on (a, b]. By (10) it is equivalent to show that, for any x ∈(a, b), the mapping y →Px{τy < τa} is left-continuous on (x, b]. Then let yn ↑y, and note that τyn ↑τy a.s. Px by the continuity of X. Hence, {τyn < τa} ↓{τy < τa}, which 758 Foundations of Modern Probability implies convergence of the corresponding probabilities.
(ii) Fix any c ∈(a, b). By the regularity of X, we may choose h > 0 so large that Pc{τa ≤h} ∧Pc{τb ≤h} = δ > 0.
When x ∈(a, c), the strong Markov property at τx yields δ ≤Pc{τa ≤h} ≤Pc{τx ≤h} Px{τa ≤h} ≤Px{τa ≤h} ≤Px τa,b ≤h , and similarly for x ∈(c, b). By the Markov property at h and induction on n, we obtain Px τa,b > nh ≤(1 −δ)n, x ∈[a, b], n ∈Z+, and Lemma 4.4 yields Exτa,b = ∞ 0 Px τa,b > t dt ≤h Σ n≥0 (1 −δ)n < ∞.
2 Proof of Theorem 33.7: Let p be a locally bounded and measurable function on I, such that M = p(Xτa,b) is a martingale under Px for any a < x < b. Then px = ExM0 = ExM∞ = Ex p(Xτa,b) = paPx{τa < τb} + pbPx{τb < τa} = pa + (pb −pa) Px{τb < τa}, and (ii) follows provided that pa ̸= pb.
To construct a function p with the stated properties, fix any points u < v in I, and define for arbitrary a ≤u and b ≥v in I p(x) = pa,b(x) −pa,b(u) pa,b(v) −pa,b(u), x ∈[a, b].
(11) To see that p is independent of a and b, consider a larger interval [a′, b′] ⊂I, and conclude from the strong Markov property at τa,b that, for any x ∈[a, b], Px{τb′ < τa′} = Px{τa < τb} Pa{τb′ < τa′} + Px{τb < τa} Pb{τb′ < τa′}, or pa′,b′(x) = pa,b(x) pa′,b′(b) −pa′,b′(a) + pa′b′(a).
Thus, pa,b and pa′,b′ agree on [a, b] up to an affine mapping, and so they yield the same value in (11).
By Lemma 33.8, the constructed function is continuous and strictly increas-ing, and it remains to show that p(Xτa,b) is a martingale under Px for any a < b 33. One-dimensional SDEs and Diffusions 759 in I. Since the martingale property is preserved by affine transformations, it is equivalent to show that pa,b(Xτa,b) is a Px-martingale. Then fix any optional time σ, and write τ = σ ∧τa,b. By the strong Markov property at τ, Ex pa,b(Xτ) = ExPXτ{τb < τa} = Pxθ−1 τ {τb < τa} = Px{τb < τa} = pa,b(x), and the desired martingale property follows by Lemma 9.14.
2 To prepare for the next result, consider a Brownian motion B in R with jointly continuous local time L. For any measure ν on R, we introduce as in (4) the additive functional A = Lx ν(dx) with right-continuous inverse σt = inf s > 0; As > t , t ≥0.
If ν ̸= 0, the recurrence of B shows that A is a.s. unbounded. Hence, σt < ∞ a.s. for all t, and we may define Xt = Bσt, t ≥0.
We refer to σ = (σt) as the random time-change based on ν, and to the process X = B ◦σ as a correspondingly time-changed Brownian motion.
Theorem 33.9 (speed measure and time change, Feller, Volkonsky, Itˆ o & McKean) (i) For a regular diffusion X on a natural scale in I, there exists a unique measure ν on I with ν[a, b] ∈(0, ∞) for all a < b in Io, such that X is a time-changed Brownian motion based on an extension of ν to ¯ I.
(ii) Any time-changed Brownian motion as above is a regular diffusion in I.
The extended version of ν is called the speed measure of the diffusion.
Contrary to what the term suggests, the process moves slowly through regions where ν is large. For Brownian motion itself, the speed measure is clearly Lebesgue measure. More generally, the speed measure of a regular diffusion solving equation (3) has density σ−2.
To prove the uniqueness of ν, we need the following lemma, which is also useful for the subsequent classification of boundary behavior. Here we write σa,b = inf{s > 0; Bs / ∈(a, b)}.
Lemma 33.10 (Green function) Let X be a time-changed Brownian motion based on ν. Then for any measurable function f ≥0 on I and points a < b in ¯ I, we have Ex τa,b 0 f(Xt) dt = b a ga,b(x, y) f(y) ν(dy), x ∈[a, b], (12) where ga,b(x, y) = ExLy σa,b = 2 (x ∧y −a)(b −x ∨y) b −a , x, y ∈[a, b].
(13) When X is recurrent, this remains true for a = −∞or b = ∞.
760 Foundations of Modern Probability Taking f ≡1 in (12), we get in particular ha,b(x) = Exτa,b = b a ga,b(x, y) ν(dy), x ∈[a, b], (14) which will be useful later on.
Proof: Clearly τa,b = A(σa,b) for any a, b ∈¯ I, and also for a = −∞or b = ∞when X is recurrent. Since Ly is supported by {y}, we see from (4) that τa,b 0 f(Xt) dt = σa,b 0 f(Bs) dAs = b a f(y) Ly σa,b ν(dy).
Taking expectations gives (12) with ga,b(x, y) = ExLy σa,b. To prove (13), we get by Tanaka’s formula and optional sampling ExLy σa,b∧s = Ex Bσa,b∧s −y −|x −y|, s ≥0.
If a, b are finite, we may let s →∞and conclude by monotone and dominated convergence that ga,b(x, y) = (y −a) (b −x) b −a + (b −y) (x −a) b −a −|x −y|, which simplifies to (13). The result for infinite a or b follows immediately by monotone convergence.
2 The next lemma will enable us to construct the speed measure ν from the functions ha,b in Lemma 33.8.
Lemma 33.11 (consistence) For a regular diffusion on a natural scale in I, there exists a strictly concave function h on Io, such that for any a < b in I, ha,b(x) = h(x) −x −a b −a h(b) −b −x b −a h(a), x ∈[a, b].
(15) Proof: Fix any u < v in I, and define for any a ≤u and b ≥v in I h(x) = ha,b(x) −x −u v −u ha,b(v) −v −x v −u ha,b(u), x ∈[a, b].
(16) To see that h is independent of a and b, consider any larger interval [a′, b′] ⊂I, and conclude from the strong Markov property at τa,b that, for any x ∈[a, b], Exτa′,b′ = Exτa,b + Px{τa < τb} Eaτa′,b′ + Px{τb < τa} Ebτa′,b′, or ha′,b′(x) = ha,b(x) + b −x b −a ha′,b′(a) + x −a b −a ha′,b′(b).
(17) Thus, ha,b and ha′,b′ agree on [a, b] up to an affine function and therefore yield the same value in (16).
33. One-dimensional SDEs and Diffusions 761 If a ≤u and b ≥v, then (16) shows that h and ha,b agree on [a, b] up to an affine function, and (15) follows since ha,b(a) = ha,b(b) = 0. The formula extends by (17) to arbitrary a < b in I.
2 Since h is strictly concave, its left derivative h′ −is strictly decreasing and left-continuous, and hence determines a measure ν on Io satisfying 2 ν[a, b) = h′ −(a) −h′ −(b), a < b in Io.
(18) For motivation, we note that this expression is consistent with (14).
To prove Theorem 33.9, we first need to clarify the behavior of X at the endpoints of I. If b ̸= I for an endpoint b, then by hypothesis the motion terminates when X reaches b, and we may attach b to I as an absorbing state.
For convenience, we may then assume that I is a compact interval of the form [a, b], where either endpoint may be inaccessible, in the sense that a.s. it can not be reached in finite time from a point in Io.
For either endpoint b, the set Ξb = {t ≥0; Xt = b} is regenerative under Pb in the sense of Chapter 29. In particular, Theorem 29.7 shows that b is either absorbing, in the sense that Ξb = R+ a.s., or reflecting, in the sense that Ξo b = ∅a.s. In the latter case, the reflection is said to be fast if λ Ξb = 0 and slow if λ Ξb > 0. We give a more detailed discussion of the boundary behavior after the proof of the main theorem.
First we establish Theorem 33.9 in a special case. The general result will then be deduced by a pathwise comparison.
Proof of Theorem 33.9, absorbing endpoints (M´ el´ eard): Let X have distri-bution Px, where x ∈Io, and put ζ = inf{t > 0; Xt ̸= Io}. For any a < b in Io with x ∈[a, b], the process Xτa,b is a continuous martingale, and so by Theorem 29.5 h(Xt) = h(x) + t 0 h′ −(X) dX − I ˜ Lx t ν(dx), t ∈[0, ζ), (19) where ˜ L is the local time of X.
Next conclude from Theorem 19.4 that X = B ◦[X] a.s. for a Brownian motion B starting at x. Using Theorem 29.5 twice, we get in particular, for any non-negative measurable function f, I f(x) ˜ Lx t dx = t 0 f(Xs) d[X]s = [X]t 0 f(Bs) ds = I f(x) Lx [X]t dt, where L is the local time of B. Hence, ˜ Lx t = Lx [X]t a.s. for t < ζ, and so the last term in (19) equals A[X]t a.s.
762 Foundations of Modern Probability For any optional time σ, put τ = σ ∧τa,b, and conclude from the strong Markov property that Ex τ + ha,b(Xτ) = Ex τ + EXττa,b = Ex τ + τa,b ◦θτ = Exτa,b = ha,b(x).
Writing Mt = h(Xt)+t, it follows by Lemma 9.14 that M τa,b is a Px-martingale whenever x ∈[a, b] ⊂Io. Comparing with (19) and using Proposition 18.2, we obtain A[X]t = t a.s. for all t ∈[0, ζ). Since A is continuous and strictly increasing on [0, ζ) with inverse σ, we get [X]t = σt a.s. for t < ζ. The last relation extends to [ζ, ∞), provided we give infinite mass to ν at each endpoint.
Then X = B ◦σ a.s. on R+.
Conversely, we note that B ◦σ is a regular diffusion on I, whenever σ is a random time-change based on a measure ν with the stated properties. To prove the uniqueness of ν, fix any a < x < b in Io, and apply Lemma 33.10 with f(y) = {ga,b(x, y)}−1 to see that ν(a, b) is determined by Px.
2 Proof of Theorem 33.9, general case: Define ν on Io as in (18), and ex-tend the definition to ¯ I, by assigning infinite mass to any absorbing endpoint.
To reflecting endpoints we attach finite mass, to be specified later. Given a Brownian motion B, we see as before that the correspondingly time-changed process ˜ X = B ◦σ is a regular diffusion in I. Letting ζ = sup{t; Xt ∈Io} and ˜ ζ = sup{t; ˜ Xt ∈Io}, we further see from the previous case that Xζ and ˜ X ˜ ζ have the same distribution for any starting position x ∈Io.
Now fix any a < b in Io, and define recursively χ1 = ζ + τa,b ◦θζ, χn+1 = χn + χ1 ◦θχn, n ∈N.
Then the processes Y a,b n = Xζ ◦θχn form a Markov chain in the path space.
A similar construction for ˜ X yields some processes ˜ Y a,b n , and we note that (Y a,b n ) d = (˜ Y a,b n ) for fixed a and b. Since the process Y a′,b′ n , obtained for a smaller interval [a′, b′], can be measurably recovered from that for [a, b], and similarly for ˜ Y a′,b′ n , the entire collections (Y a,b n ) and ( ˜ Y a,b n ) have the same distribution.
By Theorem 8.17, we may then assume that the two families agree a.s.
Now let I = [a, b], where a is reflecting. By the nature of Brownian motion, the level sets Ξa and ˜ Ξa for X and ˜ X are a.s. perfect, and so we may form the corresponding excursion point processes ξ and ˜ ξ, local times L and ˜ L, and inverse local times T and ˜ T. Since excursions within [a, b) agree a.s. for X and ˜ X, we can normalize the excursion laws of the two processes, using the law of large numbers, such that the corresponding parts of ξ and ˜ ξ agree a.s. Then even T and ˜ T agree, possibly apart from the lengths of excursions reaching b and the drift coefficient c in Theorem 29.13. For ˜ X the latter is proportional to the mass ν{a}, which can now be chosen such that c becomes the same as 33. One-dimensional SDEs and Diffusions 763 for X. This choice of ν{a} is clearly independent of starting position x for X and ˜ X.
If the other endpoint b is absorbing, then clearly X = ˜ X a.s., and the proof is complete. If b is instead reflecting, the excursions from b agree a.s. for X and ˜ X. Repeating the argument with the roles of a and b interchanged, we get X = ˜ X a.s., after a suitable adjustment of the mass ν{b}.
2 We proceed to classify the boundary behavior of a regular diffusion on a natural scale, in terms of the speed measure ν. A right endpoint b is called an entrance boundary for X if it is inaccessible, and yet lim r→∞inf y>x Py{τx ≤r} > 0, x ∈Io.
(20) By the Markov property at times nr, n ∈N, the limit in (20) then equals 1, and in particular Py{τx < ∞} = 1 for all x < y in Io. In Theorem 33.13 below, we show that an entrance boundary is an endpoint where X may enter but not exit.
The opposite behavior occurs at an exit boundary, defined as an endpoint b that is accessible and naturally absorbing, in the sense of remaining absorbing when the charge ν{b} is reduced to zero. If b is accessible but not naturally absorbing, we have already seen how the boundary behavior of X depends on the value of ν{b}. Thus, in this case, b is absorbing when ν{b} = ∞, slowly reflecting when ν{b} ∈(0, ∞), and fast reflecting when ν{b} = 0. For reflecting b, we further see from Theorem 33.9 that the set Ξ b = {t ≥0; Xt = b} is a.s.
perfect.
Theorem 33.12 (boundary behavior, Feller) Consider a regular diffusion on a natural scale in I = [a, b] with speed measure ν. Then for fixed u ∈Io, we have (i) b is accessible iff b < ∞, b u (b −x) ν(dx) < ∞, (ii) b is accessible, reflecting iff b < ∞, ν(u, b] < ∞, (iii) b is an entrance boundary iff b = ∞, b u x ν(dx) < ∞.
The stated conditions can be translated into similar criteria for any reg-ular diffusion. In general, exit and other accessible boundaries may clearly be infinite, whereas entrance boundaries may be finite. Explosion is said to occur when X reaches an infinite boundary in finite time. A basic example of a regular diffusion on (0, ∞) with an entrance boundary at 0 is given by the Bessel process Xt = |Bt|, where B is a Brownian motion in Rd with d ≥2.
764 Foundations of Modern Probability Proof of Theorem 33.12: (i) Since lim sups(±Bs) = ∞a.s., Theorem 33.9 shows that X cannot explode, so any accessible endpoint is finite. Now as-sume that a < c < u < b < ∞. Then Lemma 33.8 shows that b is accessible iff hc,b(u) < ∞, which holds by (14) iff b u(b −x) ν(dx) < ∞.
(ii) Here b < ∞by (i), in which case Lemma 33.2 shows that b is absorbing iffν(u, b] = ∞.
(iii) An entrance boundary b is inaccessible by definition, and therefore τu = τu,b a.s. when a < u < b. Arguing as in the proof of Lemma 33.8, we also note that Eyτu is bounded for y > u. If b < ∞, we obtain the contradiction Eyτu = hu,b(y) = ∞, and so b must be infinite. From (14), we get by monotone convergence as y →∞ Eyτu = hu,∞(y) = 2 ∞ u x ∧y −u ν(dx) →2 ∞ u (x −u) ν(dx), which is finite iff ∞ u x ν(dx) < ∞.
2 We proceed to establish an important regularity property, which also clar-ifies the nature of entrance boundaries.
Theorem 33.13 (entrance laws and Feller properties) For a regular diffusion X on I, form ¯ I ⊃I by attaching the entrance boundaries of X. Then X extends to a continuous Feller process on ¯ I.
Proof: For any f ∈Cb, a, x ∈I, and r, t ≥0, the strong Markov property at τx ∧r yields Eaf(Xτx∧r+t) = EaTtf Xτx∧r = Ttf(x)Pa{τx ≤r} + Ea Ttf(Xr); τx > r .
(21) To show that Ttf is left-continuous at any y ∈I, fix an a < y in Io, and choose r > 0 so large that Pa{τy ≤r} > 0. As x ↑y we have τx ↑τy, and so {τx ≤r} ↓{τy ≤r}. Thus, the probabilities and expectations in (21) converge to the corresponding expressions for τy, and we get Ttf(x) →Ttf(y). The proof of the right-continuity is similar.
If an endpoint b is inaccessible but not of entrance type, and if f(x) →0 as x →b, then clearly even Ttf(x) →0 at b for each t > 0. Now let ∞be an entrance boundary, and consider a function f with a finite limit at ∞. We need to show that even Ttf(x) converges, as x →∞for fixed t. Then conclude from Lemma 33.10 that, as a →∞, sup x≥a Exτa = 2 sup x≥a ∞ a x ∧r −a ν(dr) = 2 ∞ a (r −a) ν(dr) →0.
(22) 33. One-dimensional SDEs and Diffusions 765 Next we note that, for any a < x < y and r ≥0, Py{τa ≤r} ≤Py τx ≤r, τa −τx ≤r = Py{τx ≤r} Px{τa ≤r} ≤Px{τa ≤r}.
Thus, Px ◦τ −1 a converges vaguely as x →∞for fixed a, and (22) shows that the convergence holds even in the weak sense.
Now fix any t and f, and introduce for every a the continuous function ga(s) = Eaf{X(t−s)+}. By the strong Markov property at τa ∧t and Theorem 8.5, we get for any x, y ≥a Ttf(x) −Ttf(y) ≤ Ex ga(τa) −Ey ga(τa) + 2 ∥f∥ Px + Py {τa > t}.
Here the right-hand side tends to 0 as x, y →∞and then a →∞, by (22) and the weak convergence of Px ◦τ −1 a . Thus, Ttf(x) is Cauchy convergent as x →∞, and we may denote the limit by Ttf(∞).
It is now easy to check that the extended operators Tt form a Feller semi-group on C0(¯ I). Finally, Theorem 17.15 shows that the associated process, starting at a possible entrance boundary, has again a continuous version in the topology of ¯ I.
2 We may next establish a ratio ergodic theorem, for elementary additive functionals of a recurrent diffusion. It is instructive to compare with the general ratio limit theorems in Chapter 26.
Theorem 33.14 (ratio ergodic theorem, Derman, Motoo & Watanabe) Let X be a regular, recurrent diffusion on a natural scale in I with speed measure ν. Then for any measurable functions f, g ≥0 on I with νf < ∞and νg > 0, we have as t →∞ t 0 f(Xs) ds t 0 g(Xs) ds →νf νg a.s. Px, x ∈I.
Proof: Fix any a < b in I, put τ b a = τb +τa ◦θτb, and define recursively some optional times σ0, σ1, . . . by σn+1 = σn + τ b a ◦θσn, n ≥0, starting with σ0 = τa. Write σn 0 f(Xs) ds = σ0 0 f(Xs) ds+ n Σ k=1 σk σk−1 f(Xs) ds, (23) 766 Foundations of Modern Probability and note that the last sum has i.i.d. terms. By the strong Markov property and Lemma 33.10, we get for any x ∈I Ex σk σk−1 f(Xs) ds = Ea τb 0 f(Xs) ds + Eb τa 0 f(Xs) ds = f(y) g−∞,b(y, a) + ga,∞(y, b) ν(dy) = 2 f(y) b −y ∨a + + y ∧b −a + ν(dy) = 2 (b −a) νf.
The same lemma shows that the first term in (23) is a.s. finite. Hence, the law of large numbers yields lim n→∞n−1 σn 0 f(Xs) ds = 2 (b −a) νf a.s. Px, x ∈I.
Writing κt = sup{n ≥0; σn ≤t}, we get by monotone interpolation lim t→∞κ−1 t t 0 f(Xs) ds = 2 (b −a) νf a.s. Px, x ∈I.
(24) This remains true when νf = ∞, since we can then apply (24) to some ap-proximating functions fn ↑f with νfn < ∞, and let n →∞. The assertion now follows as we apply (24) to both f and g.
2 We may finally classify the asymptotic behavior of the process, depending on the boundedness of the speed measure ν and the nature of the endpoints.
Applying a suitable affine mapping, we may transforms Io into one of the intervals (0, 1), (0, ∞), or (−∞, ∞). We need to distinguish between the three possibilities of boundary behavior at finite endpoints, codified as follows: ( — inaccessible boundary, [ — absorbing boundary, [[ — reflecting boundary.
Thus, we end up with totally ten different cases.
A diffusion is said to be ν -ergodic if it is recurrent and such that Lx(Xt) w → ν/νI for all x. A recurrent diffusion may be either null-recurrent or positive recurrent, depending on whether |Xt| P →∞or not. We further say that ab-sorption occurs at an endpoint b, when Xt = b for all sufficiently large t, so that 1{Xt = b} →1.
Theorem 33.15 (recurrence and ergodicity, Feller, Maruyama & Tanaka) Let X be a regular diffusion on a natural scale with speed measure ν. Then the ergodic behavior of X depends as follows on the initial position x and the nature of the boundaries: • (−∞, ∞): X is ν-ergodic when ν is bounded, otherwise null-recurrent, 33. One-dimensional SDEs and Diffusions 767 • (0, ∞): Xt →0 a.s., • [0, ∞): 1{Xt = 0} →1 a.s., • : X is ν-ergodic.
First we prove the relatively elementary recurrence properties, distinguish-ing between the cases of absorption, convergence, and recurrence.
Proof of recurrence properties: [0, 1]: Theorem 33.7 (ii) yields Px{τ0 < ∞} = 1 −x and Px{τ1 < ∞} = x.
[0, ∞): The same theorem yields for any b > x Px{τ0 < ∞} ≥Px{τ0 < τb} = b −x b , which tends to 1 as b →∞.
(−∞, ∞): The recurrence follows from the previous case.
[[0, ∞): Since 0 is reflecting, P0{τy < ∞} > 0 for some y > 0, which extends to arbitrary y by the strong Markov property and the regularity of X.
Arguing as in the proof of Lemma 33.8, we conclude that P0{τy < ∞} = 1 for all y > 0. The asserted recurrence now follows, as we combine with the statement for [0, ∞).
(0, ∞): Here X = B ◦[X] a.s. for a Brownian motion B. Since X > 0, we have [X]∞< ∞a.s., and so X converges a.s. Now Py{τa,b < ∞}= 1 for any 0 < a ≤y ≤b. Applying the Markov property at an arbitrary time t > 0, we conclude that a.s. either lim inft Xt ≤a or lim supt Xt ≥b.
Since a and b are arbitrary, X∞is then an endpoint of (0, ∞), and hence equals 0.
(0, 1): As in the previous case, we have a.s. convergence to either 0 or 1. To find the corresponding probabilities, we see from Theorem 33.7 that Px{τa < ∞} ≥Px{τa < τb} = b −x b −a, 0 < a < x < b < 1.
Letting b →1 and then a →0, we obtain Px{X∞= 0} ≥1−x. Similarly, Px{X∞= 1} ≥x, and so equality holds in both relations.
[0, 1): Once again, X →0 or 1 with probabilities 1 −x and x, respectively.
Further note that Px{τ0 < ∞} ≥Px{τ0 < τb} = b −x b , 0 ≤x < b < 1, which tends to 1 −x as b →1. Thus, absorption occurs when X →0.
768 Foundations of Modern Probability : As in the previous case, we get P0{τ1 < ∞} = 1, and by symmetry also P1{τ0 < ∞} = 1.
[[0, 1]: As before P0{τ1 < ∞} = 1, and so the same relation holds for Px.
[[0, 1): Again P0{τb < ∞} = 1 for all b ∈(0, 1).
By the strong Markov property at τb and the result for [0, 1), it follows that P0{Xt →1}≥b.
Letting b →1 gives Xt →1, a.s. under P0. The result for Px now follows by the strong Markov property at τx, applied under P0.
2 The ergodic properties will be proved along the lines of Theorem 11.22, which requires some additional lemmas.
Lemma 33.16 (coupling) For any Feller processes X⊥ ⊥Y , the pair (X, Y ) is again Feller.
Proof: Use Theorem 5.30 and Lemma 17.3.
2 We proceed with a continuous-time counterpart of Lemma 11.24.
Lemma 33.17 (strong ergodicity) For a regular, recurrent diffusion with ini-tial distribution μ1 or μ2, we have lim t→∞ Pμ1 ◦θ−1 t −Pμ2 ◦θ−1 t = 0.
Proof: Let X and Y be independent with distributions Pμ1 and Pμ2, respec-tively. By Theorem 33.13 and Lemma 33.16, the pair (X, Y ) can be extended to a Feller diffusion, and so by Theorem 17.17 it is again strong Markov with respect to the induced filtration G. Define τ = inf{t ≥0; Xt = Yt}, and note that τ is G-optional by Lemma 9.6. The assertion now follows as for Lemma 11.24, provided we can show that τ < ∞a.s.
Here we first assume that I = R. The processes X and Y are then continu-ous local martingales. By independence they remain local martingales for the extended filtration G, and so even X −Y is a local G-martingale. Using the in-dependence and recurrence of X and Y , we get [X −Y ]∞= [X]∞+[Y ]∞= ∞ a.s., which shows that even X −Y is recurrent. In particular, τ < ∞a.s.
Next let I = , and define τ1 = inf{t ≥0; Xt = 0} and τ2 = inf{t ≥0; Yt = 0}. By the continuity and recurrence of X and Y , we get τ ≤τ1 ∨τ2 < ∞a.s.
2 Our next result is similar to the discrete-time version in Lemma 11.25.
Lemma 33.18 (existence) Every regular, positive-recurrent diffusion has an invariant distribution.
Proof: By Theorem 33.13 we may regard the transition kernels μt with associated operators Tt as defined on ¯ I—the interval I with possible entrance boundaries attached. Since X is not null-recurrent, we may choose a bounded 33. One-dimensional SDEs and Diffusions 769 Borel set B and some x0 ∈I and tn →∞, such that infn μtn(x0, B) > 0. Then Theorem 6.20 yields a measure μ on ¯ I with μI > 0, such that μtn(x0, ·) v →μ along a sub-sequence, in the topology of ¯ I. The convergence extends by Lemma 33.17 to arbitrary x ∈I, and so Ttnf(x) →μf, f ∈C0(¯ I), x ∈I.
(25) Now fix any h ≥0 and f ∈C0(¯ I), and note that even Thf ∈C0(¯ I) by The-orem 33.13. Using (25), the semi-group property, and dominated convergence, we get for any x ∈I μ(Thf) ←Ttn Thf (x) = Th Ttnf (x) →μf.
Thus, μμh = μ for all h, which means that μ is invariant on ¯ I. In particular, μ(¯ I \ I) = 0 by the nature of entrance boundaries, and so μ/μI is an invariant distribution on I.
2 Our final lemma provides the crucial connection between speed measure and invariant distributions.
Lemma 33.19 (positive recurrence) For a regular, recurrent diffusion X on a natural scale in I with speed measure ν, these conditions are equivalent: (i) νI < ∞, (ii) X is positive-recurrent, (iii) X has an invariant distribution.
The invariant distribution is then unique and given by ν/νI.
Proof: If the process is null-recurrent, then clearly no invariant distribution exists. The converse is also true by Lemma 33.18, and so (ii) ⇔(iii). Now fix any bounded, measurable function f : I →R+ with bounded support. By Theorem 33.14, Fubini’s theorem, and dominated convergence, we have for any distribution μ on I t−1 t 0 Eμf(Xs) ds = Eμ t−1 t 0 f(Xs) ds →νf νI .
If μ is invariant, then μf = νf/νI, and so νI < ∞. If instead X is null-recurrent, then Eμf(Xs) →0 as s →∞, and we get νf/νI = 0, which implies νI = ∞.
2 End of proof of Theorem 33.15: It remains to consider the cases where I equals (∞, ∞), , since otherwise we have convergence or ab-sorption at some endpoint. In case of , Theorem 33.12 (ii) shows that ν is bounded. In the remaining cases, ν may be unbounded, so that X is null-recurrent by Lemma 33.19. If ν is bounded, then μ = ν/νI is invariant by the same lemma, and the asserted ν -ergodicity follows from Lemma 33.17 with μ1 = μ.
2 770 Foundations of Modern Probability Exercises 1. Prove pathwise uniqueness for the SDE dXt = (X+ t )1/2 dBt + c dt with c > 0.
Also show that the solutions Xx with Xx 0 = x satisfy Xx t < Xy t a.s. for x < y, up to the time when Xx reaches 0.
2. Show that solutions to equation dXt = σ(Xt) dBt cannot explode. (Hint: If X explodes at time ζ < ∞, then [X]ζ = ∞, and the local time of X tends to ∞as t →ζ, uniformly on compacts. Now use Theorem 29.5 to see that ζ = ∞a.s.) 3. Assume in Theorem 33.1 that Sσ = Nσ. Show that the solutions X to (3) form a regular diffusion on a natural scale, on every connected component I of Sσ. Also note that the endpoints of I are either absorbing or exit boundaries for X. (Hint: Use Theorems 32.11, 29.4, and 29.5, and show that the exit time from any compact interval J ⊂I is finite.) 4. Assume in Theorem 33.1 that Sσ ⊂Nσ, and form ˜ σ from σ by taking ˜ σ(x) = 1 on A = Nσ \ Sσ. Show that solutions X to equation (˜ σ, 0) also solve equation (σ, 0), but not conversely unless A = ∅.
(Hint: Since λA = 0, we have 1A(Xt) dt = 1A(Xt) d[X]t = 0 a.s. by Theorem 29.5.) 5. Assume in Theorem 33.1 that Sσ ⊂Nσ. Show that equation (σ, 0) has solu-tions forming a regular diffusion on every connected component of Sc σ. Prove the corresponding statement for the connected components of Nc σ when Nσ is closed.
(Hint: For Sc σ, use the preceding result. For Nc σ, choose X to be absorbed when it first reaches Nσ.) 6. For a regular diffusion in I, show that any two scale functions p1, p2 on I are related by an affine transformation.
7. For a regular diffusion X on a natural scale in I and with speed measure ν, define a new process Yt = X ◦At, where At = t 0 h(Xs) ds for a bounded, continuous function h > 0 on R+. Show that Y is again a regular diffusion on a natural scale in I, and determine its speed measure.
8.
In the setting of Theorem 33.14, show that the stated relation implies the convergence in Corollary 26.8 (i). Further use the result to prove a law of large numbers for regular, recurrent diffusions with bounded speed measure ν.
(Hint: Note that νg > 0 implies g(Xs) ds > 0 a.s.) 9. Give examples of speed measures ν for diffusions on [0, 1), [[0, 1), and (0, 1). In any of these cases, can we change the character of the boundary at 0 by changing ν? Will it make any difference if the right endpoint is ∞?
10.
Identify the cases in Theorem 33.15 where X is positive-recurrent, null-recurrent, and transient. In the first of these cases, give the invariant distribution.
11. Let X be a Brownian motion in Rd absorbed at 0. Show that Y = |X|2 is a regular diffusion in (0, ∞), describe its boundary behavior for different d, and identify the corresponding case of Theorem 33.15. Verify the conclusion by computing the associated scale function and speed measure.
12. Explain how the ratio ergodic Theorem 33.14 is related to the ergodic prop-erties in Theorems 26.4 and 26.21. Name a context where two or all three results apply.
Chapter 34 PDE Connections and Potential Theory Cauchy problem, Feynman–Kac formula, PDE existence and SDE unique-ness, harmonic functions, regularity, Dirichlet’s problem, transition and occupation densities, Greenian domain, Green function and potential, hitting and quitting kernels, sweeping measure and hitting, equilibrium measure and quitting, dependence on conductor and domain, time rever-sal, capacities and random sets, super-harmonic and excessive functions, additive functional as compensator, Riesz decomposition, measure-induced additive functional In Chapters 17 and 32 we saw how elliptic differential operators arise naturally as the generators of nice diffusion processes. This is the ultimate cause of some profound connections between probability theory and partial differential equa-tions (PDEs). In particular, a suitable extension of the operator 1 2Δ appears as the generator of Brownian motion in Rd, which leads to a close relationship between classical potential theory and the theory of Brownian motion. More specifically, many basic problems in potential theory can be solved by proba-bilistic methods. Conversely, various hitting distributions for Brownian motion can be given a potential-theoretic interpretation.
This chapter explores some of the mentioned connections. First we derive the celebrated Feynman–Kac formula, and show how the existence of solutions to a given Cauchy problem implies uniqueness of solutions to the associated SDE. We then proceed with a probabilistic construction of Green functions and potentials, and solve the Dirichlet, sweeping, and equilibrium problems of classical potential theory in terms of Brownian motion. We further show how Green capacities and alternating set functions can be represented in a natural way in terms of random sets.
We conclude with a discussion of excessive and super-harmonic functions, along with their relations to modern martingale theory. To indicate the main ideas, let f be an excessive function of Brownian motion X on Rd. Then f(X) is a continuous super-martingale under Px for every x, and so it has a Doob– Meyer decomposition M −A. Here A can be chosen to be a continuous additive functional (CAF) of X, and we obtain an associated Riesz decomposition f = UA+h, where UA is the potential of A and h is the greatest harmonic minorant of f.
Some stochastic calculus from Chapters 18 and 32 is used at the beginning of the chapter, and we also rely on the theory of Feller processes from Chapter 771 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 772 Foundations of Modern Probability 17. As for Brownian motion, the present discussion is essentially self-contained, apart from some elementary facts from Chapters 14 and 19. Occasionally, we refer to Chapters 5 and 23 for some basic weak convergence theory. Finally, the results at the end of the chapter require the existence of Poisson processes from Proposition 15.6, as well as some basic facts about the Fell topology listed in Theorem A6.1. Potential-theoretic ideas are used in several other chapters, and some additional, essentially unrelated results appear in especially Chapters 26 and 29.
To begin with the general PDE connections, we consider an arbitrary Feller diffusion in Rd with associated semi-group operators Tt and generator (A, D).
Recall from Theorem 17.6 that, for any f ∈D, the function u(t, x) = Ttf(x) = Exf(Xt), t ≥0, x ∈Rd, satisfies Kolmogorov’s backward equation ˙ u = Au, where ˙ u = ∂u/∂t. Thus, u provides a probabilistic solution to the Cauchy problem ˙ u = Au, u(0, x) = f(x).
(1) Adding a potential term vu to (1), where v: Rd →R+, we obtain the more general problem ˙ u = Au −vu, u(0, x) = f(x).
(2) Here the solution may be expressed in terms of the elementary multiplicative functional e−V , where Vt = t 0 v(Xs) ds, t ≥0.
Let C1,2 be the class of functions f : R+ × Rd of class C1 in the time variable and class C2 in the space variables. Write Cb(Rd) and C+ b (Rd) for the classes of bounded, continuous functions from Rd to R and R+, respectively.
Theorem 34.1 (Cauchy problem, Feynman, Kac) Let (A, D) be the generator of a Feller diffusion in Rd, and fix any f ∈Cb(Rd) and v ∈C+ b (Rd). Then (i) any bounded solution u ∈C1,2 to (2) is given by u(t, x) = Ex e−Vtf(Xt), t ≥0, x ∈Rd.
(ii) the function in (i) solves (2) whenever f ∈D.
Formula (i) can be interpreted in terms of killing. Then consider an expo-nential random variable γ⊥ ⊥X with mean 1, and define ζ = inf{t ≥0; Vt > γ}.
Letting ˜ X be the process X killed at time ζ, we may express the right-hand side in (i) as Exf( ˜ Xt), with the understanding that f( ˜ Xt) = 0 when t ≥ζ. Thus, u(t, x) = ˜ Ttf(x), where ˜ Tt is the transition operator of the killed process. It is easy to verify, directly from (i), that the family ( ˜ Tt) is again a Feller semigroup.
34. PDE Connections and Potential Theory 773 Proof of Theorem 34.1: Let u ∈C1,2 be a bounded solution to (2), fix any t > 0, and define Ms = e−Vs u t −s, Xs , s ∈[0, t].
Writing m = for equality apart from a continuous local martingale or its differ-ential, we see from Lemma 17.21, Itˆ o’s formula, and (2) that, for any s < t, dMs = e−Vs du(t −s, Xs) −u(t −s, Xs) v(Xs) ds m = e−Vs{Au(t −s, Xs) −˙ u(t −s, Xs) −u(t −s, Xs) v(Xs)} ds = 0.
Thus, M is a continuous local martingale on [0, t). Since M is bounded, the martingale property extends to t, and we get u(t, x) = ExM0 = ExMt = Exu(0, Xt) = Ex e−Vtf(Xt).
Next let u be given by (i) for some f ∈D. Integrating by parts and using Lemma 17.21, we obtain d e−Vtf(Xt) = e−Vt d f(Xt) −(vf)(Xt) dt m = e−Vt Af −vf (Xt) dt.
Taking expectations and differentiating at t = 0, we conclude that the semi-group ˜ Ttf(x) = Exf( ˜ Xt) = u(t, x) has generator ˜ A = A −v on D. Now (2) follows by the last assertion in Theorem 17.6.
2 Part (ii) of Theorem 34.1 can often be improved in special cases. In par-ticular, if v = 0 and A = 1 2 Δ = 1 2 Σ i ∂2/∂x 2 i , so that X is a Brownian motion and (2) reduces to the standard heat equation, then u(t, x) = Exf(Xt) solves (2) for any bounded, continuous function f on Rd. To see this, we note that u ∈C1,2 on (0, ∞)×Rd, by the smoothness of the Brownian transition density.
We may then obtain (2) by applying the backward equation to the function Thf(x) for a fixed h ∈(0, t).
Now consider the SDE in Rd dXi t = σi j(Xt) dBj t + bi(Xt) dt, (3) with associated elliptic operator1 Av(x) = 1 2 aij(x) ∂2 ij v(x) + bi(x) ∂i v(x), x ∈Rd, v ∈C2, where aij = σi kσj k. We show how uniqueness in law for solutions to (3) may be inferred from the existence of solutions to the associated Cauchy problem (1).
Theorem 34.2 (existence and uniqueness, Stroock & Varadhan) For any f ∈ C∞ 0 (Rd) we have (i) ⇒(ii), where 1Summation over repeated indices is always understood.
774 Foundations of Modern Probability (i) equation (1) has a bounded solution on a set [0, ε] × Rd, (ii) uniqueness in law holds for the SDE (3).
Proof: Fix any f ∈C∞ 0 and t ∈(0, ε], and let u be a bounded solution to (1) on [0, t] × Rd. If X solves (3), we see as before that Ms = u(t −s, Xs) is a martingale on [0, t], and so Ef(Xt) = E u(0, Xt) = EMt = EM0 = E u(t, X0).
Thus, the 1-dimensional distributions of X on [0, ε] are uniquely determined by the initial distribution.
Now let X and Y be solutions with the same initial distribution. To prove that X fd = Y , it is enough to consider times 0 = t0 < t1 < · · · < tn with tk −tk−1 ≤ε for all k. Assume that the distributions agree at t0, . . . , tn−1 = t, and fix any set C = π−1 t0,...,tn−1B with B ∈B nd. By Theorem 32.7, both L(X) and L(Y ) solve the local martingale problem for (a, b).
If P{X ∈C} = P{Y ∈C} > 0, Theorem 32.11 yields the same property for the conditional measures L(θtX | X ∈C) and L(θtY | Y ∈C). Since the corresponding initial distributions agree by hypothesis, the 1-dimensional result yields the extension L Xt+h; X ∈C = L Yt+h; Y ∈C , h ∈(0, ε].
In particular, the distributions agree at times t0, . . . , tn. The general result now follows by induction.
2 We now specialize to the case of a Brownian motion X in Rd. For any closed set B ⊂Rd, we introduce the hitting time τB = inf{t > 0; Xt ∈B} and associated hitting kernel HB(x, dy) = Px τB < ∞, XτB ∈dy , x ∈Rd.
For suitable functions f, we write HBf(x) = f(y) HB(x, dy).
By a domain in Rd we mean an open, connected subset D ⊂Rd. A function u : D →R is said to be harmonic, if it belongs to C2(D) and satisfies the Laplace equation Δu = 0. We also say that u has the mean-value property, if it is locally bounded and measurable, and such that for any ball B ⊂D with center x, the average of u over the boundary ∂B equals u(x). The following analytic result is crucial for the probabilistic developments.
Lemma 34.3 (harmonic function, Gauss, Koebe) For functions u on a do-main D ⊂Rd, these conditions are equivalent and imply u ∈C∞ D : (i) u is harmonic, (ii) u has the mean-value property.
34. PDE Connections and Potential Theory 775 Proof: First let u ∈C2(D), and fix a ball B ⊂D with center x. Writing τ = τ∂B and noting that Exτ < ∞, we get by Itˆ o’s formula Exu(Xτ) −u(x) = 1 2 Ex τ 0 Δu(Xs) ds.
Here the first term on the left equals the average of u over ∂B, by the spherical symmetry of Brownian motion.
If u is harmonic, then the right-hand side vanishes, and the mean-value property follows. If instead u is not harmonic, we may choose B such that Δu ̸= 0 on B. Then the right-hand side is non-zero, and the mean-value property fails.
It remains to show that every function u with the mean-value property is infinitely differentiable. Then fix any infinitely differentiable, spherically sym-metric probability density ϕ, supported by a ball of radius ε > 0 around the origin. Here the mean-value property yields u = u ∗ϕ, on the set where the right-hand side is defined, and by dominated convergence the infinite differen-tiability of ϕ carries over to u ∗ϕ = u.
2 Before proceeding to the potential-theoretic developments, we need to in-troduce a regularity condition on the domain D. Writing ζ = ζD = τDc, we note that Px{ζ = 0} = 0 or 1 for every x ∈∂D by Corollary 17.18. When this probability is 1, we say that x is regular for Dc or simply regular. If this holds for every x ∈∂D, the boundary ∂D is said to be regular, and we refer to D as a regular domain.
Regularity is a fairly weak condition. In particular, any domain with a smooth boundary is regular, and we shall see that even various edges and corners are allowed, provided they are not too sharp and directed inward. By a spherical cone in Rd with vertex v and axis a ̸= 0 we mean a set of the form C = x; ⟨x −v, a⟩≥c|x −v| , where c ∈(0, |a|].
Lemma 34.4 (cone condition, Zaremba) For a domain D ⊂Rd, let x ∈∂D be such that C ∩G ⊂D c for a spherical cone C with vertex x and a neigh-borhood G of x. Then x is regular for D c.
Proof: By compactness of the unit sphere in Rd, we may cover Rd by C1 = C along with finitely many congruent cones C2, . . . , Cn with vertex x.
By rotational symmetry, 1 = Px min k≤n τCk = 0 ≤Σ k≤n Px τCk = 0 = n Px{τC = 0}, and so Px{τC = 0} > 0. Hence, Corollary 17.18 yields P{τC = 0} = 1, and we get ζD ≤τC∩G = 0 a.s. Px.
2 Now fix a domain D ⊂Rd and a continuous function f : ∂D →R. A function u on ¯ D is said to solve the Dirichlet problem (D, f), if u is harmonic 776 Foundations of Modern Probability on D and continuous on ¯ D with u = f on ∂D. The solution may be interpreted as the electrostatic potential in D, when the potential on the boundary is given by f.
Theorem 34.5 (Dirichlet problem, Kakutani, Doob) For any regular domain D ⊂Rd and function f ∈Cb(∂D), we have (i) the Dirichlet problem (D, f) has a solution u(x) = Ex f(XζD); ζD < ∞ = HD cf(x), x ∈¯ D, (4) (ii) when ζD < ∞a.s., this is the only bounded solution, (iii) when d ≥3 and f ∈C0(∂D), it is the unique solution in C0( ¯ D).
Thus, HD c agrees with the sweeping (balayage) kernel in Newtonian po-tential theory, determining the harmonic measure on ∂D. The following result clarifies the role of the regularity condition on ∂D.
Lemma 34.6 (regularity, Doob) For any domain D and point a ∈∂D, these conditions are equivalent: (i) a is regular for D c, (ii) for f ∈Cb(∂D) and u as in (4), we have u(x) →f(a) as x →a in D.
Proof: First let a be regular. For any t > h > 0 and x ∈D, the Markov property yields Px{ζ > t} ≤Px ζ ◦θh > t −h = ExPXh ζ > t −h .
Here the right-hand side is continuous in x, by the continuity of the Gaussian kernel and dominated convergence, and so lim sup x→a Px{ζ > t} ≤Ea PXh ζ > t −h = Pa ζ ◦θh > t −h .
As h →0, the probability on the right tends to Pa{ζ > t} = 0, and so Px{ζ > t} →0 as x →a, which means that Lx(ζ) w →δ0. Since also Px w →Pa in CR+,Rd, Theorem 5.29 yields Lx(X, ζ) w →La(X, 0) in CR+,Rd × [0, ∞], x →a.
The continuity of the mapping (x, t) →xt implies Lx(Xζ) w →La(X0) = δa, and so u(x) →f(a), by the continuity of f.
Next assume (ii). If d = 1, then D is an interval, which is clearly regular.
Next let d ≥2. Then the Markov property yields for any f ∈Cb(∂D) u(a) = Ea f(Xζ); ζ ≤h + Ea u(Xh); ζ > h , h > 0.
34. PDE Connections and Potential Theory 777 As h →0, we get u(a) = f(a) by dominated convergence, and for f(x) = e−|x−a| we obtain Pa{Xζ = a, ζ < ∞} = 1. Since a.s. Xt ̸= a for all t > 0 by Theorem 19.6 (i), we obtain Pa{ζ = 0} = 1, and so a is regular.
2 Proof of Theorem 34.5: For u as in (4), fix any closed ball in D with center x and boundary S, and use the strong Markov property at τ = τS to obtain u(x) = Ex f(Xζ); ζ < ∞ = ExEXτ f(Xζ); ζ < ∞ = Exu(Xτ).
Hence, u has the mean-value property, and so it is harmonic by Lemma 34.3.
Furthermore, Lemma 34.6 shows that u is continuous on ¯ D with u = f on ∂D.
Thus, u solves the Dirichlet problem (D, f).
Now let d ≥3 and f ∈C0(∂D). For any ε > 0, |u(x)| ≤ε + ∥f∥Px f(Xζ) > ε, ζ < ∞ .
(5) Since X is transient by Theorem 19.6 (ii) and the set y ∈∂D; |f(y)| > ε is bounded, the right-hand side of (5) tends to 0 as |x| →∞and then ε →0, which shows that u ∈C0( ¯ D).
To prove the asserted uniqueness, we may choose f = 0, and show that any solution u with the stated properties is identically 0. When d ≥3 and u ∈C0( ¯ D), this is clear by Lemma 34.3, which shows that harmonic functions have no local maxima or minima. Next let ζ < ∞a.s. and u ∈Cb( ¯ D). Then Corollary 18.19 yields Exu(Xζ∧n) = u(x) for any x ∈D and n ∈N, and so u(x) = Exu(Xζ) = 0, by continuity and dominated convergence as n →∞. 2 To prepare for the probabilistic construction of the Green function in an arbitrary domain D ⊂Rd, we need to study the transition densities of a Brown-ian motion, killed upon hitting the boundary ∂D. Recall that the unrestricted Brownian motion in Rd has transition densities p t(x, y) = (2πt)−d/2e−|x−y|2/2t, x, y ∈Rd, t > 0.
(6) By the strong Markov property and Theorem 8.5, we get for any t > 0, x ∈D, and B ⊂BD Px{Xt ∈B} = Px Xt ∈B, t ≤ζ + Ex Tt−ζ1B(Xζ); t > ζ .
Thus, the killed process has transition densities pD t (x, y) = p t(x, y) −Ex p t−ζ(Xζ, y); t > ζ , x, y ∈D, t > 0.
(7) The following symmetry and continuity properties of pD t play a crucial role in the sequel.
778 Foundations of Modern Probability Theorem 34.7 (transition density, Hunt) For any domain D in Rd and time t > 0, we have (i) pD t is symmetric and continuous on D2, (ii) for regular a ∈∂D, lim x→a pD t (x, y) = 0, y ∈D.
Proof: (i) From (6) we note that pt(x, y) is uniformly continuous in (x, y) for fixed t > 0, as well as in (x, y, t) for |x −y| > ε > 0 and t > 0. By (7) it follows that pD t (x, y) is equi-continuous in y ∈D for fixed t > 0. To prove the continuity in x ∈D for fixed t > 0 and y ∈D, it is then enough to show that Px{Xt ∈B, t ≤ζ} is continuous in x for fixed t > 0 and B ∈BD. For any h ∈(0, t), the Markov property gives Px Xt ∈B, ζ ≥t = Ex PXh{Xt−h ∈B, ζ ≥t −h}; ζ > h .
Thus, for any x, y ∈D, (Px −Py){Xt ∈B, t ≤ζ} ≤(Px + Py){ζ ≤h} + Lx(Xh) −Ly(Xh) , which tends to 0 as y →x and then h →0. Combining the continuity in x with the equi-continuity in y, we conclude that pD t (x, y) is continuous in (x, y) ∈D2 for fixed t > 0.
To prove the symmetry in x and y, it is now enough to establish the inte-grated version C Px{Xt ∈B, ζ > t}dx = B Px{Xt ∈C, ζ > t}dx, (8) for bounded B, C ∈BD. Then fix any compact set F ⊂D. Letting n ∈N, and writing h = 2−nt and tk = kh, we get by Proposition 11.2 C Px Xtk ∈F, k ≤2n; Xt ∈B dx = F · · · F 1C(x0) 1B(x2n) Π k≤2nph(xk−1, xk) dx0 · · · dx2n.
Here the right-hand side is symmetric in the pair (B, C), by the symmetry of ph(x, y). By dominated convergence as n →∞, we obtain (8) with F in place of D, and the stated version follows by monotone convergence as F ↑D.
(ii) From the proof of Lemma 34.6, recall that Lx(ζ, X) w →La(0, X) as x →a with a regular a ∈∂D. In particular, Lx(ζ, Xζ) w →δ 0,a. By the bound-edness and continuity of pt(x, y) for |x −y| > ε > 0, we see from (7) that pD t (x, y) →0.
2 A domain D ⊂Rd is said to be Greenian, if either d ≥3, or if d ≤2 and Px{ζD < ∞} = 1 for all x ∈D. Since the latter probability is harmonic in 34. PDE Connections and Potential Theory 779 x, it suffices by Lemma 34.3 to verify the stated property for a single x ∈D.
Given a Greenian domain D, we introduce the Green function gD(x, y) = ∞ 0 pD t (x, y) dt, x, y ∈D.
For any measure μ on D, we further introduce the associated Green potential GDμ(x) = gD(x, y) μ(dy), x ∈D.
Writing GDμ = GDf when μ(dy) = f(y) dy, we get by Fubini’s theorem Ex ζ 0 f(Xt) dt = gD(x, y) f(y) dy = GDf(x), x ∈D, which identifies gD as an occupation density of the killed process.
The next result shows that gD and GD agree with the Green function and Green potential of classical potential theory. Thus, GDμ(x) may be interpreted as the electrostatic potential at x, arising from a charge distribution μ in D, when the boundary ∂D is grounded.
Theorem 34.8 (Green function) For a Greenian domain D ⊂Rd, (i) the function gD is symmetric on D 2, (ii) gD(x, y) is harmonic in x ∈D \ {y} for fixed y ∈D, (iii) for regular b ∈∂D, lim x→b gD(x, y) = 0, y ∈D.
The proof is straightforward when d ≥3, but for d ≤2 we need two technical lemmas. We begin with a uniform estimate for large t.
Lemma 34.9 (uniform integrability) For a domain D ⊂Rd, bounded when d ≤2, we have lim t→∞sup x,y∈D ∞ t pD s (x, y) ds = 0.
Proof: For d ≥3 we may take D = Rd, in which case the result is obvious from (6). Now let d = 2. By obvious domination and scaling arguments, we may then assume that |x| ≤1, y = 0, D = {z; |z| ≤2}, and t > 1. Writing p t(x) = p t(x, 0), we get by (7) pD t (x, 0) ≤p t(x) −E0 p t−ζ(1); ζ ≤t/2 ≤p t(0) −p t(1) P0{ζ ≤t/2} ≤p t(0) P0{ζ > t/2} + p t(0) −p t(1) < ⌢t−1P0{ζ > t/2} + t−2.
As in Lemma 33.8 (ii), we have E0ζ < ∞, and so by Lemma 4.4 the right-hand side is integrable in t ∈[1, ∞). The proof for d = 1 is similar.
2 We also need to show that bounded sets have bounded Green potential.
780 Foundations of Modern Probability Lemma 34.10 (boundedness) For a Greenian domain D ⊂Rd and bounded set B ∈BD, the function GD1B is bounded.
Proof: By domination and scaling, along with the strong Markov property, we may take B = {x; |x| ≤1}, and show that GD1B(0) < ∞. For d ≥3 we may further choose D = Rd, in which case the result follows by a simple computation. For d = 2, we may assume that D ⊃C ≡{x; |x| < 2}. Write σ = ζC + τB ◦θζC and τ0 = 0, and define recursively τk+1 = τk + σ ◦θτk, k ≥0.
Putting b = (1, 0) and using the strong Markov property at the times τk, we get GD1B(0) = GC1B(0) + GC1B(b) Σ k≥1 P0{τk < ζ}.
Here GC1B(0) ∨GC1B(b) < ∞by Lemma 34.9.
Furthermore, the strong Markov property yields P0{τk < ζ} ≤pk, where p = supx∈B Px{σ < ζ}. Fi-nally, we note that p < 1, since Px{σ < ζ} is harmonic and hence continuous on B. The proof for d = 1 is similar.
2 Proof of Theorem 34.8: (i) This is clear by Theorem 34.7.
(ii) If d ≥3, or if d = 2 and D is bounded, we see from Theorem 34.7, Lemma 34.9, and dominated convergence that gD(x, y) is continuous in x ∈ D{y} for fixed y ∈D. Moreover, GD1B has the mean-value property in D\ ¯ B for bounded B ∈BD. The latter property extends by continuity to the density gD(x, y), which is then harmonic in x ∈D \ {y} for fixed y ∈D, by Lemma 34.3.
For d = 2 and unbounded D, define Dn = {x ∈D; |x| < n}, and note as before that gDn(x, y) has the mean-value property in x ∈Dn \ {y} for fixed y ∈Dn. Since pDn t ↑pD t by dominated convergence, we have gDn ↑gD, and so the mean-value property extends to the limit. For any x ̸= y in D, choose a cir-cular disk B around y with radius ε > 0 small enough that x / ∈¯ B ⊂D. Then π ε2gD(x, y) = GD1B(x) < ∞by Lemma 34.10. Thus, Lemma 34.3 shows that even gD(x, y) is harmonic in x ∈D \ {y}.
(iii) Fix any y ∈D, and assume that x →b ∈∂D. Choose a Greenian do-main D′ ⊃D with b ∈D′. Since pD t ≤pD′ t , and both pD′ t (·, y) and gD′(·, y) are continuous at b, whereas pD t (x, y) →0 by Theorem 34.7, we get gD(x, y) →0 by Theorem 1.23.
2 We proceed to show that a measure is determined by its Green potential, whenever the latter is finite. An extension appears as part of Theorem 34.12.
For convenience, we write P D t μ(x) = pD t (x, y) μ(dy), x ∈D, t > 0.
Theorem 34.11 (uniqueness) For measures μ, ν on a Greenian domain D ⊂ Rd, we have GDμ = GDν < ∞ ⇒ μ = ν.
34. PDE Connections and Potential Theory 781 Proof: For any t > 0, we have t 0 (P D s μ) ds = GDμ −P D t GDμ = GDν −P D t GDν = t 0 (P D s ν) ds.
(9) By the symmetry of pD, we further get for any measurable function f : D →R+ f(x) P D s μ(x) dx = f(x) dx pD s (x, y) μ(dy) = μ(dy) f(x) pD s (x, y) dx = P D s f(y) μ(dy).
Hence, f(x) dx t 0 P D s μ(x) ds = t 0 ds P D s f(y) μ(dy) = μ(dy) t 0 P D s f(y) ds, and similarly for ν. By (9), we obtain μ(dy) t 0 P D s f(y) ds = ν(dy) t 0 P D s f(y) ds.
(10) Assuming f ∈C+ K(D), we get P D s f →f as s →0, and so t−1 t 0P D s f ds →f. If we can take limits in the outer integrations of (10), we obtain μf = νf, which implies μ = ν since f is arbitrary.
To justify the argument, it is enough to show that sups P D s f is μ- and ν-integrable. Then conclude from Theorem 34.7 that f < ⌢pD s (·, y) for fixed s > 0 and y ∈D, and from Theorem 34.8 that f < ⌢GDf. Here the latter property yields P D s f < ⌢P D s GDf ≤GDf, whereas the former property yields for any y ∈D and s > 0 μ(GDf) = GDμ(x)f(x) dx < ⌢P D s GDμ(y) ≤GDμ(y) < ∞, and similarly for ν.
2 Now let FD and KD be the classes of closed and compact subsets of D, and write F r D and K r D for the subclasses of sets with regular boundary. For any B ∈FD, we introduce the associated hitting kernel HD B (x, dy) = Px τB < ζD, XτB ∈dy , x ∈D.
Note that when X has initial distribution μ, the hitting distribution of Xζ in B equals μHD B = μ(dx) HD B (x, ·).
782 Foundations of Modern Probability The next result solves the sweeping problem of classical potential theory.
To avoid some technical distractions, here and below, we consider only subsets with regular boundary. In general, the irregular part of the boundary can be shown to be polar, in the sense of being a.s. avoided by a Brownian motion.
Given this result, one can easily remove all regularity restrictions.
Theorem 34.12 (sweeping and hitting) For a Greenian domain D ⊂Rd with subset B ∈F r D, let μ be a bounded measure on D with GDμ < ∞on B. Then μHD B is the unique measure ν on B with GDμ = GDν on B.
For an electrostatic interpretation, suppose we insert a grounded conductor B into a domain D with grounded boundary and charge distribution μ. Then a charge distribution −μHD B arises on B.
A lemma is needed for the proof. Here we define gD\B(x, y) = 0 when either x or y lies in B.
Lemma 34.13 (fundamental identity) For a Greenian domain D ⊂Rd with subset B ∈F r D, gD(x, y) = gD\B(x, y) + B HD B (x, dz) gD(z, y), x, y ∈D.
Proof: Write ζ = ζD and τ = τB. Subtracting relations (7) for the domains D and D\B, and using the strong Markov property at τ together with Theorem 8.5, we get pD t (x, y) −pD\B t (x, y) = Ex p t−τ(Xτ, y); τ < ζ ∧t −Ex p t−ζ(Xζ, y); τ < ζ < t = Ex p t−τ(Xτ, y); τ < ζ ∧t −Ex EXτ p t−τ−ζ(Xζ, y); ζ < t −τ ; τ < ζ ∧t = Ex pD t−τ(Xτ, y); τ < ζ ∧t .
Integrating with respect to t yields gD(x, y) −gD\B(x, y) = Ex gD(Xτ, y); τ < ζ = HD B (x, dz) gD(z, y).
2 Proof of Theorem 34.12: Since ∂B is regular, we have HD B (x, ·) = δx for all x ∈B, and so by Lemma 34.13 we get for all x ∈B and z ∈D gD(x, y) HD B (z, dy) = gD(z, y) HD B (x, dy) = gD(z, x).
34. PDE Connections and Potential Theory 783 Integrating with respect to μ(dz) gives GD(μHD B )(x) = GDμ(x), which shows that ν = μHD B has the stated property.
Now consider any measure ν on B with GDμ = GDν on B. Noting that gD\B(x, ·) = 0 on B, whereas HD B (x, ·) is supported by B, we get by Lemma 34.13 for any x ∈D GDν(x) = ν(dz) gD(z, x) = ν(dz) gD(z, y) HD B (x, dy) = HD B (x, dy) GDν(y) = HD B (x, dy) GDμ(y).
Thus, μ determines GDν on D, and so ν is unique by Theorem 34.11.
2 Turning to the classical equilibrium problem, we introduce for any K ∈KD the last exit or quitting time γD K = sup t < ζD; Xt ∈K , along with the associated quitting kernel LD K(x, dy) = Px γD K > 0; X(γD K) ∈dy .
Theorem 34.14 (equilibrium measure and quitting, Chung) For a Greenian domain D ∈Rd with subset K ∈KD, (i) there exists a measure μD K on ∂K with LD K(x, dy) = gD(x, y) μD K(dy), x ∈D, (ii) μD K is diffuse when d ≥2, (iii) when K ∈K r D, μD K is the unique measure μ on K with GDμ = 1 on K.
Here μD K is called the equilibrium measure of K relative to D, and its total mass CD K is called the capacity of K in D. For an electrostatic interpreta-tion, suppose we insert a conductor K with potential 1 into a domain D with grounded boundary. Then a charge distribution μD K arises on the boundary of K.
Proof: Write γ = γD K, and define lε(x) = ε−1Px 0 < γ ≤ε , ε > 0.
Using Fubini’s theorem, the simple Markov property, and dominated conver-gence as ε →0, we get for any f ∈Cb(D) and x ∈D 784 Foundations of Modern Probability GD(flε)(x) = Ex ζ 0 f(Xt) lε(Xt) dt = ε−1 ∞ 0 Ex f(Xt) PXt{0 < γ ≤ε}; t < ζ dt = ε−1 ∞ 0 Ex f(Xt); t < γ ≤t + ε dt = ε−1Ex γ (γ−ε)+ f(Xt) dt →Ex f(Xγ); γ > 0 = LD Kf(x).
If f has compact support, then for every x we may replace f by the bounded, continuous function f/gD(x, ·), to get as ε →0 f(y) lε(y) dy → LD K(x, dy) f(y) gD(x, y) .
(11) Since the left-hand side is independent of x, the same thing is true for the measure μD K(dy) = LD K(x, dy) gD(x, y) .
(12) When d = 1 we have gD(x, x) < ∞, and (12) is trivially equivalent to (i). If instead d ≥2, then singletons are polar, and so the measure LD K(x, ·) is diffuse, which implies the same property for μD K. Thus, (12) is again equivalent to (i).
Furthermore, the continuity of X implies that LD K(x, ·), and then also μD K is supported by ∂K.
Integrating equation (i) over D yields Px τK < ζD = GDμD K(x), x ∈D, and so for K ∈Kr D we get GDμD K = 1 on K. If ν is another measure on K with GDν = 1 on K, then ν = μD K by the uniqueness in Theorem 34.12.
2 Now we will see how the equilibrium measures and capacities depend on the sets K ∈Kr D.
Proposition 34.15 (consistency) For a Greenian domain D ⊂Rd with sub-sets K ⊂B in K r D, (i) μD K = μD BHD K = μD BLD K, (ii) CD K = B Px τK < ζD μD B(dx).
Proof: By Theorem 34.12 and the defining properties of μD B and μD K, we have on K GD μD BHD K = GDμD B = 1 = GDμD K, 34. PDE Connections and Potential Theory 785 and so μD BHD K = μD K by the same result. To prove the second equality in (i), we see from Theorem 34.14 that, for any A ∈BK, μD BLD K(A) = μD B(dx) A gD(x, y) μD K(dy) = A GDμD B(y) μD K(dy) = μD K(A), since GDμD B = 1 on A ⊂B. Finally (i) ⇒(ii), since HD K(x, K) = Px{τK < ζD}.
2 Some basic properties of capacities and equilibrium measures follow im-mediately from Proposition 34.15. To explain the terminology, fix a space S, along with a class U of subsets, closed under finite unions. For any function h: U →R and sets U, U1, U2, . . . ∈U, we define recursively the differences Δ U1h(U) = h(U ∪U1) −h(U), Δ U1,...,Unh(U) = Δ Un Δ U1,...,Un−1h(U) , n > 1, where the difference Δ Un in the last formula is taken with respect to U. Note that the higher-order differences Δ U1,...,Un are invariant under permutations of U1, . . . , Un. Say that h is alternating or completely monotone if (−1)n+1Δ U1,...,Unh(U) ≥0, n ∈N, U, U1, U2, . . . ∈U.
Corollary 34.16 (dependence on conductor, Choquet) For a Greenian do-main D ⊂Rd, (i) the capacity CD K is an alternating function of K ∈K r D, (ii) μD Kn w →μD K as Kn ↓K or Kn ↑K in K r D.
Proof: (i) Let ψ denote the path of Xζ, regarded as a random closed set in D. Writing hx(K) = Px ψK ̸= ∅ = Px{τK < ζ}, x ∈D \ K, we get by induction (−1)n+1ΔK1,...,Knhx(K) = Px ψK = ∅, ψK1 ̸= ∅, . . . , ψKn ̸= ∅ ≥0, and the assertion follows by Proposition 34.15 with K ⊂Bo.
(ii) Trivially, τKn ↓τK when Kn ↑K, and τKn ↑τK when Kn ↓K since the Kn are closed. In the latter case, we have also n {τKn < ζ} = {τK < ζ} by compactness. Thus, in both cases HD Kn(x, ·) w →HD K(x, ·) for all x ∈D \ n Kn, and by dominated convergence in Proposition 34.15 with Bo ⊃ n Kn, we get μD Kn w →μD K.
2 The next result solves the equilibrium problem for two conductors.
786 Foundations of Modern Probability Corollary 34.17 (condenser theorem) For any disjoint sets B ∈F r D and K ∈K r D, there exists a unique signed measure ν on B ∪K with GDν = 0 on B and GDν = 1 on K, given by ν = μD\B K −μD\B K HD B .
Proof: Applying Theorem 34.14 to the domain D \ B with subset K, we get ν = μD\B K on K, and then ν = −μD\B K HD B on B by Theorem 34.12.
2 The symmetry between hitting and quitting kernels in Proposition 34.15 can be extended to an invariance under time reversal of the whole process.
More precisely, putting γ = γD K, we may relate the stopped process Xζ t = Xγ∧t to its reversal ˜ Xγ t = X(γ−t)+. For convenience, write Pμ = Px μ(dx), and refer to the induced measures as distributions, even when μ is not normalized.
Theorem 34.18 (time reversal) For a Greenian domain D ∈Rd with subset K ∈K r D, put γ = γD K and μ = μD K. Then Xγ d = ˜ Xγ under Pμ.
Proof: Let Px and Ex refer to the process Xζ. Fix any times 0 = t0 < t1 < · · · < tn, and write sk = tn −tk and hk = tk −tk−1. For any continuous functions f0, . . . , fn with compact supports in D, define f ε(x) = Ex Π k≥0 fk(Xsk) lε(Xtn) = Ex Π k≥1 fk(Xsk) EXs1(f0 lε)(Xt1), where the last equality holds by the Markov property at s1. Proceeding as in the proof of Theorem 34.14, we get f εGDμ (x) dx = GDf ε(y) μ(dy) →EμΠ k fk( ˜ Xγ tk) 1{γ > tn}.
(13) On the other hand, (11) shows that the measure lε(x) dx tends vaguely to μ, and so by Theorem 34.7 Ex(f0 lε)(Xt1) = pD t1(x, y) (f0 lε)(y) dy → pD t1(x, y) f0(y) μ(dy).
Using dominated convergence, Fubini’s theorem, Proposition 11.2, Theorem 34.7, and the relation GDμ(x) = Px{γ > 0}, we obtain f εGDμ (x) dx → GDμ(x) dx f0(y) μ(dy) ExΠ k>0 fk(Xsk) pD t1(Xs1, y) = f0(x0) μ(dx0) · · · GDμ(xn) Π k>0 pD hk(xk−1, xk)fk(xk) dxk = EμΠ k fk(Xtk) GDμ(Xtn) = EμΠ k fk(Xtk) 1{γ > tn}.
34. PDE Connections and Potential Theory 787 Comparing with (13), we see that Xγ and ˜ Xγ have the same finite-dimensional distributions.
2 We now extend Proposition 34.15 to exhibit the dependence on Greenian domains D ⊂D′ as well. Then for fixed K ∈KD, we define recursively the optional times τj = γj−1 + τ D′ K ◦θγj−1, γj = τj + γD K ◦θτj, j ≥1, starting with γ0 = 0. Thus, τk and γk are the hitting or quitting times of K, during the k-th excursion in D reaching K, prior to the exit time ζD′. The extended hitting and quitting kernels are given by HD,D′ K (x, ·) = ExΣ k δX(τk), LD,D′ K (x, ·) = ExΣ k δX(γk), where the summations extend over all k ∈N with τk < ∞.
Theorem 34.19 (dependence on conductor and domain) For a Greenian do-mains D ⊂D′ in Rd with regular, compact subsets K ⊂K′, μD K = μD′ K′HD,D′ K = μD′ K′LD,D′ K .
Proof: Define lε = ε−1Px{γD K ∈(0, ε]}, ε > 0.
Proceeding as in the proof of Theorem 34.14, we get for any x ∈D′ and f ∈Cb(D′) GD′(f lε)(x) = ε−1Ex ζD′ 0 f(Xt) 1 γD K ◦θt ∈(0, ε] dt →LD,D′ K f(x).
If f has compact support in D, we may conclude as before that f(y) μD K(dy) ← (f lε)(y) dy → LD,D′ K (x, dy) f(y) gD′(x, y) , and so LD,D′ K (x, dy) = gD′(x, y) μD K(dy).
Integrating with respect to μD′ K′ and noting that GD′μD′ K′ = 1 on K′ ⊃K, we obtain the second expression for μD K.
To deduce the first expression for μD K, note that HD′ K HD,D′ K = HD,D′ K , by the strong Markov property at τK. Combining with the second expression for μD K and using Theorem 34.18 and Proposition 34.15, we get μD K = μD′ K LD,D′ K = μD′ K HD,D′ K = μD′ K′HD′ K HD,D′ K = μD′ K′HD,D′ K .
2 788 Foundations of Modern Probability The last result enables us to study the equilibrium measure μD K and capacity CD K as functions of both D and K.
In particular, we obtain the following continuity and monotonicity properties.
Corollary 34.20 (dependence on domain) For a fixed regular, compact set K ⊂Rd, the equilibrium measure μD K is a decreasing, upper continuous function of the Greenian domain D ⊃K.
Proof: The monotonicity is clear from Theorem 34.19 with K = K′, since HD,D′ K (x, ·) ≥δx, x ∈K ⊂D ⊂D′.
It remains to prove that CD K is continuous from above and below in D for fixed K. By dominated convergence, it is then enough to show that κDn K →κD K, where κD K = sup{j; τj < ∞} is the number of D -excursions hitting K.
Assuming Dn ↑D, we need to show that if Xs, Xt ∈K and X ∈D on [s, t], then X ∈Dn on [s, t] for sufficiently large n. But this is clear, since the path is compact on the interval [s, t]. If instead Dn ↓D, we need to show that, for any r < s < t with Xr, Xt ∈K and Xs / ∈D, we have Xs / ∈Dn for sufficiently large n. But this is obvious.
2 Next we show how Green capacities can be expressed in terms of random sets. Let χ denote the identity mapping on FD. Given any measure ν on FD \ {∅} with ν{χK ̸= ∅} < ∞for all K ∈KD, we may introduce a Poisson process η on FD{∅} with intensity measure ν, and form the associated random closed set ϕ = {F; η{F} > 0} in D. Letting πν be the distribution of ϕ, we note that πν{χK = ∅} = P η{χK ̸= ∅} = 0 = exp −ν{χK ̸= ∅} , K ∈KD.
Theorem 34.21 (Green capacities and random sets, Choquet) For any Green-ian domain D ⊂Rd, there exists a unique measure ν on FD \ {∅} such that CD K = ν{χK ̸= ∅} = −log πν{χK = ∅}, K ⊂Kr D.
Proof: Let ψ denote the path of Xζ in D. Choose sets Kn ↑D in Kr D with Kn ⊂Ko n+1 for all n, and put μn = μD Kn, ψn = ψKn, and χn = χKn. Define νp n = Px ψp ∈· , ψn ̸= ∅ μp(dx), n ≤p, (14) and conclude by the strong Markov property and Proposition 34.15 that νq n χp ∈· , χm ̸= ∅ = νp m, m ≤n ≤p ≤q.
(15) 34. PDE Connections and Potential Theory 789 By Corollary 8.22, there exist some measures νn on FD, n ∈N, with νn{χp ∈·} = νp n, n ≤p, (16) and from (15) we note that νn · , χm ̸= ∅ = νm, m ≤n.
(17) Hence, the measures νn agree on {χm ̸= ∅} for n ≥m, and so we may define ν = supn νn. By (17) we have ν{·, χn ̸= ∅} = νn for all n. Assuming K ∈Kr D with K ⊂Ko n, we conclude from (14), (16), and Proposition 34.15 that ν{χK ̸= ∅} = νn{χK ̸= ∅} = νn n{χK ̸= ∅} = Px ψnK ̸= ∅ μn(dx) = Px τK < ζ μn(dx) = CD K.
The uniqueness of ν is clear by a monotone-class argument.
2 We next explore the connection between alternating set functions and random closed sets.
As in Chapter 23, we then fix an lcscH space S with Borel σ-field S and classes G, F, K of open, closed, and compact sets. Write ˆ S = {B ∈S; ¯ B ∈K}, and say that a class U ⊂ˆ S is separating, if for any K ∈K and G ∈G with K ⊂G, there exists a set U ∈U with K ⊂U ⊂G.
For any non-decreasing function h on a separating class U ⊂ˆ S, we define the associated inner and outer capacities ho and ¯ h by ho(G) = sup h(U); U ∈U, ¯ U ⊂G , G ∈G, ¯ h(K) = inf h(U); U ∈U, U o ⊃K , K ∈K.
Those formulas clearly remain valid with U replaced by any separating subclass.
For any random closed set ϕ in S, we introduce the associated hitting function h(B) = P{ϕB ̸= ∅} B ∈ˆ S.
Theorem 34.22 (alternating functions and random sets, Choquet) Let S be an lcscH space with classes G, F, K of open, closed, and compact sets, and let U ⊂ˆ S be separating and closed under finite unions. Then (i) the hitting function h of a random closed set in S is alternating with h = ¯ h on K and h = ho on G, (ii) for any alternating function p : U →[0, 1] with p(∅) = 0, there exists a random closed set with hitting function h, such that h = ¯ p on K and h = po on G.
First we clarify the algebraic part of the construction: 790 Foundations of Modern Probability Lemma 34.23 (discrete case) Let the class U ⊂ˆ S be finite and closed under unions, and consider an alternating function h : U →[0, 1] with h(∅) = 0.
Then there exists a point process ξ on S with P{ξU > 0} = h(U), U ∈U.
Proof: This is obvious when U = {∅}. Proceeding by induction, assume the statement to be true when U is generated by up to n −1 sets, and consider a class U generated by n non-empty sets B1, . . . , Bn. By scaling we may assume that h(B1 ∪· · · ∪Bn) = 1.
For any j ∈{1, . . . , n}, let Uj be the class of unions formed by the sets Bi \ Bj, i ̸= j, and define hj(U) = Δ Uh(Bj) = h(Bj ∪U) −h(Bj), U ∈Uj.
Then each hj is again alternating with hj(∅) = 0, and so the induction hypoth-esis yields a point process ξj on i Bi \ Bj with hitting function hj. Note that hj remains the hitting function of ξj on all of U. We further introduce a point process ξn+1 with P ∩i ξn+1Bi > 0 = (−1)n+1ΔB1,...,Bnh(∅).
For 1 ≤j ≤n + 1, write νj for the restriction of L(ξj) to the set Aj = i 0}, and put ν = Σj νj.
Choose ξ to be the canonical point process on S with distribution ν.
To see that ξ has hitting function h, we note that for any U ∈U and j ≤n, νj{μU > 0} = P ξjB1 > 0, . . . , ξjBj−1 > 0, ξjU > 0 = (−1)j+1ΔB1,...,Bj−1, U hj(∅) = (−1)j+1ΔB1,...,Bj−1, U h(Bj).
It remains to show that, for any U ∈U \ {∅}, Σ j ≤n (−1)j+1ΔB1,...,Bj−1, U h(Bj) + (−1)n+1ΔB1,...,Bnh(∅) = h(U), which is clear from the fact that ΔB1,...,Bj−1, U h(Bj) = ΔB1,...,Bj, U h(∅) + ΔB1,...,Bj−1, U h(∅).
2 Proof of Theorem 34.22: The direct assertion can be proved in the same way as Corollary 34.16. Conversely, let U and p be such as stated. By Lemma A6.3 we may take U to be countable, say U = {U1, U2, . . .}. For every n, let Un be the class of unions of sets from U1, . . . , Un. By Lemma 34.23 there exist some point processes ξ1, ξ2, . . . on S, such that P{ξnU > 0} = p(U), U ∈Un, n ∈N.
34. PDE Connections and Potential Theory 791 The space F is compact by Theorem A6.1, and so Theorem 23.2 yields a random closed set ϕ in S with supp ξn d →ϕ along a sub-sequence N ′ ⊂N.
Writing hn and h for the associated hitting functions, we get h(Bo) ≤lim inf n∈N′ hn(B) ≤lim sup n∈N′ hn(B) = h( ¯ B), B ∈ˆ S, and in particular, h(U o) ≤p(U) ≤h( ¯ U), U ∈U.
Using the stronger separation property K ⊂U o ⊂¯ U ⊂G, we may easily conclude that h = po on G and h = ¯ p on K.
2 Turning to a discussion of super-harmonic and excessive functions, fix a domain D ⊂Rd, and let Tt = T D t be the transition operators of a Brownian motion X in D, killed on the boundary ∂D. A function f ≥0 on D is said to be excessive, if Ttf ≤f for all t > 0 and Ttf →f as t →0. Then clearly Ttf ↑f. Note that if f is excessive, then f(X) is a super-martingale under Px for every x ∈D. The basic example of an excessive function is the Green potential GDν of a measure ν on a Greenian domain D, provided this potential is finite.
Though excessivity is defined globally in terms of the operators T D t , it is in fact a local property. For a precise statement, say that a measurable function f ≥0 on D is super-harmonic, if for any ball B in D with center x, the average of f over the sphere ∂B is bounded by f(x). As we shall see, it is enough to consider balls in D of radius less than an arbitrary ε > 0. Recall that f is lower semi-continuous if xn →x implies lim infn f(xn) ≥f(x).
Theorem 34.24 (super-harmonic and excessive functions, Doob) For any measurable function f ≥0 on a domain D ⊂Rd, these conditions are equivalent: (i) f is excessive, (ii) f is super-harmonic and lower semi-continuous.
Our proof is based on two lemmas, beginning with a relationship between the two continuity properties.
Lemma 34.25 (semi-continuity) Let f ≥0 be a measurable function on a domain D ⊂Rd, such that Ttf ≤f for all t > 0. Then these conditions are equivalent: (i) f is excessive, (ii) f is lower semi-continuous.
Proof: Assume (i), and let xn →x in D. By Theorem 34.7 and Fatou’s lemma, 792 Foundations of Modern Probability Ttf(x) = pD t (x, y) f(y) dy ≤lim inf n→∞ pD t (xn, y) f(y) dy = lim inf n→∞Ttf(xn) ≤lim inf n→∞f(xn), and as t →0 we get f(x) ≤lim infn f(xn), which implies (ii).
Next assume (ii). Using the continuity of X and Fatou’s lemma, we get as t →0 along an arbitrary sequence f(x) = Exf(X0) ≤Ex lim inf t→0 f(Xt) ≤lim inf t→0 Exf(Xt) = lim inf t→0 Ttf(x) ≤lim sup t→0 Ttf(x) ≤f(x).
Thus, Ttf →f, proving (i).
2 For smooth functions, the super-harmonic property is easily described.
Lemma 34.26 (smooth functions) For any function f ≥0 in C2 D, f is super-harmonic ⇔ Δf ≤0 ⇒ f is excessive.
Proof: By Itˆ o’s formula, the process Mt = f(Xt) −1 2 t 0 Δf(Xs) ds, t ∈[0, ζ), (18) is a continuous local martingale. Now fix any closed ball B ⊂D with center x, and write τ = τ∂B. Since Exτ < ∞, we get by dominated convergence f(x) = Exf(Xτ) −1 2 Ex τ 0 Δf(Xs) ds.
Thus, f is super-harmonic iffthe last expectation is ≤0, and the first assertion follows.
To prove the last implication, we note that the exit time ζ = τ∂D is pre-dictable, say with announcing sequence (τn). If Δf ≤0, we get from (18) by optional sampling Ex f(Xt∧τn); t < ζ ≤Exf(Xt∧τn) ≤f(x).
Hence, Fatou’s lemma yields Ex{f(Xt); t < ζ} = Ttf(x), and so f is excessive by Lemma 34.25.
2 Proof of Theorem 34.24: If f is excessive or super-harmonic, then so is f ∧n for every n > 0 by Lemma 34.25. The converse statement is also true, 34. PDE Connections and Potential Theory 793 by monotone convergence, and since the lower semi-continuity is preserved by increasing limits. Thus, we may henceforth assume that f is bounded.
Now assume (i). By Lemma 34.25 it is then lower semi-continuous, and it remains to prove that f is super-harmonic. Since the property Ttf ≤f is preserved as we pass to a sub-domain, we may take D to be bounded. For each h > 0, we define qh = h−1(f −Thf) and fh = GDqh. Since f and D are bounded, we have GDf < ∞, and so fh = h−1 h 0 Tsfds ↑f. By the strong Markov property, we further see that for any optional time τ < ζ, Exfh(Xτ) = ExEXτ ∞ 0 qh(Xs) ds = Ex ∞ 0 qh(Xs+τ) ds = Ex ∞ τ qh(Xs) ds ≤fh(x).
In particular, fh is super-harmonic for each h, and so by monotone convergence the same property holds for f.
Conversely, assume (ii). To prove (i), it suffices by Lemma 34.25 to show that Ttf ≤f for all t. Then fix a spherically symmetric probability density ψ ∈C∞(Rd) supported by the unit ball, and put ψh(x) = h−d ψ(x/h) for each h > 0. Writing ρ for the Euclidean metric in Rd, we define fh = ψh ∗f on the set Dh = {x ∈D; ρ(x, Dc) > h}. Note that fh ∈C∞(Dh) for all h, that fh is super-harmonic on Dh, and that fh ↑f. By Lemma 34.26 and monotone convergence, we conclude that f is excessive on each set Dh. Letting ζh be the first exit time from Dh, we obtain Ex f(Xt); t < ζh ≤f(x), h > 0.
As h →0 we have ζh ↑ζ, and hence {t < ζh} ↑{t < ζ}. By monotone convergence, we get Ttf(x) ≤f(x).
2 Now we prove the remarkable fact that, although an excessive function f may not be continuous, the super-martingale f(X) is a.s. continuous under Px for every x.
Theorem 34.27 (continuity, Doob) Let f be an excessive function on a do-main D ⊂Rd, and let X be a Brownian motion, killed upon hitting ∂D. Then the process f(Xt) is a.s. continuous on [0, ζ).
Our proof is based on the following invariance under time reversal, for a stationary version of Brownian motion. Though no such process exists in the usual probabilistic sense, we do have the following: Lemma 34.28 (time reversal, Doob) Let Px be the distribution of Brownian motion X in Rd starting at x, and put ¯ P = Px dx. Then for any c > 0, these processes have the same distribution under ¯ P: Y c t = Xt, ˜ Y c t = Xc−t, t ∈[0, c].
794 Foundations of Modern Probability Proof: Introduce the processes Bt = Xt −X0, ˜ Bt = Xc−t −Xc, t ∈[0, c], and note that B and ˜ B are Brownian motions on [0, c] under each Px. Fix any measurable function f ≥0 on C[0,c],Rd. By Fubini’s theorem and the invariance of Lebesgue measure, we get ¯ Ef(˜ Y c) = ¯ Ef X0 −˜ Bc + ˜ B = Exf x −˜ Bc + ˜ B dx = E0f x −˜ Bc + ˜ B dx = E0 f x −˜ Bc + ˜ B dx = E0 f(x + ˜ B) dx = Exf(Y c) dx = ¯ Ef(Y c).
2 Proof of Theorem 34.27: Since f ∧n is again excessive for each n > 0 by Theorem 34.24, we may assume that f is bounded. As in the proof of the same theorem, we may then approximate f by smooth, excessive functions fh ↑f on suitable subdomains Dh ↑D. Since fh(X) is a continuous super-martingale, up to the exit time ζh from Dh, Theorem 9.33 shows that f(X) is a.s. right-continuous on [0, ζ), under any initial distribution μ. Using the Markov property at rational times, we may extend the a.s. right-continuity to the random time set T = {t ≥0; Xt ∈D}.
To strengthen the result to a.s. continuity on T, we note that f(X) is right-continuous on T, a.e. ¯ P. By Lemma 34.28, it follows that f(X) is also left-continuous on T, a.e. ¯ P. Thus, f(X) is continuous on T, a.s. Pμ for arbi-trary μ ≪λd. Since Pμ ◦X−1 h ≪λd for any μ and h > 0, we conclude that f(X) is a.s. continuous on T ∩[h, ∞) for any h > 0. Combining with the right-continuity at 0, we obtain the asserted continuity on [0, ζ).
2 If f is excessive, then f(X) is a super-martingale under Px for every x, and hence has a Doob–Meyer decomposition f(X) = M −A. It is remarkable that we can choose A to be a continuous additive functional (CAF) of X independent of x.
A similar situation was encountered in connection with Theorem 29.23.
Theorem 34.29 (excessive functions and additive functionals, Meyer) Let f be an excessive function on a domain D ⊂Rd, and let Px be the distribution of Brownian motion X in D, killed upon hitting ∂D. Then M = f(X) + A a.s. Px, x ∈D, for some a.s. unique processes A and M, where 34. PDE Connections and Potential Theory 795 (i) A is a CAF of X, (ii) M is a continuous local Px-martingale on [0, ζ), for every x ∈D.
The main difficulty of the proof is to construct a version of the process A that compensates −f(X) under every measure Pμ. Here the following lemma is helpful.
Lemma 34.30 (universal compensation) Consider an excessive function f on a domain D ⊂Rd, a distribution m ∼λd on D, and a Pm-compensator A of −f(X) on [0, ζ). Then for any distribution μ and constant h > 0, the process A ◦θh is a Pμ-compensator of −f(X ◦θh) on [0, ζ ◦θh).
In other words, the process Mt = f(Xt)+At−h ◦θh is a local Pμ-martingale on [h, ζ) for every μ and h.
Proof: For any bounded Pm-martingale M and initial distribution μ ≪m, we note that M is also a Pμ-martingale. To see this, write k = dμ/dm, and note that Pμ = k(X0) · Pm. It is equivalent to show that Nt = k(X0)Mt is a Pm-martingale, which is clear since k(X0) is F0 -measurable with mean 1.
Now fix any distribution μ and constant h > 0. To prove the stated property of A, it is enough to show that, for any bounded Pm-martingale M, the process Nt = Mt−h ◦θh is a Pμ-martingale on [h, ∞). Then fix any times s < t and sets F ∈Fh and G ∈Fs. Using the Markov property at h, and noting that Pμ ◦X−1 h ≪m, we get Eμ Mt ◦θh; F ∩θ−1 h G = Eμ EXh(Mt; G); F = Eμ EXh(Ms; G); F = Eμ Ms ◦θh; F ∩θ−1 h G .
Hence, a monotone-class argument yields Eμ Mt ◦θh Fh+s = Ms ◦θh a.s. 2 Proof of Theorem 34.29: Let Aμ be the Pμ-compensator of −f(X) on [0, ζ), and note that Aμ is a.s. continuous, e.g. by Theorem 19.11. Fix any distribution m ∼λd on D, and conclude from Lemma 34.30 that Am◦θh is a Pμ-compensator of −f(X ◦θh) on [0, ζ ◦θh), for any μ and h > 0. Since this remains true for the process Aμ t+h −Aμ h, we get for any μ and h > 0 Aμ t = Aμ h + Am t−h ◦θh, t ≥h, a.s. Pμ.
(19) Restricting h to the positive rationals, we define At = lim h→0 Am t−h ◦θh, t > 0, whenever the limit exists and is continuous and non-decreasing with A0 = 0, and put A = 0 otherwise. By (19) we have A = Aμ a.s. Pμ for every μ, and so A is a Pμ-compensator of −f(X) on [0, ζ) for every μ. For each h > 0, it follows 796 Foundations of Modern Probability by Lemma 34.30 that A ◦θh is a Pμ-compensator of −f(X ◦θh) on [0, ζ ◦θh), and since this is also true for the process At+h −Ah, we get At+h = Ah +At ◦θh a.s. Pμ. Thus, A is a CAF.
2 We can now establish a probabilistic version of the classical Riesz decom-position. To avoid some obscuring technicalities, we restrict our attention to locally bounded functions f. By the greatest harmonic minorant of f we mean a harmonic function h ≤f, dominating all other such functions. Recall that the potential UA of a CAF A of X is given by UA(x) = ExA∞.
Theorem 34.31 (Riesz decomposition) Let f ≥0 be a locally bounded func-tion on a domain D ⊂Rd, and let X be a Brownian motion in D, killed upon hitting ∂D. Then these conditions are equivalent: (i) f is excessive, (ii) f = UA + h for a CAF A of X and a harmonic function h ≥0.
In that case, (iii) A is the compensator of −f(X), (iv) h is the greatest harmonic minorant of f.
A similar result for uniformly α -excessive functions of an arbitrary Feller process was obtained in Theorem 29.23. From the classical Riesz representa-tion on Greenian domains, we know that UA may also be written as the Green potential of a unique measure νA, so that f = GDνA + h. In the special case where D = Rd with d ≥3, we recall from Theorem 29.21 that νAB = ¯ E(1B·A)1.
A similar representation holds in the general case.
Proof of Theorem 34.31: First let A be a CAF with UA < ∞. By the additivity of A and the Markov property of X, we get for any t > 0 UA(x) = ExA∞= Ex At + A∞◦θt = ExAt + ExEXtA∞ = ExAt + TtUA(x).
By dominated convergence, ExAt ↓0 as t →0, and so UA is excessive. Even UA + h is then excessive for any harmonic function h ≥0.
Conversely, let f be excessive and locally bounded. By Theorem 34.29 there exists a CAF A, such that M = f(X) + A is a continuous local martingale on [0, ζ). For any localizing and announcing sequence τn ↑ζ, we get f(x) = ExM0 = ExMτn = Exf(Xτn) + ExAτn ≥ExAτn.
As n →∞, we obtain UA ≤f by monotone convergence.
34. PDE Connections and Potential Theory 797 By the additivity of A and the Markov property of X, we have Ex A∞ Ft = At + Ex A∞◦θt Ft = At + EXtA∞ = Mt −f(Xt) + UA(Xt).
(20) Writing h = f −UA, it follows that h(X) is a continuous local martingale.
Since h is locally bounded, we conclude by optional sampling and dominated convergence that h has the mean-value property.
Thus, h is harmonic by Lemma 34.3.
To prove the uniqueness of A, suppose that f has also a representation UB + k for some CAF B and harmonic function k ≥0. Proceeding as in (20), we get At −Bt = Ex A∞−B∞ Ft + h(Xt) −k(Xt), t ≥0, which shows that A −B is a continuous local martingale. Hence, Proposition 18.2 yields A = B a.s.
To prove (iv), consider any harmonic minorant k ≥0. Since f −k is again excessive and locally bounded, it has a representation UB + l for some CAF B and harmonic function l. But then f = UB + k + l, and so A = B a.s. and h = k + l ≥k.
2 For a sufficiently regular measure ν on Rd, we may construct an associated CAF A of Brownian motion X, such that A increases only when X visits the support of ν. This clearly extends the notion of local time. For convenience we may write GD(1Dν) = GDν.
Proposition 34.32 (additive functional induced by measure) Let the measure ν on Rd be such that U(1D ν) is bounded for every bounded domain D. Then (i) there exists a CAF A of Brownian motion X, such that for any D ExA ζD = GDν(x), x ∈D, (ii) ν and A determine each other a.s. uniquely, (iii) supp A ⊂ t ≥0; Xt ∈supp ν a.s.
The proof is straightforward, given the classical Riesz decomposition, and we indicate only the main steps.
Proof: (i)−(ii) A simple calculation shows that GDν is excessive for any bounded domain D. Since GDν ≤U(1Dν), it is also bounded. Hence, Theorem 34.31 yields a CAF AD of X on [0, ζD) and a harmonic function hD ≥0, such that GDν = UAD + hD. In fact, hD = 0 by Riesz’ theorem.
Now consider another bounded domain D′ ⊃D. We claim that GD′ν−GDν is harmonic on D. This is clear from the analytic definitions, and it also fol-lows, under a regularity condition, from Lemma 34.13. Since AD and AD′ are 798 Foundations of Modern Probability compensators of −GDν(X) and −GD′ν(X), respectively, AD −AD′ is a mar-tingale on [0, ζD), and so AD = AD′ a.s. up to time ζD. Now choose some bounded domains Dn ↑Rd, and define A = supn ADn, so that A = AD a.s. on [0, ζD) for all D. It is easy to see that A is a CAF of X, and that (i) holds for any bounded domain D. The uniqueness of ν is clear from the uniqueness in the classical Riesz decomposition.
(iii) Note that GDν is harmonic on D\supp ν for every D, so that GDν(X) is a local martingale on the predictable set t < ζD; Xt ̸∈supp ν .
2 Exercises 1. For any domain D ⊂R2 and point x ∈∂D, let x ∈I ⊂Dc for a line segment I.
Show that x is regular for Dc. (Hint: Consider the windings around x of a Brownian motion starting at x, using the strong Markov property and Brownian scaling.) 2. Compute the Newtonian potential kernel g = gD when D = Rd with d ≥3, and check by direct computation that g(x, y) is harmonic in x ̸= y for fixed y.
3. For any domain D ⊂Rd, show that p t(x, y)−pD t (x, y) →0 as t →0, uniformly for x ̸= y in any compact set K ⊂D. Also prove the same convergence as inf{|x|; x / ∈D} →∞, uniformly for bounded t > 0 and x ̸= y. (Hint: Note that p t(x, y) is uniformly bounded for |x −y| > ε > 0, and use (7).) 4. For a domain D ⊂Rd with d ≥3, show that g(x, y) −gD(x, y) is uniformly bounded for x ̸= y in a compact set K ⊂D. Also show that the difference tends to 0 as inf{|x|; x / ∈D} →∞, uniformly for x ̸= y in K. (Hint: Use Lemma 34.13.) 5. Show that the equilibrium measure μD K is restricted to the outer boundary of K and agrees for all sets K with the same outer boundary. (Here the outer boundary of K consists of all points x ∈∂K that can be connected to Dc or ∞by a path through Kc.) Prove a corresponding statement for the sweeping measure ν in Theorem 34.12.
6. For any Greenian domain D ⊂Rd, disjoint sets K1, . . . , Kn ∈Kr D, and con-stants p1, . . . , pd ∈R, prove the existence of a unique signed measure ν on j Kj with GDν = pj on Kj for all j. (Hint: Use Corollary 34.17 recursively.) 7. Let ϕ1⊥ ⊥ϕ2 be random sets with distributions πν1, πν2. Show that ϕ1 ∪ϕ2 has distribution πν1+ν2.
8. Extend Theorem 34.22 to unbounded functions p. (Hint: Consider the restric-tions to compact sets, and proceed as in Theorem 34.21.) 9.
For a second degree polynomial f(x1, . . . , xd), find necessary and sufficient conditions for f to be super-harmonic. (Hint: First discard the constant and first order terms. Then diagonalize to reduce to a linear combination in the squares x2 k.) 10. Say that f is sub-harmonic if −f is super-harmonic. Show that every smooth, convex function is sub-harmonic. Then show that every positive linear combination of sub-harmonic functions is again sub-harmonic.
Finally, give an example of a sub-harmonic function that is not convex. (Hint: Use the result of the previous exercise.) 11. Show that there is no distribution ν on Rd such that, under Pν, a Brownian motion Xt and its time reversal Yt = X1−t have the same distribution on [0, 1]. (Hint: 34. PDE Connections and Potential Theory 799 Note that X1 has distribution ν ∗μ, where μ is the standard normal distribution. To see that the equation ν ∗μ = ν has no solution, extend by iteration to ν ∗μ∗n = ν, and let n →∞.) 12. Show that a Gaussian process is time-reversible iffit is stationary. Then extend this result to any mixture of Gaussian processes. (Hint: Use Lemma 14.1.) Chapter 35 Stochastic Differential Geometry Semi-martingales, bilinear forms, pull-back, covariation process, semi-martingale integral, connection, chain rule, differential operators and Christoffel symbols, martingale criteria, induced connections, affine and convex maps, geodesics and martingale criteria, local drift and diffusion rates, sub-manifolds and projection, diffusions, Riemannian metric and connection, isotropic martingales, Brownian motion, harmonic maps, L´ evy processes in Lie groups Just as differential geometry deals with the geometric properties of smooth curves and other objects in a differential manifold S, its stochastic counterpart involves a study of sufficiently regular processes in S, which may include the development of a relevant stochastic calculus. We may think of the manifold as a smooth surface of finite dimension n, embedded in a higher-dimensional Euclidean space.
Though such an embedding always exists, it is far from unique and may even allow some smooth bending and stretching. Thus, we insist that all objects on S are described intrinsically in terms of the local properties of the surface itself. Indeed, though the differential structure of S can be described in terms of some local coordinates in a small neighborhood of an arbitrary point, the actual choice of coordinates is fairly arbitrary, and no properties are allowed to depend on any specific choice.
When developing stochastic calculus in a Euclidean space, we may start with Brownian motion, then proceed to continuous martingales, via a stochas-tic integration or random time change, and eventually reach the general semi-martingales via Itˆ o’s formula. In a general differential manifold S we must proceed in the opposite direction, since there is no natural way to define a Brownian motion or some more general martingales in S. Indeed, to define martingales we need to endow S with a connection ∇, and for the notion of Brownian motion in S we need even a Riemannian metric ρ. Following such a program leads to an amazingly rich and beautiful theory, which has the fur-ther advantage of providing some deeper insight into the classical notions of martingales and stochastic calculus.
In this chapter, we consider first the class of general semi-martingales in S, along with their covariation processes. We then introduce connections on S, and study the associated classes of martingales and geodesics. Finally, we con-sider a general Riemannian metric with associated canonical connection, which allow us to define a Brownian motion and related processes in S. Though no semi-martingale decomposition is possible in an abstract manifold S, owing to the lack of any additive structure, we may still define some local characteristics 801 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 802 Foundations of Modern Probability of a semi-martingale in S, describing the intrinsic drift and diffusion rates of the process. In particular, a martingale is then a process with drift 0.
Though the mentioned notions are all intrinsic, they can be justified by some natural embedding and projection properties.
Then recall that every differential manifold S can be embedded as a smooth sub-manifold of a suit-able Euclidean space. For a Riemannian manifold S, we can even choose the embedding to preserve the Riemannian structure.1 In both cases, the relevant martingale and rate properties agree with the corresponding extrinsic notions.
The same thing is true for the appropriate projection properties.
Though the reader is assumed to be at least a vaguely familiar with dif-ferential geometry, some basic notions and results are reviewed here and in Appendix 7. Write S for the class of smooth functions f, g, . . . on S, and let TS be the space of smooth (tangent) vector fields u, v, . . . on S. A form is a smooth function α on the dual space of cotangent vectors T ∗ S, and a bilinear form is a smooth function b on the dual (T ∗ S)⊗2 of the product space T 2 S. Any simple or bilinear forms can be written as finite sums2 α = a i df i, b = b ij (df i ⊗dgj), for suitable a i, b ij, f i, gj ∈S, where df(u) = uf, and we note that in local coordinates df = (∂if)(x) dxi. Further note that (α ⊗β)(u, v) = (αu)(βv).
For any smooth mapping ϕ: S →S′, the push-forward ϕ ◦u is a mapping from TS to TS′, given by3 (ϕ ◦u)f = u(f ◦ϕ), f ∈S′, and the dual pull-back ϕ∗or ϕ∗2 of a simple or bilinear form on S′ into a corresponding form on S is defined by (ϕ∗α)u = α(ϕ ◦u), (ϕ∗2b)(u, v) = b(ϕ ◦u, ϕ ◦v).
(1) In particular, we note that ϕ∗2(α ⊗β) = (ϕ∗α)(ϕ∗β), (ϕ∗d f)u = (ϕ ◦u)f = u(f ◦ϕ) = d(f ◦ϕ)u.
A continuous4 process X in a manifold S is called a semi-martingale, if f(X) is a real semi-martingale for every smooth function f on S. A fundamental role is played by the covariation process5 b [X], defined as follows for every bilinear form b on S.
1Those are the celebrated embedding theorems of Whitney and Nash.
2Note that the coefficients are functions, not constants.
3To ease the access for probabilists, we often use a simplified notation.
Traditional versions of the stated formulas are sometimes inserted, here or in Appendix 7.
4usually omitted, since all processes in this chapter are assumed to be continuous 5b [X] is often written as b( dX, dX) 35. Stochastic Differential Geometry 803 Theorem 35.1 (covariation process) Let X be a semi-martingale in a smooth manifold S. Then for any bilinear form b there exists a finite-variation process b [X], depending linearly on b, such that a.s. for smooth functions f, g and maps ϕ: S →S′, (i) (df ⊗dg)[X] = [f(X), g(X)], (ii) (fb)[X] = f(X) · b [X], (iii) (ϕ∗2b)[X] = b [ϕ(X)], (iv) b is symmetric, non-negative definite ⇒b [X] is non-decreasing.
The map b →b [X] is a.s. uniquely determined by (i)−(ii).
(In traditional notation, (ii) and (iii) may be written as (fb) ( dX, dX) = (f ◦X) d b( dX, dX) , S (T ∗ϕ ⊗T ∗ϕ) b( dX, dX) = S′ b d(ϕ ◦X), d(ϕ ◦X) .) Proof: Fix a local chart with coordinates x1, . . . , xn, and consider any bilin-ear form b, represented as a finite sum bij(dϕi ⊗dϕj) in terms of some smooth functions bij and ϕi. If b [X] exist with the stated properties, then by (i)−(ii), Itˆ o’s formula, and the chain rule for Stieltjes integrals, b [X] = bij(X) · (dϕi ⊗dϕj)[X] = bij(X) · [ϕi(X), ϕj(X)] = bij(X) · ∂hϕi(X) ∂kϕj(X) · [Xh, Xk] = bij(∂hϕi)(∂kϕj) (X) · [Xh, Xk].
(2) To see that b [X] is well-defined by this expression, we need to show that the right-hand side vanishes a.s. when b = 0. Using the coordinate representation dϕ = ∂iϕ(x) dxi, we obtain b = bij(dϕi ⊗dϕj) = bij(x) ∂hϕi(x) dxh ⊗ ∂kϕj(x) dxk = bij(∂hϕi)(∂kϕj) (x) dxh ⊗dxk .
If b = 0, we get bij(∂hϕi)(∂kϕj) = 0 for all h and k, and (2) yields b[X] = 0 a.s.
With b[X] as in (2), property (i) becomes a special case, whereas (ii) follows from (2) if we use the same chain rule to write (fb)[X] = fbij (dϕi ⊗dϕj) [X] = (f bij)(X) · [ϕi(X), ϕj(X)] = f(X) · bij(X) · [ϕi(X), ϕj(X)] = f(X) · b[X].
804 Foundations of Modern Probability (iii) By the linearity of both sides, it is enough to take b = f(dg ⊗dh) for some smooth functions f, g, h. Noting that ϕ∗2 f(dg ⊗dh) = (f ◦ϕ)(ϕ∗dg ⊗ϕ∗dh) = (f ◦ϕ) d(g ◦ϕ) ⊗d(h ◦ϕ) , we get by (i)−(ii) (ϕ∗2b)[X] = ϕ∗2 f(dg ⊗dh) [X] = (f ◦ϕ) d(g ◦ϕ) ⊗d(h ◦ϕ) [X] = (f ◦ϕ)(X) · g ◦ϕ(X), h ◦ϕ(X) = f(ϕ ◦X) · g(ϕ ◦X), h(ϕ ◦X) = f(dg ⊗dh) [ϕ ◦X] = b[ϕ ◦X].
(iv) In local coordinates, we may write b as a finite sum bij(x)(dxi ⊗dxj).
Letting A = (aij) be the positive square root of the matrix B = (bij), so that bij = Σ k aikajk, we get by (i)−(ii) b [X] = bij(X) · [Xi, Xj] = Σ k aik(X) · Xi, ajk(X) · Xj ≥0.
2 Given a semi-martingale X in a differentiable manifold S, along with a first order form α = fi dhi on S, we define the integral ⟨α, X⟩of α along X as the real semi-martingale6 ⟨α, X⟩= fi(X) ◦d (hi ◦X), (3) where Y ◦dX = Y ◦X denotes the Fisk–Stratonovich integral of Y with respect to X. Note that we must use FS rather than Itˆ o integrals to ensure independence of the choice of local coordinates7. This is shown in statement (i) below, along with some further basic properties.
Theorem 35.2 (semi-martingale integral) For a semi-martingale X and a first order form α in a manifold S, define ⟨α, X⟩by (3). Then a.s.
(i) ⟨α, X⟩is independent of the representation of α, (ii) ⟨df, X⟩t = f(Xt) −f(X0), (iii) ⟨fα, X⟩= f(X) ◦⟨α, X⟩, (iv) ⟨α, X⟩, ⟨β, X⟩ = (α ⊗β)[X].
6Also written as ⟨dα, dX⟩= ⟨α, δX⟩. Summation over repeated indices is understood.
Our circle notation must not be confused with the composition of functions.
7Even in the extrinsic case we must use FS -integrals, but then for a different reason: to ensure that the resulting process will stay in the manifold.
35. Stochastic Differential Geometry 805 Proof: (i) By linearity, it is enough to show that fi dhi = 0 implies ⟨fi dhi, X⟩ = 0 a.s. In local coordinates, we may write X = (X1, . . . , Xn) and note that fi ∂khi = 0 for all k. Using Theorem 18.21 (i)−(ii), we get fi(X) ◦d (hi(X)) = fi(X) ◦ ∂khi(X) ◦dXk = fi(X) ∂khi(X) ◦dXk = fi ∂khi (X) ◦dXk = 0.
(ii) Take α = 1 · d f in (3).
(iii) Using (3) and Theorem 18.21 (ii), we get for α = gi dhi ⟨fα, X⟩= fgi(X) ◦d (hi(X)) = f(X) ◦ gi(X) ◦d (hi(X)) = f(X) ◦d ⟨α, X⟩.
(iv) Assuming α = fi dhi and β = gj dhj, and using (3) and Theorem 35.1 (i)−(ii), along with some basic properties of the covariation, we get ⟨α, X⟩, ⟨β, X⟩ = fi(X) · hi(X), gj(X) · hj(X) = fi gj(X) · hi(X), hj(X) = fi gj(X) · dhi ⊗dhj [X] = fi gj dhi ⊗dhj [X] = fi dhi ⊗gj dhj [X] = (α ⊗β)[X].
2 −−− Before proceeding to the intrinsic theory of martingales in a differentiable man-ifold, we insert a somewhat informal discussion8 of semi-martingales in a smooth sub-manifold S of a Euclidean space Rn, motivating the general results in Theorems 35.8, 35.16−35.17, and 35.20. We may then define a martingale in S as an S-valued semi-martingale X in Rn with canonical decomposition M + A, where locally for Xt = x ∈S, the martingale increment dMt is restricted to the tangent space Tx, whereas the compensating part dAt is perpendicular to Tx. The role of the drift term A is to ensure that the process X will stay in the manifold.
Given a semi-martingale X in Rn, we define the projection ˆ X = πS(X) onto S starting at x ∈S as the unique strong solution to the Fisk–Stratonovich SDE ˆ Xi t = xi + t 0 P i j( ˆ Xs) ◦dXj s, t ≥0, i ≤n, 8This extrinsic discussion is included mainly for motivation; it can be skipped by purist or hurried readers. The intrinsic theory is resumed after the statement of Lemma 35.3.
806 Foundations of Modern Probability where P i j(x) denotes the projection9 of the j-th unit vector in Rn onto the tangent space TxS. Note that we must use FS rather than Itˆ o integrals to ensure that the solution ˆ X will stay in the manifold. It is suggestive to use the shorthand notation d ˆ Xt = PT ( ˆ Xt) ◦dXt = PT ( ˆ Xt) dXt −1 2 d QT ( ˆ X), X t, where PT (x) and QT (x) denote the complementary projections onto the tangent space Tx and its orthogonal complement T ⊥ x . Here the second expression is the Itˆ o form of the same projection, which shows that if X is a local martingale in Rn, then ˆ X is a martingale in S. The two covariance processes are related by d[ ˆ X, ˆ X]t = d PT ( ˆ X) · X, PT ( ˆ X) · X t = PT ( ˆ Xt) d[X, X]t PT ( ˆ Xt) = P 2 T ( ˆ Xt) d[X, X]t, which exhibits the covariance process ˆ Ct = [ ˆ X, ˆ X]t as the tangential projection of the process Ct = [X, X]t. It is suggestive to write the latter relation in the form [ ˆ X, ˆ X] = [X, X]S, or simply as ˆ C = CS.
When X is a Brownian motion in Rn, we have Ct = tI for the identity matrix I in Rn, and so [ ˆ X, ˆ X]t = P 2 T ( ˆ X)I · λ, or simply d ˆ Ct = IS dt, where the projection IS may be regarded as the identity matrix in S. Here ˆ X is called a Brownian motion in S. More generally, if X is an isotropic local martingale in Rn, so that d[X, X]t = d[X]tI for a non-decreasing rate process [X] in R+, the projection ˆ X is again isotropic, in the sense that d ˆ C = d[X]tIS.
The latter process may clearly be time-changed into a Brownian motion, just as in the Euclidean case of Theorem 19.4.
Our somewhat informal, extrinsic discussion is summarized below. The stated results, of some independent interest, will justify our intrinsic definitions of drift and diffusion rates in Theorem 35.15.
Lemma 35.3 (projection onto sub-manifold) Let X be a semi-martingale in Rn, with projection ˆ X onto a smooth sub-manifold S ⊂Rn. Then (i) ˆ X is a semi-martingale in S, (ii) X is a local martingale in Rn ⇒ ˆ X is a martingale in S, (iii) X is an isotropic local martingale in Rn ⇒ ˆ X is isotropic in S, (iv) X is a Brownian motion in Rn ⇒ ˆ X is a Brownian motion S.
−−− Returning to the intrinsic theory, we define martingales in a smooth mani-fold S in terms of a connection, defined as a linear map ∇of smooth functions f on S into symmetric, bilinear forms ∇f, satisfying10 ∇f 2 = 2 f ∇f + 2 (d f)⊗2, f ∈S.
(4) 9To be precise, we need to extend P(x) to a neighborhood of S, since the equation must first be stated, before we can argue that the solution process ˆ X stays in S. This is similar to the extension needed in Theorem 35.8 below.
10Connections are sometimes called Hessians and denoted by ∇= Hess.
35. Stochastic Differential Geometry 807 Given a connection ∇, we define an associated martingale in S as a semi-martingale X satisfying f(X) m = 1 2 ∇f[X], f ∈S, (5) where X m = Y in R means that X −Y is a local martingale.
For motivation, we state a couple of elementary results. Further justifica-tions are given by Theorems 35.8 and 35.16−35.17 below, which show how the intrinsic martingales and projection properties agree with the corresponding extrinsic notions and results when S is a smooth sub-manifold of Rn.
Lemma 35.4 (connections and martingales) (i) A continuous local martingale X in Rn satisfies (4)−(5) with (∇f) ij = ∂2 ijf, i, j ≤n, f ∈S.
(ii) For a smooth manifold S, let ∇be a linear map from S to the class of symmetric, bilinear maps on S. Then for any semi-martingale X in S satisfying (5), ∇f 2[X] = 2 f ∇f[X] + 2 (d f)⊗2[X] a.s., f ∈S.
Proof: (i) Equation (4) is obtained by repeated use of the chain rule for differentiation, whereas (5) holds by Itˆ o’s formula.
(ii) Applying (5) to both f and f 2 and using Theorem 35.1 (i)−(ii), along with Theorems 18.11 and 18.16 for real semi-martingales, we get 1 2 ∇f 2[X] m = (f ◦X)2 = 2 f(X) · (f ◦X) + [f ◦X] m = f(X) · ∇f[X] + (d f)⊗2[X] = f ∇f[X] + (d f)⊗2[X].
Since both sides have locally finite variation and start at 0, the assertion fol-lows by Proposition 18.2.
2 The defining property (4) of connections extends to a general chain rule: Theorem 35.5 (chain rule) Let S be a smooth manifold with connection ∇.
Then for any smooth functions f : S →Rn and ϕ: Rn →R, ∇(ϕ ◦f) = ∂iϕ ◦f ∇f i + ∂2 ijϕ ◦f d f i ⊗d f j .
Proof: For any vector fields u, v, define a linear operator w on S by wf = uvf −∇f(u, v), f ∈S.
(6) 808 Foundations of Modern Probability Using (4), (6), and the product rule for u, v, we get 1 2 wf 2 = 1 2 uvf 2 −1 2 ∇f 2(u, v) = u(f vf) −f ∇f(u, v) −(d f)⊗2(u, v) = f uvf + uf vf −f ∇f(u, v) −uf vf = f uvf −∇f(u, v) = f wf, and so Lemma A7.1 shows that w is a smooth vector field. Using (6) and the chain rule for u, v, w, we obtain ∇(ϕ ◦f)(u, v) = uv(ϕ ◦f) −w(ϕ ◦f) = u (∂iϕ ◦f) vf i −(∂iϕ ◦f) wf i = ∂2 ijϕ ◦f (uf j)(vf i) + (∂iϕ ◦f) uvf i −wf i = ∂2 ijϕ ◦f d f i ⊗d f j (u, v) −(∂iϕ ◦f) ∇f i(u, v), and the assertion follows since u, v were arbitrary.
2 In local coordinates, a connection on S is simply a special second order differential operator.
Theorem 35.6 (connections as differential operators) For a smooth manifold S, let ∇be a linear map of any f ∈S into a symmetric, bilinear form ∇f.
Then these conditions are equivalent: (i) ∇is a connection on S, (ii) in local coordinates there exist some smooth functions Γ k ij = Γ k ji, i, j, k ≤ n, such that (∇f)ij = ∂2 ij −Γ k ij ∂k f, f ∈S.
The functions Γ k ij, known as the Christoffel symbols of ∇, depend in a sub-tle way11 on the choice of local coordinates. When S = Rn with Cartesian coordinates, we may choose Γ k ij ≡0 to form the flat connection ∇n, first en-countered in Lemma 35.4 (i).
Proof, (i) ⇒(ii): Since the operator ∇is local by (6), it is enough to consider the representation of ∇f on a local chart U ⊂S. Here any smooth function f : U →R has a representation f = ˜ f ◦ϕ, for a smooth function ˜ f : Rn →R and some local coordinates ϕ1, . . . , ϕn on U. The bilinear forms ∇ϕk may be written uniquely12 as ∇ϕk = −Γk ij dϕi ⊗dϕj , i, j, k ≤n, (7) 11In particular, they do not transform as tensors.
12The minus sign is traditional in geometry and has no special significance for us.
35. Stochastic Differential Geometry 809 for some smooth functions Γk ij. By Theorem 35.5, ∇f = ∇( ˜ f ◦ϕ) = ∂k ˜ f ◦ϕ ∇ϕk + ∂2 ij ˜ f ◦ϕ dϕi ⊗dϕj = ∂2 ij ˜ f ◦ϕ −Γk ij ∂k ˜ f ◦ϕ dϕi ⊗dϕj = ∂2 ij ˜ f −Γk ij∂k ˜ f ◦ϕ dϕi ⊗dϕj , which proves the asserted relation in the more precise form (∇f)ij = ∂2 ij −Γk ij∂k ˜ f ◦ϕ.
(ii) ⇒(i): In Lemma 35.4 (i) we saw that the operator (∇nf)ij = ∂2 ijf on S satisfies (4). It remains to note that ∂kf 2 = 2f ∂kf for all k ≤n, by the chain rule for differentiation.
2 This yields a simple description of martingales in local coordinates.
Theorem 35.7 (martingale criteria) Let X be a semi-martingale in a smooth manifold S with connection ∇, fix some local coordinates f 1, . . . , f n in S, and put Xi = f i(X). Then these conditions are equivalent: (i) X is a ∇-martingale, (ii) f i(X) m = 1 2 ∇f i[X], i ≤n, (iii) Xk m = −1 2 Γk ij(X) · [Xi, Xj], k ≤n.
Proof, (i) ⇒(ii): Use (5).
(ii) ⇔(iii): By Theorems 35.1 (i) and 35.6, we have ∇f k[X] = −Γk ij(X) · d f i ⊗d f j [X] = −Γk ij(X) · f i ◦X, f j ◦X = −Γk ij(X) · [Xi, Xj].
(ii) ⇒(i): By Itˆ o’s formula, condition (ii), and Theorems 35.1 (i)−(ii) and 35.5, we get for any smooth function ϕ on Rn 2 ϕ ◦f(X) m = 2 (∂iϕ ◦f)(X) · Xi + ∂2 ijϕ ◦f (X) · [Xi, Xj] m = (∂iϕ ◦f)(X) · ∇f i[X] + ∂2 ijϕ ◦f (X) · d f i ⊗d f j [X] = (∂iϕ ◦f)∇f i[X] + ∂2 ijϕ ◦f d f i ⊗d f j [X] = ∇(ϕ ◦f)[X].
Hence, (5) remains true for ϕ ◦f, and (i) follows.
2 For any local martingales M i in Rn, Theorem 35.7 (iii) defines an SDE in X, possessing a unique solution in U up to an explosion time. This yields a large class of martingales associated with every connection.
The flat connection ∇ n on Rn induces a connection ∇ S on every smooth sub-manifold S. We proceed to describe ∇ S and characterize the associated martingales. Note the consistency with the extrinsic results in Lemma 35.3.
810 Foundations of Modern Probability Theorem 35.8 (induced connection on sub-manifold) For a smooth sub-mani-fold S ⊂Rn, let U be a neighborhood of S with a unique, smooth orthogonal projection πS onto S, and write ϕ for the embedding S →Rn. Then (i) ∇n induces a connection ∇ S on S, given by ∇ S f = ϕ∗2 ∇n(f ◦πS), f ∈SS, (ii) a semi-martingale X in S is a ∇ S-martingale, iffX m = H · C in Rn for a non-decreasing, continuous process C and a bounded, measurable process H in Rn satisfying Ht ⊥TXt, t ≥0.
Proof: (i) To see that ∇ S is a connection, fix any f ∈SS with an orthogonal extension ˜ f to U satisfying ˜ f = f ◦πS, and put ¯ u = ϕ◦u and ¯ v = ϕ◦v. Then 1 2 ∇ S f 2(u, v) = 1 2 ∇n ˜ f 2(¯ u, ¯ v) = ˜ f ∇n ˜ f(¯ u, ¯ v) + (¯ u ˜ f)(¯ v ˜ f) = f ∇ S f(u, v) + (uf)(vf).
(ii) For an S-valued semi-martingale X m = A in Rn, we get by Itˆ o’s formula, Theorems 35.1 (iii) and 35.6, and part (i) f(X) = ˜ f(X) m = ∂i ˜ f(X) · Xi + 1 2 ∂2 ij ˜ f(X) · [Xi, Xj] m = grad ˜ f(X) · A + 1 2 ϕ∗2 ∇n ˜ f[X] = grad ˜ f(X) · A + 1 2 ∇Sf[X].
Hence, X is a ∇ S -martingale iffgrad ˜ f ·A = 0 a.s. for all f, which clearly holds under the stated condition on A.
Conversely, let grad ˜ f(X)·A = 0 a.s. for all f ∈SS. Writing C = Σ i |dAi|, we get A = Σ i Hi ·C for some bounded, predictable processes Hi, and f being arbitrary, we obtain Hi t ⊥TXtS for C -a.e. t. Since modifying the Hi on a C-null set will not affect the sum Σ i Hi · C, we may assume that the stated orthogonality holds identically.
2 A smooth mapping ϕ between two manifolds S, S′ with connections ∇S, ∇S′ is said to be affine if it commutes with the two connections, in the sense that ∇S(f ◦ϕ) = ϕ∗2(∇S′f), f ∈SS′.
(8) In particular, a function f ∈S is affine if it is so as a mapping from (S, ∇) into R with the flat connection, so that ∇f = 0.
A geodesic in (S, ∇) is defined as an affine mapping γ : I →S, for a real interval I with the flat connection. It describes the motion of a particle along a path in S, not to be confused with the path itself. We may characterize geodesics by a differential equation, here given both globally and in local co-ordinates.
35. Stochastic Differential Geometry 811 Lemma 35.9 (geodesics) For a manifold S with connection ∇and an interval I ⊂R, consider a smooth map γ : I →S. Then γ is a geodesic, ifffor every t ∈I, (i) ∂2(f ◦γt) = ∇f(˙ γt, ˙ γt), f ∈S, (ii) ¨ γk t = − Γk ij ◦γt ˙ γi t ˙ γj t , i, j, k ≤n.
Proof: (i) Apply (8) to pairs of tangent vectors ∂= d/dt in TtI.
(ii) Apply (i) to the coordinate functions on a local chart.
2 Affine maps can be characterized in terms of both geodesics and martin-gales. Note the striking similarity between the two cases.
Theorem 35.10 (affine maps) Let ϕ be a smooth map between two manifolds S, S′ with connections ∇S, ∇S′. Then these conditions are equivalent: (i) ϕ is affine, (ii) γ is a geodesic in S ⇒ϕ ◦γ is a geodesic in S′, (iii) M is a martingale in S ⇒ϕ ◦M is a martingale in S′.
Proof, (i) ⇒(iii): Assume (i), and let M be a martingale in S. For any smooth function f on S′, we get by (1), (5), (8), and Theorem 35.1 (iv) 2 f ◦ϕ ◦M m = ∇S(f ◦ϕ)[M] = ϕ∗2(∇S′f)[M] = ∇S′f[ϕ ◦M], which shows that ϕ(M) is a martingale in S′, proving (iii).
(iii) ⇒(ii): Assume (iii), let γ : I →S be a geodesic in S, and put ψ = ϕ◦γ.
Consider a Brownian motion B in I, starting at t = 0 and stopped at some points r ± ε ∈I. Since γ is affine, we see as before that ϕ ◦B is a martingale in S, and so by (iii) the process ψ ◦B is a martingale in S′. Applying (5) to the martingales B and ψ ◦B and using Theorem 35.1 (iii), we get for any f on S′ ∇I(f ◦ψ)[B] m = f ◦ψ(B) m = ∇S′[ψ ◦B] = ψ∗2 ∇S′[B].
Since both sides have locally finite variation and vanish at 0, they agree a.s., and so up to an optional time τ > 0, t∧τ 0 (f ◦ψ)′′(Bs) ds = t∧τ 0 ∇S′f ˙ ψ ◦Bs ds.
By the continuity of the two integrands, we get at s = 0 (f ◦ψ)′′(r) = ∇S′f( ˙ ψr, ˙ ψr), 812 Foundations of Modern Probability and since r ∈I was arbitrary, we conclude from Lemma 35.9 that ψ is a geodesic, proving (ii).
(ii) ⇒(i): For any u ∈TS, Lemma 35.9 yields a geodesic γ in S with ˙ γ(0) = u, and so by (ii) the map ψ = ϕ ◦γ is a geodesic in S′.
Since γ has speed u at 0, we further note that ψ has speed ϕ(u) at 0. Using (1) and applying Lemma 35.9 to both ϕ and ψ, we get for any smooth function f on S′ ∇S(f ◦ϕ)(u, u) = ∇S(f ◦ϕ)(˙ γ0, ˙ γ0) = (f ◦ϕ ◦γ)′′(0) = ∇S′f( ˙ ψ0, ˙ ψ0) = ∇S′f(ϕ ◦u, ϕ ◦u) = ϕ∗2(∇S′f)(u, u).
Then by symmetry the two bilinear forms agree, and (8) follows.
2 Corollary 35.11 (uniqueness) A connection ∇on a smooth manifold S is determined by each of the following: (i) the class of all ∇-geodesics, (ii) the class of all ∇-martingales.
Proof: Let ∇, ∇′ be connections on S with the same set of associated geodesics or martingales. Then Theorem 35.10 shows that the identity map ϕ: (S, ∇) →(S, ∇′) is affine, and so for any smooth function f on S, ∇f = ∇(f ◦ϕ) = ϕ∗2(∇′f) = ∇′f, which means that ∇= ∇′.
2 A smooth function f on S is said to be convex if ∇f(u, u) ≥0 for all u ∈TS. In local coordinates, this means that the symmetric matrix (∇f)ij is non-negative definite. We begin with some elementary properties.
Lemma 35.12 (affine and convex maps) Let S, S′ be smooth manifolds with connections ∇S, ∇S′. Then for smooth functions f and ϕ, (i) f is affine on S ⇔f, −f are convex, (ii) ϕ: S →S′ is affine f is convex on S′ ⇒f ◦ϕ is convex on S, (iii) f is convex on S h is convex, increasing on R ⇒h ◦f is convex on S.
35. Stochastic Differential Geometry 813 Proof: (i) If both f and −f are convex, then ∇f(u, u) = 0 for all u ∈TS, and so by symmetry ∇f = 0.
(ii) Using the definitions of affinity and convexity along with (1), we get for any u ∈TS ∇S(f ◦ϕ)(u, u) = ϕ∗2∇S′f (u, u) = (∇S′f)(ϕ ◦u, ϕ ◦u) ≥0, which shows that f ◦ϕ is convex on S.
(iii) Using Lemma 35.5 and the conditions on f and h, we get for any u ∈TS ∇S(h ◦f)(u, u) = (h′ ◦f)∇Sf(u, u) + (h′′ ◦f)(uf)2 ≥0, which shows that h ◦f is convex on S.
2 We may next characterize convexity in terms of geodesics or martingales.
Theorem 35.13 (convex maps) Let f be a smooth function on a manifold S with connection ∇. Then these conditions are equivalent: (i) f is convex on S, (ii) γ is a geodesic in S ⇒f ◦γ is convex on R, (iii) M is a martingale in S ⇒f ◦M is a local sub-martingale in R.
Proof, (i) ⇒(iii): If f be convex on S and M be a martingale in S, then by (5) and Theorem 35.1 (iv), f(M) m = 1 2 ∇f[M] ≥0, which shows that f ◦M is a local sub-martingale, proving (iii).
(iii) ⇒(ii): Fix a geodesic γ : I →S, and let M be a martingale on I. Then γ ◦M is a martingale in S by Theorem 35.10, and so by (iii) f ◦γ(M) is a local sub-martingale on I. Since M was arbitrary, a converse of Jensen’s inequality in Lemma 4.5 shows that f ◦γ is convex, proving (ii).
(ii) ⇒(i): For any u ∈TS there exists a geodesic γ in S with ˙ γ0 = u. Then f ◦γ is convex by (ii), and Lemma 35.9 (i) yields ∇f(u, u) = ∇f(˙ γ0, ˙ γ0) = (f ◦γ)′′(0) ≥0, which shows that f is convex, proving (i).
2 Conversely, geodesics and martingales in S can be characterized in terms of convex functions.13 Here we state a simplified local version.14 13Since there may not be enough affine functions, we need to use the convex functions as convenient substitutes.
14The corresponding global statement is false in general, since there may not be enough globally convex functions. Fortunately, there are plenty of locally convex functions.
814 Foundations of Modern Probability Theorem 35.14 (convexity criteria, Darling) Let S be a manifold with con-nection ∇. Then every point x ∈S has an open neighborhood U, such that (i) a smooth curve γ : I →U is a geodesic iff f is convex on U ⇒f ◦γ is convex on I, (ii) a semi-martingale X in U is a martingale iff f is convex on U ⇒f ◦X is a local sub-martingale.
Proof: The necessity in parts (i) and (ii) is clear from Theorem 35.13. We omit the more difficult proof of the sufficiency.15 2 We return to a more detailed study of semi-martingales X in S. Though X has no semi-martingale decomposition in S, due to the absence of any linear structure, its local characteristics16 can still be defined intrinsically. By their local nature, it is then enough to consider semi-martingales in any local chart U. For any smooth function f and bilinear form b on S, we define the finite-variation processes Af and Cb by f(X) m = 1 2 ∇f[X] + Af, b [X] = Cb.
(9) To avoid some distracting technicalities, we assume17 that Af ≪λ and Cb ≪λ a.s., where λ denotes Lebesgue measure on R+.
Theorem 35.15 (local drift and diffusion rates) Let S be a smooth manifold with connection ∇, and consider a semi-martingale X in S, such that a.s.
Af ≪λ and Cb ≪λ for all f and b. Then (i) there exists an a.s. unique random tangent vector 18 A on S, such that f(X) m = 1 2 ∇f[X] + ⟨A, f⟩· λ, f ∈S, (ii) there exists an a.s. unique symmetric, second order, contra-variant ran-dom tensor 19 C on S, such that for any bilinear form b on S, b [X] = ⟨C, b ⟩· λ a.s., (iii) X solves the local martingale problem f(X) m = 1 2 ⟨C, ∇f⟩+ ⟨A, f⟩ · λ, f ∈S, (10) which in local coordinates becomes f(X) m = 1 2 C ij ∂2 ij −Γ k ij ∂k f + Ai ∂if · λ.
15Detailed discussions appear in Emery (1989), pp. 42–46, and (1998/2013), pp. 55–59.
16Our local characteristics are totally different from those of Meyer (1982).
17More generally, if Af ≪ξ and Cb ≪ξ a.s. for a diffuse random measure ξ on R+, we may reduce to the case ξ = λ by a random time change.
18In other words, At is a random element of TXt for every t ≥0.
19Thus, Ct is a symmetric random element of T ⊗2 Xt — the dual of the space of bilinear forms on S. Schwartz suggests we think of such objects as second order tangent vectors.
35. Stochastic Differential Geometry 815 Proof: (i) Fix any f 1, . . . , f n ∈S and a smooth function ϕ on Rn, write Y i = f i(X) and Af = Af · λ, and put f = (f i) and Y = (Y i). Using Itˆ o’s formula, equation (9), Lemma 18.14, and Theorems 35.1 (i)−(ii) and 35.5, we get ϕ(Y ) m = ∂iϕ(Y ) · Y i + 1 2 ∂2 ijϕ(Y ) · [Y i, Y j] m = 1 2 ∂iϕ(Y ) · ∇f i[X] + ∂iϕ(Y ) · (Af i · λ) + 1 2 ∂2 ijϕ(Y ) · d f i ⊗d f j [X] = 1 2 (∂iϕ ◦f)∇f i[X] + ∂iϕ(Y )Af i · λ + 1 2 ∂2 ijϕ ◦f d f i ⊗d f j [X] = 1 2 ∇(ϕ ◦f)[X] + ∂iϕ(Y )Af i · λ.
We may also apply (9) directly to the function ϕ ◦f to get (ϕ ◦f)(X) m = 1 2 ∇(ϕ ◦f)[X] + A(ϕ ◦f) · λ, and so by comparison, A(ϕ ◦f) = (∂iϕ ◦f)(X)Af i a.e.
Hence, Lemma A7.1 shows that A is indeed a random tangent vector.
(ii) For any local coordinates ϕ1, . . . , ϕn, we define C ij · λ = dϕi ⊗dϕj [X] = [ϕi ◦X, ϕj ◦X].
Letting b be a bilinear form with coordinate representation bij(dϕi ⊗dϕj), we obtain the duality relation ⟨C, b ⟩· λ = bij C ij · λ = bij dϕi ⊗dϕj [X] = b[X], where C is the random operator S →T ⊗2 S with coordinate representation (C ij).
Thus, C is a.e. unique and independent of any choice of local coordinates.
(iii) Combine (i)−(ii), and use Theorem 35.5, along with the coordinate representation of A.
2 We refer to the processes A and C above as the intrinsic drift and diffusion rates of X. Note in particular that X is a martingale iffA = 0 a.e. Claim (i) may be surprising, since ∇f[X] is linear in f while f(X) is not.
In normal coordinates around a point x ∈S, the random differential oper-ator in (iii) reduces at x to the form Lt = 1 2 C ij t ∂2 ij + Ai t ∂i, which may be thought of as the characteristic differential operator 20 of X.
This suggests that we regard the pair (A, C) as the random generator of X.
20This relates our local characteristics to the Schwartz–Meyer theory of second order stochastic calculus.
816 Foundations of Modern Probability Our definitions of intrinsic drift and diffusion rates are justified by the fol-lowing results for semi-martingales in Euclidean sub-manifolds, which formally agree with our findings in the extrinsic case.21 Theorem 35.16 (semi-martingales in sub-manifold) Let X be an S-valued semi-martingale in Rn, where S is a smooth sub-manifold with induced connec-tion ∇. Then (i) the local drift of X in S is the tangential projection of the drift in Rn, (ii) the local diffusion rates of X in S agree with those in Rn.
Proof: (i) Using the notation of Theorem 35.8, we may write condition (i) of Theorem 35.15 in the form f(X) m = 1 2 ∇f[X] + !
grad ˜ f(X) A " · λ.
Now let X m = H · λ + V · λ, where H and V denote the tangential and normal components of the Euclidean drift rate. Proceeding as in the proof of Theorem 35.8, we get f(X) m = 1 2 ∇f[X] + !
grad ˜ f(X) H " · λ + !
grad ˜ f(X) V " · λ.
Here the last term vanishes, since grad ˜ f(x) ∈Tx ⊥V , and so by comparison !
grad ˜ f(X) A " · λ = !
grad ˜ f(X) H " · λ.
Since f was arbitrary, it follows that A = H a.e., as asserted.
(ii) Writing ϕ for the embedding S →Rn, we get for any bilinear form b = h(d f ⊗dg) with f, g, h ∈S ϕ∗2b = (h ◦ϕ) (ϕ∗d f) ⊗(ϕ∗dg) = (h ◦ϕ) d(f ◦ϕ) ⊗d(g ◦ϕ) , and so ϕ∗2b(u, v) = (h ◦ϕ) u(f ◦ϕ) v(g ◦ϕ) = h(uf)(vg) = b(u, v), which extends to ϕ∗2b = b for general b. Hence, Theorem 35.1 (iii) gives b [ϕ ◦X] = ϕ∗2b [X] = b [X], as desired.
2 For a further justification, we show how the local characteristics of a semi-martingale in Rn are transformed by projection. This again agrees formally with the extrinsic results in Lemma 35.3.
21This may be remarkable, since the definitions of drift and diffusion rates are totally different in the two cases. A similar remark applies to Corollary 35.17.
35. Stochastic Differential Geometry 817 Corollary 35.17 (projection onto sub-manifold) Let X be a semi-martingale in Rn with projection ˆ X onto a smooth sub-manifold S with induced connection ∇. Then (i) ˆ X is a semi-martingale in S, (ii) the local drift of ˆ X equals the tangential projection of the drift of X, (iii) the local diffusion rates of ˆ X are tangential projections of the diffusion rates of X.
Proof: Proceed as in Lemma 35.3 and use Theorem 35.16.
2 The local rate processes A and C are said to be autonomous, if At = a(Xt) and Ct = c(Xt) for a deterministic vector field a and a second order contra-variant tensor field c on S.
In that case, the local martingale problem of Theorem 35.15 (iii) yields some more precise information about X. As before, we define a diffusion in S as a continuous strong Markov process.
Corollary 35.18 (diffusions in manifold) Let X be a semi-martingale in a manifold S with connection ∇. Then (i) if (10) has a unique solution X with autonomous rates At = a(Xt) and Ct = c(Xt), it defines a diffusion in S, (ii) if the functions a, c in (i) are smooth, then (10) has a locally unique solution X, defining a Feller diffusion in S.
Proof: Use Theorems 32.3 and 32.11 (i)−(ii).
2 Part (iii) of Theorem 35.15 suggests that, if the function c is non-singular, we might remove the drift a by a suitable change of connection ∇, so that X becomes a martingale. We turn to a separate discussion of such drift-free diffusions.
Theorem 35.19 (diffusive martingales) Let X be a semi-martingale in a manifold S with connection ∇, and let L be a non-negative, symmetric, lin-ear 22 map of bilinear forms b on S into smooth functions Lb on S. Then these conditions are equivalent: (i) f(X) m = 1 2 L(∇f)(X) · λ, f ∈S, (ii) X is a martingale with b [X] = Lb (X) · λ, (iii) X is a diffusion process in S with generator 1 2 L∇.
Proof: For clarity and importance, we give a direct proof: (ii) ⇒(i): The two properties in (ii) yield for any f ∈S 2 f(X) m = ∇f[X] = L(∇f)(X) · λ, 22so that in particular L(fb) = fLb 818 Foundations of Modern Probability proving (i).
(i) ⇒(ii): By (4) and the linearity of L, L(∇f 2) = 2 L f∇f + (d f)⊗2 = 2f L(∇f) + 2 L(d f)⊗2, and so by (i), f 2 ◦X m = 1 2 L(∇f 2)(Xt) dt = f L(∇f)(Xt) dt + L(d f)⊗2(Xt) dt.
On the other hand, we get by Itˆ o’s formula and (i) f 2 ◦X = 2 f(X) · (f ◦X) + [f ◦X] m = f(Xt) L(∇f)(Xt) dt + [f ◦X].
Comparing the two expressions and using Theorem 35.1 (i), we obtain (d f)⊗2[X] = [f ◦X] = L(d f)⊗2(X) · λ, which extends by polarization and Theorem 35.1 (ii) to f(dg ⊗dh)[X] = f(X) · L(dg ⊗dh)(X) · λ = fL(dg ⊗dh)(X) · λ = L f(dg ⊗dh) (X) · λ, and finally to b[X] = Lb(X) · λ. Applying the latter formula to ∇f and using (i), we get ∇f[X] = L(∇f)(X) · λ m = 2 f(X), which proves (5), showing that X is a martingale.
(i)–(ii) ⇔(iii): Use Corollary 35.18 and Lemma 17.21.
2 A differentiable manifold S is said to be Riemannian, if it is endowed with a positive definite, bilinear form ρ, called the metric tensor on S. For any vectors u, v ∈TS, ρ determines an inner product ρ(u, v) = ⟨u | v⟩with associated norm ∥· ∥. For any smooth curve γ : I →S, the associated length and energy are defined as integrals of ∥˙ γt∥and 1 2∥˙ γt∥2, respectively. Any semi-martingale X in S has the Riemannian quadratic variation23 ρ[X].
The Riemannian structure on S induces a Riemannian metric on any sub-manifold S′, preserving the length and energy of any smooth curve, as well as the quadratic variation of any semi-martingale. This again agrees with the situation in the extrinsic case.
23The process ρ[X] is sometimes written as ( dX, dX) or ⟨dX | dX⟩.
35. Stochastic Differential Geometry 819 Lemma 35.20 (induced metric on sub-manifold) For a Riemannian manifold (S, ρS), let S ′ ⊂S be a smooth sub-manifold with associated embedding ϕ : S ′ →S. Then (i) ρS ′ = ϕ∗2ρS is a Riemannian metric on S ′, (ii) for any smooth curve γ in S ′, the length ∥˙ γ∥and energy 1 2∥˙ γ∥2 agree under ρS and ρS ′, (iii) for any semi-martingale X in S ′, the quadratic variations ρ[X] agree under ρS and ρS ′.
Proof: (i) For any u ∈TxS′, we have (u ◦ϕ)f = u(f ◦ϕ) = uf, and so u ◦ϕ = u, which implies ρS′(u, v) = ρS(u, v).
(ii) From (i) we get ∥˙ γt∥S′ = ∥˙ γt∥S for any curve γ in S.
(iii) For any semi-martingale X in S′, Theorem 35.1 yields ρS′[X] = ϕ∗2ρS[X] = ρS[ϕ ◦X] = ρS[X].
2 We have seen how a smooth sub-manifold S of Rn can be endowed in a natu-ral way with both a connection ∇and a Riemannian metric ρ. It is remarkable that ∇, and hence also the associated class of martingales, depend intrinsically only on the metric ρ, regardless of the way (S, ρ) is embedded24 into Rn. For that reason, ∇is often called the canonical or Riemannian connection.25 For a precise statement, note that in local coordinates, the Riemannian metric ρ on S is given by a symmetric, positive definite matrix ρij, depending smoothly on x ∈S. The associated inverse matrix is denoted by ρij. The no-tions of Lie derivative Lub and gradient grad f = ˆ f are explained in Appendix 7. A coordinate neighborhood of a point x ∈S is said to be normal, if the local coordinates are ortho-normal at x, and every geodesic through x describes a uniform motion along a straight line.
Theorem 35.21 (Riemannian connection) On a Riemannian manifold (S, ρ), conditions (i)–(iii), (v) are equivalent and determine a unique connection ∇: (i) for every x ∈S, we have Γ k ij(x) = 0 in normal coordinates around x, (ii) in terms of Lie derivatives, ∇is given by ∇f = 1 2 L grad f(ρ), f ∈S, (iii) in local coordinates, ∇has Christoffel symbols Γ k ij = 1 2 ρ kl ∂i ρ lj + ∂j ρ il −∂l ρ ij , 24By a deep theorem in differential geometry, every smooth Riemannian manifold can be isometrically embedded into a Euclidean space of suitable dimension. Though the embedding is far from unique, the induced connection is always the same.
25also known as the Levi-Civita connection 820 Foundations of Modern Probability (iv) for S = Rn with Euclidean metric ρ, ∇reduces to the flat connection, (v) when (S, ρ) is isometrically embedded into Rn, ∇agrees with the induced connection ∇S of Theorem 35.8.
Proof: (i) The uniqueness is clear from Theorem 35.6. The existence follows from parts (ii)−(iii), if we can show that the constructed connection satisfies the condition in (i). This holds by (v) and Theorems 35.8 and 35.20, along with the fact that any local neighborhood U ⊂S can be isometrically embed-ded into Rn for a suitable n.
(ii) To show that ∇is a connection, we note that ⟨ˆ f | u⟩= d f(u) = uf, and hence for any u ∈TS, ⟨grad f 2| u⟩= uf 2 = 2f(uf) = 2f⟨ˆ f | u⟩ = ⟨2f( ˆ f) | u⟩, which shows that grad f 2 = 2f( ˆ f). Using Corollary A7.5, we get ∇f 2 = 1 2 L grad f2(ρ) = 1 2 L 2f( ˆ f)(ρ) = Lf( ˆ f)(ρ) = f L grad f(ρ) + d f ⊗⟨ˆ f | ·⟩+ ⟨· | ˆ f ⟩⊗d f = 2f ∇f + 2 (d f)⊗2.
(iii) Writing ˆ f = grad f and ˆ g = grad g, and using Theorem A7.4 (ii) and (iv), we get from (ii) for any vector field u 2 ∇f(u, ˆ g) = (L grad f ρ)(u, ˆ g) = ˆ f⟨u | ˆ g⟩−⟨L grad f u | ˆ g⟩−⟨u | L grad f ˆ g⟩ = ˆ f(ug) −(L grad f u)g −⟨L grad f ˆ g | u⟩ = u( ˆ fg) − !
u [ ˆ f, ˆ g] " .
Interchanging f and g and noting that [ ˆ f, ˆ g]+[ˆ g, ˆ f ] = 0 and ˆ fg = ˆ gf = ⟨ˆ f | ˆ g⟩, we obtain ∇f(u, ˆ g) + ∇g(u, ˆ f) = u⟨ˆ f | ˆ g⟩.
Choosing u = ˆ h = grad h and permuting f, g, and h, we get 2 ∇f(ˆ g, ˆ h) = ˆ g⟨ˆ f | ˆ h⟩+ ˆ h⟨ˆ f | ˆ g⟩−ˆ f⟨ˆ g | ˆ h⟩.
Noting that every coordinate vector ∂i in TS is locally a gradient field, and inverting the mapping ˆ f = grad f into ∂if = ρij ˆ f j, we have by (7) 2 Γi jk ρil = ∂j ρlk + ∂k ρjl −∂l ρjk.
Now solve for Γi jk by applying the inverse matrix (ρil) to both sides.
(iv) Here ρ = I is constant, and (iii) yields Γi jk = 0 for all i, j, k.
(v) Write ϕ: S →Rn for the embedding map, let U be a neighborhood of S with a unique, smooth, orthogonal projection πS : U →S, and introduce the orthogonal extension26 f ′ = f ◦πS of any smooth function f on S. Using the 26in Theorem 35.8 denoted by ˜ f 35. Stochastic Differential Geometry 821 definitions of ˆ f = grad f in S and Rn, we get !
ϕ ◦ˆ f ϕ ◦u " n = ⟨ˆ f | u⟩S = d f(u) = uf = (ϕ ◦u)f ′ = ⟨grad f ′ ◦ϕ | ϕ ◦u⟩n, which implies grad f ′ ◦ϕ = ϕ ◦ˆ f on S. By the definitions of ρ, ∇, and ∇S, we obtain ∇f = 1 2 L grad f(ρ) = 1 2 L grad f′◦ϕ(ϕ∗2I) = ϕ∗2∇nf ′ = ∇Sf.
2 In a Riemannian manifold (S, ρ), we always choose ∇to be the canonical connection. Given a bilinear form b on (S, ρ), we define tr b = ρij bij = Σ i b(ei, ei), (11) for any ortho-normal basis (ei) in Tx. A martingale M in a Riemannian man-ifold (S, ρ) is said to be isotropic if ∥ˆ f∥= ∥ˆ g∥ ⇒ [f ◦M] = [ g ◦M] a.s.
We may then introduce the rate process n−1ρ[M] of M, where n = dim S.
If it is absolutely continuous, its density is called the (local) rate of M. An isotropic martingale with constant rate 1 is called a Brownian motion.
Theorem 35.22 (isotropic martingales) Let M be a martingale in a Riemann-ian manifold (S, ρ) with dim S = n. Then these conditions are equivalent: (i) M is isotropic, (ii) b [M] = tr b(M) · T a.s. for an increasing process T.
In that case, we have a.s.
(iii) T = n−1ρ[M], and M = B ◦T for a Brownian motion B in S.
Proof, (i) ⇒(ii)–(iii): Let h1, . . . , hn be normal coordinates around a point x ∈S. Polarizing (i) and alternating the signs of the hi, we get by Theorem 35.1 (i) dhi ⊗dhj [M] = hi ◦M, hj ◦M = δ ij [hi ◦M], and so by Theorem 35.1 (ii), b [M] = b ij(M) δij · [h1 ◦M] = Σ i b ii(M) · [h1 ◦M] = n−1tr b(M) · ρ[M], proving (ii) with T = n−1ρ[M]. The time-change representation now follows as in Theorem 19.4 by means of Theorem 18.24.
822 Foundations of Modern Probability (ii) ⇒(i): Assuming (ii), we get for any f ∈S [f ◦M] = (d f)⊗2[M] = tr(d f)⊗2(M) · T = ∥ˆ f∥2 tr e⊗2 (M) · T, proving (i).
2 For the canonical connection ∇on a Riemannian manifold (S, ρ), we define the Laplacian27 Δ on S by Δf(x) = tr(∇f)(x), f ∈S, x ∈S.
(12) We list some characterizations of Brownian motion: Corollary 35.23 (Brownian motion) For a semi-martingale X in a Rieman-nian manifold (S, ρ), these conditions are equivalent: (i) f(X) m = 1 2 Δf(X) · λ for all f ∈S, (ii) X is a martingale with b [X] = tr b(X) · λ, (iii) X is an isotropic martingale with constant rate 1, (iv) X is a diffusion process in S with generator 1 2 Δ.
Proof, (ii) ⇔(iii): Use Theorem 35.22.
(i) ⇔(ii) ⇔(iv): Use Theorem 35.19.
2 If the function c in Corollary 35.18 is non-singular, we can change the Riemannian metric ρ to get the Brownian condition b[X] = tr b(X)·λ fulfilled.
Unfortunately, this may change the canonical connection ∇as well, and so the resulting process might have a non-zero drift. The latter can often be removed by a suitable Girsanov-type transformation.
A mapping ϕ from a Riemannian manifold (S, ρ) to a manifold S ′ with connection ∇is said to be harmonic, if Δ(f ◦ϕ) = tr ϕ∗2 ∇ S ′f , f ∈S′.
(13) Theorem 35.24 (harmonic maps) Let ϕ be a smooth map from a Rieman-nian manifold (S, ρ) to a manifold S ′ with connection ∇. Then these conditions are equivalent: (i) ϕ is harmonic, (ii) B is a Brownian motion in S ⇒ϕ ◦B is a martingale in S ′, (iii) M is an isotropic martingale in S ⇒ϕ ◦M is a martingale in S ′.
27also called the Laplace–Beltrami operator 35. Stochastic Differential Geometry 823 Proof, (i) ⇒(iii): Let ϕ be harmonic, and let M be an isotropic martingale in S. Using (13) and Theorems 35.1 (iii) and 35.23 (i)−(ii), we get for any f ∈S′ 2 f ◦ϕ(M) m = Δ(f ◦ϕ)(M) · ρ[M] = tr ϕ∗2∇S′f (M) · ρ[M] = ϕ∗2∇S′f [M] = ∇S′f[ϕ ◦M], which shows that ϕ ◦M is a martingale in S′.
(iii) ⇒(ii): Special case.
(ii) ⇒(i): Assuming (ii), we get as before for any f ∈S′ tr ϕ∗2∇S′f (B) · λ = ϕ∗2∇S′f [B] = ∇S′f[ϕ ◦B] m = 2 f ◦ϕ(B) m = Δ(f ◦ϕ)(B) · λ.
Since the extreme sides have locally finite variation, they agree a.s., and (13) follows by continuity, proving (i).
2 A Lie group is defined as a differential manifold endowed with a smooth group structure. On any Lie group S, we may introduce a left-invariant Rie-mannian metric ρ, generated by an arbitrary inner product on the basic tangent space Tι, where ι is the identity element of S. Since the group operation may not be commutative, we need to distinguish between left and right increments of a process X over an interval [s, t], given by X−1 s Xt or Xt X−1 s , respectively.
In the Abelian case, both versions reduce to Xt −Xs.
As before, we define a L´ evy process in S as a process with stationary, independent left increments.
We proceed to characterize L´ evy processes in terms of the local characteristics in Theorem 35.15. The result is a version for Lie groups of the elementary Theorem 11.5.
Theorem 35.25 (L´ evy processes in Lie groups) In a Lie group S with a left-invariant Riemannian metric ρ, let X be a semi-martingale with local rates A and C. Then these conditions are equivalent: (i) X has stationary, independent left increments, (ii) A = a(X) and C = c(X) for some smooth, left-invariant functions a, c on S.
Proof, (ii) ⇒(i): By Corollary 35.18 (ii), X is a Feller diffusion, whose distribution is uniquely determined by a and c. If the latter are left-invariant, Theorem 11.5 shows that X has stationary, independent left increments.
824 Foundations of Modern Probability (i) ⇒(ii): For any s < t, Proposition 18.17 and Theorem 35.1 show that the increment b[X]t −b[X]s depends measurably on the values of X in the interval [s, t]. By stationarity of the increments, the corresponding mapping is left-invariant. Taking right derivatives in Theorem 2.15 and using the stationarity again, we conclude that Ct depends measurably on X in any right neighborhood [t, t+ε), where the affecting map is again invariant. By the 0−1 law in Corollary 9.26, Ct then depends measurably on Xt for every t > 0, and so Ct = c(Xt) by Lemma 1.14 for a measurable and invariant function c.
To deal with A, we need to show that if f(X) has a semi-martingale de-composition M + V , then Vt −Vs depends measurably on X in the interval [s, t]. The required measurability then follows by approximation with sums of conditional expectations, as in the proof in Lemma 10.7, since the condition-ing σ-field Fs may be replaced by conditioning on Xs, by independence of the increments. Since such a measurability is already known for the term 1 2 ∇f[X] in (9), it holds for Af as well. We may now proceed as before to see that A = a(Xt) for some measurable and invariant function a.
Since a is a left-invariant vector field on S, it is automatically smooth28. A similar argument may yield the desired smoothness of the second order invari-ant vector field c.
2 Every continuous process in S with stationary, independent left increments can be shown to be a semi-martingale of the stated form. The intrinsic drift rate a clearly depends on the connection ∇, which in turn depends on the met-ric ρ, via Theorem 35.21. However, ρ is not unique since it can be generated by any inner product on Tι, and so there is no natural definition of Brownian motion in S. In fact, martingales and drift rates are not well-defined either, since even the canonical connection may fail to be unique29.
Exercises 1. Show that when S = Rn, a semi-martingale in S is just a continuous semi-martingale X = (X1, . . . , Xn) in the usual sense. Further show how the covariance matrix [Xi, Xj], i, j ≤n, can be recovered from b[X].
2. For any semi-martingales X, Y and bilinear form b on S, construct a process b [X, Y ] of locally finite variation, depending linearly on b, such that (d f⊗dg)[X, Y ] = [f(X), g(Y )] for smooth functions f, g, so that in particular b [X, X] = b [X]. (Hint: Regard (X, Y ) as a semi-martingale in S2, and define b [X, Y ] = (π1 ⊗π2)∗b [(X, Y )], where π1, π2 are the coordinate projections S2 →S, and (π1 ⊗π2)∗b denotes the pull-back of b by π1 ⊗π2 into a bilinear form on S2. Now check the desired formula.) 3. Explain where in the proof of Theorem 35.2 it is important that we use the FS rather than the Itˆ o integral. Specify which of the four parts of the theorem depend on that convention.
28Cf. Proposition 3.7 in Warner (1983), p. 85.
29This is far from obvious; thanks to Ming Liao for answering my question.
35. Stochastic Differential Geometry 825 4. Same requests as in the previous exercise for the extrinsic Lemma 35.3 with accompanying discussion. (Hint: Note that the reasons are very different.) 5. Show that the class of connections on S is not closed under addition or mul-tiplication by scalars. (Hint: This is clear from (4), and even more obvious from Theorem 35.6.) 6. Show that for any connection ∇with Christoffel symbols Γk ij, the latter can be changed arbitrarily in a neighborhood of a point x ∈S, such that the new functions ˜ Γk ij are Christoffel symbols of a connection ˜ ∇on S. (Hint: Extend the ˜ Γk ij to smooth functions agreeing with Γk ij outside a larger neighborhood.) 7. Show that the class of martingales in S depends on the choice of connection.
Thus, letting X be a (non-trivial) martingale for a connection ∇, show that we can choose another connection ∇′, such that X fails to be a ∇′-martingale. (Hint: Use Theorem 35.7 together with the previous exercise.) 8. Compare the martingale notions in Lemma 35.3 and Theorem 35.8. Thus, check that, for a given embedding, the two notions agree. On the other hand, explain in what sense one notion is extrinsic while the other is intrinsic.
9. Show that the notion of geodesic depends on the choice of connection. Thus, given a geodesic γ for a connection ∇, show that we can choose another connection ∇′ such that γ fails to be a ∇′-geodesic. (Hint: This might be seen most easily from Corollary 35.11.) 10. For a given connection ∇on S, show that the classes of geodesics and mar-tingales are invariant under a scaling of time. Thus, if γ is a geodesic in S, then so is the function γ′ t = γct, t ≥0, for any constant c > 0. Similarly, if M is a martingale in S, then so is the process M ′ t = Mct. (Hint: For geodesics, use Lemma 35.9. For martingales, we can use Theorem 35.7.) 11. When S = Rn with the flat connection, show that the notions of affine and convex functions agree with the elementary notions for Euclidean spaces. Further show that, in this case, a geodesic represents motion with constant speed along a straight line. For Euclidean spaces S, S′ with flat connections, verify the properties and equivalences in Lemma 35.12 and Theorems 35.10 and 35.13.
12. Show that the intrinsic notion of drift depends on the choice of connection, whereas the intrinsic diffusion rate doesn’t. Thus, show that if a semi-martingale X has drift A for connection ∇, we can choose another connection ∇′ such that the associated drift is different. (Hint: Note that X is a martingale for ∇iffA = 0. For the second assertion, note that b [X] is independent of ∇.) 13. For S = Rn with the flat connection, explain how our intrinsic local charac-teristics are related to the classical notions for ordinary semi-martingales.
14. Show that in Theorem 35.15, we can replace Lebesgue measure λ by any diffuse measure on R, and even by a suitable random measure.
15. Clarify the relationship between the intrinsic results of Theorem 35.16 and Corollary 35.17 and the extrinsic ones in Lemma 35.3. (Hint: Since the definitions of local characteristics are totally different in the two cases, the formal agreement merely justifies our intrinsic definitions. Further note that the intrinsic properties hold for any smooth embedding of S, whereas the extrinsic ones refer to a specific choice of embedding.) 826 Foundations of Modern Probability 16.
Show that for S = Rd with the Euclidean metric, the intrinsic notion of isotropic martingales agrees with the notion for continuous local martingales in Chapter 19. Prove a corresponding statement for Brownian motion.
17. Show that if X is a martingale in a Riemannian manifold (S, ρ), it may fail to be so for a different choice of metric ρ. Further show that if X is an isotropic martingale in (S, ρ), we can change ρ so that it remains a martingale but is no longer isotropic. Finally, if X is a Brownian motion in (S, ρ), we can change ρ so that it remains an isotropic martingale, though with a rate different from 1.
Appendices 1. Measurable maps, 2. General topology, 3. Linear spaces, 4. Linear operators, 5. Function and measure spaces, 6. Classes and spaces of sets, 7. Differential geometry Here we review some basic topology, functional analysis, and differential ge-ometry, needed at various places throughout the book. We further consider some special results from advanced measure theory, and discuss the topological properties of various spaces of functions, measures, and sets, of crucial impor-tance for especially the weak convergence theory in Chapter 23. Proofs are included only when they are not easily accessible in the standard literature.
1. Measurable maps The basic measure theory required for the development of modern probability was surveyed in Chapters 1–3. Here we add some less elementary facts, needed for special purposes.
If a measurable mapping is invertible, the measurability of its inverse can sometimes be inferred from the measurability of the range.
Theorem A1.1 (range and inverse, Kuratowski) For any measurable bijec-tion f between two Borel spaces S and T, the inverse f −1 : T →S is again measurable.
Proof: See Parthasarathy (1967), Section I.3.
2 We turn to the basic projection and section theorems, which play important roles in some more advanced contexts. Given a measurable space (Ω, F), we define the universal completion of F as the σ-field ¯ F = μ Fμ, where Fμ denotes the completion of F with respect to μ, and the intersection extends over all probability measures μ on F. For any spaces Ω and S, we define the projection πA of a set A ⊂Ω × S onto Ω as the union s As, where As = ω ∈Ω; (ω, s) ∈A , s ∈S.
Theorem A1.2 (projection and sections, Lusin, Choquet, Meyer) For any measurable space (Ω, F) and Borel space (S, S), consider a set A ∈F ⊗S with projection πA onto Ω. Then (i) πA ∈¯ F = ∩μ Fμ, 827 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 828 Foundations of Modern Probability (ii) for any probability measure P on F, there exists a random element ξ in S, such that {ω, ξ(ω)} ∈A, ω ∈πA a.s. P.
Proof: See Dellacherie & Meyer (1975), Section III.44.
2 We note an application to functional representations needed in Chapter 28: Corollary A1.3 (conditioning representation) Consider a probability space (Ω, F, P) with sub-σ-field G ⊂F and a measurable function f : T × U →S, where S, T, U are Borel. Let ξ, η be random elements in S and U, such that L(ξ | G) ∈ L{f(t, η)}; t ∈T a.s.
Then there exist a G-measurable random element τ in T and a random element ˜ η in U, such that η d = ˜ η ⊥ ⊥τ, ξ = f(τ, ˜ η) a.s.
Proof: Fix a version of the conditional distribution L(ξ | G).
Applying Theorem A1.2 (ii) to the product-measurable set A = (ω, t) ∈Ω × T; L(ξ | G)(ω) = L{f(t, η)} , we obtain a G-measurable random element τ in T satisfying L(ξ | G) = L f(t, η) t=τ = L(ξ | τ) a.s., where the last equality holds by the chain rule for conditioning. Letting ζ d = η in U with ζ⊥ ⊥τ and using Lemma 4.11, we get a.s.
L f(τ, ζ) τ = L f(t, η) t=τ = L(ξ | τ), and so {f(τ, ζ), τ} d = (ξ, τ). Then Theorem 8.17 yields a random pair (˜ η, ˜ τ) d = (ζ, τ) in U × T with f(˜ τ, ˜ η), ˜ τ = (ξ, τ) a.s.
In particular ˜ τ = τ a.s., and so ξ = f(τ, ˜ η) a.s. Further note that ˜ η d = ζ d = η, and that ˜ η ⊥ ⊥˜ τ = τ since ζ⊥ ⊥τ by construction.
2 2. General topology Elementary metric topology, including properties of open and closed sets, con-tinuity, convergence, and compactness, are used throughout this book. For special purposes, we also need the slightly more subtle notions of separability and completeness, dense or nowhere dense sets, uniform and equi-continuity, and approximation. We assume the reader to be well familiar with most of this Appendices 829 material. Here we summarize some basic ideas of general, non-metric topology, of special importance in Sections 5–6 below and Chapter 23. Proofs may be found in most textbooks on real analysis.
A topology on a space S is defined as a class T of subsets of S, called the open sets, such that T is closed under finite intersections and arbitrary unions and contains S and ∅. The complements of open sets are said to be closed.
For any A ⊂S, the interior Ao is the union of all open set contained in A.
Similarly, the closure ¯ A is the intersection of all closed sets containing A, and the boundary ∂A is the difference ¯ A \ Ao. A set A is dense in S if ¯ A = S, and nowhere dense if ( ¯ A)c is dense in S. The space S is separable if it has a countable, dense subset. It is connected if S and ∅are the only sets that are both open and closed. For any A ⊂S, the relative topology on A consists of all sets A ∩G with G ∈T .
The family of topologies on S is partially ordered under set inclusion. The smallest topology on S has only the two sets S and ∅, whereas the largest one, the discrete topology 2S, contains all subsets of S. For any class C of subsets of S, the generated topology is the smallest topology T ⊃C. A topology T is said to be metrizable, if there exists a metric ρ on S, such that T is generated by all open ρ -balls. In particular, it is Polish if ρ can be chosen to be separable and complete.
An open set containing a point x ∈S is called a neighborhood of x. A class Bx of neighborhoods of x is called a neighborhood base at x, if every neighborhood of x contains a set in Bx. A base B of T is a class containing a neighborhood base at every point. Every base clearly generates the topology.
The space S is said to be first countable, if it has a countable neighborhood base at every point, and second countable, if it has a countable base. Every metric space is clearly first countable, and it is second countable iffit is separable.
A topological space S is said to be Hausdorff, if any two distinct points have disjoint neighborhoods, and normal if all singleton sets are closed, and any two disjoint closed sets are contained in disjoint open sets. In particular, all metric spaces are normal.
A mapping f between two topological spaces S, S′ is said to be continuous, if f −1B is open in S for every open set B ⊂S′. The spaces S, S′ are homeo-morphic, if there exists a bijection (1−1 correspondence) f : S →S′ such that f and f −1 are both continuous. For any family F of functions fi mapping S into some topological spaces Si, the generated topology on S is the smallest one making all the fi continuous. In particular, if S is a Cartesian product of some topological spaces Si, i ∈I, so that the elements of S are functions x = (xi; i ∈I), then the product topology on S is generated by all coordinate projections πi : x →xi. A set A ⊂S is said to be compact, if every cover of A by open sets contains a finite subcover, and relatively compact if its closure ¯ A is compact. By Tychonov’s theorem, any Cartesian product of compact spaces is again compact in the product topology. A topological space S is said to be locally compact, if it has a base consisting of sets with compact closure.
A set I is said to be partially ordered under a relation ≺, if 830 Foundations of Modern Probability • i ≺j ≺i ⇒i = j, • i ≺i for all i ∈I, • i ≺j ≺k ⇒i ≺k.
It is said to be directed if also • i, j ∈I ⇒i ≺k and j ≺k for some k ∈I.
A net in a topological space S is a mapping of a directed set I into S. In particular, sequences are nets on I = N with the usual order. A net (xi) is said to converge to a limit x ∈S, written as xi →x, if for every neighborhood A of x there exists an i ∈I, such that xj ∈A for all j ≻i. Possible limits are unique when S is Hausdorff, but not in general. We further say that (xi) clusters at x, if for any neighborhood A of x and i ∈I, there exists a j ≻i with xj ∈A. For sequences (xn), this means that xn →x along a sub-sequence.
Net convergence and clustering are constantly used1 throughout analysis and probability, such as in continuous time and higher dimensions, or in the definitions of total variation and Riemann integrals. They also serve to char-acterize the topology itself: Lemma A2.1 (closed sets) For sets A in a topological space S, these condi-tions are equivalent: (i) A is closed, (ii) for every convergent net in A, even the limit lies in A, (iii) for every net in A, all cluster points also lie in A.
When S is a metric space, it suffices in (ii)−(iii) to consider sequences.
We can also use nets to characterize compactness: Lemma A2.2 (compact sets) For sets A in a topological space S, these con-ditions are equivalent: (i) A is compact, (ii) every net in A has at least one cluster point in A.
If the net in (ii) has exactly one cluster point x, it converges to x. When S is a metric space, it is enough in (ii) to consider sequences.
For sequences, property (ii) above amounts to sequential compactness—the existence of a convergent sub-sequence with limit in A.
For the notion of relative sequential compactness, the limit is not required to lie in A.
Given a compact topological space X, let CX be the class of continuous functions f : X →R equipped with the supremum norm ∥f∥= supx |f(x)|.
An algebra A ⊂CX is defined as a linear subspace closed under multiplication.
It is said to separate points, if for any x ̸= y in X there exists an f ∈A with f(x) ̸= f(y).
1only the terminology may be less familiar Appendices 831 Theorem A2.3 (Stone–Weierstrass) For a compact topological space X, let A ⊂CX be an algebra separating points and containing the constant function 1. Then ¯ A = CX.
Like many other results in this and subsequent sections, this statement is also true in a complex version.
Throughout this book we are making use of spaces that are locally compact, second countable, and Hausdorff, here referred to as lcscH spaces. We record some standard facts about such spaces: Lemma A2.4 (lcscH spaces) Let the space S be lcscH. Then (i) S is Polish and σ-compact, (ii) S has a separable, complete metrization, such that a set is relatively com-pact iffit is bounded, (iii) if S is not already compact, it can be compactified by the attachment of a single point Δ ̸= S.
3. Linear spaces In this and the next section, we summarize some basic ideas of functional analysis, such as Hilbert and Banach spaces, and operators between linear spaces. Aspects of this material are needed in several chapters throughout the book. Proofs may be found in textbooks on real analysis.
A linear space2 over R is a set X with two operations, addition x + y and multiplication by scalars3 cx, such that for any x, y, z ∈X and a, b ∈R, • x + (y + z) = (x + y) + z, • x + y = y + x, • x + 0 = x for some 0 ∈X, • x + (−x) = 0 for some −x ∈X, • a(bx) = (ab)x, • (a + b)x = ax + bx, • a(x + y) = ax + ay, • 1x = x.
A subspace of X is a subset closed under the mentioned operations, hence a linear space in its own right. A function ∥x∥≥0 on X is called a norm, if for any x, y ∈X and c ∈R, 2also called a vector space 3Here and below, we can also use the set C of complex numbers as our scalar field. Usually this requires only trivial modifications.
832 Foundations of Modern Probability • ∥x∥= 0 ⇔x = 0, • ∥cx∥= |c| ∥x∥, • ∥x + y∥≤∥x∥+ ∥y∥.
The norm generates a metric ρ on X, given by ρ(x, y) = ∥x −y∥. If ρ is complete, in the sense that every Cauchy sequence converges, then X is called a Banach space. In that case, any closed subspace is again a Banach space.
Given a linear space X, we define a linear functional on X as a function f : X →R, such that f(ax + by) = a f(x) + b f(y), x, y ∈X, a, b ∈R.
Similarly, a convex functional is defined as a map p: X →R+ satisfying • p(cx) = c p(x), x ∈X, c ≥0, • p(x + y) ≤p(x) + p(y), x, y ∈X.
Lemma A3.1 (Hahn–Banach) Let X be a linear space with subspace Y , fix a convex functional p on X, and let f be a linear functional on Y with f ≤p on Y . Then f extends to a linear functional on X, such that f ≤p on X.
A set M in a linear space X is said to be convex if c x + (1 −c) y ∈M, x, y ∈M, c ∈[0, 1].
A hyper-plane in X is determined by an equation f(x) = a, for a linear func-tional f ̸= 0 and a constant a. We say that M is supported by the plane f(x) = a if f(x) ≤a, or f(x) ≥a, for all x ∈M. The sets M, N are separated by the plane f(x) = a if f(x) ≤a on M and f(x) ≥a on N, or vice versa. For clarity, we state the following support and separation result for convex sets in a Euclidean space.
Corollary A3.2 (convex sets) (i) For any convex set M ⊂Rd and point x ∈∂M, there exists a hyper-plane through x supporting M.
(ii) For any disjoint convex sets M, N ⊂Rd, there exists a hyper-plane sepa-rating M and N.
When X is a normed linear space, we say that a linear functional f on X is bounded if |f(x)| ≤c∥x∥for some constant c < ∞. Then there is a smallest value of c, denoted by ∥f∥, so that |f(x)| ≤∥f∥∥x∥. The space of bounded linear functionals f on X is again a normed linear space (in fact a Banach space), called the dual of X and denoted by X∗. Continuing recursively, we may form the second dual X∗∗= (X∗)∗, etc.
Theorem A3.3 (Hahn–Banach) Let X be a normed linear space with a sub-space Y . Then for any g ∈Y ∗, there exists an f ∈X∗with ∥f∥= ∥g∥and f = g on Y .
Appendices 833 Corollary A3.4 (natural embedding) For a normed linear space X, we may define a linear isometry T : X →X∗∗by (Tx)f = f(x), x ∈X, f ∈X∗.
The space X is said to be reflexive if T(X) = X∗∗, so that X and X∗∗are isomorphic, written as X ∼ = X∗∗. The reflexive property extends to any closed linear subspace of X. Familiar examples of reflexive spaces include lp and Lp for p > 1 (but not for p = 1), where the duals equal lq and Lq, respectively, with p−1 + q−1 = 1. For p = 2, we get a Hilbert space H with the unique property H ∼ = H∗.
On a normed linear space X, we may consider not only the norm topology induced by ∥x∥, but also the weak topology generated by the dual X∗. Thus, the weak topology on X∗is generated by the second dual X∗∗. Even more important is the weak ∗topology on X∗, generated by the coordinate maps πx : X∗→R given by πxf = f(x), x ∈X, f ∈X∗.
Lemma A3.5 (topologies on X∗) For a normed linear space X, write T ∗ w, Tw, Tn for the weak ∗, weak, and norm topologies on X∗. Then (i) T ∗ w ⊂Tw ⊂Tn, (ii) T ∗ w = Tw = Tn when X is reflexive.
An inner product on a linear space X is a function ⟨x, y⟩on X2, such that for any x, y, z ∈X and a, b ∈R, • ⟨x, x⟩≥0, with equality iffx = 0, • ⟨x, y⟩= ⟨y, x⟩, • ⟨ax + by, z⟩= a⟨x, z⟩+ b⟨y, z⟩.
The function ∥x∥= ⟨x, x⟩1/2 is then a norm on X, satisfying the Cauchy inequality4 • |⟨x, y⟩| ≤∥x∥∥y∥, x, y ∈X.
If the associated metric is complete, then X is called a Hilbert space5.
If ⟨x, y⟩= 0, we say that x and y are orthogonal and write x ⊥y. For any subset A ⊂H, we write x ⊥A if x ⊥y for all y ∈A. The orthogonal complement A⊥, consisting of all x ∈H with x ⊥A, is a closed linear subspace of H.
A linear functional f on a Hilbert space H is clearly continuous iffit is bounded, in the sense that |f(x)| ≤c∥x∥, x ∈H, for some constant c ≥0, in which case we define ∥f∥as the smallest constant c with this property. The bounded linear functionals form a linear space H∗ with norm ∥f∥, which is again a Hilbert space isomorphic to H.
4also known as the Schwarz or Buniakovsky inequality 5Hilbert spaces appear at many places throughout the book, especially in Chapters 8, 14, 18–19, and 21.
834 Foundations of Modern Probability Lemma A3.6 (Riesz representation6) For functions f on a Hilbert space H, these conditions are equivalent: (i) f is a continuous linear functional on H, (ii) there exists a unique element y ∈H satisfying f(x) = ⟨x, y⟩, x ∈H.
In this case, ∥f∥= ∥y∥.
An ortho-normal system (ONS) in H is a family of orthogonal elements xi ∈H with norm 1. It is said to be complete or to form an ortho-normal basis (ONB) if y ⊥(xi) implies y = 0.
Lemma A3.7 (ortho-normal bases) For any Hilbert space H, we have7 (i) H has an ONB (xi), (ii) the cardinality of (xi) is unique, (iii) every x ∈H has a unique representation x = Σ i bi xi, where bi = ⟨x, xi⟩, (iv) if x = Σi bi xi and y = Σi ci xi, then ⟨x, y⟩= Σ i bi ci, ∥x∥2 = Σ i b2 i .
The cardinality in (ii) is called the dimension of H. Since any finite-dimen-sional Hilbert space is isomorphic8 to a Euclidean space Rn, we may henceforth assume the dimension to be infinite. The simplest infinite-dimensional Hilbert space is l 2, consisting of all infinite sequences x = (xn) in R with ∥x∥2 = Σnx2 n < ∞. Here the inner product is given by ⟨x, y⟩= Σ n≥1 xn yn, x = (xn), y = (yn), and we may choose an ONB consisting of the elements (0, . . . , 0, 1, 0, . . .), with 1 in the n-th position.
Another standard example is L2(μ), for any σ-finite measure μ with infinite support on a measurable space S. Here the elements are (equivalence classes of) measurable functions f on S with ∥f∥2 = μf 2 < ∞, and the inner prod-uct is given by ⟨f, g⟩= μ(fg). Familiar ONB’s include the Haar functions when μ equals Lebesgue measure λ on [0, 1], and suitably normalized Hermite polynomials when μ = N(0, 1) on R.
All the basic Hilbert spaces are essentially equivalent: 6also known as the Riesz–Fischer theorem 7This requires the axiom of choice, assumed throughout this book.
8In other words, there exists a 1−1 correspondence between the two spaces preserving the algebraic structure.
Appendices 835 Lemma A3.8 (separable Hilbert spaces) For an infinite-dimensional Hilbert space H, these conditions are equivalent: (i) H is separable, (ii) H has countably infinite dimension, (iii) H ∼ = l 2, (iv) H ∼ = L2(μ) for any σ-finite measure μ with infinite support.
Most Hilbert spaces H arising in applications are separable, and for simplic-ity we may often take H = l 2, or let H = L2(μ) for a suitable measure μ. For any Hilbert spaces H1, H2, we may form a new Hilbert space H1 ⊗H2, called the tensor product of H1, H2. When H1 = L2(μ1) and H2 = L2(μ2), it may be thought of as the space L2(μ1 ⊗μ2), where μ1 ⊗μ2 is the product measure of μ1, μ2. If f1, f2, . . . and g1, g2, . . . are ONB’s in H1 and H2, the functions hij = fi⊗gj form an ONB in H1⊗H2, where (f ⊗g)(x, y) = f(x)g(y). Iterating the construction, we may form the n-fold tensor powers H⊗n = H ⊗· · · ⊗H, for every n ∈N.
4. Linear operators A linear operator between two normed linear spaces X, Y is a mapping T : X →Y satisfying T(ax + by) = a Tx + b Ty, x, y ∈X, a, b ∈R.
It is said to be bounded 9 if ∥Tx∥≤c ∥x∥for some constant c < ∞, in which case there is a smallest number ∥T∥with this property, so that ∥Tx∥≤∥T∥∥x∥, x ∈X.
When ∥T∥≤1 we call T a contraction operator. The space of bounded linear operators X →Y is denoted by BX,Y , and when X = Y we write BX,X = BX.
Those are again linear spaces, under the operations (T1 + T2) x = T1x + T2x, (c T)x = c (Tx), x ∈X, c ∈R, and the function ∥T∥above is a norm on BX,Y . Furthermore, BX,Y is complete, hence a Banach space, whenever this is true for Y . In particular, we note that BX,R equals the dual space X∗.
Theorem A4.1 (Banach–Steinhaus) Let T ⊂BX,Y , for a Banach space X and a normed linear space Y . Then sup T∈T ∥Tx∥< ∞, x ∈X ⇔ sup T∈T ∥T∥< ∞.
9Bounded operators are especially important in Chapter 26.
836 Foundations of Modern Probability If T ∈BX,Y and S ∈BY,Z, the product ST ∈BX,Z is given by (ST)x = S(Tx), and we note that ∥ST∥≤∥S∥∥T∥. Two operators S, T ∈BX are said to commute if ST = TS. The identity operator I ∈BX is defined by Ix = x for all x ∈X. Furthermore, T ∈BX is said to be idempotent if T 2 = T. For any operators T, T1, T2, . . . ∈BX,Y , we say that Tn converges strongly to T if ∥Tnx −Tx∥→0 in Y, x ∈X.
We also consider convergence in the norm topology, defined by ∥Tn −T∥→0 in BX,Y .
For any A ∈BX,Y , the adjoint A∗of A is a linear map on Y ∗given by (A∗f)(x) = f(Ax), x ∈X, f ∈Y ∗.
If also B ∈BY,Z, we note that (BA)∗= A∗B∗. Further note that I∗= I, where the latter I is given by If = f.
Lemma A4.2 (adjoint operator) For any normed linear spaces X, Y , the map A →A∗is a linear isometry from BX,Y to B Y ∗, X∗.
When A ∈BX,Y is bijective, its inverse A−1 : Y →X is again linear, though it need not be bounded.
Theorem A4.3 (inverses and adjoints) For a Banach space X and a normed linear space Y , let A ∈BX,Y . Then these conditions are equivalent: (i) A has a bounded inverse A−1 ∈B Y, X, (ii) A∗has a bounded inverse (A∗)−1 ∈BX∗, Y ∗.
In that case, (A∗)−1 = (A−1)∗.
For a Hilbert space H we may identify H∗with H, which leads to some simplifications. Then for any A ∈BH there exists a unique operator A∗∈BH satisfying ⟨Ax, y⟩= ⟨x, A∗y⟩, x, y ∈H, which again is called the adjoint of A.
The mapping A →A∗is a linear isometry on H satisfying (A∗)∗= A. We say that A is self-adjoint if A∗= A, so that ⟨Ax, y⟩= ⟨x, Ay⟩. In that case, ∥A∥= sup ∥x∥=1 |⟨Ax, x⟩|.
For any A ∈BH, the null space NA and range RA are given by NA = {x ∈H; Ax = 0}, RA = {Ax; x ∈H}, and we note that (RA)⊥= NA∗and (NA)⊥= RA∗ Appendices 837 Lemma A4.4 (unitary operators10) For any U ∈BH, these conditions are equivalent: (i) UU ∗= U ∗U = I, (ii) U is an isometry on H, (iii) U maps any ONB in H into an ONB.
We state an abstract version of Theorem 1.35: Lemma A4.5 (orthogonal decomposition) Let M be a closed subspace of a Hilbert space H. Then for any x ∈H, we have (i) x has a unique decomposition x = y + z with y ∈M and z = M ⊥, (ii) y is the unique element of M minimizing ∥x −y∥, (iii) writing y = πM x, we have πM ∈BH with ∥πM∥≤1, and πM +πM⊥= I.
Lemma A4.6 (projections11) When A ∈BH, these conditions are equivalent: (i) A = πM for a closed subspace M ⊂H, (ii) A is self-adjoint and idempotent.
In that case, M = RA.
For any sub-spaces M, N ⊂H, we introduce the orthogonal complement M ⊖N = M ∩N ⊥= {x ∈M; x ⊥N}.
When M ⊥N, we further consider the direct sum M ⊕N = {x + y; x ∈M, y ∈N}.
The conditional orthogonality M ⊥R N is defined by12 M ⊥ R N ⇔ πR⊥M ⊥πR⊥N.
We list some classical propositions of probabilistic relevance.
Lemma A4.7 (orthogonality) For any closed sub-spaces M, N ⊂H, these conditions are equivalent: (i) M ⊥N, (ii) πM πN = 0, (iii) πM + πN = πR for some R, In that case, R = M ⊕N.
Lemma A4.8 (commutativity) For any closed sub-spaces M, N ⊂H, these conditions are equivalent: (i) M ⊥ M ∩NN, (ii) πM πN = πN πM, (iii) πM πN = πR for some R, In that case, R = M ∩N.
10Unitary operators are especially important in Chapters 14 and 27–28.
11Projection operators play a fundamental role in especially Chapter 8, where most of the quoted results have important probabilistic counterparts.
12Here our terminology and notation are motivated by Theorem 8.13.
838 Foundations of Modern Probability Lemma A4.9 (tower property) For any closed sub-spaces M, N ⊂H, these conditions are equivalent: (i) N ⊂M, (ii) πM πN = πN, (iii) πN πM = πN, (iv) ∥πN x∥≤∥πM x∥, x ∈H.
The next result may be compared with the probabilistic versions in Corol-lary 25.18 and Theorem 30.11.
Theorem A4.10 (mean ergodic theorem) For a bi-measurable, bijection13 T on (S, μ) with μ ◦f −1 = μ, define Uf = f ◦T for f ∈L2(μ), and let M be the null space of U −I. Then (i) U is a unitary operator on L2(μ), (ii) n−1 Σ k<n f ◦T k = n−1 Σ k<n U kf →πMf, f ∈L2(μ).
We also need to consider possibly unbounded operators14 A between two Banach spaces X, Y , defined on a linear subspace D ⊂X called the domain of A. We say that A is closed if its graph G = (x, Ax); x ∈D is a closed subset of X ×Y , in the product topology for the norm topologies on X, Y . More generally, an operator (A, D) is said to be closable, if the closure ¯ G in X × Y is the graph of a single-valued operator ¯ A, called the closure of A.
Lemma A4.11 (closed graph theorem) For a linear operator A between two Banach spaces X, Y , we have A is bounded ⇔ A is closed.
Lemma A4.12 (closable operators) Let (A, D) be a linear operator between the Banach spaces X, Y . Then these conditions are equivalent: (i) A is closable, (ii) for any x1, x2, . . . ∈D, xn →0 Axn →y ⇒y = 0.
13Though measure-preserving maps are not invertible in general, they can be made so by a suitable randomization. Cf. Lemmas 25.2 and 27.4.
14Unbounded operators play fundamental roles in especially Chapters 17 and 21.
Appendices 839 The domain of an operator may be hard to identify and is often too large for technical convenience. When (A, D) is a closed operator from X to Y , a linear subspace D ⊂D is called a core of A, if the restriction of A to D is closable with closure (A, D).
For a suitable space C of functions on a topological space S, we say that an operator A on C is local, if for any s ∈S with a neighborhood G, we have f = 0 on G ⇒ Af(s) = 0.
Then by linearity, Af(s) depends only on values of f in a neighborhood of s. When S = Rd, this holds for any differential operator A, and the converse assertion is often true under additional assumptions. Local operators play im-portant roles in Chapters 17, 21, and 35.
5. Function and measure spaces Here we first collect some basic facts about the function spaces C T,S or D R+,S, of special importance in probability theory. Though processes with paths in those spaces are considered throughout the book, most topological results men-tioned here are not needed until Chapter 23, where they are fundamental for the theory of convergence in distribution.
Let CT,S be the space of continuous, S-valued functions on T, where S, T are separable, complete metric spaces and T is locally compact. For any func-tions x, x1, x2, . . . ∈CT,S, we define the locally uniform convergence15 xn ul − →x by sup t∈K ρ(xn t , xt) →0, K ∈KT, (1) where ρ is the metric on S and KT denotes the class of compact sets K ⊂T.
The distance in (1) defines a separable and complete metric on CK,S, and we may construct a similar metric ˆ ρ on CT,S by applying (1) to a sequence of compact sets16 Kn ∈KT with Ko n ↑T. The evaluation maps on CT,S are given by πt : x →xt for all t ∈T.
Lemma A5.1 (locally uniform topology on CT,S) Let S, T be separable, com-plete metric spaces, with T locally compact. Then there exists a topology T on CT,S, such that (i) T induces the convergence xn ul − →x in (1), (ii) CT,S is Polish under T , (iii) T generates the Borel σ-field σ{πt; t ∈T}.
Proof: We need to prove only (iii), the remaining claims being obvious.
The maps πt are continuous, hence Borel measurable, and so the generated 15Here and below, the associated net convergence is given by the same formulas.
16which exist since T is locally compact 840 Foundations of Modern Probability σ-field C is contained in BCT,S. For the converse, we need to show that any open subset G ⊂CT,S lies in C. Then choose a separable and complete metric ˆ ρ in CT,S, and note that G is a countable union of open balls Br x = y ∈CT,S; ˆ ρ(x, y) < r .
The latter lie in C, since for any countable dense set T ′ ⊂T, ¯ Br x = ∩ t∈T ′ y ∈CT,S; ρ(xt, yt) ≤r .
2 We state a version of the classical Arzela–Ascoli criterion for compactness in CT,S. For any metrics d on T and ρ on S, we define the associated moduli of continuity by wK(x, h) = sup ρ(xs, xt); s, t ∈K, d(s, t) ≤h , h > 0, K ∈KT.
Theorem A5.2 (compactness in CT,S, Arzel a, Ascoli) Let A ⊂CT,S, where S, T are separable, complete metric spaces with T locally compact, and fix a dense set T ′ ⊂T. Then A is relatively compact in the locally uniform topology iff (i) πtA is relatively compact in S for every t ∈T ′, (ii) lim h→0 sup x∈A wK(x, h) = 0, K ∈KT.
In that case, (iii) ∪ t∈KπtA is relatively compact in S for all K ∈KT.
Proof: This is essentially a special case of Theorem A5.4 below. Various versions are proved in textbooks on real analysis. Probabilists may consult Dudley (1989), Section 2.4.
2 Turning to spaces of functions with possible jump discontinuities, let D R+,S be the space of rcll17 functions on R+ with values in a separable, complete met-ric space (S, ρ). Note that such functions x can have only jump discontinuities, and that only finitely many jumps of size ρ(xt, xt−) ≥ε may occur in every finite interval. Here the topology of locally uniform convergence xn ul − →x is less useful and technically a bit awkward, since the separability of S may not be preserved in D R+,S.
For a more useful and convenient topology, we allow the path of each x to be shifted in time by an increasing bijection λ on R+, so that λ is continuous and strictly increasing with λ0 = 0. For the associated Skorohod convergence xn s →x in D R+,S, we require λn ul − →ι, xn ◦λn ul − →x, (2) for some bijections λn as above, where ι denotes the identity map on R+.
Though clearly xn ul − →x implies xn s →x, the converse implication is false. We state the basic properties of the corresponding Skorohod J1-topology: 17right-continuous with left-hand limits, often written as the French acronym cadl ag Appendices 841 Lemma A5.3 (Skorohod topology on D R+,S, Skorohod, Prohorov, Kolmogo-rov) For any separable, complete metric space S, there exists a topology T on D R+,S, such that (i) T induces the convergence xn s →x in (2), (ii) D R+,S is Polish under T , (iii) T generates the Borel σ-field σ{πt; t ≥0}.
Proof: Detailed proofs of various versions appear in Billingsley (1968), pp.
111–116, Ethier & Kurtz (1986), pp. 116–122, and Jacod & Shiryaev (1987), pp. 291–301.
2 To state an associated compactness criterion, we define some modified mod-uli of continuity on D R+,S by ˜ wt(x, h) = inf (Ik) max k sup r,s∈Ik ρ(xr, xs), x ∈D R+,S, t, h > 0, (3) where the infimum extends over all partitions of the interval [0, t) into sub-intervals Ik = [u, v) with v −u ≥h when v < t. Note that ˜ wt(x, h) →0 as h →0 for fixed x ∈D R+,S and t > 0.
Theorem A5.4 (compactness in D R+,S, Prohorov) Let A ⊂D R+,S for a separable, complete metric space S, and fix a dense set T ⊂R+. Then A is relatively compact in the Skorohod topology iff (i) πtA is relatively compact in S for all t ∈T, (ii) lim h→0 sup x∈A ˜ wt(x, h) = 0, t > 0.
(4) In that case, (iii) ∪ s≤tπsA is relatively compact in S for all t ≥0.
Proof: Detailed proofs are implicit in Ethier & Kurtz (1986), pp. 122– 127. Related versions appear in Billingsley (1968), pp. 116–120, and Jacod & Shiryaev (1987), pp. 292–298.
2 We turn to some measure spaces. Given a separable, complete metric space S, let MS be the class of locally finite measures μ on S, so that μB < ∞for all bounded Borel sets B. For any μ, μ1, μ2, . . . ∈MS, we define the vague convergence μn v →μ by μnf →μf, f ∈ˆ CS, (5) where ˆ CS is the class of bounded, continuous functions f ≥0 on S with bounded support. The maps πf : μ →μf induce the vague topology18 on MS: 18When S is locally compact, it is equivalent to require the supports of f to be compact.
In general, that would give a smaller and less useful topology.
842 Foundations of Modern Probability Lemma A5.5 (vague topology on MS) For a separable, complete metric space S, there exists a topology T on MS, such that (i) T induces the convergence μn v →μ in (5), (ii) MS is Polish under T , (iii) T generates the Borel σ-field σ{πf; f ∈ˆ CS}.
Proof: See K(17), pp. 111–117.
2 On the sub-class M b S of bounded measures on S, we also consider the weak topology and associated weak convergence μn w →μ, given by (5) with ˆ CS replaced by the class CS of bounded, continuous functions on S. No further analysis is required, since this agrees with the vague topology and convergence, when the metric ρ on S is replaced by the equivalent metric ρ ∧1, so that all sets in S become bounded.
Theorem A5.6 (vague compactness in MS, Prohorov, Matthes et al.) Let A ⊂MS for a separable, complete metric space S. Then A is vaguely relatively compact iff (i) sup μ∈A μB < ∞, B ∈ˆ S, (ii) inf K∈K sup μ∈A μ(B \ K) = 0, B ∈ˆ S.
In particular, a set A ⊂M b S is weakly relatively compact iff(i)−(ii) hold with B = S.
Proof: See K(17), pp. 114, 116.
2 We finally consider spaces of measure-valued rcll functions. Here we may characterize compactness in terms of countably many one-dimensional projec-tions. Here we write D R+,S = DS for simplicity.
Theorem A5.7 (measure-valued functions) Given an lcscH space S, there exist some f1, f2, . . . ∈ˆ CS, such that these conditions are equivalent, for any A ⊂DMS: (i) A is relatively compact in DMS, (ii) Afj = {xfj; x ∈A} is relatively compact in D R+ for every j ∈N.
Proof: If A is relatively compact, then so is Af for every f ∈ˆ CS, since the map x →xf is continuous from DMS to D R+. To prove the converse, we may choose a dense collection f1, f2, . . . ∈ˆ CS, closed under addition, such that Afj is relatively compact for every j. In particular, supx∈A xtfj < ∞for all t ≥0 and j ∈N, and so by Theorem A5.6 the set {xt; x ∈A} is relatively compact in MS for every t ≥0. By Theorem A5.4 it remains to verify (4), where ˜ w may be defined in terms of the complete metric Appendices 843 ρ(μ, ν) = Σ k≥1 2−k μfk −νfk ∧1 , μ, ν ∈MS.
(6) If (4) fails, then either we may choose some xn ∈A and tn →0 with lim supn ρ(xn tn, xn 0) > 0, or else there exist some xn ∈A and bounded st < tn < un with un −sn →0 such that lim sup n→∞ ρ(xn sn, xn tn) ∧ρ(xn tn, xn un) > 0.
(7) In the former case, (6) yields lim supn |xn tnfj −xn 0fj| > 0 for some j ∈N, which contradicts the relative compactness of Afj. Assuming (7) instead, we have by (6) some i, j ∈N with lim sup n→∞ xn snfi −xn tnfi ∧ xn tnfj −xn unfj > 0.
(8) Now for any a, a′, b, b′ ∈R, 1 2 |a| ∧|b′| ≤ |a| ∧|a′| ∨ |b| ∧|b′| ∨ |a + a′| ∧|b + b′| .
Since the set {fk} is closed under addition, (8) yields the same relation with a common i = j. But then (4) fails for Afi, which contradicts the relative com-pactness of Afi by Theorem A5.4. Thus, (4) fails for A, and so A is relatively compact.
2 6. Classes and spaces of sets Given an lcscH space S, we introduce the classes G, F, K of open, closed, and compact subsets, respectively. Here we may regard F as a space in its own right, endowed with the Fell topology and associated convergence19. To introduce the latter, choose a metric ρ on S making every closed ρ -ball com-pact, and define ρ(s, F) = inf x∈F ρ(s, x), s ∈S, F ∈F.
Then for any F, F1, F2, . . . ∈F, we define Fn f →F by the condition ρ(s, Fn) →ρ(s, F), s ∈S.
(9) We show that (9) is independent of the choice of ρ, and is induced by the topology generated by the sets F ∈F; F ∩G ̸= ∅ , G ∈G, F ∈F; F ∩K = ∅ , K ∈K.
(10) Theorem A6.1 (Fell topology on F) Let F be the class of closed sets in an lcscH space S. Then there exists a topology T on F, such that (i) T generates the convergence Fn f →F in (9), (ii) F is compact and metrizable under T , (iii) T is generated by the sets (10), (iv) F ∈F; F ∩B ̸= ∅ is universally Borel measurable for every B ∈S.
19This is important for our discussion of random sets in Chapters 23 and 34.
844 Foundations of Modern Probability Proof, (i)−(iii): We show that the Fell topology in (iii) is generated by the maps F →ρ(s, F), s ∈S. The convergence criterion in (i) then follows, once the metrization property is established. To see that the stated maps are continuous, put Br s = {t ∈S; ρ(s, t) < r}, and note that {F; ρ(s, F) < r} = {F; F ∩Br s ̸= ∅}, {F; ρ(s, F) > r} = {F; F ∩¯ Br s = ∅}.
Here the sets on the right are open, by the definition of the Fell topology and the choice of ρ. Thus, the Fell topology contains the ρ -topology.
To prove the converse, fix any F ∈F and a net {Fi} ⊂F with directed index set (I, ≺), such that Fi →F in the ρ -topology. To see that convergence holds even in the Fell topology, let G ∈G be arbitrary with F ∩G / ∈∅. Fix any s ∈F ∩G. Since ρ(s, Fi) →ρ(s, F) = 0, we may further choose some si ∈Fi with ρ(s, si) →0. Since G is open, there exists an i ∈I with sj ∈G for all j ≻i. Then also Fj ∩G / ∈∅for all j ≻i.
Next let K ∈K with F ∩K = ∅. Define rs = 1 2 ρ(s, F) for each s ∈K, and put Gs = Bs,rs. Since K is compact, it is covered by finitely many balls Gsk.
For each k we have ρ(sk, Fi) →ρ(sk, F), and so there exists an ik ∈I with Fj ∩Gsk = ∅for all j ≻ik. Letting i ∈I with i ≻ik for all k, it is clear that Fj ∩K = ∅for all j ≻i.
(ii) Fix a countable, dense set D ⊂S, and assume that ρ(s, Fi) →ρ(s, F) for all s ∈D. For any s, s′ ∈S, ρ(s, Fj) −ρ(s, F) ≤ ρ(s′, Fj) −ρ(s′, F) + 2ρ(s, s′).
Given any s and ε > 0, we can make the left-hand side < ε by choosing s′ ∈D with ρ(s, s′) < ε/3, and then letting i ∈I be such that |ρ(s′, Fj) −ρ(s′, F)| < ε/3 for all j ≻i.
Hence, the Fell topology is also generated by the maps F →ρ(s, F) for all s ∈D. But then F is homeomorphic to a subset of ¯ R∞ + , which is second-countable and metrizable.
To see that F is compact, it suffices to show that every sequence (Fn) ⊂F contains a convergent sub-sequence. Then choose a sub-sequence with ρ(s, Fn) converging in ¯ R+ for all s ∈D, and hence also for all s ∈S.
Since the functions ρ(s, Fn) are equi-continuous, the limit f is again continuous, and so the set F = {s ∈S; f(s) = 0} is closed.
To obtain Fn →F, we need to show that, whenever F ∩G ̸= ∅or F ∩K = ∅ for some G ∈G or K ∈K, the same relation holds eventually even for Fn. In the former case, fix any s ∈F ∩G, and note that ρ(s, Fn) →f(s) = 0. Hence, we may choose some sn ∈Fn with sn →s, and since sn ∈G for large n, we get Fn ∩G ̸= ∅. In the latter case, assume that instead Fn ∩K ̸= ∅along a sub-sequence. Then there exist some sn ∈Fn ∩K, and so sn →s ∈K along a further sub-sequence. Here 0 = ρ(sn, Fn) →ρ(s, F), which yields the contradiction s ∈F ∩K.
Appendices 845 (iv) The mapping (s, F) →ρ(s, F) is jointly continuous, hence Borel mea-surable. Now S and F are both separable, and so the Borel σ-field in S × F agrees with the product σ-field S ⊗BF. Since s ∈F iffρ(s, F) = 0, it follows that {(s, F); s ∈F} belongs to S ⊗BF. Hence, so does {(s, F); s ∈F ∩B} for arbitrary B ∈S. The assertion now follows by Theorem A1.2.
2 Say that a class U ⊂ˆ S is separating, if for any K ⊂G with K ∈K and G ∈G, there exists a U ∈U with K ⊂U ⊂G. A class I ⊂ˆ S is pre-separating if the finite unions of I -sets form a separating class. When S is Euclidean, we typically choose I to be a class of intervals or rectangles and U as the corresponding class of finite unions.
Lemma A6.2 (separation) For any monotone function h : ˆ S →R, we have the separating class ˆ Sh = B ∈ˆ S; h(Bo) = h( ¯ B) .
Proof: Fix a metric ρ in S such that every closed ρ -ball is compact, and let K ∈K and G ∈G with K ⊂G. For any ε > 0, define Kε = {s ∈S; d(s, K) < ε} and note that ¯ Kε = {s ∈S; ρ(s, K) ≤ε}. Since K is compact, we have ρ(K, Gc) > 0, and so K ⊂Kε ⊂G for sufficiently small ε > 0. Fur-thermore, the monotonicity of h yields Kε ∈ˆ Sh for almost every ε > 0.
2 We often need the separating class to be countable.
Lemma A6.3 (countable separation) Every separating class U ⊂ˆ S contains a countable separating subclass.
Proof: Fix a countable topological base B ⊂ˆ S, closed under finite unions.
For every B ∈B, choose some compact sets KB,n ↓¯ B with Ko B,n ⊃¯ B. Then for every pair (B, n) ∈B × N, choose UB,n ∈U with ¯ B ⊂UB,n ⊂Ko B,n. The family {UB,n} is clearly separating.
2 To state the next result, fix any metric spaces S1, S2, . . . , and introduce the product spaces Sn = S1 × · · · × Sn and S = S1 × S2 × · · · , endowed with their product topologies. For any m < n < ∞, let πm and πmn be the natural projections of S or Sn onto Sm. Say that the sets An ⊂Sn, n ∈N, form a projective sequence if πmnAn ⊂Am for all m ≤n. Their projective limit in S is then defined as the set A = n π−1 n An.
Lemma A6.4 (projective limits) For any metric spaces S1, S2, . . . , consider a projective sequence of non-empty, compact sets Kn ⊂S1 × · · · × Sn, n ∈N.
Then the projective limit K = n π−1 n Kn is again non-empty and compact.
Proof: Since the Kn are non-empty, we may choose some sequences xn = (xn m) ∈π−1 n Kn, n ∈N. By the projective property of the sets Km, we have πmxn ∈Km for all m ≤n. In particular, the sequence x1 m, x2 m, . . . is relatively 846 Foundations of Modern Probability compact in Sm for each m ∈N, and so a diagonal argument yields convergence xn →x = (xm) ∈S along a sub-sequence N ′ ⊂N, which implies πmxn →πmx along N ′ for each m ∈N. Since the Km are closed, we obtain πmx ∈Km for all m, and so x ∈K, which shows that K is non-empty. The compactness of K may be proved by a similar argument, where we assume that x1, x2, . . . ∈K. 2 We also include a couple of estimates for convex sets, needed in Chapter 25. Here ∂εB denotes the ε-neighborhood of the boundary ∂B.
Lemma A6.5 (convex sets) If B ⊂Rd is convex and ε > 0, then (i) λd(B −B) ≤ 2d d λdB, (ii) λd(∂εB) ≤2 (1 + ε/rB)d −1 λdB.
Proof: For (i), see Rogers & Shephard (1958). Part (ii) is elementary.
2 Throughout this book, we assume the Zermelo-Fraenkel axioms (ZF) of set theory, amended with the Axiom of Choice (AC), stated below. It is known that, if the ZF axioms are consistent20 (non-contradictory), then so is the ZF system amended with AC.
Axiom of Choice, AC: For any class of sets St ̸= ∅, t ∈T, there exists a function f on T with f(t) ∈St, t ∈T.
In other words, for any collection of non-empty sets St, the product set Xt St is again non-empty. We often need the following equivalent statement involving partially ordered sets (S, ≺). When A ⊂S is linearly ordered under ≺, we say that b ∈S is an upper bound of A if s≺b for all s ∈A. Furthermore, an element m ∈S is said to be maximal if m≺s ∈S ⇒s = m.
Lemma A6.6 (Zorn’s lemma, Kuratowski) Let (S, ≺) be a partially ordered set. Then (i) ⇒(ii), where (i) every linearly ordered subset A ⊂S has an upper bound b ∈S, (ii) S has a maximal element m ∈S.
Proof: See Dudley (1989) for a comprehensive discussion of AC and its equivalents.
2 20The consistency of ZF must be taken on faith, since no proof is possible. It had better be true, or else the entire edifice of modern mathematics would collapse. Most logicians seem to agree that even questions of truth are meaningless.
Appendices 847 7. Differential geometry Here we state some basic facts from intrinsic21 differential geometry, needed in Chapter 35. A more detailed discussion of the required results is given by Emery (1989).
Lemma A7.1 (tangent vectors) Let M be a smooth manifold with class S of smooth functions M →R, and consider a point-wise smooth map u : S →S.
Then these conditions are equivalent22: (i) u is a differential operator 23 of the form uf = ui ∂if, f ∈S, (ii) u is linear and satisfies uf 2 = 2f(uf), f ∈S, (iii) for any smooth functions f = (f 1, . . . , f n): M →Rn and ϕ: Rn →R, u(ϕ ◦f) = (∂iϕ ◦f) uf i.
Proof, (i) ⇒(iii): Consider the restriction to a local chart U, and write f i = ˜ f i ◦ψ in terms of some local coordinates ψ1, . . . , ψn. Using (i) (twice) along with the elementary chain rule, we get u(ϕ ◦f) = (uψi) ∂i(ϕ ◦˜ f) ◦ψ = (uψi) ∂jϕ ◦˜ f ◦ψ ∂i ˜ f j ◦ψ = ∂jϕ ◦f (uψi) ∂i ˜ f j ◦ψ = ∂jϕ ◦f (uf j).
(iii) ⇒(ii): Taking n = 2 and ϕ(x, y) = ax + by yields the linearity of u.
Then take n = 1 and ϕ(x) = x2 to get the relation in (ii).
(ii) ⇒(i): Condition (ii) gives u1 = 0, and by polarization u(fg) = f(ug)+ g(uf), which extends by iteration to u(fgh) = fg(uh) + gh(uf) + hf(ug), f, g, h ∈S.
(11) If f ∈S with f = 0 in a neighborhood of a point x ∈M, we may choose g ∈S with g(a) = 0 and fg = f, where the former formula yields uf(x) = 0, showing that the operator u is local. Thus, we may henceforth take f to be supported by a single chart U, and write f = ˜ f ◦ϕ for some smooth functions ϕ: U →Rn and ˜ f : Rn →R, where the ϕi form local coordinates on U. Fixing any a ∈U, we may assume that ϕ(a) = 0. By Taylor’s formula, ˜ f(r) = ˜ f(0) + ri∂i ˜ f(0) + rirjhij(r), r ∈Rn, 21meaning that no notion or result depends on the choice of local coordinates 22Here and below, summation over repeated indices is understood.
23In local coordinates, we can identify u with the geometric vector u = (u1, . . . , un).
848 Foundations of Modern Probability for some smooth functions hij on Rn. Inserting r = ϕ(x), applying u to both sides, and noting that uc = 0 for constants c, we get uf(x) = uϕi(x) ∂i ˜ f(0) + u ϕiϕj (hij ◦ϕ) (x).
Taking x = a and using (11), we obtain uf(a) = uϕi(a) ∂i ˜ f(0), and the assertion follows with ui = uϕi.
2 The space of tangent vectors at x is denoted by TxS, and a smooth function of tangent vectors is called a (tangent) vector field on S. The space of vector fields u, v, . . . on S will be denoted by TS. A cotangent at x is an element of the dual space T ∗ x, and smooth functions α, β, . . . of cotangent vectors are called (simple) forms. Similarly, smooth functions b, c, . . . on the dual T ∗2 S of the product space T 2 S are called bilinear forms.
The dual coupling of simple or bilinear forms α or b with vector fields u, v is often written as ⟨α, u⟩= α(u), ⟨b, (u, v)⟩= b(u, v).
A special role is played by the forms d f, given by ⟨d f, u⟩= d f(u) = uf, f ∈S.
For any simple forms α, β, we define a bilinear form α ⊗β by (α ⊗β)(u, v) = ⟨α, u⟩⟨β, v⟩ = α(u)β(v), so that in particular (d f ⊗dg)(u, v) = (uf)(vg).
In general, we have the following representations24 in terms of simple or bilinear forms d f and d f ⊗dg.
Lemma A7.2 (simple and bilinear forms) For a smooth manifold S, (i) every form on S is a finite sum α = a i d f i with a i, f i ∈S, (ii) every bilinear form on S is a finite sum b = b ij (d f i ⊗dgj) with b ij, f i, gj ∈S.
Such simple and bilinear forms can be represented as follows in terms of local coordinates: 24Note that the coefficients are functions, not constants.
Appendices 849 Lemma A7.3 (coordinate representations) Let S be a smooth manifold with local coordinates x1, . . . , xn. Then for any f, g, h ∈S, (i) fdg = (f∂ig)(x) dxi, (ii) f(dg ⊗dh) = f(∂ig)(∂jh) (x) (dxi ⊗dxj).
For a smooth mapping ϕ between two manifolds S, S′, the push-forward of a tangent vector u ∈TxS into a tangent vector ϕ ◦u ∈TϕxS′ is given by25 (ϕ ◦u)f = u(f ◦ϕ), d f(ϕ ◦u) = d(f ◦ϕ)u.
The dual operations of pull-back of a simple or bilinear form α or b on S, here denoted by ϕ∗or ϕ∗2, respectively, are given, for tangent vectors u, v at suitable points, by (ϕ∗α)u = α(ϕ ◦u), (ϕ∗2b)(u, v) = b(ϕ ◦u, ϕ ◦v).
(In traditional notation, the latter formulas may be written as # (T ∗ xϕ) α, A $ x = # α, (Txϕ)A $ ϕ(x), (T ∗ϕ ⊗T ∗ϕ) b(x)(A, B) = b(ϕ(x)) (Txϕ)A, (Txϕ)B .) Here we list some useful pull-back formulas, first for simple forms, ϕ∗(fα) = (f ◦ϕ)(ϕ∗α), ϕ∗(d f) = d(f ◦ϕ), and then for bilinear forms, ϕ∗2(fb) = (f ◦ϕ)(ϕ∗2b), ϕ∗2(α ⊗β) = ϕ∗α ⊗ϕ∗β, (ψ ◦ϕ)∗2 = ϕ∗2 ◦ψ∗2.
A Riemannian metric on S is a symmetric, positive-definite, bilinear form ρ on S, determining an inner product ⟨u | v⟩= ρ(u, v) with associated norm ∥u∥on S. For any f ∈S, we define the gradient grad f = ˆ f as the unique vector field on S satisfying ρ( ˆ f, u) = ⟨ˆ f | u⟩ = d f(u) = uf, for any vector field u on S. The inner product on TS may be transferred to the forms d f through the relation ⟨d f| dg⟩= ⟨ˆ f | ˆ g⟩, so that26 25To facilitate the access for probabilists, we often use a simplified notation. Traditional versions of the stated formulas are sometimes inserted for reference and comparison.
26The mapping d f →ˆ f extends immediately to a linear map α →α♯between T ∗ S and TS with inverse u →u♭, sometimes called the musical isomorphisms. Unfortunately, the nice interpretation of ˆ f as a gradient vector seems to be lost in general.
850 Foundations of Modern Probability ˆ fg = ˆ gf = ⟨ˆ f| ˆ g⟩ = ⟨d f| dg⟩.
Letting ρ = ρij(dxi ⊗dxj) in local coordinates and writing (ρij) for the inverse of the matrix (ρij), we note that ⟨u | v⟩= ρij uivj, ⟨ˆ f | ˆ g⟩= ρij ˆ fi ˆ gj, ˆ f i = ρij ∂jf, ∂if = ρij ˆ f j.
For special purposes, we also need some Lie derivatives. Since the formal definitions may be confusing for the novice27, we list only some basic formulas, which can be used recursively to calculate the Lie derivatives along a given vector field u of a smooth function f, another vector field v, a form α, or a bilinear form b.
Theorem A7.4 (Lie derivatives) (i) Luf = uf, (ii) Luv = uv −vu = [u, v], (iii) ⟨Luα, v⟩= Lu⟨α, v⟩−⟨α, Luv⟩, (iv) (Lub)(v, w) = u{b(v, w)} −b (Luv, w) −b (v, Luw), Proof: See Emery (1989), pp. 16–19.
2 Thus, Luf is simply the vector field u itself, applied to the function f. The u -derivative of a vector field v equals the commutator or Lie bracket [u, v].
Since the u -derivative of a simple or bilinear form α or b can be calculated recursively from (i)−(iv), we never need to go back to the underlying definitions in terms of derivatives. From the stated formulas, it is straightforward to derive the following useful relation: Corollary A7.5 (transformation rule) For any smooth function f, bilinear form b, and vector field u, we have Lfub = fLub + d f ⊗b(u, ·) + b(·, u) ⊗d f.
Proof: See Emery (1989), p. 19.
2 Exercise 1. Explain in what sense we can think of ξ in Theorem A1.2 as a measurable selection. Further state the result in terms of random sets.
2. A partially ordered set I is said to be linearly ordered if, for any two elements i, j ∈I, we have either i ≺j or j ≺i. Give an example of a partially ordered set that is directed by not linearly ordered.
27and are presumably well-known to the experts Appendices 851 3. For a topological space S, give an example of a net in S that is not indexed by a linearly ordered set I.
4. Give an example of a Polish space that is not locally compact. (Hint: This is true for for many function spaces.) 5. Give an example of a convex functional on Rd that is not linear. (Hint: If M is a convex body in Rd, then the functional p(x) = inf{r > 0; (x/r) ∈M} is convex.
When is it linear?) 6. Give examples of two different norms in Rd. Then give the geometric meaning of a linear functional on Rd. Finally, for each of the norms, explain the meaning of the Hahn–Banach theorem, and show that the claimed extension of f may not be unique. (Hint: The lp norms are different for 1 ≤p ≤∞. We may think of the dual as a space of vectors.) 7. For Rd with the usual ‘dot’ product, give examples of two different ONBs, and explain the geometric meaning of the statements in Lemma A3.7.
8. In the Euclidean space Rd, give the geometric and/or algebraic meaning of adjoints, unitary operators, and projections. Further explain in what sense geometric projections are self-adjoint and idempotent, and give the geometric meaning of the various propositions involving projection operators.
9.
Prove Lemmas A4.7, A4.8, and A4.9.
(Hint: Here Lemma A4.6 may be helpful.) 10. Show that the operator U in Theorem A4.10 is unitary, and prove the result.
Then explain how the statement is related to Theorems 25.18 and 30.11.
11.
Prove Lemma A6.5 (ii).
(Hint (Day): First show that if Sr ⊂B, then B + Sε ⊂(1 + ε/r)B, where Sr denotes an r -ball around 0.) Notes and References Here my modest aims are to trace the origins of some basic ideas in each chapter1, to give references for the main results cited in the text, and to suggest some literature for further reading. No completeness is claimed, and knowledgeable readers will notice many errors and omissions, for which I apologize in advance. I also inserted some footnotes with short comments about the lives and careers of a few notable people2. Here my selection is very subjective, and many others could have been included.
1. Sets and functions, measures and integration The existence of non-trivial, countably additive measures was first discov-ered by Borel (1895/98), who constructed Lebesgue measure on the Borel σ-field in R. The corresponding integral was introduced by Lebesgue3 (1902/04), who also established the dominated convergence theorem. The monotone con-vergence theorem and Fatou’s lemma were later obtained by Levi (1906a) and Fatou (1906), respectively. Lebesgue also introduced the higher-dimensional Lebesgue measure and proved a first version of Fubini’s theorem, subsequently generalized by Fubini (1907) and Tonelli (1909). The integration theory was extended to general measures and abstract spaces by many authors, including Radon (1913) and Fr´ echet (1928).
The norm inequalities in Lemma 1.31 were first noted for finite sums by H¨ older (1889) and Minkowski (1907), respectively, and were later extended to integrals by Riesz (1910). Part (i) for p = 2 goes back to Cauchy (1821) for finite sums and to Buniakowsky (1859) for integrals. The Hilbert space projection theorem can be traced back to Levi (1906b).
Monotone-class theorems were first established by Sierpi´ nski (1928). Their use in probability theory goes back to Halmos (1950), Loeve (1955), and Dynkin (1961). Mazurkiewicz (1916) proved that a topological space is Polish iffit is homeomorphic to a countable intersection of open sets in [0, 1]∞.
The fact that every uncountable Borel set in a Polish space is Borel isomorphic to 2∞was proved independently by Alexandrov (1916) and Hausdorff (1916). Careful proofs of both results appear in Dudley (1989). Localized Borel spaces were used systematically in K(17).
Most results in this chapter are well known and can be found in any textbook on real analysis.
Many graduate-level probability texts, such as Billingsley (1995) and C ¸inlar (2011), contain introductions to measure 1A comprehensive history of modern probability theory still remains to be written.
2I included only people who are no longer alive.
3Henri Lebesgue (1875–1941), French mathematician, along with Borel a founder of measure theory, which made modern probability possible.
853 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 854 Foundations of Modern Probability theory. More advanced or comprehensive accounts are given, with numerous remarks and references, by Dudley (1989) and Bogachev (2007).
We are tacitly adopting ZFC or ZF+AC—the Zermelo–Fraenkel system amended with the Axiom of Choice—as the axiomatic foundation of modern analysis and probability. See, e.g., Dudley (1989) for a detailed discussion.
2. Measure extension and decomposition As already mentioned, Borel (1895/98) was the first to prove the existence of one-dimensional Lebesgue measure. The modern construction via outer mea-sures in due to Carath´ eodory (1918). Functions of bounded variation were introduced by Jordan (1881), who proved that any such function is the dif-ference of two non-decreasing functions. The corresponding decomposition of signed measures was obtained by Hahn (1921). Integrals with respect to non-decreasing functions were defined by Stieltjes (1894), but their importance was not recognized until Riesz4 (1909b) proved his representation theorem for linear functionals on C[0, 1]. The a.e. differentiability of a function of bounded variation was first proved by Lebesgue (1904).
Vitali (1905) noted the connection between absolute continuity and the existence of a density. The Radon–Nikodym theorem was then proved in in-creasing generality by Radon (1913), Daniell (1920), and Nikodym (1930).
A combined proof that also yields the Lebesgue decomposition was devised by von Neumann. Atomic decompositions and factorial measures appear in K(75/76) and K(83/86), respectively, and the present discussion is adapted from K(17).
Constructions of Lebesgue measure and proofs of the Radon–Nikodym the-orem and Lebesgue’s differentiation theorem are given in most textbooks on real analysis. Detailed accounts adapted to the needs of probabilists appear in, e.g., Lo eve (1977), Billingsley (1995), and Dudley (1989).
3. Kernels, disintegration, and invariance Though kernels have long been used routinely in many areas of probability theory, they are hardly mentioned in most real analysis or early probability texts. Their basic properties were listed in K(97/02), where the basic ker-nel operations are also mentioned. Though the disintegration of measures on a product space is well known and due to Doob (1938), the associated dis-integration formula is rarely stated and often used without proof, an early exception being K(75/76). Partial disintegrations go back to K(83/86). The theory of iterated disintegration was developed in K(10/11b), for applications to Palm measures.
Invariant measures on special groups were identified by explicit computa-tion by many authors, including Hurwitz (1897). Haar (1933) proved the 4Riesz, Frigyes (1880–1956) and Marcel (1886–1969), Hungarian mathematicians and brothers, making fundamental contributions to functional analysis, potential theory, and other areas. Among their many students, we note R´ enyi (Frigyes) and Cram´ er (Marcel).
Historical Notes and References 855 existence of invariant measures on a general lcscH group. Their uniqueness was then established by Weil (1936/40), who also extended the theory to homogeneous spaces. The present representation of invariant measures, under a non-topological, proper group action, was established in K(11a). The use-fulness of s-finite measures was recognized by Sharpe (1988) and Getoor (1990).
Invariant disintegrations were first used to construct Palm distributions of stationary point processes. Here the regularization approach goes back to Ryll-Nardzewski (1961) and Papangelou (1974a), whereas the simple skew factorization was first noted by Matthes (1963) and further developed by Mecke (1967) and Tortrat (1969). The general factorization and regu-larization results for invariant measures on a product space were obtained in K(07).
The notions of strict and weak asymptotic invariance, going back to Do-brushin (1956), were further studied by Debes et al. (1970/71) and Matthes et al. (1974/78). Stone (1968) noted that absolute continuity implies local invariance, and the latter notion was studied more systematically in K(78b).
The existence and uniqueness of Haar measures appears, along with much related material, in many textbooks on real or harmonic analysis, such as in Hewitt & Ross (1979). The existence of regular conditional distributions is proved in most graduate-level probability texts, though often without a statement of the associated disintegration formula. For a more detailed and comprehensive account of the remaining material, we refer to K(17).
4. Processes, distributions, and independence Countably additive probability measures were first used by Borel5 (1909/ 24), who constructed random variables as measurable functions on the Lebesgue unit interval and proved Theorem 4.18 for independent events.
Cantelli (1917) noticed that the ‘easy’ part remains true without the independence as-sumption. Lemma 4.5 was proved by Jensen (1906), after a special case had been noted by H¨ older.
The modern framework—with random variables as measurable functions on an abstract probability space (Ω, A, P) and with expected values as P-integrals over Ω—was used implicitly by Kolmogorov from (1928) on. It was later formalized in the monograph of Kolmogorov6 (1933c), which also contains the author’s 0−1 law, discovered long before the 0−1 law of Hewitt 5´ Emile Borel (1871–1956), French mathematician, was the first to base probability theory on countably additive measures, 24 years before Kolmogorov’s axiomatic foundations, which makes him the true founder of the subject.
6Andrey Kolmogorov (1903–87), Russian mathematician, making profound contribu-tions to many areas of mathematics, including probability. His 1933 paper was revolution-ary, not only for the axiomatization of probability theory, which was essentially known since Borel (1909), but also for his definition of conditional expectations, his 0−1 law, and his existence theorem for processes with given finite-dimensional distributions. As a leader of the Moscow probability school, he had countless of students.
856 Foundations of Modern Probability & Savage (1955).
Early work in probability theory deals mostly with properties depending only on the finite-dimensional distributions. Wiener (1923) was the first to construct the distribution of a process as a measure on a function space. The general continuity criterion in Theorem 4.23, essentially due to Kolmogorov, was first published by Slutsky (1937), with minor extensions added by Loeve (1955) and Chentsov (1956). The search for general regularity properties was initiated by Doob (1937/47). It soon became clear, through the work of L´ evy (1934/37), Doob (1951/53), and Kinney (1953), that most processes of interest have right-continuous versions with left-hand limits.
Introductions to measure-theoretic probability are given by countless au-thors, including Breiman (1968), Chung (1974), Billingsley (1995), and C ¸inlar (2011).
Some more specific regularity properties are discussed in Lo eve (1977) and Cram´ er & Leadbetter (1967). Earlier texts tend to give more weight to distribution functions and their densities, less to measures and σ-fields.
5. Random sequences, series, and averages The weak law of large numbers was first obtained by Bernoulli (1713), for the sequences named after him. More general versions were then established with increasing rigor by Bienaym´ e (1853), Chebyshev (1867), and Markov (1899). A necessary and sufficient condition for the weak law of large numbers was finally obtained by Kolmogorov (1928/29).
Khinchin & Kolmogorov (1925) studied series of independent, discrete random variables, and showed that convergence holds under the condition in Lemma 5.16. Kolmogorov (1928/29) then obtained his maximum inequality and showed that the three conditions in Theorem 5.18 are necessary and suf-ficient for a.s. convergence. The equivalence with convergence in distribution was later noted by L´ evy (1937).
A strong law of large numbers for Bernoulli sequences was stated by Borel (1909a), but the first rigorous proof is due to Faber (1910).
The simple criterion in Corollary 5.22 was obtained in Kolmogorov (1930). In (1933a) Kolmogorov showed that existence of the mean is necessary and sufficient for the strong law of large numbers, for general i.i.d. sequences. The extension to exponents p ̸= 1 is due to Marcinkiewicz & Zygmund (1937). Proposition 5.24 was proved in stages by Glivenko (1933) and Cantelli (1933).
Riesz (1909a) introduced the notion of convergence in measure, for prob-ability measures equivalent to convergence in probability, and showed that it implies a.e. convergence along a subsequence. The weak compactness crite-rion in Lemma 5.13 is due to Dunford (1939). The functional representa-tion of Proposition 5.32 appeared in K(96a), and Corollary 5.33 was given by Stricker & Yor (1978).
The theory of weak convergence was founded by Alexandrov (1940/43), who proved in particular the ‘portmanteau’ Theorem 5.25. The continuous Historical Notes and References 857 mapping Theorem 5.27 was obtained for a single function fn ≡f by Mann & Wald (1943), and then in general by Prohorov (1956) and Rubin. The coupling Theorem 5.31 is due for complete S to Skorohod (1956) and in general to Dudley (1968).
More detailed accounts of the material in this chapter may be found in many textbooks, such as in Loeve (1977) and Chow & Teicher (1997).
Additional results on random series and a.s. convergence appear in Stout (1974) and Kwapie´ n & Woyczy´ nski (1992).
6. Gaussian and Poisson convergence The central limit theorem—so first called by P´ olya (1920)—has a long and glorious history, beginning with the work of de Moivre7 (1738), who obtained the familiar approximation of binomial probabilities in terms of the normal density.
Laplace (1774, 1812/20) stated a general version of the central limit theorem, but his proof was incomplete, as was the proof of Chebyshev (1867/90).
The normal laws were used by Gauss (1809/16) to model the distribution of errors in astronomical observations.
The first rigorous proof was given by Liapounov (1901), though under an extra moment condition. Then Lindeberg (1922a) proved his fundamental Theorem 6.13, which led in turn to the basic Proposition 6.10, in a series of papers by Lindeberg (1922b) and L´ evy (1922a–c). Bernstein (1927) ob-tained the first extension to higher dimensions. The general problem of normal convergence, regarded for two centuries as the central (indeed the only non-trivial) problem in probability, was eventually solved in the form of Theorem 6.16, independently by Feller (1935) and L´ evy (1935a).
Many authors have proved local versions of the central limit theorem, the present version being due to Stone (1965). The domain of attraction to the normal law was identified by L´ evy, Feller, and Khinchin. General laws of attraction were studied by Doeblin (1947, posth.), who noticed the connection with Karamata’s (1930) theory of regular variation.
Even the simple Poisson approximation of binomial probabilities was noted by de Moivre (1711).
It was rediscovered by Poisson (1837), who also noted that the limits form a probability distribution in its own right. Appli-cations of this ‘law of small numbers’ were compiled by Cournot (1843) and Bortkiewicz8 (1898).
Though characteristic functions were used in probability already by La-place (1812/20), their first rigorous use to prove a limit theorem is credited to Liapounov (1901). The first general continuity theorem was established by L´ evy (1922c), who assumed the characteristic functions to converge uniformly in a neighborhood of the origin. The simpler criterion in Theorem 6.23 is due to Bochner (1933). Our direct approach to Theorem 6.3 may be new, in 7Abraham de Moivre (1667–1754), French mathematician. To escape the persecution of huguenots he fled to England, where he befriended Newton.
He discovered both the normal and the Poisson approximation to the binomial distribution.
8Known for the example of the number of deaths by horse kicks in the Prussian army.
858 Foundations of Modern Probability avoiding the relatively deep selection theorem of Helly (1911/12). The basic Corollary 6.5 was noted by Cram´ er & Wold (1936).
Introductions to characteristic functions and classical limit theorems may be found in many textbooks, notably Lo eve (1977). Feller (1971) is a rich source of further information on Laplace transforms, characteristic functions, and classical limit theorems. For more detailed or advanced results on charac-teristic functions, see Lukacs (1970).
7. Infinite divisibility and general null arrays The scope of the classical central limit problem was broadened by L´ evy (1925) to encompass the study of suitably normalized partial sums, leading to the classes of stable and self-decomposable limiting distributions. To include the case of the classical Poisson approximation, Kolmogorov proposed a further extension to general triangular arrays, subject to the sole condition of uniform asymptotic negligibility, allowing convergence to general infinitely divisible laws. de Finetti (1929) saw how the latter distributions are related to processes with stationary, independent increments, and posed the problem of finding their general form.
Here the characteristic functions were first characterized by Kolmogorov (1932), though only under a moment condition that was later removed by L´ evy (1934/35). Khinchin (1937/38) noted the potential for significant simplifi-cations in the one-dimensional case, leading to the celebrated L´ evy–Khinchin formula. The probabilistic interpretation in terms of Poisson processes is essen-tially due to Itˆ o (1942b). The simple form for non-negative random variables was noted by Jiˇ rina (1964).
For the mentioned limit problem, Feller (1937) and Khinchin (1937) proved independently that all limiting distributions are infinitely divisible. It remained to characterize the convergence to specific limits, continuing the work by Feller (1935) and L´ evy (1935a) for Gaussian limits. The general crite-ria were found independently by Doeblin (1939a) and Gnedenko (1939).
Convergence criteria for infinitely divisible distributions on Rd + were given by Jiˇ rina (1966) and Nawrotzki (1968). Corresponding criteria for null-arrays were obtained in K(75/76), after the special case of Z+-valued variables had been settled by Kerstan & Matthes (1964). Extreme value theory is surveyed by Leadbetter et al. (1983). Lemma 7.16 appears in Doeblin (1939a).
The limit theory for null arrays of random variables was once regarded as the central topic of probability theory, and comprehensive treatments were given by Gnedenko & Kolmogorov (1954/68), Loeve (1977), and Petrov (1995). The classical approach soon got a bad reputation, because of the nu-merous technical estimates involving characteristic and distribution functions, and the subject is often omitted in modern textbooks.9 Eventually, Feller 9This is a huge mistake, not only since characteristic functions provide the easiest ap-proach to the classical limit theorems, but also for their crucial role in connection with Historical Notes and References 859 (1966/71) provided a more readable, even enjoyable account. The present con-cise probabilistic approach, based on compound Poisson approximations and methods of modern measure theory, was pioneered in K(97). It has the further advantage of extending easily to the multi-variate case.
8. Conditioning and disintegration Though conditional densities have been computed by statisticians ever since Laplace (1774), a general approach to conditioning was not devised until Kolmogorov (1933c), who defined conditional probabilities and expectations as random variables on the basic probability space, using the Radon–Nikodym theorem, which had recently become available. His original notion of condi-tioning with respect to a random vector was extended by Halmos (1950) to arbitrary random elements, and then by Doob (1953) to general sub -σ-fields.
Our present Hilbert space approach to conditioning, essentially due to von Neumann (1940), is simpler and more suggestive, in avoiding the use of the less intuitive Radon–Nikodym theorem. It has the further advantage of leading to the attractive interpretation of martingales as projective families of random variables.
The existence of regular conditional distributions was studied by several authors, beginning with Doob (1938). It leads immediately to the often used but rarely stated disintegration Theorem 8.5, extending Fubini’s theorem to the case of dependent random variables. The fundamental importance of con-ditional distributions is not always appreciated.10 The interpretation of the simple Markov property in terms of conditional independence was indicated already by Markov (1906), and the formal state-ment of Proposition 8.9 appears in Doob (1953). Further properties of con-ditional independence have been listed by D¨ ohler (1980) and others. Our statement of conditional independence in terms of Hilbert-space projections is new. The subtle device of iterated conditioning was developed in K(10/11b).
The fundamental transfer Theorem 8.17, noted by many authors, may have been stated explicitly for the first time by Aldous (1981). It leads in par-ticular to a short proof of Daniell’s (1918/19/20) existence theorem for se-quences of random elements in a Borel space, which in turn implies Kol-mogorov’s (1933c) celebrated extension to continuous time and arbitrary index sets. Lomnicki & Ulam (1934) noted that no topological assumptions are needed for independent random elements, a result that was later extended by Ionescu Tulcea (1949/50) to measures specified by a sequence of condi-tional distributions.
The traditional Radon–Nikodym approach to conditional expectations ap-pears in most textbooks, such as in Billingsley (1995).
Our projection approach is essentially the one used by Dellacherie & Meyer (1975) and martingales, L´ evy processes, random measures, and potentials.
10Dellacherie & Meyer (1975) write: “[They] have a bad reputation, and probabilists rarely employ them without saying ‘we are obliged’ to use conditional distributions . . . .” We may hope that times have changed, and their importance is now universally recognized.
860 Foundations of Modern Probability Elliott (1982).
9. Optional times and martingales Martingales were first introduced by Bernstein11 (1927/37), in his ef-forts to relax the independence assumption in the classical limit theorems.
Both Bernstein and L´ evy (1935a-b/37) extended Kolmogorov’s maximum inequality and the central limit theorem to a general martingale context. The term martingale (originally denoting part of a horse’s harness and later used for a special gambling system) was introduced in the probabilistic context by Ville (1939).
The first martingale convergence theorem was obtained by Jessen12 (1934) and L´ evy (1935b), both of whom proved Theorem 9.24 for filtrations generated by a sequence of independent random variables.
A sub-martingale version of the same result appears in Sparre-Andersen & Jessen (1948).
The independence assumption was removed by L´ evy (1937/54), who also noted the simple martingale proof of Kolmogorov’s 0−1 law and proved his conditional version of the Borel–Cantelli lemma.
The general convergence theorem for discrete-time martingales was proved by Doob13 (1940), and the basic regularity theorems for continuous-time mar-tingales first appeared in Doob (1951).
The theory was extended to sub-martingales by Snell (1952) and Doob (1953). The latter book is also the original source of such fundamental results as the martingale closure theorem, the optional sampling theorem, and the Lp-inequality.
Though hitting times have long been used informally, general optional times (often called stopping times) seem to appear for the first time in Doob (1936).
Abstract filtrations were not introduced until Doob (1953). Progressive pro-cesses were introduced by Dynkin (1961), and the modern definition of the σ-fields Fτ is due to Yushkevich.
The first general exposition of martingale theory, truly revolutionary for its time, was given by Doob (1953). Most graduate-level texts, such as C ¸inlar (2011), contain introductions to martingales, filtrations, and optional times.
More extensive coverage of the discrete-time theory appears in Neveu (1975), Chow & Teicher (1997), and Rogers & Williams (1994). The encyclope-dic treatises of Dellacherie & Meyer (1975/80) remain standard references for the continuous-time theory.
10. Predictability and compensation Predictable and totally inaccessible times appear implicitly, along with 11Sergei Bernstein (1880–1968), Ukrainian and Sovjet mathematician working in anal-ysis and probability, invented martingales and proved martingale versions of some classical results for independent variables.
12Børge Jessen (1907–93), Danish mathematician working in analysis and geometry, proved the first martingale convergence theorem, long before Doob.
13Joseph Doob (1910–2004), American mathematician and a founder of modern proba-bility, known for path-breaking work on martingales and probabilistic potential theory.
Historical Notes and References 861 quasi-left continuous processes, in the work of Blumenthal (1957) and Hunt (1957/58). A systematic study of optional times and their associated σ-fields was initiated by Chung & Doob (1965), Meyer (1966/68), and Dol´ eans (1967a). Their ideas were further developed by Dellacherie (1972), Del-lacherie & Meyer (1975/80), and others into a ‘general theory of processes’, which has in many ways revolutionized modern probability.
The connection between excessive functions and super-martingales, noted by Doob (1954), suggested a continuous-time extension of the elementary Lemma 9.10. Such a result was eventually proved by Meyer14 (1962/63) in the form of Lemma 10.7, after special decompositions in the Markovian context had been obtained by Volkonsky (1960) and Shur (1961). Dol´ eans (1967) proved the equivalence of natural and predictable processes, leading to the ultimate version of the Doob–Meyer decomposition in the form of Theorem 10.5.
The original proof, appearing in Dellacherie (1972) and Dellacherie & Meyer (1975/80), is based on some deep results in capacity theory. The present more elementary approach combines Rao’s (1969) simple proof of Lemma 10.7, based on Dunford’s (1939) weak compactness criterion, with Doob’s (1984) ingenious approximation of totally inaccessible times. Itˆ o & Watanabe (1965) noted the extension to general sub-martingales, by suit-able localization. The present proof of Theorem 10.14 is taken from Chung & Walsh (1974). The moment inequality in Proposition 10.20 was proved independently by Garsia (1973) and Neveu (1975), after a special case had been obtained by Burkholder et al. (1972).
Compensators of optional times first arose as hazard functions in reliability theory. More general compensators were later studied in the Markovian con-text by S. Watanabe (1964) under the name of L´ evy systems. Grigelionis (1971) and Jacod (1975) constructed the compensators of adapted random measures on a product space R+ × S. The elementary formula for the induced compensator of a random pair (τ, χ) is classical and appears in, e.g., Del-lacherie (1972) and Jacod (1975). The equation Z = 1−Z−· ¯ η was studied in greater generality by Dol´ eans (1970). Discounted compensators were in-troduced in K(90), which contains the fundamental martingale and mapping results like Theorem 10.27.
Induced compensators of point processes have been studied by many au-thors, including Last & Brandt (1995) and Jacobsen (2006). More ad-vanced and comprehensive accounts of the ‘general theory’ are given by Del-lacherie (1972), Elliott (1982), Rogers & Williams (1987/2000), and Dellacherie & Meyer (1975/80).
14Paul-Andr´ e Meyer (1934–2003), French mathematician, making profound contribu-tions to martingales, stochastic integration, Markov processes, potential theory, Malliavin calculus, quantum probability, and stochastic differential geometry. As a leader of the famous Strasbourg school of probability, his work inspired probabilists all over the world.
862 Foundations of Modern Probability 11. Markov properties and discrete-time chains Discrete-time Markov chains in a finite state space were introduced by Markov15 (1906), who proved the first ergodic theorem and introduced the notion of irreducible chains. Kolmogorov (1936a/b) extended the theory to a countable state space and general transition probabilities. In particular, he obtained a decomposition of the state space into irreducible sets, classified the states with respect to recurrence and periodicity, and described the asymptotic behavior of the n-step transition probabilities. Kolmogorov’s original proofs were analytic. The more intuitive coupling approach was introduced by Doeb-lin (1938a/b), long before the strong Markov property was formalized.
Bachelier had noted the connection between random walks and dif-fusions, which inspired Kolmogorov (1931a) to give a precise definition of Markov processes in continuous time.
His treatment is purely analytic, with the distribution specified by a family of transition kernels satisfying the Chapman–Kolmogorov relation, previously noted in special cases by Chap-man (1928) and Smoluchovsky. Kolmogorov makes no reference to sam-ple paths. The transition to probabilistic methods began with the work of L´ evy (1934/35) and Doeblin (1938a/b).
Though the strong Markov property had been used informally since Bache-lier (1900/01), it was first proved rigorously in a special case by Doob (1945).
The elementary inequality of Ottaviani is from (1939). General filtrations were introduced in Markov process theory by Blumenthal (1957). The mod-ern setup, with a canonical process X defined on the path space Ω, equipped with a filtration F, a family of shift operators θt, and a collection of probability measures Px, was developed systematically by Dynkin (1961/65). A weaker form of Theorem 11.14 appears in Blumenthal & Getoor (1968), and the present version is from K(87).
Most graduate-level texts contain elementary introductions to Markov chains in discrete and continuous time. More extensive discussions appear in Feller (1968/71) and Freedman (1971a). Most literature on general Markov pro-cesses is fairly advanced, leading quickly into discussions of semi-group and potential theory, here treated in Chapters 17 and 34. Standard references in-clude Dynkin16 (1965), Blumenthal & Getoor (1968), Sharpe (1988), and Dellacherie & Meyer (1987/92).
The coupling method, soon for-gotten after Doeblin’s tragic death17, was revived by Lindvall (1992) and 15Andrei Markov (1856–1922), Russian mathematician, inventor of Markov processes.
16Eugene Dynkin (1924–2014), Russian-American mathematician, worked on Lie groups before turning to probability. Laid the foundations of the modern theory of Markov pro-cesses, and made seminal contributions to the theory of super-processes.
17Wolfgang Doeblin (1915–40), born in Berlin to the famous author Alfred D¨ oblin.
When Hitler came to power, his Jewish family fled to Paris. After a short career inspired by L´ evy and Fr´ echet, he joined the French army to fight the Nazis. When his unit got cornered, he shot himself, to avoid getting caught and sent to an extermination camp. Aged 25, he had already made revolutionary contributions to many areas of probability, including construc-tion of diffusions with given drift and diffusion rates, time-change reduction of martingales to Brownian motion, solution of the general limit problem for null arrays, and recurrence Historical Notes and References 863 Thorisson (2000).
12. Random walks and renewal processes Random walks arose early in a wide range of applications, such as in gam-bling, queuing, storage, and insurance; their history can be traced back to the origins of probability. Already Bachelier (1900/01) used random walks to approximate diffusion processes. Such approximations were used in potential theory in the 1920s, to yield probabilistic interpretations in terms of a simple symmetric random walk. Finally, random walks played an important role in the sequential analysis developed by Wald (1947).
The modern developments began with P´ olya’s (1921) discovery that a sim-ple symmetric random walk in Zd is recurrent for d ≤2, otherwise transient.
His result was later extended to Brownian motion by L´ evy (1940) and Kaku-tani (1944a). The general recurrence criterion in Theorem 12.4 was obtained by Chung & Fuchs (1951), and the probabilistic approach to Theorem 12.2 was found by Chung & Ornstein (1962). The first condition in Corollary 12.7 is even necessary for recurrence, as noted independently by Ornstein (1969) and Stone (1969). Wald (1944) discovered the equations named after him.
The reflection principle was first used by Andr´ e (1887) in the context of the ballot problem. The systematic study of fluctuation and absorption problems for random walks began with the work of Pollaczek (1930). Ladder times and heights, first introduced by Blackwell, were explored in an influential paper by Feller (1949).
The factorizations in Theorem 12.16 were origi-nally derived by the Wiener–Hopf technique, developed by Paley & Wiener (1934) as a general tool in Fourier analysis. Theorem 12.17 is due for u = 0 to Sparre-Andersen (1953/54) and in general to Baxter (1961). The in-tricate combinatorial methods used by the former author were later simplified by Feller and others.
Though renewals in Markov chains are implicit already in some early work of Kolmogorov and L´ evy, the general renewal process may have been first introduced by Palm (1943). The first renewal theorem was obtained by Erd¨ os et al. (1949) for random walks in Z+, though the result is then an easy conse-quence of Kolmogorov’s (1936a/b) ergodic theorem for Markov chains on a countable state space, as noted by Chung. Blackwell (1948/53) extended the result to random walks in R+. The ultimate version for transient random walks in R is due to Feller & Orey (1961). The first coupling proof of Blackwell’s theorem was given by Lindvall (1977). Our proof is a modifica-tion of an argument in Athreya et al. (1978), which originally did not cover all cases.
The method seems to require the existence of a possibly infinite mean. An analytic approach to the general case appears in Feller (1971).
Elementary introductions to random walks are given by many authors, in-cluding Chung (1974) and Feller (1971). A detailed treatment of random and ergodic properties of Markov processes.
864 Foundations of Modern Probability walks in Zd is given by Spitzer (1976). Further aspects of renewal theory are discussed in Smith (1954).
13. Jump-type chains and branching processes Continuous-time Markov chains have been studied by many authors, be-ginning with Kolmogorov (1931a). The transition functions of general pure jump-type Markov processes were explored by Pospiˇ sil (1935/36) and Feller (1936/40), and the corresponding path properties were analysed by Doeblin (1939b) and Doob (1942b). The first continuous-time version of the strong Markov property was obtained by Doob (1945). The equivalence of the count-ing and renewal descriptions of Poisson processes was established by Bate-man (1910). The significance of pseudo-Poisson processes was recognized by Feller.
The classical, discrete-time branching process was first studied by Bien-aym´ e18 (1845), who found in particular the formula for the extinction probabil-ity. His results were partially rediscovered by Watson & Galton19 (1874), after whom the process has traditionally been named.
In the critical case, the asymptotic survival probability and associated distribution were found by Kolmogorov (1938) and Yaglom (1947), and the present comparison ar-gument is due to Spitzer. The transition probabilities of a birth & death process with rates nμ and nλ were obtained by Palm (1943) and Kendall (1948). The associated ancestral process was identified in K(17).
The diffusion limit of a Bienaym´ e process was obtained by Feller (1951).
The associated space-time process, known as a super-process, was discovered by Watanabe (1968) and later studied extensively by Dawson (1977/93) and others. The corresponding ancestral structure was uncovered in profound work by Dynkin (1991), Dawson & Perkins (1991), and Le Gall (1991).
Comprehensive accounts of pure-jump type Markov processes have been given by many authors, beginning with Doob 1953), Chung (1960), and Ke-meny et al. (1966). The underlying regenerative structure was analyzed by Kingman (1972). Countless authors have discussed applications to queuing20 theory and other areas, including Tak´ acs (1962). Detailed accounts of Bi-enaym´ e and related processes include those by Harris (1963), Athreya & Ney (1972), and Jagers (1975). Etheridge (2000) gives an accessible in-troduction to super-processes.
14. Gaussian processes and Brownian motion As we have seen, the Gaussian distribution first arose in the work of de Moivre (1738) and Laplace (1774, 1812/20), and was popularized through 18Ir´ en´ ee-Jules Bienaym´ e (1796–1878), French probabilist and statistician.
19The latter became infamous as a founder of eugenics—the ‘science’ of breeding a human master race—later put in action by Hitler and others. Since his results were either wrong or already known, I would be happy to drop his name from the record.
20I prefer the traditional spelling, as in ensuing and pursuing.
Historical Notes and References 865 Gauss’ (1809) theory of errors.
Maxwell21 (1875/78) derived the Gaus-sian law as the velocity distribution for the molecules in a gas, assuming the hypotheses of Proposition 14.2. Freedman (1962/63) used de Finetti’s the-orem to characterize sequences and processes with rotatable symmetries. His results were later recognized as equivalent to a relationship between positive definite and completely monotone functions proved by Schoenberg (1938).
Isonormal Gaussian processes were introduced by Segal (1954).
The Brownian motion process, first introduced by Thiele (1880), was re-discovered and used by Bachelier22 (1900) to model fluctuations on the stock market. Bachelier also noted several basic properties of the process, such as the distribution of the maximum and the connection with the heat equation.
His work, long neglected and forgotten, was eventually revived by Kolmogorov and others. He is now recognized as the founder of mathematical finance.
The mathematical notion must not be confused with the physical phe-nomenon of Brownian motion—the irregular motion of pollen particles sus-pended in water. Such a motion, first noted by van Leeuwenhoek23, was originally thought to be a biological phenomenon, motivating some careful studies during different seasons, meticulously recorded by the botanist Brown (1828). Eventually, some evidence from physics and chemistry suggested the existence of atoms24, explaining Brownian motion as caused by the constant bombardment by water molecules. The atomic hypothesis was still controver-sial, when Einstein (1905/06), ignorant of Bachelier’s work, modeled the motion by the mentioned process, and used it to estimate the size of the molecules25.
A more refined model for the physical Brownian motion was proposed by Langevin (1908) and Ornstein & Uhlenbeck (1930).
A rigorous study of the Brownian motion process was initiated by Wiener26 (1923), who constructed its distribution as a measure on the space of contin-uous paths. The significance of Wiener’s revolutionary paper was not appre-ciated until after the pioneering work of Kolmogorov (1931a/33b), L´ evy (1934/35), and Feller (1936). The fact that the Brownian paths are nowhere differentiable was noted in a celebrated paper by Paley et al. (1933).
Wiener also introduced stochastic integrals of deterministic L2-functions, later studied in further detail by Paley et al. (1933). The spectral repre-21James Clerk Maxwell (1831–79), Scottish physicist, known for his equations de-scribing the electric and magnetic fields, and a co-founder of statistical mechanics.
22Louis Bachelier (1870–1946), French mathematician, the founder of mathematical finance and discoverer of the Brownian motion process.
23Antonie van Leeuwenhoek (1632–1723), Dutch scientist, who was the first to con-struct a microscope powerful enough to observe the micro-organisms in ‘plain’ water.
24A similar claim by Democritus can be dismissed, as it was based on a logical fallacy.
25This is one of the three revolutionary achievements of Albert Einstein (1879–1955) during his ‘miracle year’ of 1905, the others being his discovery of special relativity, including the celebrated formula E = mc2, and his explanation of the photo-electric effect, marking the beginning of quantum mechanics and earning him a Nobel prize in 1921.
26Norbert Wiener (1894–1964), famous American mathematician, making profound contributions to numerous areas of analysis and probability.
866 Foundations of Modern Probability sentation of stationary processes, originally deduced from Bochner’s (1932) theorem by Cram´ er (1942), was later recognized as equivalent to a general Hilbert space result of Stone (1932). The chaos expansion of Brownian func-tionals was discovered by Wiener (1938), and the theory of multiple integrals with respect to Brownian motion was developed in a seminal paper of Itˆ o (1951c).
The law of the iterated logarithm was discovered by Khinchin27, first (1923/24) for Bernoulli sequences, and then (1933b) for Brownian motion. A more detailed study of Brownian paths was pursued by L´ evy (1948), who proved the existence of quadratic variation in (1940) and the arcsine laws in (1948). Though many proofs of the latter have since been given, the present deduction from basic symmetry properties may be new. Though the strong Markov property was used implicitly by L´ evy and others, the result was not carefully stated and proved until Hunt (1956).
Most modern probability texts contain introductions to Brownian motion.
The books by Itˆ o & McKean (1965), Freedman (1971b), Karatzas & Shreve (1991), and Revuz & Yor (1999) give more detailed information on the subject. Further discussion of multiple Wiener–Itˆ o integrals appears in Kallianpur (1980), Dellacherie et al. (1992), and Nualart (2006). The advanced theory of Gaussian processes is surveyed by Adler (1990).
15. Poisson and related processes As we have seen, the Poisson distribution was recognized by de Moivre (1711) and Poisson (1837) as an approximation to the binomial distribution.
In a study of Bible chronology, Ellis (1844) obtains Poisson processes as approximations of more general renewal processes. Point processes with sta-tionary, independent increments were later considered by Lundberg (1903) to model a stream of insurance claims, by Rutherford & Geiger (1908) to describe the process of radioactive decay, and by Erlang (1909) to model the incoming calls to a telephone exchange. Here the first rigorous proof of the Poisson distribution for the increments was given by Bateman (1910), who also showed that the holding times are i.i.d. and exponentially distributed. The Poisson property in the non-homogeneous case is implicit in L´ evy (1934/35), but it was not rigorously proven until Copeland & Regan (1936).
The mixed binomial representation of Poisson processes is used implicitly by Wiener (1938), in his construction of higher-dimensional Poisson processes.
It was stated more explicitly by Moyal (1962), leading to the elementary construction of general Poisson processes, noted by both Kingman (1967) and Mecke (1967).
Itˆ o (1942b) noted that a marked point process with stationary, independent increments is equivalent to a Poisson process in the plane. The corresponding result in the non-homogeneous case eluded Itˆ o, but 27Alexandr Khinchin (1894–1959), Russian mathematician, along with Kolmogorov a leader of the Moscow school of analysis and probability. He wrote monographs on statistical mechanics, information theory, and queuing theory.
Historical Notes and References 867 follows from Kingman’s (1967) general description of random measures with independent increments.
The mapping and marking properties of a Poisson process were established by Pr´ ekopa (1958), after partial results and special cases had been noted by Bartlett (1949) and Doob (1953). Cox processes, first introduced by Cox (1955), were studied extensively by Kingman (1964), Krickeberg (1972), and Grandell (1976). Thinnings were first considered by R´ enyi28 (1956).
Laplace transforms of random measures were used systematically by von Waldenfels (1968) and Mecke (1968); their use to construct Cox processes and transforms relies on a lemma of Jiˇ rina (1964). One-dimensional unique-ness criteria were obtained in the Poisson case by R´ enyi (1967), and then in general independently by M¨ onch (1971) and in K(73a/83), with further extensions by Grandell (1976). The symmetry characterization of mixed Poisson and binomial processes was noted independently by Matthes et al.
(1974), Davidson (1974b), and K(73a).
S. Watanabe (1964) proved that a ql-continuous simple point process is Poisson iffits compensator is deterministic, a result later extended to gen-eral marked point processes by Jacod (1975). A corresponding time-change reduction to Poisson was obtained independently by Meyer (1971) and Pa-pangelou (1972); a related Poisson approximation was obtained by Brown (1978/79). The extended time-change result in Theorem 15.15 was obtained in K(90).
Multiple Poisson integrals were first mentioned by Itˆ o (1956), though some underlying ideas can be traced back to papers by Wiener (1938) and Wiener & Wintner (1943). Many sophisticated methods have been applied through the years to the study of such integrals. The present convergence criterion for double Poisson integrals is a special case of some general results for multiple Poisson and L´ evy integrals, established in K & Szulga (1989).
Introductions to Poisson and related processes are given by Kingman (1993) and Last & Penrose (2017).
More advanced and comprehensive accounts of random measure theory appear in Matthes et al. (1978), Daley & Vere-Jones (2003/08), and K(17).
16. Independent-increment and L´ evy processes Up to the 1920s, Brownian motion and the Poisson process were essen-tially the only known processes with independent increments. In (1924/25) L´ evy discovered the stable distributions and noted that they, too, could be associated with suitable ‘decomposable’ processes. de Finetti (1929) saw the connection between processes with independent increments and infinitely divisible distributions, and posed the problem of characterizing the latter. As noted in Chapter 7, the problem was solved by Kolmogorov (1932) and L´ evy (1934/35).
28Alfr´ ed R´ enyi (1921–70), Hungarian mathematician.
Through his passion for the subject, he inspired a whole generation of mathematicians.
868 Foundations of Modern Probability The paper of L´ evy29 (1934/35) was truly revolutionary30, in giving at the same time a complete description of processes with independent incre-ments. Though his intuition was perfectly clear, his reasoning was sometimes a bit sketchy, and a more careful analysis was given by Itˆ o (1942b). In the meantime, analytic proofs of the representation formula for the characteris-tic function were given by L´ evy himself (1937), by Feller (1937), and by Khinchin (1937). The L´ evy–Khinchin formula for the characteristic function was extended by Jacod (1983) to the case of possibly fixed discontinuities.
Our probabilistic description in the general case is new and totally different.
The basic convergence Theorem 16.14 for L´ evy processes and the associ-ated approximation of random walks in Corollary 16.17 are essentially due to Skorohod (1957), though with rather different statements and proofs. The special case of stable processes has been studied by countless authors, ever since L´ evy (1924/25). Versions of Proposition 16.19 were discovered independently by Rosi´ nski & Woyczy´ nski (1986) and in K(92).
The local characteristics of a semi-martingale were introduced indepen-dently by Grigelionis (1971) and Jacod (1975). Tangent processes, first mentioned by Itˆ o (1942b), were used by Jacod (1984) to extend the notion of semi-martingales. The existence of tangential sequences with conditionally independent terms was noted by Kwapie´ n & Woyczy´ nski (1992), and a careful proof appears in de la Pe˜ na & Gin´ e (1999). A discrete-time version of the tangential comparison was obtained by Zinn (1986) and Hitchenko (1988), and a detailed proof appears in Kwapie´ n & Woyczy´ nski (1992).
Both results were extended to continuous time in K(17a). The simplified com-parison criterion may be new.
Modern introductions to L´ evy processes have been given by Bertoin (1996) and Sato (2013). The latter gives a detailed account of the L´ evy–Itˆ o repre-sentation in the stochastically continuous case. The analytic approach in the general case is carefully explained by Jacod & Shiryaev (1987), which also contains an exposition of the associated limit theory.
17. Feller processes and semi-groups Semi-group ideas are implicit in Kolmogorov’s pioneering (1931a) pa-per, whose central theme is to identify some local characteristics determining the transition probabilities through a system of differential equations, the so-called Kolmogorov forward and backward equations. Markov chains and diffu-sion processes were originally treated separately, but in (1935) Kolmogorov proposed a unified framework, with transition kernels regarded as operators (initially operating on measures rather than on functions), and with local char-29Paul L´ evy (1886–1971), French mathematician and a founder of modern probability, making profound contributions to the central limit theorem, stable processes, Brownian motion, L´ evy processes, and local time.
30Lo eve (1978), who had been a student of L´ evy, writes (alluding to Greek mythology): “Decomposability sprang forth fully armed from the forehead of P. L´ evy . . . His analysis . . .
was so complete that since then only improvements of detail have been added.” Historical Notes and References 869 acteristics given by an associated generator.
Kolmogorov’s ideas were taken up by Feller31 (1936), who derived exis-tence and uniqueness criteria for the forward and backward equations. The general theory of contraction semi-groups on a Banach space was developed independently by Hille (1948) and Yosida (1948), both of whom recognized its significance for the theory of Markov processes. The power of the semi-group approach became clear through the work of Feller (1952/54), who gave a complete description of the generators of one-dimensional diffusions, in particular characterizing the boundary behavior of the process in terms of the domain of the generator.
A systematic study of Markov semi-groups was now initiated by Dynkin (1955a). The usual approach is to postulate the strong continuity (F3), instead of the weaker and more easily verified condition (F2). The positive maximum principle first appeared in the work of Itˆ o (1957), and the core condition in Proposition 17.9 is due to Watanabe (1968).
The first regularity theorem was obtained by Doeblin (1939b), who gave conditions for the paths to be step functions. A sufficient condition for conti-nuity was then obtained by Fortet (1943). Finally, Kinney (1953) showed that any Feller process has a version with rcll paths, after Dynkin (1952) had derived the same property under a H¨ older condition. The use of martingale methods in the study of Markov processes dates back to Kinney (1953) and Doob (1954).
The strong Markov property for Feller processes was proved independently by Dynkin & Yushkevich (1956) and Blumenthal (1957), after special cases had been obtained by Doob (1945), Hunt (1956), and Ray (1956).
Blumenthal’s (1957) paper also contains his 0−1 law.
Dynkin (1955a) introduced his characteristic operator, and a version of Theorem 17.24 appears in Dynkin (1956).
There is a vast literature on the approximation of Markov chains and Markov processes, covering a wide range of applications.
The use of semi-group methods to prove limit theorems can be traced back to Lindeberg’s (1922a) proof of the central limit theorem. The general results of Theorems 17.25 and 17.28 were developed in stages by Trotter (1958a), Sova (1967), Kurtz (1969/75), and Mackeviˇ cius (1974). Our proof of Theorem 17.25 uses ideas from Goldstein (1976).
A splendid introduction to semi-group theory is given by the relevant chap-ters in Feller (1971). In particular, Feller shows how the one-dimensional L´ evy–Khinchin formula and associated limit theorems can be derived by semi-group methods. More detailed and advanced accounts of the subject appear in Dynkin (1965), Ethier & Kurtz (1986), and Dellacherie & Meyer (1987/92).
31William Feller (1906–70), Croatian-American mathematician, making path-breaking contributions to many areas of analysis and probability, including classical limit theorems and diffusion processes. His introductory volumes on probability theory, arguably the best probability books ever written, became bestsellers and were translated to many languages.
870 Foundations of Modern Probability 18. Itˆ o integration and quadratic variation The first stochastic integral with a random integrand was defined by Itˆ o32 (1942a/44), using Brownian motion as the integrator and product measurable and adapted processes as integrands. Doob (1953) made the connection with martingale theory. A version of the fundamental substitution rule was proved by Itˆ o (1951a), and later extended by many authors. The compensated in-tegral in Corollary 18.21 was introduced independently by Fisk (1966) and Stratonovich (1966).
The quadratic variation process was originally obtained from the Doob– Meyer decomposition. Fisk (1966) noted how it can also be obtained directly from the process, as in Proposition 18.17. Our present construction was in-spired by Rogers & Williams (2000). The BDG inequalities were originally proved for p > 1 and discrete time by Burkholder (1966). Millar (1968) noted the extension to continuous martingales, in which context the further extension to arbitrary p > 0 was obtained independently by Burkholder & Gundy (1970) and Novikov (1971). Kunita & Watanabe (1967) in-troduced the covariation of two martingales and used it to characterize the integral. They further established some general inequalities related to Propo-sition 18.9.
Itˆ o’s original integral was extended to square-integrable martingales by Courrege (1962/63) and Kunita & Watanabe (1967), and then to contin-uous semi-martingales by Dol´ eans & Meyer (1970). The idea of localization is due to Itˆ o & Watanabe (1965). Theorem 18.24 was obtained by Kaza-maki (1972), as part of a general theory of random time change. Stochastic integrals depending on a parameter were studied by Dol´ eans (1967b) and Stricker & Yor (1978), and the functional representation in Proposition 18.26 first appeared in K(96a).
Elementary introductions to Itˆ o calculus and its applications appear in countless textbooks, including Chung & Williams (1990) and Klebaner (2012).
For more advanced accounts and further information, we recom-mend especially Ikeda & Watanabe (2014), Rogers & Williams (2000), Karatzas & Shreve (1998), and Revuz & Yor (1999).
19. Continuous martingales and Brownian motion The fundamental characterization of Brownian motion in Theorem 19.3 was proved by L´ evy (1937), who also in (1940) noted the conformal invariance up to a time change of complex Brownian motion and stated the polarity of singletons. A rigorous proof of Theorem 19.6 was provided by Kakutani (1944a/b). Kunita & Watanabe (1967) gave the first modern proof of L´ evy’s characterization, based on Itˆ o’s formula for exponential martingales.
32Kiyosi Itˆ o (1915–2008), Japanese mathematician, the creator of stochastic calculus, SDEs, multiple WI-integrals, and excursion theory. He is also notable for reworking and making rigorous L´ evy’s theory of processes with independent increments.
Historical Notes and References 871 The corresponding time-change reduction of continuous local martingales to a Brownian motion was discovered by Doeblin (1940b), and used by him to solve Kolmogorov’s fundamental problem of constructing diffusion processes with given drift and diffusion rates33. However, his revolutionary paper was deposited in a sealed envelope to the Academy of Sciences in Paris, and was not known to the world until 60 years later.
In the meantime, the result was rediscovered independently by Dambis (1965) and Dubins & Schwarz (1965). A multi-variate version of Doeblin’s theorem was noted by Knight (1971), and a simplified proof was given by Meyer (1971). A systematic study of isotropic martingales was initiated by Getoor & Sharpe (1972).
The skew-product representation in Corollary 19.7 is due to Galmarino (1963). The integral representation in Theorem 19.11 is essentially due to Itˆ o (1951c), who noted its connection with multiple stochastic integrals and chaos expansions. A one-dimensional version of Theorem 19.13 appears in Doob (1953).
The change of measure was first considered by Cameron & Martin (1944), which is the source of Theorem 19.23, and in Wald’s (1946/47) work on sequential analysis, containing the identity of Lemma 19.25 in a version for random walks. The Cameron–Martin theorem was gradually extended to more general settings by many authors, including Maruyama (1954/55), Gir-sanov (1960), and van Schuppen & Wong (1974). The martingale criterion of Theorem 19.24 was obtained by Novikov (1972).
The material in this chapter is covered by many texts, including the mono-graphs by Karatzas & Shreve (1998) and Revuz & Yor (1999). A more advanced and amazingly informative text is Jacod (1979).
20. Semi-martingales and stochastic integration Doob (1953) conceived the idea of a stochastic integration theory for gen-eral L2-martingales, based on a suitable decomposition of continuous-time sub-martingales. Meyer’s (1962) proof of such a result opened the door to the L2-theory, which was then developed by Courr ege (1962/63) and Kunita & Watanabe (1967). The latter paper contains in particular a version of the gen-eral substitution rule. The integration theory was later extended in a series of papers by Meyer (1967) and Dol´ eans-Dade & Meyer (1970), and reached maturity with the notes of Meyer (1976) and the books by Jacod (1979), M´ etivier & Pellaumail (1980), M´ etivier (1982), and Dellacherie & Meyer (1975/80).
The basic role of predictable processes as integrands was recognized by Meyer (1967). By contrast, semi-martingales were originally introduced in an ad hoc manner by Dol´ eans-Dade & Meyer (1970), and their basic preservation laws were only gradually recognized. In particular, Jacod (1975) 33A totally different construction was given independently by Itˆ o (1942a), through his invention of stochastic calculus and the theory of SDEs. Itˆ o’s approach was also developed independently by Gihman (1947/50/51).
872 Foundations of Modern Probability used the general Girsanov theorem of van Schuppen & Wong (1974) to show that the semi-martingale property is preserved under absolutely continuous changes of the probability measure. The characterization of general stochastic integrators as semi-martingales was obtained independently by Bichteler (1979) and Dellacherie (1980), in both cases with support from analysts.
Quasi-martingales were originally introduced by Fisk (1965) and Orey (1966). The decomposition of Rao (1969b) extends a result by Krickeberg (1956) for L1-bounded martingales. Yoeurp (1976) combined a notion of sta-ble subspaces, due to Kunita & Watanabe (1967), with the Hilbert space structure of M2 to obtain an orthogonal decomposition of L2-martingales, equivalent to the decompositions in Theorem 20.14 and Proposition 20.16.
Elaborating on those ideas, Meyer (1976) showed that the purely discontin-uous component admits a representation as a sum of compensated jumps.
SDEs driven by general L´ evy processes were considered already by Itˆ o (1951b). The study of SDEs driven by general semi-martingales was initiated by Dol´ eans-Dade (1970), who obtained her exponential process as a solution to the equation in Theorem 20.8. The scope of the theory was later expanded by many authors, and a comprehensive account is given by Protter (1990).
The martingale inequalities in Theorems 20.12 and 20.17 have ancient ori-gins. Thus, a version of the latter result for independent random variables was proved by Kolmogorov (1929) and, in a sharper form, by Prohorov (1959). Their result was extended to discrete-time martingales by Johnson et al. (1985) and Hitczenko (1990). The present statements appeared in K & Sztencel (1991).
Early versions of the inequalities in Theorem 20.12 were proved by Khin-chin (1923/24) for symmetric random walks and by Paley (1932) for Walsh series. A version for independent random variables was obtained by Marcin-kiewicz & Zygmund (1937/38). The extension to discrete-time martingales is due to Burkholder (1966) for p > 1 and to Davis (1970) for p = 1. The result was extended to continuous time by Burkholder et al. (1972), who also noted how the general result can be deduced from the statement for p = 1.
The present proof is a continuous-time version of Davis’ original argument.
Introductions to general semi-martingales and stochastic integration are given by Dellacherie & Meyer (1975/80), Jacod & Shiryaev (1987), and Rogers & Williams (2000). Protter (1990) offers an alternative ap-proach, originally suggested by Dellacherie (1980) and Meyer. The book by Jacod (1979) remains a rich source of further information on the subject.
21. Malliavin calculus Malliavin34 (1978) sensationally gave a probabilistic proof of H¨ orman-der’s (1967) celebrated regularity theorem for solutions to certain elliptic 34Paul Malliavin (1925–2010), French mathematician, already famous for contributions to harmonic analysis and other areas, before his revolutionary work in probability.
Historical Notes and References 873 partial differential equations. The underlying ideas35, forming the core of a stochastic calculus of variations known as Malliavin calculus, were further de-veloped by many authors, including Stroock (1981), Bismut (1981), and Watanabe (1987), into a powerful tool with many applications.
Ramer (1974) and Skorohod (1975) independently extended Itˆ o’s stochas-tic integral to the case of anticipating integrands. Their integral was recognized by Gaveau & Trauber (1982) as the adjoint of the Malliavin derivative D.
The expression for the integrand in Itˆ o’s representation of Brownian function-als was obtained by Clark (1970/71), under some regularity conditions that were later removed by Ocone (1984). The differentiability theorem for Itˆ o integrals was proved by Pardoux & Peng (1990).
Many introductions to Malliavin calculus exist, including an early account by Bell (1987). A standard reference is the book by Nualart (1995/2006), which also contains a wealth of applications to anticipating stochastic calculus, stochastic PDEs, mathematical finance, and other areas. I also found the notes of Kunze (2013) especially helpful.
22. Skorohod embedding and functional convergence The first functional limit theorems were obtained by Kolmogorov (1931b/ 33a), who considered special functionals of a random walk. Erd¨ os36 & Kac (1946/47) conceived the idea of an ‘invariance principle’ that would allow the extension of functional limit theorems from special cases to a general setting.
They also treated some interesting functionals of a random walk. The first functional limit theorems were obtained by Donsker37 (1951/52) for random walks and empirical distribution functions, following an idea of Doob (1949).
A general theory based on sophisticated compactness arguments was later de-veloped by Prohorov (1956) and others.
Skorohod’s38 (1965) embedding theorem provided a new, probabilistic approach to Donsker’s theorem. Extensions to the martingale context were obtained by many authors, beginning with Dubins (1968).
Lemma 22.19 appears in Dvoretzky (1972). Donsker’s weak invariance principle was sup-plemented by a strong version due to Strassen (1964), allowing extensions of many a.s. limit theorems for Brownian motion to suitable random walks.
In particular, his result yields a simple proof of the Hartman & Wintner (1941) law of the iterated logarithm, originally deduced from some deep results of Kolmogorov (1929).
35Denis Bell tells me that much of this material was known before the work of Malliavin, including the notion of Malliavin derivative.
36Paul Erd¨ os (1913–96), Hungarian, the most prolific and eccentric mathematician of the 20th century. Constantly on travel and staying with friends, his only possession was a small suitcase where he kept his laundry. His motto was ‘a new roof—a new proof’. He also offered monetary awards, successively upgraded, for solutions to open problems.
37Monroe Donsker (1924–91), American mathematician, who proved the first functional limit theorem, and along with Varadhan developed the theory of large deviations.
38Anatoliy Skorohod (1930–2011), Ukrainian mathematician, making profound con-tributions to many areas of stochastic processes.
874 Foundations of Modern Probability Koml´ os et al. (1975/76) showed how the approximation rate in the Sko-rohod embedding can be improved by a more delicate ‘strong approximation’.
For an exposition of their work and its numerous applications, see Cs¨ org¨ o & R´ ev´ esz (1981).
Many texts on different levels contain introductions to weak convergence and functional limit theorems. A standard reference remains Billingsley (1968), which contains a careful treatment of the general theory and its nu-merous applications. Books focusing on applications in special areas include Pollard (1984) for empirical distributions and Whitt (2002) for queuing theory. Versions of the Skorohod embedding, even for martingales, appear in Hall & Heyde (1980) and Durrett (2019).
23. Convergence in distribution After Donsker (1951/52) had proved his functional limit theorems for random walks and empirical distribution functions, a general theory of weak convergence in function spaces was developed by the Russian school, in semi-nal papers by Prohorov (1956), Skorohod (1956/57), and Kolmogorov (1956). Thus, Prohorov39 (1956) proved his fundamental compactness The-orem 23.2, in a setting for separable, complete metric spaces. The abstract the-ory was later extended in various directions by Le Cam (1957), Varadarajan (1958), and Dudley (1966/67).
Skorohod (1956) considered four different topologies on D[0,1], of which the J1-topology considered here is the most important for applications. The theory was later extended to D R+ by Stone (1963) and Lindvall (1973).
Moment conditions for tightness were developed by Chentsov (1956) and Billingsley (1968), before the powerful criterion of Aldous (1978) became available. Kurtz (1975) and Mitoma (1983) noted that tightness in D R+,S can often be verified by conditions on the one-dimensional projections, as in Theorem 23.23.
Convergence criteria for random measures were established by Prohorov (1961) for compact spaces, by von Waldenfels (1968) for locally compact spaces, and by Harris (1971) for separable, complete metric spaces. In the latter setting, convergence and tightness criteria for point processes, first an-nounced in Debes et al. (1970), were proved in Matthes et al. (1974). The relationship between weak and vague convergence in distribution was clarified, for locally compact spaces, in K(75/76). Strong convergence of point processes was studied extensively by Matthes et al. (1974), where it forms a foundation of their theory of infinitely divisible point processes.
One-dimensional convergence criteria, noted for locally compact spaces in K(75/83) and K(96b), were extended to Polish spaces by Peterson (2001).
The power of the Cox transform in this context was explored by Grandell (1976).
The one-dimensional criteria provide a link to random closed sets 39Yuri Prohorov (1929–2913), Russian probabilist, founder of the modern theory of weak convergence.
Historical Notes and References 875 with the associated Fell (1962) topology. Random sets had been studied ex-tensively by many authors, including Choquet (1953/54), Kendall (1974), and Matheron (1975), when associated convergence criteria were identified in Norberg40 (1984).
Detailed accounts of the theory and applications of weak convergence ap-pear in many textbooks and monographs, including Parthasarathy (1967), Billingsley (1968), Pollard (1984), Ethier & Kurtz (1986), Jacod & Shiryaev (1987), and Whitt (2002). The convergence theory for random measures is discussed in Matthes et al.
(1978), Daley & Vere-Jones (2008), and K(17b). An introduction to random sets appears in Schneider & Weil (2008), and more comprehensive treatments are given by Matheron (1975) and Molchanov (2005).
24. Large deviations The theory of large deviations originated with some refinements of the cen-tral limit theorem noted by many authors, beginning with Khinchin (1929).
Here the object of study is the ratio of tail probabilities rn(x) = P{ζn > x} /P{ζ > x}, where ζ is N(0, 1) and ζn = n−1/2 k≤n ξk for some i.i.d. random variables ξk with mean 0 and variance 1, so that rn(x) →1 for each x. A pre-cise asymptotic expansion was obtained by Cram´ er41 (1938), when x varies with n at a rate x = o(n1/2); see Petrov (1995) for details.
In the same paper, Cram´ er (1938) obtained the first true large deviation result, in the form of our Theorem 24.3, though under some technical assump-tions that were later removed by Chernoff (1952) and Bahadur (1971).
Varadhan (1966) extended the result to higher dimensions and rephrased it as a general large deviation principle. At about the same time, Schilder (1966) proved his large deviation result for Brownian motion, using the present change-of-measure approach.
Similar methods were used by Freidlin & Wentzell (1970/98) to study random perturbations of dynamical systems.
This was long after Sanov (1957) had obtained his large deviation result for empirical distributions of i.i.d. random variables. The relative entropy H(ν|μ) appearing in the limit had already been introduced in statistics by Kullback & Leibler (1951).
Its crucial link to the Legendre–Fenchel transform Λ ∗, long anticipated by physicists, was formalized by Donsker & Varadhan (1975/83). The latter authors also developed some profound and far-reaching extensions of Sanov’s theorem, in a series of revolutionary papers42. Ellis 40After this work, based on an unpublished paper of mine, Norberg went on to write some deep papers on random capacities, inspired by unpublished notes of W. Vervaat.
41Harald Cram´ er (1893–1985), Swedish mathematician and a founder of risk, large deviation, and extreme-value theories. He also wrote the first rigorous treatise on statistical theory. At the end of a distinguished academic career, he became chancellor of the entire Swedish university system.
42This work earned Varadhan an Abel prize in 2007, long after the death of his main collaborator. Together with Stroock, Varadhan also made profound contributions to the theory of multi-variate diffusions.
876 Foundations of Modern Probability (1985) gives a detailed exposition of those results, along with a discussion of their physical significance.
Some of the underlying principles and techniques were developed at a later stage. Thus, an abstract version of the projective limit approach was intro-duced by Dawson & G¨ artner (1987). Bryc (1990) supplemented Varad-han’s (1966) functional version of the LDP with a converse proposition. Sim-ilarly, Ioffe (1991) appended a powerful inverse to the classical ‘contraction principle’. Finally, Pukhalsky (1991) established the equivalence, under suit-able regularity conditions, of the exponential tightness and the goodness of the rate function.
Strassen (1964) established his formidable law of the iterated logarithm by direct estimates. Freedman (1971b) gives a detailed account of the original approach. Varadhan (1984) recognized the result as a corollary to Schilder’s theorem, and a complete proof along the suggested lines, different from ours, appears in Deuschel & Stroock (1989).
Gentle introductions to large deviation theory and its applications are given by Varadhan (1984) and Dembo & Zeitouni (1998). The more demanding text of Deuschel & Stroock (1989) provides much additional insight for the persistent reader.
25. Stationary processes and ergodic theorems The history of ergodic theory dates back to the work of Boltzmann43 (1887) in statistical mechanics. His ‘ergodic hypothesis’—the conjectural equal-ity of time and ensemble averages—was long accepted as an heuristic principle.
In probabilistic terms it amounts to the convergence t−1 t 0 f(Xs) ds →Ef(X0), where Xt represents the state of the system—typically the configuration of all molecules in a gas—at time t, and the expected value is taken with respect to an invariant probability measure on a compact sub-manifold of the state space.
The ergodic hypothesis was sensationally proved as a mathematical theo-rem, first in an L2-version by von Neumann (1932), after Koopman (1931) had noted the connection between measure-preserving transformations and uni-tary operators on a Hilbert space, and then in the pointwise form of Birkhoff44 (1932). The proof of the latter version was simplified in stages: first by Yosida & Kakutani (1939), who showed that the statement is an easy consequence of the maximal ergodic lemma, and then by Garsia (1965), who gave a short proof of the latter result. Khinchin (1933a/34) pioneered a translation of the basic ergodic theory into the probabilistic setting of stationary sequences and processes.
The first multi-variate ergodic theorem was obtained by Wiener (1939), who proved his result in the special case of averages over concentric balls. More general versions were established by many authors, including Day (1942) and 43Ludwig Boltzmann (1844–1906), Austrian physicist, along with Maxwell and Gibbs a founder of statistical mechanics.
44George David Birkhoff (1884–1944), prominent American mathematician.
Historical Notes and References 877 Pitt (1942). The classical methods were pushed to the limit in a notable paper by Tempel’man (1972). The first ergodic theorem for non-commuting transformations was obtained by Zygmund (1951), where the underlying max-imum inequalities go back to Hardy & Littlewood (1930). Sucheston (1983) noted that the statement follows easily from Maker’s (1940) lemma.
The ergodic theorem for random matrices was proved by Furstenberg & Kesten (1960), long before the sub-additive ergodic theorem became avail-able. The latter was originally proved by Kingman (1968), under the stronger hypothesis of joint stationarity of the array (Xm,n). The present extension and shorter proof are due to Liggett (1985).
The ergodic decomposition of invariant measures dates back to Krylov & Bogolioubov (1937), though the basic role of the invariant σ-field was not recognized until the work of Farrell (1962) and Varadarajan (1963). The connection between ergodic decompositions and sufficient statistics is explored in an elegant paper of Dynkin (1978). The traditional approach to the subject is via Choquet theory, as surveyed by Dellacherie & Meyer (1983).
The coupling equivalences in Theorem 25.25 (i) were proved by Gold-stein (1979), after Griffeath (1975) had obtained a related result for Markov chains. The shift coupling part of the same theorem was established by Berbee (1979) and Aldous & Thorisson (1993), and the version for abstract groups was then obtained by Thorisson (1996).
The original ballot theorem, due to Whitworth (1878) and later rediscov-ered by Bertrand (1887), is one of the earliest rigorous results of probability theory. It states that if two candidates A and B in an election get the propor-tions p and 1 −p of the votes, then A will lead throughout the ballot count with probability (2p−1)+. Extensions and new approaches have been noted by many authors, beginning with Andr´ e (1887) and Barbier (1887). A modern combinatorial proof is given by Feller (1968), and a simple martingale argu-ment appears in Chow & Teicher (1997). A version for cyclically stationary sequences and processes was obtained by Tak´ acs (1967). All earlier versions are subsumed by the present statement from K(99a), whose proof relies heavily on Tak´ acs’ ideas.
The first version of Theorem 25.29 was obtained by Shannon (1948), who proved convergence in probability for stationary and ergodic Markov chains in a finite state space. The Markovian restriction was lifted by McMillan (1953), who also strengthened the result to convergence in L1.
Carleson (1958) extended McMillan’s result to a countable state space. The a.s. convergence is due to Breiman (1957/60) and Ionescu Tulcea (1960) for finite state spaces and to Chung (1961) in the countable case.
Elementary introductions to stationary processes have been given by many authors, beginning with Doob (1953) and Cram´ er & Leadbetter (1967).
A modern and comprehensive survey of general ergodic theorems is given by Krengel (1985). A nice introduction to information theory is given by Billingsley (1965). Thorisson (2000) gives a detailed account of modern coupling principles.
878 Foundations of Modern Probability 26. Ergodic properties of Markov processes The first ratio ergodic theorems were obtained by Doeblin (1938b), Doob (1938/48a), Kakutani (1940), and Hurewicz (1944).
Hopf (1954) and Dunford & Schwartz (1956) extended the pointwise ergodic theorem to general L1 −L∞-contractions, and the ratio ergodic theorem was extended to positive L1-contractions by Chacon & Ornstein (1960). The present approach to their result in due to Akcoglu & Chacon (1970).
The notion of Harris recurrence goes back to Doeblin (1940a) and Harris45 (1956). The latter author used the condition to ensure the existence, in dis-crete time, of a σ-finite invariant measure. A corresponding continuous-time result was obtained by H. Watanabe (1964). The total variation convergence of Markov transition probabilities was obtained for a countable state space by Orey (1959/62), and in general by Jamison & Orey (1967). Blackwell & Freedman (1964) noted the equivalence of mixing and tail triviality. The present coupling approach goes back to Griffeath (1975) and Goldstein (1979) for the case of strong ergodicity, and to Berbee (1979) and Aldous & Thorisson (1993) for the corresponding weak result.
There is an extensive literature on ergodic theorems for Markov processes, mostly dealing with the discrete-time case.
General expositions have been given by many authors, beginning with Neveu (1971) and Orey (1971). Our treatment of Harris recurrent Feller processes is adapted from Kunita (1990), who in turn follows the discrete-time approach of Revuz (1984). Krengel (1985) gives a comprehensive survey of abstract ergodic theorems.
27. Symmetric distributions and predictable maps de Finetti46 (1930) proved that an infinite sequence of random events is exchangeable iffthe joint distribution is mixed i.i.d. He later (1937) claimed the corresponding result for general random variables, and a careful proof for even more general state spaces was given by Hewitt & Savage47 (1955).
Ryll-Nardzewski48 (1957) noted that the statement remains true under the weaker hypothesis of contractability, and B¨ uhlmann (1960) extended the result to processes on R+. The sampling approach to de Finetti’s theorem was noted in K(99b).
Finite exchangeable sequences arise in the context of sampling from a finite population, in which context a variety of limit theorems were obtained by Chernoff & Teicher (1958), H´ ajek (1960), Ros´ en (1964), Billingsley (1968), and Hagberg (1973). The representation of exchangeable processes on [0, 1] was discovered independently in K(73b), along with the associated 45Ted Harris (1919–2005), American mathematician, making profound contributions to branching processes, Markov ergodic theory, and convergence of random measures.
46Bruno de Finetti (1906–85), Italian mathematician and philosopher, whose famous theorem became a cornerstone in his theory of subjective probability and Bayesian statistics.
47This paper is notable for containing the authors’ famous 0−1 law.
48Czes law Ryll-Nardzewski (1926–2015), Polish mathematician prominent in analysis and probability, making profound contributions to both exchangeability and Palm measures.
Historical Notes and References 879 functional limit theorems. The sum of compensated jumps was shown in K(74) to converge uniformly a.s.
The optional skipping theorem—the special case of Theorem 27.7 for i.i.d.
sequences and increasing sequences of predictable times—was first noted by Doob49 (1936/53). It formalizes the intuitive idea that a gambler can’t ‘beat the odds’ by using a clever gambling strategy, as explained in Billingsley (1986). The general predictable sampling theorem, proved in K(88) along with its continuous-time counterpart, yields a simple proof of Sparre-Andersen’s (1953/54) notable identity50 for random walks, which in turn leads to a short proof of L´ evy’s third arcsine law.
Comprehensive surveys of exchangeability theory have been given by Al-dous (1985) and in K(05).
28. Multi-variate arrays and symmetries The notion of partial exchangeability—the invariance in distribution of a sequence of random variables under a proper sub-group of permutations—goes back to de Finetti (1938). Separately exchangeable arrays on N2 were first studied by Dawid (1972). Symmetric, jointly exchangeable arrays on N2 arise naturally in the context of U-statistics, whose theory goes back to Hoeffding (1948). They were later studied by McGinley & Sibson (1975), Silverman (1976), and Eagleson & Weber (1978). More recently, exchangeable arrays have been used extensively to describe random graphs.
General representations of exchangeable arrays were established indepen-dently in profound papers by Aldous (1981) and Hoover (1979), using to-tally different methods. Thus, Aldous uses probabilistic methods to derive the representation of separately exchangeable arrays on N2, whereas Hoover derives the representation of jointly exchangeable arrays of arbitrary dimension, using ideas from non-standard analysis and mathematical logic, going back to Gaif-man (1961) and Krauss (1969). Hoover also identifies pairs of functions f, g that can be used to represent the same array. Probabilistic proofs of Hoover’s results were found51 in K(92), along with the extensions to contractable arrays.
Hoover’s uniqueness criteria were amended in K(89/92).
The notion of rotatable arrays was suggested by some limit theorems for U-statistics, going back to Hoeffding (1948). Rotatable arrays also arise nat-urally in quantum mechanics, where the asymptotic eigenvalue distribution of X was studied by Wigner (1955), leading in particular to his celebrated semi-circle law for the ‘Gaussian orthogonal ensemble’, where the entries Xij = Xji are i.i.d. N(0, 1) for i < j. Olson & Uppuluri (1972) noted that, under the stated independence assumption, the Gaussian distribution follows from the 49See also the recollections of Halmos (1985), pp. 74–76, who was a student of Doob.
50Feller (1966/71) writes that Sparre-Andersen’s result ‘was a sensation greeted with incredulity, and the original proof was of an extraordinary intricacy and complexity’. He goes on to give a simplified proof, which is still quite complicated.
51Hoover’s unpublished paper requires expert knowledge in mathematical logic. Before developing my own proofs, I had to ask the author if my interpretations were correct.
880 Foundations of Modern Probability joint rotatability suggested by physical considerations52. Scores of physicists and mathematicians have extended Wigner’s results, but under independence assumptions without physical significance.
Separately and jointly rotatable arrays on N2 were first studied by Dawid (1977/78), who also conjectured the representation in the separate case. The statement was subsequently proved by Aldous (1981), under some simplifying assumptions that were later removed in K(88). The latter paper also gives the representation in the jointly rotatable case. The higher-dimensional versions, derived in K(95), are stated most conveniently in terms of multiple Wiener–Itˆ o integrals on tensor products of Hilbert spaces. Those results lead in turn to related representations of exchangeable and contractable random sheets.
Motivated by problems in population genetics, Kingman (1978) estab-lished his ‘paint-box’ representation of exchangeable partitions, later extended to more general symmetries in K(05). Algebraic and combinatorial aspects of exchangeable partitions have been studied extensively by Pitman (1995/2002) and Gnedin (1997).
A comprehensive account of contractable, exchangeable, and rotatable ar-rays is given in K(05), where the proofs of some major results take up to a hundred pages. From the huge literature on the eigenvalue distribution of ran-dom matrices, I can especially recommend the books by Mehta (1991) for physicists and by Anderson et al. (2010) for mathematicians.
29. Local time, excursions, and additive functionals Local time of Brownian motion at a fixed point was discovered and ex-plored by L´ evy (1939), who devised several explicit constructions, mostly of the type of Proposition 29.12. Much of L´ evy’s analysis is based on the obser-vation in Corollary 29.3. The elementary Lemma 29.2 is due to Skorohod (1961/62). Tanaka’s (1963) formula, first noted for Brownian motion, was taken by Meyer (1976) as a basis for a general semi-martingale approach. The resulting local time process was recognized as an occupation density, indepen-dently by Meyer (1976) and Wang (1977). Trotter (1958b) proved that Brownian local time has a jointly continuous version, and the regularization theorem for general continuous semi-martingales was obtained by Yor (1978).
Modern excursion theory originated with the seminal paper of Itˆ o (1972), which was partly inspired by work of L´ evy (1939). In particular, Itˆ o proved a version of Theorem 29.11, assuming the existence of local time. Horowitz (1972) independently studied regenerative sets and noted their connection with subordinators, equivalent to the existence of a local time. A systematic the-ory of regenerative processes was developed by Maisonneuve (1974). The remarkable Theorem 29.17 was discovered independently by Ray (1963) and Knight (1963), and the present proof is essentially due to Walsh (1978).
Our construction of the excursion process is close in spirit to L´ evy’s original ideas and those of Greenwood & Pitman (1980).
52An observer can’t distinguish between arrays differing by a joint rotation.
Historical Notes and References 881 Elementary additive functionals of integral type had been discussed exten-sively in the literature, when Dynkin proposed a study of the general case.
The existence Theorem 29.23 was obtained by Volkonsky (1960), and the construction of local time in Theorem 29.24 dates back to Blumenthal & Getoor (1964). The integral representation of CAFs in Theorem 29.25 was proved independently by Volkonsky (1958/60) and McKean & Tanaka (1961). The characterization of an additive functional in terms of a suitable measure on the state space dates back to Meyer (1962), and an explicit repre-sentation of the associated measure was found by Revuz (1970), after special cases had been considered by Hunt (1957/58).
An introduction to Brownian local time appears in Karatzas & Shreve (1998). The books by Itˆ o & McKean (1965) and Revuz & Yor (1999) con-tain an abundance of further information on the subject. The latter text may also serve as an introduction to additive functionals and excursion theory. For more information on the latter topics, the reader may consult Blumenthal & Getoor (1968), Blumenthal (1992), and Dellacherie et al. (1992).
30. Random measures, smoothing and scattering The Poisson convergence of superpositions of many small, independent point processes has been noted in increasing generality by many authors, including Palm (1943), Khinchin (1955), Ososkov (1956), Grigelionis (1963), Goldman (1967), and Jagers (1972). Limit theorems under simul-taneous thinning and rescaling of a given point process were obtained by R´ enyi (1956), Nawrotzki (1962), Belyaev (1963), and Goldman (1967). The Cox convergence of dissipative transforms, quoted from K(17b), extends the thin-ning theorem in K(75), which in turn implies Mecke’s (1968) characterization of Cox processes. The partly analogous theory of strong convergence, devel-oped by Matthes et al. (1974/78), forms a basis for their theory of infinitely divisible point processes.
The limit theorem of Dobrushin53 (1956) and its various extensions played a crucial role in the development of the subject. Related results for evolu-tions generated by independent random walks go back to Maruyama (1955).
The Poisson convergence of traffic flows was first noted by Breiman (1963).
Precise version of those results, including the role of asymptotically invariant measures, were given by Stone (1968). For constant motion, the fact that in-dependence alone implies the Poisson property was noted in K(78c). Wiener’s multi-variate ergodic theorem was extended to random measures by Nguyen & Zessin (1979). A general theory of averaging, smoothing, and scattering, with associated criteria for weak or strong Cox convergence, was developed in K(17b).
Line and flat processes have been studied extensively, first by Miles (1969/ 53Roland Dobrushin (1929–95), Russian mathematical physicist, famous for his work in statistical mechanics.
882 Foundations of Modern Probability 71/74) in the Poisson case, and then more generally by Davidson54 (1974a/c), Krickeberg (1974a/b), Papangelou (1974a-b/76), and in K(76/78b-c/81).
Spatial branching processes were studied extensively by Matthes et al. (1974/ 78). Here the stability dichotomy with associated truncation criteria go back to Debes et al. (1970/71). The Palm tree criterion for stability55 was noted in K(77). Further developments in this area appear in Liemant et al. (1988).
A systematic development of point process theory was initiated by the East-German school, as documented by Matthes56 et al. (1974/78/82). Daley & Vere-Jones (2003/08) give a detailed introduction to point processes and their applications. General random measure theory has been studied exten-sively since Jagers (1974) and K(75/76/83/86), and a modern, comprehensive account was attempted in K(17b).
31. Palm and Gibbs kernels, local approximation In his study of intensity fluctuations in telephone traffic, now recognized as the beginning of modern point process theory, Palm57 (1943) introduced some Palm probabilities associated with a simple, stationary point process on R. An extended and more rigorous account of Palm’s ideas was given by Khinchin (1955), in a monograph on queuing theory, and some basic inversion formulas became known as the Palm–Khinchin equations.
Kaplan (1955) established the fundamental time-cycle duality as an exten-sion of results for renewal processes by Doob (1948b), after a partial discrete-time version had been noted by Kac (1947). Kaplan’s result was rediscovered in the context of Palm distributions, independently by Ryll-Nardzewski (1961) and Slivnyak (1962).
In the special case of intervals on the line, Theorem 31.5 (i) was first noted by Korolyuk, as cited by Khinchin (1955), and part (iii) of the same theorem was obtained by Ryll-Nardzewski (1961).
The general versions are due to K¨ onig & Matthes (1963) and Matthes (1963) for d = 1, and to Matthes et al. (1978) for d > 1.
Averaging properties of Palm and related measures have been explored by many authors, including Slivnyak (1962), Z¨ ahle (1980), and Thorisson (1995). Palm measures of stationary point processes on the line have been studied in detail by many authors, including Nieuwenhuis (1994) and Tho-risson (2000). The duality between Palm measures and moment densities was noted in K(99c). Local approximations of general point processes and the associated Palm measures were studied in K(09). The iteration approach to 54Rollo Davidson (1944–1970), British math prodigy, making profound contributions to stochastic analysis and geometry, before dying in a climbing accident at age 25. A yearly prize was established to his memory.
55often referred to as ‘Kallenberg’s backward method’—there I got what I deserve!
56Klaus Matthes (1931–98), a leader of the East-German probability school, who along with countless students and collaborators developed the modern theory of point processes, with applications to queuing, branching processes, and statistical mechanics.
Served for many years as head of the Weierstrass Institute of the Academy of Sciences in DDR.
57Conny Palm (1907–51), Swedish engineer and applied mathematician, and the founder of modern point process theory. Also known for designing the first Swedish computer.
Historical Notes and References 883 conditional distributions and Palm measures goes back to K(10/11b).
Exterior conditioning plays a basic role in statistical mechanics, where the theory of Gibbs measures, inspired by the pioneering work of Gibbs58 (1902), has been pursued by scores of physicists and mathematicians. In particular, Theorem 31.15 (i) is a version of the celebrated ‘DLR equation’, noted by Dobrushin (1970) and Lanford & Ruelle (1969). Connections with mod-ern point process theory were recognized and explored by Nguyen & Zessin (1979), Matthes et al. (1979), and Rauchenschwandtner (1980). Moti-vated by problems in stochastic geometry, Papangelou (1974b) introduced the kernel named after him, under some simplifying assumptions that were later removed in K(78a). An elegant projection approach to Papangelou ker-nels was noted by van der Hoeven (1983). The general theory of Palm and Gibbs kernels was developed in K(83/86).
As with many subjects of the previous chapter, here too the systematic development was initiated by Matthes et al. (1974/78/82). A comprehen-sive account of general Palm and Gibbs kernels is given in K(17b), which also contains a detailed exposition of the stationary case. Applications to queuing theory and related areas are discussed by many authors, including Franken et al. (1981) and Baccelli & Br´ emaud (2000).
32. Stochastic equations and martingale problems Long before the existence of any general theory for SDEs, Langevin (1908) proposed his equation to model the velocity of a Brownian particle. The solu-tion process was later studied by Ornstein & Uhlenbeck (1930), and was thus named after them. A more rigorous discussion appears in Doob (1942a).
The general idea of a stochastic differential equation goes back to Bern-stein (1934/38), who proposed a pathwise construction of diffusion processes by a discrete approximation, leading in the limit to a formal differential equa-tion driven by a Brownian motion. Kolmogorov posed the problem of con-structing a diffusion process with given drift and diffusion rates59, which mo-tivated Itˆ o (1942a/51b) to develop his theory of SDEs, including a precise definition of the stochastic integral, conditions for existence and uniqueness of solutions, the Markov property of the solution process, and the continuous dependence on initial state. Similar results were later obtained independently by Gihman (1947/50/51). A general theory of stochastic flows was developed by Kunita (1990) and others.
The notion of a weak solution was introduced by Girsanov (1960), and criteria for weak existence appear in Skorohod (1965). The ideas behind the transformations in Propositions 32.12 and 32.13 date back to Girsanov (1960) and Volkonsky (1958), respectively. The notion of a martingale prob-lem goes back to L´ evy’s martingale characterization of Brownian motion and 58Josiah Willard Gibbs (1839–1903), American physicist, along with Maxwell and Boltzmann a founder of statistical mechanics.
59The problem was also solved independently by Doeblin (1940b), who used a totally different method based on a random time change; see the notes to Chapter 19.
884 Foundations of Modern Probability Dynkin’s theory of the characteristic operator. A comprehensive theory was developed by Stroock & Varadhan (1969), who established the equiva-lence with weak solutions to the associated SDEs, gave general criteria for uniqueness in law, and deduced conditions for the strong Markov and Feller properties. The measurability part of Theorem 32.10 extends an exercise in Stroock & Varadhan (1979).
Yamada & Watanabe (1971) proved that weak existence and pathwise uniqueness imply strong existence and uniqueness in law.
Under the same conditions, they also established the existence of a functional solution, possibly depending on the initial distribution. The latter dependence was later removed in K(96a). Ikeda & Watanabe (1989) noted how the notions of pathwise uniqueness and uniqueness in law extend by conditioning from degenerate to arbitrary initial distributions.
The theory of SDEs is covered by countless textbooks on different levels, including Ikeda & Watanabe (2014), Rogers & Williams (2000), and Karatzas & Shreve (1998). More information on the martingale problem appears in Jacod (1979), Stroock & Varadhan (1979), and Ethier & Kurtz (1986). There is also an extensive literature on SPDEs—stochastic partial differential equations—where a standard reference is Walsh (1984).
33. One-dimensional SDEs and diffusions The study of continuous Markov processes with associated parabolic dif-ferential equations, initiated by Kolmogorov (1931a) and Feller (1936), took a new turn with the seminal papers of Feller (1952/54), who studied the generators of one-dimensional diffusions, within the framework of the newly developed semi-group theory. In particular, Feller gave a complete description in terms of scale function and speed measure, classified the boundary behav-ior, and showed how the latter is determined by the domain of the generator.
Finally, he identified the cases when explosion occurs, corresponding to the absorption cases in Theorem 33.15.
A more probabilistic approach to those results was developed by Dynkin (1955b/59), who along with Ray (1956) continued Feller’s study of the relation-ship between analytic properties of the generator and sample path properties of the process. The idea of constructing diffusions on a natural scale through a time change of Brownian motion is due to Hunt (1958) and Volkonsky (1958), and the full description in Theorem 33.9 was completed by Volkonsky (1960) and Itˆ o & McKean (1965). The present stochastic calculus approach is based on ideas in M´ el´ eard (1986).
The ratio ergodic Theorem 33.14 was first obtained for Brownian motion by Derman (1954), by a method originally devised for discrete-time chains by Doeblin (1938a). It was later extended to more general diffusions by Motoo & Watanabe (1958).
The ergodic behavior of recurrent, one-dimensional diffusions was analyzed by Maruyama & Tanaka (1957).
For one-dimensional SDEs, Skorohod (1965) noticed that Itˆ o’s original Historical Notes and References 885 Lipschitz condition for pathwise uniqueness can be replaced by a weaker H¨ older condition. He also obtained a corresponding comparison theorem. The im-proved conditions in Theorems 33.3 and 33.5 are due to Yamada & Wata-nabe (1971) and Yamada (1973), respectively. Perkins (1982) and Le Gall (1983) noted how the semi-martingale local time can be used to simplify and unify the proofs of those and related results. The fundamental weak existence and uniqueness criteria in Theorem 33.1 were discovered by Engelbert & Schmidt (1984/85), whose (1981) 0−1 law is implicit in Lemma 33.2.
Elementary introductions to one-dimensional diffusions appear in Breiman (1968), Freedman (1971b), and Rogers & Williams (2000). More detailed and advanced accounts are given by Dynkin (1965) and Itˆ o & McKean (1965). Further information on one-dimensional SDEs appears in Karatzas & Shreve (1998) and Revuz & Yor (1999).
34. PDE connections and potential theory The fundamental solution to the heat equation in terms of the Gaussian kernel was obtained by Laplace (1809). A century later Bachelier (1900) noted the relationship between Brownian motion and the heat equation. The PDE connections were further explored by many authors, including Kol-mogorov (1931a), Feller (1936), Kac (1951), and Doob (1955). A first version of Theorem 34.1 was obtained by Kac (1949), who was in turn inspired by Feynman’s (1948) work on the Schr¨ odinger equation. Theorem 34.2 is due to Stroock & Varadhan (1969).
In a discussion of the Dirichlet problem, Green (1828) introduced the functions named after him. The Dirichlet, sweeping, and equilibrium problems were all studied by Gauss (1840), in a pioneering paper on electrostatics. The rigorous developments in potential theory began with Poincar´ e (1890/99), who solved the Dirichlet problem for domains with a smooth boundary. The equilibrium measure was characterized by Gauss as the unique measure min-imizing a certain energy functional, but the existence of the minimum was not rigorously established until Frostman (1935). The cone condition for regularity is due to Zaremba (1909).
The first probabilistic connections were made by Phillips & Wiener (1923) and Courant et al. (1928), who solved the Dirichlet problem in the plane by a method of discrete approximation, involving a version of Theorem 34.5 for a simple symmetric random walk. Kolmogorov & Leontovich (1933) evaluated a special hitting distribution for two-dimensional Brownian motion and noted that it satisfies the heat equation. Kakutani (1944b/45) showed how the harmonic measure and sweeping kernel can be expressed in terms of a Brownian motion. The probabilistic methods were extended and per-fected by Doob (1954/55), who noted the profound connections with martin-gale theory. A general potential theory was later developed by Hunt (1957/58) for broad classes of Markov processes.
The interpretation of Green functions as occupation densities was known 886 Foundations of Modern Probability to Kac (1951), and a probabilistic approach to Green functions was developed by Hunt60 (1956). The connection between equilibrium measures and quitting times, implicit already in Spitzer (1964) and Itˆ o & McKean (1965), was exploited by Chung61 (1973) to yield the representation in Theorem 34.14.
Time reversal of diffusion processes was first considered by Schr¨ odinger (1931). Kolmogorov (1936b/37) computed the transition kernels of the re-versed process and gave necessary and sufficient conditions for symmetry. The basic role of time reversal and duality in potential theory was recognized by Doob (1954) and Hunt (1958). Proposition 34.15 and the related construc-tion in Theorem 34.21 go back to Hunt, but Theorem 34.19 may be new.
The measure ν in Theorem 34.21 is related to the Kuznetsov measures, dis-cussed extensively in Getoor (1990). The connection between random sets and alternating capacities was established by Choquet (1953/54), and a cor-responding representation of infinitely divisible random sets was obtained by Matheron (1975).
The classical decomposition of subharmonic functions goes back to Riesz (1926/30). The basic connection between super-harmonic functions and super-martingales was established by Doob (1954), leading to a probabilistic coun-terpart of Riesz’ theorem. This is where he recognized the need for a gen-eral decomposition theorem for super-martingales, generalizing the elementary Lemma 9.10. Doob also proved the continuity of the composition of an exces-sive function with Brownian motion.
Introductions to probabilistic potential theory and other PDE connections appear in Bass (1995/98), Chung (1995), and Karatzas & Shreve (1998).
A detailed exposition of classical probabilistic potential theory is given by Port & Stone (1978). Doob (1984) provides a wealth of further informa-tion on both the analytic and the probabilistic aspects. An introduction to Hunt’s work and subsequent developments is given by Chung (1982). More advanced and comprehensive treatments appear in Blumenthal & Getoor (1968), Dellacherie & Meyer (1987/92), and Sharpe (1988), which also cover the theory of additive functionals and their potentials. The connection between random sets and alternating capacities is discussed in Molchanov (2005).
35. Stochastic differential geometry Brownian motion and stochastic calculus on Riemannian manifolds and Lie groups were considered already by Yosida (1949), Itˆ o (1950), Dynkin (1961), and others, involving the sophisticated machinery of Levy-Civita connections and constructions by ‘rolling without slipping’. Martingales in Riemannian 60Gilbert Hunt (1916–2008), American mathematician and a prominent tennis player, whose deep and revolutionary papers established a close connection between Markov pro-cesses and general potential theory.
61Kai Lai Chung (1917–2009), Chinese-American mathematician, making significant contributions to many areas of modern probability. He became a friend of mine and even spent a week as a guest in our house.
Historical Notes and References 887 manifolds have been studied by Duncan (1983) and Kendall (1988).
A L´ evy–Khinchin type representation of infinitely divisible distributions on Lie groups was obtained by Hunt (1956b), and a related theory of L´ evy processes in Lie groups and homogeneous spaces has been developed by Liao (2018).
Schwartz62 (1980/82) realized that much of the classical theory could be developed for semi-martingales in a differentiable manifold without any Riemannian structure. His ideas63 were further pursued by Meyer (1981/82), arguably leading to a significant simplification64 of the whole area.
The need of a connection to define martingales in a differential manifold was recognized by Meyer, and the present definition of martingales in connected manifolds goes back to Bismut (1981). Characterizations of martingales in terms of convex functions were given by Darling (1982). Our notion of local characteristics of a semi-martingale, believed to be new65, arose from attempts to make probabilistic sense of the beautiful but somewhat obscure Schwartz– Meyer theory for general semi-martingales. Though the diffusion rates require no additional structure, the drift rate makes sense only in manifolds with a connection66.
A survey of Brownian motion and SDEs in a Riemannian manifold appears in Ikeda & Watanabe (2014).
For beginners we further recommend the ‘overture’ to the Riemannian theory in Rogers & Williams (2000), and the introduction to stochastic differential geometry by Kendall (1988). De-tailed and comprehensive accounts of the Schwartz–Meyer theory are given by ´ Emery (1989/2000).
Appendices A modern and comprehensive account of general measure theory is given by Bogachev (2007). Another useful reference is Dudley (1989), who also discusses the axiomatization of mathematics. The projection and section the-orems rely on capacity theory, for which we refer to Dellacherie (1972) and Dellacherie & Meyer (1975/83). Most of the material on topology and functional analysis can be found in standard textbooks.
The J1-topology was introduced by Skorohod (1956), and detailed expo-sitions appear in Billingsley (1968), Ethier & Kurtz (1986), and Jacod & Shiryaev (1987). Our discussion of topologies on MS is adapted from K(17b). The topology on the space of closed sets was introduced in a more general setting by Fell (1962), and a detailed proof of Theorem A6.1, differ-62Laurent Schwartz (1915–2002), French mathematician, whose creation of distribu-tion theory earned him a Fields medal in 1950. Towards the end of a long and distinguished career, he got interested in stochastic differential geometry.
63Michel Emery tells me that much of this material was known before the work of Schwartz, whose aim was to develop a Bourbaki-style approach to stochastic calculus in manifolds.
64hence the phrase sans larmes (without tears) in the title of Meyer’s 1981 paper 65A notion of local characteristics also appears in Meyer (1981), though with a totally different meaning.
66simply because a martingale is understood to be a semi-martingale without drift 888 Foundations of Modern Probability ent from ours, appears in Matheron (1975). A modern and comprehensive account of the relevant set theory is given by Molchanov (2004).
The material on differential geometry is standard, and our account is adapted from ´ Emery (1989), though with a simplified notation.
Bibliography Here we list only publications that are explicitly mentioned in the text or notes or are directly related to results cited in the book. Knowledgeable readers will notice that many books or papers of historical significance are missing. The items are ordered alphabetically after the main part of the last name, as in de Finetti and von Neumann.
Adler, R.J. (1990).
An Introduction to Continuity, Extrema, and Related Topics for General Gaussian Proceses. Inst. Math. Statist., Hayward, CA.
Akcoglu, M.A., Chacon, R.V. (1970). Ergodic properties of operators in Lebesgue space. Adv. Appl. Probab. 2, 1–47.
Aldous, D.J. (1978). Stopping times and tightness. Ann. Probab. 6, 335–340.
— (1981). Representations for partially exchangeable arrays of random variables. J. Mul-tivar. Anal. 11, 581–598.
— (1985). Exchangeability and related topics. Lect. Notes in Math. 1117, 1–198. Springer, Berlin.
Aldous, D., Thorisson, H. (1993). Shift-coupling. Stoch. Proc. Appl. 44, 1-14.
Alexandrov, A.D. (1940/43). Additive set-functions in abstract spaces. Mat. Sb. 8, 307–348; 9, 563–628; 13, 169–238.
Alexandrov, P. (1916). Sur la puissance des ensembles mesurables B. C.R. Acad. Sci.
Paris 162, 323–325.
Anderson, G.W., Guionnet, A., Zeitouni, O. (2010). Am Introduction to Random Matrices. Cambridge Univ. Press.
Andr´ e, D. (1887). Solution directe du probleme r´ esolu par M. Bertrand. C.R. Acad. Sci.
Paris 105, 436–437.
Athreya, K., McDonald, D., Ney, P. (1978). Coupling and the renewal theorem.
Amer. Math. Monthly 85, 809–814.
Athreya, K.B., Ney, P.E. (1972). Branching Processes. Springer, Berlin.
Baccelli, F., Br´ emaud, P. (2000). Elements of Queueing [sic] Theory, 2nd ed. Springer, Berlin.
Bachelier, L. (1900). Th´ eorie de la sp´ eculation. Ann. Sci. ´ Ecole Norm. Sup. 17, 21–86.
— (1901). Th´ eorie math´ ematique du jeu. Ann. Sci. ´ Ecole Norm. Sup. 18, 143–210.
Bahadur, R.R. (1971). Some Limit Theorems in Statistics. SIAM, Philadelphia.
Barbier, ´ E. (1887). G´ en´ eralisation du probl eme r´ esolu par M. J. Bertrand. C.R. Acad.
Sci. Paris 105, 407, 440.
Bartlett, M.S. (1949). Some evolutionary stochastic processes. J. Roy. Statist. Soc. B 11, 211–229.
Bass, R.F. (1995). Probabilistic Techniques in Analysis. Springer, NY.
— (1998). Diffusions and Elliptic Operators. Springer, NY.
889 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 890 Foundations of Modern Probability Bateman, H. (1910). Note on the probability distribution of α-particles. Philos. Mag. 20 (6), 704–707.
Baxter, G. (1961). An analytic approach to finite fluctuation problems in probability. J.
d’Analyse Math. 9, 31–70.
Bell, D.R. (1987). The Malliavin Calculus. Longman & Wiley.
Belyaev, Y.K. (1963). Limit theorems for dissipative flows. Th. Probab. Appl. 8, 165– 173.
Berbee, H.C.P. (1979). Random Walks with Stationary Increments and Renewal Theory.
Mathematisch Centrum, Amsterdam.
Bernoulli, J. (1713). Ars Conjectandi. Thurnisiorum, Basel.
Bernstein, S.N. (1927). Sur l’extension du th´ eoreme limite du calcul des probabilit´ es aux sommes de quantit´ es d´ ependantes. Math. Ann. 97, 1–59.
— (1934). Principes de la th´ eorie des ´ equations diff´ erentielles stochastiques. Trudy Fiz.-Mat., Steklov Inst., Akad. Nauk. 5, 95–124.
— (1937). On some variations of the Chebyshev inequality (in Russian). Dokl. Acad. Nauk SSSR 17, 275–277.
— (1938). ´ Equations diff´ erentielles stochastiques. Act. Sci. Ind. 738, 5–31.
Bertoin, J. (1996). L´ evy Processes. Cambridge Univ. Press.
Bertrand, J. (1887). Solution d’un probl eme. C.R. Acad. Sci. Paris 105, 369.
Bichteler, K. (1979). Stochastic integrators. Bull. Amer. Math. Soc. 1, 761–765.
Bienaym´ e, I.J. (1845).
De la loi de multiplication et de la dur´ ee des familles.
Soc.
Philomat. Paris Extraits, S´ er. 5, 37–39.
— (1853). Consid´ erations a l’appui de la d´ ecouverte de Laplace sur la loi de probabilit´ e dans la m´ ethode des moindres carr´ es. C.R. Acad. Sci. Paris 37, 309–324.
Billingsley, P. (1965). Ergodic Theory and Information. Wiley, NY.
— (1968). Convergence of Probability Measures. Wiley, NY.
— (1986/95). Probability and Measure, 3rd ed. Wiley, NY.
Birkhoff, G.D. (1932). Proof of the ergodic theorem. Proc. Natl. Acad. Sci. USA 17, 656–660.
Bismut, J.M. (1981). Martingales, the Malliavin calculus and hypoellipticity under gen-eral H¨ ormander’s condition. Z. Wahrsch. verw. Geb. 56, 469–505.
Blackwell, D. (1948). A renewal theorem. Duke Math. J. 15, 145–150.
— (1953). Extension of a renewal theorem. Pacific J. Math. 3, 315–320.
Blackwell, D., Freedman, D. (1964). The tail σ-field of a Markov chain and a theorem of Orey. Ann. Math. Statist. 35, 1291–1295.
Blumenthal, R.M. (1957). An extended Markov property. Trans. Amer. Math. Soc. 82, 52–72.
— (1992). Excursions of Markov Processes. Birkh¨ auser, Boston.
Blumenthal, R.M., Getoor, R.K. (1964).
Local times for Markov processes.
Z.
Wahrsch. verw. Geb. 3, 50–74.
— (1968). Markov Processes and Potential Theory. Academic Press, NY.
Bibliography 891 Bochner, S. (1932). Vorlesungen ¨ uber Fouriersche Integrale, Akad. Verlagsges., Leipzig.
Repr. Chelsea, NY 1948.
— (1933). Monotone Funktionen, Stieltjessche Integrale und harmonische Analyse. Math.
Ann. 108, 378–410.
Bogachev, V.I. (2007). Measure Theory 1–2. Springer.
Boltzmann, L. (1887). ¨ Uber die mechanischen Analogien des zweiten Hauptsatzes der Thermodynamik. J. Reine Angew. Math. 100, 201–212.
Borel, E. (1895). Sur quelques points de la th´ eorie des fonctions. Ann. Sci. ´ Ecole Norm.
Sup. (3) 12, 9–55.
— (1898). Le¸ cons sur la Th´ eorie des Fonctions. Gauthier-Villars, Paris.
— (1909a). Les probabilit´ es d´ enombrables et leurs applications arithm´ etiques. Rend. Circ.
Mat. Palermo 27, 247–271.
— (1909b/24). Elements of the Theory of Probability, 3rd ed., Engl. transl. Prentice-Hall 1965.
Bortkiewicz, L.v. (1898). Das Gesetz der kleinen Zahlen. Teubner, Leipzig.
Breiman, L. (1957/60). The individual ergodic theorem of infomation theory. Ann. Math.
Statist. 28, 809–811; 31, 809–810.
— (1963). The Poisson tendency in traffic distribution. Ann. Math. Statist. 34, 308–311.
— (1968). Probability. Addison-Wesley, Reading, MA. Repr. SIAM, Philadelphia 1992.
Brown, R. (1828). A brief description of microscopical observations made in the months of June, July and August 1827, on the particles contained in the pollen of plants; and on the general existence of active molecules in organic and inorganic bodies. Ann.
Phys. 14, 294–313.
Brown, T.C. (1978). A martingale approach to the Poisson convergence of simple point processes. Ann. Probab. 6, 615–628.
— (1979). Compensators and Cox convergence. Math. Proc. Cambridge Phil. Soc. 90, 305–319.
Bryc, W. (1990). Large deviations by the asymptotic value method. In Diffusion Pro-cesses and Related Problems in Analysis (M. Pinsky, ed.), 447–472. Birkh¨ auser, Basel.
B¨ uhlmann, H. (1960). Austauschbare stochastische Variabeln und ihre Grenzwerts¨ atze.
Univ. Calif. Publ. Statist. 3, 1–35.
Buniakowsky, V.Y. (1859). Sur quelques in´ egalit´ es concernant les int´ egrales ordinaires et les int´ egrales aux diff´ erences finies. M´ em. de l’Acad. St.-Petersbourg 1:9.
Burkholder, D.L. (1966). Martingale transforms. Ann. Math. Statist. 37, 1494–1504.
Burkholder, D.L., Davis, B.J., Gundy, R.F. (1972). Integral inequalities for con-vex functions of operators on martingales. Proc. 6th Berkeley Symp. Math. Statist.
Probab. 2, 223–240.
Burkholder, D.L., Gundy, R.F. (1970). Extrapolation and interpolation of quasi-linear operators on martingales. Acta Math. 124, 249–304.
Cameron, R.H., Martin, W.T. (1944). Transformation of Wiener integrals under trans-lations. Ann. Math. 45, 386–396.
Cantelli, F.P. (1917). Su due applicazione di un teorema di G. Boole alla statistica matematica. Rend. Accad. Naz. Lincei 26, 295–302.
892 Foundations of Modern Probability — (1933). Sulla determinazione empirica della leggi di probabilit a. Giorn. Ist. Ital. Attuari 4, 421–424.
Carath´ eodory, C. (1918/27). Vorlesungen ¨ uber reelle Funktionen, 2nd ed. Teubner, Leipzig (1st ed. 1918). Repr. Chelsea, NY 1946.
Carleson, L. (1958). Two remarks on the basic theorems of information theory. Math.
Scand. 6, 175–180.
Cauchy, A.L. (1821). Cours d’analyse de l’ ´ Ecole Royale Polytechnique, Paris.
Chacon, R.V., Ornstein, D.S. (1960). A general ergodic theorem. Illinois J. Math. 4, 153–160.
Chapman, S. (1928).
On the Brownian displacements and thermal diffusion of grains suspended in a non-uniform fluid. Proc. Roy. Soc. London (A) 119, 34–54.
Chebyshev, P.L. (1867). Des valeurs moyennes. J. Math. Pures Appl. 12, 177–184.
— (1890). Sur deux th´ eoremes relatifs aux probabilit´ es. Acta Math. 14, 305–315.
Chentsov, N.N. (1956). Weak convergence of stochastic processes whose trajectories have no discontinuities of the second kind and the “heuristic” approach to the Kolmogorov– Smirnov tests. Th. Probab. Appl. 1, 140–144.
Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. Ann. Math. Statist. 23, 493–507.
Chernoff, H., Teicher, H. (1958). A central limit theorem for sequences of exchange-able random variables. Ann. Math. Statist. 29, 118–130.
Choquet, G. (1953/54). Theory of capacities. Ann. Inst. Fourier Grenoble 5, 131–295.
Chow, Y.S., Teicher, H. (1978/97). Probability Theory: Independence, Interchangeabil-ity, Martingales, 3nd ed. Springer, NY.
Chung, K.L. (1960). Markov Chains with Stationary Transition Probabilities. Springer, Berlin.
— (1961). A note on the ergodic theorem of information theory. Ann. Math. Statist. 32, 612–614.
— (1968/74). A Course in Probability Theory, 2nd ed. Academic Press, NY.
— (1973). Probabilistic approach to the equilibrium problem in potential theory. Ann.
Inst. Fourier Grenoble 23, 313–322.
— (1982). Lectures from Markov Processes to Brownian Motion. Springer, NY.
— (1995). Green, Brown, and Probability. World Scientific, Singapore.
Chung, K.L., Doob, J.L. (1965). Fields, optionality and measurability. Amer. J. Math.
87, 397–424.
Chung, K.L., Fuchs, W.H.J. (1951). On the distribution of values of sums of random variables. Mem. Amer. Math. Soc. 6.
Chung, K.L., Ornstein, D.S. (1962). On the recurrence of sums of random variables.
Bull. Amer. Math. Soc. 68, 30–32.
Chung, K.L., Walsh, J.B. (1974). Meyer’s theorem on previsibility. Z. Wahrsch. verw.
Geb. 29, 253–256.
Chung, K.L., Williams, R.J. (1983/90). Introduction to Stochastic Integration, 2nd ed.
Birkh¨ auser, Boston.
C ¸inlar, E. (2011). Probability and Stochastics. Springer.
Bibliography 893 Clark, J.M.C. (1970/71).
The representation of functionals of Brownian motion by stochastic integrals. Ann. Math. Statist. 41, 1282–95; 42, 1778.
Copeland, A.H., Regan, F. (1936). A postulational treatment of the Poisson law. Ann.
Math, 37, 357–362.
Courant, R., Friedrichs, K., Lewy, H. (1928).
¨ Uber die partiellen Differentialgle-ichungen der mathematischen Physik. Math. Ann. 100, 32–74.
Cournot, A.A. (1843). Exposition de la th´ eorie des chances et des probabilit´ es. Hauchette, Paris.
Courr ege, P. (1962/63).
Int´ egrales stochastiques et martingales de carr´ e int´ egrable.
Sem. Brelot–Choquet–Deny 7. Publ. Inst. H. Poincar´ e.
Cox, D.R. (1955). Some statistical methods connected with series of events. J. R. Statist.
Soc. Ser. B 17, 129–164.
Cram´ er, H. (1938). Sur un nouveau th´ eoreme-limite de la th´ eorie des probabilit´ es. Actual.
Sci. Indust. 736, 5–23.
— (1942). On harmonic analysis in certain functional spaces. Ark. Mat. Astr. Fys. 28B:12 (17 pp.).
Cram´ er, H., Leadbetter, M.R. (1967). Stationary and Related Stochastic Processes.
Wiley, NY.
Cram´ er, H., Wold, H. (1936). Some theorems on distribution functions. J. London Math. Soc. 11, 290–295.
Cs¨ org¨ o, M., R´ ev´ esz, P. (1981). Strong Approximations in Probability and Statistics.
Academic Press, NY.
Daley, D.J., Vere-Jones, D. (2003/08). An Introduction to the Theory of Point Pro-cesses 1-2, 2nd ed. Springer, NY.
Dambis, K.E. (1965). On the decomposition of continuous submartingales. Th. Probab.
Appl. 10, 401–410.
Daniell, P.J. (1918/19). Integrals in an infinite number of dimensions. Ann. Math. (2) 20, 281–288.
— (1919/20). Functions of limited variation in an infinite number of dimensions. Ann.
Math. (2) 21, 30–38.
— (1920). Stieltjes derivatives. Bull. Amer. Math. Soc. 26, 444–448.
Darling, R.W.R. (1982). Martingales in manifolds—definition, examples and behaviour under maps. Lect. Notes in Math. 921, Springer 1982.
Davidson, R. (1974a). Stochastic processes of flats and exchangeability. In: Stochastic Geometry (E.F. Harding & D.G. Kendall, eds.), pp. 13–45. Wiley, London.
— (1974b). Exchangeable point-processes. Ibid., 46–51.
— (1974c). Construction of line-processes: second-order properties. Ibid., 55–75.
Davis, B.J. (1970). On the integrability of the martingale square function. Israel J. Math.
8, 187–190.
Dawid, A.P. (1977). Spherical matrix distributions and a multivariate model. J. Roy.
Statist. Soc. (B) 39, 254–261.
— (1978). Extendibility of spherical matrix distributions. J. Multivar. Anal. 8, 559–566.
894 Foundations of Modern Probability Dawson, D.A. (1977). The critical measure diffusion. Z. Wahrsch. verw. Geb. 40, 125– 145.
— (1993). Measure-valued Markov processes. In: ´ Ecole d’´ et´ e de probabilit´ es de Saint-Flour XXI–1991. Lect. Notes in Math. 1541, 1–126. Springer, Berlin.
Dawson, D.A., G¨ artner, J. (1987). Large deviations from the McKean–Vlasov limit for weakly interacting diffusions. Stochastics 20, 247–308.
Dawson, D.A., Perkins, E.A. (1991). Historical processes. Mem. Amer. Math. Soc.
93, #454.
Day, M.M. (1942). Ergodic theorems for Abelian semigroups. Trans. Amer. Math. Soc.
51, 399–412.
Debes, H., Kerstan, J., Liemant, A., Matthes, K. (1970/71). Verallgemeinerung eines Satzes von Dobrushin I, III. Math. Nachr. 47, 183–244; 50, 99–139.
Dellacherie, C. (1972). Capacit´ es et Processus Stochastiques. Springer, Berlin.
— (1980).
Un survol de la th´ eorie de l’int´ egrale stochastique.
Stoch. Proc. Appl.
10, 115-144.
Dellacherie, C., Maisonneuve, B., Meyer, P.A. (1992). Probabilit´ es et Potentiel, V. Hermann, Paris.
Dellacherie, C., Meyer, P.A. (1975/80/83/87). Probabilit´ es et Potentiel, I–IV. Her-mann, Paris. Engl. trans., North-Holland.
Dembo, A., Zeitouni, O. (1998). Large Deviations Techniques and Applications, 2nd ed.
Springer, NY.
Derman, C. (1954). Ergodic property of the Brownian motion process. Proc. Natl. Acad.
Sci. USA 40, 1155–1158.
Deuschel, J.D., Stroock, D.W. (1989). Large Deviations. Academic Press, Boston.
Dobrushin, R.L. (1956). On Poisson laws for distributions of particles in space (in Rus-sian). Ukrain. Mat. Z. 8, 127–134.
— (1970). Gibbsian random fields without hard core conditions. Theor. Math. Phys. 4, 705–719.
Doeblin, W. (1938a). Expos´ e de la th´ eorie des chaˆ ınes simples constantes de Markov a un nombre fini d’´ etats. Rev. Math. Union Interbalkan. 2, 77–105.
— (1938b). Sur deux problemes de M. Kolmogoroffconcernant les chaˆ ınes d´ enombrables.
Bull. Soc. Math. France 66, 210–220.
— (1939a). Sur les sommes d’un grand nombre de variables al´ eatoires ind´ ependantes. Bull.
Sci. Math. 63, 23–64.
— (1939b). Sur certains mouvements al´ eatoires discontinus. Skand. Aktuarietidskr. 22, 211–222.
— (1940a). El´ ements d’une th´ eorie g´ en´ erale des chaˆ ınes simples constantes de Markoff.
Ann. Sci. Ecole Norm. Sup. 3, 57, 61–111.
— (1940b/2000). Sur l’´ equation de Kolmogoroff. C.R. Acad. Sci. 331.
— (1947). Sur l’ensemble de puissances d’une loi de probabilit´ e. Ann. Ec. Norm. Sup.
D¨ ohler, R. (1980).
On the conditional independence of random events.
Th. Probab.
Appl. 25, 628–634.
Bibliography 895 Dol´ eans(-Dade), C. (1967a). Processus croissants naturel et processus croissants tr es bien mesurable. C.R. Acad. Sci. Paris 264, 874–876.
— (1967b). Int´ egrales stochastiques d´ ependant d’un parametre. Publ. Inst. Stat. Univ.
Paris 16, 23–34.
— (1970). Quelques applications de la formule de changement de variables pour les semi-martingales. Z. Wahrsch. verw. Geb. 16, 181–194.
Dol´ eans-Dade, C., Meyer, P.A. (1970).
Int´ egrales stochastiques par rapport aux martingales locales. Lect. Notes in Math. 124, 77–107. Springer, Berlin.
Donsker, M.D. (1951/52). An invariance principle for certain probability limit theorems.
Mem. Amer. Math. Soc. 6.
— (1952). Justification and extension of Doob’s heuristic approach to the Kolmogorov– Smirnov theorems. Ann. Math. Statist. 23, 277–281.
Donsker, M.D., Varadhan, S.R.S. (1975/83). Asymptotic evaluation of certain Mar-kov process expectations for large time, I–IV. Comm. Pure Appl. Math. 28, 1–47, 279–301; 29, 389–461; 36, 183–212.
Doob, J.L. (1936). Note on probability. Ann. Math. (2) 37, 363–367.
— (1937). Stochastic processes depending on a continuous parameter. Trans. Amer. Math.
Soc. 42, 107–140.
— (1938). Stochastic processes with an integral-valued parameter. Trans. Amer. Math.
Soc. 44, 87–150.
— (1940). Regularity properties of certain families of chance variables. Trans. Amer. Math.
Soc. 47, 455–486.
— (1942a). The Brownian movement and stochastic equations. Ann. Math. 43, 351–369.
— (1942b). Topics in the theory of Markoffchains. Trans. Amer. Math. Soc. 52, 37–64.
— (1945). Markoffchains—denumerable case. Trans. Amer. Math. Soc. 58, 455–473.
— (1947). Probability in function space. Bull. Amer. Math. Soc. 53, 15–30.
— (1948a). Asymptotic properties of Markov transition probabilities. Trans. Amer. Math.
Soc. 63, 393–421.
— (1948b). Renewal theory from the point of view of the theory of probability. Trans.
Amer. Math. Soc. 63, 422–438.
— (1949). Heuristic approach to the Kolmogorov–Smirnov theorems. Ann. Math. Statist.
20, 393–403.
— (1951). Continuous parameter martingales. Proc. 2nd Berkeley Symp. Math. Statist.
Probab., 269–277.
— (1953). Stochastic Processes. Wiley, NY.
— (1954). Semimartingales and subharmonic functions. Trans. Amer. Math. Soc. 77, 86–121.
— (1955). A probability approach to the heat equation. Trans. Amer. Math. Soc. 80, 216–280.
— (1984). Classical Potential Theory and its Probabilistic Counterpart. Springer, NY.
— (1994). Measure Theory. Springer, NY.
Dubins, L.E. (1968). On a theorem of Skorohod. Ann. Math. Statist. 39, 2094–2097.
896 Foundations of Modern Probability Dubins, L.E., Schwarz, G. (1965). On continuous martingales. Proc. Natl. Acad. Sci.
USA 53, 913–916.
Dudley, R.M. (1966). Weak convergence of probabilities on nonseparable metric spaces and empirical measures on Euclidean spaces. Illinois J. Math. 10, 109–126.
— (1967). Measures on non-separable metric spaces. Illinois J. Math. 11, 449–453.
— (1968). Distances of probability measures and random variables. Ann. Math. Statist.
39, 1563–1572.
— (1989). Real Analysis and Probability. Wadsworth, Brooks & Cole, Pacific Grove, CA.
Duncan, T.E. (1983). Some geometric methods for stochastic integration in manifolds.
Geometry and Identification, 63–72. Lie groups: Hist., Frontiers and Appl. Ser. B 1, Math. Sci. Press, Brooklyn 1983.
Dunford, N. (1939). A mean ergodic theorem. Duke Math. J. 5, 635–646.
Dunford, N., Schwartz, J.T. (1956). Convergence almost everywhere of operator av-erages. J. Rat. Mech. Anal. 5, 129–178.
Durrett, R. (1991/2019). Probability Theory and Examples, 2nd ed. Wadsworth, Brooks & Cole, Pacific Grove, CA.
Dvoretzky, A. (1972). Asymptotic normality for sums of dependent random variables.
Proc. 6th Berkeley Symp. Math. Statist. Probab. 2, 513–535.
Dynkin, E.B. (1952). Criteria of continuity and lack of discontinuities of the second kind for trajectories of a Markov stochastic process (Russian). Izv. Akad. Nauk SSSR, Ser.
Mat. 16, 563–572.
— (1955a). Infinitesimal operators of Markov stochastic processes (Russian). Dokl. Akad.
Nauk SSSR 105, 206–209.
— (1955b). Continuous one-dimensional Markov processes (Russian). Dokl. Akad. Nauk SSSR 105, 405–408.
— (1956).
Markov processes and semigroups of operators.
Infinitesimal operators of Markov processes. Th. Probab. Appl. 1, 25–60.
— (1959). One-dimensional continuous strong Markov processes. Th. Probab. Appl. 4, 3–54.
— (1961a). Theory of Markov Processes. Engl. trans., Prentice-Hall and Pergamon Press, Englewood Cliffs, NJ, and Oxford. (Russian orig. 1959.) — (1961b). Non-negative eigenfunctions of the Laplace–Beltrami operator and Brownian motion in certain symmetric spaces (in Russian).
Dokl. Akad. Nauk. SSSR, 141, 288–291.
— (1965). Markov Processes, Vols. 1–2. Engl. trans., Springer, Berlin. (Russian orig.
1963.) — (1978). Sufficient statistics and extreme points. Ann. Probab. 6, 705–730.
— (1991). Branching particle systems and superprocesses. Ann. Probab. 19, 1157–1194.
Dynkin, E.B., Yushkevich, A.A. (1956). Strong Markov processes. Th. Probab. Appl.
1, 134–139.
Eagleson, G.K., Weber, N.C. (1978). Limit theorems for weakly exchangeable arrays.
Math. Proc. Cambridge Phil. Soc. 84, 123–130.
Bibliography 897 Einstein, A. (1905). On the movement of small particles suspended in a stationary liquid demanded by the molecular-kinetic theory of heat. Engl. trans. in Investigations on the Theory of the Brownian Movement. Repr. Dover, NY 1956.
— (1906). On the theory of Brownian motion. Ibid.
Elliott, R.J. (1982). Stochastic Calculus and Applications. Springer, NY.
Ellis, R.L. (1844). On a question in the theory of probabilities. Cambridge Math. J. 4 (21), 127–133.
Ellis, R.S. (1985). Entropy, Large Deviations, and Statistical Mechanics. Springer, NY.
´ Emery, M. (1989). Stochastic Calculus in Manifolds. Springer.
— (2000/13). Martingales continues dans les vari´ et´ es diff´ erentiables. Lect. Notes in Math.
1738, 3–84. Reprinted in Stochastic Differential Geometry at Saint-Flour, pp. 263– 346. Springer.
Engelbert, H.J., Schmidt, W. (1981). On the behaviour of certain functionals of the Wiener process and applications to stochastic differential equations. Lect. Notes in Control and Inform. Sci. 36, 47–55.
— (1984). On one-dimensional stochastic differential equations with generalized drift. Lect.
Notes in Control and Inform. Sci. 69, 143–155. Springer, Berlin.
— (1985). On solutions of stochastic differential equations without drift.
Z. Wahrsch.
verw. Geb. 68, 287–317.
Erd¨ os, P., Feller, W., Pollard, H. (1949). A theorem on power series. Bull. Amer.
Math. Soc. 55, 201–204.
Erd¨ os, P., Kac, M. (1946). On certain limit theorems in the theory of probability. Bull.
Amer. Math. Soc. 52, 292–302.
— (1947). On the number of positive sums of independent random variables. Bull. Amer.
Math. Soc. 53, 1011–1020.
Erlang, A.K. (1909).
The theory of probabilities and telephone conversations.
Nyt.
Tidskr. Mat. B 20, 33–41.
Etheridge, A.M. (2000). An Introduction to Superprocesses. Univ. Lect. Ser. 20, Amer.
Math. Soc., Providence, RI.
Ethier, S.N., Kurtz, T.G. (1986). Markov Processes: Characterization and Conver-gence. Wiley, NY.
Faber, G. (1910). ¨ Uber stetige Funktionen, II. Math. Ann. 69, 372–443.
Farrell, R.H. (1962). Representation of invariant measures. Illinois J. Math. 6, 447– 467.
Fatou, P. (1906). S´ eries trigonom´ etriques et s´ eries de Taylor. Acta Math. 30, 335–400.
Fell, J.M.G. (1962). A Hausdorfftopology for the closed subsets of a locally compact non-Hausdorffspace. Proc. Amer. Math. Soc. 13, 472–476.
Feller, W. (1935/37). ¨ Uber den zentralen Grenzwertsatz der Wahrscheinlichkeitstheorie, I–II. Math. Z. 40, 521–559; 42, 301–312.
— (1936). Zur Theorie der stochastischen Prozesse (Existenz und Eindeutigkeitss¨ atze).
Math. Ann. 113, 113–160.
— (1937). On the Kolmogoroff–P. L´ evy formula for infinitely divisible distribution func-tions. Proc. Yugoslav Acad. Sci. 82, 95–112.
898 Foundations of Modern Probability — (1940). On the integro-differential equations of purely discontinuous Markoffprocesses.
Trans. Amer. Math. Soc. 48, 488–515; 58, 474.
— (1949). Fluctuation theory of recurrent events. Trans. Amer. Math. Soc. 67, 98–119.
— (1951). Diffusion processes in genetics. Proc. 2nd Berkeley Symp. Math. Statist. Probab., 227–246. Univ. Calif. Press, Berkeley.
— (1952). The parabolic differential equations and the associated semi-groups of transfor-mations. Ann. Math. 55, 468–519.
— (1954). Diffusion processes in one dimension. Trans. Amer. Math. Soc. 77, 1–31.
— (1950/57/68; 1966/71). An Introduction to Probability Theory and its Applications, 1 (3rd ed.); 2 (2nd ed.). Wiley, NY (1st eds. 1950, 1966).
Feller, W., Orey, S. (1961). A renewal theorem. J. Math. Mech. 10, 619–624.
Feynman, R.P. (1948). Space-time approach to nonrelativistic quantum mechanics. Rev.
Mod. Phys. 20, 367–387.
Finetti, B. de (1929). Sulle funzioni ad incremento aleatorio. Rend. Acc. Naz. Lincei 10, 163–168.
— (1930). Fuzione caratteristica di un fenomeno aleatorio. Mem. R. Acc. Lincei (6) 4, 86–133.
— (1937). La pr´ evision: ses lois logiques, ses sources subjectives. Ann. Inst. H. Poincar´ e 7, 1–68.
— (1938). Sur la condition d’´ equivalence partielle. Act. Sci. Ind. 739, 5–18.
Fisk, D.L. (1965). Quasimartingales. Trans. Amer. Math. Soc. 120, 369–389.
— (1966). Sample quadratic variation of continuous, second-order martingales. Z. Wahr-sch. verw. Geb. 6, 273–278.
Fortet, R. (1943). Les fonctions al´ eatoires du type de Markoffassoci´ ees a certaines ´ equa-tions lin´ eaires aux d´ eriv´ ees partielles du type parabolique. J. Math. Pures Appl. 22, 177–243.
Franken, P., K¨ onig, D., Arndt, U., Schmidt, V. (1981). Queues and Point Processes.
Akademie-Verlag, Berlin.
Fr´ echet, M. (1928). Les Espaces Abstraits. Gauthier-Villars, Paris.
Freedman, D. (1962/63). Invariants under mixing which generalize de Finetti’s theorem.
Ann. Math. Statist. 33, 916–923; 34, 1194–1216.
— (1971a). Markov Chains. Holden-Day, San Francisco. Repr. Springer, NY 1983.
— (1971b). Brownian Motion and Diffusion. Holden-Day, San Francisco. Repr. Springer, NY 1983.
Freidlin, M.I., Wentzell, A.D. (1970). On small random permutations of dynamical systems. Russian Math. Surveys 25, 1–55.
— (1998).
Random Perturbations of Dynamical Systems.
Engl. trans., Springer, NY.
(Russian orig. 1979.) Frostman, O. (1935). Potentiel d’´ equilibre et capacit´ e des ensembles avec quelques ap-plications a la th´ eorie des fonctions. Medd. Lunds Univ. Mat. Sem. 3, 1-118.
Fubini, G. (1907). Sugli integrali multipli. Rend. Acc. Naz. Lincei 16, 608–614.
Furstenberg, H., Kesten, H. (1960). Products of random matrices. Ann. Math. Statist.
31, 457–469.
Bibliography 899 Gaifman, H. (1961). Concerning measures in first order calculi. Israel J. Math. 2, 1–18.
Galmarino, A.R. (1963). Representation of an isotropic diffusion as a skew product. Z.
Wahrsch. verw. Geb. 1, 359–378.
Garsia, A.M. (1965). A simple proof of E. Hopf’s maximal ergodic theorem. J. Math.
Mech. 14, 381–382.
— (1973). Martingale Inequalities: Seminar Notes on Recent Progress. Math. Lect. Notes Ser. Benjamin, Reading, MA.
Gauss, C.F. (1809/16). Theory of Motion of the Heavenly Bodies. Engl. trans., Dover, NY 1963.
— (1840).
Allgemeine Lehrs¨ atze in Beziehung auf die im vehrkehrten Verh¨ altnisse des Quadrats der Entfernung wirkenden Anziehungs- und Abstossungs-Kr¨ afte.
Gauss Werke 5, 197–242. G¨ ottingen 1867.
Gaveau, B., Trauber, P. (1982). L’int´ egrale stochastique comme op´ erateur de diver-gence dans l’espace fonctionnel. J. Functional Anal. 46, 230–238.
Getoor, R.K. (1990). Excessive Measures. Birkh¨ auser, Boston.
Getoor, R.K., Sharpe, M.J. (1972). Conformal martingales. Invent. Math. 16, 271– 308.
Gibbs, J.W. (1902). Elementary Principles in Statistical Mechanics. Yale Univ. Press.
Gihman, I.I. (1947). On a method of constructing random processes (Russian). Dokl.
Akad. Nauk SSSR 58, 961–964.
— (1950/51). On the theory of differential equations for random processes, I–II (Russian).
Ukr. Mat. J. 2:4, 37–63; 3:3, 317–339.
Girsanov, I.V. (1960). On transforming a certain class of stochastic processes by abso-lutely continuous substitution of measures. Th. Probab. Appl. 5, 285–301.
Glivenko, V.I. (1933). Sulla determinazione empirica della leggi di probabilit a. Giorn.
Ist. Ital. Attuari 4, 92–99.
Gnedenko, B.V. (1939). On the theory of limit theorems for sums of independent random variables (Russian). Izv. Akad. Nauk SSSR Ser. Mat. 181–232, 643–647.
Gnedenko, B.V., Kolmogorov, A.N. (1954/68). Limit Distributions for Sums of In-dependent Random Variables. Engl. trans., 2nd ed., Addison-Wesley, Reading, MA.
(Russian orig. 1949.) Gnedin, A.V. (1997). The representation of composition structures. Ann. Probab. 25, 1437–1450.
Goldman, J.R. (1967). Stochastic point processes: limit theorems. Ann. Math. Statist.
38, 771–779.
Goldstein, J.A. (1976).
Semigroup-theoretic proofs of the central limit theorem and other theorems of analysis. Semigroup Forum 12, 189–206.
Goldstein, S. (1979). Maximal coupling. Z. Wahrsch. verw. Geb. 46, 193–204.
Grandell, J. (1976). Doubly Stochastic Poisson Processes. Lect. Notes in Math. 529.
Springer, Berlin.
Green, G. (1828). An essay on the application of mathematical analysis to the theories of electricity and magnetism. Repr. in Mathematical Papers, Chelsea, NY 1970.
Greenwood, P., Pitman, J. (1980). Construction of local time and Poisson point pro-cesses from nested arrays. J. London Math. Soc. (2) 22, 182–192.
900 Foundations of Modern Probability Griffeath, D. (1975). A maximal coupling for Markov chains. Z. Wahrsch. verw. Geb.
31, 95–106.
Grigelionis, B. (1963). On the convergence of sums of random step processes to a Poisson process. Th. Probab. Appl. 8, 172–182.
— (1971). On the representation of integer-valued measures by means of stochastic integrals with respect to Poisson measure. Litovsk. Mat. Sb. 11, 93–108.
Haar, A. (1933). Der Maßbegriffin der Theorie der kontinuerlichen Gruppen. Ann. Math.
34, 147–169.
Hagberg, J. (1973). Approximation of the summation process obtained by sampling from a finite population. Th. Probab. Appl. 18, 790–803.
Hahn, H. (1921). Theorie der reellen Funktionen. Julius Springer, Berlin.
H´ ajek, J. (1960). Limiting distributions in simple random sampling from a finite popula-tion. Magyar Tud. Akad. Mat. Kutat´ o Int. K¨ ozl. 5, 361–374.
Hall, P., Heyde, C.C. (1980). Martingale Limit Theory and its Application. Academic Press, NY.
Halmos, P.R. (1950). Measure Theory, Van Nostrand, Princeton. Repr. Springer, NY 1974.
— (1985). I want to be a Mathematician, an Automathography. Springer, NY.
Hardy, G.H., Littlewood, J.E. (1930). A maximal theorem with function-theoretic applications. Acta Math. 54, 81–116.
Harris, T.E. (1956). The existence of stationary measures for certain Markov processes.
Proc. 3rd Berkeley Symp. Math. Statist. Probab. 2, 113–124.
— (1963/89). The Theory of Branching Processes. Dover Publ., NY.
— (1971). Random measures and motions of point processes. Z. Wahrsch. verw. Geb. 18, 85–115.
Hartman, P., Wintner, A. (1941). On the law of the iterated logarithm. J. Math. 63, 169–176.
Hausdorff, F. (1916). Die M¨ achtigkeit der Borelschen Mengen. Math. Ann. 77, 430–437.
Helly, E. (1911/12). ¨ Uber lineare Funktionaloperatoren. Sitzungsber. Nat. Kais. Akad.
Wiss. 121, 265–297.
Hewitt, E., Ross, K.A. (1979). Abstract Harmonic Analysis I, 2nd ed. Springer NY.
Hewitt, E., Savage, L.J. (1955). Symmetric measures on Cartesian products. Trans.
Amer. Math. Soc. 80, 470–501.
Hille, E. (1948). Functional analysis and semi-groups. Amer. Math. Colloq. Publ. 31, NY.
Hitczenko, P. (1988). Comparison of moments for tangent sequences of random variables.
Probab. Th. Rel. Fields 78, 223–230.
— (1990). Best constants in martingale version of Rosenthal’s inequality. Ann. Probab.
18, 1656–1668.
Hoeffding, W. (1948). A class of statistics with asymptotically normal distributions.
Ann. Math. Statist. 19, 293–325.
Hoeven, P.C.T. van der (1983). On Point Processes. Mathematisch Centrum, Amster-dam.
Bibliography 901 H¨ older, O. (1889). ¨ Uber einen Mittelwertsatz. Nachr. Akad. Wiss. G¨ ottingen, math.-phys. Kl., 38–47.
Hoover, D.N. (1979). Relations on probability spaces and arrays of random variables.
Preprint, Institute for Advanced Study, Princeton.
Hopf, E. (1954). The general temporally discrete Markov process. J. Rat. Mech. Anal.
3, 13–45.
H¨ ormander, L. (1967).
Hypoelliptic second order differential equations.
Acta Math.
119, 147–171.
Horowitz, J. (1972). Semilinear Markov processes, subordinators and renewal theory. Z.
Wahrsch. verw. Geb. 24, 167–193.
Hunt, G.A. (1956a). Some theorems concerning Brownian motion. Trans. Amer. Math.
Soc. 81, 294–319.
— (1956b). Semigroups of measures on Lie groups. Trans. Amer. Math. Soc. 81, 264–293.
— (1957/58). Markoffprocesses and potentials, I–III. Illinois J. Math. 1, 44–93, 316–369; 2, 151–213.
Hurewicz, W. (1944). Ergodic theorem without invariant measure.
Ann. Math. 45, 192–206.
Hurwitz, A. (1897). ¨ Uber die Erzeugung der Invarianten durch Integration. Nachr. Ges.
G¨ ottingen, math.-phys. Kl., 71–90.
Ikeda, N., Watanabe, S. (1989/2014). Stochastic Differential Equations and Diffusion Processes, 2nd ed. Kodansha and Elsevier.
Ioffe, D. (1991). On some applicable versions of abstract large deviations theorems. Ann.
Probab. 19, 1629–1639.
Ionescu Tulcea, A. (1960). Contributions to information theory for abstract alphabets.
Ark. Mat. 4, 235–247.
Ionescu Tulcea, C.T. (1949/50). Mesures dans les espaces produits. Atti Accad. Naz.
Lincei Rend. 7, 208–211.
Itˆ o, K. (1942a). Differential equations determining Markov processes (Japanese). Zenkoku Shij¯ o S¯ ugaku Danwakai 244:1077, 1352–1400.
— (1942b). On stochastic processes I: Infinitely divisible laws of probability. Jap. J. Math.
18, 261–301.
— (1944). Stochastic integral. Proc. Imp. Acad. Tokyo 20, 519–524.
— (1946). On a stochastic integral equation. Proc. Imp. Acad. Tokyo 22, 32–35.
— (1950). Stochastic differential equations in a differentiable manifold. Nagoya Math. J.
1, 35–47.
— (1951a). On a formula concerning stochastic differentials. Nagoya Math. J. 3, 55–65.
— (1951b). On stochastic differential equations. Mem. Amer. Math. Soc. 4, 1–51.
— (1951c). Multiple Wiener integral. J. Math. Soc. Japan 3, 157–169.
— (1956). Spectral type of the shift transformation of differential processes with stationary increments. Trans. Amer. Math. Soc. 81, 253–263.
— (1957). Stochastic Processes (Japanese). Iwanami Shoten, Tokyo.
— (1972). Poisson point processes attached to Markov processes. Proc. 6th Berkeley Symp.
Math. Statist. Probab. 3, 225–239.
902 Foundations of Modern Probability Itˆ o, K., McKean, H.P. (1965).
Diffusion Processes and their Sample Paths.
Repr.
Springer, Berlin 1996.
Itˆ o, K., Watanabe, S. (1965). Transformation of Markov processes by multiplicative functionals. Ann. Inst. Fourier 15, 15–30.
Jacobsen, M. (2006). Point Process Theory and Applications. Birkh¨ auser, Boston.
Jacod, J. (1975). Multivariate point processes: Predictable projection, Radon-Nikodym derivative, representation of martingales. Z. Wahrsch. verw. Geb. 31, 235–253.
— (1979). Calcul Stochastique et Problemes de Martingales. Lect. Notes in Math. 714.
Springer, Berlin.
— (1983). Processus a accroissements ind´ ependants: use condition n´ ecessaire et suffisante de convergence en loi. Z. Wahrsch. verw. Geb. 63, 109–136.
— (1984). Une g´ en´ eralisation des semimartingales: les processus admettant un processus a accroissements ind´ ependants tangent. Lect. Notes in Math. 1059, 91–118. Springer, Berlin.
Jacod, J., Shiryaev, A.N. (1987). Limit Theorems for Stochastic Processes. Springer, Berlin.
Jagers, P. (1972). On the weak convergence of superpositions of point processes.
Z.
Wahrsch. verw. Geb. 22, 1–7.
— (1974). Aspects of random measures and point processes. Adv. Probab. Rel. Topics 3, 179–239. Marcel Dekker, NY.
— (1975). Branching Processes with Biological Applications. Wiley.
Jamison, B., Orey, S. (1967).
Markov chains recurrent in the sense of Harris.
Z.
Wahrsch. verw. Geb. 8, 206–223.
Jensen, J.L.W.V. (1906). Sur les fonctions convexes et les in´ egalit´ es entre les valeurs moyennes. Acta Math. 30, 175–193.
Jessen, B. (1934). The theory of integration in a space of an infinite number of dimensions.
Acta Math. 63, 249–323.
Jiˇ rina, M. (1964). Branching processes with measure-valued states. In: Trans. 3rd Prague Conf. Inf. Th. Statist. Dec. Func. Rand. Proc., pp. 333–357.
— (1966). Asymptotic behavior of measure-valued branching processes. Rozpravy ˇ Cesko-slovensk´ e Akad. Vˇ ed. ˇ Rada Mat. Pˇ rirod. Vˇ ed 76:3, Praha.
Johnson, W.B., Schechtman, G., Zinn, J. (1985). Best constants in moment inequali-ties for linear combinations of independent and exchangeable random variables. Ann.
Probab. 13, 234–253.
Jordan, C. (1881). Sur la s´ erie de Fourier. C.R. Acad. Sci. Paris 92, 228–230.
Kac, M. (1947). On the notion of recurrence in discrete stochastic processes. Bull. Amer.
Math. Soc. 53, 1002–1010.
— (1949). On distributions of certain Wiener functionals. Trans. Amer. Math. Soc. 65, 1–13.
— (1951). On some connections between probability theory and differential and integral equations. Proc. 2nd Berkeley Symp. Math. Statist. Probab., 189–215. Univ. of Cali-fornia Press, Berkeley.
Kakutani, S. (1940). Ergodic theorems and the Markoffprocess with a stable distribution.
Proc. Imp. Acad. Tokyo 16, 49–54.
Bibliography 903 — (1944a). On Brownian motions in n-space. Proc. Imp. Acad. Tokyo 20, 648–652.
— (1944b). Two-dimensional Brownian motion and harmonic functions. Proc. Imp. Acad.
Tokyo 20, 706–714.
— (1945). Markoffprocess and the Dirichlet problem. Proc. Japan Acad. 21, 227–233.
Kallenberg, O. (1973a).
Characterization and convergence of random measures and point processes. Z. Wahrsch. verw. Geb. 27, 9–21.
— (1973b). Canonical representations and convergence criteria for processes with inter-changeable increments. Z. Wahrsch. verw. Geb. 27, 23–36.
— (1974). Series of random processes without discontinuities of the second kind. Ann.
Probab. 2, 729–737.
— (1975a). Limits of compound and thinned point processes. J. Appl. Probab. 12, 269– 278.
— (1975/76, 1983/86). Random Measures, 1st–2nd eds, 3rd–4th eds. Akademie-Verlag & Academic Press, Berlin, London.
— (1976/80/81). On the structure of stationary flat processes, I–III. Z. Wahrsch. verw.
Geb. 37, 157–174; 52, 127–147; 56, 239–253.
— (1977). Stability of critical cluster fields. Math. Nachr. 77, 1–43.
— (1978a). On conditional intensities of point processes. Z. Wahrsch. verw. Geb. 41, 205–220.
— (1978b). On the asymptotic behavior of line processes and systems of non-interacting particles. Z. Wahrsch. verw. Geb. 43, 65–95.
— (1978c). On the independence of velocities in a system of non-interacting particles. Ann.
Probab. 6, 885–890.
— (1987). Homogeneity and the strong Markov property. Ann. Probab. 15, 213–240.
— (1988a). Spreading and predictable sampling in exchangeable sequences and processes.
Ann. Probab. 16, 508–534.
— (1988b). Some new representation in bivariate exchangeability. Probab. Th. Rel. Fields 77, 415–455.
— (1989). On the representation theorem for exchangeable arrays. J. Multivar. Anal. 30, 137–154.
— (1990). Random time change and an integral representation for marked stopping times.
Probab. Th. Rel. Fields 86, 167–202.
— (1992). Some time change representations of stable integrals, via predictable transfor-mations of local martingales. Stoch. Proc. Appl. 40, 199–223.
— (1992). Symmetries on random arrays and set-indexed processes. J. Theor. Probab. 5, 727–765.
— (1995). Random arrays and functionals with multivariate rotational symmetries. Probab.
Th. Rel. Fields 103, 91–141.
— (1996a). On the existence of universal functional solutions to classical SDEs.
Ann.
Probab. 24, 196–205.
— (1996b).
Improved criteria for distributional convergence of point processes.
Stoch.
Proc. Appl. 64, 93-102.
— (1997/2002). Foundations of Modern Probability, eds. 1–2. Springer, NY.
904 Foundations of Modern Probability — (1999a). Ballot theorems and sojourn laws for stationary processes. Ann. Probab. 27, 2011–2019.
— (1999b).
Asymptotically invariant sampling and averaging from stationary-like pro-cesses. Stoch. Proc. Appl. 82, 195–204.
— (1999c). Palm measure duality and conditioning in regenerative sets. Ann. Probab. 27, 945–969.
— (2005). Probabilistic Symmetries and Invariance Principles. Springer.
— (2007). Invariant measures and disintegrations with applications to Palm and related kernels. Probab. Th. Rel. Fields 139, 285–310, 311.
— (2009). Some local approximation properties of simple point processes. Probab. Th. Rel.
Fields 143, 73–96.
— (2010).
Commutativity properties of conditional distributions and Palm measures.
Comm. Stoch. Anal. 4, 21–34.
— (2011a). Invariant Palm and related disintegrations via skew factorization. Probab. Th.
Rel. Fields 149, 279–301.
— (2011b).
Iterated Palm conditioning and some Slivnyak-type theorems for Cox and cluster processes. J. Theor. Probab. 24, 875–893.
— (2017a). Tangential existence and comparison, with applications to single and multiple integration. Probab. Math. Statist. 37, 21–52.
— (2017b). Random Measures, Theory and Applications. Springer.
Kallenberg, O., Sztencel, R. (1991). Some dimension-free features of vector-valued martingales. Probab. Th. Rel. Fields 88, 215–247.
Kallenberg, O., Szulga, J. (1989). Multiple integration with respect to Poisson and L´ evy processes. Probab. Th. Rel. Fields 83, 101–134.
Kallianpur, G. (1980). Stochastic Filtering Theory. Springer, NY.
Kaplan, E.L. (1955). Transformations of stationary random sequences. Math. Scand. 3, 127–149.
Karamata, J. (1930). Sur une mode de croissance r´ eguli ere des fonctions. Mathematica (Cluj) 4, 38–53.
Karatzas, I., Shreve, S.E. (1988/91/98). Brownian Motion and Stochastic Calculus, 2nd ed. Springer, NY.
Kazamaki, N. (1972). Change of time, stochastic integrals and weak martingales.
Z.
Wahrsch. verw. Geb. 22, 25–32.
Kemeny, J.G., Snell, J.L., Knapp, A.W. (1966). Denumerable Markov Chains. Van Nostrand, Princeton.
Kendall, D.G. (1948). On the generalized “birth-and-death” process. Ann. Math. Statist.
19, 1–15.
— (1974). Foundations of a theory of random sets. In Stochastic Geometry (eds. E.F.
Harding, D.G. Kendall), pp. 322–376. Wiley, NY.
Kendall, W.S. (1988).
Martingales on manifolds and harmonic maps.
Geometry of random motion. Contemp. Math. 73, 121–157.
Kerstan, J., Matthes, K. (1964). Station¨ are zuf¨ allige Punktfolgen II. Jahresber. Deutsch.
Mat.-Verein. 66, 106–118.
Bibliography 905 Khinchin, A.Y. (1923). ¨ Uber dyadische Br¨ ucke. Math. Z. 18, 109–116.
— (1924). ¨ Uber einen Satz der Wahrscheinlichkeitsrechnung. Fund. Math. 6, 9–20.
— (1929). ¨ Uber einen neuen Grenzwertsatz der Wahrscheinlichkeitsrechnung. Math. Ann.
101, 745–752.
— (1933a). Zur mathematischen Begr¨ unding der statistischen Mechanik. Z. Angew. Math.
Mech. 13, 101–103.
— (1933b).
Asymptotische Gesetze der Wahrscheinlichkeitsrechnung, Springer, Berlin.
Repr. Chelsea, NY 1948.
— (1934). Korrelationstheorie der station¨ aren stochastischen Prozesse. Math. Ann. 109, 604–615.
— (1937). Zur Theorie der unbeschr¨ ankt teilbaren Verteilungsgesetze. Mat. Sb. 2, 79–119.
— (1938). Limit Laws for Sums of Independent Random Variables (Russian). Moscow.
— (1955). Mathematical Methods in the Theory of Queuing (Russian). Engl. trans., Griffin, London 1960.
Khinchin, A.Y., Kolmogorov, A.N. (1925).
¨ Uber Konvergenz von Reihen deren Glieder durch den Zufall bestimmt werden. Mat. Sb. 32, 668–676.
Kingman, J.F.C. (1964). On doubly stochastic Poisson processes. Proc. Cambridge Phil.
Soc. 60, 923–930.
— (1967). Completely random measures. Pac.. J. Math. 21, 59–78.
— (1968). The ergodic theory of subadditive stochastic processes. J. Roy. Statist. Soc. (B) 30, 499–510.
— (1972). Regenerative Phenomena. Wiley, NY.
— (1978). The representation of partition structures. J. London Math, Soc. 18, 374–380.
— (1993). Poisson Processes. Clarendon Press, Oxford.
Kinney, J.R. (1953). Continuity properties of Markov processes. Trans. Amer. Math.
Soc. 74, 280–302.
Klebaner, F.C. (2012). Introduction to Stochastic Calculus with Applications, 3rd ed.
Imperial Coll. Press, London.
Knight, F.B. (1963). Random walks and a sojourn density process of Brownian motion.
Trans. Amer. Math. Soc. 107, 56–86.
— (1971). A reduction of continuous, square-integrable martingales to Brownian motion.
Lect. Notes in Math. 190, 19–31. Springer, Berlin.
Kolmogorov, A.N. (1928/29).
¨ Uber die Summen durch den Zufall bestimmter un-abh¨ angiger Gr¨ ossen. Math. Ann. 99, 309–319; 102, 484–488.
— (1929). ¨ Uber das Gesatz des iterierten Logarithmus. Math. Ann. 101, 126–135.
— (1930). Sur la loi forte des grandes nombres. C.R. Acad. Sci. Paris 191, 910–912.
— (1931a). ¨ Uber die analytischen Methoden in der Wahrscheinlichkeitsrechnung. Math.
Ann. 104, 415–458.
— (1931b). Eine Verallgemeinerung des Laplace–Liapounoffschen Satzes. Izv. Akad. Nauk USSR, Otdel. Matem. Yestestv. Nauk 1931, 959–962.
— (1932). Sulla forma generale di un processo stocastico omogeneo (un problema di B. de Finetti). Atti Accad. Naz. Lincei Rend. (6) 15, 805–808, 866–869.
906 Foundations of Modern Probability — (1933a). ¨ Uber die Grenzwerts¨ atze der Wahrscheinlichkeitsrechnung. Izv. Akad. Nauk USSR, Otdel. Matem. Yestestv. Nauk 1933, 363–372.
— (1933b). Zur Theorie der stetigen zuf¨ alligen Prozesse. Math. Ann. 108, 149–160.
— (1933c). Foundations of the Theory of Probability (German), Springer, Berlin. Engl.
trans., Chelsea, NY 1956.
— (1935). Some current developments in probability theory (in Russian). Proc. 2nd All-Union Math. Congr. 1, 349–358. Akad. Nauk SSSR, Leningrad.
— (1936a). Anfangsgr¨ unde der Markoffschen Ketten mit unendlich vielen m¨ oglichen Zu-st¨ anden. Mat. Sb. 1, 607–610.
— (1936b). Zur Theorie der Markoffschen Ketten. Math. Ann. 112, 155–160.
— (1937). Zur Umkehrbarkeit der statistischen Naturgesetze. Math. Ann. 113, 766–772.
— (1938). Zur L¨ osung einer biologischen Aufgabe. Izv. Nauchno—Issled. Inst. Mat. Mech.
Tomsk. Gosud. Univ. 2, 1–6.
— (1956). On Skorohod convergence. Th. Probab. Appl. 1, 213–222.
Kolmogorov, A.N., Leontovich, M.A. (1933). Zur Berechnung der mittleren Brown-schen Fl¨ ache. Physik. Z. Sowjetunion 4, 1–13.
Koml´ os, J., Major, P., Tusn´ ady, G. (1975/76). An approximation of partial sums of independent r.v.’s and the sample d.f., I–II. Z. Wahrsch. verw. Geb. 32, 111–131; 34, 33–58.
K¨ onig, D., Matthes, K. (1963). Verallgemeinerung der Erlangschen Formeln, I. Math.
Nachr. 26, 45–56.
Koopman, B.O. (1931). Hamiltonian systems and transformations in Hilbert space. Proc.
Nat. Acad. Sci. USA 17, 315–318.
Krauss, P.H. (1969). Representations of symmetric probability models. J. Symbolic Logic 34, 183–193.
Krengel, U. (1985). Ergodic Theorems. de Gruyter, Berlin.
Krickeberg, K. (1956). Convergence of martingales with a directed index set. Trans.
Amer. Math. Soc. 83, 313–357.
— (1972). The Cox process. Symp. Math. 9, 151–167.
— (1974a). Invariance properties of the correlation measure of line-processes. In: Stochastic Geometry (E.F. Harding & D.G. Kendall, eds.), pp. 76–88. Wiley, London.
— (1974b). Moments of point processes. Ibid., pp. 89–113.
Krylov, N., Bogolioubov, N. (1937). La th´ eorie g´ en´ erale de la mesure dans son appli-cation a l’´ etude des syst emes de la m´ ecanique non lin´ eaires. Ann. Math. 38, 65–113.
Kullback, S., Leibler, R.A. (1951).
On information and sufficiency.
Ann. Math.
Statist. 22, 79–86.
Kunita, H. (1990). Stochastic Flows and Stochastic Differential Equations. Cambridge Univ. Press, Cambridge.
Kunita, H., Watanabe, S. (1967). On square integrable martingales. Nagoya Math. J.
30, 209–245.
Kunze, M. (2013). An Introduction to Malliavin Calculus. Lecture notes, Universit¨ at Ulm.
Kuratowski, K. (1933). Topologie I. Monografje Matem., Warsaw & Lwow.
Bibliography 907 Kurtz, T.G. (1969). Extensions of Trotter’s operator semigroup approximation theorems.
J. Funct. Anal. 3, 354–375.
— (1975). Semigroups of conditioned shifts and approximation of Markov processes. Ann.
Probab. 3, 618–642.
Kwapie´ n, S., Woyczy´ nski, W.A. (1992). Random Series and Stochastic Integrals: Sin-gle and Multiple. Birkh¨ auser, Boston.
Lanford, O.E., Ruelle, D. (1969). Observables at infinity and states with short range correlations in statistical mechanics. Comm. Math. Phys. 13, 194–215.
Langevin, P. (1908). Sur la th´ eorie du mouvement brownien. C.R. Acad. Sci. Paris 146, 530–533.
Laplace, P.S. de (1774). M´ emoire sur la probabilit´ e des causes par les ´ ev´ enemens. Engl.
trans. in Statistical Science 1, 359–378.
— (1809). M´ emoire sur divers points d’analyse. Repr. in Oeuvres Completes de Laplace 14, 178–214. Gauthier-Villars, Paris 1886–1912.
— (1812/20). Th´ eorie Analytique des Probabilit´ es, 3rd ed. Repr. in Oeuvres Compl etes de Laplace 7. Gauthier-Villars, Paris 1886–1912.
Last, G., Brandt, A. (1995). Marked Point Processes on the Real Line: The Dynamic Approach. Springer, NY.
Last, G., Penrose, M.D. (2017). Lectures on the Poisson Process. Cambridge Univ.
Press.
Leadbetter, M.R., Lindgren, G., Rootz´ en, H. (1983). Extremes and Related Prop-erties of Random Sequences and Processes. Springer, NY.
Lebesgue, H. (1902). Int´ egrale, longeur, aire. Ann. Mat. Pura Appl. 7, 231–359.
— (1904). Le¸ cons sur l’Int´ egration et la Recherche des Fonctions Primitives. Paris.
Le Cam, L. (1957). Convergence in distribution of stochastic processes. Univ. California Publ. Statist. 2, 207–236.
Le Gall, J.F. (1983). Applications des temps locaux aux ´ equations diff´ erentielles stochas-tiques unidimensionelles. Lect. Notes in Math. 986, 15–31.
— (1991).
Brownian excursions, trees and measure-valued branching processes.
Ann.
Probab. 19, 1399–1439.
Levi, B. (1906a). Sopra l’integrazione delle serie. Rend. Ist. Lombardo Sci. Lett. (2) 39, 775–780.
— (1906b). Sul principio de Dirichlet. Rend. Circ. Mat. Palermo 22, 293–360.
L´ evy, P. (1922a). Sur le rˆ ole de la loi de Gauss dans la th´ eorie des erreurs. C.R. Acad.
Sci. Paris 174, 855–857.
— (1922b). Sur la loi de Gauss. C.R. Acad. Sci. Paris 1682–1684.
— (1922c). Sur la d´ etermination des lois de probabilit´ e par leurs fonctions caract´ eristiques.
C.R. Acad. Sci. Paris 175, 854–856.
— (1924). Th´ eorie des erreurs. La loi de Gauss et les lois exceptionelles. Bull. Soc. Math.
France 52, 49–85.
— (1925). Calcul des Probabilit´ es. Gauthier-Villars, Paris.
— (1934/35). Sur les int´ egrales dont les ´ el´ ements sont des variables al´ eatoires ind´ epen-dantes. Ann. Scuola Norm. Sup. Pisa (2) 3, 337–366; 4, 217–218.
908 Foundations of Modern Probability — (1935a). Propri´ et´ es asymptotiques des sommes de variables al´ eatoires ind´ ependantes ou enchain´ ees. J. Math. Pures Appl. (8) 14, 347–402.
— (1935b). Propri´ et´ es asymptotiques des sommes de variables al´ eatoires enchain´ ees. Bull.
Sci. Math. (2) 59, 84–96, 109–128.
— (1937/54). Th´ eorie de l’Addition des Variables Al´ eatoires, 2nd ed. Gauthier-Villars, Paris (1st ed. 1937).
— (1939). Sur certain processus stochastiques homogenes. Comp. Math. 7, 283–339.
— (1940). Le mouvement brownien plan. Amer. J. Math. 62, 487–550.
— (1948/65). Processus Stochastiques et Mouvement Brownien, 2nd ed. Gauthier-Villars, Paris (1st ed. 1948).
Liao, M. (2018). Invariant Markov Processes under Lie Group Action. Springer Intern.
Publ.
Liapounov, A.M. (1901).
Nouvelle forme du th´ eor eme sur la limite des probabilit´ es.
Mem. Acad. Sci. St. Petersbourg 12, 1–24.
Liemant, A., Matthes, K., Wakolbinger, A. (1988). Equilibrium Distributions of Branching Processes. Akademie-Verlag, Berlin.
Liggett, T.M. (1985). An improved subadditive ergodic theorem. Ann. Probab. 13, 1279–1285.
Lindeberg, J.W. (1922a). Eine neue Herleitung des Exponentialgesetzes in der Wahr-scheinlichkeitsrechnung. Math. Zeitschr. 15, 211–225.
— (1922b). Sur la loi de Gauss. C.R. Acad. Sci. Paris 174, 1400–1402.
Lindvall, T. (1973). Weak convergence of probability measures and random functions in the function space D[0, ∞). J. Appl. Probab. 10, 109–121.
— (1977). A probabilistic proof of Blackwell’s renewal theorem. Ann. Probab. 5, 482–485.
— (1992). Lectures on the Coupling Method. Wiley, NY.
Loeve, M. (1977/78). Probability Theory 1–2, 4th ed. Springer, NY (eds. 1–3, 1955/63).
Lomnicki, Z., Ulam, S. (1934). Sur la th´ eorie de la mesure dans les espaces combinatoires et son application au calcul des probabilit´ es: I. Variables ind´ ependantes. Fund. Math.
23, 237–278.
Lukacs, E. (1960/70). Characteristic Functions, 2nd ed. Griffin, London.
Lundberg, F. (1903). Approximate representation of the probability function. Reinsurance of collective risks (in Swedish). Ph.D. thesis, Almqvist & Wiksell, Uppsala.
Mackeviˇ cius, V. (1974). On the question of the weak convergence of random processes in the space D[0, ∞). Lithuanian Math. Trans. 14, 620–623.
Maisonneuve, B. (1974). Syst emes R´ eg´ en´ eratifs. Ast´ erique 15. Soc. Math. de France.
Maker, P. (1940). The ergodic theorem for a sequence of functions. Duke Math. J. 6, 27–30.
Malliavin, P. (1978). Stochastic calculus of variations and hypoelliptic operators. In: Proc. Intern. Symp. on Stoch. Diff. Equations, Kyoto 1976, pp. 195–263. Wiley 1978.
Mann, H.B., Wald, A. (1943). On stochastic limit and order relations. Ann. Math.
Statist. 14, 217–226.
Marcinkiewicz, J., Zygmund, A. (1937). Sur les fonctions ind´ ependantes. Fund. Math.
29, 60–90.
Bibliography 909 — (1938). Quelques th´ eoremes sur les fonctions ind´ ependantes. Studia Math. 7, 104–120.
Markov, A.A. (1899). The law of large numbers and the method of least squares (Rus-sian). Izv. Fiz.-Mat. Obshch. Kazan Univ. (2) 8, 110–128.
— (1906). Extension of the law of large numbers to dependent events (Russian). Bull. Soc.
Phys. Math. Kazan (2) 15, 135–156.
Maruyama, G. (1954). On the transition probability functions of the Markov process.
Natl. Sci. Rep. Ochanomizu Univ. 5, 10–20.
— (1955).
Continuous Markov processes and stochastic equations.
Rend. Circ. Mat.
Palermo 4, 48–90.
Maruyama, G., Tanaka, H. (1957). Some properties of one-dimensional diffusion pro-cesses. Mem. Fac. Sci. Kyushu Univ. 11, 117–141.
Matheron, G. (1975). Random Sets and Integral Geometry. Wiley, London.
Matthes, K. (1963).
Station¨ are zuf¨ allige Punktfolgen, I. Jahresber. Deutsch. Math.-Verein. 66, 66–79.
Matthes, K., Kerstan, J., Mecke, J. (1974/78/82). Infinitely Divisible Point Pro-cesses. Wiley, Chichester (German/English/Russian eds.).
Matthes, K., Warmuth, W., Mecke, J. (1979). Bemerkungen zu einer Arbeit von Nguyen Xuan Xanh und Hans Zessin. Math. Nachr. 88, 117–127.
Maxwell, J.C. (1875). Theory of Heat, 4th ed. Longmans, London.
— (1878). On Boltzmann’s theorem on the average distribution of energy in a system of material points. Trans. Cambridge Phil. Soc. 12, 547.
Mazurkiewicz, S. (1916). ¨ Uber Borelsche Mengen. Bull. Acad. Cracovie 1916, 490–494.
McGinley, W.G., Sibson, R. (1975). Dissociated random variables. Math. Proc. Cam-bridge Phil. Soc. 77, 185–188.
McKean, H.P. (1969). Stochastic Integrals. Academic Press, NY.
McKean, H.P., Tanaka, H. (1961). Additive functionals of the Brownian path. Mem.
Coll. Sci. Univ. Kyoto, A 33, 479–506.
McMillan, B. (1953). The basic theorems of information theory. Ann. Math. Statist.
24, 196–219.
Mecke, J. (1967). Station¨ are zuf¨ allige Maße auf lokalkompakten Abelschen Gruppen. Z.
Wahrsch. verw. Geb. 9, 36–58.
— (1968).
Eine characteristische Eigenschaft der doppelt stochastischen Poissonschen Prozesse. Z. Wahrsch. verw. Geb. 11, 74–81.
Mehta, M.L. (1991). Random Matrices. Academic Press.
M´ el´ eard, S. (1986). Application du calcul stochastique a l’´ etude des processus de Markov r´ eguliers sur [0, 1]. Stochastics 19, 41–82.
M´ etivier, M. (1982). Semimartingales: A Course on Stochastic Processes. de Gruyter, Berlin.
M´ etivier, M., Pellaumail, J. (1980). Stochastic Integration. Academic Press, NY.
Meyer, P.A. (1962). A decomposition theorem for supermartingales. Illinois J. Math. 6, 193–205.
— (1963). Decomposition of supermartingales: the uniqueness theorem. Illinois J. Math.
7, 1–17.
910 Foundations of Modern Probability — (1966). Probability and Potentials. Engl. trans., Blaisdell, Waltham.
— (1967).
Int´ egrales stochastiques, I–IV. Lect. Notes in Math.
39, 72–162.
Springer, Berlin.
— (1968). Guide d´ etaill´ e de la th´ eorie g´ en´ erale des processus. Lect. Notes in Math. 51, 140–165. Springer.
— (1971). D´ emonstration simplifi´ ee d’un th´ eoreme de Knight. Lect. Notes in Math. 191, 191–195. Springer, Berlin.
— (1976). Un cours sur les int´ egrales stochastiques. Lect. Notes in Math. 511, 245–398.
Springer, Berlin.
— (1981). G´ eom´ etrie stochastique sans larmes. Lecture Notes in Math. 850, 44–102.
— (1982). G´ eom´ etrie diff´ erentielle stochastique (bis). Lecture Notes in Math. 921, 165– 207.
Miles, R.E. (1969/71). Poisson flats in Euclidean spaces, I–II. Adv. Appl. Probab. 1, 211–237; 3, 1–43.
— (1974). A synopsis of ‘Poisson flats in Euclidean spaces’. In: Stochastic Geometry (E.F.
Harding & D.G. Kendall, eds), pp. 202–227. Wiley, London.
Millar, P.W. (1968). Martingale integrals. Trans. Amer. Math. Soc. 133, 145–166.
Minkowski, H. (1907). Diophantische Approximationen. Teubner, Leipzig.
Mitoma, I. (1983).
Tightness of probabilities on C([0, 1]; S′) and D([0, 1]; S′).
Ann.
Probab. 11, 989–999.
Moivre, A. de (1711). De Mensura Sortis (On the measurement of chance). Engl. trans., Int. Statist. Rev. 52 (1984), 229–262.
— (1738). The Doctrine of Chances, 2nd ed. Repr. Case and Chelsea, London and NY 1967.
Molchanov, I. (2005). Theory of Random Sets. Springer, London.
M¨ onch, G. (1971). Verallgemeinerung eines Satzes von A. R´ enyi. Studia Sci. Math. Hung.
6, 81–90.
Motoo, M., Watanabe, H. (1958). Ergodic property of recurrent diffusion process in one dimension. J. Math. Soc. Japan 10, 272–286.
Moyal, J.E. (1962). The general theory of stochastic population processes. Acta Math.
108, 1–31.
Nawrotzki, K. (1962). Ein Grenzwertsatz f¨ ur homogene zuf¨ allige Punktfolgen (Verallge-meinerung eines Satzes von A. R´ enyi). Math. Nachr. 24, 201–217.
— (1968). Mischungseigenschaften station¨ arer unbegrenzt teilbarer zuf¨ alliger Maße. Math.
Nachr. 38, 97–114.
Neumann, J. von (1932). Proof of the quasi-ergodic hypothesis. Proc. Natl. Acad. Sci.
USA 18, 70–82.
— (1940). On rings of operators, III. Ann. Math. 41, 94–161.
Neveu, J. (1971). Mathematical Foundations of the Calculus of Probability. Holden-Day, San Francisco.
— (1975). Discrete-Parameter Martingales. North-Holland, Amsterdam.
Nguyen, X.X., Zessin, H. (1979). Ergodic theorems for spatial processes. Z. Wahrsch.
verw. Geb. 48, 133–158.
Bibliography 911 Nieuwenhuis, G. (1994). Bridging the gap between a stationary point process and its Palm distribution. Statist. Nederl. 48, 37–62.
Nikodym, O.M. (1930). Sur une g´ en´ eralisation des int´ egrales de M. J. Radon. Fund.
Math. 15, 131–179.
Norberg, T. (1984).
Convergence and existence of random set distributions.
Ann.
Probab. 12, 726–732.
Novikov, A.A. (1971). On moment inequalities for stochastic integrals. Th. Probab. Appl.
16, 538–541.
— (1972). On an identity for stochastic integrals. Th. Probab. Appl. 17, 717–720.
Nualart, D. (1995/2006). The Malliavin Calculus and Related Topics. Springer, NY.
Ocone, D. (1984). Malliavin calculus and stochastic integral representation of diffusion processes. Stochastics 12, 161–185.
Olson, W.H., Uppuluri, V.R.R. (1972).
Asymptotic distribution of eigenvalues of random matrices. Proc. 6th Berkely Symp. Math. Statist. Probab. 3, 615–644.
Orey, S. (1959). Recurrent Markov chains. Pacific J. Math. 9, 805–827.
— (1962). An ergodic theorem for Markov chains. Z. Wahrsch. verw. Geb. 1, 174–176.
— (1966). F-processes. Proc. 5th Berkeley Symp. Math. Statist. Probab. 2:1, 301–313.
— (1971).
Limit Theorems for Markov Chain Transition Probabilities.
Van Nostrand, London.
Ornstein, D.S. (1969). Random walks. Trans. Amer. Math. Soc. 138, 1–60.
Ornstein, L.S., Uhlenbeck, G.E. (1930). On the theory of Brownian motion. Phys.
Review 36, 823–841.
Ososkov, G.A. (1956). A limit theorem for flows of homogeneous events. Th. Probab.
Appl. 1, 248–255.
Ottaviani, G. (1939). Sulla teoria astratta del calcolo delle probabilit a proposita dal Cantelli. Giorn. Ist. Ital. Attuari 10, 10–40.
Paley, R.E.A.C. (1932). A remarkable series of orthogonal functions I. Proc. London Math. Soc. 34, 241–264.
Paley, R.E.A.C., Wiener, N. (1934). Fourier transforms in the complex domain. Amer.
Math. Soc. Coll. Publ. 19.
Paley, R.E.A.C., Wiener, N., Zygmund, A. (1933).
Notes on random functions.
Math. Z. 37, 647–668.
Palm, C. (1943). Intensity Variations in Telephone Traffic (German). Ericsson Technics 44, 1–189. Engl. trans., North-Holland Studies in Telecommunication 10, Elsevier 1988.
Papangelou, F. (1972). Integrability of expected increments of point processes and a related random change of scale. Trans. Amer. Math. Soc. 165, 486–506.
— (1974a). On the Palm probabilities of processes of points and processes of lines. In: Stochastic geometry (E.F. Harding & D.G. Kendall, eds), pp. 114–147. Wiley, London.
— (1974b). The conditional intensity of general point processes and an application to line processes. Z. Wahrsch. verw. Geb. 28, 207–226.
— (1976). Point processes on spaces of flats and other homogeneous spaces. Math. Proc.
Cambridge Phil. Soc. 80, 297–314.
912 Foundations of Modern Probability Pardoux, E., Peng, S. (1990). Adapted solution of a backward stochastic differential equation. Systems and Control Letters 14, 55–61.
Parthasarathy, K.R. (1967). Probability Measures on Metric Spaces. Academic Press, NY.
Pe˜ na, V.H. de la, Gin´ e, E. (1999). Decoupling: from Dependence to Independence.
Springer, NY.
Perkins, E. (1982). Local time and pathwise uniqueness for stochastic differential equa-tions. Lect. Notes in Math. 920, 201–208. Springer, Berlin.
Peterson, L.D. (2001). Convergence of random measures on Polish spaces. Dissertation, Auburn Univ.
Petrov, V.V. (1995). Limit Theorems of Probability Theory. Clarendon Press, Oxford.
Phillips, H.B., Wiener, N. (1923). Nets and Dirichlet problem. J. Math. Phys. 2, 105–124.
Pitman, J.W. (1995). Exchangeable and partially exchangeable random partitions. Probab.
Th. Rel. Fields 102, 145–158.
Pitt, H.R. (1942). Some generalizations of the ergodic theorem. Proc. Camb. Phil. Soc.
38, 325–343.
Poincar´ e, H. (1890). Sur les ´ equations aux d´ eriv´ ees partielles de la physique math´ ema-tique. Amer. J. Math. 12, 211–294.
— (1899). Th´ eorie du Potentiel Newtonien. Gauthier-Villars, Paris.
Poisson, S.D. (1837).
Recherches sur la Probabilit´ e des Jugements en Matiere Crim-inelle et en Mati ere Civile, Pr´ ec´ ed´ ees des Regles G´ en´ erales du Calcul des Probabilit´ es.
Bachelier, Paris.
Pollaczek, F. (1930). ¨ Uber eine Aufgabe der Wahrscheinlichkeitstheorie, I–II. Math. Z.
32, 64–100, 729–750.
Pollard, D. (1984). Convergence of Stochastic Processes. Springer, NY.
P´ olya, G. (1920).
¨ Uber den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem. Math. Z. 8, 171–181.
— (1921). ¨ Uber eine Aufgabe der Wahrscheinlichkeitsrechnung betreffend die Irrfahrt im Strassennetz. Math. Ann. 84, 149–160.
Port, S.C., Stone, C.J. (1978). Brownian Motion and Classical Potential Theory. Aca-demic Press, NY.
Pospiˇ sil, B. (1935/36). Sur un probl eme de M.M.S. Bernstein et A. Kolmogoroff. ˇ Casopis Pˇ est. Mat. Fys. 65, 64–76.
Pr´ ekopa, A. (1958). On secondary processes generated by a random point distribution of Poisson type. Ann. Univ. Sci. Budapest Sect. Math. 1, 153–170.
Prohorov, Y.V. (1956). Convergence of random processes and limit theorems in proba-bility theory. Th. Probab. Appl. 1, 157–214.
— (1959). Some remarks on the strong law of large numbers. Th. Probab. Appl. 4, 204–208.
— (1961). Random measures on a compactum. Soviet Math. Dokl. 2, 539–541.
Protter, P. (1990). Stochastic Integration and Differential Equations. Springer, Berlin.
Pukhalsky, A.A. (1991). On functional principle of large deviations. In New Trends in Probability and Statistics (V. Sazonov and T. Shervashidze, eds.), 198–218. VSP Moks’las, Moscow.
Bibliography 913 Radon, J. (1913). Theorie und Anwendungen der absolut additiven Mengenfunktionen.
Wien Akad. Sitzungsber. 122, 1295–1438.
Ramer, R. (1974). On nonlinear transformations of Gaussian measures. J. Functional Anal. 15, 166–187.
Rao, K.M. (1969a). On decomposition theorems of Meyer. Math. Scand. 24, 66–78.
— (1969b). Quasimartingales. Math. Scand. 24, 79–92.
Rauchenschwandtner, B. (1980).
Gibbsprozesse und Papangeloukerne.
Verb. wis-sensch. Gesellsch. ¨ Osterreichs, Vienna.
Ray, D.B. (1956). Stationary Markov processes with continuous paths. Trans. Amer.
Math. Soc. 82, 452–493.
— (1963). Sojourn times of a diffusion process. Illinois J. Math. 7, 615–630.
R´ enyi, A. (1956).
A characterization of Poisson processes.
Magyar Tud. Akad. Mat.
Kutato Int. K¨ ozl. 1, 519–527.
— (1967). Remarks on the Poisson process. Studia Sci. Math. Hung. 2, 119–123.
Revuz, D. (1970). Mesures associ´ ees aux fonctionnelles additives de Markov, I–II. Trans.
Amer. Math. Soc. 148, 501–531; Z. Wahrsch. verw. Geb. 16, 336–344.
— (1984). Markov Chains, 2nd ed. North-Holland, Amsterdam.
Revuz, D., Yor, M. (1991/94/99). Continuous Martingales and Brownian Motion, 3rd ed. Springer, Berlin.
Riesz, F. (1909a). Sur les suites de fonctions mesurables. C.R. Acad. Sci. Paris 148, 1303–1305.
— (1909b). Sur les op´ erations fonctionelles lin´ eaires. C.R. Acad. Sci. Paris 149, 974–977.
— (1910).
Untersuchungen ¨ uber Systeme integrierbarer Funktionen.
Math. Ann.
69, 449–497.
— (1926/30). Sur les fonctions subharmoniques et leur rapport a la th´ eorie du potentiel, I–II. Acta Math. 48, 329–343; 54, 321–360.
Rogers, C.A., Shephard, G.C. (1958). Some extremal problems for convex bodies.
Mathematica 5, 93–102.
Rogers, L.C.G., Williams, D. (1994/2000). Diffusions, Markov Processes, and Mar-tingales 1–2, 2nd ed. Cambridge Univ. Press.
Ros´ en, B. (1964). Limit theorems for sampling from a finite population. Ark. Mat. 5, 383–424.
Rosi´ nski, J., Woyczy´ nski, W.A. (1986). On Itˆ o stochastic integration with respect to p-stable motion: Inner clock, integrability of sample paths, double and multiple integrals. Ann. Probab. 14, 271–286.
Rutherford, E., Geiger, H. (1908). An electrical method of counting the number of particles from radioactive substances. Proc. Roy. Soc. A 81, 141–161.
Ryll-Nardzewski, C. (1957). On stationary sequences of random variables and the de Finetti’s [sic] equivalence. Colloq. Math. 4, 149–156.
— (1961). Remarks on processes of calls. Proc. 4th Berkeley Symp. Math. Statist. Probab.
2, 455–465.
Sanov, I.N. (1957). On the probability of large deviations of random variables (Russian).
Engl. trans.: Sel. Trans. Math. Statist. Probab. 1 (1961), 213–244.
914 Foundations of Modern Probability Sato, K.-i. (1999/2013). L´ evy Processes and Infinitely Divisible Distributions, 2nd ed.
Cambridge Univ. Press.
Schilder, M. (1966).
Some asymptotic formulae for Wiener integrals.
Trans. Amer.
Math. Soc. 125, 63–85.
Schneider, R., Weil, W. (2008). Stochastic and Integral Geometry. Springer, Berlin.
Schoenberg, I.J. (1938). Metric spaces and completely monotone functions. Ann. Math.
39, 811–841.
Schr¨ odinger, E. (1931).
¨ Uber die Umkehrung der Naturgesetze. Sitzungsber. Preuss.
Akad. Wiss. Phys. Math. Kl. 144–153.
Schuppen, J.H. van, Wong, E. (1974). Transformation of local martingales under a change of law. Ann. Probab. 2, 879–888.
Schwartz, L. (1980). Semi-martingales sur des vari´ et´ es et martingales conformes sur des vari´ et´ es analytiques complexes. Lect. Notes in Math. 780, Springer.
— (1982). G´ eom´ etrie diff´ erentielle du 2e ordre, semimartingales et ´ equations diff´ erentielles stochastiques sur une vari´ et´ e diff´ erentielle. Lect. Notes in Math. 921, 1–148. Springer Segal, I.E. (1954). Abstract probability spaces and a theorem of Kolmogorov. Amer. J.
Math. 76, 721–732.
Shannon, C.E. (1948). A mathematical theory of communication. Bell System Tech. J.
27, 379–423, 623–656.
Sharpe, M. (1988). General Theory of Markov Processes. Academic Press, Boston.
Shiryaev, A.N. (1995). Probability, 2nd ed. Springer, NY.
Shur, M.G. (1961). Continuous additive functionals of a Markov process. Dokl. Akad.
Nauk SSSR 137, 800–803.
Sierpi´ nski, W. (1928). Une th´ eor eme g´ en´ erale sur les familles d’ensemble. Fund. Math.
12, 206–210.
Silverman, B.W. (1976). Limit theorems for dissociated random variables. Adv. Appl.
Probab. 8, 806–819.
Skorohod, A.V. (1956). Limit theorems for stochastic processes. Th. Probab. Appl. 1, 261–290.
— (1957).
Limit theorems for stochastic processes with independent increments.
Th.
Probab. Appl. 2, 122–142.
— (1961/62). Stochastic equations for diffusion processes in a bounded region, I–II. Th.
Probab. Appl. 6, 264–274; 7, 3–23.
— (1965). Studies in the Theory of Random Processes. Addison-Wesley, Reading, MA.
(Russian orig. 1961.) — (1975). On a generalization of a stochastic integral. Theory Probab. Appl. 20, 219–233.
Slivnyak, I.M. (1962).
Some properties of stationary flows of homogeneous random events. Th. Probab. Appl. 7, 336–341.
Slutsky, E.E. (1937). Qualche proposizione relativa alla teoria delle funzioni aleatorie.
Giorn. Ist. Ital. Attuari 8, 183–199.
Smith, W.L. (1954). Asymptotic renewal theorems. Proc. Roy. Soc. Edinb. A 64, 9–48.
Snell, J.L. (1952). Application of martingale system theorems. Trans. Amer. Math. Soc.
73, 293–312.
Bibliography 915 Sova, M. (1967). Convergence d’op´ erations lin´ eaires non born´ ees. Rev. Roumaine Math.
Pures Appl. 12, 373–389.
Sparre-Andersen, E. (1953/54). On the fluctuations of sums of random variables, I–II.
Math. Scand. 1, 263–285; 2, 195–223.
Sparre-Andersen, E., Jessen, B. (1948). Some limit theorems on set-functions. Danske Vid. Selsk. Mat.-Fys. Medd. 25:5 (8 pp.).
Spitzer, F. (1964). Electrostatic capacity, heat flow, and Brownian motion. Z. Wahrsch.
verw. Geb. 3, 110–121.
— (1976). Principles of Random Walk, 2nd ed. Springer, NY.
Stieltjes, T.J. (1894/95). Recherches sur les fractions continues. Ann. Fac. Sci. Toulouse 8, 1-122; 9, 1–47.
Stone, C.J. (1963). Weak convergence of stochastic processes defined on a semi-infinite time interval. Proc. Amer. Math. Soc. 14, 694–696.
— (1965). A local limit theorem for multidimensional distribution functions. Ann. Math.
Statist. 36, 546-551.
— (1968). On a theorem by Dobrushin. Ann. Math. Statist. 39, 1391–1401.
— (1969). On the potential operator for one-dimensional recurrent random walks. Trans.
Amer. Math. Soc. 136, 427–445.
Stone, M.H. (1932). Linear transformations in Hilbert space and their applications to analysis. Amer. Math. Soc. Coll. Publ. 15.
Stout, W.F. (1974). Almost Sure Convergence. Academic Press, NY.
Strassen, V. (1964). An invariance principle for the law of the iterated logarithm. Z.
Wahrsch. verw. Geb. 3, 211–226.
Stratonovich, R.L. (1966). A new representation for stochastic integrals and equations.
SIAM J. Control 4, 362–371.
Stricker, C., Yor, M. (1978).
Calcul stochastique d´ ependant d’un parametre.
Z.
Wahrsch. verw. Geb. 45, 109–133.
Stroock, D.W. (1981).
The Malliavin calculus and its applications to second order parabolic differential equations, I–II. Math. Systems Theory 14, 25–65, 141–171.
Stroock, D.W., Varadhan, S.R.S. (1969). Diffusion processes with continuous coeffi-cients, I–II. Comm. Pure Appl. Math. 22, 345–400, 479–530.
— (1979). Multidimensional Diffusion Processes. Springer, Berlin.
Sucheston, L. (1983). On one-parameter proofs of almost sure convergence of multipa-rameter processes. Z. Wahrsch. verw. Geb. 63, 43–49.
Tak´ acs, L. (1962). Introduction to the Theory of Queues. Oxford Univ. Press, NY.
— (1967). Combinatorial Methods in the Theory of Stochastic Processes. Wiley, NY.
Tanaka, H. (1963). Note on continuous additive functionals of the 1-dimensional Brow-nian path. Z. Wahrsch. verw. Geb. 1, 251–257.
Tempel’man, A.A. (1972).
Ergodic theorems for general dynamical systems.
Trans.
Moscow Math. Soc. 26, 94–132.
Thiele, T.N. (1880). On the compensation of quasi-systematic errors by the method of least squares (in Danish). Vidensk. Selsk. Skr., Naturvid. Mat. 12 (5), 381–408.
Thorisson, H. (1995). On time and cycle stationarity. Stoch. Proc. Appl. 55, 183–209.
916 Foundations of Modern Probability — (1996). Transforming random elements and shifting random fields. Ann. Probab. 24, 2057–2064.
— (2000). Coupling, Stationarity, and Regeneration. Springer, NY.
Tonelli, L. (1909). Sull’integrazione per parti. Rend. Acc. Naz. Lincei (5) 18, 246–253.
Tortrat, A. (1969). Sur les mesures al´ eatoires dans les groupes non ab´ eliens. Ann. Inst.
H. Poincar´ e, Sec. B, 5, 31–47.
Trotter, H.F. (1958a). Approximation of semi-groups of operators. Pacific J. Math. 8, 887–919.
— (1958b). A property of Brownian motion paths. Illinois J. Math. 2, 425–433.
Varadarajan, V.S. (1958). Weak convergence of measures on separable metric spaces.
On the convergence of probability distributions. Sankhy¯ a 19, 15–26.
— (1963).
Groups of automorphisms of Borel spaces.
Trans. Amer. Math. Soc.
109, 191–220.
Varadhan, S.R.S. (1966). Asymptotic probabilities and differential equations. Comm.
Pure Appl. Math. 19, 261–286.
— (1984). Large Deviations and Applications. SIAM, Philadelphia.
Ville, J. (1939). ´ Etude Critique de la Notion du Collectif. Gauthier-Villars, Paris.
Vitali, G. (1905). Sulle funzioni integrali. Atti R. Accad. Sci. Torino 40, 753–766.
Volkonsky, V.A. (1958). Random time changes in strong Markov processes. Th. Probab.
Appl. 3, 310–326.
— (1960). Additive functionals of Markov processes. Trudy Mosk. Mat. Obshc. 9, 143–189.
Wald, A. (1944). On cumulative sums of random variables. Ann. Math. Statist. 15, 283–296.
— (1946). Differentiation under the integral sign in the fundamental identity of sequential analysis. Ann. Math. Statist. 17, 493–497.
— (1947). Sequential Analysis. Wiley, NY.
Waldenfels, W.v. (1968). Characteristische Funktionale zuf¨ alliger Maße. Z. Wahrsch.
verw. Geb. 10, 279–283.
Walsh, J.B. (1978). Excursions and local time. Ast´ erisque 52–53, 159–192.
— (1984). An introduction to stochastic partial differential equations. Lect. Notes in Math.
1180, 265–439.
Wang, A.T. (1977). Generalized Itˆ o’s formula and additive functionals of Brownian mo-tion. Z. Wahrsch. verw. Geb. 41, 153–159.
Warner, F.W. (1983). Foundations of Differential Manifolds and Lie groups. Springer, Berlin.
Watanabe, H. (1964). Potential operator of a recurrent strong Feller process in the strict sense and boundary value problem. J. Math. Soc. Japan 16, 83–95.
Watanabe, S. (1964).
On discontinuous additive functionals and L´ evy measures of a Markov process. Japan. J. Math. 34, 53–79.
— (1968). A limit theorem of branching processes and continuous state branching processes.
J. Math. Kyoto Univ. 8, 141–167.
Bibliography 917 — (1987). Analysis of Wiener functionals (Malliavin calculus) and its applications to heat kernels. Ann. Probab. 15, 1–39.
Watson, H.W., Galton, F. (1874). On the probability of the extinction of families. J.
Anthropol. Inst. Gr. Britain, Ireland 4, 138–144.
Weil, A. (1936). La mesure invariante dans les espaces de groupes et les espaces ho-mog enes. Enseinement Math. 35, 241.
— (1940). L’int´ egration dans les Groupes Topologiques et ses Applications. Hermann et Cie, Paris.
Whitt, W. (2002). Stochastic-Process Limits. Springer, NY.
Whitworth, W.A. (1878). Arrangements of m things of one sort and n things of another sort under certain conditions of priority. Messenger of Math. 8, 105–14.
Wiener, N. (1923). Differential space. J. Math. Phys. 2, 131–174.
— (1938). The homogeneous chaos. Amer. J. Math. 60, 897–936.
— (1939). The ergodic theorem. Duke Math. J. 5, 1–18.
Wiener, N., Wintner, A. (1943). The discrete chaos. Amer. J. Math. 65, 279–298.
Wigner, E.P. (1955). Characteristic vectors of bordered matrices with infinite dimension.
Ann. Math. 62, 548–564.
Williams, D. (1991). Probability with Martingales. Cambridge Univ. Press.
Yaglom, A.M. (1947). Certain limit theorems of the theory of branching processes. Dokl.
Acad. Nauk SSSR 56, 795–798.
Yamada, T. (1973).
On a comparison theorem for solutions of stochastic differential equations and its applications. J. Math. Kyoto Univ. 13, 497–512.
Yamada, T., Watanabe, S. (1971). On the uniqueness of solutions of stochastic differ-ential equations. J. Math. Kyoto Univ. 11, 155–167.
Yoeurp, C. (1976). D´ ecompositions des martingales locales et formules exponentielles.
Lect. Notes in Math. 511, 432–480. Springer, Berlin.
Yor, M. (1978). Sur la continuit´ e des temps locaux associ´ ee a certaines semimartingales.
Ast´ erisque 52–53, 23–36.
Yosida, K. (1948). On the differentiability and the representation of one-parameter semi-groups of linear operators. J. Math. Soc. Japan 1, 15–21.
— (1949).
Brownian motion on the surface of the 3-sphere.
Ann. Math. Statist.
20, 292–296.
Yosida, K., Kakutani, S. (1939). Birkhoff’s ergodic theorem and the maximal ergodic theorem. Proc. Imp. Acad. 15, 165–168.
Z¨ ahle, M. (1980).
Ergodic properties of general Palm measures.
Math. Nachr.
95, 93–106.
Zaremba, S. (1909). Sur le principe du minimum. Bull. Acad. Sci. Cracovie.
Zinn, J. (1986). Comparison of martingale difference sequences. In: Probability in Banach Spaces. Lect. Notes in Math. 1153, 453–457. Springer, Berlin.
Zygmund, A. (1951). An individual ergodic theorem for noncommutative transformations.
Acta Sci. Math. (Szeged) 14, 103–110.
Indices Authors Abel, N.H., 26, 283, 823, 875 Adler, R.J., 866 Akcoglu, M.A., 878 Aldous, D.J., 514, 555, 576, 614, 633, 636, 651–4, 859, 874, 877–80 Alexandrov, A.D., 116, 856 Alexandrov, P., 853 Anderson, G.W., 880 Andr´ e, D., 263, 584, 863, 877 Arndt, U., 898 Arzel a, C., 505–6, 509, 512, 536, 553, 840 Ascoli, G., 505–6, 509, 512, 536, 553, 840 Athreya, K.B., 863–4 Baccelli, F., 883 Bachelier, L., 301, 306, 862–3, 865, 885 Bahadur, R.R., 875 Banach, S., 86, 369, 371, 460, 831–2, 835–6, 851 Barbier, ´ E., 877 Bartlett, M.S., 325, 867 Bass, R.F., 886 Bateman, H., 281, 332, 864, 866 Baxter, G., 255, 267, 863 Bayes, T., 878 Bell, D.R., 873 Beltrami, E., 822 Belyaev, Y.K., 881 Berbee, H.C.P., 576, 877–8 Bernoulli, J., 81–2, 93, 323, 531, 658, 731, 856 Bernstein, S.N., 195, 343, 362, 857, 860, 883 Bertoin, J., 868 Bertrand, J., 578, 584, 877 Bessel, F.W., 293, 305, 763 Bichteler, K., 393, 460, 872 Bienaym´ e, I.J., 102, 277, 287–8, 291, 294, 701, 856, 864 Billingsley, P., 841, 853–4, 856, 859, 874–5, 877–9, 887 Birkhoff, G.D., 557, 561, 589–90, 876 Bismut, J.M., 873, 887 Blackwell, D., 270, 863, 878 Blumenthal, R.M., 381, 389, 681, 861–2, 869, 881, 886 Bochner, S., 144, 311, 857, 866 Bogachev, V.I., 854, 887 Bogolioubov, N., 575, 877 Bohl, P., 583 Boltzmann, L., 876, 883 Borel, E., 9-10, 14–5, 35, 44–5, 51, 56–7, 61– 2, 64, 70–2, 81, 83, 92–3, 104, 112, 114, 174, 185, 197, 309, 396, 551–2, 658, 853–4, 855–6, 860 Bortkiewicz, L.v., 857 Bourbaki, N., 887 Brandt, A., 861 Breiman, L., 581–2, 696, 856, 877, 881, 885 Br´ emaud, P., 883 Brown, R., 865 Brown, T.C., 338, 867 Bryc, W., 540, 876 B¨ uhlmann, H., 619, 878 Buniakowsky, V.Y., 833, 853 Burkholder, D.L., 393, 400, 449, 459, 861, 870, 872 Cameron, R.H., 417, 435, 529, 535, 871 Campbell, N.R., 709–10, 722, 725, 729–31 Cantelli, F.P., 81, 83, 92, 112, 114–5, 185, 197, 309, 396, 551–2, 583, 855–6, 860 Carath´ eodory, C., 33–4, 36, 854 Carleson, L., 877 Cauchy, A.L., 20, 27–8, 86, 104, 108, 110, 160, 278, 353–4, 368, 372, 395, 398, 401, 403, 425, 445, 448, 472–3, 536, 653, 667, 683, 765, 771–3, 832–3, 853 Chacon, R.V., 590, 878 Chapman, S., 233, 235, 238, 246, 250, 367– 8, 679, 862 Chebyshev, P.L., 102, 107, 109, 450, 532, 541, 696, 856–7 Chentsov, N.N., 95, 511, 856, 874 Chernoff, H., 532, 875, 878 Choquet, G., 785, 788–9, 827, 875, 886 Chow, Y.S., 857, 860, 877 Christoffel, E.B., 801, 808, 825 Chung, K.L., 216, 258–9, 491, 582, 783, 856, 861, 863–4, 870, 877, 886 C ¸inlar, E., 853, 856, 860 Clark, J.M.C., 393, 483, 873 Copeland, A.H., 332, 866 Courant, R., 885 Cournot, A.A., 857 Courrege, P., 401, 441, 870–1 Cox, D.R., 55, 321, 323, 325, 327–31, 343, 360–1, 365, 505, 521–2, 626, 659, 685, 688–90, 696, 722, 729, 867, 881 Cram´ er, H., 129, 298, 311, 322, 532, 854, 856, 858, 866, 875, 877 919 © Springer Nature Switzerland AG 2021 O. Kallenberg, Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 920 Foundations of Modern Probability Cs¨ org¨ o, M., 874 Daley, D.J., 867, 875, 882 Dambis, K.E., 420, 871 Daniell, P.J., 161, 164, 178, 854, 859 Darling, R.W.R., 814, 887 Davidson, R., 334, 697, 867, 882 Davis, B.J., 449–50, 872, 891 Dawid, A.P., 879–80 Dawson, D.A., 544, 864, 876 Day, M.M., 851, 876 Debes, H., 855, 874, 882 Dellacherie, C., 393, 426, 460, 827, 859–62, 866, 869, 871–2, 877, 881, 886–7 Dembo, A., 876 Democritus, 865 Derman, C., 765, 884 Deuschel, J.D., 876 Dini, U., 534 Dirac, P., 19, 55 Dirichlet, P.G.L., 771, 775–7, 885 Dobrushin, R.L., 659, 694, 855, 881, 883 Doeblin, W., 157, 159, 249, 420, 857–8, 862, 864, 869, 871, 878, 883–4 D¨ ohler, R., 859 Dol´ eans(-Dade), C., 211, 214, 223, 393, 413, 442–3, 447, 861, 870–2 Donsker, M.D., 3, 494, 510–1, 527, 548, 873– 5 Doob, J.L., 1–2, 18, 161, 166, 170, 185, 190, 192–3, 195–8, 201–2, 204–5, 207, 211, 213, 226, 228, 303, 325, 348, 397, 427, 437, 439, 771, 776, 793– 4, 854, 856, 859, 860–2, 864, 867, 869–71, 873, 877–9, 882–3, 885–6 Dubins, L.E., 420, 871, 873 Dudley, R.M., 119, 840, 846, 853–4, 857, 874, 887 Duncan, T.E., 887 Dunford, N., 108, 207, 589, 856, 861, 878 Durrett, R., 874 Dvoretzky, A., 501, 873 Dynkin, E.B., 367, 381, 383–4, 757, 853, 860, 862, 864, 869, 877, 881, 884–6 Eagleson, G.K., 879 Egorov, D., 30 Einstein, A., 865 Elliott, R.J., 860–1 Ellis, R.L., 866 Ellis, R.S., 875 ´ Emery, M., 814, 847, 850, 887–8 Engelbert, H.J., 752, 885 Erd¨ os, P., 495, 863, 873 Erlang, A.K., 866 Etheridge, A.M., 864 Ethier, S.N., 841, 869, 875, 884, 887 Faber, G., 856 Farrell, R.H., 573, 877 Fatou, P., 22–3, 82, 97, 107, 116, 203–4, 250, 272, 422, 461, 590, 601, 607–8, 714, 792, 853 Fell, J.M.G., 524, 772, 843, 875, 887 Feller, W., 3, 79, 135, 137, 139, 156, 185–6, 207, 231, 233, 262, 270, 277, 282, 292, 295, 367, 369–72, 374–6, 378– 9, 381–4, 386–90, 407, 555, 587, 598, 600–2, 604–5, 607–8, 661, 676, 677, 679, 681, 683, 735, 744, 751, 757, 759, 763–6, 768, 772–3, 817, 823, 857–8, 862–5, 868, 869, 877– 9, 884–5, 897 Fenchel, M.W., 529, 531, 547, 875 Feynman, R.P., 771–2, 885 Finetti, B. de, 299, 346, 555, 611, 613–4, 658, 858, 867, 878–9 Fischer, E.S., 472, 834 Fisk, D.L., 395, 407, 410, 749, 804–5, 870, 872 Fortet, R., 869 Fourier, J.B.J., 132, 144, 259, 312 Fraenkel, A., 846, 854 Franken, P., 883 Fr´ echet, M., 853, 862 Freedman, D., 299, 625, 862, 865–6, 876, 878, 885 Freidlin, M.I., 487, 529, 546, 875 Friedrichs, K., 893 Frostman, O., 885 Fubini, G., 2, 7, 9, 25, 66, 89, 164, 853, 859 Fuchs, W.H.J., 259, 863 Furstenberg, H., 572, 877 Gaifman, H., 879 Galmarino, A.R., 422, 871 Galton, F., 287, 864 Garsia, A.M., 220, 451, 561, 861, 876 G¨ artner, J., 544, 876 Gauss, C.F., 857, 864–5, 885 Gaveau, B., 873 Geiger, H., 866 Getoor, R.K., 681, 855, 862, 871, 881, 886 Gibbs, J.W., 659, 709, 725–7, 729–31, 883 Gihman, I.I., 871, 883 Gin´ e, E., 868 Girsanov, I.V., 125, 395, 433, 437, 822, 871, 883 Indices 921 Glivenko, V.I., 115, 856 Gnedenko, B.V., 157, 858 Gnedin, A.V., 880 Goldman, J.R., 881 Goldstein, J.A., 869 Goldstein, S., 576, 877–8 Grandell, J., 330, 867, 874 Green, G., 751, 759, 771, 777–80, 782–8, 791, 798, 885 Greenwood, P., 880 Griffeath, D., 877–8 Grigelionis, B., 221, 686, 861, 868, 881 Gr¨ onwall, T.H., 547, 738, 756 Guionnet, A., 889 Gundy, R.F., 400, 449, 870, 891 Haar, A., 7, 55, 64, 66–9, 71, 77, 577, 626, 710–1, 731, 854 Hagberg, J., 624, 878 Hahn, H., 33, 38, 47, 54, 86, 460, 597, 832, 851, 854 H´ ajek, J., 878 Hall, P., 874 Halmos, P.R., 853, 859, 879 Hardy, G.H., 564, 877 Harris, T.E., 518, 555, 587, 598, 604–5, 609, 864, 874, 878 Hartman, P., 494, 873 Hausdorff, F., 50, 343, 598, 829–30, 853 Heine, E., 35 Helly, E., 125, 142, 858 Hermite, C., 126, 315, 479 Hesse, O., 806 Hewitt, E., 81, 91, 855, 878 Heyde, C.C., 874 Hilbert, D., 161, 163–4, 300, 310, 312–3, 315–6, 398, 418, 440, 465–6, 469– 70, 536, 568, 631, 653, 833–7, 876, 880 Hille, E., 367, 376, 869 Hitczenko, P., 361, 868, 872 Hoeffding, W., 879 Hoeven, P.C.T. v.d., 883 H¨ older, O., 7, 9, 26–7, 81–2, 85–6, 95–6, 108, 168, 196, 221, 297, 301, 304, 416, 511, 531, 609, 683, 749, 853, 855, 869 Hoover, D.N., 555, 615, 633–4, 636, 879 Hopf, E., 255, 266, 561, 589, 863, 878 H¨ ormander, L., 872 Horowitz, J., 880 Hunt, G.A., 190, 305, 678, 778, 861, 866, 869, 881, 884–5, 886–7 Hurewicz, W., 878 Hurwitz, A., 854 Ikeda, N., 870, 884, 887 Ioffe, D., 876 Ionescu Tulcea, A., 581, 877 Ionescu Tulcea, C.T., 164, 180, 859 Itˆ o, K., 3, 125, 151, 295, 297, 312–3, 315, 319, 332, 346–7, 393, 395, 403, 407– 10, 426, 439, 465, 470, 481–2, 485, 503, 601, 632, 652, 658, 661, 664, 668–9, 675, 735–6, 738, 752, 759, 773, 804, 807, 809, 858, 861, 866– 9, 870–2, 880–1, 883–6 Jacobsen, M., 861 Jacod, J., 221, 337, 347, 353, 443, 449, 841, 861, 867–8, 871–2, 875, 884, 887 Jagers, P., 864, 882 Jamison, B., 878 Jensen, J.L.W.V., 81, 85–6, 166, 168, 193, 362, 459, 484, 562, 585, 855 Jessen, B., 199, 860 Jiˇ rina, M., 858, 867 Johnson, W.B., 872 Jordan, C., 33, 47, 854 Kac, M., 495, 771–2, 873, 882, 885–6 Kakutani, S., 422, 561, 776, 863, 870, 876, 878, 885 Kallenberg, O., 903–4 Kallianpur, G., 866 Kaplan, E.L., 659, 709, 713, 882 Karamata, J., 140, 857 Karatzas, I., 866, 870–1, 881, 884–6 Kazamaki, N., 412, 870 Kemeny, J.G., 864 Kendall, D.G., 864, 875 Kendall, W.S., 887 Kerstan, J., 721, 858, 894, 909 Kesten, H., 572, 877 Khinchin, A.Y., 3, 109, 139, 151, 156, 309, 856–8, 866, 868–9, 872, 875–6, 881– 2 Kingman, J.F.C., 332–3, 555, 557, 570, 632, 656, 864, 866–7, 877, 880 Kinney, J.R., 379, 856, 869 Klebaner, F.C., 870 Knapp, A.W., 904 Knight, F.B., 423, 661, 675, 871, 880 Koebe, P., 774 Kolmogorov, A.N., 33, 81, 90, 95, 109, 111, 113, 151, 161, 164, 179, 185, 195, 199, 233, 235–6, 238, 246, 248, 250– 1, 256, 283, 285–6, 290–1, 319, 367– 8, 372, 511, 562, 679, 772, 841, 922 Foundations of Modern Probability 855–6, 858–60, 862–5, 867–9, 871– 4, 883–6 Koml´ os, J., 874 K¨ onig, D., 714, 882, 898 Koopman, B.O., 876 Korolyuk, V.S., 714, 882 Krauss, P.H., 879 Krengel, U., 877–8 Krickeberg, K., 329, 697, 867, 872, 882 Kronecker, L., 112 Krylov, N., 575, 877 Kullback, S., 875 Kunita, H., 415, 441, 445, 870–2, 878, 883 Kunze, M., 873 Kuratowski, K., 827, 846 Kurtz, T.G., 386, 841, 869, 874–5, 884, 887 Kuznetsov, S.E., 886 Kwapie´ n, S., 857, 868 Lanford, O.E., 883 Langevin, P., 735, 737, 865, 883 Laplace, P.S. de, 2, 125–8, 130, 142, 153, 292, 323, 371, 375, 720, 774, 822, 857, 859, 864, 885 Last, G., 861, 867 Leadbetter, M.R., 856, 858, 877 Lebesgue, H., 7, 22, 25, 33, 35, 37, 40, 42, 54, 61, 82, 93, 97, 215, 275, 424, 443, 621, 711, 736, 759, 794, 825, 853–4 Le Cam, L., 874 Leeuwenhoek, A. van, 865 Le Gall, J.F., 864, 885 Legendre, A.M., 529, 531, 547, 875 Leibler, R.A., 875 Leontovich, M.A., 885 Levi, B., 21, 853 Levi-Civita, T., 819, 886 L´ evy, P., 2–3, 79, 111, 128, 132, 137, 139, 144, 151, 158, 195, 197, 199, 231, 233, 255, 277, 283, 295, 300, 303, 307, 321, 345–7, 353–6, 358, 362, 365, 367, 375, 386, 391, 417, 419, 421–2, 442, 462, 516, 611–2, 620, 629, 663, 669, 671, 683, 801, 823, 856–8, 860, 862–3, 865–7, 868–70, 879–80, 883 Lewy, H., 893 Liao, M., 824, 887 Liapounov, A.M., 857 Lie, M.S., 801, 819, 823, 850, 862, 886 Liemant, A., 882, 894 Liggett, T.M., 570, 877 Lindeberg, J.W., 125, 132, 135, 857, 869 Lindgren, G., 907 Lindvall, T., 862–3, 874 Lipschitz, R., 738, 754–6, 885 Littlewood, J.E., 564, 877 Lo eve, M., 1, 95, 853–4, 856–8, 868 Lomnicki, Z., 181, 859 Lukacs, E., 858 Lundberg, F., 866 Lusin, N.N., 30, 827 Mackeviˇ cius, V., 386, 869 Maisonneuve, B., 880, 894 Major, P., 906 Maker, P., 562, 877 Malliavin, P., 125, 393, 465–6, 472, 478, 480, 484, 861, 872–3 Mann, H.B., 117, 857 Marcinkiewicz, J., 113, 856, 872 Markov, A.A., 102, 234, 856, 862 Martin, W.T., 417, 435, 529, 535, 871 Maruyama, G., 694, 766, 871, 881, 884 Matheron, G., 875, 886, 888 Matthes, K., 334, 517–8, 687, 702–4, 714, 721, 842, 855, 858, 867, 874–5, 881, 882–3, 894, 908 Maxwell, J.C., 299, 865, 883 Mazurkiewicz, S., 853 McDonald, D., 889 McGinley, W.G., 879 McKean, H.P., 682, 759, 866, 881, 884–6 McMillan, B., 581, 877 Mecke, J., 690, 721, 855, 866–7, 881, 909 Mehta, M.L., 880 M´ el´ eard, S., 761, 884 M´ emin, J., 443 M´ etivier, M., 871 Meyer, P.A., 2, 161, 204, 207, 211, 216, 220, 228, 336, 389, 397, 439, 442, 451, 453, 664, 771, 794, 814–5, 827, 859– 60, 861–2, 867, 869–72, 877, 880– 1, 886–7, 894 Miles, R.E., 881 Millar, P.W., 400, 870 Minkowski, H., 7, 9, 26–7, 168, 313, 425, 454, 563, 569, 653, 692, 700, 853 Mitoma, I., 874 Moivre, A. de, 857, 864, 866 Molchanov, I., 875, 886, 888 M¨ onch, G., 330, 867 Morgan, A. de, 10 Motoo, M., 765, 884 Moyal, J.E., 866 Nash, J.F., 802 Indices 923 Nawrotzki, K., 858, 881 Neumann, J. von, 583, 854, 859, 876 Neveu, J., 220, 582, 860–1, 878 Newton, I., 798, 857 Ney, P., 864, 889 Nguyen, X.X., 692, 881 Nieuwenhuis, G., 882 Nikodym, O.M., 7, 33, 40, 54, 61, 163, 535, 854, 859 Norberg, T., 524, 875 Novikov, A.A., 400, 436, 870–1 Nualart, D., 866, 873 Ocone, D., 393, 483, 873 Olson, W.H., 879 Orey, S., 248, 270, 595, 598, 863, 872, 878 Ornstein, D.S., 258, 590, 863, 878 Ornstein, L.S., 297, 303, 312, 393, 465, 478– 80, 865, 883 Ososkov, G.A., 881 Ottaviani, G., 242, 862 Paley, R.E.A.C., 102, 304, 863, 865, 872 Palm, C., 4, 55, 575, 659, 685, 705–6, 709– 11, 715–6, 720–5, 729–32, 863–4, 878, 881, 882–3 Papangelou, F., 336, 659, 698, 709, 727–9, 855, 867, 882–3 Pardoux, E., 482, 873 Parseval, M.A., 312 Parthasarathy, K.R., 875 Pellaumail, J., 871 Pe˜ na, V.H. de la, 868 Peng, S., 873 Penrose, M.D., 867 Perkins, E.A., 864, 885 Peterson, L.D., 874 Petrov, V.V., 858, 875 Phillips, H.B., 885 Picard, E., 733, 735, 738 Pitman, J.W., 880 Pitt, H.R., 877 Poincar´ e, H., 885 Poisson, S.D., 129, 323, 857, 866 Pollaczek, F., 863 Pollard, D., 874–5 Pollard, H., 897 P´ olya, G., 857, 863 Port, S.C., 886 Pospiˇ sil, B., 864 Pr´ ekopa, A., 325, 867 Prohorov, Y.V., 3, 117, 487, 505, 507, 509– 10, 841–2, 857, 872–3, 874 Protter, P., 872 Pukhalsky, A.A., 876 Radon, J., 7, 33, 40, 51, 54, 61, 163, 535, 853–4, 859 Ramer, R., 873 Rao, K.M., 211, 458, 861, 872 Rauchenschwandtner, B., 883 Ray, D.B., 661, 675, 869, 880, 884 Regan, F., 332, 866 R´ enyi, A., 854, 867, 881 R´ ev´ esz, P., 874 Revuz, D., 661, 677–8, 866, 870–1, 878, 881, 885 Riemann, B., 54, 273, 733, 801–2, 818–9, 821–3, 826, 849 Riesz, F., 7, 33, 51, 54, 371, 379, 391, 472, 733, 771, 796, 834, 853, 854, 856, 886 Riesz, M., 854 Rogers, C.A., 846 Rogers, L.C.G., 860–1, 870, 872, 874–5, 887 Rootz´ en, H., 907 Ros´ en, B., 878 Rosi´ nski, J., 358, 868 Ross, K.A., 855 Rubin, H., 117, 857 Ruelle, D., 883 Rutherford, E., 866 Ryll-Nardzewski, C., 555, 611, 613, 632, 658, 714, 855, 878, 882 Sanov, I.N., 487, 529, 548, 875 Sato, K.-i., 868 Savage, L.J., 81, 91, 856, 878 Schechtman, G., 902 Schilder, M., 487, 529, 536, 550, 552, 875–6 Schmidt, V., 898 Schmidt, W., 752, 885 Schneider, R., 875 Schoenberg, I.J., 299, 625, 865 Schr¨ odinger, E., 885–6 Schuppen, J.H. van, 433, 448, 871–2 Schwartz, J.T., 589, 878 Schwartz, L., 814–5, 887 Schwarz, G., 420, 871 Schwarz, H.A., 833 Segal, I.E., 865 Shannon, C.E., 581, 877 Shappon, C.E., Sharpe, M., 855, 862, 871, 886 Shephard, G.C., 846 Shiryaev, A.N., 841, 868, 872, 875, 887 Shreve, S.E., 866, 870–1, 881, 884–6 Shur, M.G., 861 924 Foundations of Modern Probability Sibson, R., 879 Sierpi´ nski, W., 10, 583, 853 Silverman, B.W., 879 Skorohod, A.V., 3, 101, 119, 177, 356, 426, 433, 465, 472, 487, 489–90, 493, 502, 505, 512–3, 516, 622, 662, 742, 754–5, 841, 857, 868, 873–4, 880, 883–4, 887 Slivnyak, I.M., 717, 721, 882 Slutsky, E.E., 856 Smith, W.L., 864 Smoluchovsky, M., 235, 862 Snell, J.L., 196, 860, 904 Sobolev, S., 465, Sova, M., 386, 869 Sparre-Andersen, E., 263, 267, 495, 619, 860, 863, 879 Spitzer, F., 291, 864, 886 Steinhaus, H., 835 Stieltjes, T.J., 33, 42, 215, 310, 396, 405–6, 408, 443, 854 Stone, C.J., 133, 696, 855, 857, 863, 874, 881, 886 Stone, M.H., 311, 831, 866 Stout, W.F., 857 Strassen, V., 487, 493, 550, 873, 876 Stratonovich, R.L., 395, 410, 749, 804–5, 870 Stricker, C., 121, 413, 856, 870 Stroock, D.W., 741, 743–4, 773, 873, 875–6, 884–5 Sucheston, L., 877 Sztencel, R., 455, 872 Szulga, J., 340, 867 Tak´ acs, L., 578, 864, 877 Tanaka, H., 659, 661–2, 664, 682, 753, 755, 766, 880–1, 884 Taylor, B., 132–3, 136, 156, 368, 386, 446 Teicher, H., 857, 860, 877–8 Tempel’man, A.A., 877 Thorisson, H., 576–7, 716, 863, 877–8, 882 Tonelli, L., 25, 853 Tortrat, A., 855 Trauber, P., 873 Trotter, H.F., 386, 663, 869, 880 Tusn´ ady, G., 906 Tychonov, A.N., 66, 829 Uhlenbeck, G.E., 297, 303, 312, 393, 465, 478–80, 865, 883 Ulam, S., 181, 859 Uppuluri, V.R.R., 879 Varadarajan, V.S., 508, 573, 874, 877 Varadhan, S.R.S., 534, 540, 548, 741, 773, 875–6, 884–5 Vere-Jones, D., 867, 875, 882 Vervaat, W., 875 Ville, J., 860 Vitali, G., 854 Volkonsky, V.A., 679, 682, 759, 861, 881, 883–4 Voronoy, G.F., 711 Wakolbinger, A., 908 Wald, A., 117, 255, 261, 417, 436, 857, 863, 871 Waldenfels, W. von, 867, 874 Walsh, J.B., 216, 675, 861, 880, 884 Walsh, J.L., 872 Wang, A.T., 664, 880 Warmuth, W., 909 Warner, F.W., 824 Watanabe, H., 765, 878, 884 Watanabe, S., 337, 374, 403, 415, 441, 445, 605, 735, 743–4, 747, 754, 861, 864, 867, 869–73, 884–5, 887 Watson, H.W., 287, 864 Weber, N.C., 879 Weierstrass, K., 128, 408, 446, 831 Weil, A., 64, 855 Weil, W., 875 Wentzell, A.D., 487, 529, 546, 875 Weyl, H., 583 Whitney, H., 802 Whitt, W., 874–5 Wiener, N., 125, 255, 266, 295, 297, 301, 304, 310, 312–3, 316–7, 319, 393, 465, 470, 503, 557, 564, 566, 632, 652, 658, 746, 856, 863, 865–7, 876, 880–1, 885, 911 Wigner, E.P., 879–80 Williams, D., 860–1, 870, 872, 884–5, 887 Williams, R.J., 870 Wintner, A., 493, 867, 873 Wold, H., 129, 298, 322, 858 Wong, E., 433, 448, 871–2 Woyczy´ nski, W.A., 358, 857, 868 Yaglom, A.M., 291, 864 Yamada, T., 735, 747, 754–5, 884–5 Yoerp, C., 453, 455, 872 Yor, M., 121, 413, 663, 856, 866, 870–1, 880– 1, 885 Yosida, K., 367, 372, 376, 387, 390, 561, 869, 876, 886 Yule, G.U., 277, 290–1 Yushkevicz, A.A., 381, 860, 869 Indices 925 Z¨ ahle, M., 717, 882 Zaremba, S., 775, 885 Zeitouni, O., 876, 889 Zermelo, E., 846, 854 Zessin, H., 692, 881 Zinn, J., 361, 868, 902 Zorn, M., 54, 576–7, 846 Zygmund, A., 102, 113, 304, 557, 565, 856, 872, 877, 911 Topics Abelian group, 26, 241, 283 absolute, -ly continuous, 24, 49 distribution, 485 function, 49–50 measure, 24 moment, 85 absor|bing, -ption boundary, 761 martingale, 203 natural, 763 state, 278, 379 accessible boundary, set, 763 random measure, 218 time/jump, 210, 214, 218 action, 68 joint, 68 measurable, 68 proper, 68 transitive, 68 adapted process, 186 random measure, 218, 221 additive functional Brownian, 682 continuous, 676, 794–7 elementary, 676 uniqueness, 679 adjacency array, 655 adjoint operator, 568, 836 a.e. —almost everywhere, 23 affine maps, 810–12 algebra, 830 allocation sequence, 618 almost everywhere, 23 invariant, 559 alternating function, 785, 789 analytic map, 421 ancestors, 290–1 announcing sequence, 208 aperiodic, 246 approximation (in/of) covariation, 407 distribution, 118 empirical distribution, 498 exchangeable sum, 624 local time, 665, 671 Lp, 29 LDP, 545 Markov chains, 388 martingale, 500 Palm measure, 714, 718 point process, 714, 718 predictable process, 441 random walk, 357, 493, 502, 516 renewal process, 496 totally inaccessible time, 213 arcsine laws Brownian motion, 307 random walks, 495 L´ evy processes, 358 Arzela–Ascoli thm, 506, 840 a.s. —almost surely, 83 associative, 26 asymptotic|ally hitting probability, 524–5 invariant, 74, 569, 612, 692 weakly, 74 negligible, 129 atom, -ic decomposition, 44, 49 support, 525 augmented filtration, 190 averag|e, -ing, 112, 165, 691 distributional, 717 functional, 715–6 Palm measure, 716–7 strong, 691 weak, 692 avoidance function, 330, 524–5 axiom of choice, 846 backward equation, 283, 372 recursion, 705–6 tree, 706 balayage —sweeping, 776 ballot theorem, 578, 580 Banach Hahn–B thm, 832 space, 832 -Steinhaus thm, 835 926 Foundations of Modern Probability BDG —Burkholder et al., 400, 449 Bernoulli distribution, 531 sequence, 93 Bernstein–L´ evy inequality, 195 Bessel process, 293, 305, 763 Bienaym´ e process, 287 approximation, 292 asymptotics, 291 bilinear, 86 form, 802, 848 binary expansion, 93 splitting, 289–90 tree, 291 binomial process, 323 mixed, 323, 334 Birkhoff’s ergodic thm, 561, 563 birth–death process, 289 Blumenthal’s 0−1 law, 381 Borel -Cantelli lemma, 83, 92 extended, 197 isomorphic, 14 set, σ-field, 10 space, 14 localized, 15 boundary absorbing, 761 accessible, 763 entrance, 763–4 exit, 763 reflecting, 761, 763 regular, 775–6 bounded (in) Lp, 106, 197 linear functional, 832–3 martingale, 197 operator, 835 optional time, 193 set, 15 time, 193 branching Brownian motion, 291 extinction, 287 moments, 288 process, 287 recursion, 288 tree, 291 Brownian (motion), 301, 865 additive functional, 682 arcsine laws, 307 branching, 291 bridge, 301, 424, 498 criterion, 419 drift removal, 437 embedding, 490, 498 excursion, 673 functional, 426, 431, 483 H¨ older continuity, 301, 304 invariance, 424 iterated logarithm, 309 large deviations, 536 level sets, 303 manifold, 806, 821-2 martingales, 426, 490 maximum process, 306, 663 multiple integral, 430 path regularity, 301, 304 point polarity, 422 quadratic variation, 303 rate of continuity, 493 reflection, 306 scaling, inversion, 302 shifted, 435 strong Markov property, 305 time-change reduction, 420, 423, 759 transience, 422 Burkholder inequalities, 400, 449 CAF —continuous additive functional, 676 c adl` ag —rcll Cameron–Martin space, 435, 535–6, 550 Campbell measure, 710 compound, 725 reduced, 725 canonical decomposition, 404 Feller diffusion, 385 process, 381 filtration, 239 process, 239 capacity, 783, 788 alternating, 785 Carath´ eodory extension, 36 restriction, 34 Cartesian product, 11 Cauchy distribution, 160 equation, 278, 353 inequality, 28, 401, 833 in probability, 103 problem, 772–4 process, 354 sequence, 27, 104 center, -ing, 111, 173, 192, 298 Indices 927 central limit thm, 132 local, 133 chain rule conditional independence, 172 connection, 807–8 differentiation, 466, 468 second order, 482 integration, 23 change of measure, 432, 448 scale, 752, 757 time, 759 chaos representation, 316, 431 adapted, 318 conditional, 318 divergence operator, 472 Malliavin derivative, 470 processes, 317 Chapman–Kolmogorov eqn, 235, 238 characteristic|s, 151 exponent, 151 function, 125, 128 local, 359 measure, 148, 283 operator, 384 Chebyshev’s inequality, 102 Chentsov criterion, 95, 511 Christoffel symbols, 808, 819 clos|ed, -ure, -able martingale, 198 operator, 373–4, 466, 468, 838 optional times, 188 set, 830 weakly, 469 cluster, 291 invariant, 701–2 kernel, 701 net, 830 process, 702 coding representation, 614, 633 equivalent, 615, 634 commut|ing, -ative operation, 26 operators, 173, 370, 475–6, 566, 837 compact, -ification locally, 829 one-point, 378 relatively, 829 set, 829–30 vague, 142, 842 weak, 842 weak L1, 108 CT,S, 840 DR+,S, 841 comparison moment, 361 solutions, 755–6 tail, 362 compensat|ed, -or discounted, 223 optional time, 219 random measure, 218, 221 sub-martingale, 211 complement, 9 complet|e|ly, -ion, filtration, 190 Lp, 28 metric, 832 monotone —alternating, 785 in probability, 104 σ-field, 24 space, 398 complex-valued process, 310, 409 composition, 13 compound Campbell measure, 725 optional time, 239 Poisson approximation, 148, 155, 686 distribution, 147–8 limits, 150 process, 283 condenser thm, 786 condition, -al|ly, -ing covariance, 165–6 density, 723 distribution, 167–8 entropy, information, 581 expectation, 164 i.i.d., 613 independent, 170 inner, 730 intensity, 691 invariance, 170 iterated, 174 L´ evy, 620 orthogonal, 173, 837 outer, 726 probability, 167 variance, 165–6 cone condition, 775 conformal mapping, 409 connected, 829 connection, 806 Riemannian, 819 conservative semi-group, 369, 378 consistency, 846 constant velocities, 696 928 Foundations of Modern Probability continu|ous, -ity additive functional, 676 mapping, 103, 117, 829 martingale component, 453 in probability, 346, 619 right, 48 set, 115 stochastically, 346 theorem, 117 time-change, 190 contract|ion, -able, 739 array, 632–3 invariance, 611, 613 operator, 165, 168, 568, 588, 835 principle, 542 sequence, 613, 616 convergence (in/of) averages, 112–3, 132–3, 138–9 CT,S, 506, 510 DR+,S, 513 density, 133 dichotomy, 248, 285, 603 distribution, 104, 116 Feller processes, 386 Lp, 28, 107, 198 means, 107 nets, 830 point processes, 525 probability, 102 random measures, 518 sets, 524 series, 109–11 vague, 142, 517, 841 weak, 104, 143, 517, 842 convex function, -al, 664–5, 832 map, 86, 192, 812–4 set, 566–7, 574, 691, 832, 846 convolution, 26, 89 powers, 694 coordinate map, 15, 833 representation, 849 core of operator, 374, 839 cotangent, 848 countabl|e, -y, additive, 18 first, second, 829 sub-additive, 18 counting measure, 19 coupling exchangeable arrays, 635 group, 577 L´ evy processes, 356 Markov chains, 249 Palm measure, 716 shift, 576–7 Skorohod, 119 covariance, 86, 298 covariation, 398, 444, 802–3 Cox process, 323 continuity, 521–2 convergence, 688–90, 694, 696 criterion, 690, 696, 729 directing random measure, 323 existence, 328 integrability, 329 simplicity, 329 tangential, 360 uniqueness, 329 Cram´ er–Wold thm, 129 critical branching, 287, 289, 701 cumulant-generating function, 531, 547 cycle stationary, 713 cylinder set, 11 (D) —sub-martingale class, 211 Daniell–Kolmogorov thm, 178 debut, 189–90 decomposition atomic, 44 covariation, 454 Doob, 192 Doob–Meyer, 211 ergodic, 69, 575 finite-variation function, 48 Hahn, 38, 49 Jordan, 47, 49 Lebesgue, 40 martingale, 455 optional time, 210 orthogonal, 837 quasi-martingale, 458 random measure, 218 semi-martingale, 453 decoupling, 261, 341 degenerate measure, 19 random element, 88 delay, 268–9, 271 dens|e, -ity, 23, 829 criterion, 698 limit thm, 133 derivative adapted process, 471–2 Itˆ o integral, 482 WI-integral, 470 diagonal space, 46 Indices 929 dichotomy recurrence, 256, 604 regeneracy, 666 stability, 702 difference proper, 9 symmetric, 9–10 differential equation backward, 283, 372 Dol´ eans, 447 forward, 372 Langevin, 737 differentiation, 42, 304 operator, 465–6 diffuse —non-atomic distribution, 159 measure, 44 random measure, 329 diffusion, 756 approximation, 292 equation, 736, 751 Feller, 384 manifold, 817 martingale, 817 rate, 814 dimension, 834 effective, 256 Dirac measure, 19 direct, -ed set, 830 sum, 837 directing random measure, 613 directly Riemann integrable, 273 direction, -al derivative, 466 flat, 697 Dirichlet problem, 775–6 discontinuity set, 382 discounted compensator, 223 map, 429 discrete approximation, 188 time, 247 topology, 829 disintegration, 60, 168 invariant, 71, 711 iterated, 62 kernel, 61 Palm, 710–1 partial, 61 dissect|ion, -ing class, 15 ring/semi-ring, 518 system, 15 dissipative, 378, 688 dissociated, 649–51 distribution, 83 function, 84, 96 distributive laws, 10 divergence, 268 covariance, 476 factorization, 475 smooth elements, 474 DLR-equation, 883 Dol´ eans exponential, 223, 447 domain, 774 attraction, 139 operator, 838 dominated convergence, 22, 405 ergodic thm, 564 Donsker’s thm, 494, 511 Doob decomposition, 192 inequality, 195 -Meyer decomposition, 211 doubly stochastic Poisson —Cox drift component, 745, 752, 815 rate, 814 removal, 437, 752 dual, -ity disintegration, 723 ladder heights, 264 Palm/density, 723 Palm/Gibbs, 729 predictable projection, 216, 222 second, 832 space, 832 time/cycle, 713 dynamical system, 238 perturbed, 546 Dynkin’s characteristic operator, 384 formula, 383 effective dimension, 256 Egorov’s thm, 30 elementary additive functional, 676 function, 313 operations, 17 stochastic integral, 441 elliptic operator, 384–5, 481 embedd|ed, -ing Markov chain, 279 manifold, 802, 819–20 930 Foundations of Modern Probability martingale, 498 random walk, 490 space, 833 empirical distributions, 115 endpoints, 666, 766 energy, 818 entrance boundary, 763–4 entropy, 581 conditional, 581 relative, 548 equi-continuity, 127, 506 equilibrium measure, 783–4, 787–8 equivalent coding, 634 ergodic decomposition, 69, 575 diffusion, 766 map, 560 measure, 69, 560 random element, 560 sequence, 560 strongly, 595, 598 weakly, 597 ergodic theorem continuous time, 563 discrete time, 561 Feller process, 607–8 information, 581 Markov chains, 248, 285 mean, 569, 692, 838 multi-variate, 565–6 operator, 589 random matrices, 572 random measures, 691 ratio, 590, 594, 765 sub-additive, 570 escape time, 383 evaluation map, 44, 83, 516, 839 event, 82 excessive, 791–4, 796 uniformly, 679 exchangeable array, 633 finitely, 616 increments, 619 jointly, 632 partition, 656 point process, 334 process, 619–20, 622 random measure, 334 separately, 632 sequence, 611, 613, 616 excursion, 668 Brownian, 673 endpoints, 666 height, 674 law, 668 length, 668 local time, 669 Markov process, 246 measure, 668 point process, 669 space, 668 existence Brownian motion, 301 Cox process, 328 Feller process, 379 Markov process, 236 Poisson process, 328 process, 179 random sequence, 93, 178, 180–1 SDE solution, 737 point-process transform, 328 exit boundary, 763 expectat|ion, -ed value, 85 explosion, 280–1, 740, 763 exponential distribution, 278–81, 289 Dol´ eans, 223, 447 equivalence, 545 inequality, 455 martingale complex, 418 real, 435 rate, 533 super-martingale, 456 tightness, 539, 541, 549 exten|did, -sion Borel–Cantelli lemma, 197 dominated convergence, 22 filtration, 190, 359 hitting, quitting, 787 independence, 87 Itˆ o integral, 482 line, halfline, 16 measure, 24, 36 Poisson process, 332, 347 probability space, 175 set function, 36 subordinator, 355 external conditioning, 726 extinction, 287 local, 702 extreme|s, 158, 306 measure, 69, 574 point, 69, 575 factorial measure, 46, 616 factorization, 67 Indices 931 Fatou’s lemma, 22, 107 Fell topology, 524, 843 Feller diffusion, 384, 772 branching, 292 generator, 370, 376 process, 379 canonical, 381 convergence, 386 ql-continuity, 389 properties, 369, 744 semi-group, 369 conservative, 378 Feynman–Kac formula, 772 field, 10 filling functional, 592 operator, 591 filtration, 186 augmented, 190 complete, 190 extended, 190, 359 induced, 186 right-continuous, 187 de Finetti’s thm, 613 finite, -ly additive, 36 -dimensional distributions, 84, 235 exchangeable, 616 locally, 20 variation function, 47–9 process, 397 permutation, 613 s-, σ-finite, 20 first entry, 190 hit, 203 jump, 278 maximum, 264, 619 passage, 354 fixed jump, 332, 346–7, 349 flat, 697 connection, 807–8, 820 flow, 563 fluctuation, 265 form, 802, 848 forward equation, 372 Fourier transform, 132, 259 FS —Fisk–Stratonovich integral, 410, 804–5 Fubini’s thm, 25, 89, 168, 853, 859 function, -al central limit theorem, 494, 511 continuous, linear, 832 convex, 832 large deviations, 540 law of the iterated log, 550 measurable, 13 positive, 51 representation, 18, 414, 742 simple, 16–17 solution, 747 fundamental identity, 782 martingale, 224 Galton–Watson —Bienaym´ e, 287 Gaussian distribution, 132, 857, 864–5 convergence, 132–5, 137 isonormal, 300, 310 jointly, 298 Markov process, 302 process, 347, 298, 351 reduction, 428 genealogy, 290–1 generalized subordinator, 355 generat|ed, -ing cumulant, 531, 547 filtration, 186 function, 126, 288, 479 probability, 126 σ-field, 12, 16 topology, 829 generator, 368 criteria, 376 geodesic, 810–1 geometric distribution, 668 Gibbs kernel, 725–6 -Palm duality, 729 Girsanov thm, 433, 437 Glivenko–Cantelli thm, 115, 498 goodness of rate function, 539 gradient, 819, 849 graph of operator, 373, 838 optional time, 210 Green|ian domain, 778 function, 759, 779 potential, 779–80 Gronwall’s lemma, 738 group, -ing Abelian, 26, 241, 283 action, 68 Lie, 823 932 Foundations of Modern Probability measurable, 26, 64 property, 87 Haar functions, 834 measure, 64 Hahn -Banach thm, 86, 832 decomposition, 38, 49 harmonic function, 421, 594, 774, 822 measure, 776, 782 minorant, 796 sub/super, 791–2 Harris recurrent, 598, 604–5 Hausdorffspace, 50, 829 hazard function, 861 heat equation, 773 height excursion, 674 ladder, 264–5 Helly selection thm, 142 Hermite polynomials, 315, 479, 834 Hewitt–Savage 0−1 law, 91 Hilbert space, 833 domain, 469 martingales, 398 separable, 835 tensor product, 835 Hille–Yosida thm, 376 historical process, 290–1 hitting criterion, 703 function, 789 kernel, 774, 781 extended, 787 probability, 774, 788 time, 189, 757 H¨ older continuous, 95, 301, 304, 511 inequality, 26, 168 holding time, 666 homeomorphic, 829 homogeneous chaos, 316 space, 237–41 time, 238 hyper-contraction, 622 hyper-plane, 832 idempotent, 836 identity map, 836 i.i.d. —independent, equal distribution array, 148–9, 154 inaccessible boundary, 761 time, 210 increasing process, 211 natural, 211 super-martingales, 204 increments, 96 exchangeable, 619 independent, 345 independent, 87 conditional, 170 increments, 237, 300, 332–3, 347, 619 motion, 696 pairwise, 88 index of H¨ older continuity, 95 stability, 354 indicator function, 16 indistinguishable, 95 induced compensator, 222–3 connection, 810 filtration, 186, 222 Riemannian metric, 819 σ-field, 12, 16 topology, 829 infinitely divisible, 150–1, 346 convergence, 152 information, 581 conditional, 581 initial distribution, 234 invariant, 241–2 injective operator, 370 inner conditioning, 730 content, 51 continuous, 52 product, 28, 833 radius, 566, 691 regular, 52 integra|l, -ble, 21–2 approximation, 411 continuity, 404, 411 function, 21–2 increasing process, 404 Itˆ o, 403 martingale, 403, 441, 451 multiple, 313, 430 process, 397 random measure, 221 semi-martingale, 404, 442 stable, 358 Indices 933 integral representation, 431 invariant measure, 69, 575 martingale, 427 integration by parts, 406, 444, 467 intensity, 256, 322, 567 sample, 578 interior, 829 interpolation, 588 intersection, 9 intrinsic, 801, 847 drift, diffusion, 815 invarian|t, -ce disintegration, 71, 711 distribution, 245, 248, 285, 768–9 factorization, 67 function, 559, 589, 594 kernel, 70, 237, 283 measure, 26, 37, 67, 69, 284, 558, 605 permutation, 90 principle, 496 process, 698 rotational, 37 scatter, 694 set, σ-field, 68, 559, 563, 597 smoothing, 693 sub-space, 374, 568 u-invariant, 698 unitary, 299 invers|e, -ion, 302 coding representation, 642 contraction principle, 542 formulas, 711 function, 827 local time, 671 maximum process, 354 operator, 836 i.o. —infinitely often, 82 irreducible, 247, 285 isolated, 666 isometr|ic, -y, 313 isomorphic, 834 isonormal Gaussian, 300 complex, 310 martingales, 418 isotropic, 419–20, 906, 821–2 iterated clustering, 701–2 conditioning, 174 disintegrations, 723 expectation, 89 integrals, 25, 430 optional times, 239 Itˆ o correction term, 407–8 equation, 738 excursion law/thm, 668–9 formula, 407, 446, 665 local, 409 integral, 403 extended, 482 L´ evy–I representation, 346–7 J1-topology, 841 Jensen’s inequality, 86, 168 joint, -ly action, 68 exchangeable, 632 Gaussian, 298 invariant, 70 stationary, 710–1 Jordan decomposition, 47, 49 jump accessible, 214 kernel, 279 point process, 347 kernel, 55 cluster, 701 composition, 58 conditioning, 167–8 criteria, 56 density, 200 disintegration, 61 Gibbs, 725–6 hitting, 774 invariant, 70, 237 locally σ-finite, 56 maximal, 61 Palm, 710 probability, 56 product, 58 quitting, 783 rate, 279 representation, 94 transition, 234 killing, 772 Kolmogorov backward equation, 283, 372, 772 Chapman-K eqn, 235, 238 -Chentsov criterion, 95, 511 consistency/extension thm, 179 Daniell-K thm, 178 maximum inequality, 109 three-series criterion, 111 0−1 law, 90, 199 Kronecker’s lemma, 112 ladder 934 Foundations of Modern Probability distributions, 267 height, 264–5 time, 264–5 λ -system, 10 Langevin equation, 737 Laplace -Beltrami operator, 822 equation, 774 operator, 773, 822 transform, 125, 323–4 continuity, 128 uniqueness, 322 large deviation principle, 537 Brownian motion, 536 empirical distribution, 548 Rd, 534 sequences, 544 last exit, 783 return, 262 lattice distribution, 133 law arcsine, 307 excursion, 668 iterated logarithm, 309, 494, 549 large numbers,13 lcscH space, 50, 831 LDP —large deviation principle Lebesgue decomposition, 40 differentiation, 42 dominated convergence, 22 measure, 35, 37 -Stieltjes measure, 42 unit interval, 93 left-invariant, 64 Legendre–Fenchel transform, 531, 547 length, 818 level set, 303 Levi-Civita connection, 819 L´ evy arcsine laws, 307 BM characterization, 419 Borel–Cantelli extension, 197 -Itˆ o representation, 346–7 -Khinchin representation, 151 measure, 151, 283, 347 process, 516 generator, 375 Lie group, 823 representation, 346–7 0−1 law extension, 199 Lie bracket, 850 derivatives, 819, 850 group, 823 life length, 289 limit, 17 Lindeberg condition, 135 line process, 696–7 linear functional, 832 map, 21 operator, 835 SDE, 737 space, 831 Lipschitz condition, 738, 754–6 L log L -condition, 288, 565 local, -ly, localize approximation, 714, 718 averaging, 133 Borel space, 15 central limit thm, 133 characteristics, 359, 814 compact, 50 conditioning, 166 coordinates, 808–9 equi-continuous, 506 extinction, 702 finite measure, 20, 51, 56 variation, 47–9 Itˆ o formula, 409 invariant, 75, 696 martingale, 396 problem, 741 modulus of continuity, 506 operator, 477, 839 sequence, 396 sets, 15 sub-martingale, 211 symmetric, 359 uniform, 506, 839 local time additive functional, 680–1 approximation, 665, 671 excursion, 669 inverse, 671 positivity, 676 semi-martingale, 662 Lp-convergence, 28, 107, 198 inequality, 195 Lusin’s thm, 30 Malliavin derivative, 466 adapted process, 471–2 chaos representation, 470 Indices 935 indicator variable, 471 WI-integral, 470 manifold, 802 Riemannian, 818 mark|ed, -ing optional time, 223, 226–7 point process, 323, 325, 331, 335 theorem, 325 Markov chain, 247, 284 convergence, 248, 285 coupling, 249 excursions, 246 invariant distribution, 248 irreducible, 247 mean recurrence time, 251, 286 occupation time, 245 period, 246 recurrent, 245, 247 positive, null, 249, 286 strong ergodicity, 249 transient, 247 Markov inequality, 102 Markov process backward equation, 283 canonical, 239 embedded M-chain, 279 existence, 236 explosion, 281 finite-dim. distr., 235 Gaussian, 302 initial distribution, 234 invariant measure, 284 property, 234 extended, 234 strong, 240, 243, 278 rate kernel, 279 semi-group, 238 space homogeneous, 237 strong M-property, 240, 243, 278 time homogeneous, 238 transition kernel, 234 martingale, 191 absorption, 203 bounded, 197 clos|ure, -ed, 198, 202 continuous, 397–8 convergence, 197–8 criterion, 194, 809 decomposition, 455 embedding, 498 fundamental, 224–5 hitting, 203 inequalities, 195–6, 400, 449 integral, 403, 441, 451 isonormal, 418 isotropic, 419–20 local, 396 manifold, 805–7 optional sampling, 193, 202 predictable, 215, 217 problem, 741 regularization, 197, 201 representation, 427 semi-, 404, 442 sub-, super-, 192 transform, 194 truncation, 443 uniformly integrable, 198 maxim|um, -al ergodic lemma, 561 inequality, 109, 195, 242, 564, 567, 582, 589 kernel, 61 measure, 39 operator, 378, 384 principle, 376, 378 process, 306, 663 upper bound, 846 mean, 85 continuity, 38 ergodic thm, 569, 838 recurrence time, 250–1, 286 -value property, 774 measurable action, 68 decomposition, 45 map, 13 function, map, 13 group, 26, 64 limit, 121 set, space, 34 measure, 18 -determining, 20 equilibrium, 783 Haar, 64 invariant, 64 Lebesgue, 35 Lebesgue–Stieltjes, 42 preserving, 558, 626 product, 25 space, 18 supporting, 710 sweeping, 782 -valued, 523, 617, 842 Mecke equation, 721 median, 111 metri|c, -zation, 829 Minkowski’s inequality, 26, 168 936 Foundations of Modern Probability extended, 27 mixed binomial, 323, 326, 334, 720 i.i.d., 613 L´ evy, 619–20 Poisson, 323, 326, 334, 720 solutions, 743 mixing, 595 weakly, 597 moderate growth, 361 modified Campbell measure, 725 modulus of continuity, 841 Palm measure, 716 modular function, 67 modulus of continuity, 95, 506 moment, 85, 102 absolute, 85 comparison, 361 criterion, 95, 511 density, 723–4 identity, 340 monotone class, 10 convergence, 21 limits, 19 de Morgan’s laws, 10 moving average, 311, 479 multi|ple, -variate Campbell measure, 725 ergodic thm, 565–6 integral, 430 Palm measure, 725 stochastic integral, 430 time change, 423 Wiener–Itˆ o integral, 313, 315 multiplicative functional, 772 musical isomorphism, 849 natural absorption, 763 compensator, 223 embedding, 833 increasing process, 211, 216 scale, 757 nearly continuous, 30 super-additive, 65 uniform, 30 neighborhood, 829 net, 830 non-arithmetic, 270 decreasing, 21 diagonal, 46 interacting, 696 lattice, 133, 694 negative definite, 86 increments, 96 simgular, 694 topological, 522 transitive, 69 norm, 27, 831, 833 inequality, 400, 449 relation, 220 topology, 833, 836 normal Gaussian distribution, 132 space, 829 normalizing function, 68 nowhere dense, 666, 829 null array, 129–30, 148, 686 convergence, 157 limits, 156 recurrent, 249, 286, 607 set, 23–4, 43 space, 836 occupation density, 664, 779 measure, 256, 264, 268, 270, 664 time, 245 ONB —ortho-normal basis, 834 one-dimensional criteria, 129, 152, 330 sided bound, 197, 268 operator|s adjoint, 836 self-, 836 bounded, 835 clos|ure, -ed, 838 commuting, 837 contraction, 835 core, 839 domain, 838 ergodic thm, 589, 838 graph, 838 idempotent, 836 identity, 836 inverse, 836 local, 839 null space, 836 orthogonal, 837 complement, 83 conditionally, 837 projection, 837 Indices 937 range, 836 unitary, 837 optional equi-continuity, 514 evaluation, 189 projection, 382 sampling, 193, 202 skipping, 618 stopping, 194, 405 time, 186 iterated, 239 weakly, 187 orbit, 68 measure, 68 order statistics, 323 Ornstein–Uhlenbeck generator, 480 process, 303, 312, 478, 737 semi-group, 479 orthogonal, 833 complement, 833, 837 conditionally, 837 decomposition, 837 functions, spaces, 28, 837 martingales, 423 measures, 24 optional times, 226 projection, 29, 837 ortho-normal, 834 basis, ONB, 834 outer conditioning, 726, 730 measure, 33 regular, 52 p -function, 666 -thinning, 323 paintbox representation, 656 Palm approximation, 714, 718 averaging, 716–7 conditioning, 730 -density duality, 723 disintegration, 710–1 distribution, 710 -Gibbs duality, 729 invariant, 711 inversion, 711 iteration, 723 kernel, 710 measure, 710–1 modified, 716 recursion, 705 reduced, 725 tree, 706 uniqueness, 711 Papangelou kernel, 727–8 invariance, 729 parallelogram identity, 28 parameter dependence, 413 partial disintegration, 61 order, 829 particle system, 694 partition symmetric, 656 of unity, 51 path regularity, 304 -wise uniqueness, 737, 754 perfect additive functional, 676 set, 666 period, 246 permutation, 90 invariant, 90 persistence, 702 perturbed dynamical system, 546 π-system, 10 Picard iteration, 738 point mass, atom, 19 measure, 45 polarity, 422 point process, 322, 686 approximation, 714, 718 marked, 323 simple, 322 Poisson approximation, 687 binomial property, 326 compound, 147, 283 approximation, 148, 155 limits, 150 convergence, 130, 338, 686–7 criterion, 281, 331–4, 721 distribution, 129 excursion process, 669 existence, 328 extended, 332, 337 holding times, 281–2 integral, 339 double, 340 mapping, marking, 325 mixed, 326, 334 moments, 340 process, 281, 323 pseudo-, 282 938 Foundations of Modern Probability reduction, 335–6, 338, 428 stationary, 335 polar set, 422, 782 polarization, 440 Polish space, 14, 829 polynomial chaos, 316 portmanteau thm, 116 positive conditional expectation, 165 contraction operator, 368 functional, 51 maximum principle, 376, 378, 384 operator, 588 recurrent, 249, 286, 607, 769 terms, 108, 110, 134, 155 variation, 47 potential, 370, 600 additive functional, 676, 796 bounded, continuous, 601 Green, 779–80 operator, 370 Revuz measure, 678 term, 772 predict|ion, -able covariation, 440 invariance, 626 mapping, 227, 335 martingale, 215, 217 process, 209, 221 projection, 216 quadratic variation, 440, 500 random measure, 218, 221–2 restriction, 217 sampling, 617 sequence, 192, 616 set, σ-field, 209 step process, 194, 397 time, 208–9 pre-separating, 845 preservation (of) measure, 615, 626 semi-martingales, 434, 449 stochastic integrals, 434 probability generating function, 126 kernel, 56 measure, space, 82 process binomial, 323 contractable, 619–20 Cox, 323 exchangeable, 619–20 point, 322 Poisson, 323 regenerative, 666 renewal, 268 rotatable, 625 product Cartesian, 11 kernel, 58 measure, 25, 88 moment, 226 operator, 836 σ-field, 11 topology, 103, 829 progressive process, 188, 413 set, σ-field, 188 Prohorov’s thm, 507 project|ive, -ion dual, 216, 222 group, 68 limit, 178, 845 manifold, 805–6, 817 measures, 62 optional, 382 orthogonal, 29, 837 predictable, 216, 222 sequence, 845 set, 827 proper action, 68 difference, 9 function, 96 pseudo-Poisson, 282, 368 pull-back, 802, 849 pull-out property, 165 pure|ly atomic, 48–9 discontinuous, 452 jump-type, 277 push-forward, 802, 849 quadratic variation, 303, 398, 444, 500, 818 quasi-left (ql) continuous Feller process, 389 filtration, 220 random measure, 218, 335 quasi-martingale, 458 queuing, 882–3 quitting kernel, 783 extended, 787 Radon measure, 51 -Nikodym thm, 40 random array, 631 Indices 939 element, 83 matrix, 572 measure, 83, 221, 322, 516, 686 partition, 656 process, 83 sequence, 83, 103, 118 series, 108–11 set, 83, 691, 788–9 time, 186 variable, vector, 83 walk, 91, 255, 490 randomization, 117, 177, 323 uniform, 323 range, 836 rate drift, diffusion, 735, 814 function, 279, 532 good, 539 semi-continuous, 538 kernel, 279 process, 759, 821 ratio ergodic thm, 590, 594, 765 Ray–Knight theorem, 675 rcll —right-continuous, left limits, 201, 840 recurren|t, -ce class, 247 dichotomy, 256, 604 Harris, 598 null, positive, 249, 286, 607, 769 process, 256, 285, 766 random walk, 256–61 state, 245 time, 251, 286 recursion, 314 reduced Campbell measure, 725 Palm measure, 725 reflec|ting, -tion, -sive boundary, 761, 763 fast, slow, 761 Brownian motion, 306 principle, 263 space, 833 regenerative phenomenon, 666 process, 666 set, 666 regular|ity, -ization boundary, 775 conditional distribution, 167–8 diffusion, 757 domain, 775–6 Feller process, 379, 598 local time, 663 martingale, sub-, 197, 201 measure, outer measure, 29 point process, 728 rate function, 538 stochastic flow, 738 relative, -ly compact, 507, 829 entropy, 547–8 topology, 829 renewal equation, 273 measure, 268 process, 268 delayed, 268 stationary, 268–9 two-sided, 270 theorem, 270–1 replication, 94 representation coding, 633 infinitely divisible, 151 kernel, 94 limit, 120 resolvent, 370, 380 equation, 601 restriction of measure, 20 optional time, 210 outer measure, 34 Revuz measure, 677 potential, 678, 796 Riemann, -ian connection, 819 integrable, 273 manifold, metric, 818, 849 quadratic variation, 818 Riesz decomposition, 796 -Fischer thm, 834 representation, 51, 834 right-continuous filtration, 187 function, 48–50 process, 201 invariant, 64, 823 rigid motion, 37 ring, 518 rotat|ion, -able array, 652 invariant, 37 sequence, process, 625 totally, 654 940 Foundations of Modern Probability s-finite, 66 sampl|e, -ing intensity, 578 process, 624 sequence, 612 without replacement, 616, 624 scal|e, -ing, 302 Brownian, 302 function, 757 limit, 139 operator, 673 SDE, 746 scatter dissipative, 688 invariant, 694 Schwarz’ inequality, 28 SDE —stochastic differential eqn, 735 comparison, 755–6 existence, 747 explosion, 740 functional solution, 747 linear equation, 737 pathwise uniqueness, 737, 747, 754 strong solution, 737–738 transformation, 745–6, 752 uniqueness in law, 737, 747, 752 weak solution, 737, 741–2, 752 second countable, 50 sections, 25, 827–8 selection, 827–8 self -adjoint, 165, 836 -similar, 354 semi-continuous, 791 group, 368, 563 martingale, 359, 404, 442, 802 criterion, 459 decomposition 453 integral, 442, 804 local time, 662 non-random, 459 special, 442 ring, 518 separable, 829, 835 separat|ely, -ing class, 87, 845 exchangeable, 633 points, 830 sequentially compact, 830 series (of) measures, 19 random variables, 108–11 shell measurable, 651 σ-field, 636 shift, -ed, 68 coupling, 716 measure, 26 operator, 239, 381, 558, 690 σ-field, 10 induced, 10 product, 11 σ-finite, 20 signed measure, 38, 49 simple function, 16–7 point measure, 45 point process, 322 random walk, 262 singular, -ity function, 49–50 measures, 24, 38, 49, 578 point, 421 set, 753 skew-factorization, 71 product, 422 Skorohod convergence, 512–3, 840 coupling, 119 extended, 177 embedding, 490, 498 integral, 472 topology, 512, 622, 841 slow reflection, 761 variation, 139–40 smooth, -ing density, 484 function, 466 operation, 693 random measure, 693 sojourn, 619 spa|ce, -ial approximation, 665 Borel, 14 filling, 567 homogeneous, 237, 241 measurable, 34 Polish, 14 probability, 82 product, 11 regularity, 663 space-time function, 594 invariant, 594–5 spanning criterion, 697 Indices 941 special semi-martingale, 442 spectral measure, 311 representation, 311 speed measure, 759 spherical symmetry, 299 spreadable, 613 stab|le, -ility cluster kernel, 702–4, 706 index, 354 integral, 358 process, 354 standard differentiation basis, 718 extension, 359, 420 stationar|y, -ity cycle, 713 increments, 619 joint, 710–1 process, 241–2, 558, 563 random measure, 710 renewal process, 268 strong, 616–7 u -, 698 statistical mechanics, 876, 883 Stieltjes integral, 404 measure, 42 stochastic continuity, 619 differential equation, 735 geometry, 801 dynamical system, 238 equation, 177 flow, 738 geometry, 696–8 integrator, 459 process, 83 Stone–Weierstrass thm, 831 stopping time —optional time, 186 Stratonovich integral, 410, 804–5 strict comparison, 756 past, 208 strong, -ly continuous, 369, 372 convergence, 836 equivalence, 689 ergodic, 249, 595, 598, 768 existence, 737 homogeneous, 243 law of large numbers, 113 Markov, 240, 278, 305, 353, 381, 744 orthogonal, 423 solution, 738 stationary, 616 sub-additive, 570 ergodic thm, 570 sequence, 570 set function, 18 critical, 287 harmonic, 791 manifold, 816–7 martingale, 192 sequence criterion, 102 invariance, 613 set, space, 13, 83, 116, 831 subordinator, 346–7 extended, 355 substitution, 23, 407, 446 sum of measures, 19 super-critical, 287 harmonic, 791–2 martingale, 192, 380 position, 686–7 support, -ing additive functional, 680 affine function, 86 function, 662 hyper-plane, 832 local time, 662 measure, 598, 710 set, 20 survival, 291 sweeping kernel, 776 problem, 782 symmetr|y, -ic difference, 9 functions, 641 point process, 334 random measure, 334 variable, 91 set, 90 spherical, 299 stable, 358 terms, 110, 134, 155 symmetrization, 111, 260 system dissecting, 15 π -, λ -, 10 tail 942 Foundations of Modern Probability comparison, 362 probability, 85, 102, 126 rate, 530 σ -field, 90, 199, 595 Tanaka’s formula, 662 tangent, -ial comparison, 361, 364 existence, 359–60 processes, 359 vector, space, 847 Taylor expansion, 132, 136 tensor product, 835, 848 terminal time, 380 tetrahedral, 430, 632 thinning, 291, 323 convergence, 521, 690 Cox criterion, 690 uniqueness, 329 three-seies criterion, 111 tightness (in) exponential, 539 CT,S, 509, 511 DR+,S, 512, 514, 523 MS, 517 Rd, 105–6, 127, 143 relative compactness, 143, 507 time accessible, 210 homogeneous, 238 optional, 186, 219 weakly, 187 predictable, 208–10, 216 random, 186 reversal, 786, 793 scales, 500 totally inaccessible, 210, 213 time change, 190 diffusion, 759 filtration, 190 integral, 412 martingale, 193, 420, 423 point process, 336, 338 stable integral, 358 topolog|y, -ical, 829 group, 64 locally uniform, 839 norm, 833 Skorohod, 841 uniform, 839 vague, 842 weak, 842 total, -ly inaccessible, 210, 213 variation, 47 tower property, 165, 838 transfer random element, 176 regularity, 96 solution, 747 transform, -ation Cox, 323 drift, 745 Laplace, 125, 323 point process, 323 transient process, 256, 285, 422 state, 245 uniformly, 604 transition density, 777–8 function, 284 kernel, 234, 279 matrix, 247 operator, 367, 588 semi-group, 238 transitive, 68 translation, 26 tree branching, 291 Palm, 706 trivial σ-field, 88, 90 truncation of martingale, 443 two-sided extension, 559 Tychonov’s thm, 66, 829 ult. —ultimately, 82 uncorrelated, 86 uniform, -ly approximation, 128 convergence, 242, 620–1 distribution, 93 excessive, 679 integrable, 106, 166, 198, 201, 401, 436 laws, 308 locally, 839 randomization, 323 transient, 604 uniqueness (of) additive functional, 679 characteristic function, 128 Cox, thinning, 329 diffuse random measure, 330 distribution function, 84 generator, 371 Laplace transform, 128, 322 in law, 737, 752, 773–4 measure, 20 Palm measure, 711 Indices 943 pathwise, 737 point process, 330 unitary, 837–8 invariance, 299 universal, -ly adapted, 746 completion, 746, 827 measurable, 189–90, 827, 843 upcrossing, 196 urn sequence, 616 vague, -ly compact, 142, 842 convergent, 517, 520, 841 tight, 517 topology, 142, 517, 841–2 variance, 86 criterion, 109, 113, 135 variation positive, 47 total, 47 vector field, 848 space, 831 velocity, 696 version of process, 94 Voronoi cell, 711 Wald equations, 261 identity, 436 weak, -ly compact, 842 continuous, 742 convergent, 517, 520, 842 ergodic, 595 existence, 737, 742, 752 L1-compact, 108 large deviations, 539 law of large numbers, 138 mixing, 597 optional, 187 solution, 737, 741–2, 752 tight, 517 topology, 517, 833, 842 Weierstrass approximation, 128, 408, 446 Stone-W thm, 831 weight function, 692 well posed, 741 WI —Wiener–Itˆ o, 313 Wiener chaos, 316–8 -Hopf factorization, 266 integral, 310 multivariate ergodic thm, 566 process —Brownian motion, 301 Wiener–Itˆ o integral, 313, 430 derivative, 470 Yosida approximation, 372, 387 Yule process, 289–90 tree, 291 zero–infinity law, 690–1 zero–one law absorption, 243 Blumenthal, 381 Hewitt–Savage, 91 Kolmogorov, 90 ZF —Zermelo–Fraenkel, 846 Zorn’s lemma, 846 Symbols A, 370, 676, 741 A∗, 836 An, Ah t , 589, 600 Aλ, 372 A × B, 11 Ac, A \ B, AΔB, 9–10 Ao, ¯ A, ∂A, 829 1A, 1Aμ, 16, 20 A, Aμ, 24, 82 A ⊗B, 11 (a, b), 741 a(Xt), 817 α, αj, 279, 618 α ⊗β, 802, 848 ⟨α, X⟩, ⟨α, u⟩, 748, 804 Bε x, 14 B, BS, 10 BX, BX,Y , 835 b[X], 803 ˆ CS, ˆ C, 104, 378 C0, C∞ 0 , 369, 375 Ck, C1,2, 407, 773 C+ K, 142 CT,S, 839 CD K, 783 Cov(ξ, η), Cov(ξ; A), 86, 157 CovF(·) = Cov(·|F), 166 c, 279 D, Dh, 774, 668 944 Foundations of Modern Probability (D), 211 D[0,1], DR+,S, 622, 840–1 Dp, D1,p, 466–8 D∗, D∗, 472 Δ, 67, 192, 211, 375, 378, 490 Δn, 430 ΔU, ΔU1,...,Un, 785 ∇, ∇n, 806–8, 375 δx, 19 ∂i, ∂2 ij, 407 d =, d →, 84, 104 E, 85, 322 ¯ E, 677 Ex, Eμ, 239 E[ξ; A], 85 E[ξ|F] = EFξ, 164 E(·∥·), ¯ E(·∥·), 710, 716 E, En, 313, 402 E(X), 435, 447 F, ˆ Fn, 84, 96–7, 115 F±, Δb aF, ∥F∥b a, 47 Fa, Fs, F ′, 42–3 Fc, Fd, 48 F ∗μ, 273 Fk, Φk, 697 F, 186, 524 Fμ, ¯ F, 24, 190, 827 F+, 187 Fτ, Fτ−, 186, 208 F∞, F−∞, 199 FD, Fr D, 781 F ⊗G, 11 F ∨G, n Fn, 87 F⊥ ⊥G, F⊥ ⊥GH, 87, 170 f −1, f±, 12, 22 ˆ f = grad f, 849 f · μ, f · A, 23, 676 μf, μ ◦f −1, 20–1 f ◦g, f ∗g, 14, 26 f ⊗g, 312 f ⊗d, ˆ f ⊗d, 259 ⟨f, g⟩, f ⊥g, 28, 833 d f, ⟨d f, dg⟩, 802, 848–9 U ≺f ≺V , 51 f →, fd →, 843, 506 ϕs, ϕB, 68, 524 ϕ ◦u, ϕ∗α, ϕ∗2b, 802, 849 G, GD, 725, 779 G, 524, 843 ga,b, gD, 759, 779 Γ, Γk ij, 725, 808 γD K, 783 ⊥G, ⊥ ⊥G, 170, 173 H, 833 H1, H∞, 535, 546 H1 ⊗H2, H⊗n, 312, 835 HD K, HD,D′ K , 781, 787 ha,b, 757 H(·), H(·|·), 548, 581 I, I(B), |I|, 35, 368, 537–8, 836 Inf = ζnf, 312 I(ξ), I(ξ|F), 581 ˆ I+, Iξ, 322, 560, 691 ι, 68 Jnξ, 317, 470 K, KT , 524, 839, 843 KD, Kr D, 781 κy, κh, 245, 668 L, Lx t , 480, 662, 669, 681 LD K, LD,D′ K , 783, 787 Lp, 26 L(X), ˆ L(X), 403–4, 412, 451 L2(ζ), L2(M), 316, 441 L, Lu, 480, 850 L(·), L(·|·), L(·∥·), 83 l(u), l2, 668, 834 Λ, Λ∗, 531, 547 λ, 34–5 M, Mf, 564, 589 [M], [M, N], 398, 500 M q, M a, 455 M τ, M f, 194, 383, 741 M ∗, M ∗ t , 195 ⟨M⟩, ⟨M, N⟩, 440, 500 M(λ, ϕ), 720 M, M0, 451 M2, M2 0, M2 loc, 398, 440 MS, 44, 322, 841 Mc S, ˆ MS, 44, 547, 741 m =, 449, 807 μ∗, ∥μ∥, 45, 248 μ∗n, μ(n), 46, 494 μt, μs,t, 234, 238 μd, μa, 48 μ12, μ23|1, μ3|12, μ3|2|1, 62 μD K, 783 μf, μ ◦f −1, 20–1 f · μ, 1Bμ, μf, 20, 23, 664 μ ∗ν, 26 μν, μ ⊗ν, 25, 58 Indices 945 μ ⊥ν, μ ≪ν, μ ∼ν, 24, 40, 435 μ ∨ν, μ ∧ν, 39 N, ˆ Nd, ˜ Nd, ˆ N(d), 11, 632 NA, 836 N(m, σ2), 132 NS, N ∗ S, ˆ NS, 45–6 Nμ, 24 ν, 151, 346, 668, 701, 759 ν±, νd, νa, 38, 48 νt, νa,b, 234, 491 νA, 677 Ω, ˆ Ω, ω, 82, 175 P, ¯ P, 82, 793 P ≪Q, P ∼Q, 432, 435 Px, Pν, PT , 239, 379, 741, 806 P F, PF, (PF)G, 167, 174 P(·|·), P(·∥·), 167, 710 P(·| · ∥·), P(·∥· |·), 722 P ◦ξ−1, 83 P(A|F) = P FA, 167 Pn, 46, 616 pa,b, 757 pn ij, pt ij, 247, 284 pt, pD t , 777 Lp, ∥· ∥p, 26 P →, uP − →, 102, 607, 522 π, πs, πM, 68, 697, 837 πB, πf, πt, 44, 142, 839 ψ, 126, 340 Q, Q+, 142, 191 Qx, Qξ, 603, 729 R, R+, ¯ R, ¯ R+, Rd, 10, 16, 151 Rλ, RA, 370, 836 rij, r(B), 247, 566 ˆ S, Sn, S(n), 378, 46 Sn, Sr, 564, 589, 593, 673 (S, S, ˆ S), 10, 15 S, SH, 466, 474, 802 ˆ Sμ, ˆ Sh, ˆ Sξ, ˆ Sϕ, 115, 518, 524, 845 S/T , 13 (Σ), 728 σn, σ− n , 264 σ(·), (σ, b), 10, 16, 736 s →, sd − →, 512, 840–1 supp μ, 20 T, T ∗, 367, 568 ¯ T, ∥T∥, 186, 835 Tt, ˆ Tt, T λ t , 368, 372, 378 Tx, T∗x, 848 TS, T ∗ S, T ∗2 S , 802, 848 tr(·), 476, 821 τA, τB, 189, 210 τa, τa,b, 757 [τ], 210 θ, θn, θt, 192, 611, 381 ˜ θ, θr, ˜ θr, 64, 239, 588, 710 U, Uh, Ua, 600, 604 U α, UA, U α A, 676, 678 U(0, 1), U(0, 2π), 93, 307 ⟨U, V ⟩, ⟨u|v⟩, 423, 818 u →, ul − →, uld − →, uP − →, 522 ud ∼, u ≈, 687, 718 V · X, 194, 403, 441–2, 451 Var(·), Var(·; A), 86, 111 VarF(·) = Var(·|F), 165–6 v →, vd − →, vw − →, 142, 518, 841 vd ∼, 686 w, wf,, 95, 493, 506, 840 ˜ w(f, t, h), 512, 841 w →, wd − →, 520, 842 ˆ X, 211, 805–6, 817 Xc, Xd, 453 Xτ, 194 X∗, X∗ t , X∗∗, 195, 832 X ◦dY , 410 X ◦V −1, 626 [X], [X, Y ], b[X], 406, 444, 803 ⟨α, X⟩, 804 BX, BX,Y , 836 ∥x∥, ⟨x, y⟩, x ⊥y, 831, 833–4 Ξ, 666 ξ, ∥ξ∥p, 107, 669 ˆ ξ, ξt, ˆ ξt, 669, 221–2 ξc, ξq, ξa, 218 ξ2, ξη, 339 ¯ ξ, ¯ ξn, 530, 578, 580, 691 ∥ξ∥1,p, ⟨ξ, η⟩m,2, 469 Dξ, 466 χn, 702 Z, Z+, ¯ Z+, 16, 44, 559 ζ, ζn, 223, 300, 310–3, 380 ζD, 408, 775 ∅, 10 1A, 1{·}, 16, 82 2S, 2∞, 10, 15 946 Foundations of Modern Probability ∼, ≈, 718 < ⌢, ≍, ∼ =, 95, 400, 833 ⊥, ⊥ ⊥, 833, 87 ⊥G, ⊥ ⊥G, 170, 173 ⊕, ⊖, 316 ⊗, 11, 25, 58 ∨, ∧, 39 ∥· ∥, ∥· ∥p, 26, 248, 369 [[, [, (, 766 |
17246 | https://www.youtube.com/watch?v=g0DYyBkO7c4 | How to use special right triangles to find the value for cosine of 60 degrees
Brian McLogan
1600000 subscribers
140 likes
Description
17050 views
Posted: 24 Mar 2014
👉 Learn how to evaluate trigonometric functions using the special right triangles. A right triangle is a triangle with 90 degrees as one of its angles. A special right triangle is a right triangle with the angles 30, 60, 90 degrees or 45, 45, 90 degrees.
To evaluate the trigonometric function of special right triangles, we first note the ratio of the sides of a special triangle as 1, sqrt(3), 2 respectively for the sides opposite the 30, 60, 90 degrees special triangle and 1, 1, sqrt(2) respectively for the sides opposite the 45, 45, 90 degrees special triangle.
With the above knowledge, we can then apply the SOHCAHTOA principle for solving trigonometric functions of right triangles to evaluate the given trigonometric function.
👏SUBSCRIBE to my channel here:
❤️Support my channel by becoming a member:
🙋♂️Have questions? Ask here:
🎉Follow the Community:
Organized Videos:
✅ Evaluate the Trigonometric Functions With Triangles
✅ Evaluate Trig Functions Using Special Right Triangles
✅ Evaluate the Six Trig Functions Given Right Triangle
✅ Evaluate the Six Trig Functions Given Equations
✅ Evaluate Trig Functions from a point | Learn About
✅ Evaluate the six Trig Functions given a point
✅ Determine the Quadrant Given Constraints
✅ Evaluate the trigonometric function with constraint
🗂️ Organized playlists by classes here:
🌐 My Website -
🎯Survive Math Class Checklist: Ten Steps to a Better Year:
Connect with me:
⚡️Facebook -
⚡️Instagram -
⚡️Twitter -
⚡️Linkedin -
👨🏫 Current Courses on Udemy:
👨👩👧👧 About Me: I make short, to-the-point online math tutorials. I struggled with math growing up and have been able to use those experiences to help students improve in math through practical applications and tips. Find more here:
trigonometry #brianmclogan
7 comments
Transcript:
now to evaluate cosine of 60 guys there's two really really easy ways um or a really one really easy way to do this and the other way that i want to explain because they wanted you to find the decimal and the fraction for this guys i'm not going to exp i i really i'm not too concerned i just want to make sure you guys can be able to plug them into your calculator but i want to show you at least what they're talking about as far as finding the exact um fractional fractional uh value so here we have a 60 degrees they said use a special right triangle we know that on a 60 30 60 90 triangle hopefully you guys remember if this was x then we called this 2x and this was does anybody remember x times what thank you that was the relation that was the ratios of your special right triangle right now if i want to evaluate for the cosine of 60 degrees remember the cosine of any angle equals the adjacent over the hypotenuse right but i don't want to be putting x over 2x to be able to figure that out well actually even though i could um let's just go and plug in some numbers is it okay if i let x equal one would that work could i let x equal one sure so what i'm going to do is i'm going to look at a triangle it doesn't matter what values you have for that um actually you know what let's just let's just leave it as is so if i use my value here i'll say now the cosine of 60 degrees is equal to the adjacent side which is x over my hypotenuse which is 2x now you guys can see that the x's divide out and i'm just left with one-half so the cosine of 60 degrees is equal to one-half if you take your calculator and type in the cosine of 60 degrees you will get 0.5 which is one-half okay so all i want you guys to be able to do is take your 30-60-90 triangle and go ahead and find the right |
17247 | https://pmc.ncbi.nlm.nih.gov/articles/PMC4648774/ | Osteoradionecrosis of the Jaws: Clinico-Therapeutic Management: A Literature Review and Update - PMC
Skip to main content
An official website of the United States government
Here's how you know
Here's how you know
Official websites use .gov
A .gov website belongs to an official government organization in the United States.
Secure .gov websites use HTTPS
A lock ( ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites.
Search
Log in
Dashboard
Publications
Account settings
Log out
Search… Search NCBI
Primary site navigation
Search
Logged in as:
Dashboard
Publications
Account settings
Log in
Search PMC Full-Text Archive
Search in PMC
Journal List
User Guide
View on publisher site
Download PDF
Add to Collections
Cite
Permalink PERMALINK
Copy
As a library, NLM provides access to scientific literature. Inclusion in an NLM database does not imply endorsement of, or agreement with, the contents by NLM or the National Institutes of Health.
Learn more: PMC Disclaimer | PMC Copyright Notice
J Maxillofac Oral Surg
. 2015 Mar 10;14(4):891–901. doi: 10.1007/s12663-015-0762-9
Search in PMC
Search in PubMed
View in NLM Catalog
Add to search
Osteoradionecrosis of the Jaws: Clinico-Therapeutic Management: A Literature Review and Update
Koteswara Rao Nadella
Koteswara Rao Nadella
1 Department of Oral and Maxillofacial Surgery, Drs. Sudha & Nageswara Rao, Siddhartha Institute of Dental Sciences, Chinnaoutpalli, Gannavaram, Vijayawada, 521286 India
Find articles by Koteswara Rao Nadella
1, Rama Mohan Kodali
Rama Mohan Kodali
1 Department of Oral and Maxillofacial Surgery, Drs. Sudha & Nageswara Rao, Siddhartha Institute of Dental Sciences, Chinnaoutpalli, Gannavaram, Vijayawada, 521286 India
Find articles by Rama Mohan Kodali
1, Leela Krishna Guttikonda
Leela Krishna Guttikonda
1 Department of Oral and Maxillofacial Surgery, Drs. Sudha & Nageswara Rao, Siddhartha Institute of Dental Sciences, Chinnaoutpalli, Gannavaram, Vijayawada, 521286 India
Find articles by Leela Krishna Guttikonda
1, Ashok Jonnalagadda
Ashok Jonnalagadda
1 Department of Oral and Maxillofacial Surgery, Drs. Sudha & Nageswara Rao, Siddhartha Institute of Dental Sciences, Chinnaoutpalli, Gannavaram, Vijayawada, 521286 India
Find articles by Ashok Jonnalagadda
1,✉
Author information
Article notes
Copyright and License information
1 Department of Oral and Maxillofacial Surgery, Drs. Sudha & Nageswara Rao, Siddhartha Institute of Dental Sciences, Chinnaoutpalli, Gannavaram, Vijayawada, 521286 India
✉
Corresponding author.
Received 2014 Jun 13; Accepted 2015 Feb 16; Issue date 2015 Dec.
© The Association of Oral and Maxillofacial Surgeons of India 2015
PMC Copyright notice
PMCID: PMC4648774 PMID: 26604460
Abstract
Osteoradionecrosis is one of the most serious oral complications of head and neck cancer treatment. It is a severe delayed radiation-induced injury, characterized by bone tissue necrosis and failure to heal for at least 3 months. In most cases osteoradionecrosis gradually progresses, becoming more extensive and painful that leads to infection and pathological fracture. The present paper provides a literature review and update on the risk factors underlying osteoradionecrosis, its clinical and diagnostic particulars, prevention and most widely accepted treatment options including the latest treatment modalities. Lastly, a new early management protocol is proposed based on the current clinical criteria relating to osteonecrosis secondary to treatment with bisphosphonates, together with the adoption of new therapies supported by increased levels of evidence.
Keywords: Osteoradionecrosis, Etiology, Pathophysiology, Risk factors, Treatment options, Bisphosphonate osteonecrosis
Introduction
In 1922, Regaud published the first report about osteoradionecrosis (ORN) of the jaws after radiotherapy. Since then several theories have been propounded to explain its cause including the release of histamine, the theory of radiation, trauma and infection and until recently, the most widely accepted theory of hypoxia, hypovascularity and hypocellularity. There is a general consensus, however, about the clinical presentations of ORN, which are pain, drainage and fistulation of the mucosa or skin that is related to exposed bone in an area that has been irradiated . Once ORN is recognised, it is irreversible and extremely difficult to treat. This paper describes the recommendations and current theories about its aetiology, pathogenesis and treatment.
Definition
The most widely used definition of ORN that affects the jaws is based on clinical presentation and observation: irradiated bone becomes devitalized and exposed through the overlying skin or mucosa without healing for 3 months, without recurrence of the tumour .
Classification of Osteoradionecrosis
Together with the various evolving definitions of ORN, several classifications have been suggested. Some have attempted to derive a classification from Marx’s treatment protocol, but this is not universally applicable as it uses the clinical response to HBO and surgical treatments that will not be used in all cases. The classification by Epstein et al. also requires knowledge of the clinical course, distinguishing those actively “progressing” from more chronic “persistent” cases. Instead, a simple, memorable, and immediate classification of mandibular ORN by Notani et al. which does not rely on any knowledge of clinical progress or response to treatment, is given in Table1.
Table 1.
The Notani classification, is quickly applicable to all cases of mandibular osteoradionecrosis (ORN) after clinical examination and orthopantogram
| Notani class | Clinical features |
:--- |
| I | ORN confined to dentoalveolar bone |
| II | ORN limited to dentoalveolar bone or mandible above the inferior dental canal, or both |
| III | ORN involving the mandible below the inferior dental canal, or pathological fracture, or skin fistula |
Open in a new tab
The classification by Epstein et al., requires knowledge of the clinical course, distinguishing those actively “progressing” from more chronic “persistent” cases, given in Table2.
Table 2.
Epstein et al. , classification of osteoradionecrosis
Type I Resolved, healed
(A) No pathologic fracture
(B) Pathological fracture
Type II Chronic persistent (nonprogressive)
(A) No pathologic fracture
(B) Pathological fracture
Type III Active progressive
(A) No pathologic fracture
(B) Pathological fracture
Open in a new tab
A more recent classification given by Lyons et al., which is based on the extent of the condition and its management is given in Table3.
Table 3.
Lyons et al. , classification of osteoradionecrosis
| Stage | Description |
:--- |
| 1 | <2.5 cm length of bone affected (damaged or exposed); asymptomatic. Medical treatment only |
| 2 | >2.5 cm length of bone; asymptomatic, including pathological fracture or involvement of inferior dental nerve or both. Medical treatment only unless there is dental sepsis or obviously loose, necrotic bone |
| 3 | >2.5 cm length of bone; symptomatic, but with no other features despite medical treatment. Consider debridement of loose or necrotic bone, and local pedicled flap |
| 4 | 2.5 cm length of bone; pathological fracture, involvement of inferior dental nerve, or orocutaneous fistula, or a combination. Reconstruction with free flap if patient’s overall condition allows |
Open in a new tab
Epidemiology and Risk Factors for the Development of Osteoradionecrosis
Analysis of epidemiological studies of ORN does not provide accurate data about incidence and prevalence of ORN in the jaws because of the lack of agreement about its definition, inconsistencies in the length of follow-up between studies and limited data from prospective studies. ORN affects the mandible more often than the maxilla or any other bones of the head and neck . Its incidence in the mandible is between 2 and 22% of cases and it most often affects the body . It is rare after radiation of less than 60 Gy, but more common when brachytherapy is used (the mandible must be in the area of treatment to be at risk). However, the incidence of ORN is thought to be less common after hyperfractionated radiotherapy at 72–80 Gy, or moderately accelerated fractionated radiotherapy together with a boost of 64–72 Gy. Recent reports have suggested that when chemotherapy is added to radiotherapy the incidence of ORN may be increased whereas the use of intensity-modulated radiotherapy may reduce it .
Factors that Affect the Development of ORN
Size and site of the tumour, dose of radiation and type of mandibular resection, injury, or dental extractions, infection, immune deficiencies and malnutrition. Many patients with oral cancer have other serious diseases and have often had a long history of alcohol and tobacco misuse . These, combined with poor nutrition and unsatisfactory oral hygiene, place such patients at high risk of developing ORN. Various factors associated with the development of ORN is given in Table4.
Table 4.
Risk factors associated with the development of ORN
Primary site of tumor
Posterior mandible is more commonly affected by ORN because of its compact and dense nature
Proximity of tumor to bone
Extent of mandible included in primary radiation field
State of dentition—odontogenic and periodontal disease
Poor oral hygiene
Radiation dose>60 Gy
Use of brachytherapy
Nutritional status
Concomitant chemo-radiation
Ill-fitting tissue borne prosthesis resulting in chronic trauma
Acute trauma from surgical procedures to the jaw
Advanced stage tumors
Open in a new tab
Dental disease and dentoalveolar surgery, in particular dental extractions after radiotherapy, are well-established predisposing factors to ORN; the documented incidence of ORN after extractions is about 5%. Its incidence is three times higher in dentate than in edentulous patients, mainly as a result of injury from extractions and infection from periodontal disease . The risk of developing ORN after extractions is higher in posterior mandibular teeth with roots that lie below the mylohyoid line and when an atraumatic extraction was not possible .
Etiopathogenesis
Radiotherapy appears to cause ORN because it affects the small blood vessels of bone, inducing inflammation (endarteritis), which favors the generation of small thrombi that obliterate the vascular lumen and thus interrupt tissue perfusion (Fig.1). Likewise, radiation therapy produces an increase in free radicals and alters collagen synthesis. The bone loses its normal cellularity and undergoes fibrosis-atrophy with impairment of its repair and remodeling capacity. Under such conditions even minimal external trauma causes ulceration, facilitating contamination and infection and thus favoring bone necrosis. According to Marx, progressive hypoxia, hypovascularization and hypocellularity are observed in the affected bone (Table5).
Fig.1.
Open in a new tab
Incidence of ORN among various sites of maxilla and mandible
Table 5.
Management strategies in bisphosphonate osteonecrosis of the jaw (BONJ)
| Strategy | Treatment |
:--- |
| Conservative management | Mouth wash and analgesia |
| Non surgical management | Antibiotics and antifungals |
| Surgical management |
| Local intervention | No surgical flap |
| Surgical flap |
| Radical intervention | Marginal resection |
| Segmental resection |
| Adjunctive measures | Hyperbaric oxygen |
| Parathyroid hormone |
| Platelet rich plasma |
| Laser |
| Ozone |
Open in a new tab
Theories of the Pathophysiology of Osteoradionecrosis
Watson and Scarborough reported three crucial factors in the development of ORN based purely on clinical observations; exposure to radiotherapy above a critical dose; local injury; and infection . Early experimental models of the pathophysiology of ORN showed evidence of bacteria in tissues affected by ORN and documented microscopic tissue changes, namely thickening of arterial and arteriolar walls, loss of osteocytes and osteoblasts and the filling of bony cavities with inflammatory cells .
Meyer’s theory: proposed his radiation, trauma and infection theory. He suggested that injury provided the opening for invasion of oral microbiological flora into the underlying irradiated bone. Meyer’s theory lasted for a decade and became the foundation for the popular use of antibiotics with surgery to treat ORN .
Marx proposed the hypoxic-hypocellular-hypovascular theory as a new way of understanding the pathophysiology of ORN. Marx from his studies concluded that: “ORN is not a primary infection of irradiated bone, but a complex metabolic and homeostatic deficiency of tissue that is created by radiation-induced cellular injury; micro-organisms play only a contaminating role in ORN. The pathophysiological sequence suggested by Marx is: irradiation; formation of hypoxic-hypocellular, hypovascular tissue; and breakdown of tissue (cellular death and breakdown of collagen that exceeds cellular replication and synthesis) driven by persistent hypoxia that can cause a chronic non-healing wound (a wound in which metabolic demands exceed supply). These explanations formed the cornerstone for the use of hyperbaric oxygen (HBO) in the treatment of ORN.
Radiation-Induced Fibroatrophic Theory
Radiation-induced fibrosis is a new theory that accounts for the damage to normal tissues, including bone, after radiotherapy. It was introduced in 2004 when recent advances in cellular and molecular biology explained the progression of microscopically observed ORN.
The theory of radiation-induced fibrosis suggests that the key event in the progression of ORN is the activation and dysregulation of fibroblastic activity that leads to atrophic tissue within a previously irradiated area.
Clinical Range of Osteoradionecrosis and Its Staging
Early ORN may be asymptomatic even though the main features of exposed devitalised bone through ulcerated mucosa or skin can be seen clearly. Pain is a common symptom and some patients have presented with intractable pain. Other associated symptoms include dysaesthesia, halitosis, dysgeusia and food impaction in the area of exposed sequestra [12, 13]. In severe cases, patients can present with fistulation from the oral mucosa or skin, complete devitalisation of bone and pathological fractures. The interval between radiotherapy and the onset of ORN can vary, but most occur between 4 months and 2 years. ORN usually develops during the first 6–12 months after radiotherapy. It may present much earlier after a local traumatic event. Epstein et al. , reported that ORN usually presented about 4.5 months after radiotherapy in cases associated with dental or surgical injury, but in most it may present after follow-up of the incidence studies. Over the years, many staging systems have been proposed to aid treatment and provide classifications for research. The classifications were based on various criteria, including the presence of soft tissue dehiscence necrotic bone, the amount of necrotic bone, oro-cutaneous fistulae and pathological fracture. The Wilford Hall hyperbaric oxygen ORN protocol proposed by Marx stages ORN in its response to his HBO treatment protocols . The late effects on normal tissue (LENT) and subjective, objective, management and analytic (SOMA) scales proposed by the Radiation Therapy Oncology Group are scoring systems to stage the late complications of radiation and may also be used to stage ORN .
New Protocols for Prevention and Treatment of Osteoradionecrosis
Prevention
Preventive measures must be evaluated with a view to reducing the risk or severity of ORN. Deficient dental hygiene and septic mouth have been shown to increase the risk of osteoradionecrosis. Likewise, ORN is three times less frequent in edentulous patients than in patients who retain their teeth, possibly as a result of the trauma associated with the need for extractions after irradiation and the greater number of germs present. Before treatment, a thorough dental exploration is indicated, evaluating those teeth with a poor prognosis due to caries, periodontal disease, or with latent infections. Repair should be limited to those teeth that are truly amenable to restoration and which have adequate chances for survival. In these cases extractions should be made at least 2–3 weeks before treatment. In the case of retained teeth, this period should be even longer. Another important consideration after treatment for lessening the risk of ORN is good fitting, support and stability of removable dentures, avoiding points of excessive pressure that may give rise to pressure ulcers [16, 17].
Less optimally, extractions can be performed within 4 months of completion of therapy. All patients should be instructed on meticulous oral hygiene and fluoride should be applied to the dentition daily via custom molded trays. Patients should undergo weekly checkups during radiation therapy and monthly thereafter for the first 6 months. Following this early post-treatment period, the patients should see their dentist every 4 months. The reason behind this ‘‘close follow-up” schedule is to monitor the patient’s compliance with meticulous oral hygiene and the daily application of topical fluoride. Cervical root caries, common in xerostomic patients, must be treated promptly in order to avoid involvement of the pulp chamber or undermining the structure of the clinical crown. Those who require dental extractions more than 4 months after radiation therapy should be treated with HBO. The Marx protocol of 20 dives at 2.4 atmospheres for 90 min per dive before extraction and ten dives after extraction has become the de facto standard.
Advances in the delivery of radiation therapy such as intensity modulated radiation therapy (IMRT) holds promise to decrease the incidence of osteoradionecrosis (ORN) by increasing the conformality of the high dose prescription to spare larger volumes of mandible and improve homogeneity of dose. The primary treatment factors that impact the probability of developing ORN include total dose of radiation (>60 Gy), volume of mandible receiving that dose, the part of the mandible that is irradiated and dose fractionation (fraction sizes>2 Gy). Spontaneous ORN is associated with doses>60 Gy and can occur at a rate of 5–15% with older techniques while newer techniques with three-dimensional (3D) conformal therapy and IMRT have decreased the rate to 6% or less. A study comparing 3D and IMRT approaches, showed that when constrained appropriately, the volume of mandible receiving more than 50, 55 and 60 Gy could be decreased in oral cancer patients undergoing IMRT. In addition, there were fewer hot spots in the mandible and lower maximum dose. Several studies reporting the incidence of ORN after IMRT have been reported. The radiation therapy oncology group (RTOG-0022) study reported an incidence of 6% ORN in oropharynx cancer patients treated at fraction size of 2.2–66 Gy without chemotherapy. The University of Michigan reported on 176 patients treated with IMRT. At a median follow-up of 34 months, no cases of ORN developed which they attribute not only to the conformality of IMRT, but also to meticulous dental hygiene as well as salivary gland sparing which may decrease the risk for dental caries. Similarly Studer reported a 1.3% incidence of ORN after parotid sparing IMRT. Thus, to date the best outcomes with IMRT with regard to ORN appear to be when the dose to organs at risk (mandible, oral cavity and parotid) are constrained, conventional fractionation is utilized, and meticulous dental hygiene is applied [6, 18].
Conservative Management of Osteoradionecrosis
“Conservative management” consists of local irrigation (saline solution, NaHCO 3, or chlorhexidine 0.2%), systemic antibiotics in acute infectious episodes, avoidance of irritants (tobacco, alcohol, denture use) and oral hygiene instruction. “Simple management” refers to the gentle removal of sequestrum in sequestrating lesions (without local anesthetic) in addition to these conservative measures. Resection, HBO therapy, or both were initiated in cases of intractable pain, failure to respond to conservative measures and progressive deterioration (including pathologic fracture). Treatment duration terminated on the date of resection or first HBO dive .
Pentoxifyllin and Tocopherol in the Treatment of Osteoradionecrosis
To reverse changes in reactive oxygen species that produce radiation-induced fibrosis and ultimately ORN, new therapeutic regimens have been developed. Pentoxifylline is a methylxanthine derivative that exerts an anti-TNF_ effect, increases erythrocyte flexibility, dilates blood vessels, inhibits inflammatory reactions in vivo, inhibits proliferation of human dermal fibroblasts and the production of extracellular matrix and increases collagenase activity in vitro. It is given with tocopherol (vitamin E), which scavenges the reactive oxygen species that were generated during oxidative stress by protecting cell membranes against peroxidation of lipids, partial inhibition of TGF-_1, and expression of procollagen genes, so reducing fibrosis. These two drugs act synergistically as potent antifibrotic agents. Treatment always consisted of pentoxifylline 400 mg twice daily and tocopherol 1000 IU once a day .
Hyperbaric Oxygen Therapy
The rationale for the use of hyperbaric oxygen (HBO) in radiation tissue damage is to revascularize irradiated tissues and to improve the fibroblastic cellular density, thus limiting the amount of nonviable tissue to be surgically removed, enhancing wound healing and preparing the tissues for reconstruction when indicated.
Marx and Ames first outlined a standard approach to the treatment of established osteonecrosis of the jaws with adjunctive HBOT. They proposed an approach which is known as the “Wilfred-Hall Protocol”; which consists of the three stages outlined below.
Stage I. Thirty consecutive treatments. If the wound shows no definitive clinical improvement, a further ten exposures are given, to a full course of 40 exposures. If there is failure to heal after 3 months, the condition is advanced to Stage II.
Stage II. The exposed bone is removed by alveolar sequestrectomy and further 20 HBO treatments are given, to a total of 60 exposures. If wound dehiscence or failure to heal occurs, the patient is advanced to Stage III.
Stage III. The criteria for this category are failure of Stage II, pathological fracture, orocutaneous fistula, or radiographic evidence of résorption to the inferior border of the mandible.
Recommended management commences with the 30-exposure protocol, along with surgical resection to bleeding bone and/or bony reconstruction, followed by soft tissue coverage. An additional ten treatments are recommended. If healing fails, additional surgery is carried out and ten further exposures to HBOT are given at that time . Further evidence that HBO prophylaxis lowers the risk of ORN was found in Marx and Johnson tissue perfusion study.
Primary Outcomes of Marx Protocol (1999)
In 1999 Marx described two randomized control trial to measure the outcomes of HBO therapy. These trials reported data on two primary outcomes, postsurgical complication rate and, wound infection rate. All the patients in the trials required mandibular reconstruction in tissue beds exposed to≥64 Gy radiotherapy using mesh trays with free soft tissue flaps/bone grafts. The intervention was 20 preoperative and ten postoperative HBO sessions (Figs.2, 3, 4, 5).
Fig.2.
Open in a new tab
Algorithm of ORN pathophysiology
Fig.3.
Open in a new tab
Pathophysiology of ORN according to Marx
Fig.4.
Open in a new tab
Pathophysiology of ORN according to radiation induced fibro-atrophic theory
Fig.5.
Open in a new tab
Protocol for patients who require dental extractions after radiotherapy
The trials included 368 subjects, with 184 randomised to both HBOT and control groups. Overall, eight (6%) people in the HBOT group suffered wound breakdown versus 37 (28%) in the control group. Analysis for heterogeneity suggested a high proportion of variability between trials was not due to sampling variability (I2=70%), and so this comparison was made using a random-effects model. There was a significantly improved chance of wound breakdown with control (RR 4.2; 95% CI 1.1–16.8, P=0.04 (Analysis 9.1). Stratification by tissue type involved confirmed that the direction of effect was the same for both studies, but it remained significant only for soft tissue flaps and grafts (RR following hemimandibulectomy 2.2; 95% CI 0.8–5.9, P=0.12; RR following soft tissue flap or graft (8.7; 95% CI 2.7–27.5, P=0.0002). The number needed to treat to benefit with HBOT to avoid one wound dehiscence overall was 5 (95% CI 1–59), and for soft tissue repairs alone was 4 (95% CI 3–6) (Fig.6).
Fig.6.
Open in a new tab
Analysis showing primary outcomes of Marx protocol (1999) [29, 22]
Randomised Controlled Trials in HBO to Treat Osteoradionecrosis of the Mandible
To our knowledge the only randomised controlled trial in peer-reviewed publications for the use of HBO in the treatment of ORN in the head and neck region was by Annane et al. The trial had many laudable design features: it was a prospective, multi-centre, randomised, double-blind and placebo-controlled study carried out across 12 hospitals with an intended recruitment target of 222. The HBO protocol used 30 dives before and ten after operation at 2.4 atm. for 90 min, and so reflects the contemporary international consensus (if not the Wilford Hall protocol 4). However, the trial has proved controversial, with several different interpretations of the data possible, and despite serious voiced and published reservations about its design, has eroded enthusiasm for the role of HBO in the treatment of ORN. The principal finding was that HBO did not aid in the management of ORN, indeed an excess of poor outcomes in the HBO caused premature closure of the trial under early stopping rules. At 1 year, recovery in the HBO arm was 19%, and 32% in the placebo arm. This finding has now been cited by many health care funding bodies as evidence to withhold reimbursement for its use in the treatment of ORN. There are three main objections to the trial design. Firstly, the diagnosis, stage, and distribution of patients with ORN entered into the trial have been criticised, and the definition of ORN was imprecise compared with conventional clinical practice. Patients were included in the trial if they had one clinical change and one radiographic change. The clinical changes were pain, dysaesthesia in the distribution of the inferior alveolar nerve, bony exposure, trismus, or fistula. The radiographic changes were increased density, periosteal thickening, diffuse radiolucency, mottled areas of osteoporosis or sequestration. It can readily be appreciated that many patients who certainly do not have ORN would fit within these rather loose criteria. Patients with Notani III ORN (fracture or ORN at the lower border) were excluded from the trial; another factor that limited the usefulness of its findings. Stratification was not used so, of the small number (n=68) actually randomised, a concentration of more severely affected cases could have been assigned to one arm or the other. The importance of this omission is exaggerated by the imprecise inclusion criteria. Interestingly the criteria for the measure of primary outcome were more robust; absence of pain, and exposed bone with stabilisation of radiographic findings. Although this gives confidence as to which patients had “real” ORN at the conclusion of the trial, we do not know who had it at entry. Data in the paper state clearly that only 38 of the 68 patients included actually had an area of exposed bone. Can there be confident interpretation of this trial, powered for inclusion of 222 patients, with data based on perhaps 38 “true” cases of ORN?
Hyperbaric Oxygen Therapy for Late Radiation Tissue Injury (LRTI)—Cochrane Review
Bennett et al. , conducted a systematic review by evaluating the quality of eleven relevant randomised control trails using the guidelines of the Cochrane Handbook for Systematic Reviews of Interventions and extracted the data from the included trials.
The review concluded that hyperbaric oxygen therapy (HBOT) has been suggested as a treatment for LRTI based upon the ability to improve the blood supply to these tissues. It is postulated that HBOT may result in both healing of tissues and the prevention of problems following surgery. There was no evidence of benefit in clinical outcomes with established radiation injury to neural tissue, and no data reported on the use of HBOT to treat other manifestations of LRTI. HBOT also appears to reduce the chance of ORN following tooth extraction in an irradiated field. The application of HBOT to selected patients and tissues may be justified .
Ultrasound for the Treatment of Osteoradionecrosis
Therapeutic US can exert its physical effects on the cells and tissues by thermal and non thermal mechanisms.
Thermal effects are used in physiotherapy for the treatment of acute injuries, strains, and pain relief. Nonthermal effects are used in the stimulation of tissue regeneration, healing of varicose ulcers, pressure sores, blood flow in chronically ischemic muscles, protein synthesis in fibroblasts, and tendon repair. Ultrasound affects bone by induction of bone formation in vitro and acceleration of bone repair in animals and humans. It has been shown that the nonthermal effects can result in healing of mandibular osteoradionecrosis. The principal value is the induction of angiogenesis, as shown by Young and Dyson. The new capillary formation involves activation, degradation of basement membrane, migration and proliferation of endothelial cells from preexisting venules, capillary tube formation, and maturation of new capillaries. Therapeutic angiogenesis is used to reduce unfavorable tissue effects caused by local hypoxia, including osteoradionecrosis, and to enhance tissue repair. Harrisll has suggested the use of ultrasound as an important means of revascularization of mandibular osteoradionecrosis. The patients were treated with ultrasound (3 MHz, pulsed 1:4, 1 W/cm) for 40 sessions of 15 min/day. Ten of 21 (48%) cases showed healing when treated with debridement and ultrasound alone, 11 cases remained unhealed after ultrasound therapy and after debridement were covered with a local flap, and only one case needed mandibular resection and reconstruction .
Surgical Management
Major advances in the surgical management of ORN are related to reconstruction surgery. The development of myocutaneous flaps and the use of microvascular free bone flaps allowed substantial modifications in the decision-making process of the extent of the surgical ablation of extensive ORN. The replacement of the dead bone with a vascularized bone-containing flap will not only allow for restoration of the mandibular continuity but also bring non irradiated soft-tissue coverage with intact blood supply.
Commonly used flaps are fibular flap, ileac crest flap, scapular-parascapular flaps .
Treatment of Mandibular Osteoradionecrosis by Cancellous Bone Grafting
The use of a particulated cancellous bone and marrow (PCBM) graft after removal of necrotic bone is an interesting idea. Although it has been used in several other types of mandibular reconstruction, this is probably one of the first times it has been reported for ORN. However, stimulation of a new blood supply to the affected area is the main treatment goal and this has been proposed for a long time. Hahn and Corgill suggested creating holes in the affected area to stimulate granulation tissue in 1967. The use of PCBM for ORN has also been mentioned previously, but via an extraoral approach. Obwegeser and Sailer suggested bone grafting with autogenous decorticated iliac or rib via an intraoral approach. The use of tibia in this regard is new, but is there enough material for large defects. The clinical cases show excellent results, with good secondary healing. Perhaps the use of local vascular flaps could enhance these results, avoiding such high incidence of secondary healing. Placement of implants is easier and more predictable when this technique is used and that it should be done as a secondary procedure after the graft has taken .
Distraction Osteogenesis
Applicable for all distraction directions
Indicated in case of poor surgical candidates for free flap transfer
Increase bone quantity, improve bone quality and neovascularization
Synergistic with HBOT, the use of BFGF and cyclic stretching (callus massage)
Increase soft-tissue bed for further bone reconstruction .
Utilization of distraction osteogenesis (DO) in head and neck cancer is extremely appealing; patients could undergo large composite tissue resection and immediate soft tissue reconstruction with local flaps or microvascular free tissue transfer. Flaps could be chosen on soft tissue coverage needs alone, without the need to incorporate bone. Postoperative radiation therapy would proceed sooner as the wound healing period would be truncated.
Replacement of bone through transport DO could be performed on an elective basis after completion of XRT (high dose highly fractionated radiation). For elderly patients or patients in whom microvascular free tissue transfer would pose an extreme health risk, DO alone might provide a less invasive alternative. DO could also provide an additional reconstructive option after flap failure, bone resorption or osteordionecrosis .
Bisphosphonates and Osteonecrosis of the Jaws
Osteonecrosis of the jaws is a recently described adverse side effect of bisphosphonate therapy. Patients with multiple myeloma and metastatic carcinoma to the skeleton who are receiving intravenous, nitrogen-containing bisphosphonates are at greatest risk for osteonecrosis of the jaws; these patients represent 94% of published cases. The mandible is more commonly affected than the maxilla (2:1 ratio), and 60% of cases are preceded by a dental surgical procedure. Oversuppression of bone turnover is probably the primary mechanism for the development of this condition, although there may be contributing comorbid factors. All sites of potential jaw infection should be eliminated before bisphosphonate therapy is initiated in these patients to reduce the necessity of subsequent dentoalveolar surgery. Conservative debridement of necrotic bone, pain control, infection management, use of antimicrobial oral rinses and withdrawal of bisphosphonates are preferable to aggressive surgical measures for treating this condition. The degree of risk for osteonecrosis in patients taking oral bisphosphonates, such as alendronate, for osteoporosis is uncertain and warrants careful monitoring .
Discussion
ORN is still a serious complication resulting from radiotherapy and its incidence has not decreased in the last few years. Because ORN may be considered a nonhealing wound resulting from metabolic and tissue homeostatic disturbances, it responds to different forms of treatment. The incidence and the prevalence of ORN of the jaws after radiation therapy for head and neck cancer are unknown. Based on the literature, Clayman found an overall incidence of 11.8% before 1968 and 5.4% thereafter. Osteoradionecrosis (ORN) is characterized by delayed bone repair secondary to damage caused by radiotherapy (RT). The mean incidence of the disorder is 10%, and it is particularly seen after traumatisms in the form of dental extractions—manifesting between 6 months and 5 years after radiotherapy (90% of the lesions being located in the mandible).
The clinical management of ORN is difficult and normally comprises medical care, the avoidance of toxic habits, improvement of dental hygiene, the control of infections with antibiotics and antiseptics and removal of the necrotic tissue with more aggressive surgery once complications have appeared (pathological fractures). Some authors have preferred conservative treatment to control small necrotic areas, but this therapy may be insufficient in refractory and acute ORN. In addition, many clinical guides mention the possibility of employing hyperbaric oxygen therapy as a coadjuvant measure, though its use is controversial. Prospective random control trials conducted by Annane et al., Marx et al., and a recent Cochrane review by Bennet et al. , shows the effectiveness of HBO therapy in treatment of ORN. In any case, no general consensus-based clinico-therapeutic protocol has been established to deal with this disorder. The results of both a conservative approach and surgery/HBO treatment are well documented. In advanced conditions, the results of conservative treatment only are poor and under these circumstances radical resection of the involved segment and adjuvant HBO is a satisfactory option in the management of ORN of the jaws.
This article made an attempt to give a review on osteoradionecrosis regarding its etiology, clinical features, pathophysiology with most widely accepted theories and various treatment modalities depending upon the severity of the disease from conservative management, HBO therapy to recent treatment options such as ultrasound, treatment with antioxidants such as pentoxifylline and treatment with vitamin E, and reconstruction with vascularised bone flaps, treatment by distraction osteogenesis.
Conclusion
ORN can lead to intolerable pain, fracture, sequestration of devitalized bone and fistulas, which makes oral feeding impossible. ORN is an expensive disease to manage no matter what course of treatment is used. Effective management of any disease process initially requires diagnosis before treatment. Criteria used to identify ORN vary even among identical authors at different points in time. So, it is important to make a correct diagnosis before initiating a treatment.
References
1.Lyons A, Ghazali N. Osteoradionecrosis of the jaws: current understanding of its pathophysiology and treatment. Br J Oral Maxillofac Surg. 2008;46:653–660. doi: 10.1016/j.bjoms.2008.04.006. [DOI] [PubMed] [Google Scholar]
2.Harris M. The conservative management of osteoradionecrosis of the mandible with ultrasound therapy. Br J Oral Maxillofac Surg. 1992;30:313–318. doi: 10.1016/0266-4356(92)90181-H. [DOI] [PubMed] [Google Scholar]
3.Schwartz HC, Kagan AR. Osteoradionecrosis of the mandible: scientific basis for clinical staging. Am J Clin Oncol. 2002;25:168–171. doi: 10.1097/00000421-200204000-00013. [DOI] [PubMed] [Google Scholar]
4.Store G, Boysen M. Mandibular osteoradionecrosis: clinical behavior and diagnostic aspects. Clin Otolaryngol Allied Sci. 2000;25:378–384. doi: 10.1046/j.1365-2273.2000.00367.x. [DOI] [PubMed] [Google Scholar]
5.Kluth EV, Jain PR, Stuchell RN, Frich JC., Jr A study of factors contributing to the development of osteoradionecrosis of the jaws. J Prosthet Dent. 1988;59:194–201. doi: 10.1016/0022-3913(88)90015-7. [DOI] [PubMed] [Google Scholar]
6.Jacobson Adam S, et al. Paradigm shifts in the management of osteoradionecrosis of the mandible. Oral Oncol. 2010;46:795–801. doi: 10.1016/j.oraloncology.2010.08.007. [DOI] [PubMed] [Google Scholar]
7.Teng MS, Futran ND. Osteoradionecrosis of the mandible. Curr Opin Otolaryngol Head Neck Surg. 2005;13:217–221. doi: 10.1097/01.moo.0000170527.59017.ff. [DOI] [PubMed] [Google Scholar]
8.Thorn JJ, et al. Osteoradionecrosis of the jaws: clinical characteristics and relation to the field of irradiation. J Oral Maxillofac Surg. 2000;58:1088–1093. doi: 10.1053/joms.2000.9562. [DOI] [PubMed] [Google Scholar]
9.Watson WL, Scarborough JE. Osteoradionecrosis in intraoral cancer. Am J Roengenol. 1938;40:524–534. [Google Scholar]
10.Gowgiel JM. Experimental radio-osteonecrosis of the jaws. J Dent Res. 1960;39:176–197. doi: 10.1177/00220345600390011401. [DOI] [PubMed] [Google Scholar]
11.Titterington WP. Osteomyelitis and osteoradionecrosis of the jaws. J Oral Med. 1971;26:7–16. [PubMed] [Google Scholar]
12.Beumer JIII, Curtis T, Harrison RE. Radiation therapy of the oral cavity: sequelae and management, part 1. Head Neck Surg. 1979;1:301–312. doi: 10.1002/hed.2890010404. [DOI] [PubMed] [Google Scholar]
13.Epstein J, Wong F, Stevenson-Moore P. Osteoradionecrosis: clinical experience and a proposal for classification. J Oral Maxillofac Surg. 1987;45:104–111. doi: 10.1016/0278-2391(87)90399-5. [DOI] [PubMed] [Google Scholar]
14.Marx RE. A new concept in the treatment of osteoradionecrosis. J Oral Maxillofac Surg. 1983;41:351–357. doi: 10.1016/S0278-2391(83)80005-6. [DOI] [PubMed] [Google Scholar]
15.Jereczek-Fossa BA, Orecchia R. Radiotherapy-induced mandibular bone complications. Cancer Treat Rev. 2002;28:65–74. doi: 10.1053/ctrv.2002.0254. [DOI] [PubMed] [Google Scholar]
16.Silvestre-Rangil J, Silvestre FJ. Clinico-therapeutic management of osteoradionecrosis: a literature review and update. Med Oral Patol Oral Cir Bucal. 2011;16(7):e900–e904. doi: 10.4317/medoral.17257. [DOI] [PubMed] [Google Scholar]
17.Pasquier D, Hoelscher T, Schmutz J, et al. Hyperbaric oxygen therapy in the treatment of radio-induced lesions in normal tissues: a literature review. Radiother Oncol. 2004;72:1–13. doi: 10.1016/j.radonc.2004.04.005. [DOI] [PubMed] [Google Scholar]
18.Ben-David, et al. Lack of osteoradionecrosis of the mandible after IMRT for head and neck cancer: likely contributions of both dental care and improved dose distributions. Int J Radiat Oncol Biol Phys. 2007;68(2):396–402. doi: 10.1016/j.ijrobp.2006.11.059. [DOI] [PMC free article] [PubMed] [Google Scholar]
19.Wong JK, et al. Conservative management of osteoradionecrosis. Oral Surg Oral Med Oral Pathol Oral Radiol Endodontol. 1997;84:16–21. doi: 10.1016/S1079-2104(97)90287-0. [DOI] [PubMed] [Google Scholar]
20.Mcleod NMH, et al. Pentoxifylline and tocopherol in the management of patients with osteoradionecrosis: the Portsmouth experience. Br J Oral Maxillofac Surg. 2012;50:41–44. doi: 10.1016/j.bjoms.2010.11.017. [DOI] [PubMed] [Google Scholar]
21.Kaur J, et al. Retrospective audit of the use of the Marx Protocol for prophylactic hyperbaric oxygen therapy in managing patients requiring dental extractions following radiotherapy to the head and neck. N Z D J. 2009;105(2):47–50. [PubMed] [Google Scholar]
22.Bennett MH, Feldmeier J, Hampson N, Smee R, Milross C. Hyperbaric oxygen therapy for late radiation tissue injury. Cochrane Database Syst Rev. 2012;5(CD005005):8. doi: 10.1002/14651858.CD005005.pub2. [DOI] [PubMed] [Google Scholar]
23.Doan N, et al. In vitro effects of therapeutic ultrasound on cell proliferation, protein synthesis, and cytokine production by human fibroblasts, osteoblasts, and monocytes. J Oral Maxillofac Surg. 1999;57:409–419. doi: 10.1016/S0278-2391(99)90281-1. [DOI] [PubMed] [Google Scholar]
24.Ang E, Black C. Reconstructive options in the treatment of osteoradionecrosis of the craniomaxillofacial skeleton. Br J Plast Surg. 2003;56:92–99. doi: 10.1016/S0007-1226(03)00085-7. [DOI] [PubMed] [Google Scholar]
25.Rehem Peter. Treatment of mandibular osteoradionecrosis by cancellous bone grafting. Oral Maxiilofac Surg. 1999;57:942–943. doi: 10.1016/S0278-2391(99)90014-9. [DOI] [PubMed] [Google Scholar]
26.Madrid C, Abarca M, Bouferrache K. Osteoradionecrosis: an update. Oral Oncol. 2010;46:471–474. doi: 10.1016/j.oraloncology.2010.03.017. [DOI] [PubMed] [Google Scholar]
27.Monson, et al. The effects of high dose and highly fractionated radiation on distraction osteogenesis in the murine mandible. Radiation Oncology. 2012;7:15. doi: 10.1186/1748-717X-7-151. [DOI] [PMC free article] [PubMed] [Google Scholar]
28.Hellstein JW, et al. Systematic review: bisphosphonates and osteonecrosis of the jaws. Ann Intern Med. 2006;144:753–761. doi: 10.7326/0003-4819-144-10-200605160-00009. [DOI] [PubMed] [Google Scholar]
29.Shaw RJ, Dhanda J (2010) Hyperbaric oxygen in the management of late radiation injury to the head and neck. Part I: treatment. Br J Oral Maxillofac Surg 49(1):2–8 [DOI] [PubMed]
30.Epstein et al. Postradiation osteonecrosis of the mandible, a long-term follow-up study. Oral Surg Oral Med Oral Pathol Oral Radiol Endod. 1997;83:657–662. doi: 10.1016/S1079-2104(97)90314-0. [DOI] [PubMed] [Google Scholar]
31.Lyons A, et al. Osteoradionecrosis—a review of current concepts in defining the extent of the disease and a new classification proposal. Br J Oral Maxillofac Surg. 2014;52:392–395. doi: 10.1016/j.bjoms.2014.02.017. [DOI] [PubMed] [Google Scholar]
32.McLeod NM, et al. Bisphosphonate osteonecrosis of the jaw: a literature review of UK policies versus international policies on the management of bisphosphonate osteonecrosis of the jaw. Br J Oral Maxillofac Surg. 2011;49:335–342. doi: 10.1016/j.bjoms.2010.08.005. [DOI] [PubMed] [Google Scholar]
Articles from Journal of Maxillofacial & Oral Surgery are provided here courtesy of Springer
ACTIONS
View on publisher site
PDF (1.1 MB)
Cite
Collections
Permalink PERMALINK
Copy
RESOURCES
Similar articles
Cited by other articles
Links to NCBI Databases
On this page
Abstract
Introduction
Definition
Epidemiology and Risk Factors for the Development of Osteoradionecrosis
Factors that Affect the Development of ORN
Etiopathogenesis
Theories of the Pathophysiology of Osteoradionecrosis
New Protocols for Prevention and Treatment of Osteoradionecrosis
Conservative Management of Osteoradionecrosis
Pentoxifyllin and Tocopherol in the Treatment of Osteoradionecrosis
Hyperbaric Oxygen Therapy
Randomised Controlled Trials in HBO to Treat Osteoradionecrosis of the Mandible
Hyperbaric Oxygen Therapy for Late Radiation Tissue Injury (LRTI)—Cochrane Review
Ultrasound for the Treatment of Osteoradionecrosis
Surgical Management
Treatment of Mandibular Osteoradionecrosis by Cancellous Bone Grafting
Distraction Osteogenesis
Bisphosphonates and Osteonecrosis of the Jaws
Discussion
Conclusion
References
Cite
Copy
Download .nbib.nbib
Format:
Add to Collections
Create a new collection
Add to an existing collection
Name your collection
Choose a collection
Unable to load your collection due to an error
Please try again
Add Cancel
Follow NCBI
NCBI on X (formerly known as Twitter)NCBI on FacebookNCBI on LinkedInNCBI on GitHubNCBI RSS feed
Connect with NLM
NLM on X (formerly known as Twitter)NLM on FacebookNLM on YouTube
National Library of Medicine 8600 Rockville Pike Bethesda, MD 20894
Web Policies
FOIA
HHS Vulnerability Disclosure
Help
Accessibility
Careers
NLM
NIH
HHS
USA.gov
Back to Top |
17248 | https://math.answers.com/math-and-arithmetic/The_probability_that_two_cards_drawn_from_a_deck_of_cards_are_both_aces | The probability that two cards drawn from a deck of cards are both aces? - Answers
Create
0
Log in
Subjects>Math>Math & Arithmetic
The probability that two cards drawn from a deck of cards are both aces?
Anonymous ∙ 14 y ago
Updated: 11/1/2022
The probability of drawing two Aces from a standard deck of 52 cards is 4 in 52 times 3 in 51, or 12 in 2652, or 1 in 221, or about 0.00452.
Wiki User ∙ 14 y ago
Copy
Add Your Answer
What else can I help you with?
Search
Continue Learning about Math & Arithmetic
### Two cards are drawn from a standard deck of 52 cards without replacement Find the probability that both cards are aces? There are 4 aces in the deck the odds that the first card is an ace is 4/52 or 1/13. The odds the second card is an ace is 3/51 or 1/17 because there are only 3 aces and 51 cards left. The odds that both are aces are 1/13 times 1/17 which is 1/221.
### What is the probability of drawing two consecutive aces from a deck of fifty two cards? If you are drawing only two cards, the probability that they will both be aces is one in 221. ( (52 / 4) (51 / 3) ) If you are drawing all the cards in the deck, one at a time, the probability that you will draw at least two aces in a row is much better than that, but how much better I leave for someone else to answer.
### What is the probability of pulling two red cards without replacement from a deck of cards? Assuming a pack consists of 52 cards as per normal. Initially half the cards are red. Probability that the first card drawn is red = 1/2. Now there are 25 red cards left out of 51 remaining cards. Probability that the second card drawn is red = 25/51. Probability that both cards drawn are red therfore = 1/2 25/51 = 25/102
### Jacob randomly draws two cards from a standard deck of 52 cards. He does not replace the first card. What is the probability that both cards are kings A. B. C. D.? To find the probability that both cards drawn are kings, we first consider the total number of kings in a standard deck, which is 4. When Jacob draws the first king, the probability is 4 out of 52. After drawing the first king, there are now 3 kings left and only 51 cards remaining in the deck. Therefore, the probability of drawing a second king is 3 out of 51. The overall probability of both events occurring is (4/52) (3/51) = 12/2652, which simplifies to 1/221.
### What suit of aces and eights did wild bill have? Both pair of cards were clubs and spades.
Related Questions
Trending Questions
What is another way of indicating a repeat with 1st and 2nd?What multipled by what by ten equals 360?Which is greater -0.44 or -0.77?Why 5 divided by 0 undefined?What are the first 3 multiples of 17?What is chord overstreet's type?How many hours to hike 3 miles?How many l are in 15 kl?What is thirteen fifteenths as a decimal?How do you write 456.18 in word form?When data is collected using a qualitative nominal variable what is true about a frequency distribution that summarizes the data?What is the distrubutive property of multiplication?How many times 2 go into 14?What sort of triangle has 3 equal sides and 3 equal angels?What would the date be 168 days after September 1st?How would you write five-twelfths as a fraction?What is 8 inches as a fraction of 3 feet?What is 4t-6 when t equals -6?How many syllables in resemblance?Is 7 divisible by 5671?
Resources
LeaderboardAll TagsUnanswered
Top Categories
AlgebraChemistryBiologyWorld HistoryEnglish Language ArtsPsychologyComputer ScienceEconomics
Product
Community GuidelinesHonor CodeFlashcard MakerStudy GuidesMath SolverFAQ
Company
About UsContact UsTerms of ServicePrivacy PolicyDisclaimerCookie PolicyIP Issues
Copyright ©2025 Answers.com. All Rights Reserved. The material on this site can not be reproduced, distributed, transmitted, cached or otherwise used, except with prior written permission of Answers. |
17249 | https://www.unep.org/news-and-stories/story/data-platform-charts-health-nearly-every-freshwater-body-earth-heres-what-it | Skip to main content
Are you sure you want to print? Save the planet. Opt not to print.
18 Mar 2025
Story
Fresh water
This data platform charts the health of nearly every freshwater body on Earth. Here’s what it reveals
Photo by AFP/Bryan Smith
At first, the satellite image of Lake Titicaca, which sits high in the Andes Mountains on the border between Bolivia and Peru, looks normal. But zoom in, and you’ll see a riot of reds, yellows and greens along its coastlines.
The colours graphically represent pollution – much of it raw sewage and farm runoff – flowing into the lake from surrounding communities.
The satellite image is part of the Freshwater Ecosystems Explorer, a groundbreaking data platform developed by the United Nations Environment Programme (UNEP) and partners that shows the health of millions of lakes, rivers and wetlands around the world. The online site is designed to shine a spotlight on the state of the planet’s freshwater ecosystems and the reserves they hold, which experts say are under mounting pressure from climate change, pollution and a host of other threats.
“Fresh water is fundamental to life on this planet and the website is enabling access to what is vital information,” said Sinikinesh Jimma, head of UNEP’s Marine and Freshwater Branch. “The more people know about the state of their local freshwater bodies, the more they can do to protect and restore them.”
Jimma made the comments just ahead of World Water Day, which falls on March 22 and is designed to raise awareness about the importance of better managing freshwater resources. This year, the day will focus on the preservation of glaciers, which are in retreat in many places around the world.
Eye in the sky
Want to check out the health of a lake, river or aquifer near you? You can begin with a quick introduction to the Freshwater Ecosystems Explorer, which draws on data from a range of sources, including satellites. Once you’re ready, you can dive into the explorer proper, where you’ll find detailed information on the extent and state of freshwater bodies around the world.
The Freshwater Ecosystems Explorer was developed as part of a UN effort to track progress on the environmental basis of Sustainable Development Goal 6, which calls for everyone on Earth to have access to clean water and sanitation by 2030. Headway towards the goal is halting: in 2022, 2.2 billion people still lacked access to safely managed drinking water.
Part of the reason for the shortage is what experts call a worrying state of freshwater ecosystems, which along with lakes, rivers and wetlands include aquifers, mangrove forests and glaciers. A UNEP-backed report from 2024 found that nearly half the countries on Earth had at least one significant freshwater ecosystem in decline. The report blamed the falloff on a range of factors including climate change, which is making many places drier, overusing water, constructing dams, and converting freshwater ecosystems, like wetlands, into farms or urban areas.
Many of those challenges have been laid bare by the Freshwater Ecosystems Explorer, which charts any body of water on the Earth’s surface larger than 30 metres by 30 metres. It offers what experts call an unprecedented level of detail, tracking not only pollution but also the size of freshwater bodies, some over the course of decades.
It shows, for example, how years of drought led to the near-calamitous shrinking of South Africa’s reservoir Theewaterskloof Dam, which supplies the city of Cape Town with drinking water. It also reveals how a surge in rainfall in the United Kingdom, which has been linked to climate change, caused rivers to burst their banks, leading to widespread flooding.
"The explorer highlights how robust data can help countries manage their freshwater resources in a more holistic, more integrated way,” said Jimma. “That is crucial to safeguarding this most precious of resources for generations to come.”
And it gives a bird’s eye view of Lake Titicaca, which sits in a basin home to 3 million people and is fed by glacial melt. The largest lake in South America is careening towards collapse, experts say, in part because of the dumping of raw sewage. Satellite data shows where the water is becoming increasingly cloudy, or turbid, and where there has been a surge in nutrient levels, two telltale signs of pollution. This runoff affects ecosystems, human health and biodiversity.
Not all news was bad, though. The explorer has charted the rebound of several bodies of water, including Iran’s Lake Urmia where an effort to unblock its feeder rivers have caused water levels to rise in a lake once thought near dead.
That was part of a larger trend that has seen countries revive many freshwater bodies, including some of the world’s most-polluted urban rivers.
Restoring freshwater ecosystems and other inland water bodies is a key aim of the Kunming-Montreal Global Biodiversity Framework, a landmark 2022 agreement to halt and reverse the decline of the natural world.
The Freshwater Ecosystems Explorer was developed by UNEP, UNEP-DHI, the European Commission Joint Research Centre and Google.
The Kunming-Montreal Global Biodiversity Framework
The planet is experiencing a dangerous decline in nature. One million species are threatened with extinction, soil health is declining and water sources are drying up. The Kunming-Montreal Global Biodiversity Framework sets out global targets to halt and reverse nature loss by 2030. It was adopted by world leaders in December 2022. To address the drivers of the nature crisis, UNEP is working with partners to take action in landscapes and seascapes, transform our food systems, and close the finance gap for nature.
Editor's note: The story was updated on 19 March 2025 to reflect accurately the name of the reservoir supplying Cape Town with water.
Topics
Fresh water
Environment under review
Nature action
Pollution
Health
Further Resources
UNEP’s work on nature
UNEP’s work on freshwater
The Freshwater Ecosystems Explorer
Mid-Term Status on Sustainable Development Goal 6
Related Content
Press release
Nations come together to establish new Intergovernmental Science-Policy Panel on Chemicals, Waste and Pollution
Statements
From waste to wealth: circular economy as a catalyst for jobs and sustainability
Related Sustainable Development Goals
### Goal 1
Goal 1: No Poverty
+
### Goal 2
Goal 2: Zero Hunger
+
### Goal 3
Goal 3: Good Health and Well-Being
+
### Goal 6
Goal 6: Clean Water and Sanitation
+
### Goal 8
Goal 8: Decent Work and Economic Growth
+
### Goal 12
Goal 12: Sustainable Consumption and Production
+
### Goal 13
Goal 13: Climate Action
+
### Goal 14
Goal 14: Life Below Water
+
### Goal 15
Goal 15: Life on Land
+
### Goal 17
Goal 17: Partnerships for the Goals
+
Learn about our work
Sign up for updates
Get involved |
17250 | https://bio.libretexts.org/Bookshelves/Introductory_and_General_Biology/Map%3A_Raven_Biology_12th_Edition/47%3A_The_Respiratory_System/47.05%3A_Transport_of_Gases_in_Body_Fluid | 47.5: Transport of Gases in Body Fluid - Biology LibreTexts
Skip to main content
Table of Contents menu
search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode
school Campus Bookshelves
menu_book Bookshelves
perm_media Learning Objects
login Login
how_to_reg Request Instructor Account
hub Instructor Commons
Search
Search this book
Submit Search
x
Text Color
Reset
Bright Blues Gray Inverted
Text Size
Reset
+-
Margin Size
Reset
+-
Font Type
Enable Dyslexic Font - [x]
Downloads expand_more
Download Page (PDF)
Download Full Book (PDF)
Resources expand_more
Periodic Table
Physics Constants
Scientific Calculator
Reference expand_more
Reference & Cite
Tools expand_more
Help expand_more
Get Help
Feedback
Readability
x
selected template will load here
Error
This action is not available.
chrome_reader_mode Enter Reader Mode
47: The Respiratory System
Map: Raven Biology 12th Edition
{ }
{ "47.01:_Gas_Exchange_Across_Respiratory_Surfaces" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "47.02:_Gills_Cutaneous_Respirate_and_Tracheal_Systems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "47.03:_Lungs" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "47.04:_Structures_Mechanisms_and_Control_of_Ventilation_in_Mammals" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "47.05:_Transport_of_Gases_in_Body_Fluid" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
{ "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_The_Science_of_Biology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_The_Nature_of_Molecules_and_the_Properties_of_Water" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_The_Chemical_Building_Blocks_of_Life" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Cell_Structure" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Membranes" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Energy_and_Metabolism" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_How_Cells_Harvest_Energy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Photosynthesis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Cell_Communication" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_How_Cells_Divide" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Sexual_Reproduction_and_Meiosis" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Patterns_of_Inheritance" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_Chromosomes_Mapping_and_the_Meiosis-Inheritance_Connection" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_DNA-_The_Genetic_Material" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "15:_Genes_and_How_They_Work" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "16:_Control_of_Gene_Expression" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "17:_Biotechnology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "18:_Genomics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "19:_Cellular_Mechanisms_of_Development" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "20:_Genes_Within_Populations" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "21:_The_Evidence_for_Evolution" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "22:_The_Origin_of_Species" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "23:_Systematics_Phylogeny_and_Comparative_Biology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "24:_Genome_Evolution" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "25:_The_Origin_and_Diversity_of_Life" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "26:_Viruses" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "27:_Prokaryotes" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "28:_Protists" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "29:_Seedless_Plants" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "30:_Seed_Plants" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "31:_Fungi" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "32:_Animal_Diversity_and_the_Evolution_of_Body_Plans" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "33:_Protostomes" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "34:_Deuterostomes" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "35:_Plant_Form" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "36:_Transport_in_Plants" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "37:_Plant_Nutrition_and_Soils" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "38:_Plant_Defense_Responses" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "39:_Sensory_Systems_in_Plants" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "40:_Plant_Reproduction" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "41:_The_Animal_Body_and_Principles_of_Regulation" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "42:_The_Nervous_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "43:_Sensory_Systems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "44:_The_Endocrine_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "45:_The_Musculoskeletal_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "46:_The_Digestive_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "47:_The_Respiratory_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "48:_The_Circulatory_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "49:_Osmotic_Regulation_and_the_Urinary_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "50:_The_Immune_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "51:_The_Reproductive_System" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "52:_Animal_Development" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "53:_Behavioral_Biology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "54:_Ecology_of_Individuals_and_Populations" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "55:_Community_Ecology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "56:_Dynamics_of_Ecosystems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "57:_The_Biosphere_and_Human_Impacts" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "58:_Conservation_Biology" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
Sat, 04 Dec 2021 16:59:54 GMT
47.5: Transport of Gases in Body Fluid
74380
74380
lkloepper@saintmarys.edu
{ }
Anonymous
Anonymous
2
false
false
[ "article:topic", "authorname:openstax", "bicarbonate buffer system", "bicarbonate ion", "carbaminohemoglobin", "carbonic anhydrase", "chloride shift", "heme group", "hemoglobin", "oxygen-carrying capacity", "oxygen dissociation curve", "sickle cell anemia", "thalassemia", "showtoc:no", "license:ccby", "transcluded:yes", "source-bio-2032", "program:openstax" ]
[ "article:topic", "authorname:openstax", "bicarbonate buffer system", "bicarbonate ion", "carbaminohemoglobin", "carbonic anhydrase", "chloride shift", "heme group", "hemoglobin", "oxygen-carrying capacity", "oxygen dissociation curve", "sickle cell anemia", "thalassemia", "showtoc:no", "license:ccby", "transcluded:yes", "source-bio-2032", "program:openstax" ]
Search site Search Search Go back to previous article
Sign in
Username Password Sign in
Sign in
Sign in
Forgot password
Contents
1. Home
2. Bookshelves
3. Introductory and General Biology
4. Map: Raven Biology 12th Edition
5. 47: The Respiratory System
6. 47.5: Transport of Gases in Body Fluid
Expand/collapse global location
Map: Raven Biology 12th Edition
Front Matter
1: The Science of Biology
2: The Nature of Molecules and the Properties of Water
3: The Chemical Building Blocks of Life
4: Cell Structure
5: Membranes
6: Energy and Metabolism
7: How Cells Harvest Energy
8: Photosynthesis
9: Cell Communication
10: How Cells Divide
11: Sexual Reproduction and Meiosis
12: Patterns of Inheritance
13: Chromosomes, Mapping, and the Meiosis-Inheritance Connection
14: DNA- The Genetic Material
15: Genes and How They Work
16: Control of Gene Expression
17: Biotechnology
18: Genomics
19: Cellular Mechanisms of Development
20: Genes Within Populations
21: The Evidence for Evolution
22: The Origin of Species
23: Systematics, Phylogeny and Comparative Biology
24: Genome Evolution
25: The Origin and Diversity of Life
26: Viruses
27: Prokaryotes
28: Protists
29: Seedless Plants
30: Seed Plants
31: Fungi
32: Animal Diversity and the Evolution of Body Plans
33: Protostomes
34: Deuterostomes
35: Plant Form
36: Transport in Plants
37: Plant Nutrition and Soils
38: Plant Defense Responses
39: Sensory Systems in Plants
40: Plant Reproduction
41: The Animal Body and Principles of Regulation
42: The Nervous System
43: Sensory Systems
44: The Endocrine System
45: The Musculoskeletal System
46: The Digestive System
47: The Respiratory System
48: The Circulatory System
49: Osmotic Regulation and the Urinary System
50: The Immune System
51: The Reproductive System
52: Animal Development
53: Behavioral Biology
54: Ecology of Individuals and Populations
55: Community Ecology
56: Dynamics of Ecosystems
57: The Biosphere and Human Impacts
58: Conservation Biology
Back Matter
47.5: Transport of Gases in Body Fluid
Last updated Dec 4, 2021
Save as PDF
47.4: Structures, Mechanisms, and Control of Ventilation in Mammals
48: The Circulatory System
Page ID 74380
OpenStax
OpenStax
( \newcommand{\kernel}{\mathrm{null}\,})
Table of contents
1. Transport of Oxygen in the Blood
1. Hemoglobin
Factors That Affect Oxygen Binding
Transport of Carbon Dioxide in the Blood
Carbon Monoxide Poisoning
Summary
Art Connections
Glossary
Skills to Develop
Describe how oxygen is bound to hemoglobin and transported to body tissues
Explain how carbon dioxide is transported from body tissues to the lungs
Once the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diffuses out of the blood and into the alveoli to be expelled from the body. Although gas exchange is a continuous process, the oxygen and carbon dioxide are transported by different mechanisms.
Transport of Oxygen in the Blood
Although oxygen dissolves in blood, only a small amount of oxygen is transported this way. Only 1.5 percent of oxygen in the blood is dissolved directly into the blood itself. Most oxygen—98.5 percent—is bound to a protein called hemoglobin and carried to the tissues.
Hemoglobin
Hemoglobin, or Hb, is a protein molecule found in red blood cells (erythrocytes) made of four subunits: two alpha subunits and two beta subunits (Figure 47.5.1). Each subunit surrounds a central heme group that contains iron and binds one oxygen molecule, allowing each hemoglobin molecule to bind four oxygen molecules. Molecules with more oxygen bound to the heme groups are brighter red. As a result, oxygenated arterial blood where the Hb is carrying four oxygen molecules is bright red, while venous blood that is deoxygenated is darker red.
Figure 47.5.1: The protein inside (a) red blood cells that carries oxygen to cells and carbon dioxide to the lungs is (b) hemoglobin. Hemoglobin is made up of four symmetrical subunits and four heme groups. Iron associated with the heme binds oxygen. It is the iron in hemoglobin that gives blood its red color.
It is easier to bind a second and third oxygen molecule to Hb than the first molecule. This is because the hemoglobin molecule changes its shape, or conformation, as oxygen binds. The fourth oxygen is then more difficult to bind. The binding of oxygen to hemoglobin can be plotted as a function of the partial pressure of oxygen in the blood (x-axis) versus the relative Hb-oxygen saturation (y-axis). The resulting graph—an oxygen dissociation curve—is sigmoidal, or S-shaped (Figure 47.5.2). As the partial pressure of oxygen increases, the hemoglobin becomes increasingly saturated with oxygen.
Art Connection
Figure 47.5.2: The oxygen dissociation curve demonstrates that, as the partial pressure of oxygen increases, more oxygen binds hemoglobin. However, the affinity of hemoglobin for oxygen may shift to the left or the right depending on environmental conditions.
The kidneys are responsible for removing excess H+ ions from the blood. If the kidneys fail, what would happen to blood pH and to hemoglobin affinity for oxygen?
Factors That Affect Oxygen Binding
The oxygen-carrying capacity of hemoglobin determines how much oxygen is carried in the blood. In addition to P O 2, other environmental factors and diseases can affect oxygen carrying capacity and delivery.
Carbon dioxide levels, blood pH, and body temperature affect oxygen-carrying capacity (Figure 47.5.2). When carbon dioxide is in the blood, it reacts with water to form bicarbonate (HCO 3−) and hydrogen ions (H+). As the level of carbon dioxide in the blood increases, more H+ is produced and the pH decreases. This increase in carbon dioxide and subsequent decrease in pH reduce the affinity of hemoglobin for oxygen. The oxygen dissociates from the Hb molecule, shifting the oxygen dissociation curve to the right. Therefore, more oxygen is needed to reach the same hemoglobin saturation level as when the pH was higher. A similar shift in the curve also results from an increase in body temperature. Increased temperature, such as from increased activity of skeletal muscle, causes the affinity of hemoglobin for oxygen to be reduced.
Diseases like sickle cell anemia and thalassemia decrease the blood’s ability to deliver oxygen to tissues and its oxygen-carrying capacity. In sickle cell anemia, the shape of the red blood cell is crescent-shaped, elongated, and stiffened, reducing its ability to deliver oxygen (Figure 47.5.3). In this form, red blood cells cannot pass through the capillaries. This is painful when it occurs. Thalassemia is a rare genetic disease caused by a defect in either the alpha or the beta subunit of Hb. Patients with thalassemia produce a high number of red blood cells, but these cells have lower-than-normal levels of hemoglobin. Therefore, the oxygen-carrying capacity is diminished.
Figure 47.5.3: Individuals with sickle cell anemia have crescent-shaped red blood cells. (credit: modification of work by Ed Uthman; scale-bar data from Matt Russell)
Transport of Carbon Dioxide in the Blood
Carbon dioxide molecules are transported in the blood from body tissues to the lungs by one of three methods: dissolution directly into the blood, binding to hemoglobin, or carried as a bicarbonate ion. Several properties of carbon dioxide in the blood affect its transport. First, carbon dioxide is more soluble in blood than oxygen. About 5 to 7 percent of all carbon dioxide is dissolved in the plasma. Second, carbon dioxide can bind to plasma proteins or can enter red blood cells and bind to hemoglobin. This form transports about 10 percent of the carbon dioxide. When carbon dioxide binds to hemoglobin, a molecule called carbaminohemoglobin is formed. Binding of carbon dioxide to hemoglobin is reversible. Therefore, when it reaches the lungs, the carbon dioxide can freely dissociate from the hemoglobin and be expelled from the body.
Third, the majority of carbon dioxide molecules (85 percent) are carried as part of the bicarbonate buffer system. In this system, carbon dioxide diffuses into the red blood cells. Carbonic anhydrase (CA) within the red blood cells quickly converts the carbon dioxide into carbonic acid (H 2 CO 3). Carbonic acid is an unstable intermediate molecule that immediately dissociates into bicarbonate ions (HCO 3−) and hydrogen (H+) ions. Since carbon dioxide is quickly converted into bicarbonate ions, this reaction allows for the continued uptake of carbon dioxide into the blood down its concentration gradient. It also results in the production of H+ ions. If too much H+ is produced, it can alter blood pH. However, hemoglobin binds to the free H+ ions and thus limits shifts in pH. The newly synthesized bicarbonate ion is transported out of the red blood cell into the liquid component of the blood in exchange for a chloride ion (Cl-); this is called the chloride shift. When the blood reaches the lungs, the bicarbonate ion is transported back into the red blood cell in exchange for the chloride ion. The H+ ion dissociates from the hemoglobin and binds to the bicarbonate ion. This produces the carbonic acid intermediate, which is converted back into carbon dioxide through the enzymatic action of CA. The carbon dioxide produced is expelled through the lungs during exhalation.
CO 2+H 2O⇋H 2CO 3(carbonic acid)↔HCO 3+H+(bicarbonate)
The benefit of the bicarbonate buffer system is that carbon dioxide is “soaked up” into the blood with little change to the pH of the system. This is important because it takes only a small change in the overall pH of the body for severe injury or death to result. The presence of this bicarbonate buffer system also allows for people to travel and live at high altitudes: When the partial pressure of oxygen and carbon dioxide change at high altitudes, the bicarbonate buffer system adjusts to regulate carbon dioxide while maintaining the correct pH in the body.
Carbon Monoxide Poisoning
While carbon dioxide can readily associate and dissociate from hemoglobin, other molecules such as carbon monoxide (CO) cannot. Carbon monoxide has a greater affinity for hemoglobin than oxygen. Therefore, when carbon monoxide is present, it binds to hemoglobin preferentially over oxygen. As a result, oxygen cannot bind to hemoglobin, so very little oxygen is transported through the body (Figure 47.5.4). Carbon monoxide is a colorless, odorless gas and is therefore difficult to detect. It is produced by gas-powered vehicles and tools. Carbon monoxide can cause headaches, confusion, and nausea; long-term exposure can cause brain damage or death. Administering 100 percent (pure) oxygen is the usual treatment for carbon monoxide poisoning. Administration of pure oxygen speeds up the separation of carbon monoxide from hemoglobin.
Figure 47.5.4: As percent CO increases, the oxygen saturation of hemoglobin decreases.
Summary
Hemoglobin is a protein found in red blood cells that is comprised of two alpha and two beta subunits that surround an iron-containing heme group. Oxygen readily binds this heme group. The ability of oxygen to bind increases as more oxygen molecules are bound to heme. Disease states and altered conditions in the body can affect the binding ability of oxygen, and increase or decrease its ability to dissociate from hemoglobin.
Carbon dioxide can be transported through the blood via three methods. It is dissolved directly in the blood, bound to plasma proteins or hemoglobin, or converted into bicarbonate. The majority of carbon dioxide is transported as part of the bicarbonate system. Carbon dioxide diffuses into red blood cells. Inside, carbonic anhydrase converts carbon dioxide into carbonic acid (H 2 CO 3), which is subsequently hydrolyzed into bicarbonate (HCO 3−) and H+. The H+ ion binds to hemoglobin in red blood cells, and bicarbonate is transported out of the red blood cells in exchange for a chloride ion. This is called the chloride shift. Bicarbonate leaves the red blood cells and enters the blood plasma. In the lungs, bicarbonate is transported back into the red blood cells in exchange for chloride. The H+ dissociates from hemoglobin and combines with bicarbonate to form carbonic acid with the help of carbonic anhydrase, which further catalyzes the reaction to convert carbonic acid back into carbon dioxide and water. The carbon dioxide is then expelled from the lungs.
Art Connections
Figure 47.5.2: The kidneys are responsible for removing excess H+ ions from the blood. If the kidneys fail, what would happen to blood pH and to hemoglobin affinity for oxygen?
Answer
The blood pH will drop and hemoglobin affinity for oxygen will decrease.
Glossary
bicarbonate buffer system system in the blood that absorbs carbon dioxide and regulates pH levels bicarbonate (HCO 3−) ion ion created when carbonic acid dissociates into H+ and (HCOA 3A−)carbaminohemoglobin molecule that forms when carbon dioxide binds to hemoglobin carbonic anhydrase (CA)enzyme that catalyzes carbon dioxide and water into carbonic acid chloride shift chloride shift exchange of chloride for bicarbonate into or out of the red blood cell heme group centralized iron-containing group that is surrounded by the alpha and beta subunits of hemoglobin hemoglobin molecule in red blood cells that can bind oxygen, carbon dioxide, and carbon monoxide oxygen-carrying capacity amount of oxygen that can be transported in the blood oxygen dissociation curve curve depicting the affinity of oxygen for hemoglobin sickle cell anemia genetic disorder that affects the shape of red blood cells, and their ability to transport oxygen and move through capillaries thalassemia rare genetic disorder that results in mutation of the alpha or beta subunits of hemoglobin, creating smaller red blood cells with less hemoglobin
This page titled 47.5: Transport of Gases in Body Fluid is shared under a CC BY license and was authored, remixed, and/or curated by OpenStax.
Back to top
47.4: Structures, Mechanisms, and Control of Ventilation in Mammals
48: The Circulatory System
Was this article helpful?
Yes
No
Recommended articles
7.7.5: Transport of Gases in Human Bodily FluidsOnce the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diff...
24.5: Transport of Gases in Human Bodily FluidsOnce the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diff...
9.5: Transport of Gases in Human Bodily FluidsOnce the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diff...
10.3.5: Transport of Gases in Human Bodily FluidsOnce the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diff...
10.3.5: Transport of Gases in Human Bodily FluidsOnce the oxygen diffuses across the alveoli, it enters the bloodstream and is transported to the tissues where it is unloaded, and carbon dioxide diff...
Article typeSection or PageAuthorOpenStaxLicenseCC BYOER program or PublisherOpenStaxShow TOCnoTranscludedyes
Tags
bicarbonate buffer system
bicarbonate ion
carbaminohemoglobin
carbonic anhydrase
chloride shift
heme group
hemoglobin
oxygen dissociation curve
oxygen-carrying capacity
sickle cell anemia
source-bio-2032
thalassemia
© Copyright 2025 Biology LibreTexts
Powered by CXone Expert ®
?
The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org.
Support Center
How can we help?
Contact Support Search the Insight Knowledge Base Check System Status×
contents readability resources tools
☰
47.4: Structures, Mechanisms, and Control of Ventilation in Mammals
48: The Circulatory System |
17251 | https://www.splashlearn.com/math/decimal-place-value-games | Parents
Explore by Grade
Preschool (Age 2-5)KindergartenGrade 1Grade 2Grade 3Grade 4Grade 5
Explore by Subject
Math ProgramEnglish Program
More Programs
Homeschool ProgramSummer ProgramMonthly Mash-up
Helpful Links
Parenting BlogSuccess StoriesSupportGifting
Also available on
Educators
Teach with us
For TeachersFor Schools and Districts Data Protection Addendum
Success Stories
Lesson Plans Classroom Tools Teacher Blog Help & Support
More Programs
SpringBoard Summer Learning
Our Library
All
By Grade
PreschoolKindergartenGrade 1Grade 2Grade 3Grade 4Grade 5
By Subject
MathEnglish
By Topic
CountingAdditionSubtractionMultiplicationPhonicsAlphabetVowels
One stop for learning fun!
Games, activities, lessons - it's all here!
Explore All
Games
By Grade
Preschool GamesKindergarten GamesGrade 1 GamesGrade 2 GamesGrade 3 GamesGrade 4 GamesGrade 5 Games
By Subject
Math GamesReading GamesArt and Creativity GamesGeneral Knowledge GamesLogic & Thinking GamesMultiplayer GamesMotor Skills Games
By Topic
Counting GamesAddition GamesSubtraction GamesMultiplication GamesPhonics GamesSight Words GamesAlphabet Games
Learning so good, it feels like play!
Explore hundreds of fun games that teach!
Dive In
Worksheets
By Grade
Preschool WorksheetsKindergarten WorksheetsGrade 1 WorksheetsGrade 2 WorksheetsGrade 3 WorksheetsGrade 4 WorksheetsGrade 5 Worksheets
By Subject
Math WorksheetsReading Worksheets
By Topic
Addition WorksheetsMultiplication WorksheetsFraction WorksheetsPhonics WorksheetsAlphabet WorksheetsLetter Tracing WorksheetsCursive Writing Worksheets
Stuck on a concept? We're here to help!
Find the perfect worksheet to reinforce any skill.
Explore Worksheets
Lesson Plans
By Grade
Kindergarten Lesson PlansGrade 1 Lesson PlansGrade 2 Lesson PlansGrade 3 Lesson PlansGrade 4 Lesson PlansGrade 5 Lesson Plans
By Subject
Math Lesson PlansReading Lesson Plans
By Topic
Addition Lesson PlansMultiplication Lesson PlansFraction Lesson PlansGeometry Lesson PlansPhonics Lesson PlansGrammar Lesson PlansVocabulary Lesson Plans
Ready-to-go lessons = More time for teaching!
Free K to Grade 5 plans with activities & assessments, all at your fingertips.
Access for free
Teaching Tools
By Topic
Math FactsMultiplication ToolTelling Time ToolFractions ToolNumber Line ToolCoordinate Graph ToolVirtual Manipulatives
Make learning stick!
Interactive PreK to Grade 5 teaching tools to bring lessons to life.
Use for free
Articles
By Topic
Prime NumberPlace ValueNumber LineLong DivisionFractionsFactorsShapes
Math definitions made easy
Explore 2,000+ definitions with examples and more - all in one place.
Explore math vocabulary
Log in Sign up
Log inSign up
Parents
Parents
Explore by Grade
Preschool (Age 2-5)KindergartenGrade 1Grade 2Grade 3Grade 4Grade 5
Explore by Subject
Math ProgramEnglish Program
More Programs
Homeschool ProgramSummer ProgramMonthly Mash-up
Helpful Links
Parenting BlogSuccess StoriesSupportGifting
Educators
Educators
Teach with us
For Teachers For Schools and Districts Data Protection Addendum
Impact
Success Stories
Resources
Lesson Plans Classroom Tools Teacher Blog Help & Support
More Programs
SpringBoard Summer Learning
Our Library
Our Library
All
All
By Grade
PreschoolKindergartenGrade 1Grade 2Grade 3Grade 4Grade 5
By Subject
MathEnglish
By Topic
CountingAdditionSubtractionMultiplicationPhonicsAlphabetVowels
Games
Games
By Grade
Preschool GamesKindergarten GamesGrade 1 GamesGrade 2 GamesGrade 3 GamesGrade 4 GamesGrade 5 Games
By Subject
Math GamesReading GamesArt and Creativity GamesGeneral Knowledge GamesLogic & Thinking GamesMultiplayer GamesMotor Skills Games
By Topic
Counting GamesAddition GamesSubtraction GamesMultiplication GamesPhonics GamesSight Words GamesAlphabet Games
Worksheets
Worksheets
By Grade
Preschool WorksheetsKindergarten WorksheetsGrade 1 WorksheetsGrade 2 WorksheetsGrade 3 WorksheetsGrade 4 WorksheetsGrade 5 Worksheets
By Subject
Math WorksheetsReading Worksheets
By Topic
Addition WorksheetsMultiplication WorksheetsFraction WorksheetsPhonics WorksheetsAlphabet WorksheetsLetter Tracing WorksheetsCursive Writing Worksheets
Lesson Plans
Lesson Plans
By Grade
Kindergarten Lesson PlansGrade 1 Lesson PlansGrade 2 Lesson PlansGrade 3 Lesson PlansGrade 4 Lesson PlansGrade 5 Lesson Plans
By Subject
Math Lesson PlansReading Lesson Plans
By Topic
Addition Lesson PlansMultiplication Lesson PlansFraction Lesson PlansGeometry Lesson PlansPhonics Lesson PlansGrammar Lesson PlansVocabulary Lesson Plans
Teaching Tools
Teaching Tools
By Topic
Math FactsMultiplication ToolTelling Time ToolFractions ToolNumber Line ToolCoordinate Graph ToolVirtual Manipulatives
Articles
Articles
By Topic
Prime NumberPlace ValueNumber LineLong DivisionFractionsFactorsShapes
Log inSign up
Filter
Home > Games > Math Games > Decimals > Read And Write Decimals > Decimal Place Value
Decimal Place Value Games for Kids
Let's dive into our interactive decimal place value games! Crafted by math teachers, these games cover a range of curriculum concepts, such as identifying tenths and hundredths, decimal place value charts, expanded and standard forms of decimals, visualizing decimals with number lines and visual models, a... Read morend much more. Kids will have a great time exploring decimals as numbers with our decimal place value games. Trusted by millions of parents and teachers! Start now for free!
Personalized Learning
Fun Rewards
Actionable Reports
Parents, Sign Up for Free
Teachers, Use for Free
Join millions of learners learning with a smile
GRADE
CONTENT TYPE
Resources
Games
Worksheets
Glossary
Lesson Plans
SUBJECT
Math (2,046)
Number Sense (381)
Number Recognition (30)
Number Recognition Within 5 (12)
Number Recognition Within 10 (12)
Number Recognition Within 20 (6)
Number Tracing (20)
Number Tracing Within 5 (5)
Number Tracing Within 10 (5)
Number Tracing Within 20 (10)
Number Sequence (56)
Counting (141)
Counting Objects Within 5 (47)
Counting Objects Within 10 (43)
Counting Objects Within 20 (6)
Compare Numbers (47)
Compare Objects (7)
Compare Numbers Using Place Value (5)
Compare 2-Digit Numbers (6)
Compare 3-Digit Numbers (10)
Order Numbers (15)
Skip Counting (36)
Skip Count By 2 (8)
Skip Count By 5 (8)
Skip Count By 10 (13)
Skip Count By 100 (3)
Even And Odd Numbers (3)
Place Value (78)
Teen Numbers (4)
Word Form (5)
Expanded And Standard Form (14)
Unit Form (4)
Round Numbers (16)
Round Numbers To The Nearest 10 (8)
Round Numbers To The Nearest 100 (4)
Addition (369)
Add With Pictures (38)
Addition Properties (6)
Addition Strategies (136)
Compose And Decompose Numbers (66)
Number Bonds (9)
Count All To Add (9)
Add Using A Number Line (7)
Count On To Add (11)
Add With 10 (2)
Doubles And Near Doubles Addition Strategy (21)
Make 10 Strategy (4)
Add Three Whole Numbers (20)
2-Digit Addition (45)
2-Digit Addition Without Regrouping (23)
2-Digit Addition With Regrouping (16)
3-Digit Addition (65)
3-Digit Addition Without Regrouping (48)
3-Digit Addition With Regrouping (13)
4-Digit Addition (30)
4-Digit Addition Without Regrouping (13)
4-Digit Addition With Regrouping (17)
Large Numbers Addition (10)
5-Digit Addition (6)
Subtraction (240)
Subtract With Pictures (35)
Subtraction Strategies (48)
Count Back Strategy (13)
Doubles And Near Doubles Subtraction Strategy (6)
2-Digit Subtraction (30)
2-Digit Subtraction Without Regrouping (20)
2-Digit Subtraction With Regrouping (8)
3-Digit Subtraction (57)
3-Digit Subtraction Without Regrouping (35)
3-Digit Subtraction With Regrouping (20)
4-Digit Subtraction (23)
4-Digit Subtraction Without Regrouping (12)
4-Digit Subtraction With Regrouping (11)
Large Numbers Subtraction (8)
5-Digit Subtraction (5)
Multiplication (196)
Multiplication Strategies (56)
Multiplication With Equal Groups (16)
Multiplication With Arrays (20)
Multiplication Sentences (11)
Multiplication On A Number Line (13)
Repeated Addition To Multiply (17)
Times Tables (88)
Multiplication By 2 (10)
Multiplication By 3 (10)
Multiplication By 4 (9)
Multiplication By 5 (10)
Multiplication By 6 (9)
Multiplication By 7 (8)
Multiplication By 8 (8)
Multiplication By 9 (9)
Multiplication By 10 (5)
Multiplication By 11 (5)
Multiplication By 12 (5)
Multiplication Properties (15)
Distributive Property Of Multiplication (6)
Multiply By Multiples Of 10 (8)
Estimate Products (4)
Multi-Digit Multiplication (40)
Multiply 2-Digit By 1-Digit Numbers (11)
Multiply 2-Digit By 2-Digit Numbers (11)
Multiply 3-Digit By 1-Digit Numbers (7)
Multiply 3-Digit By 2-Digit Numbers (4)
Multiply 4-Digit By 1-Digit Numbers (7)
Division (119)
Divide On A Number Line (3)
Division Facts (60)
Division By 2 (5)
Division By 3 (5)
Division By 4 (5)
Division By 5 (5)
Division By 6 (5)
Division By 7 (5)
Division By 8 (5)
Division By 9 (5)
Estimate Quotients (4)
Long Division (36)
Divide 2-Digit By 1-Digit Numbers (10)
Divide 3-Digit By 1-Digit Numbers (11)
Divide 4-Digit By 1-Digit Numbers (7)
Divide 4-Digit By 2-Digit Numbers (7)
Fractions (186)
Fractions Using Models (28)
Fractions On A Number Line (10)
Compare Fractions (22)
Compare Fractions Using Models (6)
Equivalent Fractions (25)
Equivalent Fractions Using Models (10)
Mixed Numbers As Improper Fractions (4)
Fractions Operations (91)
Add Fractions (22)
Add Fractions Using Models (7)
Add Like Fractions (15)
Estimate Fraction Sums (3)
Subtract Fractions (13)
Subtract Fractions Using Models (6)
Subtract Like Fractions (7)
Add Mixed Numbers (10)
Subtract Mixed Numbers (12)
Subtract A Fraction From A Mixed Number (5)
Multiply Fractions (22)
Multiply Fractions Using Models (10)
Multiply Fractions By Whole Numbers (18)
Divide Fractions (8)
Decimals (138)
Read And Write Decimals (40)
Decimals Using Models (8)
Decimals On A Number Line (6)
Decimal Place Value (26)
Expanded Form Of Decimals (7)
Compare Decimals (22)
Order Decimals (15)
Round Decimals (12)
Round Decimals To The Nearest Whole (5)
Convert Decimals To Fractions (11)
Decimal Operations (40)
Add Decimals (6)
Subtract Decimals (5)
Multiply Decimals (13)
Divide Decimals (14)
Divide Decimals By Whole Numbers (5)
Geometry (129)
Positional Words (5)
Lines, Line Segments, Rays (6)
Parallel And Perpendicular Lines (5)
Angles (15)
Shapes (80)
2D Shapes (68)
Attributes Of 2D Shapes (23)
Triangles (4)
Quadrilaterals (13)
3D Shapes (11)
Partition Into Equal Parts (13)
Partition In Halves, Thirds, And Fourths (12)
Coordinate Plane (7)
Data Handling (46)
Sorting Objects (7)
Bar Graphs (12)
Line Plots (13)
Picture Graphs (10)
Measurement (125)
Length (34)
Measure Lengths Using The Ruler (8)
Estimate Lengths (4)
Comparing Lengths (15)
Height (6)
Comparing Heights (6)
Weight (12)
Capacity (12)
Conversion Of Measurement Units (15)
Perimeter (19)
Area (18)
Volume (8)
Time (35)
Am And Pm (2)
Time In Hours (5)
Time In Half Hours (4)
Time In Quarter Hours (7)
Time To The Nearest 5 Minutes (11)
Time To The Nearest Minute (2)
Elapsed Time (3)
Money (60)
Identify Coins (13)
Counting Money (14)
Compare Money (7)
Add And Subtract Money (17)
Multiply And Divide Money (7)
Algebra (54)
Number Patterns (29)
Expressions And Equations (13)
Order Of Operations (5)
Factors And Multiples (7)
Prime And Composite Numbers (5)
Word Problems (97)
Addition Word Problems (26)
Addition Word Problems Within 20 (19)
Subtraction Word Problems (28)
Subtraction Word Problems Within 20 (22)
Multiplication Word Problems (2)
Division Word Problems (9)
Fraction Word Problems (6)
Money Word Problems (23)
ELA (2,367)
Reading (2,269)
Phonics (2,225)
Bossy R (60)
Words With Ar (3)
Words With Er (3)
Words With Ir (3)
Words With Or (3)
Words With Ur (3)
Diphthongs (22)
Consonant Blends (118)
Ending Blends (69)
Beginning Blends (49)
L Blend Words (26)
R Blend Words (23)
Alphabet (262)
Letter Recognition (262)
Letter A (9)
Letter B (9)
Letter C (9)
Letter D (9)
Letter E (9)
Letter F (9)
Letter G (9)
Letter H (9)
Letter I (9)
Letter J (9)
Letter K (9)
Letter L (9)
Letter M (9)
Letter N (9)
Letter O (9)
Letter P (9)
Letter Q (9)
Letter R (9)
Letter S (9)
Letter T (9)
Letter U (9)
Letter V (9)
Letter W (9)
Letter X (9)
Letter Y (9)
Letter Z (9)
Lowercase Letters (78)
Uppercase Letters (78)
Matching Lowercase And Uppercase Letters (59)
Alphabetical Order (54)
Abc Song (20)
Letter Sounds (130)
Vowels (158)
Long Vowel Sounds (75)
Long Vowel A Sound (15)
Long Vowel E Sound (17)
Long Vowel I Sound (15)
Long Vowel O Sound (15)
Long Vowel U Sound (13)
Silent E (12)
Short Vowel Sounds (81)
Short Vowel A Sound (59)
Short Vowel E Sound (26)
Short Vowel I Sound (59)
Short Vowel O Sound (44)
Short Vowel U Sound (24)
Vowel Teams (65)
Words With Ai And Ay (3)
Words With Ea And Ee (3)
Words With Ie And Y (3)
Words With Oa And Ow (3)
Words With Oo (2)
Words With Ue And Ui (1)
Blending (432)
Ccvc Words (43)
Ccvcc Words (6)
Cvc Words (271)
Cvcc Words (78)
Consonant Digraphs (8)
Digraph Ch (3)
Digraph Ph (2)
Digraph Sh (3)
Digraph Th (2)
Digraph Wh (2)
Double Consonants (8)
Rhyming Words (61)
Trigraphs (38)
Three Letter Blends (19)
Sight Words (1,035)
Dolch Sight Words (567)
Fry Sight Words (444)
Reading Comprehension (44)
Cause And Effect (6)
Inference (6)
Identify The Main Idea And Key Details (13)
Categorize Pictures Into Groups (4)
What'S The Title? (5)
Prediction (6)
Sequencing (13)
Arrange Pictures In Order (3)
Arrange Sentences In Order (4)
Writing (124)
Handwriting (124)
Letter Tracing (124)
Letter Tracing A (6)
Letter Tracing B (6)
Letter Tracing C (6)
Letter Tracing D (6)
Letter Tracing E (6)
Letter Tracing F (6)
Letter Tracing G (6)
Letter Tracing H (6)
Letter Tracing I (6)
Letter Tracing J (6)
Letter Tracing K (6)
Letter Tracing L (6)
Letter Tracing M (6)
Letter Tracing N (6)
Letter Tracing O (6)
Letter Tracing P (6)
Letter Tracing Q (6)
Letter Tracing R (6)
Letter Tracing S (6)
Letter Tracing T (6)
Letter Tracing U (6)
Letter Tracing V (6)
Letter Tracing W (6)
Letter Tracing X (6)
Letter Tracing Y (6)
Letter Tracing Z (6)
General Knowledge (295)
Food (60)
Vegetables (19)
Fruits (24)
Dessert (9)
Animals (58)
Underwater (9)
Dinosaurs (8)
Reptiles (9)
Seasonal (28)
Christmas (12)
Halloween (8)
Kitchen (11)
Utensils (6)
Musical Instruments (30)
Birds (20)
Space (12)
Transport (9)
Vehicles (9)
Insects (9)
Scene (8)
Professions (8)
Monuments (8)
Household Items (8)
Flowers (8)
Buildings (8)
Colors (7)
Toys (4)
Art & Creativity (236)
Coloring (181)
Animals (32)
Dinosaurs (8)
Underwater (8)
Reptiles (8)
Seasonal (28)
Christmas (12)
Halloween (8)
Food (24)
Vegetables (8)
Fruits (8)
Transport (8)
Vehicles (8)
Space (8)
Scene (8)
Professions (8)
Musical Instruments (8)
Monuments (8)
Kitchen (8)
Utensils (5)
Insects (8)
Household Items (8)
Flowers (8)
Buildings (8)
Birds (8)
Music (36)
Rhymes (25)
Poems (10)
Cooking (7)
Stories (10)
Logic & Thinking (16)
Puzzles (11)
Matching (3)
Multiplayer (12)
Time Based (12)
Player vs Player (12)
Motor Skills (16)
Fine Finger Movement (9)
Aiming and Precision (6)
Expanded Form Of Decimals Games
View all 7 games
Expanded Form Of Decimals
##### Choosing the Standard Form of the Decomposed Decimal Numbers Game
Help your child excel in decimals with this fun game! They'll convert decomposed decimals to standard form, practicing key math skills. This game makes learning decimals exciting by solving problems that focus on the fundamentals. Kids will enjoy reading and writing decimals to the hundredths, becoming more confident in math. Start playing today!
4 5 4.NF.6
VIEW DETAILS
Expanded Form Of Decimals
##### Completing the Expanded Form of Decimal Numbers Game
This engaging math game helps kids tackle decimals by completing their expanded forms. Kids will compose and decompose decimals less than 1, making math fun and interactive. With a focus on reading and writing decimals to hundredths, this game sharpens skills while ensuring your child enjoys learning. Perfect for young mathematicians ready for a challenge!
4 5 4.NF.6
VIEW DETAILS
Expanded Form Of Decimals
##### Filling in to Complete the Expanded Form of Decimals Game
Help your child excel in decimals with this engaging game. They'll learn to compose and decompose decimals greater than 1, tackling common misconceptions with targeted practice. Kids will read and write decimals to hundredths, gaining confidence and skill in a fun, interactive way. Perfect for young mathematicians ready to conquer decimals!
4 5 4.NF.6
VIEW DETAILS
Expanded Form Of Decimals
##### Converting Decimals Greater Than 1 to Their Standard Form Game
This engaging game helps kids convert decimals greater than 1 to their standard form. By choosing the correct answers from given options, students will practice reading and writing decimals to the hundredths. Perfect for building confidence in math, this game brings real-world skills to life. Join the fun and master decimals now!
4 5 4.NF.6
VIEW DETAILS
All Decimal Place Value Games
Word Form Of Decimals
##### Changing the Words Into Decimal Numbers Game
Introduce your child to the world of decimals with this engaging game. Kids will tackle tasks that challenge their understanding of writing decimals, focusing on decimal notation for tenths. This game helps clear up common misconceptions and makes learning fun and interactive. Perfect for young minds eager to explore the fascinating world of decimals!
4 4.NF.6
VIEW DETAILS
Word Form Of Decimals
##### Converting Words to Decimals Game
In this exciting math game, kids will tackle problems that challenge them to convert decimal numbers from word form to standard form. They'll analyze options and select the correct answers, enhancing their skills in reading and writing decimals. This game offers a fun way to understand and practice decimals, making learning both enjoyable and effective.
4 5
VIEW DETAILS
Expanded Form Of Decimals
##### Expanded Form to Standard Form Game
In this exciting math game, kids will learn to convert decimals from expanded to standard form. They'll analyze options and choose the correct answers, enhancing their understanding of place value and decimals. This interactive game makes learning decimals fun and helps build a solid foundation in math. Perfect for young learners ready to tackle decimals with confidence!
5 5.NBT.3.a
VIEW DETAILS
Expanded Form Of Decimals
##### Completing the Expanded Decimal Form Game
Help your child grasp decimal place value with this engaging game! They'll practice converting decimal numbers from standard to expanded form by selecting the correct options. This fun activity builds confidence and understanding in reading and writing decimals, making math more enjoyable. Perfect for honing decimal skills while having a blast!
5 5.NBT.3.a
VIEW DETAILS
Expanded Form Of Decimals
##### Completing the Expanded Fraction Form Game
In this engaging game, kids tackle decimal problems to find missing numbers in expanded forms, boosting their place value skills. By using their understanding of expanded fractions, students will confidently mark their responses. This interactive experience is perfect for fifth graders eager to master decimals and enhance their math skills.
5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Identifying the Place (Up to Thousandths) Game
This exciting game helps kids understand decimal place value up to thousandths. By analyzing and selecting the correct digit placement from given options, children will boost their decimal skills. It's a fun way to read and write decimals, making math enjoyable and interactive. Encourage your young mathematician to explore the world of decimals today!
4 5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Making Decimal Numbers Up to Thousandths Place Game
Join your alien friends on a math adventure! Help them get home by adjusting the digits on their rocket ships. This game boosts problem-solving skills as kids learn to represent decimal numbers up to the thousandths place using place value. A fun way to read and write decimals while exploring expanded form! Get ready for a going away party in space!
4 5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Choosing the Correct Standard Form of the Decomposed Hundredths Game
In this exciting math game, your child will tackle problems involving composing and decomposing decimals. By selecting the right standard form, they will gain confidence in handling decimals less than 1. This interactive challenge helps kids master reading and writing decimals to hundredths, making math fun and engaging. Perfect for young learners to improve their skills!
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Identifying the Digit at a Place (Up To Thousandths) Game
This fun math game helps kids tackle decimal place value by identifying digits up to the thousandths place. With exciting problems, students apply their knowledge to solve challenges and improve their understanding of decimals. Perfect for fifth graders, this game makes learning decimals enjoyable and helps develop essential math skills.
5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Identifying the Place Value of the Digits in the Decimal Number Game
Join the fun of mastering decimal place value with this interactive game! Kids tackle exciting problems, learning to read and write decimals to hundredths. This game helps clear up common misconceptions, making math an adventure. Perfect for fourth graders, it encourages a deeper understanding of decimals and boosts confidence in math skills.
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Completing the Place Value Chart of Decimals Greater Than 1 Game
In this exciting game, kids will tackle decimal place value challenges using a visual chart. By filling in the model accurately, they'll grasp the concept of decimals greater than 1. Designed for fourth graders, this game makes learning fun and interactive. Kids will also explore reading and writing decimals to hundredths, enhancing their math skills effortlessly.
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Decimals on the Place Value Chart Game
Help your child grasp decimals with this fun game! Kids will fill in blanks on a place value chart, identifying the position of each digit. By solving varied problems, they'll gain confidence in understanding tenths and hundredths. It's a great way to practice reading and writing decimals while having fun. Perfect for young math explorers!
5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Identifying the Place Value and Completing the Chart Game
Unlock the world of decimals with this engaging game! Kids will use a place value chart to tackle questions on decimal place value, focusing on decimals less than 1. Perfect for honing math skills, this game ensures a fun learning experience as children read and write decimals to the hundredths. Watch your fourth grader grow confident with decimals. Get started today!
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Identifying the Place Game
In this fun math game, kids will dive into the world of decimals by identifying the tenths and hundredths places. They'll recall decimal place value concepts and choose the correct digit placement from given options. This interactive game sharpens skills in reading and writing decimals, making learning a delightful experience!
4 5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Decimals (Up To Thousandths) on Place Value Chart Game
In this exciting game, your child will dive into understanding decimals up to the thousandths place. Using a place value chart, they'll learn to identify and write decimals accurately. This game offers timely practice, helping kids master decimal concepts while having fun. It's a great way to boost math skills and confidence in dealing with decimals!
5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Building Decimal Numbers Up to Hundredths Game
Join the fun in helping little aliens return home by composing decimal numbers up to hundredths. Adjust the numbers on their rocket ships to open the gates and learn place value along the way. This game challenges students to understand expanded form and compose decimals, making math an exciting adventure. Perfect for building essential decimal skills!
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Identifying the Digit of the Given Place Value in the Decimal Number Game
This fun math game focuses on identifying digits in decimal numbers, enhancing your child's understanding of decimal place value. Kids will practice reading and writing decimals to the hundredths, gaining confidence with each play. Perfect for young mathematicians, this game makes learning decimals enjoyable and interactive. Get started and watch their skills grow!
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Choosing the Place Value of the Mentioned Decimal Number Game
This exciting game helps kids grasp the place value of decimals less than 1. By identifying the correct place value, children will build confidence and math skills. The fun challenges encourage students to apply their understanding of decimals, enhancing their ability to read and write decimals to hundredths. Perfect for young learners eager to explore decimals!
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Identifying the Digit at a Place Game
Join the fun with this engaging game focused on decimal place value! Kids will tackle problems to identify digits at specific places, helping them grasp tenths and hundredths. This interactive experience clears up common misconceptions about decimals, making math more approachable and enjoyable for young learners. Perfect for building confidence in reading and writing decimals!
5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Understanding Decimal Place Value Game
Blast off with your alien pals in this exciting game to master decimal place value! Adjust rocket ship digits to guide clients back to their home planet. Kids will tackle problems on place value while learning to read and write decimals. This fun journey helps kids become fluent in decimals. Are you ready to join the adventure and enhance your math skills?
4 5 5.NBT.3.a
VIEW DETAILS
Decimal Place Value
##### Choosing the Correct Number Placed at Tenths or Hundredths Game
In this colorful game, kids will explore decimal place values by choosing numbers at the tenths or hundredths place. Through active participation, children can overcome common misconceptions about decimals and gain confidence in reading and writing decimals to the hundredths. This game makes learning fun, ensuring kids practice and master decimal concepts with ease.
4 5 4.NF.6
VIEW DETAILS
Decimal Place Value
##### Composing and Decomposing Decimals to Hundredths Game
Join the adventure to help little aliens by calibrating their rocket ships! This interactive game lets kids compose and decompose decimals to hundredths, enhancing their understanding of place value. With each challenge, students will read and write decimals, making math fun and rewarding. Perfect for young mathematicians eager to explore decimals!
4 5 4.NF.6
VIEW DETAILS
<
View all result 26
GRADE
Decimal Place Value Games are a visually colorful blend of animation, sound, text, and puzzles that strengthen a young learner's conceptual knowledge of decimals. They inspire kids to practice extensively via a wide range of games. Playing these games is one of the best ways for kids to build a strong understanding of how to apply decimals in the context of everyday life.
Introduction
Every digit in decimal has a specific value. In the early days of learning decimals, it can be very confusing for young learners to understand decimal place values. If a number consists of a decimal point, the first digit to the right of the decimal point would indicate the number of tenths, and the second digit indicates the number of hundredths.
Interactive learning platforms like SplashLearn have reimagined how kids can learn more complex concepts like decimal place values with ease and be excited to practice and learn.
Such curriculum-aligned learning games integrate real-world examples into the experience, making it easy for kids to grasp traditional mathematics concepts. It helps to visualize decimal place values in the context of concepts such as number lines, demonstrating terms like tens, ones, tenths, and hundredths, and their significance in real life.
Decimal Place Value Learning Made More Fun
Interactive decimal place value games open up a whole new world of learning and discovery for kids. They compel them to practice more, understand the fundamentals of decimal numbers much faster, and improve their grade. Here's a look at some decimal place value games and how they are making learning fun for kids:
Identify the place value and complete the chart - Introducing kids to the various place values like tens, ones, tenths, and hundredths can be challenging. This game uses the place value chart to make learning easy. Learners are required to complete the place value chart representing decimals less than 1 or greater than 1.
Choose the correct number placed at tenths or hundredths - This colorful game requires young learners to identify the digit with a specific place value. It encourages targeted practice and helps kids identify the place values with speed and accuracy.
Compose and Decompose Decimals to Hundredths - This game raises the challenge quotient by introducing little aliens who miss their home as they are all alone on a faraway land. Learners must calibrate their rocket ships to fly them home, creating an opportunity to work with problems based on decimal place value.
Identify Decimals on a Number Line - Concepts like decimals can be confusing for kids. But introducing number lines into games is a great way to help them strengthen their conceptual understanding of decimals. In this game, learners are challenged to use their knowledge of decimal place values to identify the decimals on the number line.
Compare Decimals Using Place Value Chart - This game encourages young learners in Grade 4 to use the place value chart as a visual aid, and develops a strong foundation on comparing decimals. They must select the right answer from a set of given options to solve problems.
Order Decimals with the Help of Place Value Chart - This game helps young learners engage with abstract concepts, using visual representations. The place value chart is used as a visual help to order decimal numbers. It helps nurture confidence to solve a multitude of problems as your child navigates Grade 4.
Difficulties When Studying Decimal Place Values
Learning decimals is complicated as it is. Remembering the value of each digit in the decimal and also being able to identify them easily can be confusing for kids without a visual representation.
Include Decimals in Everyday Conversations
Decimals, like fractions, are relevant to everything we do, but they help define the exact quantities even better. One way to learn decimal place values at the grocery store when weighing the fresh fruits or vegetables you buy.
You can invite your kid to identify the decimal place values of the weights and see how they change as you add or subtract produce.
Growing our knowledge of Mathematics helps us expand our ability to be more specific in all areas of life, from budgeting to shopping. SplashLearn's interactive educational platform helps kids learn about Mathematics in the context of daily experiences and grasp these concepts for life.
//
Your one stop solution for all grade learning needs.
Give your child the passion and confidence to learn anything on their own fearlessly
Parents, Sign Up for Free
Teachers, Use for Free
4413+
4567+ |
17252 | https://intech-gmbh.ru/en/compr_calc_and_selec_examples/ | Example problems for the calculation and selection of compressors
Together
to success
since 1997
"INTECH GmbH"
EN
About Company
Corporate Policy LCC "INTECH GmbH"
Merits and Advantages
Our Team
Our Company’s Ethics and Values
Our Customers
Our customers in chemical industry
Our customers in metal and ore-dressing industries
Supplied equipment
for chemical industry
for metallurgy
Complex maintenance services
Supply of spare parts and wear components for GE (General Electric) gas turbines
Supply of wearing spare parts and components
Asset Performance Management by IT (Information Technology) solutions
Supply of spare parts and wear components for Siemens gas turbines
Production of components and spare parts for turbines and aircraft engines using precision casting method
Production and recovery of gas turbine blade
Technical Information
Contacts
Your distributor in CIS
Our partners
Intech GmbH - About the Company
Example problems for the calculation and selection of compressors
Example problems for the calculation and selection of compressors
Problem 1. Calculation of dead volume in a piston compressor
Problem 2. Determination of flow rate and consumed power of compressor equipment
Problem 3. Determination of the number of compression stages in a compressor and pressure values at each stage
Problem 4. Compressor selection based on the specified conditions
Problem 5. Calculation of actual throughput of a piston compressor
Problem 6. Calculation of throughput of a double-stage piston compressor
Problem 7. Calculation of actual throughput of a double-screw compressor
Problem 8. Calculation of power consumed by a screw compressor
Problem 9. Calculation of power consumed by a double-screw compressor
Problem 10. Calculation of power consumed by a compressor
Problem 11. Calculation of a centrifugal compressor efficiency
Problem 1. Calculation of dead volume in a piston compressor
Conditions:
The piston of a single-stage, single-cylinder and single-action compressor has diameter d = 200 mm and stroke s = 150 mm. Compressor shaft rotates at n = 120 rpm. The air inside is compressed at pressure from P1 = 0.1 MPa to P2 = 0.32 MPa. Compressor throughput is Q = 0.5 m 3/min. Polytropic coefficient m is assumed to be equal to 1.3.
Problem:
Calculate the dead volume of the cylinder V d.
Solution:
First, let’s determine the piston cross-section area F according to the formula below:
F = (π · d²)/4 = (3.14 · 0.2²)/4 = 0.0314 m 2
Let’s also calculate V p - the volume displaced by the piston per stroke:
V p = F · s = 0.0314 · 0.15 = 0.00471 m 3
From the compressor throughput formula we can find the delivery rate λ (since this is a single-action compressor, coefficient z = 1):
Q = λ · z · F · s · n
λ = Q/(z · F · s · n) = 0.5/(1 · 0.0314 · 0.15 · 120) = 0.88
Now we will use the approximation formula of delivery rate calculation to find the volumetric efficiency of the pump:
λ = λ v · (1.01 - 0.02·P 2/P 1)
λ v = λ / (1.01 - 0.02·P 2/P 1) = 0.88 / (1.01 - 0.02·0.32/0.1) = 0.93
Then, from the volumetric efficiency formula, we will express and find the cylinder dead space value:
λ v = 1 – с·[(P 2/P 1)1/m-1]
where c = V d/V p
V d = [(1-0.93) / ([0.32/0.1]1/1,3-1)] · 0.00471 = 0.000228 m 3
So, the dead volume of the cylinder is 0.000228 m 3
Problem 2. Determination of flow rate and consumed power of compressor equipment
Conditions:
A single-stage double-cylinder double-action compressor is equipped with pistons with diameter d = 0.6 m and stroke s = 0.5 m, while dead space is с = 0.036. Compressor shaft rotates at n = 180 rpm. The air inside is compressed at pressure from P1 = 0.1 MPa to P2 = 0.28 MPa and temperature t = 20 0. In the following calculations the polytropic coefficient m is assumed to be equal to 1.2, while mechanical efficiency η mech and adiabatic efficiency η ad - to be equal to 0.95 and 0.85 respectively.
Problem:
Calculate throughput Q and consumed power N of the compressor.
Solution:
First, let’s determine the piston cross-section area F according to the formula below:
F = (π · d²)/4 = (3.14 · 0.6²)/4 = 0.2826 m 2
Then, before we can find the compressor throughput, we need to determine the delivery rate. But first let’s calculate volumetric efficiency:
λ v = 1 – с·[(P 2/P 1)1/m-1] = 1 - 0.036·[(0.28/0.1)1/1.2-1] = 0.95
Now that we know the volumetric efficiency, let’s use this value to determine the delivery rate according to the formula:
λ = λ v · (1.01 – 0.02·P 2/P 1) = 0.95 · (1.01 – 0.02 · 0.28/0.1) = 0.91
So now we can find the compressor throughput Q:
Q = λ · z · F · s · n
Since it is a double-action compressor, coefficient z is 2. Since the compressor has two cylinders, the final throughput value also has to be multiplied by 2. So, we get:
Q = 2 · λ · z · F · s · n = 2 · 0.91 · 2 · 0.2826 · 0.5 · 180 = 92.6 m 3/min
Air mass flow G can be determined, where ρ is air density, which at this temperature is equal to 1.189 kg/m 3. So let’s calculate this value:
G = Q · ρ = 92.6 · 1.189 = 44 kg/min
Hourly rate is
60·G = 60·44 = 2,640 kg/h.
To determine the consumed power of the compressor we first need to calculate the gas compression energy. For this we can use the following formula:
A comp = k/(k-1) · R · t · [(P 2/P 1)(k-1)/k-1]
In this formula k is adiabatic exponent which is equal to the ratio of heat capacity at constant pressure to heat capacity at constant volume (k = С P P/C V), and for air this value is 1.4. R is gas constant equal to 8,310/M J/(kgК), where М is molar mass of gas. In case of air М is assumed to be 29 g/mole, then R = 8,310/29 = 286.6 J/(kgК).
Let’s insert the resulting values into the compression energy formula and find its value:
A comp = k/(k-1) · R · t · [(P 2/P 1)(k-1)/k-1] = 1.4/(1.4-1) · 286.6 · (273+20) · [(0.28/0.1)(1.4-1)/1.4-1] = 100,523 J/kg
After we have the air compression energy value, it is possible to determine the power consumed by the compressor according to the formula:
N = (G · A comp) / (3,600 · 1,000 · η mech · η ad) = (2,640 · 100,523) / (3,600 · 1,000 · 0.85 · 0.95) = 91.3 kW
So, we get the following results: compressor throughput is 92.6 m 3/min, consumed power is 91.3 kW
Problem 3. Determination of the number of compression stages in a compressor and pressure values at each stage
Conditions:
Ammonia has to be delivered at 160 m 3/h at the pressure of 4.5 MPa. Initial nitrogen pressure is 0.1 MPa and initial temperature – 20°C. For the calculations the maximum compression rate x is assumed to be equal to 4.
Problem:
Determine the number of compression stages in a compressor and pressure values at each stage.
Solution:
First, let’s calculate the required number of stages n using the compression rate formula:
x n = P f/P i
Let’s express and calculate n value:
n = log(Pf f/P i) / log(x) = log(4.5/0.1) / log(4) = 2.75
By rounding up the result to the next larger integer we get that there should be n = 3 stages in the compressor. Then let’s determine the compression rate of a single stage by assuming that the compression rate is the same at each individual stage.
x = n√(P f/P i) = ∛(4.5/0.1) = 3.56
Let’s find the final pressure of the first stage P n1 (n = 1), which is also the initial pressure of the second stage.
P f1 = P i · x n = 0.1 · 3.56 1 = 0.356 MPa
Let’s find the final pressure of the second stage P n2 (n = 2), which is also the initial pressure of the third stage.
P f2 = P i · x n = 0.1 · 3.56² = 1.267 MPa
So, there should be three stages in the compressor. At the first stage the pressure is increased from 0.1 MPa to 0.356 MPa, at the second one – from 0.356 MPa to 1.267 MPa, and at the third one – from 1.267 MPa to 4.5 MPa.
Problem 4. Compressor selection based on the specified conditions
Conditions:
It is necessary to provide for nitrogen supply with the throughput of Q n= 7.2 m 3/h from initial pressure P 1 = 0.1 MPa to pressure Р 2 = 0.5 MPa. Only a single-stage double-action piston compressor is available. The piston has diameter d = 80 mm and stroke s equal to 110 mm, while the dead space is 7% of the volume displaced by the piston. Compressor shaft rotation speed n is 120 rpm. Polytropic coefficient m is assumed to be equal to 1.3 in the calculations.
Problem:
Find out whether the compressor available is suitable for performing the specified task. If the compressor is not suitable, calculate the required increase in shaft rotation speed to be able to use the compressor.
Solution:
Since the dead space occupies 7% of the volume displaced by the piston, then by default, the dead space с is equal to 0.07.
Besides, let’s first calculate the piston cross-section area F:
F = (π · d²)/4 = (3.14 · 0.08²)/4 = 0.005 m 2
For further calculations it is necessary to calculate the volumetric efficiency of compressor λ v:
λ v = 1 – с·[(P 2/P 1)1/m-1] = 1 – 0.04·[(0.5/0.1)1/1,3-1] = 0.9
Now that we know λ v, let’s find the delivery rate λ:
λ = λ v · (1.01 – 0.02·(P 2/P 1)) = 0.9 · (1.01 – 0.02·0.5/0.1) = 0.82
Now we can determine the compressor throughput Q. Since it is a double-action compressor, coefficient z is 2:
Q = λ · z · F · s · n = 0.82 · 2 · 0.005 · 0.11 · 120 = 0.11 m 3/min
By expressing Q as an hourly rate, we get Q = 0.11 · 60 = 6.6 m 3/h.
Since the required delivery rate is 7.2 m 3/h, we can deduce that the available compressor is not suitable for performing the task. In this case let’s calculate the required increase in shaft rotations to meet the criteria for applicability. For this we will find the needed number of rotations from the ratio:
n n/n = Q n/Q
n n = n · Q n/Q = 120 · 7.2/6.6 = 131
In this case the compressor could be used if we increase its shaft rotation speed by 131-120 = 11 rpm.
Problem 5. Calculation of actual throughput of a piston compressor
Conditions:
A three-cylinder double-action piston compressor is available. Piston diameter d is equal to 120 mm, their stroke s is 160 mm. Shaft rotation speed n is 360 rpm. Methane is compressed from pressure P 1 = 0.3 MPa to pressure P 2 = 1.1 MPa. It is known that the volumetric efficiency λ v is 0.92.
Problem:
Calculate the actual throughput of a piston compressor.
Solution:
First, let’s determine the cross-section area of pistons F according to the formula below:
F = (π · d²)/4 = (3.14 · 0.12²)/4 = 0.0113 m 2
Based on the initial data let’s determine the delivery rate λ according to the formula:
λ = λ v · (1.01 – 0.02 ·(P 2/P 1)) = 0.92 · (1.01 – 0.02·(1.1/0.3)) = 0.86
Now we can use the formula for the calculation of a piston compressor throughput:
Q = λ · z · F · s · n
Here z is a coefficient depending on the number of suction lines of an individual piston. Since we have a double-action compressor, z is equal to 2.
Besides, since this is a three-cylinder compressor, i.e. three cylinders are operating in parallel, the total throughput of the entire compressor will be three times higher than that of a single piston, so we need to add a factor of 3 into the calculation.
From all the above, we deduce:
Q = 3 · λ · z · F · s · n = 3 · 0.86 · 2 · 0.0113 · 0.16 · 360 = 3.6 m 3/min.
So, we get that the throughput of the given piston compressor is 3.6 m 3/min or 216 m 3/h.
Problem 6. Calculation of throughput of a double-stage piston compressor
Conditions:
A double-stage single-action piston compressor is available. The piston of low-pressure stage has diameter d l = 100 mm and stroke s l equal to 125 mm. The diameter of high-pressure piston d h is equal to 80 mm, while its stroke is s h = 125 mm. Compressor shaft rotation speed n is 360 rpm. It is known that the delivery rate of the compressor λ is 0.85.
Problem:
Calculate the throughput of the compressor.
Solution:
In case of multi-stage piston compressors the data of low-pressure stage should be used for correspondence calculations, since it is there that the primary gas suction occurs, which determines the throughput of the entire compressor. The data of subsequent stages are not used for throughput calculations, since there is no additional gas suction of compressed gas at those stages. Hence, to solve this task it is sufficient to know diameter d l and piston stroke s l of low-pressure stage.
Let’s calculate the cross-sectional area of the low-pressure stage piston:
F l = (π · d l ²)/4 = (3.14 · 0.1²)/4 = 0.00785 m 2
The given compressor is not a multi-piston type and has a simple principle of operation (z = 1), hence, the final formula for the calculation of throughput in a specific case will look as follows:
Q = λ · F l · s l · n = 0.85 · 0.00785 · 0.125 · 360 = 0.3 m 3/min
So, the throughput of this piston compressor is 0.3 m 3/min or, converted to an hourly rate, 18 m 3/h.
Problem 7. Calculation of actual throughput of a double-screw compressor
Conditions:
A double-screw compressor is available. The compressor drive shaft rotates at n=750 rpm and has the following channel characteristics: z=4, length L=20 cm. Besides, it is known that the channel cross-section area of the drive shaft is F 1=5.2 cm 2, and the equivalent value for the idle shaft F 2 is 5.8 cm 2. In the calculations it is assumed that throughput coefficient λ th is equal to 0.9.
Problem:
Calculate the actual throughput of a double-screw compressor V a.
Solution:
Before the calculation of actual throughput let’s determine the value of theoretical throughput, which does not account for the inevitable backflows of gas through the gaps between compressor rotors and casing.
V t = L·z·n·(F 1+F 2) = 0.2·4·750·(0.052+0.058) = 66 m 3/min
Since the throughput index accounting for gas backflows is known, it is possible o determine the actual throughput of this double-screw compressor:
V a = λ th·V t = 0.9·66 = 59.4 m 3/min
So, the throughput of this double-screw compressor is 59.4 m 3/min.
Problem 8. Calculation of power consumed by a screw compressor
Conditions:
A screw compressor is available designed for air pressure boosting from P 1=0.6 MPa to P 2=1.8 MPa. The theoretical throughput of the compressor V th is 3 m 3/min. In the calculation adiabatic efficiency η ad is assumed to be equal to 0.76, and adiabatic k-value for air - 1.4.
Problem:
Calculate consumed power of the compressor N c.
Solution:
For the calculation of theoretical power of adiabatic compression of a screw compressor let’s use the following formula:
N ad = P 1 · V T · [k/(k-1)] · [(P 2/P 1)(k-1)/k - 1] = 600,000 · 3/60 · 1.4/(1.4-1) · [(1.8/0.6)(1.4-1)/1.4 - 1] · 10-3 = 38.7 kW
Now that we know N ad, we can calculate consumed power of a dry-type compressor:
N = N ad/η ad = 38.7/0.76 = 51 kW
So, consumed power of this double-screw compressor is equal to 50 kW.
Problem 9. Calculation of power consumed by a double-screw compressor
Conditions:
A double-screw compressor is available with the throughput of Q=10 m 3/min. Its medium is air at the temperature of t=20 0 C. The air is compressed at pressure from P 1=0.1 MPa to P 2=0.6 MPa. It is known that the backflow value in the compressor β bf is 0.02. The internal adiabatic efficiency of the compressor η ad is equal to 0.8, while mechanical efficiency η mech is 0.95. In the calculations adiabatic k-value for air is assumed to be 1.4, and gas constant for air R - 286 J/(kgК).
Problem:
Calculate consumed power of the compressor N.
Solution:
Let’s determine specific energy of the compressor A sp:
A sp = R · T a · [k/(k-1)] · [(P 2/P 1)(k-1)/k-1] = 286 · [20+273] · [1.4/(1.4-1)] · [(0.6/0.1)(1.4-1)/1.4-1] = 196,068 J/kg
Then we can calculate the mass flow rate of air G by assuming that at 20°C air density ρ a is 1.2 kg/m 3:
G = Q·ρ a = 10·1.2 = 12 kg/min
When calculating the compressor power it is necessary to take into account the backflows of working media, since additional power is required to compensate for those. Let’s determine the total compressor flow rate G tot considering the backflow leakages:
G tot = G·(1+β bf) = 12·(1+0.02) = 12.24 kg/min
Now it is necessary to determine compressor power considering adiabatic and mechanical efficiency:
N = (G tot·A sp) / (η ad·η mech) = (12.24·196068) / (60·1000·0.8·0.95) = 52.6 kW
So, the power of this compressor is equal to 52.6 kW.
Problem 10. Calculation of power consumed by a compressor
Conditions:
A centrifugal triple-stage single-section compressor is available with identical wheels. The compressor operates at volume flow rate V of 120 m 3/min of air at temperature t=20°C (in this case air density ρ will be equal to 1.2 kg/m 3). Besides, it is known that the circumferential speed of the wheel is 260 m/s, and theoretical head coefficient of the stage ϕ is 0.85. Total efficiency of the compressor η is 0.9. For the first stage friction loss coefficient β f is 0.007, leakage loss coefficient β l is 0.009, and let’s assume that for subsequent stages the losses will increase by 1%.
Problem:
Calculate consumed power of the compressor N.
Solution:
The power consumed for gas compression may be calculated as follows:
N in = V · ρ · ∑[u² i · φ i · (1+β f+β l)i]
where i is the number of stages. Since the the conditions specify that all the wheels within a section are the same, they have the same circumferential velocities u and theoretical head coefficients ϕ, so the formula may be transformed as follows:
N in = V · ρ · u² · φ · ∑(1+β f+β l)i
For the first stage:
1 + β f + β l = 1 + 0.007 + 0.009 = 1.016
Then, assuming that the losses at the next stage increase by 1%, let’s determine 1+β f+β l for the second stage:
1.016·1.01 = 1.026
For the third stage:
1.026·1.01 = 1.036
So, we get that:
N in = 120/60 · 1.2 · 260² · 0.85 · (1.016+1.026+1.036) · 10-3 = 424.5 kW
Now it is possible to calculate consumed power of the compressor:
N = N in/η = 424.5/0.9 = 471.7 W
So, the power of this compressor is equal to 471.7 kW.
Problem 11. Calculation of a centrifugal compressor efficiency
Conditions:
A centrifugal double-stage single-section compressor is available with identical wheels. Compressor pumps air at temperature t=20°C (density ρ at these conditions is equal to 1.2 kg/m 3) and flow rate V=100 m 3/min from initial pressure P 1=0.1 MPa to final pressure P 2=0.25 MPa. Circumferential speed of wheels u is 245 m/s, theoretical head ϕ is equal to 0.82. The total friction and leakage losses (1+ β f + β l) for the first stage is equal to 1.012, for the second stage - 1.019. Gas compression occurs in isentropic process. In the calculations adiabatic k-value for air is assumed to be 1.4, and gas constant for air R - 286 J/(kgК). Gas for this problem is assumed to be incompressible (compressibility factor z=1).
Problem:
It is necessary to calculate isentropic efficiency of the compressor η is.
Solution:
Isentropic efficiency is a relation of gas compression power in isentropic process N is to internal compression power of the compressor N in. From here we can deduce that to find the target value it is first necessary to determine N in and N is.
Gas compression in isentropic mode may be determined according to the formula:
N is = V · ρ · z · R · (273+t) · k/(k-1) · [(P 2/P 1)(k-1)/k-1] =
= 100/60 · 1.2 · 1 · 286 ·(273+20) · 1.4/(1.4-1) · [(0.25/0.1)(1.4-1)/1.4-1] · 10-3 = 175.5 kW
Internal power of the compressor may be calculated as follows:
N in = V · ρ · ∑[u i 2 · φ i · (1+β f+β l)i] = 100/60 · 1.2 · 245² · 0.82 · (1.012+1.019) = 200 kW.
Now we can determine the target value:
η is = N is/N in = 175.5/200 = 0.88
So, the isentropic efficiency of this double-stage single-section compressor is equal to 0.88.
Air and gas blowers
Axial vane compressors
Booster compressor stations
Compressors
Diaphragm compressors
Example problems for the calculation and selection of compressors
Examples of Pipeline Calculation and Selection Problems with Solutions
Main characteristics of a compressor. Throughput and power
Pipeline Design and Selection. Optimum Pipeline Diameter
Positive displacement compressors
Reciprocating compressors
Rotor compressor (Rotor compressor lines)
Screw compressors
Basic equipment calculation and selection
Examples of centrifuge calculations
Examples of Pipeline Calculation and Selection Problems with Solutions
Example problems for the calculation and selection of compressors
Filters calculation and selection
Hydrocyclone Filter Calculation Parameters
Main characteristics of a compressor. Throughput and power
Pipeline Design and Selection. Optimum Pipeline Diameter
The calculation problems for selecting filters
Thermal Calculations
Расчет компрессоров. Подбор компрессорного оборудования
Heat exchangers calculation and selection
Allgemeine Informationen zur Auswahl und Auslegung von Verdichtereinrichtungen
Descripción general de cálculo y selección de equipos de compresión
Informations générales pour calculer les caractéristiques et choisir un compresseur
Scelta dei compressori
© 2025 INTECH GmbH
Sitemap |
17253 | https://www.publichealthontario.ca/-/media/documents/C/2017/cds-spaulding-table.pdf | Spaulding's Classification of Medical Equipment/Devices and Required Level of Processing/Reprocessing This chart is an excerpt from Best Practices for Cleaning, Disinfection and Sterilization of Medical Equipment/Devices. It outlines Spaulding Classification which is the instrument classification system used for reprocessing decisions. For more information, please visit www.publichealthontario.ca or email ipac@oahpp.ca. Classification Definition Level of Processing/ Reprocessing Examples CRITICAL Equipment/Device Equipment/device that enters sterile tissues, including the vascular system • Cleaning followed by Sterilization • Surgical instruments • Implants • Biopsy instruments • Foot care equipment • Eye and dental equipment SEMICRITICAL Equipment/Device Equipment/device that comes in contact with non-intact skin or mucous membranes but does not penetrate them • Cleaning followed by High- Level Disinfection (as a minimum) • Sterilization is preferred • Respiratory therapy equipment • Anaesthesia equipment • Tonometer NONCRITICAL Equipment/Device Equipment/device that touches only intact skin and not mucous membranes, or does not directly touch the client/patient/resident • Cleaning followed by Low- Level Disinfection (in some cases, cleaning alone is acceptable) • ECG machines • Oximeters • Bedpans, urinals, commodes References Spaulding E. The role of chemical disinfection in the prevention of nosocomial infections. In: Proceedings of the International Conference on Nosocomial Infections, 1970. Chicago, IL: American Hospital Association; 1971. p. 247-54.Ontario Agency for Health Protection and Promotion logo |
17254 | https://www.albert.io/blog/first-derivative-test-and-local-extrema-ap-calculus-ab-bc-review/ | Skip to content
First Derivative Test and Local Extrema: AP® Calculus AB-BC Review
The first derivative test is a valuable tool in calculus. It helps determine when a function has local maxima or local minima, often called local extrema or relative extrema. In many AP® Calculus courses, including 5.4 the first derivative test, students learn to apply this concept to various functions. This article explains what the first derivative test is, why it matters, and how to find relative extrema effectively. By the end, the steps for analyzing critical points and classifying maxima or minima should become clearer and more approachable.
Start practicing AP® Calculus AB-BC on Albert now!
What We Review
What Is the First Derivative Test?
Definition and Significance
A first derivative test involves examining the sign of the derivative around critical points. When the derivative switches from positive to negative, that point is a local maximum. Conversely, if the derivative changes from negative to positive, there is a local minimum. Therefore, this is an essential procedure in determining local extrema.
In simpler terms, consider a function f(x). Its first derivative, f′(x), tells whether f(x) is increasing or decreasing within particular intervals. Consequently, a local maximum occurs at a point where the function stops rising and begins falling. A local minimum appears at a point where the function stops falling and begins rising.
Conditions for Applying the First Derivative Test
First, a function must be differentiable (or at least continuous) near the point of interest. Next, it is necessary to locate critical points, which are points where f′(x)=0 or f′(x) is undefined. To confirm whether each critical point is a local maximum, local minimum, or neither, the derivative’s sign is examined in intervals around that point.
However, remember that certain types of functions also require attention to endpoints if the interval is closed. Though endpoints are not always labeled “critical points,” they can be locations for local extrema too. Therefore, it is important not to ignore boundary points unless the problem specifically focuses on open intervals.
How to Find Local (Relative) Extrema
Identifying Critical Points
The first step in how to find relative extrema is discovering where the derivative equals zero or is undefined. Set f′(x)=0 to find potential turning points. Also, verify that the derivative exists at those points. If the derivative does not exist, the point may still be critical, so do not skip that step.
For example, if f′(x) is undefined at x=a, that point might still be a location of local extrema. Always include such points in the analysis.
Applying the First Derivative Test
After identifying the critical points, determine the sign of f′(x) on intervals around each critical point. Some students use sign charts or tables to keep track of whether the function is increasing (f′(x)>0) or decreasing (f′(x)<0).
If f′(x) changes from positive to negative at a critical point, that point is a local maximum.
If f′(x) changes from negative to positive, that point is a local minimum.
If f′(x) does not change sign, the point is not a local maximum or minimum.
Ready to boost your AP® scores? Explore our plans and pricing here!
Step-by-Step Example #1
Consider the function f(x)=x3−3x2+4. The goal is to use the first derivative test to find local extrema.
Find the derivative: f′(x)=3x2−6x.
Solve f′(x)=0: 3x2−6x=0. Factor out 3x: 3x(x−2)=0. Hence, x=0 or x=2.
Analyze intervals:
For x<0, pick a test value like x=−1. Then f′(−1)=3(−1)2−6(−1)=3+6=9>0. So the function is increasing on that interval.
Between 0 and 2, pick x=1. Then f′(1)=3(1)2−6(1)=3−6=−3<0. The function is decreasing on (0, 2).
For x>2, pick x=3. Then f′(3)=3(3)2−6(3)=27−18=9>0. The function is increasing for x>2.
Classify critical points:
At x=0, the derivative changes from positive to negative, indicating a local maximum.
At x=2, the derivative switches from negative to positive, indicating a local minimum.
Therefore, x=0 is a local maximum and x=2 is a local minimum.
Step-by-Step Example #2
Now consider a rational function, g(x)=x−12x. Apply the same method to find local (relative) extrema.
Differentiate the function: g′(x)=(x−1)2(2)(x−1)−(2x)(1). Simplify the numerator: g′(x)=(x−1)22x−2−2x=(x−1)2−2.
Identify critical points: Since g′(x)=−2/(x−1)2, the fraction is never zero (the numerator is -2, which is constant). However, g′(x) is undefined at x=1. Thus, x=1 is a critical point because the derivative does not exist there.
Analyze intervals around x=1:
For x<1, pick x=0. Then g′(0)=(0−1)2−2=1−2=−2<0. This indicates that g(x) is decreasing on that interval.
For x>1, pick x=2. Then g′(2)=(2−1)2−2=1−2=−2<0. The function is also decreasing for x>1.
Classify the critical point: Since the derivative is negative on both sides of x=1, there is no sign change. This means x=1 is neither a local maximum nor a local minimum. Even though x=1 is a critical point, the first derivative test reveals no local extrema there.
Common Pitfalls and Tips
Failing to test both sides of a critical point leads to incomplete conclusions.
Forgetting that some functions have points where the derivative is undefined can cause missing critical points.
Listing intervals around each critical point helps visualize sign changes. It also reduces confusion by organizing values methodically.
In some contexts, check endpoints for potential local extrema if the domain is closed, such as [a,b].
Quick Reference Chart
| | |
--- |
| Term | Definition |
| Critical Point | A point where f′(x)=0 or f′(x) is undefined. |
| Local Maximum | A point where f′(x) changes from positive to negative. |
| Local Minimum | A point where f′(x) changes from negative to positive. |
| Relative Extrema | Another term for local maxima or minima. |
| First Derivative Test | A method that determines local maxima or minima by analyzing the sign of f′(x) around critical points. |
Conclusion
The first derivative test is a central idea in calculus for identifying local (relative) extrema. It explains when a function stops climbing and starts descending (local maximum) or when it transitions from falling to rising (local minimum). In 5.4 the first derivative test, students gain experience applying these steps, from taking the derivative to classifying the nature of critical points. Understanding what the first derivative test is prepares students for more advanced topics, such as the second derivative test and inflection points. Familiarity with how to find relative extrema across different types of functions will also build confidence in handling more complicated calculus problems.
Sharpen Your Skills for AP® Calculus AB-BC
Are you preparing for the AP® Calculus exam? We’ve got you covered! Try our review articles designed to help you confidently tackle real-world math problems. You’ll find everything you need to succeed, from quick tips to detailed strategies. Start exploring now!
5.3 Determining Intervals on Which a Function Is Increasing or Decreasing
5.5 Using the Candidates Test to Determine Absolute (Global) Extrema
Need help preparing for your AP® Calculus AB-BC exam?
Albert has hundreds of AP® Calculus AB-BC practice questions, free responses, and an AP® Calculus AB-BC practice test to try out.
Start practicing AP® Calculus AB-BC on Albert now! |
17255 | https://www.chegg.com/homework-help/questions-and-answers/problem-2-air-pressure-37-psi-temperature-60-f-forced-tire-volume-17-f-velocity-180-ft-s-1-q58183637 | Solved Problem 2. Air at a pressure of 37 psi and a | Chegg.com
Skip to main content
Books
Rent/Buy
Read
Return
Sell
Study
Tasks
Homework help
Understand a topic
Writing & citations
Tools
Expert Q&A
Math Solver
Citations
Plagiarism checker
Grammar checker
Expert proofreading
Career
For educators
Help
Sign in
Paste
Copy
Cut
Options
Upload Image
Math Mode
÷
≤
≥
o
π
∞
∩
∪
√
∫
Math
Math
Geometry
Physics
Greek Alphabet
Engineering
Civil Engineering
Civil Engineering questions and answers
Problem 2. Air at a pressure of 37 psi and a temperature of 60°F is being forced into a tire, which has a volume of 17 fť, at a velocity of 180 ft/s through a 14-in.-diameter opening. Determine the rate of change ap of density in the tire in units of slugs/(ft'.s). at ( ) Figure 2. Tire I Linodia # Universal gas constant R = 1716 ft.lb/slug. OR Conversion:
Your solution’s ready to go!
Our expert help has broken down your problem into an easy-to-learn solution you can count on.
See Answer See Answer See Answer done loading
Question: Problem 2. Air at a pressure of 37 psi and a temperature of 60°F is being forced into a tire, which has a volume of 17 fť, at a velocity of 180 ft/s through a 14-in.-diameter opening. Determine the rate of change ap of density in the tire in units of slugs/(ft'.s). at ( ) Figure 2. Tire I Linodia # Universal gas constant R = 1716 ft.lb/slug. OR Conversion:
Fluid mechanics problem, need help asap, thank you!! <3 Show transcribed image text
Here’s the best way to solve it.Solution Share Share Share done loading Copy link View the full answer Previous questionNext question
Transcribed image text:
Problem 2. Air at a pressure of 37 psi and a temperature of 60°F is being forced into a tire, which has a volume of 17 fť, at a velocity of 180 ft/s through a 14-in.-diameter opening. Determine the rate of change ap of density in the tire in units of slugs/(ft'.s). at ( ) Figure 2. Tire I Linodia # Universal gas constant R = 1716 ft.lb/slug. OR Conversion: 'R= °F + 460 Standard atmospheric pressure = 14.7 psi (abs) Volume of=174
Not the question you’re looking for?
Post any question and get expert help quickly.
Start learning
Chegg Products & Services
Chegg Study Help
Citation Generator
Grammar Checker
Math Solver
Mobile Apps
Plagiarism Checker
Chegg Perks
Company
Company
About Chegg
Chegg For Good
Advertise with us
Investor Relations
Jobs
Join Our Affiliate Program
Media Center
Chegg Network
Chegg Network
Busuu
Citation Machine
EasyBib
Mathway
Customer Service
Customer Service
Give Us Feedback
Customer Service
Manage Subscription
Educators
Educators
Academic Integrity
Honor Shield
Institute of Digital Learning
© 2003-2025 Chegg Inc. All rights reserved.
Cookie NoticeYour Privacy ChoicesDo Not Sell My Personal InformationGeneral PoliciesPrivacy PolicyHonor CodeIP Rights
Do Not Sell My Personal Information
When you visit our website, we store cookies on your browser to collect information. The information collected might relate to you, your preferences or your device, and is mostly used to make the site work as you expect it to and to provide a more personalized web experience. However, you can choose not to allow certain types of cookies, which may impact your experience of the site and the services we are able to offer. Click on the different category headings to find out more and change our default settings according to your preference. You cannot opt-out of our First Party Strictly Necessary Cookies as they are deployed in order to ensure the proper functioning of our website (such as prompting the cookie banner and remembering your settings, to log into your account, to redirect you when you log out, etc.). For more information about the First and Third Party Cookies used please follow this link.
More information
Allow All
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information.
Functional Cookies
[x] Functional Cookies
These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly.
Performance Cookies
[x] Performance Cookies
These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance.
Sale of Personal Data
[x] Sale of Personal Data
Under the California Consumer Privacy Act, you have the right to opt-out of the sale of your personal information to third parties. These cookies collect information for analytics and to personalize your experience with targeted ads. You may exercise your right to opt out of the sale of personal information by using this toggle switch. If you opt out we will not be able to offer you personalised ads and will not hand over your personal information to any third parties. Additionally, you may contact our legal department for further clarification about your rights as a California consumer by using this Exercise My Rights link.
If you have enabled privacy controls on your browser (such as a plugin), we have to take that as a valid request to opt-out. Therefore we would not be able to track your activity through the web. This may affect our ability to personalize ads according to your preferences.
Targeting Cookies
[x] Switch Label label
These cookies may be set through our site by our advertising partners. They may be used by those companies to build a profile of your interests and show you relevant adverts on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device. If you do not allow these cookies, you will experience less targeted advertising.
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Reject All Confirm My Choices |
17256 | https://www.merriam-webster.com/rhymes/syn/anomie | Related Words for anomie
| Word | Syllables | Categories |
---
| | | |
---
| ennui | x/x | Noun |
| alienation | xxx/x | Noun, Verb |
| malaise | x/ | Noun |
| disaffection | xx/x | Noun |
| nihilism | /xxx | Noun |
| loneliness | /xx | Noun |
| hopelessness | /xx | Noun |
| melancholia | /xxx | Noun |
| disillusionment | xx/xx | Noun |
| anarchy | /xx | Noun |
| apathy | /xx | Noun |
| disorientation | xxxx/x | Noun |
| despondency | x/xx | Noun |
| depersonalization | x/xx/x | Noun |
| maladjustment | xx/x | Noun |
| meaninglessness | /xxx | Noun |
| fatalism | /xxx | Noun |
| restlessness | /xx | Noun |
| neurosis | x/x | Noun |
| angst | / | Noun |
| disenchantment | xx/x | Noun |
| marginality | /x/xx | Noun |
| demoralization | xxxx/x | Noun |
| boredom | /x | Noun |
| existential | xx/x | Adjective |
| unease | x/ | Noun |
| helplessness | /xx | Noun |
| emptiness | /xx | Noun |
| communality | /x/xx | Noun |
| decadence | /xx | Noun |
| abjection | //x | Noun |
| insularity | xx/xx | Noun |
| individualism | xxx/xxx | Noun |
| dislocation | x//x | Noun |
| melancholy | /xxx | Noun |
| paranoia | xx/x | Noun |
| languor | // | Noun |
| ambivalence | x/xx | Noun |
| miasma | x/x | Noun |
| despair | x/ | Noun |
| disjuncture | x/x | Noun |
| mystification | /xx/x | Noun |
| puritanism | /xxxx | Noun |
| dejection | x/x | Noun |
| discontent | xx/ | Noun |
| squalor | /x | Noun |
| sociological | xxx/xx | Name |
| transience | /xx | Noun |
| cosmopolitanism | xx/xxxx | Noun |
| hedonism | /xxx | Noun |
| solipsism | x/xx | Noun |
| subcultural | /xxx | Adjective |
| aporia | x/xx | Noun |
| misery | /xx | Noun |
| bleakness | /x | Noun |
| desolation | xx/x | Noun |
| bureaucratization | x/xx/x | Noun |
| deviance | /xx | Noun |
| stagnation | x/x | Noun |
| lassitude | /xx | Noun |
| disenfranchisement | xx/xx | Noun |
| degeneracy | x/xxx | Noun |
| dissatisfaction | xxx/x | Noun |
| societal | x/xx | Adjective |
| insecurity | xx/xx | Noun |
| desperation | xx/x | Noun |
| disquiet | x/x | Noun |
| passivity | x/xx | Noun |
| powerlessness | /xxx | Noun |
| collectivity | xx/xx | Noun |
| incomprehension | xxx/x | Noun |
| uneasiness | x/xx | Noun |
| utopianism | x/xxx | Noun |
| deviancy | /xxx | Noun |
| bewilderment | x/xx | Noun |
| sociality | xx/xx | Noun |
| impersonality | xxx/xx | Noun |
| lethargy | /xx | Noun |
| cynicism | /xxx | Noun |
| anguish | /x | Noun |
| incoherence | xx/x | Noun |
| radicalism | /xxxx | Noun |
| destitution | /xxx | Noun |
| existentialism | xx/xxx | Noun |
| postmodern | x/x | Noun |
| indifference | x/xx | Noun |
| underclass | /xx | Noun |
| anxiety | x/xx | Noun |
| unsettling | x/xx | Adjective |
| aberration | xx/x | Noun |
| nonconformity | xx/xx | Noun |
| dystopia | //xx | Noun |
| distaste | x/ | Noun |
| fall from grace | /// | Phrase, Noun, Verb |
| decadent | /xx | Adjective |
| pollution | x/x | Noun |
| lawlessness | /xx | Noun |
| egoism | /xxx | Noun |
| disorganization | xxxx/x | Noun |
| redefinition | xxx/x | Noun |
| authoritarianism | xxx/xxxx | Noun |
| herbert | /x | Name |
| abstract | x/ | Adjective, Name, Verb |
| defect | /x | Noun, Verb |
| demographic | xx/x | Adjective, Noun |
| disharmony | x/xx | Noun |
| economic | xx/x | Adjective |
| elastic | x/x | Adjective, Noun |
| extensional | x/xx | Adjective |
| fragmentation | xx/x | Noun |
| geomorphological | xxxx/xx | Adjective |
| inelastic | xx/x | Adjective |
| lattice | /x | Noun, Verb |
| macro | /x | Adjective, Noun |
| macroscopic | /x/x | Adjective |
| marginalization | xxxx/x | Noun |
| octahedral | xx/x | Adjective |
| permissiveness | x/xx | Noun |
| poverty | /xx | Noun |
| tectonic | x/x | Adjective |
| turmoil | /x | Noun, Verb |
| viscoelastic | x/x/x | Adjective, Noun |
| stonewall | /x | Name, Verb, Adjective |
| shirtless | /x | Adjective |
| bulletproof | /xx | Adjective, Verb |
| no | / | Noun, Adverb, Adjective |
| anymore | xx/ | Adverb |
| sorta | /x | Adverb |
| literally | /xxx | Adverb |
| milestone | /x | Noun, Verb |
More Ideas for anomie
Adjectives for anomie:
Can you solve 4 words at once?
Can you solve 4 words at once?
Word of the Day
obliterate
See Definitions and Examples »
Get Word of the Day daily email!
Filter
Learn a new word every day. Delivered to your inbox!
© 2025 Merriam-Webster, Incorporated |
17257 | https://www.cs.umb.edu/~bangtqh/teaching/220_S2022/adm07-27.pdf | 7/27/22 1 Recurrence Relations Section 8.2 - 8.3 in the textbook 1 Recurrence Relations A recurrence relation for the sequence {𝑎!} is an equation that expresses 𝑎𝑛is terms of one or more of the previous terms of the sequence, namely, 𝑎0, 𝑎1, … , 𝑎!"#, for all integers 𝑛with 𝑛³ 𝑛0, where n0 is a nonnegative integer.
A sequence is called a solution of a recurrence relation if its terms satisfy the recurrence relation.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 2 2 7/27/22 2 Recurrence Relations In other words, a recurrence relation is like a recursively defined sequence, but without specifying any initial values (initial conditions).
Therefore, the same recurrence relation can have (and usually has) multiple solutions.
If both the initial conditions and the recurrence relation are specified, then the sequence is uniquely determined.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 3 3 Recurrence Relations Example: Consider the recurrence relation 𝑎𝑛= 2𝑎!"#– 𝑎!"$ for 𝑛= 2, 3, 4, … Is the sequence {𝑎!} with 𝑎𝑛= 3𝑛a solution of this recurrence relation?
For 𝑛³ 2 we see that 2𝑎!"#– 𝑎!"$ = 2(3(𝑛– 1)) – 3(𝑛– 2) = 3𝑛= 𝑎!.
Therefore, {𝑎!} with 𝑎𝑛= 3𝑛is a solution of the recurrence relation.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 4 4 7/27/22 3 Recurrence Relations 𝑎𝑛= 2𝑎!"#– 𝑎!"$ for 𝑛= 2, 3, 4, … Is the sequence {𝑎!} with 𝑎! = 5 a solution of the same recurrence relation?
For 𝑛³ 2 we see that: 2𝑎!"#– 𝑎!"$ = 2×5 −5 = 5 = 𝑎!.
Therefore, {𝑎!} with 𝑎𝑛= 5 is also a solution of the recurrence relation.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 5 5 Modeling with Recurrence Relations Example: Someone deposits $10,000 in a savings account at a bank yielding 5% per year with interest compounded annually. How much money will be in the account after 30 years?
Solution: Let 𝑃 ! denote the amount in the account after 𝑛years.
How can we determine 𝑃 ! on the basis of 𝑃!"#?
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 6 6 7/27/22 4 Modeling with Recurrence Relations We can derive the following recurrence relation: 𝑃 ! = 𝑃 !"# + 0.05𝑃 !"# = 1.05𝑃 !"#.
The initial condition is 𝑃 $ = 10,000.
Then we have: 𝑃1 = 1.05𝑃0 𝑃2 = 1.05𝑃1 = (1.05)2𝑃0 𝑃3 = 1.05𝑃2 = (1.05)3𝑃0 … 𝑃𝑛= 1.05𝑃 !"# = (1.05)𝑛𝑃0 We now have a formula to calculate 𝑃𝑛for any natural number 𝑛and can avoid the iteration.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 7 7 Modeling with Recurrence Relations Let us use this formula to find 𝑃&' under the initial condition 𝑃' = 10,000: 𝑃&' = 1.05 &'×10,000 = 43,219.42 After 30 years, the account contains $43,219.42.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 8 8 7/27/22 5 Modeling with Recurrence Relations Another example: Let 𝑎! denote the number of bit strings of length 𝑛that do not have two consecutive 0s (“valid strings”). Find a recurrence relation and give initial conditions for the sequence {𝑎!}.
Solution: Idea: The number of valid strings equals the number of valid strings ending with a 0 plus the number of valid strings ending with a 1.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 9 9 Modeling with Recurrence Relations Let us assume that 𝑛³ 3, so that the string contains at least 3 bits.
Let us further assume that we know the number 𝑎!"#of valid strings of length (𝑛– 1) and the number 𝑎!"$ of valid strings of length (𝑛– 2).
Then how many valid strings of length 𝑛are there, if the string ends with a 1?
There are 𝑎!"# such strings, namely the set of valid strings of length (𝑛– 1) with a 1 appended to them.
Note: Whenever we append a 1 to a valid string, that string remains valid.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 10 10 7/27/22 6 Modeling with Recurrence Relations Now we need to know: How many valid strings of length n are there, if the string ends with a 0?
Valid strings of length n ending with a 0 must have a 1 as their (n – 1)st bit (otherwise they would end with 00 and would not be valid).
And what is the number of valid strings of length (𝑛– 1) that end with a 1?
We already know that there are 𝑎!"# strings of length n that end with a 1.
Therefore, there are 𝑎!"$ strings of length (𝑛– 1) that end with a 1.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 11 11 Modeling with Recurrence Relations So there are 𝑎!"$ valid strings of length n that end with a 0 (all valid strings of length (n – 2) with 10 appended to them).
As we said before, the number of valid strings is the number of valid strings ending with a 0 plus the number of valid strings ending with a 1.
That gives us the following recurrence relation: 𝑎! = 𝑎!"# + 𝑎!"$ Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 12 12 7/27/22 7 Modeling with Recurrence Relations What are the initial conditions?
a1 = 2 (0 and 1) a2 = 3 (01, 10, and 11) a3 = a2 + a1 = 3 + 2 = 5 a4 = a3 + a2 = 5 + 3 = 8 a5 = a4 + a3 = 8 + 5 = 13 … This sequence satisfies the same recurrence relation as the Fibonacci sequence.
Since a1 = f3 and a2 = f4, we have an = fn+2.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 13 13 Solving Recurrence Relations In general, we would prefer to have an explicit formula to compute the value of an rather than conducting n iterations.
For one class of recurrence relations, we can obtain such formulas in a systematic way.
Those are the recurrence relations that express the terms of a sequence as linear combinations of previous terms.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 14 14 7/27/22 8 Solving Recurrence Relations Definition: A linear homogeneous recurrence relation of degree 𝑘with constant coefficients is a recurrence relation of the form: an = c1an-1 + c2an-2 + … + ckan-k, Where c1, c2, …, ck are real numbers, and ck ¹ 0. A sequence satisfying such a recurrence relation is uniquely determined by the recurrence relation and the k initial conditions a0 = C0, a1 = C1, a2 = C2, …, ak-1 = Ck-1.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 15 15 Solving Recurrence Relations Examples: The recurrence relation Pn = (1.05)Pn-1 is a linear homogeneous recurrence relation of degree one.
The recurrence relation fn = fn-1 + fn-2 is a linear homogeneous recurrence relation of degree two.
The recurrence relation an = an-5 is a linear homogeneous recurrence relation of degree five.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 16 16 7/27/22 9 Solving Recurrence Relations Basically, when solving such recurrence relations, we try to find solutions of the form 𝒂𝒏= 𝒓𝒏, where 𝑟is a constant, 𝑎! = 𝑟! is a solution of the recurrence relation 𝑎! = 𝑐#𝑎!"# + 𝑐$𝑎!"$ + ⋯+ 𝑐)𝑎!") if and only if 𝑟! = 𝑐#𝑟!"# + 𝑐$𝑟!"$ + ⋯+ 𝑐)𝑟!").
Divide this equation by rn-k and subtract the right-hand side from the left: 𝑟) −𝑐#𝑟)"# + 𝑐$𝑟)"$ + ⋯+ 𝑐) = 0 This is called the characteristic equation of the recurrence relation.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 17 17 Solving Recurrence Relations The solutions of this equation are called the characteristic roots of the recurrence relation.
Let us consider linear homogeneous recurrence relations of degree two.
Theorem: Let 𝑐1 and 𝑐2 be real numbers. Suppose that 𝑟$– 𝑐#𝑟– 𝑐$ = 0 has two distinct roots 𝑟1 and 𝑟2. Then the sequence {𝑎𝑛} is a solution of the recurrence relation 𝑎! = 𝑐#𝑎!"# + 𝑐$𝑎!"$ ↔𝑎! = 𝛼#𝑟 # ! + 𝛼$𝑟$ ! for 𝑛= 0, 1, 2, … , where 𝛼# and 𝛼$ are constants.
The proof is shown on pp. 542-543 of the textbook Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 18 18 7/27/22 10 Solving Recurrence Relations Example: What is the solution of the recurrence relation an = an-1 + 2an-2 where n≥2 a0 = 2 a1 = 7 ?
Solution: The characteristic equation of the recurrence relation is r2 – r – 2 = 0.
Its roots are r = 2 and r = -1.
Hence, the sequence {an} is a solution to the recurrence relation if and only if: an = a12n + a2(-1)n for some constants a1 and a2.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 19 19 Solving Recurrence Relations Given the equation an = a12n + a2(-1)n and the initial conditions a0 = 2 and a1 = 7, it follows that: a0 = 2 = a1 + a2 a1 = 7 = a1×2 + a2 ×(-1) Solving these two equations gives us a1 = 3 and a2 = -1.
Therefore, the solution to the recurrence relation and initial conditions is the sequence {an} with an = 3×2n – (-1)n. Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 20 20 7/27/22 11 Solving Recurrence Relations Another Example: Give an explicit formula for the Fibonacci numbers.
Solution: The Fibonacci numbers satisfy the recurrence relation fn = fn-1 + fn-2 with initial conditions f0 = 0 and f1 = 1.
The characteristic equation is r2 – r – 1 = 0.
Its roots are 𝑟 # = # + $ , 𝑟$ = #" + $ Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 21 21 Solving Recurrence Relations Therefore, the Fibonacci numbers are given by: 𝑓 ! = 𝛼# # + $ !
+ 𝛼$ #" + $ !
for some 𝛼# and 𝛼$ We can determine values for these constants so that the sequence meets the conditions f0 = 0 and f1 = 1: 𝑓 ' = 𝛼# + 𝛼$ = 0 𝑓 # = 𝛼# 1 + 5 2 + 𝛼$ 1 − 5 2 = 1 Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 22 22 7/27/22 12 Solving Recurrence Relations The unique solution to this system of two equations and two variables is 𝛼# = 1 5 , 𝛼$ = −1 5 So finally, we obtained an explicit formula for the Fibonacci numbers: 𝑓 ! = 1 5 1 + 5 2 !
−1 5 1 − 5 2 !
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 23 23 Solving Recurrence Relations But what happens if the characteristic equation has only one root?
How can we then match our equation with the initial conditions a0 and a1 ?
Theorem: Let 𝑐1 and 𝑐2 be real numbers with 𝑐$ ≠0. Suppose that 𝑟$– 𝑐#𝑟– 𝑐$ = 0 has only one roots 𝑟0. The sequence {𝑎!} is a solution of the recurrence relation 𝑎! = 𝑐#𝑎!"# + 𝑐$𝑎!"$ ↔𝑎! = 𝛼#𝑟' ! + 𝛼$𝑛𝑟' ! for 𝑛= 0, 1, 2, … , where 𝛼# and 𝛼$ are constants Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 24 24 7/27/22 13 Solving Recurrence Relations Example: What is the solution of the recurrence relation an = 6an-1 – 9an-2 with a0 = 1 and a1 = 6?
Solution: The only root of r2 – 6r + 9 = 0 is r0 = 3. Hence, the solution to the recurrence relation is an = a13n + a2n3n for some constants a1 and a2. To match the initial condition, we need: a0 = 1 = a1 a1 = 6 = a1×3 + a2×3 Solving these equations yields a1 = 1 and a2 = 1.
Consequently, the overall solution is given by an = 3n + n3n.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 25 25 Divide-and-Conquer Relations Some algorithms take a problem and successively divide it into one or more smaller problems until there is a trivial solution to them.
For example, the binary search algorithm recursively divides the input into two halves and eliminates the irrelevant half until only one relevant element remained.
This technique is called “divide and conquer”.
We can use recurrence relations to analyze the complexity of such algorithms.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 26 26 7/27/22 14 Divide-and-Conquer Relations Suppose that an algorithm divides a problem (input) of size n into a subproblems, where each subproblem is of size !
,. Assume that 𝑔(𝑛) operations are performed for such a division of a problem.
Then, if 𝒇(𝒏) represents the number of operations required to solve the problem, it follows that f satisfies the recurrence relation 𝒇(𝒏) = 𝒂𝒇(𝒏 𝒃) + 𝒈(𝒏).
This is called a divide-and-conquer recurrence relation.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 27 27 Divide-and-Conquer Relations Example: The binary search algorithm reduces the search for an element in a search sequence of size 𝒏to the binary search for this element in a search sequence of size 𝒏 𝟐(if n is even).
Two comparisons are needed to perform this reduction.
Hence, if 𝒇(𝒏) is the number of comparisons required to search for an element in a search sequence of size n, then 𝒇(𝒏) = 𝒇 𝒏 𝟐 + 𝟐if n is even.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 28 28 7/27/22 15 Divide-and-Conquer Relations Usually, we do not try to solve such divide-and conquer relations, but we use them to derive a big-O estimate for the complexity of an algorithm.
Theorem: Let 𝑓be an increasing function that satisfies the recurrence relation 𝑓(𝑛) = 𝑎𝑓(𝑛 𝑏) + 𝑐𝑛% whenever 𝑛= 𝑏&, where 𝑘is a positive integer, 𝑎, 𝑐, and 𝑑are real numbers with a ³ 1, and 𝑏is an integer greater than 1. Then 𝑓(𝑛) is 𝑂(𝑛%), if a < bd, 𝑂(𝑛% log 𝑛) if a = bd, 𝑂(𝑛'() ) if a > bd Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 29 29 Divide-and-Conquer Relations Example: For binary search, we have: 𝑓(𝑛) = 𝑓(!
$) + 2, so 𝑎= 1, 𝑏= 2, and 𝑑= 0 (d = 0 because here, 𝑔(𝑛) does not depend on 𝑛).
Consequently, a = bd, and therefore, 𝑓(𝑛) is O(nd log n) = O(log n).
The binary search algorithm has logarithmic time complexity.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 30 30 7/27/22 16 Algorithms Chapter 3 in the textbook 31 Algorithms What is an algorithm?
An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
This is a rather vague definition. You will get to know a more precise and mathematically useful definition when you attend CS420 or CS620. But this one is good enough for now… Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 32 32 7/27/22 17 Algorithms Properties of algorithms: • Input from a specified set, • Output from a specified set (solution), • Definiteness of every step in the computation, • Correctness of output for every possible input, • Finiteness of the number of calculation steps, • Effectiveness of each calculation step and • Generality for a class of problems.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 33 33 Algorithm Examples We will use a pseudocode to specify algorithms, which slightly reminds us of Basic and Pascal.
Example: an algorithm that finds the maximum element in a finite sequence Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 34 34 7/27/22 18 Algorithm Examples Another example: a linear search algorithm, that is, an algorithm that linearly searches a sequence for a particular element.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 35 35 Algorithm Examples If the terms in a sequence are ordered, a binary search algorithm is more efficient than linear search.
The binary search algorithm iteratively restricts the relevant search interval until it closes in on the position of the element to be located.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 36 36 7/27/22 19 Algorithm Examples Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 37 a c d f g h j l m o p r s u v x z binary search for the letter ‘j’ center element search interval 37 Algorithm Examples Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 38 a c d f g h j l m o p r s u v x z center element search interval binary search for the letter ‘j’ 38 7/27/22 20 Algorithm Examples Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 39 a c d f g h j l m o p r s u v x z center element search interval binary search for the letter ‘j’ 39 Algorithm Examples Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 40 a c d f g h j l m o p r s u v x z center element search interval binary search for the letter ‘j’ Found!
40 7/27/22 21 Algorithm Examples Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 41 41 Algorithm Examples Obviously, on sorted sequences, binary search is more efficient than linear search.
How can we analyze the efficiency of algorithms?
We can measure the • time (number of elementary computations) and • space (number of memory cells) that the algorithm requires.
These measures are called computational complexity and space complexity, respectively.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 42 42 7/27/22 22 Complexity What is the time complexity of the linear search algorithm?
We will determine the worst-case number of comparisons as a function of the number n of terms in the sequence.
The worst case for the linear algorithm occurs when the element to be located is not included in the sequence.
In that case, every item in the sequence is compared to the element to be located.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 43 43 Algorithm Examples Here is the linear search algorithm again: Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 44 44 7/27/22 23 Algorithm Examples Here is the linear search algorithm again: Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 45 Is processed n times, requiring 2n comparisons. When it is entered for the (n+1)th time, only the comparison i £ n is executed and terminates the loop Finally, this comparison is excuted, so in the worst-case, the time complexity is (2n+2) 45 Reminder: Binary Search Algorithm What is the time complexity of the algorithm?
Again, we will determine the worst-case number of comparisons as a function of the number 𝑛of terms in the sequence.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 46 46 7/27/22 24 Complexity-Binary Search Algorithm • Let us assume there are 𝑛= 2𝑘elements in the list, which means that 𝑘= log 𝑛.
• If 𝑛is not a power of 2, it can be considered part of a larger list, where 2) < 𝑛< 2)#.
• In the first cycle of the loop üThe search interval is restricted to 2&,- elements, üUsing 2 comparisons Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 47 47 Complexity-Binary Search Algorithm • In the second cycle of the loop üThe search interval is restricted to 2&,. elements, üUsing 2 comparisons (again) • … this is repeated until only 1 (2') element left in the search interval üAt this point, 2k comparisons has been conducted Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 48 48 7/27/22 25 Complexity Then, the comparison 𝑤ℎ𝑖𝑙𝑒(𝑖< 𝑗) exits the loop, and a final comparison 𝑖𝑓𝑥= 𝑎/ 𝑡ℎ𝑒𝑛𝑙𝑜𝑐𝑎𝑡𝑖𝑜𝑛∶= 𝑖 determines whether the element was found.
Therefore, the overall time complexity of the binary search algorithm is: 2𝑘+ 2 = 2élog 𝑛ù + 2.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 49 49 Complexity In general, we are not so much interested in the time and space complexity for small inputs.
For example, while the difference in time complexity between linear and binary search is meaningless for a sequence with 𝑛= 10, it is gigantic for 𝑛= 230.
For example, let us assume two algorithms A and B that solve the same class of problems.
The time complexity of A is 5,000𝑛, the one for B is é1.1𝑛ù for an input with 𝑛 elements.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 50 50 7/27/22 26 Complexity Comparison: time complexity of algorithms A and B This means that algorithm B cannot be used for large inputs, while running algorithm A is still feasible.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 51 Algorithm A Algorithm B Input Size n 10 100 1,000 1,000,000 5,000n 50,000 500,000 5,000,000 5 ⋅109 é1.1𝑛ù 3 2.5 ⋅1041 13,781 4.8 ⋅1041392 51 Complexity So what important is the growth of the complexity functions.
The growth of time and space complexity with increasing input size 𝑛is a suitable measure for the comparison of algorithms. Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 52 52 7/27/22 27 The Growth of Functions The growth of functions is usually described using the big-O notation.
Definition: Let 𝑓and 𝑔be functions from the integers or the real numbers to the real numbers. We say that 𝑓(𝑥) is 𝑶(𝑔(𝑥)) if there are constants 𝐶and 𝑘such that |𝑓(𝑥)| £ 𝐶|𝑔(𝑥)|, whenever 𝑥> 𝑘.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 53 53 The Growth of Functions When we analyze the growth of complexity functions, 𝑓(𝑥) and 𝑔(𝑥) are always positive. Therefore, we can simplify the big-O requirement to 𝑓(𝑥) £ 𝐶×𝑔(𝑥) whenever 𝑥> 𝑘.
If we want to show that 𝑓(𝑥) is 𝑶(𝑔(𝑥)), we only need to find one pair (𝐶, 𝑘) (which is never unique).
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 54 54 7/27/22 28 The Growth of Functions The idea behind the big-O notation is to establish an upper boundary for the growth of a function 𝑓(𝑥) for large 𝑥.
This boundary is specified by a function 𝑔(𝑥) that is usually much simpler than 𝑓(𝑥).
We accept the constant 𝐶in the requirement 𝑓(𝑥) £ 𝐶×𝑔(𝑥) whenever 𝑥> 𝑘, because 𝑪does not grow with 𝒙 We are only interested in large 𝑥, so it is OK if 𝑓(𝑥) > 𝐶×𝑔(𝑥) for 𝑥£ 𝑘.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 55 55 The Growth of Functions Example: Show that 𝑓(𝑥) = 𝑥2 + 2𝑥+ 1 is 𝑶(𝑥2).
For 𝑥> 1 we have: 𝑥2 + 2𝑥+ 1 £ 𝑥2 + 2𝑥2 + 𝑥2 Þ 𝑥2 + 2𝑥+ 1 £ 4𝑥2 Therefore, for 𝐶= 4 and 𝑘= 1: 𝑓(𝑥) £ 𝐶𝑥2 whenever 𝑥> 𝑘.
Þ 𝑓(𝑥) 𝑖𝑠𝑂(𝑥2).
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 56 56 7/27/22 29 The Growth of Functions Question: If 𝑓(𝑥) is 𝑶(𝑥2), is it also 𝑶(𝑥3)?
Yes. 𝑥3 grows faster than 𝑥2, so 𝑥3 grows also faster than 𝑓(𝑥).
Therefore, we always have to find the smallest simple function 𝑔(𝑥) for which 𝑓(𝑥) is 𝑶(𝑔(𝑥)) Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 57 57 The Growth of Functions “Popular” functions g(n) are: • 𝑛𝑙𝑜𝑔𝑛, 1, 2𝑛, 𝑛2, 𝑛!, 𝑛, 𝑛3, log 𝑛 Listed from slowest to fastest growth: • 1 • log n • n • n log n • n2 • n3 • 2n • n!
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 58 58 7/27/22 30 The Growth of Functions A problem that can be solved with polynomial worst-case complexity is called tractable.
Problems of higher complexity are called intractable.
Problems that no algorithm can solve are called unsolvable.
More about this in CS420.
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 59 59 Useful Rules for Big-O For any polynomial 𝑓(𝑥) = 𝑎!𝑥! + 𝑎!"#𝑥!"# + ⋯+ 𝑎', where 𝑎0, 𝑎1, … , 𝑎! are real numbers, 𝑓(𝑥) 𝑖𝑠𝑶(𝑥𝑛) • If 𝑓1(𝑥) is 𝑶(𝑔(𝑥)) and 𝑓2(𝑥) is 𝑶(𝑔(𝑥)), then (𝑓1 + 𝑓2)(𝑥) is 𝑶(𝑔(𝑥)).
• If 𝑓1(𝑥) is 𝑶(𝑔1(𝑥)) and 𝑓2(𝑥) is 𝑶(𝑔2(𝑥)), then (𝑓1 + 𝑓2)(𝑥) is 𝑶(max(𝑔1(𝑥), 𝑔2(𝑥))) • If 𝑓1(𝑥) is 𝑶(𝑔1(𝑥)) and 𝑓2(𝑥) is 𝑶(𝑔2(𝑥)), then (𝑓1𝑓2)(𝑥) is 𝑶(𝑔1(𝑥) 𝑔2(𝑥)).
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 60 60 7/27/22 31 Complexity Examples What does the following algorithm compute?
procedure who_knows(a1, a2, …, an: integers) who_knows := 0 for i := 1 to n-1 for j := i+1 to n if |ai – aj| > who_knows then who_knows := |ai – aj| {who_knows is the maximum difference between any two numbers in the input sequence} Comparisons: 𝑛−1 + 𝑛−2 + 𝑛−3 + ⋯+ 1 = 𝑛𝑛– 1 2 = 0.5𝑛2 – 0.5𝑛 Time complexity is 𝑶(𝑛2).
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 61 61 Complexity Examples Another algorithm solving the same problem: procedure max_diff(a1, a2, …, an: integers) min := a1 max := a1 for i := 2 to n if ai < min then min := ai else if ai > max then max := ai max_diff := max - min Comparisons (worst case) ? 2𝑛−2 Time complexity is 𝑶(𝑛).
Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 62 62 7/27/22 32 In-class exceries Give a Big-O estimate for the number of operations (+ or ) that used the following segment of an algorithm.
(Do not count additions used to increment the loop variable.) (a) (b) (c) Applied Discrete Mathematics @ Class #4: Algorithms, Recursion 63 63 |
17258 | https://zhuanlan.zhihu.com/p/661446483 | 高一数学必修 幂函数、指数函数和对数函数 - 知乎
首页
知乎直答 焕新
知乎知学堂
等你来答
切换模式
登录/注册
高一数学必修 幂函数、指数函数和对数函数
首发于高中数学
切换模式
登录/注册
高一数学必修 幂函数、指数函数和对数函数
hybase
音视频 flvAnalyser | hysAnalyser
收录于 · 高中数学
25 人赞同了该文章
目录
学习目标
理解n次方根、根式的概念,了解指数幂的拓展过程,掌握指数幂的运算性质
掌握常用幂函数的图像及其性质
理解指数函数的概念,掌握指数函数的图像和性质
了解指数增长型和指数衰减型在实际问题中的应用,能利用指数函数的单调性比较指数有关的大小问题
理解对数函数的概念,掌握对数函数的图像和性质
掌握对数型复核函数单调区间的求法及单调性的判定方法
了解常用的描述现实世界中不同增长规律的函数模型,了解直线上升,指数爆炸,对数增长等含义
了解函数的零点、方程的解与图象交点三者之间的联系
了解二分法的原理及其适用条件,掌握二分法的实施步骤
会利用已知函数模型求解实际问题
能自建确定性函数模型解决实际问题
了解建立拟合函数模型的步骤,并了解检验和调整的必要性
知识结构
一、实数指数幂和幂函数
n 次方根和根式
1) a 的 n 次方根的定义
一般地,如果 x^{n}=a ,那么 x 叫做 a 的 n 次方根,其中 n\geq2 且 n\in N ;
2) a 的 n 次方根的表示,如下:
n 为奇数, \sqrt[n]{a} , 其中 a\in R;
n 为偶数,\pm\sqrt[n]{a} ,其中 a\in [0, +\infty);
3)根式
式子 \sqrt[n]{a} 叫做根式 (n\in N, n\geq2) ,其中 n 叫做根指数, a 叫做被开方数。
4)根式的性质
负数没有偶次方根
0的任何次方根均为0, 记作 \sqrt[n]{0} = 0 ;
(\sqrt[n]{a})^{n}=a(n\in N 且 n\geq2);
\sqrt[n]{a^{n}}=a (n为大于1的奇数);
\sqrt[n]{a^{n}}=\left| a \right| (n为大于1的偶数),即 a(a\geq0), -a(a<0) ;
分数指数幂
1)规定正数的正分数指数幂的意义是: a^{\frac{m}{n}}=\sqrt[n]{a^{m}}(a>0, m,n\in N,且 n>1)
2)规定正数的负分数指数幂的意义是: a^{-\frac{m}{n}}=\frac{1}{a^{\frac{m}{n}}}=\frac{1}{\sqrt[n]{a^{m}}}(a>0, m,n\in N 且 n>1)
3)0的正分数指数幂等于0;
4)0的负分数指数幂没有意义;
有理数指数幂的运算性质
整数指数幂的运算性质,可以推广到有理数指数幂,即:
1) a^{r}a^{s}=a^{r+s}(a>0, r,s\in Q)
2) (a^{r})^{s}=a^{rs}(a>0, r,s\in Q)
3) (ab)^{r}=a^{r}b^{r}(a>0,b>0,r\in Q)
4) \frac{a^r}{a^s} = a^{r-s}(a>0, r,s\in Q)
无理数指数幂
一般地,无理数指数幂 a^{\alpha} (a>0, \alpha 是无理数)是一个确定的实数。
幂函数
定义 y=x^{\alpha} 叫做( \alpha 次)幂函数,
幂函数分类
1)整数次幂函数:细分为正整数次幂函数,负整数次幂函数
2)分数次幂函数
幂函数的图像和性质
对于实数次幂函数 y=x^{\alpha}(\alpha\ne0)
1)当 \alpha>0 时,它在 [0,\infty)上有定义且递增,值域为[0,\infty),函数图象经过(0,0),(1,1)两点
2)当 \alpha<0 时,它在 [0,\infty)上有定义且递减,值域为[0,\infty),函数图象经过点(1,1), 向上与 y 轴正向无限接近,向右与 x 轴正向无限接近
五个常见幂函数的图像
五个常见幂函数的性质
二、指数函数
定义
函数 y=a^{x}(a>0,且 a\ne 1) 叫做指数函数,其中 x 是自变量,函数的定义域是 R
两类指数模型
1) y=ka^{x}(k>0,a>0且a\ne1),当 a>1 时为指数增长型函数模型
2) y=ka^{x}(k>0,a>0且a\ne1),当 0<a<1 时为指数衰减型函数模型
指数函数的图像和性质
比较幂的大小
一般地,比较幂大小的方法有
(1)对于同底数不同指数的两个幂的大小,利用指数函数的单调性来判断.
(2)对于底数不同指数相同的两个幂的大小,利用幂函数的单调性来判断.
(3)对于底数不同指数也不同的两个幂的大小,则通过中间值来判断.
解指数方程和不等式
(1)形如 a^{f(x)} > a^{g(x)}的不等式,可借助y=a^{x}的单调性求解
(2)形如 a^{f(x)}>b 的不等式,可将 b 化为以 a 为底数的指数幂的形式,再借助 y=a^{x} 的单调性求解;
(3)形如 a^{x}>b^{x} 的不等式,可借助两函数 y=a^{x},y=b^{x} 的图象求解。
指数型函数的单调性
一般地,形如 y=a^{f(x)}(a>0,且a\ne1) 函数的性质:
(1)函数 y=a^{f(x)}与函数y=f(x) 有相同的定义域
(2)当 a>1 时,函数 y=a^{f(x)}与函数y=f(x) 具有相同的单调性;
(3)当 0<a<1 时,函数 y=a^{f(x)}与函数y=f(x) 的单调性相反;
三、对数函数
基础知识
1)对数运算法则,其中 (a>0,且a\ne1, M>0, N>0)
(1) log_a(MN)=log_aM+log_aN
(2) log_aM^{n}=nlog_aM(n\in R)
(3) log_a\frac{M}{N} = log_aM-log_aN
(4) log_aa=1,log_a1=0
2)换底公式
log_aN=\frac{log_bN}{log_ba}
3)常用对数
以10为底的对数, 常记为 lgN
4)自然对数
以 e 为底的对数, log_eN 简化为 lnN
对数函数定义
一般函数 形如 y=log_a x(a>0,且a\ne1) 叫做对数函数,其中 x 为自变量,函数的定义域是 (0,+\infty)
其中 a 为底数, x 为对数的真数。
对数函数的图像和性质
反函数
指数函数
y=a^x(a>0,且a\ne 1) 与
对数函数
y=log_a x(a>0,且a\ne1)
互为反函数,二者的定义域和值域正好互换。
对数型函数的性质及应用
形如 y=log_a f(x) 为对数型函数
(1)定义域:由 f(x)>0 解出 x 的取值范围,即为函数的定义域
(2)值域:先根据上述定义域确定 t=f(x) 的值域,再由 y=log_a t 的单调性确定函数的值域
(3)单调性:在定义域内考虑 t=f(x) 与 y=log_a t 的单调性,根据同增同减法则判定
(4)奇偶性:根据奇偶函数的定义判定
(5)最值:在 f(x)>0 的条件下,确定 t=f(x) 的值域,再根据 a 确定函数 y=log_a t 的单调性,最后确定最值
复合型对数型不等式的解法
形如 log_a f(x) < log_a g(x) 不等式解法
(1)讨论 a 与1的关系,确定单调性
(2)转化为 f(x) 与 g(x) 的不等关系求解,其中需特别注意真数需大于0
不同函数增长的差异
指数函数,对数函数,线性函数的增长差异
四、函数与方程
1、函数的零点
对于一般函数 y=f(x) ,我们把使得 f(x)=0 的实数 x 叫做函数 y=f(x) 的零点。
2、函数零点存在定理
如果函数 y=f(x) 在区间 [a,b] 上的图象是一条连续不断的曲线,且有 f(a)f(b)<0 ,那么,函数 y=f(x) 在区间 (a,b) 内至少有一个零点,即存在 c∈(a,b) ,使得 f(c)=0 ,这个 c 也就是方程 f(x)=0 的解.
3、用二分法求方程的近似解
1)二分法
对于在区间 [a,b] 上图象连续不断且 f(a)f(b)<0 的函数 y=f(x) ,通过不断地把它的零点所在区间一分为二,使所得区间的两个端点逐步逼近零点,进而得到零点近似值的方法叫做二分法.
2)用二分法求函数f(x)零点近似值的步骤
步骤1.确定零点 x_0 的初始区间 [a,b] ,验证 f(a)f(b)<0
步骤2.求区间 (a,b) 的中点 c
步骤3.计算 f(c) ,并进一步确定零点所在的区间
(1)若 f(c)=0 (此时 x_0=c ),则c就是函数的零点.
(2)若 f(a)f(c)<0 (此时 x_0∈(a,c) ),则令 b=c .
(3)若 f(c)f(b)<0 (此时 x_0∈(c,b) ),则令 a=c .
步骤4.判断是否达到精确度 ε :若 |a-b|<ε ,则得到零点近似值 a (或 b );否则重复步骤2~4.
以上步骤可简化为:定区间,找中点,中值计算两边看;同号去,异号算,零点落在异号间;周而复始怎么办?精确度上来判断.
五、函数模型及其应用
数学建模
把现实世界中的实际问题加以提炼,抽象为数学模型,求出模型的解,验证模型的合理性,并用该数学模型所提供的解来解释现实问题,数学知识的这一应用过程称为数学建模。
数学建模的步骤通常是:
1)正确理解并简化实际问题:了解问题的实际背景,明确其实际意义,掌握对象的各种信息,根据实际对象的特征和建模的目的,对问题进行必要的简化,并用精确的语言提出一些恰当的假设。
2)建立数学模型:在上述基础上,利用适当的数学工具来刻画各变量之间的数学关系,建立相应的数学结构。
3)求得数学问题的解
4)将求解时分析计算的结果与实际情形进行比较,验证模型的准确性、合理性和适用性。
常见的函数模型
一次函数模型
f(x)=ax+b(a,b为常数,a≠0)
反比例函数模型
f(x)=\frac{k}{x}+b(k,b为常数且k≠0)
一元二次函数模型
f(x)=ax^2+bx+c(a,b,c为常数,a≠0)
指数型函数模型
f(x)=ba^x+c(a,b,c为常数,b≠0,a>0且a≠1)
对数型函数模型
f(x)=blog_ax+c(a,b,c为常数,b≠0,a>0且a≠1)
幂函数型模型
f(x)=ax^n+b(a,b为常数,a≠0, n\geq2, n\in N)
用函数模型解决实际问题的基本步骤
1.审题——弄清题意,分清条件和结论,理顺数量关系,初步选择模型.
2.建模——将自然语言转化为数学语言,将文字语言转化为符号语言,利用数学知识建立相应的数学模型.
3.求模——求解数学模型,得出数学模型.
4.还原——将数学结论还原为实际问题.
六、复习与巩固
1.若 x^n=a(x≠0),则下列说法中正确的个数是( )
①当n为奇数时,x的n次方根为a;
②当n为奇数时,a的n次方根为x;
③当n为偶数时,x的n次方根为±a;
④当n为偶数时,a的n次方根为±x.
A.1
B.2
C.3
D.4
已知幂函数 f(x) 的图象过点 (2, \frac{1}{2}) ,则 f(4) 的值是( )
A.64
B.4\sqrt{2}
C. \frac{\sqrt{2}}{4}
D. \frac{1}{4}
3.在函数① y=\frac{1}{x} ,② y=3x^3 , ③ y=2x+1 , ④ y=1 , ⑤ y=x^3 ,⑥ y=x^{-\frac{1}{2}} 中,是幂函数的是( )
A.①②④⑤
B.①⑤⑥
C.①②⑥
D.①②④⑤⑥
函数 f(x)=\frac{1}{3^x+1} 的值域是 ()
A.(-∞,1)
B.(0,1)
C.(1,+∞)
D.(-∞,1)∪(1,+∞)
下一章
编辑于 2024-10-15 00:31・广东
幂函数
指数函数
高一数学
赞同 25添加评论
分享
喜欢收藏申请转载
写下你的评论...
还没有评论,发表第一个评论吧
关于作者
hybase
音视频 flvAnalyser | hysAnalyser
回答 33文章 77关注者 294
关注他发私信
推荐阅读
对数与对数函数的学习指导 ============ 许兴华数学数列“函数”的一个例题 =========== 在开始之前先放一个简单的题目 数列\left{ a_{n} \right}满足a_{1}=1,a_{n+1}=ln(1+a_{n}),证明:a_{n}≥\frac{1}{2^{n-1}} 你可以先想想第一反应是怎么做,然后在接着看下面的内容。 接… 梓欣 (数分笔记)初等函数的连续性 ============== 稳重高一数学的知识点:幂函数 ============ 高一数学的知识点:幂函数定义: 形如y=x^a(a为常数)的函数,即以底数为自变量幂为因变量,指数为常量的函数称为幂函数。 定义域和值域: 当a为不同的数值时,幂函数的定义域的不同情况如下… BAIG
想来知乎工作?请发送邮件到 jobs@zhihu.com
打开知乎App
在「我的页」右上角打开扫一扫
其他扫码方式:微信
下载知乎App
无障碍模式
验证码登录
密码登录
开通机构号
中国 +86
获取短信验证码
获取语音验证码
登录/注册
其他方式登录
未注册手机验证后自动登录,注册即代表同意《知乎协议》《隐私保护指引》
扫码下载知乎 App
关闭二维码
打开知乎App
在「我的页」右上角打开扫一扫
其他扫码方式:微信
下载知乎App
无障碍模式
验证码登录
密码登录
开通机构号
中国 +86
获取短信验证码
获取语音验证码
登录/注册
其他方式登录
未注册手机验证后自动登录,注册即代表同意《知乎协议》《隐私保护指引》
扫码下载知乎 App
关闭二维码 |
17259 | https://emedicine.medscape.com/article/871977-guidelines | close
Please confirm that you would like to log out of Medscape.
If you log out, you will be required to enter your username and password the next time you visit.
Log out
Cancel
processing....
Tools & Reference>Otolaryngology and Facial Plastic Surgery
Tonsillitis and Peritonsillar Abscess Guidelines
Updated: Aug 02, 2024
Author: Udayan K Shah, MD, FACS, FAAP; Chief Editor: Arlen D Meyers, MD, MBA more...;)
64
Share
Print
Feedback
Close
Facebook
Twitter
LinkedIn
WhatsApp
Email
Sections
Tonsillitis and Peritonsillar Abscess
Sections Tonsillitis and Peritonsillar Abscess
Overview
Practice Essentials
Background
Pathophysiology and Etiology
Epidemiology
Prognosis
Show All
Presentation
History
Physical Examination
Complications
Show All
DDx
Workup
Approach Considerations
Lab Studies
Imaging Studies
Show All
Treatment
Approach Considerations
Corticosteroids
Antibiotics
Tonsillectomy
Diet and Activity
Prevention
Show All
Guidelines
Medication
Medication Summary
Corticosteroids
Antibiotics, Other
Immune Globulins
Analgesics, Other
Show All
Media Gallery;)
References;)
Guidelines
Guidelines Summary
In 2012, the Infectious Diseases Society of America (IDSA) released updated guidelines for the diagnosis and management of group A streptococcal (GAS) pharyngitis. Recommendations for diagnosis and testing are summarized as follows :
Testing for GAS pharyngitis by rapid antigen detection test (RADT) and/or culture should be performed to distinguish between GAS and viral pharyngitis
In children and adolescents, negative RADT results should be backed up by a throat culture; positive results are highly specific and do not require a backup culture
In adults, routine use of backup throat cultures for those with a negative RADT is not necessary because the risk of subsequent acute rheumatic fever is low in adults with acute pharyngitis
Anti-streptococcal antibody titers are not recommended in the routine diagnosis of acute pharyngitis
In general, testing for GAS pharyngitis is not recommended for children or adults with a clinical presentation that strongly suggests a viral etiology (eg, cough, rhinorrhea, hoarseness, oral ulcers)
Acute rheumatic fever is rare in children under age 3 years, and the incidence and classic presentation of streptococcal pharyngitis are uncommon in this age group; thus, testing for GAS pharyngitis is not indicated for children under age 3 years; however, children under age 3 years who have other risk factors, such as an older sibling with GAS infection, may be considered for testing
Diagnostic testing or treatment of asymptomatic close contacts of patients with acute streptococcal pharyngitis is not routinely recommended
IDSA guideline recommendations for treatment include the following :
Patients with confirmed GAS pharyngitis should be treated for a duration likely to eradicate GAS from the pharynx (usually 10 days) with an appropriate narrow-spectrum antibiotic
Penicillin or amoxicillin is the drug of choice for those without a contraindication
Alternative agents for penicillin-allergic individuals include a first-generation cephalosporin, clindamycin, or clarithromycin for 10 days, or azithromycin for 5 days
An analgesic such as acetaminophen or a nonsteroidal anti-inflammatory drug (NSAID) may be considered as an adjunct to an appropriate antibiotic for treatment of moderate to severe symptoms or control of high fever; aspirin should be avoided in children
Adjunctive therapy with a corticosteroid is not recommended
In patients with recurrent episodes of pharyngitis, consider the possibility that they are a chronic pharyngeal GAS carrier who is experiencing repeated viral infections
Efforts to identify GAS carriers are not justified because they have a low risk of transmitting GAS pharyngitis to their close contacts and little or no risk for the development of acute rheumatic fever
Tonsillectomy is not recommended to reduce the frequency of GAS pharyngitis
In joint guidelines for appropriate antibiotic use for acute respiratory tract infection in adults, published in 2016, the American College of Physicians (ACP) and the Centers for Disease Control and Prevention (CDC) note that adult patients may be assured that antibiotics are usually not needed for a sore throat because they do little to alleviate symptoms and may have adverse effects.
The following are guidelines from the 2019 update of the American Academy of Otolaryngology–Head and Neck Surgery Foundation’s guidelines on tonsillectomy in children :
If fewer than seven episodes of recurrent throat infection have occurred in the past year, less than five episodes per year in the past 2 years, or fewer than three episodes annually in the past 3 years, watchful waiting should be recommended by the clinician
Tonsillectomy may be recommended if there have been at least seven episodes of recurrent throat infection in the past year, at least 5 episodes annually in the past 2 years, or at least 3 episodes per year in the past 3 years, with the medical record containing documentation for each episode and for at least one of the following: temperature above 38.3°C (101°F), cervical adenopathy, tonsillar exudate, or a positive group A beta-hemolytic streptococcus test
If children do not meet the criteria in the second guideline but nonetheless have recurrent throat infection, evaluate the patients for modifying factors that may still point toward a need for tonsillectomy; such factors may include, among others, multiple antibiotic allergies/intolerance, PFAPA (periodic fever, aphthous stomatitis, pharyngitis, adenitis), and a history of more than one peritonsillar abscess
Coronavirus disease 2019 (COVID-19)
Bann et al compiled a set of recommendations for best pediatric otolaryngology practices with regard to the coronavirus disease 2019 (COVID-19) pandemic. These included the following for procedures involving the oral cavity, oropharynx, nasal cavity, or nasopharynx :
Whenever possible, defer procedures involving the nasal cavity, nasopharynx, oral cavity, or oropharynx, as these pose a high risk for COVID-19 owing to the high viral burden in these locations
Whenever possible, preoperative COVID-19 testing should be administered to patients and caregivers prior to surgical intervention
Employment of enhanced personal protective equipment (PPE), with a strong recommendation for the use of a powered air-purifying respirator (PAPR), should be undertaken with any patient with unknown, suspected, or positive COVID-19 status
Limit the use of powered instrumentation, including microdebriders, to reduce aerosol generation
With regard to audiologic evaluation and otologic surgery, the recommendations include the following :
Perform routine newborn hearing screening and early intervention as indicated in the Joint Committee on Infant Hearing (JCIH) recommendations
Defer tympanostomy tube placement for unilateral otitis media with effusion
Although it should be prioritized, intervention for bilateral otitis media with effusion and hearing loss may be deferred based on the availability of COVID-19 testing
Surgery involving the middle ear and mastoid, owing to their continuity with the upper aerodigestive tract, should be considered high risk for COVID-19 transmission
Whenever possible, defer mastoidectomy, but if the surgery is required, employ enhanced PPE and avoid the use of high-speed drills
Employment of a PAPR is strongly recommended when, in patients with unknown, suspected, or positive COVID-19 status, high-speed drills are required for otologic procedures
With regard to head and neck surgery and deep neck space infections, the recommendations include the following :
Defer surgical excision of benign neck masses
A multidisciplinary tumor board should decide the most appropriate treatment modality for pediatric patients with solid tumors of the head and neck, including thyroid cancer, with the availability of local resources taken into account
Prior to surgical intervention, medical management of infectious conditions should, whenever possible, be attempted; on admission, patients and caregivers should be tested for COVID-19 and strictly quarantined pending test results
With regard to craniomaxillofacial trauma, the guidelines include the following :
When urgent or emergent bedside procedures, including closure of facial lacerations, are required, patients should be presumed positive for COVID-19, even if they are asymptomatic; carry out procedures in a negative-pressure room using enhanced PPE
Employ closed-reduction techniques, when possible, until preoperative COVID-19 testing is available
Avoid the use of high-speed drills, to reduce aerosol formation
When urgent or emergent surgical intervention is required, patients should be presumed positive for COVID-19, even if they are asymptomatic
Medication
References
Anderson J, Paterek E. Tonsillitis. StatPearls. 2023 Aug 8. [QxMD MEDLINE Link]. [Full Text].
Morens DM. Death of a president. N Engl J Med. 1999 Dec 9. 341(24):1845-9. [QxMD MEDLINE Link].
Kean J. Domestic Medical Lectures. Chicago, Ill: 1879.
Stelter K. Tonsillitis and sore throat in children. GMS Curr Top Otorhinolaryngol Head Neck Surg. 2014. 13:Doc07. [QxMD MEDLINE Link]. [Full Text].
Woolford TJ, Hanif J, Washband S, Hari CK, Ganguli LA. The effect of previous antibiotic therapy on the bacteriology of the tonsils in children. Int J Clin Pract. 1999 Mar. 53(2):96-8. [QxMD MEDLINE Link].
Bussi M, Carlevato MT, Panizzut B, Omede P, Cortesina G. Are recurrent and chronic tonsillitis different entities? An immunological study with specific markers of inflammatory stages. Acta Otolaryngol Suppl. 1996. 523:112-4. [QxMD MEDLINE Link].
Uhler M, Schrom T, Knipping S. [Peritonsillar abscess - smoking habits, preoperative coagulation screening and therapy]. Laryngorhinootologie. 2013 Sep. 92(9):589-93. [QxMD MEDLINE Link].
Pichichero ME, Casey JR. Defining and dealing with carriers of group A Streptococci. Contemporary Pediatrics. 2003. 1:46.
Wald ER. Commentary: Antibiotic treatment of pharyngitis. Pediatrics in Review. 2001. 22 (8):255-256.
Herzon FS. Harris P. Mosher Award thesis. Peritonsillar abscess: incidence, current management practices, and a proposal for treatment guidelines. Laryngoscope. 1995 Aug. 105(8 Pt 3 Suppl 74):1-17. [QxMD MEDLINE Link].
Kvestad E, Kvaerner KJ, Roysamb E, Tambs K, Harris JR, Magnus P. Heritability of recurrent tonsillitis. Arch Otolaryngol Head Neck Surg. 2005 May. 131(5):383-7. [QxMD MEDLINE Link].
Klug TE. Incidence and microbiology of peritonsillar abscess: the influence of season, age, and gender. Eur J Clin Microbiol Infect Dis. 2014 Jan 29. [QxMD MEDLINE Link].
Schmidt RJ, Herzog A, Cook S, O’Reilly R, Deutsch E, Reilly J. Complications of tonsillectomy. Arch Otolaryngol Head and Neck Surg. 2007. 133:925-928.
Shah, Udayan K. Peritonsillar and Retropharyngeal Abscess. Shah, Samir S. Pediatric Pracice: Infectious Diseases. China: McGraw-Hill; 2009. Chapter 25, pp. 216-22.
Hatch N, Wu TS. Advanced Ultrasound Procedures. Crit Care Clin. 2014 Apr. 30(2):305-329. [QxMD MEDLINE Link].
Huang Z, Vintzileos W, Gordish-Dressman H, Bandarkar A, Reilly BK. Pediatric peritonsillar abscess: Outcomes and cost savings from using transcervical ultrasound. Laryngoscope. 2017 Jan 16. [QxMD MEDLINE Link].
Orobello NC, Crowder HR, Riley PE, et al. Predicting failure of detection of peritonsillar abscess with ultrasound in pediatric populations. Am J Otolaryngol. 2024 Jan-Feb. 45 (1):104021. [QxMD MEDLINE Link].
Battaglia A, Burchette R, Hussman J, Silver MA, Martin P, Bernstein P. Comparison of Medical Therapy Alone to Medical Therapy with Surgical Treatment of Peritonsillar Abscess. Otolaryngol Head Neck Surg. 2018 Feb. 158 (2):280-6. [QxMD MEDLINE Link].
[Guideline] Shulman ST, Bisno AL, Clegg HW, Gerber MA, Kaplan EL, Lee G, et al. Clinical practice guideline for the diagnosis and management of group A streptococcal pharyngitis: 2012 update by the Infectious Diseases Society of America. Clin Infect Dis. 2012 Nov 15. 55 (10):e86-102. [QxMD MEDLINE Link]. [Full Text].
[Guideline] Chiappini E, Regoli M, Bonsignori F, Sollai S, Parretti A, Galli L, et al. Analysis of different recommendations from international guidelines for the management of acute pharyngitis in adults and children. Clin Ther. 2011 Jan. 33(1):48-58. [QxMD MEDLINE Link].
[Guideline] Mitchell RB, Archer SM, Ishman SL, et al. Clinical Practice Guideline: Tonsillectomy in Children (Update)-Executive Summary. Otolaryngol Head Neck Surg. 2019 Feb. 160 (2):187-205. [QxMD MEDLINE Link]. [Full Text].
Morad A, Sathe NA, Francis DO, McPheeters ML, Chinnadurai S. Tonsillectomy Versus Watchful Waiting for Recurrent Throat Infection: A Systematic Review. Pediatrics. 2017 Jan 17. [QxMD MEDLINE Link].
Swift D. Tonsillectomy Shows Short-term Benefits for Apnea, Sore Throat. Medscape Medical News. 2017 Jan 18. [Full Text].
Wilson JA, O'Hara J, Fouweather T, et al. Conservative management versus tonsillectomy in adults with recurrent acute tonsillitis in the UK (NATTINA): a multicentre, open-label, randomised controlled trial. Lancet. 2023 Jun 17. 401 (10393):2051-9. [QxMD MEDLINE Link]. [Full Text].
Wang YP, Wang MC, Lin HC, Lee KS, Chou P. Tonsillectomy and the risk for deep neck infection-a nationwide cohort study. PLoS One. 2015. 10 (4):e0117535. [QxMD MEDLINE Link].
Boggs W. Systemic Steroids Increase Post-tonsillectomy Bleeding in Children. Medscape. Sep 23 2014. Available at Accessed: Sep 29, 2014.
Suzuki S, Yasunaga H, Matsui H, Horiguchi H, Fushimi K, Yamasoba T. Impact of Systemic Steroids on Posttonsillectomy Bleeding: Analysis of 61?430 Patients Using a National Inpatient Database in Japan. JAMA Otolaryngol Head Neck Surg. 2014 Sep 18. [QxMD MEDLINE Link].
Spektor Z, Saint-Victor S, Kay DJ, Mandell DL. Risk factors for pediatric post-tonsillectomy hemorrhage. Int J Pediatr Otorhinolaryngol. 2016 May. 84:151-5. [QxMD MEDLINE Link].
Kshirsagar R, Mahboubi H, Moriyama D, Ajose-Popoola O, Pham NS, Ahuja GS. Increased immediate postoperative hemorrhage in older and obese children after outpatient tonsillectomy. Int J Pediatr Otorhinolaryngol. 2016 May. 84:119-23. [QxMD MEDLINE Link].
De Luca Canto G, Pacheco-Pereira C, Aydinoz S, et al. Adenotonsillectomy Complications: A Meta-analysis. Pediatrics. 2015 Oct. 136 (4):702-18. [QxMD MEDLINE Link].
Henderson D. One fifth of kids have complication after tonsillectomy. Medscape Medical News. Sep 23, 2015. [Full Text].
McLean JE, Hill CJ, Riddick JB, Folsom CR. Investigation of Adult Post-Tonsillectomy Hemorrhage Rates and the Impact of NSAID Use. Laryngoscope. 2021 Sep 2. [QxMD MEDLINE Link].
Tomkinson A, Harrison W, Owens D, Harris S, McClure V, Temple M. Risk factors for postoperative hemorrhage following tonsillectomy. Laryngoscope. 2011 Feb. 121 (2):279-88. [QxMD MEDLINE Link].
Galindo Torres BP, De Miguel Garcia F, Whyte Orozco J. Tonsillectomy in adults: analysis of indications and complications. Auris Nasus Larynx. 2017 Sep 16. [QxMD MEDLINE Link].
Shah, Udayan K. Tonsillectomy & Adenoidectomy: Techniques and Technologies. Madison WI. 2008: Omnipress, Inc. ISBN 978-0-615-23355-0.; 2008.
Sivaji N, Arshad FA, Karkos PD. A novel method of draining a peritonsillar abscess. Clin Otolaryngol. 2011 Apr. 36(2):189-90. [QxMD MEDLINE Link].
Chau JK, Seikaly HR, Harris JR, Villa-Roel C, Brick C, Rowe BH. Corticosteroids in peritonsillar abscess treatment: A blinded placebo-controlled clinical trial. Laryngoscope. 2014 Jan. 124(1):97-103. [QxMD MEDLINE Link].
Windfuhr JP, Zurawski A. Peritonsillar abscess: remember to always think twice. Eur Arch Otorhinolaryngol. 2015 Mar 21. [QxMD MEDLINE Link].
[Guideline] Bann DV, Patel VA, Saadi RA, et al. Best Practice Recommendations for Pediatric Otolaryngology During the COVID-19 Pandemic. Otolaryngol Head Neck Surg. 2020. [Full Text].
[Guideline] Harris AM, Hicks LA, Qaseem A, et al. Appropriate Antibiotic Use for Acute Respiratory Tract Infection in Adults: Advice for High-Value Care From the American College of Physicians and the Centers for Disease Control and Prevention. Ann Intern Med. 2016 Mar 15. 164 (6):425-34. [QxMD MEDLINE Link]. [Full Text].
Media Gallery
Acute bacterial tonsillitis is shown. The tonsils are enlarged and inflamed with exudates. The uvula is midline.
Tonsillitis caused by Epstein-Barr infection (infectious mononucleosis). The enlarged inflamed tonsils are covered with gray-white patches.
Examination of the tonsils and pharynx.
Oral mucosal examination.
of 4
Tables
Back to List
Contributor Information and Disclosures
Author
Udayan K Shah, MD, FACS, FAAP Professor of Otolaryngology-Head and Neck Surgery and Pediatrics, Sidney Kimmel Medical College of Thomas Jefferson University; Chief of Credentialing, Nemours Children's Health System; Chief of Otolaryngology, Nemours Delaware Valley
Udayan K Shah, MD, FACS, FAAP is a member of the following medical societies: American Academy of Otolaryngology-Head and Neck Surgery, American Academy of Pediatrics, American College of Surgeons, American Society of Pediatric Otolaryngology, Phi Beta Kappa, Society for Ear, Nose and Throat Advances in Children
Disclosure: Nothing to disclose.
Specialty Editor Board
Ted L Tewfik, MD Professor of Otolaryngology-Head and Neck Surgery, Professor of Pediatric Surgery, McGill University Faculty of Medicine; Senior Staff, Montreal Children's Hospital, Montreal General Hospital, and Royal Victoria Hospital
Ted L Tewfik, MD is a member of the following medical societies: American Society of Pediatric Otolaryngology, Canadian Society of Otolaryngology-Head & Neck Surgery
Disclosure: Nothing to disclose.
Chief Editor
Arlen D Meyers, MD, MBA Emeritus Professor of Otolaryngology, Dentistry, and Engineering, University of Colorado School of Medicine
Arlen D Meyers, MD, MBA is a member of the following medical societies: American Academy of Facial Plastic and Reconstructive Surgery, American Academy of Otolaryngology-Head and Neck Surgery, American Head and Neck Society
Disclosure: Serve(d) as a director, officer, partner, employee, advisor, consultant or trustee for: Cerescan; Neosoma; MI10;Invitrocaptal,MedtechsyndicatesReceived income in an amount equal to or greater than $250 from: Neosoma; Cyberionix (CYBX);MI10;Invitrocaptal;MTSReceived ownership interest from Cerescan for consulting for: Neosoma, MI10 advisor.
Acknowledgements
Ari J Goldsmith, MD Chief of Pediatric Otolaryngology, Long Island College Hospital; Associate Professor, Department of Otolaryngology, Division of Pediatric Otolaryngology, State University of New York Downstate Medical Center
Ari J Goldsmith, MD is a member of the following medical societies: American Academy of Otolaryngology-Head and Neck Surgery, American Medical Association, and Medical Society of the State of New York
Disclosure: Nothing to disclose.
Francisco Talavera, PharmD, PhD Adjunct Assistant Professor, University of Nebraska Medical Center College of Pharmacy; Editor-in-Chief, Medscape Drug Reference
Disclosure: Medscape Salary Employment
Close;)
What would you like to print?
What would you like to print?
Print this section
Print the entire contents of
Print the entire contents of article
encoded search term (Tonsillitis and Peritonsillar Abscess) and Tonsillitis and Peritonsillar Abscess
What to Read Next on Medscape
Related Conditions and Diseases
Pierre Robin Syndrome
Otolaryngologic Manifestations of Granulomatosis With Polyangiitis
Skill Checkup: Peritonsillar Abscess Drainage
Biologic Therapies in Refractory Chronic Rhinosinusitis With Nasal Polyps
Trending Clinical Topic: Tinnitus
Myringitis (Middle Ear, Tympanic Membrane, Inflammation)
News & Perspective
Study Reveals Workplace Discrimination in Otolaryngology
Low-Dose CT Vs Chest X-Ray for Lung Cancer Surveillance in HNSCC: Is One Better?
Does Vitamin D Deficiency Worsen Pediatric Obstructive Sleep Apnea?
American Academy of Otolaryngology–Head and Neck Surgery (AAO-HNS) 2024 Annual Meeting
MSK Pediatrics e-Tumor Boards: Case 2: Multisystem Rosai-Dorfman Disease
Shrinking Thyroid Nodules with Radiofrequency Ablation
Drug Interaction Checker
Pill Identifier
Calculators
Formulary
A Lump in the Throat: Thyroid Cancer
Log in or register for free to unlock more Medscape content
Unlimited access to our entire network of sites and services
Log in or Register |
17260 | https://books.google.com/books/about/Injuries_of_the_eye_and_their_medico_Leg.html?id=kzIOqtBAC_wC | Sign in
Try the new Google Books
Check out the new look and enjoy easier access to your favorite features
Try it now
No thanks
Try the new Google Books
Try the new Google Books
My library
Help
Advanced Book Search
Download EPUB
Download PDF
Read eBook
Get this book in print
AbeBooks
On Demand Books
Amazon
Find in a library
All sellers »
| |
| Injuries of the eye and their medico-Legal aspect Ferdinand Arlt Claxton, Remsen & Haffelfinger, 1878 - Eye - 198 pages A controversial new theory that the origins of spoken language, music, and art lie in the early communication between mothers and infants.--From publisher description. Preview this book » |
Selected pages
Title Page
Index
Contents
| | |
--- |
| Section 1 | 5 |
| |
| Section 2 | 7 |
| |
| Section 3 | 9 |
| |
| Section 4 | 13 |
| |
| Section 5 | 19 |
| |
| | |
--- |
| Section 6 | 59 |
| |
| Section 7 | 94 |
| |
| Section 8 | 164 |
| |
| Section 9 | 181 |
| |
Common terms and phrases
abscess absorption anterior chamber appearance aqueous humor atropine bandage become Berlin blood capsular wound cataract cause chamber changes choroidal rent cicatricial cicatrix ciliary body ciliary processes concussion conjunctiva connective tissue considerable contusion cornea corneal margin crystalline body crystalline lens cyclitis cyst degree Diagnosis dilated direction distance entrance eschar especially examination extensive external extraction extravasation eyeball favorable focal illumination foreign body frequently glasses globe Graefe hemorrhage inflammation inflammatory injection injury inwards iridectomy iris iritis irritation knife laceration latter left eye less lids luxation membrane millimetres nerve object occur ocular opacity opaque operation ophthalmoscope optic orbital pain passed patient penetrating wounds perception of light perforating perhaps portion pressure produced Prognosis prolapse pupil pupillary refraction region remain removal result retina retinal detachment right eye scalding sclera sclerotic sequelæ substance sudden compression suppuration symblepharon symptoms takes place tension tion transparent Treatment tunics upper usually vision vitreous body vitreous humor wounded surfaces zonula
Popular passages
Page 4 - Injuries of the Eye and their Medico-Legal Aspect. By FERDINAND VON ARLT, MD, Professor of Ophthalmology in the University of Vienna.
Appears in 11 books from 1873-1879
Page 15 - Injuries produced by the entrance of a foreign body not acting chemically.
Appears in 7 books from 1866-1890
More
Page 6 - LEA, in the Office of the Librarian of Congress. All rights reserved. PHILADELPHIA : COLLINS, PRINTER, 705 Jayne Street.
Appears in 14 books from 1871-1892
Page 22 - Suppose we consider the point attacked as the pole, and the direction of the attacking force as the axis of a sphere, then the equator of the latter must become longer at the moment of the injury.
Appears in 3 books from 1878-1884
Page 16 - In addition, a fourth chapter is devoted to a consideration of such affections as are either feigned or produced artificially and intentionally.
Appears in 3 books from 1866-1879
Page 28 - The cornea itself was round, somewhat flattened, its circumference diminished, and its sclerotic rent to the conical margin has an additional reason in the histological fact, that the fibres of the sclerotic coat run parallel to the latter within the confines of the ciliary region.
Appears in 3 books from 1878-1884
Page 120 - The removal of the lens, through a similar though more extensive corneal opening, is to be considered more as a doubtful remedy, as we seldom succeed in removing the lens as a whole, or even its greater part, and thus do not obviate the dangers of mechanical irritation, or of increased pressure ; perhaps, indeed, we even increase them.
Appears in 2 books from 1878-1899
Page 25 - ... and which result from blunt force. It is remarkable that lacerations of this nature, through which more or less of the fluid contents of the eyeball, and sometimes even portions of the iris or the whole lens, escape instantaneously, always run parallel or nearly so to the corneal margin; and also that they occur almost invariably at the upper part, usually above and to the inside.
Appears in 2 books from 1878-1884
Page 20+of+the+sphincter%3B+suspension+of+accommodation+%3B+bursting+of+the+capsule+%3B+overstretching,+or+laceration,+of+the+zonula,+with%22&source=gbs_quotes_r&cad=4) - ... contractions) of the sphincter; suspension of accommodation ; bursting of the capsule ; overstretching, or laceration, of the zonula, with various degrees of formative and other changes in the lens...
Appears in 2 books from 1878-1884
Page 19 - An eye, injured in this manner, exhibits, independently of the palpebral changes, the following appearances : extravasation of blood under the conjunctiva ; opacity, with subsequent inflammations and suppuration of the cornea, either with or without a solution of...
Appears in 2 books from 1878-1884
Less
Bibliographic information
| | |
--- |
| Title | Injuries of the eye and their medico-Legal aspectFrancis A. Countway Library of Medicine--Medical Heritage Library digitization project |
| Author | Ferdinand Arlt |
| Publisher | Claxton, Remsen & Haffelfinger, 1878 |
| Length | 198 pages |
| | |
| Export Citation | BiBTeX EndNote RefMan |
About Google Books - Privacy Policy - Terms of Service - Information for Publishers - Report an issue - Help - Google Home |
17261 | https://thirdspacelearning.com/gcse-maths/number/simple-interest-and-compound-interest/ | What are simple interest and compound interest?
How to work out simple and compound interest
Simple interest and compound interest worksheets
Simple and compound interest examples
↓
Example 1: Simple Interest – Percentage Increase Example 2: Simple Interest – Percentage Decrease Example 3: Simple Interest – Different Time Scale Example 4: Compound Interest – Percentage Increase Example 5: Compound Interest – Percentage Decrease Example 6: Compound Interest – Different Time Scale
Common misconceptions
Simple and Compound Interest practice questions
Simple and Compound Interest Exam Questions:
Learning checklist
Next lessons
Still Stuck?
GCSE Tutoring Programme
"Our chosen students improved 1.19 of a grade on average - 0.45 more than those who didn't have the tutoring."
Teacher-trusted tutoring
In order to access this I need to be confident with:
Converting percentages to decimals Substitution Laws of indices Percentage change Calculator skills
This topic is relevant for:
Introduction
What are simple interest and compound interest?
How to work out simple and compound interest
↓
Depreciation
Simple interest and compound interest worksheets
Simple and compound interest examples
↓
Example 1: Simple Interest – Percentage Increase Example 2: Simple Interest – Percentage Decrease Example 3: Simple Interest – Different Time Scale Example 4: Compound Interest – Percentage Increase Example 5: Compound Interest – Percentage Decrease Example 6: Compound Interest – Different Time Scale
Common misconceptions
Simple and Compound Interest practice questions
Simple and Compound Interest Exam Questions:
Learning checklist
Next lessons
Still Stuck?
Simple Interest And Compound Interest
Here is everything you need to know about simple and compound interest for GCSE maths (Edexcel, AQA and OCR). You’ll learn how to calculate simple and compound interest for increasing and decreasing values, and set-up, solve and interpret growth and decay problems.
Look out for the simple & compound interest worksheets and exam questions at the end.
What are simple interest and compound interest?
Simple and compound interest are two ways of calculating interest:
Simple interest is calculated on the original (principal) amount, whereas compound interest is calculated on the original amount and on the interest already accumulated on it.
The difference between simple and compound interest is that simple interest is calculated using only the original amount whereas compound interest works out the interest on a previous amount as well.
The formula for calculating the simple interest earned on an investment is
A=Prt
And we can calculate the value of the investment, A, after the time period with the formula:
\begin{aligned} A&=P+Prt \\ & =P\left( 1+rt \right) \end{aligned}
The formula for calculating compound interest is:
A=P(1+\frac{r}{100})^{n}
Where:
I represents the simple interest
A represents the final amount
P represents the original principal amount
r is the interest rate
n represents the number of times the interest rate is applied
Check out the comparative example below to see the similarities and differences between the two forms of interest.
Step-by-step guide: Simple interest
Step-by-step guide: Compound interest
What are simple and compound interest?
How to work out simple and compound interest
In order to calculate simple or compound interest:
State the formula and the value of each variable.
Substitute the values into the formula.
Solve the equation.
E.g
\bf{£100} is invested for \bf{3} years at \bf{2\%} per year. Find the final value.
Simple interest
A=P(1+rt) \
Here:
P=100
r=0.02 (as 2% = 0.02)
t=3
Substituting these values into the simple interest formula
A=P(1+rt)
We get:
A = 100(1+0.02\times{3})\
A = 100(1+0.06)\
A = 100(1.06)\
A = 100\times{1.06}\
A = £106
Compound interest
A=P(1+\frac{r}{100})^{n} \
Here:
P=100
r=2
n=3
Substituting these values into the compound interest formula
A=P(1+\frac{r}{100})^{n} \
We get:
A = 100(1+(\frac{2}{100})^{3}
A = 100(1+0.02)^3\
A = 100(1.02)^3\
A = 100 x 1.02^3\
A = £106.12
Simple and compound interest are used widely in real life, especially in financial mathematics for sale prices, borrowing money on a credit card or taking out a loan, the amount of money invested in the stock market and house prices to name a few.
Explain how to calculate simple or compound interest in 3 steps.
Depreciation
When the value of an asset reduces in price, it is known as depreciation.
Depreciation can be calculated the same way as compound interest but we must remember that the multiplier being used will be less than 1.
E.g
A car worth £20 \ 000 depreciates by 10\% per year for 2 years.
To find the value after 2 years we calculate using the multiplier of 0.9.
20 \ 000 \times 0.9^{2}=16 \ 200
The car will be worth £16 \ 200.
Step-by-step guide: Depreciation
Simple interest and compound interest worksheet
Get your free simple and compound interest worksheet of 20+ questions and answers. Includes reasoning and applied questions.
DOWNLOAD FREE
x
Simple interest and compound interest worksheet
Get your free simple and compound interest worksheet of 20+ questions and answers. Includes reasoning and applied questions.
DOWNLOAD FREE
Simple and compound interest examples
Example 1: Simple Interest – Percentage Increase
£1500 is invested for 4 years at 5\% per year simple interest. What is the value of the investment after this time?
State the formula and the value of each variable.
Here we use the formula A=P (1+rt) with:
P=1500
r=0.05
t=4
2 Substitute the values into the formula.
Substituting these values into the simple interest formula A=P (1+rt) , we get:
A=1500(1+0.05\times{4})
3 Solve the equation.
\begin{aligned} &A=1500(1+0.20)\\ &A=1500(1.20)\\ &A=1500\times{1.2}\\ &A=\pounds1800. \end{aligned}
Example 2: Simple Interest – Percentage Decrease
A car is bought for £10,000 and loses 9\% of its value per year simple interest. What is the value of the car after 8 years?
State the formula and the value of each variable.
Here we use the formula A=P (1+rt) with:
P=10000
r=-0.09
t=8
Substitute the values into the formula.
Substituting these values into the simple interest formula A=P (1+rt) , we get:
A=10000(1-0.09\times{8})
Solve the equation.
\begin{aligned} &A=10000(1-0.72)\\ &A=10000(0.28)\\ &A=10000\times{0.28}\\ &A=\pounds2800. \end{aligned}
Example 3: Simple Interest – Different Time Scale
£7600 is invested for 2 years at 1\% per month simple interest. What is the value of the investment after this time?
State the formula and the value of each variable.
Here we use the formula A=P (1+rt) with:
P=7600
r=0.01
t= 2 \times 12 = 24 (remember there are 12 months in 1 year)
Substitute the values into the formula.
Substituting these values into the simple interest formula A = P (1 + rt), we get:
A=7600(1+0.01\times{24})
Solve the equation.
\begin{aligned} &A=7600(1+0.24)\\ &A=7600(1.24)\\ &A=7600\times{1.24}\\ &A=\pounds{9424}. \end{aligned}
Example 4: Compound Interest – Percentage Increase
£8500 is invested for 5 years into a bank account at 0.3\% per year compound interest. What is the value of the investment after this time?
State the formula and the value of each variable.
Here we use the formula A=P(1+\frac{r}{100})^{n} with:
P=8500
r=0.3
n=5
Substitute the values into the formula.
Substituting these values into the compound interest formula A=P(1+\frac{r}{100})^{n}, we get:
A=8500(1+\frac{0.3}{100})^{5}
Solve the equation.
\begin{aligned} &A=8500(1+0.003)^{5}\\ &A=8500(1.003)^{5}\\ &A=8500\times1.009027\\ &A=\pounds{8576.73} \end{aligned}
Example 5: Compound Interest – Percentage Decrease
A vacuum cleaner is bought for £399 and loses 22\% of its value per year compound interest. What is the value of the vacuum cleaner after 2 years?
State the formula and the value of each variable.
Here we use the formula A=P(1+\frac{r}{100})^{n} with:
P=399
r=-22
n=2
Substitute the values into the formula.
Substituting these values into the compound interest formula A=P(1+\frac{r}{100})^{n}, we get:
A=399(1-\frac{22}{100})^{2}
Solve the equation.
\begin{aligned} &A=399(1-0.22)^{2}\\ &A=399(0.78)^{2}\\ &A=399\times0.6084\\ &A=\pounds{242.75} \end{aligned}
Example 6: Compound Interest – Different Time Scale
A house is valued at £150,000. On average, the house price increases by 0.24\% per month over a period of 2.5 years compound interest. What is the value of the house after this time?
State the formula and the value of each variable.
This time the interest rate in monthly so we need to convert the time into months.
2.5 year is the same as 2.5\times 12 = 30 months.
Here we use the formula
A=P(1+\frac{r}{100})^{n} with:
P=150000
r=0.24
n=30
Substitute the values into the formula.
Substituting these values into the compound interest formula A=P(1+\frac{r}{100})^{n}, we get:
A=150000(1+\frac{0.24}{100})^{30}
Solve the equation
\begin{aligned} &A=150000(1+0.0024)^{30}\\ &A=150000(1.0024)^{30}\\ &A=\pounds161184.40 \end{aligned}
Common misconceptions
Applying the incorrect formula to the question
This is a very common mistake where the simple interest on an amount is calculated instead of using the compound interest formula.
Incorrect percentage change due to different time scales
Here is example 6 with this misconception:
A house is valued at £150,000. On average, the house price increases by 2.4\% per month over a period of 2.5 years compound interest. What is the value of the house after this time?
Here we use the formula A=P(1+\frac{r}{100})^{n} with:
P=150000
r=0.24
n=2.5
\begin{aligned} &A=150000(1.0024)^{2.5}\\ &A=\pounds150901.62 \end{aligned}
Although the answer here looks reasonable, the 0.24\% interest is per month not per year, which is what has been calculated.
Using the incorrect value for the percentage change
Not dividing the percentage rate by 100 is a common error. For example when using compound interest to increase £100 by 2% for 5 years, this calculation is made:
\begin{aligned} &A=100(1+2)^{5}\\ &A=100\times{243}\\ &A=\pounds{24300} \end{aligned}
Is the value increasing or decreasing?
If a value is depreciating (going down), the value of r is negative whereas it is incorrectly used as a positive and so the answer will be larger than the original amount.
Simple and Compound Interest practice questions
The probability of a team winning their next three football matches is 65\% per match. What is the probability that they win all three matches?
Write your answer as a percentage.
4.29\%
27.46\%
35\%
65\%
We can write the percentage as a fraction and then apply the “and” rule from probability, so the probability is given by
\frac{65}{100} \times \frac{65}{100} \times \frac{65}{100}
This can be converted to a percentage afterwards.
The population of bees in a hive is expected to increase by 5\% per year simple interest.
The population of bees in the hive is approximately 56,000 .
Work out the population of bees after 3 years.
64,400
58,800
61,600
64,827
5\% of 56,000 is 2800 , so over the 3 years the population increases by 8,400 .
Adding 8,400 to 56,000 , we get 64,400 .
A bucket holds 25L of water. A hole in the bucket starts to leak water, losing 12\% every minute compound depreciation.
Calculate the total amount of water in the bucket after 10 minutes?
13L
22L
6.96L
77.65L
The original amount is 25 , the decrease is 12\% per minute compounded over 10 minutes, so our calculation is
\A=25(1-0.12)^{10}\ \A=6.96\
The height of an apple tree increases at an average rate of 1.6\% per month, compound interest.
The apple tree is currently 1m tall.
How tall will the apple tree be after 2 years?
1.6m
2.6m
1.46m
1.38m
The original amount is 1 , the increase is 1.6\% per month compounded over 24 months, so our calculation is
\A=1(1+0.016)^{24}\ \\A=1.46\
The population of chickens on an island is estimated to be around 20,000 .
With few natural predators, the chicken population has exploded, increasing the population by 3.6\% per year simple interest.
Estimate how many chickens there are on the island after 15 years.
72,000
30,800
20,720
33,996
3.6\% of 20,000 is 720 , so over the 15 years the population increases by 10,800 .
Adding 10,800 to 20,000 , we get 30,800 .
A vintage car was valued at \pounds 650,000 5 years ago. For the first 3 years, the value of the car depreciates by 2\% every year, using compound interest.
After this, the value increases by 5\% per year for the next 2 years, using compound interest.
What is the current value of the car?
\pounds 587,548
\pounds 752,456
\pounds 674,482
\pounds 611,775
The starting amount is 650,000 . To begin with, there is a 2\% decrease per year compounded over 3 years, then a 5\% increase per year compounded over 2 years. This can be combined into one calculation, so
\V=650000(1-0.02)^{3}(1+0.05)^{2}\ \V=674482\
Simple and Compound Interest Exam Questions:
1. Ed invested \pounds 1500 into a savings account. It earned compound interest at an annual interest rate of 0.72\% for 5 years.
He wrote down how much money he would receive after 5 years. Here is his calculation:
1500\times{0.72}\times{5}=£5400
(a) What are the two mistakes that Ed has made?
________________________________________________
________________________________________________
(b) What amount would Ed have after 5 years?
(4 marks)
Show answer
(a)
Ed has used simple interest.
(1)
Ed has not converted the percentage correctly ( 72\% interest).
(1)
(b)
1500 \times 1.0072^5
(1)
\pounds 1554.78
(1)
2. In 2003 , the population of bats in a cave was approximately 20 million.
After 5 years, the population grew by 3\% every year, and then for the next 3 years, the population gradually declined by 2\% each year.
How many more bats are in that cave in 2011 . Write your answer in standard form to 3 significant figures.
(5 marks)
Show answer
20,000,000 \times 1.03^5
(1)
20,000,000 \times 1.03^5 \times 0.98^3 or 23185481.49 \times 0.98^3
(1)
21821990
(1)
21,821,990-20,000,000=1821990
(1)
1.82 \times 10^6
(1)
3. (a) \pounds 30,000 is invested with an annual percentage rate of 0.1\% simple interest rate per month into Account A . How many months will it take for the investment to reach \pounds 31,200 ?
(b) Account B offers a compound interest rate of 1.5\% per year. Which account would provide the most interest after exactly 3 years?
(7 marks)
Show answer
(a)
30,000 \times 0.001 = £30 (monthly payment)
(1)
31,200-30,000=£1200 interest
(1)
£1200 \ / \ 30 = 40 months
(1)
(b)
30,000 \times 1.015^3
(1)
£31370.35
(1)
30,000 + (30 \times 36) = 31080
(1)
Account B
(1)
Learning checklist
You have now learned how to:
Solve problems involving percentage change, including: percentage increase, decrease and original value problems and simple interest in financial mathematics
Set up, solve and interpret the answers in growth and decay problems, including compound interest (and work with general iterative processes)
The next lessons are
Surds
Types of numbers
Standard form
Still stuck?
Prepare your KS4 students for maths GCSEs success with Third Space Learning. Weekly online one to one GCSE maths revision lessons delivered by expert maths tutors.
Find out more about our GCSE maths tuition programme.
Introduction
What are simple interest and compound interest?
How to work out simple and compound interest
↓
Depreciation
Simple interest and compound interest worksheets
Simple and compound interest examples
↓
Example 1: Simple Interest – Percentage Increase Example 2: Simple Interest – Percentage Decrease Example 3: Simple Interest – Different Time Scale Example 4: Compound Interest – Percentage Increase Example 5: Compound Interest – Percentage Decrease Example 6: Compound Interest – Different Time Scale
Common misconceptions
Simple and Compound Interest practice questions
Simple and Compound Interest Exam Questions:
Learning checklist
Next lessons
Still Stuck? |
17262 | https://www.physio-pedia.com/Extensor_Carpi_Radialis_Longus | Contents
Editors
Categories
Cite
Contents
1 Description
2 Structure
2.1 Origin
2.2 Insertion
2.3 Innervation
2.4 Blood supply
3 Function
4 Clinical Relevance
5 Assessment
5.1 Active movements
5.2 Passive movements
5.3 Resisted movements
5.4 Palpation
6 See Also
7 Resources
Original Editor - Wanda van Niekerk
Top Contributors - Wanda van Niekerk, Nina Myburg, Kim Jackson, Carina Therese Magtibay, Rachael Lowe, Amanda Ager and Aya Alhindi
Categories
Anatomy
Elbow - Anatomy
Wrist - Anatomy
Hand - Anatomy
Muscles
Elbow - Muscles
Wrist - Muscles
Hand - Muscles
Elbow
Wrist
Hand
When refering to evidence in academic writing, you should always try to reference the primary (original) source. That is usually the journal article where the information was first stated. In most cases Physiopedia articles are a secondary source and so should not be used as references. Physiopedia articles are best used to find the original sources of information (see the references list at the bottom of the article).
If you believe that this Physiopedia article is the primary source for the information you are refering to, you can use the button below to access a related citation statement.
Cite article
Extensor Carpi Radialis Longus
Jump to:navigation, search
Original Editor - Wanda van Niekerk
Top Contributors - Wanda van Niekerk, Nina Myburg, Kim Jackson, Carina Therese Magtibay, Rachael Lowe, Amanda Ager and Aya Alhindi
Contents
1 Description
2 Structure
2.1 Origin
2.2 Insertion
2.3 Innervation
2.4 Blood supply
3 Function
4 Clinical Relevance
5 Assessment
5.1 Active movements
5.2 Passive movements
5.3 Resisted movements
5.4 Palpation
6 See Also
7 Resources
Related online courses on +Physiopedia Plus
Online Course: Extensor Tendon Injury Management
Tendon Injury Management Effectively manage extensor tendon rehabilitation following surgery Start course 1.5-2 hours - - - - Powered by Physiopedia Course instructor Kate Thorn Kate is committed to providing and teaching high-quality evidence-based hand therapy Summarising Summarising the latest
•
ONLINE COURSE
Extensor Tendon Injury Management
Presented by:
Kate Thorn
ONLINE COURSE
Extensor Tendon Injury Management
Presented by:
Kate Thorn
Description[edit | edit source]
Extensor Carpi Radialis Longus
Extensor carpi radialis longus is a muscle that can be found in the posterior compartment of the forearm. It is partly overlapped by brachioradialis and these muscles often blend together. As its name suggests it is a wrist extensor and can be palpated infero-posteriorly to the elbow.
Structure[edit | edit source]
Extensor carpi radialis longus is a fusiform muscle that forms a flattened tendon which runs distally over the lateral surface of the radius. In the lower third of the forearm the tendon, together with that of extensor carpi radialis brevis, is crossed by the tendons of abductor pollicis brevis and extensor pollicis brevis. The tendons of extensor carpi radialis longus and brevis pass deep to the extensor retinaculum in a common synovial sheath. Together they groove the posterior surface of the styloid process of the radius.
Origin[edit | edit source]
Anterior lower third of the lateral supracondylar ridge of the humerus and adjacent intermuscular septum. Sometimes, it may be attached to the lateral eipcondyle by the common extensor tendon.
Insertion[edit | edit source]
The radial side of the dorsal surface of the base of the second metacarpal bone.
PAI: Your AI Clinical Assistant Powered by Physiopedia
Make confident clinical decisions, cut documentation time and prioritize patient care
Yes please!
Innervation[edit | edit source]
Radial nerve (root value C6 and C7) from the posterior cord of the brachial plexus.The roots of C5 and C6 innervate the skin over the muscle.
Blood supply[edit | edit source]
Radial artery
Function[edit | edit source]
Extensor carpi radialis longus together with extensor carpi radialis brevis produce wrist extension and abduction (radial deviation).
It also may help to flex the elbow joint and is active during fist clenching.
Extensor carpi radialis longus is one of three primary wrist extensors. It is most effective as a wrist extensor when the elbow is extended and when radial deviation is balanced by the primary ulnar deviator- extensor carpi ulnaris.
Clinical Relevance[edit | edit source]
Functionally, the wrist extensors work strongly in the action of gripping, together with extensor carpi ulnaris. By keeping the wrist in extension, flexion of the wrist by flexors digitorum superficialis and profundus is prevented, with the result that these muscles act on the fingers. When the wrist then moves into flexion, the flexor tendons are unable to shorten enough to generate effective movement at the interphalangeal joints, resulting in a state of active insufficiency.
Radial nerve damage leads to paralysis of the wrist extensors. This leads to a decreased grip strength. However, when the wrist is stabilised (splinted) in extension, the flexor digitorum superficialis and profundus tendons effectively engage the fingers and this allows for a functional grip.
Intersection Syndrome is a bursitis that occurs at the site where the abductor pollicis longus and extensor pollicis brevis tendons cross over the extensor carpi radialis tendons proximal to the extensor retinaculum. This may be due to friction at this site of crossing or it may occur from tenosynovitis of the two extensor tendons within their synovial sheath. This leads to tenderness, swelling and crepitus and can be confused with de Quervain's syndrome. This condition is often seen in rowers, but also in canoeists and racket sportsmen.
Assessment[edit | edit source]
Active movements[edit | edit source]
Elbow flexion/extension
Supination/pronation
Wrist flexion/extension
Radial/ulnar deviation
Passive movements[edit | edit source]
Elbow flexion/extension
Supination/pronation
Wrist flexion/pronation
Radial/ulnar deviation
Resisted movements[edit | edit source]
Wrist extension
Wrist abduction (radial deviation)
Wrist extension together with abduction
Grip test
Palpation[edit | edit source]
Lateral epicondyle
With resisted wrist extension and abduction, the extensor carpi radialis longus and brevis can be palpated in the upper lateral portion of the posterior forearm. The tendon of extensor carpi radialis longus, in particular, can be felt in the floor of the anatomical snuffbox when these movements (wrist extension and abduction) are performed.
See Also[edit | edit source]
Elbow examination
Elbow
Wrist and hand examination
Related articles
Extensor Carpi Radialis Brevis - Physiopedia
Description Extensor carpi radialis brevis (ECRB) is a short muscle emerging underneath the extensor carpi radialis longus and both muscles share a common tendinous synovial sheath. It is an extensor muscle located superficially at the posterior compartment of the forearm. It is the prime dorsiflexor of the wrist. Origin[edit | edit source] It originates from the common extensor tendon with extensor carpi ulnaris, extensor digiti minimi, and extensor digitorum at the Lateral epicondyle of humerus together. Insertion[edit | edit source] It attaches at the radial side dorsal surface of the base of the 3rd metacarpal. The extensor carpi radialis brevis shares a common synovial sheath with the extensor carpi radialis longus. Nerve[edit | edit source] ECRB is innervated by the deep branch of radial nerve (7th and 8th cervical nerve root) before the nerve courses through the two heads of the supinator muscle . The posterior interosseous nerve provides motor innervation to the deep and superficial extensors of the posterior compartment and the extensor carpi radialis brevis. Artery[edit | edit source] The main blood supply to ECRB is from the radial artery. Other augmentations are from the radial collateral branch from profunda brachii and the radial recurrent artery. Function[edit | edit source] ECRB extends and abducts the wrist The whole muscle is summarised in the following video. Watch the video to get an animated and clear understanding of the concept. Clinical relevance[edit | edit source] Lateral epicondylitis presents as pain on the lateral aspect of the elbow with or without loss of grip strength that is aggravated with activity and it will most likely have extensor carpi radialis brevis muscle affectation. In cases of radial nerve compromise proximal to its division to deep and superficial branches at the cubital fossa, then some functional loss of wrist and digit extension will be present. Assessment[edit | edit source] With the hand in pronation, the ECRB muscle can be palpated during extension and abduction of the wrist against resistance.
[Extensor Carpi Ulnaris - Physiopedia
Search Search Search Toggle navigation pPhysiopedia pPhysiopedia About News Contribute Courses PAI Resources Shop Contact Login pPhysiopedia About News Contribute Courses PAI Resources Shop Contact p + Contents Editors Categories Cite Contents loading... Editors loading... Categories loading... When refering to evidence in academic writing, you should always try to reference the primary (original) source. That is usually the journal article where the information was first stated. In most cases Physiopedia articles are a secondary source and so should not be used as references. Physiopedia articles are best used to find the original sources of information (see the references list at the bottom of the article). If you believe that this Physiopedia article is the primary source for the information you are refering to, you can use the button below to access a related citation statement. Cite article Extensor Carpi Ulnaris Jump to:navigation, search Original Editor - Rania Nasr Top Contributors - Kirenga Bamurange Liliane , Rania Nasr , Kim Jackson , Carina Therese Magtibay , Aya Alhindi , Alicia Fernandes , Nikhil Benhur Abburi and Wanda van Niekerk Contents 1 Description 1.1 Origin 1.2 Insertion 1.3 Nerve 1.4 Artery 2 Lymphatics 3 Function 4 Clinical Relevance 4.1 Contribution to Wrist Function: 4.2 Common Injuries in Athletes: 4.2.1 Tenosynovitis: 4.2.2 Tendinopathy: 4.2.3 Structural Damage and Partial Tear: 5 Assessment 6 Clinical Significance 7 Treatment 7.1 Acute Tendinosis 7.1.1 Rehabilitation for Early Reactive Phase 7.2 Chronic Tendinopathy: 8 Resources Description[edit | edit source] The extensor carpi ulnaris muscle is one of the extensor muscles of the forearm located in the superficial layer of the posterior compartment of the forearm. It shares this compartment with the brachioradialis, the extensor carpi radialis longus, the extensor carpi radialis brevis, the extensor digitorum, and the extensor digiti minimi. All of these muscles share a common origin on the lateral epicondyle via the common extensor tendon. As all of these muscles near their distal insertion sites, they are secured by the extensor retinaculum. Origin[edit | edit source] The extensor carpi ulnaris muscle originates from the lateral epicondyle of the distal humerus and the posterior aspect of the ulna. Insertion[edit | edit source] It inserts onto the dorsal base of the fifth metacarpal after passing through the sixth compartment of the extensor retinaculum. Nerve[edit | edit source] The posterior interosseous nerve, which is a motor branch of the radial nerve.The radial nerve arises from the brachial plexus by way of the posterior cord which has contributions from the spinal nerve roots of C5 to T1. Artery[edit | edit source] The ulnar artery, which branches off of the brachial artery near the antecubital fossa and supplies the medial aspect of the forearm. The posterior interosseous artery, a posterior branch of the radial artery, that runs between the superficial and deep extensor muscle groups and supplies them both. Lymphatics[edit | edit source] The lymphatic drainage of the upper limb consists of both superficial and deep lymphatic vessels. The deltopectoral lymph nodes are another potential drainage site. Function[edit | edit source] The extensor carpi ulnaris (ECU) plays a pivotal role in wrist and forearm function, contributing to both extension and adduction of the hand at the wrist, while also providing essential medial stability. Extension and Adduction Medial Stability- Apart from its role in movement, the ECU significantly contributes to the medial stability of the wrist, aiding in the prevention of excessive lateral deviation Fiber Origin: The ECU is characterized by its unique fiber origin, deriving from both the distal humerus as part of the common extensor tendon and the proximal ulna This comprehensive understanding of the ECU's function, encompassing its involvement in wrist movement and stability, is vital for physiotherapists in formulating effective rehabilitation strategies for patients with wrist and forearm injuries. Clinical Relevance[edit | edit source] The extensor carpi ulnaris (ECU) holds considerable clinical relevance in the context of wrist and forearm function, susceptibility to injuries, and the manifestation of related conditions. Contribution to Wrist Function:[edit | edit source] The ECU is a key contributor to the extension and adduction of the wrist, playing a crucial role in various activities involving these movements Common Injuries in Athletes:[edit | edit source] Athletes engaged in activities requiring forceful wrist movements are particularly prone to ECU injuries. These injuries may result from repetitive stress on the tendon during activities such as gripping, throwing, or racket sports Tenosynovitis:[edit | edit source] Repetitive flexion and extension of the wrist can lead to tenosynovitis, characterized by inflammation of the tendon and its sheath. This condition arises due to the constant irritation of the ECU tendon, impacting its smooth movement within the sheath Tendinopathy:[edit | edit source] Overuse of the ECU may result in tendinopathy, marked by thickening and painful stiffness of the tendon. Importantly, this condition may occur with minimal structural damage, highlighting the significance of early intervention and management Structural Damage and Partial Tear:[edit | edit source] Prolonged and excessive stress on the ECU tendon can lead to structural damage, potentially culminating in a partial tear. This underscores the importance of addressing overuse and providing appropriate rehabilitation to prevent further complications. Understanding the clinical implications of ECU injuries enables physiotherapists to tailor interventions for optimal patient outcomes, emphasizing the importance of early diagnosis and targeted rehabilitation. Assessment[edit | edit source] Ask the patient to pronate their forearms and extend their fingers. Place your hand along the hand's medial border to resist movement. The extended wrist is adducted against resistance. The muscle can be seen and felt in the proximal part of the forearm, and its tendon can be palpated proximal to the head of the ulna if it is working properly. Clinical Significance[edit | edit source] An accurate clinical history and assessment is essential for diagnosis of ECU tendon disorders. The timing of onset of symptoms discriminates between acute and chronic causes. Mechanical symptoms at the moment of onset are also common descriptors in this condition. Patients will use words such as ‘snap’, ‘pop’ or ‘tear’ in an acute sheath disruption. In some cases, episodes of tendon subluxation are excruciatingly painful. In others the subluxation may be entirely asymptomatic and may be easily reproduced by the patient. Palpation along the length of the ECU tendon (starting distally at its insertion into the base of the fifth metacarpal to ensure palpation of the correct structure) will reveal tenderness accurately localised to that structure. Pain on resisted active extension with ulnar deviation is pathognomonic of an ECU condition. Weakness is frequently associated with pain. Painless weakness is likely to represent a complete rupture of the ECU tendon. In equivocal or difficult cases, ultrasound (US) or MRI are the imaging modalities of choice to supplement the clinical diagnosis of ECU tendinopathy and instability. Conventional X-rays are not routinely required. Treatment[edit | edit source] Treatment Strategies for ECU Tendinosis and Tendinopathy aim to address ECU tendinosis and tendinopathy at various stages, providing a comprehensive approach to manage symptoms and promote recovery Acute Tendinosis[edit | edit source] Acute tendinosis of the ECU typically responds well to non-operative measures: Rest: Allowing the affected wrist to rest to promote healing. Activity Modification: Adjusting activities to reduce strain on the ECU tendon. Splintage: Immobilization in a short-arm plaster cast, positioned at 30° wrist extension and ulnar deviation, for approximately three weeks Rehabilitation for Early Reactive Phase[edit | edit source] For the early reactive phase of tendinopathy: Load Management: Gradual reintroduction of load through controlled exercises. Isometric Exercises:Engaging in exercises that involve muscle contraction without joint movement, helping to manage pain over 5–10 days. Pharmacological Support:Consideration of ibuprofen as an adjunct for its anti-inflammatory properties during this phase Chronic Tendinopathy:[edit | edit source] In cases of chronic tendinopathy without a sudden increase in pain: Load Management: Ongoing management to control and distribute load. Eccentric Work: Incorporation of eccentric exercises to address tendon strength and resilience. Isometrics and Strength Exercises: Further rehabilitation focusing on improving strength and function. Injection of Steroids:If symptoms persist despite non-operative measures, consideration may be given to an injection of steroids into the fibro-osseous sheath. This intervention aims to alleviate inflammation and pain, especially when other conservative methods have not provided relief Resources[edit | edit source] ↑ 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 Sawyer E, Sajjad H, Tadi P. Anatomy, shoulder and upper limb, forearm extensor carpi ulnaris muscle. InStatPearls [Internet] 2023 Aug 28. StatPearls Publishing. ↑ 2.0 2.1 2.2 Moore KL, Dalley AF, Agur AMR,2014, Clinically Oriented Anatomy ↑ 3.0 3.1 3.2 3.3 3.4 Magee DJ.2014.Orthopedic Physical Assessment ↑ Brukner P, Khan K,Date of Publication: 2017,Title: Clinical Sports Medicine,Source: Book ↑ Blackriver & Bootsma Education. Muscle Palpation - Extensor Carpi Ulnaris [Internet]. Youtube; 2021 . Available from: ↑ 6.0 6.1 Campbell D, Campbell R, O'Connor P, Hawkes R. Sports-related extensor carpi ulnaris pathology: a review of functional anatomy, sports injury and management. Br J Sports Med. 2013 Nov 1;47(17):1105-11. ↑ ACM OTA Class of 2016. MMT Extensor carpi radialis anti gravity & Extensor carpi ulnaris anti gravity. Available from: [last accessed 6/6/2009] ↑ Alfredson H, Cook JL, Khan KM, Kiss ZS, Purdam CR, Visentini PJ, et al.: 2000 : Chronic Achilles tendinopathy: painful solutions: The Medical Journal of Australia ↑ Alfredson H, Pietilä T, Jonsson P, Lorentzon R. Heavy-load eccentric calf muscle training for the treatment of chronic Achilles tendinosis. The American journal of sports medicine. 1998 May;26(3):360-6. ↑ Khan KM, Cook JL, Bonar F, Harcourt P, Åstrom M. Histopathology of common tendinopathies: update and implications for clinical management. Sports medicine. 1999 Jun;27:393-408. Retrieved from " Categories: Anatomy Muscles Hand - Anatomy Hand - Muscles Hand Elbow Elbow - Anatomy Elbow - Muscles Wrist Wrist - Anatomy Wrist - Muscles Get Top Tips Tuesday and The Latest Physiopedia updates Email Address I give my consent to Physiopedia to be in touch with me via email using the information I have provided in this form for the purpose of news, updates and marketing. HP Yes please It's free, and you can unsubscribe any time. Privacy policy. Our Partners The content on or accessible through Physiopedia is for informational purposes only. Physiopedia is not a substitute for professional advice or expert medical services from a qualified healthcare provider. Read more pPhysiopedia +Physiopedia Plus Physiopedia About News Donations Shop Contact Content Articles Categories Resources Projects Contribute Courses PAI Legal QA Framework Disclaimer Terms Privacy Cookies Report content AI Licensing Physiopedia available in: French German Italian Spanish Ukrainian © Physiopedia 2025 | Physiopedia is a registered charity in the UK, no. 1173185 Back to top](
[Flexor Carpi Radialis - Physiopedia
Search Search Search Toggle navigation pPhysiopedia pPhysiopedia About News Contribute Courses PAI Resources Shop Contact Login pPhysiopedia About News Contribute Courses PAI Resources Shop Contact p + Contents Editors Categories Cite Contents loading... Editors loading... Categories loading... When refering to evidence in academic writing, you should always try to reference the primary (original) source. That is usually the journal article where the information was first stated. In most cases Physiopedia articles are a secondary source and so should not be used as references. Physiopedia articles are best used to find the original sources of information (see the references list at the bottom of the article). If you believe that this Physiopedia article is the primary source for the information you are refering to, you can use the button below to access a related citation statement. Cite article Flexor Carpi Radialis Jump to:navigation, search Original Editor - Rania Nasr Top Contributors - Rania Nasr , Rishika Babburu , Chrysolite Jyothi Kommu , Kim Jackson and Nikhil Benhur Abburi Contents 1 Description 1.1 Origin 1.2 Insertion 1.3 Nerve 1.4 Artery 2 Function 3 Clinical Relevance 4 Assessment 5 Resources Description[edit | edit source] The flexor carpi radialis muscle is a long, superficial muscle of the forearm that belongs to the anterior muscle group and lies in the first layer. It is a relatively thin muscle located on the anterior part of the forearm. It arises in the humerus epicondyle, close to the wrist area. It is a superficial muscle that becomes very visible as the wrist comes into flexion. Origin[edit | edit source] The flexor carpi radialis originates from the medial epicondyle of the humerus, passes obliquely downwards to the lateral side of the forearm. Insertion[edit | edit source] The flexor carpi radialis inserts at the bases of the second and third metacarpal bones. Nerve[edit | edit source] The innervation of this muscle is provided by the Median nerve(C6-C7). Artery[edit | edit source] The flexor carpi radialis mainly receives its blood supply (high up in the forearm) via anterior ulnar recurrent artery or posterior ulnar recurrent artery. Function[edit | edit source] The main function of FCR is providing flexion of the wrist and assisting in abduction of the hand and wrist. The flexor carpi radialis (FCR) muscle has been suggested to act as a dynamic scaphoid stabilizer. Because the FCR tendon uses the scaphoid tuberosity as a pulley to reach its distal insertion onto the second metacarpal, it has been hypothesized that FCR muscle contraction generates a dorsally directed vector that resists the scaphoid from rotating into flexion. Clinical Relevance[edit | edit source] Flexor tendon ruptures in the wrist are uncommon. Flexor carpi radialis (FCR) tendon rupture can occur in rheumatoid patients, following cortisone injection for Tenosynovitis, and following trauma. Attritional FCR tendon ruptures are also seen with scaphotrapezial arthritis. The radial artery lies between the tendons of the brachioradialis and flexor carpi radialis, a place commonly used to take the radial pulse. Palpation for the radial artery proximal to the wrist crease and immediately lateral to the tendon of flexor carpi radials muscle is one of the sites to document patient's pulse. Assessment[edit | edit source] Manual muscle testing helps in knowing the power of the muscle Resources[edit | edit source] ↑ Standring S, Ellis H, Healy J, Johnson D, Williams A, Collins P, Wigley C. Gray's anatomy: the anatomical basis of clinical practice. American journal of neuroradiology. 2005 Nov;26(10):2703 ↑ Salvà-Coll G, Garcia-Elias M, Llusá-Pérez M, Rodríguez-Baeza A. The role of the flexor carpi radialis muscle in scapholunate instability. The Journal of hand surgery. 2011 Jan 1;36(1):31-6. ↑ Van Demark RE, Helsper E, Hayes M, Hayes M, Smith VJ. Painful Pseudotendon of the Flexor Carpi Radialis Tendon: A Literature Review and Case Report. HAND. 2017 Sep;12(5):NP78-83. ↑ Anatomy, Shoulder and Upper Limb, Forearm Radial Artery ↑ ACM OTA Class of 2016. MMT Flexor carpi radialis anti gravity & Flexor carpi ulnaris anti gravity. Available from: accessed 31/8/2021] Retrieved from " Categories: Anatomy Muscles Elbow Elbow - Anatomy Elbow - Muscles Wrist Wrist - Anatomy Wrist - Muscles Hand Hand - Anatomy Hand - Muscles Get Top Tips Tuesday and The Latest Physiopedia updates Email Address I give my consent to Physiopedia to be in touch with me via email using the information I have provided in this form for the purpose of news, updates and marketing. HP Yes please It's free, and you can unsubscribe any time. Privacy policy. Our Partners The content on or accessible through Physiopedia is for informational purposes only. Physiopedia is not a substitute for professional advice or expert medical services from a qualified healthcare provider. Read more pPhysiopedia +Physiopedia Plus Physiopedia About News Donations Shop Contact Content Articles Categories Resources Projects Contribute Courses PAI Legal QA Framework Disclaimer Terms Privacy Cookies Report content AI Licensing Physiopedia available in: French German Italian Spanish Ukrainian © Physiopedia 2025 | Physiopedia is a registered charity in the UK, no. 1173185 Back to top](
[Extensor carpi radialis brevis - Physiopedia
Search Search Search Toggle navigation pPhysiopedia pPhysiopedia About News Contribute Courses PAI Resources Shop Contact Login pPhysiopedia About News Contribute Courses PAI Resources Shop Contact p + Contents Editors Categories Cite Contents loading... Editors loading... Categories loading... When refering to evidence in academic writing, you should always try to reference the primary (original) source. That is usually the journal article where the information was first stated. In most cases Physiopedia articles are a secondary source and so should not be used as references. Physiopedia articles are best used to find the original sources of information (see the references list at the bottom of the article). If you believe that this Physiopedia article is the primary source for the information you are refering to, you can use the button below to access a related citation statement. Cite article Extensor carpi radialis brevis Jump to:navigation, search Contents 1 Description 2 Origin 3 Insertion 4 Innervation 5 Blood supply 6 Action 7 Clinical Relevance 8 Treatment Description[edit | edit source] Origin[edit | edit source] Insertion[edit | edit source] Innervation[edit | edit source] Blood supply[edit | edit source] Action[edit | edit source] Clinical Relevance[edit | edit source] Treatment[edit | edit source] Retrieved from " Get Top Tips Tuesday and The Latest Physiopedia updates Email Address I give my consent to Physiopedia to be in touch with me via email using the information I have provided in this form for the purpose of news, updates and marketing. HP Yes please It's free, and you can unsubscribe any time. Privacy policy. Our Partners The content on or accessible through Physiopedia is for informational purposes only. Physiopedia is not a substitute for professional advice or expert medical services from a qualified healthcare provider. Read more pPhysiopedia +Physiopedia Plus Physiopedia About News Donations Shop Contact Content Articles Categories Resources Projects Contribute Courses PAI Legal QA Framework Disclaimer Terms Privacy Cookies Report content AI Licensing Physiopedia available in: French German Italian Spanish Ukrainian © Physiopedia 2025 | Physiopedia is a registered charity in the UK, no. 1173185 Back to top](
Extensor Carpi Ulnaris (ECU) Subluxation - Physiopedia
Introduction Extensor Carpi Ulnaris (ECU) muscle primary functions at the wrist joint is to move the joint into extension and ulnar deviations whilst also providing a stabilising force at the ulnar side of the joint. The muscle’s function will be affected by the position of the forearm as forearm pronation and supination affect the muscle’s angle of pull. Due to the mobility required around the wrist the muscle relies on specific stabilising structures such as the fibro-osseous groove, tendon subsheath and extensor retinaculum to maintain its position at the wrist. Although the incidence of ECU subluxation is low in the general population, it can be found within sports, such as tennis, golf and rugby that require forceful or repeated wrist extension/ulnar deviation or good wrist stability for hold equipment. The overall incidence of wrist injury can be up to 8.9% of all reported sports injuries but data documenting the frequencing of ECU subluxations specifically is limited. However, it has been reported that the incidence of ECU injury is 1 case/18 players/year in professional tennis players. Men were more frequently affected with 42% of all athletes within the study of 50 professional tennis players having ECU instability. In contrast the prevalence of ECU injuries specifically within golf, has been poorly recognised although it is acknowledged that the wrist is frequently injured in both amateur and professional golfers. Though within professional Rugby League in England, it has been found that the incidence of acute ECU injury is 1 injury/60 players/year, with a significant proportion (50%) requiring surgical repairs in this cohort. Clinically Relevant Anatomy[edit | edit source] ECU Origin: Lateral epicondyle of the humerus via the common extensor tendon. Posterior border of the ulna. ECU Insertion: Medial side of the base of the fifth metacarpal. The tendon itself, passes under the extensor retinaculum within a synovial sheath that forms the 6th compartment of the wrist, within a grove lateral to the ulna styloid process. The function of the extensor retinaculum is predominantly to prevent bowstringing of the tendon as it passes across the wrist. Nerve supply: Posterior interosseous branch of the radial nerve Nerve root C7/8 Muscle Action: Wrist extension – along with extensor carpi radialis longus (ECRL) and brevis (ECRB) Ulnar deviation of the wrist – along with flexor carpi ulnaris (FCU) Palpation: The ECU tendon can be palpated on the dorsal aspect of the wrist with the wrist in resisted extension and ulnar deviation. Mechanism of Injury / Pathological Process [edit | edit source] Common risk factors for ECU injury are: Wrist loading with the ECU is in a vulnerable position (flexion during supination and ulnar deviation). Sudden lateral force applied to the wrist during an isometric contraction of the ECU. Acute injuries are commonly associated with some form of 'trauma' that requires high levels of wrist extensor or ulnar deviation forces to be produced, such as: Hitting a powerful backhand during tennis where the forearm is reuired to create top spin by moving forcefully from pronation to supination. Hitting a solid object during the golf swing whilst the golf club moves from a radially deviated position to neutral, and the ECU contracts isometrically to stabilize the joint. Contact sports like rugby that require the athlete to hold the ball (and thus contract the ECU isometrically in maximal supination) to maintain possession when entering a ‘contact’. The resultant force during the 'contact' can result in a tear of the tendon’s subsheath and a resultant sublaxation. An athlete/patient may report that they felp a "snap", "pop" or a "tear" at the time of the trauma. Chronic injuries will occur gradully over time and are potentially due to overuse or technical errors overloading the ulnar side of the wrist. Over time the ECU tendon subsheath will be damaged thus causing the subluxation. An athlete/patient may go on to develop co-comittant tenosynovitis/tendinopathy as the tendon becomes irritated by repeated rubbing against the ulna styloid during subluxations. Clinical Presentation[edit | edit source] Commonly athletes/patients present complaining of persistent ulnar wrist pain aggravated by activities requiring pronation and supination, which may be associated with a “clicking” or "snapping" sensation. Objectively, a thorugh wrist assessment should be completed to aid identification of associated pathologies and to rule out any additional differential diagnoses. Range of motion (ROM): likely full other except during the acute phase of injury and will potentially present with pain on passive wrist radial deviation active wrist extension and/or ulnar deviation Subluxation will occur during active supination, flexion and ulnar deviation and relocate during pronation. it is rare for this to occur passively due to the reduction in tendon tension when the muscle is not contracting. should a dislocation occur during passive movement, the ECU can be considered as grossly unstable. the presence of pain should be noted as pain severity may guide a patient towards a surgical approach. Tenderness on palpation of the 6th dorsal compartment and the ECU tendon will localise the are of discomfort. most athletes/patients with acute ECU subsheath ruptures or tendinopathies will be tender distal to the ulna styloid and groove, whilst those with a TFCC injury may present with tenderness localised to the wrist joint line Diagnostic Procedures[edit | edit source] Although most ECU subluxation diagnoses can be made through a good clincal examination, diagnostic imaging may be benefical to rule out concomitant pathology or to confirm the diagnosis in subtle cases. X-rays: will like be unremarkable but pronated grip views or other specialised plain radiographs may be helpful for assessing other possible differential diagnoses MRI: can be a sensitive and specific modality for the assessment of the ECU but the images should include studies with the wrists positioned in pronation, supination and neutral to maximise sensitivity. Comparison with the asymptomatic wrist is also helpful to assess the relative position of the ECU within the ulnar osseus groove in all positions. Ultrasound: is useful for assessing the dynamic stability of the ECU tendon as the tendon can be visualised whilst the patient/athlete pronates and supinates their forearm. Outcome Measures[edit | edit source] Disabilities of the Arm, Shoulder & Hand Questionnaire (DASH) Patient-Rated Wrist Evaluation (PRWE) Management / Interventions [edit | edit source] Conservative Management Wrist splint or long arm cast – in pronation and radial deviation (4-6 weeks) Appropriate conditioning programme to maintain fitness whilst wrist is immobilised Aim to meet national physical activity guidelines in the amateur athlete or to maintain appropriate levels of cardiovascular fitness in the professional athlete to aid an efficient return to competition on completion of their rehab. ROM recovery post immobilisation AAROM/AROM exercises: consider taping ECU during this time to help maintain tendon stability Progressive strengthening programme: Scapular stability Rotator cuff strength and endurance exercises Isometric -> isotonic wrist strengthening exercises Wrist stability exercises Sports-specific training Including review of equipment (eg tennis racket grip -> greater risk of injury with a western or semi-western style of grip due to the high amounts of top spin generated) Full recovery of function would be expected in 3-4 months with appropriate rehab. Surgical Management ECU subsheath reconstruction +/- wrist arthroscopy if nonoperative management fails Technique direct repair in acute cases chronic cases may require an extensor retinaculum flap for ECU subsheath reconstruction Wrist arthroscopy shows concurrent TFCC tears in 50% of cases Post operative rehab will follow similar principles to those described for conservative management. Full recovery of function would be expected in 3 months with appropriate rehab. Differential Diagnosis[edit | edit source] ECU Tenosynovitis ECU Tendinopathy Lunotriquetral ligament injury TFCC tear Ulnar styloid impaction syndrome Ulnar styloid fracture Hook of Hamate fracture Pisotriquetral arthritis
Resources[edit | edit source]
↑ Jump up to: 1.0 1.1 1.2 1.3 1.4 1.5 1.6 1.7 1.8 Soames R. Anatomy and human movement: structure and function. Eight Edition. Elsevier Limited. 2025.
↑ Jump up to: 2.0 2.1 Brukner P. Khan K. Clinical sports medicine. McGraw-Hill: 2006.
↑ Dr. Borst's Occupational Therapy Classroom. MMT Individual Muscle Extensor Carpi Radialis Longus (ECRL). Available from: [last accessed 14/02/2025]
↑ RandaRamnarine. palpating extensor carpi radialis longus. Available from: [last accessed 14/02/2025]
Retrieved from "
Categories:
Anatomy
Elbow - Anatomy
Wrist - Anatomy
Hand - Anatomy
Muscles
Elbow - Muscles
Wrist - Muscles
Hand - Muscles
Elbow
Wrist
Hand
Get Top Tips Tuesday and The Latest Physiopedia updates
It's free, and you can unsubscribe any time. Privacy policy.
Our Partners
The content on or accessible through Physiopedia is for informational purposes only. Physiopedia is not a substitute for professional advice or expert medical services from a qualified healthcare provider. Read more
pPhysiopedia
+Physiopedia Plus
Physiopedia
About
News
Donations
Shop
Contact
Content
Articles
Categories
Resources
Projects
Contribute
Courses
PAI
Legal
QA Framework
Disclaimer
Terms
Privacy
Cookies
Report content
AI Licensing
Physiopedia available in:
French
German
Italian
Spanish
Ukrainian
Back to top
suggested results |
17263 | https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/DeVoes_Thermodynamics_and_Chemistry/12%3A_Equilibrium_Conditions_in_Multicomponent_Systems/12.04%3A_Colligative_Properties_of_a_Dilute_Solution | 12.4: Colligative Properties of a Dilute Solution - Chemistry LibreTexts
Skip to main content
Table of Contents menu
search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode
school Campus Bookshelves
menu_book Bookshelves
perm_media Learning Objects
login Login
how_to_reg Request Instructor Account
hub Instructor Commons
Search
Search this book
Submit Search
x
Text Color
Reset
Bright Blues Gray Inverted
Text Size
Reset
+-
Margin Size
Reset
+-
Font Type
Enable Dyslexic Font - [x]
Downloads expand_more
Download Page (PDF)
Download Full Book (PDF)
Resources expand_more
Periodic Table
Physics Constants
Scientific Calculator
Reference expand_more
Reference & Cite
Tools expand_more
Help expand_more
Get Help
Feedback
Readability
x
selected template will load here
Error
This action is not available.
chrome_reader_mode Enter Reader Mode
12: Equilibrium Conditions in Multicomponent Systems
Thermodynamics and Chemistry (DeVoe)
{ }
{ "12.01:_Effects_of_Temperature" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.02:_Solvent_Chemical_Potentials_from_Phase_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.03:_Binary_Mixture_in_Equilibrium_with_a_Pure_Phase" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.04:_Colligative_Properties_of_a_Dilute_Solution" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.05:_Solid-Liquid_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.06:_Liquid-Liquid_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.07:_Membrane_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.08:_Liquid-Gas_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.09:_Reaction_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.10:_Evaluation_of_Standard_Molar_Quantities" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12.11:_Chapter_12_Problems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
{ "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_Introduction" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_Systems_and_Their_Properties" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_The_First_Law" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_The_Second_Law" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Thermodynamic_Potentials" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_The_Third_Law_and_Cryogenics" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Pure_Substances_in_Single_Phases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Phase_Transitions_and_Equilibria_of_Pure_Substances" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Mixtures" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Electrolyte_Solutions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Reactions_and_Other_Chemical_Processes" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Equilibrium_Conditions_in_Multicomponent_Systems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_The_Phase_Rule_and_Phase_Diagrams" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_Galvanic_Cells" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "15:_Appendices" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
Wed, 13 Apr 2022 00:46:40 GMT
12.4: Colligative Properties of a Dilute Solution
20642
20642
admin
{ }
Anonymous
Anonymous User
2
false
false
[ "article:topic", "showtoc:no", "license:ccby", "licenseversion:40", "authorname:hdevoe", "source@ ]
[ "article:topic", "showtoc:no", "license:ccby", "licenseversion:40", "authorname:hdevoe", "source@ ]
Search site Search Search Go back to previous article
Sign in
Username Password Sign in
Sign in
Sign in
Forgot password
Expand/collapse global hierarchy
1. Home
2. Bookshelves
3. Physical & Theoretical Chemistry
4. Thermodynamics and Chemistry (DeVoe)
5. 12: Equilibrium Conditions in Multicomponent Systems
6. 12.4: Colligative Properties of a Dilute Solution
Expand/collapse global location
12.4: Colligative Properties of a Dilute Solution
Last updated Apr 13, 2022
Save as PDF
12.3: Binary Mixture in Equilibrium with a Pure Phase
12.5: Solid-Liquid Equilibria
Page ID 20642
Howard DeVoe
University of Maryland
( \newcommand{\kernel}{\mathrm{null}\,})
Table of contents
1. 12.4.1 Freezing-point depression
2. 12.4.2 Boiling-point elevation
3. 12.4.3 Vapor-pressure lowering
4. 12.4.4 Osmotic pressure
The colligative properties of a solution are usually considered to be:
Note that all four properties are defined by an equilibrium between the liquid solution and a solid, liquid, or gas phase of the pure solvent. The properties called colligative (Latin: tied together) have in common a dependence on the concentration of solute particles that affects the solvent chemical potential.
Figure 12.3 illustrates the freezing-point depression and boiling-point elevation of an aqueous solution. At a fixed pressure, pure liquid water is in equilibrium with ice at the freezing point and with steam at the boiling point. These are the temperatures at which H 2 O has the same chemical potential in both phases at this pressure. At these temperatures, the chemical potential curves for the phases intersect, as indicated by open circles in the figure. The presence of dissolved solute in the solution causes a lowering of the H 2 O chemical potential compared to pure water at the same temperature. Consequently, the curve for the chemical potential of H 2 O in the solution intersects the curve for ice at a lower temperature, and the curve for steam at a higher temperature, as indicated by open triangles. The freezing point is depressed by ΔT f, and the boiling point (if the solute is nonvolatile) is elevated by ΔT b.
Although these expressions provide no information about the activity coefficient of a solute, they are useful for estimating the solute molar mass. For example, from a measurement of any of the colligative properties of a dilute solution and the appropriate theoretical relation, we can obtain an approximate value of the solute molality m B. (It is only approximate because, for a measurement of reasonable precision, the solution cannot be extremely dilute.) If we prepare the solution with a known amount n A of solvent and a known mass of solute, we can calculate the amount of solute from n B=n AM Am B; then the solute molar mass is the solute mass divided by n B.
12.4.1 Freezing-point depression
As in Sec. 12.2.1, we assume the solid that forms when a dilute solution is cooled to its freezing point is pure component A.
Equation 12.3.6 gives the general dependence of temperature on the composition of a binary liquid mixture of A and B that is in equilibrium with pure solid A. We treat the mixture as a solution. The solvent is component A, the solute is B, and the temperature is the freezing point T f: (12.4.1)(∂T f∂x A)p=T f 2 Δ sol,AH[∂(μ A/T)∂x A]T,p
Consider the expression on the right side of this equation in the limit of infinite dilution. In this limit, T f becomes T f∗, the freezing point of the pure solvent, and Δ sol,AH becomes Δ fus,AH, the molar enthalpy of fusion of the pure solvent.
To deal with the partial derivative on the right side of Eq. 12.4.1 in the limit of infinite dilution, we use the fact that the solvent activity coefficient γ A approaches 1 in this limit. Then the solvent chemical potential is given by the Raoult’s law relation (12.4.2)μ A=μ A∗+RTlnx A(solution at infinite dilution) where μ A∗ is the chemical potential of A in a pure-liquid reference state at the same T and p as the mixture. (At the freezing point of the mixture, the reference state is an unstable supercooled liquid.)
If the solute is an electrolyte, Eq. 12.4.2 can be derived by the same procedure as described in Sec. 9.4.6 for an ideal-dilute binary solution of a nonelectrolyte. We must calculate x A from the amounts of all species present at infinite dilution. In the limit of infinite dilution, any electrolyte solute is completely dissociated to its constituent ions: ion pairs and weak electrolytes are completely dissociated in this limit. Thus, for a binary solution of electrolyte B with ν ions per formula unit, we should calculate x A from (12.4.3)x A=n A n A+νn B where n B is the amount of solute formula unit. (If the solute is a nonelectrolyte, we simply set ν equal to 1 in this equation.)
From Eq. 12.4.2, we can write (12.4.4)[∂(μ A/T)∂x A]T,p→R as x A→1 In the limit of infinite dilution, then, Eq. 12.4.1 becomes (12.4.5)lim x A→1(∂T f∂x A)p=R(T f∗)2 Δ fus,AH
It is customary to relate freezing-point depression to the solute concentration c B or molality m B. From Eq. 12.4.3, we obtain (12.4.6)1−x A=νn B n A+νn B In the limit of infinite dilution, when νn B is much smaller than n A, 1−x A approaches the value νn B/n A. Then, using expressions in Eq. 9.1.14, we obtain the relations (12.4.7)dx A=−d(1−x A)=−νd(n B/n A)=−νV A∗dc B=−νM Adm B(binary solution at infinite dilution) which transform Eq. 12.4.5 into the following (ignoring a small dependence of V A∗ on T): (12.4.8)lim c B→0(∂T f∂c B)p=−νV A∗R(T f∗)2 Δ fus,AH(12.4.1)lim m B→0(∂T f∂m B)p=−νM AR(T f∗)2 Δ fus,AH We can apply these equations to a nonelectrolyte solute by setting ν equal to 1.
As c B or m B approaches zero, T f approaches T f∗. The freezing-point depression (a negative quantity) is ΔT f=T f−T f∗. In the range of molalities of a dilute solution in which (∂T f/∂m B)p is given by the expression on the right side of Eq. 12.4.8, we can write (12.4.9)ΔT f=−νM AR(T f∗)2 Δ fus,AHm B
The molal freezing-point depression constant or cryoscopic constant, K f, is defined for a binary solution by (12.4.10)K f=def−lim m B→0ΔT f νm B and, from Eq. 12.4.9, has a value given by (12.4.11)K f=M AR(T f∗)2 Δ fus,AH The value of K f calculated from this formula depends only on the kind of solvent and the pressure. For H 2 O at 1 bar, the calculated value is K b=1.860K kg mol−1 (Prob. 12.4).
In the dilute binary solution, we have the relation (12.4.12)ΔT f=−νK fm B(dilute binary solution) This relation is useful for estimating the molality of a dilute nonelectrolyte solution (ν=1) from a measurement of the freezing point. The relation is of little utility for an electrolyte solute, because at any electrolyte molality that is high enough to give a measurable depression of the freezing point, the mean ionic activity coefficient deviates greatly from unity and the relation is not accurate. 12.4.2 Boiling-point elevation
We can apply Eq. 12.3.6 to the boiling point T b of a dilute binary solution. The pure phase of A in equilibrium with the solution is now a gas instead of a solid. (We must assume the solute is nonvolatile or has negligible partial pressure in the gas phase.) Following the procedure of Sec. 12.4.1, we obtain (12.4.13)lim m B→0(∂T b∂m B)p=νM AR(T b∗)2 Δ vap,AH where Δ vap,AH is the molar enthalpy of vaporization of pure solvent at its boiling point T b∗.
The molal boiling-point elevation constant or ebullioscopic constant, K b, is defined for a binary solution by (12.4.14)K b=def lim m B→0ΔT b νm B where ΔT b=T b−T b∗ is the boiling-point elevation. Accordingly, K b has a value given by (12.4.15)K b=M AR(T b∗)2 Δ vap,AH For the boiling point of a dilute solution, the analogy of Eq. 12.4.12 is (12.4.16)ΔT b=νK bm B(dilute binary solution) Since K f has a larger value than K b (because Δ fus,AH is smaller than Δ vap,AH), the measurement of freezing-point depression is more useful than that of boiling-point elevation for estimating the molality of a dilute solution. 12.4.3 Vapor-pressure lowering
In a binary two-phase system in which a solution of volatile solvent A and nonvolatile solute B is in equilibrium with gaseous A, the vapor pressure of the solution is equal to the system pressure p.
Equation 12.3.7 gives the general dependence of p on x A for a binary liquid mixture in equilibrium with pure gaseous A. In this equation, Δ sol,AV is the molar differential volume change for the dissolution of the gas in the solution. In the limit of infinite dilution, −Δ sol,AV becomes Δ vap,AV, the molar volume change for the vaporization of pure solvent. We also apply the limiting expressions of Eqs. 12.4.4 and 12.4.7. The result is (12.4.17)lim c B→0(∂p∂c B)T=−νV A∗RT Δ vap,AV lim m B→0(∂p∂m B)T=−νM ART Δ vap,AV
If we neglect the molar volume of the liquid solvent compared to that of the gas, and assume the gas is ideal, then we can replace Δ vap,AV in the expressions above by V A∗(g)=RT/p A∗ and obtain (12.4.18)lim c B→0(∂p∂c B)T≈−νV A∗p A∗lim m B→0(∂p∂m B)T≈−νM Ap A∗ where p A∗ is the vapor pressure of the pure solvent at the temperature of the solution.
Thus, approximate expressions for vapor-pressure lowering in the limit of infinite dilution are (12.4.19)Δp≈−νV A∗p A∗c B and Δp≈−νM Ap A∗m B We see that the lowering in this limit depends on the kind of solvent and the solution composition, but not on the kind of solute. 12.4.4 Osmotic pressure
The osmotic pressure Π is an intensive property of a solution and was defined in Sec. 12.2.2. In a dilute solution of low Π, the approximation used to derive Eq. 12.2.11 (that the partial molar volume V A of the solvent is constant in the pressure range from p to p+Π) becomes valid, and we can write (12.4.20)Π=μ A∗−μ A V A In the limit of infinite dilution, μ A∗−μ A approaches −RTlnx A (Eq. 12.4.2) and V A becomes the molar volume V A∗ of the pure solvent. In this limit, Eq. 12.4.20 becomes (12.4.21)Π=−RTlnx A V A∗ from which we obtain the equation (12.4.22)lim x A→1(∂Π∂x A)T,p=−RT V A∗ The relations in Eq. 12.4.7 transform Eq. 12.4.22 into (12.4.23)lim c B→0(∂Π∂c B)T,p=νRT(12.4.24)lim m B→0(∂Π∂m B)T,p=νRTM A V A∗=νρ A∗RT
Equations 12.4.23 and 12.4.24 show that the osmotic pressure becomes independent of the kind of solute as the solution approaches infinite dilution. The integrated forms of these equations are (12.4.25)Π=νc BRT(dilute binary solution)(12.4.26)Π=RTM A V A∗νm B=ρ A∗RTνm B(dilute binary solution) Equation 12.4.25 is van’t Hoff’s equation for osmotic pressure. If there is more than one solute species, νc B can be replaced by ∑i≠A c i and νm B by ∑i≠A m i in these expressions.
In Sec. 9.6.3, it was stated that Π/m B is equal to the product of ϕ m and the limiting value of Π/m B at infinite dilution, where ϕ m=(μ A∗−μ A)/RTM A∑i≠A m i is the osmotic coefficient. This relation follows directly from Eqs. 12.2.11 and 12.4.26.
This page titled 12.4: Colligative Properties of a Dilute Solution is shared under a CC BY 4.0 license and was authored, remixed, and/or curated by Howard DeVoe via source content that was edited to the style and standards of the LibreTexts platform.
Back to top
12.3: Binary Mixture in Equilibrium with a Pure Phase
12.5: Solid-Liquid Equilibria
Was this article helpful?
Yes
No
Recommended articles
12.1: Effects of Temperature
12.2: Solvent Chemical Potentials from Phase Equilibria
12.3: Binary Mixture in Equilibrium with a Pure Phase
12.5: Solid-Liquid Equilibria
12.6: Liquid-Liquid Equilibria
Article typeSection or PageAuthorHoward DeVoeLicenseCC BYLicense Version4.0Show Page TOCno on page
Tags
source@
© Copyright 2025 Chemistry LibreTexts
Powered by CXone Expert ®
?
The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org.
Support Center
How can we help?
Contact Support Search the Insight Knowledge Base Check System Status×
contents readability resources tools
☰
12.3: Binary Mixture in Equilibrium with a Pure Phase
12.5: Solid-Liquid Equilibria |
17264 | https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2022.892571/full | Your new experience awaits. Try the new design now and help us make it even better
ORIGINAL RESEARCH article
Front. Oncol., 05 August 2022
Sec. Cancer Epidemiology and Prevention
Volume 12 - 2022 |
This article is part of the Research TopicEpidemiology, Screening and Diagnosis of Lung CancerView all 24 articles
Epidemiological characteristics and risk factors of lung adenocarcinoma: A retrospective observational study from North China
Background: The main aim of the study was to determine the risk factors of lung adenocarcinoma and to analyze the variations in the incidence of lung adenocarcinoma according to time, sex, and smoking status in North China.
Methods: Patients with lung cancer in local household registries diagnosed and treated for the first time in the investigating hospital were enrolled from 11 cities in North China between 2010 and 2017. Baseline characteristics and tumor-related information were extracted from the patients’ hospital medical record, clinical course records, and clinical examination. Some of the variables, such as smoking, alcohol consumption, medical history, and family history of cancer, were obtained from interviews with the enrolled patients. The statistical method used were the chi-square test and multi-factor logistic regression analysis. The time trend was statistically analyzed using Joinpoint regression models, and p values were calculated.
Results: A total of 23,674 lung cancer cases were enrolled. People in severely polluted cities were at higher risk for lung adenocarcinoma (p < 0.001). Most patients with lung adenocarcinoma had no history of lung-related diseases (p = 0.001). Anatomically, lung adenocarcinoma was more likely to occur in the right lung (p < 0.001). Non-manual labor workers were more likely to develop from lung adenocarcinoma than manual workers (p = 0.015). Notably, non-smokers were more likely to develop lung adenocarcinoma than smokers (p < 0.001). The proportion of lung adenocarcinoma increased significantly in Hebei Province (p < 0.001). Among non-smokers, the proportion of lung adenocarcinoma showed a higher rise than in smokers (p < 0.001).
Conclusions: Lung adenocarcinoma is the most common histological type of lung cancer in North China (Hebei Province), and the proportion of lung adenocarcinoma is increasing, especially among non-smokers. Lung adenocarcinoma is more common in women, severely polluted cities, individuals with no history of lung-related diseases, in the right lung, and in non-smokers. These can serve as a great guide in determining the accuracy of lung adenocarcinoma high-risk groups and lung cancer risk assessment models.
Background
According to GLOBOCAN 2018, there were 2.09 million cases of lung cancer worldwide in 2018, accounting for 11.6% of all cancer incidences, and approximately 1.76 million lung cancer deaths, accounting for 18.4% of all cancer deaths, and lung cancer ranked first among all malignant tumor incidences and deaths. Moreover, there were 1.225 million new cases of lung cancer in Asia with 1.069 million lung cancer deaths, accounting for 58.5% of the incidence and 60.7% of the mortality of lung cancer worldwide (1). In China, lung cancer was the leading cancer in men and the second leading cancer in women with 550,000 and 278,000 new lung cancer cases, respectively. Lung cancer was the leading cancer in both men and women with 455,000 and 202,000 new lung cancer cases, respectively (2). Approximately 85% of lung cancers were non-small cell lung cancers, and 15% were small cell lung cancers. The following histological subtypes of non-small cell lung cancer were distinguished: adenocarcinoma, accounting for 38.5% of all lung cancers, and squamous cell carcinoma, accounting for 20% (3). A lung cancer study in the United States showed that the incidence of adenocarcinoma in men and women from 2004 to 2009 was 24.5/100,000 and 20.0/100,000, the incidence of squamous cell carcinoma was 18.8/100,000 and 8.5/100,000, and the incidence of small cell lung cancer was 9.8/100,000 and 7.9/100,000, respectively (4). Lung cancer was also the leading cause of cancer incidence and mortality in North China (Hebei Province) (5). Lung adenocarcinoma, the most common histological type of non-small cell lung cancer, has been on the rise in most countries over the past few decades (6), with a 5-year relative survival of only 12.8% (7). The increasing incidence of lung adenocarcinoma has been a subject of global interest. However, there is currently no report on the risk factors and epidemiological characteristics of lung adenocarcinoma in North China (Hebei Province). Therefore, this study aimed to determine the risk factors of lung adenocarcinoma and analyze the variations in the incidence of lung adenocarcinoma according to time, sex, and smoking status based on the distribution of lung adenocarcinoma in Hebei Province.
Methods
Study design and participants
Lung cancer clinical diagnosis and treatment information was collected from 133 hospitals in 11 cities of Hebei Province from 2010 to 2017: Shijiazhuang City, Baoding City, Tangshan City, Handan City, Xingtai City, Cangzhou City, Hengshui City, Langfang City, Qinhuangdao City, Chengde City, and Zhangjiakou City. Among the cities, Qinhuangdao City, Chengde City, and Zhangjiakou City were defined as the lightly polluted cities, while the other cities were severely polluted cities (8). Relevant variables were extracted from the patients’ hospital medical records, clinical course records, clinical examination, and clinical imaging studies. Baseline characteristics included the following: sex, age at diagnosis, marital status, occupation, height and weight at admission, blood type, smoking history, alcohol consumption, history of lung-related diseases, and family history of cancer. Tumor-related information was as follows: pathological type, diagnosis basis, grade, and sub-site of lung cancer patients. The inclusion criteria of study participants were as follows (1): admitted to hospital from 1 January 2010 to 31 December 2017; (2) new cases that were first diagnosed or treated in the investigating hospital and had not undergone surgery, radiotherapy, or chemotherapy in the previous hospital; and (3) cases of local household registration. The exclusion criterion included multiple primary or metastatic cancer cases.
Some of the variables, such as smoking, alcohol consumption, medical history, and family history of cancer, were obtained by interviewing the enrolled patients. Other variables were obtained by extracting information from the patients’ medical records.
The occupation classification in the questionnaire included non-manual labor and manual labor. Non-manual labor included persons in charge of party organs, mass organizations, social organizations, enterprises, and institutions; professional and technical personnel; and office and related personnel. Manual labor workers included social production service and life service personnel; agriculture, forestry, animal husbandry and fishery production, and auxiliary personnel; production and related personnel; soldiers; and other personnel who cannot be classified.
Lung cancer was mainly diagnosed by the following methods. Lung cancers were diagnosed by primary histology; that is, the pathological type of lung cancer was diagnosed by surgery or outpatient biopsy. Cytology and blood sample diagnosis referred to the diagnosis of lung cancer by sputum exfoliated cytology. Biochemical and immunological diagnosis referred to the diagnosis of lung cancer by biochemical, immunological, and tumor marker tests. Clinical diagnosis referred to the diagnosis of lung cancer by collecting medical history data through the patients’ clinical symptoms and signs and by asking patients about their subjective symptoms. Secondary histology referred to the examination and diagnosis of patients with discomfort caused by a secondary metastatic cancer and tracking their primary cancer as lung cancer. Related disease history included pulmonary tuberculosis, chronic bronchitis, emphysema, asthma, silicosis/pneumoconiosis, others, and unknown.
Patient’s height and weight were from the medical record system. Body mass index (BMI) was calculated with the formula of BMI = weight (kg) ÷ height (m2). BMI was categorized into <18.5 kg/m2 (underweight), 18.5–24.9 kg/m2 (normal weight), and 25–29.9 kg/m2 (overweight) (9).
When comparing lung adenocarcinoma with other pathological types, cases with unknown information on relevant variables were excluded from the study.
Data collection and quality control
An expert group including experts in various fields such as clinical medicine, epidemiology, health statistics, cancer registration, and specialized persons who had long been engaged in clinical data collection was established. An expert seminar was held to present the study design and conduct technical guidance and quality control, followed by the training meeting. Investigators were selected by the project site according to the workload, and all the project team underwent the same technical training. All investigators followed the standardized operating procedures strictly. Experts from the project team visited the project sites at least once a year, and the supervision included (1) supervision of the collection of lung cancer clinical diagnosis data, (2) data storage check, (3) checking whether the collected data met the inclusion and exclusion criteria, and (4) checking data completeness and extracting 5% of the newly collected data in the current year to verify the information by querying the original data again.
Statistical analysis
The statistical methods used were the chi-square test and multi-factor logistic regression analysis. According to well-known statistical methods, univariate analysis was performed for factor analysis. Through univariate analysis, a variable with statistical significance was screened out. Univariate analysis was performed on the variables collected in the questionnaire of this study, including sex, age, smoking status, alcohol intake, occupation, related disease history, family history of cancer, BMI, areas of residence, marriage, blood type, position, and morphology. After univariate analysis, we selected statistically significant variables for multivariate analysis, including sex, age, smoking status, areas, related disease history, occupation, and position. The time trend was statistically analyzed using Joinpoint regression models, and p-values were calculated. All statistical analysis and data quality control were performed using the Statistical Package for Social Science (SPSS) (version 20, SPSS Inc., Chicago, IL, USA), SAS software (Version 9.3), and the Joinpoint software (version 4.8.0.1; National Cancer Institute, Rockville, MD, US). p ≤ 0.05, established on two-sided probabilities, was considered statistically significant.
Results
Distribution of characteristics in patients with lung cancer in Hebei Province, 2010–2017
This study enrolled 23,674 patients diagnosed with lung cancer for the first time in 11 cities of Hebei Province. Among the lung cancer patients, the male-to-female ratio was 1.94:1. The average age of all lung cancer patients was 68.91 ± 11.21 years old, and the largest number of lung cancer cases was observed in the 65–69 age group.
Among all the lung cancer cases, regarding the smoking situation, the number of non-smokers was 10,913 (46.2%), current smokers was 7,533 (31.8%), former smokers (those who had quit smoking) was 2,732 (11.5%), and the number of patients with an unknown smoking status was 2,496 (10.5%). Moreover, 14,976 patients (63.3%) never drank, 5,881 patients (24.8%) drank frequently, and the drinking status of 2,817 (11.9%) patients was unknown. There were 2,850 non-manual labor (12.0%) and 9,724 (41.1%) manual workers. Among the lung cancer cases, 19,357 (81.8%) had no history of related diseases, 1,643 (6.9%) had a history of related diseases, and 2,674 had unknown information. Moreover, there were 18,378 (77.7%) patients without family history of cancer, 1,735 (7.3%) with family history of cancer, and 3,561 with unknown family history of cancer. Furthermore, 727 (3.1%) patients had a BMI index < 18.5 kg/m2, 6,444 (27.2%) had a BMI between 18.5 and 24.9 kg/m2, and 3,135 (13.2%) had a BMI ≥ 25 kg/m2. Among the lung cancer patients, 17.8% lived in areas with light air pollution, while 82.2% lived in areas with heavy air pollution. Regarding marital status, the married proportion was the highest at 82.7%, and the other proportions were all lower than 15%. Regarding blood type, blood type A accounted for 6.3%, blood type B accounted for 8.4%, blood type O accounted for 7.1%, and blood type AB accounted for 2.9%, and the proportion of unknown blood type was high (75.3%). Regarding the distribution of lung subsites, 1.2% occurred in both lungs, 38.9% occurred in the left lung, and 50.5% occurred in the right lung. There were 8,121 cases (34.3%) of adenocarcinoma, 3,443 cases (14.5%) of squamous cell carcinoma, 2,843 cases (12.1%) of small cell carcinoma, 956 cases (4.0%) of other cancer types, and 8,311 cases (35.1%) of unknown type with no pathology. Regarding the diagnosis, 13,785 lung cancer cases (58.2%) were diagnosed by primary histology, 892 cases (3.8%) were diagnosed by cytology and blood samples, 642 cases (2.7%) were diagnosed by biochemistry and immunology, and 1,822 cases (7.7%) were clinically diagnosed. Moreover, 215 cases (0.9%) were diagnosed by secondary histology, 5,198 cases (22.0%) were diagnosed by other special examinations (x-ray, ultrasound, CT, MRI, endoscope, etc.), and 1,120 cases (4.7%) had an unknown diagnosis (Table 1).
Table 1 Distribution of clinical characteristics in patients with lung cancer in Hebei Province, 2010–2017.
Lung adenocarcinoma was compared with other pathological types after excluding variables with unknown information. Through univariate analysis, variables with t statistical significance were obtained. Variables such as sex, age, smoking status, alcohol intake situation, occupation, related disease history, family history of cancer, BMI, air pollution level in areas, and subsites were included in the multivariate logistic regression analysis model. The multivariate regression findings suggested that women were at higher risk for lung adenocarcinoma than men (p < 0.001). Individuals over 65 years old had a lower risk of developing lung adenocarcinoma than those younger than 65 years old (p < 0.001). Notably, non-smokers were more likely to develop lung adenocarcinoma than smokers (p < 0.001). Non-manual labor workers were more likely to develop lung adenocarcinoma than manual workers (p = 0.015). The risk of lung adenocarcinoma was higher for people without a history of lung-related diseases (p = 0.001). Compared with people living in lightly polluted cities, those living in severely polluted cities were at higher risk for lung adenocarcinoma (p < 0.001). Moreover, lung adenocarcinoma occurred more frequently in the right lung than in the whole lung (p < 0.001) (Table 2 and Figure 1).
Table 2 Distribution of clinical characteristics in patients with lung adenocarcinoma (%) in Hebei Province, 2010–2017.
Figure 1 Forest plot of lung adenocarcinoma risk factors analyzed using a multivariate logistic regression model. The squares and bars represent the odds ratio (OR) with a 95% confidence interval (CI).
Distribution changes of different histological types of lung cancer
The incidence trends of lung cancer in Hebei Province from 2010 to 2017 were analyzed by histological type. The proportion of lung adenocarcinoma (38.90%) in 2017 was 1.98 times higher than in 2010 (19.60%) (p < 0.001). However, the changes in the proportion of small cell carcinoma and squamous cell carcinoma were not statistically significant in both sexes (p = 0.100). For male patients, the proportions of lung adenocarcinoma and small cell carcinoma increased by 93.67% and 125.34%, respectively, in 2017 compared to 2010 (p < 0.001 and p < 0.001), whereas the proportion of lung squamous cell carcinoma decreased by 29.06% from 2010 to 2017 (p < 0.001) (Figure 2A). For female patients, between 2010 and 2017, the proportion of lung adenocarcinoma increased by 80.36% (p < 0.001), and the proportion of lung squamous cell carcinoma decreased by 28.41% (p < 0.001) (Figure 2B).
Figure 2 The incidence distribution of the three main histological types of lung cancer (lung adenocarcinoma, squamous cell carcinoma, and small cell carcinoma). (A) The incidence distribution of the three main histological types of lung cancer in men. (B) The incidence distribution of three main histological types of lung cancer in women. (C) The incidence distribution of three main histological types of lung cancer in patients with never-smoking lung cancer. (D) The incidence distribution of three main histological types of lung cancer in patients with smoking lung cancer.
The histological types of lung cancer were analyzed by stratifying according to smoking status. The study suggested that there was a rise in the proportion of lung adenocarcinoma of 80.22% in never-smokers during 2010–2017 (p < 0.001). Moreover, the change in the proportion of small cell carcinoma was not statistically significant (p = 0.200), and there was a decline in the proportion of lung squamous cell carcinoma of 58.54% (p < 0.001) (Figure 2C). Among smokers, the proportion of lung adenocarcinoma similarly increased by 56.37% from 2010 to 2017 (p < 0.001), while the proportion of squamous cell carcinoma decreased by 17.75% (p < 0.001) (Figure 2D).
Discussion
Lung cancer is the leading cause of cancer morbidity and mortality, accounting for one-fifth of all cancer deaths worldwide (1). Over the past two decades, the incidence of lung adenocarcinoma has been on the rise worldwide, and lung adenocarcinoma has become the most common lung cancer subtype (10). Investigating the risk factors of lung adenocarcinoma and analyzing how the incidence of lung adenocarcinoma varies according to time, sex, and smoking status are particularly important. This study focused on the analysis of the risk factors and epidemiological trends of lung adenocarcinoma in North China (Hebei Province) to provide an important reference for the screening and precise prevention of lung adenocarcinoma in high-risk groups.
Overall, the proportion of adenocarcinoma, squamous cell carcinoma, and small cell lung carcinomas in Hebei Province was consistent with that in the United States, Korea, and Australia (11–13). The risk factors of lung adenocarcinoma, the most pathological type of lung cancer in Hebei Province, was analyzed. The multivariate regression analysis findings revealed that lung adenocarcinoma was more common in women, individuals younger than 65 years old, those living in severely polluted cities, non-smokers, those with no history of lung-related diseases, non-manual labor workers, and the right lung. Many lung cancer studies worldwide found that among lung cancer cases in Canada, Denmark, Germany, New Zealand, the Netherlands, the United States, (Lima) Peru, and Southern Sweden, adenocarcinoma was more common in women and in younger patients (14–16).
A study from nine European countries manifested that the risk of lung cancer increased by 18% with PM2.5 increasing every 5 mg/m3, which may be due to lung adenocarcinoma (HR: 1.55, 95% CI: 1.05–2.29) (17, 18). In Canada, researchers found that a 0.01 mg/m3 increase in PM2.5 was inversely related to lung cancer (HR: 1.34; 95% CI: 1.10–1.65), with the strongest associations with adenocarcinoma (HR: 1.44; 95% CI: 1.06–1.97) (19). One study in the US indicated that for every 0.01 mg/m3 increase in PM2.5 level, the hazard rate of lung adenocarcinoma (HR: 1.31, 95% CI: 0.87–1.97) also increased (20). Moreover, in one meta-analysis, lung adenocarcinoma was more strongly associated with PM2.5 and PM10 (RR per 10 μg/m3, 1.40 [95% CI: 1.07–1.83] and 1.29 [95% CI: 1.02–1.63]) compared with other pathological types (21). A study in Taiwan Province of China also concluded that changes in PM2.5 levels can affect the incidence of lung adenocarcinoma and patient survival (22). Air pollution and the PM2.5 concentration were severely high in Hebei Province (23–25). As in our study, they may increase the risk of lung adenocarcinoma.
Although ever smoking strongly correlated with increased risk for lung adenocarcinoma, lung adenocarcinoma was the most common subtype among never smokers (26, 27). In this study, the result found that non-smokers were more likely to develop lung adenocarcinoma than smokers. An earlier study in Singapore showed that approximately 70% of never-smoking lung cancer patients were diagnosed with adenocarcinoma (28). The World Trade Center Environmental Health Center conducted a study among people who were exposed and possibly inhaled dust and fumes from the destruction of the World Trade Center towers in 2001. The results showed that lung adenocarcinoma was more common in never-smokers compared to ever-smokers (72% vs. 65%) and more common in women compared to men (70% vs. 65%) (29). One study in China showed that though the proportion of lung adenocarcinoma increased in smokers and non-smokers, lung adenocarcinoma was more common in non-smokers (30). However, the result from a national health examination program in Korea was inconsistent with that of this study. The hazard ratios for lung adenocarcinoma were significantly higher in male current smoker than in never-smokers, while they were not different for female patients (31). Despite this, non-smoker lung cancer had been classified as an independent disease entity, which is different from smoker lung cancer. Gene sequencing technology has enabled us to truly understand the difference in lung adenocarcinoma between smokers and non-smokers at the microscopic molecular level. Most of the driver gene alterations were identified in lung adenocarcinoma in never-smokers. However, no such available molecular targets exist for smoker lung cancer (32, 33). This study found that non-smokers were more likely to develop lung adenocarcinoma, and because the mechanisms of lung adenocarcinoma in smokers and non-smokers were completely different, we should pay more attention to the screening of lung adenocarcinoma among non-smokers at high risk of lung cancer. It was of great significance to strengthen the screening of lung adenocarcinoma among non-smokers at high risk of lung cancer for related genes and to detect and diagnose lung adenocarcinoma early, improving the chances of surgery for patients and prolonging the survival time.
This study also found that most patients with lung adenocarcinoma had no history of lung-related diseases. Anatomically, lung adenocarcinoma was more likely to occur in the right lung. In addition, this study revealed that non-manual labor workers had a greater risk of lung adenocarcinoma compared with manual workers. These new findings will have a significant impact on the prevention of lung adenocarcinoma. We should strengthen the screening of lung adenocarcinoma for non-manual workers and people without a history of lung-related diseases to achieve early detection, early diagnosis, and early treatment. For designated lung adenocarcinoma high-risk groups, attention should be paid to the examination of the right lung. However, these findings may require more research to confirm.
Overall, the proportion of lung adenocarcinoma increased significantly in Hebei Province, but the proportion of lung squamous cell carcinoma decreased during 2010–2017. Among non-smokers, the proportion of lung adenocarcinoma showed a higher rise than in smokers. This implied that lung adenocarcinoma is still the main histological type of lung cancer.
This study showed that lung adenocarcinoma, one of the types of non−small cell lung carcinoma, was the most common form of lung cancer among non−smokers (34). In 10 years, lung cancer characteristics have changed: more women, more never-smokers, and more adenocarcinomas in France (35). In a study among Chinese women, the proportion of never-smokers with adenocarcinoma increased significantly compared with smokers (36). The prevention of lung adenocarcinoma has to be given more prominence, especially among non-smokers. The incidence of adenocarcinoma increased in the United States between 2006 and 2010, but there was a greater reduction in the rate of decline of squamous cell carcinoma than before (37). One study proved that the adenocarcinoma incidence trends were consistent with smoking trends; however, the relative risk with smoking for adenocarcinoma was lower than that for squamous cell carcinoma and small cell carcinoma (38). Our study showed that smoking had the greatest effect on squamous cell carcinomas, a lesser effect on lung adenocarcinomas (Supplementary Tables 1, 2), and no statistically significant effect on small cell carcinomas (Supplementary Table 3). In China, from 1990 to 2015, the age-standardized prevalence of daily smoking decreased significantly by 22.4% (95% UI: 20.7%–24.0%) and 48.4% (95% UI: 41.2%–55.1%) for men and women, respectively (39). Declining smoking rates in China may be one of the main reasons for this result.
In this study, we found that the incidence of lung adenocarcinoma was increasing, especially in non-smokers, revealing the changing trends in the pathological types of lung cancer. At present, few studies have reached this conclusion, which can have a great guiding effect on the selection of high-risk lung cancer groups for screening. In addition, we also found that lung adenocarcinoma was more common in severely polluted cities, non-smokers, individuals with no history of pulmonary diseases, non-manual labor workers, and the right lung. These conclusions have important implications. First, the conclusions have a key guiding significance for determining the accuracy of the lung cancer risk assessment model. In addition, because cancer screening programs do not currently cover the whole country, this can be more beneficial to high-risk populations through lung cancer screening, improve the detection rate, and reduce healthy individuals from receiving numerous unnecessary tests, especially those that can cause some pain and harm.
The present study also had some shortcomings. The study enrolled 23,674 lung cancer patients. For smoking status, the number of unknowns accounted for 10.5%. In the future, we need to pay attention to the integrity of variable information collection or choose an adequate method to supplement missing data; however, we currently only presented the results of real data. In this study, the main inclusion criterion for lung cancer patients was patients who were first diagnosed or treated in the hospital. Most patients with good economic conditions who were later found to have space occupancy by other means instead of a pathological diagnosis chose provincial or national specialized oncology hospitals to perform further examinations and make a pathological diagnosis. This led to the local hospital not being able to obtain the histological subtypes for the first diagnosis. Moreover, among the 35.1% of the total cases with unknown histological diagnosis, some of them may have had clear pathological types, but the local hospitals could not obtain them, which is also a limitation of this study.
Conclusion
Lung adenocarcinoma is one of the most common histological types of lung cancer in North China (Hebei Province), and the incidence of lung adenocarcinoma is increasing, especially in non-smokers. Lung adenocarcinoma is more common in women, individuals younger than 65 years old, those living in severely polluted cities, non-smokers, those with no history of lung-related diseases, non-manual labor workers, and the right lung. These can play a great guiding role in determining the accuracy of lung adenocarcinoma diagnosis in high-risk groups and lung cancer risk assessment models.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Ethics statement
This study was reviewed and approved by the Ethics Committee of Hebei Medical University Fourth Hospital. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin. Written informed consent to participate in this study was provided by the participants’ legal guardian/next of kin.
Author contributions
DJL and YTH were involved in the design and data analysis of this study and drafted the manuscript. YTH was responsible for the oversight of the whole study and edited the manuscript. DJL conceived the review and edited the manuscript. DJL, JS, XPD, DL, and JJ collected and analyzed the data. All authors contributed to the article and approved the submitted version.
Funding
This study was approved and supported by Department of Disease Control and Prevention, Hebei Provincial Health Commission Letter (2018) No. 5.
Acknowledgments
We gratefully acknowledge the cooperation of all the population-based cancer registries in providing cancer statistics, data collection, sorting, verification, and database creation.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher’s note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Supplementary material
The Supplementary Material for this article can be found online at:
References
Bray F, Ferlay J, Soerjomataram I, Siegel RL, Torre LA, Jemal A. Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin (2018) 68:394–424. doi: 10.3322/caac.21492
PubMed Abstract | CrossRef Full Text | Google Scholar
Zheng R, Zhang S, Zeng H, Wang SM, Sun KX, Chen R, et al. Cancer incidence and mortality in China, 2016. J Natl Cancer Cen (2022) 2:1–9. doi: 10.1016/j.jncc.2022.02.002
CrossRef Full Text | Google Scholar
Skřičková J, Kadlec B, Venclíček O, Merta Z. Lung cancer. Cas Lek Cesk (2018) 157:226–36.
PubMed Abstract | Google Scholar
Houston KA, Henley SJ, Li J, White MC, Richards TB. Patterns in lung cancer incidence rates and trends by histologic type in the united states, 2004-2009. Lung Cancer (2014) 86:22–8. doi: 10.1016/j.lungcan.2014.08.001
PubMed Abstract | CrossRef Full Text | Google Scholar
He Y, Liang D, Li D, Shi J, Jin J, Zhai J, et al. Cancer incidence and mortality in hebei province, 2013. Med (Baltimore) (2017) 96:e7293. doi: 10.1097/MD.0000000000007293
CrossRef Full Text | Google Scholar
Nakamura H, Saji H. Worldwide trend of increasing primary adenocarcinoma of the lung. Surg Today (2014) 44:1004–12. doi: 10.1007/s00595-013-0636-z
PubMed Abstract | CrossRef Full Text | Google Scholar
Gedvilaitė V, Danila E, Cicėnas S, Smailytė G. Lung cancer survival in Lithuania: changes by histology, age, and sex from 2003-2007 to 2008-2012. Cancer Control (2019) 26:1073274819836085. doi: 10.1177/1073274819836085
PubMed Abstract | CrossRef Full Text | Google Scholar
Xu H, Xiao Z, Chen K, Tang M, Zheng N, Li P, et al. Spatial and temporal distribution, chemical characteristics, and sources of ambient particulate matter in the Beijing-Tianjin-Hebei region. Sci Total Environ (2019) 658:280–93. doi: 10.1016/j.scitotenv.2018.12.164
PubMed Abstract | CrossRef Full Text | Google Scholar
Gallagher D, Heymsfield SB, Heo M, Jebb SA, Murgatroyd PR, Sakamoto Y. Healthy percentage body fat ranges: an approach for developing guidelines based on body mass index. Am J Clin Nutr (2000) 72:694–701. doi: 10.1093/ajcn/72.3.694
PubMed Abstract | CrossRef Full Text | Google Scholar
B'chir F, Laouani A, Ksibi S, Arnaud MJ, Saguem S. Cigarette filter and the incidence of lung adenocarcinoma among Tunisian population. Lung Cancer (2007) 57:26–33. doi: 10.1016/j.lungcan.2007.01.034
PubMed Abstract | CrossRef Full Text | Google Scholar
Noone AM, Howlader N, Krapcho M, Miller D, Brest A, Yu M, et al eds. SEER cancer statistics review, 1975–2015. Bethesda, MD: Natl Cancer Inst (2018).
Google Scholar
Shin A, Oh CM, Kim BW, Woo H, Won YJ, Lee JS, et al. Lung cancer epidemiology in Korea. Cancer Res Treat (2017) 20170 49:616–26. doi: 10.4143/crt.2016.178
CrossRef Full Text | Google Scholar
John T, Cooper WA, Wright G, Siva S, Solomon B, Marshall HM, et al. Lung cancer in Australia. J Thorac Oncol (2020) 15:1809–14. doi: 10.1016/j.jtho.2020.09.005
PubMed Abstract | CrossRef Full Text | Google Scholar
Fidler-Benaoudia MM, Torre LA, Bray F, Ferlay J, Jemal A. Lung cancer incidence in young women vs. young men: a systematic analysis in 40 countries.k. Int J Cancer (2020) 147:811–19. doi: 10.1002/ijc.32809
PubMed Abstract | CrossRef Full Text | Google Scholar
Galvez−Nino M, Ruiz R, Pinto JA, Roque K, Mantilla R, Raez LE, et al. Lung cancer in the young. Lung (2020) 198:195–200. doi: 10.1007/s00408-019-00294-5
PubMed Abstract | CrossRef Full Text | Google Scholar
Fritz I, Olsson H. Lung cancer in young women in southern Sweden: a descriptive study. Clin Respir J (2018) 12:1565–71. doi: 10.1111/crj.12712
PubMed Abstract | CrossRef Full Text | Google Scholar
Air pollution and cancer, IARC scientific publication. Available at: (Accessed September 20, 2018).
Google Scholar
Raaschou-Nielsen O, Andersen ZJ, Beelen R, Samoli E, Stafoggia M, Weinmayr G, et al. Air pollution and lung cancer incidence in 17 European cohorts: prospective analyses from the European study of cohorts for air pollution effects (ESCAPE). Lancet Oncol (2013) 14:813–22. doi: 10.1016/S1470-2045(13)70279-1
PubMed Abstract | CrossRef Full Text | Google Scholar
Tomczak A, Miller AB, Weichenthal SA, To T, Wall C, van Donkelaar A, et al. Long-term exposure to fine particulate matter air pollution and the risk of lung cancer among participants of the Canadian national breast screening study. Int J Cancer (2016) 139:1958–66. doi: 10.1002/ijc.30255
PubMed Abstract | CrossRef Full Text | Google Scholar
Gharibvand L, Beeson WL, Shavlik D, Knutsen R, Ghamsary M, Soret S, et al. The association between ambient fine particulate matter and incident adenocarcinoma subtype of lung cancer. Environ Health (2017) 16:71. doi: 10.1186/s12940-017-0268-7
PubMed Abstract | CrossRef Full Text | Google Scholar
Turner MC, Andersen ZJ, Baccarelli A, Diver WR, Gapstur SM, Pope CA 3rd, et al. Outdoor air pollution and cancer: an overview of the current evidence and public health recommendations. CA Cancer J Clin (2020) 70(6):460–79. doi: 10.3322/caac.21632. Online ahead of print
CrossRef Full Text | Google Scholar
Tseng CH, Tsuang BJ, Chiang CJ, Ku KC, Tseng JS, Yang TY, et al. The relationship between air pollution and lung cancer in nonsmokers in Taiwan. J Thorac Oncol (2019) 14:784–92. doi: 10.1016/j.jtho.2018.12.033
PubMed Abstract | CrossRef Full Text | Google Scholar
Shao J, Ge J, Feng X, Zhao C. Study on the relationship between PM2.5 concentration and intensive land use in hebei province based on a spatial regression model. PLoS One (2020) 15(9):e0238547. doi: 10.1371/journal.pone.0238547
PubMed Abstract | CrossRef Full Text | Google Scholar
Zhao N, Wang G, Li G, Lang J, Zhang H. Air pollution episodes during the COVID-19 outbreak in the Beijing-Tianjin-Hebei region of China: an insight into the transport pathways and source distribution. Environ pollut (2020) 267:115617. doi: 10.1016/j.envpol.2020.115617
PubMed Abstract | CrossRef Full Text | Google Scholar
Li X, Yan C, Wang C, Ma J, Li W, Liu J, et al. PM2.5-bound elements in hebei province, China: pollution levels, source apportionment and health risks. Sci Total Environ (2022) 806:150440. doi: 10.1016/j.scitotenv.2021.150440
PubMed Abstract | CrossRef Full Text | Google Scholar
Li C, Lu H. Adenosquamous carcinoma of the lung. Onco Targets Ther (2018) 11:4829–35. doi: 10.2147/OTT.S164574
PubMed Abstract | CrossRef Full Text | Google Scholar
Corrales L, Rosell R, Cardona AF, Martín C, Zatarain-Barrón ZL, Arrieta O, et al. Lung cancer in never smokers: the role of different risk factors other than tobacco smoking. Crit Rev Oncol Hematol (2020) 148:102895. doi: 10.1016/j.critrevonc.2020.102895
PubMed Abstract | CrossRef Full Text | Google Scholar
Chapman AM, Sun KY, Ruestow P, Cowan DM, Madl AK. Lung cancer mutation profile of EGFR, ALK, and KRAS: Meta-analysis and comparison of never and ever smokers. Lung Cancer (2016) 102:122–34. doi: 10.1016/j.lungcan.2016.10.010
PubMed Abstract | CrossRef Full Text | Google Scholar
Durmus N, Pehlivan S, Zhang Y, Shao Y, Arslan AA, Corona R, et al. Lung cancer characteristics in the world trade center environmental health center. Int J Environ Res Public Health (2021) 18:2689. doi: 10.3390/ijerph18052689
PubMed Abstract | CrossRef Full Text | Google Scholar
Zeng Q, Vogtmann E, Jia MM, Parascandola M, Li JB, Wu YL, et al. Tobacco smoking and trends in histological subtypes of female lung cancer at the cancer hospital of the Chinese academy of medical sciences over 13 years. Thorac Cancer (2019) 10:1717–24. doi: 10.1111/1759-7714.13141
PubMed Abstract | CrossRef Full Text | Google Scholar
Yun YD, Back JH, Ghang H, Jee SH, Kim Y, Lee SM, et al. Hazard ratio of smoking on lung cancer in Korea according to histological type and gender. Lung (2016) 194:281–89. doi: 10.1007/s00408-015-9836-1
PubMed Abstract | CrossRef Full Text | Google Scholar
Gou LY, Niu FY, Wu YL, Zhong WZ. Differences in driver genes between smoking-related and non-smoking-related lung cancer in the Chinese population. Cancer (2015) 17:3069–79. doi: 10.1002/cncr.29531
CrossRef Full Text | Google Scholar
Lee JJ, Park S, Park H, Kim S, Lee J, Lee J, et al. Tracing oncogene rearrangements in the mutational history of lung adenocarcinoma. Cell (2019) 177:1842–57.e21. doi: 10.1016/j.cell.2019.05.013
PubMed Abstract | CrossRef Full Text | Google Scholar
Li G, Mei Y, Yang F, Yi S, Wang L. Identification of genome variations in patients with lung adenocarcinoma using whole genome re−sequencing. Mol Med Rep (2017) 16:9464–72. doi: 10.3892/mmr.2017.7805
PubMed Abstract | CrossRef Full Text | Google Scholar
Locher C, Debieuvre D, Coëtmeur D, Goupil F, Molinier O, Collon T, et al. Major changes in lung cancer over the last ten years in France: the KBP-CPHG studies. Lung Cancer (2013) 81:32–8. doi: 10.1016/j.lungcan.2013.03.001
PubMed Abstract | CrossRef Full Text | Google Scholar
Huang C, Qu X, Du J. Proportion of lung adenocarcinoma in female never-smokers has increased dramatically over the past 28 years. J Thorac Dis (2019) 11:2685–8. doi: 10.21037/jtd.2019.07.08
PubMed Abstract | CrossRef Full Text | Google Scholar
Lewis DR, Check DP, Caporaso NE, Travis WD, Devesa SS. US Lung cancer trends by histologic type. Cancer (2014) 120:2883–92. doi: 10.1002/cncr.28749
PubMed Abstract | CrossRef Full Text | Google Scholar
Jiang X, de Groh M, Liu S, Liang H, Morrison H. Rising incidence of adenocarcinoma of the lung in Canada. Lung Cancer (2012) 78:16–22. doi: 10.1016/j.lungcan.2012.06.002
PubMed Abstract | CrossRef Full Text | Google Scholar
GBD 2015 Tobacco Collaborators. Smoking prevalence and attributable disease burden in 195 countries and territories, 1990-2015: a systematic analysis from the global burden of disease study 2015. Lancet (2017) 389:1885–906. doi: 10.1016/S0140-6736(17)30819-X
PubMed Abstract | CrossRef Full Text | Google Scholar
Keywords: lung adenocarcinoma, epidemiology, risk factors, never-smokers, North China
Citation: Li D, Shi J, Dong X, Liang D, Jin J and He Y (2022) Epidemiological characteristics and risk factors of lung adenocarcinoma: A retrospective observational study from North China. Front. Oncol. 12:892571. doi: 10.3389/fonc.2022.892571
Received: 09 March 2022; Accepted: 07 July 2022;
Published: 05 August 2022.
Edited by:
Reviewed by:
Copyright © 2022 Li, Shi, Dong, Liang, Jin and He. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Correspondence: Yutong He, aGV5dXRvbmdAaGVibXUuZWR1LmNu
Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.
Share on
Share on
Frontiers' impact
Articles published with Frontiers have received 12 million total citations
Your research is the real superpower - learn how we maximise its impact through our leading community journals
Supplementary Material |
17265 | https://proofwiki.org/wiki/Definition:Quadrangle/Complete | Definition:Quadrangle/Complete - ProofWiki
Definition:Quadrangle/Complete
From ProofWiki
<Definition:Quadrangle
Jump to navigationJump to search
[x]
Contents
1 Definition
1.1 Diagonal Points
2 Also see
3 Sources
Definition
A complete quadrangle is a quadrangle whose 4 4points are joined by 6 6lines: one connecting each pair of points.
It is complete because every pair is connected.
Diagonal Points
Let A A, B B, C C and D D be the points of a complete quadrangleA B C D A B C D.
The diagonal points of A B C D A B C D are the points of intersection of the lines:
A B A B and C D C D A C A C and B D B D A D A D and B C B C.
In the above diagram, A B C D A B C D are the initial 4 4points being connected.
The three diagonal points are R R, S S and T T.
Also see
Definition:Simple Quadrangle
Complete Quadrangle has Six Lines
Definition:Complete Quadrilateral
Results about complete quadrangles can be found here.
Sources
1998:David Nelson: The Penguin Dictionary of Mathematics(2nd ed.)... (previous)... (next): complete quadrangle
1998:David Nelson: The Penguin Dictionary of Mathematics(2nd ed.)... (previous)... (next): quadrangle
2008:David Nelson: The Penguin Dictionary of Mathematics(4th ed.)... (previous)... (next): complete quadrangle
2008:David Nelson: The Penguin Dictionary of Mathematics(4th ed.)... (previous)... (next): quadrangle
2014:Christopher Claphamand James Nicholson: The Concise Oxford Dictionary of Mathematics(5th ed.)... (previous)... (next): complete quadrangle
2021:Richard Earland James Nicholson: The Concise Oxford Dictionary of Mathematics(6th ed.)... (previous)... (next): complete quadrangle (complete quadrilateral)
Retrieved from "
Categories:
Definitions/Complete Quadrangles
Definitions/Quadrangles
Definitions/Projective Geometry
Navigation menu
Personal tools
Log in
Request account
Namespaces
Definition
Discussion
[x] English
Views
Read
View source
View history
[x] More
Search
Navigation
Main Page
Community discussion
Community portal
Recent changes
Random proof
Help
FAQ
P r∞f W i k i P r∞f W i k i L A T E X L A T E X commands
ProofWiki.org
Proof Index
Definition Index
Symbol Index
Axiom Index
Mathematicians
Books
Sandbox
All Categories
Glossary
Jokes
To Do
Proofread Articles
Wanted Proofs
More Wanted Proofs
Help Needed
Research Required
Stub Articles
Tidy Articles
Improvements Invited
Refactoring
Missing Links
Maintenance
Tools
What links here
Related changes
Special pages
Printable version
Permanent link
Page information
This page was last modified on 26 June 2025, at 20:21 and is 0 bytes
Content is available under Creative Commons Attribution-ShareAlike License unless otherwise noted.
Privacy policy
About ProofWiki
Disclaimers |
17266 | https://www.musclemathtuition.com/argand-diagram-and-electronics/ | MuscleMath Tuition
Contact Us
Argand Diagram and Electronics
Resources Uncategorized
Argand Diagram and Electronics
In the chapter on Complex Numbers for the O level syllabus, students are required to be able to represent complex numbers geometrically by means of an Argand diagram. But what is an Argand diagram if not the name of a game?
An Argand diagram, also known as the Argand plane or complex plane, is a graphical representation used in mathematics to visualize complex numbers. It was introduced by the Swiss mathematician Jean-Robert Argand in the early 19th century.
About the argand diagram
In an Argand diagram, complex numbers are represented in the Cartesian coordinate system. A complex number is composed of two parts: a real part (usually denoted as “a”) and an imaginary part (usually denoted as “bi,” where “i” is the imaginary unit, defined as the square root of -1). Complex numbers are written as “a + bi.”
The Argand diagram places the real part on the horizontal axis and the imaginary part on the vertical axis. So, if you have a complex number “a + bi,” you plot it as the point (a, b) in the complex plane.
How to draw argand diagram
Steps:
Prepare Your Paper:
If you’re using graph paper, make sure the grid lines are clearly visible. If not, draw a pair of perpendicular axes (horizontal and vertical lines) in the center of your paper. These axes represent the real and imaginary parts of the complex plane.
Label the Axes:
Label the horizontal axis as the “Real Axis” (often denoted as “Re”) and the vertical axis as the “Imaginary Axis” (often denoted as “Im”). Make sure to place these labels near the ends of the axes.
Scale the Axes:
Decide on a suitable scale for your diagram. This determines how much each grid line represents in terms of real and imaginary values. You can choose any scale that makes your diagram easy to work with. For example, you might choose one unit per grid line.
Plot Complex Numbers:
To plot a complex number, “a + bi,” find the corresponding point on the Argand diagram by following these steps:
Move “a” units to the right (positive direction) along the real axis.
Move “b” units up (positive direction) along the imaginary axis.
Mark the point where these two movements intersect. This point represents the complex number “a + bi.”
Connect Points (Optional):
If you’re plotting multiple complex numbers or want to illustrate relationships between them, use a ruler to draw lines connecting the points. These lines can help visualize complex arithmetic operations, such as addition, subtraction, multiplication, and division.
Label Points (Optional):
You can label each point with the complex number it represents. This can be particularly useful if you’re working with multiple complex numbers in a problem or graphing a function in the complex plane.
Highlight Points or Regions (Optional):
If you’re using the diagram to illustrate specific concepts or relationships, consider using colored pencils or pens to highlight points, regions, or lines of interest.
Add a Key (Optional):
If you’ve labeled points or highlighted regions, you may want to include a key or legend to explain the meaning of different colors or symbols used in your diagram.
Review and Edit:
Carefully review your Argand diagram to ensure that it accurately represents the complex numbers or relationships you intend to illustrate. Make any necessary adjustments or corrections.
An illustration of the Argand diagram can be found below, where numbers are shown in a complex polar format, and the phase angle between the two complex numbers is shown below:
What is the relation between the argand diagram and electronics?
Believe it or not, the argand diagram is used in electrical engineering! In electrical engineering, complex numbers and the Argand diagram are used to analyze alternating current (AC) circuits. Complex impedance, which combines resistance and reactance, is represented in the complex plane, aiding in circuit analysis and design.
Similarly this is also used in other forms of machines such as control systems and signal processing. Engineers use complex numbers and the Argand diagram to analyze and design control systems for applications like robotics and aerospace, while in signal processing they’re used to represent and manipulate signals. Techniques like the Fourier transform, which decomposes complex signals into simpler components, rely on complex numbers.
Quantum Mechanics: In quantum mechanics, complex numbers are fundamental. The Argand diagram helps visualize and understand complex probability amplitudes and quantum states.
Fluid Dynamics: Complex numbers are used in fluid dynamics to analyze fluid flow and turbulence. They help researchers understand the behavior of fluids in various conditions.
Economics and Finance: Complex numbers have applications in financial modeling and economics, where they can represent economic variables that involve both real and imaginary components.
Start Your Math Tuition Learning Adventure With Musclemath
At MuscleMath, we guide student to score well through O-level math tuition, A-level math tuition and H2 Math tuition. Our team consists of ex-MOE HOD’s, NIE teachers and full-time tutors.
As a leading and an MOE certified math tuition centre in Singapore, students will be provided with a wide range of options such as holiday tuition lessons, crash courses, free trials for all levels.
We provide fully customised lessons in conjunction with specialised notes and materials that are constantly updated. If you’re looking for math tuition or need help to figure out how to study maths, our team will be there for you to support your learning journey, start now and achieve better results!
Facebook Whatsapp Telegram Twitter
« Coordinate Geometry used in 3D
Stonehenge and circular measures » |
17267 | https://www.panafrican-med-journal.com/content/article/49/17/full/ | Acquired methemoglobinemia in infancy secondary to diarrhea: a case report
Instructions for authors
Submit to the PAMJ
Contact
Case report | Volume 49, Article 17, 19 Sep 2024 | 10.11604/pamj.2024.49.17.43733
Acquired methemoglobinemia in infancy secondary to diarrhea: a case report
Mariam Hany Aly, Hessa Mohammed Bukhari, Mohammed A.H. Aldirawi, Lemis Yavuz
Corresponding author:Lemis Yavuz, Al Jalila Children's Hospital, Dubai Health, Dubai, United Arab Emirates
Received: 22 Apr 2024 - Accepted: 24 May 2024 - Published: 19 Sep 2024
Domain: Pediatrics (general)
Keywords: Methemoglobinemia, gastroenteritis, infant, central venous sinus thrombosis, case report
©Mariam Hany Aly et al. Pan African Medical Journal (ISSN: 1937-8688). This is an Open Access article distributed under the terms of the Creative Commons Attribution International 4.0 License ( which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite this article:Mariam Hany Aly et al. Acquired methemoglobinemia in infancy secondary to diarrhea: a case report. Pan African Medical Journal. 2024;49:17. [doi: 10.11604/pamj.2024.49.17.43733]
Available online at:
PDF (793 Kb)
Home | Volume 49 | Article number 17
Case report
Acquired methemoglobinemia in infancy secondary to diarrhea: a case report
Acquired methemoglobinemia in infancy secondary to diarrhea: a case report
Mariam Hany Aly 1, Hessa Mohammed Bukhari 1, Mohammed A.H. Aldirawi 2, Lemis Yavuz 2,&
1 Mohamed Bin Rashid University, Dubai Health, Dubai, United Arab Emirates, 2 Al Jalila Children's Hospital, Dubai Health, Dubai, United Arab Emirates
&Corresponding author
Lemis Yavuz, Al Jalila Children's Hospital, Dubai Health, Dubai, United Arab Emirates
Abstract
Methemoglobinemia (MetHb) is a life-threatening condition that reduces the oxygen-carrying ability of hemoglobin. Acquired methemoglobinemia usually results from exposure to specific oxidizing agents. Symptoms and complications depend on the MetHb level, which can sometimes be fatal. We present a case of a 6-week-old infant exhibiting hypoxia alongside gastroenteritis, fever, poor oral intake and low activity, with confirmed methemoglobinemia in blood gas analysis. Despite negative results in the workup for underlying causes, including genetic testing conducted later, methylene blue administration only partially reduced methemoglobinemia level. Treatment involved managing diarrhea by transitioning to a hydrolyzed formula. Interestingly, an incidental discovery of partial central venous sinus thrombosis occurred during the diagnostic process, although no established correlation with methemoglobinemia was evident in the literature. This case report illustrates the complex presentation of methemoglobinemia in a previously healthy infant, occurring concurrently with gastrointestinal infection and unexpected thrombosis. It underscores the need for interdisciplinary collaboration and comprehensive management in addressing such multifaceted clinical scenarios in pediatric practice. This case emphasizes the importance of considering diarrhea as a possible cause of methemoglobinemia, especially in infancy. It also highlights the need for increased clinical awareness and prompt management approaches towards the various presentations of acquired methemoglobinemia in pediatric populations.
Introduction
Methemoglobinemia is a potentially life-threatening etiology of cyanosis necessitating heightened clinical attention due to its potentially fatal effects . It is characterized by a reduction in the oxygen-carrying capacity of circulating hemoglobin. This occurs due to the conversion of the iron species from the reduced ferrous [Fe 2+] state to the oxidized ferric [Fe 3+] state. At this stage, the ferric iron cannot effectively bind and transport oxygen. Elevated levels of methemoglobin lead to functional anemia . Methemoglobinemia can have either a congenital or acquired origin. In congenital forms, deficiencies in cytochrome b5 reductase or abnormal hemoglobin, specifically hemoglobin M, result in the incapacity for effective reduction . Acquired methemoglobinemia may be due to exposure to direct oxidizing agents like benzocaine or indirect oxidation like nitrates . Indirect oxidation by nitrites can be caused by exposure to exogenous toxins such as that of the bacteria Escherichia coli (E. coli) and Campylobacter [4,5]. These microorganisms induce methemoglobinemia by introducing a substantial nitrite burden. This excess nitrite subsequently oxidizes the ferrous iron in hemoglobin to a ferric state, creating methemoglobin. [4,5].
This case report details a 6-week-old infant presenting with both methemoglobinemia and gastroenteritis. The methylene blue only partially reduced methemoglobin levels. Diarrhea control with a hydrolyzed formula was the treatment. This case highlights the significance of considering diarrhea as a potential trigger for methemoglobinemia in infants and emphasizes the importance of prompt management and clinical vigilance in addressing acquired methemoglobinemia in pediatric patients.
Patient and observation
Patient information: a previously healthy 6-week-old baby boy presented to our hospital with a 1-day history of fever, vomiting, and diarrhea. The fever ranged between 37.8°C and 38.3°C. He experienced two episodes of non-bloody, non-bilious vomiting and five episodes of diarrhea. Additionally, he showed reduced oral intake and low activity. The infant had no prior medical history and was born to healthy, non-consanguineous parents. His antenatal and birth history were unremarkable, and he had received the birth vaccine. The child had been growing and developing normally until the onset of illness. There was no family history of genetic or hematological disorders.
Clinical findings: upon presentation, the infant appeared dusky in color. His vitals revealed an oxygen saturation of 86%, tachycardia, tachypnea, and a delayed capillary refill time of 2-3 seconds.
Diagnostic assessment: initial blood gas analysis indicated metabolic acidosis (pH: 7.21, partial pressure of carbon dioxide (PCO 2+): 27.8 mmHg, Bicarbonate (HCO 3): 13.9 mmol/L, Carboxyhemoglobin: 9.3 g/dL, Lactate: 2.1mmol/L), and methemoglobin levels were highly suggestive of methemoglobinemia (FMetHb: 63.8%). A fluid bolus of 20 mL/kg normal saline over 20 minutes was administered, and a repeat blood gas showed a marked worsening in metabolic acidosis (pH: 7.053, PCO 2: 30.9 mmHg, HCO 3: 8.9 mmol/L, FMetHb: 64.5%, Lactate: 8.1mmol/L). Another bolus of 20 mL/kg normal saline was given (Table 1).
Diagnosis: methemoglobinemia associated with gastroenteritis.
Therapeutic interventions: in the emergency department, given the worsening oxygen saturation, the infant was placed on a two-liter nasal cannula of oxygen. There was no improvement in oxygen saturation, so he was placed on BiPAP (Bilevel positive airway pressure) (pressure: 14/5 cmH20, rate: 30 beats per minute, FiO 2: 100%). The infant was then transferred to the pediatric intensive care unit (PICU) for further management. Methylene blue (2 mg/kg) was administered. Repeat capillary blood gas after 6 hours showed a MetHb of 2.2 (pH: 7.26, PaCO 2: 27.4, Bicarb: 13.9, Lactate: 2, MetHb: 2%). In the next 48 hours, the patient had hypokalaemia K 2.6, and blood sugar was borderline around 60 mg/dl. However, he remained stable on intravenous fluid and oxygen 1.5 L nasal cannula. The MetHb level fluctuated up to (15%) but didn't need a second dose of methylene blue. Given the very early onset of methemoglobinemia and the absence of predisposing factors, the genetic, metabolic, and hematology teams were consulted. Rapid genetic testing for methemoglobinemia was initiated. The echocardiogram revealed a small atrial communication from left to right, mostly patent foramen ovale, but otherwise normal cardiac anatomy. The patient was later shifted to the ward on day three of admission and was irritable. The repeated hemoglobin level was 7.1 g/dl, and the gastrointestinal panel showed Enteroaggregative Escherichia coli (E. coli). He received a packed red blood cell transfusion. Also, a cranial ultrasound (US) was done, which revealed partial superior sinus thrombosis. A subsequent CT scan confirmed partial central venous sinus thrombosis involving the superior sagittal and left transverse sinus (Figure 1), Figure 2)). Ophthalmology screening was reassuring. The workup for thrombosis was negative, and it did not require treatment.
Over one week and while waiting for blood test results, the baby continued to have watery diarrhea 3-5 times per day, hypoalbuminemia reached 2.4, and the methemoglobin level was fluctuating. Additionally, he was not gaining weight. However, he did not need more than oxygen supplements and intravenous fluid. During this time, the baby was on a regular baby formula. Given the negative testing results for methemoglobinemia causes, including genetic, the working diagnosis was methemoglobinemia secondary to gastroenteritis. Hence, the baby was placed on a hydrolyzed formula. In the next 72 hours, the diarrhea ceased, and weight gradually increased. The following comprehensive metabolic panel indicated normal methemoglobin and albumin levels.
Follow-up and outcome of interventions: follow-up in the clinic over the next three months revealed normal weight gain and normal methemoglobin level.
Patient perspective: the parents expressed satisfaction with the healthcare provided, understanding that the cause of their baby's presentation was likely related to gastrointestinal infection rather than genetic factors. They were provided with instructions on recognizing warning signs necessitating a visit to the emergency department, including persisting fever, vomiting, diarrhea, episodes of bluish discoloration, or any other concerning complaints.
Informed consent: the parents were briefed on the intention behind publishing the case report and provided consent for the inclusion of their baby's clinical data and imaging in the journal publication.
Discussion
Methemoglobinemia is a functional anemia. It decreases hemoglobin's ability to carry oxygen, which occurs when methemoglobin exceeds 1% in the bloodstream. This decrease happens because some or all of the iron atoms in hemoglobin shift from a ferrous [Fe 2+] state to an oxidized ferric [Fe 3+] state. The iron cannot efficiently bind and transport oxygen in this oxidized state . The pathophysiology of this condition remains unidentified. Several contributing factors play a role in its manifestation. Infants' erythrocytes exhibit heightened susceptibility to methemoglobin formation, primarily due to fetal hemoglobin, which is more prone to oxidation than adult hemoglobin .
Signs and symptoms of methemoglobinemia mimic other illnesses, including respiratory, cardiac, and central nervous system illnesses, shock, or even death . The suspicion of methemoglobinemia should arise in instances of unexplained cyanosis and hypoxemia. Like our patient, his saturation did not improve despite oxygen being provided. Significant methemoglobinemia symptoms are linked to MetHb levels. When the level exceeds 70%, it can cause death. Methemoglobinemia could be primary or secondary, so it is crucial to differentiate acquired cases that are more common than inherited ones. The most common cause is exposure to oxidizing agents . Interestingly, gastroenteritis in infancy has been identified as a possible cause of methemoglobinemia and was reported in a few cases . The data suggested that altered gastrointestinal flora due to infections and higher intestinal flora in infants promote the proliferation of intestinal flora, which, in turn, potentiates the conversion of nitrates to nitrites from dietary intake . The diagnosis of methemoglobinemia is typically confirmed through analysis of arterial or venous blood gas using co-oximetry .
Management depends on methemoglobinemia level and clinical manifestations. However, it is crucial to identify and treat the trigger factor to correct the level. Observation is enough in asymptomatic patients. Primary treatment involves 1% methylene blue with an initial 1-2 mg/kg dose. Ascorbic acid can be adjunctive. Non-responders may need blood exchange or hyperbaric oxygen therapy . Similarly, changing to a hydrolyzed formula and controlling the diarrhea were the treatments in our case.
An interesting finding in the case was the unexpected discovery of partial superior sinus thrombosis coexisting with methemoglobinemia. Despite an extensive review of the existing literature, no definitive correlation between thrombosis and methemoglobinemia was found. The absence of a clear relationship between the two conditions raises questions about whether there is a shared pathophysiological mechanism between the two or if it is just a rare co-occurrence. Given the critical nature of methemoglobinemia and thrombosis, future studies should explore the possible connections and implications for patient management.
The importance of this case report lies in the multifactorial presentation of methemoglobinemia in a previously healthy 6-week-old infant. The diagnosis of methemoglobinemia in the presence of a gastrointestinal infection with E. coli and the unexpected finding of partial superior sinus thrombosis show a complex diagnostic and therapeutic scenario not commonly encountered in pediatric practice. It shows the need for multidisciplinary investigation and intervention to effectively address the primary pathology and associated complications.
Conclusion
Methemoglobinemia is a life-threatening condition and should be considered in unexplained hypoxemia. The potential causes could be as severe as congenital or simple as gastroenteritis. Treatment depends on symptoms and level, and it is essential to treat the underlying condition.
Competing interests
The authors declare no competing interests.
Authors' contributions
Mariam Hany Aly and Hessa Mohammed Bukhari: collected the patient data; drafted and critically revised the manuscript for important intellectual content; conducted concurrent literature search; agreed to be accountable for all aspects of the work. Mohammed A.H. Aldiraw and Lemis Yavuz: critically revised the manuscript for important intellectual content; provided final approval of the version to be published; agreed to be accountable for all aspects of the work. All the authors read and approved the final version of this manuscript.
Table and figures
Table 1: diagnostic investigations conducted during hospital encounter
Figure 1): sagittal maximum intensity projection (MIP) showing a thrombus in the superior sagittal sinus
Figure 2): axial maximum intensity projection (MIP) showing a thrombus in the left transverse sinus
References
Ludlow JT, Wilkerson RG, Nappe TM. Methemoglobinemia. StatPearls Internet. Treasure Island (FL): StatPearls Publishing; 2024 Jan. Updated 2023 Aug 28.
Skold A, Cosco DL, Klein R. Methemoglobinemia: pathogenesis, diagnosis, and management. South Med J. 2011;104(11):757-61. PubMed | Google Scholar
Bradberry SM. Occupational methaemoglobinaemia: mechanisms of production, features, diagnosis and management including the use of methylene blue. Toxicol Rev. 2003;22(1):13-27. PubMed | Google Scholar
Hanukoglu A, Danon PN. Endogenous methemoglobinemia associated with diarrheal disease in infancy. J Pediatr Gastroenterol Nutr. 1996;23(1):1-7. PubMed | Google Scholar
Smith MA, Shah NR, Lobel JS, Hamilton W. Methemoglobinemia and hemolytic anemia associated with Campylobacter jejuni enteritis. J Pediatr Hematol Oncol. 1988;10(1):35-8. PubMed | Google Scholar
Naoum PC. Methemoglobinemia in children: how to explain the results? Rev Bras Hematol Hemoter. 2012;34(1):5. Google Scholar
Wills BK, Cumpston KL, Downs JW, Rose SR. Causative Agents in Clinically Significant Methemoglobinemia: A National Poison Data System Study. Am J Ther. 2020 Dec 29;28(5):e548-e551. PubMed | Google Scholar
Alhamoud NA, AlZugaibi RA, Aldoohan WM. Recurrent Acute Methemoglobinemia in an Infant With Persistent Gastroenteritis: A Case Report. Cureus. 2024 Jan;16(1):e51909. PubMed | Google Scholar
Search
Volume 52 (Sep - Dec 2025)
PDF (793 Kb)
This article authors
On Pubmed
Mariam Hany Aly
Hessa Mohammed Bukhari
Mohammed A.H. Aldirawi
Lemis Yavuz
On Google Scholar
Mariam Hany Aly
Hessa Mohammed Bukhari
Mohammed A.H. Aldirawi
Lemis Yavuz
Citation [Download]
Zotero
EndNote XML
Reference Manager
BibTex
ProCite
Navigate this article
Abstract
Introduction
Patient and observation
Discussion
Conclusion
Competing interests
Authors' contributions
Tables and figures
References
Similar articles in
Google Scholar
Key words
Methemoglobinemia
Gastroenteritis
Infant
Central venous sinus thrombosis
Case report
Tables and figures
Table 1: diagnostic investigations conducted during hospital encounter
"Open table")Figure 1: sagittal maximum intensity projection (MIP) showing a thrombus in the superior sagittal sinus)
"Open table")Figure 2: axial maximum intensity projection (MIP) showing a thrombus in the left transverse sinus)
Article metrics
This month (html)
422
All time (html)
1790
This month (PDF)
41
All time (PDF)
455
| Days | Access |
--- |
| 1 | 28 |
| 2 | 19 |
| 3 | 10 |
| 4 | 48 |
| 5 | 34 |
| 6 | 7 |
| 7 | 21 |
| 8 | 10 |
| 9 | 8 |
| 10 | 7 |
| 11 | 18 |
| 12 | 11 |
| 13 | 11 |
| 14 | 10 |
| 15 | 10 |
| 16 | 4 |
| 17 | 8 |
| 18 | 6 |
| 19 | 10 |
| 20 | 6 |
| 21 | 11 |
| 22 | 7 |
| 23 | 3 |
| 24 | 8 |
| 25 | 13 |
| 26 | 10 |
| 27 | 6 |
| 28 | 16 |
| 29 | 9 |
| 30 | 5 |
| 31 | 9 |
| 32 | 5 |
| 33 | 5 |
| 34 | 2 |
| 35 | 2 |
| 36 | 10 |
| 37 | 4 |
| 38 | 3 |
| 39 | 2 |
| 40 | 2 |
| 41 | 7 |
| 42 | 9 |
| 43 | 3 |
| 44 | 5 |
| 45 | 5 |
| 46 | 3 |
| 47 | 3 |
| 48 | 8 |
| 49 | 4 |
| 50 | 4 |
| 51 | 9 |
| 52 | 7 |
| 53 | 4 |
| 54 | 7 |
| 55 | 7 |
| 56 | 6 |
| 57 | 3 |
| 58 | 5 |
| 59 | 0 |
| 60 | 7 |
| 61 | 11 |
| 62 | 9 |
| 63 | 12 |
| 64 | 10 |
| 65 | 3 |
| 66 | 2 |
| 67 | 8 |
| 68 | 4 |
| 69 | 3 |
| 70 | 3 |
| 71 | 1 |
| 72 | 3 |
| 73 | 4 |
| 74 | 7 |
| 75 | 11 |
| 76 | 2 |
| 77 | 2 |
| 78 | 2 |
| 79 | 0 |
| 80 | 3 |
| 81 | 6 |
| 82 | 4 |
| 83 | 5 |
| 84 | 8 |
| 85 | 8 |
| 86 | 0 |
| 87 | 3 |
| 88 | 6 |
| 89 | 4 |
| 90 | 4 |
| 91 | 13 |
| 92 | 7 |
| 93 | 3 |
| 94 | 10 |
| 95 | 13 |
| 96 | 10 |
| 97 | 2 |
| 98 | 3 |
| 99 | 5 |
| 100 | 4 |
| 101 | 1 |
| 102 | 5 |
| 103 | 2 |
| 104 | 4 |
| 105 | 3 |
| 106 | 7 |
| 107 | 3 |
| 108 | 4 |
| 109 | 5 |
| 110 | 2 |
| 111 | 3 |
| 112 | 3 |
| 113 | 2 |
| 114 | 6 |
| 115 | 5 |
| 116 | 4 |
| 117 | 9 |
| 118 | 10 |
| 119 | 5 |
| 120 | 5 |
| 121 | 8 |
| 122 | 6 |
| 123 | 7 |
| 124 | 9 |
| 125 | 7 |
| 126 | 11 |
| 127 | 4 |
| 128 | 5 |
| 129 | 3 |
| 130 | 3 |
| 131 | 2 |
| 132 | 3 |
| 133 | 6 |
| 134 | 8 |
| 135 | 3 |
| 136 | 5 |
| 137 | 3 |
| 138 | 7 |
| 139 | 4 |
| 140 | 9 |
| 141 | 6 |
| 142 | 7 |
| 143 | 4 |
| 144 | 4 |
| 145 | 12 |
| 146 | 12 |
| 147 | 7 |
| 148 | 7 |
| 149 | 7 |
| 150 | 9 |
| 151 | 6 |
| 152 | 8 |
| 153 | 7 |
| 154 | 8 |
| 155 | 6 |
| 156 | 5 |
| 157 | 7 |
| 158 | 9 |
| 159 | 7 |
| 160 | 5 |
| 161 | 5 |
| 162 | 6 |
| 163 | 6 |
| 164 | 6 |
| 165 | 9 |
| 166 | 2 |
| 167 | 7 |
| 168 | 3 |
| 169 | 4 |
| 170 | 8 |
| 171 | 7 |
| 172 | 5 |
| 173 | 3 |
| 174 | 4 |
| 175 | 5 |
| 176 | 3 |
| 177 | 9 |
| 178 | 11 |
| 179 | 9 |
| 180 | 7 |
| 181 | 5 |
| 182 | 5 |
| 183 | 4 |
| 184 | 1 |
| 185 | 7 |
| 186 | 7 |
| 187 | 5 |
| 188 | 4 |
| 189 | 3 |
| 190 | 5 |
| 191 | 1 |
| 192 | 4 |
| 193 | 3 |
| 194 | 3 |
| 195 | 4 |
| 196 | 7 |
| 197 | 2 |
| 198 | 9 |
| 199 | 10 |
| 200 | 3 |
| 201 | 3 |
| 202 | 4 |
| 203 | 2 |
| 204 | 7 |
| 205 | 5 |
| 206 | 2 |
| 207 | 4 |
| 208 | 3 |
| 209 | 9 |
| 210 | 7 |
| 211 | 3 |
| 212 | 3 |
| 213 | 8 |
| 214 | 4 |
| 215 | 2 |
| 216 | 5 |
| 217 | 5 |
| 218 | 4 |
| 219 | 3 |
| 220 | 4 |
| 221 | 2 |
| 222 | 1 |
| 223 | 8 |
| 224 | 7 |
| 225 | 8 |
| 226 | 3 |
| 227 | 6 |
| 228 | 3 |
| 229 | 7 |
| 230 | 4 |
| 231 | 4 |
| 232 | 5 |
| 233 | 5 |
| 234 | 2 |
| 235 | 2 |
| 236 | 1 |
| 237 | 7 |
| 238 | 5 |
| 239 | 4 |
| 240 | 0 |
| 241 | 2 |
| 242 | 2 |
| 243 | 5 |
| 244 | 1 |
| 245 | 2 |
| 246 | 5 |
| 247 | 10 |
| 248 | 13 |
| 249 | 8 |
| 250 | 4 |
| 251 | 2 |
| 252 | 2 |
| 253 | 4 |
| 254 | 4 |
| 255 | 6 |
| 256 | 3 |
| 257 | 4 |
| 258 | 5 |
| 259 | 9 |
| 260 | 1 |
| 261 | 1 |
| 262 | 1 |
| 263 | 5 |
| 264 | 6 |
| 265 | 6 |
| 266 | 4 |
| 267 | 1 |
| 268 | 1 |
| 269 | 2 |
| 270 | 5 |
| 271 | 2 |
| 272 | 2 |
| 273 | 9 |
| 274 | 7 |
| 275 | 6 |
| 276 | 7 |
| 277 | 5 |
| 278 | 4 |
| 279 | 8 |
| 280 | 4 |
| 281 | 1 |
| 282 | 3 |
| 283 | 4 |
| 284 | 8 |
| 285 | 9 |
| 286 | 3 |
| 287 | 5 |
| 288 | 6 |
| 289 | 3 |
| 290 | 7 |
| 291 | 1 |
| 292 | 3 |
| 293 | 3 |
| 294 | 4 |
| 295 | 2 |
| 296 | 2 |
| 297 | 2 |
| 298 | 2 |
| 299 | 5 |
| 300 | 4 |
| 301 | 6 |
| 302 | 4 |
| 303 | 4 |
| 304 | 3 |
| 305 | 0 |
| 306 | 6 |
| 307 | 10 |
| 308 | 2 |
| 309 | 6 |
| 310 | 6 |
| 311 | 7 |
| 312 | 5 |
| 313 | 2 |
| 314 | 6 |
| 315 | 6 |
| 316 | 6 |
| 317 | 7 |
| 318 | 3 |
| 319 | 8 |
| 320 | 7 |
| 321 | 9 |
| 322 | 5 |
| 323 | 6 |
| 324 | 8 |
| 325 | 9 |
| 326 | 12 |
| 327 | 6 |
| 328 | 6 |
| 329 | 8 |
| 330 | 5 |
| 331 | 3 |
| 332 | 12 |
| 333 | 5 |
| 334 | 5 |
| 335 | 6 |
| 336 | 8 |
| 337 | 9 |
| 338 | 5 |
| 339 | 2 |
| 340 | 5 |
| 341 | 7 |
| 342 | 5 |
| 343 | 3 |
| 344 | 3 |
| 345 | 8 |
| 346 | 5 |
| 347 | 11 |
| 348 | 1 |
| 349 | 12 |
| 350 | 3 |
| 351 | 6 |
| 352 | 15 |
| 353 | 19 |
| 354 | 11 |
| 355 | 5 |
| 356 | 6 |
| 357 | 10 |
| 358 | 12 |
| 359 | 9 |
| 360 | 11 |
| 361 | 15 |
| 362 | 5 |
| 363 | 8 |
| 364 | 12 |
| 365 | 4 |
| 366 | 8 |
| 367 | 6 |
| 368 | 10 |
| 369 | 7 |
| 370 | 9 |
| 371 | 11 |
| 372 | 13 |
| 373 | 6 |
| 374 | 8 |
60
Countries of access
Zoom level changed to 1
We recommend
Acquired methemoglobinemia induced by indoxacarb poisoning: a case reportDhouha Ben Braiek Rania Hidri Chaima Kaabi Hend Zorgati Imen Mighri Rahma Ben Jazia Ameni Kacem Jihene Ayachi, Pan Afr Med Jrnl, 2024
Acute dapsone poisoning with methemoglobinemia: a case reportHamza El Hamzaoui Idriss Chajai Mohamed Chahidi El Ouazzani Abdelkader Benhalima Manal El Arfaoui Mustapha Alilou, Pan Afr Med Jrnl, 2022
Acute dapsone poisoning with methemoglobinemia: a case reportHamza El Hamzaoui Idriss Chajai Mohamed Chahidi El Ouazzani Abdelkader Benhalima Manal El Arfaoui Mustapha Alilou, Pan Afr Med Jrnl, 2022
Sudden onset methaemoglobinaemia in a previously well Ugandan child: a case report and literature reviewNicolette Nabukeera-Barungi, Pan Afr Med Jrnl, 2012
226 Methemoglobinemia in 2 exclusively breastfed infants with food protein-induced enterocolitis syndromeAntonella Geljic, Archives of Disease in Childhood, 2021
367 Red Blood Cell Exchange for Treatment of Acquired Methemoglobinemia in a Hospitalized Patient: A Case ReportValerie Lockhart, American Journal of Clinical Pathology, 2018
Rare cause of dyspnoea: phenazopyridine-induced methemoglobinemiaJewell Joseph, BMJ Supportive & Palliative Care, 2024
Severe methemoglobinemia secondary to isobutyl nitrite toxicity: the case of the ‘Gold Rush’Gregory M Taylor, Oxford Medical Case Reports, 2021
Powered by
Targeting settings
Do not sell my personal information
PlumX Metrics
Captures
Readers: 6
see details
Recently from the PAMJ
Oral health-related quality of life in Tunisian patients following radiotherapy for head and neck tumors: a cross-sectional study
18 Sep 2025
Acute drug intoxication: predictive factors for hospitalization
18 Sep 2025
Evaluation of the analytical performance of six rapid diagnostic tests (RDTs) for the detection of HIV-1 and 2 in Lubumbashi, Democratic Republic of Congo
17 Sep 2025
Tuberculosis in transit: risk factors in Tanzania´s high-mobility border regions
17 Sep 2025
IVIg-induced false positive hepatitis B serology in a patient with Guillain-Barré syndrome: a case report
17 Sep 2025
Neurocysticercosis and developmental venous anomaly in close proximity: a case report
16 Sep 2025
Authors´ services
About this article
The contents of this site is intended to professionals in the field or medicine, public health, and other professionals in the biomedical field. The PAMJ and associated products are from the Pan African Medical Center for Public Health Research and Information, a Non-governmental Organization (NGO) registered with the Kenya NGO Board.
All articles published in this journal are Open Access and distributed under the terms of the Creative Commons Attribution 4.0 International (CC BY 4.0).
PAMJ
Pan African Medical Journal
PAMJ Clinical Medicine
PAMJ - One Health
PAMJ Blog
PAMJ Conference Proceedings
PAMJ Conference Management System
PAMJ Supplements
PAMJ Editorial staff
For authors
Copyright agreements
Editorial policy
Instructions for authors
PAMJ - authors services
Recently in the PAMJ Blog
Uniting Healthcare Sectors: Join the East Africa Health Transformation in Nairobi (28 Jul 2025)
Call for papers: Marking the 50 years of Immunisation Programmes in Africa (A Special Issue) (29 Dec 2024)
PAMJ Call for papers: The Surge of Mpox in African Countries - A Global Call to Action (Special Issue) (24 Apr 2024)
L'autopalpation, un palier non négligeable dans la lutte contre le cancer du sein! (31 Oct 2023)
Resume sur le salon dentaire d'Afrique Centrale juin 2023 (07 Jun 2023)
About PAMJ - Manuscript Hut™
The Manuscript Hut is a product of the PAMJ Center for Public health Research and Information.
Kenya: 3rd Floor, Park Suite Building, Parkland Road, Nairobi. PoBox 38583-00100, tel: +254 (0)20-520-4356
Cameroon: Immeuble TechnoPark Essos, Yaounde, PoBox: 10020 Yaounde, tel: +237 (0)24-309-5880
Uganda: African Field Epidemiology network,Wings B & C, Ground Floor, Lugogo House, Plot 42, Lugogo By-pass, Kampala
Copyright © - Pan African Medical Journal - CEPHRI. 2025
Haraka Publishing Platform - (MMS V.2.5). Release date Jan 2018 - Customized for The Pan African Medical Journal
For advertisers Contact the PAMJ sales service. Download our latest media-kit.
sales-service@panafrican-med-journal.com
© Copyright Pan African Medical Journal. All Rights Reserved |
17268 | https://www.sparkl.me/learn/collegeboard-ap/statistics/estimating-probability-using-relative-frequency/revision-notes/214 | Past Papers
Log In
Past Papers
Courses
Collegeboard AP
IB DP
Cambridge IGCSE
AS & A Level
IB MYP 1-3
IB MYP 4-5
All Topics
statistics | collegeboard-ap
Collecting Data
1.1.1 Completely Randomized Design
1.1.2 Randomized Block & Matched Pairs Design
1.1.3 Introduction to Experiments
1.1.4 Well-Designed Experiments
1.1.5 Control Groups, Placebos & Blind Experiments
1.2.1 Introduction to Sampling
1.2.2 Simple Random Sampling (SRS)
1.2.3 Random Sampling Methods
1.2.4 Types of Bias
1.2.5 Non-random (Biased) Sampling Methods
Inference
2.1.1 Sampling Distributions for Sample Slopes
2.1.2 Hypothesis Tests for Slopes of Regression Lines
2.1.3 Confidence Intervals for Slopes of Regression Lines
2.2.1 Type I & Type II Errors
2.2.2 Probabilities of Errors
2.2.3 Power of a Test
2.3.1 Tails on a Normal Distribution
2.3.2 Introduction to Hypothesis Testing
2.3.3 Introduction to Confidence Intervals
2.4.1 Hypothesis Tests for Population Proportions
2.4.2 Confidence Intervals for Population Proportions
2.4.3 Hypothesis Tests for Differences in Population Proportions
2.4.4 Confidence Intervals for Differences in Population Proportions
2.5.1 The t-distribution
2.5.2 Hypothesis Tests for Population Means
2.5.3 Confidence Intervals for Population Means
2.5.4 Hypothesis Tests for Differences in Population Means
2.5.5 Confidence Intervals for Differences in Population Means
2.5.6 t-scores versus z-scores
2.5.7 Hypothesis Tests for Differences in Matched Pairs
2.5.8 Confidence Intervals for Differences in Matched Pairs
2.6.1 The Chi-Square Distribution
2.6.2 Hypothesis Tests for Goodness of Fit
2.7.1 Tests for Independence
2.7.2 Tests for Homogeneity
Probability, Random Variables and Probability Distributions
3.1.1 Estimating Probability using Relative Frequency
3.1.2 Probabilities of Single Events
3.1.3 Introduction to Combined Events
3.1.4 Addition Rule & Mutually Exclusive Events
3.1.5 Conditional Probability
3.1.6 Multiplication Rule & Independent Events
3.1.7 Probabilities of Combined Events using Tree Diagrams
3.1.8 Probabilities of Combined Events using the Rules
3.2.1 Probability Distributions for Discrete Random Variables
3.2.2 Cumulative Probability Distributions for Discrete Random Variables
3.2.3 Mean & Standard Deviation of a Discrete Random Variable
3.2.4 Linear Transformations of Random Variables
3.2.5 Linear Combinations of Random Variables
3.3.1 Introduction to Binomial Distributions
3.3.2 Probabilities for Binomial Distributions
3.3.3 Introduction to Geometric Distributions
3.3.4 Probabilities for Geometric Distributions
Exploring One-Variable Data
4.1.1 Describing Variables
4.1.2 Parameters & Statistics
4.1.3 Measures of Center
4.1.4 Measures of Position
4.1.5 Measures of Variability
4.1.6 Tables & Relative Frequency
4.1.7 Grouped Data
4.1.8 Outliers & Resistant Measures
4.1.9 Five-Number Summary & Boxplots
4.1.10 Skewness of Data
4.1.11 Comparing Data using Summary Statistics
4.2.1 Shape of Distributions
4.2.2 Bar Charts & Histograms
4.2.3 Dotplots & Stemplots
4.2.4 Cumulative Graphs
4.2.5 Comparing Univariate Graphs
4.3.1 Properties of Normal Distributions
4.3.2 Standardized z-scores
4.3.3 Comparing Normal Distributions
4.3.4 Finding Proportions from Normal Distributions
4.3.5 Inverse Normal Calculations
4.3.6 Estimating Parameters of Normal Distributions
Sampling Distributions
5.1.1 Introduction to Sampling Distributions
5.1.2 Sampling Distributions for Sample Means
5.1.3 The Central Limit Theorem
5.1.4 Sampling Distributions for Differences in Sample Means
5.1.5 Sampling Distributions for Sample Proportions
5.1.6 Sampling Distributions for Differences in Sample Proportions
5.1.7 Biased & Unbiased Estimators
Exploring Two-Variable Data
6.1.1 Two-Way Tables & Relative Frequencies
6.1.2 Bar Graphs & Mosaic Plots
6.2.1 Two-Way Tables & Relative Frequencies
6.2.2 Bar Graphs & Mosaic Plots
6.2.3 Explanatory & Response Variables
6.2.4 Scatterplots
6.2.5 Association & Correlation Coefficients
6.2.6 Interpolation & Extrapolation using Linear Models
6.2.7 Residuals
6.2.8 The Least-Squares Regression Line
6.2.9 Residual Plots
6.2.10 The Coefficient of Determination
6.2.11 Outliers, High-Leverage & Influential Points
6.2.12 Linearization of Bivariate Data
Estimating Probability using Relative Frequency
Topic 2/3
Revision Notes Flashcards Past Paper Analysis Questions Videos
Your Flashcards are Ready!
15 Flashcards in this deck.
or
How would you like to practise?
Choose Difficulty Level.
Choose Easy, Medium or Hard to match questions to your skill level.
Choose Learning Method.
Choose Easy, Medium or Hard to match questions to your skill level.
3
Still Learning
I know
12
Previous
Next
Estimating Probability using Relative Frequency
Introduction
Probability estimation is a fundamental concept in statistics, crucial for making informed decisions based on data. In the Collegeboard AP Statistics curriculum, understanding how to estimate probability using relative frequency equips students with the skills to analyze real-world situations effectively. This method bridges the gap between theoretical probability and practical application, providing a tangible approach to predicting outcomes based on observed data.
Key Concepts
Understanding Probability
Probability measures the likelihood of a particular event occurring within a set of possible outcomes. It is quantified between 0 and 1, where 0 indicates impossibility and 1 signifies certainty. Probability plays a vital role in fields ranging from gambling and finance to science and engineering, enabling predictions and risk assessments based on available data.
Theoretical vs. Empirical Probability
Probability can be classified into two main types: theoretical and empirical (or relative frequency). Theoretical probability is based on the assumption of equally likely outcomes, derived from logical reasoning and mathematical principles. For instance, the probability of rolling a three on a fair six-sided die is calculated as: $$ P(3) = \frac{1}{6} $$ On the other hand, empirical probability relies on actual experiments or historical data to estimate the likelihood of an event. This approach is particularly useful when theoretical probabilities are difficult to determine or do not account for real-world complexities.
Relative Frequency Method
The relative frequency method estimates probability by conducting experiments or observing events and calculating the ratio of the number of times an event occurs to the total number of trials. Mathematically, it is expressed as: $$ P(E) = \frac{\text{Number of favorable outcomes}}{\text{Total number of trials}} $$ For example, if a coin is flipped 100 times and lands on heads 55 times, the relative frequency probability of getting heads is: $$ P(\text{Heads}) = \frac{55}{100} = 0.55 $$ This method provides a practical way to estimate probabilities based on empirical evidence.
Law of Large Numbers
The Law of Large Numbers is a fundamental theorem in probability and statistics that states as the number of trials increases, the relative frequency of an event tends to approach its theoretical probability. This principle underpins the reliability of the relative frequency method, ensuring that with sufficient data, empirical estimates become accurate reflections of true probabilities.
Applications of Relative Frequency
Quality Control: In manufacturing, relative frequency helps in monitoring defect rates and ensuring product quality.
Epidemiology: Estimating the probability of disease occurrence based on observed case data.
Finance: Assessing the likelihood of market movements by analyzing historical price data.
Weather Forecasting: Predicting weather events based on historical weather patterns.
Advantages of Using Relative Frequency
Practicality: Relies on actual data, making it applicable in real-world scenarios where theoretical probabilities are unknown.
Flexibility: Can be used for both discrete and continuous events.
Simplicity: Easy to understand and implement, especially with large datasets.
Limitations of Relative Frequency
Sample Size Dependency: Smaller sample sizes can lead to inaccurate probability estimates.
Data Quality: Relies on the availability of accurate and representative data.
Variability: Subject to fluctuations and may not capture underlying probabilities without sufficient trials.
Steps to Estimate Probability using Relative Frequency
Define the Experiment: Clearly outline the event or outcome to be studied.
Conduct Trials: Perform a series of trials or collect observational data relevant to the event.
Record Outcomes: Tally the number of favorable outcomes for the event of interest.
Calculate Relative Frequency: Use the formula: $$ P(E) = \frac{\text{Number of favorable outcomes}}{\text{Total number of trials}} $$
Analyze Results: Interpret the estimated probability in the context of the experiment or application.
Example: Estimating Probability of Rain
Suppose a meteorologist wants to estimate the probability of rain on a given day in April based on past data. Over the last 30 years, it rained on 10 days in April.
Using the relative frequency method: $$ P(\text{Rain}) = \frac{10}{30} = 0.333 $$ Thus, the estimated probability of rain on any April day is approximately 33.3%.
Comparing Relative Frequency with Theoretical Probability
While theoretical probability relies on known mathematical models, relative frequency offers a data-driven approach. The choice between the two depends on the availability of data and the nature of the event being studied. For events with equally likely outcomes and sufficient theoretical framework, theoretical probability is efficient. However, in complex or uncertain environments, relative frequency provides a more adaptable and empirical method for probability estimation.
Confidence Intervals and Relative Frequency
When estimating probabilities using relative frequency, it's essential to consider the precision of the estimate. Confidence intervals provide a range within which the true probability is likely to lie, accounting for sample variability. For example, a 95% confidence interval offers high assurance that the true probability is within the specified range, enhancing the reliability of the relative frequency estimate.
Relative Frequency in Predictive Modeling
In predictive modeling, relative frequency serves as the foundation for forecasting and trend analysis. By analyzing historical data, statisticians can identify patterns and predict future occurrences with a quantifiable degree of confidence. This application is critical in sectors like finance, marketing, and public health, where accurate probability estimates inform strategic decisions.
Software Tools for Calculating Relative Frequency
Various software tools and statistical packages facilitate the calculation of relative frequency probabilities. Programs like R, Python (with libraries such as Pandas and NumPy), and Excel offer functions and modules that streamline data analysis and probability estimation. Utilizing these tools enhances efficiency and accuracy, especially when handling large datasets.
Best Practices for Using Relative Frequency
Ensure Representative Sampling: Data should be collected in a manner that accurately represents the population or process being studied.
Increase Sample Size: Larger samples reduce variability and lead to more precise probability estimates.
Validate Data Quality: Accurate and reliable data are crucial for meaningful probability estimates.
Combine with Theoretical Insights: Integrating empirical data with theoretical models can enhance the robustness of probability estimates.
Comparison Table
| | | |
---
| Aspect | Relative Frequency | Theoretical Probability |
| Definition | Estimates probability based on observed data from experiments or historical records. | Calculates probability based on known mathematical principles and equally likely outcomes. |
| Data Dependence | Requires actual data from trials or observations. | Does not require empirical data; relies on logical reasoning. |
| Accuracy | Improves with larger sample sizes due to the Law of Large Numbers. | Consistently accurate when underlying assumptions hold true. |
| Applicability | Ideal for complex or real-world scenarios where theoretical models are insufficient. | Suitable for simple, well-defined problems with known probabilities. |
| Flexibility | Adaptable to a wide range of situations with available data. | Limited to scenarios with clear, equal probability distributions. |
| Pros | Data-driven, practical, and applicable to real-world situations. | Straightforward and mathematically precise for defined problems. |
| Cons | Dependent on sample size and data quality; may be time-consuming. | Not applicable when theoretical assumptions do not hold. |
Summary and Key Takeaways
Relative frequency estimates probability based on actual data from experiments or observations.
The method becomes more accurate with larger sample sizes, aligning with the Law of Large Numbers.
Compared to theoretical probability, relative frequency is more adaptable to complex, real-world scenarios.
Understanding both relative and theoretical probabilities enhances statistical analysis and decision-making.
Utilizing software tools and adhering to best practices ensures reliable probability estimates.
Coming Soon!
Examiner Tip
Tips
Use Mnemonics: Remember "RFT" for Relative Frequency Technique. - Visualize Data: Create charts or graphs to better understand relative frequencies. - Practice with Real Data: Apply relative frequency calculations to everyday scenarios, like tracking weather patterns. - AP Exam Strategy: Carefully read questions to determine if empirical data is provided or if theoretical probability is required.
Did You Know
Did You Know
The concept of relative frequency dates back to the early 18th century with the work of Jacob Bernoulli, who formulated the Law of Large Numbers. Additionally, relative frequency is the foundation of modern data-driven decision-making, influencing areas like machine learning and artificial intelligence. For instance, recommendation systems on platforms like Netflix and Amazon utilize relative frequency to predict user preferences based on past behavior.
Common Mistakes
Common Mistakes
Confusing Relative Frequency with Probability: Students often mistake the two by not distinguishing between empirical data and theoretical models. Incorrect: Assuming the probability of rolling a six is always $\frac{1}{6}$ regardless of past rolls. Correct: Recognizing that while theoretically, the probability is $\frac{1}{6}$, empirical evidence may slightly differ based on actual trials. 2. Ignoring Sample Size Impact: Many overlook how small sample sizes can skew probability estimates. Incorrect Approach: Estimating $P(\text{Heads}) = \frac{1}{2}$ after only 2 coin flips. Correct Approach: Conducting a larger number of trials to ensure a more accurate estimation.
FAQ
What is the difference between relative frequency and theoretical probability?
Relative frequency is based on actual data from experiments or observations, whereas theoretical probability is calculated based on predefined mathematical principles assuming equally likely outcomes.
How does sample size affect relative frequency estimates?
Larger sample sizes generally lead to more accurate relative frequency estimates, as they reduce variability and better reflect the true probability of an event.
Can relative frequency be used for continuous data?
Yes, relative frequency can be applied to both discrete and continuous data by grouping continuous data into intervals and calculating the frequency within each interval.
What are confidence intervals in the context of relative frequency?
Confidence intervals provide a range within which the true probability is likely to lie, offering a measure of the estimate's precision based on the relative frequency approach.
Why might relative frequency differ from theoretical probability?
Differences can arise due to limited sample sizes, biases in data collection, or variations in real-world conditions that are not accounted for in theoretical models.
What software tools can assist in calculating relative frequency?
Tools like R, Python (with Pandas and NumPy), and Excel are commonly used to calculate relative frequencies efficiently, especially when handling large datasets.
Collecting Data
1.1.1 Completely Randomized Design
1.1.2 Randomized Block & Matched Pairs Design
1.1.3 Introduction to Experiments
1.1.4 Well-Designed Experiments
1.1.5 Control Groups, Placebos & Blind Experiments
1.2.1 Introduction to Sampling
1.2.2 Simple Random Sampling (SRS)
1.2.3 Random Sampling Methods
1.2.4 Types of Bias
1.2.5 Non-random (Biased) Sampling Methods
Inference
2.1.1 Sampling Distributions for Sample Slopes
2.1.2 Hypothesis Tests for Slopes of Regression Lines
2.1.3 Confidence Intervals for Slopes of Regression Lines
2.2.1 Type I & Type II Errors
2.2.2 Probabilities of Errors
2.2.3 Power of a Test
2.3.1 Tails on a Normal Distribution
2.3.2 Introduction to Hypothesis Testing
2.3.3 Introduction to Confidence Intervals
2.4.1 Hypothesis Tests for Population Proportions
2.4.2 Confidence Intervals for Population Proportions
2.4.3 Hypothesis Tests for Differences in Population Proportions
2.4.4 Confidence Intervals for Differences in Population Proportions
2.5.1 The t-distribution
2.5.2 Hypothesis Tests for Population Means
2.5.3 Confidence Intervals for Population Means
2.5.4 Hypothesis Tests for Differences in Population Means
2.5.5 Confidence Intervals for Differences in Population Means
2.5.6 t-scores versus z-scores
2.5.7 Hypothesis Tests for Differences in Matched Pairs
2.5.8 Confidence Intervals for Differences in Matched Pairs
2.6.1 The Chi-Square Distribution
2.6.2 Hypothesis Tests for Goodness of Fit
2.7.1 Tests for Independence
2.7.2 Tests for Homogeneity
Probability, Random Variables and Probability Distributions
3.1.1 Estimating Probability using Relative Frequency
3.1.2 Probabilities of Single Events
3.1.3 Introduction to Combined Events
3.1.4 Addition Rule & Mutually Exclusive Events
3.1.5 Conditional Probability
3.1.6 Multiplication Rule & Independent Events
3.1.7 Probabilities of Combined Events using Tree Diagrams
3.1.8 Probabilities of Combined Events using the Rules
3.2.1 Probability Distributions for Discrete Random Variables
3.2.2 Cumulative Probability Distributions for Discrete Random Variables
3.2.3 Mean & Standard Deviation of a Discrete Random Variable
3.2.4 Linear Transformations of Random Variables
3.2.5 Linear Combinations of Random Variables
3.3.1 Introduction to Binomial Distributions
3.3.2 Probabilities for Binomial Distributions
3.3.3 Introduction to Geometric Distributions
3.3.4 Probabilities for Geometric Distributions
Exploring One-Variable Data
4.1.1 Describing Variables
4.1.2 Parameters & Statistics
4.1.3 Measures of Center
4.1.4 Measures of Position
4.1.5 Measures of Variability
4.1.6 Tables & Relative Frequency
4.1.7 Grouped Data
4.1.8 Outliers & Resistant Measures
4.1.9 Five-Number Summary & Boxplots
4.1.10 Skewness of Data
4.1.11 Comparing Data using Summary Statistics
4.2.1 Shape of Distributions
4.2.2 Bar Charts & Histograms
4.2.3 Dotplots & Stemplots
4.2.4 Cumulative Graphs
4.2.5 Comparing Univariate Graphs
4.3.1 Properties of Normal Distributions
4.3.2 Standardized z-scores
4.3.3 Comparing Normal Distributions
4.3.4 Finding Proportions from Normal Distributions
4.3.5 Inverse Normal Calculations
4.3.6 Estimating Parameters of Normal Distributions
Sampling Distributions
5.1.1 Introduction to Sampling Distributions
5.1.2 Sampling Distributions for Sample Means
5.1.3 The Central Limit Theorem
5.1.4 Sampling Distributions for Differences in Sample Means
5.1.5 Sampling Distributions for Sample Proportions
5.1.6 Sampling Distributions for Differences in Sample Proportions
5.1.7 Biased & Unbiased Estimators
Exploring Two-Variable Data
6.1.1 Two-Way Tables & Relative Frequencies
6.1.2 Bar Graphs & Mosaic Plots
6.2.1 Two-Way Tables & Relative Frequencies
6.2.2 Bar Graphs & Mosaic Plots
6.2.3 Explanatory & Response Variables
6.2.4 Scatterplots
6.2.5 Association & Correlation Coefficients
6.2.6 Interpolation & Extrapolation using Linear Models
6.2.7 Residuals
6.2.8 The Least-Squares Regression Line
6.2.9 Residual Plots
6.2.10 The Coefficient of Determination
6.2.11 Outliers, High-Leverage & Influential Points
6.2.12 Linearization of Bivariate Data
Get PDF
PDF
Share
Explore
How would you like to practise?
Choose Difficulty Level.
Choose Easy, Medium or Hard to match questions to your skill level.
Choose Learning Method.
Choose Easy, Medium or Hard to match questions to your skill level.
Share via
COPY |
17269 | https://math.stackexchange.com/questions/2675460/prove-that-for-a-given-point-on-an-ellipse-the-sum-of-the-distances-from-each-f | Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Prove that for a given point on an ellipse, the sum of the distances from each focal point is constant
Ask Question
Asked
Modified 7 years, 6 months ago
Viewed 2k times
7
$\begingroup$
An ellipse has equation $\frac{x^2}{25}+\frac{y^2}{9} = 1$ and $P(p,q)$ is a point on the ellipse. Points $F_1$ and $F_2$ have coordinates $(-4,0)$ and $(4,0)$. Show that the sum of the distances $|PF_1|$ +$|PF_2|$ does not depend on the value of p.
First, to find distance $|F_1P|$
$|F_1P| = \sqrt{(p+4)^2 + q^2}$
as $\frac{p^2}{25}+\frac{q^2}{9} = 1$
$\frac{9p^2}{25}+q^2 = 9$
$q^2 = 9-\frac{9p^2}{25}$
$|F_1P| = \sqrt{p^2+8p+16 + 9-\frac{9p^2}{25}}$
$= \sqrt{\frac{16}{25}p^2+8p-25}$
$= \sqrt{\frac{1}{25}(16p^2+200p-625)}$
$= \frac{1}{5}\sqrt{(4p+25)^2}$
$= \frac{1}{5}(4p+25)$
Finding the distance $|F_2P|$
$|F_2P| = \sqrt{(4-p)^2 + q^2}$
$= \sqrt{p^2-8p+16 + 9-\frac{9p^2}{25}}$
$= \sqrt{\frac{16}{25}p^2-8p-25}$
$= \sqrt{\frac{1}{25}(16p^2-200p-625)}$
$= \frac{1}{5}\sqrt{(4p-25)^2}$
$= \frac{1}{5}(4p-25)$
Therefore
$|F_1P|+|F_2P|= \frac{1}{5}(4p-25)+\frac{1}{5}(4p+25) = \frac{8}{5}p$
This is exactly what we're trying to disprove. I must have made a mistake somewhere. Please can someone explain where I went wrong. My answer appears to be the negative of what it should be. I expect the correct answer will be 10 units but for some reason it does not seem to work.
algebra-precalculus
geometry
euclidean-geometry
analytic-geometry
conic-sections
Share
edited Mar 7, 2018 at 14:48
nonuser
92.1k2020 gold badges110110 silver badges218218 bronze badges
asked Mar 3, 2018 at 21:55
InquirerInquirer
87522 gold badges99 silver badges2222 bronze badges
$\endgroup$
2
2
$\begingroup$ Compare $4p$ with 25 and think about the value of $\sqrt{(4p-25)^2}$. $\endgroup$
user
– user
2018-03-03 22:02:17 +00:00
Commented Mar 3, 2018 at 22:02
$\begingroup$ As a piece of generic advice, pick a value for $p$ and find the first line that's false. $\endgroup$
Ben Millwood
– Ben Millwood
2018-03-04 08:26:39 +00:00
Commented Mar 4, 2018 at 8:26
Add a comment |
3 Answers 3
Reset to default
8
$\begingroup$
Note that $y=0$ gives $x=\pm5$, so $-5 \le p \le 5$ for all points $P(p,q)$ on the ellipse. Therefore:
$$ |F_1P| = \frac{1}{5}\sqrt{(4p+25)^2} = \frac{1}{5}|4p+25| = \frac{1}{5}(25+4p) \ |F_2P| = \frac{1}{5}\sqrt{(4p-25)^2} = \frac{1}{5}|4p-25| = \color{red}{\frac{1}{5}(25-4p)} $$
Share
answered Mar 3, 2018 at 22:03
dxivdxiv
78k66 gold badges6969 silver badges127127 bronze badges
$\endgroup$
2
$\begingroup$ how do you know the absolute value of |4p25| is 254p $\endgroup$
Inquirer
– Inquirer
2018-03-03 22:07:16 +00:00
Commented Mar 3, 2018 at 22:07
$\begingroup$ @Inquirer $-5 \le p \le 5 \iff -20 \le 4p \le 20$, so $25-4p \ge 25-20 \gt 0$. $\endgroup$
dxiv
– dxiv
2018-03-03 22:08:01 +00:00
Commented Mar 3, 2018 at 22:08
Add a comment |
4
$\begingroup$
The key mistake you made here is your simplification of square roots. Actually, since a square root is positive always, you should get $\vert F_1P\vert = \frac{|4p+25|}{5} $ and $ \vert F_2P\vert = \frac{|4p-25|}{5}$. However, note $ -5 \leq p \leq 5$, so $ -20 \leq 4p \leq 20$ and $\frac{|4p+25|}{5} = \frac{4p+25}{5}, \frac{|4p-25|}{5} = \frac{-4p+25}{5}$ Summing the two distances, $p$ cancels out.
Share
edited Mar 8, 2018 at 14:47
answered Mar 3, 2018 at 22:15
Jesse MengJesse Meng
1,36099 silver badges2020 bronze badges
$\endgroup$
Add a comment |
3
$\begingroup$
Since $$e^2= a^2-b^2 = 25-9 =16\;\; \Longrightarrow \;\;e =4$$ so the point $F_1$ and $F_2$ are focuses of ellipse and by definiton of ellipse we have $$PF_1+PF_2 = constant = 2a =10$$
Share
answered Mar 3, 2018 at 22:00
nonusernonuser
92.1k2020 gold badges110110 silver badges218218 bronze badges
$\endgroup$
2
$\begingroup$ Bingo. It is just the definition of the ellipse. Any proof based on the familiar analytic equation assumes the definition to start with because the equation is derived from the definition. $\endgroup$
Oscar Lanzi
– Oscar Lanzi
2018-03-04 02:30:24 +00:00
Commented Mar 4, 2018 at 2:30
4
$\begingroup$ I don't think that's fair. The analytic equation and the focal-points definition are equivalent. This question is asking you to prove the equivalence, so I think this answer is begging the question. $\endgroup$
Ben Millwood
– Ben Millwood
2018-03-04 08:28:16 +00:00
Commented Mar 4, 2018 at 8:28
Add a comment |
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
algebra-precalculus
geometry
euclidean-geometry
analytic-geometry
conic-sections
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
2 Determining conic section equation given foci and sum of distance to each point
About a property of the tangents to an ellipse from a given point
2 Proving the Reflection Property of an Ellipse
0 Distance of point on ellipse from focii
0 Proof involving Reflection Property of an Ellipse
Recovering the standard equation of an ellipse ( in an appropriate CS) from its "( added) distances from foci "definition.
1 A Question about the Area of a Focal Triangle in an Ellipse
Hot Network Questions
Bypassing C64's PETSCII to screen code mapping
How to sample curves more densely (by arc-length) when their trajectory is more volatile, and less so when the trajectory is more constant
Identifying a movie where a man relives the same day
manage route redirects received from the default gateway
Help understanding moment of inertia
Where is the first repetition in the cumulative hierarchy up to elementary equivalence?
Can a GeoTIFF have 2 separate NoData values?
How can I get Remote Desktop (RD) to scale properly AND set maximum windowed size?
Identifying a thriller where a man is trapped in a telephone box by a sniper
How to locate a leak in an irrigation system?
Does clipping distortion affect the information contained within a frequency-modulated signal?
The geologic realities of a massive well out at Sea
Can Monks use their Dex modifier to determine jump distance?
On the Subject of Switches
My dissertation is wrong, but I already defended. How to remedy?
Analog story - nuclear bombs used to neutralize global warming
How to design a circuit that outputs the binary position of the 3rd set bit from the right in an 8-bit input?
Traversing a curve by portions of its arclength
Lingering odor presumably from bad chicken
"Unexpected"-type comic story. Aboard a space ark/colony ship. Everyone's a vampire/werewolf
Are there any alternatives to electricity that work/behave in a similar way?
Why is the definite article used in “Mi deporte favorito es el fútbol”?
Numbers Interpreted in Smallest Valid Base
What happens when the jewels run out?
more hot questions
Question feed |
17270 | http://abouthydrology.blogspot.com/2016/06/celerity-vs-velocity.html | AboutHydrology: Celerity versus velocity and the travel time problem
AboutHydrology
My reflections and notes about hydrology and being a hydrologist in academia. The daily evolution of my work. Especially for my students, but also for anyone with the patience to read them.
Thursday, June 30, 2016
Celerity versus velocity and the travel time problem
A little of introduction
Velocity of a fluid is the variation of its particles' position with time. Celerity is the speed at which a signal (a wave) is transmitted through a medium. Velocity of particles determines flows (fluxes) out of a control volume. Celerity, as we will see, modify the particles' velocity field (intensity and extension).
Jeff McDonnell(GS), and KeithBeven(GS) (J&K) with their nose for interesting topics,recently wrote a paper where they discuss the role of the two quantities with reference to travel time problems. Some other papers (e.g. Kirchner, 2016, Hrachoitzet al, 2013) echo the same problem.
One important point that Beven and McDonnell discuss is that transport of properties (passive tracers) depends on water molecules velocity and therefore between transported properties (like temperature, water isotopes, water age) and flow (assumed to be dominated by celerities) there can be various discrepancies.
Travel time, T r:=t−τ T r:=t−τ is described in literature by its probability p(t−τ|t)p(t−τ|t), where t t is the actual time, τ<t τ<t is the injection time and p(t−τ|t)p(t−τ|t) is the travel time distribution conditional to the actual time. For details about travel time distribution, residence time and life expectancy of water particles, please take the time to readthis paper.Implicitly, it is assumed that the probability is evaluated at the closure (boundary) of a control volume.The distribution represents the ensable distribution of infinite random experiment that regards the motion of a single molecule or, after the hypothesis of ergodicity, the distribution of the travel time of a bunch of molecules, which are injected together in the system, in different positions.
Actually J&K argued about the mean travel time tt more than of its distribution.tt definition is:
t:=∫∞0(t−τ)p(t−τ|t)d(t−τ)t:=∫0∞(t−τ)p(t−τ|t)d(t−τ)
J&K (and the other papers) said thattt is affected both by velocity and celerity but I do not think they went right to point. One central fact is that, as follow from Botter et al., 2010, 2011, discharge is estimated through a different probability distribution function, which is p(t−τ|τ)p(t−τ|τ), conditional on injection time:
Q(t)=∫t 0 p(t−τ|τ)J(τ)d τ Q(t)=∫0 t p(t−τ|τ)J(τ)d τ
where Q(t)Q(t) is the total outflow from a control volume and J(τ)J(τ) is the precipitation input. Therefore the breakthrough curve (the backward probability) and the fluxes are not, usually, linearly related.
p(t−τ|τ)p(t−τ|τ) and p(t−τ|t)p(t−τ|t) are not completely independent and are related by the Niemi's relationship.
Back to the basics
Let’s me re-discuss the topic from the scratch. When we have a velocity field, u(x⃗,t)u(x→,t) in a fluid it can be either stationary or time varying. Assume it is stationary. Then:
∂u(x⃗,t)∂t=0∂u(x→,t)∂t=0
which implies that u(x⃗,t)=u(x⃗)u(x→,t)=u(x→), i.e. the velocity field, does not depend on time, and it is called stationary. Does it imply that forces do not act ? No. Forces act, because still we can have gradient of velocity:
∇u(x⃗)≡(∂u(x⃗)∂x,∂u(x⃗)∂y,∂u(x⃗)∂z)≠0∇u(x→)≡(∂u(x→)∂x,∂u(x→)∂y,∂u(x→)∂z)≠0
So, when the we follow a particle moving (the so called Lagrangian view of fluid motion), it is locally accelerated, and it feels a force. However, forces do not change in time and are subject to stationary resistances (dissipation) that do not allow for the overall system to accelerate (increase or decrease the total kinetic energy).
If a second particle is injected in the same place of the first one, it will have the same travel time. Particles (parcels, molecules) injected in different points will have different trajectories, all stationary, and the overall distribution of of travel times for a bunch of particles injected together (or, BTW, at a different time) is not time varying. It has to be remarked that the fluxes that come out from such a velocity field is not constant but reflect the mass (volume) of particle injected in the fields. Furthermore, we can say what is going in, will go out in a certain composition of ages which is relatively easy to disentangle from the knowledge of the spatial field and the amount of injected matter.
In the above case no net energy is transferred and we do not have to invoke celerity but just velocity. Also, the integrated probabilities of which we talked at the beginning is: p(t−τ|τ)=p(t−τ|t)=p(t−τ)p(t−τ|τ)=p(t−τ|t)=p(t−τ), i.e. the probabilities are time invariant and not conditional to something.
Now, assume we have a time varying velocity field, so that u=u(x⃗,t)u=u(x→,t) and:
∂u(x⃗,t)∂t≠0∂u(x→,t)∂t≠0
This implies we have acceleration in (usually) many points which must be generated by a local force (which has to be time varying). The way to obtain it in a fluid is to generate a time-varying pressure gradient (yes, excluding time varying gravity, time varying electromagnetic fields, time varying applied work, time varying temperature or density gradient etc. , all things we usually not deal with in our hydrological problems). Actually it can be shown that these pressure variations, once imposed, can travel from one place to another (in ways that are different in different phenomena: see J&K for the enumeration of some cases) so that this pressure variation is the cause of the varying velocity field. The speed of these travelling pressures is the celerity we (and J&K and others) are talking about. Conjointly with velocity field variations, fluxes in previously unaffected portion of the control volume can be activated (in some part of the control volume particles can start to move or some part of the control volumes are invaded by new moving particles).
Therefore the distribution of particles’ travel times is affected in two ways: by mobilising water (and/or enlarging the sectional area) and by increasing (locally) water velocities.
From the above analysis is also evident that the phrase "celerity (of pressure waves) affects flows" is true, but in an indirect way, since flow is affected by velocity field and its spatial extension (no less, no more), but both are modified by pressure waves (the causal reason).
Consequences on the mean travel time
In the case of time varying velocity fields, distribution of the mean travel time can vary in time because the new activated areas can contain water of ages different from those activated previously, and because locally the velocity of water can increase (or decrease) at the passage of the pressure wave.
The latter effect can skew the distribution of travel time (to smaller or larger travel times), but unless the variation is such that the whole the control section is affected (which can be the case), it has not significant effects on the mean travel time, which is an integrated quantity.
Due to time variation in the velocity field, when we inject two water volumes in the same place at different times, their travel time is different, and consequently, any transported quantity is transported differently. The travel time distribution p(t−τ|t)p(t−τ|t) is therefore time-variant, and we can attribute this variation to the effect of celerity, meaning that there are pressure waves propagating in the control volumes. In my view this is a necessary and sufficient condition (mathematically speaking) but we can discuss how inputs variations affect water field velocity and special cases (mostly ideal ones) when the existing flow fields is just extended but not modified where previously existing.
Well, the velocity field is not superimposed, but caused by the quantity of water itself.
So far we discussed the cause of varying velocity fields which I suggested was due to pressure waves. These, in case of water flow are usually generated by variations of the water amount itself. The velocity field, in fact, is not something independent of the water quantity and distribution and, on the contrary, depends on it. So when we inject water, we cause pressure waves that propagates with a law dependent on the medium: at the surface, in the vadose zone, or in the ground. (Therefore the idea of a stationary flow field with varying water inputs is not, in general, a credible assumption. It can be an approximation to take with care).
I think there is no mystery in all of it.
Post Scriptum (July 8/2016)
I realised that what I wrote was overlooking the normal view (and problems) researchers in the field have about travel time distribution. The reference is the beautiful (a must read) work Dagan, 1984 cited below. Dagan and followers observe that there is diffusion and, moreover, stationary velocity field can be very heterogeneous. I forgot diffusion, and, whilst accounting for the second, I did not stressed enough the consequences of heterogeneity on travel time distributions. For this, the reading of Dagan's paper can be enough. What I said, however, remains true.
References
Botter G, Bertuzzo E, Rinaldo A, 2010. Transport in the hydrologic response: Travel time distributions, soil moisture dynamics, and the old water paradox. Water Resources Research 46(3): n/a–n/a. ISSN 1944-7973. doi:10.1029/2009WR008371.
Botter G, Bertuzzo E, Rinaldo A, 2011. Catchment residence and travel time distributions: The master equation. Geophysical Research Letters 38(11): L11403. ISSN 1944-8007. doi:10.1029/2011GL047666.
Dagan, G. (1984). Solute transport in heterogeneous porous formations. Journal of Fluid Mechanics, 145, 151–177.
Hrachowitz, M., Savenije, H., Boogard, T. A., Tetzlaff, D., & Soulsby, C. (2013). What can flux tracking teach us about water age distribution patterns and their temporal dynamics?, Hydrol. Earth Syst. Sciences, 17, 533–564.
Kirchner, J. W. (2016). Aggregation in environmental systems – Part 1: Seasonal tracer cycles quantify young water fractions, but not mean transit times, in spatially heterogeneous catchments,Hydrol. Earth Syst. Sciences, 279–297.
McDonnell, J. J., & Beven, K. J. (2014). The future of hydrological sciences: A (common) path forward? A call to action aimed at understanding velocities, celerities and residence time distributions of the headwater hydrograph. Water Resources Research, 50, 5342–5350.
Niemi AJ, 1977. Residence time distributions of variable flow processes. The International Journal of Applied Radiation and Isotopes 28(10–11): 855 – 860. ISSN 0020-708X. doi:
Posted by About Hydrology at 12:18 AM
Email ThisBlogThis!Share to XShare to FacebookShare to Pinterest
Labels: Celerity, Residence time
No comments:
Post a Comment
Newer PostOlder PostHome
Subscribe to: Post Comments (Atom)
Top 5 post from last week
Python resources for Hydrologists Python is a modern object oriented language. Occasionally I wrote about it in my posts, also for remarking that I went in a different direc...
R resources for Hydrologists R is my statistical software of election. I had hard time to convince my Ph.D. students to adopt it, but finally they did, and, as usually ...
A new topic for a Ph.D. in Hydrology GEOtop 2.0 ( )is a successful process-based model of the hydrological cycle. I...
Java for Hydrologists 101 There are a few postings on Java in this blog. Since I want to teach it to my students, I am quietly starting to populate this page with p...
Hydrology 2017 This year I decided to introduce strong news in my Hydrology course . Not only a change of topics, but also a change of perspective. I incr...
Java 101 for Hydrologists
Java for Hydrologists 101
There are a few postings on Java in this blog. Since I want to teach it to my students, I am quietly starting to populate this page with p...
Subscribe To
Posts
Atom
Posts
Comments
Atom
Comments
Search This Blog
Blog Archive
►2025(28)
►August(7)
►July(3)
►June(2)
►May(4)
►April(2)
►March(3)
►February(5)
►January(2)
►2024(48)
►December(5)
►November(2)
►October(4)
►September(7)
►August(6)
►July(1)
►June(3)
►May(4)
►April(4)
►March(4)
►February(4)
►January(4)
►2023(33)
►December(2)
►November(2)
►October(5)
►September(2)
►August(2)
►July(2)
►June(2)
►May(2)
►April(2)
►March(4)
►February(5)
►January(3)
►2022(45)
►December(1)
►November(3)
►October(5)
►September(5)
►August(6)
►July(3)
►June(2)
►May(4)
►April(3)
►March(3)
►February(8)
►January(2)
►2021(60)
►December(3)
►November(4)
►October(9)
►September(3)
►August(6)
►July(1)
►June(3)
►May(2)
►April(6)
►March(7)
►February(10)
►January(6)
►2020(60)
►December(7)
►November(10)
►October(9)
►September(3)
►August(1)
►July(5)
►June(3)
►May(6)
►April(3)
►March(4)
►February(3)
►January(6)
►2019(58)
►December(5)
►November(3)
►October(6)
►September(5)
►August(3)
►July(1)
►June(4)
►May(4)
►April(7)
►March(4)
►February(7)
►January(9)
►2018(66)
►December(2)
►November(6)
►October(6)
►September(4)
►August(4)
►July(3)
►June(4)
►May(7)
►April(10)
►March(9)
►February(6)
►January(5)
►2017(78)
►December(7)
►November(4)
►October(15)
►September(6)
►August(4)
►July(9)
►June(3)
►May(5)
►April(5)
►March(5)
►February(8)
►January(7)
▼2016(73)
►December(3)
►November(7)
►October(9)
►September(7)
►August(6)
►July(5)
▼June(4)
Celerity versus velocity and the travel time problem
Is possible to get the runoff for European Countri...
Java, Python, C/C++ or FORTRAN in scientific progr...
gvSIG Festival
►May(10)
►April(7)
►March(7)
►February(4)
►January(4)
►2015(83)
►December(6)
►November(9)
►October(6)
►September(6)
►August(7)
►July(8)
►June(3)
►May(5)
►April(5)
►March(13)
►February(8)
►January(7)
►2014(81)
►December(7)
►November(5)
►October(7)
►September(10)
►August(6)
►July(5)
►June(4)
►May(12)
►April(5)
►March(5)
►February(11)
►January(4)
►2013(71)
►December(8)
►November(6)
►October(6)
►September(9)
►August(5)
►July(7)
►June(4)
►May(5)
►April(5)
►March(5)
►February(6)
►January(5)
►2012(72)
►December(6)
►November(4)
►October(4)
►September(12)
►August(6)
►July(1)
►June(5)
►May(5)
►April(3)
►March(11)
►February(8)
►January(7)
►2011(57)
►December(3)
►November(5)
►October(7)
►September(1)
►August(3)
►July(3)
►June(5)
►May(6)
►April(2)
►March(4)
►February(7)
►January(11)
►2010(1)
►December(1)
Related Blogs
Azimuth Negative Mass 11 hours ago
R-bloggers Learning And Exploring The Workflow of RNA-Seq Analysis – A Note To Myself 2 days ago
Math3ma What is a Good Quantum Encoding? Part 1 4 days ago
Dr. Anne Jefferson's Watershed Hydrology Lab | Dr. Anne Jefferson's Watershed Hydrology Lab New video! The Little Rain Garden that Could (Survive a Flood) 6 days ago
GeoLog Slavery in the geologic record – Environmental and geomorphological legacies 6 days ago
RealClimate “But you said the ice was going to disappear in 10 years!” 1 week ago
Living in an Ivory Basement Announcing taxburst, an update of the Krona software for taxonomy exploration 1 month ago
GEOframe OMS Runner Library: Streamlining Hydrological Model Execution 3 months ago
JGrass Tech Tips Shaping the future after 20 years of HydroloGIS 7 months ago
Andy Philipp's Hydrology Blog | A hydrologist's blog on hydrology and beyond | Der Blog eines Hydrologen über Hydrologie und mehr | By Dr. Andy Philipp Weihnachtsgeschenk vom #LHP – #API 9 months ago
tonyladson | Hydrology, Natural Resources and R Climate change and home insurance 1 year ago
GFOSS Blog | GRASS GIS Courses GRASS GIS Annual Report 2023 1 year ago
scottishsnow Rapid dashboard prototyping 2 years ago
Hydrologic Jobs and Positions Lecturer/ Assistant Professor in Coastal Dynamics University of Liverpool, UK 4 years ago
The Hydrologic Webinar Tribune Introduction to Critical Zone Observatories and Watershed Sites. 4 years ago
Geology in Motion A brief history of explosive eruptions at Kilauea: 1790 and 1924 and a great 1955 video 7 years ago
Computational Hydrology | University of Saskatchewan - Blog
MUSCLES OF QUARTZ
How to cite this blog
To cite a post from this blog, please use something like:
Rigon R., R resources for Hydrologists, last retrieved 2014/01/03
The web address was cut from the address of the R post.
Total Pageviews
| | |
--- |
| 0 | 33 |
| 1 | 22 |
| 2 | 37 |
| 3 | 35 |
| 4 | 49 |
| 5 | 44 |
| 6 | 43 |
| 7 | 54 |
| 8 | 91 |
| 9 | 44 |
| 10 | 58 |
| 11 | 36 |
| 12 | 43 |
| 13 | 46 |
| 14 | 59 |
| 15 | 47 |
| 16 | 69 |
| 17 | 50 |
| 18 | 73 |
| 19 | 50 |
| 20 | 50 |
| 21 | 52 |
| 22 | 63 |
| 23 | 28 |
| 24 | 37 |
| 25 | 100 |
| 26 | 50 |
| 27 | 56 |
| 28 | 23 |
| 29 | 3 |
2,645,454
Popular Posts
R resources for Hydrologists R is my statistical software of election. I had hard time to convince my Ph.D. students to adopt it, but finally they did, and, as usually ...
A new topic for a Ph.D. in Hydrology GEOtop 2.0 ( )is a successful process-based model of the hydrological cycle. I...
Python resources for Hydrologists Python is a modern object oriented language. Occasionally I wrote about it in my posts, also for remarking that I went in a different direc...
Il corso di Idrologia (My hydrology class) In questa pagina, aggiornandola costantemente, carico il materiale del corso di Idrologia per gli Studenti della laurea triennale di Ingegne...
Matching process based modelling and remote sensing This blog post summarizes a recent discussion I have had with Luca Brocca concerning the use of remote sensing data in hydrological applic...
Hydrology 2017 This year I decided to introduce strong news in my Hydrology course . Not only a change of topics, but also a change of perspective. I incr...
Hydraulic Constructions (Again another class I teach) I finally decide to upload here also the material of my class of Hydraulic Construction. Is a little off-topic here but I realized that that...
Top 18 design patterns interview questions in Java to know This is (almost) verbatim from the Java67 blog (to which the appropriate credit must be given), with some links in addition to Java 8 . ...
Reservoirology #3 This is a revision of the previous post on the same topic . There I tried to develop my own algebra of symbols to represent coarse grained (...
GRAL Times ago we had the Gruppo Italiano delle Catastrofi Idrogeologiche (GNDC, Italian Group of Hydrological and Geological Hazards). But as t...
Labels
GEOFRAME(74)JGrass-NewAGE(55)GEOtop(53)Scientists(43)video(43)Accepted papers(40)OMS3(39)Submitted Papers(38)Hydrological modelling(35)Evapotranspiration(31)Distributed Modeling(30)Modelling(28)Java(25)GEOtop 2.0(24)Hydrologic Research Topics(24)JGrasstools(24)Snow(23)Corso Idrologia Ambiente e Territorio Trento(21)Ecohydrology(21)Residence time(21)Ph.D.(20)Statistics(20)landslides(20)Reproducible Research(19)Richards equations(19)CLIMAWARE(18)Remote sensing(17)Projects' Proposal(16)R software(16)Me-Myself-I(14)Modeling(14)Permafrost(14)geomorphology(14)Climate Change(13)Data(13)Hillslope Hydrology(13)Programming(13)Reservoirology(13)Travel times(13)Writing technical papers(13)snow modeling(13)Cryosphere(12)My Past Research(12)Ph.D. Positions(11)Python(11)Transpiration(11)Winter School(11)AdbPo Project(10)Hydrology Class(10)Machine Learning(10)WHETGEO(10)Andrea Rinaldo(9)DARTH(9)Free data(9)GIUH(9)Research Topics(9)Stradivari(9)WATZON(9)rainfall runoff(9)udig(9)Concetta D'Amato(8)Corso di Costruzioni Idrauliche Trento(8)Doing Research(8)Doing science(8)Francesco Serafin(8)Freezing Soil(8)GEOtop 3.0(8)Horton Machine(8)Hydrology(8)JGrass(8)Niccolò Tubini(8)Plants Hydraulics(8)Radiation Budget(8)Rainfall-Runoff(8)SPAC(8)Soil Moisture(8)Summer Schools(8)Surface Hydrology(8)floods(8)geomorphometry(8)Alpine convention(7)Costruzioni Idrauliche(7)Fractal river networks(7)Marialaura Bancheri(7)Meledrio(7)Petri nets(7)Ph.D. Students(7)Reviewing(7)Winter School on GEOframe(7)art on hydrology(7)benchmark papers(7)java for scientific computing(7)modeling by component(7)urban hydrology(7)Adige(6)Choice of hydrological model(6)EGU Wien 2017(6)Environmental Modelling(6)GEOSPACE(6)GIS Data(6)Hydrologis(6)Life FRANCA(6)Luca Brocca(6)Master Thesis(6)MeteoIO(6)Physico-Statistical modelling(6)R(6)Slides di Idrologia(6)Soil(6)WATZON seminars(6)landslide triggering(6)EPN(5)Gianluca Botter(5)Hazard Mapping(5)Hazards(5)Heterogeneity(5)Hillslope stability(5)Hydraulic Construction(5)IUH(5)Idrologia(5)Impacts of Climate change on water resources(5)Integrated Catchment Modelling(5)Jupyter(5)Michele Bottazzi(5)Open Science(5)Open Sourceness(5)Ph.D. Thesis(5)Plant-Atmosphere interactions(5)Poetry(5)Postdoc positions(5)Probability(5)Rivers(5)SII(5)SWMM(5)Shallow Landslides(5)Tenure-Track(5)Water Budget(5)gvSIG(5)4DHydro(4)CATHY(4)CUASHI(4)DEM(4)DICAM PhD(4)Droughts(4)EGU 2018(4)EGU 2019(4)Eclipse(4)Evaporation(4)GEOtop model executables(4)GMD(4)Global Hydrological Data(4)Global Hydrology(4)Graphs(4)Grids(4)Hydrologic Response(4)Hydrological Modelling 2021(4)Hydrology 2021(4)Interpolation methods(4)Kriging(4)MUSE(4)Object Oriented programming(4)Prospero(4)Python for Hydrologists(4)Rainfall(4)Random Functions(4)Random numbers(4)Slides(4)Soil Properties(4)SpaceItUp(4)Teaching(4)Thermodynamics(4)Thesis proposal (undergrad)(4)Turbulence(4)Uncertainty(4)Vincenzo Casulli(4)abouthydrology mailing list(4)iEMSs(4)AGU 2017(3)AGes(3)Algebraic Topology(3)Aqueducts(3)Blue Nile(3)CV(3)Catchment analysis(3)Causality(3)Celerity(3)Coupled Hydrological Modeling(3)Critical Zone(3)DARTHs(3)Dani Or(3)Daniele Penna(3)Data Assimilation(3)Dennis Baldocchi(3)Design Patterns(3)Discharges(3)Documenting models(3)Doing a poster(3)ERC(3)Earth System Modelling(3)Energy Budget(3)Epanet(3)Extremes(3)Failed proposal(3)GLOBAQUA(3)Geopaparazzi(3)Giuseppe Formetta(3)Growth and Form(3)Hydrogeomorphology(3)Hydrological measures(3)Hydrology Class 2022(3)Idrologia Urbana(3)Ignacio Rodriguez-Iturbe(3)Life FRANCA workshop(3)Net3(3)Networks(3)Nobody applied(3)Old Water(3)Open Data(3)Operational Hydrology(3)Optimal Channel Networks(3)PhD candidates(3)PhD studies(3)Posina(3)Presenting Science Contents in the Digital Era(3)Riccardo Rigon(3)Runoff formation(3)Software Carpentry(3)Soil Hydrology(3)Steep Streams Project(3)The man who planted trees(3)Thermodynamics of Earth(3)Tools(3)Tübingen(3)USGS(3)WATERSTEM(3)WHETGEO 1D(3)Water Resources Research(3)Writing a CV(3)return period(3)unstructured grids(3)AGU2024(2)Academic lifestyle(2)Accepted projects(2)Age-ranked functions(2)Alpine enviroments(2)BACI class(2)C/C++(2)C3A(2)Calibration(2)Carbon budget(2)Category theory(2)Chaos(2)Colorblind friendly Plots(2)Compiling(2)Concentration time(2)Concentrations(2)Corso di Modellazione Idrologica Unitn(2)Dalton medal(2)Database(2)Direttore(2)Doctoral Proposal(2)Documentation Templates(2)EGU2022(2)EO for WAter Cycle(2)EPA(2)ESA(2)Eagleson(2)Ecosystem Services(2)Emanuele Cordano(2)Enrico Bertuzzo(2)Flood Directive(2)Flood forecasting systems(2)Forest Ecosystems(2)GEOframe Summer School 2021(2)GEOframe Summer Schools(2)GEOtop history(2)GIS(2)GWS2021(2)Geostatistics(2)Git(2)Glaciers(2)Global DEM(2)Hazard Statistics in Italy(2)Hazard mapping in Trentino(2)Hoshin Gupta(2)Hydrological Modelling 2020(2)Hydrological Modelling 2022(2)Hydrological Modelling 2023(2)Hydrology 2023(2)Infiltration(2)Information Theory(2)Iowa Flood Forecasting System(2)Isotopes in Hydrology(2)Italian Data(2)JSWMM(2)Judea Pearl(2)Karst(2)LIDAR(2)Large Language Models(2)Leonardo da Vinci(2)Long wave radiation(2)Lysimiter GEO(2)MARRmoT(2)Manuela Girotto(2)Marco Borga(2)Mathematics(2)Model Validation(2)Modelling by Components(2)Ne3(2)Nested Newton(2)Niemi identity(2)ODEs(2)OSF(2)Open GIS(2)Opinions(2)Optimal principles in Hydrology(2)PRIN(2)Painting Hydrology(2)Paolo Benettin(2)Parallel computing(2)Parma Arpae(2)Peak Flows(2)Plants(2)Postdocs(2)Potential Evapotranspiration(2)Precipitations(2)Preferential Flow(2)Programming Tools(2)Projects(2)Publishing(2)Raindrops(2)Rainfall Kriging(2)Remote Sensing(2)Richards2D(2)Risk Policies(2)SWAT(2)Scientific Software(2)Shallow Landlsides(2)Shima Azimi(2)Slides on hydrology(2)Snow Remote Sensing(2)Snow and Forest(2)Snow maps(2)Soil Water Retention Curves(2)Soil function(2)Spatial interpolation(2)Stage(2)Stanislaus J. Schymanski(2)Statistics of Extreme(2)Storm water management(2)Sy-en-ergy-tropy(2)Teaching Hydrology(2)Thermodynamics of hydrological systems(2)Thesis proposal (master level)(2)Time series(2)Tom Dunne(2)Tracers hydrology(2)Trento Hydrology(2)Trento Hydrology Awareness Days(2)Trento Hydrology Days(2)WHETGEO 2D(2)WMMA(2)WMMA2022(2)Water Budget of Italy(2)Water Platform(2)Water Supply(2)Winter School 2020(2)channels initiation(2)geophysical fluid dynamics(2)ggplot(2)graphics(2)groundwater(2)hydraulic modeling(2)iPoster(2)instruments(2)landslide(2)measurements(2)object oriented pdes solver programming(2)sqlite(2)transpiration vs sapflow(2)ABL(1)AGU 2025(1)AGU Editors' choice(1)AIIA Conferences(1)ANBI(1)ANN(1)Acceptance speech(1)Adamello(1)Adige-CARITRO(1)Aeresols(1)Ageel Bushara(1)Agnese Cavazzini(1)Agricolture(1)Alberto Montanari(1)Alberto Viglione(1)Aldo Fiori(1)Algorithms(1)Alpine hydrology(1)Alto Vicentino(1)Amazonas(1)Amilcare Porporato(1)Andrea Dalpaos(1)Andrea Zanzotto(1)Anke Hildebrandt(1)Anke HildebrandtMarcel Bechmann(1)Anna Cestari(1)Anna De Nardi(1)Anthropocene(1)Antonio Coppola(1)Arduino(1)Assessment/Reference letters(1)Atmospheric Physics(1)Audio(1)Augmented Reality(1)Authorships.(1)Axel Kleidon(1)BAR Talks(1)BESS(1)BIGBANG water BUDGET(1)Ball-Berry(1)Barbara Lastoria(1)Basilicata.(1)Basin Analysis description(1)Bayesian Analysis(1)Bayesian assimilation(1)BeeGIS(1)Biennial iEMMs Conferences(1)Big data(1)Bill Dietrich(1)BioMA(1)Biogeodynamics(1)Biological Complexity(1)Blal Adem Esmail(1)Boetti(1)Bondone(1)Bonifiche(1)Boussinenesq equation(1)Boussinesq equation(1)Brenta River(1)Buckingham theorem(1)Butterfly Effect(1)CAMELS datasets(1)CISLAM(1)CLM(1)CO2(1)CSA(1)CSDMS(1)Calibration-Validation alternatives(1)Canopies radiation(1)Canzone mononota(1)Capillarity(1)Caritro(1)Carriers in Hydrology(1)Caso's Conference(1)Causal Inference(1)Chandighar(1)ChatGPT 4(1)Climaware Video(1)Climb fo climate(1)Command Line(1)Complex Systems(1)Complexity(1)Computing(1)Contemporary Hydrology(1)Continental scale hydrologcial models(1)Contributing Areas(1)Controllability(1)Convegno Nazionale di Idraulica(1)Copula(1)Correlation between time series(1)Corso Idrologia Ambiente e Territorio Trento 2020(1)Cosma Shalizi(1)Crete(1)Cristina Rulli(1)Crocus(1)Cyberseminar(1)DART4MED(1)DE(1)DICAM-EXC(1)DICAM-Unitn(1)Daniele Andreis(1)Daniele Dalla Torre(1)Darcy-Buckingham law(1)Data Analysis(1)Data Cleaning(1)Data Format(1)Data Rich Hydrology(1)Dawdy(1)Deltares(1)Desertification(1)Design Patterns in Hydrology(1)DevOps(1)Difference between science paper and blog post(1)Diffusion(1)Digital Agriculture(1)Digital Watershed model(1)Digital field mapping(1)Dimensional Analysis(1)Disasters(1)Discussions on the web. Klicker(1)Disdrometers(1)Docker(1)Doing good slides(1)Domain Specific Languages(1)Donald Knuth(1)Doodge(1)Drones(1)Dunnian runoff(1)DynSys(1)Dynamic River Networks(1)Dynamic Systems(1)EEA Report(1)EGU2024(1)Early warning Systems(1)Earth Dynamics(1)Earth System Modelling Framework(1)Earth observations(1)EarthArXiv(1)Economy(1)Edward Wilson(1)Effects of sediments(1)Effects of slopes(1)Effects of soil cover(1)Elio e le Storie Tese(1)Energy Conservation(1)Energy dissipation(1)Enrico Borinato(1)Environmental Engineering(1)Extremes and Human Societies(1)Ezio Todini(1)FEEDBACKs(1)FIS(1)FORTRAN(1)FOSS4G(1)Fabio Ciervo(1)Failing(1)Fall 2017 research goals(1)Fausto Guzzetti(1)Fermi's problem(1)Fernando Nardi(1)Festival Meteorologia(1)Flood(1)Flood frequency(1)Floods directives(1)Floods representation(1)Florence(1)Fluid Mechanics(1)Freeze and Cherry(1)Frost Heave(1)Fulbright(1)Fundamental laws of Physics(1)Future Hydrology(1)GCPP(1)GEOET(1)GEOFRAME material for beginners(1)GEOframe Schools(1)GEOtop 4.0(1)GII Ph.D days 2015(1)GLEAM(1)GRAL(1)GRDC(1)GSS2021(1)GSS2022(1)GWS2022(1)Gabriele D'annunzio(1)Gaby Katul(1)Gaby Turek(1)Generative AI(1)Geophysical Fluid Mechancs(1)Gigi Proietti(1)Giovanna Dalpiaz(1)Github(1)Giulia Zuecco(1)Giuliano Di Baldassarre(1)Google Earth Engine(1)Gradle(1)Grants(1)Granular Flows(1)Green Week(1)Groundwater contribution to the hydrological cycle(1)Gustau Camps-Vall(1)H2O(1)HEC(1)HESS(1)Hack's law(1)Han Xujun(1)Hasselmann(1)Hazard Statistics(1)Henk Dijkstra(1)High Performance Computing(1)High School Teachers(1)Highly cited papers(1)History of Hydrology(1)Horton(1)Hydraulic conductivity(1)Hydro_gen(1)Hydroelectric production(1)Hydroinformatics(1)Hydrologic resources(1)Hydrological Cycle(1)Hydrological Extremes(1)Hydrological Model 2025(1)Hydrological Modelling 2024(1)Hydrological Models(1)Hydrological budget(1)Hydrological hazards(1)Hydrological modelling Classes(1)Hydrologists toolbox(1)Hydrology Class 2025(1)Hydrology Lab Class 2019(1)Hydrology at large(1)Hydrology days 2021(1)Hydrology lab 2021(1)I mille fiumi(1)IAHR(1)IAHS(1)IHS(1)IIT Mumbai(1)INIShell(1)INSPIRE(1)ISI(1)ISPRA(1)Ian Colin Prentice(1)Idra(1)Infiltration and evapotranspiration(1)Initial Conditions(1)Intakes(1)Invited COnferences(1)Irrigation(1)Irrigation Class(1)Italian group about flooding(1)Italo Calvino(1)Iverson(1)Jackkinfe(1)Java 8(1)Jean Yves Parlange(1)Jeremy Pal(1)JfH(1)Jim Kirchner(1)John Baez(1)John Mohd Wani(1)Jonathan Godt(1)Joss(1)Jupyter Notebooks(1)Katia Cugerone(1)Keith Beven(1)Kelly Gleason(1)Kleiber's law(1)Korbinian Breinl(1)KultuRisk EU Project(1)LAI(1)LES(1)LLMs(1)LSM(1)LaTeX(1)Lab classes(1)Land Surface Models(1)Landscape evolution models(1)Langbein(1)Leaves(1)Lesto(1)Life expectancy(1)Linear Algebra Solvers(1)Linsley(1)Lisbon(1)Lorenzo Dal Santo(1)Luigi Meneghello(1)Luna Leopold(1)LysGEO(1)Lysimeter GEO(1)MATTM(1)MIS(1)MODIS(1)MRI(1)Mac OS X(1)Macropores(1)Mailing list(1)Manabe(1)Management of Water Resources(1)Managing research workflow(1)Marco Feltrin(1)Mario Rigoni Stern(1)Markus Rechstein(1)Martyn Clark(1)Math3ma(1)Maurizio Giugno(1)Mean Field Theories(1)Meier(1)Melsen(1)Meshes(1)Michele Vettorazzi(1)Mike Kirkby(1)Miles Traer(1)Mille Fiumi(1)Mixed Reality(1)Models of infiltration(1)Models structure(1)Mondays Seminar(1)Morfeo Project(1)Morphogenesis(1)Multirisk(1)Muskel2(1)NIWA(1)Naming(1)Natural Hazards(1)Nature(1)Nature Based Solutions(1)Navier-Stokes eq(1)Nera River Basin(1)Netravati(1)NewAGE database(1)Nicola D'Alberton(1)Nicola Durighetto(1)Ning Lu(1)Nobel(1)Noise(1)Non-equilibrium thermodynamics(1)Notebooks(1)Null Hypothesis model(1)Numerical Ecology(1)Numerics(1)O'Donnel(1)OMS Runner Library(1)Olaf David(1)Open Science Framework(1)Open Source hardware(1)OpenFoam(1)OpenMI(1)PARFLOW(1)PRECISE(1)PRMS(1)PUB(1)Paolo Botti(1)Paolo Nasta(1)Paolo Ronco(1)Paolo Scardi(1)Paolo d'Odorico(1)Paper on Hydrology(1)Parameter estimation(1)Passive microwave remote sensing(1)Patterns Classification(1)Peatlands(1)Peter German(1)Pfafstetter(1)Ph. DTheses(1)Ph. Positions(1)Ph.D. interviews(1)Phase separation of Precipitations(1)Philip(1)Phloem(1)Photosynthesis(1)Physics Of Plants(1)Pico session(1)Pictures(1)Pierre Gentine(1)Planning Simulations(1)Po(1)Po Climatology(1)Polyacrylate(1)PosgreSQL(1)Potenza lectures(1)Power laws(1)Precipitation statistics(1)Present(1)Prestley-Taylor(1)Printer 3D(1)Professorship(1)Programmers(1)Propero(1)Pseudorandom numbers(1)Publish or Perish(1)Pysics of solid Earth(1)QGIS(1)QUADRI(1)Quantum Computing(1)Rabindronath Tagore(1)Random Fields(1)Random sampling(1)Real Books(1)Reflections(1)Regional Climatic projections(1)Remote sensing of precipitation(1)Renjin(1)Replicable research(1)Response time(1)Reviews(1)Revolution blog(1)Ricardo Mantilla(1)Riccardo Busti(1)Richard Rotunno(1)Rills(1)Risposta Idrologica(1)River Adige Model(1)Roberto Deidda(1)Rocco Panciera(1)Roe(1)Roger Moussa(1)Roots(1)S-Hydrograph(1)SCOPUS(1)SUNSET(1)SWE(1)SWRC(1)Salgado(1)Salomon Vimal(1)Saman Razavi(1)Samaria National Park(1)Sandro Marani(1)Scale and Hydrology(1)Sci-hub(1)Scott Peckham(1)Self Organizing Criticality(1)Seven(1)Shaopen Chen(1)Siccità 2022(1)Silvia Francescy(1)Simone Fatichi(1)Six-Years Plan(1)Small Catchments(1)Smash(1)Snow crystals(1)SnowSunMed(1)Snowfall Rainfall Separation(1)Snowpack(1)Socio-hydrology(1)Soil Depth(1)Soil Water(1)Solutes(1)SpatialLite(1)Springs(1)States in lumped hydrological models(1)Statistical hydrology(1)Statististics of extreme Rainfall(1)Stefano Benni(1)Stefano Tasin(1)Steve Jobs(1)Stochastic Hydrology(1)Stockholm Water Prize(1)Stomata(1)Storms(1)Stuart Lane(1)Summer School Cagliari(1)Superheroes and SCI-FI(1)Symmetries(1)Synergy(1)TAIEX(1)TRIGRS(1)Tai-Danae Bradley(1)TeX(1)Terence Tao(1)Tevere(1)The hydrological budget(1)The hydrological cycle(1)The man-month myth(1)The simplest hydrological model(1)Theis(1)Thorsten Wagener(1)Tim-Berners Lee(1)Titus Brown(1)Tom Over(1)Tommaso Anfodillo(1)Travis(1)Treating and filtering environmental signals(1)Trentino(1)True random numbers(1)UAV(1)UK(1)UNESCO(1)Ugrid(1)Use and perceptions of science(1)Using models(1)V Interconfronto su SWE(1)Val di Sole(1)Vegetation Detection(1)Vegetation Models(1)Veronika Eyring(1)Version Control Systems(1)Villaverla(1)Virtual Hydrologists(1)Virtual Reality(1)Viscosity(1)Vulnerability(1)WHETEGEO(1)WRF(1)WaMaWaDit(1)Warredoc(1)Water Borne Diseases(1)Water Conference(1)Water Cycle(1)Water Flood Directive(1)Water Framework Directive(1)Water Resources Management Method in Agricolture(1)Water Resources in Europe(1)Water Scarcity(1)Water Stress(1)Water Supply systems(1)Water SupplyCostruzion(1)Water and Time(1)Water day 2014(1)Water directive(1)Water mangement in developing countries(1)Water phases(1)Watershed planning(1)Watsup(1)Weather Forecast(1)Weather Generators(1)Web 2.0(1)Wells(1)White(1)Whiterspoon(1)Width Function(1)Wien Hydrology Symposium 2019(1)Wildfried Brutsaert(1)Wilting Pont(1)Winter Schools(1)Workshop Life FRANCA(1)World Bank(1)Writing(1)Wuletawu Abera(1)Xylem(1)YHS(1)Zenodo(1)boussinesq equation (groundwater)(1)catchment functioning(1)convection(1)de Groot-Mazur(1)de Saint-Venant equation(1)flooding(1)fractals(1)functional programming(1)georeferenced images(1)historic papers(1)leave-one-out(1)libraries(1)nettools(1)pdes(1)ph.D. AES(1)rainfall spatial interpolation(1)ranking research(1)simple and less simple solutions for infiltration problems(1)soil evaporation(1)spatialite(1)stomatal resistance(1)temperature spatial interpolation(1)topographic index(1)vEGU2021(1)
Contributors
About Hydrology
AboutHydrology
Anna De Nardi
Concetta D'Amato
John Mohd Wani
Followers
Subscribe To
Posts
Atom
Posts
Comments
Atom
Comments
AboutHydrology Blog is licensed under a Creative Commons Attribution 3.0 Unported License.
Other Related Blogs moving slower
Lin.ear th.inking Fast detection of narrow polygons with JTS
Three-Toed Sloth Course Announcement: "Statistical Principles of Generative AI" (Fall 2025)
Hydro-Logic Last Post
Santiago Beguería US corn yield and production forecast 2017
Jan Verkade | hydrology. flood forecasting. predictive uncertainty. Muskingum routing: theory, example and a brief venture into linear algebra
User-friendly Desktop Internet GIS Using Stamen Map Tiles in uDig
Nicholas Dawes Change of position from September 2014
Pattern and Process A presentation overview of ideas from the book
Geology in Art Geology, ceramics and art: aesthetics in 3D
GLOWASIS | A collaborative project aimed at pre-validation of a GMES Global Water Scarcity Information Service
Watermark theme. Powered by Blogger. |
17271 | https://pubmed.ncbi.nlm.nih.gov/28334428/ | Discrepancy between cTNM and pTNM staging of oral cavity cancers and its prognostic significance - PubMed
Clipboard, Search History, and several other advanced features are temporarily unavailable.
Skip to main page content
An official website of the United States government
Here's how you know
The .gov means it’s official.
Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.
The site is secure.
The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.
Log inShow account info
Close
Account
Logged in as:
username
Dashboard
Publications
Account settings
Log out
Access keysNCBI HomepageMyNCBI HomepageMain ContentMain Navigation
Search: Search
AdvancedClipboard
User Guide
Save Email
Send to
Clipboard
My Bibliography
Collections
Citation manager
Display options
Display options
Format
Save citation to file
Format:
Create file Cancel
Email citation
Email address has not been verified. Go to My NCBI account settings to confirm your email and then refresh this page.
To:
Subject:
Body:
Format:
[x] MeSH and other data
Send email Cancel
Add to Collections
Create a new collection
Add to an existing collection
Name your collection:
Name must be less than 100 characters
Choose a collection:
Unable to load your collection due to an error
Please try again
Add Cancel
Add to My Bibliography
My Bibliography
Unable to load your delegates due to an error
Please try again
Add Cancel
Your saved search
Name of saved search:
Search terms:
Test search terms
Would you like email updates of new search results? Saved Search Alert Radio Buttons
Yes
No
Email: (change)
Frequency:
Which day?
Which day?
Report format:
Send at most:
[x] Send even when there aren't any new results
Optional text in email:
Save Cancel
Create a file for external citation management software
Create file Cancel
Your RSS Feed
Name of RSS Feed:
Number of items displayed:
Create RSS Cancel
RSS Link Copy
Actions
Cite
Collections
Add to Collections
Create a new collection
Add to an existing collection
Name your collection:
Name must be less than 100 characters
Choose a collection:
Unable to load your collection due to an error
Please try again
Add Cancel
Permalink
Permalink
Copy
Display options
Display options
Format
Page navigation
Title & authors
Abstract
Similar articles
Cited by
MeSH terms
Related information
LinkOut - more resources
J Surg Oncol
Actions
Search in PubMed
Search in NLM Catalog
Add to Search
. 2017 Jun;115(8):1011-1018.
doi: 10.1002/jso.24606. Epub 2017 Mar 23.
Discrepancy between cTNM and pTNM staging of oral cavity cancers and its prognostic significance
Nayeon Choi1,Yangseop Noh1,Eun Kyu Lee1,Manki Chung1,Chung-Hwan Baek1,Kwan-Hyuck Baek2,Han-Sin Jeong1
Affiliations Expand
Affiliations
1 Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
2 Department of Molecular and Cellular Biology, Samsung Biomedical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea.
PMID: 28334428
DOI: 10.1002/jso.24606
Item in Clipboard
Discrepancy between cTNM and pTNM staging of oral cavity cancers and its prognostic significance
Nayeon Choi et al. J Surg Oncol.2017 Jun.
Show details
Display options
Display options
Format
J Surg Oncol
Actions
Search in PubMed
Search in NLM Catalog
Add to Search
. 2017 Jun;115(8):1011-1018.
doi: 10.1002/jso.24606. Epub 2017 Mar 23.
Authors
Nayeon Choi1,Yangseop Noh1,Eun Kyu Lee1,Manki Chung1,Chung-Hwan Baek1,Kwan-Hyuck Baek2,Han-Sin Jeong1
Affiliations
1 Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Republic of Korea.
2 Department of Molecular and Cellular Biology, Samsung Biomedical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Republic of Korea.
PMID: 28334428
DOI: 10.1002/jso.24606
Item in Clipboard
Cite
Display options
Display options
Format
Abstract
Introduction: Accurate tumor-node-metastasis(TNM) staging of oral cavity cancer(OCC) is very important in the management of this dismal disease. However, stage migration from cTNM to pTNM was found in a portion of OCC patients. The objective of this study was to determine the possible causes of discrepancy between cTNM and pTNM in OCC and the clinical impacts of stage migration.
Methods: Clinical and pathological data of 252 OCC patients were retrospectively reviewed and compared each other. Clinical staging was determined through the multidisciplinary evaluation of pre-treatment work-ups including PET/CT. In addition, we compared the up-staged cases with those in the no-change group with the same pTNM stages to identify the clinical impacts of such change.
Results: Clinical staging yielded overall 82.5% diagnostic accuracy in predicting pathological tumor status, and tumor extent was under-estimated in 9.5-13.5% of cases. The main causes of T up-staging were under-estimation of surface dimension (62.5%) and deep invasion to tongue extrinsic muscles (37.5%). N up-staging was due to occult single (57.6%) and multiple (42.4%) metastases. Surprisingly, TNM up-staging in our series did not have prognostic significance under the current management protocol.
Conclusion: Clinical under-estimation of pathological tumor extent occurred in approximately 13% of OCC, without clinical impacts on prognosis.
Keywords: mouth neoplasm; neoplasm staging; prognosis; surgery; treatment outcome.
© 2017 Wiley Periodicals, Inc.
PubMed Disclaimer
Similar articles
Clinical and histopathological staging in oral squamous cell carcinoma - Comparison of the prognostic significance.Kreppel M, Nazarli P, Grandoch A, Safi AF, Zirk M, Nickenig HJ, Scheer M, Rothamel D, Hellmich M, Zöller JE.Kreppel M, et al.Oral Oncol. 2016 Sep;60:68-73. doi: 10.1016/j.oraloncology.2016.07.004. Epub 2016 Jul 11.Oral Oncol. 2016.PMID: 27531875
Utility of FDG PET in patients with squamous cell carcinomas of the oral cavity.Kim SY, Roh JL, Kim JS, Ryu CH, Lee JH, Cho KJ, Choi SH, Nam SY.Kim SY, et al.Eur J Surg Oncol. 2008 Feb;34(2):208-15. doi: 10.1016/j.ejso.2007.03.015. Epub 2007 May 7.Eur J Surg Oncol. 2008.PMID: 17482789
Accuracy of 18F-FDG-PET/CT for staging of oral squamous cell carcinoma.Pentenero M, Cistaro A, Brusa M, Ferraris MM, Pezzuto C, Carnino R, Colombini E, Valentini MC, Giovanella L, Spriano G, Gandolfo S.Pentenero M, et al.Head Neck. 2008 Nov;30(11):1488-96. doi: 10.1002/hed.20906.Head Neck. 2008.PMID: 18767178
Evaluation and Staging of Oral Cancer.Mupparapu M, Shanti RM.Mupparapu M, et al.Dent Clin North Am. 2018 Jan;62(1):47-58. doi: 10.1016/j.cden.2017.08.003. Epub 2017 Oct 16.Dent Clin North Am. 2018.PMID: 29126493 Review.
Oral cavity cancer and its pre-treatment radiological evaluation: A pictorial overview.Lam V, O'Brien O, Amin O, Nigar E, Kumar M, Lingam RK.Lam V, et al.Eur J Radiol. 2024 Jul;176:111494. doi: 10.1016/j.ejrad.2024.111494. Epub 2024 May 4.Eur J Radiol. 2024.PMID: 38776803 Review.
See all similar articles
Cited by
Predictive factors and repetition numbers for intraoperative additional resection of initially involved soft tissue resection margins in oral squamous cell carcinoma: a retrospective study.Ooms M, Ponke L, Winnand P, Heitzer M, Peters F, Steiner T, Hölzle F, Modabber A.Ooms M, et al.World J Surg Oncol. 2023 Sep 27;21(1):308. doi: 10.1186/s12957-023-03192-6.World J Surg Oncol. 2023.PMID: 37752503 Free PMC article.
Comparison Between Clinical and Pathological Staging After Elective Neck Dissection in Head and Neck Cancer.Martins Sousa M, Guimarães J, Monteiro E.Martins Sousa M, et al.Cureus. 2023 Jun 24;15(6):e40881. doi: 10.7759/cureus.40881. eCollection 2023 Jun.Cureus. 2023.PMID: 37492826 Free PMC article.
Concordance between Clinical and Pathological T and N Stages in Polish Patients with Head and Neck Cancers.Chloupek A, Kania J, Jurkiewicz D.Chloupek A, et al.Diagnostics (Basel). 2023 Jun 28;13(13):2202. doi: 10.3390/diagnostics13132202.Diagnostics (Basel). 2023.PMID: 37443596 Free PMC article.
Comparison of Clinical and Pathological Staging in Patients with Head and Neck Cancer After Neck Dissection.Pinto JV, Sousa MM, Silveira H, Vales F, Moura CP.Pinto JV, et al.Int Arch Otorhinolaryngol. 2023 Sep 14;27(4):e571-e578. doi: 10.1055/s-0042-1758208. eCollection 2023 Oct.Int Arch Otorhinolaryngol. 2023.PMID: 37876699 Free PMC article.
Comparative Analysis of Clinical and Pathological Lymph Node Staging Data in Head and Neck Squamous Cell Carcinoma Patients Treated at the General Hospital Vienna.Eder-Czembirek C, Erlacher B, Thurnher D, Erovic BM, Selzer E, Formanek M.Eder-Czembirek C, et al.Radiol Oncol. 2018 May 11;52(2):173-180. doi: 10.2478/raon-2018-0020. eCollection 2018 Jun.Radiol Oncol. 2018.PMID: 30018521 Free PMC article.
MeSH terms
Adult
Actions
Search in PubMed
Search in MeSH
Add to Search
Aged
Actions
Search in PubMed
Search in MeSH
Add to Search
Carcinoma, Squamous Cell / diagnostic imaging
Actions
Search in PubMed
Search in MeSH
Add to Search
Carcinoma, Squamous Cell / mortality
Actions
Search in PubMed
Search in MeSH
Add to Search
Carcinoma, Squamous Cell / pathology
Actions
Search in PubMed
Search in MeSH
Add to Search
Female
Actions
Search in PubMed
Search in MeSH
Add to Search
Humans
Actions
Search in PubMed
Search in MeSH
Add to Search
Magnetic Resonance Imaging
Actions
Search in PubMed
Search in MeSH
Add to Search
Male
Actions
Search in PubMed
Search in MeSH
Add to Search
Margins of Excision
Actions
Search in PubMed
Search in MeSH
Add to Search
Middle Aged
Actions
Search in PubMed
Search in MeSH
Add to Search
Mouth Neoplasms / diagnostic imaging
Actions
Search in PubMed
Search in MeSH
Add to Search
Mouth Neoplasms / mortality
Actions
Search in PubMed
Search in MeSH
Add to Search
Mouth Neoplasms / pathology
Actions
Search in PubMed
Search in MeSH
Add to Search
Neoplasm Invasiveness
Actions
Search in PubMed
Search in MeSH
Add to Search
Neoplasm Staging
Actions
Search in PubMed
Search in MeSH
Add to Search
Positron Emission Tomography Computed Tomography
Actions
Search in PubMed
Search in MeSH
Add to Search
Predictive Value of Tests
Actions
Search in PubMed
Search in MeSH
Add to Search
Prognosis
Actions
Search in PubMed
Search in MeSH
Add to Search
Retrospective Studies
Actions
Search in PubMed
Search in MeSH
Add to Search
Survival Rate
Actions
Search in PubMed
Search in MeSH
Add to Search
Related information
MedGen
LinkOut - more resources
Other Literature Sources
scite Smart Citations
[x]
Cite
Copy Download .nbib.nbib
Format:
Send To
Clipboard
Email
Save
My Bibliography
Collections
Citation Manager
[x]
NCBI Literature Resources
MeSHPMCBookshelfDisclaimer
The PubMed wordmark and PubMed logo are registered trademarks of the U.S. Department of Health and Human Services (HHS). Unauthorized use of these marks is strictly prohibited.
Follow NCBI
Connect with NLM
National Library of Medicine
8600 Rockville Pike Bethesda, MD 20894
Web Policies
FOIA
HHS Vulnerability Disclosure
Help
Accessibility
Careers
NLM
NIH
HHS
USA.gov |
17272 | https://zaojv.com/3659144_8.html | 不管,都造句_用不管,都造句大全(5-300个句子) - 造句网(在线造句词典)
- [x] 直接进入词语页- [x] 模糊搜索(百度提供)手机版
首页
小学生造句
中学生造句
成语造句
关联词造句
造句笑话
名言佳句
短信句子
反馈
其它
优质造句
常用造句
英文造句
当前位置:首页>不管,都造句
不管,都造句
赞(126)踩(47)
句子数:234+68简单造句 创建:2016-05-09 最后修改:2019-10-16
造句网zaojv.com随机好图- [x] 不显示
【不管,都解释】:暂无。相似词:尽管,都不管,不管,总是都不都不如都不能都不会都不是都不得
二百十一、不管是谁,如果他认为最近发生在选区分界的风风雨雨都是不正当的,是受利益驱动的,那么他就应该凝视着大西洋,期待着在地图上雕刻出那些奇怪的国会选区的赤裸裸的党派目的。
二百十二、不管 是写文章或是说话我们 都 不要拖泥带水的。
二百十三、春节到了,不管 你给不给红包,我 都 会祝你马年红红火火;不管你送不送祝福,我都会祝你马年五福临门;不管你回不回信息,我都祝你马年欢欢喜喜。
二百十三、造 句 网是一部在线造句词典,其宗旨是让大家更快地造出更优质的句子.
二百十四、不管 谁有困难,他 都 会解囊相助。
二百十五、不管 你对多少异性失望,你 都 没有理由对爱情失望。因为爱情本身就是希望,永远是生命的一种希望。爱情是你自己的品质,是你自己的心魂,是你自己的处境,与别人无关。
二百十六、一个人,他心底真正喜欢的是一个人,但从来不对这个人好,反而对另外一个人极尽温柔,不管 有什么理由,你不觉得 都 太荒谬了?
二百十七、不管 学什么,都 不能邯郸学步,生搬硬套。
二百十八、这些人拗得很,不管 怎样劝说 都 不听,真是不可理喻。
二百十九、妹妹:哥,你是我见过最爱干净的人。小新:过奖了,你是怎么看出来的?妹妹:不管 什么事,你 都 推得一干二净。
二百二十、不管多大多老,不管 家人朋友怎幺催,都 不要随便对待婚姻,婚姻不是打牌,重新洗牌要付出巨大代价。
二百二十一、不管 生产什么产品,都 不能偷工减料,否则会影响销路。
二百二十二、她最怕别人在她背后说三道四,所以 不管 做什么事 都 很谨慎。
二百二十三、不管 你经历多痛的事情,到最后 都 会渐渐遗忘。因为,没有什么能敌得过时光。
二百二十四、不管 你走到哪里,到处的人 都 是春风满面、喜笑颜开。
二百二十五、我想我们并不经常快乐。生活从来对我都不是轻松的。但是我总以为,无论怎样受伤害都要去爱,无论怎样被背叛都要相信,不管 怎样被攻击 都 要真诚。
二百二十六、只要我们一心一意,积极努力,不管 什么工作 都 会做好。
二百二十七、很多事情当你再回忆时会发现其实没什么。所以,不管 当时你多么生气 都 告诉自己不必这样,你会发现事实真的没啥大不了。
二百二十八、不管 遇到什么样的困难,我 都 不会消沉下去的。
二百二十九、不管 大家怎样帮助他,都无济于事。
二百三十、我们 不管 做什么事 都 不能麻痹大意。
二百三十一、不管 海上风浪有多大,这座导航灯 都稳如泰山,忠于职守。
二百三十二、不管她如何涂脂抹粉,都掩饰不住她的生理缺点。
二百三十三、如果我们有了真才实学,不管 到哪里 都 能找到好工作。
二百三十四、不管前进的道路怎样迂回曲折,我们都要完成这项艰巨的任务。
更多相似词语造句:尽管,都造句不管,不管,总是造句都不造句都不如造句都不能造句都不会造句都不是造句都不得造句无论,都不要造句不管不顾造句不管不问造句不管,不管造句不管造句管不着造句管不住造句不管是造句管不了造句不管也造句不管人造句三不管造句撒手不管造句坐视不管造句放任不管造句不管怎样造句不管怎么说造句不管三七二十一造句不怕官,只怕管造句现官不如现管造句不塞不流,不止不行造句都都造句
共 234 条 每页显示 30 条 第 8/8 页 首页上一页尾页 转到 页
更多造句
野蚕造句(12)
拍板定案造句(6+1)
两相情愿造句(10+4)
两可造句(134+28)
是非之心造句(22)
先圣造句(33+1)
实繁有徒造句(6+1)
水帘洞造句(32)
乳虎造句(11)
松风水月造句(5)
顺水造句(254+19)
蚂蚁搬泰山造句(6)
沉滓泛起造句(21)
征粮造句(23)
扣住造句(61+2)
欲海造句(27)
翻覆造句(57+5)
她把造句(177+19)
一笔一画造句(36)
浔阳造句(26)
Copyright © 2014
英文造句请进 英文造句网
欢迎反馈 联系方式: 3029540182@qq.com |
17273 | https://www.youtube.com/watch?v=9CMN5lbDeFY | Khan Academy Tutorial: find angle measures using triangles
West Explains Best
4960 subscribers
62 likes
Description
5064 views
Posted: 18 Oct 2020
khanacademy #maths #angles #triangles #geometry
Find angles using the triangle sum theorem, vertical angles, and transversals with parallel lines
I hope you enjoyed the video! Please leave a comment if you'd like to see a topic covered or have any mathematics related question, and make sure to be good, be kind, be true, be nice, and be honest, and have a wonderful day!
0:00 Question 1
2:14 Question 2
4:12 "STAR" Question 3
7:34 Question 4
9:00 Question 5
10:07 Question 6
10:29 Question 7
10:39 Like and Subscribe!
@khanacademy
Please follow on Instagram! @westexplainsbest
Link:
Math-Drills Tutorial Playlist:
West Explains Best is a proud partner of Math-Drills!
See the intro video here:
Also check out my favorite video!
Math Teacher Reviews Classroom Scenes in Cinema:
4 comments
Transcript:
Question 1 hello everyone and welcome to a khan academy tutorial today we are finding angle measures using triangles and the following diagram h i is parallel to j k just kidding what is the measure of x angles are not necessarily drawn to scale okay so don't get your protractor out and put it on your screen that's what it's saying all right now this one's a lot not cop more complicated i want to say but it's definitely a step up from the previous video on triangle angle sum which you just subtracted from 180. that's something you do need to know in case you didn't watch that video well shame on you if you didn't watch it thanks for supporting me but all the angles in a triangle need to add up to 180 degrees so angle 1 plus angle 2 plus angle 3 always needs to add up to 180 degrees otherwise it's not a triangle okay so our goal is to find x well how can we find x if it's not part of the triangle if we're told that these two lines are parallel to each other then we know something about them you'll have to watch another video on what parallel lines and a transversal in this case this would be the transversal of interest because it's touching angle x with these two parallel lines transversal means it's crossing two or more lines and if it's uh if they're parallel then that means it creates an identical angle here so this will also be angle x now what do we do from here well we know that 56 i guess we'll make it green 56 plus 66 plus x has got to be equal to 180 degrees because those are three angles in a triangle that we said earlier here's a triangle here's an angle here's an angle those are all three angles of this yellow triangle right here okay so we just need to do some subtraction so we're going to subtract 56 and 66 from 180 i'm going to go ahead and show that so we do minus 56 minus 56 minus 66 minus 66 from 180 and we get for our answer x equals 58. where's my blue x equals 58 so that's our answer so just a little bit more complicated just one more step involved using the parallel lines Question 2 what is the measure of angle x uh again not drawn to scale but we're told that these lines are parallel and perpendicular we're told that this is perpendicular here i think that's ba where is it uh d e and k l okay that's one and f g are perpendicular and then d e and f g are parallel okay so that's important okay so now we should be able to find this now this is a problem almost identical problem on the test that i have for class so if you're watching in your my class make sure you understand how to do this problem but what do we do well first we gotta identify some vertical angles okay uh we don't have we see a triangles formed here hopefully you guys can see that we got a triangle boom boom boom boom we know that three angles need to add up to 180 okay but we only have one we know that one is 90 but we don't know the other ones so that's what we need to do okay we're like whoa mr west hold on we don't really care about the triangle because x is over here can't we just pinpoint x well i would agree with you but we don't know any of these angles here or here or here or here so it doesn't really help us so if we can target the triangle that's something we know and we can solve because we have vertical angles going on if that is x then this is also x because of vertical angles and if that is 59 then this is also 59 because of vertical angles now we know that 59 plus 90 because we've given that's perpendicular that was a clue plus x is going to be equal to 180 so now it's just as simple as 180 minus 90 minus 59 okay so we do minus 59 minus 90 minus 59 minus 90 and we get x equals 31. so go ahead and put 31 in there 31 check it and we're done okay oh star i remember "STAR" Question 3 lots of my students had problems with this one let's go ahead and see if we can figure out what is the measure of x angles are not necessarily drawn to scale again okay so x is over there we have this triangle here okay that's part of one of the triangles that is uh that you we can see uh there's also some other triangles that we could probably figure out as well this is a actually a tougher problem you're to have to use some creativity with this so let me kind of enlighten you as how to do that now if i draw if i drew that triangle okay this one let's just brainstorm here i have a missing angle there missing angle there and there's no way to know what those angles are okay there's no way because we don't have that angle we don't have that angle okay so we're kind of at a loss using that triangle but because it's a star and we have all these overlapping lines there's more triangles that we can create so we want to try to get at least okay we at least want to try to get a triangle where we know to the angles okay a known angle and a known angle and then one variable okay with that we can find x but if we only have one angle then it's impossible we're at an impasse and we can't solve this problem so we need to somehow get two known angles uh into a drawn triangle well the only two angles we know are 31 and 40. so is there a way we can draw a triangle using those two well they're both on this line so i'm going to draw this line so is there a triangle we can form okay i'm going to connect it this way oh i shouldn't go any further because here's this line okay there i have it i have 31 here i have 40 here so i have two of the three angles but now i'm missing one angle okay i'm missing this angle here does that help us well it should because we know x is right next to it and x will be supplementary to that green one we know that x plus that missing angle we'll call it y is going to be equal to 180 because they form a linear pair okay so this one's definitely a more complicated one it starts with finding two angles in a in a triangle drawing that triangle and then seeing that x is supplementary to the tr to the angle that we are going to find so we know that 40 this is the relationship 40 plus 31 the two known angles plus y is going to be equal to 180 because we know that there's 180 degrees in a triangle and those are the three angles plus so we get 180 minus 40 minus 31. so we get y equals i'm not showing the step hopefully you guys know what i'm doing 109 degrees so we know this is 109 and now we have the rest of this we have 109 degrees equals 180 and then plus x so now we can solve for x by doing uh 180 minus 109 and we get x equals 71 degrees so a little bit more complicated with this one stick with it okay and you'll and you'll be okay Question 4 all right again we're uh we're told that we have some parallel lines here bc and de it's already drawn uh that they're parallel as you can see i don't know if i noticed it earlier we're told that those are parallel okay and we're asked to find x okay so what do we do here well the first thing we should probably do is we want to find as many angles as possible we see where x is it's kind of in a tough spot right now we'll have to get to that in a little bit but let's just try to find as many angles as possible so let's find this missing angle right here whoops i want to do a different color let's find this angle right here we'll call that y so we know that 42 plus 103 plus y is going to be 180 because they're all three angles in that triangle so we have one 180 minus 42 minus 103. so we get that's equal to 35 so y equals 35. now why is that important well we have uh as we showed earlier we have parallel lines with the transversal and now we know that this angle is call it green i guess this is 35 degrees okay so we know oh actually we're done look at that we also know that this is a corresponding angle so that's going to be 35 so x is going to be equal to 35 because that's in the corresponding angle spot with this angle so that's one way to do that probably the easiest way to do this problem using corresponding angles okay maybe Question 5 i'll do like one or two more oh this one's great this is an exterior angle so one thing you could do is you could find this angle and then add up uh these three angles to equal 180 but there's a quicker way to do this using the exterior angle sum theorem and that is if you have an exterior angle okay that's extra because it's outside the triangle with two remote interior angles me remote meaning it's not the one next to uh the exterior angle we know that this relationship is true 114 the exterior angle is equal to the two remote angles added together so 56 plus x if you want to use the supplementary angle find x first and then or excuse me not x but this guy right here we'll call them y if you want to find that first and then use the triangle angle sum theorem that's fine but i just wanted to give you this formula because it makes it a little bit quicker and we get x equals 58 degrees okay so kind of a quick uh not short cut but it is shorter all right next question Question 6 okay this one's going to be fairly straightforward we have parallel lines this guy is going to be 67 okay so now we have 180 minus 67 minus 40 what is that 46 is going to be equal to what do you know 67 is that right 18 minus 67 minus 46 is 67 okay so we have 67 is going to be equal to x Question 7 all right last one okay this is another vertical angles one we're told that it's perpendicular so we need to do this guy and we know this guy is 62 we Like and Subscribe! know this guy is going to be x because it's vertical so now we have 180 minus 90 minus 62 and we get our answer which is 28. thanks for sticking with me on this one kind of a little bit longer one well if you follow these steps you'll have success just like we had here thank you so much for watching |
17274 | https://www.youtube.com/watch?v=TzL4zqy0HE0 | AP Chemistry Unit 3.13 Practice Problems - Beer-Lambert Law
Christine McKenna
1230 subscribers
10 likes
Description
642 views
Posted: 17 Oct 2024
Explain the amount of light absorbed by a solution of molecules or ions in relationship to the concentration, path length, and molar absorptivity.
The Beer-Lambert law relates the absorption of light by a solution to three variables according to the equation:
EQN: A = ebc.
The molar absorptivity, e, describes how intensely a chemical species absorbs light of a specific wavelength. The path length, b, and concentration, c, are proportional to the number of light-absorbing particles in the light path.
In most experiments the path length and wavelength of light are held constant. In such cases, the absorbance is proportional only to the concentration of absorbing molecules or ions. The spectrophotometer is typically set to the wavelength of maximum absorbance (optimum wavelength) for the species being analyzed to ensure the maximum sensitivity of measurement.
Find question packet here -
Transcript:
3.13 practice problems the diagrams above show an UltraViolet absorption spectra for two compounds diagram one is the absorption Spectrum for pure acetone a solvent used when preparing the absorbent uh needs to be measured to determine its concentration when a student reads the absorption of the solution at 280 nanm the results are too high which of the following is most likely responsible for the error in the absorbed uh the measure absorbance so uh these two things should match but they don't so we are looking for something that is either uh wrong with the uh acetone it is not pure acetone like it should be or there is something wrong with the measurement system like we forgot to calibrate uh too little solute to the acetone we are looking for this to be uh acetone tone so that's not going to match student Rich uh rinsed the cuvette with the solution before filling the cuvette with the solution uh no that would help it would make sure it was clean uh the student forgot to calibrate the spectr photometer first by using a cuvette containing only acetone uh that's one of the possible errors uh we forgot to calibrate the machine and the wavelength setting was accidentally changed from 280 n to 3 nanm before the student uh made the measurement um that is not going to going to change the fact that we still have an absorbance that is too high here uh so option Choice c um is going to be our best our best choice A student uses visible spectr photometry spec spectr photometry to determine the concentration of cobalt 2 chloride in a sample solution the first student prepares a set of cobalt 2 chloride Solutions of known concentration then the student uses a spectr photometer to determine the absorbance at each of the standard Solutions at a wavelength of 510 ners and constructs a standard curve finally the student determines the absorption uh the absorbance of the sample of unknown concentration the student made the the standard curve above which of the following most likely caused the error Point uh the student plotted for the 05 molar Cobalt 2 uh chloride so we are not quite as high of an absorbance as we should be so that means that I'm looking at something where I have uh less Cobalt 2 than expected so option Choice a says that there was distilled water in the cuvette when the uh student put the standard solution in this would lower the overall amount of cobalt that was present so that looks good there were a few drops of the 0.1 molar uh Cobalt standard solution the cuvette that would raise uh the absorbance not lower it the student used a cuvette with a longer path length and the cuvette used uh no we're using the same cuvettes did not run a blank between the 05 uh cuvette and the one before it uh that would not decrease the amount that we see there so um it being contaminated by extra water thereby lowering the overall concentration is going to be our best choice so answer Choice a uh we have um iron 3 plus uh a pottassium sulfur and cyanide compound going to the iron sulfur cyanide compound and potassium to determine the moles of iron 3 in 100 ml sample of an unknown solution excess potassium sulfur cyanide uh was is added to convert all of the iron three into dark uh species of iron uh three sulfur cyanide as represented by the equation above the absorbance of the iron sulfur cyanide compound at different concentrations is shown in the graph below if the absorbance of uh the mixture is2 at uh 453 nanm uh how many moles of the iron three are present in the 100 ml sample so we are looking at 2 and uh so we are uh each of these is going to be one so this is 4 10 to the Nega uh Fifth and it is uh per 100 moles uh so we are going to uh since marity is moles per liter and that's what this is currently in we need to move the decimal over one more time so it is going to be 4 10 -6 uh moles of the iron 3 using a spectr photometer a student measures the absorbance of four Solutions of copper to sulfate at a given wave length the collected data is given in the table opposite which of the following is most likely is the most likely explanation for the discrepant data in trial number four uh so we are increasing our uh concentration of uh copper here and then we're looking at our um uh absorbance here so um looking at uh these things I am doubling doubling and then half and so I should uh look at approximately 1.5 uh times higher than this so this is lower than 1.5 times higher than that and so uh that would tell me that I have a lower concentration than I was expecting um because my absorbance isn't as high as I was expecting so option and choice [Music] d would be the best representation of that a student prepared five Solutions of copper 2 sulfate with different concentrations and then filled five cuetes uh each containing one of the solutions the cuetes were placed in the spectr photometer set to the appropriate wavelength for the maximum absor absorbance the absorbance for each of the solutions is measured and recorded the student plotted the absorbance process the concentration is shown in the figure above which of the following is the most likely explanation for the variance of the data point of the 6 mol uh copper 2 sulfate solution so we can see that the 6 molar copper sulfate solution is higher than we were expecting it's a higher absorbance so we were expecting it's almost the same as the 08 so this uh I would uh I would suspect that we have more of the uh copper 2 sulfate than we were initially anticipating so um option Choice a had some water droplets inside that would lower the uh concentration but we actually um are seeing a higher concentration than initially expected so that doesn't make any sense option Choice B the cuvette into which the 6 M solution was placed at a slightly more filled uh than the other cuetes set aren going to affect our absorbance uh the wavelength setting was accidentally moved away from that of the maximum absorption absorbance uh then we would have a lower absorbance not less or sorry not more and then we have option Choice D the cuvette used for the 6 molar solution had not been wiped clean before putting it into the spectr photometer so uh this could mean that it is contaminated and we have a higher concentration of the copper 2 sulfate than we should have had and therefore that is a good reason for um our data to be off okay uh each student in a class is placed uh uh in a class placed point a 2 G sample of a mixture of copper and aluminum in a beaker and place that Beaker in a fum Hood the students slowly pour 15 milliliters of a 15.8 molar uh nitric acid solution into their beakers the reaction between the copper and the mixture of the nitric acid is represented in the equation above students observed that a brown gas was released from the beakers and that the solution turned blue indicating a formation of copper two the solutions were then diluted with distilled water to known volumes uh to determine the number of moles of copper in the sample of the mixture the students measured the absorbance of known concentration of copper to nitrate using a spectr photometer a cuvette filled uh with some of the solution produced from the sample of the mixture was also tested the data recorded by one student is shown in the table opposite on the basis of the data provided which of the following is a possible error that the student made so we have um each of our uh concentrations they are doubling and our absorbance is um not quite doubling but um something going on there uh we definitely have a uh an outlier here we have a drop in uh absorbance for the 0.1 and so that is going to be uh a a a big thing it actually looks like these two should be reverse first and then we would have um a pretty good doubling going on with each of the increasing concentrations and therefore uh it would better help us uh predict the concentration of our unknown it sample so we have uh that the uh copper 2 nitrate of the sample of the mixure was not diluted properly um maybe but that's not going to explain our uh strange are here the spectr photometer was calibrated with tap water instead of distilled water um that is going to be a uh an error that will continue throughout the entire experiment and so would uh we would expect that to be the same throughout and not provide this strange uh reversing of data the student labeled the cuetes incorrectly reversing the labels of two solutions of known concentration now this is something that seems pretty reasonable since these two are uh seem to be inversed uh and aren't following the approximate doubling of absorbance that we had been seeing or expected to see so that seems pretty reasonable and then finally the spectr photometer was originally set with an inappropriate wavelength causing the absorbance to vary unpredictably if we have uh the same wavelength throughout uh the absorbance will uh still go through um it just won't be the maximum absorbance however the reversing of these two labels is um a pretty reasonable expectation since these two absorbances seem to be uh reversed and are not following the approximate doubling of absorptions that we would have expected |
17275 | https://www.khanacademy.org/math/ap-statistics/random-variables-ap/geometric-random-variable/v/cumulative-geometric-probability-less-than-a-value | Cumulative geometric probability (less than a value) (video) | Khan Academy
Skip to main content
If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains .kastatic.org and .kasandbox.org are unblocked.
Explore
Browse By Standards
Explore Khanmigo
Math: Pre-K - 8th grade
Math: High school & college
Math: Multiple grades
Math: Illustrative Math-aligned
Math: Eureka Math-aligned
Math: Get ready courses
Test prep
Science
Economics
Reading & language arts
Computing
Life skills
Social studies
Partner courses
Khan for educators
Select a category to view its courses
Search
AI for Teachers FreeDonateLog inSign up
Search for courses, skills, and videos
Help us do more
We'll get right to the point: we're asking you to help support Khan Academy. We're a nonprofit that relies on support from people like you. If everyone reading this gives $10 monthly, Khan Academy can continue to thrive for years. Please help keep Khan Academy free, for anyone, anywhere forever.
Select gift frequency
One time
Recurring
Monthly
Yearly
Select amount
$10
$20
$30
$40
Other
Give now
By donating, you agree to our terms of service and privacy policy.
Skip to lesson content
AP®︎/College Statistics
Course: AP®︎/College Statistics>Unit 8
Lesson 7: The geometric distribution
Geometric random variables introduction
Binomial vs. geometric random variables
Geometric distribution mean and standard deviation
Geometric distributions
Probability for a geometric random variable
Geometric probability
Cumulative geometric probability (greater than a value)
Cumulative geometric probability (less than a value)
TI-84 geometpdf and geometcdf functions
Cumulative geometric probability
Proof of expected value of geometric random variable
Math>
AP®︎/College Statistics>
Random variables and probability distributions>
The geometric distribution
© 2025 Khan Academy
Terms of usePrivacy PolicyCookie NoticeAccessibility Statement
Cumulative geometric probability (less than a value)
AP.STATS: UNC‑3 (EU), UNC‑3.E (LO), UNC‑3.E.2 (EK)
Google Classroom
Microsoft Teams
About About this video Transcript
Probability for a geometric random variable being less than a certain value.
Skip to end of discussions
Questions Tips & Thanks
Want to join the conversation?
Log in
Sort by:
Top Voted
NB 6 years ago Posted 6 years ago. Direct link to NB's post “Is that right? The way I ...” more Is that right?
The way I understand is: what's the probability that she receives at least 1 telephone order in the 1st 4 orders. The complement of which is no telephone orders in the 1st 4 orders, so 1 - p(no telephone order)
Answer Button navigates to signup page •1 comment Comment on NB's post “Is that right? The way I ...”
(5 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
jollyneuron 7 years ago Posted 7 years ago. Direct link to jollyneuron's post “Hi, it seems to me that h...” more Hi, it seems to me that here we are calculating the probability that C = 4. Is that really the same as calculating the probability that C<5?
If our condition is C<5, then C could take on 1,2,3 or 4, all in equal probabilities I assume. Are we taking that into account?
Answer Button navigates to signup page •Comment Button navigates to signup page
(4 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
jollyneuron 7 years ago Posted 7 years ago. Direct link to jollyneuron's post “actually I think I got it...” more actually I think I got it. Because we are actually adding up all probabilities lower than C=5. So we are adding P(C=1) + P(C=2) + P(C=3) + P(C=4). So we are taking all those probabilities into account.
1 comment Comment on jollyneuron's post “actually I think I got it...”
(6 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Jill a year ago Posted a year ago. Direct link to Jill's post “How is P(no phone orders ...” more How is P(no phone orders in first four) equivalent to P(C<5)? It makes more sense for it to be equivalent to P(C=5) to me, why isn't that the case?
Answer Button navigates to signup page •1 comment Comment on Jill's post “How is P(no phone orders ...”
(3 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
joshua a year ago Posted a year ago. Direct link to joshua's post “First thing, they aren't ...” more First thing, they aren't equivalent, which the video didn't claim they are?
Second thing, no phone orders in the first four isn't P(C = 5).
P(C = 5) = no phone orders in the first four and has a phone order at C = 5
Comment Button navigates to signup page
(2 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Sean Kelly 8 months ago Posted 8 months ago. Direct link to Sean Kelly's post “Was wondering if we shoul...” more Was wondering if we shouldn't calculate P(C = 5) first, which I think would be (.9)(.9)(.9)(.9)(.1). The subtract that from 1 to get P(C<5)?
Answer Button navigates to signup page •1 comment Comment on Sean Kelly's post “Was wondering if we shoul...”
(2 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Loua 2 months ago Posted 2 months ago. Direct link to Loua's post “That’s a great question. ...” more That’s a great question. Here’s how I understand it:
If you calculate this as 1-P(C=5), the result becomes a P(When C is not 5), so this represents all the possibilities except 5 P(C<5, and C>5).
In conclusion, this cannot represent P(C<5).
Comment Button navigates to signup page
(1 vote)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Chen Weng 5 months ago Posted 5 months ago. Direct link to Chen Weng's post “Can't you calculate P(C>5...” more Can't you calculate P(C>5)? And then 1-P(C>5)
Answer Button navigates to signup page •Comment Button navigates to signup page
(2 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Loua 2 months ago Posted 2 months ago. Direct link to Loua's post “You can't calculate P(C>5...” more You can't calculate P(C>5) because then the chances will go to infinity!
Hope this helps :))
Comment Button navigates to signup page
(1 vote)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Video transcript
[Instructor] Lilyana runs a cake decorating business, for which 10% of her orders come over the telephone. Let C be the number of cake orders Lilyana receives in a month until she first gets an order over the telephone. Assume the method of placing each cake order is independent. So C, if we assume a few things, is a classic geometric random variable. What tells us that? Well, a giveaway is that we're gonna keep doing these independent trials, where the probability of success is constant, and there's a clear success. A telephone order in this case is a success. The probability is 10% of it happening. And we're gonna keep doing it until we get a success. So classic geometric random variable. Now they ask us, find the probability, the probability, that it takes fewer than five orders for Lilyana to get her first telephone order of the month. So it's really the probability that C is less than five. So like always, pause this video and have a go at it. And even if you struggle with it, that's better. Your brain will be more primed for the actual solution that we can go through together. Alright. (chuckles) So I'm assuming you've had a go at it. So there's a couple of ways to approach it. You could say, well, look, this is just gonna be the probability that C is equal to one, plus the probability that C is equal to two, plus the probability that C is equal to three, plus the probability that C is equal to four. and we can calculate it this way. What is the probability that C equals one? Well, it's the probability that her very first order is a telephone order. And so we'll have 0.1. What's the probability that C equals two? Well, it's the probability that her first order is not a telephone order. So it's one minus 10%. There's a 90% chance it's not a telephone order, and that her second order is a telephone order. What about the probability C equals three? Well, her first two orders would not be telephone orders, and her third order would be one. And then C equals four? Well, her first three orders would not be telephone orders, and her fourth one would. And we could get a calculator maybe, and add all of these things up, and we would actually get the answer. But you're probably wondering, well, this is kind of hairy to type into a calculator. Maybe there is an easier way to tackle this. And indeed, there is. So think about it. The probability that C is less than five, that's the same thing as one minus the probability that we don't have a telephone order in the first four. One minus the probability that no telephone order in first four orders. So what's this? Well, 'cause this is just saying, what's the probability we do have an order in the first four? So it's the same thing as one minus the probability that we don't have an order in the first four. And this is pretty straightforward to calculate. So this is going to be equal to one minus, and let me do this in another color so we know what I'm referring to. So what's the probability that we have no telephone orders in the first four orders? Well, the probability on a given order that you don't have a telephone order is 0.9. And then if that has to be true for the first four, well, it's gonna be 0.9 times 0.9 times 09 times 0.9, or 0.9 to the fourth power. So this a lot easier to calculate, so let's do that. Let's get a calculator out. Alright, so let me just take .9 to the fourth power is equal to, and then let me subtract that from one. So let me make that a negative, and then let me add one to it. And we get, there you go, 0.3439. So this is equal to 0.3439. And we're done. That's the probability that it takes fewer than five orders for her to get her first telephone order of the month.
Creative Commons Attribution/Non-Commercial/Share-AlikeVideo on YouTube
Up next: video
Use of cookies
Cookies are small files placed on your device that collect information when you use Khan Academy. Strictly necessary cookies are used to make our site work and are required. Other types of cookies are used to improve your experience, to analyze how Khan Academy is used, and to market our service. You can allow or disallow these other cookies by checking or unchecking the boxes below. You can learn more in our cookie policy
Accept All Cookies Strictly Necessary Only
Cookies Settings
Privacy Preference Center
When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer.
More information
Allow All
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
Certain cookies and other technologies are essential in order to enable our Service to provide the features you have requested, such as making it possible for you to access our product and information related to your account. For example, each time you log into our Service, a Strictly Necessary Cookie authenticates that it is you logging in and allows you to use the Service without having to re-enter your password when you visit a new page or new unit during your browsing session.
Functional Cookies
[x] Functional Cookies
These cookies provide you with a more tailored experience and allow you to make certain selections on our Service. For example, these cookies store information such as your preferred language and website preferences.
Targeting Cookies
[x] Targeting Cookies
These cookies are used on a limited basis, only on pages directed to adults (teachers, donors, or parents). We use these cookies to inform our own digital marketing and help us connect with people who are interested in our Service and our mission. We do not use cookies to serve third party ads on our Service.
Performance Cookies
[x] Performance Cookies
These cookies and other technologies allow us to understand how you interact with our Service (e.g., how often you use our Service, where you are accessing the Service from and the content that you’re interacting with). Analytic cookies enable us to support and improve how our Service operates. For example, we use Google Analytics cookies to help us measure traffic and usage trends for the Service, and to understand more about the demographics of our users. We also may use web beacons to gauge the effectiveness of certain communications and the effectiveness of our marketing campaigns via HTML emails.
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Reject All Confirm My Choices |
17276 | https://www.youtube.com/watch?v=Ae7OlQsn6ug | Exponential Regression Word Problems
Ms. Smith's Math Tutorials
11200 subscribers
60 likes
Description
6070 views
Posted: 5 Aug 2020
Ms. Smith's Math Tutorials
You Try Answer:
1) y=9.88(1.3)^x
2) $25,885.56 Note that the answer could go up to $25,605.59 depending on how you round your equation. The answer listed for 2) is based on the rounding shown in 1).
1 comments
Transcript:
[Music] welcome to miss smith's math tutorials i'm miss smith in this video we're going to be talking about exponential regression word problems the increasing number of frogs living in a forest is shown in the table below let 1989 be year zero and what that means by year zero anytime you see that is they're saying that that is the first year the data was collected so they want us in this problem to write an exponential equation and then based on past data predict how many frogs will be in the forest in year 2026 so let's look at this data real quick and it told us up front that we were increasing the number of frogs for each year that passes by it tells us how many frogs were in this forest and as we can see our numbers are pretty rapidly increasing overall the great thing about exponential regression word problems is that the calculator does most of the work for us we just need to be able to input our data into the calculator now one thing that we want to note before we do that is that we've got an x and y table here our year is going to be our x and our number of frogs is going to be our y we can't put a year value into our data set we need to translate to the calculator what those years actually represent like we said in the problem let 1989 be year zero because that was our first year data was collected we're not going to put 1989 in the calculator we're gonna put zero and so then we have to think okay if 1989 is your zero what would 1993 be and there's an easy way to figure that out we can always just find the difference between the two years so we can take our larger year 1993 and subtract our smaller year 1989 and that will give us four so that tells us that four years have passed and we can do that same process for each of these years to speed up our time though i'm going to go ahead and let you know what each of these values are so if 1989 was our starting year then 1994 would be year five five years have passed from 1989 1998 would be year nine 2003 would be year 14 2005 would be year 16 and 2006 would be year 17. so now that we have our year number we can go ahead and put our data into the calculator what we want to do is use this stat button so we want to hit that and then we want to input this data into our list and so we need to edit our list first so we're just going to hit enter and now you'll notice we've got list one list two and we even have a list three but we won't be using that we only have two lists for this problem so we're going to go ahead and put our year values into our x column which will be our list one so all you have to do is type in zero first and then if you hit enter it jumps up into the list four 5 9 14 16 and 17 and then we'll right arrow over to the list two and we'll add our number of frogs for each year 57 104 372 698 920 2034 and 4081 so now that we have our data in the calculator we want to hit stat again and this time we're going to right arrow over to the calc tab and then you can either scroll all the way down to where you see exponential regression it's gonna be number zero right there so once we get on this page we can also just hit zero and we wanna just hit enter through this information and it's going to bring up our equation so let's interpret this information they've asked us to write an exponential equation which is our information here let's quickly know what an exponential equation is made up of it looks like y equals a times b to the power of x and they gave us that up here a times b to the power of x they're giving us the form that it should go in and a is going to be our starting or initial value b is going to be our rate of growth or decay and x is going to be our time here they've said a is 66.18 if we round to the nearest hundredth and it's okay to round this value just keep in mind the less you round the more exact your answer will be but for our purpose it's okay to round to the nearest hundredth our a value is 66.18 and then our b value in our parentheses is 1.25 and then if we're just writing an equation we just leave x as x so this is our exponential equation for this data you may have noticed that i mentioned that a is our starting point or our initial value and the a for our exponential equation is 66.18 which is different from the actual starting value of our data right our starting value of our data was 57 and that's totally fine this equation is aligning all of these data points as best it can into an exponential form so it makes sense that the starting value may not match our actual starting value our second instruction says based on past data predict how many frogs will be in the forest in year 2026 so they're basically saying if we added another data point down here then what would be the predicted number of frogs and a key word there is predicted this equation is just taking past data to help us predict future data we have no idea what will actually happen in real life to this frog population but models like this help us predict what could happen based on past data essentially they're giving us x and they're wanting us to find y now i can't put 2026 into my equation we need to know what is the year number that would go along with 2026. so to figure that out we can bring our calculator out and say 2026 minus our starting year of 1989 and we get 37 so 2026 would be year 37 and we want to know what is the predicted number of frogs for year 37 so all we have to do is bring our equation down and instead of x remember x is our time so we're just going to put in 37 for x and now this we can just type in our calculator 66.18 times 1.25 to the power of 37 and we get a huge number now notice we get a decimal and we can't have .087 of a frog so we're just gonna round to the nearest whole number so that would be 254 thousand nine hundred and sixteen frogs in year 2026. let's look at one more example together so this one says the decreasing value of an item that was purchased new in 2008 is listed below so looking at our table we've got our years as they go by and then the value of the item and as it mentioned in the problem the value is decreasing over time we're asked to write an exponential equation and also based on our past data predict when the item will be worth 1.92 cents so we need to put our data into the calculator but before we do that we need to figure out our year numbers to be able to list them in the calculator now as it mentioned in the problem this item was purchased new in 2008 so 2008 is going to be our year zero now what i like about this problem is you'll notice that each of these years are in chronological order so they've just collected this data information the value of the item every single year so 2009 would be year one 2010 would be year two 2011 would be year three 2012 would be your four 2013 would be your five and 2014 would be year six now that we have our data laid out let's bring out our calculator and have it write an exponential equation for us so remember we're going to go to stat and then we need to edit our list so we'll hit enter now you'll notice i still have my data in here from the last problem we just did i'm going to show you a quick way to clear your data out one option is just to totally clear your calculator and to do that you would hit second plus seven one two and when you do that that'll reset your calculator but a faster way would be to clear the lists individually so if you scroll up to l2 and then hit the clear button and down and notice that clears out list two you can right arrow over to l1 we want to highlight l1 clear and then down and it'll clear that too now let's put in our new data so we've got zero one two three four five six that is our x column because remember year is our x value for our list two we're going to put in our y values 40 35 and 50 cents now you'll notice i put a zero in 50 cents and when i hit enter the zero is not going to show up 0.5 and 0.50 are the same thing 29.61 21.20 15.73 13.24 and 10.99 we can hit stat again right arrow over to our calc tab and then we can just hit zero for exponential regression we're going to hit enter through all this and there is our equation so we've got our y equals now in the place of a it says to put 42 point and again let's just round to the nearest hundredth 42.81 in our b value in parentheses we're going to put 0.79 and then for x we just keep it x for our basic exponential equation now they've said based on past data predict when the item will be worth 1.92 cents so in the previous problem they gave us the year they gave us x and asked us to find y in this problem they've given us y 1.92 cents and they've asked us to find x because they say when predict when the item will be worth 1.92 cents so we need to figure out what year our value would be one dollar and 92 cents so here's how we're gonna do that in the place of y in our equation we're gonna write one dollar and ninety-two cents one point nine two and then we'll have our equals and then the rest of our equation and then we'll just have x because x is what we're trying to figure out that's what we're trying to solve for our time our year value there's a really quick easy way to solve this on the calculator what you're going to do and i'm going to write the steps here is in our y 1 and the way we get to y1 is this y equals button up here so we're going to hit that and in our y1 equals we're going to put 1.92 one dollar and 92 cents then we're going to arrow down and in our y 2 equals we're going to put this part of the equation so 42.81 times .79 to the power of x and so what we're doing by creating this setup is we know that if we set these two values equal to each other and we graph them they're going to intersect at our answer so let me show you what i mean if i go to graph this that would be our first line that's our horizontal line at 1.92 cents and then this line coming down that's our exponential equation which we have listed here and we want to know where do these two lines intersect i can't see it here it looks like though it's probably going to happen somewhere over here so i'm going to need to adjust the window range on my graph and the way that we do that is we hit window now before i actually hit it let's think about what we want we just want the graph a little more this way so we can see what's happening over here we don't need it to go up more or down more or to the left more just a little more this way the way we tell the window that is we go down to x maximum so this side of our graph is going to be our x maximum how high does x go right now it's set to 10 that's just our standard window view let's just double it let's put 20 and we'll hit graph now we can see where these two lines cross each other so we want to know what is that intersecting point the way we figure that out is we hit second trace and then we're gonna either go down to number five or hit number five intersect now this little spaceship i always think of it as a spaceship shows up now we want that spaceship to land right over here on that intersection so we're going to right arrow over until we get to that crossing point it's right about there and then hit enter three times so we see that this intersection happens where y is 1.92 remember that's our dollar value so y is at 1.92 cents and our x value is let's round to the nearest hundredth again 13.17 so our y value was 13.17 now remember we're talking about years so 13.17 is our year number so that would be our number in our parentheses and because all our other numbers were whole numbers and we're talking about years let's go ahead and just call this 13. so we have to think what year would year 13 be remember 2008 was year zero so all we have to do is say 2008 plus 13 years and that will leave us at 2 000 21. when they asked us based on past data predict when the item will be worth 1.92 cents well it will be worth that in year 2021 okay here's one for you guys to try i will post the answers in the video description below this has been miss miss math tutorials |
17277 | https://www.spex1.com/post/understanding-yield-strength-and-tensile-strength | Understanding Yield Strength and Tensile Strength
January 26, 2025
Understanding the properties of materials is crucial when it comes to designing and manufacturing parts in a machine shop. Two key properties that engineers and manufacturers must consider are yield strength and tensile strength. Both properties provide valuable insights into a material’s behavior under stress and its ability to withstand deformation or failure.
Yield strength and tensile strength are essential properties that engineers and manufacturers must consider when designing and producing parts. Yield strength provides information on the maximum stress a material can endure before experiencing permanent deformation, while tensile strength indicates a material’s resistance to fracture or failure under tensile loads. Understanding these properties allows for better part design, optimized fabrication processes, and the selection of suitable materials for various applications.
Yield strength
Yield strength is a measure of the stress at which a material begins to undergo plastic deformation. In other words, it is the point at which a material transitions from elastic deformation, where it can return to its original shape, to plastic deformation, where it remains permanently deformed. Yield strength is typically expressed in units of pressure, such as Pascals (Pa) or pounds per square inch (psi).
When a material is subjected to increasing stress, it initially undergoes elastic deformation. As the stress increases beyond the yield strength, the material experiences the yield point phenomenon. This is where the material starts to deform plastically, and any further increase in stress results in permanent deformation. The yield point can be observed on a stress-strain curve, which shows the relationship between the applied stress and the resulting strain on a material.
Yield strength is a crucial factor in part design and manufacturing, as it indicates the maximum stress a material can endure before experiencing permanent deformation. This information is essential for engineers when designing parts that will be subjected to stress, as it helps ensure the parts will maintain their intended shape and function throughout their lifespan. Additionally, understanding a material’s yield strength allows manufacturers to optimize fabrication processes and avoid potential issues related to overloading or excessive deformation.
Tensile strength
Tensile strength, also known as ultimate tensile strength (UTS), is the maximum stress a material can withstand while being stretched or pulled before it breaks or fractures. Like yield strength, tensile strength is expressed in units of pressure, such as Pascals (Pa) or pounds per square inch (psi).
The ultimate tensile strength of a material can be found on a stress-strain curve, which plots the applied stress against the resulting strain. The peak of this curve represents the material’s tensile strength, and the corresponding strain at this point is called the fracture strain. Once a material reaches its ultimate tensile strength, it can no longer withstand the applied stress and will fail, either by breaking or fracturing.
Tensile strength is a vital consideration in part design and manufacturing, as it provides insights into a material’s ability to resist fracture or failure under tensile loads. Engineers need to understand a material’s tensile strength to design parts that can withstand the expected forces during their service life without breaking. This knowledge is also crucial for manufacturers, as it enables them to select the appropriate materials for specific applications and helps prevent potential part failures due to insufficient tensile strength.
Yield and Tensile strengths of common alloys
Note: The values provided in these tables are approximate and vary depending on factors such as material processing and heat treatment. Always consult material datasheets or manufacturer guidelines for precise values when designing and ordering parts.
Aluminum Alloys
Alloys | Yield Strength | Tensile Strength
Aluminum 2011 | 43,000 psi [296 MPa] | 55,000 psi [379 MPa]
Aluminum 2024 | 47,000 psi [324 MPa] | 68,000 psi [469 MPa]
Aluminum 6061 | 40,000 psi [276 MPa] | 45,000 psi [310 MPa]
Aluminum 6262 | 39,000 psi [269 MPa] | 42,000 psi [290 MPa]
Aluminum 7075 | 73,000 psi [503 MPa] | 83,000 psi [572 MPa]
Stainless Steel Alloys
AlloysYield StrengthTensile Strength
SS 303 | 35,000 psi [241 MPa] | 90,000 psi [621 MPa]
SS 304 | 31,200 psi [215 MPa] | 90,000 psi [621 MPa]
SS 304L | 25,000 psi [172 MPa] | 70,000 psi [483 MPa]
SS 310 | 30,000 psi [207 MPa] | 80,000 psi [552 MPa]
SS 316 | 30,000 psi [207 MPa] | 90,000 psi [621 MPa]
SS 316L | 25,000 psi [172 MPa] | 70,000 psi [483 MPa]
SS 404 | 131,980 psi [910 MPa] | 162,400 psi [1,120 MPa]
SS 410 | 45,000 psi [310 MPa] | 100,000 psi [689 MPa]
SS 416 | 40,000 psi [276 MPa] | 95,000 psi [655 MPa]
SS 430 | 40,000 psi [276 MPa] | 75,000 psi [517 MPa]
SS 440C | 60,000 psi [414 Mpa] | 100,000 psi [689 MPa]
Steel Alloys
AlloysYield StrengthTensile Strength
Steel 1008 | 41,300 psi [285 MPa] | 49,300 psi [340 MPa]
Steel 1018 | 53,700 psi [370 MPa] | 63,800 psi [440 MPa]
Steel 1045 | 45,000 psi [310 MPa] | 90,000 psi [621 MPa]
Steel 1137 | 49,890 psi [344 MPa] | 84,700 psi [584 MPa]
Steel 1215 | 60,200 psi [415 MPa] | 78,300 psi [540 MPa]
Steel 12L14 | 60,000 psi [414 MPa] | 78,000 psi [538 MPa]
Steel 4130 | 63,100 psi [435 MPa] | 97,200 psi [670 MPa]
Steel 4140 | 60,200 psi [415 MPa] | 95,000 psi [655 MPa]
Steel 8620 | 55,800 psi [385 MPa] | 76,900 psi [530 MPa]
Steel A2 | 36,000 psi [248 MPa] | 90,000 psi [621 MPa]
Steel M2 | 80,000 psi [552 MPa] | 120,000 psi [827 MPa]
Titanium Alloys
AlloysYield StrengthTensile Strength
Ti Grade 5 | 128,000 psi [882 MPa] | 138,000 psi [952 MPa]
Ti Grade 23 | 110,000 psi [759 MPa] | 125,000 psi [862 MPa]
Copper Alloys
AlloysYield StrengthTensile Strength
Copper C360 | 20,000 psi [138 MPa] | 58,000 psi [400 MPa]
Copper C353 | 16,000 psi [110 MPa] | 54,000 psi [372 MPa]
Copper C230 | 10,000 psi [69 MPa] | 42,000 psi [290 MPa]
Copper C464 | 32,000 psi [221 MPa] | 70,000 psi [483 MPa]
Copper C443 | 15,000 psi [103 MPa] | 52,000 psi [358 MPa]
Brass Alloys
AlloysYield StrengthTensile Strength
Brass 353 | 16,000 psi [110 MPa] | 54,000 psi [372 MPa]
Brass 360 | 20,000 psi [138 MPa] | 58,000 psi [400 MPa]
Plastics
AlloysYield StrengthTensile Strength
Polyethylene (PE) | 1,800 - 4,800 psi [12-33 MPa] | 2,500 - 6,000 psi [17-41 MPa]
Polypropylene (PP) | 3,500 - 5,500 psi [24-38 MPa] | 4,500 - 7,000 psi [31-48 MPa]
Polyvinyl chloride (PVC) | 2,500 - 6,000 psi [17-41 MPa] | 3,500 - 7,000 psi [24-48 MPa]
Polymethyl methacrylate (PMMA) | 6,000 - 9,000 psi [41-62 MPa] | 7,000 - 11,000 psi [48-76 MPa]
Polycarbonate (PC) | 9,000 - 12,000 psi [62-83 MPa] | 9,500 - 14,000 psi [65-97 MPa]
Nylon | 3,500 - 8,000 psi [24-55 MPa] | 5,000 - 12,000 psi [34-83 MPa]
Micarta | N/A | 10,000 - 20,000 psi [69-138 MPa]
Understanding Yield Strength and Tensile Strength
In order to fully understand the differences between yield strength and tensile strength, you’ll need to understand how materials behave and change under stress. This section explains the stress-strain curve, various types of material deformation, the distinction between ductile and brittle materials, and factors influencing the yield and tensile strength.
Stress-Strain Curve
The stress-strain curve is a graphical representation of a material’s response to applied stress. It plots stress, or the force applied per unit area, against strain, the change in the material’s dimensions relative to its original size. This curve is crucial in understanding how materials behave under different stress conditions and provides invaluable insights into their mechanical properties.
The stress-strain curve typically consists of several distinct regions. Initially, the curve follows a linear path, which represents the material’s elastic behavior. In this region, the material returns to its original shape once the stress is removed. As the stress increases, the material reaches its yield strength, the point at which plastic deformation begins.
Elastic and Plastic Deformation
Deformation can be classified into two types: elastic and plastic.
Elastic deformation occurs when a material returns to its original shape after the removal of an applied stress. This behavior is governed by Hooke’s Law, which states that the strain is proportional to the stress applied within the elastic limit.
Plastic deformation, on the other hand, refers to the permanent change in a material’s shape after the applied stress exceeds the yield strength. In this region, the stress-strain curve becomes nonlinear, and the material undergoes irreversible changes, such as dislocation movements and slip. The ability of a material to undergo significant plastic deformation before breaking is a measure of its ductility.
Ductile vs Brittle Materials
Materials can be broadly classified as ductile or brittle based on their ability to withstand deformation. Ductile materials, such as metals and certain polymers, can undergo large amounts of plastic deformation before fracture. These materials typically exhibit a high degree of toughness and energy absorption capacity, making them ideal for various applications, including automotive components and structural elements.
Brittle materials, on the other hand, are characterized by their inability to undergo significant plastic deformation. These materials, which include ceramics, glasses, and some polymers, often have high strength but low toughness. They tend to fracture suddenly and without warning, making them less suitable for applications where impact or sudden stress is expected.
Factors Influencing Yield and Tensile Strength
Various factors influence a material’s yield and tensile strength, which are critical properties for designing and manufacturing reliable parts. Some of these factors include:
Material Composition: The elements that make up an alloy greatly influence its mechanical properties, including yield and tensile strength. For example, adding carbon to iron creates a carbon-steel alloy, which has greater strength and toughness than pure iron.
Heat Treatment and Processing: The way a material is processed can have a significant impact on its mechanical properties. Heat treatments, such as annealing, quenching, and tempering, can alter a material’s strength by changing its microstructure. Similarly, processes like cold working or hot working can influence a material’s mechanical behavior by introducing dislocations, affecting grain size, or inducing residual stresses.
Grain Size and Orientation: The size and orientation of a material’s grains or crystallites play a crucial role in determining its mechanical properties. Finer grains typically result in higher strength and toughness due to the increased number of grain boundaries, which impede dislocation movement. The orientation of grains can also influence a material’s response to stress, as some orientations are more favorable for slip and dislocation movement than others.
Choosing the right material for your parts
Selecting the best material for a specific application is a critical aspect of the design and manufacturing process. A well-informed decision requires finding the right balance between yield strength, tensile strength, and other material properties, considering the specific requirements of the application, and accounting for factors such as cost, availability, machinability, and environmental conditions.
Balancing Yield Strength, Tensile Strength, and Other Properties
The key to choosing the right material lies in understanding the relationship between yield strength, tensile strength, and other material properties such as ductility, toughness, hardness, and fatigue resistance. These properties often interact with one another, and an increase in one property may lead to a decrease in another. For example, increasing a material’s strength may reduce its ductility.
When selecting a material, it’s essential to prioritize the properties that are most important for the specific application. For instance, a component subjected to high impact forces may require a material with high toughness and ductility, while a part exposed to cyclic loading might need a material with excellent fatigue resistance. By understanding these trade-offs, you can make informed decisions that optimize the performance of your components.
Material Selection Based on Application Requirements
The primary step in material selection is identifying the application’s specific requirements. This may involve considering factors such as load-bearing capacity, operating temperature, corrosion resistance, and exposure to chemicals or other environmental conditions. For example, a component used in a high-temperature environment may require a material with excellent thermal stability and resistance to creep, while a part exposed to a corrosive environment like saltwater or fluid handling equipment need a material with exceptional corrosion resistance.
Additionally, the material’s yield strength and tensile strength should be matched to the anticipated loads and stresses the component will experience during its service life. By assessing the application requirements, you can narrow down your material options and make a more informed selection.
Factors to Consider
Several factors should be taken into account when choosing a material for a specific application. Some of these factors include:
Cost: Budget constraints often play a significant role in material selection. Although more expensive materials may offer superior performance, it is essential to weigh these benefits against the increased cost to determine if the additional expense is justified.
Availability: The availability of a material can also influence the selection process. Some materials may have long lead times or be difficult to source, which can impact production schedules and project timelines.
Machinability: The ease with which a material can be machined, formed, or processed is another crucial consideration. Some materials are more challenging to work with or require specialized equipment, leading to increased production costs and manufacturing complexities.
Environmental Factors: The environmental conditions where the part will be used should also be considered when selecting a material. Temperature, humidity, exposure to chemicals, and UV radiation can significantly impact a material’s performance and longevity. Choosing a material with suitable resistance to these environmental factors can help ensure the long-term reliability of your components.
Spex is an ISO 9001:2015 certified company. Organizations use this standard to demonstrate the ability to consistently provide products and services that meet customer and regulatory requirements.
We incorporate world-class excellence in every step of the process, in our ongoing efforts to ensure your success.
Reach out to our team to get a quote for CNC machined parts for your next project.
Do you have a question or need a quote? Fill out this form to contect our team! |
17278 | https://www.maxhealthcare.in/blogs/neutrophils-count-and-functions | Find a Doctor
Blogs
My Reports
Investors
Research
CSR
Contact Us
WhatsApp Us (24/7)
WhatsApp Us (24/7)
+91 926 888 0303 (24/7)
+91 92895 20303 (24/7)
Delhi/NCR:
Mohali:
Dehradun:
Bathinda:
Mumbai:
Nagpur:
Lucknow:
BRAIN ATTACK:
01140554055
7710777107
01357193333
01645212039
02261347777
07127120000
05226780001
9910204023
To Book an Appointment
Call Us+91 926 888 0303
Neutrophils: Description, Count and Functions
By
Medical Expert Team
Jan 23 , 2024
| 4 min read
1
Your Clap has been added.
Thanks for your consideration
Share
Share Link has been copied to the clipboard.
Here is the link
What are Neutrophils?
Neutrophils are a variety of white blood cells (leukocytes), which serve as the immune system's initial defence mechanism. White blood cells are categorised into three types: granulocytes, lymphocytes, and monocytes. Among granulocytes, neutrophils coexist with eosinophils and basophils. Collectively, these white blood cells safeguard the body against infections and injuries.
Functions of Neutrophils
Neutrophils play several crucial roles in the body's immune response, functioning as a key component of the innate immune system. Here are their primary functions:
Phagocytosis: Neutrophils are primarily responsible for phagocytosis, a process where they engulf and digest pathogens like bacteria and fungi. This is their most well-known function.
Antimicrobial substances: They release a variety of substances that have antimicrobial properties. These include enzymes like lysozyme and myeloperoxidase, which can break down the cell walls of bacteria.
Formation of Neutrophil Extracellular Traps (NETs): In response to certain pathogens, neutrophils can undergo a unique form of cell death called NETosis. This process releases DNA and antimicrobial proteins to form neutrophil extracellular traps (NETs), which can trap and kill pathogens.
Chemical signalling: Neutrophils produce chemical signals (cytokines and chemokines) that help to modulate the immune response, including attracting other immune cells to the site of infection or inflammation.
Regulation of inflammation: They play a role in controlling the inflammation process, either by promoting or inhibiting inflammation, depending on the signals they receive.
Removing debris: After phagocytosis or tissue injury, neutrophils help in cleaning up the debris, including dead cells and other waste, facilitating the healing process.
Interacting with adaptive immune system: Neutrophils also interact with the adaptive immune system (comprising lymphocytes), aiding in shaping a more targeted immune response.
What are Common Conditions that Affect Neutrophils?
The quantity of neutrophils in the human body must be maintained within a certain range for normal bodily functions. Deviations from this range can lead to conditions stemming from an imbalance in neutrophil levels.
These conditions include:
Neutropenia: This condition arises when the neutrophil count falls too low, leading to swelling and recurrent infections. Neutropenia can be triggered by various factors, including cancer treatments, autoimmune diseases, or infections.
Neutrophilia: Also known as neutrophilic leukocytosis, this occurs when there is an excessive number of neutrophils. Often a response to a bacterial infection, neutrophilia is characterised by the premature release of immature neutrophils from the bone marrow into the bloodstream as the body attempts to combat the infection.
What Causes a High Neutrophil Count?
It is often a normal response for the human body to generate an increased number of neutrophils to aid in healing, particularly in situations like bone fractures or severe burns. However, if neutrophil levels do not return to normal after an injury has healed, it can present a health risk. Factors that can lead to an elevation in neutrophil counts include:
Infections
Inflammation
Physical injuries
Specific forms of leukaemia
Adverse reactions to certain medications
What Causes a Low Neutrophil Count?
Neutropenia occurs when the body eliminates neutrophils at a rate faster than the bone marrow can replenish them. Factors leading to a reduced neutrophil count encompass:
Infections, such as hepatitis, tuberculosis, sepsis, and Lyme disease
Chemotherapy treatment
Deficiencies in certain vitamins, like vitamin B12, folate, and copper
Autoimmune diseases, for instance, Crohn’s disease, lupus, and rheumatoid arthritis
What is a Normal Range for a Neutrophil Count?
An Absolute Neutrophil Count (ANC) determines the quantity of neutrophils present in a blood sample. In a healthy adult, the typical range is between 2,500 and 7,000 neutrophils per microlitre of blood. A count exceeding 7,000 or falling below 2,500 indicates a potential risk for a neutrophil-related condition.
Tests to Check the Health of Neutrophils
Tests that evaluate the health of neutrophils include:
Complete blood count (CBC): This test analyses the cells in a blood sample, providing insight into the cell count within the body. A CBC is useful for diagnosing various medical conditions and serves as an indicator of overall health.
Absolute neutrophil count (ANC): This test measures the number of neutrophil cells in a blood sample.
Bone marrow biopsy: This procedure assesses cell quantity and the location of their growth within the body. It involves the removal and examination of a small sample of bone marrow. Since cell production originates in the bone marrow, a biopsy helps determine whether the body is producing an adequate amount of cells and if any specific conditions are present.
Treatments for Neutrophil Conditions
Whether individuals are dealing with low or high neutrophil counts, various treatments are available to address the imbalance. Let's explore some common approaches:
Taking antibiotics
If an individual's neutrophil count is low, their healthcare provider may recommend antibiotic therapy. This helps prevent and treat infections, bolstering the body's defence against harmful microorganisms.
Adjusting medication
Certain medications can cause neutropenia. In such cases, healthcare providers might suggest modifying or discontinuing the use of these drugs. This adjustment aims to restore a balanced neutrophil count.
Treating underlying medical conditions
Addressing the root cause is essential for managing neutrophil counts. Treating underlying medical conditions, such as infections or chronic diseases, can positively impact neutrophil levels.
Corticosteroid therapy for autoimmune disorders
Individuals with autoimmune disorders may experience high neutrophil counts. In such instances, corticosteroids may be prescribed to regulate the immune response and reduce inflammation.
White blood cell transfusion
In certain emergencies, a white blood cell transfusion might be recommended to quickly boost the overall white blood cell count, including neutrophils.
Bone marrow transplant
In severe cases of neutropenia (low neutrophil count), a bone marrow transplant may be considered. This procedure involves replacing damaged or insufficient marrow with healthy stem cells, promoting the production of normal blood cells, including neutrophils.
Conclusion
Neutrophils are vital components of the immune system, playing key roles in infection defence and inflammation management. Their count, measured through various tests, is crucial for diagnosing and monitoring health conditions. To learn more about neutrophils and ensure your immune health, visit Max Healthcare. Their expert team offers comprehensive care and state-of-the-art diagnostics to keep your immune system functioning optimally. Connect with Max Healthcare today for personalised advice and advanced medical solutions.
Written and Verified by:
Medical Expert Team
Related Blogs
Want to know more about Leukemia Treatment Options?
Medical Expert Team
Jun 07 , 2016
| 2
min
read
Feeling tired? Check if you have Anemia
Medical Expert Team
Jul 14 , 2016
| 1
min
read
Know More on Leukemia (Infographic)
Medical Expert Team
Aug 12 , 2016
| 1
min
read
Blogs by Doctor
Understanding Multiple Myeloma Cancer
Medical Expert Team
Aug 07 , 2023
| 4
min
read
Diffuse Large B-cell Lymphoma: Causes, Diagnosis, and Treatment
Medical Expert Team
Jun 14 , 2024
| 13
min
read
Leukocytosis: Causes, Symptoms, and Treatment Options
Medical Expert Team
Oct 10 , 2024
| 11
min
read
Most read Blogs
Simple Home Remedies For Loose Motions
Dr. Manoj Ranka
In
Internal Medicine
Feb 22 , 2018
|
11
min read
Indian Diet Plan In Pregnancy
Dr. Neera Aggarwal
In
Obstetrics And Gynaecology
,
Nutrition And Dietetics
Nov 07 , 2020
|
11
min read
What to Eat in Loose Motion - Do's and Dont's
Dr. Ashok Grover
In
Internal Medicine
Mar 21 , 2018
|
11
min read
Related Blogs
Want to know more about Leukemia Treatment Options?
Medical Expert Team
Jun 07 , 2016
| 2
min
read
Feeling tired? Check if you have Anemia
Medical Expert Team
Jul 14 , 2016
| 1
min
read
Know More on Leukemia (Infographic)
Medical Expert Team
Aug 12 , 2016
| 1
min
read
Blogs by Doctor
Understanding Multiple Myeloma Cancer
Medical Expert Team
Aug 07 , 2023
| 4
min
read
Diffuse Large B-cell Lymphoma: Causes, Diagnosis, and Treatment
Medical Expert Team
Jun 14 , 2024
| 13
min
read
Leukocytosis: Causes, Symptoms, and Treatment Options
Medical Expert Team
Oct 10 , 2024
| 11
min
read
Most read Blogs
Simple Home Remedies For Loose Motions
Dr. Manoj Ranka
In
Internal Medicine
Feb 22 , 2018
|
11
min read
Indian Diet Plan In Pregnancy
Dr. Neera Aggarwal
In
Obstetrics And Gynaecology
,
Nutrition And Dietetics
Nov 07 , 2020
|
11
min read
What to Eat in Loose Motion - Do's and Dont's
Dr. Ashok Grover
In
Internal Medicine
Mar 21 , 2018
|
11
min read
Other Blogs
Piles and fissure
What is pet ct scan
Healing through ayurveda
Lung cancer causes
Hair fall in monsoon
7 ways to keep your heart healthy
New born care
Gynecomastia symptoms
Tips in loose motions
What is leukemia
FAQs on Autism
Complications of Kidney Transplant
Specialist in Location
Best Bone Marrow Transplant Doctors in India
Best Bone Marrow Transplant Doctors in Ghaziabad
Best Bone Marrow Transplant Doctors in Bathinda
Best Bone Marrow Transplant Doctors in Panchsheel Park
Best Bone Marrow Transplant Doctors in Patparganj
Best Bone Marrow Transplant Doctors in Noida
Best Bone Marrow Transplant Doctors in Lajpat Nagar
Best Bone Marrow Transplant Doctors in Shalimar Bagh
Best Bone Marrow Transplant Doctors in Gurgaon
Best Bone Marrow Transplant Doctors in Mohali
Best Bone Marrow Transplant Doctors in Saket
Best Bone Marrow Transplant Doctors in Delhi
Best Bone Marrow Transplant Doctor in Nagpur
Best Bone Marrow Transplant Doctor in Lucknow
Best Bone Marrow Transplant Doctors in Dwarka
Best Bone Marrow Transplant Doctor in Pusa Road
Best Bone Marrow Transplant Doctor in Vile Parle
Best Bone Marrow Transplant Doctors in Sector 128 Noida
Best Bone Marrow Transplant Doctors in Sector 19 Noida
Best Haematologists in Delhi
Find a doctor
Book an appointment
Treatments
Emergency 24x7
Air Ambulance
Max Lab
Max @ Home
Health checkup plans
Immigration Services
Technology
Doctor Videos
Patient Testimonials
Health Calculators
Hospitals in Delhi / NCR
Hospitals in Punjab
Hospitals in Uttarakhand
Hospitals in Maharashtra
Hospitals in Uttar Pradesh
Request an Appointment
Get an Opinion
Cancer Care / Oncology
Cardiac Sciences
Neuro Sciences
Liver Transplant And Biliary Sciences
Orthopaedics
Nephrology
Kidney Transplant
Bone Marrow Transplant
Bariatric/Weight Loss Surgery
Minimal Access / laparoscopic Surgery
CAR T-Cell Therapy
Chemotherapy
LVAD
Robotic Heart Surgery
Kidney Transplant
The Da Vinci Xi Robotic System
Lung Transplant
Bone Marrow Transplant (BMT)
HIPEC
Valvular Heart Surgery
Coronary Artery Bypass Grafting (CABG)
Knee Replacement Surgery
ECMO
Bariatric Surgery
Biopsies / FNAC And Catheter Drainages
Cochlear Implant
More...
About us
Leadership
Directors
Investor
Feedback
Contact Us
CSR Initiatives
News & Events
Awards & Certificates
Announcements
Sitemap
Registration for Online Dispute Redressal (ODR)
International Exams
MECP Newsletter
Life Saving Training |
17279 | https://magoosh.com/gmat/gmat-quant-practice-problems-with-percents/ | GMAT Quant: Practice Problems with Percents - Magoosh Blog — GMAT® Exam
GMAT®Blog
Magoosh is the leader in GMAT prep, helping millions of students study since 2010. Our affordable self-study plans include 800+ practice questions, full-length practice tests, and a 70 point score improvement guarantee.
Prep with Magoosh
Top Resources:Free GMAT Practice Test400+ GMAT Math FlashcardsDaily Study PlansGMAT Score CalculatorTop MBA Programs
GMAT®Blog
Top Resources:
Free GMAT Practice Test
400+ GMAT Math Flashcards
Daily Study Plans
GMAT Score Calculator
Top MBA Programs
Magoosh is the leader in GMAT prep having helped millions of students study since 2010.
Try GMAT Prep with Magoosh
GMAT Quant: Practice Problems with Percents
By
Mike MᶜGarry
— Last updated on
September 30, 2013
in
GMAT Word Problems
First, here are eight practice problems exploring typical percent word problems on the GMAT.
The original price of a suit is $200. The price increased 30%, and after this increase, the store published a 30% off coupon for a one-day sale. Given that the consumers who used the coupon on sale day were getting 30% off the increased price, how much did these consumers pay for the suit?
(A) $182
(B) $191
(C) $200
(D) $209
(E) $219
The profits of QRS company rose 10% from March to April, then dropped 20% from April to May, then rose 50% from May to June. What was the percent increase for the whole quarter, from March to June?
(A) 15%
(B) 32%
(C) 40%
(D) 62%
(E) 80%
Bert and Rebecca were looking at the price of a condominium. The price of the condominium was 80% more than Bert had in savings, and separately, the same price was also 20% more than Rebecca had in savings. What is the ratio of what Bert has in savings to what Rebecca has in savings.
(A) 1:4
(B) 4:1
(C) 2:3
(D) 3:2
(E) 3:4
Company KW is being sold, and both Company A and Company B were considering the purchase. The price of Company KW is 50% more than Company A has in assets, and this same price is also 100% more than Company B has in assets. If Companies A and B were to merge and combine their assets, the price of Company KW would be approximately what percent of these combined assets?
(A) 66%
(B) 75%
(C) 86%
(D) 116%
(E) 150%
There are 300 seniors at Morse High School, and 40% of them have cars. Of the remaining grades (freshmen, sophomores, and juniors), only 10% of them have cars. If 15% of all the students at Morse have cars, how many students are in those other three lower grades?
(A) 600
(B) 900
(C) 1200
(D) 1350
(E) 1500
A scientific research study examined a large number of young foxes, that is, foxes between 1 year and 2 years old. The study found that 80% of the young foxes caught a rabbit at least once, and 60% caught a songbird at least once If 10% of the young foxes never caught either a rabbit or a songbird, then what percentage of young foxes were successful in catching at least one rabbit and at least one songbird?
(A) 40%
(B) 50%
(C) 60%
(D) 80%
(E) 90%
A book store bought copies of a new book by a popular author, in anticipation of robust sales. The store bought 400 copies from their supplier, each copy at wholesale price W. The store sold the first 150 copies in the first week at 80% more than W, and then over the next month, sold a 100 more at 20% more than W. Finally, to clear shelf space, the store sold the remaining copies to a bargain retailer at 40% less than W. What was the bookstore’s net percent profit or loss on the entire lot of 400 books?
(A) 30% loss
(B) 10% loss
(C) 10% profit
(D) 20% profit
(E) 60% profit
At a certain symphonic concert, tickets for the orchestra level were $50 and tickets for the balcony level were $30. These two ticket types were the only source of revenue for this concert. If R% of the revenue for the concert was from the sale of balcony tickets, and B% of the tickets sold were balcony tickets, then which of the following expresses B in terms of R?
Solutions will be given at the end of this article.
If you’re looking for more GMAT math practice, check out our FREE GMAT practice test with accurate score prediction and subject-by-subject performance breakdown. You can practice a single section like Math, Data Insights, or Verbal for 45 minutes or take the whole assessment for 2 hours and 15 minutes. Give it a try!
Thoughts on percents
First of all, here are some previous blogs on percents:
(1) Fundamentals of percents, including using multipliers for percent increase and decreases.
(2) Solution of a percent with variables problem, from the OG
(3) Another solution of percent with variables problem, from the OG
(4) Scale factors & percent change
(5) Solution & Mixing problems
(6) Ratios & Proportions
Especially in those first three, you can find some useful hints for the foregoing problems.
The BIG percent mistake
Folks who are rusty at math and/or returning to it after a long absence may not appreciate something known well to anyone who writes math problems: certain mistakes are as predictable as clockwork. Anyone who writes problems knows — in such-and-such kind of problem, the vast majority of test-takers will make such-and-such very predictable mistake. Of course, on any multiple choice standardized test, that predictable mistake consistently will be among the answer choices: it’s as if the test-maker sets up a huge butterfly net, and the unwitting test-takers run into this trap like lemmings running to the sea. I know this sound cruel, but the purpose of a good standardized test question is to distinguish those who know their stuff from those who don’t, and highly predictable errors are great ways to draw such a distinction very clearly.
Obviously, as a test-taker, it is very much to your advantage to learn to spot these very predicable mistake patterns. Just avoiding these will put you well ahead of the pack. In many different articles on this blog, I discuss these predictable mistakes.
The big predictable mistake with percents has a few variations on the same pattern.
(a) if something increases by P%, then decreases by Q%, the net change is not (P – Q)%
(b) if something increases by P%, then increases by Q%, that’s not an increase of (P + Q)%
(c) [special case] if something increase by P%, then decrease by P%, we do not return to the original value.
(d) [more general case] if something increase by P%, then decreases by Q%, then increase by R%, the total change in percent is not (P – Q + R)%
What all of these have in common is a deep root error — you cannot figure out the total percent change of a series of individual percent changes by adding or subtracting the individual percents. That’s the mistake. People see “start at $100, 40% increase, followed by a 40% decrease“, and scores upon score of people, as predictably as the sunrise, will believe and insist without a shadow of a doubt that the end result must be $100. This large crowd will be in unison and they will be wrong.
What’s the correct thing to do? The correct way to treat any series of percent changes is to express each change as a multiplier, and then multiply all the multipliers. That first blog above talks about creating the multipliers for percent increases & decreases. Once we have that, we can figure out the real change. For example, the multiplier for a 40% increase would be: 1 + 0.40 = 1.4, and the multiplier for a 40% decrease would be 1 – 0.40 = 0.6; now, 1.40.6 = = 0.84, so the final price in fact would be $84, which is a 16% decrease.
Summary
If the foregoing discussion gave you any insights, you may want to re-read the practice problems before reading the solution. Here’s another practice problem from the Magoosh Product:
If you have any observations or questions, please let us know in the comments section.
Practice problem explanations
1) Given the foregoing discussion, it may be obvious now the trap-mistake answer is (C). Even if you can’t remember the correct thing to do, at the very least, learn to spot the trap!
The multiplier for a 30% increases is 1 + 0.30 = 1.3, and the multiplier for a 30% decrease is 1 – 0.30 = 0.70, so the combined change is 1.30.7 = 0.91, 91% percent of the original, or a 9% decreases. Now, multiply $2000.91 = $182. Answer = (A).
2) Given the foregoing discussion, it may be obvious now the trap-mistake answer is (C), which results from simply adding and subtracting the percents. We need multipliers.
multiplier for a 10% increases = 1 + 0.10 = 1.1
multiplier for a 20% decreases = 1 – 0.20 = 0.8
multiplier for a 50% increases = 1 + 0.50 = 1.5
Now, multiply these. First, multiply (0.8) and (1.5), using the doubling & halving trick. Half of 0.80 is 0.40, and twice 1.5 is 3
(0.8)(1.5) = (0.4)(3) = 1.2
Now, multiply this by 1.1
1.21.1 = 1.32
Thus, the three percent changes combined produce a 32% increase. Answer = (B).
3) The trap answer here would be to take the ratio of 80% and 20% — those don’t represent actually amounts that other person has, just the differences between amounts owned and the cost of the condo. Think of this in terms of multipliers. Use the variables:
B = amount Bert has in savings
R = amount Rebecca has in savings
P = price of the condominium
Then in terms of multipliers, the information given tells us that P = 1.8B, and P = 1.2R. Set these equal.
1.8B = 1.2R
Answer = (C)
4) There are a few ways to solve this. This is plug-in approach. Suppose Company A has $100 in assets. (Yes, unrealistic, but a convenient choice.) Then Company KW is being sold for 50% more = $150. Now, this $150 is 100% more than what company B has in assets —- i.e., $150 is double what company B has in assets, so company B has $75 in assets. Now, suppose companies A & B pool their resources — together, they have $175 in assets.
Notice, first of all, combined they have more in assets than the cost of KW, the price of KW would be a percent less than 100%. Even if nothing else, we could eliminate (D)&(E), and practice solution behavior.
Part = price of KW = $150
Whole = combined assets = $175
Answer = (C)
5) Let x = the number of students other than seniors (freshmen + sophomores + junior). We know 40% of the 300 seniors have cars. Well, 10% of 300 is 30, so 40% is 4 times this —- 430 = 120 seniors have cars. We know 10% of the other students have cars, so that would be 0.1x. The total number of students with cars is 120 + 0.1x. That’s the PART.
The total number of students = 300 + x. That’s the WHOLE.
PART/ WHOLE x 100% = 15%, which means that PART/ WHOLE = 0.15, which means PART = 0.15(WHOLE). That can be our equation.
(120 + 0.1x) = 0.15(300 + x)
120 + 0.1x = 45 + 0.15x
75 + 0.1x = 0.15x
75 = 0.15x – 0.10x = 0.05x
150 = 0.10x
1500 = x
Answer = (E)
6) This is less about percents and more about probability, particularly the probability OR-rule. Let R = the event that a young fox catches at least one rabbit, and let S = the event that a young fox catches at least one songbird. Using algebraic probability notation, we know P(R) = 0.8 and P(S) = 0.6. We know P((not R) and (not S)) = 0.1, and the complement of [(not R) and (not S)] would be [R or S], so by the complement rule, P(R or S) = 1 – 0.1 = 0.9. The question is asking for P(R and S). The OR rule tells us
P(R or S) = P(R) + P(S) – P(R and S)
0.9 = 0.6 + 0.8 – P(R and S)
0.9 = 1.4 – P(R and S)
0.9 + P(R and S) = 1.4
P(R and S) = 0.5
Answer = (B)
7) First of all, amount paid = 400W. That was the bookstore’s total expenditures. The total revenue came in three stages
150 copies @ 80% more than W = 1501.8W = 300.9W = 270W
100 copies @ 20% more than W = 1001.2W = 120W
150 copies @ 40% less than W = 1500.6W = 3000.3W = 90W
Notice, all the percent changes were converted to multipliers. Also, notice the use of the doubling & halving trickin the first and third lines.
Total revenue = 270W + 120W + 90W = 480W
Well, the store took in more revenue than they spent, so they made a profit, not a loss. Notice that 10% more than 400 would be 440, so 480 would be a 20% increase.
Answer = (D)
8) There are a few different methods of solution. I will show a numerical approach. Let’s try some simple cases — suppose they sold nothing but balcony tickets: then B = 100 and R = 100. If we plug in R = 100 …
Right away, with one choice, we know that (A) and (C) are out. At this point, even if we could do nothing else, we could still use solution behavior.
Now, suppose the revenue they took in was half and half, so that R = 50%. Well, the prices of balcony to orchestra tickets are in a ratio of 3:5, so in order for the revenue from balcony tickets to equal the revenue from orchestra tickets, they would have to be sold in the reciprocal ratio of 5:3 (if you think about it, this just uses the fact that AB = BA). This means that 5/8 of the tickets sold would be balcony tickets. (For this logic, see the Ratio blogfor information on portioning.) Thus, if we plug in R = 50, we should get B = (5/8)100 — notice, we don’t actually need to calculate that out: the fraction is fine. In the calculations below, notice the use of the doubling & halving trick in the denominator, to get the factor of 100.
Backsolving gets us to answer (D) very efficient.
Now, here’s an algebraic approach, which is longer, but some folks want to see this anyway for the algebra practice:
Let N = the total number of tickets sold. Then:
Therefore,
In the big fraction, in each term: cancel the N’s, cancel a factor of 10, and cancel the “divided by 100”:
Multiply both sides by the denominator.
R(500 – 2B) = (3B)100
500R – 2RB = 300B
Get all the B’s on one side
500R = 2RB + 300B
B(2R + 300) = 500R
Answer = (D)
That’s the full-blown algebraic solution, although why anyone would want to slog through all that instead of using backsolving is beyond me!!
Author
Mike MᶜGarry
Mike served as a GMAT Expert at Magoosh, helping create hundreds of lesson videos and practice questions to help guide GMAT students to success. He was also featured as “member of the month” for over two years at GMAT Club. Mike holds an A.B. in Physics (graduating magna cum laude) and an M.T.S. in Religions of the World, both from Harvard. Beyond standardized testing, Mike has over 20 years of both private and public high school teaching experience specializing in math and physics. In his free time, Mike likes smashing foosballs into orbit, and despite having no obvious cranial deficiency, he insists on rooting for the NY Mets. Learn more about the GMAT through Mike’s Youtubevideo explanations and resources like What is a Good GMAT Score? and the GMAT Diagnostic Test.
View all posts
More from Magoosh
GMAT Math: Possibilities for Variables
GMAT Math – What Kind of Math is in the Quantitative Section?
Challenging GMAT Math Practice Questions
GMAT Math: The Probability “At Least” Question
Share
Tweet 1
Pin
GRE vs GMAT: Which Should I Take?
More from Magoosh
GMAT Math: Possibilities for Variables
GMAT Math – What Kind of Math is in the Quantitative Section?
Challenging GMAT Math Practice Questions
GMAT Math: The Probability “At Least” Question
Our Products
GRE PrepGMAT PrepSAT PrepACT PrepSAT & ACT Prep for High SchoolsIELTS PrepTOEFL PrepLSAT PrepMCAT PrepPraxis Prep
Our Blogs
GRE BlogGMAT BlogSAT BlogACT BlogIELTS BlogTOEFL BlogLSAT BlogMCAT BlogPraxis BlogCompany Blog
Company
Magoosh HomeAbout UsPrivacy PolicyMissionPartner With UsContact Us
GMAT® is a registered trademark of the Graduate Management Admission Council (GMAC). This website is not endorsed or approved by GMAC. GRE®, TOEFL®, and Praxis® are registered trademarks of Educational Testing Service (ETS). This website is not endorsed or approved by ETS. SAT® is a registered trademark of the College Board, which was not involved in the production of, and does not endorse this product. LSAT® is a registered trademark of the Law School Admission Council, Inc. This website is not endorsed or approved by the LSAC. ACT® is a registered trademark of ACT, inc. This website is not endorsed or approved by ACT, inc. MCAT is a registered trademark of the Association of American Medical Colleges (AAMC). This website is not endorsed or approved by AAMC.
Follow us on social media
@2025 Magoosh
All Rights Reserved |
17280 | https://epigeneticsandchromatin.biomedcentral.com/articles/10.1186/s13072-018-0186-0 | Advertisement
Transient reduction of DNA methylation at the onset of meiosis in male mice
Epigenetics & Chromatin
volume 11, Article number: 15 (2018)
Cite this article
5667 Accesses
54 Citations
8 Altmetric
Metrics details
Abstract
Background
Meiosis is a specialized germ cell cycle that generates haploid gametes. In the initial stage of meiosis, meiotic prophase I (MPI), homologous chromosomes pair and recombine. Extensive changes in chromatin in MPI raise an important question concerning the contribution of epigenetic mechanisms such as DNA methylation to meiosis. Interestingly, previous studies concluded that in male mice, genome-wide DNA methylation patters are set in place prior to meiosis and remain constant subsequently. However, no prior studies examined DNA methylation during MPI in a systematic manner necessitating its further investigation.
Results
In this study, we used genome-wide bisulfite sequencing to determine DNA methylation of adult mouse spermatocytes at all MPI substages, spermatogonia and haploid sperm. This analysis uncovered transient reduction of DNA methylation (TRDM) of spermatocyte genomes. The genome-wide scope of TRDM, its onset in the meiotic S phase and presence of hemimethylated DNA in MPI are all consistent with a DNA replication-dependent DNA demethylation. Following DNA replication, spermatocytes regain DNA methylation gradually but unevenly, suggesting that key MPI events occur in the context of hemimethylated genome. TRDM also uncovers the prior deficit of DNA methylation of LINE-1 retrotransposons in spermatogonia resulting in their full demethylation during TRDM and likely contributing to the observed mRNA and protein expression of some LINE-1 elements in early MPI.
Conclusions
Our results suggest that contrary to the prevailing view, chromosomes exhibit dynamic changes in DNA methylation in MPI. We propose that TRDM facilitates meiotic prophase processes and gamete quality control.
Background
Meiosis is a specialized cell division program that produces haploid gametes. To achieve haploidy, a diploid germ cell replicates its DNA once and divides twice. Following the final round of DNA replication (meiotic S phase), chromosomes pair and recombine in meiotic prophase I (MPI) . Meiosis is a highly protracted cell cycle due to meiotic S phase being much longer than mitotic S phase in the same organism [2, 3] and MPI itself lasting about 2 weeks , during which time the chromosomes undergo dramatic changes in organization. Based on these changes, MPI is subdivided into leptotene (L), zygotene (Z), pachytene (P) and diplotene (D) substages, which are immediately preceded by meiotic S phase also known as the preleptotene (PL) stage (Additional file 1: Fig. S1). These descriptive names of MPI substages serve as common reference points in the studies of meiosis across species since they associate with specific molecular processes such as double-stranded break (DSB) formation in L, the onset of homolog synapsis in Z, completion of synapsis and meiotic recombination in P and the onset of homolog decondensation and desynapsis in D. Upon the completion of MPI, homologous chromosomes segregate in the first meiotic division (Meiosis I), while sister chromatids separate in Meiosis II.
Extensive changes in meiotic chromosome configurations in MPI raise an important question concerning the contribution of epigenetic mechanisms to meiosis. Indeed, prior studies have implicated histone modifications in chromosome condensation, synapsis and meiotic recombination in plants, fungi and animals [5,6,7,8,9]. Disruption of DNA methylation also interferes with MPI processes in a wide range of species capable of this modification including mammals 10,11,12,13,[14,15]. Here, we focused on genome-wide DNA methylation of male meiotic germ cells of mice. Studies over the past decade revealed important roles of DNA methylation, repressive histone modifications and small Piwi-interacting RNAs (piRNAs) in LINE-1 (L1) control in male germ cells [13:96–9."), 16:785–99."),17:909–12."),18:601–8.")]. Intriguingly, despite these defensive mechanisms, early meiotic male germ cells exhibit L1 ORF1p expression albeit at levels significantly lower than those observed in mutants deficient in transposon defense [18:601–8."),19:e1000764."),20:285–97."),21:775–87."),22:2584–92.")]. To explain this observation, we have previously posited a transient change in DNA methylation at the onset of meiosis [23:76–9.")]. Critically, unlike the detailed knowledge of genome-wide DNA methylation in mouse postnatal spermatogonia [24:2312–24."), 25] and the evidence that bulk DNA methylation precedes meiosis [26:368–79.")], the precise dynamics of DNA methylation during MPI remain unknown. To a large extent, this gap in understanding of epigenetic makeup of meiotic chromosomes was due to inaccessibility of cell populations from all MPI substages. To overcome this limitation, we first have optimized the method for the purification of adult mouse male germ cells from all substages of MPI [27
."),28:556–65."),29]. In this study, using this method, we obtained high-quality germ cell samples that allowed us to discover genome-wide transient reduction of DNA methylation (TRDM) during MPI, a previously unrecognized epigenetic feature of meiotic chromosomes in male mice.
Results
Genome-wide DNA methylation levels in meiotic prophase I
To characterize the dynamics of DNA methylation across MPI, we used an optimized flow cytometry cell sorting method to obtain two biological replicates of spermatogonial (Spg), PL, L, Z, P, D spermatocytes and epididymal spermatozoa (Spz) 27)
."),[28:556–65."),29] (Additional file 2: Fig. S2). The purity of MPI cell fractions was verified by staining for meiosis-specific (SYCP3, γH2AX) and spermatogonia-enriched (DMRT1, DMRT6) markers as described previously [28:556–65."), 29]. Critically, all germ cell fractions were devoid of somatic cells (Methods) and gene expression profiling of a wide panel of soma-enriched and germ cell-specific genes by RNA-seq confirmed the purity and stage specificity of our samples (Additional file 3: Fig. S3). Using these samples, we performed whole-genome bisulfite DNA sequencing (WGBS) for genome-wide analysis of DNA methylation at single CpG resolution (Additional file 4: Table S1). Over 90% of reads aligned to the mouse genome and exhibited high efficiency of bisulfite conversion (Additional file 4: Table S1 and Additional file 5: Table S2). Each biological replicate accounted for 87–94% of genomic CpGs with 3 to 6 × average CpG coverage per individual sample after read de-duplication and processing (Additional file 6: Table S3). Pairs of biological replicates exhibited high inter-individual Pearson correlation indicating excellent reproducibility of our data (Additional file 7: Table S4). Since cytosine methylation levels at non-CpG sites were negligible (0.3–0.4%), we excluded them from later analyses.
First, we examined genome-wide DNA methylation levels across MPI. Consistent with the notion of high DNA methylation levels in spermatogenesis, we observed high median (> 84%) levels of CpG methylation in Spg, P, D and Spz genomes (Fig. 1a, Additional file 8: Table S5). Interestingly, we uncovered an extended window of reduced global DNA methylation during early MPI demarcated by a pronounced drop (~ 12–13 percentage points) in DNA methylation levels in PL followed by a progressive gain of DNA methylation in L and Z, returning to premeiotic levels by P (Fig. 1a, Additional file 8: Table S5). Overall, a period of reduced global DNA methylation lasts from PL to P (~ 70 h), and we will refer to it as transient reduction of DNA methylation at the onset of meiosis (TRDM).
Global DNA methylation dynamics in MPI. a Genome-wide DNA methylation was summarized as means of non-overlapping bins of 500 CpGs for individual biological replicates. Box-and-whisker plot shows the maximum, upper quartile, median, lower quartile and minimum of data. Median percent DNA methylation for both replicates is specified above the boxplot. b Chromosome-wide DNA methylation levels were plotted across chromosome length (chromosome 13, replicate 1 is shown). DNA methylation was averaged using sliding non-overlapping bins of 100 kbp. c Box-and-whisker plot of DNA methylation levels across various genomic features. The average DNA methylation levels were aggregated as consecutive, non-overlapping averages of 100 CpGs. Averages were combined for biological replicates
To examine the chromosome-wide distribution of DNA methylation in individual MPI substages, we summarized DNA methylation levels over a distance of 100-kb-wide non-overlapping windows spanning the length of each chromosome. We found that global hypomethylation in PL is chromosome-wide (Fig. 1b). This was true for all autosomes examined in both biological replicates (Additional file 9: Fig. S4, Additional file 10: Fig. S5). Interestingly, although the X chromosome also exhibits TRDM, it tended to be less methylated in all MPI substages (Additional file 9: Fig. S4, Additional file 10: Fig. S5, Additional file 11: Fig. S6). X chromosome DNA methylation levels in Spg-to-PL and PL-to-L transitions are distinctly less correlated than in the autosomes, further suggesting differences in the dynamics of its demethylation and remethylation (Additional file 11: Fig. S6). Nonetheless, these results showed that TRDM holds true for all chromosomes and that remethylation in MPI appears as a gradual chromosome-wide process.
To determine whether DNA hypomethylation in PL is specific to a particular genomic feature, we examined DNA methylation dynamics of exons, introns, intergenic and repetitive regions, as well as functionally specialized sequences such as promoters and CpG islands (CGIs) (Fig. 1c, Additional file 12: Fig. S7A, Additional file 13: Table S6). This analysis showed that all genomic features were highly methylated in Spg and then demethylated in PL (most prominently at introns, repeats and intergenic regions), except for CGIs whose methylation levels are already very low. Likewise, the analysis revealed comparable DNA methylation dynamics of repetitive DNA with major classes of TEs, namely the LINEs, SINEs, LTRs and DNA transposons (Additional file 12: Fig. S7B). Finally, we asked whether differentially methylated regions (DMRs) of imprinted genes also become hypomethylated in PL. The analysis of a subset of imprinted DMRs showed that DNA methylation levels of paternal imprinted DMRs follow the same dynamic observed for other genomic features while maternal DMRs remained unmethylated as expected (Additional file 12: Fig. 7C). Cumulatively, these results show that TRDM is indeed a genome-wide event that encompasses all chromosomes and all genomic features.
Dynamics of DNA methylation in the course of MPI
To better understand the DNA methylation dynamics in MPI, we identified regions that exhibited significant differences in methylation levels between any two consecutive MPI substages in a statistically principled, coverage-conscious and biological replicate-aware manner . This analysis revealed thousands of DMRs supporting the results of our genome- and chromosome-wide analyses (Fig. 2, Additional file 14: Table S7 and Additional file 15: Table S8A). Formation of large hypomethylated DMRs (with a median size of ~ 35 kb, the median number of CpGs around 257 and implicating over half of the mouse genome) marked the Spg-to-PL transition (Fig. 2a, b). As a result of gradual remethylation of hypomethylated Spg-to-PL DMRs in L and Z, their mean methylation difference and sizes also progressively decreased (Fig. 2b, c). Thus, while Spg-to-PL DMRs included ~ 56% of all evaluated CpGs, PL-to-L and L-to-Z DMRs included ~ 41 and ~ 3% of all CpGs, respectively (Additional file 15: Table 8A). By intersecting genomic coordinates of DMRs between MPI substages, we found that Spg-to-PL DMRs accounted for up to 75% of all PL-to-L and 63% of L-to-Z DMRs (Methods). Therefore, a sharp demethylation in PL is followed by gradual remethylation of the same CpGs in L and Z germ cells.
DMR dynamics in MPI. a Heatmap of DNA methylation profiles of all DMRs between consecutive MPI substages. Each row shows DNA methylation level of a DMR in a pairwise comparison. DNA methylation level is scaled according to red (low)-to-green (high) color scale. The percentage of DMRs exhibiting the main direction of DNA methylation change is indicated above the plots. b Boxplot showing DNA methylation value distribution at DMRs in MPI between two consecutive stages. c Smoothed DNA methylation at DMRs for Spg and PL (top), PL and L (middle), and P and D (bottom). d The proportion of CpGs accounted for by hypomethylated and hypermethylated DMRs
Although global levels of DNA methylation in Z were higher relative to the preceding substages, Z is still hypomethylated relative to P (Fig. 2a). Accordingly, our DMR analysis showed that during the Z-to-P transition there is an increase in methylation at ~ 57% of analyzed CpGs from 81% in Z to 88% P (Additional file 15: Table S8A, Fig. 2b). Therefore, while the bulk of remethylation occurs by Z, remethylation that reaches premeiotic or almost Spz-like levels occurs between Z and P. Indeed, the original Spg-to-PL DMRs explain most (~ 75%) of all DMRs observed between Z and P (Methods). We find that gradual remethylation concerns all genomic features examined (exons, introns coding sequences and repeats) (Additional file 15: Table S8B). In Z, up to 60% of these features are still hypomethylated compared to P, although mean DNA methylation difference is relatively small (Fig. 2d, Additional file 15: Table S8A). In P, less than two percent of these features are found in hypomethylated DMRs relative to D, due to remethylation.
Interestingly, while P and D share very similar DNA methylation profiles overall (Fig. 1a), we observed the emergence of hypomethylated P-to-D DMRs that involve 8% of all examined CpGs in common and are of a relatively small mean genomic size (~ 9 kb, as compared to ~ 35 kb in PL) (Fig. 2a, b, d, Additional file 15: Table S8A). Considering that Spg-to-PL DMRs account for only 50% of all P-to-D DMRs (Additional file 15: Table S8A), it is likely that the hypomethylation observed in late MPI is unrelated to TRDM.
Evidence of DNA replication-dependent DNA demethylation in TRDM
The discovery of TRDM raised the question of its mechanistic origin. By visually inspecting patterns of chromosome-wide DNA methylation levels, we observed a possible clue to the cause of hypomethylation in PL. Focusing on PL DNA methylation trace along a chromosome, one can observe regions of relative DNA hypomethylation interrupted by a few prominent regions of relative hypermethylation (Fig. 3a, Additional file 9: Fig. S4, Additional file 10: Fig. S5). In fact, every chromosome in both PL biological replicates possessed such prominent subchromosomal domains (Additional file 9: Fig. S4, Additional file 10: Fig. S5, Additional file 16: Fig. S8, Additional file 17: Fig. S9A). Furthermore, the subchromosomal domains of higher relative DNA methylation levels in PL show lower DNA methylation levels in L, resulting in an apparent switch in DNA methylation traces in these MPI substages when compared to the rest of the chromosome (Fig. 3a top panel, Additional file 16: Fig. S8A, Additional file 17: Fig. S9A).
DNA methylation pattern in PL overlaps with replication timing. a The plot of CpG DNA methylation averaged using sliding non-overlapping 100-kbp windows in Spg, PL and L across a region of chromosome 14 (top) and replication timing (RT) data for the same region of chromosome 14 from mouse B cell lymphoma CH12 cells (bottom). b Normalized genome sequencing coverage after WGBS summarized as averages of sliding non-overlapping 5-kbp windows
Given the above dynamics of DNA methylation in the PL-to-L transition, we considered a role for DNA replication and replication timing domains in this phenomenon. Replication domains are large-scale genomic territories that replicate at particular times during S phase [32, 33]. Global early or late replication timing profiles appear relatively preserved between different cell lines and cell types tested, although there are tissue-specific differences [33, 34]. Remarkably, an overlay of the chromosome-wide DNA methylation pattern from our data with replication timing domains of a mouse B cell lymphoma CH12 cell line revealed a strong overlap between the two (Fig. 3a). Specifically, in PL, we observe an overlap between large hypermethylated regions and late-replicating domains. Correspondingly, an overlap is observed in PL between large-scale hypomethylated regions and early-replicating domains. Interestingly, a switch between DNA hypo- and hypermethylation in PL is marked by an opposite switch in DNA methylation pattern in L (Fig. 3a). This switch in DNA methylation pattern in PL-to-L transition matched the transition from early to late replication timing domains (Fig. 3a). The overlap between DNA methylation pattern and replication timing pattern in PL was true of both biological replicates (e.g., Fig. 3a, Additional file 16: Fig. S8, Additional file 17: Fig. S9). To test the strength of the association of a switch in DNA methylation levels with late replication genome-wide, we determined their Pearson correlation coefficient in the course of MPI. This analysis showed an abrupt switch in the directionality of correlation from PL to L, supporting that late-replicating domains switch from high to low DNA methylation levels between the two MPI stages (Additional file 18: Fig. S10A).
To further explore a role of DNA replication in hypomethylation of the PL genome, we evaluated the uniformity of genome sequencing coverage in our WGBS data (Fig. 3b). Previously, DNA sequence coverage was used to estimate replication timing and to evaluate underreplication in Drosophila polytene chromosomes [36, 37]. We summarized read frequency over a distance of 5-kb non-overlapping windows spanning the length of the chromosome and corrected for the difference in total read count between the samples. Remarkably, we observe consistently lower sequencing coverage in the hypermethylated regions/late replication timing domains in PL, disappearing in L (Fig. 3b, Additional file 16: Fig. S8, Additional file 17: Fig. S9). The lower sequencing coverage in PL is consistent with DNA replication during this time, while recovery of sequence coverage in L agrees with the lack of replication in L, as no replication occurs then and during the rest of meiosis. Specifically, lower sequencing coverage of late replication timing domains in PL indicates that these regions have not yet completed replication (with lower sequencing coverage reflecting lower DNA content), while early replication timing domains have already replicated (hence exhibit higher sequencing coverage associated with higher DNA content). To confirm that PL spermatocytes used in our studies are replicative, we performed FACS enrichment of PL cells from mice injected with EdU 2 h prior to cell sorting. Subsequent EdU detection showed that > 70% of FACS-enriched PL cells were replicative, with the majority of EdU patterns corresponding to middle and late S phase (Additional file 19: Fig. S10B) .
The above results suggested that DNA replication in PL dilutes DNA methylation levels by creating hemimethylated DNA. To test this possibility directly, we analyzed methylation of complementary DNA strands using hairpin-bisulfite analysis combined with next-generation sequencing (Methods). Here we focused on the 5`-end sequence of full-length L1 elements in L1MdTf_I and L1Md_Tf_II families . The mouse genome has ~ 3000 of such elements, thus permitting simultaneous measurement of DNA hemimethylation throughout the genome. After PCR amplification of hairpin-bisulfite products using L1MdTf-specific primers (Additional file 19: Fig. S11) followed by Illumina paired-end sequencing of these L1MdTf amplicons, we obtained a range of 77–172 thousand reads per each analyzed MPI stage (Additional file 20: Table S9). We reliably determined methylation states of 5 L1 CpG dyads (Additional file 19: Fig. Fig. S11) before, during and after MPI. This analysis revealed a robust increase in L1 CpG hemimethylation in PL that was followed by gradual remethylation in MPI (Fig. 4a, b). Notably, L1 hemimethylation levels and dynamics in MPI paralleled those of LINE elements in our genome-wide DNA methylation studies (compare Fig. 4a, red values, with Additional file 12: Fig. S7B). Importantly, by excluding hemimethylated L1 DNA from this analysis we effectively almost erased TRDM of L1 (Fig. 4b, blue values). Together with the above evidence, these results strongly support the primary mechanism of TRDM by DNA replication-driven passive DNA demethylation. Finally, this analysis also revealed the emergence of L1s bearing no methylation on analyzed CpG (Fig. 4c), suggesting that TRDM reveals the existence of L1 elements incompletely methylated in Spgs, thus providing an opportunity for their expression at the onset of MPI.
Hemimethylation of L1. Hemimethylated, methylated and unmethylated levels were quantified at 5 CpGs in L1. a The proportion of reads supporting hemimethylation at each of the 5 CpGs in six different cell states. b The amount of methylation at the 5 CpGs quantified including hemimethylation (pink) and excluding hemimethylation (blue). c Levels of fully unmethylated L1 elements in MPI based on reads where all 5 CpGs are unmethylated on both strands
Transposon expression during TRDM
To test whether TRDM of potentially active L1s in PL contributes to their expression in MPI, we performed RNA-seq of FACS-enriched individual MPI cell populations (Methods). To analyze RNA abundance of TEs, we used RepEnrich strategy to account for most TE-derived RNA by way of counting both uniquely mapped and multi-mapped reads in our RNA-seq data . Using this strategy, we found that transcript abundance for repeat elements as a whole shows an overall decrease from Spg onwards, with lowest levels in Spz (Additional file 21: Fig. S12). Intriguingly, we find that Spg-to-PL and PL-to-L transitions are accompanied by transcriptional upregulation of many classes of LINE elements (Fig. 5a). This upregulation includes all classes of potentially active L1 elements, whose expression begins to decrease in Z and is essentially extinguished by P (Fig. 5b). Interestingly, a P-to-D transition involves a strong upregulation of LINEs including potentially active LINE members, a phenomenon apparently independent of TRDM (Fig. 5b).
Dynamics of LINE transcript abundance in MPI. a, b A pairwise differential expression analysis is represented as log2 fold change in CPM between consecutive MPI stages and also sperm (Spz) relative to diplotene stage. Each horizontal barplot shows log2(FC) on the x-axis for a different LINE subfamily on the y-axis or b LINE-1 subfamilies that contain evolutionarily young and potentially active members
To determine how these two bursts of L1 transcription relate to L1 protein expression, we performed immunofluorescence analysis using antibodies to the L1-encoded ORF1 protein, an acrosome-specific marker sp56 and double-strand break marker γH2AX [20, 42, 43]. This analysis established that L1ORF1p expression in MPI begins in L, persists until mid-P and extinguishes in late P (Additional file 22: Fig. S13). These results suggest that the initial, smaller wave of L1 mRNA in the early MPI is productive, while the second burst of L1 transcription at P-to-D transition does not lead to a corresponding increase in L1ORF1p levels. These L1 mRNA and protein expression dynamics fit well with the relatively low activity of the piRNA pathway in early MPI and its robust transcriptional activation in P [18, 44].
Expression analysis of DNA methylation machinery supports passive DNA methylation in TRDM
To better understand the mechanism of TRDM, we examined MPI expression of genes associated with passive and active DNA demethylation. In cultured mammalian cells, maintenance CpG methylation in a newly synthesized DNA strand occurs within minutes after DNA replication [45, 46]. This process requires targeting of Dnmt1 to chromatin containing hemimethylated DNA by Uhrf1 protein [47, 48]. Our expression data indeed support dynamic but appreciable Dnmt1 mRNA and protein expression in MPI (Fig. 6a, b). Interestingly, Uhrf1 mRNA expression levels drop twofold in PL, L and Z compared to Spgs (Fig. 6a), thus providing a potential explanation to diffuse nucleoplasmic rather than replication foci-centered localization of Dnmt1 protein in PL (Fig. 6b and ). These observations agree with passive DNA demethylation in early MPI and are further supported by reduced Dnmt1 expression in L and Z (Fig. 6a).
Examination of transcript and protein abundance of genes associated with passive or active DNA demethylation or remethylation. a RNA-seq transcript abundance of select genes in individual FACS-enriched MPI germ cells, expressed as RPKM. b Immunofluorescence co-staining of testicular cross sections for Dnmt1 (pink) and yH2AX (green), and counterstaining with DAPI (blue). The Roman numerals denote spermatogenic stages; the numbers correspond to steps of haploid sperm development during spermatogenesis. Spermatogonia (Spg), preleptonema (PL), leptonema (L) and zygonema (Z), pachynema (P), diplonema (D), Sertoli cells (S). c Immunofluorescence staining of testicular cross sections (cryosections) for EdU (pink) and Dnmt3a, Dnmt3a2 or Dnmt3b (green), and counterstaining with DAPI (blue)
At the same time, genes for proteins implicated in active DNA demethylation exhibit low level of expression across MPI arguing against the leading role of this mechanism in TRDM (Fig. 6a). Indeed, previous genome-wide analysis of 5-hydroxymethylcytosine uncovered only minor contribution of this modification to the epigenetic landscape in MPI . However, in our data, DNA demethylation in at the P-to-D transition in narrow genomic windows is consistent with active DNA methylation and corroborates the previous study . Taken together, the above results strongly suggest that reduction of DNA methylation in TRDM occurs primarily by passive, DNA replication-coupled mechanism.
Gradual but uneven genome-wide DNA remethylation occurs over the period of 70-h spanning early MPI substages. Given the predominance of hemimethylated DNA in the meiotic genome, restoration of premeiotic levels of DNA methylation is likely accomplished by Dnmt1 whose mRNA expression (along with Uhrf1) gradually recovers in P and D (Fig. 6a). In addition, despite low mRNA expression levels of de novo methyltransferases, we cannot exclude a role for Dnmt3a2 in remethylation of MPI genomes (Fig. 6c). Cumulatively, the results support the idea of the leading role of passive DNA demethylation and DNMT1-mediated DNA remethylation in TRDM.
Discussion
In this study, we systematically examined genome-wide DNA methylation across all MPI substages in adult male mice. This analysis provided the first evidence of a genome-wide, transient reduction of DNA methylation at the onset of meiosis. The central implication of this work is that critical MPI events (homology search, chromosome pairing, meiotic DSB formation and repair) occur in the context of hemimethylated genomic DNA.
With respect to the mechanism of DNA demethylation in TRDM, our data are most consistent with a passive, DNA replication-coupled mechanism. We base this conclusion on observations of (a) the initial drop of DNA methylation in PL cells going through meiotic S phase, (b) the genome-wide scope of DNA demethylation, (c) the dynamics of DNA methylation levels of early- and late-replicating domains in PL and L, (d) low expression of genes implicated in active DNA methylation and (e) direct measurement of the dynamics of DNA hemimethylation of L1 elements which allowed us to assess the extent of DNA hemimethylation in thousands of locations throughout the genome. Results of this analysis strongly support the idea of passive DNA replication mechanism of DNA demethylation at the onset of meiosis.
Overall, our analysis provides strong evidence of genome-wide DNA methylation by a passive, DNA replication-dependent mechanism. It is important to note that while a theoretical maximum reduction of DNA methylation levels is 50% of the starting values, the lower observed percentage in PL (by ~ 13 percentage points) is consistent with the non-uniform dynamics of DNA demethylation across individual genomes (early and late replication domains) and a non-uniform PL germ cell population studied (at different times of S phase, e.g., early, mid and late) which together lead to the apparently higher DNA methylation levels.
The dynamics of genome-wide CpG methylation point to a robust reduction of DNA methylation in PL compared to Spg. Interestingly, high methylation levels across Spg chromosomes argue against a possibility of preexisting DNA methylation levels determining the timing of DNA replication along chromosomes. Instead, consistent with semiconservative mechanism of DNA replication, DNA methylation levels remained high in late-replicating domains in PL but dropped in L upon replication of these regions in late PL, but before the recovery of DNA methylation.
Our observations underscore the uniqueness of meiotic S phase whose significance goes beyond being simply the last round of DNA replication prior to meiosis. Previous studies in numerous plant, fungal and animal species documented the longer duration of the meiotic S phase [2, 3, 51,52,53,54]. In addition, several studies provided evidence linking the meiotic S phase to meiotic recombination [38, 55,56,57,58]. Our results suggest that meiotic DNA replication in the male mice is different from somatic cells in that it fails to restore premeiotic DNA methylation levels in a timely manner.
What role(s) does TRDM play in meiosis? We envision several possibilities. First, TRDM might be a by-product of genome-wide remodeling of structure of chromosomes during mitosis-to-meiosis transition. Incorporation of meiosis-specific cohesin complexes, meiotic DNA recombination machinery and other events may preclude or reduce the accessibility of newly replicated DNA to DNMTs and their accessory proteins. Therefore, the observed reduction of DNA methylation might be non-consequential to meiosis.
Second, alternatively or in parallel, lower DNA methylation of the genome may create permissive conditions for some aspect of meiosis. Prior studies in plants, fungi and animals demonstrated that the disruption of normal pattern of DNA methylation in meiosis influences the meiotic recombination landscape [10,11,12, 59]. Although these prior studies do not demonstrate that the reduction of DNA methylation is an absolute prerequisite for wild-type levels of meiotic recombination, our finding of TRDM demonstrates that reduced DNA methylation is a common feature both of male and female meiotic germ cells of mice.
Third, DNA replication-coupled mechanism of TRDM suggests potential for distinct epigenetic states of hemimethylated sister chromatids of meiotic chromosomes. The impact of this epigenetic asymmetry of genetically identical DNA sequences on meiosis remains to be understood. We speculate that the epigenetic asymmetry of sister chromatids may lead to differential expression of their gene content. Indeed, as a result of passive DNA demethylation, genetically identical alleles will exist in distinct hemimethylated states with one allele inheriting the methylated coding strand while the coding strand of the other having no methylation. In addition, hemimethylation might stimulate inter-sister chromatid recombination. This idea is supported by early studies in cultured mammalian somatic cells that implicated DNA hemimethylation in increased sister chromatid exchanges in mitosis [60, 61]. In addition, given the essential role of the mismatch repair pathway in meiosis, it is tempting to speculate that DNA hemimethylation may influence the usage of sister chromatids in MPI akin to methylation-directed mismatch repair system of E. coli .
Finally, our results of analysis of L1 DNA methylation, and L1 RNA and protein expression suggest that TRDM contributes to gamete quality control. We base this conclusion on the finding of the appearance of fully unmethylated L1 elements that accounted for 2% of obtained sequencing reads in PL. This finding is consistent with incomplete DNA methylation (presumably hemimethylation) of a population of L1 elements in spermatogonia. Prior studies in embryonic stem cells and somatic cells demonstrated the existence of L1 elements showing DNA hemimethylation that necessitate cooperativity between de novo and maintenance DNA methyltransferases [63, 64]. We suspect that hemimethylated L1 elements remain largely transcriptionally silent until the meiotic S phase when they become fully demethylated and expressed. While fully demethylated L1s are the minority of potentially active L1 elements, they still correspond to dozens of L1 elements that may be expressed. Intriguingly, a potential role for TRDM in gamete quality control parallels a previously described selective elimination of MPI fetal oocytes with excessive L1 levels during the evolutionarily conserved process of fetal oocyte attrition . If this were the case, L1 elements may be contributing to gamete quality control in meiotic germ cells in both sexes.
Conclusions
Our results suggest that chromosomes exhibit dynamic changes in DNA methylation in MPI in male mice. These changes in DNA methylation arise by a passive mechanism during meiotic S phase resulting in hemimethylation of the genome in early MPI. We propose that TRDM facilitates meiotic prophase processes and gamete quality control.
Methods
Animals
Adult C57BL/6 J male mice (2- to 5-month-old) (Jackson Laboratory) were used as a source of adult testes. All experimental procedures were performed in compliance with ethical regulations and approved by the IACUC of Carnegie Institution for Science.
Germ cell isolation
Germ cell fractions were enriched by fluorescence-activated cell sorting (FACS) as described previously . Sorted germ cell fractions were devoid of somatic contamination but contained small amounts of germ cells from adjacent MPI stages. Cell fraction purity was determined to be > 85% for Spg, ~ 85% for PL, ~ 85% for L, ~ 80% for Z, > 90% for P, > 90% for D.
Cryosections
After dissection of the testis, the tunica was removed, the testis was fixed (2% PFA in PBS) at 4C for 4 h, shaking. Samples were passed through sucrose solutions (10% for 1 h, 20% for 1 h, 30% overnight at 4C), embedded in OCT and stored at − 80 °C. Sections of 10 μm were used for IF.
Immunofluorescence
IF on testicular sections or meiotic spreads was performed as described before . ImageJ was used for image analysis.
Antibodies
The following primary antibodies were for immunofluorescence (IF): monoclonal anti-γH2AX (Mouse, 1 mg/ml, 1:1000, Millipore 05-636), polyclonal anti-Sycp3 (Rabbit, 1 mg/ml, Abcam, ab15092. IF: 1:500 dilution), polyclonal anti-ORF1p (Rabbit, 1 mg/ml, a kind gift from Dr. Martin. IF: 1:500 dilution), monoclonal anti-Dmrt1 (Mouse, 200 µg/ml, Santa-Cruz, sc-10222. IF: 1:200 dilution), polyclonal anti-Dmrt6 (Rabbit, a kind gift from Dr. Zarkower. IF: 1:200 dilution), monoclonal anti-sp56 (Mouse, Pierce, MA1-10866. IF: 1:750). The following secondary antibodies (2 mg/ml) were used in this study: donkey anti-rabbit Alexa Fluor 594, donkey anti-rabbit Alexa Fluor 488, donkey anti-mouse Alexa Fluor 488.
EdU labeling
Adult mice 1–3 months old were injected with 12.5 μg/g of body weight EdU (0.5 mg/ml DMSO stock) dissolved in 200 μL water. Mice were killed 2 h after injection and processed for FACS or for cryosections as described above. EdU detection with Click-iT EdU Alexa Fluor Kit was performed as described in the manual (Invitrogen).
Whole-genome bisulfite sequencing (WGBS)
Each biological replicate consisted of pooled cells from 2 to 3 different animals from different FACS procedures. For WGBS, two biological replicates (2×) were used for Spg, PL, L, Z, P, D and epididymal spermatozoa (Spz). Genomic DNA (gDNA) was prepared by incubating cells in tail lysis buffer with 5 µl of Proteinase K (Life Technologies, 20 mg/ml) at 55 °C for 2–3 h. At the end of lysis, 2 µl of linear acrylamide (Ambion, 5 mg/ml) was added to samples. DNA was extracted with phenol–chloroform–isoamyl alcohol (Life Technologies, #15593) using Phase Lock Gel (PLG) Light tubes (5 Prime). One microliter of RNase A (Thermo Scientific, #EN0531, 10 mg/ml) was added to the aqueous phase, and the samples were incubated at 37 °C for 30 min, transferred to a PLG tube and mixed with chloroform. DNA was precipitated using ethanol and quantified with Quant-iT PicoGreen reagent (Molecular Probes) using SpectraMax microplate spectrophotometer. Isolated mouse gDNA was spiked with approximately 0.1% unmethylated cl857 Sam7 Lambda DNA (Promega) and sheared to fragments with a range of 200–600 bp using a Covaris M220 Ultrasonicator.
WGBS library preparation was based on Illumina’s ‘WGBS for Methylation Analysis’ protocol. The adaptor-ligated DNA fragments were processed for bisulfite conversion according to the manufacturer’s protocol (EZ DNA methylation Gold kit, Zymo Research). Bisulfite-treated DNA underwent 15 rounds of PCR amplification; libraries were prepared using Illumina TruSeq RNA Sample Prep Kit v2 and sequenced on Illumina HiSeq 2000 platform, yielding 100-bp paired-end reads. Each sample was run in a single lane, spiked with 5% Illumina PhiX genomic DNA control. Data were downloaded onto our servers in FASTQ format for processing.
WGBS read alignment and extraction of methylation evidence
We used Bismark program for alignment of bisulfite-converted reads to mouse genome assembly NCBI37/mm9. The alignment was performed with respect to the bisulfite-treated Watson (original top) and Crick (original bottom) strands, and not their reverse complements, as the library was prepared in a strand-specific (directional) manner. No trimming was performed prior to alignment. After alignment Bismark de-duplication module was used to remove PCR duplicates.
Bismark was used to extract and summarize CpG methylation evidence present in the unique alignments. CpG evidence was filtered based on evaluation of methylation bias (M-bias) plots, and we excluded the first 6 nt from 5′ end of read 1 and 10 nt from read 2, and the last 1 nt from 3′ end of both reads prior to the extraction of methylation. Subsequently, using Bismark, we extracted CpG coverage into a file containing information for both strands. Finally, we merged strand-specific information. The final output text file contained chromosome (chr) name, chr start, chr end, CpG methylation percentage, count C and count unmethylated C. The final DNA methylation files were then examined with bsseq package and supplemented with R-based data analysis or in-house scripts.
Bioinformatics analysis of global DNA methylation levels
Correlation between replicates of WGBS data was performed as follows: Biological replicates were compared pairwise (e.g., Spg1 with Spg2). Final Bismark output files containing CpG methylation and coverage were imported into R. DNA methylation was extracted and summarized in non-overlapping bins of 500 CpGs using rep() function, followed by aggregation of data and computation of mean values, using aggregate() function in R. Pearson correlation coefficient was then calculated using cor() function in R.
Global DNA methylation analysis was performed using bsseq package. Two replicate groups were formed, each consisted of seven samples (Spg, PL, L, Z, P, D and Spz) and made up a single Bsseq object. Only those CpGs that were covered by at least one read in all samples (common CpGs) were analyzed. Since only these common CpGs were involved in data analysis, the overall DNA methylation levels for each sample slightly differ from “raw” DNA methylation values obtained for all mappable CpGs of a given sample.
For plotting DNA methylation across the length of the chromosome, a custom Python script was used and DNA methylation was averaged using sliding non-overlapping windows of 100 kbp.
Annotation used for DNA methylation analysis
The following genomic feature annotations were directly extracted from UCSC Table Browser based on mm9 genome: introns, exons, intergenic regions, CpG Islands and RepeatMasker (RMSK) repeats. Promoter coordinates were extracted from the UCSC/knownGene transcriptome file by taking + 1 kb to − 1 kb relative to the TSS, in a strand-conscious manner. The geneID and geneSymbols were obtained from specifying the selected fields in the output format and selecting knownGene (name), and kgXref (geneSymbol) fields. A custom script was used to change ucsc_ids to geneSymbols in the BED files. The genomic coordinates for imprinted gametic DMRs included 11 maternal (Grb10, Igf2r, Impact, Kcnq1ot1, Mest, Nespas-Gnasxl, Peg10, Peg3, Snrpn, U2af1-rs1 and Zac1) and 3 paternal (H19, Dlk1-Gt12 and Rasgrf1) DMRs.
Analysis of sequencing coverage after WGBS
For chromosome-based visualization, de-duplicated BAM files were sorted using SAMtools (samtools.sourceforge.net). Individual chromosomes were analyzed separately (extracted from sorted BAM files). Biological replicates were analyzed separately for independent assessment. The files were uploaded into SeqMonk and read coverage was quantitated in the following manner: running, non-overlapping window probes of 5 kb were created to span the chromosome length. Read counts (the probes) were quantitated using the SeqMonk’s Read Count Quantitation approach where we counted all reads and corrected for total read count based on the largest data set. For overall coverage quantitation, running, non-overlapping window probes of 5 kb were created to span the chromosome length. Data store summary report was exported.
Determination of overlap between datasets
Generally, overlaps were computed using bedtools intersect. For example, for intersecting DNA methylation with late-replicating regions, was used, with replication timing (RT) file (-a) and DNA methylation file (-b). The RT file contained . The CpG methylation file contained <chr C/start C/end C/methylation level. Pearson correlation between DNA methylation values and RT values was performed using cor() function in R. To examine proportion of Spg-to-PL DMRs that overlap with PL-to-L and L-to-Z DMRs, we used bedtools intersect –wa –wb –a option to intersect BED files of DMRs in question.
Analysis and annotation of differentially methylated regions (DMRs)
DMR analysis was performed using Bsseq package with previously optimized settings for DMR blocks. Bsseq employs local likelihood method, aggregating information from neighboring CpGs in a coverage-conscious manner and uses the combined data from two biological replicates to estimate DNA methylation at single CpG level. For this analysis, we required that each CpG be covered at least once in all four samples compared pairwise (two biological replicates per two stages). This selection resulted in a median CpG coverage of 3X–7X per sample (or 6X–14X per duplicate) and an overall coverage of > 77% of all genomic CpGs.
For analysis of DMRs and replication timing, genomic coordinates of DMR blocks were intersected with early or late replication timing (RT) coordinates. For every region within A (early or late RT domain), a number of intersections with B (DMR block) were computed. To calculate the proportion of an overlap between DMR blocks formed between Spg and PL (‘WTSpgPL’ DMRs) and all other DMR blocks, <bedtools intersect –wa –wb> was utilized, using a file containing WTSpgPL DMRs as (-a) and separate files, each containing DMRs between WT PL and L, L and Z, Z and P, P and D as (-b). Subsequent processing of the output involved adding the number of all intersections that matched WTSpgPL DMRs and normalizing to the total number of DMRs for a particular pairwise comparison.
Annotation from Illumina’s iGenomes (genes.gtf), based on the RefSeq dataset (July 2015), was used for annotating DMRs with genes.
We used Fisher’s exact test to examine the strength of overlap between DMRs and gene transcription. For each set of DMRs, a custom Python script was used to form a 2 × 2 table containing significantly upregulated genes that overlapped with DMRs, significantly upregulated genes that fell outside of DMRs, all genes found inside of the DMRs, and all genes found outside of the DMRs. To evaluate the significance of overlap, we calculated p values using Fisher’s exact test.
Hairpin-bisulfite sequencing
Each sample consisted of pooled cells from up to two different animals from different FACS procedures. Sixty to 200 ng of DNA was restriction digested with BspE1 for 16 h at 37 °C and enzyme heat inactivated at 80 °C for 20 min. Digested DNA was extracted using phenol–chloroform–isoamyl, precipitated with ethanol and digestion verified on a 1% agarose gel. Genomic DNA was digested with BspEI, and the two complementary DNA strands were linked with a hairpin linker (5′P-CCGGGGGCCTATATAGTATAGGCCC) in a 25-μl reaction containing: 17 μl digest, 2.5 μl water, 2 μl 10 μM hairpin linker, 2.5 μl 10X T4 ligase reaction buffer and 1 μl (400U) of T4 ligase. The ligation reaction proceeded 10 h at 16C followed by bisulfite conversion using EZ DNA Methylation-Direct Kit (Zymo Research). Conditions for bisulfite conversion of hairpin L1 sequences were adjusted to include additional thermal denaturation steps (Laird et al., PNAS 2004) as follows: (1) 99 °C for 15 min, (2) 64 °C for 1 h, (3) 99 °C for 5 min, (4) 64 °C for 1.5 h, (5) 99 °C for 5 min, (6) 64 °C for 1.5 h. L1MdTf-specific PCR was performed in a 50-μl reaction containing 5 μl DNA, 1x PfU Turbo Cx reaction buffer and 2.5 units of Pfu Turbo Cx Hotstart DNA Polymerase (Agilent), 12.5 mM dNTPs, 10 μM each forward and reverse primers 5′TGGTAGTTTTTAGGTGGTATAGAT and 5′TCAAACACTATATTACTTTAACAATTCCCA resulting in 332-bp amplicon. PCR conditions were as follows: (1) 96 °C for 5 min followed by 35 cycles of (2) 96 °C for 60 s, (3) 55 °C for 45 s, (4) 72 °C for 90 s followed by the final extension at 72 °C for 5 min. PCR products were analyzed by agarose gel electrophoresis prior to processing for sequencing.
The resulting product was prepared for sequencing using Illumina TruSeq mRNA v2 kit, starting with end repair step, and sequenced on NextSeq 500. The 150 paired-end reads were aligned to L1MdTf promoter consensus sequence.
Hairpin-bisulfite sequencing analysis
Fastq reads were trimmed with Trim Galore! using the following parameters: q 30, length 100, phred33, paired, three_prime_clip_R1 6, three_prime_clip_R2 6, stringency 5. We used Bismark to align Reads 1 and 2 independently to Bismark-indexed L1TfMd 5′ end consensus sequence (5′ tccggaccggaggacaggtgcccacccggctggggaggcggcctaagccacagcagcagcggtcgccatcttggtcccgggactccaaggaacttaggaatttagtctgcttaagtgagagtctgtaccacctgggaactgccaaagcaacacagtgtctgagaaaggtcctgttttgg), using Bowtie 1 option. We used Bismak to align trimmed reads 1 and 2 to L1TfMd, independently, using the following parameters: non_directional, n 3, l 20. Subsequently, the reads were split into reads aligned to original bottom (OB) and those that aligned to complementary to original top (CTOT) using SAMtools. Thus, each read resulted in 2 files containing alignments to OB and CTOT, with read 1 OB plus read 2 CTOT, or read 2 OB plus read 1 CTOT corresponding to the two complementary strands of hairpin-bisulfite L1TfMd DNA. Bismark was used to extract CpG methylation from each file, and a custom script was used to generate a matrix where each line represented a “stiched” string of CpG methylation values for both strands of the hairpin-bisulfite L1TfMd DNA. Finally, R script was used to evaluate methylation, hemimethylation and unmethylated status of DNA.
RNA-Seq
Total RNA was isolated from FACS-enriched fractions from adult C57BL6 male mouse testis. In most cases, due to the limited availability of enriched cells, total RNA from 2–4 mice (2–4 independent FACS enrichment sessions) was pooled to create one sample. One microliter of RNA was used for evaluation on the BioAnalyzer. Ribosomal RNA (rRNA) was removed from total RNA (up to 50 ng) using Ribo-Zero Gold rRNA Removal Kit according to the manufacturer’s protocol. The TruSeq RNA Sample Preparation Kit v2 was used to prepare cDNA library from ribosomal RNA-depleted RNA. The libraries were prepared as described in the manufacturer’s protocol (Pub. Part No.: 15026495) following low sample protocol. DNA fragments were enriched with PCR for 15 cycles. One microliter of the resulting library was used for validation and quantification analysis, using Agilent Technologies 2100 Bioanalyzer and Agilent DNA-1000 chip. The cDNA libraries were sequenced as single end 50-mers using the Illumina HiSeq 2000 platform, yielding a total of ~ 246 million reads (26–66 million total reads per sample).
The quality of the raw RNA-seq libraries was evaluated using fastQC ( The fastQC-reported “Per base sequence quality” measure was very good, with more than 92% of all reads having a quality score of more than 30, and mean quality score of more than 36.
The read alignment was performed with TopHat (v2.0.7) , using short read mapping program Bowtie 2 (v2.0.6). During the alignment, we provided a transcriptome file that contained gene annotation. The reads were processed based on NCBI37/mm9 mouse genome and UCSC RefSeq gene annotation obtained from Illumina iGenomes (July 2015).
We used HTSeq package to count sequencing reads that overlap with gene transcriptome . Specifically, we used <htseq-count –s no –a 10 input.sam iGenomes.gtf> command. The output is a tab-delimited text file containing counts for each gene (gene id and number of read counts). Subsequently, to evaluate differential expression we used edgeR. Specifically, we (1) built a counts table with all samples, using DGElist function, (2) normalized counts using the default TMM method, which accounts for compositional differences between the libraries, using calcNormFactors function, (3) obtained a table with normalized count-per-million (CPM), using cpm function, which we used directly for data analysis, or converted CPM to RPKM by (cpm/gene length/1000). For the differential expression analysis, an exact test was performed with an estimated Biological Coefficient of Variation (BCV) of 0.1, and topTags function was applied. A final table containing logFC (is log2FC), logCPM (is log2CPM), p value and FDR value for each gene was obtained.
For the analysis of the transcriptional landscape of repetitive elements, we used RepEnrich according to the suggested protocol . Briefly, we aligned RNA-seq data to the genome using Bowtie 1 parameters that allow only unique mapping (-m1) and outputted multi-mapping and uniquely mapping reads into separate files. We ran RepEnrich python script on the data and then used EdgeR for subsequent processing of fraction counts file, which contained 1444 repetitive element entries. Specifically, we (1) built a counts table with all samples, using DGElist function, (2) normalized counts using the default TMM method, which accounts for compositional differences between the libraries, using calcNormFactors function, (3) obtained a table with normalized count-per-million (CPM), using cpm function, which we used directly for data analysis. For the differential expression analysis, an exact test was performed with an estimated dispersion specific to each pairwise comparison and topTags function was applied. A final table containing logFC (is log2FC), logCPM (is log2CPM), p value and FDR value for each repetitive element entry (subfamily) was obtained.
References
Zickler D, Kleckner N. Recombination, pairing, and synapsis of homologs during meiosis. Cold Spring Harb Perspect Biol. 2015;7(6):a016626.
Article
PubMed
PubMed Central
Google Scholar
Jaramillo-Lambert A, Ellefson M, Villeneuve AM, Engebrecht J. Differential timing of S phases, X chromosome replication, and meiotic prophase in the C. elegans germ line. Dev Biol. 2007;308(1):206–21.
Article
CAS
PubMed
Google Scholar
Callan HG, Taylor JH. A radioautographic study of the time course of male meiosis in the newt Triturus vulgaris. J Cell Sci. 1968;3(4):615–26.
CAS
PubMed
Google Scholar
Adler ID. Comparison of the duration of spermatogenesis between male rodents and humans. Mutat Res. 1996;352(1–2):169–72.
Article
PubMed
Google Scholar
Crichton JH, Playfoot CJ, Adams IR. The role of chromatin modifications in progression through mouse meiotic prophase. J Genet Genom. 2014;41(3):97–106.
Article
CAS
Google Scholar
Wang L, Xu Z, Khawar M, Liu C, Li W. The histone codes for meiosis. Reproduction. 2017;154(3):R65–79.
Article
PubMed
Google Scholar
Székvölgyi L, Ohta K, Nicolas A. Initiation of meiotic homologous recombination: flexibility, impact of histone modifications, and chromatin remodeling. Cold Spring Harb Perspect Biol. 2015;7(5):a016527.
Article
PubMed
PubMed Central
Google Scholar
Yelina N, Diaz P, Lambing C, Henderson IR. Epigenetic control of meiotic recombination in plants. Sci China Life Sci. 2015;58(3):223–31.
Article
CAS
PubMed
Google Scholar
Getun IV, Wu Z, Fallahi M, Ouizem S, Liu Q, Li W, et al. Functional roles of acetylated histone marks at mouse meiotic recombination hot spots. Mol Cell Biol. 2017;37(3):e00942-15.
Article
PubMed
PubMed Central
Google Scholar
Yelina NE, Choi K, Chelysheva L, Macaulay M, de Snoo B, Wijnker E, et al. Epigenetic remodeling of meiotic crossover frequency in Arabidopsis thaliana DNA methyltransferase mutants. PLoS Genet. 2012;8(8):e1002844.
Article
CAS
PubMed
PubMed Central
Google Scholar
Yelina NE, Lambing C, Hardcastle TJ, Zhao X, Santos B, Henderson IR. DNA methylation epigenetically silences crossover hot spots and controls chromosomal domains of meiotic recombination in Arabidopsis. Genes Dev. 2015;29(20):2183–202.
Article
CAS
PubMed
PubMed Central
Google Scholar
Zamudio N, Barau J, Teissandier A, Walter M, Borsos M, Servant N, Bourc’his D. DNA methylation restrains transposons from adopting a chromatin signature permissive for meiotic recombination. Genes Dev. 2015;29(12):1256–70.
Article
CAS
PubMed
PubMed Central
Google Scholar
Bourc’his D, Bestor TH. Meiotic catastrophe and retrotransposon reactivation in male germ cells lacking Dnmt3L. Nature. 2004;431(7004):96–9.
Article
PubMed
Google Scholar
Choi K, Zhao X, Lambing C, Underwood CJ, Hardcastle TJ, Serra H, et al. Nucleosomes and DNA methylation shape meiotic DSB frequency in Arabidopsis transposons and gene regulatory regions. Genome Res. 2017.
Google Scholar
Underwood CJ, Choi K, Lambing C, Zhao X, Serra H, Borges F, et al. Epigenetic activation of meiotic recombination near Arabidopsis thaliana centromeres via loss of H3K9me2 and non-CG DNA methylation. Genome Res. 2018.
Google Scholar
Aravin AA, Sachidanandam R, Bourc’his D, Schaefer C, Pezic D, Toth KF, Bestor T, Hannon GJ. A piRNA pathway primed by individual transposons is linked to de novo DNA methylation in mice. Mol Cell. 2008;31(6):785–99.
Article
CAS
PubMed
PubMed Central
Google Scholar
Barau J, Teissandier A, Zamudio N, Roy S, Nalesso V, Herault Y, Guillou F, Bourc’his D. The DNA methyltransferase DNMT3C protects male germ cells from transposon activity. Science. 2016;354(6314):909–12.
Article
CAS
PubMed
Google Scholar
Di Giacomo M, Comazzetto S, Saini H, De Fazio S, Carrieri C, Morgan M, et al. Multiple epigenetic mechanisms and the piRNA pathway enforce LINE1 silencing during adult spermatogenesis. Mol Cell. 2013;50(4):601–8.
Article
PubMed
Google Scholar
Aravin AA, van der Heijden GW, Castaneda J, Vagin VV, Hannon GJ, Bortvin A. Cytoplasmic compartmentalization of the fetal piRNA pathway in mice. PLoS Genet. 2009;5(12):e1000764.
Article
PubMed
PubMed Central
Google Scholar
Soper SF, van der Heijden GW, Hardiman TC, Goodheart M, Martin SL, de Boer P, Bortvin A. Mouse maelstrom, a component of nuage, is essential for spermatogenesis and transposon repression in meiosis. Dev Cell. 2008;15(2):285–97.
Article
CAS
PubMed
PubMed Central
Google Scholar
Shoji M, Tanaka T, Hosokawa M, Reuter M, Stark A, Kato Y, et al. The TDRD9-MIWI2 complex is essential for piRNA-mediated retrotransposon silencing in the mouse male germline. Dev Cell. 2009;17(6):775–87.
Article
CAS
PubMed
Google Scholar
Branciforte D, Martin SL. Developmental and cell type specificity of LINE-1 expression in mouse testis: implications for transposition. Mol Cell Biol. 1994;14(4):2584–92.
Article
CAS
PubMed
PubMed Central
Google Scholar
van der Heijden GW, Bortvin A. Transient relaxation of transposon silencing at the onset of mammalian meiosis. Epigenetics. 2009;4(2):76–9.
Article
PubMed
Google Scholar
Hammoud SS, Low DH, Yi C, Lee CL, Oatley JM, Payne CJ, Carrell DT, Guccione E, Cairns BR. Transcription and imprinting dynamics in developing postnatal male germline stem cells. Genes Dev. 2015;29(21):2312–24.
Article
CAS
PubMed
PubMed Central
Google Scholar
Kubo N, Toh H, Shirane K, Shirakawa T, Kobayashi H, Sato T, et al. DNA methylation and gene expression dynamics during spermatogonial stem cell differentiation in the early postnatal mouse testis. BMC Genom. 2015;16:624.
Article
Google Scholar
Oakes CC, La Salle S, Smiraglia DJ, Robaire B, Trasler JM. Developmental acquisition of genome-wide DNA methylation occurs prior to meiosis in male germ cells. Dev Biol. 2007;307(2):368–79.
Article
CAS
PubMed
Google Scholar
Getun IV, Torres B, Bois PR. Flow cytometry purification of mouse meiotic cells. J Vis Exp. 2011.
PubMed
PubMed Central
Google Scholar
Gaysinskaya V, Soh IY, van der Heijden GW, Bortvin A. Optimized flow cytometry isolation of murine spermatocytes. Cytometry Part A. 2014;85(6):556–65.
Article
Google Scholar
Gaysinskaya V, Bortvin A. Flow cytometry of murine spermatocytes. Curr Protoc Cytom. 2015;72:7.44.1-24.
PubMed
Google Scholar
Tomizawa S, Kobayashi H, Watanabe T, Andrews S, Hata K, Kelsey G, Sasaki H. Dynamic stage-specific changes in imprinted differentially methylated regions during early mammalian development and prevalence of non-CpG methylation in oocytes. Development. 2011;138(5):811–20.
Article
CAS
PubMed
PubMed Central
Google Scholar
Hansen KD, Langmead B, Irizarry RA. BSmooth: from whole genome bisulfite sequencing reads to differentially methylated regions. Genom Biol. 2012;13(10):R83.
Article
Google Scholar
Pope BD, Chandra T, Buckley Q, Hoare M, Ryba T, Wiseman FK, et al. Replication-timing boundaries facilitate cell-type and species-specific regulation of a rearranged human chromosome in mouse. Hum Mol Genet. 2012;21(19):4162–70.
Article
CAS
PubMed
PubMed Central
Google Scholar
Ryba T, Hiratani I, Lu J, Itoh M, Kulik M, Zhang J, et al. Evolutionarily conserved replication timing profiles predict long-range chromatin interactions and distinguish closely related cell types. Genome Res. 2010;20(6):761–70.
Article
CAS
PubMed
PubMed Central
Google Scholar
Yaffe E, Farkash-Amar S, Polten A, Yakhini Z, Tanay A, Simon I. Comparative analysis of DNA replication timing reveals conserved large-scale chromosomal architecture. PLoS Genet. 2010;6(7):e1001011.
Article
PubMed
PubMed Central
Google Scholar
Weddington N, Stuy A, Hiratani I, Ryba T, Yokochi T, Gilbert DM. ReplicationDomain: a visualization tool and comparative database for genome-wide replication timing data. BMC Bioinform. 2008;9:530.
Article
Google Scholar
Yarosh W, Spradling AC. Incomplete replication generates somatic DNA alterations within Drosophila polytene salivary gland cells. Genes Dev. 2014;28(16):1840–55.
Article
CAS
PubMed
PubMed Central
Google Scholar
Koren A, Handsaker RE, Kamitaki N, Karlic R, Ghosh S, Polak P, Eggan K, McCarroll SA. Genetic variation in human DNA replication timing. Cell. 2014;159(5):1015–26.
Article
CAS
PubMed
PubMed Central
Google Scholar
Boateng KA, Bellani MA, Gregoretti IV, Pratto F, Camerini-Otero RD. Homologous pairing preceding SPO11-mediated double-strand breaks in mice. Dev Cell. 2013;24(2):196–205.
Article
CAS
PubMed
PubMed Central
Google Scholar
Arand J, Spieler D, Karius T, Branco MR, Meilinger D, Meissner A, et al. In vivo control of CpG and non-CpG DNA methylation by DNA methyltransferases. PLoS Genet. 2012;8(6):e1002750.
Article
CAS
PubMed
PubMed Central
Google Scholar
Sookdeo A, Hepp CM, McClure MA, Boissinot S. Revisiting the evolution of mouse LINE-1 in the genomic era. Mob DNA. 2013;4(1):3.
Article
CAS
PubMed
PubMed Central
Google Scholar
Criscione SW, Zhang Y, Thompson W, Sedivy JM, Neretti N. Transcriptional landscape of repetitive elements in normal and cancer human cells. BMC Genom. 2014;15:583.
Article
Google Scholar
Mahadevaiah SK, Turner JM, Baudat F, Rogakou EP, de Boer P, Blanco-Rodriguez J, et al. Recombinational DNA double-strand breaks in mice precede synapsis. Nat Genet. 2001;27(3):271–6.
Article
CAS
PubMed
Google Scholar
Kim KS, Cha MC, Gerton GL. Mouse sperm protein sp56 is a component of the acrosomal matrix. Biol Reprod. 2001;64(1):36–43.
Article
CAS
PubMed
Google Scholar
Li XZ, Roy CK, Dong X, Bolcun-Filas E, Wang J, Han BW, et al. An ancient transcription factor initiates the burst of piRNA production during early meiosis in mouse testes. Mol Cell. 2013;50(1):67–81.
Article
CAS
PubMed
PubMed Central
Google Scholar
Kappler JW. The kinetics of DNA methylation in cultures of a mouse adrenal cell line. J Cell Physiol. 1970;75(1):21–31.
Article
CAS
PubMed
Google Scholar
Gruenbaum Y, Szyf M, Cedar H, Razin A. Methylation of replicating and post-replicated mouse L-cell DNA. Proc Natl Acad Sci U S A. 1983;80(16):4919–21.
Article
CAS
PubMed
PubMed Central
Google Scholar
Sharif J, Muto M, Takebayashi S, Suetake I, Iwamatsu A, Endo TA, et al. The SRA protein Np95 mediates epigenetic inheritance by recruiting Dnmt1 to methylated DNA. Nature. 2007;450(7171):908–12.
Article
CAS
PubMed
Google Scholar
Liu X, Gao Q, Li P, Zhao Q, Zhang J, Li J, Koseki H, Wong J. UHRF1 targets DNMT1 for DNA methylation through cooperative binding of hemi-methylated DNA and methylated H3K9. Nat Commun. 2013;4:1563.
Article
PubMed
Google Scholar
Jue K, Bestor TH, Trasler JM. Regulated synthesis and localization of DNA methyltransferase during spermatogenesis. Biol Reprod. 1995;53(3):561–9.
Article
CAS
PubMed
Google Scholar
Gan H, Wen L, Liao S, Lin X, Ma T, Liu J, et al. Dynamics of 5-hydroxymethylcytosine during mouse spermatogenesis. Nat Commun. 1995;2013:4.
Google Scholar
Holm PB. The premeiotic DNA replication of euchromatin and heterochromatin in Lilium longiflorum (Thunb.). Carlsberg Res Commun. 1977;42(4):249–81.
Article
CAS
Google Scholar
Monesi V. Autoradiographic study of DNA synthesis and the cell cycle in spermatogonia and spermatocytes of mouse testis using tritiated thymidine. J Cell Biol. 1962;14:1–18.
Article
CAS
PubMed
PubMed Central
Google Scholar
Crone M, Levy E, Peters H. The duration of the premeiotic DNA synthesis in mouse oocytes. Exp Cell Res. 1965;39(2):678–88.
Article
CAS
PubMed
Google Scholar
Williamson DH, Johnston LH, Fennell DJ, Simchen G. The timing of the S phase and other nuclear events in yeast meiosis. Exp Cell Res. 1983;145(1):209–17.
Article
CAS
PubMed
Google Scholar
Watanabe Y, Yokobayashi S, Yamamoto M, Nurse P. Pre-meiotic S phase is linked to reductional chromosome segregation and recombination. Nature. 2001;409(6818):359–63.
Article
CAS
PubMed
Google Scholar
Cha RS, Weiner BM, Keeney S, Dekker J, Kleckner N. Progression of meiotic DNA replication is modulated by interchromosomal interaction proteins, negatively by Spo11p and positively by Rec8p. Genes Dev. 2000;14(4):493–503.
CAS
PubMed
PubMed Central
Google Scholar
Borde V, Goldman AS, Lichten M. Direct coupling between meiotic DNA replication and recombination initiation. Science. 2000;290(5492):806–9.
Article
CAS
PubMed
Google Scholar
Merino ST, Cummings WJ, Acharya SN, Zolan ME. Replication-dependent early meiotic requirement for Spo11 and Rad50. Proc Natl Acad Sci. 2000;97(19):10477–82.
Article
CAS
PubMed
PubMed Central
Google Scholar
Maloisel L, Rossignol JL. Suppression of crossing-over by DNA methylation in Ascobolus. Genes Dev. 1998;12(9):1381–9.
Article
CAS
PubMed
PubMed Central
Google Scholar
Bianchi NO, Larramendy M, Bianchi MS. The asymmetric methylation of CG palindromic dinucleotides increases sister-chromatid exchanges. Mutat Res. 1988;197(1):151–6.
Article
CAS
PubMed
Google Scholar
Albanesi T, Polani S, Cozzi R, Perticone P. DNA strand methylation and sister chromatid exchanges in mammalian cells in vitro. Mutat Res. 1999;429(2):239–48.
Article
CAS
PubMed
Google Scholar
Putnam CD. Evolution of the methyl directed mismatch repair system in Escherichia coli. DNA Repair (Amst). 2016;38:32–41.
Article
CAS
Google Scholar
Jones PA, Liang G. Rethinking how DNA methylation patterns are maintained. Nat Rev Genet. 2009;10(11):805–11.
Article
CAS
PubMed
PubMed Central
Google Scholar
Liang G, Chan MF, Tomigahara Y, Tsai YC, Gonzales FA, Li E, Laird PW, Jones PA. Cooperativity between DNA methyltransferases in the maintenance methylation of repetitive elements. Mol Cell Biol. 2002;22(2):480–91.
Article
CAS
PubMed
PubMed Central
Google Scholar
Malki S, van der Heijden GW, O’Donnell KA, Martin SL, Bortvin A. A role for retrotransposon LINE-1 in fetal oocyte attrition in mice. Dev Cell. 2014;29(5):521–33.
Article
CAS
PubMed
PubMed Central
Google Scholar
Krueger F, Andrews SR. Bismark: a flexible aligner and methylation caller for Bisulfite-Seq applications. Bioinformatics. 2011;27(11):1571–2.
Article
CAS
PubMed
PubMed Central
Google Scholar
TopHat. Accessed 20 Mar 2018.
HTSeq: analysing high-throughput sequencing data with Python. Accessed 20 Mar 2018.
RepEnrich. Accessed 20 Mar 2018.
Alberts B, Johnson A, Lewis J, Raff M, Roberts K, Walter P. Molecular biology of the cell. 4th ed. New York: Garland Science; 2002.
Google Scholar
Replication Domain. Accessed 20 Mar 2018.
Download references
Authors’ contributions
AB and VG conceived and designed the experiments and wrote the manuscript. VG, GWvdH, CDL performed the experiments. VG (data analysis and bioinformatics), BM (bioinformatics and custom Python scripts) and KH (bioinformatics advice and guidance related to WGBS analysis) analyzed the data. All authors read and approved the final manuscript.
Acknowledgements
We thank Fred Tan for helping with bioinformatics analyses, Svetlana Deryusheva, Safia Malki and Marla Tharp for constructive feedback on the manuscript.
Availability of data and materials
The datasets supporting the conclusions of this article are available under BioProject accession number PRJNA326117 in the NCBI BioProject database (
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Ethics approval and consent to participate
All experimental procedures were performed in compliance with ethical regulations and approved by the IACUC of Carnegie Institution for Science. The study did not involve human participants, human data or human tissues.
Funding
This research was supported by the endowment of Carnegie Institution for Science. The funding body had no role in the design of the study and collection, analysis, and interpretation of the data.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Author information
Authors and Affiliations
Department of Embryology, Carnegie Institution for Science, Baltimore, MD, USA
Valeriya Gaysinskaya, Chiara De Luca & Alex Bortvin
Department of Biology, Johns Hopkins University, Baltimore, MD, USA
Valeriya Gaysinskaya & Brendan F. Miller
Translational and Functional Genomics Branch, National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA
Brendan F. Miller
Department of Obstetrics and Gynaecology, Erasmus MC, University Medical Center, PO BOX 2040, 3000 CA, Rotterdam, The Netherlands
Godfried W. van der Heijden
Department of Biostatistics, Johns Hopkins University, Baltimore, MD, USA
Kasper D. Hansen
Center for Computational Biology, Johns Hopkins University, Baltimore, MD, USA
Kasper D. Hansen
McKusick-Nathans Institute of Genetic Medicine, Johns Hopkins University, Baltimore, MD, USA
Kasper D. Hansen
Search author on:PubMed Google Scholar
Search author on:PubMed Google Scholar
Search author on:PubMed Google Scholar
Search author on:PubMed Google Scholar
Search author on:PubMed Google Scholar
Search author on:PubMed Google Scholar
Corresponding author
Correspondence to
Alex Bortvin.
Additional files
Additional file 1: Figure S1.
Schematic representation of main events in meiotic prophase I. Following premeiotic DNA replication in preleptonema (PL), parental homologous chromosomes (each containing two sister chromatids) develop chromosome axes (marked by SYCP3 protein), pair and synapse in leptonema (L) and zygonema (Z). Synapse is complete in pachynema (P) indicated by the complete overlap of SYCP3 and SYCP1 proteins. Following the completion of meiotic recombination, the synaptonemal complex disassembles in diplonema (D). Approximate duration of MPI substages are indicated (hrs). Figure adapted from .
Additional file 2: Figure S2.
Sample FACS analysis of adult murine testicular cells based on Hoechst 33342 dye staining. In summary, (A) initial gating on individual testicular populations based on Hoechst-blue and Hoechst-red fluorescence. R0 (excluded) is enriched in haploid spermatids; R1 (purple) is enriched in meiosis II spermatocytes; R2 (green) is enriched in leptotene (L); R3 is enriched in zygotene (Z) cells; R4 (dark green) is enriched in pachytene (P) spermatocytes; R5 (purple) is enriched in diplotene (D) spermatocytes; R6 (orange) contains spermatogonia (Spg) and somatic cells (Soma) that will be separated during subsequent back-gating; R7 (red) is enriched in preleptotene (PL) spermatocytes. (B) Gating tree formed after gating-based Hoechst dye staining followed by back-gating on forward scatter (FSC) and side scatter (SSC). Back-gating involves projection of a gate from the Hoechst plot onto an FSC/SSC plot, where the final Spg, L, Z, P and D enrichment gates are created. (C) Back-gates used to enrich for L (from R2 gate), Z (from R3 gate), P (from R4 gate) and D (from R5 gate). (D) Back-gates used to enrich for Spg and Soma (from R6 gate), shown in relation to P and D. (E) DNA content of enriched germ cells based on Hoechst-blue fluorescence histogram. The “DNA” gate used for cell sorting excludes 1C content (haploid) and includes cells with 2C through 4C DNA content where C is the amount of DNA within a haploid nucleus. The 2C region contains both, diploid Spg and Soma; the bimodal 4C region is enriched in L and Z and P and D spermatocytes; 2C-4C DNA content contains PL cells.
Additional file 3: Figure S3.
Transcript abundance of select genes in individual MPI germ cells using RNA-seq, expressed as RPKM. Prominent transcripts from (A) testicular somatic cells, including Sertoli, Leydig and Macrophage cells, were examined to assess the level of contamination and included Amh, Ccl2, Cd9, Cyp11a1, Cyp17a1, Fn1, Fshr, Gap43, Gata1, Gata4, Gpc3, Lhcgr, Lum, Mmp12, Mmp9, Pla2g4a, Rlf, Star, Tead2 and Vcam1. Meiosis-specific gene Mlh3 is used for relative comparison. Transcript abundance of genes associated with (B) differentiated (blue) and undifferentiated (beige) Spg, (C) meiosis-specific synaptonemal complex (SC) formation, (D) meiotic onset (Stra8), meiosis-specific sister chromatid cohesion complex (Smc1b, Rec8, Rad21l and Stag3), recognition of meiotically programmed DSBs (H2afx) and other early meiosis-associated genes (e.g., Mei1), (E–F) DSB formation and repair and recombination were evaluated. (G-H) Replication-dependent histone variant genes are highly expressed and enriched in PL spermatocytes. Twelve replication-dependent histone variant genes with high transcript abundance are shown. Selected are whose genes that are known to be highly enriched in early spermatocytes at 9-dpp testis, but not 2-dpp (gonocytes), 25-dpp (enriched in round spermatids) or 60-dpp (enriched in haploid cells). (I) Two genes, H3f3a and H3f3b, encoding replication-independent histone H3.3 were examined.
Additional file 4: Table S1.
Bisulfite sequencing, alignment and read de-duplication results summary.
Additional file 5: Table S2.
The efficiency of bisulfite conversion based on unmethylated lambda DNA.
Additional file 6: Table S3.
Methylation evidence results for the uniquely aligned, de-duplicated and M-bias filtered reads.
Additional file 7: Table S4.
Pearson correlation between biological replicates.
Additional file 8: Table S5.
Coverage and methylation evidence for common CpGs (covered in common between samples of individual biological replicate groups).
Additional file 9: Figure S4.
CpG DNA methylation levels across chromosome length. DNA methylation was averaged using sliding non-overlapping windows of 100 kb.
Additional file 10: Figure S5.
CpG DNA methylation levels across chromosome length. DNA methylation was averaged using sliding non-overlapping windows of 100 kb.
Additional file 11: Figure S6.
DNA methylation dynamics on chromosome X compared to autosomes. Biological replicate 1 (left panel) and 2 (right panel): were examined independently. CpG methylation was averaged in bins of 100 CpGs (A), and then, Pearson correlation was calculated (B). The number of different CpGs (CpG loci) evaluated for biological replicates 1 and 2 was as follows: chrX (214,981 and 266,439), chr1 (6,983,222 and 7,564,250), chr2 (1,022,879 and 1,103,479), chr3 (805,182 and 875,167), chr19 (383,441 and 412,451) all minus chrX (no chrX) (13,667,873 and 14,804,983).
Additional file 12: Figure S7.
(A) Box-and-whisker plot of DNA methylation levels across promoters and CpG islands. The average DNA methylation levels were aggregated as consecutive, non-overlapping averages of 100 CpGs. Averages were combined for biological replicates. (B) Box-and-whisker plot of DNA methylation levels across various genomic features. The average DNA methylation levels were aggregated as consecutive, non-overlapping averages of 100 CpGs. Averages were combined for biological replicates. (C) Plots of mean DNA methylation levels of maternal and paternal select imprinted differentially methylated regions (DMRs) across MP.
Additional file 13: Table S6.
Average DNA methylation results for different functional genomic elements.
Additional file 14: Table S7.
Characteristics of pairwise comparisons for large DMR block analysis.
Additional file 15: Table S8.
(A) Features of large differentially methylated blocks in germ cells. (B) Mean percentage (%) of genomic feature quantitation (covered by remethylating DMRs).
Additional file 16: Figure S8.
DNA methylation pattern in PL overlaps with replication timing, an example of chromosome 14, biological replicate one. (A) Plot of CpG DNA methylation of MPI stages, premeiotic Spg and post-meiotic Spz, across chromosome 14, which is ~ 125 Mbp long. Biological Replicate 1 is shown. (B) Replication timing (RT) data from CH12 cells (mouse B cell lymphoma) and (C) genome sequencing coverage after WGBS-seq, viewed in SeqMonk program.
Additional file 17: Figure S9.
DNA methylation pattern in PL overlaps with replication timing, an example of chromosome 16. Biological replicate 1 is shown on the left, and replicate 2 on the right. (A,D) DNA methylation, (B,E) replication timing (RT) and (C, F) genome sequencing coverage for two biological replicates.
Additional file 18: Figure S10.
Genome-wide relationship between replication timing and DNA methylation. (A) Replication timing (RT) data from CH12 cells (mouse B cell lymphoma) were correlated with CpG DNA methylation corresponding to late RT domains. Note a prominent switch in correlation directionality from PL to L. Biological replicates (Reps 1 and 2) were processed individually and are shown in light and dark red. (B) The PL cell fraction enriched by FACS contains replicating cells. More than 70% of FACS-enriched PL cells are EdU + , enriched in mid- and late- S phase, based on the characteristic EdU staining patterns.
Additional file 19: Figure S11.
L1 hairpin-bisulfite sequencing amplicon analysis. Specific primers, Primer 1 and Primer 2 were used to amplify bisulfite-converted (BSC) and hairpin-linked L1MdTf promoter region. Primer 1 corresponds to BSC original top (OT) strand; Primer 2 corresponds to BSC complementary to original top (CTOT) strand. CpGs analyzed for hemimethylation with hairpin-bisulfite sequencing are indicated.
Additional file 20: Table S9.
Hairpin-bisulfite sequencing of L1, alignment results summary.
Additional file 21: Figure S12.
Analysis of transcript abundance of repetitive elements by RNA-seq. RepEnrich (fractional counts strategy) was used to calculate the total abundance of repetitive elements, expressed in counts per million mapping reads (CPM) for A) reads that map to all types of repeats annotated by repeat masker (n = 1266 types) and B) reads that map to LINE subfamilies (n = 121)(Supplemental Table S11).
Additional file 22: Figure S13.
L1ORF1p expression in MPI of male germ cells. Temporal expression of L1ORF1p (green) was evaluated in testicular cryosections in the context of seminiferous epithelial cycle composed of stages I–XII. Haploid spermatids are identified based on numbers 1 through 16 according to degree of differentiation (only some are highlighted here). The basal membrane is outlined by the bright cross-reacting red staining. Following spermatogenic progression based on acrosome development marked by sp56 (red) and DNA stain, DAPI (blue), it is determined that cytoplasmic L1ORF1p is first seen in L/Z spermatocytes at stage XI (or X–XI) and persists from Z (stages XI–XII) to mid-P spermatocytes (stages I through VI–VII). L1ORF1p is not detectable in late P cells found stages VII–X, but is evident in the cytoplasm of elongating spermatids (see stages VII–IX) and is also detected as small dots in early round spermatids. The sp56 staining for spermatids beyond step 13 is difficult to see here, since the acrosome spreads very thin at this time. The selection inside the white box of the merged image (top row) is shown as a close-up inset in the DAPI-containing image row and represents a single confocal plane in an otherwise 3-D stacked image, highlighting the cytoplasmic distribution of L1ORF1p in meiotic prophase I spermatocytes. Bar = 10 micron.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( applies to the data made available in this article, unless otherwise stated.
Reprints and permissions
About this article
Cite this article
Gaysinskaya, V., Miller, B.F., De Luca, C. et al. Transient reduction of DNA methylation at the onset of meiosis in male mice.
Epigenetics & Chromatin 11, 15 (2018).
Download citation
Received: 28 March 2018
Accepted: 30 March 2018
Published: 04 April 2018
DOI:
Share this article
Anyone you share the following link with will be able to read this content:
Sorry, a shareable link is not currently available for this article.
Provided by the Springer Nature SharedIt content-sharing initiative
Keywords
Advertisement
Epigenetics & Chromatin
Epigenetics & Chromatin
ISSN: 1756-8935
Contact us
Read more on our blogs
Receive BMC newsletters
Manage article alerts
Language editing for authors
Scientific editing for authors
Policies
Accessibility
Press center
Support and Contact
Leave feedback
Careers
Follow BMC
BMC Twitter page
BMC Facebook page
BMC Weibo page
By using this website, you agree to our
Terms and Conditions,
Your US state privacy rights,
Privacy
statement and
Cookies policy.
Your privacy choices/Manage cookies we use in the preference centre.
Follow BMC
By using this website, you agree to our
Terms and Conditions,
Your US state privacy rights,
Privacy
statement and
Cookies policy.
Your privacy choices/Manage cookies we use in the preference centre.
© 2025 BioMed Central Ltd unless otherwise stated. Part of
Springer Nature. |
17281 | https://www.icas.org/icas_archive/ICAS2020/data/papers/ICAS2020_0751_paper.pdf | ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS AND THEIR DEPENDENCE ON PROPULSIVE EFFICIENCY Xin Geng1, Tianxiang Hu1,2, Peiqing Liu1, Tomas Sinnige2 & Georg Eitelberg2 1Key Laboratory of Aero-Acoustics Ministry of Industry and Information Technology (Lu Shijia Laboratory), Beihang University (BUAA), Beijing, 100191, China. 2Flight Performance and Propulsion Section (FPP), Faculty of Aerospace Engineering, Delft University of Technology, 2629 HS Delft, The Netherlands. Abstract The increasing demand for short-range passenger air transport and the strong push for aircraft with electric propulsion has renewed research interest in propellers. Despite the unmatched aerodynamic efficiency of propellers, their relatively high noise emissions limit widespread application on aircraft. Previous research has not systematically addressed the tradeoff between aerodynamic and aeroacoustic performance. This paper presents the results of an optimization study aimed at minimizing propeller noise without compromising aerodynamic efficiency. In the optimization, a blade-element-momentum-theory (BEMT) model is utilized which accounts for the effects of blade sweep on the blade loading. This BEMT model is coupled to a frequency-domain code for tonal noise prediction. A novel scaling approach is presented to directly relate the propeller noise emissions to the propeller thrust. Dedicated wind-tunnel experiments were performed to validate the analysis models. Good agreement between numerical and experimental results is obtained at low to moderate blade loading conditions. The optimization study shows that the blade sweep is an important design parameter to simultaneously maximize aerodynamic and acoustic performance. Compared to a modern baseline design, a noise reduction of 2.9 dB is achieved without reduction in propeller efficiency. Keywords: aircraft propulsion, propellers, aeroacoustics, multi-objective optimization, noise reduction 1. Introduction Propeller performance is a topic long studied in the aviation industry, and it is well known that propellers can provide aircraft propulsion with exceptionally high propulsive efficiency. The increasing demand for short-range passenger transportation and aircraft with (distributed) electric propulsion has led to renewed interest in propellers by the research community. However, a major challenge needs to be overcome in order to enable widespread use of propellers: noise emissions. Contrary to turbofan engines, that feature engine casings with extensive acoustic liners, unducted propellers provide no means for noise attenuation. Instead, the noise generation needs to be mitigated at the source, by improving the blade design and operation. In recent times, several studies have focused on broadband noise reductions for small-scale propellers, using techniques such as forced transition and trailing-edge modifications such as serrations , which have been shown to indeed enable broadband noise reductions. Serrations are also widely investigated in the wind-turbine industry . For high-speed propellers relevant to passenger aircraft, however, the major noise sources are of tonal nature, caused by the periodic passages of the loaded propeller blades . Therefore, in order to achieve effective propeller noise reductions, the acoustic blade optimization should focus on the tonal noise component. Multi-objective optimizations for minimum sound-pressure level and maximum aerodynamic efficiency have been performed by Marinus using RANS simulations coupled with a time-domain acoustic solver. Others - have taken a similar approach with lower-fidelity aerodynamic analysis tools coupled to comparable acoustic solvers. The existing optimization studies indicate the potential improvements that can be achieved by optimizing the propeller design and operation. However, due to the black box nature of the optimization approaches involved, the results from these optimization ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 2 studies do not provide insight into the sensitivities of the blade performance to the design parameters. As such, they cannot provide effective design guidelines for propellers. To this end, sensitivity studies have been performed as well , often investigating a subset of design variables with a one-variable-at-a-time approach. Most of the previous studies [6,7,10] show that the blade sweep angle is an important design parameter to reduce the noise emissions without significantly penalizing the aerodynamic efficiency of the propeller. The physical mechanism behind this noise reduction due to sweep is acoustic interference due to phase offsets between the acoustic waves emanating from the different blade sections . In all previous work on propeller noise emissions, the acoustic data have been expressed in terms of the nondimensional sound pressure level and usually dimensional frequency. However, these are not scaling parameters, and therefore are unsuitable to compare propeller performance between different designs and scale from model scale to full scale in case of experimental activities. The goal of this paper is to obtain acoustic scaling parameters to be able to describe the propeller noise emissions in terms of scaled amplitude and frequency, and subsequently use the newly defined scaling approach in an aerodynamic-acoustic optimization approach based on validated analysis tools. Fast, low-order aerodynamic and acoustic analysis methods were used in order to minimize turn-around time of the optimization routine, thereby making it suitable for preliminary design activities. Dedicated validation experiments were performed to benchmark the analysis tools. The optimization study focused on the blade sweep angle considering its potential in reducing the propeller noise emissions without penalizing aerodynamic performance. The outline of the paper is as follows. The acoustic scaling is discussed in Section 2. Section 3 then describes the aerodynamic and acoustic analysis models. Subsequently, Section 4 describes the validation experiments, followed by the validation of the numerical models in Section 5. Section 6 then discusses the optimization study, after which conclusions are provided in Section 7. 2. Scaling of Propeller Noise Emissions Most of the propeller noise emission measurement results are characteristically presented in terms of sound pressure level SPL vs. frequency f. An example is provided in Figure 1. The SPL is defined as a logarithmic variable, where the rms value of the measured pressure oscillation prms is normalized with a reference pressure for a given observed frequency range: 10 20log rms ref p SPL p (1) The frequency is often presented in an unscaled manner, in dimensions of Hz. Figure 1 – Measured sound pressure level spectra of propeller noise in the far-field for various configurations. Usually both the frequency range (20 Hz – 20 kHz) as well as the reference pressure pref = 20 μPa used to normalize the measured acoustic pressure originate from human audibility characteristics. They represent the frequency range and minimal pressure audible to healthy adults. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 3 Considering the need to use ground based experiments to predict flight performance, scaling parameters for extrapolation to flight need to be derived. Human audibility is not a parameter that can be used for scaling. Therefore, a physics-based scaling approach was defined for scaling of propeller noise measurements. For a preliminary analysis, we limit ourselves to the tonal components of the rotating propellers and ignore the variations in the broadband noise; the latter usually being orders of magnitude below the amplitude of the tonal noise observed at blade passing frequency (BPF) and its harmonics, see also the same figure 1 above. So instead of using a broadband reduced frequency formulation, only the frequency determined by the rotation speed of the propeller is scaled according to the Strouhal scaling: nD Str u (2) In this formulation n is the rotational speed of the propeller, D is the diameter of the propeller and u∞ is the freestream/flight velocity. In the further discussion, we choose to normalize the acoustic frequency with the blade passing frequency BPF, which is a blade number multiple of the rotational speed n. The Strouhal number will then be identically reproduced in the scaled experiment, when the geometrically similar propeller is operated at the same advance ratio J as the full-scale propeller. So, by fulfilling the advance ratio similarity we simultaneously obtain the Strouhal number similarity for the blade passing frequency as well. This means that the frequency of interest, the BPF, is a scalable parameter in the experiments and we can focus our discussion on that scaled frequency in the following. In order to obtain also scalability for the noise amplitude at the scaled frequencies, we focus on the prime task of the propeller: producing thrust. In the actuator disk approximation, the thrust is the result of the pressure jump across the disk. Thus we want to evaluate the effect of the pressure jump across the actuator disk – the disk loading – on the generation of noise per type of propeller geometry. Thus, instead of presenting the acoustic pressure variations as a fraction of the audible pressure threshold, we present it as a dimensionless fraction Π of the disk loading: 2 20log rms p D T (3) where T is the thrust of the propeller and D the diameter, as noted above. The dimensionless fraction defined by Equation 3 is referred to as thrust-specific sound pressure level (TSSP) in the remainder of the paper. It is obvious that scaling with the thrust alone, while necessary, is not sufficient for scaling the acoustic behavior of the propeller. To start with, the thickness noise is present even when the thrust T = 0 at finite advance ratios and even in the windmilling conditions. This indicates that the Mach number dependence of the noise at the respective advance ratios J has to be accounted for in the scaling of propeller noise. This is because of the possibility to vary the inflow velocity and still maintain the advance ratio. Both for the thrust-scaled noise as well as for the thickness noise – the dominant contributors to the tonal components of noise – the Mach number scaling is imperative. In the presented optimization process, the Mach number dependency was not further explored; only one freestream condition was considered. The propeller operating conditions were thus defined in terms of performance, the thrust (coefficient) and efficiency, as these were considered as sufficient to determine the aerodynamic behavior. For the acoustic optimization this means that the Mach number dependency is at present treated as an independent parameter. The validity of this treatment still needs to be verified in the future. The difficulty to verify the above approach with the help of the current experiments is that the experiments were performed at constant angular frequency n and the advance ratio was varied by changing the freestream velocity. The optimization, on the other hand was performed at constant freestream Ma, while the advance ratio J was varied by changing the angular frequency n. Both approaches result in a Mach number variation, albeit in different ways and directions. In a simplified approach, the Mach number scaling can either be obtained from the same theory due to Hanson as applied in the design process discussed in the present paper, or from the even ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 4 more generic discussion as formulated by Lighthill in Crocker et al. In this approach, the lifting surface, like the propeller blade in our approximation, produces noise which scales with the 6th power of the Mach number, see Hutcheson et al. This would account for both the dipole as well as quadrupole contribution in the description of the noise generation mechanism. The experimental data will be analyzed in light of the above considerations. 3. Numerical analysis models 3.1 BEMT prediction of propeller performance The propeller performance was predicted with a standard blade element momentum theory (BEMT) model , extended to account for the effects of blade sweep. The blade was divided into a set of 25 blade elements, and each blade element is treated as a two dimensional airfoil with its local flow schematics as presented in Figure 2. Figure 2 – Blade section with velocity triangle and resulting loading. In Figure 2, V0 is the incoming flow velocity, W is total kinematic velocity of the actual airflow, va is axial induced velocity, vt is tangential induced velocity, ns is the rotational speed in Hz, θ is pitch angle, β is the interference angle, α is the angle of attack, γ is the drag-to-lift angle, ϕ0 is the airflow angle ϕ0 = tan-1(V0/2πrns). Based on this blade element model, the local thrust and torque can be written as: { 𝑑𝑇= 1 2 𝜌𝑉 0 2 𝑐𝑙𝑏(1+𝑎)2 sin2 𝜙cos 𝛾cos(𝜙+ 𝛾)𝑑𝑟 𝑑𝑄= 1 2 𝜌𝑉 0 2 𝑐𝑙𝑏(1+𝑎)2 sin2 𝜙cos 𝛾𝑟sin(𝜙+ 𝛾)𝑑𝑟 (4) where a is the axial induction factor that is defined as a = va /V0. According to the actuator disk theory, the local thrust and torque are also given by: { 𝑑𝑇= 4𝜋𝑟𝑑𝑟𝜌𝑉 0 2(1 + 𝑎)𝑎 𝑑𝑄= 4𝜋𝜌𝑉 0(1 + 𝑎)(2𝜋𝑟𝑛𝑠)𝑎′𝑟2𝑑𝑟 (5) where a' is the tangential induction factor defined as a’ = vt /2πrns. Combining Eq.4 and 5 and re-arranging, the expression can be written as: 𝑐𝑙𝜎= 4 sin𝜙tan(𝜙−𝜙0) 1−tan𝛾tan(𝜙−𝜙0) (6) where cl is the lift coefficient of blade element, cl = 2dL/(ρW2bdr), and σ is the local blade solidity defined as σ = NBb/(2πr). In the present study, the aerodynamic performance of each blade element (lift and drag polars) were analyzed using XFOIL Version 6.99 in a viscous mode, in which both local Reynolds number and Mach number were considered with a transition setting at 0.1c downstream the leading-edge over both sides of the airfoil. Eq.6 can be further simplified by substituting ϕ = ϕ0 – β: 𝑐𝑙𝜎= 4 sin(𝜙0+𝛽) tan 𝛽 1−tan𝛾tan 𝛽 (7) Eq.7 is an implicit equation, and can be solved for the interference angle β with a Newton-Raphson method. Once the interference angle is known, the induction factors in both the axial and tangential directions can be found from Eq.8: {𝑎= tan𝜙[1+tan𝜙0tan (𝜙+𝛾)] tan𝜙0[1+tan𝜙tan (𝜙+𝛾)] −1 𝑎′ = atan𝜙0 tan(𝜙+ 𝛾) (8) ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 5 Previous work by Burger accounted for the impact of the sweep angle Λ on aerodynamic and acoustic propeller performance using the BEM implementation by Rosen and Gur . In this study, the sweep is defined by the mid chord line of the blade in the plane of rotation. The sweep is computed by drawing a line from each mid chord point on the airfoil stations towards the next. Then, a reference line parallel to the center line of the unswept blade is drawn which intersects the mid chord line. The angle between the parallel reference line and the mid chord line is the local angle of attack. Any out of plane translation for sweep or lean is not accounted for using this method. Consistent with , the midchord alignment (MCA) describes the deviation of the blade section midchord from the helicoid swept out by the pitch change axis. The geometric relationship between the Λ and MCA is: Λ = atan 𝑀𝐶𝐴 𝑟 (10) with r is the radial station of the blade section, as is depicted in Figure 3. Figure 3 – Definition of midchord alignment (MCA). Due to the addition of sweep, the spanwise extent of each blade section is adjusted. This is done by correcting dr, as can be seen in Equation (11): 𝑑𝑙= 𝑑𝑟 cosΛ (11) With dl known, the thrust and torque at each radial segment can be computed. The total thrust and torque are obtained by a simple summation of the values at each station: { 𝑇= 1 2 𝜌𝑉 0 2𝑁𝐵∫ 𝑐𝑙𝑏(1+𝑎)2 sin2𝜙 cos𝛾cos(𝜙+ 𝛾)𝑑𝑙 𝑅 0 𝑄= 1 2 𝜌𝑉 0 2𝑁𝐵∫ 𝑐𝑙𝑏(1+𝑎)2 sin2𝜙 cos𝛾𝑟sin(𝜙+ 𝛾)𝑑𝑙 𝑅 0 (12) For the three-dimensional correction of the current BEMT model, a momentum loss factor F was implemented: ℱ= 2 𝜋cos−1𝑒−𝑓 (13) where f is: 𝑓= 𝑁𝐵 2 𝑅−𝑟 𝑅sin𝜙 (14) 3.2 Frequency-domain prediction of propeller noise The current study only models the tonal component of the propeller noise. As will be shown later, this was an appropriate choice since the sound-pressure level of the measured tonal noise was about 20 dB above that of the broadband noise. It is well-known that tonal propeller noise is composed of contributions from thickness and loading sources. The frequency-domain noise prediction method proposed by Hanson was used to predict the far-field noise emissions from the propeller. In this analysis, the propeller noise waveform is Fourier transformed and expressed as: 𝑝(𝑡) = ∑ 𝑃 𝑚𝐵exp[−𝑖𝑚𝐵Ω𝐷𝑡] ∞ 𝑚=−∞ , (15) ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 6 with PmB the complex Fourier coefficient of the acoustic pressure, consisting of contributions due to blade volume (PVm), drag (PDm), and lift (PLm): PmB = PVm + PDm + PLm (16) { 𝑃 𝑉𝑚 𝑃 𝐷𝑚 𝑃 𝐿𝑚 } = − 𝜌0𝑐0 2𝐵𝑠𝑖𝑛𝜃𝑒𝑥𝑝[𝑖𝑚𝐵(𝛺𝐷𝑟 𝑐0 −𝜋 2)] 8𝜋𝑦 𝐷(1 −𝑀𝑥𝑐𝑜𝑠𝜃) × ∫𝑀𝑟 2 𝑒𝑖(𝜙0+𝜙𝑠)𝐽𝑚𝐵( 𝑚𝐵𝑧𝑀𝑇sin𝜃 1−𝑀𝑥cos 𝜃) { 𝑘𝑥 2𝑡𝑏𝛹𝑉(𝑘𝑥) 𝑖𝑘𝑥(𝑐𝑑/2)𝛹𝐷(𝑘𝑥) −𝑖𝑘𝑦(𝑐𝑙/2)𝛹𝐿(𝑘𝑥) } 𝑑𝑧 (17) 4. Setup of validation experiments 4.1 Propeller model As a reference for the analysis and optimization, a scaled propeller model was tested as shown in Figure 4. The propeller consists of six blades (NB = 6), with an overall diameter of D = 0.6 m and a hub diameter of 0.2D. The distribution of the planform design parameters (relative chord length c/D, relative thickness t/c, twist angle χ and normalized sweep MCA/R) of the propeller along the radial direction are shown in Figure 5. The blades are characterized by nonzero and radially varying midchord alignment, and the thickness and twist angle of the blades gradually decrease along the radial direction. The chord length remains almost constant in the radial direction up to r/R = 0.65, and after that decreases rapidly toward the slender tip. The propeller was driven by a servo motor that can deliver 5 kW power at a maximum rotational speed of 3000 rpm. Figure 4 – CAD drawing of the test model propeller. a) Chord (c/D) and thickness (t/c) b) twist (χ) and Sweep (MCA) Figure 5 – Radial distributions of relative chord length c/D, relative thickness t/c, twist angle χ and normalized midchord alignment MCA/R of the propeller blade. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 7 4.2 Wind-tunnel facility The tests were conducted in Beihang’s D5 acoustic wind tunnel. This closed-circuit, open-jet tunnel features a test section size of 1m (H) × 1m (W) × 2.5m (L) as shown in Figure 6. The test section is surrounded by an anechoic chamber to provide non-reflecting test conditions. The size of the anechoic chamber is 7 m (L) × 6 m (W) × 6 m (H), and the lower cut-off frequency is 200 Hz . The background noise level at the frequencies of the BPF and its first four harmonics are 50.1, 43.6, 43.3, 37.4 and 32.6 dB, respectively (1BPF up to 5BPF). The experimental rig and inner surface of the open section collector were covered with sponge material to minimize acoustic interference. Figure 6 – Propeller performance test in D5 acoustic wind tunnel. 4.3 Measurement techniques Measurements were taken of the integral forces produced by the propeller, as well as the acoustic emissions in the far field. The propeller performance (thrust, torque) was measured using a six-component beam strain-gauge balance mounted between the propeller and motor. The output signal from the force balance was fed into the data acquisition system via a slip-ring device. The data acquisition system consists of an analog-to-digital converter, a signal amplifier and a data acquiring computer. The test sample rate is 1 kHz, and the sample time 60 seconds for each test point. The acoustic emissions from the propeller were measured with five far-field microphones, see Figure 7. Far-field noise was measured using a Brüel & Kjær 12-channel acoustic vibration analysis system, which includes a 12-channel compact LAN-XI module and 1/2 inch free-field microphones (type 4189). The free-field microphone sensitivity is 50 mV/Pa, and the input range is 14.6 - 146 dB. The acoustic signal was measured over a time interval of 41.75 seconds at a sampling frequency of 65,536 Hz. The far-field microphone array allowed the microphones to be placed 2.5 m (approx. 4 diameters) away from the geometric center of the propeller disk, with axial directivity angles between 50° and 145° (5° interval). Figure 7 – The acoustic measurement setup for the test model propeller. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 8 4.4 Operating conditions The propeller performance measurements were taken at a freestream velocity range of V∞ = 9 -35 ms-1 at constant RPM of 2300 (advance ratio range J = 0.4 – 1.8), over which the turbulence level was always below 0.08% of V∞. The acoustic measurements were all performed at propeller angular speed of 2300 rpm and incoming flow speed V∞ = 17, 21, 29 ms-1, corresponding to (uncorrected) advance ratios of 0.74, 0.91, and 1.26, respectively. The pitch angle at r/R=0.7 was 36°. The raw propeller performance data were reduced into non-dimensional scaling parameters advance ratio J, thrust coefficient CT, power coefficient CP, and propeller efficiency η: J = Vc /(nsD) (18) CT = T/(ρns2D4) (19) CP = 2πQ/(ρns2D5) (20) η = JCT /CP (21) where ns = RPM/60 and Vc is the equivalent velocity which accounts for the change in inflow conditions in the wind tunnel due to the operation of the propeller. This effective velocity can be calculated by the following expressions : 𝑉 𝑐 𝑉 ∞= 1 − 𝜏𝑘 √1+2𝜏 (22) 𝜏= 𝑇 0.25𝜌𝜋𝐷2𝑉 ∞ 2 (23) 𝑘= 0.25𝜋𝐷2 𝑆 (24) where V∞ is the freestream velocity, τ is the equivalent thrust coefficient, k is the blockage ratio, and S is the area of the wind tunnel test section. All data presented are corrected by considering the corrected freestream velocity. 5. Validation results 5.1 BEMT prediction validation The comparison between experimental and BEMT predicted results is shown in Figure 8. It can be seen that the BEMT method predicts the thrust coefficient of the test model propeller with a reasonable accuracy for J > 1.2. Since the rotational speed of the propeller was fixed, the variation in advance ratio was achieved by changing the incoming flow speed. Therefore, the conditions at low advance ratio correspond to low Reynolds numbers at the blade sections. For J < 1.2, the nonlinearity of the CT curve becomes more obvious. Compared to the agreement for CT, the BEMT predicted power coefficient and propulsive efficiency have more significant offsets even at high advance ratio. The maximum propulsive efficiency of the test model propeller was found at J = 1.35 in the experiment, while in the BEMT prediction it occurred at higher J due to the underestimation of the power coefficient. The power coefficient differed significantly at the higher advance ratios while the thrust was well captured. This suggests that the viscous effects were still underestimated in the BEMT analysis even in this region. At low advance ratios one would expect the BEMT to work less reliably than at high advance ratios, which in part is due to the dominance of radial forces at low J. Also the onset of three-dimensional stall is not captured. In the current study, the higher J cases have been chosen during the optimization. The radial forces that are relevant at low J can be neglected in these conditions. It is assumed that the aerodynamic prediction in these conditions is sufficiently accurate for appropriate prediction of the TSSP, considering the good agreement of the blade lift component which is a dominant source of the BPF noise during the prediction . ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 9 Figure 8 – Comparison between experimental (EXP) and BEMT predicted results of the test model propeller performance. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 10 5.2 Validation of propeller noise prediction The microphone signals were analyzed using a fast Fourier Transform. Figure 9 provides example spectra in terms of thrust specific sound pressure level (TSSP) versus frequency. Results are also provided for the motor-only (blades-off) and wind tunnel background noise. For comparison reasons, these were scaled with the thrust level measured for the blades-on case. The tonal component can be observed at the blade passing frequency (BPF=230 Hz) for all three incoming flow speed cases, and the corresponding TSSP at these three advance ratios are -68.91, -64.83 and -65.22 dB, respectively. When the advance ratio is 0.91 and 1.26, the TSSP value at the blade passing frequency is basically the same after the acoustic scaling, because the blade tip Mach number is not significantly different (0.2213 and 0.2290, respectively). As can be seen from the variation trend of the thrust coefficient with the advance ratio in the aerodynamic performance curves, when the advance ratio is 0.74, the blade is in the stall state, the surface flow is relatively complex and the thrust coefficient is in the nonlinear regime, so it cannot be applied directly to the acoustic scaling. Simultaneously, the amplitude of the broadband noise is the largest at this advance ratio, which is related to the turbulence noise caused by the complex flow field on the blade surface. Moreover, the higher harmonics of BPF are distinct at greater incoming flow speed. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 11 Figure 9 – TSSP of farfield noise with various incoming flow speeds at 90° directivity angle. Figure 10 shows the directivity of the fundamental propeller tone (at 1BPF). When the directivity angle is between 50° and 130°, the TSSP value increases first and then decreases with increasing advance ratio. At an axial directivity angle of 100° at the farfield location, the TSSP value corresponding to the advance ratio of 1.09 is the largest (-63.74 dB), which is 2.65 dB larger than that corresponding to the advance ratio of 1.43. Figure 10 – The directivity (orientation angle = 50° - 145°) of TSSP (BPF = 1) at 2300 rpm and various incoming flow speeds. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 12 Figure 11 – The comparison between experimental and analytical results of the test model propeller noise at various orientation angles. A comparison of the experimental data with the predicted results is provided in Figure 11. It can be seen that around the propeller plane, at an axial directivity angle near 90°, the analytical method provided a good accuracy with an offset of around 0.3dB. At the more upstream and downstream directivity positions, on the other hand, the offset between measured and predicted amplitude becomes more significant. It is expected that at low tip Mach number, as considered here, the frequency domain method is capable of capturing dipole noise sources (loading noise) of the overall noise emission, whereas other noise sources, for instance the thickness noise component were underestimated. 6. Optimization study 6.1 Optimizer architecture Based on the above prediction methods, an integrated optimization design framework of propeller aerodynamic and aeroacoustic performance is established, as shown in Figure 12. The optimization was performed using an simulated annealing algorithm, which is based on the principle of solid annealing. At high temperatures, a large number of molecules in a solid move relatively freely between each other. If the solid cools slowly, the thermal motion of the particles decreases and gradually becomes ordered, reaching equilibrium at each temperature, and finally forming the crystal structure of the lowest energy state of the system. Simulated annealing is realized by using the Metropolis algorithm and controlling the temperature decline appropriately, and the searching direction is guided by the change of probability, so as to achieve the purpose of solving the global optimization problem. Accordingly, this concept is expressed computationally as follows . First, the current state with design variable vector xk and function value f(xk) is chosen, after which a subsequent state with design variable vector xp and function value f(xp) is examined. The change in the function value between the current and probed states is denoted by ∆𝑓(𝒙) = 𝑓(𝒙𝑝) −𝑓(𝒙𝑘). The probed state is accepted with probability 𝑃(∆𝑓, 𝑇): 1, if 0 , exp[ / ], otherwise f P f T f T x x (25) where T denotes the temperature which is slowly reduced by the annealing process according to T→αT. Moreover, the parameter α represents the cooling rate and is set to 0.85 in this work. The ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 13 design variable vector in the probed state is obtained using a random walk procedure within a shrinking search boundary, max 0 min 0 () / () / p k k k rand T T rand T T x x x x x x (26) where xmax and xmin represent the maximum and minimum values of the design variable vector and T0 denotes the initial temperature. The variable rand( ) is a random function that is uniformly distributed in the range [0, 1]. Equation (26) indicates that two random values are generated. The mathematical formulation used to update the design variable vector in the new state xk +1 is as follows: 1 [ ] [ ] [ , ()] k k k k u f u f u P f T rand x x x x x x (27) where k x denotes the change of vectors xp and xk, such that k p k x x x . The value of the random function rand( ) is between zero and one. Moreover, the unit step function u[ ] is a discontinuous function that equals zero for negative inputs and otherwise is unity. In the current study, the Markov chain length is 100, the cooling down factor is 0.99, the perturbation factor is 0.01, the initial temperature is 300, the final temperature is 0.001 and the convergence residual is 1e-8. Initial input value Change the sweep angle to produce a new configuration Blade aerodynamic performance calculation (BEMT) Frequency domain prediction of propeller noise Simulated annealing algorithm Convergence?
Final propeller shape No Yes Figure 12 – Integrated optimization design framework of aerodynamics and aeroacoustics. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 14 6.2 Operating Conditions, Objectives and Constraints In the current study, a six-bladed propeller was optimized by using a classic simulated annealing algorithm with a consideration of both aerodynamic and aeroacoustic performances. As described above, the goal of the optimization study was to evaluate the potential of reducing the propeller noise while minimizing the cost of this reduction in terms of thrust performance and efficiency of propulsion. As a well-described baseline propeller the test model propeller could be used as a starting point (Section 4.1). The operation condition was set at a freestream velocity V∞ = 29 ms-1 at constant RPM of 2300 (advance ratio J = 1.26). The aerodynamic and aeroacoustic performances were predicted by BEMT and the frequency-domain prediction method respectively as aforementioned in Section 3. To achieve a quiet propeller design with minimum noise level, the thrust specific sound pressure level (TSSP) has been selected as the objective during the optimization. The value of the design variable sweep (MCA) at a given radial location r is determined through 4th order Bezier curve interpolation using the given five control points. The ordinates xi of those control points, determining the shape of the variable midchord alignment distributions x(r), are used as optimization parameters and are allowed to vary over specific intervals. In the optimization process the coordinates of the control points at both ends of the curve remain unchanged, and the coordinate value of the third control point is always smaller than that of the second and third control points. The blade twist and chord length distribution along the radial direction remain unchanged. The final optimization design model is as follows : min / i n n TSSP opt org _opt _org max max opt org max 1 n s.t.
max , T T C C TSSP TSSP TSSP TSSP TSSP (28) with TSSPi (i=1,2,…,10) is obtained by energy superposition of the thrust specific sound pressure level in each frequency band. The index i indicates the considered monitoring point, distributed over the axial directivity range (25° up to and including 155°). A linear sum of TSSP values was used to not bias the optimization towards the region of maximum noise emissions (around the propeller plane). Future work should assess the impact of this choice on the overall noise hindrance during a typical flyover. TSSPmax is the maximum thrust specific sound pressure level at the monitoring point in the direction perpendicular to the axis of rotation and 2.5 meters from the center of the disk plane (θ = 90°), the subscript ‘‘org” represents the original propeller blade, and the subscript ‘‘opt” represents the propeller blade in the optimization design process. 6.3 Results Figure 13 shows the geometric shape comparison between original (ORG) and the optimized (OPT) design. The radial distribution of dimensionless sweep (MCA/R) has been plotted in Figure 14. It can be seen that the maximum sweep location has been shifted from r/R = 0.55 to 0.49, at which the value of the MCA/R has been decreased by -0.01268 (from -0.01362 to -0.02648). Moreover, at r/R = 0.76, the dimensionless sweep of these two designs is similar. The individual OPT has a larger forward sweep than the individual ORG from root to 76% radius, and the former also has a little larger sweep from 76% radius to the tip. Table 1 shows the propeller thrust coefficient CT, efficiency η, thrust specific sound pressure level at θ = 90° TSSPπ/2 and average thrust specific sound pressure level TSSPoverall, 2 2 31/36 10 5/36 10 log rms overall p D TSSP d T , of the original and optimized designs . Compared with ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 15 the ORG configuration, the OPT configuration has no reduction in efficiency and thrust coefficient, and a 6.22 dB reduction in average thrust specific sound pressure level. a) Original Propeller(ORG) b) Optimized Propeller(OPT) c) Comparison of the ORG and OPT blade planform shapes Figure 13 – Comparison of original and optimized propeller geometries. Figure 14 – Original and optimized radial distributions of blade sweep (midchord alignment). ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 16 Figure 15 presents a comparison of the thrust specific sound pressure level (TSSP) at each monitoring point between the OPT configuration and the ORG configuration. From the view of the whole axial directivity distribution, the thrust specific sound pressure level (TSSP) of the ORG configuration of the monitoring point at 95° is the largest and the value is -65.18 dB. For the OPT configuration a 2.9 dB noise reduction is obtained, while the corresponding directivity angle of the maximum TSSP shifts to 85°. The noise reduction is effective from θ = 50° up to 155°. As a deviation from the overall trend, the TSSP of the OPT configuration obtained in the upstream direction is slightly higher than that of the ORG configuration, albeit at very low sound pressures. This may be a side-effect of the change in the radial sweep distribution. The reduced average TSSP level, quantified by TSSPoverall, confirms that the OPT design is a low-noise configuration. As discussed by Hanson , the effect of sweep is to cause some phase cancellation between spanwise segments of the blade. This has been exploited by the optimization algorithm in the current study. Table 1 – Comparison of original and optimized design results Original Optimized CT 0.2103 0.2215 η 0.8349 0.8453 TSSPπ/2 (dB) -65.49 -68.49 TSSPoverall (dB) 6.74 0.52 Figure 15 – Comparison of thrust specific sound pressure fluctuation (TSSP) before and after optimal design. 7. Conclusions Instead of presenting the acoustic pressure variations as a fraction of the audible pressure threshold, a novel scaling approach has been proposed to directly relate the propeller noise emissions to the propeller thrust. Compared to the experimental aerodynamic performance, the BEMT method predicts the thrust coefficient of the propeller with a reasonable accuracy at low to moderate loading conditions (high advance ratio). In the current study, this high advance ratio case has been chosen during the optimization. It is assumed that the TSSP is well presented because of the agreement of the lift which is a dominant source of the BPF noise during the acoustic prediction. The noise results ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 17 show that around the propeller plane, at an axial directivity angle near 90°, the analytical method provided a good accuracy with an offset of around 0.3 dB from the experimental data. The original propeller design was optimized using a classic simulated annealing algorithm with a consideration of both aerodynamic and aeroacoustic performances. The value of the design variable sweep at a given radial location is determined through 4th order Bezier curve interpolation using the given five control points. An improved propeller design with no loss of aerodynamic performance and reduction of thrust specific sound pressure level was obtained through an optimization design process, and the maximum thrust specific sound pressure level at the monitoring point was reduced by 2.9 dB compared to the modern baseline design; the same reduction in noise level was obtained when integrating the TSSP over the semi-circle of all axial directivity angles considered. This points to an overall reduction in acoustic emissions achieved through the optimized blade sweep distribution. References Leslie, A., Wong, K. C., Auld, D., “Broadband noise reduction on a mini-UAV Propeller”, 14th AIAA/CEAS Aeroacoustics Conference, AIAA Paper 2008-3069, 2008. Yang, Y., Li, Y., Liu, Y., Arcondoulis, E. J. G., Wang, Y., Huang, B., Li, W., “Aeroacoustic and aerodynamic investigation of multicopter rotors with serrated trailing edges”, 25th AIAA/CEAS Aeroacoustics Conference, AIAA Paper 2019-2523, 2019. Ning, Z., Wlezien, R., Hu, H., “An Experimental Study on Small UAV Propellers with Serrated Trailing Edges”, 47th AIAA Fluid Dynamics Conference, AIAA Paper 2017-3813, 2017. Oerlemans, S., Fisher, M., Maeder, T., Kögler, K., “Reduction of Wind Turbine Noise Using Optimized Airfoils and Trailing-Edge Serrations”, AIAA Journal, Vol. 47, No. 6, 2019, pp. 1470-1481. Magliozzi, B., Hanson, D. B., Amiet, R. K., “Propeller and Propfan Noise”. In: Hubbard, H. H. (ed.): Aeroacoustics of Flight Vehicles: Theory and Practice – Volume 1: Noise Sources, NASA Langley Research Center, 1991, pp. 1-64. Marinus, B., “Multidisciplinary Optimization of Aircraft Propeller Blades”, PhD thesis, École Centrale de Lyon, 2011. Pagano, A., Federico, L., Barbarino, M., Guida, F., Aversano, M., “Multi-Objective Aeroacoustic Optimization of an Aircraft Propeller”, 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, AIAA Paper 2008-6059, 2008. Gur, O., Rosen, A., “Optimization of Propeller Based Propulsion System”, Journal of Aircraft, Vol. 46, No. 1, 2009, pp. 95-106. Ingraham, D., Gray, J., Lopes, L. V., “Gradient-Based Propeller Optimization with Acoustic Constraints”, AIAA Scitech 2019 Forum, AIAA Paper 2019-1219, 2019. Miller, C. J., Sullivan, J. P., “Noise constraints effecting optimal propeller designs”, SAE General Aviation Aircraft Meeting and Exposition, 1985. Yang, L., Huang, J., Yi, M., Zhang, C., Xiao, Q., “A numerical study on the effects of design parameters on the acoustics noise of a high efficiency propeller”, Acoustical Physics, Vol. 65, No. 6, 2017, pp. 699-710. Hanson, D. B., “Helicoidal Surface Theory for Harmonic Noise of Propellers in the Far Field”, AIAA Journal, Vol. 18, No. 10, 1980. Malcolm J. Crocker, Editor-in-Chief, Handbook of Acoustics, John Wiley and Sons, 1998 ISNB 0-471-25293-X Hutcheson, F. V., Brooks, T. F., Burley, C. L., Stead, D. J., “Measurement of the Noise Resulting from the Interaction of Turbulence with a Lifting Surface”, International Journal of Aeroacoustics, Vol. 11, No. 5-6, 2012, pp. 675-700. Liu, X., He, W., “Performance Calculation and Design of Stratospheric Propeller”, IEEE Access, Vol. 5, 2017, pp. 14358-14368. Liu, P., Xing, Y., Guo, H., Li, L., “Design and performance of a small-scale aeroacoustic wind tunnel”, Applied Acoustics, Vol. 116, 2017, pp. 65–69. ANALYSIS OF THRUST-SCALED ACOUSTIC EMISSIONS OF AIRCRAFT PROPELLERS 18 Liu, Z., Liu, P., Hu, T., Qu, Q., “Experimental investigations on high altitude airship propellers with blade planform variations”, Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering, Vol. 232, No. 16, 2018, pp. 2952-2960. Liu, J.-L., “Novel Taguchi-Simulated Annealing Method Applied to Airfoil and Wing Planform Optimization”, Journal of Aircraft, Vol. 43, No. 1, 2006. Yu. P., Peng, J., Bai, J., Han, X., Song, X., "Aeroacoustic and aerodynamic optimization of propeller blades", Chinese Journal of Aeronautics, Vol. 33, No. 3. 2020, pp. 826-839. Burger, S., “Multi-Fidelity Aerodynamic and Aeroacoustic Sensitivity Study of Isolated Propellers”, Master thesis, Delft University of Engineering, 2020. A. Rosen and O. Gur, “Novel approach to axisymmetric actuator disk modeling”, AIAA Journal, Vol. 46, No. 11, 2008, pp. 2914–2925. O. Gur and A. Rosen, “Comparison between blade-element models of propellers”, Aeronautical Journal, Vol. 112, No. 1138, 2008, pp. 689–704. Whitfield, C. E., Mani, R., Gliebe, P. R., "High speed turboprop aeroacoustic study (single rotation). Volume 1: Model development." (1989). Copyright Statement The authors confirm that they, and/or their company or organization, hold copyright on all of the original material included in this paper. The authors also confirm that they have obtained permission, from the copyright holder of any third party material included in this paper, to publish it as part of their paper. The authors confirm that they give permission, or have obtained permission from the copyright holder of this paper, for the publication and distribution of this paper as part of the ICAS proceedings or as individual off-prints from the proceedings. |
17282 | https://brainly.com/question/51091676 | [FREE] The mass of 1 cubic meter of air, at sea level, is about 1.3 kg. 1. How much does it weigh in newtons? 2. - brainly.com
1
Search
Learning Mode
Cancel
Log in / Join for free
Browser ExtensionTest PrepBrainly App Brainly TutorFor StudentsFor TeachersFor ParentsHonor CodeTextbook Solutions
Log in
Join for free
Tutoring Session
+64,9k
Smart guidance, rooted in what you’re studying
Get Guidance
Test Prep
+25k
Ace exams faster, with practice that adapts to you
Practice
Worksheets
+6,5k
Guided help for every grade, topic or textbook
Complete
See more
/
Physics
Textbook & Expert-Verified
Textbook & Expert-Verified
The mass of 1 cubic meter of air, at sea level, is about 1.3 kg.
How much does it weigh in newtons?
How much would a stack of 8000 such cubes weigh?
If this total weight were all applied to an area of 1 m², how much pressure would it produce on that surface?
(The answer should look familiar. Although the atmosphere actually gets less dense with increasing altitude, the total amount of air is the same as if these sea-level cubes were stacked some 8 km high.)
1
See answer Explain with Learning Companion
NEW
Asked by jbxdxnjxze4086 • 05/24/2024
0:00
/
0:15
Read More
Community
by Students
Brainly
by Experts
ChatGPT
by OpenAI
Gemini
Google AI
Community Answer
This answer helped 31375472 people
31M
0.0
0
Upload your school material for a more relevant answer
The weight of 1 cubic meter of air is 12.74 N. A stack of 8000 such cubes weighs 101920 N, and if applied to an area of 1 m², it would generate approximately 101920 Pa of pressure, close to the atmospheric pressure at sea level (1 atm).
The mass of 1 cubic meter of air at sea level is about 1.3 kg. Let's calculate the weight in newtons.
Weight of 1 cubic meter of air:
Given mass = 1.3 kg
Weight = mass × gravitational acceleration (g = 9.8 m/s²)
Weight = 1.3 kg × 9.8 m/s² = 12.74 N
Weight of a stack of 8000 such cubes:
Total mass = mass of one cube × number of cubes
Total mass = 1.3 kg × 8000 = 10400 kg
Total weight = total mass × gravitational acceleration
Total weight = 10400 kg × 9.8 m/s² = 101920 N
Pressure produced on a 1 m² area:
Pressure = Force/Area
Pressure = 101920 N / 1 m² = 101920 Pa or N/m²
This value is very close to the average atmospheric pressure at sea level, which is about 101325 Pa (1 atm).
Answered by SloanGarrett45 •52.7K answers•31.4M people helped
Thanks 0
0.0
(0 votes)
Textbook &Expert-Verified⬈(opens in a new tab)
This answer helped 31375472 people
31M
0.0
0
Introduction to Physics - Openstax
CK-12 Earth Science For Middle School
College Physics 1e - OpenStax
Upload your school material for a more relevant answer
The weight of 1 cubic meter of air is 12.74 N. A stack of 8000 such cubes weighs 101920 N, producing a pressure of about 101920 Pa on a 1 m² area, close to atmospheric pressure. These calculations demonstrate the relationship between mass, weight, and pressure in the context of air at sea level.
Explanation
The mass of 1 cubic meter of air at sea level is approximately 1.3 kg. Let's calculate its weight, the weight of a stack of 8000 cubes, and the pressure exerted.
Weight of 1 cubic meter of air:
To find the weight, we use the formula:
Weight=mass×g
where:
mass = 1.3 kg
g (acceleration due to gravity) = 9.8 m/s²
Weight=1.3 kg×9.8 m/s 2=12.74 N
So, the weight of 1 cubic meter of air is 12.74 N.
Weight of a stack of 8000 such cubes:
Total mass of 8000 cubes = mass of one cube × number of cubes
Total Mass=1.3 kg×8000=10400 kg
Total weight:
Total Weight=Total Mass×g=10400 kg×9.8 m/s 2=101920 N
Thus, the weight of 8000 cubes is 101920 N.
Pressure produced on a 1 m² area:
Pressure is defined as force per unit area:
Pressure=Area Force
Using the total weight calculated as the force:
Pressure=1 m 2 101920 N=101920 Pa
Therefore, this weight would generate a pressure of approximately 101920 Pa (or N/m²). This is close to the average atmospheric pressure at sea level, which is around 101325 Pa (1 atm).
Examples & Evidence
For example, if you consider a column of air extending to the top of the atmosphere, it will have a similar mass and produce similar atmospheric pressure measured at sea level. Additionally, think about how the pressure you feel from the air around you is a result of the weight of this air above you.
The calculations are based on standard physics formulas for weight and pressure, which consistently hold true in our atmosphere, reflecting how air behaves under the influence of gravity.
Thanks 0
0.0
(0 votes)
Advertisement
jbxdxnjxze4086 has a question! Can you help?
Add your answer See Expert-Verified Answer
### Free Physics solutions and answers
Community Answer 47 Whats the usefulness or inconvenience of frictional force by turning a door knob?
Community Answer 5 A cart is pushed and undergoes a certain acceleration. Consider how the acceleration would compare if it were pushed with twice the net force while its mass increased by four. Then its acceleration would be?
Community Answer 4.8 2 define density and give its SI unit
Community Answer 9 To prevent collisions and violations at intersections that have traffic signals, use the _____ to ensure the intersection is clear before you enter it.
Community Answer 4.0 29 Activity: Lab safety and Equipment Puzzle
Community Answer 4.7 5 If an instalment plan quotes a monthly interest rate of 4%, the effective annual/yearly interest rate would be _______. 4% Between 4% and 48% 48% More than 48%
Community Answer When a constant force acts upon an object, the acceleration of the object varies inversely with its mass 2kg. When a certain constant force acts upon an object with mass , the acceleration of the object is 26m/s^2 . If the same force acts upon another object whose mass is 13 , what is this object's acceleration
Community Answer 20 [4 A wheel starts from rest and has an angular acceleration of 4.0 rad/s2. When it has made 10 rev determine its angular velocity.]]( "4 A wheel starts from rest and has an angular acceleration of 4.0 rad/s2. When it has made 10 rev determine its angular velocity.]")
Community Answer 4.1 5 Lucy and Zaki each throw a ball at a target. What is the probability that both Lucy and Zaki hit the target?
Community Answer 22 The two non-parallel sides of an isosceles trapezoid are each 7 feet long. The longer of the two bases measures 22 feet long. The sum of the base angles is 140°. a. Find the length of the diagonal. b. Find the length of the shorter base.
New questions in Physics
How are the Celsius, Kelvin, and Fahrenheit scales similar? How are they different? Considering their differences, think of one scenario in which each scale is more convenient to use.
The table below includes data for a ball rolling down a hill. Fill in the missing data values in the table and determine the acceleration of the rolling ball. | Time (seconds) | Speed (km/h) | :--- | | 0 (start) | 0 (start) | | 2 | 3 | | | 6 | | | 9 | | 8 | | | 10 | 15 | Acceleration = ____
How do wheel size and axle friction affect the speed and distance a rubber band car can travel?
(a) In what year will we next be able to see Halley's Comet? (b) In 1997, people could see Comet Hale-Bopp. In what year will this comet return? Why do comets appear to have large tails flowing away?
Microwaves were originally used in which equipment during the world wars? A. Sonar B. Radars C. Thermostats D. Telescopes
Previous questionNext question
Learn
Practice
Test
Open in Learning Companion
Company
Copyright Policy
Privacy Policy
Cookie Preferences
Insights: The Brainly Blog
Advertise with us
Careers
Homework Questions & Answers
Help
Terms of Use
Help Center
Safety Center
Responsible Disclosure Agreement
Connect with us
(opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab)(opens in a new tab)
Brainly.com |
17283 | https://www.2dcurves.com/conicsection/conicsection.html | conic section
conic section =============main ------------------------------------------- A conic section can be defined as the intersection of a plane and a conic, that's where the name is from. The relation between the angles of plane and conic determines the kind of conic section: equal: parabola smaller: ellipse; when the plane is parallel to the ground surface the resulting curve is a circle larger: hyperbola These qualities were already known by Apollonius of Perga (200 BC). He wrote a series of books about the conic sections (Konika 1)), where he used work done by preceding scholars as Euclid (who was his teacher). The first discovering of the conic sections has been made by Menaichmos (350 BC), a member of the school of Plato. He found the parabola on his attempts to duplicate the cube. But before that period, there was already a certain knowledge of the qualities of the conic sections, for instance in the head of Pythagoras (550 BC). A conic section is an algebraic curve of the 2nd degree in x and y, and every 2nd degree equation represents a conic: the conic section is the quadratic curve. The kind of conic section can be obtained from the coefficients of the equation in x and y. The geometric treatment by the old Greek of the conic sections is equivalent with such an equation. But it had a different form, consisting of the theory of the 'adaptation of areas'. This theory can already be read in 'the Elements' of Euclid. Depending the kind adaptation one of the three conic sections is obtained: elleipsis: adaptation with defect parabole: exact adaptation hyperbole: adaptation with excess Given a point F and a line l. The conic section can be defined as the collection of points P for which the ratio 'distance to F / distance to l' is constant. The point F is called the focus of the conic, the line l is called the directrix. The ratio is the eccentricity; its value gives the kind of conic section: eccentricity econic section 0 circle 0 < e < 1 ellipse 1 parabola > 1 hyperbola So the value gives the amount of deviation from a circle. When we set the distance of the focus to the line to 1, we can write for a conic section with eccentricity e the polar equation at top of this page. In this equation it is quite clear that the polar inverse is the limaçon. The circular conic section is the circle. The orbit of a comet can in fact take the form of each of the conic sections. More detailed information about each of the conic sections: ellipse circle parabola hyperbola A conic section is uniquely defined by 5 points, given the points are sufficient away from one line. You can conclude this from the general equation of a conic through a point (x, y): x 2 + a y 2 + b x y + c x + d y + e = 0. This gives a linear equation in five variables (a to e), so five points define a conic. notes 1) koonikos (Gr.) = cone |
17284 | https://pdfs.semanticscholar.org/8771/0775c546664e3e3c4195b54cf0d940a262b9.pdf | GLOBAL WATER PATHOGEN PROJECT PART THREE. SPECIFIC EXCRETED PATHOGENS: ENVIRONMENTAL AND EPIDEMIOLOGY ASPECTS THE LIVER FLUKES: CLONORCHIS SINENSIS, OPISTHORCHIS SPP, AND METORCHIS SPP.
K. Darwin Murrell University of Copenhagen Copenhagen, Denmark Edoardo Pozio Istituto Superiore di Sanità Rome, Italy Copyright: This publication is available in Open Access under the Attribution-ShareAlike 3.0 IGO (CC-BY-SA 3.0 IGO) license ( By using the content of this publication, the users accept to be bound by the terms of use of the UNESCO Open Access Repository (
Disclaimer: The designations employed and the presentation of material throughout this publication do not imply the expression of any opinion whatsoever on the part of UNESCO concerning the legal status of any country, territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. The ideas and opinions expressed in this publication are those of the authors; they are not necessarily those of UNESCO and do not commit the Organization.
Citation: Murell, K.D., Pozio, E. 2017. The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp. In: J . B . R o s e a n d B . J i m é n e z - C i s n e r o s , ( e d s ) G l o b a l W a t e r P a t h o g e n s P r o j e c t . h t t p : / / w w w . w a t e r p a t h o g e n s . o r g ( R o b e r t s o n , L ( e d s ) P a r t 4 Helminths) Michigan State University, E. Lansing, MI, UNESCO.
Acknowledgements: K.R.L. Young, Project Design editor; Website Design ( Published: January 15, 2015, 3:45 pm, Updated: July 27, 2017, 10:36 am The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
3 Summary The liver and intestinal fish-borne zoonotic trematodes (flukes) are important parasites of humans and animals and are estimated to infect more than 18 million people, especially in Asia. The diseases caused by fish-borne liver flukes, clonorchiasis, opisthorchiasis and methorchiasis, can be severe. Infection with high worm burdens has high impact on health status in endemic areas; a recent estimation of the effect of liver flukes on morbidity yielded DALY value of 275,370. Because fish are a major source of protein and an important export commodity in western Siberia and South East Asia these diseases are of both economic and public health concern.
Clonorchis sinensis is endemic in southern China, Korea and northern Vietnam, whereas O. viverrini is endemic in the Lower Mekong Basin, including Thailand, Lao People's Democratic Republic), Cambodia and south and central Vietnam. Opisthorchis felineus has been documented in at least 12 countries of the European Union, Belarus, Ukraine, and in Western Siberia (Russia). Methorchis species are wide spread, and reported from North America, Eurasia, and East Asia; however, information on human infections is very limited. The infective stage for humans, as well for animals, is the larval metacercaria stage present in fish that matures to the adult stage in the hepatobiliary system of humans and other fish-eating mammals. A significant feature of the epidemiology of these parasites is their wide definitive host range, which includes not only domestic animals but also sylvatic mammals such as rodents and carnivores. The adult flukes can survive for up to ten years in the host, producing around 200 eggs per day. This results in considerable contamination of the environment.
Water becomes contaminated with fluke eggs from indiscriminate deposition of infected human and animal excreta, which, if ingested by appropriate snail hosts, are the source of the infective metacercariae found in fish.
While those fecal egg sources associated with household fish ponds can be addressed by sanitation approaches, the common infection of wild fish from the sylvatic cycle of liver flukes is not amenable to sanitation interventions.
Further, the snail intermediate host species are diverse and abundant in water bodies. These features make control of these zoonotic parasites difficult and focuses prevention on human food behaviors, and mass drug treatment of communities. Procedures to limit contamination of ponds, lakes, and rivers, with human and animal feces containing liver fluke eggs are limited, but methods focusing on the education of consumers, farmers, and fishermen will be discussed.
The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
1.0 Epidemiology of the Disease and Pathogens Trematode parasites of the genera Clonorchis, Opisthorchis and Metorchis, commonly referred to as liver flukes, are transmitted to humans and other mammals by the ingestion of fish infected with their larval stages which ultimately come from snails infected due to excreta and polluted waters (Chai et al., 2005; Mordvinov et al., 2012; Petney et al., 2013). These zoonotic helminths are of public health concern because of the serious pathology they can induce in the liver and bile ducts (Sithithaworn et al., 2007a; Pakharukova and Mordvinov, 2016). According to the Food and Agriculture Organization and the World Health Organization (FAO/WHO, 2014), they rank 8 th overall in global health importance among 24 food-borne parasites. Because their life cycles require intermediate hosts that are aquatic (snails and fish) infected due to excretion of the eggs of this parasite from feces of infected humans and other mammals, they may, especially when associated with aquaculture systems, be a consideration in the design of sanitation systems for human and animal excreta.
1.1 Global Burden of Disease 1.1.1 Global distribution Figure 1 shows the distribution area of Opisthorchis felineus in western Siberia and Europe and Figure 2 shows the distribution areas of Clonorchis sinensis and Opistorchis viverrini in Cambodia, China, Laos, Thailand and Vietnam.
Figure 1. Distribution area of Opisthorchis felineus.
Information on the distribution of O. felineus in western Siberia orginated from Pakharukova and Mordvinov, 2016. Information on the distribution of O. felineus in Europe is from Pozio and Gomez Morales, 2014. (permission obtained from Dr. Edoardo Pozio, Istituto Superiore Di Sanita’, Department of Infectious, Parasitic and Immune-Mediated Diseases) Figure 2. Distribution areas of Clonorchis sinensis and Opistorchis viverrini. C. sinensis distribution area in China (Lai et al., 2016) and Vietnam (Doanh and Nawa, 2016) (light brown); O. viverrini distribution area in Vietnam (red) (Doanh and Nawa, 2016).
Rough distribution area of O. viverrini in Cambodia, Laos and Thailand (stripped red). (permission obtained The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
4 from Dr. Edoardo Pozio, Istituto Superiore Di Sanita’, Department of Infectious, Parasitic and Immune-Mediated Diseases) 1.1.1.1 Clonorchis sinensis Infection with C. sinensis and the disease it causes, clonorchiasis, occurs primarily in East Asia, where it is widely distributed; it is currently endemic in South Korea, China, Taiwan, northern Vietnam, and eastern Russia (De et al., 2003; Chai et al., 2005; Lun et al., 2005; Sithithaworn et al., 2007a; Sithithaworn et al., 2012). The number of people infected in this region is estimated to be 7-15 million (WHO, 1995, 1999; Fürst et al., 2011) and prevalence varies widely, from <1.0% in Guang Xi, China to > 40% in North Vietnam, to >70% in Guangdong Pr., China. Importantly, Fürst et al. (2011) calculated that 1.1 million of infected people had heavy infections (> 1000 eggs/gram feces).
1.1.1.2 Opisthorchis viverrini Infection with O. viverrini and the disease it causes, opisthorchiasis, occurs in Cambodia, Lao PDR (mainly southern areas), Thailand (mainly northeast areas), and southern Vietnam (De et al., 2003; Andrews et al., 2008; Sohn et al., 2011, 2012; Yong et al., 2012; Doanh and Nawa, 2016). The number of people infected in these countries is estimated to be eight million in Thailand and two million in Laos (Sithithaworn and Haswell-Elkins, 2003); little prevalence data are available in Vietnam even if the presence of O. viverrini infection in humans has been documented (De et al; 2003; Sithithaworn et al., 2006; Sayasone et al., 2007; Doanh and Nawa, 2016).
1.1.1.3 Opithorchis felineus Infection with O. felineus and the disease it causes, opisthorchiasis, occurs in Byelorussia, Kazakhstan, Russia, Ukraine and Siberia, and in scattered foci of the European Union (Germany, Greece, Italy, Poland, Portugal and Spain) (Pozio et al., 2013). In Russia, Ukraine, and Kazakhstan, 12.5 million people have been considered to be at risk for O. felineus (Keiser and Utzinger, 2005). In these foci, both humans and domestic animals (cats and dogs) play the role as final hosts (Mordvinov et al., 2012). In the Tomsk region of Siberia, the prevalence of opisthorchiasis in humans increased from 495 cases per 100,000 inhabitants to 649 cases per 100,000 inhabitants between 1997 and 2006 (Mordvinov et al., 2012). Other endemic foci of O. felineus in Siberia are the Ob river and the Irtysh river basins.
1.1.1.4 Trade impact The liver flukes are ranked 6th among 24 food-borne parasites for impact on trade in endemic countries; their impact on overall socioeconomic wellbeing of affected communities is ranked 5th (FAO/WHO, 2014). An important factor affecting the evaluation of trade impact of liver flukes is that their primary source are wild-caught freshwater fish rather than fish produced in aquaculture (see Section 1.3.2). Further, non-intensive aquaculture farms generally produce for local domestic markets rather than for international trade. However, in Italy, wild tenches fished from central Italy lakes, where O. felineus is highly endemic, are exported to several fish markets outside the country and have caused opisthorchiasis outbreaks (Traverso et al., 2012).
1.1.2 Symptomology In general, all the liver fluke infections induce chronic inflammatory diseases of the hepatobiliary system and in chronic high worm burden infections this may lead to bile duct cancer termed cholangiocarcinoma (CCA) (Sithithaworn et al., 2007a; Pakharukova and Mordvinov, 2016; Qian et al., 2016). Most of these manifestations are mild and asymptomatic. However, once advanced CCA develops, clinical manifestations such as jaundice occurs in approximately half of the cases, while the other half may have no specific symptoms (Chai et al., 2005).
Infections with less than 100 worms may be asymptomatic (Armignacco et al., 2008, 2013; Pakharukova and Mordvinov, 2016; Qian et al., 2016). Infection with one-hundred to thousands of worms, however, may cause jaundice, indigestion, epigastric discomfort, anorexia, general malaise, diarrhea, and mild fever (Chai et al., 2005). Over time, without treatment, infection may lead to liver enlargement, allergic lesions, congestion of the spleen, bile stone development, cholecystitis, and liver cirrhosis. The most serious possible outcome, however, is the development of CCA. Benign hepatobiliary diseases are characterized by cholangitis, obstructive jaundice hepatomegaly, periductal fibrosis, cholecystitis, and cholelithiasis (Chai et al., 2005; Sithithaworn et al., 2007a).
1.1.2.1 Morbidity and mortality Because of the potentially severe consequences of all liver fluke infections (e.g., hepatic lesions, cholangitis, and, most seriously, CCA), chronic infections with high worm burdens have a high impact on health status in endemic areas; a recent estimation of the effect of clonorchiasis on The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
5 morbidity yielded DALY value of 275,370 (Fürst et al., 2011), a relatively high impact for a helminthic disease. In highly endemic foci of O. felineus in Western Siberia, CCA was detected in 77% of patients with opisthorchiasis, versus 34.2% of patients without opisthorchiasis (Pakharukova and Mordvinov, 2016).
1.2 Taxonomic Classification of the Agents The fishborne liver flukes of public health importance belong to the trematode family Opisthorchiidae (Scholtz, 2008). The most prevalent and important species are Clonorchis sinensis, Opisthorchis viverrini, and O. felineus, members of the subfamily Opisthorchiinae. These species are similar in morphology, life cycles, and modes of transmission, which often causes difficulties in specific diagnosis. Their geographic distributions are basically allopatric, however. Other species of the Opisthorchis genus have been reported from humans only rarely and will not be considered further in this chapter.
1.2.1 Physical description of the agents Clonorchis sinensis: The adult worms are flat, elongated, leaf or lanceolate shaped, generally 8-15 mm in length, and 1.5-4.0 mm wide (Chai et al., 2005; Rim, 1990). As shown in Figure 3, C. sinensis is morphologically similar to Opisthorchis viverrini and O.
felineus, but differs particularly in having highly branched testis (Scholtz, 2008). The larval stage transmitted through fish to humans and other mammals is termed a metacercaria, which is encysted in various tissues of the fish host. The metacercaria is round to oval, measuring 0.13-0.14 X 0.09-0.10 mm (Figure 4) (Chai et al., 2005).
Figure 3. Hematoxylin and eosin stained adult worms of the most important liver flukes. A, Opishorchis felineus, scale bar 2 mm; B, Opithorchis vivverini, scale bar 1 mm; and C, Clonorchis sinensis scale bar 2 mm. There is no proportion between worm size. (permission obtained from Dr. Edoardo Pozio, Istituto Superiore Di Sanita’, Department of Infectious, Parasitic and Immune-Mediated Diseases) The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
6 Figure 4. First intermediate host of liver flukes, and infecting stages for fish and mammals. 1, Bythinia sp., major snail host of liver flukes, scale bar 8 mm; 2 Opisthorchis felineus cercaria, the swimming larval stage of liver flukes infecting fish, scale bar 100 µm; 3, metacercaria of Clonorchis sinensis scale bar 100 µm; 4, metacercariae of Opisthorchis viverrini, scale bar 150 µm; and 5, metacercariae of Opisthorchis felineus, scale bar 200 µm. (permission obtained from Dr.
Edoardo Pozio, Istituto Superiore Di Sanita’, Department of Infectious, Parasitic and Immune-Mediated Diseases) Opisthorchis viverrini: The adult worms are flat, elongated, leaf or lanceolate shaped, generally 5.5-10 mm in length, and 0.8-1.6 mm wide (Figure 3) (Pozio and Gomez Morales, 2014). The metacercaria is round to oval, measuring 0.19-0.25 x 0.15-0.22 mm in size (Figure 4).
Opisthorchis felineus: The adult worms are flat, elongated, leaf or lanceolate shaped, generally 7-12 mm in length, and 1.5-2.5 mm wide (Figure 3) (Pozio et al., 2013).
The metacercaria is oval, measuring 0.25-0.30 x 0.19-0.23 mm in size (Figure 4).
Variation in the size of adults depends on the intensity of infection and the diameters of the bile ducts they inhabit.
Metorchis spp: This genus belongs to a separate opisthorchid subfamily, the Metorchiinae, and is readily differentiated morphologically from C. sinensis and Opisthorchis spp. (Scholtz, 2008); their life cycle features, however, are similar the other liver flukes (Mas-Coma and Bargues, 1997). The species reported from humans are M.
conjunctus, M. bilis, M. orientalis, and M. taiwanensis.
Because their overall prevalence and geographic distributions are limited compared to that of C. sinensis and Opisthorchis spp., there is comparatively little information on their epidemiology, health burden, and control (Mas-Coma and Baruges, 1997; Chai et al., 2005; Furst et al., 2011; Mordvinov et al., 2012). For these reasons, species of the Metorchis genus will not be discussed further in this chapter.
Although the metacercariae of the liver flukes species are very similar, they can be differentiated morphologically and by molecular methods (Figure 4).
1.3 Transmission 1.3.1 Life cycle and routes of transmission The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
7 The basic life cycle of the fish-borne liver flukes is shown in Figure 5. Liver flukes utilize as their first intermediate host freshwater snails belonging to several genera. The egg contains a mature miracidia that emerges from the egg if it reaches freshwater and is ingested by an appropriate snail species. In the snail, the miracidial stage then develops to a sporocyst, which then undergoes asexual multiplication producing rediae, which mature in the snail hepatopancreas within about 17 days. Each redia, in turn, produces 4-50 cercariae, which emerge from the snail into the surrounding water. The time between trematode egg ingestion by the snail and the emergence of the cercariae is influenced by the water temperature; in tropical regions, this takes about 14-17 days after ingestion of the egg (Rim et al., 1982). An infected snail may produce and release into the water 500 to 5,000 pleurolophocercus cercariae (Figure 4) per day, depending on the infection level (Rim, 1990). The cercariae are phototactic and geotropic and are able to survive in the water up to 24 hours at temperatures ranging from 12°C to 27°C. The second intermediate hosts are freshwater fish of the family Cyprinidae; however, metacercariae of C. sinensis in fish of other families have been reported (WHO, 1995). Fish movements attract cercariae, which, on contact with the fish, penetrate under the scales, lose their tails and encyst, mainly in the muscles, subcutaneous tissues, and to a lesser degree in the fins and gills (Rim et al., 1982). In the fish, the metacercaria reaches maturity in about 5-6 weeks, and may remain infective for the definitive host for at least 30 days, probably much longer although this has not been well characterized (Rim et al., 1982). When a metacercaria is ingested by the definitive mammalian host (e.g., humans), it excysts in the duodenum and migrates to the common bile duct and then to the biliary ducts within 4-7 hours. The hermaphroditic adult worm reaches sexual maturity and, in four-six weeks, begins producing eggs that are expelled with the host’s feces. The longevity of adult liver flukes has been estimated to be 15-26 years (Lai et al., 2016). In humans, adult worms may shed 1000 to 4000 eggs per day, depending on the worm burden, which is density-dependent (Mas-Coma and Bargues, 1997).
Figure 5. Life cycle of human liver flukes. As their common names indicate, the majority of opisthorchid adults parasitize the liver, bile ducts, and gall bladder of fish eating mammals including humans. Infected hosts (1) shed the fluke's eggs in their feces (2) and if the eggs reach water (3), they can be ingested by an appropriate snail species (4). In the snail, the parasite emerges from the egg and undergoes several stages of asexual multiplications (4) until emerging from the snail as a swimming cercariae (5). Fish movements attract cercariae (6), which on contact with the fish penetrate various tissues (7), and develop into an encysted metacercariae (8). This stage is infective to mammals and is transmitted to them when the fish is ingested raw or improperly prepared. Main reservoir hosts of Clonorchis sinensis and Opisthorchis viverrini: 1a, 1b, 1c, 1d,; main reservoir hosts of Opisthorchis felineus: 1d, 1e, 1f and 1g. (permission obtained from Dr.
Edoardo Pozio, Istituto Superiore Di Sanita’, Department of Infectious, Parasitic and Immune-Mediated Diseases) The risk for human infection is closely related to social and cultural traits that determine food behaviors, such as a The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
8 fondness for raw or inadequately prepared fish (i.e., cooked, frozen, or pickled). The consumption of raw or undercooked fish is widely practiced, particularly in localities near lakes, reservoirs, streams and ponds where fresh fish are readily available (WHO, 1995). For example, in China, raw fish is commonly served after dipping briefly in boiling soup and immediately eaten, or in hot rice congee. In Thailand, a major source of infection with O.
viverrini is consumption of raw or inadequately cooked, frozen, salted, or smoked fish in a dish called Koi-pla. In Italy, large O. felineus human outbreaks occurred from 2007 to 2011 from the consumption of marinated tench fillets at restaurants or during social events (Pozio et al., 2013). A strong risk factor for C. sinensis infection, especially for males, is the consumption of raw fish at social gatherings where alcoholic drinks are served (Chai et al., 2005). Studies have shown that liver fluke metacercariae in fish tissue are moderately tolerant to low levels of heat, freezing, and pickling (Table 1).
Table 1. Reports on the preservation and treatment parameters necessary to inactivate liver fluke metacercariae in fish Preservative Parameter Parasite Process Variable Time Required for In-activation of Metacercaria a Reference Salting O. viverrini in fermented fish 13.6% 48 hrs Kruatrachue et al. 1982 Salting C. sinensis in fish 30.0% (wt based) 8 192 hrs Fan, 1998 Salting O. viverrini in fermented fish 20.0% (wt based) 5 hrs b Tesana et al. 1986 Freezing C. sinensis in fish -12°C 480 hrs c Fan, 1998 Freezing C. sinensis in fish -20°C 72-96 hrs d Fan, 1998 Freezing C. sinensis and O. viverrini in fish -10°C 124 hrs WHO, 1979 Freezing O. felinus in fish -28°C 20 hrs Fattakhov, 1989 Freezing O. felinus in fish -35°C 8 hrs Fattakhov, 1989 Freezing O. felinus in fish -40°C 2 hrs Fattakhov, 1989 Freezing Metacercariae -10°C 120-168 hrs Lloyd and Soulsby, 1998 Freezing O. felinus in fish -18°C 96 hrs Lloyd and Soulsby, 1998 Heating Metacercariae 50°C 5 hrs Waikagul, 1991 Heating Metacercariae 70°C 0.5 hrs Waikagul, 1991 Heating O. felineus in fish 65°C in the fish core 1 min EFSA, 2010 Irradiation O. felineus in fish 12.5-25 kGy e Naz'mov et al., 2001 Irradiation O. viverrini and C. sinensis in fish 0.15 kGy Sornmani et al., 1993; Chai et al., 1993 a The data presented are the reported treatment conditions that yielded complete inactivation of metacercarie in the bioassays employed; the details of test protocols and results can be found in the appropriate reference; b Viability was markedly reduced but not completely inhibited; c 10 days had no inactivating effect and 18 days had only marginal inactivating effect; d 7 days at -20°C had no inhibitory effect on 10 rats infected but 3 days storage at -20ºC, followed by thawing and re-freezing for 4 days had 100% inhibitory effect on 10 infected rats; e doses much above the recommended levels.
There is an age and gender bias in human infections with C. sinensis and O. felineus. Infection rates of O.
felineus are generally higher in men than in women, and higher in adults than in children (Pozio et al., 2013).
Analysis of clonorchiasis cases have revealed that men 25-55 years old and women over 45 years are the most highly affected groups in Sothern China, Korea, and North Vietnam (De et al., 2003; Chai et al., 2005). Similar infection patterns are reported for O.viverrini outbreaks (Sithithaworn et al., 2007a). This probably reflects behavioral patterns of men and alcohol, as mentioned above. The age influence is evident from clinical studies that indicate that initial infections are acquired at an early age, and repeated exposure results, in the absence of immunity and worm expulsion, in increasing worm burdens and disease (Sithithaworn et al, 2007a). It is important to distinguish human infections acquired in endemic foci, where people frequently consume raw infected fish, from human infections acquired sporadically in endemic foci where the consumption of raw fish is infrequent, as in Italy.
The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
9 1.3.2 Epidemiological role of the intermediate and reservoir hosts The liver fluke C. sinensis utilizes, as first intermediate host, snail species including Alocinma longicornis, Bithynia spp., Melanoides tuberculatus, Parafossalurus spp., and Thiara (WHO, 1995; Sithithaworn et al., 2007a; Hung et al., 2013). The major snail hosts for O. viverrini and O. felineus belong to the genus Bithynia. The prevalence of larval stages in snails is always quite low and does not often exceed 1% in endemic foci (De Liberato et al., 2011).
The second intermediate hosts are mainly fish of the family Cyprinidae. Metacercariae of C. sinensis have been recovered from fish belonging to the genera Acanthogobius, Abbottina, Carassius, Cirrhinus, Crassiodes, Cultrichthys, Cyprinus, Ctenopharyngodon, Erythroculter, Gnathopogon, Hemibarbus, Hemiculter, Hypomesus, Hypophthalmichthys, Ischikauia, Opsariicthys, Oreochromis, Parabramis, Pseudogobio, Pseudorabora, Pungtungia, Rhodeus, Sarcocheilichthys, Toxobramus, Xenocypris, and Zacco (WHO, 1995; Hung et al., 2013).
Metacercariae of O. viverrini were detected in fish of the genera Carassius, Channa, Cyclocheilicthys, Hampala, Esomus, Osteochilus, Puntioplites, and Puntius (WHO, 1995). Metacercariae of O. felineus were detected in fish of the genera Alburnus, Abramis, Aspius, Blicca, Carassius, Chondrostoma, Cobitis, Cyprinus, Gobio, Leucaspius, Leuciscus, Phoxinus, Rutilus, Scardinius, and Tinca (Erhardt et al., 1962; Pozio et al., 2013; Pakharukova and Mordvinov, 2016). There are reports of shrimp found infected with metacercariae that morphologically were identified as C. sinensis (Chen et al., 2010), but follow up studies to verify their identity do not appear to have been conducted. Because shrimp are intermediate hosts for other trematode species, this report must be provisional.
The large number of fish species reported infected with liver fluke metacerariae (see above) implies that these parasites have low host specificity (WHO, 1995; Chai et al., 2005). ) It is therefore important to be aware that in locations where these parasites are found, infections may occur in several fish species, and the relative infection rates may fluctuate independently, an important consideration for epidemiological studies. Importantly, wild fish from clean fresh water sources, such as rivers and reservoirs, are usually preferred for preparing raw fish dishes.
All fish eating mammals, including humans, are potential final hosts of liver flukes. Hosts for C. sinensis and O. viverrini, besides humans, include feral and domestic cats and dogs, and pigs and rats (Rattus); these host species play the crucial role of reservoir host (Mas-Coma and Bargues, 1997; Lun et al., 2005; Lan-Anh et al., 2009; Petney et al., 2013). However, C. sinensis has also been reported from sylvatic animals such as martins, civet cats, badgers, monkeys, weasels, muskrats, foxes and rice rats (Mas-Coma and Bargues, 1997; Hung et al., 2013; Petney et al., 2013). The role of sylvatic reservoir hosts is very important in the epidemiology of O. felineus, which has an even wider spectrum of final hosts; it has been reported from domestic (e.g., cats, dogs, pigs), synanthropic (e.g., muskrats, rats) and 28 wild animals (e.g., otters, polecats, polar and red foxes, sable, seals, wild boar, wolverines) (Mordvinov et al., 2012; Pozio et al., 2013; Chai et al., 2005).
The role of a sylvatic cycle in the epidemiology of C.
sinensis and O. viverrini has not been well studied. A role for wild animal reservoirs is suggested from surveys of fish infections in endemic areas, which frequently demonstrate that the prevalence of liver fluke metacercariae is often higher in wild fish from reservoirs, canals, streams and rice fields than in fish from farm ponds (Mas-Coma and Bargues, 1997; Chai et al., 2005; Phan et al., 2010; Li et al., 2013).
Cats, dogs and pigs are considered to be the most important reservoir hosts in the domestic habitat, because of their wide distribution and large populations (WHO, 1995). Although it is a common assumption that farm households, including humans, dogs, pigs, and cats, play essential roles in liver fluke epidemiology, the greatest infection risk factor for domestic cats and dogs is the common practice of allowing such animals to roam and scavenge freely in the communities (Lan-Anh et al., 2009; Aunpromma et al., 2012; Petney et al., 2013).
Epidemiological studies on the role of cultured and wild-caught fish in liver fluke transmission have demonstrated that sylvatic hosts, both fish and mammals, can sustain the life cycle and risk for humans in the absence of a domestic cycle (Li et al., 2013; Clausen et al., 2015).
The probable explanation for the higher infection rates of C. sinenesis and O. viverinni in wild caught fish maybe related to both biotic and abiotic factors associated with the different aquatic habitats of vector snails, especially Bythinia spp. Recent research in Thailand and Vietnam on the ecology and distribution in various aquatic habitats of a major snail host in the genus Bithynia, revealed a greater abundance in rice fields, streams, and small canals than in lakes and in farm ponds (Brockelman et al., 1986; Ngern-klun et al., 2006; Petney et al., 2012; Doanh and Nawa, 2016). Further, investigations on the abiotic factors affecting the abundance of Bithynia siamemsis gonoiomphalos, the major snail host for O. vivverini, revealed the importance of water depth and temperature, level of dissolved oxygen, pH, and salinity (Nithiuthai et al., 2002). These conditions may not be met in farm ponds that are generally stagnant, warm, and with low oxygen levels.
1.4 Population and Individual Control Measures 1.4.1 Treatment options Prevention and control of human liver fluke infections must begin with an effective education effort directed at enabling consumers to understand the risks associated with eating raw or undercooked fish, regardless of source.
Currently, the major strategies for community prevention and control encompasses fecal examination and treatment of individual cases with praziquantel (25 mg/kg three times daily for 2-3 days), and environmental sanitation by building and use of household latrines (Chai et al., 2005; Sithithaworn et al., 2007a). Mass chemotherapy with praziquantel (40 mg in a single dose) is recommended by The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
10 WHO (1995), but the sustainability of control by this approach is uncertain (Chai et al., 2005; Clausen et al., 2015). O. felineus infection in humans can be treated with praziquantel (25 mg/kg orally 3×/day for 1–2 days) or albendazole (10 mg/kg/day orally in 2 doses for 7–14 days).
The treatment with praziquantel is, as a rule, effective, whereas treatment with albendazole can fail and the individual has to be treated again with another drug (Armignacco et al., 2008, 2013). After the second treatment, eggs are generally not detected in fecal samples; however, in very few cases, eggs have been detected after a period from 5 to 6 months up to 2 years despite treatment of the patients with albendazole (Pozio et al., 2013). Although the effectiveness of treatment can be determined based on the search for eggs in stool samples, very few eggs are produced in cases of unsuccessful treatment; for this reason, ELISA could be used, although the antibody levels decrease very slowly to the cut-off value (Pozio et al., 2013). Vaccines to prevent liver fluke infection in humans have not been developed.
1.4.2 Hygiene measures Emphasis on hygienic measures applicable to households and restaurants may be the most fruitful approach to control in endemic areas, at least for the immediate future. Treatment of fish to inactivate the metacercariae by heating to temperatures of 70 °C or above are effective (WHO, 1995). Table 1 lists parameters for inactivation by salting and freezing, pickling, and irradiation that can be effective (WHO, 1995). Other food preparation methods for household use that may inactivate metacercariae, such as microwaving, smoking, fermenting, and marinating, need further investigation before guidance on their use can be formulated. O. felineus metacercariae, for example, can survive in smoked fish causing human infections (Yossepowitch et al., 2004), nor does marinating does not kill O. felineus metacercariae present in tench muscles (Armignacco et al., 2008, 2013; Traverso et al., 2012).
Education of food handlers on the risk from fish in endemic areas and on the need to keep preparation counters and utensils clean between individual fish preparations is highly recommended. It should be noted that many home freezers can reach temperatures of -6°C or -12°C, , but domestic freezers that can operate at a temperature of -18°C or below are more effective; infected fish need to be frozen in all parts of the products for longer than 24 hours to ensure that all parasites are inactivated.
Investigations in Thailand have demonstrated that low dose irradiation of freshwater fish can prevent infectivity of O. viverrini metacercariae when such fish are prepared in local dishes made from raw or semi processed fish (Bhaibulaya, 1985). Experimental studies should be extended in order to evaluate the usefulness of this control measure in food processing (EFSA, 2010).
2.0 Environmental Occurrence and Persistence of Larval Liver Fluke Stages 2.1. Detection Methods Detection is made most often by microscopy however trained parasitologists are needed to make the identification. Eggs containing the miracidia of liver flukes can be detected in the feces by using standard fecal exam methods (e.g., Kato-Katz technic). Fecal deposits picked up from the ground adjacent to fish ponds, rice paddies, reservoirs and streams, and from latrines and pig pens, can be collected and tested. This approach has been successful in identifying reservoir hosts with aquaculture activities and assessing their role in the epidemiology of fishborne flukes (Lan-Anh et al., 2009).
Possible ongoing transmission may be detected by examining snails collected from fishponds and local waterbodies and examining them for the presence of opithorchid sporocysts and/or rediae, and for cercariae.
This can be done either by crushing the snail and viewing the remnants under a stereomicroscope or by allowing the snails to shed cercariae directly into water-filled containers.
However, because snails may be infected with other trematodes with pleurolophocercous type cercaria (Figure 4), especially heterophyid intestinal flukes, a specific identification of liver fluke cercariae may not be possible.
Examination of local fish can also provide an indication of liver fluke transmission in the area. Fish tissue can be examined directly by microscope for metacercariae (Figure 4) or, preferably, after pepsin digestion to free any metacercariae present before examination (WHO, 1995; Thu et al., 2007; De Liberato et al., 2011).
Use of molecular methods such as PCR, can make detection more reliable (Jeon et al., 2012). Distinguishing liver fluke eggs from the intestinal heterophyid fluke eggs can also be difficult (Ditrich et al., 1992), and molecular methods applicable to egg identification have also been developed (Sato et al., 2009; Sanpool et al., 2012; Armignacco et al., 2008).
2.2 Environmental Contamination with Eggs There are no data on the occurrence of liver fluke eggs in sewage or various types of polluted waters. The number of eggs per gram of human feces can be used to roughly calculate potential egg concentration in sewage. In a study carried out in a focus of clonorchiasis of South Korea, the mean concentration of eggs was reported to be 2.8 x 10 3 per gram of human feces, with a range of 12 to 6.6x10 4 per gram (Kim et al., 2011). On average, humans excrete approximately 100 grams of feces per person per day and in highly endemic foci of clonorchiasis the prevalence of infection can reach 70%. Eggs could sediment out into the solids, such as sludge or carried along in wet sewage, and if these waste materials are emptied into water bodies (rice fields, ponds, streams, canals, or rivers) that contain suitable snail hosts and fish, the liver fluke life cycle could be sustained.
2.3 Survival (Persistence) of Eggs in the Environment The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
11 Because of a lack of a desiccation-resistant protective coating, trematode eggs and cercariae are extremely fragile.. Limited studies under laboratory conditions have yielded some information on their survival. Faust and Khaw (1927) and Komiya (1966) reported that C. sinensis eggs survived in isotonic solution at 2°C - 4°C for up to 3 months, and at 26 °C for up to 1 month. In fresh night soil, the survival time was 2 days at 25 °C; survival decreased with age of the night soil. Drodzdov (1962) observed survival of O. felineus eggs for 160 days in river water held at 0 °C - 5 °C. Field research on egg persistence in soil and water under natural environmental conditions is needed.
Similarly, studies on the survival of eggs in sewage, sludge, surface water, wastewater and irrigation water are insufficient to draw any conclusions.
3.0 Reducing Environmental Contamination with Liver Fluke Eggs, Snail Intermediate Hosts, and Cercariae by Sanitation Measures 3.1 Education and Community-based Actions Successful control of these parasitic trematodes requires reducing the probability of transmission.
Modifying or breaking the transmission cycle can occur at any stage of the parasite’s life cycle, but both snail and fish infections have proven difficult to control in natural habitats (Sithithaworn et al., 2007a). Therefore, most control programs aim at reducing and interrupting transmission at the reservoir host (including humans) level.
The approaches to accomplishing this differ between the three liver fluke species. For examples, wild mammals play a very important role in maintaining O. felineus endemicity (Pozio et al., 2013). In the case of C. sinensis and O.
viverrini, uncontrolled free-roaming cats and dogs are important in sustaining their life cycles; the importance of wild mammal reservoirs in the epidemiology of these flukes has not been adequately investigated but may be significant. Because of these non-human hosts, and the importance of wild-caught fish intermediate hosts, the most important control intervention is the education of consumers in endemic areas. Education of the relevant communities on the epidemiology and health consequences of liver flukes is a key component of any control program for all three species (WHO, 1995; Jongsuksuntigul and Imsomboon, 1997, 2003). Health education both at the village and school levels is of major significance, because infection is often unapparent and CCA only develops after many years (Sithithaworn et al., 2007a). Education should be aimed at explaining the pathology of the disease and in particular, its association with CCA, and how the parasite is transmitted by various final and intermediate hosts. It is particularly important to stress the role that eating raw or partially cooked fish plays in disease transmission (WHO, 1995; Jongsuksuntigul and Imsomboon, 2003).
An example of the impact of an education program is the experience in Thailand. Education on the risks of infection led to the frequency of consumption of raw, or partially cooked or fermented fish decreasing from 14% in 1990 to 7% in 1994 (Jongsuksuntigul and Imsomboon, 1997). In addition to education programs, which are aimed at reducing the risk of infection, anthelminthic treatment is still required to reduce the output of eggs from infected individuals into the environment. Because of the lack of acquired protective immunity in human, dog and cat infections, reinfection of people in endemic areas is likely if they continue to eat raw fish (Sornmani et al., 1984; Upatham et al., 1988; Clausen et al., 2015).
3.2 Control of Snails There are many methods employed or proposed to control snails, including chemical control (molluscicides), physical control (Hoffman, 1970; Khamboonraung et al., 1997), ecological control (Wang et al. 2007), and biological control (Hung et al., 2014). Although these methods are applicable to aquaculture ponds, control of snails in marshes, rivers, reservoirs and rice paddies is impractical and may be environmentally harmful. Many molluscicides, such as copper sulfate (Reddy et al., 2004), endosulfan (Otludil et al., 2004), and bayluscide (Dai et al., 2010), have been used but they have toxic effects on fish, water plants, and small organisms, and in modern aquaculture systems they are either prohibited or discouraged. Biobased molluscicides, including plant components, may have less toxic effects on humans, aquatic animals, and plants, and research on them should be encouraged, especially their cost-effectiveness. Studies on biological control with predatory fish (on snails) such as black carp have recently shown promise, but this approach is still in the early phase of research and development (Hung et al., 2013).
3.3 Sanitation Technologies There are no data available on removal of the eggs by wastewater treatment. Their small size (average 27 X 15 μm) may account for their ability to escape removal from water by standard filtration systems or sedimentation. Also there are very few recommendations on standard sanitation measures for the control of human liver flukes, the lack of which may be attributed to the unique epidemiology of human infections and the ecology of the liver flukes. As discussed in Section 1.3.2, the liver flukes have a wide range of both domestic and sylvatic reservoir hosts, and surveys of snails and fish for liver fluke metacercariae consistently reveal a major cycling of liver flukes through wild, non-cultured fish residing in lakes, reservoirs, streams, rivers, marshes and rice fields (Sithithaworn et al., 2007b; Thu et al., 2007; Li et al., 2013). In contrast, surveys of fish produced in aquaculture ponds generally reveal low prevalence of C. sinensis, and O. viverrini (Chi et al., 2008; Thien et al., 2007; Li et al., 2013; Chen et al., 2010; Pitaksakulrat et al., 2013), even when the human prevalence of C. sinensis, for example, in a community is high (Dung et al., 2007). Further, the human prevalence of liver flukes in aquaculture systems may be overestimated because of diagnostic confusion between their eggs with those of fishborne intestinal flukes (Heterophyidae) (Ditrich et al., 1992; De et al., 2003; Chi et al., 2008; Thien et al., 2007).
The major snail vectors of liver flukes (e.g., Bythinia spp., Parafossarulus spp.), however, are not common in fish ponds, preferring instead moving water associated with The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
12 lakes, reservoirs, streams and canals (Petney et al., 2012).
In contrast, the major snail host for the heterophyids, Melanoidies tuberculata, is very common in aquaculture ponds (Dung et al 2010; Madsen and Hung 2014). Because most people in endemic areas consume both wild caught and cultured fish, it is difficult to determine the actual source of the infected fish. Importantly, wild fish are readily available in local markets and they are often preferred for raw fish dishes because they are obtained from water that is cleaner and less polluted than that normally found in farm ponds. The importance of wild caught fish in the transmission of liver flukes underscores the difficulty of applying standard sanitation approaches for prevention and control of liver flukes.
3.4 Role for Sanitation Interventions in Aquaculture When there is evidence that significant transmission from cultured fish is an important source of human infections, there are interventions that can be implemented in aquaculture systems to control fish-borne liver and intestinal flukes. These interventions, developed in extensive field trials (Khambooraung et al., 1997; Clausen et al., 2015), are designed to control fish infections by eliminating egg contamination of the ponds and reducing the snail populations. They require a strong program of farmer education and improvements in management practices and pond infrastructure, as follows: Education: 1.
Before initiating the program’s changes to pond infrastructure and management, the farmers should receive training on the basics of the biology and epidemiology of fishborne zoonotic trematodes and the health benefits to themselves and their families for the prescriptive prevention and control interventions.
Household members must be encouraged to avoid eating raw fish and to prevent consumption of raw or dead fish by farm animals, including dogs, cats and pigs.
Interventions to prevent egg and host fecal 2.
contamination of the pond environment require: Modification of pond embankments to prevent surface water run-off from entering the pond by installing a cement barrier at least 10-15 cm above the bank top.
Installation of fencing to exclude reservoir hosts, especially cats and dogs, from the immediate pond environment.
Prevent discharge into the pond of all waste from latrines and livestock pens.
Interventions to prevent and control snails in the 3.
fish pond require that: Before restocking the ponds with juvenile fish between the harvests, the pond should be drained and dried completely for at least 5 days. The top 3-5 cm of bottom mud should be removed to a site not adjacent to the pond.
All vegetation in ponds should be removed, and a liner (plastic or cement) applied to the banks of the ponds.
Aquatic vegetation must be removed at least 3 m from the water intake portal (the inlet for pond water replenishment) and all in-coming water filtered through a 5 mm mesh screen before entering the pond.
Additional public health actions for endemic 4.
communities. Educate households: on the risk from inadequately prepared fish food; to avoid contaminating water bodies with human and animal waste to the extent possible; on the signs and symptoms of liver fluke infection, and to seek medical treatment when infection is suspected.
The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
13 References Andrews, RH, Sithithaworn, P and Petney, TN (2008). Opisthorchis viverrini: an underestimated parasite in world health.
Trends Parasitol. 24, pp. 497–501.
Armignacco, O, Caterini, L, Marucci, G, Ferri, F, Bernardini, G, G Raponi, N et al. (2008). Human illnesses caused by Opisthorchis felineus flukes, Italy. Emerg Infect Dis. 14, pp. 1902–1905.
Armignacco, O, Ferri, F, Gomez-Morales, MA, Caterini, L and Pozio, E (2013). Cryptic and asymptomatic Opisthorchis felineus infections. Am J Trop Med Hyg. 88, pp. 364–366.
Aunpromma, S, Tangkawattana, P, Papirom, P, Kanjampa, P, Tesana, S, Sripa, B et al. (2012). High prevalence of Opisthorchis viverrini infection in reservoir hosts in four districts of Khon Kaen Province, an opisthorchiasis endemic area of Thailand. Parasitol Int. 61, pp. 60–64.
Bhaibulaya, M (1985). Human infection with Bertiella studeri in Thailand. Southeast Asian J Trop Med Public Health. 16, pp. 505–507.
Brockelman, WY, Upatham, ES, Viyanant, V, Ardsungnoen, S and Chantanawat, R (1986). Field studies on the transmission of the human liver fluke, Opisthorchis viverrini, in northeast Thailand: population changes of the snail intermediate host.
Int J Parasitol. 16, pp. 545–552.
Chai, JY, Huh, S,, Kook, J, Jung, KC, Park, EC et al. (1993). An epidemiological study of metagonimiasis along the upper reaches of the Namhan River. Korean J Parasitol. 31, pp. 99–108.
Chai, JY, Murrell, KD and Lymbery, AJ (2005). Fish-borne parasitic zoonoses: status and issues. Int J Parasitol. 35, pp.
1233–1254.
Chen, D, Chen, J, Huang, J, Chen, X, Feng, D, Liang, B et al. (2010). Epidemiological investigation of Clonorchis sinensis infection in freshwater fishes in the Pearl River Delta. Parasitol Res. 107, pp. 835–839.
Chi, T, Dalsgaard, A, Turnbull, J, Tuan, P and Murrell, KD (2008). Prevalence of zoonotic trematodes in fish from a Vietnamese fish-farming community. J Parasitol. 94, pp. 423–428.
Clausen, JH, Madsen, H, Van, PT, Dalsgaard, A and Murrell, KD (2015). Integrated parasite management: path to sustainable control of fishborne trematodes in aquaculture. Trends Parasitol. 31, pp. 8–15.
Coles, GC,, Wang, W and Liang, YS (2010). Toxicity of a novel suspension concentrate of niclosamide against Biomphalaria glabrata. Trans R Soc Trop Med Hyg. 104, pp. 304–306.
De Liberato, C, Scaramozzino, P, Brozzi, A, Lorenzetti, R, Di Cave, D, Martini, E et al. (2011). Investigation on Opisthorchis felineus occurrence and life cycle in Italy. Vet Parasitol. 177, pp. 67–71.
De, NV, Murrell, KD, Cong, LD, Cam, P, Chau, LV, Toan, N et al. (2003). The food-borne trematode zoonoses of Vietnam.
Southeast J Trop Med Pub Health. Southeast Asian J Trop Med Public Health. 34, pp. 12–34.
Ditrich, O, Nasincová, V, Scholz, T and Giboda, M (1992). Larval stages of medically important flukes (Trematoda) from Vientiane province, Laos. Part II. Cercariae. Ann Parasitol Hum Comp. 67, pp. 75–81.
Doanha, PN and Nawab, Y (2016). Clonorchis sinensis and Opisthorchis spp. in Vietnam: current status and prospects.
Trans R Soc Trop Med Hyg. 110, pp. 13–20.
Drozdov, VN (1962). Survival of Opisthorchis felineus eggs (Rivolta, 1884) in various environmental conditions. Med Parazitol (Mosk). 31, pp. 323–336.
Dung, BT, Madsen, H and DT, T (2010). Distribution of freshwater snails in family-based VAC ponds and associated The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
14 waterbodies with special reference to intermediate hosts of fish-borne zoonotic trematodes in Nam Dinh Province, Vietnam. Acta Trop. 116, pp. 15–23.
Dung, D, Van De, N, Waikagul, J, Dalsgaard, A, Chai, JY, Sohn, WM et al. (2007). Fishborne zoonotic intestinal trematodes, Vietnam. Emerg Infect Dis. 13, pp. 1828–1833.
EFSA (2010). Scientific Opinion on risk assessment of parasites in fishery products. Atlantic. 8, Erhardt, A, Germer, WD and Hörning, B (1962). Die Opisthorchiasis, hervorgerufen durch den Katzenleberegel Opisthorchis felineus (Riv.). Parasitologische Schriftenreihe. Veb Gustav Fischer Verlag, Jena. 15, Fan, PC (1998). Viability of metacercariae of Clonorchis sinensis in frozen or salted freshwater fish. International Journal for Parasitology. 28, pp. 603–605.
Fattakhov, RG (1989). Low- temperature regimes for decontamination of fish of the larvae Opisthorchis (in Russian).
Medicine Parazitology (Mosk). 5, pp. 63–65.
Faust, EC and Khaw, OK (1927). Studies on Clonorchis sinensis (Cobbold).. Am J Hyg Monographic Series. 8, Fürst, T, Keiser, J and Utzinger, J (2011). Global burden of human food-borne trematodiasis: a systematic review and metaanalysis. Lancet Infect Dis. 12, pp. 210–221.
Hoffman, GL (1970). Control methods for snail-borne zoonoses. J Wildl Dis. 6, pp. 262–265.
Hung, NM, Madsen, H and Fried, B (2013). Global status of fish-borne zoonotic trematodiasis in humans. Acta Parasitol.
58, pp. 231–258.
Jeon, HK, Lee, D, Park, H, Min, DY, Rim, HJ, Zhang, H et al. (2012). Human infections with liver and minute intestinal flukes in Guangxi, China: analysis by DNA sequencing, ultrasonography, and immunoaffinity chromatography. Korean J Parasitol. 50, pp. 391–394.
Jongsuksuntigul, P and Imsomboon, T (2003). Opisthorchiasis control in Thailand. Acta Trop. 88, pp. 229–232.
Jongsuksuntigul, P and Imsomboon, T (1997). The impact of a decade long opisthorchiasis control program in northeastern Thailand. Social Sciences and Medicine. 28, pp. 551–557.
Keiser, J and Utzinger, J (2005). Emerging foodborne trematodiasis. Emerg Infect Dis. 11, pp. 1507–1514.
Khamboonraung, C, Keawvichit, R, Wongworapat, K, Suwanrangsi, S, Hongpromyart, M, Sukhawat, K et al.
(1997). Application of hazard analysis critical control point (HACCP) as a possible control measure for Opisthorchis viverrini infection in cultured carp (Puntius gonionotus). Southeast Asian J Trop Med Public Health. 28, pp. 65–72.
Kim, JH, Choi, MH, Bae, YM, Oh, JK, Lim, MK and Hong, ST (2011). Correlation between discharged worms and fecal egg counts in human clonorchiasis. PLoS Negl Trop Dis. 5, pp. e1339.
Komiya, Y (1966). Clonorchis and clonorchiasis. Adv Parasitol. 4, pp. 53–106.
Kruatrachue, M, Chitramvong, YP, Upatham, ES, Vichari, S and Viyanant, V (1982). Effects of physico-chemical factors on the infection of hamsters by metacercariae of Opisthorchis viverrini. Southeast Asian J Trop Med Public Health. 13, pp.
614–617.
Lai, DH, Hong, XK, Su, BX, Liang, C, Hide, G, Zhang, X et al. (2016). Current status of Clonorchis sinensis and clonorchiasis in China. Trans R Soc Trop Med Hyg. 110, pp. 21–27.
Lan-Anh, NT, Phuong, NT, Murrell, KD, Johansen, MV, Dalsgaard, A, Thu, LT et al. (2009). Animal reservoir hosts and fish-borne zoonotic trematode infections on fish farms, Vietnam. Emerg Infect Dis. 15, pp. 540–546.
The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
15 Li, K, Clausen, JH, Murrell, KD, Liu, L and Dalsgaard, A (2013). Risks for fishborne zoonotic trematodes in Tilapia production systems in Guangdong Province, China. Vet Parasitol. 198, pp. 223–229.
Lloyd, S and Soulsby, L (1998). Other cestode infections. Zoonoses: biology, clinical practice and public health control. (, Soulsby, L and Simpson, DIH, ed.).Oxford, Oxford University Presspp. 653–656.
Lun, ZR, Gasser, RB, Lai, DH, Li, AX, Zhu, XQ, Yu, XB et al. (2005). Clonorchiasis: a key foodborne zoonosis in China.
Lancet Infect Dis. 5, pp. 31–41.
Madsen, H and Hung, NM (2014). An overview of freshwater snails in Asia with main focus on Vietnam. Acta Trop. 140, pp.
105–117.
Mas-Coma, S and Bargues, MD (1997). Human liver flukes: A review. Res Rev Parasitol. 57, pp. 145–218.
Mordvinov, VA, Yurlova, NI, Ogorodova, LM and Katokhin, AV (2012). Opisthorchis felineus and Metorchis bilis are the main agents of liver fluke infection of humans in Russia. Parasitol Int. 61, pp. 25–31.
Naz'mov, VP, Fedorov, KP, Serbin, VI and Auslender, VL (2001). Decontamination of freshly caught fish infected with Opisthorchis metacercariae by fast electron irradiation. Med Parazitol (Mosk). 2, pp. 26-27.
Ngern-klun, R, Sukantason, K, Tesna, S, Snpakdee, D and Irvine, K (2006). Field investigations of Bithynia funiculata, intermediate host of Opisthorchis viverrini in northern Thailand. Southeast Asian J Trop Med Public Health. 37, pp.
662–672.
Nithiuthai, S, Wiwanitkit, V, Suwansaksri, J and Chaengphukeaw, P (2002). A survey of trematode cercariae in Bithynia goniomphalos in northeast Thailand. Southeast Asian J Trop Med Public Health. 33, pp. 106–109.
Food and Agriculture Organization of the United Nations World Health Organization (2014). Multicriteria-based ranking for risk management of food-borne parasites. Microbiological Risk Assessment Series. Rome. 23, pp. 302.
Otludil, B, Cengiz, EI, Yildirim, MZ, Unver, O and Unlü, E (2004). The effects of endosulfan on the great ramshorn snail Planorbarius corneus (Gastropoda, Pulmonata): a histopathological study. Chemosphere. 56, pp. 707–716.
Pakharukova, MY and Mordinov, VA (2016). The liver fluke Opisthorchis felineus: biology, epidemiology and carcinogenic potential. Trans R Soc Trop Med Hyg. 110, pp. 28–36.
Petney, T, Sithithaworn, P, Andrews, R, Kiatsopit, N, Tesana, S, Grundy-Warr, C et al. (2012). The ecology of the Bithynia first intermediate hosts of Opisthorchis viverrini. Parasitol Int. 61, pp. 38–45.
Petney, TN, Andrews, RH, Saijuntha, W, Wenz-Mucke, A and Sithaworn, P (2013). The zoonotic fishborne liver flukes Clonorchis sinensis, Opithorchis felineus, and O. viverrini. Int J Parasitol. 43, pp. 1032–1046.
Phan, VT, Ersbøll, AK, Bui, TQ, Nguyen, HT, Murrell, D and Dalsgaard, A (2010). Fish-borne zoonotic trematodes in cultured and wild-caught freshwater fish from the Red River Delta, Vietnam. Vector Borne Zoonotic Dis. 10, pp. 861–866.
Pitaksakulrat, O, Sithithaworn, P, Laoprom, N, Laha, T, Petney, TN and Andrews, RH (2013). A cross-sectional study on the potential transmission of the carcinogenic liver fluke Opisthorchis viverrini and other fishborne zoonotic trematodes by aquaculture fish. Foodborne Pathol Dis. 10, pp. 35–41.
Pozio, E, Armignacco, O, Ferri, F and Morales, MAGomez (2013). Opisthorchis felineus, an emerging infection in Italy and its implication for the European Union. Acta Trop. 126, pp. 54–62.
Pozio, E and Morales, MAGomez (2014). Clonorchiasis and Opisthorchiasis. Helminth Infections and their Impact on Global Public Health. (Bruschi, F, ed.)., Springer-Verlag Wienpp. 123–152.
Qian, MB, Utzinger, J, Keiser, J and Zhou, XN (2016). Clonorchiasis. Lancet2. 387, pp. 800–810.
Reddy, A, Ponder, EL and Fried, B (2004). . Effects of copper sulfate toxicity on cercariae and metacercariae of The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
16 Echinostoma caproni and Echinostoma trivolvis and on the survival of Biomphalaria glabrata snails. J Parasitol. 90, pp.
1332–1337.
Rim, HJ, Lee, YM, Lee, JS and Joo, KH (1982). Therapeutic field trial with praziquantel (Biltricide(R)) in a rural population infected with Clonorchis sinensis. Kisaengchunghak Chapchi. 20, pp. 1–8.
Rim, HJ. (1990). Clonorchiasis in Korea. Kisaengchunghak Chapchi. 28S, pp. 63–78.
Sanpool, O, Intapan, PM, Thanchomnang, T, Janwan, P, Lulitanond, V, Doanh, PN et al. (2012). Rapid detection and differentiation of Clonorchis sinensis and Opisthorchis viverrini eggs in human fecal samples using a duplex real-time fluorescence resonance energy transfer PCR and melting curve analysis. Parasitol Res. 111, pp. 89–96.
Sato, M, Thaenkham, U, Dekumyoy, P and Waikagul, J (2009). Discrimination of O. viverrini, C. sinensis, H. pumilio and H.
taichui using nuclear DNA-based PCR targeting ribosomal DNA ITS regions. Acta Trop. 109, pp. 81–83.
Sayasone, S, Odermatt, P, Phoumindr, N, Vongsaravane, X, Sensombath, V, Phetsouvanh, R et al. (2007). Epidemiology of Opisthorchis viverrini in a rural district of southern Lao PDR. Trans R Soc Trop Med Hyg. 101, pp. 40–47.
Scholtz, T (2008). Family Opisthorchiidae. Keys to the Trematoda, Vol.3. (Bray, RA, Gibson, A and Jones, DI, ed.).London, CAB Internat and Nat Hist Museumpp. 9–49.
Sithithaworn, P, Andrews, RH, Nguyen, VD, Wongsaroj, T, Sinuon, M, Odermatt, P et al. (2012). The current status of opisthorchiasis and clonorchiasis in the Mekong Basin. Parasitol Int. 61, pp. 10–16.
Sithithaworn, P and Haswell-Elkins, M (2003). Epidemiology of Opisthorchis viverrini. Acta Trop. 88, pp. 187–194.
Sithithaworn, P, Nuchjungreed, C, Srisawangwong, T, Ando, K, Petney, TN, Chilton, NB et al. (2007). Genetic variation in Opisthorchis viverrini (Trematoda: Opisthorchiidae) from northeast Thailand and Laos PDR based on random amplified polymorphic DNA analyses. Parasitol Res. 100, pp. 613–617.
Sithithaworn, P, Sukavat, K, Vannachone, B, Sophonphong, K, Ben-Embarek, P, Petney, T et al. (2006). Epidemiology of food-borne trematodes and other parasite infections in a fishing community on the Nam Ngum reservoir, Lao PDR.
Southeast Asian J Trop Med Public Health. 37, pp. 1083–1090.
Sithithaworn, P, Yongvanit, P, Tesna, S and Pairojkul, C (2007). Liver flukes. Foodborne Parasitic Zoonoses. Fish and Plant-borne Parasites. (Murrell, KD and Fried, B, ed.)., Springerpp. 3–52.
Sohn, WM, Shin, EH, Yong, TS, Eom, KS, Jeong, HG, Sinuon, M et al. (2011). Adult Opisthorchis viverrini flukes in humans, Takeo, Cambodia. Emerg Infect Dis. 17, pp. 1302–1304.
Sohn, WM, Yong, TS, Eom, KS, Pyo, KH, Lee, MY, Lim, H et al. (2012). Prevalence of Opisthorchis viverrini infection in humans and fish in Kratie Province, Cambodia. Acta Trop. 124, pp. 215–220.
Sornmani, S, Impand, P and Bundising, C (1993). Irradiation of fish to control the infectivity of the liver fluke Opisthorchis viverrini. Use of Irradiation to Control Infectivity of Food-borne Parasites. Vienna, International Atomic Energy Agencypp.
115–127.
Sornmani, S, Vivatanasesth, P, Impand, P, Phatihatakorn, W, Sitabutra, P and Schelp, FP (1984). Infection and re-infection rates of opisthorchiasis in the water resource development area of Nam Pong project, Khon Kaen Province, northeast Thailand. Ann Trop Med Parasitol. 78, pp. 649–656.
Tesana, S, Kaewkes, S and Phinlaor, S (1986). Infectivity and survivorship of Opisthorchis viverrini metacercariae in fermented fish. Journal of Parasitology of the Tropical Medical Association Thailand. 9, pp. 21–30.
Thien, PC, Dalsgaard, A, Thanh, BN, Olsen, A and Murrell, KD (2007). Prevalence of fishborne zoonotic parasites in important cultured fish species in the Mekong Delta, Vietnam. Parasitol Res. 101, pp. 1277–1284.
Thu, ND, Dalsgaard, A, Loan, LT and Murrell, KD (2007). Survey for zoonotic liver and intestinal trematode metacercariae The Liver Flukes: Clonorchis sinensis, Opisthorchis spp, and Metorchis spp.
17 in cultured and wild fish in An Giang Province, Vietnam. Korean J Parasitol. 45, pp. 45–54.
Traverso, A, Repetto, E, Magnani, S, Meloni, T, Natrella, M, Marchisio, P et al. (2012). A large outbreak of Opisthorchis felineus in Italy suggests that opisthorchiasis develops as a febrile eosinophilic syndrome with cholestasis rather than a hepatitis-like syndrome. Eur J Clin Microbiol Infect Dis. 31, pp. 1089–1093.
Upatham, ES, Viyanant, V, Brockelman, WY, Kurathong, S, Lee, P and Kraengraeng, R (1988). Rate of re-infection by Opisthorchis viverrini in an endemic northeast Thai community after chemotherapy. Int J Parasitol. 18, pp. 643–649.
WHO (1999). Food safety issues associated with products from aquaculture. Geneva, WHO Tecg Report Serpp. 9–12, 45–49.
WHO (1995). Control of foodborne trematode infections. Geneva, WHO Tech Report Serpp. 849.
WHO (1979). Parasitic zoonoses. Report of a WHO Expert Committee with Participation of FAO. Geneva, World Health Organization. WHO Technical Report Series, No. 637 Waikagul, J (1991). Intestinal fluke infections in Southeast Asia. Southeast Asia J Trop Med Public Health. 22, pp. 158–162.
Wang, QP, Chen, XG and Lun, ZR (2007). Invasive freshwater snail, China. Emerg Infect Dis. 13, pp. 1119–1120.
Yong, TS, Shin, EH, Chai, JY, Sohn, WM, Eom, KS, Lee, DM et al. (2012). High prevalence of Opisthorchis viverrini infection in a riparian population in Takeo Province, Cambodia. Korean J Parasitol. 50, pp. 173–176.
Yossepowitch, O, Gotesman, T, Assous, M, Marva, E, Zimlichman, R and Dan, M (2004). Opisthorchiasis from imported raw fish. Emerg Infect Dis. 10, pp. 2122–2126. |
17285 | https://link.springer.com/article/10.1007/BF00970424 | Chebyshev polynomials of the second kind | Lithuanian Mathematical Journal
Your privacy, your choice
We use essential cookies to make sure the site can function. We also use optional cookies for advertising, personalisation of content, usage analysis, and social media.
By accepting optional cookies, you consent to the processing of your personal data - including transfers to third parties. Some third parties are outside of the European Economic Area, with varying standards of data protection.
See our privacy policy for more information on the use of your personal data.
Manage preferences for further information and to change your choices.
Accept all cookies
Skip to main content
Log in
Menu
Find a journalPublish with usTrack your research
Search
Cart
Search
Search by keyword or author
Search
Navigation
Find a journal
Publish with us
Track your research
Home
Lithuanian Mathematical Journal
Article
Chebyshev polynomials of the second kind
Published: January 1993
Volume 33,pages 41–43, (1993)
Cite this article
Lithuanian Mathematical JournalAims and scopeSubmit manuscript
F. Luquin
208 Accesses
1 Citation
Explore all metrics
This is a preview of subscription content, log in via an institution to check access.
Access this article
Log in via an institution
Subscribe and save
Springer+
from $39.99 /Month
Starting from 10 chapters or articles per month
Access and download chapters and articles from more than 300k books and 2,500 journals
Cancel anytime
View plans
Buy Now
Buy article PDF USD 39.95
Price excludes VAT (USA)
Tax calculation will be finalised during checkout.
Instant access to the full article PDF.
Institutional subscriptions
References
A. O. Gel'fond, On polynomials having minimal deviation from zero together with their derivatives,Dokl. Akad. Nauk SSSR,96 (1954).
F. Luquin, Monic polynomials with extremal properties,J. Approx. Theory (to appear).
I. P. Natanson,Theory of Functions of a Real Variable, I, II, Ungar, New York (1965).
Google Scholar
T. J. Rivlin,The Chebyshev Polynomials, Wiley, New York (1974).
Google Scholar
G. Szegö,Orthogonal Polynomials, Amer. Math. Soc., Providence, RI (1939).
Google Scholar
Download references
Authors
1. F. LuquinView author publications Search author on:PubMedGoogle Scholar
Additional information
Supported by the University of the Basque Country.
Universidad del Pais Vasco Facultad de Ciencias Departamento de Matemáticas Apdo. 644 Bilbao, España. Published in Lietuvos Matematikos Rinkinys, Vol. 33, No. 1, pp. 52–54, January–March, 1993.
Rights and permissions
Reprints and permissions
About this article
Cite this article
Luquin, F. Chebyshev polynomials of the second kind. Lith Math J33, 41–43 (1993).
Download citation
Received: 28 May 1992
Issue Date: January 1993
DOI:
Share this article
Anyone you share the following link with will be able to read this content:
Get shareable link
Sorry, a shareable link is not currently available for this article.
Copy shareable link to clipboard
Provided by the Springer Nature SharedIt content-sharing initiative
Keywords
Chebyshev Polynomial
Access this article
Log in via an institution
Subscribe and save
Springer+
from $39.99 /Month
Starting from 10 chapters or articles per month
Access and download chapters and articles from more than 300k books and 2,500 journals
Cancel anytime
View plans
Buy Now
Buy article PDF USD 39.95
Price excludes VAT (USA)
Tax calculation will be finalised during checkout.
Instant access to the full article PDF.
Institutional subscriptions
Sections
References
References
Additional information
Rights and permissions
About this article
Advertisement
A. O. Gel'fond, On polynomials having minimal deviation from zero together with their derivatives,Dokl. Akad. Nauk SSSR,96 (1954).
F. Luquin, Monic polynomials with extremal properties,J. Approx. Theory (to appear).
I. P. Natanson,Theory of Functions of a Real Variable, I, II, Ungar, New York (1965).
Google Scholar
T. J. Rivlin,The Chebyshev Polynomials, Wiley, New York (1974).
Google Scholar
G. Szegö,Orthogonal Polynomials, Amer. Math. Soc., Providence, RI (1939).
Google Scholar
Discover content
Journals A-Z
Books A-Z
Publish with us
Journal finder
Publish your research
Language editing
Open access publishing
Products and services
Our products
Librarians
Societies
Partners and advertisers
Our brands
Springer
Nature Portfolio
BMC
Palgrave Macmillan
Apress
Discover
Your privacy choices/Manage cookies
Your US state privacy rights
Accessibility statement
Terms and conditions
Privacy policy
Help and support
Legal notice
Cancel contracts here
34.96.49.128
Not affiliated
© 2025 Springer Nature |
17286 | http://www.industrial-maths.com/ms4414_circular_motion.pdf | Circular Motion MS4414 Theoretical Mechanics William Lee Contents 1 Uniform Circular Motion 1 2 Circular motion and Vectors 3 3 Circular motion and complex numbers 4 4 Uniform Circular Motion and Acceleration 5 5 Angular Acceleration 6 Microsoft Hiring Question You are in a boat in the exact centre of a perfectly circular lake. There is a goblin on the shore of the lake. The goblin wants to do bad things to you.
The goblin can’t swim and doesn’t have a boat. Provided you can make it to the shore—and the goblin isn’t there, waiting to grab you—you can always outrun him on land and get away.
The problem is this: The goblin can run four times as fast as the maximum speed of your boat. He has perfect eyesight, never sleeps and is extremely logical. He will do everything in his power to catch you. How would you escape the goblin?
1 MS4414, Theoretical Mechanics 2 1 Uniform Circular Motion A particle undergoing uniform circular motion has Cartesian coordinates x = R cos (ωt + θ0) y = R sin (ωt + θ0) , and polar coordinates r = R θ = ωt + θ0 Angular Velocity Angular velocity, usually denoted ω is the rate of change of the angular coordinate, θ.
ω = dθ dt = ˙ θ Angular Frequency Angular frequency is the inverse of the time for a single revolution.
Speed the magnitude of the particle’s (linear) velocity.
If it takes a particle T seconds to make a complete revolution round a circle of radius R: • The angular frequency is T −1 • In T the particle travels a distance of 2πR, therefore its speed is 2πR/T • There are 2π radians in a complete revolution so the particle moves through an angle of 1 radian in T/2π. Its angular velocity is ω = 1/ (T/2π) = 2π/T • Angular velocity, ω, and speed, v, are related by v = rω.
vt ωt www.industrial-maths.com/wlee/ms4414.html William Lee MS4414, Theoretical Mechanics 3 Worked example Which is greater, the velocity of the the Earth’s rotation (at the equator) or the velocity of the Earth’s orbit?
The Earth, which has radius rE = 6.4 × 106 m, completes one rotation in 23 hours 56 minutes and 4 seconds i.e. Tday = 8.62 × 104 s.
angular frequency = 1 Tday = 1 8.62 × 104 = 1.16 × 10−5 Hz angular velocity = 2π Tday = 2π 8.62 × 104 = 7.29 × 10−5 rad s−1 linear velocity = radius × angular velocity = 6.4 × 106 × 7.29 × 10−5 = 467 m s−1 The Earth is RES = 1 AU = 1.5 × 1011 m away from the Sun (on average). It completes an orbit in 1 year i.e. Tyr = 3.16 × 107 s, angular frequency = 1 Tyr = 1 3.16 × 107 = 3.17 × 10−8 Hz angular velocity = 2π Tyr = 2π 3.16 × 107 = 1.99 × 10−8 rad s−1 linear velocity = radius × angular velocity = 1.5 × 1011 × 1.99 × 10−8 = 2.98 × 104 m s−1 Conclusion: The velocity due to the Earth’s orbit is greater.
2 Circular motion and Vectors A small angle δθ may be represented by an vector of magnitude δθ and direction normal to the plane of rotation: δθ. (A large angle cannot be represented by a vector. Why not?) Since ω = lim δt→0 δθ δt = dθ dt only involves a small angle δθ = θ(t + δt) −θ(t), we can convert the above into a vector equation.
ω = dθ dt .
www.industrial-maths.com/wlee/ms4414.html William Lee MS4414, Theoretical Mechanics 4 ω r v We can now write a vector equation for the velocity v = ω × r Remember that r will be changing with time, so the equation could be written as dr dt = ω × r 3 Circular motion and complex numbers Circular motion on a plane can be modelled by taking x and y to be the real and imaginary parts of the complex number z = Rei(ωt+θ0).
It is easy to see that this corresponds to the Cartesian formula for circular motion x = ℜz = R cos (ωt + θ0) y = ℑz = R sin (ωt + θ0) 3.1 Modelling the solar system On interesting case where objects move in (more or less) circular orbits on a plane is the orbits of the planets of the solar system (now Pluto has been demoted).
www.industrial-maths.com/wlee/ms4414.html William Lee MS4414, Theoretical Mechanics 5 The clever part is that move the origin to another planet (e.g. Earth) and see the motions of the other planets relative to that planet all I have to do is subtract the complex number describing the Earth’s orbit from the complex number describing the planet’s orbit.
The Dresden Codex The Dresden codex was left by the Mayan people of central America.
The key to deciphering it was the number 584.
4 Uniform Circular Motion and Acceleration If we take another look at the vector formula relating velocity and angular velocity v = ω × r we see that we can differentiate it to get an acceleration a = dv dt = d (ω) × r dt For uniform circular motion ω is a constant, but r is constantly changing dr dt = v.
a = ω × v The acceleration is orthogonal to both the velocity and the angular velocity: therefore it must be parallel to the radius (but directed inwards). The magnitude of the acceleration can be written as a = ωv = v2 r = ω2r We can see this geometrically.
r(t) v(t) r(t + δt) v(t + δt) v(t + δt) v(t) aδt ωδt ωδt www.industrial-maths.com/wlee/ms4414.html William Lee MS4414, Theoretical Mechanics 6 aδt = v(t + δt) −v(t) where a is directed in the inwards radial direction. The magnitude of the acceleration is a = vω Alternatively and rather neatly, we can calculate the acceleration from the complex number formula. If the components of r are the real and imaginary parts of the complex number z = Reiωt, then the velocity is given by ˙ z = iωReiωt = iωz The factor of i tells us the velocity, ˙ z, is at 90◦to the radius, z. The magnitude of the velocity is given by | ˙ z| = Rω.
The acceleration is given by ¨ z = −ω2Reiωt i.e. the magnitude of the acceleration is |¨ z| = Rω2 and it is directed in the opposite direction to the radius, z.
We will return to acceleration of uniform circular motion when we have learned about New-ton’s laws of motion and gravitation.
5 Angular Acceleration If the angular frequency ω is changing at a constant rate α then: d2θ dt2 = dω dt = dθ dt = ω = θ = where, at t = 0, the angle is θ0, and the angular velocity is ω0.
www.industrial-maths.com/wlee/ms4414.html William Lee MS4414, Theoretical Mechanics 7 Worked Example Tidal friction in the Earth-Moon system transfers angular momentum from the Earth to the Moon: the length of the day on Earth is increasing by 2.4 milliseconds per century and the Moon is moving away from the Earth at a rate of 38.14 millimetres per year. What is the angular acceleration of the Earth?
www.industrial-maths.com/wlee/ms4414.html William Lee |
17287 | https://ghoffarth.wordpress.com/introduction-to-economics/unit-3-supply-and-demand/midpoint-formula-for-elasticity/ | Menu
Extra Practice Problems for Elasticity
Given the following demand schedule:
Demand schedule
| PRICE | QUANTITY | REVENUE |
---
| 15 | 10 | |
| 10 | 55 | |
| 5 | 100 | |
a) Fill in the revenue column; without doing any further computations, is the demand curve elastic or inelastic? Why?
b) Compute the coefficient of elasticity between a price of $5 and of $15 using the midpoint formula.
Company ElastiPad produces tablets competing with the iPad. Six months into the release of their new product, elastoPad, they reduced the price from $200 to $100 to test the market and their theory that they could improve revenues by making the change. It was a huge success! At $200 demand for their elastoPad was 20,000 units, but with the reduced price of $100 demand topped 60,000 units! At $200 revenue was $4 million, whereas at $100 revenue reached $6 million. Using the above midpoint formula, determine if the price change was elastic or inelastic. Show your work.
3.
PRICE ELASTICITY OF SUPPLY EXAMPLE PROBLEM
Given the following data for the supply and demand of movie tickets, calculate the price elasticity of supply when the price changes from $9.00 to $10.00.
We know that the original price is $9 and the new price is $10, so we have Price (Old) =$9 and Price (New) = $10. From the chart we see that the quantity supplied, when the price is $9 is 75 and when the price is $10 is 105.
4.
Answer Key
Total Revenue Column: 150, 550, 500
1a. since revenue increased when price declines, demand will be elastic
1b. The percentage change in quantity, using the midpoint formula is: (100 – 10) / 55 = 90/55 = 1.636.
The coefficient of elasticity is the percentage change in quantity divided by the percentage change in price, or – 90/55 = – 1.636. Since this is greater than 1 demand is elasticover this range, as we expected from the increase in revenue.
2.
E =
[(60,000 – 20,000) / [(60,000 + 20,000)/2]]
————————————————————
[($200 – $100)/[($200 + $100)/2]]
E = (40,000/40,000) / (100/150)
E = 1 / .67
E = 1.49: This product is considered elastic because E > 1
3.
Price (Old) = $9
Price (New) = $10
Quantity Supplied (Old) = 75
Quantity Supplied (New) = 105
To calculate the price elasticity of supply for movie tickets, we need to know what the percentage change in quantity supplied is and what the percentage change in price is.
CALCULATING THE PERCENTAGE CHANGE IN QUANTITY SUPPLY
The formula used to calculate the percentage change in quantity supplied is:
[Quantity Supplied (New) –Quantity Supplied (Old)] /Quantity Supplied (Old))
[105 – 75] / 75 = (30/75) = 0.4
So we note that % Change in Quantity Supplied = 0.4
Now we need to calculate the percentage change in price.
CALCULATING THE PERCENTAGE CHANGE IN PRICE
The formula used to calculate the percentage change in price is:
[Price (New) – Price (Old)] /Price (Old))
By filling in the values we wrote down, we get:
[10 – 9] / 9 = (1/9) = 0.1111
We have both the percentage change in quantity supplied and the percentage change in price, so we can calculate the price elasticity of supply.
CALCULATING THE PRICE ELASTICITY OF SUPPLY
PES = (% Change in Quantity Supplied)/(% Change in Price)
We now fill in the two percentages in this equation using the figures we calculated.
PEoD = (0.4)/(0.1111) = 3.6
We get that the price elasticity of supply when the price increases from $9 to $10 is 3.6. So for movie tickets, the price is elastic and thus supply is very sensitive to price changes.
Share this:
Leave a comment Cancel reply
Δ |
17288 | https://louis.pressbooks.pub/appliedcalculus/chapter/1-1-functions/ | Section 1.1 Functions – Applied Calculus
Skip to content
Menu
Primary Navigation
Home
Read
Sign in
Search in book: Search
Book Contents Navigation
Contents
Introduction
Introduction
About This Book
About the Authors
Authors
Chapter 1: Algebra Review
Introduction to Algebra Review
Section 1.1 Functions
What Is a Function?
Function Notation
Tables as Functions
Solving and Evaluating Functions
Graphs as Functions
Formulas as Functions
Basic Toolkit Functions
Domain and Range
Vocabulary
Section 1.2 Operations on Functions
Composition of Functions
Composition of Functions Using Tables and Graphs
Compositions Using Formulas
Transformations of Functions
Combining Transformations
Vocabulary
Section 1.3 Linear Functions
Linear Functions
Slope and Increasing/Decreasing
Calculating Rate of Change
Point-Slope Equation of a Line
Graphs of Linear Functions
Finding Horizontal Intercept
Intersections of Lines
Vocabulary
Section 1.4 Exponents
Laws of Exponents
Section 1.5 Quadratics
Quadratics
Short Run Behavior: Intercepts
Quadratic Formula
Vocabulary
Section 1.6 Polynomials and Rational Functions
Polynomial Functions
Solving Polynomial Inequalities
Rational Functions
Vertical and Horizontal Asymptotes
Vocabulary
Section 1.7 Exponential Functions
Continuous Growth
Graphs of Exponential Functions
Vocabulary
Section 1.8: Logarithmic Functions
Logarithm Basics
Logarithm Graphs
Changing Logarithm Bases
Vocabulary
Chapter 1 Review Exercises
Chapter 1 Review Problems
Review Problems
Brief Answers to Selected Exercises
Chapter 2: Limits and The Derivative
Introduction to Limits
Section 2.1: Limits and Continuity
Pre-calculus Idea: Slope and Rate of Change
Limits
One Sided Limits
Continuity
Introduction to the Derivative
Section 2.2: The Derivative
Instantaneous Velocity
Tangent Lines
The Derivatives as a Function
Interpreting the Derivative
Section 2.3: The Power and Sum Rules for Derivatives
Building Blocks
Business and Economics Terms
Graphical Interpretations of the Basic Business Math Terms
Section 2.4: Product and Quotient Rules
Derivative Rules: Product and Quotient Rules
More Graphical Interpretations of the Basic Business Math Terms
Section 2.5: Chain Rule
Derivative Rules: Chain Rule
Derivatives of Complicated Functions
What If the Derivative Doesn’t Exist?
Section 2.6: Second Derivative and Concavity
Second Derivative and Concavity
Inflection Points
Chapter 2 Review Exercises: Limits
Chapter 2 Review Exercises: The Derivative
Chapter 2 Review Problems
Chapter 3: Applications of the Derivative
Introduction to Applications of the Derivative
Section 3.1: Optimization
Real-World Optimization Problems
Local Maxima and Minima
The First Derivative Test
The Second Derivative Test
Section 3.2: Curve Sketching
First Derivative Information
Second Derivative Information
Sketching without an Equation
Section 3.3: Applied Optimization
Max/Min Applications
Section 3.4: Other Applications
Tangent Line Approximation
Elasticity
Section 3.5: Implicit Differentiation and Related Rates
Implicit Differentiation
Related Rates
Chapter 3 Review Exercises
Chapter 3 Review Problems
Chapter 3 Solutions to Review Problems
Chapter 4: The Integral
Introduction to the Integral
Section 4.1: The Definite Integral
Pre-calculus Idea: The Area of a Rectangle
Section 4.2: The Fundamental Theorem and Antidifferentiation
The Fundamental Theorem of Calculus
Antiderivatives Graphically or Numerically
Derivative of the Integral
Section 4.3: Antiderivatives of Formulas
Building Blocks
Section 4.4: Substitution
Substitution and Indefinite Integrals
Substitution and Definite Integrals
Section 4.5: Average Value and the Net Change Theorem
Average Value
The Net Change Theorem
Applying the Net Change Theorem
Section 4.6: Applications to Business
Consumer and Producer Surplus
Continuous Income Stream
Chapter 4 Review Exercises
Chapter 4 Review Problems
Answers to Selected Review Problems: Chapter 4
Glossary
Formulas from Geometry and Algebra
Formulas from Geometry
Formulas from Algebra
Table of Derivatives and Integrals
General Derivative Formulas
Basic Integrals
Summary of Adaptations
Full Resource Adaptations
Chapter-Specific Changes
Applied Calculus
Chapter 1: Algebra Review
Section 1.1 Functions
Learning Objectives
By the end of this section, the student should be able to:
Find function values.
Graph functions and determine whether or not a graph is a function.
Find the domain and range of the function.
What Is a Function?
The natural world is full of relationships between quantities that change. When we see these relationships, it is natural for us to ask, If I know one quantity, can I then determine the other? This establishes the idea of an input quantity, or independent variable, and a corresponding output quantity, or dependent variable. From this we get the notion of a functional relationship in which the output can be determined from the input.
For some quantities, like height and age, there are certainly relationships between these quantities. Given a specific person and any age, it is easy enough to determine their height, but if we tried to reverse that relationship and determine the height from a given age, that would be problematic, since most people maintain the same height for many years.
Function
A function is a rule for a relationship between input, or independent, quantity and output, or dependent, quantity in which each input value uniquely determines one output value. We say the output is a function of the input.
Example 1
In the height and age example above, is height a function of age? Is age a function of height?
In the height and age example above, it would be correct to say that height is a function of age, since each age uniquely determines a height. For example, on my 18 th birthday, I had exactly one height of 69 inches.
However, age is not a function of height, since one height input might correspond with more than one output age. For example, for an input height of 70 inches, there is more than one output of age, since I was 70 inches at the age of 20 and 21.
Video
Function Notation
To simplify writing out expressions and equations involving functions, a simplified notation is often used. We also use descriptive variables to help us remember the meaning of the quantities in the problem.
Rather than write height is a function of age, we could use the descriptive variable h to represent height and we could use the descriptive variable a to represent age.
“height is a function of age”if we name the function f, we write
“h is f of a“or more simply
h = f(a)we could instead name the function h and write
h(a)which is read “h of a“
Remember we can use any variable to name the function; the notation h(a) shows us that h depends on a. The value a must be put into the function h to get a result. Be careful – the parentheses indicate that age is input into the function. (Note: do not confuse these parentheses with multiplication!)
Function Notation
TheFunction Notationoutput = f(input) defines a function named f. This would be read as output is f of input.
Example 2
A function N=f(y) gives the number of police officers, N, in a town in year y. What does f(2005)=300 tell us?
When we read f(2005)=300, we see the input quantity is 2005, which is a value for the input quantity of the function, the year y. The output value is 300, the number of police officers N, a value for the output quantity. Remember N=f(y). So this tells us that in the year 2005, there were 300 police officers in the town.
Tables as Functions
Functions can be represented in many ways: words (as we did in the last few examples), tables of values, graphs, or formulas. Represented as a table, we are presented with a list of input and output values.
This table represents the age of children in years and their corresponding heights. While some tables show all the information we know about a function, this particular table represents just some of the data available for height and ages of children.
(input) a, age in years 4 5 6 7 8 9 10
(output) h, height in inches 40 42 44 47 50 52 54
Example 3
Which of these tables define a function (if any)?
Input Output
2 1
5 3
8 6
Input Output
−3 5
0 1
4 5
Input Output
1 0
5 2
5 4
The first and second tables define functions. In both, each input corresponds to exactly one output. The third table does not define a function, since the input value of 5 corresponds with two different output values.
Video
Solving and Evaluating Functions
When we work with functions, there are two typical things we do: evaluate and solve. Evaluating a function is what we do when we know an input and use the function to determine the corresponding output. Evaluating will always produce one result, since each input of a function corresponds to exactly one output.
Solving equations involving a function is what we do when we know an output and use the function to determine the inputs that would produce that output. Solving a function could produce more than one solution, since different inputs can produce the same output.
Example 4
Using the table shown, where Q=g(n)
n 1 2 3 4 5
Q 8 6 7 6 8
a) Evaluate g(3)
b) Solve g(n)=6
a) Evaluate g(3): Evaluating g(3) (read: g of 3) means that we need to determine the output value, Q, of the function g given the input value of n=3. Looking at the table, we see the output corresponding to n=3 is Q=7, allowing us to conclude g(3)=7.
b) Solve g(n)=6: Solving g(n)=6 means we need to determine what input values, n, produce an output value of 6. Looking at the table we see there are two solutions: n=2 and n=4. When we input 2 into the function g, our output is Q=6. When we input 4 into the function g, our output is also Q=6.
Graphs as Functions
Oftentimes a graph of a relationship can be used to define a function. By convention, graphs are typically created with the input quantity along the horizontal axis and the output quantity along the vertical.
Example 5
Which of these graphs defines a function y=f(x)?
Looking at the three graphs above, the first two define a function y=f(x), since for each input value along the horizontal axis there is exactly one output value corresponding, determined by the y-value of the graph. The third graph does not define a function y=f(x), since some input values, such as x=2, correspond with more than one output value.
Vertical Line Test
TheVertical Line Testis a handy way to think about whether a graph defines the vertical output as a function of the horizontal input. Imagine drawing vertical lines through the graph. If any vertical line could cross the graph more than once, then the graph does not define only one vertical output for each horizontal input.
Video
Evaluating a function using a graph requires taking the given input and using the graph to look up the corresponding output. Solving a function equation using a graph requires taking the given output and looking on the graph to determine the corresponding input.
Example 6
Given the graph below,
a) Evaluate f(2).
b) Solve f(x)=4.
a) To evaluate f(2), we find the input of x=2 on the horizontal axis. Moving up to the graph gives the point (2,1), giving an output of y=1. So f(2)=1.
b) To solve f(x)=4, we find the value 4 on the vertical axis because if f(x)=4, then 4 is the output. Moving horizontally across the graph gives two points with the output of 4:(−1,4) and (3,4). These give the two solutions to f(x)=4: x=−1 or x=3. This means f(−1)=4 and f(3)=4, or when the input is −1 or 3, the output is 4.
Notice that while the graph in the previous example is a function, getting two input values for the output value of 4 shows us that this function is not one-to-one.
Formulas as Functions
When possible, it is very convenient to define relationships using formulas. If it is possible to express the output as a formula involving the input quantity, then we can define a function.
Example 7
Express the relationship 2 n+6 p=12 as a function p=f(n) if possible.
To express the relationship in this form, we need to be able to write the relationship where p is a function of n, which means writing it as p= [something involving n].
2 n+6 p=12 subtract 2 n from both sides
6 p=12−2 n divide both sides by 6 and simplify
p=12−2 n 6=12 6−2 n 6=2−1 3 n
Having rewritten the formula as p=, we can now express p as a function: p=f(n)=2−1 3 n
Not every relationship can be expressed as a function with a formula.
As with tables and graphs, it is common to evaluate and solve functions involving formulas. Evaluating will require replacing the input variable in the formula with the value provided and calculating. Solving will require replacing the output variable in the formula with the value provided, and solving for the input(s) that would produce that output.
Example 8
Given the function k(t)=t 3+2:
a) Evaluate k(2).
b) Solve k(t)=1.
a) To evaluate k(2), we plug in the input value 2 into the formula wherever we see the input variable t, then simplify:
k(2)=2 3+2 k(2)=8+2. So k(2)=10.
b) To solve k(t)=1, we set the formula for k(t) equal to 1, and solve for the input value that will produce that output:
k(t)=1 t 3+2=1 substitute the original formula t 3=−1 subtract 2 from each side t=−1 take the cube root of each side
When solving an equation using formulas, you can check your answer by using your solution in the original equation to see if your calculated answer is correct.
We want to know if k(t)=1 is true when t=−1: k(−1)=(−1)3+2=−1+2=1,, which was the desired result.
Basic Toolkit Functions
There are some basic functions that it is helpful to know the name and shape of. We call these the basic toolkit of functions. For these definitions we will use x as the input variable and f(x) as the output variable.
Video
Toolkit Functions
Constant:f(x)=c, where c is a constant (number)
Identity:f(x)=x
Absolute value:f(x)=|x|
Quadratic:f(x)=x 2
Cubic:f(x)=x 3
Reciprocal:f(x)=1 x
Reciprocal squared:f(x)=1 x 2
Square root:f(x)=x 2=x
Cube root:f(x)=x 3
Graphs of the Toolkit Functions
Constant: f(x)=c (c is 2 in this picture).
Identity: f(x)=x.
Absolute value: f(x)=|x|
Quadratic: f(x)=x 2
Cubic: f(x)=x 3
Reciprocal: f(x)=1 x
Reciprocal squared: f(x)=1 x 2
Square root: f(x)=x 2=x
Cube root: f(x)=x 3
Domain and Range
One of our main goals in mathematics is to model the real world with mathematical functions. In doing so, it is important to keep in mind the limitations of the models we create.
This table shows a relationship between the circumference and height of a tree as it grows.
Circumference, c 1.7 2.5 5.5 8.2 13.7
Height, h 24.5 31 45.2 54.6 92.1
While there is a strong relationship between the two, it would certainly be ridiculous to talk about a tree with a circumference of −3 feet, or a height of 3000 feet. When we identify limitations on the inputs and outputs of a function, we are determining the domain and range of the function.
Domain and Range
Domain: The set of possible input values to a function
Range: The set of possible output values of a function
Example 9
Using the tree table above, determine a reasonable domain and range.
We could combine the data provided with our own experiences and reason to approximate the domain and range of the function h=f(c). For the domain, possible values for the input circumference c, it doesn’t make sense to have negative values, so c>0. We could make an educated guess at a maximum reasonable value, or look up that the maximum circumference measured is about 119 feet. With this information we would say a reasonable domain is feet.
Similarly for the range, it doesn’t make sense to have negative heights, and the maximum height of a tree could be looked up to be 379 feet, so a reasonable range is feet.
A more compact alternative to inequality notation is interval notation, in which intervals of values are referred to by the starting and ending values. Curved parentheses are used for strictly less than, and square brackets are used for less than or equal to. Since infinity is not a number, we can’t include it in the interval, so we always use curved parentheses with ∞ and −∞. The table below will help you see how inequalities correspond to interval notation:
Inequality Interval notation
5<h≤10(5, 10]
5≤h<10[5, 10)
5<h<10(5, 10)
h<10(−∞,10)
h≥10[10,∞)
All real numbers (R)(−∞,∞)
Example 10
Describe the intervals of values shown on the line graph below using set builder and interval notations:
To describe the values, x, that lie in the intervals shown above, we would say, x is a real number greater than or equal to 1 and less than or equal to 3, or a real number greater than 5. As an inequality it is 1≤x≤3 or x>5. In interval notation it is [1,3]∪(5,∞).
Example 11
Find the domain of each function:
a) f(x)=2 x+4
b) g(x)=3 6−3 x
a) Since we cannot take the square root of a negative number, we need the inside of the square root to be non-negative. x+4≥0 when x≥−4, so the domain of f(x) is [−4,∞).
b) We cannot divide by zero, so we need the denominator to be non-zero. 6−3 x=0 when x=2, so we must exclude 2 from the domain. The domain of g(x) is (−∞,2)∪(2,∞).
Video
Vocabulary
Function
Vertical Line Test
Horizontal Line Test
Domain
Range
Function Notation
Media Attributions
image082
image083
image084
image085
image086
image087
image088
image089
image092
image093
image090
image091
definition
The domain of a function is the set of all possible input values for which the function is defined.
×Close definition
The range of a function is the set of all possible output values that the function can produce.
×Close definition
A function f is a mathematical relation that maps elements from a set called the domain X to a set called the codomain Y. It associates each input element x in the domain with a unique output element f(x) in the codomain. The function can be defined using various notations, such as a formula, equation, or a set of rules. For example, we can define a function f as follows:
f : X -->Y, f(x) = some expression involving x
where f(x) represents the value obtained by applying the function f to the input x. The notation f: X--> Y indicates that the function f maps elements from the domain X to the codomain Y. The specific expression or rule defining the function may vary depending on the context and the nature of the function.
×Close definition
Consider a function f defined by f(x) = 2x + 3 , where xis the input variable. It can also be written as y = 2x +3 and we say y is a function ofx
×Close definition
The vertical line test is a way to check if a graph represents a function. Imagine drawing a vertical line anywhere on the graph. If that line touches the graph at only one point, then the graph represents a function. However, if the line touches the graph at multiple points, then the graph does not represent a function
×Close definition
The horizontal line test is a way to check if a function has a one-to-one relationship. Imagine drawing horizontal lines across the graph of a function. If any horizontal line intersects the graph at more than one point, then the function is not one-to-one because multiple inputs have the same output. However, if every horizontal line intersects the graph at most one point, then the function is one-to-one because each input has a unique output.
×Close definition
Previous/next navigation
Previous: Introduction to Algebra Review
Next: Section 1.2 Operations on Functions
Back to top
License
Applied Calculus Copyright © 2024 by LOUIS: The Louisiana Library Network is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License, except where otherwise noted.
Share This Book
Pressbooks
Powered by Pressbooks
Pressbooks User Guide
|Pressbooks Directory
|Contact
Pressbooks on YouTubePressbooks on LinkedIn |
17289 | https://math.stackexchange.com/questions/3891691/proving-two-sets-are-equal-if-the-size-of-the-intersection-and-union-are-equal | Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Proving two sets are equal if the size of the intersection and union are equal
Ask Question
Asked
Modified 4 years, 11 months ago
Viewed 624 times
0
$\begingroup$
I want to prove the following: if $|X \cap Y| = |X \cup Y|$, then $X=Y$. But not sure how to do this mathematically. I am using this prove the property of indiscernibles. But not sure where to start with this when the size of the intersection/union is thrown into the mix. Would it be the same as proving $X \cap Y = X \cup Y$ or are there edge cases I'm missing?
Would the right step be to prove the inverse using DeMorgan's law?
discrete-mathematics
Share
edited Nov 2, 2020 at 23:36
Asaf Karagila♦
407k4848 gold badges646646 silver badges1.1k1.1k bronze badges
asked Nov 2, 2020 at 20:17
JonathanJonathan
73611 gold badge1111 silver badges3434 bronze badges
$\endgroup$
Add a comment |
2 Answers 2
Reset to default
2
$\begingroup$
Note that $X\cap Y\subset X\subset X\cup Y$ so if these two sets have the same size, and $X,Y$ are finite, then you must have equality instead of inclusion. Likewise for $Y$. If they are infinite, by the way, you could have $|X\cap Y|=|X\cup Y|$ while $X\neq Y$, for example choosing $X=Z$ and $Y=N$ would do the trick. However, if $X\cap Y=X\cup Y$, this always works.
Share
edited Nov 2, 2020 at 20:47
answered Nov 2, 2020 at 20:33
NL1992NL1992
95755 silver badges1717 bronze badges
$\endgroup$
Add a comment |
1
$\begingroup$
Hint: $$X \cup Y = (X \cap Y) \sqcup (X \setminus Y) \sqcup (Y \setminus X),$$ where $\sqcup$ indicates disjoint union. So $$|X \cup Y| = |X \cap Y| + |X \setminus Y| + |Y \setminus X|.$$ Now what can you conclude about the last two terms?
Share
answered Nov 2, 2020 at 20:26
RobPrattRobPratt
51.8k44 gold badges3232 silver badges6969 bronze badges
$\endgroup$
4
$\begingroup$ |X \ Y| and |Y \ X| must be equal to 0. $\endgroup$
Jonathan
– Jonathan
2020-11-02 20:38:10 +00:00
Commented Nov 2, 2020 at 20:38
$\begingroup$ Yes, so $X \setminus Y = Y \setminus X = \emptyset$, which implies that $X=Y$. $\endgroup$
RobPratt
– RobPratt
2020-11-02 20:39:51 +00:00
Commented Nov 2, 2020 at 20:39
$\begingroup$ Was there some assumption of finiteness that we weren't aware of? $X=[0,2]$ and $Y=[1,3]$ are definitely not the same (unless, of course, $0=1$ and $2=3$) and $|X\cap Y|=|X\cup Y|$. $\endgroup$
Asaf Karagila
– Asaf Karagila ♦
2020-11-02 23:38:10 +00:00
Commented Nov 2, 2020 at 23:38
$\begingroup$ @AsafKaragila I assumed finiteness from the discrete-mathematics tag and because otherwise it is not true. $\endgroup$
RobPratt
– RobPratt
2020-11-03 01:08:41 +00:00
Commented Nov 3, 2020 at 1:08
Add a comment |
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
discrete-mathematics
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
3 Proving DeMorgan's Theorem
0 Intersection and Union of Sets
0 Prove the following set identity using the laws of set theory
0 How can I prove that the union of two sets is equal to the intersection of two sets iff the sets are equal?
0 Proving cardinality of intersection of union of subsets
0 Need to simplify $(A © \varnothing)' \cap (A \cup B)'$
0 The union of all the possible intersections of sets
2 $A, B,$ and $C$ are all sets that lie in a common universal set. Prove $A = B$ given the following statements.
2 Show the intersection of $A$ and $B$ and the relative complement of $B$ in $A$ are disjoint and their union is $A$
0 Prove that the complement of A intersection B is equal to the union of the complement of A and B
Hot Network Questions
Clinical-tone story about Earth making people violent
Storing a session token in localstorage
Separating trefoil knot on torus
Proof of every Highly Abundant Number greater than 3 is Even
Is this commentary on the Greek of Mark 1:19-20 accurate?
Real structure on a complex torus
Why include unadjusted estimates in a study when reporting adjusted estimates?
What happens if you miss cruise ship deadline at private island?
What were "milk bars" in 1920s Japan?
Change default Firefox open file directory
What can be said?
The geologic realities of a massive well out at Sea
I have a lot of PTO to take, which will make the deadline impossible
how do I remove a item from the applications menu
Why is the definite article used in “Mi deporte favorito es el fútbol”?
How big of a hole can I drill in an exterior wall's bottom plate?
Is existence always locational?
How can blood fuel space travel?
Can I go in the edit mode and by pressing A select all, then press U for Smart UV Project for that table, After PBR texturing is done?
"Unexpected"-type comic story. Aboard a space ark/colony ship. Everyone's a vampire/werewolf
Can I use the TEA1733AT for a 150-watt load despite datasheet saying 75 W?
Origin of Australian slang exclamation "struth" meaning greatly surprised
Transforming wavefunction from energy basis to annihilation operator basis for quantum harmonic oscillator
Who is the target audience of Netanyahu's speech at the United Nations?
more hot questions
Question feed |
17290 | https://math.vanderbilt.edu/sapirmv/msapir/proofdet1.html | Here is the theorem.
Theorem. Let A be an n by n matrix. Then
the following conditions hold.
If we multiply a row (column) of A by a number, the determinant
of A will be multiplied by the same number.
If the i-th row (column) in A
is a sum of the i-th row (column) of a matrix B and the i-th row (column)
of a matrix C and all other rows in B and C are equal to the corresponding rows in A (that is B and C differ from A by one row only),
then det(A)=det(B)+det(C).
If two rows (columns) in A are equal then det(A)=0.
If we add a row (column) of A multiplied by a scalar k to another row (column) of A, then the determinant will not change.
If we swap two rows (columns) in A, the determinant will change
its sign.
det(A)=det(AT).
Proof. 1. In
the expression
of the determinant of A every product
contains exactly one entry from each row and exactly one entry from each
column. Thus if we multiply a row (column) by a number, say, k,
each term in the expression of the determinant of the resulting
matrix will be equal to the corresponding term in det(A) multiplied by k.
Therefore the determinant of the resulting matrix will be equal
kdet(A).
We have that A(i,j)=B(i,j)+C(i,j) for every j=1,2,...,n.
Consider the expression
for det(A). Each term in this expression contains exactly one factor from
the i-th row of A. Conside one of these terms:
(-1) p A(p1,1) A(p2,2)...A(i,j)... A(pn,n)
(A(i,j) is the factor from i-th row in this product).
Replace A(i,j) by B(i,j)+C(i,j):
(-1) p A(p1,1) A(p2,2)...(B(i,j)+C(i,j))... A(pn,n)
Multiply through:
(-1) p A(p1,1) A(p2,2)...B(i,j)... A(pn,n)+
(-1) p A(p1,1) A(p2,2)...C(i,j)... A(pn,n)
If we add only the terms containing B's, we get the determinant of B;
if we add all terms containing C's, we get the determinant of C. Thus
det(A)=det(B)+det(C).
Suppose that the i-th row in A is equal to the j-th row of A,
that is A(i,k)=A(j,k) for every k=1,2,...,n. Consider an arbitrary product
in the
expression of det(A):
A(p1,1) A(p2,2)...A(i,k1)...A(j,k2)... A(pn,n)
(we use the fact that this product contains one factor from the i-th row
and one factor from the j-th row, and we assume that i occurs before j;
the case when i occurs further than j is similar).
Consider also the product corresponding to the permutation p' obtained from p
by switching i and j:
A(p1,1) A(p2,2)...A(j,k1)...A(i,k2)... A(pn,n)
Now since
A(i,k1)=A(j,k1), A(j,k2)=A(i,k2)
these terms are equal. But they occur in the expression of det(A) with opposite
signs
(remember that p' is obtained from p by one transposition).
Thus these products kill each other in det(A). Therefore each term in
det(A) gets killed when we combine like terms in det(A), so det(A)=0.
Let matrix B be obtained from matrix
A by adding the j-th row multiplied by k
to the i-th row. Let us represent A as a column of rows:
```
[ r1 ]
...
[ ri ]
...
[ rj ]
...
[ rn ]
```
Then B has the following form:
```
[ r1 ]
...
[ ri+krj ]
...
[ rj ]
...
[ rn ]
```
By property 2 we can conclude that det(B) is equal to the sum of determinants
of two matrices:
```
[ r1 ]
...
[ ri ]
...
[ rj ]
...
[ rn ]
```
and
```
[ r1 ]
...
[ krj ]
...
[ rj ]
...
[ rn ]
```
The first of these matrices is A. Let us denote the second one by C. So
det(B)=det(A)+det(C). By property 1
det(C) is k times the determinant of the
following matrix:
```
[ r1 ]
...
[ rj ]
...
[ rj ]
...
[ rn ]
```
But this matrix has two equal rows, therefore its determinant is equal to 0
(property 3). Thus det(C)=0 and det(B)=det(A). The proof is complete.
Suppose that we swap the i-th row and the j-th row of matrix A.
Represent A as a column of rows:
```
[ r1 ]
...
[ ri ]
...
[ rj ]
...
[ rn ]
```
In order to swap ri and rj we can do the following
procedure:
Add the j-th row to the i-th row:
```
[ r1 ]
...
[ ri+rj ]
...
[ rj ]
...
[ rn ]
```
2. Subtract the i-th row of the resulting matrix from the j-th row:
```
[ r1 ]
...
[ ri+rj ]
...
[ -ri ]
...
[ rn ]
```
3. Add the j-th row of the resulting matrix to the i-th row:
```
[ r1 ]
...
[ rj ]
...
[ -ri ]
...
[ rn ]
```
4. Multiply the j-th row of the resulting matrix by -1:
```
[ r1 ]
...
[ rj ]
...
[ ri ]
...
[ rn ]
```
By the properties that we already proved all operations of this procedure
except the very last one do not change the determinant. The last operation
changes the sign of the determinant. The proof is complete.
6. This is left as an exercise. |
17291 | https://math.stackexchange.com/questions/787419/arrangement-of-numbers-to-get-a-common-sum | recreational mathematics - Arrangement of Numbers to Get a Common Sum - Mathematics Stack Exchange
Join Mathematics
By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy.
Sign up with Google
OR
Email
Password
Sign up
Already have an account? Log in
Skip to main content
Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
Loading…
Tour Start here for a quick overview of the site
Help Center Detailed answers to any questions you might have
Meta Discuss the workings and policies of this site
About Us Learn more about Stack Overflow the company, and our products
current community
Mathematics helpchat
Mathematics Meta
your communities
Sign up or log in to customize your list.
more stack exchange communities
company blog
Log in
Sign up
Home
Questions
Unanswered
AI Assist Labs
Tags
Chat
Users
Teams
Ask questions, find answers and collaborate at work with Stack Overflow for Teams.
Try Teams for freeExplore Teams
3. Teams
4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
Hang on, you can't upvote just yet.
You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it?
Instead, you can save this post to reference later.
Save this post for later Not now
Thanks for your vote!
You now have 5 free votes weekly.
Free votes
count toward the total vote score
does not give reputation to the author
Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation.
Got it!Go to help center to learn more
Arrangement of Numbers to Get a Common Sum
Ask Question
Asked 11 years, 4 months ago
Modified11 years, 4 months ago
Viewed 228 times
This question shows research effort; it is useful and clear
1
Save this question.
Show activity on this post.
I'm having trouble with a math problem. I need to arrange 6 numbers on a certain diagram:
At every intersection of two circles, I have to put one of these six numbers: 4, 5, 5, 6, 6, or 7. The sum of all of the numbers on each circle must be the same. Is it possible to arrange these numbers this way?
recreational-mathematics
puzzle
Share
Share a link to this question
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this question to receive notifications
edited May 9, 2014 at 4:16
JRN
6,721 4 4 gold badges 42 42 silver badges 68 68 bronze badges
asked May 9, 2014 at 3:58
Jason ChenJason Chen
1,199 2 2 gold badges 12 12 silver badges 28 28 bronze badges
2
You mean no number that belongs to all three?Jason Chen –Jason Chen 2014-05-09 04:06:43 +00:00 Commented May 9, 2014 at 4:06
@JoelReyesNoche: Oh, you misunderstood. Each section does not get its own number. The intersections get numbers, and each number is on one of the circles.Jason Chen –Jason Chen 2014-05-09 04:08:26 +00:00 Commented May 9, 2014 at 4:08
Add a comment|
2 Answers 2
Sorted by: Reset to default
This answer is useful
1
Save this answer.
Show activity on this post.
The sum of the numbers is 33 33. Each occurs twice, total 66 66. Three circles, four numbers adding up to 22 22 on each. The 7 7 must be matched with 6,5,4 6,5,4. The 7 7 is on the intersection of two circles; each of these circles must include a 4 4; but there is only one 4 4, so it must be on the other intersection of the same two circles. The rest is easy, and there are various solutions.
Share
Share a link to this answer
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this answer to receive notifications
answered May 9, 2014 at 4:15
DavidDavid
84.9k 9 9 gold badges 96 96 silver badges 166 166 bronze badges
Add a comment|
This answer is useful
1
Save this answer.
Show activity on this post.
Edit: I misunderstood the question. Anyway, I'm keeping my answer here in case it provides additional ideas...
The sum of the numbers in each circle is 17.
Share
Share a link to this answer
Copy linkCC BY-SA 3.0
Cite
Follow
Follow this answer to receive notifications
edited May 9, 2014 at 4:22
community wiki
3 revsJoel Reyes Noche
2
Technically, all of the numbers are in intersections, and the diagram just makes it look like it's "alone."Jason Chen –Jason Chen 2014-05-09 04:10:11 +00:00 Commented May 9, 2014 at 4:10
This wasn't actually what the question looked like. You need to put each number on the circumference of the circles, where the circles meet.Jason Chen –Jason Chen 2014-05-09 04:15:20 +00:00 Commented May 9, 2014 at 4:15
Add a comment|
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
recreational-mathematics
puzzle
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Report this ad
Related
10"8 Dice arranged as a Cube" Face-Sum Equals 14 Problem
2What is the sum of the numbers in the shaded circles?
0How many different ways can I add three numbers to get a certain sum?
1How to solve these types of sum puzzles
2Problem Solving - Is it Possible to Place 10 Numbers in a certain way?
1finding sum of entries in magic square directly
8Antisymmetric Table Puzzle where the Rows/Columns Sum to Zero
0Common sum in magic square
7How to get 2700 2700 grams of corn from a bag of 6.8 6.8 kilograms?
Hot Network Questions
Who is the target audience of Netanyahu's speech at the United Nations?
Is it ok to place components "inside" the PCB
What were "milk bars" in 1920s Japan?
If Israel is explicitly called God’s firstborn, how should Christians understand the place of the Church?
Can peaty/boggy/wet/soggy/marshy ground be solid enough to support several tonnes of foot traffic per minute but NOT support a road?
What happens if you miss cruise ship deadline at private island?
Should I let a player go because of their inability to handle setbacks?
How different is Roman Latin?
Cannot build the font table of Miama via nfssfont.tex
How exactly are random assignments of cases to US Federal Judges implemented? Who ensures randomness? Are there laws regulating how it should be done?
Can you formalize the definition of infinitely divisible in FOL?
Does the curvature engine's wake really last forever?
Why do universities push for high impact journal publications?
The geologic realities of a massive well out at Sea
Languages in the former Yugoslavia
Proof of every Highly Abundant Number greater than 3 is Even
Is existence always locational?
An odd question
For every second-order formula, is there a first-order formula equivalent to it by reification?
how do I remove a item from the applications menu
How can the problem of a warlock with two spell slots be solved?
Overfilled my oil
Does "An Annotated Asimov Biography" exist?
Why are LDS temple garments secret?
Question feed
Subscribe to RSS
Question feed
To subscribe to this RSS feed, copy and paste this URL into your RSS reader.
Why are you flagging this comment?
It contains harassment, bigotry or abuse.
This comment attacks a person or group. Learn more in our Code of Conduct.
It's unfriendly or unkind.
This comment is rude or condescending. Learn more in our Code of Conduct.
Not needed.
This comment is not relevant to the post.
Enter at least 6 characters
Something else.
A problem not listed above. Try to be as specific as possible.
Enter at least 6 characters
Flag comment Cancel
You have 0 flags left today
Mathematics
Tour
Help
Chat
Contact
Feedback
Company
Stack Overflow
Teams
Advertising
Talent
About
Press
Legal
Privacy Policy
Terms of Service
Your Privacy Choices
Cookie Policy
Stack Exchange Network
Technology
Culture & recreation
Life & arts
Science
Professional
Business
API
Data
Blog
Facebook
Twitter
LinkedIn
Instagram
Site design / logo © 2025 Stack Exchange Inc; user contributions licensed under CC BY-SA. rev 2025.9.26.34547
By clicking “Accept all cookies”, you agree Stack Exchange can store cookies on your device and disclose information in accordance with our Cookie Policy.
Accept all cookies Necessary cookies only
Customize settings
Cookie Consent Preference Center
When you visit any of our websites, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences, or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and manage your preferences. Please note, blocking some types of cookies may impact your experience of the site and the services we are able to offer.
Cookie Policy
Accept all cookies
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
These cookies are necessary for the website to function and cannot be switched off in our systems. They are usually only set in response to actions made by you which amount to a request for services, such as setting your privacy preferences, logging in or filling in forms. You can set your browser to block or alert you about these cookies, but some parts of the site will not then work. These cookies do not store any personally identifiable information.
Cookies Details
Performance Cookies
[x] Performance Cookies
These cookies allow us to count visits and traffic sources so we can measure and improve the performance of our site. They help us to know which pages are the most and least popular and see how visitors move around the site. All information these cookies collect is aggregated and therefore anonymous. If you do not allow these cookies we will not know when you have visited our site, and will not be able to monitor its performance.
Cookies Details
Functional Cookies
[x] Functional Cookies
These cookies enable the website to provide enhanced functionality and personalisation. They may be set by us or by third party providers whose services we have added to our pages. If you do not allow these cookies then some or all of these services may not function properly.
Cookies Details
Targeting Cookies
[x] Targeting Cookies
These cookies are used to make advertising messages more relevant to you and may be set through our site by us or by our advertising partners. They may be used to build a profile of your interests and show you relevant advertising on our site or on other sites. They do not store directly personal information, but are based on uniquely identifying your browser and internet device.
Cookies Details
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Necessary cookies only Confirm my choices |
17292 | https://hsm.stackexchange.com/questions/771/what-is-so-mysterious-about-archimedes-approximation-of-sqrt-3 | Stack Exchange Network
Stack Exchange network consists of 183 Q&A communities including Stack Overflow, the largest, most trusted online community for developers to learn, share their knowledge, and build their careers.
Visit Stack Exchange
History of Science and Mathematics
Teams
Q&A for work
Connect and share knowledge within a single location that is structured and easy to search.
Learn more about Teams
What is so mysterious about Archimedes' approximation of $\sqrt 3$?
Ask Question
Asked
Modified 2 years, 10 months ago
Viewed 5k times
$\begingroup$
In his famous estimation of $\pi$ by inscribed and circumscribed polygons, Archimedes uses several rational approximations of irrational values; a typical example is that he states, without explanation, $$\sqrt 3 > \frac{265}{153}.$$
The sudden appearance of $\frac{265}{153}$ has puzzled many people through the ages. The page Archimedes and the Square Root of 3 lists several such. This example, attributed to W.W. Rouse Ball, is typical:
It would seem...that [Archimedes] had some (at present unknown) method of extracting the square root of numbers approximately.
To my mind, there is a very easy answer to this seeming puzzle. If you want rational approximations to $\sqrt 3$, the very simplest thing you can possibly do is as follows: You want to find integers $a$ and $b$ whose ratio is close to $\sqrt 3$, or equivalently you want to find $a^2$ and $b^2$ whose ratio is close to $3$. So you should tabulate $n^2$ and $3n^2$ and look for integers in the left column that are approximately equal to integers in the right column:
$$\begin{array}{rr} n^2 & 3n^2 \ \hline 1 & \color{darkgreen}{3} \ \color{darkgreen}{4} & 12 \ 9 & \color{darkred}{27} \ 16 & \color{darkblue}{48} \ \color{darkred}{25} & 75 \ 36 & 108 \ \color{darkblue}{49} & 147 \ 64 & 192 \ 81 & 243 \ 100 & 300 \ 121 & \color{purple}{363} \ \vdots & \vdots \ \color{purple}{361} & 1183 \ \vdots & \vdots \ \end{array} $$
The pairs of colored entries give the approximations $\color{green}{\frac21}, \color{maroon}{\frac53}, \color{darkblue}{\frac74}, $ and $\color{purple}{\frac{19}{11}}$ respectively. If you carry the table to 265 entries, you find that $265^2 = 70225$ and $3\cdot 153^2 = 70227$ and there is your $\sqrt 3 \approx \frac{265}{153}$. You don't have to be as clever as Archimedes to think of this. No magical technique was required, no sophisticated theory was required. The calculation is tedious but straightforward and I judge that it could have been carried to 265 entries in at most a few hours, even using whatever awful technology was available at the time.
Archimedes might, of course, have used a better method. (He also produces the approximation $\frac{1351}{780}$, for which the foregoing is not obviously practical. He might have deployed the so-called Babylonian method. Or, having tabulated the first few approximations from the table above, he might have noticed the simple recurrence that governs them; this would not be hard for a bright high-school student. Or he might have had a some other method.) But it seems to me that since there is a straightforward and simple method by which Archimedes could have produced the required approximations, there is no puzzle to solve.
However, Rouse Ball and a number of other eminent authors disagree with me, and consider Archimedes' calculation of $\sqrt 3\approx \frac{265}{153}$ to be mysterious. (The page I linked earlier cites: T. Heath, E.T. Bell, C.B. Boyer, M. Kline, P. Beckmann, Sondheimer and Rogerson, and goes on to describe a complicated and elaborate method that Archimedes could have used.)
If it was only popular mathematics authors who thought this calculation required mysterious powers on the part of Archimedes, I would write it off as the authors overlooking the obvious, because the authors of popular books on mathematics can be quite obtuse. But Rouse Ball and Morris Kline are not dummies and I do not quite believe that they could have overlooked the method I described above, which seems not merely obvious but literally the first thing anyone would try.
My question is:
Is there some reason I don't appreciate that Archimedes could not have or certainly did not use the method I described? Or did all those other guys overlook the obvious?
mathematics
computation
archimedes
Share
Improve this question
edited Nov 29, 2022 at 23:34
Mark DominusMark Dominus
asked Jan 1, 2015 at 21:45
Mark DominusMark Dominus
1,18599 silver badges2121 bronze badges
$\endgroup$
5
$\begingroup$ Archimedes calculation of square roots by E.B. Davies discusses this and other methods, and concludes that although Archimedes might have used the method I described, he likely used a different method that would have required a less tedious computation. This does not, unfortunately, resolve my question about why so many people seem to find this mysterious. $\endgroup$
Mark Dominus
– Mark Dominus
2015-01-01 22:16:46 +00:00
Commented Jan 1, 2015 at 22:16
$\begingroup$ I suppose a possible answer is that it is mysterious because we have no evidence, or perhaps only inconclusive evidence, whether Archimedes calculated square root via successive squaring, continued fractions, or some other method. If true, would that be the sort of answer you're looking for? $\endgroup$
Michael E2
– Michael E2
2017-10-19 10:54:36 +00:00
Commented Oct 19, 2017 at 10:54
$\begingroup$ The Davies link is now stale, but it can be found at arxiv. $\endgroup$
Bill Dubuque
– Bill Dubuque
2018-09-04 15:09:52 +00:00
Commented Sep 4, 2018 at 15:09
2
$\begingroup$ My very tentative guess is that Archimedes did NOT do the following, but may have done a geometry problem related to the following: $$ \frac{265}{153} = 1 + \cfrac 1 {1 + \cfrac 1 {2 + \cfrac 1{1 + \cfrac 1 {2 + \cfrac 1 {1 + \cfrac 1 {2 + \cfrac 1 {1 + \cfrac 1 2}}}}}}}. $$ If this alternation of $1$s and $2$ were to continue forever, we would have $$ x = 1 + \cfrac 1 {1 + \cfrac 1 {1+x}} $$ and solving this for $x$ yields $x = \sqrt 3. \qquad$ $\endgroup$
Michael Hardy
– Michael Hardy
2019-06-09 03:59:50 +00:00
Commented Jun 9, 2019 at 3:59
$\begingroup$ Not sure whether anyone was using continued fractions before Euler. $\endgroup$
David
– David
2022-11-29 23:23:49 +00:00
Commented Nov 29, 2022 at 23:23
Add a comment |
6 Answers 6
Reset to default
39
$\begingroup$
According to the paper of Davies cited by MJD, Archimedes actually gives a double inequality $$\frac{265}{153}<\sqrt{3}<\frac{1351}{780}.$$
As both of these fractions are not just random approximations, but are the 9-th and 12-th convergents for the continued fraction expansion of $\sqrt{3}$, there is no doubt that Archimedes used the continued fraction expansion or some equivalent algorithm, for example Pell's equation. (Otherwise it cannot be explained why Archimedes has chosen these two fractions of infinitely many other fractions aproximating $\sqrt{3}$.
Continued fraction algorithm was known to Euclid in some form (book 10). Davis's arguments against it are totally unconvincing. And his conjecture about Archimedes computation by the "averaging" algorithm that he proposes is not plausible, because it does not explain the appearance of exactly these fractions.
Davies's argument that the first published theory of Pell's equation appeared in 18-th century is also not convincing. Indeed, there is a well known text which is attributed to Archimedes, called "Archimedes Cattle problem" which shows that the author's familiarity with very complicated Pell's equations:-)
It is absolutely clear that for the author of the Cattle problem, the Pell equation $m^2-3n^2=\pm1$ was a trivial example.
In general, there are indications that Hellenistic Greeks knew much more of number theory than is contained in their work that survived. For example, it was recently discovered that Hipparchus knew the so-called Schroder numbers (rediscovered in 19-th century).
EDIT. I just found a text whose author shares my opinion and elaborates on the question. So I can add a reference:
(I do not know who this author is, but his web pages covering a variety of topics in physics, mathematics and history are so good that I value his opinion.)
EDIT2. In the paper by D. H. Fowler, Ratio in early Greek mathematics (Bull. AMS, 1(1979) 6, 807-846) the author explains that the continued fraction algorithm was called in Greek anthyphairesis, and that it pre-dates Euclid's Elements. He sites Archimedes' approximation of $\sqrt{3}$ as an example of the use of this algorithm.
Share
Improve this answer
edited May 30, 2020 at 23:38
answered Jan 4, 2015 at 19:51
Alexandre EremenkoAlexandre Eremenko
54.9k44 gold badges9898 silver badges202202 bronze badges
$\endgroup$
10
3
$\begingroup$ @MJD: the answer to your question is short and simple: nothing mysterious. $\endgroup$
Alexandre Eremenko
– Alexandre Eremenko
2015-01-05 15:59:50 +00:00
Commented Jan 5, 2015 at 15:59
1
$\begingroup$ Alexandre, you seem to be saying that Archimedes knew how to solve Pell equations, and that he specifically solved them using the continued fractions approach. But I do not see evidence for either claim in Archimedes's works or in the scholarship about Archimedes I have consulted. In particular, the consensus seems to be that Archimedes did not have the tools to solve the cattle problem. It is indeed significant that the two approximations to $\sqrt3$ in the problem at hand can be found through the continued fraction approach, but I do not see evidence that this is how Archimedes proceeded. $\endgroup$
Andrés E. Caicedo
– Andrés E. Caicedo
2015-01-05 18:01:59 +00:00
Commented Jan 5, 2015 at 18:01
3
$\begingroup$ In particular, T. L. Heath in The Works of Archimedes with the Method of Archimedes (CUP, Cambridge, 1912; reprinted by Dover publication) and W. Knorr, in Archimedes and the measurement of the circle: A new interpretation (Arch. Hist. Exact Sci., 15, (1975), 115140), suggest alternative approaches Archimedes may have used. I became aware of these two references through the excellent Solving the Pell equation by M.J. Jacobson, Jr., and H.C. Williams (Springer, 2009), that contains a very good discussion of these matters (including Archimedes's estimates) in chapter 2. $\endgroup$
Andrés E. Caicedo
– Andrés E. Caicedo
2015-01-05 18:10:51 +00:00
Commented Jan 5, 2015 at 18:10
3
$\begingroup$ @Andres Caicedo: We cannot decide with no reasonable doubt what Archimedes really did, so the question has no definite answer. In my answer I used the principle "If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck". And I think this is the best one can do in this situation. $\endgroup$
Alexandre Eremenko
– Alexandre Eremenko
2015-01-05 19:02:41 +00:00
Commented Jan 5, 2015 at 19:02
9
$\begingroup$ I don't follow your reasoning in "As both of these fractions are not just random approximations, but are the 9-th and 12-th convergents ... there is no doubt that Archimedes used the continued fraction expansion". To me the fact that they're the 9th and 12th, rather than the 11th and 12th, would seem to be strong evidence that he didn't use continued fractions. $\endgroup$
Peter Taylor
– Peter Taylor
2017-10-19 06:17:23 +00:00
Commented Oct 19, 2017 at 6:17
| Show 5 more comments
11
$\begingroup$
There's a very simple geometrical response that provides a rounded solution which might reflect on this problem at It's so much easier seeing it!
This involves circumscribing an equilateral triangle and taking a diameter from the apex. Then, applying $26/15$ as an approximation to $\sqrt 3$ ($\tan 30^\circ\approx 15/26$) and reiterating in the similar triangle on the base gives $15/(225/26)$. Thence, multiplying out by 26, and using the fact that the diameter is divided by the base in the ratio 3:1, and then combining, results in a further approximation of $$\frac{3\cdot225+26\cdot26}{15\cdot2\cdot26}=\frac{1351}{780}.$$
Thereafter, subtracting the original approximation's numerator and denominator $26/15$ gives $$\frac{1351-26}{780-15}=\frac{1325}{765}=\frac{265}{153}$$.
This is such straightforward geometry and reasoning that you wouldn't expect any comment from a Classical geometer. And it doesn't involve Pell!
Share
Improve this answer
edited Sep 3, 2018 at 15:21
Rosie F
12455 bronze badges
answered Apr 13, 2016 at 0:02
Geoff BathGeoff Bath
12111 silver badge22 bronze badges
$\endgroup$
Add a comment |
10
$\begingroup$
Archimedes might, of course, have used a better method. (He also produces the approximation $\frac{1351}{780}$, for which the foregoing is not obviously practical...)
I disagree on the foregoing not being practical: it's a trivial computation compared to some which were carried out by hand by later mathematicians. But I think that really is the nub: any explanation for how Archimedes produced the approximation $\frac{265}{153} < \sqrt{3} < \frac{1351}{780}$ should also explain why he didn't produce the approximation $\frac{989}{571} < \sqrt{3} < \frac{1351}{780}$. Here both your approach and the continued fraction approach fall down.
There is also the point that this tabulation method is rather ungeometrical, and so not really in the expected style. I suspect that Archimedes would have preferred not to publish a result than to face the embarrassment of explaining that it was calculated thus when someone asked him for the proof.
Share
Improve this answer
answered Oct 19, 2017 at 6:36
Peter TaylorPeter Taylor
41233 silver badges1010 bronze badges
$\endgroup$
1
$\begingroup$ I don't know the context this approximation was used in, but perhaps he didn't need such a good lower bound, and this one was easier to work with? $\endgroup$
Ovi
– Ovi
2019-12-06 23:22:43 +00:00
Commented Dec 6, 2019 at 23:22
Add a comment |
6
$\begingroup$
Some information can be found here 1
Using the secant method on the parabola $y = x^2$, a new estimate $x_{n+m}$ for $\sqrt{3}$ can be obtained from $x_n$ and $x_m$ as $$x_{n+m} = \frac{3 + x_n x_m}{x_n + x_m}$$ where if we start with $x_1 = \frac{5}{3}$ then $x_{2n} > \sqrt{3}$ and $x_{2n+1} < \sqrt{3}$.We therefore obtain $$x_1 < x_3 < \sqrt{3} < x_4 < x_2$$ Where $$x_1 = \frac{5}{3}$$ $$x_2 = \frac{3 + x_1 x_1}{x_1 + x_1} = \frac{26}{15}$$ $$x_3 = \frac{3 + x_2 x1}{x_2 + x_1} = \frac{265}{153}$$ $$x_4 = \frac{3 + x_3 x_1}{x_3 + x_1} = \frac{3 + x_2 x_2}{x_2 + x_2} = \frac{1351}{780}$$
Share
Improve this answer
edited Apr 13, 2017 at 12:19
CommunityBot
1
answered Apr 16, 2016 at 17:52
Ignacio RodriguezIgnacio Rodriguez
7111 silver badge11 bronze badge
$\endgroup$
3
$\begingroup$ This does not answer my question, which was Is there some reason I don't appreciate that Archimedes could not have or certainly did not use the method I described? Or did all those other guys overlook the obvious? $\endgroup$
Mark Dominus
– Mark Dominus
2016-04-16 18:30:01 +00:00
Commented Apr 16, 2016 at 18:30
$\begingroup$ If you really think that Archimedes squared several hundred numbers to obtain his well known appoximation of sqrt 3, which by the way is $\frac{265}{153} < \sqrt{3} < \frac{1351}{780}$ and not simply $\frac{265}{153} < \sqrt{3}$ when I have shown a straightforward method which produces the result with only three fraction evaluations, only building a time machine and asking Archimedes himself will convince you that he did not square several hundred numbers to get his result. $\endgroup$
Ignacio Rodriguez
– Ignacio Rodriguez
2016-04-16 18:50:20 +00:00
Commented Apr 16, 2016 at 18:50
1
$\begingroup$ I did not suggest that Archimedes either did or did not use the method I described. Nor did I ask what methods he used or might have used. My question was why several historical experts consider the calculation mysterious when, as we both know, several simple and even obvious methods were available. Your reply does not address this. $\endgroup$
Mark Dominus
– Mark Dominus
2016-04-16 23:51:19 +00:00
Commented Apr 16, 2016 at 23:51
Add a comment |
1
$\begingroup$
We don't know how Archimedes compute square root of 3 not because there is no known method, but because there are several methods that Archimedes could use, and we don't know which one he actually used.
Share
Improve this answer
answered Jun 2, 2020 at 17:41
Alexei KopylovAlexei Kopylov
93144 silver badges66 bronze badges
$\endgroup$
Add a comment |
-1
$\begingroup$
There is a very simple geometric approach to continued fraction that yields the expected approximation:
For the minoring sequence : You alternatively stack a double square above and a square to the right.
For the majoring sequence you start with a quare with a 3:1 rectangle (that creates a 3:4 rectangle) then, as above, You alternatively stack a double square above and a square to the right.
Share
Improve this answer
edited Nov 29, 2022 at 15:10
answered Nov 29, 2022 at 8:00
Fred Beatrix Fred Beatrix
111 bronze badge
$\endgroup$
1
$\begingroup$ Thanks, but that is not relevant to my question, which was "Is there some reason I don't appreciate that Archimedes could not have or certainly did not use the method I described? " $\endgroup$
Mark Dominus
– Mark Dominus
2022-11-29 14:18:40 +00:00
Commented Nov 29, 2022 at 14:18
Add a comment |
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
mathematics
computation
archimedes
See similar questions with these tags.
Featured on Meta
Introducing a new proactive anti-spam measure
Spevacus has joined us as a Community Manager
stackoverflow.ai - rebuilt for attribution
Community Asks Sprint Announcement - September 2025
Related
About Archimedes methods in the discovered palimpsest
Did Archimedes know about Callipus?
9 Where does the formula $(1+\frac r n)^n$ for compound interest come from?
Did Gosper or the Borweins first prove Ramanujans formula?
1 Newton as the first one to establish numerical analysis as a new field of study
11 When and how did Stirling's formula become adulterated?
Hot Network Questions
Is there a specific term to describe someone who is religious but does not necessarily believe everything that their religion teaches, and uses logic?
is the following argument valid?
Change default Firefox open file directory
Calculating the node voltage
What can be said?
How do you emphasize the verb "to be" with do/does?
What happens if you miss cruise ship deadline at private island?
Find non-trivial improvement after submitting
Can a GeoTIFF have 2 separate NoData values?
Is encrypting the login keyring necessary if you have full disk encryption?
Is existence always locational?
Clinical-tone story about Earth making people violent
Who is the target audience of Netanyahu's speech at the United Nations?
Exchange a file in a zip file quickly
How to design a circuit that outputs the binary position of the 3rd set bit from the right in an 8-bit input?
ICC in Hague not prosecuting an individual brought before them in a questionable manner?
Why is the fiber product in the definition of a Segal spaces a homotopy fiber product?
how do I remove a item from the applications menu
manage route redirects received from the default gateway
The rule of necessitation seems utterly unreasonable
Quantizing EM field by imposing canonical commutation relations
Proof of every Highly Abundant Number greater than 3 is Even
What "real mistakes" exist in the Messier catalog?
How to use cursed items without upsetting the player?
more hot questions
Question feed |
17293 | https://www.finra.org/rules-guidance/notices/92-16 | FINRA Utility Menu
For the Public
FINRA DATA
FINRA Data provides non-commercial use of data, specifically the ability to save data views and create and manage a Bond Watchlist.
For Industry Professionals
FINPRO
Registered representatives can fulfill Continuing Education requirements, view their industry CRD record and perform other compliance tasks.
For Member Firms
FINRA GATEWAY
Firm compliance professionals can access filings and requests, run reports and submit support tickets.
For Case Participants
DR PORTAL
Arbitration and mediation case participants and FINRA neutrals can view case information and submit documents through this Dispute Resolution Portal.
Need Help? | Check System Status
Log In to other FINRA systems
FINRA Utility Menu
For the Public
FINRA DATA
FINRA Data provides non-commercial use of data, specifically the ability to save data views and create and manage a Bond Watchlist.
For Industry Professionals
FINPRO
Registered representatives can fulfill Continuing Education requirements, view their industry CRD record and perform other compliance tasks.
For Member Firms
FINRA GATEWAY
Firm compliance professionals can access filings and requests, run reports and submit support tickets.
For Case Participants
DR PORTAL
Arbitration and mediation case participants and FINRA neutrals can view case information and submit documents through this Dispute Resolution Portal.
Need Help? | Check System Status
Log In to other FINRA systems
FINRA Main Navigation
NASD Policies and Procedures for Markups/Markdowns in Equity Securities
| |
| SUGGESTED ROUTING |
| Senior Management Government Securities Institutional Internal Audit Legal & Compliance Municipal Operations Syndicate Trading Training These are suggested departments only. Others may be appropriate for your firm. |
SUGGESTED ROUTING
EXECUTIVE SUMMARY
Fairness of markups and markdowns charged by members in principal equity transactions with customers has become an increasingly important issue to members as the NASD, Securities and Exchange Commission (SEC), state, and other federal regulatory and criminal agencies place greater emphasis on these practices in examination and enforcement programs. The NASD and other regulators have initiated a substantial number of disciplinary proceedings during the recent past alleging excessive markups. The Association has also placed great emphasis on member education in the area as well as a variety of other sales- and trading-practice abuses at NASD educational seminars and in Notices to Members.
In order to provide more guidance and assistance to our members, and in response to interest expressed by a significant segment of the membership for a comprehensive NASD release concerning markups and markdowns, the NASD is issuing this detailed Notice, which addresses the significant considerations in determining appropriate markups and mark-downs in connection with retail transactions in equity securities.
Members must take steps to develop compliance procedures designed to guard against abusive markup/markdown practices, and to ensure that critical issues such as prevailing market price, market-maker status, market environment for a security, and validation of quotations, among others, are routinely and consistently considered.
The NASD is committed to ensuring fair pricing with customers and requires strict adherence to all rules and regulations to accomplish this goal. We will continue to carefully review markup and markdown practices in examinations and investigations, and violations will be vigorously pursued. We are hopeful that this Notice, which embodies existing principles governing markups and markdowns, will aid members in their compliance efforts so customer protection is enhanced and fewer disciplinary actions required.
Questions concerning this Notice may be directed to your local NASD district office, or to William R. Schief (Vice President) or Daniel M. Sibears (Director), NASD Compliance Division, 1735 K Street, NW, Washington, DC 20006-1506.
1 See, Notice to Members 91-69, issued in November 1991, which addresses, among other things, the application of the NASD's Mark-Up Policy to the secondary market in direct participation program securities.
2 Although the narrative portion of this memorandum tends to focus on markups, the same principles are to be applied when considering markdowns in purchases from customers by a member.
3 In its House Report on the Penny Stock Reform Act of 1990, Congress cited Alstead as the leading case articulating the principles for calculating markups in equity securities.
4 NASD Manual (CCH) pi54.
5 See, Gerald M. Greenberg, 40 S.E.C. 133 (1960); Thill Securities Corp., 42 S.E.C. 89 (1964).
6 See, Peter J. Kisch, 47 S.E.C. 802 (1982); Staten Securities Corp., 47 S.E.C. 766 (1982); Powell & Associates, Inc., 47 S.E.C. 746 (1982).
7 Article III, Section 18 of the Rules of Fair Practice states:
No member shall effect any transaction in, or induce the purchase or sale of, any security by means of any manipulative, deceptive or other fraudulent device or contrivance. NASD Manual (CCH) ¶ 2168.
8 In Hampton Securities, Inc., NASD, ATL-992 (June 1, 1989), the NASD Board said that a competitive market is characterized by three features: (i) the regular publication of quotations with relatively narrow spreads, (ii) frequent interdealer transactions consistently effected at or near the quoted prices so as to validate the reliability of quotations, and (iii) the absence of domination by a single firm.
9 Members should be aware that factors other than volume may indicate domination and control. For example, in the SEC case of Universal Heritage Investment Corp., SEC Exchange Act Release No. 19308 (Dec. 8, 1982), a market maker was the only firm to submit quotations on 109 of 189 days under review, and on 5 2 of those days it was the exclusive or shared high bid for the security, leading the Commission to conclude that the firm controlled the market for the security to such a degree that it could not use quotes for computing its markups.
10 Considerations for determining the best evidence of the prevailing market price in various situations are summarized in a matrix attached as Appendix A, which contains a separate matrix for arriving at the prevailing market price in both markup and markdown situations.
11 See, Alstead; Gateway Stock and Bond, Inc., 43 S.E.C. 191 (1966); aftatin & Co., Inc., 41 S.E.C. 823 (1964).
12 Hampton; See also, James E. Ryan, SEC Exchange Act Release No. 18617 (April 5, 1982); Powell & Associates, Inc., 47 S.E.C. 746 (1982); First Pittsburgh Securities Corp., 47 S.E.C. 299 (1980); Charles Michael West, 47 S.E.C. 39 (1979).
l3 See, NASD Manual (CCH) ¶ 2151.03.
14 See also, SEC Exchange Act Release No. 29093 (April 17, 1991 at 1189) in which the Commission explicitly accepted the use of contemporaneous purchases from retail customers as indicative of prevailing market price in a dominated and controlled environment that lacks interdealer purchases by the subject firm.
15 NASD Regulatory & Compliance Alert, Volume 2, No. 3 (October 1988).
16 Rule 10b-10 requires broker/dealers, other than market makers, that execute riskless principal trades in equity securities to disclose on the confirmation the amount of any markup, markdown, or similar remuneration received in the transaction. This confirmation disclosure does not alter the broker/dealer's responsibilities under the markup policy or the anti-fraud provision of Article III, Section 18 of the Rules of Fair Practice and the federal securities laws.
17 See Alstead; Gateway Stock and Bond, Inc.; Naftalin & Co.
18 See, LSCO Securities, Inc., ("absent some showing of a change in the prevailing market, a dealer's inter-dealer cost may be used to establish market price for a period up to five business days from the date of the dealer's purchase."); Alstead, (same day or day before sales); First Pittsburgh Securities Corp. (Sales within one business day of the firm's purchases are closely related in time); Under Bilotti & Co., Inc., 42 S.E.C. 807 (1965) (If no same-day purchase occurred, the price at which registrant made the most nearly contemporaneous purchase within three days before or after the sale.) Compare, Nicholas A. Codispoti, SEC Exchange Act Release No. 24946 (Sept. 29, 1987) (Firm purchases of municipal securities within five days of its sales to customers.); Naftalin & Co., Inc., at 829 n. 16 (no purchase on the day of sales but purchases made on the preceding day as well as the succeeding day.)
19 For example: "Transactions occurring over a period of time cannot be lumped together for the purpose of determining whether mark-downs or markups are fair." Markups and markdowns must be reasonably related to the prevailing market price at the time each transaction is executed. Hamilton Bohner, Inc., SEC Exchange Act Release No. 27232 (September 8, 1989).
20 See R.B. Marich, Inc., et. al, NASD, MS-849 (Dec. 23, 1991).
21 See General Investing Corporation, 41 S.E.C. 952 (1964).
APPENDIX A: MARKUPS
General Guidelines for Determining the Best Evidence of Prevailing Market Price in Order of Priority for a Normal-Size Trade
| | | |
---
| | Market Maker | Non-Market Maker |
| Active Competitive Market | Nasdaq/NMS A. Comparable sales to other broker/dealers immediately surrounding retail sales. B. Lowest ask (inside ask) quotation reflected in Nasdaq at time of sale to customer (if validated). Regular Nasdaq A. Comparable sales to other broker/dealers immediately surrounding retail sales. B. Lowest ask (inside ask) quotation reflected in Nasdaq at time of sale to customer if validated by a comparison with interdealer transactions. NNOTC A. Contemporaneous sales to other broker/dealers. B. Lowest ask quotation if validated by a comparison with interdealer transactions. C. Contemporaneous purchases from other broker/dealers (i.e., cost). D. Contemporaneous purchases from customers adjusted for appropriate imputed markdowns. | All Securities A. Contemporaneous purchases from other broker/dealers (i.e., cost). B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown. |
| Inactive Competitive Market | All Securities A. Contemporaneous sales to other broker/dealers. B. Lowest ask quotation if validated by a comparison with interdealer transactions. C. Contemporaneous purchases from other broker/dealers (i.e., cost). D. Contemporaneous purchases from customers adjusted for appropriate imputed markdown. | All Securities A. Contemporaneous purchases from other broker/dealers (i.e., cost). B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown. |
| Dominated and Controlled Market | All Securities A. Contemporaneous purchases from other broker/dealers (i.e., cost). B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown. | All Securities A. Contemporaneous purchases from other broker/dealers (i.e., cost). B. Contemporaneous purchases from customers adjusted to account for appropriate imputed markdown. |
Market Maker
Non-Market Maker
Active Competitive Market
Nasdaq/NMS
A. Comparable sales to other broker/dealers immediately surrounding retail sales.
B. Lowest ask (inside ask) quotation reflected in Nasdaq at time of sale to customer (if validated).
Regular Nasdaq
A. Comparable sales to other broker/dealers immediately surrounding retail sales.
B. Lowest ask (inside ask) quotation reflected in Nasdaq at time of sale to customer if validated by a comparison with interdealer transactions.
NNOTC
A. Contemporaneous sales to other broker/dealers.
B. Lowest ask quotation if validated by a comparison with interdealer transactions.
C. Contemporaneous purchases from other broker/dealers (i.e., cost).
D. Contemporaneous purchases from customers adjusted for appropriate imputed markdowns.
All Securities
A. Contemporaneous purchases from other broker/dealers (i.e., cost).
B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown.
Inactive Competitive Market
All Securities
A. Contemporaneous sales to other broker/dealers.
B. Lowest ask quotation if validated by a comparison with interdealer transactions.
C. Contemporaneous purchases from other broker/dealers (i.e., cost).
D. Contemporaneous purchases from customers adjusted for appropriate imputed markdown.
All Securities
A. Contemporaneous purchases from other broker/dealers (i.e., cost).
B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown.
Dominated and Controlled Market
All Securities
A. Contemporaneous purchases from other broker/dealers (i.e., cost).
B. Contemporaneous purchases from customers adjusted for appropriate imputed markdown.
All Securities
A. Contemporaneous purchases from other broker/dealers (i.e., cost).
B. Contemporaneous purchases from customers adjusted to account for appropriate imputed markdown.
This matrix is part of Notice to Members 92-16 and cannot be relied on as a separate document. Consult the full text of the Notice when using this guide.
APPENDIX A: MARKDOWNS
General Guidelines for Determining the Best Evidence of Prevailing Market Price in Order of Priority for a Normal-Size Trade
| | | |
---
| | Market Maker | Non-Market Maker |
| Active Competitive Market | Nasdaq/NMS A. Comparable purchases from other broker/dealers immediately surrounding retail sales. B. Highest bid (inside bid) quotation reflected in Nasdaq at time of purchase from customer (if validated). Regular Nasdaq A. Comparable purchases from other broker/dealers immediately surrounding retail sales. B. Highest bid (inside bid) quotation reflected in Nasdaq at time of purchase from customer if validated by a comparison with interdealer transactions. NNOTC A. Contemporaneous purchases from other broker/dealers. B. Highest bid quotation if validated by a comparison with interdealer transactions. C. Contemporaneous sales to other broker/dealers. D. Contemporaneous sales to customers adjusted for appropriate imputed markup. | All Securities A. Contemporaneous sales to other broker/dealers. B. Contemporaneous sales to customers adjusted for appropriate imputed markup. |
| Inactive Competitive Market | All Securities A. Contemporaneous purchases from other broker/dealers. B. Highest bid quotation if validated by a comparison with interdealer transactions. C. Contemporaneous sales to other broker/dealers. D. Contemporaneous sales to customers adjusted for appropriate imputed markup. | All Securities A. Contemporaneous sales to other broker/dealers. B. Contemporaneous sales to customers adjusted for appropriate imputed markup. |
| Dominated and Controlled Market | All Securities A. Contemporaneous sales to other broker/dealers. B. Contemporaneous sales to customers adjusted for appropriate imputed markup. | All Securities A. Contemporaneous sales to other broker/dealers. B. Contemporaneous sales to customers adjusted for appropriate imputed markup. |
Market Maker
Non-Market Maker
Active Competitive Market
Nasdaq/NMS
A. Comparable purchases from other broker/dealers immediately surrounding retail sales.
B. Highest bid (inside bid) quotation reflected in Nasdaq at time of purchase from customer (if validated).
Regular Nasdaq
A. Comparable purchases from other broker/dealers immediately surrounding retail sales.
B. Highest bid (inside bid) quotation reflected in Nasdaq at time of purchase from customer if validated by a comparison with interdealer transactions.
NNOTC
A. Contemporaneous purchases from other broker/dealers.
B. Highest bid quotation if validated by a comparison with interdealer transactions.
C. Contemporaneous sales to other broker/dealers.
D. Contemporaneous sales to customers adjusted for appropriate imputed markup.
All Securities
A. Contemporaneous sales to other broker/dealers.
B. Contemporaneous sales to customers adjusted for appropriate imputed markup.
Inactive Competitive Market
All Securities
A. Contemporaneous purchases from other broker/dealers.
B. Highest bid quotation if validated by a comparison with interdealer transactions.
C. Contemporaneous sales to other broker/dealers.
D. Contemporaneous sales to customers adjusted for appropriate imputed markup.
All Securities
A. Contemporaneous sales to other broker/dealers.
B. Contemporaneous sales to customers adjusted for appropriate imputed markup.
Dominated and Controlled Market
All Securities
A. Contemporaneous sales to other broker/dealers.
B. Contemporaneous sales to customers adjusted for appropriate imputed markup.
All Securities
A. Contemporaneous sales to other broker/dealers.
B. Contemporaneous sales to customers adjusted for appropriate imputed markup.
This matrix is part of Notice to Members 92-16 and cannot be relied on as a separate document. Consult the full text of the Notice when using this guide.
FINRA Utility Menu
FINRA Main Navigation
General Inquiries
301-590-6500
Securities Helpline for Seniors®
844-574-3577 (Mon-Fri 9am-5pm ET)
File a Regulatory Tip
To report on abuse or fraud in the industry
Arbitration & Mediation
FINRA operates the largest securities dispute resolution forum in the United States
File an Investor Complaint
File a complaint about fraud or unfair practices.
Small Firm Help Line
833-26-FINRA (Mon-Fri 9am-6pm ET)
Office of the Ombuds
Report a concern about FINRA at 888-700-0028
Footer Legal Links
© 2025 FINRA. All Rights Reserved.
FINRA is a Registered Trademark of the Financial Industry Regulatory Authority, Inc. |
17294 | https://philosophy.stackexchange.com/questions/52127/since-words-are-defined-in-terms-of-other-words-in-dictionaries-leading-to-infi | Skip to main content
Asked
Modified
6 years, 7 months ago
Viewed
12k times
This question shows research effort; it is useful and clear
Save this question.
Show activity on this post.
5
17
Dictionary lemmas are not definitions, they are descriptions.
– user2953
Commented
May 13, 2018 at 15:14
14
Dictionaries are not authority for word definitions. They are intended to be guides for a language not the law for word usage. Human beings must rely on the context of the words used to get the meaning of the communication. There are times when understanding the words just make the communication easier to express and faster once both parties know the context one usually mentions. Every subject matter can use the same word with a different context. Marines use words other Marines know the context of so they don't need a dictionary. Context is most important with or without dictionaries.
– Logikal
Commented
May 13, 2018 at 15:47
32
If natural languages are meaningless, why are you asking here ?
– Mauro ALLEGRANZA
Commented
May 13, 2018 at 16:21
Comments are not for extended discussion; this conversation has been moved to chat.
– Philip Klöcking
♦
Commented
May 15, 2018 at 11:14
A DAG would almost certainly lead to words being 'missed', and their meanings unhandled. There would form 'cliques' of related meaning that would be of linguistic interest but it would not be useful for obtaining the meaning of any word.
– christo183
Commented
Jan 24, 2019 at 13:21
Add a comment
|
14 Answers 14
Reset to default
This answer is useful
64
Save this answer.
Show activity on this post.
Natural languages do not depend in any fundamental way on our learning the meanings of words from dictionaries. No child I know learns to speak, read and understand meanings by memorising dictionary entries.
For one thing, a child must know some rules of grammar even to see the point of a dictionary. For another, children learn words by associating sounds with meanings; and meanings with written words only secondarily. Indeed I know some people who can speak a foreign language, understanding fully the meanings of its standard words, without being able to write or read a word of the language. They do not appear to be whizzing around in infinite loops.
Sounds can be connected with meaning by ostension among other ways. X says to child Y : 'Cat jumps down' as the cat jumps down. X later says to the child Y : 'Kim jumps down' as child Z jumps down. The semantics of 'jumps' dawns on the child. This is only a thumbnail sketch of how a child comes to understand the meanings of words but there's not a dictionary in sight.
Whatever the role of dictionaries, no feature of them renders natural languages meaningless.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 13, 2018 at 15:49
Geoffrey Thomas♦Geoffrey Thomas
36.3k44 gold badges4646 silver badges150150 bronze badges
4
2
As an additional argument: Meta-language (language discourse about language) cannot be understood before the age of roughly 4-5, although language learning obviously happens earlier. Exactly because of the difference of diadic and triadic symbols, see Krüger, Hans-Peter (2014): 'The Nascence of Modern Man: Two Approaches to the Problem – Biological Evolutionary Theory and Philosophical Anthropology' in J. de Mul (ed.) Plessner's Philosophical Anthropology: Perspectives and Prospects. Amsterdam: Amsterdam University Press, pp. 57–77.
– Philip Klöcking
♦
Commented
May 14, 2018 at 11:01
children learn words by associating sounds with meanings; and meanings with written words only secondarily. Where would you put the sounds--->letters/words association in there? (Honest question, hope it's not too off-topic)
– xDaizu
Commented
May 14, 2018 at 11:36
@xDaizu That would come even later, I think. A toddler can learn to write the word which means her name without learning that each letter makes part of the sound of her name. Similarly, there is a didactic method of teaching complete words by sight well in advance of teaching phonetic reading or even talking at all about the individual letters in those words.
– Beanluc
Commented
May 15, 2018 at 21:51
1
Assumption that all languages are based on sound notwithstanding, this is a good answer.
– TRiG
Commented
May 16, 2018 at 14:22
Add a comment
|
This answer is useful
13
Save this answer.
Show activity on this post.
The fact that a dictionary defines each word as a loop that includes other words doesn't mean there is no information present in the dictionary. The information about all the words together is encoded in a mangled form, namely in the structure of this network of relations and loops.
If an alien civilization received an English dictionary, there's a good chance they will eventually be able to decode large parts of it. The source of this information is the structure of the graph.
Unless you're a badly programmed robot, you just never get to the point where you're looping around ad infinitum.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 14, 2018 at 7:19
JesbusJesbus
25511 silver badge77 bronze badges
6
1
Could you give an example of how the structure of the graph encodes information? Does this not also largely depend on the culture behind the language? There are languages which do not distinguish certain colours, languages which have more/less words for certain animal species, etc. (Basically, the fact that there is no 1-to-1 correspondence between different natural languages means that the graph structure of different dictionaries won't be the same.) Do you have any references to back your claims up?
– user2953
Commented
May 15, 2018 at 13:10
1
To the latter point about recovering the english language as an alien civilization. Historians have successfully done something similar with latin, but the notable difference is that some of latin still "lives on" in modern languages.
– Rohan Jhunjhunwala
Commented
May 15, 2018 at 20:02
2
@Keelan here you go, ... physics.aps.org/synopsis-for/10.1103/PhysRevX.2.031018 ... "It was found that dictionaries consist of a set of words, roughly 10% the size of the original dictionary, from which all other words can be defined ... we show that new concepts can be introduced only by the insertion of a loop into the graph... This relationship between concepts and loops reflects our basic intuition that new concepts must be self-contained and as such the collection of words used to represent them must be self-referential."
– Chan-Ho Suh
Commented
May 15, 2018 at 20:53
1
@Chan-HoSuh sorry, I don't see how it is related. My problem is, for instance, that the graph structure of the subgraph containing animal names and that containing plant names will look highly similar, since both will be defined taxonomically. How would one be able to recognise which graph is for animals, and which for plants, especially considering that different languages may choose (not) to distinguish different species?
– user2953
Commented
May 15, 2018 at 20:56
1
@JeffUK In theory, you could consider all possible mappings of real world structures onto these dictionary graph structures and figure out which is the best fit. In practice, that would be computationally infeasible, but perhaps not for aliens with superior technology, and especially not for those willing to do some field research on human societies.
– Chan-Ho Suh
Commented
May 16, 2018 at 13:48
|
Show 1 more comment
This answer is useful
9
Save this answer.
Show activity on this post.
Many linguists including Chomsky I believe have studied languages up to the point of realizing that there are no set rules as to how languages develop. They just do.
It's technocratic and overly formal philosophical view to think that natural language is structured like formal languages. Formal languages can model natural language, but it doesn't mean that natural language "reduces" to formal language, because they have developed through different means. Lojban is an attempt to a natural language that's both formal and natural, but as it's not adopted yet one cannot understand whether people have evolved to it or not. Due to formality it's also perhaps more complicated language, just like maths is "complicated" for some people, so it might be that it will never become broad. Natural language has other aspects than mere syntax for people. It's by very nature informal in many cases.
Also dictionaries are not a "formula" for how words form. It's just a map about language. It tells what exists, but it doesn't tell how they developed. So in natural reality the structure given in a dictionary, doesn't likely exist.
Natural language is a form of communication and communication is a basic necessity for (group) animals.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 13, 2018 at 16:00
user16821user16821
Add a comment
|
This answer is useful
7
Save this answer.
Show activity on this post.
Humans do not initially learn language from the dictionary; they initially learn the first rudiments of language from ostensive definitions (also known as "definition by pointing"). A pre-linguistic child points to a cat, and his mother says "cat", and after repetition of this event, the child learns that the word "cat" describes the class of furry creatures he has been pointing to. Eventually he learns this well enough (and develops his speech sufficiently) that he points at a cat and says "cat" himself. Once a child has formed a language of nouns from ostensive definitions, he gradually moves on to higher concepts, with verbs, and then adjectives. He points to a jumping cat and says "cat", and now his mother says, "Yes, cat. Cat is jumping." Since he has already learned cat, now he starts to learn the ostensive definition of "jumping".
The purpose of a dictionary is to assist literate people (not pre-literate children) to relate concepts together, by describing each word in terms of others. The definitions in the dictionary are mostly in a descriptive hierarchy, where more complex higher-order concepts are described in terms of less complex lower-order concepts. However, due to the necessity of describing lower-order concepts, like "cat", without appeal to ostensive definitions, this hierarchy is reversed for simple ostensive concepts, and their description is given in terms of higher-order concepts. For example, one dictionary definition of "cat" (excluding slang) is:
noun
a small domesticated carnivore, Felis domestica or F. catus, bred in a number of varieties.
any of several carnivores of the family Felidae, as the lion, tiger, leopard or jaguar, etc.
Obviously no pre-linguistic child would gain any assistance from this definition, since the concept of "cat" is described in terms of higher order concepts ("domesticated", "carnivore", etc.) that the child will learn later on, after they already know what a cat is. If his mother tried to describe this to the child by pointing to the cat and saying "small domesticated carnivore", it would lead to confusion. However, to a literate adult, this definition assists in relating "cat" to the broader concept "carnivore" and other concepts. (And the perceptive adult will presumably notice that the first definition refers only to the subclass of domesticated cats, since not all cats are domesticated.)
The infinite loop you are referring to would occur if you used the dictionary in isolation to try to learn language. If a thinking being (capable of forming language) was deprived of observation, and thereby deprived of ostensive definitions, it would not be able to form language from the dictionary. Without ostensive definitions, the dictionary would merely give a web of inter-related words, with no reference tying them to reality. Every definition would lead to a cycle in the dictionary, or an undefined term. (To deal with this shortcoming, some dictionaries have pictures, which gives some progress toward an ostensive definition, though a single picture is not really sufficient.)
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 15, 2018 at 2:23
BenBen
2,11488 silver badges1616 bronze badges
5
4
I like the idea that maybe we could have a dictionary with a single picture of a cat, and then all further human knowledge derived from that.
– mattdm
Commented
May 15, 2018 at 6:18
1
For a good ostensive definition, I think the best idea would be to have a bunch of different pictures of cats, which vary in terms of location, colour, what the cat is doing, etc. This allows the viewer to see that the common element is the presence of the cat, and hence, the concept of "cat" is referring to that entity.
– Ben
Commented
May 15, 2018 at 7:00
1
Based on the number of cat pictures on the internet, one might imagine that all human knowledge is derived from pictures of cats. ;)
– Ben
Commented
May 15, 2018 at 7:00
1
@mattdm That would be an interesting machine learning project. If you fed a computer a dictionary and a picture, and told the computer that the picture related to a specific word, then tasked it to build it's own dictionary off of related concepts, how many other pictures could we introduce it to that it would instantly be able to define, and what picture would give the largest number of definable concepts.
– SGR
Commented
May 15, 2018 at 10:19
Thanks for this. It's a stark example that dictionaries don't define simple words at all. At best they "elaborate" or "formalize" them. Really quite ridiculous.
– Jeff Y
Commented
Oct 7, 2019 at 19:38
Add a comment
|
This answer is useful
3
Save this answer.
Show activity on this post.
Words are mapped to their "real world" physical objects as observed by humans, or "real" emotions felt by humans. Abstractions slowly depart from this but still follow the same form, as if they had some true "type" that exists somewhere we can't see.
The dictionary definition is a reflection of the word, using other words to define it inside the mind of the human reading it.
Yes, if you start looking at the definitions of words in the definition, the whole thing become an infinite loop, just like if you put 2 mirrors together. That's just a property of how the mirror works/how the dictionary works, not a property of the physical object/or the word inside the language.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
edited May 14, 2018 at 18:04
answered May 14, 2018 at 17:56
IKMIKM
17922 bronze badges
3
Definitions are not useful if they are mere reflections. I still dont know the definition of a Human Being. Defining a Person is a reflection in most cases. So know I don't know either one. GREAT! How is this reflective dedinition of person or human being helpful to anyone? You will only know either term by experience. You cannot know by concept alone..
– Logikal
Commented
May 14, 2018 at 19:18
2
@Logikal - I think you're actually agreeing with my point. The word "pitcher" is only helpful because humans have mapped it to the real-life object. The real-life object is not meaningless. The dictionary definition would define a pitcher of water with words, only words. The same way a mirror reflects a pitcher of water with light, only light. They break-down when you hold 2 mirrors together or when you try to define words inside of the definition of a word. That's simply a flaw in how the dictionary works, not a flaw in how the language works. Natural language is not meaningless.
– IKM
Commented
May 14, 2018 at 21:33
1
There are things that can't be mapped to real world objects such as human being and person. We think we know what they are. We have no definition for many words still. We can even define unicorns which are not real world objects. The point is some definitions are well formed while others are horrible. The context is what humans extract even in a foreign language.
– Logikal
Commented
May 14, 2018 at 22:36
Add a comment
|
This answer is useful
2
Save this answer.
Show activity on this post.
I recap your question, then answer, based on my experience:
The purpose of natural language is to convey meaning.
The mechanism by which natural language conveys meaning is use of words.
A mechanism that allows words to convey meaning, thus endowing natural language with meaning is then necessary; this mechanism is the definition of a word.
Then, if a word's definition is, as you imply, an infinite loop, does the whole foundation for natural language's meaning crumble?
No. A word is not defined as an infinite loop. The relation of a word to an infinite loop is an emergent property of the word-definition system; it does not define a word. Thus, a word is not an infinite loop and therefore does not confer 'meaninglessness' to natural language.
In addition, it seems you assume that an infinite loop is meaningless. This is not an obvious truth, if a truth at all.
Regarding the validity of an infinitely recursive definition, a definition must impart a finite description of a given concept to be well-founded.
Your final question strikes me as recursive. I'm not clear on its meaning...
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 13, 2018 at 23:36
torelizatoreliza
2111 bronze badge
Add a comment
|
This answer is useful
2
Save this answer.
Show activity on this post.
The trick is that you don't need any language to understand basic concepts or usual things which happen around you every day.
We as humans developed language to share information without the need of showing. Without language, if I wanted you to know that I have a dog, I would have to bring you to my place (or show you a photo or draw a picture of the dog) - instead we developed some basic words to describe basic things. Nobody asks what a dog is or what it means to "be" (leaving philosophical considerations of existence aside).
Anyways, naming basic things and activities is a second step, because as I said, you don't need language to understand them, which is the first step - babies interact with and understand their parents to a high extent, before they are able to understand the language (not to mention using it).
The third step is to use combination of words as sentences to carry and share infromation.
The fourth step is to use combination of words to define meaning of new words - to describe abstract concepts or the ones which would be hard to show otherwise.
There's no infinite loop. You basicly think that an advanced feature of language is a requirement for understanding and sharing information. Which is wrong.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 14, 2018 at 13:05
ElmoVanKielmoElmoVanKielmo
12133 bronze badges
2
I like your answer, but do you have any references that would point me to the literature so I could get more information?
– Frank Hubeny
Commented
May 14, 2018 at 13:37
@ElmoVanKielmo Your first sentence, and the second to last sentence seem to cancel each other out.
– Norman Edward
Commented
May 14, 2018 at 15:38
Add a comment
|
This answer is useful
2
Save this answer.
Show activity on this post.
You are asking two very different questions as if they were equivalent, which they are not:
Are all natural languages bound to be meaningless?
This depends entirely on your definition of meaningless. For nearly all commonly accepted definitions, the answer is "no." Empirically speaking, people do communicate meaning (by most definitions) through natural language. Conversely, much of what most people would consider "meaningful" cannot be expressed formally. I'm guessing you would consider (for example) love poetry to be "meaningless," but plenty of people would disagree.
Should we state our logical proofs in formal logic / mathematically?
Generally, yes. That's arguably the entire reason we have formal logic in the first place.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 14, 2018 at 20:23
Chris SunamiChris Sunami
31.3k22 gold badges5353 silver badges107107 bronze badges
Add a comment
|
This answer is useful
2
Save this answer.
Show activity on this post.
The map is not the territory.
A dictionary is a map of a language (actually just a partial map, as grammar, metaphors and many other details are missing). The fact that it is its own legend does not change the fact that the dictionary describes a language, but the language is not what is contained within the dictionary.
The meaning of words is a question of semantics. A dictionary is using the tool of words to convey this meaning in the same way that a factory of the right kind can produce another factory. But as you noticed, if you look for the meaning of the factory, you need to look outside of it. But you won't be able to express it with the machines and materials of the factory itself.
Words can be understood without a dictionary, and are in fact acquired by native speakers (i.e. as children) through use and observation rather than through rote learning of dictionary definitions. The dictionary comes second, not first. Arguing about a property of language through a property of a dictionary is rather backwards.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 15, 2018 at 12:08
TomTom
2,19011 gold badge1010 silver badges1414 bronze badges
Add a comment
|
This answer is useful
1
Save this answer.
Show activity on this post.
You're querying the definition of 'definition'! A non-human that communicates by other means than speech would ask how we know what words mean. The short version of a long answer is that our species is hard-wired to learn language. It's been proven many times. We may reasonably suppose that in-group communication is a massive survival advantage; we do it with language, thus we've evolved to be very good at learning it.
Dictionaries exist for further language learning, they don't specify meanings. Communities specify them by common consent.
This question's interesting in light of contemporary postmodernism. We are now asked to accept that everyday words mean only what the speaker says they do: language is socially constructed and thus changeable by any member of a society. By extension, nobody can be sure what anyone else means by "table" so there's no such thing as a table ... be careful when putting your drink down! (What is a drink?) If I could meet Foucault now, I'd bang his head against a wall until he understood that a wall is, indeed, an impermeable structure.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 13, 2018 at 22:27
cherryaustincherryaustin
11133 bronze badges
2
2
Do you have references where someone could go to get more information about your perspective?
– Frank Hubeny
Commented
May 13, 2018 at 23:46
Not to hand, I'm afraid. I'll have to look 'em up for you - will do if I get time.
– cherryaustin
Commented
May 14, 2018 at 0:39
Add a comment
|
This answer is useful
1
Save this answer.
Show activity on this post.
Yes. Natural languages begin with agreement for abstract usages. Do Words Have Inherent Meaning Each individual piece develops into larger structures based on the need to communicate content. Once the agreement on the content is made, the language develops by usage. And, by usage alone. Dictionaries attempt to give the most common usages. If you do not understand the initial base agreements for a language it is all meaningless, in your personal POV. Both language and mathematics are based upon agreed abstractions. Mathematics has a greater commonality and distribution well beyond most common languages. That does not make it more meaningful to those who have the base understanding for it. That is a basic, superficial answer. There is an inherent human need to develop and learn to communicate with abstractions. Both Mathematics and languages satisfy that need. And, that primal instinct to connect and create legacies, cannot be tossed away as 'meaningless'.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 14, 2018 at 15:27
Norman EdwardNorman Edward
20522 silver badges99 bronze badges
Add a comment
|
This answer is useful
1
Save this answer.
Show activity on this post.
No. Definitions are fundamentally rooted either in our senses (for physical meaning -- e.g. "what does table mean?"), in our actions (for ethical meaning -- e.g. "what does rights mean?") or in the axioms of an abstract system (for mathematical meaning -- e.g. "what does triangle mean?").
So you have a basis on which stuff is defined. There is no circularity (when your semantics isn't rooted in these, that's when you have meaninglessness).
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered May 15, 2018 at 10:13
Abhimanyu Pallavi SudhirAbhimanyu Pallavi Sudhir
1691010 bronze badges
Add a comment
|
This answer is useful
1
Save this answer.
Show activity on this post.
Although controversial within the field of linguistics, there are some linguists who believe that natural human languages are built from semantic primes, the base set of concepts which cannot be meaningfully reduced to other concepts. While it is difficult to determine what the primes are (one project, the Natural Semantic Metalanguage has been refining its set of primes for over 45 years), once you have determined your list of semantic primes you can in theory define every word in every human language without recursion.
People have expectations of the dictionaries they use however, and if a dictionary embraced a theory like NSM and left primes like I, YOU, GOOD or THINK undefined, it would probably get so many complaints that they'd have to quickly publish a revision defining the primes just to stop the complaints.
And in practice, as the language specific expressions of these primes have additional meanings, it is appropriate for dictionaries to define them. For example, the prime YOU is strictly only the singular second person pronoun, however the English word you means both the singular and plural second person.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered Jul 15, 2018 at 6:27
curiousdanniicuriousdannii
1,83144 gold badges2222 silver badges2626 bronze badges
Add a comment
|
This answer is useful
1
Save this answer.
Show activity on this post.
In a dictionary and in the brain, words and concepts are defined on top of each other.
In the brain, at the bottom of the conceptual hierachy, words are bound to senses & feelings. That makes the brain definition of language non cyclical (and artificial intelligence difficult).
The traditional paper dictionary showed little pictures to save space (but encyclopedia showed more) and could not let you smell a dog or hear it bark. So yes the traditional dictionary was cyclical.
With the advent of multimedia, it doesnt have to stay so.
Share
CC BY-SA 4.0
Improve this answer
Follow this answer to receive notifications
answered Jan 24, 2019 at 10:15
Manu de HanoiManu de Hanoi
43622 silver badges1010 bronze badges
Add a comment
|
You must log in to answer this question.
Start asking to get answers
Find the answer to your question by asking.
Ask question
Explore related questions
philosophy-of-language
language
See similar questions with these tags.
Linked
0
Are all concepts definable?
5
Why does a word refer to the particular object it refers to?
3
Defining 2 words with respect to each other
3
What's the simplest thing?
1
How does imprecise and ambiguous natural language relate to the equivocation fallacy and how can we know what words mean?
Related
4
Is philosophy meaningless?
0
How can language be objective, when it depends on subjectivities?
0
Is it useful to be able to define complex words with many definitions?
3
Is there any theory which formalises the “facts” of a scientific field?
1
How to represent concepts without words or gestures (purely in your mind) in a discrete and easily "navigable" way?
5
What are some refutations to the etymological fallacy?
4
What's the deal with definitional disputes relying so much on the particulars of language?
0
Can etymologies eliminate the confusion present in philosophy today?
Hot Network Questions
How can I temporarily seal a crack in an acrylic shower pan?
Ok to solder copper pipe that's close to black gas pipe
Can I expect flight prices around a public holiday to drop in the 7 months leading up to the holiday?
spline parameter not working with set curve radius
Three Houses in Mile Lane
Is “nothing happens at the event horizon” only true for point particles?
Why is the gain of the op amp zero?
"Opening of data source in update mode failed (OGR error)" when exporting multiple shapefiles to GeoPackage using PyQGIS
Extra large seat belt extenders
How does Jesus cause the “falling and rising of many” in Luke 2:34?
How can the magnetometers in migratory birds' eyes sense the direction of the Earth's magnetic field?
"Intermediate value theorem" for manifolds
Why did all my surge protectors smoke when my home became ungrounded?
How to set vertical padding around the caption for table 3 (tabularray (longtblr)) similar to tables 1 and 2
How did Henry know that Jimmy was going to have him killed in Florida?
How can I encourage players to play optimally against bosses?
Function to return free memory in Sinclair ZX Spectrum BASIC?
~てくれる/もらう with non-action non-"favor" verbs e.g. わかってくれる, 喜んでもらう
Is there an algorithm to determine the scheduling of CNOTs for rotated surface codes?
How to improve the symbolic computation speed for plane geometry with multiple constraint conditions?
Why were the characters so careless when they raided Dracula's house?
Where does the voting occur in a 3 Computer system?
How can we attain ‘truth’ if all we can do is justification?
Was a bright orange shark discovered in Costa Rica?
Question feed |
17295 | https://www.khanacademy.org/districts-courses/grade-6-scps-pilot/x9de80188cb8d3de5:comparing-rational-numbers/x9de80188cb8d3de5:unit-5-topic-2/a/number-opposites-review | Number opposites review (article) | Topic 2 | Khan Academy
Skip to main content
If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains .kastatic.org and .kasandbox.org are unblocked.
Explore
Browse By Standards
Explore Khanmigo
Math: Pre-K - 8th grade
Math: High school & college
Math: Multiple grades
Math: Illustrative Math-aligned
Math: Eureka Math-aligned
Math: Get ready courses
Test prep
Science
Economics
Reading & language arts
Computing
Life skills
Social studies
Partner courses
Khan for educators
Select a category to view its courses
Search
AI for Teachers FreeDonateLog inSign up
Search for courses, skills, and videos
Skip to lesson content
Grade 6 Math (SCPS pilot)
Course: Grade 6 Math (SCPS pilot)>Unit 6
Lesson 2: Topic 2
Number opposites
Number opposites
Number opposites
Number opposites review
Negative symbol as opposite
Negative symbol as opposite
Number opposites challenge problems
Number opposites challenge
Districts>
Grade 6 Math (SCPS pilot)>
Comparing rational numbers>
Topic 2
© 2025 Khan Academy
Terms of usePrivacy PolicyCookie NoticeAccessibility Statement
Number opposites review
CCSS.Math: 6.NS.C.6, 6.NS.C.6.a
Google Classroom
Microsoft Teams
Review the opposites of numbers, and try some practice problems.
What is the opposite of a number?
The opposite of a number is the number on the other side of 0 on the number line, and the same distance from 0.
Example:
−5 is the opposite of 5.
−7−6−4−3−2−101234675−5
Ways to write the opposite of a number
We can write the opposite of a number using a negative sign.
Example:
The opposite of −2.2 can be written −(−2.2).
Want to learn more about number opposites? Check out this video.
Practice
Problem 1
Current
What number is the opposite of −524?
Check Explain
Want to try more problems like this? Check out these exercises:Number opposites
Number opposites challenge
Skip to end of discussions
Questions Tips & Thanks
Want to join the conversation?
Log in
Sort by:
Top Voted
Chengfu Zhao 6 years ago Posted 6 years ago. Direct link to Chengfu Zhao's post “It is so strange how "0" ...” more It is so strange how "0" doesn't have an opposite like -0
Answer Button navigates to signup page •14 comments Comment on Chengfu Zhao's post “It is so strange how "0" ...”
(26 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Elie T. 3 years ago Posted 3 years ago. Direct link to Elie T.'s post “That's because if there w...” more That's because if there was a 0 in the whole numbers world and a 0 in the decimal world, it would make sense, or at least for me it wouldn't.
3 comments Comment on Elie T.'s post “That's because if there w...”
(29 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Landon 2 years ago Posted 2 years ago. Direct link to Landon's post “what do I do when I finis...” more what do I do when I finish Khan Acadamy
Answer Button navigates to signup page •11 comments Comment on Landon's post “what do I do when I finis...”
(13 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Burhan A. Bijarani 2 years ago Posted 2 years ago. Direct link to Burhan A. Bijarani's post “u don't have to do it aga...” more u don't have to do it again
1 comment Comment on Burhan A. Bijarani's post “u don't have to do it aga...”
(10 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Volleyball4Christ14 2 years ago Posted 2 years ago. Direct link to Volleyball4Christ14's post “Question: Why is the form...” more Question: Why is the format for opposite question X = -Y or vise versa, because the equal signs means 'equal to' so how does that work? Also can I get upvoted please?
Answer Button navigates to signup page •Comment Button navigates to signup page
(16 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
KelAcademy 3 months ago Posted 3 months ago. Direct link to KelAcademy's post “It is not put with an equ...” more It is not put with an equal sign unless y is the opposite of x. You would not put the equal sign because they are not equal and the equation would not be true. It would be written as -3 is the opposite of 3 -x is the opposite of x.
Comment Button navigates to signup page
(1 vote)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Taylor Roberson 7 years ago Posted 7 years ago. Direct link to Taylor Roberson's post “Question 4 doesn't make s...” more Question 4 doesn't make sense to me. Can someone help me, please.
Answer Button navigates to signup page •1 comment Comment on Taylor Roberson's post “Question 4 doesn't make s...”
(8 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Kai 4 years ago Posted 4 years ago. Direct link to Kai's post “It's saying that the oppo...” more It's saying that the opposite of 6 is negative 6, but then what is the opposite of negative 6?
9 comments Comment on Kai's post “It's saying that the oppo...”
(5 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Sg. 2 years ago Posted 2 years ago. Direct link to Sg.'s post “what is a opposite of 0?” more what is a opposite of 0?
Answer Button navigates to signup page •6 comments Comment on Sg.'s post “what is a opposite of 0?”
(6 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
🅹🅾🅷🅽🅽🆈 2 years ago Posted 2 years ago. Direct link to 🅹🅾🅷🅽🅽🆈's post “Zero is neither negative ...” more Zero is neither negative or positive, so zeros opposite could be viewed as 0, or have no opposite at all.
Comment Button navigates to signup page
(9 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
amrutha.svg 2 years ago Posted 2 years ago. Direct link to amrutha.svg's post “why is the opposite of 0 ...” more why is the opposite of 0 always 0?
Answer Button navigates to signup page •Comment Button navigates to signup page
(7 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
Jennifer Stark 2 years ago Posted 2 years ago. Direct link to Jennifer Stark's post “0 is 0 to the right meani...” more 0 is 0 to the right meaning -0 is zero to the left or zero to the right because there is no 0 on the other side of the table. It is confusing so just remember 0 as 0
Comment Button navigates to signup page
(5 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Iqra Nadeem 2 years ago Posted 2 years ago. Direct link to Iqra Nadeem's post “So, how does this help in...” more So, how does this help in high school geometry since number opposite are a basic level in math?
Answer Button navigates to signup page •Comment Button navigates to signup page
(8 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
rylandjones7 3 months ago Posted 3 months ago. Direct link to rylandjones7's post “do your level math bro” more do your level math bro
Comment Button navigates to signup page
(2 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
fatimamustafa228 2 years ago Posted 2 years ago. Direct link to fatimamustafa228's post “When the number has a neg...” more When the number has a negative symbol infront of a negative symbol what does that mean for example - (-5)
Answer Button navigates to signup page •1 comment Comment on fatimamustafa228's post “When the number has a neg...”
(6 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
AnaghGenius 2 years ago Posted 2 years ago. Direct link to AnaghGenius's post “It means that it is the o...” more It means that it is the opposite of the negative number. Opposites of negative numbers are positive numbers
1 comment Comment on AnaghGenius's post “It means that it is the o...”
(5 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
Opal 2 years ago Posted 2 years ago. Direct link to Opal's post “Is there such thing as -∞...” more Is there such thing as -∞?
Answer Button navigates to signup page •4 comments Comment on Opal's post “Is there such thing as -∞...”
(5 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
David Severin 2 years ago Posted 2 years ago. Direct link to David Severin's post “Yes, on a number line, go...” more Yes, on a number line, going forever to the right (or up on a coordinate plane) is positive infinity and going forever to the left (or down on a coordinate plane) is negative infinity. So it is always in relation to a defined 0 or origin at (0,0).
6 comments Comment on David Severin's post “Yes, on a number line, go...”
(6 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Show more...
virinchi.janaswamy 2 years ago Posted 2 years ago. Direct link to virinchi.janaswamy's post “Bro to easy im going into...” more Bro to easy im going into 7th grade math
Answer Button navigates to signup page •4 comments Comment on virinchi.janaswamy's post “Bro to easy im going into...”
(8 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Answer
Show preview Show formatting options
Post answer
, a year ago Posted a year ago. Direct link to ,'s post “There is more than negati...” more There is more than negative numbers in 6th grade math
Comment Button navigates to signup page
(0 votes)
Upvote Button navigates to signup page
Downvote Button navigates to signup page
Flag Button navigates to signup page
more
Up next: video
Use of cookies
Cookies are small files placed on your device that collect information when you use Khan Academy. Strictly necessary cookies are used to make our site work and are required. Other types of cookies are used to improve your experience, to analyze how Khan Academy is used, and to market our service. You can allow or disallow these other cookies by checking or unchecking the boxes below. You can learn more in our cookie policy
Accept All Cookies Strictly Necessary Only
Cookies Settings
Privacy Preference Center
When you visit any website, it may store or retrieve information on your browser, mostly in the form of cookies. This information might be about you, your preferences or your device and is mostly used to make the site work as you expect it to. The information does not usually directly identify you, but it can give you a more personalized web experience. Because we respect your right to privacy, you can choose not to allow some types of cookies. Click on the different category headings to find out more and change our default settings. However, blocking some types of cookies may impact your experience of the site and the services we are able to offer.
More information
Allow All
Manage Consent Preferences
Strictly Necessary Cookies
Always Active
Certain cookies and other technologies are essential in order to enable our Service to provide the features you have requested, such as making it possible for you to access our product and information related to your account. For example, each time you log into our Service, a Strictly Necessary Cookie authenticates that it is you logging in and allows you to use the Service without having to re-enter your password when you visit a new page or new unit during your browsing session.
Functional Cookies
[x] Functional Cookies
These cookies provide you with a more tailored experience and allow you to make certain selections on our Service. For example, these cookies store information such as your preferred language and website preferences.
Targeting Cookies
[x] Targeting Cookies
These cookies are used on a limited basis, only on pages directed to adults (teachers, donors, or parents). We use these cookies to inform our own digital marketing and help us connect with people who are interested in our Service and our mission. We do not use cookies to serve third party ads on our Service.
Performance Cookies
[x] Performance Cookies
These cookies and other technologies allow us to understand how you interact with our Service (e.g., how often you use our Service, where you are accessing the Service from and the content that you’re interacting with). Analytic cookies enable us to support and improve how our Service operates. For example, we use Google Analytics cookies to help us measure traffic and usage trends for the Service, and to understand more about the demographics of our users. We also may use web beacons to gauge the effectiveness of certain communications and the effectiveness of our marketing campaigns via HTML emails.
Cookie List
Clear
[x] checkbox label label
Apply Cancel
Consent Leg.Interest
[x] checkbox label label
[x] checkbox label label
[x] checkbox label label
Reject All Confirm My Choices |
17296 | https://chem.libretexts.org/Courses/Arkansas_Northeastern_College/CH14133%3A_Chemistry_for_General_Education/05%3A_Molecules_and_Compounds/5.05%3A_Writing_Formulas_for_Ionic_Compounds | 5.5: Writing Formulas for Ionic Compounds - Chemistry LibreTexts
Skip to main content
Table of Contents menu
search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode
school Campus Bookshelves
menu_book Bookshelves
perm_media Learning Objects
login Login
how_to_reg Request Instructor Account
hub Instructor Commons
Search
Search this book
Submit Search
x
Text Color
Reset
Bright Blues Gray Inverted
Text Size
Reset
+-
Margin Size
Reset
+-
Font Type
Enable Dyslexic Font - [x]
Downloads expand_more
Download Page (PDF)
Download Full Book (PDF)
Resources expand_more
Periodic Table
Physics Constants
Scientific Calculator
Reference expand_more
Reference & Cite
Tools expand_more
Help expand_more
Get Help
Feedback
Readability
x
selected template will load here
Error
This action is not available.
chrome_reader_mode Enter Reader Mode
5: Molecules and Compounds
CH14133: Chemistry for General Education
{ }
{ "5.01:_Sugar_and_Salt" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.02:_Compounds_Display_Constant_Composition" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.03:_Chemical_Formulas-_How_to_Represent_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.04:_A_Molecular_View_of_Elements_and_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.05:_Writing_Formulas_for_Ionic_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.06:_Nomenclature-_Naming_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.07:_Naming_Ionic_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.08:_Naming_Molecular_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.09:_Naming_Acids" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.10:_Nomenclature_Summary" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "5.11:_Formula_Mass-_The_Mass_of_a_Molecule_or_Formula_Unit" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
{ "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_The_Chemical_World" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_Measurement_and_Problem_Solving" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_Matter_and_Energy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Atoms_and_Elements" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Molecules_and_Compounds" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Chemical_Composition" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Chemical_Reactions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Quantities_in_Chemical_Reactions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Electrons_in_Atoms_and_the_Periodic_Table" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Chemical_Bonding" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Gases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "12:_Liquids_Solids_and_Intermolecular_Forces" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "13:_Solutions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "14:_Acids_and_Bases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" }
Tue, 07 Jan 2020 16:33:50 GMT
5.5: Writing Formulas for Ionic Compounds
203074
203074
gberry@smail.anc.edu
{ }
Anonymous
Anonymous User
2
false
false
[ "article:topic", "showtoc:no", "transcluded:yes", "source-chem-47480" ]
[ "article:topic", "showtoc:no", "transcluded:yes", "source-chem-47480" ]
Search site Search Search Go back to previous article
Sign in
Username Password Sign in
Sign in
Sign in
Forgot password
Contents
1. Home
2. Campus Bookshelves
3. Arkansas Northeastern College
4. CH14133: Chemistry for General Education
5. 5: Molecules and Compounds
6. 5.5: Writing Formulas for Ionic Compounds
Expand/collapse global location
CH14133: Chemistry for General Education
Front Matter
1: The Chemical World
2: Measurement and Problem Solving
3: Matter and Energy
4: Atoms and Elements
5: Molecules and Compounds
6: Chemical Composition
7: Chemical Reactions
8: Quantities in Chemical Reactions
9: Electrons in Atoms and the Periodic Table
10: Chemical Bonding
11: Gases
12: Liquids, Solids, and Intermolecular Forces
13: Solutions
14: Acids and Bases
Back Matter
5.5: Writing Formulas for Ionic Compounds
Last updated Jan 7, 2020
Save as PDF
5.4: A Molecular View of Elements and Compounds
5.6: Nomenclature- Naming Compounds
Page ID 203074
( \newcommand{\kernel}{\mathrm{null}\,})
Table of contents
1. Learning Objectives
2. Example 5.5.1: Aluminum Nitride and Lithium Oxide
1. Solution
Example 5.5.2: The Crisscross Method for Lead (IV) Oxide
Solution
Exercise 5.5.2
Example 5.5.3: Sulfur Compound
Solution
Exercise 5.5.3
Polyatomic Ions
Writing Formulas for Ionic Compounds Containing Polyatomic Ions
Example 5.5.4: Calcium Nitrate
Solution
Example 5.5.5
Solution
Exercise 5.5.5
Recognizing Ionic Compounds
Method 1
Method 2
Example 5.5.6
Solution
Exercise 5.5.6
Summary
Learning Objectives
Write the correct formula for an ionic compound.
Recognize polyatomic ions in chemical formulas.
Ionic compounds do not exist as molecules. In the solid state, ionic compounds are in crystal lattice containing many ions each of the cation and anion. An ionic formula, like NaCl, is an empirical formula. This formula merely indicates that sodium chloride is made of an equal number of sodium and chloride ions. Sodium sulfide, another ionic compound, has the formula NaA 2S. This formula indicates that this compound is made up of twice as many sodium ions as sulfide ions. This section will teach you how to find the correct ratio of ions, so that you can write a correct formula.
If you know the name of a binary ionic compound, you can write its chemical formula. Start by writing the metal ion with its charge, followed by the nonmetal ion with its charge. Because the overall compound must be electrically neutral, decide how many of each ion is needed in order for the positive and negative charges to cancel each other out.
Example 5.5.1: Aluminum Nitride and Lithium Oxide
Write the formulas for aluminum nitride and lithium oxide.
Solution
Solution to Example 5.5.1| | Write the formula for aluminum nitride | Write the formula for lithium oxide |
---
| 1. Write the symbol and charge of the cation (metal) first and the anion (nonmetal) second. | AlA 3+NA 3− | LiA+OA 2− |
| 2. Use a multiplier to make the total charge of the cations and anions equal to each other. | total charge of cations = total charge of anions 1(3+) = 1(3-) +3 = -3 | total charge of cations = total charge of anions 2(1+) = 1(2-) +2 = -2 |
| 3. Use the multipliers as subscript for each ion. | AlA 1NA 1 | LiA 2OA 1 |
| 4. Write the final formula. Leave out all charges and all subscripts that are 1. | AlN | LiA 2O |
An alternative way to writing a correct formula for an ionic compound is to use the crisscross method. In this method, the numerical value of each of the ion charges is crossed over to become the subscript of the other ion. Signs of the charges are dropped.
Example 5.5.2: The Crisscross Method for Lead (IV) Oxide
Write the formula for lead (IV) oxide.
Solution
Solution to Example 5.5.2| Crisscross Method | Write the formula for lead (IV) oxide |
--- |
| 1. Write the symbol and charge of the cation (metal) first and the anion (nonmetal) second. | PbA 4+OA 2− |
| 2. Transpose only the number of the positive charge to become the subscript of the anion and the number only of the negative charge to become the subscript of the cation. | |
| 3. Reduce to the lowest ratio. | PbA 2OA 4 |
| 4. Write the final formula. Leave out all subscripts that are 1. | PbOA 2 |
Exercise 5.5.2
Write the chemical formula for an ionic compound composed of each pair of ions.
the calcium ion and the oxygen ion
the 2+ copper ion and the sulfur ion
the 1+ copper ion and the sulfur ion
Answer a:CaOAnswer b:CuSAnswer c:Cu 2 S
Be aware that ionic compounds are empirical formulas and so must be written as the lowest ratio of the ions.
Example 5.5.3: Sulfur Compound
Write the formula for sodium combined with sulfur.
Solution
Solution to Example 5.5.3| Crisscross Method | Write the formula for sodium combined with sulfur |
--- |
| 1. Write the symbol and charge of the cation (metal) first and the anion (nonmetal) second. | NaA+SA 2− |
| 2. Transpose only the number of the positive charge to become the subscript of the anion and the number only of the negative charge to become the subscript of the cation. | |
| 3. Reduce to the lowest ratio. | This step is not necessary. |
| 4. Write the final formula. Leave out all subscripts that are 1. | NaA 2S |
Exercise 5.5.3
Write the formula for each ionic compound.
sodium bromide
lithium chloride
magnesium oxide
Answer a:NaBrAnswer b:LiClAnswer c:MgO
Polyatomic Ions
Some ions consist of groups of atoms bonded together and have an overall electric charge. Because these ions contain more than one atom, they are called polyatomic ions. Polyatomic ions have characteristic formulas, names, and charges that should be memorized. For example, NO 3− is the nitrate ion; it has one nitrogen atom and three oxygen atoms and an overall 1− charge. Table 5.5.1 lists the most common polyatomic ions.
Table 5.5.1: Some Polyatomic Ions| Name | Formula |
--- |
| ammonium ion | NH 4+ |
| acetate ion | C 2 H 3 O 2− (also written CH 3 CO 2−) |
| carbonate ion | CO 3 2− |
| chromate ion | CrO 4 2− |
| dichromate ion | Cr 2 O 7 2− |
| hydrogen carbonate ion (bicarbonate ion) | HCO 3− |
| cyanide ion | CN− |
| hydroxide ion | OH− |
| nitrate ion | NO 3− |
| nitrite ion | NO 2− |
| permanganate ion | MnO 4− |
| phosphate ion | PO 4 3− |
| hydrogen phosphate ion | HPO 4 2− |
| dihydrogen phosphate ion | H 2 PO 4− |
| sulfate ion | SO 4 2− |
| hydrogen sulfate ion (bisulfate ion) | HSO 4− |
| sulfite ion | SO 3 2− |
The rule for constructing formulas for ionic compounds containing polyatomic ions is the same as for formulas containing monatomic (single-atom) ions: the positive and negative charges must balance. If more than one of a particular polyatomic ion is needed to balance the charge, the entire formula for the polyatomic ion must be enclosed in parentheses, and the numerical subscript is placed outside the parentheses. This is to show that the subscript applies to the entire polyatomic ion. An example is Ba(NO 3)2.
Writing Formulas for Ionic Compounds Containing Polyatomic Ions
Writing a formula for ionic compounds containing polyatomic ions also involves the same steps as for a binary ionic compound. Write the symbol and charge of the cation followed by the symbol and charge of the anion.
Example 5.5.4: Calcium Nitrate
Write the formula for calcium nitrate.
Solution
Solution to Example 5.5.4| Crisscross Method | Write the formula for calcium nitrate |
--- |
| 1. Write the symbol and charge of the cation (metal) first and the anion (nonmetal) second. | CaA 2+NOA 3A− |
| 2. Transpose only the number of the positive charge to become the subscript of the anion and the number only of the negative charge to become the subscript of the cation. | The 2+ charge on Ca becomes the subscript of NO3 and the 1- charge on NO3 becomes the subscript of Ca. |
| 3. Reduce to the lowest ratio. | CaA 1(NOA 3)A 2 |
| 4. Write the final formula. Leave out all subscripts that are 1. If there is only 1 of the polyatomic ion, leave off parentheses. | Ca(NOA 3)A 2 |
Example 5.5.5
Write the chemical formula for an ionic compound composed of the potassium ion and the sulfate ion.
Solution
Solution to Example 5.5.5| Explanation | Answer |
--- |
| Potassium ions have a charge of 1+, while sulfate ions have a charge of 2−. We will need two potassium ions to balance the charge on the sulfate ion, so the proper chemical formula is KA 2SOA 4. | KA 2SOA 4 |
Exercise 5.5.5
Write the chemical formula for an ionic compound composed of each pair of ions.
the magnesium ion and the carbonate ion
the aluminum ion and the acetate ion
Answer a:MgCOA 3Answer b:Al(CHA 3COO)A 3
Recognizing Ionic Compounds
There are two ways to recognize ionic compounds.
Method 1
Compounds between metal and nonmetal elements are usually ionic. For example, CaBrA 2 contains a metallic element (calcium, a group 2 [or 2A] metal) and a nonmetallic element (bromine, a group 17 [or 7A] nonmetal). Therefore, it is most likely an ionic compound (in fact, it is ionic). In contrast, the compound NOA 2 contains two elements that are both nonmetals (nitrogen, from group 15 [or 5A], and oxygen, from group 16 [or 6A]. It is not an ionic compound; it belongs to the category of covalent compounds discussed elsewhere. Also note that this combination of nitrogen and oxygen has no electric charge specified, so it is not the nitrite ion.
Method 2
Second, if you recognize the formula of a polyatomic ion in a compound, the compound is ionic. For example, if you see the formula Ba(NOA 3)A 2, you may recognize the “NOA 3” part as the nitrate ion, NOA 3A−. (Remember that the convention for writing formulas for ionic compounds is not to include the ionic charge.) This is a clue that the other part of the formula, Ba, is actually the BaA 2+ ion, with the 2+ charge balancing the overall 2− charge from the two nitrate ions. Thus, this compound is also ionic.
Example 5.5.6
Identify each compound as ionic or not ionic.
NaA 2O
PClA 3
NHA 4Cl
OFA 2
Solution
Solution to Example 5.5.6| Explanation | Answer |
--- |
| a. Sodium is a metal, and oxygen is a nonmetal. Therefore, NaA 2O is expected to be ionic via method 1. | NaA 2O, ionic |
| b. Both phosphorus and chlorine are nonmetals. Therefore, PClA 3 is not ionic via method 1 | PClA 3, not ionic |
| c. The NHA 4 in the formula represents the ammonium ion, NHA 4A+, which indicates that this compound is ionic via method 2 | NHA 4Cl, ionic |
| d. Both oxygen and fluorine are nonmetals. Therefore, OFA 2 is not ionic via method 1 | OFA 2, not ionic |
Exercise 5.5.6
Identify each compound as ionic or not ionic.
NA 2O
FeClA 3
(NHA 4)A 3POA 4
SOClA 2
Answer a:not ionicAnswer b:ionicAnswer c:ionicAnswer d:not ionic
Summary
Formulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
5.5: Writing Formulas for Ionic Compounds is shared under a not declared license and was authored, remixed, and/or curated by LibreTexts.
5.5: Writing Formulas for Ionic Compounds by Henry Agnew, Marisa Alviar-Agnew is licensed CK-12. Original source:
Toggle block-level attributions
Back to top
5.4: A Molecular View of Elements and Compounds
5.6: Nomenclature- Naming Compounds
Was this article helpful?
Yes
No
Recommended articles
5.5: Writing Formulas for Ionic CompoundsFormulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
7.3: Writing Formulas for Ionic CompoundsFormulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
4.6: Writing Formulas for Ionic CompoundsFormulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
6.7: Writing Formulas for Ionic CompoundsFormulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
5.5: Writing Formulas for Ionic CompoundsFormulas for ionic compounds contain the symbols and number of each atom present in a compound in the lowest whole number ratio.
Article typeSection or PageShow Page TOCno on pageTranscludedyes
Tags
source-chem-47480
© Copyright 2025 Chemistry LibreTexts
Powered by CXone Expert ®
?
The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org.
Support Center
How can we help?
Contact Support Search the Insight Knowledge Base Check System Status×
contents readability resources tools
☰
5.4: A Molecular View of Elements and Compounds
5.6: Nomenclature- Naming Compounds |
17297 | https://www.reddit.com/r/algorithms/comments/1eidwv/when_log_is_used_on_wikipedia_which_base_is/ | When Log is used on Wikipedia, which base is implied? : r/algorithms
Skip to main contentWhen Log is used on Wikipedia, which base is implied? : r/algorithms
Open menu Open navigationGo to Reddit Home
r/algorithms A chip A close button
Log InLog in to Reddit
Expand user menu Open settings menu
Go to algorithms
r/algorithms•13 yr. ago
[deleted]
When Log is used on Wikipedia, which base is implied?
For example, Graham scan mentions that the time complexity for this algorithm is n log n. Which base is in log here? Is it consistent across all of wikipedia? What about other sources? Often no base is mentioned in articles on the internet.
Read more
Share
Related Answers Section
Related Answers
Base of log in Wikipedia algorithms
Hidden categories on Wikipedia pages
Best algorithms for real-time data processing
Real-world applications of graph algorithms
Exploring the future of quantum algorithms
New to Reddit?
Create your account and connect with a world of communities.
Continue with Google Continue with Google. Opens in new tab
Continue with Email
Continue With Phone Number
By continuing, you agree to ourUser Agreementand acknowledge that you understand thePrivacy Policy.
Public
Anyone can view, post, and comment to this community
0 0
Top Posts
Reddit reReddit: Top posts of May 17, 2013
Reddit reReddit: Top posts of May 2013
Reddit reReddit: Top posts of 2013
Reddit RulesPrivacy PolicyUser AgreementAccessibilityReddit, Inc. © 2025. All rights reserved.
Expand Navigation Collapse Navigation |
17298 | https://www.progressfocused.com/2024/07/the-improbability-principle.html | Skip to main content
PROGRESS WILL ALWAYS REMAIN BOTH NECESSARY AND POSSIBLE
=A blog by Coert Visser=
The Improbability Principle: Understanding Coincidence and Chance in Everyday Life
An old woman had just lost her husband. I spoke to her on the phone. While I was talking to her, she mentioned that a butterfly flew into the room. The butterfly landed on her late husband's watch in front of her. She let out a cry of wonder and said, “This can't be a coincidence! The butterfly was his favorite animal. He comes to greet me.”
I could understand her reaction and did not contradict her. But David J. Hand's book “The Improbability Principle” offers insight into why events we think cannot be a coincidence are more common than we think and why our intuitive responses are often misleading.
The five principles of improbability
Hand introduces the improbability principle: the principle that extremely improbable events occur regularly due to underlying mathematical laws. He explains this on the basis of five principles:
| | | |
---
| The law of inevitability | Something has to happen if there are enough opportunities. | Suppose you roll a die a thousand times, then it is almost certain that there will be a series of six identical outcomes. Although any particular sequence is highly unlikely, it is inevitable that such a sequence will eventually occur given enough attempts. |
| The law of large numbers | With enough attempts, rare events will occur. | So many people live in a big city that almost every day there is someone with an extremely rare disease. |
| The law of selection | We mainly notice the most striking events and forget the mundane ones. | The news often reports about plane crashes, but not about the thousands of safe flights. This makes plane crashes appear to occur more often than they actually do. |
| The law of leverage of probability | Small changes can cause big differences in probabilities. | In a lottery with one ticket, the chance of winning is minimal, but if you buy a thousand tickets, your chance is considerably greater. Although the chance still remains small, the increase is significant. |
| The law of 'almost enough' | Similar events are often considered identical. | Different forms of lottery winnings are often seen as one type of event, which makes them seem more common. |
Correcting misconceptions
Understanding these principles is important because it helps us better deal with seemingly impossible events.
Avoiding incorrect conclusions: When we experience a very unlikely event, we tend to think that there must be a deeper meaning or supernatural force behind it. Hand's principles show that such events are often simply the result of probability on a large scale.
Promoting rational thinking: By understanding how these principles work, we can deal more rationally with chance and uncertainty in our daily lives.
Reducing superstition: Knowledge of these principles can help reduce superstition and irrational beliefs that often arise from misunderstood coincidences.
Better decision-making: In many professions, from finance to medicine, an understanding of probability is essential. These principles can help you make better, more informed decisions.
The extreme improbability of everyday specific outcomes
When you think about it straight, any specific outcome in life is extremely unlikely. Let's illustrate this with an example:
Herman is 55 years old, married, has 2 daughters, lives in Boulder, works as a project manager in IT, is a marathon runner, has a dog, and plays the piano. If we go back 40 years, when Herman was 15 and at school in Omaha, how likely would this specific life course have been? To calculate the probability of this exact situation, we must consider the probability of each individual element: the probability that he lives to the age of 55, the probability that he marries, the probability that he has exactly two children, the probability that both children are girls, the chance that he will live in Boulder (quite unlikely for someone from Omaha), the chance that he will end up in the IT sector, the chance that he will specifically become a project manager, the chance that he will become a marathon runner, the chance that he will have a dog, the chance that he plays the piano. Each of these probabilities is less than 1, and some are significantly smaller. To calculate the total probability of this exact life course, we must multiply all these probabilities together. The result is an unbelievably small number. We can increase the improbability even further by adding more specific details: the brand of car he drives, his favorite holiday destination, his shoe size, etc. With each addition, the probability of this exact combination becomes even smaller.
► This leads to an astonishing conclusion: any precise condition in our lives is, statistically speaking, extremely unlikely if we try to calculate its probability years in advance.
Why is this insight important?
Perspective on 'coincidence': It helps us understand why events that we consider 'coincidental' or 'impossible' are in fact very normal in the larger context of all possible outcomes.
Appreciation for uniqueness: It underlines how unique everyone's path in life is without having to attribute supernatural meaning to it.
Flexibility in future thinking: It can help us to be less rigid in our expectations for the future, knowing that any specific prediction is unlikely to come true.
Understanding statistical principles: It illustrates how the combination of many independent events leads to extremely small probabilities, an important concept in probability theory.
Promote critical thinking: It encourages us to think critically about how we interpret 'unlikely' events in our daily lives.
This paradox of specific outcomes fits in seamlessly with Hand's improbability principle. It shows how the law of large numbers (there are billions of people, each with their own unique life courses), the law of selection (we focus on one specific outcome), and the law of 'almost enough' (we see similar life courses as'the same') together explain why our lives are simultaneously unique and inevitable.
Conclusion
David J. Hand's “Improbability Principle,” together with the insight that each specific life course is statistically extremely improbable, offers a fascinating perspective on chance and coincidence in our daily lives. These principles help us to:
Deal more rationally with seemingly impossible coincidences.
To look more critically at claims of predestination or supernatural intervention or abilities
To appreciate the uniqueness of everyone's life path without assigning excessive meaning to it
Looking to the future more flexibly and openly
By recognizing that unlikely events are a natural and inevitable part of our reality, we can develop a deeper understanding of the true nature of chance and probability. This allows us to look at the world around us with more wonder and sobriety, knowing that every moment, no matter how ordinary or extraordinary it seems, is part of an infinitely complex tapestry of possibilities.
The butterfly
Looking back on my conversation with the grieving widow, I wonder if I did the right thing by not contradicting her interpretation. The improbability principle teaches us to look critically at coincidences but also to understand the human need for meaning. In this case, I chose empathy over rationality.
References:
Hand, D.J. (2014). The Improbability Principle: Why Coincidences, Miracles, and Rare Events Happen Every Day . Farrar, Straus and Giroux.
Coincidence
Improbability Principle
Get link
Facebook
X
Pinterest
Email
Other Apps
Comments
Post a Comment
Original text
Rate this translation
Your feedback will be used to help improve Google Translate |
17299 | https://www.masterclass.com/articles/what-is-consumer-surplus | Skip To Main Content
MasterClass logo
|
Articles
Get
At Work
Log In
Get
All
Community and Government
Wellness
Design & Style
Arts & Entertainment
Writing
Sports & Gaming
Science & Tech
Music
Food
Home & Lifestyle
Business
Community and Government
Consumer Surplus Definition: Examples of Consumer Surplus
Written by MasterClass
Last updated: Aug 31, 2022 • 4 min read
The positive feeling that you get when you score a great deal is something that economists study and measure using graphs. It’s called consumer surplus, and it’s equal to the difference between the highest price you would be willing to pay for something, and the price that you actually paid.
Learn From the Best
Teaches Conservation
Teaches Gardening
Teaches Tracing Your Roots Through Food
Chris Voss and Noted Co-Experts
Teaches Relational Intelligence
Teaches Writing for Social Change
Mastering the Markets
Teaches Yoga Foundations
Teaches the Runner’s Mindset
With Noted Experts
José Andrés with Special Guest Chefs
A Documentary by Artist JR
With Former CIA Officers
Teach Campaign Strategy and Messaging
Teaches Economics and Society
Teaches U.S. Presidential History and Leadership
Pharrell Williams and Noted Co-Instructors
Gloria Steinem and Noted Co-Instructors
Teaches Philosophy
Teaches Creating Change
Lessons from Influential Black Voices
Teaches the Power of Resilience
Teaches Inclusive Leadership
Teaches Authentic Leadership
Teach Diplomacy
Teaches Impactful Giving
Independent Thinking and Media’s Invisible Powers
A Documentary by Andrea Nevins
With Professor Jeffrey Pfeffer
Icons and Their Influences
Teaches Conservation
Teaches Gardening
Teaches Tracing Your Roots Through Food
Chris Voss and Noted Co-Experts
Teaches Relational Intelligence
Teaches Writing for Social Change
Mastering the Markets
Teaches Yoga Foundations
Teaches the Runner’s Mindset
With Noted Experts
José Andrés with Special Guest Chefs
A Documentary by Artist JR
With Former CIA Officers
Teach Campaign Strategy and Messaging
Teaches Economics and Society
Teaches U.S. Presidential History and Leadership
Pharrell Williams and Noted Co-Instructors
Gloria Steinem and Noted Co-Instructors
Teaches Philosophy
Teaches Creating Change
Lessons from Influential Black Voices
Teaches the Power of Resilience
Teaches Inclusive Leadership
Teaches Authentic Leadership
Teach Diplomacy
Teaches Impactful Giving
Independent Thinking and Media’s Invisible Powers
A Documentary by Andrea Nevins
With Professor Jeffrey Pfeffer
Icons and Their Influences
Get
What Is Consumer Surplus?
Consumer surplus, also known as consumer’s surplus or social surplus, is the difference between the actual price a consumer paid for a product and the maximum price they were willing to pay. Put simply, consumer surplus is the benefit consumers feel when buying something at a lower price than expected.
The microeconomics concept of consumer surplus relies on the assumption that utility (consumer satisfaction) is measurable, and that marginal utility (the additional satisfaction derived from purchasing additional units of a product) decreases with quantity of units purchased. Under the law of diminishing marginal utility, the total utility of a good exceeds the total market value, since consumers would be willing to pay the market price even at the lowest utility; any extra utility is considered consumer surplus.
Meet One of Your New Instructors
Get
Get
History of Consumer Surplus
Consumer surplus is one half of the concept of economic surplus first put forward by French engineer and economist Jules Dupuit in 1844 and later popularized by Alfred Marshall, giving it the name, Marshallian surplus. Economic surplus, or total surplus, is the combination of consumer surplus and producer surplus (the amount producers benefit by selling goods at a higher price).
The concept of consumer surplus was originally used in welfare economics, to measure the benefit of public goods such as roads and bridges. In the twentieth century, consumer surplus fell out of favor as economists questioned the measurability of utility.
How to Calculate Consumer Surplus
The basic formula to find consumer surplus is CS = ½ (base) (height), with the equilibrium quantity being the base, and price difference between the high price and the equilibrium price being the height. Economists in a competitive market track this information in a graph called the demand curve. Here’s how to plot this information:
Market price: On the y-axis (vertical axis) of the graph, plot the rising market price of the product.
Market quantity: On the x-axis (horizontal axis) of the graph, plot the amount of goods available in the marketplace.
Demand curve: Track the demand by plotting at the price at which goods were sold in conjunction with the amount of product available. This line that slopes downward from left to right is the demand curve. This follows the law of diminishing marginal utility, which states that as consumption increases, the marginal utility declines.
Supply curve: Determine the supply curve by plotting the price-to-quantity relationship from the point of view of the seller or producer. This line ascends from left to right.
Market equilibrium: The equilibrium price shows where the price matches the demand. The equilibrium quantity is a horizontal line showing where there is neither a shortage or surplus of goods. Where these two factors meet tells you where there is a balance between the price and the demand. On the graph, it appears where the supply curve and the demand curve cross, and is referred to as the equilibrium point.
Consumer surplus: Purchases made above the equilibrium price, but below the demand curve, form a triangle whose area is consumer surplus. The consumer surplus formula, therefore, is based on the area of a triangle: CS = ½ (base) (height).
Deadweight loss: When the socially optimal amount of a good is not produced, this is considered a deadweight loss. This region will be visible on the graph if the actual quantity produced is less than the equilibrium quantity. It will reduce the size of both consumer surplus and producer surplus.
3 Examples of Consumer Surplus
Here’s how consumer surplus plays out in real life.
Airline tickets: Airline ticket prices change rapidly, with countless sites dedicated to guaranteeing consumers the lowest price. If you’re willing to pay $500 for a flight from New York to Los Angeles, but end up paying $300, you receive $200 in consumer surplus. Sellers, however, know that you are willing to pay more for a flight, and will increase ticket prices before important dates like Thanksgiving or summer vacation, thereby turning consumer surplus into producer surplus.
Gasoline: People often drive further to guarantee a lower price per gallon to refill their car’s gas tank, but they would be willing to pay more to avoid running out of gas. The money you save by driving to a gas station with lower prices is consumer surplus. For example, if gas costs $5/gallon in an urban area and $4/gallon out of town, your consumer surplus is $1/gallon. Goods like gasoline also can have a ceiling price, which caps how much companies can charge for goods. Price ceilings typically apply to normal goods like gas, food, or medicine. A price ceiling can cause deadweight loss, reducing consumer and producer surplus.
Cell phones: A cell phone provides immense utility to consumers: communication with the outside world. Therefore, consumers are often willing to pay more than the market price to secure a cell phone—even if the market price is high. According to the law of diminishing marginal utility, however, consumer satisfaction will decrease with each subsequent purchase of a replacement phone.
Learn More
Get the MasterClass Annual Membership for exclusive access to video lessons taught by the world’s best, including Paul Krugman, Doris Kearns Goodwin, Ron Finley, Jane Goodall, and more.
Thanks for signing up!
We’ll let you know about new instructors, classes, and promotions.
Featured MasterClass Instructor
Paul Krugman
Nobel Prize-winning economist Paul Krugman teaches you the economic theories that drive history, policy, and help explain the world around you.
Explore the class
Level Up Your Team
See why leading organizations rely on MasterClass for learning & development.
Contact Sales
Learn More
Community and Government
Social Criticism: Types of Social Criticism
Dec 13, 2022
Community and Government
Learn About Gross National Product: Definition and Formula for Calculating GNP
Oct 12, 2022
Community and Government
How to Calculate Price Elasticity of Demand
Oct 12, 2022
Community and Government
Guide to Capitalism: 4 Characteristics of Capitalism
Oct 12, 2022
Community and Government
Stocks vs. Bonds: Compare the Two Types of Investments
Oct 12, 2022
Community and Government
How to Distinguish Common Stock From Preferred Stock
Oct 12, 2022
Other articles
Types of Asparagus: 6 Popular Asparagus Varieties
Aug 10, 2021
How to Hang Curtains Without Drilling: 3 Easy Methods
May 5, 2022
Claude Debussy: A Guide to Claude Debussy’s Life and Music
Jun 25, 2021
How to Play Basketball: Explore the Basics of Basketball
Jun 7, 2021
How to Write a Horror Story in 7 Steps
Dec 8, 2021
Daniel Pink’s ABC’s of Selling: Inside the 3 Key Sales Qualities
Dec 10, 2021 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.