In statistics, the Neyman–Pearson lemma describes the existence and uniqueness of the likelihood ratio as a uniformly most powerful test in certain contexts. It was introduced by Jerzy Neyman and Egon Pearson in a paper in 1933. The Neyman–Pearson lemma is part of the Neyman–Pearson theory of statistical testing, which introduced concepts such as errors of the second kind, power function, and inductive behavior. The previous Fisherian theory of significance testing postulated only one hypothesis. By introducing a competing hypothesis, the Neyman–Pearsonian flavor of statistical testing allows investigating the two types of errors. The trivial cases where one always rejects or accepts the null hypothesis are of little interest, but they do show that one must not relinquish control over one type of error while calibrating the other. Neyman and Pearson accordingly restricted their attention to the class of all level-$\alpha$ tests while subsequently minimizing the type II error, traditionally denoted by $\beta$. Their seminal paper of 1933, including the Neyman–Pearson lemma, comes at the end of this endeavor, not only showing the existence of tests with the most power that retain a prespecified level of type I error ($\alpha$), but also providing a way to construct such tests. The Karlin–Rubin theorem extends the Neyman–Pearson lemma to settings involving composite hypotheses with monotone likelihood ratios.
== Statement ==
Consider a test with hypotheses $H_0:\theta=\theta_0$ and $H_1:\theta=\theta_1$, where the probability density function (or probability mass function) is $\rho(x\mid\theta_i)$ for $i=0,1$.
For any hypothesis test with rejection set $R$, and any $\alpha\in[0,1]$, we say that it satisfies condition $P_\alpha$ if
$$\alpha=\Pr_{\theta_0}(X\in R)$$
That is, the test has size $\alpha$ (that is, the probability of falsely rejecting the null hypothesis is $\alpha$).
Additionally, there exists $\eta\geq 0$ such that
$$x\in R\smallsetminus A\implies\rho(x\mid\theta_1)>\eta\,\rho(x\mid\theta_0)$$
$$x\in R^c\smallsetminus A\implies\rho(x\mid\theta_1)<\eta\,\rho(x\mid\theta_0)$$
where $A$ is a negligible set in both the $\theta_0$ and $\theta_1$ cases:
$$\Pr_{\theta_0}(X\in A)=\Pr_{\theta_1}(X\in A)=0.$$
That is, we have a strict likelihood ratio test, except on a negligible subset.
For any $\alpha\in[0,1]$, let the set of level $\alpha$ tests be the set of all hypothesis tests with size at most $\alpha$. That is, letting its rejection set be $R$, we have $\Pr_{\theta_0}(X\in R)\leq\alpha$.
In practice, the likelihood ratio is often used directly to construct tests — see likelihood-ratio test. However it can also be used to suggest particular test-statistics that might be of interest or to suggest simplified tests — for this, one considers algebraic manipulation of the ratio to see if there are key statistics in it related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).
== Example ==
Let $X_1,\dots,X_n$ be a random sample from the $\mathcal{N}(\mu,\sigma^2)$ distribution where the mean $\mu$ is known, and suppose that we wish to test $H_0:\sigma^2=\sigma_0^2$ against $H_1:\sigma^2=\sigma_1^2$. The likelihood for this set of normally distributed data is
$$\mathcal{L}\left(\sigma^2\mid\mathbf{x}\right)\propto\left(\sigma^2\right)^{-n/2}\exp\left\{-\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}\right\}.$$
We can compute the likelihood ratio to find the key statistic in this test and its effect on the test's outcome:
$$\Lambda(\mathbf{x})=\frac{\mathcal{L}\left(\sigma_0^2\mid\mathbf{x}\right)}{\mathcal{L}\left(\sigma_1^2\mid\mathbf{x}\right)}=\left(\frac{\sigma_0^2}{\sigma_1^2}\right)^{-n/2}\exp\left\{-\frac{1}{2}(\sigma_0^{-2}-\sigma_1^{-2})\sum_{i=1}^{n}(x_i-\mu)^2\right\}.$$
This ratio depends on the data only through $\sum_{i=1}^{n}(x_i-\mu)^2$. Therefore, by the Neyman–Pearson lemma, the most powerful test of this type of hypothesis for this data will depend only on $\sum_{i=1}^{n}(x_i-\mu)^2$. Also, by inspection, we can see that if $\sigma_1^2>\sigma_0^2$, then $\Lambda(\mathbf{x})$ is a decreasing function of $\sum_{i=1}^{n}(x_i-\mu)^2$. So we should reject $H_0$ if $\sum_{i=1}^{n}(x_i-\mu)^2$ is sufficiently large. The rejection threshold depends on the size of the test. In this example, the test statistic can be shown to be a scaled chi-square distributed random variable and an exact critical value can be obtained.
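As a rough illustration of this construction (not part of the original presentation), the following sketch rejects $H_0$ when $\sum_i (x_i-\mu)^2$ exceeds its $(1-\alpha)$ quantile under $H_0$. The exact critical value comes from a scaled chi-square distribution; here, to stay within the standard library, the cutoff is estimated by Monte Carlo simulation under $H_0$. The function name and all parameters are hypothetical.

```python
import math
import random

random.seed(0)

def np_variance_test(xs, mu, sigma0_sq, alpha=0.05, n_sims=20000):
    """Most powerful test of H0: sigma^2 = sigma0_sq against a larger
    alternative variance. By the Neyman-Pearson lemma, reject H0 when
    T = sum((x_i - mu)^2) is large; the cutoff is the (1 - alpha)
    quantile of T under H0, estimated here by Monte Carlo."""
    n = len(xs)
    t_obs = sum((x - mu) ** 2 for x in xs)
    sims = sorted(
        sum(random.gauss(0.0, math.sqrt(sigma0_sq)) ** 2 for _ in range(n))
        for _ in range(n_sims)
    )
    cutoff = sims[int((1 - alpha) * n_sims)]
    return t_obs, cutoff, t_obs > cutoff

# Data drawn with true variance 4 should reject H0: sigma^2 = 1.
data = [random.gauss(0.0, 2.0) for _ in range(30)]
t, c, reject = np_variance_test(data, mu=0.0, sigma0_sq=1.0)
```

Note that the same threshold works for every alternative $\sigma_1^2>\sigma_0^2$, which is why this test is uniformly most powerful for the one-sided composite alternative.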
== Application in economics ==
A variant of the Neyman–Pearson lemma has found an application in the seemingly unrelated domain of the economics of land value. One of the fundamental problems in consumer theory is calculating the demand function of the consumer given the prices. In particular, given a heterogeneous land-estate, a price measure over the land, and a subjective utility measure over the land, the consumer's problem is to calculate the best land parcel that they can buy – i.e. the land parcel with the largest utility, whose price is at most their budget. It turns out that this problem is very similar to the problem of finding the most powerful statistical test, and so the Neyman–Pearson lemma can be used.
== Uses in electrical engineering ==
The Neyman–Pearson lemma is quite useful in electronics engineering, namely in the design and use of radar systems, digital communication systems, and in signal processing systems.
In radar systems, the Neyman–Pearson lemma is used in first setting the rate of missed detections to a desired (low) level, and then minimizing the rate of false alarms, or vice versa.
Neither false alarms nor missed detections can be driven to arbitrarily low rates, and in particular not to zero. The same applies to many systems in signal processing.
== Uses in particle physics ==
The Neyman–Pearson lemma is applied to the construction of analysis-specific likelihood-ratios, used to e.g. test for signatures of new physics against the nominal Standard Model prediction in proton–proton collision datasets collected at the LHC.
== Discovery of the lemma ==
Neyman wrote about the discovery of the lemma as follows. Paragraph breaks have been inserted.
I can point to the particular moment when I understood how to formulate the undogmatic problem of the most powerful test of a simple statistical hypothesis against a fixed simple alternative. At the present time [probably 1968], the problem appears entirely trivial and within easy reach of a beginning undergraduate. But, with a degree of embarrassment, I must confess that it took something like half a decade of combined effort of E. S. P. [Egon Pearson] and myself to put things straight.
The solution of the particular question mentioned came on an evening when I was sitting alone in my room at the Statistical Laboratory of the School of Agriculture in Warsaw, thinking hard on something that should have been obvious long before. The building was locked up and, at about 8 p.m., I heard voices outside calling me. This was my wife, with some friends, telling me that it was time to go to a movie.
My first reaction was that of annoyance. And then, as I got up from my desk to answer the call, I suddenly understood: for any given critical region and for any given alternative hypothesis, it is possible to calculate the probability of the error of the second kind; it is represented by this particular integral. Once this is done, the optimal critical region would be the one which minimizes this same integral, subject to the side condition concerned with the probability of the error of the first kind. We are faced with a particular problem of the calculus of variation, probably a simple problem.
These thoughts came in a flash, before I reached the window to signal to my wife. The incident is clear in my memory, but I have no recollections about the movie we saw. It may have been Buster Keaton.
== See also ==
Error exponents in hypothesis testing
F-test
Lemma
Wilks' theorem
== References ==
E. L. Lehmann, Joseph P. Romano, Testing statistical hypotheses, Springer, 2008, p. 60
== External links ==
Cosma Shalizi gives an intuitive derivation of the Neyman–Pearson Lemma using ideas from economics
cnx.org: Neyman–Pearson criterion
In probability theory and statistics, the cumulative distribution function (CDF) of a real-valued random variable $X$, or just distribution function of $X$, evaluated at $x$, is the probability that $X$ will take a value less than or equal to $x$.
Every probability distribution supported on the real numbers, discrete or "mixed" as well as continuous, is uniquely identified by a right-continuous monotone increasing function (a càdlàg function) $F\colon\mathbb{R}\to[0,1]$ satisfying $\lim_{x\to-\infty}F(x)=0$ and $\lim_{x\to\infty}F(x)=1$.
In the case of a scalar continuous distribution, it gives the area under the probability density function from negative infinity to $x$. Cumulative distribution functions are also used to specify the distribution of multivariate random variables.
== Definition ==
The cumulative distribution function of a real-valued random variable $X$ is the function given by
$$F_X(x)=\operatorname{P}(X\leq x),$$
where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$.
The probability that $X$ lies in the semi-closed interval $(a,b]$, where $a<b$, is therefore
$$\operatorname{P}(a<X\leq b)=F_X(b)-F_X(a).$$
In the definition above, the "less than or equal to" sign, "≤", is a convention, not a universally used one (e.g. Hungarian literature uses "<"), but the distinction is important for discrete distributions. The proper use of tables of the binomial and Poisson distributions depends upon this convention. Moreover, important formulas like Paul Lévy's inversion formula for the characteristic function also rely on the "less than or equal" formulation.
If treating several random variables $X,Y,\ldots$ etc., the corresponding letters are used as subscripts, while, if treating only one, the subscript is usually omitted. It is conventional to use a capital $F$ for a cumulative distribution function, in contrast to the lower-case $f$ used for probability density functions and probability mass functions. This applies when discussing general distributions: some specific distributions have their own conventional notation, for example the normal distribution uses $\Phi$ and $\phi$ instead of $F$ and $f$, respectively.
The probability density function of a continuous random variable can be determined from the cumulative distribution function by differentiating using the Fundamental Theorem of Calculus; i.e. given $F(x)$,
$$f(x)=\frac{dF(x)}{dx}$$
as long as the derivative exists.
The CDF of a continuous random variable $X$ can be expressed as the integral of its probability density function $f_X$ as follows:
$$F_X(x)=\int_{-\infty}^{x}f_X(t)\,dt.$$
In the case of a random variable $X$ whose distribution has a discrete component at a value $b$,
$$\operatorname{P}(X=b)=F_X(b)-\lim_{x\to b^{-}}F_X(x).$$
If $F_X$ is continuous at $b$, this equals zero and there is no discrete component at $b$.
== Properties ==
Every cumulative distribution function
F
X
{\displaystyle F_{X}}
is non-decreasing: p. 78 and right-continuous,: p. 79 which makes it a càdlàg function. Furthermore,
lim
x
→
−
∞
F
X
(
x
)
=
0
,
lim
x
→
+
∞
F
X
(
x
)
=
1.
{\displaystyle \lim _{x\to -\infty }F_{X}(x)=0,\quad \lim _{x\to +\infty }F_{X}(x)=1.}
Every function with these three properties is a CDF, i.e., for every such function, a random variable can be defined such that the function is the cumulative distribution function of that random variable.
If $X$ is a purely discrete random variable, then it attains values $x_1,x_2,\ldots$ with probability $p_i=p(x_i)$, and the CDF of $X$ will be discontinuous at the points $x_i$:
$$F_X(x)=\operatorname{P}(X\leq x)=\sum_{x_i\leq x}\operatorname{P}(X=x_i)=\sum_{x_i\leq x}p(x_i).$$
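The discrete case above amounts to a running sum of the probability mass function; a minimal sketch (the die example is illustrative, not from the original text):

```python
from fractions import Fraction

# Fair die: pmf p(x_i) = 1/6 on {1,...,6}; the CDF is the running sum of
# the pmf, jumping by p(x_i) at each support point x_i.
pmf = {x: Fraction(1, 6) for x in range(1, 7)}

def cdf(x):
    # F(x) = sum of p(x_i) over support points x_i <= x
    return sum(p for xi, p in pmf.items() if xi <= x)

assert cdf(0) == 0                    # below the support
assert cdf(3) == Fraction(1, 2)       # jump points 1, 2, 3 contribute
assert cdf(3.5) == Fraction(1, 2)     # constant between support points
assert cdf(6) == 1                    # whole support covered
```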
If the CDF $F_X$ of a real-valued random variable $X$ is continuous, then $X$ is a continuous random variable; if furthermore $F_X$ is absolutely continuous, then there exists a Lebesgue-integrable function $f_X(x)$ such that
$$F_X(b)-F_X(a)=\operatorname{P}(a<X\leq b)=\int_{a}^{b}f_X(x)\,dx$$
for all real numbers $a$ and $b$. The function $f_X$ is equal to the derivative of $F_X$ almost everywhere, and it is called the probability density function of the distribution of $X$.
If $X$ has finite L1-norm, that is, the expectation of $|X|$ is finite, then the expectation is given by the Riemann–Stieltjes integral
$$\mathbb{E}[X]=\int_{-\infty}^{\infty}t\,dF_X(t)$$
and for any $x\geq 0$,
$$x(1-F_X(x))\leq\int_{x}^{\infty}t\,dF_X(t)$$
as well as
$$xF_X(-x)\leq\int_{-\infty}^{-x}(-t)\,dF_X(t)$$
as shown in the diagram (consider the areas of the two red rectangles and their extensions to the right or left up to the graph of $F_X$). In particular, we have
$$\lim_{x\to-\infty}xF_X(x)=0,\quad\lim_{x\to+\infty}x(1-F_X(x))=0.$$
In addition, the (finite) expected value of the real-valued random variable $X$ can be defined on the graph of its cumulative distribution function as illustrated by the drawing in the definition of expected value for arbitrary real-valued random variables.
== Examples ==
As an example, suppose $X$ is uniformly distributed on the unit interval $[0,1]$.
Then the CDF of $X$ is given by
$$F_X(x)=\begin{cases}0&:\ x<0\\x&:\ 0\leq x\leq 1\\1&:\ x>1\end{cases}$$
Suppose instead that $X$ takes only the discrete values 0 and 1, with equal probability.
Then the CDF of $X$ is given by
$$F_X(x)=\begin{cases}0&:\ x<0\\1/2&:\ 0\leq x<1\\1&:\ x\geq 1\end{cases}$$
Suppose $X$ is exponentially distributed. Then the CDF of $X$ is given by
$$F_X(x;\lambda)=\begin{cases}1-e^{-\lambda x}&x\geq 0,\\0&x<0.\end{cases}$$
Here $\lambda>0$ is the parameter of the distribution, often called the rate parameter.
Suppose $X$ is normally distributed. Then the CDF of $X$ is given by
$$F(t;\mu,\sigma)=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{t}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\,dx.$$
Here the parameter $\mu$ is the mean or expectation of the distribution; and $\sigma$ is its standard deviation.
A table of the CDF of the standard normal distribution is often used in statistical applications, where it is named the standard normal table, the unit normal table, or the Z table.
Suppose $X$ is binomially distributed. Then the CDF of $X$ is given by
$$F(k;n,p)=\Pr(X\leq k)=\sum_{i=0}^{\lfloor k\rfloor}\binom{n}{i}p^{i}(1-p)^{n-i}$$
Here $p$ is the probability of success and the function denotes the discrete probability distribution of the number of successes in a sequence of $n$ independent experiments, and $\lfloor k\rfloor$ is the "floor" under $k$, i.e. the greatest integer less than or equal to $k$.
== Derived functions ==
=== Complementary cumulative distribution function (tail distribution) ===
Sometimes, it is useful to study the opposite question and ask how often the random variable is above a particular level. This is called the complementary cumulative distribution function (ccdf) or simply the tail distribution or exceedance, and is defined as
$$\bar{F}_X(x)=\operatorname{P}(X>x)=1-F_X(x).$$
This has applications in statistical hypothesis testing, for example, because the one-sided p-value is the probability of observing a test statistic at least as extreme as the one observed. Thus, provided that the test statistic, $T$, has a continuous distribution, the one-sided p-value is simply given by the ccdf: for an observed value $t$ of the test statistic
$$p=\operatorname{P}(T\geq t)=\operatorname{P}(T>t)=1-F_T(t).$$
In survival analysis, $\bar{F}_X(x)$ is called the survival function and denoted $S(x)$, while the term reliability function is common in engineering.
==== Properties ====
For a non-negative continuous random variable having an expectation, Markov's inequality states that
$$\bar{F}_X(x)\leq\frac{\operatorname{E}(X)}{x}.$$
As $x\to\infty$, $\bar{F}_X(x)\to 0$, and in fact $\bar{F}_X(x)=o(1/x)$ provided that $\operatorname{E}(X)$ is finite.
Proof: Assuming $X$ has a density function $f_X$, for any $c>0$
$$\operatorname{E}(X)=\int_{0}^{\infty}xf_X(x)\,dx\geq\int_{0}^{c}xf_X(x)\,dx+c\int_{c}^{\infty}f_X(x)\,dx.$$
Then, on recognizing $\bar{F}_X(c)=\int_{c}^{\infty}f_X(x)\,dx$ and rearranging terms,
$$0\leq c\bar{F}_X(c)\leq\operatorname{E}(X)-\int_{0}^{c}xf_X(x)\,dx\to 0\text{ as }c\to\infty,$$
as claimed.
For a random variable having an expectation,
$$\operatorname{E}(X)=\int_{0}^{\infty}\bar{F}_X(x)\,dx-\int_{-\infty}^{0}F_X(x)\,dx$$
and for a non-negative random variable the second term is 0.
If the random variable can only take non-negative integer values, this is equivalent to
$$\operatorname{E}(X)=\sum_{n=0}^{\infty}\bar{F}_X(n).$$
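The tail-sum identity for non-negative integer-valued variables can be checked exactly on a small discrete distribution (the coin-flip example is illustrative):

```python
from fractions import Fraction

# X = number of heads in 3 fair coin flips; E[X] = 3/2.
pmf = {0: Fraction(1, 8), 1: Fraction(3, 8), 2: Fraction(3, 8), 3: Fraction(1, 8)}

def tail(n):
    # Complementary CDF at n: P(X > n)
    return sum(p for k, p in pmf.items() if k > n)

direct = sum(k * p for k, p in pmf.items())          # E[X] from the pmf
via_tails = sum(tail(n) for n in range(0, 4))        # sum_{n>=0} P(X > n)

assert direct == Fraction(3, 2)
assert via_tails == direct   # the two computations of E[X] agree exactly
```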
=== Folded cumulative distribution ===
While the plot of a cumulative distribution $F$ often has an S-like shape, an alternative illustration is the folded cumulative distribution or mountain plot, which folds the top half of the graph over, that is
$$F_{\text{fold}}(x)=F(x)1_{\{F(x)\leq 0.5\}}+(1-F(x))1_{\{F(x)>0.5\}}$$
where $1_{\{A\}}$ denotes the indicator function and the second summand is the survivor function, thus using two scales, one for the upslope and another for the downslope. This form of illustration emphasises the median, dispersion (specifically, the mean absolute deviation from the median) and skewness of the distribution or of the empirical results.
=== Inverse distribution function (quantile function) ===
If the CDF $F$ is strictly increasing and continuous then $F^{-1}(p)$, $p\in[0,1]$, is the unique real number $x$ such that $F(x)=p$. This defines the inverse distribution function or quantile function.
Some distributions do not have a unique inverse (for example if $f_X(x)=0$ for all $a<x<b$, causing $F_X$ to be constant). In this case, one may use the generalized inverse distribution function, which is defined as
$$F^{-1}(p)=\inf\{x\in\mathbb{R}:F(x)\geq p\},\quad\forall p\in[0,1].$$
Example 1: The median is $F^{-1}(0.5)$.
Example 2: Put $\tau=F^{-1}(0.95)$. Then we call $\tau$ the 95th percentile.
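The generalized inverse can be made concrete for a step CDF with flat regions; a small sketch (the representation of the CDF as a list of jump points is an illustrative choice):

```python
def gen_inverse(cdf_points, p):
    """Generalized inverse F^{-1}(p) = inf{x : F(x) >= p} for a step CDF
    given as a sorted list of (x, F(x)) jump points."""
    for x, F in cdf_points:
        if F >= p:
            return x
    raise ValueError("p exceeds sup F")

# CDF of the variable taking values 0 and 1 with probability 1/2 each.
steps = [(0, 0.5), (1, 1.0)]

assert gen_inverse(steps, 0.5) == 0    # median: inf{x : F(x) >= 0.5} = 0
assert gen_inverse(steps, 0.95) == 1   # 95th percentile
```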
Some useful properties of the inverse cdf (which are also preserved in the definition of the generalized inverse distribution function) are:
$F^{-1}$ is nondecreasing
$F^{-1}(F(x))\leq x$
$F(F^{-1}(p))\geq p$
$F^{-1}(p)\leq x$ if and only if $p\leq F(x)$
If $Y$ has a $U[0,1]$ distribution then $F^{-1}(Y)$ is distributed as $F$. This is used in random number generation using the inverse transform sampling method.
If $\{X_\alpha\}$ is a collection of independent $F$-distributed random variables defined on the same sample space, then there exist random variables $Y_\alpha$ such that $Y_\alpha$ is distributed as $U[0,1]$ and $F^{-1}(Y_\alpha)=X_\alpha$ with probability 1 for all $\alpha$.
The inverse of the cdf can be used to translate results obtained for the uniform distribution to other distributions.
=== Empirical distribution function ===
The empirical distribution function is an estimate of the cumulative distribution function that generated the points in the sample. It converges with probability 1 to that underlying distribution. A number of results exist to quantify the rate of convergence of the empirical distribution function to the underlying cumulative distribution function.
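The empirical distribution function just described is simple to construct: $F_n(x)$ is the fraction of sample points at most $x$. A minimal sketch using the standard library:

```python
from bisect import bisect_right

def make_ecdf(sample):
    """Empirical CDF: F_n(x) = (# of sample points <= x) / n."""
    xs = sorted(sample)
    n = len(xs)
    return lambda x: bisect_right(xs, x) / n

F = make_ecdf([3, 1, 4, 1, 5])
assert F(0) == 0.0
assert F(1) == 0.4     # two of the five points are <= 1
assert F(4.5) == 0.8   # 1, 1, 3, 4 are <= 4.5
assert F(5) == 1.0
```

Like any CDF, the result is a non-decreasing, right-continuous step function reaching 0 and 1 in the limits.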
== Multivariate case ==
=== Definition for two random variables ===
When dealing simultaneously with more than one random variable the joint cumulative distribution function can also be defined. For example, for a pair of random variables $X,Y$, the joint CDF $F_{XY}$ is given by
$$F_{XY}(x,y)=\operatorname{P}(X\leq x,Y\leq y),$$
where the right-hand side represents the probability that the random variable $X$ takes on a value less than or equal to $x$ and that $Y$ takes on a value less than or equal to $y$.
Example of joint cumulative distribution function:
For two continuous variables X and Y:
$$\Pr(a<X<b\text{ and }c<Y<d)=\int_{a}^{b}\int_{c}^{d}f(x,y)\,dy\,dx;$$
For two discrete random variables, it is helpful to generate a table of probabilities and record the cumulative probability for each potential range of X and Y. For example:
given the joint probability mass function in tabular form, determine the joint cumulative distribution function.
Solution: using the given table of probabilities for each potential range of X and Y, the joint cumulative distribution function may be constructed in tabular form.
=== Definition for more than two random variables ===
For $N$ random variables $X_1,\ldots,X_N$, the joint CDF $F_{X_1,\ldots,X_N}$ is given by
$$F_{X_1,\ldots,X_N}(x_1,\ldots,x_N)=\operatorname{P}(X_1\leq x_1,\ldots,X_N\leq x_N).$$
Interpreting the $N$ random variables as a random vector $\mathbf{X}=(X_1,\ldots,X_N)^{T}$ yields a shorter notation:
$$F_{\mathbf{X}}(\mathbf{x})=\operatorname{P}(X_1\leq x_1,\ldots,X_N\leq x_N)$$
=== Properties ===
Every multivariate CDF is:
Monotonically non-decreasing for each of its variables,
Right-continuous in each of its variables,
$0\leq F_{X_1\ldots X_n}(x_1,\ldots,x_n)\leq 1,$
$\lim_{x_1,\ldots,x_n\to+\infty}F_{X_1\ldots X_n}(x_1,\ldots,x_n)=1$ and $\lim_{x_i\to-\infty}F_{X_1\ldots X_n}(x_1,\ldots,x_n)=0,$ for all $i$.
Not every function satisfying the above four properties is a multivariate CDF, unlike in the single-dimension case. For example, let $F(x,y)=0$ for $x<0$ or $x+y<1$ or $y<0$, and let $F(x,y)=1$ otherwise. It is easy to see that the above conditions are met, and yet $F$ is not a CDF since if it were, then
$$\operatorname{P}\left(\tfrac{1}{3}<X\leq 1,\tfrac{1}{3}<Y\leq 1\right)=-1$$
as explained below.
The probability that a point belongs to a hyperrectangle is analogous to the 1-dimensional case:
$$F_{X_1,X_2}(a,c)+F_{X_1,X_2}(b,d)-F_{X_1,X_2}(a,d)-F_{X_1,X_2}(b,c)=\operatorname{P}(a<X_1\leq b,c<X_2\leq d)$$
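The inclusion-exclusion formula above can be verified numerically on a simple joint distribution (the two-coin example is illustrative):

```python
# Two independent fair coins X1, X2 taking values in {0, 1};
# joint CDF F(x, y) = P(X1 <= x, X2 <= y).
def F(x, y):
    px = 0.0 if x < 0 else (0.5 if x < 1 else 1.0)
    py = 0.0 if y < 0 else (0.5 if y < 1 else 1.0)
    return px * py   # factorizes by independence

# P(a < X1 <= b, c < X2 <= d) via the four corners of the rectangle:
a, b, c, d = -0.5, 0.5, -0.5, 0.5    # a rectangle capturing only the point (0, 0)
prob = F(b, d) - F(a, d) - F(b, c) + F(a, c)
assert abs(prob - 0.25) < 1e-12      # P(X1 = 0, X2 = 0) = 1/4
```

Applying the same four-corner sum to the counterexample function above yields the value −1, which is why that function cannot be a CDF.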
== Complex case ==
=== Complex random variable ===
The generalization of the cumulative distribution function from real to complex random variables is not obvious because expressions of the form $P(Z\leq 1+2i)$ make no sense. However, expressions of the form $P(\Re(Z)\leq 1,\Im(Z)\leq 3)$ make sense. Therefore, we define the cumulative distribution of complex random variables via the joint distribution of their real and imaginary parts:
$$F_Z(z)=F_{\Re(Z),\Im(Z)}(\Re(z),\Im(z))=P(\Re(Z)\leq\Re(z),\Im(Z)\leq\Im(z)).$$
=== Complex random vector ===
Generalization of Eq.4 yields
$$F_{\mathbf{Z}}(\mathbf{z})=F_{\Re(Z_1),\Im(Z_1),\ldots,\Re(Z_n),\Im(Z_n)}(\Re(z_1),\Im(z_1),\ldots,\Re(z_n),\Im(z_n))=\operatorname{P}(\Re(Z_1)\leq\Re(z_1),\Im(Z_1)\leq\Im(z_1),\ldots,\Re(Z_n)\leq\Re(z_n),\Im(Z_n)\leq\Im(z_n))$$
as the definition for the CDF of a complex random vector $\mathbf{Z}=(Z_1,\ldots,Z_N)^{T}$.
== Use in statistical analysis ==
The concept of the cumulative distribution function makes an explicit appearance in statistical analysis in two (similar) ways. Cumulative frequency analysis is the analysis of the frequency of occurrence of values of a phenomenon less than a reference value. The empirical distribution function is a formal direct estimate of the cumulative distribution function for which simple statistical properties can be derived and which can form the basis of various statistical hypothesis tests. Such tests can assess whether there is evidence against a sample of data having arisen from a given distribution, or evidence against two samples of data having arisen from the same (unknown) population distribution.
=== Kolmogorov–Smirnov and Kuiper's tests ===
The Kolmogorov–Smirnov test is based on cumulative distribution functions and can be used to test to see whether two empirical distributions are different or whether an empirical distribution is different from an ideal distribution. The closely related Kuiper's test is useful if the domain of the distribution is cyclic as in day of the week. For instance Kuiper's test might be used to see if the number of tornadoes varies during the year or if sales of a product vary by day of the week or day of the month.
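The two-sample Kolmogorov–Smirnov statistic is simply the largest vertical gap between two empirical CDFs. A stdlib sketch of the statistic only (the full test also requires its null distribution, which is not computed here):

```python
from bisect import bisect_right

def ks_statistic(sample1, sample2):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs, checked at every
    observed point (where the maximum must occur)."""
    xs1, xs2 = sorted(sample1), sorted(sample2)
    n1, n2 = len(xs1), len(xs2)
    return max(
        abs(bisect_right(xs1, x) / n1 - bisect_right(xs2, x) / n2)
        for x in xs1 + xs2
    )

# Identical samples give statistic 0; fully separated samples give 1.
assert ks_statistic([1, 2, 3], [1, 2, 3]) == 0.0
assert ks_statistic([0, 1, 2], [10, 11, 12]) == 1.0
```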
== See also ==
Descriptive statistics
Distribution fitting
Ogive (statistics)
== References ==
== External links ==
Media related to Cumulative distribution functions at Wikimedia Commons
Survival analysis is a branch of statistics for analyzing the expected duration of time until one event occurs, such as death in biological organisms and failure in mechanical systems. This topic is called reliability theory, reliability analysis or reliability engineering in engineering, duration analysis or duration modelling in economics, and event history analysis in sociology. Survival analysis attempts to answer certain questions, such as what is the proportion of a population which will survive past a certain time? Of those that survive, at what rate will they die or fail? Can multiple causes of death or failure be taken into account? How do particular circumstances or characteristics increase or decrease the probability of survival?
To answer such questions, it is necessary to define "lifetime". In the case of biological survival, death is unambiguous, but for mechanical reliability, failure may not be well-defined, for there may well be mechanical systems in which failure is partial, a matter of degree, or not otherwise localized in time. Even in biological problems, some events (for example, heart attack or other organ failure) may have the same ambiguity. The theory outlined below assumes well-defined events at specific times; other cases may be better treated by models which explicitly account for ambiguous events.
More generally, survival analysis involves the modelling of time to event data; in this context, death or failure is considered an "event" in the survival analysis literature – traditionally only a single event occurs for each subject, after which the organism or mechanism is dead or broken. Recurring event or repeated event models relax that assumption. The study of recurring events is relevant in systems reliability, and in many areas of social sciences and medical research.
== Introduction to survival analysis ==
Survival analysis is used in several ways:
To describe the survival times of members of a group
Life tables
Kaplan–Meier curves
Survival function
Hazard function
To compare the survival times of two or more groups
Log-rank test
To describe the effect of categorical or quantitative variables on survival
Cox proportional hazards regression
Parametric survival models
Survival trees
Survival random forests
=== Definitions of common terms in survival analysis ===
The following terms are commonly used in survival analyses:
Event: Death, disease occurrence, disease recurrence, recovery, or other experience of interest
Time: The time from the beginning of an observation period (such as surgery or beginning treatment) to (i) an event, or (ii) end of the study, or (iii) loss of contact or withdrawal from the study.
Censoring / Censored observation: Censoring occurs when we have some information about individual survival time, but we do not know the survival time exactly. The subject is censored in the sense that nothing is observed or known about that subject after the time of censoring. A censored subject may or may not have an event after the end of observation time.
Survival function S(t): The probability that a subject survives longer than time t.
=== Example: Acute myelogenous leukemia survival data ===
This example uses the Acute Myelogenous Leukemia survival data set "aml" from the "survival" package in R. The data set is from Miller (1997) and the question is whether the standard course of chemotherapy should be extended ('maintained') for additional cycles.
The aml data set sorted by survival time is shown in the box.
Time is indicated by the variable "time", which is the survival or censoring time
Event (recurrence of aml cancer) is indicated by the variable "status". 0 = no event (censored), 1 = event (recurrence)
Treatment group: the variable "x" indicates if maintenance chemotherapy was given
The last observation (11), at 161 weeks, is censored. Censoring indicates that the patient did not have an event (no recurrence of aml cancer). Another subject, observation 3, was censored at 13 weeks (indicated by status=0). This subject was in the study for only 13 weeks, and the aml cancer did not recur during those 13 weeks. It is possible that this patient was enrolled near the end of the study, so that they could be observed for only 13 weeks. It is also possible that the patient was enrolled early in the study, but was lost to follow up or withdrew from the study. The table shows that other subjects were censored at 16, 28, and 45 weeks (observations 17, 6, and 9 with status=0). The remaining subjects all experienced events (recurrence of aml cancer) while in the study. The question of interest is whether recurrence occurs later in maintained patients than in non-maintained patients.
==== Kaplan–Meier plot for the aml data ====
The survival function S(t) is the probability that a subject survives longer than time t. S(t) is theoretically a smooth curve, but it is usually estimated using the Kaplan–Meier (KM) curve. The graph shows the KM plot for the aml data and can be interpreted as follows:
The x axis is time, from zero (when observation began) to the last observed time point.
The y axis is the proportion of subjects surviving. At time zero, 100% of the subjects are alive without an event.
The solid line (similar to a staircase) shows the progression of event occurrences.
A vertical drop indicates an event. In the aml table shown above, two subjects had events at five weeks, two had events at eight weeks, one had an event at nine weeks, and so on. These events at five weeks, eight weeks and so on are indicated by the vertical drops in the KM plot at those time points.
At the far right end of the KM plot there is a tick mark at 161 weeks. The vertical tick mark indicates that a patient was censored at this time. In the aml data table five subjects were censored, at 13, 16, 28, 45 and 161 weeks. There are five tick marks in the KM plot, corresponding to these censored observations.
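The product-limit calculation behind a KM curve can be sketched in a few lines. The following is a minimal pure-Python sketch on hypothetical data (not the R "survival" package implementation), using the usual convention that subjects censored at an event time are still counted as at risk at that time:

```python
# Minimal Kaplan–Meier product-limit sketch on hypothetical data.
# At each distinct event time t: S(t) <- S(t-) * (1 - d_t / n_t),
# where d_t is the number of events at t and n_t the number at risk.
def kaplan_meier(times, events):
    """times: observed times; events: 1 = event, 0 = censored.
    Returns a list of (event time, estimated S(t)) pairs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    out = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)        # events at t
        removed = sum(1 for tt, _ in data if tt == t)  # events + censored at t
        if d > 0:
            s *= 1 - d / n_at_risk
            out.append((t, s))
        n_at_risk -= removed
        i += removed
    return out
```

Censored observations produce no drop; they only shrink the risk set for later event times, which is why the KM curve marks them with tick marks rather than steps.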
==== Life table for the aml data ====
A life table summarizes survival data in terms of the number of events and the proportion surviving at each event time point. The life table for the aml data, created using the R software, is shown.
The life table summarizes the events and the proportion surviving at each event time point. The columns in the life table have the following interpretation:
time gives the time points at which events occur.
n.risk is the number of subjects at risk immediately before the time point, t. Being "at risk" means that the subject has not had an event before time t, and is not censored before or at time t.
n.event is the number of subjects who have events at time t.
survival is the proportion surviving, as determined using the Kaplan–Meier product-limit estimate.
std.err is the standard error of the estimated survival. The standard error of the Kaplan–Meier product-limit estimate is calculated using Greenwood's formula, and depends on the number at risk (n.risk in the table), the number of deaths (n.event in the table) and the proportion surviving (survival in the table).
lower 95% CI and upper 95% CI are the lower and upper 95% confidence bounds for the proportion surviving.
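Greenwood's formula can be sketched directly from the (n.risk, n.event) columns of such a life table. This is a minimal illustration on hypothetical values, not the R survfit code:

```python
import math

# Greenwood's formula sketch: Var(S(t)) = S(t)^2 * sum d_i / (n_i * (n_i - d_i)),
# summed over event times up to t, for hypothetical (n.risk, n.event) pairs.
def greenwood_se(risk_event_pairs):
    s = 1.0     # running Kaplan–Meier estimate
    acc = 0.0   # running Greenwood sum
    out = []
    for n, d in risk_event_pairs:
        s *= 1 - d / n
        acc += d / (n * (n - d))
        out.append((s, s * math.sqrt(acc)))  # (survival, std.err)
    return out
```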
==== Log-rank test: Testing for differences in survival in the aml data ====
The log-rank test compares the survival times of two or more groups. This example uses a log-rank test for a difference in survival in the maintained versus non-maintained treatment groups in the aml data. The graph shows KM plots for the aml data broken out by treatment group, which is indicated by the variable "x" in the data.
The null hypothesis for a log-rank test is that the groups have the same survival. The expected number of subjects surviving in each group is adjusted for the number of subjects at risk in the groups at each event time. The log-rank test determines whether the observed number of events in each group is significantly different from the expected number. The formal test is based on a chi-squared statistic. When the log-rank statistic is large, it is evidence for a difference in the survival times between the groups. The log-rank statistic approximately has a chi-squared distribution with one degree of freedom, and the p-value is calculated using the chi-squared distribution.
For the example data, the log-rank test for difference in survival gives a p-value of p=0.0653, indicating that the treatment groups do not differ significantly in survival, assuming an alpha level of 0.05. The sample size of 23 subjects is modest, so there is little power to detect differences between the treatment groups. The chi-squared test is based on asymptotic approximation, so the p-value should be regarded with caution for small sample sizes.
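The two-group log-rank statistic described above can be sketched from first principles; this is an illustration on hypothetical data, not the R survdiff implementation:

```python
# Two-group log-rank chi-squared statistic (1 df): at each event time,
# compare observed events in group 1 with the number expected under the
# null hypothesis of equal survival, and accumulate the hypergeometric
# variance of that count.
def logrank_stat(times, events, groups):
    """events: 1 = event, 0 = censored; groups: 0 or 1."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    o_minus_e = 0.0  # observed minus expected events in group 1
    var = 0.0
    for t in event_times:
        n = sum(1 for tt in times if tt >= t)  # number at risk
        n1 = sum(1 for tt, g in zip(times, groups) if tt >= t and g == 1)
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        d1 = sum(1 for tt, e, g in zip(times, events, groups)
                 if tt == t and e == 1 and g == 1)
        o_minus_e += d1 - d * n1 / n
        if n > 1:
            var += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return o_minus_e ** 2 / var
```

The resulting statistic is then referred to a chi-squared distribution with one degree of freedom to obtain the p-value.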
=== Cox proportional hazards (PH) regression analysis ===
Kaplan–Meier curves and log-rank tests are most useful when the predictor variable is categorical (e.g., drug vs. placebo), or takes a small number of values (e.g., drug doses 0, 20, 50, and 100 mg/day) that can be treated as categorical. The log-rank test and KM curves do not work easily with quantitative predictors such as gene expression, white blood count, or age. For quantitative predictor variables, an alternative method is Cox proportional hazards regression analysis. Cox PH models also work with categorical predictor variables, which are encoded as {0,1} indicator or dummy variables. The log-rank test is a special case of a Cox PH analysis, and can be performed using Cox PH software.
==== Example: Cox proportional hazards regression analysis for melanoma ====
This example uses the melanoma data set from Dalgaard Chapter 14.
Data are in the R package ISwR. The Cox proportional hazards regression using R gives the results shown in the box.
The Cox regression results are interpreted as follows.
Sex is encoded as a numeric vector (1: female, 2: male). The R summary for the Cox model gives the hazard ratio (HR) for the second group relative to the first group, that is, male versus female.
coef = 0.662 is the estimated logarithm of the hazard ratio for males versus females.
exp(coef) = 1.94 = exp(0.662). The log hazard ratio (coef = 0.662) is transformed to the hazard ratio using exp(coef). The estimated hazard ratio of 1.94 indicates that males have a higher risk of death (lower survival) than females in these data.
se(coef) = 0.265 is the standard error of the log hazard ratio.
z = 2.5 = coef/se(coef) = 0.662/0.265. Dividing the coef by its standard error gives the z score.
p=0.013. The p-value corresponding to z=2.5 for sex is p=0.013, indicating that there is a significant difference in survival as a function of sex.
The summary output also gives upper and lower 95% confidence intervals for the hazard ratio: lower 95% bound = 1.15; upper 95% bound = 3.26.
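These quoted numbers can be reproduced from coef and se(coef) alone; the Wald 95% interval is exp(coef ± 1.96 · se):

```python
import math

# Reproducing the Cox summary arithmetic quoted above.
coef, se = 0.662, 0.265
hr = math.exp(coef)          # hazard ratio
z = coef / se                # Wald z score
ci = (math.exp(coef - 1.96 * se), math.exp(coef + 1.96 * se))
# hr ≈ 1.94, z ≈ 2.50, ci ≈ (1.15, 3.26), matching the summary output
```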
Finally, the output gives p-values for three alternative tests for overall significance of the model:
Likelihood ratio test = 6.15 on 1 df, p=0.0131
Wald test = 6.24 on 1 df, p=0.0125
Score (log-rank) test = 6.47 on 1 df, p=0.0110
These three tests are asymptotically equivalent. For large enough N, they will give similar results; for small N, they may differ somewhat. The last row, "Score (log-rank) test", is the result for the log-rank test, with p=0.011, the same result as the log-rank test above, because the log-rank test is a special case of a Cox PH regression. The likelihood ratio test has better behavior for small sample sizes, so it is generally preferred.
==== Cox model using a covariate in the melanoma data ====
The Cox model extends the log-rank test by allowing the inclusion of additional covariates. This example uses the melanoma data set, in which the predictor variables include a continuous covariate, the thickness of the tumor (variable name = "thick").
In the histograms, the thickness values are positively skewed and do not have a Gaussian-like, symmetric probability distribution. Regression models, including the Cox model, generally give more reliable results with normally distributed variables. For this example we may use a logarithmic transform. The log of the thickness of the tumor appears to be more normally distributed, so the Cox models will use log thickness. The Cox PH analysis gives the results in the box.
The p-value for all three overall tests (likelihood, Wald, and score) are significant, indicating that the model is significant. The p-value for log(thick) is 6.9e-07, with a hazard ratio HR = exp(coef) = 2.18, indicating a strong relationship between the thickness of the tumor and increased risk of death.
By contrast, the p-value for sex is now p=0.088. The hazard ratio HR = exp(coef) = 1.58, with a 95% confidence interval of 0.934 to 2.68. Because the confidence interval for HR includes 1, these results indicate that sex makes a smaller contribution to the difference in the HR after controlling for the thickness of the tumor, and only trends toward significance. Examination of graphs of log(thickness) by sex and a t-test of log(thickness) by sex both indicate that there is a significant difference between men and women in the thickness of the tumor when they first see the clinician.
The Cox model assumes that the hazards are proportional. The proportional hazard assumption may be tested using the R function cox.zph(). A p-value which is less than 0.05 indicates that the hazards are not proportional. For the melanoma data we obtain p=0.222. Hence, we cannot reject the null hypothesis of the hazards being proportional. Additional tests and graphs for examining a Cox model are described in the textbooks cited.
==== Extensions to Cox models ====
Cox models can be extended to deal with variations on the simple analysis.
Stratification. The subjects can be divided into strata, where subjects within a stratum are expected to be relatively more similar to each other than to randomly chosen subjects from other strata. The regression parameters are assumed to be the same across the strata, but a different baseline hazard may exist for each stratum. Stratification is useful for analyses using matched subjects, for dealing with patient subsets, such as different clinics, and for dealing with violations of the proportional hazard assumption.
Time-varying covariates. Some variables, such as gender and treatment group, generally stay the same in a clinical trial. Other clinical variables, such as serum protein levels or dose of concomitant medications may change over the course of a study. Cox models may be extended for such time-varying covariates.
=== Tree-structured survival models ===
The Cox PH regression model is a linear model. It is similar to linear regression and logistic regression. Specifically, these methods assume that a single line, curve, plane, or surface is sufficient to separate groups (alive, dead) or to estimate a quantitative response (survival time).
In some cases alternative partitions give more accurate classification or quantitative estimates. One set of alternative methods are tree-structured survival models, including survival random forests. Tree-structured survival models may give more accurate predictions than Cox models. Examining both types of models for a given data set is a reasonable strategy.
==== Example survival tree analysis ====
This example of a survival tree analysis uses the R package "rpart". The example is based on 146 stage C prostate cancer patients in the data set stagec in rpart. Rpart and the stagec example are described in Atkinson and Therneau (1997), which is also distributed as a vignette of the rpart package.
The variables in stagec are:
pgtime: time to progression, or last follow-up free of progression
pgstat: status at last follow-up (1=progressed, 0=censored)
age: age at diagnosis
eet: early endocrine therapy (1=no, 0=yes)
ploidy: diploid/tetraploid/aneuploid DNA pattern
g2: % of cells in G2 phase
grade: tumor grade (1-4)
gleason: Gleason grade (3-10)
The survival tree produced by the analysis is shown in the figure.
Each branch in the tree indicates a split on the value of a variable. For example, the root of the tree splits subjects with grade < 2.5 versus subjects with grade 2.5 or greater. The terminal nodes indicate the number of subjects in the node, the number of subjects who have events, and the relative event rate compared to the root. In the node on the far left, the values 1/33 indicate that one of the 33 subjects in the node had an event, and that the relative event rate is 0.122. In the node on the far right bottom, the values 11/15 indicate that 11 of 15 subjects in the node had an event, and the relative event rate is 2.7.
==== Survival random forests ====
An alternative to building a single survival tree is to build many survival trees, where each tree is constructed using a sample of the data, and average the trees to predict survival. This is the method underlying the survival random forest models. Survival random forest analysis is available in the R package "randomForestSRC".
The randomForestSRC package includes an example survival random forest analysis using the data set pbc. This data is from the Mayo Clinic Primary Biliary Cirrhosis (PBC) trial of the liver conducted between 1974 and 1984. In the example, the random forest survival model gives more accurate predictions of survival than the Cox PH model. The prediction errors are estimated by bootstrap re-sampling.
=== Deep learning survival models ===
Recent advances in deep representation learning have been extended to survival estimation. The DeepSurv model replaces the log-linear parameterization of the Cox PH model with a multi-layer perceptron. Further extensions like Deep Survival Machines and Deep Cox Mixtures use latent variable mixture models to model the time-to-event distribution as a mixture of parametric or semi-parametric distributions while jointly learning representations of the input covariates. Deep learning approaches have shown superior performance, especially on complex input data modalities such as images and clinical time series.
== General formulation ==
=== Survival function ===
The object of primary interest is the survival function, conventionally denoted S, which is defined as
{\displaystyle S(t)=\Pr(T>t)}
where t is some time, T is a random variable denoting the time of death, and "Pr" stands for probability. That is, the survival function is the probability that the time of death is later than some specified time t.
The survival function is also called the survivor function or survivorship function in problems of biological survival, and the reliability function in mechanical survival problems. In the latter case, the reliability function is denoted R(t).
Usually one assumes S(0) = 1, although it could be less than 1 if there is the possibility of immediate death or failure.
The survival function must be non-increasing: S(u) ≤ S(t) if u ≥ t. This property follows directly because T>u implies T>t. This reflects the notion that survival to a later age is possible only if all younger ages are attained. Given this property, the lifetime distribution function and event density (F and f below) are well-defined.
The survival function is usually assumed to approach zero as age increases without bound (i.e., S(t) → 0 as t → ∞), although the limit could be greater than zero if eternal life is possible. For instance, we could apply survival analysis to a mixture of stable and unstable carbon isotopes; unstable isotopes would decay sooner or later, but the stable isotopes would last indefinitely.
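The stable/unstable isotope example can be illustrated with a simple mixture: if a fraction p of the population never fails and the rest has exponential lifetimes, the survival function plateaus at p instead of approaching zero (the values below are hypothetical):

```python
import math

# Mixture survival: a fraction p survives forever, a fraction (1 - p) has an
# exponential lifetime with rate lam, so S(t) = p + (1 - p) * exp(-lam * t).
p, lam = 0.3, 1.0
def S(t):
    return p + (1 - p) * math.exp(-lam * t)

assert S(0) == 1.0                 # everyone alive at time zero
assert S(5.0) <= S(4.0)            # non-increasing
assert abs(S(100.0) - p) < 1e-9    # plateau at p, not zero
```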
=== Lifetime distribution function and event density ===
Related quantities are defined in terms of the survival function.
The lifetime distribution function, conventionally denoted F, is defined as the complement of the survival function,
{\displaystyle F(t)=\Pr(T\leq t)=1-S(t).}
If F is differentiable then the derivative, which is the density function of the lifetime distribution, is conventionally denoted f,
{\displaystyle f(t)=F'(t)={\frac {d}{dt}}F(t).}
The function f is sometimes called the event density; it is the rate of death or failure events per unit time.
The survival function can be expressed in terms of probability distribution and probability density functions
{\displaystyle S(t)=\Pr(T>t)=\int _{t}^{\infty }f(u)\,du=1-F(t).}
Similarly, a survival event density function can be defined as
{\displaystyle s(t)=S'(t)={\frac {d}{dt}}S(t)={\frac {d}{dt}}\int _{t}^{\infty }f(u)\,du={\frac {d}{dt}}[1-F(t)]=-f(t).}
In other fields, such as statistical physics, the survival event density function is known as the first passage time density.
=== Hazard function and cumulative hazard function ===
The hazard function {\displaystyle h} is defined as the event rate at time {\displaystyle t}, conditional on survival to time {\displaystyle t}. Synonyms for hazard function in different fields include hazard rate, force of mortality (demography and actuarial science, denoted by {\displaystyle \mu }), force of failure, or failure rate (engineering, denoted {\displaystyle \lambda }). For example, in actuarial science, {\displaystyle \mu (x)} denotes the rate of death for people aged {\displaystyle x}, whereas in reliability engineering {\displaystyle \lambda (t)} denotes the rate of failure of components after operation for time {\displaystyle t}.
Suppose that an item has survived for a time {\displaystyle t} and we desire the probability that it will not survive for an additional time {\displaystyle dt}:
{\displaystyle h(t)=\lim _{dt\rightarrow 0}{\frac {\Pr(t\leq T<t+dt)}{dt\cdot S(t)}}={\frac {f(t)}{S(t)}}=-{\frac {S'(t)}{S(t)}}.}
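The identity h(t) = f(t)/S(t) is easy to check numerically for the exponential distribution, whose hazard is the constant rate λ:

```python
import math

# For an exponential lifetime with rate lam: f(t) = lam * exp(-lam * t) and
# S(t) = exp(-lam * t), so h(t) = f(t) / S(t) = lam at every t.
lam = 0.5
for t in (0.1, 1.0, 3.0, 10.0):
    f = lam * math.exp(-lam * t)
    S = math.exp(-lam * t)
    assert abs(f / S - lam) < 1e-12
```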
Any function {\displaystyle h} is a hazard function if and only if it satisfies the following properties:
{\displaystyle \forall x\geq 0\left(h(x)\geq 0\right),}
{\displaystyle \int _{0}^{\infty }h(x)\,dx=\infty .}
In fact, the hazard rate is usually more informative about the underlying mechanism of failure than the other representations of a lifetime distribution.
The hazard function must be non-negative, {\displaystyle \lambda (t)\geq 0}, and its integral over {\displaystyle [0,\infty )} must be infinite, but it is not otherwise constrained; it may be increasing or decreasing, non-monotonic, or discontinuous. An example is the bathtub curve hazard function, which is large for small values of {\displaystyle t}, decreases to some minimum, and thereafter increases again; this can model the property of some mechanical systems either to fail soon after operation, or much later, as the system ages.
The hazard function can alternatively be represented in terms of the cumulative hazard function, conventionally denoted {\displaystyle \Lambda } or {\displaystyle H}:
{\displaystyle \,\Lambda (t)=-\log S(t)}
so transposing signs and exponentiating
{\displaystyle \,S(t)=\exp(-\Lambda (t))}
or differentiating (with the chain rule)
{\displaystyle {\frac {d}{dt}}\Lambda (t)=-{\frac {S'(t)}{S(t)}}=\lambda (t).}
The name "cumulative hazard function" is derived from the fact that
{\displaystyle \Lambda (t)=\int _{0}^{t}\lambda (u)\,du}
which is the "accumulation" of the hazard over time.
From the definition of {\displaystyle \Lambda (t)}, we see that it increases without bound as t tends to infinity (assuming that {\displaystyle S(t)} tends to zero). This implies that {\displaystyle \lambda (t)} must not decrease too quickly, since, by definition, the cumulative hazard has to diverge. For example, {\displaystyle \exp(-t)} is not the hazard function of any survival distribution, because its integral converges to 1.
The survival function {\displaystyle S(t)}, the cumulative hazard function {\displaystyle \Lambda (t)}, the density {\displaystyle f(t)}, the hazard function {\displaystyle \lambda (t)}, and the lifetime distribution function {\displaystyle F(t)} are related through
{\displaystyle S(t)=\exp[-\Lambda (t)]={\frac {f(t)}{\lambda (t)}}=1-F(t),\quad t>0.}
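These relations can be verified numerically, for instance for a Weibull lifetime with shape k and unit scale, where Λ(t) = t^k and λ(t) = k·t^(k−1) (the values of k and t below are arbitrary):

```python
import math

# Checking S(t) = exp(-Lambda(t)) = f(t)/lam(t) = 1 - F(t) for a Weibull
# lifetime with shape k and scale 1.
k, t = 2.0, 1.3
Lam = t ** k                  # cumulative hazard
lam = k * t ** (k - 1)        # hazard
S = math.exp(-Lam)            # survival
f = lam * math.exp(-Lam)      # density: f = lam * S
F = 1 - S                     # lifetime distribution
assert abs(S - f / lam) < 1e-12
assert abs(S - (1 - F)) < 1e-12
```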
=== Quantities derived from the survival distribution ===
Future lifetime at a given time {\displaystyle t_{0}} is the time remaining until death, given survival to age {\displaystyle t_{0}}. Thus, it is {\displaystyle T-t_{0}} in the present notation. The expected future lifetime is the expected value of future lifetime. The probability of death at or before age {\displaystyle t_{0}+t}, given survival until age {\displaystyle t_{0}}, is just
{\displaystyle P(T\leq t_{0}+t\mid T>t_{0})={\frac {P(t_{0}<T\leq t_{0}+t)}{P(T>t_{0})}}={\frac {F(t_{0}+t)-F(t_{0})}{S(t_{0})}}.}
Therefore, the probability density of future lifetime is
{\displaystyle {\frac {d}{dt}}{\frac {F(t_{0}+t)-F(t_{0})}{S(t_{0})}}={\frac {f(t_{0}+t)}{S(t_{0})}}}
and the expected future lifetime is
{\displaystyle {\frac {1}{S(t_{0})}}\int _{0}^{\infty }t\,f(t_{0}+t)\,dt={\frac {1}{S(t_{0})}}\int _{t_{0}}^{\infty }S(t)\,dt,}
where the second expression is obtained using integration by parts.
For {\displaystyle t_{0}=0}, that is, at birth, this reduces to the expected lifetime.
In reliability problems, the expected lifetime is called the mean time to failure, and the expected future lifetime is called the mean residual lifetime.
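The mean-residual-life formula above can be checked numerically for the exponential distribution, which by memorylessness has expected future lifetime 1/λ regardless of t₀:

```python
import math

# Numerical check of (1/S(t0)) * integral from t0 to infinity of S(t) dt
# for an exponential lifetime with rate lam; the exact answer is 1/lam.
lam, t0 = 0.5, 2.0
def S(t):
    return math.exp(-lam * t)

dt, upper = 1e-4, 60.0        # crude left Riemann sum, truncated tail
steps = int((upper - t0) / dt)
integral = sum(S(t0 + i * dt) * dt for i in range(steps))
mrl = integral / S(t0)        # mean residual lifetime at t0
assert abs(mrl - 1 / lam) < 1e-2
```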
As the probability of an individual surviving until age t or later is S(t), by definition, the expected number of survivors at age t out of an initial population of n newborns is n × S(t), assuming the same survival function for all individuals. Thus the expected proportion of survivors is S(t).
If the survival of different individuals is independent, the number of survivors at age t has a binomial distribution with parameters n and S(t), and the variance of the proportion of survivors is S(t) × (1-S(t))/n.
The age at which a specified proportion of survivors remain can be found by solving the equation S(t) = q for t, where q is the quantile in question. Typically one is interested in the median lifetime, for which q = 1/2, or other quantiles such as q = 0.90 or q = 0.99.
== Censoring ==
Censoring is a form of missing data problem in which time to event is not observed for reasons such as termination of study before all recruited subjects have shown the event of interest or the subject has left the study prior to experiencing an event. Censoring is common in survival analysis.
If only the lower limit l for the true event time T is known such that T > l, this is called right censoring. Right censoring will occur, for example, for those subjects whose birth date is known but who are still alive when they are lost to follow-up or when the study ends. We generally encounter right-censored data.
If the event of interest has already happened before the subject is included in the study but it is not known when it occurred, the data is said to be left-censored. When it can only be said that the event happened between two observations or examinations, this is interval censoring.
Left censoring occurs for example when a permanent tooth has already emerged prior to the start of a dental study that aims to estimate its emergence distribution. In the same study, an emergence time is interval-censored when the permanent tooth is present in the mouth at the current examination but not yet at the previous examination. Interval censoring often occurs in HIV/AIDS studies. Indeed, time to HIV seroconversion can be determined only by a laboratory assessment which is usually initiated after a visit to the physician. Then one can only conclude that HIV seroconversion has happened between two examinations. The same is true for the diagnosis of AIDS, which is based on clinical symptoms and needs to be confirmed by a medical examination.
It may also happen that subjects with a lifetime less than some threshold may not be observed at all: this is called truncation. Note that truncation is different from left censoring, since for a left censored datum, we know the subject exists, but for a truncated datum, we may be completely unaware of the subject. Truncation is also common. In a so-called delayed entry study, subjects are not observed at all until they have reached a certain age. For example, people may not be observed until they have reached the age to enter school. Any deceased subjects in the pre-school age group would be unknown. Left-truncated data are common in actuarial work for life insurance and pensions.
Left-censored data can occur when a person's survival time becomes incomplete on the left side of the follow-up period for the person. For example, in an epidemiological example, we may monitor a patient for an infectious disorder starting from the time when he or she is tested positive for the infection. Although we may know the right-hand side of the duration of interest, we may never know the exact time of exposure to the infectious agent.
== Fitting parameters to data ==
Survival models can be usefully viewed as ordinary regression models in which the response variable is time. However, computing the likelihood function (needed for fitting parameters or making other kinds of inferences) is complicated by the censoring. The likelihood function for a survival model, in the presence of censored data, is formulated as follows. By definition the likelihood function is the conditional probability of the data given the parameters of the model.
It is customary to assume that the data are independent given the parameters. Then the likelihood function is the product of the likelihood of each datum. It is convenient to partition the data into four categories: uncensored, left censored, right censored, and interval censored. These are denoted "unc.", "l.c.", "r.c.", and "i.c." in the equation below.
{\displaystyle L(\theta )=\prod _{T_{i}\in unc.}\Pr(T=T_{i}\mid \theta )\prod _{i\in l.c.}\Pr(T<T_{i}\mid \theta )\prod _{i\in r.c.}\Pr(T>T_{i}\mid \theta )\prod _{i\in i.c.}\Pr(T_{i,l}<T<T_{i,r}\mid \theta ).}
For uncensored data, with {\displaystyle T_{i}} equal to the age at death, we have
{\displaystyle \Pr(T=T_{i}\mid \theta )=f(T_{i}\mid \theta ).}
For left-censored data, such that the age at death is known to be less than {\displaystyle T_{i}}, we have
{\displaystyle \Pr(T<T_{i}\mid \theta )=F(T_{i}\mid \theta )=1-S(T_{i}\mid \theta ).}
For right-censored data, such that the age at death is known to be greater than {\displaystyle T_{i}}, we have
{\displaystyle \Pr(T>T_{i}\mid \theta )=1-F(T_{i}\mid \theta )=S(T_{i}\mid \theta ).}
For an interval censored datum, such that the age at death is known to be less than {\displaystyle T_{i,r}} and greater than {\displaystyle T_{i,l}}, we have
{\displaystyle \Pr(T_{i,l}<T<T_{i,r}\mid \theta )=S(T_{i,l}\mid \theta )-S(T_{i,r}\mid \theta ).}
An important application where interval-censored data arises is current status data, where an event {\displaystyle T_{i}} is known not to have occurred before an observation time and to have occurred before the next observation time.
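For the exponential distribution these censored-likelihood contributions take a closed form: events contribute log f(t) = log λ − λt, right-censored times contribute log S(t) = −λt, and the maximum-likelihood estimate is the number of events divided by the total observed time. A minimal sketch on hypothetical data:

```python
import math

# Exponential log-likelihood with right censoring: the event indicator e
# selects the log-density term; censored observations contribute only the
# log-survival term -lam * t.
def exp_loglik(lam, times, events):
    return sum(e * math.log(lam) - lam * t for t, e in zip(times, events))

times = [2.0, 3.0, 5.0, 7.0]
events = [1, 1, 0, 1]              # third observation right-censored
mle = sum(events) / sum(times)     # closed-form MLE: events / total exposure
```

Nearby values of λ give a lower log-likelihood than the MLE, as expected.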
== Non-parametric estimation ==
The Kaplan–Meier estimator can be used to estimate the survival function. The Nelson–Aalen estimator can be used to provide a non-parametric estimate of the cumulative hazard rate function. These estimators require lifetime data. Periodic case (cohort) and death (and recovery) counts are statistically sufficient to make nonparametric maximum likelihood and least squares estimates of survival functions, without lifetime data.
== Discrete-time survival models ==
While many parametric models assume continuous time, discrete-time survival models can be mapped to a binary classification problem. In a discrete-time survival model, the survival period is partitioned into intervals, and for each interval a binary target indicator records whether the event takes place within a certain time horizon. If a binary classifier (potentially enhanced with a different likelihood to take more structure of the problem into account) is calibrated, then the classifier score is the hazard function (i.e., the conditional probability of failure).
Discrete-time survival models are connected to empirical likelihood.
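The mapping to binary classification described above can be sketched as a "person-period" expansion: each subject's follow-up is split into discrete intervals, and a binary target marks the interval in which the event occurred. The function and variable names below are illustrative, not from the literature.

```python
def person_period(duration, event, n_intervals):
    """Expand one subject's (duration, event) pair into discrete-time rows
    (interval, target): the target is 1 only in the interval where the event
    occurs; a censored subject (event = 0) contributes only 0-rows up to its
    last observed interval."""
    rows = []
    for k in range(1, n_intervals + 1):
        if k < duration:
            rows.append((k, 0))       # survived past interval k
        elif k == duration:
            rows.append((k, 1 if event else 0))  # event or censoring here
            break
        else:
            break
    return rows
```

The resulting rows from all subjects form an ordinary binary-classification training set whose fitted scores estimate the discrete-time hazard.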
== Goodness of fit ==
The goodness of fit of survival models can be assessed using scoring rules.
== Computer software for survival analysis ==
The textbook by Kleinbaum has examples of survival analyses using SAS, R, and other packages. The textbooks by Brostrom, Dalgaard, and Tableman and Kim give examples of survival analyses using R (or using S code that also runs in R).
== Distributions used in survival analysis ==
Exponential distribution
Exponential-logarithmic distribution
Gamma distribution
Generalized gamma distribution
Hypertabastic distribution
Lindley distribution
Log-logistic distribution
Weibull distribution
== Applications ==
Credit risk
False conviction rate of inmates sentenced to death
Lead times for metallic components in the aerospace industry
Predictors of criminal recidivism
Survival distribution of radio-tagged animals
Time-to-violent death of Roman emperors
Intertrade waiting times of electronically traded shares on a stock exchange
== See also ==
== References ==
== Further reading ==
Collett, David (2003). Modelling Survival Data in Medical Research (Second ed.). Boca Raton: Chapman & Hall/CRC. ISBN 1584883251.
Elandt-Johnson, Regina; Johnson, Norman (1999). Survival Models and Data Analysis. New York: John Wiley & Sons. ISBN 0471349925.
Kalbfleisch, J. D.; Prentice, Ross L. (2002). The statistical analysis of failure time data. New York: John Wiley & Sons. ISBN 047136357X.
Lawless, Jerald F. (2003). Statistical Models and Methods for Lifetime Data (2nd ed.). Hoboken: John Wiley and Sons. ISBN 0471372153.
Rausand, M.; Hoyland, A. (2004). System Reliability Theory: Models, Statistical Methods, and Applications. Hoboken: John Wiley & Sons. ISBN 047147133X.
== External links ==
Therneau, Terry. "A Package for Survival Analysis in S". Archived from the original on 2006-09-07. via Dr. Therneau's page on the Mayo Clinic website
"Engineering Statistics Handbook". NIST/SEMATEK.
SOCR, Survival analysis applet and interactive learning activity.
Survival/Failure Time Analysis @ Statistics' Textbook Page
Survival Analysis in R
Lifelines, a Python package for survival analysis
Survival Analysis in NAG Fortran Library | Wikipedia/Hazard_rate |
In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length. In other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample.
More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1.
The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables.
== Example ==
Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on.
In this example, the ratio (probability of living during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ 6×10−13 (using the unit conversion 3.6×1012 nanoseconds = 1 hour).
There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window.
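The arithmetic in this example can be checked directly; the value 2 hour⁻¹ is the density assumed in the text.

```python
# Probability of dying in a one-nanosecond window around 5 hours,
# given the assumed density f(5 hours) = 2 per hour from the example.
density = 2.0            # hour^-1
window = 1e-9 / 3600     # one nanosecond expressed in hours
prob = density * window  # approximately 6e-13, matching the text's estimate
```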
== Absolutely continuous univariate distributions ==
A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable {\displaystyle X} has density {\displaystyle f_{X}}, where {\displaystyle f_{X}} is a non-negative Lebesgue-integrable function, if:
{\displaystyle \Pr[a\leq X\leq b]=\int _{a}^{b}f_{X}(x)\,dx.}
Hence, if {\displaystyle F_{X}} is the cumulative distribution function of {\displaystyle X}, then:
{\displaystyle F_{X}(x)=\int _{-\infty }^{x}f_{X}(u)\,du,}
and (if {\displaystyle f_{X}} is continuous at {\displaystyle x})
{\displaystyle f_{X}(x)={\frac {d}{dx}}F_{X}(x).}
Intuitively, one can think of {\displaystyle f_{X}(x)\,dx} as being the probability of {\displaystyle X} falling within the infinitesimal interval {\displaystyle [x,x+dx]}.
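These relations can be verified numerically. The sketch below is illustrative only, using the exponential distribution with rate 1 as the assumed example; it checks that a midpoint-rule integral of the density over [a, b] reproduces F(b) − F(a).

```python
import math

def f(x, lam=1.0):
    """Exponential density (assumed example distribution)."""
    return lam * math.exp(-lam * x) if x >= 0 else 0.0

def F(x, lam=1.0):
    """Exponential cumulative distribution function."""
    return 1 - math.exp(-lam * x) if x >= 0 else 0.0

a, b, n = 0.5, 2.0, 100_000
h = (b - a) / n
# Midpoint-rule approximation of the integral of f over [a, b];
# by the relation above it should equal F(b) - F(a).
integral = sum(f(a + (i + 0.5) * h) * h for i in range(n))
```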
== Formal definition ==
(This definition may be extended to any probability distribution using the measure-theoretic definition of probability.)
A random variable {\displaystyle X} with values in a measurable space {\displaystyle ({\mathcal {X}},{\mathcal {A}})} (usually {\displaystyle \mathbb {R} ^{n}} with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on {\displaystyle ({\mathcal {X}},{\mathcal {A}})}: the density of {\displaystyle X} with respect to a reference measure {\displaystyle \mu } on {\displaystyle ({\mathcal {X}},{\mathcal {A}})} is the Radon–Nikodym derivative:
{\displaystyle f={\frac {dX_{*}P}{d\mu }}.}
That is, f is any measurable function with the property that:
{\displaystyle \Pr[X\in A]=\int _{X^{-1}A}\,dP=\int _{A}f\,d\mu }
for any measurable set {\displaystyle A\in {\mathcal {A}}.}
=== Discussion ===
In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof).
It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere.
== Further details ==
Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere.
The standard normal distribution has probability density
{\displaystyle f(x)={\frac {1}{\sqrt {2\pi }}}\,e^{-x^{2}/2}.}
If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as
{\displaystyle \operatorname {E} [X]=\int _{-\infty }^{\infty }x\,f(x)\,dx.}
Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point.
A distribution has a density function if its cumulative distribution function F(x) is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density:
{\displaystyle {\frac {d}{dx}}F(x)=f(x).}
If a probability distribution admits a density, then the probability of every one-point set {a} is zero; the same holds for finite and countable sets.
Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero.
In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following:
If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or:
{\displaystyle \Pr(t<X<t+dt)=f(t)\,dt.}
== Link between discrete and continuous distributions ==
It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1⁄2 each. The density of probability associated with this variable is:
{\displaystyle f(t)={\frac {1}{2}}(\delta (t+1)+\delta (t-1)).}
More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is:
{\displaystyle f(t)=\sum _{i=1}^{n}p_{i}\,\delta (t-x_{i}),}
where {\displaystyle x_{1},\ldots ,x_{n}} are the discrete values accessible to the variable and {\displaystyle p_{1},\ldots ,p_{n}} are the probabilities associated with these values.
This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability.
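For instance, integrating against the delta functions reduces the continuous-moment integrals to weighted sums over the atoms, as in this small sketch:

```python
def moments(xs, ps):
    """Mean and variance of a discrete variable written as
    f(t) = sum_i p_i * delta(t - x_i): the deltas pick out the atoms,
    so the continuous moment integrals become weighted sums."""
    mean = sum(p * x for x, p in zip(xs, ps))
    var = sum(p * (x - mean) ** 2 for x, p in zip(xs, ps))
    return mean, var
```

For the Rademacher variable above, `moments([-1, 1], [0.5, 0.5])` gives mean 0 and variance 1.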
== Families of densities ==
It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by {\displaystyle \mu } and {\displaystyle \sigma ^{2}} respectively, giving the family of densities
{\displaystyle f(x;\mu ,\sigma ^{2})={\frac {1}{\sigma {\sqrt {2\pi }}}}e^{-{\frac {1}{2}}\left({\frac {x-\mu }{\sigma }}\right)^{2}}.}
Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution.
Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family, means simply substituting the new parameter values into the formula in place of the old ones.
== Densities associated with multiple variables ==
For continuous random variables X1, ..., Xn, it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the n variables, such that, for any domain D in the n-dimensional space of the values of the variables X1, ..., Xn, the probability that a realisation of the set variables falls inside the domain D is
{\displaystyle \Pr \left(X_{1},\ldots ,X_{n}\in D\right)=\int _{D}f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{n}.}
If F(x1, ..., xn) = Pr(X1 ≤ x1, ..., Xn ≤ xn) is the cumulative distribution function of the vector (X1, ..., Xn), then the joint probability density function can be computed as a partial derivative
{\displaystyle f(x)=\left.{\frac {\partial ^{n}F}{\partial x_{1}\cdots \partial x_{n}}}\right|_{x}}
=== Marginal densities ===
For i = 1, 2, ..., n, let fXi(xi) be the probability density function associated with variable Xi alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables X1, ..., Xn by integrating over all values of the other n − 1 variables:
{\displaystyle f_{X_{i}}(x_{i})=\int f(x_{1},\ldots ,x_{n})\,dx_{1}\cdots dx_{i-1}\,dx_{i+1}\cdots dx_{n}.}
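A numerical illustration, with an assumed joint density f(x, y) = x + y on the unit square (whose marginal in x is x + 1/2): integrating out y recovers the marginal.

```python
def f_joint(x, y):
    """Assumed example joint density: f(x, y) = x + y on the unit square."""
    return x + y if 0 <= x <= 1 and 0 <= y <= 1 else 0.0

def marginal_x(x, n=10_000):
    """Marginal density of X: integrate the joint density over y
    (midpoint rule on [0, 1])."""
    h = 1.0 / n
    return sum(f_joint(x, (j + 0.5) * h) * h for j in range(n))
```

Here `marginal_x(0.3)` returns approximately 0.3 + 0.5 = 0.8, as the closed-form marginal predicts.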
=== Independence ===
Continuous random variables X1, ..., Xn admitting a joint density are all independent from each other if
{\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{X_{1}}(x_{1})\cdots f_{X_{n}}(x_{n}).}
=== Corollary ===
If the joint probability density function of a vector of n random variables can be factored into a product of n functions of one variable
{\displaystyle f_{X_{1},\ldots ,X_{n}}(x_{1},\ldots ,x_{n})=f_{1}(x_{1})\cdots f_{n}(x_{n}),}
(where each fi is not necessarily a density) then the n variables in the set are all independent from each other, and the marginal probability density function of each of them is given by
{\displaystyle f_{X_{i}}(x_{i})={\frac {f_{i}(x_{i})}{\int f_{i}(x)\,dx}}.}
=== Example ===
This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call {\displaystyle {\vec {R}}} a 2-dimensional random vector of coordinates (X, Y): the probability to obtain {\displaystyle {\vec {R}}} in the quarter plane of positive x and y is
{\displaystyle \Pr \left(X>0,Y>0\right)=\int _{0}^{\infty }\int _{0}^{\infty }f_{X,Y}(x,y)\,dx\,dy.}
== Function of random variables and change of variables in the probability density function ==
If the probability density function of a random variable (or vector) X is given as fX(x), it is possible (but often not necessary; see below) to calculate the probability density function of some variable Y = g(X). This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape fg(X) = fY using a known (for instance, uniform) random number generator.
It is tempting to think that in order to find the expected value E(g(X)), one must first find the probability density fg(X) of the new random variable Y = g(X). However, rather than computing
{\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }yf_{g(X)}(y)\,dy,}
one may find instead
{\displaystyle \operatorname {E} {\big (}g(X){\big )}=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx.}
The values of the two integrals are the same in all cases in which both X and g(X) actually have probability density functions. It is not necessary that g be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician.
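A quick Monte Carlo illustration of this point, for X uniform on (0, 1) and g(x) = x², where E(g(X)) = 1/3: averaging g over draws of X directly requires no density for g(X). The seed and sample size are arbitrary choices.

```python
import random

random.seed(0)

def g(x):
    return x * x

# Average g over draws of X ~ Uniform(0, 1); the law of the unconscious
# statistician says this estimates E[g(X)] = 1/3 with no need to derive
# the density of Y = g(X).
n = 200_000
est = sum(g(random.random()) for _ in range(n)) / n
```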
=== Scalar to scalar ===
Let
g
:
R
→
R
{\displaystyle g:\mathbb {R} \to \mathbb {R} }
be a monotonic function, then the resulting density function is
f
Y
(
y
)
=
f
X
(
g
−
1
(
y
)
)
|
d
d
y
(
g
−
1
(
y
)
)
|
.
{\displaystyle f_{Y}(y)=f_{X}{\big (}g^{-1}(y){\big )}\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|.}
Here g−1 denotes the inverse function.
This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is,
{\displaystyle \left|f_{Y}(y)\,dy\right|=\left|f_{X}(x)\,dx\right|,}
or
{\displaystyle f_{Y}(y)=\left|{\frac {dx}{dy}}\right|f_{X}(x)=\left|{\frac {d}{dy}}(x)\right|f_{X}(x)=\left|{\frac {d}{dy}}{\big (}g^{-1}(y){\big )}\right|f_{X}{\big (}g^{-1}(y){\big )}={\left|\left(g^{-1}\right)'(y)\right|}\cdot f_{X}{\big (}g^{-1}(y){\big )}.}
For functions that are not monotonic, the probability density function for y is
{\displaystyle \sum _{k=1}^{n(y)}\left|{\frac {d}{dy}}g_{k}^{-1}(y)\right|\cdot f_{X}{\big (}g_{k}^{-1}(y){\big )},}
where n(y) is the number of solutions in x for the equation {\displaystyle g(x)=y}, and {\displaystyle g_{k}^{-1}(y)} are these solutions.
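A standard instance is Y = X² for standard normal X: g is not monotonic, so both roots ±√y contribute, and the sum reproduces the chi-squared density with one degree of freedom. A sketch:

```python
import math

def f_X(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def f_Y(y):
    """Density of Y = X**2 via the non-monotonic change-of-variable sum:
    both roots x = +sqrt(y) and x = -sqrt(y) contribute, each weighted by
    |d/dy g_k^{-1}(y)| = 1 / (2 sqrt(y))."""
    if y <= 0:
        return 0.0
    r = math.sqrt(y)
    jac = 1 / (2 * r)
    return jac * f_X(r) + jac * f_X(-r)
```

The result, y^(−1/2) e^(−y/2) / √(2π), is exactly the chi-squared density with one degree of freedom.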
=== Vector to vector ===
Suppose x is an n-dimensional random variable with joint density f. If y = G(x), where G is a bijective, differentiable function, then y has density pY:
{\displaystyle p_{Y}(\mathbf {y} )=f{\Bigl (}G^{-1}(\mathbf {y} ){\Bigr )}\left|\det \left[\left.{\frac {dG^{-1}(\mathbf {z} )}{d\mathbf {z} }}\right|_{\mathbf {z} =\mathbf {y} }\right]\right|}
with the differential regarded as the Jacobian of the inverse of G(⋅), evaluated at y.
For example, in the 2-dimensional case x = (x1, x2), suppose the transform G is given as y1 = G1(x1, x2), y2 = G2(x1, x2) with inverses x1 = G1−1(y1, y2), x2 = G2−1(y1, y2). The joint distribution for y = (y1, y2) has density
{\displaystyle p_{Y_{1},Y_{2}}(y_{1},y_{2})=f_{X_{1},X_{2}}{\big (}G_{1}^{-1}(y_{1},y_{2}),G_{2}^{-1}(y_{1},y_{2}){\big )}\left\vert {\frac {\partial G_{1}^{-1}}{\partial y_{1}}}{\frac {\partial G_{2}^{-1}}{\partial y_{2}}}-{\frac {\partial G_{1}^{-1}}{\partial y_{2}}}{\frac {\partial G_{2}^{-1}}{\partial y_{1}}}\right\vert .}
=== Vector to scalar ===
Let {\displaystyle V:\mathbb {R} ^{n}\to \mathbb {R} } be a differentiable function and {\displaystyle X} be a random vector taking values in {\displaystyle \mathbb {R} ^{n}}, {\displaystyle f_{X}} be the probability density function of {\displaystyle X} and {\displaystyle \delta (\cdot )} be the Dirac delta function. It is possible to use the formulas above to determine {\displaystyle f_{Y}}, the probability density function of {\displaystyle Y=V(X)}, which will be given by
{\displaystyle f_{Y}(y)=\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} .}
This result leads to the law of the unconscious statistician:
{\displaystyle {\begin{aligned}\operatorname {E} _{Y}[Y]&=\int _{\mathbb {R} }yf_{Y}(y)\,dy\\&=\int _{\mathbb {R} }y\int _{\mathbb {R} ^{n}}f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,d\mathbf {x} \,dy\\&=\int _{{\mathbb {R} }^{n}}\int _{\mathbb {R} }yf_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )}\,dy\,d\mathbf {x} \\&=\int _{\mathbb {R} ^{n}}V(\mathbf {x} )f_{X}(\mathbf {x} )\,d\mathbf {x} =\operatorname {E} _{X}[V(X)].\end{aligned}}}
Proof:
Let {\displaystyle Z} be a collapsed random variable with probability density function {\displaystyle p_{Z}(z)=\delta (z)} (i.e., a constant equal to zero). Let the random vector {\displaystyle {\tilde {X}}} and the transform {\displaystyle H} be defined as
{\displaystyle H(Z,X)={\begin{bmatrix}Z+V(X)\\X\end{bmatrix}}={\begin{bmatrix}Y\\{\tilde {X}}\end{bmatrix}}.}
It is clear that {\displaystyle H} is a bijective mapping, and the Jacobian of {\displaystyle H^{-1}} is given by:
{\displaystyle {\frac {dH^{-1}(y,{\tilde {\mathbf {x} }})}{dy\,d{\tilde {\mathbf {x} }}}}={\begin{bmatrix}1&-{\frac {dV({\tilde {\mathbf {x} }})}{d{\tilde {\mathbf {x} }}}}\\\mathbf {0} _{n\times 1}&\mathbf {I} _{n\times n}\end{bmatrix}},}
which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that
{\displaystyle f_{Y,X}(y,x)=f_{X}(\mathbf {x} )\delta {\big (}y-V(\mathbf {x} ){\big )},}
which if marginalized over {\displaystyle x} leads to the desired probability density function.
== Sums of independent random variables ==
The probability density function of the sum of two independent random variables U and V, each of which has a probability density function, is the convolution of their separate density functions:
{\displaystyle f_{U+V}(x)=\int _{-\infty }^{\infty }f_{U}(y)f_{V}(x-y)\,dy=\left(f_{U}*f_{V}\right)(x)}
It is possible to generalize the previous relation to a sum of N independent random variables, with densities U1, ..., UN:
{\displaystyle f_{U_{1}+\cdots +U_{N}}(x)=\left(f_{U_{1}}*\cdots *f_{U_{N}}\right)(x)}
This can be derived from a two-way change of variables involving Y = U + V and Z = V, similarly to the example below for the quotient of independent random variables.
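For example, the convolution of two Uniform(0, 1) densities is the triangular density on [0, 2]. The sketch below is illustrative only; it evaluates the convolution integral by a midpoint rule.

```python
def f_U(x):
    """Uniform(0, 1) density (both summands are assumed Uniform(0, 1))."""
    return 1.0 if 0 <= x <= 1 else 0.0

def f_sum(x, n=20_000):
    """Density of U + V: the convolution integral of f_U with itself,
    approximated by a midpoint rule over the support [0, 1]."""
    h = 1.0 / n
    return sum(f_U((j + 0.5) * h) * f_U(x - (j + 0.5) * h) * h
               for j in range(n))
```

The numerical values trace out the triangular shape: about 0.5 at x = 0.5, peaking at 1.0 at x = 1, and falling back toward 0 at x = 2.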
== Products and quotients of independent random variables ==
Given two independent random variables U and V, each of which has a probability density function, the density of the product Y = UV and quotient Y = U/V can be computed by a change of variables.
=== Example: Quotient distribution ===
To compute the quotient Y = U/V of two independent random variables U and V, define the following transformation:
{\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}}
Then, the joint density p(y,z) can be computed by a change of variables from U,V to Y,Z, and Y can be derived by marginalizing out Z from the joint density.
The inverse transformation is
{\displaystyle {\begin{aligned}U&=YZ\\V&=Z\end{aligned}}}
The absolute value of the Jacobian matrix determinant {\displaystyle J(U,V\mid Y,Z)} of this transformation is:
{\displaystyle \left|\det {\begin{bmatrix}{\frac {\partial u}{\partial y}}&{\frac {\partial u}{\partial z}}\\{\frac {\partial v}{\partial y}}&{\frac {\partial v}{\partial z}}\end{bmatrix}}\right|=\left|\det {\begin{bmatrix}z&y\\0&1\end{bmatrix}}\right|=|z|.}
Thus:
{\displaystyle p(y,z)=p(u,v)\,J(u,v\mid y,z)=p(u)\,p(v)\,J(u,v\mid y,z)=p_{U}(yz)\,p_{V}(z)\,|z|.}
And the distribution of Y can be computed by marginalizing out Z:
{\displaystyle p(y)=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz}
This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because Z can be mapped directly back to V, and for a given V the quotient U/V is monotonic. This is similarly the case for the sum U + V, difference U − V and product UV.
Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables.
=== Example: Quotient of two standard normals ===
Given two standard normal variables U and V, the quotient can be computed as follows. First, the variables have the following density functions:
{\displaystyle {\begin{aligned}p(u)&={\frac {1}{\sqrt {2\pi }}}e^{-{u^{2}}/{2}}\\[1ex]p(v)&={\frac {1}{\sqrt {2\pi }}}e^{-{v^{2}}/{2}}\end{aligned}}}
We transform as described above:
{\displaystyle {\begin{aligned}Y&=U/V\\[1ex]Z&=V\end{aligned}}}
This leads to:
{\displaystyle {\begin{aligned}p(y)&=\int _{-\infty }^{\infty }p_{U}(yz)\,p_{V}(z)\,|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}y^{2}z^{2}}{\frac {1}{\sqrt {2\pi }}}e^{-{\frac {1}{2}}z^{2}}|z|\,dz\\[5pt]&=\int _{-\infty }^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}|z|\,dz\\[5pt]&=2\int _{0}^{\infty }{\frac {1}{2\pi }}e^{-{\frac {1}{2}}\left(y^{2}+1\right)z^{2}}z\,dz\\[5pt]&=\int _{0}^{\infty }{\frac {1}{\pi }}e^{-\left(y^{2}+1\right)u}\,du&&u={\tfrac {1}{2}}z^{2}\\[5pt]&=\left.-{\frac {1}{\pi \left(y^{2}+1\right)}}e^{-\left(y^{2}+1\right)u}\right|_{u=0}^{\infty }\\[5pt]&={\frac {1}{\pi \left(y^{2}+1\right)}}\end{aligned}}}
This is the density of a standard Cauchy distribution.
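This result can be checked by simulation: for a standard Cauchy variable, the probability mass on [−1, 1] is (arctan(1) − arctan(−1))/π = 1/2. A sketch (the seed and sample size are arbitrary choices):

```python
import random

random.seed(1)
n = 200_000
# Fraction of quotients U/V (U, V independent standard normals) landing in
# [-1, 1]; for a standard Cauchy variable this mass is exactly 1/2.
hits = sum(1 for _ in range(n)
           if -1 <= random.gauss(0, 1) / random.gauss(0, 1) <= 1)
frac = hits / n
```

With a large sample, `frac` settles near 0.5, consistent with the Cauchy density derived above.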
== See also ==
Density estimation – Estimate of an unobservable underlying probability density function
Kernel density estimation – Estimator
Likelihood function – Function related to statistics and probability theory
List of probability distributions
Probability amplitude – Complex number whose squared absolute value is a probability
Probability mass function – Discrete-variable probability distribution
Secondary measure – Concept in mathematics
Merging independent probability density functions
Uses as position probability density:
Atomic orbital – Function describing an electron in an atom
Home range – The area in which an animal lives and moves on a periodic basis
== References ==
== Further reading ==
Billingsley, Patrick (1979). Probability and Measure. New York, Toronto, London: John Wiley and Sons. ISBN 0-471-00710-2.
Casella, George; Berger, Roger L. (2002). Statistical Inference (Second ed.). Thomson Learning. pp. 34–37. ISBN 0-534-24312-6.
Stirzaker, David (2003). Elementary Probability. Cambridge University Press. ISBN 0-521-42028-8. Chapters 7 to 9 are about continuous variables.
== External links ==
Ushakov, N.G. (2001) [1994], "Density of a probability distribution", Encyclopedia of Mathematics, EMS Press
Weisstein, Eric W. "Probability density function". MathWorld. | Wikipedia/Density_function |
In probability and statistics, the class of exponential dispersion models (EDM), also called exponential dispersion family (EDF), is a set of probability distributions that represents a generalisation of the natural exponential family.
Exponential dispersion models play an important role in statistical theory, in particular in generalized linear models because they have a special structure which enables deductions to be made about appropriate statistical inference.
== Definition ==
=== Univariate case ===
There are two ways to formulate an exponential dispersion model.
==== Additive exponential dispersion model ====
In the univariate case, a real-valued random variable {\displaystyle X} belongs to the additive exponential dispersion model with canonical parameter {\displaystyle \theta } and index parameter {\displaystyle \lambda }, {\displaystyle X\sim \mathrm {ED} ^{*}(\theta ,\lambda )}, if its probability density function can be written as
{\displaystyle f_{X}(x\mid \theta ,\lambda )=h^{*}(\lambda ,x)\exp \left(\theta x-\lambda A(\theta )\right)\,\!.}
==== Reproductive exponential dispersion model ====
The distribution of the transformed random variable {\displaystyle Y={\frac {X}{\lambda }}} is called a reproductive exponential dispersion model, {\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})}, and is given by
{\displaystyle f_{Y}(y\mid \mu ,\sigma ^{2})=h(\sigma ^{2},y)\exp \left({\frac {\theta y-A(\theta )}{\sigma ^{2}}}\right)\,\!,}
with {\displaystyle \sigma ^{2}={\frac {1}{\lambda }}} and {\displaystyle \mu =A'(\theta )}, implying {\displaystyle \theta =(A')^{-1}(\mu )}.
The terminology dispersion model stems from interpreting {\displaystyle \sigma ^{2}} as a dispersion parameter. For fixed {\displaystyle \sigma ^{2}}, {\displaystyle \mathrm {ED} (\mu ,\sigma ^{2})} is a natural exponential family.
=== Multivariate case ===
In the multivariate case, the n-dimensional random variable {\displaystyle \mathbf {X} } has a probability density function of the form
{\displaystyle f_{\mathbf {X} }(\mathbf {x} \mid {\boldsymbol {\theta }},\lambda )=h(\lambda ,\mathbf {x} )\exp \left(\lambda ({\boldsymbol {\theta }}^{\top }\mathbf {x} -A({\boldsymbol {\theta }}))\right),}
where the parameter {\displaystyle {\boldsymbol {\theta }}} has the same dimension as {\displaystyle \mathbf {X} }.
== Properties ==
=== Cumulant-generating function ===
The cumulant-generating function of {\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})} is given by
{\displaystyle K(t;\mu ,\sigma ^{2})=\log \operatorname {E} [e^{tY}]={\frac {A(\theta +\sigma ^{2}t)-A(\theta )}{\sigma ^{2}}},}
with {\displaystyle \theta =(A')^{-1}(\mu )}.
=== Mean and variance ===
The mean and variance of {\displaystyle Y\sim \mathrm {ED} (\mu ,\sigma ^{2})} are given by
{\displaystyle \operatorname {E} [Y]=\mu =A'(\theta )\,,\quad \operatorname {Var} [Y]=\sigma ^{2}A''(\theta )=\sigma ^{2}V(\mu ),}
with unit variance function {\displaystyle V(\mu )=A''((A')^{-1}(\mu ))}.
=== Reproductive ===
If {\displaystyle Y_{1},\ldots ,Y_{n}} are independent with {\displaystyle Y_{i}\sim \mathrm {ED} \left(\mu ,{\frac {\sigma ^{2}}{w_{i}}}\right)}, i.e. with the same mean {\displaystyle \mu } but different weights {\displaystyle w_{i}}, then the weighted mean is again an {\displaystyle \mathrm {ED} }:
{\displaystyle \sum _{i=1}^{n}{\frac {w_{i}Y_{i}}{w_{\bullet }}}\sim \mathrm {ED} \left(\mu ,{\frac {\sigma ^{2}}{w_{\bullet }}}\right),}
with {\displaystyle w_{\bullet }=\sum _{i=1}^{n}w_{i}}. The {\displaystyle Y_{i}} are therefore called reproductive.
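The reproductive property can be illustrated in the normal case, where {\displaystyle \mathrm {ED} (\mu ,\sigma ^{2}/w_{i})=N(\mu ,\sigma ^{2}/w_{i})}; the weights and sample size below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 2.0, 1.0
w = np.array([1.0, 2.0, 5.0])
w_bullet = w.sum()

# Draw Y_i ~ N(mu, sigma^2 / w_i) and form the weighted mean many times.
n = 500_000
Y = rng.normal(mu, np.sqrt(sigma2 / w), size=(n, w.size))
weighted_mean = (w * Y).sum(axis=1) / w_bullet

# The weighted mean should again have mean mu and variance sigma^2 / w_bullet.
assert abs(weighted_mean.mean() - mu) < 0.005
assert abs(weighted_mean.var() - sigma2 / w_bullet) < 0.005
```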
=== Unit deviance ===
The probability density function of an {\displaystyle \mathrm {ED} (\mu ,\sigma ^{2})} can also be expressed in terms of the unit deviance {\displaystyle d(y,\mu )} as
{\displaystyle f_{Y}(y\mid \mu ,\sigma ^{2})={\tilde {h}}(\sigma ^{2},y)\exp \left(-{\frac {d(y,\mu )}{2\sigma ^{2}}}\right),}
where the unit deviance takes the special form {\displaystyle d(y,\mu )=yf(\mu )+g(\mu )+h(y)} or, in terms of the unit variance function, {\displaystyle d(y,\mu )=2\int _{\mu }^{y}{\frac {y-t}{V(t)}}\,dt}.
== Examples ==
Many common probability distributions belong to the class of EDMs, among them the normal, binomial, Poisson, negative binomial, gamma, inverse Gaussian, and Tweedie distributions.
== References ==
In statistics, the generalized linear array model (GLAM) is used for analyzing data sets with array structures. It is based on the generalized linear model with the design matrix written as a Kronecker product.
== Overview ==
The generalized linear array model or GLAM was introduced in 2006. Such models provide a structure and a computational procedure for fitting generalized linear models or GLMs whose model matrix can be written as a Kronecker product and whose data can be written as an array. In a large GLM, the GLAM approach gives very substantial savings in both storage and computational time over the usual GLM algorithm.
Suppose that the data {\displaystyle \mathbf {Y} } are arranged in a {\displaystyle d}-dimensional array of size {\displaystyle n_{1}\times n_{2}\times \dots \times n_{d}}; thus, the corresponding data vector {\displaystyle \mathbf {y} =\operatorname {vec} (\mathbf {Y} )} has length {\displaystyle n_{1}n_{2}n_{3}\cdots n_{d}}. Suppose also that the design matrix is of the form
{\displaystyle \mathbf {X} =\mathbf {X} _{d}\otimes \mathbf {X} _{d-1}\otimes \dots \otimes \mathbf {X} _{1}.}
The standard analysis of a GLM with data vector {\displaystyle \mathbf {y} } and design matrix {\displaystyle \mathbf {X} } proceeds by repeated evaluation of the scoring algorithm
{\displaystyle \mathbf {X} '{\tilde {\mathbf {W} }}_{\delta }\mathbf {X} {\hat {\boldsymbol {\theta }}}=\mathbf {X} '{\tilde {\mathbf {W} }}_{\delta }{\tilde {\mathbf {z} }},}
where {\displaystyle {\tilde {\boldsymbol {\theta }}}} represents the approximate solution of {\displaystyle {\boldsymbol {\theta }}} and {\displaystyle {\hat {\boldsymbol {\theta }}}} is the improved value; {\displaystyle \mathbf {W} _{\delta }} is the diagonal weight matrix with elements
{\displaystyle w_{ii}^{-1}=\left({\frac {\partial \eta _{i}}{\partial \mu _{i}}}\right)^{2}\operatorname {var} (y_{i}),}
and
{\displaystyle \mathbf {z} ={\boldsymbol {\eta }}+\mathbf {W} _{\delta }^{-1}(\mathbf {y} -{\boldsymbol {\mu }})}
is the working variable, evaluated at the approximate solution.
Computationally, GLAM provides array algorithms to calculate the linear predictor {\displaystyle {\boldsymbol {\eta }}=\mathbf {X} {\boldsymbol {\theta }}} and the weighted inner product {\displaystyle \mathbf {X} '{\tilde {\mathbf {W} }}_{\delta }\mathbf {X} } without evaluating the model matrix {\displaystyle \mathbf {X} }.
=== Example ===
In 2 dimensions, let {\displaystyle \mathbf {X} =\mathbf {X} _{2}\otimes \mathbf {X} _{1}}; then the linear predictor is written {\displaystyle \mathbf {X} _{1}{\boldsymbol {\Theta }}\mathbf {X} _{2}'}, where {\displaystyle {\boldsymbol {\Theta }}} is the matrix of coefficients; the weighted inner product is obtained from {\displaystyle G(\mathbf {X} _{1})'\mathbf {W} G(\mathbf {X} _{2})}, where {\displaystyle \mathbf {W} } is the matrix of weights; here {\displaystyle G(\mathbf {M} )} is the row tensor function of the {\displaystyle r\times c} matrix {\displaystyle \mathbf {M} }, given by
{\displaystyle G(\mathbf {M} )=(\mathbf {M} \otimes \mathbf {1} ')\circ (\mathbf {1} '\otimes \mathbf {M} ),}
where {\displaystyle \circ } denotes element-by-element multiplication and {\displaystyle \mathbf {1} } is a vector of ones of length {\displaystyle c}.
On the other hand, the row tensor function {\displaystyle G(\mathbf {M} )} of the {\displaystyle r\times c} matrix {\displaystyle \mathbf {M} } is an example of the face-splitting product of matrices, proposed by Vadym Slyusar in 1996:
{\displaystyle \mathbf {M} \bullet \mathbf {M} =\left(\mathbf {M} \otimes \mathbf {1} ^{\textsf {T}}\right)\circ \left(\mathbf {1} ^{\textsf {T}}\otimes \mathbf {M} \right),}
where {\displaystyle \bullet } denotes the face-splitting product.
These low-storage, high-speed formulae extend to {\displaystyle d} dimensions.
== Applications ==
GLAM is designed to be used in {\displaystyle d}-dimensional smoothing problems where the data are arranged in an array and the smoothing matrix is constructed as a Kronecker product of {\displaystyle d} one-dimensional smoothing matrices.
== References ==
In applied statistics, fractional models are, to some extent, related to binary response models. However, instead of estimating the probability of being in one bin of a dichotomous variable, the fractional model typically deals with variables that take on all possible values in the unit interval. One can easily generalize this model to take on values on any other interval by appropriate transformations. Examples range from participation rates in 401(k) plans to television ratings of NBA games.
== Description ==
There have been two approaches to modeling this problem. Although both rely on an index that is linear in xi combined with a link function, this is not strictly necessary. The first approach uses a log-odds transformation of y as a linear function of xi, i.e.,
{\displaystyle \operatorname {logit} y=\log {\frac {y}{1-y}}=x\beta .}
This approach is problematic for two distinct reasons: the y variable cannot take on the boundary values 0 and 1, and the interpretation of the coefficients is not straightforward. The second approach circumvents these issues by using the logistic function as the link. More specifically,
{\displaystyle \operatorname {E} [y\mid x]={\frac {\exp(x\beta )}{1+\exp(x\beta )}}.}
It immediately becomes clear that this setup is very similar to the binary logit model, with the difference that the y variable can actually take on values in the unit interval. Many of the estimation techniques for the binary logit model, such as non-linear least squares and quasi-MLE, carry over in a natural way, as do heteroskedasticity adjustments and partial-effects calculations.
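A minimal sketch of the quasi-MLE approach, assuming scipy is available: we maximize the Bernoulli quasi-log-likelihood under the logistic conditional mean on synthetic fractional data. The data-generating choices (a Beta draw with mean {\displaystyle \mu }, the sample size, the coefficient values) are illustrative assumptions, not part of the method itself.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 500
x = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([0.2, 1.0])
mu = 1 / (1 + np.exp(-x @ beta_true))
# Synthetic fractional outcomes in (0, 1) with conditional mean mu.
y = rng.beta(5 * mu, 5 * (1 - mu))

def neg_quasi_loglik(beta):
    """Bernoulli quasi-log-likelihood with a logistic mean function."""
    m = 1 / (1 + np.exp(-x @ beta))
    return -np.sum(y * np.log(m) + (1 - y) * np.log(1 - m))

beta_hat = minimize(neg_quasi_loglik, np.zeros(2)).x
```

Because only the conditional mean is specified, this estimator is consistent even though the Beta likelihood was never used.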
Extensions to this cross-sectional model have been provided that allow for taking into account important econometric issues, such as endogenous explanatory variables and unobserved heterogeneous effects. Under strict exogeneity assumptions, it is possible to difference out these unobserved effects using panel data techniques, although weaker exogeneity assumptions can also result in consistent estimators. Control function techniques to deal with endogeneity concerns have also been proposed.
== References ==
In applied statistics, a variance-stabilizing transformation is a data transformation that is specifically chosen either to simplify considerations in graphical exploratory data analysis or to allow the application of simple regression-based or analysis of variance techniques.
== Overview ==
The aim behind the choice of a variance-stabilizing transformation is to find a simple function ƒ to apply to values x in a data set to create new values y = ƒ(x) such that the variability of the values y is not related to their mean value. For example, suppose that the values x are realizations from different Poisson distributions: i.e. the distributions each have different mean values μ. Then, because for the Poisson distribution the variance is identical to the mean, the variance varies with the mean. However, if the simple variance-stabilizing transformation
{\displaystyle y={\sqrt {x}}}
is applied, the sampling variance associated with each observation will be nearly constant: see Anscombe transform for details and some alternative transformations.
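A quick Monte Carlo sketch of this effect (the particular means and sample size are arbitrary): the variance of raw Poisson counts grows with the mean, while the variance of the square-root-transformed counts settles near 1/4 regardless of the mean.

```python
import numpy as np

rng = np.random.default_rng(0)
for mu in (5, 20, 80, 320):
    x = rng.poisson(mu, size=200_000)
    # Raw counts: variance grows linearly with the mean.
    # sqrt-transformed counts: variance is near 1/4, whatever the mean.
    print(mu, round(x.var(), 2), round(np.sqrt(x).var(), 3))
```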
While variance-stabilizing transformations are well known for certain parametric families of distributions, such as the Poisson and the binomial distribution, some types of data analysis proceed more empirically: for example by searching among power transformations to find a suitable fixed transformation. Alternatively, if data analysis suggests a functional form for the relation between variance and mean, this can be used to deduce a variance-stabilizing transformation. Thus if, for a mean μ,
{\displaystyle \operatorname {var} (X)=h(\mu ),}
a suitable basis for a variance-stabilizing transformation would be
{\displaystyle y\propto \int ^{x}{\frac {1}{\sqrt {h(\mu )}}}\,d\mu ,}
where the arbitrary constant of integration and an arbitrary scaling factor can be chosen for convenience.
=== Example: relative variance ===
If X is a positive random variable and for some constant s the variance is given as h(μ) = s2μ2, then the standard deviation is proportional to the mean, a situation called fixed relative error. In this case, the variance-stabilizing transformation is
{\displaystyle y=\int ^{x}{\frac {d\mu }{\sqrt {s^{2}\mu ^{2}}}}={\frac {1}{s}}\ln(x)\propto \log(x).}
That is, the variance-stabilizing transformation is the logarithmic transformation.
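This can be illustrated with synthetic data whose standard deviation is roughly proportional to its mean (the multiplicative lognormal noise below is an illustrative choice): the spread of x grows with the mean, while the spread of log(x) stays constant.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 0.1                               # fixed relative error
for mu in (1.0, 10.0, 100.0):
    x = mu * np.exp(rng.normal(0.0, s, size=100_000))
    # x.std() scales roughly like s * mu; log(x).std() stays near s.
    print(mu, round(x.std(), 3), round(np.log(x).std(), 4))
```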
=== Example: absolute plus relative variance ===
If the variance is given as h(μ) = σ2 + s2μ2 then the variance is dominated by a fixed variance σ2 when |μ| is small enough and is dominated by the relative variance s2μ2 when |μ| is large enough. In this case, the variance-stabilizing transformation is
{\displaystyle y=\int ^{x}{\frac {d\mu }{\sqrt {\sigma ^{2}+s^{2}\mu ^{2}}}}={\frac {1}{s}}\operatorname {asinh} {\frac {x}{\sigma /s}}\propto \operatorname {asinh} {\frac {x}{\lambda }}.}
That is, the variance-stabilizing transformation is the inverse hyperbolic sine of the scaled value x / λ for λ = σ / s.
=== Example: Pearson correlation ===
The Fisher transformation is a variance-stabilizing transformation for the Pearson correlation coefficient.
== Relationship to the delta method ==
Here the delta method is presented informally, which is enough to see its relation to variance-stabilizing transformations; for a more formal treatment see delta method.
Let {\displaystyle X} be a random variable with {\displaystyle E[X]=\mu } and {\displaystyle \operatorname {Var} (X)=\sigma ^{2}}. Define {\displaystyle Y=g(X)}, where {\displaystyle g} is a regular function. A first-order Taylor approximation for {\displaystyle Y=g(X)} is:
{\displaystyle Y=g(X)\approx g(\mu )+g'(\mu )(X-\mu )}
From the equation above, we obtain:
{\displaystyle E[Y]\approx g(\mu )}
and
{\displaystyle \operatorname {Var} [Y]\approx \sigma ^{2}g'(\mu )^{2}}
This approximation method is called the delta method.
Consider now a random variable {\displaystyle X} such that {\displaystyle E[X]=\mu } and {\displaystyle \operatorname {Var} [X]=h(\mu )}.
Notice the relation between the variance and the mean, which implies, for example, heteroscedasticity in a linear model. Therefore, the goal is to find a function {\displaystyle g} such that {\displaystyle Y=g(X)} has a variance independent (at least approximately) of its expectation.
Imposing the condition {\displaystyle \operatorname {Var} [Y]\approx h(\mu )g'(\mu )^{2}={\text{constant}}} implies the differential equation
{\displaystyle {\frac {dg}{d\mu }}={\frac {C}{\sqrt {h(\mu )}}}.}
This ordinary differential equation has, by separation of variables, the solution
{\displaystyle g(\mu )=\int {\frac {C\,d\mu }{\sqrt {h(\mu )}}}.}
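As a consistency check, taking the Poisson-type relation {\displaystyle h(\mu )=\mu } in this last expression recovers the square-root transformation discussed earlier:
{\displaystyle g(\mu )=\int {\frac {C\,d\mu }{\sqrt {\mu }}}=2C{\sqrt {\mu }}\propto {\sqrt {\mu }}.}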
This last expression appeared for the first time in a M. S. Bartlett paper.
== References ==
A log-linear model is a mathematical model that takes the form of a function whose logarithm equals a linear combination of the parameters of the model, which makes it possible to apply (possibly multivariate) linear regression. That is, it has the general form
{\displaystyle \exp \left(c+\sum _{i}w_{i}f_{i}(X)\right),}
in which the fi(X) are quantities that are functions of the variable X, in general a vector of values, while c and the wi stand for the model parameters.
The term may specifically be used for:
A log-linear plot or graph, which is a type of semi-log plot.
Poisson regression for contingency tables, a type of generalized linear model.
The specific applications of log-linear models are where the output quantity lies in the range 0 to ∞, for values of the independent variables X, or more immediately, the transformed quantities fi(X) in the range −∞ to +∞. This may be contrasted to logistic models, similar to the logistic function, for which the output quantity lies in the range 0 to 1. Thus the contexts where these models are useful or realistic often depend on the range of the values being modelled.
== See also ==
Log-linear analysis
General linear model
Generalized linear model
Boltzmann distribution
Elasticity
== Further reading ==
Gujarati, Damodar N.; Porter, Dawn C. (2009). "How to Measure Elasticity: The Log-Linear Model". Basic Econometrics. New York: McGraw-Hill/Irwin. pp. 159–162. ISBN 978-0-07-337577-9.
In statistics, a linear probability model (LPM) is a special case of a binary regression model. Here the dependent variable for each observation takes values which are either 0 or 1. The probability of observing a 0 or 1 in any one case is treated as depending on one or more explanatory variables. For the "linear probability model", this relationship is a particularly simple one, and allows the model to be fitted by linear regression.
The model assumes that, for a binary outcome (Bernoulli trial) {\displaystyle Y} and its associated vector of explanatory variables {\displaystyle X},
{\displaystyle \Pr(Y=1\mid X=x)=x'\beta .}
For this model,
{\displaystyle E[Y\mid X]=0\cdot \Pr(Y=0\mid X)+1\cdot \Pr(Y=1\mid X)=\Pr(Y=1\mid X)=x'\beta ,}
and hence the vector of parameters β can be estimated using least squares. This method of fitting would be inefficient, and can be improved by adopting an iterative scheme based on weighted least squares, in which the model from the previous iteration is used to supply estimates of the conditional variances {\displaystyle \operatorname {Var} (Y\mid X=x)}, which would vary between observations. This approach can be related to fitting the model by maximum likelihood.
A drawback of this model is that, unless restrictions are placed on {\displaystyle \beta }, the estimated coefficients can imply probabilities outside the unit interval {\displaystyle [0,1]}. For this reason, models such as the logit model or the probit model are more commonly used.
== Latent-variable formulation ==
More formally, the LPM can arise from a latent-variable formulation (usually found in the econometrics literature), as follows: assume the following regression model with a latent (unobservable) dependent variable:
{\displaystyle y^{*}=b_{0}+\mathbf {x} '\mathbf {b} +\varepsilon ,\;\;\varepsilon \mid \mathbf {x} \sim U(-a,a).}
The critical assumption here is that the error term of this regression is a uniform random variable symmetric around zero, and hence of mean zero. The cumulative distribution function of {\displaystyle \varepsilon } here is
{\displaystyle F_{\varepsilon |\mathbf {x} }(\varepsilon \mid \mathbf {x} )={\frac {\varepsilon +a}{2a}}.}
Define the indicator variable {\displaystyle y=1} if {\displaystyle y^{*}>0}, and zero otherwise, and consider the conditional probability
{\displaystyle {\rm {Pr}}(y=1\mid \mathbf {x} )={\rm {Pr}}(y^{*}>0\mid \mathbf {x} )={\rm {Pr}}(b_{0}+\mathbf {x} '\mathbf {b} +\varepsilon >0\mid \mathbf {x} )}
{\displaystyle ={\rm {Pr}}(\varepsilon >-b_{0}-\mathbf {x} '\mathbf {b} \mid \mathbf {x} )=1-{\rm {Pr}}(\varepsilon \leq -b_{0}-\mathbf {x} '\mathbf {b} \mid \mathbf {x} )}
{\displaystyle =1-F_{\varepsilon |\mathbf {x} }(-b_{0}-\mathbf {x} '\mathbf {b} \mid \mathbf {x} )=1-{\frac {-b_{0}-\mathbf {x} '\mathbf {b} +a}{2a}}={\frac {b_{0}+a}{2a}}+{\frac {\mathbf {x} '\mathbf {b} }{2a}}.}
But this is the linear probability model,
{\displaystyle P(y=1\mid \mathbf {x} )=\beta _{0}+\mathbf {x} '\beta ,}
with the mapping
{\displaystyle \beta _{0}={\frac {b_{0}+a}{2a}},\;\;\beta ={\frac {\mathbf {b} }{2a}}.}
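The mapping can be verified by simulating the latent uniform model and checking that the empirical success probabilities follow the mapped linear form; the latent-model parameter values below are illustrative (chosen so that {\displaystyle |b_{0}+b_{1}x|<a}, keeping probabilities interior).

```python
import numpy as np

rng = np.random.default_rng(3)
a, b0, b1 = 2.0, 0.5, 1.0             # illustrative latent-model parameters
x = rng.uniform(-1, 1, 500_000)
eps = rng.uniform(-a, a, x.size)
y = (b0 + b1 * x + eps > 0).astype(float)

# Mapped LPM coefficients from the derivation above.
beta0 = (b0 + a) / (2 * a)
beta1 = b1 / (2 * a)

# Empirical P(y = 1 | x near x0) should match beta0 + beta1 * x0.
for x0 in (-0.5, 0.0, 0.5):
    near = np.abs(x - x0) < 0.02
    assert abs(y[near].mean() - (beta0 + beta1 * x0)) < 0.02
```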
This method is a general device to obtain a conditional probability model of a binary variable: if we assume that the distribution of the error term is logistic, we obtain the logit model; if we assume that it is normal, we obtain the probit model; and if we assume that it is the logarithm of a Weibull distribution, the complementary log-log model.
== See also ==
Linear approximation
== References ==
== Further reading ==
Aldrich, John H.; Nelson, Forrest D. (1984). "The Linear Probability Model". Linear Probability, Logit, and Probit Models. Sage. pp. 9–29. ISBN 0-8039-2133-0.
Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3.
Wooldridge, Jeffrey M. (2013). "A Binary Dependent Variable: The Linear Probability Model". Introductory Econometrics: A Modern Approach (5th international ed.). Mason, OH: South-Western. pp. 238–243. ISBN 978-1-111-53439-4.
Horrace, William C., and Ronald L. Oaxaca. "Results on the Bias and Inconsistency of Ordinary Least Squares for the Linear Probability Model." Economics Letters, 2006: Vol. 90, P. 321–327
In statistics, the variance function is a smooth function that depicts the variance of a random quantity as a function of its mean. The variance function is a measure of heteroscedasticity and plays a large role in many settings of statistical modelling. It is a main ingredient in the generalized linear model framework and a tool used in non-parametric regression, semiparametric regression and functional data analysis. In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a smooth function.
== Intuition ==
In a regression model setting, the goal is to establish whether or not a relationship exists between a response variable and a set of predictor variables. Further, if a relationship does exist, the goal is then to be able to describe this relationship as well as possible. A main assumption in linear regression is constant variance, or homoscedasticity, meaning that different response variables have the same variance in their errors at every predictor level. This assumption works well when the response variable and the predictor variable are jointly normal. As we will see later, the variance function in the normal setting is constant; however, we must find a way to quantify heteroscedasticity (non-constant variance) in the absence of joint normality.
When it is likely that the response follows a distribution that is a member of the exponential family, a generalized linear model may be more appropriate to use, and moreover, when we wish not to force a parametric model onto our data, a non-parametric regression approach can be useful. The importance of being able to model the variance as a function of the mean lies in improved inference (in a parametric setting), and estimation of the regression function in general, for any setting.
Variance functions play a very important role in parameter estimation and inference. In general, maximum likelihood estimation requires that a likelihood function be defined. This requirement then implies that one must first specify the distribution of the response variables observed. However, to define a quasi-likelihood, one need only specify a relationship between the mean and the variance of the observations to then be able to use the quasi-likelihood function for estimation. Quasi-likelihood estimation is particularly useful when there is overdispersion. Overdispersion occurs when there is more variability in the data than would otherwise be expected according to the assumed distribution of the data.
In summary, to ensure efficient inference of the regression parameters and the regression function, the heteroscedasticity must be accounted for. Variance functions quantify the relationship between the variance and the mean of the observed data and hence play a significant role in regression estimation and inference.
== Types ==
The variance function and its applications come up in many areas of statistical analysis. A very important use of this function is in the framework of generalized linear models and non-parametric regression.
=== Generalized linear model ===
When a member of the exponential family has been specified, the variance function can easily be derived. The general form of the variance function is presented under the exponential family context, as well as specific forms for the normal, Bernoulli, Poisson, and gamma distributions. In addition, we describe the applications and use of variance functions in maximum likelihood estimation and quasi-likelihood estimation.
==== Derivation ====
The generalized linear model (GLM), is a generalization of ordinary regression analysis that extends to any member of the exponential family. It is particularly useful when the response variable is categorical, binary or subject to a constraint (e.g. only positive responses make sense). A quick summary of the components of a GLM are summarized on this page, but for more details and information see the page on generalized linear models.
A GLM consists of three main ingredients:
1. Random component: a distribution of y from the exponential family, with {\displaystyle E[y\mid X]=\mu }
2. Linear predictor: {\displaystyle \eta =XB=\sum _{j=1}^{p}X_{ij}^{T}B_{j}}
3. Link function: {\displaystyle \eta =g(\mu ),\quad \mu =g^{-1}(\eta )}
First it is important to derive a couple of key properties of the exponential family.
Any random variable {\displaystyle y} in the exponential family has a probability density function of the form
{\displaystyle f(y,\theta ,\phi )=\exp \left({\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi )\right)}
with log-likelihood
{\displaystyle \ell (\theta ,y,\phi )=\log(f(y,\theta ,\phi ))={\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi ).}
Here, {\displaystyle \theta } is the canonical parameter and the parameter of interest, and {\displaystyle \phi } is a nuisance parameter which plays a role in the variance.
We use Bartlett's identities to derive a general expression for the variance function. The first and second Bartlett results ensure that, under suitable conditions (see Leibniz integral rule), for a density function {\displaystyle f_{\theta }} depending on {\displaystyle \theta },
{\displaystyle \operatorname {E} _{\theta }\left[{\frac {\partial }{\partial \theta }}\log(f_{\theta }(y))\right]=0}
{\displaystyle \operatorname {Var} _{\theta }\left[{\frac {\partial }{\partial \theta }}\log(f_{\theta }(y))\right]+\operatorname {E} _{\theta }\left[{\frac {\partial ^{2}}{\partial \theta ^{2}}}\log(f_{\theta }(y))\right]=0}
These identities lead to simple calculations of the expected value {\displaystyle E_{\theta }[y]} and variance {\displaystyle Var_{\theta }[y]} of any random variable {\displaystyle y} in the exponential family.
Expected value of Y: Taking the first derivative with respect to {\displaystyle \theta } of the log of the density in the exponential family form described above, we have
{\displaystyle {\frac {\partial }{\partial \theta }}\log(f(y,\theta ,\phi ))={\frac {\partial }{\partial \theta }}\left[{\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi )\right]={\frac {y-b'(\theta )}{\phi }}.}
Then taking the expected value and setting it equal to zero leads to
{\displaystyle \operatorname {E} _{\theta }\left[{\frac {y-b'(\theta )}{\phi }}\right]={\frac {\operatorname {E} _{\theta }[y]-b'(\theta )}{\phi }}=0}
{\displaystyle \operatorname {E} _{\theta }[y]=b'(\theta )}
Variance of Y: To compute the variance we use the second Bartlett identity,
{\displaystyle \operatorname {Var} _{\theta }\left[{\frac {\partial }{\partial \theta }}\left({\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi )\right)\right]+\operatorname {E} _{\theta }\left[{\frac {\partial ^{2}}{\partial \theta ^{2}}}\left({\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi )\right)\right]=0}
{\displaystyle \operatorname {Var} _{\theta }\left[{\frac {y-b'(\theta )}{\phi }}\right]+\operatorname {E} _{\theta }\left[{\frac {-b''(\theta )}{\phi }}\right]=0}
{\displaystyle \operatorname {Var} _{\theta }\left[y\right]=b''(\theta )\phi }
We now have a relationship between {\displaystyle \mu } and {\displaystyle \theta }, namely {\displaystyle \mu =b'(\theta )} and {\displaystyle \theta =b'^{-1}(\mu )}, which allows for a relationship between {\displaystyle \mu } and the variance:
{\displaystyle V(\theta )=b''(\theta )={\text{the part of the variance that depends on }}\theta }
{\displaystyle \operatorname {V} (\mu )=b''(b'^{-1}(\mu )).}
Note that because {\displaystyle \operatorname {Var} _{\theta }\left[y\right]>0,b''(\theta )>0}, the map {\displaystyle b':\theta \rightarrow \mu } is invertible. We derive the variance function for a few common distributions.
==== Example – normal ====
The normal distribution is a special case where the variance function is constant. Let {\displaystyle y\sim N(\mu ,\sigma ^{2})}; then we put the density function of y in the form of the exponential family described above:
{\displaystyle f(y)=\exp \left({\frac {y\mu -{\frac {\mu ^{2}}{2}}}{\sigma ^{2}}}-{\frac {y^{2}}{2\sigma ^{2}}}-{\frac {1}{2}}\ln {2\pi \sigma ^{2}}\right)}
where {\displaystyle \theta =\mu }, {\displaystyle b(\theta )={\frac {\mu ^{2}}{2}}}, {\displaystyle \phi =\sigma ^{2}}, and {\displaystyle c(y,\phi )=-{\frac {y^{2}}{2\sigma ^{2}}}-{\frac {1}{2}}\ln {2\pi \sigma ^{2}}}.
To calculate the variance function {\displaystyle V(\mu )}, we first express {\displaystyle \theta } as a function of {\displaystyle \mu } and then transform {\displaystyle V(\theta )} into a function of {\displaystyle \mu }:
{\displaystyle \theta =\mu }
{\displaystyle b'(\theta )=\theta =\operatorname {E} [y]=\mu }
{\displaystyle V(\theta )=b''(\theta )=1}
Therefore, the variance function is constant.
==== Example – Bernoulli ====
Let $y \sim \text{Bernoulli}(p)$; then we express the density of the Bernoulli distribution in exponential family form,
$$f(y) = \exp\left(y \ln\frac{p}{1-p} + \ln(1-p)\right)$$
with $\theta = \ln\frac{p}{1-p} = \operatorname{logit}(p)$, which gives us $p = \frac{e^\theta}{1+e^\theta} = \operatorname{expit}(\theta)$,
$$b(\theta) = \ln(1 + e^\theta),$$
and
$$b'(\theta) = \frac{e^\theta}{1+e^\theta} = \operatorname{expit}(\theta) = p = \mu,$$
$$b''(\theta) = \frac{e^\theta}{1+e^\theta} - \left(\frac{e^\theta}{1+e^\theta}\right)^2.$$
This gives us
$$V(\mu) = \mu(1-\mu).$$
==== Example – Poisson ====
Let $y \sim \text{Poisson}(\lambda)$; then we express the density of the Poisson distribution in exponential family form,
$$f(y) = \exp\left(y \ln\lambda - \lambda - \ln y!\right)$$
with $\theta = \ln\lambda$, which gives us $\lambda = e^\theta$,
$$b(\theta) = e^\theta,$$
and
$$b'(\theta) = e^\theta = \lambda = \mu,$$
$$b''(\theta) = e^\theta = \mu.$$
This gives us
$$V(\mu) = \mu.$$
Here we see the central property of Poisson data, that the variance is equal to the mean.
==== Example – Gamma ====
The Gamma distribution and density function can be expressed under different parametrizations. We will use the form of the gamma with parameters
$(\mu, \nu)$:
$$f_{\mu,\nu}(y) = \frac{1}{\Gamma(\nu)y}\left(\frac{\nu y}{\mu}\right)^{\nu} e^{-\frac{\nu y}{\mu}}$$
Then in exponential family form we have
$$f_{\mu,\nu}(y) = \exp\left(\frac{-\frac{1}{\mu}y + \ln\frac{1}{\mu}}{\frac{1}{\nu}} + \ln\left(\frac{\nu^\nu y^{\nu-1}}{\Gamma(\nu)}\right)\right)$$
$$\theta = \frac{-1}{\mu} \quad\Rightarrow\quad \mu = \frac{-1}{\theta},$$
$$\phi = \frac{1}{\nu},$$
$$b(\theta) = -\ln(-\theta),$$
$$b'(\theta) = \frac{-1}{\theta} = \frac{-1}{-1/\mu} = \mu,$$
$$b''(\theta) = \frac{1}{\theta^2} = \mu^2.$$
And we have
$$V(\mu) = \mu^2.$$
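The variance functions derived in the four examples above can be checked numerically: a finite-difference second derivative of each cumulant function $b(\theta)$ should match the stated $V(\mu)$. A minimal sketch in Python (the helper, the test points, and the tolerance are our own choices, not from the source):

```python
import math

# Central finite-difference approximation of b''(theta).
def second_derivative(b, theta, h=1e-4):
    return (b(theta + h) - 2 * b(theta) + b(theta - h)) / h**2

p = math.exp(0.3) / (1 + math.exp(0.3))   # expit(0.3), the Bernoulli mean at theta = 0.3

# (name, cumulant b(theta), theta, expected V(mu) at the implied mu)
cases = [
    ("normal",    lambda t: t * t / 2,                 0.7,  1.0),            # V(mu) = 1
    ("bernoulli", lambda t: math.log(1 + math.exp(t)), 0.3,  p * (1 - p)),    # V(mu) = mu(1-mu)
    ("poisson",   lambda t: math.exp(t),               0.5,  math.exp(0.5)),  # V(mu) = mu
    ("gamma",     lambda t: -math.log(-t),             -2.0, 0.25),           # V(mu) = mu^2, mu = -1/theta
]

for name, b, theta, expected in cases:
    v = second_derivative(b, theta)
    assert abs(v - expected) < 1e-5, (name, v, expected)
    print(f"{name}: b''(theta) = {v:.6f}, V(mu) = {expected:.6f}")
```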
==== Application – weighted least squares ====
A very important application of the variance function is its use in parameter estimation and inference when the response variable is of the required exponential family form, as well as in some cases when it is not (which we will discuss in quasi-likelihood). Weighted least squares (WLS) is a special case of generalized least squares. Each term in the WLS criterion includes a weight that determines the influence each observation has on the final parameter estimates. As in regular least squares, the goal is to estimate the unknown parameters in the regression function by finding values for the parameter estimates that minimize the sum of the squared deviations between the observed responses and the functional portion of the model.
While WLS assumes independence of observations, it does not assume equal variance and is therefore a solution for parameter estimation in the presence of heteroscedasticity. The Gauss–Markov theorem and its generalization by Aitken demonstrate that the best linear unbiased estimator (BLUE), the unbiased estimator with minimum variance, has each weight equal to the reciprocal of the variance of the measurement.
In the GLM framework, our goal is to estimate parameters $\beta$, where $Z = g(\operatorname{E}[y \mid X]) = X\beta$. Therefore, we would like to minimize
$$(Z - XB)^T W (Z - XB),$$
and if we define the weight matrix W as
$$\underbrace{W}_{n \times n} = \begin{bmatrix} \frac{1}{\phi V(\mu_1)g'(\mu_1)^2} & 0 & \cdots & 0 \\ 0 & \frac{1}{\phi V(\mu_2)g'(\mu_2)^2} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \frac{1}{\phi V(\mu_n)g'(\mu_n)^2} \end{bmatrix},$$
where $\phi$, $V(\mu)$, and $g(\mu)$ are defined in the previous section, it allows for iteratively reweighted least squares (IRLS) estimation of the parameters. See the section on iteratively reweighted least squares for more derivation and information.
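To make the role of these weights concrete, the following sketch runs IRLS for a Poisson regression with a log link (the canonical case, where $\phi = 1$, $V(\mu) = \mu$ and $g'(\mu) = 1/\mu$, so the weight for each observation reduces to $\mu_i$). The data, starting values, and the tiny 2x2 solver are illustrative choices of ours:

```python
import math

# Made-up data: one covariate plus an intercept.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1, 2, 4, 9, 16]

def solve2(a11, a12, a22, b1, b2):
    # Solve the symmetric 2x2 system [[a11, a12], [a12, a22]] beta = [b1, b2].
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

beta0, beta1 = math.log(sum(y) / len(y)), 0.0   # start at the mean-only model
for _ in range(50):
    eta = [beta0 + beta1 * xi for xi in x]
    mu = [math.exp(e) for e in eta]
    # Working response z_i = eta_i + (y_i - mu_i) g'(mu_i); weights w_i = mu_i.
    z = [e + (yi - mi) / mi for e, yi, mi in zip(eta, y, mu)]
    sw = sum(mu)
    swx = sum(m * xi for m, xi in zip(mu, x))
    swxx = sum(m * xi * xi for m, xi in zip(mu, x))
    swz = sum(m * zi for m, zi in zip(mu, z))
    swxz = sum(m * xi * zi for m, xi, zi in zip(mu, x, z))
    beta0, beta1 = solve2(sw, swx, swxx, swz, swxz)

# At the MLE the score equations sum_i (y_i - mu_i) x_ir = 0 hold.
mu = [math.exp(beta0 + beta1 * xi) for xi in x]
assert abs(sum(yi - mi for yi, mi in zip(y, mu))) < 1e-8
assert abs(sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))) < 1e-8
print(f"beta = ({beta0:.4f}, {beta1:.4f})")
```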
It is also important to note that when the weight matrix is of the form described here, minimizing the expression
$$(Z - XB)^T W (Z - XB)$$
also minimizes the Pearson distance. See Distance correlation for more.
The matrix W falls right out of the estimating equations for estimation of $\beta$. Maximum likelihood estimation for each parameter $\beta_r$, $1 \leq r \leq p$, requires
$$\sum_{i=1}^{n} \frac{\partial l_i}{\partial \beta_r} = 0,$$
where
$$l(\theta, y, \phi) = \log f(y, \theta, \phi) = \frac{y\theta - b(\theta)}{\phi} + c(y, \phi)$$
is the log-likelihood.
Looking at a single observation we have,
$$\frac{\partial l}{\partial \beta_r} = \frac{\partial l}{\partial \theta}\frac{\partial \theta}{\partial \mu}\frac{\partial \mu}{\partial \eta}\frac{\partial \eta}{\partial \beta_r},$$
$$\frac{\partial \eta}{\partial \beta_r} = x_r,$$
$$\frac{\partial l}{\partial \theta} = \frac{y - b'(\theta)}{\phi} = \frac{y - \mu}{\phi},$$
$$\frac{\partial \theta}{\partial \mu} = \frac{\partial b'^{-1}(\mu)}{\partial \mu} = \frac{1}{b''(b'^{-1}(\mu))} = \frac{1}{V(\mu)}.$$
This gives us
$$\frac{\partial l}{\partial \beta_r} = \frac{y - \mu}{\phi V(\mu)}\frac{\partial \mu}{\partial \eta}x_r,$$
and noting that
$$\frac{\partial \eta}{\partial \mu} = g'(\mu),$$
we have that
$$\frac{\partial l}{\partial \beta_r} = (y - \mu)W\frac{\partial \eta}{\partial \mu}x_r.$$
The Hessian matrix is determined in a similar manner and can be shown to be,
$$H = X^T(y - \mu)\left[\frac{\partial}{\partial \beta_s}W\frac{\partial}{\partial \beta_r}\right] - X^T W X$$
Noticing that the Fisher information (FI),
$$\text{FI} = -E[H] = X^T W X,$$
allows for an asymptotic approximation of $\hat{\beta}$,
$$\hat{\beta} \sim N_p(\beta, (X^T W X)^{-1})$$
, and hence inference can be performed.
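As an illustration of that inference step, the sketch below evaluates the 2x2 Fisher information $X^T W X$ for the canonical Poisson/log-link case (where the weight is $w_i = \mu_i$), inverts it by hand, and forms Wald standard errors. The design points and coefficient values are made up:

```python
import math

# Made-up design (intercept plus one covariate) and coefficients.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
beta0, beta1 = 0.1, 0.7
mu = [math.exp(beta0 + beta1 * xi) for xi in x]   # canonical Poisson weights

# Entries of the 2x2 matrix X^T W X for rows X_i = (1, x_i).
a = sum(mu)
b = sum(m * xi for m, xi in zip(mu, x))
c = sum(m * xi * xi for m, xi in zip(mu, x))
det = a * c - b * b

# (X^T W X)^{-1} is the asymptotic covariance; SEs are its diagonal roots.
se0 = math.sqrt(c / det)
se1 = math.sqrt(a / det)
print(f"se(beta0) = {se0:.4f}, se(beta1) = {se1:.4f}")
print(f"95% Wald interval for beta1: "
      f"[{beta1 - 1.96 * se1:.4f}, {beta1 + 1.96 * se1:.4f}]")
```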
==== Application – quasi-likelihood ====
Because most features of GLMs only depend on the first two moments of the distribution, rather than the entire distribution, the quasi-likelihood can be developed by just specifying a link function and a variance function. That is, we need to specify
the link function,
$$E[y] = \mu = g^{-1}(\eta)$$
the variance function,
$V(\mu)$, where $\operatorname{Var}_\theta(y) = \sigma^2 V(\mu)$.
With a specified variance function and link function we can develop, as alternatives to the log-likelihood function, the score function, and the Fisher information, a quasi-likelihood, a quasi-score, and the quasi-information. This allows for full inference of
$\beta$.
Quasi-likelihood (QL)
Though called a quasi-likelihood, this is in fact a quasi-log-likelihood. The QL for one observation is
$$Q_i(\mu_i, y_i) = \int_{y_i}^{\mu_i} \frac{y_i - t}{\sigma^2 V(t)}\, dt$$
And therefore the QL for all n observations is
$$Q(\mu, y) = \sum_{i=1}^{n} Q_i(\mu_i, y_i) = \sum_{i=1}^{n} \int_{y_i}^{\mu_i} \frac{y_i - t}{\sigma^2 V(t)}\, dt$$
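For a concrete case, taking the Poisson-type variance function $V(t) = t$ makes the quasi-likelihood integral solvable in closed form, $Q = (y\ln(\mu/y) - (\mu - y))/\sigma^2$. The sketch below checks that closed form against a crude midpoint-rule integration (the numbers are made up):

```python
import math

# Closed form of the quasi-likelihood integral for V(t) = t.
def Q_closed(y, mu, sigma2=1.0):
    return (y * math.log(mu / y) - (mu - y)) / sigma2

# Midpoint-rule evaluation of the defining integral from y to mu.
def Q_numeric(y, mu, sigma2=1.0, n=100_000):
    h = (mu - y) / n
    total = 0.0
    for i in range(n):
        t = y + (i + 0.5) * h   # midpoint of each sub-interval
        total += (y - t) / (sigma2 * t)
    return total * h

y, mu = 3.0, 5.0
assert abs(Q_closed(y, mu) - Q_numeric(y, mu)) < 1e-6
print(Q_closed(y, mu))
```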
From the QL we have the quasi-score
Quasi-score (QS)
Recall the score function, U, for data with log-likelihood $l(\mu \mid y)$ is
$$U = \frac{\partial l}{\partial \mu}.$$
We obtain the quasi-score in an identical manner,
$$U = \frac{y - \mu}{\sigma^2 V(\mu)}$$
Noting that, for one observation, the score is
$$\frac{\partial Q}{\partial \mu} = \frac{y - \mu}{\sigma^2 V(\mu)}$$
The first two Bartlett equations are satisfied for the quasi-score, namely
$E[U] = 0$ and
$$\operatorname{Cov}(U) + E\left[\frac{\partial U}{\partial \mu}\right] = 0.$$
In addition, the quasi-score is linear in y.
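The two Bartlett identities can be verified exactly in the Bernoulli case, where $V(\mu) = \mu(1-\mu)$ and the expectation over $y \in \{0, 1\}$ is a two-term sum. A sketch (the value of $\mu$ and the finite-difference step are arbitrary choices):

```python
# Bernoulli quasi-score U = (y - mu) / (sigma^2 V(mu)), with sigma^2 = 1.
mu = 0.3

def U(y, m):
    return (y - m) / (m * (1 - m))

# First identity: E[U] = 0, weighting each outcome by its probability.
EU = (1 - mu) * U(0, mu) + mu * U(1, mu)
assert abs(EU) < 1e-12

# Second identity: Cov(U) + E[dU/dmu] = 0, derivative by central differences.
h = 1e-5
CovU = (1 - mu) * U(0, mu) ** 2 + mu * U(1, mu) ** 2
EdU = sum(p * (U(y, mu + h) - U(y, mu - h)) / (2 * h)
          for p, y in [(1 - mu, 0), (mu, 1)])
assert abs(CovU + EdU) < 1e-4
print(CovU, EdU)
```

Note that $\operatorname{Cov}(U)$ comes out as $1/(\mu(1-\mu))$, the reciprocal of the variance, as the identity requires.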
Ultimately the goal is to find information about the parameters of interest $\beta$. Both the QS and the QL are actually functions of $\beta$. Recall that $\mu = g^{-1}(\eta)$ and $\eta = X\beta$; therefore,
$$\mu = g^{-1}(X\beta).$$
Quasi-information (QI)
The quasi-information is similar to the Fisher information,
$$i_b = -\operatorname{E}\left[\frac{\partial U}{\partial \beta}\right]$$
QL, QS, QI as functions of $\beta$
The QL, QS and QI all provide the building blocks for inference about the parameters of interest, and therefore it is important to express the QL, QS and QI as functions of $\beta$. Recalling again that $\mu = g^{-1}(X\beta)$, we derive the expressions for QL, QS and QI parametrized under $\beta$.
Quasi-likelihood in $\beta$:
$$Q(\beta, y) = \int_{y}^{\mu(\beta)} \frac{y - t}{\sigma^2 V(t)}\, dt$$
The QS as a function of $\beta$ is therefore
$$U_j(\beta_j) = \frac{\partial}{\partial \beta_j} Q(\beta, y) = \sum_{i=1}^{n} \frac{\partial \mu_i}{\partial \beta_j}\frac{y_i - \mu_i(\beta_j)}{\sigma^2 V(\mu_i)}$$
$$U(\beta) = \begin{bmatrix} U_1(\beta) \\ U_2(\beta) \\ \vdots \\ U_p(\beta) \end{bmatrix} = D^T V^{-1} \frac{(y - \mu)}{\sigma^2},$$
where
$$\underbrace{D}_{n \times p} = \begin{bmatrix} \frac{\partial \mu_1}{\partial \beta_1} & \cdots & \frac{\partial \mu_1}{\partial \beta_p} \\ \frac{\partial \mu_2}{\partial \beta_1} & \cdots & \frac{\partial \mu_2}{\partial \beta_p} \\ \vdots & & \vdots \\ \frac{\partial \mu_n}{\partial \beta_1} & \cdots & \frac{\partial \mu_n}{\partial \beta_p} \end{bmatrix}, \qquad \underbrace{V}_{n \times n} = \operatorname{diag}(V(\mu_1), V(\mu_2), \ldots, V(\mu_n)).$$
The quasi-information matrix in $\beta$ is
$$i_b = -\frac{\partial U}{\partial \beta} = \operatorname{Cov}(U(\beta)) = \frac{D^T V^{-1} D}{\sigma^2}.$$
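For the Poisson-type variance function $V(\mu) = \mu$ with a log link, $\partial\mu_i/\partial\beta_j = \mu_i x_{ij}$, so $D = \operatorname{diag}(\mu)X$ and the quasi-information collapses to $X^T \operatorname{diag}(\mu) X$, the same $X^T W X$ matrix seen in the weighted least squares section. A numerical check on made-up data:

```python
import math

# Made-up design and coefficients.
x = [0.0, 1.0, 2.0, 3.0]
beta0, beta1 = 0.2, 0.5
mu = [math.exp(beta0 + beta1 * xi) for xi in x]

X = [[1.0, xi] for xi in x]
D = [[m * row[0], m * row[1]] for m, row in zip(mu, X)]   # dmu_i/dbeta_j = mu_i x_ij
Vinv = [1.0 / m for m in mu]                              # V = diag(V(mu_i)) = diag(mu_i)

# i_b[r][s] = sum_i D[i][r] * Vinv[i] * D[i][s], with sigma^2 = 1.
i_b = [[sum(D[i][r] * Vinv[i] * D[i][s] for i in range(4)) for s in range(2)]
       for r in range(2)]
# The X^T W X matrix with W = diag(mu).
XtWX = [[sum(X[i][r] * mu[i] * X[i][s] for i in range(4)) for s in range(2)]
        for r in range(2)]

for r in range(2):
    for s in range(2):
        assert abs(i_b[r][s] - XtWX[r][s]) < 1e-10
print(i_b)
```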
Obtaining the score function and the information of $\beta$ allows for parameter estimation and inference in a similar manner as described in Application – weighted least squares.
=== Non-parametric regression analysis ===
Non-parametric estimation of the variance function, and its importance, has been discussed widely in the literature.
In non-parametric regression analysis, the goal is to express the expected value of the response variable (y) as a function of the predictors (X). That is, we are looking to estimate a mean function,
$$g(x) = \operatorname{E}[y \mid X = x]$$
without assuming a parametric form. There are many forms of non-parametric smoothing methods to help estimate the function
$g(x)$. An interesting approach is to also look at a non-parametric variance function,
$$g_v(x) = \operatorname{Var}(Y \mid X = x).$$
. A non-parametric variance function allows one to look at the mean function as it relates to the variance function and notice patterns in the data.
$$g_v(x) = \operatorname{Var}(Y \mid X = x) = \operatorname{E}[y^2 \mid X = x] - \left[\operatorname{E}[y \mid X = x]\right]^2$$
An example is detailed in the pictures to the right. The goal of the project was to determine (among other things) whether or not the predictor, number of years in the major leagues (baseball), had an effect on the response, salary, a player made. An initial scatter plot of the data indicates that there is heteroscedasticity in the data, as the variance is not constant at each level of the predictor. Because we can visually detect the non-constant variance, it is useful now to plot
$g_v(x) = \operatorname{E}[y^2 \mid X = x] - \left[\operatorname{E}[y \mid X = x]\right]^2$, and look to see if the shape is indicative of any known distribution. One can estimate $\operatorname{E}[y^2 \mid X = x]$ and $\left[\operatorname{E}[y \mid X = x]\right]^2$
using a general smoothing method. The plot of the non-parametric smoothed variance function can give the researcher an idea of the relationship between the variance and the mean. The picture to the right indicates a quadratic relationship between the mean and the variance. As we saw above, the Gamma variance function is quadratic in the mean.
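The recipe above can be sketched directly: smooth $y$ and $y^2$ against $x$ with the same smoother, then subtract. The example below uses a Nadaraya–Watson (Gaussian kernel) smoother on synthetic data whose true standard deviation grows with $x$; the data, bandwidth, and smoother are all illustrative choices, not from the source:

```python
import math
import random

# Synthetic heteroscedastic data: mean 2x, standard deviation 0.5x.
random.seed(0)
xs = [i / 100 for i in range(1, 401)]                  # x in (0, 4]
ys = [2 * x + random.gauss(0, 0.5 * x) for x in xs]

def nw_smooth(x0, xs, ts, h=0.4):
    # Nadaraya-Watson estimate of E[t | x = x0] with Gaussian kernel bandwidth h.
    w = [math.exp(-0.5 * ((x0 - x) / h) ** 2) for x in xs]
    return sum(wi * ti for wi, ti in zip(w, ts)) / sum(w)

def var_at(x0):
    # g_v(x) = E[y^2 | x] - (E[y | x])^2, both estimated by the same smoother.
    m2 = nw_smooth(x0, xs, [y * y for y in ys])
    m1 = nw_smooth(x0, xs, ys)
    return m2 - m1 * m1

# The estimated conditional variance should grow with x (true value (0.5x)^2).
assert var_at(3.5) > var_at(1.0) > 0
print(var_at(1.0), var_at(3.5))
```

Plotting `var_at` over a grid of x values is the non-parametric analogue of the variance-function plots described above.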
The general linear model or general multivariate regression model is a compact way of simultaneously writing several multiple linear regression models. In that sense it is not a separate statistical linear model. The various multiple linear regression models may be compactly written as
$$\mathbf{Y} = \mathbf{X}\mathbf{B} + \mathbf{U},$$
where Y is a matrix with series of multivariate measurements (each column being a set of measurements on one of the dependent variables), X is a matrix of observations on independent variables that might be a design matrix (each column being a set of observations on one of the independent variables), B is a matrix containing parameters that are usually to be estimated and U is a matrix containing errors (noise). The errors are usually assumed to be uncorrelated across measurements, and follow a multivariate normal distribution. If the errors do not follow a multivariate normal distribution, generalized linear models may be used to relax assumptions about Y and U.
The general linear model (GLM) encompasses several statistical models, including ANOVA, ANCOVA, MANOVA, MANCOVA, and ordinary linear regression. Within this framework, both the t-test and the F-test can be applied. The general linear model is a generalization of multiple linear regression to the case of more than one dependent variable. If Y, B, and U were column vectors, the matrix equation above would represent multiple linear regression.
Hypothesis tests with the general linear model can be made in two ways: multivariate or as several independent univariate tests. In multivariate tests the columns of Y are tested together, whereas in univariate tests the columns of Y are tested independently, i.e., as multiple univariate tests with the same design matrix.
== Comparison to multiple linear regression ==
Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is
$$Y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \ldots + \beta_p X_{ip} + \epsilon_i$$
or more compactly
$$Y_i = \beta_0 + \sum_{k=1}^{p} \beta_k X_{ik} + \epsilon_i$$
for each observation i = 1, ... , n.
In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, and Xik is the ith observation of the kth independent variable, k = 1, 2, ..., p. The values βk represent parameters to be estimated, and εi is the ith independent, identically distributed normal error.
In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:
$$Y_{ij} = \beta_{0j} + \beta_{1j} X_{i1} + \beta_{2j} X_{i2} + \ldots + \beta_{pj} X_{ip} + \epsilon_{ij}$$
or more compactly
$$Y_{ij} = \beta_{0j} + \sum_{k=1}^{p} \beta_{kj} X_{ik} + \epsilon_{ij}$$
for all observations indexed as i = 1, ... , n and for all dependent variables indexed as j = 1, ... , m.
Note that, since each dependent variable has its own set of regression parameters to be fitted, from a computational point of view the general multivariate regression is simply a sequence of standard multiple linear regressions using the same explanatory variables.
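That computational observation can be shown in a few lines: fitting each column of Y by ordinary least squares against the same X recovers the columns of B one at a time. A tiny made-up example with an intercept and one predictor, solving the 2x2 normal equations directly:

```python
# Made-up data: 4 observations, design X = [1, x], two dependent variables.
X = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
Y = [[1.0, 0.5], [3.1, 1.4], [4.9, 2.6], [7.2, 3.4]]

def ols(X, y):
    # Solve the normal equations (X^T X) b = X^T y for a two-column X.
    a = sum(r[0] * r[0] for r in X)
    b = sum(r[0] * r[1] for r in X)
    c = sum(r[1] * r[1] for r in X)
    d1 = sum(r[0] * yi for r, yi in zip(X, y))
    d2 = sum(r[1] * yi for r, yi in zip(X, y))
    det = a * c - b * b
    return [(c * d1 - b * d2) / det, (a * d2 - b * d1) / det]

# One standard multiple regression per dependent variable; B stacks the results.
B = [ols(X, [row[j] for row in Y]) for j in range(2)]
print(B)   # one coefficient vector per column of Y
```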
== Comparison to generalized linear model ==
The general linear model and the generalized linear model (GLM) are two commonly used families of statistical methods to relate some number of continuous and/or categorical predictors to a single outcome variable.
The main difference between the two approaches is that the general linear model strictly assumes that the residuals will follow a conditionally normal distribution, while the GLM loosens this assumption and allows for a variety of other distributions from the exponential family for the residuals. The general linear model is a special case of the GLM in which the distribution of the residuals follow a conditionally normal distribution.
The distribution of the residuals largely depends on the type and distribution of the outcome variable; different types of outcome variables lead to the variety of models within the GLM family. Commonly used models in the GLM family include binary logistic regression for binary or dichotomous outcomes, Poisson regression for count outcomes, and linear regression for continuous, normally distributed outcomes. This means that GLM may be spoken of as a general family of statistical models or as specific models for specific outcome types.
== Applications ==
An application of the general linear model appears in the analysis of multiple brain scans in scientific experiments where Y contains data from brain scanners and X contains experimental design variables and confounds. It is usually tested in a univariate way (usually referred to as mass-univariate in this setting) and is often referred to as statistical parametric mapping.
== See also ==
Bayesian multivariate linear regression
F-test
t-test
In statistics, a generalized additive model (GAM) is a generalized linear model in which the linear response variable depends linearly on unknown smooth functions of some predictor variables, and interest focuses on inference about these smooth functions.
GAMs were originally developed by Trevor Hastie and Robert Tibshirani to blend properties of generalized linear models with additive models. They can be interpreted as the discriminative generalization of the naive Bayes generative model.
The model relates a univariate response variable, Y, to some predictor variables, xi. An exponential family distribution is specified for Y (for example normal, binomial or Poisson distributions) along with a link function g (for example the identity or log functions) relating the expected value of Y to the predictor variables via a structure such as
$$g(\operatorname{E}(Y)) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_m(x_m).$$
The functions fi may be functions with a specified parametric form (for example a polynomial, or an un-penalized regression spline of a variable) or may be specified non-parametrically, or semi-parametrically, simply as 'smooth functions', to be estimated by non-parametric means. So a typical GAM might use a scatterplot smoothing function, such as a locally weighted mean, for f1(x1), and then use a factor model for f2(x2). This flexibility to allow non-parametric fits with relaxed assumptions on the actual relationship between response and predictor provides the potential for better fits to data than purely parametric models, but arguably with some loss of interpretability.
== Theoretical background ==
It had been known since the 1950s (via the Kolmogorov–Arnold representation theorem) that any multivariate continuous function could be represented as sums and compositions of univariate functions,
$$f(\vec{x}) = \sum_{q=0}^{2n} \Phi_q\left(\sum_{p=1}^{n} \phi_{q,p}(x_p)\right).$$
Unfortunately, though the Kolmogorov–Arnold representation theorem asserts the existence of a function of this form, it gives no mechanism whereby one could be constructed. Certain constructive proofs exist, but they tend to require highly complicated (i.e. fractal) functions, and thus are not suitable for modeling approaches. Therefore, the generalized additive model drops the outer sum, and demands instead that the function belong to a simpler class,
$$f(\vec{x}) = \Phi\left(\sum_{p=1}^{n} \phi_p(x_p)\right),$$
where $\Phi$ is a smooth monotonic function. Writing $g$ for the inverse of $\Phi$, this is traditionally written as
$$g(f(\vec{x})) = \sum_i f_i(x_i).$$
When this function is approximating the expectation of some observed quantity, it could be written as
$$g(\operatorname{E}(Y)) = \beta_0 + f_1(x_1) + f_2(x_2) + \cdots + f_m(x_m),$$
which is the standard formulation of a generalized additive model. It was then shown that the backfitting algorithm will always converge for these functions.
== Generality ==
The GAM model class is quite broad, given that smooth function is a rather broad category. For example, a covariate $x_j$ may be multivariate and the corresponding $f_j$ a smooth function of several variables, or $f_j$ might be the function mapping the level of a factor to the value of a random effect. Another example is a varying coefficient (geographic regression) term such as
$z_j f_j(x_j)$ where $z_j$ and $x_j$ are both covariates. Or if $x_j(t)$ is itself an observation of a function, we might include a term such as $\int f_j(t) x_j(t)\, dt$ (sometimes known as a signal regression term). $f_j$ could also be a simple parametric function as might be used in any generalized linear model. The model class has been generalized in several directions, notably beyond exponential family response distributions, beyond modelling of only the mean and beyond univariate data.
== GAM fitting methods ==
The original GAM fitting method estimated the smooth components of the model using non-parametric smoothers (for example smoothing splines or local linear regression smoothers) via the backfitting algorithm. Backfitting works by iterative smoothing of partial residuals and provides a very general modular estimation method capable of using a wide variety of smoothing methods to estimate the $f_j(x_j)$ terms. A disadvantage of backfitting is that it is difficult to integrate with the estimation of the degree of smoothness of the model terms, so that in practice the user must set these, or select between a modest set of pre-defined smoothing levels.
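The backfitting loop itself is short: cycle over the terms, smoothing the partial residuals of each against its covariate. The sketch below uses a deliberately crude running-mean smoother on synthetic data; the smoother, data, and iteration count are all illustrative choices:

```python
import random

# Synthetic additive data: y = 2 + x1^2 + 0.5 x2 + small noise.
random.seed(1)
n = 200
x1 = [random.uniform(-1, 1) for _ in range(n)]
x2 = [random.uniform(-1, 1) for _ in range(n)]
y = [2.0 + x1[i] ** 2 + 0.5 * x2[i] + random.gauss(0, 0.05) for i in range(n)]

def smooth(x, r, width=0.2):
    # Running-mean smoother: average residuals with |x_j - x_i| < width.
    out = []
    for xi in x:
        vals = [rj for xj, rj in zip(x, r) if abs(xj - xi) < width]
        out.append(sum(vals) / len(vals))
    return out

alpha = sum(y) / n
f1 = [0.0] * n
f2 = [0.0] * n
for _ in range(20):
    # Smooth each term's partial residuals, centering to keep the
    # intercept identifiable.
    r1 = [y[i] - alpha - f2[i] for i in range(n)]
    f1 = smooth(x1, r1)
    f1 = [v - sum(f1) / n for v in f1]
    r2 = [y[i] - alpha - f1[i] for i in range(n)]
    f2 = smooth(x2, r2)
    f2 = [v - sum(f2) / n for v in f2]

fit = [alpha + f1[i] + f2[i] for i in range(n)]
sse = sum((y[i] - fit[i]) ** 2 for i in range(n))
print(sse / n)   # should be far below the raw variance of y
```

Note the fixed smoother window: the loop estimates the $f_j$ but, as the text says, leaves the degree of smoothness to the user.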
If the $f_j(x_j)$ are represented using smoothing splines then the degree of smoothness can be estimated as part of model fitting using generalized cross validation, or by restricted maximum likelihood (REML, sometimes known as 'GML') which exploits the duality between spline smoothers and Gaussian random effects. This full spline approach carries an $O(n^3)$ computational cost, where $n$ is the number of observations for the response variable, rendering it somewhat impractical for moderately large datasets. More recent methods have addressed this computational cost either by up-front reduction of the size of the basis used for smoothing (rank reduction) or by finding sparse representations of the smooths using Markov random fields, which are amenable to the use of sparse matrix methods for computation. These more computationally efficient methods use GCV (or AIC or similar) or REML or take a fully Bayesian approach for inference about the degree of smoothness of the model components. Estimating the degree of smoothness via REML can be viewed as an empirical Bayes method.
An alternative approach with particular advantages in high dimensional settings is to use boosting, although this typically requires bootstrapping for uncertainty quantification. GAMs fit using bagging and boosting have been found to generally outperform GAMs fit using spline methods.
== The rank reduced framework ==
Many modern implementations of GAMs and their extensions are built around the reduced rank smoothing approach, because it allows well founded estimation of the smoothness of the component smooths at comparatively modest computational cost, and also facilitates implementation of a number of model extensions in a way that is more difficult with other methods. At its simplest the idea is to replace the unknown smooth functions in the model with basis expansions
$$f_j(x_j) = \sum_{k=1}^{K_j} \beta_{jk} b_{jk}(x_j)$$
where the $b_{jk}(x_j)$ are known basis functions, usually chosen for good approximation theoretic properties (for example B splines or reduced rank thin plate splines), and the $\beta_{jk}$ are coefficients to be estimated as part of model fitting. The basis dimension $K_j$ is chosen to be sufficiently large that we expect it to overfit the data to hand (thereby avoiding bias from model over-simplification), but small enough to retain computational efficiency. If $p = \sum_j K_j$ then the computational cost of model estimation this way will be $O(np^2)$.
Notice that the $f_j$ are only identifiable to within an intercept term (we could add any constant to $f_1$ while subtracting it from $f_2$ without changing the model predictions at all), so identifiability constraints have to be imposed on the smooth terms to remove this ambiguity. Sharpest inference about the $f_j$ is generally obtained by using the sum-to-zero constraints
$$\sum_i f_j(x_{ji}) = 0,$$
i.e. by insisting that the sum of each $f_j$ evaluated at its observed covariate values should be zero. Such linear constraints can most easily be imposed by reparametrization at the basis setup stage, so below it is assumed that this has been done.
Having replaced all the $f_j$ in the model with such basis expansions we have turned the GAM into a generalized linear model (GLM), with a model matrix that simply contains the basis functions evaluated at the observed $x_j$ values. However, because the basis dimensions, $K_j$, have been chosen to be somewhat larger than is believed to be necessary for the data, the model is over-parameterized and will overfit the data if estimated as a regular GLM. The solution to this problem is to penalize departure from smoothness in the model fitting process, controlling the weight given to the smoothing penalties using smoothing parameters. For example, consider the situation in which all the smooths are univariate functions. Writing all the parameters in one vector, $\beta$, suppose that $D(\beta)$ is the deviance (twice the difference between saturated log likelihood and the model log likelihood) for the model. Minimizing the deviance by the usual iteratively re-weighted least squares would result in overfit, so we seek $\beta$ to minimize
$$D(\beta) + \sum_j \lambda_j \int f_j''(x)^2\, dx,$$
where the integrated square second derivative penalties serve to penalize wiggliness (lack of smoothness) of the $f_j$ during fitting, and the smoothing parameters $\lambda_j$ control the tradeoff between model goodness of fit and model smoothness. In the example $\lambda_j \to \infty$ would ensure that the estimate of $f_j(x_j)$ would be a straight line in $x_j$.
Given the basis expansion for each $f_j$ the wiggliness penalties can be expressed as quadratic forms in the model coefficients. That is, we can write

$$\int f_j''(x)^2 \, dx = \beta_j^T \bar{S}_j \beta_j = \beta^T S_j \beta,$$

where $\bar{S}_j$ is a matrix of known coefficients computable from the penalty and basis, $\beta_j$ is the vector of coefficients for $f_j$, and $S_j$ is just $\bar{S}_j$ padded with zeros so that the second equality holds and we can write the penalty in terms of the full coefficient vector $\beta$. Many other smoothing penalties can be written in the same way, and given the smoothing parameters the model fitting problem now becomes
$$\hat{\beta} = \operatorname{argmin}_\beta \left\{ D(\beta) + \sum_j \lambda_j \beta^T S_j \beta \right\},$$

which can be found using a penalized version of the usual iteratively reweighted least squares (IRLS) algorithm for GLMs: the algorithm is unchanged except that the sum of quadratic penalties is added to the working least squares objective at each iteration of the algorithm.
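The penalized IRLS iteration described above can be sketched in a few lines. This is only an illustration, not mgcv's implementation: it assumes a Poisson response with log link, uses a small polynomial basis, and substitutes a second-difference coefficient penalty for the integrated derivative penalty; all data and names here are invented.

```python
import numpy as np

def penalized_irls(X, y, S, lam, n_iter=100, tol=1e-10):
    """Penalized IRLS for a Poisson response with log link.

    Each step forms the usual IRLS working response and weights, then
    adds the quadratic penalty lam * b'Sb to the working least squares
    objective, so the update solves (X'WX + lam*S) b = X'W z.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        mu = np.exp(eta)                    # inverse log link
        w = mu                              # Poisson/log IRLS weights
        z = eta + (y - mu) / mu             # working response
        new = np.linalg.solve(X.T @ (w[:, None] * X) + lam * S,
                              X.T @ (w * z))
        if np.max(np.abs(new - beta)) < tol:
            beta = new
            break
        beta = new
    return beta

# Toy data and basis (purely illustrative)
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
X = np.vander(x, 6, increasing=True)        # small polynomial basis
y = rng.poisson(np.exp(1.0 + np.sin(2.0 * np.pi * x))).astype(float)

D = np.diff(np.eye(6), n=2, axis=0)         # second-difference penalty
S = D.T @ D

beta_hat = penalized_irls(X, y, S, lam=1.0)
```

At convergence the penalized score equation $X^T(y - \mu) = \lambda S \hat{\beta}$ holds, which provides a quick correctness check on the fixed point.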
Penalization has several effects on inference, relative to a regular GLM. For one thing the estimates are subject to some smoothing bias, which is the price that must be paid for limiting estimator variance by penalization. However, if smoothing parameters are selected appropriately the (squared) smoothing bias introduced by penalization should be less than the reduction in variance that it produces, so that the net effect is a reduction in mean square estimation error, relative to not penalizing. A related effect of penalization is that the notion of degrees of freedom of a model has to be modified to account for the penalties' action in reducing the coefficients' freedom to vary. For example, if $W$ is the diagonal matrix of IRLS weights at convergence, and $X$ is the GAM model matrix, then the model effective degrees of freedom is given by $\operatorname{trace}(F)$, where

$$F = \left(X^T W X + \sum_j \lambda_j S_j\right)^{-1} X^T W X$$

is the effective degrees of freedom matrix. In fact summing just the diagonal elements of $F$ corresponding to the coefficients of $f_j$ gives the effective degrees of freedom for the estimate of $f_j$.
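The effective degrees of freedom formula can be evaluated directly. A small numerical sketch follows (design matrix, penalty, and unit IRLS weights are all invented for illustration): as $\lambda_j \to 0$ the trace approaches the number of coefficients, while as $\lambda_j \to \infty$ it approaches the dimension of the penalty null space.

```python
import numpy as np

def effective_df(X, W, S_list, lam_list):
    """trace(F) with F = (X'WX + sum_j lam_j*S_j)^{-1} X'WX."""
    XtWX = X.T @ (W[:, None] * X)
    P = sum(l * S for l, S in zip(lam_list, S_list))
    return np.trace(np.linalg.solve(XtWX + P, XtWX))

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 8))            # invented model matrix
W = np.ones(50)                             # unit weights (Gaussian case)
D = np.diff(np.eye(8), n=2, axis=0)
S = D.T @ D                                 # second-difference penalty

# Null space of S is 2-dimensional here (constant and linear trends),
# so the edf shrinks from 8 towards 2 as the smoothing parameter grows.
edf_small = effective_df(X, W, [S], [1e-8])
edf_large = effective_df(X, W, [S], [1e8])
```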
=== Bayesian smoothing priors ===
Smoothing bias complicates interval estimation for these models, and the simplest approach turns out to involve a Bayesian approach. Understanding this Bayesian view of smoothing also helps to understand the REML and full Bayes approaches to smoothing parameter estimation. At some level smoothing penalties are imposed because we believe smooth functions to be more probable than wiggly ones, and if that is true then we might as well formalize this notion by placing a prior on model wiggliness. A very simple prior might be

$$\pi(\beta) \propto \exp\left\{-\beta^T \sum_j \lambda_j S_j \beta / (2\phi)\right\}$$

(where $\phi$ is the GLM scale parameter, introduced only for later convenience), but we can immediately recognize this as a multivariate normal prior with mean $0$ and precision matrix $S_\lambda = \sum_j \lambda_j S_j / \phi$. Since the penalty allows some functions through unpenalized (straight lines, given the example penalties), $S_\lambda$ is rank deficient, and the prior is actually improper, with a covariance matrix given by the Moore–Penrose pseudoinverse of $S_\lambda$ (the impropriety corresponds to ascribing infinite variance to the unpenalized components of a smooth).
Now if this prior is combined with the GLM likelihood, we find that the posterior mode for $\beta$ is exactly the $\hat{\beta}$ found above by penalized IRLS. Furthermore, we have the large sample result that

$$\beta \mid y \sim N\!\left(\hat{\beta},\, (X^T W X + S_\lambda)^{-1} \phi\right),$$

which can be used to produce confidence/credible intervals for the smooth components, $f_j$.
The Gaussian smoothness priors are also the basis for fully Bayesian inference with GAMs, as well as methods estimating GAMs as mixed models that are essentially empirical Bayes methods.
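The large-sample posterior above translates directly into pointwise credible bands for a fitted smooth. A minimal sketch for the Gaussian/identity case (so $W = I$ and $\phi = \sigma^2$); the function name, the tiny design matrix, and the stand-in penalty matrix are all invented for illustration.

```python
import numpy as np

def smooth_credible_band(X, beta_hat, XtWX, S_lam, phi, z=1.96):
    """Pointwise ~95% credible band from beta | y ~ N(beta_hat, Vb),
    with Vb = (X'WX + S_lambda)^{-1} * phi."""
    Vb = np.linalg.inv(XtWX + S_lam) * phi
    fit = X @ beta_hat
    se = np.sqrt(np.einsum('ij,jk,ik->i', X, Vb, X))   # diag(X Vb X')
    return fit - z * se, fit + z * se

# Tiny Gaussian/identity example (W = I, phi = sigma^2); values invented
X = np.array([[1.0, 0.0], [1.0, 0.5], [1.0, 1.0]])
XtWX = X.T @ X
S_lam = 0.1 * np.eye(2)            # stand-in penalty precision matrix
beta_hat = np.array([0.2, 1.0])
lo, hi = smooth_credible_band(X, beta_hat, XtWX, S_lam, phi=0.04)
```

The band is centred on the fitted curve, with width governed by the posterior covariance of the coefficients.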
=== Smoothing parameter estimation ===
So far we have treated estimation and inference given the smoothing parameters, $\lambda$, but these also need to be estimated. One approach is to take a fully Bayesian approach, defining priors on the (log) smoothing parameters, and using stochastic simulation or high order approximation methods to obtain information about the posterior of the model coefficients. An alternative is to select the smoothing parameters to optimize a prediction error criterion such as generalized cross validation (GCV) or the Akaike information criterion (AIC). Finally we may choose to maximize the marginal likelihood (REML) obtained by integrating the model coefficients, $\beta$, out of the joint density of $\beta$ and $y$,
$$\hat{\lambda} = \operatorname{argmax}_\lambda \int f(y \mid \beta, \lambda)\, \pi(\beta \mid \lambda)\, d\beta.$$

Since $f(y \mid \beta, \lambda)$ is just the likelihood of $\beta$, we can view this as choosing $\lambda$ to maximize the average likelihood of random draws from the prior. The preceding integral is usually analytically intractable but can be approximated to quite high accuracy using Laplace's method.
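Laplace's method replaces the log-integrand by its second-order Taylor expansion about the mode, so the integral is approximated by a Gaussian one. A minimal generic sketch follows (the function name and interface are invented for illustration); in the Gaussian test case below the approximation is exact, which makes it a convenient sanity check.

```python
import numpy as np

def laplace_log_integral(g_at_mode, H):
    """log of the Laplace approximation to the integral of exp(g(beta)).

    g is expanded to second order about its mode beta_hat, with
    H = -g''(beta_hat) the negative Hessian there, giving
        log I ~= g(beta_hat) + (p/2) log(2*pi) - (1/2) log|H|.
    """
    p = H.shape[0]
    sign, logdet = np.linalg.slogdet(H)
    assert sign > 0, "H must be positive definite at the mode"
    return g_at_mode + 0.5 * p * np.log(2.0 * np.pi) - 0.5 * logdet

# Sanity check on a case where the approximation is exact:
# g(beta) = -0.5 beta'A beta has mode 0 and integral (2pi)^{p/2}|A|^{-1/2}.
A = np.array([[2.0, 0.3], [0.3, 1.0]])
approx = laplace_log_integral(0.0, A)
exact = np.log(2.0 * np.pi) - 0.5 * np.log(np.linalg.det(A))
```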
Smoothing parameter inference is the most computationally taxing part of model estimation/inference. For example, optimizing a GCV or marginal likelihood criterion typically requires numerical optimization via a Newton or quasi-Newton method, with each trial value for the (log) smoothing parameter vector requiring a penalized IRLS iteration to evaluate the corresponding $\hat{\beta}$ alongside the other ingredients of the GCV score or Laplace approximate marginal likelihood (LAML). Furthermore, obtaining the derivatives of the GCV or LAML required for optimization involves implicit differentiation to obtain the derivatives of $\hat{\beta}$ w.r.t. the log smoothing parameters, and this requires some care if efficiency and numerical stability are to be maintained.
== Software ==
Backfit GAMs were originally provided by the gam function in S, now ported to the R language as the gam package. The SAS proc GAM also provides backfit GAMs. The recommended package in R for GAMs is mgcv, which stands for mixed GAM computational vehicle, and is based on the reduced rank approach with automatic smoothing parameter selection. The SAS proc GAMPL is an alternative implementation. In Python, there is the PyGAM package, with similar features to R's mgcv. Alternatively, there is the InterpretML package, which implements a bagging and boosting approach. There are many alternative packages. Examples include the R packages mboost, which implements a boosting approach; gss, which provides the full spline smoothing methods; VGAM, which provides vector GAMs; and gamlss, which provides generalized additive models for location, scale and shape. BayesX and its R interface provide GAMs and extensions via MCMC and penalized likelihood methods. The INLA software implements a fully Bayesian approach based on Markov random field representations exploiting sparse matrix methods.
As an example of how models can be estimated in practice with software, consider R package mgcv. Suppose that our R workspace contains vectors y, x and z and we want to estimate the model

$$y_i = \beta_0 + f_1(x_i) + f_2(z_i) + \epsilon_i \text{ where } \epsilon_i \sim N(0, \sigma^2).$$
Within R we could issue the commands
library(mgcv) # load the package
b = gam(y ~ s(x) + s(z))
In common with most R modelling functions gam expects a model formula to be supplied, specifying the model structure to fit. The response variable is given to the left of the ~ while the specification of the linear predictor is given to the right. gam sets up bases and penalties for the smooth terms, estimates the model including its smoothing parameters and, in standard R fashion, returns a fitted model object, which can then be interrogated using various helper functions, such as summary, plot, predict, and AIC.
This simple example has used several default settings of which it is important to be aware. For example, a Gaussian distribution and identity link have been assumed, and the smoothing parameter selection criterion was GCV. Also the smooth terms were represented using "penalized thin plate regression splines", and the basis dimension for each was set to 10 (implying a maximum of 9 degrees of freedom after identifiability constraints have been imposed). A second example illustrates how we can control these things. Suppose that we want to estimate the model
$$y_i \sim \text{Poi}(\mu_i) \text{ where } \log \mu_i = \beta_0 + \beta_1 x_i + f_1(t_i) + f_2(v_i, w_i)$$
using REML smoothing parameter selection, and we expect $f_1$ to be a relatively complicated function which we would like to model with a penalized cubic regression spline. For $f_2$ we also have to decide whether $v$ and $w$ are naturally on the same scale so that an isotropic smoother such as a thin plate spline is appropriate (specified via "s(v,w)"), or whether they are really on different scales so that we need separate smoothing penalties and smoothing parameters for $v$ and $w$ as provided by a tensor product smoother. Suppose we opted for the latter in this case, then the following R code would estimate the model
b1 = gam(y ~ x + s(t,bs="cr",k=100) + te(v,w),family=poisson,method="REML")
which uses a basis size of 100 for the smooth of $t$. The specification of distribution and link function uses the "family" objects that are standard when fitting GLMs in R or S. Note that Gaussian random effects can also be added to the linear predictor.
These examples are only intended to give a very basic flavour of the way that GAM software is used; for more detail refer to the software documentation for the various packages and to the references below.
== Model checking ==
As with any statistical model it is important to check the model assumptions of a GAM. Residual plots should be examined in the same way as for any GLM. That is, deviance residuals (or other standardized residuals) should be examined for patterns that might suggest a substantial violation of the independence or mean-variance assumptions of the model. This will usually involve plotting the standardized residuals against fitted values and covariates to look for mean-variance problems or missing pattern, and may also involve examining correlograms (ACFs) and/or variograms of the residuals to check for violation of independence. If the model mean-variance relationship is correct then scaled residuals should have roughly constant variance. Note that since GLMs and GAMs can be estimated using quasi-likelihood, it follows that details of the distribution of the residuals beyond the mean-variance relationship are of relatively minor importance.
One issue that is more common with GAMs than with other GLMs is a danger of falsely concluding that data are zero inflated. The difficulty arises when data contain many zeroes that can be modelled by a Poisson or binomial with a very low expected value: the flexibility of the GAM structure will often allow representation of a very low mean over some region of covariate space, but the distribution of standardized residuals will fail to look anything like the approximate normality that introductory GLM classes teach us to expect, even if the model is perfectly correct.
The one extra check that GAMs introduce is the need to check that the degrees of freedom chosen are appropriate. This is particularly acute when using methods that do not automatically estimate the smoothness of model components. When using methods with automatic smoothing parameter selection then it is still necessary to check that the choice of basis dimension was not restrictively small, although if the effective degrees of freedom of a term estimate is comfortably below its basis dimension then this is unlikely. In any case, checking $f_j(x_j)$ is based on examining pattern in the residuals with respect to $x_j$. This can be done using partial residuals overlaid on the plot of $\hat{f}_j(x_j)$, or using permutation of the residuals to construct tests for residual pattern.
== Model selection ==
When smoothing parameters are estimated as part of model fitting then much of what would traditionally count as model selection has been absorbed into the fitting process: smoothing parameter estimation has already selected between a rich family of models of different functional complexity. However, smoothing parameter estimation does not typically remove a smooth term from the model altogether, because most penalties leave some functions un-penalized (e.g. straight lines are unpenalized by the spline derivative penalty given above). So the question of whether a term should be in the model at all remains. One simple approach to this issue is to add an extra penalty to each smooth term in the GAM, which penalizes the components of the smooth that would otherwise be unpenalized (and only those). Each extra penalty has its own smoothing parameter and estimation then proceeds as before, but now with the possibility that terms will be completely penalized to zero. In high dimensional settings it may make more sense to attempt this task using the lasso or elastic net regularization. Boosting also performs term selection automatically as part of fitting.
An alternative is to use traditional stepwise regression methods for model selection. This is also the default method when smoothing parameters are not estimated as part of fitting, in which case each smooth term is usually allowed to take one of a small set of pre-defined smoothness levels within the model, and these are selected between in a stepwise fashion. Stepwise methods operate by iteratively comparing models with or without particular model terms (or possibly with different levels of term complexity), and require measures of model fit or term significance in order to decide which model to select at each stage. For example, we might use p-values for testing each term for equality to zero to decide on candidate terms for removal from a model, and we might compare Akaike information criterion (AIC) values for alternative models.
P-value computation for smooths is not straightforward, because of the effects of penalization, but approximations are available. AIC can be computed in two ways for GAMs. The marginal AIC is based on the Marginal Likelihood (see above) with the model coefficients integrated out. In this case the AIC penalty is based on the number of smoothing parameters (and any variance parameters) in the model. However, because of the well known fact that REML is not comparable between models with different fixed effects structures, we can not usually use such an AIC to compare models with different smooth terms (since their un-penalized components act like fixed effects). Basing AIC on the marginal likelihood in which only the penalized effects are integrated out is possible (the number of un-penalized coefficients now gets added to the parameter count for the AIC penalty), but this version of the marginal likelihood suffers from the tendency to oversmooth that provided the original motivation for developing REML. Given these problems GAMs are often compared using the conditional AIC, in which the model likelihood (not marginal likelihood) is used in the AIC, and the parameter count is taken as the effective degrees of freedom of the model.
Naive versions of the conditional AIC have been shown to be much too likely to select larger models in some circumstances, a difficulty attributable to neglect of smoothing parameter uncertainty when computing the effective degrees of freedom, however correcting the effective degrees of freedom for this problem restores reasonable performance.
== Caveats ==
Overfitting can be a problem with GAMs, especially if there is un-modelled residual auto-correlation or un-modelled overdispersion. Cross-validation can be used to detect and/or reduce overfitting problems with GAMs (or other statistical methods), and software often allows the level of penalization to be increased to force smoother fits. Estimating very large numbers of smoothing parameters is also likely to be statistically challenging, and there are known tendencies for prediction error criteria (GCV, AIC etc.) to occasionally undersmooth substantially, particularly at moderate sample sizes, with REML being somewhat less problematic in this regard.
Where appropriate, simpler models such as GLMs may be preferable to GAMs unless GAMs improve predictive ability substantially (in validation sets) for the application in question.
== See also ==
Additive model
Backfitting algorithm
Generalized additive model for location, scale and shape (GAMLSS)
Residual effective degrees of freedom
Semiparametric regression
== References ==
== External links ==
gam, an R package for GAMs by backfitting.
gam, a Python module in statsmodels.
InterpretML, a Python package for fitting GAMs via bagging and boosting.
mgcv, an R package for GAMs using penalized regression splines.
mboost, an R package for boosting including additive models.
gss, an R package for smoothing spline ANOVA.
INLA software for Bayesian Inference with GAMs and more.
BayesX software for MCMC and penalized likelihood approaches to GAMs.
Doing magic and analyzing seasonal time series with GAM in R
GAM: The Predictive Modeling Silver Bullet
Building GAM by projection descent | Wikipedia/Generalized_additive_model |
In statistics, the variance function is a smooth function that depicts the variance of a random quantity as a function of its mean. The variance function is a measure of heteroscedasticity and plays a large role in many settings of statistical modelling. It is a main ingredient in the generalized linear model framework and a tool used in non-parametric regression, semiparametric regression and functional data analysis. In parametric modeling, variance functions take on a parametric form and explicitly describe the relationship between the variance and the mean of a random quantity. In a non-parametric setting, the variance function is assumed to be a smooth function.
== Intuition ==
In a regression model setting, the goal is to establish whether or not a relationship exists between a response variable and a set of predictor variables. Further, if a relationship does exist, the goal is then to be able to describe this relationship as best as possible. A main assumption in linear regression is constant variance (or homoscedasticity), meaning that different response variables have the same variance in their errors, at every predictor level. This assumption works well when the response variable and the predictor variable are jointly normal. As we will see later, the variance function in the normal setting is constant; however, we must find a way to quantify heteroscedasticity (non-constant variance) in the absence of joint normality.
When it is likely that the response follows a distribution that is a member of the exponential family, a generalized linear model may be more appropriate to use, and moreover, when we wish not to force a parametric model onto our data, a non-parametric regression approach can be useful. The importance of being able to model the variance as a function of the mean lies in improved inference (in a parametric setting), and estimation of the regression function in general, for any setting.
Variance functions play a very important role in parameter estimation and inference. In general, maximum likelihood estimation requires that a likelihood function be defined. This requirement then implies that one must first specify the distribution of the response variables observed. However, to define a quasi-likelihood, one need only specify a relationship between the mean and the variance of the observations to then be able to use the quasi-likelihood function for estimation. Quasi-likelihood estimation is particularly useful when there is overdispersion. Overdispersion occurs when there is more variability in the data than there should otherwise be expected according to the assumed distribution of the data.
In summary, to ensure efficient inference of the regression parameters and the regression function, the heteroscedasticity must be accounted for. Variance functions quantify the relationship between the variance and the mean of the observed data and hence play a significant role in regression estimation and inference.
== Types ==
The variance function and its applications come up in many areas of statistical analysis. A very important use of this function is in the framework of generalized linear models and non-parametric regression.
=== Generalized linear model ===
When a member of the exponential family has been specified, the variance function can easily be derived. The general form of the variance function is presented under the exponential family context, as well as specific forms for the normal, Bernoulli, Poisson, and gamma distributions. In addition, we describe the applications and use of variance functions in maximum likelihood estimation and quasi-likelihood estimation.
==== Derivation ====
The generalized linear model (GLM) is a generalization of ordinary regression analysis that extends to any member of the exponential family. It is particularly useful when the response variable is categorical, binary or subject to a constraint (e.g. only positive responses make sense). The components of a GLM are briefly summarized on this page; for more details and information see the page on generalized linear models.
A GLM consists of three main ingredients:
1. Random component: a distribution of $y$ from the exponential family, with mean $E[y \mid X] = \mu$
2. Linear predictor: $\eta = XB = \sum_{j=1}^{p} X_{ij}^{T} B_{j}$
3. Link function: $\eta = g(\mu)$, $\mu = g^{-1}(\eta)$
First it is important to derive a couple of key properties of the exponential family. Any random variable $y$ in the exponential family has a probability density function of the form

$$f(y, \theta, \phi) = \exp\left(\frac{y\theta - b(\theta)}{\phi} - c(y, \phi)\right)$$

with log-likelihood

$$\ell(\theta, y, \phi) = \log(f(y, \theta, \phi)) = \frac{y\theta - b(\theta)}{\phi} - c(y, \phi).$$

Here, $\theta$ is the canonical parameter and the parameter of interest, and $\phi$ is a nuisance parameter which plays a role in the variance.
We use Bartlett's identities to derive a general expression for the variance function. The first and second Bartlett results ensure that, under suitable conditions (see the Leibniz integral rule), for a density function $f_\theta$ dependent on $\theta$,

$$\operatorname{E}_\theta\!\left[\frac{\partial}{\partial \theta} \log(f_\theta(y))\right] = 0$$

$$\operatorname{Var}_\theta\!\left[\frac{\partial}{\partial \theta} \log(f_\theta(y))\right] + \operatorname{E}_\theta\!\left[\frac{\partial^2}{\partial \theta^2} \log(f_\theta(y))\right] = 0.$$

These identities lead to simple calculations of the expected value $\operatorname{E}_\theta[y]$ and variance $\operatorname{Var}_\theta[y]$ of any random variable $y$ in the exponential family.
Expected value of $y$: taking the first derivative with respect to $\theta$ of the log of the density in the exponential family form described above, we have

$$\frac{\partial}{\partial \theta} \log(f(y,\theta,\phi)) = \frac{\partial}{\partial \theta}\left[\frac{y\theta - b(\theta)}{\phi} - c(y,\phi)\right] = \frac{y - b'(\theta)}{\phi}.$$

Then taking the expected value and setting it equal to zero leads to

$$\operatorname{E}_\theta\!\left[\frac{y - b'(\theta)}{\phi}\right] = \frac{\operatorname{E}_\theta[y] - b'(\theta)}{\phi} = 0$$

$$\operatorname{E}_\theta[y] = b'(\theta).$$
Variance of $y$: to compute the variance we use the second Bartlett identity,

$$\operatorname{Var}_\theta\!\left[\frac{\partial}{\partial \theta}\left(\frac{y\theta - b(\theta)}{\phi} - c(y,\phi)\right)\right] + \operatorname{E}_\theta\!\left[\frac{\partial^2}{\partial \theta^2}\left(\frac{y\theta - b(\theta)}{\phi} - c(y,\phi)\right)\right] = 0$$

$$\operatorname{Var}_\theta\!\left[\frac{y - b'(\theta)}{\phi}\right] + \operatorname{E}_\theta\!\left[\frac{-b''(\theta)}{\phi}\right] = 0$$

$$\operatorname{Var}_\theta[y] = b''(\theta)\,\phi.$$
We now have a relationship between $\mu$ and $\theta$, namely $\mu = b'(\theta)$ and $\theta = b'^{-1}(\mu)$, which allows for a relationship between $\mu$ and the variance,

$$V(\theta) = b''(\theta) = \text{the part of the variance that depends on } \theta$$

$$V(\mu) = b''(b'^{-1}(\mu)).$$

Note that because $\operatorname{Var}_\theta[y] > 0$ we have $b''(\theta) > 0$, so $b' : \theta \to \mu$ is invertible.
We derive the variance function for a few common distributions.
==== Example – normal ====
The normal distribution is a special case where the variance function is a constant. Let $y \sim N(\mu, \sigma^2)$; then we put the density function of $y$ in the form of the exponential family described above:

$$f(y) = \exp\left(\frac{y\mu - \frac{\mu^2}{2}}{\sigma^2} - \frac{y^2}{2\sigma^2} - \frac{1}{2}\ln 2\pi\sigma^2\right)$$

where

$$\theta = \mu, \qquad b(\theta) = \frac{\theta^2}{2}, \qquad \phi = \sigma^2, \qquad c(y,\phi) = \frac{y^2}{2\sigma^2} + \frac{1}{2}\ln 2\pi\sigma^2.$$

To calculate the variance function $V(\mu)$, we first express $\theta$ as a function of $\mu$ and then transform $V(\theta)$ into a function of $\mu$:

$$\theta = \mu, \qquad b'(\theta) = \theta = \operatorname{E}[y] = \mu, \qquad V(\theta) = b''(\theta) = 1.$$

Therefore, the variance function is constant.
==== Example – Bernoulli ====
Let $y \sim \text{Bernoulli}(p)$; then we express the density of the Bernoulli distribution in exponential family form,

$$f(y) = \exp\left(y \ln \frac{p}{1-p} + \ln(1-p)\right)$$

$$\theta = \ln \frac{p}{1-p} = \operatorname{logit}(p), \text{ which gives us } p = \frac{e^\theta}{1+e^\theta} = \operatorname{expit}(\theta)$$

$$b(\theta) = \ln(1 + e^\theta) \quad\text{and}\quad b'(\theta) = \frac{e^\theta}{1+e^\theta} = \operatorname{expit}(\theta) = p = \mu$$

$$b''(\theta) = \frac{e^\theta}{1+e^\theta} - \left(\frac{e^\theta}{1+e^\theta}\right)^2$$

This gives us

$$V(\mu) = \mu(1-\mu).$$
==== Example – Poisson ====
Let $y \sim \text{Poisson}(\lambda)$; then we express the density of the Poisson distribution in exponential family form,

$$f(y) = \exp\left(y \ln \lambda - \lambda - \ln y!\right)$$

$$\theta = \ln \lambda, \text{ which gives us } \lambda = e^\theta$$

$$b(\theta) = e^\theta \quad\text{and}\quad b'(\theta) = e^\theta = \lambda = \mu$$

$$b''(\theta) = e^\theta = \mu$$

This gives us

$$V(\mu) = \mu.$$

Here we see the central property of Poisson data, that the variance is equal to the mean.
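The variance functions derived in these examples can be checked numerically against the identity $V(\mu) = b''(b'^{-1}(\mu))$, by differentiating each cumulant function $b(\theta)$ twice with central differences. A small sketch (the helper name is invented for illustration):

```python
import numpy as np

def b2(b, theta, h=1e-5):
    """Central-difference second derivative of a cumulant function b."""
    return (b(theta + h) - 2.0 * b(theta) + b(theta - h)) / h**2

# Cumulant functions b(theta) derived in the examples above
b_normal = lambda t: t**2 / 2.0               # V(mu) = 1
b_bernoulli = lambda t: np.log1p(np.exp(t))   # V(mu) = mu(1 - mu)
b_poisson = lambda t: np.exp(t)               # V(mu) = mu

theta = 0.3
mu_bern = np.exp(theta) / (1.0 + np.exp(theta))   # b'(theta), Bernoulli
mu_pois = np.exp(theta)                           # b'(theta), Poisson

v_normal = b2(b_normal, theta)
v_bern = b2(b_bernoulli, theta)
v_pois = b2(b_poisson, theta)
```

Each numerical second derivative matches the corresponding variance function evaluated at $\mu = b'(\theta)$ to within finite-difference error.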
==== Example – Gamma ====
The gamma distribution and density function can be expressed under different parametrizations. We will use the form of the gamma with parameters $(\mu, \nu)$:

$$f_{\mu,\nu}(y) = \frac{1}{\Gamma(\nu)\,y}\left(\frac{\nu y}{\mu}\right)^{\nu} e^{-\frac{\nu y}{\mu}}$$

Then in exponential family form we have

$$f_{\mu,\nu}(y) = \exp\left(\frac{-\frac{1}{\mu}y + \ln\left(\frac{1}{\mu}\right)}{\frac{1}{\nu}} + \ln\left(\frac{\nu^\nu y^{\nu-1}}{\Gamma(\nu)}\right)\right)$$

$$\theta = \frac{-1}{\mu} \;\Rightarrow\; \mu = \frac{-1}{\theta}, \qquad \phi = \frac{1}{\nu}$$

$$b(\theta) = -\ln(-\theta), \qquad b'(\theta) = \frac{-1}{\theta} = \mu, \qquad b''(\theta) = \frac{1}{\theta^2} = \mu^2$$

And we have

$$V(\mu) = \mu^2.$$
==== Application – weighted least squares ====
A very important application of the variance function is its use in parameter estimation and inference when the response variable is of the required exponential family form, as well as in some cases when it is not (which we will discuss in quasi-likelihood). Weighted least squares (WLS) is a special case of generalized least squares. Each term in the WLS criterion includes a weight that determines the influence each observation has on the final parameter estimates. As in regular least squares, the goal is to estimate the unknown parameters in the regression function by finding values for parameter estimates that minimize the sum of the squared deviations between the observed responses and the functional portion of the model.
While WLS assumes independence of observations it does not assume equal variance and is therefore a solution for parameter estimation in the presence of heteroscedasticity. The Gauss–Markov theorem and Aitken demonstrate that the best linear unbiased estimator (BLUE), the unbiased estimator with minimum variance, has each weight equal to the reciprocal of the variance of the measurement.
In the GLM framework, our goal is to estimate parameters β, where Z = g(E[y ∣ X]) = Xβ. Therefore, we would like to minimize

(Z-XB)^{T}W(Z-XB)
and if we define the weight matrix W as
W=\operatorname {diag} \left({\frac {1}{\phi V(\mu _{1})g'(\mu _{1})^{2}}},{\frac {1}{\phi V(\mu _{2})g'(\mu _{2})^{2}}},\ldots ,{\frac {1}{\phi V(\mu _{n})g'(\mu _{n})^{2}}}\right), an n×n diagonal matrix,
where φ, V(μ), and g(μ) are defined in the previous section, it allows for iteratively reweighted least squares (IRLS) estimation of the parameters. See the section on iteratively reweighted least squares for more derivation and information.
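To make the role of these weights concrete, here is a minimal IRLS sketch for a Poisson regression with log link (illustrative data, not from the article). For that family φ = 1, V(μ) = μ, and g'(μ) = 1/μ, so the weight 1/(φ V(μ) g'(μ)²) reduces to μ itself.

```python
import math

def irls_poisson(x, y, iters=25):
    """Fit E[y] = exp(b0 + b1*x) by iteratively reweighted least squares.

    Poisson family, log link: phi = 1, V(mu) = mu, g'(mu) = 1/mu,
    so the WLS weight 1/(phi*V(mu)*g'(mu)**2) is simply mu.
    """
    # common GLM starting value: mu = y + 0.5
    eta = [math.log(yi + 0.5) for yi in y]
    b0 = b1 = 0.0
    for _ in range(iters):
        mu = [math.exp(e) for e in eta]
        w = mu                                    # diagonal of W
        z = [e + (yi - mi) / mi                   # working response z = eta + (y-mu)*g'(mu)
             for e, yi, mi in zip(eta, y, mu)]
        # solve the 2x2 weighted normal equations (X^T W X) b = X^T W z
        s00 = sum(w)
        s01 = sum(wi * xi for wi, xi in zip(w, x))
        s11 = sum(wi * xi * xi for wi, xi in zip(w, x))
        t0 = sum(wi * zi for wi, zi in zip(w, z))
        t1 = sum(wi * zi * xi for wi, zi, xi in zip(w, z, x))
        det = s00 * s11 - s01 * s01
        b0 = (s11 * t0 - s01 * t1) / det
        b1 = (s00 * t1 - s01 * t0) / det
        eta = [b0 + b1 * xi for xi in x]
    return b0, b1

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.0, 2.0, 4.0, 7.0, 12.0]
b0, b1 = irls_poisson(x, y)
mu_hat = [math.exp(b0 + b1 * xi) for xi in x]
# at convergence the Poisson score equations sum((y - mu) * x_r) = 0 hold
```

Because the log link is the canonical link for the Poisson family, the converged IRLS solution satisfies the maximum-likelihood score equations exactly.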
Also important to note is that when the weight matrix is of the form described here, minimizing the expression

(Z-XB)^{T}W(Z-XB)

also minimizes the Pearson distance. See Distance correlation for more.
The matrix W falls right out of the estimating equations for estimation of β. Maximum likelihood estimation for each parameter β_r, 1 ≤ r ≤ p, requires

\sum _{i=1}^{n}{\frac {\partial l_{i}}{\partial \beta _{r}}}=0
, where

l(\theta ,y,\phi )=\log(f(y,\theta ,\phi ))={\frac {y\theta -b(\theta )}{\phi }}-c(y,\phi )

is the log-likelihood.
Looking at a single observation we have

{\frac {\partial l}{\partial \beta _{r}}}={\frac {\partial l}{\partial \theta }}{\frac {\partial \theta }{\partial \mu }}{\frac {\partial \mu }{\partial \eta }}{\frac {\partial \eta }{\partial \beta _{r}}}

{\frac {\partial \eta }{\partial \beta _{r}}}=x_{r}

{\frac {\partial l}{\partial \theta }}={\frac {y-b'(\theta )}{\phi }}={\frac {y-\mu }{\phi }}

{\frac {\partial \theta }{\partial \mu }}={\frac {\partial b'^{-1}(\mu )}{\partial \mu }}={\frac {1}{b''(b'^{-1}(\mu ))}}={\frac {1}{V(\mu )}}
This gives us

{\frac {\partial l}{\partial \beta _{r}}}={\frac {y-\mu }{\phi V(\mu )}}{\frac {\partial \mu }{\partial \eta }}x_{r},

and noting that

{\frac {\partial \eta }{\partial \mu }}=g'(\mu )

we have that

{\frac {\partial l}{\partial \beta _{r}}}=(y-\mu )W{\frac {\partial \eta }{\partial \mu }}x_{r}
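The score expression can be sanity-checked numerically. In the sketch below (made-up data, a Poisson model with canonical log link, so the score reduces to Σ (y_i − μ_i) x_i), a central finite difference of the log-likelihood is compared with the analytic derivative.

```python
import math

def loglik(b0, b1, x, y):
    """Poisson log-likelihood with log link, dropping the ln(y!) constant."""
    return sum(yi * (b0 + b1 * xi) - math.exp(b0 + b1 * xi)
               for xi, yi in zip(x, y))

def score_b1(b0, b1, x, y):
    """Analytic score dl/db1 = sum_i (y_i - mu_i) x_i for the canonical link."""
    return sum((yi - math.exp(b0 + b1 * xi)) * xi for xi, yi in zip(x, y))

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 3.0, 4.0, 9.0]
b0, b1 = 0.3, 0.5        # an arbitrary evaluation point
h = 1e-6
numeric = (loglik(b0, b1 + h, x, y) - loglik(b0, b1 - h, x, y)) / (2 * h)
# the central difference agrees with the analytic score
```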
The Hessian matrix is determined in a similar manner and can be shown to be

H=X^{T}(y-\mu )\left[{\frac {\partial }{\partial \beta _{s}}}W{\frac {\partial }{\partial \beta _{r}}}\right]-X^{T}WX
Noticing that the Fisher information (FI),

{\text{FI}}=-E[H]=X^{T}WX,

allows for an asymptotic approximation of \hat{\beta},

{\hat {\beta }}\sim N_{p}(\beta ,(X^{T}WX)^{-1}),

and hence inference can be performed.
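As a self-contained sketch of that inference step (the fitted means below are hypothetical values chosen for illustration), X^T W X can be assembled for a Poisson log-link model, where W = diag(μ_i), and inverted to obtain asymptotic standard errors.

```python
import math

def fisher_2x2(x, mu):
    """X^T W X for X = [1, x] and W = diag(mu) (Poisson, log link, phi = 1)."""
    a = sum(mu)                                  # sum of weights
    b = sum(m * xi for m, xi in zip(mu, x))      # sum of w_i * x_i
    d = sum(m * xi * xi for m, xi in zip(mu, x)) # sum of w_i * x_i^2
    return a, b, d

def invert_2x2(a, b, d):
    """Inverse of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    det = a * d - b * b
    return d / det, -b / det, a / det

x = [0.0, 1.0, 2.0, 3.0, 4.0]
mu = [1.2, 2.1, 3.9, 7.0, 12.4]   # hypothetical fitted means
a, b, d = fisher_2x2(x, mu)
ia, ib_, id_ = invert_2x2(a, b, d)
# standard errors are square roots of the diagonal of (X^T W X)^{-1}
se_b0, se_b1 = math.sqrt(ia), math.sqrt(id_)
```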
==== Application – quasi-likelihood ====
Because most features of GLMs only depend on the first two moments of the distribution, rather than the entire distribution, the quasi-likelihood can be developed by just specifying a link function and a variance function. That is, we need to specify
the link function: E[y] = μ = g^{-1}(η)
the variance function: V(μ), where Var_θ(y) = σ² V(μ)
With a specified variance function and link function we can develop, as alternatives to the log-likelihood function, the score function, and the Fisher information, a quasi-likelihood, a quasi-score, and the quasi-information. This allows for full inference of β.
Quasi-likelihood (QL)
Though called a quasi-likelihood, this is in fact a quasi-log-likelihood. The QL for one observation is

Q_{i}(\mu _{i},y_{i})=\int _{y_{i}}^{\mu _{i}}{\frac {y_{i}-t}{\sigma ^{2}V(t)}}\,dt
And therefore the QL for all n observations is

Q(\mu ,y)=\sum _{i=1}^{n}Q_{i}(\mu _{i},y_{i})=\sum _{i=1}^{n}\int _{y_{i}}^{\mu _{i}}{\frac {y_{i}-t}{\sigma ^{2}V(t)}}\,dt
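For a concrete check of this integral, take V(t) = t² (the gamma variance function derived above) and σ² = 1; then the single-observation integral has the closed form ∫_y^μ (y − t)/t² dt = ln(y/μ) − y/μ + 1. The sketch below compares a midpoint-rule evaluation against that closed form (the values y = 2, μ = 3 are arbitrary).

```python
import math

def quasi_lik_numeric(y, mu, sigma2=1.0, steps=100_000):
    """Q(mu, y) = integral from y to mu of (y - t)/(sigma2 * V(t)) dt,
    with V(t) = t**2, evaluated by the midpoint rule."""
    h = (mu - y) / steps
    total = 0.0
    for k in range(steps):
        t = y + (k + 0.5) * h
        total += (y - t) / (sigma2 * t * t)
    return total * h

def quasi_lik_closed(y, mu, sigma2=1.0):
    """Closed form of the same integral: (ln(y/mu) - y/mu + 1) / sigma2."""
    return (math.log(y / mu) - y / mu + 1.0) / sigma2

num = quasi_lik_numeric(2.0, 3.0)
closed = quasi_lik_closed(2.0, 3.0)
# the two evaluations agree to high precision
```

Up to terms not involving μ, this Q is the kernel of the gamma log-likelihood, which is why quasi-likelihood with V(μ) = μ² reproduces gamma-GLM estimates.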
From the QL we have the quasi-score.
Quasi-score (QS)
Recall the score function, U, for data with log-likelihood l(μ ∣ y) is

U={\frac {\partial l}{\partial \mu }}.
We obtain the quasi-score in an identical manner,

U={\frac {y-\mu }{\sigma ^{2}V(\mu )}},

noting that, for one observation, the score is

{\frac {\partial Q}{\partial \mu }}={\frac {y-\mu }{\sigma ^{2}V(\mu )}}
The first two Bartlett equations are satisfied for the quasi-score, namely

E[U]=0

and

\operatorname {Cov} (U)+E\left[{\frac {\partial U}{\partial \mu }}\right]=0.
In addition, the quasi-score is linear in y.
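The two Bartlett identities above can be checked by simulation. The sketch below uses normal data with V(μ) = 1 (an assumption made purely for illustration), for which U = (y − μ)/σ², so E[U] = 0, Var(U) = 1/σ², and ∂U/∂μ = −1/σ².

```python
import random

def bartlett_check(mu=1.5, sigma2=2.0, n=200_000, seed=1):
    """Empirically check E[U] = 0 and Cov(U) + E[dU/dmu] = 0
    for the quasi-score U = (y - mu)/(sigma2 * V(mu)) with V(mu) = 1."""
    rng = random.Random(seed)
    # draw y ~ N(mu, sigma2), so Var(y) = sigma2 * V(mu) = sigma2
    ys = [rng.gauss(mu, sigma2 ** 0.5) for _ in range(n)]
    u = [(yi - mu) / sigma2 for yi in ys]
    mean_u = sum(u) / n
    var_u = sum((ui - mean_u) ** 2 for ui in u) / n
    du_dmu = -1.0 / sigma2           # derivative of U in mu, constant here
    return mean_u, var_u + du_dmu    # both should be near zero

mean_u, bartlett2 = bartlett_check()
```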
Ultimately the goal is to find information about the parameters of interest β. Both the QS and the QL are actually functions of β. Recall, μ = g^{-1}(η) and η = Xβ; therefore,

\mu =g^{-1}(X\beta ).
Quasi-information (QI)
The quasi-information is similar to the Fisher information:

i_{b}=-\operatorname {E} \left[{\frac {\partial U}{\partial \beta }}\right]
QL, QS, QI as functions of β
The QL, QS and QI all provide the building blocks for inference about the parameters of interest, and therefore it is important to express them all as functions of β.
Recalling again that μ = g^{-1}(Xβ), we derive the expressions for QL, QS and QI parametrized under β.
Quasi-likelihood in β:

Q(\beta ,y)=\int _{y}^{\mu (\beta )}{\frac {y-t}{\sigma ^{2}V(t)}}\,dt
The QS as a function of β is therefore

U_{j}(\beta _{j})={\frac {\partial }{\partial \beta _{j}}}Q(\beta ,y)=\sum _{i=1}^{n}{\frac {\partial \mu _{i}}{\partial \beta _{j}}}{\frac {y_{i}-\mu _{i}(\beta )}{\sigma ^{2}V(\mu _{i})}}

U(\beta )={\begin{bmatrix}U_{1}(\beta )\\U_{2}(\beta )\\\vdots \\U_{p}(\beta )\end{bmatrix}}=D^{T}V^{-1}{\frac {(y-\mu )}{\sigma ^{2}}}
where the n×p matrix D and the n×n matrix V are

D={\begin{bmatrix}{\frac {\partial \mu _{1}}{\partial \beta _{1}}}&\cdots &{\frac {\partial \mu _{1}}{\partial \beta _{p}}}\\{\frac {\partial \mu _{2}}{\partial \beta _{1}}}&\cdots &{\frac {\partial \mu _{2}}{\partial \beta _{p}}}\\\vdots &&\vdots \\{\frac {\partial \mu _{n}}{\partial \beta _{1}}}&\cdots &{\frac {\partial \mu _{n}}{\partial \beta _{p}}}\end{bmatrix}},\qquad V=\operatorname {diag} (V(\mu _{1}),V(\mu _{2}),\ldots ,V(\mu _{n}))
The quasi-information matrix in β is

i_{b}=-{\frac {\partial U}{\partial \beta }}=\operatorname {Cov} (U(\beta ))={\frac {D^{T}V^{-1}D}{\sigma ^{2}}}
Obtaining the score function and the information of β allows for parameter estimation and inference in a similar manner as described in Application – weighted least squares.
=== Non-parametric regression analysis ===
Non-parametric estimation of the variance function, and its importance, has been discussed widely in the literature. In non-parametric regression analysis, the goal is to express the expected value of the response variable (y) as a function of the predictors (X). That is, we are looking to estimate the mean function

g(x)=\operatorname {E} [y\mid X=x]
without assuming a parametric form. There are many forms of non-parametric smoothing methods to help estimate the function g(x). An interesting approach is to also look at a non-parametric variance function,

g_{v}(x)=\operatorname {Var} (Y\mid X=x).

A non-parametric variance function allows one to look at the mean function as it relates to the variance function and notice patterns in the data.
g_{v}(x)=\operatorname {Var} (Y\mid X=x)=\operatorname {E} [y^{2}\mid X=x]-\left(\operatorname {E} [y\mid X=x]\right)^{2}
An example is detailed in the pictures to the right. The goal of the project was to determine (among other things) whether or not the predictor, the number of years in the major leagues (baseball), had an effect on the response, the salary a player made. An initial scatter plot of the data indicates that there is heteroscedasticity in the data, as the variance is not constant at each level of the predictor. Because we can visually detect the non-constant variance, it is useful now to plot

g_{v}(x)=\operatorname {Var} (Y\mid X=x)=\operatorname {E} [y^{2}\mid X=x]-\left(\operatorname {E} [y\mid X=x]\right)^{2},

and look to see if the shape is indicative of any known distribution. One can estimate E[y² ∣ X = x] and (E[y ∣ X = x])² using a general smoothing method. The plot of the non-parametric smoothed variance function can give the researcher an idea of the relationship between the variance and the mean. The picture to the right indicates a quadratic relationship between the mean and the variance. As we saw above, the Gamma variance function is quadratic in the mean.
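The smoothing idea above can be sketched in a few lines. The example below uses synthetic heteroscedastic data (not the baseball data from the article): E[y ∣ x] and E[y² ∣ x] are estimated with a Nadaraya–Watson kernel smoother, and their difference gives the non-parametric variance function, which should grow with x since the noise standard deviation is set proportional to x.

```python
import math
import random

def kernel_smooth(xs, vals, x0, h=1.0):
    """Nadaraya-Watson estimate of E[val | X = x0] with a Gaussian kernel."""
    wts = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in xs]
    return sum(w * v for w, v in zip(wts, vals)) / sum(wts)

def variance_function(xs, ys, x0, h=1.0):
    """g_v(x0) = E[y^2 | x0] - (E[y | x0])^2, with both terms smoothed."""
    m1 = kernel_smooth(xs, ys, x0, h)
    m2 = kernel_smooth(xs, [yi * yi for yi in ys], x0, h)
    return m2 - m1 * m1

rng = random.Random(42)
xs = [rng.uniform(0.5, 10.0) for _ in range(4000)]
# heteroscedastic data: sd grows linearly with x, so Var(y|x) = (0.5*x)**2
ys = [xi + rng.gauss(0.0, 0.5 * xi) for xi in xs]
low = variance_function(xs, ys, 2.0)
high = variance_function(xs, ys, 8.0)
# the smoothed variance estimate rises with x, revealing the quadratic pattern
```

Note that the kernel bandwidth h introduces some bias (local variation of the mean function leaks into the variance estimate), so this is a diagnostic sketch rather than an unbiased estimator.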
== Notes ==
== References ==
McCullagh, Peter; Nelder, John (1989). Generalized Linear Models (second ed.). London: Chapman and Hall. ISBN 0-412-31760-5.
Henrik Madsen and Poul Thyregod (2011). Introduction to General and Generalized Linear Models. Chapman & Hall/CRC. ISBN 978-1-4200-9155-7.
== External links ==
Media related to Variance function at Wikimedia Commons
Visual Basic for Applications (VBA) is an implementation of Microsoft's event-driven programming language Visual Basic 6.0 built into most desktop Microsoft Office applications. Although based on pre-.NET Visual Basic, which is no longer supported or updated by Microsoft (except under Microsoft's "It Just Works" support which is for the full lifetime of supported Windows versions, including Windows 10 and Windows 11), the VBA implementation in Office continues to be updated to support new Office features. VBA is used for professional and end-user development due to its perceived ease-of-use, Office's vast installed userbase, and extensive legacy in business.
Visual Basic for Applications enables building user-defined functions (UDFs), automating processes and accessing Windows API and other low-level functionality through dynamic-link libraries (DLLs). It supersedes and expands on the abilities of earlier application-specific macro programming languages such as Word's WordBASIC. It can be used to control many aspects of the host application, including manipulating user interface features, such as menus and toolbars, and working with custom user forms or dialog boxes.
As its name suggests, VBA is closely related to Visual Basic and uses the Visual Basic Runtime Library. However, VBA code normally can only run within a host application, rather than as a standalone program. VBA can, however, control one application from another using OLE Automation. For example, VBA can automatically create a Microsoft Word report from Microsoft Excel data that Excel collects automatically from polled sensors. VBA can use, but not create, ActiveX/COM DLLs, and later versions add support for class modules.
VBA is built into most Microsoft Office applications, including Office for Mac OS X (except version 2008), and other Microsoft applications, including Microsoft MapPoint and Microsoft Visio. VBA is also implemented, at least partially, in applications published by companies other than Microsoft, including ArcGIS, AutoCAD, Collabora Online, CorelDraw, Kingsoft Office, LibreOffice, SolidWorks, WordPerfect, and UNICOM System Architect (which supports VBA 7.1).
== Origins ==
When personal computers were initially released in the 1970s and 1980s, they typically included a version of BASIC so that customers could write their own programs. Microsoft's first products were BASIC compilers and interpreters, and the company distributed versions of BASIC with MS-DOS (versions 1.0 through 6.0) and developed follow-on products that offered more features and capabilities (QuickBASIC and BASIC Professional Development System).
In 1989, Bill Gates sketched out Microsoft's plans to use BASIC as a universal language to embellish or alter the performance of a range of software applications on microcomputers. He also revealed that the installed base of active BASIC programmers was four million users, and that BASIC was used three times more frequently than any other language on PCs.
When Visual Basic was released in 1991, it seemed logical to use Visual Basic as the universal programming language for Windows applications. Until that time, each Microsoft application had its own macro language or automation technique, and the tools were largely incompatible. The first Microsoft application to debut VBA was Microsoft Excel 5.0 in 1993, based on Microsoft Visual Basic 3.0. This spurred the development of numerous custom business applications, and the decision was made to release VBA in a range of products.
Windows users learned about the changes through user groups, books, and magazines. Early computer books that introduced VBA programming skills include Reed Jacobsen's Microsoft Excel Visual Basic for Windows 95 Step by Step (Microsoft Press, 1995) and Michael Halvorson and Chris Kinata's Microsoft Word 97 Visual Basic Step by Step (Microsoft Press, 1997).
== Design ==
Code written in VBA is compiled to Microsoft P-Code (pseudo-code), a proprietary intermediate language, which the host applications (Access, Excel, Word, Outlook, and PowerPoint) store as a separate stream in COM Structured Storage files (e.g., .doc or .xls) independent of the document streams. The intermediate code is then executed by a virtual machine (hosted by the host application). Compatibility ends with Visual Basic version 6; VBA is incompatible with Visual Basic .NET (VB.NET). VBA is proprietary to Microsoft and, apart from the COM interface, is not an open standard.
== Automation ==
Interaction with the host application uses OLE Automation. Typically, the host application provides a type library and application programming interface (API) documentation which document how VBA programs can interact with the application. This documentation can be examined from inside the VBA development environment using its Object Browser.
Visual Basic for Applications programs which are written to use the OLE Automation interface of one application cannot be used to automate a different application, even if that application hosts the Visual Basic runtime, because the OLE Automation interfaces will be different. For example, a VBA program written to automate Microsoft Word cannot be used with a different word processor, even if that word processor hosts VBA.
Conversely, multiple applications can be automated from the one host by creating Application objects within the VBA code. References to the different libraries must be created within the VBA client before any of the methods, objects, etc. become available to use in the application. This is achieved through what is referred to as Early or Late Binding. These application objects create the OLE link to the application when they are first created. Commands to the different applications must be done explicitly through these application objects in order to work correctly.
As an example, VBA code written in Microsoft Access can establish references to the Excel, Word and Outlook libraries; this allows creating an application that – for instance – runs a query in Access, exports the results to Excel and analyzes them, and then formats the output as tables in a Word document or sends them as an Outlook email.
VBA programs can be attached to a menu button, a macro, a keyboard shortcut, or an OLE/COM event, such as the opening of a document in the application. The language provides a user interface in the form of UserForms, which can host ActiveX controls for added functionality.
Inter-process communication automation includes Dynamic Data Exchange (DDE) and RealTimeData (RTD), which allows calling a Component Object Model (COM) automation server for dynamic or real-time financial or scientific data.
== Security concerns ==
As with any common programming language, VBA macros can be created with malicious intent. Using VBA, most of the security features lie in the hands of the user, not the author. The VBA host application options are accessible to the user. The user who runs any document containing VBA macros can preset the software with user preferences. End-users can protect themselves from attack by disabling macros from running in an application or by granting permission for a document to run VBA code only if they are sure that the source of the document can be trusted.
In February 2022, Microsoft announced its plan to block VBA macros in files downloaded from the Internet by default in a variety of Office apps due to their widespread use to spread malware.
=== Macro risks ===
A risk with using VBA macros, such as in Microsoft Office applications, is exposure to viruses. Risks stem from factors including the ease of writing macros, which decreases the skill required to write a malicious macro, and typical document-sharing practices, which allow a virus to spread quickly.
System macro virus
A system macro – one that provides a core operation – can be redefined. This allows for significant flexibility, but also is a risk that hackers can exploit to access the document and its host computer without the user's knowledge or consent. For example, a hacker could replace the built-in core functionality macros such as AutoExec, AutoNew, AutoClose, AutoOpen, AutoExit with malicious versions. A malicious macro could be configured to run when the user presses a common keyboard shortcut such as Ctrl+B which is normally for bold font.
Document-to-macro conversion
This is a type of macro virus that cuts and pastes the text of a document into the macro itself. The macro can be invoked by the Auto-open macro, so that the text is re-created when the (now empty) document is opened; the user will not notice that the document itself is empty. The macro could also convert only some parts of the text in order to be less noticeable. Removing the macros from the document, manually or by using an anti-virus program, could then lead to a loss of content in the document.: 609–610
Polymorphic macros
Polymorphic viruses change their code in fundamental ways with each replication in order to avoid detection by anti-virus scanners.
In WordBasic (the macro language that preceded VBA in Word), polymorphic viruses are difficult to achieve. The macro's polymorphism relies on encryption of the document, but the hacker has no control over the encryption key. Furthermore, the encryption is ineffective: the encrypted macros are stored in the document, so the encryption key is too, and when a polymorphic macro replicates itself, the key does not change (the replication affects only the macro, not the encryption).
In addition to these difficulties, a macro cannot modify itself, but another macro can. WordBasic is a powerful language that allows a macro to perform several operations on other macros:
Rename the variables used in the macro(s).
Insert random comments between the operators of its macro(s).
Insert between the operators of its macros other, 'do-nothing' WordBasic operators which do not affect the execution of the virus.
Replace some of its operators with equivalent ones which perform the same function.
Swap around any operators whose order does not impact the result of the macro's execution.
Rename the macro(s) themselves to new, randomly selected names each time the virus replicates itself to a new document, with the appropriate changes in the parts of the virus body that refer to these macros.
So, in order to implement a macro virus that can change its content, hackers have to create another macro that carries out the modification. However, this type of macro virus is not widespread: hackers usually write macro viruses because they are easy and quick to implement, whereas a polymorphic macro requires extensive knowledge of WordBasic (including its advanced functionality) and far more time than a "classic" macro virus. Even if a hacker were to build a polymorphic macro, the polymorphism still has to execute, so the document needs to be updated, and the update can be visible to the user.: 610–612
Chained macros
During replication, a macro can create do-nothing macros. This idea can be combined with polymorphic macros, so the macros are not necessarily do-nothing: each macro invokes the next one, so they can be arranged in a chain. In such a case, if they are not all removed during a disinfection, some destructive payload is activated. Such an attack can crash Winword with an internal error. Since Winword 6.0, the number of macros per template is limited to 150, so the attack is limited too, but can still be very annoying.: 623
"Mating" macro viruses
Macro viruses can, in some cases, interact with one another. If two viruses are executed at the same time, each can modify the source code of the other.
The result is a new virus that anti-virus software cannot recognize. The outcome, however, is essentially random: the resulting macro virus can be more or less infectious, depending upon which part of the virus has been changed.
When the "mating" is unintentional, the resulting macro virus is more likely to be less infectious: in order to replicate itself, it has to know the commands in its source code, and if those are changed in a random way, the macro can no longer replicate.
Nevertheless, it is possible to create such macros intentionally (unlike polymorphic macro viruses, which must use another macro to change their contents) in order to increase the infectivity of the two viruses.
In the example of the article,: 612–613 the macro virus Colors infected a document, but another virus, Concept, had infected the user's system before. Both viruses use the AutoOpen command, so at first the macro virus Colors was detected, but the AutoOpen command in it was the one from Concept. Moreover, when Concept duplicates itself, it is unencrypted, whereas Colors encrypts its commands. Replication of Concept therefore results in a hybrid of Concept (which had infected the user's system first) and Colors. The hybrid could replicate itself only if AutoOpen were not executed; indeed, this command comes from Concept, but the body of the hybrid is Colors, which creates conflicts.
This example shows the potential of mating macro viruses: if a pair of mating macro viruses is created, it will be more difficult for virus-specific scanners to detect both of them (in this hypothesis, only two viruses mate), and the mating may reinforce the virulence of the viruses.
Fortunately, this type of macro virus is rare (even rarer than polymorphic macro viruses; one may not even exist in the wild), since creating two or more viruses that can interact with each other without reducing their virulence (rather reinforcing it) is complicated.
Macro virus mutators
Among the worst scenarios in the world of viruses would be a tool that allows one to create a new virus by modifying an existing one.
For executable files, such a tool is hard to create, but it is very simple for macro viruses, since the source of a macro is always available. Based on the same idea as polymorphic macros, a macro can modify all of the macros present in a document; only a few further modifications are needed to turn such a macro into a macro virus mutator. It is therefore easy to create macro virus generators, and thereby to quickly create several thousand variants of known viruses.: 613–614
Parasitic macro viruses
Most macro viruses are stand-alone; they do not depend on other macros (at least for the infectious part of the virus, though some depend on others for replication). Some macro viruses do depend on other macros; they are called parasitic macros.: 614–615
When launched, they check the other macros present (viral or not) and append their own contents to them. In this way, all of the macros become viruses.
This type of macro cannot spread as quickly as stand-alone macros, since it depends on other macros; without them, the virus cannot spread. For this reason, parasitic macros are often hybrids: they are stand-alone and can also infect other macros.
This kind of macro virus poses real problems for virus-specific anti-virus software; because it changes the content of other viruses, accurate detection is not possible.
==== Suboptimal anti-virus ====
There are different types of anti-virus software (or scanners). One is the heuristic-analysis anti-virus, which interprets or emulates macros. Examining all branches of a macro is an NP-complete problem: 605 (using backtracking), so a full analysis of a single document containing macros would take too much time; interpreting or emulating a macro instead leads either to false positives or to macro viruses going undetected.
Another type, the integrity-checker anti-virus, in some cases does not work: it only checks documents with the DOT or DOC extensions (as some anti-virus producers suggest to their users), but Word documents can have extensions other than those two, and the content of a document tends to change often.: 605 Like heuristic analysis, this can lead to false positives, because this type of anti-virus checks the whole document.
The last type considered here is the virus-specific scanner.: 608 It searches for the signatures of viruses, so this type of anti-virus is weaker than the previous ones.
Indeed, the viruses detected by virus-specific scanners are only those known to the software producers (so more updates are needed than for other types of scanners). Moreover, this type of anti-virus is weak against morphing viruses (see the sections above): if a macro virus changes its content, and therefore its signature, it can no longer be detected by a virus-specific scanner, even if it is the same virus performing the same actions, because its signature no longer matches the one stored by the scanner.
In addition to the anti-virus's responsibility there is the user's: if a potential macro virus is detected, the user can choose what to do with it (ignore it, quarantine it, or destroy it), and the last option is the most dangerous, since the deletion can activate some destructive macro viruses, which destroy data when they are removed by the anti-virus.
So both virus scanners and users are responsible for the security and integrity of documents and computers. Moreover, even though anti-virus software is not optimal at virus detection, most macro viruses are detected, and detection keeps improving alongside the creation of new macro viruses.
== Version history ==
VBA was first launched with MS Excel 5.0 in 1993. It became an instant success among developers for building corporate solutions in Excel. The inclusion of VBA in Microsoft Project, Access and Word, replacing Access BASIC and WordBASIC respectively, made it more popular.
VBA 4.0, released in 1996, was the next notable release, thoroughly upgraded compared to its predecessor; it was written in C++ and became an object-oriented language.
VBA 5.0 was launched in 1997 along with all of the MS Office 97 products. The only exception was Outlook 97, which used VBScript.
VBA 6.0 and VBA 6.1 were launched in 1999, notably with support for COM add-ins in Office 2000. VBA 6.2 was released alongside Office 2000 SR-1.
VBA 6.3 was released after Office XP, VBA 6.4 followed Office 2003 and VBA 6.5 was released with Office 2007.
Office 2010 includes VBA 7.0. There are no new features in VBA 7 for developers compared to VBA 6.5 except for 64-bit support. However, after VBA 6.5/Office 2007, Microsoft stopped licensing VBA for other applications.
Office 2013, Office 2016, Office 2019 and Office 2021 include VBA 7.1.
== Development ==
As of July 1, 2007, Microsoft no longer offers VBA distribution licenses to new customers. Microsoft intended to add .NET-based languages to the current version of VBA ever since the release of the .NET Framework, of which versions 1.0 and 1.1 included a scripting runtime technology named Script for the .NET Framework. Visual Studio .NET 2002 and 2003 SDK contained a separate scripting IDE called Visual Studio for Applications (VSA) that supported VB.NET. One of its significant features was that the interfaces to the technology were available via Active Scripting (VBScript and JScript), allowing even .NET-unaware applications to be scripted via .NET languages. However, VSA was deprecated in version 2.0 of the .NET Framework, leaving no clear upgrade path for applications desiring Active Scripting support (although "scripts" can be created in C#, VBScript, and other .NET languages, which can be compiled and executed at run-time via libraries installed as part of the standard .NET runtime).
Microsoft dropped VBA support for Microsoft Office 2008 for Mac. VBA was restored in Microsoft Office for Mac 2011. Microsoft said that it has no plan to remove VBA from the Windows version of Office.
With Office 2010, Microsoft introduced VBA7, which contains a true pointer data type: LongPtr. This allows referencing 64-bit address space. The 64-bit install of Office 2010 does not support common controls of MSComCtl (TabStrip, Toolbar, StatusBar, ProgressBar, TreeView, ListViews, ImageList, Slider, ImageComboBox) or MSComCt2 (Animation, UpDown, MonthView, DateTimePicker, FlatScrollBar), so legacy 32-bit code ported to 64-bit VBA code that depends on these common controls will not function. This did not affect the 32-bit version Office 2010. Microsoft eventually released a 64-bit version of MSComCtl with the July 27th, 2017 update to Office 2016.
== See also ==
Visual Studio Tools for Applications
Visual Studio Tools for Office
Microsoft Visual Studio
Microsoft FrontPage
OpenOffice Basic
LotusScript
Microsoft Power Fx
== References ==
In functional programming, a generalized algebraic data type (GADT, also first-class phantom type, guarded recursive datatype, or equality-qualified type) is a generalization of a parametric algebraic data type (ADT).
== Overview ==
In a GADT, the product constructors (called data constructors in Haskell) can provide an explicit instantiation of the ADT as the type instantiation of their return value. This allows defining functions with more advanced type behaviour. For a Haskell 2010 data constructor, by contrast, the return value has the type instantiation implied by the instantiation of the ADT parameters at the constructor's application.
They are currently implemented in the Glasgow Haskell Compiler (GHC) as a non-standard extension, used by, among others, Pugs and Darcs. OCaml has supported GADTs natively since version 4.00.
The GHC implementation provides support for existentially quantified type parameters and for local constraints.
== History ==
An early version of generalized algebraic data types was described by Augustsson & Petersson (1994) and based on pattern matching in ALF.
Generalized algebraic data types were introduced independently by Cheney & Hinze (2003) and earlier by Xi, Chen & Chen (2003) as extensions to the algebraic data types of ML and Haskell. Both formulations are essentially equivalent to each other. They are similar to the inductive families of data types (or inductive datatypes) found in Coq's Calculus of Inductive Constructions and other dependently typed languages, modulo dependent types, except that the latter have an additional positivity restriction which is not enforced in GADTs.
Sulzmann, Wazny & Stuckey (2006) introduced extended algebraic data types which combine GADTs together with the existential data types and type class constraints.
Type inference in the absence of any programmer-supplied type annotations is undecidable, and functions defined over GADTs do not admit principal types in general. Type reconstruction requires several design trade-offs and is an area of active research (Peyton Jones, Washburn & Weirich 2004; Peyton Jones et al. 2006).
In spring 2021, Scala 3.0 was released. This major update of Scala introduced the possibility to write GADTs with the same syntax as algebraic data types, which is not the case in other programming languages according to Martin Odersky.
== Applications ==
Applications of GADTs include generic programming, modelling programming languages (higher-order abstract syntax), maintaining invariants in data structures, expressing constraints in embedded domain-specific languages, and modelling objects.
=== Higher-order abstract syntax ===
An important application of GADTs is to embed higher-order abstract syntax in a type safe fashion. Here is an embedding of the simply typed lambda calculus with an arbitrary collection of base types, product types (tuples) and a fixed point combinator:
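Such an embedding might look like the following GHC Haskell sketch (the constructor names Lift, Pair, Lam, App and Fix match those used in the discussion below):

```haskell
{-# LANGUAGE GADTs #-}

data Expr a where
    Lift :: a                       -> Expr a         -- lifted base value
    Pair :: Expr a -> Expr b        -> Expr (a, b)    -- product types (tuples)
    Lam  :: (Expr a -> Expr b)      -> Expr (a -> b)  -- abstraction, in HOAS style
    App  :: Expr (a -> b) -> Expr a -> Expr b         -- application
    Fix  :: Expr (a -> a)           -> Expr a         -- fixed point combinator
```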
And a type safe evaluation function:
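One common form of such an evaluator, assuming the Expr GADT with constructors Lift, Pair, Lam, App and Fix described in this section:

```haskell
eval :: Expr a -> a
eval (Lift x)   = x                          -- a lifted value evaluates to itself
eval (Pair l r) = (eval l, eval r)
eval (Lam f)    = \x -> eval (f (Lift x))    -- turn the HOAS function into a real one
eval (App f x)  = eval f (eval x)
eval (Fix f)    = eval f (eval (Fix f))      -- unfold the fixed point
```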
The factorial function can now be written as:
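Along these lines, for example (assuming the Expr GADT and eval of this section; the type signature is added here for clarity):

```haskell
fact :: Expr (Int -> Int)
fact = Fix (Lam (\f -> Lam (\y -> Lift (
           if eval y == 0
               then 1
               else eval y * eval f (eval y - 1)))))
```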
Problems would have occurred using regular algebraic data types. Dropping the type parameter would have made the lifted base types existentially quantified, making it impossible to write the evaluator. With a type parameter, it is still restricted to one base type. Further, ill-formed expressions such as App (Lam (\x -> Lam (\y -> App x y))) (Lift True) would have been possible to construct, while they are type incorrect using the GADT. A well-formed analogue is App (Lam (\x -> Lam (\y -> App x y))) (Lift (\z -> True)). This is because the type of x is Expr (a -> b), inferred from the type of the Lam data constructor.
== See also ==
Type variable
== Notes ==
== Further reading ==
== External links ==
Generalised Algebraic Datatype Page on the Haskell wiki
Generalised Algebraic Data Types in the GHC Users' Guide
Generalized Algebraic Data Types and Object-Oriented Programming
GADTs – Haskell Prime – Trac
Papers about type inference for GADTs, bibliography by Simon Peyton Jones
Type inference with constraints, bibliography by Simon Peyton Jones
Emulating GADTs in Java via the Yoneda lemma
In computer programming, a reference is a value that enables a program to indirectly access a particular datum, such as a variable's value or a record, in the computer's memory or in some other storage device. The reference is said to refer to the datum, and accessing the datum is called dereferencing the reference. A reference is distinct from the datum itself.
A reference is an abstract data type and may be implemented in many ways. Typically, a reference refers to data stored in memory on a given system, and its internal value is the memory address of the data, i.e. a reference is implemented as a pointer. For this reason a reference is often said to "point to" the data. Other implementations include an offset (difference) between the datum's address and some fixed "base" address, an index, or identifier used in a lookup operation into an array or table, an operating system handle, a physical address on a storage device, or a network address such as a URL.
== Formal representation ==
A reference R is a value that admits one operation, dereference(R), which yields a value. Usually the reference is typed so that it returns values of a specific type, e.g.:
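In a typed language this can be sketched as follows (C++ is used for illustration; the function and variable names are not from any particular formalism):

```cpp
#include <cassert>

// Sketch of the two operations on a reference: dereference yields the
// datum, store (the assignment operation of the next paragraph) updates it.
int demo() {
    int datum = 42;
    int& r = datum;   // r is a reference typed to yield int values

    int v = r;        // dereference(r): yields 42
    r = 7;            // store(r, 7): updates datum through the reference
    return v * 100 + datum;   // encodes both observations: 4207
}
```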
Often the reference also admits an assignment operation store(R, x), meaning it is an abstract variable.
== Use ==
References are widely used in programming, especially to efficiently pass large or mutable data as arguments to procedures, or to share such data among various uses. In particular, a reference may point to a variable or record that contains references to other data. This idea is the basis of indirect addressing and of many linked data structures, such as linked lists. References increase flexibility in where objects can be stored, how they are allocated, and how they are passed between areas of code. As long as one can access a reference to the data, one can access the data through it, and the data itself need not be moved. They also make sharing of data between different code areas easier; each keeps a reference to it.
References can cause significant complexity in a program, partially due to the possibility of dangling and wild references and partially because the topology of data with references is a directed graph, whose analysis can be quite complicated. Nonetheless, references are still simpler to analyze than pointers due to the absence of pointer arithmetic.
The mechanism of references, though implementations vary, is a fundamental programming language feature common to nearly all modern programming languages. Even some languages that support no direct use of references have some internal or implicit use. For example, the call by reference calling convention can be implemented with either explicit or implicit use of references.
== Examples ==
Pointers are the most primitive type of reference. Due to their intimate relationship with the underlying hardware, they are one of the most powerful and efficient types of references. However, also due to this relationship, pointers require a strong understanding by the programmer of the details of memory architecture. Because pointers store a memory location's address, instead of a value directly, inappropriate use of pointers can lead to undefined behavior in a program, particularly due to dangling pointers or wild pointers. Smart pointers are opaque data structures that act like pointers but can only be accessed through particular methods.
A handle is an abstract reference, and may be represented in various ways. A common example is the file handle (the FILE data structure in the C standard I/O library), used to abstract file content. It usually represents both the file itself, as when requesting a lock on the file, and a specific position within the file's content, as when reading a file.
In distributed computing, the reference may contain more than an address or identifier; it may also include an embedded specification of the network protocols used to locate and access the referenced object, and of the way information is encoded or serialized. Thus, for example, a WSDL description of a remote web service can be viewed as a form of reference; it includes a complete specification of how to locate and bind to a particular web service. A reference to a live distributed object is another example: it is a complete specification for how to construct a small software component called a proxy that will subsequently engage in a peer-to-peer interaction, and through which the local machine may gain access to data that is replicated or exists only as a weakly consistent message stream. In all these cases, the reference includes the full set of instructions, or a recipe, for how to access the data; in this sense, it serves the same purpose as an identifier or address in memory.
If we have a set of keys K and a set of data objects D, any well-defined (single-valued) function from K to D ∪ {null} defines a type of reference, where null is the image of a key not referring to anything meaningful.
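A sketch of this view (Python, with a dict playing the role of the function and None standing in for null; the keys and data are illustrative):

```python
# The dict defines a well-defined, single-valued function from keys to data.
data_objects = {"k1": [1, 2, 3], "k2": "hello"}

def dereference(key):
    """Map a key to its datum; None is the image of a key that refers to nothing."""
    return data_objects.get(key)
```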
An alternative representation of such a function is a directed graph called a reachability graph. Here, each datum is represented by a vertex and there is an edge from u to v if the datum in u refers to the datum in v. The maximum out-degree is one. These graphs are valuable in garbage collection, where they can be used to separate accessible from inaccessible objects.
== External and internal storage ==
In many data structures, large, complex objects are composed of smaller objects. These objects are typically stored in one of two ways:
With internal storage, the contents of the smaller object are stored inside the larger object.
With external storage, the smaller objects are allocated in their own location, and the larger object only stores references to them.
Internal storage is usually more efficient, because there is a space cost for the references and dynamic allocation metadata, and a time cost associated with dereferencing a reference and with allocating the memory for the smaller objects. Internal storage also enhances locality of reference by keeping different parts of the same large object close together in memory. However, there are a variety of situations in which external storage is preferred:
If the data structure is recursive, meaning it may contain itself. This cannot be represented in the internal way.
If the larger object is being stored in an area with limited space, such as the stack, then we can prevent running out of storage by storing large component objects in another memory region and referring to them using references.
If the smaller objects may vary in size, it is often inconvenient or expensive to resize the larger object so that it can still contain them.
References are often easier to work with and adapt better to new requirements.
Some languages, such as Java, Smalltalk, Python, and Scheme, do not support internal storage. In these languages, all objects are uniformly accessed through references.
== Language support ==
=== Assembly ===
In assembly language, it is typical to express references using either raw memory addresses or indexes into tables. These work, but are somewhat tricky to use, because an address tells you nothing about the value it points to, not even how large it is or how to interpret it; such information is encoded in the program logic. The result is that misinterpretations can occur in incorrect programs, causing bewildering errors.
=== Lisp ===
One of the earliest opaque references was that of the Lisp language cons cell, which is simply a record containing two references to other Lisp objects, including possibly other cons cells. This simple structure is most commonly used to build singly linked lists, but can also be used to build simple binary trees and so-called "dotted lists", which terminate not with a null reference but a value.
=== C/C++ ===
The pointer is still one of the most popular types of references today. It is similar to the assembly representation of a raw address, except that it carries a static datatype which can be used at compile-time to ensure that the data it refers to is not misinterpreted. However, because C has a weak type system which can be violated using casts (explicit conversions between various pointer types and between pointer types and integers), misinterpretation is still possible, if more difficult. Its successor C++ tried to increase type safety of pointers with new cast operators, a reference type &, and smart pointers in its standard library, but still retained the ability to circumvent these safety mechanisms for compatibility.
=== Fortran ===
Fortran does not have an explicit representation of references, but does use them implicitly in its call-by-reference calling semantics. A Fortran reference is best thought of as an alias of another object, such as a scalar variable or a row or column of an array. There is no syntax to dereference the reference or manipulate the contents of the referent directly. Fortran references can be null. As in other languages, these references facilitate the processing of dynamic structures, such as linked lists, queues, and trees.
=== Object-oriented languages ===
A number of object-oriented languages such as Eiffel, Java, C#, and Visual Basic have adopted a much more opaque type of reference, usually referred to as simply a reference. These references have types like C pointers indicating how to interpret the data they reference, but they are typesafe in that they cannot be interpreted as a raw address and unsafe conversions are not permitted. References are extensively used to access and assign objects. References are also used in function/method calls or message passing, and reference counts are frequently used to perform garbage collection of unused objects.
=== Functional languages ===
In Standard ML, OCaml, and many other functional languages, most values are persistent: they cannot be modified by assignment. Assignable "reference cells" provide mutable variables, data that can be modified. Such reference cells can hold any value, and so are given the polymorphic type α ref, where α is to be replaced with the type of value pointed to. These mutable references can be pointed to different objects over their lifetime. For example, this permits building of circular data structures. The reference cell is functionally equivalent to a mutable array of length 1.
To preserve safety and efficient implementations, references cannot be type-cast in ML, nor can pointer arithmetic be performed. In the functional paradigm, many structures that would be represented using pointers in a language like C are represented using other facilities, such as the powerful algebraic datatype mechanism. The programmer is then able to enjoy certain properties (such as the guarantee of immutability) while programming, even though the compiler often uses machine pointers "under the hood".
=== Perl/PHP ===
Perl supports hard references, which function similarly to those in other languages, and symbolic references, which are just string values that contain the names of variables. When a value that is not a hard reference is dereferenced, Perl treats it as a symbolic reference and accesses the variable whose name is given by the value. PHP has a similar feature in the form of its $$var syntax.
== See also ==
Abstraction (computer science)
Autovivification
Bounded pointer
Linked data
Magic cookie
Weak reference
== References ==
== External links ==
Pointer Fun With Binky Introduction to pointers in a 3-minute educational video – Stanford Computer Science Education Library
In computer science, type conversion, type casting, type coercion, and type juggling are different ways of changing an expression from one data type to another. An example would be the conversion of an integer value into a floating point value or its textual representation as a string, and vice versa. Type conversions can take advantage of certain features of type hierarchies or data representations. Two important aspects of a type conversion are whether it happens implicitly (automatically) or explicitly, and whether the underlying data representation is converted from one representation into another, or a given representation is merely reinterpreted as the representation of another data type. In general, both primitive and compound data types can be converted.
Each programming language has its own rules on how types can be converted. Languages with strong typing typically do little implicit conversion and discourage the reinterpretation of representations, while languages with weak typing perform many implicit conversions between data types. Weakly typed languages often allow forcing the compiler to arbitrarily interpret a data item as having different representations—this can be a non-obvious programming error, or a technical method to directly deal with underlying hardware.
In most languages, the word coercion is used to denote an implicit conversion, either during compilation or during run time. For example, in an expression mixing integer and floating point numbers (like 5 + 0.1), the compiler will automatically convert integer representation into floating point representation so fractions are not lost. Explicit type conversions are either indicated by writing additional code (e.g. adding type identifiers or calling built-in routines) or by coding conversion routines for the compiler to use when it otherwise would halt with a type mismatch.
In most ALGOL-like languages, such as Pascal, Modula-2, Ada and Delphi, conversion and casting are distinctly different concepts. In these languages, conversion refers to either implicitly or explicitly changing a value from one data type storage format to another, e.g. a 16-bit integer to a 32-bit integer. The storage needs may change as a result of the conversion, including a possible loss of precision or truncation. The word cast, on the other hand, refers to explicitly changing the interpretation of the bit pattern representing a value from one type to another. For example, 32 contiguous bits may be treated as an array of 32 Booleans, a 4-byte string, an unsigned 32-bit integer or an IEEE single precision floating point value. Because the stored bits are never changed, the programmer must know low level details such as representation format, byte order, and alignment needs, to meaningfully cast.
In the C family of languages and ALGOL 68, the word cast typically refers to an explicit type conversion (as opposed to an implicit conversion), causing some ambiguity about whether this is a re-interpretation of a bit-pattern or a real data representation conversion. More important is the multitude of ways and rules that apply to what data type (or class) is located by a pointer and how a pointer may be adjusted by the compiler in cases like object (class) inheritance.
== Explicit casting in various languages ==
=== Ada ===
Ada provides a generic library function Unchecked_Conversion.
=== C-like languages ===
==== Implicit type conversion ====
Implicit type conversion, also known as coercion or type juggling, is an automatic type conversion by the compiler. Some programming languages allow compilers to provide coercion; others require it.
In a mixed-type expression, data of one or more subtypes can be converted to a supertype as needed at runtime so that the program will run correctly. For example, the following is legal C language code:
Although d, l, and i belong to different data types, they will be automatically converted to equal data types each time a comparison or assignment is executed. This behavior should be used with caution, as unintended consequences can arise. Data can be lost when converting representations from floating-point to integer, as the fractional components of the floating-point values will be truncated (rounded toward zero). Conversely, precision can be lost when converting representations from integer to floating-point, since a floating-point type may be unable to exactly represent all possible values of some integer type. For example, float might be an IEEE 754 single precision type, which cannot represent the integer 16777217 exactly, while a 32-bit integer type can. This can lead to unintuitive behavior, as demonstrated by the following code:
On compilers that implement floats as IEEE single precision, and ints as at least 32 bits, this code will give this peculiar print-out:
The integer is: 16777217
The float is: 16777216.000000
Their equality: 1
Note that 1 represents equality in the last line above. This odd behavior is caused by an implicit conversion of i_value to float when it is compared with f_value. The conversion causes loss of precision, which makes the values equal before the comparison.
Important takeaways:
float to int causes truncation, i.e., removal of the fractional part.
double to float causes rounding, since the narrower significand cannot hold all digits of the wider type.
long to int causes dropping of excess higher order bits.
===== Type promotion =====
One special case of implicit type conversion is type promotion, where an object is automatically converted into another data type representing a superset of the original type. Promotions are commonly used with types smaller than the native type of the target platform's arithmetic logic unit (ALU), before arithmetic and logical operations, to make such operations possible, or more efficient if the ALU can work with more than one type. C and C++ perform such promotion for objects of Boolean, character, wide character, enumeration, and short integer types which are promoted to int, and for objects of type float, which are promoted to double. Unlike some other type conversions, promotions never lose precision or modify the value stored in the object.
In Java:
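A small sketch of promotions in Java (the values and class name are illustrative):

```java
public class Promotion {
    public static void main(String[] args) {
        byte b = 10;
        int i = b * 2;     // b is promoted to int before the multiplication
        char c = 'A';
        int j = c + 1;     // c is promoted to int: 65 + 1
        float f = 1.5f;
        double d = f * 2;  // 2 is promoted to float; the product widens to double
        System.out.println(i + " " + j + " " + d);
    }
}
```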
==== Explicit type conversion ====
Explicit type conversion, also called type casting, is a type conversion which is explicitly defined within a program (instead of being done automatically according to the rules of the language for implicit type conversion). It is requested by the user in the program.
There are several kinds of explicit conversion.
checked
Before the conversion is performed, a runtime check is done to see if the destination type can hold the source value. If not, an error condition is raised.
unchecked
No check is performed. If the destination type cannot hold the source value, the result is undefined.
bit pattern
The raw bit representation of the source is copied verbatim, and it is re-interpreted according to the destination type. This can also be achieved via aliasing.
In object-oriented programming languages, objects can also be downcast: a reference of a base class is cast to one of its derived classes.
=== C# and C++ ===
In C#, type conversion can be made in a safe or unsafe (i.e., C-like) manner, the former called checked type cast.
In C++ a similar effect can be achieved using C++-style cast syntax.
=== Eiffel ===
In Eiffel the notion of type conversion is integrated into the rules of the type system. The Assignment Rule says that an assignment, such as:
x := y
is valid if and only if the type of its source expression, y in this case, is compatible with the type of its target entity, x in this case. In this rule, compatible with means that the type of the source expression either conforms to or converts to that of the target. Conformance of types is defined by the familiar rules for polymorphism in object-oriented programming. For example, in the assignment above, the type of y conforms to the type of x if the class upon which y is based is a descendant of that upon which x is based.
==== Definition of type conversion in Eiffel ====
The actions of type conversion in Eiffel, specifically converts to and converts from are defined as:
A type U based on a class CU converts to a type T based on a class CT (and T converts from U) if either
CT has a conversion procedure using U as a conversion type, or
CU has a conversion query listing T as a conversion type
==== Example ====
Eiffel is a fully compliant language for Microsoft .NET Framework. Before development of .NET, Eiffel already had extensive class libraries. Using the .NET type libraries, particularly with commonly used types such as strings, poses a conversion problem. Existing Eiffel software uses the string classes (such as STRING_8) from the Eiffel libraries, but Eiffel software written for .NET must use the .NET string class (System.String) in many cases, for example when calling .NET methods which expect items of the .NET type to be passed as arguments. So, the conversion of these types back and forth needs to be as seamless as possible.
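The situation in question can be sketched as follows (the variable names follow the surrounding discussion; the assignment is the one examined next):

```eiffel
my_string: STRING_8
my_system_string: SYSTEM_STRING
    ...
my_string := my_system_string
```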
In the code above, two strings are declared, one of each different type (SYSTEM_STRING is the Eiffel compliant alias for System.String). Because System.String does not conform to STRING_8, then the assignment above is valid only if System.String converts to STRING_8.
The Eiffel class STRING_8 has a conversion procedure make_from_cil for objects of type System.String. Conversion procedures are also always designated as creation procedures (similar to constructors). The following is an excerpt from the STRING_8 class:
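The relevant part plausibly looks like the following sketch (only the creation and conversion clauses are shown; the real class is much larger):

```eiffel
class STRING_8
    ...
create
    make_from_cil
    ...
convert
    make_from_cil ({SYSTEM_STRING})
    ...
```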
The presence of the conversion procedure makes the assignment:
my_string := my_system_string
semantically equivalent to:
create my_string.make_from_cil (my_system_string)
in which my_string is constructed as a new object of type STRING_8 with content equivalent to that of my_system_string.
To handle an assignment with original source and target reversed:
my_system_string := my_string
the class STRING_8 also contains a conversion query to_cil which will produce a System.String from an instance of STRING_8.
The assignment:
my_system_string := my_string
then, becomes equivalent to:
my_system_string := my_string.to_cil
In Eiffel, the setup for type conversion is included in the class code, but then appears to happen as automatically as explicit type conversion in client code. This includes not just assignments but other types of attachments as well, such as argument (parameter) substitution.
=== Rust ===
Rust provides no implicit type conversion (coercion) between primitive types. But, explicit type conversion (casting) can be performed using the as keyword.
== Type assertion ==
A related concept in static type systems is the type assertion, which instructs the compiler to treat an expression as being of a certain type, disregarding the type it would otherwise infer. A type assertion may be safe (a runtime check is performed) or unsafe. A type assertion does not convert the value from one data type to another.
=== TypeScript ===
In TypeScript, a type assertion is done by using the as keyword:
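The example discussed below is the canonical DOM one (main_canvas is an assumed element id; this fragment needs a browser environment to run):

```typescript
const canvas = document.getElementById("main_canvas") as HTMLCanvasElement;
```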
In the above example, document.getElementById is declared to return an HTMLElement, but in this case you know that it will always return an HTMLCanvasElement, which is a subtype of HTMLElement. If that is not actually the case, subsequent code that relies on the behaviour of HTMLCanvasElement will not perform correctly, as TypeScript does no runtime checking for type assertions.
In TypeScript, there is no general way to check whether a value is of a certain type at runtime, as there is no runtime type support. However, it is possible to write a user-defined function with which the user tells the compiler whether a value is of a certain type or not. Such a function is called a type guard, and is declared with a return type of x is Type (where x is a parameter or this) in place of boolean.
This allows unsafe type assertions to be contained in the checker function instead of littered around the codebase.
=== Go ===
In Go, a type assertion can be used to access a concrete type value from an interface value. The assertion is checked at run time: if the value is not of the asserted concrete type, the one-return-value form panics, while the two-return-value form returns a zero value together with false.
The assertion i.(T) thus tells the system that i holds a value of concrete type T.
== Implicit casting using untagged unions ==
Many programming languages support union types which can hold a value of multiple types. Untagged unions are provided in some languages with loose type-checking, such as C and PL/I, but also in the original Pascal. These can be used to interpret the bit pattern of one type as a value of another type.
== Security issues ==
In hacking, typecasting is the misuse of type conversion to temporarily change a variable's data type from how it was originally defined. This provides opportunities for attackers, because after a variable is "typecast" to a different data type, the compiler will treat the hacked variable as the new data type for that specific operation.
== See also ==
Downcasting
Run-time type information § C++ – dynamic cast and Java cast
Truth value
Type punning
== References ==
== External links ==
Casting in Ada
Casting in C++
C++ Reference Guide Why I hate C++ Cast Operators, by Danny Kalev
Casting in Java
Implicit Conversions in C#
Implicit Type Casting at Cppreference.com
Static and Reinterpretation castings in C++
Upcasting and Downcasting in F#
In computer programming, an enumerated type (also called enumeration, enum, or factor in the R programming language, a status variable in the JOVIAL programming language, and a categorical variable in statistics) is a data type consisting of a set of named values called elements, members, enumerals, or enumerators of the type. The enumerator names are usually identifiers that behave as constants in the language. An enumerated type can be seen as a degenerate tagged union of unit type. A variable that has been declared as having an enumerated type can be assigned any of the enumerators as a value. In other words, an enumerated type has values that are different from each other, and that can be compared and assigned, but are not generally specified by the programmer as having any particular concrete representation in the computer's memory; compilers and interpreters can represent them arbitrarily.
== Description ==
For example, the four suits in a deck of playing cards may be four enumerators named Club, Diamond, Heart, and Spade, belonging to an enumerated type named suit. If a variable V is declared having suit as its data type, one can assign any of those four values to it.
Although the enumerators are usually distinct, some languages may allow the same enumerator to be listed twice in the type's declaration. The names of enumerators need not be semantically complete or compatible in any sense. For example, an enumerated type called color may be defined to consist of the enumerators Red, Green, Zebra, Missing, and Bacon. In some languages, the declaration of an enumerated type also intentionally defines an ordering of its members (High, Medium and Low priorities); in others, the enumerators are unordered (English, French, German and Spanish supported languages); in others still, an implicit ordering arises from the compiler concretely representing enumerators as integers.
Some enumerated types may be built into the language. The Boolean type, for example, is often a pre-defined enumeration of the values False and True. A unit type consisting of a single value may also be defined to represent null. Many languages allow users to define new enumerated types.
Values and variables of an enumerated type are usually implemented with some integer type as the underlying representation. Some languages, especially system programming languages, allow the user to specify the bit combination to be used for each enumerator, which can be useful to efficiently represent sets of enumerators as fixed-length bit strings. In type theory, enumerated types are often regarded as tagged unions of unit types. Since such types are of the form 1 + 1 + ⋯ + 1, they may also be written as natural numbers.
== Rationale ==
Some early programming languages did not originally have enumerated types. If a programmer wanted a variable, for example myColor, to have a value of red, the variable red would be declared and assigned some arbitrary value, usually an integer constant. The variable red would then be assigned to myColor. Other techniques assigned arbitrary values to strings containing the names of the enumerators.
These arbitrary values were sometimes referred to as magic numbers since there often was no explanation as to how the numbers were obtained or whether their actual values were significant. These magic numbers could make the source code harder for others to understand and maintain.
Enumerated types, on the other hand, make the code more self-documenting. Depending on the language, the compiler could automatically assign default values to the enumerators, thereby hiding unnecessary detail from the programmer. These values may not even be visible to the programmer (see information hiding). Enumerated types can also prevent a programmer from writing illogical code such as performing mathematical operations on the values of the enumerators. If the value of a variable that was assigned an enumerator were to be printed, some programming languages could also print the name of the enumerator rather than its underlying numerical value. A further advantage is that enumerated types can allow compilers to enforce semantic correctness. For instance:
myColor = TRIANGLE
can be forbidden, whilst
myColor = RED
is accepted, even if TRIANGLE and RED are both internally represented as 1.
Conceptually, an enumerated type is similar to a list of nominals (numeric codes), since each possible value of the type is assigned a distinctive natural number. A given enumerated type is thus a concrete implementation of this notion. When order is meaningful and/or used for comparison, then an enumerated type becomes an ordinal type.
== Conventions ==
Programming languages tend to have their own, oftentimes multiple, programming styles and naming conventions. The variable assigned to an enumeration is usually a noun in singular form, and frequently follows either a PascalCase or uppercase convention, while lowercase and others are seen less frequently.
== Syntax in several programming languages ==
=== Pascal and syntactically similar languages ===
==== Pascal ====
In Pascal, an enumerated type can be implicitly declared by listing the values in a parenthesised list:
The declaration will often appear in a type synonym declaration, such that it can be used for multiple variables:
The order in which the enumeration values are given matters. An enumerated type is an ordinal type, and the pred and succ functions will give the prior or next value of the enumeration, and ord can convert enumeration values to their integer representation. Standard Pascal does not offer a conversion from arithmetic types to enumerations, however. Extended Pascal offers this functionality via an extended succ function. Some other Pascal dialects allow it via type-casts. Some modern descendants of Pascal, such as Modula-3, provide a special conversion syntax using a method called VAL; Modula-3 also treats BOOLEAN and CHAR as special pre-defined enumerated types and uses ORD and VAL for standard ASCII decoding and encoding.
Pascal style languages also allow enumeration to be used as array index:
==== Ada ====
In Ada, the use of "=" was replaced with "is" leaving the definition quite similar:
In addition to Pred, Succ, Val and Pos, Ada also supports simple string conversions via Image and Value.
Similar to C-style languages, Ada allows the internal representation of the enumeration to be specified:
Unlike C-style languages, Ada also allows the number of bits of the enumeration to be specified:
Additionally, one can use enumerations as indexes for arrays, like in Pascal, but there are also attributes defined for enumerations.
Like Modula-3, Ada treats Boolean and Character as special pre-defined (in package "Standard") enumerated types. Unlike Modula-3, one can also define one's own character types:
=== C and syntactically similar languages ===
==== C ====
The original K&R dialect of the programming language C had no enumerated types. In C, enumerations are created by explicit definitions (the enum keyword by itself does not cause allocation of storage) which use the enum keyword and are reminiscent of struct and union definitions:
C exposes the integer representation of enumeration values directly to the programmer. Integers and enum values can be mixed freely, and all arithmetic operations on enum values are permitted. It is even possible for an enum variable to hold an integer that does not represent any of the enumeration values. In fact, according to the language definition, the above code will define Clubs, Diamonds, Hearts, and Spades as constants of type int, which will only be converted (silently) to enum cardsuit if they are stored in a variable of that type.
C also allows the programmer to choose the values of the enumeration constants explicitly, even without type. For example,
could be used to define a type that allows mathematical sets of suits to be represented as an enum cardsuit by bitwise logic operations.
Since C23, the underlying type of an enumeration can be specified by the programmer:
==== C# ====
Enumerated types in the C# programming language preserve most of the "small integer" semantics of C's enums. Some arithmetic operations are not defined for enums, but an enum value can be explicitly converted to an integer and back again, and an enum variable can have values that were not declared by the enum definition. For example, given
the expressions CardSuit.Diamonds + 1 and CardSuit.Hearts - CardSuit.Clubs are allowed directly (because it may make sense to step through the sequence of values or ask how many steps there are between two values), but CardSuit.Hearts * CardSuit.Spades is deemed to make less sense and is only allowed if the values are first converted to integers.
C# also provides the C-like feature of being able to define specific integer values for enumerations. By doing this it is possible to perform binary operations on enumerations, thus treating enumeration values as sets of flags. These flags can be tested using binary operations or with the enum type's built-in 'HasFlag' method.
The enumeration definition defines names for the selected integer values and is syntactic sugar, as it is possible to assign to an enum variable other integer values that are not in the scope of the enum definition.
==== C++ ====
C++ has enumeration types that are directly inherited from C's and work mostly like these, except that an enumeration is a real type in C++, giving added compile-time checking. Also (as with structs), the C++ enum keyword is combined with a typedef, so that instead of naming the type enum name, simply name it name. This can be simulated in C using a typedef: typedef enum {Value1, Value2} name;
C++11 also provides a second kind of enumeration, called a scoped enumeration. These are type-safe: the enumerators are not implicitly converted to an integer type. Among other things, this allows I/O streaming to be defined for the enumeration type. Another feature of scoped enumerations is that the enumerators do not leak, so usage requires prefixing with the name of the enumeration (e.g., Color::Red for the first enumerator in the example below), unless a using enum declaration (introduced in C++20) has been used to bring the enumerators into the current scope. A scoped enumeration is specified by the phrase enum class (or enum struct). For example:
The underlying type of an enumeration is an implementation-defined integral type that is large enough to hold all enumerated values; it does not have to be the smallest possible type. The underlying type can be specified directly, which allows "forward declarations" of enumerations:
==== Go ====
Go uses the iota keyword to create enumerated constants.
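A sketch of the idiom (the type and constant names are illustrative):

```go
package main

import "fmt"

type CardSuit int

// iota restarts at 0 in each const block and increments per line.
const (
	Clubs CardSuit = iota // 0
	Diamonds              // 1
	Hearts                // 2
	Spades                // 3
)

func main() {
	fmt.Println(Clubs, Spades) // prints: 0 3
}
```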
==== Java ====
The J2SE version 5.0 of the Java programming language added enumerated types whose declaration syntax is similar to that of C:
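A minimal sketch of such a declaration, with suit names following the article's card example:

```java
// A compiler-generated class, not an arithmetic type.
enum CardSuit { CLUBS, DIAMONDS, HEARTS, SPADES }

public class Main {
    public static void main(String[] args) {
        CardSuit s = CardSuit.HEARTS;
        // int n = s;                             // compile error: no int intermixing
        System.out.println(s.ordinal());          // 2, from declaration order
        System.out.println(CardSuit.values().length); // 4
    }
}
```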
The Java type system, however, treats enumerations as a type separate from integers, and intermixing of enum and integer values is not allowed. In fact, an enum type in Java is actually a special compiler-generated class rather than an arithmetic type, and enum values behave as global pre-generated instances of that class. Enum types can have instance methods and a constructor (the arguments of which can be specified separately for each enum value). All enum types implicitly extend the Enum abstract class. An enum type cannot be instantiated directly.
Internally, each enum value contains an integer, corresponding to the order in which they are declared in the source code, starting from 0. The programmer cannot set a custom integer for an enum value directly, but one can define overloaded constructors that can then assign arbitrary values to self-defined members of the enum class. Defining getters allows then access to those self-defined members. The internal integer can be obtained from an enum value using the ordinal() method, and the list of enum values of an enumeration type can be obtained in order using the values() method. It is generally discouraged for programmers to convert enums to integers and vice versa. Enumerated types are Comparable, using the internal integer; as a result, they can be sorted.
The Java standard library provides utility classes to use with enumerations. The EnumSet class implements a Set of enum values; it is implemented as a bit array, which makes it very compact and as efficient as explicit bit manipulation, but safer. The EnumMap class implements a Map of enum values to object. It is implemented as an array, with the integer value of the enum value serving as the index.
==== Perl ====
Dynamically typed languages in the syntactic tradition of C (e.g., Perl or JavaScript) do not, in general, provide enumerations. But in Perl, the same result can be obtained with shorthand string lists and hashes (possibly slices):
==== Raku ====
Raku (formerly known as Perl 6) supports enumerations. There are multiple ways to declare enumerations in Raku, all creating a back-end Map.
==== PHP ====
Enums were added in PHP version 8.1.
Enumerators may be backed by string or integer values to aid serialization:
The Enum's interface exposes a method that gives a collection of its enumerators and their names. String/integer-backed Enums also expose the backing value and methods to (attempt) deserialization. Users may add further methods.
==== Rust ====
Though Rust uses the enum keyword like C, it uses it to describe tagged unions, which enums can be considered a degenerate form of. Rust's enums are therefore much more flexible and can contain struct and tuple variants.
Like C, Rust also supports specifying the values of each variant,
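A sketch combining both points; the type and variant names here are illustrative:

```rust
// A tagged union: variants may carry data of different shapes.
enum Shape {
    Circle(f64),                 // tuple variant
    Rect { w: f64, h: f64 },     // struct variant
}

// A C-like enum with explicitly specified discriminant values.
enum Suit { Clubs = 1, Diamonds = 2, Hearts = 4, Spades = 8 }

fn area(s: &Shape) -> f64 {
    // Pattern matching destructures whichever variant is present.
    match s {
        Shape::Circle(r) => std::f64::consts::PI * r * r,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    assert_eq!(area(&Shape::Rect { w: 2.0, h: 3.0 }), 6.0);
    assert!((area(&Shape::Circle(1.0)) - std::f64::consts::PI).abs() < 1e-12);
    assert_eq!(Suit::Clubs as i32, 1);
    assert_eq!(Suit::Diamonds as i32, 2);
    assert_eq!(Suit::Spades as i32, 8);
}
```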
==== Swift ====
In C, enumerations assign related names to a set of integer values. In Swift, enumerations are much more flexible and need not provide a value for each case of the enumeration. If a value (termed a raw value) is provided for each enumeration case, the value can be a string, a character, or a value of any integer or floating-point type.
Alternatively, enumeration cases can specify associated values of any type to be stored along with each different case value, much as unions or variants do in other languages. One can define a common set of related cases as part of one enumeration, each of which has a different set of values of appropriate types associated with it.
In Swift, enumerations are a first-class type. They adopt many features traditionally supported only by classes, such as computed properties to provide additional information about the enumeration's current value, and instance methods to provide functionality related to the values the enumeration represents. Enumerations can also define initializers to provide an initial case value and can be extended to expand their functionality beyond their original implementation; and can conform to protocols to provide standard functionality.
Unlike C and Objective-C, Swift enumeration cases are not assigned a default integer value when they are created. In the CardSuit example above, clubs, diamonds, hearts, and spades do not implicitly equal 0, 1, 2 and 3. Instead, the different enumeration cases are fully-fledged values in their own right, with an explicitly-defined type of CardSuit.
Multiple cases can appear on a single line, separated by commas:
When working with enumerations that store integer or string raw values, one doesn't need to explicitly assign a raw value for each case because Swift will automatically assign the values.
For instance, when integers are used for raw values, the implicit value for each case is one more than the previous case. If the first case doesn't have a value set, its value is 0. For the CardSuit example, suits can be numbered starting from 1 by writing:
==== TypeScript ====
TypeScript adds an 'enum' data type to JavaScript.
By default, enums number members starting at 0; this can be overridden by setting the value of the first:
All the values can be set:
TypeScript supports mapping the numeric value to its name. For example, this finds the name of the value 2:
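A sketch covering the three cases above; the Color enum is illustrative:

```typescript
enum Color {
  Red = 1,      // overriding the default start of 0
  Green,        // 2, continues from the previous member
  Blue,         // 3
}

// Numeric enums get a reverse mapping from value back to name.
const colorName: string = Color[2];
console.log(colorName);     // "Green"
console.log(Color.Blue);    // 3
```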
==== Python ====
An enum module was added to the Python standard library in version 3.4.
There is also a functional API for creating enumerations with automatically generated indices (starting with one):
Python enumerations do not enforce semantic correctness (a meaningless comparison to an incompatible enumeration always returns False rather than raising a TypeError):
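A combined sketch of the class-based API, the functional API, and the comparison behavior; the enum names are illustrative:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

# Functional API with auto-generated values, starting at 1.
Suit = Enum("Suit", ["CLUBS", "DIAMONDS", "HEARTS", "SPADES"])

print(Color.RED.name, Color.RED.value)   # RED 1
print(Suit.HEARTS.value)                 # 3

# Comparison across unrelated enums is False, not a TypeError.
print(Color.RED == Suit.CLUBS)           # False
```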
==== Fortran ====
Fortran only has enumerated types for interoperability with C; hence, the semantics is similar to C and, as in C, the enum values are just integers and no further type check is done. The C example from above can be written in Fortran as
==== Visual Basic/VBA ====
Enumerated datatypes in Visual Basic (up to version 6) and VBA are automatically assigned the "Long" datatype and also become a datatype themselves:
Example code in VB.NET:
==== Lisp ====
Common Lisp uses the member type specifier, e.g.,
that states that an object is of type cardsuit if it is #'eql to club, diamond, heart or spade. The member type specifier is not valid as a Common Lisp Object System (CLOS) parameter specializer, however. Instead, (eql atom), which is equivalent to (member atom), may be used; that is, only one member of the set may be specified with an eql type specifier, but it may be used as a CLOS parameter specializer. In other words, to define methods to cover an enumerated type, a method must be defined for each specific element of that type.
Additionally,
may be used to define arbitrary enumerated types at runtime. For instance
would refer to a type equivalent to the prior definition of cardsuit, as of course would simply have been using
but may be less confusing with the function #'member for stylistic reasons.
==== Dart ====
Dart supports the most basic form of enums, with a syntax similar to that of other languages supporting enums.
Note that the switch statement does not guarantee the completeness of the cases: if one case is omitted, the compiler will not raise an error.
== Algebraic data type in functional programming ==
In functional programming languages in the ML lineage (e.g., Standard ML (SML), OCaml, and Haskell), an algebraic data type with only nullary constructors can be used to implement an enumerated type. For example (in the syntax of SML signatures):
In these languages the small-integer representation is completely hidden from the programmer, if indeed such a representation is employed by the implementation. However, Haskell has the Enum type class which a type can derive or implement to get a mapping between the type and Int.
== Databases ==
Some databases support enumerated types directly. MySQL provides an enumerated type ENUM with allowable values specified as strings when a table is created. The values are stored as numeric indices with the empty string stored as 0, the first string value stored as 1, the second string value stored as 2, etc. Values can be stored and retrieved as numeric indexes or string values.
Example:
== XML Schema ==
XML Schema supports enumerated types through the enumeration facet used for constraining most primitive datatypes such as strings.
== See also ==
Contrast set
== Notes ==
== References ==
== External links ==
Enumerated types in C/C++
Enumerated types in C#
Enumerated types in Java
Enumerated types in MySQL
Enumerated types in Obix
Enumerated types in PHP
Enumerated types in Swift
Enumerated types in XML
Enumerated types in Visual Basic
In computer programming, a variable is an abstract storage location paired with an associated symbolic name, which contains some known or unknown quantity of data or object referred to as a value; or in simpler terms, a variable is a named container for a particular set of bits or type of data (like integer, float, string, etc.). A variable can eventually be associated with or identified by a memory address. The variable name is the usual way to reference the stored value, in addition to referring to the variable itself, depending on the context. This separation of name and content allows the name to be used independently of the exact information it represents. The identifier in computer source code can be bound to a value during run time, and the value of the variable may thus change during the course of program execution.
Variables in programming may not directly correspond to the concept of variables in mathematics. The latter is abstract, having no reference to a physical object such as storage location. The value of a computing variable is not necessarily part of an equation or formula as in mathematics. Variables in computer programming are frequently given long names to make them relatively descriptive of their use, whereas variables in mathematics often have terse, one- or two-character names for brevity in transcription and manipulation.
A variable's storage location may be referenced by several different identifiers, a situation known as aliasing. Assigning a value to the variable using one of the identifiers will change the value that can be accessed through the other identifiers.
Compilers have to replace variables' symbolic names with the actual locations of the data. While a variable's name, type, and location often remain fixed, the data stored in the location may be changed during program execution.
== Actions on a variable ==
In imperative programming languages, values can generally be accessed or changed at any time. In pure functional and logic languages, variables are bound to expressions and keep a single value during their entire lifetime due to the requirements of referential transparency. In imperative languages, the same behavior is exhibited by (named) constants (symbolic constants), which are typically contrasted with (normal) variables.
Depending on the type system of a programming language, variables may only be able to store a specified data type (e.g. integer or string). Alternatively, a datatype may be associated only with the current value, allowing a single variable to store anything supported by the programming language. Variables are the containers for storing the values.
Variables and scope:
Automatic variables: Each local variable in a function comes into existence only when the function is called, and disappears when the function is exited. Such variables are known as automatic variables.
External variables: These are variables that are external to a function and can be accessed by name by any function. These variables remain in existence permanently; rather than appearing and disappearing as functions are called and exited, they retain their values even after the functions that set them have returned.
== Identifiers referencing a variable ==
An identifier referencing a variable can be used to access the variable in order to read out the value, or alter the value, or edit other attributes of the variable, such as access permission, locks, semaphores, etc.
For instance, a variable might be referenced by the identifier "total_count" and the variable can contain the number 1956. If the same variable is referenced by the identifier "r" as well, and if using this identifier "r", the value of the variable is altered to 2009, then reading the value using the identifier "total_count" will yield a result of 2009 and not 1956.
If a variable is only referenced by a single identifier, that identifier can simply be called the name of the variable; otherwise, we can speak of it as one of the names of the variable. For instance, in the previous example the identifier "total_count" is the name of the variable in question, and "r" is another name of the same variable.
== Scope and extent ==
The scope of a variable describes where in a program's text the variable may be used, while the extent (also called lifetime) of a variable describes when in a program's execution the variable has a (meaningful) value. The scope of a variable affects its extent. The scope of a variable is actually a property of the name of the variable, and the extent is a property of the storage location of the variable. These should not be confused with context (also called environment), which is a property of the program, and varies by point in the program's text or execution—see scope: an overview. Further, object lifetime may coincide with variable lifetime, but in many cases is not tied to it.
Scope is an important part of the name resolution of a variable. Most languages define a specific scope for each variable (as well as any other named entity), which may differ within a given program. The scope of a variable is the portion of the program's text for which the variable's name has meaning and for which the variable is said to be "visible". Entrance into that scope typically begins a variable's lifetime (as it comes into context) and exit from that scope typically ends its lifetime (as it goes out of context). For instance, a variable with "lexical scope" is meaningful only within a certain function/subroutine, or more finely within a block of expressions/statements (accordingly with function scope or block scope); this is static resolution, performable at parse-time or compile-time. Alternatively, a variable with dynamic scope is resolved at run-time, based on a global binding stack that depends on the specific control flow. Variables only accessible within a certain function are termed "local variables". A "global variable", or one with indefinite scope, may be referred to anywhere in the program.
Extent, on the other hand, is a runtime (dynamic) aspect of a variable. Each binding of a variable to a value can have its own extent at runtime. The extent of the binding is the portion of the program's execution time during which the variable continues to refer to the same value or memory location. A running program may enter and leave a given extent many times, as in the case of a closure.
Unless the programming language features garbage collection, a variable whose extent permanently outlasts its scope can result in a memory leak, whereby the memory allocated for the variable can never be freed since the variable which would be used to reference it for deallocation purposes is no longer accessible. However, it can be permissible for a variable binding to extend beyond its scope, as occurs in Lisp closures and C static local variables; when execution passes back into the variable's scope, the variable may once again be used. A variable whose scope begins before its extent does is said to be uninitialized and often has an undefined, arbitrary value if accessed (see wild pointer), since it has yet to be explicitly given a particular value. A variable whose extent ends before its scope may become a dangling pointer and deemed uninitialized once more since its value has been destroyed. Variables described by the previous two cases may be said to be out of extent or unbound. In many languages, it is an error to try to use the value of a variable when it is out of extent. In other languages, doing so may yield unpredictable results. Such a variable may, however, be assigned a new value, which gives it a new extent.
For space efficiency, the memory space needed for a variable may be allocated only when the variable is first used and freed when it is no longer needed. A variable is only needed while it is in scope, so beginning each variable's lifetime only when it enters scope avoids reserving space for variables that are never used. To avoid wasting such space, compilers often warn programmers if a variable is declared but not used.
It is considered good programming practice to make the scope of variables as narrow as feasible so that different parts of a program do not accidentally interact with each other by modifying each other's variables. Doing so also prevents action at a distance. Common techniques for doing so are to have different sections of a program use different name spaces, or to make individual variables "private" through either dynamic variable scoping or lexical variable scoping.
Many programming languages employ a reserved value (often named null or nil) to indicate an invalid or uninitialized variable.
== Typing ==
In statically typed languages such as C, C++, Java or C#, a variable also has a type, meaning that only certain kinds of values can be stored in it. For example, a variable of type "integer" is prohibited from storing text values.
In dynamically typed languages such as Python, a variable's type is inferred by its value, and can change according to its value. In Common Lisp, both situations exist simultaneously: A variable is given a type (if undeclared, it is assumed to be T, the universal supertype) which exists at compile time. Values also have types, which can be checked and queried at runtime.
Typing of variables also allows polymorphisms to be resolved at compile time. However, this is different from the polymorphism used in object-oriented function calls (referred to as virtual functions in C++) which resolves the call based on the value type as opposed to the supertypes the variable is allowed to have.
Variables often store simple data, like integers and literal strings, but some programming languages allow a variable to store values of other datatypes as well. Such languages may also enable functions to be parametric polymorphic. These functions operate like variables to represent data of multiple types. For example, a function named length may determine the length of a list. Such a length function may be parametric polymorphic by including a type variable in its type signature, since the number of elements in the list is independent of the elements' types.
== Parameters ==
The formal parameters (or formal arguments) of functions are also referred to as variables. For instance, in this Python code segment,
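The code segment itself is not reproduced here; a reconstruction consistent with the surrounding description (a function addtwo with parameter x, called with the argument 5) would be:

```python
def addtwo(x):
    # x is a formal parameter with local scope inside addtwo.
    x += 2
    return x

addtwo(5)  # the argument 5 gives x its value for this call
```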
the variable named x is a parameter because it is given a value when the function is called. The integer 5 is the argument which gives x its value. In most languages, function parameters have local scope. This specific variable named x can only be referred to within the addtwo function (though of course other functions can also have variables called x).
== Memory allocation ==
The specifics of variable allocation and the representation of their values vary widely, both among programming languages and among implementations of a given language. Many language implementations allocate space for local variables, whose extent lasts for a single function call on the call stack, and whose memory is automatically reclaimed when the function returns. More generally, in name binding, the name of a variable is bound to the address of some particular block (contiguous sequence) of bytes in memory, and operations on the variable manipulate that block. Referencing is more common for variables whose values have large or unknown sizes when the code is compiled. Such variables reference the location of the value instead of storing the value itself, which is allocated from a pool of memory called the heap.
Bound variables have values. A value, however, is an abstraction, an idea; in implementation, a value is represented by some data object, which is stored somewhere in computer memory. The program, or the runtime environment, must set aside memory for each data object and, since memory is finite, ensure that this memory is yielded for reuse when the object is no longer needed to represent some variable's value.
Objects allocated from the heap must be reclaimed—especially when the objects are no longer needed. In a garbage-collected language (such as C#, Java, Python, Go and Lisp), the runtime environment automatically reclaims objects when extant variables can no longer refer to them. In non-garbage-collected languages, such as C, the program (and the programmer) must explicitly allocate memory, and then later free it, to reclaim its memory. Failure to do so leads to memory leaks, in which the heap is depleted as the program runs, risking eventual failure from exhausting available memory.
When a variable refers to a data structure created dynamically, some of its components may be only indirectly accessed through the variable. In such circumstances, garbage collectors (or analogous program features in languages that lack garbage collectors) must deal with a case where only a portion of the memory reachable from the variable needs to be reclaimed.
== Naming conventions ==
Unlike their mathematical counterparts, programming variables and constants commonly take multiple-character names, e.g. COST or total. Single-character names are most commonly used only for auxiliary variables; for instance, i, j, k for array index variables.
Some naming conventions are enforced at the language level as part of the language syntax which involves the format of valid identifiers. In almost all languages, variable names cannot start with a digit (0–9) and cannot contain whitespace characters. Whether or not punctuation marks are permitted in variable names varies from language to language; many languages only permit the underscore ("_") in variable names and forbid all other punctuation. In some programming languages, sigils (symbols or punctuation) are affixed to variable identifiers to indicate the variable's datatype or scope.
Case-sensitivity of variable names also varies between languages and some languages require the use of a certain case in naming certain entities; Most modern languages are case-sensitive; some older languages are not. Some languages reserve certain forms of variable names for their own internal use; in many languages, names beginning with two underscores ("__") often fall under this category.
However, beyond the basic restrictions imposed by a language, the naming of variables is largely a matter of style. At the machine code level, variable names are not used, so the exact names chosen do not matter to the computer. Names thus identify variables; beyond that, they are simply a tool for programmers to make programs easier to write and understand. Poorly chosen variable names can make code more difficult to review, so clear, descriptive names are often encouraged.
Programmers often create and adhere to code style guidelines that offer guidance on naming variables or impose a precise naming scheme. Shorter names are faster to type but are less descriptive; longer names often make programs easier to read and the purpose of variables easier to understand. However, extreme verbosity in variable names can also lead to less comprehensible code.
== Variable types (based on lifetime) ==
Variables can be classified by their lifetime into four types: static, stack-dynamic, explicit heap-dynamic, and implicit heap-dynamic. A static variable, also known as a global variable, is bound to a memory cell before execution begins and remains bound to the same memory cell until termination; typical examples are the static variables in C and C++. A stack-dynamic variable, known as a local variable, is bound when the declaration statement is executed and deallocated when the procedure returns; the main examples are local variables in C subprograms and Java methods. Explicit heap-dynamic variables are nameless (abstract) memory cells that are allocated and deallocated by explicit run-time instructions specified by the programmer; the main examples are dynamic objects in C++ (via new and delete) and all objects in Java. Implicit heap-dynamic variables are bound to heap storage only when they are assigned values; allocation and release occur when values are reassigned, which gives implicit heap-dynamic variables the highest degree of flexibility. The main examples are some variables in JavaScript and PHP, and all variables in APL.
== See also ==
Control variable (programming)
Non-local variable
Temporary variable
Variable interpolation
Scalar (mathematics)
== Notes ==
== References ==
=== Works cited ===
Brookshear, J. Glenn (2019). "Computer Science: An Overview" (PDF). Retrieved 2024-04-01.
In computer science, the term range may refer to one of three things:
The possible values that may be stored in a variable.
The upper and lower bounds of an array.
An alternative to iterator.
== Range of a variable ==
The range of a variable is the set of possible values that the variable can hold. For an integer variable, the values are restricted to whole numbers, and the range covers every integer between the type's minimum and maximum (inclusive). For example, the range of a signed 16-bit integer variable is all the integers from −32,768 to +32,767.
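These bounds follow from the two's-complement representation; a quick Python check (the helper name is illustrative):

```python
# A signed 16-bit integer has one sign bit and 15 value bits, giving
# the two's-complement range [-2**15, 2**15 - 1].
INT16_MIN, INT16_MAX = -2**15, 2**15 - 1
assert (INT16_MIN, INT16_MAX) == (-32768, 32767)

def wrap_int16(n):
    """Reduce an arbitrary integer to its 16-bit two's-complement value."""
    return ((n + 2**15) % 2**16) - 2**15

# A value inside the range is unchanged; exceeding the maximum wraps
# around to the minimum (overflow).
assert wrap_int16(INT16_MAX) == 32767
assert wrap_int16(INT16_MAX + 1) == -32768
```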
== Range of an array ==
When an array is numerically indexed, its range is the upper and lower bound of the array. Depending on the environment, a warning, a fatal exception, or unpredictable behavior will occur if the program attempts to access an array element that is outside the range. In some programming languages, such as C, arrays have a fixed lower bound (zero) and will contain data at each position up to the upper bound (so an array with 5 elements will have a range of 0 to 4). In others, such as PHP, an array may have holes where no element is defined, and therefore an array with a range of 0 to 4 will have up to 5 elements (and a minimum of 2).
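The two behaviours can be contrasted in Python, whose lists are contiguous like C arrays, while dicts can emulate PHP-style arrays with holes (the variable names are illustrative):

```python
# A Python list, like a C array, is contiguous: valid indices run from
# 0 to len-1, and access outside that range raises an exception.
dense = [10, 20, 30, 40, 50]      # range of indices: 0 to 4
try:
    dense[5]
    out_of_range = False
except IndexError:
    out_of_range = True

# A dict can emulate a PHP-style array with "holes": indices 0 and 4
# are defined, so the range is still 0 to 4, but only 2 elements exist.
sparse = {0: "first", 4: "last"}
lower, upper = min(sparse), max(sparse)
assert (lower, upper) == (0, 4)
assert len(sparse) == 2
```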
== Range as an alternative to iterator ==
Another meaning of range in computer science is an alternative to iterator. When used in this sense, range is defined as "a pair of begin/end iterators packed together". It is argued that "Ranges are a superior abstraction" (compared to iterators) for several reasons, including better safety.
In particular, such ranges are supported in C++20, Boost C++ Libraries and the D standard library.
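The "pair of begin/end iterators packed together" idea can be sketched in Python; the class below only illustrates the C++/D concept (D's input ranges expose empty/front/popFront), and its name is hypothetical:

```python
class SeqRange:
    """A half-open [begin, end) view over a sequence, packing the two
    bounds into one object -- the 'pair of iterators' idea from C++/D."""
    def __init__(self, seq, begin, end):
        self.seq, self.begin, self.end = seq, begin, end

    def empty(self):
        return self.begin >= self.end

    def front(self):
        return self.seq[self.begin]

    def pop_front(self):
        # Shrink the range from the front, as D's input ranges do.
        self.begin += 1

data = [3, 1, 4, 1, 5]
r = SeqRange(data, 1, 4)   # one object instead of two loose iterators
seen = []
while not r.empty():
    seen.append(r.front())
    r.pop_front()
```

Passing one range object around, rather than two independent iterators, is the safety argument: the two bounds cannot accidentally be separated or mismatched.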
== Range as a data type ==
A data type for ranges can be implemented using generics.
Implementations using generics can be written in, for example, C#, Kotlin, PHP, and Python.
Rust has a built-in range struct, std::ops::Range, in the standard library.
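A generic range data type of the kind mentioned above might look as follows in Python (a sketch using typing.Generic; the class and variable names are illustrative):

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")  # any element type supporting <= comparison

@dataclass(frozen=True)
class Range(Generic[T]):
    """An inclusive range [start, end] over any comparable type."""
    start: T
    end: T

    def __contains__(self, value: T) -> bool:
        return self.start <= value <= self.end

ints = Range(1, 10)
assert 5 in ints and 11 not in ints

# Because the type is generic, the same code works for strings too.
letters = Range("a", "f")
assert "c" in letters and "z" not in letters
```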
== Range as an operator ==
Rust has the .. and ..= operators.
Zig also has the .. operator.
As does C#,
F#,
Kotlin,
and Perl.
Python and PHP do not have a range operator, but both provide a range function.
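Python's range function has half-open semantics, matching Rust's .. operator (Rust's ..= and Kotlin's .. are inclusive instead):

```python
# range(start, stop) is half-open: stop is excluded, like Rust's 1..4.
assert list(range(1, 4)) == [1, 2, 3]

# An inclusive range (Rust's 1..=4, Kotlin's 1..4) needs stop + 1:
assert list(range(1, 4 + 1)) == [1, 2, 3, 4]

# range is a lazy sequence, not a materialized list: membership tests
# and length are computed arithmetically without generating elements.
big = range(0, 10**12)
assert 999_999_999 in big
assert len(big) == 10**12
```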
== See also ==
Interval
== References ==
The Journal of Computational and Graphical Statistics is a quarterly peer-reviewed scientific journal published by Taylor & Francis on behalf of the American Statistical Association. Established in 1992, the journal covers the use of computational and graphical methods in statistics and data analysis, including numerical methods, graphical displays and methods, and perception. It is published jointly with the Institute of Mathematical Statistics and the Interface Foundation of North America. According to the Journal Citation Reports, the journal has a 2021 impact factor of 1.884.
== See also ==
List of statistics journals
== References ==
== External links ==
Official website
Extract, load, transform (ELT) is an alternative to extract, transform, load (ETL) used with data lake implementations. In contrast to ETL, in ELT models the data is not transformed on entry to the data lake, but stored in its original raw format. This enables faster loading times. However, ELT requires sufficient processing power within the data processing engine to carry out the transformation on demand, to return the results in a timely manner. Since the data is not processed on entry to the data lake, the query and schema do not need to be defined a priori (although often the schema will be available during load since many data sources are extracts from databases or similar structured data systems and hence have an associated schema). ELT is a data pipeline model.
== Benefits ==
Some of the benefits of an ELT process include speed and the ability to handle both structured and unstructured data.
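A minimal ELT pipeline can be sketched in Python; the "lake" here is just an in-memory list of raw records, and all names and data are hypothetical:

```python
import json

# Extract: raw records arrive as JSON strings from some source system.
raw_events = [
    '{"user": "a", "amount": "19.99", "ts": "2024-01-05"}',
    '{"user": "b", "amount": "5.00",  "ts": "2024-01-06"}',
]

# Load: store the records in their original raw form -- no schema is
# imposed and no transformation happens on entry (the defining trait
# of ELT, and why loading is fast).
data_lake = list(raw_events)

# Transform: the schema is applied only at query time, on demand.
def monthly_total(lake, month_prefix):
    total = 0.0
    for raw in lake:
        event = json.loads(raw)              # parse on read
        if event["ts"].startswith(month_prefix):
            total += float(event["amount"])  # cast on read
    return total

assert round(monthly_total(data_lake, "2024-01"), 2) == 24.99
```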
== Cloud data lake components ==
=== Common storage options ===
AWS
Simple Storage Service (S3)
Amazon RDS
Azure
Azure Blob Storage
GCP
Google Storage (GCS)
=== Querying ===
AWS
Redshift Spectrum
Athena
EMR (Presto)
Azure
Azure Data Lake
GCP
BigQuery
== References ==
== External links ==
Dull, Tamara, "The Data Lake Debate: Pro is Up First", smartdatacollective.com, March 20, 2015.
ELT: Extract, Load, and Transform A Complete Guide | Astera Software
In the design of experiments, optimal experimental designs (or optimum designs) are a class of experimental designs that are optimal with respect to some statistical criterion. The creation of this field of statistics has been credited to Danish statistician Kirstine Smith.
In the design of experiments for estimating statistical models, optimal designs allow parameters to be estimated without bias and with minimum variance. A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation.
The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge with designing experiments.
== Advantages ==
Optimal designs offer three advantages over sub-optimal experimental designs:
Optimal designs reduce the costs of experimentation by allowing statistical models to be estimated with fewer experimental runs.
Optimal designs can accommodate multiple types of factors, such as process, mixture, and discrete factors.
Designs can be optimized when the design-space is constrained, for example, when the mathematical process-space contains factor-settings that are practically infeasible (e.g. due to safety concerns).
== Minimizing the variance of estimators ==
Experimental designs are evaluated using statistical criteria.
It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem). In the estimation theory for statistical models with one real parameter, the reciprocal of the variance of an ("efficient") estimator is called the "Fisher information" for that estimator. Because of this reciprocity, minimizing the variance corresponds to maximizing the information.
When the statistical model has several parameters, however, the mean of the parameter-estimator is a vector and its variance is a matrix. The inverse matrix of the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized. The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix.
A-optimality ("average" or trace)
One criterion is A-optimality, which seeks to minimize the trace of the inverse of the information matrix. This criterion results in minimizing the average variance of the estimates of the regression coefficients.
C-optimality
This criterion minimizes the variance of a best linear unbiased estimator of a predetermined linear combination of model parameters.
D-optimality (determinant)
A popular criterion is D-optimality, which seeks to minimize |(X'X)−1|, or equivalently maximize the determinant of the information matrix X'X of the design. This criterion results in maximizing the differential Shannon information content of the parameter estimates.
E-optimality (eigenvalue)
Another design is E-optimality, which maximizes the minimum eigenvalue of the information matrix.
S-optimality
This criterion maximizes a quantity measuring the mutual column orthogonality of X and the determinant of the information matrix.
T-optimality
This criterion maximizes the discrepancy between two proposed models at the design locations.
Other optimality-criteria are concerned with the variance of predictions:
G-optimality
A popular criterion is G-optimality, which seeks to minimize the maximum entry in the diagonal of the hat matrix X(X'X)−1X'. This has the effect of minimizing the maximum variance of the predicted values.
I-optimality (integrated)
A second criterion on prediction variance is I-optimality, which seeks to minimize the average prediction variance over the design space.
V-optimality (variance)
A third criterion on prediction variance is V-optimality, which seeks to minimize the average prediction variance over a set of m specific points.
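For a two-parameter model the traditional criteria can be computed by hand. The sketch below compares two candidate designs for simple linear regression y = β0 + β1x on [−1, 1], using pure Python for the 2×2 information matrices (function names are illustrative):

```python
# Per-run information matrix M = sum_i w_i f(x_i) f(x_i)^T for the
# regression function f(x) = (1, x), stored as the triple (a, b, c)
# of the symmetric matrix [[a, b], [b, c]].
def info_matrix(points, weights):
    a = sum(weights)                              # sum of w * 1 * 1
    b = sum(w * x for x, w in zip(points, weights))
    c = sum(w * x * x for x, w in zip(points, weights))
    return a, b, c

def d_criterion(M):                 # determinant of M (to maximize)
    a, b, c = M
    return a * c - b * b

def a_criterion(M):                 # trace of M^-1 (to minimize)
    a, b, c = M
    return (a + c) / (a * c - b * b)

def e_criterion(M):                 # smallest eigenvalue (to maximize)
    a, b, c = M
    mean, half = (a + c) / 2, (((a - c) / 2) ** 2 + b * b) ** 0.5
    return mean - half

endpoints = info_matrix([-1.0, 1.0], [0.5, 0.5])         # mass at +/-1
spread = info_matrix([-1.0, 0.0, 1.0], [1/3, 1/3, 1/3])  # 3 equal points

# For this model the endpoint design wins on all three criteria.
assert d_criterion(endpoints) > d_criterion(spread)
assert a_criterion(endpoints) < a_criterion(spread)
assert e_criterion(endpoints) > e_criterion(spread)
```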
=== Contrasts ===
In many applications, the statistician is most concerned with a "parameter of interest" rather than with "nuisance parameters". More generally, statisticians consider linear combinations of parameters, which are estimated via linear combinations of treatment-means in the design of experiments and in the analysis of variance; such linear combinations are called contrasts. Statisticians can use appropriate optimality-criteria for such parameters of interest and for contrasts.
== Implementation ==
Catalogs of optimal designs occur in books and in software libraries.
In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user's specification. The experimenter must specify a model for the design and an optimality-criterion before the method can compute an optimal design.
== Practical considerations ==
Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments.
=== Model dependence and robustness ===
Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design is model dependent: While an optimal design is best for that model, its performance may deteriorate on other models. On other models, an optimal design can be either better or worse than a non-optimal design. Therefore, it is important to benchmark the performance of designs under alternative models.
=== Choosing an optimality criterion and robustness ===
The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that
since the [traditional optimality] criteria . . . are variance-minimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually near-optimal for the same model with respect to the other criteria.
Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" of Kiefer. The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in the optimality-criterion is much greater than is robustness with respect to changes in the model.
==== Flexible optimality criteria and convex analysis ====
High-quality statistical software provide a combination of libraries of optimal designs or iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion.
All of the traditional optimality-criteria are convex (or concave) functions, and therefore optimal-designs are amenable to the mathematical theory of convex analysis and their computation can use specialized methods of convex minimization. The practitioner need not select exactly one traditional, optimality-criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria and nonnegative combinations of optimality criteria (since these operations preserve convex functions). For convex optimality criteria, the Kiefer-Wolfowitz equivalence theorem allows the practitioner to verify that a given design is globally optimal. The Kiefer-Wolfowitz equivalence theorem is related with the Legendre-Fenchel conjugacy for convex functions.
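For D-optimality, the Kiefer-Wolfowitz equivalence theorem says a design ξ is optimal exactly when the standardized prediction variance d(x) = f(x)ᵀM(ξ)⁻¹f(x) stays at or below p, the number of parameters, over the design space, with equality at the support points. A hand check for simple linear regression on [−1, 1] with mass 1/2 at each endpoint (pure Python; a sketch, not a general-purpose verifier):

```python
# Model y = b0 + b1*x with f(x) = (1, x); the candidate design puts
# weight 1/2 at x = -1 and x = +1, giving M = [[1, 0], [0, 1]].
# Then M^-1 = M and the standardized variance is d(x) = 1 + x**2.
def d_variance(x):
    return 1.0 + x * x      # f(x)^T M^-1 f(x) for this design

p = 2                       # number of model parameters
grid = [i / 1000 - 1 for i in range(2001)]   # fine grid on [-1, 1]

# Equivalence theorem: d(x) <= p everywhere on the design space ...
assert all(d_variance(x) <= p + 1e-12 for x in grid)
# ... with equality exactly at the support points x = -1 and x = +1,
# so the endpoint design is globally D-optimal for this model.
assert abs(d_variance(-1.0) - p) < 1e-12
assert abs(d_variance(1.0) - p) < 1e-12
```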
If an optimality-criterion lacks convexity, then finding a global optimum and verifying its optimality often are difficult.
=== Model uncertainty and Bayesian approaches ===
==== Model selection ====
When scientists wish to test several theories, then a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in the biostatistics supporting pharmacokinetics and pharmacodynamics, following the work of Cox and Atkinson.
==== Bayesian experimental design ====
When practitioners need to consider multiple models, they can specify a probability-measure on the models and then select any design maximizing the expected value of such an experiment. Such probability-based optimal-designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution).
The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers. Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality.
== Iterative experimentation ==
Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments.
=== Sequential analysis ===
Sequential analysis was pioneered by Abraham Wald. In 1972, Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs were surveyed later by S. Zacks. Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald.
=== Response-surface methodology ===
Optimal designs for response-surface models are discussed in the textbook by Atkinson, Donev and Tobias, and in the survey of Gaffke and Heiligers and in the mathematical text of Pukelsheim. The blocking of optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos.
The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, by J. D. Gergonne in 1815 (Stigler). In English, two early contributions were made by Charles S. Peirce and Kirstine Smith.
Pioneering designs for multivariate response-surfaces were proposed by George E. P. Box. However, Box's designs have few optimality properties. Indeed, the Box–Behnken design requires excessive experimental runs when the number of variables exceeds three.
Box's "central-composite" designs require more experimental runs than do the optimal designs of Kôno.
=== System identification and stochastic approximation ===
The optimization of sequential experimentation is studied also in stochastic programming and in systems and control. Popular methods include stochastic approximation and other methods of stochastic optimization. Much of this research has been associated with the subdiscipline of system identification.
In computational optimal control, D. Judin & A. Nemirovskii and Boris Polyak have described methods that are more efficient than the (Armijo-style) step-size rules introduced by G. E. P. Box in response-surface methodology.
Adaptive designs are used in clinical trials, and optimal adaptive designs are surveyed in the Handbook of Experimental Designs chapter by Shelemyahu Zacks.
== Specifying the number of experimental runs ==
=== Using a computer to find a good design ===
There are several methods of finding an optimal design, given an a priori restriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin and Sloane. Because fixing the number of experimental runs a priori can be impractical, prudent statisticians also examine other optimal designs whose numbers of experimental runs differ.
=== Discretizing probability-measure designs ===
In the mathematical theory of optimal experiments, an optimal design can be a probability measure supported on an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglects the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can be discretized to furnish approximately optimal designs.
In some cases, a finite set of observation-locations suffices to support an optimal design. Such a result was proved by Kôno and Kiefer in their works on response-surface designs for quadratic models. The Kôno-Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports that are very similar to those of the less efficient designs traditional in response-surface methodology.
== History ==
In 1815, an article on optimal designs for polynomial regression was published by Joseph Diaz Gergonne, according to Stigler.
Charles S. Peirce proposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his 1882 published lecture at Johns Hopkins University, Peirce introduced experimental design with these words:
Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation.
[....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly.
Kirstine Smith proposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statistician Thorvald N. Thiele and was working with Karl Pearson in London.)
== See also ==
== Notes ==
== References ==
Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum experimental designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6.
Chernoff, Herman (1972). Sequential analysis and optimal design. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-006-9.
Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press.
Fedorov, Valerii V.; Hackl, Peter (1997). Model-Oriented Design of Experiments. Lecture Notes in Statistics. Vol. 125. Springer-Verlag.
Goos, Peter (2002). The Optimal Design of Blocked and Split-plot Experiments. Lecture Notes in Statistics. Vol. 164. Springer.
Kiefer, Jack Carl (1985). Brown; Olkin, Ingram; Sacks, Jerome; et al. (eds.). Jack Carl Kiefer: Collected papers III—Design of experiments. Springer-Verlag and the Institute of Mathematical Statistics. pp. 718+xxv. ISBN 978-0-387-96004-3.
Logothetis, N.; Wynn, H. P. (1989). Quality through design: Experimental design, off-line quality control, and Taguchi's contributions. Oxford U. P. pp. 464+xi. ISBN 978-0-19-851993-5.
Nordström, Kenneth (May 1999). "The life and work of Gustav Elfving". Statistical Science. 14 (2): 174–196. doi:10.1214/ss/1009212244. JSTOR 2676737. MR 1722074.
Pukelsheim, Friedrich (2006). Optimal design of experiments. Classics in Applied Mathematics. Vol. 50 (republication with errata-list and new preface of Wiley (0-471-61971-X) 1993 ed.). Society for Industrial and Applied Mathematics. pp. 454+xxxii. ISBN 978-0-89871-604-7.
Shah, Kirti R. & Sinha, Bikas K. (1989). Theory of Optimal Designs. Lecture Notes in Statistics. Vol. 54. Springer-Verlag. pp. 171+viii. ISBN 978-0-387-96991-6.
== Further reading ==
=== Textbooks for practitioners and students ===
==== Textbooks emphasizing regression and response-surface methodology ====
The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses.
Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum experimental designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6.
Logothetis, N.; Wynn, H. P. (1989). Quality through design: Experimental design, off-line quality control, and Taguchi's contributions. Oxford U. P. pp. 464+xi. ISBN 978-0-19-851993-5.
==== Textbooks emphasizing block designs ====
Optimal block designs are discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews the linear algebra used by Bailey (or the advanced books below). Bailey's exercises and discussion of randomization both emphasize statistical concepts (rather than algebraic computations).
Bailey, R. A. (2008). Design of Comparative Experiments. Cambridge U. P. ISBN 978-0-521-68357-9. Draft available on-line. (Especially Chapter 11.8 "Optimality")
Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 978-0-387-98871-9. (Chapter 5 "Block designs and optimality", pages 99–111)
Optimal block designs are discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar.
=== Books for professional statisticians and researchers ===
Chernoff, Herman (1972). Sequential Analysis and Optimal Design. SIAM. ISBN 978-0-89871-006-9.
Fedorov, V. V. (1972). Theory of Optimal Experiments. Academic Press.
Fedorov, Valerii V.; Hackl, Peter (1997). Model-Oriented Design of Experiments. Vol. 125. Springer-Verlag.
Goos, Peter (2002). The Optimal Design of Blocked and Split-plot Experiments. Vol. 164. Springer.
Goos, Peter & Jones, Bradley (2011). Optimal design of experiments: a case study approach. Chichester Wiley. p. 304. ISBN 978-0-470-74461-1.
Kiefer, Jack Carl. (1985). Brown, Lawrence D.; Olkin, Ingram; Jerome Sacks; Wynn, Henry P (eds.). Jack Carl Kiefer Collected Papers III Design of Experiments. Springer-Verlag and the Institute of Mathematical Statistics. ISBN 978-0-387-96004-3.
Pukelsheim, Friedrich (2006). Optimal Design of Experiments. Vol. 50. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-604-7. Republication with errata-list and new preface of Wiley (0-471-61971-X) 1993
Shah, Kirti R. & Sinha, Bikas K. (1989). Theory of Optimal Designs. Vol. 54. Springer-Verlag. ISBN 978-0-387-96991-6.
=== Articles and chapters ===
Chaloner, Kathryn & Verdinelli, Isabella (1995). "Bayesian Experimental Design: A Review". Statistical Science. 10 (3): 273–304. CiteSeerX 10.1.1.29.5355. doi:10.1214/ss/1177009939.
Ghosh, S.; Rao, C. R., eds. (1996). Design and Analysis of Experiments. Handbook of Statistics. Vol. 13. North-Holland. ISBN 978-0-444-82061-7.
"Model Robust Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 1055–1099.
Cheng, C.-S. "Optimal Design: Exact Theory". Design and Analysis of Experiments. Handbook of Statistics. pp. 977–1006.
DasGupta, A. "Review of Optimal Bayesian Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 1099–1148.
Gaffke, N. & Heiligers, B. "Approximate Designs for Polynomial Regression: Invariance, Admissibility, and Optimality". Design and Analysis of Experiments. Handbook of Statistics. pp. 1149–1199.
Majumdar, D. "Optimal and Efficient Treatment-Control Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 1007–1054.
Stufken, J. "Optimal Crossover Designs". Design and Analysis of Experiments. Handbook of Statistics. pp. 63–90.
Zacks, S. "Adaptive Designs for Parametric Models". Design and Analysis of Experiments. Handbook of Statistics. pp. 151–180.
Kôno, Kazumasa (1962). "Optimum designs for quadratic regression on k-cube" (PDF). Memoirs of the Faculty of Science. Kyushu University. Series A. Mathematics. 16 (2): 114–122. doi:10.2206/kyushumfs.16.114.
=== Historical ===
Gergonne, J. D. (November 1974) [1815]. "The application of the method of least squares to the interpolation of sequences". Historia Mathematica. 1 (4) (Translated by Ralph St. John and S. M. Stigler from the 1815 French ed.): 439–447. doi:10.1016/0315-0860(74)90034-2.
Stigler, Stephen M. (November 1974). "Gergonne's 1815 paper on the design and analysis of polynomial regression experiments". Historia Mathematica. 1 (4): 431–439. doi:10.1016/0315-0860(74)90033-0.
Peirce, C. S (1876). "Note on the Theory of the Economy of Research". Coast Survey Report: 197–201. (Appendix No. 14). NOAA PDF Eprint. Reprinted in Collected Papers of Charles Sanders Peirce. Vol. 7. 1958. paragraphs 139–157, and in Peirce, C. S. (July–August 1967). "Note on the Theory of the Economy of Research". Operations Research. 15 (4): 643–648. doi:10.1287/opre.15.4.643. JSTOR 168276.
Smith, Kirstine (1918). "On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations". Biometrika. 12 (1/2): 1–85. doi:10.2307/2331929. JSTOR 2331929.
A nested case–control (NCC) study is a variation of a case–control study in which cases and controls are drawn from the population in a fully enumerated cohort.
Usually, the exposure of interest is only measured among the cases and the selected controls. Thus the nested case–control study is more efficient than the full cohort design. The nested case–control study can be analyzed using methods for missing covariates.
The NCC design is often used when the exposure of interest is difficult or expensive to obtain and when the outcome is rare. By utilizing data previously collected from a large cohort study, the time and cost of beginning a new case–control study is avoided. By only measuring the covariate in as many participants as necessary, the cost and effort of exposure assessment is reduced. This benefit is pronounced when the covariate of interest is biological, since assessments such as gene expression profiling are expensive, and because the quantity of blood available for such analysis is often limited, making it a valuable resource that should not be used unnecessarily.
== Example ==
As an example, of the 91,523 women in the Nurses' Health Study who did not have cancer at baseline and who were followed for 14 years, 2,341 women had developed breast cancer by 1993. Several studies have used standard cohort analyses to study precursors to breast cancer, e.g. use of hormonal contraceptives, which is a covariate easily measured on all of the women in the cohort. However, note that in comparison to the cases, there are so many controls that each particular control contributes relatively little information to the analysis.
If, on the other hand, one is interested in the association between gene expression and breast cancer incidence, it would be very expensive and possibly wasteful of precious blood specimen to assay all 89,000 women without breast cancer. In this situation, one may choose to assay all of the cases, and also, for each case, select a certain number of women to assay from the risk set of participants who have not yet failed (i.e. those who have not developed breast cancer before the particular case in question has developed breast cancer). The risk set is often restricted to those participants who are matched to the case on variables such as age, which reduces the variability of effect estimates.
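Risk-set (incidence-density) sampling of the kind just described can be sketched in Python: for each case, controls are drawn from cohort members still event-free at that case's event time. The toy data and all names are illustrative.

```python
import random

random.seed(0)

# Toy cohort: (id, follow_up_time, is_case). Cases fail at their time;
# non-cases remain event-free through their follow-up time.
cohort = [(i, t, c) for i, (t, c) in enumerate(
    [(5, 1), (9, 0), (3, 1), (8, 0), (7, 0), (6, 0), (4, 1), (10, 0)]
)]

def sample_risk_set_controls(cohort, n_controls=2):
    """For each case, sample controls from the risk set at its event time."""
    matched = {}
    for cid, t_case, is_case in cohort:
        if not is_case:
            continue
        # Risk set: everyone else still under observation and event-free
        # at t_case (a future case may serve as a control earlier on).
        risk_set = [i for i, t, c in cohort if i != cid and t > t_case]
        matched[cid] = random.sample(risk_set,
                                     min(n_controls, len(risk_set)))
    return matched

matched = sample_risk_set_controls(cohort)
# Every sampled control was at risk when its matched case failed.
for cid, controls in matched.items():
    t_case = cohort[cid][1]
    assert all(cohort[i][1] > t_case for i in controls)
```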
== Efficiency of the NCC model ==
Commonly 1–4 controls are selected for each case. Since the covariate is not measured for all participants, the nested case–control model is both less expensive than a full cohort analysis and more efficient than taking a simple random sample from the full cohort. However, it has been shown that with 4 controls per case and/or stratified sampling of controls, relatively little efficiency may be lost, depending on the method of estimation used.
== Analysis of nested case–control studies ==
The analysis of a nested case–control model must take into account the way in which controls are sampled from the cohort. Failing to do so, such as by treating the cases and selected controls as the original cohort and performing a logistic regression, which is common, can result in biased estimates whose null distribution is different from what is assumed. Ways to account for the random sampling include conditional logistic regression, and using inverse probability weighting to adjust for missing covariates among those who are not selected into the study.
== Case–cohort study ==
A case–cohort study is a design in which cases and controls are drawn from within a prospective study. All cases who developed the outcome of interest during the follow-up are selected and compared with a random sample of the cohort. This randomly selected control sample could, by chance, include some cases. Exposure is defined prior to disease development based on data collected at baseline or on assays conducted in biological samples collected at baseline.
== References ==
Porta, Miquel (2014). A Dictionary of Epidemiology. Oxford: Oxford University Press.
== Further reading ==
Keogh, Ruth H.; Cox, D. R. (2014). "Nested case–control studies". Case–Control Studies. Cambridge University Press. pp. 160–190. ISBN 978-1-107-01956-0.
A scientific control is an experiment or observation designed to minimize the effects of variables other than the independent variable (i.e. confounding variables). This increases the reliability of the results, often through a comparison between control measurements and the other measurements. Scientific controls are a part of the scientific method.
== Controlled experiments ==
Controls eliminate alternate explanations of experimental results, especially experimental errors and experimenter bias. Many controls are specific to the type of experiment being performed, as in the molecular markers used in SDS-PAGE experiments, and may simply have the purpose of ensuring that the equipment is working properly. The selection and use of proper controls to ensure that experimental results are valid (for example, absence of confounding variables) can be very difficult. Control measurements may also be used for other purposes: for example, a measurement of a microphone's background noise in the absence of a signal allows the noise to be subtracted from later measurements of the signal, thus producing a processed signal of higher quality.
For example, if a researcher feeds an experimental artificial sweetener to sixty laboratory rats and observes that ten of them subsequently become sick, the underlying cause could be the sweetener itself or something unrelated. Other variables, which may not be readily obvious, may interfere with the experimental design. For instance, the artificial sweetener might be mixed with a dilutant and it might be the dilutant that causes the effect. To control for the effect of the dilutant, the same test is run twice; once with the artificial sweetener in the dilutant, and another done exactly the same way but using the dilutant alone. Now the experiment is controlled for the dilutant and the experimenter can distinguish between sweetener, dilutant, and non-treatment. Controls are most often necessary where a confounding factor cannot easily be separated from the primary treatments. For example, it may be necessary to use a tractor to spread fertilizer where there is no other practicable way to spread fertilizer. The simplest solution is to have a treatment where a tractor is driven over plots without spreading fertilizer; in that way, the effects of tractor traffic are controlled.
The simplest types of control are negative and positive controls, and both are found in many different types of experiments. These two controls, when both are successful, are usually sufficient to eliminate most potential confounding variables: it means that the experiment produces a negative result when a negative result is expected, and a positive result when a positive result is expected. Other controls include vehicle controls, sham controls and comparative controls.
== Confounding ==
Confounding is a critical issue in observational studies because it can lead to biased or misleading conclusions about relationships between variables. A confounder is an extraneous variable that is related to both the independent variable (treatment or exposure) and the dependent variable (outcome), potentially distorting the true association. If confounding is not properly accounted for, researchers might incorrectly attribute an effect to the exposure when it is actually due to another factor. This can result in incorrect policy recommendations, ineffective interventions, or flawed scientific understanding. For example, in a study examining the relationship between physical activity and heart disease, failure to control for diet, a potential confounder, could lead to an overestimation or underestimation of the true effect of exercise.
Falsification tests are a robustness-checking technique used in observational studies to assess whether observed associations are likely due to confounding, bias, or model misspecification rather than a true causal effect. These tests help validate findings by applying the same analytical approach to a scenario where no effect is expected. If an association still appears where none should exist, it raises concerns that the primary analysis may suffer from confounding or other biases.
Negative controls are one type of falsification test. The need to use negative controls usually arises in observational studies, when the study design can be questioned because of a potential confounding mechanism. A negative control test can reject a study design, but it cannot validate it, either because there might be another confounding mechanism or because of low statistical power. Negative controls are increasingly used in the epidemiology literature, and they also show promise in social science fields such as economics. Negative controls are divided into two main categories: Negative Control Exposures (NCEs) and Negative Control Outcomes (NCOs).
Lousdal et al. examined the effect of screening participation on death from breast cancer. They hypothesized that screening participants are healthier than non-participants and, therefore, already at baseline have a lower risk of breast-cancer death. Therefore, they used proxies for better health as negative-control outcomes (NCOs) and proxies for healthier behavior as negative-control exposures (NCEs). Death from causes other than breast cancer was taken as the NCO, as it is an outcome of better health that is not affected by breast cancer screening. Dental care participation was taken as the NCE, as it is assumed to be a good proxy of health-attentive behavior.
== Negative control ==
Negative controls are variables that are meant to help when the study design is suspected to be invalid because of unmeasured confounders that are correlated with both the treatment and the outcome. Where there are only two possible outcomes, e.g. positive or negative, if the treatment group and the negative control (non-treatment group) both produce a negative result, it can be inferred that the treatment had no effect. If the treatment group and the negative control both produce a positive result, it can be inferred that a confounding variable is involved in the phenomenon under study, and the positive results are not solely due to the treatment.
In other examples, outcomes might be measured as lengths, times, percentages, and so forth. In the drug testing example, we could measure the percentage of patients cured. In this case, the treatment is inferred to have no effect when the treatment group and the negative control produce the same results. Some improvement is expected in the placebo group due to the placebo effect, and this result sets the baseline upon which the treatment must improve upon. Even if the treatment group shows improvement, it needs to be compared to the placebo group. If the groups show the same effect, then the treatment was not responsible for the improvement (because the same number of patients were cured in the absence of the treatment). The treatment is only effective if the treatment group shows more improvement than the placebo group.
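The treatment-versus-placebo comparison described above can be sketched as a simple two-proportion z-test; the cure counts below are hypothetical, invented for illustration:

```python
import math

def two_proportion_z(cured_a, n_a, cured_b, n_b):
    """z-statistic comparing cure rates in two groups (treatment vs. placebo)."""
    p_a, p_b = cured_a / n_a, cured_b / n_b
    p_pool = (cured_a + cured_b) / (n_a + n_b)  # pooled proportion under H0: no difference
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical trial: 40/100 cured with treatment vs. 25/100 with placebo.
z = two_proportion_z(40, 100, 25, 100)
print(round(z, 2))
```

A z-statistic near zero would mean the treatment group merely matched the placebo baseline; a large positive value suggests improvement beyond the placebo effect.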
=== Negative Control Exposure (NCE) ===
An NCE is a variable that should not causally affect the outcome, but may suffer from the same confounding as the exposure-outcome relationship in question. A priori, there should be no statistical association between the NCE and the outcome. If an association is found, it must arise through the unmeasured confounder, and since the NCE and the treatment share the same confounding mechanism, there is an alternative path, apart from the direct path from the treatment to the outcome. In that case, the study design is invalid.
For example, Yerushalmy used the husband's smoking as an NCE. The exposure was maternal smoking; the outcomes were various birth factors, such as incidence of low birth weight, length of pregnancy, and neonatal mortality rates. It is assumed that the husband's smoking shares common confounders, such as household health lifestyle, with the pregnant woman's smoking, but that it does not causally affect the fetus's development. Nonetheless, Yerushalmy found a statistical association, which cast doubt on the proposition that cigarette smoking causally interferes with intrauterine development of the fetus.
==== Differences Between Negative Control Exposures and Placebo ====
The term negative control is used when the study is based on observations, while a placebo is used as the non-treatment condition in randomized controlled trials.
=== Negative Control Outcome (NCO) ===
Negative Control Outcomes are the more popular type of negative control. An NCO is a variable that is not causally affected by the treatment, but is suspected to have a similar confounding mechanism as the treatment-outcome relationship. If the study design is valid, there should be no statistical association between the NCO and the treatment. Thus, an association between them suggests that the design is invalid.
For example, Jackson et al. used mortality from all causes outside of influenza season as an NCO in a study examining the influenza vaccine's effect on influenza-related deaths. A possible confounding mechanism is health status and lifestyle: people who are healthier in general also tend to take the influenza vaccine. Jackson et al. found preferential receipt of vaccine by relatively healthy seniors, and that differences in health status between vaccinated and unvaccinated groups lead to bias in estimates of influenza vaccine effectiveness. In a similar example, when discussing the impact of air pollutants on asthma hospital admissions, Sheppard et al. used non-elderly appendicitis hospital admissions as an NCO.
==== Formal Conditions ====
Given a treatment A and an outcome Y, in the presence of a set of control variables X and an unmeasured confounder U for the A-Y relationship, Shi et al. presented formal conditions for a negative control outcome Ỹ:
1. Stable Unit Treatment Value Assumption (SUTVA): holds for both Y and Ỹ with regard to A = a.
2. Latent Exchangeability: Y^(A=a) ⊥ A | X, U. Given X and U, the potential outcome Y^(A=a) is independent of the treatment.
3. Irrelevancy: Ỹ^(A=a) = Ỹ^(A=a') = Ỹ given U, X. There is no causal effect of A on Ỹ given X and U; equivalently, Ỹ ⊥ A | U, X, so the NCO is independent of the treatment given X and U.
4. U-Comparability: Ỹ is not independent of U given X. The unmeasured confounders U of the association between A and Y are the same for the association between A and Ỹ.
Given assumptions 1-4, a non-null association between A and Ỹ can be explained by U, and not by another mechanism.
A possible violation of Latent Exchangeability occurs when only the people who are influenced by a medicine take it, even if both X and U are the same. For example, we would expect that, given age and medical history (X) and general health awareness (U), intake of the influenza vaccine (A) will be independent of potential influenza-related deaths Ỹ^(A=a). Otherwise, the Latent Exchangeability assumption is violated, and no identification can be made.
A violation of Irrelevancy occurs when there is a causal effect of A on Ỹ. For example, we would expect that, given X and U, the influenza vaccine does not influence all-cause mortality. If, however, during the influenza vaccination visit the physician also performs a general physical examination, recommends good health habits, and prescribes vitamins and essential drugs, then there is likely a causal effect of A on Ỹ (conditional on X and U). In that case, Ỹ cannot be used as an NCO, as the test might fail even if the causal design is valid.
U-Comparability is violated when Ỹ ⊥ U, in which case the lack of association between A and Ỹ does not provide any evidence about the validity of the design. This violation would occur when we choose a poor NCO that is uncorrelated, or only very weakly correlated, with the unmeasured confounders.
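The logic of an NCO test can be sketched with a small simulation (all numbers are invented). An unmeasured "health awareness" confounder drives both the exposure and a negative control outcome, so an association appears between them even though there is no causal effect, which is exactly what the falsification test is designed to flag:

```python
import random

random.seed(0)

# Hypothetical setup: U = unmeasured health awareness; A = vaccination;
# y_nco = death from a cause the vaccine cannot affect. A has no causal
# effect on y_nco, but both depend on U.
n = 20000
exposed_nco, unexposed_nco = [], []
for _ in range(n):
    u = random.random()                                   # unmeasured confounder
    a = 1 if random.random() < 0.2 + 0.6 * u else 0       # healthier people vaccinate more
    y_nco = 1 if random.random() < 0.3 - 0.2 * u else 0   # healthier people die less
    (exposed_nco if a else unexposed_nco).append(y_nco)

rate_exposed = sum(exposed_nco) / len(exposed_nco)
rate_unexposed = sum(unexposed_nco) / len(unexposed_nco)
# A clearly lower NCO rate among the vaccinated signals confounding in the design.
print(rate_exposed < rate_unexposed)
```

In a valid design the two rates would differ only by sampling noise; the systematic gap here is the alternative path through U.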
== Positive control ==
Positive controls are often used to assess test validity. For example, to assess a new test's ability to detect a disease (its sensitivity), we can compare it against a different test that is already known to work. The well-established test is a positive control, since we already know that the answer to the question (whether the test works) is yes.
Similarly, in an enzyme assay to measure the amount of an enzyme in a set of extracts, a positive control would be an assay containing a known quantity of the purified enzyme (while a negative control would contain no enzyme). The positive control should give a large amount of enzyme activity, while the negative control should give very low to no activity.
If the positive control does not produce the expected result, there may be something wrong with the experimental procedure, and the experiment is repeated. For difficult or complicated experiments, the result from the positive control can also help in comparison to previous experimental results. For example, if the well-established disease test was determined to have the same effect as found by previous experimenters, this indicates that the experiment is being performed in the same way that the previous experimenters did.
When possible, multiple positive controls may be used—if there is more than one disease test that is known to be effective, more than one might be tested. Multiple positive controls also allow finer comparisons of the results (calibration, or standardization) if the expected results from the positive controls have different sizes. For example, in the enzyme assay discussed above, a standard curve may be produced by making many different samples with different quantities of the enzyme.
== Randomization ==
In randomization, the groups that receive different experimental treatments are determined randomly. While this does not ensure that there are no differences between the groups, it ensures that any differences between them are due to chance rather than to how units were assigned, thus correcting for systematic errors.
For example, in experiments where crop yield is affected (e.g. soil fertility), the experiment can be controlled by assigning the treatments to randomly selected plots of land. This mitigates the effect of variations in soil composition on the yield.
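A minimal sketch of random assignment for the fertilizer example (the plot labels and group count are arbitrary choices for illustration):

```python
import random

def randomize(units, n_groups, seed=None):
    """Shuffle units and split them into n_groups equal-sized groups,
    so treatment allocation is independent of plot position."""
    rng = random.Random(seed)
    shuffled = units[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]

plots = list(range(12))
fertilized, control = randomize(plots, 2, seed=1)
print(sorted(fertilized + control) == plots)  # every plot assigned exactly once
```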
== Blind experiments ==
Blinding is the practice of withholding information that may bias an experiment. For example, participants may not know who received an active treatment and who received a placebo. If this information were to become available to trial participants, patients could receive a larger placebo effect, researchers could influence the experiment to meet their expectations (the observer effect), and evaluators could be subject to confirmation bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, sham surgery may be necessary to achieve blinding.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments and must be measured and reported. Meta-research has revealed high levels of unblinding in pharmacological trials. In particular, antidepressant trials are poorly blinded. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.
Blinding is an important tool of the scientific method, and is used in many fields of research. In some fields, such as medicine, it is considered essential. In clinical research, a trial that is not blinded is called an open trial.
== See also ==
False positives and false negatives
Designed experiment
Controlling for a variable
James Lind cured scurvy using a controlled experiment that has been described as the first clinical trial.
Randomized controlled trial
Wait list control group
== References ==
== External links ==
"Control". Encyclopædia Britannica. Vol. 7 (11th ed.). 1911.
Interrupted time series analysis (ITS), sometimes known as quasi-experimental time series analysis, is a method of statistical analysis involving tracking a long-term period before and after a point of intervention to assess the intervention's effects. The time series refers to the data over the period, while the interruption is the intervention, which is a controlled external influence or set of influences. Effects of the intervention are evaluated by changes in the level and slope of the time series and statistical significance of the intervention parameters. Interrupted time series design is the design of experiments based on the interrupted time series approach.
The method is used in various areas of research, such as:
political science: impact of changes in laws on the behavior of people; (e.g., Effectiveness of sex offender registration policies in the United States)
economics: impact of changes in credit controls on borrowing behavior;
sociology: impact of experiments in income maintenance on the behavior of participants in welfare programs;
history: impact of major historical events on the behavior of those affected by the events;
psychology: impact of expressing emotional experiences on online content;
medicine: in medical research, medical treatment is an intervention whose effects are to be studied;
marketing research: to analyze the effect of "designed market interventions" (e.g., advertising) on sales.
environmental sciences: impacts of human activities on environmental quality and ecosystem dynamics (e.g., forest logging on local climate).
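The level and slope changes mentioned above are commonly estimated with segmented regression. The sketch below fits one to synthetic data with a known interruption at time t0; the series, effect sizes, and noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(40)
t0 = 20
post = (t >= t0).astype(float)  # indicator: 1 after the intervention

# Synthetic series: pre-trend 0.5/step, level jump of 5, slope change of 0.8.
y = 10 + 0.5 * t + 5 * post + 0.8 * (t - t0) * post + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, pre-intervention trend, level change, slope change.
X = np.column_stack([np.ones_like(t, dtype=float), t, post, (t - t0) * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, pre_slope, level_change, slope_change = beta
print(round(float(level_change), 1), round(float(slope_change), 1))
```

The fitted `level_change` and `slope_change` coefficients recover the simulated intervention effects; in a real analysis their statistical significance would also be assessed.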
== See also ==
Quasi-experimental design
== References ==
Statistical process control (SPC) or statistical quality control (SQC) is the application of statistical methods to monitor and control the quality of a production process. This helps to ensure that the process operates efficiently, producing more specification-conforming products with less waste scrap. SPC can be applied to any process where the "conforming product" (product meeting specifications) output can be measured. Key tools used in SPC include run charts, control charts, a focus on continuous improvement, and the design of experiments. An example of a process where SPC is applied is manufacturing lines.
SPC must be practiced in two phases: the first phase is the initial establishment of the process, and the second phase is the regular production use of the process. In the second phase, a decision of the period to be examined must be made, depending upon the change in 5M&E conditions (Man, Machine, Material, Method, Movement, Environment) and wear rate of parts used in the manufacturing process (machine parts, jigs, and fixtures).
An advantage of SPC over other methods of quality control, such as "inspection," is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred.
In addition to reducing waste, SPC can lead to a reduction in the time required to produce the product. SPC makes it less likely the finished product will need to be reworked or scrapped.
== History ==
Statistical process control was pioneered by Walter A. Shewhart at Bell Laboratories in the early 1920s. Shewhart developed the control chart in 1924 and the concept of a state of statistical control. Statistical control is equivalent to the concept of exchangeability developed by logician William Ernest Johnson also in 1924 in his book Logic, Part III: The Logical Foundations of Science. Along with a team at AT&T that included Harold Dodge and Harry Romig he worked to put sampling inspection on a rational statistical basis as well. Shewhart consulted with Colonel Leslie E. Simon in the application of control charts to munitions manufacture at the Army's Picatinny Arsenal in 1934. That successful application helped convince Army Ordnance to engage AT&T's George D. Edwards to consult on the use of statistical quality control among its divisions and contractors at the outbreak of World War II.
W. Edwards Deming invited Shewhart to speak at the Graduate School of the U.S. Department of Agriculture and served as the editor of Shewhart's book Statistical Method from the Viewpoint of Quality Control (1939), which was the result of that lecture. Deming was an important architect of the quality control short courses that trained American industry in the new techniques during WWII. The graduates of these wartime courses formed a new professional society in 1945, the American Society for Quality Control, which elected Edwards as its first president. Deming travelled to Japan during the Allied Occupation and met with the Union of Japanese Scientists and Engineers (JUSE) in an effort to introduce SPC methods to Japanese industry.
=== 'Common' and 'special' sources of variation ===
Shewhart read the new statistical theories coming out of Britain, especially the work of William Sealy Gosset, Karl Pearson, and Ronald Fisher. However, he understood that data from physical processes seldom produced a normal distribution curve (that is, a Gaussian distribution or 'bell curve'). He discovered that data from measurements of variation in manufacturing did not always behave the same way as data from measurements of natural phenomena (for example, Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display variation that is natural to the process ("common" sources of variation); these processes he described as being in (statistical) control. Other processes additionally display variation that is not present in the causal system of the process at all times ("special" sources of variation), which Shewhart described as not in control.
=== Application to non-manufacturing processes ===
Statistical process control is appropriate to support any repetitive process, and has been implemented in many settings where for example ISO 9000 quality management systems are used, including financial auditing and accounting, IT operations, health care processes, and clerical processes such as loan arrangement and administration, customer billing etc. Despite criticism of its use in design and development, it is well-placed to manage semi-automated data governance of high-volume data processing operations, for example in an enterprise data warehouse, or an enterprise data quality management system.
In the 1988 Capability Maturity Model (CMM) the Software Engineering Institute suggested that SPC could be applied to software engineering processes. The Level 4 and Level 5 practices of the Capability Maturity Model Integration (CMMI) use this concept.
The application of SPC to non-repetitive, knowledge-intensive processes, such as research and development or systems engineering, has encountered skepticism and remains controversial.
In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software results in inherent and essential variation that cannot be removed. This implies that SPC is less effective in the software development than in, e.g., manufacturing.
== Variation in manufacturing ==
In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass-manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product, where each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. SPC, by contrast, uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article.
Any source of variation at any point of time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Causes of this type collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, potentially they can be identified and removed. When they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits. That is, at least, until another assignable source of variation occurs.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced).
From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation.
== Application ==
The application of SPC involves three main phases of activity:
Understanding the process and the specification limits.
Eliminating assignable (special) sources of variation, so that the process is stable.
Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations.
=== Control charts ===
The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time.
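As a minimal sketch of how a control chart separates "common" from "assignable" variation, the following computes 3-sigma limits for an individuals chart from the average moving range (d2 = 1.128 is the standard chart constant for moving ranges of size 2; the box-weight data are hypothetical):

```python
def control_limits(xs):
    """Lower and upper 3-sigma control limits for an individuals chart,
    with sigma estimated from the average moving range."""
    mean = sum(xs) / len(xs)
    moving_ranges = [abs(b - a) for a, b in zip(xs, xs[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # d2 constant for subgroups of size 2
    return mean - 3 * sigma, mean + 3 * sigma

weights = [500.2, 499.8, 500.1, 499.9, 500.0, 500.3, 499.7, 504.0]
lcl, ucl = control_limits(weights)
out_of_control = [x for x in weights if x < lcl or x > ucl]
print(out_of_control)  # → [504.0]
```

Points outside the limits are candidates for an assignable cause; in practice the limits would be recomputed after such points have been investigated and, if appropriate, removed.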
==== Stable process ====
When the process does not trigger any of the control chart "detection rules" for the control chart, it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.
A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
==== Excessive variations ====
When the process triggers any of the control chart "detection rules" (or, alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation.
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.
==== Process stability metrics ====
When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger.
They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
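A rough sketch of the first of these metrics follows: a Stability Ratio comparing long-term (overall) variability to short-term (pooled within-subgroup) variability, with values well above 1 suggesting instability between subgroups. The exact definition in Ramirez and Runger may differ, and the subgroup data here are hypothetical:

```python
import statistics

def stability_ratio(subgroups):
    """Long-term variance of all points divided by the mean within-subgroup variance."""
    all_points = [x for g in subgroups for x in g]
    long_term = statistics.variance(all_points)
    short_term = statistics.mean(statistics.variance(g) for g in subgroups)
    return long_term / short_term

stable = [[10.1, 9.9, 10.0], [10.0, 10.2, 9.8], [9.9, 10.1, 10.0]]
shifted = [[10.1, 9.9, 10.0], [12.0, 12.2, 11.8], [14.1, 13.9, 14.0]]
# A drifting process mean inflates long-term variance but not short-term variance.
print(stability_ratio(stable) < stability_ratio(shifted))  # → True
```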
== Mathematics of control charts ==
Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,
derived value = last value + average absolute difference between the last N numbers.
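Interpreting "average absolute difference between the last N numbers" as the mean of consecutive absolute differences over the last N observations (one plausible reading of the rule above), the derived value might be computed as:

```python
def derived_value(series, n):
    """Last value plus the mean absolute consecutive difference over the last n values."""
    window = series[-n:]
    diffs = [abs(b - a) for a, b in zip(window, window[1:])]
    return series[-1] + sum(diffs) / len(diffs)

# Example: last value 10.6, consecutive diffs 0.4, 0.2, 0.4 over the last 4 points.
print(derived_value([10.0, 10.4, 10.2, 10.6], 4))  # 10.6 + (0.4 + 0.2 + 0.4) / 3
```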
== See also ==
ANOVA Gauge R&R
Distribution-free control chart
Electronic design automation
Industrial engineering
Process Window Index
Process capability index
Quality assurance
Reliability engineering
Six sigma
Stochastic control
Total quality management
== References ==
== Bibliography ==
== External links ==
MIT Course - Control of Manufacturing Processes
Guthrie, William F. (2012). "NIST/SEMATECH e-Handbook of Statistical Methods". National Institute of Standards and Technology. doi:10.18434/M32189.
In No Silver Bullet, Fred Brooks points out that the complexity, conformance requirements, changeability, and invisibility of software result in inherent and essential variation that cannot be removed. This implies that SPC is less effective in software development than in, e.g., manufacturing.
== Variation in manufacturing ==
In manufacturing, quality is defined as conformance to specification. However, no two products or characteristics are ever exactly the same, because any process contains many sources of variability. In mass manufacturing, traditionally, the quality of a finished article is ensured by post-manufacturing inspection of the product: each article (or a sample of articles from a production lot) may be accepted or rejected according to how well it meets its design specifications. SPC, by contrast, uses statistical tools to observe the performance of the production process in order to detect significant variations before they result in the production of a sub-standard article.
Any source of variation at any point of time in a process will fall into one of two classes.
(1) Common causes
'Common' causes are sometimes referred to as 'non-assignable' or 'normal' sources of variation. The term refers to any source of variation that consistently acts on the process, of which there are typically many. Causes of this type collectively produce a statistically stable and repeatable distribution over time.
(2) Special causes
'Special' causes are sometimes referred to as 'assignable' sources of variation. The term refers to any factor causing variation that affects only some of the process output. They are often intermittent and unpredictable.
Most processes have many sources of variation; most of them are minor and may be ignored. If the dominant assignable sources of variation are detected, they can potentially be identified and removed. Once they are removed, the process is said to be 'stable'. When a process is stable, its variation should remain within a known set of limits, at least until another assignable source of variation occurs.
For example, a breakfast cereal packaging line may be designed to fill each cereal box with 500 grams of cereal. Some boxes will have slightly more than 500 grams, and some will have slightly less. When the package weights are measured, the data will demonstrate a distribution of net weights.
If the production process, its inputs, or its environment (for example, the machine on the line) change, the distribution of the data will change. For example, as the cams and pulleys of the machinery wear, the cereal filling machine may put more than the specified amount of cereal into each box. Although this might benefit the customer, from the manufacturer's point of view it is wasteful, and increases the cost of production. If the manufacturer finds the change and its source in a timely manner, the change can be corrected (for example, the cams and pulleys replaced).
From an SPC perspective, if the weight of each cereal box varies randomly, some higher and some lower, always within an acceptable range, then the process is considered stable. If the cams and pulleys of the machinery start to wear out, the weights of the cereal box might not be random. The degraded functionality of the cams and pulleys may lead to a non-random linear pattern of increasing cereal box weights. We call this common cause variation. If, however, all the cereal boxes suddenly weighed much more than average because of an unexpected malfunction of the cams and pulleys, this would be considered a special cause variation.
== Application ==
The application of SPC involves three main phases of activity:
Understanding the process and the specification limits.
Eliminating assignable (special) sources of variation, so that the process is stable.
Monitoring the ongoing production process, assisted by the use of control charts, to detect significant changes of mean or variation.
The proper implementation of SPC has been limited, in part due to a lack of statistical expertise at many organizations.
=== Control charts ===
The data from measurements of variations at points on the process map is monitored using control charts. Control charts attempt to differentiate "assignable" ("special") sources of variation from "common" sources. "Common" sources, because they are an expected part of the process, are of much less concern to the manufacturer than "assignable" sources. Using control charts is a continuous activity, ongoing over time.
==== Stable process ====
When the process does not trigger any of the control chart "detection rules", it is said to be "stable". A process capability analysis may be performed on a stable process to predict the ability of the process to produce "conforming product" in the future.
A stable process can be demonstrated by a process signature that is free of variances outside of the capability index. A process signature is the plotted points compared with the capability index.
==== Excessive variations ====
When the process triggers any of the control chart "detection rules" (or, alternatively, the process capability is low), other activities may be performed to identify the source of the excessive variation.
The tools used in these extra activities include: Ishikawa diagram, designed experiments, and Pareto charts. Designed experiments are a means of objectively quantifying the relative importance (strength) of sources of variation. Once the sources of (special cause) variation are identified, they can be minimized or eliminated. Steps to eliminating a source of variation might include: development of standards, staff training, error-proofing, and changes to the process itself or its inputs.
==== Process stability metrics ====
When monitoring many processes with control charts, it is sometimes useful to calculate quantitative measures of the stability of the processes. These metrics can then be used to identify/prioritize the processes that are most in need of corrective actions. These metrics can also be viewed as supplementing the traditional process capability metrics. Several metrics have been proposed, as described in Ramirez and Runger.
They are (1) a Stability Ratio which compares the long-term variability to the short-term variability, (2) an ANOVA Test which compares the within-subgroup variation to the between-subgroup variation, and (3) an Instability Ratio which compares the number of subgroups that have one or more violations of the Western Electric rules to the total number of subgroups.
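As a rough illustration, the first of these metrics can be sketched as follows. The exact definitions and estimators are given in Ramirez and Runger; the version below is only a simplified assumption, comparing the overall (long-term) variance with a pooled within-subgroup (short-term) variance:

```python
import statistics

def stability_ratio(subgroups):
    """Sketch of a Stability Ratio: long-term variance over pooled
    within-subgroup (short-term) variance.  A value near 1 suggests a
    stable process; values well above 1 suggest instability.
    (Simplified formulation; equal subgroup sizes assumed.)"""
    all_values = [x for sg in subgroups for x in sg]
    long_term_var = statistics.variance(all_values)
    within_var = statistics.mean(statistics.variance(sg) for sg in subgroups)
    return long_term_var / within_var

# Three subgroups from a process in good control:
sr = stability_ratio([[10.0, 10.1, 9.9], [10.0, 10.2, 9.8], [10.1, 9.9, 10.0]])
```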
== Mathematics of control charts ==
Digital control charts use logic-based rules that determine "derived values" which signal the need for correction. For example,
derived value = last value + average absolute difference between the last N numbers.
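A minimal sketch of such a rule, with the window length n as an illustrative parameter:

```python
def derived_value(values, n=5):
    """Derived value per the rule above: last value plus the average
    absolute difference between the last n values."""
    window = values[-n:]
    diffs = [abs(b - a) for a, b in zip(window, window[1:])]
    return window[-1] + sum(diffs) / len(diffs)

d = derived_value([1.0, 2.0, 4.0, 7.0, 11.0])  # diffs 1,2,3,4 -> 11 + 2.5
```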
== See also ==
ANOVA Gauge R&R
Distribution-free control chart
Electronic design automation
Industrial engineering
Process Window Index
Process capability index
Quality assurance
Reliability engineering
Six sigma
Stochastic control
Total quality management
== References ==
== Bibliography ==
== External links ==
MIT Course - Control of Manufacturing Processes
Guthrie, William F. (2012). "NIST/SEMATECH e-Handbook of Statistical Methods". National Institute of Standards and Technology. doi:10.18434/M32189. | Wikipedia/Statistical_control |
In statistical quality control, the regression control chart allows for monitoring a change in a process where two or more variables are correlated. A change in a dependent variable can be detected, and a compensatory change in the independent variable can be recommended. Examples from the Post Office Department provide an application of such models.
== Difference ==
A regression control chart differs from a traditional control chart in four main aspects:
It is designed to control a varying (rather than a constant) average.
The control limit lines are parallel to the regression line rather than horizontal.
The computations are more complex.
It is appropriate for use in more complex situations.
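The idea can be sketched as follows, assuming an ordinary least-squares fit with limits placed at ±3 residual standard deviations parallel to the fitted line (real treatments refine the limit width to account for estimation error):

```python
import statistics

def regression_control_limits(x, y, k=3.0):
    """Fit y = a + b*x by ordinary least squares, then place control
    limits parallel to the regression line at +/- k residual standard
    deviations.  Returns (centre, upper, lower) for each x.
    (A simplified sketch, not the exact published procedure.)"""
    n = len(x)
    mx, my = statistics.mean(x), statistics.mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    s = (sum(r * r for r in resid) / (n - 2)) ** 0.5  # residual std dev
    return [(a + b * xi, a + b * xi + k * s, a + b * xi - k * s) for xi in x]

# Perfectly linear data: the centre line passes through every point.
lines = regression_control_limits([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```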
== References ==
== External links ==
Quality Inspection | Wikipedia/Regression_control_chart |
In statistical quality control, the individual/moving-range chart is a type of control chart used to monitor variables data from a business or industrial process for which it is impractical to use rational subgroups.
The chart is necessary in the following situations:
Where automation allows inspection of each unit, so rational subgrouping has less benefit.
Where production is slow so that waiting for enough samples to make a rational subgroup unacceptably delays monitoring
For processes that produce homogeneous batches (e.g., chemical) where repeat measurements vary primarily because of measurement error
The "chart" actually consists of a pair of charts: one, the individuals chart, displays the individual measured values; the other, the moving range chart, displays the difference from one point to the next. As with other control charts, these two charts enable the user to monitor a process for shifts in the process that alter the mean or variance of the measured statistic.
== Interpretation ==
As with other control charts, the individuals and moving range charts consist of points plotted with the control limits, or natural process limits. These limits reflect what the process will deliver without fundamental changes. Points outside of these control limits are signals indicating that the process is not operating as consistently as possible; some assignable cause has resulted in a change in the process. Similarly, runs of points on one side of the average line should also be interpreted as a signal of some change in the process. When such signals exist, action should be taken to identify and eliminate the assignable cause. When no such signals are present, no changes to the process control variables (i.e. "tampering") are necessary or desirable.
== Assumptions ==
The normal distribution is not assumed nor required in the calculation of control limits, which makes the individuals/moving-range chart a very robust tool. Wheeler demonstrates this using real-world data and a number of highly non-normal probability distributions.
== Calculation and plotting ==
=== Calculation of moving range ===
The difference between a data point, x_i, and its predecessor, x_{i−1}, is calculated as MR_i = |x_i − x_{i−1}|. For m individual values, there are m − 1 ranges.
Next, the arithmetic mean of these values is calculated as MR̄ = (Σ_{i=2}^{m} MR_i) / (m − 1).
If the data are normally distributed with standard deviation σ, then the expected value of MR̄ is d_2 σ = 2σ/√π, the mean absolute difference of the normal distribution.
=== Calculation of moving range control limit ===
The upper control limit for the range (or upper range limit) is calculated by multiplying the average of the moving range by 3.267:
UCL_r = 3.267 MR̄.
The value 3.267 is taken from the sample-size-specific D4 anti-biasing constant for n = 2, as given in most textbooks on statistical process control (see, for example, Montgomery, p. 725).
=== Calculation of individuals control limits ===
First, the average of the individual values is calculated:
x̄ = (Σ_{i=1}^{m} x_i) / m.
Next, the upper control limit (UCL) and lower control limit (LCL) for the individual values (or upper and lower natural process limits) are calculated by adding or subtracting 2.66 times the average moving range to the process average:
UCL = x̄ + 2.66 MR̄
LCL = x̄ − 2.66 MR̄
The value 2.66 is obtained by dividing 3 by the sample-size-specific d2 anti-biasing constant for n = 2, as given in most textbooks on statistical process control (see, for example, Montgomery, p. 725).
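Putting the pieces together, the limit calculations above can be sketched as:

```python
def imr_limits(data):
    """Individuals & moving-range chart limits using the constants
    above (d2 = 1.128 and D4 = 3.267 for n = 2)."""
    m = len(data)
    mrs = [abs(b - a) for a, b in zip(data, data[1:])]  # m - 1 moving ranges
    mr_bar = sum(mrs) / (m - 1)
    x_bar = sum(data) / m
    return {
        "UCL": x_bar + 2.66 * mr_bar,   # 2.66 ~= 3 / 1.128
        "LCL": x_bar - 2.66 * mr_bar,
        "UCL_r": 3.267 * mr_bar,
    }

lims = imr_limits([10.0, 12.0, 11.0, 13.0, 11.0])  # mr_bar = 1.75, x_bar = 11.4
```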
=== Creation of graphs ===
Once the averages and limits are calculated, all of the individuals data are plotted serially, in the order in which they were recorded. To this plot is added a line at the average value, x̄, and lines at the UCL and LCL values.
On a separate graph, the calculated ranges MR_i are plotted. A line is added for the average value, MR̄, and a second line is plotted for the range upper control limit (UCL_r).
=== Analysis ===
The resulting plots are analyzed as for other control charts, using the rules that are deemed appropriate for the process and the desired level of control. At the least, any points above either upper control limits or below the lower control limit are marked and considered a signal of changes in the underlying process that are worth further investigation.
== Potential pitfalls ==
The moving ranges involved are serially correlated, so runs or cycles can show up on the moving average chart that do not indicate real problems in the underlying process.
In some cases, it may be advisable to use the median of the moving range rather than its average, as when the calculated range data contains a few large values that may inflate the estimate of the population's dispersion.
Some have alleged that departures from normality in the process output significantly reduce the effectiveness of the charts, to the point where it may require control limits to be set based on percentiles of the empirically determined distribution of the process output, although this assertion has been consistently refuted. See Footnote 6.
Many software packages will, given the individuals data, perform all of the needed calculations and plot the results. Care should be taken to ensure that the control limits are correctly calculated, per the above and standard texts on SPC. In some cases, the software's default settings may produce incorrect results; in others, user modifications to the settings could result in incorrect results. Sample data and results are presented by Wheeler for the explicit purpose of testing SPC software. Performing such software validation is generally a good idea with any SPC software.
== See also ==
x̄ and R chart
x̄ and s chart
== References ==
== External links ==
Online control chart generator Archived 2019-01-04 at the Wayback Machine | Wikipedia/Shewhart_individuals_control_chart |
Process window index (PWI) is a statistical measure that quantifies the robustness of a manufacturing process, e.g. one which involves heating and cooling, known as a thermal process. In manufacturing industry, PWI values are used to calibrate the heating and cooling of soldering jobs (known as a thermal profile) while baked in a reflow oven.
PWI measures how well a process fits into a user-defined process limit known as the specification limit. The specification limit is the tolerance allowed for the process and may be statistically determined. Industrially, these specification limits are known as the process window, and values that are plotted inside or outside this window are known as the process window index.
Using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
== Statistical process control ==
Process capability is the ability of a process to produce output within specified limits. To help determine whether a manufacturing or business process is in a state of statistical control, process engineers use control charts, which help to predict the future performance of the process based on the current process.
To help determine the capability of a process, statistically determined upper and lower limits are drawn on either side of a process mean on the control chart. The control limits are set at three standard deviations on either side of the process mean, and are known as the upper control limit (UCL) and lower control limit (LCL) respectively. If the process data plotted on the control chart remains within the control limits over an extended period, then the process is said to be stable.
The tolerance values specified by the end-user are known as specification limits – the upper specification limit (USL) and lower specification limit (LSL). If the process data plotted on a control chart remains within these specification limits, then the process is considered a capable process, denoted by Ĉ_pk.
The manufacturing industry has developed customized specification limits known as process windows. Within this process window, values are plotted. The values relative to the process mean of the window are known as the process window index. By using PWI values, processes can be accurately measured, analyzed, compared, and tracked at the same level of statistical process control and quality control available to other manufacturing processes.
=== Control limits ===
Control limits, also known as natural process limits, are horizontal lines drawn on a statistical process control chart, usually at a distance of ±3 standard deviations of the plotted statistic's mean, used to judge the stability of a process.
Control limits should not be confused with tolerance limits or specifications, which are completely independent of the distribution of the plotted sample statistic. Control limits describe what a process is capable of producing (sometimes referred to as the "voice of the process"), while tolerances and specifications describe how the product should perform to meet the customer's expectations (referred to as the "voice of the customer").
==== Use ====
Control limits are used to detect signals in process data that indicate that a process is not in control and, therefore, not operating predictably. A value in excess of the control limit indicates a special cause is affecting the process.
To detect signals, one of several rule sets may be used (see Control chart § Rules for detecting signals). One specification defines a signal as any single point outside of the control limits. A process is also considered out of control if there are seven consecutive points, still inside the control limits but all on one side of the mean.
For normally distributed statistics, the area bracketed by the control limits will on average contain 99.73% of all the plot points on the chart, as long as the process is and remains in statistical control. A false-detection rate of at least 0.27% is therefore expected.
It is often not known whether a particular process generates data that conform to particular distributions, but the Chebyshev's inequality and the Vysochanskij–Petunin inequality allow the inference that for any unimodal distribution at least 95% of the data will be encapsulated by limits placed at 3 sigma.
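These coverage figures can be checked numerically; the Chebyshev and Vysochanskij–Petunin values below are the distribution-free lower bounds at k = 3 standard deviations:

```python
import math

# Coverage of +/-3 sigma limits under a normal distribution:
normal_coverage = math.erf(3 / math.sqrt(2))   # about 0.9973
false_alarm_rate = 1 - normal_coverage         # about 0.0027

# Distribution-free lower bounds on 3-sigma coverage:
chebyshev = 1 - 1 / 3**2             # any distribution: at least ~88.9%
vysochanskij_petunin = 1 - 4 / (9 * 3**2)  # unimodal: at least ~95.1%
```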
== PWI in electronics manufacturing ==
An example of a process to which the PWI concept may be applied is soldering. In soldering, a thermal profile is the set of time-temperature values for a variety of processes such as slope, thermal soak, reflow, and peak.
Each thermal profile is ranked on how it fits in a process window (the specification or tolerance limit). Raw temperature values are normalized in terms of a percentage relative to both the process mean and the window limits. The center of the process window is defined as zero, and the extreme edges of the process window are ±99%. A PWI greater than or equal to 100% indicates that the profile does not process the product within specification. A PWI of 99% indicates that the profile runs at the edge of the process window. For example, if the process mean is set at 200 °C, with the process window calibrated at 180 °C and 220 °C respectively; then a measured value of 188 °C translates to a process window index of −60%. A lower PWI value indicates a more robust profile. For maximum efficiency, separate PWI values are computed for peak, slope, reflow, and soak processes of a thermal profile.
To avoid thermal shock affecting production, the steepest slope in the thermal profile is determined and leveled. Manufacturers use custom-built software to accurately determine and decrease the steepness of the slope. In addition, the software also automatically recalibrates the PWI values for the peak, slope, reflow, and soak processes. By setting PWI values, engineers can ensure that the reflow soldering work does not overheat or cool too quickly.
== Formula ==
The PWI is calculated as the worst case (i.e. highest number) in the set of thermal profile data. For each profile statistic the percentage used of the respective process window is calculated, and the worst case (i.e. highest percentage) is the PWI.
For example, a thermal profile with three thermocouples, with four profile statistics logged for each thermocouple, would have a set of twelve statistics for that thermal profile. In this case, the PWI would be the highest value among the twelve percentages of the respective process windows.
The formula to calculate PWI is:
PWI = 100 × max_{i=1…N, j=1…M} | (measured value[i,j] − average limits[i,j]) / (range[i,j] / 2) |
where:
i = 1 to N (number of thermocouples)
j = 1 to M (number of statistics per thermocouple)
measured value [i, j] = the [i, j]th statistic's measured value
average limits [i, j] = the average of the high and low (specified) limits of the [i, j]th statistic
range [i, j] = the high limit minus the low limit of the [i, j]th statistic
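A direct translation of the formula, assuming the measured values and the high/low limits are supplied as nested lists indexed [i][j]:

```python
def pwi(measured, low, high):
    """Process window index: the worst-case percentage of the process
    window used, over all thermocouples i and statistics j."""
    worst = 0.0
    for meas_row, low_row, high_row in zip(measured, low, high):
        for m, lo, hi in zip(meas_row, low_row, high_row):
            centre = (hi + lo) / 2       # average of the limits
            half_range = (hi - lo) / 2   # range / 2
            worst = max(worst, abs((m - centre) / half_range))
    return 100 * worst

# Article's example: window 180-220 C, measured 188 C -> 60% of the window used.
value = pwi([[188.0]], [[180.0]], [[220.0]])
```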
== See also ==
Acceptable quality limit
Control chart § Chart details
Reflow soldering
Wave soldering
== References == | Wikipedia/Control_limits |
Simultaneous perturbation stochastic approximation (SPSA) is an algorithmic method for optimizing systems with multiple unknown parameters. It is a type of stochastic approximation algorithm. As an optimization method, it is well suited to large-scale population models, adaptive modeling, simulation optimization, and atmospheric modeling. Many examples are presented at the SPSA website http://www.jhuapl.edu/SPSA. A comprehensive book on the subject is Bhatnagar et al. (2013). An early paper on the subject is Spall (1987), and the foundational paper providing the key theory and justification is Spall (1992).
SPSA is a descent method capable of finding global minima, sharing this property with other methods such as simulated annealing. Its main feature is the gradient approximation that requires only two measurements of the objective function, regardless of the dimension of the optimization problem. Recall that we want to find the optimal control
u* with loss function J(u):
u* = arg min_{u ∈ U} J(u).
Both Finite Differences Stochastic Approximation (FDSA) and SPSA use the same iterative process:
u_{n+1} = u_n − a_n ĝ_n(u_n),
where u_n = ((u_n)_1, (u_n)_2, …, (u_n)_p)^T represents the nth iterate, ĝ_n(u_n) is the estimate of the gradient of the objective function g(u) = ∂J(u)/∂u evaluated at u_n, and {a_n} is a positive number sequence converging to 0. If u_n is a p-dimensional vector, the ith component of the symmetric finite difference gradient estimator is:
FD: (ĝ_n(u_n))_i = (J(u_n + c_n e_i) − J(u_n − c_n e_i)) / (2 c_n),  1 ≤ i ≤ p,
where e_i is the unit vector with a 1 in the ith place, and c_n is a small positive number that decreases with n. With this method, 2p evaluations of J are needed for each ĝ_n. When p is large, this estimator loses efficiency.
Let now Δ_n be a random perturbation vector. The ith component of the stochastic perturbation gradient estimator is:
SP: (ĝ_n(u_n))_i = (J(u_n + c_n Δ_n) − J(u_n − c_n Δ_n)) / (2 c_n (Δ_n)_i).
Remark that FD perturbs only one direction at a time, while the SP estimator disturbs all directions at the same time (the numerator is identical in all p components). The number of loss function measurements needed in the SPSA method for each ĝ_n is always 2, independent of the dimension p. Thus, SPSA uses p times fewer function evaluations than FDSA, which makes it much more efficient.
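The SP estimator can be sketched directly from the formula; the Rademacher (±1) perturbation used here is a common choice, and the function name is illustrative:

```python
import random

def spsa_gradient(J, u, c_n):
    """Simultaneous perturbation gradient estimate: exactly two
    evaluations of J regardless of the dimension p."""
    delta = [random.choice([-1.0, 1.0]) for _ in u]  # Rademacher vector
    u_plus = [ui + c_n * di for ui, di in zip(u, delta)]
    u_minus = [ui - c_n * di for ui, di in zip(u, delta)]
    diff = J(u_plus) - J(u_minus)        # same numerator for every component
    return [diff / (2 * c_n * di) for di in delta]

# For p = 1 and a quadratic loss the estimate is exact: dJ/du of u^2 at 3 is 6.
est = spsa_gradient(lambda v: v[0] ** 2, [3.0], 0.1)
```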
Simple experiments with p = 2 showed that SPSA converges in the same number of iterations as FDSA. The latter follows approximately the steepest descent direction, behaving like the gradient method. On the other hand, SPSA, with its random search direction, does not follow exactly the gradient path. On average, though, it tracks it closely, because the gradient approximation is an almost unbiased estimator of the gradient, as shown in the following lemma.
== Convergence lemma ==
Denote by b_n = E[ĝ_n | u_n] − ∇J(u_n) the bias in the estimator ĝ_n. Assume that {(Δ_n)_i} are all mutually independent with zero mean and bounded second moments, and that E(|(Δ_n)_i|^{−1}) is uniformly bounded. Then b_n → 0 w.p. 1.
== Sketch of the proof ==
The main idea is to use conditioning on Δ_n to express E[(ĝ_n)_i] and then to use a second-order Taylor expansion of J(u_n + c_n Δ_n) and J(u_n − c_n Δ_n). After algebraic manipulations using the zero mean and the independence of {(Δ_n)_i}, we get
E[(ĝ_n)_i] = (g_n)_i + O(c_n²).
The result follows from the hypothesis that c_n → 0.
Next we summarize some of the hypotheses under which u_n converges in probability to the set of global minima of J(u). The efficiency of the method depends on the shape of J(u), the values of the parameters a_n and c_n, and the distribution of the perturbation terms Δ_ni. First, the algorithm parameters must satisfy the following conditions:
a_n > 0, a_n → 0 as n → ∞, and Σ_{n=1}^∞ a_n = ∞. A good choice would be a_n = a/n, a > 0;
c_n = c/n^γ, where c > 0 and γ ∈ [1/6, 1/2];
Σ_{n=1}^∞ (a_n/c_n)² < ∞.
Δ_ni must be mutually independent zero-mean random variables, symmetrically distributed about zero, with |Δ_ni| < a_1 < ∞. The inverse first and second moments of the Δ_ni must be finite.
A good choice for Δ_ni is the Rademacher distribution, i.e. Bernoulli ±1 with probability 0.5. Other choices are possible too, but note that the uniform and normal distributions cannot be used because they do not satisfy the finite inverse moment conditions.
The loss function J(u) must be thrice continuously differentiable and the individual elements of the third derivative must be bounded: |J^(3)(u)| < a_3 < ∞. Also, |J(u)| → ∞ as u → ∞.
In addition, ∇J must be Lipschitz continuous and bounded, and the ODE u̇ = g(u) must have a unique solution for each initial condition.
Under these conditions and a few others, u_n converges in probability to the set of global minima of J(u) (see Maryak and Chin, 2008).
It has been shown that differentiability is not required: continuity and convexity are sufficient for convergence.
== Extension to second-order (Newton) methods ==
It is known that a stochastic version of the standard (deterministic) Newton-Raphson algorithm (a “second-order” method) provides an asymptotically optimal or near-optimal form of stochastic approximation. SPSA can also be used to efficiently estimate the Hessian matrix of the loss function based on either noisy loss measurements or noisy gradient measurements (stochastic gradients). As with the basic SPSA method, only a small fixed number of loss measurements or gradient measurements are needed at each iteration, regardless of the problem dimension p. See the brief discussion in Stochastic gradient descent.
== References ==
Bhatnagar, S., Prasad, H. L., and Prashanth, L. A. (2013), Stochastic Recursive Algorithms for Optimization: Simultaneous Perturbation Methods, Springer.
Hirokami, T., Maeda, Y., and Tsukada, H. (2006), "Parameter estimation using simultaneous perturbation stochastic approximation," Electrical Engineering in Japan, 154 (2), 30–3.
Maryak, J. L., and Chin, D. C. (2008), "Global Random Optimization by Simultaneous Perturbation Stochastic Approximation," IEEE Transactions on Automatic Control, vol. 53, pp. 780–783.
Spall, J. C. (1987), "A Stochastic Approximation Technique for Generating Maximum Likelihood Parameter Estimates," Proceedings of the American Control Conference, Minneapolis, MN, June 1987, pp. 1161–1167.
Spall, J. C. (1992), "Multivariate Stochastic Approximation Using a Simultaneous Perturbation Gradient Approximation," IEEE Transactions on Automatic Control, vol. 37(3), pp. 332–341.
Spall, J. C. (1998), "Overview of the Simultaneous Perturbation Method for Efficient Optimization," Johns Hopkins APL Technical Digest, 19(4), 482–492.
Spall, J. C. (2003), Introduction to Stochastic Search and Optimization: Estimation, Simulation, and Control, Wiley. ISBN 0-471-33052-3 (Chapter 7)
In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function f, its derivative f′, and an initial guess x0 for a root of f. If f satisfies certain assumptions and the initial guess is close, then
{\displaystyle x_{1}=x_{0}-{\frac {f(x_{0})}{f'(x_{0})}}}
is a better approximation of the root than x0. Geometrically, (x1, 0) is the x-intercept of the tangent of the graph of f at (x0, f(x0)): that is, the improved guess, x1, is the unique root of the linear approximation of f at the initial guess, x0. The process is repeated as
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}}
until a sufficiently precise value is reached. The number of correct digits roughly doubles with each step. This algorithm is first in the class of Householder's methods, and was succeeded by Halley's method. The method can also be extended to complex functions and to systems of equations.
== Description ==
The purpose of Newton's method is to find a root of a function. The idea is to start with an initial guess at a root, approximate the function by its tangent line near the guess, and then take the root of the linear approximation as a next guess at the function's root. This will typically be closer to the function's root than the previous guess, and the method can be iterated.
The best linear approximation to an arbitrary differentiable function
{\displaystyle f(x)}
near the point
{\displaystyle x=x_{n}}
is the tangent line to the curve, with equation
{\displaystyle f(x)\approx f(x_{n})+f'(x_{n})(x-x_{n}).}
The root of this linear function, the place where it intercepts the
{\displaystyle x}
-axis, can be taken as a closer approximate root
{\displaystyle x_{n+1}}
:
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}.}
The process can be started with any arbitrary initial guess
{\displaystyle x_{0}}
, though it will generally require fewer iterations to converge if the guess is close to one of the function's roots. The method will usually converge if
{\displaystyle f'(x_{0})\neq 0}
. Furthermore, for a root of multiplicity 1, the convergence is at least quadratic (see Rate of convergence) in some sufficiently small neighbourhood of the root: the number of correct digits of the approximation roughly doubles with each additional step. More details can be found in § Analysis below.
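The tangent-line update described above translates directly into code. The following Python sketch (the function name `newton` and the stopping rule are our own choices) iterates the step until successive guesses agree to a tolerance:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeatedly replace the guess by the root of
    the tangent line, x <- x - f(x)/f'(x), until successive iterates
    agree to within tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        dfx = fprime(x)
        if dfx == 0:
            raise ZeroDivisionError("zero derivative; cannot continue")
        x_next = x - fx / dfx
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("did not converge within max_iter iterations")
```

For example, `newton(lambda x: x*x - 2.0, lambda x: 2.0*x, 1.0)` converges to the square root of 2 in a handful of iterations.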
Householder's methods are similar but have higher order for even faster convergence. However, the extra computations required for each step can slow down the overall performance relative to Newton's method, particularly if
{\displaystyle f}
or its derivatives are computationally expensive to evaluate.
== History ==
In the Old Babylonian period (19th–16th century BCE), the side of a square of known area could be effectively approximated, and this is conjectured to have been done using a special case of Newton's method, described algebraically below, by iteratively improving an initial estimate; an equivalent method can be found in Heron of Alexandria's Metrica (1st–2nd century CE), so is often called Heron's method. Jamshīd al-Kāshī used a method algebraically equivalent to Newton's method to solve xP − N = 0 and find Pth roots of N; a similar method was later found in Trigonometria Britannica, published by Henry Briggs in 1633.
The method first appeared roughly in Isaac Newton's work in De analysi per aequationes numero terminorum infinitas (written in 1669, published in 1711 by William Jones) and in De metodis fluxionum et serierum infinitarum (written in 1671, translated and published as Method of Fluxions in 1736 by John Colson). However, while Newton gave the basic ideas, his method differs from the modern method given above. He applied the method only to polynomials, starting with an initial root estimate and extracting a sequence of error corrections. He used each correction to rewrite the polynomial in terms of the remaining error, and then solved for a new correction by neglecting higher-degree terms. He did not explicitly connect the method with derivatives or present a general formula. Newton applied this method to both numerical and algebraic problems, producing Taylor series in the latter case.
Newton may have derived his method from a similar, less precise method by the mathematician François Viète; however, the two methods are not the same. The essence of Viète's own method can be found in the work of the mathematician Sharaf al-Din al-Tusi.
The Japanese mathematician Seki Kōwa used a form of Newton's method in the 1680s to solve single-variable equations, though the connection with calculus was missing.
Newton's method was first published in 1685 in A Treatise of Algebra both Historical and Practical by John Wallis. In 1690, Joseph Raphson published a simplified description in Analysis aequationum universalis. Raphson also applied the method only to polynomials, but he avoided Newton's tedious rewriting process by extracting each successive correction from the original polynomial. This allowed him to derive a reusable iterative expression for each problem. Finally, in 1740, Thomas Simpson described Newton's method as an iterative method for solving general nonlinear equations using calculus, essentially giving the description above. In the same publication, Simpson also gives the generalization to systems of two equations and notes that Newton's method can be used for solving optimization problems by setting the gradient to zero.
Arthur Cayley in 1879 in The Newton–Fourier imaginary problem was the first to notice the difficulties in generalizing Newton's method to complex roots of polynomials with degree greater than 2 and complex initial values. This opened the way to the study of the theory of iterations of rational functions.
== Practical considerations ==
Newton's method is a powerful technique—if the derivative of the function at the root is nonzero, then the convergence is at least quadratic: as the method converges on the root, the difference between the root and the approximation is squared (the number of accurate digits roughly doubles) at each step. However, there are some difficulties with the method.
=== Difficulty in calculating the derivative of a function ===
Newton's method requires that the derivative can be calculated directly. An analytical expression for the derivative may not be easily obtainable or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative by using the slope of a line through two nearby points on the function. Using this approximation would result in something like the secant method whose convergence is slower than that of Newton's method.
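This derivative-free variant can be sketched as follows; the helper name `newton_fd` and the step size `h` are illustrative choices, and the symmetric difference quotient through two nearby points stands in for the exact derivative:

```python
import math

def newton_fd(f, x0, h=1e-6, tol=1e-10, max_iter=100):
    """Newton-like iteration with the derivative replaced by the
    slope of a line through two nearby points, in the spirit of
    the secant method."""
    x = x0
    for _ in range(max_iter):
        slope = (f(x + h) - f(x - h)) / (2 * h)
        x_next = x - f(x) / slope
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For a smooth function the central difference introduces an error of order h squared in the slope, so the iteration behaves almost like true Newton; a one-sided difference through the two most recent iterates would give exactly the secant method.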
=== Failure of the method to converge to the root ===
It is important to review the proof of quadratic convergence of Newton's method before implementing it, paying particular attention to the assumptions made in the proof. When the method fails to converge, it is because one or more of those assumptions does not hold.
For example, in some cases, if the first derivative is not well behaved in the neighborhood of a particular root, then it is possible that Newton's method will fail to converge no matter where the initialization is set. In some cases, Newton's method can be stabilized by using successive over-relaxation, or the speed of convergence can be increased by using the same method.
In a robust implementation of Newton's method, it is common to place limits on the number of iterations, bound the solution to an interval known to contain the root, and combine the method with a more robust root finding method.
=== Slow convergence for roots of multiplicity greater than 1 ===
If the root being sought has multiplicity greater than one, the convergence rate is merely linear (errors reduced by a constant factor at each step) unless special steps are taken. When there are two or more roots that are close together then it may take many iterations before the iterates get close enough to one of them for the quadratic convergence to be apparent. However, if the multiplicity m of the root is known, the following modified algorithm preserves the quadratic convergence rate:
{\displaystyle x_{n+1}=x_{n}-m{\frac {f(x_{n})}{f'(x_{n})}}.}
This is equivalent to using successive over-relaxation. On the other hand, if the multiplicity m of the root is not known, it is possible to estimate m after carrying out one or two iterations, and then use that value to increase the rate of convergence.
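The modified update is a one-line change to the basic iteration. A minimal sketch (the name `modified_newton` is ours), applied below to f(x) = (x − 1)², whose root at 1 has multiplicity m = 2:

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=100):
    """Newton iteration scaled by a known root multiplicity m:
    x <- x - m*f(x)/f'(x), which restores quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x  # landed exactly on the root
        x_next = x - m * fx / fprime(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

With m = 2 the quadratic f(x) = (x − 1)² is solved exactly in one step from any starting point, whereas the unmodified iteration only halves the error at each step.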
If the multiplicity m of the root is finite then g(x) = f(x)/f′(x) will have a root at the same location with multiplicity 1. Applying Newton's method to find the root of g(x) recovers quadratic convergence in many cases although it generally involves the second derivative of f(x). In a particularly simple case, if f(x) = xm then g(x) = x/m and Newton's method finds the root in a single iteration with
{\displaystyle x_{n+1}=x_{n}-{\frac {g(x_{n})}{g'(x_{n})}}=x_{n}-{\frac {\;{\frac {x_{n}}{m}}\;}{\frac {1}{m}}}=0\,.}
=== Slow convergence ===
The function f(x) = x2 has a root at 0. Since f is continuously differentiable at its root, the theory guarantees that Newton's method as initialized sufficiently close to the root will converge. However, since the derivative f ′ is zero at the root, quadratic convergence is not ensured by the theory. In this particular example, the Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {1}{2}}x_{n}.}
It is visible from this that Newton's method could be initialized anywhere and converge to zero, but at only a linear rate. If initialized at 1, dozens of iterations would be required before ten digits of accuracy are achieved.
The function f(x) = x + x4/3 also has a root at 0, where it is continuously differentiable. Although the first derivative f ′ is nonzero at the root, the second derivative f ′′ is nonexistent there, so that quadratic convergence cannot be guaranteed. In fact the Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}={\frac {x_{n}^{4/3}}{3+4x_{n}^{1/3}}}\approx x_{n}\cdot {\frac {x_{n}^{1/3}}{3}}.}
From this, it can be seen that the rate of convergence is superlinear but subquadratic. This can be seen in the following tables, the left of which shows Newton's method applied to the above f(x) = x + x4/3 and the right of which shows Newton's method applied to f(x) = x + x2. The quadratic convergence in iteration shown on the right is illustrated by the orders of magnitude in the distance from the iterate to the true root (0,1,2,3,5,10,20,39,...) being approximately doubled from row to row. While the convergence on the left is superlinear, the order of magnitude is only multiplied by about 4/3 from row to row (0,1,2,4,5,7,10,13,...).
The rate of convergence is distinguished from the number of iterations required to reach a given accuracy. For example, the function f(x) = x20 − 1 has a root at 1. Since f ′(1) ≠ 0 and f is smooth, it is known that any Newton iteration convergent to 1 will converge quadratically. However, if initialized at 0.5, the first few iterates of Newton's method are approximately 26214, 24904, 23658, 22476, decreasing slowly, with only the 200th iterate being 1.0371. The following iterates are 1.0103, 1.00093, 1.0000082, and 1.00000000065, illustrating quadratic convergence. This highlights that quadratic convergence of a Newton iteration does not mean that only few iterates are required; this only applies once the sequence of iterates is sufficiently close to the root.
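The behaviour described here is easy to reproduce with a short script (the quoted iterates in the text are rounded):

```python
def step(x):
    # One Newton step for f(x) = x**20 - 1.
    return x - (x**20 - 1) / (20 * x**19)

x = 0.5
history = [x]
for _ in range(250):
    x = step(x)
    history.append(x)
```

The first iterate jumps to roughly 26215, the following iterates shrink by a factor of about 0.95 per step, and only around the 200th iterate does the sequence enter the quadratically convergent regime near the root at 1.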
=== Convergence dependent on initialization ===
The function f(x) = x(1 + x2)−1/2 has a root at 0. The Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}(1+x_{n}^{2})^{-1/2}}{(1+x_{n}^{2})^{-3/2}}}=-x_{n}^{3}.}
From this, it can be seen that there are three possible phenomena for a Newton iteration. If initialized strictly between ±1, the Newton iteration will converge (super-)quadratically to 0; if initialized exactly at 1 or −1, the Newton iteration will oscillate endlessly between ±1; if initialized anywhere else, the Newton iteration will diverge. This same trichotomy occurs for f(x) = arctan x.
In cases where the function in question has multiple roots, it can be difficult to control, via choice of initialization, which root (if any) is identified by Newton's method. For example, the function f(x) = x(x2 − 1)(x − 3)e−(x − 1)2/2 has roots at −1, 0, 1, and 3. If initialized at −1.488, the Newton iteration converges to 0; if initialized at −1.487, it diverges to ∞; if initialized at −1.486, it converges to −1; if initialized at −1.485, it diverges to −∞; if initialized at −1.4843, it converges to 3; if initialized at −1.484, it converges to 1. This kind of subtle dependence on initialization is not uncommon; it is frequently studied in the complex plane in the form of the Newton fractal.
=== Divergence even when initialization is close to the root ===
Consider the problem of finding a root of f(x) = x1/3. The Newton iteration is
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{1/3}}{{\frac {1}{3}}x_{n}^{-2/3}}}=-2x_{n}.}
Unless Newton's method is initialized at the exact root 0, it is seen that the sequence of iterates will fail to converge. For example, even if initialized at the reasonably accurate guess of 0.001, the first several iterates are −0.002, 0.004, −0.008, 0.016, reaching 1048.58, −2097.15, ... by the 20th iterate. This failure of convergence is not contradicted by the analytic theory, since in this case f is not differentiable at its root.
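The doubling of the iterates can be verified directly. Python's `**` operator does not produce real cube roots of negative numbers, so a real cube-root helper (`cbrt`, our own name) is used in this sketch:

```python
import math

def cbrt(x):
    # Real cube root, valid for negative arguments as well.
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

def step(x):
    fx = cbrt(x)                       # f(x) = x**(1/3)
    dfx = 1.0 / (3.0 * cbrt(x) ** 2)   # f'(x) = (1/3) * x**(-2/3)
    # Algebraically this step reduces to -2*x.
    return x - fx / dfx

x = 0.001
for _ in range(20):
    x = step(x)
```

After 20 steps the iterate has magnitude about 0.001 · 2^20 ≈ 1048.58, matching the figures quoted above.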
In the above example, failure of convergence is reflected by the failure of f(xn) to get closer to zero as n increases, as well as by the fact that successive iterates are growing further and further apart. However, the function f(x) = x1/3e−x2 also has a root at 0. The Newton iteration is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}\left(1-{\frac {3}{1-6x_{n}^{2}}}\right).}
In this example, where again f is not differentiable at the root, any Newton iteration not starting exactly at the root will diverge, but with both xn + 1 − xn and f(xn) converging to zero. This is seen in the following table showing the iterates with initialization 1:
Although the convergence of xn + 1 − xn in this case is not very rapid, it can be proved from the iteration formula. This example highlights the possibility that a stopping criterion for Newton's method based only on the smallness of xn + 1 − xn and f(xn) might falsely identify a root.
=== Oscillatory behavior ===
It is easy to find situations for which Newton's method oscillates endlessly between two distinct values. For example, for Newton's method as applied to a function f to oscillate between 0 and 1, it is only necessary that the tangent line to f at 0 intersects the x-axis at 1 and that the tangent line to f at 1 intersects the x-axis at 0. This is the case, for example, if f(x) = x3 − 2x + 2. For this function, it is even the case that Newton's iteration as initialized sufficiently close to 0 or 1 will asymptotically oscillate between these values. For example, Newton's method as initialized at 0.99 yields iterates 0.99, −0.06317, 1.00628, 0.03651, 1.00196, 0.01162, 1.00020, 0.00120, 1.000002, and so on. This behavior is present despite the presence of a root of f approximately equal to −1.76929.
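This two-cycle can be checked numerically with a minimal sketch:

```python
def step(x):
    # One Newton step for f(x) = x**3 - 2*x + 2.
    return x - (x**3 - 2*x + 2) / (3*x**2 - 2)

x = 0.99
orbit = [x]
for _ in range(8):
    x = step(x)
    orbit.append(x)
```

The even-indexed iterates approach 1 and the odd-indexed iterates approach 0, reproducing the oscillation listed in the text.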
=== Undefinedness of Newton's method ===
In some cases, it is not even possible to perform the Newton iteration. For example, if f(x) = x2 − 1, then the Newton iteration is defined by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-1}{2x_{n}}}={\frac {x_{n}^{2}+1}{2x_{n}}}.}
So Newton's method cannot be initialized at 0, since this would make x1 undefined. Geometrically, this is because the tangent line to f at 0 is horizontal (i.e. f ′(0) = 0), never intersecting the x-axis.
Even if the initialization is selected so that the Newton iteration can begin, the same phenomenon can block the iteration from being indefinitely continued.
If f has an incomplete domain, it is possible for Newton's method to send the iterates outside of the domain, so that it is impossible to continue the iteration. For example, the natural logarithm function f(x) = ln x has a root at 1, and is defined only for positive x. Newton's iteration in this case is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}(1-\ln x_{n}).}
So if the iteration is initialized at e, the next iterate is 0; if the iteration is initialized at a value larger than e, then the next iterate is negative. In either case, the method cannot be continued.
== Analysis ==
Suppose that the function f has a zero at α, i.e., f(α) = 0, and f is differentiable in a neighborhood of α.
If f is continuously differentiable and its derivative is nonzero at α, then there exists a neighborhood of α such that for all starting values x0 in that neighborhood, the sequence (xn) will converge to α.
If f is continuously differentiable, its derivative is nonzero at α, and it has a second derivative at α, then the convergence is quadratic or faster. If the second derivative is not 0 at α then the convergence is merely quadratic. If the third derivative exists and is bounded in a neighborhood of α, then:
{\displaystyle \Delta x_{i+1}={\frac {f''(\alpha )}{2f'(\alpha )}}\left(\Delta x_{i}\right)^{2}+O\left(\Delta x_{i}\right)^{3}\,,}
where
{\displaystyle \Delta x_{i}\triangleq x_{i}-\alpha \,.}
If the derivative is 0 at α, then the convergence is usually only linear. Specifically, if f is twice continuously differentiable, f′(α) = 0 and f″(α) ≠ 0, then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly, with rate 1/2. Alternatively, if f′(α) = 0 and f′(x) ≠ 0 for x ≠ α, x in a neighborhood U of α, α being a zero of multiplicity r, and if f ∈ Cr(U), then there exists a neighborhood of α such that, for all starting values x0 in that neighborhood, the sequence of iterates converges linearly.
However, even linear convergence is not guaranteed in pathological situations.
In practice, these results are local, and the neighborhood of convergence is not known in advance. But there are also some results on global convergence: for instance, given a right neighborhood U+ of α, if f is twice differentiable in U+ and if f′ ≠ 0, f · f″ > 0 in U+, then, for each x0 in U+ the sequence xk is monotonically decreasing to α.
=== Proof of quadratic convergence for Newton's iterative method ===
According to Taylor's theorem, any function f(x) which has a continuous second derivative can be represented by an expansion about a point that is close to a root of f(x). Suppose this root is α. Then the expansion of f(α) about xn is:
{\displaystyle f(\alpha )=f(x_{n})+f'(x_{n})\left(\alpha -x_{n}\right)+R_{1}} (1)
where the Lagrange form of the Taylor series expansion remainder is
{\displaystyle R_{1}={\frac {1}{2!}}f''(\xi _{n})\left(\alpha -x_{n}\right)^{2}\,,}
where ξn is in between xn and α.
Since α is the root, (1) becomes:
{\displaystyle 0=f(x_{n})+f'(x_{n})\left(\alpha -x_{n}\right)+{\frac {1}{2}}f''(\xi _{n})\left(\alpha -x_{n}\right)^{2}} (2)
Dividing equation (2) by f′(xn) and rearranging gives
{\displaystyle {\frac {f(x_{n})}{f'(x_{n})}}+\left(\alpha -x_{n}\right)={\frac {-f''(\xi _{n})}{2f'(x_{n})}}\left(\alpha -x_{n}\right)^{2}} (3)
Remembering that xn + 1 is defined by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}\,,} (4)
one finds that
{\displaystyle \underbrace {\alpha -x_{n+1}} _{\varepsilon _{n+1}}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}{(\,\underbrace {\alpha -x_{n}} _{\varepsilon _{n}}\,)}^{2}\,.}
That is,
{\displaystyle \varepsilon _{n+1}={\frac {-f''(\xi _{n})}{2f'(x_{n})}}\cdot \varepsilon _{n}^{2}\,.} (5)
Taking the absolute value of both sides gives
{\displaystyle |\varepsilon _{n+1}|={\frac {\left|f''(\xi _{n})\right|}{2\left|f'(x_{n})\right|}}\cdot \varepsilon _{n}^{2}\,.} (6)
Equation (6) shows that the order of convergence is at least quadratic if the following conditions are satisfied:
f′(x) ≠ 0 for all x ∈ I, where I is the interval [α − |ε0|, α + |ε0|];
f″(x) is continuous, for all x ∈ I;
M |ε0| < 1
where M is given by
{\displaystyle M={\frac {1}{2}}\left(\sup _{x\in I}\vert f''(x)\vert \right)\left(\sup _{x\in I}{\frac {1}{\vert f'(x)\vert }}\right).\,}
If these conditions hold,
{\displaystyle \vert \varepsilon _{n+1}\vert \leq M\cdot \varepsilon _{n}^{2}\,.}
=== Fourier conditions ===
Suppose that f(x) is a concave function on an interval, which is strictly increasing. If it is negative at the left endpoint and positive at the right endpoint, the intermediate value theorem guarantees that there is a zero ζ of f somewhere in the interval. From geometrical principles, it can be seen that the Newton iteration xi starting at the left endpoint is monotonically increasing and convergent, necessarily to ζ.
Joseph Fourier introduced a modification of Newton's method starting at the right endpoint:
{\displaystyle y_{i+1}=y_{i}-{\frac {f(y_{i})}{f'(x_{i})}}.}
This sequence is monotonically decreasing and convergent. By passing to the limit in this definition, it can be seen that the limit of yi must also be the zero ζ.
So, in the case of a concave increasing function with a zero, initialization is largely irrelevant. Newton iteration starting anywhere left of the zero will converge, as will Fourier's modified Newton iteration starting anywhere right of the zero. The accuracy at any step of the iteration can be determined directly from the difference between the location of the iteration from the left and the location of the iteration from the right. If f is twice continuously differentiable, it can be proved using Taylor's theorem that
{\displaystyle \lim _{i\to \infty }{\frac {y_{i+1}-x_{i+1}}{(y_{i}-x_{i})^{2}}}=-{\frac {1}{2}}{\frac {f''(\zeta )}{f'(\zeta )}},}
showing that this difference in locations converges quadratically to zero.
All of the above can be extended to systems of equations in multiple variables, although in that context the relevant concepts of monotonicity and concavity are more subtle to formulate. In the case of single equations in a single variable, the above monotonic convergence of Newton's method can also be generalized to replace concavity by positivity or negativity conditions on an arbitrary higher-order derivative of f. However, in this generalization, Newton's iteration is modified so as to be based on Taylor polynomials rather than the tangent line. In the case of concavity, this modification coincides with the standard Newton method.
=== Error for n>1 variables ===
If we seek the root of a single function
{\displaystyle f:\mathbf {R} ^{n}\to \mathbf {R} }
then the error
{\displaystyle \epsilon _{n}=x_{n}-\alpha }
is a vector such that its components obey
{\displaystyle \epsilon _{k}^{(n+1)}={\frac {1}{2}}(\epsilon ^{(n)})^{T}Q_{k}\epsilon ^{(n)}+O(\|\epsilon ^{(n)}\|^{3})}
where
{\displaystyle Q_{k}}
is a quadratic form:
{\displaystyle (Q_{k})_{i,j}=\sum _{\ell }((D^{2}f)^{-1})_{i,\ell }{\frac {\partial ^{3}f}{\partial x_{j}\partial x_{k}\partial x_{\ell }}}}
evaluated at the root
{\displaystyle \alpha }
(where
{\displaystyle D^{2}f}
is the 2nd derivative Hessian matrix).
== Examples ==
=== Use of Newton's method to compute square roots ===
Newton's method is one of many known methods of computing square roots. Given a positive number a, the problem of finding a number x such that x2 = a is equivalent to finding a root of the function f(x) = x2 − a. The Newton iteration defined by this function is given by
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}-{\frac {x_{n}^{2}-a}{2x_{n}}}={\frac {1}{2}}\left(x_{n}+{\frac {a}{x_{n}}}\right).}
This happens to coincide with the "Babylonian" method of finding square roots, which consists of replacing an approximate root xn by the arithmetic mean of xn and a⁄xn. By performing this iteration, it is possible to evaluate a square root to any desired accuracy by only using the basic arithmetic operations.
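The Babylonian averaging step can be sketched as follows (the function name `babylonian_sqrt` and the relative stopping rule are our own choices):

```python
def babylonian_sqrt(a, x0, tol=1e-12, max_iter=100):
    """Newton's method for f(x) = x*x - a: repeatedly replace x by
    the arithmetic mean of x and a/x."""
    x = x0
    for _ in range(max_iter):
        x_next = 0.5 * (x + a / x)
        if abs(x_next - x) < tol * abs(x_next):
            return x_next
        x = x_next
    return x
```

Starting from the crude guess 1, the first iterate is 306.5 = (1 + 612/1)/2 and the sequence settles on the square root of 612, about 24.7386, after roughly ten further steps.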
The following three tables show examples of the result of this computation for finding the square root of 612, with the iteration initialized at the values of 1, 10, and −20. Each row in a "xn" column is obtained by applying the preceding formula to the entry above it, for instance
{\displaystyle 306.5={\frac {1}{2}}\left(1+{\frac {612}{1}}\right).}
The correct digits are underlined. It is seen that with only a few iterations one can obtain a solution accurate to many decimal places. The first table shows that this is true even if the Newton iteration were initialized by the very inaccurate guess of 1.
When computing any nonzero square root, the first derivative of f is nonzero at the root, and f is a smooth function. So, even before any computation, it is known that any convergent Newton iteration has a quadratic rate of convergence. This is reflected in the above tables by the fact that once a Newton iterate gets close to the root, the number of correct digits approximately doubles with each iteration.
=== Solution of cos(x) = x3 using Newton's method ===
Consider the problem of finding the positive number x with cos x = x3. We can rephrase that as finding the zero of f(x) = cos(x) − x3. We have f′(x) = −sin(x) − 3x2. Since cos(x) ≤ 1 for all x and x3 > 1 for x > 1, we know that our solution lies between 0 and 1.
A starting value of 0 will lead to an undefined result which illustrates the importance of using a starting point close to the solution. For example, with an initial guess x0 = 0.5, the sequence given by Newton's method is:
{\displaystyle {\begin{matrix}x_{1}&=&x_{0}-{\dfrac {f(x_{0})}{f'(x_{0})}}&=&0.5-{\dfrac {\cos 0.5-0.5^{3}}{-\sin 0.5-3\times 0.5^{2}}}&=&1.112\,141\,637\,097\dots \\x_{2}&=&x_{1}-{\dfrac {f(x_{1})}{f'(x_{1})}}&=&\vdots &=&{\underline {0.}}909\,672\,693\,736\dots \\x_{3}&=&\vdots &=&\vdots &=&{\underline {0.86}}7\,263\,818\,209\dots \\x_{4}&=&\vdots &=&\vdots &=&{\underline {0.865\,47}}7\,135\,298\dots \\x_{5}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,1}}11\dots \\x_{6}&=&\vdots &=&\vdots &=&{\underline {0.865\,474\,033\,102}}\dots \end{matrix}}}
The correct digits are underlined in the above example. In particular, x6 is correct to 12 decimal places. We see that the number of correct digits after the decimal point increases from 2 (for x3) to 5 and 10, illustrating the quadratic convergence.
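The same computation takes only a few lines of Python, using the derivative worked out above:

```python
import math

x = 0.5  # initial guess, as in the table above
for _ in range(8):
    # Newton step for f(x) = cos(x) - x**3, f'(x) = -sin(x) - 3*x**2.
    x = x - (math.cos(x) - x**3) / (-math.sin(x) - 3*x**2)
```

By the sixth step the iterate already agrees with 0.865474033102 to 12 decimal places, and further steps only polish the last few bits.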
== Multidimensional formulations ==
=== Systems of equations ===
==== k variables, k functions ====
One may also use Newton's method to solve systems of k equations, which amounts to finding the (simultaneous) zeroes of k continuously differentiable functions
{\displaystyle f:\mathbb {R} ^{k}\to \mathbb {R} .}
This is equivalent to finding the zeroes of a single vector-valued function
{\displaystyle F:\mathbb {R} ^{k}\to \mathbb {R} ^{k}.}
In the formulation given above, the scalars xn are replaced by vectors xn and instead of dividing the function f(xn) by its derivative f′(xn) one instead has to left multiply the function F(xn) by the inverse of its k × k Jacobian matrix JF(xn). This results in the expression
{\displaystyle \mathbf {x} _{n+1}=\mathbf {x} _{n}-J_{F}(\mathbf {x} _{n})^{-1}F(\mathbf {x} _{n}).}
or, by solving the system of linear equations
{\displaystyle J_{F}(\mathbf {x} _{n})(\mathbf {x} _{n+1}-\mathbf {x} _{n})=-F(\mathbf {x} _{n})}
for the unknown xn + 1 − xn.
==== k variables, m equations, with m > k ====
The k-dimensional variant of Newton's method can be used to solve systems of greater than k (nonlinear) equations as well if the algorithm uses the generalized inverse of the non-square Jacobian matrix J+ = (JTJ)−1JT instead of the inverse of J. If the nonlinear system has no solution, the method attempts to find a solution in the non-linear least squares sense. See Gauss–Newton algorithm for more information.
==== Example ====
For example, the following set of equations needs to be solved for vector of points
{\displaystyle \ [\ x_{1},x_{2}\ ]\ ,}
given the vector of known values
{\displaystyle \ [\ 2,3\ ]~.}
{\displaystyle {\begin{array}{lcr}5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})&=\quad 2\\e^{2\ x_{1}-x_{2}}+4\ x_{2}&=\quad 3\end{array}}}
the function vector,
{\displaystyle \ F(X_{k})\ ,}
and Jacobian Matrix,
{\displaystyle \ J(X_{k})\ }
for iteration k, and the vector of known values,
{\displaystyle \ Y\ ,}
are defined below.
{\displaystyle {\begin{aligned}~&F(X_{k})~=~{\begin{bmatrix}{\begin{aligned}~&f_{1}(X_{k})\\~&f_{2}(X_{k})\end{aligned}}\end{bmatrix}}~=~{\begin{bmatrix}{\begin{aligned}~&5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})\\~&e^{2\ x_{1}-x_{2}}+4\ x_{2}\end{aligned}}\end{bmatrix}}_{k}\\~&J(X_{k})={\begin{bmatrix}~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{1}(X)}\ }{\partial {x_{2}}}}~\\~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{1}}}}\ ,&~{\frac {\ \partial {f_{2}(X)}\ }{\partial {x_{2}}}}~\end{bmatrix}}_{k}~=~{\begin{bmatrix}{\begin{aligned}~&10\ x_{1}+x_{2}^{2}\ ,&&2\ x_{1}\ x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})\\~&2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4\end{aligned}}\end{bmatrix}}_{k}\\~&Y={\begin{bmatrix}~2~\\~3~\end{bmatrix}}\end{aligned}}}
Note that {\displaystyle \ F(X_{k})\ } could have been rewritten to absorb {\displaystyle \ Y\ ,} and thus eliminate {\displaystyle Y} from the equations. The equation to solve in each iteration is
{\displaystyle {\begin{aligned}{\begin{bmatrix}{\begin{aligned}~&~10\ x_{1}+x_{2}^{2}\ ,&&2x_{1}x_{2}+4\ \sin(2\ x_{2})\ \cos(2\ x_{2})~\\~&~2\ e^{2\ x_{1}-x_{2}}\ ,&&-e^{2\ x_{1}-x_{2}}+4~\end{aligned}}\end{bmatrix}}_{k}{\begin{bmatrix}~c_{1}~\\~c_{2}~\end{bmatrix}}_{k+1}={\begin{bmatrix}~5\ x_{1}^{2}+x_{1}\ x_{2}^{2}+\sin ^{2}(2\ x_{2})-2~\\~e^{2\ x_{1}-x_{2}}+4\ x_{2}-3~\end{bmatrix}}_{k}\end{aligned}}}
and {\displaystyle X_{k+1}~=~X_{k}-C_{k+1}}
The iterations should be repeated until {\displaystyle \ {\Bigg [}\sum _{i=1}^{i=2}{\Bigl |}f(x_{i})_{k}-(y_{i})_{k}{\Bigr |}{\Bigg ]}<E\ ,} where {\displaystyle \ E\ } is a value acceptably small enough to meet application requirements.
If vector {\displaystyle \ X_{0}\ } is initially chosen to be {\displaystyle \ {\begin{bmatrix}~1~&~1~\end{bmatrix}}\ ,} that is, {\displaystyle \ x_{1}=1\ } and {\displaystyle \ x_{2}=1\ ,} and {\displaystyle \ E\ } is chosen to be 1×10⁻³, then the example converges after four iterations to a value of {\displaystyle \ X_{4}=\left[~0.567297,\ -0.309442~\right]~.}
==== Iterations ====
The following iterations were made during the course of the solution.
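A minimal sketch reproducing this computation (the starting point [1, 1] and stopping threshold E = 10⁻³ follow the text; the implementation details are ours):

```python
import numpy as np

def F(x):
    x1, x2 = x
    return np.array([5*x1**2 + x1*x2**2 + np.sin(2*x2)**2,
                     np.exp(2*x1 - x2) + 4*x2])

def J(x):
    x1, x2 = x
    return np.array([[10*x1 + x2**2, 2*x1*x2 + 4*np.sin(2*x2)*np.cos(2*x2)],
                     [2*np.exp(2*x1 - x2), -np.exp(2*x1 - x2) + 4]])

Y = np.array([2.0, 3.0])
X = np.array([1.0, 1.0])                          # X_0
for _ in range(50):
    if np.sum(np.abs(F(X) - Y)) < 1e-3:           # stop once the residual < E
        break
    X = X - np.linalg.solve(J(X), F(X) - Y)       # solve J(X_k) C = F(X_k) - Y
# X is now approximately [0.567297, -0.309442]
```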
=== Complex functions ===
When dealing with complex functions, Newton's method can be directly applied to find their zeroes. Each zero has a basin of attraction in the complex plane, the set of all starting values that cause the method to converge to that particular zero. These sets can be mapped as in the image shown. For many complex functions, the boundaries of the basins of attraction are fractals.
In some cases there are regions in the complex plane which are not in any of these basins of attraction, meaning the iterates do not converge. For example, if one uses a real initial condition to seek a root of x² + 1, all subsequent iterates will be real numbers and so the iterations cannot converge to either root, since both roots are non-real. In this case almost all real initial conditions lead to chaotic behavior, while some initial conditions iterate either to infinity or to repeating cycles of any finite length.
Curt McMullen has shown that for any possible purely iterative algorithm similar to Newton's method, the algorithm will diverge on some open regions of the complex plane when applied to some polynomial of degree 4 or higher. However, McMullen gave a generally convergent algorithm for polynomials of degree 3. Also, for any polynomial, Hubbard, Schleicher, and Sutherland gave a method for selecting a set of initial points such that Newton's method will certainly converge at one of them at least.
=== In a Banach space ===
Another generalization is Newton's method to find a root of a functional F defined in a Banach space. In this case the formulation is
{\displaystyle X_{n+1}=X_{n}-{\bigl (}F'(X_{n}){\bigr )}^{-1}F(X_{n}),\,}
where F′(Xn) is the Fréchet derivative computed at Xn. One needs the Fréchet derivative to be boundedly invertible at each Xn in order for the method to be applicable. A condition for existence of and convergence to a root is given by the Newton–Kantorovich theorem.
==== Nash–Moser iteration ====
In the 1950s, John Nash developed a version of the Newton's method to apply to the problem of constructing isometric embeddings of general Riemannian manifolds in Euclidean space. The loss of derivatives problem, present in this context, made the standard Newton iteration inapplicable, since it could not be continued indefinitely (much less converge). Nash's solution involved the introduction of smoothing operators into the iteration. He was able to prove the convergence of his smoothed Newton method, for the purpose of proving an implicit function theorem for isometric embeddings. In the 1960s, Jürgen Moser showed that Nash's methods were flexible enough to apply to problems beyond isometric embedding, particularly in celestial mechanics. Since then, a number of mathematicians, including Mikhael Gromov and Richard Hamilton, have found generalized abstract versions of the Nash–Moser theory. In Hamilton's formulation, the Nash–Moser theorem forms a generalization of the Banach space Newton method which takes place in certain Fréchet spaces.
== Modifications ==
=== Quasi-Newton methods ===
When the Jacobian is unavailable or too expensive to compute at every iteration, a quasi-Newton method can be used.
=== Chebyshev's third-order method ===
Since higher-order Taylor expansions offer more accurate local approximations of a function f, it is reasonable to ask why Newton’s method relies only on a second-order Taylor approximation. In the 19th century, Russian mathematician Pafnuty Chebyshev explored this idea by developing a variant of Newton’s method that used cubic approximations.
=== Over p-adic numbers ===
In p-adic analysis, the standard method to show a polynomial equation in one variable has a p-adic root is Hensel's lemma, which uses the recursion from Newton's method on the p-adic numbers. Because of the more stable behavior of addition and multiplication in the p-adic numbers compared to the real numbers (specifically, the unit ball in the p-adics is a ring), convergence in Hensel's lemma can be guaranteed under much simpler hypotheses than in the classical Newton's method on the real line.
=== q-analog ===
Newton's method can be generalized with the q-analog of the usual derivative.
=== Modified Newton methods ===
==== Maehly's procedure ====
A nonlinear equation has multiple solutions in general. But if the initial value is not appropriate, Newton's method may not converge to the desired solution or may converge to the same solution found earlier. When we have already found N solutions of {\displaystyle f(x)=0}, then the next root can be found by applying Newton's method to the next equation:
{\displaystyle F(x)={\frac {f(x)}{\prod _{i=1}^{N}(x-x_{i})}}=0.}
This method is applied to obtain zeros of the Bessel function of the second kind.
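A sketch of the deflation step (the cubic test polynomial is an illustrative choice; Maehly's correction avoids forming F explicitly by using F′/F = f′/f − Σ 1/(x − xᵢ)):

```python
def maehly_newton(f, fprime, x0, known_roots, tol=1e-12, max_iter=100):
    # Newton's method applied to F(x) = f(x) / prod(x - x_i):
    # the deflation term steers the iteration away from roots already found.
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        denom = fprime(x) - fx * sum(1.0 / (x - r) for r in known_roots)
        step = fx / denom
        x -= step
        if abs(step) < tol:
            break
    return x

f  = lambda x: x**3 - 6*x**2 + 11*x - 6     # roots at 1, 2, 3
fp = lambda x: 3*x**2 - 12*x + 11
roots = []
for _ in range(3):
    roots.append(maehly_newton(f, fp, 0.0, roots))   # finds a new root each pass
```

Restarting from the same x₀ = 0 each time still yields three distinct roots, because the deflation term repels the iteration from the roots already in the list.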
==== Hirano's modified Newton method ====
Hirano's modified Newton method is a modification conserving the convergence of Newton method and avoiding unstableness. It is developed to solve complex polynomials.
==== Interval Newton's method ====
Combining Newton's method with interval arithmetic is very useful in some contexts. This provides a stopping criterion that is more reliable than the usual ones (which are a small value of the function or a small variation of the variable between consecutive iterations). Also, this may detect cases where Newton's method converges theoretically but diverges numerically because of an insufficient floating-point precision (this is typically the case for polynomials of large degree, where a very small change of the variable may change dramatically the value of the function; see Wilkinson's polynomial).
Consider f ∈ C1(X), where X is a real interval, and suppose that we have an interval extension F′ of f′, meaning that F′ takes as input an interval Y ⊆ X and outputs an interval F′(Y) such that:
{\displaystyle {\begin{aligned}F'([y,y])&=\{f'(y)\}\\[5pt]F'(Y)&\supseteq \{f'(y)\mid y\in Y\}.\end{aligned}}}
We also assume that 0 ∉ F′(X), so in particular f has at most one root in X.
We then define the interval Newton operator by:
{\displaystyle N(Y)=m-{\frac {f(m)}{F'(Y)}}=\left\{\left.m-{\frac {f(m)}{z}}~\right|~z\in F'(Y)\right\}}
where m ∈ Y. Note that the hypothesis on F′ implies that N(Y) is well defined and is an interval (see interval arithmetic for further details on interval operations). This naturally leads to the following sequence:
{\displaystyle {\begin{aligned}X_{0}&=X\\X_{k+1}&=N(X_{k})\cap X_{k}.\end{aligned}}}
The mean value theorem ensures that if there is a root of f in Xk, then it is also in Xk + 1. Moreover, the hypothesis on F′ ensures that Xk + 1 is at most half the size of Xk when m is the midpoint of Y, so this sequence converges towards [x*, x*], where x* is the root of f in X.
If F′(X) strictly contains 0, the use of extended interval division produces a union of two intervals for N(X); multiple roots are therefore automatically separated and bounded.
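A minimal numeric sketch for f(x) = x² − 2 on X = [1, 2] (the naive pair-of-floats interval arithmetic here ignores outward rounding, which a real interval library would handle):

```python
def interval_newton(lo, hi, steps=60, tol=1e-12):
    f = lambda x: x * x - 2.0
    for _ in range(steps):
        m = 0.5 * (lo + hi)
        fm = f(m)
        dlo, dhi = 2.0 * lo, 2.0 * hi            # F'(Y) = [2*lo, 2*hi]; 0 not inside
        q_lo, q_hi = sorted((fm / dlo, fm / dhi))
        n_lo, n_hi = m - q_hi, m - q_lo          # N(Y) = m - f(m) / F'(Y)
        lo, hi = max(lo, n_lo), min(hi, n_hi)    # X_{k+1} = N(X_k) intersect X_k
        if hi - lo < tol:
            break
    return lo, hi

lo, hi = interval_newton(1.0, 2.0)               # shrinks onto sqrt(2)
```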
== Applications ==
=== Minimization and maximization problems ===
Newton's method can be used to find a minimum or maximum of a function f(x). The derivative is zero at a minimum or maximum, so local minima and maxima can be found by applying Newton's method to the derivative. The iteration becomes:
{\displaystyle x_{n+1}=x_{n}-{\frac {f'(x_{n})}{f''(x_{n})}}.}
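A sketch for the quartic f(x) = x⁴ − 2x² (our illustrative choice), which has minima at x = ±1:

```python
fp  = lambda x: 4*x**3 - 4*x        # f'(x)
fpp = lambda x: 12*x**2 - 4         # f''(x)

x = 1.5                             # starting point
for _ in range(100):
    step = fp(x) / fpp(x)           # Newton step applied to the derivative
    x -= step
    if abs(step) < 1e-12:
        break
# x ~ 1, and f''(1) = 8 > 0 confirms a local minimum rather than a maximum
```

Checking the sign of f″ at the limit is essential, since the same iteration converges just as readily to maxima and saddle points of f.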
=== Multiplicative inverses of numbers and power series ===
An important application is Newton–Raphson division, which can be used to quickly find the reciprocal of a number a, using only multiplication and subtraction, that is to say the number x such that 1/x = a. We can rephrase that as finding the zero of f(x) = 1/x − a. We have f′(x) = −1/x2.
Newton's iteration is
{\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}=x_{n}+{\frac {{\frac {1}{x_{n}}}-a}{\frac {1}{x_{n}^{2}}}}=x_{n}(2-ax_{n}).}
Therefore, Newton's iteration needs only two multiplications and one subtraction.
This method is also very efficient to compute the multiplicative inverse of a power series.
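A sketch of the multiplication-only iteration (the starting guess must satisfy 0 < x₀ < 2/a for convergence):

```python
def reciprocal(a, x0, iters=20):
    # x_{n+1} = x_n (2 - a x_n): two multiplications and one subtraction per step
    x = x0
    for _ in range(iters):
        x = x * (2.0 - a * x)
    return x

inv7 = reciprocal(7.0, 0.1)   # converges quadratically toward 1/7
```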
=== Solving transcendental equations ===
Many transcendental equations can be solved up to an arbitrary precision by using Newton's method. For example, finding the cumulative probability density function, such as a Normal distribution to fit a known probability generally involves integral functions with no known means to solve in closed form. However, computing the derivatives needed to solve them numerically with Newton's method is generally known, making numerical solutions possible. For an example, see the numerical solution to the inverse Normal cumulative distribution.
=== Numerical verification for solutions of nonlinear equations ===
A numerical verification for solutions of nonlinear equations has been established by using Newton's method multiple times and forming a set of solution candidates.
== Code ==
The following is an example of a possible implementation of Newton's method in the Python (version 3.x) programming language for finding a root of a function f which has derivative f_prime.
The initial guess will be x0 = 1 and the function will be f(x) = x² − 2 so that f′(x) = 2x.
Each new iteration of Newton's method will be denoted by x1. We will check during the computation whether the denominator (yprime) becomes too small (smaller than epsilon), which would be the case if f′(xn) ≈ 0, since otherwise a large amount of error could be introduced.
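The code block itself did not survive in this copy; the following is a sketch consistent with the description above:

```python
def f(x):
    return x**2 - 2                  # f(x) = x^2 - 2, with a root at sqrt(2)

def f_prime(x):
    return 2 * x                     # f'(x) = 2x

def newtons_method(x0, f, f_prime, tolerance=1e-10, epsilon=1e-14, max_iterations=50):
    """Return a root of f near x0, or None if the iteration fails."""
    for _ in range(max_iterations):
        y = f(x0)
        yprime = f_prime(x0)
        if abs(yprime) < epsilon:    # denominator too small: give up
            return None
        x1 = x0 - y / yprime         # Newton's computation
        if abs(x1 - x0) <= tolerance:
            return x1                # within the desired tolerance
        x0 = x1
    return None                      # did not converge

root = newtons_method(1.0, f, f_prime)   # ~1.414213...
```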
== See also ==
== Notes ==
== References ==
Gil, A.; Segura, J.; Temme, N. M. (2007). Numerical methods for special functions. Society for Industrial and Applied Mathematics. ISBN 978-0-89871-634-4.
Süli, Endre; Mayers, David (2003). An Introduction to Numerical Analysis. Cambridge University Press. ISBN 0-521-00794-1.
== Further reading ==
Kendall E. Atkinson: An Introduction to Numerical Analysis, John Wiley & Sons Inc., ISBN 0-471-62489-6 (1989).
Tjalling J. Ypma: "Historical development of the Newton–Raphson method", SIAM Review, vol.37, no.4, (1995), pp.531–551. doi:10.1137/1037125.
Bonnans, J. Frédéric; Gilbert, J. Charles; Lemaréchal, Claude; Sagastizábal, Claudia A. (2006). Numerical optimization: Theoretical and practical aspects. Universitext (Second revised ed. of translation of 1997 French ed.). Berlin: Springer-Verlag. pp. xiv+490. doi:10.1007/978-3-540-35447-5. ISBN 3-540-35445-X. MR 2265882.
P. Deuflhard: Newton Methods for Nonlinear Problems: Affine Invariance and Adaptive Algorithms, Springer Berlin (Series in Computational Mathematics, Vol. 35) (2004). ISBN 3-540-21099-7.
C. T. Kelley: Solving Nonlinear Equations with Newton's Method, SIAM (Fundamentals of Algorithms, 1) (2003). ISBN 0-89871-546-6.
J. M. Ortega, and W. C. Rheinboldt: Iterative Solution of Nonlinear Equations in Several Variables, SIAM (Classics in Applied Mathematics) (2000). ISBN 0-89871-461-3.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Chapter 9. Root Finding and Nonlinear Sets of Equations Importance Sampling". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge Univ. Press. ISBN 978-0-521-88068-8.. See especially Sections 9.4, 9.6, and 9.7.
Avriel, Mordecai (1976). Nonlinear Programming: Analysis and Methods. Prentice Hall. pp. 216–221. ISBN 0-13-623603-0.
== External links ==
"Newton method", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weisstein, Eric W. "Newton's Method". MathWorld.
Newton's method, Citizendium.
Mathews, J., The Accelerated and Modified Newton Methods, Course notes.
Wu, X., Roots of Equations, Course notes.
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice.
Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than being fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than vary the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties. The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time.
UDDs were developed in the 1940s by several research groups independently. The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties, and new and better estimation methods.
UDDs are still used extensively in the two applications for which they were originally developed: psychophysics where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures, and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research. They are also considered a viable choice for Phase I clinical trials.
== Mathematical description ==
=== Definition ===
Let {\displaystyle n} be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables {\displaystyle X_{1},\ldots ,X_{n}}, are chosen from a discrete, finite set of {\displaystyle M} increasing dose levels {\displaystyle {\mathcal {X}}=\left\{d_{1},\ldots ,d_{M}:\ d_{1}<\cdots <d_{M}\right\}.}
Furthermore, if {\displaystyle X_{i}=d_{m}}, then {\displaystyle X_{i+1}\in \{d_{m-1},d_{m},d_{m+1}\},} according to simple constant rules based on recent responses. The next subject must be treated one level up, one level down, or at the same level as the current subject. The responses themselves are denoted {\displaystyle Y_{1},\ldots ,Y_{n}\in \left\{0,1\right\};} hereafter the "1" responses are positive and "0" negative. The repeated application of the same rules (known as dose-transition rules) over a finite set of dose levels turns {\displaystyle X_{1},\ldots ,X_{n}} into a random walk over {\displaystyle {\mathcal {X}}}. Different dose-transition rules produce different UDD "flavors", such as the three shown in the figure above.
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, {\displaystyle x}, is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing {\displaystyle x}. The goal of dose-finding experiments is to estimate the dose {\displaystyle x} (on a continuous scale) that would trigger positive responses at a pre-specified target rate {\displaystyle \Gamma =P\left\{Y=1\mid X=x\right\},\ \ \Gamma \in (0,1)}; often known as the "target dose". This problem can also be expressed as estimation of the quantile {\displaystyle F^{-1}(\Gamma )} of a cumulative distribution function describing the dose-toxicity curve {\displaystyle F(x)}. The density function {\displaystyle f(x)} associated with {\displaystyle F(x)} is interpretable as the distribution of response thresholds of the population under study.
=== Transition probability matrix ===
Given that a subject receives dose {\displaystyle d_{m}}, denote the probability that the next subject receives dose {\displaystyle d_{m-1},d_{m}}, or {\displaystyle d_{m+1}}, as {\displaystyle p_{m,m-1},p_{mm}} or {\displaystyle p_{m,m+1}}, respectively. These transition probabilities obey the constraints {\displaystyle p_{m,m-1}+p_{mm}+p_{m,m+1}=1} and the boundary conditions {\displaystyle p_{1,0}=p_{M,M+1}=0}.
Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of {\displaystyle F(x)}. Assume that transition probabilities are fixed in time, depending only upon the current allocation and its outcome, i.e., upon {\displaystyle \left(X_{i},Y_{i}\right)}, and through them upon {\displaystyle F(x)} (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) {\displaystyle \mathbf {P} }:
{\displaystyle {\bf {{P}=\left({\begin{array}{cccccc}p_{11}&p_{12}&0&\cdots &\cdots &0\\p_{21}&p_{22}&p_{23}&0&\ddots &\vdots \\0&\ddots &\ddots &\ddots &\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &\ddots &0\\\vdots &\ddots &0&p_{M-1,M-2}&p_{M-1,M-1}&p_{M-1,M}\\0&\cdots &\cdots &0&p_{M,M-1}&p_{MM}\\\end{array}}\right).}}}
=== Balance point ===
Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose {\displaystyle x^{*}} that can be calculated from the transition rules, when those are expressed as a function of {\displaystyle F(x)}. This dose has often been confused with the experiment's formal target {\displaystyle F^{-1}(\Gamma )}, and the two are often identical, but they do not have to be. The target is the dose that the experiment is tasked with estimating, while {\displaystyle x^{*}}, known as the "balance point", is approximately the dose around which the UDD's random walk revolves.
=== Stationary distribution of dose allocations ===
Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, {\displaystyle \pi }, once the effect of the manually-chosen starting dose wears off. This means that long-term visit frequencies to the various doses will approximate a steady state described by {\displaystyle \pi }. According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate. Numerical studies suggest that it would typically take between {\displaystyle 2/M} and {\displaystyle 4/M} subjects for the effect to wear off nearly completely. {\displaystyle \pi } is also the asymptotic distribution of cumulative dose allocations.
UDDs' central tendencies ensure that long-term, the most frequently visited dose (i.e., the mode of {\displaystyle \pi }) will be one of the two doses closest to the balance point {\displaystyle x^{*}}. If {\displaystyle x^{*}} is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to {\displaystyle x^{*}} in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely.
== Common UDDs ==
=== Original ("simple" or "classical") UDD ===
The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
{\displaystyle {\begin{array}{rl}p_{m,m+1}&=P\{Y_{i}=0|X_{i}=d_{m}\}=1-F(d_{m});\\p_{m,m-1}&=P\{Y_{i}=1|X_{i}=d_{m}\}=F(d_{m}).\end{array}}}
We use the original UDD as an example for calculating the balance point {\displaystyle x^{*}}. The design's 'up' and 'down' functions are {\displaystyle p(x)=1-F(x),q(x)=F(x).} We equate them to find {\displaystyle F^{*}}: {\displaystyle 1-F^{*}=F^{*}\ \longrightarrow \ F^{*}=0.5.} The "classical" UDD is designed to find the median threshold. This is a case where {\displaystyle F^{*}=\Gamma .}
The "classical" UDD can be seen as a special case of each of the more versatile designs described below.
=== Durham and Flournoy's biased coin design ===
This UDD shifts the balance point by adding the option of treating the next subject at the same dose rather than moving only up or down. Whether to stay is determined by a random toss of a metaphorical "coin" with probability {\displaystyle b=P\{{\textrm {heads}}\}.} This biased-coin design (BCD) has two "flavors", one for {\displaystyle F^{*}>0.5} and one for {\displaystyle F^{*}<0.5,} whose rules are shown below:
{\displaystyle X_{i+1}={\begin{array}{ll}d_{m+1}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'heads'}};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'tails'}}.\\\end{array}}}
The heads probability {\displaystyle b} can take any value in {\displaystyle [0,1]}. The balance point is
{\displaystyle {\begin{array}{rcl}b\left(1-F^{*}\right)&=&F^{*}\\F^{*}&=&{\frac {b}{1+b}}\in [0,0.5].\end{array}}}
The BCD balance point can be made identical to a target rate {\displaystyle \Gamma } by setting the heads probability to {\displaystyle b=\Gamma /(1-\Gamma )}. For example, for {\displaystyle \Gamma =0.3} set {\displaystyle b=3/7}. Setting {\displaystyle b=1} makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD.
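A Monte Carlo sketch of the BCD targeting Γ = 0.3 (the ten dose levels and the logistic threshold model are our illustrative assumptions):

```python
import math, random

random.seed(0)
b = 3.0 / 7.0                                     # heads probability for Gamma = 0.3
M = 10
F = lambda m: 1.0 / (1.0 + math.exp(-(m - 6.0)))  # F(d_m); F^{-1}(0.3) ~ 5.15

m, visits = 1, [0] * (M + 1)
for _ in range(100_000):
    visits[m] += 1
    if random.random() < F(m):                    # positive response: always step down
        m = max(1, m - 1)
    elif random.random() < b:                     # negative response: step up on 'heads'
        m = min(M, m + 1)
mode = max(range(1, M + 1), key=lambda j: visits[j])
# allocations concentrate around the balance point F* = b/(1+b) = 0.3
```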
=== Group (cohort) UDDs ===
Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before determining each individual outcome. It may be preferable, then, to treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply to cohorts of fixed size {\displaystyle s} rather than to individuals. {\displaystyle X_{i}} becomes the dose given to cohort {\displaystyle i}, and {\displaystyle Y_{i}} is the number of positive responses in the {\displaystyle i}-th cohort, rather than a binary outcome. Given that the {\displaystyle i}-th cohort is treated at {\displaystyle X_{i}=d_{m}} on the interior of {\displaystyle {\mathcal {X}}}, the {\displaystyle i+1}-th cohort is assigned to
{\displaystyle X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}\leq l;\\d_{m-1}&{\textrm {if}}\ \ Y_{i}\geq u;\\d_{m}&{\textrm {if}}\ \ l<Y_{i}<u.\end{cases}}}
{\displaystyle Y_{i}} follows a binomial distribution conditional on {\displaystyle X_{i}}, with parameters {\displaystyle s} and {\displaystyle F(X_{i})}. The up and down probabilities are the binomial distribution's tails, and the stay probability its center (it is zero if {\displaystyle u=l+1}). A specific choice of parameters can be abbreviated as GUD{\displaystyle _{(s,l,u)}.}
Nominally, group UDDs generate {\displaystyle s}-th-order random walks, since the {\displaystyle s} most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies:
Symmetric designs with {\displaystyle l+u=s} (e.g., GUD{\displaystyle _{(2,0,2)}}) target the median.
The family GUD{\displaystyle _{(s,0,1)},} encountered in toxicity studies, allows escalation only with zero positive responses, and de-escalates upon any positive response. The escalation probability at {\displaystyle x} is {\displaystyle \left(1-F(x)\right)^{s},} and since this design does not allow for remaining at the same dose, at the balance point it will be exactly {\displaystyle 1/2}. Therefore, {\displaystyle F^{*}=1-\left({\frac {1}{2}}\right)^{1/s}.} The choices {\displaystyle s=2,3,4} are associated with {\displaystyle F^{*}\approx 0.293,0.206} and {\displaystyle 0.159}, respectively. The mirror-image family GUD{\displaystyle _{(s,s-1,s)}} has its balance points at one minus these probabilities.
For general group UDDs, the balance point can be calculated only numerically, by finding the dose {\displaystyle x^{*}} with toxicity rate {\displaystyle F^{*}} such that
{\displaystyle \sum _{r=u}^{s}\left({\begin{array}{c}s\\r\\\end{array}}\right)\left(F^{*}\right)^{r}(1-F^{*})^{s-r}=\sum _{t=0}^{l}\left({\begin{array}{c}s\\t\\\end{array}}\right)\left(F^{*}\right)^{t}(1-F^{*})^{s-t}.}
Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for {\displaystyle F^{*}}.
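Bisection also suffices, since the difference of the two binomial tails is monotone in F*; a sketch (the GUD parameter choices below are illustrative):

```python
from math import comb

def balance_point(s, l, u, tol=1e-10):
    # Solve P(Y >= u) = P(Y <= l) for F*, with Y ~ Binomial(s, F*)
    def g(F):
        down = sum(comb(s, r) * F**r * (1 - F)**(s - r) for r in range(u, s + 1))
        up = sum(comb(s, t) * F**t * (1 - F)**(s - t) for t in range(l + 1))
        return down - up                       # increases in F from -1 to +1
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0: lo = mid
        else: hi = mid
    return 0.5 * (lo + hi)

# GUD(s,0,1) recovers the closed form 1 - (1/2)^(1/s);
# symmetric designs with l + u = s give F* = 0.5
```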
=== k-in-a-row (or "transformed" or "geometric") UDD ===
This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963, and proliferated by him and colleagues shortly thereafter to psychophysics, where it remains one of the standard methods to find sensory thresholds. Wetherill called it "transformed" UDD; Misrak Gezmu, who was the first to analyze its random-walk properties, called it "geometric" UDD in the 1990s; and in the 2000s the more straightforward name "{\displaystyle k}-in-a-row" UDD was adopted. The design's rules are deceptively simple:
{\displaystyle X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i-k+1}=\cdots =Y_{i}=0,\ \ {\textrm {all}}\ {\textrm {observed}}\ {\textrm {at}}\ \ d_{m};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {otherwise}},\end{cases}}}
Every dose escalation requires {\displaystyle k} non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD{\displaystyle _{(s,0,1)}} described above, and indeed shares the same balance point. The difference is that {\displaystyle k}-in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending.
The method used in sensory studies is actually the mirror-image of the one defined above, with {\displaystyle k} successive responses required for a de-escalation and only one non-response for escalation, yielding {\displaystyle F^{*}\approx 0.707,0.794,0.841,\ldots } for {\displaystyle k=2,3,4,\ldots }.
{\displaystyle k}-in-a-row generates a {\displaystyle k}-th order random walk, because knowledge of the last {\displaystyle k} responses might be needed. It can be represented as a first-order chain with {\displaystyle Mk} states, or as a Markov chain with {\displaystyle M} levels, each having {\displaystyle k} internal states labeled {\displaystyle 0} to {\displaystyle k-1}. The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of the level {\displaystyle m} are all assigned the same dose {\displaystyle d_{m}}. Either way, the TPM is {\displaystyle Mk\times Mk} (or more precisely, {\displaystyle \left[(M-1)k+1\right]\times \left[(M-1)k+1\right]}, because the internal counter is meaningless at the highest dose), and it is not tridiagonal.
Here is the expanded {\displaystyle k}-in-a-row TPM with {\displaystyle k=2} and {\displaystyle M=5}, using the abbreviation {\displaystyle F_{m}\equiv F\left(d_{m}\right).} Each level's internal states are adjacent to each other.
{\displaystyle {\begin{bmatrix}F_{1}&1-F_{1}&0&0&0&0&0&0&0\\F_{1}&0&1-F_{1}&0&0&0&0&0&0\\F_{2}&0&0&1-F_{2}&0&0&0&0&0\\F_{2}&0&0&0&1-F_{2}&0&0&0&0\\0&0&F_{3}&0&0&1-F_{3}&0&0&0\\0&0&F_{3}&0&0&0&1-F_{3}&0&0\\0&0&0&0&F_{4}&0&0&1-F_{4}&0\\0&0&0&0&F_{4}&0&0&0&1-F_{4}\\0&0&0&0&0&0&F_{5}&0&1-F_{5}\\\end{bmatrix}}.}
$k$-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, $k$ is chosen to aim close to the target rate, e.g., $k=2$ for studies targeting the 30th percentile, and $k=3$ for studies targeting the 20th percentile.
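The transition structure described above is straightforward to compute directly. The following Python sketch builds the expanded $k$-in-a-row TPM for arbitrary per-dose toxicity rates; the function name and the state-indexing convention are illustrative assumptions, not taken from any published implementation.

```python
import numpy as np

def kinarow_tpm(F, k):
    """Expanded transition probability matrix (TPM) for the k-in-a-row
    up-and-down design: escalate after k consecutive non-toxicities,
    de-escalate upon any toxicity.

    F : toxicity rates F(d_1), ..., F(d_M) at the M dose levels.
    Returns the [(M-1)k+1] x [(M-1)k+1] matrix; each state is a pair
    (level m, internal counter j), with a single state at the top dose.
    """
    M = len(F)
    n = (M - 1) * k + 1                 # the top dose needs no internal counter

    def idx(m, j):                      # state index for level m, counter j
        return m * k + j

    P = np.zeros((n, n))
    for m in range(M - 1):
        for j in range(k):
            s = idx(m, j)
            # toxicity: de-escalate and reset the counter
            # (at the lowest level, stay put with the counter reset)
            P[s, idx(max(m - 1, 0), 0)] = F[m]
            if j < k - 1:
                P[s, idx(m, j + 1)] = 1 - F[m]   # count one more non-toxicity
            else:
                P[s, idx(m + 1, 0)] = 1 - F[m]   # k-th in a row: escalate
    top = n - 1
    P[top, idx(M - 2, 0)] = F[-1]       # toxicity at the top dose: de-escalate
    P[top, top] = 1 - F[-1]             # non-toxicity: stay at the top dose
    return P
```

With $k=2$ and $M=5$ this reproduces the $9\times 9$ matrix shown above, row by row.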
== Estimating the target dose ==
Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from $\pi$, since the latter is centered roughly around $x^{*}$.
The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and only includes reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average. In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer from both multiple biases (although there is some inadvertent cancelling out of biases), and increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice.
By contrast, regression estimators attempt to approximate the curve $y=F(x)$ describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses $d_{m}$ on the horizontal axis, and the observed toxicity frequencies,
$$\hat{F}_{m}=\frac{\sum_{i=1}^{n}Y_{i}I\left[X_{i}=d_{m}\right]}{\sum_{i=1}^{n}I\left[X_{i}=d_{m}\right]},\quad m=1,\ldots,M,$$
on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses $y=\Gamma$.
Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data. More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general. Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust. The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications.
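As an illustration of the isotonic approach, the sketch below computes observed toxicity frequencies, applies a minimal pool-adjacent-violators pass, and linearly interpolates to the dose where the fitted curve crosses the target rate $\Gamma$. This is a bare-bones stand-in for the idea, not the CIR algorithm of the "cir" package; all function names and data are hypothetical.

```python
import numpy as np

def pava(y, w):
    """Weighted pool-adjacent-violators: non-decreasing fit to y."""
    # Each block holds [weighted mean, total weight, number of points]
    blocks = [[float(yi), float(wi), 1] for yi, wi in zip(y, w)]
    i = 0
    while i < len(blocks) - 1:
        if blocks[i][0] > blocks[i + 1][0] + 1e-12:   # violator: merge blocks
            m2, w2, c2 = blocks.pop(i + 1)
            m1, w1, c1 = blocks[i]
            blocks[i] = [(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, c1 + c2]
            i = max(i - 1, 0)                          # re-check backwards
        else:
            i += 1
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return np.array(fit)

def target_dose(doses, tox, n_at_dose, gamma):
    """Interpolated isotonic estimate of the dose where F(x) = gamma.

    tox / n_at_dose are per-dose toxicity counts and sample sizes.
    Note: np.interp assumes increasing sample points; flat stretches
    in the isotonic fit are resolved at a boundary of the plateau.
    """
    Fhat = pava(np.asarray(tox) / np.asarray(n_at_dose), n_at_dose)
    return float(np.interp(gamma, Fhat, doses))
```

For example, with doses 1..4, toxicity counts (0, 2, 6, 9) out of 10 each, and $\Gamma = 0.3$, the estimate falls between the second and third doses.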
== References ==
A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations. When evaluated on the actual data points, it becomes a function solely of the model parameters.
In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the likelihood's Hessian matrix at the maximum) gives an indication of the estimate's precision.
In contrast, in Bayesian statistics, the estimate of interest is the converse of the likelihood, the so-called posterior probability of the parameter given the observed data, which is calculated via Bayes' rule.
== Definition ==
The likelihood function, parameterized by a (possibly multivariate) parameter $\theta$, is usually defined differently for discrete and continuous probability distributions (a more general definition is discussed below). Given a probability density or mass function $x\mapsto f(x\mid\theta)$, where $x$ is a realization of the random variable $X$, the likelihood function is $\theta\mapsto f(x\mid\theta)$, often written $\mathcal{L}(\theta\mid x)$.
In other words, when $f(x\mid\theta)$ is viewed as a function of $x$ with $\theta$ fixed, it is a probability density function, and when viewed as a function of $\theta$ with $x$ fixed, it is a likelihood function. In the frequentist paradigm, the notation $f(x\mid\theta)$ is often avoided and instead $f(x;\theta)$ or $f(x,\theta)$ are used to indicate that $\theta$ is regarded as a fixed unknown quantity rather than as a random variable being conditioned on.
The likelihood function does not specify the probability that $\theta$ is the truth, given the observed sample $X=x$. Such an interpretation is a common error, with potentially disastrous consequences (see prosecutor's fallacy).
=== Discrete probability distribution ===
Let $X$ be a discrete random variable with probability mass function $p$ depending on a parameter $\theta$. Then the function
$$\mathcal{L}(\theta\mid x)=p_{\theta}(x)=P_{\theta}(X=x),$$
considered as a function of $\theta$, is the likelihood function, given the outcome $x$ of the random variable $X$. Sometimes the probability of "the value $x$ of $X$ for the parameter value $\theta$" is written as $P(X=x\mid\theta)$ or $P(X=x;\theta)$. The likelihood is the probability that a particular outcome $x$ is observed when the true value of the parameter is $\theta$, equivalent to the probability mass on $x$; it is not a probability density over the parameter $\theta$. The likelihood, $\mathcal{L}(\theta\mid x)$, should not be confused with $P(\theta\mid x)$, which is the posterior probability of $\theta$ given the data $x$.
==== Example ====
Consider a simple statistical model of a coin flip: a single parameter $p_{\text{H}}$ that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. $p_{\text{H}}$ can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, $p_{\text{H}}=0.5$.
Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., the probability of observing HH is
$$P(\text{HH}\mid p_{\text{H}}=0.5)=0.5^{2}=0.25.$$
Equivalently, the likelihood of observing "HH" assuming $p_{\text{H}}=0.5$ is
$$\mathcal{L}(p_{\text{H}}=0.5\mid \text{HH})=0.25.$$
This is not the same as saying that $P(p_{\text{H}}=0.5\mid \text{HH})=0.25$, a conclusion which could only be reached via Bayes' theorem given knowledge about the marginal probabilities $P(p_{\text{H}}=0.5)$ and $P(\text{HH})$.
Now suppose that the coin is not a fair coin, but instead that $p_{\text{H}}=0.3$. Then the probability of two heads on two flips is
$$P(\text{HH}\mid p_{\text{H}}=0.3)=0.3^{2}=0.09.$$
Hence
$$\mathcal{L}(p_{\text{H}}=0.3\mid \text{HH})=0.09.$$
More generally, for each value of $p_{\text{H}}$, we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1. The integral of $\mathcal{L}$ over [0, 1] is 1/3; likelihoods need not integrate or sum to one over the parameter space.
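The coin example can be verified numerically. This short sketch (names hypothetical) evaluates $\mathcal{L}(p_{\text{H}}\mid\text{HH})=p_{\text{H}}^{2}$ and checks that its integral over [0, 1] is 1/3 rather than 1:

```python
import numpy as np

# Likelihood of observing "HH" as a function of p_H: L(p_H | HH) = p_H**2
def likelihood_hh(p):
    return p ** 2

# The two values computed in the text
print(likelihood_hh(0.5))   # 0.25
print(likelihood_hh(0.3))   # ~0.09

# The likelihood is not a density over p_H: it integrates to 1/3, not 1
grid = np.linspace(0.0, 1.0, 100001)
integral = np.trapz(likelihood_hh(grid), grid)
print(round(integral, 4))   # 0.3333
```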
=== Continuous probability distribution ===
Let $X$ be a random variable following an absolutely continuous probability distribution with density function $f$ (a function of $x$) which depends on a parameter $\theta$. Then the function
$$\mathcal{L}(\theta\mid x)=f_{\theta}(x),$$
considered as a function of $\theta$, is the likelihood function (of $\theta$, given the outcome $X=x$). Again, $\mathcal{L}$ is not a probability density or mass function over $\theta$, despite being a function of $\theta$ given the observation $X=x$.
==== Relationship between the likelihood and probability density functions ====
The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation $x_{j}$, the likelihood for the interval $[x_{j},x_{j}+h]$, where $h>0$ is a constant, is given by $\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h])$. Observe that
$$\operatorname*{arg\,max}_{\theta}\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h])=\operatorname*{arg\,max}_{\theta}\frac{1}{h}\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h]),$$
since $h$ is positive and constant. Because
$$\operatorname*{arg\,max}_{\theta}\frac{1}{h}\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h])=\operatorname*{arg\,max}_{\theta}\frac{1}{h}\Pr(x_{j}\leq x\leq x_{j}+h\mid\theta)=\operatorname*{arg\,max}_{\theta}\frac{1}{h}\int_{x_{j}}^{x_{j}+h}f(x\mid\theta)\,dx,$$
where $f(x\mid\theta)$ is the probability density function, it follows that
$$\operatorname*{arg\,max}_{\theta}\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h])=\operatorname*{arg\,max}_{\theta}\frac{1}{h}\int_{x_{j}}^{x_{j}+h}f(x\mid\theta)\,dx.$$
The first fundamental theorem of calculus provides that
$$\lim_{h\to 0^{+}}\frac{1}{h}\int_{x_{j}}^{x_{j}+h}f(x\mid\theta)\,dx=f(x_{j}\mid\theta).$$
Then
$$\begin{aligned}\operatorname*{arg\,max}_{\theta}\mathcal{L}(\theta\mid x_{j})&=\operatorname*{arg\,max}_{\theta}\left[\lim_{h\to 0^{+}}\mathcal{L}(\theta\mid x\in[x_{j},x_{j}+h])\right]\\&=\operatorname*{arg\,max}_{\theta}\left[\lim_{h\to 0^{+}}\frac{1}{h}\int_{x_{j}}^{x_{j}+h}f(x\mid\theta)\,dx\right]\\&=\operatorname*{arg\,max}_{\theta}f(x_{j}\mid\theta).\end{aligned}$$
Therefore,
$$\operatorname*{arg\,max}_{\theta}\mathcal{L}(\theta\mid x_{j})=\operatorname*{arg\,max}_{\theta}f(x_{j}\mid\theta),$$
and so maximizing the probability density at $x_{j}$ amounts to maximizing the likelihood of the specific observation $x_{j}$.
=== In general ===
In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is this density interpreted as a function of the parameter, rather than the random variable. Thus, we can construct a likelihood function for any distribution, whether discrete, continuous, a mixture, or otherwise. (Likelihoods are comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
The above discussion of the likelihood for discrete random variables uses the counting measure, under which the probability density at any outcome equals the probability of that outcome.
=== Likelihoods for mixed continuous–discrete distributions ===
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses $p_{k}(\theta)$ and a density $f(x\mid\theta)$, where the sum of all the $p$'s added to the integral of $f$ is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply
$$\mathcal{L}(\theta\mid x)=p_{k}(\theta),$$
where $k$ is the index of the discrete probability mass corresponding to observation $x$, because maximizing the probability mass (or probability) at $x$ amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation $x$, but not with the parameter $\theta$.
=== Regularity conditions ===
In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, it suffices that the likelihood function is continuous on a compact parameter space for the maximum likelihood estimator to exist. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values might be unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the k-dimensional parameter space $\Theta$, assumed to be an open connected subset of $\mathbb{R}^{k}$, there exists a unique maximum $\hat{\theta}\in\Theta$ if the matrix of second partials
$$\mathbf{H}(\theta)\equiv\left[\frac{\partial^{2}L}{\partial\theta_{i}\,\partial\theta_{j}}\right]_{i,j=1}^{k}$$
is negative definite for every $\theta\in\Theta$ at which the gradient $\nabla L\equiv\left[\frac{\partial L}{\partial\theta_{i}}\right]_{i=1}^{k}$ vanishes, and if the likelihood function approaches a constant on the boundary of the parameter space, $\partial\Theta$, i.e.,
$$\lim_{\theta\to\partial\Theta}L(\theta)=0,$$
which may include the points at infinity if $\Theta$ is unbounded. Mäkeläinen and co-authors prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem.
In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all $x$, and for all $\theta\in\Theta$,
$$\frac{\partial\log f}{\partial\theta_{r}},\quad\frac{\partial^{2}\log f}{\partial\theta_{r}\,\partial\theta_{s}},\quad\frac{\partial^{3}\log f}{\partial\theta_{r}\,\partial\theta_{s}\,\partial\theta_{t}}$$
exist for all $r,s,t=1,2,\ldots,k$ in order to ensure the existence of a Taylor expansion. Second, for almost all $x$ and for every $\theta\in\Theta$ it must be that
$$\left|\frac{\partial f}{\partial\theta_{r}}\right|<F_{r}(x),\quad\left|\frac{\partial^{2}f}{\partial\theta_{r}\,\partial\theta_{s}}\right|<F_{rs}(x),\quad\left|\frac{\partial^{3}f}{\partial\theta_{r}\,\partial\theta_{s}\,\partial\theta_{t}}\right|<H_{rst}(x),$$
where $H$ is such that $\int_{-\infty}^{\infty}H_{rst}(z)\,\mathrm{d}z\leq M<\infty$.
This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix,
$$\mathbf{I}(\theta)=\int_{-\infty}^{\infty}\frac{\partial\log f}{\partial\theta_{r}}\,\frac{\partial\log f}{\partial\theta_{s}}\,f\,\mathrm{d}z,$$
is positive definite and $\left|\mathbf{I}(\theta)\right|$ is finite. This ensures that the score has a finite variance.
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.
In Bayesian statistics, almost identical regularity conditions are imposed on the likelihood function in order to prove asymptotic normality of the posterior probability, and therefore to justify a Laplace approximation of the posterior in large samples.
== Likelihood ratio and relative likelihood ==
=== Likelihood ratio ===
A likelihood ratio is the ratio of any two specified likelihoods, frequently written as:
$$\Lambda(\theta_{1}:\theta_{2}\mid x)=\frac{\mathcal{L}(\theta_{1}\mid x)}{\mathcal{L}(\theta_{2}\mid x)}.$$
The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.
In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.
The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule states that the posterior odds of two alternatives, $A_{1}$ and $A_{2}$, given an event $B$, are the prior odds times the likelihood ratio. As an equation:
$$O(A_{1}:A_{2}\mid B)=O(A_{1}:A_{2})\cdot\Lambda(A_{1}:A_{2}\mid B).$$
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).
In evidence-based medicine, likelihood ratios are used in diagnostic testing to assess the value of performing a diagnostic test.
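As a quick numeric check of the odds form of Bayes' rule, consider a hypothetical diagnostic test; the prevalence, sensitivity, and false-positive numbers below are made up for illustration.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
# Hypothetical numbers: disease prevalence 1%, test sensitivity 90%,
# false-positive rate 5%.
prior_odds = 0.01 / 0.99
likelihood_ratio = 0.90 / 0.05            # P(B | A1) / P(B | A2) = 18.0
posterior_odds = prior_odds * likelihood_ratio

# Convert odds back to a probability
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))           # 0.154
```

Even with a fairly accurate test, a positive result only raises the probability of disease to about 15%, because the prior odds are so low.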
=== Relative likelihood function ===
Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter $\theta$ is $\hat{\theta}$. Relative plausibilities of other $\theta$ values may be found by comparing the likelihoods of those other values with the likelihood of $\hat{\theta}$. The relative likelihood of $\theta$ is defined to be
$$R(\theta)=\frac{\mathcal{L}(\theta\mid x)}{\mathcal{L}(\hat{\theta}\mid x)}.$$
Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator $\mathcal{L}(\hat{\theta})$. This corresponds to standardizing the likelihood to have a maximum of 1.
==== Likelihood region ====
A likelihood region is the set of all values of $\theta$ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for $\theta$ is defined to be
$$\left\{\theta:R(\theta)\geq\frac{p}{100}\right\}.$$
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e−2 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1).
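The correspondence between the $e^{-2}$ likelihood interval and an approximate 95% confidence interval can be checked numerically. The sketch below uses a hypothetical binomial sample and scans a grid for the region where the relative likelihood exceeds $e^{-2}\approx 0.1353$ (about a 13.5% likelihood region; the 14.65% figure in the text corresponds to the exact chi-squared cutoff).

```python
import numpy as np

# Hypothetical binomial data: x = 7 successes in n = 20 trials
x, n = 7, 20
p_hat = x / n                      # the MLE, 0.35

def rel_lik(p):
    # R(p) = L(p) / L(p_hat); the binomial coefficient cancels in the ratio
    return (p / p_hat) ** x * ((1 - p) / (1 - p_hat)) ** (n - x)

# Grid-scan for the region where R(p) >= exp(-2), i.e. where twice the
# log-likelihood drop is at most 4 (compare Wilks' theorem with 1 df)
grid = np.linspace(1e-6, 1 - 1e-6, 200001)
inside = grid[rel_lik(grid) >= np.exp(-2)]
lo, hi = inside[0], inside[-1]
print(round(lo, 3), round(hi, 3))  # roughly (0.17, 0.57)
```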
== Likelihoods that eliminate nuisance parameters ==
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph.
=== Profile likelihood ===
It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector $\theta$ that can be partitioned into $\theta=\left(\theta_{1}:\theta_{2}\right)$, and where a correspondence $\hat{\theta}_{2}=\hat{\theta}_{2}\left(\theta_{1}\right)$ can be determined explicitly, concentration reduces the computational burden of the original maximization problem.
For instance, in a linear regression with normally distributed errors, $\mathbf{y}=\mathbf{X}\beta+u$, the coefficient vector could be partitioned into $\beta=\left[\beta_{1}:\beta_{2}\right]$ (and consequently the design matrix $\mathbf{X}=\left[\mathbf{X}_{1}:\mathbf{X}_{2}\right]$). Maximizing with respect to $\beta_{2}$ yields an optimal value function $\beta_{2}(\beta_{1})=\left(\mathbf{X}_{2}^{\mathsf{T}}\mathbf{X}_{2}\right)^{-1}\mathbf{X}_{2}^{\mathsf{T}}\left(\mathbf{y}-\mathbf{X}_{1}\beta_{1}\right)$. Using this result, the maximum likelihood estimator for $\beta_{1}$ can then be derived as
$$\hat{\beta}_{1}=\left(\mathbf{X}_{1}^{\mathsf{T}}\left(\mathbf{I}-\mathbf{P}_{2}\right)\mathbf{X}_{1}\right)^{-1}\mathbf{X}_{1}^{\mathsf{T}}\left(\mathbf{I}-\mathbf{P}_{2}\right)\mathbf{y},$$
where $\mathbf{P}_{2}=\mathbf{X}_{2}\left(\mathbf{X}_{2}^{\mathsf{T}}\mathbf{X}_{2}\right)^{-1}\mathbf{X}_{2}^{\mathsf{T}}$ is the projection matrix of $\mathbf{X}_{2}$. This result is known as the Frisch–Waugh–Lovell theorem.
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter $\beta_{2}$ that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given $\beta_{1}$, the result of this procedure is also known as profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood.
=== Conditional likelihood ===
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
=== Marginal likelihood ===
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.
=== Partial likelihood ===
A partial likelihood is an adaption of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.
== Products of likelihoods ==
The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:
$$\Lambda(A\mid X_{1}\land X_{2})=\Lambda(A\mid X_{1})\cdot\Lambda(A\mid X_{2}).$$
This follows from the definition of independence in probability: the probabilities of two independent events happening, given a model, is the product of the probabilities.
This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.
== Log-likelihood ==
The log-likelihood function is the logarithm of the likelihood function, often denoted by a lowercase l or $\ell$, to contrast with the uppercase L or $\mathcal{L}$ for the likelihood. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function in maximum likelihood estimation, in particular since most common probability distributions (notably the exponential family) are only logarithmically concave, and concavity of the objective function plays a key role in the maximization.
Given the independence of each event, the overall log-likelihood of an intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probabilities of the individual events. Beyond its mathematical convenience, this adding of log-likelihoods has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated by maximum likelihood via the log-likelihood, each data point contributes by being added to the total log-likelihood. As the data can be viewed as evidence supporting the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
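The additivity of log-likelihood under independence can be illustrated with a Bernoulli sample (hypothetical data):

```python
import math

# Hypothetical i.i.d. coin-flip data: 1 = heads, 0 = tails
data = [1, 0, 1, 1, 0, 1, 1, 1]

def likelihood(p):
    # Product of per-observation likelihoods
    L = 1.0
    for y in data:
        L *= p if y == 1 else (1 - p)
    return L

def log_likelihood(p):
    # Sum of per-observation log-likelihoods
    return sum(math.log(p if y == 1 else 1 - p) for y in data)

# The log of the product equals the sum of the logs
p = 0.6
print(math.isclose(math.log(likelihood(p)), log_likelihood(p)))  # True

# Both are maximized at the sample mean, here 6/8 = 0.75
grid = [i / 1000 for i in range(1, 1000)]
p_star = max(grid, key=log_likelihood)
print(p_star)   # 0.75
```

For long samples the product of many small numbers underflows floating point, which is another practical reason to sum log-likelihoods instead.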
The logarithm of a likelihood ratio equals the difference of the log-likelihoods:
{\displaystyle \log {\frac {{\mathcal {L}}(A)}{{\mathcal {L}}(B)}}=\log {\mathcal {L}}(A)-\log {\mathcal {L}}(B)=\ell (A)-\ell (B).}
Just as the likelihood given no event is 1, the log-likelihood given no event is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model.
=== Graph ===
The graph of the log-likelihood is called the support curve (in the univariate case).
In the multivariate case, the concept generalizes into a support surface over the parameter space.
It has a relation to, but is distinct from, the support of a distribution.
The term was coined by A. W. F. Edwards in the context of statistical hypothesis testing, i.e. whether or not the data "support" one hypothesis (or parameter value) being tested more than any other.
The log-likelihood function being plotted is used in the computation of the score (the gradient of the log-likelihood) and Fisher information (the curvature of the log-likelihood). Thus, the graph has a direct interpretation in the context of maximum likelihood estimation and likelihood-ratio tests.
=== Likelihood equations ===
If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written
{\textstyle s_{n}(\theta )\equiv \nabla _{\theta }\ell _{n}(\theta )}
, exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, while the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than of their likelihood.
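As an illustrative sketch (an assumed unit-variance normal model with made-up data; names are ad hoc): for i.i.d. N(mu, 1) observations the score is s_n(mu) = sum_i (x_i - mu), and solving s_n(mu) = 0, here by bisection, recovers the sample mean as the maximum likelihood estimate.

```python
def score(mu, xs):
    # gradient of the log-likelihood w.r.t. mu for unit-variance normal data
    return sum(x - mu for x in xs)

def solve_score(xs, lo=-100.0, hi=100.0, tol=1e-12):
    # bisection on the score, which is monotone decreasing in mu
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if score(mid, xs) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [1.2, 0.7, 2.1, 1.5]
mu_hat = solve_score(xs)
# the root of the score is the sample mean
assert abs(mu_hat - sum(xs) / len(xs)) < 1e-9
```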
The equations defined by the stationary point of the score function serve as estimating equations for the maximum likelihood estimator.
{\displaystyle s_{n}(\theta )=\mathbf {0} }
In that sense, the maximum likelihood estimator is implicitly defined by the value at
{\textstyle \mathbf {0} }
of the inverse function
{\textstyle s_{n}^{-1}:\mathbb {E} ^{d}\to \Theta }
, where
{\textstyle \mathbb {E} ^{d}}
is the d-dimensional Euclidean space, and
{\textstyle \Theta }
is the parameter space. Using the inverse function theorem, it can be shown that
{\textstyle s_{n}^{-1}}
is well-defined in an open neighborhood about
{\textstyle \mathbf {0} }
with probability going to one, and that
{\textstyle {\hat {\theta }}_{n}=s_{n}^{-1}(\mathbf {0} )}
is a consistent estimate of
{\textstyle \theta }
. As a consequence there exists a sequence
{\textstyle \left\{{\hat {\theta }}_{n}\right\}}
such that
{\textstyle s_{n}({\hat {\theta }}_{n})=\mathbf {0} }
asymptotically almost surely, and
{\textstyle {\hat {\theta }}_{n}\xrightarrow {\text{p}} \theta _{0}}
. A similar result can be established using Rolle's theorem.
The second derivative evaluated at
{\textstyle {\hat {\theta }}}
, known as Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate.
=== Exponential families ===
The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contains products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions, writing
{\textstyle \langle -,-\rangle }
for the inner product):
{\displaystyle p(x\mid {\boldsymbol {\theta }})=h(x)\exp {\Big (}\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }}){\Big )}.}
Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum:
{\displaystyle \ell ({\boldsymbol {\theta }}\mid x)=\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }})+\log h(x).}
The
{\textstyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})}
and
{\textstyle h(x)}
each correspond to a change of coordinates, so in these coordinates the log-likelihood of an exponential family is given by the simple formula:
{\displaystyle \ell ({\boldsymbol {\eta }}\mid x)=\langle {\boldsymbol {\eta }},\mathbf {T} (x)\rangle -A({\boldsymbol {\eta }}).}
In words, the log-likelihood of an exponential family is the inner product of the natural parameter
{\displaystyle {\boldsymbol {\eta }}}
and the sufficient statistic
{\displaystyle \mathbf {T} (x)}
, minus the normalization factor (log-partition function)
{\displaystyle A({\boldsymbol {\eta }})}
. Thus, for example, the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A.
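A minimal sketch of this recipe (a Poisson example with made-up counts; names are ad hoc): for the Poisson family the natural parameter is eta = log(lambda), the sufficient statistic is T(x) = x, and the log-partition function is A(eta) = exp(eta), so the MLE solves A'(eta) = mean of T(x), i.e. lambda_hat equals the sample mean.

```python
import math

def poisson_mle_eta(xs, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve A'(eta) = exp(eta) = mean(T(x)) by bisection; A' is increasing."""
    t_bar = sum(xs) / len(xs)   # mean of the sufficient statistic T(x) = x
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if math.exp(mid) < t_bar:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

xs = [3, 1, 4, 1, 5]
eta_hat = poisson_mle_eta(xs)
# lambda_hat = exp(eta_hat) equals the sample mean
assert abs(math.exp(eta_hat) - sum(xs) / len(xs)) < 1e-9
```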
==== Example: the gamma distribution ====
The gamma distribution is an exponential family with two parameters,
{\textstyle \alpha }
and
{\textstyle \beta }
. The likelihood function is
{\displaystyle {\mathcal {L}}(\alpha ,\beta \mid x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.}
Finding the maximum likelihood estimate of
{\textstyle \beta }
for a single observed value
{\textstyle x}
looks rather daunting. Its logarithm is much simpler to work with:
{\displaystyle \log {\mathcal {L}}(\alpha ,\beta \mid x)=\alpha \log \beta -\log \Gamma (\alpha )+(\alpha -1)\log x-\beta x.}
To maximize the log-likelihood, we first take the partial derivative with respect to
{\textstyle \beta }
:
{\displaystyle {\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x)}{\partial \beta }}={\frac {\alpha }{\beta }}-x.}
If there are a number of independent observations
{\textstyle x_{1},\ldots ,x_{n}}
, then the joint log-likelihood is the sum of the individual log-likelihoods, and the derivative of this sum is the sum of the derivatives of the individual log-likelihoods:
{\displaystyle {\begin{aligned}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1},\ldots ,x_{n})}{\partial \beta }}\\&={\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1})}{\partial \beta }}+\cdots +{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{n})}{\partial \beta }}\\&={\frac {n\alpha }{\beta }}-\sum _{i=1}^{n}x_{i}.\end{aligned}}}
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for
{\textstyle \beta }
:
{\displaystyle {\widehat {\beta }}={\frac {\alpha }{\bar {x}}}.}
Here
{\textstyle {\widehat {\beta }}}
denotes the maximum-likelihood estimate, and
{\textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}}
is the sample mean of the observations.
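The closed-form estimate above can be checked directly, as in this trivial sketch (made-up observations and an assumed known alpha; the function name is ad hoc):

```python
def gamma_rate_mle(alpha, xs):
    """MLE of the gamma rate parameter beta for known alpha: beta_hat = alpha / mean(x)."""
    x_bar = sum(xs) / len(xs)
    return alpha / x_bar

xs = [2.0, 3.0, 1.0, 4.0]
# x_bar = 2.5, so beta_hat = 2.0 / 2.5 = 0.8
assert gamma_rate_mle(2.0, xs) == 0.8
```

When alpha is also unknown, no closed form exists for its MLE and the pair of score equations must be solved numerically.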
== Background and interpretation ==
=== Historical remarks ===
The term "likelihood" has been in use in English since at least late Middle English. Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
[I]n 1922, I proposed the term 'likelihood,' in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. . . . Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. . . .
The concept of likelihood should not be confused with probability, as Fisher himself stressed:
I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood.
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards (1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.
=== Interpretations under different foundations ===
Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.
==== Frequentist interpretation ====
==== Bayesian interpretation ====
In Bayesian inference, although one can speak about the likelihood of any proposition or random variable given another random variable (for example, the likelihood of a parameter value or of a statistical model, given specified data or other evidence; see marginal likelihood), the likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data and yet a low probability, or vice versa; this is often the case in medical contexts. Following Bayes' rule, the likelihood, when seen as a conditional density, can be multiplied by the prior probability density of the parameter and then normalized to give a posterior probability density. More generally, the likelihood of an unknown quantity
{\textstyle X}
given another unknown quantity
{\textstyle Y}
is proportional to the probability of
{\textstyle Y}
given
{\textstyle X}
.
==== Likelihoodist interpretation ====
In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1 ... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the chosen model and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population from which the observed sample was drawn. Heuristically, a good choice of parameters is one that renders the sample actually observed as probable as possible after the fact. Wilks' theorem quantifies this heuristic by showing that the difference between the logarithm of the likelihood at the estimated parameter values and the logarithm of the likelihood at the population's "true" (but unknown) parameter values is asymptotically χ2-distributed.
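The χ2 approximation can be probed by simulation, as in this hedged sketch (a hypothetical Bernoulli setup with a fixed seed; names are ad hoc): under the true parameter, twice the gap between the maximized log-likelihood and the log-likelihood at the true value should be approximately χ2 with one degree of freedom, whose mean is 1.

```python
import math
import random

def loglik(p, k, n):
    # Bernoulli log-likelihood for k successes in n trials
    return k * math.log(p) + (n - k) * math.log(1 - p)

def lr_statistic(k, n, p0):
    # Wilks statistic: 2 * (loglik at MLE - loglik at the true p0)
    p_hat = k / n
    return 2 * (loglik(p_hat, k, n) - loglik(p0, k, n))

rng = random.Random(0)
n, p0 = 500, 0.5
stats = []
for _ in range(2000):
    k = sum(rng.random() < p0 for _ in range(n))
    stats.append(lr_statistic(k, n, p0))
mean_stat = sum(stats) / len(stats)
# chi-squared with 1 degree of freedom has mean 1
assert 0.8 < mean_stat < 1.2
```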
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population's "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets' likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1 ... θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. The χ2 distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates).
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.
==== AIC-based interpretation ====
Under the AIC paradigm, likelihood is interpreted within the context of information theory.
Statistics (from German: Statistik, orig. "description of a state, a country") is the discipline that concerns the collection, organization, analysis, interpretation, and presentation of data. In applying statistics to a scientific, industrial, or social problem, it is conventional to begin with a statistical population or a statistical model to be studied. Populations can be diverse groups of people or objects such as "all people living in a country" or "every atom composing a crystal". Statistics deals with every aspect of data, including the planning of data collection in terms of the design of surveys and experiments.
When census data (comprising every member of the target population) cannot be collected, statisticians collect data by developing specific experiment designs and survey samples. Representative sampling assures that inferences and conclusions can reasonably extend from the sample to the population as a whole. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation.
Two main statistical methods are used in data analysis: descriptive statistics, which summarize data from a sample using indexes such as the mean or standard deviation, and inferential statistics, which draw conclusions from data that are subject to random variation (e.g., observational errors, sampling variation). Descriptive statistics are most often concerned with two sets of properties of a distribution (sample or population): central tendency (or location) seeks to characterize the distribution's central or typical value, while dispersion (or variability) characterizes the extent to which members of the distribution depart from its center and each other. Inferences made using mathematical statistics employ the framework of probability theory, which deals with the analysis of random phenomena.
A standard statistical procedure involves the collection of data leading to a test of the relationship between two statistical data sets, or a data set and synthetic data drawn from an idealized model. A hypothesis is proposed for the statistical relationship between the two data sets, an alternative to an idealized null hypothesis of no relationship between two data sets. Rejecting or disproving the null hypothesis is done using statistical tests that quantify the sense in which the null can be proven false, given the data that are used in the test. Working from a null hypothesis, two basic forms of error are recognized: Type I errors (null hypothesis is rejected when it is in fact true, giving a "false positive") and Type II errors (null hypothesis fails to be rejected when it is in fact false, giving a "false negative"). Multiple problems have come to be associated with this framework, ranging from obtaining a sufficient sample size to specifying an adequate null hypothesis.
Statistical measurement processes are also prone to error in regards to the data that they generate. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also occur. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
== Introduction ==
"Statistics is both the science of uncertainty and the technology of extracting information from data." - featured in the International Encyclopedia of Statistical Science. Statistics is the discipline that deals with data, facts and figures from which meaningful information is inferred. Data may represent a numerical value, in the form of quantitative data, or a label, as with qualitative data. Data may be collected, presented and summarised using the methods of descriptive statistics. Two elementary summaries of data, each singularly called a statistic, are the mean and the dispersion. Inferential statistics, by contrast, interprets data from a population sample to make statements and predictions about the population.
Statistics is regarded as a body of science or a branch of mathematics. It is based on probability, a branch of mathematics that studies random events. Statistics is considered the science of uncertainty: this arises from the ways it copes with measurement and sampling error, as well as with uncertainties in modelling. Although probability and statistics were once paired together as a single subject, they are conceptually distinct. The former deduces answers to specific situations from a general theory of probability, whereas statistics induces statements about a population from a data set. Statistics serves to bridge the gap between probability and applied mathematical fields.
Some consider statistics to be a distinct mathematical science rather than a branch of mathematics. While many scientific investigations make use of data, statistics is generally concerned with the use of data in the context of uncertainty and decision-making in the face of uncertainty. Statistics is indexed at 62, a subclass of probability theory and stochastic processes, in the Mathematics Subject Classification. Mathematical statistics is covered in the range 276-280 of subclass QA (science > mathematics) in the Library of Congress Classification.
The word statistics ultimately comes from the Latin word status, meaning "situation" or "condition" in society, which in late Latin adopted the meaning "state". Derived from this, the political scientist Gottfried Achenwall coined the German word Statistik (a summary of how things stand). In 1770, the term entered the English language through German and referred to the study of political arrangements. The term gained its modern meaning in the 1790s in John Sinclair's works. In modern German, Statistik is synonymous with mathematical statistics. The term statistic, in the singular, denotes a function of sample data and, by extension, the value that it returns.
== Statistical data ==
=== Data collection ===
==== Sampling ====
When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models.
To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. There are also methods of experimental design that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population.
Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction—inductively inferring from samples to the parameters of a larger or total population.
==== Experimental and observational studies ====
A common goal for a statistical research project is to investigate causality, and in particular to draw a conclusion on the effect of changes in the values of predictors or independent variables on dependent variables. There are two major types of causal statistical studies: experimental studies and observational studies. In both types of studies, the effect of differences of an independent variable (or variables) on the behavior of the dependent variable are observed. The difference between the two types lies in how the study is actually conducted. Each can be very effective. An experimental study involves taking measurements of the system under study, manipulating the system, and then taking additional measurements with different levels using the same procedure to determine if the manipulation has modified the values of the measurements. In contrast, an observational study does not involve experimental manipulation. Instead, data are gathered and correlations between predictors and response are investigated. While the tools of data analysis work best on data from randomized studies, they are also applied to other kinds of data—like natural experiments and observational studies—for which a statistician would use a modified, more structured estimation method (e.g., difference in differences estimation and instrumental variables, among many others) that produce consistent estimators.
===== Experiments =====
The basic steps of a statistical experiment are:
Planning the research, including finding the number of replicates of the study, using the following information: preliminary estimates regarding the size of treatment effects, alternative hypotheses, and the estimated experimental variability. Consideration of the selection of experimental subjects and the ethics of research is necessary. Statisticians recommend that experiments compare (at least) one new treatment with a standard treatment or control, to allow an unbiased estimate of the difference in treatment effects.
Design of experiments, using blocking to reduce the influence of confounding variables, and randomized assignment of treatments to subjects to allow unbiased estimates of treatment effects and experimental error. At this stage, the experimenters and statisticians write the experimental protocol that will guide the performance of the experiment and which specifies the primary analysis of the experimental data.
Performing the experiment following the experimental protocol and analyzing the data following the experimental protocol.
Further examining the data set in secondary analyses, to suggest new hypotheses for future study.
Documenting and presenting the results of the study.
Experiments on human behavior have special concerns. The famous Hawthorne study examined changes to the working environment at the Hawthorne plant of the Western Electric Company. The researchers were interested in determining whether increased illumination would increase the productivity of the assembly line workers. The researchers first measured the productivity in the plant, then modified the illumination in an area of the plant and checked if the changes in illumination affected productivity. It turned out that productivity indeed improved (under the experimental conditions). However, the study is heavily criticized today for errors in experimental procedures, specifically for the lack of a control group and blindness. The Hawthorne effect refers to finding that an outcome (in this case, worker productivity) changed due to observation itself. Those in the Hawthorne study became more productive not because the lighting was changed but because they were being observed.
===== Observational study =====
An example of an observational study is one that explores the association between smoking and lung cancer. This type of study typically uses a survey to collect observations about the area of interest and then performs statistical analysis. In this case, the researchers would collect observations of both smokers and non-smokers, perhaps through a cohort study, and then look for the number of cases of lung cancer in each group. A case-control study is another type of observational study in which people with and without the outcome of interest (e.g. lung cancer) are invited to participate and their exposure histories are collected.
=== Types of data ===
Various attempts have been made to produce a taxonomy of levels of measurement. The psychophysicist Stanley Smith Stevens defined nominal, ordinal, interval, and ratio scales. Nominal measurements do not have meaningful rank order among values, and permit any one-to-one (injective) transformation. Ordinal measurements have imprecise differences between consecutive values, but have a meaningful order to those values, and permit any order-preserving transformation. Interval measurements have meaningful distances between measurements defined, but the zero value is arbitrary (as in the case with longitude and temperature measurements in Celsius or Fahrenheit), and permit any linear transformation. Ratio measurements have both a meaningful zero value and the distances between different measurements defined, and permit any rescaling transformation.
Because variables conforming only to nominal or ordinal measurements cannot be reasonably measured numerically, sometimes they are grouped together as categorical variables, whereas ratio and interval measurements are grouped together as quantitative variables, which can be either discrete or continuous, due to their numerical nature. Such distinctions can often be loosely correlated with data type in computer science, in that dichotomous categorical variables may be represented with the Boolean data type, polytomous categorical variables with arbitrarily assigned integers in the integral data type, and continuous variables with the real data type involving floating-point arithmetic. But the mapping of computer science data types to statistical data types depends on which categorization of the latter is being implemented.
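A loose illustration of the mapping just described (the dictionary and names are ad hoc, not a standard library API): Stevens' scales can be associated with rough computer-science types, e.g. Boolean for dichotomous categories and floating point for interval and ratio data.

```python
# Ad hoc mapping from measurement scale to a suggested Python representation.
scale_to_pytype = {
    "nominal": str,        # categorical labels, no order
    "dichotomous": bool,   # two-valued categories
    "ordinal": int,        # rank codes with order but arbitrary spacing
    "interval": float,     # numeric, arbitrary zero (e.g. Celsius)
    "ratio": float,        # numeric with a meaningful zero
}

def coerce(scale, value):
    """Coerce a raw value to the representation suggested for its scale."""
    return scale_to_pytype[scale](value)

assert coerce("nominal", 3) == "3"
assert coerce("ratio", "2.5") == 2.5
```

As the text notes, any such mapping is a convention rather than a rule: which categorization of statistical data types is being implemented determines the computer-science types actually used.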
Other categorizations have been proposed. For example, Mosteller and Tukey (1977) distinguished grades, ranks, counted fractions, counts, amounts, and balances. Nelder (1990) described continuous counts, continuous ratios, count ratios, and categorical modes of data. (See also: Chrisman (1998), van den Berg (1991).)
The issue of whether or not it is appropriate to apply different kinds of statistical methods to data obtained from different kinds of measurement procedures is complicated by issues concerning the transformation of variables and the precise interpretation of research questions. "The relationship between the data and what they describe merely reflects the fact that certain kinds of statistical statements may have truth values which are not invariant under some transformations. Whether or not a transformation is sensible to contemplate depends on the question one is trying to answer."
== Methods ==
=== Descriptive statistics ===
A descriptive statistic (in the count noun sense) is a summary statistic that quantitatively describes or summarizes features of a collection of information, while descriptive statistics in the mass noun sense is the process of using and analyzing those statistics. Descriptive statistics is distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aims to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent.
=== Inferential statistics ===
Statistical inference is the process of using data analysis to deduce properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population.
==== Terminology and theory of inferential statistics ====
===== Statistics, estimators and pivotal quantities =====
Consider independent identically distributed (IID) random variables with a given probability distribution: standard statistical inference and estimation theory defines a random sample as the random vector given by the column vector of these IID variables. The population being examined is described by a probability distribution that may have unknown parameters.
A statistic is a random variable that is a function of the random sample, but not a function of unknown parameters. The probability distribution of the statistic, though, may have unknown parameters. Consider now a function of the unknown parameter: an estimator is a statistic used to estimate such function. Commonly used estimators include sample mean, unbiased sample variance and sample covariance.
A random variable that is a function of the random sample and of the unknown parameter, but whose probability distribution does not depend on the unknown parameter is called a pivotal quantity or pivot. Widely used pivots include the z-score, the chi square statistic and Student's t-value.
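The z-score mentioned above can serve as a concrete illustration of a pivotal quantity: for a sample mean drawn from a normal population with known standard deviation, its distribution is standard normal regardless of the value of the unknown mean. A minimal sketch (the sample values here are hypothetical):

```python
import math

def z_score(sample_mean, mu, sigma, n):
    """Z-score of a sample mean. It is pivotal because its distribution
    is standard normal whatever the value of mu (for known sigma)."""
    return (sample_mean - mu) / (sigma / math.sqrt(n))

# Hypothetical example: a sample of n=25 with mean 52,
# from a population with mu=50 and sigma=10
z = z_score(52, 50, 10, 25)
print(z)  # → 1.0
```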
Between two estimators of a given parameter, the one with lower mean squared error is said to be more efficient. Furthermore, an estimator is said to be unbiased if its expected value is equal to the true value of the unknown parameter being estimated, and asymptotically unbiased if its expected value converges at the limit to the true value of such parameter.
Other desirable properties for estimators include: UMVUE estimators, which have the lowest variance for all possible values of the parameter to be estimated (this is usually an easier property to verify than efficiency), and consistent estimators, which converge in probability to the true value of the parameter.
This still leaves the question of how to obtain estimators in a given situation and how to carry out the computation. Several methods have been proposed: the method of moments, the maximum likelihood method, the least squares method, and the more recent method of estimating equations.
===== Null hypothesis and alternative hypothesis =====
Interpretation of statistical information can often involve the development of a null hypothesis which is usually (but not necessarily) that no relationship exists among variables or that no change occurred over time. The alternative hypothesis is the name of the hypothesis that contradicts the null hypothesis.
The best illustration for a novice is the predicament encountered by a criminal trial. The null hypothesis, H0, asserts that the defendant is innocent, whereas the alternative hypothesis, H1, asserts that the defendant is guilty. The indictment comes because of suspicion of the guilt. The H0 (the status quo) stands in opposition to H1 and is maintained unless H1 is supported by evidence "beyond a reasonable doubt". However, "failure to reject H0" in this case does not imply innocence, but merely that the evidence was insufficient to convict. So the jury does not necessarily accept H0 but fails to reject H0. While one can not "prove" a null hypothesis, one can test how close it is to being true with a power test, which tests for type II errors.
===== Error =====
Working from a null hypothesis, two broad categories of error are recognized:
Type I errors where the null hypothesis is falsely rejected, giving a "false positive".
Type II errors where the null hypothesis fails to be rejected and an actual difference between populations is missed, giving a "false negative".
Standard deviation refers to the extent to which individual observations in a sample differ from a central value, such as the sample or population mean, while standard error refers to an estimate of the difference between the sample mean and the population mean.
A statistical error is the amount by which an observation differs from its expected value. A residual is the amount an observation differs from the value the estimator of the expected value assumes on a given sample (also called prediction).
Mean squared error is used for obtaining efficient estimators, a widely used class of estimators. Root mean square error is simply the square root of mean squared error.
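The relationship between mean squared error and its root can be sketched directly from the definitions above (the observations and predictions below are hypothetical):

```python
import math

def mse(observations, predictions):
    # Mean squared error: average of squared deviations
    return sum((o - p) ** 2 for o, p in zip(observations, predictions)) / len(observations)

def rmse(observations, predictions):
    # Root mean squared error: square root of MSE, in the units of the data
    return math.sqrt(mse(observations, predictions))

obs = [2.0, 4.0, 6.0]
pred = [1.0, 4.0, 8.0]
print(mse(obs, pred))   # (1 + 0 + 4) / 3
print(rmse(obs, pred))
```

Note how the squaring gives the deviation of 2 four times the weight of the deviation of 1, in line with the contrast with least absolute deviations discussed below.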
Many statistical methods seek to minimize the residual sum of squares, and these are called "methods of least squares" in contrast to Least absolute deviations. The latter gives equal weight to small and big errors, while the former gives more weight to large errors. Residual sum of squares is also differentiable, which provides a handy property for doing regression. Least squares applied to linear regression is called ordinary least squares method and least squares applied to nonlinear regression is called non-linear least squares. Also in a linear regression model the non deterministic part of the model is called error term, disturbance or more simply noise. Both linear regression and non-linear regression are addressed in polynomial least squares, which also describes the variance in a prediction of the dependent variable (y axis) as a function of the independent variable (x axis) and the deviations (errors, noise, disturbances) from the estimated (fitted) curve.
Measurement processes that generate statistical data are also subject to error. Many of these errors are classified as random (noise) or systematic (bias), but other types of errors (e.g., blunder, such as when an analyst reports incorrect units) can also be important. The presence of missing data or censoring may result in biased estimates and specific techniques have been developed to address these problems.
===== Interval estimation =====
Most studies only sample part of a population, so results do not fully represent the whole population. Any estimates obtained from the sample only approximate the population value. Confidence intervals allow statisticians to express how closely the sample estimate matches the true value in the whole population. Often they are expressed as 95% confidence intervals. Formally, a 95% confidence interval for a value is a range where, if the sampling and analysis were repeated under the same conditions (yielding a different dataset), the interval would include the true (population) value in 95% of all possible cases. This does not imply that the probability that the true value is in the confidence interval is 95%. From the frequentist perspective, such a claim does not even make sense, as the true value is not a random variable. Either the true value is or is not within the given interval. However, it is true that, before any data are sampled and given a plan for how to construct the confidence interval, the probability is 95% that the yet-to-be-calculated interval will cover the true value: at this point, the limits of the interval are yet-to-be-observed random variables. One approach that does yield an interval that can be interpreted as having a given probability of containing the true value is to use a credible interval from Bayesian statistics: this approach depends on a different way of interpreting what is meant by "probability", that is as a Bayesian probability.
In principle confidence intervals can be symmetrical or asymmetrical. An interval can be asymmetrical because it works as lower or upper bound for a parameter (left-sided interval or right sided interval), but it can also be asymmetrical because the two sided interval is built violating symmetry around the estimate. Sometimes the bounds for a confidence interval are reached asymptotically and these are used to approximate the true bounds.
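The frequentist coverage interpretation described above can be checked by simulation: construct the interval many times from fresh samples and count how often it contains the true value. A sketch under the simplifying assumption of a known population standard deviation (all numbers here are hypothetical):

```python
import math
import random

random.seed(0)
TRUE_MU, SIGMA, N, TRIALS = 50.0, 10.0, 25, 2000
z = 1.96  # approximate 97.5th percentile of the standard normal

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MU, SIGMA) for _ in range(N)]
    mean = sum(sample) / N
    half = z * SIGMA / math.sqrt(N)  # known-sigma interval, for simplicity
    if mean - half <= TRUE_MU <= mean + half:
        covered += 1

coverage = covered / TRIALS
print(coverage)  # close to 0.95
```

Each individual interval either contains the true mean or does not; the 95% figure describes the long-run proportion of intervals that do.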
===== Significance =====
Statistics rarely give a simple Yes/No type answer to the question under analysis. Interpretation often comes down to the level of statistical significance applied to the numbers and often refers to the probability of a value accurately rejecting the null hypothesis (sometimes referred to as the p-value).
The standard approach is to test a null hypothesis against an alternative hypothesis. A critical region is the set of values of the estimator that leads to refuting the null hypothesis. The probability of type I error is therefore the probability that the estimator belongs to the critical region given that null hypothesis is true (statistical significance) and the probability of type II error is the probability that the estimator does not belong to the critical region given that the alternative hypothesis is true. The statistical power of a test is the probability that it correctly rejects the null hypothesis when the null hypothesis is false.
Referring to statistical significance does not necessarily mean that the overall result is significant in real world terms. For example, in a large study of a drug it may be shown that the drug has a statistically significant but very small beneficial effect, such that the drug is unlikely to help the patient noticeably.
Although in principle the acceptable level of statistical significance may be subject to debate, the significance level is the largest p-value that allows the test to reject the null hypothesis. This test is logically equivalent to saying that the p-value is the probability, assuming the null hypothesis is true, of observing a result at least as extreme as the test statistic. Therefore, the smaller the significance level, the lower the probability of committing type I error.
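The p-value definition above can be made concrete for the simplest case, a two-sided test of a population mean with known standard deviation (the sample numbers are hypothetical):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_sided_p(sample_mean, mu0, sigma, n):
    # Two-sided p-value for H0: mu = mu0, with known sigma:
    # probability of a result at least as extreme as the observed one
    z = (sample_mean - mu0) / (sigma / math.sqrt(n))
    return 2 * (1 - normal_cdf(abs(z)))

p = two_sided_p(52, 50, 10, 25)  # z = 1.0
print(p)  # about 0.317: not significant at the 0.05 level
```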
Some problems are usually associated with this framework (See criticism of hypothesis testing):
A difference that is highly statistically significant can still be of no practical significance, but it is possible to properly formulate tests to account for this. One response involves going beyond reporting only the significance level to include the p-value when reporting whether a hypothesis is rejected or accepted. The p-value, however, does not indicate the size or importance of the observed effect and can also seem to exaggerate the importance of minor differences in large studies. A better and increasingly common approach is to report confidence intervals. Although these are produced from the same calculations as those of hypothesis tests or p-values, they describe both the size of the effect and the uncertainty surrounding it.
Fallacy of the transposed conditional, aka prosecutor's fallacy: criticisms arise because the hypothesis testing approach forces one hypothesis (the null hypothesis) to be favored, since what is being evaluated is the probability of the observed result given the null hypothesis and not probability of the null hypothesis given the observed result. An alternative to this approach is offered by Bayesian inference, although it requires establishing a prior probability.
Rejecting the null hypothesis does not automatically prove the alternative hypothesis.
As with everything in inferential statistics, this relies on sample size; therefore, under fat tails, p-values may be seriously miscomputed.
===== Examples =====
Some well-known statistical tests and procedures are:
=== Bayesian Statistics ===
An alternative paradigm to the popular frequentist paradigm is to use Bayes' theorem to update the prior probability of the hypotheses in consideration based on the relative likelihood of the evidence gathered to obtain a posterior probability. Bayesian methods have been aided by the increase in available computing power to compute the posterior probability using numerical approximation techniques like Markov Chain Monte Carlo.
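The update described above can be sketched in its simplest, discrete form: the posterior probability of each hypothesis is proportional to its prior probability times the likelihood of the evidence under it (the numbers below are illustrative assumptions, not from the text):

```python
# Two hypotheses with assumed prior probabilities
priors = {"H1": 0.5, "H2": 0.5}
# Assumed P(evidence | hypothesis) for some observed evidence
likelihoods = {"H1": 0.8, "H2": 0.2}

# Bayes' theorem: posterior ∝ prior × likelihood, normalized over hypotheses
unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
total = sum(unnormalized.values())
posteriors = {h: u / total for h, u in unnormalized.items()}

print(posteriors)  # {'H1': 0.8, 'H2': 0.2}
```

In practice the hypothesis space is usually continuous and the normalizing integral intractable, which is where numerical techniques such as Markov Chain Monte Carlo come in.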
For statistical modelling purposes, Bayesian models tend to be hierarchical. For example, one could model each YouTube channel as having video views distributed as a normal distribution with channel-dependent mean and variance, {\displaystyle {\mathcal {N}}(\mu _{i},\sigma _{i})}, while modeling the channel means as themselves coming from a normal distribution representing the distribution of average video view counts per channel, and the variances as coming from another distribution.
The concept of using likelihood ratio can also be prominently seen in medical diagnostic testing.
=== Exploratory data analysis ===
Exploratory data analysis (EDA) is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task.
=== Mathematical statistics ===
Mathematical statistics is the application of mathematics to statistics. Mathematical techniques used for this include mathematical analysis, linear algebra, stochastic analysis, differential equations, and measure-theoretic probability theory. All statistical analyses make use of at least some mathematics, and mathematical statistics can therefore be regarded as a fundamental component of general statistics.
== History ==
Formal discussions on inference date back to the mathematicians and cryptographers of the Islamic Golden Age between the 8th and 13th centuries. Al-Khalil (717–786) wrote the Book of Cryptographic Messages, which contains one of the first uses of permutations and combinations, to list all possible Arabic words with and without vowels. Al-Kindi's Manuscript on Deciphering Cryptographic Messages gave a detailed description of how to use frequency analysis to decipher encrypted messages, providing an early example of statistical inference for decoding. Ibn Adlan (1187–1268) later made an important contribution on the use of sample size in frequency analysis.
Although the term statistic was introduced by the Italian scholar Girolamo Ghilini in 1589 with reference to a collection of facts and information about a state, it was the German Gottfried Achenwall in 1749 who started using the term as a collection of quantitative information, in the modern use for this science. The earliest writing containing statistics in Europe dates back to 1663, with the publication of Natural and Political Observations upon the Bills of Mortality by John Graunt. Early applications of statistical thinking revolved around the needs of states to base policy on demographic and economic data, hence its stat- etymology. The scope of the discipline of statistics broadened in the early 19th century to include the collection and analysis of data in general. Today, statistics is widely employed in government, business, and natural and social sciences.
The mathematical foundations of statistics developed from discussions concerning games of chance among mathematicians such as Gerolamo Cardano, Blaise Pascal, Pierre de Fermat, and Christiaan Huygens. Although the idea of probability was already examined in ancient and medieval law and philosophy (such as the work of Juan Caramuel), probability theory as a mathematical discipline only took shape at the very end of the 17th century, particularly in Jacob Bernoulli's posthumous work Ars Conjectandi. This was the first book where the realm of games of chance and the realm of the probable (which concerned opinion, evidence, and argument) were combined and submitted to mathematical analysis. The method of least squares was first described by Adrien-Marie Legendre in 1805, though Carl Friedrich Gauss presumably made use of it a decade earlier in 1795.
The modern field of statistics emerged in the late 19th and early 20th century in three stages. The first wave, at the turn of the century, was led by the work of Francis Galton and Karl Pearson, who transformed statistics into a rigorous mathematical discipline used for analysis, not just in science, but in industry and politics as well. Galton's contributions included introducing the concepts of standard deviation, correlation, regression analysis and the application of these methods to the study of the variety of human characteristics—height, weight and eyelash length among others. Pearson developed the Pearson product-moment correlation coefficient, defined as a product-moment, the method of moments for the fitting of distributions to samples and the Pearson distribution, among many other things. Galton and Pearson founded Biometrika as the first journal of mathematical statistics and biostatistics (then called biometry), and the latter founded the world's first university statistics department at University College London.
The second wave of the 1910s and 20s was initiated by William Sealy Gosset, and reached its culmination in the insights of Ronald Fisher, who wrote the textbooks that were to define the academic discipline in universities around the world. Fisher's most important publications were his 1918 seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance (which was the first to use the statistical term, variance), his classic 1925 work Statistical Methods for Research Workers and his 1935 The Design of Experiments, where he developed rigorous design of experiments models. He originated the concepts of sufficiency, ancillary statistics, Fisher's linear discriminator and Fisher information. He also coined the term null hypothesis during the Lady tasting tea experiment, which "is never proved or established, but is possibly disproved, in the course of experimentation". In his 1930 book The Genetical Theory of Natural Selection, he applied statistics to various biological concepts such as Fisher's principle (which A. W. F. Edwards called "probably the most celebrated argument in evolutionary biology") and Fisherian runaway, a concept in sexual selection about a positive feedback runaway effect found in evolution.
The final wave, which mainly saw the refinement and expansion of earlier developments, emerged from the collaborative work between Egon Pearson and Jerzy Neyman in the 1930s. They introduced the concepts of "Type II" error, power of a test and confidence intervals. Jerzy Neyman in 1934 showed that stratified random sampling was in general a better method of estimation than purposive (quota) sampling.
Among the early attempts to measure national economic activity were those of William Petty in the 17th century. In the 20th century the uniform System of National Accounts was developed.
Today, statistical methods are applied in all fields that involve decision making, for making accurate inferences from a collated body of data and for making decisions in the face of uncertainty based on statistical methodology. The use of modern computers has expedited large-scale statistical computations and has also made possible new methods that are impractical to perform manually. Statistics continues to be an area of active research, for example on the problem of how to analyze big data.
== Applications ==
=== Applied statistics, theoretical statistics and mathematical statistics ===
Applied statistics, sometimes referred to as Statistical science, comprises descriptive statistics and the application of inferential statistics. Theoretical statistics concerns the logical arguments underlying justification of approaches to statistical inference, as well as encompassing mathematical statistics. Mathematical statistics includes not only the manipulation of probability distributions necessary for deriving results related to methods of estimation and inference, but also various aspects of computational statistics and the design of experiments.
Statistical consultants can help organizations and companies that do not have in-house expertise relevant to their particular questions.
=== Machine learning and data mining ===
Machine learning models are statistical and probabilistic models that capture patterns in the data through use of computational algorithms.
=== Statistics in academia ===
Statistics is applicable to a wide variety of academic disciplines, including natural and social sciences, government, and business. Business statistics applies statistical methods in econometrics, auditing and production and operations, including services improvement and marketing research. A study of two journals in tropical biology found that the 12 most frequent statistical tests are: analysis of variance (ANOVA), chi-squared test, Student's t-test, linear regression, Pearson's correlation coefficient, Mann-Whitney U test, Kruskal-Wallis test, Shannon's diversity index, Tukey's range test, cluster analysis, Spearman's rank correlation coefficient and principal component analysis.
A typical statistics course covers descriptive statistics, probability, binomial and normal distributions, test of hypotheses and confidence intervals, linear regression, and correlation. Modern fundamental statistical courses for undergraduate students focus on correct test selection, results interpretation, and use of free statistics software.
=== Statistical computing ===
The rapid and sustained increases in computing power starting from the second half of the 20th century have had a substantial impact on the practice of statistical science. Early statistical models were almost always from the class of linear models, but powerful computers, coupled with suitable numerical algorithms, caused an increased interest in nonlinear models (such as neural networks) as well as the creation of new types, such as generalized linear models and multilevel models.
Increased computing power has also led to the growing popularity of computationally intensive methods based on resampling, such as permutation tests and the bootstrap, while techniques such as Gibbs sampling have made use of Bayesian models more feasible. The computer revolution has implications for the future of statistics with a new emphasis on "experimental" and "empirical" statistics. A large number of both general and special purpose statistical software are now available. Examples of available software capable of complex statistical computation include programs such as Mathematica, SAS, SPSS, and R.
=== Business statistics ===
In business, "statistics" is a widely used management- and decision support tool. It is particularly applied in financial management, marketing management, and production, services and operations management. Statistics is also heavily used in management accounting and auditing. The discipline of Management Science formalizes the use of statistics, and other mathematics, in business. (Econometrics is the application of statistical methods to economic data in order to give empirical content to economic relationships.)
A typical "Business Statistics" course is intended for business majors, and covers descriptive statistics (collection, description, analysis, and summary of data), probability (typically the binomial and normal distributions), test of hypotheses and confidence intervals, linear regression, and correlation; (follow-on) courses may include forecasting, time series, decision trees, multiple linear regression, and other topics from business analytics more generally. Professional certification programs, such as the CFA, often include topics in statistics.
== Specialized disciplines ==
Statistical techniques are used in a wide range of types of scientific and social research, including: biostatistics, computational biology, computational sociology, network biology, social science, sociology and social research. Some fields of inquiry use applied statistics so extensively that they have specialized terminology. These disciplines include:
In addition, there are particular types of statistical analysis that have also developed their own specialised terminology and methodology:
Statistics form a key basis tool in business and manufacturing as well. It is used to understand measurement systems variability, control processes (as in statistical process control or SPC), for summarizing data, and to make data-driven decisions.
== Misuse ==
Misuse of statistics can produce subtle but serious errors in description and interpretation—subtle in the sense that even experienced professionals make such errors, and serious in the sense that they can lead to devastating decision errors. For instance, social policy, medical practice, and the reliability of structures like bridges all rely on the proper use of statistics.
Even when statistical techniques are correctly applied, the results can be difficult to interpret for those lacking expertise. The statistical significance of a trend in the data—which measures the extent to which a trend could be caused by random variation in the sample—may or may not agree with an intuitive sense of its significance. The set of basic statistical skills (and skepticism) that people need to deal with information in their everyday lives properly is referred to as statistical literacy.
There is a general perception that statistical knowledge is all-too-frequently intentionally misused by finding ways to interpret only the data that are favorable to the presenter. A mistrust and misunderstanding of statistics is associated with the quotation, "There are three kinds of lies: lies, damned lies, and statistics". Misuse of statistics can be both inadvertent and intentional, and the book How to Lie with Statistics, by Darrell Huff, outlines a range of considerations. In an attempt to shed light on the use and misuse of statistics, reviews of statistical techniques used in particular fields are conducted (e.g. Warne, Lazo, Ramos, and Ritter (2012)).
Ways to avoid misuse of statistics include using proper diagrams and avoiding bias. Misuse can occur when conclusions are overgeneralized and claimed to be representative of more than they really are, often by either deliberately or unconsciously overlooking sampling bias. Bar graphs are arguably the easiest diagrams to use and understand, and they can be made either by hand or with simple computer programs. Most people do not look for bias or errors, so they are not noticed. Thus, people may often believe that something is true even if it is not well represented. To make data gathered from statistics believable and accurate, the sample taken must be representative of the whole. According to Huff, "The dependability of a sample can be destroyed by [bias]... allow yourself some degree of skepticism."
To assist in the understanding of statistics Huff proposed a series of questions to be asked in each case:
Who says so? (Does he/she have an axe to grind?)
How does he/she know? (Does he/she have the resources to know the facts?)
What's missing? (Does he/she give us a complete picture?)
Did someone change the subject? (Does he/she offer us the right answer to the wrong problem?)
Does it make sense? (Is his/her conclusion logical and consistent with what we already know?)
=== Misinterpretation: correlation ===
The concept of correlation is particularly noteworthy for the potential confusion it can cause. Statistical analysis of a data set often reveals that two variables (properties) of the population under consideration tend to vary together, as if they were connected. For example, a study of annual income that also looks at age of death, might find that poor people tend to have shorter lives than affluent people. The two variables are said to be correlated; however, they may or may not be the cause of one another. The correlation phenomena could be caused by a third, previously unconsidered phenomenon, called a lurking variable or confounding variable. For this reason, there is no way to immediately infer the existence of a causal relationship between the two variables.
== See also ==
Foundations and major areas of statistics
== References ==
== Further reading ==
Lydia Denworth, "A Significant Problem: Standard scientific methods are under fire. Will anything change?", Scientific American, vol. 321, no. 4 (October 2019), pp. 62–67. "The use of p values for nearly a century [since 1925] to determine statistical significance of experimental results has contributed to an illusion of certainty and [to] reproducibility crises in many scientific fields. There is growing determination to reform statistical analysis... Some [researchers] suggest changing statistical methods, whereas others would do away with a threshold for defining "significant" results". (p. 63.)
Barbara Illowsky; Susan Dean (2014). Introductory Statistics. OpenStax CNX. ISBN 978-1938168208.
Stockburger, David W. "Introductory Statistics: Concepts, Models, and Applications". Missouri State University (3rd Web ed.). Archived from the original on 28 May 2020.
OpenIntro Statistics Archived 2019-06-16 at the Wayback Machine, 3rd edition by Diez, Barr, and Cetinkaya-Rundel
Stephen Jones, 2010. Statistics in Psychology: Explanations without Equations. Palgrave Macmillan. ISBN 978-1137282392.
Cohen, J (1990). "Things I have learned (so far)" (PDF). American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066x.45.12.1304. S2CID 7180431. Archived from the original (PDF) on 2017-10-18.
Gigerenzer, G (2004). "Mindless statistics". Journal of Socio-Economics. 33 (5): 587–606. doi:10.1016/j.socec.2004.09.033.
Ioannidis, J.P.A. (2005). "Why most published research findings are false". PLOS Medicine. 2 (4): 696–701. doi:10.1371/journal.pmed.0040168. PMC 1855693. PMID 17456002.
== External links ==
(Electronic Version): TIBCO Software Inc. (2020). Data Science Textbook.
Online Statistics Education: An Interactive Multimedia Course of Study. Developed by Rice University (Lead Developer), University of Houston Clear Lake, Tufts University, and National Science Foundation.
UCLA Statistical Computing Resources (archived 17 July 2006)
Philosophy of Statistics from the Stanford Encyclopedia of Philosophy
In the design of experiments, completely randomized designs are for studying the effects of one primary factor without the need to take other nuisance variables into account. This article describes completely randomized designs that have one primary factor. The experiment compares the values of a response variable based on the different levels of that primary factor. For completely randomized designs, the levels of the primary factor are randomly assigned to the experimental units.
== Randomization ==
To randomize is to determine the run sequence of the experimental units randomly. For example, if there are 3 levels of the primary factor with each level to be run 2 times, then there are 6! (where ! denotes factorial) possible run sequences (or ways to order the experimental trials). Because of the replication, the number of unique orderings is 90 (since 90 = 6!/(2!*2!*2!)). An example of an unrandomized design would be to always run 2 replications for the first level, then 2 for the second level, and finally 2 for the third level. To randomize the runs, one way would be to put 6 slips of paper in a box with 2 having level 1, 2 having level 2, and 2 having level 3. Before each run, one of the slips would be drawn blindly from the box and the level selected would be used for the next run of the experiment.
In practice, the randomization is typically performed by a computer program. However, the randomization can also be generated from random number tables or by some physical mechanism (e.g., drawing the slips of paper).
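The slips-of-paper procedure and the count of 90 distinct orderings can both be reproduced with a few lines of code (the seed is arbitrary):

```python
import random
from itertools import permutations
from math import factorial

# 3 levels of the primary factor, 2 replications each: the six "slips of paper"
levels = [1, 1, 2, 2, 3, 3]

# One way to randomize: shuffle the slips and run them in the shuffled order
random.seed(42)
run_sequence = levels[:]
random.shuffle(run_sequence)
print(run_sequence)

# Number of distinct run sequences: 6! / (2! * 2! * 2!) = 90
distinct = len(set(permutations(levels)))
formula = factorial(6) // factorial(2) ** 3
print(distinct, formula)  # 90 90
```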
== Three key numbers ==
All completely randomized designs with one primary factor are defined by 3 numbers:
k = number of factors (= 1 for these designs)
L = number of levels
n = number of replications
and the total sample size (number of runs) is N = k × L × n. Balance dictates that the number of replications be the same at each level of the factor (this will maximize the sensitivity of subsequent statistical t- (or F-) tests).
== Example ==
A typical example of a completely randomized design is the following:
k = 1 factor (X1)
L = 4 levels of that single factor (called "1", "2", "3", and "4")
n = 3 replications per level
N = 4 levels × 3 replications per level = 12 runs
=== Sample randomized sequence of trials ===
The randomized sequence of trials might look like: X1: 3, 1, 4, 2, 2, 1, 3, 4, 1, 2, 4, 3
Note that in this example there are 12!/(3!*3!*3!*3!) = 369,600 ways to run the experiment, all equally likely to be picked by a randomization procedure.
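The count of equally likely run orders follows the same multinomial formula as before, now with 12 runs and 4 levels of 3 replications each:

```python
from math import factorial

# 12 runs: 4 levels, 3 replications per level
ways = factorial(12) // factorial(3) ** 4
print(ways)  # 369600
```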
== Model for a completely randomized design ==
The model for the response is
{\displaystyle Y_{i,j}=\mu +T_{i}+\mathrm {random\ error} }
with
Yi,j being any observation for which X1 = i (i and j denote the level of the factor and the replication within the level of the factor, respectively)
μ (or mu) is the general location parameter
Ti is the effect of having treatment level i
== Estimates and statistical tests ==
=== Estimating and testing model factor levels ===
Estimate for μ :
{\displaystyle {\bar {Y}}} = the average of all the data
Estimate for Ti :
{\displaystyle {\bar {Y}}_{i}-{\bar {Y}}}
with
{\displaystyle {\bar {Y}}_{i}}
= average of all Y for which X1 = i.
Statistical tests for levels of X1 are those used for a one-way ANOVA and are detailed in the article on analysis of variance.
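As a hedged illustration of the one-way ANOVA test referred to above, the F statistic can be computed by hand for a small data set with L = 4 levels and n = 3 replications; the response values below are hypothetical, invented for the example.

```python
# One-way ANOVA F statistic for a completely randomized design.
# The responses at each level are illustrative assumptions.
data = {
    1: [10.0, 12.0, 11.0],
    2: [14.0, 15.0, 13.0],
    3: [9.0, 8.0, 10.0],
    4: [12.0, 13.0, 14.0],
}
all_y = [y for ys in data.values() for y in ys]
grand_mean = sum(all_y) / len(all_y)

# Between-level and within-level sums of squares
ss_between = sum(len(ys) * (sum(ys) / len(ys) - grand_mean) ** 2
                 for ys in data.values())
ss_within = sum((y - sum(ys) / len(ys)) ** 2
                for ys in data.values() for y in ys)

df_between = len(data) - 1          # L - 1 = 3
df_within = len(all_y) - len(data)  # N - L = 8
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f_stat)
```

A large F relative to the F(3, 8) reference distribution would indicate that the factor levels differ.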
== Bibliography ==
Caliński, Tadeusz; Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. Vol. 150. New York: Springer-Verlag. ISBN 0-387-98578-6.
Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.
Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0.
Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments. Vol. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.{{cite book}}: CS1 maint: multiple names: authors list (link)
Hinkelmann, Klaus and Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9.{{cite book}}: CS1 maint: multiple names: authors list (link)
Hinkelmann, Klaus and Kempthorne, Oscar (2005). Design and Analysis of Experiments, Volume 2: Advanced Experimental Design (First ed.). Wiley. ISBN 978-0-471-55177-5.{{cite book}}: CS1 maint: multiple names: authors list (link)
== See also ==
Randomized block design
== External links ==
Completely randomized designs
Completely randomized design (CRD)
This article incorporates public domain material from the National Institute of Standards and Technology | Wikipedia/Completely_randomized_design |
A quasi-experiment is a research design used to estimate the causal impact of an intervention. Quasi-experiments share similarities with experiments and randomized controlled trials, but specifically lack random assignment to treatment or control. Instead, quasi-experimental designs typically allow assignment to treatment condition to proceed how it would in the absence of an experiment.
Quasi-experiments are subject to concerns regarding internal validity, because the treatment and control groups may not be comparable at baseline. In other words, it may not be possible to convincingly demonstrate a causal link between the treatment condition and observed outcomes. This is particularly true if there are confounding variables that cannot be controlled or accounted for.
With random assignment, study participants have the same chance of being assigned to the intervention group or the comparison group. As a result, differences between groups on both observed and unobserved characteristics are due to chance rather than to a systematic factor related to treatment (e.g., illness severity). Although randomization itself does not guarantee that groups will be equivalent at baseline, any change in characteristics post-intervention is likely attributable to the intervention.
== Design ==
The first part of creating a quasi-experimental design is to identify the variables. The quasi-independent variable is the variable that is manipulated in order to affect a dependent variable. It is generally a grouping variable with different levels. Grouping means two or more groups, such as two groups receiving alternative treatments, or a treatment group and a no-treatment group (which may be given a placebo – placebos are more frequently used in medical or physiological experiments). The predicted outcome is the dependent variable. In a time series analysis, the dependent variable is observed over time for any changes that may take place. One or more covariates are usually included in analyses, ideally variables that predict both the treatment group and the outcome. These are additional variables that are often used to address confounding, e.g., through statistical adjustment or matching. Once the variables have been identified and defined, a procedure should then be implemented and group differences should be examined.
In an experiment with random assignment, study units have the same chance of being assigned to a given treatment condition. As such, random assignment ensures that both the experimental and control groups are equivalent. In a quasi-experimental design, assignment to a given treatment condition is based on something other than random assignment. Depending on the type of quasi-experimental design, the researcher might have control over assignment to the treatment condition but use some criteria other than random assignment (e.g., a cutoff score) to determine which participants receive the treatment, or the researcher may have no control over the treatment condition assignment and the criteria used for assignment may be unknown. Factors such as cost, feasibility, political concerns, or convenience may influence how or if participants are assigned to a given treatment conditions, and as such, quasi-experiments are subject to concerns regarding internal validity (i.e., can the results of the experiment be used to make a causal inference?).
Quasi-experiments are also effective because they can use pre-post testing: tests are administered before any data are collected to check for person confounds or particular participant tendencies, the actual experiment is then run, and post-test results are recorded. The pre- and post-test data can be compared as part of the study, or the pre-test data can be included in an explanation of the experimental data. Quasi-experiments have independent variables that already exist, such as age, gender, or eye color; these variables can be continuous (age) or categorical (gender). In short, naturally occurring variables are measured within quasi-experiments.
There are several types of quasi-experimental designs, each with different strengths, weaknesses and applications. These designs include (but are not limited to):
Difference in differences (pre-post with-without comparison)
Nonequivalent control groups design
no-treatment control group designs
nonequivalent dependent variables designs
removed treatment group designs
repeated treatment designs
reversed treatment nonequivalent control groups designs
cohort designs
post-test only designs
regression continuity designs
Regression discontinuity design
Case-control design
time-series designs
multiple time series design
interrupted time series design
propensity score matching or weighting
instrumental variables
Panel analysis
Of all of these designs, the regression discontinuity design comes the closest to the experimental design, as the experimenter maintains control of the treatment assignment and it is known to "yield an unbiased estimate of the treatment effects". It does, however, require large numbers of study participants and precise modeling of the functional form between the assignment and the outcome variable in order to yield the same power as a traditional experimental design.
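Of the designs listed, difference in differences admits a particularly compact numeric sketch; the outcome means below are hypothetical.

```python
# Difference in differences: compare the change in the treated group with the
# change in the control group. All values are illustrative assumptions.
treated_pre, treated_post = 10.0, 16.0
control_pre, control_post = 10.5, 12.5

did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)  # 4.0: the estimated effect, under the parallel-trends assumption
```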
Though quasi-experiments are sometimes shunned by those who consider themselves to be experimental purists (leading Donald T. Campbell to coin the term "queasy experiments" for them), they can be useful in areas where it is not feasible or desirable to conduct an experiment or randomized control trial. Such instances include evaluating the impact of public policy changes, educational interventions or large scale health interventions. The primary drawback of quasi-experimental designs is that they cannot eliminate the possibility of confounding bias, which can hinder one's ability to draw causal inferences. This drawback is often used as an excuse to discount quasi-experimental results. However, such bias can be controlled for by using various statistical techniques such as multiple regression, if one can identify and measure the confounding variable(s). Such techniques can be used to model and partial out the effects of confounding variables, thereby improving the accuracy of the results obtained from quasi-experiments. Moreover, the developing use of propensity score matching to match participants on variables important to the treatment selection process can also improve the accuracy of quasi-experimental results.
In fact, data derived from quasi-experimental analyses has been shown to closely match experimental data in certain cases, even when different criteria were used. In sum, quasi-experiments are a valuable tool, especially for the applied researcher. On their own, quasi-experimental designs do not allow one to make definitive causal inferences; however, they provide necessary and valuable information that cannot be obtained by experimental methods alone. Researchers, especially those interested in investigating applied research questions, should move beyond the traditional experimental design and avail themselves of the possibilities inherent in quasi-experimental designs.
== Ethics ==
A true experiment would, for example, randomly assign children to a scholarship, in order to control for all other variables. Quasi-experiments are commonly used in social sciences, public health, education, and policy analysis, especially when it is not practical or reasonable to randomize study participants to the treatment condition.
As an example, suppose we divide households into two categories: Households in which the parents spank their children, and households in which the parents do not spank their children. We can run a linear regression to determine if there is a positive correlation between parents' spanking and their children's aggressive behavior. However, to simply randomize parents to spanking or not spanking categories may not be practical or ethical, because some parents may believe it is morally wrong to spank their children and refuse to participate.
Some authors distinguish between a natural experiment and a "quasi-experiment". A natural experiment may approximate random assignment, or involve real randomization not by the experimenters or for the experiment. A quasi-experiment generally does not involve actual randomization.
Quasi-experiments have outcome measures, treatments, and experimental units, but do not use random assignment. Quasi-experiments are often chosen over true experiments because they are usually easier to conduct and combine features from both experimental and non-experimental designs: measured variables as well as manipulated variables can be included. Experimenters usually choose quasi-experiments because they maximize internal and external validity.
== Advantages ==
Since quasi-experimental designs are used when randomization is impractical or unethical, they are typically easier to set up than true experimental designs, which require random assignment of subjects. Additionally, utilizing quasi-experimental designs minimizes threats to ecological validity, as natural environments do not suffer the same problems of artificiality found in a well-controlled laboratory setting. Since quasi-experiments are natural experiments, findings from one may be applied to other subjects and settings, allowing for some generalizations to be made about the population. This method is also efficient for longitudinal research involving longer time periods, which can be followed up in different environments.
Other advantages of quasi-experiments include the freedom to apply whatever manipulations the experimenter chooses. In natural experiments, researchers must let manipulations occur on their own and have no control over them whatsoever. Using self-selected groups in quasi-experiments also reduces ethical and related concerns that might arise while conducting the study.
== Disadvantages ==
Quasi-experimental estimates of impact are subject to contamination by confounding variables. In the example above, a variation in the children's response to spanking is plausibly influenced by factors that cannot be easily measured and controlled, for example the child's intrinsic wildness or the parent's irritability. The lack of random assignment in the quasi-experimental design method may allow studies to be more feasible, but this also poses many challenges for the investigator in terms of internal validity. This deficiency in randomization makes it harder to rule out confounding variables and introduces new threats to internal validity. Because randomization is absent, some knowledge about the data can be approximated, but conclusions of causal relationships are difficult to determine due to a variety of extraneous and confounding variables that exist in a social environment. Moreover, even if these threats to internal validity are assessed, causation still cannot be fully established because the experimenter does not have total control over extraneous variables.
Disadvantages also include that the study groups may provide weaker evidence because of the lack of randomness. Randomness brings a lot of useful information to a study because it broadens results and therefore gives a better representation of the population as a whole. Using unequal groups can also be a threat to internal validity. If groups are not equal, which is sometimes the case in quasi-experiments, the experimenter may not be certain about the causes of the results.
== Internal validity ==
Internal validity is the approximate truth of inferences regarding cause-effect or causal relationships. Validity is important for quasi-experiments because they are all about causal relationships. It is pursued when the experimenter tries to control all variables that could affect the results of the experiment. Statistical regression, history, and the participants are all possible threats to internal validity. The question to ask while trying to keep internal validity high is: "Are there any other possible explanations for the outcome besides the intended cause?" If so, then internal validity might not be as strong.
== External validity ==
External validity is the extent to which the results obtained from a study sample can be generalized "to" some well-specified population of interest, and "across" subpopulations of people, times, contexts, and methods of study. Lynch has argued that generalizing "to" a population is almost never possible because the populations to which we would like to project are measures of future behavior, which by definition cannot be sampled. Therefore, the more relevant question is whether treatment effects generalize "across" subpopulations that vary on background factors that might not be salient to the researcher. External validity depends on whether the treatments studied have homogeneous effects across different subsets of people, times, contexts, and methods of study, or whether the sign and magnitude of any treatment effects change across subsets in ways that may not be acknowledged or understood by the researchers. Athey and Imbens and Athey and Wager have pioneered machine learning techniques for inductive understanding of heterogeneous treatment effects.
== Design types ==
"Person-by-treatment" designs are the most common type of quasi experiment design. In this design, the experimenter measures at least one independent variable. Along with measuring one variable, the experimenter will also manipulate a different independent variable. Because there is manipulating and measuring of different independent variables, the research is mostly done in laboratories. An important factor in dealing with person-by-treatment designs is that random assignment will need to be used in order to make sure that the experimenter has complete control over the manipulations that are being done to the study.
An example of this type of design was performed at the University of Notre Dame. The study was conducted to see if being mentored for one's job led to increased job satisfaction. The results showed that many people who had a mentor reported very high job satisfaction. However, the study also showed that a high number of those who did not receive a mentor were also satisfied. Seibert concluded that although the workers who had mentors were happy, he could not assume that the mentors themselves were the reason, because of the high number of satisfied non-mentored employees. This is why prescreening is very important, so that flaws in the study can be minimized before they appear.
"Natural experiments" are a different type of quasi-experiment design used by researchers. It differs from person-by-treatment in a way that there is not a variable that is being manipulated by the experimenter. Instead of controlling at least one variable like the person-by-treatment design, experimenters do not use random assignment and leave the experimental control up to chance. This is where the name "natural" experiment comes from. The manipulations occur naturally, and although this may seem like an inaccurate technique, it has actually proven to be useful in many cases. These are the studies done to people who had something sudden happen to them. This could mean good or bad, traumatic or euphoric. An example of this could be studies done on those who have been in a car accident and those who have not. Car accidents occur naturally, so it would not be ethical to stage experiments to traumatize subjects in the study. These naturally occurring events have proven to be useful for studying posttraumatic stress disorder cases.
== References ==
== External links ==
Quasi-Experimental Design at the Research Methods Knowledge Base | Wikipedia/Quasi-experimental_design |
Minimisation is a method of adaptive stratified sampling that is used in clinical trials, as described by Pocock and Simon.
The aim of minimisation is to minimise the imbalance between the number of patients in each treatment group over a number of factors. Normally patients would be allocated to a treatment group randomly and while this maintains a good overall balance, it can lead to imbalances within sub-groups. For example, if a majority of the patients who were receiving the active drug happened to be male, or smokers, the statistical usefulness of the study would be reduced.
The traditional method to avoid this problem, known as blocked randomisation, is to stratify patients according to a number of factors (e.g. male and female, or smokers and non-smokers) and to use a separate randomisation list for each group. Each randomisation list would be created such that after every block of x patients, there would be an equal number in each treatment group. The problem with this method is that the number of lists increases exponentially with the number of stratification factors.
Minimisation addresses this problem by calculating the imbalance within each factor should the patient be allocated to a particular treatment group. The various imbalances are added together to give the overall imbalance in the study. The treatment group that would minimise the imbalance can be chosen directly, or a random element may be added (perhaps allocating a higher chance to the groups that will minimise the imbalance, or perhaps only allocating a chance to groups that will minimise the imbalance).
The imbalances can be weighted if necessary to give some factors more importance than others. Similarly a ratio can be applied to the number of patients in each treatment group.
In use, minimisation often maintains a better balance than traditional blocked randomisation, and its advantage rapidly increases with the number of stratification factors.
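The allocation rule described above can be sketched as follows; the factor names, patient records, and random tie-breaking are illustrative assumptions rather than a fixed specification of Pocock and Simon's method.

```python
import random

# Hedged sketch of minimisation for two treatment groups, "A" and "B".
factors = ["sex", "smoker"]
counts = {g: {f: {} for f in factors} for g in ("A", "B")}  # counts[group][factor][level]

def imbalance_if(group, patient):
    """Total imbalance across factors if `patient` joined `group`."""
    total = 0
    for f in factors:
        level = patient[f]
        n = {g: counts[g][f].get(level, 0) for g in ("A", "B")}
        n[group] += 1
        total += abs(n["A"] - n["B"])
    return total

def allocate(patient):
    scores = {g: imbalance_if(g, patient) for g in ("A", "B")}
    best = min(scores.values())
    # Break ties at random; a weighted random element could be used instead.
    group = random.choice([g for g, s in scores.items() if s == best])
    for f in factors:
        counts[group][f][patient[f]] = counts[group][f].get(patient[f], 0) + 1
    return group

random.seed(0)
assignments = [allocate(p) for p in [
    {"sex": "M", "smoker": True},
    {"sex": "M", "smoker": True},
    {"sex": "F", "smoker": False},
]]
print(assignments)
```

Note how the second patient, identical to the first on both factors, is deterministically sent to the opposite group, keeping each sub-group balanced.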
== References == | Wikipedia/Minimisation_(clinical_trials) |
Zelen's design is an experimental design for randomized clinical trials proposed by Harvard School of Public Health statistician Marvin Zelen (1927-2014). In this design, patients are randomized to either the treatment or control group before giving informed consent. Because the group to which a given patient is assigned is known, consent can be sought conditionally.
== Overview ==
In this design, those patients receiving standard care need not be consented for participation in the study other than possibly for privacy issues. On the other hand, those patients randomized to the experimental group are consented as usual, except that they are consenting to the certainty of receiving the experimental treatment only; alternatively these patients can decline and receive the standard treatment instead.
In comparison, the current predominant design is for consent to be solicited prior to randomization. That is, eligible patients are asked if they would agree to participate in the clinical trial as a whole. This entails agreeing to receiving the experimental treatment as a possibility, receiving the control treatment as a possibility, and the uncertainty involved in not knowing.
== Statistical and epidemiological issues ==
There are a number of advantages conferred by the post-randomization consent design.
Clinicians are more comfortable with this design because each time consent is only sought for one treatment without the uncertainty of randomization.
Patients correspondingly are not subjected to the uncomfortable feeling that they may or may not be receiving the experimental treatment. This means effects such as resentful demoralization will not become an issue. Analogously, since patients allocated to the standard care group are not necessarily aware of the existence of an alternative treatment, Hawthorne effect is also less of an issue.
Some disadvantages include:
Contamination by crossing over may be more likely since patients assigned to the treatment group are fully aware of their assignment. Notably, statistical analysis should be performed with intention-to-treat.
Lack of allocation concealment, which may produce further bias.
Ethical drawbacks. Palmer (2002) notes, "in the few trials where [Zelen randomization] was employed it has been met with disapproval from participants and others, being deemed inappropriately deceptive and manipulative, at least in trials for serious or life-threatening conditions."
== See also ==
Cluster randomised controlled trial
Marvin Zelen (biostatistician)
Randomized controlled trial
Statistics
== References ==
Zelen, Marvin (1979). "A New Design for Randomized Clinical Trials". The New England Journal of Medicine. 300 (22): 1242–1245. doi:10.1056/NEJM197905313002203. PMID 431682.
Torgerson, D. J.; Roland, M. (1998). "What is Zelen's design?". BMJ. 316 (7131): 606. doi:10.1136/bmj.316.7131.606. PMC 1112637. PMID 9518917.
Palmer, C. R. (2002). "Ethics, data-dependent designs, and the strategy of clinical trials: time to start learning-as-we-go?". Statistical Methods in Medical Research. 11 (5): 381–402. CiteSeerX 10.1.1.128.9963. doi:10.1191/0962280202sm298ra. ISSN 0962-2802. PMID 12357585. S2CID 1818466.
== External links ==
Introduction to the Special Issue Dedicated to Marvin Zelen , Lifetime Data Analysis, Issue Volume 10, Number 4 / December, 2004. DOI 10.1007/s10985-004-4769-7, Pages 321-323. | Wikipedia/Zelen's_design |
In statistical modeling (especially process modeling), polynomial functions and rational functions are sometimes used as an empirical technique for curve fitting.
== Polynomial function models ==
A polynomial function is one that has the form
{\displaystyle y=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots +a_{2}x^{2}+a_{1}x+a_{0}}
where n is a non-negative integer that defines the degree of the polynomial. A polynomial with a degree of 0 is simply a constant function; with a degree of 1 is a line; with a degree of 2 is a quadratic; with a degree of 3 is a cubic, and so on.
Historically, polynomial models are among the most frequently used empirical models for curve fitting.
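As a minimal sketch of polynomial curve fitting, a degree-1 polynomial (a line) can be fitted by ordinary least squares in closed form; the data points below are hypothetical.

```python
# Closed-form least-squares fit of y = intercept + slope * x.
# The data are illustrative assumptions, not from any real experiment.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]

n = len(xs)
x_mean = sum(xs) / n
y_mean = sum(ys) / n
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) / sum(
    (x - x_mean) ** 2 for x in xs)
intercept = y_mean - slope * x_mean
print(slope, intercept)
```

Higher-degree fits follow the same least-squares principle with more coefficients, which is where the instability noted under "Disadvantages" arises.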
=== Advantages ===
These models are popular for the following reasons.
Polynomial models have a simple form.
Polynomial models have well known and understood properties.
Polynomial models have moderate flexibility of shapes.
Polynomial models are a closed family. Changes of location and scale in the raw data result in a polynomial model being mapped to a polynomial model. That is, polynomial models are not dependent on the underlying metric.
Polynomial models are computationally easy to use.
=== Disadvantages ===
However, polynomial models also have the following limitations.
Polynomial models have poor interpolatory properties. High-degree polynomials are notorious for oscillations between exact-fit values.
Polynomial models have poor extrapolatory properties. Polynomials may provide good fits within the range of data, but they will frequently deteriorate rapidly outside the range of the data.
Polynomial models have poor asymptotic properties. By their nature, polynomials have a finite response for finite x values, and any nonconstant polynomial grows without bound as x tends to infinity. Thus polynomials may not model asymptotic phenomena very well.
While no procedure is immune to the bias-variance tradeoff, polynomial models exhibit a particularly poor tradeoff between shape and degree. In order to model data with a complicated structure, the degree of the model must be high, indicating that the associated number of parameters to be estimated will also be high. This can result in highly unstable models.
When modeling via polynomial functions is inadequate due to any of the limitations above, the use of rational functions for modeling may give a better fit.
== Rational function models ==
A rational function is simply the ratio of two polynomial functions.
{\displaystyle y={\frac {a_{n}x^{n}+a_{n-1}x^{n-1}+\ldots +a_{2}x^{2}+a_{1}x+a_{0}}{b_{m}x^{m}+b_{m-1}x^{m-1}+\ldots +b_{2}x^{2}+b_{1}x+b_{0}}}}
with n denoting a non-negative integer that defines the degree of the numerator and m denoting a non-negative integer that defines the degree of the denominator. For fitting rational function models, the constant term in the denominator is usually set to 1. Rational functions are typically identified by the degrees of the numerator and denominator. For example, a quadratic for the numerator and a cubic for the denominator is identified as a quadratic/cubic rational function. The rational function model is a generalization of the polynomial model: rational function models contain polynomial models as a subset (i.e., the case when the denominator is a constant).
=== Advantages ===
Rational function models have the following advantages:
Rational function models have a moderately simple form.
Rational function models are a closed family. As with polynomial models, this means that rational function models are not dependent on the underlying metric.
Rational function models can take on an extremely wide range of shapes, accommodating a much wider range of shapes than does the polynomial family.
Rational function models have better interpolatory properties than polynomial models. Rational functions are typically smoother and less oscillatory than polynomial models.
Rational functions have excellent extrapolatory powers. Rational functions can typically be tailored to model the function not only within the domain of the data, but also so as to be in agreement with theoretical/asymptotic behavior outside the domain of interest.
Rational function models have excellent asymptotic properties. Rational functions can be either finite or infinite for finite x values, and finite or infinite as x tends to infinity. Thus, asymptotic behavior can easily be incorporated into a rational function model.
Rational function models can often be used to model complicated structure with a fairly low degree in both the numerator and denominator. This in turn means that fewer coefficients will be required compared to the polynomial model.
Rational function models are moderately easy to handle computationally. Although they are nonlinear models, rational function models are particularly easy nonlinear models to fit.
One common difficulty in fitting nonlinear models is finding adequate starting values. A major advantage of rational function models is the ability to compute starting values using a linear least squares fit. To do this, p points are chosen from the data set, with p denoting the number of parameters in the rational model. For example, given the linear/quadratic model
{\displaystyle y={\frac {A_{0}+A_{1}x}{1+B_{1}x+B_{2}x^{2}}},}
one would need to select four representative points, and perform a linear fit on the model
{\displaystyle y=A_{0}+A_{1}x-B_{1}xy-B_{2}x^{2}y,}
which is derived from the previous equation by clearing the denominator. Here, the x and y contain the subset of points, not the full data set. The estimated coefficients from this linear fit are used as the starting values for fitting the nonlinear model to the full data set.
This type of fit, with the response variable appearing on both sides of the function, should only be used to obtain starting values for the nonlinear fit. The statistical properties of fits like this are not well understood.
The subset of points should be selected over the range of the data. It is not critical which points are selected, although obvious outliers should be avoided.
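The starting-value procedure above can be sketched for the simpler linear/linear model y = (A0 + A1 x)/(1 + B1 x). The "data" below are generated from known coefficients, so the linearized fit should recover them exactly; the solver and the choice of points are illustrative assumptions.

```python
# Starting values via linearization: clear the denominator to get
# y = A0 + A1*x - B1*x*y, then solve at p = 3 chosen points.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

xs = [0.0, 1.0, 2.0]
ys = [(1 + 2 * x) / (1 + 0.5 * x) for x in xs]     # model with A0=1, A1=2, B1=0.5
rows = [[1.0, x, -x * y] for x, y in zip(xs, ys)]  # columns: A0, A1, B1
a0, a1, b1 = solve(rows, ys)
print(a0, a1, b1)
```

With noisy real data, these recovered values would be used only as starting values for a proper nonlinear fit to the full data set, as cautioned above.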
=== Disadvantages ===
Rational function models have the following disadvantages:
The properties of the rational function family are not as well known to engineers and scientists as are those of the polynomial family. The literature on the rational function family is also more limited. Because the properties of the family are often not well understood, it can be difficult to answer the following modeling question: Given that data has a certain shape, what values should be chosen for the degree of the numerator and the degree of the denominator?
Unconstrained rational function fitting can, at times, result in undesired vertical asymptotes due to roots in the denominator polynomial. The range of x values affected by the function "blowing up" may be quite narrow, but such asymptotes, when they occur, are a nuisance for local interpolation in the neighborhood of the asymptote point. These asymptotes are easy to detect by a simple plot of the fitted function over the range of the data. These nuisance asymptotes occur occasionally and unpredictably, but practitioners argue that the gain in flexibility of shapes is well worth the chance that they may occur, and that such asymptotes should not discourage choosing rational function models for empirical modeling.
== See also ==
Response surface methodology
Padé approximant
== Bibliography ==
Atkinson, A. C.; Donev, A. N.; Tobias, R. D. (2007). Optimum Experimental Designs, with SAS. Oxford University Press. pp. 511+xvi. ISBN 978-0-19-929660-6.
Box, G. E. P. and Draper, Norman. 2007. Response Surfaces, Mixtures, and Ridge Analyses, Second Edition [of Empirical Model-Building and Response Surfaces, 1987], Wiley.
Kiefer, Jack Carl (1985). L. D. Brown; et al. (eds.). Collected Papers III Design of Experiments. Springer-Verlag. ISBN 978-0-387-96004-3.
R. H. Hardin and N. J. A. Sloane, "A New Approach to the Construction of Optimal Designs", Journal of Statistical Planning and Inference, vol. 37, 1993, pp. 339-369
R. H. Hardin and N. J. A. Sloane, "Computer-Generated Minimal (and Larger) Response Surface Designs: (I) The Sphere"
R. H. Hardin and N. J. A. Sloane, "Computer-Generated Minimal (and Larger) Response Surface Designs: (II) The Cube"
Ghosh, S.; Rao, C. R., eds. (1996). Design and Analysis of Experiments. Handbook of Statistics. Vol. 13. North-Holland. ISBN 978-0-444-82061-7.
Draper, Norman & Lin, Dennis K. J. "Response Surface Designs". pp. 343–375.
Gaffke, N. & Heiligers, B. "Approximate Designs for Polynomial Regression: Invariance, Admissibility, and Optimality". pp. 1149–1199.
Melas, Viatcheslav B. (2006). Functional Approach to Optimal Experimental Design. Lecture Notes in Statistics. Vol. 184. Springer-Verlag. ISBN 978-0-387-98741-5. (Modeling with rational functions)
=== Historical ===
Gergonne, J. D. (1815). "Application de la méthode des moindres quarrés à l'interpolation des suites". Annales de mathématiques pures et appliquées. 6: 242–252.
Gergonne, J. D. (1974) [1815]. "The application of the method of least squares to the interpolation of sequences". Historia Mathematica. 1 (4) (Translated by Ralph St. John and S. M. Stigler from the 1815 French ed.): 439–447. doi:10.1016/0315-0860(74)90034-2.
Stigler, Stephen M. (1974). "Gergonne's 1815 paper on the design and analysis of polynomial regression experiments". Historia Mathematica. 1 (4): 431–439. doi:10.1016/0315-0860(74)90033-0.
Smith, Kirstine (1918). "On the Standard Deviations of Adjusted and Interpolated Values of an Observed Polynomial Function and its Constants and the Guidance They Give Towards a Proper Choice of the Distribution of the Observations". Biometrika. 12 (1/2): 1–85. doi:10.1093/biomet/12.1-2.1. JSTOR 2331929.
== External links ==
Rational Function Models
This article incorporates public domain material from the National Institute of Standards and Technology.
Hormone replacement therapy (HRT), also known as menopausal hormone therapy or postmenopausal hormone therapy, is a form of hormone therapy used to treat symptoms associated with female menopause. Effects of menopause can include symptoms such as hot flashes, accelerated skin aging, vaginal dryness, decreased muscle mass, and complications such as osteoporosis (bone loss), sexual dysfunction, and vaginal atrophy. They are mostly caused by low levels of female sex hormones (e.g. estrogens) that occur during menopause.
Estrogens and progestogens are the main hormone drugs used in HRT. Progesterone is the main female sex hormone that occurs naturally and is also manufactured into a drug that is used in menopausal hormone therapy. Although both classes of hormones can have symptomatic benefit, progestogen is specifically added to estrogen regimens, unless the uterus has been removed, to avoid the increased risk of endometrial cancer. Unopposed estrogen therapy promotes endometrial hyperplasia and increases the risk of cancer, while progestogen reduces this risk. Androgens like testosterone are sometimes used as well. HRT is available through a variety of different routes.
The long-term effects of HRT on most organ systems vary by age and time since the last physiological exposure to hormones, and there can be large differences in individual regimens, factors which have made analyzing effects difficult. The Women's Health Initiative (WHI) is an ongoing study of over 27,000 women that began in 1991, with the most recent analyses suggesting that, when initiated within 10 years of menopause, HRT reduces all-cause mortality and risks of coronary disease, osteoporosis, and dementia; after 10 years the beneficial effects on mortality and coronary heart disease are no longer apparent, though there are decreased risks of hip and vertebral fractures and an increased risk of venous thromboembolism when taken orally.
"Bioidentical" hormone replacement is a development in the 21st century and uses manufactured compounds with "exactly the same chemical and molecular structure as hormones that are produced in the human body." These are mainly manufactured from plant steroids and can be a component of either registered pharmaceutical or custom-made compounded preparations, with the latter generally not recommended by regulatory bodies due to their lack of standardization and formal oversight. Bioidentical hormone replacement has inadequate clinical research to determine its safety and efficacy as of 2017.
The current indications for use from the United States Food and Drug Administration (FDA) include short-term treatment of menopausal symptoms, such as vasomotor hot flashes or vaginal atrophy, and prevention of osteoporosis.
== Medical uses ==
Approved uses of HRT in the United States include short-term treatment of menopausal symptoms such as hot flashes and vaginal atrophy, and prevention of osteoporosis. The American College of Obstetricians and Gynecologists (ACOG) approves of HRT for symptomatic relief of menopausal symptoms, and advocates its use beyond the age of 65 in appropriate scenarios. The North American Menopause Society (NAMS) 2016 annual meeting mentioned that HRT may have more benefits than risks in women before the age of 60.
A consensus expert opinion published by The Endocrine Society stated that when taken during perimenopause or the initial years of menopause, HRT carries fewer risks than previously published, and reduces all cause mortality in most scenarios. The American Association of Clinical Endocrinologists (AACE) has also released position statements approving of HRT when appropriate.
Women receiving this treatment are usually post-, peri-, or surgically induced menopausal. Menopause is the permanent cessation of menstruation resulting from loss of ovarian follicular activity, defined as beginning twelve months after the final natural menstrual cycle. This twelve month time point divides menopause into early and late transition periods known as 'perimenopause' and 'postmenopause'. Premature menopause can occur if the ovaries are surgically removed, as can be done to treat ovarian or uterine cancer.
Demographically, the vast majority of data available is in postmenopausal American women with concurrent pre-existing conditions and an average age of over 60 years.
=== Menopausal symptoms ===
HRT is often given as a short-term relief from menopausal symptoms during perimenopause. Potential menopausal symptoms include:
Hot flashes – vasomotor symptoms
Vulvovaginal atrophy – atrophic vaginitis and dryness
Dyspareunia – painful sexual intercourse due to vaginal atrophy and lack of lubrication
Bone loss – decreased bone mineral density, which can eventually lead to osteopenia, osteoporosis, and associated fractures
Decreased sexual desire
Defeminization – diminished feminine fat distribution and accelerated skin aging
Sleep disturbances and joint pain
The most common of these are loss of sexual drive and vaginal dryness.
The use of hormone therapy for heart health among menopausal women has declined significantly over the past few decades. In 1999, nearly 27% of menopausal women in the U.S. used estrogen, but by 2020, that figure had dropped to less than 5%. Evidence published in 2024 supports the cardiovascular benefits of hormone therapy, including improvements in insulin resistance and other heart-related markers. This adds to a growing body of research highlighting hormone therapy's effectiveness, not only for heart health but also for managing menopausal symptoms like hot flashes, disrupted sleep, vaginal dryness, and painful intercourse. Despite its proven benefits, many menopausal women avoid hormone therapy, often due to lingering misconceptions about its risks and societal discomfort with openly discussing menopause.
=== Sexual function ===
HRT can help with the lack of sexual desire and sexual dysfunction that can occur with menopause. Epidemiological surveys of women between 40 and 69 years suggest that 75% of women remain sexually active after menopause. With increasing life spans, women today are living one third or more of their lives in a postmenopausal state, a period during which healthy sexuality can be integral to their quality of life.
Decreased libido and sexual dysfunction are common issues in postmenopausal women, an entity referred to as hypoactive sexual desire disorder (HSDD); its signs and symptoms can both be improved by HRT. Several hormonal changes take place during this period, including a decrease in estrogen and an increase in follicle-stimulating hormone. For most women, the majority of change occurs during the late perimenopausal and postmenopausal stages. Decreases in sex hormone-binding globulin (SHBG) and inhibin (A and B) also occur. Testosterone is present in women at a lower level than men, peaking at age 30 and declining gradually with age; there is less variation during the menopausal transition relative to estrogen and progesterone.
A global consensus position statement has advised that postmenopausal testosterone replacement to premenopausal levels can be effective for HSDD. However, safety information for testosterone treatment is not available beyond two years of continuous therapy, and dosing above physiologic levels is not advised. Testosterone patches have been found to restore sexual desire in postmenopausal women. There is insufficient data to evaluate the impact of testosterone replacement on heart disease or breast cancer, with most trials having included women taking concomitant estrogen and progesterone and with testosterone therapy itself being relatively short in duration. In the setting of this limited data, testosterone therapy has not been associated with adverse events.
Not all women are responsive, especially those with preexisting sexual difficulties. Estrogen replacement can restore vaginal cells, pH levels, and blood flow to the vagina, all of which tend to deteriorate at the onset of menopause. Pain or discomfort with sex appears to be the most responsive component to estrogen. It also has been shown to have positive effects on the urinary tract. Estrogen can also reduce vaginal atrophy and increase sexual arousal, frequency and orgasm.
The effectiveness of hormone replacement can decline in some women after long-term use. A number of studies have also found that the combined effects of estrogen/androgen replacement therapy can increase libido and arousal over estrogen alone. Tibolone, a synthetic steroid with estrogenic, androgenic, and progestogenic properties that is available in Europe, has the ability to improve mood, libido, and physical symptomatology. In various placebo-controlled studies, improvements in vasomotor symptoms, emotional response, sleep disturbances, physical symptoms, and sexual desire have been seen, though it also carries a similar risk profile to conventional HRT.
=== Muscle and bone ===
There is a significant decrease in hip fracture risk during treatment that to a lesser degree persists after HRT is stopped. It also helps collagen formation, which in turn improves intervertebral disc and bone strength.
Hormone replacement therapy in the form of estrogen and androgen can be effective at reversing the effects of aging on muscle. Lower testosterone is associated with lower bone density and higher free testosterone is associated with lower hip fracture rates in older women. Testosterone therapy, which can be used for decreased sexual function, can also increase bone mineral density and muscle mass.
== Side effects ==
Side effects in HRT occur with varying frequency and include:
== Health effects ==
=== Heart disease ===
The effect of HRT in menopause appears to be divergent, with lower risk of heart disease when started within five years, but no impact after ten. For women who are in early menopause and have no issues with their cardiovascular health, HRT comes with a low risk of adverse cardiovascular events. There may be an increase in heart disease if HRT is given twenty years post-menopause. This variability has led some reviews to suggest an absence of significant effect on morbidity. Importantly, there is no difference in long-term mortality from HRT, regardless of age.
A Cochrane review suggested that women starting HRT less than 10 years after menopause had lower mortality and coronary heart disease, without any strong effect on the risk of stroke and pulmonary embolism. Those starting therapy more than 10 years after menopause showed little effect on mortality and coronary heart disease, but an increased risk of stroke. Both therapies had an association with venous clots and pulmonary embolism.
HRT with estrogen and progesterone also improves cholesterol levels. With menopause, HDL decreases, while LDL, triglycerides and lipoprotein(a) increase, patterns that reverse with estrogen. Beyond this, HRT improves heart contraction, coronary blood flow, sugar metabolism, and decreases platelet aggregation and plaque formation. HRT may promote reverse cholesterol transport through induction of cholesterol ABC transporters. Atherosclerosis imaging trials show that HRT decreases the formation of new vascular lesions, but does not reverse the progression of existing lesions. HRT also results in a large reduction in the pro-thrombotic lipoprotein(a).
Studies on cardiovascular disease with testosterone therapy have been mixed, with some suggesting no effect or a mild negative effect, though others have shown an improvement in surrogate markers such as cholesterol, triglycerides and weight. Testosterone has a positive effect on vascular endothelial function and tone with observational studies suggesting that women with lower testosterone may be at greater risk for heart disease. Available studies are limited by small sample size and study design. Low sex hormone-binding globulin, which occurs with menopause, is associated with increased body mass index and risk for type 2 diabetes.
=== Blood clots ===
Effects of hormone replacement therapy on venous blood clot formation and potential for pulmonary embolism may vary with different estrogen and progestogen therapies, and with different doses or method of use. Comparisons between routes of administration suggest that when estrogens are applied to the skin or vagina, there is a lower risk of blood clots, whereas when used orally, the risk of blood clots and pulmonary embolism is increased. Skin and vaginal routes of hormone therapy are not subject to first pass metabolism, and so lack the anabolic effects that oral therapy has on liver synthesis of vitamin K-dependent clotting factors, possibly explaining why oral therapy may increase blood clot formation.
While a 2018 review found that taking progesterone and estrogen together can decrease this risk, other reviews reported an increased risk of blood clots and pulmonary embolism when estrogen and progestogen were combined, particularly when treatment was started 10 years or more after menopause and when the women were older than 60 years.
The risk of venous thromboembolism may be reduced with bioidentical preparations, though research on this is only preliminary.
=== Stroke ===
Multiple studies suggest that the possibility of HRT related stroke is absent if therapy is started within five years of menopause, and that the association is absent or even preventive when given by non-oral routes. Ischemic stroke risk was increased during the time of intervention in the WHI, with no significant effect after the cessation of therapy and no difference in mortality at long term follow up. When oral synthetic estrogen or combined estrogen-progestogen treatment is delayed until five years from menopause, cohort studies in Swedish women have suggested an association with hemorrhagic and ischemic stroke. Another large cohort of Danish women suggested that the specific route of administration was important, finding that although oral estrogen increased risk of stroke, absorption through the skin had no impact, and vaginal estrogen actually had a decreased risk.
=== Endometrial cancer ===
In postmenopausal women, continuous combined estrogen plus progestin decreases endometrial cancer incidence. The duration of progestogen therapy should be at least 14 days per cycle to prevent endometrial disease.
Endometrial cancer has been grouped into two forms in the context of hormone replacement. Type 1 is the most common, can be associated with estrogen therapy, and is usually low grade. Type 2 is not related to estrogen stimulation and usually higher grade and poorer in prognosis. The endometrial hyperplasia that leads to endometrial cancer with estrogen therapy can be prevented by concomitant administration of progestogen. The extensive use of high-dose estrogens for birth control in the 1970s is thought to have resulted in a significant increase in the incidence of type 1 endometrial cancer.
Paradoxically, progestogens do promote the growth of uterine fibroids, and a pelvic ultrasound can be performed before beginning HRT to make sure there are no underlying uterine or endometrial lesions.
Androgens do not stimulate endometrial proliferation in post menopausal women, and appear to inhibit the proliferation induced by estrogen to a certain extent.
There is insufficient high‐quality evidence to inform women considering hormone replacement therapy after treatment for endometrial cancer.
=== Breast cancer ===
In general, hormone replacement therapy to treat menopause is associated with only a small increased risk of breast cancer. The level of risk also depends on the type of HRT, the duration of the treatment and the age of the person. Estrogen-only HRT, taken by people who had a hysterectomy, comes with an extremely low level of breast cancer risk. The most commonly taken combined HRT (estrogen and progestogen) is linked to a small risk of breast cancer. This risk is lower for women in their 50s and higher for older women. The risk increases with the duration of HRT. When HRT is taken for a year or less, there is no increased risk of breast cancer. HRT taken for more than 5 years comes with an increased risk but the risk reduces after the therapy is stopped.
There is a non-statistically significant increased rate of breast cancer for hormone replacement therapy with synthetic progestogens. The risk may be reduced with bioidentical progesterone, though the only prospective study that suggested this was underpowered due to the rarity of breast cancer in the control population. There have been no randomized controlled trials as of 2018. The relative risk of breast cancer also varies depending on the interval between menopause and HRT and route of synthetic progestin administration.
The most recent follow up of the Women's Health Initiative participants demonstrated a lower incidence of breast cancer in post-hysterectomy participants taking equine estrogen alone, though the relative risk was increased if estrogen was taken with medroxyprogesterone. Estrogen is usually only given alone in the setting of a hysterectomy due to the increased risk of vaginal bleeding and uterine cancer with unopposed estrogen.
HRT has been more strongly associated with risk of breast cancer in women with lower body mass indices (BMIs). No breast cancer association has been found with BMIs of over 25. It has been suggested by some that the absence of significant effect in some of these studies could be due to selective prescription to overweight women who have higher baseline estrone, or to the very low progesterone serum levels after oral administration leading to a high tumor inactivation rate.
Evaluating the response of breast tissue density to HRT using mammography appears to help assessing the degree of breast cancer risk associated with therapy; women with dense or mixed-dense breast tissue have a higher risk of developing breast cancer than those with low density tissue.
Micronized progesterone does not appear to be associated with breast cancer risk when used for less than five years with limited data suggesting an increased risk when used for longer duration.
For women who previously have had breast cancer, it is recommended to first consider other options for menopausal effects, such as bisphosphonates or selective estrogen receptor modulators (SERMs) for osteoporosis, cholesterol-lowering agents and aspirin for cardiovascular disease, and vaginal estrogen for local symptoms. Observational studies of systemic HRT after breast cancer are generally reassuring. If HRT is necessary after breast cancer, estrogen-only therapy or estrogen therapy with a progestogen may be safer options than combined systemic therapy. In women who are BRCA1 or BRCA2 mutation carriers, HRT does not appear to impact breast cancer risk. The relative number of women using HRT who also obtain regular screening mammograms is higher than that in women who do not use HRT, a factor which has been suggested as contributing to different breast cancer detection rates in the two groups.
With androgen therapy, pre-clinical studies have suggested an inhibitory effect on breast tissue though the majority of epidemiological studies suggest a positive association.
=== Ovarian cancer ===
HRT is associated with an increased risk of ovarian cancer, with women using HRT having about one additional case of ovarian cancer per 1,000 users. This risk is decreased when progestogen therapy is given concomitantly, as opposed to estrogen alone, and also decreases with increasing time since stopping HRT. Regarding the specific subtype, there may be a higher risk of serous cancer, but no association with clear cell, endometrioid, or mucinous ovarian cancer. Hormonal therapy in ovarian cancer survivors after surgical removal of the ovaries is generally thought to improve survival rates.
=== Other cancers ===
==== Colorectal cancer ====
In the WHI, women who took combined estrogen-progesterone therapy had a lower risk of getting colorectal cancer. However, the cancers they did have were more likely to have spread to lymph nodes or distant sites than colorectal cancer in women not taking hormones. In colorectal cancer survivors, usage of HRT is thought to lead to lower recurrence risk and overall mortality.
==== Cervical cancer ====
There appears to be a significantly decreased risk of cervical squamous cell cancer in post menopausal women treated with HRT and a weak increase in adenocarcinoma. No studies have reported an increased risk of recurrence when HRT is used with cervical cancer survivors.
=== Neurodegenerative disorders ===
As of 2024 there has been conflicting evidence from clinical studies regarding the beneficial effects of estrogens at reducing the risk of Alzheimer's Disease.
For prevention, the WHI suggested in 2013 that HRT may increase the risk of dementia if initiated after 65 years of age, but may have a neutral outcome or be neuroprotective for those between 50 and 55 years. However, the prospective ELITE trial showed negligible effects on verbal memory and other mental skills regardless of how soon after menopause a woman began HRT.
A 2012 review of clinical and epidemiological studies of HRT and Alzheimer's disease (AD), Parkinson's disease (PD), frontotemporal dementia (FTD), and HIV-related dementia concluded that results were inconclusive at that time.
The majority of clinical and epidemiological studies show either no association with the risk of developing Parkinson's disease or inconclusive results. One Danish study suggested an increased risk of Parkinson's with HRT in cyclical dosing schedules.
Other randomized trials have shown HRT to improve executive and attention processes outside of the context of dementia in postmenopausal women, both in asymptomatic and those with mild cognitive impairment.
As of 2011, estrogen replacement in postmenopausal women with Parkinson's disease appeared to improve motor symptoms and activities of daily living, with significant improvement of UPDRS scores. Testosterone replacement has also been shown to be associated with small statistically significant improvements in verbal learning and memory in postmenopausal women, but DHEA has not been found to improve cognitive performance after menopause.
Pre-clinical studies have indicated that endogenous estrogen and testosterone are neuroprotective and can prevent brain amyloid deposition.
== Contraindications ==
The following are absolute and relative contraindications to HRT:
=== Absolute contraindications ===
Undiagnosed vaginal bleeding
Severe liver disease
Pregnancy
Severe coronary artery disease
Aggressive breast, uterine or ovarian cancer
=== Relative contraindications ===
Migraine headaches
History of breast cancer
History of ovarian cancer
Venous thrombosis
History of uterine fibroids
Atypical ductal hyperplasia of the breast
Active gallbladder disease (cholangitis, cholecystitis)
Well-differentiated and early endometrial cancer – no longer an absolute contraindication once treatment for the malignancy is complete
== History and research ==
The extraction of conjugated equine estrogens (CEEs) from the urine of pregnant mares led to the marketing in 1942 of Premarin, one of the earlier forms of estrogen to be introduced. From that time until the mid-1970s, estrogen was administered without a supplemental progestogen. Beginning in 1975, studies began to show that without a progestogen, unopposed estrogen therapy with Premarin resulted in an eight-fold increased risk of endometrial cancer, eventually causing sales of Premarin to plummet. It was recognized in the early 1980s that the addition of a progestogen to estrogen reduced this risk to the endometrium. This led to the development of combined estrogen–progestogen therapy, most commonly with a combination of conjugated equine estrogens (Premarin) and medroxyprogesterone (Provera).
=== Trials ===
The Women's Health Initiative trials were conducted between 1991 and 2006 and were the first large, double-blind, placebo-controlled clinical trials of HRT in healthy women. Their results were both positive and negative, suggesting that during the time of hormone therapy itself, there are increases in invasive breast cancer, stroke and lung clots. Other risks include increased endometrial cancer, gallbladder disease, and urinary incontinence, while benefits include decreased hip fractures, decreased incidence of diabetes, and improvement of vasomotor symptoms. There is also an increased risk of dementia with HRT in women over 65, though at younger ages it appears to be neuroprotective. After the cessation of HRT, the WHI continued to observe its participants, and found that most of these risks and benefits dissipated, though some elevation in breast cancer risk did persist. Other studies have also suggested an increased risk of ovarian cancer.
The arm of the WHI receiving combined estrogen and progestin therapy was closed prematurely in 2002 by its Data Monitoring Committee (DMC) due to perceived health risks, though this occurred a full year after the data suggesting increased risk became manifest. In 2004, the arm of the WHI in which post-hysterectomy patients were being treated with estrogen alone was also closed by the DMC. Clinical medical practice changed based upon two parallel Women's Health Initiative (WHI) studies of HRT. Prior studies were smaller, and many were of women who electively took hormonal therapy. One portion of the parallel studies followed over 16,000 women for an average of 5.2 years, half of whom took placebo, while the other half took a combination of CEEs and MPA (Prempro). This WHI estrogen-plus-progestin trial was stopped prematurely in 2002 because preliminary results suggested risks of combined CEEs and progestins exceeded their benefits. The first report on the halted WHI estrogen-plus-progestin study came out in July 2002.
Initial data from the WHI in 2002 suggested mortality to be lower when HRT was begun earlier, between age 50 to 59, but higher when begun after age 60. In older patients, there was an apparent increased incidence of breast cancer, heart attacks, venous thrombosis, and stroke, although a reduced incidence of colorectal cancer and bone fracture. At the time, the WHI recommended that women with non-surgical menopause take the lowest feasible dose of HRT for the shortest possible time to minimize associated risks. Some of the WHI findings were again found in a larger national study done in the United Kingdom, known as the Million Women Study (MWS). As a result of these findings, the number of women taking HRT dropped precipitously. In 2012, the United States Preventive Services Task Force (USPSTF) concluded that the harmful effects of combined estrogen and progestin therapy likely exceeded their chronic disease prevention benefits.
When the first WHI follow-up study was published in 2002, both older and younger age groups taking HRT in post menopause had a slightly higher incidence of breast cancer, and both heart attack and stroke were increased in older patients, although not in younger participants. Breast cancer was increased in women treated with estrogen and a progestin, but not with estrogen and progesterone or estrogen alone. Treatment with unopposed estrogen (i.e., an estrogen alone without a progestogen) is contraindicated if the uterus is still present, due to its proliferative effect on the endometrium. The WHI also found a reduced incidence of colorectal cancer when estrogen and a progestogen were used together, and most importantly, a reduced incidence of bone fractures. Ultimately, the study found disparate results for all cause mortality with HRT, finding it to be lower when HRT was begun during ages 50–59, but higher when begun after age 60. The authors of the study recommended that women with non-surgical menopause take the lowest feasible dose of hormones for the shortest time to minimize risk.
The data published by the WHI suggested supplemental estrogen increased risk of venous thromboembolism and breast cancer but was protective against osteoporosis and colorectal cancer, while the impact on cardiovascular disease was mixed. These results were later supported in trials from the United Kingdom, but not in more recent studies from France and China. Genetic polymorphism appears to be associated with inter-individual variability in metabolic response to HRT in postmenopausal women.
The WHI reported statistically significant increases in rates of breast cancer, coronary heart disease, strokes and pulmonary emboli. The study also found statistically significant decreases in rates of hip fracture and colorectal cancer. "A year after the study was stopped in 2002, an article was published indicating that estrogen plus progestin also increases the risks of dementia." The conclusion of the study was that the HRT combination presented risks that outweighed its measured benefits. The results were almost universally reported as risks and problems associated with HRT in general, rather than with Prempro, the specific proprietary combination of CEEs and MPA studied.
After the increased clotting found in the first WHI results was reported in 2002, the number of Prempro prescriptions filled reduced by almost half. Following the WHI results, a large percentage of HRT users opted out of them, which was quickly followed by a sharp drop in breast cancer rates. The decrease in breast cancer rates has continued in subsequent years. An unknown number of women started taking alternatives to Prempro, such as compounded bioidentical hormones, though researchers have asserted that compounded hormones are not significantly different from conventional hormone therapy.
The other portion of the parallel studies featured women who were post hysterectomy and so received either placebo or CEEs alone. This group did not show the risks demonstrated in the combination hormone study, and the estrogen-only study was not halted in 2002. However, in February 2004 it, too, was halted. While there was a 23% decreased incidence of breast cancer in the estrogen-only study participants, risks of stroke and pulmonary embolism were increased slightly, predominantly in patients who began HRT over the age of 60.
Several other large studies and meta-analyses have reported reduced mortality for HRT in women younger than age 60 or within 10 years of menopause, and a debatable or absent effect on mortality in women over 60.
Though research thus far has been substantial, further investigation is needed to fully understand differences in effect for different types of HRT and lengths of time since menopause. As of 2023, for example, no trial has studied women who begin taking HRT around age 50 and continue taking it for longer than 10 years.
== Available forms ==
There are five major human steroid hormones: estrogens, progestogens, androgens, mineralocorticoids, and glucocorticoids. Estrogens and progestogens are the two most often used in menopause. They are available in a wide variety of FDA approved and non–FDA-approved formulations.
In women with intact uteruses, estrogens are almost always given in combination with progestogens, as long-term unopposed estrogen therapy is associated with a markedly increased risk of endometrial hyperplasia and endometrial cancer. Conversely, in women who have undergone a hysterectomy or do not have a uterus, a progestogen is not required, and estrogen can be used alone. There are many combined formulations which include both estrogen and progestogen.
Specific types of hormone replacement include:
Estrogens – bioidentical estrogens like estradiol and estriol, animal-derived estrogens like conjugated estrogens (CEEs), and synthetic estrogens like ethinylestradiol
Progestogens – bioidentical progesterone, and progestins (synthetic progestogens) like medroxyprogesterone acetate (MPA), norethisterone, and dydrogesterone
Androgens – bioidentical testosterone and dehydroepiandrosterone (DHEA), and synthetic anabolic steroids like methyltestosterone and nandrolone decanoate
Tibolone – a synthetic medication available in Europe but not the United States – is more effective than placebo but less effective than combination hormone therapy in postmenopausal women. It may decrease the risk of breast and colorectal cancer, though it can be associated with vaginal bleeding and endometrial cancer, and may increase the risk of stroke in women over age 60.
Vaginal estrogen can improve local atrophy and dryness, with fewer systemic effects than estrogens delivered by other routes. Sometimes an androgen, generally testosterone, can be added to treat diminished libido.
=== Continuous versus cyclic ===
Dosage is often varied cyclically to more closely mimic the ovarian hormone cycle, with estrogens taken daily and progestogens taken for about two weeks every month or every other month, a schedule referred to as 'cyclic' or 'sequentially combined'. Alternatively, 'continuous combined' HRT can be given at a constant daily hormonal dosage. Continuous combined HRT is associated with less complex endometrial hyperplasia than cyclic HRT. The impact on breast density appears to be similar with both regimens.
=== Route of administration ===
The medications used in menopausal HRT are available in numerous different formulations for use by a variety of different routes of administration:
Oral administration – tablets, capsules
Transdermal administration – patches, gels, creams
Vaginal administration – tablets, creams, suppositories, rings
Intramuscular or subcutaneous injection – solutions in vials or ampoules
Subcutaneous implant – surgically inserted pellets placed into fat tissue
Less commonly sublingual, buccal, intranasal, and rectal administration, as well as intrauterine devices
More recently developed forms of drug delivery are alleged to have increased local effect, lower dosing, fewer side effects, and constant rather than cyclical serum hormone levels. Transdermal and vaginal estrogen, in particular, avoid first-pass metabolism through the liver. This in turn prevents an increase in clotting factors and the accumulation of anti-estrogenic metabolites, resulting in fewer adverse side effects, particularly with regard to cardiovascular disease and stroke.
Injectable forms of estradiol exist and have been used occasionally in the past. However, they are rarely used in menopausal hormone therapy in modern times and are no longer recommended. Instead, other non-oral forms of estradiol such as transdermal estradiol are recommended and may be used. Estradiol injectables are generally well-tolerated and convenient, requiring infrequent administration. However, this form of estradiol does not release estradiol at a constant rate and there are very high circulating estradiol levels soon after injection followed by a rapid decline in levels. Injections may also be painful. Examples of estradiol injectables that may be used in menopausal hormone therapy include estradiol valerate and estradiol cypionate. In terms of injectable progestogens, injectable progesterone is associated with pain and injection site reactions as well as a short duration of action requiring very frequent injections, and is similarly not recommended in menopausal hormone therapy.
=== Bioidentical hormone therapy ===
Bioidentical hormone therapy (BHT) is the usage of hormones that are chemically identical to those produced in the body. Although proponents of BHT claim advantages over non-bioidentical or conventional hormone therapy, the FDA does not recognize the term 'bioidentical hormone', stating there is no scientific evidence that these hormones are identical to their naturally occurring counterparts. There are, however, FDA approved products containing hormones classified as 'bioidentical'.
Bioidentical hormones can be used in either pharmaceutical or compounded preparations, with the latter generally not recommended by regulatory bodies due to their lack of standardization and regulatory oversight. Most classifications of bioidentical hormones do not take into account the manufacturing, source, or delivery method of the products, and so describe both non-FDA-approved compounded products and FDA-approved pharmaceuticals as 'bioidentical'. The British Menopause Society has issued a consensus statement endorsing the distinction between "compounded" forms (cBHRT), described as unregulated, custom-made by specialty pharmacies, and subject to heavy marketing, and "regulated" pharmaceutical-grade forms (rBHRT), which undergo formal oversight by entities such as the FDA and form the basis of most clinical trials. Some practitioners recommending compounded bioidentical HRT also use salivary or serum hormonal testing to monitor response to therapy, a practice not endorsed by current clinical guidelines in the United States and Europe.
Bioidentical hormones in pharmaceuticals may have very limited clinical data, with no randomized controlled prospective trials to date comparing them to their animal derived counterparts. Some pre-clinical data has suggested a decreased risk of venous thromboembolism, cardiovascular disease, and breast cancer. As of 2012, guidelines from the North American Menopause Society, the Endocrine Society, the International Menopause Society, and the European Menopause and Andropause Society endorsed the reduced risk of bioidentical pharmaceuticals for those with increased clotting risk.
==== Compounding ====
Compounding for HRT is generally discouraged by the FDA and medical industry in the United States due to a lack of regulation and standardized dosing. The U.S. Congress granted the FDA explicit but limited oversight of compounded drugs in a 1997 amendment to the Federal Food, Drug, and Cosmetic Act (FDCA), but the agency has encountered obstacles in this role since that time. After a 2012 meningitis outbreak caused by contaminated steroid injections killed 64 patients and harmed 750 more, Congress passed the 2013 Drug Quality and Security Act, authorizing the FDA to create a voluntary registration for facilities that manufacture compounded drugs and reinforcing FDCA regulations for traditional compounding. The DQSA and its reinforcement of provision §503A of the FDCA solidified FDA authority to enforce FDCA regulations against compounders of bioidentical hormone therapy.
In the United Kingdom, on the other hand, compounding is a regulated activity. The Medicines and Healthcare products Regulatory Agency regulates compounding performed under a Manufacturing Specials license and the General Pharmaceutical Council regulates compounding performed within a pharmacy. All testosterone prescribed in the United Kingdom is bioidentical, with its use supported by the National Health Service. There is also marketing authorisation for male testosterone products. National Institute for Health and Care Excellence guideline 1.4.8 states: "consider testosterone supplementation for menopausal women with low sexual desire if HRT alone is not effective". The footnote adds: "at the time of publication (November 2015), testosterone did not have a United Kingdom marketing authorisation for this indication in women. Bioidentical progesterone is used in IVF treatment and for pregnant women who are at risk of premature labour."
== Society and culture ==
=== Wyeth controversy ===
Wyeth, now a subsidiary of Pfizer, was a pharmaceutical company that marketed the HRT products Premarin (CEEs) and Prempro (CEEs + MPA). In 2009, litigation involving Wyeth resulted in the release of 1,500 documents that revealed practices concerning its promotion of these medications. The documents showed that Wyeth commissioned dozens of ghostwritten reviews and commentaries that were published in medical journals to promote unproven benefits of its HRT products, downplay their harms and risks, and cast competing therapies in a negative light. Starting in the mid-1990s and continuing for over a decade, Wyeth pursued an aggressive "publication plan" strategy to promote its HRT products through the use of ghostwritten publications. It worked mainly with DesignWrite, a medical writing firm. Between 1998 and 2005, Wyeth had 26 papers promoting its HRT products published in scientific journals.
These favorable publications emphasized the benefits and downplayed the risks of its HRT products, especially the "misconception" of the association of its products with breast cancer. The publications defended unsupported cardiovascular "benefits" of its products, downplayed risks such as breast cancer, and promoted off-label and unproven uses like prevention of dementia, Parkinson's disease, vision problems, and wrinkles. In addition, Wyeth emphasized negative messages against the SERM raloxifene for osteoporosis, instructed writers to stress the fact that "alternative therapies have increased in usage since the WHI even though there is little evidence that they are effective or safe...", called into question the quality and therapeutic equivalence of approved generic CEE products, and made efforts to spread the notion that the unique risks of CEEs and MPA were a class effect of all forms of menopausal HRT: "Overall, these data indicate that the benefit/risk analysis that was reported in the Women's Health Initiative can be generalized to all postmenopausal hormone replacement therapy products."
Following the publication of the WHI data in 2002, the stock prices for the pharmaceutical industry plummeted, and huge numbers of women stopped using HRT. The stocks of Wyeth, which supplied the Premarin and Prempro that were used in the WHI trials, decreased by more than 50%, and never fully recovered. Some of their articles in response promoted themes such as the following: "the WHI was flawed; the WHI was a controversial trial; the population studied in the WHI was inappropriate or was not representative of the general population of menopausal women; results of clinical trials should not guide treatment for individuals; observational studies are as good as or better than randomized clinical trials; animal studies can guide clinical decision-making; the risks associated with hormone therapy have been exaggerated; the benefits of hormone therapy have been or will be proven, and the recent studies are an aberration." Similar findings were observed in a 2010 analysis of 114 editorials, reviews, guidelines, and letters by five industry-paid authors. These publications promoted positive themes and challenged and criticized unfavorable trials such as the WHI and MWS. In 2009, Wyeth was acquired by Pfizer in a deal valued at US$68 billion. Pfizer, a company that produces Provera and Depo-Provera (MPA) and has also engaged in medical ghostwriting, continues to market Premarin and Prempro, which remain best-selling medications.
According to Fugh-Berman (2010), "Today, despite definitive scientific data to the contrary, many gynecologists still believe that the benefits of [HRT] outweigh the risks in asymptomatic women. This non-evidence–based perception may be the result of decades of carefully orchestrated corporate influence on medical literature." In a 2011 survey, as many as 50% of physicians expressed skepticism about large trials like the WHI and HERS. The positive perceptions many physicians hold of HRT, despite large trials showing risks that potentially outweigh any benefits, may be due to the efforts of pharmaceutical companies like Wyeth, according to May and May (2012) and Fugh-Berman (2015).
=== Popularity ===
The 2000s saw a dramatic decline in prescription rates, though more recently they have begun to rise again. Transdermal therapy, in part because it does not increase the risk of venous thromboembolism, is now often the first choice for HRT in the United Kingdom. Conjugated equine estrogen, in contrast, carries a potentially higher thrombosis risk and is now uncommonly used in the UK, having been replaced by estradiol-based compounds with lower thrombosis risk. Oral progestogen combinations such as medroxyprogesterone acetate have largely given way to dydrogesterone, due to the latter's lack of association with venous clots.
== See also ==
Androgen replacement therapy
Androgen deficiency
Hormone therapy
Gender-affirming hormone therapy
Feminizing hormone therapy
== References ==
== External links ==
Menopause treatment, Hormone Health Network, The Endocrine Society
Sexual Health and Menopause Online, The North American Menopause Society
Menopause, US Food and Drug Administration
British Menopause Society | Wikipedia/Hormone_replacement_therapy_(menopause) |
In statistics, Box–Behnken designs are experimental designs for response surface methodology, devised by George E. P. Box and Donald Behnken in 1960, to achieve the following goals:
Each factor, or independent variable, is placed at one of three equally spaced values, usually coded as −1, 0, +1. (At least three levels are needed for the following goal.)
The design should be sufficient to fit a quadratic model, that is, one containing squared terms, products of two factors, linear terms and an intercept.
The ratio of the number of experimental points to the number of coefficients in the quadratic model should be reasonable (in fact, their designs kept this ratio in the range of 1.5 to 2.6).
The estimation variance should more or less depend only on the distance from the centre (this is achieved exactly for the designs with 4 and 7 factors), and should not vary too much inside the smallest (hyper)cube containing the experimental points. (See "rotatability" in "Comparisons of response surface designs".)
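As a quick check on the point-to-coefficient ratio mentioned above, the full quadratic model in k factors has 1 + 2k + k(k−1)/2 coefficients (intercept, linear, squared, and two-factor interaction terms). A short sketch (the function name is illustrative, not from any library):

```python
def quad_coeffs(k):
    """Number of coefficients in a full quadratic model in k factors:
    1 intercept + k linear + k squared + k*(k-1)//2 two-factor interactions."""
    return 1 + 2 * k + k * (k - 1) // 2

# The 15-run 3-factor Box-Behnken design fits a 10-coefficient model,
# a ratio of 15/10 = 1.5 -- the low end of the 1.5-2.6 range above.
print(quad_coeffs(3))  # 10
```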
The Box–Behnken design is still considered more efficient and more powerful than alternatives such as the three-level full factorial design, the central composite design (CCD), and the Doehlert design, despite its poor coverage of the corners of the nonlinear design space.
The design with 7 factors was found first while looking for a design having the desired property concerning estimation variance, and then similar designs were found for other numbers of factors.
Each design can be thought of as a combination of a two-level (full or fractional) factorial design with an incomplete block design. In each block, a certain number of factors are put through all combinations for the factorial design, while the other factors are kept at the central values. For instance, the Box–Behnken design for 3 factors involves three blocks, in each of which 2 factors are varied through the 4 possible combinations of high and low. It is necessary to include centre points as well (in which all factors are at their central values).
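The 3-factor construction just described can be sketched in a few lines of Python. This is the m = 2 case, where every pair of factors is run through the four high/low combinations while the rest sit at the centre; for larger factor counts the published designs pair factors via an incomplete block design rather than taking all pairs, so this helper is illustrative rather than general:

```python
from itertools import combinations

def box_behnken(k, n_center=3):
    """Sketch of a Box-Behnken design with m = 2: each pair of factors is
    varied through the 4 combinations of -1/+1 while the remaining factors
    are held at their centre value 0; centre points are appended at the end."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * k
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([0] * k for _ in range(n_center))
    return runs

design = box_behnken(3)
print(len(design))  # 3 blocks x 4 runs + 3 centre points = 15
```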
In this table, m represents the number of factors which are varied in each of the blocks.
The design for 8 factors was not in the original paper. Taking the 9 factor design, deleting one column and any resulting duplicate rows produces an 81 run design for 8 factors, while giving up some "rotatability" (see above). Designs for other numbers of factors have also been invented (at least up to 21). A design for 16 factors exists having only 256 factorial points. Using Plackett–Burmans to construct a 16 factor design (see below) requires only 221 points.
Most of these designs can be split into groups (blocks), for each of which the model will have a different constant term, in such a way that the block constants will be uncorrelated with the other coefficients.
== Extended uses ==
These designs can be augmented with positive and negative "axial points", as in central composite designs, but, in this case, to estimate univariate cubic and quartic effects, with length α = min(2, (int(1.5 + K/4))^(1/2)), for K factors, roughly to approximate original design points' distances from the centre.
Plackett–Burman designs can be used, replacing the fractional factorial and incomplete block designs, to construct smaller or larger Box–Behnkens, in which case, axial points of length α = ((K + 1)/2)^(1/2) better approximate original design points' distances from the centre. Since each column of the basic design has 50% 0s and 25% each +1s and −1s, multiplying each column, j, by σ(Xj)·2^(1/2) and adding μ(Xj) prior to experimentation, under a general linear model hypothesis, produces a "sample" of output Y with correct first and second moments of Y.
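The two axial-point lengths above translate directly into code. A minimal sketch (the function names are illustrative):

```python
def alpha_bbd(k):
    """Axial length for augmenting a standard Box-Behnken design:
    alpha = min(2, sqrt(int(1.5 + K/4)))."""
    return min(2, int(1.5 + k / 4) ** 0.5)

def alpha_pb(k):
    """Axial length for the Plackett-Burman-based construction:
    alpha = sqrt((K + 1)/2)."""
    return ((k + 1) / 2) ** 0.5

print(round(alpha_bbd(3), 3), round(alpha_pb(3), 3))  # 1.414 1.414
```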
== References ==
== Bibliography ==
George Box, Donald Behnken, "Some new three level designs for the study of quantitative variables", Technometrics, Volume 2, pages 455–475, 1960.
Box–Behnken designs from a handbook on engineering statistics at NIST | Wikipedia/Box–Behnken_design |
The PRECEDE–PROCEED model is a cost–benefit evaluation framework proposed in 1974 by Lawrence W. Green that can help health program planners, policy makers and other evaluators, analyze situations and design health programs efficiently. It provides a comprehensive structure for assessing health and quality of life needs, and for designing, implementing and evaluating health promotion and other public health programs to meet those needs. One purpose and guiding principle of the PRECEDE–PROCEED model is to direct initial attention to outcomes, rather than inputs. It guides planners through a process that starts with desired outcomes and then works backwards in the causal chain to identify a mix of strategies for achieving those objectives. A fundamental assumption of the model is the active participation of its intended audience — that is, that the participants ("consumers") will take an active part in defining their own problems, establishing their goals and developing their solutions.
In this framework, health behavior is regarded as being influenced by both individual and environmental factors, and hence has two distinct parts. First is an "educational diagnosis" – PRECEDE, an acronym for Predisposing, Reinforcing and Enabling Constructs in Educational Diagnosis and Evaluation. Second is an "ecological diagnosis" – PROCEED, for Policy, Regulatory, and Organizational Constructs in Educational and Environmental Development. The model is multidimensional and is founded in the social/behavioral sciences, epidemiology, administration, and education. The systematic use of the framework in a series of clinical and field trials confirmed the utility and predictive validity of the model as a planning tool.
== Brief history and purpose ==
The PRECEDE framework was first developed and introduced in the 1970s by Green and colleagues. PRECEDE is based on the premise that, just as a medical diagnosis precedes a treatment plan, an educational diagnosis of the problem is essential before developing and implementing an intervention plan. Predisposing factors include knowledge, attitudes, beliefs, personal preferences, existing skills, and self-efficacy towards the desired behavior change. Reinforcing factors are those that reward or reinforce the desired behavior change, including social support, economic rewards, and changing social norms. Enabling factors are skills or physical factors, such as the availability and accessibility of resources or services, that facilitate achievement of the motivation to change behavior. The model has led to more than 1000 published studies, applications and commentaries in the professional and scientific literature.
In the early 1990s the National Center for Chronic Disease Prevention and Health Promotion at the Centers for Disease Control and Prevention (CDC, US Department of Health and Human Services) gave additional national prominence to the PRECEDE model. Dr. Marshall Kreuter, Director of the Division of Chronic Disease Control and Community Intervention and his staff adapted and incorporated PRECEDE within a model planning process offered, with federal technical assistance, to state and local health departments to plan and evaluate health promotion programs (with their coalitions). The CDC model was called PATCH, for Planned Approach to Community Health. The relevance of this initiative to the application of PRECEDE, and the inspiration for some of the extensions of the (PATCH) model to incorporate PROCEED dimensions was detailed in a special issue of the Journal of Health Education in 1992.
In 1991, "PROCEED" was added to the framework in consideration of the growing recognition of the expansion of health education to encompass policy, regulatory and related ecological/environmental factors, in determining health and health behaviors. As health-related behaviors, such as smoking and excessive use of alcohol, increased or became more resistant to change, so did the recognition that these behaviors are influenced by factors such as the media, politics, and businesses, which are outside the direct control of the individuals. Hence more "ecological" methods were needed to identify and influence these environmental and social determinants of health behaviors. With the emergence of and rapid growth in the field of genetics, the PRECEDE–PROCEED model was revised, in 2005, to include and address the growing knowledge in this field.
== Description ==
The PRECEDE–PROCEED planning model consists of four planning phases, one implementation phase, and three evaluation phases.
=== Phase 1 – Social diagnosis ===
The first stage in the program planning phase deals with identifying and evaluating the social problems that affect the quality of life of a population of interest. Social assessment is the "application, through broad participation, of multiple sources of information, both objective and subjective, designed to expand the mutual understanding of people regarding their aspirations for the common good". During this stage, the program planners try to gain an understanding of the social problems that affect the quality of life of the community and its members, their strengths, weaknesses, and resources; and their readiness to change. This is done through various activities such as developing a planning committee, holding community forums, and conducting focus groups, surveys, and/or interviews. These activities will engage the beneficiaries in the planning process and planners will be able to see the issues just as the community sees them.
=== Phase 2 – Epidemiological, behavioral, and environmental diagnosis ===
Epidemiological diagnosis deals with determining and focusing on specific health issue(s) of the community, and the behavioral and environmental factors related to prioritized health needs of the community. Based on these priorities, achievable program goals and objectives for the program being developed are established. Epidemiological assessment may include secondary data analysis or original data collection — examples of epidemiological data include vital statistics, state and national health surveys, medical and administrative records, etc. Genetic factors, although not directly changeable through a health promotion program, are becoming increasingly important in understanding health problems and counseling people with genetic risks, or may be useful in identifying high-risk groups for intervention.
Behavioral diagnosis — This is the analysis of behavioral links to the goals or problems identified in the social or epidemiological diagnosis. The behavioral dimension of a health issue is understood, firstly, through the behaviors that exemplify the severity of the disease (e.g. tobacco use among teenagers); secondly, through the behavior of the individuals who directly affect the individual at risk (e.g. parents of teenagers who keep cigarettes at home); and thirdly, through the actions of decision-makers that affect the environment of the individuals at risk (e.g. law enforcement actions that restrict teens' access to cigarettes). Once behavioral diagnosis is completed for each health problem identified, the planner is able to develop more specific and effective interventions.
Environmental diagnosis — This is a parallel analysis of social and physical environmental factors, other than specific actions, that could be linked to behaviors. In this assessment, environmental factors beyond the control of the individual are modified to influence the health outcome. For example, poor nutritional status among children may be due to the availability of unhealthful foods in school. This may require not only educational interventions, but also additional strategies such as influencing the behaviors of a school's food service managers.
=== Phase 3 – Educational and ecological diagnosis ===
Once the behavioral and environmental factors are identified and interventions selected, planners can start to work on selecting factors that, if modified, will most likely result in behavior change, as well as sustain it. These factors are classified as 1) predisposing, 2) enabling, and 3) reinforcing factors.
Predisposing factors are any characteristics of a person or population that motivate behavior prior to or during the occurrence of that behavior. They include an individual's knowledge, beliefs, values, and attitudes.
Enabling factors are those characteristics of the environment that facilitate action and any skill or resource required to attain specific behavior. They include programs, services, availability and accessibility of resources, or new skills required to enable behavior change.
Reinforcing factors are rewards or punishments following or anticipated as a consequence of a behavior. They serve to strengthen the motivation for a behavior. Some of the reinforcing factors include social support, peer support, etc.
=== Phase 4 – Administrative and policy diagnosis ===
This phase focuses on the administrative and organizational concerns that must be addressed prior to program implementation. This includes assessment of resources, development and allocation of budgets, looking at organizational barriers, and coordination of the program with other departments, including external organizations and the community.
Administrative diagnosis assesses policies, resources, circumstances and prevailing organizational situations that could hinder or facilitate the development of the health program.
Policy diagnosis assesses the compatibility of program goals and objectives with those of the organization and its administration. This evaluates whether program goals fit into the mission statements, rules and regulations that are needed for the implementation and sustainability of the program.
=== Phase 5 – Implementation of the program ===
=== Phase 6 – Process evaluation ===
This phase is used to evaluate the process by which the program is being implemented. This phase determines whether the program is being implemented according to the protocol, and determines whether the objectives of the program are being met. It also helps identify modifications that may be needed to improve the program.
=== Phase 7 – Impact evaluation ===
This phase measures the effectiveness of the program with regards to the intermediate objectives as well as the changes in predisposing, enabling, and reinforcing factors. Often this phase is used to evaluate the performance of educators.
=== Phase 8 – Outcome evaluation ===
This phase measures change in terms of overall objectives as well as changes in health and social benefits or quality of life. That is, it determines the effect of the program in the health and quality of life of the community.
== Conclusion ==
The PRECEDE–PROCEED model is a participatory model for creating successful community health promotion and other public health interventions. It is based on the premise that behavior change is by and large voluntary, and that health programs are more likely to be effective if they are planned and evaluated with the active participation of those who will implement them, and those who are affected by them. Thus, it looks at health and other issues within the context of the community. Interventions designed for behavior change to prevent injuries and violence, to improve heart health, and to improve and increase scholarly productivity among health education faculty, are among more than 1000 published applications developed or evaluated using the PRECEDE–PROCEED model as a guideline.
== Bibliography ==
Green, L.W., Kreuter, M.W., Deeds, S.G., Partridge, K.B. (1980). Health Education Planning: A Diagnostic Approach. 1st edition. Mountain View, California: Mayfield.
The first edition where the model was introduced and presented as a planning model for health education programs in various settings and where term PRECEDE first appeared.
Green L, Kreuter M. (1991). Health promotion planning: An educational and environmental approach. 2nd edition. Mountain View, CA: Mayfield Publishing Company
The second edition of the book where the model's application was expanded from PRECEDE to PROCEED with the addition of the policy, regulatory, and organizational aspects of planning for environmental changes that took health promotion beyond a narrower understanding of health education
Green L, Kreuter M. (1999). Health promotion planning: An educational and ecological approach. 3rd edition. Mountain View, CA: Mayfield Publishing Company
The third edition strengthened the ecological approach reflected in the social-environmental aspects that were increasingly relevant to the emerging infectious diseases and problems of lifestyle and social conditions surrounding the increasing prevalence of chronic diseases
Green L, Kreuter M. (2005). Health program planning: An educational and ecological approach. 4th edition. New York, NY: McGraw Hill.
A 2002/2003 IOM report on the Future of the Public's Health in the 21st Century urged more expanded application and teaching of ecological and participatory approaches in public health, which are the two cornerstones of the "educational and ecological approach" of PRECEDE–PROCEED planning. This latest edition sought to respond to the challenges of the IOM report and expand the scope of this PRECEDE–PROCEED model as an educational and ecological approach to broader public health and population health planning.
With recent advances in the genetic field and the increasing attention public health is giving to genetic factors, another significant addition was the inclusion of a specific place for genetic factors, alongside the environmental and behavioral determinants of health.
== See also ==
Social marketing
Ecological model
Health promotion
== References ==
== External links ==
L. W. Green's website: "If we want more evidence-based practice, we need more practice-based evidence"
PRECEDE/PROCEED Model: The Community Tool Box
How does the Precede–Proceed Model provide a structure for assessing health and quality-of-life needs? | Wikipedia/PRECEDE–PROCEED_model |
Infection prevention and control (IPC) is the discipline concerned with preventing healthcare-associated infections; a practical rather than academic sub-discipline of epidemiology. In Northern Europe, infection prevention and control is expanded from healthcare into a component in public health, known as "infection protection" (smittevern, smittskydd, Infektionsschutz in the local languages). It is an essential part of the infrastructure of health care. Infection control and hospital epidemiology are akin to public health practice, practiced within the confines of a particular health-care delivery system rather than directed at society as a whole.
Infection control addresses factors related to the spread of infections within the healthcare setting, whether among patients, from patients to staff, from staff to patients, or among staff. This includes preventive measures such as hand washing, cleaning, disinfecting, sterilizing, and vaccinating. Other aspects include surveillance, monitoring, and investigating and managing suspected outbreaks of infection within a healthcare setting.
A subsidiary aspect of infection control involves preventing the spread of antimicrobial-resistant organisms such as MRSA. This in turn connects to the discipline of antimicrobial stewardship—limiting the use of antimicrobials to necessary cases, as increased usage inevitably results in the selection and dissemination of resistant organisms. Antimicrobial medications (aka antimicrobials or anti-infective agents) include antibiotics, antibacterials, antifungals, antivirals and antiprotozoals.
The World Health Organization (WHO) has set up an Infection Prevention and Control (IPC) unit in its Service Delivery and Safety department that publishes related guidelines.
== Infection prevention and control ==
Aseptic technique is a key component of all invasive medical procedures. Similar control measures are also recommended in any healthcare setting to prevent the spread of infection generally.
=== Hand hygiene ===
Hand hygiene is one of the basic, yet most important, steps in IPC (infection prevention and control). Hand hygiene drastically reduces the chances of HAIs (healthcare-associated infections) at very low cost. Hand hygiene consists of either hand washing (water-based) or hand rubs (alcohol-based). According to WHO standards, hand washing is a 7-step procedure, whereas hand rubs use 5 steps.
The American Nurses Association (ANA) and American Association of Nurse Anesthesiology (AANA) have set specific checkpoints for nurses to clean their hands; the checkpoints for nurses include, before patient contact, before putting on protective equipment, before doing procedures, after contact with patient's skin and surroundings, after contamination of foreign substances, after contact with bodily fluids and wounds, after taking off protective equipment, and after using the restroom. To ensure all before and after checkpoints for hand washing are done, precautions such as hand sanitizer dispensers filled with sodium hypochlorite, alcohol, or hydrogen peroxide, which are three approved disinfectants that kill bacteria, are placed in certain points, and nurses carrying mini hand sanitizer dispensers help increase sanitation in the work field. In cases where equipment is being placed in a container or bin and picked back up, nurses and doctors are required to wash their hands or use alcohol sanitizer before going back to the container to use the same equipment.
Independent studies by Ignaz Semmelweis in 1846 in Vienna and Oliver Wendell Holmes Sr. in 1843 in Boston established a link between the hands of health care workers and the spread of hospital-acquired disease. The U.S. Centers for Disease Control and Prevention (CDC) state that "It is well documented that the most important measure for preventing the spread of pathogens is effective handwashing". In the developed world, hand washing is mandatory in most health care settings and required by many different regulators.
In the United States, OSHA standards require that employers must provide readily accessible hand washing facilities, and must ensure that employees wash hands and any other skin with soap and water or flush mucous membranes with water as soon as feasible after contact with blood or other potentially infectious materials (OPIM).
In the UK healthcare professionals have adopted the 'Ayliffe Technique', based on the 6 step method developed by Graham Ayliffe, J. R. Babb, and A. H. Quoraishi.
Drying is an essential part of the hand hygiene process. In November 2008, a non-peer-reviewed study was presented to the European Tissue Symposium by the University of Westminster, London, comparing the bacteria levels present after the use of paper towels, warm air hand dryers, and modern jet-air hand dryers. Of those three methods, only paper towels reduced the total number of bacteria on hands, with "through-air dried" towels the most effective.
The presenters also carried out tests to establish whether there was the potential for cross-contamination of other washroom users and the washroom environment as a result of each type of drying method. They found that:
the jet air dryer, which blows air out of the unit at claimed speeds of 400 mph, was capable of blowing micro-organisms from the hands and the unit and potentially contaminating other washroom users and the washroom environment up to 2 metres away
use of a warm air hand dryer spread micro-organisms up to 0.25 metres from the dryer
paper towels showed no significant spread of micro-organisms.
In 2005, in a study conducted by TÜV Produkt und Umwelt, different hand drying methods were evaluated. The following changes in the bacterial count after drying the hands were observed:
=== Cleaning, Disinfection, Sterilization ===
The field of infection prevention describes a hierarchy of removal of microorganisms from surfaces, including medical equipment and instruments. Cleaning is the lowest level, accomplishing substantial removal. Disinfection involves the removal of all pathogens other than bacterial spores. Sterilization is defined as the removal or destruction of all microorganisms, including bacterial spores.
==== Cleaning ====
Cleaning is the first and simplest step in preventing the spread of infection via surfaces and fomites. Cleaning reduces microbial burden by chemical deadsorption of organisms (loosening bioburden/organisms from surfaces via cleaning chemicals), simple mechanical removal (rinsing, wiping), as well as disinfection (killing of organisms by cleaning chemicals).
To reduce their chances of contracting an infection, individuals are recommended to maintain good hygiene by washing their hands after every contact with questionable areas or bodily fluids and by disposing of garbage at regular intervals to prevent germs from growing.
==== Disinfection ====
Disinfection uses liquid chemicals on surfaces and at room temperature to kill disease-causing microorganisms. Ultraviolet light has also been used to disinfect the rooms of patients infected with Clostridioides difficile after discharge. Disinfection is less effective than sterilization because it does not kill bacterial endospores.
Along with ensuring proper hand washing techniques are followed, another major component to decrease the spread of disease is the sanitation of all medical equipment. The ANA and AANA set guidelines for sterilization and disinfection based on the Spaulding Disinfection and Sterilization Classification Scheme (SDSCS). The SDSCS classifies sterilization techniques into three categories: critical, semi-critical, and non-critical. For critical situations, or situations involving contact with sterile tissue or the vascular system, sterilize devices with sterilants that destroy all bacteria, rinse with sterile water, and use chemical germicides. In semi-critical situations, or situations with contact of mucous membranes or non-intact skin, high-level disinfectants are required. Cleaning and disinfecting devices with high-level disinfectants, rinsing with sterile water, and drying all equipment surfaces to prevent microorganism growth are methods nurses and doctors must follow. For non-critical situations, or situations involving electronic devices, stethoscopes, blood pressure cuffs, beds, monitors and other general hospital equipment, intermediate level disinfection is required. "Clean all equipment between patients with alcohol, use protective covering for non-critical surfaces that are difficult to clean, and hydrogen peroxide gas. . .for reusable items that are difficult to clean."
==== Sterilization ====
Sterilization is a process intended to kill all microorganisms and is the highest level of microbial kill that is possible. Sterilization, if performed properly, is an effective way of preventing infections from spreading. It should be used for the cleaning of medical instruments and any type of medical item that comes into contact with the bloodstream or sterile tissues.
There are four main ways in which such items are usually sterilized: autoclave (using high-pressure steam), dry heat (in an oven), chemical sterilants such as glutaraldehyde or formaldehyde solutions, or exposure to ionizing radiation. The first two are the most widely used methods of sterilization, mainly because of their accessibility and availability. Steam sterilization is one of the most effective types of sterilization, if done correctly, which is often hard to achieve; instruments used in health care facilities are usually sterilized with this method. The general rule is that, for an effective sterilization, the steam must come into contact with all the surfaces that are meant to be disinfected. Dry heat sterilization, performed in an oven, is also an accessible type of sterilization, although it can only be used for instruments made of metal or glass: the very high temperatures required would melt instruments made of other materials.
The effectiveness of a sterilizer, for example a steam autoclave, is determined in three ways. First, mechanical indicators and gauges on the machine itself indicate proper operation. Second, heat-sensitive indicators or tape on the sterilizing bags change color to indicate proper levels of heat or steam. Third, and most importantly, biological testing is performed, in which a microorganism that is highly heat- and chemical-resistant (often a bacterial endospore) is selected as the standard challenge. If the process kills this microorganism, the sterilizer is considered effective.
Steam sterilization is done at a temperature of 121 °C (250 °F) with a pressure of 209 kPa (about 2 atm). In these conditions, rubber items must be sterilized for 20 minutes; wrapped items are sterilized at 134 °C and 310 kPa for 7 minutes. The time is counted once the required temperature has been reached. Steam sterilization requires four conditions in order to be efficient: adequate contact, sufficiently high temperature, correct time, and sufficient moisture. Sterilization using steam can also be done at a temperature of 132 °C (270 °F), at double pressure.
Dry heat sterilization is performed at 170 °C (340 °F) for one hour, or at 160 °C (320 °F) for two hours. It can also be performed at 121 °C for at least 16 hours.
Chemical sterilization, also referred to as cold sterilization, can be used to sterilize instruments that cannot normally be disinfected through the other two processes described above. The items sterilized with cold sterilization are usually those that can be damaged by regular sterilization. A variety of chemicals can be used including aldehydes, hydrogen peroxide, and peroxyacetic acid. Commonly, glutaraldehydes and formaldehyde are used in this process, but in different ways. When using the first type of disinfectant, the instruments are soaked in a 2–4% solution for at least 10 hours while a solution of 8% formaldehyde will sterilize the items in 24 hours or more. Chemical sterilization is generally more expensive than steam sterilization and therefore it is used for instruments that cannot be disinfected otherwise. After the instruments have been soaked in the chemical solutions, they must be rinsed with sterile water which will remove the residues from the disinfectants. This is the reason why needles and syringes are not sterilized in this way, as the residues left by the chemical solution that has been used to disinfect them cannot be washed off with water and they may interfere with the administered treatment. Although formaldehyde is less expensive than glutaraldehydes, it is also more irritating to the eyes, skin and respiratory tract and is classified as a potential carcinogen, so it is used much less commonly.
Ionizing radiation is typically used only for sterilizing items for which none of the above methods are practical, because of the risks involved in the process.
=== Personal protective equipment ===
Personal protective equipment (PPE) is specialized clothing or equipment worn by a worker for protection against a hazard. The hazard in a health care setting is exposure to blood, saliva, or other bodily fluids or aerosols that may carry infectious materials such as Hepatitis C, HIV, or other blood borne or bodily fluid pathogen. PPE prevents contact with a potentially infectious material by creating a physical barrier between the potential infectious material and the healthcare worker.
The United States Occupational Safety and Health Administration (OSHA) requires the use of personal protective equipment (PPE) by workers to guard against blood borne pathogens if there is a reasonably anticipated exposure to blood or other potentially infectious materials.
Components of PPE include gloves, gowns, bonnets, shoe covers, face shields, CPR masks, goggles, surgical masks, and respirators. How many components are used and how the components are used is often determined by regulations or the infection control protocol of the facility in question, which in turn are derived from knowledge of the mechanism of transmission of the pathogen(s) of concern. Many or most of these items are disposable to avoid carrying infectious materials from one patient to another patient and to avoid difficult or costly disinfection. In the US, OSHA requires the immediate removal and disinfection or disposal of a worker's PPE prior to leaving the work area where exposure to infectious material took place. For health care professionals who may come into contact with highly infectious bodily fluids, using personal protective coverings on exposed body parts improves protection. Breathable personal protective equipment improves user-satisfaction and may offer a similar level of protection. In addition, adding tabs and other modifications to the protective equipment may reduce the risk of contamination during donning and doffing (putting on and taking off the equipment). Implementing an evidence-based donning and doffing protocol such as a one-step glove and gown removal technique, giving oral instructions while donning and doffing, double gloving, and the use of glove disinfection may also improve protection for health care professionals.
Guidelines set by the ANA and AANA for the proper use of disposable gloves include removing and replacing gloves frequently and whenever they are contaminated, damaged, or between the treatment of multiple patients. When removing gloves, "grasp outer edge of glove near wrist, peel away from hand turning inside out, hold removed glove in opposite gloved hand, slide ungloved finger under wrist of gloved hand so finger is inside gloved area, peel off the glove from inside creating a 'bag' for both gloves, dispose of gloves in proper waste receptacle".
The inappropriate use of PPE such as gloves has been linked to an increase in rates of the transmission of infection, and its use must be compatible with the particular hand hygiene agents used. Research studies in the form of randomized controlled trials and simulation studies are needed to determine the most effective types of PPE for preventing the transmission of infectious diseases to healthcare workers. There is low-quality evidence supporting improvements or modifications to personal protective equipment in order to help decrease contamination. Examples of modifications include adding tabs to masks or gloves to ease removal and designing protective gowns so that gloves are removed at the same time. In addition, there is weak evidence that the following PPE approaches or techniques may lead to reduced contamination and improved compliance with PPE protocols: wearing double gloves, following specific doffing (removal) procedures such as those from the CDC, and providing people with spoken instructions while removing PPE.
=== Device-related infections ===
Healthcare-related infections such as (catheter-associated) urinary tract infections and (central-line) associated bloodstream infections can be caused by medical devices such as urinary catheters and central lines. Prudent use is essential in preventing infections associated with these medical devices. mHealth and patient participation have been used to improve risk awareness and prudent use (e.g. Participatient).
=== Antimicrobial surfaces ===
Microorganisms are known to survive on non-antimicrobial inanimate 'touch' surfaces (e.g., bedrails, over-the-bed trays, call buttons, bathroom hardware, etc.) for extended periods of time. This can be especially troublesome in hospital environments where patients with immunodeficiencies are at enhanced risk for contracting nosocomial infections.
Products made with antimicrobial copper alloy (brasses, bronzes, cupronickel, copper-nickel-zinc, and others) surfaces destroy a wide range of microorganisms in a short period.
The United States Environmental Protection Agency has approved the registration of 355 different antimicrobial copper alloys and one synthetic copper-infused hard surface that kill E. coli O157:H7, methicillin-resistant Staphylococcus aureus (MRSA), Staphylococcus, Enterobacter aerogenes, and Pseudomonas aeruginosa in less than 2 hours of contact. Other investigations have demonstrated the efficacy of antimicrobial copper alloys to destroy Clostridioides difficile, influenza A virus, adenovirus, and fungi. As a public hygienic measure in addition to regular cleaning, antimicrobial copper alloys are being installed in healthcare facilities in the UK, Ireland, Japan, Korea, France, Denmark, and Brazil. The synthetic hard surface is being installed in the United States as well as in Israel.
== Vaccination of health care workers ==
Healthcare workers may be exposed to certain infections in the course of their work. Vaccines are available to provide some protection to workers in a healthcare setting. Depending on regulation, recommendation, specific work function, or personal preference, healthcare workers or first responders may receive vaccinations for hepatitis B; influenza; COVID-19, measles, mumps and rubella; Tetanus, diphtheria, pertussis; N. meningitidis; and varicella.
== Surveillance for infections ==
Surveillance is the act of infection investigation using the CDC definitions. Determining the presence of a hospital-acquired infection requires an infection control practitioner (ICP) to review a patient's chart and see if the patient had the signs and symptoms of an infection. Surveillance definitions exist for infections of the bloodstream, urinary tract, pneumonia, surgical sites, and gastroenteritis.
Surveillance traditionally involved significant manual data assessment and entry in order to assess preventative actions such as isolation of patients with an infectious disease. Increasingly, computerized software solutions are becoming available that assess incoming risk messages from microbiology and other online sources. By reducing the need for data entry, software can reduce the data workload of ICPs, freeing them to concentrate on clinical surveillance.
As of 1998, approximately one third of healthcare acquired infections were preventable. Surveillance and preventative activities are increasingly a priority for hospital staff. The Study on the Efficacy of Nosocomial Infection Control (SENIC) project by the U.S. CDC found in the 1970s that hospitals reduced their nosocomial infection rates by approximately 32 per cent by focusing on surveillance activities and prevention efforts.
== Isolation and quarantine ==
In healthcare facilities, medical isolation refers to various physical measures taken to interrupt nosocomial spread of contagious diseases. Various forms of isolation exist, and are applied depending on the type of infection and agent involved, and its route of transmission, to address the likelihood of spread via airborne particles or droplets, by direct skin contact, or via contact with body fluids.
In cases where infection is merely suspected, individuals may be quarantined until the incubation period has passed and the disease manifests itself or the person remains healthy. Groups may undergo quarantine, or in the case of communities, a cordon sanitaire may be imposed to prevent infection from spreading beyond the community, or in the case of protective sequestration, into a community. Public health authorities may implement other forms of social distancing, such as school closings, when needing to control an epidemic.
== Barriers and facilitators of implementing infection prevention and control guidelines ==
Barriers to the ability of healthcare workers to follow PPE and infection control guidelines include communication of the guidelines, workplace support (manager support), the culture of use at the workplace, adequate training, the amount of physical space in the facility, access to PPE, and healthcare worker motivation to provide good patient care. Facilitators include involving all the staff in a facility, both healthcare workers and support staff, when guidelines are implemented.
== Outbreak investigation ==
When an unusual cluster of illness is noted, infection control teams undertake an investigation to determine whether there is a true disease outbreak, a pseudo-outbreak (a result of contamination within the diagnostic testing process), or just random fluctuation in the frequency of illness. If a true outbreak is discovered, infection control practitioners try to determine what permitted the outbreak to occur, and to rearrange the conditions to prevent ongoing propagation of the infection. Often, breaches in good practice are responsible, although sometimes other factors (such as construction) may be the source of the problem.
Outbreaks investigations have more than a single purpose. These investigations are carried out in order to prevent additional cases in the current outbreak, prevent future outbreaks, learn about a new disease or learn something new about an old disease. Reassuring the public, minimizing the economic and social disruption as well as teaching epidemiology are some other obvious objectives of outbreak investigations.
According to the WHO, outbreak investigations are meant to detect what is causing the outbreak, how the pathogenic agent is transmitted, where it all started from, what is the carrier, what is the population at risk of getting infected and what are the risk factors.
== Training in infection control and health care epidemiology ==
Practitioners can come from several different educational streams: many begin as registered nurses, some as public health inspectors (environmental health officers), some as medical technologists (particularly in clinical microbiology), and some as physicians (typically infectious disease specialists). Specialized training in infection control and health care epidemiology are offered by the professional organizations described below. Physicians who desire to become infection control practitioners often are trained in the context of an infectious disease fellowship. Training that is conducted "face to face", via a computer, or via video conferencing may help improve compliance and reduce errors when compared with "folder based" training (providing health care professionals with written information or instructions).
In the United States, Certification Board of Infection Control and Epidemiology is a private company that certifies infection control practitioners based on their educational background and professional experience, in conjunction with testing their knowledge base with standardized exams. The credential awarded is CIC, Certification in Infection Control and Epidemiology. It is recommended that one has 2 years of Infection Control experience before applying for the exam. Certification must be renewed every five years.
A course in hospital epidemiology (infection control in the hospital setting) is offered jointly each year by the Centers for Disease Control and Prevention (CDC) and the Society for Healthcare Epidemiology of America.
== Standardization ==
=== Australia ===
In 2002, the Royal Australian College of General Practitioners published a revised standard for office-based infection control which covers the sections of managing immunisation, sterilisation and disease surveillance. However, the document on the personal hygiene of health workers is limited to hand hygiene, waste and linen management, which may not be sufficient since some pathogens are airborne and could be spread through air flow.
Since 1 November 2019, the Australian Commission on Safety and Quality in Health Care has managed the Hand Hygiene initiative in Australia, an initiative focused on improving hand hygiene practices to reduce the incidence of healthcare-associated infections.
=== United States ===
Currently, the federal regulation that describes infection control standards, as related to occupational exposure to potentially infectious blood and other materials, is found at 29 CFR Part 1910.1030 Bloodborne pathogens.
== See also ==
Pandemic prevention – Organization and management of preventive measures against pandemics
== References ==
== Further reading ==
Wong, P., & Lim, W. Y. (2020). Aligning difficult airway guidelines with the anesthetic COVID-19 guidelines to develop a COVID-19 difficult airway strategy: A narrative review. Journal of Anesthesia, 34(6), 924–943. https://doi.org/10.1007/s00540-020-02819-2
== External links ==
Association for Professionals in Infection Control and Epidemiology is primarily composed of infection prevention and control professionals with nursing or medical technology backgrounds
The Society for Healthcare Epidemiology of America is more heavily weighted towards practitioners who are physicians or doctoral-level epidemiologists.
Regional Infection Control Networks
The Certification Board of Infection Control and Epidemiology, Inc. | Wikipedia/Infection_control |
In statistics, a central composite design is an experimental design, useful in response surface methodology, for building a second order (quadratic) model for the response variable without needing to use a complete three-level factorial experiment.
After the designed experiment is performed, linear regression is used, sometimes iteratively, to obtain results. Coded variables are often used when constructing this design.
== Implementation ==
The design consists of three distinct sets of experimental runs:
A factorial (perhaps fractional) design in the factors studied, each having two levels;
A set of center points, experimental runs whose values of each factor are the medians of the values used in the factorial portion. This point is often replicated in order to improve the precision of the experiment;
A set of axial points, experimental runs identical to the center points except for one factor, which takes on values both below and above the median of the two factorial levels, typically both outside their range. All factors are varied in this way.
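As a quick arithmetic check of the three parts above, the total number of runs is the sum of the factorial points, the 2k axial points, and the replicated center points. The helper below is an illustrative sketch (the function name and parameters are not from any library):

```python
def ccd_run_count(k, n_center=1, fraction=0):
    """Total runs in a central composite design with k factors:
    factorial points + 2k axial points + replicated center points."""
    factorial = 2 ** (k - fraction)  # full (fraction=0) or fractional factorial
    axial = 2 * k                    # one +alpha and one -alpha run per factor
    return factorial + axial + n_center

# e.g. a full CCD in 3 factors with 5 center points: 8 + 6 + 5 = 19 runs
print(ccd_run_count(3, n_center=5))  # → 19
```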
== Design matrix ==
The design matrix for a central composite design experiment involving k factors is derived from a matrix, d, containing the following three different parts corresponding to the three types of experimental runs:
The matrix F obtained from the factorial experiment. The factor levels are scaled so that its entries are coded as +1 and −1.
The matrix C from the center points, denoted in coded variables as (0,0,0,...,0), where there are k zeros.
A matrix E from the axial points, with 2k rows. Each factor is sequentially placed at ±α and all other factors are at zero. The value of α is determined by the designer; while arbitrary, some values may give the design desirable properties. This part would look like:
{\displaystyle \mathbf {E} ={\begin{bmatrix}\alpha &0&0&\cdots &\cdots &\cdots &0\\{-\alpha }&0&0&\cdots &\cdots &\cdots &0\\0&\alpha &0&\cdots &\cdots &\cdots &0\\0&{-\alpha }&0&\cdots &\cdots &\cdots &0\\\vdots &{}&{}&{}&{}&{}&\vdots \\0&0&0&0&\cdots &\cdots &\alpha \\0&0&0&0&\cdots &\cdots &{-\alpha }\\\end{bmatrix}}.}
Then d is the vertical concatenation:
{\displaystyle \mathbf {d} ={\begin{bmatrix}\mathbf {F} \\\mathbf {C} \\\mathbf {E} \end{bmatrix}}.}
The design matrix X used in linear regression is the horizontal concatenation of a column of 1s (intercept), d, and all elementwise products of a pair of columns of d:
{\displaystyle \mathbf {X} ={\begin{bmatrix}\mathbf {1} &\mathbf {d} &\mathbf {d} (1)\times \mathbf {d} (2)&\mathbf {d} (1)\times \mathbf {d} (3)&\cdots &\mathbf {d} (k-1)\times \mathbf {d} (k)&\mathbf {d} (1)^{2}&\mathbf {d} (2)^{2}&\cdots &\mathbf {d} (k)^{2}\end{bmatrix}},}
where d(i) represents the ith column in d.
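The construction of d and X described above can be sketched in Python with NumPy. This is an illustrative implementation, not any library's API; the function names `ccd_design` and `model_matrix` are made up for this example:

```python
import itertools
import numpy as np

def ccd_design(k, alpha, n_center=1):
    """Build the matrix d for a full central composite design:
    vertical concatenation of the factorial (F), center (C), and axial (E) parts."""
    F = np.array(list(itertools.product([-1.0, 1.0], repeat=k)))  # 2^k factorial rows
    C = np.zeros((n_center, k))                                   # center points at the origin
    E = np.zeros((2 * k, k))                                      # 2k axial rows
    for i in range(k):
        E[2 * i, i] = alpha       # factor i at +alpha, all others at 0
        E[2 * i + 1, i] = -alpha  # factor i at -alpha
    return np.vstack([F, C, E])

def model_matrix(d):
    """Regression matrix X: intercept, d, all pairwise interactions, and squares."""
    n, k = d.shape
    cols = [np.ones(n)] + [d[:, i] for i in range(k)]
    cols += [d[:, i] * d[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [d[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

# Rotatable CCD in k = 2 factors: 4 factorial + 3 center + 4 axial = 11 runs,
# and X has 1 + 2 + 1 + 2 = 6 columns.
d = ccd_design(k=2, alpha=np.sqrt(2), n_center=3)
X = model_matrix(d)
print(d.shape, X.shape)  # (11, 2) (11, 6)
```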
=== Choosing α ===
There are many different methods to select a useful value of α. Let F be the number of points due to the factorial design and T = 2k + n, the number of additional points, where n is the number of central points in the design. Common values are as follows (Myers, 1971):
Orthogonal design: {\displaystyle \alpha =(Q\times F/4)^{1/4}\,\!}, where {\displaystyle Q=({\sqrt {F+T}}-{\sqrt {F}})^{2}};
Rotatable design: {\displaystyle \alpha =F^{1/4}} (the design implemented by MATLAB's ccdesign function).
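Both choices of α follow directly from F and T. The helpers below are an illustrative sketch of the two formulas above (the function names are made up):

```python
import math

def alpha_rotatable(F):
    """Rotatable CCD: alpha = F**(1/4), where F is the number of factorial points."""
    return F ** 0.25

def alpha_orthogonal(F, T):
    """Orthogonal CCD (Myers, 1971): alpha = (Q*F/4)**(1/4),
    with Q = (sqrt(F+T) - sqrt(F))**2 and T = 2k + n additional points."""
    Q = (math.sqrt(F + T) - math.sqrt(F)) ** 2
    return (Q * F / 4) ** 0.25

# e.g. k = 2 factors, full factorial: F = 4; with n = 4 center points, T = 2*2 + 4 = 8
print(round(alpha_rotatable(4), 4))   # → 1.4142
print(round(alpha_orthogonal(4, 8), 4))
```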
=== Application of central composite designs for optimization ===
Statistical approaches such as response surface methodology can be employed to maximize the production of a substance of interest by optimizing operational factors. In contrast to conventional methods, statistical techniques can quantify the interactions among process variables. For instance, in one study, a central composite design was employed to investigate the effect of critical parameters of organosolv pretreatment of rice straw, including temperature, time, and ethanol concentration. The residual solid, lignin recovery, and hydrogen yield were selected as the response variables.
== References ==
Myers, Raymond H. Response Surface Methodology. Boston: Allyn and Bacon, Inc., 1971 | Wikipedia/Central_composite_design |
Coronary artery disease (CAD), also called coronary heart disease (CHD), or ischemic heart disease (IHD), is a type of heart disease involving the reduction of blood flow to the cardiac muscle due to a build-up of atheromatous plaque in the arteries of the heart. It is the most common of the cardiovascular diseases. CAD can cause stable angina, unstable angina, myocardial ischemia, and myocardial infarction.
A common symptom is angina, which is chest pain or discomfort that may travel into the shoulder, arm, back, neck, or jaw. Occasionally it may feel like heartburn. In stable angina, symptoms occur with exercise or emotional stress, last less than a few minutes, and improve with rest. Shortness of breath may also occur and sometimes no symptoms are present. In many cases, the first sign is a heart attack. Other complications include heart failure or an abnormal heartbeat.
Risk factors include high blood pressure, smoking, diabetes mellitus, lack of exercise, obesity, high blood cholesterol, poor diet, depression, and excessive alcohol consumption. A number of tests may help with diagnosis including electrocardiogram, cardiac stress testing, coronary computed tomographic angiography, biomarkers (high-sensitivity cardiac troponins) and coronary angiogram, among others.
Ways to reduce CAD risk include eating a healthy diet, regularly exercising, maintaining a healthy weight, and not smoking. Medications for diabetes, high cholesterol, or high blood pressure are sometimes used. There is limited evidence for screening people who are at low risk and do not have symptoms. Treatment involves the same measures as prevention. Additional medications such as antiplatelets (including aspirin), beta blockers, or nitroglycerin may be recommended. Procedures such as percutaneous coronary intervention (PCI) or coronary artery bypass surgery (CABG) may be used in severe disease. In those with stable CAD it is unclear if PCI or CABG in addition to the other treatments improves life expectancy or decreases heart attack risk.
In 2015, CAD affected 110 million people and resulted in 8.9 million deaths. It makes up 15.6% of all deaths, making it the most common cause of death globally. The risk of death from CAD for a given age decreased between 1980 and 2010, especially in developed countries. The number of cases of CAD for a given age also decreased between 1990 and 2010. In the United States in 2010, about 20% of those over 65 had CAD, while it was present in 7% of those 45 to 64, and 1.3% of those 18 to 45; rates were higher among males than females of a given age.
== Signs and symptoms ==
The most common symptom is chest pain or discomfort that occurs regularly with activity, after eating, or at other predictable times; this phenomenon is termed stable angina and is associated with narrowing of the arteries of the heart. Angina also includes chest tightness, heaviness, pressure, numbness, fullness, or squeezing. Angina that changes in intensity, character, or frequency is termed unstable. Unstable angina may precede myocardial infarction. In adults who go to the emergency department with an unclear cause of pain, about 30% have pain due to coronary artery disease. Angina, shortness of breath, sweating, nausea or vomiting, and lightheadedness are signs of a heart attack or myocardial infarction, and immediate emergency medical services are crucial.
With advanced disease, the narrowing of coronary arteries reduces the supply of oxygen-rich blood flowing to the heart, which becomes more pronounced during strenuous activities during which the heart beats faster and has an increased oxygen demand. For some, this causes severe symptoms, while others experience no symptoms at all.
=== Symptoms in females ===
Symptoms in females can differ from those in males, and the most common symptom reported by females of all races is shortness of breath. Other symptoms more commonly reported by females than males are extreme fatigue, sleep disturbances, indigestion, and anxiety. However, some females experience irregular heartbeat, dizziness, sweating, and nausea. Burning, pain, or pressure in the chest or upper abdomen that can travel to the arm or jaw can also be experienced in females, but females less commonly report it than males. Generally, females experience symptoms 10 years later than males. Females are less likely to recognize symptoms and seek treatment.
== Risk factors ==
Coronary artery disease is characterized by heart problems that result from atherosclerosis. Atherosclerosis is a type of arteriosclerosis, the "chronic inflammation of the arteries which causes them to harden and accumulate cholesterol plaques (atheromatous plaques) on the artery walls". CAD has several well-established risk factors contributing to atherosclerosis: smoking, diabetes (including type 2 diabetes), high blood pressure (hypertension), abnormally high levels of cholesterol and other fats in the blood (dyslipidemia), being overweight or obese, lack of exercise, poor diet, depression, family history, psychological stress, and excessive alcohol consumption. About half of cases are linked to genetics. Apart from these classical risk factors, several unconventional risk factors have also been studied, including high serum fibrinogen, high C-reactive protein (CRP), chronic inflammatory conditions, hypovitaminosis D, high lipoprotein(a) levels, and elevated serum homocysteine. Smoking and obesity are associated with about 36% and 20% of cases, respectively. Smoking just one cigarette per day roughly doubles the risk of CAD. Lack of exercise has been linked to 7–12% of cases. Exposure to the herbicide Agent Orange may increase risk. Rheumatologic diseases such as rheumatoid arthritis, systemic lupus erythematosus, psoriasis, and psoriatic arthritis are independent risk factors as well.
Job stress appears to play a minor role, accounting for about 3% of cases. In one study, females who were free of stress from work life saw an increase in the diameter of their blood vessels, leading to decreased progression of atherosclerosis. In contrast, females who had high levels of work-related stress experienced a decrease in the diameter of their blood vessels and significantly increased disease progression.
=== Air pollution ===
Air pollution, both indoor and outdoor, is responsible for roughly 28% of deaths from CAD. This varies by region: in highly developed areas it is approximately 10%, whereas in Southern, East and West Africa, and South Asia, approximately 40% of deaths from CAD can be attributed to unhealthy air. In particular, fine particle pollution (PM2.5), which comes mostly from the burning of fossil fuels, is a key risk factor for CAD.
=== Blood fats ===
The types of fat consumed in the diet, including trans fat (trans unsaturated) and saturated fat, "influence the level of cholesterol that is present in the bloodstream". Unsaturated fats originate from plant sources (such as oils) and occur as two isomers, cis and trans. Cis unsaturated fats are bent in molecular structure, whereas trans fats are linear. Saturated fats originate from animal sources (such as animal fats) and are also linear in molecular structure. The linear configurations of trans unsaturated and saturated fats allow them to accumulate and stack easily at the arterial walls when consumed in high amounts, particularly when other measures supporting physical health are not maintained.
Fats and cholesterol are insoluble in blood and thus are amalgamated with proteins to form lipoproteins for transport. Low-density lipoproteins (LDL) transport cholesterol from the liver to the rest of the body and raise blood cholesterol levels. The consumption of "saturated fats increases LDL levels within the body, thus raising blood cholesterol levels".
High-density lipoproteins (HDL) are considered 'good' lipoproteins as they search for excess cholesterol in the body and transport it back to the liver for disposal. Trans fats also "increase LDL levels whilst decreasing HDL levels within the body, significantly raising blood cholesterol levels".
High levels of cholesterol in the bloodstream lead to atherosclerosis. With increased levels of LDL in the bloodstream, "LDL particles will form deposits and accumulate within the arterial walls, which will lead to the development of plaques, restricting blood flow". The resultant reduction in the heart's blood supply due to atherosclerosis in coronary arteries "causes shortness of breath, angina pectoris (chest pains that are usually relieved by rest), and potentially fatal heart attacks (myocardial infarctions)".
=== Genetics ===
The heritability of coronary artery disease has been estimated between 40% and 60%. Genome-wide association studies have identified over 160 genetic susceptibility loci for coronary artery disease.
=== Transcriptome ===
Several RNA transcripts associated with CAD (FoxP1, ICOSLG, IKZF4/Eos, SMYD3, TRIM28, and TCF3/E2A) are likely markers of regulatory T cells (Tregs), consistent with known reductions in Tregs in CAD.
The RNA changes are mostly related to ciliary and endocytic transcripts, which in the circulating immune system would be related to the immune synapse. One of the most differentially expressed genes, fibromodulin (FMOD), which is increased 2.8-fold in CAD, is found mainly in connective tissue and is a modulator of the TGF-beta signaling pathway. However, not all RNA changes may be related to the immune synapse. For example, Nebulette, the most down-regulated transcript (2.4-fold), is found in cardiac muscle; it is a 'cytolinker' that connects actin and desmin to facilitate cytoskeletal function and vesicular movement. The endocytic pathway is further modulated by changes in tubulin, a key microtubule protein, and fidgetin, a tubulin-severing enzyme that is a marker for cardiovascular risk identified by genome-wide association study. Protein recycling would be modulated by changes in the proteasomal regulator SIAH3, and the ubiquitin ligase MARCHF10. On the ciliary aspect of the immune synapse, several of the modulated transcripts are related to ciliary length and function. Stereocilin is a partner to mesothelin, a related super-helical protein, whose transcript is also modulated in CAD. DCDC2, a double-cortin protein, modulates ciliary length. In the signaling pathways of the immune synapse, numerous transcripts are directly related to T-cell function and the control of differentiation. Butyrophilin is a co-regulator for T cell activation. Fibromodulin modulates the TGF-beta signaling pathway, a primary determinant of Treg differentiation. Further impact on the TGF-beta pathway is reflected in concurrent changes in the BMP receptor 1B RNA (BMPR1B), because the bone morphogenic proteins are members of the TGF-beta superfamily, and likewise impact Treg differentiation. Several of the transcripts (TMEM98, NRCAM, SFRP5, SHISA2) are elements of the Wnt signaling pathway, which is a major determinant of Treg differentiation.
=== Other ===
Endometriosis in females under the age of 40.
Depression and hostility appear to be risks.
The number of categories of adverse childhood experiences (psychological, physical, or sexual abuse; violence against the mother; or living with household members who used substances, were mentally ill, suicidal, or incarcerated) showed a graded correlation with the presence of adult diseases, including coronary artery (ischemic heart) disease.
Hemostatic factors: High levels of fibrinogen and coagulation factor VII are associated with an increased risk of CAD.
Low hemoglobin.
In Asian populations, the β-fibrinogen gene −455G/A polymorphism has been associated with the risk of CAD.
Patient-specific vessel ageing or remodelling determines endothelial cell behaviour and thus disease growth and progression. Such 'hemodynamic markers' are patient-specific risk surrogates.
HIV is a known risk factor for developing atherosclerosis and coronary artery disease.
== Pathophysiology ==
Limitation of blood flow to the heart causes ischemia (cell starvation secondary to a lack of oxygen) of the heart's muscle cells. The heart's muscle cells may die from lack of oxygen and this is called a myocardial infarction (commonly referred to as a heart attack). It leads to damage, death, and eventual scarring of the heart muscle without regrowth of heart muscle cells. Chronic high-grade narrowing of the coronary arteries can induce transient ischemia which leads to the induction of a ventricular arrhythmia, which may terminate into a dangerous heart rhythm known as ventricular fibrillation, which often leads to death.
Typically, coronary artery disease occurs when part of the smooth, elastic lining inside a coronary artery (the arteries that supply blood to the heart muscle) develops atherosclerosis. With atherosclerosis, the artery's lining becomes hardened, stiffened, and accumulates deposits of calcium, fatty lipids, and abnormal inflammatory cells – to form a plaque. Calcium phosphate (hydroxyapatite) deposits in the muscular layer of the blood vessels appear to play a significant role in stiffening the arteries and inducing the early phase of coronary arteriosclerosis. This can be seen in a so-called metastatic mechanism of calciphylaxis as it occurs in chronic kidney disease and hemodialysis. Although these people have kidney dysfunction, almost fifty percent of them die due to coronary artery disease. Plaques can be thought of as large "pimples" that protrude into the channel of an artery, causing partial obstruction to blood flow. People with coronary artery disease might have just one or two plaques or might have dozens distributed throughout their coronary arteries. A more severe form is chronic total occlusion (CTO) when a coronary artery is completely obstructed for more than 3 months.
Microvascular angina is a type of angina pectoris in which chest pain and discomfort occur without evidence of blockages in the larger coronary arteries on coronary angiography.
The exact cause of microvascular angina is unknown. Explanations include microvascular dysfunction or epicardial atherosclerosis. For reasons that are not well understood, females are more likely than males to have it; however, hormones and other risk factors unique to females may play a role.
== Diagnosis ==
The diagnosis of CAD depends largely on the nature of the symptoms and imaging. The first investigation when CAD is suspected is an electrocardiogram (ECG/EKG), both for stable angina and acute coronary syndrome. An X-ray of the chest, blood tests and resting echocardiography may be performed.
For stable symptomatic patients, several non-invasive tests can diagnose CAD depending on a pre-assessment of the risk profile. Noninvasive imaging options include: computed tomography angiography (CTA) (anatomical imaging, the best test in patients with a low-risk profile to "rule out" the disease), positron emission tomography (PET), single-photon emission computed tomography (SPECT)/nuclear stress test/myocardial scintigraphy, and stress echocardiography (the latter three can be summarized as functional noninvasive methods and are typically better to "rule in"). Exercise ECG or stress testing is inferior to non-invasive imaging methods due to the risk of false negative and false positive test results. The use of non-invasive imaging is not recommended in individuals who exhibit no symptoms and are otherwise at low risk for developing coronary disease. Invasive testing with coronary angiography (ICA) can be used when non-invasive testing is inconclusive or shows a high event risk.
The diagnosis of microvascular angina (previously known as cardiac syndrome X), the rarer form of coronary artery disease that is more common in females, as mentioned, is a diagnosis of exclusion. Therefore, usually, the same tests are used as in any person suspected of having coronary artery disease:
Intravascular ultrasound
Magnetic resonance imaging (MRI)
=== Stable angina ===
Stable angina is the most common manifestation of ischemic heart disease, and is associated with reduced quality of life and increased mortality. It is caused by epicardial coronary stenosis which results in reduced blood flow and oxygen supply to the myocardium.
Stable angina is short-term chest pain during physical exertion caused by an imbalance between myocardial oxygen supply and metabolic oxygen demand. Various forms of cardiac stress tests may be used to induce both symptoms and detect changes by way of electrocardiography (using an ECG), echocardiography (using ultrasound of the heart) or scintigraphy (using uptake of radionuclide by the heart muscle). If part of the heart seems to receive an insufficient blood supply, coronary angiography may be used to identify stenosis of the coronary arteries and suitability for angioplasty or bypass surgery.
In minor to moderate cases, nitroglycerine may be used to alleviate acute symptoms of stable angina or may be used immediately before exertion to prevent the onset of angina. Sublingual nitroglycerine is most commonly used to provide rapid relief for acute angina attacks and as a complement to anti-anginal treatments in patients with refractory and recurrent angina. When nitroglycerine enters the bloodstream, it forms free radical nitric oxide, or NO, which activates guanylate cyclase and in turn stimulates the release of cyclic GMP. This molecular signaling stimulates smooth muscle relaxation, resulting in vasodilation and consequently improved blood flow to heart regions affected by atherosclerotic plaque.
Stable coronary artery disease (SCAD) is also often called stable ischemic heart disease (SIHD). A 2015 monograph explains that "Regardless of the nomenclature, stable angina is the chief manifestation of SIHD or SCAD." There are U.S. and European clinical practice guidelines for SIHD/SCAD. In patients with non-severe asymptomatic aortic valve stenosis and no overt coronary artery disease, increased troponin T (above 14 pg/mL) was found to be associated with an increased 5-year rate of ischemic cardiac events (myocardial infarction, percutaneous coronary intervention, or coronary artery bypass surgery).
=== Acute coronary syndrome ===
Diagnosis of acute coronary syndrome generally takes place in the emergency department, where ECGs may be performed sequentially to identify "evolving changes" (indicating ongoing damage to the heart muscle). Diagnosis is clear-cut if ECGs show elevation of the "ST segment", which in the context of severe typical chest pain is strongly indicative of an acute myocardial infarction (MI); this is termed a STEMI (ST-elevation MI) and is treated as an emergency with either urgent coronary angiography and percutaneous coronary intervention (angioplasty with or without stent insertion) or with thrombolysis ("clot buster" medication), whichever is available. In the absence of ST-segment elevation, heart damage is detected by cardiac markers (blood tests that identify heart muscle damage). If there is evidence of damage (infarction), the chest pain is attributed to a "non-ST elevation MI" (NSTEMI). If there is no evidence of damage, the term "unstable angina" is used. This process usually necessitates hospital admission and close observation on a coronary care unit for possible complications (such as cardiac arrhythmias – irregularities in the heart rate). Depending on the risk assessment, stress testing or angiography may be used to identify and treat coronary artery disease in patients who have had an NSTEMI or unstable angina.
=== Risk assessment ===
There are various risk assessment systems for determining the risk of coronary artery disease, with various emphasis on the different variables above. A notable example is Framingham Score, used in the Framingham Heart Study. It is mainly based on age, gender, diabetes, total cholesterol, HDL cholesterol, tobacco smoking, and systolic blood pressure. When predicting risk in younger adults (18–39 years old), the Framingham Risk Score remains below 10–12% for all deciles of baseline-predicted risk.
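The structure of such scores can be illustrated with a deliberately simplified sketch: a weighted sum of risk factors passed through a logistic link. All names and weights below are hypothetical placeholders chosen only to show the shape of the calculation; actual Framingham coefficients come from published regression fits on cohort data:

```python
import math

# HYPOTHETICAL weights for illustration only -- not Framingham coefficients.
HYPOTHETICAL_WEIGHTS = {
    "age": 0.05,            # per year of age
    "systolic_bp": 0.02,    # per mmHg above 120
    "smoker": 0.7,          # indicator (0 or 1)
    "diabetes": 0.6,        # indicator (0 or 1)
}

def toy_risk(age, systolic_bp, smoker, diabetes, intercept=-7.0):
    """Toy logistic-style score: weighted risk factors -> probability."""
    x = (intercept
         + HYPOTHETICAL_WEIGHTS["age"] * age
         + HYPOTHETICAL_WEIGHTS["systolic_bp"] * max(0, systolic_bp - 120)
         + HYPOTHETICAL_WEIGHTS["smoker"] * smoker
         + HYPOTHETICAL_WEIGHTS["diabetes"] * diabetes)
    return 1 / (1 + math.exp(-x))   # logistic link -> probability in (0, 1)

# A 55-year-old smoker with systolic BP 140 and no diabetes
print(round(toy_risk(55, 140, 1, 0), 3))
```

Real risk calculators differ in functional form and calibration, but share this basic pattern of combining weighted risk factors into a single probability.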
Polygenic score is another way of risk assessment. In one study the relative risk of incident coronary events was 91% higher among participants at high genetic risk than among those at low genetic risk.
== Prevention ==
Up to 90% of cardiovascular disease may be preventable if established risk factors are avoided. Prevention involves adequate physical exercise, decreasing obesity, treating high blood pressure, eating a healthy diet, decreasing cholesterol levels, and stopping smoking. Medications and exercise are roughly equally effective. High levels of physical activity reduce the risk of coronary artery disease by about 25%. Life's Essential 8 are the key measures for improving and maintaining cardiovascular health, as defined by the American Heart Association. AHA added sleep as a factor influencing heart health in 2022.
Most guidelines recommend combining these preventive strategies. A 2015 Cochrane Review found some evidence that counseling and education to bring about behavioral change might help in high-risk groups. However, there was insufficient evidence to show an effect on mortality or actual cardiovascular events.
In diabetes mellitus, there is little evidence that very tight blood sugar control improves cardiac risk although improved sugar control appears to decrease other problems such as kidney failure and blindness.
A 2024 study published in The Lancet Diabetes & Endocrinology found that the oral glucose tolerance test (OGTT) is more effective than hemoglobin A1c (HbA1c) for detecting dysglycemia in patients with coronary artery disease. The study highlighted that 2-hour post-load glucose levels of at least 9 mmol/L were strong predictors of cardiovascular outcomes, while HbA1c levels of at least 5.9% were also significant but not independently associated when combined with OGTT results.
=== Diet ===
A diet high in fruits and vegetables decreases the risk of cardiovascular disease and death. Vegetarians have a lower risk of heart disease, possibly due to their greater consumption of fruits and vegetables. Evidence also suggests that the Mediterranean diet and a high fiber diet lower the risk.
The consumption of trans fat (commonly found in hydrogenated products such as margarine) has been shown to cause a precursor to atherosclerosis and increase the risk of coronary artery disease.
Evidence does not support a beneficial role for omega-3 fatty acid supplementation in preventing cardiovascular disease (including myocardial infarction and sudden cardiac death).
=== Secondary prevention ===
Secondary prevention is preventing further sequelae of already established disease. Effective lifestyle changes include:
Weight control
Smoking cessation
Avoiding the consumption of trans fats (in partially hydrogenated oils)
Decreasing psychosocial stress
Exercise
Aerobic exercise, like walking, jogging, or swimming, can reduce the risk of mortality from coronary artery disease. Aerobic exercise can help decrease blood pressure and the amount of blood cholesterol (LDL) over time. It also increases HDL cholesterol.
Although exercise is beneficial, it is unclear whether doctors should spend time counseling patients to exercise. The U.S. Preventive Services Task Force found "insufficient evidence" to recommend that doctors counsel patients on exercise but "it did not review the evidence for the effectiveness of physical activity to reduce chronic disease, morbidity, and mortality", only the effectiveness of counseling itself. The American Heart Association, based on a non-systematic review, recommends that doctors counsel patients on exercise.
Psychological symptoms are common in people with CHD. Many psychological treatments may be offered following cardiac events. There is no evidence that they change mortality, the risk of revascularization procedures, or the rate of non-fatal myocardial infarction.
Antibiotics for secondary prevention of coronary heart disease
Early studies suggested that antibiotics might help patients with coronary disease to reduce the risk of heart attacks and strokes. However, a 2021 Cochrane meta-analysis found that antibiotics given for secondary prevention of coronary heart disease are harmful, being associated with increased mortality and occurrence of stroke. Antibiotic use is therefore not currently supported for the secondary prevention of coronary heart disease.
=== Neuropsychological assessment ===
A thorough systematic review found a link between CHD and brain dysfunction in females. Consequently, since research shows that cardiovascular diseases such as CHD can act as a precursor to dementia, including Alzheimer's disease, individuals with CHD should have a neuropsychological assessment.
== Treatment ==
There are a number of treatment options for coronary artery disease:
Lifestyle changes
Medical treatment – commonly prescribed drugs (e.g., cholesterol lowering medications, beta-blockers, nitroglycerin, calcium channel blockers, etc.);
Coronary interventions as angioplasty and coronary stent;
Coronary artery bypass grafting (CABG)
=== Medications ===
Statins, which reduce cholesterol, reduce the risk of coronary artery disease
Nitroglycerin
Calcium channel blockers and/or beta-blockers
Antiplatelet drugs such as aspirin
It is recommended that blood pressure typically be reduced to less than 140/90 mmHg. The diastolic blood pressure should not be below 60 mmHg. Beta-blockers are recommended first line for this use.
==== Aspirin ====
In those with no previous history of heart disease, aspirin decreases the risk of a myocardial infarction but does not change the overall risk of death. Aspirin therapy to prevent heart disease is thus recommended only in adults who are at increased risk for cardiovascular events, which may include postmenopausal females, males above 40, and younger people with risk factors for coronary heart disease, including high blood pressure, a family history of heart disease, or diabetes. The benefits outweigh the harms most favorably in people at high risk for a cardiovascular event, where high risk is defined as at least a 3% chance over five years, but others with lower risk may still find the potential benefits worth the associated risks.
==== Anti-platelet therapy ====
Clopidogrel plus aspirin (dual anti-platelet therapy) reduces cardiovascular events more than aspirin alone in those with a STEMI. In others at high risk but not having an acute event, the evidence is weak. Specifically, its use does not change the risk of death in this group. In those who have had a stent, more than 12 months of clopidogrel plus aspirin does not affect the risk of death.
=== Surgery ===
Revascularization for acute coronary syndrome has a mortality benefit. Percutaneous revascularization for stable ischaemic heart disease does not appear to have benefits over medical therapy alone. In those with disease in more than one artery, coronary artery bypass grafts appear better than percutaneous coronary interventions. Newer "anaortic" or no-touch off-pump coronary artery revascularization techniques have shown reduced postoperative stroke rates comparable to percutaneous coronary intervention. Hybrid coronary revascularization has also been shown to be a safe and feasible procedure that may offer some advantages over conventional CABG though it is more expensive.
== Epidemiology ==
As of 2010, CAD was the leading cause of death globally resulting in over 7 million deaths. This increased from 5.2 million deaths from CAD worldwide in 1990. It may affect individuals at any age but becomes dramatically more common at progressively older ages, with approximately a tripling with each decade of life. Males are affected more often than females.
The World Health Organization reported that: "The world's biggest killer is ischemic heart disease, responsible for 13% of the world's total deaths. Since 2000, the largest increase in deaths has been for this disease, rising by 2.7 million to 9.1 million deaths in 2021."
It is estimated that 60% of the world's cardiovascular disease burden will occur in the South Asian subcontinent despite only accounting for 20% of the world's population. This may be secondary to a combination of genetic predisposition and environmental factors. Organizations such as the Indian Heart Association are working with the World Heart Federation to raise awareness about this issue.
Coronary artery disease is the leading cause of death for both males and females and accounts for approximately 600,000 deaths in the United States every year. According to present trends in the United States, half of healthy 40-year-old males will develop CAD in the future, and one in three healthy 40-year-old females. It is the most common reason for death of males and females over 20 years of age in the United States.
A recent meta-analysis of data from 2,111,882 patients found that the incidence of coronary artery disease in breast cancer survivors was 4.29 (95% CI 3.09–5.94) per 1000 person-years.
== Society and culture ==
=== Names ===
Other terms sometimes used for this condition are "hardening of the arteries" and "narrowing of the arteries". In Latin it is known as morbus ischaemicus cordis (MIC).
=== Support groups ===
The Infarct Combat Project (ICP) is an international nonprofit organization founded in 1998 which tries to decrease ischemic heart diseases through education and research.
=== Industry influence on research ===
In 2016, research into the internal documents of the Sugar Research Foundation revealed that the trade association for the sugar industry in the US had sponsored an influential literature review, published in 1965 in the New England Journal of Medicine, that downplayed early findings about the role of a sugar-heavy diet in the development of CAD and emphasized the role of fat; that review influenced decades of research funding and guidance on healthy eating.
== Research ==
Research efforts are focused on new angiogenic treatment modalities and various (adult) stem-cell therapies. A region on chromosome 17 has been linked to families with multiple cases of myocardial infarction. Other genome-wide studies have identified a firm risk variant on chromosome 9 (9p21.3). However, these and other loci are found in intergenic segments and require further research to understand how the phenotype is affected.
A more controversial link is that between Chlamydophila pneumoniae infection and atherosclerosis. While this intracellular organism has been demonstrated in atherosclerotic plaques, evidence is inconclusive regarding whether it can be considered a causative factor. Treatment with antibiotics in patients with proven atherosclerosis has not demonstrated a decreased risk of heart attacks or other coronary vascular diseases.
Myeloperoxidase has been proposed as a biomarker.
Plant-based nutrition has been suggested as a way to reverse coronary artery disease, but strong evidence is still lacking for claims of potential benefits.
Several immunosuppressive drugs targeting the chronic inflammation in coronary artery disease have been tested.
== See also ==
Mental stress-induced myocardial ischemia
== References ==
== External links ==
Risk Assessment of having a heart attack or dying of coronary artery disease, from the American Heart Association.
"Coronary Artery Disease". MedlinePlus. U.S. National Library of Medicine.
Norman J (7 October 2019). "Managing Diabetes with Blood Glucose Control". Endocrineweb.
The theory of planned behavior (TPB) is a psychological theory that links beliefs to behavior. The theory maintains that three core components, namely, attitude, subjective norms, and perceived behavioral control, together shape an individual's behavioral intentions. In turn, a tenet of TPB is that behavioral intention is the most proximal determinant of human social behavior.
The theory was elaborated by Icek Ajzen for the purpose of improving the predictive power of the theory of reasoned action (TRA). Ajzen's idea was to include perceived behavioral control, which was not a component of TRA, in TPB. TPB has been applied to studies of the relations among beliefs, attitudes, behavioral intentions, and behaviors in various human domains. These domains include, but are not limited to, advertising, public relations, advertising campaigns, healthcare, sport management, consumer/household finance, and sustainability.
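In empirical applications, the three constructs are typically measured on Likert scales and combined, for example by linear regression, to predict intention. A minimal sketch of that structure, with purely hypothetical weights:

```python
# Minimal sketch of how TPB constructs are combined in practice: intention
# modeled as a weighted sum of attitude, subjective norm, and perceived
# behavioral control. The weights are hypothetical illustration values;
# in a real study they would be estimated by regression on survey data.
def behavioral_intention(attitude, subjective_norm, perceived_control,
                         w_att=0.4, w_norm=0.3, w_pbc=0.3):
    """All constructs scored on the same scale (e.g., a 1-7 Likert scale)."""
    return (w_att * attitude
            + w_norm * subjective_norm
            + w_pbc * perceived_control)

# A respondent with a favorable attitude (6), moderate subjective norms (5),
# and lower perceived control (4) yields an intention score of about 5.1
print(behavioral_intention(6, 5, 4))
```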
== History ==
=== Extension from the theory of reasoned action ===
Icek Ajzen (1985) proposed TPB in his chapter "From intentions to actions: A theory of planned behavior." TPB developed out of TRA, a theory first proposed in 1980 by Martin Fishbein and Ajzen. TRA was in turn grounded in various theories bearing on attitude and attitude change, including learning theories, expectancy-value theories, attribution theory, and consistency theories (e.g., Heider's balance theory, Osgood and Tannenbaum's congruity theory, and Festinger's dissonance theory). According to TRA, if an individual evaluates a suggested behavior as positive (attitude), and if he or she believes significant others want the person to perform the behavior (subjective norm), the intention (motivation) to perform the behavior will be greater and the individual will be more likely to perform the behavior. Attitudes and subjective norms are highly correlated with behavioral intention; behavioral intention is correlated with actual behavior.
Research, however, shows that behavioral intention does not always lead to actual behavior. Because behavioral intention cannot be the exclusive determinant of behavior where an individual's control over the behavior is incomplete, Ajzen introduced TPB by adding to TRA the component "perceived behavioral control". In this way he extended TRA to better predict actual behavior.
Perceived behavioral control refers to the degree to which a person believes that he or she can perform a given behavior. Perceived behavioral control involves the perception of the individual's own ability to perform the behavior. In other words, perceived behavioral control is behavior- or goal-specific. That perception varies by environmental circumstances and the behavior involved. The theory of planned behavior suggests that people are much more likely to intend to enact certain behaviors when they feel that they can enact them successfully.
The theory has thus improved upon TRA.
=== Extension of self-efficacy ===
Along with attitudes and subjective norms (which make up TRA), TPB adds the concept of perceived behavioral control, which grew out of self-efficacy theory (SET). Bandura proposed the self-efficacy construct in 1977, in connection with social cognitive theory. Self-efficacy refers to a person's expectation or confidence that he or she can master a behavior or accomplish a goal; an individual has different levels of self-efficacy depending on the behavior or intent. Bandura distinguished two distinct types of goal-related expectations: self-efficacy and outcome expectancy. He defined self-efficacy as the conviction that one can successfully execute the behavior required to produce the outcome in question. Outcome expectancy refers to a person's estimation that a given behavior will lead to certain outcomes. Bandura advanced the view that self-efficacy is the most important precondition for behavioral change, since it is key to the initiation of coping behavior.
Previous investigations have shown that a person's behavior is strongly influenced by the individual's confidence in his or her ability to perform that behavior. As self-efficacy contributes to explanations of various relationships among beliefs, attitudes, intentions, and behavior, TPB has been widely applied in health-related fields such as helping preadolescents to engage in more physical activity, thereby improving their mental health, and getting adults to exercise more.
== Key concepts ==
=== Normative beliefs and subjective norms ===
Normative belief: an individual's perception of social normative pressures, or the beliefs of relevant others bearing on what behaviors should or should not be performed.
Subjective norm: an individual's perception about the particular behavior, which is influenced by the judgment of significant others (e.g., parents, spouse, friends, teachers).
=== Control beliefs and perceived behavioral control ===
Control beliefs: an individual's beliefs about the presence of factors that may facilitate or hinder performance of the behavior.
Perceived behavioral control: an individual's perceived ease or difficulty of performing the particular behavior. The concept of perceived behavioral control is conceptually related to self-efficacy. It is assumed that perceived behavioral control is determined by the total set of accessible control beliefs.
=== Behavioral intention and behavior ===
Behavioral intention: an individual's readiness to perform a given behavior. It is assumed to be an immediate antecedent of behavior. It is based on attitude toward the behavior, subjective norm, and perceived behavioral control, with each predictor weighted for its importance in relation to the behavior and population of interest.
Behavior: an individual's observable response in a given situation with respect to a given target. Ajzen advanced the view that a behavior is a function of compatible intentions and perceptions of behavioral control. Perceived behavioral control is expected to moderate the effect of intention on behavior, such that a favorable intention produces the behavior only when perceived behavioral control is strong.
== Conceptual / operational comparison ==
=== Perceived behavioral control vs. self-efficacy ===
Ajzen (1991) wrote that the role of perceived behavioral control in the theory of planned behavior derived from Bandura's concept of self-efficacy. More recently, Fishbein and Cappella advanced the view that self-efficacy is equivalent to perceived behavioral control in Ajzen's integrative model. Perceived behavioral control can be assessed with the help of items from a self-efficacy scale.
In previous studies, the construction of measures of perceived behavioral control has had to be tailored to each particular health-related behavior. For example, for smoking, an item could read "I don't think I am addicted because I can really just not smoke and not crave for it" or "It would be really easy for me to quit."
The concept of self-efficacy is rooted in Bandura's social cognitive theory. It refers to the conviction that one can successfully execute the behavior required to attain a desired goal. The concept of self-efficacy is used as perceived behavioral control, which means the perception of the ease or difficulty of the particular behavior. It is linked to control beliefs, which refer to beliefs about the presence of factors that may facilitate or impede performance of the behavior.
Perceived behavioral control is usually measured with self-report instruments comprising items that begin with the stem, "I am sure I can ... (e.g., exercise, quit smoking, etc.)." Such instruments attempt to measure the individual's confidence that he or she can execute a given behavior.
=== Attitude toward behavior vs. outcome expectancy ===
The theory of planned behavior specifies the nature of the relationship between beliefs and attitudes. According to the theory, an individual's evaluation of, or attitude toward, a behavior is determined by his or her accessible beliefs about the behavior. The term belief in this theory refers to the subjective probability that the behavior will produce a certain outcome. Specifically, the evaluation of each outcome contributes to the attitude commensurately with the person's subjective probability that the behavior produces the outcome in question. A belief is accessible if available from long-term memory.
The concept of outcome expectancy originated in the expectancy-value model. Outcome expectancy can be a belief, attitude, opinion, or expectation. According to the theory of planned behavior, an individual's positive evaluation of his or her performance of a particular behavior is similar to the concept of perceived benefits. A positive evaluation refers to a belief regarding the effectiveness of the proposed behavior in reducing the vulnerability to negative outcomes. By contrast, a negative self-evaluation refers to a belief regarding adverse consequences that can result from the enactment of the behavior.
=== Social influence ===
The concept of social influence has been assessed in both the theory of reasoned action and theory of planned behavior. Individuals' elaborative thoughts on subjective norms are perceptions of whether they are expected by their friends, their family, and society in general to perform a particular behavior. Social influence is measured by evaluating the attitudes of social groups. For example, in the case of smoking:
Subjective norms the individual attaches to the peer group include thoughts such as, "Most of my friends smoke" or "I feel ashamed of smoking in front of a group of friends who don't smoke";
Subjective norms the individual attaches to the family include thoughts such as, "All of my family smokes, and it seems natural to start smoking" or "My parents were really mad at me when I started smoking"; and
Subjective norms the individual attaches to society or the general culture include thoughts such as, "Everyone is against smoking" or "We just assume everyone is a nonsmoker."
While most models are conceptualized within individual cognitive space, the theory of planned behavior considers social influence in terms of social norms and normative beliefs. Given that an individual's behavior (e.g., health-related decision-making such as diet, condom use, quitting smoking, and drinking, etc.) might very well be located in and dependent on social networks and organizations (e.g., peer group, family, school, and workplace), social influence has been a welcomed addition to the theory.
== Model ==
Human behavior is guided by three kinds of considerations: behavioral beliefs, normative beliefs, and control beliefs. In their respective aggregates, behavioral beliefs produce a favorable or unfavorable attitude toward the behavior, normative beliefs result in a subjective norm, and control beliefs pertain to perceived behavioral control.
In combination, the attitude toward the behavior, the subjective norm, and the perceived behavioral control lead to the formation of a behavioral intention. In particular, perceived behavioral control is presumed not only to affect actual behavior directly, but also to affect it indirectly through behavioral intention.
As a general rule, when (a) the individual has a favorable attitude toward a behavior, (b) the attitude is aligned with the relevant norms, and (c) the individual perceives that s/he has a high level of behavioral control, a strong intention to perform the behavior in question is expected. Finally, given a sufficient degree of actual control over the behavior, the individual is expected to carry out his or her intentions when the opportunity arises.
== Formula ==
In a simple form, behavioral intention for the theory of planned behavior can be expressed as the following mathematical function:
{\displaystyle BI=w_{A}A+w_{SN}SN+w_{PBC}PBC}
where BI is behavioral intention, A is the attitude toward the behavior, SN is the subjective norm, PBC is perceived behavioral control, and the w terms are empirically derived weights.
The three factors being proportional to their underlying beliefs:
{\displaystyle {\begin{aligned}A&\propto \sum _{i=1}^{n}b_{i}e_{i}\\SN&\propto \sum _{i=1}^{n}n_{i}m_{i}\\PBC&\propto \sum _{i=1}^{n}c_{i}p_{i}\end{aligned}}}
where b_i is the strength of each behavioral belief and e_i the evaluation of its outcome; n_i is the strength of each normative belief and m_i the motivation to comply with the referent in question; and c_i is the strength of each control belief and p_i the perceived power of the control factor.
To the extent that it is an accurate reflection of actual behavioral control, perceived behavioral control can, together with intention, be used to predict behavior.
{\displaystyle B=w_{BI}BI+w_{PBC}PBC}
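The expectancy-value computations above can be sketched in a few lines of Python. All belief scores and weights below are hypothetical illustrations; in actual TPB research the weights are estimated empirically (typically by regression) for a particular behavior and population.

```python
# Sketch of the TPB expectancy-value model.
# Every numeric value here is a hypothetical illustration.

def expectancy_value(strengths, evaluations):
    """Sum of belief-strength x evaluation products (e.g., sum of b_i * e_i)."""
    return sum(s * e for s, e in zip(strengths, evaluations))

# Hypothetical accessible beliefs about a behavior (e.g., daily exercise).
A   = expectancy_value([0.9, 0.6], [2, 1])  # attitude: b_i * e_i
SN  = expectancy_value([0.8, 0.5], [1, 2])  # subjective norm: n_i * m_i
PBC = expectancy_value([0.7, 0.4], [2, 2])  # perceived control: c_i * p_i

# Hypothetical regression weights for this behavior and population.
w_A, w_SN, w_PBC = 0.5, 0.2, 0.3
BI = w_A * A + w_SN * SN + w_PBC * PBC      # behavioral intention

# Intention and perceived behavioral control jointly predict behavior.
w_BI, w_PBC2 = 0.6, 0.4
B = w_BI * BI + w_PBC2 * PBC

print(round(BI, 2), round(B, 2))
```

Note that the proportionality in the belief equations means the raw sums are only determined up to a scaling constant; the regression weights absorb that scaling when intention is predicted.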
== Applications of the theory ==
The theory of planned behavior has been applied to a number of research areas including health-related behaviors, environmental psychology, and voting behavior.
=== Health-related behaviors ===
Several studies found that, compared to TRA, TPB better predicts health-related behavioral intention. TPB has improved the predictability of intention with regard to several health-related behaviors, including condom use, choice of leisure activities, exercise, and diet. In this research, attitudes and intentions tend to be mediated by goals and needs. For example, an individual may be guided by the goal of losing 5 kg of weight in 60 days; a positive attitude and intention towards dieting would be important. However, if a need is taken into account, such as a need for a partner in an individual's effort to lose weight and the person is unable to find such a partner, the individual is not likely to lose weight.
TPB can also be applied to the area of nutrition-related interventions. In a study by Sweitzer et al., TPB-related behavioral constructs guided the development of intervention strategies. TPB was applied in such a way as to encourage parents to include more fruit, vegetables and whole grains in the lunches they packed for their preschool children. Knowledge/behavioral control, self-efficacy/perceived behavioral control, subjective norms, and intentions were assessed. The researchers observed in the TPB-oriented intervention a significant increase in vegetables and whole grains in the lunches parents prepared for their children.
TPB has guided research aimed at preventing weight regain in individuals who had recently experienced a significant weight loss. McConnon et al. (2012) found that perceived need to control weight predicts the behavior needed for weight maintenance. TPB can also help in assessing the behavioral intentions of practitioners who promote specific health behaviors. Chase et al. (2003) studied dietitians' intentions to promote the consumption of whole grain foods. The study team found that the strongest indicator of dietitians' intentions to promote the consumption of whole grain foods was their normative beliefs about diet. However, some dietitians' knowledge was problematic, with only 60% of dietitians being able to correctly identify a whole grain product from a food label.
More recent research based on TPB examined college students' intentions to smoke e-cigarettes. Studies found that attitudes toward smoking and social norms significantly predicted college students' behavior, as TPB suggests. Positive attitudes toward smoking and normalizing the behavior were, in part, helped by advertisements on the Internet. With this knowledge, a smoking prevention campaign was started, specifically targeting college students collectively, not just as individuals.
The theory of planned behavior model has thus been helpful in understanding health-related behaviors and developing interventions aimed at modifying those behaviors.
=== Environmental psychology ===
Another application of TPB has been in the field of environmental psychology. Generally speaking, actions that are environmentally friendly carry a positive normative belief. That is to say, behaviors that are consistent with environmental sustainability are widely promoted as positive behaviors. However, although there may be a behavioral intention to practice such behaviors, constraints can impede a sense of perceived behavioral control. An example of such a constraint is the belief that one's behavior will not have an impact. There are external constraints as well. For example, if an individual intends to behave in an environmentally responsible way but recycling infrastructure is absent in the individual's community, perceived behavioral control is likely to be low. The application of TPB in these situations helps explain contradictions such as individuals having positive attitudes toward sustainability but engaging in behavior that is antithetical to the idea of sustainability.
Other research has found that attitudes toward climate change, perceived behavioral control, and subjective norms are associated with the intention to adopt a pro-environmental behavior. This knowledge can be applied to policy-making aimed at increasing environmentally friendly behavior.
Additionally, more recent research has examined how Gen Z relates to environmental psychology through the lens of TPB. For example, in 2020, Chaturvedi et al. conducted research on the behavioral intentions of Gen Z regarding recycled clothing. They found that environmental concern, perceived value, and willingness to pay were the leading factors in purchasing intentions. Similarly, Noor et al. examined green purchases and activities among Gen Z in 2017. They found that Gen Z intends to consume green products because of the positive associations attached to them, along with subjective norms, perceived green knowledge, and the social visibility of those purchases. Outside of personal product consumption, Ngo and Ha examined Gen Z's use of green tourism in 2023. They found that Gen Z used knowledge sharing as a precursor to shaping perceptions and attitudes towards green tourism services. Moreover, they recognized the importance of knowledge sharing for raising awareness of not only green tourism but green practices altogether, in order to induce positive attitudes about sustainable practices.
=== Voting behavior ===
TPB has guided political scientists' research on voter turnout and behavior. It has also been applied to understanding legislator behavior.
=== Financial behavior ===
The theory of planned behavior (TPB) is widely utilized in the field of household financial behavior research. This theory helps to understand and predict various financial decisions and behaviors, including investment choices, debt management, mortgage use, cash, saving, and credit management. It posits that individual intentions and attitudes, subjective norms, and perceived behavioral control are key factors influencing behavior. Over the years, researchers have applied and expanded upon this theory to gain insights into specific financial behaviors and their determinants. For example, in a study examining investment decisions, East (1993) found that the subjective norm (influence of friends and relatives) and perceived control (importance of easy access to funds) significantly influenced individuals' investment choices. This highlights the importance of social influences and the perceived ease of acting in financial decision-making. In another study on individual debt behavior, Xiao and Wu (2008) extended the TPB model and discovered that customer satisfaction contributed to behavioral intention and influenced actual behavior, emphasizing the role of client satisfaction in shaping financial actions. Similarly, in a study involving mortgage clients, Bansal & Taylor (2002) explored factors affecting customer service switching behavior within the context of the TPB. They identified significant interactions between perceived control and intention, perceived control and attitude, and attitude and subjective norms, all of which shaped behavior intention. The TPB has also been applied to study the financial behaviors of college students concerning cash, credit, and saving management, providing valuable insights into how young adults form their financial behaviors based on their intentions, attitudes, social norms, and perceived control.
=== Important steps in applying TPB to help change behavior ===
With TPB as a theoretical framework, certain steps can be followed in efforts to increase the chances of behavior change. The team implementing an intervention should specify the action, target, context, and time. For example, a goal might be to "consume at least one serving of whole grains during breakfast each day in the forthcoming month." In this example, "consuming" is the action, "one serving of whole grains" is the target, "during breakfast each day" is the context, and "in the forthcoming month" is the time. Once a goal is specified, an elicitation phase can be used to identify salient factors that bear on achieving the goal. The pertinent beliefs regarding a specific behavior may differ in different populations. Conducting open-ended elicitation interviews can be useful in applying TPB. Elicitation interviews help to identify relevant behavioral outcomes, referents, cultural factors, facilitating factors, and barriers to change in the focal behavior and target population. The following are sample questions that may be used during an elicitation interview:
What do you like/dislike about behavior X?
What are some disadvantages of doing behavior X?
Who would be against your doing behavior X?
Who can you think of that would do behavior X?
What things make it hard for you to do behavior X?
If you want to do behavior X, how certain are you that you can?
== Evaluation of the theory ==
=== Strengths ===
TPB covers people's volitional behavior that cannot be explained by TRA. An individual's behavioral intention cannot be the exclusive determinant of behavior where an individual's control over the behavior is incomplete. By adding "perceived behavioral control," TPB can explain the relationship between behavioral intention and actual behavior.
Several studies found that, compared to TRA, TPB better predicts health-related behavioral intentions. TPB has improved the predictability of intention in various health-related areas, including condom use, leisure, exercise, diet, etc. In addition, TPB (and TRA) have helped to explain the individual's social behavior by including social norms as an important contributing explanatory factor.
=== Limitations ===
More recently, some scholars criticize the theory because it ignores the individual's needs prior to engaging in a certain action, needs that would affect behavior regardless of expressed attitudes. For example, a person might have a positive attitude regarding consuming beefsteak and yet not order a beefsteak because she is not hungry. Or, a person might have a negative attitude towards drinking and little intention to drink and yet engage in drinking because he is seeking group membership.
Another limitation is that TPB does not integrate into the theory the role the individual's emotions play in the development of intentions and during decision-making. In addition, most of the research on TPB is correlational. More evidence from randomized experiments would be helpful.
Some experimental studies challenge the assumption that intentions and behavior are consequences of attitudes, social norms, and perceived behavioral control. To illustrate, Sussman et al. (2019) prompted participants to form the intention to support a specific environmental organization, for example, to sign a petition. After this intention was formed, attitudes, social norms, and perceived behavioral control shifted. Participants became more likely to report positive attitudes towards this organization and were more inclined to assume that members of their social group share comparable attitudes. These findings imply that the associations among the three key elements—attitudes, social norms, and perceived behavioral control—and intentions may be bi-directional.
== See also ==
Behavioral change
Theory of reasoned action
Reasoned action approach
== References ==
Armitage, C.J.; Conner, M. (2001). "Efficacy of the theory of planned behavior: a meta-analytic review". British Journal of Social Psychology. 40 (4): 471–499. doi:10.1348/014466601164939. PMID 11795063. S2CID 28044256.
Ajzen, I. & Fishbein, M. (2005). The influence of attitudes on behaviour. In Albarracin, D.; Johnson, B.T.; Zanna M.P. (Eds.), The handbook of attitudes, Lawrence Erlbaum Associates.
== External links ==
Information about the theory on Icek Ajzen's homepage | Wikipedia/Theory_of_planned_behavior |
A notifiable disease is any disease that is required by law to be reported to government authorities. The collation of information allows the authorities to monitor the disease, and provides early warning of possible outbreaks. In the case of livestock diseases, there may also be the legal requirement to kill the infected livestock upon notification. Many governments have enacted regulations for reporting of both human and animal (generally livestock) diseases.
== Global ==
=== Human ===
The World Health Organization's International Health Regulations 1969 require disease reporting to the organization in order to help with its global surveillance and advisory role. The 1969 regulations are rather limited, focusing on the reporting of three main diseases: cholera, yellow fever and plague. Smallpox, a contagious disease that remained endemic into the 20th century, was eliminated through mass vaccination, after which WHO certified it eradicated in 1980. This marked the first (and thus far only) human disease to be successfully eradicated.
The revised International Health Regulations 2005 broadens this scope and is no longer limited to the notification of specific diseases. Whilst it does identify a number of specific diseases, it also defines a limited set of criteria to assist in deciding whether an event is notifiable to WHO.
WHO states that "Notification is now based on the identification within a State Party’s territory of an "event that may constitute a public health emergency of international concern". This non-disease specific definition of notifiable events expands the scope of the IHR (2005) to include any novel or evolving risk to international public health, taking into account the context in which the event occurs. Such notifiable events can extend beyond communicable diseases and arise from any origin or source. This broad notification requirement aims at detecting, early on, all public health events that could have serious and international consequences, and preventing or containing them at source through an adapted response before they spread across borders."
=== Animal ===
The OIE (World Organisation for Animal Health) monitors specific animal diseases on a global scale.
Diseases Notifiable to the OIE
== Australia ==
=== Human ===
The National Notifiable Diseases Surveillance System (NNDSS) was established in 1990. Notifications are made to the States or Territory health authority and computerised, de-identified records are then supplied to the Department of Health and Ageing for collation, analysis and publication. The Australian national notifiable diseases list and case definitions are available online.
=== Animal ===
Within Australia the Department of Agriculture, Fisheries and Forestry regulates the notification of infectious animal diseases.
National List of Notifiable Animal Diseases
State and Territory Notifiable Animal Diseases Lists
== Brazil ==
=== Human ===
Notification is regulated under Brazilian Ministry of Health Ordinance number 1.271 of June 6, 2014.
List of national notifiable diseases
== Canada ==
Diseases of concern to public health officials have been tracked in Canada since 1924.
A subcommittee of the National Advisory Committee on Epidemiology was set up in 1987. At the time, 34 diseases were surveyed on the list of communicable diseases while another 13 were recommended for addition to the list. As of 1 January 2000, a total of 43 diseases were given the status of notifiable. In 2006, the Final report and recommendations from the National Notifiable Diseases Working Group found that certain diseases should be added and certain diseases should not.
The Canadian Notifiable Disease Surveillance System is a searchable database tool provided by the Public Health Agency of Canada.
=== Human ===
List of national notifiable diseases
== France ==
=== Human ===
Policies of mandatory disease notification originated long ago in France; while exact dates are unclear, by the end of the 18th century plague was a strictly enforced notifiable disease.
The current list of notifiable diseases is set out in the Code de la santé publique, Articles D3113-6 and D3113-7 (last revised in 2012). It contains 36 diseases: 34 infectious diseases and 2 non-infectious diseases directly linked to the environment (lead poisoning and mesothelioma). Notifications of both the disease and the distribution of specific medicines are made to a regional government agency, the Agence régionale de santé, by:
Physicians and biologists, in both public and private practice,
Physician controllers (MISP) and administrative civil servants from the Directions départementales des affaires sanitaires et sociales (DDASS),
Epidemiologists from the Institut de veille sanitaire (InVS),
Drug sellers.
Anonymous records are then used by the government health-insurance system.
Ill people must undergo treatment and in many cases are placed in quarantine.
=== Animal ===
Only infectious diseases are notifiable to the authorities. The complete list can be found in the Article L. 223-22 du code rural, it is updated with every new entry on World Organisation for Animal Health (OIE) lists A and B and with European Union mandatory lists.
== New Zealand ==
=== Human ===
Notification is regulated under the Health Act 1956, except for tuberculosis which is regulated under the Tuberculosis Act 1948. All diseases
List of national notifiable diseases
== United Kingdom ==
=== Human ===
Requirement for the notification of infectious diseases originated near the end of the 19th century. The list started with a few select diseases and has since grown to 31. Currently, disease notification for humans in the UK is regulated under the Public Health (Control of Disease) Act 1984 and the Public Health (Infectious Diseases) Regulations 1988. The governing body is Public Health England.
The list of notifiable diseases can be found in Notifiable diseases and causative organisms: how to report.
==== Children ====
There are also requirements for notification specific to children in the National standards for under 8s day care and childminding that state:
"Office for Standards in Education should be notified of any food poisoning affecting two or more children looked after on the premises, any child having meningitis or the outbreak on the premises of any notifiable disease identified as such in the Public Health (Control of Disease) Act 1984 or because the notification requirement has been applied to them by regulations (the relevant regulations are the Public Health (Infectious Diseases) Regulations 1988)."
=== Animal ===
In the UK notification of diseases in animals is regulated by the Animal Health Act 1981, as well as the Specified Diseases (Notification and Slaughter) Order 1992 (as amended) and Specified Diseases (Notification) Order 1996 (as amended). The act states that a police constable should be notified, however in practice a Defra divisional veterinary manager is notified and Defra will investigate.
List of Notifiable Diseases
== United States ==
In the past, notifiable diseases in the United States varied according to the laws of individual states. The Centers for Disease Control and Prevention (CDC) and the Council of State and Territorial Epidemiologists (CSTE) also produced a list of nationally notifiable diseases that health officials should report to the CDC's National Notifiable Diseases Surveillance System (NNDSS). A uniform criterion for reporting diseases to the NNDSS was introduced in 1990.
== See also ==
List of notifiable diseases
Contagious disease
Disease surveillance
Public Health Emergency of International Concern
== References == | Wikipedia/Notifiable_disease |
Physical therapy (PT), also known as physiotherapy, is a healthcare profession, as well as the care provided by physical therapists who promote, maintain, or restore health through patient education, physical intervention, disease prevention, and health promotion. Physical therapist is the term used for such professionals in the United States, and physiotherapist is the term used in many other countries.
The career has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. PTs practice in many settings, both public and private.
In addition to clinical practice, other aspects of physical therapy practice include research, education, consultation, and health administration. Physical therapy is provided as a primary care treatment or alongside, or in conjunction with, other medical services. In some jurisdictions, such as the United Kingdom, physical therapists may have the authority to prescribe medication.
== Overview ==
Physical therapy addresses the illnesses or injuries that limit a person's abilities to move and perform functional activities in their daily lives. PTs use an individual's history and physical examination to arrive at a diagnosis and establish a management plan and, when necessary, incorporate the results of laboratory and imaging studies like X-rays, CT-scan, or MRI findings. Physical therapists can use sonography to diagnose and manage common musculoskeletal, nerve, and pulmonary conditions. Electrodiagnostic testing (e.g., electromyograms and nerve conduction velocity testing) may also be used.
PT management commonly includes prescription of or assistance with specific exercises, manual therapy, and manipulation, mechanical devices such as traction, education, electrophysical modalities which include heat, cold, electricity, sound waves, radiation, assistive devices, prostheses, orthoses, and other interventions. In addition, PTs work with individuals to prevent the loss of mobility before it occurs by developing fitness and wellness-oriented programs for healthier and more active lifestyles, providing services to individuals and populations to develop, maintain, and restore maximum movement and functional ability throughout the lifespan. This includes providing treatment in circumstances where movement and function are threatened by aging, injury, disease, or environmental factors. Functional movement is central to what it means to be healthy.
Physical therapy is a professional career that has many specialties including musculoskeletal, orthopedics, cardiopulmonary, neurology, endocrinology, sports medicine, geriatrics, pediatrics, women's health, wound care and electromyography. Neurological rehabilitation is, in particular, a rapidly emerging field. PTs practice in many settings, such as privately-owned physical therapy clinics, outpatient clinics or offices, health and wellness clinics, rehabilitation hospital facilities, skilled nursing facilities, extended care facilities, private homes, education and research centers, schools, hospices, industrial workplaces or other occupational environments, fitness centers and sports training facilities.
Physical therapists also practice in non-patient care roles such as health policy, health insurance, health care administration and as health care executives. Physical therapists are involved in the medical-legal field serving as experts, performing peer review and independent medical examinations.
Education varies greatly by country. The span of education ranges from some countries having little formal education to others having doctoral degrees and post-doctoral residencies and fellowships.
Regarding its relationship to other healthcare professions, physiotherapy is one of the allied health professions. World Physiotherapy has signed a "memorandum of understanding" with the four other members of the World Health Professions Alliance "to enhance their joint collaboration on protecting and investing in the health workforce to provide safe, quality and equitable care in all settings".
== History ==
Physicians like Hippocrates and later Galen are believed to have been the first practitioners of physical therapy, advocating massage, manual therapy techniques and hydrotherapy to treat people in 460 BC. After the development of orthopedics in the eighteenth century, machines like the Gymnasticon were developed to treat gout and similar diseases by systematic exercise of the joints, similar to later developments in physical therapy.
The earliest documented origins of actual physical therapy as a professional group date back to Per Henrik Ling, "Father of Swedish Gymnastics," who founded the Royal Central Institute of Gymnastics (RCIG) in 1813 for manipulation and exercise. Up until 2014, the Swedish word for a physical therapist was sjukgymnast, meaning someone involved in gymnastics for those who are ill, but the title was then changed to fysioterapeut (physiotherapist), the word used in the other Scandinavian countries. In 1887, PTs were given official registration by Sweden's National Board of Health and Welfare. Other countries soon followed. In 1894, four nurses in Great Britain formed the Chartered Society of Physiotherapy. The School of Physiotherapy at the University of Otago in New Zealand was founded in 1913, and in 1914 Reed College in Portland, Oregon, in the United States began graduating "reconstruction aides." Since the profession's inception, spinal manipulative therapy has been a component of physical therapist practice.
Modern physical therapy was established towards the end of the 19th century due to events with global impact, which called for rapid advances in physical therapy. Following this, American orthopedic surgeons began treating children with disabilities and employed women trained in physical education and remedial exercise. These treatments were further applied and promoted during the polio outbreak of 1916.
During the First World War, women were recruited to work with and restore physical function to injured soldiers, and the field of physical therapy was institutionalized. In 1918 the term "Reconstruction Aide" was used to refer to individuals practicing physical therapy. The first school of physical therapy was established at Walter Reed Army Hospital in Washington, D.C., following the outbreak of World War I. Treatment through the 1940s primarily consisted of exercise, massage, and traction. Manipulative procedures to the spine and extremity joints began to be practiced, especially in the British Commonwealth countries, in the early 1950s.
Around the time polio vaccines were developed, physical therapists became a normal occurrence in hospitals throughout North America and Europe. In the late 1950s, physical therapists started to move beyond hospital-based practice to outpatient orthopedic clinics, public schools, colleges/universities health-centres, geriatric settings (skilled nursing facilities), rehabilitation centers and medical centers. Specialization in physical therapy in the U.S. occurred in 1974, with the Orthopaedic Section of the APTA being formed for those physical therapists specializing in orthopedics. In the same year, the International Federation of Orthopaedic Manipulative Physical Therapists was formed, which has ever since played an important role in advancing manual therapy worldwide.
An international organization for the profession is the World Confederation for Physical Therapy (WCPT). It was founded in 1951 and has operated under the brand name World Physiotherapy since 2020.
== Education ==
Educational criteria for physical therapy providers vary from state to state, country to country, and among various levels of professional responsibility. Most U.S. states have physical therapy practice acts that recognize both physical therapists (PT) and physical therapist assistants (PTA), and some jurisdictions also recognize physical therapy technicians (PT Techs) or aides. Most countries have licensing bodies of which physical therapists must be members before they can start practicing as independent professionals.
=== Canada ===
The Canadian Alliance of Physiotherapy Regulators (CAPR) allows eligible program graduates to apply for the national Physiotherapy Competency Examination (PCE). Passing the PCE is one of the requirements in most provinces and territories to work as a licensed physiotherapist in Canada. CAPR's members are the physiotherapy regulatory organizations recognized in their respective provinces and territories:
Government of Yukon, Consumer Services
College of Physical Therapists of British Columbia
College of Physiotherapists of Alberta
Saskatchewan College of Physical Therapists
College of Physiotherapists of Manitoba
College of Physiotherapists of Ontario
Ordre professionnel de la physiothérapie du Québec
College of Physiotherapists of New Brunswick/Collège des physiothérapeutes du Nouveau-Brunswick
Nova Scotia College of Physiotherapists
Prince Edward Island College of Physiotherapists
Newfoundland & Labrador College of Physiotherapists
Physiotherapy programs are offered at fifteen universities, often through the university's respective college of medicine. Each of Canada's physical therapy schools has transitioned from three-year Bachelor of Science in Physical Therapy (BScPT) programs that required two years of prerequisite university courses (five-year bachelor's degree) to two-year Master's of Physical Therapy (MPT) programs that require prerequisite bachelor's degrees. The last Canadian university to follow suit was the University of Manitoba, which transitioned to the MPT program in 2012, making the MPT credential the new entry to practice standard across Canada. Existing practitioners with BScPT credentials are not required to upgrade their qualifications.
In the province of Quebec, prospective physiotherapists are required to have completed a college diploma in either health sciences, which lasts on average two years, or physical rehabilitation technology, which lasts at least three years, to apply to a physiotherapy program at a university. Following admission, physical therapy students work on a bachelor of science with a major in physical therapy and rehabilitation. The B.Sc. usually requires three years to complete. Students must then enter graduate school to complete a master's degree in physical therapy, which normally requires one and a half to two years of study. Graduates who obtain their M.Sc. must successfully pass the membership examination to become members of the Ordre professionnel de la physiothérapie du Québec (OPPQ). Physiotherapists can pursue their education in such fields as rehabilitation sciences, sports medicine, kinesiology, and physiology.
In the province of Quebec, physical rehabilitation therapists are health care professionals who are required to complete a college diploma program in physical rehabilitation therapy and be members of the Ordre professionnel de la physiothérapie du Québec (OPPQ) to practice legally in the province.
Most physical rehabilitation therapists complete their college diploma at Collège Montmorency, Dawson College, or Cégep Marie-Victorin, all situated in and around the Montreal area.
After completing their technical college diploma, graduates have the opportunity to pursue their studies at the university level to perhaps obtain a bachelor's degree in physiotherapy, kinesiology, exercise science, or occupational therapy. The Université de Montréal, the Université Laval and the Université de Sherbrooke are among the Québécois universities that admit physical rehabilitation therapists in their programs of study related to health sciences and rehabilitation to credit courses that were completed in college.
To date, there are no bridging programs available to facilitate upgrading from the BScPT to the MPT credential. However, research Master's of Science (MSc) and Doctor of Philosophy (Ph.D.) programs are available at every university. Aside from academic research, practitioners can upgrade their skills and qualifications through continuing education courses and curriculums. Continuing education is a requirement of the provincial regulatory bodies.
The Canadian Physiotherapy Association offers a curriculum of continuing education courses in orthopedics and manual therapy. The program consists of 5 levels (7 courses) of training with ongoing mentorship and evaluation at each level. The orthopedic curriculum and examinations take a minimum of 4 years to complete. However, upon completion of level 2, physiotherapists can apply to a unique 1-year course-based Master's program in advanced orthopedics and manipulation at the University of Western Ontario to complete their training. Since 2007, this program has accepted only 16 physiotherapists annually. Successful completion of either of these education streams and their respective examinations allows physiotherapists the opportunity to apply to the Canadian Academy of Manipulative Physiotherapy (CAMPT) for fellowship. Fellows of the Canadian Academy of Manipulative Physiotherapy (FCAMPT) are considered leaders in the field, having extensive post-graduate education in orthopedics and manual therapy. FCAMPT is an internationally recognized credential, as CAMPT is a member of the International Federation of Orthopaedic Manipulative Physical Therapists (IFOMPT), a branch of World Physiotherapy (formerly World Confederation for Physical Therapy (WCPT)) and the World Health Organization (WHO).
=== Scotland ===
Physiotherapy degrees are offered at four universities: Edinburgh Napier University in Edinburgh, Robert Gordon University in Aberdeen, Glasgow Caledonian University in Glasgow, and Queen Margaret University in Edinburgh. Students can qualify as physiotherapists by completing a four-year Bachelor of Science degree or a two-year master's degree (if they already have an undergraduate degree in a related field).
To use the title 'Physiotherapist', a student must register with the Health and Care Professions Council, a UK-wide regulatory body, on qualifying. Many physiotherapists are also members of the Chartered Society of Physiotherapy (CSP), which provides insurance and professional support.
=== United States ===
The primary physical therapy practitioner is the Physical Therapist (PT), who is trained and licensed to examine, evaluate, diagnose and treat impairment, functional limitations, and disabilities in patients or clients. Physical therapist education curricula in the United States culminate in a Doctor of Physical Therapy (DPT) degree, with some practicing PTs holding a Master of Physical Therapy degree and some a bachelor's degree. The Master of Physical Therapy and Master of Science in Physical Therapy degrees are no longer offered; the entry-level degree is the Doctor of Physical Therapy degree, which typically takes 3 years after completing a bachelor's degree. PTs who hold a master's or bachelor's degree in physical therapy are encouraged to earn their DPT, because the APTA's goal is for all PTs to be educated at the doctoral level. WCPT recommends physical therapist entry-level educational programs be based on university or university-level studies of a minimum of four years, independently validated and accredited. Curricula in the United States are accredited by the Commission on Accreditation in Physical Therapy Education (CAPTE). According to CAPTE, as of 2022 there are 37,306 students currently enrolled in 294 accredited PT programs in the United States, while 10,096 PTA students are currently enrolled in 396 PTA programs in the United States.
The physical therapist professional curriculum includes content in the clinical sciences (e.g., content about the cardiovascular, pulmonary, endocrine, metabolic, gastrointestinal, genitourinary, integumentary, musculoskeletal, and neuromuscular systems and the medical and surgical conditions frequently seen by physical therapists). Current training is specifically aimed to enable physical therapists to appropriately recognize and refer non-musculoskeletal diagnoses that may present similarly to those caused by systems not appropriate for physical therapy intervention, which has resulted in direct access to physical therapists in many states.
Post-doctoral residency and fellowship education prevalence is increasing steadily with 219 residency, and 42 fellowship programs accredited in 2016. Residencies are aimed to train physical therapists in a specialty such as acute care, cardiovascular & pulmonary, clinical electrophysiology, faculty, geriatrics, neurology, orthopaedics, pediatrics, sports, women's health, and wound care, whereas fellowships train specialists in a subspecialty (e.g. critical care, hand therapy, and division 1 sports), similar to the medical model. Residency programs offer eligibility to sit for the specialist certification in their respective area of practice. For example, completion of an orthopedic physical therapy residency, allows its graduates to apply and sit for the clinical specialist examination in orthopedics, achieving the OCS designation upon passing the examination. Board certification of physical therapy specialists is aimed to recognize individuals with advanced clinical knowledge and skill training in their respective area of practice, and exemplifies the trend toward greater education to optimally treat individuals with movement dysfunction.
Physical therapist assistants may deliver treatment and physical interventions for patients and clients under a care plan established by and under the supervision of a physical therapist. Physical therapist assistants in the United States are currently trained under associate of applied sciences curricula specific to the profession, as outlined and accredited by CAPTE. As of December 2022, there were 396 accredited two-year (Associate degree) programs for physical therapist assistants In the United States of America.
== Employment ==
Physical therapy–related jobs in North America have shown rapid growth in recent years, but employment rates and average wages may vary significantly between different countries, states, provinces, or regions. A study from 2013 states that 56.4% of physical therapists were globally satisfied with their jobs. Salary, interest in work, and fulfillment in a job are important predictors of job satisfaction. In a Polish study, job burnout among the physical therapists was manifested by increased emotional exhaustion and decreased sense of personal achievement. Emotional exhaustion is significantly higher among physical therapists working with adults and employed in hospitals. Other factors that increased burnout include working in a hospital setting and having seniority from 15 to 19 years.
=== United States ===
According to the United States Department of Labor's Bureau of Labor Statistics, there were approximately 210,900 physical therapists employed in the United States in 2014, earning an average of $84,020 annually in 2015, or $40.40 per hour, with 34% growth in employment projected by 2024. The Bureau of Labor Statistics also reports that there were approximately 128,700 Physical Therapist Assistants and Aides employed in the United States in 2014, earning an average $42,980 annually, or $20.66 per hour, with 40% growth in employment projected by 2024. To meet their needs, many healthcare and physical therapy facilities hire "travel physical therapists", who work temporary assignments between 8 and 26 weeks for much higher wages, about $113,500 a year. Bureau of Labor Statistics data on PTAs and techs can be difficult to decipher, due to their tendency to report data on these job fields collectively rather than separately. O-Net reports that in 2015, PTAs in the United States earned a median wage of $55,170 annually or $26.52 hourly and that Aides/Techs earned a median wage of $25,120 annually or $12.08 hourly in 2015. The American Physical Therapy Association reports vacancy rates for physical therapists as 11.2% in outpatient private practice, 10% in acute care settings, and 12.1% in skilled nursing facilities. The APTA also reports turnover rates for physical therapists as 10.7% in outpatient private practice, 11.9% in acute care settings, and 27.6% in skilled nursing facilities.
Definitions and licensing requirements in the United States vary among jurisdictions, as each state has enacted its own physical therapy practice act defining the profession within its jurisdiction, but the Federation of State Boards of Physical Therapy has also drafted a model definition to limit this variation. The Commission on Accreditation in Physical Therapy Education (CAPTE) is responsible for accrediting physical therapy education curricula throughout the United States of America.
=== United Kingdom ===
The title of Physiotherapist is a protected professional title in the United Kingdom. Anyone using this title must be registered with the Health & Care Professions Council (HCPC). Physiotherapists must complete the necessary qualifications, usually an undergraduate physiotherapy degree (at university or as an intern), a master's degree in rehabilitation, or a doctoral degree in physiotherapy. This is typically followed by supervised professional experience lasting two to three years. All professionals on the HCPC register must comply with continuing professional development requirements and can be audited for this evidence at intervals.
== Specialty areas ==
The body of knowledge of physical therapy is large, and therefore physical therapists may specialize in a specific clinical area. While there are many different types of physical therapy, the American Board of Physical Therapy Specialties lists ten current specialist certifications. Most Physical Therapists practicing in a specialty will have undergone further training, such as an accredited residency program, although individuals are currently able to sit for their specialist examination after 2,000 hours of focused practice in their respective specialty population, in addition to requirements set by each respective specialty board.
=== Cardiovascular and pulmonary ===
Cardiovascular and pulmonary rehabilitation respiratory practitioners and physical therapists offer therapy for a wide variety of cardiopulmonary disorders, or before and after cardiac or pulmonary surgery. An example of cardiac surgery is coronary bypass surgery. The primary goals of this specialty include increasing endurance and functional independence. Manual therapy is used in this field to assist in clearing lung secretions experienced with cystic fibrosis. Patients with pulmonary disorders, heart attacks, chronic obstructive pulmonary disease, or pulmonary fibrosis, and those recovering from coronary bypass surgery, can benefit from treatment by physical therapists specialized in cardiovascular and pulmonary care.
=== Clinical electrophysiology ===
This specialty area includes electrotherapy and physical agents, electrophysiological evaluation (EMG/NCV), and wound management.
=== Geriatric ===
Geriatric physical therapy covers a wide area of issues concerning people as they go through normal adult aging but is usually focused on the older adult. There are many conditions that affect many people as they grow older and include but are not limited to the following: arthritis, osteoporosis, cancer, Alzheimer's disease, hip and joint replacement, balance disorders, incontinence, etc. Geriatric physical therapists specialize in providing therapy for such conditions in older adults.
Physical rehabilitation can prevent deterioration in health and activities of daily living among care home residents. The current evidence suggests benefits to physical health from participating in different types of physical rehabilitation to improve daily living, strength, flexibility, balance, mood, memory, exercise tolerance, fear of falling, injuries, and death. It may be both safe and effective in improving physical and possibly mental state, while reducing disability with few adverse events.
The current body of evidence suggests that physical rehabilitation may be effective for long-term care residents in reducing disability, with few adverse events. However, there is insufficient evidence to conclude whether the beneficial effects are sustainable and cost-effective. The findings are based on moderate-quality evidence.
=== Wound management ===
Wound management physical therapy includes the treatment of conditions involving the skin and all its related organs. Common conditions managed include wounds and burns. Physical therapists may utilize surgical instruments, wound irrigation, dressings, and topical agents to remove damaged or contaminated tissue and promote tissue healing. Other commonly used interventions include exercise, edema control, splinting, and compression garments. The work done by physical therapists in the integumentary specialty is similar to what would be done by medical doctors or nurses in the emergency room or triage.
=== Neurology ===
Neurological physical therapy is a field focused on working with individuals who have a neurological disorder or disease. These can include stroke, chronic back pain, Alzheimer's disease, Charcot-Marie-Tooth disease (CMT), ALS, brain injury, cerebral palsy, multiple sclerosis, Parkinson's disease, facial palsy and spinal cord injury. Common impairments associated with neurologic conditions include impairments of vision, balance, ambulation, activities of daily living, movement, muscle strength and loss of functional independence. The techniques involved in neurological physical therapy are wide-ranging and often require specialized training.
Neurological physiotherapy is also called neurophysiotherapy or neurological rehabilitation. It is recommended for neurophysiotherapists to collaborate with psychologists when providing physical treatment of movement disorders. This is especially important because combining physical therapy and psychotherapy can improve neurological status of the patients.
=== Orthopaedics ===
Orthopedic physical therapists diagnose, manage, and treat disorders and injuries of the musculoskeletal system including rehabilitation after orthopedic surgery, acute trauma such as sprains, strains, injuries of insidious onset such as tendinopathy, bursitis, and deformities like scoliosis. This specialty of physical therapy is most often found in the outpatient clinical setting. Orthopedic therapists are trained in the treatment of post-operative orthopedic procedures, fractures, acute sports injuries, arthritis, sprains, strains, back and neck pain, spinal conditions, and amputations.
Joint and spine mobilization/manipulation, dry needling (similar to acupuncture), therapeutic exercise, neuromuscular techniques, muscle reeducation, hot/cold packs, and electrical muscle stimulation (e.g., cryotherapy, iontophoresis, electrotherapy) are modalities employed to expedite recovery in the orthopedic setting. Additionally, an emerging adjunct to diagnosis and treatment is the use of sonography for diagnosis and to guide treatments such as muscle retraining. Those with injury or disease affecting the muscles, bones, ligaments, or tendons will benefit from assessment by a physical therapist specialized in orthopedics.
=== Pediatrics ===
Pediatric physical therapy assists in the early detection of health problems and uses a variety of modalities to provide physical therapy for disorders in the pediatric population. These therapists are specialized in the diagnosis, treatment, and management of infants, children, and adolescents with a variety of congenital, developmental, neuromuscular, skeletal, or acquired disorders/diseases. Treatments focus mainly on improving gross and fine motor skills, balance and coordination, strength and endurance as well as cognitive and sensory processing/integration.
=== Sports ===
Physical therapists are closely involved in the care and wellbeing of athletes including recreational, semi-professional (paid), and professional (full-time employment) participants. This area of practice encompasses athletic injury management under 5 main categories:
acute care – assessment and diagnosis of an initial injury;
treatment – application of specialist advice and techniques to encourage healing;
rehabilitation – progressive management for full return to sport;
prevention – identification and addressing of deficiencies known to directly result in, or act as precursors to, injury, such as movement assessment;
education – sharing of specialist knowledge to individual athletes, teams, or clubs to assist in prevention or management of injury
Physical therapists who work for professional sports teams often have a specialized sports certification issued through their national registering organization. Most physical therapists who practice in a sporting environment are also active in collaborative sports medicine programs (see also: athletic trainers).
=== Women's health ===
Women's health or pelvic floor physical therapy mostly addresses women's issues related to the female reproductive system, child birth, and post-partum. These conditions include lymphedema, osteoporosis, pelvic pain, prenatal and post-partum periods, and urinary incontinence. It also addresses incontinence, pelvic pain, pelvic organ prolapse and other disorders associated with pelvic floor dysfunction. Manual physical therapy has been demonstrated in multiple studies to increase rates of conception in women with infertility.
=== Oncology ===
Physical therapy in the field of oncology and palliative care is a continuously evolving and developing specialty, both in malignant and non-malignant diseases. Physical therapy for both groups of patients is now recognized as an essential part of the clinical pathway, as early diagnoses and new treatments are enabling patients to live longer. It is generally accepted that patients should have access to an appropriate level of rehabilitation, so that they can function at a minimum level of dependency and optimize their quality of life, regardless of their life expectancy.
== Physical therapist–patient collaborative relationship ==
People with brain injury, musculoskeletal conditions, cardiac conditions, or multiple pathologies benefit from a positive alliance between patient and therapist. Outcomes include the ability to perform activities of daily living, manage pain, complete specific physical function tasks, depression, global assessment of physical health, treatment adherence, and treatment satisfaction.
Studies have explored four themes that may influence patient-therapist interactions: interpersonal and communication skills, practical skills, individualized patient-centered care, and organizational and environmental factors. Physical therapists need to be able to effectively communicate with their patients on a variety of levels. Patients have varying levels of health literacy, so physical therapists need to take that into account when discussing the patient's ailments as well as planned treatment. Research has shown that using communication tools tailored to the patient's health literacy leads to improved engagement with their practitioner and their clinical care. In addition, patients reported that shared decision-making yields a positive relationship. Practical skills, such as the ability to educate patients about their conditions, and professional expertise are perceived as valuable factors in patient care. Patients value the ability of a clinician to provide clear and simple explanations about their problems. Furthermore, patients value when physical therapists possess excellent technical skills that effectively improve the patient's condition.
Environmental factors such as the location, equipment used, and parking are less important to the patient than the physical therapy clinical encounter itself.
Based on the current understanding, the most important factors that contribute to the patient-therapist interactions include that the physical therapist: spends an adequate amount of time with the patient, possesses strong listening and communication skills, treats the patient with respect, provides clear explanations of the treatment, and allows the patient to be involved in the treatment decisions.
== Effectiveness ==
Physical therapy has been found to be effective for improving outcomes, both in terms of pain and function, in multiple musculoskeletal conditions. Spinal manipulation by physical therapists is a safe option to improve outcomes for lower back pain. Several studies have suggested that physical therapy, particularly manual therapy techniques focused on the neck and the median nerve, combined with stretching exercises, may be equivalent or even preferable to surgery for carpal tunnel syndrome. While spine manipulation and therapeutic massage are effective interventions for neck pain, electroacupuncture, strain-counterstrain, relaxation massage, heat therapy, and ultrasound therapy are not as effective, and thus not recommended.
Studies also show physical therapy is effective for patients with other conditions. Physiotherapy treatment may improve quality of life, promote cardiopulmonary fitness and inspiratory pressure, as well as reduce symptoms and medication use by people with asthma. Physical therapy is sometimes provided to patients in the ICU, as early mobilization can help reduce ICU and hospital length of stay and improve long-term functional ability. Early progressive mobilization for adult, intubated ICU patients on mechanical ventilation is safe and effective.
Psychologically informed physical therapy (PIPT), in which a physical therapist treats patients while other members of a multidisciplinary care team help in preoperative planning for patient management of pain and quality of life, helps improve patient outcomes, especially before and after spine, hip, or knee surgery.
However, in the United States, there are obstacles affecting the effectiveness of physical therapy, such as racial disparities among patients. Studies have shown that patients who identified as Black reported experiences of care below the standard reported by white patients. Like many other medical fields, physical therapy has also shown disparities affecting Hispanic patients, including inpatient Hispanic patients not receiving referrals to follow up on their care, regardless of insurance status, as well as limited access to physical therapy services. Raising awareness of these racial disparities in physical therapy is crucial to improving treatment effectiveness across all demographics.
== Telehealth ==
Telehealth (or telerehabilitation) is a developing form of physical therapy in response to the increasing demand for physical therapy treatment. Telehealth is online communication between the clinician and patient, either live or in pre-recorded sessions, with mixed reviews when compared to usual, in-person care. The benefits of telehealth include improved accessibility in remote areas, cost efficiency, and improved convenience for people who are bedridden, home-restricted, or physically disabled. Some concerns about telehealth include limited evidence that its effectiveness and compliance exceed those of in-person therapy, licensing and payment policy issues, and compromised privacy. Studies are mixed as to the effectiveness of telehealth in patients with more serious conditions, such as stroke, multiple sclerosis, and lower back pain. The interstate compact, enacted in March 2018, allows patients to participate in telehealth appointments with medical practices located in different states.
During the COVID-19 pandemic, the need for telehealth came to the fore as patients were less able to safely attend in person, particularly if they were elderly or had chronic diseases. Telehealth was considered a proactive step to prevent decline in individuals who could not attend classes, since physical decline in at-risk groups is difficult to address or undo later. Platform licensing or development is found to be the most substantial cost in telehealth. Telehealth does not remove the need for the physical therapist, who still needs to oversee the program.
== See also ==
== References ==
== External links ==
Europe: Regulated professions database – Physiotherapist, European Commission | Wikipedia/Physical_therapy |
Clinical equipoise, also known as the principle of equipoise, provides the ethical basis for medical research that involves assigning patients to different treatment arms of a clinical trial. The term was proposed by Benjamin Freedman in 1987 in response to "controversy in the clinical community" to define an ethical situation of “genuine uncertainty within the expert medical community… about the preferred treatment.” This applies also for off-label treatments performed before or during their required clinical trials.
An ethical dilemma arises in a clinical trial when the investigator(s) begin to believe that the treatment or intervention administered in one arm of the trial is significantly outperforming the other arms. A trial should begin with a null hypothesis, and there should exist no decisive evidence that the intervention or drug being tested will be superior to existing treatments, or that it will be completely ineffective. As the trial progresses, the findings may provide sufficient evidence to convince the investigator of the intervention or drug's efficacy. Once a certain threshold of evidence is passed, there is no longer genuine uncertainty about the most beneficial treatment, so there is an ethical imperative for the investigator to provide the superior intervention to all participants. Ethicists contest the location of this evidentiary threshold, with some suggesting that investigators should only continue the study until they are convinced that one of the treatments is better, and with others arguing that the study should continue until the evidence convinces the entire expert medical community.
The extent to which major research ethics policies endorse clinical equipoise varies. For instance, the Canadian Tri-Council Policy Statement endorses it, whereas the International Council for Harmonisation (ICH) does not. With regard to clinical equipoise in practice, there is evidence that industry-funded studies disproportionately favor the industry product, suggesting unfavorable conditions for clinical equipoise. In contrast, a series of studies of National Cancer Institute-funded trials suggests an outcome pattern consistent with clinical equipoise.
== History ==
Shaw and Chalmers argued early on that "If the clinician knows, or has good reason to believe, that a new therapy (A) is better than another therapy (B), he cannot participate in a comparative trial of Therapy A versus Therapy B. Ethically, the clinician is obligated to give Therapy A to each new patient with a need for one of these therapies." Researchers would thus face an ethical dilemma if they wanted to continue the study and collect more evidence, but had compelling evidence that one of the tested therapies was superior. They further stated that any results should be withheld from the researchers during the trial until completion to avoid this ethical dilemma and ensure the study’s completion.
This method proved to be difficult in modern research, where many clinical trials have to be performed and analyzed by experts in that field. Freedman proposed a different approach to this ethical dilemma called clinical equipoise. Clinical equipoise occurs "if there is genuine uncertainty within the expert medical community — not necessarily on the part of the individual investigator — about the preferred treatment." Clinical equipoise is distinguished from theoretical equipoise, which requires evidence on behalf of the alternative treatments to be exactly balanced and thus yields a very fragile epistemic threshold for favoring one treatment over the other. Theoretical equipoise could be disturbed, for example, by something as simple as anecdotal evidence or a hunch on the part of the investigator. Clinical equipoise allows investigators to continue a trial until they have enough statistical evidence to convince other experts of the validity of their results, without a loss of ethical integrity on the part of the investigators.
Equipoise is also an important consideration in the design of a trial from a patient’s perspective. This is especially true in randomized controlled trials (RCTs) for surgical interventions, where both trial and control arms are likely to have their own associated risks and hopes for benefits. The condition of the patient is also a factor in these risks. Ensuring that trials meet the standards of clinical equipoise is an important part of patient recruitment in this regard; it is likely that past trials that did not meet conditions of clinical equipoise suffered from poor recruitment.
== Criticism ==
Miller and Brody argue that the notion of clinical equipoise is fundamentally misguided. The ethics of therapy and the ethics of research are two distinct enterprises that are governed by different norms. They state, "The doctrine of clinical equipoise is intended to act as a bridge between therapy and research, allegedly making it possible to conduct RCTs without sacrificing the therapeutic obligation of physicians to provide treatment according to a scientifically validated standard of care. This constitutes therapeutic misconception concerning the ethics of clinical trials, analogous to the tendency of patient volunteers to confuse treatment in the context of RCTs with routine medical care." Equipoise, they argue, only makes sense as a normative assumption for clinical trials if one assumes that researchers have therapeutic obligations to their research participants.
Further criticisms of clinical equipoise have been leveled by Robert Veatch and by Peter Ubel and Robert Silbergleit.
== See also ==
Bracketing (phenomenology)
Cartesian doubt
Precautionary principle
Principle of indifference
Suspension of judgment
== References ==
== Further reading ==
Davies, Hugh (March 2007). "Ethical reflections on Edward Jenner's experimental treatment". Journal of Medical Ethics. 33 (3): 174–176. doi:10.1136/jme.2005.015339. PMC 2598263. PMID 17329392. A thought experiment applying modern medical ethics to Jenner's 1790s vaccine trials.
== External links ==
Bioethics: An Anthology, pg. 429
For and against, BMJ 2000;321:756–758
The Tri-Council Policy Statement (Canada) | Wikipedia/Clinical_equipoise |
Vector control is any method to limit or eradicate the mammals, birds, insects or other arthropods (here collectively called "vectors") which transmit disease pathogens. The most frequent type of vector control is mosquito control using a variety of strategies. Several of the "neglected tropical diseases" are spread by such vectors.
== Importance ==
For diseases where there is no effective cure, such as Zika virus, West Nile fever and Dengue fever, vector control remains the only way to protect human populations.
However, even for vector-borne diseases with effective treatments, the high cost of treatment remains a huge barrier for much of the developing world's population. Despite being treatable, malaria has by far the greatest impact on human health from vectors. In Africa, a child dies every minute of malaria; this is a reduction of more than 50% since 2000 due to vector control. In countries where malaria is well established, the World Health Organization estimates that countries lose 1.3% of annual economic income to the disease. Both prevention through vector control and treatment are needed to protect populations.
As the impacts of these diseases and viruses are devastating, the need to control the vectors by which they are carried is prioritized. Vector control in many developing countries can have tremendous impacts, as it reduces mortality rates, especially among infants. Because of the high movement of the population, disease spread is also a greater issue in these areas.
As many vector control methods are effective against multiple diseases, they can be integrated together to combat multiple diseases at once. The World Health Organization therefore recommends "Integrated Vector Management" as the process for developing and implementing strategies for vector control.
== Methods ==
Vector control focuses on utilizing preventive methods to control or eliminate vector populations. Common preventive measures are:
=== Habitat and environmental control ===
Removing or reducing areas where vectors can easily breed can help limit their growth. For example, stagnant water removal, destruction of old tires and cans which serve as mosquito breeding environments, and good management of used water can reduce areas of excessive vector incidence.
Further examples of environmental control are reducing the prevalence of open defecation and improving the design and maintenance of pit latrines. This can reduce the incidence of flies acting as vectors that spread diseases via their contact with the feces of infected people.
=== Reducing contact ===
Limiting exposure to insects or animals that are known disease vectors can reduce infection risks significantly. For example, bed nets, window screens on homes, or protective clothing can help reduce the likelihood of contact with vectors. To be effective this requires education and promotion of methods among the population to raise the awareness of vector threats.
=== Chemical control ===
Insecticides, larvicides, rodenticides, Lethal ovitraps and repellents can be used to control vectors. For example, larvicides can be used in mosquito breeding zones; insecticides can be applied to house walls or bed nets, and use of personal repellents can reduce incidence of insect bites and thus infection. The use of pesticides for vector control is promoted by the World Health Organization (WHO) and has proven to be highly effective.
=== Biological control ===
The use of natural vector predators and biological agents, such as bacterial toxins or botanical compounds, can help control vector populations. For example, introducing fish such as catfish that eat mosquito larvae into ponds can reduce or eradicate mosquito populations, and introducing sterilized male tsetse flies to reduce breeding rates has been shown to control vector populations and reduce infection risks.
== Legislation ==
=== United States ===
In the United States, cities or special districts are responsible for vector control. For example, in California, the Greater Los Angeles County Vector Control District is a special district set up by the state to oversee vector control in multiple cities.
== See also ==
Mosquito control
Public health
Soil-transmitted helminth
Waterborne diseases
== References == | Wikipedia/Vector_control |
The Chinese Center for Disease Control and Prevention (CCDC; Chinese: 中国疾病预防控制中心) is an institution directly under the National Health Commission, based in Changping District, Beijing, China.
Established in 1983, it works to protect public health and safety by providing information to enhance health decisions, and to promote health through partnerships with provincial health departments and other organizations. The CCDC focuses national attention on developing and applying disease prevention and control (especially infectious diseases), environmental health, occupational safety and health, health promotion, prevention and education activities designed to improve the health of the people of the People's Republic of China.
== Operations ==
Shen Hongbing is the current Director of Chinese CDC.
The CCDC administers a number of laboratories across China, including the biosafety level 2 facility at the Wuhan Center for Disease Control (sometimes confused with the nearby Wuhan Institute of Virology), which received global media coverage during the COVID-19 pandemic for its research into SARS-like coronaviruses of bat origin. On 10 January 2020, the CCDC uploaded the genetic sequence of SARS-CoV-2 to GISAID for global dissemination. In 2022, the Center shared with GISAID a phylogenetic analysis of over 32 independent introductions of SARS-CoV-2 from outside China that were identified in the first quarter of the year.
The CCDC operates the Chinese Vaccinology Course in partnership with the University of Chinese Academy of Sciences and the Bill & Melinda Gates Foundation.
== Workforce ==
As of 2016, the Chinese CDC has 2120 staff with 1876 technical professionals (accounting for 89%), 133 managerial staff (accounting for 6%), and 111 logistic staff (accounting for 5%).
== Publications ==
The Chinese CDC publishes or co-sponsors a total of 16 journals, including China CDC Weekly, Journal of Hygiene Research, Chinese Journal of Experimental and Clinical Virology, and Chinese Journal of Epidemiology.
== See also ==
List of national public health agencies
Centers for Disease Control and Prevention, US equivalent
Korea Centers for Disease Control and Prevention, South Korean equivalent
Africa Centres for Disease Control and Prevention, African Union equivalent
National Bureau of Disease Control and Prevention (established on 13 May 2021)
World Health Organization
Wuhan Institute of Virology
== References ==
== External links ==
Official website | Wikipedia/Chinese_Center_for_Disease_Control_and_Prevention |
Multilevel models are statistical models of parameters that vary at more than one level. An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models. These models became much more popular after sufficient computing power and software became available.
Multilevel models are particularly appropriate for research designs where data for participants are organized at more than one level (i.e., nested data). The units of analysis are usually individuals (at a lower level) who are nested within contextual/aggregate units (at a higher level). While the lowest level of data in multilevel models is usually an individual, repeated measurements of individuals may also be examined. As such, multilevel models provide an alternative type of analysis for univariate or multivariate analysis of repeated measures. Individual differences in growth curves may be examined. Furthermore, multilevel models can be used as an alternative to ANCOVA, where scores on the dependent variable are adjusted for covariates (e.g. individual differences) before testing treatment differences. Multilevel models are able to analyze these experiments without the assumptions of homogeneity-of-regression slopes that is required by ANCOVA.
Multilevel models can be used on data with many levels, although 2-level models are the most common and the rest of this article deals only with these. The dependent variable must be examined at the lowest level of analysis.
== Level 1 regression equation ==
When there is a single level 1 independent variable, the level 1 model is
Y_ij = β_0j + β_1j X_ij + e_ij.
Y_ij refers to the score on the dependent variable for an individual observation at Level 1 (subscript i refers to the individual case, subscript j refers to the group).
X_ij refers to the Level 1 predictor.
β_0j refers to the intercept of the dependent variable for group j.
β_1j refers to the slope for the relationship in group j (Level 2) between the Level 1 predictor and the dependent variable.
e_ij refers to the random errors of prediction for the Level 1 equation (it is also sometimes referred to as r_ij), with e_ij ∼ N(0, σ_1^2).
At Level 1, both the intercepts and slopes in the groups can be either fixed (meaning that all groups have the same values, although in the real world this would be a rare occurrence), non-randomly varying (meaning that the intercepts and/or slopes are predictable from an independent variable at Level 2), or randomly varying (meaning that the intercepts and/or slopes are different in the different groups, and that each have their own overall mean and variance).
When there are multiple level 1 independent variables, the model can be expanded by substituting vectors and matrices in the equation.
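As an illustrative sketch (not part of the article; the per-group parameter values are hypothetical), the Level 1 equation can be simulated directly. Each group j gets its own intercept β_0j and slope β_1j, and fitting ordinary least squares within each group separately recovers that group's coefficients:

```python
import random
import statistics

random.seed(0)

# Hypothetical per-group coefficients (in a full multilevel model these
# beta_0j and beta_1j would themselves be modeled at Level 2).
group_params = {1: (10.0, 2.0), 2: (12.0, 1.5), 3: (8.0, 2.5)}
sigma_1 = 1.0  # standard deviation of the Level 1 errors e_ij

# Simulate Y_ij = beta_0j + beta_1j * X_ij + e_ij, 200 cases per group.
data = []
for j, (b0, b1) in group_params.items():
    for _ in range(200):
        x = random.gauss(0, 1)
        data.append((j, x, b0 + b1 * x + random.gauss(0, sigma_1)))

def ols(pairs):
    """Ordinary least squares for one group's (x, y) pairs."""
    xs = [x for x, _ in pairs]
    ys = [y for _, y in pairs]
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    slope = sum((x - mx) * (y - my) for x, y in pairs) / sum(
        (x - mx) ** 2 for x in xs
    )
    return my - slope * mx, slope  # (intercept, slope)

# Fitting each group separately recovers its own intercept and slope,
# illustrating "randomly varying" coefficients across groups.
estimates = {j: ols([(x, y) for g, x, y in data if g == j]) for j in group_params}
```

With 200 observations per group and unit error variance, the estimates land close to the generating values, showing why the group index j on the coefficients matters.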
When the relationship between the response Y_ij and predictor X_ij cannot be described by a linear relationship, one can find some nonlinear functional relationship between the response and predictor, and extend the model to a nonlinear mixed-effects model. For example, when the response Y_ij is the cumulative infection trajectory of the i-th country, and X_ij represents the j-th time point, then the ordered pair (X_ij, Y_ij) for each country may show a shape similar to the logistic function.
== Level 2 regression equation ==
The dependent variables are the intercepts and the slopes for the independent variables at Level 1 in the groups of Level 2.
u_0j ∼ N(0, σ_2^2)
u_1j ∼ N(0, σ_3^2)
β_0j = γ_00 + γ_01 w_j + u_0j
β_1j = γ_10 + γ_11 w_j + u_1j
γ_00 refers to the overall intercept. This is the grand mean of the scores on the dependent variable across all the groups when all the predictors are equal to 0.
γ_10 refers to the average slope between the dependent variable and the Level 1 predictor.
w_j refers to the Level 2 predictor.
γ_01 and γ_11 refer to the effect of the Level 2 predictor on the Level 1 intercept and slope, respectively.
u_0j refers to the deviation in group j from the overall intercept.
u_1j refers to the deviation in group j from the average slope between the dependent variable and the Level 1 predictor.
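Substituting the Level 2 equations for β_0j and β_1j into the Level 1 equation makes the structure of the combined mixed-effects model explicit, separating the fixed part (the γ terms) from the random part (the u and e terms):

```latex
Y_{ij} = \underbrace{\gamma_{00} + \gamma_{10}X_{ij} + \gamma_{01}w_{j}
       + \gamma_{11}w_{j}X_{ij}}_{\text{fixed part}}
       + \underbrace{u_{0j} + u_{1j}X_{ij} + e_{ij}}_{\text{random part}}
```

The cross-level term γ_11 w_j X_ij arises automatically from the substitution: allowing the slope β_1j to depend on the Level 2 predictor w_j is equivalent to an interaction between w_j and X_ij.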
== Types of models ==
Before conducting a multilevel model analysis, a researcher must decide on several aspects. First, the researcher must decide which predictors, if any, are to be included in the analysis. Second, the researcher must decide whether parameter values (i.e., the elements that will be estimated) will be fixed or random. Fixed parameters are composed of a constant over all the groups, whereas a random parameter has a different value for each of the groups. Additionally, the researcher must decide whether to employ a maximum likelihood estimation or a restricted maximum likelihood estimation type.
=== Random intercepts model ===
A random intercepts model is a model in which intercepts are allowed to vary, and therefore, the scores on the dependent variable for each individual observation are predicted by the intercept that varies across groups. This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information about intraclass correlations, which are helpful in determining whether multilevel models are required in the first place.
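As a sketch of why the intraclass correlation matters here (the parameter values and the ANOVA-style moment estimator below are illustrative assumptions, not from the article), one can simulate a random intercepts model with no predictor, Y_ij = γ_00 + u_0j + e_ij, and recover the ICC = σ_u^2 / (σ_u^2 + σ_e^2) from the data:

```python
import random
import statistics

random.seed(1)
sigma_u, sigma_e = 2.0, 1.0   # sd of group intercepts u_0j and residuals e_ij
n_groups, n_per = 500, 20

# Simulate the random intercepts model Y_ij = gamma_00 + u_0j + e_ij.
groups = []
for _ in range(n_groups):
    u0 = random.gauss(0, sigma_u)
    groups.append([5.0 + u0 + random.gauss(0, sigma_e) for _ in range(n_per)])

# ANOVA-style variance decomposition:
within = statistics.fmean(statistics.variance(g) for g in groups)  # ~ sigma_e^2
var_means = statistics.variance(
    statistics.fmean(g) for g in groups
)  # ~ sigma_u^2 + sigma_e^2 / n_per

sigma_u2_hat = var_means - within / n_per
icc = sigma_u2_hat / (sigma_u2_hat + within)
# True ICC here is sigma_u^2 / (sigma_u^2 + sigma_e^2) = 4 / (4 + 1) = 0.8
```

A large estimated ICC, as in this simulation, indicates that a substantial share of the outcome variance lies between groups, which is exactly the situation in which a multilevel model is warranted.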
=== Random slopes model ===
A random slopes model is a model in which slopes are allowed to vary according to a correlation matrix, and therefore, the slopes are different across grouping variable such as time or individuals. This model assumes that intercepts are fixed (the same across different contexts).
=== Random intercepts and slopes model ===
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts.
=== Developing a multilevel model ===
In order to conduct a multilevel model analysis, one would start with fixed coefficients (slopes and intercepts). One aspect would be allowed to vary at a time (that is, would be changed), and compared with the previous model in order to assess better model fit. There are three different questions that a researcher would ask in assessing a model. First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?
In order to assess models, different model fit statistics would be examined. One such statistic is the chi-square likelihood-ratio test, which assesses the difference between models. The likelihood-ratio test can be employed for model building in general, for examining what happens when effects in a model are allowed to vary, and when testing a dummy-coded categorical variable as a single effect. However, the test can only be used when models are nested (meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), among others. See further Model selection.
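The fit statistics above can be sketched in a few lines (the log-likelihood values, parameter counts, and sample size below are hypothetical, purely for illustration). For nested models differing by one parameter, the likelihood-ratio statistic is 2(llf_complex − llf_simple) and its chi-square(1) p-value can be written with the complementary error function:

```python
import math

def lrt_pvalue_1df(llf_simple, llf_complex):
    """Chi-square likelihood-ratio test, nested models differing by 1 parameter."""
    lr = 2.0 * (llf_complex - llf_simple)   # LR test statistic
    # chi-square with 1 df is the square of a standard normal,
    # so its survival function is erfc(sqrt(x / 2)).
    return math.erfc(math.sqrt(lr / 2.0))

def aic(llf, k):
    return 2 * k - 2 * llf                  # Akaike information criterion

def bic(llf, k, n):
    return k * math.log(n) - 2 * llf        # Bayesian information criterion

# Hypothetical fits: a simpler model (k = 4 parameters) versus the same
# model with one extra effect (k = 5), both fit to n = 300 observations.
p = lrt_pvalue_1df(-512.3, -509.8)
```

With these hypothetical numbers the LR test and AIC favor the more complex model while BIC, with its heavier penalty at n = 300, narrowly favors the simpler one, illustrating why the criteria can disagree.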
== Assumptions ==
Multilevel models have the same assumptions as other major general linear models (e.g., ANOVA, regression), but some of the assumptions are modified for the hierarchical nature of the design (i.e., nested data).
Linearity
The assumption of linearity states that there is a rectilinear (straight-line, as opposed to non-linear or U-shaped) relationship between variables. However, the model can be extended to nonlinear relationships. Particularly, when the mean part of the level 1 regression equation is replaced with a non-linear parametric function, then such a model framework is widely called the nonlinear mixed-effects model.
Normality
The assumption of normality states that the error terms at every level of the model are normally distributed. However, most statistical software allows one to specify different distributions for the variance terms, such as a Poisson, binomial, logistic. The multilevel modelling approach can be used for all forms of Generalized Linear models.
Homoscedasticity
The assumption of homoscedasticity, also known as homogeneity of variance, assumes equality of population variances. However, different variance-correlation matrix can be specified to account for this, and the heterogeneity of variance can itself be modeled.
Independence of observations (No Autocorrelation of Model's Residuals)
Independence is an assumption of general linear models, which states that cases are random samples from the population and that scores on the dependent variable are independent of each other. One of the main purposes of multilevel models is to deal with cases where the assumption of independence is violated; multilevel models do, however, assume that (1) the level 1 and level 2 residuals are uncorrelated and (2) the errors (as measured by the residuals) at the highest level are uncorrelated.
Orthogonality of regressors to random effects
The regressors must not correlate with the random effects, u_0j. This assumption is testable but often ignored, rendering the estimator inconsistent. If this assumption is violated, the random effect must be modeled explicitly in the fixed part of the model, either by using dummy variables or by including cluster means of all X_ij regressors. This assumption is probably the most important assumption the estimator makes, but one that is misunderstood by most applied researchers using these types of models.
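The second remedy mentioned in the text, including cluster means of the regressors, can be sketched as a small data-preparation step (the data values here are hypothetical). Each row gains its group's mean of X, which is then entered alongside X_ij in the fixed part of the model:

```python
import statistics
from collections import defaultdict

# Hypothetical nested data: rows of (group j, x_ij, y_ij).
rows = [(1, 0.5, 2.1), (1, 1.5, 3.9), (2, 2.0, 7.2), (2, 3.0, 9.1)]

# Compute the cluster mean of the regressor, one value per group.
by_group = defaultdict(list)
for j, x, _ in rows:
    by_group[j].append(x)
xbar = {j: statistics.fmean(xs) for j, xs in by_group.items()}

# Augment each row with its cluster mean xbar_j; entering xbar_j as an
# extra fixed-effect regressor absorbs the between-group component of
# x_ij that would otherwise correlate with the random intercept u_0j.
augmented = [(j, x, xbar[j], y) for j, x, y in rows]
```

The augmented rows would then be passed to whatever mixed-model fitting routine is in use; the coefficient on x_ij then reflects the within-group effect.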
== Statistical tests ==
The type of statistical tests that are employed in multilevel models depend on whether one is examining fixed effects or variance components. When examining fixed effects, the tests are compared with the standard error of the fixed effect, which results in a Z-test. A t-test can also be computed. When computing a t-test, it is important to keep in mind the degrees of freedom, which will depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups.
== Statistical power ==
Statistical power for multilevel models differs depending on whether it is level 1 or level 2 effects that are being examined. Power for level 1 effects is dependent upon the number of individual observations, whereas the power for level 2 effects is dependent upon the number of groups. To conduct research with sufficient power, large sample sizes are required in multilevel models. However, the number of individual observations in groups is not as important as the number of groups in a study. In order to detect cross-level interactions, given that the group sizes are not too small, recommendations have been made that at least 20 groups are needed, although many fewer can be used if one is only interested in inference on the fixed effects and the random effects are control, or "nuisance", variables. The issue of statistical power in multilevel models is complicated by the fact that power varies as a function of effect size and intraclass correlations, differs for fixed effects versus random effects, and changes depending on the number of groups and the number of individual observations per group.
== Applications ==
=== Level ===
The concept of level is the keystone of this approach. In an educational research example, the levels for a 2-level model might be
pupil
class
However, if one were studying multiple schools and multiple school districts, a 4-level model could include
pupil
class
school
district
The researcher must establish for each variable the level at which it was measured. In this example "test score" might be measured at pupil level, "teacher experience" at class level, "school funding" at school level, and "urban" at district level.
=== Example ===
As a simple example, consider a basic linear regression model that predicts income as a function of age, class, gender and race. It might then be observed that income levels also vary depending on the city and state of residence. A simple way to incorporate this into the regression model would be to add an additional independent categorical variable to account for the location (i.e. a set of additional binary predictors and associated regression coefficients, one per location). This would have the effect of shifting the mean income up or down—but it would still assume, for example, that the effect of race and gender on income is the same everywhere. In reality, this is unlikely to be the case—different local laws, different retirement policies, differences in level of racial prejudice, etc. are likely to cause all of the predictors to have different sorts of effects in different locales.
In other words, a simple linear regression model might, for example, predict that a given randomly sampled person in Seattle would have an average yearly income $10,000 higher than a similar person in Mobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location. Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set of hyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter.
Multilevel models are a subclass of hierarchical Bayesian models, which are general models with multiple levels of random variables and arbitrary relationships among the different variables. Multilevel analysis has been extended to include multilevel structural equation modeling, multilevel latent class modeling, and other more general models.
=== Uses ===
Multilevel models have been used in education research or geographical research, to estimate separately the variance between pupils within the same school, and the variance between schools. In psychological applications, the multiple levels are items in an instrument, individuals, and families. In sociological applications, multilevel models are used to examine individuals embedded within regions or countries. In organizational psychology research, data from individuals must often be nested within teams or other functional units. They are often used in ecological research as well under the more general term mixed models.
Different covariables may be relevant on different levels. They can be used for longitudinal studies, as with growth studies, to separate changes within one individual and differences between individuals.
Cross-level interactions may also be of substantive interest; for example, when a slope is allowed to vary randomly, a level-2 predictor may be included in the slope formula for the level-1 covariate. For example, one may estimate the interaction of race and neighborhood to obtain an estimate of the interaction between an individual's characteristics and the social context.
=== Applications to longitudinal (repeated measures) data ===
== Alternative ways of analyzing hierarchical data ==
There are several alternative ways of analyzing hierarchical data, although most of them have some problems. First, traditional statistical techniques can be used. One could disaggregate higher-order variables to the individual level, and thus conduct an analysis at this individual level (for example, assign class variables to the individual level). The problem with this approach is that it would violate the assumption of independence, and thus could bias the results. This is known as the atomistic fallacy. Another way to analyze the data using traditional statistical approaches is to aggregate individual-level variables to higher-order variables and then to conduct an analysis on this higher level. The problem with this approach is that it discards all within-group information (because it takes the average of the individual-level variables). As much as 80–90% of the variance could be wasted, and the relationship between aggregated variables is inflated and thus distorted. This is known as the ecological fallacy; statistically, this type of analysis results in decreased power in addition to the loss of information.
Another way to analyze hierarchical data would be through a random-coefficients model. This model assumes that each group has a different regression model, with its own intercept and slope. Because groups are sampled, the model assumes that the intercepts and slopes are also randomly sampled from a population of group intercepts and slopes. This allows for an analysis in which one can assume that slopes are fixed but intercepts are allowed to vary. However, this presents a problem: individual components are independent, while group components are independent between groups but dependent within groups. This also allows for an analysis in which the slopes are random; however, the correlations of the error terms (disturbances) are then dependent on the values of the individual-level variables. Thus, the problem with using a random-coefficients model to analyze hierarchical data is that it is still not possible to incorporate higher-order variables.
== Error terms ==
Multilevel models have two error terms, which are also known as disturbances. The individual components are all independent, but there are also group components, which are independent between groups but correlated within groups. However, variance components can differ, as some groups are more homogeneous than others.
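The split into within-group and between-group variance components is often summarized by the intraclass correlation. A minimal sketch, with purely hypothetical variance components:

```python
# Hypothetical variance components for a two-level model.
tau2 = 4.0     # between-group variance (group-level disturbance)
sigma2 = 12.0  # within-group variance (individual-level disturbance)

# Intraclass correlation: share of total variance that lies between groups,
# i.e. the correlation between two individuals drawn from the same group.
icc = tau2 / (tau2 + sigma2)
# icc = 0.25
```

The larger the group-level variance relative to the individual-level variance, the closer the ICC is to 1 and the stronger the within-group correlation.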
== Bayesian nonlinear mixed-effects model ==
Multilevel modeling is frequently used in diverse applications and can be formulated within the Bayesian framework. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented as the following three-stage hierarchy:
Stage 1: Individual-Level Model
{\displaystyle {\begin{aligned}&{y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\\&\epsilon _{ij}\sim N(0,\sigma ^{2}),\\&i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.\end{aligned}}}
Stage 2: Population Model
{\displaystyle {\begin{aligned}&\theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\\&\eta _{li}\sim N(0,\omega _{l}^{2}),\\&i=1,\ldots ,N,\,l=1,\ldots ,K.\end{aligned}}}
Stage 3: Prior
{\displaystyle {\begin{aligned}&\sigma ^{2}\sim \pi (\sigma ^{2}),\\&\alpha _{l}\sim \pi (\alpha _{l}),\\&(\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\\&\omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\\&l=1,\ldots ,K.\end{aligned}}}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically, {\displaystyle f} is a nonlinear function that describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If Stage 3 (the prior) is not considered, the model reduces to a frequentist nonlinear mixed-effects model.
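As an illustration of Stages 1 and 2, the sketch below simulates the generative process for a hypothetical logistic trajectory f(t; θ₁, θ₂) = θ₁ / (1 + exp(−θ₂ t)); the choice of f and all numeric values are assumptions made for the example only, and Stage 2 is shown without covariates:

```python
import numpy as np

rng = np.random.default_rng(1)

N, M = 10, 8                    # subjects, time points per subject
t = np.linspace(0.0, 5.0, M)

# Stage 2: subject-level parameters scattered around population means.
alpha = np.array([4.0, 1.5])    # population means for (theta1, theta2)
omega = np.array([0.3, 0.1])    # between-subject standard deviations
theta = alpha + omega * rng.standard_normal((N, 2))

# Stage 1: observations with within-subject Gaussian noise.
sigma = 0.2
f = theta[:, [0]] / (1.0 + np.exp(-theta[:, [1]] * t))  # (N, M) trajectories
y = f + sigma * rng.standard_normal((N, M))
```

Each subject follows its own smooth trajectory (between-individual variability via η), while the measurements scatter around that trajectory (within-individual variability via ε).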
A central task in the application of Bayesian nonlinear mixed-effects models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle {\begin{aligned}=&~\left.{\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})}\right\}{\text{Stage 1: Individual-Level Model}}\\{\phantom {spacer}}\\\times &~\left.{\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 2: Population Model}}\\{\phantom {spacer}}\\\times &~\left.{p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 3: Prior}}\end{aligned}}}
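This factorization is what one evaluates in practice. A minimal sketch, assuming the same hypothetical logistic trajectory as above and a single subject with the population parameters held fixed (so only the Stage-1 likelihood and the Stage-2 prior terms appear):

```python
import numpy as np

def log_posterior_theta(theta, y, t, alpha, omega, sigma):
    """Unnormalized log-posterior of one subject's parameters theta = (theta1, theta2),
    combining the Stage-1 Gaussian likelihood and the Stage-2 population prior for
    the hypothetical trajectory f(t) = theta1 / (1 + exp(-theta2 * t))."""
    f = theta[0] / (1.0 + np.exp(-theta[1] * t))
    log_lik = -0.5 * np.sum((y - f) ** 2) / sigma**2            # Stage 1
    log_prior = -0.5 * np.sum(((theta - alpha) / omega) ** 2)   # Stage 2
    return log_lik + log_prior
```

An MCMC sampler would target this quantity (jointly with the population-level and Stage-3 prior terms) to carry out the posterior inference.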
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. Such a research cycle comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function {\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== See also ==
Hyperparameter
Mixed-design analysis of variance
Multiscale modeling
Random effects model
Nonlinear mixed-effects model
Bayesian hierarchical modeling
Restricted randomization
== Notes ==
== References ==
== Further reading ==
Gelman, A.; Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 235–299. ISBN 978-0-521-68689-1.
Goldstein, H. (2011). Multilevel Statistical Models (4th ed.). London: Wiley. ISBN 978-0-470-74865-7.
Hedeker, D.; Gibbons, R. D. (2012). Longitudinal Data Analysis (2nd ed.). New York: Wiley. ISBN 978-0-470-88918-3.
Hox, J. J. (2010). Multilevel Analysis: Techniques and Applications (2nd ed.). New York: Routledge. ISBN 978-1-84872-845-5.
Raudenbush, S. W.; Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods (2nd ed.). Thousand Oaks, CA: Sage. This concentrates on education.
Snijders, T. A. B.; Bosker, R. J. (2011). Multilevel Analysis: an Introduction to Basic and Advanced Multilevel Modeling (2nd ed.). London: Sage. ISBN 9781446254332.
Swamy, P. A. V. B.; Tavlas, George S. (2001). "Random Coefficient Models". In Baltagi, Badi H. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 410–429. ISBN 978-0-631-21254-6.
Verbeke, G.; Molenberghs, G. (2013). Linear Mixed Models for Longitudinal Data. Springer. Includes SAS code.
Gomes, Dylan G.E. (20 January 2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMC 8784019. PMID 35116198.
== External links ==
Centre for Multilevel Modelling | Wikipedia/Hierarchical_Bayes_model |
Drug checking or pill testing is a way to reduce the harm from drug consumption by allowing users to find out the content and purity of substances that they intend to consume. This enables users to make safer choices: to avoid more dangerous substances, to use smaller quantities, and to avoid dangerous combinations.
Drug checking services have developed over the last twenty-five years in twenty countries and are being considered in more countries, although attempts to implement them in some countries have been hindered by local laws. Drug checking initially focused on MDMA users at electronic dance music events, but the services have broadened as drug use has become more complex. These developments have been strongly affected by local laws and culture, resulting in a diverse range of services, from mobile services that attend events and festivals to fixed sites in town centres and entertainment districts. For instance, staff may or may not be able to handle illegal substances, which limits the use of testing techniques to those where the staff are not legally in possession of those substances.
People intending to take drugs provide a small sample to the testing service (often less than a single dose). Test results may be provided immediately, after a short waiting period, or later. Drug checking services use this time to discuss health risks and safe behaviour with the service users. The services also provide public health information about drug use, new psychoactive substances and trends at a national level.
== History ==
The earliest reported drug checking activity began in Amsterdam in November 1970 with a group from the University Hospital of Amsterdam and samples obtained through psychiatrists working with people who used drugs.
The earliest reported drug checking service is the Drug Information and Monitoring System (DIMS) in the Netherlands supported by the Ministry of Health, Welfare and Sport. Since 1992 the service has tested over 100,000 drug samples at a national network of twenty-three testing facilities. Service users receive results within a week via phone or email and the service publishes aggregated results describing what substances are in use.
European countries have led the introduction of drug checking services, with Asociación Hegoak Elkartea founded in Spain in 1994, TechnoPlus in France founded in 1995, and Modus Fiesta in Belgium in 1996. DanceSafe have operated in the USA since 1998 providing reagent testing and harm reduction advice.
More recent services include Neutravel founded in Italy in 2007, The Loop founded in the UK in 2013 and KnowYourStuffNZ in New Zealand in 2015 with Pill Testing Australia launching after a successful trial in 2018.
In 2008, the Trans-European Drug Information network (TEDI) was created, a database compiling information from different non-profit drug checking services located in different European countries.
On March 31, 2017, a coalition of drug safety organisations hosted the first-ever International Drug Checking Day to raise awareness of safer drug use. The initiative was aimed at recreational users, with a particular emphasis on the nightlife community, and aims to promote harm reduction—accepting that people will choose to take drugs, and providing them with tools to minimise the risks.
In November 2021 New Zealand became the first country to make drug checking fully legal after previously allowing this under temporary legislation. Other countries like the Netherlands allow drug checking but do not have legislation to protect the clients or testers, and the practice exists in a legal grey area in countries like the US and UK.
== Approaches to drug checking ==
=== Front-of-house testing ===
Front-of-house testing provides testing services to clients at events. It provides real-time, as-you-wait results. An example is the testing at BOOM festival in Portugal, where drug testers are legally allowed to handle samples. Where testers are not allowed to handle samples, for fear of breaking laws around possession, clients themselves must handle the substance to be tested. An example of this model is KnowYourStuffNZ in New Zealand.
=== Back-of-house testing ===
Back-of-house testing is more restrictive. The substances tested do not come directly from event participants. Instead, they may come from samples confiscated by police or event security or samples that are disposed of via drug amnesty bins. The results may not be available to event attendees.
=== Middle-of-house testing ===
Middle-of-house testing is a new development, started by The Loop in the UK. Testing happens on-site, but without face-to-face interaction with the public. Samples from medical incidents are tested and alerts can be issued after multiple incidents with a trend are identified.
=== Testing outside events ===
Static testing sites provide testing services to clients at fixed locations away from events. Often these are in the entertainment districts of cities. Energy Control in Barcelona and DIMS in the Netherlands provide such services.
Off-site testing occurs away from events and away from clients. Clients submit samples by post or at drop-off locations. Those samples are analysed and then the results are publicised. Examples of this model include WEDINOS (the Welsh Emerging Drugs & Identification of Novel Substances Project) and DIMS in the Netherlands.
The UK's first trial of community-based drug safety testing was carried out in Bristol and Durham in 2018 in a church, a drugs service, and a youth and community centre. Users reported that they intended to carry out a range of harm reduction actions such as alerting friends and acquaintances, being more careful mixing substances, consuming lowered dosage, and disposing of substances.
Drug Checking Programs have been emerging across the Americas in recent years. A directory of these programs can be found at the Harm Reduction Innovation Lab's website.
== Analysis methods ==
A range of analysis techniques are in use by drug checking services. The most common are reagent testing, Fourier transform infrared spectroscopy, ultraviolet–visible spectroscopy, Raman spectroscopy, mass spectrometry and gas chromatography–mass spectrometry.
Reagent testing uses chemical indicators that show a colour change in the presence of particular drugs. These tests are widely available and affordable. The use of several reagents is generally necessary to positively identify a substance with Marquis, Mandelin, and Mecke reagents being used to detect MDMA and Ehrlich's reagent common for detecting LSD. However, reagent testing only indicates the presence of a substance, not the absence of contaminants or other substances. This can provide a false sense of security when illicit drugs are deliberately adulterated to fool reagent tests.
The presence of specific drugs can also be detected through immunoassay testing strips. Testing strips for fentanyl can detect a few tens of nanograms of the substance at a price of a few dollars per test. Recent increased demand for immunoassay test strips, lack of regulation, and approval to use federal funding for test strip purchasing in the US have led to a boom in test strip manufacturers creating concern in drug checking programs and harm reduction organizations about the lack of validation, consistency, and accuracy of results.
Fourier transform infrared spectroscopy is a rapid test using robust hardware that can be carried out in the field. It provides sample identification and mixture analysis, allowing the detection of impurities and adulterants. It is highly sensitive and can carry out analysis using only a few milligrams of a sample. It is semi-quantitative and can provide an indication of purity. For these reasons, it is widely used by both fixed and mobile testing services and considered the best technology to use.
Gas chromatography mass spectrometry provides very sensitive and quantified information about substances. However, the high price and delicate equipment generally limit the use of this technique to fixed sites.
=== Development ===
Developing technologies include:
Ion-trap mass spectrometry
Laser-induced immunofluorometric biosensors
Magnetic levitation
Nuclear magnetic resonance spectroscopy
== Effectiveness of drug checking ==
Drug checking has been shown to be an effective way to reduce the harm from drug use through informing safer use, limiting use, and helping users avoid the most dangerous substances. The services also provide monitoring and detection of new psychoactive substances to inform public health interventions. The Loop have stated that 20% of samples are handed in for disposal and 40% of service users reduce intake. KnowYourStuffNZ have found that, when substances are not as expected, half of service users state they will not take that substance and a quarter say they will take a smaller quantity. Drug checking services also reach drug users who are not reached by existing services. Evidence from research conducted by Austrian pill testing service CheckIt! found 58% of people who use the service would not otherwise seek out harm reduction information, and about 75% are more likely to access harm reduction services if pill testing is included.
Academic research from the UK has found that one in five substances were not what they were expected to be and two-thirds of misrepresented samples were disposed of. Such on-site testing accesses otherwise hard-to-reach user groups to reduce the harms associated with drug use.
Research that followed up people who had used drug checking services in the UK found that they acted on the harm reduction advice they received: they disposed of unwanted substances, reduced their dosage of wanted substances, and reduced their risk of overdose. They also continued these risk management practices after attending festivals, alerted friends to the risks of drug use, and continued to follow that advice.
In a peer-reviewed study published in Journal of Psychopharmacology, researchers at Johns Hopkins found that people were about half as likely (relative risk = 0.56) to report intent to use a product if testing did not identify the substance as MDMA, and this was a statistically significant reduction.
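A relative risk of this kind can be reproduced with hypothetical counts; the figures below are invented to illustrate the arithmetic, not the study's actual data:

```python
# Hypothetical intent-to-use counts in two groups of service users.
intent_not_mdma, n_not_mdma = 28, 100   # testing did not identify MDMA
intent_mdma, n_mdma = 50, 100           # testing identified MDMA

# Relative risk: ratio of the two intent-to-use proportions.
rr = (intent_not_mdma / n_not_mdma) / (intent_mdma / n_mdma)
# rr = 0.56 -> intent to use is roughly half as likely after an unexpected result
```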
== See also ==
Counterfeit drug
Dille–Koppanyi reagent
Drug education
Drug test
Ehrlich's reagent
Folin's reagent
Froehde reagent
Harm reduction
Liebermann reagent
Mandelin reagent
Marquis reagent
Mecke reagent
Simon's reagent
Reagent testing
Trans-European Drug Information
Zwikker reagent
== References == | Wikipedia/Drug_checking |
Tropical diseases are diseases that are prevalent in or unique to tropical and subtropical regions. The diseases are less prevalent in temperate climates, due in part to the occurrence of a cold season, which controls the insect population by forcing hibernation. However, many were present in northern Europe and northern America in the 17th and 18th centuries before modern understanding of disease causation. The initial impetus for tropical medicine was to protect the health of colonial settlers, notably in India under the British Raj. Insects such as mosquitoes and flies are by far the most common disease carriers, or vectors. These insects may carry a parasite, bacterium or virus that is infectious to humans and animals. Most often disease is transmitted by an insect bite, which causes transmission of the infectious agent through subcutaneous blood exchange. Vaccines are not available for most of the diseases listed here, and many do not have cures.
Human exploration of tropical rainforests, deforestation, rising immigration and increased international air travel and other tourism to tropical regions has led to an increased incidence of such diseases to non-tropical countries. Of particular concern is the habitat loss of reservoir host species.
== Health programmes ==
In 1975 the Special Programme for Research and Training in Tropical Diseases (TDR) was established to focus on neglected infectious diseases which disproportionately affect poor and marginalized populations in developing regions of Africa, Asia, Central America and South America. It was established at the World Health Organization, which is the executing agency, and is co-sponsored by the United Nations Children's Fund, United Nations Development Programme, the World Bank and the World Health Organization.
TDR's vision is to foster an effective global research effort on infectious diseases of poverty in which disease endemic countries play a pivotal role. It has a dual mission of developing new tools and strategies against these diseases, and to develop the research and leadership capacity in the countries where the diseases occur. The TDR secretariat is based in Geneva, Switzerland, but the work is conducted throughout the world through many partners and funded grants.
Some examples of work include helping to develop new treatments for diseases, such as ivermectin for onchocerciasis (river blindness); showing how packaging can improve use of artemisinin-combination treatment (ACT) for malaria; demonstrating the effectiveness of bednets to prevent mosquito bites and malaria; and documenting how community-based and community-led programmes increase distribution of multiple treatments.
The current TDR disease portfolio includes the following entries:
† Although leprosy and tuberculosis are not exclusively tropical diseases, their high incidence in the tropics justifies their inclusion.
‡ People living with HIV are 19 (15–22) times more likely to develop active TB disease than people without HIV.
== Other neglected tropical diseases ==
Additional neglected tropical diseases include:
Some tropical diseases are very rare, but may occur in sudden epidemics, such as the Ebola hemorrhagic fever, Lassa fever and the Marburg virus. There are hundreds of different tropical diseases which are less known or rarer, but that, nonetheless, have importance for public health.
== Relation of climate to tropical diseases ==
The so-called "exotic" diseases in the tropics have long been noted both by travelers and explorers, as well as by physicians. One obvious reason is that the year-round hot climate and heavier rainfall directly affect the formation of breeding grounds, the larger number and variety of natural reservoirs and animal diseases that can be transmitted to humans (zoonoses), and the larger number of possible insect vectors of diseases. It is also possible that higher temperatures favor the replication of pathogenic agents both inside and outside biological organisms. Socio-economic factors may also be in operation, since most of the poorest nations of the world are in the tropics. Tropical countries like Brazil, which have improved their socio-economic situation and invested in hygiene, public health and the combat of transmissible diseases, have achieved dramatic results in eliminating or reducing many endemic tropical diseases in their territory.
Climate change, global warming caused by the greenhouse effect, and the resulting increase in global temperatures, are possibly causing tropical diseases and vectors to spread to higher altitudes in mountainous regions, and to higher latitudes that were previously spared, such as the Southern United States, the Mediterranean area, etc. For example, in the Monteverde cloud forest of Costa Rica, global warming enabled Chytridiomycosis, a tropical disease, to flourish and thus force into decline amphibian populations of the Monteverde Harlequin frog. Here, global warming raised the heights of orographic cloud formation, and thus produced cloud cover that would facilitate optimum growth conditions for the implicated pathogen, B. dendrobatidis.
=== Role of human activities in the spread of tropical diseases ===
Human activities, particularly those driving climate change, are significantly influencing the spread and geographical range of tropical diseases. The burning of fossil fuels, deforestation, industrial agriculture, and urbanization release large amounts of greenhouse gases into the atmosphere, raising global temperatures and altering weather patterns. These environmental changes, such as increased rainfall, higher temperatures, and more frequent extreme weather events, create more favorable conditions for disease vectors like mosquitoes, which transmit diseases such as malaria, dengue, and Zika. In many cases, this has expanded the reach of tropical diseases into regions that were previously unaffected, including higher altitudes and temperate zones. Additionally, human-driven habitat destruction, such as the clearing of forests and wetlands, disrupts natural reservoirs and increases human-wildlife contact, further elevating the risk of zoonotic diseases crossing into human populations. As climate change continues, these activities will likely exacerbate the public health burden, especially in low-income regions that are most vulnerable to both the impacts of climate change and the diseases it helps spread.
== Prevention and treatment ==
=== Vector-borne diseases ===
Vectors are living organisms that pass disease between humans or from animal to human. The vector carrying the highest number of diseases is the mosquito, which is responsible for the tropical diseases dengue and malaria. Many different approaches have been taken to treat and prevent these diseases. NIH-funded research has produced genetically modified mosquitoes that are unable to spread diseases such as malaria. An issue with this approach is global accessibility to genetic engineering technology; approximately 50% of scientists in the field do not have access to information on genetically modified mosquito trials being conducted.
Other prevention methods include:
Draining wetlands to reduce populations of insects and other vectors, or introducing natural predators of the vectors.
The application of insecticides and/or insect repellents to strategic surfaces such as clothing, skin, buildings, insect habitats, and bed nets.
The use of a mosquito net over a bed (also known as a "bed net") to reduce nighttime transmission, since certain species of tropical mosquitoes feed mainly at night.
=== Community approaches ===
Assisting with economic development in endemic regions can contribute to prevention and treatment of tropical diseases. For example, microloans enable communities to invest in health programs that lead to more effective disease treatment and prevention technology.
Educational campaigns can aid in the prevention of various diseases. Educating children about how diseases spread and how they can be prevented has proven to be effective in practicing preventative measures. Educational campaigns can yield significant benefits at low costs.
=== Innovative approaches ===
Recent advancements in vector control technologies are proving effective in reducing the transmission of mosquito-borne diseases like malaria, dengue, and Zika. Genetically modified (GM) mosquitoes, such as Oxitec's mosquitoes, which prevent females from surviving to adulthood, have demonstrated over a 90% reduction in mosquito populations in field trials in Brazil.
Another promising approach is the use of Wolbachia bacteria, which renders mosquitoes resistant to the dengue virus. A trial in Yogyakarta, Indonesia, showed a 77% reduction in symptomatic dengue cases in areas with Wolbachia-infected mosquitoes.
Additionally, integrated vector management (IVM), which combines biological controls, insecticides, and public education, has proven successful in reducing the transmission of arboviruses. These strategies offer more sustainable and eco-friendly solutions for controlling mosquito populations and preventing disease spread.
=== Other approaches ===
Use of water wells, and/or water filtration, water filters, or water treatment with water tablets to produce drinking water free of parasites.
Sanitation to prevent transmission through human waste.
Development and use of vaccines to promote disease immunity.
Pharmacologic treatment (to treat disease after infection or infestation).
== See also ==
Hospital for Tropical Diseases
Tropical medicine
Infectious disease
Neglected diseases
List of epidemics
Waterborne diseases
Globalization and disease
== References ==
== Further reading ==
=== Books ===
TDR at a glance – fostering an effective global research effort on diseases of poverty
Le TDR en un coup d'oeil – favoriser un effort mondial de recherche efficace sur les maladies liées à la pauvreté
TDR annual report – 2009
Monitoring and evaluation tool kit for indoor residual spraying
Indicators for monitoring and evaluation of the kala-azar elimination programme
Malaria Rapid Diagnostic Test Performance – results of WHO product testing of malaria RDTs: Round 2- 2009
Quality Practices in Basic Biomedical Research (QPBR) training manual: Trainer
Quality Practices in Basic Biomedical Research (QPBR) training manual: Trainee
Progress and prospects for the use of genetically modified mosquitoes to inhibit disease transmission
Use of Influenza Rapid Diagnostic Tests
Manson's Tropical Diseases
Mandell's Principles and Practice of Infectious Diseases or this site
=== Journals ===
American Journal of Tropical Medicine and Hygiene
Japanese Journal of Tropical Medicine and Hygiene
Tropical Medicine and International Health
The Southeast Asian Journal of Tropical Medicine and Public Health
Revista do Instituto de Medicina Tropical de São Paulo
Revista da Sociedade Brasileira de Medicina Tropical
Journal of Venomous Animals and Toxins including Tropical Diseases
=== Websites ===
Special Programme for Research and Training in Tropical Diseases -TDR
GIDEON-Global Infectious Disease Epidemiology Network
== External links ==
WHO Neglected Tropical Diseases
WHO Operational research in tropical and other communicable diseases
European Bioinformatics Institute
open source drug discovery
Drugs for Neglected Diseases Initiative
Tropical diseases from Maya Paradise, The Guatemala Information Web Site
American Society for Tropical Medicine and Hygiene
Treating Tropical Diseases U.S. Food and Drug Administration
Travelers' Health – National Center for Infectious Diseases – Centers for Disease Control and Prevention
Tropicology Library. In Portuguese.
'Conquest and Disease or Colonisation and Health', lecture by Professor Frank Cox on the history of tropical disease, given at Gresham College, 17 September 2007 (available for download as video and audio files, as well as a text file).
Thomas Nutman (December 28, 2007). "Neglected Tropical Diseases Burden Those Overseas, But Travelers Also At Risk". ScienceDaily. NIH/National Institute of Allergy and Infectious Diseases. Retrieved 2025-05-22.
A pragmatic clinical trial (PCT), sometimes called a practical clinical trial (PCT), is a clinical trial that focuses on correlation between treatments and outcomes in real-world health system practice rather than on proving causative explanations for outcomes. Proving causation requires extensive deconfounding, with inclusion and exclusion criteria so strict that they risk rendering the trial results irrelevant to much of real-world practice.
== Examples ==
A typical example is that an anti-diabetic medication in the real world will often be used in people with (latent or apparent) diabetes-induced kidney problems, but if a study of its efficacy and safety excluded some subsets of people with kidney problems (to escape confounding), the study's results may not reflect well what will actually happen in broad practice. PCTs thus contrast with explanatory clinical trials, which focus more on causation through deconfounding. The pragmatic versus explanatory distinction is a spectrum or continuum rather than a dichotomy (each study can fall toward one end or the other), but the distinction is nonetheless important to evidence-based medicine (EBM) because physicians have found that treatment effects in explanatory clinical trials do not always translate to outcomes in typical practice. Decision-makers (including individual physicians deciding what to do next for a particular patient, developers of clinical guidelines, and health policy directors) hope to build a better evidence base to inform decisions by encouraging more PCTs to be conducted.
== Distinction from other forms of trials ==
The distinction between pragmatic and explanatory trials is not the same as the distinction between randomized and nonrandomized trials. Any trial, randomized or not, can have any degree of pragmatic and explanatory power depending on its study design, with randomization being preferable if practicably available. However, most randomized controlled trials (RCTs) to date have leaned toward the explanatory side of the pragmatic-explanatory spectrum, largely because of the value traditionally placed on proving causation by deconfounding as part of proving efficacy, but sometimes also because "attempts to minimize cost and maximize efficiency have led to smaller sample sizes". The movement toward supporting pragmatic randomized controlled trials (pRCTs) aims to ensure that money spent on RCTs is well spent by providing information that actually matters to real-world outcomes, regardless of conclusively tying causation to particular variables. This is the pragmatic element of such designs. Thus pRCTs are important to comparative effectiveness research, and a distinction is often (although not always) made between efficacy and effectiveness: efficacy implies causation established by deconfounding other variables (we know with certainty that drug X treats disease Y by mechanism of action Z), whereas effectiveness implies correlation with outcomes regardless of the presence of other variables (we know with certainty that people in a situation similar to X who take drug A tend to have slightly better outcomes than those who take drug B, and even if we suspect why, the causation is not as important).
Explanation remains important, as does traditional efficacy research, because we still value knowledge of causation to advance our understanding of molecular biology and to maintain our ability to differentiate real efficacy from placebo effects. What has become apparent in the era of advanced health technology is that we also need to know about comparative effectiveness in real-world applications so that we can ensure the best use of our limited resources as we make countless instances of clinical decisions. And it is apparent that explanatory evidence, such as in vitro evidence and even in vivo evidence from clinical trials with tight exclusion criteria, often does not help enough, by itself, with that task.
== Other types of pragmatic research ==
Pragmatism can be used as an epistemology when undertaking any type of research. Examples include systematic reviews, consensus methods such as Delphi, and crowdsourcing in fields such as urban planning.
== See also ==
Other ways to use evidence tied to outcomes but not necessarily to known causality
Real world data
Real world evidence
== References ==
Multilevel models are statistical models of parameters that vary at more than one level. An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models. These models became much more popular after sufficient computing power and software became available.
Multilevel models are particularly appropriate for research designs where data for participants are organized at more than one level (i.e., nested data). The units of analysis are usually individuals (at a lower level) who are nested within contextual/aggregate units (at a higher level). While the lowest level of data in multilevel models is usually an individual, repeated measurements of individuals may also be examined. As such, multilevel models provide an alternative type of analysis for univariate or multivariate analysis of repeated measures. Individual differences in growth curves may be examined. Furthermore, multilevel models can be used as an alternative to ANCOVA, where scores on the dependent variable are adjusted for covariates (e.g., individual differences) before testing treatment differences. Multilevel models are able to analyze these experiments without the assumption of homogeneity of regression slopes that is required by ANCOVA.
Multilevel models can be used on data with many levels, although 2-level models are the most common and the rest of this article deals only with these. The dependent variable must be examined at the lowest level of analysis.
== Level 1 regression equation ==
When there is a single level 1 independent variable, the level 1 model is
{\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{ij}+e_{ij}}.
{\displaystyle Y_{ij}}
refers to the score on the dependent variable for an individual observation at Level 1 (subscript i refers to individual case, subscript j refers to the group).
{\displaystyle X_{ij}}
refers to the Level 1 predictor.
{\displaystyle \beta _{0j}}
refers to the intercept of the dependent variable for group j.
{\displaystyle \beta _{1j}}
refers to the slope for the relationship in group j (Level 2) between the Level 1 predictor and the dependent variable.
{\displaystyle e_{ij}}
refers to the random errors of prediction for the Level 1 equation (it is also sometimes referred to as
{\displaystyle r_{ij}}).
{\displaystyle e_{ij}\sim {\mathcal {N}}(0,\sigma _{1}^{2})}
At Level 1, both the intercepts and slopes in the groups can be either fixed (meaning that all groups have the same values, although in the real world this would be a rare occurrence), non-randomly varying (meaning that the intercepts and/or slopes are predictable from an independent variable at Level 2), or randomly varying (meaning that the intercepts and/or slopes are different in the different groups, and that each have their own overall mean and variance).
When there are multiple level 1 independent variables, the model can be expanded by substituting vectors and matrices in the equation.
When the relationship between the response {\displaystyle Y_{ij}} and predictor {\displaystyle X_{ij}} cannot be described by a linear relationship, one can find some nonlinear functional relationship between the response and predictor, and extend the model to a nonlinear mixed-effects model. For example, when the response {\displaystyle Y_{ij}} is the cumulative infection trajectory of the {\displaystyle i}-th country, and {\displaystyle X_{ij}} represents the {\displaystyle j}-th time point, then the ordered pair {\displaystyle (X_{ij},Y_{ij})} for each country may show a shape similar to the logistic function.
== Level 2 regression equation ==
The dependent variables are the intercepts and the slopes for the independent variables at Level 1 in the groups of Level 2.
{\displaystyle u_{0j}\sim {\mathcal {N}}(0,\sigma _{2}^{2})}
{\displaystyle u_{1j}\sim {\mathcal {N}}(0,\sigma _{3}^{2})}
{\displaystyle \beta _{0j}=\gamma _{00}+\gamma _{01}w_{j}+u_{0j}}
{\displaystyle \beta _{1j}=\gamma _{10}+\gamma _{11}w_{j}+u_{1j}}
{\displaystyle \gamma _{00}}
refers to the overall intercept. This is the grand mean of the scores on the dependent variable across all the groups when all the predictors are equal to 0.
{\displaystyle \gamma _{10}}
refers to the average slope between the dependent variable and the Level 1 predictor.
{\displaystyle w_{j}}
refers to the Level 2 predictor.
{\displaystyle \gamma _{01}} and {\displaystyle \gamma _{11}} refer to the effect of the Level 2 predictor on the Level 1 intercept and slope, respectively.
{\displaystyle u_{0j}}
refers to the deviation in group j from the overall intercept.
{\displaystyle u_{1j}}
refers to the deviation in group j from the average slope between the dependent variable and the Level 1 predictor.
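Taken together, the Level 1 and Level 2 equations define a complete data-generating process. The following Python sketch simulates data from a random intercepts and slopes model with a Level 2 predictor; all parameter values (the gammas and variance components) are illustrative assumptions, not values from this article:

```python
import random

random.seed(0)

# Assumed Level 2 parameters (illustrative values only).
gamma_00, gamma_01 = 2.0, 0.5   # overall intercept; effect of w_j on intercepts
gamma_10, gamma_11 = 1.0, -0.3  # average slope; effect of w_j on slopes
sigma_1, sigma_2, sigma_3 = 1.0, 0.8, 0.4  # sd of e_ij, u_0j, u_1j

data = []
for j in range(30):                          # 30 groups
    w_j = random.gauss(0, 1)                 # Level 2 predictor
    u_0j = random.gauss(0, sigma_2)          # group deviation in intercept
    u_1j = random.gauss(0, sigma_3)          # group deviation in slope
    beta_0j = gamma_00 + gamma_01 * w_j + u_0j  # Level 2 equation for the intercept
    beta_1j = gamma_10 + gamma_11 * w_j + u_1j  # Level 2 equation for the slope
    for i in range(20):                      # 20 observations per group
        x_ij = random.gauss(0, 1)            # Level 1 predictor
        e_ij = random.gauss(0, sigma_1)      # Level 1 error
        y_ij = beta_0j + beta_1j * x_ij + e_ij  # Level 1 equation
        data.append((j, x_ij, y_ij))

print(len(data))  # 600 observations across 30 groups
```

Fitting such a model to real data would normally be done with specialized mixed-model software rather than by hand; the sketch only shows how the two levels of equations combine.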
== Types of models ==
Before conducting a multilevel model analysis, a researcher must decide on several aspects. First, the researcher must decide which predictors, if any, are to be included in the analysis. Second, the researcher must decide whether parameter values (i.e., the elements that will be estimated) will be fixed or random. Fixed parameters are constant over all the groups, whereas a random parameter takes a different value in each group. Additionally, the researcher must decide whether to employ maximum likelihood estimation or restricted maximum likelihood estimation.
=== Random intercepts model ===
A random intercepts model is a model in which intercepts are allowed to vary, and therefore, the scores on the dependent variable for each individual observation are predicted by the intercept that varies across groups. This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information about intraclass correlations, which are helpful in determining whether multilevel models are required in the first place.
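Under a random intercepts model, the intraclass correlation mentioned above is the share of total variance attributable to group membership, computed from the two variance components. A minimal sketch (the variance values are hypothetical):

```python
def intraclass_correlation(var_between: float, var_within: float) -> float:
    """ICC = between-group variance / (between-group + within-group variance)."""
    return var_between / (var_between + var_within)

# Hypothetical variance components from a fitted random intercepts model:
icc = intraclass_correlation(var_between=2.0, var_within=6.0)
print(round(icc, 2))  # 0.25 -> 25% of variance lies between groups
```

A near-zero ICC suggests grouping adds little, while a substantial ICC indicates that a multilevel model is warranted.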
=== Random slopes model ===
A random slopes model is a model in which slopes are allowed to vary according to a correlation matrix, and therefore the slopes differ across a grouping variable such as time or individuals. This model assumes that intercepts are fixed (the same across different contexts).
=== Random intercepts and slopes model ===
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts.
=== Developing a multilevel model ===
In order to conduct a multilevel model analysis, one would start with fixed coefficients (slopes and intercepts). One aspect would be allowed to vary at a time (that is, would be changed), and compared with the previous model in order to assess better model fit. There are three different questions that a researcher would ask in assessing a model. First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?
In order to assess models, different model fit statistics would be examined. One such statistic is the chi-square likelihood-ratio test, which assesses the difference between models. The likelihood-ratio test can be employed for model building in general, for examining what happens when effects in a model are allowed to vary, and when testing a dummy-coded categorical variable as a single effect. However, the test can only be used when models are nested (meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), among others. See further Model selection.
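For nested models, the likelihood-ratio statistic is twice the difference in maximized log-likelihoods, with degrees of freedom equal to the difference in parameter counts. The sketch below handles the one-extra-parameter case, where the chi-square tail probability has a closed form via the complementary error function; the log-likelihood values are hypothetical:

```python
import math

def lr_test_1df(loglik_simple: float, loglik_complex: float) -> tuple:
    """Likelihood-ratio test when the complex model has one extra parameter.
    For a chi-square variate with 1 df, P(X > x) = erfc(sqrt(x / 2))."""
    lr = 2.0 * (loglik_complex - loglik_simple)
    p = math.erfc(math.sqrt(lr / 2.0))
    return lr, p

# Hypothetical fitted log-likelihoods: allowing one effect to vary adds 1 parameter.
lr, p = lr_test_1df(loglik_simple=-512.3, loglik_complex=-508.9)
print(round(lr, 2), round(p, 4))  # LR = 6.8, p ≈ 0.009 -> complex model preferred
```

For larger degrees of freedom one would use a general chi-square survival function from a statistics library instead of the 1-df closed form.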
== Assumptions ==
Multilevel models have the same assumptions as other major general linear models (e.g., ANOVA, regression), but some of the assumptions are modified for the hierarchical nature of the design (i.e., nested data).
Linearity
The assumption of linearity states that there is a rectilinear (straight-line, as opposed to non-linear or U-shaped) relationship between variables. However, the model can be extended to nonlinear relationships. Particularly, when the mean part of the level 1 regression equation is replaced with a non-linear parametric function, then such a model framework is widely called the nonlinear mixed-effects model.
Normality
The assumption of normality states that the error terms at every level of the model are normally distributed. However, most statistical software allows one to specify different distributions for the variance terms, such as Poisson, binomial, or logistic distributions. The multilevel modelling approach can be used for all forms of generalized linear models.
Homoscedasticity
The assumption of homoscedasticity, also known as homogeneity of variance, assumes equality of population variances. However, different variance-correlation matrix can be specified to account for this, and the heterogeneity of variance can itself be modeled.
Independence of observations (No Autocorrelation of Model's Residuals)
Independence is an assumption of general linear models, which states that cases are random samples from the population and that scores on the dependent variable are independent of each other. One of the main purposes of multilevel models is to deal with cases where the assumption of independence is violated; multilevel models do, however, assume that (1) the level 1 and level 2 residuals are uncorrelated and (2) the errors (as measured by the residuals) at the highest level are uncorrelated.
Orthogonality of regressors to random effects
The regressors must not correlate with the random effects,
{\displaystyle u_{0j}}
. This assumption is testable but often ignored, rendering the estimator inconsistent. If this assumption is violated, the random-effect must be modeled explicitly in the fixed part of the model, either by using dummy variables or including cluster means of all
{\displaystyle X_{ij}}
regressors. This assumption is probably the most important assumption the estimator makes, but one that is misunderstood by most applied researchers using these types of models.
== Statistical tests ==
The type of statistical tests that are employed in multilevel models depends on whether one is examining fixed effects or variance components. When examining fixed effects, the estimate is compared with its standard error, which results in a Z-test. A t-test can also be computed. When computing a t-test, it is important to keep in mind the degrees of freedom, which depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups, and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups.
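The fixed-effect Z-test described above divides an estimate by its standard error. A minimal sketch of the two-sided version (the estimate and standard error are hypothetical):

```python
import math

def wald_z_test(estimate: float, std_error: float) -> tuple:
    """Two-sided Z-test for a fixed effect: z = estimate / SE.
    Two-sided p-value via the normal survival function: p = erfc(|z| / sqrt(2))."""
    z = estimate / std_error
    p = math.erfc(abs(z) / math.sqrt(2.0))
    return z, p

# Hypothetical fixed-effect estimate (e.g. an average slope) with its SE:
z, p = wald_z_test(estimate=1.2, std_error=0.4)
print(round(z, 2), round(p, 4))  # z = 3.0, p ≈ 0.0027
```

A t-test replaces the normal reference distribution with a t distribution whose degrees of freedom depend on the predictor's level, as described above.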
== Statistical power ==
Statistical power for multilevel models differs depending on whether it is level 1 or level 2 effects that are being examined. Power for level 1 effects is dependent upon the number of individual observations, whereas the power for level 2 effects is dependent upon the number of groups. To conduct research with sufficient power, large sample sizes are required in multilevel models. However, the number of individual observations in groups is not as important as the number of groups in a study. In order to detect cross-level interactions, given that the group sizes are not too small, recommendations have been made that at least 20 groups are needed, although many fewer can be used if one is only interested in inference on the fixed effects and the random effects are control, or "nuisance", variables. The issue of statistical power in multilevel models is complicated by the fact that power varies as a function of effect size and intraclass correlations, it differs for fixed effects versus random effects, and it changes depending on the number of groups and the number of individual observations per group.
== Applications ==
=== Level ===
The concept of level is the keystone of this approach. In an educational research example, the levels for a 2-level model might be
pupil
class
However, if one were studying multiple schools and multiple school districts, a 4-level model could include
pupil
class
school
district
The researcher must establish for each variable the level at which it was measured. In this example "test score" might be measured at pupil level, "teacher experience" at class level, "school funding" at school level, and "urban" at district level.
=== Example ===
As a simple example, consider a basic linear regression model that predicts income as a function of age, class, gender and race. It might then be observed that income levels also vary depending on the city and state of residence. A simple way to incorporate this into the regression model would be to add an additional independent categorical variable to account for the location (i.e. a set of additional binary predictors and associated regression coefficients, one per location). This would have the effect of shifting the mean income up or down—but it would still assume, for example, that the effect of race and gender on income is the same everywhere. In reality, this is unlikely to be the case—different local laws, different retirement policies, differences in level of racial prejudice, etc. are likely to cause all of the predictors to have different sorts of effects in different locales.
In other words, a simple linear regression model might, for example, predict that a given randomly sampled person in Seattle would have an average yearly income $10,000 higher than a similar person in Mobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location. Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set of hyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter.
Multilevel models are a subclass of hierarchical Bayesian models, which are general models with multiple levels of random variables and arbitrary relationships among the different variables. Multilevel analysis has been extended to include multilevel structural equation modeling, multilevel latent class modeling, and other more general models.
=== Uses ===
Multilevel models have been used in education research or geographical research, to estimate separately the variance between pupils within the same school, and the variance between schools. In psychological applications, the multiple levels are items in an instrument, individuals, and families. In sociological applications, multilevel models are used to examine individuals embedded within regions or countries. In organizational psychology research, data from individuals must often be nested within teams or other functional units. They are often used in ecological research as well under the more general term mixed models.
Different covariables may be relevant on different levels. They can be used for longitudinal studies, as with growth studies, to separate changes within one individual and differences between individuals.
Cross-level interactions may also be of substantive interest; for example, when a slope is allowed to vary randomly, a level-2 predictor may be included in the slope formula for the level-1 covariate. For example, one may estimate the interaction of race and neighborhood to obtain an estimate of the interaction between an individual's characteristics and the social context.
=== Applications to longitudinal (repeated measures) data ===
== Alternative ways of analyzing hierarchical data ==
There are several alternative ways of analyzing hierarchical data, although most of them have some problems. First, traditional statistical techniques can be used. One could disaggregate higher-order variables to the individual level, and thus conduct an analysis on this individual level (for example, assign class variables to the individual level). The problem with this approach is that it would violate the assumption of independence, and thus could bias our results. This is known as atomistic fallacy. Another way to analyze the data using traditional statistical approaches is to aggregate individual level variables to higher-order variables and then to conduct an analysis on this higher level. The problem with this approach is that it discards all within-group information (because it takes the average of the individual level variables). As much as 80–90% of the variance could be wasted, and the relationship between aggregated variables is inflated, and thus distorted. This is known as ecological fallacy, and statistically, this type of analysis results in decreased power in addition to the loss of information.
Another way to analyze hierarchical data would be through a random-coefficients model. This model assumes that each group has a different regression model, with its own intercept and slope. Because groups are sampled, the model assumes that the intercepts and slopes are also randomly sampled from a population of group intercepts and slopes. This allows for an analysis in which one can assume that slopes are fixed but intercepts are allowed to vary. However, this presents a problem: individual components are independent, but group components are independent between groups and dependent within groups. This also allows for an analysis in which the slopes are random; however, the correlations of the error terms (disturbances) are then dependent on the values of the individual-level variables. Thus, the problem with using a random-coefficients model to analyze hierarchical data is that it is still not possible to incorporate higher-order variables.
== Error terms ==
Multilevel models have two error terms, which are also known as disturbances. The individual components are all independent, but there are also group components, which are independent between groups but correlated within groups. However, variance components can differ, as some groups are more homogeneous than others.
== Bayesian nonlinear mixed-effects model ==
Multilevel modeling is frequently used in diverse applications and can be formulated in the Bayesian framework. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented as the following three stages:
Stage 1: Individual-Level Model
{\displaystyle {\begin{aligned}&{y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\\{\phantom {spacer}}\\&\epsilon _{ij}\sim N(0,\sigma ^{2}),\\{\phantom {spacer}}\\&i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.\end{aligned}}}
Stage 2: Population Model
{\displaystyle {\begin{aligned}&\theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\\{\phantom {spacer}}\\&\eta _{li}\sim N(0,\omega _{l}^{2}),\\{\phantom {spacer}}\\&i=1,\ldots ,N,\,l=1,\ldots ,K.\end{aligned}}}
Stage 3: Prior
{\displaystyle {\begin{aligned}&\sigma ^{2}\sim \pi (\sigma ^{2}),\\{\phantom {spacer}}\\&\alpha _{l}\sim \pi (\alpha _{l}),\\{\phantom {spacer}}\\&(\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\\{\phantom {spacer}}\\&\omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\\{\phantom {spacer}}\\&l=1,\ldots ,K.\end{aligned}}}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically, {\displaystyle f} is a `nonlinear' function and describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If Stage 3: Prior is not considered, then the model reduces to a frequentist nonlinear mixed-effect model.
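The three-stage structure can be illustrated by forward-simulating from Stages 1 and 2 with a logistic trajectory for f (matching the earlier cumulative-infection example). All constants below are illustrative assumptions; Stage 3 priors are omitted because this is simulation, not posterior inference:

```python
import math
import random

random.seed(1)

def f(t, theta1, theta2, theta3):
    # Assumed logistic trajectory: theta1 = asymptote, theta2 = midpoint, theta3 = rate.
    return theta1 / (1.0 + math.exp(-(t - theta2) / theta3))

sigma = 0.5                   # within-individual sd (Stage 1)
alpha = (100.0, 10.0, 2.0)    # population means of the K = 3 parameters (Stage 2)
omega = (10.0, 1.0, 0.2)      # between-individual sds (Stage 2)

trajectories = {}
for i in range(5):            # N = 5 subjects
    # Stage 2: draw subject-specific parameters around the population means.
    theta = [a + random.gauss(0, w) for a, w in zip(alpha, omega)]
    # Stage 1: noisy observations of the subject's trajectory at M_i = 20 time points.
    trajectories[i] = [f(t, *theta) + random.gauss(0, sigma) for t in range(20)]

print(len(trajectories), len(trajectories[0]))  # 5 subjects, 20 observations each
```

Posterior inference for such a model would typically use MCMC software; the sketch only makes the hierarchy of the two stochastic stages concrete.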
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle {\begin{aligned}=&~\left.{\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})}\right\}{\text{Stage 1: Individual-Level Model}}\\{\phantom {spacer}}\\\times &~\left.{\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 2: Population Model}}\\{\phantom {spacer}}\\\times &~\left.{p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 3: Prior}}\end{aligned}}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function {\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== See also ==
Hyperparameter
Mixed-design analysis of variance
Multiscale modeling
Random effects model
Nonlinear mixed-effects model
Bayesian hierarchical modeling
Restricted randomization
== Notes ==
== References ==
== Further reading ==
Gelman, A.; Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 235–299. ISBN 978-0-521-68689-1.
Goldstein, H. (2011). Multilevel Statistical Models (4th ed.). London: Wiley. ISBN 978-0-470-74865-7.
Hedeker, D.; Gibbons, R. D. (2012). Longitudinal Data Analysis (2nd ed.). New York: Wiley. ISBN 978-0-470-88918-3.
Hox, J. J. (2010). Multilevel Analysis: Techniques and Applications (2nd ed.). New York: Routledge. ISBN 978-1-84872-845-5.
Raudenbush, S. W.; Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods (2nd ed.). Thousand Oaks, CA: Sage. This concentrates on education.
Snijders, T. A. B.; Bosker, R. J. (2011). Multilevel Analysis: an Introduction to Basic and Advanced Multilevel Modeling (2nd ed.). London: Sage. ISBN 9781446254332.
Swamy, P. A. V. B.; Tavlas, George S. (2001). "Random Coefficient Models". In Baltagi, Badi H. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 410–429. ISBN 978-0-631-21254-6.
Verbeke, G.; Molenberghs, G. (2013). Linear Mixed Models for Longitudinal Data. Springer. Includes SAS code
Gomes, Dylan G.E. (20 January 2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMC 8784019. PMID 35116198.
== External links ==
Centre for Multilevel Modelling
In social psychology, the health belief model (HBM) is a psychological framework used to explain and predict individuals' health-related behaviors, attitudes, and beliefs, including potentially detrimental ones. Developed in the 1950s by social psychologists at the United States Public Health Service, the model examines how perceptions of susceptibility to illness, the severity of health conditions, the benefits of preventive care, and barriers to healthcare influence behavior. The HBM is widely used in health behavior research and public health interventions to understand and promote engagement in health-protective behaviors. It also incorporates concepts similar to the transtheoretical model, such as self-efficacy (confidence in one's ability to take action), and identifies the role of cues to action, such as health campaigns or medical advice, in prompting behavior change.
== History ==
One of the first theories of health behavior, the HBM was developed in the 1950s by social psychologists Irwin M. Rosenstock, Godfrey M. Hochbaum, S. Stephen Kegeles, and Howard Leventhal at the U.S. Public Health Service. At the time, researchers and health practitioners were concerned that few people were getting screened for tuberculosis (TB), even though mobile X-ray units were being sent into neighborhoods. The HBM has since been applied to predict a wide variety of health-related behaviors, such as being screened for the early detection of asymptomatic diseases and receiving immunizations. More recently, the model has been applied to understand intentions to vaccinate (e.g., against COVID-19), preventive measures to combat the spread of COVID-19 at social gatherings (such as getting tested or limiting the number of attendees), responses to symptoms of disease, compliance with medical regimens, lifestyle behaviors (e.g., sexual risk behaviors), and behaviors related to chronic illnesses, which may require long-term behavior maintenance in addition to initial behavior change. Amendments to the model were made as late as 1988 to incorporate emerging evidence within the field of psychology about the role of self-efficacy in decision-making and behavior.
== Theoretical constructs ==
The theoretical constructs of the HBM originate in cognitive psychology. In the early twentieth century, cognitive theorists proposed that reinforcements operate by affecting expectations rather than by affecting behavior directly. Cognitive theories that emphasize mental processes are viewed as expectancy-value models, because they propose that behavior is a function of the degree to which people value an outcome and their expectation that a particular action will lead to that outcome. In terms of health-related behaviors, the value is avoiding sickness; the expectation is that a specific health action can prevent the condition for which people believe they may be at risk.
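The expectancy-value idea can be sketched numerically. This is an illustrative toy, not part of the HBM itself: the model does not prescribe a quantitative combination rule, and the function name and 0-to-1 scales here are assumptions.

```python
def expectancy_value(value, expectancy):
    """Toy expectancy-value score: motivation to act is the product of
    how much the outcome is valued and how strongly the person expects
    the action to produce it (both scaled to [0, 1])."""
    return value * expectancy

# Avoiding illness is valued highly (0.9), but the action is believed
# only moderately likely to prevent it (0.5), giving a score of 0.45.
score = expectancy_value(0.9, 0.5)
```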
The following constructs of the HBM are proposed to vary between individuals and predict engagement in health-related behaviors.
=== Perceived susceptibility ===
Perceived susceptibility refers to subjective assessment of risk of developing a health problem. The HBM predicts that individuals who perceive that they are susceptible to a particular health problem will engage in behaviors to reduce their risk of developing the health problem. Individuals with low perceived susceptibility may deny that they are at risk for contracting a particular illness. Others may acknowledge the possibility that they could develop the illness, but believe it is unlikely.
The combination of perceived severity and perceived susceptibility is referred to as perceived threat. Perceived severity and perceived susceptibility to a given health condition depend on knowledge about the condition. The HBM predicts that higher perceived threat leads to a higher likelihood of engagement in health-promoting behaviors.
=== Perceived severity ===
Perceived severity refers to the subjective assessment of the severity of a health problem and its potential consequences. The HBM proposes that individuals who perceive a given health problem as serious are more likely to engage in behaviors to prevent the health problem from occurring (or reduce its severity). Perceived seriousness encompasses beliefs about the disease itself (e.g., whether it is life-threatening or may cause disability or pain) as well as broader impacts of the disease on functioning in work and social roles.
In a 2019 study of Australians' self-reports of receiving the influenza vaccine, researchers found that perceived severity predicted the likelihood of vaccination. Participants were asked, "On a scale from 0 to 10, how severe do you think the flu would be if you got it?" to measure perceived severity: 31% perceived the severity of getting the flu as low, 44% as moderate, and 25% as high. Those with high perceived severity were significantly more likely to have received the vaccine than those with moderate perceived severity, while self-reported vaccination was similar for individuals with low and moderate perceived severity.
=== Perceived benefits ===
Health-related behaviors are also influenced by the perceived benefits of taking action. Perceived benefits refer to an individual's assessment of the value or efficacy of engaging in a health-promoting behavior to decrease risk of disease. If an individual believes that a particular action will reduce susceptibility to a health problem or decrease its seriousness, then he or she is likely to engage in that behavior regardless of objective facts regarding the effectiveness of the action.
=== Perceived barriers ===
Health-related behaviors are also a function of perceived barriers to taking action. Perceived barriers refer to an individual's assessment of the obstacles to behavior change. Even if an individual perceives a health condition as threatening and believes that a particular action will effectively reduce the threat, barriers may prevent engagement in the health-promoting behavior. In other words, the perceived benefits must outweigh the perceived barriers in order for behavior change to occur. Perceived barriers to taking action include the perceived inconvenience, expense, danger (e.g., side effects of a medical procedure) and discomfort (e.g., pain, emotional upset) involved in engaging in the behavior. For instance, lack of access to affordable health care and the perception that a flu vaccine shot will cause significant pain may act as barriers to receiving the flu vaccine. In a study of breast and cervical cancer screening among Hispanic women, perceived barriers, such as fear of cancer, embarrassment, fatalistic views of cancer, and language, were found to impede screening.
=== Modifying variables ===
Individual characteristics, including demographic, psychosocial, and structural variables, can affect perceptions (i.e., perceived seriousness, susceptibility, benefits, and barriers) of health-related behaviors. Demographic variables include age, sex, race, ethnicity, and education, among others. Psychosocial variables include personality, social class, and peer and reference group pressure, among others. Structural variables include knowledge about a given disease and prior contact with the disease, among other factors. The HBM suggests that modifying variables affect health-related behaviors indirectly by affecting perceived seriousness, susceptibility, benefits, and barriers.
=== Cues to action ===
The HBM posits that a cue, or trigger, is necessary for prompting engagement in health-promoting behaviors. Cues to action can be internal or external. Physiological cues (e.g., pain, symptoms) are an example of internal cues to action. External cues include events or information from close others, the media, or health care providers promoting engagement in health-related behaviors. Examples of cues to action include a reminder postcard from a dentist, the illness of a friend or family member, mass media campaigns on health issues, and product health warning labels. The intensity of cues needed to prompt action varies between individuals by perceived susceptibility, seriousness, benefits, and barriers.
=== Self-efficacy ===
Self-efficacy was added to the four components of the HBM (i.e., perceived susceptibility, severity, benefits, and barriers) in 1988. Self-efficacy refers to an individual's perception of his or her competence to successfully perform a behavior. Self-efficacy was added to the HBM in an attempt to better explain individual differences in health behaviors. The model was originally developed in order to explain engagement in one-time health-related behaviors such as being screened for cancer or receiving an immunization. Eventually, the HBM was applied to more substantial, long-term behavior change such as diet modification, exercise, and smoking. Developers of the model recognized that confidence in one's ability to effect change in outcomes (i.e., self-efficacy) was a key component of health behavior change. For example, Schmiege et al. found that when dealing with calcium consumption and weight-bearing exercises, self-efficacy was a more powerful predictor than beliefs about future negative health outcomes.
Rosenstock et al. argued that self-efficacy could be added to the other HBM constructs without elaboration of the model's theoretical structure. However, this was considered short-sighted because related studies indicated that key HBM constructs have indirect effects on behavior as a result of their effect on perceived control and intention, which might be regarded as more proximal factors of action.
== Empirical support ==
The HBM has gained substantial empirical support since its development in the 1950s. It remains one of the most widely used and well-tested models for explaining and predicting health-related behavior. A 1984 review of 18 prospective and 28 retrospective studies suggests that the evidence for each component of the HBM is strong. The review reports that empirical support for the HBM is particularly notable given the diverse populations, health conditions, and health-related behaviors examined and the various study designs and assessment strategies used to evaluate the model. A more recent meta-analysis found strong support for perceived benefits and perceived barriers predicting health-related behaviors, but weak evidence for the predictive power of perceived seriousness and perceived susceptibility. The authors of the meta-analysis suggest that examination of potential moderated and mediated relationships between components of the model is warranted.
Several studies have provided empirical support from the chronic illness perspective. Becker et al. used the model to predict and explain mothers' adherence to diets prescribed for their obese children. Cerkoney et al. interviewed insulin-treated diabetic individuals after diabetes classes at a community hospital, empirically testing the HBM's association with the compliance levels of persons chronically ill with diabetes mellitus.
== Applications ==
The HBM has been used to develop effective interventions to change health-related behaviors by targeting various aspects of the model's key constructs. Interventions based on the HBM may aim to increase perceived susceptibility to and perceived seriousness of a health condition by providing education about prevalence and incidence of disease, individualized estimates of risk, and information about the consequences of disease (e.g., medical, financial, and social consequences). Interventions may also aim to alter the cost-benefit analysis of engaging in a health-promoting behavior (i.e., increasing perceived benefits and decreasing perceived barriers) by providing information about the efficacy of various behaviors to reduce risk of disease, identifying common perceived barriers, providing incentives to engage in health-promoting behaviors, and engaging social support or other resources to encourage health-promoting behaviors. Furthermore, interventions based on the HBM may provide cues to action to remind and encourage individuals to engage in health-promoting behaviors. Interventions may also aim to boost self-efficacy by providing training in specific health-promoting behaviors, particularly for complex lifestyle changes (e.g., changing diet or physical activity, adhering to a complicated medication regimen). Interventions can be aimed at the individual level (i.e., working one-on-one with individuals to increase engagement in health-related behaviors) or the societal level (e.g., through legislation, changes to the physical environment, mass media campaigns).
== Limitations ==
The HBM attempts to predict health-related behaviors by accounting for individual differences in beliefs and attitudes. However, it does not account for other factors that influence health behaviors. For instance, habitual health-related behaviors (e.g., smoking, seatbelt buckling) may become relatively independent of conscious health-related decision-making processes. Additionally, individuals engage in some health-related behaviors for reasons unrelated to health (e.g., exercising for aesthetic reasons). Environmental factors outside an individual's control may prevent engagement in desired behaviors. For example, an individual living in a dangerous neighborhood may be unable to go for a jog outdoors due to safety concerns. Furthermore, the HBM does not consider the impact of emotions on health-related behavior. Evidence suggests that fear may be a key factor in predicting health-related behavior.
Alternative factors may predict health behavior, such as outcome expectancy (i.e., whether the person feels they will be healthier as a result of their behavior) and self-efficacy (i.e., the person's belief in their ability to carry out preventive behavior).
The theoretical constructs that constitute the HBM are broadly defined. Furthermore, the HBM does not specify how constructs of the model interact with one another. Therefore, different operationalizations of the theoretical constructs may not be strictly comparable across studies.
Research assessing the contribution of cues to action in predicting health-related behaviors is limited. Cues to action are often difficult to assess, limiting research in this area. For instance, individuals may not accurately report cues that prompted behavior change. Cues such as a public service announcement on television or on a billboard may be fleeting and individuals may not be aware of their significance in prompting them to engage in a health-related behavior. Interpersonal influences are also particularly difficult to measure as cues.
Scholars have extended the HBM by adding four variables (self-identity, perceived importance, consideration of future consequences, and concern for appearance) as possible determinants of healthy behavior. They found that consideration of future consequences, self-identity, concern for appearance, perceived importance, self-efficacy, and perceived susceptibility are significant determinants of healthy eating behavior that can be manipulated by healthy eating intervention design.
== See also ==
United States Public Health Service
Centers for Disease Control and Prevention
National Institute for Occupational Safety and Health
== References ==
=== Bibliography ===
== Further reading ==
Plackett–Burman designs are experimental designs presented in 1946 by Robin L. Plackett and J. P. Burman while working in the British Ministry of Supply.
Their goal was to find experimental designs for investigating the dependence of some measured quantity on a number of independent variables (factors), each taking L levels, in such a way as to minimize the variance of the estimates of these dependencies using a limited number of experiments. Interactions between the factors were considered negligible. The solution to this problem is to find an experimental design where each combination of levels for any pair of factors appears the same number of times, throughout all the experimental runs (refer to table). A complete factorial design would satisfy this criterion, but the idea was to find smaller designs.
For the case of two levels (L = 2), Plackett and Burman used the method found in 1933 by Raymond Paley for generating orthogonal matrices whose elements are all either 1 or −1 (Hadamard matrices). Paley's method could be used to find such matrices of size N for most N equal to a multiple of 4. In particular, it worked for all such N up to 100 except N = 92. If N is a power of 2, however, the resulting design is identical to a fractional factorial design, so Plackett–Burman designs are mostly used when N is a multiple of 4 but not a power of 2 (i.e. N = 12, 20, 24, 28, 36 …). If one is trying to estimate fewer than N parameters (including the overall average), then one simply uses a subset of the columns of the matrix.
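The two-level construction can be illustrated for N = 12. The sketch below, using NumPy, builds the 12-run design from a commonly tabulated generator row: the first 11 runs are cyclic shifts of the generator and the last run is all −1s. Appending a column of +1s for the overall average yields a Hadamard matrix of order 12, so all columns are mutually orthogonal.

```python
import numpy as np

# A commonly tabulated generator row for the 12-run Plackett–Burman design.
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])

# 11 cyclic shifts of the generator, then a closing row of all -1s:
# 12 runs for up to 11 two-level factors.
design = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

# Adding an all-ones intercept column gives a Hadamard matrix of order 12,
# so every pair of columns is orthogonal: H^T H = 12 I.
H = np.hstack([np.ones((12, 1), dtype=int), design])
assert np.array_equal(H.T @ H, 12 * np.eye(12, dtype=int))
```

The orthogonality check is exactly the pairwise-balance criterion described above: every pair of columns contains each of the four level combinations (++, +−, −+, −−) equally often.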
For the case of more than two levels, Plackett and Burman rediscovered designs that had previously been given by Raj Chandra Bose and K. Kishen at the Indian Statistical Institute.
Plackett and Burman give specifics for designs having a number of experiments equal to the number of levels L to some integer power, for L = 3, 4, 5, or 7.
When interactions between factors are not negligible, they are confounded with the main effects in Plackett–Burman designs, meaning that the designs do not permit one to distinguish between certain main effects and certain interactions.
== Extended uses ==
In 1993, Dennis Lin described a construction method via half-fractions of Plackett–Burman designs, using one column to split the runs in half. The resulting matrix, minus that column, is a "supersaturated design" for finding significant first order effects, under the assumption that few exist.
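Lin's half-fraction construction can be sketched with the 12-run design: pick one column as the branching column, keep only the runs where it equals +1, and drop the column itself. What remains is a 6-run design for 10 factors, i.e. more effects than runs, hence "supersaturated". The generator row used here is an assumption carried over from standard two-level PB tables.

```python
import numpy as np

# Build a 12-run Plackett–Burman design (cyclic generator + all-minus row).
gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
pb12 = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

# Branch on the last column: keep the half of the runs where it is +1,
# then drop the branching column itself.
branch = pb12[:, -1]
half_fraction = pb12[branch == 1][:, :-1]

# 6 runs for 10 two-level factors: a supersaturated design in Lin's sense.
assert half_fraction.shape == (6, 10)
```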
Box–Behnken designs can be made smaller, or very large ones constructed, by replacing the fractional factorials and incomplete blocks traditionally used for plan and seed matrices, respectively, with Plackett–Burmans. For example, a quadratic design for 30 variables requires a 30-column PB plan matrix of zeroes and ones, with the ones in each line replaced using PB seed matrices of −1s and +1s (for 15 or 16 variables) wherever a one appears in the plan matrix, creating a 557-run design with values −1, 0, +1 to estimate the 496 parameters of a full quadratic model. Adding axial points allows estimating univariate cubic and quartic effects.
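The 496 figure follows from the size of a full quadratic model: an intercept, a linear and a pure-quadratic term for each variable, and all two-way interactions. A quick check (the helper name is ours):

```python
from math import comb

def quadratic_params(n):
    # intercept + n linear + n squared + C(n, 2) two-way interaction terms
    return 1 + n + n + comb(n, 2)

# 30 variables -> 1 + 30 + 30 + 435 = 496 parameters, as stated.
assert quadratic_params(30) == 496
```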
By associating certain columns with parameters to be estimated, Plackett–Burmans can also be used to construct mixed categorical and numerical designs, with interactions or high order effects, requiring no more than 4 runs more than the number of model parameters to be estimated. Sort by the a − 1 columns assigned to categorical variable A and the following columns, where A = 1 + int(a·i /(max(i) + 0.00001)), i = row number and a = the number of values of A. Next sort on the columns assigned to any other categorical variables and the following columns, repeating as needed. Such designs, if large, may otherwise be incomputable by standard search techniques like D-optimality. For example, 13 variables averaging 3 values each could have well over a million combinations to search. To estimate the 105 parameters in a quadratic model of 13 variables, one must formally exclude from consideration or compute |X'X| for well over (10^6 choose 102), i.e. (3^13 choose 105), or roughly 10^484 matrices.
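The order of magnitude of that search space can be checked directly: the number of ways to choose 105 runs from the 3^13 ≈ 1.59 million candidate level combinations is (3^13 choose 105). A sketch using exact integer arithmetic:

```python
from math import comb, log10

candidates = 3 ** 13               # 1,594,323 candidate level combinations
n_subsets = comb(candidates, 105)  # ways to choose 105 runs from them

# log10 of the exact count is about 483, i.e. on the order of the
# "roughly 10^484" figure quoted in the text.
magnitude = log10(n_subsets)
assert 480 < magnitude < 487
```

Python's `math.log10` handles integers larger than the float range, so the exact binomial coefficient can be used without overflow.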
== 4 to 48 runs, sorted to show half-fractions ==
P.B.4
+ + +
+ – –
– + –
– – +
P.B.8
+ + + + + + +
+ + – – – – +
+ – + + – – –
+ – – – + + –
– + + – + – –
– + – + – + –
– – + – – + +
– – – + + – +
P.B.12
+ + + + + + + + + + +
+ + + + – – – + – – –
+ + – – – + – – + – +
+ – + – + + + – – – –
+ – – + – – + – + + –
+ – – – + – – + – + +
– + + – – – + – – + +
– + – + + + – – – + –
– + – – + – + + + – –
– – + + + – – – + – +
– – + – – + – + + + –
– – – + – + + + – – +
P.B.16
+ + + + + + + + + + + + + + +
+ + + – – – – – – – – + + + +
+ + – + – – – – + + + – – – +
+ + – – + + + + – – – – – – +
+ – + + + – – + + – – + – – –
+ – + – – + + – – + + + – – –
+ – – + – + + – + – – – + + –
+ – – – + – – + – + + – + + –
– + + + – + – + – + – – + – –
– + + – + – + – + – + – + – –
– + – + + – + – – + – + – + –
– + – – – + – + + – + + – + –
– – + + – – + + – – + – – + +
– – + – + + – – + + – – – + +
– – – + + + – – – – + + + – +
– – – – – – + + + + – + + – +
P.B.20
+ + + + + + + + + + + + + + + + + + +
+ + + – + – + – – – – + + – – + – – +
+ + – + – + – – – – + + – – + – – + +
+ + – + – – – – + + – – + – – + + + –
+ + – – – – + + – – + – – + + + + – –
+ – + + + + – + – + – – – – + + – – –
+ – + – + – – – – + + – – + – – + + +
+ – + – – + + + + – + – + – – – – + –
+ – – + – – + + + + – + – + – – – – +
+ – – – + + – – + – – + + + + – + – –
– + + + + – + – + – – – – + + – – + –
– + + + – + – + – – – – + + – – + – +
– + + – – + – – + + + + – + – + – – –
– + – – + + + + – + – + – – – – + + –
– + – – + – – + + + + – + – + – – – +
– – + + – – + – – + + + + – + – + – –
– – + – – – – + + – – + – – + + + + +
– – – + + + + – + – + – – – – + + – +
– – – + + – – + – – + + + + – + – + –
– – – – – + + – – + – – + + + + – + +
P.B.24
+ + + + + + + + + + + + + + + + + + + + + + +
+ + + – + – + + – – + + – – + – + – – – – – +
+ + + – – + + – – + – + – – – – – + + + + – –
+ + – + + – – + + – – + – + – – – – – + + + –
+ + – + – + + – – + + – – + – + – – – – – + +
+ + – – – – – + + + + – + – + + – – + + – – –
+ – + + – – + – + – – – – – + + + + – + – + –
+ – + – + + – – + + – – + – + – – – – – + + +
+ – + – + – – – – – + + + + – + – + + – – + –
+ – – + + – – + – + – – – – – + + + + – + – +
+ – – + – + – – – – – + + + + – + – + + – – +
+ – – – – + + + + – + – + + – – + + – – + – –
– + + + + – + – + + – – + + – – + – + – – – –
– + + + – + – + + – – + + – – + – + – – – – +
– + + – – + – + – – – – – + + + + – + – + + –
– + – + – – – – – + + + + – + – + + – – + + –
– + – – + + – – + – + – – – – – + + + + – + +
– + – – + – + – – – – – + + + + – + – + + – +
– – + + + + – + – + + – – + + – – + – + – – –
– – + + – – + + – – + – + – – – – – + + + + +
– – + – – – – – + + + + – + – + + – – + + – +
– – – + + + + – + – + + – – + + – – + – + – –
– – – – + + + + – + – + + – – + + – – + – + –
– – – – – – + + + + – + – + + – – + + – – + +
P.B.28
+ + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + – – + + + – + + – – + + + + – – – – – – – – + +
+ + + – – – – – – – – + + + + – – + + + – + + – – + +
+ + – + + – – + + + + – – – – – – – – + + + + – – + +
+ + – + – – + + – – – + – – + + – + – – + – + – + – –
+ + – + – – + – + – + – + + – + – – + + – – – + – – –
+ + – – – + – – + + – + – – + – + – + – + + – + – – –
+ – + + – + – – + + – – – + – – + + – + – – + – + – –
+ – + – + + – + – – + + – – – + – – + + – + – – + – –
+ – + – + – + + – + – – + + – – – + – – + + – + – – –
+ – + – + – + – + – + – + – + – + – + – + – + – + – +
+ – – + + + + – – – – – – – – + + + + – – + + + – + +
+ – – + + + – + + – – + + + + – – – – – – – – + + + +
+ – – – – – – – – + + + + – – + + + – + + – – + + + +
– + + + + – – + + + – + + – – + + + + – – – – – – – +
– + + + + – – – – – – – – + + + + – – + + + – + + – +
– + + + – + + – – + + + + – – – – – – – – + + + + – +
– + + – – + + + + – – – – – – – – + + + + – – + + + +
– + – – + + – + – – + – + – + – + + – + – – + + – – –
– + – – + + – – – + – – + + – + – – + – + – + – + + –
– + – – + – + – + – + + – + – – + + – – – + – – + + –
– – + + – + – – + – + – + – + + – + – – + + – – – + –
– – + + – – – + – – + + – + – – + – + – + – + + – + –
– – + – + – + – + + – + – – + + – – – + – – + + – + –
– – – + + + + – – + + + – + + – – + + + + – – – – – +
– – – + – – + + – + – – + – + – + – + + – + – – + + –
– – – – – + + + + – – + + + – + + – – + + + + – – – +
– – – – – – – + + + + – – + + + – + + – – + + + + – +
P.B.32
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + – – – – – – – – – – – – – – – – + + + + + + + + + + +
+ + + – + – – – – – – – – + + + + + + + – – – – – – – + + + +
+ + + – – + + + + + + + + – – – – – – – – – – – – – – + + + +
+ + – + + – – – – + + + + – – – – + + + – – – – + + + – – – +
+ + – + – + + + + – – – – + + + + – – – – – – – + + + – – – +
+ + – – + + + + + – – – – – – – – + + + + + + + – – – – – – +
+ + – – – – – – – + + + + + + + + – – – + + + + – – – – – – +
+ – + + + + – – + + – – + + – – + + – – + – – + + – – + – – –
+ – + + – – + + – – + + – – + + – – + + + – – + + – – + – – –
+ – + – + – + + – – + + – + – – + + – – – + + – – + + + – – –
+ – + – – + – – + + – – + – + + – – + + – + + – – + + + – – –
+ – – + + – + + – + – – + – + + – + – – – + + – + – – – + + –
+ – – + – + – – + – + + – + – – + – + + – + + – + – – – + + –
+ – – – + + – – + – + + – – + + – + – – + – – + – + + – + + –
+ – – – – – + + – + – – + + – – + – + + + – – + – + + – + + –
– + + + + – + – + – + – + – + – + – + – – + – + – + – – + – –
– + + + – + – + – + – + – + – + – + – + – + – + – + – – + – –
– + + – + + – + – + – + – – + – + – + – + – + – + – + – + – –
– + + – – – + – + – + – + + – + – + – + + – + – + – + – + – –
– + – + + + – + – – + – + + – + – – + – + – + – – + – + – + –
– + – + – – + – + + – + – – + – + + – + + – + – – + – + – + –
– + – – + – + – + + – + – + – + – – + – – + – + + – + + – + –
– + – – – + – + – – + – + – + – + + – + – + – + + – + + – + –
– – + + + – – + + – – + + – – + + – – + – – + + – – + – – + +
– – + + – + + – – + + – – + + – – + + – – – + + – – + – – + +
– – + – + + + – – + + – – – – + + – – + + + – – + + – – – + +
– – + – – – – + + – – + + + + – – + + – + + – – + + – – – + +
– – – + + + + – – – – + + + + – – – – + + + – – – – + + + – +
– – – + – – – + + + + – – – – + + + + – + + – – – – + + + – +
– – – – + – – + + + + – – + + – – – – + – – + + + + – + + – +
– – – – – + + – – – – + + – – + + + + – – – + + + + – + + – +
P.B.36
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + + – + + + + – – + + – – – – – – + + + + – – – – – – + + – – + +
+ + + – – + + – – – – – – + + + + – – – – – – + + – – + + + + + – + +
+ + + – – – – – – + + – – + + + + + – + + + + – – + + – – – – – – + +
+ + – + + + + – – + + – – – – – – + + + + – – – – – – + + – – + + + +
+ + – + – – + – + – + + – – + + – + – – – + – + – – + + – – + – + – –
+ + – + – – – + – + – – + + – – + – + – + + – + – – + – + – + + – – –
+ + – – + + – + – – – + – + – – + + – – + – + – + + – + – – + – + – –
+ + – – + – + – + + – + – – + – + – + + – – + + – + – – – + – + – – –
+ – + + – + – – + – + – + + – – + + – + – – – + – + – – + + – – + – –
+ – + + – – + + – + – – – + – + – – + + – – + – + – + + – + – – + – –
+ – + – + + – + – – + – + – + + – – + + – + – – – + – + – – + + – – –
+ – + – + + – – + + – + – – – + – + – – + + – – + – + – + + – + – – –
+ – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – +
+ – – + + + + + – + + + + – – + + – – – – – – + + + + – – – – – – + +
+ – – + + – – – – – – + + + + – – – – – – + + – – + + + + + – + + + +
+ – – – – – – + + + + – – – – – – + + – – + + + + + – + + + + – – + +
+ – – – – – – + + – – + + + + + – + + + + – – + + – – – – – – + + + +
– + + + + + – + + + + – – + + – – – – – – + + + + – – – – – – + + – +
– + + + + – – + + – – – – – – + + + + – – – – – – + + – – + + + + + +
– + + + + – – – – – – + + – – + + + + + – + + + + – – + + – – – – – +
– + + – – + + + + + – + + + + – – + + – – – – – – + + + + – – – – – +
– + + – – – – – – + + + + – – – – – – + + – – + + + + + – + + + + – +
– + – + – – + + – – + – + – + + – + – – + – + – + + – – + + – + – – –
– + – – + + – – + – + – + + – + – – + – + – + + – – + + – + – – – + –
– + – – + – + – + + – – + + – + – – – + – + – – + + – – + – + – + + –
– + – – – + – + – – + + – – + – + – + + – + – – + – + – + + – – + + –
– – + + – + – – – + – + – – + + – – + – + – + + – + – – + – + – + + –
– – + + – – + – + – + + – + – – + – + – + + – – + + – + – – – + – + –
– – + – + – + + – + – – + – + – + + – – + + – + – – – + – + – – + + –
– – + – + – + + – – + + – + – – – + – + – – + + – – + – + – + + – + –
– – – + + + + – – – – – – + + – – + + + + + – + + + + – – + + – – – +
– – – + + – – + + + + + – + + + + – – + + – – – – – – + + + + – – – +
– – – + – + – – + + – – + – + – + + – + – – + – + – + + – – + + – + –
– – – – – + + + + – – – – – – + + – – + + + + + – + + + + – – + + – +
– – – – – + + – – + + + + + – + + + + – – + + – – – – – – + + + + – +
P.B.40
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + –
+ + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + –
+ + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + –
+ + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + –
+ + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – +
+ + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – +
+ + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – +
+ + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – +
+ + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – +
+ – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – +
+ – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – +
+ – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – +
+ – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – +
+ – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – +
+ – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + –
+ – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + –
+ – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + –
+ – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + –
+ – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + –
– + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – –
– + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – –
– + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – –
– + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – –
– + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – –
– + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + +
– + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + +
– + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + +
– + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + +
– + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + +
– – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + +
– – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + +
– – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + +
– – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + +
– – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + +
– – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – –
– – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – –
– – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – –
– – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – –
– – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – –
P.B.44
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + –
+ + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – +
+ + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – –
+ + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – –
+ + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + +
+ + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – +
+ + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + –
+ + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – –
+ + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + +
+ + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – +
+ – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + +
+ – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – –
+ – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – +
+ – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – –
+ – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + –
+ – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – –
+ – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + –
+ – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – +
+ – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + +
+ – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + –
+ – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + +
– + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + +
– + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + –
– + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – +
– + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – –
– + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + +
– + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – –
– + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – +
– + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + –
– + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + –
– + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + +
– + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – –
– – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – +
– – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + –
– – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + +
– – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + –
– – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – +
– – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – –
– – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + +
– – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – +
– – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – –
– – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – +
– – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + –
P.B.48
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + –
+ + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – –
+ + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – +
+ + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – +
+ + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + –
+ + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – –
+ + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + +
+ + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – –
+ + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – –
+ + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + +
+ + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – +
+ – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + +
+ – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – –
+ – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – +
+ – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + –
+ – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + –
+ – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + +
+ – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + –
+ – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + +
+ – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – +
+ – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + –
+ – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – –
+ – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – +
– + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – –
– + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – +
– + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + +
– + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + –
– + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + +
– + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – +
– + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + –
– + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – –
– + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + –
– + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + +
– + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + +
– + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – –
– – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – –
– – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + +
– – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + +
– – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – +
– – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – +
– – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + –
– – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – –
– – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – –
– – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – +
– – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + –
– – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + –
– – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – +
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology | Wikipedia/Plackett-Burman_design |
In the statistical theory of the design of experiments, blocking is the arranging of experimental units that are similar to one another in groups (blocks) based on one or more variables. These variables are chosen carefully to minimize the effect of their variability on the observed outcomes. There are different ways that blocking can be implemented, resulting in different confounding effects. However, the different methods share the same purpose: to control variability introduced by specific factors that could influence the outcome of an experiment. The roots of blocking originated with the statistician Ronald Fisher, following his development of ANOVA.
== History ==
The use of blocking in experimental design has an evolving history that spans multiple disciplines. The foundational concepts of blocking date back to the early 20th century with statisticians like Ronald A. Fisher. His work in developing analysis of variance (ANOVA) set the groundwork for grouping experimental units to control for extraneous variables. Blocking evolved over the years, leading to the formalization of randomized block designs and Latin square designs. Today, blocking still plays a pivotal role in experimental design, and in recent years, advancements in statistical software and computational capabilities have allowed researchers to explore more intricate blocking designs.
== Use ==
We often want to reduce or eliminate the influence of some confounding factor when designing an experiment. We can sometimes do this by "blocking", which involves considering separately blocks of data that have different levels of exposure to that factor.
=== Examples ===
Male and female: An experiment is designed to test a new drug on patients. There are two levels of the treatment, drug and placebo, administered to male and female patients in a double-blind trial. The sex of the patient is a blocking factor accounting for treatment variability between males and females. This reduces sources of variability and thus leads to greater precision.
Elevation: An experiment is designed to test the effects of a new pesticide on a specific patch of grass. The grass area contains a major elevation change and thus consists of two distinct regions – 'high elevation' and 'low elevation'. A treatment group (the new pesticide) and a placebo group are applied to both the high elevation and low elevation areas of grass. In this instance the researcher is blocking the elevation factor which may account for variability in the pesticide's application.
Intervention: Suppose a process is invented that intends to make the soles of shoes last longer, and a plan is formed to conduct a field trial. Given a group of n volunteers, one possible design would be to give n/2 of them shoes with the new soles and n/2 of them shoes with the ordinary soles, randomizing the assignment of the two kinds of soles. This type of experiment is a completely randomized design. Both groups are then asked to use their shoes for a period of time, after which the degree of wear of the soles is measured. This is a workable experimental design, but purely from the point of view of statistical accuracy (ignoring any other factors), a better design would be to give each person one regular sole and one new sole, randomly assigning the two types to the left and right shoe of each volunteer. Such a design is called a "randomized complete block design." This design will be more sensitive than the first, because each person is acting as his or her own control and thus the control group is more closely matched to the treatment group.
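The paired randomization described above can be sketched in Python. The helper name and volunteer labels below are illustrative, not from the source:

```python
import random

def paired_sole_assignment(volunteers, seed=0):
    """Randomized complete block design with each volunteer as a block:
    one foot gets the new sole, the other the regular sole, at random."""
    rng = random.Random(seed)
    plan = {}
    for v in volunteers:
        left_new = rng.random() < 0.5
        plan[v] = {"left": "new" if left_new else "regular",
                   "right": "regular" if left_new else "new"}
    return plan

plan = paired_sole_assignment(["v1", "v2", "v3", "v4"])
for v, feet in plan.items():
    print(v, feet)
```

Because every volunteer wears exactly one sole of each type, person-to-person variability cancels out of the within-pair comparison.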
=== Nuisance variables ===
In the examples listed above, a nuisance variable is a variable that is not the primary focus of the study but can affect the outcomes of the experiment. Nuisance variables are potential sources of variability that, if not controlled or accounted for, may confound the interpretation of the relationship between the independent and dependent variables.
To address nuisance variables, researchers can employ different methods such as blocking or randomization. Blocking involves grouping experimental units based on levels of the nuisance variable to control for its influence. Randomization helps distribute the effects of nuisance variables evenly across treatment groups.
By using one of these methods to account for nuisance variables, researchers can enhance the internal validity of their experiments, ensuring that the effects observed are more likely attributable to the manipulated variables rather than extraneous influences.
In the first example provided above, the sex of the patient would be a nuisance variable. For example, consider if the drug was a diet pill and the researchers wanted to test the effect of the diet pills on weight loss. The explanatory variable is the diet pill and the response variable is the amount of weight loss. Although the sex of the patient is not the main focus of the experiment—the effect of the drug is—it is possible that the sex of the individual will affect the amount of weight lost.
=== Blocking used for nuisance factors that can be controlled ===
In the statistical theory of the design of experiments, blocking is the arranging of experimental units in groups (blocks) that are similar to one another. Typically, a blocking factor is a source of variability that is not of primary interest to the experimenter.
When studying probability theory, the blocks method consists of splitting a sample into blocks (groups) separated by smaller subblocks so that the blocks can be considered almost independent. The blocks method helps prove limit theorems in the case of dependent random variables.
The blocks method was introduced by S. Bernstein.
The method was successfully applied in the theory of sums of dependent random variables and in extreme value theory.
==== Example ====
In our previous diet pills example, a blocking factor could be the sex of a patient. We could put individuals into one of two blocks (male or female). Within each of the two blocks, we can then randomly assign the patients to either the diet pill (treatment) or the placebo pill (control). By blocking on sex, this source of variability is controlled, leading to a clearer interpretation of how the diet pills affect weight loss.
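A minimal Python sketch of this blocked randomization; the patient labels and block sizes are hypothetical:

```python
import random

def blocked_randomization(patients, seed=42):
    """Group patients into blocks by sex, then randomly assign half of
    each block to treatment (diet pill) and half to control (placebo)."""
    rng = random.Random(seed)
    assignment = {}
    for sex in ("male", "female"):
        block = [p for p, s in patients.items() if s == sex]
        rng.shuffle(block)          # randomize order within the block
        half = len(block) // 2
        assignment.update({p: "diet pill" for p in block[:half]})
        assignment.update({p: "placebo" for p in block[half:]})
    return assignment

patients = {"p1": "male", "p2": "male", "p3": "female", "p4": "female"}
print(blocked_randomization(patients))
```

Each block contributes equally to both arms, so any sex effect is balanced across treatment and control.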
=== Definition of blocking factors ===
A nuisance factor is used as a blocking factor if every level of the primary factor occurs the same number of times with each level of the nuisance factor. The analysis of the experiment will focus on the effect of varying levels of the primary factor within each block of the experiment.
=== Block a few of the most important nuisance factors ===
The general rule is:
"Block what you can; randomize what you cannot."
Blocking is used to remove the effects of a few of the most important nuisance variables. Randomization is then used to reduce the contaminating effects of the remaining nuisance variables. For important nuisance variables, blocking will yield higher significance in the variables of interest than randomizing.
== Implementation ==
Implementing blocking in experimental design involves a series of steps to effectively control for extraneous variables and enhance the precision of treatment effect estimates.
=== Identify nuisance variables ===
Identify potential factors that are not the primary focus of the study but could introduce variability.
=== Select appropriate blocking factors ===
Carefully choose blocking factors based on their relevance to the study as well as their potential to confound the primary factors of interest.
=== Define block sizes ===
There are consequences to partitioning a certain sized experiment into a certain number of blocks as the number of blocks determines the number of confounded effects.
=== Assign treatments to blocks ===
You may choose to randomly assign experimental units to treatment conditions within each block, which may help ensure that any unaccounted-for variability is spread evenly across treatment groups. However, depending on how you assign treatments to blocks, you may obtain a different number of confounded effects. Because both the number and the identity of the confounded effects can be chosen, deliberate assignment of treatments to blocks can be superior to random assignment.
=== Replication ===
By running a different design for each replicate, where a different effect gets confounded each time, the interaction effects are partially confounded instead of completely sacrificing one single effect. Replication enhances the reliability of results and allows for a more robust assessment of treatment effects.
== Example ==
=== Table ===
One useful way to look at a randomized block experiment is to consider it as a collection of completely randomized experiments, each run within one of the blocks of the total experiment.
with
L1 = number of levels (settings) of factor 1
L2 = number of levels (settings) of factor 2
L3 = number of levels (settings) of factor 3
L4 = number of levels (settings) of factor 4
⋮
Lk = number of levels (settings) of factor k
=== Example ===
Suppose engineers at a semiconductor manufacturing facility want to test whether different wafer implant material dosages have a significant effect on resistivity measurements after a diffusion process taking place in a furnace. They have four different dosages they want to try and enough experimental wafers from the same lot to run three wafers at each of the dosages.
The nuisance factor they are concerned with is "furnace run" since it is known that each furnace run differs from the last and impacts many process parameters.
An ideal way to run this experiment would be to run all the 4x3=12 wafers in the same furnace run. That would eliminate the nuisance furnace factor completely. However, regular production wafers have furnace priority, and only a few experimental wafers are allowed into any furnace run at the same time.
A non-blocked way to run this experiment would be to run each of the twelve experimental wafers, in random order, one per furnace run. That would increase the experimental error of each resistivity measurement by the run-to-run furnace variability and make it more difficult to study the effects of the different dosages. The blocked way to run this experiment, assuming you can convince manufacturing to let you put four experimental wafers in a furnace run, would be to put four wafers with different dosages in each of three furnace runs. The only randomization would be choosing which of the three wafers with dosage 1 would go into furnace run 1, and similarly for the wafers with dosages 2, 3 and 4.
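The blocked furnace experiment can be sketched as follows. In this hypothetical plan each of the three furnace runs (blocks) receives one wafer at each of the four dosages, and only the wafer-to-slot order within a run is randomized:

```python
import random

def blocked_furnace_plan(dosages=(1, 2, 3, 4), runs=3, seed=1):
    """Each furnace run is a block containing every dosage exactly once;
    randomization only shuffles the order within the run."""
    rng = random.Random(seed)
    plan = []
    for run in range(1, runs + 1):
        order = list(dosages)
        rng.shuffle(order)
        plan.append({"run": run, "order": order})
    return plan

for row in blocked_furnace_plan():
    print(row)
```

Run-to-run furnace variability then affects all four dosages equally within each block instead of being confounded with dosage.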
==== Description of the experiment ====
Let X1 be dosage "level" and X2 be the blocking factor furnace run. Then the experiment can be described as follows:
k = 2 factors (1 primary factor X1 and 1 blocking factor X2)
L1 = 4 levels of factor X1
L2 = 3 levels of factor X2
n = 1 replication per cell
N = L1 * L2 = 4 * 3 = 12 runs
Before randomization, the design trials look like:
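The table of trials is not reproduced here, but before randomization the trial list is simply every (X1, X2) combination; a minimal sketch enumerating them:

```python
from itertools import product

L1, L2 = 4, 3  # levels of the primary factor X1 and the blocking factor X2
trials = [(x1, x2) for x1, x2 in product(range(1, L1 + 1), range(1, L2 + 1))]
print(len(trials))  # N = L1 * L2 = 12 runs
print(trials)
```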
==== Matrix representation ====
An alternate way of summarizing the design trials would be to use a 4x3 matrix whose 4 rows are the levels of the treatment X1 and whose columns are the 3 levels of the blocking variable X2. The cells in the matrix have indices that match the X1, X2 combinations above.
By extension, note that the trials for any k-factor randomized block design are simply the cell indices of a k-dimensional matrix.
=== Model ===
The model for a randomized block design with one nuisance variable is
Yij = μ + Ti + Bj + random error
{\displaystyle Y_{ij}=\mu +T_{i}+B_{j}+\mathrm {random\ error} }
where
Yij is any observation for which X1 = i and X2 = j
X1 is the primary factor
X2 is the blocking factor
μ is the general location parameter (i.e., the mean)
Ti is the effect for being in treatment i (of factor X1)
Bj is the effect for being in block j (of factor X2)
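As an illustration, data can be simulated from this additive model. The effect sizes below are invented for the sketch, using the 4 dosage levels and 3 furnace runs of the example:

```python
import random

mu = 10.0                                # general location parameter (mean)
T = {1: 1.5, 2: -0.5, 3: 0.0, 4: -1.0}   # treatment effects, 4 levels of X1
B = {1: 0.8, 2: -0.3, 3: -0.5}           # block effects, 3 levels of X2

rng = random.Random(0)
# Y_ij = mu + T_i + B_j + random error, one observation per cell
Y = {(i, j): mu + T[i] + B[j] + rng.gauss(0, 0.1)
     for i in T for j in B}
print(len(Y))  # 12 observations, one per (treatment, block) cell
```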
=== Estimates ===
Estimate for μ : {\displaystyle {\overline {Y}}} = the average of all the data
Estimate for Ti : {\displaystyle {\overline {Y}}_{i\cdot }-{\overline {Y}}} with {\displaystyle {\overline {Y}}_{i\cdot }} = average of all Y for which X1 = i.
Estimate for Bj : {\displaystyle {\overline {Y}}_{\cdot j}-{\overline {Y}}} with {\displaystyle {\overline {Y}}_{\cdot j}} = average of all Y for which X2 = j.
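These estimates are just row, column, and grand means. A sketch that recovers known effects from noise-free data (the effect values are invented and sum to zero, as the estimates implicitly assume):

```python
def rb_estimates(Y):
    """Randomized-block estimates: grand mean for mu, treatment-row means
    minus the grand mean for T_i, block-column means minus it for B_j."""
    rows = sorted({i for i, _ in Y})
    cols = sorted({j for _, j in Y})
    grand = sum(Y.values()) / len(Y)
    T_hat = {i: sum(Y[i, j] for j in cols) / len(cols) - grand for i in rows}
    B_hat = {j: sum(Y[i, j] for i in rows) / len(rows) - grand for j in cols}
    return grand, T_hat, B_hat

# Noise-free data built from mu = 10, T = (+1, -1), B = (+0.5, -0.5)
Y = {(i, j): 10.0 + t + b
     for i, t in ((1, 1.0), (2, -1.0)) for j, b in ((1, 0.5), (2, -0.5))}
grand, T_hat, B_hat = rb_estimates(Y)
print(grand, T_hat, B_hat)  # → 10.0 {1: 1.0, 2: -1.0} {1: 0.5, 2: -0.5}
```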
=== Generalizations ===
Generalized randomized block designs (GRBD) allow tests of block–treatment interaction and, like the RCBD, have exactly one blocking factor.
Latin squares (and other row–column designs) have two blocking factors that are believed to have no interaction.
Latin hypercube sampling
Graeco-Latin squares
Hyper-Graeco-Latin square designs
== See also ==
Algebraic statistics
Block design
Combinatorial design
Generalized randomized block design
Glossary of experimental design
Optimal design
Paired difference test
Dependent and independent variables
Blockmodeling
Paired data
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology
== Bibliography ==
Addelman, S. (1969). "The Generalized Randomized Block Design". The American Statistician. 23 (4): 35–36. doi:10.2307/2681737. JSTOR 2681737.
Addelman, S. (1970). "Variability of Treatments and Experimental Units in the Design and Analysis of Experiments". Journal of the American Statistical Association. 65 (331): 1095–1108. doi:10.2307/2284277. JSTOR 2284277.
Anscombe, F.J. (1948). "The Validity of Comparative Experiments". Journal of the Royal Statistical Society. A (General). 111 (3): 181–211. doi:10.2307/2984159. JSTOR 2984159. MR 0030181.
Bailey, R. A (2008). Design of Comparative Experiments. Cambridge University Press. ISBN 978-0-521-68357-9. Archived from the original on 2011-03-06. Retrieved 2010-02-22.{{cite book}}: CS1 maint: bot: original URL status unknown (link) Pre-publication chapters are available on-line.
Bapat, R. B. (2000). Linear Algebra and Linear Models (Second ed.). Springer. ISBN 978-0-387-98871-9.
Caliński T.; Kageyama S. (2000). Block designs: A Randomization approach. Vol. I: Analysis. New York: Springer-Verlag. ISBN 0-387-98578-6.
Caliński T.; Kageyama S. (2003). Block designs: A Randomization approach. Vol. II: Design. New York: Springer-Verlag. ISBN 0-387-95470-8. MR 1994124.
Gates, C.E. (Nov 1995). "What Really Is Experimental Error in Block Designs?". The American Statistician. 49 (4): 362–363. doi:10.2307/2684574. JSTOR 2684574.
Kempthorne, Oscar (1979). The Design and Analysis of Experiments (Corrected reprint of (1952) Wiley ed.). Robert E. Krieger. ISBN 0-88275-105-0.
Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments. Vol. I and II (Second ed.). Wiley. ISBN 978-0-470-38551-7.
Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments. Vol. I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9.
Hinkelmann, Klaus; Kempthorne, Oscar (2005). Design and Analysis of Experiments. Vol. 2: Advanced Experimental Design (First ed.). Wiley. ISBN 978-0-471-55177-5.
Lentner, Marvin; Thomas Bishop (1993). "The Generalized RCB Design (Chapter 6.13)". Experimental design and analysis (Second ed.). Blacksburg, VA: Valley Book Company. pp. 225–226. ISBN 0-9616255-2-X.
Raghavarao, Damaraju (1988). Constructions and Combinatorial Problems in Design of Experiments (corrected reprint of the 1971 Wiley ed.). New York: Dover. ISBN 0-486-65685-3.
Raghavarao, Damaraju; Padgett, L.V. (2005). Block Designs: Analysis, Combinatorics and Applications. World Scientific. ISBN 981-256-360-1.
Shah, Kirti R.; Sinha, Bikas K. (1989). Theory of Optimal Designs. Springer-Verlag. ISBN 0-387-96991-8.
Street, Anne Penfold; Street, Deborah J. (1987). Combinatorics of Experimental Design. Oxford U. P. [Clarendon]. ISBN 0-19-853256-3.
Wilk, M. B. (1955). "The Randomization Analysis of a Generalized Randomized Block Design". Biometrika. 42 (1–2): 70–79. doi:10.2307/2333423. JSTOR 2333423.
Zyskind, George (1963). "Some Consequences of randomization in a Generalization of the Balanced Incomplete Block Design". The Annals of Mathematical Statistics. 34 (4): 1569–1581. doi:10.1214/aoms/1177703889. JSTOR 2238364. | Wikipedia/Randomized_block_design |
Disease surveillance is an epidemiological practice by which the spread of disease is monitored in order to establish patterns of progression. The main role of disease surveillance is to predict, observe, and minimize the harm caused by outbreak, epidemic, and pandemic situations, as well as increase knowledge about which factors contribute to such circumstances. A key part of modern disease surveillance is the practice of disease case reporting.
In modern times, reporting incidences of disease outbreaks has been transformed from manual record keeping to instant worldwide internet communication.
Under the older system, the number of cases would be gathered from hospitals – which would be expected to see most of the occurrences – collated, and eventually made public. With the advent of modern communication technology, this has changed dramatically. Organizations like the World Health Organization (WHO) and the Centers for Disease Control and Prevention (CDC) can now report cases and deaths from significant diseases within days – sometimes within hours – of the occurrence. Further, there is considerable public pressure to make this information available quickly and accurately.
== Mandatory reporting ==
Formal reporting of notifiable infectious diseases is a requirement placed upon health care providers by many regional and national governments, and upon national governments by the World Health Organization, in order to monitor the spread of infectious agents. Since 1969, WHO has required that all cases of the following diseases be reported to the organization: cholera, plague, yellow fever, smallpox, relapsing fever and typhus. In 2005, the list was extended to include polio and SARS. Regional and national governments typically monitor a larger set of communicable diseases (around 80 in the U.S.) that can potentially threaten the general population. Tuberculosis, HIV, botulism, hantavirus, anthrax, and rabies are examples of such diseases. The incidence counts of diseases are often used as health indicators to describe the overall health of a population.
== World Health Organization ==
The World Health Organization (WHO) is the lead agency for coordinating global response to major diseases. The WHO maintains Websites for a number of diseases and has active teams in many countries where these diseases occur.
During the SARS outbreak in early 2004, for example, the Beijing staff of the WHO produced updates every few days for the duration of the outbreak. Beginning in January 2004, the WHO has produced similar updates for H5N1. These results are widely reported and closely watched.
WHO's Epidemic and Pandemic Alert and Response (EPR) to detect, verify rapidly and respond appropriately to epidemic-prone and emerging disease threats covers the following diseases:
Anthrax
Avian influenza
Crimean–Congo hemorrhagic fever
Dengue hemorrhagic fever
Ebola virus disease
Hepatitis
Influenza
Lassa fever
Marburg hemorrhagic fever
Meningococcal disease
Plague
Rift Valley fever
Severe acute respiratory syndrome (SARS)
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2)
Smallpox
Tularemia
Yellow fever
== Political challenges ==
As the lead organization in global public health, the WHO occupies a delicate role in global politics. It must maintain good relationships with each of the many countries in which it is active. As a result, it may only report results within a particular country with the agreement of the country's government. Because some governments regard the release of any information on disease outbreaks as a state secret, this can place the WHO in a difficult position.
The WHO-coordinated International Outbreak Alert and Response is designed to ensure that "outbreaks of potential international importance are rapidly verified and information is quickly shared within the Network" – but not necessarily with the public – and to integrate and coordinate "activities to support national efforts" rather than challenge national authority, in order to "respect the independence and objectivity of all partners". The commitment that "All Network responses will proceed with full respect for ethical standards, human rights, national and local laws, cultural sensitivities and tradition" assures each nation that its security, financial, and other interests will be given full weight.
== Technical challenges ==
Testing for a disease can be expensive, and distinguishing between two diseases can be prohibitively difficult in many countries. One standard means of determining if a person has had a particular disease is to test for the presence of antibodies that are particular to this disease. In the case of H5N1, for example, there is a low pathogenic H5N1 strain in wild birds in North America that a human could conceivably have antibodies against. It would be extremely difficult to distinguish between antibodies produced by this strain, and antibodies produced by Asian lineage HPAI A(H5N1). Similar difficulties are common, and make it difficult to determine how widely a disease may have spread.
There is currently little available data on the spread of H5N1 in wild birds in Africa and Asia. Without such data, predicting how the disease might spread in the future is difficult. Information that scientists and decision makers need to make useful medical products and informed decisions for health care, but currently lack include:
Surveillance of wild bird populations
Cell cultures of particular strains of diseases
== H5N1 ==
Surveillance of H5N1 in humans, poultry, wild birds, cats and other animals remains very weak in many parts of Asia and Africa. Much remains unknown about the exact extent of its spread.
H5N1 in China is less than fully reported. Blogs have described many discrepancies between official China government announcements concerning H5N1 and what people in China see with their own eyes. Many reports of total H5N1 cases have excluded China due to widespread disbelief in China's official numbers. (See Disease surveillance in China.)
"Only half the world's human bird flu cases are being reported to the World Health Organization within two weeks of being detected, a response time that must be improved to avert a pandemic, a senior WHO official said Saturday. Shigeru Omi, WHO's regional director for the Western Pacific, said it is estimated that countries would have only two to three weeks to stamp out, or at least slow, a pandemic flu strain after it began spreading in humans."
David Nabarro, chief avian flu coordinator for the United Nations, says avian flu has too many unanswered questions.
CIDRAP reported on 25 August 2006 on a new US government Website that allows the public to view current information about testing of wild birds for H5N1 avian influenza, which is part of a national wild-bird surveillance plan that "includes five strategies for early detection of highly pathogenic avian influenza. Sample numbers from three of these will be available on HEDDS: live wild birds, subsistence hunter-killed birds, and investigations of sick and dead wild birds. The other two strategies involve domestic bird testing and environmental sampling of water and wild-bird droppings. [...] A map on the new USGS site shows that 9327 birds from Alaska have been tested so far this year, with only a few from most other states. Last year, officials tested just 721 birds from Alaska and none from most other states, another map shows. The goal of the surveillance program for 2006 is to collect 75000 to 100000 samples from wild birds and 50000 environmental samples, officials have said".
== See also ==
1985 World Health Organization AIDS surveillance case definition
AIDS-defining clinical condition – CDC list of diseases associated with AIDS
Bioterrorism#Biosurveillance – Terrorism involving biological agents
Disease surveillance in China – Main public health surveillance activity in China
Public health surveillance – Collection, analysis and interpretation of health-related data
Predictive analytics – Statistical techniques analyzing facts to make predictions about unknown events
Pandemic prevention – Organization and management of preventive measures against pandemics
Contact tracing – Finding and identifying people in contact with someone with an infectious disease
Council of State and Territorial Epidemiologists – non-profit organization
Early Warning and Response System (EWRS) – European communicable disease communication system
Global Infectious Disease Epidemiology Network (GIDEON) – Medical decision support system (GIDEON)
Infection control – Medical discipline for preventing nosocomial or healthcare-associated infection
List of notifiable diseases
STD testing – Infection transmitted through human sexual behavior
UK statutory notification system – Infectious disease notification system in the UK
== References ==
== Further reading ==
CDC: Influenza Activity – United States and Worldwide, 2003–2004 Season, and Composition of the 2004–2005 Influenza Vaccine
Global Outbreak Alert & Response Network
WHO Alert & Response Operations
WHO Severe Acute Respiratory Syndrome Web site
WHO Avian Influenza Web site
Sickweather Archived 2021-12-06 at the Wayback Machine The world's first real-time social media disease surveillance tool
HealthMap The HealthMap real-time automated surveillance system is a program of Children's Hospital Boston with support from Google.org
GermTrax Tracking the spread of sickness and disease with the help of social media
ProMED-mail Archived 2007-12-26 at the Wayback Machine The global electronic reporting system for outbreaks of emerging infectious diseases & toxins, open to all sources. ProMED-mail, the Program for Monitoring Emerging Diseases, is a program of the International Society for Infectious Diseases with the support and encouragement of the Federation of American Scientists and SatelLife.
A drug policy is the policy regarding the control and regulation of psychoactive substances (commonly referred to as drugs), particularly those that are addictive or cause physical and mental dependence. While drug policies are generally implemented by governments, entities at all levels (from international organisations, national or local government, administrations, or public places) may have specific policies related to drugs.
Drug policies are usually aimed at combatting drug addiction or dependence, addressing both the demand for and supply of drugs, as well as mitigating the harm of drug use and providing medical assistance and treatment. Demand reduction measures include voluntary treatment, rehabilitation, substitution therapy, overdose management, alternatives to incarceration for minor drug-related offenses, medical prescription of drugs, awareness campaigns, community social services, and support for families. Supply-side reduction involves measures such as enacting foreign policy aimed at eradicating the international cultivation of plants used to make drugs, interception of drug trafficking, fines for drug offenses, and incarceration for persons convicted of drug offenses. Policies that help mitigate the dangers of drug use include needle syringe programs, drug substitution programs, and free facilities for testing a drug's purity.
The concept of "drugs" – a substance subject to control – varies from jurisdiction to jurisdiction. For example, heroin is regulated almost everywhere; substances such as khat, codeine, or alcohol are regulated in some places but not others. Most jurisdictions also regulate prescription drugs (medicinal drugs not considered dangerous but that can only be supplied to holders of a medical prescription), and sometimes drugs available without prescription but only from an approved supplier such as a pharmacy, but this is not usually described as a "drug policy". There are, however, some international standards as to which substances are under certain controls, in particular via the three international drug control conventions.
== International drug control treaties ==
=== History ===
The first international treaty to control a psychoactive substance was adopted at the Brussels Conference in 1890 in the context of the regulations against the slave trade, and concerned alcoholic beverages. It was followed by the final act of the Shanghai Opium Commission of 1909, which attempted to restore peace and regulate the trade in opium after the Opium Wars of the 19th century.
In 1912 at the First International Opium Conference held in the Hague, the multilateral International Opium Convention was adopted; it ultimately got incorporated into the Treaty of Versailles in 1919. A number of international treaties related to drugs followed in subsequent decades: the 1925 Agreement concerning the Manufacture of, Internal Trade in and Use of Prepared Opium (which introduced some restrictions—but no total prohibition—on the export of "Indian hemp" pure extracts), the 1931 Convention for Limiting the Manufacture and Regulating the Distribution of Narcotic Drugs and Agreement for the Control of Opium Smoking in the Far East, the 1936 Convention for the Suppression of the Illicit Traffic in Dangerous Drugs, among others. After World War II, a series of Protocols signed at Lake Success brought into the mandate of the newly created United Nations these pre-war treaties which had been handled by the League of Nations and the Office international d'hygiène publique.
In 1961 the nine previous drug-control treaties in force were superseded by the 1961 Single Convention, which rationalized global control on drug trading and use. Countries commit to "protecting the health and welfare of [hu]mankind" and to combat substance abuse and addiction. The treaty is not a self-enforcing agreement: countries have to pass their own legislation aligned with the framework of the Convention. The 1961 Convention was supplemented by the 1971 Convention and the 1988 Convention, forming the three international drug control treaties upon which other legal instruments rely. Their implementation has been led by the United States, in particular after the Nixon administration's declaration of "War on drugs" in 1971, and the creation of the Drug Enforcement Administration (DEA) as a U.S. federal law enforcement agency in 1973.
Since the early 2000s the European Union (EU) has developed several comprehensive and multidisciplinary strategies as part of its drug policy to prevent the diffusion of recreational drug use and abuse among the European population and to raise public awareness of the adverse effects of drugs across all member states of the European Union. It has also made conjoined efforts with European law enforcement agencies, such as Europol and the EMCDDA, to counter organized crime and the illegal drug trade in Europe.
=== Current treaties ===
The core drug control treaties currently in force internationally are:
the Single Convention on Narcotic Drugs, 1961 (1961 Convention or Single Convention) composed of:
the original Single Convention concluded at New York City (United States), 30 March 1961, and
its amendment, the Protocol amending the Single Convention on Narcotic Drugs, which was adopted in Geneva (Switzerland), 25 March 1972,
the Convention on Psychotropic Substances (1971 Convention), concluded at Vienna, 21 February 1971, and
the UN Convention against Illicit Traffic in Narcotic Drugs and Psychotropic Substances (1988 Convention) concluded at Vienna (Austria), 20 December 1988.
There are other treaties that address drugs under international control, such as:
the UN Convention on the Law of the Sea (UNCLOS), concluded on 10 December 1982 in Montego Bay (Jamaica),
the Convention on the Rights of the Child (CRC), concluded on 20 November 1989 in New York City,
the International Convention Against Doping in Sport concluded in Paris (France) on 19 October 2005.
Additionally, other pieces of international law enter into play, like the international human rights treaties protecting the right to health or the rights of indigenous peoples, and, in the case of plants considered as drug crops (coca plant, cannabis, opium poppy), treaties protecting the right to land, farmers' or peasants' rights, and treaties on plant genetic resources or traditional knowledge.
=== Treaty-mandated organizations ===
There are four bodies mandated under the international drug control conventions (1961, 1971 and 1988):
The Commission on Narcotic Drugs (CND), a subsidiary body of the United Nations ECOSOC, which acts as a Conference of the Parties to the three core Conventions,
the UN Secretary-General, whose mandate is de facto carried out by the United Nations Office on Drugs and Crime (UNODC),
the World Health Organization (WHO), in charge of the scientific review of substances for inclusion under, changes in, or withdrawal from control (scheduling assessment),
the International Narcotics Control Board (INCB), the treaty-body monitoring implementation and collecting statistical data.
== Drug policy by country ==
=== Australia ===
Australian drug laws are criminal laws and mostly exist at the state and territory level, not the federal level, and therefore differ between jurisdictions, which complicates any analysis of Australian trends and laws. The federal jurisdiction has enforcement powers over national borders.
In October 2016, Australia legislated for some medicinal use of cannabis.
=== Bolivia ===
Like Colombia, the Bolivian government signed onto the ATPA in 1991 and called for the forced eradication of the coca plant in the 1990s and early 2000s. Until 2004, the government allowed each residential family to grow 1,600 m² of coca crop, enough to provide the family with a monthly minimum wage. In 2005, Bolivia saw another reformist movement. The leader of a coca grower group, Evo Morales, was elected President in 2005. Morales ended the U.S.-backed War on Drugs. President Morales opposed the decriminalization of drugs but saw the coca crop as an important piece of indigenous history and a pillar of the community because of the traditional use of chewing coca leaves. In 2009, the Bolivian Constitution backed the legalization and industrialization of coca products.
Bolivia first proposed an amendment to the Single Convention on Narcotic Drugs in 2009. After its failure, Bolivia left the convention and re-acceded with a reservation for the coca leaf in its natural form.
=== Canada ===
=== China ===
=== Colombia ===
Under President Ronald Reagan, the United States declared War on Drugs in the late 1980s; the Colombian drug lords were widely viewed as the root of the cocaine issue in America. In the 1990s, Colombia was home to the world's two largest drug cartels: the Cali cartel and the Medellín cartel. It became Colombia's priority, as well as the priority of the other countries in the Andean Region, to extinguish the cartels and drug trafficking from the region. In 1999, under President Andrés Pastrana, Colombia passed Plan Colombia. Plan Colombia funded the Andean Region's fight against the drug cartels and drug trafficking. With the implementation of Plan Colombia, the Colombian government aimed to destroy the coca crop. This prohibitionist regime has had controversial results, especially on human rights. Colombia has seen a significant decrease in coca cultivation. In 2001, there were 362,000 acres of coca crop in Colombia; by 2011, fewer than 130,000 acres remained. However, farmers who cultivated the coca crop for uses other than for the creation of cocaine, such as the traditional use of chewing coca leaves, became impoverished.
Since 1994, consumption of drugs has been decriminalized. However, possession and trafficking of drugs are still illegal. In 2014, Colombia further eased its prohibitionist stance on the coca crop by ceasing aerial fumigation of the coca crop and creating programs for addicts. President Juan Manuel Santos (2010–2018), has called for the revision of Latin American drug policy, and was open to talks about legalization.
=== Ecuador ===
In the mid-1980s, under President León Febres-Cordero, Ecuador adopted the prohibitionist drug policy recommended by the United States. By cooperating with the United States, Ecuador received tariff exemptions from the United States. In February 1990, the United States held the Cartagena Drug Summit, in the hopes of continuing progress on the War on Drugs. Three of the four countries in the Andean Region were invited to the Summit: Peru, Colombia and Bolivia, with the notable absence of Ecuador. Two of those three countries—Colombia and Bolivia—joined the Andean Trade Preference Act, later called the Andean Trade Promotion and Drug Eradication Act, in 1992. Ecuador, along with Peru, would eventually join the ATPA in 1993. The Act united the region in the War on Drugs as well as stimulated their economies with tariff exemptions.
In 1991, President Rodrigo Borja Cevallos passed Law 108, a law that decriminalized drug use while continuing to prosecute drug possession. In reality, Law 108 set a trap that snared many citizens, who confused the legality of use with the illegality of carrying drugs on their person. This led to a large increase in prison populations, as all drug offenses were prosecuted. In 2007, 18,000 prisoners were kept in a prison built to hold up to 7,000. In urban regions of Ecuador as many as 45% of male inmates were serving time for drug charges; the figure rises to 80% among female inmates. In 2008, under Ecuador's new Constitution, prisoners serving time were allowed the "smuggler pardon" if they had been prosecuted for purchasing or carrying up to 2 kg of any drug and had already served 10% of their sentence. Later, in 2009, Law 108 was replaced by the Organic Penal Code (COIP). The COIP contains many of the same rules and regulations as Law 108, but it established clear distinctions among large, medium and small drug traffickers, as well as between the mafia and rural growers, and prosecutes accordingly. In 2013, the Ecuadorian government left the Andean Trade Promotion and Drug Eradication Act.
=== Germany ===
Compared with other EU countries, Germany's drug policy is considered progressive, but still stricter than, for example, the Netherlands. In 1994 the Federal Constitutional Court ruled that drug addiction was not a crime, nor was the possession of small amounts of drugs for personal use. In 2000, Germany changed the narcotic law ("BtmG") to allow supervised drug injection rooms. In 2002, they started a pilot study in seven German cities to evaluate the effects of heroin-assisted treatment on addicts, compared to methadone-assisted treatment. The positive results of the study led to the inclusion of heroin-assisted treatment into the services of the mandatory health insurance in 2009.
In 2017, Germany re-allowed medical cannabis; after the 2021 German federal election, the new government announced in their coalition agreement their intention to legalise cannabis for all other purposes (including recreational). This was implemented on 1 April 2024. Cannabis can be legally acquired from Cannabis Social Clubs, which charge periodic membership fees and are capped at 500 members as of 2024, or grown by consumers themselves, who may keep up to three plants.
=== India ===
=== Indonesia ===
Like many other governments in Southeast Asia, the Indonesian government applies severe laws to discourage drug use.
=== Liberia ===
Liberia prohibits drugs such as cocaine and marijuana. Its drug laws are enforced by the Liberia Drug Enforcement Agency.
=== Netherlands ===
Drug policy in the Netherlands is based on two principles: that drug use is a health issue, not a criminal issue, and that there is a distinction between hard and soft drugs. The Netherlands was also one of the first countries to introduce heroin-assisted treatment and safe injection sites. From 2008, a number of town councils have closed many so-called coffee shops that sold cannabis or imposed new restrictions on the sale of cannabis, e.g. to foreigners.
Importing and exporting of any classified drug is a serious offence. The penalty can run up to 12 to 16 years if it is for hard drugs, or a maximum of 4 years for importing or exporting large quantities of cannabis. Investment in treatment and prevention of drug addiction is high when compared to the rest of the world. The Netherlands spends significantly more per capita than all other countries in the EU on drug law enforcement. 75% of drug-related public spending is on law enforcement. Drug use remains at average Western European levels and slightly lower than in English speaking countries.
=== Peru ===
According to article 8 of the Constitution of Peru, the state is responsible for battling and punishing drug trafficking. Likewise, it regulates the use of intoxicants. Consumption of drugs is not penalized and possession is allowed for small quantities only. Production and distribution of drugs are illegal.
In 1993, Peru, along with Ecuador, signed the Andean Trade Preference Agreement with the United States, later replaced with the Andean Trade Promotion and Drug Eradication Act. Bolivia and Colombia had already signed the ATPA in 1991, and began enjoying its benefits in 1992. By agreeing to the terms of this Agreement, these countries worked in concert with the United States to fight drug trafficking and production at the source. The Act aimed to substitute the production of the coca plant with other agricultural products. In return for their efforts towards eradication of the coca plant, the countries were granted U.S. tariff exemptions on certain products, such as certain types of fruit. Peru ceased complying with the ATPA in 2012, and lost all tariff exemptions previously granted by the United States through the ATPA. By the end of 2012, Peru overtook Colombia as the world's largest cultivator of the coca plant.
=== Poland ===
=== Portugal ===
In July 2001, a law maintained the status of illegality for using or possessing any drug for personal use without authorization. The offense was, however, changed from a criminal one, with prison a possible punishment, to an administrative one if the amount possessed was no more than a ten-day supply of the substance. This was in line with the de facto Portuguese drug policy before the reform. Drug addicts were then aggressively targeted with therapy or community service rather than fines or waivers. Even though there are no criminal penalties, these changes did not legalize drug use in Portugal. Possession has remained prohibited by Portuguese law, and criminal penalties are still applied to drug growers, dealers and traffickers.
=== Russia ===
Drugs became popular in Russia among soldiers and the homeless, particularly due to the First World War. This included morphine-based drugs and cocaine, which were readily available. The government under Tsar Nicholas II of Russia had outlawed alcohol in 1914 (including vodka) as a temporary measure until the conclusion of the War. Following the Russian Revolution, and in particular the October Revolution and the Russian Civil War, the Bolsheviks emerged victorious as the new political power in Russia. The Soviet Union inherited a population with widespread drug addiction and, in the 1920s, tried to tackle it by introducing a 10-year prison sentence for drug dealers. The Bolsheviks also decided in August 1924 to re-introduce the sale of vodka, which, being more readily available, led to a drop in drug use.
=== Sweden ===
Sweden's drug policy has gradually turned from lenient in the 1960s, with an emphasis on the drug supply, towards a policy of zero tolerance against all illicit drug use (including cannabis). The official aim is a drug-free society. Drug use became a punishable crime in 1988. Personal use does not result in jail time if not combined with driving a car. Prevention includes widespread drug testing, and penalties range from fines for minor drug offenses up to a 10-year prison sentence for aggravated offenses. The conditions for a suspended sentence can include regular drug tests or submission to rehabilitation treatment. Drug treatment is free of charge and provided through the health care system and the municipal social services. Minors whose drug use threatens their health and development can be placed in mandatory treatment if they do not seek it voluntarily; the same can apply to adults whose drug use threatens their immediate health or the security of others (such as a child of an addict).
Among 9th year students, drug experimentation was highest in the early 1970s, falling towards a low in the late 1980s, redoubling in the 1990s to stabilize and slowly decline in the 2000s. Estimates of heavy drug addicts have risen from 6,000 in 1967 to 15,000 in 1979, 19,000 in 1992 and 26,000 in 1998. According to inpatient data, there were 28,000 such addicts in 2001 and 26,000 in 2004, but these last two figures may represent the recent trend in Sweden towards out-patient treatment of drug addicts rather than an actual decline in drug addictions.
The United Nations Office on Drugs and Crime (UNODC) reports that Sweden has one of the lowest drug use rates in the Western world, and attributes this to a drug policy that invests heavily in prevention and treatment as well as strict law enforcement. The general drug policy is supported by all political parties and, according to opinion polls made in the mid 2000s, the restrictive approach received broad support from the public at that time.
=== Switzerland ===
The national drug policy of Switzerland was developed in the early 1990s and comprises the four elements of prevention, therapy, harm reduction and prohibition. In 1994 Switzerland was one of the first countries to try heroin-assisted treatment and other harm reduction measures like supervised injection rooms. In 2008 a popular initiative by the right-wing Swiss People's Party aimed at ending the heroin program was rejected by more than two-thirds of the voters. A simultaneous initiative aimed at legalizing marijuana was rejected at the same ballot.
Between 1987 and 1992, illegal drug use and sales were permitted in Platzspitz park, Zurich, in an attempt to counter the growing heroin problem. However, as the situation grew increasingly out of control, authorities were forced to close the park.
In 2022, Switzerland initiated pilot trials for the non-medical use of cannabis.
=== Thailand ===
Thailand has a strict drug policy. The use, storage, transportation and distribution of drugs are illegal. In 2021, Thailand unified all of its laws on narcotics, psychoactive substances, and inhalants into the Narcotic Code 2564 BE (2021 AD), with a more relaxed policy. Sentences for many narcotics-related criminal offenses were reduced, as the new law focuses more on drug rehabilitation. According to the Narcotic Code, narcotic substances are divided into 5 categories.
Category I – highly addictive narcotics such as heroin, amphetamines, methamphetamines, etc.
Category II – highly addictive narcotics with medical uses such as morphine, cocaine, ketamine, codeine, medicinal opium (opium extracts or products), etc.
Category III – drug formularies that legally contain category II narcotics, etc.
Category IV – chemicals used for synthesizing category I and II narcotics, such as acetic anhydride, acetyl chloride, etc.
Category V – narcotic plants such as opium poppy, magic mushrooms, cannabis extracts with THC higher than 0.2% by weight, and cannabis seed extracts.
Under the current law, kratom and the cannabis plant no longer belong to category V and are no longer considered narcotic plants. However, cultivation, possession, distribution, and use of these plants are still controlled by certain levels of permission and regulation.
It is also illegal to import more than 200 cigarettes per person into Thailand. Control takes place at customs at the airport. If the limit has been exceeded, the owner can be fined up to ten times the cost of the cigarettes.
In January 2018, Thai authorities imposed a ban on smoking on beaches in some tourist areas. Those who smoke in public places can be punished with a fine of 100,000 Baht or imprisonment for up to one year. It is forbidden to import electronic cigarettes into Thailand; these items are likely to be confiscated, and offenders can be fined or imprisoned for up to 10 years. The sale or supply of electronic cigarettes and similar devices is also prohibited and is punishable by a fine or imprisonment of up to 5 years.
Most people arrested for possessing small amounts of category V substances are fined rather than imprisoned. At present, Thai anti-drug police regard methamphetamines as a more serious and dangerous problem.
On 9 February 2024, the Public Health Ministry published possession limits for many illicit drugs: a person found with less than the limit is referred to a rehabilitation program instead of being imprisoned. This marks another progressive step in Thailand's drug policy.
=== Ukraine ===
Crimes in the sphere of trafficking in narcotic, psychotropic substances and crimes against health are classified using the 13th section of the Criminal Code of Ukraine; articles from 305 to 327.
According to official statistics for 2016, 53% of crimes in the field of drugs fall on art. 309 of the Criminal Code of Ukraine: "illegal production, manufacture, acquisition, storage, transportation or shipment of narcotic drugs, psychotropic substances or their analogues without the purpose of sale".
Sentence for crime:
fine of fifty to one hundred non-taxable minimum incomes of citizens;
or correctional labor for up to two years;
or arrest for up to six months, or restriction of liberty for up to three years;
or imprisonment for the same term.
On 28 August 2013, the Cabinet of Ministers of Ukraine adopted a strategy for state drug policy until 2020, the first document of its kind in Ukraine. The strategy, developed by the State Drug Control Service, involves strengthening criminal liability for distributing large amounts of drugs and easing the penalty for possession of small doses. Under this strategy, the government planned to reduce the number of injecting drug users by 20% and the number of drug overdose deaths by 30% by 2020.
In October 2018, the State Service of Ukraine on Drugs and Drug Control issued the first license for the import and re-export of raw materials and products derived from cannabis. The corresponding licenses were obtained by the US company C21, which is also in the process of applying for additional licenses, including for the cultivation of hemp.
=== United Kingdom ===
Drugs considered addictive or dangerous in the United Kingdom (with the exception of tobacco and alcohol) are called "controlled substances" and regulated by law. Until 1964 the medical treatment of dependent drug users was separated from the punishment of unregulated use and supply. This arrangement was confirmed by the Rolleston Committee in 1926. This policy on drugs, known as the "British system", was maintained in Britain, and nowhere else, until the 1960s. Under this policy drug use remained low; there was relatively little recreational use and few dependent users, who were prescribed drugs by their doctors as part of their treatment. From 1964 drug use was increasingly criminalised, with the framework still in place as of 2014 largely determined by the 1971 Misuse of Drugs Act.
=== United States ===
Modern US drug policy still has roots in the war on drugs started by president Richard Nixon in 1971.
In the United States, illegal drugs fall into different categories and punishment for possession and dealing varies on amount and type. Punishment for marijuana possession is light in most states, but punishment for dealing and possession of hard drugs can be severe, and has contributed to the growth of the prison population.
US drug policy is also heavily invested in foreign policy, supporting military and paramilitary actions in South America, Central Asia, and other places to eradicate the growth of coca and opium. In Colombia, U.S. president Bill Clinton dispatched military and paramilitary personnel to interdict the planting of coca, as a part of the Plan Colombia. The project is often criticized for its ineffectiveness and its negative impact on local farmers, but it has been effective in destroying the once-powerful drug cartels and guerrilla groups of Colombia. President George W. Bush intensified anti-drug efforts in Mexico, initiating the Mérida Initiative, but has faced criticisms for similar reasons.
On 21 May 2012, the U.S. government published an updated version of its drug policy.
The director of the ONDCP stated simultaneously that this policy is something different from the "War on Drugs":
The U.S. government sees the policy as a "third way" approach to drug control, one that is based on the results of a huge investment in research from some of the world's preeminent scholars on the disease of substance abuse.
The policy does not see drug legalization as the "silver bullet" solution to drug control.
It is not a policy where success is measured by the number of arrests made or prisons built.
The U.S. government provides grants to develop and disseminate evidence-based addiction treatments. These grants have supported several practices that NIDA endorses, such as the community reinforcement approach and community reinforcement and family training, which are behavior therapy interventions.
== See also ==
== References ==
In statistics, a fractional factorial design is a way to conduct experiments with fewer experimental runs than a full factorial design. Instead of testing every single combination of factors, it tests only a carefully selected portion. This "fraction" of the full design is chosen to reveal the most important information about the system being studied (sparsity-of-effects principle), while significantly reducing the number of runs required. It is based on the idea that many tests in a full factorial design can be redundant. However, this reduction in runs comes at the cost of potentially more complex analysis, as some effects can become intertwined, making it impossible to isolate their individual influences. Therefore, choosing which combinations to test in a fractional factorial design must be done carefully.
== History ==
Fractional factorial design was introduced by British statistician David John Finney in 1945, extending previous work by Ronald Fisher on the full factorial experiment at Rothamsted Experimental Station. Developed originally for agricultural applications, it has since been applied to other areas of engineering, science, and business.
== Basic working principle ==
Similar to a full factorial experiment, a fractional factorial experiment investigates the effects of independent variables, known as factors, on a response variable. Each factor is investigated at different values, known as levels. The response variable is measured using a combination of factors at different levels, and each unique combination is known as a run. To reduce the number of runs relative to a full factorial, the experiments are designed to confound different effects and interactions so that their impacts cannot be distinguished. Higher-order interactions among factors are typically negligible, making this a reasonable way to study the main effects. This is the sparsity-of-effects principle. Confounding is controlled by a systematic selection of runs from a full-factorial table.
== Notation ==
Fractional designs are expressed using the notation lk − p, where l is the number of levels of each factor, k is the number of factors, and p describes the size of the fraction of the full factorial used. Formally, p is the number of generators: relationships that determine the intentionally confounded effects that reduce the number of runs needed. In a two-level design, each generator halves the number of runs required. A design with p such generators is a 1/(lp) = l−p fraction of the full factorial design.
For example, a 25 − 2 design is 1/4 of a two-level, five-factor factorial design. Rather than the 32 runs that would be required for the full 25 factorial experiment, this experiment requires only eight runs. With two generators, the number of experiments has been halved twice.
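To make the construction concrete, here is a minimal Python sketch of such a 25 − 2 design. The generator pair D = AB, E = AC is one common choice assumed here for illustration; the text above does not fix a particular pair, and any valid set of generators works the same way:

```python
from itertools import product

def build_2_5_2_design():
    """Build a 2^(5-2) fractional factorial: 8 runs for 5 factors."""
    runs = []
    # Start from the full 2^3 factorial in the base factors A, B, C
    # (levels coded as -1 and +1), then derive D and E from the generators.
    for a, b, c in product((-1, 1), repeat=3):
        d = a * b  # generator D = AB (assumed choice)
        e = a * c  # generator E = AC (assumed choice)
        runs.append((a, b, c, d, e))
    return runs

design = build_2_5_2_design()
print(len(design))  # 8 runs instead of the 2^5 = 32 of the full factorial
```

Each of the two generators cuts the run count in half, so the eight runs form a quarter fraction of the 32-run full factorial; the price is that the main effect of D is confounded with the AB interaction, and E with AC.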
In practice, one rarely encounters l > 2 levels in fractional factorial designs as the methodology to generate such designs for more than two levels is much more cumbersome. In cases requiring 3 levels for each factor, potential fractional designs to pursue are Latin squares, mutually orthogonal Latin squares, and Taguchi methods. Response surface methodology can also be a much more experimentally efficient way to determine the relationship between the experimental response and factors at multiple levels, but it requires that the levels are continuous. In determining whether more than two levels are needed, experimenters should consider whether they expect the outcome to be nonlinear with the addition of a third level. Another consideration is the number of factors, which can significantly change the experimental labor demand.
The levels of a factor are commonly coded as +1 for the higher level, and −1 for the lower level. For a three-level factor, the intermediate value is coded as 0.
To save space, the points in a factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, − for the first (or low) level, and + for the second (or high) level. The points in a two-level experiment with two factors can thus be represented as −−, +−, −+, and ++.
The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is at its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values. Factorial points are typically arranged in a table using Yates' standard order: (1), a, b, ab, c, ac, bc, abc, in which the level of the first factor alternates with each run.
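Yates' standard order can be generated mechanically: in run i, factor j is at its high level exactly when bit j of i is set, so the first factor alternates fastest. A minimal sketch:

```python
def yates_order(k: int) -> list[str]:
    """Factorial points for k two-level factors in Yates' standard order."""
    letters = "abcdefghij"[:k]
    points = []
    for i in range(2 ** k):
        # factor j is at its high level when bit j of run index i is set,
        # so the first factor (a) alternates with every run
        label = "".join(letters[j] for j in range(k) if (i >> j) & 1)
        points.append(label or "(1)")
    return points
```

For three factors this reproduces the order listed above: (1), a, b, ab, c, ac, bc, abc.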
== Generation ==
In practice, experimenters typically rely on statistical reference books to supply the "standard" fractional factorial designs, consisting of the principal fraction. The principal fraction is the set of treatment combinations for which the generators evaluate to + under the treatment combination algebra. However, in some situations, experimenters may take it upon themselves to generate their own fractional design.
A fractional factorial experiment is generated from a full factorial experiment by choosing an alias structure. The alias structure determines which effects are confounded with each other. For example, the five-factor 2^(5 − 2) design can be generated by starting from a full factorial experiment in three factors (say A, B, and C) and then choosing to confound the two remaining factors D and E with the interactions generated by D = A*B and E = A*C. These two expressions are called the generators of the design. So, for example, when the experiment is run and the experimenter estimates the effects for factor D, what is really being estimated is a combination of the main effect of D and the two-factor interaction involving A and B.
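This construction can be sketched directly in Python, with factor levels coded −1/+1 as in the notation section: build the full 2^3 factorial in A, B, and C, then derive D and E from the generators.

```python
from itertools import product

# Build the 2^(5-2) design from the text: a full 2^3 factorial in A, B, C,
# with the remaining factors set by the generators D = A*B and E = A*C.
design = []
for a, b, c in product([-1, 1], repeat=3):
    d = a * b   # generator D = AB
    e = a * c   # generator E = AC
    design.append((a, b, c, d, e))
```

Only 8 of the 32 possible factor combinations appear, and in every run the products A·B·D and A·C·E equal +1, which is exactly the defining relation of the design.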
An important characteristic of a fractional design is the defining relation, which gives the set of interaction columns equal in the design matrix to a column of plus signs, denoted by I. For the above example, since D = AB and E = AC, then ABD and ACE are both columns of plus signs, and consequently so is BCDE:
I = D*D = AB*D = ABD
I = E*E = AC*E = ACE
I = ABD*ACE = A*A*BCDE = BCDE
In this case, the defining relation of the fractional design is I = ABD = ACE = BCDE. The defining relation allows the alias pattern of the design to be determined and includes 2p words. Notice that in this case, the interaction effects ABD, ACE, and BCDE cannot be studied at all. As the number of generators and the degree of fractionation increases, more and more effects become confounded.
The alias pattern can then be determined through multiplying by each factor column. To determine how main effect A is confounded, multiply all terms in the defining relation by A:
A*I = A*ABD = A*ACE = A*BCDE
A = BD = CE = ABCDE
Thus main effect A is confounded with the interaction effects BD, CE, and ABCDE. Other main effects can be computed following a similar method.
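This word algebra is easy to mechanize: since any squared letter cancels (A*A = I), the product of two effect words is the symmetric difference of their letter sets. A short sketch that reproduces the alias chain for main effect A:

```python
def multiply(w1: str, w2: str) -> str:
    """Product of two effect words: squared letters cancel (A*A = I)."""
    return "".join(sorted(set(w1) ^ set(w2)))

# Defining relation of the 2^(5-2) example: I = ABD = ACE = BCDE
defining_words = ["ABD", "ACE", multiply("ABD", "ACE")]   # third word is BCDE
aliases_of_a = [multiply("A", w) for w in defining_words]  # BD, CE, ABCDE
```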
== Resolution ==
An important property of a fractional design is its resolution, or ability to separate main effects and low-order interactions from one another. Formally, if the factors are binary, the resolution of the design is the minimum word length in the defining relation, excluding I. Resolution is denoted using Roman numerals, and it increases with the number. The most important fractional designs are those of resolution III, IV, and V: resolutions below III are not useful, and resolutions above V are wasteful (with binary factors) in that the expanded experimentation has no practical benefit in most cases, since the bulk of the additional effort goes into the estimation of very high-order interactions, which rarely occur in practice. The 2^(5 − 2) design above is resolution III since its defining relation is I = ABD = ACE = BCDE, whose shortest words have length three.
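For a regular two-level design, the resolution can therefore be read off directly as the shortest word length in the defining relation; a one-line sketch:

```python
def resolution(defining_words: list[str]) -> int:
    """Resolution of a regular two-level design: length of the shortest
    word in the defining relation (excluding I)."""
    return min(len(word) for word in defining_words)

res = resolution(["ABD", "ACE", "BCDE"])  # the 2^(5-2) design: resolution III
```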
The resolution classification system described above applies only to regular designs. Regular designs have run sizes that equal a power of two, and only full aliasing is present. Non-regular designs, such as Plackett–Burman designs, are designs whose run size is a multiple of 4; these designs introduce partial aliasing, and generalized resolution is used as the design criterion instead of the resolution described previously.
Resolution III designs can be used to construct saturated designs, where N-1 factors can be investigated in only N runs. These saturated designs can be used for quick screening when many factors are involved.
== Example fractional factorial experiment ==
Montgomery gives the following example of a fractional factorial experiment. An engineer performed an experiment to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. The full factorial experiment is described in the Wikipedia page Factorial experiment. Four factors were considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). The results in that example were that the main effects A, C, and D and the AC and AD interactions were significant. The results of that example may be used to simulate a fractional factorial experiment using a half-fraction of the original 2^4 = 16 run design. The table shows the 2^(4 − 1) = 8 run half-fraction experiment design and the resulting filtration rate, extracted from the table for the full 16 run factorial experiment.
In this fractional design, each main effect is aliased with a 3-factor interaction (e.g., A = BCD), and every 2-factor interaction is aliased with another 2-factor interaction (e.g., AB = CD). The aliasing relationships are shown in the table. This is a resolution IV design, meaning that main effects are aliased with 3-way interactions, and 2-way interactions are aliased with 2-way interactions.
The analysis of variance estimates of the effects are shown in the table below. From inspection of the table, there appear to be large effects due to A, C, and D. The coefficient for the AB interaction is quite small. Unless the AB and CD interactions have approximately equal but opposite effects, these two interactions appear to be negligible. If A, C, and D have large effects, but B has little effect, then the AC and AD interactions are most likely significant. These conclusions are consistent with the results of the full-factorial 16-run experiment.
Because B and its interactions appear to be insignificant, B may be dropped from the model. Dropping B results in a full factorial 2^3 design for the factors A, C, and D. Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives the results shown in the table, which are very similar to the results for the full factorial experiment, but have the advantage of requiring only 8 runs (a half-fraction) rather than 16.
== External links ==
Fractional Factorial Designs (National Institute of Standards and Technology)
== See also ==
Robust parameter designs
== References == | Wikipedia/Fractional_factorial_design |
The phases of clinical research are the stages in which scientists conduct experiments with a health intervention to obtain sufficient evidence for a process considered effective as a medical treatment. For drug development, the clinical phases start with testing for drug safety in a few human subjects, then expand to many study participants (potentially tens of thousands) to determine if the treatment is effective. Clinical research is conducted on drug candidates, vaccine candidates, new medical devices, and new diagnostic assays.
== Description ==
Clinical trials testing potential medical products are commonly classified into four phases. The drug development process will normally proceed through all four phases over many years. When a specific phase is referred to, it is capitalized and written with a Roman numeral, as in a "Phase I" clinical trial.
If the drug successfully passes through Phases I, II, and III, it will usually be approved by the national regulatory authority for use in the general population. Phase IV trials are 'post-marketing' or 'surveillance' studies conducted to monitor safety over several years.
== Preclinical studies ==
Before clinical trials are undertaken for a candidate drug, vaccine, medical device, or diagnostic assay, the product candidate is tested extensively in preclinical studies. Such studies involve in vitro (test tube or cell culture) and in vivo (animal model) experiments using wide-ranging doses of the study agent to obtain preliminary efficacy, toxicity and pharmacokinetic information. Such tests assist the developer to decide whether a drug candidate has scientific merit for further development as an investigational new drug.
== Phase 0 ==
Phase 0 is a designation for optional exploratory trials, originally introduced by the United States Food and Drug Administration's (FDA) 2006 Guidance on Exploratory Investigational New Drug (IND) Studies, but now generally adopted as standard practice. Phase 0 trials are also known as human microdosing studies and are designed to speed up the development of promising drugs or imaging agents by establishing very early on whether the drug or agent behaves in human subjects as was expected from preclinical studies. Distinctive features of Phase 0 trials include the administration of single subtherapeutic doses of the study drug to a small number of subjects (10 to 15) to gather preliminary data on the agent's pharmacokinetics (what the body does to the drugs).
A Phase 0 study gives no data on safety or efficacy, being by definition a dose too low to cause any therapeutic effect. Drug development companies carry out Phase 0 studies to rank drug candidates to decide which has the best pharmacokinetic parameters in humans to take forward into further development. They enable go/no-go decisions to be based on relevant human models instead of relying on sometimes inconsistent animal data.
== Phase I ==
Phase I trials were formerly referred to as "first-in-man studies"; the field generally moved to the gender-neutral phrase "first-in-humans" in the 1990s. These trials are the first stage of testing in human subjects. They are designed to test the safety, side effects, best dose, and formulation method for the drug. Phase I trials are not randomized, and thus are vulnerable to selection bias.
Normally, a small group of 20–100 healthy volunteers will be recruited. These trials are often conducted in a clinical trial clinic, where the subject can be observed by full-time staff. These clinical trial clinics are often run by contract research organizations (CROs) who conduct these studies on behalf of pharmaceutical companies or other research investigators.
The subject who receives the drug is usually observed until several half-lives of the drug have passed. This phase is designed to assess the safety (pharmacovigilance), tolerability, pharmacokinetics, and pharmacodynamics of a drug. Phase I trials normally include dose-ranging, also called dose escalation studies, so that the best and safest dose can be found and to discover the point at which a compound is too poisonous to administer. The tested range of doses will usually be a fraction of the dose that caused harm in animal testing.
Phase I trials most often include healthy volunteers. However, there are some circumstances when clinical patients are used, such as patients who have terminal cancer or HIV and the treatment is likely to make healthy individuals ill. These studies are usually conducted in tightly controlled clinics called Central Pharmacological Units, where participants receive 24-hour medical attention and oversight. In addition to the previously mentioned unhealthy individuals, "patients who have typically already tried and failed to improve on the existing standard therapies" may also participate in Phase I trials. Volunteers are paid a variable inconvenience fee for their time spent in the volunteer center.
Before beginning a Phase I trial, the sponsor must submit an Investigational New Drug application to the FDA detailing the preliminary data on the drug gathered from cellular models and animal studies.
Phase I trials can be further divided:
=== Phase Ia ===
Single ascending dose (Phase Ia): In single ascending dose studies, small groups of subjects are given a single dose of the drug while they are observed and tested for a period of time to confirm safety. Typically, a small number of participants, usually three, are entered sequentially at a particular dose. If they do not exhibit any adverse side effects, and the pharmacokinetic data are roughly in line with predicted safe values, the dose is escalated, and a new group of subjects is then given a higher dose.
If unacceptable toxicity is observed in any of the three participants, an additional number of participants, usually three, are treated at the same dose. This is continued until pre-calculated pharmacokinetic safety levels are reached, or intolerable side effects start showing up (at which point the drug is said to have reached the maximum tolerated dose (MTD)). If an additional unacceptable toxicity is observed, then the dose escalation is terminated and that dose, or perhaps the previous dose, is declared to be the maximally tolerated dose. This particular design assumes that the maximally tolerated dose occurs when approximately one-third of the participants experience unacceptable toxicity. Variations of this design exist, but most are similar.
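The escalation rule described above (often called the "3+3" design) can be sketched as a simulation. The per-dose toxicity probabilities below are hypothetical inputs for illustration, not data from any trial.

```python
import random

def three_plus_three(tox_prob, seed=0):
    """Sketch of 3+3 dose escalation over increasing doses.

    tox_prob: assumed probability of unacceptable toxicity at each dose.
    Returns the index of the highest tolerated dose, or None if even
    the first dose proves too toxic.
    """
    rng = random.Random(seed)
    mtd = None
    for dose, p in enumerate(tox_prob):
        toxicities = sum(rng.random() < p for _ in range(3))  # cohort of 3
        if toxicities == 1:
            # one toxicity: treat three more participants at the same dose
            toxicities += sum(rng.random() < p for _ in range(3))
        if toxicities <= 1:
            mtd = dose   # dose tolerated; escalate to the next group
        else:
            break        # additional toxicity: stop the escalation
    return mtd
```

This mirrors the design's implicit threshold: escalation stops when roughly one-third or more of a cohort experiences unacceptable toxicity.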
=== Phase Ib ===
Multiple ascending dose (Phase Ib): Multiple ascending dose studies investigate the pharmacokinetics and pharmacodynamics of multiple doses of the drug, looking at safety and tolerability. In these studies, a group of patients receives multiple low doses of the drug, while samples (of blood, and other fluids) are collected at various time points and analyzed to acquire information on how the drug is processed within the body. The dose is subsequently escalated for further groups, up to a predetermined level.
=== Food effect ===
A short trial designed to investigate any differences in absorption of the drug by the body, caused by eating before the drug is given. These studies are usually run as a crossover study, with volunteers being given two identical doses of the drug while fasted, and after being fed.
== Phase II ==
Once a dose or range of doses is determined, the next goal is to evaluate whether the drug has any biological activity or effect. Phase II trials are performed on larger groups (50–300 individuals) and are designed to assess how well the drug works, as well as to continue Phase I safety assessments in a larger group of volunteers and patients. Genetic testing is common, particularly when there is evidence of variation in metabolic rate. When the development process for a new drug fails, this usually occurs during Phase II trials when the drug is discovered not to work as planned, or to have toxic effects.
Phase II studies are sometimes divided into Phase IIa and Phase IIb. There is no formal definition for these two sub-categories, but generally:
Phase IIa studies are usually pilot studies designed to find an optimal dose and assess safety ('dose finding' studies).
Phase IIb studies determine how well the drug works in subjects at a given dose to assess efficacy ('proof of concept' studies).
=== Trial design ===
Some Phase II trials are designed as case series, demonstrating a drug's safety and activity in a selected group of participants. Other Phase II trials are designed as randomized controlled trials, where some patients receive the drug/device and others receive placebo/standard treatment. Randomized Phase II trials have far fewer patients than randomized Phase III trials.
==== Example: cancer design ====
In the first stage, the investigator attempts to rule out drugs that have no or little biologic activity. For example, the researcher may specify that a drug must have some minimal level of activity, say, in 20% of participants. If the estimated activity level is less than 20%, the researcher chooses not to consider this drug further, at least not at that maximally tolerated dose. If the estimated activity level exceeds 20%, the researcher will add more participants to get a better estimate of the response rate. A typical study for ruling out a 20% or lower response rate enters 14 participants. If no response is observed in the first 14 participants, the drug is considered not likely to have a 20% or higher activity level. The number of additional participants added depends on the degree of precision desired, but ranges from 10 to 20. Thus, a typical cancer phase II study might include fewer than 30 people to estimate the response rate.
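The choice of 14 participants follows from a simple binomial calculation: if the drug's true response rate were 20%, zero responses among 14 patients would occur only about 4% of the time, so observing none is strong evidence against that level of activity. A quick check:

```python
# Probability of observing zero responses in 14 participants
# when the true response rate is 20%
p_zero = (1 - 0.20) ** 14
# roughly 0.044: about a 4.4% chance of wrongly discarding a drug
# that is in fact active at the 20% level
```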
==== Efficacy vs effectiveness ====
When a study assesses efficacy, it is looking at whether the drug given in the specific manner described in the study is able to influence an outcome of interest (e.g. tumor size) in the chosen population (e.g. cancer patients with no other ongoing diseases). When a study is assessing effectiveness, it is determining whether a treatment will influence the disease. In an effectiveness study, it is essential that participants are treated as they would be when the treatment is prescribed in actual practice. That would mean that there should be no aspects of the study designed to increase compliance above those that would occur in routine clinical practice. The outcomes in effectiveness studies are also more generally applicable than in most efficacy studies (for example does the patient feel better, come to the hospital less or live longer in effectiveness studies as opposed to better test scores or lower cell counts in efficacy studies). There is usually less rigid control of the type of participant to be included in effectiveness studies than in efficacy studies, as the researchers are interested in whether the drug will have a broad effect in the population of patients with the disease.
=== Success rate ===
Phase II clinical programs historically have experienced the lowest success rate of the four development phases. In 2010, the percentage of Phase II trials that proceeded to Phase III was 18%, and only 31% of developmental candidates advanced from Phase II to Phase III in a study of trials over 2006–2015.
== Phase III ==
This phase is designed to assess the effectiveness of the new intervention and, thereby, its value in clinical practice. Phase III studies are randomized controlled multicenter trials on large patient groups (300–3,000 or more depending upon the disease/medical condition studied) and are aimed at being the definitive assessment of how effective the drug is, in comparison with current 'gold standard' treatment. Because of their size and comparatively long duration, Phase III trials are the most expensive, time-consuming and difficult trials to design and run, especially in therapies for chronic medical conditions. Phase III trials of chronic conditions or diseases often have a short follow-up period for evaluation, relative to the period of time the intervention might be used in practice. This is sometimes called the "pre-marketing phase" because it actually measures consumer response to the drug.
It is common practice that certain Phase III trials will continue while the regulatory submission is pending at the appropriate regulatory agency. This allows patients to continue to receive possibly lifesaving drugs until the drug can be obtained by purchase. Other reasons for performing trials at this stage include attempts by the sponsor at "label expansion" (to show the drug works for additional types of patients/diseases beyond the original use for which the drug was approved for marketing), to obtain additional safety data, or to support marketing claims for the drug. Studies in this phase are by some companies categorized as "Phase IIIB studies."
While not required in all cases, it is typically expected that there be at least two successful Phase III trials, demonstrating a drug's safety and efficacy, to obtain approval from the appropriate regulatory agencies such as FDA (US), or the EMA (European Union).
Once a drug has proved satisfactory after Phase III trials, the trial results are usually combined into a large document containing a comprehensive description of the methods and results of human and animal studies, manufacturing procedures, formulation details, and shelf life. This collection of information makes up the "regulatory submission" that is provided for review to the appropriate regulatory authorities in different countries. They will review the submission, and if it is acceptable, give the sponsor approval to market the drug.
Most drugs undergoing Phase III clinical trials can be marketed under FDA norms with proper recommendations and guidelines through a New Drug Application (NDA) containing all manufacturing, preclinical, and clinical data. In case of any adverse effects being reported anywhere, the drugs need to be recalled immediately from the market. While most pharmaceutical companies refrain from this practice, it is not unusual for drugs still undergoing Phase III clinical trials to be available on the market.
=== Adaptive design ===
The design of individual trials may be altered during a trial – usually during Phase II or III – to accommodate interim results for the benefit of the treatment, adjust statistical analysis, or to reach early termination of an unsuccessful design, a process called an "adaptive design". Examples are the 2020 World Health Organization Solidarity trial, European Discovery trial, and UK RECOVERY Trial of hospitalized people with severe COVID-19 infection, each of which applies adaptive designs to rapidly alter trial parameters as results from the experimental therapeutic strategies emerge.
Adaptive designs within ongoing Phase II–III clinical trials on candidate therapeutics may shorten trial durations and use fewer subjects, possibly expediting decisions for early termination or success, and coordinating design changes for a specific trial across its international locations.
=== Success rate ===
For vaccines, the probability of success ranges from 7% for non-industry-sponsored candidates to 40% for industry-sponsored candidates.
A 2019 review of average success rates of clinical trials at different phases and diseases over the years 2005–15 found a success range of 5–14%. Separated by diseases studied, cancer drug trials were on average only 3% successful, whereas ophthalmology drugs and vaccines for infectious diseases were 33% successful. Trials using disease biomarkers, especially in cancer studies, were more successful than those not using biomarkers.
A 2010 review found about 50% of drug candidates either fail during the Phase III trial or are rejected by the national regulatory agency.
== Cost of trials by phases ==
In the early 21st century, a typical Phase I trial conducted at a single clinic in the United States ranged from $1.4 million for pain or anesthesia studies to $6.6 million for immunomodulation studies. Main expense drivers were operating and clinical monitoring costs of the Phase I site.
The amount of money spent on Phase II or III trials depends on numerous factors, with therapeutic area being studied and types of clinical procedures as key drivers. Phase II studies may cost as low as $7 million for cardiovascular projects, and as much as $20 million for hematology trials.
Phase III trials for dermatology may cost as low as $11 million, whereas a pain or anesthesia Phase III trial may cost as much as $53 million. An analysis of Phase III pivotal trials leading to 59 drug approvals by the US Food and Drug Administration over 2015–16 showed that the median cost was $19 million, but some trials involving thousands of subjects may cost 100 times more.
Across all trial phases, the main expenses for clinical trials were administrative staff (about 20% of the total), clinical procedures (about 19%), and clinical monitoring of the subjects (about 11%).
== Phase IV ==
A Phase IV trial is also known as a postmarketing surveillance trial or drug monitoring trial to assure long-term safety and effectiveness of the drug, vaccine, device or diagnostic test. Phase IV trials involve the safety surveillance (pharmacovigilance) and ongoing technical support of a drug after it receives regulatory approval to be sold. Phase IV studies may be required by regulatory authorities or may be undertaken by the sponsoring company for competitive (finding a new market for the drug) or other reasons (for example, the drug may not have been tested for interactions with other drugs, or on certain population groups such as pregnant women, who are unlikely to subject themselves to trials). The safety surveillance is designed to detect any rare or long-term adverse effects over a much larger patient population and longer time period than was possible during the Phase I-III clinical trials. Harmful effects discovered by Phase IV trials may result in a drug being withdrawn from the market or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx).
== Overall cost ==
The entire process of developing a drug from preclinical research to marketing can take approximately 12 to 18 years and often costs well over $1 billion.
== References == | Wikipedia/Phase_III_clinical_trials |
A medical guideline (also called a clinical guideline, standard treatment guideline, or clinical practice guideline) is a document with the aim of guiding decisions and criteria regarding diagnosis, management, and treatment in specific areas of healthcare. Such documents have been in use for thousands of years during the entire history of medicine. However, in contrast to previous approaches, which were often based on tradition or authority, modern medical guidelines are based on an examination of current evidence within the paradigm of evidence-based medicine. They usually include summarized consensus statements on best practice in healthcare. A healthcare provider is obliged to know the medical guidelines of their profession, and has to decide whether to follow the recommendations of a guideline for an individual treatment.
== Background ==
Modern clinical guidelines identify, summarize and evaluate the highest quality evidence and most current data about prevention, diagnosis, prognosis, therapy including dosage of medications, risk/benefit and cost-effectiveness. Then they define the most important questions related to clinical practice and identify all possible decision options and their outcomes. Some guidelines contain decision or computation algorithms to be followed. Thus, they integrate the identified decision points and respective courses of action with the clinical judgement and experience of practitioners. Many guidelines place the treatment alternatives into classes to help providers in deciding which treatment to use.
Additional objectives of clinical guidelines are to standardize medical care, to raise quality of care, to reduce several kinds of risk (to the patient, to the healthcare provider, to medical insurers and health plans) and to achieve the best balance between cost and medical parameters such as effectiveness, specificity, sensitivity, resoluteness, etc. It has been demonstrated repeatedly that the use of guidelines by healthcare providers such as hospitals is an effective way of achieving the objectives listed above, although they are not the only ones.
== Publication ==
Guidelines are usually produced at national or international levels by medical associations or governmental bodies, such as the United States Agency for Healthcare Research and Quality. Local healthcare providers may produce their own set of guidelines or adapt them from existing top-level guidelines. Healthcare payers such as insurers practicing utilization management also publish guidelines.
Special computer software packages known as guideline execution engines have been developed to facilitate the use of medical guidelines in concert with an electronic medical record system.
The Guideline Interchange Format (GLIF) is a computer representation format for clinical guidelines that can be used with such engines.
The US and other countries maintain medical guideline clearinghouses. In the US, the National Guideline Clearinghouse maintains a catalog of high-quality guidelines published by various health and medical associations. In the United Kingdom, clinical practice guidelines are published primarily by the National Institute for Health and Care Excellence (NICE). In The Netherlands, two bodies—the Institute for Healthcare Improvement (CBO) and College of General Practitioners (NHG)—have published specialist and primary care guidelines, respectively. In Germany, the German Agency for Quality in Medicine (ÄZQ) coordinates a national program for disease management guidelines. All these organisations are now members of the Guidelines International Network (G-I-N), an international network of organisations and individuals involved in clinical practice guidelines.
== Compliance ==
Checklists have been used in medical practice to attempt to ensure that clinical practice guidelines are followed. An example is the Surgical Safety Checklist developed for the World Health Organization by Dr. Atul Gawande. According to a meta-analysis, after introduction of the checklist mortality dropped by 23% and all complications by 40%, but further high-quality studies are required to make the meta-analysis more robust. In the UK, a study on the implementation of a checklist for provision of medical care to elderly patients admitted to hospital found that the checklist highlighted limitations with frailty assessment in acute care and motivated teams to review routine practices, but that work is needed to understand whether and how checklists can be embedded in complex multidisciplinary care.
== Problems ==
Guidelines may lose their clinical relevance as they age and newer research emerges. As many as 20% of strong recommendations in practice guidelines, especially those based on opinion rather than on trials, may later be retracted.
The New York Times reported in 2004 that some simple clinical practice guidelines are not routinely followed to the extent they might be. It has been found that providing a nurse or other medical assistant with a checklist of recommended procedures can result in the attending physician being reminded in a timely manner regarding procedures that might have been overlooked.
Guidelines may have both methodological problems and conflicts of interest. As such, the quality of guidelines may vary substantially, especially for guidelines published online that have not had to follow the methodological reporting standards often required by reputable clearinghouses. Patients and caregivers are frequently excluded from clinical guideline development, in part because there is a lack of guidance on how to include them in the process.
Guidelines may make recommendations that are stronger than the supporting evidence.
In response to many of these problems with traditional guidelines, the BMJ created a new series of trustworthy guidelines focused on the most pressing medical issues called BMJ Rapid Recommendations.
== Examples ==
The American Heart Association Guidelines for the Prevention of Infective Endocarditis
The BMJ Rapid Recommendation guideline on transcatheter aortic valve implantation versus surgical aortic valve replacement for aortic stenosis.
== See also ==
Clinical formulation
Clinical prediction rule
Clinical trial protocol
Medical algorithm
The Medical Letter on Drugs and Therapeutics
== References ==
== External links ==
British Columbia Medical Guidelines – In Canada, British Columbia's guidelines and protocols are developed under the direction of the Guidelines and Protocols Advisory Committee (GPAC), jointly sponsored by the B.C. Medical Association and the B.C. Ministry of Health Services.
The Cochrane Collaboration – An international, independent, not-for-profit organisation of over 27,000 contributors from more than 100 countries, dedicated to making up-to-date, accurate information about the effects of health care readily available worldwide.
GuiaSalud. Clinical Practice Guidelines for the National Health System (Spain) – Contains clinical practice guidelines developed in Spain translated into English.
Guideline Elements Model – The Guideline Elements Model (GEM) is an ASTM standard for the representation of practice guidelines in XML format.
Guideline Interchange Format – The Guideline Interchange Format (GLIF) is a specification for structured representation of guidelines.
Guidelines International Network – Contains the largest online guideline library.
Hospital Quality Alliance – A project of the Hospital Quality Initiative (HQI) of the Centers for Medicare and Medicaid Services (USA).
National Guideline Clearinghouse (NGC) – A public resource for evidence-based clinical practice guidelines. NGC is an initiative of the Agency for Healthcare Research and Quality (AHRQ), U.S. Department of Health and Human Services.
Scottish Intercollegiate Guidelines Network (SIGN) – Contains 113 evidence-based clinical guidelines – published, in development, or under review.
German guidelines (AWMF) – A collection of current German health care-related professional associations' guidelines.
Globalization, the flow of information, goods, capital, and people across political and geographic boundaries, allows infectious diseases to rapidly spread around the world, while also allowing the alleviation of factors such as hunger and poverty, which are key determinants of global health. The spread of diseases across wide geographic scales has increased through history. Early diseases that spread from Asia to Europe were bubonic plague, influenza of various types, and similar infectious diseases.
In the current era of globalization, the world is more interdependent than at any other time. Efficient and inexpensive transportation has left few places inaccessible, and increased global trade in agricultural products has brought more and more people into contact with animal diseases that have subsequently jumped species barriers (see zoonosis).
Globalization intensified during the Age of Exploration, but trading routes had long been established between Asia and Europe, along which diseases were also transmitted. Increased travel helped spread diseases to populations that had not previously been exposed to them. When a native population is infected with a new disease to which it has not developed antibodies through generations of previous exposure, the new disease tends to run rampant within the population.
Etiology, the modern branch of science that deals with the causes of infectious disease, recognizes five major modes of disease transmission: airborne, waterborne, bloodborne, direct contact, and vectors (insects or other creatures that carry germs from one species to another). As humans began traveling overseas and across lands that were previously isolated, research suggests that diseases were spread by all five transmission modes.
== Travel patterns and globalization ==
The Age of Exploration generally refers to the period between the 15th and 17th centuries. During this time, technological advances in shipbuilding and navigation made it easier for nations to explore outside previous boundaries. Globalization brought many benefits; for example, Europeans encountered products new to them, such as tea, silk, and sugar, when they developed trade routes around Africa to India and the Spice Islands in Asia, and eventually to the Americas.
In addition to trading in goods, many nations began to trade in slaves. The slave trade was another way by which diseases were carried to new locations and peoples, for instance from sub-Saharan Africa to the Caribbean and the Americas. During this time, different societies began to integrate, increasing the concentration of humans and animals in certain places, which led to the emergence of new diseases as some mutated and jumped from animals to humans.
During this time, sorcerers' and witch doctors' treatment of disease was often focused on magic and religion, and on healing the entire body and soul, rather than on a few symptoms as in modern medicine. Early medicine often included the use of herbs and meditation. Based on archaeological evidence, some prehistoric practitioners in both Europe and South America used trephining, making a hole in the skull to release illness. Severe diseases were often thought of as supernatural or magical. The result of the introduction of Eurasian diseases to the Americas was that many more native peoples were killed by disease and germs than by the colonists' use of guns or other weapons. Scholars estimate that over a period of four centuries, epidemic diseases wiped out as much as 90 percent of the American indigenous populations.
In Europe during the age of exploration, diseases such as smallpox, measles and tuberculosis (TB) had already been introduced centuries before through trade with Asia and Africa. People had developed some antibodies to these and other diseases from the Eurasian continent. When the Europeans traveled to new lands, they carried these diseases with them. (Note: Scholars believe TB was already endemic in the Americas.) When such diseases were introduced for the first time to new populations of humans, the effects on the native populations were widespread and deadly. The Columbian Exchange, referring to Christopher Columbus's first contact with the native peoples of the Caribbean, began the trade of animals, and plants, and unwittingly began an exchange of diseases.
It was not until the 1800s that humans began to recognize the existence and role of germs and microbes in relation to disease. Although many thinkers had ideas about germs, it was not until French doctor Louis Pasteur spread his theory about germs, and the need for washing hands and maintaining sanitation (particularly in medical practice), that anyone listened. Many people were quite skeptical, but on May 22, 1881, Pasteur persuasively demonstrated the validity of his germ theory of disease with an early example of vaccination. The anthrax vaccine was administered to 25 sheep while another 25 were used as a control. On May 31, 1881, all of the sheep were exposed to anthrax. While every sheep in the control group died, each of the vaccinated sheep survived. Pasteur's experiment would become a milestone in disease prevention. His findings, in conjunction with other vaccines that followed, changed the way globalization affected the world.
=== Effects of globalization on disease in the modern world ===
Modern modes of transportation allow more people and products to travel around the world at a faster pace; they also open the airways to the transcontinental movement of infectious disease vectors. One example is the West Nile virus. It is believed that this disease reached the United States via "mosquitoes that crossed the ocean by riding in airplane wheel wells and arrived in New York City in 1999." With air travel, people can visit foreign lands, contract a disease, and show no symptoms of illness until after they get home, having exposed others to the disease along the way. Another example of the potency of modern transportation in spreading disease is the 1918 Spanish flu pandemic. Even in the early 20th century, transportation was able to spread the virus because the network of transit and trade was already global. The virus was found on crew members of ships and trains, and infected employees spread it everywhere they traveled. As a result, an estimated 50–100 million people died in this global transmission.
As medicine has progressed, many vaccines and cures have been developed for some of the worst diseases (plague, syphilis, typhus, cholera, malaria) that people develop. But because disease organisms evolve very rapidly, even with vaccines there is difficulty providing full immunity to many diseases. Since vaccines are made partly from the virus itself, when an unknown virus is introduced into the environment, it takes time for the medical community to formulate an effective vaccine. The lack of operational and functional research and data, which would provide a quicker and more strategic pathway to a reliable vaccine, makes for a lengthy vaccine-development timeline. Even though frameworks and preparedness plans are used to reduce COVID-19 cases, a vaccine is the only way to ensure complete immunization. Some systems, such as Immunization Information Systems (IIS), help provide a preliminary structure for quick responses to outbreaks and unknown viruses. These systems draw on past data and research from successful modern vaccine-development efforts. Finding vaccines at all for some diseases remains extremely difficult. Without vaccines, the global world remains vulnerable to infectious diseases.
Evolution of disease presents a major threat in modern times. For example, the current "swine flu" or H1N1 virus is a new strain of an old form of flu, known for centuries as Asian flu based on its origin on that continent. From 1918 to 1920, a post-World War I global influenza epidemic killed an estimated 50–100 million people, including half a million in the United States alone. H1N1 is a virus that has evolved from and partially combined with portions of avian, swine, and human flu.
Globalization has increased the spread of infectious diseases from South to North, but also the risk of non-communicable diseases by transmission of culture and behavior from North to South. It is important to target and reduce the spread of infectious diseases in developing countries. However, addressing the risk factors of non-communicable diseases and lifestyle risks in the South that cause disease, such as use or consumption of tobacco, alcohol, and unhealthy foods, is important as well.
Even during pandemics, it is vital to recognize economic globalization as a catalyst in the spread of the coronavirus. Economic activity is especially damaged by global lockdown regulations and trade blockades. As transportation globalized, economies expanded, and formerly self-contained economies found great financial opportunities in global trade. With increased interconnectivity among economies and the globalization of the world economy, the spread of the coronavirus raised the likelihood of global recessions. The coronavirus pandemic caused many economic disruptions, producing a functional disconnect in the supply chain and the flow of goods. As transportation modes are relevant to the spread of infectious diseases, it is important also to recognize the economy as the motor of this globalized transmission system.
== Specific diseases ==
=== Plague ===
Bubonic plague is a variant of the deadly flea-borne disease plague, caused by the bacterium Yersinia pestis, which devastated human populations beginning in the 14th century. Bubonic plague is primarily spread by fleas that lived on the black rat, an animal that originated in South Asia and spread to Europe by the 6th century. It became common in cities and villages, traveling by ship with explorers. A human would become infected after being bitten by an infected flea. The first signs of an infection of bubonic plague are swelling of the lymph nodes and the formation of buboes. These buboes would first appear in the groin or armpit area, and would often ooze pus or blood. Eventually infected individuals would become covered with dark splotches caused by bleeding under the skin. The symptoms would be accompanied by a high fever, and within four to seven days of infection, more than half of those affected would die.
The first recorded outbreak of plague occurred in China in the 1330s, a time when China was engaged in substantial trade with western Asia and Europe. The plague reached Europe in October 1347. It was thought to have been brought into Europe through the port of Messina, Sicily, by a fleet of Genoese trading ships from Kaffa, a seaport on the Crimean peninsula. When the ship left port in Kaffa, many of the inhabitants of the town were dying, and the crew was in a hurry to leave. By the time the fleet reached Messina, all the crew were either dead or dying; the rats that took passage on the ships slipped unnoticed to shore and, with their fleas, carried the disease with them.
Within Europe, the plague struck port cities first, then followed people along both sea and land trade routes. It raged through Italy into France and the British Isles. It was carried over the Alps into Switzerland, and eastward into Hungary and Russia. For a time during the 14th and 15th centuries, the plague would recede. Every ten to twenty years, it would return. Later epidemics, however, were never as widespread as the earlier outbreaks, when 60% of the population died.
The third plague pandemic emerged in Yunnan province of China in the mid-nineteenth century. It spread east and south through China, reaching Guangzhou (Canton) and Hong Kong in 1894, where it entered the global maritime trade routes. Plague reached Singapore and Bombay in 1896. China lost an estimated 2 million people between plague's reappearance in the mid-nineteenth century and its retreat in the mid-twentieth. In India, between 1896 and the 1920s, plague claimed an estimated 12 million lives, most in the Bombay province. Plague spread into the countries around the Indian Ocean, the Red Sea and the Mediterranean. From China it also spread eastward to Japan, the Philippines and Hawaii, and in Central Asia it spread overland into the Russian territories from Siberia to Turkistan. By 1901 there had been outbreaks of plague on every continent, and new plague reservoirs would produce regular outbreaks over the ensuing decades.
=== Measles ===
Measles is a highly contagious airborne virus spread by contact with infected oral and nasal fluids. When a person with measles coughs or sneezes, they release microscopic particles into the air. During the 4- to 12-day incubation period, an infected individual shows no symptoms, but as the disease progresses, the following symptoms appear: runny nose, cough, red eyes, extremely high fever and a rash.
Measles is an endemic disease, meaning that it has been continually present in a community, and many people developed resistance. In populations that have not been exposed to measles, exposure to the new disease can be devastating. In 1529, a measles outbreak in Cuba killed two-thirds of the natives who had previously survived smallpox. Two years later measles was responsible for the deaths of half the indigenous population of Honduras, and ravaged Mexico, Central America, and the Inca civilization.
Historically, measles was very prevalent throughout the world, as it is highly contagious. According to the National Immunization Program, 90% of people were infected with measles by age 15, acquiring immunity to further outbreaks. Until a vaccine was developed in 1963, measles was considered to be deadlier than smallpox. Vaccination reduced the number of reported occurrences by 98%. Major epidemics have predominantly occurred in unvaccinated populations, particularly among nonwhite Hispanic and African American children under 5 years old. In 2000 a group of experts determined that measles was no longer endemic in the United States. The majority of cases that occur are among immigrants from other countries.
=== Typhus ===
Typhus is caused by rickettsiae, which are transmitted to humans through lice; for murine typhus, the main vector is the rat flea. Flea bites and infected flea feces in the respiratory tract are the two most common methods of transmission. In areas where rats are not common, typhus may also be transmitted through cat and opossum fleas. The incubation period of typhus is 7–14 days. The symptoms start with a fever, then headache, rash, and eventually stupor. Spontaneous recovery occurs in 80–90% of victims.
The first outbreak of typhus was recorded in 1489. Historians believe that troops from the Balkans, hired by the Spanish army, brought it to Spain with them. By 1490 typhus had traveled from the eastern Mediterranean into Spain and Italy, and by 1494 it had swept across Europe. From 1500 to 1914, more soldiers were killed by typhus than by all the combined military actions during that time. It was also a disease associated with the crowded conditions of urban poverty and refugees. Finally, during World War I, governments instituted preventative delousing measures among the armed forces and other groups, and the disease began to decline. With the advent of antibiotics, the disease can now be controlled within two days of a 200 mg dose of tetracycline.
=== Syphilis ===
Syphilis is a sexually transmitted disease that causes open sores, delirium and rotting skin, and is characterized by genital ulcers. Syphilis can also do damage to the nervous system, brain and heart. The disease can be transmitted from mother to child.
The origins of syphilis are unknown, and some historians argue that it descended from a twenty-thousand-year-old African zoonosis. Other historians place its emergence in the New World, arguing that the crews of Columbus's ships first brought the disease to Europe. The first recorded case of syphilis occurred in Naples in 1495, after King Charles VIII of France besieged the city of Naples, Italy. The soldiers, and the prostitutes who followed their camps, came from all corners of Europe. When they went home, they took the disease with them and spread it across the continent.
=== Smallpox ===
Smallpox is a highly contagious disease caused by the variola virus. There are four variations of smallpox: variola major, variola minor, haemorrhagic, and malignant, with the most common being variola major and variola minor. Symptoms of the disease include hemorrhaging, blindness, backache, and vomiting, which generally occur shortly after the 12- to 17-day incubation period. The virus begins to attack skin cells, eventually leading to an eruption of pimples that cover the whole body. As the disease progresses, the pimples fill with pus or merge. This merging results in a sheet that can detach the bottom layer of skin from the top layer. The disease is easily transmitted through airborne pathways (coughing, sneezing, and breathing), as well as through contaminated bedding, clothing, or other fabrics.
It is believed that smallpox first emerged over 3000 years ago, probably in India or Egypt. There have been numerous recorded devastating epidemics throughout the world, with high losses of life.
Smallpox was a common disease in Eurasia in the 15th century, and was spread by explorers and invaders. After Columbus landed on the island of Hispaniola during his second voyage in 1493, local people started to die of a virulent infection. Before the smallpox epidemic started, more than one million indigenous people had lived on the island; afterward, only ten thousand had survived.
During the 16th century, Spanish soldiers introduced smallpox by contact with natives of the Aztec capital Tenochtitlan. A devastating epidemic broke out among the indigenous people, killing thousands.
In 1617, smallpox reached Massachusetts, probably brought by earlier explorers to Nova Scotia, Canada. By 1638 the disease had broken out among people in Boston, Massachusetts. In 1721 people fled the city after an outbreak, but the residents spread the disease to others throughout the Thirteen Colonies. Smallpox broke out in six separate epidemics in the United States through 1968.
The smallpox vaccine was developed in 1798 by Edward Jenner. By 1979 the disease had been completely eradicated, with no new outbreaks. The WHO stopped providing vaccinations, and by 1986 vaccination was no longer necessary for anyone in the world except in the event of a future outbreak.
=== Leprosy ===
Leprosy, also known as Hansen's disease, is caused by a bacillus, Mycobacterium leprae. It is a chronic disease with an incubation period of up to five years. Symptoms often include irritation or erosion of the skin, and effects on the peripheral nerves, mucosa of the upper respiratory tract, and eyes. The most common sign of leprosy is pale reddish spots on the skin that lack sensation.
Leprosy originated in India, more than four thousand years ago. It was prevalent in ancient societies in China, Egypt and India, and was transmitted throughout the world by various traveling groups, including Roman Legionnaires, Crusaders, Spanish conquistadors, Asian seafarers, European colonists, and Arab, African, and American slave traders. Some historians believe that Alexander the Great's troops brought leprosy from India to Europe during the 3rd century BC. With the help of the crusaders and other travelers, leprosy reached epidemic proportions by the 13th century.
Once detected, leprosy can be cured using multi-drug therapy, composed of two or three antibiotics, depending on the type of leprosy. In 1991 the World Health Assembly began an attempt to eliminate leprosy. By 2005, 116 of 122 countries were reported to be free of leprosy.
=== Malaria ===
On November 6, 1880, Alphonse Laveran discovered that malaria (then called "marsh fever") was caused by a protozoan parasite; mosquitoes were later shown to carry and transmit it. Malaria is a protozoan infectious disease that is generally transmitted to humans by mosquitoes between dusk and dawn. The European variety, known as "vivax" after the parasite Plasmodium vivax, causes a relatively mild yet chronically aggravating disease. The West African variety is caused by the sporozoan parasite Plasmodium falciparum and results in a severely debilitating and deadly disease.
Malaria was once common in parts of the world where it has since disappeared, including most of Europe (the disease was particularly widespread in the Roman Empire) and North America. In some parts of England, mortality due to malaria was comparable to that of sub-Saharan Africa today. Although William Shakespeare was born at the beginning of a colder period called the "Little Ice Age", he knew the ravages of this disease well enough to mention it in eight of his works. Plasmodium vivax persisted until 1958 in the polders of Belgium and the Netherlands.
In the 1500s, European settlers and their slaves probably brought malaria to the American continent (Columbus is known to have had the disease before his arrival in the New World). Spanish Jesuit missionaries observed that the Indians living near Lake Loxa in Peru used powdered cinchona bark to treat fevers; there is, however, no reference to malaria in the medical literature of the Maya or Aztecs. The bark of the "fever tree" was introduced into European medicine by Jesuit missionaries, among them Bernabé Cobo, who experimented with it in 1632, and through exports, which contributed to the precious powder also being called "Jesuit's powder". A 2012 study of thousands of genetic markers in Plasmodium falciparum samples confirmed the African origin of the parasite in South America: between the mid-sixteenth and mid-nineteenth centuries it followed the two main routes of the slave trade, the first, run by the Spanish, leading to the north of South America (Colombia), and the second, run by the Portuguese, leading further south (Brazil).
Parts of the developing world are more affected by malaria than the rest of the globe. For instance, many inhabitants of sub-Saharan Africa are affected by recurring attacks of malaria throughout their lives. In many areas of Africa, there is limited running water, and the residents' use of wells and cisterns provides many sites for the breeding of mosquitoes and the spread of the disease. Mosquitoes breed in areas of standing water such as marshes, wetlands, and water drums.
=== Tuberculosis ===
The bacterium that causes tuberculosis, Mycobacterium tuberculosis, is generally spread when an infected person coughs and another person inhales the bacteria. Once inhaled, TB frequently grows in the lungs, but it can spread to any part of the body. Although TB is highly contagious, in most cases the human body is able to fend off the bacteria. But TB can remain dormant in the body for years and become active unexpectedly. If and when the disease does become active, it can multiply rapidly, causing many symptoms including cough (sometimes with blood), night sweats, fever, chest pains, loss of appetite, and loss of weight. This disease can occur in both adults and children and is especially common among those with weak or undeveloped immune systems.
Tuberculosis (TB) has been one of history's greatest killers, taking the lives of over 3 million people annually. It has been called the "white plague". According to the WHO, approximately fifty percent of people infected with TB today live in Asia. It is the most prevalent, life-threatening infection among AIDS patients. It has increased in areas where HIV seroprevalence is high.
Air travel and other methods of travel that have made global interaction easier have increased the spread of TB across different societies. The BCG vaccine, which prevents TB meningitis and miliary TB in childhood, offers some protection, but it does not provide substantial protection against the more virulent forms of TB found among adults. Most forms of TB can be treated with antibiotics that kill the bacteria; the two most commonly used are rifampicin and isoniazid. There are dangers, however, of a rise in antibiotic-resistant TB. The TB treatment regimen is lengthy and difficult for poor and disorganized people to complete, which increases bacterial resistance. Antibiotic-resistant TB, also known as multidrug-resistant tuberculosis (MDR-TB), is a pandemic on the rise. Patients with MDR-TB are mostly young adults who are not infected with HIV and have no other existing illness. Due to the lack of health-care infrastructure in underdeveloped countries, there is debate as to whether treating MDR-TB is cost-effective, because of the high cost of "second-line" antituberculosis medications. It has been argued that the cost of treating patients with MDR-TB is high because the medical field's focus has shifted, in particular toward AIDS, which is now the world's leading infectious cause of death. Nonetheless, it is still important to make the effort to help and treat patients with MDR-TB in poor countries.
=== HIV/AIDS ===
HIV and AIDS are among the newest and deadliest diseases. According to the World Health Organization, it is unknown where the HIV virus originated, but it appears to have moved from animals to humans. It may have been isolated within many groups throughout the world. It is believed that HIV arose from another, less harmful virus that mutated and became more virulent. The first two AIDS/HIV cases were detected in 1981. As of 2013, an estimated 1.3 million persons in the United States were living with HIV or AIDS, almost 110,000 in the UK, and an estimated 35 million people worldwide.
Despite efforts in numerous countries, awareness and prevention programs have not been effective enough to reduce the numbers of new HIV cases in many parts of the world, where it is associated with high mobility of men, poverty and sexual mores among certain populations. Uganda has had an effective program, however. Even in countries where the epidemic has a very high impact, such as Eswatini and South Africa, a large proportion of the population do not believe they are at risk of becoming infected. Even in countries such as the UK, there is no significant decline in certain at-risk communities. 2014 saw the greatest number of new diagnoses in gay men, the equivalent of nine being diagnosed a day.
Initially, HIV prevention methods focused primarily on preventing the sexual transmission of HIV through behaviour change, exemplified by the ABC approach: "Abstinence, Be faithful, use a Condom". By the mid-2000s, however, it became evident that effective HIV prevention requires more than that, and that interventions need to take into account underlying socio-cultural, economic, political, legal, and other contextual factors.
=== Ebola ===
The Ebola outbreak that started in Guinea in March 2014 was the 26th outbreak since 1976. The WHO warned that the number of Ebola patients could rise to 20,000, and said that $489m (£294m) would be needed to contain Ebola within six to nine months. The outbreak was accelerating. Médecins Sans Frontières (MSF) had just opened a new Ebola hospital in Monrovia, which after one week already had a capacity of 120 patients. MSF said that the number of patients seeking treatment at its new Monrovia center was increasing faster than it could handle, both in the number of beds and in the capacity of the staff, and that it was struggling to cope with the caseload in the Liberian capital. Lindis Hurum, MSF's emergency coordinator in Monrovia, said that it was a humanitarian emergency requiring a full-scale humanitarian response. Brice de le Vingne, MSF director of operations, said that it was not until five months after the declaration of the Ebola outbreak that serious discussions started about international leadership and coordination, which he called unacceptable.
=== Leptospirosis ===
Leptospirosis, also known as the "rat fever" or "field fever" is an infection caused by Leptospira. Symptoms can range from none to mild such as headaches, muscle pains, and fevers; to severe with bleeding from the lungs or meningitis. Leptospira is transmitted by both wild and domestic animals, most commonly by rodents. It is often transmitted by animal urine or by water or soil containing animal urine coming into contact with breaks in the skin, eyes, mouth, or nose.
The countries with the highest reported incidence are located in the Asia-Pacific region (Seychelles, India, Sri Lanka and Thailand), with incidence rates over 10 per 100,000 people, as well as in Latin America and the Caribbean (Trinidad and Tobago, Barbados, Jamaica, El Salvador, Uruguay, Cuba, Nicaragua and Costa Rica). However, the rise in global travel and eco-tourism has led to dramatic changes in the epidemiology of leptospirosis, and travelers from around the world have become exposed to its threat. Despite decreasing prevalence of leptospirosis in endemic regions, previously non-endemic countries are now reporting increasing numbers of cases due to recreational exposure. International travelers engaged in adventure sports are directly exposed to numerous infectious agents in the environment and now comprise a growing proportion of cases worldwide.
=== Disease X ===
The World Health Organization (WHO) proposed the placeholder name Disease X in 2018 to represent a hypothetical, as-yet-unknown pathogen, in order to focus preparedness planning on a potential future major pandemic.
=== COVID-19 ===
The virus outbreak originated in Wuhan, China, where it was first detected in December 2019; the disease was accordingly named COVID-19 (coronavirus disease 2019). The outbreak in Wuhan evolved into a global pandemic, which the World Health Organization officially declared on March 11, 2020.
As of May 2020, scientists believed that COVID-19, a zoonotic disease, was linked to wet markets in China. Epidemiologists warned of the virus's contagiousness, although specialists acknowledged that the transmission dynamics of SARS-CoV-2 were not yet fully understood. The generally accepted view among virologists and other experts was that inhaling droplets from an infected person is the most likely way SARS-CoV-2 spreads. As more people traveled and more goods and capital were traded globally, COVID-19 cases began to appear all over the world.
Symptoms that COVID-19 patients may experience include shortness of breath (which might be a sign of pneumonia), cough, fever, and diarrhea. The three most commonly recorded symptoms are fever, tiredness, and coughing, as reported by the World Health Organization. COVID-19 is also among the viral diseases that can show no symptoms in the carrier; asymptomatic carriers transmitted the virus to many people who went on to develop symptoms, in some cases fatally.
The first cases were detected in Wuhan, China, the origin of the outbreak. On December 31, 2019, the Wuhan Municipal Health Commission notified the World Health Organization that a cluster of pneumonia cases previously detected in Wuhan, Hubei Province, was under investigation. A novel coronavirus was then identified and reported, making the pneumonia cases in China the first reported cases of COVID-19. As of November 25, 2021, there had been around 260 million confirmed COVID-19 cases around the world. Confirmed deaths as a result of COVID-19 exceeded 5 million globally, while over 235 million of the 260 million confirmed cases had recovered. Countries that showed a lack of preparation and awareness in January and February 2020 went on to report the highest numbers of COVID-19 cases. The United States led the worldwide count with almost 49 million confirmed cases; its deaths had crossed 798,000, the highest death count of any country. Brazil, Russia, Spain, the UK, and Italy all suffered from the increase in cases, leading to impaired health systems unable to attend to so many sick people at one time.
The first confirmed case of COVID-19 in the United States was in Washington State on January 21, 2020, in a man who had just returned from China. Following this incident, on January 31, 2020, President Trump announced that travel to and from China would be restricted, effective February 2, 2020. On March 11, 2020, Trump issued an executive order restricting travel from Europe, except for the UK and Ireland, and on May 24, 2020, he banned travel from Brazil as that country became the new center of the coronavirus pandemic. International restrictions were set to reduce the number of international travelers entering a country and potentially carrying the virus, because governments understood that with accessible travel and free trade, any person can carry the virus into a new environment. The State Department issued recommendations to U.S. travelers, and as of March 19, 2020, some countries were marked Level 4 ("do not travel"). The coronavirus pandemic travel restrictions affected almost 93% of the global population. Increased travel restrictions aided multilateral and bilateral health organizations in controlling the number of confirmed cases of COVID-19.
== Non-communicable disease ==
Globalization can benefit people with non-communicable diseases such as heart problems or mental health problems. Global trade and the rules set forth by the World Trade Organization can benefit people's health by raising their incomes, allowing them to afford better health care, although they also make many non-communicable diseases more likely. The national income of a country, mostly obtained by trading on the global market, is likewise important because it dictates how much a government spends on health care for its citizens. It must also be acknowledged that an expansion in the definition of disease often accompanies development, so the net effect is not clearly beneficial, due to this and other effects of increased affluence. Metabolic syndrome is one obvious example, although poorer countries have not yet experienced it and still face the diseases listed above.
== Economic globalization and disease ==
Globalization is multifaceted in its implementation and rests on a shared framework and systemic ideology. Infectious diseases spread more readily as a result of the modern globalization of nearly all industries and sectors. Economic globalization is the interconnectivity of world economies and the interdependency of internal and external supply chains, enabled all the more by advances in science and technology. Economic factors have come to be defined by global rather than national boundaries. The cost of economic activity has decreased significantly as a result of these advancements, slowly creating an interconnected economy that lacks centralized integration. As economies increase their levels of integration, any global financial or economic disruption can cause a global recession, and collateral damage grows with the increase in integrated economic activity. Countries lean more on economic benefits than health benefits, which leads to miscalculated and under-reported health issues.
== See also ==
== References ==
The transtheoretical model of behavior change is an integrative theory of therapy that assesses an individual's readiness to act on a new healthier behavior, and provides strategies, or processes of change to guide the individual. The model is composed of constructs such as: stages of change, processes of change, levels of change, self-efficacy, and decisional balance.
The transtheoretical model is also known by the abbreviation "TTM" and sometimes by the term "stages of change", although this latter term is a synecdoche since the stages of change are only one part of the model along with processes of change, levels of change, etc. Several self-help books—Changing for Good (1994), Changeology (2012), and Changing to Thrive (2016)—and articles in the news media have discussed the model. In 2009, an article in the British Journal of Health Psychology called it "arguably the dominant model of health behaviour change, having received unprecedented research attention, yet it has simultaneously attracted exceptional criticism".
== History and core constructs ==
James O. Prochaska of the University of Rhode Island, Carlo DiClemente, and colleagues developed the transtheoretical model beginning in 1977. It is based on analysis and use of different theories of psychotherapy, hence the name "transtheoretical". Prochaska and colleagues refined the model on the basis of research that they published in peer-reviewed journals and books.
=== Stages of change ===
This construct refers to the temporal dimension of behavioural change. In the transtheoretical model, change is a "process involving progress through a series of stages":
Precontemplation ("not ready") – "People are not intending to take action in the foreseeable future, and can be unaware that their behaviour is problematic"
Contemplation ("getting ready") – "People are beginning to recognize that their behaviour is problematic, and start to look at the pros and cons of their continued actions"
Preparation ("ready") – "People are intending to take action in the immediate future, and may begin taking small steps toward behaviour change"
Action – "People have made specific overt modifications in modifying their problem behaviour or in acquiring new healthy behaviours"
Maintenance – "People have been able to sustain action for at least six months and are working to prevent relapse"
Termination – "Individuals have zero temptation and they are sure they will not return to their old unhealthy habit as a way of coping"
In addition, the researchers conceptualized "Relapse" (recycling) which is not a stage in itself but rather the "return from Action or Maintenance to an earlier stage".
The quantitative definition of the stages of change (see below) is perhaps the best-known feature of the model. However, it is also one of the most critiqued, even in the field of smoking cessation, where it was originally formulated. It has been argued that such a quantitative definition (e.g., a person is in Preparation if they intend to change within a month) does not reflect the nature of behaviour change, that it has no better predictive power than simpler questions (e.g., "do you have plans to change..."), and that it has problems regarding its classification reliability.
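The quantitative staging just described (intention within 30 days for Preparation, within 6 months for Contemplation, action sustained for 6 months before Maintenance) can be sketched as a small classifier. This is an illustrative simplification, not a validated instrument: the function and parameter names are hypothetical, and real TTM questionnaires also ask about small preparatory steps and relapse history.

```python
from datetime import timedelta
from typing import Optional

def classify_stage(intends_to_change: bool,
                   intended_within: Optional[timedelta],
                   months_since_action: Optional[float]) -> str:
    """Classify a TTM stage of change from self-reported answers.

    A simplified sketch of the time-horizon staging rule described
    in the text; thresholds follow the model's conventional cutoffs.
    """
    if months_since_action is not None:
        # Behaviour already changed: Action for the first six months,
        # Maintenance afterwards.
        return "Action" if months_since_action < 6 else "Maintenance"
    if not intends_to_change:
        return "Precontemplation"
    if intended_within is not None and intended_within <= timedelta(days=30):
        return "Preparation"
    if intended_within is not None and intended_within <= timedelta(days=182):
        return "Contemplation"
    return "Precontemplation"  # intends "someday", beyond six months

print(classify_stage(True, timedelta(days=14), None))  # Preparation
print(classify_stage(True, timedelta(days=90), None))  # Contemplation
print(classify_stage(True, None, 8.0))                 # Maintenance
```

The hard cutoffs are exactly what the critique above targets: a one-day difference in stated intention flips the assigned stage.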
Communication theorist and sociologist Everett Rogers suggested that the stages of change are analogues of the stages of the innovation adoption process in Rogers' theory of diffusion of innovations.
==== Details of each stage ====
Stage 1: Precontemplation (not ready)
People at this stage do not intend to start the healthy behavior in the near future (within 6 months), and may be unaware of the need to change. People here learn more about healthy behavior: they are encouraged to think about the pros of changing their behavior and to feel emotions about the effects of their negative behavior on others.
Precontemplators typically underestimate the pros of changing, overestimate the cons, and often are not aware of making such mistakes.
One of the most effective steps that others can help with at this stage is to encourage them to become more mindful of their decision making and more conscious of the multiple benefits of changing an unhealthy behavior.
Stage 2: Contemplation (getting ready)
At this stage, participants intend to start the healthy behavior within the next 6 months. While they are usually now more aware of the pros of changing, the cons are about equal to the pros. This ambivalence about changing can cause them to keep putting off taking action.
People here learn about the kind of person they could be if they changed their behavior and learn more from people who behave in healthy ways.
Others can influence and help effectively at this stage by encouraging them to work at reducing the cons of changing their behavior.
Stage 3: Preparation (ready)
People at this stage are ready to start taking action within the next 30 days. They take small steps that they believe can help them make the healthy behavior a part of their lives. For example, they tell their friends and family that they want to change their behavior.
People in this stage should be encouraged to seek support from friends they trust, tell people about their plan to change the way they act, and think about how they would feel if they behaved in a healthier way. Their number one concern is: when they act, will they fail? They learn that the better prepared they are, the more likely they are to keep progressing.
Stage 4: Action (current action)
People at this stage have changed their behavior within the last 6 months and need to work hard to keep moving ahead. These participants need to learn how to strengthen their commitments to change and to fight urges to slip back.
People in this stage progress by being taught techniques for keeping up their commitments such as substituting activities related to the unhealthy behavior with positive ones, rewarding themselves for taking steps toward changing, and avoiding people and situations that tempt them to behave in unhealthy ways.
Stage 5: Maintenance (monitoring)
People at this stage changed their behavior more than 6 months ago. It is important for people in this stage to be aware of situations that may tempt them to slip back into doing the unhealthy behavior—particularly stressful situations.
It is recommended that people in this stage seek support from and talk with people whom they trust, spend time with people who behave in healthy ways, and remember to engage in healthy activities (such as exercise and deep relaxation) to cope with stress instead of relying on unhealthy behavior.
Relapse (recycling)
Relapse in the TTM specifically applies to individuals who successfully quit smoking or using drugs or alcohol, only to resume these unhealthy behaviors. Individuals who attempt to quit highly addictive behaviors such as drug, alcohol, and tobacco use are at particularly high risk of a relapse. Achieving a long-term behavior change often requires ongoing support from family members, a health coach, a physician, or another motivational source. Supportive literature and other resources can also be helpful to avoid a relapse from happening.
=== Processes of change ===
The 10 processes of change are "covert and overt activities that people use to progress through the stages".
To progress through the early stages, people apply cognitive, affective, and evaluative processes. As people move toward Action and Maintenance, they rely more on commitments, counterconditioning, rewards, environmental controls, and support.
Prochaska and colleagues state that their research related to the transtheoretical model shows that interventions to change behavior are more effective if they are "stage-matched", that is, "matched to each individual's stage of change".
In general, for people to progress they need:
A growing awareness that the advantages (the "pros") of changing outweigh the disadvantages (the "cons")—the TTM calls this decisional balance.
Confidence that they can make and maintain changes in situations that tempt them to return to their old, unhealthy behavior—the TTM calls this self-efficacy.
Strategies that can help them make and maintain change—the TTM calls these processes of change.
The ten processes of change are:
Consciousness-raising (Get the facts) — increasing awareness via information, education, and personal feedback about the healthy behavior.
Dramatic relief (Pay attention to feelings) — feeling fear, anxiety, or worry because of the unhealthy behavior, or feeling inspiration and hope when hearing about how people are able to change to healthy behaviors.
Self-reevaluation (Create a new self-image) — realizing that the healthy behavior is an important part of who they want to be.
Environmental reevaluation (Notice your effect on others) — realizing how their unhealthy behavior affects others and how they could have more positive effects by changing.
Social liberation (Notice public support) — realizing that society is supportive of the healthy behavior.
Self-liberation (Make a commitment) — believing in one's ability to change and making commitments and re-commitments to act on that belief.
Helping relationships (Get support) — finding people who are supportive of their change.
Counterconditioning (Use substitutes) — substituting healthy ways of acting and thinking for unhealthy ways.
Reinforcement management (Use rewards) — increasing the rewards that come from positive behavior and reducing those that come from negative behavior.
Stimulus control (Manage your environment) — using reminders and cues that encourage healthy behavior and avoiding places that don't.
Health researchers have extended Prochaska's and DiClemente's 10 original processes of change by an additional 21 processes. In the first edition of Planning Health Promotion Programs, Bartholomew et al. (2006) summarised the processes that they identified in a number of studies; however, their extended list of processes was removed from later editions of the text, perhaps because the list mixes techniques with processes. There are unlimited ways of applying processes. The additional strategies of Bartholomew et al. were:
Risk comparison (Understand the risks) – comparing risks with similar dimensional profiles: dread, control, catastrophic potential and novelty
Cumulative risk (Get the overall picture) – processing cumulative probabilities instead of single incident probabilities
Qualitative and quantitative risks (Consider different factors) – processing different expressions of risk
Positive framing (Think positively) – focusing on success instead of failure framing
Self-examination related to risk (Be aware of your risks) – conducting an assessment of risk perception, e.g. personalisation, impact on others
Reevaluation of outcomes (Know the outcomes) – emphasising positive outcomes of alternative behaviours and reevaluating outcome expectancies
Perception of benefits (Focus on benefits) – perceiving advantages of the healthy behaviour and disadvantages of the risk behaviour
Self-efficacy and social support (Get help) – mobilising social support; skills training on coping with emotional disadvantages of change
Decision making perspective (Decide) – focusing on making the decision
Tailoring on time horizons (Set the time frame) – incorporating personal time horizons
Focus on important factors (Prioritise) – incorporating personal factors of highest importance
Trying out new behaviour (Try it) – changing something about oneself and gaining experience with that behaviour
Persuasion of positive outcomes (Persuade yourself) – promoting new positive outcome expectations and reinforcing existing ones
Modelling (Build scenarios) – showing models to overcome barriers effectively
Skill improvement (Build a supportive environment) – restructuring environments to contain important, obvious and socially supported cues for the new behaviour
Coping with barriers (Plan to tackle barriers) – identifying barriers and planning solutions when facing these obstacles
Goal setting (Set goals) – setting specific and incremental goals
Skills enhancement (Adapt your strategies) – restructuring cues and social support; anticipating and circumventing obstacles; modifying goals
Dealing with barriers (Accept setbacks) – understanding that setbacks are normal and can be overcome
Self-rewards for success (Reward yourself) – feeling good about progress; reiterating positive consequences
Coping skills (Identify difficult situations) – identifying high risk situations; selecting solutions; practicing solutions; coping with relapse
While most of these processes and strategies are associated with health interventions such as stress management, exercise, healthy eating, smoking cessation and other addictive behaviour, some of them are also used in other types of interventions such as travel interventions. Some processes are recommended in a specific stage, while others can be used in one or more stages.
=== Decisional balance ===
This core construct "reflects the individual's relative weighing of the pros and cons of changing". Decision making was conceptualized by Janis and Mann as a "decisional balance sheet" of comparative potential gains and losses, and decisional balance measures, the pros and the cons, have become critical constructs in the transtheoretical model. The balance between the pros and cons varies depending on which stage of change the individual is in.
Sound decision making requires the consideration of the potential benefits (pros) and costs (cons) associated with a behavior's consequences. TTM research has found the following relationships between the pros, cons, and the stage of change across 48 behaviors and over 100 populations studied.
The cons of changing outweigh the pros in the Precontemplation stage.
The pros surpass the cons in the middle stages.
The pros outweigh the cons in the Action stage.
The evaluation of pros and cons is part of the formation of decisional balance. During the change process, individuals gradually increase the pros and decrease the cons forming a more positive balance towards the target behaviour. Attitudes are one of the core constructs explaining behaviour and behaviour change in various research domains. Other behaviour models, such as the theory of planned behavior (TPB) and the stage model of self-regulated change, also emphasise attitude as an important determinant of behaviour. The progression through the different stages of change is reflected in a gradual change in attitude before the individual acts.
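The idea of a gradually shifting balance can be illustrated numerically. The sketch below uses hypothetical pro/con ratings on a 1–5 scale and simply differences their means; actual TTM instruments use validated multi-item scales compared as standardized scores, so the numbers and function name here are purely illustrative.

```python
def decisional_balance(pros, cons):
    """Return mean(pros) - mean(cons); positive values favour change."""
    return sum(pros) / len(pros) - sum(cons) / len(cons)

# Hypothetical 1-5 ratings from the same person at two stages of change.
precontemplation = decisional_balance(pros=[2, 1, 2], cons=[4, 5, 4])
preparation      = decisional_balance(pros=[4, 5, 4], cons=[2, 3, 2])

print(precontemplation)  # negative: cons outweigh pros
print(preparation)       # positive: pros outweigh cons
```

The sign change mirrors the pattern reported above: the balance is negative in Precontemplation and turns positive as the individual approaches Action.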
Due to the use of decisional balance and attitude, travel behaviour researchers have begun to combine the TTM with the TPB. Forward uses the TPB variables to better differentiate the different stages: all TPB variables (attitude, perceived behaviour control, descriptive and subjective norm) show a gradually increasing positive relationship to the stage of change for bike commuting. As expected, intention or willingness to perform the behaviour increases by stage. Similarly, Bamberg uses various behavior models, including the transtheoretical model, the theory of planned behavior and the norm-activation model, to build the stage model of self-regulated behavior change (SSBC). Bamberg claims that his model is a solution to criticism raised towards the TTM. Some researchers in travel, dietary, and environmental research have conducted empirical studies showing that the SSBC might be a future path for TTM-based research.
=== Self-efficacy ===
This core construct is "the situation-specific confidence people have that they can cope with high-risk situations without relapsing to their unhealthy or high-risk habit". The construct is based on Bandura's self-efficacy theory and conceptualizes a person's perceived ability to perform a task as a mediator of performance on future tasks. In his research, Bandura had already established that greater levels of perceived self-efficacy lead to greater changes in behavior. Similarly, Ajzen notes the similarity between the concepts of self-efficacy and perceived behavioral control. This underlines the integrative nature of the transtheoretical model, which combines various behavior theories. A change in the level of self-efficacy can predict a lasting change in behavior if there are adequate incentives and skills. The transtheoretical model employs an overall confidence score to assess an individual's self-efficacy, while situational temptation measures assess how tempted people are to engage in a problem behavior in a certain situation.
=== Levels of change ===
This core construct identifies the depth or complexity of presenting problems according to five levels of increasing complexity. Different therapeutic approaches have been recommended for each level as well as for each stage of change. The levels are:
Symptom/situational problems: e.g., motivational interviewing, behavior therapy, exposure therapy
Current maladaptive cognitions: e.g., Adlerian therapy, cognitive therapy, rational emotive therapy
Current interpersonal conflicts: e.g., Sullivanian therapy, interpersonal therapy
Family/systems conflicts: e.g., strategic therapy, Bowenian therapy, structural family therapy
Long-term intrapersonal conflicts: e.g., psychoanalytic therapies, existential therapy, Gestalt therapy
In one empirical study of psychotherapy discontinuation published in 1999, measures of levels of change did not predict premature discontinuation of therapy. Nevertheless, in 2005 the creators of the TTM stated that it is important "that both therapists and clients agree as to which level they attribute the problem and at which level or levels they are willing to target as they work to change the problem behavior".
Psychologist Donald Fromme, in his book Systems of Psychotherapy, adopted many ideas from the TTM, but in place of the levels of change construct, Fromme proposed a construct called contextual focus, a spectrum from physiological microcontext to environmental macrocontext: "The horizontal, contextual focus dimension resembles TTM's Levels of Change, but emphasizes the breadth of an intervention, rather than the latter's focus on intervention depth."
== Outcomes of programs ==
The outcomes of the TTM computerized tailored interventions administered to participants in pre-Action stages are outlined below.
=== Stress management ===
A national sample of pre-Action adults was provided a stress management intervention. At the 18-month follow-up, a significantly larger proportion of the treatment group (62%) was effectively managing their stress when compared to the control group. The intervention also produced statistically significant reductions in stress and depression and an increase in the use of stress management techniques when compared to the control group. Two additional clinical trials of TTM programs by Prochaska et al. and Jordan et al. also found significantly larger proportions of treatment groups effectively managing stress when compared to control groups.
=== Adherence to antihypertensive medication ===
Over 1,000 members of a New England group practice who were prescribed antihypertensive medication participated in an adherence to antihypertensive medication intervention. A significant majority (73%) of the intervention group members who were previously pre-Action were adhering to their prescribed medication regimen at the 12-month follow-up, compared to the control group.
=== Adherence to lipid-lowering drugs ===
Members of a large New England health plan and various employer groups who were prescribed a cholesterol-lowering medication participated in an adherence to lipid-lowering drugs intervention. More than half of the intervention group (56%) who were previously pre-Action were adhering to their prescribed medication regimen at the 18-month follow-up. Additionally, only 15% of those in the intervention group who were already in Action or Maintenance relapsed into poor medication adherence, compared to 45% of the controls. Further, participants who were at risk with regard to physical activity and unhealthy diet were given only stage-based guidance. At 18 months, the treatment group had double the control group's percentage in Action or Maintenance for physical activity (43%) and diet (25%).
=== Depression prevention ===
Participants were 350 primary care patients experiencing at least mild depression but not involved in treatment or planning to seek treatment for depression in the next 30 days. Patients receiving the TTM intervention experienced significantly greater symptom reduction during the 9-month follow-up period. The intervention's largest effects were observed among patients with moderate or severe depression, and who were in the Precontemplation or Contemplation stage of change at baseline. For example, among patients in the Precontemplation or Contemplation stage, rates of reliable and clinically significant improvement in depression were 40% for treatment and 9% for control. Among patients with mild depression, or who were in the Action or Maintenance stage at baseline, the intervention helped prevent disease progression to Major Depression during the follow-up period.
=== Weight management ===
Five hundred and seventy-seven overweight or moderately obese adults (BMI 25–39.9) were recruited nationally, primarily from large employers. Those randomly assigned to the treatment group received a stage-matched multiple behavior change guide and a series of tailored, individualized interventions for three health behaviors that are crucial to effective weight management: healthy eating (i.e., reducing calorie and dietary fat intake), moderate exercise, and managing emotional distress without eating. Up to three tailored reports (one per behavior) were delivered based on assessments conducted at four time points: baseline, 3, 6, and 9 months. All participants were followed up at 6, 12, and 24 months. Multiple imputation was used to estimate missing data, and generalized estimating equations (GEE) were then used to examine differences between the treatment and comparison groups. At 24 months, those who were in a pre-Action stage for healthy eating at baseline and received treatment were significantly more likely to have reached Action or Maintenance than the comparison group (47.5% vs. 34.3%). The intervention also impacted a related, but untreated behavior: fruit and vegetable consumption. Over 48% of those in the treatment group in a pre-Action stage at baseline progressed to Action or Maintenance for eating at least 5 servings a day of fruit and vegetables, as opposed to 39% of the comparison group. Individuals in the treatment group who were in a pre-Action stage for exercise at baseline were also significantly more likely to reach Action or Maintenance (44.9% vs. 38.1%). The treatment also had a significant effect on managing emotional distress without eating, with 49.7% of those in a pre-Action stage at baseline moving to Action or Maintenance versus 30.3% of the comparison group. The groups differed in weight lost at 24 months among those in a pre-Action stage for healthy eating and exercise at baseline.
Among those in a pre-Action stage for both healthy eating and exercise at baseline, 30% of those randomized to the treatment group lost 5% or more of their body weight vs. 16.6% in the comparison group. Coaction of behavior change occurred and was much more pronounced in the treatment group with the treatment group losing significantly more than the comparison group. This study demonstrates the ability of TTM-based tailored feedback to improve healthy eating, exercise, managing emotional distress, and weight on a population basis. The treatment produced the highest population impact to date on multiple health risk behaviors.
The effectiveness of the use of this model in weight management interventions (including dietary or physical activity interventions, or both, and also combined with other interventions) for overweight and obese adults was assessed in a 2014 systematic review. The results revealed inconclusive evidence regarding the impact of these interventions on sustainable (one year or longer) weight loss. However, the approach may produce positive effects on physical activity and dietary habits, such as increases in both exercise duration and frequency and in fruit and vegetable consumption, along with reduced dietary fat intake, though this is based on very low quality scientific evidence.
=== Smoking cessation ===
Multiple studies have found individualized interventions tailored on the 14 TTM variables for smoking cessation to effectively recruit and retain pre-Action participants and produce long-term abstinence rates within the range of 22% – 26%. These interventions have also consistently outperformed alternative interventions including best-in-class action-oriented self-help programs, non-interactive manual-based programs, and other common interventions. Furthermore, these interventions continued to move pre-Action participants to abstinence even after the program ended. For a summary of smoking cessation clinical outcomes, see Velicer, Redding, Sun, & Prochaska, 2007 and Jordan, Evers, Spira, King & Lid, 2013.
==== Example for TTM application on smoke control ====
In smoking cessation treatment, the TTM focuses on each stage in order to monitor progress and achieve progression to the next stage.
In each stage, a patient may have multiple sources that could influence their behavior. These may include friends, books, and interactions with their healthcare providers. These factors could potentially influence how successful a patient may be in moving through the different stages, which stresses the importance of continuous monitoring and of efforts to maintain progress at each stage. The TTM helps guide the treatment process at each stage, and may assist the healthcare provider in making an optimal therapeutic decision.
=== Travel research ===
The use of TTM in travel behaviour interventions is rather novel. A number of cross-sectional studies investigated the individual constructs of TTM, e.g. stage of change, decisional balance and self-efficacy, with regard to transport mode choice. The cross-sectional studies identified both motivators and barriers at the different stages regarding biking, walking and public transport. The motivators identified were e.g. liking to bike/walk, avoiding congestion and improved fitness. Perceived barriers were e.g. personal fitness, time and the weather. This knowledge was used to design interventions that would address attitudes and misconceptions to encourage an increased use of bikes and walking. These interventions aim at changing people's travel behaviour towards more sustainable and more active transport modes. In health-related studies, TTM is used to help people walk or bike more instead of using the car. Most intervention studies aim to reduce car trips for the commute to achieve the minimum recommended physical activity levels of 30 minutes per day. Other intervention studies using TTM aim to encourage sustainable behaviour. By reducing single-occupancy motor vehicle trips and replacing them with so-called sustainable transport (public transport, car pooling, biking or walking), greenhouse gas emissions can be reduced considerably. A reduction in the number of cars on the roads also mitigates other problems such as congestion, traffic noise and traffic accidents. By combining health- and environment-related purposes, the message becomes stronger. Additionally, by emphasising personal health, physical activity or even direct economic impact, people see a direct result from their changed behaviour, whereas saving the environment is more general and its effects are not directly noticeable.
Different outcome measures were used to assess the effectiveness of the interventions. Health-centred intervention studies measured BMI, weight and waist circumference as well as general health. However, only one of three found a significant change in general health, while BMI and the other measures showed no effect. Measures that are associated with both health and sustainability were more common. Effects were reported as number of car trips, distance travelled, main mode share, etc. Results varied due to greatly differing approaches. In general, car use could be reduced by between 6% and 55%, while use of the alternative mode (walking, biking and/or public transport) increased by between 11% and 150%. While these results indicate a shift to the action or maintenance stage, some researchers instead investigated attitude shifts such as the willingness to change; attitudes towards using alternative modes improved by approximately 20% to 70%. Many of the intervention studies did not clearly differentiate between the five stages, but categorised participants into a pre-action and an action stage. This approach makes it difficult to assess the effects per stage. Also, interventions included different processes of change; in many cases these processes are not matched to the recommended stage. This highlights the need to develop a standardised approach for travel intervention design.
== Criticisms ==
In 2009, an article in the British Journal of Health Psychology called the TTM "arguably the dominant model of health behaviour change, having received unprecedented research attention, yet it has simultaneously attracted exceptional criticism", and said "that there is still value in the transtheoretical model but that the way in which it is researched needs urgently to be addressed". Depending on the field of application (e.g. smoking cessation, substance abuse, condom use, diabetes treatment, obesity and travel) somewhat different criticisms have been raised.
In a systematic review, published in 2003, of 23 randomized controlled trials, the authors found that "stage based interventions are no more effective than non-stage based interventions or no intervention in changing smoking behaviour". However, it was also mentioned that stage-based interventions are often used and implemented inadequately in practice. Thus, criticism is directed towards the use rather than the effectiveness of the model itself. A review of interventions targeting smoking cessation in pregnancy found that stage-matched interventions were more effective than non-matched interventions, one reason being the greater intensity of the stage-matched interventions. The use of stage-based interventions for smoking cessation in mental illness also proved to be effective. Further studies, e.g. a randomized controlled trial published in 2009, found no evidence that a TTM-based smoking cessation intervention was more effective than a control intervention not tailored to stage of change. The study claims that those not wanting to change (i.e. precontemplators) tend to be responsive to neither stage- nor non-stage-based interventions. Since stage-based interventions tend to be more intensive, they appear to be most effective at targeting contemplators and above rather than precontemplators. A 2010 systematic review of smoking cessation studies under the auspices of the Cochrane Collaboration found that "stage-based self-help interventions (expert systems and/or tailored materials) and individual counselling were neither more nor less effective than their non-stage-based equivalents". A 2014 Cochrane systematic review concluded that research on the use of TTM stages of change "in weight loss interventions is limited by risk of bias and imprecision, not allowing firm conclusions to be drawn".
A main criticism concerns the "arbitrary dividing lines" that are drawn between the stages. West argued that a more coherent and distinguishable definition of the stages is needed. In particular, the fact that the stages are bound to a specific time interval is perceived to be misleading. Additionally, the effectiveness of stage-based interventions differs depending on the behavior. A continuous version of the model has been proposed, in which each process is first increasingly used and then decreases in importance as smokers make progress along some latent dimension. This proposal suggests using the processes without reference to stages of change.
West claimed that the model "assumes that individuals typically make coherent and stable plans", when in fact they often do not. However, the model does not require that all people make a plan: for example, the SAMHSA document Enhancing Motivation for Change in Substance Use Disorder Treatment, which uses the TTM, also says: "Don't assume that all clients need a structured method to develop a change plan. Many people can make significant lifestyle changes and initiate recovery from SUDs without formal assistance".
Within research on prevention of pregnancy and sexually transmitted diseases, a systematic review from 2003 comes to the conclusion that "no strong conclusions" can be drawn about the effectiveness of interventions based on the transtheoretical model. Again this conclusion is reached due to the inconsistency of use and implementation of the model. This study also confirms that the better an intervention is matched to stage, the greater its effect in encouraging condom use.
Within the health research domain, a 2005 systematic review of 37 randomized controlled trials claims that "there was limited evidence for the effectiveness of stage-based interventions as a basis for behavior change". Studies which focused on increasing physical activity levels through active commuting, however, showed that stage-matched interventions tended to have slightly more effect than non-stage-matched interventions. Since many studies do not use all constructs of the TTM, additional research suggested that the effectiveness of interventions increases the better they are tailored to all core constructs of the TTM in addition to stage of change. In diabetes research the "existing data are insufficient for drawing conclusions on the benefits of the transtheoretical model" as related to dietary interventions. Again, studies with slightly different designs, e.g. using different processes, proved to be effective in predicting the stage transition of intention to exercise in relation to treating patients with diabetes.
TTM has generally found greater popularity in research on physical activity, due to the increasing problems associated with unhealthy diets and sedentary living, e.g. obesity and cardiovascular problems. A 2011 Cochrane systematic review found that there is little evidence to suggest that using the transtheoretical model stages of change (TTM SOC) method is effective in helping obese and overweight people lose weight. There were only five studies in the review, two of which were later dropped as not relevant because they did not measure weight. Earlier, in a 2009 paper, the TTM was considered to be useful in promoting physical activity. In that study, however, the algorithms and questionnaires that researchers used to assign people to stages of change lacked the standardisation needed to be compared empirically or validated.
Similar criticism regarding the standardisation as well as consistency in the use of TTM is also raised in a 2017 review on travel interventions. With regard to travel interventions only stages of change and sometimes decisional balance constructs are included. The processes used to build the intervention are rarely stage-matched and short cuts are taken by classifying participants in a pre-action stage, which summarises the precontemplation, contemplation and preparation stage, and an action/maintenance stage. More generally, TTM has been criticised within various domains due to the limitations in the research designs. For example, many studies supporting the model have been cross-sectional, but longitudinal study data would allow for stronger causal inferences. Another point of criticism is raised in a 2002 review, where the model's stages were characterized as "not mutually exclusive". Furthermore, there was "scant evidence of sequential movement through discrete stages". While research suggests that movement through the stages of change is not always linear, a study of smoking cessation conducted in 1996 demonstrated that the probability of forward stage movement is greater than the probability of backward stage movement. Due to the variations in use, implementation and type of research designs, data confirming TTM are ambiguous. More care has to be taken in using a sufficient amount of constructs, trustworthy measures, and longitudinal data.
== See also ==
Change management
Decision cycle
== Notes ==
== References ==
== Further reading ==
== External links ==
Pro-Change Behavior Systems, Inc., a company founded by James O. Prochaska. Its mission is to enhance the well-being of individuals and organizations through the scientific development and dissemination of Transtheoretical Model-based change management programs.
A cluster-randomised controlled trial is a type of randomised controlled trial in which groups of subjects (as opposed to individual subjects) are randomised. Cluster randomised controlled trials are also known as cluster-randomised trials, group-randomised trials, and place-randomized trials. Cluster-randomised controlled trials are used when there is a strong reason for randomising treatment and control groups over randomising participants.
== Prevalence ==
A 2004 bibliometric study documented an increasing number of publications in the medical literature on cluster-randomised controlled trials since the 1980s.
== Advantages ==
Advantages of cluster-randomised controlled trials over individually randomised controlled trials include:
The ability to study interventions that cannot be directed toward selected individuals (e.g., a radio show about lifestyle changes) and the ability to control for "contamination" across individuals (e.g., one individual's changing behaviors may influence another individual to do so).
Reduced cost in running a survey. For example, when wanting to survey households, it could often be cheaper to choose street blocks and survey all the houses there in order to reduce the cost of traveling for the people conducting the survey.
Sometimes due to data availability, it is only possible to do cluster sampling. For example, if wanting to survey households, it may be that there is no census list of houses (due to privacy restrictions of the Bureau of Statistics of the country). However, there may be a public record of street blocks and their addresses, and these can be used for creating the sampling frame.
== Disadvantages ==
Disadvantages compared with individually randomised controlled trials include greater complexity in design and analysis, and a requirement for more participants to obtain the same statistical power. Use of this type of trial also means that the experiences of individuals within the same group are likely similar, leading to correlated results. This correlation is measured by the intraclass correlation, also known as the intracluster correlation. Though this correlation is a known component of cluster-randomised controlled trials, a large proportion of the trials fail to account for it. Failing to account for intraclass correlation in the design of a trial reduces its statistical power, and failing to account for it in the analysis inflates the rate of Type I errors.
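The extra participants required under cluster randomisation are commonly quantified by the design effect, 1 + (m − 1)ρ, where m is the (average) cluster size and ρ the intraclass correlation. A minimal sketch, with illustrative numbers not taken from any particular trial:

```python
def design_effect(cluster_size, icc):
    """Design effect for a cluster-randomised trial: the factor by which
    an individually-randomised sample size must be inflated to keep the
    same statistical power."""
    return 1 + (cluster_size - 1) * icc

def required_n(n_individual, cluster_size, icc):
    """Total participants needed under cluster randomisation."""
    return n_individual * design_effect(cluster_size, icc)

# Example: 400 participants suffice under individual randomisation;
# with clusters of 20 and an ICC of 0.05, nearly twice as many are needed.
print(round(design_effect(20, 0.05), 2))   # 1.95
print(round(required_n(400, 20, 0.05)))    # 780
```

Even a small intraclass correlation matters: with clusters of one, the design effect reduces to 1 and the trial behaves like an individually randomised one.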
== See also ==
Randomized controlled trial
Statistics
Zelen's design
== References ==
== Further reading ==
Boruch RF. Place randomized trials: experimental tests of public policy. Thousand Oaks, CA: Sage Publications, 2005. ISBN 1-4129-2582-7
M. J. Campbell and S. J. Walters, 2014: How to Design, Analyse, and Report Cluster Randomised Trials. Wiley. ISBN 978-1-119-99202-8
A. Donner and N. Klar, 2000: Design and Analysis of Cluster Randomization Trials in Health Research. Arnold.
S. Eldridge and S. Kerry, 2012: A Practical Guide to Cluster Randomised Trials in Health Services Research. Wiley.
R. J. Hayes and L. H. Moulton, 2017: Cluster Randomised Trials. Second edition. Chapman & Hall.
Mosteller F, Boruch RF. Evidence matters: randomized trials in education research. Washington, DC: Brookings Institution Press, 2002. ISBN 0-8157-0204-3
Murray DM. Design and analysis of group-randomized trials. New York: Oxford University Press, 1998. ISBN 0-19-512036-1
Proportional hazards models are a class of survival models in statistics. Survival models relate the time that passes, before some event occurs, to one or more covariates that may be associated with that quantity of time. In a proportional hazards model, the unique effect of a unit increase in a covariate is multiplicative with respect to the hazard rate. The hazard rate at time t is the probability per short time dt that an event will occur between t and t + dt, given that up to time t no event has occurred yet.
For example, taking a drug may halve one's hazard rate for a stroke occurring, or changing the material from which a manufactured component is constructed may double its hazard rate for failure. Other types of survival models, such as accelerated failure time models, do not exhibit proportional hazards. The accelerated failure time model describes a situation where the biological or mechanical life history of an event is accelerated (or decelerated).
== Background ==
Survival models can be viewed as consisting of two parts: the underlying baseline hazard function, often denoted λ0(t), describing how the risk of event per time unit changes over time at baseline levels of covariates; and the effect parameters, describing how the hazard varies in response to explanatory covariates. A typical medical example would include covariates such as treatment assignment, as well as patient characteristics such as age at start of study, gender, and the presence of other diseases at start of study, in order to reduce variability and/or control for confounding.
The proportional hazards condition states that covariates are multiplicatively related to the hazard. In the simplest case of stationary coefficients, for example, a treatment with a drug may, say, halve a subject's hazard at any given time t, while the baseline hazard may vary. Note, however, that this does not double the lifetime of the subject; the precise effect of the covariates on the lifetime depends on the type of λ0(t). The covariate is not restricted to binary predictors; in the case of a continuous covariate x, it is typically assumed that the hazard responds exponentially: each unit increase in x results in proportional scaling of the hazard.
== The Cox model ==
=== Introduction ===
Sir David Cox observed that if the proportional hazards assumption holds (or is assumed to hold) then it is possible to estimate the effect parameter(s), denoted βi below, without any consideration of the full hazard function. This approach to survival data is called application of the Cox proportional hazards model, sometimes abbreviated to Cox model or to proportional hazards model. However, Cox also noted that biological interpretation of the proportional hazards assumption can be quite tricky.
Let Xi = (Xi1, … , Xip) be the realized values of the p covariates for subject i. The hazard function for the Cox proportional hazards model has the form
{\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\end{aligned}}}
This expression gives the hazard function at time t for subject i with covariate vector (explanatory variables) Xi. Note that between subjects, the baseline hazard λ0(t) is identical (has no dependency on i). The only difference between subjects' hazards comes from the scaling factor exp(Xi ⋅ β).
=== Why it is called "proportional" ===
To start, suppose we only have a single covariate, x, and therefore a single coefficient, β1. Our model looks like:
{\displaystyle \lambda (t|x)=\lambda _{0}(t)\exp(\beta _{1}x)}
Consider the effect of increasing x by 1:
{\displaystyle {\begin{aligned}\lambda (t|x+1)&=\lambda _{0}(t)\exp(\beta _{1}(x+1))\\&=\lambda _{0}(t)\exp(\beta _{1}x+\beta _{1})\\&={\Bigl (}\lambda _{0}(t)\exp(\beta _{1}x){\Bigr )}\exp(\beta _{1})\\&=\lambda (t|x)\exp(\beta _{1})\end{aligned}}}
We can see that increasing a covariate by 1 scales the original hazard by the constant exp(β1). Rearranging things slightly, we see that:
{\displaystyle {\frac {\lambda (t|x+1)}{\lambda (t|x)}}=\exp(\beta _{1})}
The right-hand side is constant over time (no term has a t in it). This relationship, x/y = constant, is called a proportional relationship.
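This proportionality is easy to verify numerically: whatever the baseline hazard, the ratio λ(t|x+1)/λ(t|x) equals exp(β1) at every time point. A small sketch, with a made-up baseline hazard chosen purely for illustration:

```python
import math

beta1 = 0.5

def baseline_hazard(t):
    # Arbitrary time-varying baseline hazard (illustrative only).
    return 0.1 + 0.02 * t

def hazard(t, x):
    # Cox model with a single covariate: lambda(t|x) = lambda0(t) * exp(beta1 * x)
    return baseline_hazard(t) * math.exp(beta1 * x)

# The hazard ratio for a one-unit increase in x is the same at every t:
for t in [0.5, 1.0, 5.0, 20.0]:
    ratio = hazard(t, x=3) / hazard(t, x=2)
    assert abs(ratio - math.exp(beta1)) < 1e-12
print(round(math.exp(beta1), 4))  # 1.6487
```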
More generally, consider two subjects, i and j, with covariates Xi and Xj respectively. Consider the ratio of their hazards:
{\displaystyle {\begin{aligned}{\frac {\lambda (t|X_{i})}{\lambda (t|X_{j})}}&={\frac {\lambda _{0}(t)\exp(X_{i}\cdot \beta )}{\lambda _{0}(t)\exp(X_{j}\cdot \beta )}}\\&={\frac {{\cancel {\lambda _{0}(t)}}\exp(X_{i}\cdot \beta )}{{\cancel {\lambda _{0}(t)}}\exp(X_{j}\cdot \beta )}}\\&=\exp((X_{i}-X_{j})\cdot \beta )\end{aligned}}}
The right-hand side isn't dependent on time, as the only time-dependent factor, λ0(t), was cancelled out. Thus the ratio of the hazards of two subjects is a constant, i.e. the hazards are proportional.
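For covariate vectors the same cancellation applies, with exp((Xi − Xj) ⋅ β) equal to the ratio of the two subjects' scaling factors. A sketch with made-up coefficients and covariates:

```python
import math

beta = [0.5, -1.0]   # coefficient vector (made-up values)
X_i  = [1.0, 2.0]    # covariates of subject i
X_j  = [0.0, 2.5]    # covariates of subject j

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Hazard ratio between the two subjects; lambda_0(t) never enters.
hazard_ratio = math.exp(dot([a - b for a, b in zip(X_i, X_j)], beta))

# Equivalent form: the ratio of the subjects' individual scaling factors.
assert abs(hazard_ratio
           - math.exp(dot(X_i, beta)) / math.exp(dot(X_j, beta))) < 1e-12
```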
=== Absence of an intercept term ===
Often there is an intercept term (also called a constant term or bias term) used in regression models. The Cox model lacks one because the baseline hazard, λ0(t), takes its place. Let's see what would happen if we did include an intercept term anyway, denoted β0:
{\displaystyle {\begin{aligned}\lambda (t|X_{i})&=\lambda _{0}(t)\exp(\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}+\beta _{0})\\&=\lambda _{0}(t)\exp(X_{i}\cdot \beta )\exp(\beta _{0})\\&=\left(\exp(\beta _{0})\lambda _{0}(t)\right)\exp(X_{i}\cdot \beta )\\&=\lambda _{0}^{*}(t)\exp(X_{i}\cdot \beta )\end{aligned}}}
where we've redefined exp(β0)λ0(t) to be a new baseline hazard, λ0*(t). Thus, the baseline hazard incorporates all parts of the hazard that are not dependent on the subjects' covariates, which includes any intercept term (which is constant for all subjects, by definition). In other words, adding an intercept term would make the model unidentifiable.
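The unidentifiability can be checked numerically: a model with intercept β0 and baseline λ0(t) produces exactly the same hazards as one with no intercept and rescaled baseline exp(β0)λ0(t). A sketch with made-up values:

```python
import math

beta0, beta1 = 0.7, 0.3

def lam0(t):
    # Arbitrary baseline hazard (illustrative only).
    return 0.05 * (1 + t)

def hazard_with_intercept(t, x):
    return lam0(t) * math.exp(beta1 * x + beta0)

def hazard_absorbed(t, x):
    # Intercept folded into a rescaled baseline lam0*(t) = exp(beta0) * lam0(t).
    lam0_star = math.exp(beta0) * lam0(t)
    return lam0_star * math.exp(beta1 * x)

# The two parameterizations are indistinguishable at every (t, x):
for t in [0.1, 1.0, 10.0]:
    for x in [-1.0, 0.0, 2.5]:
        assert abs(hazard_with_intercept(t, x) - hazard_absorbed(t, x)) < 1e-12
```

Since no data could ever tell the two parameterizations apart, β0 cannot be estimated, which is why the Cox model omits it.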
=== Likelihood for unique times ===
The Cox partial likelihood, shown below, is obtained by using Breslow's estimate of the baseline hazard function, plugging it into the full likelihood and then observing that the result is a product of two factors. The first factor is the partial likelihood shown below, in which the baseline hazard has "canceled out". It is simply the probability for subjects to have experienced events in the order that they actually have occurred, given the set of times of occurrences and given the subjects' covariates.
The second factor is free of the regression coefficients and depends on the data only through the censoring pattern. The effect of covariates estimated by any proportional hazards model can thus be reported as hazard ratios.
To calculate the partial likelihood, the probability for the order of events, let us index the M samples for which events have already occurred by increasing time of occurrence, Y1 < Y2 < ... < YM. Covariates of all other subjects for which no event has occurred get indices M+1,.., N. The partial likelihood can be factorized into one factor for each event that has occurred. The i 'th factor is the probability that out of all subjects (i,i+1,..., N) for which no event has occurred before time Yi, the one that actually occurred at time Yi is the event for subject i:
{\displaystyle L_{i}(\beta )={\frac {\lambda (Y_{i}\mid X_{i})}{\sum _{j=i}^{N}\lambda (Y_{i}\mid X_{j})}}={\frac {\lambda _{0}(Y_{i})\theta _{i}}{\sum _{j=i}^{N}\lambda _{0}(Y_{i})\theta _{j}}}={\frac {\theta _{i}}{\sum _{j=i}^{N}\theta _{j}}},}
where θj = exp(Xj ⋅ β) and the summation is over the set of subjects j where the event has not occurred before time Yi (including subject i itself). Obviously 0 < Li(β) ≤ 1.
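Each factor depends only on the linear predictors of the subjects still at risk. A sketch computing the factors Li(β) for made-up data (subjects already sorted by event time, no censoring, no ties):

```python
import math

# Made-up data: one covariate per subject, sorted by event time Y1 < Y2 < Y3.
X = [1.0, 0.0, 2.0]
beta = 0.5

theta = [math.exp(beta * x) for x in X]

# i-th factor: theta_i over the sum of theta_j for the risk set j >= i.
factors = [theta[i] / sum(theta[i:]) for i in range(len(theta))]

for L in factors:
    assert 0 < L <= 1          # as noted in the text
assert factors[-1] == 1.0      # the last subject is alone in its risk set
print([round(L, 4) for L in factors])
```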
Treating the subjects as statistically independent of each other, the partial likelihood for the order of events is
{\displaystyle L(\beta )=\prod _{i=1}^{M}L_{i}(\beta )=\prod _{i:C_{i}=1}L_{i}(\beta ),}
where the subjects for which an event has occurred are indicated by Ci = 1 and all others by Ci = 0. The corresponding log partial likelihood is
{\displaystyle \ell (\beta )=\sum _{i:C_{i}=1}\left(X_{i}\cdot \beta -\log \sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right),}
where we have rewritten the sum over j = i, …, N from the indexing introduced above in the more general form of a sum over the risk set {j : Yj ≥ Yi}.
Crucially, the effect of the covariates can be estimated without the need to specify the hazard function λ0(t) over time. The partial likelihood can be maximized over β to produce maximum partial likelihood estimates of the model parameters.
The partial score function is
{\displaystyle \ell ^{\prime }(\beta )=\sum _{i:C_{i}=1}\left(X_{i}-{\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}\right),}
and the Hessian matrix of the partial log likelihood is
{\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{i:C_{i}=1}\left({\frac {\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}X_{j}^{\prime }}{\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}}}-{\frac {\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}\right]\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}X_{j}^{\prime }\right]}{\left[\sum _{j:Y_{j}\geq Y_{i}}\theta _{j}\right]^{2}}}\right).}
Using this score function and Hessian matrix, the partial likelihood can be maximized using the Newton-Raphson algorithm. The inverse of the Hessian matrix, evaluated at the estimate of β, can be used as an approximate variance-covariance matrix for the estimate, and used to produce approximate standard errors for the regression coefficients.
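For a single covariate the score and Hessian above reduce to scalars, and the Newton-Raphson update fits in a few lines. A sketch on made-up, untied data, intended to illustrate the algebra rather than replace a proper survival library:

```python
import math

# Made-up survival records: (time, event indicator, covariate), no ties.
data = sorted([(2.0, 1, 1.0), (3.5, 1, 0.0), (4.0, 0, 1.0),
               (5.1, 1, 2.0), (6.0, 0, 0.0)])

def score_and_hessian(beta):
    """Partial score l'(beta) and Hessian l''(beta) for one covariate."""
    score, hess = 0.0, 0.0
    for (yi, ci, xi) in data:
        if ci != 1:
            continue                      # censored subjects contribute no factor
        risk = [(x, math.exp(beta * x)) for (y, c, x) in data if y >= yi]
        s0 = sum(th for _, th in risk)    # sum of theta_j over the risk set
        s1 = sum(x * th for x, th in risk)
        s2 = sum(x * x * th for x, th in risk)
        score += xi - s1 / s0
        hess -= s2 / s0 - (s1 / s0) ** 2  # minus the risk-set covariate variance
    return score, hess

# Newton-Raphson: beta <- beta - l'(beta) / l''(beta)
beta = 0.0
for _ in range(25):
    g, h = score_and_hessian(beta)
    beta -= g / h

g, _ = score_and_hessian(beta)
assert abs(g) < 1e-8   # score vanishes at the maximum partial likelihood estimate
```

The Hessian is a (negative) sum of variances, so the log partial likelihood is concave and the iteration converges quickly from β = 0.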
=== Likelihood when there exist tied times ===
Several approaches have been proposed to handle situations in which there are ties in the time data. Breslow's method describes the approach in which the procedure described above is used unmodified, even when ties are present. An alternative approach that is considered to give better results is Efron's method. Let tj denote the unique times, let Hj denote the set of indices i such that Yi = tj and Ci = 1, and let mj = |Hj|. Efron's approach maximizes the following partial likelihood.
{\displaystyle L(\beta )=\prod _{j}{\frac {\prod _{i\in H_{j}}\theta _{i}}{\prod _{\ell =0}^{m_{j}-1}\left[\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right]}}.}
The corresponding log partial likelihood is
{\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right),}
the score function is
{\displaystyle \ell ^{\prime }(\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}-\sum _{\ell =0}^{m_{j}-1}{\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}}{\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}}\right),}
and the Hessian matrix is
{\displaystyle \ell ^{\prime \prime }(\beta )=-\sum _{j}\sum _{\ell =0}^{m_{j}-1}\left({\frac {\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}X_{i}^{\prime }-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}X_{i}^{\prime }}{\phi _{j,\ell ,m_{j}}}}-{\frac {Z_{j,\ell ,m_{j}}Z_{j,\ell ,m_{j}}^{\prime }}{\phi _{j,\ell ,m_{j}}^{2}}}\right),}
where
{\displaystyle \phi _{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}}
{\displaystyle Z_{j,\ell ,m_{j}}=\sum _{i:Y_{i}\geq t_{j}}\theta _{i}X_{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}X_{i}.}
Note that when Hj is empty (all observations with time tj are censored), the summands in these expressions are treated as zero.
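Efron's factor for one event time can be computed directly from the formula above. A sketch with made-up data in which two events are tied at t = 3.0 (when a time has a single event, the factor reduces to the Breslow/unique-times expression):

```python
import math

beta = 0.4
# Made-up records (time, event indicator, covariate); two events tied at t = 3.0.
records = [(3.0, 1, 1.0), (3.0, 1, 0.0), (4.0, 0, 2.0), (5.0, 1, 1.5)]

def theta(x):
    return math.exp(beta * x)

def efron_factor(tj):
    """Efron's contribution to the partial likelihood at event time tj."""
    H = [x for (y, c, x) in records if y == tj and c == 1]      # tied events
    risk = [theta(x) for (y, c, x) in records if y >= tj]       # risk set
    tied = [theta(x) for x in H]
    m = len(H)
    num = math.prod(tied)
    # Denominator: product over l = 0..m-1 of (risk-set sum minus l/m of the tied sum).
    den = math.prod(sum(risk) - (l / m) * sum(tied) for l in range(m))
    return num / den

assert 0 < efron_factor(3.0) < 1
assert efron_factor(5.0) == 1.0   # untied, sole subject at risk: factor is 1
```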
=== Examples ===
Below are some worked examples of the Cox model in practice.
==== A single binary covariate ====
Suppose the endpoint we are interested in is patient survival during a 5-year observation period after a surgery. Patients can die within the 5-year period, and we record when they died, or patients can live past 5 years, and we only record that they lived past 5 years. The surgery was performed at one of two hospitals, A or B, and we would like to know if the hospital location is associated with 5-year survival. Specifically, we would like to know the relative increase (or decrease) in hazard from a surgery performed at hospital A compared to hospital B. Provided is some (fake) data, where each row represents a patient: T is how long the patient was observed for before death or 5 years (measured in months), and C denotes if the patient died in the 5-year period. We have encoded the hospital as a binary variable denoted X: 1 if from hospital A, 0 from hospital B.
Our single-covariate Cox proportional hazards model looks like the following, with β1 representing the hospital's effect, and i indexing each patient:
{\displaystyle \overbrace {\lambda (t|X_{i})} ^{\text{hazard for i}}=\underbrace {\lambda _{0}(t)} _{{\text{baseline}} \atop {\text{hazard}}}\cdot \overbrace {\exp(\beta _{1}X_{i})} ^{\text{scaling factor for i}}}
Using statistical software, we can estimate β1 to be 2.12. The hazard ratio is the exponential of this value, exp(β1) = exp(2.12). To see why, consider the ratio of hazards, specifically:
{\displaystyle {\frac {\lambda (t|X=1)}{\lambda (t|X=0)}}={\frac {{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 1)}{{\cancel {\lambda _{0}(t)}}\exp(\beta _{1}\cdot 0)}}=\exp(\beta _{1})}
Thus, the hazard ratio of hospital A to hospital B is exp(2.12) ≈ 8.32. Putting aside statistical significance for a moment, we can make a statement saying that patients in hospital A are associated with an 8.3× higher risk of death occurring in any short period of time compared to patients in hospital B.
There are important caveats to mention about the interpretation:
An 8.3× higher risk of death does not mean that 8.3× more patients will die in hospital A: survival analysis examines how quickly events occur, not simply whether they occur.
More specifically, "risk of death" is a measure of a rate. A rate has units, like meters per second. However, a relative rate does not: a bicycle can go two times faster than another bicycle (the reference bicycle), without specifying any units. Likewise, the risk of death (comparable to the speed of a bike) in hospital A is 8.3 times higher (faster) than the risk of death in hospital B (the reference group).
the inverse quantity, 1/8.32 = exp(−2.12) ≈ 0.12, is the hazard ratio of hospital B relative to hospital A.
We haven't made any inferences about probabilities of survival between the hospitals. This is because we would need an estimate of the baseline hazard rate, λ0(t), as well as our β1 estimate. However, standard estimation of the Cox proportional hazards model does not directly estimate the baseline hazard rate.
Because we have ignored the only time-varying component of the model, the baseline hazard rate, our estimate is timescale-invariant. For example, if we had measured time in years instead of months, we would get the same estimate.
It is tempting to say that the hospital caused the difference in hazards between the two groups, but since our study is not causal (that is, we do not know how the data was generated), we stick with terminology like "associated".
==== A single continuous covariate ====
To demonstrate a less traditional use case of survival analysis, the next example will be an economics question: what is the relationship between a company's price-to-earnings ratio (P/E) on their first IPO anniversary and their future survival? More specifically, if we consider a company's "birth event" to be their first IPO anniversary, and any bankruptcy, sale, going private, etc. as a "death" event for the company, we'd like to know the influence of the companies' P/E ratio at their "birth" (first IPO anniversary) on their survival.
Provided is a (fake) dataset with survival data from 12 companies: T represents the number of days between first IPO anniversary and death (or an end date of 2022-01-01, if it did not die). C represents whether the company died before 2022-01-01. P/E represents the company's price-to-earnings ratio at its 1st IPO anniversary.
Unlike the previous example where there was a binary variable, this dataset has a continuous variable, P/E; however, the model looks similar:
{\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(\beta _{1}P_{i})}
where {\displaystyle P_{i}} represents a company's P/E ratio. Running this dataset through a Cox model produces an estimate of the value of the unknown {\displaystyle \beta _{1}}, which is -0.34. Therefore, an estimate of the entire hazard is:
{\displaystyle \lambda (t|P_{i})=\lambda _{0}(t)\cdot \exp(-0.34P_{i})}
Since the baseline hazard, {\displaystyle \lambda _{0}(t)}, was not estimated, the entire hazard cannot be calculated. However, consider the ratio of the hazards of companies i and j:
{\displaystyle {\begin{aligned}{\frac {\lambda (t|P_{i})}{\lambda (t|P_{j})}}&={\frac {{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{i})}{{\cancel {\lambda _{0}(t)}}\cdot \exp(-0.34P_{j})}}\\&=\exp(-0.34(P_{i}-P_{j}))\end{aligned}}}
All terms on the right are known, so calculating the ratio of hazards between companies is possible. Since there is no time-dependent term on the right (all terms are constant), the hazards are proportional to each other. For example, the hazard ratio of company 5 to company 2 is {\displaystyle \exp(-0.34(6.3-3.0))=0.33}. This means that, within the interval of study, company 5's risk of "death" is 0.33 ≈ 1/3 as large as company 2's risk of death.
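The same cancellation can be computed for any pair of P/E values. A small helper (the function name is illustrative, not from any library), using the estimated coefficient -0.34 from the example:

```python
import math

BETA_1 = -0.34  # estimated coefficient from the example above

def hazard_ratio(p_i: float, p_j: float, beta: float = BETA_1) -> float:
    """Ratio of company i's hazard to company j's hazard; the
    baseline hazard lambda_0(t) cancels out of the ratio."""
    return math.exp(beta * (p_i - p_j))

# Company 5 (P/E = 6.3) versus company 2 (P/E = 3.0):
print(round(hazard_ratio(6.3, 3.0), 2))  # about 0.33

# Two companies whose P/E differ by exactly one unit recover exp(beta_1):
print(round(hazard_ratio(4.0, 3.0), 2))  # about 0.71
```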
There are important caveats to mention about the interpretation:
The hazard ratio is the quantity {\displaystyle \exp(\beta _{1})}, which is {\displaystyle \exp(-0.34)=0.71} in the above example. From the last calculation above, an interpretation of this is as the ratio of hazards between two "subjects" whose variables differ by one unit: if {\displaystyle P_{i}=P_{j}+1}, then {\displaystyle \exp(\beta _{1}(P_{i}-P_{j}))=\exp(\beta _{1}(1))}. The choice of "differ by one unit" is a convenience, as it communicates precisely the value of {\displaystyle \beta _{1}}.
The baseline hazard can be represented when the scaling factor is 1, i.e. {\displaystyle P=0}:
{\displaystyle \lambda (t|P_{i}=0)=\lambda _{0}(t)\cdot \exp(-0.34\cdot 0)=\lambda _{0}(t)}
Can we interpret the baseline hazard as the hazard of a "baseline" company whose P/E happens to be 0? This interpretation of the baseline hazard as "hazard of a baseline subject" is imperfect, as the covariate being 0 is impossible in this application: a P/E of 0 is meaningless (it means the company's stock price is 0, i.e., they are "dead"). A more appropriate interpretation would be "the hazard when all variables are nil".
It is tempting to interpret a value like {\displaystyle \exp(\beta _{1}P_{i})} as representing the hazard of a company. However, consider what this is actually representing:
{\displaystyle \exp(\beta _{1}P_{i})=\exp(\beta _{1}(P_{i}-0))={\frac {\exp(\beta _{1}P_{i})}{\exp(\beta _{1}0)}}={\frac {\lambda (t|P_{i})}{\lambda (t|0)}}}
There is implicitly a ratio of hazards here, comparing company i's hazard to an imaginary baseline company with 0 P/E. However, as explained above, a P/E of 0 is impossible in this application, so {\displaystyle \exp(\beta _{1}P_{i})} is meaningless in this example. Ratios between plausible hazards are meaningful, however.
== Time-varying predictors and coefficients ==
Extensions to time dependent variables, time dependent strata, and multiple events per subject, can be incorporated by the counting process formulation of Andersen and Gill. One example of the use of hazard models with time-varying regressors is estimating the effect of unemployment insurance on unemployment spells.
In addition to allowing time-varying covariates (i.e., predictors), the Cox model may be generalized to time-varying coefficients as well. That is, the proportional effect of a treatment may vary with time; e.g. a drug may be very effective if administered within one month of morbidity, and become less effective as time goes on. The hypothesis of no change with time (stationarity) of the coefficient may then be tested. Details and software (R package) are available in Martinussen and Scheike (2006).
In this context, it could also be mentioned that it is theoretically possible to specify the effect of covariates by using additive hazards, i.e. specifying
{\displaystyle \lambda (t|X_{i})=\lambda _{0}(t)+\beta _{1}X_{i1}+\cdots +\beta _{p}X_{ip}=\lambda _{0}(t)+X_{i}\cdot \beta .}
If such additive hazards models are used in situations where (log-)likelihood maximization is the objective, care must be taken to restrict {\displaystyle \lambda (t\mid X_{i})} to non-negative values. Perhaps as a result of this complication, such models are seldom seen. If the objective is instead least squares, the non-negativity restriction is not strictly required.
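As a sketch of why the non-negativity restriction matters, consider evaluating an additive hazard at parameter values that drive it below zero. The baseline function and the numbers here are illustrative assumptions, not from any fitted model:

```python
def additive_hazard(t, x, beta, baseline):
    """Evaluate the additive-hazards form lambda_0(t) + x . beta.

    Unlike the multiplicative Cox form, nothing guarantees the result
    is non-negative, so a likelihood-based fit must constrain the
    parameter space (here we merely flag the problem).
    """
    value = baseline(t) + sum(b * xi for b, xi in zip(beta, x))
    if value < 0:
        raise ValueError(f"hazard is negative ({value:.3f}); "
                         "restrict the parameter space")
    return value

baseline = lambda t: 0.05  # illustrative constant baseline hazard

print(additive_hazard(1.0, [2.0], [0.10], baseline))   # 0.05 + 0.2 = 0.25
try:
    additive_hazard(1.0, [2.0], [-0.10], baseline)     # 0.05 - 0.2 < 0
except ValueError as err:
    print("invalid:", err)
```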
== Specifying the baseline hazard function ==
The Cox model may be specialized if a reason exists to assume that the baseline hazard follows a particular form. In this case, the baseline hazard {\displaystyle \lambda _{0}(t)} is replaced by a given function. For example, assuming the hazard function to be the Weibull hazard function gives the Weibull proportional hazards model.
Incidentally, using the Weibull baseline hazard is the only circumstance under which the model satisfies both the proportional hazards and accelerated failure time models.
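With a Weibull baseline, the full hazard is available in closed form while hazard ratios remain time-independent. A sketch with illustrative shape and scale parameters (not from any fitted model):

```python
import math

def weibull_ph_hazard(t, x_beta, shape=1.5, scale=10.0):
    """Weibull proportional hazards: the Weibull baseline hazard
    (shape/scale) * (t/scale)**(shape-1), scaled by exp(x . beta)."""
    baseline = (shape / scale) * (t / scale) ** (shape - 1)
    return baseline * math.exp(x_beta)

# The hazard ratio between two covariate values is exp(difference),
# independent of t, because the Weibull baseline cancels:
r1 = weibull_ph_hazard(2.0, 0.7) / weibull_ph_hazard(2.0, 0.0)
r2 = weibull_ph_hazard(9.0, 0.7) / weibull_ph_hazard(9.0, 0.0)
print(round(r1, 4), round(r2, 4))  # both equal exp(0.7)
```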
The generic term parametric proportional hazards models can be used to describe proportional hazards models in which the hazard function is specified. The Cox proportional hazards model is sometimes called a semiparametric model by contrast.
Some authors use the term Cox proportional hazards model even when specifying the underlying hazard function, to acknowledge the debt of the entire field to David Cox.
The term Cox regression model (omitting proportional hazards) is sometimes used to describe the extension of the Cox model to include time-dependent factors. However, this usage is potentially ambiguous since the Cox proportional hazards model can itself be described as a regression model.
== Relationship to Poisson models ==
There is a relationship between proportional hazards models and Poisson regression models which is sometimes used to fit approximate proportional hazards models in software for Poisson regression. The usual reason for doing this is that calculation is much quicker. This was more important in the days of slower computers but can still be useful for particularly large data sets or complex problems. Laird and Olivier (1981) provide the mathematical details. They note, "we do not assume [the Poisson model] is true, but simply use it as a device for deriving the likelihood." McCullagh and Nelder's book on generalized linear models has a chapter on converting proportional hazards models to generalized linear models.
== Under high-dimensional setup ==
In the high-dimensional setting, when the number of covariates p is large compared to the sample size n, the LASSO method is one of the classical model-selection strategies. Tibshirani (1997) proposed a Lasso procedure for the proportional hazards regression parameter. The Lasso estimator of the regression parameter β is defined as the minimizer of the opposite of the Cox partial log-likelihood under an L1-norm type constraint:
{\displaystyle \ell (\beta )=\sum _{j}\left(\sum _{i\in H_{j}}X_{i}\cdot \beta -\sum _{\ell =0}^{m_{j}-1}\log \left(\sum _{i:Y_{i}\geq t_{j}}\theta _{i}-{\frac {\ell }{m_{j}}}\sum _{i\in H_{j}}\theta _{i}\right)\right)+\lambda \|\beta \|_{1},}
There has been theoretical progress on this topic recently.
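In the no-ties case the objective above simplifies (each event's risk-set term uses all subjects still at risk). The following is an illustrative pure-Python evaluation of the L1-penalized negative partial log-likelihood, not the estimator itself; minimization would still require an optimizer, and the tiny dataset is invented:

```python
import math

def penalized_neg_partial_loglik(beta, times, events, X, lam):
    """Negative Cox partial log-likelihood plus an L1 penalty,
    assuming no tied event times (Breslow form)."""
    # Relative risk theta_i = exp(X_i . beta) for each subject.
    theta = [math.exp(sum(b * x for b, x in zip(beta, row))) for row in X]
    loglik = 0.0
    for j, (t_j, d_j) in enumerate(zip(times, events)):
        if not d_j:  # censored observations contribute no event term
            continue
        # Risk set: subjects still under observation at time t_j.
        risk = sum(theta[i] for i in range(len(times)) if times[i] >= t_j)
        loglik += math.log(theta[j]) - math.log(risk)
    return -loglik + lam * sum(abs(b) for b in beta)

# Tiny illustrative dataset: 3 subjects, 1 covariate, no censoring.
times = [2.0, 3.0, 5.0]
events = [1, 1, 1]
X = [[0.0], [1.0], [0.5]]
print(penalized_neg_partial_loglik([0.3], times, events, X, lam=0.1))
```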
== Software implementations ==
Mathematica: CoxModelFit function.
R: coxph() function, located in the survival package.
SAS: phreg procedure
Stata: stcox command
Python: CoxPHFitter located in the lifelines library. phreg in the statsmodels library.
SPSS: Available under Cox Regression.
MATLAB: fitcox or coxphfit function
Julia: Available in the Survival.jl library.
JMP: Available in Fit Proportional Hazards platform.
Prism: Available in Survival Analyses and Multiple Variable Analyses
== See also ==
Accelerated failure time model
One in ten rule
Weibull distribution
Hypertabastic distribution
== Notes ==
== References == | Wikipedia/Cox_proportional_hazards_model |
The Bachelor of Science in Public Health (BSPH) (or Bachelor of Public Health) is an undergraduate degree that prepares students to pursue careers in the public, private, or non-profit sector in areas such as public health, environmental health, health administration, epidemiology, nutrition, biostatistics, or health policy and planning. Postbaccalaureate training is available in public health, health administration, public affairs, and related areas.
The University of California at Irvine, Program in Public Health, Department of Population Health and Disease Prevention, has the largest enrollment of undergraduate majors in Public Health, with about 1,500 students including ~1,000 in the Bachelor of Science in Public Health Sciences, and another ~500 students in the Bachelor of Arts in Public Health Policy (2014). UC Irvine also offers a minor in Public Health for students of other majors.
The Council on Education for Public Health includes undergraduate public health degrees in the accreditation review of public health programs and schools.
== See also ==
Master of Health Administration
Master of Public Health
Upsilon Phi Delta
== External links ==
Commission on the Accreditation of Healthcare Management Education (CAHME)
A list of CAHME-accredited programs by name
The Association of University Programs in Health Administration (AUPHA)
Upsilon Phi Delta
Diagnosis (pl.: diagnoses) is the identification of the nature and cause of a certain phenomenon. Diagnosis is used in many different disciplines, with variations in the use of logic, analytics, and experience, to determine "cause and effect". In systems engineering and computer science, it is typically used to determine the causes of symptoms, mitigations, and solutions.
== Computer science and networking ==
Bayesian network
Complex event processing
Diagnosis (artificial intelligence)
Event correlation
Fault management
Fault tree analysis
Grey problem
RPR problem diagnosis
Remote diagnostics
Root cause analysis
Troubleshooting
Unified Diagnostic Services
== Mathematics and logic ==
Bayesian probability
Hickam's dictum
Occam's razor
Regression diagnostics
Sutton's law
== Medicine ==
Medical diagnosis
Molecular diagnostics
=== Methods ===
CDR computerized assessment system
Computer-aided diagnosis
Differential diagnosis
Retrospective diagnosis
=== Tools ===
DELTA (taxonomy)
DXplain
List of diagnostic classification and rating scales used in psychiatry
== Organizational development ==
Organizational diagnostics
== Systems engineering ==
Five whys
Eight disciplines problem solving
Fault detection and isolation
Problem solving
== References ==
== External links ==
The dictionary definition of diagnosis at Wiktionary | Wikipedia/Diagnosis |
Waterborne diseases are conditions (meaning adverse effects on human health, such as death, disability, illness or disorders) caused by pathogenic micro-organisms that are transmitted by water. These diseases can be spread while bathing, washing, drinking water, or by eating food exposed to contaminated water. They are a pressing issue in rural areas amongst developing countries all over the world. While diarrhea and vomiting are the most commonly reported symptoms of waterborne illness, other symptoms can include skin, ear, respiratory, or eye problems. Lack of clean water supply, sanitation and hygiene (WASH) are major causes for the spread of waterborne diseases in a community. Therefore, reliable access to clean drinking water and sanitation is the main method to prevent waterborne diseases.
Microorganisms causing diseases that characteristically are waterborne prominently include protozoa and bacteria, many of which are intestinal parasites, or invade the tissues or circulatory system through walls of the digestive tract. Various other waterborne diseases are caused by viruses.
Yet other important classes of waterborne diseases are caused by metazoan parasites. Typical examples include certain Nematoda, that is to say "roundworms". As an example of waterborne Nematode infections, one important waterborne nematode disease is Dracunculiasis. It is acquired by swallowing water in which certain copepoda occur that act as vectors for the Nematoda. Anyone swallowing a copepod that happens to be infected with Nematode larvae in the genus Dracunculus, becomes liable to infection. The larvae cause guinea worm disease.
Another class of waterborne metazoan pathogens are certain members of the Schistosomatidae, a family of blood flukes. They usually infect people that make skin contact with the water. Blood flukes are pathogens that cause Schistosomiasis of various forms, more or less seriously affecting hundreds of millions of people worldwide.
== Terminology ==
The term waterborne disease is reserved largely for infections that predominantly are transmitted through contact with or consumption of microbially polluted water. Many infections may be transmitted by microbes or parasites that accidentally, possibly as a result of exceptional circumstances, have entered the water. However, the fact that there might be an occasional infection need not mean that it is useful to categorize the resulting disease as "waterborne". Nor is it common practice to refer to diseases such as malaria as "waterborne" just because mosquitoes have aquatic phases in their life cycles, or because treating the water they inhabit happens to be an effective strategy in control of the mosquitoes that are the vectors.
A related term is "water-related disease", which is defined as "any significant or widespread adverse effects on human health, such as death, disability, illness or disorders, caused directly or indirectly by the condition, or changes in the quantity or quality of any water". Water-related diseases are grouped according to their transmission mechanism: water borne, water hygiene, water based, water related. The main transmission mode for waterborne diseases is ingestion of contaminated water.
== Causes ==
Water-borne diseases are primarily transmitted through the consumption of water contaminated with pathogenic microorganisms, including bacteria, viruses, and parasites. Chemical pollutants can also contribute to water-related health issues. Contamination typically occurs at various points in the water supply chain, often due to inadequate sanitation, industrial activity, or poor hygiene practices.
=== Natural water sources ===
Surface water bodies such as rivers, lakes, and ponds can become contaminated through the direct discharge of human and animal waste. This is particularly common in regions where open defecation is prevalent or where sanitation infrastructure is limited. The presence of fecal matter in water significantly increases the risk of transmitting pathogens responsible for diseases such as cholera, typhoid, and dysentery.
=== Inadequate sanitation and sewage disposal ===
Improperly treated or untreated sewage can pollute groundwater and surface water sources. Leaks from septic tanks or sewer systems may introduce harmful microorganisms into water supplies. In areas with limited wastewater treatment facilities, this form of contamination is a major contributor to the spread of water-borne illnesses.
=== Agricultural runoff ===
Agricultural activities can affect water quality through runoff containing fertilizers, pesticides, and animal waste. These substances may enter water bodies during rainfall or irrigation, carrying both chemical contaminants and microbial pathogens. Nitrates from fertilizers, for example, can cause health problems such as methemoglobinemia (blue baby syndrome) in infants.
=== Industrial pollution ===
Industries may discharge untreated or inadequately treated waste into nearby water sources. Industrial effluents often contain hazardous substances such as heavy metals, organic toxins, and chemical solvents. Prolonged exposure to these pollutants through drinking or household use of contaminated water can lead to chronic health issues, including cancer and organ damage.
=== Poor hygiene practices ===
In many low-resource settings, contaminated water is used for washing food, bathing, or cleaning cooking utensils. The absence of basic hygiene measures, such as handwashing with soap, further exacerbates the risk of infection. Diseases like hepatitis A and E are commonly transmitted under such conditions.
=== Influence of climate change ===
== Diseases by type of pathogen ==
=== Protozoa ===
=== Bacteria ===
=== Viruses ===
=== Algae ===
=== Parasitic worms ===
== Prevention ==
Reliable access to clean drinking water and sanitation is the main method to prevent waterborne diseases. The aim is to break the fecal–oral route of disease transmission.
== Epidemiology ==
According to the World Health Organization, waterborne diseases account for an estimated 3.6% of the total DALY (disability-adjusted life year) global burden of disease, and cause about 1.5 million human deaths annually. The World Health Organization estimates that 58% of that burden, or 842,000 deaths per year, is attributable to a lack of safe drinking water supply, sanitation and hygiene (summarized as WASH).
=== United States ===
The Waterborne Disease and Outbreak Surveillance System (WBDOSS) is the principal database used to identify the causative agents, deficiencies, water systems, and sources associated with waterborne disease and outbreaks in the United States. Since 1971, the Centers for Disease Control and Prevention (CDC), the Council of State and Territorial Epidemiologists (CSTE), and the US Environmental Protection Agency (EPA) have maintained this surveillance system for collecting and reporting data on "waterborne disease and outbreaks associated with recreational water, drinking water, environmental, and undetermined exposures to water." "Data from WBDOSS have supported EPA efforts to develop drinking water regulations and have provided guidance for CDC's recreational water activities."
WBDOSS relies on complete and accurate data from public health departments in individual states, territories, and other U.S. jurisdictions regarding waterborne disease and outbreak activity. In 2009, reporting to the WBDOSS transitioned from a paper form to the electronic National Outbreak Reporting System (NORS). Annual or biennial surveillance reports of the data collected by the WBDOSS have been published in CDC reports from 1971 to 1984; since 1985, surveillance data have been published in the Morbidity and Mortality Weekly Report (MMWR).
WBDOSS and the public health community work together to look into the causes of contaminated water leading to waterborne disease outbreaks and maintaining those outbreaks. They do so by having the public health community investigating the outbreaks and WBDOSS receiving the reports.
== Society and culture ==
=== Socioeconomic impact ===
Waterborne diseases can have a significant impact on the economy. People who are infected by a waterborne disease are usually confronted with related healthcare costs. This is especially the case in developing countries. On average, a family spends about 10% of its monthly household income per person infected.
== History ==
Waterborne diseases were once wrongly explained by the miasma theory, the theory that bad air causes the spread of diseases. However, people started to find a correlation between water quality and waterborne diseases, which led to different water purification methods, such as sand filtering and chlorinating their drinking water. Founders of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles that were suspended in the water, laying the groundwork for the future understanding of waterborne pathogens and waterborne diseases.
== See also ==
Airborne disease
Food microbiology
List of diseases caused by water pollution
Neglected tropical diseases
Public health
Vector (epidemiology)
Water quality
Zoonosis
== References ==
== External links ==
Water-related Diseases, Contaminants, and Injuries Listing of water-related diseases, contaminants and injuries with alphabetical index, listing by type of disease (bacterial, parasitic, etc.) and listing by symptoms caused (diarrhea, skin rash, and many more ) including links to other resources (CDC's Healthy Water site)
World Health Organization (WHO) "Water-Related Diseases" | Wikipedia/Waterborne_diseases |
In randomized statistical experiments, generalized randomized block designs (GRBDs) are used to study the interaction between blocks and treatments. For a GRBD, each treatment is replicated at least two times in each block; this replication allows the estimation and testing of an interaction term in the linear model (without making parametric assumptions about a normal distribution for the error).
== Univariate response ==
=== GRBDs versus RCBDs: Replication and interaction ===
Like a randomized complete block design (RCBD), a GRBD is randomized. Within each block, treatments are randomly assigned to experimental units: this randomization is also independent between blocks. In a (classic) RCBD, however, there is no replication of treatments within blocks.
=== Two-way linear model: Blocks and treatments ===
The experimental design guides the formulation of an appropriate linear model. Without replication, the (classic) RCBD has a two-way linear model with treatment and block effects but without a block-treatment interaction. Without replicates, this two-way linear model may be estimated and tested without making parametric assumptions (by using the randomization distribution, without using a normal distribution for the error). In the RCBD, the block-treatment interaction cannot be estimated using the randomization distribution; a fortiori, there exists no "valid" (i.e. randomization-based) test for the block-treatment interaction in the analysis of variance (anova) of the RCBD.
The distinction between RCBDs and GRBDs has been ignored by some authors, and the ignorance regarding the GRBD has been criticized by statisticians like Oscar Kempthorne and Sidney Addelman. The GRBD has the advantage that replication allows block-treatment interaction to be studied.
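To make concrete why replication matters, consider estimating the block-treatment interaction from cell means: with r ≥ 2 replicates per cell the interaction is separable from error, whereas with r = 1 the two are confounded. A minimal sketch with an invented 2-block, 2-treatment layout:

```python
# responses[block][treatment] is a list of r >= 2 replicate responses
responses = {
    "b1": {"t1": [4.1, 3.9], "t2": [6.2, 6.0]},
    "b2": {"t1": [5.0, 5.2], "t2": [5.1, 4.9]},
}

def mean(xs):
    return sum(xs) / len(xs)

grand = mean([y for cells in responses.values()
                for cell in cells.values() for y in cell])
block_mean = {b: mean([y for cell in cells.values() for y in cell])
              for b, cells in responses.items()}
treat_mean = {t: mean([y for cells in responses.values() for y in cells[t]])
              for t in ["t1", "t2"]}

# Interaction effect for each cell: observed cell mean minus the
# purely additive prediction (block effect + treatment effect).
interaction = {
    (b, t): mean(responses[b][t]) - (block_mean[b] + treat_mean[t] - grand)
    for b in responses for t in ["t1", "t2"]
}
print(interaction)
```

With a single observation per cell (an RCBD) the cell "mean" is the observation itself, so the same quantity is indistinguishable from the error term.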
==== GRBDs when block-treatment interaction lacks interest ====
However, if block-treatment interaction is known to be negligible, then the experimental protocol may specify that the interaction terms be assumed to be zero and that their degrees of freedom be used for the error term. GRBD designs for models without interaction terms offer more degrees of freedom for testing treatment effects than do RCBDs with more blocks: an experimenter wanting to increase power may use a GRBD rather than an RCBD with additional blocks, when the extra block effects would lack genuine interest.
== Multivariate analysis ==
The GRBD has a real-number response. For vector responses, multivariate analysis considers similar two-way models with main effects and with interactions or errors. Without replicates, error terms are confounded with interaction, and only error is estimated. With replicates, interaction can be tested with the multivariate analysis of variance and coefficients in the linear model can be estimated without bias and with minimum variance (by using the least-squares method).
== Functional models for block-treatment interactions: Testing known forms of interaction ==
Non-replicated experiments are used by knowledgeable experimentalists when replications have prohibitive costs. When the block design lacks replicates, interactions have been modeled. For example, Tukey's F-test for interaction (non-additivity) has been motivated by the multiplicative model of Mandel (1961); this model assumes that all treatment-block interactions are proportional to the product of the mean treatment effect and the mean block effect, where the proportionality constant is identical for all treatment-block combinations. Tukey's test is valid when Mandel's multiplicative model holds and when the errors independently follow a normal distribution.
Tukey's F-statistic for testing interaction has a distribution based on the randomized assignment of treatments to experimental units. When Mandel's multiplicative model holds, the F-statistic's randomization distribution is closely approximated by the distribution of the F-statistic assuming a normal distribution for the error, according to the 1975 paper of Robinson.
The rejection of multiplicative interaction need not imply the rejection of non-multiplicative interaction, because there are many forms of interaction.
Generalizing earlier models for Tukey's test are the “bundle-of-straight lines” model of Mandel (1959) and the functional model of Milliken and Graybill (1970), which assumes that the interaction is a known function of the block and treatment main-effects. Other methods and heuristics for block-treatment interaction in unreplicated studies are surveyed in the monograph Milliken & Johnson (1989).
== See also ==
Block design
Complete block design
Incomplete block design
Randomized block design
Randomization
Randomized experiment
== Notes ==
== References ==
Addelman, Sidney (Oct 1969). "The Generalized Randomized Block Design". The American Statistician. 23 (4): 35–36. doi:10.2307/2681737. JSTOR 2681737.
Addelman, Sidney (Sep 1970). "Variability of Treatments and Experimental Units in the Design and Analysis of Experiments". Journal of the American Statistical Association. 65 (331): 1095–1108. doi:10.2307/2284277. JSTOR 2284277.
Gates, Charles E. (Nov 1995). "What Really Is Experimental Error in Block Designs?". The American Statistician. 49 (4): 362–363. doi:10.2307/2684574. JSTOR 2684574.
Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9. MR 2363107.
Johnson, Richard A.; Wichern, Dean W. (2002). "6 Comparison of several multivariate means". Applied multivariate statistical analysis (Fifth ed.). Prentice Hall. pp. 272–353. ISBN 0-13-121973-1.
Lentner, Marvin; Bishop, Thomas (1993). "The Generalized RCB Design (Chapter 6.13)". Experimental design and analysis (Second ed.). Blacksburg, VA: Valley Book Company. pp. 225–226. ISBN 0-9616255-2-X.
Mardia, K. V.; Kent, J. T.; Bibby, J. M. (1979). "12 Multivariate analysis of variance". Multivariate analysis. Academic Press. ISBN 0-12-471250-9.
Milliken, George A.; Johnson, Dallas E. (1989). Nonreplicated experiments: Designed experiments. Analysis of messy data. Vol. 2. New York: Van Nostrand Reinhold.
Wilk, M. B. (June 1955). "The Randomization Analysis of a Generalized Randomized Block Design". Biometrika. 42 (1–2): 70–79. doi:10.2307/2333423. JSTOR 2333423. MR 0068800.
Zyskind, George (December 1963). "Some Consequences of Randomization in a Generalization of the Balanced Incomplete Block Design". The Annals of Mathematical Statistics. 34 (4): 1569–1581. doi:10.1214/aoms/1177703889. JSTOR 2238364. MR 0157448. | Wikipedia/Generalized_randomized_block_design |
Taguchi methods (Japanese: タグチメソッド) are statistical methods, sometimes called robust design methods, developed by Genichi Taguchi to improve the quality of manufactured goods, and more recently also applied to engineering, biotechnology, marketing and advertising. Professional statisticians have welcomed the goals and improvements brought about by Taguchi methods, particularly by Taguchi's development of designs for studying variation, but have criticized the inefficiency of some of Taguchi's proposals.
Taguchi's work includes three principal contributions to statistics:
A specific loss function
The philosophy of off-line quality control; and
Innovations in the design of experiments.
== Loss functions ==
=== Loss functions in the statistical theory ===
Traditionally, statistical methods have relied on mean-unbiased estimators of treatment effects: Under the conditions of the Gauss–Markov theorem, least squares estimators have minimum variance among all mean-unbiased linear estimators. The emphasis on comparisons of means also draws (limiting) comfort from the law of large numbers, according to which the sample means converge to the true mean. Fisher's textbook on the design of experiments emphasized comparisons of treatment means.
However, loss functions were avoided by Ronald A. Fisher.
=== Taguchi's use of loss functions ===
Taguchi knew statistical theory mainly from the followers of Ronald A. Fisher, who also avoided loss functions.
Reacting to Fisher's methods in the design of experiments, Taguchi interpreted Fisher's methods as being adapted for seeking to improve the mean outcome of a process. Indeed, Fisher's work had been largely motivated by programmes to compare agricultural yields under different treatments and blocks, and such experiments were done as part of a long-term programme to improve harvests.
However, Taguchi realised that in much industrial production, there is a need to produce an outcome on target, for example, to machine a hole to a specified diameter, or to manufacture a cell to produce a given voltage. He also realised, as had Walter A. Shewhart and others before him, that excessive variation lay at the root of poor manufactured quality and that reacting to individual items inside and outside specification was counterproductive.
He therefore argued that quality engineering should start with an understanding of quality costs in various situations. In much conventional industrial engineering, the quality costs are simply represented by the number of items outside specification multiplied by the cost of rework or scrap. However, Taguchi insisted that manufacturers broaden their horizons to consider cost to society. Though the short-term costs may simply be those of non-conformance, any item manufactured away from nominal would result in some loss to the customer or the wider community through early wear-out; difficulties in interfacing with other parts, themselves probably wide of nominal; or the need to build in safety margins. These losses are externalities and are usually ignored by manufacturers, which are more interested in their private costs than social costs. Such externalities prevent markets from operating efficiently, according to analyses of public economics. Taguchi argued that such losses would inevitably find their way back to the originating corporation (in an effect similar to the tragedy of the commons), and that by working to minimise them, manufacturers would enhance brand reputation, win markets and generate profits.
Such losses are, of course, very small when an item is close to nominal. Donald J. Wheeler characterised the region within specification limits as where we deny that losses exist. As we diverge from nominal, losses grow until the point where losses are too great to deny and the specification limit is drawn. All these losses are, as W. Edwards Deming would describe them, unknown and unknowable, but Taguchi wanted to find a useful way of representing them statistically. Taguchi specified three situations:
Larger the better (for example, agricultural yield);
Smaller the better (for example, carbon dioxide emissions); and
On-target, minimum-variation (for example, a mating part in an assembly).
The first two cases are represented by simple monotonic loss functions. In the third case, Taguchi adopted a squared-error loss function for several reasons:
It is the first "symmetric" term in the Taylor series expansion of real analytic loss-functions.
Total loss is measured by the variance. For uncorrelated random variables, variance is additive, so total loss is an additive measurement of cost.
The squared-error loss function is widely used in statistics, following Gauss's use of the squared-error loss function in justifying the method of least squares.
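The squared-error loss in the on-target case is commonly written L(y) = k(y − m)², where m is the target value and the constant k is calibrated so that an item sitting exactly at the specification limit incurs the known cost of rework or scrap. A minimal sketch in Python; the function name and the machining numbers are illustrative assumptions, not figures from Taguchi's own texts:

```python
def taguchi_loss(y, target, cost_at_limit, half_tolerance):
    """Quadratic quality loss L(y) = k * (y - target)**2.

    k is chosen so that an item at the specification limit
    (target +/- half_tolerance) incurs exactly cost_at_limit.
    """
    k = cost_at_limit / half_tolerance ** 2
    return k * (y - target) ** 2

# Illustrative example: a hole machined to a 10.00 mm target with a
# +/- 0.05 mm tolerance, and a $4.00 rework cost at the limit.
loss_on_target = taguchi_loss(10.00, 10.00, 4.00, 0.05)  # 0: no loss at nominal
loss_at_limit = taguchi_loss(10.05, 10.00, 4.00, 0.05)   # ~4.0 at the limit
loss_inside = taguchi_loss(10.02, 10.00, 4.00, 0.05)     # ~0.64: nonzero loss
                                                         # even inside spec
```

The last line is the point of the quadratic model: unlike the conventional pass/fail accounting described above, an item well inside specification still carries a small but nonzero loss.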
=== Reception of Taguchi's ideas by statisticians ===
Though many of Taguchi's concerns and conclusions are welcomed by statisticians and economists, some ideas have been especially criticized. For example, Taguchi's recommendation that industrial experiments maximise some signal-to-noise ratio (representing the magnitude of the mean of a process compared to its variation) has been criticized.
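The criticized signal-to-noise ratios have standard textbook forms for the three situations listed earlier. A sketch of the usual formulas in Python, under the common definitions (larger-the-better, smaller-the-better, and nominal-the-best):

```python
import math

def sn_larger_the_better(ys):
    # SN = -10 * log10( (1/n) * sum(1 / y_i^2) )
    n = len(ys)
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / n)

def sn_smaller_the_better(ys):
    # SN = -10 * log10( (1/n) * sum(y_i^2) )
    n = len(ys)
    return -10 * math.log10(sum(y ** 2 for y in ys) / n)

def sn_nominal_the_best(ys):
    # SN = 10 * log10( mean^2 / variance ), the form that compares the
    # magnitude of the mean to the variation around it
    n = len(ys)
    mean = sum(ys) / n
    var = sum((y - mean) ** 2 for y in ys) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)
```

In each case a larger SN value is better; in a Taguchi experiment the ratio is computed per control-factor setting over the noise replicates, and settings are chosen to maximise it.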
== Off-line quality control ==
=== Taguchi's rule for manufacturing ===
Taguchi realized that the best opportunity to eliminate variation of the final product quality is during the design of a product and its manufacturing process. Consequently, he developed a strategy for quality engineering that can be used in both contexts. The process has three stages:
System design
Parameter (measure) design
Tolerance design
==== System design ====
This is design at the conceptual level, involving creativity and innovation.
==== Parameter design ====
Once the concept is established, the nominal values of the various dimensions and design parameters need to be set, the detail design phase of conventional engineering. Taguchi's radical insight was that the exact choice of values required is under-specified by the performance requirements of the system. In many circumstances, this allows the parameters to be chosen so as to minimize the effects on performance arising from variation in manufacture, environment and cumulative damage. This is sometimes called robustification.
Robust parameter designs consider controllable and uncontrollable noise variables; they seek to exploit relationships and optimize settings that minimize the effects of the noise variables.
==== Tolerance design ====
With a successfully completed parameter design, and an understanding of the effect that the various parameters have on performance, resources can be focused on reducing and controlling variation in the critical few dimensions.
== Design of experiments ==
Taguchi developed his experimental theories independently, and read works in the tradition of R. A. Fisher only in 1954.
=== Outer arrays ===
Taguchi's designs aimed to allow greater understanding of variation than did many of the traditional designs from the analysis of variance (following Fisher). Taguchi contended that conventional sampling is inadequate here as there is no way of obtaining a random sample of future conditions. In Fisher's design of experiments and analysis of variance, experiments aim to reduce the influence of nuisance factors to allow comparisons of the mean treatment-effects. Variation becomes even more central in Taguchi's thinking.
Taguchi proposed extending each experiment with an "outer array" (possibly an orthogonal array); the "outer array" should simulate the random environment in which the product would function. This is an example of judgmental sampling. Many quality specialists have been using "outer arrays".
Later innovations in outer arrays resulted in "compounded noise." This involves combining a few noise factors to create two levels in the outer array: First, noise factors that drive output lower, and second, noise factors that drive output higher. "Compounded noise" simulates the extremes of noise variation but uses fewer experimental runs than would previous Taguchi designs.
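The crossed inner/outer array idea can be illustrated with a toy model: each inner-array (control factor) setting is evaluated at every outer-array (noise) condition, and the setting whose output stays stable across the noise levels is preferred. The response function, factor names, and levels below are purely illustrative assumptions, not a real Taguchi case study:

```python
import itertools

# Hypothetical response: output y depends on two control factors (a, b)
# and a compounded noise variable z; the (1 - b) * z term means setting
# b = 1 removes the system's sensitivity to noise.
def response(a, b, z):
    return 10 + 2 * a + (1 - b) * z

inner_array = list(itertools.product([0, 1], repeat=2))  # control settings
outer_array = [-1, +1]                                   # compounded noise levels

results = []
for a, b in inner_array:
    ys = [response(a, b, z) for z in outer_array]        # cross inner x outer
    mean = sum(ys) / len(ys)
    spread = max(ys) - min(ys)  # sensitivity to noise across the outer array
    results.append(((a, b), mean, spread))

# Robust parameter design: prefer the setting that minimises the spread
# induced by the outer-array noise.
robust = min(results, key=lambda r: r[2])
```

Here every control setting with b = 1 has zero spread across the noise levels, which is the sense in which the crossed arrays expose control-by-noise interactions: the effect of z depends on the level of b.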
=== Management of interactions ===
==== Interactions, as treated by Taguchi ====
Many of the orthogonal arrays that Taguchi has advocated are saturated arrays, allowing no scope for estimation of interactions. This is a continuing topic of controversy. However, this is only true for "control factors" or factors in the "inner array". By combining an inner array of control factors with an outer array of "noise factors", Taguchi's approach provides "full information" on control-by-noise interactions, it is claimed. Taguchi argues that such interactions have the greatest importance in achieving a design that is robust to noise factor variation. The Taguchi approach provides more complete interaction information than typical fractional factorial designs, its adherents claim.
Followers of Taguchi argue that the designs offer rapid results and that interactions can be eliminated by proper choice of quality characteristics. That notwithstanding, a "confirmation experiment" offers protection against any residual interactions. If the quality characteristic represents the energy transformation of the system, then the "likelihood" of control factor-by-control factor interactions is greatly reduced, since "energy" is "additive".
==== Inefficiencies of Taguchi's designs ====
Interactions are part of the real world. In Taguchi's arrays, interactions are confounded and difficult to resolve.
Statisticians in response surface methodology (RSM) advocate the "sequential assembly" of designs: In the RSM approach, a screening design is followed by a "follow-up design" that resolves only the confounded interactions judged worth resolution. A second follow-up design may be added (time and resources allowing) to explore possible high-order univariate effects of the remaining variables, as high-order univariate effects are less likely in variables already eliminated for having no linear effect. With the economy of screening designs and the flexibility of follow-up designs, sequential designs have great statistical efficiency. The sequential designs of response surface methodology require far fewer experimental runs than would a sequence of Taguchi's designs.
== Assessment ==
Genichi Taguchi has made valuable contributions to statistics and engineering. His emphasis on loss to society, techniques for investigating variation in experiments, and his overall strategy of system, parameter and tolerance design have been influential in improving manufactured quality worldwide.
== See also ==
Design of experiments – Design of tasks
Optimal design – Experimental design that is optimal with respect to some statistical criterion
Orthogonal array – Type of mathematical array
Quality management – Business process to aid consistent product fitness
Response surface methodology – Statistical approach
Sales process engineering – Systematic design of sales processes
Six Sigma – Business process improvement technique
Engineering tolerance – Permissible limit or limits of variation
Probabilistic design – Discipline within engineering design
The Centers for Disease Control and Prevention (CDC) is the national public health agency of the United States. It is a United States federal agency under the Department of Health and Human Services (HHS), and is headquartered in Atlanta, Georgia. The CDC's current nominee for director is Susan Monarez. She became acting director on January 23, 2025, but stepped down on March 24, 2025 when nominated for the director position. On May 14, 2025, Robert F. Kennedy Jr. stated that lawyer Matthew Buzzelli is acting CDC director. However, the CDC web site does not state the acting director's name.
The agency's main goal is the protection of public health and safety through the control and prevention of disease, injury, and disability in the US and worldwide. The CDC focuses national attention on developing and applying disease control and prevention. It pays particular attention to infectious disease, foodborne pathogens, environmental health, occupational safety and health, health promotion, injury prevention, and educational activities designed to improve the health of United States citizens. The CDC also conducts research and provides information on non-infectious diseases, such as obesity and diabetes, and is a founding member of the International Association of National Public Health Institutes.
As part of the announced 2025 HHS reorganization, CDC is planned to be reoriented towards infectious disease programs. It is planned to absorb the Administration for Strategic Preparedness and Response, while the National Institute for Occupational Safety and Health is planned to move into the new Administration for a Healthy America.
== History ==
=== Establishment ===
The Communicable Disease Center was founded July 1, 1946, as the successor to the World War II Malaria Control in War Areas program of the Office of National Defense Malaria Control Activities.
Preceding its founding, organizations with global influence in malaria control were the Malaria Commission of the League of Nations and the Rockefeller Foundation. The Rockefeller Foundation greatly supported malaria control, sought to have the governments take over some of its efforts, and collaborated with the agency.
The new agency was a branch of the U.S. Public Health Service and Atlanta was chosen as the location because malaria was endemic in the Southern United States. The agency changed names (see infobox on top) before adopting the name Communicable Disease Center in 1946. Offices were located on the sixth floor of the Volunteer Building on Peachtree Street.
With a budget at the time of about $1 million, 59 percent of its personnel were engaged in mosquito abatement and habitat control with the objective of control and eradication of malaria in the United States (see National Malaria Eradication Program).
Among its 369 employees, the main jobs at CDC were originally entomology and engineering. In CDC's initial years, more than six and a half million homes were sprayed, mostly with DDT. In 1946, there were only seven medical officers on duty and an early organization chart was drawn. Under Joseph Walter Mountin, the CDC continued to be an advocate for public health issues and pushed to extend its responsibilities to many other communicable diseases.
In 1947, the CDC made a token payment of $10 to Emory University for 15 acres (61,000 m2) of land on Clifton Road in DeKalb County, still the home of CDC headquarters as of 2025. CDC employees collected the money to make the purchase. The benefactor behind the "gift" was Robert W. Woodruff, chairman of the board of the Coca-Cola Company. Woodruff had a long-time interest in malaria control, which had been a problem in areas where he went hunting. The same year, the PHS transferred its San Francisco based plague laboratory into the CDC as the Epidemiology Division, and a new Veterinary Diseases Division was established.
=== Growth ===
In 1951, Chief Epidemiologist Alexander Langmuir's warnings of potential biological warfare during the Korean War spurred the creation of the Epidemic Intelligence Service (EIS) as a two-year postgraduate training program in epidemiology. The success of the EIS program led to the launch of Field Epidemiology Training Programs (FETP) in 1980, training more than 18,000 disease detectives in over 80 countries. In 2020, FETP celebrated the 40th anniversary of the CDC's support for Thailand's Field Epidemiology Training Program. Thailand was the first FETP site created outside of North America; the program model has since spread to numerous countries, reflecting CDC's influence in promoting it internationally. The Training Programs in Epidemiology and Public Health Interventions Network (TEPHINET) has graduated 950 students.
The mission of the CDC expanded beyond its original focus on malaria to include sexually transmitted diseases when the Venereal Disease Division of the U.S. Public Health Service (PHS) was transferred to the CDC in 1957. Shortly thereafter, Tuberculosis Control was transferred (in 1960) to the CDC from PHS, and then in 1963 the Immunization program was established.
It became the National Communicable Disease Center effective July 1, 1967, and the Center for Disease Control on June 24, 1970. At the end of the Public Health Service reorganizations of 1966–1973, it was promoted to being a principal operating agency of PHS.
=== Recent history ===
It was renamed to the plural Centers for Disease Control effective October 14, 1980, as the modern organization of having multiple constituent centers was established. By 1990, it had four centers formed in the 1980s: the Center for Infectious Diseases, Center for Chronic Disease Prevention and Health Promotion, the Center for Environmental Health and Injury Control, and the Center for Prevention Services; as well as two centers that had been absorbed by CDC from outside: the National Institute for Occupational Safety and Health in 1973, and the National Center for Health Statistics in 1987.
An act of the United States Congress appended the words "and Prevention" to the name effective October 27, 1992. However, Congress directed that the initialism CDC be retained because of its name recognition. Since the 1990s, the CDC focus has broadened to include chronic diseases, disabilities, injury control, workplace hazards, environmental health threats, and terrorism preparedness. CDC combats emerging diseases and other health risks, including birth defects, West Nile virus, obesity, avian, swine, and pandemic flu, E. coli, and bioterrorism, to name a few. The organization would also prove to be an important factor in preventing the abuse of penicillin. In May 1994 the CDC admitted having sent samples of communicable diseases to the Iraqi government from 1984 through 1989 which were subsequently repurposed for biological warfare, including Botulinum toxin, West Nile virus, Yersinia pestis and Dengue fever virus.
On April 21, 2005, then–CDC director Julie Gerberding formally announced the reorganization of CDC to "confront the challenges of 21st-century health threats". She established four coordinating centers. In 2009 the Obama administration re-evaluated this change and ordered them cut as an unnecessary management layer.
As of 2013, the CDC's Biosafety Level 4 laboratories were among the few that exist in the world. They included one of only two official repositories of smallpox in the world, with the other one located at the State Research Center of Virology and Biotechnology VECTOR in the Russian Federation. In 2014, the CDC revealed they had discovered several misplaced smallpox samples while their lab workers were "potentially infected" with anthrax.
The city of Atlanta annexed the property of the CDC headquarters effective January 1, 2018, as a part of the city's largest annexation within a period of 65 years; the Atlanta City Council had voted to do so the prior December. The CDC and Emory University had requested that the Atlanta city government annex the area, paving the way for a MARTA expansion through the Emory campus, funded by city tax dollars. The headquarters were located in an unincorporated area, statistically in the Druid Hills census-designated place.
On August 17, 2022, Walensky said the CDC would make drastic changes in the wake of mistakes during the COVID-19 pandemic. She outlined an overhaul of how the CDC would analyze and share data and how they would communicate information to the general public. In her statement to all CDC employees, she said: "For 75 years, CDC and public health have been preparing for COVID-19, and in our big moment, our performance did not reliably meet expectations." Based on the findings of an internal report, Walensky concluded that "The CDC must refocus itself on public health needs, respond much faster to emergencies and outbreaks of disease, and provide information in a way that ordinary people and state and local health authorities can understand and put to use" (as summarized by the New York Times).
==== Second Trump administration ====
In January 2025, it was reported that a CDC official had ordered all CDC staff to stop working with the World Health Organization. Around January 31, 2025, several CDC websites, pages, and datasets related to HIV and STI prevention, LGBT and youth health became unavailable for viewing after the agency was ordered to comply with Donald Trump's executive order to remove all material of "diversity, equity, and inclusion" and "gender identity". Shortly thereafter, the CDC ordered its scientists to retract or pause the publication of all research which had been submitted or accepted for publication, but not yet published, which included any of the following banned terms: "Gender, transgender, pregnant person, pregnant people, LGBT, transsexual, non-binary, nonbinary, assigned male at birth, assigned female at birth, biologically male, biologically female."
Also in January 2025, due to a pause in communications imposed by the second Trump administration at federal health agencies, publication of the Morbidity and Mortality Weekly Report (MMWR) was halted, the first time that had happened since its inception in 1960. The president of the Infectious Diseases Society of America (IDSA) called the pause in publication a "disaster." Attempts to halt publication had been made by the first Trump administration after MMWR published information about COVID-19 that "conflicted with messaging from the White House." The pause in communications also caused the cancellation of a meeting between the CDC and IDSA about threats to public health regarding the H5N1 influenza virus.
On February 14, 2025, around 1,300 CDC employees were laid off by the administration, which included all first-year officers of the Epidemic Intelligence Service. The cuts also terminated 16 of the 24 Laboratory Leadership Service program fellows, a program designed for early-career lab scientists to address laboratory testing shortcomings of the CDC. In the following month, the Trump administration quietly withdrew its CDC director nominee, Dave Weldon, just minutes before his scheduled Senate confirmation hearing on March 13.
In April 2025, it was reported that among the reductions is the elimination of the Freedom of Information Act team, the Division of Violence Prevention, laboratories involved in testing for antibiotic resistance, and the team responsible for determining recalls of hazardous infant products. Additional cuts affect the technology branch of the Center for Forecasting and Outbreak Analytics, which includes software engineers and computer scientists supporting the centre established during the COVID-19 pandemic to improve disease outbreak prediction.
== Organization ==
The CDC is organized into centers, institutes, and offices (CIOs), with each organizational unit implementing the agency's activities in a particular area of expertise while also providing intra-agency support and resource-sharing for cross-cutting issues and specific health threats.
As of the most recent reorganization in February 2023, the CIOs are:
National Center for Immunization and Respiratory Diseases
National Center for Emerging and Zoonotic Infectious Diseases
Division of Global Migration Health
National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention
National Center on Birth Defects and Developmental Disabilities
National Center for Chronic Disease Prevention and Health Promotion
National Center for Environmental Health / Agency for Toxic Substances and Disease Registry
National Center for Injury Prevention and Control
National Institute for Occupational Safety and Health
Public Health Infrastructure Center
Global Health Center
Immediate Office of the Director
Chief of Staff
Office of the Chief Operating Officer
Office of Policy, Performance, and Evaluation
Office of Equal Employment Opportunity and Workplace Equity
Office of Communications
Office of Health Equity
Office of Science
CDC Washington Office
Office of Laboratory Science and Safety
Office of Readiness and Response
Center for Forecasting and Outbreak Analytics
Office of Public Health Data, Surveillance, and Technology
National Center for Health Statistics
The Office of Public Health Preparedness was created during the 2001 anthrax attacks, shortly after the terrorist attacks of September 11, 2001. Its purpose was to coordinate the government's response to a range of biological terrorism threats.
=== Locations ===
Most CDC centers are located in the Atlanta metropolitan area, where it has three major campuses:
The Chamblee Campus in Chamblee, Georgia, opened in 1946, inheriting the site and buildings of Lawson General Hospital immediately adjacent to but not part of Naval Air Station Atlanta. Although it was initially planned to be shut down when the Roybal Campus opened, it was found that the latter was not suitable for live animal facilities. The buildings were slowly replaced with modern buildings over time.
The Roybal Campus in Atlanta is the largest, named in honor of the late representative Edward R. Roybal. It was originally called the Clifton Road Campus. Although its land was donated by adjacent Emory University in 1947, it did not open until 1960. Its Building 18, which opened in 2005, contains the premier BSL4 laboratory in the United States.
The Lawrenceville Campus in Lawrenceville, Georgia, was acquired as a destination for Chamblee's animal facilities if that campus was shut down. It was first developed in the early 1960s.
A few of the centers are based in or operate other domestic locations:
The National Center for Health Statistics is primarily located in Hyattsville, Maryland, with a branch in Research Triangle Park in North Carolina.
The National Institute for Occupational Safety and Health's primary locations are Cincinnati; Morgantown, West Virginia; Pittsburgh; Spokane, Washington; and Washington, D.C., with branches in Denver; Anchorage, Alaska; and Atlanta.
The CDC Washington Office is based in Washington, D.C.
Two divisions of the National Center for Emerging and Zoonotic Infectious Diseases are based outside Atlanta. The Division of Vector-Borne Diseases is based in Fort Collins, Colorado, with a branch in San Juan, Puerto Rico. The Arctic Investigations Program is based in Anchorage.
In addition, CDC operates quarantine facilities in 20 cities in the U.S.
== Budget ==
The CDC budget for fiscal year 2024 is $11.581 billion.
== Workforce ==
As of 2021, CDC staff numbered approximately 15,000 personnel (including 6,000 contractors and 840 United States Public Health Service Commissioned Corps officers) in 170 occupations. Eighty percent held bachelor's degrees or higher; almost half had advanced degrees (a master's degree or a doctorate such as a PhD, D.O., or M.D.).
Common CDC job titles include engineer, entomologist, epidemiologist, biologist, physician, veterinarian, behavioral scientist, nurse, medical technologist, economist, public health advisor, health communicator, toxicologist, chemist, computer scientist, and statistician. The CDC also operates a number of notable training and fellowship programs, including those indicated below.
=== Epidemic Intelligence Service (EIS) ===
The Epidemic Intelligence Service (EIS) is composed of "boots-on-the-ground disease detectives" who investigate public health problems domestically and globally. When called upon by a governmental body, EIS officers may embark on short-term epidemiological assistance assignments, or "Epi-Aids", to provide technical expertise in containing and investigating disease outbreaks. The EIS program is a model for the international Field Epidemiology Training Program.
=== Public Health Associates Program ===
The CDC also operates the Public Health Associate Program (PHAP), a two-year paid fellowship for recent college graduates to work in public health agencies all over the United States. PHAP was founded in 2007 and currently has 159 associates in 34 states.
== Leadership ==
The director of the CDC is a position that currently requires Senate confirmation. The director serves at the pleasure of the President and may be fired at any time. The CDC director concurrently serves as the Administrator of the Agency for Toxic Substances and Disease Registry.
Prior to January 20, 2025, it was a Senior Executive Service position that could be filled either by a career employee, or as a political appointment that does not require Senate confirmation, with the latter method typically being used. The change to requiring Senate confirmation was due to a provision in the Consolidated Appropriations Act, 2023.
Twenty directors have served the CDC or its predecessor agencies, including three who have served during the Trump administration (including Anne Schuchat who twice served as acting director) and three who have served during the Carter administration (including one acting director not shown here). Two served under Bill Clinton, but only one under the Nixon to Ford terms.
=== List of directors ===
The following persons have served as the director of the Centers for Disease Control and Prevention (or chief of the Communicable Disease Center):
== Datasets and survey systems ==
The CDC maintains a number of datasets and survey systems covering scientific data, surveillance, health statistics, and laboratory information:
Behavioral Risk Factor Surveillance System (BRFSS), the world's largest, ongoing telephone health-survey system.
Pregnancy Risk Assessment Monitoring System (PRAMS), a surveillance system on maternal and infant health with telephone and mail questionnaires in English and Spanish in 50 US jurisdictions.
Mortality Medical Data System.
Abortion statistics in the United States
CDC WONDER (Wide-ranging ONline Data for Epidemiologic Research)
Data systems of the National Center for Health Statistics
== Areas of focus ==
=== Communicable diseases ===
The CDC's programs address more than 400 diseases, health threats, and conditions that are major causes of death, disease, and disability. The CDC's website has information on various infectious (and noninfectious) diseases, including smallpox, measles, and others.
==== Influenza ====
The CDC targets the transmission of influenza, including the H1N1 swine flu, and launched websites to educate people about hygiene.
==== Division of Select Agents and Toxins ====
Within the division are two programs: the Federal Select Agent Program (FSAP) and the Import Permit Program. The FSAP is run jointly with an office within the U.S. Department of Agriculture, regulating agents that can cause disease in humans, animals, and plants. The Import Permit Program regulates the importation of "infectious biological materials."
The CDC runs a program that protects the public from rare and dangerous substances such as anthrax and the Ebola virus. The program, called the Federal Select Agent Program, calls for inspections of labs in the U.S. that work with dangerous pathogens.
During the 2014 Ebola outbreak in West Africa, the CDC helped coordinate the return of two infected American aid workers for treatment at Emory University Hospital, the home of a special unit to handle highly infectious diseases.
As a response to the 2014 Ebola outbreak, Congress passed a Continuing Appropriations Resolution allocating $30,000,000 towards CDC's efforts to fight the virus.
=== Non-communicable diseases ===
The CDC also works on non-communicable diseases, including chronic diseases caused by obesity, physical inactivity and tobacco-use. The work of the Division for Cancer Prevention and Control, led from 2010 by Lisa C. Richardson, is also within this remit.
=== Antibiotic resistance ===
The CDC implemented their National Action Plan for Combating Antibiotic Resistant Bacteria as a measure against the spread of antibiotic resistance in the United States. This initiative has a budget of $161 million and includes the development of the Antibiotic Resistance Lab Network.
=== Global health ===
Globally, the CDC works with other organizations to address global health challenges and contain disease threats at their source. They work with many international organizations such as the World Health Organization (WHO) as well as ministries of health and other groups on the front lines of outbreaks. The agency maintains staff in more than 60 countries, including some from the U.S. but more from the countries in which they operate. The agency's global divisions include the Division of Global HIV and TB (DGHT), the Division of Parasitic Diseases and Malaria (DPDM), the Division of Global Health Protection (DGHP), and the Global Immunization Division (GID).
The CDC has been working with the WHO to implement the International Health Regulations (IHR), an agreement between 196 countries to prevent, control, and report on the international spread of disease, through initiatives including the Global Disease Detection Program (GDD).
The CDC has also been involved in implementing the U.S. global health initiatives President's Emergency Plan for AIDS Relief (PEPFAR) and President's Malaria Initiative.
=== Travelers' health ===
The CDC collects and publishes health information for travelers in a comprehensive book, CDC Health Information for International Travel, which is commonly known as the "yellow book." The book is available online and in print as a new edition every other year and includes current travel health guidelines, vaccine recommendations, and information on specific travel destinations. The CDC also issues travel health notices on its website, consisting of three levels:
"Watch": Level 1 (practice usual precautions)
"Alert": Level 2 (practice enhanced precautions)
"Warning": Level 3 (avoid nonessential travel)
=== Vaccine safety ===
The CDC uses a number of tools to monitor the safety of vaccines. These include the Vaccine Adverse Event Reporting System (VAERS), a national vaccine safety surveillance program run by the CDC and the FDA. "VAERS detects possible safety issues with U.S. vaccines by collecting information about adverse events (possible side effects or health problems) after vaccination." The CDC's Safety Information by Vaccine page provides a list of the latest safety information, side effects, and answers to common questions about CDC-recommended vaccines.
The Vaccine Safety Datalink (VSD) works with a network of healthcare organizations to share data on vaccine safety and adverse events. The Clinical Immunization Safety Assessment (CISA) project is a network of vaccine experts and health centers that research and assist the CDC in the area of vaccine safety.
CDC also runs a program called V-safe, a smartphone web application that allows COVID-19 vaccine recipients to be surveyed in detail about their health in response to getting the shot.
== CDC Foundation ==
The CDC Foundation operates independently from CDC as a private, nonprofit 501(c)(3) organization incorporated in the State of Georgia. The creation of the Foundation was authorized by section 399F of the Public Health Service Act to support the mission of CDC in partnership with the private sector, including organizations, foundations, businesses, educational groups, and individuals. From 1995 to 2022, the foundation raised over $1.6 billion and launched more than 1,200 health programs. Bill Cosby formerly served as a member of the foundation's Board of Directors, continuing as an honorary member after completing his term.
=== Activities ===
The foundation engages in research projects and health programs in more than 160 countries every year, including in focus areas such as cardiovascular disease, cancer, emergency response, and infectious diseases, particularly HIV/AIDS, Ebola, rotavirus, and COVID-19.
EmPOWERED Health Program: Launched in November 2019 with funding from Amgen, the program works to empower cancer patients to become actively involved in the decision making around their treatments.
Fries Prize for Improving Health: An annual prize first awarded in 1992 that "recognizes an individual who has made major accomplishments in health improvement and with the general criteria of the greatest good for the greatest number".
=== Criticism ===
In 2015, BMJ associate editor Jeanne Lenzer raised concerns that the CDC's recommendations and publications may be influenced by donations received through the Foundation, which includes pharmaceutical companies.
== Controversies ==
=== Tuskegee study of untreated syphilis in Black men ===
For 15 years, the CDC had direct oversight over the Tuskegee syphilis experiment. In the study, which lasted from 1932 to 1972, a group of Black men (nearly 400 of whom had syphilis) were studied to learn more about the disease. The disease was left untreated in the men, who had not given their informed consent to serve as research subjects. The Tuskegee Study was initiated in 1932 by the Public Health Service, with the CDC taking over the Tuskegee Health Benefit Program in 1995.
=== Gun control ===
An area of partisan dispute related to CDC funding is the study of firearm violence. Although the CDC was one of the first government agencies to study gun-related data, the 1996 Dickey Amendment, passed with the support of the National Rifle Association of America, states that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control". Advocates for gun control oppose the amendment and have tried to overturn it.
Looking at the history of the passage of the Dickey Amendment: in 1992, Mark L. Rosenberg and five CDC colleagues founded the CDC's National Center for Injury Prevention and Control, with an annual budget of approximately $260,000, focusing on "identifying causes of firearm deaths, and methods to prevent them". Their first report, published in the New England Journal of Medicine in 1993 and entitled "Guns are a Risk Factor for Homicide in the Home", found that the mere presence of a gun in a home increased the risk of a firearm-related homicide 2.7-fold and of suicide fivefold – a "huge" increase. In response, the NRA launched a "campaign to shut down the Injury Center." Two conservative pro-gun groups, Doctors for Responsible Gun Ownership and Doctors for Integrity and Policy Research, joined the pro-gun effort, and by 1995 politicians also supported the pro-gun initiative. In 1996, Jay Dickey (R-AR) introduced the Dickey Amendment, stating that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control", as a rider in the 1996 appropriations bill. In 1997, "Congress re-directed all of the money for gun research to the study of traumatic brain injury." David Satcher, CDC head from 1993 to 1998, advocated for firearms research. In 2016, over a dozen "public health insiders, including current and former CDC senior leaders" told The Trace interviewers that CDC senior leaders took a cautious stance in their interpretation of the Dickey Amendment and that they could do more but were afraid of political and personal retribution.
In 2013, the American Medical Association, the American Psychological Association, and the American Academy of Pediatrics sent a letter to the leaders of the Senate Appropriations Committee asking them "to support at least $10 million within the Centers for Disease Control and Prevention (CDC) in FY 2014 along with sufficient new resources at the National Institutes of Health to support research into the causes and prevention of violence. Furthermore, we urge Members to oppose any efforts to reduce, eliminate, or condition CDC funding related to violence prevention research." Congress maintained the ban in subsequent budgets.
=== Ebola ===
In October 2014, the CDC gave a nurse with a fever who was later diagnosed with Ebola permission to board a commercial flight to Cleveland.
=== COVID-19 ===
The CDC has been widely criticized for its handling of the COVID-19 pandemic. In 2022, CDC director Rochelle Walensky acknowledged "some pretty dramatic, pretty public mistakes, from testing to data to communications", based on the findings of an internal examination.
The first confirmed case of COVID-19 was discovered in the U.S. on January 20, 2020. However, widespread COVID-19 testing in the United States was effectively stalled until February 28, when federal officials revised a faulty CDC test, and days afterward, when the Food and Drug Administration began loosening rules that had restricted other labs from developing tests. In February 2020, as the CDC's early coronavirus test malfunctioned nationwide, CDC Director Robert R. Redfield reassured fellow officials on the White House Coronavirus Task Force that the problem would be quickly solved, according to White House officials. It took about three weeks to sort out the failed test kits, which may have been contaminated during their processing in a CDC lab. Later investigations by the FDA and the Department of Health and Human Services found that the CDC had violated its own protocols in developing its tests. In November 2020, NPR reported that an internal review document they obtained revealed that the CDC was aware that the first batch of tests which were issued in early January had a chance of being wrong 33 percent of the time, but they released them anyway.
In May 2020, The Atlantic reported that the CDC was conflating the results of two different types of coronavirus tests – tests that diagnose current coronavirus infections, and tests that measure whether someone has ever had the virus. The magazine said this distorted several important metrics, provided the country with an inaccurate picture of the state of the pandemic, and overstated the country's testing ability.
In July 2020, the Trump administration ordered hospitals to bypass the CDC and instead send all COVID-19 patient information to a database at the Department of Health and Human Services. Some health experts opposed the order and warned that the data might become politicized or withheld from the public. On July 15, the CDC alarmed health care groups by temporarily removing COVID-19 dashboards from its website. It restored the data a day later.
In August 2020, the CDC recommended that people showing no COVID-19 symptoms do not need testing. The new guidelines alarmed many public health experts. The guidelines were crafted by the White House Coronavirus Task Force without the sign-off of Anthony Fauci of the NIH. Objections by other experts at the CDC went unheard. Officials said that a CDC document in July arguing for "the importance of reopening schools" was also crafted outside the CDC. On August 16, the chief of staff, Kyle McGowan, and his deputy, Amanda Campbell, resigned from the agency. The testing guidelines were reversed on September 18, 2020, after public controversy.
In September 2020, the CDC drafted an order requiring masks on all public transportation in the United States, but the White House Coronavirus Task Force blocked the order, refusing to discuss it, according to two federal health officials.
In October 2020, it was disclosed that White House advisers had repeatedly altered the writings of CDC scientists about COVID-19, including recommendations on church choirs, social distancing in bars and restaurants, and summaries of public-health reports.
In the lead-up to Thanksgiving 2020, the CDC advised Americans not to travel for the holiday, saying, "It's not a requirement. It's a recommendation for the American public to consider." The White House coronavirus task force held its first public briefing in months on that date, but travel was not mentioned.
The New York Times later concluded that the CDC's decisions to "ben[d] to political pressure from the Trump White House to alter key public health guidance or withhold it from the public [...] cost it a measure of public trust that experts say it still has not recaptured" as of 2022.
In May 2021, following criticism by scientists, the CDC updated its COVID-19 guidance to acknowledge airborne transmission of COVID-19, after having previously claimed that the majority of infections occurred via "close contact, not airborne transmission".
In December 2021, following a request from the CEO of Delta Air Lines, CDC shortened its recommended isolation period for asymptomatic individuals infected with COVID-19 from 10 days to five.
Until 2022, the CDC withheld critical data about COVID-19 vaccine boosters, hospitalizations and wastewater data.
On June 10, 2022, the Biden Administration ordered the CDC to remove the COVID-19 testing requirement for air travelers entering the United States.
==== Controversy over the Morbidity and Mortality Weekly Report ====
During the pandemic, the CDC Morbidity and Mortality Weekly Report (MMWR) came under pressure from political appointees at the Department of Health and Human Services (HHS) to modify its reporting so as not to conflict with what Trump was saying about the pandemic.
Starting in June 2020, Michael Caputo, the HHS assistant secretary for public affairs, and his chief advisor Paul Alexander tried to delay, suppress, change, and retroactively edit MMWR releases about the effectiveness of potential treatments for COVID-19, the transmissibility of the virus, and other issues where the president had taken a public stance. Alexander tried unsuccessfully to get personal approval of all issues of MMWR before they went out.
Caputo claimed this oversight was necessary because MMWR reports were being tainted by "political content"; he demanded to know the political leanings of the scientists who reported that hydroxychloroquine had little benefit as a treatment while Trump was saying the opposite. In emails Alexander accused CDC scientists of attempting to "hurt the president" and writing "hit pieces on the administration".
In October 2020, emails obtained by Politico showed that Alexander requested multiple alterations in a report. The published alterations included a title being changed from "Children, Adolescents, and Young Adults" to "Persons." One current and two former CDC officials who reviewed the email exchanges said they were troubled by the "intervention to alter scientific reports viewed as untouchable prior to the Trump administration" that "appeared to minimize the risks of the coronavirus to children by making the report's focus on children less clear."
==== Eroding trust in the CDC as a result of COVID-19 controversies ====
A poll conducted in September 2020 found that nearly 8 in 10 Americans trusted the CDC, a decrease from 87 percent in April 2020. Another poll showed an even larger drop in trust, with the results falling 16 percentage points. By January 2022, according to an NBC News poll, only 44% of Americans trusted the CDC, compared to 69% at the beginning of the pandemic. As trust in the agency eroded, so did trust in the information it disseminated. The diminishing level of trust in the CDC and its information releases also incited "vaccine hesitancy", with the result that "just 53 percent of Americans said they would be somewhat or extremely likely to get a vaccine."
In September 2020, amid the accusations and the faltering image of the CDC, the agency's leadership was called into question. Former acting director at the CDC, Richard Besser, said of Redfield that "I find it concerning that the CDC director has not been outspoken when there have been instances of clear political interference in the interpretation of science." In addition, Mark Rosenberg, the first director of CDC's National Center for Injury Prevention and Control, also questioned Redfield's leadership and his lack of defense of the science.
Historically, the CDC has not been a political agency; however, the COVID-19 pandemic, and specifically the Trump administration's handling of the pandemic, resulted in a "dangerous shift" according to a previous CDC director and others. Four previous directors claim that the agency's voice was "muted for political reasons." Politicization of the agency has continued into the Biden administration as COVID-19 guidance is contradicted by State guidance and the agency is criticized as "CDC's credibility is eroding".
In 2021, the CDC, then under the leadership of the Biden administration, received criticism for its mixed messaging surrounding COVID-19 vaccines, mask-wearing guidance, and the state of the pandemic.
=== Gender censorship ===
On February 1, 2025, the CDC ordered its scientists to retract any not-yet-published research they had produced that included any of the following banned terms: "Gender, transgender, pregnant person, pregnant people, LGBT, transsexual, non-binary, nonbinary, assigned male at birth, assigned female at birth, biologically male, biologically female". Larry Gostin, director of the World Health Organization Center on Global Health Law, said that the directive amounted to censorship of not only government employees but private citizens as well. For example, if the lead author of a submitted paper works for the CDC and withdraws their name from the submission, that kills the submission even if coauthors who are private scientists remain on it. Other censored topics include DEI, climate change, and HIV.
Following extensive public backlash, some, but not all, of the removed pages were reinstated. The CDC's censorship led many researchers and journalists to preserve databases themselves, with many removed articles being uploaded to archival sites such as the Internet Archive.
On February 4, Doctors for America filed a federal lawsuit against the CDC, the Food and Drug Administration, and the Department of Health and Human Services, asking that the removed websites be put back online. On February 11, a judge ordered the removed pages to be restored temporarily while the suit is being considered, citing doctors who said the removed materials were "vital for real-time clinical decision-making".
== Publications ==
CDC publications
State of CDC report
CDC Programs in Brief
Morbidity and Mortality Weekly Report
Emerging Infectious Diseases (monthly journal)
Preventing Chronic Disease
Vital statistics
== Popular culture ==
=== Zombie Apocalypse campaign ===
On May 16, 2011, the Centers for Disease Control and Prevention's blog published an article instructing the public on what to do to prepare for a zombie invasion. While the article did not claim that such a scenario was possible, it did use the popular culture appeal as a means of urging citizens to prepare for all potential hazards, such as earthquakes, tornadoes, and floods.
According to David Daigle, the associate director for communications, public health preparedness and response, the idea arose when his team was discussing their upcoming hurricane-information campaign and Daigle mused that "we say pretty much the same things every year, in the same way, and I just wonder how many people are paying attention." A social-media employee mentioned that the subject of zombies had come up a lot on Twitter when she had been tweeting about the Fukushima Daiichi nuclear disaster and radiation. The team realized that a campaign like this would most likely reach a different audience from the one that normally pays attention to hurricane-preparedness warnings and went to work on the zombie campaign, launching it right before hurricane season began. "The whole idea was, if you're prepared for a zombie apocalypse, you're prepared for pretty much anything," said Daigle.
Once the blog article was posted, the CDC announced an open contest for YouTube submissions of the most creative and effective videos covering preparedness for a zombie apocalypse (or apocalypse of any kind), to be judged by the "CDC Zombie Task Force". Submissions were open until October 11, 2011. They also released a zombie-themed graphic novella available on their website. Zombie-themed educational materials for teachers are available on the site.
== See also ==
Gun violence in the United States
Haddon Matrix
List of national public health agencies
Safe Kids Worldwide
=== CDC Departments ===
ATSDR – CDC department
NIOSH – CDC department
N95 respirator – regulated by NIOSH
Division of Industrial Hygiene – predecessor to NIOSH
=== Other US Executive Departments ===
MSHA – co-regulator of respirators prior to 1998
Bureau of Mines – predecessor to MSHA
National Highway Traffic Safety Administration
OSHA
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Editorial (May 16, 2020). "Reviving the US CDC". The Lancet. 395 (10236): 1521. doi:10.1016/S0140-6736(20)31140-5. PMC 7255307. PMID 32416772.
Etheridge, Elizabeth W. (1992). Sentinel for Health: A History of the Centers for Disease Control. Berkeley, CA: University of California Press. ISBN 978-0-520-07107-0.
Meyerson, Beth E.; Martich, Frederick A.; Naehr, Gerald P. (2008). Ready to Go: The History and Contributions of U.S. Public Health Advisors. Research Triangle Park, NC: American Social Health Association. ISBN 978-0-615-20383-6. OCLC 244483702. Retrieved April 8, 2025.
Stobbe, Mike (2014). Surgeon General's Warning: How Politics Crippled the Nation's Doctor. Berkeley: Univ of California Press. ISBN 978-0-520-27229-3.
== External links ==
Official website
CDC in the Federal Register
CDC-Wide Activities and Program Support account on USAspending.gov
CDC Online Newsroom
CDC Public Health Image Library
CDC Global Communications Center
CDC Emerging Infectious Diseases Laboratory – Atlanta, Georgia (archived July 3, 2008)
CDC WONDER online databases.
Vaccine Safety Monitoring Systems and Methods (CDC) a slide deck presented at October 2019 Advisory Committee on Immunization Practices (ACIP) meeting
An adverse drug reaction (ADR) is a harmful, unintended result caused by taking medication. ADRs may occur following a single dose or prolonged administration of a drug or may result from the combination of two or more drugs. The meaning of this term differs from the term "side effect" because side effects can be beneficial as well as detrimental. The study of ADRs is the concern of the field known as pharmacovigilance. An adverse event (AE) refers to any unexpected and inappropriate occurrence at the time a drug is used, whether or not the event is associated with the administration of the drug. An ADR is a special type of AE in which a causative relationship can be shown. ADRs are only one type of medication-related harm; another is not taking prescribed medications, known as non-adherence, which can lead to death and other negative outcomes. Adverse drug reactions, by definition, require the use of a medication.
== Classification ==
=== Traditional ===
Type A: augmented pharmacological effects, which are dose-dependent and predictable
Type A reactions, which constitute approximately 80% of adverse drug reactions, are usually a consequence of the drug's primary pharmacological effect (e.g., bleeding when using the anticoagulant warfarin) or a low therapeutic index of the drug (e.g., nausea from digoxin), and they are therefore predictable. They are dose-related and usually mild, although they may be serious or even fatal (e.g. intracranial bleeding from warfarin). Such reactions are usually due to inappropriate dosage, especially when drug elimination is impaired. The term side effects may be applied to minor type A reactions.
Type B: Type B reactions are not dose-dependent and are not predictable, and so may be called idiosyncratic. These reactions can be due to particular elements within the person or the environment.
Types A and B were proposed in the 1970s, and the other types were proposed subsequently when the first two proved insufficient to classify ADRs.
Other types of adverse drug reactions are Type C, Type D, Type E, and Type F. Type C was categorized for chronic adverse drug reactions, Type D for delayed adverse drug reactions, Type E for withdrawal adverse drug reactions, and Type F for failure of therapy as an adverse drug reaction. Adverse drug reactions can also be categorized using time-relatedness, dose-relatedness, and susceptibility, which collectively are called the DoTS classification.
=== Seriousness ===
The U.S. Food and Drug Administration defines a serious adverse event as one in which the patient outcome is one of the following:
Death
Life-threatening
Hospitalization (initial or prolonged)
Disability — significant, persistent, or permanent change, impairment, damage or disruption in the patient's body function/structure, physical activities or quality of life.
Congenital abnormality
Requires intervention to prevent permanent impairment or damage
Severity is a measure of the intensity of the adverse event in question. The terms "severe" and "serious", when applied to adverse events, are technically very different: they are easily confused but cannot be used interchangeably, requiring care in usage. Seriousness usually indicates patient outcome (such as negative outcomes including disability, long-term effects, and death).
In adverse drug reactions, the seriousness of the reaction is important for reporting.
== Location ==
Some ocular antihypertensives cause systemic effects, although they are administered locally as eye drops, since a fraction escapes to the systemic circulation.
== Mechanisms ==
=== Abnormal pharmacokinetics ===
==== Comorbid disease states ====
Various diseases, especially those that cause renal or hepatic insufficiency, may alter drug metabolism. Resources are available that report changes in a drug's metabolism due to disease states.
The Medication Appropriateness Tool for Comorbid Health Conditions in Dementia (MATCH-D) criteria warns that people with dementia are more likely to experience adverse effects, and that they are less likely to be able to reliably report symptoms.
==== Genetic factors ====
Pharmacogenomics includes how genes can predict potential adverse drug reactions. However, pharmacogenomics is not limited to adverse events (of any type), but also looks at how genes may impact other responses to medications, such as low/no effect or expected/normal responses (especially based on drug metabolism).
Abnormal drug metabolism may be due to inherited factors of either Phase I oxidation or Phase II conjugation.
===== Phase I reactions =====
Phase I reactions include metabolism by cytochrome P450. Patients have abnormal metabolism by cytochrome P450 due to either inheriting abnormal alleles or due to drug interactions. Tables are available to check for drug interactions due to P450 interactions.
Inheriting abnormal butyrylcholinesterase (pseudocholinesterase) may affect metabolism of drugs such as succinylcholine.
===== Phase II reactions =====
Inheriting abnormal N-acetyltransferase which conjugated some drugs to facilitate excretion may affect the metabolism of drugs such as isoniazid, hydralazine, and procainamide.
Inheriting abnormal thiopurine S-methyltransferase may affect the metabolism of the thiopurine drugs mercaptopurine and azathioprine.
===== Protein binding =====
Protein binding interactions are usually transient and mild until a new steady state is achieved. These are mainly for drugs without much first-pass liver metabolism. The principal plasma proteins for drug binding are:
albumin
α1-acid glycoprotein
lipoproteins
Some drug interactions with warfarin are due to changes in protein binding.
=== Drug interactions ===
The risk of drug interactions is increased with polypharmacy, especially in older adults.
==== Additive drug effects ====
Two or more drugs that contribute to the same mechanism in the body can have additive toxic or adverse effects. One example of this is multiple medications administered concurrently that prolong the QT interval, such as antiarrhythmics like sotalol and some macrolide antibiotics, such as systemic azithromycin.
Another example of additive effects for adverse drug reactions is in serotonin toxicity (serotonin syndrome). If medications that cause increased serotonin levels are combined, they can cause serotonin toxicity (though therapeutic doses of one agent that increases serotonin levels can cause serotonin toxicity in certain cases and individuals). Some of the medications that can contribute to serotonin toxicity include MAO inhibitors, SSRIs, and tricyclic antidepressants.
==== Altered metabolism ====
Some medications can either inhibit or induce key drug metabolizing enzymes or drug transporters, which when combined with other medications that utilize the same proteins can lead to either toxic or sub-therapeutic adverse effects. One example of this is a patient taking a cytochrome P450 3A4 (CYP3A4) inhibitor such as the antibiotic clarithromycin, as well as another medication metabolized by CYP3A4 such as the anticoagulant apixaban, which results in elevated blood concentrations of apixaban and greater risk of serious bleeds. Additionally, clarithromycin is a permeability glycoprotein (P-gp) efflux pump inhibitor, which when given with apixaban (a substrate for P-gp) will lead to increased absorption of apixaban, resulting in the same adverse effects as with CYP3A4 inhibition.
== Management ==
=== Assessing causality ===
Causality assessment is used to determine the likelihood that a drug caused a suspected ADR. There are a number of different methods used to judge causation, including the Naranjo algorithm, the Venulet algorithm and the WHO causality term assessment criteria. Each has pros and cons associated with its use, and most require some level of expert judgement to apply.
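As an illustration, the Naranjo algorithm sums the scores of ten yes/no/unknown questionnaire items and maps the total to a causality category. The sketch below shows only that final mapping step (the questionnaire items themselves are omitted); the cutoff values follow the commonly cited Naranjo scale:

```python
def naranjo_category(total_score):
    """Map a total Naranjo questionnaire score to a causality category.

    Cutoffs follow the commonly cited Naranjo scale; the ten
    questionnaire items that produce the score are not reproduced here.
    """
    if total_score >= 9:
        return "definite"
    if total_score >= 5:
        return "probable"
    if total_score >= 1:
        return "possible"
    return "doubtful"  # scores of 0 or below

print(naranjo_category(7))  # probable
print(naranjo_category(0))  # doubtful
```

Even with a numeric scale like this, answering the individual questionnaire items still requires clinical judgement, which is why the methods above are described as semi-structured rather than fully mechanical.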
An ADR should not be labeled as 'certain' unless the ADR abates with a challenge-dechallenge-rechallenge protocol (stopping and starting the agent in question). The chronology of the onset of the suspected ADR is important, as another substance or factor may be implicated as a cause; co-prescribed medications and underlying psychiatric conditions may be factors in the ADR.
Assigning causality to a specific agent often proves difficult, unless the event is found during a clinical study or large databases are used. Both methods have difficulties and can be fraught with error. Even in clinical studies, some ADRs may be missed as large numbers of test individuals are required to find a specific adverse drug reaction, especially for rare ADRs. Psychiatric ADRs are often missed as they are grouped together in the questionnaires used to assess the population.
=== Monitoring bodies ===
Many countries have official bodies that monitor drug safety and reactions. On an international level, the WHO runs the Uppsala Monitoring Centre. The European Union runs the European Medicines Agency (EMA). In the United States, the Food and Drug Administration (FDA) is responsible for monitoring post-marketing studies. The FDA has a reporting system called the FDA Adverse Event Reporting System, where individuals can report adverse drug events. Healthcare professionals, consumers, and the pharmaceutical industry can all submit information to this system. For health products marketed in Canada, a branch of Health Canada called The Canada Vigilance Program is responsible for surveillance. Both healthcare professionals and consumers can report to this program. In Australia, the Therapeutic Goods Administration (TGA) conducts postmarket monitoring of therapeutic products. In the UK, a monitoring system called the Yellow Card Scheme was established in 1964. The Yellow Card Scheme was set up to surveil medications and other health products.
== Epidemiology ==
A study by the Agency for Healthcare Research and Quality (AHRQ) found that in 2011, sedatives and hypnotics were a leading source for adverse drug events seen in the hospital setting. Approximately 2.8% of all ADEs present on admission and 4.4% of ADEs that originated during a hospital stay were caused by a sedative or hypnotic drug. A second study by AHRQ found that in 2011, the most common specifically identified causes of adverse drug events that originated during hospital stays in the U.S. were steroids, antibiotics, opiates/narcotics, and anticoagulants. Patients treated in urban teaching hospitals had higher rates of ADEs involving antibiotics and opiates/narcotics compared to those treated in urban nonteaching hospitals. Those treated in private, nonprofit hospitals had higher rates of most ADE causes compared to patients treated in public or private, for-profit hospitals.
Medication-related harm (MRH) is common after hospital discharge in older adults, but methodological inconsistencies between studies and a paucity of data on risk factors limit clear understanding of the epidemiology. Reported incidence ranged widely, from 0.4% to 51.2% of participants, and 35% to 59% of harm was preventable. Medication-related harm incidence within 30 days after discharge ranged from 167 to 500 events per 1,000 individuals discharged (17–51% of individuals).
In the U.S., females had a higher rate of ADEs involving opiates and narcotics than males in 2011, while male patients had a higher rate of anticoagulant ADEs. Nearly 8 in 1,000 adults aged 65 years or older experienced one of the four most common ADEs (steroids, antibiotics, opiates/narcotics, and anticoagulants) during hospitalization. A study showed that 48% of patients had an adverse drug reaction to at least one drug, and pharmacist involvement helps to pick up adverse drug reactions.
In 2012, McKinsey & Company concluded that the cost of the 50–100 million preventable error-related adverse drug events would be between US$18 billion and US$115 billion.
An article published in The Journal of the American Medical Association (JAMA) in 2016 reported adverse drug event statistics from emergency departments around the United States in 2013–2014. From this article, the estimated prevalence of adverse drug events presenting to the emergency department (ED) was 4 events per 1,000 people, and 57.1% of these events occurred in females. Of all the adverse drug events presented to the emergency department documented in this article, 17.6% were from anticoagulants, 16.1% from antibiotics, and 13.3% from diabetic agents.
== See also ==
== References ==
== Further reading ==
Incidence of adverse drug reactions in human immune deficiency virus-positive patients using highly active antiretroviral therapy PMC 3312730
== External links ==
Failure rate is the frequency with which any system or component fails, expressed in failures per unit of time. It thus depends on the system conditions, time interval, and total number of systems under study.
It can describe electronic, mechanical, or biological systems, in fields such as systems and reliability engineering, medicine and biology, or insurance and finance. It is usually denoted by the Greek letter λ (lambda).
In real-world applications, the failure probability of a system usually differs over time; failures occur more frequently in early-life ("burning in"), or as a system ages ("wearing out"). This is known as the bathtub curve, where the middle region is called the "useful life period".
== Mean time between failures (MTBF) ==
The mean time between failures (MTBF, 1/λ) is often reported instead of the failure rate, as numbers such as "2,000 hours" are more intuitive than numbers such as "0.0005 per hour".
However, this is only valid if the failure rate λ(t) is actually constant over time, such as within the flat region of the bathtub curve. In many cases where MTBF is quoted, it refers only to this region; thus it cannot be used to give an accurate calculation of the average lifetime of a system, as it ignores the "burn-in" and "wear-out" regions.
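A quick sketch of this reciprocal relationship, using the 0.0005-per-hour figure quoted above:

```python
failure_rate = 0.0005      # failures per hour, assumed constant (flat region)
mtbf = 1 / failure_rate    # mean time between failures, in hours

print(round(mtbf))  # 2000
```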
MTBF appears frequently in engineering design requirements, and governs the frequency of required system maintenance and inspections. A similar ratio used in the transport industries, especially in railways and trucking, is "mean distance between failures" - allowing maintenance to be scheduled based on distance travelled, rather than at regular time intervals.
== Mathematical definition ==
The simplest definition of failure rate λ is simply the number of failures Δn per time interval Δt:
{\displaystyle \lambda ={\frac {\Delta n}{\Delta t}}}
which would depend on the number of systems under study, and the conditions over the time period.
=== Failures over time ===
To accurately model failures over time, a cumulative failure distribution F(t) must be defined, which can be any cumulative distribution function (CDF) that gradually increases from 0 to 1. In the case of many identical systems, this may be thought of as the fraction of systems failing over time t, after all starting operation at time t = 0; or in the case of a single system, as the probability of the system having its failure time T before time t:
{\displaystyle F(t)=\operatorname {P} (T\leq t).}
As CDFs are defined by integrating a probability density function, the failure probability density f(t) is defined such that:
{\displaystyle F(t)=\int _{0}^{t}f(\tau )\,d\tau }
where τ is a dummy integration variable. Here f(t) can be thought of as the instantaneous failure rate, i.e. the fraction of failures per unit time, as the size of the time interval Δt tends towards 0:
{\displaystyle f(t)=\lim _{\Delta t\to 0^{+}}{\frac {P(t<T\leq t+\Delta t)}{\Delta t}}.}
=== Hazard rate ===
A concept closely related to, but distinct from, the instantaneous failure rate f(t) is the hazard rate (or hazard function), h(t).
In the many-system case, this is defined as the proportional failure rate of the systems still functioning at time t (as opposed to f(t), which is expressed as a proportion of the initial number of systems).
For convenience we first define the reliability (or survival function) as:
{\displaystyle R(t)=1-F(t)}
then the hazard rate is simply the instantaneous failure rate, scaled by the fraction of surviving systems at time t:
{\displaystyle h(t)={\frac {f(t)}{R(t)}}}
In the probabilistic sense, for a single system this can be interpreted as the conditional probability, per unit time, of the failure time T falling within the interval t to t + Δt, given that the system or component has already survived to time t:
{\displaystyle h(t)=\lim _{\Delta t\to 0^{+}}{\frac {P(t<T\leq t+\Delta t\mid T>t)}{\Delta t}}.}
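This conditional-probability definition can be checked empirically. The sketch below assumes a constant-hazard exponential lifetime with λ = 2 (an arbitrary choice) and estimates h(t) from simulated failure times at two different ages; both estimates come out close to λ:

```python
import random

random.seed(1)
lam = 2.0
# Simulated failure times for a constant hazard rate lam.
times = [random.expovariate(lam) for _ in range(200_000)]

def hazard_estimate(t, dt, samples):
    """Estimate h(t) = P(t < T <= t + dt | T > t) / dt from samples."""
    survivors = [x for x in samples if x > t]
    failed = sum(1 for x in survivors if x <= t + dt)
    return failed / (len(survivors) * dt)

print(hazard_estimate(0.1, 0.02, times))  # close to 2.0
print(hazard_estimate(0.5, 0.02, times))  # close to 2.0
```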
==== Conversion to cumulative failure rate ====
To convert between h(t) and F(t), we can solve the differential equation
{\displaystyle h(t)={\frac {f(t)}{R(t)}}=-{\frac {R'(t)}{R(t)}}}
with initial condition R(0) = 1, which yields
{\displaystyle F(t)=1-\exp {\left(-\int _{0}^{t}h(\tau )d\tau \right)}.}
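This relationship can be verified numerically. The sketch below assumes a Weibull-type hazard h(t) = k·t^(k−1) with k = 2 (a hypothetical choice), for which the closed form is F(t) = 1 − exp(−t^k):

```python
import math

def cumulative_failure(h, t, steps=10_000):
    """F(t) = 1 - exp(-integral of h from 0 to t), via the trapezoidal rule."""
    dt = t / steps
    area = sum((h(i * dt) + h((i + 1) * dt)) / 2 * dt for i in range(steps))
    return 1 - math.exp(-area)

k = 2.0
h = lambda t: k * t ** (k - 1)   # increasing (wear-out) hazard

print(cumulative_failure(h, 1.0))   # matches the closed form 1 - exp(-1)
print(1 - math.exp(-1.0 ** k))
```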
Thus for a collection of identical systems, only one of the hazard rate h(t), failure probability density f(t), or cumulative failure distribution F(t) need be defined.
Confusion can occur, as the notation λ(t) for "failure rate" often refers to the function h(t) rather than f(t).
=== Constant hazard rate model ===
There are many possible functions that could be chosen to represent failure probability density f(t) or hazard rate h(t), based on empirical or theoretical evidence, but the most common and most easily understood choice is to set
{\displaystyle f(t)=\lambda e^{-\lambda t},}
an exponential function with scaling constant λ. As seen in the figures above, this represents a gradually decreasing failure probability density.
The CDF F(t) is then calculated as:
{\displaystyle F(t)=\int _{0}^{t}\lambda e^{-\lambda \tau }\,d\tau =1-e^{-\lambda t},}
which can be seen to gradually approach 1 as t → ∞, representing the fact that eventually all systems under study will fail.
The hazard rate function is then:
{\displaystyle h(t)={\frac {f(t)}{R(t)}}={\frac {\lambda e^{-\lambda t}}{e^{-\lambda t}}}=\lambda .}
In other words, in this particular case only, the hazard rate is constant over time.
This illustrates the difference between hazard rate and failure probability density: as the number of systems surviving at time t > 0 gradually reduces, the total failure rate also reduces, but the hazard rate remains constant. In other words, the probabilities of each individual system failing do not change over time as the systems age; they are "memory-less".
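The "memory-less" property can be seen directly from the survival function: for the exponential model, P(T > s + t | T > s) equals P(T > t) for any age s. A minimal check (λ = 0.5 and the ages are arbitrary choices):

```python
import math

lam = 0.5
R = lambda t: math.exp(-lam * t)   # survival function R(t) = 1 - F(t)

s, t = 3.0, 2.0
conditional = R(s + t) / R(s)      # P(T > s + t | T > s)
unconditional = R(t)               # P(T > t)

print(abs(conditional - unconditional) < 1e-12)  # True
```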
=== Other models ===
For many systems, a constant hazard function may not be a realistic approximation; the chance of failure of an individual component may depend on its age. Therefore, other distributions are often used.
For example, the deterministic distribution increases hazard rate over time (for systems where wear-out is the most important factor), while the Pareto distribution decreases it (for systems where early-life failures are more common). The commonly-used Weibull distribution combines both of these effects, as do the log-normal and hypertabastic distributions.
After modelling a given distribution and parameters for h(t), the failure probability density f(t) and cumulative failure distribution F(t) can be predicted using the given equations.
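As a sketch of that prediction step, assume a Weibull hazard with shape k = 1.5 and scale 100 (hypothetical values): F(t) then has the closed form 1 − exp(−(t/scale)^k), and f(t) follows as h(t)·(1 − F(t)):

```python
import math

k, scale = 1.5, 100.0                    # assumed shape/scale; k > 1 means wear-out

def h(t):                                # hazard rate
    return (k / scale) * (t / scale) ** (k - 1)

def F(t):                                # cumulative failure distribution
    return 1 - math.exp(-((t / scale) ** k))

def f(t):                                # failure probability density, f = h * R
    return h(t) * (1 - F(t))

print(h(10) < h(50))                     # True: hazard increases with age
```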
== Measuring failure rate ==
Failure rate data can be obtained in several ways. The most common means are:
Estimation
From field failure rate reports, statistical analysis techniques can be used to estimate failure rates. For accurate failure rates the analyst must have a good understanding of equipment operation, procedures for data collection, the key environmental variables impacting failure rates, how the equipment is used at the system level, and how the failure data will be used by system designers.
Historical data about the device or system under consideration
Many organizations maintain internal databases of failure information on the devices or systems that they produce, which can be used to calculate failure rates for those devices or systems. For new devices or systems, the historical data for similar devices or systems can serve as a useful estimate.
Government and commercial failure rate data
Handbooks of failure rate data for various components are available from government and commercial sources. MIL-HDBK-217F, Reliability Prediction of Electronic Equipment, is a military standard that provides failure rate data for many military electronic components. Several failure rate data sources are available commercially that focus on commercial components, including some non-electronic components.
Prediction
Time lag is one of the serious drawbacks of all failure rate estimations. Often by the time the failure rate data are available, the devices under study have become obsolete. Due to this drawback, failure-rate prediction methods have been developed. These methods may be used on newly designed devices to predict the device's failure rates and failure modes. Two approaches have become well known, Cycle Testing and FMEDA.
Life Testing
The most accurate source of data is to test samples of the actual devices or systems in order to generate failure data. This is often prohibitively expensive or impractical, so that the previous data sources are often used instead.
Cycle Testing
Mechanical movement is the predominant failure mechanism causing mechanical and electromechanical devices to wear out. For many devices, the wear-out failure point is measured by the number of cycles performed before the device fails, and can be discovered by cycle testing. In cycle testing, a device is cycled as rapidly as practical until it fails. When a collection of these devices is tested, the test will run until 10% of the units fail dangerously.
FMEDA
Failure modes, effects, and diagnostic analysis (FMEDA) is a systematic analysis technique to obtain subsystem / product level failure rates, failure modes and design strength. The FMEDA technique considers:
All components of a design,
The functionality of each component,
The failure modes of each component,
The effect of each component failure mode on the product functionality,
The ability of any automatic diagnostics to detect the failure,
The design strength (de-rating, safety factors) and
The operational profile (environmental stress factors).
Given a component database calibrated with field failure data that is reasonably accurate, the method can predict product-level failure rate and failure mode data for a given application. The predictions have been shown to be more accurate than field warranty return analysis or even typical field failure analysis, given that these methods depend on reports that typically do not have sufficiently detailed information in failure records.
== Examples ==
=== Decreasing failure rates ===
A decreasing failure rate describes cases where early-life failures are common and corresponds to the situation where h(t) is a decreasing function.
This can describe, for example, the period of infant mortality in humans, or the early failure of transistors due to manufacturing defects.
Decreasing failure rates have been found in the lifetimes of spacecraft - Baker and Baker commenting that "those spacecraft that last, last on and on."
The hazard rate of aircraft air conditioning systems was found to have an exponentially decreasing distribution.
=== Renewal processes ===
In special processes called renewal processes, where the time to recover from failure can be neglected, the likelihood of failure remains constant with respect to time.
For a renewal process with DFR renewal function, inter-renewal times are concave. Brown conjectured the converse, that DFR is also necessary for the inter-renewal times to be concave, however it has been shown that this conjecture holds neither in the discrete case nor in the continuous case.
=== Coefficient of variation ===
When the failure rate is decreasing the coefficient of variation is ⩾ 1, and when the failure rate is increasing the coefficient of variation is ⩽ 1. Note that this result only holds when the failure rate is defined for all t ⩾ 0 and that the converse result (coefficient of variation determining nature of failure rate) does not hold.
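This can be checked for the Weibull family, where shape k < 1 gives a decreasing and k > 1 an increasing failure rate; the sketch below computes the coefficient of variation from the standard Weibull moment formula E[T^n] = scale^n·Γ(1 + n/k) (the scale factor cancels):

```python
from math import gamma, sqrt

def weibull_cv(k):
    """Coefficient of variation of a Weibull lifetime with shape k."""
    m1 = gamma(1 + 1 / k)        # first moment (scale = 1)
    m2 = gamma(1 + 2 / k)        # second moment
    return sqrt(m2 - m1 ** 2) / m1

print(weibull_cv(0.5) > 1)  # True: decreasing failure rate, CV > 1
print(weibull_cv(2.0) < 1)  # True: increasing failure rate, CV < 1
```

At k = 1 the Weibull reduces to the exponential distribution, whose CV is exactly 1, matching the boundary between the two cases.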
=== Units ===
Failure rates can be expressed using any measure of time, but hours is the most common unit in practice. Other units, such as miles, revolutions, etc., can also be used in place of "time" units.
Failure rates are often expressed in engineering notation as failures per million, or 10−6, especially for individual components, since their failure rates are often very low.
The Failures In Time (FIT) rate of a device is the number of failures that can be expected in one billion (109) device-hours of operation (e.g. 1,000 devices for 1,000,000 hours, or 1,000,000 devices for 1,000 hours each, or some other combination). This term is used particularly by the semiconductor industry.
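A sketch of the unit conversion (the 5-FIT component rate is an arbitrary example):

```python
failures_per_hour = 5e-9                 # assumed component failure rate
fit = failures_per_hour * 1e9            # FIT: failures per 10^9 device-hours

# 1,000 devices for 1,000,000 hours gives 10^9 device-hours in total.
expected_failures = failures_per_hour * 1000 * 1_000_000

print(round(fit), round(expected_failures))  # 5 5
```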
=== Combinations of failure types ===
If a complex system consists of many parts, and the failure of any single part means the failure of the entire system, then the total failure rate is simply the sum of the individual failure rates of its parts
{\displaystyle \lambda _{S}=\lambda _{P1}+\lambda _{P2}+\ldots }
however, this assumes that the failure rate λ(t) is constant, that the units are consistent (e.g. failures per million hours), and that the rates are not expressed as ratios or as probability densities. This is useful to estimate the failure rate of a system when individual components or subsystems have already been tested.
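A minimal sketch of this additive rule, with three hypothetical parts rated in failures per million hours:

```python
# Assumed part failure rates, all in failures per million hours.
part_rates = [10.0, 5.0, 2.5]

system_rate = sum(part_rates)            # failures per million hours
system_mtbf = 1e6 / system_rate          # hours, assuming constant rates

print(system_rate)          # 17.5
print(round(system_mtbf))   # 57143
```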
Adding "redundant" components to eliminate a single point of failure may thus actually increase the overall failure rate; however, it reduces the "mission failure" rate and increases the "mean time between critical failures" (MTBCF).
Combining failure or hazard rates that are time-dependent is more complicated. For example, mixtures of Decreasing Failure Rate (DFR) variables are also DFR. Mixtures of exponentially distributed failure rates are hyperexponentially distributed.
=== Simple example ===
Suppose it is desired to estimate the failure rate of a certain component. Ten identical components are each tested until they either fail or reach 1,000 hours, at which time the test is terminated. A total of 7,502 component-hours of testing is performed, and 6 failures are recorded.
The estimated failure rate is:
{\displaystyle {\frac {6{\text{ failures}}}{7502{\text{ hours}}}}=0.0007998\,{\frac {\text{failures}}{\text{hour}}}}
which could also be expressed as a MTBF of 1,250 hours, or approximately 800 failures for every million hours of operation.
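The arithmetic of this example can be reproduced directly:

```python
failures = 6
component_hours = 7502

rate = failures / component_hours        # failures per hour
mtbf = component_hours / failures        # hours per failure

print(round(rate, 7))       # 0.0007998 failures per hour
print(round(mtbf))          # 1250 hours
print(round(rate * 1e6))    # 800 failures per million hours
```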
== See also ==
== References ==
== Further reading ==
Goble, William M. (2018), Safety Instrumented System Design: Techniques and Design Verification, Research Triangle Park, NC: International Society of Automation
Blanchard, Benjamin S. (1992). Logistics Engineering and Management (Fourth ed.). Englewood Cliffs, New Jersey: Prentice-Hall. pp. 26–32. ISBN 0135241170.
Ebeling, Charles E. (1997). An Introduction to Reliability and Maintainability Engineering. Boston: McGraw-Hill. pp. 23–32. ISBN 0070188521.
Federal Standard 1037C
Kapur, K. C.; Lamberson, L. R. (1977). Reliability in Engineering Design. New York: John Wiley & Sons. pp. 8–30. ISBN 0471511919.
Knowles, D. I. (1995). "Should We Move Away From 'Acceptable Failure Rate'?". Communications in Reliability Maintainability and Supportability. 2 (1). International RMS Committee, USA: 23.
Modarres, M.; Kaminskiy, M.; Krivtsov, V. (2010). Reliability Engineering and Risk Analysis: A Practical Guide (2nd ed.). CRC Press. ISBN 9780849392474.
Mondro, Mitchell J. (June 2002). "Approximation of Mean Time Between Failure When a System has Periodic Maintenance" (PDF). IEEE Transactions on Reliability. 51 (2): 166–167. doi:10.1109/TR.2002.1011521.
Rausand, M.; Hoyland, A. (2004). System Reliability Theory; Models, Statistical methods, and Applications. New York: John Wiley & Sons. ISBN 047147133X.
Turner, T.; Hockley, C.; Burdaky, R. (1997). The Customer Needs A Maintenance-Free Operating Period. Leatherhead, Surrey, UK: ERA Technology Ltd. {{cite book}}: |work= ignored (help)
U.S. Department of Defense, (1991) Military Handbook, “Reliability Prediction of Electronic Equipment, MIL-HDBK-217F, 2
== External links ==
Bathtub curve issues Archived 2014-11-29 at the Wayback Machine, ASQC
Fault Tolerant Computing in Industrial Automation Archived 2014-03-26 at the Wayback Machine by Hubert Kirrmann, ABB Research Center, Switzerland
The one-factor-at-a-time method, also known as one-variable-at-a-time, OFAT, OF@T,
OFaaT, OVAT, OV@T, OVaaT, or monothetic analysis is a method of designing experiments involving the testing of factors, or causes, one at a time instead of multiple factors simultaneously.
== Advantages ==
OFAT is favored by non-experts, especially in situations where the data is cheap and abundant.
There exist cases where the mental effort required to conduct a complex multi-factor analysis exceeds the effort required to acquire extra data, in which case OFAT might make sense. Furthermore, some researchers have shown that OFAT can be more effective than fractional factorials under certain conditions (number of runs is limited, primary goal is to attain improvements in the system, and experimental error is not large compared to factor effects, which must be additive and independent of each other).
== Disadvantages ==
In contrast, in situations where data is precious and must be analyzed with care, it is almost always better to change multiple factors at once. A middle-school-level example illustrating this point is the family of balance puzzles, which includes the Twelve Coins puzzle. At the undergraduate level, one could compare Bevington's GRIDLS versus GRADLS. The latter is far from optimal, but the former, which changes only one variable at a time, is worse. See also the factorial experimental design methods pioneered by Sir Ronald A. Fisher. Reasons for disfavoring OFAT include:
OFAT requires more runs for the same precision in effect estimation
OFAT cannot estimate interactions
OFAT can miss optimal settings of factors.
Designed experiments are nearly always preferred to OFAT, with many types and methods available in addition to fractional factorials, which, though usually requiring more runs than OFAT, do address the three concerns above. One modern design over which OFAT has no advantage in number of runs is the Plackett-Burman which, by having all factors vary simultaneously (an important quality in experimental designs), gives generally greater precision in effect estimation.
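The inability of OFAT to estimate interactions can be shown with a toy deterministic response (the model y = x1 + x2 + 3·x1·x2 is invented for illustration): OFAT from a (−1, −1) baseline not only misses the interaction but gets the sign of the x1 effect wrong, while a 2² full factorial recovers both:

```python
def y(x1, x2):                 # hypothetical response with a strong interaction
    return x1 + x2 + 3 * x1 * x2

# OFAT: change x1 alone from the baseline (-1, -1).
ofat_x1 = y(+1, -1) - y(-1, -1)

# 2^2 full factorial: average each factor's effect over the other factor.
fact_x1 = ((y(+1, -1) + y(+1, +1)) - (y(-1, -1) + y(-1, +1))) / 2
inter = ((y(+1, +1) - y(-1, +1)) - (y(+1, -1) - y(-1, -1))) / 2

print(ofat_x1)   # -4: misleading sign for the x1 effect
print(fact_x1)   # 2.0: correct main effect
print(inter)     # 6.0: interaction, invisible to OFAT
```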
== See also ==
Ceteris paribus
== References ==
A robust parameter design, introduced by Genichi Taguchi, is an experimental design used to exploit the interaction between control and uncontrollable noise variables by robustification: finding the settings of the control factors that minimize response variation from uncontrollable factors. Control variables are variables of which the experimenter has full control. Noise variables lie on the other side of the spectrum. While these variables may be easily controlled in an experimental setting, outside of the experimental world they are very hard, if not impossible, to control. Robust parameter designs use a naming convention similar to that of FFDs. A 2(m1+m2)-(p1+p2) design is a 2-level design where m1 is the number of control factors, m2 is the number of noise factors, p1 is the level of fractionation for control factors, and p2 is the level of fractionation for noise factors.
Consider an RPD cake-baking example from Montgomery (2005), where an experimenter wants to improve the quality of cake. While the cake manufacturer can control the amount of flour, amount of sugar, amount of baking powder, and coloring content of the cake, other factors are uncontrollable, such as oven temperature and bake time. The manufacturer can print instructions for a bake time of 20 minutes but in the real world has no control over consumer baking habits. Variations in the quality of the cake can arise from baking at 325° instead of 350° or from leaving the cake in the oven for a slightly too short or too long period of time. Robust parameter designs seek to minimize the effects of noise factors on quality. For this example, the manufacturer hopes to minimize the effects in fluctuation of bake time on cake quality, and in doing this the optimal settings for the control factors are required.
RPDs are primarily used in a simulation setting where uncontrollable noise variables are generally easily controlled. Whereas in the real world, noise factors are difficult to control; in an experimental setting, control over these factors is easily maintained. For the cake-baking example, the experimenter can fluctuate bake-time and oven-temperature to understand the effects of such fluctuation that may occur when control is no longer in his/her hands.
Robust parameter designs are very similar to fractional factorial designs (FFDs) in that the optimal design can be found using Hadamard matrices, principles of effect hierarchy and factor sparsity are maintained, and aliasing is present when full RPDs are fractionated. Much like FFDs, RPDs are screening designs and can provide a linear model of the system at hand. What is meant by effect hierarchy for FFDs is that higher-order interactions tend to have a negligible effect on the response. As stated in Carraway, main effects are most likely to have an effect on the response, then two-factor interactions, then three-factor interactions, and so on. The concept of effect sparsity is that not all factors will have an effect on the response. These principles are the foundation for fractionating Hadamard matrices. By fractionating, experimenters can form conclusions in fewer runs and with fewer resources. Oftentimes, RPDs are used at the early stages of an experiment. Because two-level RPDs assume linearity among factor effects, other methods may be used to model curvature after the number of factors has been reduced.
== Construction ==
Hadamard matrices are square matrices consisting of only + and −. If a Hadamard matrix is normalized and fractionated, a design pattern is obtained. However, not all designs are equal, which means that some designs are better than others, and specific design criteria are used to determine which design is best. After obtaining a design pattern, experimenters generally know to which setting each factor should be set. Each row, in the pattern, indicates a run, and each column indicates a factor. For the partial design pattern shown left, the experimenter has identified seven factors that may have an effect on the response and hopes to gain insight as to which factors have an effect in eight runs. In the first run, factors 1, 4, 5, and 6 are set to high levels while factors 2, 3, and 7 are set to low levels. Low levels and high levels are settings typically defined by the subject matter expert. These values are extremes but not so extreme that the response is pushed into non-smooth regions. After each run, results are obtained; and by fluctuating multiple factors in single runs instead of using the OFAT method, interactions between variables may be estimated as well as the individual factor effects. If two factors interact, then the effect one factor has on the response is different depending on the settings of another factor.
Fractionating Hadamard matrices appropriately is very time-consuming. Consider a 24-run design accommodating six factors. The number of Hadamard designs from each Hadamard matrix is 23 choose 6; that is 100,947 designs from each 24×24 Hadamard matrix. Since there are 60 Hadamard matrices of that size, the total number of designs to compare is 6,056,820. Leoppky, Bingham, and Sitter (2006) used complete search methodology and have listed the best RPDs for 12, 16, and 20 runs. Because complete search work is so exhaustive, the best designs for larger run sizes are often not readily available. In that case, other statistical methods may be used to fractionate a Hadamard matrix in such a way that allows only a tolerable amount of aliasing. Efficient algorithms such as forward selection and backward elimination have been produced for FFDs, but due to the complexity of aliasing introduced by distinguishing control and noise variables, these methods have not yet been proven effective for RPDs.
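The counting in this paragraph is easy to reproduce:

```python
import math

designs_per_matrix = math.comb(23, 6)   # column choices per 24x24 Hadamard matrix
total = 60 * designs_per_matrix         # 60 such Hadamard matrices exist

print(designs_per_matrix)  # 100947
print(total)               # 6056820
```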
== History and design criteria ==
To fully understand the design criteria, an understanding of history and fractional factorial designs is necessary. FFDs seek to understand which factors have an effect on a response and seek to optimize the response by finding the appropriate factor settings. Unlike RPDs, FFDs do not distinguish between control and noise variables.
=== Resolution and minimum aberration ===
In 2003, Bingham and Sitter defined maximum resolution and minimum aberration for two-level fractional factorial designs. Resolution determines the worst amount of aliasing present, and aberration determines how much of that worst-case aliasing is present in the design. Resolution III designs alias main effects with two-factor interactions. Resolution IV designs alias main effects with three-factor interactions. Resolution V designs alias main effects with four-factor interactions. As the resolution increases, the level of aliasing becomes less serious because higher order interactions tend to have negligible effects on the response. Resolution measures regular designs; that is, effects are either fully aliased or not aliased at all. Consider the following statement, "Factor A is aliased with the two-factor interaction of factors BC." This means that if the two-factor interaction BC has an effect on the response, then the estimation of factor A's effect on the response is contaminated because factor A's effect cannot be distinguished from BC's effect. Clearly a resolution V design is preferred over a resolution IV design.
Designs of the same resolution are not always equal, and the knowledge of which type of aliasing is the worst involved is not enough to know which design is better. Instead further investigation of how much of the worst-case aliasing is needed. This idea is known as minimum aberration. Better designs contain the least amount of the worst-case aliasing. If designs D1 and D2 are both resolution V designs, but D1 has more instances of main effects aliased with 4-factor interactions, then D2 is the better design. D2 is the better design because there is a larger quantity of well-estimated effects.
=== Generalized resolution and generalized minimum aberration ===
Fontana, Pistone, and Rogantin had created an indicator function for two-level fractional factorial designs, and in 2003 Ye expanded the indicator function for regular and nonregular designs. In doing this, Ye established generalized resolution and generalized minimum aberration. Whereas regular designs are designs with run size equaling a power of two; nonregular designs can be any multiple of four. In nonregular designs, effects can be fully aliased, partially aliased, or not aliased at all. Generalized minimum aberration and generalized resolution take this partial aliasing into account.
Formally, Ye (2003) distinguishes between regular and nonregular designs and states that any polynomial function can be written as
{\displaystyle {\begin{aligned}F(x)&=\sum _{J\in P}b_{J}X_{J}(x)\\&=\sum _{J\in PC}\sum _{K\in PN}b_{J\cup K}X_{J\cup K}(x)\end{aligned}}}
Where:
{\displaystyle b_{L}={\frac {1}{2^{m}}}\sum _{x\in F}X_{L}(x);\quad b_{0}={\frac {n}{2^{m}}}}
If {\displaystyle \left|{\frac {b_{J\cup K}}{b_{0}}}\right|=1} then the design is regular; otherwise partial aliasing exists.
While Ye developed this indicator function, Bingham and Sitter were working on clarification of resolution and aberration for robust parameter designs. In 2006, Leoppky, Bingham, and Sitter published the extended word-length pattern and indicator function for robust parameter designs. Because RPDs are concerned about minimizing process variation due to noise factors, the priority of effects changes from the hierarchy of effects of FFDs. Main effects are still the first priority, and two-factor interactions are still the second priority; but if any interactions have a control-by-noise (CN) interaction, then that interaction is increased by 0.5 on the priority scale. For example, a CCN three-factor interaction would be a priority 3 in a FFD because three-factor interactions are the third priority, two-factor interactions are the second priority, and main effects are the first priority. However, since RPDs are concerned about noise variables, the CCN interaction is a priority 2.5 effect. The CN interaction bumps the priority up by 0.5; so the traditional priority 3 minus the 0.5 for the CN interaction results in a 2.5 priority. A full table of priorities can be found in Leoppky, Bingham, and Sitter (2006).
== Design comparison ==
Further investigation of the principles introduced will provide a deeper understanding of design comparison.
For regular fractional factorial designs, the word length will determine what types of aliasing are present. For example, the word "2367" can be broken into aliasing structures as follows:
The word 2367 is of length 4, and the worst-case aliasing is that main effects are aliased with three-factor interactions, and two-factor interactions are aliased with other two-factor interactions.
Word lengths become less simplistic when talking about RPDs because the priority of effects has changed. Consider the word 23578 where factors 2, 3, and 5 are control variables and factors 7 and 8 are noise variables. The following aliasing strings can be derived from this word:
2=3578, 3=2578 5=2378 or C=CCNN
7=2358, 8=2357 or N=CCCN
23=578, 25=378, 35=278 or CC=CNN
27=358 and 28=357 or CN=CCN
235=78 or CCC=NN
Now that one can see what types of aliasing occur, one must use Leoppky, Bingham, and Sitter's priority of effects to determine the worst amount of aliasing present. This means that any CN interaction bumps that priority up by 0.5; and the word length is obtained by summing each side of the aliasing string. The table below finds the sums for each aliasing type found in the word 23578.
Since lower sums indicate worse aliasing, this word has the worst-case aliasing of length 4. It is important to understand that in an FFD the differentiation between control and noise would not be taken into account, and this word would be of length 5; but RPDs are concerned with this distinction and even though the word appears to be length 5, design criteria determines priority 4. Now, assume design D1 contains only the word just analyzed (23578). If D1 was compared to D2, and the worst-case aliasing found in D2 was priority 3.5, then D1 would be the better design. If, however, the worst-case aliasing of D2 was priority 4, then minimum aberration must be taken into consideration. For each design, we would calculate the frequencies of each type of worst-case aliasing. The better design would be chosen as the design that minimizes the occurrence of worst-case aliasing. These frequencies can be organized using the extended word length pattern (EWLP).
=== Notation ===
The notion of minimum aberration can be understood from the definition provided in Loeppky, Bingham, and Sitter (2006):
For any two 2^((m1+m2)−(p1+p2)) fractional factorial robust parameter designs, D1 and D2, we say that D1 has less aberration than D2 if there exists an r such that Bi(D1) = Bi(D2) for all i < r − 1 and Br(D1) < Br(D2). If no other design has less aberration than D1, then D1 is the minimum aberration fractional factorial robust parameter design.
Loeppky, Bingham, and Sitter (2006) also provide the RPD indicator function as:
For a given design, D, and a run, x ∈ D, define a contrast XL(x) = Πl∈L xl on D, where L ∈ P and P is the set of all subsets of {1, 2, ..., m}. Further, define PC to be the set of all subsets of the control-factor indices and PN to be the set of all subsets of the noise-factor indices, so that an element of P is of the form L ≡ J ∪ K where J ∈ PC and K ∈ PN.
=== Extended word-length pattern ===
Bingham and Sitter (2006) generate the EWLP by providing the following concept:
Let F be a robust parameter design with indicator function F(x) = ΣJ∈PC ΣK∈PN bJ∪K XJ∪K(x). If bJ∪K ≠ 0, then XJ∪K is a word of the design F with word length r + (1 − |bJ∪K/b0|)/2, where |bJ∪K/b0| is a measure of the degree of confounding for the word XJ∪K. Further, let gr+l/2t be the number of words of length r + l/2t, where r = 2.0, 2.5, 3.0, ... according to Table 2.1. Thus, the robust parameter design extended word length pattern is (g2.0, ..., g2.0+(t−1)/2t, ..., gm−1, ..., gm+(t−1)/2t).
Consider designs D1 and D2 with the following EWLPs:
D1: [(0 0 3)(2 3 1)(2 5 5)]
D2: [(0 0 3)(2 4 0)(2 4 6)]
One reads an EWLP from left to right: the leftmost entries count the most serious aliasing, and the aliasing becomes less serious moving to the right. At the first position where the two patterns differ, D2 has one more occurrence of the more serious aliasing than D1, so D1 is the better design.
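Applying this left-to-right rule mechanically can be sketched as follows (a hypothetical helper; the EWLPs are flattened into tuples of frequencies, most serious aliasing first):

```python
def compare_ewlp(ewlp_a, ewlp_b):
    """Return -1 if the first design has less aberration, 1 if the
    second does, and 0 if the patterns are identical."""
    for a, b in zip(ewlp_a, ewlp_b):
        if a != b:
            # Fewer occurrences of the more serious aliasing wins.
            return -1 if a < b else 1
    return 0

d1 = (0, 0, 3, 2, 3, 1, 2, 5, 5)
d2 = (0, 0, 3, 2, 4, 0, 2, 4, 6)
# The first difference is at the fifth entry (3 vs 4).
```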
== Uses and examples ==
Design of experiments (DOE) is a fundamental part of experimentation, modeling, and simulation. Banks states, "Experimental design is concerned with reducing the time and effort associated with simulating by identifying the information needed to be gathered from each simulation replication, how many replications need to be made, and what model parameter changes need to be compared." After a conceptual model has been implemented as a programmed model, DOE is necessary to perform experimentation and obtain simulation results in the most timely and cost-efficient manner. The following examples demonstrate situations where RPDs can be used to draw significant conclusions.
=== Example 1 ===
Consider the permanent marker manufacturing example adapted from Brewer, Carraway, and Ingram (2010). The subject matter experts (SMEs) have identified seven factors that may affect the quality of the marker: amount of ink, propanol content, butanol content, diacetone content, quality of container, humidity, and temperature. Amount of ink, propanol content, butanol content, diacetone content, and quality of container are determined by the manufacturer; humidity and temperature, while easily controlled in an experimental setting, cannot be controlled once the product has left the manufacturer's hands. Even if the manufacturer advises keeping the marker between 35 and 80 degrees Fahrenheit, consumers may be in 90 degree weather or take little note of the advice. This variation is uncontrollable and affects the consumer's opinion of the product; therefore, the manufacturer wants the product to be robust to variations due to temperature.
To run every possible combination of the seven factors would require 128 (2^7) runs. By fractionating this design, however, the effects of the factors can be estimated in far fewer runs, making the experiment less costly and less time-consuming.
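For instance, a regular 16-run fraction of the 2^7 design can be generated from four basic factors and three generators. This is a minimal sketch; the generators E=ABC, F=ABD, G=ACD are an assumed illustrative choice, not ones tied to the marker study, and levels are coded ±1:

```python
from itertools import product

runs = []
for a, b, c, d in product((-1, 1), repeat=4):
    # Each generated factor is a product of basic-factor columns.
    e, f, g = a * b * c, a * b * d, a * c * d
    runs.append((a, b, c, d, e, f, g))

print(len(runs))  # 16 runs instead of the full 2**7 = 128
```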
After the RPD has been created, the quality of permanent marker is tested at the end of each run. This is an example of live simulation because in order to test the quality of the marker, simulating the humidity and temperature of the real-world is necessary. The permanent marker manufacturing company opts to simulate high or low temperatures and humidity instead of traveling to specific locations where the marker may be used. The manufacturer saves time and money and gets close to the same effect as someone using the marker in extreme weather conditions or elsewhere.
=== Example 2 ===
Imagine being hired as a store manager and wanting to increase efficiency of labor. You have noticed that the same number of people are staffed at all hours of the day, but the store is busier from noon until 3:30 pm and empty after 7:00 pm. You do not want to risk being understaffed, so you choose to simulate different scenarios to determine the best scheduling solution. Control factors that affect scheduling optimality may include the number of people on a shift, whereas uncontrollable factors may include weather and traffic flow.
A constructive model is implemented to understand the dilemma at hand, and an RPD is the method used to determine the settings of the control factors we need in order to minimize the effects of the noise factors. In other words, one can use an RPD to determine how many people are needed on each shift so that the store is not understaffed or overstaffed regardless of the weather conditions or flow of traffic.
== Analyzing ==
Because RPDs relate so closely to FFDs, the same analysis methods can be applied. ANOVA can be used to determine which factors are significant. Center points can be run to determine if curvature is present. Many statistics software packages have split-plot designs stored and ready for analysis. RPDs are screening designs and are often used to reduce the number of factors that are thought to have an effect on the response.
== References ==
== Further reading ==
Box, G.E.P., (1988), Signal-to-Noise Ratios, Performance Criteria, and Transformations (with discussion), Technometrics, 30 1-40.
Box, G.E.P., Hunter, W.G., and Hunter, J.S. (1978), Statistics for Experimenters. Wiley.
Castillo, E. (2007), Process Optimization: A Statistical Approach. Springer.
Deng, L.Y. and Tang, B. (1999), Generalized Resolution and Minimum Aberration Criteria for Plackett-Burman and Other Non-regular Factorial Designs, Statistica Sinica, 9 1071-1082.
Deng, L.Y. and Tang, B. (2002), Design Selection and Classification for Hadamard Matrices Using Generalized Minimum Aberration Criteria, Technometrics, 44 173-184.
Lawson, J. and Erjavec, J. (2001), Modern Statistics for Engineering and Quality Improvement. Duxbury.
Loeppky, J. (2004), Ranking Non-Regular Designs. Dissertation, Simon Fraser University.
Novosad, S. and Ingram, D. (2006), Optimal Non-regular Designs that Provide Alternative to the 16-Run and 32-Run Regular Fractional Factorial Designs. Arkansas State University, State University, AR.
Pistone, G. and Wynn, H.P. (1996), Generalized Confounding with Gröbner Bases, Biometrika, 83 653-666.
Taguchi, G. (1986), Introduction to Quality Engineering. New York: Quality Resources.
Tang, B. and Deng. L.Y. (1999), Minimum G2-aberration for Non-regular Fractional Factorial Designs, The Annals of Statistics, 27 1914-1926.
Wiley, A. and Ingram, D. (2007), Uncovering the Complex Aliasing Patterns of Some Non-regular Designs. Senior Honors Thesis, Arkansas State University, State University, AR. | Wikipedia/Robust_parameter_design_(RPD) |
In combinatorial mathematics, a block design is an incidence structure consisting of a set together with a family of subsets known as blocks, chosen such that number of occurrences of each element satisfies certain conditions making the collection of blocks exhibit symmetry (balance). Block designs have applications in many areas, including experimental design, finite geometry, physical chemistry, software testing, cryptography, and algebraic geometry.
Without further specifications the term block design usually refers to a balanced incomplete block design (BIBD), specifically (and also synonymously) a 2-design, which has been the most intensely studied type historically due to its application in the design of experiments. Its generalization is known as a t-design.
== Overview ==
A design is said to be balanced (up to t) if all t-subsets of the original set occur in equally many (i.e., λ) blocks. When t is unspecified, it can usually be assumed to be 2, which means that each pair of elements is found in the same number of blocks and the design is pairwise balanced. For t = 1, each element occurs in the same number of blocks (the replication number, denoted r) and the design is said to be regular. A block design in which all the blocks have the same size (usually denoted k) is called uniform or proper. The designs discussed in this article are all uniform. Block designs that are not necessarily uniform have also been studied; for t = 2 they are known in the literature under the general name pairwise balanced designs (PBDs). Any uniform design balanced up to t is also balanced in all lower values of t (though with different λ-values), so for example a pairwise balanced (t = 2) design is also regular (t = 1). When the balancing requirement fails, a design may still be partially balanced if the t-subsets can be divided into n classes, each with its own (different) λ-value. For t = 2 these are known as PBIBD(n) designs, whose classes form an association scheme.
Designs are usually said (or assumed) to be incomplete, meaning that the collection of blocks is not all possible k-subsets, thus ruling out a trivial design.
Block designs may or may not have repeated blocks. Designs without repeated blocks are called simple, in which case the "family" of blocks is a set rather than a multiset.
In statistics, the concept of a block design may be extended to non-binary block designs, in which blocks may contain multiple copies of an element (see blocking (statistics)). There, a design in which each element occurs the same total number of times is called equireplicate, which implies a regular design only when the design is also binary. The incidence matrix of a non-binary design lists the number of times each element is repeated in each block.
== Regular uniform designs (configurations) ==
The simplest type of "balanced" design (t = 1) is known as a tactical configuration or 1-design. The corresponding incidence structure in geometry is known simply as a configuration, see Configuration (geometry). Such a design is uniform and regular: each block contains k elements and each element is contained in r blocks. The number of set elements v and the number of blocks b are related by
{\displaystyle bk=vr}, which is the total number of element occurrences.
Every binary matrix with constant row and column sums is the incidence matrix of a regular uniform block design. Also, each configuration has a corresponding biregular bipartite graph known as its incidence or Levi graph.
== Pairwise balanced uniform designs (2-designs or BIBDs) ==
Given a finite set X (of elements called points) and integers k, r, λ ≥ 1, we define a 2-design (or BIBD, standing for balanced incomplete block design) B to be a family of k-element subsets of X, called blocks, such that any x in X is contained in r blocks, and any pair of distinct points x and y in X is contained in λ blocks. Here, the condition that any x in X is contained in r blocks is redundant, as shown below.
Here v (the number of elements of X, called points), b (the number of blocks), k, r, and λ are the parameters of the design. (To avoid degenerate examples, it is also assumed that v > k, so that no block contains all the elements of the set. This is the meaning of "incomplete" in the name of these designs.) In a table:
The design is called a (v, k, λ)-design or a (v, b, r, k, λ)-design. The parameters are not all independent; v, k, and λ determine b and r, and not all combinations of v, k, and λ are possible. The two basic equations connecting these parameters are
{\displaystyle bk=vr,}
obtained by counting the number of pairs (B, p) where B is a block and p is a point in that block, and
{\displaystyle \lambda (v-1)=r(k-1),}
obtained from counting for a fixed x the triples (x, y, B) where x and y are distinct points and B is a block that contains them both. This equation for every x also proves that r is constant (independent of x) even without assuming it explicitly, thus proving that the condition that any x in X is contained in r blocks is redundant and r can be computed from the other parameters.
The resulting b and r must be integers, which imposes conditions on v, k, and λ. These conditions are not sufficient as, for example, a (43,7,1)-design does not exist.
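A quick integrality check of these two equations can be sketched as follows (the function name is hypothetical). Note that (43,7,1) passes the check even though the design does not exist, illustrating that the conditions are necessary but not sufficient:

```python
from fractions import Fraction

def bibd_counts(v, k, lam):
    """Return (b, r) determined by r = lam*(v-1)/(k-1) and b = v*r/k,
    or None if either fails to be an integer."""
    r = Fraction(lam * (v - 1), k - 1)
    b = r * v / k
    if r.denominator != 1 or b.denominator != 1:
        return None
    return int(b), int(r)

# bibd_counts(7, 3, 1)  -> (7, 3): the Fano plane
# bibd_counts(43, 7, 1) -> (43, 7): integral, yet no such design exists
# bibd_counts(8, 3, 1)  -> None: r = 7/2 is not an integer
```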
The order of a 2-design is defined to be n = r − λ. The complement of a 2-design is obtained by replacing each block with its complement in the point set X. It is also a 2-design and has parameters v′ = v, b′ = b, r′ = b − r, k′ = v − k, λ′ = λ + b − 2r. A 2-design and its complement have the same order.
A fundamental theorem, Fisher's inequality, named after the statistician Ronald Fisher, is that b ≥ v in any 2-design.
A rather surprising and not very obvious (but very general) combinatorial result for these designs is that if the points are denoted by any arbitrarily chosen set of equally or unequally spaced numbers, no choice of such a set can make all block sums (that is, the sum of all points in a given block) constant. For other designs, such as partially balanced incomplete block designs, this may be possible; many such cases are discussed in the literature. It can also be observed trivially for magic squares or magic rectangles, which can be viewed as partially balanced incomplete block designs.
=== Examples ===
The unique (6,3,2)-design (v = 6, k = 3, λ = 2) has 10 blocks (b = 10) and each element is repeated 5 times (r = 5). Using the symbols 0 − 5, the blocks are the following triples:
012 013 024 035 045 125 134 145 234 235.
and the corresponding incidence matrix (a v×b binary matrix with constant row sum r and constant column sum k) is:
{\displaystyle {\begin{pmatrix}1&1&1&1&1&0&0&0&0&0\\1&1&0&0&0&1&1&1&0&0\\1&0&1&0&0&1&0&0&1&1\\0&1&0&1&0&0&1&0&1&1\\0&0&1&0&1&0&1&1&1&0\\0&0&0&1&1&1&0&1&0&1\\\end{pmatrix}}}
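The defining property of the (6,3,2)-design above can be verified directly. A sketch (the helper name is an assumption) that counts how often each pair of points occurs across the blocks:

```python
from itertools import combinations
from collections import Counter

def is_2_design(points, blocks, lam):
    """Check that every pair of distinct points lies in exactly lam
    blocks (block sizes are assumed uniform and not re-checked here)."""
    pair_count = Counter()
    for blk in blocks:
        for pair in combinations(sorted(blk), 2):
            pair_count[pair] += 1
    return all(pair_count[p] == lam for p in combinations(sorted(points), 2))

blocks_632 = ["012", "013", "024", "035", "045",
              "125", "134", "145", "234", "235"]
print(is_2_design("012345", blocks_632, 2))  # True: each pair occurs twice
```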
One of four nonisomorphic (8,4,3)-designs has 14 blocks with each element repeated 7 times. Using the symbols 0 − 7 the blocks are the following 4-tuples:
0123 0124 0156 0257 0345 0367 0467 1267 1346 1357 1457 2347 2356 2456.
The unique (7,3,1)-design is symmetric and has 7 blocks with each element repeated 3 times. Using the symbols 0 − 6, the blocks are the following triples:
013 026 045 124 156 235 346.
This design is associated with the Fano plane, with the elements and blocks of the design corresponding to the points and lines of the plane. Its corresponding incidence matrix can also be symmetric, if the labels or blocks are sorted the right way:
{\displaystyle \left({\begin{matrix}1&1&1&0&0&0&0\\1&0&0&1&1&0&0\\1&0&0&0&0&1&1\\0&1&0&1&0&1&0\\0&1&0&0&1&0&1\\0&0&1&1&0&0&1\\0&0&1&0&1&1&0\end{matrix}}\right)}
== Symmetric 2-designs (SBIBDs) ==
The case of equality in Fisher's inequality, that is, a 2-design with an equal number of points and blocks, is called a symmetric design. Symmetric designs have the smallest number of blocks among all the 2-designs with the same number of points.
In a symmetric design r = k holds as well as b = v, and, while it is generally not true in arbitrary 2-designs, in a symmetric design every two distinct blocks meet in λ points. A theorem of Ryser provides the converse. If X is a v-element set, and B is a v-element set of k-element subsets (the "blocks"), such that any two distinct blocks have exactly λ points in common, then (X, B) is a symmetric block design.
The parameters of a symmetric design satisfy
{\displaystyle \lambda (v-1)=k(k-1).}
This imposes strong restrictions on v, so the number of points is far from arbitrary. The Bruck–Ryser–Chowla theorem gives necessary, but not sufficient, conditions for the existence of a symmetric design in terms of these parameters.
The following are important examples of symmetric 2-designs:
=== Projective planes ===
Finite projective planes are symmetric 2-designs with λ = 1 and order n > 1. For these designs the symmetric design equation becomes:
{\displaystyle v-1=k(k-1).}
Since k = r we can write the order of a projective plane as n = k − 1 and, from the displayed equation above, we obtain v = (n + 1)n + 1 = n2 + n + 1 points in a projective plane of order n.
As a projective plane is a symmetric design, we have b = v, meaning that b = n2 + n + 1 also. The number b is the number of lines of the projective plane. There can be no repeated lines since λ = 1, so a projective plane is a simple 2-design in which the number of lines and the number of points are always the same. For a projective plane, k is the number of points on each line and it is equal to n + 1. Similarly, r = n + 1 is the number of lines with which a given point is incident.
For n = 2 we get a projective plane of order 2, also called the Fano plane, with v = 4 + 2 + 1 = 7 points and 7 lines. In the Fano plane, each line has n + 1 = 3 points and each point belongs to n + 1 = 3 lines.
Projective planes are known to exist for all orders which are prime numbers or powers of primes. They form the only known infinite family (with respect to having a constant λ value) of symmetric block designs.
=== Biplanes ===
A biplane or biplane geometry is a symmetric 2-design with λ = 2; that is, every set of two points is contained in two blocks ("lines"), while any two lines intersect in two points. They are similar to finite projective planes, except that rather than two points determining one line (and two lines determining one point), two points determine two lines (respectively, points). A biplane of order n is one whose blocks have k = n + 2 points; it has v = 1 + (n + 2)(n + 1)/2 points (since r = k).
The 18 known examples are listed below.
(Trivial) The order 0 biplane has 2 points (and lines of size 2; a 2-(2,2,2) design); it is two points, with two blocks, each consisting of both points. Geometrically, it is the digon.
The order 1 biplane has 4 points (and lines of size 3; a 2-(4,3,2) design); it is the complete design with v = 4 and k = 3. Geometrically, the points are the vertices of a tetrahedron and the blocks are its faces.
The order 2 biplane is the complement of the Fano plane: it has 7 points (and lines of size 4; a 2-(7,4,2)), where the lines are given as the complements of the (3-point) lines in the Fano plane.
The order 3 biplane has 11 points (and lines of size 5; a 2-(11,5,2)), and is also known as the Paley biplane after Raymond Paley; it is associated to the Paley digraph of order 11, which is constructed using the field with 11 elements, and is the Hadamard 2-design associated to the size 12 Hadamard matrix; see Paley construction I.
Algebraically this corresponds to the exceptional embedding of the projective special linear group PSL(2,5) in PSL(2,11) – see projective linear group: action on p points for details.
There are three biplanes of order 4 (and 16 points, lines of size 6; a 2-(16,6,2)). One is the Kummer configuration. These three designs are also Menon designs.
There are four biplanes of order 7 (and 37 points, lines of size 9; a 2-(37,9,2)).
There are five biplanes of order 9 (and 56 points, lines of size 11; a 2-(56,11,2)).
Two biplanes are known of order 11 (and 79 points, lines of size 13; a 2-(79,13,2)).
Biplanes of orders 5, 6, 8 and 10 do not exist, as shown by the Bruck–Ryser–Chowla theorem.
=== Hadamard 2-designs ===
An Hadamard matrix of size m is an m × m matrix H whose entries are ±1 such that HH⊤ = mIm, where H⊤ is the transpose of H and Im is the m × m identity matrix. An Hadamard matrix can be put into standardized form (that is, converted to an equivalent Hadamard matrix) where the first row and first column entries are all +1. If the size m > 2 then m must be a multiple of 4.
Given an Hadamard matrix of size 4a in standardized form, remove the first row and first column and convert every −1 to a 0. The resulting 0–1 matrix M is the incidence matrix of a symmetric 2-(4a − 1, 2a − 1, a − 1) design called an Hadamard 2-design.
It contains 4a − 1 blocks/points; each block contains/each point is contained in 2a − 1 points/blocks. Each pair of points is contained in exactly a − 1 blocks.
This construction is reversible, and the incidence matrix of a symmetric 2-design with these parameters can be used to form an Hadamard matrix of size 4a.
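This construction can be sketched for a = 2 using the size-8 Sylvester Hadamard matrix, which is already in standardized form; the result is the incidence matrix of a 2-(7,3,1) design (the Fano plane):

```python
def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in row_a for b in row_b]
            for row_a in A for row_b in B]

h2 = [[1, 1], [1, -1]]
h8 = kron(kron(h2, h2), h2)  # 8x8 Hadamard matrix: H H^T = 8 I

# Remove the first row and column, then map -1 -> 0.
m = [[1 if x == 1 else 0 for x in row[1:]] for row in h8[1:]]

# Every row of the 7x7 incidence matrix has 2a - 1 = 3 ones, and any
# two distinct rows share exactly a - 1 = 1 common one.
```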
== Resolvable 2-designs ==
A resolvable 2-design is a BIBD whose blocks can be partitioned into sets (called parallel classes), each of which forms a partition of the point set of the BIBD. The set of parallel classes is called a resolution of the design.
If a 2-(v,k,λ) resolvable design has c parallel classes, then b ≥ v + c − 1.
Consequently, a symmetric design cannot have a non-trivial (more than one parallel class) resolution.
Archetypical resolvable 2-designs are the finite affine planes. A solution of the famous 15 schoolgirl problem is a resolution of a 2-(15,3,1) design.
== General balanced designs (t-designs) ==
Given any positive integer t, a t-design B is a class of k-element subsets of X, called blocks, such that every point x in X appears in exactly r blocks, and every t-element subset T appears in exactly λ blocks. The numbers v (the number of elements of X), b (the number of blocks), k, r, λ, and t are the parameters of the design. The design may be called a t-(v,k,λ)-design. Again, these four numbers determine b and r and the four numbers themselves cannot be chosen arbitrarily. The equations are
{\displaystyle \lambda _{i}=\lambda \left.{\binom {v-i}{t-i}}\right/{\binom {k-i}{t-i}}{\text{ for }}i=0,1,\ldots ,t,}
where λi is the number of blocks that contain any i-element set of points and λt = λ.
Note that {\displaystyle b=\lambda _{0}=\lambda {v \choose t}/{k \choose t}} and {\displaystyle r=\lambda _{1}=\lambda {v-1 \choose t-1}/{k-1 \choose t-1}}.
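The formula for λi transcribes directly (a sketch; `math.comb` supplies the binomial coefficients). The 5-(12,6,1) Steiner system S(5,6,12) is a classical example:

```python
from math import comb

def lambda_i(v, k, lam, t, i):
    """lambda_i = lam * C(v-i, t-i) / C(k-i, t-i): the number of blocks
    containing any fixed i-element set of points."""
    num = lam * comb(v - i, t - i)
    den = comb(k - i, t - i)
    assert num % den == 0, "parameters are not admissible"
    return num // den

# For the 5-(12,6,1) design: b = lambda_0 = 132, r = lambda_1 = 66.
```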
Theorem: Any t-(v,k,λ)-design is also an s-(v,k,λs)-design for any s with 1 ≤ s ≤ t. (Note that the "lambda value" changes as above and depends on s.)
A consequence of this theorem is that every t-design with t ≥ 2 is also a 2-design.
A t-(v,k,1)-design is called a Steiner system.
The term block design by itself usually means a 2-design.
=== Derived and extendable t-designs ===
Let D = (X, B) be a t-(v,k,λ) design and p a point of X. The derived design Dp has point set X − {p} and as block set all the blocks of D which contain p with p removed. It is a (t − 1)-(v − 1, k − 1, λ) design. Note that derived designs with respect to different points may not be isomorphic. A design E is called an extension of D if E has a point p such that Ep is isomorphic to D; we call D extendable if it has an extension.
Theorem: If a t-(v,k,λ) design has an extension, then k + 1 divides b(v + 1).
The only extendable projective planes (symmetric 2-(n2 + n + 1, n + 1, 1) designs) are those of orders 2 and 4.
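Applied to projective planes (v = b = n² + n + 1, k = n + 1), the divisibility condition of the theorem above can be checked numerically. It is only necessary: n = 10 passes the arithmetic even though no projective plane of order 10 exists, and of the orders that pass, only 2 and 4 give extendable planes.

```python
def passes_extension_test(n):
    """Necessary condition for extending a projective plane of order n:
    k + 1 must divide b(v + 1)."""
    v = b = n * n + n + 1
    k = n + 1
    return (b * (v + 1)) % (k + 1) == 0

print([n for n in range(2, 11) if passes_extension_test(n)])  # [2, 4, 10]
```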
Every Hadamard 2-design is extendable (to an Hadamard 3-design).
Theorem: If D, a symmetric 2-(v,k,λ) design, is extendable, then one of the following holds:
D is an Hadamard 2-design,
v = (λ + 2)(λ2 + 4λ + 2), k = λ2 + 3λ + 1,
v = 495, k = 39, λ = 3.
Note that the projective plane of order two is an Hadamard 2-design; the projective plane of order four has parameters which fall in case 2; the only other known symmetric 2-designs with parameters in case 2 are the order 9 biplanes, but none of them are extendable; and there is no known symmetric 2-design with the parameters of case 3.
==== Inversive planes ====
A design with the parameters of the extension of an affine plane, i.e., a 3-(n2 + 1, n + 1, 1) design, is called a finite inversive plane, or Möbius plane, of order n.
It is possible to give a geometric description of some inversive planes, indeed, of all known inversive planes. An ovoid in PG(3,q) is a set of q2 + 1 points, no three collinear. It can be shown that every plane (which is a hyperplane since the geometric dimension is 3) of PG(3,q) meets an ovoid O in either 1 or q + 1 points. The plane sections of size q + 1 of O are the blocks of an inversive plane of order q. Any inversive plane arising this way is called egglike. All known inversive planes are egglike.
An example of an ovoid is the elliptic quadric, the set of zeros of the quadratic form
x1x2 + f(x3, x4),
where f is an irreducible quadratic form in two variables over GF(q). [f(x,y) = x2 + xy + y2 for example].
If q is an odd power of 2, another type of ovoid is known – the Suzuki–Tits ovoid.
Theorem. Let q be a positive integer, at least 2. (a) If q is odd, then any ovoid is projectively equivalent to the elliptic quadric in a projective geometry PG(3,q); so q is a prime power and there is a unique egglike inversive plane of order q. (But it is unknown if non-egglike ones exist.) (b) if q is even, then q is a power of 2 and any inversive plane of order q is egglike (but there may be some unknown ovoids).
== Partially balanced designs (PBIBDs) ==
An n-class association scheme consists of a set X of size v together with a partition S of X × X into n + 1 binary relations, R0, R1, ..., Rn. A pair of elements in relation Ri are said to be ith–associates. Each element of X has ni ith associates. Furthermore:
{\displaystyle R_{0}=\{(x,x):x\in X\}} and is called the identity relation.
Defining {\displaystyle R^{*}:=\{(x,y)\mid (y,x)\in R\}}, if R in S, then R* in S.
If {\displaystyle (x,y)\in R_{k}}, the number of {\displaystyle z\in X} such that {\displaystyle (x,z)\in R_{i}} and {\displaystyle (z,y)\in R_{j}} is a constant {\displaystyle p_{ij}^{k}} depending on i, j, k but not on the particular choice of x and y.
An association scheme is commutative if
p
i
j
k
=
p
j
i
k
{\displaystyle p_{ij}^{k}=p_{ji}^{k}}
for all i, j and k. Most authors assume this property.
A partially balanced incomplete block design with n associate classes (PBIBD(n)) is a block design based on a v-set X with b blocks each of size k and with each element appearing in r blocks, such that there is an association scheme with n classes defined on X where, if elements x and y are ith associates, 1 ≤ i ≤ n, then they are together in precisely λi blocks.
A PBIBD(n) determines an association scheme but the converse is false.
=== Example ===
Let A(3) be the following association scheme with three associate classes on the set X = {1,2,3,4,5,6}. The (i,j) entry is s if elements i and j are in relation Rs.
The blocks of a PBIBD(3) based on A(3) are:
The parameters of this PBIBD(3) are: v = 6, b = 8, k = 3, r = 4 and λ1 = λ2 = 2 and λ3 = 1. Also, for the association scheme we have n0 = n2 = 1 and n1 = n3 = 2. The incidence matrix M is
and the concurrence matrix MMT is
from which we can recover the λ and r values.
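For this example, the first three identities listed under Properties can be checked numerically (a sketch using the parameter values stated above):

```python
v, b, k, r = 6, 8, 3, 4
n = {1: 2, 2: 1, 3: 2}     # n1, n2, n3 (n0 = 1 is the identity class)
lam = {1: 2, 2: 2, 3: 1}   # lambda_1, lambda_2, lambda_3

assert v * r == b * k                                # 24 == 24
assert sum(n.values()) == v - 1                      # 2 + 1 + 2 == 5
assert sum(n[i] * lam[i] for i in n) == r * (k - 1)  # 4 + 2 + 2 == 8
```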
=== Properties ===
The parameters of a PBIBD(m) satisfy:
{\displaystyle vr=bk}
{\displaystyle \sum _{i=1}^{m}n_{i}=v-1}
{\displaystyle \sum _{i=1}^{m}n_{i}\lambda _{i}=r(k-1)}
{\displaystyle \sum _{u=0}^{m}p_{ju}^{h}=n_{j}}
{\displaystyle n_{i}p_{jh}^{i}=n_{j}p_{ih}^{j}}
A PBIBD(1) is a BIBD and a PBIBD(2) in which λ1 = λ2 is a BIBD.
=== Two associate class PBIBDs ===
PBIBD(2)s have been studied the most since they are the simplest and most useful of the PBIBDs. They fall into six types based on a classification of the then known PBIBD(2)s by Bose & Shimamoto (1952):
group divisible;
triangular;
Latin square type;
cyclic;
partial geometry type;
miscellaneous.
== Applications ==
The mathematical subject of block designs originated in the statistical framework of design of experiments. These designs were especially useful in applications of the technique of analysis of variance (ANOVA). This remains a significant area for the use of block designs.
While the origins of the subject are grounded in biological applications (as is some of the existing terminology), the designs are used in many applications where systematic comparisons are being made, such as in software testing.
The incidence matrices of block designs provide a natural source of interesting block codes that are used as error-correcting codes. The rows of these incidence matrices are also used as the symbols in a form of pulse-position modulation.
=== Statistical application ===
Suppose that skin cancer researchers want to test three different sunscreens. They apply two different sunscreens to the backs of the hands of each test person. After UV irradiation, they record the skin irritation in terms of sunburn. The number of treatments is 3 (sunscreens) and the block size is 2 (hands per person).
A corresponding BIBD can be generated by the R-function design.bib of the R-package agricolae and is specified in the following table:
The investigator chooses the parameters v = 3, k = 2 and λ = 1 for the block design which are then inserted into the R-function. Subsequently, the remaining parameters b and r are determined automatically.
Using the basic relations we calculate that we need b = 3 blocks, that is, 3 test people in order to obtain a balanced incomplete block design. Labeling the blocks A, B and C, to avoid confusion, we have the block design,
A = {2, 3}, B = {1, 3} and C = {1, 2}.
A corresponding incidence matrix is specified in the following table:
Each treatment occurs in 2 blocks, so r = 2.
Just one block (C) contains the treatments 1 and 2 simultaneously and the same applies to the pairs of treatments (1,3) and (2,3). Therefore, λ = 1.
It is impossible to use a complete design (all treatments in each block) in this example because there are 3 sunscreens to test, but only 2 hands on each person.
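Since v = 3 and k = 2, the design here is simply the set of all 2-subsets of the treatments, which a short sketch can enumerate (design.bib in agricolae would typically also randomize the run order, which is omitted here):

```python
from itertools import combinations

# All 2-element blocks of the treatment set {1, 2, 3}: with v = 3 and
# k = 2, the complete set of pairs gives lambda = 1 and b = 3 blocks,
# matching A = {2, 3}, B = {1, 3}, C = {1, 2} above.
blocks = list(combinations((1, 2, 3), 2))
print(blocks)  # [(1, 2), (1, 3), (2, 3)]
```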
== See also ==
Incidence geometry
Steiner system
Fractional factorial design
== External links ==
DesignTheory.Org: Databases of combinatorial, statistical, and experimental block designs. Software and other resources hosted by the School of Mathematical Sciences at Queen Mary College, University of London.
Design Theory Resources: Peter Cameron's page of web based design theory resources.
Weisstein, Eric W. "Block Designs". MathWorld.
Plackett–Burman designs are experimental designs presented in 1946 by Robin L. Plackett and J. P. Burman while working in the British Ministry of Supply.
Their goal was to find experimental designs for investigating the dependence of some measured quantity on a number of independent variables (factors), each taking L levels, in such a way as to minimize the variance of the estimates of these dependencies using a limited number of experiments. Interactions between the factors were considered negligible. The solution to this problem is to find an experimental design where each combination of levels for any pair of factors appears the same number of times, throughout all the experimental runs (refer to table). A complete factorial design would satisfy this criterion, but the idea was to find smaller designs.
For the case of two levels (L = 2), Plackett and Burman used the method found in 1933 by Raymond Paley for generating orthogonal matrices whose elements are all either 1 or −1 (Hadamard matrices). Paley's method could be used to find such matrices of size N for most N equal to a multiple of 4. In particular, it worked for all such N up to 100 except N = 92. If N is a power of 2, however, the resulting design is identical to a fractional factorial design, so Plackett–Burman designs are mostly used when N is a multiple of 4 but not a power of 2 (i.e. N = 12, 20, 24, 28, 36 …). If one is trying to estimate fewer than N parameters (including the overall average), then one simply uses a subset of the columns of the matrix.
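For the two-level case, the 12-run design can be sketched directly from a single generator row and its cyclic shifts (the quadratic-residue construction for p = 11; variable names here are illustrative):

```python
import numpy as np

# Standard 11-element generator row for the 12-run Plackett-Burman
# design, derived from the quadratic residues modulo 11.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])

# The first 11 runs are cyclic shifts of the generator; the final
# run sets every factor to its low (-1) level.
X = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

# Each of the 11 factor columns is balanced: six +1s and six -1s.
assert (X.sum(axis=0) == 0).all()

# Distinct columns are mutually orthogonal (X'X = 12 I), which is
# what lets all 11 main effects be estimated independently.
assert np.array_equal(X.T @ X, 12 * np.eye(11, dtype=int))
```

The same generator-plus-shifts idea (with a different generator row) yields several of the other run sizes tabulated below.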
For the case of more than two levels, Plackett and Burman rediscovered designs that had previously been given by Raj Chandra Bose and K. Kishen at the Indian Statistical Institute.
Plackett and Burman give specifics for designs having a number of experiments equal to the number of levels L to some integer power, for L = 3, 4, 5, or 7.
When interactions between factors are not negligible, they are confounded with the main effects in Plackett–Burman designs, meaning that the designs do not permit one to distinguish between certain main effects and certain interactions.
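This confounding can be seen numerically in the 12-run design: the elementwise product of two main-effect columns (the interaction contrast) is correlated with every other main-effect column. A minimal sketch, rebuilding the design from its standard generator row:

```python
import numpy as np

# 12-run Plackett-Burman design from its standard generator row.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
X = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

# Interaction contrast of the first two factors.
z = X[:, 0] * X[:, 1]

# z is not orthogonal to the remaining main-effect columns: every
# inner product is +-4 out of 12 runs, a correlation of +-1/3, so
# the 1-2 interaction is partially aliased with each other main effect.
alias = X[:, 2:].T @ z
assert (np.abs(alias) == 4).all()
```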
== Extended uses ==
In 1993, Dennis Lin described a construction method based on half-fractions of Plackett–Burman designs: one column is used to split the runs into two halves, and that column is then discarded. The resulting matrix is a "supersaturated design" for finding significant first order effects, under the assumption that few exist.
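A sketch of this half-fraction construction, starting from the 12-run design built from its standard generator row (the choice of branching column is arbitrary here):

```python
import numpy as np

# 12-run Plackett-Burman design from its standard generator row.
gen = np.array([+1, +1, -1, +1, +1, +1, -1, -1, -1, +1, -1])
pb12 = np.array([np.roll(gen, i) for i in range(11)] + [[-1] * 11])

# Branch on the last column: keep the six runs where it equals +1,
# then discard it.  Ten factor columns remain for only six runs,
# i.e. more effects than runs: a supersaturated design.
half = pb12[pb12[:, -1] == +1][:, :-1]
assert half.shape == (6, 10)

# Orthogonality of the full design guarantees the surviving columns
# stay balanced (three +1s and three -1s each).
assert (half.sum(axis=0) == 0).all()
```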
Box–Behnken designs can be made smaller, or very large ones constructed, by replacing the fractional factorials and incomplete blocks traditionally used for plan and seed matrices, respectively, with Plackett–Burmans. For example, a quadratic design for 30 variables requires a 30-column PB plan matrix of zeroes and ones, with the ones in each row replaced by a PB seed matrix of −1s and +1s (for 15 or 16 variables), creating a 557-run design with values −1, 0, +1 to estimate the 496 parameters of a full quadratic model. Adding axial points allows estimating univariate cubic and quartic effects.
By identifying certain columns with the parameters to be estimated, Plackett–Burmans can also be used to construct mixed categorical and numerical designs, with interactions or high-order effects, requiring no more than 4 runs more than the number of model parameters to be estimated. Sort by the a−1 columns assigned to categorical variable A and the following columns, where A = 1 + int(a·i/(max(i) + 0.00001)), i = row number and a = A's number of values. Next sort on the columns assigned to any other categorical variables and the following columns, repeating as needed. Such designs, if large, may otherwise be incomputable by standard search techniques like D-optimality. For example, 13 variables averaging 3 values each could have well over a million level combinations to search. To estimate the 105 parameters in a quadratic model of 13 variables, one would have to formally exclude from consideration or compute |X′X| for well over C(10^6, 10^2), i.e. C(3^13, 105), or roughly 10^484 matrices.
== 4 to 48 runs, sorted to show half-fractions ==
P.B.4
+ + +
+ – –
– + –
– – +
P.B.8
+ + + + + + +
+ + – – – – +
+ – + + – – –
+ – – – + + –
– + + – + – –
– + – + – + –
– – + – – + +
– – – + + – +
P.B.12
+ + + + + + + + + + +
+ + + + – – – + – – –
+ + – – – + – – + – +
+ – + – + + + – – – –
+ – – + – – + – + + –
+ – – – + – – + – + +
– + + – – – + – – + +
– + – + + + – – – + –
– + – – + – + + + – –
– – + + + – – – + – +
– – + – – + – + + + –
– – – + – + + + – – +
P.B.16
+ + + + + + + + + + + + + + +
+ + + – – – – – – – – + + + +
+ + – + – – – – + + + – – – +
+ + – – + + + + – – – – – – +
+ – + + + – – + + – – + – – –
+ – + – – + + – – + + + – – –
+ – – + – + + – + – – – + + –
+ – – – + – – + – + + – + + –
– + + + – + – + – + – – + – –
– + + – + – + – + – + – + – –
– + – + + – + – – + – + – + –
– + – – – + – + + – + + – + –
– – + + – – + + – – + – – + +
– – + – + + – – + + – – – + +
– – – + + + – – – – + + + – +
– – – – – – + + + + – + + – +
P.B.20
+ + + + + + + + + + + + + + + + + + +
+ + + – + – + – – – – + + – – + – – +
+ + – + – + – – – – + + – – + – – + +
+ + – + – – – – + + – – + – – + + + –
+ + – – – – + + – – + – – + + + + – –
+ – + + + + – + – + – – – – + + – – –
+ – + – + – – – – + + – – + – – + + +
+ – + – – + + + + – + – + – – – – + –
+ – – + – – + + + + – + – + – – – – +
+ – – – + + – – + – – + + + + – + – –
– + + + + – + – + – – – – + + – – + –
– + + + – + – + – – – – + + – – + – +
– + + – – + – – + + + + – + – + – – –
– + – – + + + + – + – + – – – – + + –
– + – – + – – + + + + – + – + – – – +
– – + + – – + – – + + + + – + – + – –
– – + – – – – + + – – + – – + + + + +
– – – + + + + – + – + – – – – + + – +
– – – + + – – + – – + + + + – + – + –
– – – – – + + – – + – – + + + + – + +
P.B.24
+ + + + + + + + + + + + + + + + + + + + + + +
+ + + – + – + + – – + + – – + – + – – – – – +
+ + + – – + + – – + – + – – – – – + + + + – –
+ + – + + – – + + – – + – + – – – – – + + + –
+ + – + – + + – – + + – – + – + – – – – – + +
+ + – – – – – + + + + – + – + + – – + + – – –
+ – + + – – + – + – – – – – + + + + – + – + –
+ – + – + + – – + + – – + – + – – – – – + + +
+ – + – + – – – – – + + + + – + – + + – – + –
+ – – + + – – + – + – – – – – + + + + – + – +
+ – – + – + – – – – – + + + + – + – + + – – +
+ – – – – + + + + – + – + + – – + + – – + – –
– + + + + – + – + + – – + + – – + – + – – – –
– + + + – + – + + – – + + – – + – + – – – – +
– + + – – + – + – – – – – + + + + – + – + + –
– + – + – – – – – + + + + – + – + + – – + + –
– + – – + + – – + – + – – – – – + + + + – + +
– + – – + – + – – – – – + + + + – + – + + – +
– – + + + + – + – + + – – + + – – + – + – – –
– – + + – – + + – – + – + – – – – – + + + + +
– – + – – – – – + + + + – + – + + – – + + – +
– – – + + + + – + – + + – – + + – – + – + – –
– – – – + + + + – + – + + – – + + – – + – + –
– – – – – – + + + + – + – + + – – + + – – + +
P.B.28
+ + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + – – + + + – + + – – + + + + – – – – – – – – + +
+ + + – – – – – – – – + + + + – – + + + – + + – – + +
+ + – + + – – + + + + – – – – – – – – + + + + – – + +
+ + – + – – + + – – – + – – + + – + – – + – + – + – –
+ + – + – – + – + – + – + + – + – – + + – – – + – – –
+ + – – – + – – + + – + – – + – + – + – + + – + – – –
+ – + + – + – – + + – – – + – – + + – + – – + – + – –
+ – + – + + – + – – + + – – – + – – + + – + – – + – –
+ – + – + – + + – + – – + + – – – + – – + + – + – – –
+ – + – + – + – + – + – + – + – + – + – + – + – + – +
+ – – + + + + – – – – – – – – + + + + – – + + + – + +
+ – – + + + – + + – – + + + + – – – – – – – – + + + +
+ – – – – – – – – + + + + – – + + + – + + – – + + + +
– + + + + – – + + + – + + – – + + + + – – – – – – – +
– + + + + – – – – – – – – + + + + – – + + + – + + – +
– + + + – + + – – + + + + – – – – – – – – + + + + – +
– + + – – + + + + – – – – – – – – + + + + – – + + + +
– + – – + + – + – – + – + – + – + + – + – – + + – – –
– + – – + + – – – + – – + + – + – – + – + – + – + + –
– + – – + – + – + – + + – + – – + + – – – + – – + + –
– – + + – + – – + – + – + – + + – + – – + + – – – + –
– – + + – – – + – – + + – + – – + – + – + – + + – + –
– – + – + – + – + + – + – – + + – – – + – – + + – + –
– – – + + + + – – + + + – + + – – + + + + – – – – – +
– – – + – – + + – + – – + – + – + – + + – + – – + + –
– – – – – + + + + – – + + + – + + – – + + + + – – – +
– – – – – – – + + + + – – + + + – + + – – + + + + – +
P.B.32
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + – – – – – – – – – – – – – – – – + + + + + + + + + + +
+ + + – + – – – – – – – – + + + + + + + – – – – – – – + + + +
+ + + – – + + + + + + + + – – – – – – – – – – – – – – + + + +
+ + – + + – – – – + + + + – – – – + + + – – – – + + + – – – +
+ + – + – + + + + – – – – + + + + – – – – – – – + + + – – – +
+ + – – + + + + + – – – – – – – – + + + + + + + – – – – – – +
+ + – – – – – – – + + + + + + + + – – – + + + + – – – – – – +
+ – + + + + – – + + – – + + – – + + – – + – – + + – – + – – –
+ – + + – – + + – – + + – – + + – – + + + – – + + – – + – – –
+ – + – + – + + – – + + – + – – + + – – – + + – – + + + – – –
+ – + – – + – – + + – – + – + + – – + + – + + – – + + + – – –
+ – – + + – + + – + – – + – + + – + – – – + + – + – – – + + –
+ – – + – + – – + – + + – + – – + – + + – + + – + – – – + + –
+ – – – + + – – + – + + – – + + – + – – + – – + – + + – + + –
+ – – – – – + + – + – – + + – – + – + + + – – + – + + – + + –
– + + + + – + – + – + – + – + – + – + – – + – + – + – – + – –
– + + + – + – + – + – + – + – + – + – + – + – + – + – – + – –
– + + – + + – + – + – + – – + – + – + – + – + – + – + – + – –
– + + – – – + – + – + – + + – + – + – + + – + – + – + – + – –
– + – + + + – + – – + – + + – + – – + – + – + – – + – + – + –
– + – + – – + – + + – + – – + – + + – + + – + – – + – + – + –
– + – – + – + – + + – + – + – + – – + – – + – + + – + + – + –
– + – – – + – + – – + – + – + – + + – + – + – + + – + + – + –
– – + + + – – + + – – + + – – + + – – + – – + + – – + – – + +
– – + + – + + – – + + – – + + – – + + – – – + + – – + – – + +
– – + – + + + – – + + – – – – + + – – + + + – – + + – – – + +
– – + – – – – + + – – + + + + – – + + – + + – – + + – – – + +
– – – + + + + – – – – + + + + – – – – + + + – – – – + + + – +
– – – + – – – + + + + – – – – + + + + – + + – – – – + + + – +
– – – – + – – + + + + – – + + – – – – + – – + + + + – + + – +
– – – – – + + – – – – + + – – + + + + – – – + + + + – + + – +
P.B.36
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + + – + + + + – – + + – – – – – – + + + + – – – – – – + + – – + +
+ + + – – + + – – – – – – + + + + – – – – – – + + – – + + + + + – + +
+ + + – – – – – – + + – – + + + + + – + + + + – – + + – – – – – – + +
+ + – + + + + – – + + – – – – – – + + + + – – – – – – + + – – + + + +
+ + – + – – + – + – + + – – + + – + – – – + – + – – + + – – + – + – –
+ + – + – – – + – + – – + + – – + – + – + + – + – – + – + – + + – – –
+ + – – + + – + – – – + – + – – + + – – + – + – + + – + – – + – + – –
+ + – – + – + – + + – + – – + – + – + + – – + + – + – – – + – + – – –
+ – + + – + – – + – + – + + – – + + – + – – – + – + – – + + – – + – –
+ – + + – – + + – + – – – + – + – – + + – – + – + – + + – + – – + – –
+ – + – + + – + – – + – + – + + – – + + – + – – – + – + – – + + – – –
+ – + – + + – – + + – + – – – + – + – – + + – – + – + – + + – + – – –
+ – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – +
+ – – + + + + + – + + + + – – + + – – – – – – + + + + – – – – – – + +
+ – – + + – – – – – – + + + + – – – – – – + + – – + + + + + – + + + +
+ – – – – – – + + + + – – – – – – + + – – + + + + + – + + + + – – + +
+ – – – – – – + + – – + + + + + – + + + + – – + + – – – – – – + + + +
– + + + + + – + + + + – – + + – – – – – – + + + + – – – – – – + + – +
– + + + + – – + + – – – – – – + + + + – – – – – – + + – – + + + + + +
– + + + + – – – – – – + + – – + + + + + – + + + + – – + + – – – – – +
– + + – – + + + + + – + + + + – – + + – – – – – – + + + + – – – – – +
– + + – – – – – – + + + + – – – – – – + + – – + + + + + – + + + + – +
– + – + – – + + – – + – + – + + – + – – + – + – + + – – + + – + – – –
– + – – + + – – + – + – + + – + – – + – + – + + – – + + – + – – – + –
– + – – + – + – + + – – + + – + – – – + – + – – + + – – + – + – + + –
– + – – – + – + – – + + – – + – + – + + – + – – + – + – + + – – + + –
– – + + – + – – – + – + – – + + – – + – + – + + – + – – + – + – + + –
– – + + – – + – + – + + – + – – + – + – + + – – + + – + – – – + – + –
– – + – + – + + – + – – + – + – + + – – + + – + – – – + – + – – + + –
– – + – + – + + – – + + – + – – – + – + – – + + – – + – + – + + – + –
– – – + + + + – – – – – – + + – – + + + + + – + + + + – – + + – – – +
– – – + + – – + + + + + – + + + + – – + + – – – – – – + + + + – – – +
– – – + – + – – + + – – + – + – + + – + – – + – + – + + – – + + – + –
– – – – – + + + + – – – – – – + + – – + + + + + – + + + + – – + + – +
– – – – – + + – – + + + + + – + + + + – – + + – – – – – – + + + + – +
P.B.40
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + –
+ + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + –
+ + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + –
+ + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + –
+ + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + –
+ + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – +
+ + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – +
+ + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – +
+ + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – +
+ + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – +
+ – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – +
+ – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – +
+ – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – +
+ – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – +
+ – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + – – +
+ – – + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + –
+ – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + –
+ – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + –
+ – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + –
+ – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + –
– + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – – – –
– + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – –
– + + – – + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – –
– + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – –
– + + – – – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – –
– + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + – + +
– + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + +
– + – + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + +
– + – – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + +
– + – – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + +
– – + + – – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + +
– – + + – – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + +
– – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + – – + – + + +
– – + – + + – – + – + + – + – + – + – – + + – – + + – – + – + – + – + + – + +
– – + – + – + – + + – + – – + – + + – – + – + + – + – + – + – – + + – – + + +
– – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – – – – + + – –
– – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – – – –
– – – + + – – – – + + + + + + + + – – + + – – + + – – – – – – – – + + + + – –
– – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – – – –
– – – – – – – + + + + – – – – + + – – – – + + + + + + + + – – + + – – + + – –
P.B.44
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + –
+ + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – +
+ + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – –
+ + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – –
+ + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + +
+ + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – +
+ + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + –
+ + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – –
+ + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + +
+ + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – +
+ – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + +
+ – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – –
+ – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – +
+ – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – –
+ – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + –
+ – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – –
+ – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + –
+ – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – +
+ – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + +
+ – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + –
+ – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + +
– + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + +
– + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + –
– + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – +
– + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – –
– + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + +
– + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – –
– + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – +
– + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + –
– + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + –
– + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + +
– + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – –
– – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – +
– – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + –
– – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + +
– – + – + + + – – – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + –
– – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – +
– – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – –
– – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – – + – – + +
– – – + – + – – + + + – + + + + + – – – + – + + + – – – – – + – – – + + – + – + + – +
– – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – –
– – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + – – – – +
– – – – + – – – + + – + – + + – – + – – + – + – – + + + – + + + + + – – – + – + + + –
P.B.48
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + –
+ + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – –
+ + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – +
+ + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – +
+ + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + –
+ + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – –
+ + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + +
+ + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – –
+ + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – –
+ + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + +
+ + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – +
+ – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + +
+ – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – –
+ – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – +
+ – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + –
+ – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + –
+ – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + +
+ – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + –
+ – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + +
+ – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – +
+ – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + –
+ – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – –
+ – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – +
– + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – –
– + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – +
– + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + +
– + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + –
– + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + +
– + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – +
– + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + –
– + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – –
– + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + –
– + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + +
– + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + +
– + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – –
– – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – –
– – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + +
– – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + +
– – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – +
– – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – +
– – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + –
– – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – –
– – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + – –
– – – + + – + + – – – + – + – + + – – – – + – – – – – + + + + – + + + + – – + – + – + + + – +
– – – + – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + –
– – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – – + –
– – – – – – + + + + – + + + + – – + – + – + + + – – + – – + + – + + – – – + – + – + + – – – +
== References ==
This article incorporates public domain material from the National Institute of Standards and Technology.
In statistics, a factorial experiment (also known as full factorial experiment) investigates how multiple factors influence a specific outcome, called the response variable. Each factor is tested at distinct values, or levels, and the experiment includes every possible combination of these levels across all factors. This comprehensive approach lets researchers see not only how each factor individually affects the response, but also how the factors interact and influence each other.
Often, factorial experiments simplify things by using just two levels for each factor. A 2×2 factorial design, for instance, has two factors, each with two levels, leading to four unique combinations to test. The interaction between these factors is often the most crucial finding, even when the individual factors also have an effect.
If a full factorial design becomes too complex due to the sheer number of combinations, researchers can use a fractional factorial design. This method strategically omits some combinations (usually at least half) to make the experiment more manageable.
These combinations of factor levels are sometimes called runs (of an experiment), points (viewing the combinations as vertices of a graph), and cells (arising as intersections of rows and columns).
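The set of runs is simply the Cartesian product of the factor levels. A minimal sketch, with made-up factors and levels for illustration:

```python
import itertools

# Hypothetical factors and their levels (illustrative only).
factors = {
    "motor": ["A", "B"],
    "speed_rpm": [2000, 3000],
    "lubricant": ["standard", "synthetic"],
}

# One run per combination of levels: a 2 x 2 x 2 = 8-run full factorial.
runs = list(itertools.product(*factors.values()))

assert len(runs) == 2 * 2 * 2
assert ("A", 2000, "standard") in runs
```

Replacing a level list with a longer one multiplies the run count accordingly, which is why the number of runs grows exponentially with the number of factors.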
== History ==
Factorial designs were used in the 19th century by John Bennet Lawes and Joseph Henry Gilbert of the Rothamsted Experimental Station.
Ronald Fisher argued in 1926 that "complex" designs (such as factorial designs) were more efficient than studying one factor at a time. Fisher wrote, "No aphorism is more frequently repeated in connection with field trials, than that we must ask Nature few questions, or, ideally, one question, at a time. The writer is convinced that this view is wholly mistaken. Nature, he suggests, will best respond to a logical and carefully thought out questionnaire; indeed, if we ask her a single question, she will often refuse to answer until some other topic has been discussed."
A factorial design allows the effect of several factors and even interactions between them to be determined with the same number of trials as are necessary to determine any one of the effects by itself with the same degree of accuracy.
Frank Yates made significant contributions, particularly in the analysis of designs, through the Yates analysis.
The term "factorial" may not have been used in print before 1935, when Fisher used it in his book The Design of Experiments.
== Advantages and disadvantages of factorial experiments ==
Many people examine the effect of only a single factor or variable. Compared to such one-factor-at-a-time (OFAT) experiments, factorial experiments offer several advantages:
Factorial designs are more efficient than OFAT experiments. They provide more information at similar or lower cost. They can find optimal conditions faster than OFAT experiments.
When the effect of one factor is different for different levels of another factor, it cannot be detected by an OFAT experiment design. Factorial designs are required to detect such interactions. Use of OFAT when interactions are present can lead to serious misunderstanding of how the response changes with the factors.
Factorial designs allow the effects of a factor to be estimated at several levels of the other factors, yielding conclusions that are valid over a range of experimental conditions.
The main disadvantage of the full factorial design is its sample size requirement, which grows exponentially with the number of factors or inputs considered. Alternative strategies with improved computational efficiency include fractional factorial designs, Latin hypercube sampling, and quasi-random sampling techniques.
=== Example of advantages of factorial experiments ===
In his book, Improving Almost Anything: Ideas and Essays, statistician George Box gives many examples of the benefits of factorial experiments. Here is one. Engineers at the bearing manufacturer SKF wanted to know if changing to a less expensive "cage" design would affect bearing lifespan. The engineers asked Christer Hellstrand, a statistician, for help in designing the experiment.
Box reports the following. "The results were assessed by an accelerated life test. … The runs were expensive because they needed to be made on an actual production line and the experimenters were planning to make four runs with the standard cage and four with the modified cage. Christer asked if there were other factors they would like to test. They said there were, but that making added runs would exceed their budget. Christer showed them how they could test two additional factors "for free" – without increasing the number of runs and without reducing the accuracy of their estimate of the cage effect. In this arrangement, called a 2×2×2 factorial design, each of the three factors would be run at two levels and all the eight possible combinations included. The various combinations can conveniently be shown as the vertices of a cube ... "
"In each case, the standard condition is indicated by a minus sign and the modified condition by a plus sign. The factors changed were heat treatment, outer ring osculation, and cage design. The numbers show the relative lengths of lives of the bearings. If you look at [the cube plot], you can see that the choice of cage design did not make a lot of difference. … But, if you average the pairs of numbers for cage design, you get the [table below], which shows what the two other factors did. … It led to the extraordinary discovery that, in this particular application, the life of a bearing can be increased fivefold if the two factor(s) outer ring osculation and inner ring heat treatments are increased together."
"Remembering that bearings like this one have been made for decades, it is at first surprising that it could take so long to discover so important an improvement. A likely explanation is that, because most engineers have, until recently, employed only one factor at a time experimentation, interaction effects have been missed."
== Example ==
The simplest factorial experiment contains two levels for each of two factors. Suppose an engineer wishes to study the total power used by each of two different motors, A and B, running at each of two different speeds, 2000 or 3000 RPM. The factorial experiment would consist of four experimental units: motor A at 2000 RPM, motor B at 2000 RPM, motor A at 3000 RPM, and motor B at 3000 RPM. Each combination of a single level selected from every factor is present once.
This experiment is an example of a 2^2 (or 2×2) factorial experiment, so named because it considers two levels (the base) for each of two factors (the power or superscript), or (#levels)^(#factors), producing 2^2 = 4 factorial points.
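The enumeration of factorial points described above can be sketched in a few lines. The factor names and levels come from the motor example; `full_factorial` is an illustrative helper, not a standard library function.

```python
from itertools import product

def full_factorial(levels_per_factor):
    """Enumerate every treatment combination of a full factorial design.

    levels_per_factor: the levels of each factor, e.g.
    [["A", "B"], [2000, 3000]] for the motor example above.
    """
    return list(product(*levels_per_factor))

runs = full_factorial([["A", "B"], [2000, 3000]])
# Each combination of a single level from every factor appears once:
# ('A', 2000), ('A', 3000), ('B', 2000), ('B', 3000)
print(len(runs))  # 4, i.e. 2^2 factorial points
```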
Designs can involve many independent variables. As a further example, the effects of three input variables can be evaluated in eight experimental conditions shown as the corners of a cube.
This can be conducted with or without replication, depending on its intended purpose and available resources. It will provide the effects of the three independent variables on the dependent variable and possible interactions.
== Notation ==
Factorial experiments are described by two things: the number of factors, and the number of levels of each factor. For example, a 2×3 factorial experiment has two factors, the first at 2 levels and the second at 3 levels. Such an experiment has 2×3=6 treatment combinations or cells. Similarly, a 2×2×3 experiment has three factors, two at 2 levels and one at 3, for a total of 12 treatment combinations. If every factor has s levels (a so-called fixed-level or symmetric design), the experiment is typically denoted by s^k, where k is the number of factors. Thus a 2^5 experiment has 5 factors, each at 2 levels. Experiments that are not fixed-level are said to be mixed-level or asymmetric.
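The cell counts quoted above follow directly from multiplying the numbers of levels; a minimal helper (with a hypothetical name) makes this concrete for both symmetric and mixed-level designs:

```python
from math import prod

def n_treatment_combinations(levels):
    """Number of cells in a factorial design, given the number of
    levels of each factor (mixed-level designs allowed)."""
    return prod(levels)

print(n_treatment_combinations([2, 3]))     # 2x3 design -> 6 cells
print(n_treatment_combinations([2, 2, 3]))  # 2x2x3 design -> 12 cells
print(n_treatment_combinations([2] * 5))    # 2^5 design -> 32 cells
```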
There are various traditions to denote the levels of each factor. If a factor already has natural units, then those are used. For example, a shrimp aquaculture experiment might have factors temperature at 25 °C and 35 °C, density at 80 or 160 shrimp/40 liters, and salinity at 10%, 25% and 40%. In many cases, though, the factor levels are simply categories, and the coding of levels is somewhat arbitrary. For example, the levels of a 6-level factor might simply be denoted 1, 2, ..., 6.
Treatment combinations are denoted by ordered pairs or, more generally, ordered tuples. In the aquaculture experiment, the ordered triple (25, 80, 10) represents the treatment combination having the lowest level of each factor. In a general 2×3 experiment the ordered pair (2, 1) would indicate the cell in which factor A is at level 2 and factor B at level 1. The parentheses are often dropped, as shown in the accompanying table.
To denote factor levels in 2^k experiments, three particular systems appear in the literature:
the values 1 and 0;
the values 1 and −1, often simply abbreviated by + and −;
a lower-case letter with the exponent 0 or 1.
If these values represent "low" and "high" settings of a treatment, then it is natural to have 1 represent "high", whether using 0 and 1 or −1 and 1. This is illustrated in the accompanying table for a 2×2 experiment. If the factor levels are simply categories, the correspondence might be different; for example, it is natural to represent "control" and "experimental" conditions by coding "control" as 0 if using 0 and 1, and as 1 if using 1 and −1. An example of the latter is given below. That example illustrates another use of the coding +1 and −1.
For other fixed-level (s^k) experiments, the values 0, 1, ..., s−1 are often used to denote factor levels. These are the values of the integers modulo s when s is prime.
== Contrasts, main effects and interactions ==
The expected response to a given treatment combination is called a cell mean, usually denoted using the Greek letter μ. (The term cell is borrowed from its use in tables of data.) This notation is illustrated here for the 2 × 3 experiment.
A contrast in cell means is a linear combination of cell means in which the coefficients sum to 0. Contrasts are of interest in themselves, and are the building blocks by which main effects and interactions are defined.
In the 2 × 3 experiment illustrated here, the expression
μ11 − μ12
is a contrast that compares the mean responses of the treatment combinations 11 and 12. (The coefficients here are 1 and –1.) The contrast
μ11 + μ12 + μ13 − μ21 − μ22 − μ23
is said to belong to the main effect of factor A, as it contrasts the responses to the "1" level of factor A with those for the "2" level. The main effect of A is said to be absent if the true values of the cell means μij make this expression equal to 0. Since the true cell means are unobservable in principle, a statistical hypothesis test is used to assess whether this expression equals 0.
Interaction in a factorial experiment is the lack of additivity between factors, and is also expressed by contrasts. In the 2 × 3 experiment, the contrasts
belong to the A × B interaction; interaction is absent (additivity is present) if these expressions equal 0. Additivity may be viewed as a kind of parallelism between factors, as illustrated in the Analysis section below. As with main effects, one assesses the assumption of additivity by performing a hypothesis test.
Since it is the coefficients of these contrasts that carry the essential information, they are often displayed as column vectors. For the example above, such a table might look like this:
The columns of such a table are called contrast vectors: their components add up to 0. Each effect is determined by both the pattern of components in its columns and the number of columns.
The patterns of components of these columns reflect the general definitions given by Bose:
A contrast vector belongs to the main effect of a particular factor if the values of its components depend only on the level of that factor.
A contrast vector belongs to the interaction of two factors, say A and B, if (i) the values of its components depend only on the levels of A and B, and (ii) it is orthogonal (perpendicular) to the contrast vectors representing the main effects of A and B.
Similar definitions hold for interactions of more than two factors. In the 2 × 3 example, for instance, the pattern of the A column follows the pattern of the levels of factor A, indicated by the first component of each cell. Similarly, the pattern of the B columns follows the levels of factor B (sorting on B makes this easier to see).
The number of columns needed to specify each effect is the degrees of freedom for the effect, and is an essential quantity in the analysis of variance. The formula is as follows:
A main effect for a factor with s levels has s−1 degrees of freedom.
The interaction of two factors with s1 and s2 levels, respectively, has (s1−1)(s2−1) degrees of freedom.
The formula for more than two factors follows this pattern. In the 2 × 3 example above, the degrees of freedom for the two main effects and the interaction — the number of columns for each — are 1, 2 and 2, respectively.
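The degrees-of-freedom formula above reduces to a one-line function; `effect_df` is an illustrative name, not a standard API:

```python
from math import prod

def effect_df(levels):
    """Degrees of freedom of a main effect or interaction.

    levels: numbers of levels of the factors involved, e.g. [3] for
    the main effect of a 3-level factor, or [3, 3] for the interaction
    of two 3-level factors.
    """
    return prod(s - 1 for s in levels)

# The 2 x 3 example: main effects of A and B, and the A x B interaction.
print(effect_df([2]))     # 1
print(effect_df([3]))     # 2
print(effect_df([2, 3]))  # (2-1)(3-1) = 2
```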
=== Examples ===
In the tables in the following examples, the entries in the "cell" column are treatment combinations: The first component of each combination is the level of factor A, the second for factor B, and the third (in the 2 × 2 × 2 example) the level of factor C. The entries in each of the other columns sum to 0, so that each column is a contrast vector.
A 3 × 3 experiment: Here we expect 3-1 = 2 degrees of freedom each for the main effects of factors A and B, and (3-1)(3-1) = 4 degrees of freedom for the A × B interaction. This accounts for the number of columns for each effect in the accompanying table.
The two contrast vectors for A depend only on the level of factor A. This can be seen by noting that the pattern of entries in each A column is the same as the pattern of the first component of "cell". (If necessary, sorting the table on A will show this.) Thus these two vectors belong to the main effect of A. Similarly, the two contrast vectors for B depend only on the level of factor B, namely the second component of "cell", so they belong to the main effect of B.
The last four column vectors belong to the A × B interaction, as their entries depend on the values of both factors, and as all four columns are orthogonal to the columns for A and B. The latter can be verified by taking dot products.
A 2 × 2 × 2 experiment: This will have 1 degree of freedom for every main effect and interaction. For example, a two-factor interaction will have (2-1)(2-1) = 1 degree of freedom. Thus just a single column is needed to specify each of the seven effects.
The columns for A, B and C represent the corresponding main effects, as the entries in each column depend only on the level of the corresponding factor. For example, the entries in the B column follow the same pattern as the middle component of "cell", as can be seen by sorting on B.
The columns for AB, AC and BC represent the corresponding two-factor interactions. For example, (i) the entries in the BC column depend on the second and third (B and C) components of cell, and are independent of the first (A) component, as can be seen by sorting on BC; and (ii) the BC column is orthogonal to columns B and C, as can be verified by computing dot products.
Finally, the ABC column represents the three-factor interaction: its entries depend on the levels of all three factors, and it is orthogonal to the other six contrast vectors.
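The properties claimed for these seven columns can be verified directly by generating the ±1 coding of the 2 × 2 × 2 cells and taking elementwise products and dot products. This is a self-contained sketch, not tied to any particular software:

```python
from itertools import product

# ±1 coding for each factor in a 2x2x2 design, one tuple per cell.
cells = list(product([-1, 1], repeat=3))   # the 8 treatment combinations

A = [a for a, b, c in cells]
B = [b for a, b, c in cells]
C = [c for a, b, c in cells]
# Interaction columns are elementwise products of main-effect columns.
AB = [a * b for a, b, c in cells]
AC = [a * c for a, b, c in cells]
BC = [b * c for a, b, c in cells]
ABC = [a * b * c for a, b, c in cells]

columns = [A, B, C, AB, AC, BC, ABC]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# Every column is a contrast vector: its entries sum to 0 ...
assert all(sum(col) == 0 for col in columns)
# ... and the seven columns are mutually orthogonal (zero dot products).
assert all(dot(u, v) == 0
           for i, u in enumerate(columns) for v in columns[i + 1:])
print("all 7 contrast vectors sum to 0 and are mutually orthogonal")
```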
Combined and read row-by-row, columns A, B, C give an alternate notation, mentioned above, for the treatment combinations (cells) in this experiment: cell 000 corresponds to +++, 001 to ++−, etc.
In columns A through ABC, the number 1 may be replaced by any constant, because the resulting columns will still be contrast vectors.
For example, it is common to use the number 1/4 in 2 × 2 × 2 experiments to define each main effect or interaction, and to declare, for example, that the contrast
is "the" main effect of factor A, a numerical quantity that can be estimated.
== Implementation ==
For more than two factors, a 2^k factorial experiment can usually be recursively designed from a 2^(k−1) factorial experiment by replicating the 2^(k−1) experiment, assigning the first replicate to the first (or low) level of the new factor, and the second replicate to the second (or high) level. This framework can be generalized to, e.g., designing three replicates for three-level factors, etc.
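The recursive construction just described can be sketched directly, using ±1 coding for the low and high levels (an illustrative implementation, not a library routine):

```python
def two_level_design(k):
    """Build a 2^k design recursively: replicate the 2^(k-1) design,
    appending -1 (low) to the first copy and +1 (high) to the second."""
    if k == 0:
        return [()]
    smaller = two_level_design(k - 1)
    return ([run + (-1,) for run in smaller] +
            [run + (1,) for run in smaller])

design = two_level_design(3)
print(len(design))  # 8 runs
print(design[0])    # (-1, -1, -1): all three factors at the low level
```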
A factorial experiment allows for estimation of experimental error in two ways. The experiment can be replicated, or the sparsity-of-effects principle can often be exploited. Replication is more common for small experiments and is a very reliable way of assessing experimental error. When the number of factors is large (typically more than about 5 factors, but this does vary by application), replication of the design can become operationally difficult. In these cases, it is common to only run a single replicate of the design, and to assume that factor interactions of more than a certain order (say, between three or more factors) are negligible. Under this assumption, estimates of such high order interactions are estimates of an exact zero, thus really an estimate of experimental error.
When there are many factors, many experimental runs will be necessary, even without replication. For example, experimenting with 10 factors at two levels each produces 2^10 = 1024 combinations. At some point this becomes infeasible due to high cost or insufficient resources. In this case, fractional factorial designs may be used.
As with any statistical experiment, the experimental runs in a factorial experiment should be randomized to reduce the impact that bias could have on the experimental results. In practice, this can be a large operational challenge.
Factorial experiments can be used when there are more than two levels of each factor. However, the number of experimental runs required for three-level (or more) factorial designs will be considerably greater than for their two-level counterparts. Factorial designs are therefore less attractive if a researcher wishes to consider more than two levels.
== Analysis ==
A factorial experiment can be analyzed using ANOVA or regression analysis. To compute the main effect of a factor "A" in a 2-level experiment, subtract the average response of all experimental runs for which A was at its low (or first) level from the average response of all experimental runs for which A was at its high (or second) level.
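The main-effect computation just described (high-level average minus low-level average) can be written as a short function. The data below are a toy 2^2 experiment invented for illustration, not taken from any study:

```python
def main_effect(levels, responses):
    """Main effect of a two-level factor: mean response at the high (+1)
    level minus mean response at the low (-1) level."""
    high = [y for lev, y in zip(levels, responses) if lev == 1]
    low = [y for lev, y in zip(levels, responses) if lev == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Toy 2^2 experiment: runs (A, B) = (-,-), (+,-), (-,+), (+,+)
a_levels = [-1, 1, -1, 1]
y = [10.0, 14.0, 12.0, 16.0]
print(main_effect(a_levels, y))  # (14+16)/2 - (10+12)/2 = 4.0
```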
Other useful exploratory analysis tools for factorial experiments include main effects plots, interaction plots, Pareto plots, and a normal probability plot of the estimated effects.
When the factors are continuous, two-level factorial designs assume that the effects are linear. If a quadratic effect is expected for a factor, a more complicated experiment should be used, such as a central composite design. Optimization of factors that could have quadratic effects is the primary goal of response surface methodology.
=== Analysis example ===
Montgomery gives the following example of analysis of a factorial experiment. An engineer would like to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. Previous attempts to reduce the formaldehyde have lowered the filtration rate. The current filtration rate is 75 gallons per hour. Four factors are considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). Each of the four factors will be tested at two levels. From here on, the minus (−) and plus (+) signs indicate whether the factor is run at a low or high level, respectively.
The non-parallel lines in the A:C interaction plot indicate that the effect of factor A depends on the level of factor C. A similar result holds for the A:D interaction. The graphs indicate that factor B has little effect on filtration rate. The analysis of variance (ANOVA) including all 4 factors and all possible interaction terms between them yields the coefficient estimates shown in the table below.
Because there are 16 observations and 16 coefficients (intercept, main effects, and interactions), p-values cannot be calculated for this model. The coefficient values and the graphs suggest that the important factors are A, C, and D, and the interaction terms A:C and A:D.
The coefficients for A, C, and D are all positive in the ANOVA, which would suggest running the process with all three variables set to the high value. However, the main effect of each variable is the average over the levels of the other variables. The A:C interaction plot above shows that the effect of factor A depends on the level of factor C, and vice versa. Factor A (temperature) has very little effect on filtration rate when factor C is at the + level. But Factor A has a large effect on filtration rate when factor C (formaldehyde) is at the − level. The combination of A at the + level and C at the − level gives the highest filtration rate. This observation indicates how one-factor-at-a-time analyses can miss important interactions. Only by varying both factors A and C at the same time could the engineer discover that the effect of factor A depends on the level of factor C.
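The dependence of one factor's effect on another's level can be quantified by computing the effect of A separately at each level of C. The numbers below are hypothetical and chosen only to mimic the qualitative pattern described; they are not Montgomery's data:

```python
def effect_of_A_at(c_level, runs):
    """Effect of factor A computed only from runs where C is at c_level.

    runs: (a, c, response) triples with ±1 coded levels
    (illustrative data, not the filtration-rate experiment itself).
    """
    ys_high = [y for a, c, y in runs if c == c_level and a == 1]
    ys_low = [y for a, c, y in runs if c == c_level and a == -1]
    return sum(ys_high) / len(ys_high) - sum(ys_low) / len(ys_low)

runs = [(-1, -1, 45.0), (1, -1, 71.0),   # C low: A has a large effect
        (-1, 1, 68.0), (1, 1, 65.0)]     # C high: A has little effect

print(effect_of_A_at(-1, runs))  # 26.0
print(effect_of_A_at(1, runs))   # -3.0
```

Averaging these two conditional effects recovers the overall main effect of A, which is exactly how a one-factor-at-a-time analysis can hide a strong interaction.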
The best filtration rate is seen when A and D are at the high level, and C is at the low level. This result also satisfies the objective of reducing formaldehyde (factor C). Because B does not appear to be important, it can be dropped from the model. Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives the result shown in the following table, in which all the terms are significant (p-value < 0.05).
== See also ==
Combinatorial design
Design of experiments
Orthogonal array
Plackett–Burman design
Taguchi methods
Welch's t-test
== Explanatory footnotes ==
== Notes ==
== References ==
== External links ==
Factorial Designs (California State University, Fresno)
GOV.UK Factorial randomised controlled trials (Public Health England)
Software that is used for designing factorial experiments plays an important role in scientific experiments and represents a route to the implementation of design of experiments procedures that derive from statistical and combinatorial theory. In principle, easy-to-use design of experiments (DOE) software should be available to all experimenters to foster use of DOE.
== Background ==
== Use of software ==
Factorial experimental design software drastically simplifies previously laborious hand calculations needed before the use of computers.
During World War II, a more sophisticated form of DOE, called factorial design, became a big weapon for speeding up industrial development for the Allied forces. These designs can be quite compact, involving as few as two levels of each factor and only a fraction of all the combinations, and yet they are quite powerful for screening purposes. After the war, a statistician at Imperial Chemical, George Box, described how to generate response surfaces for process optimization. From this point forward, DOE took hold in the chemical process industry, where factors such as time, temperature, pressure, concentration, flow rate and agitation are easily manipulated.
DOE results, when determined accurately with DOE software, strengthen the ability to discern truths about the sample populations being tested: see Sampling (statistics). Statisticians describe stronger multifactorial DOE methods as more "robust": see Experimental design.
As DOE software advancements gave rise to solving complex factorial statistical equations, statisticians began in earnest to design experiments with more than one factor (multifactor) being tested at a time. Simply stated, computerized multifactor DOE began supplanting one-factor-at-a-time experiments. Computer software designed specifically for designed experiments became available from various leading software companies in the 1980s and included packages such as JMP, Minitab, Cornerstone and Design–Expert.
Notable benefits when using DOE software include avoiding laborious hand calculations when:
Identifying key factors for process or product improvements.
Setting up and analyzing general factorial, two-level factorial, fractional factorial and Plackett–Burman designs.
Performing numerical optimizations.
Screening for critical factors and their interactions.
Analyzing process factors or mixture components.
Combining mixture and process variables in designs.
Rotating 3D plots to visualize response surfaces.
Exploring 2D contours with a computer mouse, setting flags along the way to identify coordinates and predict responses.
Precisely locating where all specified requirements meet using numerical optimization functions within DOE software.
Finding the most desirable factor settings for multiple responses simultaneously.
Today, factorial DOE software is a notable tool that engineers, scientists, geneticists, biologists, and virtually all other experimenters and creators, ranging from agriculturists to zoologists, rely upon. DOE software is most applicable to controlled, multifactor experiments in which the experimenter is interested in the effect of some process or intervention on objects such as crops, jet engines, demographics, marketing techniques, materials, adhesives, and so on. Design of experiments software is therefore a valuable tool with broad applications for all natural, engineering, and social sciences.
== Notes ==
== External links ==
Response Surface Methodology: Process and Product Optimization Using Designed Experiments, 4th Edition
Design and Analysis of Experiments, 9th Edition
DOE Simplified: Practical Tools for Effective Experimentation, 3rd Edition
RSM Simplified: Optimizing Processes Using Response Surface Methods for Design of Experiments, 2nd Edition
Warning Signs in Experimental Design and Interpretation
NIST Eng. Stats Section 5 Process Improvement
The demographic window is the period in a nation's demographic evolution when the proportion of the population in the working-age group is particularly prominent. This occurs when the age structure of a population becomes younger and the percentage of people able to work reaches its height. Typically, the demographic window of opportunity lasts for 30–40 years, depending on the country. Because of the mechanical link between fertility levels and age structures, the timing and duration of this period are closely associated with those of fertility decline: when birth rates fall, the age pyramid first shrinks, with gradually lower proportions of young population (under-15s), and the dependency ratio decreases, as is happening (or happened) in various parts of East Asia over several decades. After a few decades, however, low fertility causes the population to get older, and the growing proportion of elderly people inflates the dependency ratio again, as is observed in present-day Europe.
The exact technical boundaries of the definition may vary. The UN Population Department has defined it as the period when the proportion of children and youth under 15 years falls below 30 per cent and the proportion of people 65 years and older is still below 15 per cent. The Global Data Lab released an alternative classification of phases:
Europe's demographic window lasted from 1950 to 2000. It began in China in 1990 and is expected to last until 2015. India is expected to enter the demographic window in 2010, which may last until the middle of the present century. Much of Africa will not enter the demographic window until 2045 or later.
Societies who have entered the demographic window have smaller dependency ratio (ratio of dependents to working-age population) and therefore the demographic potential for high economic growth as favorable dependency ratios tend to boost savings and investments in human capital. But this so-called "demographic bonus" (or demographic dividend) remains only a potential advantage as low participation rates (for instance among women) or rampant unemployment may limit the impact of favorable age structures.
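The window test and the dependency ratio described above are simple arithmetic on age-group shares. The thresholds below are the UN-style ones quoted earlier; the population counts are invented for illustration:

```python
def in_demographic_window(pct_under15, pct_65plus):
    """UN-style test: under-15 share below 30% and 65-plus share
    below 15% (both as percentages of total population)."""
    return pct_under15 < 30 and pct_65plus < 15

def dependency_ratio(under15, age65plus, working_age):
    """Dependents (young + old) per member of the working-age population."""
    return (under15 + age65plus) / working_age

print(in_demographic_window(25, 10))  # True: the window is open
print(in_demographic_window(40, 5))   # False: too many children
# Invented population of 1,000: 250 under 15, 100 aged 65+, 650 working-age.
print(round(dependency_ratio(250, 100, 650), 3))  # 0.538
```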
For a list of demographic windows of other nations check the UN link in References.
== See also ==
Demographics
== References ==
Proceedings of the United Nations Expert Meeting on World Population to 2300
Bloom, David E., David Canning and Jaypee Sevilla (2003)- The Demographic Dividend: A New Perspective on the Economic Consequences of Population Change.
A CICRED Policy Paper on implications of age structural transitions
Divorce demography is the study of divorce statistics in a population. There are three ratios used for divorce rate calculations: crude divorce rate, refined divorce rate, and divorce-to-marriage ratio. Each of these calculations has weaknesses and can be misleading.
== Estimates of annual divorces by country ==
The following are the countries with the most annual divorces according to the United Nations in 2009.
== Divorce statistics by country/region (per 1,000 population / year) ==
== Metrics / statistics ==
=== Crude divorce rate ===
This is divorces per 1,000 population per year. For example, if a city has 10,000 people living in it, and 30 couples divorce in one year, then the crude divorce rate for that year is 3 divorces per 1,000 residents.
{\displaystyle {\text{Crude Divorce Rate}}={\frac {\text{Number of divorces}}{\text{Population}}}\times 1000}
The crude divorce rate can give a general overview of marriage in an area, but it does not take people who cannot marry into account. For example, it would include young children, who are clearly not of marriageable age in its sample. In a place with large numbers of children or single adults, the crude divorce rate can seem low. In a place with few children and single adults, the crude divorce rate can seem high.
=== Refined divorce rate ===
This measures the number of divorces per 1,000 women married to men, so that all unmarried persons are left out of the calculation. For example, if that same city of 10,000 people has 3,000 married women, and 30 couples divorce in one year, then the refined divorce rate is 10 divorces per 1,000 married women.
{\displaystyle {\text{Refined Divorce Rate}}={\frac {\text{Number of divorces}}{\text{Number of married women}}}\times 1000}
=== Divorce-to-marriage ratio ===
This compares the number of divorces in a given year to the number of marriages in that same year (the ratio of the crude divorce rate to the crude marriage rate). For example, if there are 500 divorces and 1,000 marriages in a given year in a given area, the ratio would be one divorce for every two marriages, i.e. a ratio of 0.50 (50%).
{\displaystyle {\text{Divorce-to-Marriage Ratio}}={\frac {\text{Number of divorces}}{\text{Number of marriages}}}}
However, this measurement compares two unlike populations – those who can marry and those who can divorce. Say there exists a community with 100,000 married couples, and very few people capable of marriage, for reasons such as age. If 1,000 people obtain divorces and 1,000 people get married in the same year, the ratio is one divorce for every marriage, which may lead people to think that the community's relationships are extremely unstable, despite the number of married people not changing. This is also true in reverse: a community with very many people of marriageable age may have 10,000 marriages and 1,000 divorces, leading people to believe that it has very stable relationships.
Furthermore, these two rates are not directly comparable since the marriage rate only examines the current year, while the divorce rate examines the outcomes of marriages for many previous years. This does not equate to the proportion of marriages in a given single-year cohort that will ultimately end in divorce. In any given year, underlying rates may change, and this can affect the ratio. For example, during an economic downturn, some couples might postpone a divorce because they can't afford to live separately. These individual choices could seem to temporarily improve the divorce-to-marriage ratio.
== References ==
== External links ==
Authorship: United States CDC National Center for Health Statistics; Contents: Longitudinal study of first-marriage outcomes in the United States, 2012
Antarctica contains research stations and field camps that are staffed seasonally or year-round, and former whaling settlements. Approximately 12 nations, all signatory to the Antarctic Treaty, send personnel to perform seasonal (summer) or year-round research on the continent and in its surrounding oceans. There are also two official civilian settlements: Villa Las Estrellas in Base Presidente Eduardo Frei Montalva operated by Chile, and Fortín Sargento Cabral in Esperanza Base operated by Argentina.
The population of people doing and supporting scientific research on the continent and its nearby islands south of 60 degrees south latitude (the region covered by the Antarctic Treaty) varies from approximately 4,000 in summer to 1,000 in winter. In addition, approximately 1,000 personnel including ship's crew and scientists doing onboard research are present in the waters of the treaty region. The largest station, McMurdo Station, has a summer population of about 1,000 people and a winter population of about 200.
== Births ==
At least 11 children have been born in Antarctica. The first was Emilio Marcos Palma, born on 7 January 1978 to Argentine parents at Esperanza, Hope Bay, near the tip of the Antarctic peninsula. The first girl born on the Antarctic continent was Marisa De Las Nieves Delgado, born on 27 May 1978. The birth occurred at Fortín Sargento Cabral, Base Esperanza (Argentine Army).
Solveig Gunbjørg Jacobsen of Norway, born in the island territory of South Georgia on 8 October 1913, was the first person born and raised in the Antarctic (the world region south of the Antarctic Convergence). The first human born in the wider Antarctic region was the Australian James Kerguelen Robinson, born in the Kerguelen Islands on 11 March 1859.
== Languages ==
English, Spanish, and Russian are the most widely spoken languages in Antarctica. Spanish predominates at the South American departure points for Antarctic voyages in Argentina and Chile. While Spanish is spoken especially among Argentine, Chilean, and other Spanish-speaking research stations, English is the most widely used language. This is due to the large representation of English-speaking countries and the fact that English has become the de facto language of scientific research in the region. Antarctic English, a distinct variety of the English language, has been found to be spoken by people living in Antarctica and on the subantarctic islands.
== See also ==
== References ==
== External links ==
Antarctica at the CIA World Factbook (includes section on the population of Antarctica).
The Annual Review of Statistics and Its Application is a peer-reviewed scientific journal published by Annual Reviews. It releases an annual volume of review articles relevant to the field of statistics. It has been in publication since 2014. The editor is Nancy Reid. As of 2023, Annual Review of Statistics and Its Application is being published as open access, under the Subscribe to Open model. As of 2024, Journal Citation Reports gives the journal a 2023 impact factor of 7.4, ranking it second of 168 journal titles in the category "Statistics and Probability" and third of 135 titles in "Mathematics, Interdisciplinary Applications".
== History ==
The Annual Review of Statistics and Its Application was first published in 2014 by nonprofit publisher Annual Reviews. Its founding editor was Stephen E. Fienberg. Following Fienberg's death in 2016, associate editor Nancy Reid completed the 2017 volume, of which Fienberg is credited as editor. Reid is credited as editor beginning in 2018. Though the journal was initially published in print, as of 2021 it is only published electronically. Some of its articles are available online prior to the volume publication date.
== Scope and indexing ==
The Annual Review of Statistics and Its Application publishes review articles about methodological advances in statistics and the use of computational tools that make the advances possible. It is abstracted and indexed in Scopus, Science Citation Index Expanded, and Inspec.
== References ==
In statistics and machine learning, the bias–variance tradeoff describes the relationship between a model's complexity, the accuracy of its predictions, and how well it can make predictions on previously unseen data that were not used to train the model. In general, as the number of tunable parameters in a model increases, it becomes more flexible and can better fit a training data set. That is, the model has lower error, or lower bias. However, for more flexible models there will tend to be greater variance in the model fit each time we take a set of samples to create a new training data set. It is said that there is greater variance in the model's estimated parameters.
The bias–variance dilemma or bias–variance problem is the conflict in trying to simultaneously minimize these two sources of error that prevent supervised learning algorithms from generalizing beyond their training set:
The bias error is an error from erroneous assumptions in the learning algorithm. High bias can cause an algorithm to miss the relevant relations between features and target outputs (underfitting).
The variance is an error from sensitivity to small fluctuations in the training set. High variance may result from an algorithm modeling the random noise in the training data (overfitting).
The bias–variance decomposition is a way of analyzing a learning algorithm's expected generalization error with respect to a particular problem as a sum of three terms, the bias, variance, and a quantity called the irreducible error, resulting from noise in the problem itself.
== Motivation ==
The bias–variance tradeoff is a central problem in supervised learning. Ideally, one wants to choose a model that both accurately captures the regularities in its training data and generalizes well to unseen data. Unfortunately, it is typically impossible to do both simultaneously. High-variance learning methods may be able to represent their training set well but are at risk of overfitting to noisy or unrepresentative training data. In contrast, algorithms with high bias typically produce simpler models that may fail to capture important regularities (i.e. underfit) in the data.
It is a common fallacy to assume that complex models must have high variance. High-variance models are "complex" in some sense, but the reverse need not be true. In addition, one has to be careful how to define complexity. In particular, the number of parameters used to describe the model is a poor measure of complexity. This is illustrated by the following example: the model
{\displaystyle f_{a,b}(x)=a\sin(bx)}
has only two parameters ({\displaystyle a,b}) but it can interpolate any number of points by oscillating with a high enough frequency, resulting in both a high bias and high variance.
An analogy can be made to the relationship between accuracy and precision. Accuracy is one way of quantifying bias and can intuitively be improved by selecting from only local information. Consequently, a sample will appear accurate (i.e. have low bias) under the aforementioned selection conditions, but may result in underfitting. In other words, test data may not agree as closely with training data, which would indicate imprecision and therefore inflated variance. A graphical example would be a straight line fit to data exhibiting quadratic behavior overall. Precision is a description of variance and generally can only be improved by selecting information from a comparatively larger space. The option to select many data points over a broad sample space is the ideal condition for any analysis. However, intrinsic constraints (whether physical, theoretical, computational, etc.) will always play a limiting role. The limiting case where only a finite number of data points are selected over a broad sample space may result in improved precision and lower variance overall, but may also result in an overreliance on the training data (overfitting). This means that test data would also not agree as closely with the training data, but in this case the reason is inaccuracy or high bias. To borrow from the previous example, the graphical representation would appear as a high-order polynomial fit to the same data exhibiting quadratic behavior. Note that error in each case is measured the same way, but the reason ascribed to the error is different depending on the balance between bias and variance. To mitigate how much information is used from neighboring observations, a model can be smoothed via explicit regularization, such as shrinkage.
== Bias–variance decomposition of mean squared error ==
Suppose that we have a training set consisting of a set of points {\displaystyle x_{1},\dots ,x_{n}} and real-valued labels {\displaystyle y_{i}} associated with the points {\displaystyle x_{i}}. We assume that the data is generated by a function {\displaystyle f(x)} such that {\displaystyle y=f(x)+\varepsilon }, where the noise, {\displaystyle \varepsilon }, has zero mean and variance {\displaystyle \sigma ^{2}}. That is, {\displaystyle y_{i}=f(x_{i})+\varepsilon _{i}}, where {\displaystyle \varepsilon _{i}} is a noise sample.
We want to find a function {\displaystyle {\hat {f}}(x;D)} that approximates the true function {\displaystyle f(x)} as well as possible, by means of some learning algorithm based on a training dataset (sample) {\displaystyle D=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}. We make "as well as possible" precise by measuring the mean squared error between {\displaystyle y} and {\displaystyle {\hat {f}}(x;D)}: we want {\displaystyle (y-{\hat {f}}(x;D))^{2}} to be minimal, both for {\displaystyle x_{1},\dots ,x_{n}} and for points outside of our sample. Of course, we cannot hope to do so perfectly, since the {\displaystyle y_{i}} contain noise {\displaystyle \varepsilon }; this means we must be prepared to accept an irreducible error in any function we come up with.
Finding an {\displaystyle {\hat {f}}} that generalizes to points outside of the training set can be done with any of the countless algorithms used for supervised learning. It turns out that whichever function {\displaystyle {\hat {f}}} we select, we can decompose its expected error on an unseen sample {\displaystyle x} (i.e. conditional on x) as follows:
{\displaystyle \mathbb {E} _{D,\varepsilon }{\Big [}{\big (}y-{\hat {f}}(x;D){\big )}^{2}{\Big ]}={\Big (}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}{\Big )}^{2}+\operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}+\sigma ^{2}}
where
{\displaystyle {\begin{aligned}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}&\triangleq \mathbb {E} _{D}{\big [}{\hat {f}}(x;D)-f(x){\big ]}\\&=\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}\,-\,f(x)\\&=\mathbb {E} _{D}{\big [}{\hat {f}}(x;D){\big ]}\,-\,\mathbb {E} _{y|x}{\big [}y(x){\big ]}\end{aligned}}}
and
{\displaystyle \operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}\triangleq \mathbb {E} _{D}{\Big [}{\big (}\mathbb {E} _{D}[{\hat {f}}(x;D)]-{\hat {f}}(x;D){\big )}^{2}{\Big ]}}
and
{\displaystyle \sigma ^{2}=\operatorname {E} _{y}{\Big [}{\big (}y-\underbrace {f(x)} _{E_{y|x}[y]}{\big )}^{2}{\Big ]}}
The expectation ranges over different choices of the training set {\displaystyle D=\{(x_{1},y_{1}),\dots ,(x_{n},y_{n})\}}, all sampled from the same joint distribution {\displaystyle P(x,y)}, which can for example be done via bootstrapping.
The three terms represent:
the square of the bias of the learning method, which can be thought of as the error caused by the simplifying assumptions built into the method. E.g., when approximating a non-linear function {\displaystyle f(x)} using a learning method for linear models, there will be error in the estimates {\displaystyle {\hat {f}}(x)} due to this assumption;
the variance of the learning method, or, intuitively, how much the learning method {\displaystyle {\hat {f}}(x)} will move around its mean;
the irreducible error {\displaystyle \sigma ^{2}}.
Since all three terms are non-negative, the irreducible error forms a lower bound on the expected error on unseen samples.
The more complex the model {\displaystyle {\hat {f}}(x)} is, the more data points it will capture, and the lower the bias will be. However, complexity will make the model "move" more to capture the data points, and hence its variance will be larger.
=== Derivation ===
The derivation of the bias–variance decomposition for squared error proceeds as follows. For convenience, we drop the {\displaystyle D} subscript in the following lines, such that {\displaystyle {\hat {f}}(x;D)={\hat {f}}(x)}.
Let us write the mean-squared error of our model:
{\displaystyle {\begin{aligned}{\text{MSE}}&\triangleq \mathbb {E} {\Big [}{\big (}y-{\hat {f}}(x){\big )}^{2}{\Big ]}\\&=\mathbb {E} {\Big [}{\big (}f(x)+\varepsilon -{\hat {f}}(x){\big )}^{2}{\Big ]}&&{\text{since }}y\triangleq f(x)+\varepsilon \\&=\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}^{2}{\Big ]}\,+\,2\ \mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}\varepsilon {\Big ]}\,+\,\mathbb {E} [\varepsilon ^{2}]\end{aligned}}}
We can show that the second term of this equation is null:
{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}\varepsilon {\Big ]}&=\mathbb {E} {\big [}f(x)-{\hat {f}}(x){\big ]}\ \mathbb {E} {\big [}\varepsilon {\big ]}&&{\text{since }}\varepsilon {\text{ is independent from }}x\\&=0&&{\text{since }}\mathbb {E} {\big [}\varepsilon {\big ]}=0\end{aligned}}}
Moreover, the third term of this equation is nothing but {\displaystyle \sigma ^{2}}, the variance of {\displaystyle \varepsilon }.
Let us now expand the remaining term:
{\displaystyle {\begin{aligned}\mathbb {E} {\Big [}{\big (}f(x)-{\hat {f}}(x){\big )}^{2}{\Big ]}&=\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\\&={\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}\,+\,2\ {\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}\,+\,\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}\end{aligned}}}
We show that:
{\displaystyle {\begin{aligned}{\color {Blue}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}^{2}{\Big ]}}&=\mathbb {E} {\big [}f(x)^{2}{\big ]}\,-\,2\ \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}\,+\,\mathbb {E} {\Big [}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}{\Big ]}\\&=f(x)^{2}\,-\,2\ f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}\end{aligned}}}
This last series of equalities comes from the fact that {\displaystyle f(x)} is not a random variable, but a fixed, deterministic function of {\displaystyle x}. Therefore, {\displaystyle \mathbb {E} {\big [}f(x){\big ]}=f(x)}. Similarly, {\displaystyle \mathbb {E} {\big [}f(x)^{2}{\big ]}=f(x)^{2}}, and {\displaystyle \mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\Big [}\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big ]}=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}}. Using the same reasoning, we can expand the second term and show that it is null:
{\displaystyle {\begin{aligned}{\color {PineGreen}\mathbb {E} {\Big [}{\big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\big )}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}{\Big ]}}&=\mathbb {E} {\Big [}f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x){\hat {f}}(x)\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}+\mathbb {E} {\big [}{\hat {f}}(x){\big ]}\ {\hat {f}}(x){\Big ]}\\&=f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,f(x)\ \mathbb {E} {\big [}{\hat {f}}(x){\big ]}\,-\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\,+\,\mathbb {E} {\big [}{\hat {f}}(x){\big ]}^{2}\\&=0\end{aligned}}}
Eventually, we plug our derivations back into the original equation and identify each term:
{\displaystyle {\begin{aligned}{\text{MSE}}&={\Big (}f(x)-\mathbb {E} {\big [}{\hat {f}}(x){\big ]}{\Big )}^{2}+\mathbb {E} {\Big [}{\big (}\mathbb {E} {\big [}{\hat {f}}(x){\big ]}-{\hat {f}}(x){\big )}^{2}{\Big ]}+\sigma ^{2}\\&=\operatorname {Bias} {\big (}{\hat {f}}(x){\big )}^{2}\,+\,\operatorname {Var} {\big [}{\hat {f}}(x){\big ]}\,+\,\sigma ^{2}\end{aligned}}}
Finally, the MSE loss function (or negative log-likelihood) is obtained by taking the expectation value over {\displaystyle x\sim P}:
{\displaystyle {\text{MSE}}=\mathbb {E} _{x}{\bigg \{}\operatorname {Bias} _{D}[{\hat {f}}(x;D)]^{2}+\operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}{\bigg \}}+\sigma ^{2}.}
== Approaches ==
Dimensionality reduction and feature selection can decrease variance by simplifying models. Similarly, a larger training set tends to decrease variance. Adding features (predictors) tends to decrease bias, at the expense of introducing additional variance. Learning algorithms typically have some tunable parameters that control bias and variance; for example,
linear and generalized linear models can be regularized to decrease their variance at the cost of increasing their bias.
In artificial neural networks, the variance increases and the bias decreases as the number of hidden units increases, although this classical assumption has been the subject of recent debate. As in GLMs, regularization is typically applied.
In k-nearest neighbor models, a high value of k leads to high bias and low variance (see below).
In instance-based learning, regularization can be achieved by varying the mixture of prototypes and exemplars.
In decision trees, the depth of the tree determines the variance. Decision trees are commonly pruned to control variance.
One way of resolving the trade-off is to use mixture models and ensemble learning. For example, boosting combines many "weak" (high bias) models in an ensemble that has lower bias than the individual models, while bagging combines "strong" learners in a way that reduces their variance.
Model validation methods such as cross-validation (statistics) can be used to tune models so as to optimize the trade-off.
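The first item above (regularizing a linear model to trade variance for bias) can be illustrated by simulating the sampling distribution of the ordinary least squares and ridge estimators. This is a sketch under illustrative assumptions: the true coefficients, noise level, and penalty strength are arbitrary choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(1)

beta_true = np.array([1.0, -2.0, 0.5])   # illustrative true coefficients
n, sigma, n_sets = 20, 1.0, 3000         # sample size, noise sd, resampled sets

def estimates(lam):
    """Ridge estimates of beta over many resampled training sets (lam=0 is OLS)."""
    p = beta_true.size
    est = np.empty((n_sets, p))
    for s in range(n_sets):
        X = rng.normal(size=(n, p))
        y = X @ beta_true + rng.normal(0, sigma, n)
        # ridge solution: (X'X + lam I)^{-1} X'y
        est[s] = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
    return est

ols, ridge = estimates(0.0), estimates(5.0)
var_ols, var_ridge = ols.var(axis=0).sum(), ridge.var(axis=0).sum()
bias2_ols = ((ols.mean(axis=0) - beta_true) ** 2).sum()
bias2_ridge = ((ridge.mean(axis=0) - beta_true) ** 2).sum()
print(f"OLS:   bias^2={bias2_ols:.4f}  var={var_ols:.4f}")
print(f"ridge: bias^2={bias2_ridge:.4f}  var={var_ridge:.4f}")
```

The OLS estimator is (nearly) unbiased but has the larger variance; the ridge estimator is shrunk toward zero, lowering its variance at the cost of a nonzero bias.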
=== k-nearest neighbors ===
In the case of k-nearest neighbors regression, when the expectation is taken over the possible labeling of a fixed training set, a closed-form expression exists that relates the bias–variance decomposition to the parameter k:
{\displaystyle \mathbb {E} \left[(y-{\hat {f}}(x))^{2}\mid X=x\right]=\left(f(x)-{\frac {1}{k}}\sum _{i=1}^{k}f(N_{i}(x))\right)^{2}+{\frac {\sigma ^{2}}{k}}+\sigma ^{2}}
where {\displaystyle N_{1}(x),\dots ,N_{k}(x)} are the k nearest neighbors of x in the training set. The bias (first term) is a monotone rising function of k, while the variance (second term) drops off as k is increased. In fact, under "reasonable assumptions" the bias of the first-nearest neighbor (1-NN) estimator vanishes entirely as the size of the training set approaches infinity.
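Because the decomposition above is in closed form, both terms can be evaluated directly. The sketch below uses an evenly spaced training grid and illustrative choices of f, σ, and the query point (none taken from the text); the variance term σ²/k shrinks as k grows while the neighborhood-averaging bias term grows.

```python
import numpy as np

def f(x):                              # illustrative true function
    return np.sin(3 * x)

sigma = 0.5                            # label noise standard deviation
x_train = np.linspace(0.0, 2.0, 201)   # fixed, evenly spaced design points
x0 = 1.0                               # query point (lies on the grid)
order = np.argsort(np.abs(x_train - x0))

def knn_bias2_var(k):
    """Bias^2 and Var of k-NN regression at x0 from the closed form above."""
    nbrs = x_train[order[:k]]          # the k nearest neighbors N_1(x0)..N_k(x0)
    bias2 = (f(x0) - f(nbrs).mean()) ** 2
    var = sigma ** 2 / k
    return bias2, var

for k in (1, 10, 50):
    b, v = knn_bias2_var(k)
    print(f"k={k:2d}  bias^2={b:.5f}  var={v:.5f}")
```

With the query point on the grid, the 1-NN bias is exactly zero and the variance equals σ²; as k increases the neighborhood average drifts away from f(x0) while σ²/k falls.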
== Applications ==
=== In regression ===
The bias–variance decomposition forms the conceptual basis for regression regularization methods such as LASSO and ridge regression. Regularization methods introduce bias into the regression solution that can reduce variance considerably relative to the ordinary least squares (OLS) solution. Although the OLS solution provides non-biased regression estimates, the lower variance solutions produced by regularization techniques provide superior MSE performance.
=== In classification ===
The bias–variance decomposition was originally formulated for least-squares regression. For the case of classification under the 0-1 loss (misclassification rate), it is possible to find a similar decomposition, with the caveat that the variance term becomes dependent on the target label. Alternatively, if the classification problem can be phrased as probabilistic classification, then the expected cross-entropy can instead be decomposed to give bias and variance terms with the same semantics but taking a different form.
It has been argued that as training data increases, the variance of learned models will tend to decrease, and hence that as training data quantity increases, error is minimised by methods that learn models with lesser bias, and that conversely, for smaller training data quantities it is ever more important to minimise variance.
=== In reinforcement learning ===
Even though the bias–variance decomposition does not directly apply in reinforcement learning, a similar tradeoff can also characterize generalization. When an agent has limited information on its environment, the suboptimality of an RL algorithm can be decomposed into the sum of two terms: a term related to an asymptotic bias and a term due to overfitting. The asymptotic bias is directly related to the learning algorithm (independently of the quantity of data) while the overfitting term comes from the fact that the amount of data is limited.
=== In Monte Carlo methods ===
While in traditional Monte Carlo methods the bias is typically zero, modern approaches, such as Markov chain Monte Carlo, are only asymptotically unbiased at best. Convergence diagnostics can be used to control bias via burn-in removal, but due to a limited computational budget, a bias–variance trade-off arises, leading to a wide range of approaches in which a controlled bias is accepted if it allows the variance, and hence the overall estimation error, to be dramatically reduced.
=== In human learning ===
While widely discussed in the context of machine learning, the bias–variance dilemma has been examined in the context of human cognition, most notably by Gerd Gigerenzer and co-workers in the context of learned heuristics. They have argued (see references below) that the human brain resolves the dilemma in the case of the typically sparse, poorly-characterized training-sets provided by experience by adopting high-bias/low variance heuristics. This reflects the fact that a zero-bias approach has poor generalizability to new situations, and also unreasonably presumes precise knowledge of the true state of the world. The resulting heuristics are relatively simple, but produce better inferences in a wider variety of situations.
Geman et al. argue that the bias–variance dilemma implies that abilities such as generic object recognition cannot be learned from scratch, but require a certain degree of "hard wiring" that is later tuned by experience. This is because model-free approaches to inference require impractically large training sets if they are to avoid high variance.
== See also ==
== References ==
== External links ==
MLU-Explain: The Bias Variance Tradeoff — An interactive visualization of the bias–variance tradeoff in LOESS Regression and K-Nearest Neighbors.
In statistics, data transformation is the application of a deterministic mathematical function to each point in a data set—that is, each data point zi is replaced with the transformed value yi = f(zi), where f is a function. Transforms are usually applied so that the data appear to more closely meet the assumptions of a statistical inference procedure that is to be applied, or to improve the interpretability or appearance of graphs.
Nearly always, the function that is used to transform the data is invertible, and generally is continuous. The transformation is usually applied to a collection of comparable measurements. For example, if we are working with data on people's incomes in some currency unit, it would be common to transform each person's income value by the logarithm function.
== Motivation ==
Guidance for how data should be transformed, or whether a transformation should be applied at all, should come from the particular statistical analysis to be performed. For example, a simple way to construct an approximate 95% confidence interval for the population mean is to take the sample mean plus or minus two standard error units. However, the constant factor 2 used here is particular to the normal distribution, and is only applicable if the sample mean varies approximately normally. The central limit theorem states that in many situations, the sample mean does vary normally if the sample size is reasonably large. However, if the population is substantially skewed and the sample size is at most moderate, the approximation provided by the central limit theorem can be poor, and the resulting confidence interval will likely have the wrong coverage probability. Thus, when there is evidence of substantial skew in the data, it is common to transform the data to a symmetric distribution before constructing a confidence interval. If desired, the confidence interval for the quantiles (such as the median) can then be transformed back to the original scale using the inverse of the transformation that was applied to the data.
Data can also be transformed to make them easier to visualize. For example, suppose we have a scatterplot in which the points are the countries of the world, and the data values being plotted are the land area and population of each country. If the plot is made using untransformed data (e.g. square kilometers for area and the number of people for population), most of the countries would be plotted in a tight cluster of points in the lower left corner of the graph. The few countries with very large areas and/or populations would be spread thinly around most of the graph's area. Simply rescaling units (e.g., to thousand square kilometers, or to millions of people) will not change this. However, following logarithmic transformations of both area and population, the points will be spread more uniformly in the graph.
Another reason for applying data transformation is to improve interpretability, even if no formal statistical analysis or visualization is to be performed. For example, suppose we are comparing cars in terms of their fuel economy. These data are usually presented as "kilometers per liter" or "miles per gallon". However, if the goal is to assess how much additional fuel a person would use in one year when driving one car compared to another, it is more natural to work with the data transformed by applying the reciprocal function, yielding liters per kilometer, or gallons per mile.
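As a small worked instance of this reciprocal transformation (the fuel-economy figures and annual distance below are hypothetical):

```python
# Two hypothetical cars, fuel economy in kilometres per litre
car_a_kmpl = 8.0
car_b_kmpl = 10.0
annual_km = 20_000

# Reciprocal transform: litres per kilometre makes annual fuel use additive
litres_a = annual_km / car_a_kmpl   # 2500 L per year
litres_b = annual_km / car_b_kmpl   # 2000 L per year
print(f"extra fuel per year with car A: {litres_a - litres_b:.0f} L")  # 500 L
```

On the transformed (litres-per-kilometre) scale, the difference between the cars translates directly into litres of extra fuel, which the original kilometres-per-litre scale does not.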
== In regression ==
Data transformation may be used as a remedial measure to make data suitable for modeling with linear regression if the original data violates one or more assumptions of linear regression. For example, the simplest linear regression models assume a linear relationship between the expected value of Y (the response variable to be predicted) and each independent variable (when the other independent variables are held fixed). If linearity fails to hold, even approximately, it is sometimes possible to transform either the independent or dependent variables in the regression model to improve the linearity. For example, addition of quadratic functions of the original independent variables may lead to a linear relationship with expected value of Y, resulting in a polynomial regression model, a special case of linear regression.
Another assumption of linear regression is homoscedasticity, that is the variance of errors must be the same regardless of the values of predictors. If this assumption is violated (i.e. if the data is heteroscedastic), it may be possible to find a transformation of Y alone, or transformations of both X (the predictor variables) and Y, such that the homoscedasticity assumption (in addition to the linearity assumption) holds true on the transformed variables and linear regression may therefore be applied on these.
Yet another application of data transformation is to address the problem of lack of normality in error terms. Univariate normality is not needed for least squares estimates of the regression parameters to be meaningful (see Gauss–Markov theorem). However confidence intervals and hypothesis tests will have better statistical properties if the variables exhibit multivariate normality. Transformations that stabilize the variance of error terms (i.e. those that address heteroscedasticity) often also help make the error terms approximately normal.
=== Examples ===
Equation: {\displaystyle Y=a+bX}
Meaning: A unit increase in X is associated with an average of b units increase in Y.
Equation: {\displaystyle \log(Y)=a+bX} (from exponentiating both sides of the equation: {\displaystyle Y=e^{a}e^{bX}})
Meaning: A unit increase in X is associated with an average increase of b units in {\displaystyle \log(Y)}, or equivalently, Y increases on average by a multiplicative factor of {\displaystyle e^{b}\!}. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a unit increase in X would lead to a {\displaystyle 10^{b}} times increase in Y on average. If b were 1, this would imply a 10-fold increase in Y for a unit increase in X.
Equation: {\displaystyle Y=a+b\log(X)}
Meaning: A k-fold increase in X is associated with an average of {\displaystyle b\times \log(k)} units increase in Y. For illustrative purposes, if the base-10 logarithm were used instead of the natural logarithm in the above transformation and the same symbols (a and b) are used to denote the regression coefficients, then a tenfold increase in X would result in an average increase of {\displaystyle b\times \log _{10}(10)=b} units in Y.
Equation: {\displaystyle \log(Y)=a+b\log(X)} (from exponentiating both sides of the equation: {\displaystyle Y=e^{a}X^{b}})
Meaning: A k-fold increase in X is associated with a {\displaystyle k^{b}} multiplicative increase in Y on average. Thus if X doubles, it would result in Y changing by a multiplicative factor of {\displaystyle 2^{b}\!}.
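These interpretations can be verified numerically. The sketch below simulates data from the second model above, log(Y) = a + bX, with illustrative coefficients and noise, refits it by least squares, and recovers the multiplicative factor e^b associated with a unit increase in X.

```python
import numpy as np

rng = np.random.default_rng(3)

a, b = 1.0, 0.3                                    # illustrative true coefficients
x = rng.uniform(0, 5, 1000)
y = np.exp(a + b * x + rng.normal(0, 0.1, 1000))   # log(Y) = a + bX + noise

# Fit log(Y) = a + b X by ordinary least squares
b_hat, a_hat = np.polyfit(x, np.log(y), 1)

# A unit increase in X multiplies Y by about e^b
factor = np.exp(b_hat)
print(f"b_hat = {b_hat:.3f}, multiplicative factor per unit of X = {factor:.3f}")
```

The fitted slope is close to the true b = 0.3, so the estimated multiplicative effect per unit of X is close to e^0.3 ≈ 1.35.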
=== Alternative ===
Generalized linear models (GLMs) provide a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. GLMs allow the linear model to be related to the response variable via a link function and allow the magnitude of the variance of each measurement to be a function of its predicted value.
== Common cases ==
The logarithm transformation and square root transformation are commonly used for positive data, and the multiplicative inverse transformation (reciprocal transformation) can be used for non-zero data. The power transformation is a family of transformations parameterized by a value λ that includes the logarithm, square root, and multiplicative inverse transformations as special cases. To approach data transformation systematically, it is possible to use statistical estimation techniques to estimate the parameter λ in the power transformation, thereby identifying the transformation that is approximately the most appropriate in a given setting. Since the power transformation family also includes the identity transformation, this approach can also indicate whether it would be best to analyze the data without a transformation. In regression analysis, this approach is known as the Box–Cox transformation.
The reciprocal transformation, some power transformations such as the Yeo–Johnson transformation, and certain other transformations such as applying the inverse hyperbolic sine, can be meaningfully applied to data that include both positive and negative values (the power transformation is invertible over all real numbers if λ is an odd integer). However, when both negative and positive values are observed, it is sometimes common to begin by adding a constant to all values, producing a set of non-negative data to which any power transformation can be applied.
A common situation where a data transformation is applied is when a value of interest ranges over several orders of magnitude. Many physical and social phenomena exhibit such behavior — incomes, species populations, galaxy sizes, and rainfall volumes, to name a few. Power transforms, and in particular the logarithm, can often be used to induce symmetry in such data. The logarithm is often favored because it is easy to interpret its result in terms of "fold changes".
The logarithm also has a useful effect on ratios. If we are comparing positive quantities X and Y using the ratio X / Y, then if X < Y, the ratio is in the interval (0,1), whereas if X > Y, the ratio is in the half-line (1,∞), where a ratio of 1 corresponds to equality. In an analysis where X and Y are treated symmetrically, the log-ratio log(X / Y) is zero in the case of equality, and it has the property that if X is K times greater than Y, the log-ratio is equidistant from zero as in the situation where Y is K times greater than X (the log-ratios are log(K) and −log(K) in these two situations).
If values are naturally restricted to be in the range 0 to 1, not including the end-points, then a logit transformation may be appropriate: this yields values in the range (−∞,∞).
=== Transforming to normality ===
1. It is not always necessary or desirable to transform a data set to resemble a normal distribution. However, if symmetry or normality are desired, they can often be induced through one of the power transformations.
2. A linguistic power function is distributed according to the Zipf–Mandelbrot law. The distribution is extremely spiky and leptokurtic; this is why researchers had to turn their backs on statistics to solve, e.g., authorship attribution problems. Nevertheless, the use of Gaussian statistics is perfectly possible by applying data transformation.
3. To assess whether normality has been achieved after transformation, any of the standard normality tests may be used. A graphical approach is usually more informative than a formal statistical test and hence a normal quantile plot is commonly used to assess the fit of a data set to a normal population. Alternatively, rules of thumb based on the sample skewness and kurtosis have also been proposed.
=== Transforming to a uniform distribution or an arbitrary distribution ===
If we observe a set of n values X1, ..., Xn with no ties (i.e., there are n distinct values), we can replace Xi with the transformed value Yi = k, where k is defined such that Xi is the kth largest among all the X values. This is called the rank transform, and creates data with a perfect fit to a uniform distribution. This approach has a population analogue.
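The rank transform described above can be sketched in a few lines; this is an illustrative helper (quadratic in n for simplicity, not a library routine), ranking from largest to smallest to match the text:

```python
# Rank transform: replace each value Xi by k, where Xi is the kth
# largest among the n distinct values. The ranks {1, ..., n} are a
# perfect fit to a discrete uniform distribution.
def rank_transform(values):
    order = sorted(values, reverse=True)       # largest first
    return [order.index(v) + 1 for v in values]

x = [3.2, -1.0, 7.5, 0.4]
print(rank_transform(x))  # [2, 4, 1, 3]
```

Whatever the shape of the original data, the transformed values are exactly the integers 1 through n, one each.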
Using the probability integral transform, if X is any random variable, and F is the cumulative distribution function of X, then as long as F is invertible, the random variable U = F(X) follows a uniform distribution on the unit interval [0,1].
From a uniform distribution, we can transform to any distribution with an invertible cumulative distribution function. If G is an invertible cumulative distribution function, and U is a uniformly distributed random variable, then the random variable G−1(U) has G as its cumulative distribution function.
Putting the two together, if X is any random variable, F is the invertible cumulative distribution function of X, and G is an invertible cumulative distribution function then the random variable G−1(F(X)) has G as its cumulative distribution function.
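The composition G−1(F(X)) can be demonstrated with a minimal sketch. Here X is uniform on (0, 1) (so F is the identity on that interval) and G is chosen, purely for illustration, as the Exponential(1) distribution, whose inverse CDF is −ln(1 − u):

```python
import math
import random

# Sketch of G^{-1}(F(X)): X ~ Uniform(0, 1), so F(x) = x, and G is the
# Exponential(1) CDF G(y) = 1 - exp(-y) with inverse -ln(1 - u).
# The transformed values then behave like Exponential(1) draws.
random.seed(0)

def F(x):          # CDF of Uniform(0, 1)
    return x

def G_inv(u):      # inverse CDF of Exponential(1)
    return -math.log(1.0 - u)

sample = [G_inv(F(random.random())) for _ in range(100_000)]
mean = sum(sample) / len(sample)
assert abs(mean - 1.0) < 0.05  # Exponential(1) has mean 1
```

This is exactly inverse transform sampling: the uniform step F(X) is the probability integral transform, and G−1 maps it onto the target distribution.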
=== Variance stabilizing transformations ===
Many types of statistical data exhibit a "variance-on-mean relationship", meaning that the variability is different for data values with different expected values. As an example, in comparing different populations in the world, the variance of income tends to increase with mean income. If we consider a number of small area units (e.g., counties in the United States) and obtain the mean and variance of incomes within each county, it is common that the counties with higher mean income also have higher variances.
A variance-stabilizing transformation aims to remove a variance-on-mean relationship, so that the variance becomes constant relative to the mean. Examples of variance-stabilizing transformations are the Fisher transformation for the sample correlation coefficient, the square root transformation or Anscombe transform for Poisson data (count data), the Box–Cox transformation for regression analysis, and the arcsine square root transformation or angular transformation for proportions (binomial data). While commonly used for statistical analysis of proportional data, the arcsine square root transformation is no longer recommended, because logistic regression (for binomial proportions) or a logit transformation (for non-binomial proportions) are more appropriate, in part because they reduce type-II error.
== Transformations for multivariate data ==
Univariate functions can be applied point-wise to multivariate data to modify their marginal distributions. It is also possible to modify some attributes of a multivariate distribution using an appropriately constructed transformation. For example, when working with time series and other types of sequential data, it is common to difference the data to improve stationarity. If data generated by a random vector X are observed as vectors Xi of observations with covariance matrix Σ, a linear transformation can be used to decorrelate the data. To do this, the Cholesky decomposition is used to express Σ = A A'. Then the transformed vector Yi = A−1Xi has the identity matrix as its covariance matrix.
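The Cholesky-based decorrelation above can be sketched for the 2×2 case in plain Python (in practice one would use a library routine such as `numpy.linalg.cholesky`; the helpers `chol2` and `solve_lower` here are illustrative):

```python
import math
import random

# Decorrelate 2-D data: with covariance S = A A' (A lower-triangular),
# the transform y = A^{-1} x has the identity matrix as covariance.
def chol2(s):                       # Cholesky factor of a 2x2 SPD matrix
    a11 = math.sqrt(s[0][0])
    a21 = s[1][0] / a11
    a22 = math.sqrt(s[1][1] - a21 * a21)
    return [[a11, 0.0], [a21, a22]]

def solve_lower(a, x):              # y = A^{-1} x by forward substitution
    y0 = x[0] / a[0][0]
    y1 = (x[1] - a[1][0] * y0) / a[1][1]
    return [y0, y1]

def cov(d):                         # 2x2 sample covariance
    n = len(d)
    m0 = sum(p[0] for p in d) / n
    m1 = sum(p[1] for p in d) / n
    c00 = sum((p[0] - m0) ** 2 for p in d) / n
    c11 = sum((p[1] - m1) ** 2 for p in d) / n
    c01 = sum((p[0] - m0) * (p[1] - m1) for p in d) / n
    return [[c00, c01], [c01, c11]]

random.seed(1)
# Strongly correlated data: second coordinate = first + noise.
data = []
for _ in range(50_000):
    u = random.gauss(0, 2)
    data.append([u, u + random.gauss(0, 1)])

a = chol2(cov(data))
white = [solve_lower(a, p) for p in data]
cw = cov(white)
# Covariance of the transformed data is (numerically) the identity.
assert abs(cw[0][0] - 1) < 0.05 and abs(cw[1][1] - 1) < 0.05
assert abs(cw[0][1]) < 0.05
```

Because A is computed from the sample covariance itself, the whitened sample covariance equals the identity up to floating-point error.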
== See also ==
Arcsin
Feature engineering
Logit
Nonlinear regression § Transformation
Pearson correlation coefficient
Power transform (Box–Cox)
Wilson–Hilferty transformation
Whitening transformation
== References ==
== External links ==
Log Transformations for Skewed and Wide Distributions – discussing the log and the "signed logarithm" transformations (A chapter from "Practical Data Science with R").
In statistics, identifiability is a property which a model must satisfy for precise inference to be possible. A model is identifiable if it is theoretically possible to learn the true values of this model's underlying parameters after obtaining an infinite number of observations from it. Mathematically, this is equivalent to saying that different values of the parameters must generate different probability distributions of the observable variables. Usually the model is identifiable only under certain technical restrictions, in which case the set of these requirements is called the identification conditions.
A model that fails to be identifiable is said to be non-identifiable or unidentifiable: two or more parametrizations are observationally equivalent. In some cases, even though a model is non-identifiable, it is still possible to learn the true values of a certain subset of the model parameters. In this case we say that the model is partially identifiable. In other cases it may be possible to learn the location of the true parameter up to a certain finite region of the parameter space, in which case the model is set identifiable.
Aside from strictly theoretical exploration of the model properties, identifiability can be referred to in a wider scope when a model is tested with experimental data sets, using identifiability analysis.
== Definition ==
Let {\displaystyle {\mathcal {P}}=\{P_{\theta }:\theta \in \Theta \}} be a statistical model with parameter space {\displaystyle \Theta }. We say that {\displaystyle {\mathcal {P}}} is identifiable if the mapping {\displaystyle \theta \mapsto P_{\theta }} is one-to-one:
{\displaystyle P_{\theta _{1}}=P_{\theta _{2}}\quad \Rightarrow \quad \theta _{1}=\theta _{2}\quad \ {\text{for all }}\theta _{1},\theta _{2}\in \Theta .}
This definition means that distinct values of θ should correspond to distinct probability distributions: if θ1≠θ2, then also Pθ1≠Pθ2. If the distributions are defined in terms of the probability density functions (pdfs), then two pdfs should be considered distinct only if they differ on a set of non-zero measure (for example the two functions ƒ1(x) = 1{0 ≤ x < 1} and ƒ2(x) = 1{0 ≤ x ≤ 1} differ only at a single point x = 1 — a set of measure zero — and thus cannot be considered as distinct pdfs).
Identifiability of the model in the sense of invertibility of the map {\displaystyle \theta \mapsto P_{\theta }} is equivalent to being able to learn the model's true parameter if the model can be observed indefinitely long. Indeed, if {Xt} ⊆ S is the sequence of observations from the model, then by the strong law of large numbers,
{\displaystyle {\frac {1}{T}}\sum _{t=1}^{T}\mathbf {1} _{\{X_{t}\in A\}}\ {\xrightarrow {\text{a.s.}}}\ \Pr[X_{t}\in A],}
for every measurable set A ⊆ S (here 1{...} is the indicator function). Thus, with an infinite number of observations we will be able to find the true probability distribution P0 in the model, and since the identifiability condition above requires that the map {\displaystyle \theta \mapsto P_{\theta }} be invertible, we will also be able to find the true value of the parameter which generated the given distribution P0.
== Examples ==
=== Example 1 ===
Let {\displaystyle {\mathcal {P}}} be the normal location-scale family:
{\displaystyle {\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {1}{{\sqrt {2\pi }}\sigma }}e^{-{\frac {1}{2\sigma ^{2}}}(x-\mu )^{2}}\ {\Big |}\ \theta =(\mu ,\sigma ):\mu \in \mathbb {R} ,\,\sigma \!>0\ {\Big \}}.}
Then
{\displaystyle {\begin{aligned}&f_{\theta _{1}}(x)=f_{\theta _{2}}(x)\\[6pt]\Longleftrightarrow {}&{\frac {1}{{\sqrt {2\pi }}\sigma _{1}}}\exp \left(-{\frac {1}{2\sigma _{1}^{2}}}(x-\mu _{1})^{2}\right)={\frac {1}{{\sqrt {2\pi }}\sigma _{2}}}\exp \left(-{\frac {1}{2\sigma _{2}^{2}}}(x-\mu _{2})^{2}\right)\\[6pt]\Longleftrightarrow {}&{\frac {1}{\sigma _{1}^{2}}}(x-\mu _{1})^{2}+\ln \sigma _{1}={\frac {1}{\sigma _{2}^{2}}}(x-\mu _{2})^{2}+\ln \sigma _{2}\\[6pt]\Longleftrightarrow {}&x^{2}\left({\frac {1}{\sigma _{1}^{2}}}-{\frac {1}{\sigma _{2}^{2}}}\right)-2x\left({\frac {\mu _{1}}{\sigma _{1}^{2}}}-{\frac {\mu _{2}}{\sigma _{2}^{2}}}\right)+\left({\frac {\mu _{1}^{2}}{\sigma _{1}^{2}}}-{\frac {\mu _{2}^{2}}{\sigma _{2}^{2}}}+\ln \sigma _{1}-\ln \sigma _{2}\right)=0\end{aligned}}}
This expression is equal to zero for almost all x only when all its coefficients are equal to zero, which is only possible when |σ1| = |σ2| and μ1 = μ2. Since the scale parameter σ is restricted to be greater than zero, we conclude that the model is identifiable: ƒθ1 = ƒθ2 ⇔ θ1 = θ2.
=== Example 2 ===
Let {\displaystyle {\mathcal {P}}} be the standard linear regression model:
{\displaystyle y=\beta 'x+\varepsilon ,\quad \mathrm {E} [\,\varepsilon \mid x\,]=0}
(where ′ denotes matrix transpose). Then the parameter β is identifiable if and only if the matrix {\displaystyle \mathrm {E} [xx']} is invertible. Thus, this is the identification condition in the model.
=== Example 3 ===
Suppose {\displaystyle {\mathcal {P}}} is the classical errors-in-variables linear model:
{\displaystyle {\begin{cases}y=\beta x^{*}+\varepsilon ,\\x=x^{*}+\eta ,\end{cases}}}
where (ε,η,x*) are jointly normal independent random variables with zero expected value and unknown variances, and only the variables (x,y) are observed. Then this model is not identifiable: only the product βσ²∗ is identifiable (where σ²∗ is the variance of the latent regressor x*). This is also an example of a set identifiable model: although the exact value of β cannot be learned, we can guarantee that it must lie somewhere in the interval (βyx, 1/βxy), where βyx is the coefficient in the OLS regression of y on x, and βxy is the coefficient in the OLS regression of x on y.
If we abandon the normality assumption and instead require that x* not be normally distributed, retaining only the independence condition ε ⊥ η ⊥ x*, then the model becomes identifiable.
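The set-identification interval (βyx, 1/βxy) from Example 3 can be illustrated by simulation. All parameter values below (true β = 1, unit variances, sample size) are made up for the demonstration:

```python
import random

# Errors-in-variables sketch: y = beta*x* + eps, x = x* + eta, with
# independent standard normal (eps, eta, x*). The OLS slope of y on x
# (attenuated toward zero) and the reciprocal OLS slope of x on y
# should bracket the unidentified true beta.
random.seed(42)
beta = 1.0
n = 50_000
xs, ys = [], []
for _ in range(n):
    x_star = random.gauss(0, 1)
    xs.append(x_star + random.gauss(0, 1))         # x = x* + eta
    ys.append(beta * x_star + random.gauss(0, 1))  # y = beta x* + eps

def ols_slope(u, v):
    # Slope of v regressed on u; no intercept needed since means are zero.
    return sum(a * b for a, b in zip(u, v)) / sum(a * a for a in u)

b_yx = ols_slope(xs, ys)          # about beta/2 under these variances
b_xy = ols_slope(ys, xs)
assert b_yx < beta < 1.0 / b_xy   # beta lies in (b_yx, 1/b_xy)
```

With these variances the theoretical bounds are 0.5 and 2, so the true β = 1 sits strictly inside the interval, but no amount of data narrows it to a point.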
== See also ==
System identification
Structural identifiability
Observability
Simultaneous equations model
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Walter, É.; Pronzato, L. (1997), Identification of Parametric Models from Experimental Data, Springer
=== Econometrics ===
Lewbel, Arthur (2019-12-01). "The Identification Zoo: Meanings of Identification in Econometrics". Journal of Economic Literature. 57 (4). American Economic Association: 835–903. doi:10.1257/jel.20181361. ISSN 0022-0515. S2CID 125792293.
Matzkin, Rosa L. (2013). "Nonparametric Identification in Structural Economic Models". Annual Review of Economics. 5 (1): 457–486. doi:10.1146/annurev-economics-082912-110231.
Rothenberg, Thomas J. (1971). "Identification in Parametric Models". Econometrica. 39 (3): 577–591. doi:10.2307/1913267. ISSN 0012-9682. JSTOR 1913267.
In statistics, the false discovery rate (FDR) is a method of conceptualizing the rate of type I errors in null hypothesis testing when conducting multiple comparisons. FDR-controlling procedures are designed to control the FDR, which is the expected proportion of "discoveries" (rejected null hypotheses) that are false (incorrect rejections of the null). Equivalently, the FDR is the expected ratio of the number of false positive classifications (false discoveries) to the total number of positive classifications (rejections of the null). The total number of rejections of the null includes both the number of false positives (FP) and true positives (TP). Simply put, FDR = FP / (FP + TP). FDR-controlling procedures provide less stringent control of Type I errors compared to family-wise error rate (FWER) controlling procedures (such as the Bonferroni correction), which control the probability of at least one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased numbers of Type I errors.
== History ==
=== Technological motivations ===
The modern widespread use of the FDR is believed to stem from, and be motivated by, the development in technologies that allowed the collection and analysis of a large number of distinct variables in several individuals (e.g., the expression level of each of 10,000 different genes in 100 different persons). By the late 1980s and 1990s, the development of "high-throughput" sciences, such as genomics, allowed for rapid data acquisition. This, coupled with the growth in computing power, made it possible to seamlessly perform a very high number of statistical tests on a given data set. The technology of microarrays was a prototypical example, as it enabled thousands of genes to be tested simultaneously for differential expression between two biological conditions.
As high-throughput technologies became common, technological and/or financial constraints led researchers to collect datasets with relatively small sample sizes (e.g. few individuals being tested) and large numbers of variables being measured per sample (e.g. thousands of gene expression levels). In these datasets, too few of the measured variables showed statistical significance after classic correction for multiple tests with standard multiple comparison procedures. This created a need within many scientific communities to abandon FWER and unadjusted multiple hypothesis testing for other ways to highlight and rank in publications those variables showing marked effects across individuals or treatments that would otherwise be dismissed as non-significant after standard correction for multiple tests. In response to this, a variety of error rates have been proposed—and become commonly used in publications—that are less conservative than FWER in flagging possibly noteworthy observations. The FDR is useful when researchers are looking for "discoveries" that will give them follow-up work (e.g., detecting promising genes for follow-up studies), and are interested in controlling the proportion of "false leads" they are willing to accept.
=== Literature ===
The FDR concept was formally described by Yoav Benjamini and Yosef Hochberg in 1995 (BH procedure) as a less conservative and arguably more appropriate approach for identifying the important few from the trivial many effects tested. The FDR has been particularly influential, as it was the first alternative to the FWER to gain broad acceptance in many scientific fields (especially in the life sciences, from genetics to biochemistry, oncology and plant sciences). In 2005, the Benjamini and Hochberg paper from 1995 was identified as one of the 25 most-cited statistical papers.
Prior to the 1995 introduction of the FDR concept, various precursor ideas had been considered in the statistics literature. In 1979, Holm proposed the Holm procedure, a stepwise algorithm for controlling the FWER that is at least as powerful as the well-known Bonferroni adjustment. This stepwise algorithm sorts the p-values and sequentially rejects the hypotheses starting from the smallest p-values.
Benjamini (2010) said that the false discovery rate, and the paper Benjamini and Hochberg (1995), had its origins in two papers concerned with multiple testing:
The first paper is by Schweder and Spjotvoll (1982) who suggested plotting the ranked p-values and assessing the number of true null hypotheses ({\displaystyle m_{0}}) via an eye-fitted line starting from the largest p-values. The p-values that deviate from this straight line then should correspond to the false null hypotheses. This idea was later developed into an algorithm and incorporated the estimation of {\displaystyle m_{0}} into procedures such as Bonferroni, Holm or Hochberg. This idea is closely related to the graphical interpretation of the BH procedure.
The second paper is by Branko Soric (1989) which introduced the terminology of "discovery" in the multiple hypothesis testing context. Soric used the expected number of false discoveries divided by the number of discoveries {\displaystyle \left(E[V]/R\right)} as a warning that "a large part of statistical discoveries may be wrong". This led Benjamini and Hochberg to the idea that a similar error rate, rather than being merely a warning, can serve as a worthy goal to control.
The BH procedure was proven to control the FDR for independent tests in 1995 by Benjamini and Hochberg. In 1986, R. J. Simes offered the same procedure as the "Simes procedure", in order to control the FWER in the weak sense (under the intersection null hypothesis) when the statistics are independent.
== Definitions ==
Based on definitions below we can define Q as the proportion of false discoveries among the discoveries (rejections of the null hypothesis):
{\displaystyle Q={\frac {V}{R}}={\frac {V}{V+S}},}
where {\displaystyle V} is the number of false discoveries and {\displaystyle S} is the number of true discoveries.
The false discovery rate (FDR) is then simply the following:
{\displaystyle \mathrm {FDR} =Q_{e}=\mathrm {E} \!\left[Q\right],}
where {\displaystyle \mathrm {E} \!\left[Q\right]} is the expected value of {\displaystyle Q}. The goal is to keep FDR below a given threshold q. To avoid division by zero, {\displaystyle Q} is defined to be 0 when {\displaystyle R=0}. Formally,
{\displaystyle \mathrm {FDR} =\mathrm {E} \!\left[V/R\mid R>0\right]\cdot \mathrm {P} \!\left(R>0\right).}
=== Classification of multiple hypothesis tests ===
The following table defines the possible outcomes when testing multiple null hypotheses.
Suppose we have a number m of null hypotheses, denoted by: H1, H2, ..., Hm.
Using a statistical test, we reject the null hypothesis if the test is declared significant. We do not reject the null hypothesis if the test is non-significant.
Summing each type of outcome over all Hi yields the following random variables:
m is the total number of hypotheses tested
{\displaystyle m_{0}} is the number of true null hypotheses, an unknown parameter
{\displaystyle m-m_{0}} is the number of true alternative hypotheses
V is the number of false positives (Type I error) (also called "false discoveries")
S is the number of true positives (also called "true discoveries")
T is the number of false negatives (Type II error)
U is the number of true negatives
{\displaystyle R=V+S} is the number of rejected null hypotheses (also called "discoveries", either true or false)
In m hypothesis tests of which {\displaystyle m_{0}} are true null hypotheses, R is an observable random variable, and S, T, U, and V are unobservable random variables.
== Controlling procedures ==
The setting for many procedures is such that we have {\displaystyle H_{1}\ldots H_{m}} null hypotheses tested and {\displaystyle P_{1}\ldots P_{m}} their corresponding p-values. We list these p-values in ascending order and denote them by {\displaystyle P_{(1)}\ldots P_{(m)}}. A procedure that goes from a small test-statistic to a large one will be called a step-up procedure. In a similar way, in a "step-down" procedure we move from a large corresponding test statistic to a smaller one.
=== Benjamini–Hochberg procedure ===
The Benjamini–Hochberg procedure (BH step-up procedure) controls the FDR at level {\displaystyle \alpha }. It works as follows:
For a given {\displaystyle \alpha }, find the largest k such that {\displaystyle P_{(k)}\leq {\frac {k}{m}}\alpha }
Reject the null hypothesis (i.e., declare discoveries) for all {\displaystyle H_{(i)}} for {\displaystyle i=1,\ldots ,k}
Geometrically, this corresponds to plotting {\displaystyle P_{(k)}} vs. k (on the y and x axes respectively), drawing the line through the origin with slope {\displaystyle {\frac {\alpha }{m}}}, and declaring discoveries for all points on the left, up to, and including the last point that is not above the line.
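The two steps above translate directly into code. This is an illustrative sketch, not the reference implementation from any particular package (R users would typically call `p.adjust(p, method = "BH")` instead):

```python
# Benjamini-Hochberg step-up procedure: find the largest k with
# P_(k) <= (k/m) * alpha, then reject the k hypotheses with the
# smallest p-values.
def benjamini_hochberg(p_values, alpha):
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            k = rank                 # largest passing rank seen so far
    return sorted(order[:k])         # indices of declared discoveries

# Four tests at level alpha = 0.05: the per-rank thresholds are
# 0.0125, 0.025, 0.0375, 0.05, so the three smallest p-values pass.
print(benjamini_hochberg([0.03, 0.5, 0.01, 0.02], 0.05))  # [0, 2, 3]
```

Note the step-up character: once the largest passing rank k is found, all hypotheses with smaller ordered p-values are rejected even if some of them individually missed their own threshold.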
The BH procedure is valid when the m tests are independent, and also in various scenarios of dependence, but is not universally valid. It also satisfies the inequality:
{\displaystyle E(Q)\leq {\frac {m_{0}}{m}}\alpha \leq \alpha }
If an estimator of {\displaystyle m_{0}} is inserted into the BH procedure, it is no longer guaranteed to achieve FDR control at the desired level. Adjustments may be needed in the estimator and several modifications have been proposed.
Note that the mean {\displaystyle \alpha } for these m tests is {\displaystyle {\frac {\alpha (m+1)}{2m}}}, the Mean(FDR {\displaystyle \alpha }) or MFDR, {\displaystyle \alpha } adjusted for m independent or positively correlated tests (see AFDR below). The MFDR expression here is for a single recomputed value of {\displaystyle \alpha } and is not part of the Benjamini and Hochberg method.
=== Benjamini–Yekutieli procedure ===
The Benjamini–Yekutieli procedure controls the false discovery rate under arbitrary dependence assumptions. This refinement modifies the threshold and finds the largest k such that:
{\displaystyle P_{(k)}\leq {\frac {k}{m\cdot c(m)}}\alpha }
If the tests are independent or positively correlated (as in the Benjamini–Hochberg procedure): {\displaystyle c(m)=1}
Under arbitrary dependence (including the case of negative correlation), c(m) is the harmonic number: {\displaystyle c(m)=\sum _{i=1}^{m}{\frac {1}{i}}}. Note that {\displaystyle c(m)} can be approximated by using the Taylor series expansion and the Euler–Mascheroni constant ({\displaystyle \gamma =0.57721...}):
{\displaystyle \sum _{i=1}^{m}{\frac {1}{i}}\approx \ln(m)+\gamma +{\frac {1}{2m}}.}
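The quality of this approximation is easy to verify; the sketch below compares the exact harmonic number c(m) with ln(m) + γ + 1/(2m) for an arbitrarily chosen m:

```python
import math

# c(m) for the Benjamini-Yekutieli correction, versus the
# approximation ln(m) + gamma + 1/(2m) (gamma = Euler-Mascheroni).
GAMMA = 0.5772156649015329

def c(m):
    return sum(1.0 / i for i in range(1, m + 1))

def c_approx(m):
    return math.log(m) + GAMMA + 1.0 / (2 * m)

m = 100
exact, approx = c(m), c_approx(m)
print(round(exact, 4), round(approx, 4))  # both ~5.1874
assert abs(exact - approx) < 1e-4
```

The error of the approximation shrinks like 1/(12m²), so even for modest m it is negligible next to the correction itself.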
Using MFDR and the formulas above, an adjusted MFDR (or AFDR) is the minimum of the mean {\displaystyle \alpha } for m dependent tests, i.e.,
{\displaystyle {\frac {\mathrm {MFDR} }{c(m)}}={\frac {\alpha (m+1)}{2m[\ln(m)+\gamma ]+1}}.}
Another way to address dependence is by bootstrapping and rerandomization.
=== Storey-Tibshirani procedure ===
In the Storey-Tibshirani procedure, q-values are used for controlling the FDR.
== Properties ==
=== Adaptive and scalable ===
Using a multiplicity procedure that controls the FDR criterion is adaptive and scalable. Meaning that controlling the FDR can be very permissive (if the data justify it), or conservative (acting close to control of FWER for sparse problems) - all depending on the number of hypotheses tested and the level of significance.
The FDR criterion adapts so that the same number of false discoveries (V) will have different implications, depending on the total number of discoveries (R). This contrasts with the family-wise error rate criterion. For example, if inspecting 100 hypotheses (say, 100 genetic mutations or SNPs for association with some phenotype in some population):
If we make 4 discoveries (R), having 2 of them be false discoveries (V) is often very costly. Whereas,
If we make 50 discoveries (R), having 2 of them be false discoveries (V) is often not very costly.
The FDR criterion is scalable in that the same proportion of false discoveries out of the total number of discoveries (Q) remains sensible for different numbers of total discoveries (R). For example:
If we make 100 discoveries (R), having 5 of them be false discoveries ({\displaystyle q=5\%}) may not be very costly.
Similarly, if we make 1000 discoveries (R), having 50 of them be false discoveries (as before, {\displaystyle q=5\%}) may still not be very costly.
=== Dependency among the test statistics ===
Controlling the FDR using the linear step-up BH procedure, at level q, has several properties related to the dependency structure between the test statistics of the m null hypotheses that are being corrected for. If the test statistics are:
Independent: {\displaystyle \mathrm {FDR} \leq {\frac {m_{0}}{m}}q}
Independent and continuous: {\displaystyle \mathrm {FDR} ={\frac {m_{0}}{m}}q}
Positive dependent: {\displaystyle \mathrm {FDR} \leq {\frac {m_{0}}{m}}q}
In the general case: {\displaystyle \mathrm {FDR} \leq {\frac {m_{0}}{m}}{\frac {q}{1+{\frac {1}{2}}+{\frac {1}{3}}+\cdots +{\frac {1}{m}}}}\approx {\frac {m_{0}}{m}}{\frac {q}{\ln(m)+\gamma +{\frac {1}{2m}}}},} where {\displaystyle \gamma } is the Euler–Mascheroni constant.
=== Proportion of true hypotheses ===
If all of the null hypotheses are true ({\displaystyle m_{0}=m}), then controlling the FDR at level q guarantees control over the FWER (this is also called "weak control of the FWER"): {\displaystyle \mathrm {FWER} =P\left(V\geq 1\right)=E\left({\frac {V}{R}}\right)=\mathrm {FDR} \leq q}, simply because the event of rejecting at least one true null hypothesis {\displaystyle \{V\geq 1\}} is exactly the event {\displaystyle \{V/R=1\}}, and the event {\displaystyle \{V=0\}} is exactly the event {\displaystyle \{V/R=0\}} (when {\displaystyle V=R=0}, {\displaystyle V/R=0} by definition). But if there are some true discoveries to be made ({\displaystyle m_{0}<m}) then FWER ≥ FDR. In that case there will be room for improving detection power. It also means that any procedure that controls the FWER will also control the FDR.
=== Average power ===
The average power of the Benjamini-Hochberg procedure can be computed analytically.
== Related concepts ==
The discovery of the FDR was preceded and followed by many other types of error rates. These include:
PCER (per-comparison error rate) is defined as: {\displaystyle \mathrm {PCER} =E\left[{\frac {V}{m}}\right]}. Testing individually each hypothesis at level α guarantees that {\displaystyle \mathrm {PCER} \leq \alpha } (this is testing without any correction for multiplicity)
FWER (the family-wise error rate) is defined as: {\displaystyle \mathrm {FWER} =P(V\geq 1)}. There are numerous procedures that control the FWER.
{\displaystyle k{\text{-FWER}}} (the tail probability of the False Discovery Proportion), suggested by Lehmann and Romano, van der Laan et al, is defined as: {\displaystyle k{\text{-FWER}}=P(V\geq k)\leq q}.
{\displaystyle k{\text{-FDR}}} (also called the generalized FDR by Sarkar in 2007) is defined as: {\displaystyle k{\text{-FDR}}=E\left({\frac {V}{R}}I_{(V>k)}\right)\leq q}.
{\displaystyle Q'} is the proportion of false discoveries among the discoveries, suggested by Soric in 1989, and is defined as: {\displaystyle Q'={\frac {E[V]}{R}}}. This is a mixture of expectations and realizations, and has the problem of control for {\displaystyle m_{0}=m}.
{\displaystyle \mathrm {FDR} _{-1}} (or Fdr) was used by Benjamini and Hochberg, and later called "Fdr" by Efron (2008) and earlier. It is defined as: {\displaystyle \mathrm {FDR} _{-1}=Fdr={\frac {E[V]}{E[R]}}}. This error rate cannot be strictly controlled because it is 1 when {\displaystyle m=m_{0}}.
{\displaystyle \mathrm {FDR} _{+1}} was used by Benjamini and Hochberg, and later called "pFDR" by Storey (2002). It is defined as: {\displaystyle \mathrm {FDR} _{+1}=pFDR=E\left[\left.{\frac {V}{R}}\right|R>0\right]}. This error rate cannot be strictly controlled because it is 1 when {\displaystyle m=m_{0}}. JD Storey promoted the use of the pFDR (a close relative of the FDR), and the q-value, which can be viewed as the proportion of false discoveries that we expect in an ordered table of results, up to the current line. Storey also promoted the idea (also mentioned by BH) that the actual number of null hypotheses, {\displaystyle m_{0}}, can be estimated from the shape of the probability distribution curve. For example, in a set of data where all null hypotheses are true, 50% of results will yield probabilities between 0.5 and 1.0 (and the other 50% will yield probabilities between 0.0 and 0.5). We can therefore estimate {\displaystyle m_{0}} by finding the number of results with {\displaystyle P>0.5} and doubling it, and this permits refinement of our calculation of the pFDR at any particular cut-off in the data-set.
False exceedance rate (the tail probability of FDP), defined as: {\displaystyle \mathrm {P} \left({\frac {V}{R}}>q\right)}
{\displaystyle W{\text{-FDR}}} (weighted FDR). Associated with each hypothesis i is a weight {\displaystyle w_{i}\geq 0}; the weights capture importance/price. The W-FDR is defined as: {\displaystyle W{\text{-FDR}}=E\left({\frac {\sum w_{i}V_{i}}{\sum w_{i}R_{i}}}\right)}.
FDCR (False Discovery Cost Rate). Stemming from statistical process control: associated with each hypothesis i is a cost {\displaystyle c_{i}} and with the intersection hypothesis {\displaystyle H_{00}} a cost {\displaystyle c_{0}}. The motivation is that stopping a production process may incur a fixed cost. It is defined as: {\displaystyle \mathrm {FDCR} =E\left({\frac {c_{0}V_{0}+\sum c_{i}V_{i}}{c_{0}R_{0}+\sum c_{i}R_{i}}}\right)}
PFER (per-family error rate) is defined as: {\displaystyle \mathrm {PFER} =E(V)}.
FNR (false non-discovery rate) by Sarkar; Genovese and Wasserman is defined as: {\displaystyle \mathrm {FNR} =E\left({\frac {T}{m-R}}\right)=E\left({\frac {m-m_{0}-(R-V)}{m-R}}\right)}
{\displaystyle \mathrm {FDR} (z)} is defined as: {\displaystyle \mathrm {FDR} (z)={\frac {p_{0}F_{0}(z)}{F(z)}}}
The local fdr is defined as: {\displaystyle \mathrm {FDR} ={\frac {p_{0}f_{0}(z)}{f(z)}}}
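The Storey-style estimate of m0 mentioned in the list above (doubling the count of p-values above 0.5) can be illustrated with a small simulation; the counts of true nulls and alternatives below are arbitrary illustrative choices:

```python
import random

# Under true nulls, p-values are Uniform(0, 1), so about half exceed
# 0.5; doubling the count of p-values above 0.5 therefore estimates
# the number of true null hypotheses m0.
def estimate_m0(p_values):
    return 2 * sum(1 for p in p_values if p > 0.5)

random.seed(7)
m0_true, m1_true = 900, 100
null_p = [random.random() for _ in range(m0_true)]        # uniform nulls
alt_p = [random.random() * 0.01 for _ in range(m1_true)]  # concentrated near 0
m0_hat = estimate_m0(null_p + alt_p)
print(m0_hat)  # close to 900
assert abs(m0_hat - m0_true) < 100
```

The estimate works because the alternatives contribute essentially nothing above 0.5; when some alternatives produce large p-values, the estimate is biased upward, which makes the resulting pFDR estimate conservative.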
=== False coverage rate ===
The false coverage rate (FCR) is, in a sense, the FDR analog to the confidence interval. FCR indicates the average rate of false coverage, namely, not covering the true parameters, among the selected intervals. The FCR gives a simultaneous coverage at a {\displaystyle 1-\alpha } level for all of the parameters considered in the problem. Intervals with simultaneous coverage probability 1−q can control the FCR to be bounded by q. There are many FCR procedures such as: Bonferroni-Selected–Bonferroni-Adjusted, Adjusted BH-Selected CIs (Benjamini and Yekutieli (2005)), Bayes FCR (Zhao and Hwang (2012)), and other Bayes methods.
=== Bayesian approaches ===
Connections have been made between the FDR and Bayesian approaches (including empirical Bayes methods), thresholding wavelets coefficients and model selection, and generalizing the confidence interval into the false coverage statement rate (FCR).
== Software implementations ==
False Discovery Rate Analysis in R – Lists links with popular R packages
False Discovery Rate Analysis in Python – Python implementations of false discovery rate procedures
== See also ==
Positive predictive value
== References ==
== External links ==
The False Discovery Rate - Yoav Benjamini, Ruth Heller & Daniel Yekutieli - Rousseeuw Prize for Statistics ceremony lecture from 2024.
False Discovery Rate: Corrected & Adjusted P-values - MATLAB/GNU Octave implementation and discussion on the difference between corrected and adjusted FDR p-values.
Understanding False Discovery Rate - blog post
StatQuest: FDR and the Benjamini-Hochberg Method clearly explained on YouTube
Understanding False Discovery Rate - Includes Excel VBA code to implement it, and an example in cell line development
Probability is a branch of mathematics and statistics concerning events and numerical descriptions of how likely they are to occur. The probability of an event is a number between 0 and 1; the larger the probability, the more likely an event is to occur. This number is often expressed as a percentage (%), ranging from 0% to 100%. A simple example is the tossing of a fair (unbiased) coin. Since the coin is fair, the two outcomes ("heads" and "tails") are both equally probable; the probability of "heads" equals the probability of "tails"; and since no other outcomes are possible, the probability of either "heads" or "tails" is 1/2 (which could also be written as 0.5 or 50%).
These concepts have been given an axiomatic mathematical formalization in probability theory, which is used widely in areas of study such as statistics, mathematics, science, finance, gambling, artificial intelligence, machine learning, computer science, game theory, and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
== Etymology ==
The word probability derives from the Latin probabilitas, which can also mean "probity", a measure of the authority of a witness in a legal case in Europe, and often correlated with the witness's nobility. In a sense, this differs much from the modern meaning of probability, which in contrast is a measure of the weight of empirical evidence, and is arrived at from inductive reasoning and statistical inference.
== Interpretations ==
When dealing with random experiments – i.e., experiments that are random and well-defined – in a purely theoretical setting (like tossing a coin), probabilities can be numerically described by the number of desired outcomes, divided by the total number of all outcomes. This is referred to as theoretical probability (in contrast to empirical probability, dealing with probabilities in the context of real experiments). The probability is a number between 0 and 1; the larger the probability, the more likely the desired outcome is to occur. For example, tossing a coin twice will yield "head-head", "head-tail", "tail-head", and "tail-tail" outcomes. The probability of getting an outcome of "head-head" is 1 out of 4 outcomes, or, in numerical terms, 1/4, 0.25 or 25%. The probability of getting an outcome of at least one head is 3 out of 4, or 0.75, and this event is more likely to occur. However, when it comes to practical application, there are two major competing categories of probability interpretations, whose adherents hold different views about the fundamental nature of probability:
Objectivists assign numbers to describe some objective or physical state of affairs. The most popular version of objective probability is frequentist probability, which claims that the probability of a random event denotes the relative frequency of occurrence of an experiment's outcome when the experiment is repeated indefinitely. This interpretation considers probability to be the relative frequency "in the long run" of outcomes. A modification of this is propensity probability, which interprets probability as the tendency of some experiment to yield a certain outcome, even if it is performed only once.
Subjectivists assign numbers per subjective probability, that is, as a degree of belief. The degree of belief has been interpreted as "the price at which you would buy or sell a bet that pays 1 unit of utility if E, 0 if not E", although that interpretation is not universally agreed upon. The most popular version of subjective probability is Bayesian probability, which includes expert knowledge as well as experimental data to produce probabilities. The expert knowledge is represented by some (subjective) prior probability distribution. These data are incorporated in a likelihood function. The product of the prior and the likelihood, when normalized, results in a posterior probability distribution that incorporates all the information known to date. By Aumann's agreement theorem, Bayesian agents whose prior beliefs are similar will end up with similar posterior beliefs. However, sufficiently different priors can lead to different conclusions, regardless of how much information the agents share.
== History ==
The scientific study of probability is a modern development of mathematics. Gambling shows that there has been an interest in quantifying the ideas of probability throughout history, but exact mathematical descriptions arose much later. There are reasons for the slow development of the mathematics of probability. Whereas games of chance provided the impetus for the mathematical study of probability, fundamental issues are still obscured by superstitions.
According to Richard Jeffrey, "Before the middle of the seventeenth century, the term 'probable' (Latin probabilis) meant approvable, and was applied in that sense, univocally, to opinion and to action. A probable action or opinion was one such as sensible people would undertake or hold, in the circumstances." However, in legal contexts especially, 'probable' could also apply to propositions for which there was good evidence.
The sixteenth-century Italian polymath Gerolamo Cardano demonstrated the efficacy of defining odds as the ratio of favourable to unfavourable outcomes (which implies that the probability of an event is given by the ratio of favourable outcomes to the total number of possible outcomes).
Aside from the elementary work by Cardano, the doctrine of probabilities dates to the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657) gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi (posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as a branch of mathematics. See Ian Hacking's The Emergence of Probability and James Franklin's The Science of Conjecture for histories of the early development of the very concept of mathematical probability.
The theory of errors may be traced back to Roger Cotes's Opera Miscellanea (posthumous, 1722), but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that positive and negative errors are equally probable, and that certain assignable limits define the range of all errors. Simpson also discusses continuous errors and describes a probability curve.
The first two laws of error that were proposed both originated with Pierre-Simon Laplace. The first law was published in 1774, and stated that the frequency of an error could be expressed as an exponential function of the numerical magnitude of the error – disregarding sign. The second law of error was proposed in 1778 by Laplace, and stated that the frequency of the error is an exponential function of the square of the error. The second law of error is called the normal distribution or the Gauss law. "It is difficult historically to attribute that law to Gauss, who in spite of his well-known precocity had probably not made this discovery before he was two years old."
Daniel Bernoulli (1778) introduced the principle of the maximum product of the probabilities of a system of concurrent errors.
Adrien-Marie Legendre (1805) developed the method of least squares, and introduced it in his Nouvelles méthodes pour la détermination des orbites des comètes (New Methods for Determining the Orbits of Comets). In ignorance of Legendre's contribution, an Irish-American writer, Robert Adrain, editor of "The Analyst" (1808), first deduced the law of facility of error,
{\displaystyle \phi (x)=ce^{-h^{2}x^{2}}}
where {\displaystyle h} is a constant depending on precision of observation, and {\displaystyle c}
is a scale factor ensuring that the area under the curve equals 1. He gave two proofs, the second being essentially the same as John Herschel's (1850). Gauss gave the first proof that seems to have been known in Europe (the third after Adrain's) in 1809. Further proofs were given by Laplace (1810, 1812), Gauss (1823), James Ivory (1825, 1826), Hagen (1837), Friedrich Bessel (1838), W.F. Donkin (1844, 1856), and Morgan Crofton (1870). Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni Schiaparelli (1875). Peters's (1856) formula for r, the probable error of a single observation, is well known.
In the nineteenth century, authors on the general theory included Laplace, Sylvestre Lacroix (1816), Littrow (1833), Adolphe Quetelet (1853), Richard Dedekind (1860), Helmert (1872), Hermann Laurent (1873), Liagre, Didion and Karl Pearson. Augustus De Morgan and George Boole improved the exposition of the theory.
In 1906, Andrey Markov introduced the notion of Markov chains, which played an important role in stochastic processes theory and its applications. The modern theory of probability based on measure theory was developed by Andrey Kolmogorov in 1931.
On the geometric side, contributors to The Educational Times included Miller, Crofton, McColl, Wolstenholme, Watson, and Artemas Martin. See integral geometry for more information.
== Theory ==
Like other theories, the theory of probability is a representation of its concepts in formal terms – that is, in terms that can be considered separately from their meaning. These formal terms are manipulated by the rules of mathematics and logic, and any results are interpreted or translated back into the problem domain.
There have been at least two successful attempts to formalize probability, namely the Kolmogorov formulation and the Cox formulation. In Kolmogorov's formulation (see also probability space), sets are interpreted as events and probability as a measure on a class of sets. In Cox's theorem, probability is taken as a primitive (i.e., not further analyzed), and the emphasis is on constructing a consistent assignment of probability values to propositions. In both cases, the laws of probability are the same, except for technical details.
There are other methods for quantifying uncertainty, such as the Dempster–Shafer theory or possibility theory, but those are essentially different and not compatible with the usually-understood laws of probability.
== Applications ==
Probability theory is applied in everyday life in risk assessment and modeling. The insurance industry and markets use actuarial science to determine pricing and make trading decisions. Governments apply probabilistic methods in environmental regulation, entitlement analysis, and financial regulation.
An example of the use of probability theory in equity trading is the effect of the perceived probability of any widespread Middle East conflict on oil prices, which have ripple effects in the economy as a whole. An assessment by a commodity trader that a war is more likely can send that commodity's prices up or down, and signals other traders of that opinion. Accordingly, the probabilities are neither assessed independently nor necessarily rationally. The theory of behavioral finance emerged to describe the effect of such groupthink on pricing, on policy, and on peace and conflict.
In addition to financial assessment, probability can be used to analyze trends in biology (e.g., disease spread) as well as ecology (e.g., biological Punnett squares). As with finance, risk assessment can be used as a statistical tool to calculate the likelihood of undesirable events occurring, and can assist with implementing protocols to avoid encountering such circumstances. Probability is used to design games of chance so that casinos can make a guaranteed profit, yet provide payouts to players that are frequent enough to encourage continued play.
Another significant application of probability theory in everyday life is reliability. Many consumer products, such as automobiles and consumer electronics, use reliability theory in product design to reduce the probability of failure. Failure probability may influence a manufacturer's decisions on a product's warranty.
The cache language model and other statistical language models that are used in natural language processing are also examples of applications of probability theory.
== Mathematical treatment ==
Consider an experiment that can produce a number of results. The collection of all possible results is called the sample space of the experiment, sometimes denoted as
{\displaystyle \Omega }
. The power set of the sample space is formed by considering all different collections of possible results. For example, rolling a die can produce six possible results. One collection of possible results gives an odd number on the die. Thus, the subset {1,3,5} is an element of the power set of the sample space of dice rolls. These collections are called "events". In this case, {1,3,5} is the event that the die falls on some odd number. If the results that actually occur fall in a given event, the event is said to have occurred.
A probability is a way of assigning every event a value between zero and one, with the requirement that the event made up of all possible results (in our example, the event {1,2,3,4,5,6}) is assigned a value of one. To qualify as a probability, the assignment of values must satisfy the requirement that for any collection of mutually exclusive events (events with no common results, such as the events {1,6}, {3}, and {2,4}), the probability that at least one of the events will occur is given by the sum of the probabilities of all the individual events.
The probability of an event A is written as
{\displaystyle P(A)}, {\displaystyle p(A)}, or {\displaystyle {\text{Pr}}(A)}
. This mathematical definition of probability can extend to infinite sample spaces, and even uncountable sample spaces, using the concept of a measure.
The opposite or complement of an event A is the event [not A] (that is, the event of A not occurring), often denoted as
{\displaystyle A',A^{c}}, {\displaystyle {\overline {A}},A^{\complement },\neg A}, or {\displaystyle {\sim }A}
; its probability is given by P(not A) = 1 − P(A). As an example, the chance of not rolling a six on a six-sided die is 1 – (chance of rolling a six) = 1 − 1/6 = 5/6. For a more comprehensive treatment, see Complementary event.
If two events A and B occur on a single performance of an experiment, this is called the intersection or joint probability of A and B, denoted as
{\displaystyle P(A\cap B).}
=== Independent events ===
If two events, A and B are independent then the joint probability is
{\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=P(A)P(B).}
For example, if two coins are flipped, then the chance of both being heads is
{\displaystyle {\tfrac {1}{2}}\times {\tfrac {1}{2}}={\tfrac {1}{4}}.}
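A quick way to sanity-check the multiplication rule is to enumerate the sample space directly; a minimal Python sketch:

```python
from itertools import product

# Enumerate the four equally likely outcomes of two fair coin flips.
outcomes = list(product(["H", "T"], repeat=2))
p_both_heads = sum(1 for o in outcomes if o == ("H", "H")) / len(outcomes)
# 1/2 * 1/2 = 1/4
```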
=== Mutually exclusive events ===
If either event A or event B can occur but never both simultaneously, then they are called mutually exclusive events.
If two events are mutually exclusive, then the probability of both occurring is denoted as
{\displaystyle P(A\cap B)}
and
{\displaystyle P(A{\mbox{ and }}B)=P(A\cap B)=0}
If two events are mutually exclusive, then the probability of either occurring is denoted as
{\displaystyle P(A\cup B)}
and
{\displaystyle P(A{\mbox{ or }}B)=P(A\cup B)=P(A)+P(B)-P(A\cap B)=P(A)+P(B)-0=P(A)+P(B)}
For example, the chance of rolling a 1 or 2 on a six-sided die is
{\displaystyle P(1{\mbox{ or }}2)=P(1)+P(2)={\tfrac {1}{6}}+{\tfrac {1}{6}}={\tfrac {1}{3}}.}
=== Not (necessarily) mutually exclusive events ===
If the events are not (necessarily) mutually exclusive then
{\displaystyle P\left(A{\hbox{ or }}B\right)=P(A\cup B)=P\left(A\right)+P\left(B\right)-P\left(A{\mbox{ and }}B\right).}
Rewritten,
{\displaystyle P\left(A\cup B\right)=P\left(A\right)+P\left(B\right)-P\left(A\cap B\right)}
For example, when drawing a card from a deck of cards, the chance of getting a heart or a face card (J, Q, K) (or both) is
{\displaystyle {\tfrac {13}{52}}+{\tfrac {12}{52}}-{\tfrac {3}{52}}={\tfrac {11}{26}},}
since among the 52 cards of a deck, 13 are hearts, 12 are face cards, and 3 are both: here the possibilities included in the "3 that are both" are included in each of the "13 hearts" and the "12 face cards", but should only be counted once.
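The hearts-or-face-card count can be checked by enumerating a standard deck; a small sketch (the card encoding is illustrative):

```python
from fractions import Fraction
from itertools import product

ranks = [str(n) for n in range(2, 11)] + ["J", "Q", "K", "A"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = list(product(ranks, suits))  # 52 cards

hearts = {c for c in deck if c[1] == "hearts"}
faces = {c for c in deck if c[0] in {"J", "Q", "K"}}

# Direct count of the union vs. inclusion-exclusion:
p_union = Fraction(len(hearts | faces), len(deck))
p_incl_excl = Fraction(len(hearts) + len(faces) - len(hearts & faces), len(deck))
# Both equal 22/52 = 11/26.
```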
This can be expanded further for multiple not (necessarily) mutually exclusive events. For three events, this proceeds as follows:
{\displaystyle {\begin{aligned}P\left(A\cup B\cup C\right)=&P\left(\left(A\cup B\right)\cup C\right)\\=&P\left(A\cup B\right)+P\left(C\right)-P\left(\left(A\cup B\right)\cap C\right)\\=&P\left(A\right)+P\left(B\right)-P\left(A\cap B\right)+P\left(C\right)-P\left(\left(A\cap C\right)\cup \left(B\cap C\right)\right)\\=&P\left(A\right)+P\left(B\right)+P\left(C\right)-P\left(A\cap B\right)-\left(P\left(A\cap C\right)+P\left(B\cap C\right)-P\left(\left(A\cap C\right)\cap \left(B\cap C\right)\right)\right)\\P\left(A\cup B\cup C\right)=&P\left(A\right)+P\left(B\right)+P\left(C\right)-P\left(A\cap B\right)-P\left(A\cap C\right)-P\left(B\cap C\right)+P\left(A\cap B\cap C\right)\end{aligned}}}
It can be seen, then, that this pattern can be repeated for any number of events.
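The three-event expansion can be verified by direct counting on a small sample space; a sketch using arbitrarily chosen events on one die roll:

```python
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                 # fair six-sided die
A, B, C = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}  # arbitrary example events

def P(E):
    return Fraction(len(E), len(omega))

lhs = P(A | B | C)
rhs = (P(A) + P(B) + P(C)
       - P(A & B) - P(A & C) - P(B & C)
       + P(A & B & C))
# Both sides equal 5/6 for these events.
```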
=== Conditional probability ===
Conditional probability is the probability of some event A, given the occurrence of some other event B. Conditional probability is written
{\displaystyle P(A\mid B)}
, and is read "the probability of A, given B". It is defined by
{\displaystyle P(A\mid B)={\frac {P(A\cap B)}{P(B)}}\,}
If
{\displaystyle P(B)=0}
then
{\displaystyle P(A\mid B)}
is formally undefined by this expression. In this case
{\displaystyle A} and {\displaystyle B}
are independent, since
{\displaystyle P(A\cap B)=P(A)P(B)=0.}
However, it is possible to define a conditional probability for some zero-probability events, for example by using a σ-algebra of such events (such as those arising from a continuous random variable).
For example, in a bag of 2 red balls and 2 blue balls (4 balls in total), the probability of taking a red ball is
{\displaystyle 1/2;}
however, when taking a second ball, the probability of it being either a red ball or a blue ball depends on the ball previously taken. For example, if a red ball was taken, then the probability of picking a red ball again would be
{\displaystyle 1/3,}
since only 1 red and 2 blue balls would have been remaining. And if a blue ball was taken previously, the probability of taking a red ball will be
{\displaystyle 2/3.}
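The ball-drawing example lends itself to exhaustive enumeration; a sketch that computes both conditional probabilities from the definition P(A | B) = P(A ∩ B)/P(B):

```python
from fractions import Fraction
from itertools import permutations

# Bag of 2 red and 2 blue balls; enumerate the 12 equally likely ordered draws.
balls = ["R1", "R2", "B1", "B2"]
draws = list(permutations(balls, 2))

def cond_p(event, given):
    # P(event | given) = |event and given| / |given| over equally likely draws.
    return Fraction(sum(1 for d in draws if event(d) and given(d)),
                    sum(1 for d in draws if given(d)))

p_red_after_red = cond_p(lambda d: d[1][0] == "R", lambda d: d[0][0] == "R")
p_red_after_blue = cond_p(lambda d: d[1][0] == "R", lambda d: d[0][0] == "B")
# 1/3 and 2/3, matching the text.
```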
=== Inverse probability ===
In probability theory and applications, Bayes' rule relates the odds of event
{\displaystyle A_{1}} to event {\displaystyle A_{2},} before (prior to) and after (posterior to) conditioning on another event {\displaystyle B.}
The odds on
{\displaystyle A_{1}} to event {\displaystyle A_{2}} is simply the ratio of the probabilities of the two events. When arbitrarily many events {\displaystyle A}
are of interest, not just two, the rule can be rephrased as posterior is proportional to prior times likelihood,
{\displaystyle P(A|B)\propto P(A)P(B|A)}
where the proportionality symbol means that the left hand side is proportional to (i.e., equals a constant times) the right hand side as
{\displaystyle A} varies, for fixed or given {\displaystyle B}
(Lee, 2012; Bertsch McGrayne, 2012). In this form it goes back to Laplace (1774) and to Cournot (1843); see Fienberg (2005).
=== Summary of probabilities ===
== Relation to randomness and probability in quantum mechanics ==
In a deterministic universe, based on Newtonian concepts, there would be no probability if all conditions were known (Laplace's demon) (but there are situations in which sensitivity to initial conditions exceeds our ability to measure them, i.e. know them). In the case of a roulette wheel, if the force of the hand and the period of that force are known, the number on which the ball will stop would be a certainty (though as a practical matter, this would likely be true only of a roulette wheel that had not been exactly levelled – as Thomas A. Bass' Newtonian Casino revealed). This also assumes knowledge of inertia and friction of the wheel, weight, smoothness, and roundness of the ball, variations in hand speed during the turning, and so forth. A probabilistic description can thus be more useful than Newtonian mechanics for analyzing the pattern of outcomes of repeated rolls of a roulette wheel. Physicists face the same situation in the kinetic theory of gases, where the system, while deterministic in principle, is so complex (with the number of molecules typically the order of magnitude of the Avogadro constant 6.02×10²³) that only a statistical description of its properties is feasible.
Probability theory is required to describe quantum phenomena. A revolutionary discovery of early 20th century physics was the random character of all physical processes that occur at sub-atomic scales and are governed by the laws of quantum mechanics. The objective wave function evolves deterministically but, according to the Copenhagen interpretation, it deals with probabilities of observing, the outcome being explained by a wave function collapse when an observation is made. However, the loss of determinism for the sake of instrumentalism did not meet with universal approval. Albert Einstein famously remarked in a letter to Max Born: "I am convinced that God does not play dice". Like Einstein, Erwin Schrödinger, who discovered the wave function, believed quantum mechanics is a statistical approximation of an underlying deterministic reality. In some modern interpretations of the statistical mechanics of measurement, quantum decoherence is invoked to account for the appearance of subjectively probabilistic experimental outcomes.
== See also ==
Contingency
Equiprobability
Fuzzy logic
Heuristic (psychology)
== Notes ==
== References ==
== Bibliography ==
Kallenberg, O. (2005) Probabilistic Symmetries and Invariance Principles. Springer-Verlag, New York. 510 pp. ISBN 0-387-25115-4
Kallenberg, O. (2002) Foundations of Modern Probability, 2nd ed. Springer Series in Statistics. 650 pp. ISBN 0-387-95313-2
Olofsson, Peter (2005) Probability, Statistics, and Stochastic Processes, Wiley-Interscience. 504 pp ISBN 0-471-67969-0.
== External links ==
Virtual Laboratories in Probability and Statistics (Univ. of Ala.-Huntsville)
Probability on In Our Time at the BBC
Probability and Statistics EBook
Edwin Thompson Jaynes. Probability Theory: The Logic of Science. Preprint: Washington University, (1996). – HTML index with links to PostScript files and PDF (first three chapters)
People from the History of Probability and Statistics (Univ. of Southampton)
Probability and Statistics on the Earliest Uses Pages (Univ. of Southampton)
Earliest Uses of Symbols in Probability and Statistics on Earliest Uses of Various Mathematical Symbols
A tutorial on probability and Bayes' theorem devised for first-year Oxford University students
Introduction to Probability – eBook Archived 27 July 2011 at the Wayback Machine, by Charles Grinstead, Laurie Snell Source Archived 25 March 2012 at the Wayback Machine (GNU Free Documentation License)
(in English and Italian) Bruno de Finetti, Probabilità e induzione, Bologna, CLUEB, 1993. ISBN 88-8091-176-7 (digital version)
Richard Feynman's Lecture on probability.
In statistics, propagation of uncertainty (or propagation of error) is the effect of variables' uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function.
The uncertainty u can be expressed in a number of ways.
It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error (Δx)/x, which is usually written as a percentage.
Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval x ± u.
However, the most general way of characterizing uncertainty is by specifying its probability distribution.
If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.
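The quoted 68% figure is easy to reproduce by simulation; a sketch (the mean, standard deviation, and sample size are arbitrary choices):

```python
import random

random.seed(0)
mu, sigma, n = 10.0, 2.0, 100_000
# Fraction of normal draws falling within one standard deviation of the mean.
inside = sum(1 for _ in range(n) if abs(random.gauss(mu, sigma) - mu) <= sigma)
coverage = inside / n
# coverage is close to 0.683.
```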
If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.
In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer resulting quantity probability distribution/statistics, are sampling techniques from the Monte Carlo method family. For very large datasets or complex functions, the calculation of the error propagation may be very expensive so that a surrogate model or a parallel computing strategy may be necessary.
In some particular cases, the uncertainty propagation calculation can be done through simplistic algebraic procedures. Some of these scenarios are described below.
== Linear combinations ==
Let
{\displaystyle \{f_{k}(x_{1},x_{2},\dots ,x_{n})\}}
be a set of m functions, which are linear combinations of
{\displaystyle n}
variables
{\displaystyle x_{1},x_{2},\dots ,x_{n}}
with combination coefficients
{\displaystyle A_{k1},A_{k2},\dots ,A_{kn},(k=1,\dots ,m)}
:
{\displaystyle f_{k}=\sum _{i=1}^{n}A_{ki}x_{i},}
or in matrix notation,
{\displaystyle \mathbf {f} =\mathbf {A} \mathbf {x} .}
Also let the variance–covariance matrix of x = (x1, ..., xn) be denoted by
{\displaystyle {\boldsymbol {\Sigma }}^{x}}
and let the mean value be denoted by
{\displaystyle {\boldsymbol {\mu }}}
:
{\displaystyle {\begin{aligned}{\boldsymbol {\Sigma }}^{x}=\operatorname {E} [(\mathbf {x} -{\boldsymbol {\mu }})\otimes (\mathbf {x} -{\boldsymbol {\mu }})]&={\begin{pmatrix}\sigma _{1}^{2}&\sigma _{12}&\sigma _{13}&\cdots \\\sigma _{21}&\sigma _{2}^{2}&\sigma _{23}&\cdots \\\sigma _{31}&\sigma _{32}&\sigma _{3}^{2}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{pmatrix}}\\[1ex]&={\begin{pmatrix}{\Sigma }_{11}^{x}&{\Sigma }_{12}^{x}&{\Sigma }_{13}^{x}&\cdots \\{\Sigma }_{21}^{x}&{\Sigma }_{22}^{x}&{\Sigma }_{23}^{x}&\cdots \\{\Sigma }_{31}^{x}&{\Sigma }_{32}^{x}&{\Sigma }_{33}^{x}&\cdots \\\vdots &\vdots &\vdots &\ddots \end{pmatrix}}.\end{aligned}}}
Here {\displaystyle \otimes } is the outer product.
Then, the variance–covariance matrix
{\displaystyle {\boldsymbol {\Sigma }}^{f}}
of f is given by
{\displaystyle {\begin{aligned}{\boldsymbol {\Sigma }}^{f}&=\operatorname {E} \left[(\mathbf {f} -\operatorname {E} [\mathbf {f} ])\otimes (\mathbf {f} -\operatorname {E} [\mathbf {f} ])\right]=\operatorname {E} \left[\mathbf {A} (\mathbf {x} -{\boldsymbol {\mu }})\otimes \mathbf {A} (\mathbf {x} -{\boldsymbol {\mu }})\right]\\[1ex]&=\mathbf {A} \operatorname {E} \left[(\mathbf {x} -{\boldsymbol {\mu }})\otimes (\mathbf {x} -{\boldsymbol {\mu }})\right]\mathbf {A} ^{\mathrm {T} }=\mathbf {A} {\boldsymbol {\Sigma }}^{x}\mathbf {A} ^{\mathrm {T} }.\end{aligned}}}
In component notation, the equation
{\displaystyle {\boldsymbol {\Sigma }}^{f}=\mathbf {A} {\boldsymbol {\Sigma }}^{x}\mathbf {A} ^{\mathrm {T} }}
reads
{\displaystyle \Sigma _{ij}^{f}=\sum _{k}^{n}\sum _{l}^{n}A_{ik}{\Sigma }_{kl}^{x}A_{jl}.}
This is the most general expression for the propagation of error from one set of variables onto another. When the errors on x are uncorrelated, the general expression simplifies to
{\displaystyle \Sigma _{ij}^{f}=\sum _{k}^{n}A_{ik}\Sigma _{k}^{x}A_{jk},}
where
{\displaystyle \Sigma _{k}^{x}=\sigma _{x_{k}}^{2}}
is the variance of the k-th element of the x vector.
Note that even though the errors on x may be uncorrelated, the errors on f are in general correlated; in other words, even if
{\displaystyle {\boldsymbol {\Sigma }}^{x}}
is a diagonal matrix,
{\displaystyle {\boldsymbol {\Sigma }}^{f}}
is in general a full matrix.
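This effect is visible in a tiny worked example, computed here in plain Python (the matrices are arbitrary illustrations): a diagonal input covariance mapped through f = Ax generally yields a non-diagonal output covariance.

```python
def matmul(X, Y):
    # Plain-Python matrix product.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

def transpose(X):
    return [list(r) for r in zip(*X)]

A = [[1.0, 1.0],
     [1.0, -1.0]]       # f1 = x1 + x2, f2 = x1 - x2
Sigma_x = [[1.0, 0.0],
           [0.0, 4.0]]  # uncorrelated inputs with variances 1 and 4

Sigma_f = matmul(matmul(A, Sigma_x), transpose(A))
# Sigma_f = [[5, -3], [-3, 5]]: the errors on f1 and f2 are correlated.
```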
The general expressions for a scalar-valued function f are a little simpler (here a is a row vector):
{\displaystyle f=\sum _{i}^{n}a_{i}x_{i}=\mathbf {ax} ,}
{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}\sum _{j}^{n}a_{i}\Sigma _{ij}^{x}a_{j}=\mathbf {a} {\boldsymbol {\Sigma }}^{x}\mathbf {a} ^{\mathrm {T} }.}
Each covariance term
{\displaystyle \sigma _{ij}}
can be expressed in terms of the correlation coefficient
{\displaystyle \rho _{ij}}
by
{\displaystyle \sigma _{ij}=\rho _{ij}\sigma _{i}\sigma _{j}}
, so that an alternative expression for the variance of f is
{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}a_{i}^{2}\sigma _{i}^{2}+\sum _{i}^{n}\sum _{j(j\neq i)}^{n}a_{i}a_{j}\rho _{ij}\sigma _{i}\sigma _{j}.}
In the case that the variables in x are uncorrelated, this simplifies further to
{\displaystyle \sigma _{f}^{2}=\sum _{i}^{n}a_{i}^{2}\sigma _{i}^{2}.}
In the simple case of identical coefficients and variances, we find
{\displaystyle \sigma _{f}={\sqrt {n}}\,|a|\sigma .}
For the arithmetic mean,
{\displaystyle a=1/n}
, the result is the standard error of the mean:
{\displaystyle \sigma _{f}={\frac {\sigma }{\sqrt {n}}}.}
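The standard-error result follows directly from the uncorrelated variance formula with coefficients a_i = 1/n; a numeric check (the values of n and σ are arbitrary):

```python
from math import sqrt

n, sigma = 25, 2.0
a = [1.0 / n] * n                          # coefficients of the arithmetic mean
var_f = sum(ai**2 * sigma**2 for ai in a)  # uncorrelated, equal variances
sigma_f = sqrt(var_f)
# Matches sigma / sqrt(n) = 0.4.
```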
== Non-linear combinations ==
When f is a set of non-linear combinations of the variables x, an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function f must usually be linearised by approximation to a first-order Taylor series expansion, though in some cases exact formulae can be derived that do not depend on the expansion, as is the case for the exact variance of products. The Taylor expansion would be:
{\displaystyle f_{k}\approx f_{k}^{0}+\sum _{i}^{n}{\frac {\partial f_{k}}{\partial {x_{i}}}}x_{i}}
where {\displaystyle \partial f_{k}/\partial x_{i}} denotes the partial derivative of fk with respect to the i-th variable, evaluated at the mean value of all components of vector x. Or in matrix notation,
{\displaystyle \mathrm {f} \approx \mathrm {f} ^{0}+\mathrm {J} \mathrm {x} \,}
where J is the Jacobian matrix. Since f0 is a constant it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients, Aik and Ajk, by the partial derivatives, {\displaystyle {\frac {\partial f_{k}}{\partial x_{i}}}} and {\displaystyle {\frac {\partial f_{k}}{\partial x_{j}}}}. In matrix notation,
{\displaystyle \mathrm {\Sigma } ^{\mathrm {f} }=\mathrm {J} \mathrm {\Sigma } ^{\mathrm {x} }\mathrm {J} ^{\top }.}
That is, the Jacobian of the function is used to transform the rows and columns of the variance-covariance matrix of the argument.
Note this is equivalent to the matrix expression for the linear case with {\displaystyle \mathrm {J=A} }.
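A minimal sketch of the Jacobian-based propagation, assuming a toy two-output map f(x, y) = (x + y, xy) and made-up input uncertainties:

```python
import math

# First-order (Jacobian) propagation Sigma_f = J Sigma_x J^T for the
# illustrative map f(x, y) = (x + y, x*y), evaluated at the means below.
x, y = 3.0, 4.0
sx, sy = 0.1, 0.2                      # uncorrelated input std devs

J = [[1.0, 1.0],                       # d(x+y)/dx, d(x+y)/dy
     [y,   x  ]]                       # d(xy)/dx,  d(xy)/dy
var_in = [sx**2, sy**2]

Sigma_f = [[sum(J[i][k] * var_in[k] * J[j][k] for k in range(2))
            for j in range(2)] for i in range(2)]

print(math.sqrt(Sigma_f[1][1]))        # propagated uncertainty of x*y
```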
=== Simplification ===
Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula:
{\displaystyle s_{f}={\sqrt {\left({\frac {\partial f}{\partial x}}\right)^{2}s_{x}^{2}+\left({\frac {\partial f}{\partial y}}\right)^{2}s_{y}^{2}+\left({\frac {\partial f}{\partial z}}\right)^{2}s_{z}^{2}+\cdots }}}
where {\displaystyle s_{f}} represents the standard deviation of the function {\displaystyle f}, {\displaystyle s_{x}} represents the standard deviation of {\displaystyle x}, {\displaystyle s_{y}} represents the standard deviation of {\displaystyle y}, and so forth.
This formula is based on the linear characteristics of the gradient of {\displaystyle f} and therefore it is a good estimation for the standard deviation of {\displaystyle f} as long as {\displaystyle s_{x},s_{y},s_{z},\ldots } are small enough. Specifically, the linear approximation of {\displaystyle f} has to be close to {\displaystyle f} inside a neighbourhood of radius {\displaystyle s_{x},s_{y},s_{z},\ldots }.
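The variance formula lends itself to a small generic helper that estimates the partial derivatives numerically; the function, evaluation point, and standard deviations below are illustrative:

```python
import math

# Generic form of the uncorrelated variance formula, using numerical
# partial derivatives; f, the point, and the standard deviations are
# illustrative.
def propagate(f, point, sigmas, h=1e-6):
    """sqrt(sum_i (df/dx_i)^2 s_i^2), partials by central differences."""
    var = 0.0
    for i, s in enumerate(sigmas):
        up = list(point); up[i] += h
        dn = list(point); dn[i] -= h
        dfdx = (f(up) - f(dn)) / (2 * h)
        var += dfdx ** 2 * s ** 2
    return math.sqrt(var)

# Example: f(x, y) = x * y at (3, 4) with s_x = 0.1, s_y = 0.2.
print(propagate(lambda p: p[0] * p[1], [3.0, 4.0], [0.1, 0.2]))
```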
=== Example ===
Any non-linear differentiable function, {\displaystyle f(a,b)}, of two variables, {\displaystyle a} and {\displaystyle b}, can be expanded as
{\displaystyle f\approx f^{0}+{\frac {\partial f}{\partial a}}a+{\frac {\partial f}{\partial b}}b.}
If we take the variance on both sides and use the formula for the variance of a linear combination of variables,
{\displaystyle \operatorname {Var} (aX+bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\operatorname {Cov} (X,Y),}
then we obtain
{\displaystyle \sigma _{f}^{2}\approx \left|{\frac {\partial f}{\partial a}}\right|^{2}\sigma _{a}^{2}+\left|{\frac {\partial f}{\partial b}}\right|^{2}\sigma _{b}^{2}+2{\frac {\partial f}{\partial a}}{\frac {\partial f}{\partial b}}\sigma _{ab},}
where {\displaystyle \sigma _{f}} is the standard deviation of the function {\displaystyle f}, {\displaystyle \sigma _{a}} is the standard deviation of {\displaystyle a}, {\displaystyle \sigma _{b}} is the standard deviation of {\displaystyle b}, and {\displaystyle \sigma _{ab}=\sigma _{a}\sigma _{b}\rho _{ab}} is the covariance between {\displaystyle a} and {\displaystyle b}.
In the particular case that {\displaystyle f=ab}, {\displaystyle {\frac {\partial f}{\partial a}}=b} and {\displaystyle {\frac {\partial f}{\partial b}}=a}. Then
{\displaystyle \sigma _{f}^{2}\approx b^{2}\sigma _{a}^{2}+a^{2}\sigma _{b}^{2}+2ab\,\sigma _{ab}}
or
{\displaystyle \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{a}}{a}}\right)^{2}+\left({\frac {\sigma _{b}}{b}}\right)^{2}+2\left({\frac {\sigma _{a}}{a}}\right)\left({\frac {\sigma _{b}}{b}}\right)\rho _{ab}}
where {\displaystyle \rho _{ab}} is the correlation between {\displaystyle a} and {\displaystyle b}.
When the variables {\displaystyle a} and {\displaystyle b} are uncorrelated, {\displaystyle \rho _{ab}=0}. Then
{\displaystyle \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{a}}{a}}\right)^{2}+\left({\frac {\sigma _{b}}{b}}\right)^{2}.}
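In words: for an uncorrelated product, relative errors add in quadrature. A quick numerical sketch with made-up values:

```python
import math

# Relative errors of an uncorrelated product add in quadrature:
# (s_f/f)^2 ~ (s_a/a)^2 + (s_b/b)^2.  Values are illustrative.
a, sa = 10.0, 0.5       # 5% relative error
b, sb = 20.0, 0.4       # 2% relative error
rel_f = math.sqrt((sa / a) ** 2 + (sb / b) ** 2)
print(rel_f)            # about 0.054, i.e. roughly 5.4%
```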
=== Caveats and warnings ===
Error estimates for non-linear functions are biased on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+x) increases as x increases, since the expansion to x is a good approximation only when x is near zero.
For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation; see Uncertainty quantification for details.
==== Reciprocal and shifted reciprocal ====
In the special case of the inverse or reciprocal {\displaystyle 1/B}, where {\displaystyle B=N(0,1)} follows a standard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.
However, in the slightly more general case of a shifted reciprocal function {\displaystyle 1/(p-B)} for {\displaystyle B=N(\mu ,\sigma )} following a general normal distribution, mean and variance statistics do exist in a principal value sense if the difference between the pole {\displaystyle p} and the mean {\displaystyle \mu } is real-valued.
==== Ratios ====
Ratios are also problematic; normal approximations exist under certain conditions.
== Example formulae ==
This table shows the variances and standard deviations of simple functions of the real variables {\displaystyle A,B} with standard deviations {\displaystyle \sigma _{A},\sigma _{B},} covariance {\displaystyle \sigma _{AB}=\rho _{AB}\sigma _{A}\sigma _{B},} and correlation {\displaystyle \rho _{AB}.}
The real-valued coefficients {\displaystyle a} and {\displaystyle b} are assumed exactly known (deterministic), i.e., {\displaystyle \sigma _{a}=\sigma _{b}=0.}
In the right-hand columns of the table, {\displaystyle A} and {\displaystyle B} are expectation values, and {\displaystyle f} is the value of the function calculated at those values.
For uncorrelated variables ({\displaystyle \rho _{AB}=0}, {\displaystyle \sigma _{AB}=0}) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives
{\displaystyle f=ABC;\qquad \left({\frac {\sigma _{f}}{f}}\right)^{2}\approx \left({\frac {\sigma _{A}}{A}}\right)^{2}+\left({\frac {\sigma _{B}}{B}}\right)^{2}+\left({\frac {\sigma _{C}}{C}}\right)^{2}.}
For the case {\displaystyle f=AB} we also have Goodman's expression for the exact variance: for the uncorrelated case it is
{\displaystyle \operatorname {V} [XY]=\operatorname {E} [X]^{2}\operatorname {V} [Y]+\operatorname {E} [Y]^{2}\operatorname {V} [X]+\operatorname {E} \left[\left(X-\operatorname {E} (X)\right)^{2}\left(Y-\operatorname {E} (Y)\right)^{2}\right],}
and therefore we have
{\displaystyle \sigma _{f}^{2}=A^{2}\sigma _{B}^{2}+B^{2}\sigma _{A}^{2}+\sigma _{A}^{2}\sigma _{B}^{2}.}
=== Effect of correlation on differences ===
If A and B are uncorrelated, their difference A − B will have more variance than either of them. An increasing positive correlation ({\displaystyle \rho _{AB}\to 1}) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation ({\displaystyle \rho _{AB}\to -1}) will further increase the variance of the difference, compared to the uncorrelated case.
For example, the self-subtraction f = A − A has zero variance {\displaystyle \sigma _{f}^{2}=0} only if the variate is perfectly autocorrelated ({\displaystyle \rho _{A}=1}). If A is uncorrelated, {\displaystyle \rho _{A}=0,} then the output variance is twice the input variance, {\displaystyle \sigma _{f}^{2}=2\sigma _{A}^{2}.} And if A is perfectly anticorrelated, {\displaystyle \rho _{A}=-1,} then the input variance is quadrupled in the output, {\displaystyle \sigma _{f}^{2}=4\sigma _{A}^{2}} (notice {\displaystyle 1-\rho _{A}=2} for f = aA − aA in the table above).
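These three regimes can be read off directly from Var(A − B) = σA² + σB² − 2ρσAσB; the equal unit variances below are an illustrative choice:

```python
# Variance of a difference A - B as a function of the correlation:
# Var(A - B) = sA^2 + sB^2 - 2 rho sA sB.  Equal unit variances are
# an illustrative choice.
sA = sB = 1.0
for rho in (-1.0, 0.0, 1.0):
    var_diff = sA**2 + sB**2 - 2 * rho * sA * sB
    print(rho, var_diff)   # -1 -> 4.0, 0 -> 2.0, +1 -> 0.0
```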
== Example calculations ==
=== Inverse tangent function ===
We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.
Define {\displaystyle f(x)=\arctan(x),} where {\displaystyle \Delta _{x}} is the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is
{\displaystyle {\frac {df}{dx}}={\frac {1}{1+x^{2}}}.}
Therefore, our propagated uncertainty is
{\displaystyle \Delta _{f}\approx {\frac {\Delta _{x}}{1+x^{2}}},}
where {\displaystyle \Delta _{f}} is the absolute propagated uncertainty.
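A minimal numerical sketch of this propagation, with an illustrative measurement and uncertainty:

```python
# Propagated uncertainty for f(x) = arctan(x): Delta_f ~ Delta_x/(1+x^2).
# The measured value and its uncertainty are illustrative.
x, dx = 2.0, 0.05
df = dx / (1 + x ** 2)
print(df)              # 0.01
```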
=== Resistance measurement ===
A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, R = V / I.
Given the measured variables with uncertainties, I ± σI and V ± σV, and neglecting their possible correlation, the uncertainty in the computed quantity, σR, is:
{\displaystyle \sigma _{R}\approx {\sqrt {\sigma _{V}^{2}\left({\frac {1}{I}}\right)^{2}+\sigma _{I}^{2}\left({\frac {-V}{I^{2}}}\right)^{2}}}=R{\sqrt {\left({\frac {\sigma _{V}}{V}}\right)^{2}+\left({\frac {\sigma _{I}}{I}}\right)^{2}}}.}
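The same calculation in code, with illustrative measured values:

```python
import math

# Uncertainty on R = V/I from uncorrelated voltage and current errors;
# the measured values are illustrative.
V, sV = 10.0, 0.1      # volts
I, sI = 2.0, 0.04      # amperes
R = V / I
sR = R * math.sqrt((sV / V) ** 2 + (sI / I) ** 2)
print(R, sR)           # 5.0 ohms, roughly 0.11 ohms
```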
== See also ==
== References ==
== Further reading ==
Bevington, Philip R.; Robinson, D. Keith (2002), Data Reduction and Error Analysis for the Physical Sciences (3rd ed.), McGraw-Hill, ISBN 978-0-07-119926-1
Fornasini, Paolo (2008), The uncertainty in physical measurements: an introduction to data analysis in the physics laboratory, Springer, p. 161, ISBN 978-0-387-78649-0
Meyer, Stuart L. (1975), Data Analysis for Scientists and Engineers, Wiley, ISBN 978-0-471-59995-1
Peralta, M. (2012), Propagation Of Errors: How To Mathematically Predict Measurement Errors, CreateSpace
Rouaud, M. (2013), Probability, Statistics and Estimation: Propagation of Uncertainties in Experimental Measurement (PDF) (short ed.)
Taylor, J. R. (1997), An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd ed.), University Science Books
Wang, C. M.; Iyer, Hari K. (2005-09-07). "On higher-order corrections for propagating uncertainties". Metrologia. 42 (5): 406–410. Bibcode:2005Metro..42..406W. doi:10.1088/0026-1394/42/5/011. ISSN 0026-1394. S2CID 122841691.
== External links ==
A detailed discussion of measurements and the propagation of uncertainty explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple significance arithmetic
GUM, Guide to the Expression of Uncertainty in Measurement
EPFL An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx'
uncertainties package, a program/library for transparently performing calculations with uncertainties (and error correlations).
soerp package, a Python program/library for transparently performing *second-order* calculations with uncertainties (and error correlations).
Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data - Supplement 2 to the "Guide to the Expression of Uncertainty in Measurement" - Extension to Any Number of Output Quantities (PDF) (Technical report). JCGM. Retrieved 13 February 2013.
Uncertainty Calculator Propagate uncertainty for any expression
A language model is a model of the human brain's ability to produce natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval.
Large language models (LLMs), currently their most advanced form, are predominantly based on transformers trained on larger datasets (frequently using words scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded purely statistical models, such as the word n-gram language model.
== History ==
Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars.
In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances.
In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes the meaning of the word in such a way that words that are closer in the vector space are expected to be similar in meaning; such representations also capture common relationships between pairs of words, like plurality or gender.
== Pure statistical models ==
In 1980, the first significant statistical language model was proposed, and during the decade IBM performed ‘Shannon-style’ experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
=== Models based on word n-grams ===
=== Exponential ===
Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is
{\displaystyle P(w_{m}\mid w_{1},\ldots ,w_{m-1})={\frac {1}{Z(w_{1},\ldots ,w_{m-1})}}\exp(a^{T}f(w_{1},\ldots ,w_{m}))}
where {\displaystyle Z(w_{1},\ldots ,w_{m-1})} is the partition function, {\displaystyle a} is the parameter vector, and {\displaystyle f(w_{1},\ldots ,w_{m})} is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on {\displaystyle a} or some form of regularization.
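A toy sketch of this model, with the indicator-style features and weights collapsed into one learned score (a · f) per word; the vocabulary and scores are invented for illustration:

```python
import math

# Sketch of a maximum-entropy language model:
# P(w | history) = exp(a . f(history, w)) / Z(history).
# Here a . f is reduced to one toy score per word for a fixed history.
vocab = ["the", "cat", "sat"]
scores = {"cat": 1.2, "sat": 0.3, "the": -0.5}

def prob(word):
    Z = sum(math.exp(s) for s in scores.values())   # partition function
    return math.exp(scores[word]) / Z

print(prob("cat"))
```

The normalisation by Z guarantees the probabilities over the vocabulary sum to one.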
The log-bilinear model is another example of an exponential language model.
=== Skip-gram model ===
== Neural models ==
=== Recurrent neural network ===
Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net.
=== Large language models ===
Although sometimes matching human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do.
== Evaluation and benchmarks ==
Evaluation of the quality of language models is mostly done by comparison to human created sample benchmarks created from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves.
Various data sets have been developed for use in evaluating language processing systems. These include:
Massive Multitask Language Understanding (MMLU)
Corpus of Linguistic Acceptability
GLUE benchmark
Microsoft Research Paraphrase Corpus
Multi-Genre Natural Language Inference
Question Natural Language Inference
Quora Question Pairs
Recognizing Textual Entailment
Semantic Textual Similarity Benchmark
SQuAD question answering Test
Stanford Sentiment Treebank
Winograd NLI
BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs
== See also ==
== References ==
== Further reading ==
A cache language model is a type of statistical language model. These occur in the natural language processing subfield of computer science and assign probabilities to given sequences of words by means of a probability distribution. Statistical language models are key components of speech recognition systems and of many machine translation systems: they tell such systems which possible output word sequences are probable and which are improbable. The particular characteristic of a cache language model is that it contains a cache component and assigns relatively high probabilities to words or word sequences that occur elsewhere in a given text. The primary, but by no means sole, use of cache language models is in speech recognition systems.
To understand why it is a good idea for a statistical language model to contain a cache component one might consider someone who is dictating a letter about elephants to a speech recognition system. Standard (non-cache) N-gram language models will assign a very low probability to the word "elephant" because it is a very rare word in English. If the speech recognition system does not contain a cache component, the person dictating the letter may be annoyed: each time the word "elephant" is spoken another sequence of words with a higher probability according to the N-gram language model may be recognized (e.g., "tell a plan"). These erroneous sequences will have to be deleted manually and replaced in the text by "elephant" each time "elephant" is spoken. If the system has a cache language model, "elephant" will still probably be misrecognized the first time it is spoken and will have to be entered into the text manually; however, from this point on the system is aware that "elephant" is likely to occur again – the estimated probability of occurrence of "elephant" has been increased, making it more likely that if it is spoken it will be recognized correctly. Once "elephant" has occurred several times, the system is likely to recognize it correctly every time it is spoken until the letter has been completely dictated. This increase in the probability assigned to the occurrence of "elephant" is an example of a consequence of machine learning and more specifically of pattern recognition.
There exist variants of the cache language model in which not only single words but also multi-word sequences that have occurred previously are assigned higher probabilities (e.g., if "San Francisco" occurred near the beginning of the text subsequent instances of it would be assigned a higher probability).
The cache language model was first proposed in a paper published in 1990, after which the IBM speech-recognition group experimented with the concept. The group found that implementation of a form of cache language model yielded a 24% drop in word-error rates once the first few hundred words of a document had been dictated. A detailed survey of language modeling techniques concluded that the cache language model was one of the few new language modeling techniques that yielded improvements over the standard N-gram approach: "Our caching results show that caching is by far the most useful technique for perplexity reduction at small and medium training data sizes".
The development of the cache language model has generated considerable interest among those concerned with computational linguistics in general and statistical natural language processing in particular: recently, there has been interest in applying the cache language model in the field of statistical machine translation.
The success of the cache language model in improving word prediction rests on the human tendency to use words in a "bursty" fashion: when one is discussing a certain topic in a certain context, the frequency with which one uses certain words will be quite different from their frequencies when one is discussing other topics in other contexts. The traditional N-gram language models, which rely entirely on information from a very small number (four, three, or two) of words preceding the word to which a probability is to be assigned, do not adequately model this "burstiness".
Recently, the cache language model concept – originally conceived for the N-gram statistical language model paradigm – has been adapted for use in the neural paradigm. For instance, recent work on continuous cache language models in the recurrent neural network (RNN) setting has applied the cache concept to much larger contexts than before, yielding significant reductions in perplexity. Another recent line of research involves incorporating a cache component in a feed-forward neural language model (FN-LM) to achieve rapid domain adaptation.
== See also ==
Artificial intelligence
History of natural language processing
History of machine translation
Speech recognition
Statistical machine translation
== References ==
== Further reading ==
Jelinek, Frederick (1997). Statistical Methods for Speech Recognition. The MIT Press. ISBN 0-262-10066-5. Archived from the original on 2011-08-05. Retrieved 2011-09-24.
In mathematics, more precisely in measure theory, an atom is a measurable set that has positive measure and contains no set of smaller positive measure. A measure that has no atoms is called non-atomic or atomless.
== Definition ==
Given a measurable space {\displaystyle (X,\Sigma )} and a measure {\displaystyle \mu } on that space, a set {\displaystyle A\subset X} in {\displaystyle \Sigma } is called an atom if {\displaystyle \mu (A)>0} and for any measurable subset {\displaystyle B\subseteq A}, either {\displaystyle \mu (B)=0} or {\displaystyle \mu (B)=\mu (A)}.
The equivalence class of {\displaystyle A} is defined by
{\displaystyle [A]:=\{B\in \Sigma :\mu (A\Delta B)=0\},}
where {\displaystyle \Delta } is the symmetric difference operator. If {\displaystyle A} is an atom then all the subsets in {\displaystyle [A]} are atoms and {\displaystyle [A]} is called an atomic class. If {\displaystyle \mu } is a {\displaystyle \sigma }-finite measure, there are countably many atomic classes.
== Examples ==
Consider the set X = {1, 2, ..., 9, 10} and let the sigma-algebra {\displaystyle \Sigma } be the power set of X. Define the measure {\displaystyle \mu } of a set to be its cardinality, that is, the number of elements in the set. Then each of the singletons {i}, for i = 1, 2, ..., 9, 10, is an atom.
Consider the Lebesgue measure on the real line. This measure has no atoms.
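The first example can be verified by brute force: under the counting measure, every singleton is an atom, while any larger set is not:

```python
from itertools import chain, combinations

# Brute-force check that each singleton {i} is an atom of the counting
# measure on X = {1, ..., 10}: it has positive measure and every
# subset has measure 0 or the full measure.
X = list(range(1, 11))

def subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

def is_atom(A):
    mu = len  # counting measure
    return mu(A) > 0 and all(mu(B) in (0, mu(A)) for B in subsets(A))

print(all(is_atom((i,)) for i in X))       # True
print(is_atom((1, 2)))                     # False: {1} has measure 1
```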
== Atomic measures ==
A {\displaystyle \sigma }-finite measure {\displaystyle \mu } on a measurable space {\displaystyle (X,\Sigma )} is called atomic or purely atomic if every measurable set of positive measure contains an atom. This is equivalent to saying that there is a countable partition of {\displaystyle X} formed by atoms, up to a null set. The assumption of {\displaystyle \sigma }-finiteness is essential. Consider otherwise the space {\displaystyle (\mathbb {R} ,{\mathcal {P}}(\mathbb {R} ),\nu )}, where {\displaystyle \nu } denotes the counting measure. This space is atomic, with all atoms being the singletons, yet the space cannot be partitioned into the disjoint union of countably many disjoint atoms, {\textstyle \bigcup _{n=1}^{\infty }A_{n}}, and a null set {\displaystyle N}, since the countable union of singletons is a countable set, and the uncountability of the real numbers shows that the complement {\textstyle N=\mathbb {R} \setminus \bigcup _{n=1}^{\infty }A_{n}} would have to be uncountable, hence its {\displaystyle \nu }-measure would be infinite, in contradiction to it being a null set. The validity of the result for {\displaystyle \sigma }-finite spaces follows from the proof for finite measure spaces by observing that the countable union of countable unions is again a countable union, and that countable unions of null sets are null.
== Discrete measures ==
A {\displaystyle \sigma }-finite atomic measure {\displaystyle \mu } is called discrete if the intersection of the atoms of any atomic class is non-empty. This is equivalent to saying that {\displaystyle \mu } is the weighted sum of countably many Dirac measures; that is, there is a sequence {\displaystyle x_{1},x_{2},...} of points in {\displaystyle X}, and a sequence {\displaystyle c_{1},c_{2},...} of positive real numbers (the weights), such that {\textstyle \mu =\sum _{k=1}^{\infty }c_{k}\delta _{x_{k}}}, which means that {\textstyle \mu (A)=\sum _{k=1}^{\infty }c_{k}\delta _{x_{k}}(A)} for every {\displaystyle A\in \Sigma }. We can choose each point {\displaystyle x_{k}} to be a common point of the atoms in the {\displaystyle k}-th atomic class.
A discrete measure is atomic, but the converse implication fails: take {\displaystyle X=[0,1]}, {\displaystyle \Sigma } the {\displaystyle \sigma }-algebra of countable and co-countable subsets, {\displaystyle \mu =0} on countable subsets and {\displaystyle \mu =1} on co-countable subsets. Then there is a single atomic class, the one formed by the co-countable subsets. The measure {\displaystyle \mu } is atomic, but the intersection of the atoms in the unique atomic class is empty and {\displaystyle \mu } cannot be put as a sum of Dirac measures.
If every atom is equivalent to a singleton, then {\displaystyle \mu } is discrete if and only if it is atomic. In this case the {\displaystyle x_{k}} above are the atomic singletons, so they are unique. Any finite measure on a separable metric space provided with the Borel sets satisfies this condition.
== Non-atomic measures ==
A measure which has no atoms is called non-atomic or diffuse. In other words, a measure {\displaystyle \mu } is non-atomic if for any measurable set {\displaystyle A} with {\displaystyle \mu (A)>0} there exists a measurable subset {\displaystyle B} of {\displaystyle A} such that {\displaystyle \mu (A)>\mu (B)>0.}
A non-atomic measure with at least one positive value has an infinite number of distinct values, as starting with a set {\displaystyle A} with {\displaystyle \mu (A)>0} one can construct a decreasing sequence of measurable sets {\displaystyle A=A_{1}\supset A_{2}\supset A_{3}\supset \cdots } such that
{\displaystyle \mu (A)=\mu (A_{1})>\mu (A_{2})>\mu (A_{3})>\cdots >0.}
This may not be true for measures having atoms; see the first example above.
It turns out that non-atomic measures actually have a continuum of values. It can be proved that if {\displaystyle \mu } is a non-atomic measure and {\displaystyle A} is a measurable set with {\displaystyle \mu (A)>0,} then for any real number {\displaystyle b} satisfying {\displaystyle \mu (A)\geq b\geq 0} there exists a measurable subset {\displaystyle B} of {\displaystyle A} such that {\displaystyle \mu (B)=b.}
This theorem is due to Wacław Sierpiński. It is reminiscent of the intermediate value theorem for continuous functions.
Sketch of proof of Sierpiński's theorem on non-atomic measures. A slightly stronger statement, which however makes the proof easier, is that if {\displaystyle (X,\Sigma ,\mu )} is a non-atomic measure space and {\displaystyle \mu (X)=c,} there exists a function {\displaystyle S:[0,c]\to \Sigma } that is monotone with respect to inclusion, and a right-inverse to {\displaystyle \mu :\Sigma \to [0,c].} That is, there exists a one-parameter family of measurable sets {\displaystyle S(t)} such that for all {\displaystyle 0\leq t\leq t'\leq c}
{\displaystyle S(t)\subseteq S(t'),}
{\displaystyle \mu \left(S(t)\right)=t.}
The proof easily follows from Zorn's lemma applied to the set of all monotone partial sections to {\displaystyle \mu }:
{\displaystyle \Gamma :=\{S:D\to \Sigma \;:\;D\subseteq [0,c],\,S\;\mathrm {monotone} ,{\text{ for all }}t\in D\;(\mu (S(t))=t)\},}
ordered by inclusion of graphs, {\displaystyle \mathrm {graph} (S)\subseteq \mathrm {graph} (S').}
It is then standard to show that every chain in {\displaystyle \Gamma } has an upper bound in {\displaystyle \Gamma ,} and that any maximal element of {\displaystyle \Gamma } has domain {\displaystyle [0,c],} proving the claim.
== See also ==
Atom (order theory) — an analogous concept in order theory
Dirac delta function
Elementary event, also known as an atomic event
== Notes ==
== References ==
Bruckner, Andrew M.; Bruckner, Judith B.; Thomson, Brian S. (1997). Real analysis. Upper Saddle River, N.J.: Prentice-Hall. p. 108. ISBN 0-13-458886-X.
Butnariu, Dan; Klement, E. P. (1993). Triangular norm-based measures and games with fuzzy coalitions. Dordrecht: Kluwer Academic. p. 87. ISBN 0-7923-2369-6.
Dunford, Nelson; Schwartz, Jacob T. (1988). Linear Operators, Part 1. New York: John Wiley & Sons. ISBN 978-0-471-60848-6.
Kadets, Vladimir (2018). "A Course in Functional Analysis and Measure Theory". Universitext. doi:10.1007/978-3-319-92004-7. ISSN 0172-5939.
== External links ==
Atom at The Encyclopedia of Mathematics
Signal processing is an electrical engineering subfield that focuses on analyzing, modifying and synthesizing signals, such as sound, images, potential fields, seismic signals, altimetry processing, and scientific measurements. Signal processing techniques are used to optimize transmissions, improve digital storage efficiency, correct distorted signals, improve subjective video quality, and detect or pinpoint components of interest in a measured signal.
== History ==
According to Alan V. Oppenheim and Ronald W. Schafer, the principles of signal processing can be found in the classical numerical analysis techniques of the 17th century. They further state that the digital refinement of these techniques can be found in the digital control systems of the 1940s and 1950s.
In 1948, Claude Shannon wrote the influential paper "A Mathematical Theory of Communication" which was published in the Bell System Technical Journal. The paper laid the groundwork for later development of information communication systems and the processing of signals for transmission.
Signal processing matured and flourished in the 1960s and 1970s, and digital signal processing became widely used with specialized digital signal processor chips in the 1980s.
== Definition of a signal ==
A signal is a function
{\displaystyle x(t)}
, where this function is either
deterministic (then one speaks of a deterministic signal) or
a path
{\displaystyle (x_{t})_{t\in T}}
, a realization of a stochastic process
{\displaystyle (X_{t})_{t\in T}}.
== Categories ==
=== Analog ===
Analog signal processing is for signals that have not been digitized, as in most 20th-century radio, telephone, and television systems. This involves linear electronic circuits as well as nonlinear ones. The former are, for instance, passive filters, active filters, additive mixers, integrators, and delay lines. Nonlinear circuits include compandors, multipliers (frequency mixers, voltage-controlled amplifiers), voltage-controlled filters, voltage-controlled oscillators, and phase-locked loops.
=== Continuous time ===
Continuous-time signal processing is for signals that vary over a continuous domain (without considering some individual interrupted points).
The methods of signal processing include the time domain, frequency domain, and complex frequency domain. This technology mainly concerns the modeling of linear time-invariant continuous systems, the integral of a system's zero-state response, setting up the system function, and the continuous-time filtering of deterministic signals. For example, in the time domain, a continuous-time signal
{\displaystyle x(t)}
passing through a linear time-invariant filter/system denoted as
{\displaystyle h(t)}
, can be expressed at the output as
{\displaystyle y(t)=\int _{-\infty }^{\infty }h(\tau )x(t-\tau )\,d\tau }
In some contexts,
{\displaystyle h(t)}
is referred to as the impulse response of the system. The above convolution operation is conducted between the input and the system.
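In discrete time the same operation becomes a finite sum, y[n] = Σ_k h[k] x[n − k]; a minimal sketch in plain Python (the function name is illustrative):

```python
# Discrete-time analogue of y(t) = ∫ h(τ) x(t − τ) dτ:
# y[n] = Σ_k h[k] x[n − k], the convolution of input x with impulse response h.

def convolve(x, h):
    """Full linear convolution of two finite sequences."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

# A unit impulse through the system returns the impulse response itself.
response = convolve([1.0, 0.0, 0.0], [0.5, 0.3, 0.2])
```

As in the continuous case, feeding a unit impulse through the system recovers the impulse response.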
=== Discrete time ===
Discrete-time signal processing is for sampled signals, defined only at discrete points in time; such signals are quantized in time, but not in magnitude.
Analog discrete-time signal processing is a technology based on electronic devices such as sample and hold circuits, analog time-division multiplexers, analog delay lines and analog feedback shift registers. This technology was a predecessor of digital signal processing (see below), and is still used in advanced processing of gigahertz signals.
The concept of discrete-time signal processing also refers to a theoretical discipline that establishes a mathematical basis for digital signal processing, without taking quantization error into consideration.
=== Digital ===
Digital signal processing is the processing of digitized discrete-time sampled signals. Processing is done by general-purpose computers or by digital circuits such as ASICs, field-programmable gate arrays or specialized digital signal processors. Typical arithmetical operations include fixed-point and floating-point, real-valued and complex-valued, multiplication and addition. Other typical operations supported by the hardware are circular buffers and lookup tables. Examples of algorithms are the fast Fourier transform (FFT), finite impulse response (FIR) filter, Infinite impulse response (IIR) filter, and adaptive filters such as the Wiener and Kalman filters.
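One of the algorithms named above, the fast Fourier transform, can be sketched in a few lines; a minimal recursive radix-2 Cooley–Tukey implementation in plain Python (illustrative, not production code):

```python
import cmath

def fft(x):
    """Radix-2 Cooley–Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]  # twiddle factor times odd part
        out[k] = even[k] + tw
        out[k + n // 2] = even[k] - tw
    return out

# The DFT of a constant signal concentrates all energy in bin 0.
spectrum = fft([1, 1, 1, 1])
```

Splitting into even- and odd-indexed subsequences reduces the O(n²) direct DFT to O(n log n).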
=== Nonlinear ===
Nonlinear signal processing involves the analysis and processing of signals produced from nonlinear systems and can be in the time, frequency, or spatiotemporal domains. Nonlinear systems can produce highly complex behaviors including bifurcations, chaos, harmonics, and subharmonics which cannot be produced or analyzed using linear methods.
Polynomial signal processing is a type of non-linear signal processing, where polynomial systems may be interpreted as conceptually straightforward extensions of linear systems to the nonlinear case.
=== Statistical ===
Statistical signal processing is an approach which treats signals as stochastic processes, utilizing their statistical properties to perform signal processing tasks. Statistical techniques are widely used in signal processing applications. For example, one can model the probability distribution of noise incurred when photographing an image, and construct techniques based on this model to reduce the noise in the resulting image.
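The noise-model idea can be sketched as follows: assuming additive Gaussian noise with known spread, averaging repeated observations shrinks the noise (all values are illustrative):

```python
import random

# A constant true pixel value observed through additive Gaussian noise;
# averaging many frames reduces the noise in the estimate.
random.seed(0)
true_value = 10.0
frames = 400
noisy = [true_value + random.gauss(0.0, 2.0) for _ in range(frames)]
estimate = sum(noisy) / frames
# The estimator's standard deviation is sigma / sqrt(frames) = 0.1 here.
```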
=== Graph ===
Graph signal processing generalizes signal processing tasks to signals living on non-Euclidean domains whose structure can be captured by a weighted graph. Graph signal processing presents several key points such as signal sampling techniques, recovery techniques and time-varying techniques. Graph signal processing has been applied with success in the fields of image processing, computer vision
and sound anomaly detection.
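The graph Fourier transform underlying such techniques expands a vertex signal in the eigenbasis of the graph Laplacian; a minimal sketch, assuming NumPy is available (graph and signal values are illustrative):

```python
import numpy as np

# Path graph on 4 vertices: combinatorial Laplacian L = D − A.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

# Eigenvectors of L play the role of Fourier modes on the graph;
# eigenvalues play the role of (squared) frequencies.
eigvals, U = np.linalg.eigh(L)

signal = np.array([1.0, 2.0, 3.0, 4.0])
spectrum = U.T @ signal          # graph Fourier transform
recovered = U @ spectrum         # inverse transform
```

Since the Laplacian is symmetric, U is orthogonal and the inverse transform reconstructs the signal exactly.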
== Application fields ==
Audio signal processing – for electrical signals representing sound, such as speech or music
Image processing – in digital cameras, computers and various imaging systems
Video processing – for interpreting moving pictures
Wireless communication – waveform generations, demodulation, filtering, equalization
Control systems
Array processing – for processing signals from arrays of sensors
Process control – a variety of signals are used, including the industry standard 4-20 mA current loop
Seismology
Feature extraction, such as image understanding, semantic audio and speech recognition.
Quality improvement, such as noise reduction, image enhancement, and echo cancellation.
Source coding including audio compression, image compression, and video compression.
Genomic signal processing
In geophysics, signal processing is used to amplify the signal vs the noise within time-series measurements of geophysical data. Processing is conducted within the time domain or frequency domain, or both.
In communication systems, signal processing may occur at:
OSI layer 1 in the seven-layer OSI model, the physical layer (modulation, equalization, multiplexing, etc.);
OSI layer 2, the data link layer (forward error correction);
OSI layer 6, the presentation layer (source coding, including analog-to-digital conversion and data compression).
== Typical devices ==
Filters – for example analog (passive or active) or digital (FIR, IIR, frequency domain or stochastic filters, etc.)
Samplers and analog-to-digital converters for signal acquisition and reconstruction, which involves measuring a physical signal, storing or transferring it as digital signal, and possibly later rebuilding the original signal or an approximation thereof.
Digital signal processors (DSPs)
== Mathematical methods applied ==
Differential equations – for modeling system behavior, connecting input and output relations in linear time-invariant systems. For instance, a low-pass filter such as an RC circuit can be modeled as a differential equation in signal processing, which allows one to compute the continuous output signal as a function of the input or initial conditions.
Recurrence relations
Transform theory
Time-frequency analysis – for processing non-stationary signals
Linear canonical transformation
Spectral estimation – for determining the spectral content (i.e., the distribution of power over frequency) of a set of time series data points
Statistical signal processing – analyzing and extracting information from signals and noise based on their stochastic properties
Linear time-invariant system theory, and transform theory
Polynomial signal processing – analysis of systems which relate input and output using polynomials
System identification and classification
Calculus
Coding theory
Complex analysis
Vector spaces and Linear algebra
Functional analysis
Probability and stochastic processes
Detection theory
Estimation theory
Optimization
Numerical methods
Data mining – for statistical analysis of relations between large quantities of variables (in this context representing many physical signals), to extract previously unknown interesting patterns
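As a sketch of the differential-equations item above, the RC low-pass filter satisfies RC dV/dt = V_in − V; a forward-Euler discretization in plain Python (component values are illustrative):

```python
def rc_lowpass_step_response(rc, dt, steps):
    """Forward-Euler integration of RC dV/dt = Vin − V for a unit step input."""
    v = 0.0
    out = []
    for _ in range(steps):
        v += dt * (1.0 - v) / rc   # Vin = 1.0 (unit step)
        out.append(v)
    return out

# After about 5 time constants the output settles near the input level
# (the exact solution is 1 − exp(−t/RC)).
trace = rc_lowpass_step_response(rc=1.0, dt=0.001, steps=5000)
```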
== See also ==
Algebraic signal processing
Audio filter
Bounded variation
Digital image processing
Dynamic range compression, companding, limiting, and noise gating
Fourier transform
Information theory
Least-squares spectral analysis
Non-local means
Reverberation
Sensitivity (electronics)
Similarity (signal processing)
== References ==
== Further reading ==
Byrne, Charles (2014). Signal Processing: A Mathematical Approach. Taylor & Francis. doi:10.1201/b17672. ISBN 9780429158711.
P Stoica, R Moses (2005). Spectral Analysis of Signals (PDF). NJ: Prentice Hall.
Papoulis, Athanasios (1991). Probability, Random Variables, and Stochastic Processes (third ed.). McGraw-Hill. ISBN 0-07-100870-5.
Kainam Thomas Wong: Statistical Signal Processing lecture notes at the University of Waterloo, Canada.
Ali H. Sayed, Adaptive Filters, Wiley, NJ, 2008, ISBN 978-0-470-25388-5.
Thomas Kailath, Ali H. Sayed, and Babak Hassibi, Linear Estimation, Prentice-Hall, NJ, 2000, ISBN 978-0-13-022464-4.
== External links ==
Signal Processing for Communications – free online textbook by Paolo Prandoni and Martin Vetterli (2008)
Scientists and Engineers Guide to Digital Signal Processing – free online textbook by Stephen Smith
Julius O. Smith III: Spectral Audio Signal Processing – free online textbook
Graph Signal Processing Website – free online website by Thierry Bouwmans (2025) | Wikipedia/Signal_theory |
In the mathematical field of complex analysis, Nevanlinna theory is part of the
theory of meromorphic functions. It was devised in 1925, by Rolf Nevanlinna. Hermann Weyl called it "one of the few great mathematical events of (the twentieth) century." The theory describes the asymptotic distribution of solutions of the equation f(z) = a, as a varies. A fundamental tool is the Nevanlinna characteristic T(r, f) which measures the rate of growth of a meromorphic function.
Other main contributors in the first half of the 20th century were Lars Ahlfors, André Bloch, Henri Cartan, Edward Collingwood, Otto Frostman, Frithiof Nevanlinna, Henrik Selberg, Tatsujiro Shimizu, Oswald Teichmüller,
and Georges Valiron. In its original form, Nevanlinna theory deals with meromorphic functions of one complex variable defined in a disc |z| ≤ R or in the whole complex plane (R = ∞). Subsequent generalizations extended Nevanlinna theory to algebroid functions, holomorphic curves, holomorphic maps between complex manifolds of arbitrary dimension, quasiregular maps and minimal surfaces.
This article describes mainly the classical version for meromorphic functions of one variable, with emphasis on functions meromorphic in the complex plane. General references for this theory are Goldberg & Ostrovskii, Hayman and Lang (1987).
== Nevanlinna characteristic ==
=== Nevanlinna's original definition ===
Let f be a meromorphic function. For every r ≥ 0, let n(r,f) be the number of poles, counting multiplicity, of the meromorphic function f in the disc |z| ≤ r. Then define the Nevanlinna counting function by
{\displaystyle N(r,f)=\int \limits _{0}^{r}\left(n(t,f)-n(0,f)\right){\dfrac {dt}{t}}+n(0,f)\log r.\,}
This quantity measures the growth of the number of poles in the discs |z| ≤ r, as
r increases. Explicitly, let a1, a2, ..., an be the poles of ƒ in the punctured disc 0 < |z| ≤ r repeated according to multiplicity. Then n = n(r,f) - n(0,f), and
{\displaystyle N(r,f)=\sum _{k=1}^{n}\log \left({\frac {r}{|a_{k}|}}\right)+n(0,f)\log r.\,}
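The agreement of the integral and sum forms of N(r,f) can be checked numerically for a hypothetical pole configuration; since n(t,f) is a step function, the integral can be evaluated exactly:

```python
import math

# Poles of a hypothetical meromorphic f, given by their moduli |a_k|,
# all lying in 0 < |z| <= r (no pole at the origin, so n(0,f) = 0).
poles = [0.5, 1.5, 2.0]
r = 3.0

# Sum form: N(r,f) = sum_k log(r / |a_k|).
N_sum = sum(math.log(r / a) for a in poles)

# Integral form: N(r,f) = int_0^r n(t,f) dt/t, where n(t,f) is the number
# of poles of modulus <= t.  n is a step function jumping at each |a_k|,
# so the integral splits into exact pieces between consecutive jumps.
cuts = sorted(poles) + [r]
N_int = sum((i + 1) * math.log(cuts[i + 1] / cuts[i]) for i in range(len(poles)))

# Both forms give log 18 for this configuration.
```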
Let log+x = max(log x, 0). Then the proximity function is defined by
{\displaystyle m(r,f)={\frac {1}{2\pi }}\int _{0}^{2\pi }\log ^{+}\left|f(re^{i\theta })\right|d\theta .\,}
Finally, define the Nevanlinna characteristic by (cf. Jensen's formula for meromorphic functions)
{\displaystyle T(r,f)=m(r,f)+N(r,f).\,}
=== Ahlfors–Shimizu version ===
A second method of defining the Nevanlinna characteristic is based on the formula
{\displaystyle \int _{0}^{r}{\frac {dt}{t}}\left({\frac {1}{\pi }}\int _{|z|\leq t}{\frac {|f'|^{2}}{(1+|f|^{2})^{2}}}dm\right)=T(r,f)+O(1),\,}
where dm is the area element in the plane. The expression in the left hand side is called the
Ahlfors–Shimizu characteristic. The bounded term O(1) is not important in most questions.
The geometric meaning of the Ahlfors–Shimizu characteristic is the following. The inner integral is the spherical area of the image of the disc |z| ≤ t, counting multiplicity (that is, the parts of the Riemann sphere covered k times are counted k times). This area is divided by π, which is the area of the whole Riemann sphere. The result can be interpreted as the average number of sheets in the covering of the Riemann sphere by the disc |z| ≤ t. This average covering number is then integrated with respect to t with weight 1/t.
=== Properties ===
The role of the characteristic function in the theory of meromorphic functions in the plane is similar to that of
{\displaystyle \log M(r,f)=\log \max _{|z|\leq r}|f(z)|\,}
in the theory of entire functions. In fact, it is possible to directly compare T(r,f) and M(r,f) for an entire function:
{\displaystyle T(r,f)\leq \log ^{+}M(r,f)\,}
and
{\displaystyle \log M(r,f)\leq \left({\dfrac {R+r}{R-r}}\right)T(R,f),\,}
for any R > r.
If f is a rational function of degree d, then T(r,f) ~ d log r; in fact, T(r,f) = O(log r) if and only if f is a rational function.
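This asymptotic can be checked numerically for the entire function f(z) = z of degree 1, for which N(r,f) = 0 and hence T(r,f) = m(r,f) = log⁺ r; a sketch approximating the proximity integral by a Riemann sum:

```python
import cmath
import math

def proximity(f, r, n=20000):
    """m(r,f) = (1/2π) ∫_0^{2π} log⁺|f(r e^{iθ})| dθ, via a Riemann sum."""
    total = 0.0
    for k in range(n):
        theta = 2.0 * math.pi * k / n
        total += max(math.log(abs(f(r * cmath.exp(1j * theta)))), 0.0)
    return total / n

# f(z) = z is entire with no poles, so T(r,f) = m(r,f) = log r for r >= 1,
# matching d·log r with degree d = 1.
r = 100.0
T = proximity(lambda z: z, r)
```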
The order of a meromorphic function is defined by
{\displaystyle \rho (f)=\limsup _{r\rightarrow \infty }{\dfrac {\log ^{+}T(r,f)}{\log r}}.}
Functions of finite order constitute an important subclass which was much studied.
When the radius R of the disc |z| ≤ R, in which the meromorphic function is defined, is finite, the Nevanlinna characteristic may be bounded. Functions in a disc with bounded characteristic, also known as functions of bounded type, are exactly those functions that are ratios of bounded analytic functions. Functions of bounded type may also be so defined for another domain such as the upper half-plane.
== First fundamental theorem ==
Let a ∈ C, and define
{\displaystyle \quad N(r,a,f)=N\left(r,{\dfrac {1}{f-a}}\right),\quad m(r,a,f)=m\left(r,{\dfrac {1}{f-a}}\right).\,}
For a = ∞, we set N(r,∞,f) = N(r,f), m(r,∞,f) = m(r,f).
The First Fundamental Theorem of Nevanlinna theory states that for every a in the Riemann sphere,
{\displaystyle T(r,f)=N(r,a,f)+m(r,a,f)+O(1),\,}
where the bounded term O(1) may depend on f and a. For non-constant meromorphic functions in the plane, T(r, f) tends to infinity as r tends to infinity,
so the First Fundamental Theorem says that the sum N(r,a,f) + m(r,a,f) tends to infinity at a rate which is independent of a. The First Fundamental Theorem is a simple consequence of Jensen's formula.
The characteristic function has the following properties of the degree:
{\displaystyle {\begin{array}{lcl}T(r,fg)&\leq &T(r,f)+T(r,g)+O(1),\\T(r,f+g)&\leq &T(r,f)+T(r,g)+O(1),\\T(r,1/f)&=&T(r,f)+O(1),\\T(r,f^{m})&=&mT(r,f)+O(1),\,\end{array}}}
where m is a natural number. The bounded term O(1) is negligible when T(r,f) tends to infinity. These algebraic properties are easily obtained from Nevanlinna's definition and Jensen's formula.
== Second fundamental theorem ==
We define N̄(r,f) in the same way as N(r,f), but without taking multiplicity into account (i.e. we only count the number of distinct poles). Then N1(r,f)
is defined as the Nevanlinna counting function of critical points of f, that is
{\displaystyle N_{1}(r,f)=2N(r,f)-N(r,f')+N\left(r,{\dfrac {1}{f'}}\right)=N(r,f)+{\overline {N}}(r,f)+N\left(r,{\dfrac {1}{f'}}\right).\,}
The Second Fundamental theorem says that for every k distinct values aj on the Riemann sphere, we have
{\displaystyle \sum _{j=1}^{k}m(r,a_{j},f)\leq 2T(r,f)-N_{1}(r,f)+S(r,f).\,}
This implies
{\displaystyle (k-2)T(r,f)\leq \sum _{j=1}^{k}{\overline {N}}(r,a_{j},f)+S(r,f),\,}
where S(r,f) is a "small error term".
For functions meromorphic in the plane,
S(r,f) = o(T(r,f)) outside a set of finite length, i.e. the error term is small in comparison with the characteristic for "most" values of r. Much better estimates of the error term are known, but André Bloch conjectured and Hayman proved that one cannot dispense with an exceptional set.
The Second Fundamental Theorem allows one to give an upper bound for the characteristic function in terms of N(r,a). For example, if f is a transcendental entire function, using the Second Fundamental Theorem with k = 3 and a3 = ∞, we obtain that f takes every value infinitely often, with at most two exceptions,
proving Picard's Theorem.
Nevanlinna's original proof of the Second Fundamental Theorem was based on the so-called Lemma on the logarithmic derivative, which says that m(r,f'/f) = S(r,f). A similar proof also applies to many multi-dimensional generalizations. There are also differential-geometric proofs which relate it to the Gauss–Bonnet theorem. The Second Fundamental Theorem can also be derived from the metric-topological theory of Ahlfors, which can be considered as an extension of the Riemann–Hurwitz formula to the coverings of infinite degree.
The proofs of Nevanlinna and Ahlfors indicate that the constant 2 in the Second Fundamental Theorem is related to the Euler characteristic of the Riemann sphere. However, there is a very different explanation of this 2, based on a deep analogy with number theory discovered by Charles Osgood and Paul Vojta. According to this analogy, 2 is the exponent in the Thue–Siegel–Roth theorem. For this analogy with number theory, we refer to the survey of Lang (1987) and the book by Ru (2001).
== Defect relation ==
The defect relation is one of the main corollaries from the Second Fundamental Theorem. The defect of a meromorphic function at the point a is defined by the formula
{\displaystyle \delta (a,f)=\liminf _{r\rightarrow \infty }{\frac {m(r,a,f)}{T(r,f)}}=1-\limsup _{r\rightarrow \infty }{\dfrac {N(r,a,f)}{T(r,f)}}.\,}
By the First Fundamental Theorem, 0 ≤ δ(a,f) ≤ 1, if T(r,f) tends to infinity (which is always the case for non-constant functions meromorphic in the plane). The points a for which δ(a,f) > 0 are called deficient values. The Second Fundamental Theorem implies that the set of deficient values of a function meromorphic in the plane is at most countable and the following relation holds:
{\displaystyle \sum _{a}\delta (a,f)\leq 2,\,}
where the summation is over all deficient values. This can be considered as a generalization of Picard's theorem. Many other Picard-type theorems can be derived from the Second Fundamental Theorem.
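For example, equality holds for f(z) = e^z, which omits the two values 0 and ∞: since e^z has no zeros or poles, N(r,0,f) = N(r,∞,f) = 0, while

```latex
T(r,e^{z}) = m(r,e^{z})
  = \frac{1}{2\pi}\int_{0}^{2\pi}\log^{+}\left|e^{re^{i\theta}}\right|d\theta
  = \frac{1}{2\pi}\int_{-\pi/2}^{\pi/2} r\cos\theta \,d\theta
  = \frac{r}{\pi},
```

so δ(0, e^z) = δ(∞, e^z) = 1 and the sum of deficiencies equals 2.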
As another corollary from the Second Fundamental Theorem, one can obtain that
{\displaystyle T(r,f')\leq 2T(r,f)+S(r,f),\,}
which generalizes the fact that a rational function of degree d has 2d − 2 < 2d critical points.
== Applications ==
Nevanlinna theory is useful in all questions where transcendental meromorphic functions arise,
such as the analytic theory of differential and functional equations, holomorphic dynamics, minimal surfaces, and
complex hyperbolic geometry, which deals with generalizations of Picard's theorem to higher
dimensions.
== Further development ==
A substantial part of the research in functions of one complex variable in the 20th century was focused on
Nevanlinna theory. One direction of this research was to find out whether the main conclusions of Nevanlinna
theory are best possible. For example, the Inverse Problem of Nevanlinna theory consists in
constructing meromorphic functions with pre-assigned deficiencies at given points. This was solved
by David Drasin in 1976. Another direction was concentrated on the study of various subclasses of the class
of all meromorphic functions in the plane. The most important subclass consists of functions of finite order.
It turns out that for this class, deficiencies are subject to several restrictions, in addition
to the defect relation (Norair Arakelyan, David Drasin, Albert Edrei, Alexandre Eremenko,
Wolfgang Fuchs,
Anatolii Goldberg, Walter Hayman, Joseph Miles, Daniel Shea,
Oswald Teichmüller, Alan Weitsman and others).
Henri Cartan, Joachim Weyl and Hermann Weyl, and Lars Ahlfors extended Nevanlinna theory to holomorphic curves. This extension is the main tool of complex hyperbolic geometry. Henrik Selberg and Georges Valiron extended
Nevanlinna theory to algebroid functions. Intensive research in the classical one-dimensional theory still continues.
== See also ==
Vojta's conjecture
== References ==
Lang, Serge (1987). Introduction to complex hyperbolic spaces. New York: Springer-Verlag. ISBN 978-0-387-96447-8. Zbl 0628.32001.
Lang, Serge (1997). Survey of Diophantine geometry. Springer-Verlag. pp. 192–204. ISBN 978-3-540-61223-0. Zbl 0869.11051.
Nevanlinna, Rolf (1925), "Zur Theorie der Meromorphen Funktionen", Acta Mathematica, 46 (1–2): 1–99, doi:10.1007/BF02543858, ISSN 0001-5962
Nevanlinna, Rolf (1970) [1936], Analytic functions, Die Grundlehren der mathematischen Wissenschaften, vol. 162, Berlin, New York: Springer-Verlag, ISBN 978-0-387-04834-5, MR 0279280
Ru, Min (2001). Nevanlinna Theory and Its Relation to Diophantine Approximation. World Scientific Publishing. ISBN 978-981-02-4402-6.
== Further reading ==
Bombieri, Enrico; Gubler, Walter (2006). "13. Nevanlinna Theory". Heights in Diophantine Geometry. New Mathematical Monographs. Vol. 4. Cambridge University Press. pp. 444–478. ISBN 978-0-521-71229-3. Zbl 1115.11034.
Kodaira, Kunihiko (2017). Nevanlinna Theory. SpringerBriefs in Mathematics. Springer-Verlag. ISBN 978-981-10-6786-0. Zbl 1386.30002.
Vojta, Paul (1987). Diophantine Approximations and Value Distribution Theory. Lecture Notes in Mathematics. Vol. 1239. Springer-Verlag. ISBN 978-3-540-17551-3. Zbl 0609.14011.
Vojta, Paul (2011). "Diophantine approximation and Nevanlinna theory". In Corvaja, Pietro; Gasbarri, Carlo (eds.). Arithmetic geometry. Lectures given at the C.I.M.E summer school, Cetraro, Italy, September 10--15, 2007. Lecture Notes in Mathematics. Vol. 2009. Berlin: Springer-Verlag. pp. 111–224. ISBN 978-3-642-15944-2. Zbl 1258.11076.
== External links ==
Petrenko, V.P. (2001) [1994], "Value-distribution theory", Encyclopedia of Mathematics, EMS Press
Petrenko, V.P. (2001) [1994], "Nevanlinna theorems", Encyclopedia of Mathematics, EMS Press | Wikipedia/Nevanlinna_theory |