In physics, chemistry, and related fields, master equations are used to describe the time evolution of a system that can be modeled as being in a probabilistic combination of states at any given time, and the switching between states is determined by a transition rate matrix. The equations are a set of differential equations – over time – of the probabilities that the system occupies each of the different states.
The name was proposed in 1940:
When the probabilities of the elementary processes are known, one can write down a continuity equation for W, from which all other equations can be derived and which we will call therefore the "master" equation.
== Introduction ==
A master equation is a phenomenological set of first-order differential equations describing the time evolution of (usually) the probability of a system to occupy each one of a discrete set of states with regard to a continuous time variable t. The most familiar form of a master equation is a matrix form:
{\displaystyle {\frac {d{\vec {P}}}{dt}}=\mathbf {A} {\vec {P}},}
where {\displaystyle {\vec {P}}} is a column vector, and {\displaystyle \mathbf {A} } is the matrix of connections. The way connections among states are made determines the dimension of the problem; it is either
a d-dimensional system (where d is 1,2,3,...), where any state is connected with exactly its 2d nearest neighbors, or
a network, where every pair of states may have a connection (depending on the network's properties).
When the connections are time-independent rate constants, the master equation represents a kinetic scheme, and the process is Markovian (any jumping time probability density function for state i is an exponential, with a rate equal to the value of the connection). When the connections depend on the actual time (i.e. the matrix {\displaystyle \mathbf {A} } depends on the time, {\displaystyle \mathbf {A} \rightarrow \mathbf {A} (t)}), the process is not stationary and the master equation reads
{\displaystyle {\frac {d{\vec {P}}}{dt}}=\mathbf {A} (t){\vec {P}}.}
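In the time-independent case the solution is P(t) = exp(A t) P(0). The sketch below (a hypothetical two-state kinetic scheme; the rates k12 and k21 are arbitrary illustrative choices, not from the text) propagates the probabilities via the eigendecomposition of A and shows relaxation toward the stationary distribution:

```python
import numpy as np

# Hypothetical two-state scheme: rate k21 for 1 -> 2 and k12 for 2 -> 1.
# Columns of A sum to zero, so d/dt (P1 + P2) = 0.
k12, k21 = 0.5, 1.0
A = np.array([[-k21,  k12],
              [ k21, -k12]])

def propagate(P0, t):
    """P(t) = exp(A t) P(0), computed via the eigendecomposition of A."""
    evals, V = np.linalg.eig(A)
    return (V @ np.diag(np.exp(evals * t)) @ np.linalg.inv(V) @ P0).real

P0 = np.array([1.0, 0.0])          # start fully in state 1
for t in (0.0, 1.0, 10.0):
    P = propagate(P0, t)
    print(t, P, P.sum())
# P(t) relaxes to the stationary distribution (k12, k21) / (k12 + k21).
```

The total probability stays 1 at every time because each column of A sums to zero.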
When the connections represent multi-exponential jumping-time probability density functions, the process is semi-Markovian, and the equation of motion is an integro-differential equation termed the generalized master equation:
{\displaystyle {\frac {d{\vec {P}}}{dt}}=\int _{0}^{t}\mathbf {A} (t-\tau ){\vec {P}}(\tau )\,d\tau .}
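A minimal numerical sketch of this equation, assuming an exponential memory kernel A(t) = K γ e^{−γt} (the two-state matrix K and the rate γ are illustrative choices, not from the text), approximates the convolution integral with a rectangle rule at each time step:

```python
import numpy as np

# Assumed memory kernel A(t) = K * gamma * exp(-gamma * t).
gamma = 2.0
K = np.array([[-1.0,  0.5],
              [ 1.0, -0.5]])          # columns sum to zero

dt, steps = 1e-3, 5000
P = np.zeros((steps + 1, 2))
P[0] = [1.0, 0.0]
for n in range(steps):
    t = n * dt
    taus = np.arange(n + 1) * dt
    weights = gamma * np.exp(-gamma * (t - taus))   # scalar part of A(t - tau)
    conv = (weights[:, None] * P[:n + 1]).sum(axis=0) * dt
    P[n + 1] = P[n] + dt * (K @ conv)               # dP/dt = convolution term

print("P(5) =", P[-1], " total probability =", P[-1].sum())
```

Because each column of K sums to zero, the discrete update conserves total probability exactly, and the long-time limit agrees with the stationary state of K itself.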
The transition rate matrix {\displaystyle \mathbf {A} } can also represent birth and death, meaning that probability is injected into (birth) or taken from (death) the system, and then the process is not in equilibrium.
When the transition rate matrix can be related to the probabilities, one obtains the Kolmogorov equations.
== Detailed description of the matrix and properties of the system ==
Let {\displaystyle \mathbf {A} } be the matrix describing the transition rates (also known as kinetic rates or reaction rates). As always, the first subscript represents the row, the second subscript the column. That is, the source is given by the second subscript, and the destination by the first subscript. This is the opposite of what one might expect, but is appropriate for conventional matrix multiplication.
For each state k, the increase in occupation probability depends on the contribution from all other states to k, and is given by:
{\displaystyle \sum _{\ell }A_{k\ell }P_{\ell },}
where {\displaystyle P_{\ell }} is the probability for the system to be in the state {\displaystyle \ell }, while the matrix {\displaystyle \mathbf {A} } is filled with a grid of transition-rate constants. Similarly, {\displaystyle P_{k}} contributes to the occupation of all other states {\displaystyle P_{\ell }}:
{\displaystyle \sum _{\ell }A_{\ell k}P_{k}.}
In probability theory, this identifies the evolution as a continuous-time Markov process, with the integrated master equation obeying a Chapman–Kolmogorov equation.
The master equation can be simplified so that the terms with ℓ = k do not appear in the summation. This allows calculations even if the main diagonal of {\displaystyle \mathbf {A} } is not defined or has been assigned an arbitrary value.
{\displaystyle {\frac {dP_{k}}{dt}}=\sum _{\ell }(A_{k\ell }P_{\ell })=\sum _{\ell \neq k}(A_{k\ell }P_{\ell })+A_{kk}P_{k}=\sum _{\ell \neq k}(A_{k\ell }P_{\ell }-A_{\ell k}P_{k}).}
The final equality arises from the fact that
{\displaystyle \sum _{\ell ,k}(A_{\ell k}P_{k})={\frac {d}{dt}}\sum _{\ell }(P_{\ell })=0}
because the summation over the probabilities {\displaystyle P_{\ell }} yields one, a constant function. Since this has to hold for any probability {\displaystyle {\vec {P}}} (and in particular for any probability of the form {\displaystyle P_{\ell }=\delta _{\ell k}} for some k) we get
{\displaystyle \sum _{\ell }(A_{\ell k})=0\qquad \forall k.}
Using this we can write the diagonal elements as
{\displaystyle A_{kk}=-\sum _{\ell \neq k}(A_{\ell k})\Rightarrow A_{kk}P_{k}=-\sum _{\ell \neq k}(A_{\ell k}P_{k}).}
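This relation means the diagonal is fully determined by the off-diagonal rates, and the matrix form A P then agrees with the gain-loss form of the equation. A short check on a random matrix (the rates are arbitrary illustrative values):

```python
import numpy as np

# Random off-diagonal transition rates; the diagonal is then fixed by
# A_kk = -sum_{l != k} A_lk so that every column sums to zero.
rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=0))       # consistent diagonal

P = rng.random(n)
P /= P.sum()                              # a normalized probability vector

off = A - np.diag(np.diag(A))             # off-diagonal part only
gain = off @ P                            # sum_{l != k} A_kl P_l
loss = off.sum(axis=0) * P                # sum_{l != k} A_lk P_k
print(np.allclose(A @ P, gain - loss))    # prints: True
```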
The master equation exhibits detailed balance if each of the terms of the summation disappears separately at equilibrium—i.e. if, for all states k and ℓ having equilibrium probabilities {\displaystyle \pi _{k}} and {\displaystyle \pi _{\ell }},
{\displaystyle A_{k\ell }\pi _{\ell }=A_{\ell k}\pi _{k}.}
These symmetry relations were proved on the basis of the time reversibility of microscopic dynamics (microscopic reversibility) as Onsager reciprocal relations.
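For a nearest-neighbour chain, detailed balance fixes the stationary distribution through the ratio of forward and backward rates. The sketch below (all rate values are assumptions chosen for illustration) builds a three-state rate matrix and verifies that every term of the summation vanishes separately:

```python
import numpy as np

# Forward rates w[k] (k -> k+1) and backward rates v[k] (k+1 -> k);
# detailed balance gives pi[k+1] / pi[k] = w[k] / v[k].
w = np.array([2.0, 1.0])      # rates for 0 -> 1 and 1 -> 2
v = np.array([1.0, 3.0])      # rates for 1 -> 0 and 2 -> 1

A = np.zeros((3, 3))
A[1, 0], A[2, 1] = w          # transitions up the chain
A[0, 1], A[1, 2] = v          # transitions down the chain
np.fill_diagonal(A, -A.sum(axis=0))

pi = np.cumprod(np.concatenate(([1.0], w / v)))
pi /= pi.sum()                # (3/11, 6/11, 2/11) for these rates

# Each term A_kl pi_l - A_lk pi_k vanishes separately:
for k in range(3):
    for l in range(3):
        assert np.isclose(A[k, l] * pi[l], A[l, k] * pi[k])
print("pi =", pi)
```

Since detailed balance implies stationarity, the same vector also satisfies A π = 0.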
== Examples of master equations ==
Many physical problems in classical and quantum mechanics, as well as problems in other sciences, can be reduced to the form of a master equation, thereby greatly simplifying the problem (see mathematical model).
The Lindblad equation in quantum mechanics is a generalization of the master equation describing the time evolution of a density matrix. Though the Lindblad equation is often referred to as a master equation, it is not one in the usual sense, as it governs not only the time evolution of probabilities (diagonal elements of the density matrix), but also of variables containing information about quantum coherence between the states of the system (non-diagonal elements of the density matrix).
Another special case of the master equation is the Fokker–Planck equation which describes the time evolution of a continuous probability distribution. Complicated master equations which resist analytic treatment can be cast into this form (under various approximations), by using approximation techniques such as the system size expansion.
Stochastic chemical kinetics provides yet another example of the use of the master equation. A master equation may be used to model a set of chemical reactions when the number of molecules of one or more species is small (of the order of 100 or 1000 molecules). The chemical master equation can also be solved for very large models, such as the DNA damage signal from the fungal pathogen Candida albicans.
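Trajectories of a chemical master equation are commonly sampled with the Gillespie stochastic simulation algorithm. The sketch below applies it to the simplest birth-death scheme (the rate constants are arbitrary illustrative choices): production at a constant rate and first-order degradation, whose chemical master equation has a Poisson stationary distribution with mean k_prod / k_deg.

```python
import numpy as np

rng = np.random.default_rng(1)
k_prod, k_deg = 10.0, 1.0     # production and degradation rate constants

def gillespie(t_end, n0=0):
    """Simulate the birth-death process; return total dwell time per state."""
    t, n = 0.0, n0
    time_in_state = {}
    while t < t_end:
        a_total = k_prod + k_deg * n          # total propensity
        dt = rng.exponential(1.0 / a_total)   # waiting time to next reaction
        time_in_state[n] = time_in_state.get(n, 0.0) + dt
        t += dt
        if rng.random() < k_prod / a_total:   # choose which reaction fires
            n += 1                            # production
        else:
            n -= 1                            # degradation

    return time_in_state

occ = gillespie(t_end=5000.0)
mean = sum(n * dt for n, dt in occ.items()) / sum(occ.values())
print("time-averaged copy number ~", mean)   # near k_prod / k_deg = 10
```

The time-weighted average (rather than an average over reaction events) is used because states with high total propensity are visited more often but for shorter intervals.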
== Quantum master equations ==
A quantum master equation is a generalization of the idea of a master equation. Rather than just a system of differential equations for a set of probabilities (which only constitutes the diagonal elements of a density matrix), quantum master equations are differential equations for the entire density matrix, including off-diagonal elements. A density matrix with only diagonal elements can be modeled as a classical random process, therefore such an "ordinary" master equation is considered classical. Off-diagonal elements represent quantum coherence which is a physical characteristic that is intrinsically quantum mechanical.
The Redfield equation and Lindblad equation are examples of approximate quantum master equations assumed to be Markovian. More accurate quantum master equations for certain applications include the polaron transformed quantum master equation, and the VPQME (variational polaron transformed quantum master equation).
== Theorem about eigenvalues of the matrix and time evolution ==
Because {\displaystyle \mathbf {A} } fulfills
{\displaystyle \sum _{\ell }A_{\ell k}=0\qquad \forall k}
and
{\displaystyle A_{\ell k}\geq 0\qquad \forall \ell \neq k,}
one can show that:
There is at least one eigenvector with a vanishing eigenvalue, exactly one if the graph of {\displaystyle \mathbf {A} } is strongly connected.
All other eigenvalues {\displaystyle \lambda } fulfill {\displaystyle 0>\operatorname {Re} \lambda \geq 2\operatorname {min} _{i}A_{ii}}.
All eigenvectors {\displaystyle v} with a non-zero eigenvalue fulfill {\textstyle \sum _{i}v_{i}=0}.
This has important consequences for the time evolution of a state.
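These three properties can be checked numerically on a random transition rate matrix (non-negative off-diagonal entries, columns summing to zero; the values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.random((n, n))                 # random non-negative rates
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=0))    # columns now sum to zero

evals, evecs = np.linalg.eig(A)
order = np.argsort(-evals.real)        # sort by real part, largest first
evals, evecs = evals[order], evecs[:, order]

print("eigenvalue closest to zero:", evals[0])      # the stationary mode
print("real parts of the rest:", evals[1:].real)    # all negative: decay
print("component sums of the other eigenvectors:",
      np.abs(evecs[:, 1:].sum(axis=0)))             # all ~0
```

A dense random matrix is strongly connected, so exactly one eigenvalue vanishes; the remaining modes decay, which is why any initial probability vector relaxes to the stationary state.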
== See also ==
Kolmogorov equations (Markov jump process)
Continuous-time Markov process
Quantum master equation
Fermi's golden rule
Detailed balance
Boltzmann's H-theorem
== References ==
van Kampen, N. G. (1981). Stochastic processes in physics and chemistry. North Holland. ISBN 978-0-444-52965-7.
Gardiner, C. W. (1985). Handbook of Stochastic Methods. Springer. ISBN 978-3-540-20882-2.
Risken, H. (1984). The Fokker-Planck Equation. Springer. ISBN 978-3-540-61530-9.
== External links ==
Timothy Jones, A Quantum Optics Derivation (2006) | Wikipedia/Master_equation |
In natural and social science research, a protocol is most commonly a predefined procedural method in the design and implementation of an experiment. Protocols are written whenever it is desirable to standardize a laboratory method to ensure successful replication of results by others in the same laboratory or by other laboratories. Additionally, and by extension, protocols have the advantage of facilitating the assessment of experimental results through peer review. In addition to detailed procedures, equipment, and instruments, protocols will also contain study objectives, reasoning for experimental design, reasoning for chosen sample sizes, safety precautions, and how results were calculated and reported, including statistical analysis and any rules for predefining and documenting excluded data to avoid bias.
Similarly, a protocol may refer to the procedural methods of health organizations, commercial laboratories, manufacturing plants, etc. to ensure their activities (e.g., blood testing at a hospital, testing of certified reference materials at a calibration laboratory, and manufacturing of transmission gears at a facility) are consistent to a specific standard, encouraging safe use and accurate results.
Finally, in the field of social science, a protocol may also refer to a "descriptive record" of observed events or a "sequence of behavior" of one or more organisms, recorded during or immediately after an activity (e.g., how an infant reacts to certain stimuli or how gorillas behave in natural habitat) to better identify "consistent patterns and cause-effect relationships." These protocols may take the form of hand-written journals or electronically documented media, including video and audio capture.
== Experiment and study protocol ==
Various fields of science, such as environmental science and clinical research, require the coordinated, standardized work of many participants. Additionally, any associated laboratory testing and experiment must be done in a way that is ethically sound and allows results to be replicated by others using the same methods and equipment. As such, rigorous and vetted testing and experimental protocols are required. In fact, such predefined protocols are an essential component of Good Laboratory Practice (GLP) and Good Clinical Practice (GCP) regulations. Protocols written for use by a specific laboratory may incorporate or reference standard operating procedures (SOP) governing general practices required by the laboratory. A protocol may also reference laws and regulations applicable to the procedures described. Formal protocols typically require approval by one or more individuals—including, for example, a laboratory director, study director, and/or independent ethics committee—before they are implemented for general use. Clearly defined protocols are also required by research funded by the National Institutes of Health.
In a clinical trial, the protocol is carefully designed to safeguard the health of the participants as well as answer specific research questions. A protocol describes what types of people may participate in the trial; the schedule of tests, procedures, medications, and dosages; and the length of the study. While in a clinical trial, participants following a protocol are seen regularly by research staff to monitor their health and to determine the safety and effectiveness of their treatment. Since 1996, clinical trials conducted are widely expected to conform to and report the information called for in the CONSORT Statement, which provides a framework for designing and reporting protocols. Though tailored to health and medicine, ideas in the CONSORT statement are broadly applicable to other fields where experimental research is used.
Protocols will often address:
safety: Safety precautions are a valuable addition to a protocol, and can range from requiring goggles to provisions for containment of microbes, environmental hazards, toxic substances, and volatile solvents. Procedural contingencies in the event of an accident may be included in a protocol or in a referenced SOP.
procedures: Procedural information may include not only safety procedures but also procedures for avoiding contamination, calibration of equipment, equipment testing, documentation, and all other relevant issues. These procedural protocols can be used by skeptics to invalidate any claimed results if flaws are found.
equipment used: Equipment testing and documentation includes all necessary specifications, calibrations, operating ranges, etc. Environmental factors such as temperature, humidity, barometric pressure, and other factors can often have effects on results. Documenting these factors should be a part of any good procedure.
reporting: A protocol may specify reporting requirements. Reporting requirements would include all elements of the experiment's design and protocols and any environmental factors or mechanical limitations that might affect the validity of the results.
calculations and statistics: Protocols for methods that produce numerical results generally include detailed formulas for calculation of results. A formula may also be included for preparation of reagents and other solutions required for the work. Methods of statistical analysis may be included to guide interpretation of the data.
bias: Many protocols include provisions for avoiding bias in the interpretation of results. Approximation error is common to all measurements. These errors can be absolute errors from limitations of the equipment or propagation errors from approximate numbers used in calculations. Sample bias is the most common and sometimes the hardest bias to quantify. Statisticians often go to great lengths to ensure that the sample used is representative. For instance, political polls are best when restricted to likely voters, and this is one of the reasons why web polls cannot be considered scientific. The sample size is another important concept and can lead to biased data simply due to an unlikely event. A sample size of 10, i.e., polling 10 people, will seldom give valid polling results. Standard deviation and variance are concepts used to quantify the likely relevance of a given sample size. The placebo effect and observer bias often require the blinding of patients and researchers as well as a control group.
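The sample-size point above can be illustrated with a toy simulation (the population fraction and sample sizes are assumptions, not from the text): repeated polls of a population where 55% support an option show the spread of the estimate shrinking roughly as 1/sqrt(n).

```python
import random

random.seed(0)

def poll(n):
    """Fraction of n simulated respondents who support the option."""
    return sum(random.random() < 0.55 for _ in range(n)) / n

spreads = {}
for n in (10, 100, 10_000):
    estimates = [poll(n) for _ in range(200)]       # 200 repeated polls
    mean = sum(estimates) / len(estimates)
    spreads[n] = (sum((e - mean) ** 2 for e in estimates)
                  / len(estimates)) ** 0.5          # standard deviation
    print(f"n={n:>6}: spread of poll estimates ~ {spreads[n]:.3f}")
```

A poll of 10 people routinely misses the true value by 15 percentage points or more, which is why such results are rarely valid.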
Best practice recommends publishing the protocol of a review before initiating it, to reduce the risk of unplanned research duplication and to enable transparency and consistency between methodology and protocol.
=== Blinded protocols ===
A protocol may require blinding to avoid bias. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments, and must be measured and reported. Reporting guidelines recommend that all studies assess and report unblinding. In practice, very few studies assess unblinding.
An experimenter may have latitude in defining procedures for blinding and controls, but may be required to justify those choices if the results are published or submitted to a regulatory agency. When it is known during the experiment which data were negative, there are often reasons to rationalize why those data should not be included; positive data are rarely rationalized the same way.
== See also ==
== References == | Wikipedia/Protocol_(science) |
Sham surgery (or placebo surgery) is a faked surgical intervention that omits the step thought to be therapeutically necessary.
In clinical trials of surgical interventions, sham surgery is an important scientific control. This is because it isolates the specific effects of the treatment as opposed to the incidental effects caused by anesthesia, the incisional trauma, pre- and postoperative care, and the patient's perception of having had a regular operation. Thus sham surgery serves an analogous purpose to placebo drugs, neutralizing biases such as the placebo effect.
== Human research ==
A number of studies done under Institutional Review Board-approved settings have delivered important and surprising results. With the progress in minimally invasive surgery, sham procedures can be more easily performed as the sham incision can be kept small similarly to the incision in the studied procedure.
A review of studies with sham surgery found 53 such studies: in 39 there was improvement with the sham operation and in 27 the sham procedure was as good as the real operation. Sham-controlled interventions have therefore identified interventions that are useless but had been believed by the medical community to be helpful based on studies without the use of sham surgery.
=== Examples ===
==== Cardiovascular diseases ====
In 1939 Fieschi introduced internal mammary ligation as a procedure to improve blood flow to the heart. Not until a controlled study was done two decades later could it be demonstrated that the procedure was only as effective as the sham surgery.
==== Central nervous system disease ====
In neurosurgery, cell-transplant surgical interventions were offered in many centers in the world for patients with Parkinson disease until sham-controlled experiments involving the drilling of burr holes into the skull demonstrated such interventions to be ineffective and possibly harmful. Subsequently, over 90% of surveyed investigators believed that future neurosurgical interventions (e.g. gene transfer therapies) should be evaluated by sham-controlled studies as these are superior to open-control designs, and have found it unethical to conduct an open-control study because the design is not strong enough to protect against the placebo effect and bias. Kim et al. point out that sham procedures can differ significantly in invasiveness, for instance in neurosurgical experiments the investigator may drill a burr hole to the dura mater only or enter the brain. In March 2013 a sham surgical study of a popular but biologically inexplicable venous balloon angioplasty procedure for multiple sclerosis showed the surgery was no better than placebo.
==== Orthopedic diseases ====
Moseley and coworkers studied the effect of arthroscopic surgery for osteoarthritis of the knee establishing two treatment groups and a sham-operated control group. They found that patients in the treatment group did no better than those in the control group. The fact that all three groups improved equally points to the placebo effect in surgical interventions.
In a 2016 study it was found that arthroscopic partial meniscectomy does not offer any benefit over sham surgery in relieving symptoms of knee locking or catching in patients with degenerative meniscal tears.
A randomised controlled trial was carried out to investigate the effectiveness of shoulder surgery to remove an acromial spur (bony protuberance on x-ray) in patients with shoulder pain. This found that improvement after sham surgery was as great as with real surgery.
A systematic review has identified a number of studies comparing orthopedic surgery to sham surgery. This demonstrates that it is possible to undertake such studies and that the findings are important.
== Animal research ==
Sham surgery has been widely used in surgical animal models. Historically, studies in animals also allowed the removal or alteration of an organ; using sham-operated animals as control, deductions could be made about the function of the organ. Sham interventions can also be performed as controls when new surgical procedures are developed.
For instance, a study documenting the effect of optic nerve section (ONS) on guinea pigs detailed its sham surgery as:
"In the case of optic nerve section, a small incision was then made in the dural sheath of the optic nerve to access the nerve fibers, which were teased free and cut. The same procedure was followed for animals undergoing sham surgery, except that the optic nerve was left intact after visualization."
== See also ==
Royal Commission on Animal Magnetism
== References == | Wikipedia/Sham_surgery |
The science of epidemiology has matured significantly from the times of Hippocrates, Semmelweis and John Snow. The techniques for gathering and analyzing epidemiological data vary depending on the type of disease being monitored but each study will have overarching similarities.
== Outline of the process of an epidemiological study ==
Establish that a problem exists
Full epidemiological studies are expensive and laborious undertakings. Before any study is started, a case must be made for the importance of the research.
Confirm the homogeneity of the events
Any conclusions drawn from inhomogeneous cases will be suspect. All events or occurrences of the disease must be true cases of the disease.
Collect all the events
It is important to collect as much information as possible about each event in order to inspect a large number of possible risk factors. The events may be collected from varied methods of epidemiological study or from censuses or hospital records.
The events can be characterized by incidence rates and prevalence rates.
Often, occurrence of a single disease entity is set as an event.
Given the inherently heterogeneous nature of any given disease (i.e., the unique disease principle), a single disease entity may be treated as a set of disease subtypes. This framework is well conceptualized in the interdisciplinary field of molecular pathological epidemiology (MPE).
Characterize the events as to epidemiological factors
Predisposing factors
Non-environmental factors that increase the likelihood of getting a disease. Genetic history, age, and gender are examples.
Enabling/disabling factors
Factors relating to the environment that either increase or decrease the likelihood of disease. Exercise and good diet are examples of disabling factors. A weakened immune system and poor nutrition are examples of enabling factors.
Precipitation factors
This factor is the most important in that it identifies the source of exposure. It may be a germ, toxin or gene.
Reinforcing factors
These are factors that compound the likelihood of getting a disease. They may include repeated exposure or excessive environmental stresses.
Look for patterns and trends
Here one looks for similarities in the cases which may identify major risk factors for contracting the disease. Epidemic curves may be used to identify such risk factors.
Formulate a hypothesis
If a trend has been observed in the cases, the researcher may postulate as to the nature of the relationship between the potential disease-causing agent and the disease.
Test the hypothesis
Because epidemiological studies can rarely be conducted in a laboratory, the results are often polluted by uncontrollable variations in the cases. This often makes the results difficult to interpret. Two methods have evolved to assess the strength of the relationship between the disease-causing agent and the disease.
Koch's postulates were the first criteria developed for epidemiological relationships. Because they only work well for highly contagious bacteria and toxins, this method is largely out of favor.
Bradford-Hill Criteria are the current standards for epidemiological relationships. A relationship may fill all, some, or none of the criteria and still be true.
Publish the results.
== Measures ==
Epidemiologists are famous for their use of rates. Each measure serves to characterize the disease giving valuable information about contagiousness, incubation period, duration, and mortality of the disease.
=== Measures of occurrence ===
Incidence measures
Incidence rate, where cases included are defined using a case definition
Hazard rate
Cumulative incidence
Prevalence measures
Point prevalence
Period prevalence
=== Measures of association ===
Relative measures
Risk ratio
Rate ratio
Odds ratio
Hazard ratio
Absolute measures
Absolute risk reduction
Attributable risk
Attributable risk in exposed
Percent attributable risk
Levin's attributable risk
=== Other measures ===
Virulence and Infectivity
Mortality rate and Morbidity rate
Case fatality
Sensitivity (tests) and Specificity (tests)
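Several of the measures of association listed above can be computed from a standard 2×2 exposure-by-disease table. The numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
# Toy 2x2 table (hypothetical counts):
#                 diseased   healthy
# exposed            a=30      b=70
# unexposed          c=10      d=90
a, b, c, d = 30, 70, 10, 90

risk_exposed   = a / (a + b)                  # cumulative incidence, exposed
risk_unexposed = c / (c + d)                  # cumulative incidence, unexposed

risk_ratio = risk_exposed / risk_unexposed    # relative measure
odds_ratio = (a / b) / (c / d)                # equivalently (a*d) / (b*c)
risk_difference = risk_exposed - risk_unexposed        # absolute measure
attributable_fraction = risk_difference / risk_exposed # in the exposed

print(f"risk ratio            = {risk_ratio:.2f}")            # 3.00
print(f"odds ratio            = {odds_ratio:.2f}")            # 3.86
print(f"risk difference       = {risk_difference:.2f}")       # 0.20
print(f"attributable fraction = {attributable_fraction:.2f}") # 0.67
```

Note that the odds ratio exceeds the risk ratio here; the two only approximate each other when the disease is rare.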
== Limitations ==
Epidemiological (and other observational) studies typically highlight associations between exposures and outcomes, rather than causation. While some consider this a limitation of observational research, epidemiological models of causation (e.g. Bradford Hill criteria) contend that an entire body of evidence is needed before determining if an association is truly causal. Moreover, many research questions are impossible to study in experimental settings, due to concerns around ethics and study validity. For example, the link between cigarette smoke and lung cancer was uncovered largely through observational research; however research ethics would certainly prohibit conducting a randomized trial of cigarette smoking once it had already been identified as a potential health threat.
== See also ==
Clinical study design – Plan for research in clinical medicine
Epi Info – Statistical software from the CDC
Epidemiology – Study of health and disease within a population
Molecular pathological epidemiology – Discipline combining epidemiology and pathology
OpenEpi – Software
Sanitary epidemiological reconnaissance
== References ==
== External links ==
Epidemiologic.org Epidemiologic Inquiry online weblog for epidemiology researchers
Epidemiology Forum A discussion and forum community for epi analysis support and fostering questions, debates, and collaborations in epidemiology
The Centre for Evidence Based Medicine at Oxford maintains an on-line "Toolbox" of evidence-based medicine methods.
Epimonitor has a comprehensive list of links to associations, agencies, bulletins, etc.
Epidemiology for the Uninitiated On line text, with easy explanations.
North Carolina Center for Public Health Preparedness Training On line training classes for epidemiology and related topics.
People's Epidemiology Library | Wikipedia/Epidemiological_methods |
In the design of experiments, hypotheses are applied to experimental units in a treatment group. In comparative experiments, members of a control group receive a standard treatment, a placebo, or no treatment at all. There may be more than one treatment group, more than one control group, or both.
A placebo control group can be used to support a double-blind study, in which some subjects are given an ineffective treatment (in medical studies typically a sugar pill) to minimize differences in the experiences of subjects in the different groups; this is done in a way that ensures no participant in the experiment (subject or experimenter) knows to which group each subject belongs. In such cases, a third, non-treatment control group can be used to measure the placebo effect directly, as the difference between the responses of placebo subjects and untreated subjects, perhaps paired by age group or other factors (such as being twins).
For the conclusions drawn from the results of an experiment to have validity, it is essential that the items or patients assigned to treatment and control groups be representative of the same population. In some experiments, such as many in agriculture or psychology, this can be achieved by randomly assigning items from a common population to one of the treatment and control groups. In studies of twins involving just one treatment group and a control group, it is statistically efficient to do this random assignment separately for each pair of twins, so that one is in the treatment group and one in the control group.
In some medical studies, where it may be unethical not to treat patients who present with symptoms, controls may be given a standard treatment, rather than no treatment at all. An alternative is to select controls from a wider population, provided that this population is well-defined and that those presenting with symptoms at the clinic are representative of those in the wider population. Another method to reduce ethical concerns would be to test early-onset symptoms, with enough time later to offer real treatments to the control subjects, and let those subjects know the first treatments are "experimental" and might not be as effective as later treatments, again with the understanding there would be ample time to try other remedies.
== Relevance ==
A clinical control group can be a placebo arm, or it can involve an old method used to address a clinical outcome when testing a new idea. For example, in a study published in the British Medical Journal in 1995 examining the effects of strict versus more relaxed blood pressure control in diabetic patients, the clinical control group was the diabetic patients who did not receive tight blood pressure control. In order to qualify for the study, the patients had to meet the inclusion criteria and not match the exclusion criteria. Once the study population was determined, the patients were placed in either the experimental group (strict blood pressure control, <150/80 mmHg) or the control group (less strict control, <180/110). There were a wide variety of end points for patients, such as death, myocardial infarction, and stroke. The study was stopped before completion because strict blood pressure control proved so much more effective at preventing end points that continuing the relaxed-control arm was no longer considered ethical.
The clinical control group is not always a placebo group. Sometimes the clinical control group can involve comparing a new drug to an older drug in a superiority trial. In a superiority trial, the clinical control group is the older medication rather than the new medication. For example, in the ALLHAT trial, thiazide diuretics were demonstrated to be superior to calcium channel blockers or angiotensin-converting enzyme (ACE) inhibitors in reducing cardiovascular events in high-risk patients with hypertension. In the ALLHAT study, the clinical control group was not a placebo; it consisted of the ACE inhibitor and calcium channel blocker arms.
Overall, clinical control groups can either be a placebo or an old standard of therapy.
== See also ==
Scientific control
Wait list control group
Blocking (statistics)
Hawthorne effect
== References == | Wikipedia/Treatment_and_control_groups |
An academic clinical trial is a clinical trial funded not by a pharmaceutical or biotechnology company for commercial ends but by public-good agencies (usually universities or medical trusts) to advance medicine. These trials are a valuable component of the health care system; they benefit patients, help determine the safety and efficacy of drugs and devices, and play an important role in the checks and balances that regulate commercially oriented clinical trials.
A typical area of academic clinical trials is the advancement and optimization of already existing therapies. Thus, academic clinical trials may for instance test how a combination of registered drugs may improve treatment outcomes; or they may apply registered treatments in additional, less frequent indications. Such research questions are not a primary focus of for-profit companies, and thus these trials are typically initiated by individual investigators or academic research organizations.
There are many different organizations which have an interest in academic clinical trials and facilitate or take part in their conduct. These organizations include:
Hospitals, universities, researchers and institutions who view trials as a source of income and prestige, and receive private, charitable and governmental funding.
Pharmaceutical or biotech companies who view the development and commercialization of treatments as their business.
Regulators who wish to ensure treatments are safe and work effectively.
Patients and patients' organizations and associations who want faster access to advanced treatments.
Academic clinical trials are run at academic sites, such as medical schools, academic hospitals, and universities; and non-academic sites which may be managed by so-called site management organizations (SMOs). Site management organizations are for-profit organizations which enlist and manage the physician practice sites that actually recruit and follow patients enrolled in clinical trials. In some cases, academic members participate in clinical trials as members of SMOs.
== See also ==
Clinical investigator
Clinical monitoring
Clinical research associate
Clinical site
Clinical trial protocol
Clinical trials publication
EORTC (European Organisation for Research and Treatment of Cancer)
== References == | Wikipedia/Academic_clinical_trials |
Clinical endpoints or clinical outcomes are outcome measures referring to the occurrence of a disease, symptom, sign, or laboratory abnormality that constitutes a target outcome in clinical research trials. The term may also refer to any disease or sign that strongly motivates withdrawal of an individual or entity from the trial, then often termed a humane (clinical) endpoint.
The primary endpoint of a clinical trial is the endpoint for which the trial is powered. Secondary endpoints are additional endpoints, preferably also pre-specified, for which the trial may not be powered.
Surrogate endpoints are trial outcomes that substitute for a clinical endpoint, often because studying the clinical endpoint directly is difficult; for example, an increase in blood pressure may serve as a surrogate for death from cardiovascular disease, where strong evidence of a causal link exists.
== Scope ==
In a general sense, a clinical endpoint is included in the entities of interest in a trial. The results of a clinical trial generally indicate the number of people enrolled who reached the pre-determined clinical endpoint during the study interval compared with the overall number of people who were enrolled. Once a patient reaches the endpoint, he or she is generally excluded from further experimental intervention (the origin of the term endpoint).
For example, a clinical trial investigating the ability of a medication to prevent heart attack might use chest pain as a clinical endpoint. Any patient enrolled in the trial who develops chest pain over the course of the trial, then, would be counted as having reached that clinical endpoint. The results would ultimately reflect the fraction of patients who reached the endpoint of having developed chest pain, compared with the overall number of people enrolled.
When an experiment involves a control group, the proportion of individuals who reach the clinical endpoint after an intervention is compared with the proportion of individuals in the control group who reached the same clinical endpoint, reflecting the ability of the intervention to prevent the endpoint in question.
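That comparison is a matter of simple proportions. A minimal sketch (all counts below are invented for illustration, not taken from any real trial):

```python
# Hypothetical counts for illustration only -- not from any real trial.
treated_n, treated_events = 500, 40   # intervention arm: enrolled, endpoints reached
control_n, control_events = 500, 70   # control arm: enrolled, endpoints reached

p_treated = treated_events / treated_n   # proportion reaching the clinical endpoint
p_control = control_events / control_n

risk_difference = p_control - p_treated  # absolute risk reduction from the intervention
relative_risk = p_treated / p_control    # risk in treated relative to control

print(f"Treated: {p_treated:.2%}, Control: {p_control:.2%}")
print(f"Absolute risk reduction: {risk_difference:.2%}")
print(f"Relative risk: {relative_risk:.2f}")
```

A real analysis would add a significance test or confidence interval for the difference; the sketch shows only the core comparison the text describes.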
A clinical trial will usually define or specify a primary endpoint as a measure that will be considered success of the therapy being trialled (e.g. in justifying a marketing approval). The primary endpoint might be a statistically significant improvement in overall survival (OS). A trial might also define one or more secondary endpoints such as progression-free-survival (PFS) that will be measured and are expected to be met. A trial might also define exploratory endpoints that are less likely to be met.
== Examples ==
Clinical endpoints can be obtained from different modalities, such as behavioural or cognitive scores, or biomarkers from Electroencephalography (qEEG), MRI, PET, or biochemical biomarkers.
In clinical cancer research, common endpoints include discovery of local recurrence, discovery of regional metastasis, discovery of distant metastasis, onset of symptoms, hospitalization, increase or decrease in pain medication requirement, onset of toxicity, requirement of salvage chemotherapy, requirement of salvage surgery, requirement of salvage radiotherapy, death from any cause, or death from disease. A cancer study may be powered for overall survival, usually indicating time until death from any cause, or disease-specific survival, where the endpoint is death from disease or death from toxicity.
These are expressed as a period of time (survival duration) e.g., in months. Frequently the median is used so that the trial endpoint can be calculated once 50% of subjects have reached the endpoint, whereas calculation of an arithmetical mean can only be done after all subjects have reached the endpoint.
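A minimal sketch of why the median becomes available earlier than the mean (the follow-up times are invented; real analyses would use Kaplan–Meier methods to handle censoring properly):

```python
# Illustrative event times in months for 7 hypothetical subjects.
# Four have reached the endpoint; three are still event-free (times unknown).
observed_event_times = [5, 8, 11, 14]   # months, sorted
still_in_followup = 3                    # subjects who have not reached the endpoint
n = len(observed_event_times) + still_in_followup  # 7 subjects total

# Median survival: once more than half the subjects (4 of 7) have had events,
# the 4th-smallest time is the median, regardless of how long the remaining
# subjects' (necessarily longer) times eventually turn out to be.
median_rank = n // 2 + 1                 # 4th smallest of 7
median_survival = observed_event_times[median_rank - 1]
print(f"Median survival: {median_survival} months")

# The arithmetic mean cannot be computed yet: three of the seven times are
# still unknown, and the mean depends on every value.
```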
=== Disease free survival ===
Disease-free survival is usually used to analyze the results of treatment for localized disease that renders the patient apparently disease-free, such as surgery or surgery plus adjuvant therapy. In disease-free survival, the event is relapse rather than death. Those who relapse are still surviving but are no longer disease-free. Just as not all patients die in survival curves, not all patients relapse in disease-free survival curves, and the curve may have a final plateau representing the patients who did not relapse within the study's maximum follow-up. Because patients survive for at least some time after relapse, the curve for actual survival would look better than the disease-free survival curve.
=== Progression free survival ===
Progression-free survival is usually used in analyzing the results of treatment for advanced disease. The event for progression-free survival is that the disease gets worse or progresses, or that the patient dies from any cause. Time to progression is a similar endpoint that ignores patients who die before the disease progresses.
=== Response duration ===
Response duration is occasionally used to analyze the results of treatment for advanced disease. The event is progression of the disease (relapse). This endpoint involves selecting a subgroup of the patients: it measures the length of the response in those patients who responded; patients who do not respond are not included.
=== Overall survival ===
Overall survival is based on death from any cause, not just the condition being treated, thus it picks up death from side effects of the treatment, and effects on survival after relapse.
=== Toxic death rate ===
Unlike overall survival, which counts death from any cause, the toxic death rate counts only deaths directly attributable to the treatment itself. These rates are generally low to zero, as clinical trials are typically halted when toxic deaths occur. Even with chemotherapy the overall rate is typically under one percent. However, the lack of systematic autopsies limits understanding of deaths due to treatments.
=== Percent serious adverse events ===
The percentage of treated patients experiencing one or more serious adverse events. Serious adverse events are defined by the US Food and Drug Administration as "Any AE occurring at any dose that results in any of the following outcomes:
Death
Life-threatening adverse drug experience
Inpatient hospitalization or prolongation of existing hospitalization
Persistent or significant incapacity or substantial disruption of the ability to conduct normal life functions
Congenital anomaly/birth defect
Important medical events (IME) that may not result in death, be life-threatening, or require hospitalization may be considered serious when, based upon appropriate medical judgment, they may jeopardize the patient or subject and may require medical or surgical intervention to prevent one of the outcomes listed in this definition."
== Humane endpoint ==
A humane endpoint can be defined as the point at which pain and/or distress is terminated, minimized or reduced for an entity in a trial (such as an experimental animal), by taking action such as killing the animal humanely, terminating a painful procedure, or giving treatment to relieve pain and/or distress. An individual in a trial having reached a humane endpoint may need to be withdrawn from the trial before the target outcome of interest has been fully reached.
== Surrogate endpoint ==
A surrogate endpoint (or marker) is a measure of effect of a specific treatment that may correlate with a real clinical endpoint but doesn't necessarily have a guaranteed relationship. The National Institutes of Health (USA) define surrogate endpoint as "a biomarker intended to substitute for a clinical endpoint".
== Combined endpoint ==
Some studies will examine the incidence of a combined endpoint, which can merge a variety of outcomes into one group. For example, the heart attack study above may report the incidence of the combined endpoint of chest pain, myocardial infarction, or death. An example of a cancer study powered for a combined endpoint is disease-free survival; trial participants experiencing either death or discovery of any recurrence would constitute the endpoint. Overall Treatment Utility is an example of a multidimensional composite endpoint in cancer clinical trials.
Regarding humane endpoints, a combined endpoint may constitute a threshold where there is enough cumulative degree of disease, symptoms, signs or laboratory abnormalities to motivate an intervention.
== Response rates ==
The response rate is the percentage of patients on whom a therapy has some defined effect; for example, the cancer shrinks or disappears after treatment.
When used as a clinical endpoint for trials of cancer treatments, this is often called the objective response rate (ORR). The FDA definition of ORR in this context is "the proportion of patients with tumor size reduction of a predefined amount and for a minimum time period.": 7 Another criterion is the clinical benefit rate (CBR), "the total number (or percentage) of patients who achieved a complete response, partial response, or had stable disease for 6 months or more".
Each trial, for whatever illness or condition, may define what is considered a complete response (CR) or partial response (PR) to the therapy or intervention. Hence the trials report the complete response rate and the overall response rate which includes CR and PR. (See e.g. Response evaluation criteria in solid tumors, and Small-cell carcinoma treatment, and for immunotherapies, Immune-related response criteria.)
== Consistency ==
Various studies on a particular topic often do not address the same outcomes, making it difficult to draw clinically useful conclusions when a group of studies is looked at as a whole. The Core Outcomes in Women's Health (CROWN) Initiative is one effort to standardize outcomes.
== See also ==
Multiple comparisons problem
== References ==
== Further reading ==
Spiegelhalter, David J.; Abrams, Keith R.; Myles, Jonathan P. (2004). "Randomised Controlled Trials". Bayesian Approaches to Clinical Trials and Health-Care Evaluation. Chichester: John Wiley & Sons. pp. 181–249. ISBN 0-471-49975-7.
Chin, Jane Y. (1 August 2004). "The Clinical Side: Clinical trial endpoints". Pharmaceutical Representative. Archived from the original on 5 October 2011.
== External links ==
Endpoints: How the Results of Clinical Trials are Measured | Wikipedia/Clinical_endpoint |
An open-label trial, or open trial, is a type of clinical trial in which information is not withheld from trial participants. In particular, both the researchers and participants know which treatment is being administered. This contrasts with a double-blinded trial, where information is withheld both from the researchers and the participants to reduce bias.
Open-label trials may be appropriate for comparing two similar treatments to determine which is most effective, such as a comparison of different prescription anticoagulants, or possible relief from symptoms of some disorders when a placebo is given.
An open-label trial may still be randomized. Open-label trials may also be uncontrolled (without a placebo group), with all participants receiving the same treatment.
== References == | Wikipedia/Open-label_trial |
Placebo-controlled studies are a way of testing a medical therapy in which, in addition to a group of subjects that receives the treatment to be evaluated, a separate control group receives a sham "placebo" treatment which is specifically designed to have no real effect. Placebos are most commonly used in blinded trials, where subjects do not know whether they are receiving real or placebo treatment. Often, there is also a further "natural history" group that does not receive any treatment at all.
The purpose of the placebo group is to account for the placebo effect, that is, effects from treatment that do not depend on the treatment itself. Such factors include knowing one is receiving a treatment, attention from health care professionals, and the expectations of a treatment's effectiveness by those running the research study. Without a placebo group to compare against, it is not possible to know whether the treatment itself had any effect.
Patients frequently show improvement even when given a sham or "fake" treatment. Such intentionally inert placebo treatments can take many forms, such as a pill containing only sugar, or a medical device (such as an ultrasound machine) that is not actually turned on. Also, due to the body's natural healing ability and statistical effects such as regression to the mean, many patients will get better even when given no treatment at all. Thus, the relevant question when assessing a treatment is not "does the treatment work?" but "does the treatment work better than a placebo treatment, or no treatment at all?" More broadly, the aim of a clinical trial is to determine what treatments, delivered in what circumstances, to which patients, in what conditions, are the most effective.
Therefore, the use of placebos is a standard control component of most clinical trials, which attempt to make some sort of quantitative assessment of the efficacy of medicinal drugs or treatments. Such a test or clinical trial is called a placebo-controlled study, and its control is of the negative type. A study whose control is a previously tested treatment, rather than no treatment, is called a positive-control study, because its control is of the positive type.
This close association of placebo effects with randomized controlled trials (RCTs) has a profound impact on how placebo effects are understood and valued in the scientific community.
== Methodology ==
=== Blinding ===
Blinding is the withholding of information from participants that may influence them in some way until after the experiment is complete. Good blinding may reduce or eliminate experimental biases such as confirmation bias, the placebo effect, the observer effect, and others. A blind can be imposed on any participant of an experiment, including subjects, researchers, technicians, data analysts, and evaluators. In some cases, while blinding would be useful, it is impossible or unethical. For example, it is not possible to blind a patient to their treatment in a physical therapy intervention. A good clinical protocol ensures that blinding is as effective as possible within ethical and practical constraints.
During the course of an experiment, a participant becomes unblinded if they deduce or otherwise obtain information that has been masked to them. Unblinding that occurs before the conclusion of a study is a source of experimental error, as the bias that was eliminated by blinding is re-introduced. Unblinding is common in blind experiments, and must be measured and reported.
=== Natural history groups ===
The practice of using an additional natural history group as the trial's so-called "third arm" has emerged, and trials are now conducted using three randomly selected, equally matched trial groups. Reilly wrote: "... it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself."
The Active drug group (A): who receive the active test drug.
The Placebo drug group (P): who receive a placebo drug that simulates the active drug.
The Natural history group (NH): who receive no treatment of any kind (and whose condition, therefore, is allowed to run its natural course).
The outcomes within each group are observed and compared with each other, allowing measurement of:
The efficacy of the active drug's treatment: the difference between A and NH (i.e., A-NH).
The efficacy of the active drug's active ingredient: the difference between A and P (i.e., A-P).
The magnitude of the placebo response: the difference between P and NH (i.e., P-NH).
It is a matter of interpretation whether the value of P-NH indicates the efficacy of the entire treatment process or the magnitude of the "placebo response". The results of these comparisons then determine whether or not a particular drug is considered efficacious.
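The three comparisons above are simple differences between group outcomes, which can be sketched as follows (the outcome scores are invented for illustration):

```python
# Hypothetical mean improvement scores per arm -- illustrative values only.
A  = 0.62   # active drug group (A)
P  = 0.45   # placebo drug group (P)
NH = 0.20   # natural history group (NH), no treatment

treatment_efficacy  = A - NH   # efficacy of the active drug's treatment
ingredient_efficacy = A - P    # efficacy of the active ingredient alone
placebo_response    = P - NH   # magnitude of the placebo response

print(f"A-NH = {treatment_efficacy:.2f}")
print(f"A-P  = {ingredient_efficacy:.2f}")
print(f"P-NH = {placebo_response:.2f}")
```

Note that the three quantities are not independent: (A-P) + (P-NH) = (A-NH), which is why the natural history arm lets the overall treatment effect be decomposed.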
Natural history groups yield useful information when separate groups of subjects are used in a parallel or longitudinal study design. In crossover studies, however, where each subject undergoes both treatments in succession, the natural history of the chronic condition under investigation (e.g., progression) is well understood, with the study's duration being chosen such that the condition's intensity will be more or less stable over that duration. (Wang et al. provide the example of late-phase diabetes, whose natural history is long enough that even a crossover study lasting one year is acceptable.) In these circumstances, a natural history group is not expected to yield useful information.
=== Indexing ===
In certain clinical trials of particular drugs, it may happen that the level of the "placebo responses" manifested by the trial's subjects are either considerably higher or lower (in relation to the "active" drug's effects) than one would expect from other trials of similar drugs. In these cases, with all other things being equal, it is reasonable to conclude that:
the degree to which there is a considerably higher level of "placebo response" than one would expect is an index of the degree to which the drug's active ingredient is not efficacious.
the degree to which there is a considerably lower level of "placebo response" than one would expect is an index of the degree to which, in some particular way, the placebo is not simulating the active drug in an appropriate way.
However, in particular cases such as the use of Cimetidine to treat ulcers, a significant level of placebo response can also prove to be an index of how much the treatment has been directed at a wrong target.
== Implementation issues ==
=== Adherence ===
The Coronary Drug Project was intended to study the safety and effectiveness of drugs for long-term treatment of coronary heart disease in men. Those in the placebo group who adhered to the placebo treatment (took the placebo regularly as instructed) showed nearly half the mortality rate of those who were not adherent.
A similar study of women found survival was nearly 2.5 times greater for those who adhered to their placebo. This apparent placebo effect may have occurred because:
Adhering to the protocol had a psychological effect, i.e. genuine placebo effect.
People who were already healthier were more able or more inclined to follow the protocol.
Compliant people were more diligent and health-conscious in all aspects of their lives.
=== Unblinding ===
In some cases, a study participant may deduce or otherwise obtain information that has been blinded to them. For example, a patient taking a psychoactive drug may recognize that they are taking a drug. When this occurs, it is called unblinding. This kind of unblinding can be reduced with the use of an active placebo, which is a drug that produces effects similar to the active drug, making it more difficult for patients to determine which group they are in.
An active placebo was used in the Marsh Chapel Experiment, a blinded study in which the experimental group received the psychedelic substance psilocybin while the control group received a large dose of niacin, a substance that produces noticeable physical effects intended to lead the control subjects to believe they had received the psychoactive drug.
== History ==
=== James Lind and scurvy ===
In 1747, James Lind (1716–1794), the ship's doctor on HMS Salisbury, conducted the first clinical trial when he investigated the efficacy of citrus fruit in cases of scurvy. He randomly divided twelve scurvy patients, whose "cases were as similar as I could have them", into six pairs. Each pair was given a different remedy. According to Lind's 1753 Treatise on the Scurvy in Three Parts Containing an Inquiry into the Nature, Causes, and Cure of the Disease, Together with a Critical and Chronological View of what has been Published of the Subject, the remedies were: one quart of cider per day, twenty-five drops of elixir vitriol (sulfuric acid) three times a day, two spoonfuls of vinegar three times a day, a course of sea-water (half a pint every day), two oranges and one lemon each day, and an electuary (a mixture containing garlic, mustard, balsam of Peru, and myrrh).
He noted that the pair who had been given the oranges and lemons were so restored to health within six days of treatment that one of them returned to duty, and the other was well enough to attend the rest of the sick.
=== Animal magnetism ===
In 1784, the French Royal Commission investigated the existence of animal magnetism, comparing the effects of allegedly "magnetized" water with that of plain water.
It did not examine the practices of Franz Mesmer, but examined the significantly different practices of his associate Charles d'Eslon (1739–1786).
=== Perkins tractors ===
In 1799, John Haygarth investigated the efficacy of medical instruments called "Perkins tractors" by comparing the results from dummy wooden tractors with a set of allegedly "active" metal tractors, and published his findings in a book, On the Imagination as a Cause & as a Cure of Disorders of the Body.
=== Flint and placebo active treatment comparison ===
In 1863 Austin Flint (1812–1886) conducted the first-ever trial that directly compared the efficacy of a dummy simulator with that of an active treatment, although Flint's examination did not compare the two against each other in the same trial. Even so, this was a significant departure from the (then) customary practice of contrasting the consequences of an active treatment with what Flint described as "the natural history of [an untreated] disease".: 18
Flint's paper is the first time that the terms "placebo" or "placeboic remedy" were used to refer to a dummy simulator in a clinical trial:

... to secure the moral effect of a remedy given specially for the disease, the patients were placed on the use of a placebo which consisted, in nearly all of the cases, of the tincture of quassia, very largely diluted. This was given regularly, and became well known in my wards as the placeboic remedy for rheumatism.
Flint: 21 treated 13 hospital inmates who had rheumatic fever; 11 were "acute", and 2 were "sub-acute". He then compared the results of his dummy "placeboic remedy" with that of the active treatment's already well-understood results. (Flint had previously tested, and reported on, the active treatment's efficacy.) There was no significant difference between the results of the active treatment and his "placeboic remedy" in 12 of the cases in terms of disease duration, duration of convalescence, number of joints affected, and emergence of complications.: 32–34 In the thirteenth case, Flint expressed some doubt whether the particular complications that had emerged (namely, pericarditis, endocarditis, and pneumonia) would have been prevented if that subject had been immediately given the "active treatment".: 36
=== Jellinek and headache remedy ingredients ===
In 1946, Jellinek was asked to test whether a headache drug's overall efficacy would be reduced if certain ingredients were removed. In the years after World War II, pharmaceutical chemicals were restricted, and one U.S. headache remedy manufacturer sold a drug composed of three ingredients (a, b, and c), of which chemical b was in particularly short supply.
Jellinek set up a complex trial involving 199 subjects, all of whom had "frequent headaches". The subjects were randomly divided into four test groups. He prepared four test drugs, involving various permutations of the three drug constituents, with a placebo as a scientific control. The structure of this trial is significant because, in those days, the only time placebos were ever used was to express the efficacy or non-efficacy of a drug in terms of how much better the drug was than the placebo.: 88 (Note that the trial conducted by Austin Flint is an example of such a drug efficacy vs. placebo efficacy trial.) The four test drugs were identical in shape, size, colour and taste:
Drug A: contained a, b, and c.
Drug B: contained a and c.
Drug C: contained a and b.
Drug D: a 'simulator', contained "ordinary lactate".
Each time a subject had a headache, they took their group's designated test drug, and recorded whether their headache had been relieved (or not). Although "some subjects had only three headaches in the course of a two-week period while others had up to ten attacks in the same period", the data showed a "great consistency" across all subjects.: 88 Every two weeks the groups' drugs were changed; so that by the end of eight weeks, all groups had tested all the drugs. The stipulated drug (i.e., A, B, C, or D) was taken as often as necessary over each two-week period, and the two-week sequences for each of the four groups were:
A, B, C, D
B, A, D, C
C, D, A, B
D, C, B, A.
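These four sequences form a balanced crossover design (a 4×4 Latin square): every group tests every drug, and each drug is tested by exactly one group in any given two-week period. A short sketch can verify this property mechanically:

```python
# The four two-week sequences from the trial, one row per group.
drugs = {"A", "B", "C", "D"}
sequences = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["C", "D", "A", "B"],
    ["D", "C", "B", "A"],
]

# Each row: every group tests all four drugs over the eight weeks.
for seq in sequences:
    assert set(seq) == drugs

# Each column: in any given two-week period, each drug is taken by one group.
for period in range(4):
    assert {seq[period] for seq in sequences} == drugs

print("Latin-square property holds: balanced across groups and periods.")
```

The balance matters because it prevents period effects (e.g., headaches varying over the eight weeks) from being confounded with drug effects.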
Over the entire population of 199 subjects, there were 120 "subjects reacting to placebo" and 79 "subjects not reacting to placebo".: 89
On initial analysis, there was no difference between the self-reported "success rates" of Drugs A, B, and C (84%, 80%, and 80% respectively) (the "success rate" of the simulating placebo Drug D was 52%); and, from this, it appeared that ingredient b was completely unnecessary.
However, further analysis demonstrated that ingredient b made a significant contribution to the remedy's efficacy. Examining his data, Jellinek discovered that there was a very significant difference in responses between the 120 placebo responders and the 79 non-responders. The 79 non-responders' reports showed that, if they were considered as an entirely separate group, there was a significant difference in the "success rates" of Drugs A, B, and C: viz., 88%, 67%, and 77%, respectively. And because this significant difference in relief from the test drugs could only be attributed to the presence or absence of ingredient b, he concluded that ingredient b was essential.
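The arithmetic behind this conclusion can be reproduced from the success rates reported above (a sketch; the variable names are my own):

```python
# Success rates from the trial, as percentages of headaches relieved.
all_subjects   = {"A": 84, "B": 80, "C": 80, "D": 52}  # all 199 subjects
non_responders = {"A": 88, "B": 67, "C": 77}           # the 79 placebo non-reactors

# Pooled over all subjects, drugs A, B, and C look interchangeable:
pooled_spread = max(all_subjects[d] for d in "ABC") - min(all_subjects[d] for d in "ABC")

# Restricted to placebo non-responders, removing ingredient b
# (drug B is the permutation lacking b) costs a large share of efficacy:
effect_of_b = non_responders["A"] - non_responders["B"]

print(f"Spread among A/B/C, pooled: {pooled_spread} points")   # looks negligible
print(f"Effect of ingredient b, non-responders: {effect_of_b} points")
```

The point of the stratification is visible in the numbers: the placebo responders' relief, being unrelated to the ingredients, diluted the pooled comparison and masked ingredient b's contribution.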
Two conclusions came from this trial:
Jellinek,: 90 having identified 120 "placebo reactors", went on to suppose that all of them may have had either "psychological headaches" (with or without attendant "hypochondriasis") or "true physiological headaches [which were] accessible to suggestion". Thus, according to this view, the degree to which a "placebo response" is present tends to be an index of the psychogenic origins of the condition in question.: 777
It indicated that, whilst any given placebo was inert, a responder to that particular placebo may be responding for a wide range of reasons unconnected with the drug's active ingredients; from this, it could be important to pre-screen potential test populations and either treat those manifesting a placebo response as a special group or remove them altogether from the test population.
=== MRC and randomized trials ===
It used to be thought that the first-ever randomized clinical trial was the trial conducted by the Medical Research Council (MRC) in 1948 into the efficacy of streptomycin in the treatment of pulmonary tuberculosis. In this trial, there were two test groups:
those "treated by streptomycin and bed-rest", and
those "[treated] by bed-rest alone" (the control group).
What made this trial novel was that the subjects were randomly allocated to their test groups. Up to that time, the practice was to allocate subjects alternately to each group, based on the order in which they presented for treatment. This practice could be biased, because those admitting each patient knew to which group that patient would be allocated (and so the decision to admit or not admit a specific patient might be influenced by the experimenter's knowledge of the nature of their illness and of the group that patient would occupy).
Recently, an earlier MRC trial on the antibiotic patulin on the course of common colds has been suggested to have been the first randomized trial. Another early and until recently overlooked randomized trial was published on strophanthin in a local Finnish journal in 1946.
== Declaration of Helsinki ==
From the time of the Hippocratic Oath questions of the ethics of medical practice have been widely discussed, and codes of practice have been gradually developed as a response to advances in scientific medicine.
The Nuremberg Code, which was issued in August 1947, as a consequence of the so-called Doctors' Trial which examined the human experimentation conducted by Nazi doctors during World War II, offers ten principles for legitimate medical research, including informed consent, absence of coercion, and beneficence towards experiment participants.
In 1964, the World Medical Association issued the Declaration of Helsinki, which specifically limited its directives to health research by physicians, and emphasized a number of additional conditions in circumstances where "medical research is combined with medical care".
The significant difference between the 1947 Nuremberg Code and the 1964 Declaration of Helsinki is that the first was a set of principles that was suggested to the medical profession by the "Doctors' Trial" judges, whilst the second was imposed by the medical profession upon itself.
Paragraph 29 of the Declaration makes specific mention of placebos:
29. The benefits, risks, burdens and effectiveness of a new method should be tested against those of the best current prophylactic, diagnostic, and therapeutic methods. This does not exclude the use of placebo, or no treatment, in studies where no proven prophylactic, diagnostic or therapeutic method exists.
In 2002, the World Medical Association issued the following elaborative announcement:
Note of clarification on paragraph 29 of the WMA Declaration of Helsinki
The WMA hereby reaffirms its position that extreme care must be taken in making use of a placebo-controlled trial and that in general this methodology should only be used in the absence of existing proven therapy. However, a placebo-controlled trial may be ethically acceptable, even if proven therapy is available, under the following circumstances:
— Where for compelling and scientifically sound methodological reasons its use is necessary to determine the efficacy or safety of a prophylactic, diagnostic or therapeutic method; or
— Where a prophylactic, diagnostic or therapeutic method is being investigated for a minor condition and the patients who receive placebo will not be subject to any additional risk of serious or irreversible harm.
All other provisions of the Declaration of Helsinki must be adhered to, especially the need for appropriate ethical and scientific review.
In addition to the requirement for informed consent from all drug-trial participants, it is also standard practice to inform all test subjects that they may receive the drug being tested or that they may receive the placebo.
== Non-drug treatments ==
"Talking therapies" (such as hypnotherapy, psychotherapy, counseling, and non-drug psychiatry) are now required to have scientific validation by clinical trial. However, there is controversy over what might or might not be an appropriate placebo for such therapeutic treatments. Furthermore, there are methodological challenges, such as blinding the person providing the psychological non-drug intervention. In 2005, the Journal of Clinical Psychology devoted an issue to the topic of "The Placebo Concept in Psychotherapy", containing a range of contributions to this question. As the abstract of one paper noted: "Unlike within the domain of medicine, in which the logic of placebos is relatively straightforward, the concept of placebo as applied to psychotherapy is fraught with both conceptual and practical problems."
== See also ==
== References ==
== External links ==
James Lind Library – a source of historical texts on fair tests of treatments in health care.
In epidemiology, case fatality rate (CFR) – or sometimes more accurately case-fatality risk – is the proportion of people who have been diagnosed with a certain disease and end up dying of it. Unlike a disease's mortality rate, the CFR does not take into account the time period between disease onset and death. A CFR is generally expressed as a percentage. It is a measure of disease lethality, and thus may change with different treatments. CFRs are most often used for diseases with discrete, limited-time courses, such as acute infections.
== Terminology ==
The mortality rate – often confused with the CFR – is a measure of the relative number of deaths (either in general, or due to a specific cause) within the entire population per unit of time. A CFR, in contrast, is the number of deaths among the number of diagnosed cases only, regardless of time or total population.
From a mathematical point of view, by taking values between 0 and 1 or 0% and 100%, CFRs are actually a measure of risk (case fatality risk) – that is, they are a proportion of incidence, although they do not reflect a disease's incidence. They are neither rates, incidence rates, nor ratios (none of which are limited to the range 0–1). They do not take into account time from disease onset to death.
Sometimes the term case fatality ratio is used interchangeably with case fatality rate, but they are not the same. A case fatality ratio is a comparison between two different case fatality rates, expressed as a ratio. It is used to compare the severity of different diseases or to assess the impact of interventions.
Because the CFR does not measure frequency per unit time, it is not an incidence rate; some authors therefore note that a more appropriate term is case fatality proportion.
== Example calculation ==
If 100 people in a community are diagnosed with the same disease, and 9 of them subsequently die from the effects of the disease, the CFR would be 9%. If some of the cases have not yet resolved (neither died nor fully recovered) at the time of analysis, a later analysis might take into account additional deaths and arrive at a higher estimate of the CFR, if the unresolved cases were included as recovered in the earlier analysis. Alternatively, it might later be established that a higher number of people were subclinically infected with the pathogen, resulting in an IFR below the CFR.
A CFR may only be calculated from cases that have been resolved through either death or recovery. The preliminary CFR, for example, of a newly occurring disease with a high daily increase and long resolution time would be substantially lower than the final CFR, if unresolved cases were not excluded from the calculation, but added to the denominator only.
{\displaystyle {\text{CFR in }}{\%}={\frac {\text{Number of deaths from disease}}{\text{Number of confirmed cases of disease}}}\times 100}
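The formula can be sketched in a few lines of Python (an illustrative helper, not from the article); the 9-deaths-in-100-cases example from the previous section serves as a check:

```python
def case_fatality_rate(deaths: int, confirmed_cases: int) -> float:
    """CFR in %: deaths among diagnosed cases only, expressed as a risk."""
    if confirmed_cases <= 0:
        raise ValueError("confirmed_cases must be positive")
    return 100.0 * deaths / confirmed_cases

# Example from the text: 100 people diagnosed, 9 die -> CFR = 9%
print(case_fatality_rate(9, 100))  # 9.0
```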
== Infection fatality rate ==
Like the case fatality rate, the term infection fatality rate (IFR) also applies to infectious diseases, but represents the proportion of deaths among all infected individuals, including all asymptomatic and undiagnosed subjects. It is closely related to the CFR, but attempts to additionally account for inapparent infections among healthy people. The IFR differs from the CFR in that it estimates the fatality rate among all infected individuals: both those with detected disease (cases) and those with undetected disease (the asymptomatic and untested group). Individuals who are infected but show no symptoms are said to have inapparent, silent or subclinical infections and may inadvertently infect others. By definition, the IFR cannot exceed the CFR, because the former adds asymptomatic cases to its denominator.
{\displaystyle {\text{IFR in }}{\%}={\frac {\text{Number of deaths from disease}}{\text{Number of infected individuals}}}\times 100}
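The relationship between the two measures can be illustrated with hypothetical numbers (invented for this sketch): if serological testing suggests that three times as many people were infected as were diagnosed, the IFR falls below the CFR.

```python
def infection_fatality_rate(deaths: int, infected: int) -> float:
    """IFR in %: deaths among ALL infected, including undiagnosed/asymptomatic."""
    if infected <= 0:
        raise ValueError("infected must be positive")
    return 100.0 * deaths / infected

# Hypothetical: 9 deaths, 100 confirmed cases, 300 actual infections.
deaths, confirmed, infected = 9, 100, 300
cfr = 100.0 * deaths / confirmed                  # 9.0 %
ifr = infection_fatality_rate(deaths, infected)   # 3.0 %
# Adding asymptomatic infections to the denominator can only lower the rate.
assert ifr <= cfr
```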
== Examples ==
Some examples will suggest the range of possible CFRs for diseases in the real world:
The CFR for the Spanish (1918) flu was greater than 2.5%, while the Asian (1957-58) and Hong Kong (1968-69) flus both had a CFR of about 0.2%.
As of 1 April 2025, coronavirus disease 2019 has an overall CFR of 0.91%, although the CFRs of earlier strains of COVID-19 were around 2%. The CFRs for the original SARS and MERS are about 11% and 34%, respectively.
The CFR for yellow fever is about 5-6% (but 40-50% in severe cases).
Legionnaires' disease has a CFR of about 15%.
Left untreated, bubonic plague has a CFR of up to 60%. With antibiotic treatment, the CFR for bubonic plague is 17%, pneumonic 29% and septicaemic 45%.
Active tuberculosis, the infection with the highest mortality rate, has a CFR of 43% in the absence of HIV.
Ebola virus disease, one of the infections with the highest lethality, has a CFR as high as 90%.
Naegleriasis (also known as primary amoebic meningoencephalitis), has a CFR greater than 95%, with a few of the survivors having been treated with heroic doses of amphotericin B and other off-label drugs.
Rabies has a CFR greater than 99% in unvaccinated individuals. A few people have survived either by being vaccinated (but after symptoms started, or else later than ideal), or more recently, by being put into a medically induced coma.
== See also ==
List of human disease case fatality rates
Mortality rate – Deaths per 1,000 individuals per year
Pandemic severity index – Proposed measure of the severity of influenza
== References ==
== External links ==
Definitions of case fatality for coronary events in the WHO MONICA Project
Swine flu: what do CFR, virulence and mortality rate mean?
A nested case–control (NCC) study is a variation of a case–control study in which cases and controls are drawn from the population in a fully enumerated cohort.
Usually, the exposure of interest is only measured among the cases and the selected controls. Thus the nested case–control study is more efficient than the full cohort design. The nested case–control study can be analyzed using methods for missing covariates.
The NCC design is often used when the exposure of interest is difficult or expensive to obtain and when the outcome is rare. By utilizing data previously collected from a large cohort study, the time and cost of beginning a new case–control study is avoided. By only measuring the covariate in as many participants as necessary, the cost and effort of exposure assessment is reduced. This benefit is pronounced when the covariate of interest is biological, since assessments such as gene expression profiling are expensive, and because the quantity of blood available for such analysis is often limited, making it a valuable resource that should not be used unnecessarily.
== Example ==
As an example, of the 91,523 women in the Nurses' Health Study who did not have cancer at baseline and who were followed for 14 years, 2,341 women had developed breast cancer by 1993. Several studies have used standard cohort analyses to study precursors to breast cancer, e.g. use of hormonal contraceptives, which is a covariate easily measured on all of the women in the cohort. However, note that in comparison to the cases, there are so many controls that each particular control contributes relatively little information to the analysis.
If, on the other hand, one is interested in the association between gene expression and breast cancer incidence, it would be very expensive and possibly wasteful of precious blood specimen to assay all 89,000 women without breast cancer. In this situation, one may choose to assay all of the cases, and also, for each case, select a certain number of women to assay from the risk set of participants who have not yet failed (i.e. those who have not developed breast cancer before the particular case in question has developed breast cancer). The risk set is often restricted to those participants who are matched to the case on variables such as age, which reduces the variability of effect estimates.
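The risk-set sampling described above can be sketched in Python (a simplified, hypothetical helper: it ignores matching variables such as age and treats any subject with a later event or censoring time as still at risk):

```python
import random

def sample_risk_set_controls(times, is_case, k=2, seed=0):
    """Incidence-density sampling sketch: for each case, draw k controls at
    random from the subjects who have not yet failed at the case's event time."""
    rng = random.Random(seed)
    matched = {}
    for i, (t, case) in enumerate(zip(times, is_case)):
        if not case:
            continue
        # Risk set: everyone (other than the case) still under follow-up at time t.
        risk_set = [j for j, tj in enumerate(times) if j != i and tj > t]
        matched[i] = rng.sample(risk_set, min(k, len(risk_set)))
    return matched

# Toy cohort: follow-up times in years; subjects 1 and 3 develop the disease.
times   = [5.0, 2.0, 8.0, 3.5, 9.0, 7.0]
is_case = [False, True, False, True, False, False]
print(sample_risk_set_controls(times, is_case))
```

Note that a subject who later becomes a case may legitimately serve as a control for an earlier case, which is a standard feature of the nested case–control design.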
== Efficiency of the NCC model ==
Commonly 1–4 controls are selected for each case. Since the covariate is not measured for all participants, the nested case–control model is both less expensive than a full cohort analysis and more efficient than taking a simple random sample from the full cohort. However, it has been shown that with 4 controls per case and/or stratified sampling of controls, relatively little efficiency may be lost, depending on the method of estimation used.
== Analysis of nested case–control studies ==
The analysis of a nested case–control model must take into account the way in which controls are sampled from the cohort. Failing to do so, such as by treating the cases and selected controls as the original cohort and performing a logistic regression, which is common, can result in biased estimates whose null distribution is different from what is assumed. Ways to account for the random sampling include conditional logistic regression, and using inverse probability weighting to adjust for missing covariates among those who are not selected into the study.
== Case–cohort study ==
A case–cohort study is a design in which cases and controls are drawn from within a prospective study. All cases who developed the outcome of interest during the follow-up are selected and compared with a random sample of the cohort. This randomly selected control sample could, by chance, include some cases. Exposure is defined prior to disease development based on data collected at baseline or on assays conducted in biological samples collected at baseline.
== References ==
Porta, Miquel (2014). A Dictionary of Epidemiology. Oxford: Oxford University Press.
== Further reading ==
Keogh, Ruth H.; Cox, D. R. (2014). "Nested case–control studies". Case–Control Studies. Cambridge University Press. pp. 160–190. ISBN 978-1-107-01956-0.
A vaccine trial is a clinical trial that aims at establishing the safety and efficacy of a vaccine prior to it being licensed.
A vaccine candidate drug is first identified through preclinical evaluations that could involve high throughput screening and selecting the proper antigen to invoke an immune response.
Some vaccine trials may take months or years to complete, depending on the time required for the subjects to react to the vaccine and develop the required antibodies.
== Preclinical stage ==
Preclinical development stages are necessary to determine the immunogenicity potential and safety profile for a vaccine candidate.
This is also the stage in which the drug candidate may be first tested in laboratory animals prior to moving to the Phase I trials. Vaccines such as the oral polio vaccine were first tested for adverse effects and immunogenicity in non-human primates such as monkeys, as well as in lab mice.
Scientific advances since the 1980s have helped to use transgenic animals as a part of vaccine preclinical protocol in hopes to more accurately determine drug reactions in humans. Understanding vaccine safety and the immunological response to the vaccine, such as toxicity, are necessary components of the preclinical stage. Other drug trials focus on the pharmacodynamics and pharmacokinetics; however, in vaccine studies it is essential to understand toxic effects at all possible dosage levels and the interactions with the immune system.
== Phase I ==
The Phase I study consists of introducing the vaccine candidate to assess its safety in healthy people. A vaccine Phase I trial involves normal healthy subjects, each tested with either the candidate vaccine or a "control" treatment, typically a placebo or an adjuvant-containing cocktail, or an established vaccine (which might be intended to protect against a different pathogen). The primary observation is for detection of safety (absence of an adverse event) and evidence of an immune response.
After the administration of the vaccine or placebo, the researchers collect data on antibody production and on health outcomes (such as illness due to the targeted infection or to another infection). Following the trial protocol, the specified statistical test is performed to gauge the statistical significance of the observed differences in the outcomes between the treatment and control groups. Side effects of the vaccine are also noted, and these contribute to the decision on whether to advance the candidate vaccine to a Phase II trial.
One typical version of Phase I studies in vaccines involves an escalation study, which is used mainly in medicinal research trials. The drug is introduced into a small cohort of healthy volunteers. Vaccine escalation studies aim to minimize chances of serious adverse effects (SAE) by slowly increasing the drug dosage or frequency. The first level of an escalation study usually has two or three groups of around 10 healthy volunteers. Each subgroup receives the same vaccine dose, which is the expected lowest dose necessary to invoke an immune response (the main goal in a vaccine – to create immunity). New subgroups can be added to experiment with a different dosing regimen as long as the previous subgroup did not experience SAEs. There are variations in the vaccination order that can be used for different studies. For example, the first subgroup could complete the entire regimen before the second subgroup starts, or the second can begin before the first ends as long as SAEs were not detected. The vaccination schedule will vary depending on the nature of the drug (i.e. the need for a booster or several doses over the course of a short time period). Escalation studies are ideal for minimizing risks for SAEs that could occur with less controlled and divided protocols.
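The stopping logic of an escalation study can be sketched as follows (a toy model with invented dose levels and outcomes, not a real trial protocol):

```python
def escalate(doses, subgroup_outcomes):
    """Escalation sketch: dose each subgroup in order of increasing dose, and
    stop escalating as soon as any subject in the current subgroup has an SAE.
    subgroup_outcomes[d] is a list of booleans, True = serious adverse event."""
    completed = []
    for dose in doses:
        if any(subgroup_outcomes[dose]):
            break  # SAE detected: do not escalate to the next subgroup
        completed.append(dose)
    return completed

# Hypothetical trial: three dose levels, ~10 volunteers each; an SAE in the
# 50 µg subgroup halts escalation before the 100 µg subgroup is dosed.
outcomes = {10: [False] * 10, 50: [False] * 9 + [True], 100: [False] * 10}
print(escalate([10, 50, 100], outcomes))  # [10]
```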
== Phase II ==
The transition to Phase II relies on the immunogenic and toxicity results from Phase I in a small cohort of healthy volunteers. Phase II will consist of more healthy volunteers in the vaccine target population (~ hundreds of people) to determine reactions in a more diverse set of humans and test different schedules.
== Phase III ==
Similarly, Phase III trials continue to monitor toxicity, immunogenicity, and SAEs on a much larger scale. The vaccine must be shown to be safe and effective in natural disease conditions before being submitted for approval and then general production. In the United States, the Food and Drug Administration (FDA) is responsible for approving vaccines.
== Phase IV ==
Phase IV trials are typically monitoring stages that collect information continuously on vaccine usage, adverse effects, and long-term immunity after the vaccine is licensed and marketed. Harmful effects, such as increased risk of liver failure or heart attacks, discovered by Phase IV trials may result in a drug no longer being sold, or restricted to certain uses; examples include cerivastatin (brand names Baycol and Lipobay), troglitazone (Rezulin) and rofecoxib (Vioxx). Further examples include the swine flu vaccine and the rotavirus vaccine, which increased the risk of Guillain-Barré syndrome (GBS) and intussusception respectively. Thus, the fourth phase of clinical trials is used to ensure long-term vaccine safety.
== References ==
== External links ==
Vaccine Research Center – information regarding preventative vaccine research studies
The Rubin causal model (RCM), also known as the Neyman–Rubin causal model, is an approach to the statistical analysis of cause and effect based on the framework of potential outcomes, named after Donald Rubin. The name "Rubin causal model" was first coined by Paul W. Holland. The potential outcomes framework was first proposed by Jerzy Neyman in his 1923 Master's thesis, though he discussed it only in the context of completely randomized experiments. Rubin extended it into a general framework for thinking about causation in both observational and experimental studies.
== Introduction ==
The Rubin causal model is based on the idea of potential outcomes. For example, a person would have a particular income at age 40 if they had attended college, whereas they would have a different income at age 40 if they had not attended college. To measure the causal effect of going to college for this person, we need to compare the outcome for the same individual in both alternative futures. Since it is impossible to see both potential outcomes at once, one of the potential outcomes is always missing. Consequently, there is an intrinsic indeterminacy that links correlation and causality. The impossibility of associating, with certainty, a hypothesis to a causality defines the "fundamental problem of causal inference". It is important to note that this uncertainty can also be deduced using the concept of universal probability (any outcome can be generated randomly). In a universal system, if the inputs of a universal Turing machine are chosen randomly, any possible computable output has a non-zero probability of being produced. Thus, according to this point of view, the experimental impossibility of associating, with certainty, a hypothesis to a causality is a direct consequence of universal probability.
Because of the fundamental problem of causal inference, unit-level causal effects cannot be directly observed. However, randomized experiments allow for the estimation of population-level causal effects. A randomized experiment assigns people randomly to treatments: college or no college. Because of this random assignment, the groups are (on average) equivalent, and the difference in income at age 40 can be attributed to the college assignment since that was the only difference between the groups. An estimate of the average causal effect (also referred to as the average treatment effect or ATE) can then be obtained by computing the difference in means between the treated (college-attending) and control (not-college-attending) samples.
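Under random assignment, the average causal effect can be estimated as a simple difference in group means. A minimal sketch (the incomes and assignments below are invented for illustration):

```python
def difference_in_means(outcomes, treated):
    """Estimate the ATE under randomization: mean(treated) - mean(control)."""
    t = [y for y, z in zip(outcomes, treated) if z]
    c = [y for y, z in zip(outcomes, treated) if not z]
    return sum(t) / len(t) - sum(c) / len(c)

# Hypothetical incomes at age 40 (in $1000s) and random college assignment.
incomes = [60, 75, 52, 80, 58, 71]
college = [False, True, False, True, False, True]
print(difference_in_means(incomes, college))
```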
In many circumstances, however, randomized experiments are not possible due to ethical or practical concerns. In such scenarios there is a non-random assignment mechanism. This is the case for the example of college attendance: people are not randomly assigned to attend college. Rather, people may choose to attend college based on their financial situation, parents' education, and so on. Many statistical methods have been developed for causal inference, such as propensity score matching. These methods attempt to correct for the assignment mechanism by finding control units similar to treatment units.
== An extended example ==
Rubin defines a causal effect:
Intuitively, the causal effect of one treatment, E, over another, C, for a particular unit and an interval of time from {\displaystyle t_{1}} to {\displaystyle t_{2}} is the difference between what would have happened at time {\displaystyle t_{2}} if the unit had been exposed to E initiated at {\displaystyle t_{1}} and what would have happened at {\displaystyle t_{2}} if the unit had been exposed to C initiated at {\displaystyle t_{1}}: 'If an hour ago I had taken two aspirins instead of just a glass of water, my headache would now be gone,' or 'because an hour ago I took two aspirins instead of just a glass of water, my headache is now gone.' Our definition of the causal effect of the E versus C treatment will reflect this intuitive meaning."
According to the RCM, the causal effect of your taking or not taking aspirin one hour ago is the difference between how your head would have felt in case 1 (taking the aspirin) and case 2 (not taking the aspirin). If your headache would remain without aspirin but disappear if you took aspirin, then the causal effect of taking aspirin is headache relief. In most circumstances, we are interested in comparing two futures, one generally termed "treatment" and the other "control". These labels are somewhat arbitrary.
=== Potential outcomes ===
Suppose that Joe is participating in an FDA test for a new hypertension drug. An all-knowing observer would know the outcomes for Joe under both treatment (the new drug) and control (either no treatment or the current standard treatment). The causal effect, or treatment effect, is the difference between these two potential outcomes.
{\displaystyle Y_{t}(u)} is Joe's blood pressure if he takes the new pill. In general, this notation expresses the potential outcome which results from a treatment, t, on a unit, u. Similarly, {\displaystyle Y_{c}(u)} is the effect of a different treatment, c or control, on a unit, u. In this case, {\displaystyle Y_{c}(u)} is Joe's blood pressure if he doesn't take the pill. {\displaystyle Y_{t}(u)-Y_{c}(u)} is the causal effect of taking the new drug.
From this table we only know the causal effect on Joe. Everyone else in the study might have an increase in blood pressure if they take the pill. However, regardless of what the causal effect is for the other subjects, the causal effect for Joe is lower blood pressure, relative to what his blood pressure would have been if he had not taken the pill.
Consider a larger sample of patients:
The causal effect is different for every subject, but the drug works for Joe, Mary and Bob because the causal effect is negative. Their blood pressure is lower with the drug than it would have been if each did not take the drug. For Sally, on the other hand, the drug causes an increase in blood pressure.
In order for a potential outcome to make sense, it must be possible, at least a priori. For example, if there is no way for Joe, under any circumstance, to obtain the new drug, then {\displaystyle Y_{t}(u)} is impossible for him. It can never happen. And if {\displaystyle Y_{t}(u)} can never be observed, even in theory, then the causal effect of treatment on Joe's blood pressure is not defined.
=== No causation without manipulation ===
The causal effect of the new drug is well defined because it is the simple difference of two potential outcomes, both of which might happen. In this case, we (or something else) can manipulate the world, at least conceptually, so that it is possible that one thing or a different thing might happen.
This definition of causal effects becomes much more problematic if there is no way for one of the potential outcomes to happen, ever. For example, what is the causal effect of Joe's height on his weight? Naively, this seems similar to our other examples. We just need to compare two potential outcomes: what would Joe's weight be under the treatment (where treatment is defined as being 3 inches taller) and what would Joe's weight be under the control (where control is defined as his current height).
A moment's reflection highlights the problem: we can't increase Joe's height. There is no way to observe, even conceptually, what Joe's weight would be if he were taller because there is no way to make him taller. We can't manipulate Joe's height, so it makes no sense to investigate the causal effect of height on weight. Hence the slogan: No causation without manipulation.
=== Stable unit treatment value assumption (SUTVA) ===
We require that "the [potential outcome] observation on one unit should be unaffected by the particular assignment of treatments to the other units" (Cox 1958, §2.4). This is called the stable unit treatment value assumption (SUTVA), which goes beyond the concept of independence.
In the context of our example, Joe's blood pressure should not depend on whether or not Mary receives the drug. But what if it does? Suppose that Joe and Mary live in the same house and Mary always cooks. The drug causes Mary to crave salty foods, so if she takes the drug she will cook with more salt than she would have otherwise. A high salt diet increases Joe's blood pressure. Therefore, his outcome will depend on both which treatment he received and which treatment Mary receives.
SUTVA violation makes causal inference more difficult. We can account for dependent observations by considering more treatments. We create 4 treatments by taking into account whether or not Mary receives treatment.
Recall that a causal effect is defined as the difference between two potential outcomes. In this case, there are multiple causal effects because there are more than two potential outcomes. One is the causal effect of the drug on Joe when Mary receives treatment and is calculated {\displaystyle 130-140}. Another is the causal effect on Joe when Mary does not receive treatment and is calculated {\displaystyle 120-125}. The third is the causal effect of Mary's treatment on Joe when Joe is not treated. This is calculated as {\displaystyle 140-125}. The treatment Mary receives has a greater causal effect on Joe than the treatment which Joe received has on Joe, and it is in the opposite direction.
By considering more potential outcomes in this way, we can cause SUTVA to hold. However, if any units other than Joe are dependent on Mary, then we must consider further potential outcomes. The greater the number of dependent units, the more potential outcomes we must consider and the more complex the calculations become (consider an experiment with 20 different people, each of whose treatment status can affect outcomes for everyone else). In order to (easily) estimate the causal effect of a single treatment relative to a control, SUTVA should hold.
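The expanded set of potential outcomes can be written down directly, using the four blood-pressure values for Joe given in the text, indexed by whether Joe and Mary each receive treatment:

```python
# Joe's potential blood pressure, indexed by (joe_treated, mary_treated),
# using the four values from the text above.
Y = {(True, True): 130, (False, True): 140,
     (True, False): 120, (False, False): 125}

# Effect of the drug on Joe when Mary is treated: 130 - 140 = -10
effect_joe_given_mary_treated = Y[(True, True)] - Y[(False, True)]
# Effect of the drug on Joe when Mary is untreated: 120 - 125 = -5
effect_joe_given_mary_untreated = Y[(True, False)] - Y[(False, False)]
# Effect of Mary's treatment on Joe when Joe is untreated: 140 - 125 = 15
effect_mary_given_joe_untreated = Y[(False, True)] - Y[(False, False)]
```

As the text notes, Mary's treatment has a larger effect on Joe (15) than Joe's own treatment does (−10 or −5), and in the opposite direction.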
=== Average causal effect ===
Consider:
One may calculate the average causal effect (also known as the average treatment effect or ATE) by taking the mean of all the causal effects.
How we measure the response affects what inferences we draw. Suppose that we measure changes in blood pressure as a percentage change rather than in absolute values. Then, depending on the exact numbers, the average causal effect might be an increase in blood pressure. For example, assume that George's blood pressure would be 154 under control and 140 with treatment. The absolute size of the causal effect is −14, but the percentage difference (in terms of the treatment level of 140) is −10%. If Sarah's blood pressure is 200 under treatment and 184 under control, then the causal effect is 16 in absolute terms but 8% in terms of the treatment value. A smaller absolute change in blood pressure (−14 versus 16) yields a larger percentage change (−10% versus 8%) for George. Even though the average causal effect for George and Sarah is +1 in absolute terms, it is −1% in percentage terms.
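The two measurement scales can be checked directly with the figures given for George and Sarah:

```python
def effects(y_control, y_treat):
    """Causal effect on two scales: absolute, and as a percentage of the
    treatment-level value (the convention used in the text)."""
    absolute = y_treat - y_control
    percent = 100.0 * absolute / y_treat
    return absolute, percent

george = effects(154, 140)  # (-14, -10.0)
sarah = effects(184, 200)   # (16, 8.0)
avg_absolute = (george[0] + sarah[0]) / 2
avg_percent = (george[1] + sarah[1]) / 2
print(avg_absolute, avg_percent)  # opposite signs on the two scales
```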
=== The fundamental problem of causal inference ===
The results we have seen up to this point would never be measured in practice. It is impossible, by definition, to observe the effect of more than one treatment on a subject over a specific time period. Joe cannot both take the pill and not take the pill at the same time. Therefore, the data would look something like this:
Question marks are responses that could not be observed. The Fundamental Problem of Causal Inference is that directly observing causal effects is impossible. However, this does not make causal inference impossible. Certain techniques and assumptions allow the fundamental problem to be overcome.
Assume that we have the following data:
We can infer what Joe's potential outcome under control would have been if we make an assumption of constant effect:
{\displaystyle Y_{t}(u)=T+Y_{c}(u)}
and
{\displaystyle Y_{t}(u)-T=Y_{c}(u),}
where T is the average treatment effect, in this case −10.
If we wanted to infer the unobserved values we could assume a constant effect. The following tables illustrate data consistent with the assumption of a constant effect.
All of the subjects have the same causal effect even though they have different outcomes under the treatment.
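Under the constant-effect assumption the missing control outcomes can be imputed mechanically. A minimal sketch (the observed treated outcomes are hypothetical; T = −10 as above):

```python
def impute_control(y_treated, T):
    """Under the constant-effect assumption Y_t(u) = T + Y_c(u), an
    unobserved control outcome is simply Y_c(u) = Y_t(u) - T."""
    return y_treated - T

# Hypothetical observed outcomes for treated subjects, with T = -10.
observed_treated = [130, 125, 118]
imputed_controls = [impute_control(y, -10) for y in observed_treated]
print(imputed_controls)  # [140, 135, 128]
```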
=== The assignment mechanism ===
The assignment mechanism, the method by which units are assigned treatment, affects the calculation of the average causal effect. One such assignment mechanism is randomization. For each subject we could flip a coin to determine if she receives treatment. If we wanted five subjects to receive treatment, we could assign treatment to the first five names we pick out of a hat. When we randomly assign treatments we may get different answers.
Assume that this data is the truth:
The true average causal effect is −8. But the causal effect for these individuals is never equal to this average. The causal effect varies, as it generally (always?) does in real life. After assigning treatments randomly, we might estimate the causal effect as:
A different random assignment of treatments yields a different estimate of the average causal effect.
The estimated average causal effect varies because our sample is small and the responses have a large variance. If the sample were larger and the variance smaller, the estimate would be closer to the true average causal effect regardless of the specific units randomly assigned to treatment.
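The sampling variability of the randomized estimate can be simulated (the potential outcomes below are invented for illustration; each repetition randomly assigns half the units to treatment and computes a difference in observed means):

```python
import random

def estimate_ate(y_t, y_c, n_treated, rng):
    """Randomly assign n_treated units to treatment, observe one potential
    outcome per unit, and return the difference in observed group means."""
    n = len(y_t)
    treated = rng.sample(range(n), n_treated)
    t_obs = [y_t[i] for i in treated]
    c_obs = [y_c[i] for i in range(n) if i not in treated]
    return sum(t_obs) / len(t_obs) - sum(c_obs) / len(c_obs)

rng = random.Random(1)
# Hypothetical potential outcomes; the true ATE is the mean of y_t - y_c.
y_t = [130, 120, 135, 128, 140, 118]
y_c = [140, 125, 141, 136, 144, 131]
true_ate = sum(t - c for t, c in zip(y_t, y_c)) / len(y_t)
estimates = [estimate_ate(y_t, y_c, 3, rng) for _ in range(5)]
print(true_ate, estimates)  # estimates scatter around the true ATE
```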
Alternatively, suppose the mechanism assigns the treatment to all men and only to them.
Under this assignment mechanism, it is impossible for women to receive treatment and therefore impossible to determine the average causal effect on female subjects. In order to make any inferences of causal effect on a subject, the probability that the subject receives treatment must be greater than 0 and less than 1.
=== The perfect doctor ===
Consider the use of the perfect doctor as an assignment mechanism. The perfect doctor knows how each subject will respond to the drug or the control and assigns each subject to the treatment that will most benefit her. The perfect doctor knows this information about a sample of patients:
Based on this knowledge she would make the following treatment assignments:
The perfect doctor distorts both averages by filtering out poor responses to both the treatment and control. The difference between means, which is the supposed average causal effect, is distorted in a direction that depends on the details. For instance, a subject like Laila who is harmed by taking the drug would be assigned to the control group by the perfect doctor and thus the negative effect of the drug would be masked.
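The perfect doctor's distortion can be made concrete with a small sketch. The potential outcomes below, including a subject harmed by treatment, are invented for illustration:

```python
import numpy as np

# Invented potential outcomes (e.g. years of symptom-free survival);
# the subject at index 1 is harmed by the drug, like Laila in the text.
y_treat   = np.array([14.0, 0.0, 1.0, 2.0, 3.0, 1.0, 10.0, 9.0])
y_control = np.array([13.0, 6.0, 4.0, 5.0, 6.0, 6.0,  8.0, 8.0])

true_ate = (y_treat - y_control).mean()  # -2.0: the drug hurts on average

# The perfect doctor gives each subject whichever arm benefits her.
treated = y_treat >= y_control

# Comparing the observed groups filters out all poor responses:
naive_effect = y_treat[treated].mean() - y_control[~treated].mean()

# naive_effect is strongly positive even though the true average causal
# effect is negative: the assignment mechanism masks the harm.
```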
== Conclusion ==
The causal effect of a treatment on a single unit at a point in time is the difference between the outcome variable with the treatment and without the treatment. The Fundamental Problem of Causal Inference is that it is impossible to observe the causal effect on a single unit. You either take the aspirin now or you don't. As a consequence, assumptions must be made in order to estimate the missing counterfactuals.
The Rubin causal model has also been connected to instrumental variables (Angrist, Imbens, and Rubin, 1996), negative controls, and other techniques for causal inference. For more on the connections between the Rubin causal model, structural equation modeling, and other statistical methods for causal inference, see Morgan and Winship (2007), Pearl (2000), Peters et al. (2017), and Ibeling & Icard (2023). Pearl (2000) argues that all potential outcomes can be derived from Structural Equation Models (SEMs) thus unifying econometrics and modern causal analysis.
== See also ==
Causation
Principal stratification
Propensity score matching
== References ==
== Further reading ==
Guido Imbens & Donald Rubin (2015). Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge: Cambridge University Press. doi:10.1017/CBO9781139025751
Donald Rubin (1977). "Assignment to Treatment Group on the Basis of a Covariate", Journal of Educational Statistics, 2, pp. 1–26.
Rubin, Donald (1978). "Bayesian Inference for Causal Effects: The Role of Randomization", The Annals of Statistics, 6, pp. 34–58.
== External links ==
"Rubin Causal Model": an article for the New Palgrave Dictionary of Economics by Guido Imbens and Donald Rubin.
"Counterfactual Causal Analysis": a webpage maintained by Stephen Morgan, Christopher Winship, and others with links to many research articles on causal inference. | Wikipedia/Rubin_causal_model |
The design of experiments (DOE), also known as experiment design or experimental design, is the design of any task that aims to describe and explain the variation of information under conditions that are hypothesized to reflect the variation. The term is generally associated with experiments in which the design introduces conditions that directly affect the variation, but may also refer to the design of quasi-experiments, in which natural conditions that influence the variation are selected for observation.
In its simplest form, an experiment aims at predicting the outcome by introducing a change of the preconditions, which is represented by one or more independent variables, also referred to as "input variables" or "predictor variables." The change in one or more independent variables is generally hypothesized to result in a change in one or more dependent variables, also referred to as "output variables" or "response variables." The experimental design may also identify control variables that must be held constant to prevent external factors from affecting the results. Experimental design involves not only the selection of suitable independent, dependent, and control variables, but planning the delivery of the experiment under statistically optimal conditions given the constraints of available resources. There are multiple approaches for determining the set of design points (unique combinations of the settings of the independent variables) to be used in the experiment.
Main concerns in experimental design include the establishment of validity, reliability, and replicability. For example, these concerns can be partially addressed by carefully choosing the independent variable, reducing the risk of measurement error, and ensuring that the documentation of the method is sufficiently detailed. Related concerns include achieving appropriate levels of statistical power and sensitivity.
Correctly designed experiments advance knowledge in the natural and social sciences and engineering, with design of experiments methodology recognised as a key tool in the successful implementation of a Quality by Design (QbD) framework. Other applications include marketing and policy making. The study of the design of experiments is an important topic in metascience.
== History ==
=== Statistical experiments, following Charles S. Peirce ===
A theory of statistical inference was developed by Charles S. Peirce in "Illustrations of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883), two publications that emphasized the importance of randomization-based inference in statistics.
==== Randomized experiments ====
Charles S. Peirce randomly assigned volunteers to a blinded, repeated-measures design to evaluate their ability to discriminate weights.
Peirce's experiment inspired other researchers in psychology and education, who developed a research tradition of randomized experiments in laboratories, along with specialized textbooks, in the 1800s.
==== Optimal designs for regression models ====
Charles S. Peirce also contributed the first English-language publication on an optimal design for regression models in 1876. A pioneering optimal design for polynomial regression was suggested by Gergonne in 1815. In 1918, Kirstine Smith published optimal designs for polynomials of degree six (and less).
=== Sequences of experiments ===
The use of a sequence of experiments, where the design of each may depend on the results of previous experiments, including the possible decision to stop experimenting, is within the scope of sequential analysis, a field that was pioneered by Abraham Wald in the context of sequential tests of statistical hypotheses. Herman Chernoff wrote an overview of optimal sequential designs, while adaptive designs have been surveyed by S. Zacks. One specific type of sequential design is the "two-armed bandit", generalized to the multi-armed bandit, on which early work was done by Herbert Robbins in 1952.
== Fisher's principles ==
A methodology for designing experiments was proposed by Ronald Fisher, in his innovative books: The Arrangement of Field Experiments (1926) and The Design of Experiments (1935). Much of his pioneering work dealt with agricultural applications of statistical methods. As a mundane example, he described how to test the lady tasting tea hypothesis, that a certain lady could distinguish by flavour alone whether the milk or the tea was first placed in the cup. These methods have been broadly adapted in biological, psychological, and agricultural research.
Comparison
In some fields of study it is not possible to have independent measurements traceable to a metrology standard. Comparisons between treatments are then much more valuable and usually preferable, often made against a scientific control or traditional treatment that acts as a baseline.
Randomization
Random assignment is the process of assigning individuals at random to groups, or to different conditions within a group, in an experiment, so that each member of the population has the same chance of becoming a participant in the study. The random assignment of individuals to groups (or conditions within a group) distinguishes a rigorous, "true" experiment from an observational study or "quasi-experiment". There is an extensive body of mathematical theory that explores the consequences of making the allocation of units to treatments by means of some random mechanism (such as tables of random numbers, or the use of randomization devices such as playing cards or dice). Assigning units to treatments at random tends to mitigate confounding, in which effects due to factors other than the treatment appear to result from the treatment.
The risks associated with random allocation (such as having a serious imbalance in a key characteristic between a treatment group and a control group) are calculable and hence can be managed down to an acceptable level by using enough experimental units. However, if the population is divided into several subpopulations that somehow differ, and the research requires each subpopulation to be equal in size, stratified sampling can be used. In that way, the units in each subpopulation are randomized, but not the whole sample. The results of an experiment can be generalized reliably from the experimental units to a larger statistical population of units only if the experimental units are a random sample from the larger population; the probable error of such an extrapolation depends on the sample size, among other things.
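Randomization within subpopulations can be sketched in a few lines. The unit names and strata below are hypothetical:

```python
import random

# A minimal sketch of stratified random assignment: units are grouped by
# a stratifying characteristic (here a hypothetical sex label), then
# randomized within each stratum so both arms are balanced inside every
# subpopulation. Names and strata are invented.
units = [
    ("Ann", "F"), ("Bea", "F"), ("Cat", "F"), ("Dee", "F"),
    ("Ed", "M"), ("Fred", "M"), ("Gus", "M"), ("Hal", "M"),
]

random.seed(0)
assignment = {}
for stratum in ("F", "M"):
    members = [name for name, s in units if s == stratum]
    random.shuffle(members)            # randomize within the stratum
    half = len(members) // 2
    for name in members[:half]:
        assignment[name] = "treatment"
    for name in members[half:]:
        assignment[name] = "control"

# Every stratum now contributes equally to treatment and control.
```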
Statistical replication
Measurements are usually subject to variation and measurement uncertainty; thus they are repeated and full experiments are replicated to help identify the sources of variation, to better estimate the true effects of treatments, to further strengthen the experiment's reliability and validity, and to add to the existing knowledge of the topic. However, certain conditions must be met before the replication of the experiment is commenced: the original research question has been published in a peer-reviewed journal or widely cited, the researcher is independent of the original experiment, the researcher must first try to replicate the original findings using the original data, and the write-up should state that the study conducted is a replication study that tried to follow the original study as strictly as possible.
Blocking
Blocking is the non-random arrangement of experimental units into groups (blocks) consisting of units that are similar to one another. Blocking reduces known but irrelevant sources of variation between units and thus allows greater precision in the estimation of the source of variation under study.
Orthogonality
Orthogonality concerns the forms of comparison (contrasts) that can be legitimately and efficiently carried out. Contrasts can be represented by vectors, and sets of orthogonal contrasts are uncorrelated and independently distributed if the data are normal. Because of this independence, each orthogonal treatment provides different information from the others. If there are T treatments and T − 1 orthogonal contrasts, all the information that can be captured from the experiment is obtainable from the set of contrasts.
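The orthogonality conditions can be checked numerically. The contrast set below is a standard Helmert-style example for T = 4 treatments, an illustrative choice rather than one taken from this article:

```python
import numpy as np

# Numerical check of the orthogonality conditions. The three contrasts
# below are a standard Helmert-style set for T = 4 treatments.
contrasts = np.array([
    [1.0, -1.0,  0.0,  0.0],
    [1.0,  1.0, -2.0,  0.0],
    [1.0,  1.0,  1.0, -3.0],
])

# Each row is a contrast: its coefficients sum to zero.
row_sums = contrasts.sum(axis=1)

# The rows are mutually orthogonal: all off-diagonal dot products vanish,
# so under normal errors the three contrast estimates are uncorrelated.
gram = contrasts @ contrasts.T
off_diagonal = gram - np.diag(np.diag(gram))

# With T = 4 treatments there are T - 1 = 3 such contrasts, which
# together capture all between-treatment information.
```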
Multifactorial experiments
Multifactorial experiments are used instead of the one-factor-at-a-time method; they are efficient at evaluating the effects and possible interactions of several factors (independent variables). Analysis of experiment design is built on the foundation of the analysis of variance, a collection of models that partition the observed variance into components, according to what factors the experiment must estimate or test.
== Example ==
This example of the design of experiments is attributed to Harold Hotelling, building on examples from Frank Yates. The experiments designed in this example involve combinatorial designs.
Weights of eight objects are measured using a pan balance and set of standard weights. Each weighing measures the weight difference between objects in the left pan and any objects in the right pan by adding calibrated weights to the lighter pan until the balance is in equilibrium. Each measurement has a random error. The average error is zero; the standard deviation of the probability distribution of the errors is the same number σ on different weighings; errors on different weighings are independent. Denote the true weights by
{\displaystyle \theta _{1},\dots ,\theta _{8}.\,}
We consider two different experiments:
Weigh each object in one pan, with the other pan empty. Let Xi be the measured weight of the object, for i = 1, ..., 8.
Do the eight weighings according to the following schedule—a weighing matrix:
                 left pan          right pan
  1st weighing:  1 2 3 4 5 6 7 8   (empty)
  2nd:           1 2 3 8           4 5 6 7
  3rd:           1 4 5 8           2 3 6 7
  4th:           1 6 7 8           2 3 4 5
  5th:           2 4 6 8           1 3 5 7
  6th:           2 5 7 8           1 3 4 6
  7th:           3 4 7 8           1 2 5 6
  8th:           3 5 6 8           1 2 4 7
Let Yi be the measured difference for i = 1, ..., 8. Then the estimated value of the weight θ1 is
{\displaystyle {\widehat {\theta }}_{1}={\frac {Y_{1}+Y_{2}+Y_{3}+Y_{4}-Y_{5}-Y_{6}-Y_{7}-Y_{8}}{8}}.}
Similar estimates can be found for the weights of the other items:
{\displaystyle {\begin{aligned}{\widehat {\theta }}_{2}&={\frac {Y_{1}+Y_{2}-Y_{3}-Y_{4}+Y_{5}+Y_{6}-Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{3}&={\frac {Y_{1}+Y_{2}-Y_{3}-Y_{4}-Y_{5}-Y_{6}+Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{4}&={\frac {Y_{1}-Y_{2}+Y_{3}-Y_{4}+Y_{5}-Y_{6}+Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{5}&={\frac {Y_{1}-Y_{2}+Y_{3}-Y_{4}-Y_{5}+Y_{6}-Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{6}&={\frac {Y_{1}-Y_{2}-Y_{3}+Y_{4}+Y_{5}-Y_{6}-Y_{7}+Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{7}&={\frac {Y_{1}-Y_{2}-Y_{3}+Y_{4}-Y_{5}+Y_{6}+Y_{7}-Y_{8}}{8}}.\\[5pt]{\widehat {\theta }}_{8}&={\frac {Y_{1}+Y_{2}+Y_{3}+Y_{4}+Y_{5}+Y_{6}+Y_{7}+Y_{8}}{8}}.\end{aligned}}}
The question of design of experiments is: which experiment is better?
The variance of the estimate X1 of θ1 is σ² if we use the first experiment. But if we use the second experiment, the variance of the estimate given above is σ²/8. Thus the second experiment gives us 8 times as much precision for the estimate of a single item, and estimates all items simultaneously, with the same precision. What the second experiment achieves with eight weighings would require 64 weighings if the items are weighed separately. Note also that because the design is orthogonal, the sign vectors of the eight estimates are mutually orthogonal, so the estimation errors of the items are uncorrelated.
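The variance comparison can be verified by simulation. The true weights below are made up; the design matrix encodes the weighing schedule above, with +1 for the left pan and −1 for the right:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulation of the weighing example with invented true weights. X is the
# design matrix of the second experiment: entry (i, j) is +1 if object j
# sits in the left pan of weighing i and -1 if it sits in the right pan
# (the first weighing puts all eight objects in the left pan).
theta = np.array([5.0, 3.0, 8.0, 2.0, 7.0, 4.0, 6.0, 1.0])
sigma = 1.0
X = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [ 1,  1,  1, -1, -1, -1, -1,  1],
    [ 1, -1, -1,  1,  1, -1, -1,  1],
    [ 1, -1, -1, -1, -1,  1,  1,  1],
    [-1,  1, -1,  1, -1,  1, -1,  1],
    [-1,  1, -1, -1,  1, -1,  1,  1],
    [-1, -1,  1,  1, -1, -1,  1,  1],
    [-1, -1,  1, -1,  1,  1, -1,  1],
], dtype=float)

# The design is orthogonal (X'X = 8I), so least squares reduces to
# X'Y / 8, which reproduces the eight estimator formulas above.
assert np.allclose(X.T @ X, 8 * np.eye(8))

n_trials = 20_000
# Experiment 1: weigh each object alone; the reading is the estimate.
est_separate = theta + rng.normal(0.0, sigma, size=(n_trials, 8))
# Experiment 2: observe the eight pan differences, then solve for weights.
Y = X @ theta + rng.normal(0.0, sigma, size=(n_trials, 8))
est_design = Y @ X / 8.0

# Empirically, the design-based estimates have about 1/8 the variance.
ratio = est_separate.var(axis=0).mean() / est_design.var(axis=0).mean()
```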
Many problems of the design of experiments involve combinatorial designs, as in this example and others.
== Avoiding false positives ==
False positive conclusions, often resulting from the pressure to publish or the author's own confirmation bias, are an inherent hazard in many fields.
Use of double-blind designs can prevent biases potentially leading to false positives in the data collection phase. When a double-blind design is used, participants are randomly assigned to experimental groups and the researcher is unaware of which participants belong to which group. Therefore, the researcher cannot affect the participants' response to the intervention.
Experimental designs with undisclosed degrees of freedom are a problem, in that they can lead to conscious or unconscious "p-hacking": trying multiple things until you get the desired result. It typically involves the manipulation – perhaps unconsciously – of the process of statistical analysis and the degrees of freedom until they return a figure below the p<.05 level of statistical significance.
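The inflation of false positives from undisclosed degrees of freedom can be quantified with a short simulation (the choice of 10 tests is illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Each simulated "study" tests k = 10 independent outcomes on which the
# true effect is zero, and counts as a false positive if any test reaches
# p < .05 (|z| > 1.96). The number of tests is an illustrative choice.
n_sims, k = 100_000, 10
z = rng.standard_normal((n_sims, k))

single_test_rate = (np.abs(z[:, 0]) > 1.96).mean()       # about 0.05
any_significant = (np.abs(z) > 1.96).any(axis=1).mean()  # about 0.40

# One preregistered test is falsely significant about 5% of the time,
# but reporting the best of 10 undisclosed tests "succeeds" about
# 1 - 0.95**10, i.e. roughly 40% of the time.
```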
P-hacking can be prevented by preregistering studies, in which researchers must submit their data-analysis plan to the journal in which they wish to publish before they start data collection, making undisclosed manipulation of the analysis much harder.
Another way to prevent this is to extend the double-blind design to the data-analysis phase, making the study triple-blind: the data are sent to a data analyst unrelated to the research, who scrambles the data so that there is no way to know which group participants belong to before outliers are potentially removed.
Clear and complete documentation of the experimental methodology is also important in order to support replication of results.
== Discussion topics when setting up an experimental design ==
An experimental design or randomized clinical trial requires careful consideration of several factors before actually doing the experiment. An experimental design is the laying out of a detailed experimental plan in advance of doing the experiment. Some of the following topics have already been discussed in the principles of experimental design section:
How many factors does the design have, and are the levels of these factors fixed or random?
Are control conditions needed, and what should they be?
Manipulation checks: did the manipulation really work?
What are the background variables?
What is the sample size? How many units must be collected for the experiment to be generalisable and have enough power?
What is the relevance of interactions between factors?
What is the influence of delayed effects of substantive factors on outcomes?
How do response shifts affect self-report measures?
How feasible is repeated administration of the same measurement instruments to the same units at different occasions, with a post-test and follow-up tests?
What about using a proxy pretest?
Are there confounding variables?
Should the client/patient, researcher or even the analyst of the data be blind to conditions?
What is the feasibility of subsequent application of different conditions to the same units?
How many of each control and noise factors should be taken into account?
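The sample-size question is commonly answered with a power calculation. A minimal sketch using the standard two-sample z-approximation (a generic textbook formula, not specific to this article):

```python
from math import ceil
from statistics import NormalDist

# Generic two-sample sample-size calculation: n per group for a
# two-sided test at significance level alpha with the given power, to
# detect a difference in means of delta when the outcome's standard
# deviation is sigma.
def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)

# Detecting a 5-point difference with sigma = 10 at 80% power needs
# 63 subjects per group.
```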
The independent variable of a study often has many levels or different groups. In a true experiment, researchers can have an experimental group, in which the intervention testing the hypothesis is implemented, and a control group, which has all the same elements as the experimental group without the interventional element. Thus, when everything else except for one intervention is held constant, researchers can conclude with some certainty that this one element is what caused the observed change. In some instances, having a control group is not ethical. This is sometimes solved by using two different experimental groups. In some cases, independent variables cannot be manipulated, for example when testing the difference between two groups who have a different disease, or testing the difference between genders (obviously variables that would be hard or unethical to assign participants to). In these cases, a quasi-experimental design may be used.
== Causal attributions ==
In the pure experimental design, the independent (predictor) variable is manipulated by the researcher; that is, every participant of the research is chosen randomly from the population, and each participant chosen is assigned randomly to conditions of the independent variable. Only when this is done is it possible to certify with high probability that the differences in the outcome variables are caused by the different conditions. Therefore, researchers should choose the experimental design over other design types whenever possible. However, the nature of the independent variable does not always allow for manipulation. In those cases, researchers must take care not to make causal attributions when their design does not allow it. For example, in observational designs, participants are not assigned randomly to conditions, so if differences in outcome variables are found between conditions, it is likely that something other than the conditions themselves causes the differences in outcomes; that is, a third variable. The same goes for studies with a correlational design.
== Statistical control ==
It is best that a process be in reasonable statistical control prior to conducting designed experiments. When this is not possible, proper blocking, replication, and randomization allow for the careful conduct of designed experiments.
To control for nuisance variables, researchers institute control checks as additional measures. Investigators should ensure that uncontrolled influences (e.g., source credibility perception) do not skew the findings of the study. A manipulation check is one example of a control check. Manipulation checks allow investigators to isolate the chief variables to strengthen support that these variables are operating as planned.
One of the most important requirements of experimental research designs is the necessity of eliminating the effects of spurious, intervening, and antecedent variables. In the most basic model, cause (X) leads to effect (Y). But there could be a third variable (Z) that influences (Y), and X might not be the true cause at all. Z is said to be a spurious variable and must be controlled for. The same is true for intervening variables (a variable in between the supposed cause (X) and the effect (Y)), and anteceding variables (a variable prior to the supposed cause (X) that is the true cause). When a third variable is involved and has not been controlled for, the relation is said to be a zero order relationship. In most practical applications of experimental research designs there are several causes (X1, X2, X3). In most designs, only one of these causes is manipulated at a time.
== Experimental designs after Fisher ==
Some efficient designs for estimating several main effects were found independently and in near succession by Raj Chandra Bose and K. Kishen in 1940 at the Indian Statistical Institute, but remained little known until the Plackett–Burman designs were published in Biometrika in 1946. About the same time, C. R. Rao introduced the concept of orthogonal arrays as experimental designs. This concept played a central role in the development of Taguchi methods by Genichi Taguchi, which took place during his visit to the Indian Statistical Institute in the early 1950s. His methods were successfully applied and adopted by Japanese and Indian industries and subsequently were also embraced by US industry, albeit with some reservations.
In 1950, Gertrude Mary Cox and William Gemmell Cochran published the book Experimental Designs, which became the major reference work on the design of experiments for statisticians for years afterwards.
Developments of the theory of linear models have encompassed and surpassed the cases that concerned early writers. Today, the theory rests on advanced topics in linear algebra, algebra and combinatorics.
As with other branches of statistics, experimental design is pursued using both frequentist and Bayesian approaches: In evaluating statistical procedures like experimental designs, frequentist statistics studies the sampling distribution while Bayesian statistics updates a probability distribution on the parameter space.
Some important contributors to the field of experimental designs are C. S. Peirce, R. A. Fisher, F. Yates, R. C. Bose, A. C. Atkinson, R. A. Bailey, D. R. Cox, G. E. P. Box, W. G. Cochran, W. T. Federer, V. V. Fedorov, A. S. Hedayat, J. Kiefer, O. Kempthorne, J. A. Nelder, Andrej Pázman, Friedrich Pukelsheim, D. Raghavarao, C. R. Rao, Shrikhande S. S., J. N. Srivastava, William J. Studden, G. Taguchi and H. P. Wynn.
The textbooks of D. Montgomery, R. Myers, and G. Box/W. Hunter/J.S. Hunter have reached generations of students and practitioners. Furthermore, there is ongoing discussion of experimental design in the context of model building for either static or dynamic models, also known as system identification.
== Human participant constraints ==
Laws and ethical considerations preclude some carefully designed experiments with human subjects. Legal constraints are dependent on jurisdiction. Constraints may involve institutional review boards, informed consent and confidentiality affecting both clinical (medical) trials and behavioral and social science experiments.
In the field of toxicology, for example, experimentation is performed on laboratory animals with the goal of defining safe exposure limits for humans. Balancing the constraints are views from the medical field. Regarding the randomization of patients, "... if no one knows which therapy is better, there is no ethical imperative to use one therapy or another." (p 380) Regarding experimental design, "...it is clearly not ethical to place subjects at risk to collect data in a poorly designed study when this situation can be easily avoided...". (p 393)
== See also ==
Adversarial collaboration – Method of research
Bayesian experimental design
Block design – Structure in combinatorial mathematics
Box–Behnken design – Experimental designs for response surface methodology
Central composite design – Experimental design in statistical mathematics
Clinical trial – Phase of clinical research in medicine
Clinical study design – Plan for research in clinical medicine
Computer experiment – Experiment used to study computer simulation
Control variable – Experimental element which is not changed throughout the experiment
Controlling for a variable – Binning data according to measured values of the variable
Experimetrics (econometrics-related experiments)
Factor analysis – Statistical method
Fractional factorial design – Statistical experimental design approach
Glossary of experimental design
Grey box model – Mathematical data production model with limited structure
Industrial engineering – Branch of engineering which deals with the optimization of complex processes or systems
Instrument effect
Law of large numbers – Averages of repeated trials converge to the expected value
Manipulation checks – certain kinds of secondary evaluations of an experiment
Multifactor design of experiments software
One-factor-at-a-time method – Method of designing experiments
Optimal design – Experimental design that is optimal with respect to some statistical criterion
Plackett–Burman design – Type of experimental design
Probabilistic design – Discipline within engineering design
Protocol (natural sciences) – Procedural method for the design and implementation of an experiment
Quasi-experimental design – Empirical interventional study
Randomized block design – Design of experiments to collect similar contexts together
Randomized controlled trial – Form of scientific experiment
Research design – Overall strategy utilized to carry out research
Robust parameter design – technique for design of processes and experiments
Sample size determination – Statistical considerations on how many observations to make
Supersaturated designs – Type of experimental design
Royal Commission on Animal Magnetism – 1784 French scientific bodies' investigations involving systematic controlled trials
Survey sampling – Statistical selection process
System identification – Statistical methods to build mathematical models of dynamical systems from measured data
Taguchi methods – Statistical methods to improve the quality of manufactured goods
== References ==
=== Sources ===
== External links ==
A chapter from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Box–Behnken designs from a "NIST/SEMATECH Handbook on Engineering Statistics" at NIST | Wikipedia/Designed_experiment |
Clinical trials are medical research studies conducted on human subjects. The human subjects are assigned to one or more interventions, and the investigators evaluate the effects of those interventions. The progress and results of clinical trials are analyzed statistically.
== Analysis factors ==
=== Intention to treat ===
Randomized clinical trials analyzed by the intention-to-treat (ITT) approach provide fair comparisons among the treatment groups because this approach avoids the bias associated with the non-random loss of participants. The basic ITT principle is that participants in the trials should be analysed in the groups to which they were randomized, regardless of whether they received or adhered to the allocated intervention. However, medical investigators often have difficulties in accepting ITT analysis because of clinical trial issues like missing data or non-adherence to the protocol.
=== Per protocol ===
This analysis can be restricted to only the participants who fulfill the protocol in terms of the eligibility, adherence to the intervention, and outcome assessment. This analysis is known as an "on-treatment" or "per protocol" analysis. A per-protocol analysis represents a "best-case scenario" to reveal the effect of the drug being studied. However, by restricting the analysis to a selected patient population, it does not show all effects of the new drug. Further, adherence to treatment may be affected by other factors that influence the outcome. Accordingly, per-protocol effects are at risk of bias, whereas the intent-to-treat estimate is not.
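The contrast between the two analyses can be illustrated with a toy simulation in which the drug has no true effect but sicker patients are more likely to stop taking it (all modelling choices below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy simulation: the drug has zero true effect, but patients with worse
# baseline severity are more likely to stop taking it.
n = 100_000
severity = rng.normal(0.0, 1.0, size=n)            # prognostic factor
treat = rng.random(n) < 0.5                        # randomized arms
outcome = severity + rng.normal(0.0, 1.0, size=n)  # drug adds nothing

# Probability of adhering falls with severity (controls always "adhere").
p_adhere = 1.0 - 1.0 / (1.0 + np.exp(-severity))
adheres = np.where(treat, rng.random(n) < p_adhere, True)

itt = outcome[treat].mean() - outcome[~treat].mean()
per_protocol = outcome[treat & adheres].mean() - outcome[~treat].mean()

# itt is near the true effect of zero; per_protocol is biased because
# the adherent treated patients are systematically healthier.
```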
== Handling missing data ==
=== Last observation carried forward ===
One method of handling missing data is simply to impute, or fill in, values based on existing data. A standard method to do this is the Last-Observation-Carried-Forward (LOCF) method.
The LOCF method allows for the analysis of the data. However, recent research shows that this method gives a biased estimate of the treatment effect and underestimates the variability of the estimated result. As an example, assume that there are 8 weekly assessments after the baseline observation. If a patient drops out of the study after the third week, then this value is "carried forward" and assumed to be his or her score for the 5 missing data points. The assumption is that the patients improve gradually from the start of the study until the end, so that carrying forward an intermediate value is a conservative estimate of how well the person would have done had he or she remained in the study. The advantages to the LOCF approach are that:
It minimises the number of the subjects who are eliminated from the analysis, and
It allows the analysis to examine the trends over time, rather than focusing simply on the endpoint.
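A minimal sketch of LOCF in plain Python, using made-up assessment scores for a patient who drops out after the third week:

```python
# A plain-Python sketch of Last-Observation-Carried-Forward. The scores
# are invented: a baseline assessment followed by 8 weekly assessments,
# with the patient dropping out after the third week (None = missing).
scores = [30.0, 28.0, 26.0, 25.0, None, None, None, None, None]

def locf(values):
    """Fill each missing assessment with the last observed value."""
    filled, last = [], None
    for v in values:
        if v is not None:
            last = v
        filled.append(last)
    return filled

completed = locf(scores)
# The week-3 score (25.0) is carried forward to the five missing weeks.
```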
However, the National Academy of Sciences, in an advisory report to the Food and Drug Administration on missing data in clinical trials, recommended against the uncritical use of methods like LOCF, stating that "Single imputation methods like last observation carried forward and baseline observation carried forward should not be used as the primary approach to the treatment of missing data unless the assumptions that underlie them are scientifically justified."
=== Multiple imputation methods ===
The National Academy of Sciences advisory panel instead recommended methods that provide valid type I error rates under explicitly stated assumptions taking missing data status into account, and the use of multiple imputation methods based on all the data available in the model. It recommended more widespread use of Bootstrap and Generalized estimating equation methods whenever the assumptions underlying them, such as Missing at Random for GEE methods, can be justified. It advised collecting auxiliary data believed to be associated with dropouts to provide more robust and reliable models, collecting information about reason for drop-out; and, if possible, following up on drop-outs and obtaining efficacy outcome data. Finally, it recommended sensitivity analyses as part of clinical trial reporting to assess the sensitivity of the results to the assumptions about the missing data mechanism.
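A toy sketch of the generic multiple-imputation recipe with Rubin's pooling rules (this illustrates the general technique, not the specific procedures recommended by the panel; the data and imputation model are invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy data: a sample whose mean we want, with about 25% of values
# missing completely at random.
complete = rng.normal(50.0, 10.0, size=200)
data = complete.copy()
missing = rng.random(200) < 0.25
data[missing] = np.nan
observed = data[~np.isnan(data)]

m = 20                      # number of imputed data sets
estimates, variances = [], []
for _ in range(m):
    imputed = data.copy()
    # Simple imputation model: draw each missing value at random from
    # the observed values (hot-deck style), so the imputations vary.
    imputed[missing] = rng.choice(observed, size=int(missing.sum()))
    estimates.append(imputed.mean())
    variances.append(imputed.var(ddof=1) / imputed.size)

# Rubin's pooling rules: average the m estimates, and combine the
# within-imputation variance W with the between-imputation variance B.
q_bar = float(np.mean(estimates))        # pooled point estimate
w = float(np.mean(variances))            # within-imputation variance
b = float(np.var(estimates, ddof=1))     # between-imputation variance
total_var = w + (1 + 1 / m) * b
```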
While the methods recommended by the National Academy of Sciences report are more recently developed, more robust, and work under a wider variety of conditions than single-imputation methods like LOCF, no known method for handling missing data is valid under all conditions. As the 1998 International Conference on Harmonization E9 Guidance on Statistical Principles for Clinical Trials noted, "Unfortunately, no universally applicable methods of handling missing values can be recommended." Expert statistical and medical judgment must be used to select, from the available imperfect techniques, the method most appropriate to the particular trial's goals, endpoints, statistical methods, and context.
== References ==
== Further reading ==
AR Waladkhani. (2008). Conducting clinical trials. A theoretical and practical guide. ISBN 978-3-940934-00-0
A wait list control group, also called a wait list comparison, is a group of participants included in an outcome study that is assigned to a waiting list and receives intervention after the active treatment group. This control group serves as an untreated comparison group during the study, but eventually goes on to receive treatment at a later date. Wait list control groups are often used when it would be unethical to deny participants access to treatment, provided the wait is still shorter than that for routine services.
== See also ==
Experimental design
Scientific control
Treatment and control groups
== References ==
Clinical research is a branch of medical research that involves people and aims to determine the effectiveness (efficacy) and safety of medications, devices, diagnostic products, and treatment regimens intended for improving human health. These research procedures are designed for the prevention, treatment, diagnosis or understanding of disease symptoms.
Clinical research is different from clinical practice: in clinical practice, established treatments are used to improve the condition of a person, while in clinical research, evidence is collected under rigorous study conditions on groups of people to determine the efficacy and safety of a treatment.
== Description ==
The term "clinical research" refers to the entire process of studying and writing about a drug, a medical device or a form of treatment, which includes conducting interventional studies (clinical trials) or observational studies on human participants. Clinical research can cover any medical method or product from its inception in the lab to its introduction to the consumer market and beyond. Once the promising candidate or the molecule is identified in the lab, it is subjected to pre-clinical studies or animal studies where different aspects of the test article (including its safety and toxicity, if applicable, and its efficacy, if possible at this early stage) are studied.
The clinical research ecosystem involves a complex network of sites, pharmaceutical companies and academic research institutions. Clinical research is often conducted at academic medical centers and affiliated research study sites. These centers and sites provide the prestige of the academic institution as well as access to larger metropolitan areas, providing a larger pool of medical participants. These academic medical centers often have their internal Institutional Review Boards that oversee the ethical conduct of medical research.
=== Patient and public involvement ===
Besides being participants in a clinical trial, members of the public can actively collaborate with researchers in designing and conducting clinical research. This is known as patient and public involvement (PPI). Public involvement involves a working partnership between patients, caregivers, people with lived experience, and researchers to shape and influence what is researched and how. PPI can improve the quality of research and make it more relevant and accessible. People with current or past experience of illness can provide a different perspective than professionals and complement their knowledge. Through their personal knowledge they can identify research topics that are relevant and important to those living with an illness or using a service. They can also help to make the research more grounded in the needs of the specific communities they are part of. Public contributors can also ensure that the research is presented in plain language that is clear to the wider society and the specific groups it is most relevant for.
== Phases ==
Following preclinical research, clinical trials involving new drugs are commonly classified into four phases. Each phase of the drug approval process is treated as a separate clinical trial. If the drug successfully passes through Phases I, II, and III, it will be approved by the national regulatory authority for use in the general population. Phase IV consists of post-approval studies.
Phase I includes 20 to 100 healthy volunteers or individuals with the disease or condition. This study typically lasts several months and its purpose is to assess safety and determine an effective dosage. Phase II includes a larger number of individual participants, in the range of 100–300, and Phase III includes some 1,000–3,000 participants to assess the efficacy and safety of the drug at different doses. Only 25–30% of drugs advance to the end of Phase III.
== Clinical research by country ==
=== United States ===
In the United States, when a test article is unapproved or not yet cleared by the Food and Drug Administration (FDA), or when an approved or cleared test article is used in a way that may significantly increase the risks (or decrease the acceptability of the risks), the data obtained from the preclinical studies or other supporting evidence, or case studies of off-label use, are submitted to the FDA in support of an Investigational New Drug application.
Where devices are concerned, the submission to the FDA would be for an Investigational Device Exemption application if the device is a significant risk device or is not in some way exempt from prior submission to the FDA. In addition, clinical research may require review by an Institutional Review Board or Research Ethics Board and possibly other institutional committees, such as a Privacy Board, Conflict of Interest Committee, Radiation Safety Committee or Radioactive Drug Research Committee.
=== European Union ===
In the European Union, the European Medicines Agency acts in a similar fashion for studies conducted in their region. These human studies are conducted in four phases in research subjects that give consent to participate in the clinical trials.
== See also ==
Clinical research associate
Clinical research ethics
Clinical trial management system
Randomized controlled trial
Evidence-based medicine
Unethical human experimentation
== References ==
A seeding trial or marketing trial is a form of marketing, conducted in the name of research, designed to target product sampling towards selected consumers. In the marketing research field, seeding is the process of allocating marketing to specific customers, or groups of customers, in order to stimulate the internal dynamics of the market, and enhance the diffusion process. In medicine, seeding trials are clinical trials or research studies in which the primary objective is to introduce the concept of a particular medical intervention—such as a pharmaceutical drug or medical device—to physicians, rather than to test a scientific hypothesis.
To create loyalty and advocacy towards a brand, seeding trials take advantage of opinion leadership to enhance sales, capitalizing on the Hawthorne Effect. In a seeding trial, the brand provides potential opinion leaders with the product for free, aiming to gain valuable pre-market feedback and also to build support among the testers, creating influential word-of-mouth advocates for the product. By involving the opinion leaders as testers, effectively inviting them to be an extension of the marketing department, companies can create "a powerful sense of ownership among the clients, customers or consumers that count" by engaging the testers in a research dialogue. Seeding trials in medicine are not illegal but are considered unethical because they "deceive investigators, clinicians, and patients, subverting the scientific process".
== In medicine ==
Seeding trials to promote a medical intervention were described as "trials of approved drugs [that] appear to serve little or no scientific purpose" and "thinly veiled attempts to entice doctors to prescribe a new drug being marketed by the company" in a special article in the New England Journal of Medicine. The article, whose authors included U.S. Food and Drug Administration commissioner David Aaron Kessler, also described a number of characteristics common to seeding trials:
The trial is of an intervention with many competitors
Use of a trial design unlikely to achieve its stated scientific aims (e.g., un-blinded, no control group, no placebo)
Recruitment of physicians as trial investigators because they commonly prescribe similar medical interventions rather than for their scientific merit
Disproportionately high payments to trial investigators for relatively little work
Sponsorship is from a company's sales or marketing budget rather than from research and development
Little requirement for valid data collection
In a seeding trial, doctors and their patients are given free access to a drug and exclusive information and services to use the drug effectively. Additionally, participating physicians are often given financial remuneration and a chance to be a co-author on a resulting scientific publication. By triggering the Hawthorne effect, physicians become "opinion-leading word-of-mouth advocates". This practice has been shown to be effective.
Seeding trials are not illegal, but such practices are considered unethical. The obfuscation of true trial objectives (primarily marketing) prevents the proper establishment of informed consent for patient decisions. Additionally, trial physicians are not informed of the hidden trial objectives, which may include the physicians themselves being intended study subjects (such as in undisclosed evaluations of prescription practices). Seeding trials may also utilize inappropriate promotional rewards, which may exert undue influence or coerce desirable outcomes.
=== Examples ===
Documents released during a court case indicate that the Assessment of Differences between Vioxx and Naproxen To Ascertain Gastrointestinal Tolerability and Effectiveness (ADVANTAGE) trial of Vioxx conducted by Merck may have been a seeding trial, with the intention being to introduce the drug to physicians rather than test its efficacy. It appears Merck knew about the potential criticism they would face; an internal email suggested: "It may be a seeding study, but let's not call it that in our internal documents". The 2003 study was originally published in the Annals of Internal Medicine but was strongly criticized for its deception by the journal's editors in a 2008 editorial, calling for greater responsibility in academia to end the practice of "marketing in the guise of science".
In the STEPS trial Pfizer presented their drug Neurontin in a way that merged pharmaceutical marketing with research. This trial and other practices led to the company's loss in Franklin v. Parke-Davis.
== In marketing ==
Product placement is an advertising technique used by companies to subtly promote their products through a non-traditional advertising technique, usually through appearances in film, television, or other media.
In the marketing field, seeding is considered the process of allocating marketing to specific customers, or groups of customers, in order to stimulate the internal dynamics of the market, enhance the diffusion process and encourage faster adoption of the product throughout the entire population. In a marketing seeding program, a company offers some sort of promotion (free product, discounts, service trials, etc.) to a niche group of people with the intention that this would stimulate WOM. An early example of a seeding trial was during the development of Post-it notes, produced by 3M. In 1977, secretaries to senior management staff throughout the United States were sent packs of Post-its and invited to suggest possible uses for them. They soon found them to be extremely useful and became "brand champions" for the product, an early example of viral marketing. Companies that have used seeding trials include Procter & Gamble, Microsoft, Hasbro, Google, Unilever, Pepsi, Coke, Ford, DreamWorks SKG, EMI, Sony, and Siemens.
Two of the main managerial decisions revolving around seeding focus on seeding of advertising in a multinational market and the process of seeding the product itself. Determining how many and which consumers within a particular social network should be seeded to maximize adoption is a challenging task for a firm.
=== Seeding Strategies (how to seed?) ===
In 2005, a team of marketing researchers, Barak Libai, Eitan Muller and Renana Peres, found that, contrary to managerial intuition and common assumptions in marketing research, strategies that disperse marketing efforts are generally better strategies. These include 'support the weak', in which the firm focuses its marketing efforts on the remaining market potential, and 'uniform', in which the firm distributes the marketing efforts evenly among its regions. This conclusion is congruent with the work of Japanese business strategist Kenichi Ohmae, which suggests that the sprinkler business model is superior and recommended to companies wishing to start a seeding program.
Researchers Jeonghye Choi of Yonsei University, Sam Hui of the Stern School of Business at New York University, and David Bell of the Wharton School at the University of Pennsylvania explored two imitation effects on demand at an Internet retailer, geographic proximity and demographic similarity, and concluded that firms can influence the space–time demand path through seeding. The researchers conceived a new seeding strategy called the “proximity-and-similarity-based strategy”, in which the firm seeds the new product by choosing new zip codes that are the most responsive while adjusting the impact of proximity and similarity effects over time, and compared it to the three strategies presented in Libai, Muller and Peres's research: “support the strong”, “support the weak” and “uniform”. They argue that over time the “proximity-and-similarity-based strategy” performs best because the similarity effect begins to affect new and distant areas. In other words, serving many small pools of demographically similar buyers who are geographically distant from one another is crucial for an Internet retailer, because sales then increase over time.
Yogesh Joshi of the University of Maryland and David Reibstein and John Zhang of the Wharton School found that, when the question of optimal entry timing arises, firms should not necessarily enter a new market based on a strong leverage effect, a situation where a firm's presence in an existing market has a positive influence on product adoption in a new market. Nor should a backlash effect, a situation where social influence from the existing market is negative, necessarily prevent the firm from entering a new market. The researchers show that the optimal strategy is a trade-off between the three factors of leverage, backlash, and patience.
=== Seeding Objectives ===
One of the key questions surrounding seeding programs over the last decade has been whether it is more effective for companies to seed via influencers or via randomly chosen people in customer networks.
Many authors and scholars addressed this issue. Malcolm Gladwell discusses the “Law of the Few” in his book, The Tipping Point. He suggests that highly connected and rare people have the ability to shape the world. This handful of unique people can spread the word around and create a social epidemic through their connections, charm, personality, expertise and persuasiveness. The notion that a small group of people can influence others and cause them to adopt products, services or behaviors was the subject of another book, The Influentials by Edward Keller and Jonathan Berry. This minority comprises a wide range of people who act as experts in their field and whose opinions are highly regarded by their peers.
From a more academic point of view, Barak Libai, Eitan Muller and Renana Peres conducted research on the subject that is among the first to shed light on the actual value created by word-of-mouth programs and to explore issues such as how targeting opinion leaders creates more value than targeting random customers. In seeding programs, word-of-mouth can gain customers who would not otherwise have bought the product; this is called expansion. Word-of-mouth can also accelerate the purchase process of customers who would have purchased anyway; the faster the adoption, the greater the profits. These processes of expansion and acceleration combine to create the social value of a word-of-mouth seeding program for a new product. Furthermore, when deciding upon an optimal seeding program, the researchers conclude that influencer seeding programs yield higher customer equity than random seeding programs. Of course, the decision about which program type to adopt depends on how much the company is willing to invest in discovering its influencers.
German researchers Oliver Hinz of Universität Darmstadt, Bernd Skiera of University of Frankfurt, and Christian Barrot and Jan U. Becker of Kühne Logistics University argue that seeding strategies have a strong influence on the success of viral marketing campaigns. Their results suggest that seeding to well-connected people is the most successful approach, because these attractive seeding points are more likely to participate in viral marketing campaigns. Well-connected people also actively use their greater reach, but do not have more influence on their peers than less connected people do.
On the other side of the debate, some argue that influencers have no such effect and therefore companies shouldn't target their seeding efforts on a specific group of people. Duncan Watts and Peter Dodds examined the phenomenon through a computer network simulation under the assumption that influential people are more difficult to influence, therefore social hubs have a lower tendency to adopt new products. Their work suggests that highly connected individuals do not play a crucial role in influencing others and that a random individual is just as likely to start a trend as connected people.
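The influencer-versus-random debate can be explored with small simulations of the kind Watts and Dodds used. The sketch below is an independent-cascade model on a hypothetical toy network, not their actual code; it counts how many nodes eventually adopt from a single seed:

```python
import random

def cascade_reach(adjacency, seed_node, p_adopt, rng):
    """Simulate a simple independent-cascade adoption process from one seed.

    adjacency maps each node to a list of its neighbors; each newly adopting
    node gets one chance to convert each neighbor with probability p_adopt."""
    adopted = {seed_node}
    frontier = [seed_node]
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in adjacency[node]:
                if neighbor not in adopted and rng.random() < p_adopt:
                    adopted.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return len(adopted)

# A hypothetical "hub" (node 0) linked to ten otherwise isolated customers:
star = {0: list(range(1, 11)), **{i: [0] for i in range(1, 11)}}
rng = random.Random(0)
hub_reach = cascade_reach(star, 0, p_adopt=1.0, rng=rng)   # everyone adopts
leaf_reach = cascade_reach(star, 5, p_adopt=0.0, rng=rng)  # only the seed
```

Averaging reach over many random networks, seeds, and adoption probabilities is what lets such simulations compare hub seeding against random seeding.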
== References ==
== Further reading ==
Stephens MD (January 1993). "Marketing aspects of company-sponsored postmarketing surveillance studies". Drug Saf. 8 (1): 1–8. doi:10.2165/00002018-199308010-00001. PMID 8471183. S2CID 3039187. | Wikipedia/Seeding_trial |
In causal models, controlling for a variable means binning data according to measured values of the variable. This is typically done so that the variable can no longer act as a confounder in, for example, an observational study or experiment.
When estimating the effect of explanatory variables on an outcome by regression, controlled-for variables are included as inputs in order to separate their effects from the explanatory variables.
A limitation of controlling for variables is that a causal model is needed to identify important confounders (the backdoor criterion is used for this identification). Without one, a possible confounder might remain unnoticed. Another associated problem is that if a variable which is not a real confounder is controlled for, it may in fact make other variables (possibly not taken into account) become confounders when they were not confounders before. In other cases, controlling for a non-confounding variable may cause underestimation of the true causal effect of the explanatory variables on an outcome (e.g. when controlling for a mediator or its descendant). Counterfactual reasoning mitigates the influence of confounders without this drawback.
== Experiments ==
Experiments attempt to assess the effect of manipulating one or more independent variables on one or more dependent variables. To ensure the measured effect is not influenced by external factors, other variables must be held constant. The variables made to remain constant during an experiment are referred to as control variables.
For example, if an outdoor experiment were to be conducted to compare how different wing designs of a paper airplane (the independent variable) affect how far it can fly (the dependent variable), one would want to ensure that the experiment is conducted at times when the weather is the same, because one would not want weather to affect the experiment. In this case, the control variables may be wind speed, direction and precipitation. If the experiment were conducted when it was sunny with no wind, but the weather changed, one would want to postpone the completion of the experiment until the control variables (the wind and precipitation level) were the same as when the experiment began.
In controlled experiments of medical treatment options on humans, researchers randomly assign individuals to a treatment group or control group. This is done to reduce the confounding effect of irrelevant variables that are not being studied, such as the placebo effect.
== Observational studies ==
In an observational study, researchers have no control over the values of the independent variables, such as who receives the treatment. Instead, they must control for variables using statistics.
Observational studies are used when controlled experiments may be unethical or impractical. For instance, if a researcher wished to study the effect of unemployment (the independent variable) on health (the dependent variable), it would be considered unethical by institutional review boards to randomly assign some participants to have jobs and some not to. Instead, the researcher will have to create a sample which includes some employed people and some unemployed people. However, there could be factors that affect both whether someone is employed and how healthy he or she is. Part of any observed association between the independent variable (employment status) and the dependent variable (health) could be due to these outside, spurious factors rather than indicating a true link between them. This can be problematic even in a true random sample. By controlling for the extraneous variables, the researcher can come closer to understanding the true effect of the independent variable on the dependent variable.
In this context the extraneous variables can be controlled for by using multiple regression. The regression uses as independent variables not only the one or ones whose effects on the dependent variable are being studied, but also any potential confounding variables, thus avoiding omitted variable bias. "Confounding variables" in this context means other factors that not only influence the dependent variable (the outcome) but also influence the main independent variable.
=== OLS Regressions and control variables ===
The simplest examples of control variables in regression analysis come from Ordinary Least Squares (OLS) estimators. The OLS framework assumes the following:
Linear relationship - OLS statistical models are linear. Hence the relationship between explanatory variables and the mean of Y must be linear.
Homoscedasticity - This requires homogeneity of variances, that is, equal or similar variances across the data.
Independence/No Autocorrelation - Error terms from one (or more) observation cannot be influenced by error terms of other observations.
Normality of Errors - The errors are jointly normal and uncorrelated; this implies that the error terms {\displaystyle (\epsilon _{i})_{i\in N}} form an independently and identically distributed (iid) set. This implies that the unobservables between different groups or observations are independent.
No multicollinearity - Independent variables must not be highly correlated with each other. For regressions using matrix notation, the design matrix must be full rank, i.e. {\displaystyle X^{'}X} is invertible.
Accordingly, a control variable can be interpreted as a linear explanatory variable that affects the mean value of Y (Assumption 1), but which is not the primary variable under investigation, and which also satisfies the other assumptions above.
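A small simulation illustrates the mechanics of controlling for a confounder in an OLS regression. This is a hedged sketch with made-up data: the variable names, coefficients, and the use of education as the confounder in an employment-and-health setting are all assumptions of the example, not results from any study.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Hypothetical confounder: education raises both employment odds and health.
education = rng.normal(0.0, 1.0, n)
employment = (education + rng.normal(0.0, 1.0, n) > 0).astype(float)
health = 2.0 * employment + 1.5 * education + rng.normal(0.0, 1.0, n)

def ols(y, *columns):
    """OLS coefficients via the normal equations; intercept comes first."""
    X = np.column_stack([np.ones(len(y)), *columns])
    return np.linalg.solve(X.T @ X, X.T @ y)

naive = ols(health, employment)                  # omits the confounder
controlled = ols(health, employment, education)  # controls for education
# naive[1] overstates the true employment effect of 2.0; controlled[1] recovers it
```

Including `education` as an additional regressor is exactly the "control for the extraneous variable" step described above: the controlled coefficient on employment is close to the true value, while the naive one absorbs the confounder's influence.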
=== Example ===
Consider a study about whether getting older affects someone's life satisfaction. (Some researchers perceive a "u-shape": life satisfaction appears to decline first and then rise after middle age.) To identify the control variables needed here, one could ask what other variables determine not only someone's life satisfaction but also their age. Many other variables determine life satisfaction. But no other variable determines how old someone is (as long as they remain alive). (All people keep getting older, at the same rate, no matter what their other characteristics.) So, no control variables are needed here.
To determine the needed control variables, it can be useful to construct a directed acyclic graph.
== See also ==
Scientific control
Mixed model
Age adjustment
== References ==
== Further reading ==
Freedman, David; Pisani, Robert; Purves, Roger (2007). Statistics. W. W. Norton & Company. ISBN 978-0393929720.
Consolidated Standards of Reporting Trials (CONSORT) encompasses various initiatives developed by the CONSORT Group to alleviate the problems arising from inadequate reporting of randomized controlled trials. It is part of the larger EQUATOR Network initiative to enhance the transparency and accuracy of reporting in research.
== CONSORT Statement ==
The main product of the CONSORT Group is the CONSORT Statement, which is an evidence-based, minimum set of recommendations for reporting randomized trials. It offers a standard way for authors to prepare reports of trial findings, facilitating their complete and transparent reporting, reducing the influence of bias on their results, and aiding their critical appraisal and interpretation.
The most recent version of the Statement—the CONSORT 2010 Statement—consists of a 25-item checklist and a participant flow diagram, along with some brief descriptive text. The checklist items focus on reporting how the trial was designed, analyzed, and interpreted; the flow diagram displays the progress of all participants through the trial. The Statement has been translated into several languages.
The CONSORT "Explanation and Elaboration" document explains and illustrates the principles underlying the CONSORT Statement. It is strongly recommended that it be used in conjunction with the CONSORT Statement.
Considered an evolving document, the CONSORT Statement is subject to periodic changes as new evidence emerges; the most recent update was published in March 2010. The current definitive version of the CONSORT Statement and up-to-date information on extensions are placed on the CONSORT website.
=== Extensions ===
The main CONSORT Statement is based on the "standard" two-group parallel design. Extensions of the CONSORT Statement have been developed to give additional guidance for randomized trials with specific designs (e.g., cluster randomized trials, noninferiority and equivalence trials, pragmatic trials), data (e.g., harms, abstracts), type of target outcome, and various types of intervention (e.g., herbals, non-pharmacologic treatments, acupuncture). A number of guidelines have been designed to complement CONSORT, including TIDieR (encouraging adequate descriptions of interventions) and TIDieR-Placebo (encouraging adequate descriptions of placebo or sham controls). This list is by no means exhaustive, and work is ongoing.
== History ==
In 1993, 30 experts—medical journal editors, clinical trialists, epidemiologists, and methodologists—met in Ottawa, Canada to discuss ways of improving the reporting of randomized trials. This meeting resulted in the Standardized Reporting of Trials (SORT) statement, a 32-item checklist and flow diagram in which investigators were encouraged to report on how randomized trials were conducted.
Concurrently, and independently, another group of experts, the Asilomar Working Group on Recommendations for Reporting of Clinical Trials in the Biomedical Literature, convened in California, USA, and were working on a similar mandate. This group also published recommendations for authors reporting randomized trials.
At the suggestion of Dr. Drummond Rennie, from JAMA, in 1995 representatives from both these groups met in Chicago, USA, with the aim of merging the best of the SORT and Asilomar proposals into a single, coherent evidence-based recommendation. This resulted in the Consolidated Standards of Reporting Trials (CONSORT) Statement, which was first published in 1996. Further meetings of the CONSORT Group in 1999 and 2000 led to the publication of the revised CONSORT Statement in 2001.
Since the revision in 2001, the evidence base to inform CONSORT has grown considerably, with empirical data highlighting new concerns regarding the reporting of randomized trials. Therefore, a third CONSORT Group meeting was held in 2007, resulting in the publication of a newly revised CONSORT Statement and explanatory document in 2010. Users of the guideline are strongly recommended to refer to the most up-to-date version when writing or interpreting reports of clinical trials.
== Impact ==
The CONSORT Statement has gained considerable support since its inception in 1996. Over 600 journals and editorial groups worldwide now endorse it, including The Lancet, BMJ, JAMA, New England Journal of Medicine, World Association of Medical Editors, and International Committee of Medical Journal Editors. The 2001 revised Statement has been cited over 1,200 times and the accompanying explanatory document over 500 times. Another indication of CONSORT's impact is the approximately 17,500 hits per month that the CONSORT website receives. It has also recently been published as a book for those involved in the planning, conducting and interpretation of clinical trials.
A 2006 systematic review suggests that use of the CONSORT checklist is associated with improved reporting of randomized trials.
Similar initiatives to improve the reporting of other types of research have arisen after the introduction of CONSORT. They include: Strengthening the Reporting of Observational Studies in Epidemiology (STROBE), Standards for the Reporting of Diagnostic Accuracy Studies (STARD), Strengthening the Reporting of Genetic Association studies (STREGA), Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), Transparent Reporting of a multivariable model for Individual Prognosis Or Diagnosis (TRIPOD+AI), Standards for Quality Improvement Reporting Excellence (SQUIRE), among others. These reporting guidelines have been incorporated into the EQUATOR Network initiative to enhance the transparent and accurate reporting of research studies.
== See also ==
Metascience
STROBE
== References ==
== External links ==
CONSORT Statement website Archived 2019-05-09 at the Wayback Machine
A platform trial is a type of prospective, disease-focused, adaptive, randomized clinical trial (RCT) that compares multiple, simultaneous and possibly differently-timed interventions against a single, constant control group. As a disease-focused trial design (as opposed to an intervention-focused one), a platform trial attempts to answer the question "which therapy will best treat this disease?". Platform trials are unique in their use of both a common control group and the ability to alter the therapies under investigation during the active enrollment phase. Platform trials commonly take advantage of Bayesian statistics, but may incorporate elements of frequentist statistics and/or machine learning.
== Purpose ==
Platform trials can be a particularly useful design when researchers predict that multiple therapies, becoming available at different times, will require investigation. For example, when the COVID-19 pandemic began, researchers predicted that there would eventually be multiple different therapies that could be investigated, but that these therapies would be discovered at different times in the pandemic timeline, making a platform trial a useful design. Similar to COVID-19, platform trials have found use in oncology, Alzheimer's disease and pneumonia research. Platform trials can be a superior design compared to simple 2-arm clinical trials when multiple therapies need investigation, because they require only a single control group. This means that platform trials can be conducted with fewer enrolled patients than a series of separate 2-arm trials with potentially redundant control groups. This in turn allows results to be published sooner for time-sensitive diseases, and exposes fewer patients to the risks of a clinical trial. Platform trials may be appropriate for phase II–IV trials.
== Design elements ==
=== Master protocol ===
Platform trials, like any clinical trial, have many elements that must be established before starting enrollment. While platform trials have the ability to alter their therapies of interest, many elements of these trials remain constant and regulated. Such common, stable elements of platform trials described in the master protocol include: qualified trial staff members, trial sites, recruitment criteria, enrollment procedures, pre-set criteria for adding/discontinuing new therapies, adverse event reporting, communication plans, and statistical analysis plans. The master protocol is submitted to the Institutional Review Board (IRB), and once it is approved, only arm-specific appendices need to be submitted for IRB approval in the event of changes to the trial arms. Establishing a stable master protocol with adaptive therapy arms allows for faster, more efficient trial execution.
Platform trials are often large, multi-site investigations and as a result, master protocols frequently try to identify common human and physical infrastructure to maximize resource availability and efficiency. Examples of this include identifying/creating a single IRB to review the trial for all sites, creating a single database for collecting data, and creating a single randomization mechanism for all enrolled patients.
=== Common control group ===
One of the defining aspects of a platform trial is the shared control group that all interventional arms are compared to. Whereas a conventional RCT would generally have half of all enrolled patients in the control group, platform trials have a higher total number of patients in the various interventional groups. This allows fewer patients to be enrolled, which saves money and accelerates completion time. A common statistical tool for determining allocation ratios, Dunnett's test, suggests that n√t patients should be allocated to the control group, where n is the sample size of each arm and t is the number of active arms. As the number of arms increases, the ratio of patients allocated to control also increases. This results in the control group having a higher proportion of allocated patients than any one arm, though platform trials still allow for more total patients to be in intervention arms than multiple 2-arm RCTs.
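The square-root allocation rule can be illustrated with a short sketch. The function name and the rounding convention are illustrative choices, not part of Dunnett's formulation; real trials would adjust these numbers for logistics and interim analyses.

```python
import math

def control_allocation(n_per_arm, t_arms):
    """Square-root rule: with t active arms of n patients each,
    allocate about n * sqrt(t) patients to the shared control group."""
    n_control = math.ceil(n_per_arm * math.sqrt(t_arms))
    total = n_control + n_per_arm * t_arms
    return n_control, total

# A 4-arm platform trial vs four separate 2-arm RCTs (100 patients per arm)
n_control, total = control_allocation(100, 4)
separate_total = 4 * (100 + 100)  # each 2-arm trial needs its own control
print(n_control, total, separate_total)  # 200 in control; 600 total vs 800
```

The control group (200 patients) outnumbers any single 100-patient arm, yet the platform design still enrolls 200 fewer patients overall than four separate 2-arm trials.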
While the control group is not necessarily designed to change in the way that the treatment arms are, because platform trials can run for long periods of time, control groups may have to evolve to stay current with standard of care. When this is the case, or if there is a change to patient demographics with time, later analysis of the trial must be careful to consider comparing investigational patients to only the appropriate subset of control patients.
=== Adaptive intervention groups ===
The second defining aspect of a platform trial is that the therapies under investigation can change during the active enrollment phase of a trial. By comparison, conventional RCTs must specify the therapies under investigation before active enrollment, and discontinuation of a therapy then results in discontinuation of the entire trial. Platform designs allow for addition and/or discontinuation of therapy arms. Importantly, the addition or discontinuation of an arm must follow pre-set protocols, such as reaching a certain demonstrated efficacy or being recommended by a set panel of experts. There are frequently caps to the number of arms that can be active at once, which are pre-determined by the research team. The number of possible arms is influenced by considerations of cost, time available for the trial, operational feasibility, complications with organizing large quantities of patient data, and the number of total patients available for enrollment. While an arm most frequently represents a single therapy, advanced designs may have multiple therapies in a single arm. When this is the case, one arm may have different therapies in different therapy classes (e.g. one antibiotic and one immunomodulator). Another advanced strategy is for each arm to utilize the same treatments, but with each arm representing a different sequence of intervention administration. Advanced trials may also be designed such that some arms are only activated depending on the results of other arms. For example, a higher-dose arm may only be activated if a lower-dose arm shows few side effects but also low efficacy.
Unlike conventional RCTs, intervention arms do not necessarily need to start at the same time chronologically. This feature is particularly useful when investigating diseases that have new therapies being discovered regularly since these new therapies can be added to the trial without needing to start a new trial each time a therapy is discovered.
=== Response-adaptive randomization ===
Response-adaptive randomization is not a necessary component of platform trials, but unique aspects of platforms allow this feature to be incorporated. Response-adaptive randomization refers to the capability of redistributing the patient allocation ratio when one arm is showing superior/inferior outcomes compared to other arms after an interim analysis. Allocation ratios can therefore be adjusted to put more patients into more successful arms; however, the ratio of patients randomized to the control group does not change. Allocation ratios are determined through a mix of empirical interim evidence and simulation modeling. Care must be taken, especially early in the trial when limited sample sizes are available, to avoid extreme swings in allocation ratios, as such swings could cause early biasing of data.
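A toy sketch of this re-weighting idea follows. The fixed control share, the smoothed success rate, and the per-arm floor that damps extreme swings are all illustrative assumptions, not a standard algorithm; real trials derive these quantities from pre-specified statistical models and simulation.

```python
def update_allocation(successes, trials, control_share=0.35, floor=0.10):
    """Toy response-adaptive randomization update (illustrative only).

    Re-weights the active arms in proportion to a smoothed success rate,
    keeps the control share fixed, and floors each arm's share so a run
    of early results cannot swing the ratios to extremes.
    `successes` and `trials` are per active arm (control excluded).
    """
    # Smoothed success rates (add-one style smoothing for small samples)
    rates = [(s + 1) / (t + 2) for s, t in zip(successes, trials)]
    total = sum(rates)
    shares = [(1 - control_share) * r / total for r in rates]
    # Enforce a per-arm floor, then renormalize the non-control mass
    shares = [max(sh, floor * (1 - control_share)) for sh in shares]
    norm = sum(shares)
    return [(1 - control_share) * sh / norm for sh in shares]

# After an interim look: arm 1 has 12/20 successes, arm 2 has 5/20
print(update_allocation([12, 5], [20, 20]))
```

The arm shares always sum to the non-control mass (here 0.65), so the control allocation stays constant while the better-performing arm receives a larger share.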
== Limitations ==
While platform trials offer many advantages for investigating a single disease, their adaptive nature and potential for numerous and complicated arms may limit their applicability and feasibility. Platforms require a large number of experts for trial design, Data and Safety Monitoring Boards, and operations, leading to high cost and communication complexity. The long duration of platform trials may necessitate updates to the standard of care in the control group, complicating analysis. Further, care must be taken to ensure that the data from arms added later are compared to appropriate sub-sections of the control group, increasing statistical complexity. Publishing results of terminated arms can also be complicated if the whole trial has not yet been completed, as shared data in the trial may still need to remain blinded. Additionally, the complexity of platform designs (often involving multiple sponsors and funding sources as well as changing treatment arms) can make them difficult to register in standardized databases. Platform trials require long planning times, making them unsuitable for therapies needing immediate investigation. Funding can be complicated when different pharmaceutical companies are involved, and the ill-defined trial lengths make them less appealing to federal funding agencies.
== See also ==
Clinical Trial
Adaptive Clinical Trial
Clinical Study Design
Bayesian Experimental Design
AGILE
== References == | Wikipedia/Platform_trial |
Control may refer to:
== Basic meanings ==
=== Economics and business ===
Control (management), an element of management
Control, an element of management accounting
Comptroller (or controller), a senior financial officer in an organization
Controlling interest, a percentage of voting stock shares sufficient to prevent opposition
Foreign exchange controls, regulations on trade
Internal control, a process to help achieve specific goals typically related to managing risk
=== Mathematics and science ===
Control (optimal control theory), a variable for steering a controllable system of state variables toward a desired goal
Controlling for a variable in statistics
Scientific control, an experiment in which "confounding variables" are minimised to reduce error
Control variables, variables which are kept constant during an experiment
Biological pest control, a natural method of controlling pests
Control network in geodesy and surveying, a set of reference points of known geospatial coordinates
Control room, a room where a physical facility can be monitored
Process control in continuous production processes
Security controls, safeguards against security risks
=== Medicine ===
Control, according to the ICD-10-PCS, in the Medical and Surgical Section (0), is the root operation (# 3) that means stopping, or attempting to stop, post-procedural bleeding
Chlordiazepoxide, also sold under the trade name Control
Lorazepam, sold under the trade name Control
=== Systems engineering, computing and technology ===
Automatic control, the application of control theory for regulation of processes without direct intervention
Control character, or non-printing character, in a character set; does not represent a written symbol, but is used to control the interpretation or display of text
Unicode control characters, characters with no visual or spatial representation
Control engineering, a discipline of modeling and controlling of systems
Control system, the ability to control some mechanical or chemical equipment
Control theory, the mathematical theory about controlling dynamical systems over time
Control flow, means of specifying the sequence of operations in computer programs
Control variables in programming, which regulate the flow of control
Control key, on a computer keyboard
GUI widget (control or widget), a component of a graphical user interface
Input device (control), a physical user interface to a computer system
=== Society, psychology and sociology ===
Control (psychology)
Control freak, a person who attempts to dictate
Controlling behavior in relationships (also called coercive control)
Locus of control, an extent to which individuals believe that they can control events that affect them
Mind control, the use of manipulative methods to persuade others (brainwashing)
Power (social and political), the ability to control others
Self-control, the ability to control one's emotions and desires
Social control, mechanisms that regulate social behavior
Civilian control of the military
=== Other basic uses ===
Control point (orienteering), a marked waypoint in orienteering and related sports
Control (linguistics), a relation between elements of two clauses
== Geography ==
Control, Alberta
== Media ==
=== Books ===
Control (novel), a 1982 novel by William Goldman
Control (fictional character), in the 1974 British spy novel Tinker, Tailor, Soldier, Spy
=== Film and TV ===
==== Films ====
Control (1987 film) or Il Giorno prima, a 1987 made-for-television film starring Burt Lancaster
Control (2004 film), starring Ray Liotta, Willem Dafoe and Michelle Rodriguez
Control (2007 film), a film about Joy Division singer Ian Curtis, directed by Anton Corbijn
Control (2013 film), a Chinese–Hong Kong film written and directed by Kenneth Bi
Control (2023 film), a British film directed by Gene Fallaize and featuring the voice of Kevin Spacey
Control (upcoming film), directed by Robert Schwentke and starring James McAvoy
Kontroll, a 2003 Hungarian film, released as Control internationally
Control, a UK comedy short by Frank Miller
==== TV ====
Control (House), a 2005 episode of the television series House
Control, a Spanish-language series aired on Univision
Control, a recurring character in the sketch programme A Bit of Fry & Laurie
Control, a character on the science fiction crime drama Person of Interest
[C] - The Money of Soul And Possibility Control, or [C] - Control, a 2011 anime
Ctrl (web series), an American comedy web series
CONTROL (Get Smart), a fictional counter-espionage agency
=== Games ===
Control and control-bid, features of the game contract bridge
Control (video game), a 2019 video game by Remedy Entertainment
=== Music ===
Kontrol, a Bulgarian punk band
Control (Starlight Express), a character from the rock musical
==== Albums ====
Control (GoodBooks album), 2007
Control (Janet Jackson album), 1986
Control (Pedro the Lion album), 2002
Control, a 2011 album by Abandon
Control, a 2014 album by The Brew
Control, a 1981 album by Conrad Schnitzler
Control, a 2013 EP by Disclosure
Control, a 1994 album by Hellnation
Control, a 2012 EP by The Indecent
Control, a 1971 album by John St Field
Control, a 2012 album by Uppermost
Control, a 2003 album by Where Fear and Weapons Meet
Ctrl (SZA album), 2017
==== Songs ====
"Control" (Big Sean song), 2013
"Control" (Garbage song), 2012
"Control" (Janet Jackson song), 1986
"Control" (Kid Sister song), 2007
"Control" (Matrix & Futurebound song), 2013
"Control" (Metro Station song), 2007
"Control" (Mutemath song), 2004
"Control" (Poe song), 1998
"Control" (Puddle of Mudd song), 2001
"Control" (Traci Lords song), 1994
"Control" (Zoe Wees song), 2020
"Control", by Ayra Starr from The Year I Turned 21, 2024
"Control", by Amyl and the Sniffers from Amyl and the Sniffers, 2019
"Control", by Basement from Colourmeinkindness, 2012
"Control", by the Black Dahlia Murder from Everblack, 2013
"Control", by Delta Goodrem from Child of the Universe, 2012
"Control", by Disclosure from The Face, 2012
"Control", by División Minúscula, 2008
"Control", by Doja Cat from Purrr!, 2014
"Control", by Earshot from Two, 2004
"Control", by Feder, 2018
"Control", by Ghost9 from Now: Who We Are Facing, 2021
"Control", by Halsey from Badlands, 2015
"Control", by London Grammar from Truth Is a Beautiful Thing, 2017
"Control", by Playboi Carti from Whole Lotta Red, 2020
"Control", by Poe from Haunted, 2000
"Control", by Stabbing Westward from Ungod, 1994
"Control", by Wisin from El Regreso del Sobreviviente, 2014
"Control (Somehow You Want Me)", by Tenth Avenue North from Followers, 2016
== See also ==
Action (disambiguation)
Control point (disambiguation)
Control unit (disambiguation)
Controller (disambiguation)
Damage control (disambiguation)
Uncontrolled (disambiguation)
All pages with titles beginning with Control
All pages with titles containing Control | Wikipedia/Control_(disambiguation) |
A multicenter research trial is a clinical trial that involves more than one independent medical institution in enrolling and following trial participants. In multicenter trials the participating institutions follow a common treatment protocol and the same data collection guidelines, and there is a single coordinating center that receives, processes and analyzes study data.
== Benefits ==
An important benefit of multicenter trials is that they permit the enrollment of a larger number of participants at a faster rate, in comparison to a single-center trial, by drawing on the resources of multiple institutions. This is crucial when the anticipated benefit from a treatment will be relatively small, or an expected outcome is likely to be uncommon, making a larger sample size necessary. Therefore, studies on preventive measures and therapies tend to be designed as multicenter trials. In studying novel pharmaceuticals, Phase III trials, which compare the new treatment to an established one, are usually multicenter ones. In contrast, Phase I trials, which test potential toxicity of the treatment, and Phase II trials, which establish some preliminary efficacy of the tested treatment, are usually single-center trials, as they require fewer participants.
The benefits of multicenter trials also include the potential for a more heterogeneous sample of participants, from different geographic locations and a wider range of population groups, treated by physicians of different backgrounds, and the ability to compare results among centers, all of which increase the generalizability of the study. In many cases, efficacy will vary significantly between population groups with different genetic, environmental, and ethnic or cultural backgrounds ("demographic" factors); multicenter trials are better at evaluating these factors, by giving the opportunity for more subgroup analyses. Heterogeneity in the sample means that the findings will be more generalizable. On the other hand, a more heterogeneous study population generally requires a larger sample size to detect a given difference.
== References ==
== External links ==
ClinicalTrials.gov from US National Library of Medicine
Role of ICH GCP and Recruitment Strategies Training of Clinical Sites Staff in Successful Patient Recruitment Rates By Marithea Goberville, Ph.D.
IBPA Publications 2005 | Wikipedia/Multicenter_trial |
Limit State Design (LSD), also known as Load And Resistance Factor Design (LRFD), refers to a design method used in structural engineering. A limit state is a condition of a structure beyond which it no longer fulfills the relevant design criteria. The condition may refer to a degree of loading or other actions on the structure, while the criteria refer to structural integrity, fitness for use, durability or other design requirements. A structure designed by LSD is proportioned to sustain all actions likely to occur during its design life, and to remain fit for use, with an appropriate level of reliability for each limit state. Building codes based on LSD implicitly define the appropriate levels of reliability by their prescriptions.
The method of limit state design, developed in the USSR and based on research led by Professor N.S. Streletski, was introduced in USSR building regulations in 1955.
== Criteria ==
Limit state design requires the structure to satisfy two principal criteria: the ultimate limit state (ULS) and the serviceability limit state (SLS).
Any design process involves a number of assumptions: (1) the loads to which the structure will be subjected, (2) foreseeable or cognizable exceptional scenarios and the stresses these may impose, and (3) the individual and collective strengths of each constituent part and of the structure as a whole.
== Ultimate limit state (ULS) ==
A clear distinction is made between the ultimate state (US) and the ultimate limit state (ULS). The Ultimate State is a physical situation that involves either excessive deformations sufficient to cause collapse of the component under consideration or the structure as a whole, or deformations exceeding values considered to be the acceptable tolerance.
A structure is deemed to satisfy the ultimate limit state criterion if all factored bending, shear, and tensile or compressive stresses are below the factored resistances calculated for the section under consideration.
Complying with the design criteria of the ULS is considered the minimum requirement (among other additional demands) to provide proper structural safety.
== Serviceability limit state (SLS) ==
In addition to the ULS check mentioned above, a serviceability limit state (SLS) computational check must be performed. To satisfy the serviceability limit state criterion, a structure must remain functional for the duration of its intended use subject to routine (everyday) loading.
== Factor development ==
The load and resistance factors are determined using statistics and a pre-selected probability of failure. Variability in the quality of construction and in the consistency of the construction material is accounted for in the factors. Generally, a factor of unity (one) or less is applied to the resistances of the material, and a factor of unity or greater to the loads. Less commonly, in some load cases a factor may be less than unity, reflecting the reduced probability of the loads occurring in combination.
The aforementioned factors can differ for different materials or even between differing grades of the same material. For example, wood has a larger variability than steel. The factors applied to resistance also account for the degree of scientific confidence in the derivation of the values.
In determining the specific magnitude of the factors, more deterministic loads (e.g., dead load - the weight of the structure and permanent attachments like walls, floor treatments, ceiling finishes) are given lower factors (for example 1.4) than highly variable loads like earthquake, wind, or live (occupancy) loads (1.6). Impact loads are typically given higher factors still (say 2.0) in order to account for both their unpredictable magnitudes and the dynamic nature of the loading vs. the static nature of most models.
Limit states design has the potential to produce a more consistently designed structure as each element is intended to have the same probability of failure. In practical terms this normally results in a more efficient structure, and as such, it can be argued that LSD is superior from a practical engineering viewpoint.
== Example treatment of LSD in building codes ==
The following is the treatment of LSD found in the National Building Code of Canada:
NBCC 1995 Format
φR > αD·D + ψ·γ·(αL·L + αQ·Q + αT·T)
where φ = Resistance Factor
ψ = Load Combination Factor
γ = Importance Factor
αD = Dead Load Factor
αL = Live Load Factor
αQ = Earthquake Load Factor
αT = Thermal Effect (Temperature) Load Factor
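The NBCC 1995 inequality can be checked mechanically. In the sketch below, the factor values (φ = 0.9, αD = 1.25, αL = 1.5, and unit combination/importance factors) are illustrative placeholders only; actual values depend on the load combination and the edition of the code.

```python
def nbcc_1995_check(R, D, L=0.0, Q=0.0, T=0.0,
                    phi=0.9, psi=1.0, gamma=1.0,
                    aD=1.25, aL=1.5, aQ=1.0, aT=1.25):
    """Evaluate the NBCC 1995 limit-state inequality
    phi*R > aD*D + psi*gamma*(aL*L + aQ*Q + aT*T).
    Factor values here are illustrative placeholders; real values
    depend on the load combination and the code edition."""
    demand = aD * D + psi * gamma * (aL * L + aQ * Q + aT * T)
    return phi * R > demand, demand

# A member with factored resistance 0.9 * 500 = 450 against D = 120, L = 150
ok, demand = nbcc_1995_check(R=500.0, D=120.0, L=150.0)
print(ok, demand)  # demand = 1.25*120 + 1.5*150 = 375.0, so the check passes
```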
Limit state design has replaced the older concept of permissible stress design in most forms of civil engineering. A notable exception is transportation engineering. Even so, new codes are currently being developed for both geotechnical and transportation engineering which are LSD based. As a result, most modern buildings are designed in accordance with a code which is based on limit state theory. For example, in Europe, structures are designed to conform with the Eurocodes: Steel structures are designed in accordance with EN 1993, and reinforced concrete structures to EN 1992. Australia, Canada, China, France, Indonesia, and New Zealand (among many others) utilise limit state theory in the development of their design codes. In the purest sense, it is now considered inappropriate to discuss safety factors when working with LSD, as there are concerns that this may lead to confusion. Previously, it has been shown that LRFD and ASD can produce significantly different designs of steel gable frames.
There are few situations where ASD produces significantly lighter weight steel gable frame designs. Additionally, it has been shown that in high snow regions, the difference between the methods is more dramatic.
== In the United States ==
The United States has been particularly slow to adopt limit state design (known as Load and Resistance Factor Design in the US). Design codes and standards are issued by diverse organizations, some of which have adopted limit state design, while others have not.
The ACI 318 Building Code Requirements for Structural Concrete uses Limit State design.
The ANSI/AISC 360 Specification for Structural Steel Buildings, the ANSI/AISI S-100 North American Specification for the Design of Cold Formed Steel Structural Members, and The Aluminum Association's Aluminum Design Manual contain two methods of design side by side:
Load and Resistance Factor Design (LRFD), a Limit States Design implementation, and
Allowable Strength Design (ASD), a method where the nominal strength is divided by a safety factor to determine the allowable strength. This allowable strength is required to equal or exceed the required strength for a set of ASD load combinations. ASD is calibrated to give the same structural reliability and component size as the LRFD method with a live to dead load ratio of 3. Consequently, when structures have a live to dead load ratio that differs from 3, ASD produces designs that are either less reliable or less efficient as compared to designs resulting from the LRFD method.
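The calibration at a live-to-dead load ratio of 3 can be seen numerically. The sketch below assumes the common steel values φ = 0.90 and Ω = 1.67 and the basic 1.2D + 1.6L combination; these are illustrative assumptions, since actual factors depend on the limit state and material.

```python
def required_capacity_lrfd(D, L, phi=0.90):
    """Nominal capacity required by LRFD: R >= (1.2*D + 1.6*L) / phi."""
    return (1.2 * D + 1.6 * L) / phi

def required_capacity_asd(D, L, omega=1.67):
    """Nominal capacity required by ASD: R >= omega * (D + L)."""
    return omega * (D + L)

# The two methods nearly coincide at a live-to-dead ratio of 3,
# and diverge on either side of it
D = 10.0
for ratio in (1.0, 3.0, 5.0):
    L = ratio * D
    print(ratio, required_capacity_lrfd(D, L), required_capacity_asd(D, L))
```

At a ratio of 3 the required capacities agree to within a fraction of a percent; below 3 ASD demands more capacity (less efficient), and above 3 it demands less (less reliable), matching the statement above.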
In contrast, the ANSI/AWWA D100 Welded Carbon Steel Tanks for Water Storage and API 650 Welded Tanks for Oil Storage still use allowable stress design.
== In Europe ==
In Europe, the limit state design is enforced by the Eurocodes.
== See also ==
Allowable stress design
Probabilistic design
Seismic performance
Structural engineering
== References ==
=== Citations ===
=== Sources === | Wikipedia/Limit_state_design |
In probability theory, the first-order second-moment (FOSM) method, also referenced as mean value first-order second-moment (MVFOSM) method, is a probabilistic method to determine the stochastic moments of a function with random input variables. The name is based on the derivation, which uses a first-order Taylor series and the first and second moments of the input variables.
== Approximation ==
Consider the objective function g(x), where the input vector x is a realization of the random vector X with probability density function f_X(x). Because X is randomly distributed, g is also randomly distributed.
Following the FOSM method, the mean value of g is approximated by
{\displaystyle \mu _{g}\approx g(\mu )}
The variance of g is approximated by
{\displaystyle \sigma _{g}^{2}\approx \sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}{\frac {\partial g(\mu )}{\partial x_{j}}}\operatorname {cov} \left(X_{i},X_{j}\right)}
where n is the length/dimension of x and {\textstyle {\frac {\partial g(\mu )}{\partial x_{i}}}} is the partial derivative of g at the mean vector μ with respect to the i-th entry of x. More accurate, second-order second-moment approximations are also available.
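A minimal numerical sketch of the two FOSM formulas above, estimating the gradient by central finite differences (the helper name `fosm` and the step size are arbitrary choices, not part of the method's definition):

```python
def fosm(g, mu, cov, h=1e-6):
    """First-order second-moment (FOSM) estimates of mean and variance of g(X).

    g   : callable taking a list of floats
    mu  : mean vector of the random input X
    cov : covariance matrix of X (list of lists)
    The gradient of g at mu is estimated by central finite differences.
    """
    n = len(mu)
    grad = []
    for i in range(n):
        xp, xm = list(mu), list(mu)
        xp[i] += h
        xm[i] -= h
        grad.append((g(xp) - g(xm)) / (2 * h))
    mean_g = g(mu)  # mu_g ≈ g(mu)
    # sigma_g^2 ≈ sum_ij (dg/dx_i)(dg/dx_j) cov(X_i, X_j)
    var_g = sum(grad[i] * grad[j] * cov[i][j]
                for i in range(n) for j in range(n))
    return mean_g, var_g

# g(x) = x1 * x2 with independent inputs of variance 0.01 and 0.04:
# FOSM variance = 3^2 * 0.01 + 2^2 * 0.04 = 0.25
m, v = fosm(lambda x: x[0] * x[1], [2.0, 3.0], [[0.01, 0.0], [0.0, 0.04]])
print(m, v)
```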
== Derivation ==
The objective function is approximated by a Taylor series at the mean vector μ:
{\displaystyle g(x)=g(\mu )+\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})+{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial ^{2}g(\mu )}{\partial x_{i}\,\partial x_{j}}}(x_{i}-\mu _{i})(x_{j}-\mu _{j})+\cdots }
The mean value of g is given by the integral
{\displaystyle \mu _{g}=E[g(x)]=\int _{-\infty }^{\infty }g(x)f_{X}(x)\,dx}
Inserting the first-order Taylor series yields
{\displaystyle {\begin{aligned}\mu _{g}&\approx \int _{-\infty }^{\infty }\left[g(\mu )+\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})\right]f_{X}(x)\,dx\\&=\int _{-\infty }^{\infty }g(\mu )f_{X}(x)\,dx+\int _{-\infty }^{\infty }\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})f_{X}(x)\,dx\\&=g(\mu )\underbrace {\int _{-\infty }^{\infty }f_{X}(x)\,dx} _{1}+\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}\underbrace {\int _{-\infty }^{\infty }(x_{i}-\mu _{i})f_{X}(x)\,dx} _{0}\\&=g(\mu ).\end{aligned}}}
The variance of g is given by the integral
{\displaystyle \sigma _{g}^{2}=E\left([g(x)-\mu _{g}]^{2}\right)=\int _{-\infty }^{\infty }[g(x)-\mu _{g}]^{2}f_{X}(x)\,dx.}
According to the computational formula for the variance, this can be written as
{\displaystyle \sigma _{g}^{2}=E\left([g(x)-\mu _{g}]^{2}\right)=E\left(g(x)^{2}\right)-\mu _{g}^{2}=\int _{-\infty }^{\infty }g(x)^{2}f_{X}(x)\,dx-\mu _{g}^{2}}
Inserting the Taylor series yields
{\displaystyle {\begin{aligned}\sigma _{g}^{2}&\approx \int _{-\infty }^{\infty }\left[g(\mu )+\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})\right]^{2}f_{X}(x)\,dx-\mu _{g}^{2}\\&=\int _{-\infty }^{\infty }\left\{g(\mu )^{2}+2g_{\mu }\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})+\left[\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})\right]^{2}\right\}f_{X}(x)\,dx-\mu _{g}^{2}\\&=\int _{-\infty }^{\infty }g(\mu )^{2}f_{X}(x)\,dx+\int _{-\infty }^{\infty }2\,g_{\mu }\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})f_{X}(x)\,dx\\&\quad {}+\int _{-\infty }^{\infty }\left[\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}(x_{i}-\mu _{i})\right]^{2}f_{X}(x)\,dx-\mu _{g}^{2}\\&=g_{\mu }^{2}\underbrace {\int _{-\infty }^{\infty }f_{X}(x)\,dx} _{1}+2g_{\mu }\sum _{i=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}\underbrace {\int _{-\infty }^{\infty }(x_{i}-\mu _{i})f_{X}(x)\,dx} _{0}\\&\quad {}+\int _{-\infty }^{\infty }\left[\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}{\frac {\partial g(\mu )}{\partial x_{j}}}(x_{i}-\mu _{i})(x_{j}-\mu _{j})\right]f_{X}(x)\,dx-\mu _{g}^{2}\\&=\underbrace {g(\mu )^{2}} _{\mu _{g}^{2}}+\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}{\frac {\partial g(\mu )}{\partial x_{j}}}\underbrace {\int _{-\infty }^{\infty }(x_{i}-\mu _{i})(x_{j}-\mu _{j})f_{X}(x)\,dx} _{\operatorname {cov} \left(X_{i},X_{j}\right)}-\mu _{g}^{2}\\&=\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {\partial g(\mu )}{\partial x_{i}}}{\frac {\partial g(\mu )}{\partial x_{j}}}\operatorname {cov} \left(X_{i},X_{j}\right).\end{aligned}}}
== Higher-order approaches ==
The following abbreviations are introduced:
{\displaystyle {\begin{aligned}g_{\mu }&=g(\mu ),&g_{,i}&={\frac {\partial g(\mu )}{\partial x_{i}}},&g_{,ij}&={\frac {\partial ^{2}g(\mu )}{\partial x_{i}\,\partial x_{j}}},&\mu _{i,j}&=E\left[(x_{i}-\mu _{i})^{j}\right]\end{aligned}}}
In the following, the entries of the random vector
X
{\displaystyle X}
are assumed to be independent.
Considering also the second-order terms of the Taylor expansion, the approximation of the mean value is given by
{\displaystyle \mu _{g}\approx g_{\mu }+{\frac {1}{2}}\sum _{i=1}^{n}g_{,ii}\;\mu _{i,2}}
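As an illustrative check (not from the cited literature), the second-order mean approximation can be compared against a Monte Carlo estimate. The function and input distributions below are hypothetical, and the pure second derivatives g_,ii are estimated by central differences:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical objective function with two independent normal inputs.
g = lambda x: x[0] ** 2 + np.sin(x[1])
mu = np.array([1.0, 0.5])
sigma = np.array([0.1, 0.2])

# Pure second derivatives g_,ii at mu, via central differences.
h = 1e-4
g_ii = np.empty(2)
for i in range(2):
    e = np.zeros(2)
    e[i] = h
    g_ii[i] = (g(mu + e) - 2 * g(mu) + g(mu - e)) / h ** 2

# Second-order mean: mu_g ≈ g(mu) + 1/2 * sum_i g_,ii * mu_{i,2},
# where mu_{i,2} = sigma_i^2 for normal inputs.
mu_g = g(mu) + 0.5 * np.sum(g_ii * sigma ** 2)

# Monte Carlo reference value.
samples = mu + sigma * rng.standard_normal((200_000, 2))
mu_g_mc = g(samples.T).mean()
```

For this function the two estimates agree closely, while the first-order estimate g(μ) alone would miss the curvature contribution entirely.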
The incomplete second-order approximation (ISOA) of the variance is given by
{\displaystyle {\begin{aligned}\sigma _{g}^{2}\approx {}g_{\mu }^{2}&+\sum _{i=1}^{n}g_{,i}^{2}\,\mu _{i,2}+{\frac {1}{4}}\sum _{i=1}^{n}g_{,ii}^{2}\,\mu _{i,4}+g_{\mu }\sum _{i=1}^{n}g_{,ii}\,\mu _{i,2}+\sum _{i=1}^{n}g_{,i}\,g_{,ii}\,\mu _{i,3}\\&+{\frac {1}{2}}\sum _{i=1}^{n}\sum _{j=i+1}^{n}g_{,ii}\,g_{,jj}\,\mu _{i,2}\,\mu _{j,2}+\sum _{i=1}^{n}\sum _{j=i+1}^{n}g_{,ij}^{2}\,\mu _{i,2}\,\mu _{j,2}-\mu _{g}^{2}\end{aligned}}}
The skewness of {\displaystyle g} can be determined from the third central moment {\displaystyle \mu _{g,3}}. When considering only linear terms of the Taylor series, but higher-order moments, the third central moment is approximated by
{\displaystyle \mu _{g,3}\approx \sum _{i=1}^{n}g_{,i}^{3}\;\mu _{i,3}}
For the second-order approximations of the third central moment, as well as the derivation of all higher-order approximations, see Appendix D of Ref. Taking into account the quadratic terms of the Taylor series and the third moments of the input variables is referred to as the second-order third-moment method. However, the full second-order approach to the variance (given above) also includes fourth-order moments of the input parameters; the full second-order approach to the skewness requires moments up to sixth order, and that of the kurtosis moments up to eighth order.
== Practical application ==
There are several examples in the literature where the FOSM method is employed to estimate the stochastic distribution of the buckling load of axially compressed structures (see e.g. Ref.). For structures that are very sensitive to deviations from the ideal structure (like cylindrical shells), it has been proposed to use the FOSM method as a design approach. Often the applicability is checked by comparison with a Monte Carlo simulation. Two comprehensive application examples of the full second-order method, specifically oriented towards fatigue crack growth in a metal railway axle, are discussed and checked by comparison with a Monte Carlo simulation in Ref.
In engineering practice, the objective function is often not given as an analytic expression but, for instance, as the result of a finite-element simulation. The derivatives of the objective function must then be estimated by the central differences method, so the number of evaluations of the objective function equals {\displaystyle 2n+1}. Depending on the number of random variables, this can still mean a significantly smaller number of evaluations than a Monte Carlo simulation. However, when using the FOSM method as a design procedure, a lower bound must be estimated, which is not actually given by the FOSM approach. Therefore, a type of distribution needs to be assumed for the distribution of the objective function, taking into account the approximated mean value and standard deviation.
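A minimal sketch of this workflow, treating the objective as a black box standing in for a simulation run; the function and numbers are illustrative. The gradient is estimated by central differences with exactly 2n + 1 evaluations, yielding the first-order mean and standard deviation:

```python
import numpy as np

def fosm(g, mu, sigma, h=1e-5):
    """First-order second-moment estimate of the mean and standard
    deviation of g(X) for independent inputs with means mu and standard
    deviations sigma. Uses exactly 2n + 1 evaluations of g."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    n = mu.size
    g0 = g(mu)                              # 1 evaluation
    grad = np.empty(n)
    for i in range(n):                      # 2n further evaluations
        e = np.zeros(n)
        e[i] = h
        grad[i] = (g(mu + e) - g(mu - e)) / (2 * h)
    return g0, float(np.sqrt(np.sum((grad * sigma) ** 2)))

# A black-box objective standing in for a finite-element response.
mean, std = fosm(lambda x: x[0] * x[1] + x[2] ** 2,
                 mu=[2.0, 3.0, 1.0], sigma=[0.1, 0.2, 0.05])
```

A distribution type (e.g. normal or lognormal) would then be assumed with this mean and standard deviation in order to read off a lower-bound design value.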
== References == | Wikipedia/First-order_second-moment_method |
Machine Design (ISSN 0024-9114) is an American trade magazine and website serving the OEM engineering market. Its print issues reach qualified design engineers and engineering managers twice a month.
Key technologies covered include computer-aided design and manufacturing (CAD/CAM), electrical and electronics, fastening and joining, fluid power, manufacturing, engineered materials, mechanical engineering, and motion control.
Today, Machine Design is owned by Informa, and has editorial offices based in New York, New York and Cleveland, Ohio, USA.
== History ==
The inaugural issue of Machine Design coincided almost exactly with the 1929 stock-market crash and the beginning of the Great Depression. Although the nation was in the economic doldrums, there was significant design development taking place in almost all industrial segments including automotive, aircraft, farm equipment, home appliances, and industrial machinery.
The onset of World War II brought almost frenetic activity to design engineering at large. After the war, civilian industries thrived. But in the years following the war and into the 1950s, the role of the design engineer languished, stigmatized by the war effort as the creator of new means of destruction.
Engineering colleges began to feel slighted because doctors, lawyers, and business executives were viewed as having more prestige and professional status than their engineering graduates. Intellectual elites viewed engineering colleges as trade schools, and graduate engineers were said to be nothing more than mechanics or glorified shop hands. In response, engineering schools began to drop courses that lacked academic rigor or had the slightest blue-collar aura.
The launch of Sputnik in 1957 again changed the perception of design engineering. The perceived loss of world leadership in air and space technology by the people of the United States set the stage for a considerable renewal of prestige to the engineering discipline. After more than a decade into the Cold War, the public realized science and engineering could play a key role in keeping the Communists at bay. The government unloaded almost limitless supplies of money on high-tech defense industries, and engineering became the career of choice. High salaries and generous perks were lavished on engineers and scientists.
Unfortunately, Sputnik also accelerated the movement to delete courses on manufacturing and shop practice from the curricula of top schools. The idea was to portray engineers as being more scientist than a mechanic. The rocket scientist working on the space program became the image to which most engineers aspired.
This attitude had a lot to do with framing the editorial policies of Machine Design through the 1960s. The policies were in tune with what was happening in the largest and most-sophisticated corporations, especially the aircraft and automotive industries, where design engineering and manufacturing engineering were increasingly treated as separate entities having no common interest. Reflecting this, articles selected for Machine Design were carefully tailored not to have too much of a manufacturing orientation.
Starting in the late 1960s, another shift in American perception was brought about by the growing awareness of overseas manufacturing facilities returning a lower cost product with higher quality. While lower labor rates played a key role in the lower costs, they could not justify the higher reliability of offshore products over those domestically produced. It was soon discovered that those shops with higher quality production realized design and manufacturing engineering were closely intertwined. Machine Design articles started to reflect this trend. For example, it's believed to be the first industrial trade magazine to run a comprehensive article explaining numerical control machining and how it relates to design engineering.
Machine Design's coverage of manufacturing positioned it well when concurrent engineering became the trendy idea in industry. Major corporations suddenly discovered that design and manufacturing were interrelated, and it became vogue to tear down the walls between design and manufacturing engineers.
In the 1970s, finite-element analysis broke on the industrial scene. Computer-aided design was evolving, and by the 1980s, it was also having a profound impact on design procedures. Computer-aided manufacturing evolved separately, but by 1990 CAD and CAM had merged. In the field of electrical and electronic technology, relay controls were giving way to digital electronics and the microprocessor that led to combining a number of design disciplines into the technologies of mechatronics and motion control.
In the 2000s, the Internet of Things took hold of the industry and infiltrated every level of engineering, from design to manufacturing, all the way to predictive maintenance and augmented reality. Machine Design has provided in-depth coverage of these emerging technologies to keep engineers up to date on what lies in store for the engineering industry.
For over 80 years, Machine Design has led the industrial community in spotting trends and fundamental changes in manufacturing operations. Providing an ongoing series of technological overviews interspersed with in-depth tutorials, it has kept readers abreast of technologies that were transforming product design. It does this with an editorial staff of degreed engineers with industrial experience, charged with creating lucid and interesting articles supported by the intelligent use of graphics.
== References ==
BPA Worldwide
== External links ==
Machine Design website | Wikipedia/Machine_design |
The engineering design process, also known as the engineering method, is a common series of steps that engineers use in creating functional products and processes. The process is highly iterative – parts of the process often need to be repeated many times before another can be entered – though the part(s) that get iterated and the number of such cycles in any given project may vary.
It is a decision making process (often iterative) in which the engineering sciences, basic sciences and mathematics are applied to convert resources optimally to meet a stated objective. Among the fundamental elements of the design process are the establishment of objectives and criteria, synthesis, analysis, construction, testing and evaluation.
== Common stages ==
There are various framings and articulations of the engineering design process. The terminology employed may overlap to varying degrees, which affects which steps are stated explicitly or deemed "high level" versus subordinate in any given model. This applies equally to the example sequences given here.
One example framing of the engineering design process delineates the following stages: research, conceptualization, feasibility assessment, establishing design requirements, preliminary design, detailed design, production planning and tool design, and production. Others, noting that "different authors (in both research literature and in textbooks) define different phases of the design process with varying activities occurring within them," have suggested more simplified/generalized models – such as problem definition, conceptual design, preliminary design, detailed design, and design communication. Another summary of the process, from European engineering design literature, includes clarification of the task, conceptual design, embodiment design, detail design. (NOTE: In these examples, other key aspects – such as concept evaluation and prototyping – are subsets and/or extensions of one or more of the listed steps.)
=== Research ===
Various stages of the design process (and even earlier) can involve a significant amount of time spent on locating information and research. Consideration should be given to the existing applicable literature, problems and successes associated with existing solutions, costs, and marketplace needs.
The sources of information should be relevant. Reverse engineering can be an effective technique if other solutions are available on the market. Other sources of information include the Internet, local libraries, available government documents, professional organizations, trade journals, vendor catalogs, and available individual experts.
=== Design requirements ===
Establishing design requirements and conducting requirement analysis, sometimes termed problem definition (or deemed a related activity), is one of the most important elements in the design process in certain industries, and this task is often performed at the same time as a feasibility analysis. The design requirements control the design of the product or process being developed, throughout the engineering design process. These include basic things like the functions, attributes, and specifications – determined after assessing user needs. Some design requirements include hardware and software parameters, maintainability, availability, and testability.
=== Feasibility ===
In some cases, a feasibility study is carried out after which schedules, resource plans and estimates for the next phase are developed. The feasibility study is an evaluation and analysis of the potential of a proposed project to support the process of decision making. It outlines and analyses alternatives or methods of achieving the desired outcome. The feasibility study helps to narrow the scope of the project to identify the best scenario.
A feasibility report is generated, following which a post-feasibility review is performed.
The purpose of a feasibility assessment is to determine whether the engineer's project can proceed into the design phase. This is based on two criteria: the project needs to be based on an achievable idea, and it needs to be within cost constraints. It is important to have engineers with experience and good judgment to be involved in this portion of the feasibility study.
=== Concept generation ===
A concept study (conceptualization, conceptual design) is often a phase of project planning that includes producing ideas and taking into account the pros and cons of implementing those ideas. This stage of a project is done to minimize the likelihood of error, manage costs, assess risks, and evaluate the potential success of the intended project. In any event, once an engineering issue or problem is defined, potential solutions must be identified. These solutions can be found by using ideation, the mental process by which ideas are generated. In fact, this step is often termed Ideation or "Concept Generation." The following are widely used techniques:
trigger word – a word or phrase associated with the issue at hand is stated, and subsequent words and phrases are evoked.
morphological analysis – independent design characteristics are listed in a chart, and different engineering solutions are proposed for each characteristic. Normally, a preliminary sketch and short report accompany the morphological chart.
synectics – the engineer imagines themselves as the item and asks, "What would I do if I were the system?" This unconventional method of thinking may find a solution to the problem at hand. A vital aspect of the conceptualization step is synthesis: the process of taking the elements of the concept and arranging them in the proper way. This creative process is present in every design.
brainstorming – this popular method involves thinking of different ideas, typically as part of a small group, and adopting these ideas in some form as a solution to the problem
Various generated ideas must then undergo a concept evaluation step, which utilizes various tools to compare and contrast the relative strengths and weakness of possible alternatives.
=== Preliminary design ===
The preliminary design, or high-level design (also called FEED or basic design), often bridges the gap between design conception and detailed design, particularly in cases where the level of conceptualization achieved during ideation is not sufficient for full evaluation. In this task, the overall system configuration is defined, and schematics, diagrams, and layouts of the project may provide early project configuration. (This notably varies a lot by field, industry, and product.) During detailed design and optimization, the parameters of the part being created will change, but the preliminary design focuses on creating the general framework to build the project on.
S. Blanchard and J. Fabrycky describe it as:
“The ‘whats’ initiating conceptual design produce ‘hows’ from the conceptual design evaluation effort applied to feasible conceptual design concepts. Next, the ‘hows’ are taken into preliminary design through the means of allocated requirements. There they become ‘whats’ and drive preliminary design to address ‘hows’ at this lower level.”
=== Detailed design ===
Following FEED is the Detailed Design (Detailed Engineering) phase, which may consist of procurement of materials as well.
This phase further elaborates each aspect of the project/product by complete description through solid modeling, drawings as well as specifications.
Computer-aided design (CAD) programs have made the detailed design phase more efficient. For example, a CAD program can provide optimization to reduce volume without hindering a part's quality. It can also calculate stress and displacement using the finite element method to determine stresses throughout the part.
=== Production planning ===
The production planning and tool design consists of planning how to mass-produce the product and which tools should be used in the manufacturing process. Tasks to complete in this step include selecting materials, selection of the production processes, determination of the sequence of operations, and selection of tools such as jigs, fixtures, metal cutting and metal or plastics forming tools. This task also involves additional prototype testing iterations to ensure the mass-produced version meets qualification testing standards.
== Comparison with the scientific method ==
Engineering is formulating a problem that can be solved through design. Science is formulating a question that can be solved through investigation.
The engineering design process bears some similarity to the scientific method. Both processes begin with existing knowledge, and gradually become more specific in the search for knowledge (in the case of "pure" or basic science) or a solution (in the case of "applied" science, such as engineering). The key difference between the engineering process and the scientific process is that the engineering process focuses on design, creativity and innovation while the scientific process emphasizes explanation, prediction and discovery (observation).
== Degree programs ==
Methods are being taught and developed in Universities including:
Engineering Design, University of Bristol Faculty of Engineering
Dyson School of Design Engineering, Imperial College London
TU Delft, Industrial Design Engineering.
University of Waterloo, Systems Design Engineering
== See also ==
Applied science
Computer-automated design
Design engineer
Engineering analysis
Engineering optimization
Industrial engineering
New product development
Systems engineering process
Surrogate model
Traditional engineering
== References ==
== External links ==
"Criteria for accrediting engineering programs, Engineering accrediting commission" (PDF). ABET.
Ullman, David G. (2009) The Mechanical Design Process, Mc Graw Hill, 4th edition, ISBN 978-0072975741
Eggert, Rudolph J. (2010) Engineering Design, Second Edition, High Peak Press, Meridian, Idaho, ISBN 978-0131433588 | Wikipedia/Engineering_design |
Quality function deployment (QFD) is a method developed in Japan beginning in 1966 to help transform the voice of the customer into engineering characteristics for a product. Yoji Akao, the original developer, described QFD as a "method to transform qualitative user demands into quantitative parameters, to deploy the functions forming quality, and to deploy methods for achieving the design quality into subsystems and component parts, and ultimately to specific elements of the manufacturing process." The author combined his work in quality assurance and quality control points with function deployment used in value engineering.
== House of quality ==
The house of quality, a part of QFD, is the basic design tool of quality function deployment. It identifies and classifies customer desires (WHATs), identifies the importance of those desires, identifies engineering characteristics which may be relevant to those desires (HOWs), correlates the two, allows for verification of those correlations, and then assigns objectives and priorities for the system requirements. This process can be applied at any system composition level (e.g. system, subsystem, or component) in the design of a product, and can allow for assessment of different abstractions of a system. The process proceeds through a number of hierarchical levels of WHATs and HOWs, analysing each stage of product development (service enhancement) and production (service delivery).
The house of quality appeared in 1972 in the design of an oil tanker by Mitsubishi Heavy Industries.
The output of the house of quality is generally a matrix with customer desires on one dimension and correlated nonfunctional requirements on the other dimension. The cells of matrix table are filled with the weights assigned to the stakeholder characteristics where those characteristics are affected by the system parameters across the top of the matrix. At the bottom of the matrix, the column is summed, which allows for the system characteristics to be weighted according to the stakeholder characteristics. System parameters not correlated to stakeholder characteristics may be unnecessary to the system design and are identified by empty matrix columns, while stakeholder characteristics (identified by empty rows) not correlated to system parameters indicate "characteristics not addressed by the design parameters". System parameters and stakeholder characteristics with weak correlations potentially indicate missing information, while matrices with "too many correlations" indicate that the stakeholder needs may need to be refined.
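In matrix terms, the weighting step described above can be sketched as follows. The importance weights and the 9/3/1 correlation strengths are illustrative values only:

```python
import numpy as np

# Rows: customer desires (WHATs) with importance weights.
# Columns: engineering characteristics (HOWs).
# Cell values: correlation strength (9 strong, 3 medium, 1 weak, 0 none).
importance = np.array([5, 3, 4])
relations = np.array([
    [9, 3, 0],
    [0, 9, 1],
    [3, 0, 9],
])

# Weighted column sums rank the engineering characteristics.
priorities = importance @ relations          # -> [57, 42, 39]

# Empty columns flag possibly unnecessary design parameters;
# empty rows flag desires not addressed by the design.
empty_cols = np.flatnonzero(relations.sum(axis=0) == 0)
empty_rows = np.flatnonzero(relations.sum(axis=1) == 0)
```

Here the first engineering characteristic receives the highest weighted sum, and every row and column is correlated to at least one counterpart, so no desire is left unaddressed.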
== Fuzziness ==
The concepts of fuzzy logic have been applied to QFD ("Fuzzy QFD" or "FQFD"). A review of 59 papers in 2013 by Abdolshah and Moradi found a number of conclusions: most FQFD "studies were focused on quantitative methods" to construct a house of quality matrix based on customer requirements, where the most-employed techniques were based on multiple-criteria decision analysis methods. They noted that there are factors other than the house of quality relevant to product development, and called metaheuristic methods "a promising approach for solving complicated problems of FQFD."
== Derived techniques and tools ==
The process of quality function deployment (QFD) is described in ISO 16355-1:2021.
Pugh concept selection can be used in coordination with QFD to select a promising product or service configuration from among listed alternatives.
Modular function deployment uses QFD to establish customer requirements and to identify important design requirements with a special emphasis on modularity. There are three main differences to QFD as applied in modular function deployment compared to house of quality: The benchmarking data is mostly gone; the checkboxes and crosses have been replaced with circles, and the triangular "roof" is missing.
== Notes ==
== References ==
Larson, Wiley J.; Kirkpatrick, Doug; Sellers, Jerry Jon; Thomas, L. Dale; Verma, Dinesh, eds. (2009). Applied Space Systems Engineering. Space Technology. United States of America: McGraw-Hill. ISBN 978-0-07-340886-6.
== Further reading ==
Hauser, John R. (April 15, 1993). "How Puritan-Bennet used the house of quality". Sloan Management Review (Spring 1993): 61–70. Archived from the original on September 10, 2015.
Tapke, Jennifer; Muller, Alyson; Johnson, Greg; Siec, Josh. "House of Quality: Steps in Understanding the House of Quality" (PDF). IE 361. Iowa State University. Archived (PDF) from the original on November 5, 2003.
"General principles and perspectives of Quality Function Deployment (QFD)". ISO.org. International Organization for Standardization. December 2015. | Wikipedia/Quality_function_deployment |
This page is concerned with the stochastic modelling as applied to the insurance industry. For other stochastic modelling applications, please see Monte Carlo method and Stochastic asset models. For mathematical definition, please see Stochastic process.
"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s).
Its application initially started in physics. It is now being applied in engineering, life sciences, social sciences, and finance. See also Economic capital.
== Valuation ==
Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known entities. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on.
So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency.
== Deterministic approach ==
The simplest way of doing this, and indeed the primary method used, is to look at best estimates.
The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a point estimate – the best single estimate of the company's current solvency position – or multiple point estimates, depending on the problem definition. Selection and identification of parameter values are frequently a challenge to less experienced analysts.
The downside of this approach is it does not fully cover the fact that there is a whole range of possible outcomes and some are more probable and some are less.
== Stochastic modelling ==
A stochastic model would be to set up a projection model which looks at a single policy, an entire portfolio or an entire company. But rather than setting investment returns according to their most likely estimate, for example, the model uses random variations to look at what investment conditions might be like.
Based on a set of random variables, the experience of the policy/portfolio/company is projected, and the outcome is noted. Then this is done again with a new set of random variables. In fact, this process is repeated thousands of times.
At the end, a distribution of outcomes is available which shows not only the most likely estimate but also what ranges are reasonable. The most likely estimate is given by the center of mass of the distribution curve (formally, the probability density function), which is typically also the peak (mode) of the curve, but may differ, e.g. for asymmetric distributions.
This is useful when a policy or fund provides a guarantee, e.g. a minimum investment return of 5% per annum. A deterministic simulation, with varying scenarios for future investment return, does not provide a good way of estimating the cost of providing this guarantee. This is because it does not allow for the volatility of investment returns in each future time period or the chance that an extreme event in a particular time period leads to an investment return less than the guarantee. Stochastic modelling builds volatility and variability (randomness) into the simulation and therefore provides a better representation of real life from more angles.
== Numerical evaluations of quantities ==
Stochastic models help to assess the interactions between variables, and are useful tools to numerically evaluate quantities, as they are usually implemented using Monte Carlo simulation techniques (see Monte Carlo method). While there is an advantage here, in estimating quantities that would otherwise be difficult to obtain using analytical methods, a disadvantage is that such methods are limited by computing resources as well as simulation error. Below are some examples:
=== Means ===
Using statistical notation, it is a well-known result that the mean of a function, f, of a random variable X is not necessarily the function of the mean of X.
For example, in application, applying the best estimate (defined as the mean) of investment returns to discount a set of cash flows will not necessarily give the same result as assessing the best estimate to the discounted cash flows.
A stochastic model would be able to assess this latter quantity with simulations.
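A small simulation makes this concrete. Discounting a cash flow at the mean investment return differs from the mean of the simulated discounted values, since the discount factor is convex in the rate (Jensen's inequality); the rate distribution below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

cash_flow, years = 100.0, 10
rates = rng.normal(0.05, 0.02, 100_000)   # simulated annual returns

# f(E[X]): discount at the best-estimate (mean) rate.
pv_at_mean_rate = cash_flow / (1 + rates.mean()) ** years

# E[f(X)]: mean of the simulated discounted values.
mean_of_pvs = np.mean(cash_flow / (1 + rates) ** years)

# mean_of_pvs exceeds pv_at_mean_rate: the stochastic
# assessment gives a systematically different answer.
```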
=== Percentiles ===
This idea is seen again when one considers percentiles (see percentile). When assessing risks at specific percentiles, the factors that contribute to these levels are rarely at these percentiles themselves. Stochastic models can be simulated to assess the percentiles of the aggregated distributions.
=== Truncations and censors ===
Truncating and censoring of data can also be estimated using stochastic models. For instance, applying a non-proportional reinsurance layer to the best estimate losses will not necessarily give us the best estimate of the losses after the reinsurance layer. In a simulated stochastic model, the simulated losses can be made to "pass through" the layer and the resulting losses assessed appropriately.
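A sketch of this, with an illustrative lognormal loss distribution and an arbitrary layer: applying the layer to the best-estimate loss gives a different answer from passing each simulated loss through the layer and then averaging.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated gross losses and a non-proportional layer: the reinsurer
# pays the part of each loss between attachment and attachment + limit.
gross = rng.lognormal(mean=10.0, sigma=1.5, size=100_000)
attachment, limit = 100_000.0, 400_000.0

ceded = np.clip(gross - attachment, 0.0, limit)   # per-loss recovery
net = gross - ceded

# Layer applied to the best-estimate loss (wrong in general):
m = gross.mean()
naive_net = m - min(max(m - attachment, 0.0), limit)

# Best estimate of the net losses, from the simulation:
simulated_net = net.mean()
```

With these numbers the mean gross loss lies below the attachment point, so the naive calculation cedes nothing, while the simulation correctly captures recoveries on the large individual losses.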
== The asset model ==
Although the text above referred to "random variations", the stochastic model does not just use any arbitrary set of values. The asset model is based on detailed studies of how markets behave, looking at averages, variations, correlations, and more.
The models and underlying parameters are chosen so that they fit historical economic data, and are expected to produce meaningful future projections.
There are many such models, including the Wilkie Model, the Thompson Model and the Falcon Model.
== The claims model ==
The claims arising from policies or portfolios that the company has written can also be modelled using stochastic methods. This is especially important in the general insurance sector, where the claim severities can have high uncertainties.
=== Frequency-Severity models ===
Depending on the portfolios under investigation, a model can simulate all or some of the following factors stochastically:
Number of claims
Claim severities
Timing of claims
Claims inflations can be applied, based on the inflation simulations that are consistent with the outputs of the asset model, as are dependencies between the losses of different portfolios.
The relative uniqueness of the policy portfolios written by a company in the general insurance sector means that claims models are typically tailor-made.
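The factors listed above can be simulated together in a minimal frequency-severity sketch; the Poisson claim counts and lognormal severities here are illustrative choices, since real claims models are tailor-made:

```python
import numpy as np

rng = np.random.default_rng(3)

n_sims, expected_claims = 10_000, 20.0
counts = rng.poisson(expected_claims, n_sims)       # number of claims

# Aggregate loss for each simulated year: sum of simulated severities.
aggregate = np.array([
    rng.lognormal(mean=8.0, sigma=1.0, size=n).sum() for n in counts
])

# The simulated distribution gives more than a point estimate:
best_estimate = aggregate.mean()
p99 = np.quantile(aggregate, 0.99)                  # tail measure
```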
=== Stochastic reserving models ===
Estimating future claims liabilities might also involve estimating the uncertainty around the estimates of claim reserves.
See J Li's article "Comparison of Stochastic Reserving Models" (published in the Australian Actuarial Journal, volume 12 issue 4) for a recent article on this topic.
== References ==
Guidance on stochastic modelling for life insurance reserving (pdf)
J Li's article on stochastic reserving from the Australian Actuarial Journal, 2006 (pdf)
Stochastic Modelling For Dummies, Actuarial Society of South Africa | Wikipedia/Stochastic_modelling_(insurance) |
The nested sampling algorithm is a computational approach to the Bayesian statistics problems of comparing models and generating samples from posterior distributions. It was developed in 2004 by physicist John Skilling.
== Background ==
Bayes' theorem can be applied to a pair of competing models {\displaystyle M_{1}} and {\displaystyle M_{2}} for data {\displaystyle D}, one of which may be true (though which one is unknown) but which both cannot be true simultaneously. The posterior probability for {\displaystyle M_{1}} may be calculated as:
{\displaystyle {\begin{aligned}P(M_{1}\mid D)&={\frac {P(D\mid M_{1})P(M_{1})}{P(D)}}\\&={\frac {P(D\mid M_{1})P(M_{1})}{P(D\mid M_{1})P(M_{1})+P(D\mid M_{2})P(M_{2})}}\\&={\frac {1}{1+{\frac {P(D\mid M_{2})}{P(D\mid M_{1})}}{\frac {P(M_{2})}{P(M_{1})}}}}\end{aligned}}}
The prior probabilities {\displaystyle P(M_{1})} and {\displaystyle P(M_{2})} are already known, as they are chosen by the researcher ahead of time. However, the remaining Bayes factor
{\displaystyle P(D\mid M_{2})/P(D\mid M_{1})} is not so easy to evaluate, since in general it requires marginalizing nuisance parameters. Generally,
{\displaystyle M_{1}} has a set of parameters that can be grouped together and called {\displaystyle \theta }, and {\displaystyle M_{2}} has its own vector of parameters that may be of different dimensionality, but is still termed {\displaystyle \theta }. The marginalization for {\displaystyle M_{1}} is
{\displaystyle P(D\mid M_{1})=\int d\theta \,P(D\mid \theta ,M_{1})P(\theta \mid M_{1})}
and likewise for {\displaystyle M_{2}}. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution {\displaystyle P(\theta \mid D,M_{1})}. It is an alternative to methods from the Bayesian literature such as bridge sampling and defensive importance sampling.
Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density {\displaystyle Z=P(D\mid M)} where {\displaystyle M} is {\displaystyle M_{1}} or {\displaystyle M_{2}}:
Start with {\displaystyle N} points {\displaystyle \theta _{1},\ldots ,\theta _{N}} sampled from the prior.
for {\displaystyle i=1} to {\displaystyle j} do   % The number of iterations j is chosen by guesswork.
  {\displaystyle L_{i}:=\min(} current likelihood values of the points {\displaystyle )};
  {\displaystyle X_{i}:=\exp(-i/N);}
  {\displaystyle w_{i}:=X_{i-1}-X_{i}}
  {\displaystyle Z:=Z+L_{i}\cdot w_{i};}
  Save the point with least likelihood as a sample point with weight {\displaystyle w_{i}}.
  Update the point with least likelihood with some Markov chain Monte Carlo steps according to the prior, accepting only steps that keep the likelihood above {\displaystyle L_{i}}.
end
return {\displaystyle Z};
At each iteration, {\displaystyle X_{i}} is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than {\displaystyle L_{i}}. The weight factor {\displaystyle w_{i}} is an estimate of the amount of prior mass that lies between two nested hypersurfaces {\displaystyle \{\theta \mid P(D\mid \theta ,M)=P(D\mid \theta _{i-1},M)\}} and {\displaystyle \{\theta \mid P(D\mid \theta ,M)=P(D\mid \theta _{i},M)\}}. The update step {\displaystyle Z:=Z+L_{i}w_{i}} computes the sum over {\displaystyle i} of {\displaystyle L_{i}w_{i}} to numerically approximate the integral
{\displaystyle {\begin{aligned}P(D\mid M)&=\int P(D\mid \theta ,M)P(\theta \mid M)\,d\theta \\&=\int P(D\mid \theta ,M)\,dP(\theta \mid M)\end{aligned}}}
In the limit {\displaystyle j\to \infty }, this estimator has a positive bias of order {\displaystyle 1/N} which can be removed by using {\displaystyle (1-1/N)} instead of the {\displaystyle \exp(-1/N)} shrinkage factor in the above algorithm.
The idea is to subdivide the range of {\displaystyle f(\theta )=P(D\mid \theta ,M)} and estimate, for each interval {\displaystyle [f(\theta _{i-1}),f(\theta _{i})]}, how likely it is a priori that a randomly chosen {\displaystyle \theta } would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.
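The pseudocode above translates almost line-for-line into code. The following is a minimal illustrative sketch (the function names and the rejection-sampling `above` helper are assumptions for the example, not from Skilling's implementations), checked on a toy problem whose evidence is known exactly:

```python
import math
import random

def nested_sampling(log_likelihood, sample_prior, sample_above,
                    n_live=100, n_iter=600):
    """Basic nested sampling loop, following the pseudocode above.

    log_likelihood(theta) -> log L(theta)
    sample_prior()        -> one draw from the prior
    sample_above(logl)    -> a prior draw constrained to log L > logl
    Returns the evidence estimate Z and (point, weight) posterior samples.
    A sketch only: it omits the final contribution of the remaining live
    points and the (1 - 1/N) bias correction mentioned in the text."""
    live = [sample_prior() for _ in range(n_live)]
    logl = [log_likelihood(t) for t in live]
    z, x_prev, samples = 0.0, 1.0, []
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=logl.__getitem__)
        l_i = math.exp(logl[worst])
        x_i = math.exp(-i / n_live)           # estimated remaining prior mass
        w_i = x_prev - x_i                    # prior mass between shells
        z += l_i * w_i
        samples.append((live[worst], l_i * w_i))
        live[worst] = sample_above(logl[worst])  # replace the worst point
        logl[worst] = log_likelihood(live[worst])
        x_prev = x_i
    return z, samples

# Toy check: uniform prior on [0, 1] with L(theta) = 2*theta, so Z = 1 exactly.
random.seed(1)
loglike = lambda t: math.log(2.0 * t) if t > 0 else -math.inf
prior = random.random

def above(threshold):                         # rejection sampling; fine for a toy
    while True:
        t = prior()
        if loglike(t) > threshold:
            return t

z, samples = nested_sampling(loglike, prior, above)   # z should be near 1
```

In real applications the constrained draw is the expensive step, which is exactly what the MCMC and ellipsoidal schemes discussed in the next section are designed to speed up.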
== Choice of MCMC algorithm ==
The original procedure outlined by Skilling (given above in pseudocode) does not specify what specific Markov chain Monte Carlo algorithm should be used to choose new points with better likelihood.
Skilling's own code examples (such as one in Sivia and Skilling (2006), available on Skilling's website) choose a random existing point and propose a nearby point at a random distance from it; if the new point's likelihood is better, it is accepted, otherwise it is rejected and the process is repeated. Mukherjee et al. (2006) found higher acceptance rates by selecting points randomly within an ellipsoid drawn around the existing points; this idea was refined into the MultiNest algorithm, which handles multimodal posteriors better by grouping points into likelihood contours and drawing an ellipsoid for each contour.
== Implementations ==
Example implementations demonstrating the nested sampling algorithm are publicly available for download, written in several programming languages.
Simple examples in C, R, or Python are on John Skilling's website.
A Haskell port of the above simple codes is on Hackage.
An example in R originally designed for fitting spectra is described on Bojan Nikolic's website and is available on GitHub.
A NestedSampler is part of the Python toolbox BayesicFitting for generic model fitting and evidence calculation. It is available on GitHub.
An implementation in C++, named DIAMONDS, is on GitHub.
A highly modular Python parallel example for statistical physics and condensed matter physics applications is on GitHub.
pymatnest, a package designed for exploring the energy landscape of different materials, calculating thermodynamic variables at arbitrary temperatures and locating phase transitions, is on GitHub.
The MultiNest software package is capable of performing nested sampling on multi-modal posterior distributions. It has interfaces for C++, Fortran and Python inputs, and is available on GitHub.
PolyChord is another nested sampling software package available on GitHub. PolyChord's computational efficiency scales better with an increase in the number of parameters than MultiNest, meaning PolyChord can be more efficient for high dimensional problems. It has interfaces to likelihood functions written in Python, Fortran, C, or C++.
NestedSamplers.jl, a Julia package for implementing single- and multi-ellipsoidal nested sampling algorithms is on GitHub.
Korali is a high-performance framework for uncertainty quantification, optimization, and deep reinforcement learning, which also implements nested sampling.
== Applications ==
Since nested sampling was proposed in 2004, it has been used in many aspects of the field of astronomy. One paper suggested using nested sampling for cosmological model selection and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility." A refinement of the algorithm to handle multimodal posteriors has been suggested as a means to detect astronomical objects in extant datasets. Other applications of nested sampling are in the field of finite element updating where the algorithm is used to choose an optimal finite element model, and this was applied to structural dynamics. This sampling method has also been used in the field of materials modeling. It can be used to learn the partition function from statistical mechanics and derive thermodynamic properties.
== Dynamic nested sampling ==
Dynamic nested sampling is a generalisation of the nested sampling algorithm in which the number of samples taken in different regions of the parameter space is dynamically adjusted to maximise calculation accuracy. This can lead to large improvements in accuracy and computational efficiency when compared to the original nested sampling algorithm, in which the allocation of samples cannot be changed and often many samples are taken in regions which have little effect on calculation accuracy.
Publicly available dynamic nested sampling software packages include:
dynesty - a Python implementation of dynamic nested sampling which can be downloaded from GitHub.
dyPolyChord: a software package which can be used with Python, C++ and Fortran likelihood and prior distributions. dyPolyChord is available on GitHub.
Dynamic nested sampling has been applied to a variety of scientific problems, including analysis of gravitational waves, mapping distances in space and exoplanet detection.
== See also ==
Bayesian model comparison
List of algorithms
== References == | Wikipedia/Nested_sampling_algorithm |
In marketing, Bayesian inference allows for decision making and market research evaluation under uncertainty and with limited data. The communication between marketer and market can be seen as a form of Bayesian persuasion.
== Introduction ==
Bayes' theorem is fundamental to Bayesian inference, a branch of statistics that provides a mathematical framework for forming inferences through the concept of probability, in which evidence about the true state of the world is expressed in terms of degrees of belief through subjectively assessed numerical probabilities. Such a probability is known as a Bayesian probability. The fundamental ideas and concepts behind Bayes' theorem, and its use within Bayesian inference, have been developed and extended over the past centuries by Thomas Bayes, Richard Price and Pierre-Simon Laplace, as well as numerous other mathematicians, statisticians and scientists. Bayesian inference has experienced spikes of both popularity and disfavour, as it has been seen as vague and controversial by rival frequentist statisticians. In the past few decades Bayesian inference has become widespread in many scientific and social science fields such as marketing, where it allows for decision making and market research evaluation under uncertainty and limited data.
== Bayes' theorem ==
Bayesian probability specifies that there is some prior probability. Bayesian statisticians can use both an objective and a subjective approach when interpreting the prior probability, which is then updated in light of new relevant information. The concept is a manipulation of conditional probabilities:
{\displaystyle P(AB)=P(A|B)P(B)=P(B|A)P(A)}
Alternatively, a simpler understanding of the formula may be reached by substituting the events {\displaystyle A} and {\displaystyle B} to become respectively the hypothesis {\displaystyle (H)} and the data {\displaystyle (D)}. The rule allows for a judgment of the relative truth of the hypothesis given the data.
This is done through the calculation shown below, where {\displaystyle P(D|H)} is the likelihood function. This assesses the probability of the observed data {\displaystyle (D)} arising from the hypothesis {\displaystyle (H)}; {\displaystyle P(H)} is the assigned prior probability, or initial belief about the hypothesis; the denominator {\displaystyle P(D)} is formed by integrating or summing {\displaystyle P(D|H)P(H)}; and {\displaystyle P(H|D)} is known as the posterior, the recalculated probability or updated belief about the hypothesis. It is a result of the prior beliefs as well as the sample information. The posterior is a conditional distribution, resulting from the collection or consideration of new relevant data.
{\displaystyle P(H|D)={\frac {P(D|H)P(H)}{P(D)}}}
To sum up this formula: the posterior probability of the hypothesis is equal to the prior probability of the hypothesis multiplied by the conditional probability of the evidence given the hypothesis, divided by the probability of the new evidence.
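In code the formula is a one-liner. The numbers below are invented purely for illustration:

```python
def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """Bayes' rule for a binary hypothesis H given data D: returns P(H|D)."""
    p_data = p_data_given_h * prior_h + p_data_given_not_h * (1.0 - prior_h)
    return p_data_given_h * prior_h / p_data

# Hypothetical example: prior belief of 0.4 that a campaign lifts sales (H);
# a test market shows a lift (D), which occurs with probability 0.7 if H
# holds and 0.2 if it does not.
p = posterior(0.4, 0.7, 0.2)   # 0.28 / (0.28 + 0.12) = 0.7
```

The test-market evidence here raises the marketer's belief in the hypothesis from 0.4 to 0.7; the posterior would then serve as the prior for any further evidence.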
== Use in marketing ==
=== History ===
While the concepts of Bayesian statistics are thought to date back to 1763, marketers' exposure to the concepts is relatively recent, dating from 1959. Subsequently, many books and articles have been written about the application of Bayesian statistics to marketing decision-making and market research. It was predicted that the Bayesian approach would be used widely in the marketing field, but up until the mid-1980s the methods were considered impractical. The resurgence in the use of Bayesian methods is largely due to developments over the last few decades in computational methods, and to the expanded availability of detailed marketplace data, primarily a result of the birth of the World Wide Web and the explosion of the internet.
=== Application in marketing ===
Bayesian decision theory can be applied to all four areas of the marketing mix. Assessments are made by a decision maker on the probabilities of events that determine the profitability of alternative actions where the outcomes are uncertain. Assessments are also made for the profit (utility) for each possible combination of action and event. The decision maker can decide how much research, if any, needs to be conducted in order to investigate the consequences associated with the courses of action under evaluation. This is done before a final decision is made, but in order to do this costs would be incurred, time used and may overall be unreliable. For each possible action, expected profit can be computed, that is a weighted mean of the possible profits, the weights being the probabilities. The decision maker can then choose the action for which the expected profit is the highest. The theorem provides a formal reconciliation between judgment expressed quantitatively in the prior distribution and the statistical evidence of the experiment.
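The expected-profit rule just described is straightforward to compute. The actions, events, probabilities and payoffs below are all invented for illustration:

```python
# Event probabilities and profit (utility) for each action/event pair,
# as assessed by the decision maker (all numbers are hypothetical).
probs = {"demand_high": 0.3, "demand_mid": 0.5, "demand_low": 0.2}
profits = {
    "launch": {"demand_high": 120.0, "demand_mid": 40.0, "demand_low": -60.0},
    "delay":  {"demand_high": 50.0,  "demand_mid": 30.0, "demand_low": 0.0},
}

def expected_profit(action):
    """Weighted mean of the possible profits, weights being the probabilities."""
    return sum(probs[e] * profits[action][e] for e in probs)

best = max(profits, key=expected_profit)   # action with highest expected profit
```

With these numbers, launching has expected profit 44 against 30 for delaying, so "launch" is chosen; revising the event probabilities in light of research would change the comparison.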
==== New product development ====
The use of Bayesian decision theory in new product development allows for the use of subjective prior information. Bayes in new product development allows for the comparison of additional review project costs with the value of additional information in order to reduce the costs of uncertainty. The methodology used for this analysis is in the form of decision trees and 'stop'/'go' procedures. If the predicted payoff (the posterior) is acceptable for the organisation the project should go ahead, if not, development should stop. By reviewing the posterior (which then becomes the new prior) on regular intervals throughout the development stage managers are able to make the best possible decision with the information available at hand. Although the review process may delay further development and increase costs, it can help greatly to reduce uncertainty in high risk decisions.
==== Pricing decisions ====
Bayesian decision theory can be used in looking at pricing decisions. Field information such as retail and wholesale prices as well as the size of the market and market share are all incorporated into the prior information. Managerial judgement is included in order to evaluate different pricing strategies. This method of evaluating possible pricing strategies does have its limitations as it requires a number of assumptions to be made about the market place in which an organisation operates. As markets are dynamic environments it is often difficult to fully apply Bayesian decision theory to pricing strategies without simplifying the model.
==== Promotional campaigns ====
When dealing with promotion a marketing manager must account for all the market complexities that are involved in a decision. As it is difficult to account for all aspects of the market, a manager should look to incorporate both experienced judgements from senior executives as well modifying these judgements in light of economically justifiable information gathering. An example of the application of Bayesian decision theory for promotional purposes could be the use of a test sample in order to assess the effectiveness of a promotion prior to a full scale rollout. By combining prior subjective data about the occurrence of possible events with experimental empirical evidence gained through a test market, the resultant data can be used to make decisions under risk.
==== Channel decisions and the logistics of distribution ====
Bayesian decision analysis can also be applied to the channel selection process. The method can be used to provide further information, producing results expressed in terms of profit or loss. Prior information can include costs, expected profit, training expenses and any other costs relevant to the decision, as well as managerial experience, which can be represented as a normal distribution. Bayesian decision making under uncertainty lets a marketing manager assess his or her options for channel logistics by computing the most profitable choice of method. A number of different costs can be entered into the model, which helps to assess the ramifications of a change in distribution method. Identifying and quantifying all of the relevant information for this process can be very time-consuming and costly if the analysis delays possible future earnings.
== Strengths ==
The Bayesian approach is superior for decision making when there is a high level of uncertainty or limited information on which to base decisions, and where expert opinion or historical knowledge is available. Bayes is also useful when explaining findings in a probability sense to people who are less familiar and comfortable with statistics. It is in this sense that Bayesian methods are thought of as having created a bridge between business judgments and statistics for the purpose of decision-making.
The three principal strengths of Bayes' theorem that have been identified by scholars are that it is prescriptive, complete and coherent. It is prescriptive in that the theorem prescribes, for a consistent decision maker, the conclusions to be reached on the basis of evidence and reasoning.
It is complete because the solution is often clear and unambiguous, for a given choice of model and prior distribution. It allows for the incorporation of prior information when available to increase the robustness of the solutions, as well as taking into consideration the costs and risks that are associated with choosing alternative decisions.
Lastly Bayes theorem is coherent. It is considered the most appropriate way to update beliefs by welcoming the incorporation of new information, as is seen through the probability distributions (see Savage and De Finetti). This is further complemented by the fact that Bayes inference satisfies the likelihood principle, which states that models or inferences for datasets leading to the same likelihood function should generate the same statistical information.
Bayes methods are more cost-effective than the traditional frequentist take on marketing research and subsequent decision making. The probability can be assessed from a degree of belief before and after accounting for evidence, instead of calculating the probabilities of a certain decision by carrying out a large number of trials with each one producing an outcome from a set of possible outcomes. The planning and implementation of trials to see how a decision impacts in the 'field' e.g. observing consumers reaction to a relabeling of a product, is time-consuming and costly, a method many firms cannot afford. In place of taking the frequentist route in aiming for a universally acceptable conclusion through iteration, it is sometimes more effective to take advantage of all the information available to the firm to work out the 'best' decision at the time, and then subsequently when new knowledge is obtained, revise the posterior distribution to be then used as the prior, thus the inferences continue to logically contribute to one another based on Bayes theorem.
== Weaknesses ==
In marketing situations, it is important that the prior probability is (1) chosen correctly, and (2) is understood. A disadvantage to using Bayesian analysis is that there is no 'correct' way to choose a prior, therefore the inferences require a thorough analysis to translate the subjective prior beliefs into a mathematically formulated prior to ensure that the results will not be misleading and consequently lead to the disproportionate analysis of preposteriors. The subjective definition of probability and the selection and use of the priors have led to statisticians critiquing this subjective definition of probability that underlies the Bayesian approach.
Bayesian probability is often found difficult to apply when analysing and assessing probabilities because of its initially counterintuitive nature. Often, when deciding between strategies, evidence X suggesting that condition A might hold is misread: A's likelihood is judged by how well the evidence X matches A, crucially without considering the prior frequency of A. In alignment with falsification, which aims to question and falsify rather than prove hypotheses, very strong evidence X does not necessarily mean there is a very high probability that A leads to B; it should instead be interpreted as a very low probability of A not leading to B.
In the field of marketing, behavioural experiments which have dealt with managerial decision-making, and risk perception, in consumer decisions have utilised the Bayesian model, or similar models, but found that it may not be relevant quantitatively in predicting human information processing behaviour. Instead the model has been proven as useful as a qualitative means of describing how individuals combine new evidence with their predetermined judgements. Therefore, "the model may have some value as a first approximation to the development of descriptive choice theory" in consumer and managerial instances.
== Example ==
An advertising manager is deciding whether or not to increase the advertising for a product in a particular market. The Bayes approach to this decision suggests: 1) alternative courses of action whose consequences are uncertain are a necessary condition for applying Bayes'; 2) the advertising manager will pick the course of action that allows him to achieve some objective, i.e. a maximum return on his advertising investment in the form of profit; 3) he must translate the possible consequences of each action into some measure of success (or loss) with which a certain objective is achieved.
This 3 component example explains how the payoffs are conditional upon which outcomes occur. The advertising manager can characterize the outcomes based on past experience and knowledge and devise some possible events that are more likely to occur than others. He can then assign to these events prior probabilities, which would be in the form of numerical weights.
He can test out his predictions (prior probabilities) through an experiment. For example, he can run a test campaign to decide if the total level of advertising should in fact be increased. Based on the outcome of the experiment he can re-evaluate his prior probability and make a decision on whether to go ahead with increasing the advertising in the market or not. However, gathering this additional data is costly, time-consuming and may not lead to perfectly reliable results. As a decision maker he has to deal with experimental and systematic error, and this is where Bayes' comes in.
It approaches the experimental problem by asking; is additional data required? If so, how much needs to be collected and by what means and finally, how does the decision maker revise his prior judgment in light of the results of the new experimental evidence? In this example the advertising manager can use the Bayesian approach to deal with his dilemma and update his prior judgments in light of new information he gains. He needs to take into account the profit (utility) attached to the alternative acts under different events and the value versus cost of information in order to make his optimal decision on how to proceed.
== Bayes in computational models ==
Markov chain Monte Carlo (MCMC) is a flexible procedure designed to fit a variety of Bayesian models. It is the underlying method used in computational software such as the LaplacesDemon R Package and WinBUGS. The advancements and developments of these types of statistical software have allowed for the growth of Bayes by offering ease of calculation. This is achieved by the generation of samples from the posterior distributions, which are then used to produce a range of options or strategies which are allocated numerical weights. MCMC obtains these samples and produces summary and diagnostic statistics while also saving the posterior samples in the output. The decision maker can then assess the results from the output data set and choose the best option to proceed.
== See also ==
Marketing mix modeling
== References ==
== Further reading ==
Churchill, Gilbert A. Jr. (1991). "The Research Process and Problem Formulation". Marketing Research: Methodological Foundations (5th ed.). Fort Worth: Dryden Press. pp. 67–124. ISBN 0-03-031472-0.
Holloway, Charles A. (1979). Decision Making under Uncertainty. Englewood Cliffs: Prentice-Hall. ISBN 0-13-197749-0. | Wikipedia/Bayesian_inference_in_marketing |
Bayesian search theory is the application of Bayesian statistics to the search for lost objects. It has been used several times to find lost sea vessels, for example USS Scorpion, and has played a key role in the recovery of the flight recorders in the Air France Flight 447 disaster of 2009. It has also been used in the attempts to locate the remains of Malaysia Airlines Flight 370.
== Procedure ==
The usual procedure is as follows:
Formulate as many reasonable hypotheses as possible about what may have happened to the object.
For each hypothesis, construct a probability density function for the location of the object.
Construct a function giving the probability of actually finding an object in location X when searching there if it really is in location X. In an ocean search, this is usually a function of water depth — in shallow water chances of finding an object are good if the search is in the right place. In deep water chances are reduced.
Combine the above information coherently to produce an overall probability density map. (Usually this simply means multiplying the two functions together.) This gives the probability of finding the object by looking in location X, for all possible locations X. (This can be visualized as a contour map of probability.)
Construct a search path which starts at the point of highest probability and 'scans' over high probability areas, then intermediate probabilities, and finally low probability areas.
Revise all the probabilities continuously during the search. For example, if the hypotheses for location X imply the likely disintegration of the object and the search at location X has yielded no fragments, then the probability that the object is somewhere around there is greatly reduced (though not usually to zero) while the probabilities of its being at other locations is correspondingly increased. The revision process is done by applying Bayes' theorem.
In other words, first search where it most probably will be found, then search where finding it is less probable, then search where the probability is even less (but still possible due to limitations on fuel, range, water currents, etc.), until insufficient hope of locating the object at acceptable cost remains.
The advantages of the Bayesian method are that all information available is used coherently (i.e., in a "leak-proof" manner) and the method automatically produces estimates of the cost for a given success probability. That is, even before the start of searching, one can say, hypothetically, "there is a 65% chance of finding it in a 5-day search. That probability will rise to 90% after a 10-day search and 97% after 15 days" or a similar statement. Thus the economic viability of the search can be estimated before committing resources to a search.
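The kind of pre-search statement quoted above ("65% chance in 5 days") can be computed directly from the probability map. Below is a minimal sketch, assuming for simplicity a single detection probability q per look at a cell (in a real ocean search q varies by cell, e.g. with water depth):

```python
def detection_curve(priors, q, n_searches):
    """Cumulative probability of having found the object after each of
    n_searches greedy searches of the currently most probable cell."""
    p = list(priors)   # joint prob.: object is in cell i AND not yet detected
    found, curve = 0.0, []
    for _ in range(n_searches):
        i = max(range(len(p)), key=p.__getitem__)  # search the best cell
        found += p[i] * q                          # chance this look succeeds
        p[i] *= 1.0 - q                            # mass surviving a miss
        curve.append(found)
    return curve

curve = detection_curve([0.5, 0.3, 0.2], q=0.8, n_searches=4)
# curve rises toward 1: [0.4, 0.64, 0.8, 0.88]
```

Reading off the curve before committing resources gives exactly the cost-versus-success-probability trade-off described in the text.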
Apart from the USS Scorpion, other vessels located by Bayesian search theory include the MV Derbyshire, the largest British vessel ever lost at sea, and the SS Central America. It also proved successful in the search for a lost hydrogen bomb following the 1966 Palomares B-52 crash in Spain, and the recovery in the Atlantic Ocean of the crashed Air France Flight 447.
Bayesian search theory is incorporated into the CASP (Computer Assisted Search Program) mission planning software used by the United States Coast Guard for search and rescue. This program was later adapted for inland search by adding terrain and ground cover factors for use by the United States Air Force and Civil Air Patrol.
== Mathematics ==
Suppose a grid square has a probability p of containing the wreck and that the probability of successfully detecting the wreck if it is there is q. If the square is searched and no wreck is found, then, by Bayes' theorem, the revised probability of the wreck being in the square is given by
{\displaystyle p'={\frac {p(1-q)}{(1-p)+p(1-q)}}=p{\frac {1-q}{1-pq}}<p.}
For every other grid square, if its prior probability is r, its posterior probability is given by
{\displaystyle r'=r{\frac {1}{1-pq}}>r.}
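The two update formulas can be applied across a whole grid in a few lines; the cell values below are illustrative:

```python
def update_after_miss(priors, searched, q):
    """Posterior cell probabilities after an unsuccessful search of one cell,
    per the formulas above: the searched cell is down-weighted by (1 - q),
    and everything is renormalised by (1 - p*q)."""
    p = priors[searched]
    norm = 1.0 - p * q                        # probability of not detecting
    return [
        (r * (1.0 - q) if i == searched else r) / norm
        for i, r in enumerate(priors)
    ]

post = update_after_miss([0.5, 0.3, 0.2], searched=0, q=0.8)
# searched cell falls (0.5 -> 1/6), the others rise, and the total stays 1
```

Iterating this update after each unsuccessful look, and always searching the cell with the best current payoff, is the core of the continuous revision step in the procedure above.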
=== USS Scorpion ===
In May 1968, the U.S. Navy's nuclear submarine USS Scorpion (SSN-589) failed to arrive as expected at her home port of Norfolk, Virginia. The command officers of the U.S. Navy were nearly certain that the vessel had been lost off the Eastern Seaboard, but an extensive search there failed to discover the remains of Scorpion.
Then, a Navy deep-water expert, John P. Craven, suggested that Scorpion had sunk elsewhere. Craven organised a search southwest of the Azores based on a controversial approximate triangulation by hydrophones. He was allocated only a single ship, Mizar, and he took advice from Metron Inc., a firm of consultant mathematicians in order to maximise his resources. A Bayesian search methodology was adopted. Experienced submarine commanders were interviewed to construct hypotheses about what could have caused the loss of Scorpion.
The sea area was divided up into grid squares and a probability assigned to each square, under each of the hypotheses, to give a number of probability grids, one for each hypothesis. These were then added together to produce an overall probability grid. The probability attached to each square was then the probability that the wreck was in that square. A second grid was constructed with probabilities that represented the probability of successfully finding the wreck if that square were to be searched and the wreck were to be actually there. This was a known function of water depth. The result of combining this grid with the previous grid is a grid which gives the probability of finding the wreck in each grid square of the sea if it were to be searched.
At the end of October 1968, the Navy's oceanographic research ship, Mizar, located sections of the hull of Scorpion on the seabed, about 740 km (400 nmi; 460 mi) southwest of the Azores, under more than 3,000 m (9,800 ft) of water. This was after the Navy had released sound tapes from its underwater "SOSUS" listening system, which contained the sounds of the destruction of Scorpion. The court of inquiry was subsequently reconvened and other vessels, including the bathyscaphe Trieste II, were dispatched to the scene, collecting many pictures and other data.
Although Craven received much credit for locating the wreckage of Scorpion, Gordon Hamilton, an acoustics expert who pioneered the use of hydroacoustics to pinpoint Polaris missile splashdown locations, was instrumental in defining a compact "search box" wherein the wreck was ultimately found. Hamilton had established a listening station in the Canary Islands that obtained a clear signal of what some scientists believe was the noise of the vessel's pressure hull imploding as she passed crush depth. A Naval Research Laboratory scientist named Chester "Buck" Buchanan, using a towed camera sled of his own design aboard Mizar, finally located Scorpion. The towed camera sled, which was fabricated by J. L. "Jac" Hamm of Naval Research Laboratory's Engineering Services Division, is housed in the National Museum of the United States Navy. Buchanan had located the wrecked hull of Thresher in 1964 using this technique.
== Optimal distribution of search effort ==
The classic book on this subject, The Theory of Optimal Search (Operations Research Society of America, 1975) by Lawrence D. Stone of Metron Inc., won the 1975 Lanchester Prize of the American Operations Research Society.
== Searching in boxes ==
Assume that a stationary object is hidden in one of n boxes (locations). For each location
{\displaystyle i}
there are three known parameters: the cost
{\displaystyle c_{i}}
of a single search, the probability
{\displaystyle a_{i}}
of finding the object by a single search if the object is there, and the probability
{\displaystyle p_{i}}
that the object is there. A searcher looks for the object. They know the a priori probabilities at the beginning and update them by Bayes' law after each (unsuccessful) attempt.
The problem of finding the object at minimal expected cost is a classical problem solved by David Blackwell; later, David Assaf extended Blackwell's result to more than one object. The optimal policy is: at each stage, look into the location which maximizes
{\displaystyle {\frac {p_{i}a_{i}}{c_{i}}}}
This is a special case of the Gittins index.
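A minimal sketch of this index policy, with invented parameters: repeatedly search the box with the largest p_i a_i / c_i, and update the probabilities by Bayes' law after each failure.

```python
# Index-policy sketch with invented numbers: search the box maximising
# p_i * a_i / c_i, then update the p_i by Bayes' law after a failure.

p = [0.5, 0.3, 0.2]   # probability the object is in each box
a = [0.4, 0.8, 0.9]   # chance one search finds it, if it is there
c = [1.0, 2.0, 3.0]   # cost of one search of each box

def next_box(p, a, c):
    return max(range(len(p)), key=lambda i: p[i] * a[i] / c[i])

def after_failure(p, a, i):
    """Posterior over boxes after an unsuccessful search of box i."""
    q = list(p)
    q[i] *= 1 - a[i]
    total = sum(q)
    return [x / total for x in q]

i = next_box(p, a, c)       # indices 0.2, 0.12, 0.06: box 0 first
p = after_failure(p, a, i)  # box 0 becomes less likely, the others more
```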
== See also ==
Bayesian inference – Method of statistical inference
Search game – Two-person zero-sum game
== References ==
== Bibliography ==
Stone, Lawrence D., The Theory of Optimal Search, published by the Operations Research Society of America, 1975
Stone, Lawrence D., In Search of Air France Flight 447. Institute of Operations Research and the Management Sciences, 2011. https://www.informs.org/ORMS-Today/Public-Articles/August-Volume-38-Number-4/In-Search-of-Air-France-Flight-447
Iida, Koji, Studies on the Optimal Search Plan, Vol. 70, Lecture Notes in Statistics, Springer-Verlag, 1992.
De Groot, Morris H., Optimal Statistical Decisions, Wiley Classics Library, 2004.
Richardson, Henry R; and Stone, Lawrence D. Operations Analysis during the underwater search for Scorpion. Naval Research Logistics Quarterly, June 1971, Vol. 18, Number 2. Office of Naval Research.
Stone, Lawrence D. Search for the SS Central America: Mathematical Treasure Hunting. Technical Report, Metron Inc. Reston, Virginia.
Koopman, B.O. Search and Screening, Operations Research Evaluation Group Report 56, Center for Naval Analyses, Alexandria, Virginia. 1946.
Richardson, Henry R; and Discenza, J.H. The United States Coast Guard computer-assisted search planning system (CASP). Naval Research Logistics Quarterly. Vol. 27 number 4. pp. 659–680. 1980.
Ross, Sheldon M., An Introduction to Stochastic Dynamic Programming, Academic Press, 1983.
Solomonoff's theory of inductive inference proves that, under its common sense assumptions (axioms), the best possible scientific model is the shortest algorithm that generates the empirical data under consideration. In addition to the choice of data, other assumptions are that, to avoid the post-hoc fallacy, the programming language must be chosen prior to the data and that the environment being observed is generated by an unknown algorithm. This is also called a theory of induction. Due to its basis in the dynamical (state-space model) character of Algorithmic Information Theory, it encompasses statistical as well as dynamical information criteria for model selection. It was introduced by Ray Solomonoff, based on probability theory and theoretical computer science. In essence, Solomonoff's induction derives the posterior probability of any computable theory, given a sequence of observed data. This posterior probability is derived from Bayes' rule and some universal prior, that is, a prior that assigns a positive probability to any computable theory.
Solomonoff proved that this induction is incomputable (or more precisely, lower semi-computable), but noted that "this incomputability is of a very benign kind", and that it "in no way inhibits its use for practical prediction" (as it can be approximated from below more accurately with more computational resources). It is only "incomputable" in the benign sense that no scientific consensus is able to prove that the best current scientific theory is the best of all possible theories. However, Solomonoff's theory does provide an objective criterion for deciding among the current scientific theories explaining a given set of observations.
Solomonoff's induction naturally formalizes Occam's razor by assigning larger prior credences to theories that require a shorter algorithmic description.
== Origin ==
=== Philosophical ===
The theory is based in philosophical foundations, and was founded by Ray Solomonoff around 1960. It is a mathematically formalized combination of Occam's razor and the Principle of Multiple Explanations. All computable theories which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Marcus Hutter's universal artificial intelligence builds upon this to calculate the expected value of an action.
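The weighting idea can be illustrated with a toy sketch (this is not Solomonoff's actual construction; the theories, description lengths, and predictions below are invented): each theory consistent with past observations gets a prior weight of 2^(-length), and the prediction for the next observation is the credence-weighted vote.

```python
# Toy sketch of the weighting idea (NOT Solomonoff's actual construction:
# theories, description lengths, and predictions are invented).
# Each theory consistent with past data gets prior weight 2**(-length).

theories = {
    # name: (description length in bits, predicted next bit)
    "short":  (5, 1),
    "medium": (10, 0),
    "long":   (20, 1),
}

weights = {name: 2.0 ** -length for name, (length, _) in theories.items()}
total = sum(weights.values())
credence = {name: w / total for name, w in weights.items()}

# Predicted probability that the next bit is 1: credence-weighted vote.
p_one = sum(credence[name] for name, (_, bit) in theories.items() if bit == 1)
```

Because the weight halves with every extra bit of description length, the shortest consistent theory dominates the prediction, which is the formalized Occam's razor.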
=== Principle ===
Solomonoff's induction has been argued to be the computational formalization of pure Bayesianism. To understand, recall that Bayesianism derives the posterior probability
{\displaystyle \mathbb {P} [T|D]}
of a theory
{\displaystyle T}
given data
{\displaystyle D}
by applying Bayes' rule, which yields
{\displaystyle \mathbb {P} [T|D]={\frac {\mathbb {P} [D|T]\mathbb {P} [T]}{\mathbb {P} [D|T]\mathbb {P} [T]+\sum _{A\neq T}\mathbb {P} [D|A]\mathbb {P} [A]}}}
where theories
{\displaystyle A}
are alternatives to theory
{\displaystyle T}
. For this equation to make sense, the quantities
{\displaystyle \mathbb {P} [D|T]}
and
{\displaystyle \mathbb {P} [D|A]}
must be well-defined for all theories
{\displaystyle T}
and
{\displaystyle A}
. In other words, any theory must define a probability distribution over observable data
{\displaystyle D}
. Solomonoff's induction essentially boils down to demanding that all such probability distributions be computable.
Interestingly, the set of computable probability distributions can be identified with a subset of the set of all programs, which is countable. Similarly, the sets of observable data considered by Solomonoff were finite. Without loss of generality, we can thus consider that any observable data is a finite bit string. As a result, Solomonoff's induction can be defined by only invoking discrete probability distributions.
Solomonoff's induction then allows one to make probabilistic predictions of future data
{\displaystyle F}
, by simply obeying the laws of probability. Namely, we have
{\displaystyle \mathbb {P} [F|D]=\mathbb {E} _{T}[\mathbb {P} [F|T,D]]=\sum _{T}\mathbb {P} [F|T,D]\mathbb {P} [T|D]}
This quantity can be interpreted as the average of the predictions
{\displaystyle \mathbb {P} [F|T,D]}
of all theories
{\displaystyle T}
given past data
{\displaystyle D}
, weighted by their posterior credences
{\displaystyle \mathbb {P} [T|D]}
.
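A toy numeric illustration of this mixture, with three invented "theories" and made-up priors and likelihoods:

```python
# Toy mixture with three invented "theories" T1..T3:
#   P[F|D] = sum_T P[F|T,D] * P[T|D], with P[T|D] from Bayes' rule.

prior  = {"T1": 0.5, "T2": 0.3, "T3": 0.2}   # P[T]
like_D = {"T1": 0.8, "T2": 0.1, "T3": 0.5}   # P[D|T]
pred_F = {"T1": 0.9, "T2": 0.2, "T3": 0.6}   # P[F|T,D]

evidence = sum(prior[t] * like_D[t] for t in prior)              # P[D]
posterior = {t: prior[t] * like_D[t] / evidence for t in prior}  # P[T|D]
p_F = sum(pred_F[t] * posterior[t] for t in prior)               # P[F|D]
```

The theory that explained the past data best (T1) dominates the posterior, so the mixture prediction leans toward its forecast.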
=== Mathematical ===
The proof of the "razor" is based on the known mathematical properties of a probability distribution over a countable set. These properties are relevant because the infinite set of all programs is a denumerable set. The sum S of the probabilities of all programs must be exactly equal to one (as per the definition of probability), so the probabilities must roughly decrease as we enumerate the infinite set of all programs; otherwise S would be strictly greater than one. To be more precise, for every
{\displaystyle \epsilon }
> 0, there is some length l such that the probability of all programs longer than l is at most
{\displaystyle \epsilon }
. This does not, however, preclude very long programs from having very high probability.
Fundamental ingredients of the theory are the concepts of algorithmic probability and Kolmogorov complexity. The universal prior probability of any prefix p of a computable sequence x is the sum of the probabilities of all programs (for a universal computer) that compute something starting with p. Given some p and any computable but unknown probability distribution from which x is sampled, the universal prior and Bayes' theorem can be used to predict the yet unseen parts of x in optimal fashion.
== Mathematical guarantees ==
=== Solomonoff's completeness ===
The remarkable property of Solomonoff's induction is its completeness. In essence, the completeness theorem guarantees that the expected cumulative errors made by the predictions based on Solomonoff's induction are upper-bounded by the Kolmogorov complexity of the (stochastic) data generating process. The errors can be measured using the Kullback–Leibler divergence or the square of the difference between the induction's prediction and the probability assigned by the (stochastic) data generating process.
=== Solomonoff's uncomputability ===
Unfortunately, Solomonoff also proved that Solomonoff's induction is uncomputable. In fact, he showed that computability and completeness are mutually exclusive: any complete theory must be uncomputable. The proof of this is derived from a game between the induction and the environment. Essentially, any computable induction can be tricked by a computable environment, by choosing the computable environment that negates the computable induction's prediction. This fact can be regarded as an instance of the no free lunch theorem.
== Modern applications ==
=== Artificial intelligence ===
Though Solomonoff's inductive inference is not computable, several AIXI-derived algorithms approximate it in order to make it run on a modern computer. The more computing power they are given, the closer their predictions are to the predictions of inductive inference (their mathematical limit is Solomonoff's inductive inference).
Another direction of inductive inference is based on E. Mark Gold's model of learning in the limit from 1967, which has since developed more and more models of learning. The general scenario is the following: given a class S of computable functions, is there a learner (that is, a recursive functional) which, for any input of the form (f(0),f(1),...,f(n)), outputs a hypothesis (an index e with respect to a previously agreed-on acceptable numbering of all computable functions; the indexed function may be required to be consistent with the given values of f)? A learner M learns a function f if almost all its hypotheses are the same index e, which generates the function f; M learns S if M learns every f in S. Basic results are that all recursively enumerable classes of functions are learnable while the class REC of all computable functions is not learnable.
Many related models have been considered, and the learning of classes of recursively enumerable sets from positive data has also been a topic studied from Gold's pioneering paper in 1967 onwards. A far-reaching extension of Gold's approach is developed by Schmidhuber's theory of generalized Kolmogorov complexities, which are kinds of super-recursive algorithms.
== See also ==
Algorithmic information theory
Bayesian inference
Inductive inference
Inductive probability
Mill's methods
Minimum description length
Minimum message length
For a philosophical viewpoint, see: Problem of induction and New riddle of induction
== References ==
== Sources ==
Angluin, Dana; Smith, Carl H. (Sep 1983). "Inductive Inference: Theory and Methods". Computing Surveys. 15 (3): 237–269. doi:10.1145/356914.356918. S2CID 3209224.
Burgin, M. (2005), Super-recursive Algorithms, Monographs in computer science, Springer. ISBN 0-387-95569-0
Burgin, M., "How We Know What Technology Can Do", Communications of the ACM, v. 44, No. 11, 2001, pp. 82–88.
Burgin, M.; Eberbach, E., "Universality for Turing Machines, Inductive Turing Machines and Evolutionary Algorithms", Fundamenta Informaticae, v. 91, No. 1, 2009, 53–77.
Burgin, M.; Eberbach, E., "On Foundations of Evolutionary Computation: An Evolutionary Automata Approach", in Handbook of Research on Artificial Immune Systems and Natural Computing: Applying Complex Adaptive Technologies (Hongwei Mo, Ed.), IGI Global, Hershey, Pennsylvania, 2009, 342–360.
Burgin, M.; Eberbach, E., "Evolutionary Automata: Expressiveness and Convergence of Evolutionary Computation", Computer Journal, v. 55, No. 9, 2012, pp. 1023–1029.
Burgin, M.; Klinger, A. Experience, Generations, and Limits in Machine Learning, Theoretical Computer Science, v. 317, No. 1/3, 2004, pp. 71–91
Davis, Martin (2006) "The Church–Turing Thesis: Consensus and opposition". Proceedings, Computability in Europe 2006. Lecture Notes in Computer Science, 3988 pp. 125–132.
Gasarch, W.; Smith, C. H. (1997) "A survey of inductive inference with an emphasis on queries". Complexity, logic, and recursion theory, Lecture Notes in Pure and Appl. Math., 187, Dekker, New York, pp. 225–260.
Hay, Nick. "Universal Semimeasures: An Introduction," CDMTCS Research Report Series, University of Auckland, Feb. 2007.
Jain, Sanjay; Osherson, Daniel; Royer, James; Sharma, Arun, Systems that Learn: An Introduction to Learning Theory (second edition), MIT Press, 1999.
Kleene, Stephen C. (1952), Introduction to Metamathematics (First ed.), Amsterdam: North-Holland.
Li Ming; Vitanyi, Paul, An Introduction to Kolmogorov Complexity and Its Applications, 2nd Edition, Springer Verlag, 1997.
Osherson, Daniel; Stob, Michael; Weinstein, Scott, Systems That Learn, An Introduction to Learning Theory for Cognitive and Computer Scientists, MIT Press, 1986.
Solomonoff, Ray J. (1999). "Two Kinds of Probabilistic Induction" (PDF). The Computer Journal. 42 (4): 256. CiteSeerX 10.1.1.68.8941. doi:10.1093/comjnl/42.4.256.
Solomonoff, Ray (March 1964). "A Formal Theory of Inductive Inference Part I" (PDF). Information and Control. 7 (1): 1–22. doi:10.1016/S0019-9958(64)90223-2.
Solomonoff, Ray (June 1964). "A Formal Theory of Inductive Inference Part II" (PDF). Information and Control. 7 (2): 224–254. doi:10.1016/S0019-9958(64)90131-7.
== External links ==
Algorithmic probability – Scholarpedia
In decision theory, economics, and probability theory, the Dutch book arguments are a set of results showing that agents must satisfy the axioms of rational choice to avoid a kind of self-contradiction called a Dutch book. A Dutch book, sometimes also called a money pump, is a set of bets that ensures a guaranteed loss, i.e. the gambler will lose money no matter what happens. A set of bets is called coherent if it cannot result in a Dutch book.
The Dutch book arguments are used to explore degrees of certainty in beliefs, and demonstrate that rational bet-setters must be Bayesian; in other words, a rational bet-setter must assign event probabilities that behave according to the axioms of probability, and must have preferences that can be modeled using the von Neumann–Morgenstern axioms.
In economics, they are used to model behavior by ruling out situations where agents "burn money" for no real reward. Models based on the assumption that actors are rational are called rational choice models. That assumption is weakened in behavioral models of decision-making.
The thought experiment was first proposed by the Italian probabilist Bruno de Finetti in order to justify Bayesian probability, and was more thoroughly explored by Leonard Savage, who developed it into a full model of rational choice.
== Operational subjective probabilities as wagering odds ==
Assume we have two players, A and B. Player A must set the price of a promise to pay $1 if John Smith wins tomorrow's election. Player B will be able to choose either to buy the promise from A at the price A has set, or to require A to buy the promise, still at the same price. In other words: Player A sets the odds, but Player B decides which side of the bet to take. The price A sets is called the "operational subjective probability".
If player A decides that John Smith is 12.5% likely to win, they might then set an odds of 7:1 against. This arbitrary valuation — the "operational subjective probability" — determines the payoff of a successful wager. $1 wagered at these odds will produce either a loss of $1 (if Smith loses) or a win of $7 (if Smith wins). In this example the $1 will also be returned to the bettor in the event of success.
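The odds arithmetic in this example can be written as a small helper (a sketch; fair odds against an event of probability q are (1 − q)/q to 1):

```python
# Odds arithmetic from the example: a subjective probability q corresponds
# to fair odds of (1 - q) / q : 1 against the event.

def odds_against(q):
    """Fair odds against an event of probability q, expressed as x (for x:1)."""
    return (1 - q) / q

def winnings(stake, q):
    """Profit at fair odds, excluding the returned stake."""
    return stake * odds_against(q)

# 12.5% chance -> 7:1 against; $1 staked wins $7, plus the stake back.
```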
== The arguments ==
The standard Dutch book argument concludes that rational agents must have subjective probabilities for random events, and that these probabilities must satisfy the standard axioms of probability. In other words, any rational person must be willing to assign a (quantitative) subjective probability to different events.
Note that the argument does not imply agents are willing to engage in gambling in the traditional sense. The word "bet" as used here refers to any kind of decision under uncertainty. For example, buying an unfamiliar good at a supermarket is a kind of "bet" (the buyer "bets" that the product is good), as is getting into a car ("betting" that the driver will not be involved in an accident).
=== Establishing willingness to bet ===
The Dutch book argument can be reversed by considering the perspective of the bookmaker. In this case, the Dutch book arguments show that any rational agent must be willing to accept some kinds of risks, i.e. to make uncertain bets, or else they will sometimes refuse "free gifts" or "Czech books", a series of bets leaving them better-off with 100% certainty.
=== Unitarity ===
In one example, a bookmaker has offered the following odds and attracted one bet on each horse, whose relative sizes make the result irrelevant. The implied probabilities, i.e. the probability of each horse winning, add up to a number greater than 1, violating the axiom of unitarity:
Horse 1: odds 1:1 (even), bet $100, implied probability 0.5
Horse 2: odds 3:1 against, bet $50, implied probability 0.25
Horse 3: odds 4:1 against, bet $40, implied probability 0.2
Horse 4: odds 9:1 against, bet $20, implied probability 0.1
Total staked: $210; total implied probability: 1.05
Whichever horse wins in this example, the bookmaker will pay out $200 (including returning the winning stake)—but the punter has bet $210, hence making a loss of $10 on the race.
However, if horse 4 was withdrawn and the bookmaker does not adjust the other odds, the implied probabilities would add up to 0.95. In such a case, a gambler could always reap a profit of $10 by betting $100, $50 and $40 on the remaining three horses, respectively, and not having to stake $20 on the withdrawn horse, which now cannot win.
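A numeric check of this example: the stakes are those given in the text ($100, $50, $40, $20), and odds of 1:1, 3:1, 4:1 and 9:1 against are the ones consistent with every winning ticket returning exactly $200 including the stake.

```python
# Check of the unitarity example: $210 staked, $200 paid out whichever wins.

odds   = [1, 3, 4, 9]      # x:1 against horses 1-4
stakes = [100, 50, 40, 20]

implied = [1 / (x + 1) for x in odds]                  # implied probabilities
payouts = [s * (x + 1) for s, x in zip(stakes, odds)]  # stake included

total_staked = sum(stakes)   # $210 in, $200 out: the punter is Dutch-booked
```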
=== Other axioms ===
Other forms of Dutch books can be used to establish the other axioms of probability, sometimes involving more complex bets like forecasting the order in which horses will finish. In Bayesian probability, Frank P. Ramsey and Bruno de Finetti required personal degrees of belief to be coherent so that a Dutch book could not be made against them, whichever way bets were made. Necessary and sufficient conditions for this are that their degrees of belief satisfy all the axioms of probability.
== Dutch books ==
A person who has set prices on an array of wagers in such a way that he or she will make a net gain regardless of the outcome is said to have made a Dutch book. When one has a Dutch book, one's opponent always loses. A person who sets prices in a way that gives his or her opponent a Dutch book is not behaving rationally.
=== A very trivial Dutch book ===
The rules do not forbid a set price higher than $1, but a prudent opponent may sell one a high-priced ticket, such that the opponent comes out ahead regardless of the outcome of the event on which the bet is made. The rules also do not forbid a negative price, but an opponent may extract a paid promise from the bettor to pay him or her later should a certain contingency arise. In either case, the price-setter loses. These lose-lose situations parallel the fact that a probability can neither exceed 1 (certainty) nor be less than 0 (no chance of winning).
=== A more instructive Dutch book ===
Now suppose one sets the price of a promise to pay $1 if the Boston Red Sox win next year's World Series, and also the price of a promise to pay $1 if the New York Yankees win, and finally the price of a promise to pay $1 if either the Red Sox or the Yankees win. One may set the prices in such a way that
{\displaystyle {\text{Price}}({\text{Red Sox}})+{\text{Price}}({\text{Yankees}})\neq {\text{Price}}({\text{Red Sox or Yankees}})\,}
But if one sets the price of the third ticket lower than the sum of the first two tickets, a prudent opponent will buy that ticket and sell the other two tickets to the price-setter. By considering the three possible outcomes (Red Sox, Yankees, some other team), one will note that regardless of which of the three outcomes eventuates, one will lose. An analogous fate awaits if one set the price of the third ticket higher than the sum of the other two prices. This parallels the fact that probabilities of mutually exclusive events are additive (see probability axioms).
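The exploit can be sketched numerically (the prices below are invented; only the mispricing matters):

```python
# Sketch of the exploit: the price-setter asks 0.40 for "Red Sox win",
# 0.30 for "Yankees win", but only 0.60 for "Red Sox or Yankees win",
# so 0.60 < 0.40 + 0.30.

p_sox, p_yanks, p_either = 0.40, 0.30, 0.60

def opponent_net(outcome):
    """Opponent buys the cheap combined ticket, sells the two single tickets."""
    upfront = p_sox + p_yanks - p_either       # collected now: 0.10
    # The bought "either" ticket pays $1 exactly when one of the two sold
    # tickets owes $1 (at most one can win), so the payoffs always cancel.
    receives = 1 if outcome in ("sox", "yanks") else 0
    owes     = 1 if outcome in ("sox", "yanks") else 0
    return upfront + receives - owes           # 0.10 in every outcome
```

Whatever happens, the opponent keeps the 10-cent price difference, so the price-setter is a sure loser.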
== Conditional wagers and conditional probabilities ==
Now imagine a more complicated scenario. One must set the prices of three promises:
to pay $1 if the Red Sox win tomorrow's game: the purchaser of this promise loses their bet if the Red Sox do not win, regardless of whether the failure is due to a loss in a completed game or to cancellation of the game, and
to pay $1 if the Red Sox win, and to refund the price of the promise if the game is cancelled, and
to pay $1 if the game is completed, regardless of who wins.
Three outcomes are possible: The game is cancelled; the game is played and the Red Sox lose; the game is played and the Red Sox win. One may set the prices in such a way that
{\displaystyle {\text{Price}}({\text{complete game}})\times {\text{Price}}({\text{Red Sox win}}\mid {\text{complete game}})\neq {\text{Price}}({\text{Red Sox win and complete game}})}
(where the second price above is that of the bet that includes the refund in case of cancellation). (Note: The prices here are the dimensionless numbers obtained by dividing by $1, which is the payout in all three cases.) A prudent opponent writes three linear inequalities in three variables. The variables are the amounts they will invest in each of the three promises; the value of one of these is negative if they will make the price-setter buy that promise, and positive if they will buy it. Each inequality corresponds to one of the three possible outcomes, and states that the opponent's net gain is more than zero. A solution exists if the determinant of the matrix is not zero. That determinant is:
{\displaystyle {\text{Price}}({\text{complete game}})\times {\text{Price}}({\text{Red Sox win}}\mid {\text{complete game}})-{\text{Price}}({\text{Red Sox win and complete game}}).}
Thus a prudent opponent can make the price setter a sure loser unless one sets one's prices in a way that parallels the simplest conventional characterization of conditional probability.
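A small sketch of the coherence condition implied by the zero-determinant requirement, with invented prices:

```python
# Coherence check with invented prices: the zero-determinant condition says
# Price(win and complete) must equal Price(complete) * Price(win | complete).

price_complete = 0.8            # promise 3: the game is completed
price_win_given_complete = 0.5  # promise 2: refunded if cancelled
price_win_and_complete = 0.4    # promise 1: loses on cancellation too

det = price_complete * price_win_given_complete - price_win_and_complete
coherent = abs(det) < 1e-12     # determinant zero: no sure-loss book exists
```

These prices satisfy 0.8 × 0.5 = 0.4, so they parallel the conventional definition of conditional probability and cannot be Dutch-booked.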
== Another example ==
In the 2015 running of the Kentucky Derby, the favorite ("American Pharoah") was set ante-post at 5:2, the second favorite at 3:1, and the third favorite at 8:1. All other horses had odds against of 12:1 or higher. With these odds, a wager of $10 on each of all 18 starters would result in a net loss if either the favorite or the second favorite were to win.
However, if one assumes that no horse quoted 12:1 or higher will win, and one bets $10 on each of the top three, one is guaranteed at least a small win. The favorite (who did win) would result in a payout of $25, plus the returned $10 wager, giving an ending balance of $35 (a $5 net increase). A win by the second favorite would produce a payoff of $30 plus the original $10 wager, for a net $10 increase. A win by the third favorite gives $80 plus the original $10, for a net increase of $60.
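The payoffs above can be checked directly (the arithmetic assumes, as in the text, that no horse at 12:1 or longer wins):

```python
# Checking the quoted payoffs: $10 on each of the top three ante-post odds.

stake = 10
odds = {"favorite": 5 / 2, "second favorite": 3, "third favorite": 8}
total_staked = stake * len(odds)                  # $30

# net = winnings + returned stake - everything wagered
net = {h: stake * x + stake - total_staked for h, x in odds.items()}
# favorite: 25 + 10 - 30 = +5; second: +10; third: +60
```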
This sort of strategy, so far as it concerns just the top three, forms a Dutch Book. However, if one considers all eighteen contenders, then no Dutch Book exists for this race.
== Economics ==
In economics, the classic example of a situation in which a consumer X can be Dutch-booked is if they have intransitive preferences. Classical economic theory assumes that preferences are transitive: if someone thinks A is better than B and B is better than C, then they must think A is better than C. Moreover, there cannot be any "cycles" of preferences.
The money pump argument notes that if someone held a set of intransitive preferences, they could be exploited (pumped) for money until being forced to exit the market. Imagine Jane has twenty dollars to buy fruit. She can fill her basket with either oranges or apples. Jane would prefer to have a dollar rather than an apple, an apple rather than an orange, and an orange rather than a dollar. Because Jane would rather have an orange than a dollar, she is willing to buy an orange for just over a dollar (perhaps $1.10). Then, she trades her orange for an apple, because she would rather have an apple than an orange. Finally, she sells her apple for a dollar, because she would rather have a dollar than an apple. At this point, Jane is left with $19.90, and has lost 10¢ and gained nothing in return. This process can be repeated until Jane is left with no money. (Note that, if Jane truly holds these preferences, she would see nothing wrong with this process; at every step, Jane agrees she has been left better off.) After running out of money, Jane must exit the market, and her preferences and actions cease to be economically relevant.
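The pump can be simulated directly; the 10-cent loss per cycle follows the example (amounts are kept in integer cents to avoid floating-point drift):

```python
# Simulating the money pump: Jane starts with $20.00 and, as in the example,
# loses 10 cents on every orange -> apple -> dollar cycle.

cash_cents, cycles = 2000, 0
while cash_cents >= 110:     # she can still afford an orange at $1.10
    cash_cents -= 110        # buy an orange for $1.10
    # trade the orange for an apple (no charge: she prefers the apple)
    cash_cents += 100        # sell the apple for $1.00
    cycles += 1
# She ends with less than $1.10 and nothing to show for it.
```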
Experiments in behavioral economics show that subjects can violate the requirement for transitive preferences when comparing bets. However, most subjects do not make these choices in within-subject comparisons where the contradiction is made obviously visible (in other words, the subjects do not hold genuinely intransitive preferences, but instead make mistakes when making choices using heuristics).
Economists usually argue that people with preferences like X's will have all their wealth taken from them in the market. If this is the case, we won't observe preferences with intransitivities or other features that allow people to be Dutch-booked. However, if people are somewhat sophisticated about their intransitivities and/or if competition by arbitrageurs drives the per-cycle profit that can be extracted from them to zero, non-"standard" preferences may still be observable.
== Coherence ==
It can be shown that the set of prices is coherent when they satisfy the probability axioms and related results such as the inclusion–exclusion principle.
== See also ==
== References ==
Bovens, Luc; Hartmann, Stephan (2003). "Coherence". Bayesian Epistemology. Oxford: Clarendon Press. pp. 28–55. ISBN 0-19-926975-0.
Kadane, Joseph B. (2020). Principles of Uncertainty. Chapman & Hall. pp. 1–28. ISBN 978-1-138-05273-4.
Lad, Frank (1996). Operational Subjective Statistical Methods: A Mathematical, Philosophical, and Historical Introduction. New York: Wiley. ISBN 0-471-14329-4.
Maher, Patrick (1993). "Subjective probability in science". Betting on Theories. Cambridge University Press. pp. 84–104. ISBN 052141850X.
== External links ==
"Bayesian Epistemology"
Dutch Book Arguments in the Stanford Encyclopedia of Philosophy.
Probabilities as Betting Odds, report by C. Caves.
Notes on the Dutch Book Argument, by D. A. Freedman.
Bayesian approaches to brain function investigate the capacity of the nervous system to operate in situations of uncertainty in a fashion that is close to the optimal prescribed by Bayesian statistics. This term is used in behavioural sciences and neuroscience and studies associated with this term often strive to explain the brain's cognitive abilities based on statistical principles. It is frequently assumed that the nervous system maintains internal probabilistic models that are updated by neural processing of sensory information using methods approximating those of Bayesian probability.
== Origins ==
This field of study has its historical roots in numerous disciplines including machine learning, experimental psychology and Bayesian statistics. As early as the 1860s, with the work of Hermann Helmholtz in experimental psychology, the brain's ability to extract perceptual information from sensory data was modeled in terms of probabilistic estimation. The basic idea is that the nervous system needs to organize sensory data into an accurate internal model of the outside world.
Bayesian probability has been developed by many important contributors. Pierre-Simon Laplace, Thomas Bayes, Harold Jeffreys, Richard Cox and Edwin Jaynes developed mathematical techniques and procedures for treating probability as the degree of plausibility that could be assigned to a given supposition or hypothesis based on the available evidence. In 1988 Edwin Jaynes presented a framework for using Bayesian Probability to model mental processes. It was thus realized early on that the Bayesian statistical framework holds the potential to lead to insights into the function of the nervous system.
This idea was taken up in research on unsupervised learning, in particular the Analysis by Synthesis approach, branches of machine learning. In 1983 Geoffrey Hinton and colleagues proposed the brain could be seen as a machine making decisions based on the uncertainties of the outside world. During the 1990s researchers including Peter Dayan, Geoffrey Hinton and Richard Zemel proposed that the brain represents knowledge of the world in terms of probabilities and made specific proposals for tractable neural processes that could manifest such a Helmholtz Machine.
== Psychophysics ==
A wide range of studies interpret the results of psychophysical experiments in light of Bayesian perceptual models. Many aspects of human perceptual and motor behavior can be modeled with Bayesian statistics. This approach, with its emphasis on behavioral outcomes as the ultimate expressions of neural information processing, is also known for modeling sensory and motor decisions using Bayesian decision theory. Examples are the work of Landy, Jacobs, Jordan, Knill, Kording and Wolpert, and Goldreich.
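A standard model from this literature is reliability-weighted cue combination: two independent Gaussian cues (say, a visual and a haptic estimate of an object's size) are fused by weighting each in proportion to its inverse variance. A hedged sketch with invented numbers:

```python
# Minimum-variance (Bayes-optimal) fusion of two independent Gaussian cues
# (e.g. a visual and a haptic size estimate; all numbers are invented).

def combine(mu_v, var_v, mu_h, var_h):
    """Reliability-weighted combination of two Gaussian cues."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)   # weight = relative reliability
    mu = w_v * mu_v + (1 - w_v) * mu_h
    var = 1 / (1 / var_v + 1 / var_h)             # lower than either cue alone
    return mu, var

mu, var = combine(10.0, 1.0, 12.0, 4.0)  # the sharper visual cue dominates
```

The fused estimate sits closer to the more reliable cue, and its variance is smaller than that of either cue on its own, a signature repeatedly tested in psychophysical experiments.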
== Neural coding ==
Many theoretical studies ask how the nervous system could implement Bayesian algorithms. Examples are the work of Pouget, Zemel, Deneve, Latham, Hinton and Dayan. George and Hawkins published a paper that establishes a model of cortical information processing called hierarchical temporal memory that is based on Bayesian network of Markov chains. They further map this mathematical model to the existing knowledge about the architecture of cortex and show how neurons could recognize patterns by hierarchical Bayesian inference.
== Electrophysiology ==
A number of recent electrophysiological studies focus on the representation of probabilities in the nervous system. Examples are the work of Shadlen and Schultz.
== Predictive coding ==
Predictive coding is a neurobiologically plausible scheme for inferring the causes of sensory input based on minimizing prediction error. These schemes are related formally to Kalman filtering and other Bayesian update schemes.
== Free energy ==
During the 1990s some researchers such as Geoffrey Hinton and Karl Friston began examining the concept of free energy as a calculably tractable measure of the discrepancy between actual features of the world and representations of those features captured by neural network models. A synthesis has been attempted recently by Karl Friston, in which the Bayesian brain emerges from a general principle of free energy minimisation. In this framework, both action and perception are seen as a consequence of suppressing free-energy, leading to perceptual and active inference and a more embodied (enactive) view of the Bayesian brain. Using variational Bayesian methods, it can be shown how internal models of the world are updated by sensory information to minimize free energy or the discrepancy between sensory input and predictions of that input. This can be cast (in neurobiologically plausible terms) as predictive coding or, more generally, Bayesian filtering.
According to Friston:
"The free-energy considered here represents a bound on the surprise inherent in any exchange with the environment, under expectations encoded by its state or configuration. A system can minimise free energy by changing its configuration to change the way it samples the environment, or to change its expectations. These changes correspond to action and perception, respectively, and lead to an adaptive exchange with the environment that is characteristic of biological systems. This treatment implies that the system’s state and structure encode an implicit and probabilistic model of the environment."
This area of research was summarized in terms understandable by the layperson in a 2008 article in New Scientist that offered a unifying theory of brain function. Friston makes the following claims about the explanatory power of the theory:
"This model of brain function can explain a wide range of anatomical and physiological aspects of brain systems; for example, the hierarchical deployment of cortical areas, recurrent architectures using forward and backward connections and functional asymmetries in these connections. In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena like repetition suppression, mismatch negativity and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, e.g., priming, and global precedence."
"It is fairly easy to show that both perceptual inference and learning rest on a minimisation of free energy or suppression of prediction error."
== See also ==
== References ==
== External links ==
Universal Darwinism – Karl Friston Archived 2020-02-07 at the Wayback Machine
Bayesian inference of phylogeny combines the information in the prior and in the data likelihood to create the so-called posterior probability of trees, which is the probability that the tree is correct given the data, the prior and the likelihood model. Bayesian inference was introduced into molecular phylogenetics in the 1990s by three independent groups: Bruce Rannala and Ziheng Yang in Berkeley, Bob Mau in Madison, and Shuying Li at the University of Iowa, the last two being PhD students at the time. The approach has become very popular since the release of the MrBayes software in 2001, and is now one of the most popular methods in molecular phylogenetics.
== Bayesian inference of phylogeny background and bases ==
Bayesian inference refers to a probabilistic method developed by Reverend Thomas Bayes based on Bayes' theorem. Published posthumously in 1763 it was the first expression of inverse probability and the basis of Bayesian inference. Independently, unaware of Bayes' work, Pierre-Simon Laplace developed Bayes' theorem in 1774.
Bayesian inference, or the inverse probability method, was the standard approach in statistical thinking until the early 1900s, before R. A. Fisher developed what is now known as classical/frequentist/Fisherian inference. Computational difficulties and philosophical objections had prevented the widespread adoption of the Bayesian approach until the 1990s, when Markov chain Monte Carlo (MCMC) algorithms revolutionized Bayesian computation.
The Bayesian approach to phylogenetic reconstruction combines the prior probability of a tree P(A) with the likelihood of the data (B) to produce a posterior probability distribution on trees P(A|B). The posterior probability of a tree will be the probability that the tree is correct, given the prior, the data, and the correctness of the likelihood model.
MCMC methods can be described in three steps: first, a new state for the Markov chain is proposed using a stochastic mechanism. Second, the acceptance probability of this new state is calculated. Third, a uniform random number on (0,1) is drawn; if it is less than the acceptance probability, the new state is accepted and the state of the chain is updated. This process is run thousands or millions of times. The number of times a single tree is visited during the course of the chain is an approximation of its posterior probability. Some of the most common algorithms used in MCMC methods include the Metropolis–Hastings algorithm, Metropolis-coupled MCMC (MC³) and the LOCAL algorithm of Larget and Simon.
=== Metropolis–Hastings algorithm ===
One of the most common MCMC methods used is the Metropolis–Hastings algorithm, a modified version of the original Metropolis algorithm. It is a widely used method for sampling randomly from complicated, multi-dimensional probability distributions. The Metropolis algorithm is described in the following steps:
An initial tree, Ti, is randomly selected.
A neighbour tree, Tj, is selected from the collection of trees.
The ratio, R, of the probabilities (or probability density functions) of Tj and Ti is computed as follows: R = f(Tj)/f(Ti)
If R ≥ 1, Tj is accepted as the current tree.
If R < 1, Tj is accepted as the current tree with probability R, otherwise Ti is kept.
At this point the process is repeated from Step 2 N times.
The algorithm keeps running until it reaches an equilibrium distribution. It also assumes that the probability of proposing a new tree Tj when we are at the old tree state Ti is the same as the probability of proposing Ti when we are at Tj. When this is not the case, Hastings corrections are applied.
The aim of the Metropolis–Hastings algorithm is to produce a collection of states with a desired distribution, run until the Markov process reaches a stationary distribution. The algorithm has two components:
A potential transition from one state to another (i → j) using a transition probability function qi,j
Movement of the chain to state j with probability αi,j; the chain remains in i with probability 1 − αi,j.
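The accept/reject rule above can be sketched in a few lines of Python. The discrete state space, the target weights, and the nearest-neighbour proposal below are illustrative stand-ins for a space of candidate trees, not part of any phylogenetics package:

```python
import random

def metropolis(f, n_states, n_iter, seed=0):
    """Sample states with probability proportional to f[i] using the
    Metropolis algorithm with a symmetric nearest-neighbour proposal."""
    rng = random.Random(seed)
    state = rng.randrange(n_states)
    visits = [0] * n_states
    for _ in range(n_iter):
        # Step 1: propose a neighbouring state (symmetric proposal,
        # so no Hastings correction is needed).
        proposal = (state + rng.choice([-1, 1])) % n_states
        # Step 2: ratio of target probabilities, R = f(Tj) / f(Ti).
        r = f[proposal] / f[state]
        # Step 3: accept if R >= 1, else accept with probability R.
        if r >= 1 or rng.random() < r:
            state = proposal
        visits[state] += 1
    return visits

# Unnormalized target weights over 4 states.
weights = [1.0, 2.0, 4.0, 1.0]
counts = metropolis(weights, 4, 200_000)
freqs = [c / sum(counts) for c in counts]
# The visit frequencies approximate the normalized target distribution.
```

Over a long run the fraction of time spent in each state approximates its (posterior) probability, which is exactly how tree posterior probabilities are estimated from an MCMC sample.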
=== Metropolis-coupled MCMC ===
The Metropolis-coupled MCMC algorithm (MC³) has been proposed to address a practical concern: a single Markov chain may fail to move between peaks when the target distribution has multiple local peaks, separated by low valleys, as are known to exist in tree space. This is the case during heuristic tree search under maximum parsimony (MP), maximum likelihood (ML), and minimum evolution (ME) criteria, and the same can be expected for stochastic tree search using MCMC. The problem results in samples that do not correctly approximate the posterior density. MC³ improves the mixing of Markov chains in the presence of multiple local peaks in the posterior density. It runs multiple (m) chains in parallel, each for n iterations and with different stationary distributions {\displaystyle \pi _{j}(.)}, {\displaystyle j=1,2,\ldots ,m}, where the first one, {\displaystyle \pi _{1}=\pi }, is the target density, while {\displaystyle \pi _{j}}, {\displaystyle j=2,3,\ldots ,m}, are chosen to improve mixing. For example, one can choose incremental heating of the form:

{\displaystyle \pi _{j}(\theta )=\pi (\theta )^{1/[1+\lambda (j-1)]},\ \ \lambda >0,}
so that the first chain is the cold chain with the correct target density, while chains {\displaystyle 2,3,\ldots ,m} are heated chains. Note that raising the density {\displaystyle \pi (.)} to the power {\displaystyle 1/T} with {\displaystyle T>1} has the effect of flattening out the distribution, similar to heating a metal. In such a distribution, it is easier to traverse between peaks (separated by valleys) than in the original distribution. After each iteration, a swap of states between two randomly chosen chains is proposed through a Metropolis-type step. Let {\displaystyle \theta ^{(j)}} be the current state in chain {\displaystyle j}, {\displaystyle j=1,2,\ldots ,m}. A swap between the states of chains {\displaystyle i} and {\displaystyle j} is accepted with probability:

{\displaystyle \alpha ={\frac {\pi _{i}(\theta ^{(j)})\pi _{j}(\theta ^{(i)})}{\pi _{i}(\theta ^{(i)})\pi _{j}(\theta ^{(j)})}}}

At the end of the run, output from only the cold chain is used, while the output from the hot chains is discarded. Heuristically, the hot chains will visit the local peaks rather easily, and swapping states between chains will let the cold chain occasionally jump valleys, leading to better mixing. However, if the ratio {\displaystyle \pi _{i}(\theta )/\pi _{j}(\theta )} is unstable, proposed swaps will seldom be accepted. This is the reason for using several chains which differ only incrementally.

An obvious disadvantage of the algorithm is that {\displaystyle m} chains are run and only one chain is used for inference. For this reason, {\displaystyle \mathrm {MC} ^{3}} is ideally suited for implementation on parallel machines, since each chain will in general require the same amount of computation per iteration.
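As a rough illustration of the scheme (a sketch under invented assumptions, not a production implementation), the following runs heated chains with stationary densities π(θ)^{1/[1+λ(j−1)]} on a toy bimodal discrete target and proposes a Metropolis-type state swap between two randomly chosen chains at each iteration:

```python
import math
import random

def mc3(log_target, n_states, n_chains, lam, n_iter, seed=0):
    """Toy Metropolis-coupled MCMC on the states {0, ..., n_states - 1}.

    Chain j (j = 1..m) targets pi(x)^(1/(1 + lam*(j-1))) -- incremental
    heating -- and only the cold chain (j = 1) is recorded."""
    rng = random.Random(seed)
    betas = [1.0 / (1.0 + lam * j) for j in range(n_chains)]  # betas[0] = 1: cold
    states = [rng.randrange(n_states) for _ in range(n_chains)]
    cold_visits = [0] * n_states
    for _ in range(n_iter):
        # Within-chain Metropolis update with a +/-1 proposal.
        for j in range(n_chains):
            prop = states[j] + rng.choice([-1, 1])
            if 0 <= prop < n_states:
                log_r = betas[j] * (log_target(prop) - log_target(states[j]))
                if log_r >= 0 or rng.random() < math.exp(log_r):
                    states[j] = prop
        # Metropolis-type swap between two randomly chosen chains; the
        # log acceptance ratio simplifies to (beta_i - beta_j)(L_j - L_i).
        i, j = rng.sample(range(n_chains), 2)
        log_a = (betas[i] - betas[j]) * (log_target(states[j]) - log_target(states[i]))
        if log_a >= 0 or rng.random() < math.exp(log_a):
            states[i], states[j] = states[j], states[i]
        cold_visits[states[0]] += 1
    return cold_visits

# Bimodal target: peaks at states 1 and 8, a deep valley in between.
weights = [1.0, 40.0, 1.0, 0.01, 0.01, 0.01, 0.01, 1.0, 40.0, 1.0]
visits = mc3(lambda x: math.log(weights[x]), 10, 4, 0.5, 100_000)
# The cold chain visits both peaks despite the deep valley between them.
```

Because the hottest chain sees a flattened target it crosses the valley easily, and the swap moves let the cold chain jump between the two peaks, which a single unheated chain would do only very rarely.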
=== LOCAL algorithm of Larget and Simon ===
The LOCAL algorithm offers a computational advantage over previous methods and demonstrates that a Bayesian approach can assess uncertainty in a computationally practical way for larger trees. The LOCAL algorithm is an improvement of the GLOBAL algorithm presented in Mau, Newton and Larget (1999), in which all branch lengths are changed in every cycle. The LOCAL algorithm modifies the tree by selecting an internal branch of the tree at random. The nodes at the ends of this branch are each connected to two other branches. One of each pair is chosen at random. Imagine taking these three selected edges and stringing them like a clothesline from left to right, where the direction (left/right) is also selected at random. The two endpoints of the first branch selected will have a sub-tree hanging like a piece of clothing strung to the line. The algorithm proceeds by multiplying the three selected branches by a common random amount, akin to stretching or shrinking the clothesline. Finally the leftmost of the two hanging sub-trees is disconnected and reattached to the clothesline at a location selected uniformly at random. This would be the candidate tree.
Suppose we began by selecting the internal branch with length {\displaystyle t_{8}} that separates taxa {\displaystyle A} and {\displaystyle B} from the rest. Suppose also that we have (randomly) selected branches with lengths {\displaystyle t_{1}} and {\displaystyle t_{9}} from each side, and that we oriented these branches. Let {\displaystyle m=t_{1}+t_{8}+t_{9}} be the current length of the clothesline. We select the new length to be {\displaystyle m^{\star }=m\exp(\lambda (U_{1}-0.5))}, where {\displaystyle U_{1}} is a uniform random variable on {\displaystyle (0,1)}. Then for the LOCAL algorithm, the acceptance probability can be computed to be:

{\displaystyle {\frac {h(y)}{h(x)}}\times {\frac {{m^{\star }}^{3}}{m^{3}}}}
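The branch-scaling step and its Hastings factor can be sketched as follows; the function name, the value of λ, and the example branch lengths are illustrative assumptions:

```python
import math
import random

def local_branch_proposal(t1, t8, t9, lam, rng):
    """Branch-scaling part of the LOCAL proposal (a sketch).

    The three selected branch lengths -- the "clothesline" -- are scaled
    by a common random factor m*/m; this asymmetric proposal contributes
    a Hastings factor of (m*/m)^3 to the acceptance probability."""
    m = t1 + t8 + t9                        # current clothesline length
    u1 = rng.random()                       # U_1 ~ Uniform(0, 1)
    m_star = m * math.exp(lam * (u1 - 0.5)) # proposed new length
    scale = m_star / m
    return (t1 * scale, t8 * scale, t9 * scale), scale ** 3

rng = random.Random(1)
(t1_new, t8_new, t9_new), hastings = local_branch_proposal(0.1, 0.05, 0.2, 2.0, rng)
# The candidate is then accepted with probability
# min(1, (h(y) / h(x)) * hastings), where h is the unnormalized posterior.
```

The factor (m*/m)³ arises because three branch lengths are scaled by the same random multiplier, and it multiplies the posterior ratio h(y)/h(x) in the acceptance probability.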
==== Assessing convergence ====
To estimate a branch length {\displaystyle t} of a 2-taxon tree under JC, in which {\displaystyle n_{1}} sites are unvaried and {\displaystyle n_{2}} are variable, assume an exponential prior distribution with rate {\displaystyle \lambda }. The density is {\displaystyle p(t)=\lambda e^{-\lambda t}}. The probabilities of the possible site patterns are:
{\displaystyle 1/4\left(1/4+3/4e^{-4/3t}\right)}

for unvaried sites, and

{\displaystyle 1/4\left(1/4-1/4e^{-4/3t}\right)}

for variable sites.
Thus the unnormalized posterior distribution is:

{\displaystyle h(t)=\left(1/4\right)^{n_{1}+n_{2}}\left(1/4+3/4e^{-4/3t}\right)^{n_{1}}\left(1/4-1/4e^{-4/3t}\right)^{n_{2}}\left(\lambda e^{-\lambda t}\right)}
Update the branch length by choosing a new value uniformly at random from a window of half-width {\displaystyle w} centered at the current value:

{\displaystyle t^{\star }=|t+U|}

where {\displaystyle U} is uniformly distributed between {\displaystyle -w} and {\displaystyle w}. The acceptance probability is:

{\displaystyle h(t^{\star })/h(t)}
Example: {\displaystyle n_{1}=70}, {\displaystyle n_{2}=30}. We will compare results for two values of {\displaystyle w}, {\displaystyle w=0.1} and {\displaystyle w=0.5}. In each case, we will begin with an initial length of {\displaystyle 5} and update the length {\displaystyle 2000} times.
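This experiment can be run directly. The sketch below assumes a prior rate λ = 1 (the text leaves λ unspecified) and drops the constant (1/4)^(n1+n2), which cancels in the acceptance ratio:

```python
import math
import random

LAM = 1.0  # assumed rate of the exponential prior (not fixed in the text)

def h(t, n1=70, n2=30):
    """Unnormalized posterior for the two-taxon JC branch length; the
    constant factor (1/4)^(n1+n2) is dropped since it cancels in ratios."""
    if t <= 0:
        return 0.0
    e = math.exp(-4.0 * t / 3.0)
    return (0.25 + 0.75 * e) ** n1 * (0.25 - 0.25 * e) ** n2 * LAM * math.exp(-LAM * t)

def sliding_window_mcmc(w, n_iter=2000, t0=5.0, seed=0):
    """Sliding-window Metropolis sampler: t* = |t + U|, U ~ Uniform(-w, w)."""
    rng = random.Random(seed)
    t, chain = t0, []
    for _ in range(n_iter):
        t_star = abs(t + rng.uniform(-w, w))  # reflect at zero
        if rng.random() < h(t_star) / h(t):   # accept with prob h(t*)/h(t)
            t = t_star
        chain.append(t)
    return chain

chain_narrow = sliding_window_mcmc(w=0.1)
chain_wide = sliding_window_mcmc(w=0.5)
# Starting from t = 5, the w = 0.5 chain reaches the posterior mode
# (near t ~ 0.4) far sooner than the w = 0.1 chain.
```

Comparing the two traces shows the effect of the window width on convergence: the narrow window produces a slowly creeping chain, while the wide window reaches the stationary region quickly, which is why convergence should be checked before trusting posterior summaries.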
== Maximum parsimony and maximum likelihood ==
There are many approaches to reconstructing phylogenetic trees, each with advantages and disadvantages, and there is no straightforward answer to “what is the best method?”. Maximum parsimony (MP) and maximum likelihood (ML) are traditional methods widely used for the estimation of phylogenies and both use character information directly, as Bayesian methods do.
Maximum parsimony recovers one or more optimal trees based on a matrix of discrete characters for a certain group of taxa and it does not require a model of evolutionary change. MP gives the simplest explanation for a given set of data, reconstructing a phylogenetic tree that includes as few changes across the sequences as possible. The support of the tree branches is represented by bootstrap percentages. For the same reason that it has been widely used, its simplicity, MP has also received criticism and has been pushed into the background by ML and Bayesian methods. MP presents several problems and limitations. As shown by Felsenstein (1978), MP might be statistically inconsistent, meaning that as more and more data (e.g. sequence length) are accumulated, results can converge on an incorrect tree and lead to long branch attraction, a phylogenetic phenomenon where taxa with long branches (numerous character state changes) tend to appear more closely related in the phylogeny than they really are. For morphological data, recent simulation studies suggest that parsimony may be less accurate than trees built using Bayesian approaches, potentially due to overprecision, although this has been disputed. Studies using novel simulation methods have demonstrated that differences between inference methods result from the search strategy and consensus method employed, rather than the optimization used.
As in maximum parsimony, maximum likelihood will evaluate alternative trees. However it considers the probability of each tree explaining the given data based on a model of evolution. In this case, the tree with the highest probability of explaining the data is chosen over the other ones. In other words, it compares how different trees predict the observed data. The introduction of a model of evolution in ML analyses presents an advantage over MP as the probability of nucleotide substitutions and rates of these substitutions are taken into account, explaining the phylogenetic relationships of taxa in a more realistic way. An important consideration of this method is the branch length, which parsimony ignores, with changes being more likely to happen along long branches than short ones. This approach might eliminate long branch attraction and explain the greater consistency of ML over MP. Although considered by many to be the best approach to inferring phylogenies from a theoretical point of view, ML is computationally intensive and it is almost impossible to explore all trees as there are too many. Bayesian inference also incorporates a model of evolution and the main advantages over MP and ML are that it is computationally more efficient than traditional methods, it quantifies and addresses the source of uncertainty and is able to incorporate complex models of evolution.
== Pitfalls and controversies ==
Bootstrap values vs posterior probabilities. It has been observed that bootstrap support values, calculated under parsimony or maximum likelihood, tend to be lower than the posterior probabilities obtained by Bayesian inference. This leads to a number of questions such as: Do posterior probabilities lead to overconfidence in the results? Are bootstrap values more robust than posterior probabilities? One fact underlying this controversy is that all data are used during Bayesian analysis and the calculation of posterior probabilities, while the nature of bootstrapping means that most bootstrap replicates will be missing some of the original data. As a result, bipartitions (branches) supported by relatively few characters in the dataset may receive very high posterior probabilities but moderate or even low bootstrap support, as many of the bootstrap replicates don't contain enough of the critical characters to retrieve the bipartition.
Controversy of using prior probabilities. Using prior probabilities for Bayesian analysis has been seen by many as an advantage as it provides a way of incorporating information from sources other than the data being analyzed. However, when such external information is lacking, one is forced to use a prior even if it is impossible to use a statistical distribution to represent total ignorance. It is also a concern that the Bayesian posterior probabilities may reflect subjective opinions when the prior is arbitrary and subjective.
Model choice. The results of the Bayesian analysis of a phylogeny are directly correlated to the model of evolution chosen so it is important to choose a model that fits the observed data, otherwise inferences in the phylogeny will be erroneous. Many scientists have raised questions about the interpretation of Bayesian inference when the model is unknown or incorrect. For example, an oversimplified model might give higher posterior probabilities.
== MrBayes software ==
MrBayes is a free software tool that performs Bayesian inference of phylogeny. It was originally written by John P. Huelsenbeck and Fredrik Ronquist in 2001. As Bayesian methods grew in popularity, MrBayes became one of the programs of choice for many molecular phylogeneticists. It is offered for Macintosh, Windows, and UNIX operating systems and has a command-line interface. The program uses the standard MCMC algorithm as well as the Metropolis-coupled MCMC variant. MrBayes reads aligned matrices of sequences (DNA or amino acids) in the standard NEXUS format.
MrBayes uses MCMC to approximate the posterior probabilities of trees. The user can change assumptions of the substitution model, priors and the details of the MC³ analysis. It also allows the user to remove and add taxa and characters to the analysis. The program includes, among several nucleotide models, the most standard model of DNA substitution, the 4x4 also called JC69, which assumes that changes across nucleotides occur with equal probability. It also implements a number of 20x20 models of amino acid substitution, and codon models of DNA substitution. It offers different methods for relaxing the assumption of equal substitutions rates across nucleotide sites. MrBayes is also able to infer ancestral states accommodating uncertainty to the phylogenetic tree and model parameters.
MrBayes 3 was a completely reorganized and restructured version of the original MrBayes. The main novelty was the ability of the software to accommodate heterogeneity of data sets. This new framework allows the user to mix models and take advantage of the efficiency of Bayesian MCMC analysis when dealing with different types of data (e.g. protein, nucleotide, and morphological). It uses Metropolis-coupled MCMC by default.
MrBayes 3.2 was released in 2012. This version allows users to run multiple analyses in parallel. It also provides faster likelihood calculations and allows these calculations to be delegated to graphics processing units (GPUs). Version 3.2 provides wider output options compatible with FigTree and other tree viewers.
== List of phylogenetics software ==
This table includes some of the most common phylogenetic software used for inferring phylogenies under a Bayesian framework. Some of them do not use exclusively Bayesian methods.
== Applications ==
Bayesian inference has been used extensively by molecular phylogeneticists for a wide range of applications. Some of these include:
Inference of phylogenies.
Inference and evaluation of uncertainty of phylogenies.
Inference of ancestral character state evolution.
Inference of ancestral areas.
Molecular dating analysis.
Modeling dynamics of species diversification and extinction.
Elucidation of patterns in pathogen dispersal.
Inference of phenotypic trait evolution.
== References ==
== External links ==
MrBayes official website
BEAST official website
The Bayes factor is a ratio of two competing statistical models represented by their evidence, and is used to quantify the support for one model over the other. The models in question can have a common set of parameters, such as a null hypothesis and an alternative, but this is not necessary; for instance, it could also be a non-linear model compared to its linear approximation. The Bayes factor can be thought of as a Bayesian analog to the likelihood-ratio test, although it uses the integrated (i.e., marginal) likelihood rather than the maximized likelihood. As such, both quantities only coincide under simple hypotheses (e.g., two specific parameter values). Also, in contrast with null hypothesis significance testing, Bayes factors support evaluation of evidence in favor of a null hypothesis, rather than only allowing the null to be rejected or not rejected.
Although conceptually simple, the computation of the Bayes factor can be challenging depending on the complexity of the model and the hypotheses. Since closed-form expressions of the marginal likelihood are generally not available, numerical approximations based on MCMC samples have been suggested. A widely used approach is the method proposed by Chib (1995). Chib and Jeliazkov (2001) later extended this method to handle cases where Metropolis-Hastings samplers are used. For certain special cases, simplified algebraic expressions can be derived; for instance, the Savage–Dickey density ratio in the case of a precise (equality constrained) hypothesis against an unrestricted alternative. Another approximation, derived by applying Laplace's approximation to the integrated likelihoods, is known as the Bayesian information criterion (BIC); in large data sets the Bayes factor will approach the BIC as the influence of the priors wanes. In small data sets, priors generally matter and must not be improper since the Bayes factor will be undefined if either of the two integrals in its ratio is not finite.
== Definition ==
The Bayes factor is the ratio of two marginal likelihoods; that is, the likelihoods of two statistical models integrated over the prior probabilities of their parameters.
The posterior probability {\displaystyle \Pr(M|D)} of a model M given data D is given by Bayes' theorem:

{\displaystyle \Pr(M|D)={\frac {\Pr(D|M)\Pr(M)}{\Pr(D)}}.}
The key data-dependent term {\displaystyle \Pr(D|M)} represents the probability that some data are produced under the assumption of the model M; evaluating it correctly is the key to Bayesian model comparison.
Given a model selection problem in which one wishes to choose between two models on the basis of observed data D, the plausibility of the two different models M1 and M2, parametrised by model parameter vectors {\displaystyle \theta _{1}} and {\displaystyle \theta _{2}}, is assessed by the Bayes factor K given by

{\displaystyle K={\frac {\Pr(D|M_{1})}{\Pr(D|M_{2})}}={\frac {\int \Pr(\theta _{1}|M_{1})\Pr(D|\theta _{1},M_{1})\,d\theta _{1}}{\int \Pr(\theta _{2}|M_{2})\Pr(D|\theta _{2},M_{2})\,d\theta _{2}}}={\frac {\frac {\Pr(M_{1}|D)\Pr(D)}{\Pr(M_{1})}}{\frac {\Pr(M_{2}|D)\Pr(D)}{\Pr(M_{2})}}}={\frac {\Pr(M_{1}|D)}{\Pr(M_{2}|D)}}{\frac {\Pr(M_{2})}{\Pr(M_{1})}}.}
When the two models have equal prior probability, so that {\displaystyle \Pr(M_{1})=\Pr(M_{2})}, the Bayes factor is equal to the ratio of the posterior probabilities of M1 and M2. If instead of the Bayes factor integral, the likelihood corresponding to the maximum likelihood estimate of the parameter for each statistical model is used, then the test becomes a classical likelihood-ratio test. Unlike a likelihood-ratio test, this Bayesian model comparison does not depend on any single set of parameters, as it integrates over all parameters in each model (with respect to the respective priors). An advantage of the use of Bayes factors is that it automatically, and quite naturally, includes a penalty for including too much model structure. It thus guards against overfitting. For models where an explicit version of the likelihood is not available or too costly to evaluate numerically, approximate Bayesian computation can be used for model selection in a Bayesian framework,
with the caveat that approximate-Bayesian estimates of Bayes factors are often biased.
Other approaches are:
to treat model comparison as a decision problem, computing the expected value or cost of each model choice;
to use minimum message length (MML).
to use minimum description length (MDL).
== Interpretation ==
A value of K > 1 means that M1 is more strongly supported by the data under consideration than M2. Note that classical hypothesis testing gives one hypothesis (or model) preferred status (the 'null hypothesis'), and only considers evidence against it. The fact that a Bayes factor can produce evidence for and not just against a null hypothesis is one of the key advantages of this analysis method.
Harold Jeffreys gave a scale (Jeffreys' scale) for interpretation of {\displaystyle K}:

The second column gives the corresponding weights of evidence in decihartleys (also known as decibans); bits are added in the third column for clarity. The table continues in the other direction, so that, for example, {\displaystyle K\leq 10^{-2}} is decisive evidence for {\displaystyle M_{2}}.
An alternative table, widely cited, is provided by Kass and Raftery (1995):
According to I. J. Good, the just-noticeable difference of humans in their everyday life, when it comes to a change in degree of belief in a hypothesis, is about a factor of 1.3, or 1 deciban, or 1/3 of a bit, or a shift from 1:1 to 5:4 in odds ratio.
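The unit conversions behind that statement are simple logarithms; the helper name below is illustrative:

```python
from math import log10, log2

def evidence_units(K):
    """Convert a Bayes factor K into decihartleys (decibans) and bits."""
    return 10 * log10(K), log2(K)

db, bits = evidence_units(1.3)
# A factor of 1.3 is about 1.1 decibans and about 0.38 bits,
# matching Good's "just-noticeable" grain of evidence.
```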
== Example ==
Suppose we have a random variable that produces either a success or a failure. We want to compare a model M1 where the probability of success is q = 1⁄2, and another model M2 where q is unknown and we take a prior distribution for q that is uniform on [0,1]. We take a sample of 200, and find 115 successes and 85 failures. The likelihood can be calculated according to the binomial distribution:
{\displaystyle {{200 \choose 115}q^{115}(1-q)^{85}}.}
Thus we have for M1

{\displaystyle P(X=115\mid M_{1})={200 \choose 115}\left({1 \over 2}\right)^{200}\approx 0.006}
whereas for M2 we have

{\displaystyle P(X=115\mid M_{2})=\int _{0}^{1}{200 \choose 115}q^{115}(1-q)^{85}dq={1 \over 201}\approx 0.005}
The ratio is then 1.2, which is "barely worth mentioning" even if it points very slightly towards M1.
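Both marginal likelihoods can be verified with exact rational arithmetic; in particular, the uniform-prior integral evaluates to exactly 1/201:

```python
from fractions import Fraction
from math import comb, factorial

n, k = 200, 115

# M1: success probability fixed at q = 1/2.
p_m1 = comb(n, k) * Fraction(1, 2) ** n

# M2: uniform prior on q; the Beta integral
#   C(n,k) * integral of q^k (1-q)^(n-k) dq = C(n,k) * k!(n-k)!/(n+1)!
# evaluates to exactly 1/(n+1).
p_m2 = Fraction(comb(n, k) * factorial(k) * factorial(n - k), factorial(n + 1))

K = p_m1 / p_m2    # Bayes factor in favour of M1
# float(p_m1) ~ 0.00596, float(p_m2) = 1/201 ~ 0.00498, float(K) ~ 1.2
```

The exact value 1/(n+1) reflects the fact that, under a uniform prior on q, every observed success count from 0 to n is equally likely a priori.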
A frequentist hypothesis test of M1 (here considered as a null hypothesis) would have produced a very different result. Such a test says that M1 should be rejected at the 5% significance level, since the probability of getting 115 or more successes from a sample of 200 if q = 1⁄2 is 0.02, and as a two-tailed test of getting a figure as extreme as or more extreme than 115 is 0.04. Note that 115 is more than two standard deviations away from 100. Thus, whereas a frequentist hypothesis test would yield significant results at the 5% significance level, the Bayes factor hardly considers this to be an extreme result. Note, however, that a non-uniform prior (for example one that reflects the fact that you expect the number of success and failures to be of the same order of magnitude) could result in a Bayes factor that is more in agreement with the frequentist hypothesis test.
A classical likelihood-ratio test would have found the maximum likelihood estimate for q, namely {\displaystyle {\hat {q}}={\frac {115}{200}}=0.575}, whence

{\displaystyle \textstyle P(X=115\mid M_{2})={{200 \choose 115}{\hat {q}}^{115}(1-{\hat {q}})^{85}}\approx 0.06}

(rather than averaging over all possible q). That gives a likelihood ratio of 0.1 and points towards M2.
M2 is a more complex model than M1 because it has a free parameter which allows it to model the data more closely. The ability of Bayes factors to take this into account is a reason why Bayesian inference has been put forward as a theoretical justification for and generalisation of Occam's razor, reducing Type I errors.
On the other hand, the modern method of relative likelihood takes into account the number of free parameters in the models, unlike the classical likelihood ratio. The relative likelihood method could be applied as follows. Model M1 has 0 parameters, and so its Akaike information criterion (AIC) value is {\displaystyle 2\cdot 0-2\cdot \ln(0.005956)\approx 10.2467}. Model M2 has 1 parameter, and so its AIC value is {\displaystyle 2\cdot 1-2\cdot \ln(0.056991)\approx 7.7297}. Hence M1 is about

{\displaystyle \exp \left({\frac {7.7297-10.2467}{2}}\right)\approx 0.284}

times as probable as M2 to minimize the information loss. Thus M2 is slightly preferred, but M1 cannot be excluded.
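The AIC comparison can be reproduced from the two maximized likelihoods; this is a short numerical check, not tied to any particular statistics package:

```python
from math import comb, exp, log

n, k = 200, 115

L1 = comb(n, k) * 0.5 ** n                      # maximized likelihood, M1
q_hat = k / n                                   # MLE under M2: 0.575
L2 = comb(n, k) * q_hat ** k * (1 - q_hat) ** (n - k)

aic1 = 2 * 0 - 2 * log(L1)    # M1: no free parameters
aic2 = 2 * 1 - 2 * log(L2)    # M2: one free parameter
relative_likelihood = exp((aic2 - aic1) / 2)
# aic1 ~ 10.25, aic2 ~ 7.73, relative_likelihood ~ 0.28
```

Note that AIC penalizes M2 for its free parameter but still prefers it here, whereas the Bayes factor, which averages rather than maximizes over q, mildly favoured M1.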
== See also ==
Akaike information criterion
Approximate Bayesian computation
Bayesian information criterion
Deviance information criterion
Lindley's paradox
Minimum message length
Model selection
E-Value
Statistical ratios
Odds ratio
Relative risk
== References ==
== Further reading ==
Bernardo, J.; Smith, A. F. M. (1994). Bayesian Theory. John Wiley. ISBN 0-471-92416-4.
Denison, D. G. T.; Holmes, C. C.; Mallick, B. K.; Smith, A. F. M. (2002). Bayesian Methods for Nonlinear Classification and Regression. John Wiley. ISBN 0-471-49036-9.
Dienes, Z. (2019). "How do I know what my theory predicts?". Advances in Methods and Practices in Psychological Science. doi:10.1177/2515245919876960.
Duda, Richard O.; Hart, Peter E.; Stork, David G. (2000). "Section 9.6.5". Pattern classification (2nd ed.). Wiley. pp. 487–489. ISBN 0-471-05669-3.
Gelman, A.; Carlin, J.; Stern, H.; Rubin, D. (1995). Bayesian Data Analysis. London: Chapman & Hall. ISBN 0-412-03991-5.
Jaynes, E. T. (1994). Probability Theory: The Logic of Science, chapter 24.
Kadane, Joseph B.; Dickey, James M. (1980). "Bayesian Decision Theory and the Simplification of Models". In Kmenta, Jan; Ramsey, James B. (eds.). Evaluation of Econometric Models. New York: Academic Press. pp. 245–268. ISBN 0-12-416550-8.
Lee, P. M. (2012). Bayesian Statistics: an introduction. Wiley. ISBN 9781118332573.
Richard, Mark; Vecer, Jan (2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120.
Winkler, Robert (2003). Introduction to Bayesian Inference and Decision (2nd ed.). Probabilistic. ISBN 0-9647938-4-9.
== External links ==
BayesFactor —an R package for computing Bayes factors in common research designs
Bayes factor calculator — Online calculator for informed Bayes factors
Bayes Factor Calculators Archived 2015-05-07 at the Wayback Machine —web-based version of much of the BayesFactor package | Wikipedia/Bayesian_model_selection |
In statistics and machine learning, ensemble methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone.
Unlike a statistical ensemble in statistical mechanics, which is usually infinite, a machine learning ensemble consists of only a concrete finite set of alternative models, but typically allows for much more flexible structure to exist among those alternatives.
== Overview ==
Supervised learning algorithms search through a hypothesis space to find a suitable hypothesis that will make good predictions for a particular problem. Even if this space contains hypotheses that are very well-suited to the problem, it may be very difficult to find a good one. Ensembles combine multiple hypotheses to form one which should be theoretically better.
Ensemble learning trains two or more machine learning algorithms on a specific classification or regression task. The algorithms within the ensemble model are generally referred to as "base models", "base learners", or "weak learners" in the literature. These base models can be constructed using a single modelling algorithm or several different algorithms. The idea is to train a diverse set of weak models on the same modelling task, such that each weak learner on its own has poor predictive ability (i.e., high bias), while across the weak learners the outcomes and error values exhibit high variance. Fundamentally, an ensemble learning model trains at least two high-bias (weak) and high-variance (diverse) models to be combined into a better-performing model. The set of weak models, which would not produce satisfactory predictive results individually, is combined or averaged to produce a single high-performing, accurate, and low-variance model that fits the task as required.
Ensemble learning typically refers to bagging (bootstrap aggregating), boosting or stacking/blending techniques to induce high variance among the base models. Bagging creates diversity by generating random samples from the training observations and fitting the same model to each different sample — also known as homogeneous parallel ensembles. Boosting follows an iterative process by sequentially training each base model on the up-weighted errors of the previous base model, producing an additive model to reduce the final model errors — also known as sequential ensemble learning. Stacking or blending consists of different base models, each trained independently (i.e. diverse/high variance) to be combined into the ensemble model — producing a heterogeneous parallel ensemble. Common applications of ensemble learning include random forests (an extension of bagging), Boosted Tree models, and Gradient Boosted Tree Models. Models in applications of stacking are generally more task-specific — such as combining clustering techniques with other parametric and/or non-parametric techniques.
Evaluating the prediction of an ensemble typically requires more computation than evaluating the prediction of a single model. In one sense, ensemble learning may be thought of as a way to compensate for poor learning algorithms by performing a lot of extra computation. On the other hand, the alternative is to do a lot more learning with one non-ensemble model. For the same increase in compute, storage, or communication resources, spreading that increase across two or more methods may improve overall accuracy more than spending it all on a single method. Fast algorithms such as decision trees are commonly used in ensemble methods (e.g., random forests), although slower algorithms can benefit from ensemble techniques as well.
By analogy, ensemble techniques have been used also in unsupervised learning scenarios, for example in consensus clustering or in anomaly detection.
== Ensemble theory ==
Empirically, ensembles tend to yield better results when there is a significant diversity among the models. Many ensemble methods, therefore, seek to promote diversity among the models they combine. Although perhaps non-intuitive, more random algorithms (like random decision trees) can be used to produce a stronger ensemble than very deliberate algorithms (like entropy-reducing decision trees). Using a variety of strong learning algorithms, however, has been shown to be more effective than using techniques that attempt to dumb-down the models in order to promote diversity. It is possible to increase diversity in the training stage of the model using correlation for regression tasks or using information measures such as cross entropy for classification tasks.
Theoretically, one can justify the diversity concept because a lower bound on the error rate of an ensemble system can be decomposed into accuracy, diversity, and a third term.
=== The geometric framework ===
Ensemble learning, including both regression and classification tasks, can be explained using a geometric framework. Within this framework, the output of each individual classifier or regressor for the entire dataset can be viewed as a point in a multi-dimensional space. Additionally, the target result is also represented as a point in this space, referred to as the "ideal point."
The Euclidean distance is used as the metric to measure both the performance of a single classifier or regressor (the distance between its point and the ideal point) and the dissimilarity between two classifiers or regressors (the distance between their respective points). This perspective transforms ensemble learning into a deterministic problem.
For example, within this geometric framework, it can be proved that averaging the outputs (scores) of all base classifiers or regressors can lead to equal or better results than the average of the individual models. It can also be proved that if the optimal weighting scheme is used, a weighted averaging approach can outperform any of the individual classifiers or regressors that make up the ensemble, or at least match the best of them.
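The averaging claim follows from convexity of the Euclidean norm: the distance from the averaged output to the ideal point is never larger than the average of the individual distances. A small numerical sketch (with made-up output points) illustrates this:

```python
import math

def dist(a, b):
    """Euclidean distance between two points in output space."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# hypothetical outputs of three regressors over a 3-example dataset,
# each viewed as a point in output space, plus the ideal point (targets)
ideal = [1.0, 0.0, 1.0]
outputs = [[0.9, 0.2, 0.7],
           [1.2, -0.1, 1.1],
           [0.8, 0.3, 0.9]]

# simple averaging ensemble: coordinate-wise mean of the outputs
avg = [sum(col) / len(outputs) for col in zip(*outputs)]

d_ensemble = dist(avg, ideal)
mean_d = sum(dist(o, ideal) for o in outputs) / len(outputs)
assert d_ensemble <= mean_d   # guaranteed by convexity of the norm
```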
== Ensemble size ==
While the number of component classifiers of an ensemble has a great impact on the accuracy of prediction, there is a limited number of studies addressing this problem. Determining the ensemble size a priori, as well as the volume and velocity of big data streams, make this even more crucial for online ensemble classifiers. Mostly statistical tests were used for determining the proper number of components. More recently, a theoretical framework suggested that there is an ideal number of component classifiers for an ensemble, such that having more or fewer classifiers than this number would deteriorate the accuracy. It is called "the law of diminishing returns in ensemble construction." This theoretical framework shows that using the same number of independent component classifiers as class labels gives the highest accuracy.
== Common types of ensembles ==
=== Bayes optimal classifier ===
The Bayes optimal classifier is a classification technique. It is an ensemble of all the hypotheses in the hypothesis space. On average, no other ensemble can outperform it. The naive Bayes classifier is a version of this that assumes that the data is conditionally independent given the class, which makes the computation more feasible. Each hypothesis is given a vote proportional to the likelihood that the training dataset would be sampled from a system if that hypothesis were true. To facilitate training data of finite size, the vote of each hypothesis is also multiplied by the prior probability of that hypothesis. The Bayes optimal classifier can be expressed with the following equation:
{\displaystyle y={\underset {c_{j}\in C}{\mathrm {argmax} }}\sum _{h_{i}\in H}{P(c_{j}|h_{i})P(T|h_{i})P(h_{i})}}
where {\displaystyle y} is the predicted class, {\displaystyle C} is the set of all possible classes, {\displaystyle H} is the hypothesis space, {\displaystyle P} refers to a probability, and {\displaystyle T} is the training data. As an ensemble, the Bayes optimal classifier represents a hypothesis that is not necessarily in {\displaystyle H}. The hypothesis represented by the Bayes optimal classifier, however, is the optimal hypothesis in ensemble space (the space of all possible ensembles consisting only of hypotheses in {\displaystyle H}).
This formula can be restated using Bayes' theorem, which says that the posterior is proportional to the likelihood times the prior:
{\displaystyle P(h_{i}|T)\propto P(T|h_{i})P(h_{i})}
hence,
{\displaystyle y={\underset {c_{j}\in C}{\mathrm {argmax} }}\sum _{h_{i}\in H}{P(c_{j}|h_{i})P(h_{i}|T)}}
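The sums above are intractable for realistic hypothesis spaces, but the weighted vote itself is simple. A minimal sketch with made-up values for the posteriors P(h|T) and the class probabilities P(c|h) (all names and numbers hypothetical):

```python
# hypothetical two-class problem with a three-hypothesis space:
# posterior[h] = P(h|T), class_probs[h][c] = P(c|h)
posterior = {"h1": 0.5, "h2": 0.3, "h3": 0.2}
class_probs = {
    "h1": {"pos": 0.9, "neg": 0.1},
    "h2": {"pos": 0.4, "neg": 0.6},
    "h3": {"pos": 0.2, "neg": 0.8},
}

def bayes_optimal(classes, posterior, class_probs):
    """argmax over classes of sum_h P(c|h) P(h|T)."""
    return max(classes,
               key=lambda c: sum(class_probs[h][c] * posterior[h]
                                 for h in posterior))

label = bayes_optimal(["pos", "neg"], posterior, class_probs)
```

Here the "pos" class wins (score 0.61 versus 0.39), even though two of the three hypotheses individually favour "neg", because the highest-posterior hypothesis dominates the weighted vote.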
=== Bootstrap aggregating (bagging) ===
Bootstrap aggregation (bagging) involves training an ensemble on bootstrapped data sets. A bootstrapped set is created by selecting from the original training data set with replacement. Thus, a bootstrap set may contain a given example zero, one, or multiple times. Ensemble members can also have limits on the features (e.g., nodes of a decision tree) to encourage exploration of diverse features. The variance of local information in the bootstrap sets and feature considerations promote diversity in the ensemble, and can strengthen the ensemble. To reduce overfitting, a member can be validated using the out-of-bag set (the examples that are not in its bootstrap set).
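The bootstrap-and-out-of-bag construction can be sketched in a few lines (a toy illustration; `bootstrap_sets` is a hypothetical helper, not a library function):

```python
import random

def bootstrap_sets(data, n_members, seed=0):
    """Create one bootstrap sample (drawn with replacement) and the
    matching out-of-bag set for each ensemble member."""
    rng = random.Random(seed)
    members = []
    for _ in range(n_members):
        boot = [rng.choice(data) for _ in data]      # same size as data
        oob = [x for x in data if x not in boot]     # left-out examples
        members.append((boot, oob))
    return members

data = list(range(10))
for boot, oob in bootstrap_sets(data, n_members=4):
    # each bootstrap set has the original size; some examples repeat,
    # and the repeats leave other examples out of bag
    assert len(boot) == len(data)
    assert all(x not in boot for x in oob)
```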
Inference is done by voting of predictions of ensemble members, called aggregation. It is illustrated below with an ensemble of four decision trees. The query example is classified by each tree. Because three of the four predict the positive class, the ensemble's overall classification is positive. Random forests like the one shown are a common application of bagging.
=== Boosting ===
Boosting involves training successive models by emphasizing training data mis-classified by previously learned models. Initially, all data (D1) has equal weight and is used to learn a base model M1. The examples mis-classified by M1 are assigned a weight greater than correctly classified examples. This boosted data (D2) is used to train a second base model M2, and so on. Inference is done by voting.
In some cases, boosting has yielded better accuracy than bagging, but tends to overfit more. The most common implementation of boosting is AdaBoost, but some newer algorithms are reported to achieve better results.
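The weight-update scheme described above can be sketched with the AdaBoost formulas and one-rule "decision stump" base models on toy 1-D data (a minimal illustration, not a production implementation):

```python
import math

def stump(threshold, polarity, x):
    """A one-rule base model: predicts +-1 by thresholding one feature."""
    return polarity if x >= threshold else -polarity

def best_stump(X, y, w):
    """Pick the threshold/polarity with the lowest weighted error."""
    best = None
    for t in X:
        for pol in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump(t, pol, xi) != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(X, y, rounds=5):
    n = len(X)
    w = [1.0 / n] * n                      # D1: equal weights
    models = []
    for _ in range(rounds):
        err, t, pol = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * math.log((1 - err) / err)
        models.append((alpha, t, pol))
        # up-weight the examples this base model got wrong (D2, D3, ...)
        w = [wi * math.exp(-alpha * yi * stump(t, pol, xi))
             for xi, yi, wi in zip(X, y, w)]
        total = sum(w)
        w = [wi / total for wi in w]
    return models

def predict(models, x):
    """Inference: weighted vote of the base models."""
    score = sum(alpha * stump(t, pol, x) for alpha, t, pol in models)
    return 1 if score >= 0 else -1

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [-1, -1, -1, 1, 1, 1]
models = adaboost(X, y)
```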
=== Bayesian model averaging ===
Bayesian model averaging (BMA) makes predictions by averaging the predictions of models weighted by their posterior probabilities given the data. BMA is known to generally give better answers than a single model, obtained, e.g., via stepwise regression, especially where very different models have nearly identical performance in the training set but may otherwise perform quite differently.
The question with any use of Bayes' theorem is the prior, i.e., the probability (perhaps subjective) that each model is the best to use for a given purpose. Conceptually, BMA can be used with any prior. The R packages ensembleBMA and BMA use the prior implied by the Bayesian information criterion (BIC), following Raftery (1995). The R package BAS supports the use of the priors implied by the Akaike information criterion (AIC) and other criteria over the alternative models, as well as priors over the coefficients.
The difference between BIC and AIC is the strength of preference for parsimony. BIC's penalty for model complexity is {\displaystyle \ln(n)k}, while AIC's is {\displaystyle 2k}. Large-sample asymptotic theory establishes that if there is a best model, then with increasing sample sizes, BIC is strongly consistent, i.e., will almost certainly find it, while AIC may not, because AIC may continue to place excessive posterior probability on models that are more complicated than they need to be. On the other hand, AIC and AICc are asymptotically "efficient" (i.e., minimum mean square prediction error), while BIC is not.
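The two penalties can be compared directly. As a small sketch shows, BIC's per-parameter penalty ln(n) exceeds AIC's penalty of 2 once the sample size n passes e² ≈ 7.39, which is why BIC prefers more parsimonious models at any realistic sample size:

```python
from math import log

def aic_penalty(k):
    """AIC's complexity penalty: 2k."""
    return 2 * k

def bic_penalty(n, k):
    """BIC's complexity penalty: ln(n) * k."""
    return log(n) * k

# for any sample size n > e^2 (about 7.39), BIC penalizes an extra
# parameter more heavily than AIC does
assert bic_penalty(7, 1) < aic_penalty(1)
assert bic_penalty(8, 1) > aic_penalty(1)
```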
Haussler et al. (1994) showed that when BMA is used for classification, its expected error is at most twice the expected error of the Bayes optimal classifier. Burnham and Anderson (1998, 2002) contributed greatly to introducing a wider audience to the basic ideas of Bayesian model averaging and popularizing the methodology. The availability of software, including other free open-source packages for R beyond those mentioned above, helped make the methods accessible to a wider audience.
=== Bayesian model combination ===
Bayesian model combination (BMC) is an algorithmic correction to Bayesian model averaging (BMA). Instead of sampling each model in the ensemble individually, it samples from the space of possible ensembles (with model weights drawn randomly from a Dirichlet distribution having uniform parameters). This modification overcomes the tendency of BMA to converge toward giving all the weight to a single model. Although BMC is somewhat more computationally expensive than BMA, it tends to yield dramatically better results. BMC has been shown to be better on average (with statistical significance) than BMA and bagging.
Use of Bayes' law to compute model weights requires computing the probability of the data given each model. Typically, none of the models in the ensemble are exactly the distribution from which the training data were generated, so all of them correctly receive a value close to zero for this term. This would work well if the ensemble were big enough to sample the entire model-space, but this is rarely possible. Consequently, each pattern in the training data will cause the ensemble weight to shift toward the model in the ensemble that is closest to the distribution of the training data. It essentially reduces to an unnecessarily complex method for doing model selection.
The possible weightings for an ensemble can be visualized as lying on a simplex. At each vertex of the simplex, all of the weight is given to a single model in the ensemble. BMA converges toward the vertex that is closest to the distribution of the training data. By contrast, BMC converges toward the point where this distribution projects onto the simplex. In other words, instead of selecting the one model that is closest to the generating distribution, it seeks the combination of models that is closest to the generating distribution.
The results from BMA can often be approximated by using cross-validation to select the best model from a bucket of models. Likewise, the results from BMC may be approximated by using cross-validation to select the best ensemble combination from a random sampling of possible weightings.
=== Bucket of models ===
A "bucket of models" is an ensemble technique in which a model selection algorithm is used to choose the best model for each problem. When tested with only one problem, a bucket of models can produce no better results than the best model in the set, but when evaluated across many problems, it will typically produce much better results, on average, than any model in the set.
The most common approach used for model-selection is cross-validation selection (sometimes called a "bake-off contest"). It is described with the following pseudo-code:
For each model m in the bucket:
Do c times: (where 'c' is some constant)
Randomly divide the training dataset into two sets: A and B
Train m with A
Test m with B
Select the model that obtains the highest average score
Cross-Validation Selection can be summed up as: "try them all with the training set, and pick the one that works best".
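The pseudo-code above can be made concrete. In this sketch the "models" are deliberately trivial (two constant predictors for a 1-D regression task, with made-up data); everything here is illustrative rather than a library API:

```python
import random

def cross_validation_select(bucket, dataset, score, c=20, seed=0):
    """Repeatedly split the data in two, train each model on A,
    score it on B, and keep the model with the best average score."""
    rng = random.Random(seed)
    averages = {}
    for name, train in bucket.items():
        total = 0.0
        for _ in range(c):
            data = dataset[:]
            rng.shuffle(data)                 # randomly divide into A and B
            half = len(data) // 2
            a, b = data[:half], data[half:]
            model = train(a)                  # "Train m with A"
            total += score(model, b)          # "Test m with B"
        averages[name] = total / c
    return max(averages, key=averages.get)    # highest average score

# toy bucket of two constant predictors for a 1-D regression task
bucket = {
    "mean": lambda train: (lambda x: sum(train) / len(train)),
    "zero": lambda train: (lambda x: 0.0),
}
data = [2.0, 2.1, 1.9, 2.2, 2.0, 1.8, 2.1, 2.0]
score = lambda model, test: -sum((y - model(y)) ** 2 for y in test)
best = cross_validation_select(bucket, data, score)
```

For this data (centred near 2.0), the mean predictor wins the bake-off against the zero predictor on every split.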
Gating is a generalization of Cross-Validation Selection. It involves training another learning model to decide which of the models in the bucket is best-suited to solve the problem. Often, a perceptron is used for the gating model. It can be used to pick the "best" model, or it can be used to give a linear weight to the predictions from each model in the bucket.
When a bucket of models is used with a large set of problems, it may be desirable to avoid training some of the models that take a long time to train. Landmark learning is a meta-learning approach that seeks to solve this problem. It involves training only the fast (but imprecise) algorithms in the bucket, and then using the performance of these algorithms to help determine which slow (but accurate) algorithm is most likely to do best.
=== Amended Cross-Entropy Cost: An Approach for Encouraging Diversity in Classification Ensemble ===
The most common approach for training a classifier is to use the cross-entropy cost function. However, one would like to train an ensemble of models that have diversity, so that combining them provides the best results.
Assume we use a simple ensemble that averages {\displaystyle K} classifiers. Then the amended cross-entropy cost is
{\displaystyle e^{k}=H(p,q^{k})-{\frac {\lambda }{K}}\sum _{j\neq k}H(q^{j},q^{k})}
where {\displaystyle e^{k}} is the cost function of the {\displaystyle k^{th}} classifier, {\displaystyle q^{k}} is the output probability of the {\displaystyle k^{th}} classifier, {\displaystyle p} is the true probability that we need to estimate, and {\displaystyle \lambda } is a parameter between 0 and 1 that defines the desired degree of diversity. When {\displaystyle \lambda =0} we want each classifier to do its best regardless of the ensemble, and when {\displaystyle \lambda =1} we would like the classifiers to be as diverse as possible.
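The amended cost is straightforward to compute. A small sketch (with hypothetical distributions; when λ = 0 it reduces to the plain cross-entropy, and for λ > 0 the diversity term lowers the cost):

```python
from math import log

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log q_i."""
    return -sum(pi * log(qi) for pi, qi in zip(p, q))

def amended_cost(k, p, outputs, lam):
    """Amended cost e^k for classifier k: its usual cross-entropy to the
    true distribution p, minus a scaled diversity reward measured
    against the other ensemble members' outputs q^j."""
    K = len(outputs)
    diversity = sum(cross_entropy(outputs[j], outputs[k])
                    for j in range(K) if j != k)
    return cross_entropy(p, outputs[k]) - (lam / K) * diversity

p = [0.7, 0.3]                       # hypothetical true distribution
outputs = [[0.6, 0.4], [0.8, 0.2]]   # two classifiers' output probabilities
# with lambda = 0 the cost reduces to the plain cross-entropy
assert amended_cost(0, p, outputs, lam=0.0) == cross_entropy(p, outputs[0])
```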
=== Stacking ===
Stacking (sometimes called stacked generalization) involves training a model to combine the predictions of several other learning algorithms. First, all of the other algorithms are trained using the available data; then a combiner algorithm (final estimator) is trained to make a final prediction using all the predictions of the other algorithms (base estimators) as additional inputs, or using cross-validated predictions from the base estimators, which can help prevent overfitting. If an arbitrary combiner algorithm is used, then stacking can theoretically represent any of the ensemble techniques described in this article, although, in practice, a logistic regression model is often used as the combiner.
Stacking typically yields performance better than any single one of the trained models. It has been successfully used on both supervised learning tasks (regression, classification and distance learning) and unsupervised learning (density estimation). It has also been used to estimate bagging's error rate. It has been reported to outperform Bayesian model-averaging. The two top performers in the Netflix competition utilized blending, which may be considered a form of stacking.
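As a toy regression illustration of the combiner step (with hypothetical base-model predictions, and an ordinary least-squares blend standing in for the combiner, which in practice is often logistic regression for classification):

```python
def fit_linear_combiner(preds, targets):
    """Fit weights w minimizing sum (w . p_i - y_i)^2 for two base
    models by solving the 2x2 normal equations directly."""
    a11 = sum(p[0] * p[0] for p in preds)
    a12 = sum(p[0] * p[1] for p in preds)
    a22 = sum(p[1] * p[1] for p in preds)
    b1 = sum(p[0] * y for p, y in zip(preds, targets))
    b2 = sum(p[1] * y for p, y in zip(preds, targets))
    det = a11 * a22 - a12 * a12
    w1 = (b1 * a22 - b2 * a12) / det
    w2 = (a11 * b2 - a12 * b1) / det
    return w1, w2

# hypothetical held-out predictions: base model A runs 0.2 high,
# base model B runs 0.3 low; the combiner learns how to blend them
preds = [(1.2, 0.7), (2.2, 1.7), (3.2, 2.7), (4.2, 3.7)]
targets = [1.0, 2.0, 3.0, 4.0]
w1, w2 = fit_linear_combiner(preds, targets)
blended = [w1 * a + w2 * b for a, b in preds]
```

For these numbers the exact blend 0.6·A + 0.4·B cancels the two biases and recovers the targets, which is what the least-squares fit finds.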
=== Voting ===
Voting is another form of ensembling. See e.g. Weighted majority algorithm (machine learning).
== Implementations in statistics packages ==
R: at least three packages offer Bayesian model averaging tools, including the BMS (an acronym for Bayesian Model Selection) package, the BAS (an acronym for Bayesian Adaptive Sampling) package, and the BMA package.
Python: scikit-learn, a package for machine learning in Python, offers packages for ensemble learning, including packages for bagging, voting and averaging methods.
MATLAB: classification ensembles are implemented in Statistics and Machine Learning Toolbox.
== Ensemble learning applications ==
In recent years, due to growing computational power, which allows training large ensembles in a reasonable time frame, the number of ensemble learning applications has grown steadily. Some of the applications of ensemble classifiers include:
=== Remote sensing ===
==== Land cover mapping ====
Land cover mapping is one of the major applications of Earth observation satellite sensors, using remote sensing and geospatial data to identify the materials and objects located on the surface of target areas. Generally, the classes of target materials include roads, buildings, rivers, lakes, and vegetation. Various ensemble learning approaches based on artificial neural networks, kernel principal component analysis (KPCA), decision trees with boosting, random forests, and automatic design of multiple classifier systems have been proposed to efficiently identify land cover objects.
==== Change detection ====
Change detection is an image analysis problem, consisting of the identification of places where the land cover has changed over time. Change detection is widely used in fields such as urban growth, forest and vegetation dynamics, land use and disaster monitoring.
The earliest applications of ensemble classifiers in change detection were designed with majority voting, Bayesian model averaging, and the maximum posterior probability. Given the growth of satellite data over time, the past decade has seen more use of time series methods for continuous change detection from image stacks. One example is BEAST, a Bayesian ensemble changepoint detection method, with the software available as the package Rbeast in R, Python, and Matlab.
=== Computer security ===
==== Distributed denial of service ====
Distributed denial of service is one of the most threatening cyber-attacks that may happen to an internet service provider. By combining the output of single classifiers, ensemble classifiers reduce the total error of detecting and discriminating such attacks from legitimate flash crowds.
==== Malware Detection ====
Classification of malware such as computer viruses, computer worms, trojans, ransomware, and spyware with the use of machine learning techniques is inspired by the document categorization problem. Ensemble learning systems have shown good efficacy in this area.
==== Intrusion detection ====
An intrusion detection system monitors computer networks or computer systems to identify intruder code, in the manner of an anomaly detection process. Ensemble learning successfully helps such monitoring systems to reduce their total error.
=== Face recognition ===
Face recognition, which recently has become one of the most popular research areas of pattern recognition, copes with identification or verification of a person by their digital images.
Hierarchical ensembles based on Gabor Fisher classifier and independent component analysis preprocessing techniques are some of the earliest ensembles employed in this field.
=== Emotion recognition ===
While speech recognition is mainly based on deep learning, because most of the industry players in this field, such as Google, Microsoft, and IBM, reveal that the core technology of their speech recognition is based on this approach, speech-based emotion recognition can also achieve satisfactory performance with ensemble learning.
It is also being successfully used in facial emotion recognition.
=== Fraud detection ===
Fraud detection deals with the identification of bank fraud, such as money laundering, credit card fraud, and telecommunication fraud, which are vast domains of research and machine learning application. Because ensemble learning improves the robustness of normal-behavior modelling, it has been proposed as an efficient technique to detect such fraudulent cases and activities in banking and credit card systems.
=== Financial decision-making ===
The accuracy of predicting business failure is a crucial issue in financial decision-making. Therefore, different ensemble classifiers have been proposed to predict financial crises and financial distress. Also, in the trade-based manipulation problem, where traders attempt to manipulate stock prices by buying and selling activities, ensemble classifiers are required to analyze the changes in stock market data and detect suspicious symptoms of stock price manipulation.
=== Medicine ===
Ensemble classifiers have been successfully applied in neuroscience, proteomics, and medical diagnosis, such as detecting neuro-cognitive disorders (e.g., Alzheimer's disease or myotonic dystrophy) from MRI datasets, and cervical cytology classification.
Ensembles have also been successfully applied in medical segmentation tasks, for example brain tumor and hyperintensity segmentation.
== See also ==
Ensemble averaging (machine learning)
Bayesian structural time series (BSTS)
Mixture of experts
== References ==
== Further reading ==
Zhou Zhihua (2012). Ensemble Methods: Foundations and Algorithms. Chapman and Hall/CRC. ISBN 978-1-439-83003-1.
Robert Schapire; Yoav Freund (2012). Boosting: Foundations and Algorithms. MIT. ISBN 978-0-262-01718-3.
== External links ==
Robi Polikar (ed.). "Ensemble learning". Scholarpedia.
The Waffles (machine learning) toolkit contains implementations of Bagging, Boosting, Bayesian Model Averaging, Bayesian Model Combination, Bucket-of-models, and other ensemble techniques | Wikipedia/Bayesian_model_averaging |
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population.
Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference.
== Introduction ==
Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model.
Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
The conclusion of a statistical inference is a statistical proposition. Some common forms of statistical proposition are the following:
a point estimate, i.e. a particular value that best approximates some parameter of interest;
an interval estimate, e.g. a confidence interval (or set estimate), i.e. an interval constructed using a dataset drawn from a population so that, under repeated sampling of such datasets, such intervals would contain the true parameter value with the probability at the stated confidence level;
a credible interval, i.e. a set of values containing, for example, 95% of posterior belief;
rejection of a hypothesis;
clustering or classification of data points into groups.
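The first two kinds of proposition can be illustrated with a small sketch (hypothetical data; a normal-approximation interval with the 1.96 critical value, rather than the exact t-based interval):

```python
from math import sqrt

data = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]  # hypothetical sample

n = len(data)
mean = sum(data) / n                          # point estimate of the mean
s = sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample std dev
half_width = 1.96 * s / sqrt(n)               # normal-approximation 95% CI
ci = (mean - half_width, mean + half_width)   # interval estimate
```

Under repeated sampling, intervals constructed this way would contain the true mean roughly 95% of the time (exactly so only under the normal approximation's assumptions).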
== Models and assumptions ==
Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn.
=== Degree of models/assumptions ===
Statisticians distinguish between three levels of modeling assumptions:
Fully parametric: The probability distributions describing the data-generation process are assumed to be fully described by a family of probability distributions involving only a finite number of unknown parameters. For example, one may assume that the distribution of population values is truly Normal, with unknown mean and variance, and that datasets are generated by 'simple' random sampling. The family of generalized linear models is a widely used and flexible class of parametric models.
Non-parametric: The assumptions made about the process generating the data are much less than in parametric statistics and may be minimal. For example, every continuous probability distribution has a median, which may be estimated using the sample median or the Hodges–Lehmann–Sen estimator, which has good properties when the data arise from simple random sampling.
Semi-parametric: This term typically implies assumptions 'in between' fully and non-parametric approaches. For example, one may assume that a population distribution has a finite mean. Furthermore, one may assume that the mean response level in the population depends in a truly linear manner on some covariate (a parametric assumption) but not make any parametric assumption describing the variance around that mean (i.e. about the presence or possible form of any heteroscedasticity). More generally, semi-parametric models can often be separated into 'structural' and 'random variation' components. One component is treated parametrically and the other non-parametrically. The well-known Cox model is a set of semi-parametric assumptions.
=== Importance of valid models/assumptions ===
Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified.
Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. Incorrect assumptions of Normality in the population also invalidates some forms of regression-based inference. The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population." Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed.
==== Approximate distributions ====
Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these.
With finite samples, approximation results measure how closely a limiting distribution approximates the statistic's sample distribution: for example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem. Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence, Bregman divergence, and the Hellinger distance.
With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations, which are popular in econometrics and biostatistics. The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families).
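The approximation error mentioned above can be assessed by simulation in just a few lines. The following sketch (not from the article; the Exponential(1) population and sample sizes are arbitrary choices for illustration) checks how often the standardized sample mean falls inside the central ±1.96 band that the normal approximation predicts should contain about 95% of cases:

```python
import math
import random

random.seed(0)

def sample_mean(n):
    # Mean of n iid Exponential(1) draws; the population mean and sd are both 1.
    return sum(random.expovariate(1.0) for _ in range(n)) / n

def coverage(n, reps=5000):
    # Fraction of standardized sample means falling in the central
    # +/- 1.96 band; the normal approximation predicts about 0.95.
    inside = 0
    for _ in range(reps):
        z = (sample_mean(n) - 1.0) * math.sqrt(n)
        if abs(z) <= 1.96:
            inside += 1
    return inside / reps

cov_small = coverage(10)   # already close to 0.95 for this population
cov_large = coverage(100)  # closer still as n grows
print(cov_small, cov_large)
```

Running variants of this with heavier-tailed populations shows the approximation degrading, which is the point of assessing the error rather than assuming it away.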
=== Randomization-based models ===
For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. Statistical inference from randomized studies is also more straightforward than many other situations. In Bayesian inference, randomization is also of importance: in survey sampling, use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information.
Objective randomization allows properly inductive procedures. Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences.) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. However, a good observational study may be better than a bad randomized experiment.
The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model.
However, at any given time some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples; in some cases, such randomized studies are uneconomical or unethical.
==== Model-based analysis of randomized experiments ====
It is standard practice to refer to a statistical model, e.g., linear or logistic models, when analyzing data from randomized experiments. However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units.
==== Model-free randomization inference ====
Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms dynamically adapting to the contextual affinities of a process and learning the intrinsic characteristics of the observations.
For example, model-free simple linear regression is based either on:
a random design, where the pairs of observations {\displaystyle (X_{1},Y_{1}),(X_{2},Y_{2}),\cdots ,(X_{n},Y_{n})} are independent and identically distributed (iid),
or a deterministic design, where the variables {\displaystyle X_{1},X_{2},\cdots ,X_{n}} are deterministic, but the corresponding response variables {\displaystyle Y_{1},Y_{2},\cdots ,Y_{n}} are random and independent with a common conditional distribution, i.e., {\displaystyle P\left(Y_{j}\leq y|X_{j}=x\right)=D_{x}(y)}, which is independent of the index {\displaystyle j}.
In either case, the model-free randomization inference for features of the common conditional distribution {\displaystyle D_{x}(\cdot )} relies on some regularity conditions, e.g. functional smoothness. For instance, model-free randomization inference for the population feature conditional mean, {\displaystyle \mu (x)=E(Y|X=x)}, can be consistently estimated via local averaging or local polynomial fitting, under the assumption that {\displaystyle \mu (x)} is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case, the conditional mean, {\displaystyle \mu (x)}.
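Local averaging can be illustrated concretely. The following sketch (simulated data, not from the article; the sine regression function and bandwidth are arbitrary choices) estimates the conditional mean by a Gaussian-kernel-weighted average of responses near a query point, i.e. the Nadaraya–Watson estimator:

```python
import math
import random

random.seed(1)

# Simulated data from Y = sin(X) + noise, so mu(x) = E(Y | X = x) = sin(x).
xs = [random.uniform(0, 3) for _ in range(2000)]
ys = [math.sin(x) + random.gauss(0, 0.3) for x in xs]

def local_average(x0, h=0.2):
    # Nadaraya-Watson estimator: a kernel-weighted local average of the
    # responses, with Gaussian kernel of bandwidth h.
    weights = [math.exp(-((x - x0) / h) ** 2 / 2) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)

est = local_average(1.5)
print(est, math.sin(1.5))
```

The smoothness assumption enters through the bandwidth h: a smaller h reduces bias but raises variance, and consistency requires h to shrink slowly as the sample grows.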
== Paradigms for inference ==
Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms.
Bandyopadhyay and Forster describe four paradigms: The classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean-Information Criterion-based paradigm.
=== Frequentist inference ===
This paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand. By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging.
==== Examples of frequentist inference ====
p-value
Confidence interval
Null hypothesis significance testing
==== Frequentist inference, objectivity, and decision theory ====
One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach.
The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. However, loss-functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss.
While statisticians using frequentist inference must choose for themselves the parameters of interest, and the estimators/test statistic to be used, the absence of obviously explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'.
=== Bayesian inference ===
The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. There are several different justifications for using the Bayesian approach.
==== Examples of Bayesian inference ====
Credible interval for interval estimation
Bayes factors for model comparison
==== Bayesian inference, subjectivity and decision theory ====
Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes Factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.)
Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision theoretic sense. Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent. Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs.
=== Likelihood-based inference ===
Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function, denoted {\displaystyle L(x|\theta )}, which quantifies the probability of observing the given data {\displaystyle x}, assuming a specific set of parameter values {\displaystyle \theta }. In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data.
The process of likelihood-based inference usually involves the following steps:
Formulating the statistical model: A statistical model is defined based on the problem at hand, specifying the distributional assumptions and the relationship between the observed data and the unknown parameters. The model can be simple, such as a normal distribution with known variance, or complex, such as a hierarchical model with multiple levels of random effects.
Constructing the likelihood function: Given the statistical model, the likelihood function is constructed by evaluating the joint probability density or mass function of the observed data as a function of the unknown parameters. This function represents the probability of observing the data for different values of the parameters.
Maximizing the likelihood function: The next step is to find the set of parameter values that maximizes the likelihood function. This can be achieved using optimization techniques such as numerical optimization algorithms. The estimated parameter values, often denoted {\displaystyle {\hat {\theta }}}, are the maximum likelihood estimates (MLEs).
Assessing uncertainty: Once the MLEs are obtained, it is crucial to quantify the uncertainty associated with the parameter estimates. This can be done by calculating standard errors, confidence intervals, or conducting hypothesis tests based on asymptotic theory or simulation techniques such as bootstrapping.
Model checking: After obtaining the parameter estimates and assessing their uncertainty, it is important to assess the adequacy of the statistical model. This involves checking the assumptions made in the model and evaluating the fit of the model to the data using goodness-of-fit tests, residual analysis, or graphical diagnostics.
Inference and interpretation: Finally, based on the estimated parameters and model assessment, statistical inference can be performed. This involves drawing conclusions about the population parameters, making predictions, or testing hypotheses based on the estimated model.
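The steps above can be sketched for the simplest case: a normal model, whose MLEs have closed forms, with a nonparametric bootstrap standing in for the uncertainty-assessment step (a minimal illustration on simulated data; the true parameters 5 and 2 are arbitrary):

```python
import random
import statistics

random.seed(2)

# Steps 1-3: under a normal model, the likelihood is maximized in closed
# form by the sample mean and the (biased) sample variance.
data = [random.gauss(5.0, 2.0) for _ in range(500)]
mu_hat = sum(data) / len(data)
var_hat = sum((x - mu_hat) ** 2 for x in data) / len(data)

# Step 4: assess the uncertainty in mu_hat by nonparametric bootstrap.
boot_means = []
for _ in range(1000):
    resample = [random.choice(data) for _ in range(len(data))]
    boot_means.append(sum(resample) / len(resample))
se_boot = statistics.stdev(boot_means)

print(mu_hat, var_hat, se_boot)
```

For models without closed-form MLEs, the maximization step would instead call a numerical optimizer, but the surrounding workflow is the same.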
=== AIC-based inference ===
The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection.
AIC is founded on information theory: it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.)
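A small sketch of AIC in use (simulated data; the models and parameter values are arbitrary choices): for data drawn from a normal distribution with nonzero mean, compare a one-parameter model (mean fixed at zero, variance estimated) against a two-parameter model, using AIC = 2k − 2 ln L:

```python
import math
import random

random.seed(3)

# Data actually generated from a normal with nonzero mean.
data = [random.gauss(0.8, 1.0) for _ in range(200)]
n = len(data)

def gauss_loglik(data, mu, sigma2):
    return sum(-0.5 * math.log(2 * math.pi * sigma2)
               - (x - mu) ** 2 / (2 * sigma2) for x in data)

# Model 1: mean fixed at 0, variance estimated (k = 1 parameter).
s2_0 = sum(x * x for x in data) / n
aic1 = 2 * 1 - 2 * gauss_loglik(data, 0.0, s2_0)

# Model 2: mean and variance both estimated (k = 2 parameters).
m = sum(data) / n
s2 = sum((x - m) ** 2 for x in data) / n
aic2 = 2 * 2 - 2 * gauss_loglik(data, m, s2)

# Lower AIC is better; here the extra mean parameter is worth its cost.
print(aic1, aic2)
```

The 2k term is the penalty for model complexity; had the true mean been zero, the two AIC values would typically differ by roughly the penalty alone, favoring the simpler model.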
=== Other paradigms for inference ===
==== Minimum description length ====
The minimum description length (MDL) principle has been developed from ideas in information theory and the theory of Kolmogorov complexity. The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches.
However, if a "data generating mechanism" does exist in reality, then according to Shannon's source coding theorem it provides the MDL description of the data, on average and asymptotically. In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling.
The MDL principle has been applied in communication-coding theory in information theory, in linear regression, and in data mining.
The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory.
==== Fiducial inference ====
Fiducial inference was an approach to statistical inference based on fiducial probability, also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious. However this argument is the same as that which shows that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals, it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret the early work of Fisher's fiducial argument as a special case of an inference theory using upper and lower probabilities.
==== Structural inference ====
Developing ideas of Fisher and of Pitman from 1938 to 1939, George A. Barnard developed "structural inference" or "pivotal inference", an approach using invariant probabilities on group families. Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference based on group theory and applied this to linear models. The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist.
== Inference topics ==
The topics below are usually included in the area of statistical inference.
Statistical assumptions
Statistical decision theory
Estimation theory
Statistical hypothesis testing
Revising opinions in statistics
Design of experiments, the analysis of variance, and regression
Survey sampling
Summarizing statistical data
== Predictive inference ==
Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations.
Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability, but it fell out of favor in the 20th century with the rise of the parametric approach, which modeled phenomena as a physical system observed with error (e.g., celestial mechanics). Bruno de Finetti's idea of exchangeability—that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, and has since been propounded by such statisticians as Seymour Geisser.
== See also ==
Algorithmic inference
Induction (philosophy)
Informal inferential reasoning
Information field theory
Population proportion
Philosophy of statistics
Prediction interval
Predictive analytics
Predictive modelling
Stylometry
== Notes ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Casella, G., Berger, R. L. (2002). Statistical Inference. Duxbury Press. ISBN 0-534-24312-6
Freedman, D.A. (1991). "Statistical models and shoe leather". Sociological Methodology. 21: 291–313. doi:10.2307/270939. JSTOR 270939.
Held L., Bové D.S. (2014). Applied Statistical Inference—Likelihood and Bayes (Springer).
Lenhard, Johannes (2006). "Models and Statistical Inference: the controversy between Fisher and Neyman–Pearson" (PDF). British Journal for the Philosophy of Science. 57: 69–91. doi:10.1093/bjps/axi152. S2CID 14136146.
Lindley, D (1958). "Fiducial distribution and Bayes' theorem". Journal of the Royal Statistical Society, Series B. 20: 102–7. doi:10.1111/j.2517-6161.1958.tb00278.x.
Rahlf, Thomas (2014). "Statistical Inference", in Claude Diebolt, and Michael Haupert (eds.), "Handbook of Cliometrics ( Springer Reference Series)", Berlin/Heidelberg: Springer.
Reid, N.; Cox, D. R. (2014). "On Some Principles of Statistical Inference". International Statistical Review. 83 (2): 293–308. doi:10.1111/insr.12067. hdl:10.1111/insr.12067. S2CID 17410547.
Sagitov, Serik (2022). "Statistical Inference". Wikibooks. http://upload.wikimedia.org/wikipedia/commons/f/f9/Statistical_Inference.pdf
Young, G.A., Smith, R.L. (2005). Essentials of Statistical Inference, CUP. ISBN 0-521-83971-8
== External links ==
Statistical Inference – lecture on the MIT OpenCourseWare platform
Statistical Inference – lecture by the National Programme on Technology Enhanced Learning
An online, Bayesian (MCMC) demo/calculator is available at causaScientia
Empirical Bayes methods are procedures for statistical inference in which the prior probability distribution is estimated from the data. This approach stands in contrast to standard Bayesian methods, for which the prior distribution is fixed before any data are observed. Despite this difference in perspective, empirical Bayes may be viewed as an approximation to a fully Bayesian treatment of a hierarchical model wherein the parameters at the highest level of the hierarchy are set to their most likely values, instead of being integrated out.
== Introduction ==
Empirical Bayes methods can be seen as an approximation to a fully Bayesian treatment of a hierarchical Bayes model.
In, for example, a two-stage hierarchical Bayes model, observed data {\displaystyle y=\{y_{1},y_{2},\dots ,y_{n}\}} are assumed to be generated from an unobserved set of parameters {\displaystyle \theta =\{\theta _{1},\theta _{2},\dots ,\theta _{n}\}} according to a probability distribution {\displaystyle p(y\mid \theta )}. In turn, the parameters {\displaystyle \theta } can be considered samples drawn from a population characterised by hyperparameters {\displaystyle \eta } according to a probability distribution {\displaystyle p(\theta \mid \eta )}. In the hierarchical Bayes model, though not in the empirical Bayes approximation, the hyperparameters {\displaystyle \eta } are considered to be drawn from an unparameterized distribution {\displaystyle p(\eta )}.
Information about a particular quantity of interest {\displaystyle \theta _{i}} therefore comes not only from the properties of those data {\displaystyle y} that directly depend on it, but also from the properties of the population of parameters {\displaystyle \theta } as a whole, inferred from the data as a whole, summarised by the hyperparameters {\displaystyle \eta }.
Using Bayes' theorem,
{\displaystyle p(\theta \mid y)={\frac {p(y\mid \theta )p(\theta )}{p(y)}}={\frac {p(y\mid \theta )}{p(y)}}\int p(\theta \mid \eta )p(\eta )\,d\eta \,.}
In general, this integral will not be tractable analytically or symbolically and must be evaluated by numerical methods. Stochastic (random) or deterministic approximations may be used. Example stochastic methods are Markov chain Monte Carlo and Monte Carlo sampling. Deterministic approximations include quadrature rules.
Alternatively, the expression can be written as
{\displaystyle p(\theta \mid y)=\int p(\theta \mid \eta ,y)p(\eta \mid y)\;d\eta =\int {\frac {p(y\mid \theta )p(\theta \mid \eta )}{p(y\mid \eta )}}p(\eta \mid y)\;d\eta \,,}
and the final factor in the integral can in turn be expressed as
{\displaystyle p(\eta \mid y)=\int p(\eta \mid \theta )p(\theta \mid y)\;d\theta .}
These suggest an iterative scheme, qualitatively similar in structure to a Gibbs sampler, to evolve successively improved approximations to {\displaystyle p(\theta \mid y)} and {\displaystyle p(\eta \mid y)}. First, calculate an initial approximation to {\displaystyle p(\theta \mid y)} ignoring the {\displaystyle \eta } dependence completely; then calculate an approximation to {\displaystyle p(\eta \mid y)} based upon the initial approximate distribution of {\displaystyle p(\theta \mid y)}; then use this {\displaystyle p(\eta \mid y)} to update the approximation for {\displaystyle p(\theta \mid y)}; then update {\displaystyle p(\eta \mid y)}; and so on.
When the true distribution {\displaystyle p(\eta \mid y)} is sharply peaked, the integral determining {\displaystyle p(\theta \mid y)} may be not much changed by replacing the probability distribution over {\displaystyle \eta } with a point estimate {\displaystyle \eta ^{*}} representing the distribution's peak (or, alternatively, its mean),
{\displaystyle p(\theta \mid y)\simeq {\frac {p(y\mid \theta )\;p(\theta \mid \eta ^{*})}{p(y\mid \eta ^{*})}}\,.}
With this approximation, the above iterative scheme becomes the EM algorithm.
The term "Empirical Bayes" can cover a wide variety of methods, but most can be regarded as an early truncation of either the above scheme or something quite like it. Point estimates, rather than the whole distribution, are typically used for the parameter(s) {\displaystyle \eta }. The estimates for {\displaystyle \eta ^{*}} are typically made from the first approximation to {\displaystyle p(\theta \mid y)} without subsequent refinement. These estimates for {\displaystyle \eta ^{*}} are usually made without considering an appropriate prior distribution for {\displaystyle \eta }.
== Point estimation ==
=== Robbins' method: non-parametric empirical Bayes (NPEB) ===
Robbins considered a case of sampling from a mixed distribution, where the probability for each {\displaystyle y_{i}} (conditional on {\displaystyle \theta _{i}}) is specified by a Poisson distribution,
{\displaystyle p(y_{i}\mid \theta _{i})={{\theta _{i}}^{y_{i}}e^{-\theta _{i}} \over {y_{i}}!}}
while the prior on θ is unspecified except that the {\displaystyle \theta _{i}} are i.i.d. from an unknown distribution, with cumulative distribution function {\displaystyle G(\theta )}. Compound sampling arises in a variety of statistical estimation problems, such as accident rates and clinical trials. We simply seek a point prediction of {\displaystyle \theta _{i}} given all the observed data. Because the prior is unspecified, we seek to do this without knowledge of G.
Under squared error loss (SEL), the conditional expectation E(θi | Yi = yi) is a reasonable quantity to use for prediction. For the Poisson compound sampling model, this quantity is
{\displaystyle \operatorname {E} (\theta _{i}\mid y_{i})={\int (\theta ^{y_{i}+1}e^{-\theta }/{y_{i}}!)\,dG(\theta ) \over {\int (\theta ^{y_{i}}e^{-\theta }/{y_{i}}!)\,dG(\theta )}}.}
This can be simplified by multiplying both the numerator and denominator by {\displaystyle ({y_{i}}+1)}, yielding
{\displaystyle \operatorname {E} (\theta _{i}\mid y_{i})={{(y_{i}+1)p_{G}(y_{i}+1)} \over {p_{G}(y_{i})}},}
where pG is the marginal probability mass function obtained by integrating out θ over G.
To take advantage of this, Robbins suggested estimating the marginals with their empirical frequencies ({\displaystyle \#\{Y_{j}\}}), yielding the fully non-parametric estimate:
{\displaystyle \operatorname {E} (\theta _{i}\mid y_{i})\approx (y_{i}+1){{\#\{Y_{j}=y_{i}+1\}} \over {\#\{Y_{j}=y_{i}\}}},}
where {\displaystyle \#} denotes "number of". (See also Good–Turing frequency estimation.)
Example – Accident rates
Suppose each customer of an insurance company has an "accident rate" Θ and is insured against accidents; the probability distribution of Θ is the underlying distribution, and is unknown. The number of accidents suffered by each customer in a specified time period has a Poisson distribution with expected value equal to the particular customer's accident rate. The actual number of accidents experienced by a customer is the observable quantity. A crude way to estimate the underlying probability distribution of the accident rate Θ is to estimate the proportion of members of the whole population suffering 0, 1, 2, 3, ... accidents during the specified time period as the corresponding proportion in the observed random sample. Having done so, it is then desired to predict the accident rate of each customer in the sample. As above, one may use the conditional expected value of the accident rate Θ given the observed number of accidents during the baseline period. Thus, if a customer suffers six accidents during the baseline period, that customer's estimated accident rate is 7 × [the proportion of the sample who suffered 7 accidents] / [the proportion of the sample who suffered 6 accidents]. Note that if the proportion of people suffering k accidents is a decreasing function of k, the customer's predicted accident rate will often be lower than their observed number of accidents.
This shrinkage effect is typical of empirical Bayes analyses.
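Robbins' estimate is easy to compute from observed frequencies. The following sketch simulates compound Poisson data from a two-point mixing distribution G (an arbitrary choice for illustration; G is treated as unknown by the estimator) and applies the formula above:

```python
import math
import random
from collections import Counter

random.seed(4)

def poisson(lam):
    # Poisson sampler (Knuth's method; adequate for small lam).
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

# Each theta_i is drawn from an unknown G -- here a 50/50 mix of 1 and 4 --
# and y_i ~ Poisson(theta_i), as in the accident-rate example.
thetas = [random.choice([1.0, 4.0]) for _ in range(50000)]
ys = [poisson(t) for t in thetas]
counts = Counter(ys)

def robbins(y):
    # E(theta | y) ~= (y + 1) * #{Y_j = y + 1} / #{Y_j = y}
    return (y + 1) * counts[y + 1] / counts[y]

print(robbins(0), robbins(6))
```

Note the shrinkage: an individual observed at y = 6 is assigned a rate near 4 (the high component of G) rather than 6, because the empirical frequencies encode how plausible large rates are in the population.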
=== Gaussian ===
Suppose {\displaystyle X,Y} are random variables, such that {\displaystyle Y} is observed, but {\displaystyle X} is hidden. The problem is to find the expectation of {\displaystyle X}, conditional on {\displaystyle Y}. Suppose further that {\displaystyle Y|X\sim {\mathcal {N}}(X,\Sigma )}; that is, {\displaystyle Y=X+Z}, where {\displaystyle Z} is a multivariate Gaussian with covariance {\displaystyle \Sigma }.
Then, we have the formula
{\displaystyle \Sigma \nabla _{y}\rho (y|x)=\rho (y|x)(x-y)}
by direct calculation with the probability density function of multivariate Gaussians. Integrating over {\displaystyle \rho (x)\,dx}, we obtain
{\displaystyle \Sigma \nabla _{y}\rho (y)=(\mathbb {E} [x|y]-y)\rho (y)\implies \mathbb {E} [x|y]=y+\Sigma \nabla _{y}\ln \rho (y)}
In particular, this means that one can perform Bayesian estimation of {\displaystyle X} without access to either the prior density of {\displaystyle X} or the posterior density of {\displaystyle Y}. The only requirement is to have access to the score function of {\displaystyle Y}. This has applications in score-based generative modeling.
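In one dimension with a normal prior, this identity (often called Tweedie's formula) can be checked directly, since the marginal of Y is itself normal and its score is available in closed form. A sketch, with arbitrary parameter values chosen for illustration:

```python
# Prior X ~ N(mu0, tau2); observation Y = X + Z with Z ~ N(0, sigma2).
mu0, tau2, sigma2 = 2.0, 1.5, 0.5

def score_marginal(y):
    # Marginal of Y is N(mu0, tau2 + sigma2); score of its log-density.
    return -(y - mu0) / (tau2 + sigma2)

def tweedie(y):
    # E[X | Y = y] = y + sigma2 * d/dy log rho(y)
    return y + sigma2 * score_marginal(y)

def posterior_mean(y):
    # Classical conjugate-normal posterior mean, for comparison.
    return (tau2 * y + sigma2 * mu0) / (tau2 + sigma2)

for y in (-1.0, 0.0, 3.7):
    assert abs(tweedie(y) - posterior_mean(y)) < 1e-12
print(tweedie(3.7))
```

In score-based generative modeling, the analytic score above is replaced by a learned approximation to the score of the noisy data distribution, which is the only ingredient the formula requires.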
=== Parametric empirical Bayes ===
If the likelihood and its prior take on simple parametric forms (such as 1- or 2-dimensional likelihood functions with simple conjugate priors), then the empirical Bayes problem is only to estimate the marginal {\displaystyle m(y\mid \eta )} and the hyperparameters {\displaystyle \eta } using the complete set of empirical measurements. For example, one common approach, called parametric empirical Bayes point estimation, is to approximate the marginal using the maximum likelihood estimate (MLE), or a moments expansion, which allows one to express the hyperparameters {\displaystyle \eta } in terms of the empirical mean and variance. This simplified marginal allows one to plug in the empirical averages into a point estimate for the prior {\displaystyle \theta }. The resulting equation for the prior {\displaystyle \theta } is greatly simplified, as shown below.
There are several common parametric empirical Bayes models, including the Poisson–gamma model (below), the Beta-binomial model, the Gaussian–Gaussian model, the Dirichlet-multinomial model, as well as specific models for Bayesian linear regression (see below) and Bayesian multivariate linear regression. More advanced approaches include hierarchical Bayes models and Bayesian mixture models.
==== Gaussian–Gaussian model ====
For an example of empirical Bayes estimation using a Gaussian-Gaussian model, see Empirical Bayes estimators.
==== Poisson–gamma model ====
Continuing the example above, let the likelihood be a Poisson distribution, and let the prior now be specified by the conjugate prior, which is a gamma distribution {\displaystyle G(\alpha ,\beta )} (where {\displaystyle \eta =(\alpha ,\beta )}):
{\displaystyle \rho (\theta \mid \alpha ,\beta )\,d\theta ={\frac {(\theta /\beta )^{\alpha -1}\,e^{-\theta /\beta }}{\Gamma (\alpha )}}\,(d\theta /\beta ){\text{ for }}\theta >0,\alpha >0,\beta >0\,\!.}
It is straightforward to show the posterior is also a gamma distribution. Write
{\displaystyle \rho (\theta \mid y)\propto \rho (y\mid \theta )\rho (\theta \mid \alpha ,\beta ),}
where the marginal distribution has been omitted since it does not depend explicitly on {\displaystyle \theta }.
Expanding terms which do depend on {\displaystyle \theta } gives the posterior as:
{\displaystyle \rho (\theta \mid y)\propto (\theta ^{y}\,e^{-\theta })(\theta ^{\alpha -1}\,e^{-\theta /\beta })=\theta ^{y+\alpha -1}\,e^{-\theta (1+1/\beta )}.}
So the posterior density is also a gamma distribution {\displaystyle G(\alpha ',\beta ')}, where {\displaystyle \alpha '=y+\alpha } and {\displaystyle \beta '=(1+1/\beta )^{-1}}. Also notice that the marginal is simply the integral of the posterior over all {\displaystyle \Theta }, which turns out to be a negative binomial distribution.
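The claim that this marginal is negative binomial can be verified numerically. Below is a small sketch (with hypothetical hyperparameters α = 2, β = 1, using the same scale parameterization of the gamma prior as above) comparing a trapezoidal quadrature of the integral of the Poisson likelihood times the gamma prior against the closed-form negative binomial mass:

```python
import math

alpha, beta = 2.0, 1.0  # hypothetical hyperparameters

def integrand(theta, y):
    # Poisson likelihood times the gamma prior density (scale parameterization).
    poisson = theta ** y * math.exp(-theta) / math.factorial(y)
    gamma_pdf = (theta / beta) ** (alpha - 1) * math.exp(-theta / beta) / (math.gamma(alpha) * beta)
    return poisson * gamma_pdf

def marginal_numeric(y, upper=50.0, steps=50_000):
    # Trapezoidal rule over theta; the tail beyond `upper` is negligible here.
    h = upper / steps
    total = 0.5 * (integrand(1e-12, y) + integrand(upper, y))
    total += sum(integrand(k * h, y) for k in range(1, steps))
    return total * h

def marginal_closed(y):
    # Negative binomial mass: Gamma(y+a)/(y! Gamma(a)) * (b/(1+b))^y * (1/(1+b))^a.
    return (math.gamma(y + alpha) / (math.factorial(y) * math.gamma(alpha))
            * (beta / (1 + beta)) ** y * (1 / (1 + beta)) ** alpha)

for y in range(6):
    assert abs(marginal_numeric(y) - marginal_closed(y)) < 1e-4
```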
To apply empirical Bayes, we will approximate the marginal using the maximum likelihood estimate (MLE). But since the posterior is a gamma distribution, the MLE of the marginal turns out to be just the mean of the posterior, which is the point estimate {\displaystyle \operatorname {E} (\theta \mid y)} we need. Recalling that the mean {\displaystyle \mu } of a gamma distribution {\displaystyle G(\alpha ',\beta ')} is simply {\displaystyle \alpha '\beta '}, we have
{\displaystyle \operatorname {E} (\theta \mid y)=\alpha '\beta '={\frac {{\bar {y}}+\alpha }{1+1/\beta }}={\frac {\beta }{1+\beta }}{\bar {y}}+{\frac {1}{1+\beta }}(\alpha \beta ).}
To obtain the values of {\displaystyle \alpha } and {\displaystyle \beta }, empirical Bayes prescribes estimating the mean {\displaystyle \alpha \beta } and variance {\displaystyle \alpha \beta ^{2}} using the complete set of empirical data.
The resulting point estimate {\displaystyle \operatorname {E} (\theta \mid y)} is therefore like a weighted average of the sample mean {\displaystyle {\bar {y}}} and the prior mean {\displaystyle \mu =\alpha \beta }. This turns out to be a general feature of empirical Bayes: the point estimates for the prior (i.e. the mean) will look like weighted averages of the sample estimate and the prior estimate (likewise for estimates of the variance).
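The whole recipe fits in a few lines. The sketch below uses the method of moments on hypothetical count data, one common choice for this step: the sample mean estimates αβ, and, since the marginal negative binomial variance is αβ(1 + β), the sample variance then yields β (this requires overdispersed data, s² > ȳ):

```python
# Hypothetical count data, e.g. claims per policyholder.
y = [1, 2, 3, 4, 10]
n = len(y)
ybar = sum(y) / n
s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)   # sample variance

# Method of moments for the hyperparameters:
# marginal mean = alpha*beta, marginal variance = alpha*beta*(1 + beta).
beta_hat = s2 / ybar - 1          # requires overdispersion: s2 > ybar
alpha_hat = ybar / beta_hat

def point_estimate(yi):
    # E(theta | y) = (beta/(1+beta)) * y + (1/(1+beta)) * alpha*beta
    w = beta_hat / (1 + beta_hat)
    return w * yi + (1 - w) * alpha_hat * beta_hat

# Every estimate is shrunk from the raw count toward the overall mean ybar = 4.
print([round(point_estimate(v), 2) for v in y])  # → [1.96, 2.64, 3.32, 4.0, 8.08]
```

Each raw count is pulled toward the prior mean, with the amount of shrinkage governed entirely by the estimated β, which illustrates the weighted-average structure described above.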
== See also ==
Bayes estimator
Bayesian network
Hyperparameter
Hyperprior
Best linear unbiased prediction
Robbins lemma
Spike-and-slab variable selection
== References ==
== Further reading ==
Peter E. Rossi; Greg M. Allenby; Rob McCulloch (14 May 2012). Bayesian Statistics and Marketing. John Wiley & Sons. ISBN 978-0-470-86368-8.
Casella, George (May 1985). "An Introduction to Empirical Bayes Data Analysis" (PDF). American Statistician. 39 (2): 83–87. doi:10.2307/2682801. hdl:1813/32886. JSTOR 2682801. MR 0789118.
Nikulin, Mikhail (1987). "Bernstein's regularity conditions in a problem of empirical Bayesian approach". Journal of Soviet Mathematics. 36 (5): 596–600. doi:10.1007/BF01093293. S2CID 122405908.
== External links ==
Use of empirical Bayes Method in estimating road safety (North America)
Empirical Bayes methods for missing data analysis
Using the Beta-Binomial distribution to assess performance of a biometric identification device
A Hierarchical Naive Bayes Classifiers (for continuous and discrete variables). | Wikipedia/Empirical_Bayes_method |
Credibility theory is a branch of actuarial mathematics concerned with determining risk premiums. To achieve this, it uses mathematical models in an effort to forecast the (expected) number of insurance claims based on past observations. Technically speaking, the problem is to find the best linear approximation to the mean of the Bayesian predictive density, which is why credibility theory has many results in common with linear filtering as well as Bayesian statistics more broadly.
For example, in group health insurance an insurer is interested in calculating the risk premium, {\displaystyle RP} (i.e. the theoretical expected claims amount), for a particular employer in the coming year. The insurer will likely have an estimate of historical overall claims experience, {\displaystyle x}, as well as a more specific estimate for the employer in question, {\displaystyle y}. Assigning a credibility factor, {\displaystyle z}, to the overall claims experience (and its complement, {\displaystyle 1-z}, to the employer experience) allows the insurer to get a more accurate estimate of the risk premium in the following manner:
{\displaystyle RP=xz+y(1-z).}
The credibility factor is derived as the value that minimises the error of the estimate. Assuming the variances of {\displaystyle x} and {\displaystyle y} are known quantities taking on the values {\displaystyle u} and {\displaystyle v} respectively, it can be shown that {\displaystyle z} should be equal to:
{\displaystyle z=v/(u+v).}
Therefore, the more uncertainty the estimate has, the lower is its credibility.
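As a quick sketch with hypothetical numbers, the two formulas above fit in a few lines. With the overall experience x noisier (u = 4) than the employer's own experience y (v = 1), the credibility z assigned to x is small and the blend stays close to y:

```python
def risk_premium(x, y, u, v):
    # z = v / (u + v): credibility given to the overall experience x,
    # whose variance is u; the complement 1 - z goes to the employer experience y.
    z = v / (u + v)
    return x * z + y * (1 - z)

rp = risk_premium(x=100.0, y=80.0, u=4.0, v=1.0)  # z = 0.2, so the blend is 84
```

Swapping the variances (u = 1, v = 4) gives z = 0.8 and a premium of 96, closer to the overall experience: the noisier an estimate, the less credibility it receives.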
== Types of credibility ==
In Bayesian credibility, we separate each class (B) and assign it a probability (the probability of B). Then we find how likely our experience (A) is within each class (the probability of A given B). Next, we find how likely our experience was over all classes (the probability of A). Finally, we can find the probability of our class given our experience. So, going back to each class, we weight each statistic with the probability of the particular class given the experience.
Bühlmann credibility works by looking at the variance across the population. More specifically, it looks at how much of the Total Variance is attributed to the variance of the expected values of each class (the Variance of the Hypothetical Means), and how much is attributed to the expected variance over all classes (the Expected Value of the Process Variance). Say we have a basketball team with a high number of points per game. Sometimes they score 128 points and other times 130, but always one of the two. Compared to all basketball teams this is a relatively low variance, meaning that they will contribute very little to the Expected Value of the Process Variance. Also, their unusually high point totals greatly increase the variance of the population, meaning that if the league booted them out, the remaining teams' point totals would be much more predictable (lower variance). So this team is definitely unique: it contributes greatly to the Variance of the Hypothetical Means. We can therefore rate this team's experience with fairly high credibility: they often/always score a lot (low Expected Value of the Process Variance) and not many teams score as much as they do (high Variance of the Hypothetical Means).
== A simple example ==
Suppose there are two coins in a box. One has heads on both sides and the other is a normal coin with 50:50 likelihood of heads or tails. You need to place a wager on the outcome after one is randomly drawn and flipped.
The probability of heads is .5 * 1 + .5 * .5 = .75. This is because there is a .5 chance of selecting the heads-only coin, with a 100% chance of heads, and a .5 chance of the fair coin, with a 50% chance of heads.
Now the same coin is reused and you are asked to bet on the outcome again.
If the first flip was tails, there is a 100% chance you are dealing with a fair coin, so the next flip has a 50% chance of heads and 50% chance of tails.
If the first flip was heads, we must calculate the conditional probability that the chosen coin was heads-only as well as the conditional probability that the coin was fair, after which we can calculate the conditional probability of heads on the next flip. The probability that it came from a heads-only coin given that the first flip was heads is the probability of selecting a heads-only coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 * 1 / .75 = 2/3. The probability that it came from a fair coin given that the first flip was heads is the probability of selecting a fair coin times the probability of heads for that coin divided by the initial probability of heads on the first flip, or .5 * .5 / .75 = 1/3. Finally, the conditional probability of heads on the next flip given that the first flip was heads is the conditional probability of a heads-only coin times the probability of heads for a heads-only coin plus the conditional probability of a fair coin times the probability of heads for a fair coin, or 2/3 * 1 + 1/3 * .5 = 5/6 ≈ .8333.
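The arithmetic of this example can be written out directly as a Bayesian update; nothing here goes beyond the numbers already given above:

```python
# Prior: each coin is drawn with probability 1/2.
p_two_headed, p_fair = 0.5, 0.5
heads_prob = {"two_headed": 1.0, "fair": 0.5}

# First flip: total probability of heads.
p_h = p_two_headed * heads_prob["two_headed"] + p_fair * heads_prob["fair"]

# Bayes' rule: posterior coin probabilities after observing heads.
post_two_headed = p_two_headed * heads_prob["two_headed"] / p_h   # 2/3
post_fair = p_fair * heads_prob["fair"] / p_h                     # 1/3

# Predictive probability of heads on the next flip.
p_h_next = post_two_headed * heads_prob["two_headed"] + post_fair * heads_prob["fair"]

print(p_h, round(p_h_next, 4))  # → 0.75 0.8333
```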
== Actuarial credibility ==
Actuarial credibility describes an approach used by actuaries to improve statistical estimates. Although the approach can be formulated in either a frequentist or Bayesian statistical setting, the latter is often preferred because of the ease of recognizing more than one source of randomness through both "sampling" and "prior" information. In a typical application, the actuary has an estimate X based on a small set of data, and an estimate M based on a larger but less relevant set of data. The credibility estimate is ZX + (1-Z)M, where Z is a number between 0 and 1 (called the "credibility weight" or "credibility factor") calculated to balance the sampling error of X against the possible lack of relevance (and therefore modeling error) of M.
When an insurance company calculates the premium it will charge, it divides the policy holders into groups. For example, it might divide motorists by age, sex, and type of car; a young man driving a fast car being considered a high risk, and an old woman driving a small car being considered a low risk. The division is made balancing the two requirements that the risks in each group are sufficiently similar and the group sufficiently large that a meaningful statistical analysis of the claims experience can be done to calculate the premium. This compromise means that none of the groups contains only identical risks. The problem is then to devise a way of combining the experience of the group with the experience of the individual risk to calculate the premium better. Credibility theory provides a solution to this problem.
For actuaries, it is important to know credibility theory in order to calculate a premium for a group of insurance contracts. The goal is to set up an experience rating system to determine next year's premium, taking into account not only the individual experience with the group, but also the collective experience.
There are two extreme positions. One is to charge everyone the same premium estimated by the overall mean {\displaystyle {\overline {X}}} of the data. This makes sense only if the portfolio is homogeneous, which means that all risk cells have identical mean claims. However, if the portfolio is heterogeneous, it is not a good idea to charge a premium in this way (overcharging the "good" risks and undercharging the "bad" risks), since the "good" risks will take their business elsewhere, leaving the insurer with only "bad" risks. This is an example of adverse selection.
The other extreme is to charge group {\displaystyle j} its own average claims, {\displaystyle {\overline {X_{j}}}}, as the premium charged to the insured. These methods are used if the portfolio is heterogeneous, provided the claims experience is fairly large. To compromise between these two extreme positions, we take the weighted average of the two extremes:
{\displaystyle C=z_{j}{\overline {X_{j}}}+(1-z_{j}){\overline {X}}\,}
{\displaystyle z_{j}} has the following intuitive meaning: it expresses how "credible" the individual experience of cell {\displaystyle j} is. If it is high, then a higher {\displaystyle z_{j}} attaches a larger weight to charging {\displaystyle {\overline {X_{j}}}}; in this case, {\displaystyle z_{j}} is called a credibility factor, and the premium so charged is called a credibility premium.
If the group were completely homogeneous then it would be reasonable to set {\displaystyle z_{j}=0}, while if the group were completely heterogeneous then it would be reasonable to set {\displaystyle z_{j}=1}. Using intermediate values is reasonable to the extent that both individual and group history are useful in inferring future individual behavior.
For example, an actuary has an accident and payroll historical data for a shoe factory suggesting a rate of 3.1 accidents per million dollars of payroll. She has industry statistics (based on all shoe factories) suggesting that the rate is 7.4 accidents per million. With a credibility, Z, of 30%, she would estimate the rate for the factory as 30%(3.1) + 70%(7.4) = 6.1 accidents per million.
== References ==
== Further reading ==
Behan, Donald F. (2009) "Statistical Credibility Theory", Southeastern Actuarial Conference, June 18, 2009
Longley-Cook, L.H. (1962) An introduction to credibility theory PCAS, 49, 194-221.
Mahler, Howard C.; Dean, Curtis Gary (2001). "Chapter 8: Credibility" (PDF). In Casualty Actuarial Society (ed.). Foundations of Casualty Actuarial Science (4th ed.). Casualty Actuarial Society. pp. 485–659. ISBN 978-0-96247-622-8. Retrieved June 25, 2015.
Whitney, A.W. (1918) The Theory of Experience Rating, Proceedings of the Casualty Actuarial Society, 4, 274-292 (This is one of the original casualty actuarial papers dealing with credibility. It uses Bayesian techniques, although the author uses the now archaic "inverse probability" terminology.)
Venter, Gary G. (2005) "Credibility Theory for Dummies" | Wikipedia/Credibility_theory |
Bayesian inference is a statistical tool that can be applied to motor learning, specifically to adaptation. Adaptation is a short-term learning process involving gradual improvement in performance in response to a change in sensory information. Bayesian inference is used to describe the way the nervous system combines this sensory information with prior knowledge to estimate the position or other characteristics of something in the environment. Bayesian inference can also be used to show how information from multiple senses (e.g. visual and proprioception) can be combined for the same purpose. In either case, Bayesian inference dictates that the estimate is most influenced by whichever information is most certain.
== Example: integrating prior knowledge with sensory information in tennis ==
A person uses Bayesian inference to create an estimate that is a weighted combination of his current sensory information and his previous knowledge, or prior. This can be illustrated by decisions made in a tennis match. If someone plays against a familiar opponent who likes to serve such that the ball strikes on the sideline, one's prior would lead one to place the racket above the sideline to return the serve. However, when one sees the ball moving, it may appear that it will land closer to the middle of the court. Rather than completely following this sensory information or completely following the prior, one would move the racket to a location between the sideline (suggested by the prior) and the point where her eyes indicate the ball will land.
Another key part of Bayesian inference is that the estimate will be closer to the physical state suggested by sensory information if the senses are more accurate and will be closer to the state of the prior if the sensory information is more uncertain than the prior. Extending this to the tennis example, a player facing an opponent for the first time would have little certainty in his/her previous knowledge of the opponent and would therefore have an estimate weighted more heavily on visual information concerning ball position. Alternatively, if one were familiar with one's opponent but were playing in foggy or dark conditions that would hamper sight, sensory information would be less certain and one's estimate would rely more heavily on previous knowledge.
== Statistical overview ==
Bayes' theorem states
{\displaystyle P(A|B)={\frac {P(B|A)\,P(A)}{P(B)}}.\,}
In the language of Bayesian statistics, {\displaystyle P(A|B)}, the probability of A given B, is called the posterior, while {\displaystyle P(B|A)} and {\displaystyle P(A)} are the likelihood and the prior probabilities, respectively. {\displaystyle P(B)} is a constant scaling factor which normalizes the posterior so that it is a proper probability distribution. Translating this into the language of motor learning, the prior represents previous knowledge about the physical state of the thing being observed, the likelihood is sensory information used to update the prior, and the posterior is the nervous system's estimate of the physical state. Therefore, for adaptation, Bayes' theorem can be expressed as
estimate = (previous knowledge × sensory information)/scaling factor
The three terms in the equation above are all probability distributions. To find the estimate in non-probabilistic terms, a weighted sum can be used:
{\displaystyle E={\frac {W_{p}S+W_{s}P}{W_{p}+W_{s}}}.}
where {\displaystyle E} is the estimate, {\displaystyle S} is sensory information, {\displaystyle P} is previous knowledge, and the weighting factors {\displaystyle W_{p}} and {\displaystyle W_{s}} are the variances of {\displaystyle P} and {\displaystyle S}, respectively. Variance is a measure of uncertainty in a variable, so the above equation indicates that higher uncertainty in sensory information causes previous knowledge to have more influence on the estimate, and vice versa.
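With hypothetical numbers, the weighted sum can be sketched directly. The prior's variance weights the sensory term and the sensory variance weights the prior, so the more certain source pulls the estimate toward itself:

```python
def combined_estimate(s, p, var_s, var_p):
    # W_p = variance of the prior P (weights S); W_s = variance of the senses S (weights P).
    w_p, w_s = var_p, var_s
    return (w_p * s + w_s * p) / (w_p + w_s)

# Sharp vision, vague prior: the estimate stays near the sensory reading S = 10.
e_clear = combined_estimate(s=10.0, p=4.0, var_s=0.5, var_p=4.5)   # 9.4
# Foggy conditions, familiar opponent: the estimate stays near the prior P = 4.
e_foggy = combined_estimate(s=10.0, p=4.0, var_s=4.5, var_p=0.5)   # 4.6
```

This mirrors the tennis example: in fog (high sensory variance), the racket goes where the prior says the serve usually lands.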
More rigorous mathematical Bayesian descriptions are available in the references.
== Reaching ==
Many motor tasks exhibit adaptation to new sensory information. Bayesian inference has been most commonly studied in reaching.
=== Integrating a prior with current sensory information ===
Adaptation studies often involve a person reaching for a target without seeing either the target or his/her hand. Instead, the hand is represented by a cursor on a computer screen, which the person must move over the target on the screen. In some cases, the cursor is shifted a small distance away from the actual hand position to test how the person responds to changes in visual feedback. A person learns to counteract this shift by moving his/her hand an equal and opposite distance from the shift while still moving the cursor to the target, meaning he/she has developed a prior for this specific shift. When the cursor is then shifted a new, different distance from the same person's reaching hand, the person's reaction is consistent with Bayesian inference: the hand moves a distance that is between the old shift (the prior) and the new shift (the sensory information).
If, for the new shift, the cursor is a large cloud of dots instead of one dot (as shown in the figure), the person's sensory information is less clear and will have less influence on how he reacts than the prior will. This supports the Bayesian idea that sensory information with more certainty will have greater influence on a person's adaptation to shifted sensory feedback.
This form of adaptation holds true only when the shift is small compared to the distance the person has to reach to hit the target. A person reaching for a target 15 cm away would adapt to a 2 cm shift of the cursor in a Bayesian way. However, if the target were only 5 cm away, a 2 cm shift cursor position (visual information) would be recognized and the person would realize the visual information does not accurately show hand position. Instead, the person would rely on proprioception and prior knowledge to move the hand to the target.
Humans also adapt to changing forces when reaching. When a force field a person reaches through changes slightly, he modifies his force to maintain reaching in a straight line based partly on a prior force which had been applied earlier. He relies more on the prior if the prior shift is less variable (more certain).
=== Integrating information from multiple senses ===
Bayesian inference can also be applied to the way humans combine information about changes in their environment from multiple senses, without any consideration of prior knowledge. The two senses with the strongest influence on human reaching adaptation are vision and proprioception. Typically, proprioception has more weight than vision for adapting hand position in depth (the direction moving toward or away from the person reaching), and vision has more weight in the vertical and horizontal directions. However, changing conditions can alter the relative influence of these two senses. For example, the influence of vision on adapting hand depth is increased when the hand is passive, while proprioception has more influence when the hand is moving. Moreover, when vision is reduced (e.g. in darkness), proprioception has more influence on determining hand position. This result is consistent with Bayesian inference; when one sense becomes more uncertain, humans increase their reliance on another sense.
== Posture ==
Additionally, Bayesian inference has been found to play a part in adaptation of postural control. In one study, for example, subjects used a Wii Balance Board to perform a surfing task in which they had to move a cursor representing their center of pressure (COP) on a screen. The Wii surfers got visual information about their COP from clouds of dots similar to the one shown in the reaching section. With larger clouds, the surfers were more uncertain and less able to move the COP to the target on the screen. While this result is consistent with Bayesian inference, Bayesian mathematical models did not provide the best predictions of COP motion, perhaps because moving the COP accurately is more mechanically difficult than reaching. Therefore, the extent to which postural movement can be described by Bayesian inference is not yet clear.
== Gait ==
Adaptation to shifted feedback also occurs during walking and running. People walking with each foot on a different treadmill belt can adapt their step length when one belt begins moving faster than the other. Additionally, runners are able to alter their maximum ground reaction force and leg acceleration when they see a graph of peak leg acceleration. However, to date, no studies have determined whether humans adapt their gaits using Bayesian inference.
== Possible contradictions to Bayesian inference ==
Some adaptation studies do not support the application of Bayesian inference to motor learning. One study of reaching in a force field found that, rather than being influenced by a prior developed over hundreds of previous reaches, adaptation to subsequent reaches is only influenced by recent memories. People reaching in the force field adapted to shifts in the amount of force exerted on the arm, but this adaptation was only affected by the shift in force of the immediately previous reach, not by a well-developed prior knowledge of shifts that had occurred throughout the previous trials of the experiment. This appears to conflict with the application of Bayesian inference to adaptation, but Bayesian adaptation proponents have argued that this particular study required each participant to do only 600 reaches, which is not enough to develop a prior. In reaching studies that show evidence of Bayesian inference, participants typically perform 900 reaches or more. This indicates that, while Bayesian inference is used in adaptation, it is limited in that much previous experience is necessary to develop an influential prior.
== See also ==
Bayesian approaches to brain function
Perception
== References ==
== External links ==
The Computational and Biological Learning Lab at Cambridge
Konrad Kording's lab web site
Research for this Wikipedia entry was conducted as a part of a Locomotion Neuromechanics course (APPH 6232) in the School of Applied Physiology at Georgia Tech | Wikipedia/Bayesian_inference_in_motor_learning |
In statistics, linear regression is a model that estimates the relationship between a scalar response (dependent variable) and one or more explanatory variables (regressor or independent variable). A model with exactly one explanatory variable is a simple linear regression; a model with two or more explanatory variables is a multiple linear regression. This term is distinct from multivariate linear regression, which predicts multiple correlated dependent variables rather than a single dependent variable.
In linear regression, the relationships are modeled using linear predictor functions whose unknown model parameters are estimated from the data. Most commonly, the conditional mean of the response given the values of the explanatory variables (or predictors) is assumed to be an affine function of those values; less commonly, the conditional median or some other quantile is used. Like all forms of regression analysis, linear regression focuses on the conditional probability distribution of the response given the values of the predictors, rather than on the joint probability distribution of all of these variables, which is the domain of multivariate analysis.
Linear regression is also a type of machine learning algorithm, more specifically a supervised algorithm, that learns from a labelled dataset and fits a linear function to the data points, which can then be used for prediction on new datasets.
Linear regression was the first type of regression analysis to be studied rigorously, and to be used extensively in practical applications. This is because models which depend linearly on their unknown parameters are easier to fit than models which are non-linearly related to their parameters and because the statistical properties of the resulting estimators are easier to determine.
Linear regression has many practical uses. Most applications fall into one of the following two broad categories:
If the goal is error reduction (i.e. variance reduction) in prediction or forecasting, linear regression can be used to fit a predictive model to an observed data set of values of the response and explanatory variables. After developing such a model, if additional values of the explanatory variables are collected without an accompanying response value, the fitted model can be used to make a prediction of the response.
If the goal is to explain variation in the response variable that can be attributed to variation in the explanatory variables, linear regression analysis can be applied to quantify the strength of the relationship between the response and the explanatory variables, and in particular to determine whether some explanatory variables may have no linear relationship with the response at all, or to identify which subsets of explanatory variables may contain redundant information about the response.
Linear regression models are often fitted using the least squares approach, but they may also be fitted in other ways, such as by minimizing the "lack of fit" in some other norm (as with least absolute deviations regression), or by minimizing a penalized version of the least squares cost function as in ridge regression (L2-norm penalty) and lasso (L1-norm penalty). Use of the mean squared error (MSE) as the cost on a dataset that has many large outliers can result in a model that fits the outliers more than the true data, due to the higher importance assigned by MSE to large errors. So, cost functions that are robust to outliers should be used if the dataset has many large outliers. Conversely, the least squares approach can be used to fit models that are not linear models. Thus, although the terms "least squares" and "linear model" are closely linked, they are not synonymous.
== Formulation ==
Given a data set {\displaystyle \{y_{i},\,x_{i1},\ldots ,x_{ip}\}_{i=1}^{n}} of n statistical units, a linear regression model assumes that the relationship between the dependent variable y and the vector of regressors x is linear. This relationship is modeled through a disturbance term or error variable ε, an unobserved random variable that adds "noise" to the linear relationship between the dependent variable and the regressors. Thus the model takes the form
{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i1}+\cdots +\beta _{p}x_{ip}+\varepsilon _{i}=\mathbf {x} _{i}^{\mathsf {T}}{\boldsymbol {\beta }}+\varepsilon _{i},\qquad i=1,\ldots ,n,}
where T denotes the transpose, so that {\displaystyle \mathbf {x} _{i}^{\mathsf {T}}{\boldsymbol {\beta }}} is the inner product between the vectors {\displaystyle \mathbf {x} _{i}} and {\displaystyle {\boldsymbol {\beta }}}.
Often these n equations are stacked together and written in matrix notation as
{\displaystyle \mathbf {y} =\mathbf {X} {\boldsymbol {\beta }}+{\boldsymbol {\varepsilon }},\,}
where
{\displaystyle \mathbf {y} ={\begin{bmatrix}y_{1}\\y_{2}\\\vdots \\y_{n}\end{bmatrix}},\quad }
{\displaystyle \mathbf {X} ={\begin{bmatrix}\mathbf {x} _{1}^{\mathsf {T}}\\\mathbf {x} _{2}^{\mathsf {T}}\\\vdots \\\mathbf {x} _{n}^{\mathsf {T}}\end{bmatrix}}={\begin{bmatrix}1&x_{11}&\cdots &x_{1p}\\1&x_{21}&\cdots &x_{2p}\\\vdots &\vdots &\ddots &\vdots \\1&x_{n1}&\cdots &x_{np}\end{bmatrix}},}
{\displaystyle {\boldsymbol {\beta }}={\begin{bmatrix}\beta _{0}\\\beta _{1}\\\beta _{2}\\\vdots \\\beta _{p}\end{bmatrix}},\quad {\boldsymbol {\varepsilon }}={\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\vdots \\\varepsilon _{n}\end{bmatrix}}.}
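For the simple linear regression case (p = 1) the least-squares estimates have a closed form, which the following minimal sketch implements without any linear algebra library (an illustration only, not a general multiple-regression solver):

```python
def fit_simple_linear(xs, ys):
    # Ordinary least squares for y = b0 + b1 * x (the p = 1 case).
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    b1 = sxy / sxx            # slope
    b0 = ybar - b1 * xbar     # intercept
    return b0, b1

# Noise-free data generated by y = 1 + 2x, so OLS recovers the coefficients exactly.
b0, b1 = fit_simple_linear([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(b0, b1)  # → 1.0 2.0
```

With noisy data the same formulas return the least-squares fit rather than the exact generating coefficients; the general p > 1 case requires solving the normal equations for the full design matrix X.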
=== Notation and terminology ===
{\displaystyle \mathbf {y} } is a vector of observed values {\displaystyle y_{i}\ (i=1,\ldots ,n)} of the variable called the regressand, endogenous variable, response variable, target variable, measured variable, criterion variable, or dependent variable. This variable is also sometimes known as the predicted variable, but this should not be confused with predicted values, which are denoted {\displaystyle {\hat {y}}}. The decision as to which variable in a data set is modeled as the dependent variable and which are modeled as the independent variables may be based on a presumption that the value of one of the variables is caused by, or directly influenced by, the other variables. Alternatively, there may be an operational reason to model one of the variables in terms of the others, in which case there need be no presumption of causality.
{\displaystyle \mathbf {X} } may be seen as a matrix of row-vectors {\displaystyle \mathbf {x} _{i\cdot }} or of n-dimensional column-vectors {\displaystyle \mathbf {x} _{\cdot j}}, which are known as regressors, exogenous variables, explanatory variables, covariates, input variables, predictor variables, or independent variables (not to be confused with the concept of independent random variables). The matrix {\displaystyle \mathbf {X} } is sometimes called the design matrix.
Usually a constant is included as one of the regressors. In particular, {\displaystyle x_{i0}=1} for {\displaystyle i=1,\ldots ,n}. The corresponding element of β is called the intercept. Many statistical inference procedures for linear models require an intercept to be present, so it is often included even if theoretical considerations suggest that its value should be zero.
Sometimes one of the regressors can be a non-linear function of another regressor or of the data values, as in polynomial regression and segmented regression. The model remains linear as long as it is linear in the parameter vector β.
The values xij may be viewed as either observed values of random variables Xj or as fixed values chosen prior to observing the dependent variable. Both interpretations may be appropriate in different cases, and they generally lead to the same estimation procedures; however different approaches to asymptotic analysis are used in these two situations.
{\displaystyle {\boldsymbol {\beta }}} is a {\displaystyle (p+1)}-dimensional parameter vector, where {\displaystyle \beta _{0}} is the intercept term (if one is included in the model—otherwise {\displaystyle {\boldsymbol {\beta }}} is p-dimensional). Its elements are known as effects or regression coefficients (although the latter term is sometimes reserved for the estimated effects). In simple linear regression, p=1, and the coefficient is known as regression slope. Statistical estimation and inference in linear regression focuses on β. The elements of this parameter vector are interpreted as the partial derivatives of the dependent variable with respect to the various independent variables.
{\displaystyle {\boldsymbol {\varepsilon }}} is a vector of values {\displaystyle \varepsilon _{i}}. This part of the model is called the error term, disturbance term, or sometimes noise (in contrast with the "signal" provided by the rest of the model). This variable captures all other factors which influence the dependent variable y other than the regressors x. The relationship between the error term and the regressors, for example their correlation, is a crucial consideration in formulating a linear regression model, as it will determine the appropriate estimation method.
Fitting a linear model to a given data set usually requires estimating the regression coefficients {\displaystyle {\boldsymbol {\beta }}} such that the error term {\displaystyle {\boldsymbol {\varepsilon }}=\mathbf {y} -\mathbf {X} {\boldsymbol {\beta }}} is minimized. For example, it is common to use the sum of squared errors {\displaystyle \|{\boldsymbol {\varepsilon }}\|_{2}^{2}} as a measure of {\displaystyle {\boldsymbol {\varepsilon }}} for minimization.
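Minimizing the sum of squared errors can be sketched in a few lines of NumPy; the data below are made up purely for illustration, and `np.linalg.lstsq` is one standard routine that computes the minimizing coefficient vector directly:

```python
import numpy as np

# Hypothetical data: n = 5 observations, a constant column plus p = 2 regressors.
X = np.array([
    [1.0, 0.5, 1.2],
    [1.0, 1.1, 0.3],
    [1.0, 2.0, 1.9],
    [1.0, 3.2, 2.5],
    [1.0, 4.1, 3.0],
])
y = np.array([1.1, 1.9, 4.3, 6.2, 7.4])

# lstsq returns the beta minimizing ||X beta - y||_2^2.
beta_hat, _, rank, _ = np.linalg.lstsq(X, y, rcond=None)

# The fitted error (residual) vector epsilon = y - X beta_hat.
eps = y - X @ beta_hat
```

A characteristic property of this minimizer is that the residual vector is orthogonal to every column of the design matrix.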
=== Example ===
Consider a situation where a small ball is tossed up in the air and we measure its heights of ascent hi at various moments in time ti. Physics tells us that, ignoring drag, the relationship can be modeled as
{\displaystyle h_{i}=\beta _{1}t_{i}+\beta _{2}t_{i}^{2}+\varepsilon _{i},}
where β1 determines the initial velocity of the ball, β2 is proportional to the standard gravity, and εi is due to measurement errors. Linear regression can be used to estimate the values of β1 and β2 from the measured data. This model is non-linear in the time variable, but it is linear in the parameters β1 and β2; if we take regressors xi = (xi1, xi2) = (ti, ti²), the model takes on the standard form
{\displaystyle h_{i}=\mathbf {x} _{i}^{\mathsf {T}}{\boldsymbol {\beta }}+\varepsilon _{i}.}
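The ball-toss example can be reproduced numerically. In this sketch the "measurements" are simulated from known values (the true β1 and β2, the noise level, and the time grid are all assumptions made for illustration), and ordinary least squares recovers the coefficients from the regressors (ti, ti²):

```python
import numpy as np

# Simulated ball toss: h = beta1*t + beta2*t^2 plus small measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 20)
beta1_true, beta2_true = 10.0, -4.9      # initial velocity; -g/2
h = beta1_true * t + beta2_true * t**2 + rng.normal(0.0, 0.05, t.size)

# Regressors x_i = (t_i, t_i^2); the model is linear in beta1 and beta2
# even though it is non-linear in t.
X = np.column_stack([t, t**2])
beta_hat, *_ = np.linalg.lstsq(X, h, rcond=None)
```

With low noise, the estimates land close to the true values used in the simulation.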
=== Assumptions ===
Standard linear regression models with standard estimation techniques make a number of assumptions about the predictor variables, the response variable and their relationship. Numerous extensions have been developed that allow each of these assumptions to be relaxed (i.e. reduced to a weaker form), and in some cases eliminated entirely. Generally these extensions make the estimation procedure more complex and time-consuming, and may also require more data in order to produce an equally precise model.
The following are the major assumptions made by standard linear regression models with standard estimation techniques (e.g. ordinary least squares):
Weak exogeneity. This essentially means that the predictor variables x can be treated as fixed values, rather than random variables. This means, for example, that the predictor variables are assumed to be error-free—that is, not contaminated with measurement errors. Although this assumption is not realistic in many settings, dropping it leads to significantly more difficult errors-in-variables models.
Linearity. This means that the mean of the response variable is a linear combination of the parameters (regression coefficients) and the predictor variables. Note that this assumption is much less restrictive than it may at first seem. Because the predictor variables are treated as fixed values (see above), linearity is really only a restriction on the parameters. The predictor variables themselves can be arbitrarily transformed, and in fact multiple copies of the same underlying predictor variable can be added, each one transformed differently. This technique is used, for example, in polynomial regression, which uses linear regression to fit the response variable as an arbitrary polynomial function (up to a given degree) of a predictor variable. With this much flexibility, models such as polynomial regression often have "too much power", in that they tend to overfit the data. As a result, some kind of regularization must typically be used to prevent unreasonable solutions coming out of the estimation process. Common examples are ridge regression and lasso regression. Bayesian linear regression can also be used, which by its nature is more or less immune to the problem of overfitting. (In fact, ridge regression and lasso regression can both be viewed as special cases of Bayesian linear regression, with particular types of prior distributions placed on the regression coefficients.)
Constant variance (a.k.a. homoscedasticity). This means that the variance of the errors does not depend on the values of the predictor variables. Thus the variability of the responses for given fixed values of the predictors is the same regardless of how large or small the responses are. This is often not the case, as a variable whose mean is large will typically have a greater variance than one whose mean is small. For example, a person whose income is predicted to be $100,000 may easily have an actual income of $80,000 or $120,000—i.e., a standard deviation of around $20,000—while another person with a predicted income of $10,000 is unlikely to have the same $20,000 standard deviation, since that would imply their actual income could vary anywhere between −$10,000 and $30,000. (In fact, as this shows, in many cases—often the same cases where the assumption of normally distributed errors fails—the variance or standard deviation should be predicted to be proportional to the mean, rather than constant.) The absence of homoscedasticity is called heteroscedasticity. In order to check this assumption, a plot of residuals versus predicted values (or the values of each individual predictor) can be examined for a "fanning effect" (i.e., increasing or decreasing vertical spread as one moves left to right on the plot). A plot of the absolute or squared residuals versus the predicted values (or each predictor) can also be examined for a trend or curvature. Formal tests can also be used; see Heteroscedasticity. The presence of heteroscedasticity will result in an overall "average" estimate of variance being used instead of one that takes into account the true variance structure. This leads to less precise (but in the case of ordinary least squares, not biased) parameter estimates and biased standard errors, resulting in misleading tests and interval estimates. The mean squared error for the model will also be wrong. 
Various estimation techniques including weighted least squares and the use of heteroscedasticity-consistent standard errors can handle heteroscedasticity in a quite general way. Bayesian linear regression techniques can also be used when the variance is assumed to be a function of the mean. It is also possible in some cases to fix the problem by applying a transformation to the response variable (e.g., fitting the logarithm of the response variable using a linear regression model, which implies that the response variable itself has a log-normal distribution rather than a normal distribution).
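The "fanning effect" check described above can be sketched numerically. This is an informal diagnostic, not a formal test, and all data here are simulated with an error standard deviation deliberately proportional to the predictor:

```python
import numpy as np

# Simulate heteroscedastic data: error sd grows with x.
rng = np.random.default_rng(1)
x = np.linspace(1.0, 10.0, 200)
y = 2.0 + 3.0 * x + rng.normal(0.0, 0.5 * x)   # sd proportional to x

# Fit ordinary least squares and compute residuals.
X = np.column_stack([np.ones_like(x), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta_hat
resid = y - fitted

# A clearly positive association between |residuals| and fitted values
# is the numerical analogue of a "fanning" residual plot.
fanning = np.corrcoef(fitted, np.abs(resid))[0, 1]
```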
Independence of errors. This assumes that the errors of the response variables are uncorrelated with each other. (Actual statistical independence is a stronger condition than mere lack of correlation and is often not needed, although it can be exploited if it is known to hold.) Some methods such as generalized least squares are capable of handling correlated errors, although they typically require significantly more data unless some sort of regularization is used to bias the model towards assuming uncorrelated errors. Bayesian linear regression is a general way of handling this issue.
Lack of perfect multicollinearity in the predictors. For standard least squares estimation methods, the design matrix X must have full column rank p; otherwise perfect multicollinearity exists in the predictor variables, meaning a linear relationship exists between two or more predictor variables. This can be caused by accidentally duplicating a variable in the data, using a linear transformation of a variable along with the original (e.g., the same temperature measurements expressed in Fahrenheit and Celsius), or including a linear combination of multiple variables in the model, such as their mean. It can also happen if there is too little data available compared to the number of parameters to be estimated (e.g., fewer data points than regression coefficients). Near violations of this assumption, where predictors are highly but not perfectly correlated, can reduce the precision of parameter estimates (see Variance inflation factor). In the case of perfect multicollinearity, the parameter vector β will be non-identifiable—it has no unique solution. In such a case, only some of the parameters can be identified (i.e., their values can only be estimated within some linear subspace of the full parameter space Rp). See partial least squares regression. Methods for fitting linear models with multicollinearity have been developed, some of which require additional assumptions such as "effect sparsity"—that a large fraction of the effects are exactly zero. Note that the more computationally expensive iterated algorithms for parameter estimation, such as those used in generalized linear models, do not suffer from this problem.
Violations of these assumptions can result in biased estimations of β, biased standard errors, untrustworthy confidence intervals and significance tests. Beyond these assumptions, several other statistical properties of the data strongly influence the performance of different estimation methods:
The statistical relationship between the error terms and the regressors plays an important role in determining whether an estimation procedure has desirable sampling properties such as being unbiased and consistent.
The arrangement, or probability distribution of the predictor variables x has a major influence on the precision of estimates of β. Sampling and design of experiments are highly developed subfields of statistics that provide guidance for collecting data in such a way to achieve a precise estimate of β.
=== Interpretation ===
A fitted linear regression model can be used to identify the relationship between a single predictor variable xj and the response variable y when all the other predictor variables in the model are "held fixed". Specifically, the interpretation of βj is the expected change in y for a one-unit change in xj when the other covariates are held fixed—that is, the expected value of the partial derivative of y with respect to xj. This is sometimes called the unique effect of xj on y. In contrast, the marginal effect of xj on y can be assessed using a correlation coefficient or simple linear regression model relating only xj to y; this effect is the total derivative of y with respect to xj.
Care must be taken when interpreting regression results, as some of the regressors may not allow for marginal changes (such as dummy variables, or the intercept term), while others cannot be held fixed (recall the example from the introduction: it would be impossible to "hold ti fixed" and at the same time change the value of ti2).
It is possible that the unique effect be nearly zero even when the marginal effect is large. This may imply that some other covariate captures all the information in xj, so that once that variable is in the model, there is no contribution of xj to the variation in y. Conversely, the unique effect of xj can be large while its marginal effect is nearly zero. This would happen if the other covariates explained a great deal of the variation of y, but they mainly explain variation in a way that is complementary to what is captured by xj. In this case, including the other variables in the model reduces the part of the variability of y that is unrelated to xj, thereby strengthening the apparent relationship with xj.
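The first scenario above (a near-zero unique effect alongside a large marginal effect) can be demonstrated with a small simulation. Everything here is made up: x1 is constructed as a noisy copy of x2, and y depends only on x2, so x2 captures essentially all the information in x1:

```python
import numpy as np

# Simulated contrast between the marginal and unique effect of x1.
rng = np.random.default_rng(2)
n = 1000
x2 = rng.normal(size=n)
x1 = x2 + rng.normal(0.0, 0.1, n)          # x1 is nearly a copy of x2
y = 2.0 * x2 + rng.normal(0.0, 0.1, n)     # y depends only on x2

# Marginal effect of x1: simple regression of y on x1 alone (large).
X1 = np.column_stack([np.ones(n), x1])
marginal = np.linalg.lstsq(X1, y, rcond=None)[0][1]

# Unique effect of x1: multiple regression including x2 (near zero).
X12 = np.column_stack([np.ones(n), x1, x2])
unique = np.linalg.lstsq(X12, y, rcond=None)[0][1]
```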
The meaning of the expression "held fixed" may depend on how the values of the predictor variables arise. If the experimenter directly sets the values of the predictor variables according to a study design, the comparisons of interest may literally correspond to comparisons among units whose predictor variables have been "held fixed" by the experimenter. Alternatively, the expression "held fixed" can refer to a selection that takes place in the context of data analysis. In this case, we "hold a variable fixed" by restricting our attention to the subsets of the data that happen to have a common value for the given predictor variable. This is the only interpretation of "held fixed" that can be used in an observational study.
The notion of a "unique effect" is appealing when studying a complex system where multiple interrelated components influence the response variable. In some cases, it can literally be interpreted as the causal effect of an intervention that is linked to the value of a predictor variable. However, it has been argued that in many cases multiple regression analysis fails to clarify the relationships between the predictor variables and the response variable when the predictors are correlated with each other and are not assigned following a study design.
== Extensions ==
Numerous extensions of linear regression have been developed, which allow some or all of the assumptions underlying the basic model to be relaxed.
=== Simple and multiple linear regression ===
The simplest case of a single scalar predictor variable x and a single scalar response variable y is known as simple linear regression. The extension to multiple and/or vector-valued predictor variables (denoted with a capital X) is known as multiple linear regression, also known as multivariable linear regression (not to be confused with multivariate linear regression).
Multiple linear regression is a generalization of simple linear regression to the case of more than one independent variable, and a special case of general linear models, restricted to one dependent variable. The basic model for multiple linear regression is
{\displaystyle Y_{i}=\beta _{0}+\beta _{1}X_{i1}+\beta _{2}X_{i2}+\ldots +\beta _{p}X_{ip}+\epsilon _{i}}
for each observation {\textstyle i=1,\ldots ,n}.
In the formula above we consider n observations of one dependent variable and p independent variables. Thus, Yi is the ith observation of the dependent variable, Xij is the ith observation of the jth independent variable, j = 1, 2, ..., p. The values βj represent parameters to be estimated, and εi is the ith independent identically distributed normal error.
In the more general multivariate linear regression, there is one equation of the above form for each of m > 1 dependent variables that share the same set of explanatory variables and hence are estimated simultaneously with each other:
{\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{i1}+\beta _{2j}X_{i2}+\ldots +\beta _{pj}X_{ip}+\epsilon _{ij}}
for all observations indexed as i = 1, ... , n and for all dependent variables indexed as j = 1, ... , m.
Nearly all real-world regression models involve multiple predictors, and basic descriptions of linear regression are often phrased in terms of the multiple regression model. Note, however, that in these cases the response variable y is still a scalar. Another term, multivariate linear regression, refers to cases where y is a vector, i.e., the same as general linear regression.
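The multivariate case, where m > 1 dependent variables share the same design matrix, can be sketched directly: a least-squares solver accepts a response matrix Y and estimates every column of the coefficient matrix B simultaneously. The data below are simulated for illustration:

```python
import numpy as np

# Multivariate linear regression sketch: m = 2 responses sharing one X.
rng = np.random.default_rng(3)
n, p, m = 100, 2, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
B_true = np.array([[1.0, -1.0],
                   [2.0,  0.5],
                   [0.0,  3.0]])           # (p+1) x m coefficient matrix
Y = X @ B_true + rng.normal(0.0, 0.1, (n, m))

# lstsq solves all m regressions at once when Y is a matrix:
# column j of B_hat is the fit for dependent variable j.
B_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```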
=== General linear models ===
The general linear model considers the situation when the response variable is not a scalar (for each observation) but a vector, yi. Conditional linearity of {\displaystyle E(\mathbf {y} \mid \mathbf {x} _{i})=\mathbf {x} _{i}^{\mathsf {T}}B} is still assumed, with a matrix B replacing the vector β of the classical linear regression model. Multivariate analogues of ordinary least squares (OLS) and generalized least squares (GLS) have been developed. "General linear models" are also called "multivariate linear models". These are not the same as multivariable linear models (also called "multiple linear models").
=== Heteroscedastic models ===
Various models have been created that allow for heteroscedasticity, i.e. the errors for different response variables may have different variances. For example, weighted least squares is a method for estimating linear regression models when the response variables may have different error variances, possibly with correlated errors. (See also Weighted linear least squares, and Generalized least squares.) Heteroscedasticity-consistent standard errors is an improved method for use with uncorrelated but potentially heteroscedastic errors.
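When the individual error variances are known (or modeled), weighted least squares downweights noisy observations with weights wi = 1/σi². The following is a minimal sketch on simulated data, solving the weighted normal equations directly:

```python
import numpy as np

# Weighted least squares sketch: observations with larger error variance
# receive proportionally smaller weight w_i = 1 / sigma_i^2.
rng = np.random.default_rng(4)
n = 200
x = np.linspace(1.0, 10.0, n)
sigma = 0.2 * x                             # assumed known, heteroscedastic sd
y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)

X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma**2

# Solve the weighted normal equations (X^T W X) beta = X^T W y.
XtWX = X.T @ (w[:, None] * X)
XtWy = X.T @ (w * y)
beta_wls = np.linalg.solve(XtWX, XtWy)
```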
=== Generalized linear models ===
The Generalized linear model (GLM) is a framework for modeling response variables that are bounded or discrete. This is used, for example:
when modeling positive quantities (e.g. prices or populations) that vary over a large scale—which are better described using a skewed distribution such as the log-normal distribution or Poisson distribution (although GLMs are not used for log-normal data, instead the response variable is simply transformed using the logarithm function);
when modeling categorical data, such as the choice of a given candidate in an election (which is better described using a Bernoulli distribution/binomial distribution for binary choices, or a categorical distribution/multinomial distribution for multi-way choices), where there are a fixed number of choices that cannot be meaningfully ordered;
when modeling ordinal data, e.g. ratings on a scale from 0 to 5, where the different outcomes can be ordered but where the quantity itself may not have any absolute meaning (e.g. a rating of 4 may not be "twice as good" in any objective sense as a rating of 2, but simply indicates that it is better than 2 or 3 but not as good as 5).
Generalized linear models allow for an arbitrary link function, g, that relates the mean of the response variable(s) to the predictors: {\displaystyle E(Y)=g^{-1}(XB)}. The link function is often related to the distribution of the response, and in particular it typically has the effect of transforming between the {\displaystyle (-\infty ,\infty )} range of the linear predictor and the range of the response variable.
Some common examples of GLMs are:
Poisson regression for count data.
Logistic regression and probit regression for binary data.
Multinomial logistic regression and multinomial probit regression for categorical data.
Ordered logit and ordered probit regression for ordinal data.
Single index models allow some degree of nonlinearity in the relationship between x and y, while preserving the central role of the linear predictor β′x as in the classical linear regression model. Under certain conditions, simply applying OLS to data from a single-index model will consistently estimate β up to a proportionality constant.
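As one concrete GLM, logistic regression uses the logit link, so the mean is E(Y) = g⁻¹(Xβ) = 1/(1 + e^(−Xβ)). A minimal Newton-Raphson fit (the iteration underlying iteratively reweighted least squares) can be sketched as follows; the data and true coefficients are simulated, and this is illustrative rather than a production routine:

```python
import numpy as np

# Logistic regression (GLM with logit link) fitted by Newton's method.
rng = np.random.default_rng(5)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.5])
p = 1.0 / (1.0 + np.exp(-X @ beta_true))    # inverse link g^{-1}
y = rng.binomial(1, p)

beta = np.zeros(2)
for _ in range(25):                          # Newton-Raphson iterations
    mu = 1.0 / (1.0 + np.exp(-X @ beta))     # current mean estimate
    W = mu * (1.0 - mu)                      # Bernoulli variance weights
    grad = X.T @ (y - mu)                    # score (gradient of log-likelihood)
    hess = X.T @ (W[:, None] * X)            # observed information
    beta = beta + np.linalg.solve(hess, grad)
```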
=== Hierarchical linear models ===
Hierarchical linear models (or multilevel regression) organize the data into a hierarchy of regressions, for example where A is regressed on B, and B is regressed on C. It is often used where the variables of interest have a natural hierarchical structure such as in educational statistics, where students are nested in classrooms, classrooms are nested in schools, and schools are nested in some administrative grouping, such as a school district. The response variable might be a measure of student achievement such as a test score, and different covariates would be collected at the classroom, school, and school district levels.
=== Errors-in-variables ===
Errors-in-variables models (or "measurement error models") extend the traditional linear regression model to allow the predictor variables X to be observed with error. This error causes standard estimators of β to become biased. Generally, the form of bias is an attenuation, meaning that the effects are biased toward zero.
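The attenuation bias can be made visible with a short simulation (all quantities below are made up for illustration): when the observed predictor is the true predictor plus noise of equal variance, the OLS slope shrinks by the factor Var(x)/(Var(x) + Var(noise)), here roughly one half:

```python
import numpy as np

# Simulated illustration of attenuation: measurement error in the predictor
# biases the OLS slope toward zero.
rng = np.random.default_rng(6)
n = 10000
x_true = rng.normal(size=n)
y = 2.0 * x_true + rng.normal(0.0, 0.1, n)
x_obs = x_true + rng.normal(0.0, 1.0, n)    # noisy measurement of x

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

clean = slope(x_true, y)                     # near the true slope 2
attenuated = slope(x_obs, y)                 # near 2 * 1/(1+1) = 1
```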
=== Group effects ===
In a multiple linear regression model
{\displaystyle y=\beta _{0}+\beta _{1}x_{1}+\cdots +\beta _{p}x_{p}+\varepsilon ,}
parameter {\displaystyle \beta _{j}} of predictor variable {\displaystyle x_{j}} represents the individual effect of {\displaystyle x_{j}}. It has an interpretation as the expected change in the response variable {\displaystyle y} when {\displaystyle x_{j}} increases by one unit with other predictor variables held constant. When {\displaystyle x_{j}} is strongly correlated with other predictor variables, it is improbable that {\displaystyle x_{j}} can increase by one unit with other variables held constant. In this case, the interpretation of {\displaystyle \beta _{j}} becomes problematic as it is based on an improbable condition, and the effect of {\displaystyle x_{j}} cannot be evaluated in isolation.
For a group of predictor variables, say, {\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}}, a group effect {\displaystyle \xi (\mathbf {w} )} is defined as a linear combination of their parameters
{\displaystyle \xi (\mathbf {w} )=w_{1}\beta _{1}+w_{2}\beta _{2}+\dots +w_{q}\beta _{q},}
where {\displaystyle \mathbf {w} =(w_{1},w_{2},\dots ,w_{q})^{\intercal }} is a weight vector satisfying {\textstyle \sum _{j=1}^{q}|w_{j}|=1}. Because of the constraint on {\displaystyle w_{j}}, {\displaystyle \xi (\mathbf {w} )} is also referred to as a normalized group effect. A group effect {\displaystyle \xi (\mathbf {w} )} has an interpretation as the expected change in {\displaystyle y} when variables in the group {\displaystyle x_{1},x_{2},\dots ,x_{q}} change by the amount {\displaystyle w_{1},w_{2},\dots ,w_{q}}, respectively, at the same time with other variables (not in the group) held constant. It generalizes the individual effect of a variable to a group of variables in that (i) if {\displaystyle q=1}, then the group effect reduces to an individual effect, and (ii) if {\displaystyle w_{i}=1} and {\displaystyle w_{j}=0} for {\displaystyle j\neq i}, then the group effect also reduces to an individual effect.
A group effect {\displaystyle \xi (\mathbf {w} )} is said to be meaningful if the underlying simultaneous changes of the {\displaystyle q} variables {\displaystyle (x_{1},x_{2},\dots ,x_{q})^{\intercal }} are probable.
Group effects provide a means to study the collective impact of strongly correlated predictor variables in linear regression models. Individual effects of such variables are not well-defined as their parameters do not have good interpretations. Furthermore, when the sample size is not large, none of their parameters can be accurately estimated by the least squares regression due to the multicollinearity problem. Nevertheless, there are meaningful group effects that have good interpretations and can be accurately estimated by the least squares regression. A simple way to identify these meaningful group effects is to use an all positive correlations (APC) arrangement of the strongly correlated variables under which pairwise correlations among these variables are all positive, and standardize all {\displaystyle p} predictor variables in the model so that they all have mean zero and length one. To illustrate this, suppose that {\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}} is a group of strongly correlated variables in an APC arrangement and that they are not strongly correlated with predictor variables outside the group. Let {\displaystyle y'} be the centred {\displaystyle y} and {\displaystyle x_{j}'} be the standardized {\displaystyle x_{j}}. Then, the standardized linear regression model is
{\displaystyle y'=\beta _{1}'x_{1}'+\cdots +\beta _{p}'x_{p}'+\varepsilon .}
Parameters {\displaystyle \beta _{j}} in the original model, including {\displaystyle \beta _{0}}, are simple functions of {\displaystyle \beta _{j}'} in the standardized model. The standardization of variables does not change their correlations, so {\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}} is a group of strongly correlated variables in an APC arrangement and they are not strongly correlated with other predictor variables in the standardized model. A group effect of {\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}} is
{\displaystyle \xi '(\mathbf {w} )=w_{1}\beta _{1}'+w_{2}\beta _{2}'+\dots +w_{q}\beta _{q}',}
and its minimum-variance unbiased linear estimator is
ξ
^
′
(
w
)
=
w
1
β
^
1
′
+
w
2
β
^
2
′
+
⋯
+
w
q
β
^
q
′
,
{\displaystyle {\hat {\xi }}'(\mathbf {w} )=w_{1}{\hat {\beta }}_{1}'+w_{2}{\hat {\beta }}_{2}'+\dots +w_{q}{\hat {\beta }}_{q}',}
where
β
^
j
′
{\displaystyle {\hat {\beta }}_{j}'}
is the least squares estimator of
β
j
′
{\displaystyle \beta _{j}'}
. In particular, the average group effect of the
q
{\displaystyle q}
standardized variables is
ξ
A
=
1
q
(
β
1
′
+
β
2
′
+
⋯
+
β
q
′
)
,
{\displaystyle \xi _{A}={\frac {1}{q}}(\beta _{1}'+\beta _{2}'+\dots +\beta _{q}'),}
which has an interpretation as the expected change in {\displaystyle y'} when all {\displaystyle x_{j}'} in the strongly correlated group increase by {\displaystyle (1/q)}th of a unit at the same time with variables outside the group held constant. With strong positive correlations and in standardized units, variables in the group are approximately equal, so they are likely to increase at the same time and in similar amount. Thus, the average group effect {\displaystyle \xi _{A}} is a meaningful effect. It can be accurately estimated by its minimum-variance unbiased linear estimator {\textstyle {\hat {\xi }}_{A}={\frac {1}{q}}({\hat {\beta }}_{1}'+{\hat {\beta }}_{2}'+\dots +{\hat {\beta }}_{q}')}, even when individually none of the {\displaystyle \beta _{j}'} can be accurately estimated by {\displaystyle {\hat {\beta }}_{j}'}.
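The average group effect can be demonstrated with a small simulation (all data below are made up): two standardized predictors built from a common latent variable are strongly positively correlated, and while their individual coefficient estimates are unstable, the equal-weight average is estimated stably:

```python
import numpy as np

# Two strongly correlated predictors, standardized to mean zero and length one.
rng = np.random.default_rng(7)
n = 200
z = rng.normal(size=n)
x1 = z + rng.normal(0.0, 0.1, n)
x2 = z + rng.normal(0.0, 0.1, n)

def standardize(v):
    """Centre v and scale it to unit Euclidean length."""
    v = v - v.mean()
    return v / np.linalg.norm(v)

x1, x2 = standardize(x1), standardize(x2)
y = x1 + x2 + rng.normal(0.0, 0.2, n)        # true average group effect is 1
y = y - y.mean()                              # centred response

X = np.column_stack([x1, x2])
b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# Equal-weight group effect estimate, w = (1/2, 1/2).
xi_A_hat = 0.5 * (b1 + b2)
```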
Not all group effects are meaningful or can be accurately estimated. For example, {\displaystyle \beta _{1}'} is a special group effect with weights {\displaystyle w_{1}=1} and {\displaystyle w_{j}=0} for {\displaystyle j\neq 1}, but it cannot be accurately estimated by {\displaystyle {\hat {\beta }}'_{1}}. It is also not a meaningful effect. In general, for a group of {\displaystyle q} strongly correlated predictor variables in an APC arrangement in the standardized model, group effects whose weight vectors {\displaystyle \mathbf {w} } are at or near the centre of the simplex {\textstyle \sum _{j=1}^{q}w_{j}=1} ({\displaystyle w_{j}\geq 0}) are meaningful and can be accurately estimated by their minimum-variance unbiased linear estimators. Effects with weight vectors far away from the centre are not meaningful as such weight vectors represent simultaneous changes of the variables that violate the strong positive correlations of the standardized variables in an APC arrangement. As such, they are not probable. These effects also cannot be accurately estimated.
Applications of the group effects include (1) estimation and inference for meaningful group effects on the response variable, (2) testing for "group significance" of the {\displaystyle q} variables via testing {\displaystyle H_{0}:\xi _{A}=0} versus {\displaystyle H_{1}:\xi _{A}\neq 0}, and (3) characterizing the region of the predictor variable space over which predictions by the least squares estimated model are accurate.
A group effect of the original variables {\displaystyle \{x_{1},x_{2},\dots ,x_{q}\}} can be expressed as a constant times a group effect of the standardized variables {\displaystyle \{x_{1}',x_{2}',\dots ,x_{q}'\}}. The former is meaningful when the latter is. Thus meaningful group effects of the original variables can be found through meaningful group effects of the standardized variables.
=== Others ===
In Dempster–Shafer theory, or a linear belief function in particular, a linear regression model may be represented as a partially swept matrix, which can be combined with similar matrices representing observations and other assumed normal distributions and state equations. The combination of swept or unswept matrices provides an alternative method for estimating linear regression models.
== Estimation methods ==
A large number of procedures have been developed for parameter estimation and inference in linear regression. These methods differ in computational simplicity of algorithms, presence of a closed-form solution, robustness with respect to heavy-tailed distributions, and theoretical assumptions needed to validate desirable statistical properties such as consistency and asymptotic efficiency.
Some of the more common estimation techniques for linear regression are summarized below.
=== Least-squares estimation and related techniques ===
Assuming that the independent variables are {\displaystyle {\vec {x_{i}}}=\left[x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]} and the model's parameters are {\displaystyle {\vec {\beta }}=\left[\beta _{0},\beta _{1},\ldots ,\beta _{m}\right]}, then the model's prediction would be
{\displaystyle y_{i}\approx \beta _{0}+\sum _{j=1}^{m}\beta _{j}\times x_{j}^{i}}.
If
x
i
→
{\displaystyle {\vec {x_{i}}}}
is extended to
x
i
→
=
[
1
,
x
1
i
,
x
2
i
,
…
,
x
m
i
]
{\displaystyle {\vec {x_{i}}}=\left[1,x_{1}^{i},x_{2}^{i},\ldots ,x_{m}^{i}\right]}
then
{\displaystyle y_{i}}
would become a dot product of the parameter and the independent vectors, i.e.
{\displaystyle y_{i}\approx \sum _{j=0}^{m}\beta _{j}\times x_{j}^{i}={\vec {\beta }}\cdot {\vec {x_{i}}}}.
In the least-squares setting, the optimal parameter vector is defined as the one that minimizes the sum of squared losses:
{\displaystyle {\vec {\hat {\beta }}}={\underset {\vec {\beta }}{\mbox{arg min}}}\,L\left(D,{\vec {\beta }}\right)={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left({\vec {\beta }}\cdot {\vec {x_{i}}}-y_{i}\right)^{2}}
Now putting the independent and dependent variables in matrices {\displaystyle X} and {\displaystyle Y} respectively, the loss function can be rewritten as:
{\displaystyle {\begin{aligned}L\left(D,{\vec {\beta }}\right)&=\|X{\vec {\beta }}-Y\|^{2}\\&=\left(X{\vec {\beta }}-Y\right)^{\textsf {T}}\left(X{\vec {\beta }}-Y\right)\\&=Y^{\textsf {T}}Y-Y^{\textsf {T}}X{\vec {\beta }}-{\vec {\beta }}^{\textsf {T}}X^{\textsf {T}}Y+{\vec {\beta }}^{\textsf {T}}X^{\textsf {T}}X{\vec {\beta }}\end{aligned}}}
As the loss function is convex, the optimal solution lies at gradient zero. The gradient of the loss function is (using the denominator layout convention):
{\displaystyle {\begin{aligned}{\frac {\partial L\left(D,{\vec {\beta }}\right)}{\partial {\vec {\beta }}}}&={\frac {\partial \left(Y^{\textsf {T}}Y-Y^{\textsf {T}}X{\vec {\beta }}-{\vec {\beta }}^{\textsf {T}}X^{\textsf {T}}Y+{\vec {\beta }}^{\textsf {T}}X^{\textsf {T}}X{\vec {\beta }}\right)}{\partial {\vec {\beta }}}}\\&=-2X^{\textsf {T}}Y+2X^{\textsf {T}}X{\vec {\beta }}\end{aligned}}}
Setting the gradient to zero produces the optimum parameter:
{\displaystyle {\begin{aligned}-2X^{\textsf {T}}Y+2X^{\textsf {T}}X{\vec {\beta }}&=0\\\Rightarrow X^{\textsf {T}}X{\vec {\beta }}&=X^{\textsf {T}}Y\\\Rightarrow {\vec {\hat {\beta }}}&=\left(X^{\textsf {T}}X\right)^{-1}X^{\textsf {T}}Y\end{aligned}}}
Note: to prove that the {\displaystyle {\hat {\beta }}} obtained is indeed a minimum, one needs to differentiate once more to obtain the Hessian matrix and show that it is positive definite. This is provided by the Gauss–Markov theorem.
Linear least squares methods include mainly:
Ordinary least squares
Weighted least squares
Generalized least squares
Linear Template Fit
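The closed-form solution derived above can be sketched in a few lines of NumPy. This is a minimal illustration with invented data, not a production implementation; in practice a solver such as `numpy.linalg.lstsq` is preferred for numerical stability:

```python
import numpy as np

# Illustrative data: n = 100 observations, m = 2 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
true_beta = np.array([2.0, 1.0, -3.0])          # [intercept, beta_1, beta_2]
y = true_beta[0] + X @ true_beta[1:] + 0.1 * rng.normal(size=100)

# Prepend a column of ones so the intercept becomes beta_0.
Xa = np.column_stack([np.ones(len(X)), X])

# Normal equations: solve (X^T X) beta = X^T y, i.e. beta = (X^T X)^(-1) X^T y.
beta_hat = np.linalg.solve(Xa.T @ Xa, Xa.T @ y)
print(beta_hat)   # close to [2, 1, -3]
```

Solving the linear system directly (rather than forming the inverse explicitly) is the standard way to evaluate the closed-form estimator.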
=== Maximum-likelihood estimation and related techniques ===
==== Maximum likelihood estimation ====
Maximum likelihood estimation can be performed when the distribution of the error terms is known to belong to a certain parametric family ƒθ of probability distributions. When fθ is a normal distribution with zero mean and variance θ, the resulting estimate is identical to the OLS estimate. GLS estimates are maximum likelihood estimates when ε follows a multivariate normal distribution with a known covariance matrix.
Let us denote each data point by {\displaystyle ({\vec {x_{i}}},y_{i})}, the regression parameters as {\displaystyle {\vec {\beta }}}, the set of all data by {\displaystyle D}, and the cost function by {\displaystyle L(D,{\vec {\beta }})=\sum _{i}(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}})^{2}}.
As shown below, the same optimal parameter that minimizes {\displaystyle L(D,{\vec {\beta }})} achieves maximum likelihood too. Here the assumption is that the dependent variable
{\displaystyle y}
is a random variable that follows a Gaussian distribution, where the standard deviation is fixed and the mean is a linear combination of
{\displaystyle {\vec {x}}}:
{\displaystyle {\begin{aligned}H(D,{\vec {\beta }})&=\prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\end{aligned}}}
Now, we need to look for a parameter that maximizes this likelihood function. Since the logarithmic function is strictly increasing, instead of maximizing this function, we can also maximize its logarithm and find the optimal parameter that way.
{\displaystyle {\begin{aligned}I(D,{\vec {\beta }})&=\log \prod _{i=1}^{n}Pr(y_{i}|{\vec {x_{i}}}\,\,;{\vec {\beta }},\sigma )\\&=\log \prod _{i=1}^{n}{\frac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\frac {\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}}{2\sigma ^{2}}}\right)\\&=n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\end{aligned}}}
The optimal parameter is thus equal to:
{\displaystyle {\begin{aligned}{\underset {\vec {\beta }}{\mbox{arg max}}}\,I(D,{\vec {\beta }})&={\underset {\vec {\beta }}{\mbox{arg max}}}\left(n\log {\frac {1}{{\sqrt {2\pi }}\sigma }}-{\frac {1}{2\sigma ^{2}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\right)\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\sum _{i=1}^{n}\left(y_{i}-{\vec {\beta }}\,\cdot \,{\vec {x_{i}}}\right)^{2}\\&={\underset {\vec {\beta }}{\mbox{arg min}}}\,L(D,{\vec {\beta }})\\&={\vec {\hat {\beta }}}\end{aligned}}}
In this way, the parameter that maximizes {\displaystyle H(D,{\vec {\beta }})} is the same as the one that minimizes {\displaystyle L(D,{\vec {\beta }})}. This means that in linear regression, the result of the least squares method is the same as the result of the maximum likelihood estimation method.
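The equivalence can be checked numerically: at the least-squares solution, the Gaussian log-likelihood from the derivation above is at least as large as at any other parameter value. A minimal sketch with simulated data (all numbers illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0])
sigma = 0.5
y = X @ beta_true + sigma * rng.normal(size=n)

def log_likelihood(beta, sigma=0.5):
    # Gaussian log-likelihood I(D, beta) from the derivation above.
    resid = y - X @ beta
    return n * np.log(1.0 / (np.sqrt(2 * np.pi) * sigma)) - (resid @ resid) / (2 * sigma**2)

# Least-squares solution (closed form via the normal equations).
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# The OLS solution also maximizes the likelihood: any perturbation lowers it.
for delta in rng.normal(size=(20, 2)):
    assert log_likelihood(beta_ols) >= log_likelihood(beta_ols + 0.1 * delta)
```

Since OLS minimizes the sum of squared residuals, every other parameter vector has a larger residual sum and therefore a smaller log-likelihood, exactly as the derivation shows.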
==== Regularized Regression ====
Ridge regression and other forms of penalized estimation, such as Lasso regression, deliberately introduce bias into the estimation of β in order to reduce the variability of the estimate. The resulting estimates generally have lower mean squared error than the OLS estimates, particularly when multicollinearity is present or when overfitting is a problem. They are generally used when the goal is to predict the value of the response variable y for values of the predictors x that have not yet been observed. These methods are not as commonly used when the goal is inference, since it is difficult to account for the bias.
==== Least Absolute Deviation ====
Least absolute deviation (LAD) regression is a robust estimation technique in that it is less sensitive to the presence of outliers than OLS (but is less efficient than OLS when no outliers are present). It is equivalent to maximum likelihood estimation under a Laplace distribution model for ε.
==== Adaptive Estimation ====
If we assume that error terms are independent of the regressors, {\displaystyle \varepsilon _{i}\perp \mathbf {x} _{i}}, then the optimal estimator is the 2-step MLE, where the first step is used to non-parametrically estimate the distribution of the error term.
=== Other estimation techniques ===
Bayesian linear regression applies the framework of Bayesian statistics to linear regression. (See also Bayesian multivariate linear regression.) In particular, the regression coefficients β are assumed to be random variables with a specified prior distribution. The prior distribution can bias the solutions for the regression coefficients, in a way similar to (but more general than) ridge regression or lasso regression. In addition, the Bayesian estimation process produces not a single point estimate for the "best" values of the regression coefficients but an entire posterior distribution, completely describing the uncertainty surrounding the quantity. This can be used to estimate the "best" coefficients using the mean, mode, median, any quantile (see quantile regression), or any other function of the posterior distribution.
Quantile regression focuses on the conditional quantiles of y given X rather than the conditional mean of y given X. Linear quantile regression models a particular conditional quantile, for example the conditional median, as a linear function βTx of the predictors.
Mixed models are widely used to analyze linear regression relationships involving dependent data when the dependencies have a known structure. Common applications of mixed models include analysis of data involving repeated measurements, such as longitudinal data, or data obtained from cluster sampling. They are generally fit as parametric models, using maximum likelihood or Bayesian estimation. In the case where the errors are modeled as normal random variables, there is a close connection between mixed models and generalized least squares. Fixed effects estimation is an alternative approach to analyzing this type of data.
Principal component regression (PCR) is used when the number of predictor variables is large, or when strong correlations exist among the predictor variables. This two-stage procedure first reduces the predictor variables using principal component analysis, and then uses the reduced variables in an OLS regression fit. While it often works well in practice, there is no general theoretical reason that the most informative linear function of the predictor variables should lie among the dominant principal components of the multivariate distribution of the predictor variables. Partial least squares regression is an extension of the PCR method that does not suffer from this deficiency.
Least-angle regression is an estimation procedure for linear regression models that was developed to handle high-dimensional covariate vectors, potentially with more covariates than observations.
The Theil–Sen estimator is a simple robust estimation technique that chooses the slope of the fit line to be the median of the slopes of the lines through pairs of sample points. It has similar statistical efficiency properties to simple linear regression but is much less sensitive to outliers.
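The Theil–Sen idea is simple enough to sketch directly: compute the slope through every pair of points and take the median. A minimal illustration (data values invented; a single outlier leaves the estimate unchanged):

```python
import numpy as np
from itertools import combinations

def theil_sen_slope(x, y):
    # Slope of the fit line = median of slopes through all pairs of points.
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)
              if x[j] != x[i]]
    return np.median(slopes)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = 2.0 * x + 1.0
y[3] = 50.0                       # one gross outlier
print(theil_sen_slope(x, y))      # still 2.0; an OLS slope would be pulled far off
```

Because the outlier only affects the slopes of pairs containing it, the median of the pairwise slopes is untouched, which is the source of the estimator's robustness.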
Other robust estimation techniques, including the α-trimmed mean approach, and L-, M-, S-, and R-estimators have been introduced.
== Applications ==
Linear regression is widely used in biological, behavioral and social sciences to describe possible relationships between variables. It ranks as one of the most important tools used in these disciplines.
=== Trend line ===
A trend line represents a trend, the long-term movement in time series data after other components have been accounted for. It tells whether a particular data set (say GDP, oil prices or stock prices) has increased or decreased over a period of time. A trend line could simply be drawn by eye through a set of data points, but more properly its position and slope are calculated using statistical techniques like linear regression. Trend lines typically are straight lines, although some variations use higher degree polynomials depending on the degree of curvature desired in the line.
Trend lines are sometimes used in business analytics to show changes in data over time. This has the advantage of being simple. Trend lines are often used to argue that a particular action or event (such as training, or an advertising campaign) caused observed changes at a point in time. This is a simple technique, and does not require a control group, experimental design, or a sophisticated analysis technique. However, it suffers from a lack of scientific validity in cases where other potential changes can affect the data.
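Fitting a trend line by least squares can be sketched in a few lines of NumPy; the series below is invented for illustration (a linear trend plus a small fluctuation):

```python
import numpy as np

# Monthly observations of some series (illustrative numbers).
t = np.arange(12)                              # time index
series = 100 + 1.5 * t + np.sin(t)             # trend plus fluctuation

# Fit a straight trend line by least squares: a degree-1 polynomial.
slope, intercept = np.polyfit(t, series, deg=1)
trend = intercept + slope * t                  # the fitted trend line
print(slope > 0)                               # True: the series trends upward
```

The sign and magnitude of the fitted slope summarize the long-term movement; the `trend` array is what would be overlaid on a chart of the data.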
=== Epidemiology ===
Early evidence relating tobacco smoking to mortality and morbidity came from observational studies employing regression analysis. In order to reduce spurious correlations when analyzing observational data, researchers usually include several variables in their regression models in addition to the variable of primary interest. For example, in a regression model in which cigarette smoking is the independent variable of primary interest and the dependent variable is lifespan measured in years, researchers might include education and income as additional independent variables, to ensure that any observed effect of smoking on lifespan is not due to those other socio-economic factors. However, it is never possible to include all possible confounding variables in an empirical analysis. For example, a hypothetical gene might increase mortality and also cause people to smoke more. For this reason, randomized controlled trials are often able to generate more compelling evidence of causal relationships than can be obtained using regression analyses of observational data. When controlled experiments are not feasible, variants of regression analysis such as instrumental variables regression may be used to attempt to estimate causal relationships from observational data.
=== Finance ===
The capital asset pricing model uses linear regression as well as the concept of beta for analyzing and quantifying the systematic risk of an investment. This comes directly from the beta coefficient of the linear regression model that relates the return on the investment to the return on all risky assets.
=== Economics ===
Linear regression is the predominant empirical tool in economics. For example, it is used to predict consumption spending, fixed investment spending, inventory investment, purchases of a country's exports, spending on imports, the demand to hold liquid assets, labor demand, and labor supply.
=== Environmental science ===
Linear regression finds application in a wide range of environmental science applications such as land use, infectious diseases, and air pollution. For example, linear regression can be used to predict the changing effects of car pollution. One notable example of this application in infectious diseases is the flattening the curve strategy emphasized early in the COVID-19 pandemic, where public health officials combined sparse data on infected individuals with sophisticated models of disease transmission to characterize the spread of COVID-19.
=== Building science ===
Linear regression is commonly used in building science field studies to derive characteristics of building occupants. In a thermal comfort field study, building scientists usually ask occupants' thermal sensation votes, which range from -3 (feeling cold) to 0 (neutral) to +3 (feeling hot), and measure occupants' surrounding temperature data. A neutral or comfort temperature can be calculated based on a linear regression between the thermal sensation vote and indoor temperature, and setting the thermal sensation vote as zero. However, there has been a debate on the regression direction: regressing thermal sensation votes (y-axis) against indoor temperature (x-axis) or the opposite: regressing indoor temperature (y-axis) against thermal sensation votes (x-axis).
=== Machine learning ===
Linear regression plays an important role in the subfield of artificial intelligence known as machine learning. The linear regression algorithm is one of the fundamental supervised machine-learning algorithms due to its relative simplicity and well-known properties.
== History ==
Isaac Newton is credited with inventing "a certain technique known today as linear regression analysis" in his work on equinoxes in 1700, and wrote down the first of the two normal equations of the ordinary least squares method. Least squares linear regression, as a means of finding a good rough linear fit to a set of points, was performed by Legendre (1805) and Gauss (1809) for the prediction of planetary movement. Quetelet was responsible for making the procedure well-known and for using it extensively in the social sciences.
== See also ==
== References ==
=== Citations ===
=== Sources ===
== Further reading ==
Pedhazur, Elazar J (1982). Multiple regression in behavioral research: Explanation and prediction (2nd ed.). New York: Holt, Rinehart and Winston. ISBN 978-0-03-041760-3.
Mathieu Rouaud, 2013: Probability, Statistics and Estimation Chapter 2: Linear Regression, Linear Regression with Error Bars and Nonlinear Regression.
National Physical Laboratory (1961). "Chapter 1: Linear Equations and Matrices: Direct Methods". Modern Computing Methods. Notes on Applied Science. Vol. 16 (2nd ed.). Her Majesty's Stationery Office.
== External links ==
Least-Squares Regression, PhET Interactive simulations, University of Colorado at Boulder
DIY Linear Fit
In statistics, a latent class model (LCM) is a model for clustering multivariate discrete data. It assumes that the data arise from a mixture of discrete distributions, within each of which the variables are independent. It is called a latent class model because the class to which each data point belongs is unobserved, or latent.
Latent class analysis (LCA) is a subset of structural equation modeling, used to find groups or subtypes of cases in multivariate categorical data. These subtypes are called "latent classes".
Confronted with a situation such as the following, a researcher might choose to use LCA to understand the data: imagine that symptoms a–d have been measured in a range of patients with diseases X, Y, and Z, and that disease X is associated with the presence of symptoms a, b, and c, disease Y with symptoms b, c, d, and disease Z with symptoms a, c and d.
The LCA will attempt to detect the presence of latent classes (the disease entities), creating patterns of association in the symptoms. As in factor analysis, the LCA can also be used to classify cases according to their maximum likelihood class membership.
Because the criterion for solving the LCA is to achieve latent classes within which there is no longer any association of one symptom with another (because the class is the disease which causes their association), and the set of diseases a patient has (or class a case is a member of) causes the symptom association, the symptoms will be "conditionally independent", i.e., conditional on class membership, they are no longer related.
== Model ==
Within each latent class, the observed variables are statistically independent. This is an important aspect. Usually the observed variables are statistically dependent. By introducing the latent variable, independence is restored in the sense that within classes variables are independent (local independence). We then say that the association between the observed variables is explained by the classes of the latent variable (McCutcheon, 1987).
In one form, the latent class model is written as
{\displaystyle p_{i_{1},i_{2},\ldots ,i_{N}}\approx \sum _{t}^{T}p_{t}\,\prod _{n}^{N}p_{i_{n},t}^{n},}
where {\displaystyle T} is the number of latent classes and {\displaystyle p_{t}} are the so-called recruitment or unconditional probabilities, which should sum to one. {\displaystyle p_{i_{n},t}^{n}} are the marginal or conditional probabilities.
For a two-way latent class model, the form is
{\displaystyle p_{ij}\approx \sum _{t}^{T}p_{t}\,p_{it}\,p_{jt}.}
This two-way model is related to probabilistic latent semantic analysis and non-negative matrix factorization.
The probability model used in LCA is closely related to the Naive Bayes classifier. The main difference is that in LCA, the class membership of an individual is a latent variable, whereas in Naive Bayes classifiers the class membership is an observed label.
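The model above can be fit with the EM algorithm. Below is a minimal sketch for binary indicators, where each class is a product of independent Bernoulli distributions; the data, class count, and variable names are all illustrative assumptions, not part of any particular LCA software:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate two latent classes with four binary symptoms (illustrative).
true_p = np.array([[0.9, 0.9, 0.8, 0.1],      # class 0: symptoms a, b, c likely
                   [0.1, 0.2, 0.8, 0.9]])     # class 1: symptoms c, d likely
z = rng.integers(0, 2, size=500)              # latent class memberships (unobserved)
X = (rng.random((500, 4)) < true_p[z]).astype(float)

# EM for the latent class model: classes t with recruitment probabilities w[t]
# and per-class conditional probabilities p[t, n] for each indicator n.
T = 2
w = np.full(T, 1.0 / T)
p = rng.uniform(0.3, 0.7, size=(T, 4))
for _ in range(200):
    # E-step: posterior probability of each class given each response pattern.
    like = np.prod(p[None] ** X[:, None] * (1 - p[None]) ** (1 - X[:, None]), axis=2)
    post = like * w
    post /= post.sum(axis=1, keepdims=True)
    # M-step: re-estimate recruitment and conditional probabilities.
    w = post.mean(axis=0)
    p = (post.T @ X) / post.sum(axis=0)[:, None]
```

After convergence, `w` estimates the unconditional class probabilities and `p` the conditional probabilities, up to an arbitrary relabeling of the classes.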
== Related methods ==
There are a number of methods with distinct names and uses that share a common relationship. Cluster analysis is, like LCA, used to discover taxon-like groups of cases in data. Multivariate mixture estimation (MME) is applicable to continuous data, and assumes that such data arise from a mixture of distributions: imagine a set of heights arising from a mixture of men and women. If a multivariate mixture estimation is constrained so that measures must be uncorrelated within each distribution, it is termed latent profile analysis. Modified to handle discrete data, this constrained analysis is known as LCA. Discrete latent trait models further constrain the classes to form from segments of a single dimension: essentially allocating members to classes on that dimension: an example would be assigning cases to social classes on a dimension of ability or merit.
As a practical instance, the variables could be multiple choice items of a political questionnaire. The data in this case consists of a N-way contingency table with answers to the items for a number of respondents. In this example, the latent variable refers to political opinion and the latent classes to political groups. Given group membership, the conditional probabilities specify the chance certain answers are chosen.
== Application ==
LCA may be used in many fields, such as: collaborative filtering, Behavior Genetics and Evaluation of diagnostic tests.
== References ==
Linda M. Collins; Stephanie T. Lanza (2010). Latent class and latent transition analysis for the social, behavioral, and health sciences. New York: Wiley. ISBN 978-0-470-22839-5.
Allan L. McCutcheon (1987). Latent class analysis. Quantitative Applications in the Social Sciences Series No. 64. Thousand Oaks, California: SAGE Publications. ISBN 978-0-521-59451-6.
Leo A. Goodman (1974). "Exploratory latent structure analysis using both identifiable and unidentifiable models". Biometrika. 61 (2): 215–231. doi:10.1093/biomet/61.2.215.
Paul F. Lazarsfeld, Neil W. Henry (1968). Latent Structure Analysis.
== External links ==
Statistical Innovations, Home Page, 2016. Website with latent class software (Latent GOLD 5.1), free demonstrations, tutorials, user guides, and publications for download. Also included: online courses, FAQs, and other related software.
The Methodology Center, Latent Class Analysis, a research center at Penn State, free software, FAQ
John Uebersax, Latent Class Analysis, 2006. A web-site with bibliography, software, links and FAQ for latent class analysis
In statistics, a fixed effects model is a statistical model in which the model parameters are fixed or non-random quantities. This is in contrast to random effects models and mixed models in which all or some of the model parameters are random variables. In many applications including econometrics and biostatistics a fixed effects model refers to a regression model in which the group means are fixed (non-random) as opposed to a random effects model in which the group means are a random sample from a population. Generally, data can be grouped according to several observed factors. The group means could be modeled as fixed or random effects for each grouping. In a fixed effects model each group mean is a group-specific fixed quantity.
In panel data where longitudinal observations exist for the same subject, fixed effects represent the subject-specific means. In panel data analysis the term fixed effects estimator (also known as the within estimator) is used to refer to an estimator for the coefficients in the regression model including those fixed effects (one time-invariant intercept for each subject).
== Qualitative description ==
Such models assist in controlling for omitted variable bias due to unobserved heterogeneity when this heterogeneity is constant over time. This heterogeneity can be removed from the data through differencing, for example by subtracting the group-level average over time, or by taking a first difference which will remove any time invariant components of the model.
There are two common assumptions made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual-specific effects are uncorrelated with the independent variables. The fixed effect assumption is that the individual-specific effects are correlated with the independent variables. If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator. However, if this assumption does not hold, the random effects estimator is not consistent. The Durbin–Wu–Hausman test is often used to discriminate between the fixed and the random effects models.
== Formal model and assumptions ==
Consider the linear unobserved effects model for {\displaystyle N} observations and {\displaystyle T} time periods:
{\displaystyle y_{it}=X_{it}\mathbf {\beta } +\alpha _{i}+u_{it}}
for {\displaystyle t=1,\dots ,T} and {\displaystyle i=1,\dots ,N}
Where:
{\displaystyle y_{it}} is the dependent variable observed for individual {\displaystyle i} at time {\displaystyle t}.
{\displaystyle X_{it}} is the time-variant {\displaystyle 1\times k} (the number of independent variables) regressor vector.
{\displaystyle \beta } is the {\displaystyle k\times 1} vector of parameters.
{\displaystyle \alpha _{i}} is the unobserved time-invariant individual effect, for example the innate ability for individuals or historical and institutional factors for countries.
{\displaystyle u_{it}} is the error term.
Unlike {\displaystyle X_{it}}, {\displaystyle \alpha _{i}} cannot be directly observed.
Unlike the random effects model, where the unobserved {\displaystyle \alpha _{i}} is independent of {\displaystyle X_{it}} for all {\displaystyle t=1,...,T}, the fixed effects (FE) model allows {\displaystyle \alpha _{i}} to be correlated with the regressor matrix {\displaystyle X_{it}}. Strict exogeneity with respect to the idiosyncratic error term {\displaystyle u_{it}} is still required.
== Statistical estimation ==
=== Fixed effects estimator ===
Since {\displaystyle \alpha _{i}} is not observable, it cannot be directly controlled for. The FE model eliminates {\displaystyle \alpha _{i}} by de-meaning the variables using the within transformation:
{\displaystyle y_{it}-{\overline {y}}_{i}=\left(X_{it}-{\overline {X}}_{i}\right)\beta +\left(\alpha _{i}-{\overline {\alpha }}_{i}\right)+\left(u_{it}-{\overline {u}}_{i}\right)\implies {\ddot {y}}_{it}={\ddot {X}}_{it}\beta +{\ddot {u}}_{it}}
where {\displaystyle {\overline {y}}_{i}={\frac {1}{T}}\sum \limits _{t=1}^{T}y_{it}}, {\displaystyle {\overline {X}}_{i}={\frac {1}{T}}\sum \limits _{t=1}^{T}X_{it}}, and {\displaystyle {\overline {u}}_{i}={\frac {1}{T}}\sum \limits _{t=1}^{T}u_{it}}.
Since {\displaystyle \alpha _{i}} is constant, {\displaystyle {\overline {\alpha _{i}}}=\alpha _{i}} and hence the effect is eliminated. The FE estimator {\displaystyle {\hat {\beta }}_{FE}} is then obtained by an OLS regression of {\displaystyle {\ddot {y}}} on {\displaystyle {\ddot {X}}}.
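The within transformation is easy to sketch in NumPy. Below, simulated data are constructed so that the individual effect is correlated with the regressors (the case where pooled OLS is biased but the FE estimator is not); all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
N, T, k = 50, 6, 2

# Simulate a panel where the individual effect alpha_i is correlated with X.
alpha = rng.normal(size=N)
X = rng.normal(size=(N, T, k)) + alpha[:, None, None]   # correlated with alpha_i
beta = np.array([1.5, -0.5])
y = X @ beta + alpha[:, None] + 0.1 * rng.normal(size=(N, T))

# Within transformation: subtract each individual's time average.
X_dd = X - X.mean(axis=1, keepdims=True)
y_dd = y - y.mean(axis=1, keepdims=True)

# Pooled OLS on the demeaned data gives the fixed effects estimator.
Xm = X_dd.reshape(N * T, k)
ym = y_dd.reshape(N * T)
beta_fe = np.linalg.solve(Xm.T @ Xm, Xm.T @ ym)
print(beta_fe)    # close to [1.5, -0.5]
```

Demeaning removes `alpha` exactly, so the regression on the transformed data recovers the slope coefficients even though the effect is correlated with the regressors.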
At least three alternatives to the within transformation exist with variations:
One is to add a dummy variable for each individual {\displaystyle i>1} (omitting the first individual because of multicollinearity). This is numerically, but not computationally, equivalent to the fixed effect model and only works if the sum of the number of series and the number of global parameters is smaller than the number of observations. The dummy variable approach is particularly demanding with respect to computer memory usage and is not recommended for problems larger than what the available RAM and the program's compilation can accommodate.
A second alternative is to use a consecutive reiterations approach to local and global estimations. This approach is very suitable for low memory systems, on which it is much more computationally efficient than the dummy variable approach.
The third approach is a nested estimation whereby the local estimation for individual series is programmed in as a part of the model definition. This approach is the most computationally and memory efficient, but it requires proficient programming skills and access to the model programming code, although it can be programmed, including in SAS.
Finally, each of the above alternatives can be improved if the series-specific estimation is linear (within a nonlinear model), in which case the direct linear solution for individual series can be programmed in as part of the nonlinear model definition.
=== First difference estimator ===
An alternative to the within transformation is the first difference transformation, which produces a different estimator. For {\displaystyle t=2,\dots ,T}:
{\displaystyle y_{it}-y_{i,t-1}=\left(X_{it}-X_{i,t-1}\right)\beta +\left(\alpha _{i}-\alpha _{i}\right)+\left(u_{it}-u_{i,t-1}\right)\implies \Delta y_{it}=\Delta X_{it}\beta +\Delta u_{it}.}
The FD estimator {\displaystyle {\hat {\beta }}_{FD}} is then obtained by an OLS regression of {\displaystyle \Delta y_{it}} on {\displaystyle \Delta X_{it}}.
When {\displaystyle T=2}, the first difference and fixed effects estimators are numerically equivalent. For {\displaystyle T>2}, they are not. If the error terms {\displaystyle u_{it}} are homoskedastic with no serial correlation, the fixed effects estimator is more efficient than the first difference estimator. If {\displaystyle u_{it}} follows a random walk, however, the first difference estimator is more efficient.
==== Equality of fixed effects and first difference estimators when T=2 ====
For the special two period case ({\displaystyle T=2}), the fixed effects (FE) estimator and the first difference (FD) estimator are numerically equivalent. This is because the FE estimator effectively "doubles the data set" used in the FD estimator. To see this, establish that the fixed effects estimator is:
{\displaystyle {FE}_{T=2}=\left[\sum _{i=1}^{N}(x_{i1}-{\bar {x}}_{i})(x_{i1}-{\bar {x}}_{i})'+(x_{i2}-{\bar {x}}_{i})(x_{i2}-{\bar {x}}_{i})'\right]^{-1}\left[\sum _{i=1}^{N}(x_{i1}-{\bar {x}}_{i})(y_{i1}-{\bar {y}}_{i})+(x_{i2}-{\bar {x}}_{i})(y_{i2}-{\bar {y}}_{i})\right]}
Since each {\displaystyle (x_{i1}-{\bar {x}}_{i})} can be re-written as {\displaystyle (x_{i1}-{\dfrac {x_{i1}+x_{i2}}{2}})={\dfrac {x_{i1}-x_{i2}}{2}}}, we can re-write the line as:
{\displaystyle {FE}_{T=2}=\left[\sum _{i=1}^{N}{\dfrac {x_{i1}-x_{i2}}{2}}{\dfrac {x_{i1}-x_{i2}}{2}}'+{\dfrac {x_{i2}-x_{i1}}{2}}{\dfrac {x_{i2}-x_{i1}}{2}}'\right]^{-1}\left[\sum _{i=1}^{N}{\dfrac {x_{i1}-x_{i2}}{2}}{\dfrac {y_{i1}-y_{i2}}{2}}+{\dfrac {x_{i2}-x_{i1}}{2}}{\dfrac {y_{i2}-y_{i1}}{2}}\right]}
{\displaystyle =\left[\sum _{i=1}^{N}2{\dfrac {x_{i2}-x_{i1}}{2}}{\dfrac {x_{i2}-x_{i1}}{2}}'\right]^{-1}\left[\sum _{i=1}^{N}2{\dfrac {x_{i2}-x_{i1}}{2}}{\dfrac {y_{i2}-y_{i1}}{2}}\right]}
=
2
[
∑
i
=
1
N
(
x
i
2
−
x
i
1
)
(
x
i
2
−
x
i
1
)
′
]
−
1
[
∑
i
=
1
N
1
2
(
x
i
2
−
x
i
1
)
(
y
i
2
−
y
i
1
)
]
{\displaystyle =2\left[\sum _{i=1}^{N}(x_{i2}-x_{i1})(x_{i2}-x_{i1})'\right]^{-1}\left[\sum _{i=1}^{N}{\frac {1}{2}}(x_{i2}-x_{i1})(y_{i2}-y_{i1})\right]}
=
[
∑
i
=
1
N
(
x
i
2
−
x
i
1
)
(
x
i
2
−
x
i
1
)
′
]
−
1
∑
i
=
1
N
(
x
i
2
−
x
i
1
)
(
y
i
2
−
y
i
1
)
=
F
D
T
=
2
{\displaystyle =\left[\sum _{i=1}^{N}(x_{i2}-x_{i1})(x_{i2}-x_{i1})'\right]^{-1}\sum _{i=1}^{N}(x_{i2}-x_{i1})(y_{i2}-y_{i1})={FD}_{T=2}}
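This numerical equivalence is easy to verify directly. The sketch below simulates a two-period panel (all variable names and the data-generating process are illustrative) and checks that the within (demeaned) estimator coincides with the first-difference estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 200, 3  # N individuals, K regressors, T = 2 periods

# Simulated panel with individual effects alpha_i (hypothetical data).
alpha = rng.normal(size=N)
x1 = rng.normal(size=(N, K))
x2 = rng.normal(size=(N, K))
beta = np.array([1.0, -2.0, 0.5])
y1 = x1 @ beta + alpha + rng.normal(size=N)
y2 = x2 @ beta + alpha + rng.normal(size=N)

# Fixed effects: demean within each individual (mean over the two periods).
xbar, ybar = (x1 + x2) / 2, (y1 + y2) / 2
Xdm = np.vstack([x1 - xbar, x2 - xbar])
ydm = np.concatenate([y1 - ybar, y2 - ybar])
fe = np.linalg.solve(Xdm.T @ Xdm, Xdm.T @ ydm)

# First differences.
dX, dy = x2 - x1, y2 - y1
fd = np.linalg.solve(dX.T @ dX, dX.T @ dy)

assert np.allclose(fe, fd)  # numerically identical when T = 2
```

The demeaned regressors equal $\pm(x_{i2}-x_{i1})/2$, so the two normal equations differ only by a factor that cancels, exactly as in the derivation above.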
=== Chamberlain method ===
Gary Chamberlain's method, a generalization of the within estimator, replaces $\alpha_{i}$ with its linear projection onto the explanatory variables. Writing the linear projection as:
$$\alpha_{i}=\lambda_{0}+X_{i1}\lambda_{1}+X_{i2}\lambda_{2}+\dots+X_{iT}\lambda_{T}+e_{i}$$
this results in the following equation:
$$y_{it}=\lambda_{0}+X_{i1}\lambda_{1}+X_{i2}\lambda_{2}+\dots+X_{it}(\lambda_{t}+\beta)+\dots+X_{iT}\lambda_{T}+e_{i}+u_{it}$$
which can be estimated by minimum distance estimation.
=== Hausman–Taylor method ===
The Hausman–Taylor method requires more than one time-variant regressor ($X$) and time-invariant regressor ($Z$), and at least one $X$ and one $Z$ that are uncorrelated with $\alpha_{i}$.
Partition the $X$ and $Z$ variables such that
$$X=[\underset{TN\times K1}{X_{1it}}\ \vdots\ \underset{TN\times K2}{X_{2it}}]\qquad Z=[\underset{TN\times G1}{Z_{1it}}\ \vdots\ \underset{TN\times G2}{Z_{2it}}]$$
where $X_{1}$ and $Z_{1}$ are uncorrelated with $\alpha_{i}$. We need $K1 > G2$.
Estimating $\gamma$ via OLS on $\widehat{di}=Z_{i}\gamma+\varphi_{it}$ using $X_{1}$ and $Z_{1}$ as instruments yields a consistent estimate.
=== Generalization with input uncertainty ===
When there is input uncertainty for the $y$ data, $\delta y$, then the $\chi^{2}$ value, rather than the sum of squared residuals, should be minimized. This can be directly achieved from substitution rules:
$$\frac{y_{it}}{\delta y_{it}}=\beta\,\frac{X_{it}}{\delta y_{it}}+\alpha_{i}\,\frac{1}{\delta y_{it}}+\frac{u_{it}}{\delta y_{it}},$$
then the values and standard deviations for $\beta$ and $\alpha_{i}$ can be determined via classical ordinary least squares analysis and the variance-covariance matrix.
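The substitution amounts to dividing each row of the regression by its known uncertainty and then running ordinary least squares. A minimal sketch (simulated data; for clarity the individual effect is reduced to a single intercept column rather than a full set of dummies):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one regressor
beta_true = np.array([2.0, 0.7])
dy = rng.uniform(0.5, 2.0, size=n)            # known input uncertainties delta_y
y = X @ beta_true + rng.normal(size=n) * dy   # noise scales with delta_y

# Divide every term of the model by delta_y, then run plain OLS:
Xw = X / dy[:, None]
yw = y / dy
beta_hat = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)

# Standard deviations from the variance-covariance matrix of the
# weighted normal equations.
cov = np.linalg.inv(Xw.T @ Xw)
std = np.sqrt(np.diag(cov))
```

Minimizing the resulting sum of squares is exactly minimizing $\chi^2=\sum_{it}\left((y_{it}-\hat{y}_{it})/\delta y_{it}\right)^2$.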
== Use to test for consistency ==
Random effects estimators may be inconsistent sometimes in the long time series limit, if the random effects are misspecified (i.e. the model chosen for the random effects is incorrect). However, the fixed effects model may still be consistent in some situations. For example, if the time series being modeled is not stationary, random effects models assuming stationarity may not be consistent in the long-series limit. One example of this is if the time series has an upward trend. Then, as the series becomes longer, the model revises estimates for the mean of earlier periods upwards, giving increasingly biased predictions of coefficients. However, a model with fixed time effects does not pool information across time, and as a result earlier estimates will not be affected.
In situations like these where the fixed effects model is known to be consistent, the Durbin–Wu–Hausman test can be used to test whether the random effects model chosen is consistent. If $H_{0}$ is true, both $\widehat{\beta}_{RE}$ and $\widehat{\beta}_{FE}$ are consistent, but only $\widehat{\beta}_{RE}$ is efficient. If $H_{a}$ is true, the consistency of $\widehat{\beta}_{RE}$ cannot be guaranteed.
== See also ==
Random effects model
Mixed model
Dynamic unobserved effects model
Fixed-effect Poisson model
Panel analysis
First-difference estimator
== Notes ==
== References ==
Christensen, Ronald (2002). Plane Answers to Complex Questions: The Theory of Linear Models (Third ed.). New York: Springer. ISBN 0-387-95361-2.
Gujarati, Damodar N.; Porter, Dawn C. (2009). "Panel Data Regression Models". Basic Econometrics (Fifth international ed.). Boston: McGraw-Hill. pp. 591–616. ISBN 978-007-127625-2.
Hsiao, Cheng (2003). "Fixed-effects models". Analysis of Panel Data (2nd ed.). New York: Cambridge University Press. pp. 95–103. ISBN 0-521-52271-4.
Wooldridge, Jeffrey M. (2013). "Fixed Effects Estimation". Introductory Econometrics: A Modern Approach (Fifth international ed.). Mason, OH: South-Western. pp. 466–474. ISBN 978-1-111-53439-4.
== External links ==
Fixed and random effects models
Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R | Wikipedia/Fixed_effects_model |
In statistics, the class of vector generalized linear models (VGLMs) was proposed to
enlarge the scope of models catered for by generalized linear models (GLMs).
In particular, VGLMs allow for response variables outside the classical exponential family
and for more than one parameter. Each parameter (not necessarily a mean) can be transformed by a link function.
The VGLM framework is also large enough to naturally accommodate multiple responses; these are
several independent responses each coming from a particular statistical distribution with
possibly different parameter values.
Vector generalized linear models are described in detail in Yee (2015).
The central algorithm adopted is the iteratively reweighted least squares method,
for maximum likelihood estimation of usually all the model parameters. In particular,
it implements Fisher scoring, which, for most models,
uses the first and expected second derivatives of the log-likelihood function.
== Motivation ==
GLMs essentially cover one-parameter models from the classical exponential family,
and include 3 of the most important statistical regression models:
the linear model, Poisson regression for counts, and logistic regression
for binary responses.
However, the exponential family is far too limiting for regular data analysis.
For example, for counts, zero-inflation, zero-truncation and overdispersion are regularly
encountered, and the makeshift adaptations made to the binomial and
Poisson models in the form of quasi-binomial and
quasi-Poisson can be argued as being ad hoc and unsatisfactory.
But the VGLM framework readily handles models such as
zero-inflated Poisson regression,
zero-altered Poisson (hurdle) regression,
positive-Poisson regression, and
negative binomial regression.
As another example, for the linear model, the variance of a normal distribution is relegated to being a scale parameter, and it is often treated as a nuisance parameter (if it is considered as a parameter at all).
But the VGLM framework allows the variance to be modelled using covariates.
As a whole, one can loosely think of VGLMs as GLMs that handle many models
outside the classical exponential family and are not restricted to estimating
a single mean.
During estimation,
rather than using weighted least squares
during IRLS, one uses generalized least squares to handle the
correlation between the M linear predictors.
== Data and notation ==
We suppose that the response or outcome or dependent variable(s), $\boldsymbol{y}=(y_{1},\ldots,y_{Q_{1}})^{T}$, are generated from a particular distribution. Most distributions are univariate, so that $Q_{1}=1$; an example of $Q_{1}=2$ is the bivariate normal distribution.
Sometimes we write our data as $(\boldsymbol{x}_{i},w_{i},\boldsymbol{y}_{i})$ for $i=1,\ldots,n$. Each of the n observations is considered to be independent. Then $\boldsymbol{y}_{i}=(y_{i1},\ldots,y_{iQ_{1}})^{T}$. The $w_{i}$ are known positive prior weights, and often $w_{i}=1$.
The explanatory or independent variables are written $\boldsymbol{x}=(x_{1},\ldots,x_{p})^{T}$, or when i is needed, as $\boldsymbol{x}_{i}=(x_{i1},\ldots,x_{ip})^{T}$. Usually there is an intercept, in which case $x_{1}=1$ or $x_{i1}=1$.
Actually, the VGLM framework allows for S responses, each of dimension $Q_{1}$. In the above S = 1. Hence the dimension of $\boldsymbol{y}_{i}$ is more generally $Q=S\times Q_{1}$. One handles S responses by code such as vglm(cbind(y1, y2, y3) ~ x2 + x3, ..., data = mydata) for S = 3. To simplify things, most of this article has S = 1.
== Model components ==
The VGLM usually consists of four elements:
1. A probability density function or probability mass function from some statistical distribution which has a log-likelihood $\ell$, first derivatives $\partial\ell/\partial\theta_{j}$ and expected information matrix that can be computed. The model is required to satisfy the usual MLE regularity conditions.
2. Linear predictors $\eta_{j}$, described below, to model each parameter $\theta_{j}$, $j=1,\ldots,M$.
3. Link functions $g_{j}$ such that $\theta_{j}=g_{j}^{-1}(\eta_{j})$.
4. Constraint matrices $\boldsymbol{H}_{k}$ for $k=1,\ldots,p$, each of full column-rank and known.
=== Linear predictors ===
Each linear predictor is a quantity which incorporates
information about the independent variables into the model.
The symbol $\eta_{j}$ (Greek "eta") denotes a linear predictor, and the subscript j denotes the jth one. It relates the jth parameter to the explanatory variables, and $\eta_{j}$ is expressed as a linear combination (thus, "linear") of unknown parameters $\boldsymbol{\beta}_{j}$, i.e., of regression coefficients $\beta_{(j)k}$.
The jth parameter, $\theta_{j}$, of the distribution depends on the independent variables, $\boldsymbol{x}$, through
$$g_{j}(\theta_{j})=\eta_{j}=\boldsymbol{\beta}_{j}^{T}\boldsymbol{x}.$$
Let $\boldsymbol{\eta}=(\eta_{1},\ldots,\eta_{M})^{T}$ be the vector of all the linear predictors. (For convenience we always let $\boldsymbol{\eta}$ be of dimension M.) Thus all the covariates comprising $\boldsymbol{x}$ potentially affect all the parameters through the linear predictors $\eta_{j}$. Later, we will allow the linear predictors to be generalized to additive predictors, in which each $\eta_{j}$ is a sum of smooth functions of each $x_{k}$, and each function is estimated from the data.
=== Link functions ===
Each link function provides the relationship between a linear predictor and a
parameter of the distribution.
There are many commonly used link functions, and their choice can be somewhat arbitrary. It makes sense to try to match the domain of the link function to
the range of the distribution's parameter value.
Notice above that the $g_{j}$ allows a different link function for each parameter. They have similar properties as with generalized linear models; for example, common link functions include the logit link for parameters in $(0,1)$, and the log link for positive parameters. The VGAM package has the function identitylink() for parameters that can assume both positive and negative values.
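The matching of link to parameter range can be illustrated with the two most common links. Each is a bijection between the parameter space and the whole real line, so any real-valued linear predictor maps back to a valid parameter (a minimal sketch, independent of any particular package):

```python
import math

# Logit link: maps a parameter in (0, 1) to the whole real line.
def logit(theta):
    return math.log(theta / (1 - theta))

def logit_inv(eta):
    return 1 / (1 + math.exp(-eta))

# Log link: maps a positive parameter to the whole real line.
def log_link(theta):
    return math.log(theta)

def log_inv(eta):
    return math.exp(eta)

# Round trips recover the original parameter value.
assert abs(logit_inv(logit(0.3)) - 0.3) < 1e-12
assert abs(log_inv(log_link(2.5)) - 2.5) < 1e-12
```

For a parameter on the whole real line, no transformation is needed, which is the role of identitylink() mentioned above.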
=== Constraint matrices ===
More generally, the VGLM framework allows for any linear constraints between the regression coefficients $\beta_{(j)k}$ of each linear predictor. For example, we may want to set some of them equal to 0, or constrain some of them to be equal. We have
$$\boldsymbol{\eta}=\sum_{k=1}^{p}\,\boldsymbol{\beta}_{(k)}\,x_{k}=\sum_{k=1}^{p}\,\boldsymbol{H}_{k}\,\boldsymbol{\beta}_{(k)}^{*}\,x_{k}$$
where the $\boldsymbol{H}_{k}$ are the constraint matrices.
Each constraint matrix is known and prespecified, and has M rows, and between 1 and M columns. The elements of constraint matrices are finite-valued, and often they are just 0 or 1.
For example, the value 0 effectively omits that element while a 1 includes it.
It is common for some models to have a parallelism assumption, which means that $\boldsymbol{H}_{k}=\boldsymbol{1}_{M}$ for $k=2,\ldots,p$, and for some models, for $k=1$ too.
The special case when $\boldsymbol{H}_{k}=\boldsymbol{I}_{M}$ for all $k=1,\ldots,p$ is known as trivial constraints; all the regression coefficients are estimated and are unrelated.
And $\theta_{j}$ is known as an intercept-only parameter if the jth row of all the $\boldsymbol{H}_{k}$ is equal to $\boldsymbol{0}^{T}$ for $k=2,\ldots,p$, i.e., $\eta_{j}=\beta_{(j)1}^{*}$ equals an intercept only. Intercept-only parameters are thus modelled as simply as possible, as a scalar.
The unknown parameters, $\boldsymbol{\beta}^{*}=(\boldsymbol{\beta}_{(1)}^{*T},\ldots,\boldsymbol{\beta}_{(p)}^{*T})^{T}$, are typically estimated by the method of maximum likelihood.
All the regression coefficients may be put into a matrix as follows:
$$\boldsymbol{\eta}_{i}=\boldsymbol{B}^{T}\boldsymbol{x}_{i}=\begin{pmatrix}\boldsymbol{\beta}_{1}^{T}\,\boldsymbol{x}_{i}\\\vdots\\\boldsymbol{\beta}_{M}^{T}\,\boldsymbol{x}_{i}\end{pmatrix}=\left(\boldsymbol{\beta}_{(1)},\ldots,\boldsymbol{\beta}_{(p)}\right)\boldsymbol{x}_{i}.$$
=== The xij facility ===
Even more generally, one can allow the value of a variable $x_{k}$ to differ for each $\eta_{j}$.
For example, if each linear predictor is for a different time point then
one might have a time-varying covariate.
For example,
in discrete choice models, one has
conditional logit models,
nested logit models,
generalized logit models,
and the like, to distinguish between certain variants and
fit a multinomial logit model to, e.g., transport choices.
A variable such as cost differs depending on the choice, for example,
taxi is more expensive than bus, which is more expensive than walking.
The xij facility of VGAM allows one to generalize $\eta_{j}(\boldsymbol{x}_{i})$ to $\eta_{j}(\boldsymbol{x}_{ij})$.
The most general formula is
$$\boldsymbol{\eta}_{i}=\boldsymbol{o}_{i}+\sum_{k=1}^{p}\,\operatorname{diag}(x_{ik1},\ldots,x_{ikM})\,\boldsymbol{H}_{k}\,\boldsymbol{\beta}_{(k)}^{*}.$$
Here, $\boldsymbol{o}_{i}$ is an optional offset; in practice the offsets form an $n\times M$ matrix.
The VGAM package has an xij argument that allows
the successive elements of the diagonal matrix to be inputted.
== Software ==
Yee (2015) describes an implementation in the R package called VGAM.
Currently this software fits approximately 150 models/distributions.
The central modelling functions are vglm() and vgam().
The family argument is assigned a VGAM family function,
e.g., family = negbinomial for negative binomial regression,
family = poissonff for Poisson regression, and
family = propodds for the proportional odds model or
cumulative logit model for ordinal categorical regression.
== Fitting ==
=== Maximum likelihood ===
We are maximizing a log-likelihood
$$\ell=\sum_{i=1}^{n}\,w_{i}\,\ell_{i},$$
where the $w_{i}$ are positive and known prior weights.
The maximum likelihood estimates can be found
using an iteratively reweighted least squares algorithm using
Fisher's scoring method, with updates of the form:
$$\boldsymbol{\beta}^{(a+1)}=\boldsymbol{\beta}^{(a)}+\boldsymbol{\mathcal{I}}^{-1}(\boldsymbol{\beta}^{(a)})\,\mathbf{u}(\boldsymbol{\beta}^{(a)}),$$
where $\boldsymbol{\mathcal{I}}(\boldsymbol{\beta}^{(a)})$ is the Fisher information matrix at iteration a. It is also called the expected information matrix, or EIM.
=== VLM ===
For the computation, the (small) model matrix constructed
from the RHS of the formula in vglm()
and the constraint matrices are combined to form a big model matrix.
The IRLS is applied to this big X. This matrix is known as the VLM
matrix, since the vector linear model is the underlying least squares
problem being solved. A VLM is a weighted multivariate regression where the
variance-covariance matrix for each row of the response matrix is not
necessarily the same, and is known.
(In classical multivariate regression, all the errors have the
same variance-covariance matrix, and it is unknown).
In particular, the VLM minimizes the weighted sum of squares
$$\mathrm{ResSS}=\sum_{i=1}^{n}\;w_{i}\left\{\mathbf{z}_{i}^{(a-1)}-\boldsymbol{\eta}_{i}^{(a-1)}\right\}^{T}\mathbf{W}_{i}^{(a-1)}\left\{\mathbf{z}_{i}^{(a-1)}-\boldsymbol{\eta}_{i}^{(a-1)}\right\}$$
This quantity is minimized at each IRLS iteration.
The working responses (also known as pseudo-response and adjusted dependent vectors) are
$$\mathbf{z}_{i}=\boldsymbol{\eta}_{i}+\mathbf{W}_{i}^{-1}\mathbf{u}_{i},$$
where the
W
i
{\displaystyle \mathbf {W} _{i}}
are known as working weights or working weight matrices. They are symmetric and positive-definite. Using the EIM helps ensure that they are all positive-definite (and not just the sum of them) over much of the parameter space. In contrast, using Newton–Raphson would mean the observed information matrices would be used, and these tend to be positive-definite in a smaller subset of the parameter space.
Computationally, the Cholesky decomposition is used to invert the working weight matrices and to convert the overall generalized least squares problem into an ordinary least squares problem.
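The Cholesky trick is just whitening: writing a weight matrix as $\mathbf{W}=\mathbf{L}\mathbf{L}^T$ turns the quadratic form $(\mathbf{z}-\mathbf{X}\boldsymbol{\beta})^T\mathbf{W}(\mathbf{z}-\mathbf{X}\boldsymbol{\beta})$ into the ordinary residual sum of squares $\lVert\mathbf{L}^T\mathbf{z}-\mathbf{L}^T\mathbf{X}\boldsymbol{\beta}\rVert^2$. A minimal sketch on a single generic GLS problem (random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 6, 2
X = rng.normal(size=(n, p))
z = rng.normal(size=n)
A = rng.normal(size=(n, n))
W = A @ A.T + n * np.eye(n)   # symmetric positive-definite weight matrix

# GLS solution minimizing (z - X b)^T W (z - X b), via normal equations.
b_gls = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)

# Cholesky W = L L^T converts it to OLS on whitened data L^T X, L^T z.
L = np.linalg.cholesky(W)
b_ols = np.linalg.lstsq(L.T @ X, L.T @ z, rcond=None)[0]

assert np.allclose(b_gls, b_ols)
```

In the VLM the same idea is applied blockwise, one $\mathbf{W}_i$ per observation.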
== Examples ==
=== Generalized linear models ===
Of course, all generalized linear models are special cases of VGLMs. But we often estimate all parameters by full maximum likelihood estimation rather than using the method of moments for the scale parameter.
=== Ordered categorical response ===
If the response variable is an ordinal measurement with M + 1 levels, then one may fit a model function of the form:
$$g(\theta_{j})=\eta_{j}$$
where $\theta_{j}=\Pr(Y\leq j)$, for $j=1,\ldots,M$.
Different links g lead to proportional odds models or ordered probit models,
e.g., the VGAM family function cumulative(link = probit) assigns a probit link to the cumulative
probabilities, therefore this model is also called the cumulative probit model.
In general they are called cumulative link models.
For categorical and multinomial distributions, the fitted values are an (M + 1)-vector of probabilities, with the property that all probabilities add up to 1. Each probability indicates the likelihood of occurrence of one of the M + 1 possible values.
=== Unordered categorical response ===
If the response variable is a nominal measurement,
or the data do not satisfy the assumptions of an ordered model, then one may fit a model of the following form:
$$\log\left[\frac{\Pr(Y=j)}{\Pr(Y=M+1)}\right]=\eta_{j},$$
for $j=1,\ldots,M$.
The above link is sometimes called the multilogit link,
and the model is called the multinomial logit model.
It is common to choose the first or the last level of the response as the
reference or baseline group; the above uses the last level.
The VGAM family function multinomial() fits the above model, and it has an argument called refLevel that can be assigned the level used as the reference group.
=== Count data ===
Classical GLM theory performs Poisson regression for count data.
The link is typically the logarithm, which is known as the canonical link.
The variance function is proportional to the mean:
$$\operatorname{Var}(Y_{i})=\tau\,\mu_{i},$$
where the dispersion parameter $\tau$ is typically fixed at exactly one. When it is not, the resulting quasi-likelihood model is often described as Poisson with overdispersion, or quasi-Poisson; then $\tau$ is commonly estimated by the method of moments and, as such, confidence intervals for $\tau$ are difficult to obtain.
In contrast, VGLMs offer a much richer set of models to handle overdispersion with respect to the Poisson, e.g., the negative binomial distribution and several variants thereof. Another count regression model is the generalized Poisson distribution. Other possible models are the zeta distribution and the Zipf distribution.
== Extensions ==
=== Reduced-rank vector generalized linear models ===
RR-VGLMs are VGLMs where a subset of
the B matrix is of a lower rank.
Without loss of generality, suppose that $\boldsymbol{x}=(\boldsymbol{x}_{1}^{T},\boldsymbol{x}_{2}^{T})^{T}$ is a partition of the covariate vector. Then the part of the B matrix corresponding to $\boldsymbol{x}_{2}$ is of the form $\boldsymbol{A}\boldsymbol{C}^{T}$, where $\boldsymbol{A}$ and $\boldsymbol{C}$ are thin matrices (i.e., with R columns), e.g., vectors if the rank R = 1. RR-VGLMs potentially offer several advantages when applied to certain models and data sets. Firstly, if M and p are large then the number of regression coefficients that are estimated by VGLMs is large ($M\times p$). Then RR-VGLMs can reduce the number of estimated regression coefficients enormously if R is low, e.g., R = 1 or R = 2. An example of a model where this is particularly useful is the RR-multinomial logit model, also known as the stereotype model.
Secondly, $\boldsymbol{\nu}=\boldsymbol{C}^{T}\boldsymbol{x}_{2}=(\nu_{1},\ldots,\nu_{R})^{T}$ is an R-vector of latent variables, and often these can be usefully interpreted. If R = 1 then we can write $\nu=\boldsymbol{c}^{T}\boldsymbol{x}_{2}$, so that the latent variable comprises loadings on the explanatory variables.
It may be seen that RR-VGLMs take optimal linear combinations of the $\boldsymbol{x}_{2}$ and then a VGLM is fitted to the explanatory variables $(\boldsymbol{x}_{1},\boldsymbol{\nu})$. Thirdly, a biplot can be produced if R = 2, and this allows the model to be visualized.
It can be shown that RR-VGLMs are simply VGLMs where the constraint matrices for the variables in $\boldsymbol{x}_{2}$ are unknown and to be estimated. It then transpires that $\boldsymbol{H}_{k}=\boldsymbol{A}$ for such variables.
RR-VGLMs can be estimated by an alternating algorithm which fixes $\boldsymbol{A}$ and estimates $\boldsymbol{C}$, then fixes $\boldsymbol{C}$ and estimates $\boldsymbol{A}$, etc. In practice, some uniqueness constraints are needed for $\boldsymbol{A}$ and/or $\boldsymbol{C}$. In VGAM, the rrvglm() function uses corner constraints by default, which means that the top R rows of $\boldsymbol{A}$ are set to $\boldsymbol{I}_{R}$. RR-VGLMs were proposed in 2003.
==== Two to one ====
A special case of RR-VGLMs is when R = 1 and M = 2. This is dimension reduction from 2 parameters to 1 parameter. Then it can be shown that
$$\theta_{2}=g_{2}^{-1}\left(t_{1}+a_{21}\cdot g_{1}(\theta_{1})\right),$$
where the elements $t_{1}$ and $a_{21}$ are estimated. Equivalently,
$$\eta_{2}=t_{1}+a_{21}\cdot\eta_{1}.$$
This formula provides a coupling of $\eta_{1}$ and $\eta_{2}$. It induces a relationship between two parameters of a model that can be useful, e.g., for modelling a mean-variance relationship. Sometimes there is some choice of link functions, therefore it offers a little flexibility when coupling the two parameters, e.g., a logit, probit, cauchit or cloglog link for parameters in the unit interval. The above formula is particularly useful for the negative binomial distribution, so that the RR-NB has variance function
$$\operatorname{Var}(Y\mid\boldsymbol{x})=\mu(\boldsymbol{x})+\delta_{1}\,\mu(\boldsymbol{x})^{\delta_{2}}.$$
This has been called the NB-P variant by some authors. The $\delta_{1}$ and $\delta_{2}$ are estimated, and it is also possible to obtain approximate confidence intervals for them too.
Incidentally, several other useful NB variants can also be fitted, with the help of selecting the right combination of constraint matrices. For example, NB − 1, NB − 2 (negbinomial() default), NB − H; see Yee (2014) and Table 11.3 of Yee (2015).
==== RCIMs ====
The subclass of row-column interaction models
(RCIMs) has also been proposed; these are a special type of RR-VGLM.
RCIMs apply only to a matrix Y response and there are no explicit explanatory variables $\boldsymbol{x}$. Instead, indicator variables for each row and column are explicitly set up, and an order-R interaction of the form $\boldsymbol{A}\boldsymbol{C}^{T}$ is allowed.
Special cases of this type of model include the Goodman RC association model
and the quasi-variances methodology as implemented by the qvcalc R package.
RCIMs can be defined as a RR-VGLM applied to Y with
$$g_{1}(\theta_{1})\equiv\eta_{1ij}=\beta_{0}+\alpha_{i}+\gamma_{j}+\sum_{r=1}^{R}c_{ir}\,a_{jr}.$$
For the Goodman RC association model, we have $\eta_{1ij}=\log\mu_{ij}$, so that if R = 0 then it is a Poisson regression fitted to a matrix of counts with row effects and column effects; this has a similar idea to a no-interaction two-way ANOVA model.
Another example of a RCIM is if $g_{1}$ is the identity link and the parameter is the median, so that the model corresponds to an asymmetric Laplace distribution; then a no-interaction RCIM is similar to a technique called median polish.
In VGAM, rcim() and grc() functions fit the above models.
Also, Yee and Hadi (2014) show that RCIMs can be used to fit unconstrained quadratic ordination models to species data; this is an example of indirect gradient analysis in ordination (a topic in statistical ecology).
=== Vector generalized additive models ===
Vector generalized additive models (VGAMs) are a major extension to VGLMs in which the linear predictor $\eta_{j}$ is not restricted to be linear in the covariates $x_{k}$ but is the sum of smoothing functions applied to the $x_{k}$:
$$\boldsymbol{\eta}(\boldsymbol{x})=\boldsymbol{H}_{1}\,\boldsymbol{\beta}_{(1)}^{*}+\boldsymbol{H}_{2}\,\boldsymbol{f}_{(2)}^{*}(x_{2})+\boldsymbol{H}_{3}\,\boldsymbol{f}_{(3)}^{*}(x_{3})+\cdots$$
where
$$\boldsymbol{f}_{(k)}^{*}(x_{k})=(f_{(1)k}^{*}(x_{k}),f_{(2)k}^{*}(x_{k}),\ldots)^{T}.$$
These are M additive predictors. Each smooth function $f_{(j)k}^{*}$ is estimated from the data. Thus VGLMs are model-driven while VGAMs are data-driven.
Currently, only smoothing splines are implemented in the VGAM package.
For M > 1 they are actually vector splines, which estimate the component functions in $f_{(j)k}^{*}(x_{k})$ simultaneously. Of course, one could use regression splines with VGLMs.
The motivation behind VGAMs is similar to
that of
Hastie and Tibshirani (1990)
and
Wood (2017).
VGAMs were proposed in 1996. Currently, work is being done to estimate VGAMs using the P-splines of Eilers and Marx (1996). This allows for several advantages over using smoothing splines and vector backfitting, such as making automatic smoothing parameter selection easier.
=== Quadratic reduced-rank vector generalized linear models ===
These add a quadratic in the latent variable to the RR-VGLM class. The result is that a bell-shaped curve can be fitted to each response, as a function of the latent variable. For R = 2, one has bell-shaped surfaces as a function of the two latent variables, somewhat similar to a bivariate normal distribution.
Particular applications of QRR-VGLMs can be found in ecology,
in a field of multivariate analysis called ordination.
As a specific rank-1 example of a QRR-VGLM,
consider Poisson data with S species.
The model for Species s is the Poisson regression
$$\log\mu_{s}(\nu)=\eta_{s}(\nu)=\beta_{(s)1}+\beta_{(s)2}\,\nu+\beta_{(s)3}\,\nu^{2}=\alpha_{s}-\frac{1}{2}\left(\frac{\nu-u_{s}}{t_{s}}\right)^{2},$$
for
s
=
1
,
…
,
S
{\displaystyle s=1,\ldots ,S}
. The right-most parameterization which uses the symbols
α
s
,
{\displaystyle \alpha _{s},}
u
s
,
{\displaystyle u_{s},}
t
s
,
{\displaystyle t_{s},}
has particular ecological meaning, because they relate to the species abundance, optimum and tolerance respectively. For example, the tolerance is a measure of niche width, and a large value means that that species can live in a wide range of environments. In the above equation, one would need
{\displaystyle \beta _{(s)3}<0} in order to obtain a bell-shaped curve.
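The correspondence between the two parameterizations above follows from completing the square: u_s = -beta_(s)2 / (2 beta_(s)3), t_s = sqrt(-1 / (2 beta_(s)3)), and alpha_s = beta_(s)1 - beta_(s)2^2 / (4 beta_(s)3). A minimal sketch of this conversion (the helper name is ours, not part of VGAM):

```python
import math

def beta_to_ecological(b1, b2, b3):
    """Convert (beta_(s)1, beta_(s)2, beta_(s)3), with beta_(s)3 < 0,
    into the ecological parameters (alpha_s, u_s, t_s) by completing
    the square in the Poisson regression above."""
    if b3 >= 0:
        raise ValueError("beta_(s)3 must be negative for a bell-shaped curve")
    u = -b2 / (2.0 * b3)                # optimum
    t = math.sqrt(-1.0 / (2.0 * b3))    # tolerance (niche width)
    alpha = b1 - b2 ** 2 / (4.0 * b3)   # log of the maximum of the curve
    return alpha, u, t

# eta(nu) = 1 + 2*nu - 0.5*nu^2 peaks at nu = 2 with height 3
print(beta_to_ecological(1.0, 2.0, -0.5))  # (3.0, 2.0, 1.0)
```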
QRR-VGLMs fit Gaussian ordination models by maximum likelihood estimation, and
they are an example of direct gradient analysis.
The cqo() function in the VGAM package currently
calls optim() to search for the optimal
{\displaystyle {\boldsymbol {C}}}, and given that, it is easy to calculate
the site scores and fit a suitable generalized linear model to that.
The function is named after the acronym CQO, which stands for
constrained quadratic ordination: the constrained is for direct
gradient analysis (there are environmental variables, and a linear combination
of these is taken as the latent variable) and the quadratic is for the
quadratic form in the latent variables
{\displaystyle {\boldsymbol {\nu }}} on the {\displaystyle {\boldsymbol {\eta }}} scale.
Unfortunately QRR-VGLMs are sensitive to outliers in both the response
and explanatory variables, as well as being computationally expensive, and
may give a local solution rather than a global solution.
QRR-VGLMs were proposed in 2004.
== See also ==
Generalized linear model
R (software)
Regression analysis
Statistical model
Natural exponential family
== References ==
== Further reading ==
Hilbe, Joseph (2011). Negative Binomial Regression (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-19815-8.
Multiscale modeling or multiscale mathematics is the field of solving problems that have important features at multiple scales of time and/or space. Important problems include multiscale modeling of fluids, solids, polymers, proteins, nucleic acids as well as various physical and chemical phenomena (like adsorption, chemical reactions, diffusion).
An example of such problems involve the Navier–Stokes equations for incompressible fluid flow.
{\displaystyle {\begin{array}{lcl}\rho _{0}(\partial _{t}\mathbf {u} +(\mathbf {u} \cdot \nabla )\mathbf {u} )=\nabla \cdot \tau ,\\\nabla \cdot \mathbf {u} =0.\end{array}}}
In a wide variety of applications, the stress tensor {\displaystyle \tau } is given as a linear function of the gradient {\displaystyle \nabla u}. Such a choice for {\displaystyle \tau }
has been proven to be sufficient for describing the dynamics of a broad range of fluids. However, its use for more complex fluids such as polymers is dubious. In such a case, it may be necessary to use multiscale modeling to accurately model the system such that the stress tensor can be extracted without requiring the computational cost of a full microscale simulation.
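For a Newtonian fluid this linear closure is tau = -p I + mu (grad u + (grad u)^T). A minimal sketch of evaluating it at a point (the helper name and numbers are illustrative, not from any fluids library):

```python
def newtonian_stress(grad_u, p, mu):
    """Newtonian closure tau = -p*I + mu*(grad_u + grad_u^T), where
    grad_u[i][j] = du_i/dx_j; for an incompressible fluid the trace
    of grad_u is zero."""
    n = len(grad_u)
    return [[-p * (i == j) + mu * (grad_u[i][j] + grad_u[j][i])
             for j in range(n)] for i in range(n)]

# simple shear flow u = (gamma*y, 0) with shear rate gamma = 2:
tau = newtonian_stress([[0.0, 2.0], [0.0, 0.0]], p=1.0, mu=0.5)
print(tau)  # [[-1.0, 1.0], [1.0, -1.0]]
```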
== History ==
Horstemeyer 2009, 2012 presented a historical review of the different disciplines (mathematics, physics, and materials science) for solid materials related to multiscale materials modeling.
The aforementioned DOE multiscale modeling efforts were hierarchical in nature. The first concurrent multiscale model occurred when Michael Ortiz (Caltech) took the molecular dynamics code Dynamo, developed by Mike Baskes at Sandia National Labs, and with his students embedded it into a finite element code for the first time. Martin Karplus, Michael Levitt, and Arieh Warshel received the Nobel Prize in Chemistry in 2013 for the development of a multiscale model method using both classical and quantum mechanical theory which were used to model large complex chemical systems and reactions.
== Areas of research ==
In physics and chemistry, multiscale modeling is aimed at the calculation of material properties or system behavior on one level using information or models from different levels. On each level, particular approaches are used for the description of a system. The following levels are usually distinguished: level of quantum mechanical models (information about electrons is included), level of molecular dynamics models (information about individual atoms is included), coarse-grained models (information about atoms and/or groups of atoms is included), mesoscale or nano-level (information about large groups of atoms and/or molecule positions is included), level of continuum models, level of device models. Each level addresses a phenomenon over a specific window of length and time. Multiscale modeling is particularly important in integrated computational materials engineering since it allows the prediction of material properties or system behavior based on knowledge of the process-structure-property relationships.
In operations research, multiscale modeling addresses challenges for decision-makers that come from multiscale phenomena across organizational, temporal, and spatial scales. This theory fuses decision theory and multiscale mathematics and is referred to as multiscale decision-making. Multiscale decision-making draws upon the analogies between physical systems and complex man-made systems.
In meteorology, multiscale modeling is the modeling of the interaction between weather systems of different spatial and temporal scales that produces the weather we experience. The most challenging task is to model the way the weather systems interact, as models cannot see beyond the limit of the model grid size. In other words, running an atmospheric model for the whole globe with a grid size small enough (~500 m) to resolve each possible cloud structure is computationally very expensive. On the other hand, a computationally feasible Global climate model (GCM), with grid size ~100 km, cannot see the smaller cloud systems. A balance point must therefore be found so that the model remains computationally feasible while not losing much information, with the help of some rational approximations, a process called parametrization.
Besides the many specific applications, one area of research is methods for the accurate and efficient solution of multiscale modeling problems. The primary areas of mathematical and algorithmic development include:
Analytical modeling
Center manifold and slow manifold theory
Continuum modeling
Discrete modeling
Network-based modeling
Statistical modeling
== See also ==
Computational mechanics
Equation-free modeling
Integrated computational materials engineering
Multilevel model
Multiphysics
Multiresolution analysis
Space mapping
== References ==
== Further reading ==
Hosseini, SA; Shah, N (2009). "Multiscale modelling of hydrothermal biomass pretreatment for chip size optimization". Bioresource Technology. 100 (9): 2621–8. Bibcode:2009BiTec.100.2621H. doi:10.1016/j.biortech.2008.11.030. PMID 19136256.
Tao, Wei-Kuo; Chern, Jiun-Dar; Atlas, Robert; Randall, David; Khairoutdinov, Marat; Li, Jui-Lin; Waliser, Duane E.; Hou, Arthur; et al. (2009). "A Multiscale Modeling System: Developments, Applications, and Critical Issues". Bulletin of the American Meteorological Society. 90 (4): 515–534. Bibcode:2009BAMS...90..515T. doi:10.1175/2008BAMS2542.1. hdl:2060/20080039624.
== External links ==
Mississippi State University ICME Cyberinfrastructure
Multiscale Modeling of Flow
Multiscale Modeling of Materials (MMM-Tools) Project at Dr. Martin Steinhauser's group at the Fraunhofer-Institute for High-Speed Dynamics, Ernst-Mach-Institut, EMI, at Freiburg, Germany. Since 2013, M.O. Steinhauser is associated at the University of Basel, Switzerland.
Multiscale Modeling Group: Institute of Physical & Theoretical Chemistry, University of Regensburg, Regensburg, Germany
Multiscale Materials Modeling: Fourth International Conference, Tallahassee, FL, USA
Multiscale Modeling Tools for Protein Structure Prediction and Protein Folding Simulations, Warsaw, Poland
Multiscale modeling for Materials Engineering: Set-up of quantitative micromechanical models
Multiscale Material Modelling on High Performance Computer Architectures, MMM@HPC project
Modeling Materials: Continuum, Atomistic and Multiscale Techniques (E. B. Tadmor and R. E. Miller, Cambridge University Press, 2011)
An Introduction to Computational Multiphysics II: Theoretical Background Part I Harvard University video series
SIAM Journal of Multiscale Modeling and Simulation
International Journal for Multiscale Computational Engineering
Department of Energy Summer School on Multiscale Mathematics and High Performance Computing
Multiscale Conceptual Model Figures for Biological and Environmental Science
In statistics, a mixed-design analysis of variance model, also known as a split-plot ANOVA, is used to test for differences between two or more independent groups whilst subjecting participants to repeated measures. Thus, in a mixed-design ANOVA model, one factor (a fixed effects factor) is a between-subjects variable and the other (a random effects factor) is a within-subjects variable. Thus, overall, the model is a type of mixed-effects model.
A repeated measures design is used when multiple independent variables or measures exist in a data set, but all participants have been measured on each variable.
== An example ==
Andy Field (2009) provided an example of a mixed-design ANOVA in which he wants to investigate whether personality or attractiveness is the most important quality for individuals seeking a partner. In his example, there is a speed dating event set up in which there are two sets of what he terms "stooge dates": a set of males and a set of females. The experimenter selects 18 individuals, 9 males and 9 females to play stooge dates. Stooge dates are individuals who are chosen by the experimenter and they vary in attractiveness and personality. For males and females, there are three highly attractive individuals, three moderately attractive individuals, and three highly unattractive individuals. Of each set of three, one individual has a highly charismatic personality, one is moderately charismatic and the third is extremely dull.
The participants are the individuals who sign up for the speed dating event and interact with each of the 9 individuals of the opposite sex. There are 10 males and 10 female participants. After each date, they rate on a scale of 0 to 100 how much they would like to have a date with that person, with a zero indicating "not at all" and 100 indicating "very much".
The random factors, or so-called repeated measures, are looks, which consists of three levels (very attractive, moderately attractive, and highly unattractive) and personality, which again has three levels (highly charismatic, moderately charismatic, and extremely dull). The looks and personality factors have an overall random character because the precise level of each cannot be controlled by the experimenter (and indeed may be difficult to quantify); the 'blocking' into discrete categories is for convenience, and does not guarantee precisely the same level of looks or personality within a given block; and the experimenter is interested in making inferences on the general population of daters, not just the 18 'stooges'. The fixed-effect factor, or so-called between-subjects measure, is gender, because the participants making the ratings were either female or male, and precisely these statuses were fixed by the experimenter's design.
== ANOVA assumptions ==
When running an analysis of variance to analyse a data set, the data set should meet the following criteria:
Normality: scores for each condition should be sampled from a normally distributed population.
Homogeneity of variance: each population should have the same error variance.
Sphericity of the covariance matrix: ensures the F ratios match the F distribution
For the between-subject effects to meet the assumptions of the analysis of variance, the variance for any level of a group must be the same as the variance for the mean of all other levels of the group. When there is homogeneity of variance, sphericity of the covariance matrix will occur, because for between-subjects independence has been maintained.
For the within-subject effects, it is important to ensure normality and homogeneity of variance are not being violated.
If the assumptions are violated, a possible solution is to use the Greenhouse–Geisser correction or the Huynh & Feldt adjustments to the degrees of freedom because they can correct for issues that can arise should the sphericity of the covariance matrix assumption be violated.
== Partitioning the sums of squares and the logic of ANOVA ==
Due to the fact that the mixed-design ANOVA uses both between-subject variables and within-subject variables (a.k.a. repeated measures), it is necessary to partition out (or separate) the between-subject effects and the within-subject effects. It is as if you are running two separate ANOVAs with the same data set, except that it is possible to examine the interaction of the two effects in a mixed design. As can be seen in the source table provided below, the between-subject variables can be partitioned into the main effect of the first factor and into the error term. The within-subjects terms can be partitioned into three terms: the second (within-subjects) factor, the interaction term for the first and second factors, and the error term. The main difference between the sum of squares of the within-subject factors and between-subject factors is that within-subject factors have an interaction factor.
More specifically, the total sum of squares in a regular one-way ANOVA would consist of two parts: variance due to treatment or condition (SSbetween-subjects) and variance due to error (SSwithin-subjects). Normally the SSwithin-subjects is a measurement of variance. In a mixed-design, you are taking repeated measures from the same participants and therefore the sum of squares can be broken down even further into three components: SSwithin-subjects (variance due to being in different repeated measure conditions), SSerror (other variance), and SSBT*WT (variance of interaction of between-subjects by within-subjects conditions).
Each effect has its own F value. Both the between-subject and within-subject factors have their own MSerror term which is used to calculate separate F values.
Between-subjects:
FBetween-subjects = MSbetween-subjects/MSError(between-subjects)
Within-subjects:
FWithin-subjects = MSwithin-subjects/MSError(within-subjects)
FBS×WS = MSbetween×within/MSError(within-subjects)
== Analysis of variance table ==
Results are often presented in a table of the following form.
== Degrees of freedom ==
In order to calculate the degrees of freedom for between-subjects effects, dfBS = R – 1, where R refers to the number of levels of between-subject groups.
In the case of the degrees of freedom for the between-subject effects error, dfBS(Error) = Nk – R, where Nk is equal to the number of participants (also known as subjects), and again R is the number of levels.
To calculate the degrees of freedom for within-subject effects, dfWS = C – 1, where C is the number of within-subject tests. For example, if participants completed a specific measure at three time points, C = 3, and dfWS = 2.
The degrees of freedom for the interaction term of between-subjects by within-subjects term(s), dfBS×WS = (R – 1)(C – 1), where again R refers to the number of levels of the between-subject groups, and C is the number of within-subject tests.
Finally, the within-subject error is calculated by, dfWS(Error) = (Nk – R)(C – 1), in which Nk is the number of participants, R and C remain the same.
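The degrees-of-freedom rules above can be collected into a small helper (a sketch; the function name is ours, not from any statistics package):

```python
def mixed_design_dfs(R, C, Nk):
    """Degrees of freedom for a mixed-design ANOVA: R between-subject
    levels, C within-subject measurements, Nk participants in total."""
    return {
        "BS": R - 1,                     # between-subjects factor
        "BS_error": Nk - R,              # between-subjects error
        "WS": C - 1,                     # within-subjects factor
        "BSxWS": (R - 1) * (C - 1),      # interaction
        "WS_error": (Nk - R) * (C - 1),  # within-subjects error
    }

# e.g. two genders (R = 2), three time points (C = 3), 20 participants
print(mixed_design_dfs(2, 3, 20))
# {'BS': 1, 'BS_error': 18, 'WS': 2, 'BSxWS': 2, 'WS_error': 36}
```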
== Follow-up tests ==
When there is a significant interaction between a between-subject factor and a within-subject factor, statisticians often recommended pooling the between-subject and within-subject MSerror terms. This can be calculated in the following way:
MSWCELL = (SSBSError + SSWSError) / (dfBSError + dfWSError)
This pooled error is used when testing the effect of the between-subject variable within a level of the within-subject variable. If testing the within-subject variable at different levels of the between-subject variable, the MSws/e error term that tested the interaction is the correct error term to use. More generally, as described by Howell (1987 Statistical Methods for Psychology, 2nd edition, p 434), when doing simple effects based on the interactions one should use the pooled error when the factor being tested and the interaction were tested with different error terms. When the factor being tested and the interaction were tested with the same error term, that term is sufficient.
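As a small numeric sketch of the pooled error term (all numbers here are hypothetical):

```python
def pooled_ms_error(ss_bs_error, ss_ws_error, df_bs_error, df_ws_error):
    """MS_wcell = (SS_BSerror + SS_WSerror) / (df_BSerror + df_WSerror),
    used when testing the between-subject variable within a level of
    the within-subject variable."""
    return (ss_bs_error + ss_ws_error) / (df_bs_error + df_ws_error)

print(pooled_ms_error(30.0, 60.0, 10, 20))  # 3.0
```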
When following up interactions for terms that are both between-subjects or both within-subjects variables, the method is identical to follow-up tests in ANOVA. The MSError term that applies to the follow-up in question is the appropriate one to use, e.g. if following up a significant interaction of two between-subject effects, use the MSError term from between-subjects. See ANOVA.
== See also ==
Restricted randomization
Mauchly's sphericity test
== References ==
== Further reading ==
Cauraugh, J. H. (2002). "Experimental design and statistical decisions tutorial: Comments on longitudinal ideomotor apraxia recovery." Neuropsychological Rehabilitation, 12, 75–83.
Gueorguieva, R. & Krystal, J. H. (2004). "Progress in analyzing repeated-measures data and its reflection in papers published in the archives of general psychiatry." Archives of General Psychiatry, 61, 310–317.
Huck, S. W. & McLean, R. A. (1975). "Using a repeated measures ANOVA to analyze the data from a pretest-posttest design: A potentially confusing task". Psychological Bulletin, 82, 511–518.
Pollatsek, A. & Well, A. D. (1995). "On the use of counterbalanced designs in cognitive research: A suggestion for a better and more powerful analysis". Journal of Experimental Psychology, 21, 785–794.
== External links ==
Examples of all ANOVA and ANCOVA models with up to three treatment factors, including randomized block, split plot, repeated measures, and Latin squares, and their analysis in R
One application of multilevel modeling (MLM) is the analysis of repeated measures data. Multilevel modeling for repeated measures data is most often discussed in the context of modeling change over time (i.e. growth curve modeling for longitudinal designs); however, it may also be used for repeated measures data in which time is not a factor.
In multilevel modeling, an overall change function (e.g. linear, quadratic, cubic etc.) is fitted to the whole sample and, just as in multilevel modeling for clustered data, the slope and intercept may be allowed to vary. For example, in a study looking at income growth with age, individuals might be assumed to show linear improvement over time. However, the exact intercept and slope could be allowed to vary across individuals (i.e. defined as random coefficients).
Multilevel modeling with repeated measures employs the same statistical techniques as MLM with clustered data. In multilevel modeling for repeated measures data, the measurement occasions are nested within cases (e.g. individual or subject). Thus, level-1 units consist of the repeated measures for each subject, and the level-2 unit is the individual or subject. In addition to estimating overall parameter estimates, MLM allows regression equations at the level of the individual. Thus, as a growth curve modeling technique, it allows the estimation of inter-individual differences in intra-individual change over time by modeling the variances and covariances. In other words, it allows the testing of individual differences in patterns of responses over time (i.e. growth curves). This characteristic of multilevel modeling makes it preferable to other repeated measures statistical techniques such as repeated measures-analysis of variance (RM-ANOVA) for certain research questions.
== Assumptions ==
The assumptions of MLM that hold for clustered data also apply to repeated measures:
(1) Random components are assumed to have a normal distribution with a mean of zero
(2) The dependent variable is assumed to be normally distributed. However, binary and discrete dependent variables may be examined in MLM using specialized procedures (i.e. employ different link functions).
One of the assumptions of using MLM for growth curve modeling is that all subjects show the same relationship over time (e.g. linear, quadratic etc.). Another assumption of MLM for growth curve modeling is that the observed changes are related to the passage of time.
== Statistics & Interpretation ==
Mathematically, multilevel analysis with repeated measures is very similar to the analysis of data in which subjects are clustered in groups. However, one point to note is that time-related predictors must be explicitly entered into the model to evaluate trend analyses and to obtain an overall test of the repeated measure. Furthermore, interpretation of these analyses is dependent on the scale of the time variable (i.e. how it is coded).
Fixed Effects: Fixed regression coefficients may be obtained for an overall equation that represents how, averaging across subjects, the subjects change over time.
Random Effects: Random effects are the variance components that arise from measuring the relationship of the predictors to Y for each subject separately. These variance components include: (1) differences in the intercepts of these equations at the level of the subject; (2) differences across subjects in the slopes of these equations; and (3) covariance between subject slopes and intercepts across all subjects. When random coefficients are specified, each subject has its own regression equation, making it possible to evaluate whether subjects differ in their means and/or response patterns over time.
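The idea that each subject gets its own regression equation can be illustrated by fitting a separate least-squares line per subject (a conceptual sketch only; MLM software estimates the variance components jointly rather than subject by subject, and the subject labels and scores below are illustrative):

```python
def ols_line(times, scores):
    """Closed-form simple least squares; returns (intercept, slope)."""
    n = len(times)
    mt = sum(times) / n
    ms = sum(scores) / n
    sxx = sum((t - mt) ** 2 for t in times)
    sxy = sum((t - mt) * (s - ms) for t, s in zip(times, scores))
    slope = sxy / sxx
    return ms - slope * mt, slope

# two hypothetical subjects measured at times 0, 1, 2
subjects = {"A": [10.0, 12.0, 14.0], "B": [8.0, 8.5, 9.0]}
fits = {sid: ols_line([0, 1, 2], y) for sid, y in subjects.items()}
print(fits)  # {'A': (10.0, 2.0), 'B': (8.0, 0.5)}
```

The spread of the fitted intercepts and slopes across subjects is what the level-2 variance components summarize.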
Estimation Procedures & Comparing Models: These procedures are identical to those used in multilevel analysis where subjects are clustered in groups.
=== Extensions ===
Modeling Non-Linear Trends (Polynomial Models):
Non-linear trends (quadratic, cubic, etc.) may be evaluated in MLM by adding the products of Time (TimeXTime, TimeXTimeXTime etc.) as either random or fixed effects to the model.
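As a sketch, these products of Time become extra columns of the growth-curve design matrix (the function name is illustrative):

```python
def time_polynomials(times, degree):
    """Polynomial time predictors (Time, Time^2, ...) up to `degree`,
    one row per measurement occasion (intercept column omitted)."""
    return [[t ** d for d in range(1, degree + 1)] for t in times]

# quadratic growth: columns are Time and Time x Time
print(time_polynomials([0, 1, 2, 3], degree=2))
# [[0, 0], [1, 1], [2, 4], [3, 9]]
```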
Adding Predictors to the Model: It is possible that some of the random variance (i.e. variance associated with individual differences) may be attributed to fixed predictors other than time. Unlike RM-ANOVA, multilevel analysis allows the use of continuous predictors (rather than only categorical), and these predictors may or may not account for individual differences in the intercepts as well as for differences in slopes. Furthermore, multilevel modeling also allows time-varying covariates.
Alternative Specifications:
Covariance Structure: Multilevel software provides several different covariance or error structures to choose from for the analysis of multilevel data (e.g. autoregressive). These may be applied to the growth model as appropriate.
Dependent Variable: Dichotomous dependent variables may be analyzed with multilevel analysis by using more specialized analysis (i.e. using the logit or probit link functions).
== Multilevel modeling versus other statistical techniques for repeated measures ==
=== Multilevel Modeling versus RM-ANOVA ===
Repeated measures analysis of variance (RM-ANOVA) has traditionally been used for the analysis of repeated measures designs. However, violation of the assumptions of RM-ANOVA can be problematic. Multilevel modeling (MLM) is commonly used for repeated measures designs because it presents an alternative approach to analyzing this type of data, with several main advantages over RM-ANOVA:
1. MLM has Less Stringent Assumptions: MLM can be used if the assumptions of constant variances (homogeneity of variance, or homoscedasticity), constant covariances (compound symmetry), or constant variances of differences scores (sphericity) are violated for RM-ANOVA. MLM allows modeling of the variance-covariance matrix from the data; thus, unlike in RM-ANOVA, these assumptions are not necessary.
2. MLM Allows Hierarchical Structure: MLM can be used for higher-order sampling procedures, whereas RM-ANOVA is limited to examining two-level sampling procedures. In other words, MLM can look at repeated measures within subjects, within a third level of analysis etc., whereas RM-ANOVA is limited to repeated measures within subjects.
3. MLM can Handle Missing Data: Missing data is permitted in MLM without causing additional complications. With RM-ANOVA, a subject's data must be excluded if they are missing a single data point. Missing data and attempts to resolve missing data (i.e. using the subject's mean for non-missing data) can raise additional problems in RM-ANOVA.
4. MLM can also handle data in which there is variation in the exact timing of data collection (i.e. variable timing versus fixed timing). For example, data for a longitudinal study may attempt to collect measurements at age 6 months, 9 months, 12 months, and 15 months. However, participant availability, bank holidays, and other scheduling issues may result in variation regarding when data is collected. This variation may be addressed in MLM by adding “age” into the regression equation. There is also no need for equal intervals between measurement points in MLM.
5. MLM is relatively easily extended to discrete data.
Note: Although missing data is permitted in MLM, it is assumed to be missing at random. Thus, systematically missing data can present problems.
=== Multilevel Modeling versus Structural Equation Modeling (SEM; Latent Growth Model) ===
An alternative method of growth curve analysis is latent growth curve modeling using structural equation modeling (SEM). This approach will provide the same estimates as the multilevel modeling approach, provided that the model is specified identically in SEM. However, there are circumstances in which either MLM or SEM are preferable:
Multilevel modeling approach:
For designs with a large number of unequal intervals between time points (SEM cannot manage data with a lot of variation in time points)
When there are many data points per subject
When the growth model is nested in additional levels of analysis (i.e. hierarchical structure)
Multilevel modeling programs have far more options in terms of handling non-continuous dependent variables (link functions) and allowing different error structures
Structural equation modeling approach:
Better suited for extended models in which the model is embedded into a larger path model, or the intercept and slope are used as predictors for other variables. In this way, SEM allows greater flexibility.
The distinction between multilevel modeling and latent growth curve analysis has become less defined. Some statistical programs incorporate multilevel features within their structural equation modeling software, and some multilevel modeling software is beginning to add latent growth curve features.
== Data Structure ==
Multilevel modeling with repeated measures data is computationally complex. Computer software capable of performing these analyses may require data to be represented in “long form” as opposed to “wide form” prior to analysis. In long form, each subject’s data is represented in several rows – one for every “time” point (observation of the dependent variable). This is opposed to wide form in which there is one row per subject, and the repeated measures are represented in separate columns. Also note that, in long form, time invariant variables are repeated across rows for each subject. See below for an example of wide form data transposed into long form:
Wide form:
Long form:
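A minimal pure-Python sketch of the wide-to-long reshape described above (the column names `id`, `group`, `t1`, `t2` are illustrative; real analyses would use a statistics package's reshaping tools):

```python
def wide_to_long(rows, time_cols):
    """Turn wide-form records (one dict per subject) into long form
    (one dict per subject x time); time-invariant fields repeat
    across that subject's rows."""
    long_rows = []
    for row in rows:
        invariant = {k: v for k, v in row.items() if k not in time_cols}
        for t, col in enumerate(time_cols, start=1):
            long_rows.append({**invariant, "time": t, "score": row[col]})
    return long_rows

wide = [{"id": 1, "group": "a", "t1": 5, "t2": 7},
        {"id": 2, "group": "b", "t1": 6, "t2": 4}]
print(wide_to_long(wide, ["t1", "t2"])[0])
# {'id': 1, 'group': 'a', 'time': 1, 'score': 5}
```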
== See also ==
Multilevel model
Repeated measures design
Growth curve
Structural equation modeling
Longitudinal study
== Further reading ==
Heo, Moonseong; Faith, Myles S.; Mott, John W.; Gorman, Bernard S.; Redden, David T.; Allison, David B. (2003). "Hierarchical linear models for the development of growth curves: an example with body mass index in overweight/obese adults". Statistics in Medicine. 22 (11): 1911–1942. doi:10.1002/sim.1218. PMID 12754724.
Singer, J. D. (1998). "Using SAS PROC MIXED to Fit Multilevel Models, Hierarchical Models, and Individual Growth Models". Journal of Educational and Behavioral Statistics. 23 (4): 323–355. doi:10.3102/10769986023004323.
Singer, Judith D.; Willett, John B. (2003). Applied longitudinal data analysis: modeling change and event occurrence. Oxford: Oxford University Press. ISBN 978-0195152968. Concentrates on SAS and on simpler growth models.
Snijders, Tom A.B.; Bosker, Roel J. (2002). Multilevel analysis : an introduction to basic and advanced multilevel modeling (Reprint. ed.). London: Sage Publications. ISBN 978-0761958901.
Hedeker, Donald (2006). Longitudinal data analysis. Hoboken, N.J: Wiley-Interscience. ISBN 978-0471420279. Covers many models and shows the advantages of MLM over other approaches
Verbeke, Geert (2013). Linear mixed models for longitudinal data. S.l: Springer-Verlag New York. ISBN 978-1475773842. Has extensive SAS code.
Molenberghs, Geert (2005). Models for discrete longitudinal data. New York: Springer Science+Business Media, Inc. ISBN 978-0387251448. Covers non-linear models. Has SAS code.
Pinheiro, Jose; Bates, Douglas M. (2000). Mixed-effects models in S and S-PLUS. New York, NY u.a: Springer. ISBN 978-1441903174. Uses S and S-plus but will be useful for R users as well.
== Notes ==
== References ==
Cohen, Jacob; Cohen, Patricia; West, Stephen G.; Aiken, Leona S. (2002). Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (3. ed.). Routledge Academic. ISBN 9780805822236.
Curran, Patrick J.; Obeidat, Khawla; Losardo, Diane (2010). "Twelve Frequently Asked Questions About Growth Curve Modeling". Journal of Cognition and Development. 11 (2): 121–136. doi:10.1080/15248371003699969. PMC 3131138. PMID 21743795.
Tabachnick, Barbara G.; Fidell, Linda S. (2007). Using Multivariate Statistics (5th ed.). Boston; Montreal: Pearson/A & B. ISBN 978-0205459384.
Hoffman, Lesa; Rovine, Michael J. (2007). "Multilevel models for the experimental psychologist: Foundations and illustrative examples". Behavior Research Methods. 39 (1): 101–117. doi:10.3758/BF03192848. PMID 17552476.
Howell, David C. (2010). Statistical methods for psychology (7th ed.). Belmont, CA: Thomson Wadsworth. ISBN 978-0-495-59784-1.
Hox, Joop (2005). Multilevel and SEM Approached to Growth Curve Modeling (PDF) ([Repr.]. ed.). Chichester: Wiley. ISBN 978-0-470-86080-9.
Overall, John E.; Tonidandel, Scott (2007). "Analysis of Data from a Controlled Repeated Measurements Design with Baseline-Dependent Dropouts". Methodology: European Journal of Research Methods for the Behavioral and Social Sciences. 3 (2): 58–66. doi:10.1027/1614-2241.3.2.58.
Overall, John; Ahn, Chul; Shivakumar, C.; Kalburgi, Yallapa (2007). "Problematic Formulations of Sas Proc.mixed Models for Repeated Measurements". Journal of Biopharmaceutical Statistics. 9 (1): 189–216. doi:10.1081/BIP-100101008. PMID 10091918.
Quené, Hugo; van den Bergh, Huub (2004). "On multi-level modeling of data from repeated measures designs: a tutorial". Speech Communication. 43 (1–2): 103–121. CiteSeerX 10.1.1.2.8982. doi:10.1016/j.specom.2004.02.004.
In econometrics, a random effects model, also called a variance components model, is a statistical model where the model parameters are random variables. It is a kind of hierarchical linear model, which assumes that the data being analysed are drawn from a hierarchy of different populations whose differences relate to that hierarchy. A random effects model is a special case of a mixed model.
Contrast this to the biostatistics definitions, as biostatisticians use "fixed" and "random" effects to respectively refer to the population-average and subject-specific effects (and where the latter are generally assumed to be unknown, latent variables).
== Qualitative description ==
Random effects models assist in controlling for unobserved heterogeneity when that heterogeneity is constant over time and not correlated with the independent variables. Such a constant effect can be removed from longitudinal data through differencing, since taking a first difference will remove any time-invariant components of the model.
Two common assumptions can be made about the individual specific effect: the random effects assumption and the fixed effects assumption. The random effects assumption is that the individual unobserved heterogeneity is uncorrelated with the independent variables. The fixed effect assumption is that the individual specific effect is correlated with the independent variables.
If the random effects assumption holds, the random effects estimator is more efficient than the fixed effects estimator.
== Simple example ==
Suppose m large elementary schools are chosen randomly from among thousands in a large country. Suppose also that n pupils of the same age are chosen randomly at each selected school. Their scores on a standard aptitude test are ascertained. Let Y_{ij} be the score of the j-th pupil at the i-th school.
A simple way to model this variable is
{\displaystyle Y_{ij}=\mu +U_{i}+W_{ij},}
where μ is the average test score for the entire population.
In this model U_i is the school-specific random effect: it measures the difference between the average score at school i and the average score in the entire country. The term W_{ij} is the individual-specific random effect, i.e., it is the deviation of the j-th pupil's score from the average for the i-th school.
The model can be augmented by including additional explanatory variables, which would capture differences in scores among different groups. For example:
{\displaystyle Y_{ij}=\mu +\beta _{1}\mathrm {Sex} _{ij}+\beta _{2}\mathrm {ParentsEduc} _{ij}+U_{i}+W_{ij},}
where Sex_{ij} is a binary dummy variable and ParentsEduc_{ij} records, say, the average education level of a child's parents. This is a mixed model, not a purely random effects model, as it introduces fixed-effects terms for Sex and Parents' Education.
=== Variance components ===
The variance of Y_{ij} is the sum of the variances τ² and σ² of U_i and W_{ij} respectively.
Let
{\displaystyle {\overline {Y}}_{i\bullet }={\frac {1}{n}}\sum _{j=1}^{n}Y_{ij}}
be the average, not of all scores at the i-th school, but of those at the i-th school that are included in the random sample. Let
{\displaystyle {\overline {Y}}_{\bullet \bullet }={\frac {1}{mn}}\sum _{i=1}^{m}\sum _{j=1}^{n}Y_{ij}}
be the grand average.
Let
{\displaystyle SSW=\sum _{i=1}^{m}\sum _{j=1}^{n}(Y_{ij}-{\overline {Y}}_{i\bullet })^{2}}
{\displaystyle SSB=n\sum _{i=1}^{m}({\overline {Y}}_{i\bullet }-{\overline {Y}}_{\bullet \bullet })^{2}}
be respectively the sum of squares due to differences within groups and the sum of squares due to differences between groups. Then it can be shown that
{\displaystyle {\frac {1}{m(n-1)}}E(SSW)=\sigma ^{2}}
and
{\displaystyle {\frac {1}{(m-1)n}}E(SSB)={\frac {\sigma ^{2}}{n}}+\tau ^{2}.}
These "expected mean squares" can be used as the basis for estimation of the "variance components" σ² and τ².
The ratio τ²/(τ² + σ²) is also called the intraclass correlation coefficient.
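The expected-mean-squares estimators above can be checked numerically. The following sketch (an illustration, not part of the article; the school count, sample size, and variance values are invented) simulates the model Y_ij = μ + U_i + W_ij and recovers σ² and τ² from SSW and SSB:

```python
import numpy as np

# Simulate Y_ij = mu + U_i + W_ij and recover the variance components
# from the within- and between-group sums of squares.
rng = np.random.default_rng(0)
m, n = 200, 30                       # schools, pupils sampled per school
mu, tau, sigma = 50.0, 5.0, 10.0     # grand mean, sd of U_i, sd of W_ij

U = rng.normal(0.0, tau, size=m)             # school-specific effects
W = rng.normal(0.0, sigma, size=(m, n))      # pupil-specific effects
Y = mu + U[:, None] + W

school_means = Y.mean(axis=1)                # Ybar_{i.}
grand_mean = Y.mean()                        # Ybar_{..}

SSW = ((Y - school_means[:, None]) ** 2).sum()
SSB = n * ((school_means - grand_mean) ** 2).sum()

sigma2_hat = SSW / (m * (n - 1))                  # E[.] = sigma^2
tau2_hat = SSB / ((m - 1) * n) - sigma2_hat / n   # E[SSB/((m-1)n)] = sigma^2/n + tau^2
```

With these sample sizes both estimates land close to the true values σ² = 100 and τ² = 25.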
== Marginal likelihood ==
For random effects models, the marginal likelihood, obtained by integrating the random effects out of the joint likelihood, plays an important role in estimation.
== Applications ==
Random effects models used in practice include the Bühlmann model of insurance contracts and the Fay-Herriot model used for small area estimation.
== See also ==
Bühlmann model
Hierarchical linear modeling
Fixed effects
MINQUE
Covariance estimation
Conditional variance
Panel analysis
== Further reading ==
Baltagi, Badi H. (2008). Econometric Analysis of Panel Data (4th ed.). New York, NY: Wiley. pp. 17–22. ISBN 978-0-470-51886-1.
Hsiao, Cheng (2003). Analysis of Panel Data (2nd ed.). New York, NY: Cambridge University Press. pp. 73–92. ISBN 0-521-52271-4.
Wooldridge, Jeffrey M. (2002). Econometric Analysis of Cross Section and Panel Data. Cambridge, MA: MIT Press. pp. 257–265. ISBN 0-262-23219-7.
Gomes, Dylan G.E. (20 January 2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMC 8784019. PMID 35116198.
== References ==
== External links ==
Fixed and random effects models
How to Conduct a Meta-Analysis: Fixed and Random Effect Models
In statistics, an errors-in-variables model or a measurement error model is a regression model that accounts for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses.
In the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias. In non-linear models the direction of the bias is likely to be more complicated.
== Motivating example ==
Consider a simple linear regression model of the form
{\displaystyle y_{t}=\alpha +\beta x_{t}^{*}+\varepsilon _{t}\,,\quad t=1,\ldots ,T,}
where x_t^* denotes the true but unobserved regressor. Instead, we observe this value with an error:
{\displaystyle x_{t}=x_{t}^{*}+\eta _{t}}
where the measurement error η_t is assumed to be independent of the true value x_t^*.
A practical application is the standard school science experiment for Hooke's law, in which one estimates the relationship between the weight added to a spring and the amount by which the spring stretches.
If the y_t's are simply regressed on the x_t's (see simple linear regression), then the estimator for the slope coefficient is
{\displaystyle {\hat {\beta }}_{x}={\frac {{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}-{\bar {x}})(y_{t}-{\bar {y}})}{{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}-{\bar {x}})^{2}}}\,,}
which converges as the sample size T increases without bound:
{\displaystyle {\hat {\beta }}_{x}\xrightarrow {p} {\frac {\operatorname {Cov} [\,x_{t},y_{t}\,]}{\operatorname {Var} [\,x_{t}\,]}}={\frac {\beta \sigma _{x^{*}}^{2}}{\sigma _{x^{*}}^{2}+\sigma _{\eta }^{2}}}={\frac {\beta }{1+\sigma _{\eta }^{2}/\sigma _{x^{*}}^{2}}}\,.}
This is in contrast to the "true" effect of β, estimated using the x_t^*:
{\displaystyle {\hat {\beta }}={\frac {{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}^{*}-{\bar {x}})(y_{t}-{\bar {y}})}{{\tfrac {1}{T}}\sum _{t=1}^{T}(x_{t}^{*}-{\bar {x}})^{2}}}\,.}
Variances are non-negative, so that in the limit the estimated β̂_x is smaller than β̂, an effect which statisticians call attenuation or regression dilution. Thus the 'naïve' least squares estimator β̂_x is an inconsistent estimator for β. However, β̂_x is a consistent estimator of the parameter required for a best linear predictor of y given the observed x_t: in some applications this may be what is required, rather than an estimate of the 'true' regression coefficient β, although that would assume that the variance of the errors in the estimation and prediction is identical. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the y_t's to the actually observed x_t's, in a simple linear regression, is given by
{\displaystyle \beta _{x}={\frac {\operatorname {Cov} [\,x_{t},y_{t}\,]}{\operatorname {Var} [\,x_{t}\,]}}.}
It is this coefficient, rather than β, that would be required for constructing a predictor of y based on an observed x which is subject to noise.
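A quick simulation makes the attenuation concrete. This is an illustrative sketch (the values β = 2, σ_x* = σ_η = 1 are invented for the example): the naive slope converges to β/(1 + σ_η²/σ_x*²) = β/2, not to β.

```python
import numpy as np

# Naive OLS on the error-ridden regressor x recovers beta / (1 + sd_eta^2 / sd_x^2),
# not beta itself.
rng = np.random.default_rng(1)
T = 100_000
beta, sd_x, sd_eta, sd_eps = 2.0, 1.0, 1.0, 0.5

x_star = rng.normal(0.0, sd_x, T)              # true, unobserved regressor
x = x_star + rng.normal(0.0, sd_eta, T)        # observed with measurement error
y = beta * x_star + rng.normal(0.0, sd_eps, T)

beta_hat_x = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # naive slope
attenuation = 1.0 / (1.0 + sd_eta**2 / sd_x**2)       # here 1/2
```

With T = 100,000 draws, beta_hat_x lands near β × attenuation = 1.0, well below the true β = 2.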
It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous). Jerry Hausman sees this as an iron law of econometrics: "The magnitude of the estimate is usually smaller than expected."
== Specification ==
Usually, measurement error models are described using the latent variables approach. If y is the response variable and x are observed values of the regressors, then it is assumed there exist some latent variables y* and x* which follow the model's "true" functional relationship g(⋅), and such that the observed quantities are their noisy observations:
{\displaystyle {\begin{cases}y^{*}=g(x^{*}\!,w\,|\,\theta ),\\y=y^{*}+\varepsilon ,\\x=x^{*}+\eta ,\end{cases}}}
where θ is the model's parameter and w are those regressors which are assumed to be error-free (for example, when linear regression contains an intercept, the regressor which corresponds to the constant certainly has no "measurement errors"). Depending on the specification these error-free regressors may or may not be treated separately; in the latter case it is simply assumed that corresponding entries in the variance matrix of the η's are zero.
The variables y, x, w are all observed, meaning that the statistician possesses a data set of n statistical units {y_i, x_i, w_i}_{i=1,…,n} which follow the data generating process described above; the latent variables x*, y*, ε, and η are not observed, however.
This specification does not encompass all the existing errors-in-variables models. For example, in some of them, the function g(⋅) may be non-parametric or semi-parametric. Other approaches model the relationship between y* and x* as distributional instead of functional; that is, they assume that y* conditionally on x* follows a certain (usually parametric) distribution.
=== Terminology and assumptions ===
The observed variable x may be called the manifest, indicator, or proxy variable.
The unobserved variable x* may be called the latent or true variable. It may be regarded either as an unknown constant (in which case the model is called a functional model), or as a random variable (correspondingly a structural model).
The relationship between the measurement error η and the latent variable x* can be modeled in different ways:
Classical errors: η ⊥ x*; the errors are independent of the latent variable. This is the most common assumption; it implies that the errors are introduced by the measuring device and their magnitude does not depend on the value being measured.
Mean-independence: E[η | x*] = 0; the errors are mean-zero for every value of the latent regressor. This is a less restrictive assumption than the classical one, as it allows for the presence of heteroscedasticity or other effects in the measurement errors.
Berkson's errors: η ⊥ x; the errors are independent of the observed regressor x. This assumption has very limited applicability. One example is round-off errors: for example, if a person's true age x* is a continuous random variable, whereas the observed age is truncated to the next smallest integer, then the truncation error is approximately independent of the observed age. Another possibility is with the fixed design experiment: for example, if a scientist decides to make a measurement at a certain predetermined moment of time x, say at x = 10 s, then the real measurement may occur at some other value of x* (for example due to her finite reaction time) and such measurement error will be generally independent of the "observed" value of the regressor.
Misclassification errors: special case used for the dummy regressors. If x* is an indicator of a certain event or condition (such as person is male/female, some medical treatment given/not, etc.), then the measurement error in such a regressor will correspond to incorrect classification, similar to type I and type II errors in statistical testing. In this case the error η may take only 3 possible values, and its distribution conditional on x* is modeled with two parameters: α = Pr[η = −1 | x* = 1] and β = Pr[η = 1 | x* = 0]. The necessary condition for identification is that α + β < 1, that is, misclassification should not happen "too often". (This idea can be generalized to discrete variables with more than two possible values.)
== Linear model ==
Linear errors-in-variables models were studied first, probably because linear models were so widely used and they are easier than non-linear ones. Unlike standard least squares regression (OLS), extending errors in variables regression (EiV) from the simple to the multivariable case is not straightforward, unless one treats all variables in the same way i.e. assume equal reliability.
=== Simple linear model ===
The simple linear errors-in-variables model was already presented in the "motivation" section:
{\displaystyle {\begin{cases}y_{t}=\alpha +\beta x_{t}^{*}+\varepsilon _{t},\\x_{t}=x_{t}^{*}+\eta _{t},\end{cases}}}
where all variables are scalar. Here α and β are the parameters of interest, whereas σε and ση—standard deviations of the error terms—are the nuisance parameters. The "true" regressor x* is treated as a random variable (structural model), independent of the measurement error η (classic assumption).
This model is identifiable in two cases: (1) either the latent regressor x* is not normally distributed, or (2) x* has a normal distribution, but neither ε_t nor η_t are divisible by a normal distribution. That is, the parameters α, β can be consistently estimated from the data set (x_t, y_t)_{t=1}^{T} without any additional information, provided the latent regressor is not Gaussian.
Before this identifiability result was established, statisticians attempted to apply the maximum likelihood technique by assuming that all variables are normal, and then concluded that the model is not identified. The suggested remedy was to assume that some of the parameters of the model are known or can be estimated from the outside source. Such estimation methods include
Deming regression — assumes that the ratio δ = σ²ε/σ²η is known. This could be appropriate for example when errors in y and x are both caused by measurements, and the accuracy of measuring devices or procedures are known. The case when δ = 1 is also known as the orthogonal regression.
Regression with known reliability ratio λ = σ²∗/ ( σ²η + σ²∗), where σ²∗ is the variance of the latent regressor. Such approach may be applicable for example when repeating measurements of the same unit are available, or when the reliability ratio has been known from the independent study. In this case the consistent estimate of slope is equal to the least-squares estimate divided by λ.
Regression with known σ²η may occur when the source of the errors in x's is known and their variance can be calculated. This could include rounding errors, or errors introduced by the measuring device. When σ²η is known we can compute the reliability ratio as λ = ( σ²x − σ²η) / σ²x and reduce the problem to the previous case.
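The known-σ²η case can be sketched numerically (an illustration with invented parameter values, not part of the article): estimate Var(x) from the sample, form the reliability ratio λ = (σ²x − σ²η)/σ²x, and divide the attenuated OLS slope by λ.

```python
import numpy as np

# With the error variance sd_eta^2 known, compute the reliability ratio
# lambda = (Var(x) - sd_eta^2) / Var(x) and divide the naive slope by it.
rng = np.random.default_rng(2)
T = 200_000
beta, sd_eta = 1.5, 0.8                  # sd_eta assumed known

x_star = rng.normal(0.0, 1.0, T)
x = x_star + rng.normal(0.0, sd_eta, T)
y = 3.0 + beta * x_star + rng.normal(0.0, 0.5, T)

b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)             # attenuated slope
lam = (np.var(x, ddof=1) - sd_eta**2) / np.var(x, ddof=1)  # reliability ratio
b_corrected = b_ols / lam                                  # consistent for beta
```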
Estimation methods that do not assume knowledge of some of the parameters of the model include:
=== Multivariable linear model ===
The multivariable model looks exactly like the simple linear model, only this time β, ηt, xt and x*t are k×1 vectors.
{\displaystyle {\begin{cases}y_{t}=\alpha +\beta 'x_{t}^{*}+\varepsilon _{t},\\x_{t}=x_{t}^{*}+\eta _{t}.\end{cases}}}
In the case when (εt,ηt) is jointly normal, the parameter β is not identified if and only if there is a non-singular k×k block matrix [a A], where a is a k×1 vector such that a′x* is distributed normally and independently of A′x*. In the case when εt, ηt1,..., ηtk are mutually independent, the parameter β is not identified if and only if in addition to the conditions above some of the errors can be written as the sum of two independent variables one of which is normal.
Some of the estimation methods for multivariable linear models are
== Non-linear models ==
A generic non-linear measurement error model takes form
{\displaystyle {\begin{cases}y_{t}=g(x_{t}^{*})+\varepsilon _{t},\\x_{t}=x_{t}^{*}+\eta _{t}.\end{cases}}}
Here function g can be either parametric or non-parametric. When function g is parametric it will be written as g(x*, β).
For a general vector-valued regressor x* the conditions for model identifiability are not known. However, in the case of scalar x* the model is identified unless the function g is of the "log-exponential" form
{\displaystyle g(x^{*})=a+b\ln {\big (}e^{cx^{*}}+d{\big )}}
and the latent regressor x* has density
{\displaystyle f_{x^{*}}(x)={\begin{cases}Ae^{-Be^{Cx}+CDx}(e^{Cx}+E)^{-F},&{\text{if}}\ d>0\\Ae^{-Bx^{2}+Cx}&{\text{if}}\ d=0\end{cases}}}
where constants A,B,C,D,E,F may depend on a,b,c,d.
Despite this optimistic result, as of now no methods exist for estimating non-linear errors-in-variables models without any extraneous information. However, there are several techniques which make use of some additional data: either the instrumental variables, or repeated observations.
=== Instrumental variables methods ===
=== Repeated observations ===
In this approach two (or maybe more) repeated observations of the regressor x* are available. Both observations contain their own measurement errors; however, those errors are required to be independent:
{\displaystyle {\begin{cases}x_{1t}=x_{t}^{*}+\eta _{1t},\\x_{2t}=x_{t}^{*}+\eta _{2t},\end{cases}}}
where x* ⊥ η1 ⊥ η2. The variables η1, η2 need not be identically distributed (although if they are, the efficiency of the estimator can be slightly improved). With only these two observations it is possible to consistently estimate the density function of x* using Kotlarski's deconvolution technique.
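Beyond density deconvolution, a second independent measurement can also be used in a simple instrumental-variables fashion. This is a standard device shown as an illustrative sketch (the parameter values are invented): regressing y on x₁ alone is attenuated, but using x₂ as an instrument for x₁ is consistent, since Cov(x₂, y) and Cov(x₂, x₁) both reduce to moments of x* alone when the two error series are independent.

```python
import numpy as np

# Two independent noisy measurements of the same latent regressor:
# OLS of y on x1 is attenuated; the IV ratio Cov(x2, y) / Cov(x2, x1)
# is consistent for beta.
rng = np.random.default_rng(3)
T = 200_000
beta = 2.0

x_star = rng.normal(0.0, 1.0, T)
x1 = x_star + rng.normal(0.0, 1.0, T)   # first measurement
x2 = x_star + rng.normal(0.0, 1.0, T)   # second measurement, independent errors
y = beta * x_star + rng.normal(0.0, 0.5, T)

b_naive = np.cov(x1, y)[0, 1] / np.var(x1, ddof=1)   # attenuated
b_iv = np.cov(x2, y)[0, 1] / np.cov(x2, x1)[0, 1]    # consistent
```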
== References ==
== Further reading ==
Dougherty, Christopher (2011). "Stochastic Regressors and Measurement Errors". Introduction to Econometrics (Fourth ed.). Oxford University Press. pp. 300–330. ISBN 978-0-19-956708-9.
Kmenta, Jan (1986). "Estimation with Deficient Data". Elements of Econometrics (Second ed.). New York: Macmillan. pp. 346–391. ISBN 978-0-02-365070-3.
Schennach, Susanne (2013). "Measurement Error in Nonlinear Models – A Review". In Acemoglu, Daron; Arellano, Manuel; Dekel, Eddie (eds.). Advances in Economics and Econometrics. Cambridge University Press. pp. 296–337. doi:10.1017/CBO9781139060035.009. hdl:10419/79526. ISBN 9781107017214.
== External links ==
An Historical Overview of Linear Regression with Errors in both Variables, J.W. Gillard 2006
Lecture on Econometrics (topic: Stochastic Regressors and Measurement Error) on YouTube by Mark Thoma.
Bayesian hierarchical modelling is a statistical model written in multiple levels (hierarchical form) that estimates the parameters of the posterior distribution using the Bayesian method. The sub-models combine to form the hierarchical model, and Bayes' theorem is used to integrate them with the observed data and account for all the uncertainty that is present. This integration yields the posterior distribution of the parameters, an updated probability estimate that incorporates the observed data.
Frequentist statistics may yield conclusions seemingly incompatible with those offered by Bayesian statistics due to the Bayesian treatment of the parameters as random variables and its use of subjective information in establishing assumptions on these parameters. As the approaches answer different questions the formal results aren't technically contradictory but the two approaches disagree over which answer is relevant to particular applications. Bayesians argue that relevant information regarding decision-making and updating beliefs cannot be ignored and that hierarchical modeling has the potential to overrule classical methods in applications where respondents give multiple observational data. Moreover, the model has proven to be robust, with the posterior distribution less sensitive to the more flexible hierarchical priors.
Hierarchical modeling, as its name implies, retains nested data structure, and is used when information is available at several different levels of observational units. For example, in epidemiological modeling of infection trajectories for multiple countries, the observational units are countries, and each country has its own time-based profile of daily infected cases. In decline curve analysis of oil or gas production from multiple wells, the observational units are the wells in a reservoir region, and each well has its own time-based profile of production rates (usually, barrels per month). Hierarchical modeling is used to devise computation-based strategies for multiparameter problems.
== Philosophy ==
Statistical methods and models commonly involve multiple parameters that can be regarded as related or connected in such a way that the problem implies a dependence of the joint probability model for these parameters.
Individual degrees of belief, expressed in the form of probabilities, come with uncertainty. Amidst this is the change of the degrees of belief over time. As was stated by Professor José M. Bernardo and Professor Adrian F. Smith, “The actuality of the learning process consists in the evolution of individual and subjective beliefs about the reality.” These subjective probabilities are more directly involved in the mind rather than the physical probabilities. Hence, it is with this need of updating beliefs that Bayesians have formulated an alternative statistical model which takes into account the prior occurrence of a particular event.
== Bayes' theorem ==
The assumed occurrence of a real-world event will typically modify preferences between certain options. This is done by modifying the degrees of belief attached, by an individual, to the events defining the options.
Suppose in a study of the effectiveness of cardiac treatments, with the patients in hospital j having survival probability θ_j, the survival probability will be updated with the occurrence of y, the event in which a controversial serum is created which, as believed by some, increases survival in cardiac patients.
In order to make updated probability statements about θ_j, given the occurrence of event y, we must begin with a model providing a joint probability distribution for θ_j and y. This can be written as a product of the two distributions that are often referred to as the prior distribution P(θ) and the sampling distribution P(y ∣ θ) respectively:
{\displaystyle P(\theta ,y)=P(\theta )P(y\mid \theta )}
Using the basic property of conditional probability, the posterior distribution will yield:
{\displaystyle P(\theta \mid y)={\frac {P(\theta ,y)}{P(y)}}={\frac {P(y\mid \theta )P(\theta )}{P(y)}}}
This equation, showing the relationship between the conditional probability and the individual events, is known as Bayes' theorem. This simple expression encapsulates the technical core of Bayesian inference, which aims to deconstruct the probability P(θ ∣ y) relative to solvable subsets of its supportive evidence.
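A minimal numeric illustration of the theorem, in the spirit of the survival example (the two candidate survival probabilities and the flat prior are invented for the sketch):

```python
# Two candidate survival probabilities theta, a flat prior, and one
# observed survival y, with P(y | theta) = theta.
thetas = [0.3, 0.6]
prior = [0.5, 0.5]
likelihood = thetas                                   # P(y | theta)
unnorm = [l * p for l, p in zip(likelihood, prior)]   # P(y | theta) P(theta)
evidence = sum(unnorm)                                # P(y)
posterior = [u / evidence for u in unnorm]            # P(theta | y)
```

The single observed survival shifts the posterior toward the higher survival probability: posterior ≈ [1/3, 2/3].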
== Exchangeability ==
The usual starting point of a statistical analysis is the assumption that the n values y_1, y_2, …, y_n are exchangeable. If no information – other than data y – is available to distinguish any of the θ_j's from any others, and no ordering or grouping of the parameters can be made, one must assume symmetry of prior distribution parameters. This symmetry is represented probabilistically by exchangeability. Generally, it is useful and appropriate to model data from an exchangeable distribution as independently and identically distributed, given some unknown parameter vector θ with distribution P(θ).
=== Finite exchangeability ===
For a fixed number n, the set y_1, y_2, …, y_n is exchangeable if the joint probability P(y_1, y_2, …, y_n) is invariant under permutations of the indices. That is, for every permutation π = (π_1, π_2, …, π_n) of (1, 2, …, n),
{\displaystyle P(y_{1},y_{2},\ldots ,y_{n})=P(y_{\pi _{1}},y_{\pi _{2}},\ldots ,y_{\pi _{n}}).}
The following is an example that is exchangeable, but not independent and identically distributed (iid):
Consider an urn containing one red ball and one blue ball, each with probability 1/2 of being drawn first. Balls are drawn without replacement, i.e. after one ball is drawn, only the other ball remains for the second draw.
{\displaystyle {\text{Let }}Y_{i}={\begin{cases}1,&{\text{if the }}i{\text{th ball is red}},\\0,&{\text{otherwise}}.\end{cases}}}
The probability of selecting a red ball in the first draw and a blue ball in the second draw is equal to the probability of selecting a blue ball on the first draw and a red on the second, both of which are 1/2:
{\displaystyle P(y_{1}=1,y_{2}=0)=P(y_{1}=0,y_{2}=1)={\frac {1}{2}}.}
This makes y_1 and y_2 exchangeable.
But the probability of selecting a red ball on the second draw given that the red ball has already been selected in the first is 0. This is not equal to the probability that the red ball is selected in the second draw, which is 1/2:
{\displaystyle P(y_{2}=1\mid y_{1}=1)=0\neq P(y_{2}=1)={\frac {1}{2}}.}
Thus, y_1 and y_2 are not independent.
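The urn example is small enough to verify by exhaustive enumeration; the following sketch (an illustration, not part of the article) checks both the exchangeability and the dependence claims:

```python
from itertools import permutations

# One red ball (1) and one blue ball (0), drawn without replacement:
# the sample space is the two orderings, each with probability 1/2.
p = {o: 0.5 for o in permutations([1, 0])}   # {(1, 0): 0.5, (0, 1): 0.5}

# Exchangeability: P(y1=1, y2=0) == P(y1=0, y2=1)
p_10, p_01 = p[(1, 0)], p[(0, 1)]

# Dependence: P(y2=1 | y1=1) = 0 while P(y2=1) = 1/2
p_y2_given_y1 = sum(pr for o, pr in p.items() if o == (1, 1)) / \
                sum(pr for o, pr in p.items() if o[0] == 1)
p_y2 = sum(pr for o, pr in p.items() if o[1] == 1)
```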
If x_1, …, x_n are independent and identically distributed, then they are exchangeable, but the converse is not necessarily true.
=== Infinite exchangeability ===
Infinite exchangeability is the property that every finite subset of an infinite sequence y_1, y_2, … is exchangeable. That is, for any n, the sequence y_1, y_2, …, y_n is exchangeable.
== Hierarchical models ==
=== Components ===
Bayesian hierarchical modeling makes use of two important concepts in deriving the posterior distribution, namely:
Hyperparameters: parameters of the prior distribution
Hyperpriors: distributions of hyperparameters
Suppose a random variable Y follows a normal distribution with parameter θ as the mean and 1 as the variance, that is Y ∣ θ ~ N(θ, 1). The tilde relation ~ can be read as "has the distribution of" or "is distributed as". Suppose also that the parameter θ has a distribution given by a normal distribution with mean μ and variance 1, i.e. θ ∣ μ ~ N(μ, 1). Furthermore, μ follows another distribution given, for example, by the standard normal distribution N(0, 1). The parameter μ is called the hyperparameter, while its distribution given by N(0, 1) is an example of a hyperprior distribution. The notation of the distribution of Y changes as another parameter is added, i.e. Y ∣ θ, μ ~ N(θ, 1). If there is another stage, say, μ following another normal distribution with mean β and variance ε, so that μ ~ N(β, ε), then β and ε can also be called hyperparameters with hyperprior distributions.
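One easily verified consequence of this three-stage setup: with Y ∣ θ ~ N(θ, 1), θ ∣ μ ~ N(μ, 1), and μ ~ N(0, 1), the marginal distribution of Y is N(0, 3), since the variances of the independent stages add. A simulation sketch (the sample size is arbitrary):

```python
import numpy as np

# Sample the three stages: mu ~ N(0,1), theta | mu ~ N(mu,1), Y | theta ~ N(theta,1).
# Marginally the stage variances add, so Y ~ N(0, 3).
rng = np.random.default_rng(4)
N = 1_000_000

mu = rng.normal(0.0, 1.0, N)     # hyperprior draw
theta = rng.normal(mu, 1.0)      # prior draw given mu
Y = rng.normal(theta, 1.0)       # data draw given theta
```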
=== Framework ===
Let y_j be an observation and θ_j a parameter governing the data generating process for y_j. Assume further that the parameters θ_1, θ_2, …, θ_j are generated exchangeably from a common population, with distribution governed by a hyperparameter φ.
The Bayesian hierarchical model contains the following stages:
{\displaystyle {\text{Stage I: }}y_{j}\mid \theta _{j},\phi \sim P(y_{j}\mid \theta _{j},\phi )}
{\displaystyle {\text{Stage II: }}\theta _{j}\mid \phi \sim P(\theta _{j}\mid \phi )}
{\displaystyle {\text{Stage III: }}\phi \sim P(\phi )}
The likelihood, as seen in stage I, is P(y_j ∣ θ_j, φ), with P(θ_j, φ) as its prior distribution. Note that the likelihood depends on φ only through θ_j.
The prior distribution from stage I can be broken down into:
P
(
θ
j
,
ϕ
)
=
P
(
θ
j
∣
ϕ
)
P
(
ϕ
)
{\displaystyle P(\theta _{j},\phi )=P(\theta _{j}\mid \phi )P(\phi )}
[from the definition of conditional probability]
With
ϕ
{\displaystyle \phi }
as its hyperparameter with hyperprior distribution,
P
(
ϕ
)
{\displaystyle P(\phi )}
.
Thus, the posterior distribution is proportional to:
{\displaystyle P(\phi ,\theta _{j}\mid y)\propto P(y_{j}\mid \theta _{j},\phi )P(\theta _{j},\phi )} [using Bayes' theorem]
{\displaystyle P(\phi ,\theta _{j}\mid y)\propto P(y_{j}\mid \theta _{j})P(\theta _{j}\mid \phi )P(\phi )}
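The proportionality above can be turned into a concrete computation on a grid. The sketch below uses illustrative, assumed values (a single observation y = 1.5 and the unit-variance normal hierarchy from the introduction); conjugacy gives the exact marginal posterior θ | y ~ N(2y/3, 2/3) to check against.

```python
import numpy as np

y = 1.5  # a single observation (illustrative value)
grid = np.arange(-6.0, 8.0, 0.02)
theta, phi = np.meshgrid(grid, grid, indexing="ij")

def norm_pdf(x, mean, var):
    return np.exp(-(x - mean) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

# Unnormalized joint posterior: P(y|theta) * P(theta|phi) * P(phi)
joint = norm_pdf(y, theta, 1.0) * norm_pdf(theta, phi, 1.0) * norm_pdf(phi, 0.0, 1.0)
joint /= joint.sum()  # normalize on the grid

# Marginal posterior of theta and its mean
post_theta = joint.sum(axis=1)
post_mean = (grid * post_theta).sum()
# Conjugate normal algebra gives theta | y ~ N(2y/3, 2/3), so post_mean ~ 1.0
```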
=== Example calculation ===
As an example, a teacher wants to estimate how well a student did on the SAT. The teacher uses the current grade point average (GPA) of the student for an estimate. The GPA, denoted by {\displaystyle Y}, has a likelihood given by some probability function with parameter {\displaystyle \theta }, i.e. {\displaystyle Y\mid \theta \sim P(Y\mid \theta )}. This parameter {\displaystyle \theta } is the SAT score of the student. The SAT score is viewed as a sample coming from a common population distribution indexed by another parameter {\displaystyle \phi }, which is the high school grade of the student (freshman, sophomore, junior or senior). That is, {\displaystyle \theta \mid \phi \sim P(\theta \mid \phi )}. Moreover, the hyperparameter {\displaystyle \phi } follows its own distribution given by {\displaystyle P(\phi )}, a hyperprior.
These relationships can be used to calculate the likelihood of a specific SAT score relative to a particular GPA:
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta ,\phi )P(\theta ,\phi )}
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi )}
All information in the problem will be used to solve for the posterior distribution. Instead of solving only using the prior distribution and the likelihood function, using hyperpriors allows a more nuanced distinction of relationships between given variables.
=== 2-stage hierarchical model ===
In general, the joint posterior distribution of interest in 2-stage hierarchical models is:
{\displaystyle P(\theta ,\phi \mid Y)={P(Y\mid \theta ,\phi )P(\theta ,\phi ) \over P(Y)}={P(Y\mid \theta )P(\theta \mid \phi )P(\phi ) \over P(Y)}}
{\displaystyle P(\theta ,\phi \mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi )}
=== 3-stage hierarchical model ===
For 3-stage hierarchical models, the posterior distribution is given by:
{\displaystyle P(\theta ,\phi ,X\mid Y)={P(Y\mid \theta )P(\theta \mid \phi )P(\phi \mid X)P(X) \over P(Y)}}
{\displaystyle P(\theta ,\phi ,X\mid Y)\propto P(Y\mid \theta )P(\theta \mid \phi )P(\phi \mid X)P(X)}
== Bayesian nonlinear mixed-effects model ==
A three-stage version of Bayesian hierarchical modeling can be used to calculate probability at 1) the individual level and 2) the population level, with 3) the prior, an assumed probability distribution specified before any evidence is acquired:
Stage 1: Individual-Level Model
{\displaystyle {y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\quad \epsilon _{ij}\sim N(0,\sigma ^{2}),\quad i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.}
Stage 2: Population Model
{\displaystyle \theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\quad \eta _{li}\sim N(0,\omega _{l}^{2}),\quad i=1,\ldots ,N,\,l=1,\ldots ,K.}
Stage 3: Prior
{\displaystyle \sigma ^{2}\sim \pi (\sigma ^{2}),\quad \alpha _{l}\sim \pi (\alpha _{l}),\quad (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\quad \omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\quad l=1,\ldots ,K.}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. The function {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically, {\displaystyle f} is a nonlinear function that describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If the prior is not considered, the model reduces to a frequentist nonlinear mixed-effects model.
A central task in the application of the Bayesian nonlinear mixed-effects models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle =\underbrace {\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})} _{Stage1:Individual-LevelModel}\times \underbrace {\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage2:PopulationModel}\times \underbrace {p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage3:Prior}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. A standard research cycle involves 1) literature review, 2) defining a problem and 3) specifying the research question and hypothesis. The Bayesian-specific workflow stratifies this approach to include three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function {\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== References ==
Nonlinear mixed-effects models constitute a class of statistical models generalizing linear mixed-effects models. Like linear mixed-effects models, they are particularly useful in settings where there are multiple measurements within the same statistical units or when there are dependencies between measurements on related statistical units. Nonlinear mixed-effects models are applied in many fields including medicine, public health, pharmacology, and ecology.
== Definition ==
While any statistical model containing both fixed effects and random effects is an example of a nonlinear mixed-effects model, the most commonly used models are members of the class of nonlinear mixed-effects models for repeated measures
{\displaystyle {y}_{ij}=f(\phi _{ij},{v}_{ij})+\epsilon _{ij},\quad i=1,\ldots ,M,\,j=1,\ldots ,n_{i}}
where {\displaystyle M} is the number of groups/subjects, {\displaystyle n_{i}} is the number of observations for the {\displaystyle i}th group/subject, {\displaystyle f} is a real-valued differentiable function of a group-specific parameter vector {\displaystyle \phi _{ij}} and a covariate vector {\displaystyle v_{ij}}, and {\displaystyle \phi _{ij}} is modeled as a linear mixed-effects model
{\displaystyle \phi _{ij}={\boldsymbol {A}}_{ij}\beta +{\boldsymbol {B}}_{ij}{\boldsymbol {b}}_{i},}
where {\displaystyle \beta } is a vector of fixed effects, {\displaystyle {\boldsymbol {b}}_{i}} is a vector of random effects associated with group {\displaystyle i}, and {\displaystyle \epsilon _{ij}} is a random variable describing additive noise.
== Estimation ==
When the model is only nonlinear in fixed effects and the random effects are Gaussian, maximum-likelihood estimation can be done using nonlinear least squares methods, although asymptotic properties of estimators and test statistics may differ from the conventional general linear model. In the more general setting, there exist several methods for doing maximum-likelihood estimation or maximum a posteriori estimation in certain classes of nonlinear mixed-effects models – typically under the assumption of normally distributed random variables. A popular approach is the Lindstrom-Bates algorithm which relies on iteratively optimizing a nonlinear problem, locally linearizing the model around this optimum and then employing conventional methods from linear mixed-effects models to do maximum likelihood estimation. Stochastic approximation of the expectation-maximization algorithm gives an alternative approach for doing maximum-likelihood estimation.
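A minimal precursor to these algorithms is the classical two-stage approach: estimate each subject's parameters by nonlinear least squares, then summarize them at the population level. The sketch below is an illustration under assumed values (an exponential-decay mean curve and a one-parameter Gauss–Newton iteration), not an implementation of the Lindstrom–Bates algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 4.0, 20)              # shared observation times
theta_pop, omega, sigma = 1.0, 0.1, 0.02   # assumed population values

def f(t, th):
    # individual-level mean curve (illustrative choice)
    return np.exp(-th * t)

def fit_subject(y, th=0.5, iters=25):
    """Gauss-Newton iteration for the one-parameter nonlinear least-squares fit."""
    for _ in range(iters):
        resid = y - f(t, th)
        jac = -t * f(t, th)                # derivative of the mean curve in theta
        th += (jac @ resid) / (jac @ jac)  # Gauss-Newton update
    return th

# Stage 1: one nonlinear least-squares fit per subject
thetas = theta_pop + rng.normal(0.0, omega, size=30)  # theta_i = mean + eta_i
estimates = [fit_subject(f(t, th) + rng.normal(0.0, sigma, size=t.size))
             for th in thetas]
# Stage 2: summarize the individual fits at the population level
pop_estimate = float(np.mean(estimates))
```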
== Applications ==
=== Example: Disease progression modeling ===
Nonlinear mixed-effects models have been used for modeling the progression of disease. In progressive disease, the temporal patterns of progression on outcome variables may follow a nonlinear temporal shape that is similar between patients. However, the stage of disease of an individual may not be known, or may be only partially known from what can be measured. Therefore, a latent time variable that describes individual disease stage (i.e. where the patient is along the nonlinear mean curve) can be included in the model.
=== Example: Modeling cognitive decline in Alzheimer's disease ===
Alzheimer's disease is characterized by a progressive cognitive deterioration. However, patients may differ widely in cognitive ability and reserve, so cognitive testing at a single time point can often only be used to coarsely group individuals in different stages of disease. Now suppose we have a set of longitudinal cognitive data {\displaystyle (y_{i1},\ldots ,y_{in_{i}})} from {\displaystyle i=1,\ldots ,M} individuals that are each categorized as having either normal cognition (CN), mild cognitive impairment (MCI) or dementia (DEM) at the baseline visit (time {\displaystyle t_{i1}=0} corresponding to measurement {\displaystyle y_{i1}}). These longitudinal trajectories can be modeled using a nonlinear mixed-effects model that allows differences in disease state based on baseline categorization:
{\displaystyle {y}_{ij}=f_{\tilde {\beta }}(t_{ij}+A_{i}^{MCI}\beta ^{MCI}+A_{i}^{DEM}\beta ^{DEM}+b_{i})+\epsilon _{ij},\quad i=1,\ldots ,M,\,j=1,\ldots ,n_{i}}
where {\displaystyle f_{\tilde {\beta }}} is a function that models the mean time-profile of cognitive decline whose shape is determined by the parameters {\displaystyle {\tilde {\beta }}}, {\displaystyle t_{ij}} represents observation time (e.g. time since baseline in the study), {\displaystyle A_{i}^{MCI}} and {\displaystyle A_{i}^{DEM}} are dummy variables that are 1 if individual {\displaystyle i} has MCI or dementia at baseline and 0 otherwise, {\displaystyle \beta ^{MCI}} and {\displaystyle \beta ^{DEM}} are parameters that model the difference in disease progression of the MCI and dementia groups relative to the cognitively normal, {\displaystyle b_{i}} is the difference in disease stage of individual {\displaystyle i} relative to his/her baseline category, and {\displaystyle \epsilon _{ij}} is a random variable describing additive noise.
An example of such a model with an exponential mean function fitted to longitudinal measurements of the Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) is shown in the box. As shown, the inclusion of fixed effects of baseline categorization (MCI or dementia relative to normal cognition) and the random effect of individual continuous disease stage {\displaystyle b_{i}} aligns the trajectories of cognitive deterioration to reveal a common pattern of cognitive decline.
=== Example: Growth analysis ===
Growth phenomena often follow nonlinear patterns (e.g. logistic growth, exponential growth, and hyperbolic growth). Factors such as nutrient deficiency may directly affect the measured outcome (e.g. organisms lacking nutrients end up smaller), but may also affect timing (e.g. organisms lacking nutrients grow at a slower pace). If a model fails to account for the differences in timing, the estimated population-level curves may smooth out finer details due to lack of synchronization between organisms. Nonlinear mixed-effects models enable simultaneous modeling of individual differences in growth outcomes and timing.
=== Example: Modeling human height ===
Models for estimating the mean curves of human height and weight as a function of age, and the natural variation around the mean, are used to create growth charts. The growth of children can, however, become desynchronized due to both genetic and environmental factors. For example, the age at onset of puberty and its associated height spurt can vary by several years between adolescents. Therefore, cross-sectional studies may underestimate the magnitude of the pubertal height spurt because age is not synchronized with biological development. The differences in biological development can be modeled using random effects {\displaystyle {\boldsymbol {w}}_{i}} that describe a mapping of observed age to a latent biological age using a so-called warping function {\displaystyle v(\cdot ,{\boldsymbol {w}}_{i})}. A simple nonlinear mixed-effects model with this structure is given by
{\displaystyle {y}_{ij}=f_{\beta }(v(t_{ij},{\boldsymbol {w}}_{i}))+\epsilon _{ij},\quad i=1,\ldots ,M,\,j=1,\ldots ,n_{i}}
where {\displaystyle f_{\beta }} is a function that represents the height development of a typical child as a function of age, with shape determined by the parameters {\displaystyle \beta }; {\displaystyle t_{ij}} is the age of child {\displaystyle i} corresponding to the height measurement {\displaystyle y_{ij}}; {\displaystyle v(\cdot ,{\boldsymbol {w}}_{i})} is a warping function that maps observed age to biological development in order to synchronize trajectories, with shape determined by the random effects {\displaystyle {\boldsymbol {w}}_{i}}; and {\displaystyle \epsilon _{ij}} is a random variable describing additive variation (e.g. consistent differences in height between children and measurement noise).
There exist several methods and software packages for fitting such models. The so-called SITAR model can fit such models using warping functions that are affine transformations of time (i.e. additive shifts in biological age and differences in rate of maturation), while the so-called pavpop model can fit models with smoothly-varying warping functions. An example of the latter is shown in the box.
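The desynchronization effect described above can be seen numerically. This is an illustrative sketch with assumed logistic growth curves: averaging curves that are randomly shifted in "biological age" flattens the growth spurt, so the cross-sectional mean underestimates peak growth velocity.

```python
import numpy as np

t = np.linspace(-10.0, 10.0, 2001)

def logistic(t):
    # shared shape of the growth curve (illustrative stand-in for f_beta)
    return 1.0 / (1.0 + np.exp(-t))

# Individual curves are shifted in "biological age" (illustrative warps w_i)
shifts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
curves = np.array([logistic(t - s) for s in shifts])

peak_individual = np.gradient(logistic(t), t).max()        # 0.25 for this curve
peak_cross_sectional = np.gradient(curves.mean(axis=0), t).max()
# Averaging unsynchronized curves smooths out the spurt, so
# peak_cross_sectional is noticeably smaller than peak_individual.
```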
=== Example: Population Pharmacokinetic/pharmacodynamic modeling ===
PK/PD models for describing exposure-response relationships such as the Emax model can be formulated as nonlinear mixed-effects models. The mixed-model approach allows modeling of both population level and individual differences in effects that have a nonlinear effect on the observed outcomes, for example the rate at which a compound is being metabolized or distributed in the body.
=== Example: COVID-19 epidemiological modeling ===
The platform of the nonlinear mixed-effects models can be used to describe infection trajectories of subjects and understand some common features shared across the subjects. In epidemiological problems, subjects can be countries, states, or counties, etc. This can be particularly useful in estimating a future trend of the epidemic in an early stage of a pandemic, when little information is known about the disease.
=== Example: Prediction of oil production curve of shale oil wells at a new location with latent kriging ===
The eventual success of petroleum development projects relies to a large degree on well construction costs. For unconventional oil and gas reservoirs, because of very low permeability and a flow mechanism very different from that of conventional reservoirs, estimates of the well construction cost often contain high levels of uncertainty, and oil companies need to make heavy investments in the drilling and completion phase of the wells. The overall recent commercial success rate of horizontal wells in the United States is known to be 65%, which implies that only 2 out of 3 drilled wells will be commercially successful. For this reason, one of the crucial tasks of petroleum engineers is to quantify the uncertainty associated with oil or gas production from shale reservoirs and, further, to predict the approximate production behavior of a new well at a new location, given specific completion data, before actual drilling takes place, in order to avoid a large portion of well construction costs.
The platform of the nonlinear mixed effect models can be extended to consider the spatial association by incorporating the geostatistical processes such as Gaussian process on the second stage of the model as follows:
{\displaystyle {y}_{it}=\mu (t;\theta _{1i},\theta _{2i},\theta _{3i})+\epsilon _{it},\quad i=1,\ldots ,N,\,t=1,\ldots ,T_{i},}
{\displaystyle \theta _{li}=\theta _{l}(s_{i})=\alpha _{l}+\sum _{j=1}^{p}\beta _{lj}x_{j}+\epsilon _{l}(s_{i})+\eta _{l}(s_{i}),\quad \epsilon _{l}(\cdot )\sim GWN(\sigma _{l}^{2}),\quad l=1,2,3,}
{\displaystyle \eta _{l}(\cdot )\sim GP(0,K_{\gamma _{l}}(\cdot ,\cdot )),\quad K_{\gamma _{l}}(s_{i},s_{j})=\gamma _{l}^{2}\exp(-e^{\rho _{l}}\|s_{i}-s_{j}\|^{2}),\quad l=1,2,3,}
{\displaystyle \beta _{lj}|\lambda _{lj},\tau _{l},\sigma _{l}\sim N(0,\sigma _{l}^{2}\tau _{l}^{2}\lambda _{lj}^{2}),\quad \sigma ,\lambda _{lj},\tau _{l},\sigma _{l}\sim C^{+}(0,1),\quad l=1,2,3,\,j=1,\cdots ,p,}
{\displaystyle \alpha _{l}\sim \pi (\alpha )\propto 1,\quad \sigma _{l}^{2}\sim \pi (\sigma ^{2})\propto 1/\sigma ^{2},\quad l=1,2,3,}
where {\displaystyle \mu (t;\theta _{1},\theta _{2},\theta _{3})} is a function that models the mean time-profile of the log-scaled oil production rate, whose shape is determined by the parameters {\displaystyle (\theta _{1},\theta _{2},\theta _{3})}. The function is obtained by taking the logarithm of the rate decline curve used in decline curve analysis. Here, {\displaystyle x_{i}=(x_{i1},\cdots ,x_{ip})^{\top }} represents covariates obtained from the completion process of the hydraulic fracturing and horizontal directional drilling for the {\displaystyle i}-th well; {\displaystyle s_{i}=(s_{i1},s_{i2})^{\top }} represents the spatial location (longitude, latitude) of the {\displaystyle i}-th well; {\displaystyle \epsilon _{l}(\cdot )} represents Gaussian white noise with error variance {\displaystyle \sigma _{l}^{2}} (also called the nugget effect); {\displaystyle \eta _{l}(\cdot )} represents a Gaussian process with Gaussian covariance function {\displaystyle K_{\gamma _{l}}(\cdot ,\cdot )}; and {\displaystyle \beta } follows the horseshoe shrinkage prior.
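As a quick check of the covariance specification above, the sketch below (with assumed, illustrative values for the hyperparameters gamma_l^2 and rho_l) builds the Gaussian covariance matrix over random well locations and verifies that it is symmetric positive semi-definite, as a valid Gaussian process covariance must be:

```python
import numpy as np

rng = np.random.default_rng(2)
s = rng.uniform(0.0, 1.0, size=(10, 2))  # (longitude, latitude) of 10 wells
gamma2, rho = 1.5, 0.0                   # assumed values of gamma_l^2 and rho_l

# K(s_i, s_j) = gamma_l^2 * exp(-e^{rho_l} * ||s_i - s_j||^2)
sq_dists = ((s[:, None, :] - s[None, :, :]) ** 2).sum(axis=-1)
K = gamma2 * np.exp(-np.exp(rho) * sq_dists)

# A valid GP covariance matrix is symmetric with nonnegative eigenvalues
eigvals = np.linalg.eigvalsh(K)
```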
The Gaussian process regressions used at the latent level (the second stage) eventually produce kriging predictors for the curve parameters {\displaystyle (\theta _{1i},\theta _{2i},\theta _{3i}),(i=1,\cdots ,N),} that dictate the shape of the mean curve {\displaystyle \mu (t;\theta _{1},\theta _{2},\theta _{3})} at the data level (the first stage). As the kriging techniques have been employed at the latent level, this technique is called latent kriging. The right panels show the prediction results of the latent kriging method applied to two test wells in the Eagle Ford Shale Reservoir of South Texas.
== Bayesian nonlinear mixed-effects model ==
The framework of Bayesian hierarchical modeling is frequently used in diverse applications. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented as the following three stages:
Stage 1: Individual-Level Model
{\displaystyle {y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\quad \epsilon _{ij}\sim N(0,\sigma ^{2}),\quad i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.}
Stage 2: Population Model
{\displaystyle \theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\quad \eta _{li}\sim N(0,\omega _{l}^{2}),\quad i=1,\ldots ,N,\,l=1,\ldots ,K.}
Stage 3: Prior
{\displaystyle \sigma ^{2}\sim \pi (\sigma ^{2}),\quad \alpha _{l}\sim \pi (\alpha _{l}),\quad (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\quad \omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\quad l=1,\ldots ,K.}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. The function {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically, {\displaystyle f} is a nonlinear function that describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If Stage 3 (the prior) is not considered, the model reduces to a frequentist nonlinear mixed-effects model.
A central task in the application of Bayesian nonlinear mixed-effects models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle =\underbrace {\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})} _{Stage1:Individual-LevelModel}\times \underbrace {\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage2:PopulationModel}\times \underbrace {p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})} _{Stage3:Prior}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function {\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== See also ==
Mixed model
Fixed effects model
Generalized linear mixed model
Linear regression
Mixed-design analysis of variance
Multilevel model
Random effects model
Repeated measures design
== References ==
In statistics, a generalized estimating equation (GEE) is used to estimate the parameters of a generalized linear model with a possible unmeasured correlation between observations from different timepoints.
Regression beta coefficient estimates from the Liang-Zeger GEE are consistent, unbiased, and asymptotically normal even when the working correlation is misspecified, under mild regularity conditions. GEE is more efficient than generalized linear models (GLMs) in the presence of high autocorrelation. When the true working correlation is known, consistency does not require the assumption that missing data are missing completely at random. Huber-White standard errors improve the efficiency of Liang-Zeger GEE in the absence of serial autocorrelation but may remove the marginal interpretation. GEE estimates the average response over the population ("population-averaged" effects) with Liang-Zeger standard errors, and in individuals using Huber-White standard errors, also known as "robust standard error" or "sandwich variance" estimates. Based on a limited literature review, Huber-White GEE has been in use since 1997, while Liang-Zeger GEE dates to the 1980s. Several independent formulations of these standard error estimators contribute to GEE theory; placing the independent standard error estimators under the umbrella term "GEE" may exemplify abuse of terminology.
GEEs belong to a class of regression techniques referred to as semiparametric because they rely on specification of only the first two moments. They are a popular alternative to the likelihood-based generalized linear mixed model, which is more sensitive to misspecification of the variance structure for consistency. The trade-off for retaining consistent regression coefficient estimates under variance-structure misspecification is a loss of efficiency, yielding inflated Wald test p-values because the standard errors are larger than optimal. GEEs are commonly used in large epidemiological studies, especially multi-site cohort studies, because they can handle many types of unmeasured dependence between outcomes.
== Formulation ==
Given a mean model {\displaystyle \mu _{ij}} for subject {\displaystyle i} and time {\displaystyle j} that depends upon regression parameters {\displaystyle \beta _{k}}, and a variance structure {\displaystyle V_{i}}, the estimating equation is formed via:
{\displaystyle U(\beta )=\sum _{i=1}^{N}{\frac {\partial \mu _{i}}{\partial \beta }}V_{i}^{-1}\{Y_{i}-\mu _{i}(\beta )\}\,\!}
The parameters {\displaystyle \beta _{k}} are estimated by solving {\displaystyle U(\beta )=0} and are typically obtained via the Newton–Raphson algorithm. The variance structure is chosen to improve the efficiency of the parameter estimates. The Hessian of the solution to the GEEs in the parameter space can be used to calculate robust standard error estimates. The term "variance structure" refers to the algebraic form of the covariance matrix between outcomes, Y, in the sample. Examples of variance structure specifications include independence, exchangeable, autoregressive, stationary m-dependent, and unstructured. The most popular form of inference on GEE regression parameters is the Wald test using naive or robust standard errors, though the score test is also valid and preferable when it is difficult to obtain estimates of information under the alternative hypothesis. The likelihood ratio test is not valid in this setting because the estimating equations are not necessarily likelihood equations. Model selection can be performed with the GEE equivalent of the Akaike information criterion (AIC), the quasi-likelihood under the independence model criterion (QIC).
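With an identity link and an independence working correlation, the estimating equation reduces to ordinary least squares, which gives a convenient correctness check for a direct implementation of the Newton-type update beta ← beta + (Σ Dᵢᵀ Vᵢ⁻¹ Dᵢ)⁻¹ U(beta), where Dᵢ = ∂μᵢ/∂β. This is an illustrative sketch under those assumptions, not code from any statistical package:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, n_time, p = 50, 4, 2

# Linear mean model mu_ij = X_ij @ beta (identity link)
X = rng.normal(size=(n_subj, n_time, p))
beta_true = np.array([1.0, -0.5])
Y = X @ beta_true + rng.normal(scale=0.5, size=(n_subj, n_time))

beta = np.zeros(p)
V_inv = np.eye(n_time)              # independence working correlation
for _ in range(10):
    U = np.zeros(p)                 # estimating function U(beta)
    H = np.zeros((p, p))            # sum of D_i^T V_i^{-1} D_i
    for i in range(n_subj):
        D = X[i]                    # d mu_i / d beta for the identity link
        r = Y[i] - X[i] @ beta
        U += D.T @ V_inv @ r
        H += D.T @ V_inv @ D
    beta = beta + np.linalg.solve(H, U)

# With identity link and independence V_i, the GEE solution is the OLS fit
ols = np.linalg.lstsq(X.reshape(-1, p), Y.reshape(-1), rcond=None)[0]
```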
=== Relationship with Generalized Method of Moments ===
The generalized estimating equation is a special case of the generalized method of moments (GMM). This relationship is immediately obvious from the requirement that the score function satisfy the equation:
{\displaystyle \mathbb {E} [U(\beta )]={1 \over {N}}\sum _{i=1}^{N}{\frac {\partial \mu _{i}}{\partial \beta }}V_{i}^{-1}\{Y_{i}-\mu _{i}(\beta )\}\,\!=0}
== Computation ==
Software for solving generalized estimating equations is available in MATLAB, SAS (proc genmod), SPSS (the gee procedure), Stata (the xtgee command), R (packages glmtoolbox, gee, geepack and multgee), Julia (package GEE.jl) and Python (package statsmodels).
Comparisons among software packages for the analysis of binary correlated data and ordinal correlated data via GEE are available.
== See also ==
Generalized method of moments
Repeated measures design
== References ==
== Further reading ==
Hardin, James; Hilbe, Joseph (2003). Generalized Estimating Equations. London: Chapman and Hall/CRC. ISBN 978-1-58488-307-4.
Ziegler, A. (2011). Generalized Estimating Equations. Springer. ISBN 978-1-4614-0498-9.
== External links ==
Advanced Topics I - Generalized Estimating Equations (GEE)
In statistics, a probit model is a type of regression where the dependent variable can take only two values, for example married or not married. The word is a portmanteau, coming from probability + unit. The purpose of the model is to estimate the probability that an observation with particular characteristics will fall into a specific one of the categories; moreover, classifying observations based on their predicted probabilities is a type of binary classification model.
A probit model is a popular specification for a binary response model. As such it treats the same set of problems as does logistic regression using similar techniques. When viewed in the generalized linear model framework, the probit model employs a probit link function. It is most often estimated using the maximum likelihood procedure, such an estimation being called a probit regression.
== Conceptual framework ==
Suppose a response variable Y is binary, that is it can have only two possible outcomes which we will denote as 1 and 0. For example, Y may represent presence/absence of a certain condition, success/failure of some device, answer yes/no on a survey, etc. We also have a vector of regressors X, which are assumed to influence the outcome Y. Specifically, we assume that the model takes the form
{\displaystyle P(Y=1\mid X)=\Phi (X^{\operatorname {T} }\beta ),}
where P is the probability and {\displaystyle \Phi } is the cumulative distribution function (CDF) of the standard normal distribution. The parameters β are typically estimated by maximum likelihood.
It is possible to motivate the probit model as a latent variable model. Suppose there exists an auxiliary random variable
{\displaystyle Y^{\ast }=X^{T}\beta +\varepsilon ,}
where ε ~ N(0, 1). Then Y can be viewed as an indicator for whether this latent variable is positive:
{\displaystyle Y=\left.{\begin{cases}1&Y^{*}>0\\0&{\text{otherwise}}\end{cases}}\right\}=\left.{\begin{cases}1&X^{\operatorname {T} }\beta +\varepsilon >0\\0&{\text{otherwise}}\end{cases}}\right\}}
The use of the standard normal distribution causes no loss of generality compared with the use of a normal distribution with an arbitrary mean and standard deviation, because adding a fixed amount to the mean can be compensated by subtracting the same amount from the intercept, and multiplying the standard deviation by a fixed amount can be compensated by multiplying the weights by the same amount.
To see that the two models are equivalent, note that
{\displaystyle {\begin{aligned}P(Y=1\mid X)&=P(Y^{\ast }>0)\\&=P(X^{\operatorname {T} }\beta +\varepsilon >0)\\&=P(\varepsilon >-X^{\operatorname {T} }\beta )\\&=P(\varepsilon <X^{\operatorname {T} }\beta )&{\text{by symmetry of the normal distribution}}\\&=\Phi (X^{\operatorname {T} }\beta )\end{aligned}}}
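The equivalence can be checked numerically: simulating the latent-variable model at a fixed value of {\displaystyle X^{\operatorname {T} }\beta } reproduces {\displaystyle \Phi (X^{\operatorname {T} }\beta )} as the frequency of {\displaystyle Y=1}. A minimal sketch (the value 0.7 is arbitrary):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
xb = 0.7                              # a fixed value of X'beta
eps = rng.standard_normal(1_000_000)  # eps ~ N(0, 1)
y = (xb + eps > 0)                    # Y = 1 iff the latent Y* is positive
empirical = y.mean()                  # observed fraction of Y = 1
theoretical = norm.cdf(xb)            # Phi(X'beta)
```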
== Model estimation ==
=== Maximum likelihood estimation ===
Suppose the data set {\displaystyle \{y_{i},x_{i}\}_{i=1}^{n}} contains n independent statistical units corresponding to the model above.
For the single observation, conditional on the vector of inputs of that observation, we have:
{\displaystyle P(y_{i}=1|x_{i})=\Phi (x_{i}^{\operatorname {T} }\beta )}
{\displaystyle P(y_{i}=0|x_{i})=1-\Phi (x_{i}^{\operatorname {T} }\beta )}
where {\displaystyle x_{i}} is a {\displaystyle K\times 1} vector of inputs, and {\displaystyle \beta } is a {\displaystyle K\times 1} vector of coefficients.
The likelihood of a single observation {\displaystyle (y_{i},x_{i})} is then
{\displaystyle {\mathcal {L}}(\beta ;y_{i},x_{i})=\Phi (x_{i}^{\operatorname {T} }\beta )^{y_{i}}[1-\Phi (x_{i}^{\operatorname {T} }\beta )]^{(1-y_{i})}}
In fact, if {\displaystyle y_{i}=1}, then {\displaystyle {\mathcal {L}}(\beta ;y_{i},x_{i})=\Phi (x_{i}^{\operatorname {T} }\beta )}, and if {\displaystyle y_{i}=0}, then {\displaystyle {\mathcal {L}}(\beta ;y_{i},x_{i})=1-\Phi (x_{i}^{\operatorname {T} }\beta )}.
Since the observations are independent and identically distributed, then the likelihood of the entire sample, or the joint likelihood, will be equal to the product of the likelihoods of the single observations:
{\displaystyle {\mathcal {L}}(\beta ;Y,X)=\prod _{i=1}^{n}\left(\Phi (x_{i}^{\operatorname {T} }\beta )^{y_{i}}[1-\Phi (x_{i}^{\operatorname {T} }\beta )]^{(1-y_{i})}\right)}
The joint log-likelihood function is thus
{\displaystyle \ln {\mathcal {L}}(\beta ;Y,X)=\sum _{i=1}^{n}{\bigg (}y_{i}\ln \Phi (x_{i}^{\operatorname {T} }\beta )+(1-y_{i})\ln \!{\big (}1-\Phi (x_{i}^{\operatorname {T} }\beta ){\big )}{\bigg )}}
The estimator {\displaystyle {\hat {\beta }}} which maximizes this function will be consistent, asymptotically normal and efficient provided that {\displaystyle \operatorname {E} [XX^{\operatorname {T} }]} exists and is not singular. It can be shown that this log-likelihood function is globally concave in {\displaystyle \beta }, and therefore standard numerical algorithms for optimization will converge rapidly to the unique maximum.
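Because the log-likelihood is globally concave, a generic quasi-Newton optimizer suffices. A minimal sketch on simulated data (all names and values are illustrative); it uses the identity {\displaystyle 1-\Phi (x^{\operatorname {T} }\beta )=\Phi (-x^{\operatorname {T} }\beta )} so that both terms can be computed via a numerically stable log-CDF:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 2000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # intercept + one regressor
beta_true = np.array([0.2, 1.0])
y = (X @ beta_true + rng.standard_normal(n) > 0).astype(float)

def neg_loglik(beta):
    xb = X @ beta
    # -log L = -sum[ y log Phi(x'b) + (1 - y) log Phi(-x'b) ]
    return -(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb)).sum()

res = minimize(neg_loglik, np.zeros(2), method="BFGS")
beta_hat = res.x
```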
The asymptotic distribution of {\displaystyle {\hat {\beta }}} is given by
{\displaystyle {\sqrt {n}}({\hat {\beta }}-\beta )\ {\xrightarrow {d}}\ {\mathcal {N}}(0,\,\Omega ^{-1}),}
where
{\displaystyle \Omega =\operatorname {E} {\bigg [}{\frac {\varphi ^{2}(X^{\operatorname {T} }\beta )}{\Phi (X^{\operatorname {T} }\beta )(1-\Phi (X^{\operatorname {T} }\beta ))}}XX^{\operatorname {T} }{\bigg ]},\qquad {\hat {\Omega }}={\frac {1}{n}}\sum _{i=1}^{n}{\frac {\varphi ^{2}(x_{i}^{\operatorname {T} }{\hat {\beta }})}{\Phi (x_{i}^{\operatorname {T} }{\hat {\beta }})(1-\Phi (x_{i}^{\operatorname {T} }{\hat {\beta }}))}}x_{i}x_{i}^{\operatorname {T} },}
and {\displaystyle \varphi =\Phi '} is the probability density function (PDF) of the standard normal distribution.
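The plug-in estimator {\displaystyle {\hat {\Omega }}} is just a weighted cross-product of the regressors. A short sketch evaluating it on simulated data (for simplicity the weights are evaluated at the true {\displaystyle \beta }; in practice one plugs in {\displaystyle {\hat {\beta }}}):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n = 5000
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta = np.array([0.2, 1.0])

xb = X @ beta
w = norm.pdf(xb) ** 2 / (norm.cdf(xb) * norm.cdf(-xb))  # phi^2 / (Phi (1 - Phi))
Omega_hat = (X * w[:, None]).T @ X / n                  # (1/n) sum w_i x_i x_i'
avar = np.linalg.inv(Omega_hat)     # asymptotic covariance of sqrt(n)(beta_hat - beta)
se = np.sqrt(np.diag(avar) / n)     # implied standard errors of beta_hat
```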
Semi-parametric and non-parametric maximum likelihood methods for probit-type and other related models are also available.
=== Berkson's minimum chi-square method ===
This method can be applied only when there are many observations of the response variable {\displaystyle y_{i}} having the same value of the vector of regressors {\displaystyle x_{i}} (such a situation may be referred to as "many observations per cell"). More specifically, the model can be formulated as follows.
Suppose among n observations {\displaystyle \{y_{i},x_{i}\}_{i=1}^{n}} there are only T distinct values of the regressors, which can be denoted as {\displaystyle \{x_{(1)},\ldots ,x_{(T)}\}}. Let {\displaystyle n_{t}} be the number of observations with {\displaystyle x_{i}=x_{(t)},} and {\displaystyle r_{t}} the number of such observations with {\displaystyle y_{i}=1}. We assume that there are indeed "many" observations per each "cell": for each {\displaystyle t,\lim _{n\rightarrow \infty }n_{t}/n=c_{t}>0}.
Denote
{\displaystyle {\hat {p}}_{t}=r_{t}/n_{t}}
{\displaystyle {\hat {\sigma }}_{t}^{2}={\frac {1}{n_{t}}}{\frac {{\hat {p}}_{t}(1-{\hat {p}}_{t})}{\varphi ^{2}{\big (}\Phi ^{-1}({\hat {p}}_{t}){\big )}}}}
Then Berkson's minimum chi-square estimator is a generalized least squares estimator in a regression of {\displaystyle \Phi ^{-1}({\hat {p}}_{t})} on {\displaystyle x_{(t)}} with weights {\displaystyle {\hat {\sigma }}_{t}^{-2}}:
{\displaystyle {\hat {\beta }}={\Bigg (}\sum _{t=1}^{T}{\hat {\sigma }}_{t}^{-2}x_{(t)}x_{(t)}^{\operatorname {T} }{\Bigg )}^{-1}\sum _{t=1}^{T}{\hat {\sigma }}_{t}^{-2}x_{(t)}\Phi ^{-1}({\hat {p}}_{t})}
It can be shown that this estimator is consistent (as n→∞ and T fixed), asymptotically normal and efficient. Its advantage is the presence of a closed-form formula for the estimator. However, it is only meaningful to carry out this analysis when individual observations are not available, only their aggregated counts {\displaystyle r_{t}}, {\displaystyle n_{t}}, and {\displaystyle x_{(t)}} (for example in the analysis of voting behavior).
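A minimal sketch of the estimator on simulated grouped data (the cell values, cell sizes, and seed are illustrative). The closed form is simply weighted least squares of {\displaystyle \Phi ^{-1}({\hat {p}}_{t})} on {\displaystyle x_{(t)}}:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

# T = 5 cells: regressor values x_(t), cell sizes n_t, success counts r_t
x_t = np.column_stack([np.ones(5), np.array([-2.0, -1.0, 0.0, 1.0, 2.0])])
n_t = np.full(5, 500)
beta_true = np.array([0.3, 0.8])
r_t = rng.binomial(n_t, norm.cdf(x_t @ beta_true))

p_hat = r_t / n_t
z = norm.ppf(p_hat)                                     # Phi^{-1}(p_hat_t)
var_t = p_hat * (1 - p_hat) / (n_t * norm.pdf(z) ** 2)  # sigma_hat_t^2
w = 1.0 / var_t                                         # GLS weights

# Weighted least squares of z on x_(t): the minimum chi-square estimator
A = (x_t * w[:, None]).T @ x_t
b = (x_t * w[:, None]).T @ z
beta_hat = np.linalg.solve(A, b)
```

Note that the transformation breaks down for cells with {\displaystyle {\hat {p}}_{t}} equal to 0 or 1, which is why "many observations per cell" is required.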
=== Albert and Chib Gibbs sampling method ===
Gibbs sampling of a probit model is possible with the introduction of normally distributed latent variables z, which are observed as 1 if positive and 0 otherwise. This approach was introduced in Albert and Chib (1993), which demonstrated how Gibbs sampling could be applied to binary and polychotomous response models within a Bayesian framework. Under a multivariate normal prior distribution over the weights, the model can be described as
{\displaystyle {\begin{aligned}{\boldsymbol {\beta }}&\sim {\mathcal {N}}(\mathbf {b} _{0},\mathbf {B} _{0})\\[3pt]z_{i}\mid \mathbf {x} _{i},{\boldsymbol {\beta }}&\sim {\mathcal {N}}(\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }},1)\\[3pt]y_{i}&={\begin{cases}1&{\text{if }}z_{i}>0\\0&{\text{otherwise}}\end{cases}}\end{aligned}}}
From this, Albert and Chib (1993) derive the following full conditional distributions in the Gibbs sampling algorithm:
{\displaystyle {\begin{aligned}\mathbf {B} &=(\mathbf {B} _{0}^{-1}+\mathbf {X} ^{\operatorname {T} }\mathbf {X} )^{-1}\\[3pt]{\boldsymbol {\beta }}\mid \mathbf {z} &\sim {\mathcal {N}}(\mathbf {B} (\mathbf {B} _{0}^{-1}\mathbf {b} _{0}+\mathbf {X} ^{\operatorname {T} }\mathbf {z} ),\mathbf {B} )\\[3pt]z_{i}\mid y_{i}=0,\mathbf {x} _{i},{\boldsymbol {\beta }}&\sim {\mathcal {N}}(\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }},1)[z_{i}\leq 0]\\[3pt]z_{i}\mid y_{i}=1,\mathbf {x} _{i},{\boldsymbol {\beta }}&\sim {\mathcal {N}}(\mathbf {x} _{i}^{\operatorname {T} }{\boldsymbol {\beta }},1)[z_{i}>0]\end{aligned}}}
The result for {\displaystyle {\boldsymbol {\beta }}} is given in the article on Bayesian linear regression, although specified with different notation, while the conditional posterior distributions of the latent variables follow a truncated normal distribution within the given ranges. The notation {\displaystyle [z_{i}<0]} is the Iverson bracket, sometimes written {\displaystyle {\mathcal {I}}(z_{i}<0)} or similar. Thus, knowledge of the observed outcomes serves to restrict the support of the latent variables.
Sampling of the weights {\displaystyle {\boldsymbol {\beta }}} given the latent vector {\displaystyle \mathbf {z} } from the multivariate normal distribution is standard. For sampling the latent variables from their truncated normal posterior distributions, one can take advantage of the inverse-CDF method.
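The sampler can be sketched in Python as follows (the function name and prior values are illustrative, not from the source). Each {\displaystyle z_{i}} is drawn from its truncated normal full conditional by the inverse-CDF method, and {\displaystyle {\boldsymbol {\beta }}} from its multivariate normal full conditional:

```python
import numpy as np
from scipy.stats import norm

def probit_gibbs(y, X, b0, B0, n_iter=500, seed=0):
    """Albert-Chib Gibbs sampler for the probit model (a minimal sketch)."""
    rng = np.random.default_rng(seed)
    n, k = X.shape
    B = np.linalg.inv(np.linalg.inv(B0) + X.T @ X)  # posterior covariance of beta | z
    L = np.linalg.cholesky(B)
    prior_term = np.linalg.solve(B0, b0)            # B0^{-1} b0
    beta = np.zeros(k)
    draws = np.empty((n_iter, k))
    low = np.where(y == 1, 0.0, -np.inf)            # truncation bounds implied by y
    high = np.where(y == 1, np.inf, 0.0)
    for it in range(n_iter):
        # z_i | y_i, beta ~ N(x_i' beta, 1) truncated by the sign constraint:
        # inverse-CDF draw (can be numerically delicate for extreme means)
        m = X @ beta
        a, b = norm.cdf(low - m), norm.cdf(high - m)
        z = m + norm.ppf(rng.uniform(a, b))
        # beta | z ~ N(B (B0^{-1} b0 + X'z), B)
        beta = B @ (prior_term + X.T @ z) + L @ rng.standard_normal(k)
        draws[it] = beta
    return draws
```

On simulated data, the posterior mean of the draws (after discarding a burn-in) should be close to the data-generating coefficients.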
== Model evaluation ==
The suitability of an estimated binary model can be evaluated by counting the number of true observations equaling 1, and the number equaling zero, for which the model assigns a correct predicted classification by treating any estimated probability above 1/2 (or, below 1/2), as an assignment of a prediction of 1 (or, of 0). See Logistic regression § Model for details.
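A short sketch of this counting procedure, applying the 1/2 threshold to the fitted probabilities of a correctly specified model on simulated data (all values are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 1000
x = rng.standard_normal(n)
p = norm.cdf(0.0 + 1.0 * x)            # estimated P(y = 1 | x)
y = rng.binomial(1, p)                 # observed outcomes

y_pred = (p > 0.5).astype(int)         # predict 1 iff estimated probability > 1/2
correct_ones = int(np.sum((y == 1) & (y_pred == 1)))
correct_zeros = int(np.sum((y == 0) & (y_pred == 0)))
accuracy = (correct_ones + correct_zeros) / n
```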
== Performance under misspecification ==
Consider the latent variable model formulation of the probit model. When the variance of {\displaystyle \varepsilon } conditional on {\displaystyle x} is not constant but depends on {\displaystyle x}, a heteroscedasticity issue arises. For example, suppose {\displaystyle y^{*}=\beta _{0}+\beta _{1}x_{1}+\varepsilon } and {\displaystyle \varepsilon \mid x\sim N(0,x_{1}^{2})}, where {\displaystyle x_{1}} is a continuous positive explanatory variable. Under heteroskedasticity, the probit estimator for {\displaystyle \beta } is usually inconsistent, and most tests about the coefficients are invalid. More importantly, the estimator for {\displaystyle P(y=1\mid x)} becomes inconsistent, too. To deal with this problem, the original model needs to be transformed to be homoskedastic. For instance, in the same example, {\displaystyle 1[\beta _{0}+\beta _{1}x_{1}+\varepsilon >0]} can be rewritten as {\displaystyle 1[\beta _{0}/x_{1}+\beta _{1}+\varepsilon /x_{1}>0]}, where {\displaystyle \varepsilon /x_{1}\mid x\sim N(0,1)}. Therefore, {\displaystyle P(y=1\mid x)=\Phi (\beta _{1}+\beta _{0}/x_{1})}, and running probit on {\displaystyle (1,1/x_{1})} generates a consistent estimator for the conditional probability {\displaystyle P(y=1\mid x).}
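This transformation can be checked on simulated data. The sketch below (the coefficients and the regressor's range are illustrative) fits a probit by maximum likelihood on the transformed regressors {\displaystyle (1,1/x_{1})} and recovers {\displaystyle \beta _{1}} and {\displaystyle \beta _{0}}:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 4000
x1 = rng.uniform(0.5, 3.0, size=n)          # continuous positive regressor
b0, b1 = 0.5, -0.4
eps = x1 * rng.standard_normal(n)           # eps | x ~ N(0, x1^2): heteroskedastic
y = (b0 + b1 * x1 + eps > 0).astype(float)

def fit_probit(X, y):
    def nll(b):
        xb = X @ b
        return -(y * norm.logcdf(xb) + (1 - y) * norm.logcdf(-xb)).sum()
    return minimize(nll, np.zeros(X.shape[1]), method="BFGS").x

# Homoskedastic reformulation: P(y = 1 | x) = Phi(b1 + b0 / x1)
Xt = np.column_stack([np.ones(n), 1.0 / x1])
coef = fit_probit(Xt, y)   # coef[0] estimates b1, coef[1] estimates b0
```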
When the assumption that {\displaystyle \varepsilon } is normally distributed fails to hold, a functional form misspecification issue arises: if the model is still estimated as a probit model, the estimators of the coefficients {\displaystyle \beta } are inconsistent. For instance, if {\displaystyle \varepsilon } follows a logistic distribution in the true model, but the model is estimated by probit, the estimates will be generally smaller than the true value. However, the inconsistency of the coefficient estimates is practically irrelevant because the estimates for the partial effects, {\displaystyle \partial P(y=1\mid x)/\partial x_{i'}}, will be close to the estimates given by the true logit model.
To avoid the issue of distribution misspecification, one may adopt a general distribution assumption for the error term, such that many different types of distribution can be included in the model. The cost is heavier computation and lower accuracy as the number of parameters increases. In most practical cases where the distribution form is misspecified, the estimators for the coefficients are inconsistent, but estimators for the conditional probability and the partial effects are still very good.
One can also take semi-parametric or non-parametric approaches, e.g., via local-likelihood or nonparametric quasi-likelihood methods, which avoid assumptions on a parametric form for the index function and are robust to the choice of the link function (e.g., probit or logit).
== History ==
The probit model is usually credited to Chester Bliss, who coined the term "probit" in 1934, and to John Gaddum (1933), who systematized earlier work. However, the basic model dates to the Weber–Fechner law by Gustav Fechner, published in Fechner (1860), and was repeatedly rediscovered until the 1930s; see Finney (1971, Chapter 3.6) and Aitchison & Brown (1957, Chapter 1.2).
A fast method for computing maximum likelihood estimates for the probit model was proposed by Ronald Fisher as an appendix to Bliss' work in 1935.
== See also ==
Generalized linear model
Limited dependent variable
Logit model
Multinomial probit
Multivariate probit models
Ordered probit and ordered logit model
Separation (statistics)
Tobit model
== References ==
== Further reading ==
Albert, J. H.; Chib, S. (1993). "Bayesian Analysis of Binary and Polychotomous Response Data". Journal of the American Statistical Association. 88 (422): 669–679. doi:10.1080/01621459.1993.10476321. JSTOR 2290350.
Amemiya, Takeshi (1985). "Qualitative Response Models". Advanced Econometrics. Oxford: Basil Blackwell. pp. 267–359. ISBN 0-631-13345-3.
Gouriéroux, Christian (2000). "The Simple Dichotomy". Econometrics of Qualitative Dependent Variables. New York: Cambridge University Press. pp. 6–37. ISBN 0-521-58985-1.
Liao, Tim Futing (1994). Interpreting Probability Models: Logit, Probit, and Other Generalized Linear Models. Sage. ISBN 0-8039-4999-5.
McCullagh, Peter; John Nelder (1989). Generalized Linear Models. London: Chapman and Hall. ISBN 0-412-31760-5.
== External links ==
Media related to Probit model at Wikimedia Commons
Econometrics Lecture (topic: Probit model) on YouTube by Mark Thoma
Multilevel models are statistical models of parameters that vary at more than one level. An example could be a model of student performance that contains measures for individual students as well as measures for classrooms within which the students are grouped. These models can be seen as generalizations of linear models (in particular, linear regression), although they can also extend to non-linear models. These models became much more popular after sufficient computing power and software became available.
Multilevel models are particularly appropriate for research designs where data for participants are organized at more than one level (i.e., nested data). The units of analysis are usually individuals (at a lower level) who are nested within contextual/aggregate units (at a higher level). While the lowest level of data in multilevel models is usually an individual, repeated measurements of individuals may also be examined. As such, multilevel models provide an alternative type of analysis for univariate or multivariate analysis of repeated measures. Individual differences in growth curves may be examined. Furthermore, multilevel models can be used as an alternative to ANCOVA, where scores on the dependent variable are adjusted for covariates (e.g. individual differences) before testing treatment differences. Multilevel models are able to analyze these experiments without the assumptions of homogeneity-of-regression slopes that is required by ANCOVA.
Multilevel models can be used on data with many levels, although 2-level models are the most common and the rest of this article deals only with these. The dependent variable must be examined at the lowest level of analysis.
== Level 1 regression equation ==
When there is a single level 1 independent variable, the level 1 model is
{\displaystyle Y_{ij}=\beta _{0j}+\beta _{1j}X_{ij}+e_{ij}}.
{\displaystyle Y_{ij}} refers to the score on the dependent variable for an individual observation at Level 1 (subscript i refers to individual case, subscript j refers to the group).
{\displaystyle X_{ij}} refers to the Level 1 predictor.
{\displaystyle \beta _{0j}} refers to the intercept of the dependent variable for group j.
{\displaystyle \beta _{1j}} refers to the slope for the relationship in group j (Level 2) between the Level 1 predictor and the dependent variable.
{\displaystyle e_{ij}} refers to the random errors of prediction for the Level 1 equation (it is also sometimes referred to as {\displaystyle r_{ij}}), with {\displaystyle e_{ij}\sim {\mathcal {N}}(0,\sigma _{1}^{2})}.
At Level 1, both the intercepts and slopes in the groups can be either fixed (meaning that all groups have the same values, although in the real world this would be a rare occurrence), non-randomly varying (meaning that the intercepts and/or slopes are predictable from an independent variable at Level 2), or randomly varying (meaning that the intercepts and/or slopes are different in the different groups, and that each have their own overall mean and variance).
When there are multiple level 1 independent variables, the model can be expanded by substituting vectors and matrices in the equation.
When the relationship between the response {\displaystyle Y_{ij}} and predictor {\displaystyle X_{ij}} cannot be described by a linear relationship, one can posit a nonlinear functional relationship between the response and predictor and extend the model to a nonlinear mixed-effects model. For example, when the response {\displaystyle Y_{ij}} is the cumulative infection trajectory of the {\displaystyle i}-th country, and {\displaystyle X_{ij}} represents the {\displaystyle j}-th time point, then the ordered pair {\displaystyle (X_{ij},Y_{ij})} for each country may show a shape similar to a logistic function.
== Level 2 regression equation ==
The dependent variables are the intercepts and the slopes for the independent variables at Level 1 in the groups of Level 2.
{\displaystyle \beta _{0j}=\gamma _{00}+\gamma _{01}w_{j}+u_{0j}}
{\displaystyle \beta _{1j}=\gamma _{10}+\gamma _{11}w_{j}+u_{1j}}
{\displaystyle u_{0j}\sim {\mathcal {N}}(0,\sigma _{2}^{2})}
{\displaystyle u_{1j}\sim {\mathcal {N}}(0,\sigma _{3}^{2})}
{\displaystyle \gamma _{00}} refers to the overall intercept. This is the grand mean of the scores on the dependent variable across all the groups when all the predictors are equal to 0.
{\displaystyle \gamma _{10}} refers to the average slope between the dependent variable and the Level 1 predictor.
{\displaystyle w_{j}} refers to the Level 2 predictor.
{\displaystyle \gamma _{01}} and {\displaystyle \gamma _{11}} refer to the effect of the Level 2 predictor on the Level 1 intercept and slope respectively.
{\displaystyle u_{0j}} refers to the deviation in group j from the overall intercept.
{\displaystyle u_{1j}} refers to the deviation in group j from the average slope between the dependent variable and the Level 1 predictor.
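Substituting the Level 2 equations into the Level 1 equation gives the single mixed-form equation {\displaystyle Y_{ij}=\gamma _{00}+\gamma _{01}w_{j}+\gamma _{10}X_{ij}+\gamma _{11}w_{j}X_{ij}+u_{0j}+u_{1j}X_{ij}+e_{ij}}. The following sketch (all parameter values are illustrative) simulates data from this two-level model; pooled OLS on the fixed-effect design then gives consistent, though not efficient, estimates of the gammas:

```python
import numpy as np

rng = np.random.default_rng(0)
J, n_j = 50, 20                            # J groups, n_j observations per group
g00, g01, g10, g11 = 2.0, 0.5, 1.0, -0.3   # fixed effects (gammas)

# Level 2: group-specific intercepts and slopes
w = rng.standard_normal(J)                 # Level 2 predictor w_j
u0 = rng.normal(0.0, 0.7, size=J)          # u_0j ~ N(0, sigma_2^2)
u1 = rng.normal(0.0, 0.4, size=J)          # u_1j ~ N(0, sigma_3^2)
beta0 = g00 + g01 * w + u0                 # beta_0j
beta1 = g10 + g11 * w + u1                 # beta_1j

# Level 1: individual outcomes within groups
X = rng.standard_normal((J, n_j))          # Level 1 predictor X_ij
e = rng.normal(0.0, 1.0, size=(J, n_j))    # e_ij ~ N(0, sigma_1^2)
Y = beta0[:, None] + beta1[:, None] * X + e
```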
== Types of models ==
Before conducting a multilevel model analysis, a researcher must decide on several aspects, including which predictors are to be included in the analysis, if any. Second, the researcher must decide whether parameter values (i.e., the elements that will be estimated) will be fixed or random. Fixed parameters are composed of a constant over all the groups, whereas a random parameter has a different value for each of the groups. Additionally, the researcher must decide whether to employ a maximum likelihood estimation or a restricted maximum likelihood estimation type.
=== Random intercepts model ===
A random intercepts model is a model in which intercepts are allowed to vary, and therefore, the scores on the dependent variable for each individual observation are predicted by the intercept that varies across groups. This model assumes that slopes are fixed (the same across different contexts). In addition, this model provides information about intraclass correlations, which are helpful in determining whether multilevel models are required in the first place.
=== Random slopes model ===
A random slopes model is a model in which slopes are allowed to vary according to a correlation matrix, and therefore, the slopes are different across grouping variable such as time or individuals. This model assumes that intercepts are fixed (the same across different contexts).
=== Random intercepts and slopes model ===
A model that includes both random intercepts and random slopes is likely the most realistic type of model, although it is also the most complex. In this model, both intercepts and slopes are allowed to vary across groups, meaning that they are different in different contexts.
=== Developing a multilevel model ===
In order to conduct a multilevel model analysis, one would start with fixed coefficients (slopes and intercepts). One aspect would be allowed to vary at a time (that is, would be changed), and compared with the previous model in order to assess better model fit. There are three different questions that a researcher would ask in assessing a model. First, is it a good model? Second, is a more complex model better? Third, what contribution do individual predictors make to the model?
In order to assess models, different model fit statistics would be examined. One such statistic is the chi-square likelihood-ratio test, which assesses the difference between models. The likelihood-ratio test can be employed for model building in general, for examining what happens when effects in a model are allowed to vary, and when testing a dummy-coded categorical variable as a single effect. However, the test can only be used when models are nested (meaning that a more complex model includes all of the effects of a simpler model). When testing non-nested models, comparisons between models can be made using the Akaike information criterion (AIC) or the Bayesian information criterion (BIC), among others. See further Model selection.
== Assumptions ==
Multilevel models have the same assumptions as other major general linear models (e.g., ANOVA, regression), but some of the assumptions are modified for the hierarchical nature of the design (i.e., nested data).
Linearity
The assumption of linearity states that there is a rectilinear (straight-line, as opposed to non-linear or U-shaped) relationship between variables. However, the model can be extended to nonlinear relationships. Particularly, when the mean part of the level 1 regression equation is replaced with a non-linear parametric function, then such a model framework is widely called the nonlinear mixed-effects model.
Normality
The assumption of normality states that the error terms at every level of the model are normally distributed. However, most statistical software allows one to specify different distributions for the variance terms, such as the Poisson, binomial, or logistic. The multilevel modelling approach can be used for all forms of generalized linear models.
Homoscedasticity
The assumption of homoscedasticity, also known as homogeneity of variance, assumes equality of population variances. However, different variance-correlation matrix can be specified to account for this, and the heterogeneity of variance can itself be modeled.
Independence of observations (No Autocorrelation of Model's Residuals)
Independence is an assumption of general linear models, which states that cases are random samples from the population and that scores on the dependent variable are independent of each other. One of the main purposes of multilevel models is to deal with cases where the assumption of independence is violated; multilevel models do, however, assume that 1) the level 1 and level 2 residuals are uncorrelated and 2) the errors (as measured by the residuals) at the highest level are uncorrelated.
Orthogonality of regressors to random effects
The regressors must not correlate with the random effects, {\displaystyle u_{0j}}. This assumption is testable but often ignored, rendering the estimator inconsistent. If this assumption is violated, the random effect must be modeled explicitly in the fixed part of the model, either by using dummy variables or by including cluster means of all {\displaystyle X_{ij}} regressors. This assumption is probably the most important assumption the estimator makes, but one that is misunderstood by most applied researchers using these types of models.
== Statistical tests ==
The type of statistical tests that are employed in multilevel models depends on whether one is examining fixed effects or variance components. When examining fixed effects, the estimate is compared with its standard error, which results in a Z-test. A t-test can also be computed. When computing a t-test, it is important to keep in mind the degrees of freedom, which will depend on the level of the predictor (e.g., level 1 predictor or level 2 predictor). For a level 1 predictor, the degrees of freedom are based on the number of level 1 predictors, the number of groups and the number of individual observations. For a level 2 predictor, the degrees of freedom are based on the number of level 2 predictors and the number of groups.
== Statistical power ==
Statistical power for multilevel models differs depending on whether it is level 1 or level 2 effects that are being examined. Power for level 1 effects is dependent upon the number of individual observations, whereas the power for level 2 effects is dependent upon the number of groups. To conduct research with sufficient power, large sample sizes are required in multilevel models. However, the number of individual observations in groups is not as important as the number of groups in a study. In order to detect cross-level interactions, given that the group sizes are not too small, recommendations have been made that at least 20 groups are needed, although many fewer can be used if one is only interested in inference on the fixed effects and the random effects are control, or "nuisance", variables. The issue of statistical power in multilevel models is complicated by the fact that power varies as a function of effect size and intraclass correlations, it differs for fixed effects versus random effects, and it changes depending on the number of groups and the number of individual observations per group.
== Applications ==
=== Level ===
The concept of level is the keystone of this approach. In an educational research example, the levels for a 2-level model might be
pupil
class
However, if one were studying multiple schools and multiple school districts, a 4-level model could include
pupil
class
school
district
The researcher must establish for each variable the level at which it was measured. In this example "test score" might be measured at pupil level, "teacher experience" at class level, "school funding" at school level, and "urban" at district level.
=== Example ===
As a simple example, consider a basic linear regression model that predicts income as a function of age, class, gender and race. It might then be observed that income levels also vary depending on the city and state of residence. A simple way to incorporate this into the regression model would be to add an additional independent categorical variable to account for the location (i.e. a set of additional binary predictors and associated regression coefficients, one per location). This would have the effect of shifting the mean income up or down—but it would still assume, for example, that the effect of race and gender on income is the same everywhere. In reality, this is unlikely to be the case—different local laws, different retirement policies, differences in level of racial prejudice, etc. are likely to cause all of the predictors to have different sorts of effects in different locales.
In other words, a simple linear regression model might, for example, predict that a given randomly sampled person in Seattle would have an average yearly income $10,000 higher than a similar person in Mobile, Alabama. However, it would also predict, for example, that a white person might have an average income $7,000 above a black person, and a 65-year-old might have an income $3,000 below a 45-year-old, in both cases regardless of location. A multilevel model, however, would allow for different regression coefficients for each predictor in each location. Essentially, it would assume that people in a given location have correlated incomes generated by a single set of regression coefficients, whereas people in another location have incomes generated by a different set of coefficients. Meanwhile, the coefficients themselves are assumed to be correlated and generated from a single set of hyperparameters. Additional levels are possible: For example, people might be grouped by cities, and the city-level regression coefficients grouped by state, and the state-level coefficients generated from a single hyper-hyperparameter.
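The generative story behind this example can be sketched in a few lines of Python. All numbers (hyperparameter means, standard deviations, noise levels) are illustrative assumptions, not estimates from real income data; the point is only that each city's coefficients are drawn from a single shared set of hyperparameters.

```python
import random

random.seed(0)

# Shared hyperparameters (assumed values for illustration): city-level
# intercepts and age slopes are drawn around common means, so cities
# differ but remain tied together, as in a multilevel model.
MEAN_INTERCEPT, SD_INTERCEPT = 30_000.0, 5_000.0
MEAN_AGE_SLOPE, SD_AGE_SLOPE = 400.0, 100.0

def draw_city_coefficients():
    """Level-2 draw: each city gets its own intercept and age slope."""
    return (random.gauss(MEAN_INTERCEPT, SD_INTERCEPT),
            random.gauss(MEAN_AGE_SLOPE, SD_AGE_SLOPE))

def draw_income(age, city_coefs, noise_sd=2_000.0):
    """Level-1 draw: an individual's income given their city's coefficients."""
    intercept, age_slope = city_coefs
    return intercept + age_slope * age + random.gauss(0.0, noise_sd)

cities = {name: draw_city_coefficients() for name in ("Seattle", "Mobile")}
samples = {name: [draw_income(45, coefs) for _ in range(1000)]
           for name, coefs in cities.items()}

for name, incomes in samples.items():
    print(name, round(sum(incomes) / len(incomes)))
```

Adding a third stage, with the hyperparameters themselves drawn per state from hyper-hyperparameters, would follow the same pattern.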
Multilevel models are a subclass of hierarchical Bayesian models, which are general models with multiple levels of random variables and arbitrary relationships among the different variables. Multilevel analysis has been extended to include multilevel structural equation modeling, multilevel latent class modeling, and other more general models.
=== Uses ===
Multilevel models have been used in education research or geographical research, to estimate separately the variance between pupils within the same school, and the variance between schools. In psychological applications, the multiple levels are items in an instrument, individuals, and families. In sociological applications, multilevel models are used to examine individuals embedded within regions or countries. In organizational psychology research, data from individuals must often be nested within teams or other functional units. They are often used in ecological research as well under the more general term mixed models.
Different covariables may be relevant on different levels. They can be used for longitudinal studies, as with growth studies, to separate changes within one individual and differences between individuals.
Cross-level interactions may also be of substantive interest; for example, when a slope is allowed to vary randomly, a level-2 predictor may be included in the slope formula for the level-1 covariate. For example, one may estimate the interaction of race and neighborhood to obtain an estimate of the interaction between an individual's characteristics and the social context.
=== Applications to longitudinal (repeated measures) data ===
== Alternative ways of analyzing hierarchical data ==
There are several alternative ways of analyzing hierarchical data, although most of them have some problems. First, traditional statistical techniques can be used. One could disaggregate higher-order variables to the individual level, and thus conduct an analysis on this individual level (for example, assign class variables to the individual level). The problem with this approach is that it would violate the assumption of independence, and thus could bias the results. This is known as the atomistic fallacy. Another way to analyze the data using traditional statistical approaches is to aggregate individual-level variables to higher-order variables and then to conduct an analysis on this higher level. The problem with this approach is that it discards all within-group information (because it takes the average of the individual-level variables). As much as 80–90% of the variance could be wasted, and the relationship between aggregated variables is inflated, and thus distorted. This is known as the ecological fallacy, and statistically, this type of analysis results in decreased power in addition to the loss of information.
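The loss of within-group information under aggregation can be shown with simulated data. The group count, group sizes, and variance components below are illustrative assumptions chosen so that most variance lies within groups.

```python
import random
from statistics import mean, pvariance

random.seed(1)

# Simulated two-level data: 10 groups, 50 individuals each (assumed sizes).
groups = []
for g in range(10):
    group_mean = random.gauss(0.0, 1.0)          # between-group variation
    groups.append([group_mean + random.gauss(0.0, 3.0) for _ in range(50)])

individuals = [x for grp in groups for x in grp]
group_means = [mean(grp) for grp in groups]

total_var = pvariance(individuals)
between_var = pvariance(group_means)

# Aggregation keeps only the between-group part of the variance;
# everything within groups is thrown away.
print(f"total variance:    {total_var:.2f}")
print(f"after aggregating: {between_var:.2f}")
print(f"share discarded:   {1 - between_var / total_var:.0%}")
```

Under these settings most of the variance is discarded, in line with the 80–90% figure cited above.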
Another way to analyze hierarchical data would be through a random-coefficients model. This model assumes that each group has a different regression model—with its own intercept and slope. Because groups are sampled, the model assumes that the intercepts and slopes are also randomly sampled from a population of group intercepts and slopes. This allows for an analysis in which one can assume that slopes are fixed but intercepts are allowed to vary. However, this presents a problem: individual components are independent, whereas group components are independent between groups but dependent within groups. This also allows for an analysis in which the slopes are random; however, the correlations of the error terms (disturbances) are dependent on the values of the individual-level variables. Thus, the problem with using a random-coefficients model to analyze hierarchical data is that it is still not possible to incorporate higher-order variables.
== Error terms ==
Multilevel models have two error terms, which are also known as disturbances. The individual components are all independent, but there are also group components, which are independent between groups but correlated within groups. However, variance components can differ, as some groups are more homogeneous than others.
== Bayesian nonlinear mixed-effects model ==
Multilevel modeling is frequently used in diverse applications and can be formulated within the Bayesian framework. In particular, Bayesian nonlinear mixed-effects models have recently received significant attention. A basic version of the Bayesian nonlinear mixed-effects model is represented in the following three stages:
Stage 1: Individual-Level Model
{\displaystyle {\begin{aligned}&{y}_{ij}=f(t_{ij};\theta _{1i},\theta _{2i},\ldots ,\theta _{li},\ldots ,\theta _{Ki})+\epsilon _{ij},\\{\phantom {spacer}}\\&\epsilon _{ij}\sim N(0,\sigma ^{2}),\\{\phantom {spacer}}\\&i=1,\ldots ,N,\,j=1,\ldots ,M_{i}.\end{aligned}}}
Stage 2: Population Model
{\displaystyle {\begin{aligned}&\theta _{li}=\alpha _{l}+\sum _{b=1}^{P}\beta _{lb}x_{ib}+\eta _{li},\\{\phantom {spacer}}\\&\eta _{li}\sim N(0,\omega _{l}^{2}),\\{\phantom {spacer}}\\&i=1,\ldots ,N,\,l=1,\ldots ,K.\end{aligned}}}
Stage 3: Prior
{\displaystyle {\begin{aligned}&\sigma ^{2}\sim \pi (\sigma ^{2}),\\{\phantom {spacer}}\\&\alpha _{l}\sim \pi (\alpha _{l}),\\{\phantom {spacer}}\\&(\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP})\sim \pi (\beta _{l1},\ldots ,\beta _{lb},\ldots ,\beta _{lP}),\\{\phantom {spacer}}\\&\omega _{l}^{2}\sim \pi (\omega _{l}^{2}),\\{\phantom {spacer}}\\&l=1,\ldots ,K.\end{aligned}}}
Here, {\displaystyle y_{ij}} denotes the continuous response of the {\displaystyle i}-th subject at the time point {\displaystyle t_{ij}}, and {\displaystyle x_{ib}} is the {\displaystyle b}-th covariate of the {\displaystyle i}-th subject. Parameters involved in the model are written in Greek letters. {\displaystyle f(t;\theta _{1},\ldots ,\theta _{K})} is a known function parameterized by the {\displaystyle K}-dimensional vector {\displaystyle (\theta _{1},\ldots ,\theta _{K})}. Typically, {\displaystyle f} is a nonlinear function that describes the temporal trajectory of individuals. In the model, {\displaystyle \epsilon _{ij}} and {\displaystyle \eta _{li}} describe within-individual variability and between-individual variability, respectively. If Stage 3: Prior is not considered, the model reduces to a frequentist nonlinear mixed-effects model.
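The hierarchy above can be made concrete with a minimal forward simulation of Stages 2 and 1, assuming K = 1 parameter, P = 1 covariate, and an exponential-decay trajectory f(t; θ) = exp(−θt). The function and all numeric settings are illustrative; a full Bayesian analysis would add Stage 3 priors and posterior sampling.

```python
import math
import random

random.seed(2)

N, M = 5, 20                       # subjects, observations per subject
alpha, beta, omega, sigma = 1.0, 0.5, 0.1, 0.05   # assumed values

def trajectory(t, theta):
    """Assumed nonlinear function f(t; theta) = exp(-theta * t)."""
    return math.exp(-theta * t)

x = [random.random() for _ in range(N)]             # subject covariates
# Stage 2: subject-specific parameters around the population line.
theta = [alpha + beta * x[i] + random.gauss(0.0, omega) for i in range(N)]
# Stage 1: noisy observations of each subject's trajectory.
t_grid = [j * 0.1 for j in range(M)]
y = [[trajectory(t, theta[i]) + random.gauss(0.0, sigma) for t in t_grid]
     for i in range(N)]

print(f"theta range: {min(theta):.2f} .. {max(theta):.2f}")
```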
A central task in the application of the Bayesian nonlinear mixed-effect models is to evaluate the posterior density:
{\displaystyle \pi (\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K}|\{y_{ij}\}_{i=1,j=1}^{N,M_{i}})}
{\displaystyle \propto \pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}},\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}
{\displaystyle {\begin{aligned}=&~\left.{\pi (\{y_{ij}\}_{i=1,j=1}^{N,M_{i}}|\{\theta _{li}\}_{i=1,l=1}^{N,K},\sigma ^{2})}\right\}{\text{Stage 1: Individual-Level Model}}\\{\phantom {spacer}}\\\times &~\left.{\pi (\{\theta _{li}\}_{i=1,l=1}^{N,K}|\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 2: Population Model}}\\{\phantom {spacer}}\\\times &~\left.{p(\sigma ^{2},\{\alpha _{l}\}_{l=1}^{K},\{\beta _{lb}\}_{l=1,b=1}^{K,P},\{\omega _{l}\}_{l=1}^{K})}\right\}{\text{Stage 3: Prior}}\end{aligned}}}
The panel on the right displays the Bayesian research cycle using the Bayesian nonlinear mixed-effects model. A research cycle using the Bayesian nonlinear mixed-effects model comprises two steps: (a) a standard research cycle and (b) a Bayesian-specific workflow. The standard research cycle involves a literature review, defining a problem, and specifying the research question and hypothesis. The Bayesian-specific workflow comprises three sub-steps: (b)–(i) formalizing prior distributions based on background knowledge and prior elicitation; (b)–(ii) determining the likelihood function based on a nonlinear function
{\displaystyle f}; and (b)–(iii) making a posterior inference. The resulting posterior inference can be used to start a new research cycle.
== See also ==
Hyperparameter
Mixed-design analysis of variance
Multiscale modeling
Random effects model
Nonlinear mixed-effects model
Bayesian hierarchical modeling
Restricted randomization
== Notes ==
== References ==
== Further reading ==
Gelman, A.; Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. New York: Cambridge University Press. pp. 235–299. ISBN 978-0-521-68689-1.
Goldstein, H. (2011). Multilevel Statistical Models (4th ed.). London: Wiley. ISBN 978-0-470-74865-7.
Hedeker, D.; Gibbons, R. D. (2012). Longitudinal Data Analysis (2nd ed.). New York: Wiley. ISBN 978-0-470-88918-3.
Hox, J. J. (2010). Multilevel Analysis: Techniques and Applications (2nd ed.). New York: Routledge. ISBN 978-1-84872-845-5.
Raudenbush, S. W.; Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods (2nd ed.). Thousand Oaks, CA: Sage. This concentrates on education.
Snijders, T. A. B.; Bosker, R. J. (2011). Multilevel Analysis: an Introduction to Basic and Advanced Multilevel Modeling (2nd ed.). London: Sage. ISBN 9781446254332.
Swamy, P. A. V. B.; Tavlas, George S. (2001). "Random Coefficient Models". In Baltagi, Badi H. (ed.). A Companion to Theoretical Econometrics. Oxford: Blackwell. pp. 410–429. ISBN 978-0-631-21254-6.
Verbeke, G.; Molenberghs, G. (2013). Linear Mixed Models for Longitudinal Data. Springer. Includes SAS code
Gomes, Dylan G.E. (20 January 2022). "Should I use fixed effects or random effects when I have fewer than five levels of a grouping factor in a mixed-effects model?". PeerJ. 10: e12794. doi:10.7717/peerj.12794. PMC 8784019. PMID 35116198.
== External links ==
Centre for Multilevel Modelling
Response modeling methodology (RMM) is a general platform for statistical modeling of a linear/nonlinear relationship between a response variable (dependent variable) and a linear predictor (a linear combination of predictors/effects/factors/independent variables), often denoted the linear predictor function. It is generally assumed that the modeled relationship is monotone convex (delivering monotone convex function) or monotone concave (delivering monotone concave function). However, many non-monotone functions, like the quadratic equation, are special cases of the general model.
RMM was initially developed as a series of extensions to the original inverse Box–Cox transformation:
{\displaystyle y={{(1+\lambda z)}^{1/\lambda }},}
where y is a percentile of the modeled response, Y (the modeled random variable), z is the respective percentile of a normal variate and λ is the Box–Cox parameter. As λ goes to zero, the inverse Box–Cox transformation becomes:
{\displaystyle y=e^{z},}
an exponential model. Therefore, the original inverse Box–Cox transformation contains a trio of models: linear (λ = 1), power (λ ≠ 1, λ ≠ 0) and exponential (λ = 0). This implies that on estimating λ, using sample data, the final model is not determined in advance (prior to estimation) but rather as a result of the estimation. In other words, the data alone determine the final model.
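The trio of models can be verified numerically. The helper below is a direct transcription of the inverse Box–Cox formula above, with the λ → 0 case handled explicitly via the exponential limit:

```python
import math

def inverse_box_cox(z, lam):
    """Inverse Box-Cox transformation y = (1 + lam*z)**(1/lam),
    with the exponential limit exp(z) as lam -> 0."""
    if abs(lam) < 1e-12:
        return math.exp(z)
    return (1.0 + lam * z) ** (1.0 / lam)

z = 0.3
print(inverse_box_cox(z, 1.0))    # linear model: 1 + z
print(inverse_box_cox(z, 0.5))    # power model
print(inverse_box_cox(z, 0.0))    # exponential model: exp(z)
# The lam -> 0 limit is approached continuously:
print(abs(inverse_box_cox(z, 1e-8) - math.exp(z)) < 1e-6)
```

The continuity of the λ → 0 limit is what makes the three models points on a single spectrum rather than separate families, anticipating the CMC property discussed below.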
Extensions to the inverse Box–Cox transformation were developed by Shore (2001a) and were denoted Inverse Normalizing Transformations (INTs). They have been applied to model monotone convex relationships in various engineering areas, mostly to model physical properties of chemical compounds (Shore et al., 2001a, and references therein). Once it was realized that INT models may be perceived as special cases of a much broader general approach for modeling non-linear monotone convex relationships, the new Response Modeling Methodology was initiated and developed (Shore, 2005a, 2011 and references therein).
The RMM model expresses the relationship between a response, Y (the modeled random variable), and two components that deliver variation to Y:
The linear predictor function, LP (denoted η):
{\displaystyle \eta =\beta _{0}+\beta _{1}X_{1}+\cdots +\beta _{k}X_{k},}
where {X1,...,Xk} are regressor-variables (“affecting factors”) that deliver systematic variation to the response;
Normal errors, delivering random variation to the response.
The basic RMM model describes Y in terms of the LP, two possibly correlated zero-mean normal errors, ε1 and ε2 (with correlation ρ and standard deviations σε1 and σε2, respectively) and a vector of parameters {α,λ,μ} (Shore, 2005a, 2011):
{\displaystyle W=\log(Y)=\mu +\left({\frac {\alpha }{\lambda }}\right)[(\eta +\varepsilon _{1})^{\lambda }-1]+\varepsilon _{2},\,}
and ε1 represents uncertainty (measurement imprecision or otherwise) in the explanatory variables (included in the LP). This is in addition to uncertainty associated with the response (ε2). Expressing ε1 and ε2 in terms of standard normal variates, Z1 and Z2, respectively, having correlation ρ, and conditioning Z2 | Z1 = z1 (Z2 given that Z1 is equal to a given value z1), we may write in terms of a single error, ε:
{\displaystyle {\begin{aligned}\varepsilon _{1}&=\sigma _{\varepsilon _{1}}Z_{1}\,\,;\,\,\varepsilon _{2}=\sigma _{\varepsilon _{2}}Z_{2};\\[4pt]\varepsilon _{2}&=\sigma _{\varepsilon _{2}}\rho z_{1}+(1-\rho ^{2})^{(1/2)}\sigma _{\varepsilon _{2}}Z=dz_{1}+\varepsilon ,\\\end{aligned}}}
where Z is a standard normal variate, independent of both Z1 and Z2, ε is a zero-mean error and d is a parameter. From these relationships, the associated RMM quantile function is (Shore, 2011):
{\displaystyle w=\log(y)=\mu +\left({\frac {\alpha }{\lambda }}\right)[(\eta +cz)^{\lambda }-1]+(d)z+\varepsilon ,}
or, after re-parameterization:
{\displaystyle w=\log(y)=\log(M_{Y})+\left({\frac {a\eta ^{b}}{b}}\right)\left\{\left[1+\left({\frac {c}{\eta }}\right)z\right]^{b}-1\right\}+(d)z+\varepsilon ,}
where y is the percentile of the response (Y), z is the respective standard normal percentile, ε is the model's zero-mean normal error with constant variance, σ, {a,b,c,d} are parameters and MY is the response median (z = 0), dependent on values of the parameters and the value of the LP, η:
{\displaystyle \log(M_{Y})=\mu +\left({\frac {a}{b}}\right)[\eta ^{b}-1]=\log(m)+\left({\frac {a}{b}}\right)[\eta ^{b}-1],}
where μ (or m) is an additional parameter.
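The reparameterized quantile function can be transcribed directly into code. The function below evaluates w = log(y) with the error term ε set to zero; the parameter values are purely illustrative.

```python
import math

def rmm_quantile(z, eta, mu, a, b, c, d):
    """RMM quantile w = log(y) at standard-normal percentile z
    (error term epsilon set to zero); parameter values are illustrative."""
    log_median = mu + (a / b) * (eta ** b - 1.0)
    return (log_median
            + (a * eta ** b / b) * ((1.0 + (c / eta) * z) ** b - 1.0)
            + d * z)

# At z = 0 both correction terms vanish and the quantile reduces to the
# log-median, as stated above.
params = dict(eta=2.0, mu=0.5, a=0.8, b=0.4, c=0.1, d=0.05)
print(rmm_quantile(0.0, **params))              # equals log(M_Y)
print(rmm_quantile(1.0, **params))              # 84th percentile of log(Y)
```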
If it may be assumed that cz ≪ η, the above model for the RMM quantile function can be approximated by:
{\displaystyle w=\log(y)=\log(M_{Y})+\left({\frac {a\eta ^{b}}{b}}\right)\left[\exp \left({\frac {bcz}{\eta }}\right)-1\right]+(d)z+\varepsilon .}
The parameter “c” cannot be “absorbed” into the parameters of the LP (η) since “c” and LP are estimated in two separate stages (as expounded below).
If the response data used to estimate the model contain values that change sign, or if the lowest response value is far from zero (for example, when data are left-truncated), a location parameter, L, may be added to the response so that the expressions for the quantile function and for the median become, respectively:
{\displaystyle w=\log(y-L)=\log(M_{Y}-L)+\left({\frac {a\eta ^{b}}{b}}\right)\left\{\left[1+\left({\frac {c}{\eta }}\right)z\right]^{b}-1\right\}+(d)z+\varepsilon \,;}
{\displaystyle \log(M_{Y}-L)=\mu +\left({\frac {a}{b}}\right)[\eta ^{b}-1].}
== Continuous monotonic convexity ==
As shown earlier, the inverse Box–Cox transformation depends on a single parameter, λ, which determines the final form of the model (whether linear, power or exponential). All three models thus constitute mere points on a continuous spectrum of monotonic convexity, spanned by λ. This property, where different known models become mere points on a continuous spectrum, spanned by the model's parameters, is denoted the Continuous Monotonic Convexity (CMC) property. The latter characterizes all RMM models, and it allows the basic “linear-power-exponential” cycle (underlying the inverse Box–Cox transformation) to be repeated ad infinitum, allowing for ever more convex models to be derived. Examples for such models are an exponential-power model or an exponential-exponential-power model (see explicit models expounded further on). Since the final form of the model is determined by the values of RMM parameters, this implies that the data, used to estimate the parameters, determine the final form of the estimated RMM model (as with the Box–Cox inverse transformation). The CMC property thus grants RMM models high flexibility in accommodating the data used to estimate the parameters. References given below display published results of comparisons between RMM models and existing models. These comparisons demonstrate the effectiveness of the CMC property.
== Examples of RMM models ==
Ignoring the RMM errors (that is, setting the terms cz, dz, and ε in the percentile model to zero), we obtain the following RMM models, presented in increasing order of monotone convexity:
linear:
{\displaystyle {\begin{aligned}&{\text{linear: }}y=\eta &&(\alpha =1,\lambda =0);\\[5pt]&{\text{power: }}y=\eta ^{\alpha },&&(\alpha \neq 1,\lambda =0);\\[5pt]&{\text{exponential-linear: }}y=k\exp(\eta ),&&(\alpha \neq 1,\lambda =1);\\[5pt]&{\text{exponential-power: }}y=k\exp(\eta ^{\lambda }),&&(\alpha \neq 1,\lambda \neq 1;k{\text{ is a non-negative parameter}}.)\end{aligned}}}
Adding two new parameters by introducing for η (in the percentile model):
{\displaystyle \exp \left[\left({\frac {\beta }{\kappa }}\right)(\eta ^{\kappa }-1)\right]}
, a new cycle of “linear-power-exponential” is iterated to produce models with stronger monotone convexity (Shore, 2005a, 2011, 2012):
{\displaystyle {\begin{aligned}&{\text{exponential-power: }}y=k\exp(\eta ^{\lambda }),&&(\alpha \neq 1,\lambda \neq 1,\beta =1,\kappa =0,\\&&&{\text{ restoring the former model}});\\[6pt]&{\text{exponential-exponential-linear: }}y=k_{1}\exp[k_{2}\exp(\eta )],&&(\alpha \neq 1,\lambda \neq 1,\beta =1,\kappa =1);\\[6pt]&{\text{exponential-exponential-power: }}y=k_{1}\exp[k_{2}\exp(\eta ^{\kappa })],&&(\alpha \neq 1,\lambda \neq 1,\beta =1,\kappa \neq 1).\end{aligned}}}
This series of monotonic convex models, presented in hierarchical order on the “Ladder of Monotonic Convex Functions” (Shore, 2011), is unlimited from above. However, all models are mere points on a continuous spectrum, spanned by the RMM parameters. Note also that numerous growth models, like the Gompertz function, are exact special cases of the RMM model.
== Moments ==
The k-th non-central moment of Y is (assuming L = 0; Shore, 2005a, 2011):
{\displaystyle \operatorname {E} (Y^{k})=(M_{Y})^{k}\operatorname {E} \left\{\exp \left\{\left({\frac {k\alpha }{\lambda }}\right)[(\eta +cZ)^{\lambda }-1]+(kd)Z\right\}\right\}.}
Expanding Yk, as given on the right-hand-side, into a Taylor series around zero, in terms of powers of Z (the standard normal variate), and then taking expectation on both sides, assuming that cZ ≪ η so that η + cZ ≈ η, an approximate simple expression for the k-th non-central moment, based on the first six terms in the expansion, is:
{\displaystyle \operatorname {E} (Y)^{k}\cong (M_{Y})^{k}e^{\alpha k\left(\eta ^{\lambda }-1\right)/\lambda }\left\{1+{\frac {1}{2}}(kd)^{2}+{\frac {1}{8}}(kd)^{4}\right\}.}
An analogous expression may be derived without assuming cZ ≪ η. This would result in a more accurate (however lengthy and cumbersome) expression.
Once cZ in the above expression is neglected, Y becomes a log-normal random variable (with parameters that depend on η).
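The log-normal limit gives a quick numerical sanity check on the bracketed factor above: if log(Y) = log(M_Y) + dZ, then E(Y^k) = M_Y^k · exp((kd)²/2), and 1 + (kd)²/2 + (kd)⁴/8 is the three-term series of that exponential. The parameter values below are illustrative.

```python
import math
import random

random.seed(3)

M_Y, d, k = 2.0, 0.3, 2          # illustrative values

exact_factor = math.exp((k * d) ** 2 / 2.0)
series_factor = 1.0 + (k * d) ** 2 / 2.0 + (k * d) ** 4 / 8.0

# Monte Carlo estimate of E(Y^k) for log-normal Y = M_Y * exp(d * Z).
mc = sum((M_Y * math.exp(d * random.gauss(0.0, 1.0))) ** k
         for _ in range(200_000)) / 200_000

print(f"exact factor:   {exact_factor:.4f}")
print(f"series factor:  {series_factor:.4f}")
print(f"monte carlo E(Y^k): {mc:.3f} vs exact {M_Y ** k * exact_factor:.3f}")
```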
== Fitting and estimation ==
RMM models may be used to model random variation (as a general platform for distribution fitting) or to model systematic variation (analogously to generalized linear models, GLM).
In the former case (no systematic variation, namely, η = constant), RMM Quantile function is fitted to known distributions. If the underlying distribution is unknown, the RMM quantile function is estimated using available sample data. Modeling random variation with RMM is addressed and demonstrated in Shore (2011 and references therein).
In the latter case (modeling systematic variation), RMM models are estimated assuming that variation in the linear predictor (generated via variation in the regressor-variables) contribute to the overall variation of the modeled response variable (Y). This case is addressed and demonstrated in Shore (2005a, 2012 and relevant references therein). Estimation is conducted in two stages. First the median is estimated by minimizing the sum of absolute deviations (of fitted model from sample data points). In the second stage, the remaining two parameters (not estimated in the first stage, namely, {c,d}), are estimated. Three estimation approaches are presented in Shore (2012): maximum likelihood, moment matching and nonlinear quantile regression.
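The first estimation stage can be sketched as follows: fit the median structure log(M_Y) = μ + (a/b)(η^b − 1) by minimizing the sum of absolute deviations. For simplicity, b is held at an assumed value and (μ, a) are searched over a coarse grid; the data are simulated and all settings are illustrative, not a production estimator.

```python
import random

random.seed(4)

B = 0.5                                   # assumed, fixed for the sketch
etas = [1.0 + 0.2 * i for i in range(30)]
true_mu, true_a = 0.3, 1.2
# Simulated sample of log-responses around the true median structure.
logs = [true_mu + (true_a / B) * (e ** B - 1.0) + random.gauss(0.0, 0.1)
        for e in etas]

def sad(mu, a):
    """Sum of absolute deviations of the fitted log-median from the data."""
    return sum(abs(w - (mu + (a / B) * (e ** B - 1.0)))
               for w, e in zip(logs, etas))

# Stage 1: minimize the sum of absolute deviations over a parameter grid.
best = min(((mu / 10.0, a / 10.0)
            for mu in range(-10, 11) for a in range(0, 31)),
           key=lambda p: sad(*p))
print("estimated (mu, a):", best)
```

The second stage, estimating {c, d} by maximum likelihood, moment matching, or nonlinear quantile regression, would follow with the median structure held fixed.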
== Literature review ==
As of 2021, the RMM literature addresses three areas:
(1) Developing INTs and later the RMM approach, with allied estimation methods;
(2) Exploring the properties of RMM and comparing RMM effectiveness to other current modelling approaches (for distribution fitting or for modelling systematic variation);
(3) Applications.
Shore (2003a) developed Inverse Normalizing Transformations (INTs) in the first years of the 21st century and has applied them to various engineering disciplines like statistical process control (Shore, 2000a, b, 2001a, b, 2002a) and chemical engineering (Shore et al., 2002). Subsequently, as the new Response Modeling Methodology (RMM) had been emerging and developing into a full-fledged platform for modeling monotone convex relationships (ultimately presented in a book, Shore, 2005a), RMM properties were explored (Shore, 2002b, 2004a, b, 2008a, 2011), estimation procedures developed (Shore, 2005a, b, 2012) and the new modeling methodology compared to other approaches, for modeling random variation (Shore 2005c, 2007, 2010; Shore and A’wad 2010), and for modeling systematic variation (Shore, 2008b).
Concurrently, RMM had been applied to various scientific and engineering disciplines and compared to current models and modeling approaches practiced therein. For example, chemical engineering (Shore, 2003b; Benson-Karhi et al., 2007; Shacham et al., 2008; Shore and Benson-Karhi, 2010), statistical process control (Shore, 2014; Shore et al., 2014; Danoch and Shore, 2016), reliability engineering (Shore, 2004c; Ladany and Shore, 2007), forecasting (Shore and Benson-Karhi, 2007), ecology (Shore, 2014), and the medical profession (Shore et al., 2014; Benson-Karhi et al., 2017).
== References ==
In statistics, a semiparametric model is a statistical model that has parametric and nonparametric components.
A statistical model is a parameterized family of distributions:
{\displaystyle \{P_{\theta }:\theta \in \Theta \}}
indexed by a parameter {\displaystyle \theta }.
A parametric model is a model in which the indexing parameter {\displaystyle \theta } is a vector in {\displaystyle k}-dimensional Euclidean space, for some nonnegative integer {\displaystyle k}. Thus, {\displaystyle \theta } is finite-dimensional, and {\displaystyle \Theta \subseteq \mathbb {R} ^{k}}.
With a nonparametric model, the set of possible values of the parameter {\displaystyle \theta } is a subset of some space {\displaystyle V}, which is not necessarily finite-dimensional. For example, we might consider the set of all distributions with mean 0. Such spaces are vector spaces with topological structure, but may not be finite-dimensional as vector spaces. Thus, {\displaystyle \Theta \subseteq V} for some possibly infinite-dimensional space {\displaystyle V}.
With a semiparametric model, the parameter has both a finite-dimensional component and an infinite-dimensional component (often a real-valued function defined on the real line). Thus, {\displaystyle \Theta \subseteq \mathbb {R} ^{k}\times V}, where {\displaystyle V} is an infinite-dimensional space.
It may appear at first that semiparametric models include nonparametric models, since they have an infinite-dimensional as well as a finite-dimensional component. However, a semiparametric model is considered to be "smaller" than a completely nonparametric model because we are often interested only in the finite-dimensional component of {\displaystyle \theta }. That is, the infinite-dimensional component is regarded as a nuisance parameter. In nonparametric models, by contrast, the primary interest is in estimating the infinite-dimensional parameter. Thus the estimation task is statistically harder in nonparametric models.
These models often use smoothing or kernels.
== Example ==
A well-known example of a semiparametric model is the Cox proportional hazards model. If we are interested in studying the time {\displaystyle T} to an event such as death due to cancer or failure of a light bulb, the Cox model specifies the following distribution function for {\displaystyle T}:
{\displaystyle F(t)=1-\exp \left(-\int _{0}^{t}\lambda _{0}(u)e^{\beta x}du\right),}
where {\displaystyle x} is the covariate vector, and {\displaystyle \beta } and {\displaystyle \lambda _{0}(u)} are unknown parameters: {\displaystyle \theta =(\beta ,\lambda _{0}(u))}. Here {\displaystyle \beta } is finite-dimensional and is of interest; {\displaystyle \lambda _{0}(u)} is an unknown non-negative function of time (known as the baseline hazard function) and is often a nuisance parameter. The set of possible candidates for {\displaystyle \lambda _{0}(u)} is infinite-dimensional.
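What makes β estimable despite the infinite-dimensional nuisance component is Cox's partial likelihood, which depends only on the ordering of event times and not on λ₀. The sketch below (toy data with a single covariate, no censoring, and a crude grid search, all illustrative) evaluates and maximizes that partial likelihood:

```python
import math

# Toy data (illustrative): (event time, covariate x), no censoring.
data = [(2.0, 1.0), (3.5, 0.0), (5.0, 1.0), (7.0, 0.0)]

def partial_log_likelihood(beta):
    """Cox partial log-likelihood: at each event time, the probability
    that the observed subject fails, among those still at risk.  The
    baseline hazard lambda_0 cancels out of every term."""
    events = sorted(data)                      # ascending event times
    ll = 0.0
    for idx, (t_i, x_i) in enumerate(events):
        risk_set = events[idx:]                # subjects still at risk at t_i
        denom = sum(math.exp(beta * x_j) for _, x_j in risk_set)
        ll += beta * x_i - math.log(denom)
    return ll

# Crude one-dimensional maximization over a grid of beta values.
beta_hat = max((b / 100.0 for b in range(-300, 301)),
               key=partial_log_likelihood)
print("beta_hat:", beta_hat)
```

In practice one would maximize with a proper optimizer and handle censoring and ties, but the key point survives in the sketch: λ₀ never appears.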
== See also ==
Semiparametric regression
Statistical model
Generalized method of moments
== Notes ==
== References ==
Bickel, P. J.; Klaassen, C. A. J.; Ritov, Y.; Wellner, J. A. (1998), Efficient and Adaptive Estimation for Semiparametric Models, Springer
Härdle, Wolfgang; Müller, Marlene; Sperlich, Stefan; Werwatz, Axel (2004), Nonparametric and Semiparametric Models, Springer
Kosorok, Michael R. (2008), Introduction to Empirical Processes and Semiparametric Inference, Springer
Tsiatis, Anastasios A. (2006), Semiparametric Theory and Missing Data, Springer
Begun, Janet M.; Hall, W. J.; Huang, Wei-Min; Wellner, Jon A. (1983), "Information and asymptotic efficiency in parametric–nonparametric models", Annals of Statistics, 11 (2), 432–452
In statistics, a parametric model or parametric family or finite-dimensional model is a particular class of statistical models. Specifically, a parametric model is a family of probability distributions that has a finite number of parameters.
== Definition ==
A statistical model is a collection of probability distributions on some sample space. We assume that the collection, 𝒫, is indexed by some set Θ. The set Θ is called the parameter set or, more commonly, the parameter space. For each θ ∈ Θ, let Fθ denote the corresponding member of the collection; so Fθ is a cumulative distribution function. Then a statistical model can be written as
{\displaystyle {\mathcal {P}}={\big \{}F_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.}
The model is a parametric model if Θ ⊆ ℝk for some positive integer k.
When the model consists of absolutely continuous distributions, it is often specified in terms of corresponding probability density functions:
{\displaystyle {\mathcal {P}}={\big \{}f_{\theta }\ {\big |}\ \theta \in \Theta {\big \}}.}
== Examples ==
The Poisson family of distributions is parametrized by a single number λ > 0:
{\displaystyle {\mathcal {P}}={\Big \{}\ p_{\lambda }(j)={\tfrac {\lambda ^{j}}{j!}}e^{-\lambda },\ j=0,1,2,3,\dots \ {\Big |}\;\;\lambda >0\ {\Big \}},}
where pλ is the probability mass function. This family is an exponential family.
The normal family is parametrized by θ = (μ, σ), where μ ∈ ℝ is a location parameter and σ > 0 is a scale parameter:
{\displaystyle {\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {1}{{\sqrt {2\pi }}\sigma }}\exp \left(-{\tfrac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)\ {\Big |}\;\;\mu \in \mathbb {R} ,\sigma >0\ {\Big \}}.}
This parametrized family is both an exponential family and a location-scale family.
The Weibull translation model has a three-dimensional parameter θ = (λ, β, μ):
{\displaystyle {\mathcal {P}}={\Big \{}\ f_{\theta }(x)={\tfrac {\beta }{\lambda }}\left({\tfrac {x-\mu }{\lambda }}\right)^{\beta -1}\!\exp \!{\big (}\!-\!{\big (}{\tfrac {x-\mu }{\lambda }}{\big )}^{\beta }{\big )}\,\mathbf {1} _{\{x>\mu \}}\ {\Big |}\;\;\lambda >0,\,\beta >0,\,\mu \in \mathbb {R} \ {\Big \}}.}
The binomial model is parametrized by θ = (n, p), where n is a non-negative integer and p is a probability (i.e. 0 ≤ p ≤ 1):
{\displaystyle {\mathcal {P}}={\Big \{}\ p_{\theta }(k)={\tfrac {n!}{k!(n-k)!}}\,p^{k}(1-p)^{n-k},\ k=0,1,2,\dots ,n\ {\Big |}\;\;n\in \mathbb {Z} _{\geq 0},\,p\geq 0\land p\leq 1{\Big \}}.}
This example illustrates the definition for a model with some discrete parameters.
== General remarks ==
A parametric model is called identifiable if the mapping θ ↦ Pθ is invertible, i.e. there are no two different parameter values θ1 and θ2 such that Pθ1 = Pθ2.
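To make the definition concrete, here is a small pure-Python sketch using an invented, deliberately redundant parametrization θ = (a, b) mapping to Normal(a + b, 1). Two distinct parameter values induce the same distribution, so the map θ ↦ Pθ is not injective and the model is not identifiable:

```python
from math import exp, pi, sqrt

def density(theta, x):
    """Normal(a + b, 1) density -- a deliberately redundant parametrization."""
    a, b = theta
    mu = a + b
    return exp(-(x - mu) ** 2 / 2) / sqrt(2 * pi)

# theta1 = (1, 2) and theta2 = (0, 3) are different parameter values,
# yet they give the same mean a + b = 3, hence the same distribution.
xs = [i / 10 for i in range(-50, 50)]
same = all(abs(density((1.0, 2.0), x) - density((0.0, 3.0), x)) < 1e-15 for x in xs)
print(same)  # True: the model is not identifiable
```

Only a reparametrization by μ = a + b alone would restore identifiability here.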
== Comparisons with other classes of models ==
Parametric models are contrasted with semi-parametric, semi-nonparametric, and non-parametric models, all of which have an infinite set of "parameters" for description. The distinction between these four classes is as follows:
in a "parametric" model all the parameters are in finite-dimensional parameter spaces;
a model is "non-parametric" if all the parameters are in infinite-dimensional parameter spaces;
a "semi-parametric" model contains finite-dimensional parameters of interest and infinite-dimensional nuisance parameters;
a "semi-nonparametric" model has both finite-dimensional and infinite-dimensional unknown parameters of interest.
Some statisticians believe that the concepts "parametric", "non-parametric", and "semi-parametric" are ambiguous. It can also be noted that the set of all probability measures has the cardinality of the continuum, and therefore it is possible to parametrize any model at all by a single number in the interval (0,1). This difficulty can be avoided by considering only "smooth" parametric models.
== See also ==
Parametric family
Parametric statistics
Statistical model
Statistical model specification
== Notes ==
== Bibliography ==
Model selection is the task of selecting a model from among various candidates on the basis of a performance criterion, in order to choose the best one.
In the context of machine learning and more generally statistical analysis, this may be the selection of a statistical model from a set of candidate models, given data. In the simplest cases, a pre-existing set of data is considered. However, the task can also involve the design of experiments such that the data collected is well-suited to the problem of model selection. Given candidate models of similar predictive or explanatory power, the simplest model is most likely to be the best choice (Occam's razor).
Konishi & Kitagawa (2008, p. 75) state, "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". Relatedly, Cox (2006, p. 197) has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis".
Model selection may also refer to the problem of selecting a few representative models from a large set of computational models for the purpose of decision making or optimization under uncertainty.
In machine learning, algorithmic approaches to model selection include feature selection, hyperparameter optimization, and statistical learning theory.
== Introduction ==
In its most basic forms, model selection is one of the fundamental tasks of scientific inquiry. Determining the principle that explains a series of observations is often linked directly to a mathematical model predicting those observations. For example, when Galileo performed his inclined plane experiments, he demonstrated that the motion of the balls fitted the parabola predicted by his model.
Of the countless number of possible mechanisms and processes that could have produced the data, how can one even begin to choose the best model? The mathematical approach commonly taken decides among a set of candidate models; this set must be chosen by the researcher. Often simple models such as polynomials are used, at least initially. Burnham & Anderson (2002) emphasize throughout their book the importance of choosing models based on sound scientific principles, such as understanding of the phenomenological processes or mechanisms (e.g., chemical reactions) underlying the data.
Once the set of candidate models has been chosen, the statistical analysis allows us to select the best of these models. What is meant by best is controversial. A good model selection technique will balance goodness of fit with simplicity. More complex models will be better able to adapt their shape to fit the data (for example, a fifth-order polynomial can exactly fit six points), but the additional parameters may not represent anything useful. (Perhaps those six points are really just randomly distributed about a straight line.) Goodness of fit is generally determined using a likelihood ratio approach, or an approximation of this, leading to a chi-squared test. The complexity is generally measured by counting the number of parameters in the model.
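The fifth-order-polynomial remark can be made concrete. Below is a minimal pure-Python sketch (data invented for illustration): a degree-5 interpolating polynomial, built by Lagrange interpolation, drives the residuals at six data points to exactly zero, even though the points merely scatter around a straight line:

```python
def lagrange_fit(points):
    """Return the degree-(n-1) Lagrange interpolating polynomial through n points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            term = yi
            for j, (xj, _) in enumerate(points):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Six points scattered about the line y = x (small "noise" written in by hand).
pts = [(0, 0.1), (1, 0.9), (2, 2.2), (3, 2.8), (4, 4.1), (5, 5.0)]
poly = lagrange_fit(pts)
print(max(abs(poly(x) - y) for x, y in pts))  # ~0.0: the degree-5 model fits exactly
```

The exact fit is precisely the problem: the five extra parameters absorb the noise rather than represent anything useful.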
Model selection techniques can be considered as estimators of some physical quantity, such as the probability of the model producing the given data. The bias and variance are both important measures of the quality of this estimator; efficiency is also often considered.
A standard example of model selection is that of curve fitting, where, given a set of points and other background knowledge (e.g. points are a result of i.i.d. samples), we must select a curve that describes the function that generated the points.
== Two directions of model selection ==
There are two main objectives in inference and learning from data. One is scientific discovery, also called statistical inference: understanding the underlying data-generating mechanism and interpreting the nature of the data. The other objective of learning from data is predicting future or unseen observations, also called statistical prediction. In the second objective, the data scientist is not necessarily concerned with an accurate probabilistic description of the data. Of course, one may also be interested in both directions.
In line with the two different objectives, model selection can also have two directions: model selection for inference and model selection for prediction. The first direction is to identify the best model for the data, which will preferably provide a reliable characterization of the sources of uncertainty for scientific interpretation. For this goal, it is important that the selected model not be too sensitive to the sample size. Accordingly, an appropriate notion for evaluating model selection is selection consistency, meaning that the most robust candidate will be consistently selected given sufficiently many data samples.
The second direction is to choose a model as machinery to offer excellent predictive performance. For the latter, however, the selected model may simply be the lucky winner among a few close competitors, yet the predictive performance can still be the best possible. If so, the model selection is fine for the second goal (prediction), but the use of the selected model for insight and interpretation may be severely unreliable and misleading. Moreover, for very complex models selected this way, even predictions may be unreasonable for data only slightly different from those on which the selection was made.
== Methods to assist in choosing the set of candidate models ==
Data transformation (statistics)
Exploratory data analysis
Model specification
Scientific method
== Criteria ==
Below is a list of criteria for model selection. The most commonly used information criteria are (i) the Akaike information criterion and (ii) the Bayes factor and/or the Bayesian information criterion (which to some extent approximates the Bayes factor); see Stoica & Selen (2004) for a review.
Akaike information criterion (AIC), a measure of the goodness of fit of an estimated statistical model
Bayes factor
Bayesian information criterion (BIC), also known as the Schwarz information criterion, a statistical criterion for model selection
Bridge criterion (BC), a statistical criterion that can attain the better of the performances of AIC and BIC regardless of the appropriateness of the model specification.
Cross-validation
Deviance information criterion (DIC), another Bayesian oriented model selection criterion
False discovery rate
Focused information criterion (FIC), a selection criterion sorting statistical models by their effectiveness for a given focus parameter
Hannan–Quinn information criterion, an alternative to the Akaike and Bayesian criteria
Kashyap information criterion (KIC), a powerful alternative to AIC and BIC, because KIC uses the Fisher information matrix
Likelihood-ratio test
Mallows's Cp
Minimum description length
Minimum message length (MML)
PRESS statistic, also known as the PRESS criterion
Structural risk minimization
Stepwise regression
Watanabe–Akaike information criterion (WAIC), also called the widely applicable information criterion
Extended Bayesian Information Criterion (EBIC) is an extension of the ordinary Bayesian information criterion (BIC) for models with high-dimensional parameter spaces.
Extended Fisher Information Criterion (EFIC) is a model selection criterion for linear regression models.
Constrained Minimum Criterion (CMC) is a frequentist method for regression model selection based on the following geometric observations. In the parameter vector space of the full model, every vector represents a model. There exists a ball centered on the true parameter vector of the full model in which the true model is the smallest model (in the {\displaystyle L_{0}} norm). As the sample size goes to infinity, the maximum likelihood estimator converges to the true parameter vector and thus pulls the shrinking likelihood-ratio confidence region toward it. The confidence region will be inside the ball with probability tending to one. The CMC selects the smallest model in this region. When the region captures the true parameter vector, the CMC selection is the true model. Hence, the probability that the CMC selection is the true model is greater than or equal to the confidence level.
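As a concrete illustration of how the information criteria above trade goodness of fit against parameter count, here is a minimal sketch of AIC for a model with Gaussian errors (the residuals and parameter counts are invented for illustration):

```python
from math import log, pi

def aic_gaussian(residuals, k):
    """AIC = 2k - 2 ln L-hat, with the Gaussian maximum log-likelihood
    evaluated at sigma^2 = mean squared residual."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    log_lik = -0.5 * n * (log(2 * pi * sigma2) + 1)
    return 2 * k - 2 * log_lik

# Hypothetical residuals from two candidate fits to the same 10 observations:
simple_resid = [0.5, -0.4, 0.6, -0.5, 0.4, -0.6, 0.5, -0.4, 0.3, -0.4]       # k = 2
flexible_resid = [0.45, -0.35, 0.55, -0.5, 0.4, -0.55, 0.5, -0.4, 0.3, -0.4]  # k = 6
print(aic_gaussian(simple_resid, 2))    # lower AIC: the slightly worse fit of the
print(aic_gaussian(flexible_resid, 6))  # simple model beats four extra parameters
```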
Among these criteria, cross-validation is typically the most accurate, and computationally the most expensive, for supervised learning problems.
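A minimal leave-one-out cross-validation sketch (pure Python, with invented data): each candidate model is refit with one observation held out, and the held-out squared prediction errors are averaged. The model with the lower leave-one-out error is preferred:

```python
def fit_mean(xs, ys):
    """Candidate 1: a constant (mean-only) model."""
    m = sum(ys) / len(ys)
    return lambda x: m

def fit_line(xs, ys):
    """Candidate 2: ordinary least-squares straight line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda x: a + b * x

def loo_error(fit, xs, ys):
    """Leave-one-out CV: refit with one point held out, score it, average."""
    err = 0.0
    for i in range(len(xs)):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (model(xs[i]) - ys[i]) ** 2
    return err / len(xs)

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0.1, 1.1, 1.9, 3.2, 3.9, 5.1]   # roughly y = x
print(loo_error(fit_mean, xs, ys))    # large: the mean-only model underfits
print(loo_error(fit_line, xs, ys))    # small: the line generalizes better
```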
Burnham & Anderson (2002, §6.3) say the following:
There is a variety of model selection methods. However, from the point of view of statistical performance of a method, and intended context of its use, there are only two distinct classes of methods: These have been labeled efficient and consistent. (...) Under the frequentist paradigm for model selection one generally has three main approaches: (I) optimization of some selection criteria, (II) tests of hypotheses, and (III) ad hoc methods.
== See also ==
== Notes ==
== References ==
Aho, K.; Derryberry, D.; Peterson, T. (2014), "Model selection for ecologists: the worldviews of AIC and BIC", Ecology, 95 (3): 631–636, Bibcode:2014Ecol...95..631A, doi:10.1890/13-1452.1, PMID 24804445
Akaike, H. (1994), "Implications of informational point of view on the development of statistical science", in Bozdogan, H. (ed.), Proceedings of the First US/JAPAN Conference on The Frontiers of Statistical Modeling: An Informational Approach—Volume 3, Kluwer Academic Publishers, pp. 27–38
Anderson, D.R. (2008), Model Based Inference in the Life Sciences, Springer, ISBN 9780387740751
Ando, T. (2010), Bayesian Model Selection and Statistical Modeling, CRC Press, ISBN 9781439836156
Breiman, L. (2001), "Statistical modeling: the two cultures", Statistical Science, 16: 199–231, doi:10.1214/ss/1009213726
Burnham, K.P.; Anderson, D.R. (2002), Model Selection and Multimodel Inference: A Practical Information-Theoretic Approach (2nd ed.), Springer-Verlag, ISBN 0-387-95364-7 [this has over 38000 citations on Google Scholar]
Chamberlin, T.C. (1890), "The method of multiple working hypotheses", Science, 15 (366): 92–6, Bibcode:1890Sci....15R..92., doi:10.1126/science.ns-15.366.92, PMID 17782687 (reprinted 1965, Science 148: 754–759 [1] doi:10.1126/science.148.3671.754)
Claeskens, G. (2016), "Statistical model choice" (PDF), Annual Review of Statistics and Its Application, 3 (1): 233–256, Bibcode:2016AnRSA...3..233C, doi:10.1146/annurev-statistics-041715-033413
Claeskens, G.; Hjort, N.L. (2008), Model Selection and Model Averaging, Cambridge University Press, ISBN 9781139471800
Cox, D.R. (2006), Principles of Statistical Inference, Cambridge University Press
Ding, J.; Tarokh, V.; Yang, Y. (2018), "Model Selection Techniques - An Overview", IEEE Signal Processing Magazine, 35 (6): 16–34, arXiv:1810.09583, Bibcode:2018ISPM...35f..16D, doi:10.1109/MSP.2018.2867638, S2CID 53035396
Kashyap, R.L. (1982), "Optimal choice of AR and MA parts in autoregressive moving average models", IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-4 (2), IEEE: 99–104, doi:10.1109/TPAMI.1982.4767213, PMID 21869012, S2CID 18484243
Konishi, S.; Kitagawa, G. (2008), Information Criteria and Statistical Modeling, Springer, Bibcode:2007icsm.book.....K, ISBN 9780387718866
Lahiri, P. (2001), Model Selection, Institute of Mathematical Statistics
Leeb, H.; Pötscher, B. M. (2009), "Model Selection", in Anderson, T. G. (ed.), Handbook of Financial Time Series, Springer, pp. 889–925, doi:10.1007/978-3-540-71297-8_39, ISBN 978-3-540-71296-1
Lukacs, P. M.; Thompson, W. L.; Kendall, W. L.; Gould, W. R.; Doherty, P. F. Jr.; Burnham, K. P.; Anderson, D. R. (2007), "Concerns regarding a call for pluralism of information theory and hypothesis testing", Journal of Applied Ecology, 44 (2): 456–460, Bibcode:2007JApEc..44..456L, doi:10.1111/j.1365-2664.2006.01267.x, S2CID 83816981
McQuarrie, Allan D. R.; Tsai, Chih-Ling (1998), Regression and Time Series Model Selection, Singapore: World Scientific, ISBN 981-02-3242-X
Massart, P. (2007), Concentration Inequalities and Model Selection, Springer
Massart, P. (2014), "A non-asymptotic walk in probability and statistics", in Lin, Xihong (ed.), Past, Present, and Future of Statistical Science, Chapman & Hall, pp. 309–321, ISBN 9781482204988
Navarro, D. J. (2019), "Between the Devil and the Deep Blue Sea: Tensions between scientific judgement and statistical model selection", Computational Brain & Behavior, 2: 28–34, doi:10.1007/s42113-018-0019-z, hdl:1959.4/unsworks_64247
Resende, Paulo Angelo Alves; Dorea, Chang Chung Yu (2016), "Model identification using the Efficient Determination Criterion", Journal of Multivariate Analysis, 150: 229–244, arXiv:1409.7441, doi:10.1016/j.jmva.2016.06.002, S2CID 5469654
Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, MR 2791669, S2CID 15900983
Stoica, P.; Selen, Y. (2004), "Model-order selection: a review of information criterion rules" (PDF), IEEE Signal Processing Magazine, 21 (4): 36–47, doi:10.1109/MSP.2004.1311138, S2CID 17338979
Wit, E.; van den Heuvel, E.; Romeijn, J.-W. (2012), "'All models are wrong...': an introduction to model uncertainty" (PDF), Statistica Neerlandica, 66 (3): 217–236, doi:10.1111/j.1467-9574.2012.00530.x, S2CID 7793470
Wit, E.; McCullagh, P. (2001), Viana, M. A. G.; Richards, D. St. P. (eds.), "The extendibility of statistical models", Algebraic Methods in Statistics and Probability, pp. 327–340
Wójtowicz, Anna; Bigaj, Tomasz (2016), "Justification, confirmation, and the problem of mutually exclusive hypotheses", in Kuźniar, Adrian; Odrowąż-Sypniewska, Joanna (eds.), Uncovering Facts and Values, Brill Publishers, pp. 122–143, doi:10.1163/9789004312654_009, ISBN 9789004312654
Owrang, Arash; Jansson, Magnus (2018), "A Model Selection Criterion for High-Dimensional Linear Regression", IEEE Transactions on Signal Processing , 66 (13): 3436–3446, Bibcode:2018ITSP...66.3436O, doi:10.1109/TSP.2018.2821628, ISSN 1941-0476, S2CID 46931136
Gohain, Prakash B.; Jansson, Magnus (2022), "Scale-Invariant and consistent Bayesian information criterion for order selection in linear regression models", Signal Processing, 196: 108499, Bibcode:2022SigPr.19608499G, doi:10.1016/j.sigpro.2022.108499, ISSN 0165-1684, S2CID 246759677
In science, an effective theory is a deliberately limited scientific theory applicable under specific circumstances. In practice, all theories are effective theories, with the name "effective theory" being used to signal that the limitations are built in by design.
An early example is Galileo Galilei's theory of falling bodies. From observed values, Galileo deduced that falling bodies undergo constant acceleration, written here in modern notation:
{\displaystyle {\frac {d^{2}z}{dt^{2}}}=-g}
Within the scope of objects falling on Earth, this theory works well. However, Isaac Newton's law of universal gravitation, a more elaborate but still effective theory, has greater scope at the expense of additional complications. The next layer was Albert Einstein's general relativity, with still greater scope but even more complications.
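The widening of scope can be sketched numerically. Assuming standard values for Earth's gravitational parameter and mean radius, Newton's theory reproduces Galileo's constant g near the surface but departs from it at altitude, where the constant-acceleration effective theory breaks down:

```python
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R = 6.371e6           # mean Earth radius, m

def g_newton(h):
    """Newtonian gravitational acceleration at altitude h above the surface."""
    return GM / (R + h) ** 2

g_surface = g_newton(0.0)   # plays the role of Galileo's constant g (~9.82 m/s^2)
for h in (0.0, 1e4, 4e5):   # sea level, airliner cruise altitude, ISS orbit
    print(f"h = {h:>8.0f} m   g/g0 = {g_newton(h) / g_surface:.3f}")
```

At 10 km the correction is a few tenths of a percent (Galileo's theory is fine); at 400 km it exceeds ten percent (Newton's theory is needed).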
For example, effective field theory is a method used to describe physical theories when there is a hierarchy of scales. Effective field theories in physics can include quantum field theories in which the fields are treated as fundamental, as well as effective theories describing phenomena in solid-state physics. For instance, the BCS theory of superconductivity treats vibrations of the solid-state lattice as a "field" (i.e. without claiming that there is really a field), with its own field quanta, known as phonons. Such "effective particles" derived from effective fields are also known as quasiparticles.
In a certain sense, quantum field theory, and any other currently known physical theory, could be described as "effective", as in being the "low energy limit" of an as-yet unknown theory of everything.
== See also ==
== References ==
Predictive modelling uses statistics to predict outcomes. Most often the event one wants to predict is in the future, but predictive modelling can be applied to any type of unknown event, regardless of when it occurred. For example, predictive models are often used to detect crimes and identify suspects, after the crime has taken place.
In many cases, the model is chosen on the basis of detection theory to try to estimate the probability of an outcome given a set amount of input data: for example, given an email, determining how likely it is to be spam.
Models can use one or more classifiers in trying to determine the probability of a set of data belonging to another set. For example, a model might be used to determine whether an email is spam or "ham" (non-spam).
Depending on definitional boundaries, predictive modelling is synonymous with, or largely overlapping with, the field of machine learning, as it is more commonly referred to in academic or research and development contexts. When deployed commercially, predictive modelling is often referred to as predictive analytics.
Predictive modelling is often contrasted with causal modelling/analysis. In the former, one may be entirely satisfied to make use of indicators of, or proxies for, the outcome of interest. In the latter, one seeks to determine true cause-and-effect relationships. This distinction has given rise to a burgeoning literature in the fields of research methods and statistics and to the common statement that "correlation does not imply causation".
== Models ==
Nearly any statistical model can be used for prediction purposes. Broadly speaking, there are two classes of predictive models: parametric and non-parametric. A third class, semi-parametric models, includes features of both. Parametric models make "specific assumptions with regard to one or more of the population parameters that characterize the underlying distribution(s)". Non-parametric models "typically involve fewer assumptions of structure and distributional form [than parametric models] but usually contain strong assumptions about independencies".
== Applications ==
=== Uplift modelling ===
Uplift modelling is a technique for modelling the change in probability caused by an action. Typically this is a marketing action such as an offer to buy a product, to use a product more, or to re-sign a contract. For example, in a retention campaign you wish to predict the change in probability that a customer will remain a customer if they are contacted. A model of the change in probability allows the retention campaign to be targeted at those customers for whom the change in probability will be beneficial. This allows the retention programme to avoid triggering unnecessary churn or customer attrition without wasting money contacting people who would act anyway.
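As a rough numeric sketch (the segment names and counts are invented for illustration), the uplift for a customer segment can be estimated as the difference in response rates between contacted and uncontacted customers, and only positive-uplift segments are targeted:

```python
def uplift(treated_responses, treated_n, control_responses, control_n):
    """Uplift = P(respond | contacted) - P(respond | not contacted)."""
    return treated_responses / treated_n - control_responses / control_n

# Hypothetical counts per customer segment from a small pilot campaign.
segments = {
    "persuadable":    uplift(60, 100, 30, 100),  # contact helps: +0.30
    "sure_thing":     uplift(90, 100, 90, 100),  # contact changes nothing
    "do_not_disturb": uplift(40, 100, 55, 100),  # contact triggers churn
}

# Target only segments where the estimated uplift is positive.
targets = [s for s, u in segments.items() if u > 0]
print(targets)  # ['persuadable']
```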
=== Archaeology ===
Predictive modelling in archaeology gets its foundations from Gordon Willey's mid-fifties work in the Virú Valley of Peru. Complete, intensive surveys were performed then covariability between cultural remains and natural features such as slope and vegetation were determined. Development of quantitative methods and a greater availability of applicable data led to growth of the discipline in the 1960s and by the late 1980s, substantial progress had been made by major land managers worldwide.
Generally, predictive modelling in archaeology is establishing statistically valid causal or covariable relationships between natural proxies such as soil types, elevation, slope, vegetation, proximity to water, geology, geomorphology, etc., and the presence of archaeological features. Through analysis of these quantifiable attributes from land that has undergone archaeological survey, sometimes the "archaeological sensitivity" of unsurveyed areas can be anticipated based on the natural proxies in those areas. Large land managers in the United States, such as the Bureau of Land Management (BLM), the Department of Defense (DOD), and numerous highway and parks agencies, have successfully employed this strategy. By using predictive modelling in their cultural resource management plans, they are capable of making more informed decisions when planning for activities that have the potential to require ground disturbance and subsequently affect archaeological sites.
=== Customer relationship management ===
Predictive modelling is used extensively in analytical customer relationship management and data mining to produce customer-level models that describe the likelihood that a customer will take a particular action. The actions are usually sales, marketing and customer retention related.
For example, a large consumer organization such as a mobile telecommunications operator will have a set of predictive models for product cross-sell, product deep-sell (or upselling) and churn. It is also now more common for such an organization to have a model of savability using an uplift model. This predicts the likelihood that a customer can be saved at the end of a contract period (the change in churn probability) as opposed to the standard churn prediction model.
=== Auto insurance ===
Predictive modelling is utilised in vehicle insurance to assign risk of incidents to policy holders from information obtained from policy holders. This is extensively employed in usage-based insurance solutions where predictive models utilise telemetry-based data to build a model of predictive risk for claim likelihood. Black-box auto insurance predictive models utilise GPS or accelerometer sensor input only. Some models include a wide range of predictive input beyond basic telemetry including advanced driving behaviour, independent crash records, road history, and user profiles to provide improved risk models.
=== Health care ===
In 2009 Parkland Health & Hospital System began analyzing electronic medical records in order to use predictive modeling to help identify patients at high risk of readmission. Initially, the hospital focused on patients with congestive heart failure, but the program has expanded to include patients with diabetes, acute myocardial infarction, and pneumonia.
In 2018, Banerjee et al. proposed a deep learning model for estimating short-term life expectancy (>3 months) of patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). It achieved an area under the ROC (receiver operating characteristic) curve of 0.89. To provide explainability, they developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. The high accuracy and explainability of the PPES-Met model may enable it to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians.
The first clinical prediction model reporting guidelines were published in 2015 (Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD)), and have since been updated.
Predictive modelling has been used to estimate surgery duration.
=== Algorithmic trading ===
Predictive modeling in trading is a modeling process wherein the probability of an outcome is predicted using a set of predictor variables. Predictive models can be built for different assets like stocks, futures, currencies, commodities etc. Predictive modeling is still extensively used by trading firms to devise strategies and trade. It utilizes mathematically advanced software to evaluate indicators on price, volume, open interest and other historical data, to discover repeatable patterns.
=== Lead tracking systems ===
Predictive modelling gives lead generators a head start by forecasting data-driven outcomes for each potential campaign. This method saves time and exposes potential blind spots to help clients make smarter decisions.
=== Notable failures of predictive modeling ===
Although not widely discussed by the mainstream predictive modeling community, predictive modeling is a methodology that has been widely used in the financial industry in the past, and some of its major failures contributed to the 2008 financial crisis. These failures exemplify the danger of relying exclusively on models that are essentially backward-looking in nature. The following examples are by no means a complete list:
Bond rating. S&P, Moody's and Fitch quantify the probability of default of bonds with discrete variables called rating. The rating can take on discrete values from AAA down to D. The rating is a predictor of the risk of default based on a variety of variables associated with the borrower and historical macroeconomic data. The rating agencies failed with their ratings on the US$600 billion mortgage backed Collateralized Debt Obligation (CDO) market. Almost the entire AAA sector (and the super-AAA sector, a new rating the rating agencies provided to represent super safe investment) of the CDO market defaulted or severely downgraded during 2008, many of which obtained their ratings less than just a year previously.
So far, no statistical models that attempt to predict equity market prices based on historical data are considered to consistently make correct predictions over the long term. One particularly memorable failure is that of Long Term Capital Management, a fund that hired highly qualified analysts, including a Nobel Memorial Prize in Economic Sciences winner, to develop a sophisticated statistical model that predicted the price spreads between different securities. The models produced impressive profits until a major debacle that caused the then Federal Reserve chairman Alan Greenspan to step in to broker a rescue plan by the Wall Street broker dealers in order to prevent a meltdown of the bond market.
== Possible fundamental limitations of predictive models based on data fitting ==
History cannot always accurately predict the future. Using relations derived from historical data to predict the future implicitly assumes there are certain lasting conditions or constants in a complex system. This almost always leads to some imprecision when the system involves people.
Unknown unknowns are an issue. In all data collection, the collector first defines the set of variables for which data is collected. However, no matter how extensive the collector considers his/her selection of the variables, there is always the possibility of new variables that have not been considered or even defined, yet are critical to the outcome.
Algorithms can be defeated adversarially. After an algorithm becomes an accepted standard of measurement, it can be taken advantage of by people who understand the algorithm and have an incentive to fool or manipulate the outcome. This is what happened to the CDO rating described above. The CDO dealers actively tailored the inputs to the rating agencies' models so as to reach an AAA or super-AAA rating on the CDOs they were issuing, by cleverly manipulating variables that were "unknown" to the rating agencies' "sophisticated" models.
== See also ==
Calibration (statistics)
Prediction interval
Predictive analytics
Predictive inference
Statistical learning theory
Statistical model
== References ==
== Further reading ==
Clarke, Bertrand S.; Clarke, Jennifer L. (2018), Predictive Statistics, Cambridge University Press
Iglesias, Pilar; Sandoval, Mônica C.; Pereira, Carlos Alberto de Bragança (1993), "Predictive likelihood in finite populations", Brazilian Journal of Probability and Statistics, 7 (1): 65–82, JSTOR 43600831
Kelleher, John D.; Mac Namee, Brian; D'Arcy, Aoife (2015), Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, worked Examples and Case Studies, MIT Press
Kuhn, Max; Johnson, Kjell (2013), Applied Predictive Modeling, Springer
Shmueli, G. (2010), "To explain or to predict?", Statistical Science, 25 (3): 289–310, arXiv:1101.0891, doi:10.1214/10-STS330, S2CID 15900983
In statistics, model validation is the task of evaluating whether a chosen statistical model is appropriate or not. Oftentimes in statistical inference, inferences from models that appear to fit their data may be flukes, resulting in a misunderstanding by researchers of the actual relevance of their model. To combat this, model validation is used to test whether a statistical model can hold up to permutations in the data. Model validation is also called model criticism or model evaluation.
This topic is not to be confused with the closely related task of model selection, the process of discriminating between multiple candidate models: model validation does not concern so much the conceptual design of models as it tests only the consistency between a chosen model and its stated outputs.
There are many ways to validate a model. Residual plots plot the difference between the actual data and the model's predictions: correlations in the residual plots may indicate a flaw in the model. Cross validation is a method of model validation that iteratively refits the model, each time leaving out just a small sample and comparing whether the samples left out are predicted by the model: there are many kinds of cross validation. Predictive simulation is used to compare simulated data to actual data. External validation involves fitting the model to new data. Akaike information criterion estimates the quality of a model.
== Overview ==
Model validation comes in many forms, and the specific method a researcher uses is often constrained by their research design. In other words, there is no one-size-fits-all method for validating a model. For example, a researcher operating with a very limited set of data, but with strong prior assumptions about that data, may consider validating the fit of their model by using a Bayesian framework and testing the fit under various prior distributions. However, a researcher with a lot of data who is testing multiple nested models may find that these conditions lend themselves toward cross validation and possibly a leave-one-out test. These are two abstract examples, and any actual model validation will have to consider far more intricacies than described here, but they illustrate that model validation methods are always circumstantial.
In general, models can be validated using existing data or with new data; both methods are discussed in the following subsections, and a note of caution is provided, too.
=== Validation with existing data ===
Validation based on existing data involves analyzing the goodness of fit of the model or analyzing whether the residuals seem to be random (i.e. residual diagnostics). This method involves analyzing the model's closeness to the data and trying to understand how well the model predicts its own data. One example of this method is in Figure 1, which shows a polynomial function fit to some data. We see that the polynomial function does not conform well to the data, which appears linear, and might invalidate this polynomial model.
Commonly, statistical models on existing data are validated using a validation set, which may also be referred to as a holdout set. A validation set is a set of data points that the user leaves out when fitting a statistical model. After the statistical model is fitted, the validation set is used as a measure of the model's error. If the model fits well on the initial data but has a large error on the validation set, this is a sign of overfitting.
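The holdout idea can be sketched in a few lines. The data, the split sizes, and the two candidate polynomial models below are hypothetical choices for illustration, not a prescribed recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a linear trend with noise
x = np.linspace(0.0, 1.0, 40)
y = 2.0 * x + rng.normal(0.0, 0.1, size=x.size)

# Randomly hold out 10 of the 40 points as a validation set
idx = rng.permutation(x.size)
train_idx, val_idx = idx[:30], idx[30:]

def mse(coeffs, xs, ys):
    """Mean squared error of a polynomial model on the given points."""
    return float(np.mean((np.polyval(coeffs, xs) - ys) ** 2))

# A simple candidate model (degree 1) and an overly flexible one (degree 9)
simple = np.polyfit(x[train_idx], y[train_idx], 1)
flexible = np.polyfit(x[train_idx], y[train_idx], 9)

# The flexible model fits the training points at least as well,
# but a much larger error on the held-out points would signal overfitting.
for model in (simple, flexible):
    print(mse(model, x[train_idx], y[train_idx]),
          mse(model, x[val_idx], y[val_idx]))
```

The comparison between training error and holdout error is the whole point: a model that only looks good on the data it was fitted to has not been validated.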
=== Validation with new data ===
If new data becomes available, an existing model can be validated by assessing whether the new data is predicted by the old model. If the new data is not predicted by the old model, then the model might not be valid for the researcher's goals.
With this in mind, a modern approach to validating a neural network is to test its performance on domain-shifted data. This ascertains whether the model learned domain-invariant features.
=== A note of caution ===
A model can be validated only relative to some application area. A model that is valid for one application might be invalid for some other applications. As an example, consider the curve in Figure 1: if the application only used inputs from the interval [0, 2], then the curve might well be an acceptable model.
== Methods for validating ==
When doing a validation, there are three notable causes of potential difficulty, according to the Encyclopedia of Statistical Sciences. The three causes are these: lack of data; lack of control of the input variables; uncertainty about the underlying probability distributions and correlations. The usual methods for dealing with difficulties in validation include the following: checking the assumptions made in constructing the model; examining the available data and related model outputs; applying expert judgment. Note that expert judgment commonly requires expertise in the application area.
Expert judgment can sometimes be used to assess the validity of a prediction without obtaining real data: e.g. for the curve in Figure 1, an expert might well be able to assess that a substantial extrapolation will be invalid. Additionally, expert judgment can be used in Turing-type tests, where experts are presented with both real data and related model outputs and then asked to distinguish between the two.
For some classes of statistical models, specialized methods of performing validation are available. As an example, if the statistical model was obtained via a regression, then specialized analyses for regression model validation exist and are generally employed.
=== Residual diagnostics ===
Residual diagnostics comprise analyses of the residuals to determine whether the residuals seem to be effectively random. Such analyses typically requires estimates of the probability distributions for the residuals. Estimates of the residuals' distributions can often be obtained by repeatedly running the model, i.e. by using repeated stochastic simulations (employing a pseudorandom number generator for random variables in the model).
If the statistical model was obtained via a regression, then regression-residual diagnostics exist and may be used; such diagnostics have been well studied.
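A minimal sketch of such a diagnostic, using hypothetical data and a simple lag-1 correlation of the residuals as the randomness check (a real analysis would use fuller diagnostics):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical data that is actually quadratic
x = np.linspace(-1.0, 1.0, 60)
y = x ** 2 + rng.normal(0.0, 0.05, size=x.size)

def lag1_corr(r):
    """Lag-1 sample correlation of a residual series; near 0 if residuals look random."""
    r = r - r.mean()
    return float(np.sum(r[:-1] * r[1:]) / np.sum(r * r))

# Residuals of a straight-line fit retain systematic structure...
bad_resid = y - np.polyval(np.polyfit(x, y, 1), x)
# ...while residuals of the correct quadratic fit look like noise.
good_resid = y - np.polyval(np.polyfit(x, y, 2), x)
```

A large lag-1 correlation in `bad_resid` reflects the curvature the linear model failed to capture, which is exactly the kind of non-randomness residual diagnostics are meant to expose.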
=== Cross validation ===
Cross validation is a method of sampling that involves leaving some parts of the data out of the fitting process and then seeing whether the data that were left out are close to or far from where the model predicts they would be. In practice, cross validation techniques fit the model many times, each time with a portion of the data, and compare each fit to the portion it did not use. If the models rarely describe the data they were not trained on, then the model is probably wrong.
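The procedure above can be sketched as a small k-fold routine. The data, the fold count, and the candidate polynomial degrees are hypothetical choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: a linear trend plus noise
x = np.linspace(0.0, 1.0, 30)
y = 1.5 * x + rng.normal(0.0, 0.2, size=x.size)

def kfold_mse(degree, k=5):
    """Average held-out mean squared error of a polynomial fit over k folds."""
    idx = rng.permutation(x.size)
    errors = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)      # fit on everything except this fold
        coeffs = np.polyfit(x[train], y[train], degree)
        pred = np.polyval(coeffs, x[fold])   # predict the left-out portion
        errors.append(np.mean((pred - y[fold]) ** 2))
    return float(np.mean(errors))

# Compare candidate models by their cross-validated error
scores = {d: kfold_mse(d) for d in (1, 3, 9)}
```

The model with the lowest cross-validated error is the one that best describes data it was not trained on, which is the criterion cross validation optimizes for.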
== See also ==
== References ==
== Further reading ==
Barlas, Y. (1996), "Formal aspects of model validity and validation in system dynamics", System Dynamics Review, 12 (3): 183–210, doi:10.1002/(SICI)1099-1727(199623)12:3<183::AID-SDR103>3.0.CO;2-4
Good, P. I.; Hardin, J. W. (2012), "Chapter 15: Validation", Common Errors in Statistics (Fourth ed.), John Wiley & Sons, pp. 277–285
Huber, P. J. (2002), "Chapter 3: Approximate models", in Huber-Carol, C.; Balakrishnan, N.; Nikulin, M. S.; Mesbah, M. (eds.), Goodness-of-Fit Tests and Model Validity, Springer, pp. 25–41
== External links ==
How can I tell if a model fits my data? —Handbook of Statistical Methods (NIST)
Hicks, Dan (July 14, 2017). "What are core statistical model validation techniques?". Stack Exchange. | Wikipedia/Statistical_model_validation |
Blockmodeling is a set of methods, or a coherent framework, used for analyzing social structure and for setting procedures for partitioning (clustering) a social network's units (nodes, vertices, actors), based on specific patterns, which form a distinctive structure through interconnectivity. It is primarily used in statistics, machine learning and network science.
As an empirical procedure, blockmodeling assumes that all the units in a specific network can be grouped together to the extent to which they are equivalent. The equivalence can be structural, regular or generalized. Using blockmodeling, a network can be analyzed through newly created blockmodels, which transform a large and complex network into a smaller and more comprehensible one. At the same time, blockmodeling is used to operationalize social roles.
While some contend that blockmodeling is just a set of clustering methods, Bonacich and McConaghy state that "it is a theoretically grounded and algebraic approach to the analysis of the structure of relations". Blockmodeling's unique ability lies in the fact that it considers the structure not just as a set of direct relations, but also takes into account all other possible compound relations that are based on the direct ones.
The principles of blockmodeling were first introduced by François Lorrain and Harrison C. White in 1971. Blockmodeling is considered "an important set of network analytic tools" as it deals with the delineation of role structures (the well-defined places in social structures, also known as positions) and with discerning the fundamental structure of social networks.: 2, 3  According to Batagelj, the primary "goal of blockmodeling is to reduce a large, potentially incoherent network to a smaller comprehensible structure that can be interpreted more readily". Blockmodeling was at first used for analysis in sociometry and psychometrics, but has now spread also to other sciences.
== Definition ==
A network as a system is composed of (or defined by) two different sets: one set of units (nodes, vertices, actors) and one set of links between the units. Using both sets, it is possible to create a graph, describing the structure of the network.
During blockmodeling, the researcher is faced with two problems: how to partition the units (e.g., how to determine the clusters (or classes), that then form vertices in a blockmodel) and then how to determine the links in the blockmodel (and at the same time the values of these links).
In the social sciences, the networks are usually social networks, composed of several individuals (units) and selected social relationships among them (links). Real-world networks can be large and complex; blockmodeling is used to simplify them into smaller structures that can be easier to interpret. Specifically, blockmodeling partitions the units into clusters and then determines the ties among the clusters. At the same time, blockmodeling can be used to explain the social roles existing in the network, as it is assumed that the created cluster of units mimics (or is closely associated with) the units' social roles.
Blockmodeling can thus be defined as a set of approaches for partitioning units into clusters (also known as positions) and links into blocks, which are further defined by the newly obtained clusters. A block (also blockmodel) is defined as a submatrix that shows interconnectivity (links) between nodes present in the same or different clusters. Each of these positions in the cluster is defined by a set of (in)direct ties to and from other social positions. These links (connections) can be directed or undirected; there can be multiple links between the same pair of objects or they can have weights on them. If there are no multiple links in a network, it is called a simple network.: 8
A matrix representation of a graph is composed of ordered units, in rows and columns, based on their names. The ordered units with similar patterns of links are partitioned together in the same clusters. Clusters are then arranged together so that units from the same clusters are placed next to each other, thus preserving interconnectivity. In the next step, the units (from the same clusters) are transformed into a blockmodel. With this, several blockmodels are usually formed, one being a core cluster and the others being cohesive ones; a core cluster is always connected to cohesive ones, while cohesive ones cannot be linked together. Clustering of nodes is based on equivalence, such as structural and regular. The primary objective of the matrix form is to visually present relations between the persons included in the cluster. These ties are coded dichotomously (as present or absent), and the rows in the matrix form indicate the source of the ties, while the columns represent the destination of the ties.
Equivalence can have two basic approaches: the equivalent units have the same connection pattern to the same neighbors or these units have same or similar connection pattern to different neighbors. If the units are connected to the rest of network in identical ways, then they are structurally equivalent. Units can also be regularly equivalent, when they are equivalently connected to equivalent others.
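As a minimal illustration of structural equivalence, the following sketch groups units of a hypothetical 5-node directed network whose rows and columns (ignoring self-ties) coincide, i.e. units connected to the rest of the network in identical ways:

```python
from collections import defaultdict

# Hypothetical 5-node directed network as an adjacency matrix (1 = tie)
A = [
    [0, 0, 1, 1, 0],
    [0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0],
]

def structural_classes(adj):
    """Group units whose outgoing (row) and incoming (column) tie patterns are
    identical, ignoring self-ties: such units are structurally equivalent."""
    n = len(adj)
    classes = defaultdict(list)
    for i in range(n):
        row = tuple(adj[i][j] for j in range(n) if j != i)
        col = tuple(adj[j][i] for j in range(n) if j != i)
        classes[(row, col)].append(i)
    return sorted(classes.values())

print(structural_classes(A))  # → [[0, 1], [2, 3], [4]]
```

Here units 0 and 1 send ties to the same targets and receive none, so they occupy the same position; likewise units 2 and 3. Real blockmodeling software relaxes this exact-match criterion into (dis)similarity measures.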
With blockmodeling, it is necessary to consider the issue of results being affected by measurement errors in the initial stage of acquiring the data.
== Different approaches ==
A different approach is necessary depending on what kind of network is undergoing blockmodeling. Networks can be one–mode or two–mode. In the former, all units can be connected to any other unit and the units are of the same type, while in the latter the units are connected only to unit(s) of a different type.: 6–10  Regarding relationships between units, networks can be single–relational or multi–relational. Furthermore, networks can be temporal or multilevel, and also binary (only 0 and 1) or signed (allowing negative ties)/valued (other values are possible) networks.
Different approaches to blockmodeling can be grouped into two main classes: deterministic blockmodeling and stochastic blockmodeling approaches. Deterministic blockmodeling is then further divided into direct and indirect blockmodeling approaches.
Among direct blockmodeling approaches are: structural equivalence and regular equivalence. Structural equivalence is a state when units are connected to the rest of the network in identical ways, while regular equivalence occurs when units are equally related to equivalent others (units do not necessarily share neighbors, but have neighbors that are themselves similar).: 24
Indirect blockmodeling approaches, where partitioning is dealt with as a traditional cluster analysis problem (measuring (dis)similarity results in a (dis)similarity matrix), are:
conventional blockmodeling,
generalized blockmodeling:
generalized blockmodeling of binary networks,
generalized blockmodeling of valued networks and
generalized homogeneity blockmodeling,
prespecified blockmodeling.
According to Brusco and Steinley (2011), the blockmodeling can be categorized (using a number of dimensions):
deterministic or stochastic blockmodeling,
one–mode or two–mode networks,
signed or unsigned networks,
exploratory or confirmatory blockmodeling.
== Blockmodels ==
Blockmodels (sometimes also block models) are structures in which:
vertices (e.g., units, nodes) are assembled within a cluster, with each cluster identified as a vertex; from such vertices a graph can be constructed;
combinations of all the links (ties) are represented in a block as a single link between positions, while at the same time constructing one tie for each block. In a case when there are no ties in a block, there will be no ties between the two positions that define the block.
Computer programs can partition the social network according to pre-set conditions.: 333 When empirical blocks can be reasonably approximated in terms of ideal blocks, such blockmodels can be reduced to a blockimage, which is a representation of the original network, capturing its underlying 'functional anatomy'. Thus, blockmodels can "permit the data to characterize their own structure", and at the same time not seek to manifest a preconceived structure imposed by the researcher.
Blockmodels can be created indirectly or directly, based on the construction of the criterion function. Indirect construction refers to a function based on a "compatible (dis)similarity measure between pairs of units", while the direct construction is "a function measuring the fit of real blocks induced by a given clustering to the corresponding ideal blocks with perfect relations within each cluster and between clusters according to the considered types of connections (equivalence)".
=== Types ===
Blockmodels can be specified regarding the intuition, substance or the insight into the nature of the studied network; this can result in such models as follows:: 16–24
parent-child role systems,
organizational hierarchies,
systems of ranked clusters,...
== Specialized programs ==
Blockmodeling is done with specialized computer programs, dedicated to the analysis of networks or blockmodeling in particular, as:
Pajek (Vladimir Batagelj and Andrej Mrvar),
R–package Blockmodeling (Aleš Žiberna),
Socnet.se: The blockmodeling console app (Win/Linux/Mac) (Carl Nordlund)
StOCNET (Tom Snijders),...
BLOCKS (Tom Snijders),
CONCOR,
Model and Model2 (Vladimir Batagelj),
== See also ==
Stochastic block model
Mathematical sociology
Role assignment
Multiobjective blockmodeling
Blockmodeling linked networks
== References == | Wikipedia/Blockmodel |
Statistical Science is a review journal published by the Institute of Mathematical Statistics. The founding editor was Morris H. DeGroot, who explained the mission of the journal in his 1986 editorial:
"A central purpose of Statistical Science is to convey the richness, breadth and unity of the field by presenting
the full range of contemporary statistical thought at a modest technical level accessible to the wide community
of practitioners, teachers, researchers and students of statistics and probability."
== Editors ==
2023–2025 Moulinath Banerjee
2020–2022 Sonia Petrone
2017–2019 Cun-Hui Zhang
2014–2016 Peter Green
2011–2013 Jon Wellner
2008–2010 David Madigan
2005–2007 Ed George
2002–2004 George Casella
2001 Morris Eaton
2001 Richard Tweedie
1998–2000 Leon Gleser
1995–1997 Paul Switzer
1992–1994 Robert E. Kass
1989–1991 Carl N. Morris
1985–1989 Morris H. DeGroot
== References ==
== Further reading ==
"Guidelines for Writing for Statistical Science" (PDF), Statistical Science, 9 (4): 591, 1994, JSTOR 2246259, retrieved 28 May 2016
== External links ==
Statistical Science home page | Wikipedia/Statistical_Science |
Autocorrelation, sometimes known as serial correlation in the discrete time case, measures the correlation of a signal with a delayed copy of itself. Essentially, it quantifies the similarity between observations of a random variable at different points in time. The analysis of autocorrelation is a mathematical tool for identifying repeating patterns or hidden periodicities within a signal obscured by noise. Autocorrelation is widely used in signal processing, time domain and time series analysis to understand the behavior of data over time.
Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance.
Various time series models incorporate autocorrelation, such as unit root processes, trend-stationary processes, autoregressive processes, and moving average processes.
== Autocorrelation of stochastic processes ==
In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let {Xₜ} be a random process, and t be any point in time (t may be an integer for a discrete-time process or a real number for a continuous-time process). Then Xₜ is the value (or realization) produced by a given run of the process at time t. Suppose that the process has mean μₜ and variance σₜ² at time t, for each t. Then the definition of the autocorrelation function between times t₁ and t₂ is: p.388 : p.165

{\displaystyle \operatorname {R} _{XX}(t_{1},t_{2})=\operatorname {E} \left[X_{t_{1}}{\overline {X_{t_{2}}}}\right],}

where E is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined.
Subtracting the mean before multiplication yields the auto-covariance function between times t₁ and t₂:: p.392 : p.168

{\displaystyle \operatorname {K} _{XX}(t_{1},t_{2})=\operatorname {E} \left[(X_{t_{1}}-\mu _{t_{1}}){\overline {(X_{t_{2}}-\mu _{t_{2}})}}\right].}
Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law).
=== Definition for wide-sense stationary stochastic process ===
If {Xₜ} is a wide-sense stationary process then the mean μ and the variance σ² are time-independent, and further the autocovariance function depends only on the lag between t₁ and t₂: the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and autocorrelation can be expressed as a function of the time-lag, and that this would be an even function of the lag τ = t₂ − t₁. This gives the more familiar form for the autocorrelation function: p.395

{\displaystyle \operatorname {R} _{XX}(\tau )=\operatorname {E} \left[X_{t+\tau }{\overline {X_{t}}}\right]}

and the auto-covariance function:

{\displaystyle \operatorname {K} _{XX}(\tau )=\operatorname {E} \left[(X_{t+\tau }-\mu ){\overline {(X_{t}-\mu )}}\right].}

In particular, note that

{\displaystyle \operatorname {K} _{XX}(0)=\sigma ^{2}.}
=== Normalization ===
It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably.
The definition of the autocorrelation coefficient of a stochastic process is: p.169

{\displaystyle \rho _{XX}(t_{1},t_{2})={\frac {\operatorname {K} _{XX}(t_{1},t_{2})}{\sigma _{t_{1}}\sigma _{t_{2}}}}={\frac {\operatorname {E} \left[(X_{t_{1}}-\mu _{t_{1}}){\overline {(X_{t_{2}}-\mu _{t_{2}})}}\right]}{\sigma _{t_{1}}\sigma _{t_{2}}}}.}

If the function ρ_XX is well defined, its value must lie in the range [−1, 1], with 1 indicating perfect correlation and −1 indicating perfect anti-correlation.
For a wide-sense stationary (WSS) process, the definition is

{\displaystyle \rho _{XX}(\tau )={\frac {\operatorname {K} _{XX}(\tau )}{\sigma ^{2}}}={\frac {\operatorname {E} \left[(X_{t+\tau }-\mu ){\overline {(X_{t}-\mu )}}\right]}{\sigma ^{2}}}.}
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations.
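The WSS autocorrelation coefficient can be estimated from a sample path by replacing expectations with sample averages. A minimal sketch for a real-valued series (the signal, its period, and the noise level are hypothetical choices):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelation coefficients r(0..max_lag) of a 1-D real series,
    normalized by the overall sample mean and variance (so r(0) = 1)."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    n = x.size
    denom = np.sum(x * x)
    return np.array([np.sum(x[: n - k] * x[k:]) / denom
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(2)
# A noisy periodic signal: the ACF is near 1 at lags equal to the period
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 20) + 0.1 * rng.normal(size=t.size)
acf = sample_acf(signal, 40)
```

For this signal the coefficient is 1 at lag 0 by construction, strongly positive at the period (lag 20) and strongly negative at the half-period (lag 10), revealing the hidden periodicity despite the noise.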
=== Properties ===
==== Symmetry property ====
The fact that the autocorrelation function R_XX is an even function can be stated as: p.171

{\displaystyle \operatorname {R} _{XX}(t_{1},t_{2})={\overline {\operatorname {R} _{XX}(t_{2},t_{1})}}}

respectively for a WSS process:: p.173

{\displaystyle \operatorname {R} _{XX}(\tau )={\overline {\operatorname {R} _{XX}(-\tau )}}.}
==== Maximum at zero ====
For a WSS process:: p.174

{\displaystyle \left|\operatorname {R} _{XX}(\tau )\right|\leq \operatorname {R} _{XX}(0)}

Notice that R_XX(0) is always real.
==== Cauchy–Schwarz inequality ====
The Cauchy–Schwarz inequality for stochastic processes:: p.392

{\displaystyle \left|\operatorname {R} _{XX}(t_{1},t_{2})\right|^{2}\leq \operatorname {E} \left[|X_{t_{1}}|^{2}\right]\operatorname {E} \left[|X_{t_{2}}|^{2}\right]}
==== Autocorrelation of white noise ====
The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at τ = 0 and will be exactly 0 for all other τ.
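The discrete analogue can be checked numerically: the sample autocorrelation of simulated white noise is 1 at lag 0 by construction and close to 0 at every other lag. The sample size and lags below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(3)
noise = rng.normal(size=2000)  # simulated discrete white noise

noise = noise - noise.mean()
denom = np.sum(noise * noise)

# Sample autocorrelation at a few nonzero lags: all near 0,
# in contrast to lag 0, where the normalized value is exactly 1.
acf = {k: float(np.sum(noise[:-k] * noise[k:]) / denom) for k in (1, 5, 20)}
```

The residual values at nonzero lags shrink roughly like 1/√n as the sample grows, which is why long records are needed to distinguish weak correlation from noise.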
==== Wiener–Khinchin theorem ====
The Wiener–Khinchin theorem relates the autocorrelation function R_XX to the power spectral density S_XX via the Fourier transform:

{\displaystyle \operatorname {R} _{XX}(\tau )=\int _{-\infty }^{\infty }S_{XX}(f)e^{i2\pi f\tau }\,{\rm {d}}f}

{\displaystyle S_{XX}(f)=\int _{-\infty }^{\infty }\operatorname {R} _{XX}(\tau )e^{-i2\pi f\tau }\,{\rm {d}}\tau .}
For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only:

{\displaystyle \operatorname {R} _{XX}(\tau )=\int _{-\infty }^{\infty }S_{XX}(f)\cos(2\pi f\tau )\,{\rm {d}}f}

{\displaystyle S_{XX}(f)=\int _{-\infty }^{\infty }\operatorname {R} _{XX}(\tau )\cos(2\pi f\tau )\,{\rm {d}}\tau .}
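A discrete, circular analogue of this relation can be verified numerically: for a finite real sequence, the inverse FFT of the power spectrum |FFT(x)|² equals the circular autocorrelation of x (the test signal below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=128)  # a real-valued test signal
n = x.size

# Circular autocorrelation computed directly from the definition...
acf_direct = np.array([np.sum(x * np.roll(x, -k)) for k in range(n)])

# ...and via the power spectrum, per the discrete Wiener–Khinchin relation.
power = np.abs(np.fft.fft(x)) ** 2
acf_fft = np.fft.ifft(power).real
```

This identity is also why autocorrelations of long signals are computed in practice with FFTs (O(n log n)) rather than with the direct O(n²) sum.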
== Autocorrelation of random vectors ==
The (potentially time-dependent) autocorrelation matrix (also called second moment) of a (potentially time-dependent) random vector X = (X₁, …, Xₙ)ᵀ is an n × n matrix containing as elements the autocorrelations of all pairs of elements of the random vector X. The autocorrelation matrix is used in various digital signal processing algorithms.

For a random vector X = (X₁, …, Xₙ)ᵀ containing random elements whose expected value and variance exist, the autocorrelation matrix is defined by: p.190 : p.334

{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }\triangleq \ \operatorname {E} \left[\mathbf {X} \mathbf {X} ^{\rm {T}}\right]}

where ᵀ denotes the transposed matrix of dimensions n × n.
Written component-wise:

{\displaystyle \operatorname {R} _{\mathbf {X} \mathbf {X} }={\begin{bmatrix}\operatorname {E} [X_{1}X_{1}]&\operatorname {E} [X_{1}X_{2}]&\cdots &\operatorname {E} [X_{1}X_{n}]\\\\\operatorname {E} [X_{2}X_{1}]&\operatorname {E} [X_{2}X_{2}]&\cdots &\operatorname {E} [X_{2}X_{n}]\\\\\vdots &\vdots &\ddots &\vdots \\\\\operatorname {E} [X_{n}X_{1}]&\operatorname {E} [X_{n}X_{2}]&\cdots &\operatorname {E} [X_{n}X_{n}]\\\\\end{bmatrix}}}
If Z is a complex random vector, the autocorrelation matrix is instead defined by

{\displaystyle \operatorname {R} _{\mathbf {Z} \mathbf {Z} }\triangleq \ \operatorname {E} [\mathbf {Z} \mathbf {Z} ^{\rm {H}}].}

Here ᴴ denotes Hermitian transpose.

For example, if X = (X₁, X₂, X₃)ᵀ is a random vector, then R_XX is a 3 × 3 matrix whose (i, j)-th entry is E[XᵢXⱼ].
=== Properties of the autocorrelation matrix ===
The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors.: p.190

The autocorrelation matrix is a positive semidefinite matrix,: p.190 i.e.

{\displaystyle \mathbf {a} ^{\mathrm {T} }\operatorname {R} _{\mathbf {X} \mathbf {X} }\mathbf {a} \geq 0\quad {\text{for all }}\mathbf {a} \in \mathbb {R} ^{n}}

for a real random vector, and respectively

{\displaystyle \mathbf {a} ^{\mathrm {H} }\operatorname {R} _{\mathbf {Z} \mathbf {Z} }\mathbf {a} \geq 0\quad {\text{for all }}\mathbf {a} \in \mathbb {C} ^{n}}

in case of a complex random vector.
All eigenvalues of the autocorrelation matrix are real and non-negative.
The auto-covariance matrix is related to the autocorrelation matrix as follows:

{\displaystyle \operatorname {K} _{\mathbf {X} \mathbf {X} }=\operatorname {E} [(\mathbf {X} -\operatorname {E} [\mathbf {X} ])(\mathbf {X} -\operatorname {E} [\mathbf {X} ])^{\rm {T}}]=\operatorname {R} _{\mathbf {X} \mathbf {X} }-\operatorname {E} [\mathbf {X} ]\operatorname {E} [\mathbf {X} ]^{\rm {T}}}

Respectively for complex random vectors:

{\displaystyle \operatorname {K} _{\mathbf {Z} \mathbf {Z} }=\operatorname {E} [(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])(\mathbf {Z} -\operatorname {E} [\mathbf {Z} ])^{\rm {H}}]=\operatorname {R} _{\mathbf {Z} \mathbf {Z} }-\operatorname {E} [\mathbf {Z} ]\operatorname {E} [\mathbf {Z} ]^{\rm {H}}}
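These properties can be checked empirically. The sketch below estimates R and K from hypothetical samples of a 3-dimensional real random vector (the mixing matrix and sample size are illustrative) and verifies symmetry, positive semidefiniteness, and the relation K = R − E[X]E[X]ᵀ:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical draws of a correlated 3-dimensional real random vector (rows = draws)
mix = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 0.3],
                [0.0, 0.0, 1.0]])
samples = rng.normal(size=(5000, 3)) @ mix

# Empirical autocorrelation matrix R = E[X X^T]
R = samples.T @ samples / samples.shape[0]

# Empirical auto-covariance matrix via K = R - E[X] E[X]^T
mean = samples.mean(axis=0)
K = R - np.outer(mean, mean)

# Eigenvalues of R: all real and non-negative (positive semidefiniteness)
eigvals = np.linalg.eigvalsh(R)
```

The identity between `K` and the directly computed covariance holds exactly in exact arithmetic, so the numerical check only needs floating-point tolerance.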
== Autocorrelation of deterministic signals ==
In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function.
=== Autocorrelation of continuous-time signal ===
Given a signal f(t), the continuous autocorrelation R_ff(τ) is most often defined as the continuous cross-correlation integral of f(t) with itself, at lag τ:: p.411

{\displaystyle R_{ff}(\tau )=\int _{-\infty }^{\infty }f(t+\tau ){\overline {f(t)}}\,{\rm {d}}t}

where {\displaystyle {\overline {f(t)}}} represents the complex conjugate of f(t). Note that the parameter t in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning.
=== Autocorrelation of discrete-time signal ===
The discrete autocorrelation R at lag ℓ for a discrete-time signal y(n) is

{\displaystyle R_{yy}(\ell )=\sum _{n}y(n)\,{\overline {y(n-\ell )}}.}

The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. For wide-sense-stationary random processes, the autocorrelations are defined as
{\displaystyle {\begin{aligned}R_{ff}(\tau )&=\operatorname {E} \left[f(t){\overline {f(t-\tau )}}\right]\\R_{yy}(\ell )&=\operatorname {E} \left[y(n)\,{\overline {y(n-\ell )}}\right].\end{aligned}}}
For processes that are not stationary, these will also be functions of t, or n.
For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to
{\displaystyle {\begin{aligned}R_{ff}(\tau )&=\lim _{T\rightarrow \infty }{\frac {1}{T}}\int _{0}^{T}f(t+\tau ){\overline {f(t)}}\,{\rm {d}}t\\R_{yy}(\ell )&=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{n=0}^{N-1}y(n)\,{\overline {y(n-\ell )}}.\end{aligned}}}
These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes.
Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.)
=== Definition for periodic signals ===
If f is a continuous periodic function of period T, the integration from −∞ to ∞ is replaced by integration over any interval [t₀, t₀ + T] of length T:

{\displaystyle R_{ff}(\tau )\triangleq \int _{t_{0}}^{t_{0}+T}f(t+\tau ){\overline {f(t)}}\,dt}

which is equivalent to

{\displaystyle R_{ff}(\tau )\triangleq \int _{t_{0}}^{t_{0}+T}f(t){\overline {f(t-\tau )}}\,dt}
=== Properties ===
In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes.
A fundamental property of the autocorrelation is symmetry, {\displaystyle R_{ff}(\tau )=R_{ff}(-\tau )}, which is easy to prove from the definition. In the continuous case,
the autocorrelation is an even function {\displaystyle R_{ff}(-\tau )=R_{ff}(\tau )} when f is a real function, and
the autocorrelation is a Hermitian function {\displaystyle R_{ff}(-\tau )=R_{ff}^{*}(\tau )} when f is a complex function.
The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay τ, {\displaystyle |R_{ff}(\tau )|\leq R_{ff}(0)}. This is a consequence of the rearrangement inequality. The same result holds in the discrete case.
The autocorrelation of a periodic function is, itself, periodic with the same period.
The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all τ) is the sum of the autocorrelations of each function separately.
Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation.
By using the symbol ∗ to represent convolution, and letting {\displaystyle g_{-1}} denote the reflection operator defined by {\displaystyle g_{-1}(f)(t)=f(-t)}, the definition of {\displaystyle R_{ff}(\tau )} may be written as:
{\displaystyle R_{ff}(\tau )=(f*g_{-1}({\overline {f}}))(\tau )}
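For a discrete, real-valued signal the same identity says the autocorrelation is the linear convolution of the sequence with its own reversal. A minimal Python sketch (the function names are illustrative, not from any particular library):

```python
def convolve(a, b):
    # full linear convolution of two finite sequences
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def autocorr_via_convolution(x):
    # R_xx = x convolved with its time-reversed (and, for complex signals,
    # conjugated) copy; here x is assumed real, so reversal suffices
    return convolve(x, list(reversed(x)))
```

For x = (2, 3, −1) this yields (−2, 3, 14, 3, −2), the sequence computed by hand in the "Efficient computation" section below.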
== Multi-dimensional autocorrelation ==
Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be
{\displaystyle R(j,k,\ell )=\sum _{n,q,r}x_{n,q,r}\,{\overline {x}}_{n-j,q-k,r-\ell }.}
When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function.
== Efficient computation ==
For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition
{\displaystyle R_{xx}(j)=\sum _{n}x_{n}\,{\overline {x}}_{n-j}}
can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence {\displaystyle x=(2,3,-1)} (i.e. {\displaystyle x_{0}=2,x_{1}=3,x_{2}=-1}, and {\displaystyle x_{i}=0} for all other values of i) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values:
{\displaystyle {\begin{array}{rrrrrr}&2&3&-1\\\times &2&3&-1\\\hline &-2&-3&1\\&&6&9&-3\\+&&&4&6&-2\\\hline &-2&3&14&3&-2\end{array}}}
Thus the required autocorrelation sequence is {\displaystyle R_{xx}=(-2,3,14,3,-2)}, where {\displaystyle R_{xx}(0)=14}, {\displaystyle R_{xx}(-1)=R_{xx}(1)=3}, and {\displaystyle R_{xx}(-2)=R_{xx}(2)=-2},
the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e.
{\displaystyle x=(\ldots ,2,3,-1,2,3,-1,\ldots ),}
then we get a circular autocorrelation (similar to circular convolution) where the left and right tails of the previous autocorrelation sequence will overlap and give
{\displaystyle R_{xx}=(\ldots ,14,1,1,14,1,1,\ldots )}
which has the same period as the signal sequence x.
The procedure can be regarded as an application of the convolution property of Z-transform of a discrete signal.
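The hand calculation above can be checked with a direct implementation of the defining sum. A short illustrative sketch in Python (for a real-valued signal, so the conjugate is omitted):

```python
def autocorr_brute(x):
    # R_xx(j) = sum over n of x[n] * x[n - j], for lags j = -(N-1) .. N-1;
    # the index range keeps both n and n - j inside the signal
    n = len(x)
    return [sum(x[i] * x[i - j] for i in range(max(j, 0), min(n, n + j)))
            for j in range(-(n - 1), n)]
```

For x = (2, 3, −1) this returns (−2, 3, 14, 3, −2), matching the multiplication scheme above, with the symmetric lags appearing on either side of the lag-0 value 14.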
While the brute force algorithm is order n², several efficient algorithms exist which can compute the autocorrelation in order n log(n). For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data X(t) with two fast Fourier transforms (FFT):
{\displaystyle {\begin{aligned}F_{R}(f)&=\operatorname {FFT} [X(t)]\\S(f)&=F_{R}(f)F_{R}^{*}(f)\\R(\tau )&=\operatorname {IFFT} [S(f)]\end{aligned}}}
where IFFT denotes the inverse fast Fourier transform. The asterisk denotes complex conjugate.
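The three steps above can be sketched in pure Python. For clarity a slow O(n²) DFT stands in for the FFT, and the data are zero-padded to twice their length so that the circular correlation implied by the transform reproduces the linear autocorrelation:

```python
import cmath

def dft(x, inverse=False):
    # naive discrete Fourier transform, standing in for an FFT
    n = len(x)
    sign = 1 if inverse else -1
    out = [sum(x[k] * cmath.exp(sign * 2j * cmath.pi * j * k / n)
               for k in range(n)) for j in range(n)]
    return [v / n for v in out] if inverse else out

def autocorr_wiener_khinchin(x):
    n = len(x)
    F = dft(list(x) + [0] * n)              # transform of the zero-padded data
    S = [f * f.conjugate() for f in F]      # power spectrum S(f) = F(f) F*(f)
    r = [v.real for v in dft(S, inverse=True)]
    return r[n + 1:] + r[:n]                # reorder to lags -(n-1) .. n-1
```

For x = (2, 3, −1) this reproduces (−2, 3, 14, 3, −2) up to floating-point rounding; replacing `dft` with a genuine FFT gives the n log(n) cost.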
Alternatively, a multiple τ correlation can be performed by using brute force calculation for low τ values, and then progressively binning the X(t) data with a logarithmic density to compute higher values, resulting in the same n log(n) efficiency, but with lower memory requirements.
== Estimation ==
For a discrete process with known mean and variance for which we observe n observations {\displaystyle \{X_{1},\,X_{2},\,\ldots ,\,X_{n}\}}, an estimate of the autocorrelation coefficient may be obtained as
{\displaystyle {\hat {R}}(k)={\frac {1}{(n-k)\sigma ^{2}}}\sum _{t=1}^{n-k}(X_{t}-\mu )(X_{t+k}-\mu )}
for any positive integer k < n. When the true mean μ and variance σ² are known, this estimate is unbiased. If the true mean and variance of the process are not known, there are several possibilities:
If μ and σ² are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate.
A periodogram-based estimate replaces n − k in the above formula with n. This estimate is always biased; however, it usually has a smaller mean squared error.
Other possibilities derive from treating the two portions of data {\displaystyle \{X_{1},\,X_{2},\,\ldots ,\,X_{n-k}\}} and {\displaystyle \{X_{k+1},\,X_{k+2},\,\ldots ,\,X_{n}\}} separately and calculating separate sample means and/or sample variances for use in defining the estimate.
The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of k, then forms a valid autocorrelation function, in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the X's, the variance calculated may turn out to be negative.
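As a concrete sketch of the known-mean, known-variance case, the estimator above translates directly into code (zero-based indexing shifts the sum to t = 0, …, n − k − 1):

```python
def acf_estimate(x, k, mu, sigma2):
    # unbiased estimate when mu and sigma^2 are the true mean and variance:
    # R_hat(k) = sum_t (x_t - mu)(x_{t+k} - mu) / ((n - k) * sigma^2)
    n = len(x)
    s = sum((x[t] - mu) * (x[t + k] - mu) for t in range(n - k))
    return s / ((n - k) * sigma2)
```

For the alternating series 1, −1, 1, −1 with μ = 0 and σ² = 1, the lag-0 estimate is 1 and the lag-1 estimate is −1, as expected for a perfectly anticorrelated sequence.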
== Regression analysis ==
In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used.
In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. However, the Durbin–Watson statistic can be linearly mapped to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where k is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR², where T is the sample size and R² is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as
{\displaystyle \chi ^{2}} with k degrees of freedom.
Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent).
In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have
{\displaystyle R(\tau )\neq 0} for τ = 0, 1, …, q, and {\displaystyle R(\tau )=0} for τ > q.
== Applications ==
Autocorrelation's ability to find repeating patterns in data yields many applications, including:
Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions.
Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators.
Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated.
Utilized in the GPS system to correct for the propagation delay, or time shift, between the transmission of the carrier signal at the satellites and its reception on the ground. The receiver generates a replica of the 1,023-bit C/A (Coarse/Acquisition) code, producing code chips [−1, 1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along to accommodate the Doppler shift in the incoming satellite signal, until the receiver replica and the satellite signal codes match up.
The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density.
In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics.
In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field.
In astronomy, autocorrelation can determine the frequency of pulsars.
In music, autocorrelation (when applied at time scales smaller than a second) is used as a pitch detection algorithm for both instrument tuners and "Auto Tune" (used as a distortion effect or to fix intonation). When applied at time scales larger than a second, autocorrelation can identify the musical beat, for example to determine tempo.
Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone.
In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population.
The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide.
In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low mass X-ray binaries.
In panel data, spatial autocorrelation refers to correlation of a variable with itself through space.
In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination.
In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground.
In medical ultrasound imaging, autocorrelation is used to visualize blood flow.
In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset.
In numerical relays, autocorrelation has been used to accurately measure power system frequency.
== Serial dependence ==
Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields however, the two terms are used as synonyms.
A time series of a random variable has serial dependence if the value at some time t in the series is statistically dependent on the value at another time s. A series is serially independent if there is no dependence between any pair.
If a time series {\displaystyle \left\{X_{t}\right\}} is stationary, then statistical dependence between the pair {\displaystyle (X_{t},X_{s})} would imply that there is statistical dependence between all pairs of values at the same lag τ = s − t.
== See also ==
== References ==
== Further reading ==
Kmenta, Jan (1986). Elements of Econometrics (Second ed.). New York: Macmillan. pp. 298–334. ISBN 978-0-02-365070-3.
Marno Verbeek (10 August 2017). A Guide to Modern Econometrics. Wiley. ISBN 978-1-119-40110-0.
Soltanalian, Mojtaba; Stoica, Petre (2012). "Computational Design of Sequences with Good Correlation Properties". IEEE Transactions on Signal Processing. 60 (5): 2180. Bibcode:2012ITSP...60.2180S. doi:10.1109/TSP.2012.2186134.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
Klapetek, Petr (2018). Quantitative Data Processing in Scanning Probe Microscopy: SPM Applications for Nanometrology (Second ed.). Elsevier. pp. 108–112. ISBN 9780128133477.
Weisstein, Eric W. "Autocorrelation". MathWorld.
In time series analysis, the moving-average model (MA model), also known as moving-average process, is a common approach for modeling univariate time series. The moving-average model specifies that the output variable depends linearly on the current and past values of a stochastic (imperfectly predictable) error term.
Together with the autoregressive (AR) model, the moving-average model is a special case and key component of the more general ARMA and ARIMA models of time series, which have a more complicated stochastic structure. Contrary to the AR model, the finite MA model is always stationary.
The moving-average model should not be confused with the moving average, a distinct concept despite some similarities.
== Definition ==
The notation MA(q) refers to the moving average model of order q:
{\displaystyle X_{t}=\mu +\varepsilon _{t}+\theta _{1}\varepsilon _{t-1}+\cdots +\theta _{q}\varepsilon _{t-q}=\mu +\sum _{i=1}^{q}\theta _{i}\varepsilon _{t-i}+\varepsilon _{t},}
where μ is the mean of the series, {\displaystyle \theta _{1},...,\theta _{q}} are the coefficients of the model, and {\displaystyle \varepsilon _{t},\varepsilon _{t-1},...,\varepsilon _{t-q}} are the error terms. The value of q is called the order of the MA model. This can be equivalently written in terms of the backshift operator B as
{\displaystyle X_{t}=\mu +(1+\theta _{1}B+\cdots +\theta _{q}B^{q})\varepsilon _{t}.}
Thus, a moving-average model is conceptually a linear regression of the current value of the series against current and previous (observed) white noise error terms or random shocks. The random shocks at each point are assumed to be mutually independent and to come from the same distribution, typically a normal distribution, with location at zero and constant scale.
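The defining equation can be evaluated directly once the shock sequence is given. A minimal sketch (illustrative names; shocks before the start of the series are taken to be zero):

```python
def ma_process(theta, shocks, mu=0.0):
    # X_t = mu + eps_t + theta_1*eps_{t-1} + ... + theta_q*eps_{t-q}
    q = len(theta)
    out = []
    for t in range(len(shocks)):
        x = mu + shocks[t]
        for i in range(1, q + 1):
            if t - i >= 0:          # shocks with negative index are zero
                x += theta[i - 1] * shocks[t - i]
        out.append(x)
    return out
```

Feeding in a single unit shock at t = 0 with θ = (0.5, 0.25) produces the output 1, 0.5, 0.25, 0, …: the shock dies out after exactly q = 2 periods, which is the finite-memory behavior discussed in the Interpretation section below.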
== Interpretation ==
The moving-average model is essentially a finite impulse response filter applied to white noise, with some additional interpretation placed on it. The role of the random shocks in the MA model differs from their role in the autoregressive (AR) model in two ways. First, they are propagated to future values of the time series directly: for example, {\displaystyle \varepsilon _{t-1}} appears directly on the right side of the equation for {\displaystyle X_{t}}. In contrast, in an AR model {\displaystyle \varepsilon _{t-1}} does not appear on the right side of the {\displaystyle X_{t}} equation, but it does appear on the right side of the {\displaystyle X_{t-1}} equation, and {\displaystyle X_{t-1}} appears on the right side of the {\displaystyle X_{t}} equation, giving only an indirect effect of {\displaystyle \varepsilon _{t-1}} on {\displaystyle X_{t}}. Second, in the MA model a shock affects {\displaystyle X} values only for the current period and q periods into the future; in contrast, in the AR model a shock affects {\displaystyle X} values infinitely far into the future, because {\displaystyle \varepsilon _{t}} affects {\displaystyle X_{t}}, which affects {\displaystyle X_{t+1}}, which affects {\displaystyle X_{t+2}}, and so on forever (see Impulse response).
== Fitting the model ==
Fitting a moving-average model is generally more complicated than fitting an autoregressive model. This is because the lagged error terms are not observable. This means that iterative non-linear fitting procedures need to be used in place of linear least squares. Moving average models are linear combinations of past white noise terms, while autoregressive models are linear combinations of past time series values. ARMA models are more complicated than pure AR and MA models, as they combine both autoregressive and moving average components.
The autocorrelation function (ACF) of an MA(q) process is zero at lag q + 1 and greater. Therefore, we determine the appropriate maximum lag for the estimation by examining the sample autocorrelation function to see where it becomes insignificantly different from zero for all lags beyond a certain lag, which is designated as the maximum lag q.
Sometimes the ACF and partial autocorrelation function (PACF) will suggest that an MA model would be a better model choice and sometimes both AR and MA terms should be used in the same model (see Box–Jenkins method).
Autoregressive Integrated Moving Average (ARIMA) models are an alternative to segmented regression that can also be used for fitting a moving-average model.
== See also ==
Autoregressive–moving-average model
Autoregressive integrated moving average
Autoregressive model
Finite impulse response
Infinite impulse response
== References ==
== Further reading ==
Enders, Walter (2004). "Stationary Time-Series Models". Applied Econometric Time Series (Second ed.). New York: Wiley. pp. 48–107. ISBN 0-471-45173-8.
== External links ==
Common approaches to univariate time series
This article incorporates public domain material from the National Institute of Standards and Technology
Combinatorics of Experimental Design is a textbook on the design of experiments, a subject that connects applications in statistics to the theory of combinatorial mathematics. It was written by mathematician Anne Penfold Street and her daughter, statistician Deborah Street, and published in 1987 by the Oxford University Press under their Clarendon Press imprint.
== Topics ==
The book has 15 chapters. Its introductory chapter covers the history and applications of experimental designs; it has five chapters on balanced incomplete block designs and their existence, and three on Latin squares and mutually orthogonal Latin squares. Other chapters cover resolvable block designs, finite geometry, symmetric and asymmetric factorial designs, and partially balanced incomplete block designs.
After this standard material, the remaining two chapters cover less-standard material. The penultimate chapter covers miscellaneous types of designs including circular block designs, incomplete Latin squares, and serially balanced sequences. The final chapter describes specialized designs for agricultural applications. The coverage of the topics in the book includes examples, clearly written proofs, historical references, and exercises for students.
== Audience and reception ==
Although intended as an advanced undergraduate textbook, this book can also be used as a graduate text, or as a reference for researchers. Its main prerequisites are some knowledge of linear algebra and linear models, but some topics touch on abstract algebra and number theory as well.
Although disappointed by the omission of some topics, reviewer D. V. Chopra writes that the book "succeeds remarkably well" in connecting the separate worlds of combinatorics and statistics.
And Marshall Hall, reviewing the book, called it "very readable" and "very satisfying".
== Related books ==
Other books on the combinatorics of experimental design include Statistical Design and Analysis of Experiments (John, 1971), Constructions and Combinatorial Problems in Design of Experiments (Rao, 1971), Design Theory (Beth, Jungnickel, and Lenz, 1985), and Combinatorial Theory and Statistical Design (Constantine, 1987). Compared to these, Combinatorics of Experimental Design makes the combinatorial aspects of the subjects more accessible to statisticians, and its last two chapters contain material not covered by the other books. However, it omits several other topics that were included in Rao's more comprehensive text.
== See also ==
The Design of Experiments (1935), by Ronald Fisher
== References ==
In mathematics, a function on the real numbers is called a step function if it can be written as a finite linear combination of indicator functions of intervals. Informally speaking, a step function is a piecewise constant function having only finitely many pieces.
== Definition and first consequences ==
A function {\displaystyle f\colon \mathbb {R} \rightarrow \mathbb {R} } is called a step function if it can be written as
{\displaystyle f(x)=\sum \limits _{i=0}^{n}\alpha _{i}\chi _{A_{i}}(x)}
, for all real numbers x, where n ≥ 0, the {\displaystyle \alpha _{i}} are real numbers, the {\displaystyle A_{i}} are intervals, and {\displaystyle \chi _{A}} is the indicator function of A:
{\displaystyle \chi _{A}(x)={\begin{cases}1&{\text{if }}x\in A\\0&{\text{if }}x\notin A\\\end{cases}}}
In this definition, the intervals {\displaystyle A_{i}} can be assumed to have the following two properties:
The intervals are pairwise disjoint: {\displaystyle A_{i}\cap A_{j}=\emptyset } for i ≠ j
The union of the intervals is the entire real line: {\displaystyle \bigcup _{i=0}^{n}A_{i}=\mathbb {R} .}
Indeed, if that is not the case to start with, a different set of intervals can be picked for which these assumptions hold. For example, the step function {\displaystyle f=4\chi _{[-5,1)}+3\chi _{(0,6)}}
can be written as
{\displaystyle f=0\chi _{(-\infty ,-5)}+4\chi _{[-5,0]}+7\chi _{(0,1)}+3\chi _{[1,6)}+0\chi _{[6,\infty )}.}
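The rewriting above can be verified pointwise. A small Python sketch that builds indicator functions with configurable endpoints and evaluates f = 4χ_[−5,1) + 3χ_(0,6):

```python
def chi(lo, hi, lo_closed=True, hi_closed=False):
    # indicator function of an interval, 1 inside and 0 outside
    def indicator(x):
        left = x >= lo if lo_closed else x > lo
        right = x <= hi if hi_closed else x < hi
        return 1 if left and right else 0
    return indicator

# f = 4*chi_[-5,1) + 3*chi_(0,6)
c1 = chi(-5, 1, lo_closed=True, hi_closed=False)
c2 = chi(0, 6, lo_closed=False, hi_closed=False)

def f(x):
    return 4 * c1(x) + 3 * c2(x)
```

Evaluating f at points in each of the five disjoint intervals of the rewritten form gives 0, 4, 7, 3, 0 respectively, confirming that the two representations agree.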
=== Variations in the definition ===
Sometimes, the intervals are required to be right-open or allowed to be singleton. The condition that the collection of intervals must be finite is often dropped, especially in school mathematics, though it must still be locally finite, resulting in the definition of piecewise constant functions.
== Examples ==
A constant function is a trivial example of a step function. Then there is only one interval, {\displaystyle A_{0}=\mathbb {R} .}
The sign function sgn(x), which is −1 for negative numbers and +1 for positive numbers, is the simplest non-constant step function.
The Heaviside function H(x), which is 0 for negative numbers and 1 for positive numbers, is equivalent to the sign function, up to a shift and scale of range ({\displaystyle H=(\operatorname {sgn} +1)/2}). It is the mathematical concept behind some test signals, such as those used to determine the step response of a dynamical system.
The rectangular function, the normalized boxcar function, is used to model a unit pulse.
=== Non-examples ===
The integer part function is not a step function according to the definition of this article, since it has an infinite number of intervals. However, some authors also define step functions with an infinite number of intervals.
== Properties ==
The sum and product of two step functions is again a step function. The product of a step function with a number is also a step function. As such, the step functions form an algebra over the real numbers.
A step function takes only a finite number of values. If the intervals {\displaystyle A_{i}}, for {\displaystyle i=0,1,\dots ,n} in the above definition of the step function are disjoint and their union is the real line, then {\displaystyle f(x)=\alpha _{i}} for all {\displaystyle x\in A_{i}.}
The definite integral of a step function is a piecewise linear function.
The Lebesgue integral of a step function {\displaystyle \textstyle f=\sum _{i=0}^{n}\alpha _{i}\chi _{A_{i}}} is {\displaystyle \textstyle \int f\,dx=\sum _{i=0}^{n}\alpha _{i}\ell (A_{i}),} where {\displaystyle \ell (A)} is the length of the interval A, and it is assumed here that all intervals {\displaystyle A_{i}} have finite length. In fact, this equality (viewed as a definition) can be the first step in constructing the Lebesgue integral.
A discrete random variable is sometimes defined as a random variable whose cumulative distribution function is piecewise constant. In this case, it is locally a step function (globally, it may have an infinite number of steps). Usually however, any random variable with only countably many possible values is called a discrete random variable, in this case their cumulative distribution function is not necessarily locally a step function, as infinitely many intervals can accumulate in a finite region.
== See also ==
Crenel function
Piecewise
Sigmoid function
Simple function
Step detection
Heaviside step function
Piecewise-constant valuation
== References ==
In science, randomized experiments are the experiments that allow the greatest reliability and validity of statistical estimates of treatment effects. Randomization-based inference is especially important in experimental design and in survey sampling.
== Overview ==
In the statistical theory of design of experiments, randomization involves randomly allocating the experimental units across the treatment groups. For example, if an experiment compares a new drug against a standard drug, then the patients should be allocated to either the new drug or to the standard drug control using randomization.
Randomized experimentation is not haphazard. Randomization reduces bias by equalising other factors that have not been explicitly accounted for in the experimental design (according to the law of large numbers). Randomization also produces ignorable designs, which are valuable in model-based statistical inference, especially Bayesian or likelihood-based. In the design of experiments, the simplest design for comparing treatments is the "completely randomized design". Some "restriction on randomization" can occur with blocking and experiments that have hard-to-change factors; additional restrictions on randomization can occur when a full randomization is infeasible or when it is desirable to reduce the variance of estimators of selected effects.
Randomization of treatment in clinical trials poses ethical problems. In some cases, randomization reduces the therapeutic options for both physician and patient, and so randomization requires clinical equipoise regarding the treatments.
== Online randomized controlled experiments ==
Web sites can run randomized controlled experiments to create a feedback loop. Key differences between offline experimentation and online experiments include:
Logging: user interactions can be logged reliably.
Number of users: large sites, such as Amazon, Bing/Microsoft, and Google run experiments, each with over a million users.
Number of concurrent experiments: large sites run tens of overlapping, or concurrent, experiments.
Robots, whether web crawlers from valid sources or malicious internet bots.
Ability to ramp-up experiments from low percentages to higher percentages.
Speed / performance has significant impact on key metrics.
Ability to use the pre-experiment period as an A/A test to reduce variance.
== History ==
A controlled experiment appears to have been suggested in the Old Testament's Book of Daniel. King Nebuchadnezzar proposed that some Israelites eat "a daily amount of food and wine from the king's table." Daniel preferred a vegetarian diet, but the official was concerned that the king would "see you looking worse than the other young men your age? The king would then have my head because of you." Daniel then proposed the following controlled experiment: "Test your servants for ten days. Give us nothing but vegetables to eat and water to drink. Then compare our appearance with that of the young men who eat the royal food, and treat your servants in accordance with what you see". (Daniel 1, 12–13).
Randomized experiments were institutionalized in psychology and education in the late 1800s, following the invention of randomized experiments by C. S. Peirce.
Outside of psychology and education, randomized experiments were popularized by R.A. Fisher in his book Statistical Methods for Research Workers, which also introduced additional principles of experimental design.
== Statistical interpretation ==
The Rubin Causal Model provides a common way to describe a randomized experiment. While it provides a framework for defining the causal parameters (i.e., the effects of a randomized treatment on an outcome), the analysis of experiments can take a number of forms. The model assumes that there are two potential outcomes for each unit in the study: the outcome if the unit receives the treatment and the outcome if the unit does not receive the treatment. The difference between these two potential outcomes is known as the treatment effect, which is the causal effect of the treatment on the outcome. Most commonly, randomized experiments are analyzed using ANOVA, Student's t-test, regression analysis, or a similar statistical test. The model also accounts for potential confounding factors, which are factors that could affect both the treatment and the outcome. By controlling for these confounding factors, the model helps to ensure that any observed treatment effect is truly causal and not simply the result of other factors that are correlated with both the treatment and the outcome.
The Rubin Causal Model is a useful framework for understanding how to estimate the causal effect of the treatment, even when there are confounding variables that may affect the outcome. This model specifies that the causal effect of the treatment is the difference in the outcomes that would have been observed for each individual if they had received the treatment and if they had not received the treatment. In practice, it is not possible to observe both potential outcomes for the same individual, so statistical methods are used to estimate the causal effect using data from the experiment.
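The potential-outcomes logic can be illustrated with a small simulation. The numbers below are invented: each unit has two potential outcomes differing by a true effect of 2.0, only one of which is observed, and random assignment makes the difference in group means an unbiased estimate of that effect.

```python
import random, statistics, math

random.seed(0)

# Simulate potential outcomes Y(0) and Y(1) for 1,000 units; the true
# treatment effect is 2.0 for every unit.
n, true_effect = 1000, 2.0
y0 = [random.gauss(10, 3) for _ in range(n)]
y1 = [y + true_effect for y in y0]

# Randomly assign half the units to treatment; observe only one
# potential outcome per unit.
t_set = set(random.sample(range(n), n // 2))
obs_t = [y1[i] for i in range(n) if i in t_set]
obs_c = [y0[i] for i in range(n) if i not in t_set]

# Difference in means estimates the average treatment effect; its
# standard error is the usual two-sample (Welch-style) formula.
ate_hat = statistics.mean(obs_t) - statistics.mean(obs_c)
se = math.sqrt(statistics.variance(obs_t) / len(obs_t)
               + statistics.variance(obs_c) / len(obs_c))
```

The estimate `ate_hat` lands near 2.0 because randomization balances the unobserved Y(0) levels across the two groups.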
== Empirical evidence that randomization makes a difference ==
Empirically, differences between randomized and non-randomized studies, and between adequately and inadequately randomized trials, have been difficult to detect.
== Directed acyclic graph (DAG) explanation of randomization ==
Randomization is the cornerstone of many scientific claims. To randomize means that we can eliminate confounding factors. Say we study the effect of A on B. There may be many unobservables U that potentially affect B and confound our estimate of the effect. To reason about these kinds of issues, statisticians and econometricians nowadays use directed acyclic graphs.
== See also ==
A/B testing
Allocation concealment
Random assignment
Randomized block design
Randomized controlled trial
== References ==
Caliński, Tadeusz & Kageyama, Sanpei (2000). Block designs: A Randomization approach, Volume I: Analysis. Lecture Notes in Statistics. Vol. 150. New York: Springer-Verlag. ISBN 978-0-387-98578-7.
Caliński, Tadeusz & Kageyama, Sanpei (2003). Block designs: A Randomization approach, Volume II: Design. Lecture Notes in Statistics. Vol. 170. New York: Springer-Verlag. ISBN 978-0-387-95470-7.
Hacking, Ian (September 1988). "Telepathy: Origins of Randomization in Experimental Design". Isis. 79 (3): 427–451. doi:10.1086/354775. JSTOR 234674. MR 1013489. S2CID 52201011.
Hinkelmann, Klaus; Kempthorne, Oscar (2008). Design and Analysis of Experiments, Volume I: Introduction to Experimental Design (Second ed.). Wiley. ISBN 978-0-471-72756-9. MR 2363107.
Kempthorne, Oscar (1992). "Intervention experiments, randomization and inference". In Malay Ghosh and Pramod K. Pathak (ed.). Current Issues in Statistical Inference—Essays in Honor of D. Basu. Institute of Mathematical Statistics Lecture Notes - Monograph Series. Hayward, CA: Institute for Mathematical Statistics. pp. 13–31. doi:10.1214/lnms/1215458836. ISBN 978-0-940600-24-9. MR 1194407.
Clinical study design is the formulation of clinical trials and other experiments, as well as observational studies, in medical research involving human beings and involving clinical aspects, including epidemiology. It is the design of experiments as applied to these fields. The goal of a clinical study is to assess the safety, efficacy, and/or the mechanism of action of an investigational medicinal product (IMP) or procedure, or of a new drug or device that is in development but potentially not yet approved by a health authority (e.g. the Food and Drug Administration). It can also be to investigate a drug, device or procedure that has already been approved but is still in need of further investigation, typically with respect to long-term effects or cost-effectiveness.
Some of the considerations here are shared under the more general topic of design of experiments but there can be others, in particular related to patient confidentiality and medical ethics.
== Outline of types of designs for clinical studies ==
=== Treatment studies ===
Randomized controlled trial
Blind trial
Non-blind trial
Adaptive clinical trial
Platform trial
Nonrandomized trial (quasi-experiment)
Interrupted time series design (measures on a sample or a series of samples from the same population are obtained several times before and after a manipulated event or a naturally occurring event) - considered a type of quasi-experiment
=== Observational studies ===
1. Descriptive
Case report
Case series
Population study
2. Analytical
Cohort study
Prospective cohort
Retrospective cohort
Time series study
Case-control study
Nested case-control study
Cross-sectional study
Community survey (a type of cross-sectional study)
Ecological study
== Important considerations ==
When choosing a study design, many factors must be taken into account. Different types of studies are subject to different types of bias. For example, recall bias is likely to occur in cross-sectional or case-control studies where subjects are asked to recall exposure to risk factors. Subjects with the relevant condition (e.g. breast cancer) may be more likely to recall the relevant exposures that they had undergone (e.g. hormone replacement therapy) than subjects who don't have the condition.
The ecological fallacy may occur when conclusions about individuals are drawn from analyses conducted on grouped data. The nature of this type of analysis tends to overestimate the degree of association between variables.
=== Seasonal studies ===
Conducting studies in seasonal indications (such as allergies, Seasonal Affective Disorder, influenza, and others) can complicate a trial as patients must be enrolled quickly. Additionally, seasonal variations and weather patterns can affect a seasonal study.
== Other terms ==
The term retrospective study is sometimes used as another term for a case-control study. This use of the term "retrospective study" is misleading, however, and should be avoided because other research designs besides case-control studies are also retrospective in orientation.
Superiority trials are designed to demonstrate that one treatment is more effective than a given reference treatment. This type of study design is often used to test the effectiveness of a treatment compared to placebo or to the currently best available treatment.
Non-inferiority trials are designed to demonstrate that a treatment is at least not appreciably less effective than a given reference treatment. This type of study design is often employed when comparing a new treatment to an established medical standard of care, in situations where the new treatment is cheaper, safer or more convenient than the reference treatment and would therefore be preferable if not appreciably less effective.
Equivalence trials are designed to demonstrate that two treatments are equally effective.
When using "parallel groups", each patient receives one treatment; in a "crossover study", each patient receives several treatments but in different order.
A longitudinal study assesses research subjects over two or more points in time; by contrast, a cross-sectional study assesses research subjects at only one point in time (so case-control, cohort, and randomized studies are not cross-sectional).
== See also ==
== References ==
== External links ==
Some aspects of study design Tufts University web site
Comparison of strength Description of study designs from the National Cancer Institute
In statistics, econometrics, political science, epidemiology, and related disciplines, a regression discontinuity design (RDD) is a quasi-experimental pretest–posttest design that aims to determine the causal effects of interventions by exploiting a cutoff or threshold above or below which an intervention is assigned. By comparing observations lying closely on either side of the threshold, it is possible to estimate the average treatment effect in environments in which randomisation is unfeasible. However, it remains impossible to make true causal inference with this method alone, as it does not automatically rule out the causal effect of any potential confounding variable. First applied by Donald Thistlethwaite and Donald Campbell (1960) to the evaluation of scholarship programs, the RDD has become increasingly popular in recent years. Recent studies comparing randomised controlled trials (RCTs) and RDDs have empirically demonstrated the internal validity of the design.
== Example ==
The intuition behind the RDD is well illustrated using the evaluation of merit-based scholarships. The main problem with estimating the causal effect of such an intervention is the endogeneity of performance to the assignment of treatment (e.g., a scholarship award). Since high-performing students are more likely to be awarded the merit scholarship and continue performing well at the same time, comparing the outcomes of awardees and non-recipients would lead to an upward bias of the estimates. Even if the scholarship did not improve grades at all, awardees would have performed better than non-recipients, simply because scholarships were given to students who were performing well before.
Despite the absence of an experimental design, an RDD can exploit exogenous characteristics of the intervention to elicit causal effects. If all students above a given grade—for example 80%—are given the scholarship, it is possible to elicit the local treatment effect by comparing students around the 80% cut-off. The intuition here is that a student scoring 79% is likely to be very similar to a student scoring 81%—given the pre-defined threshold of 80%. However, one student will receive the scholarship while the other will not. Comparing the outcome of the awardee (treatment group) to the counterfactual outcome of the non-recipient (control group) will hence deliver the local treatment effect.
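This intuition can be checked with a simulation. The data below are invented: pre-scores drive both the award (cutoff 80%) and later performance, and the scholarship itself adds 5 points. The naive awardee-versus-non-recipient comparison is badly biased upward, while comparing students just around the cutoff recovers roughly the true effect.

```python
import random, statistics

random.seed(1)

# Hypothetical students: pre-score determines the award and also
# (with coefficient 0.9) the later outcome; the award adds 5 points.
students = []
for _ in range(20_000):
    pre = random.uniform(50, 100)
    award = pre >= 80
    outcome = 0.9 * pre + (5 if award else 0) + random.gauss(0, 2)
    students.append((pre, award, outcome))

# Naive comparison: upward-biased, since awardees scored higher anyway.
naive = (statistics.mean(o for p, a, o in students if a)
         - statistics.mean(o for p, a, o in students if not a))

# Local comparison of students just around the 80% cutoff.
local = (statistics.mean(o for p, a, o in students if 80 <= p < 81)
         - statistics.mean(o for p, a, o in students if 79 <= p < 80))
```

Here `naive` comes out far above 5 (it absorbs the pre-score gap between groups), while `local` sits near the true effect of 5, up to the small trend within the two one-point bins.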
== Methodology ==
The two most common approaches to estimation using an RDD are non-parametric and parametric (normally polynomial regression).
=== Non-parametric estimation ===
The most common non-parametric method used in the RDD context is a local linear regression. This is of the form:
{\displaystyle Y=\alpha +\tau D+\beta _{1}(X-c)+\beta _{2}D(X-c)+\varepsilon ,}
where c is the treatment cutoff and D is a binary variable equal to one if X ≥ c. Letting h be the bandwidth of data used, we have c − h ≤ X ≤ c + h. Different slopes and intercepts fit data on either side of the cutoff. Typically either a rectangular kernel (no weighting) or a triangular kernel is used. The rectangular kernel has a more straightforward interpretation, and more sophisticated kernels yield only small efficiency gains.
The major benefit of using non-parametric methods in an RDD is that they provide estimates based on data closer to the cut-off, which is intuitively appealing. This reduces some bias that can result from using data farther away from the cutoff to estimate the discontinuity at the cutoff. More formally, local linear regressions are preferred because they have better bias properties and have better convergence. However, the use of both types of estimation, if feasible, is a useful way to argue that the estimated results do not rely too heavily on the particular approach taken.
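A minimal version of this estimator, using a rectangular kernel (plain least squares within bandwidth h on each side of the cutoff), can be sketched as follows; the data-generating numbers are invented. Fitting separate lines left and right of c and differencing their intercepts at c is equivalent to estimating τ in the interacted model above.

```python
import random

random.seed(2)

def ols(xs, ys):
    """Return (intercept, slope) of a simple least-squares fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

# Hypothetical data: outcome jumps by tau = 3 at the cutoff c = 0,
# on top of a linear trend in the running variable X.
tau, h = 3.0, 0.5
data = []
for _ in range(10_000):
    x = random.uniform(-1, 1)
    y = 2 * x + (tau if x >= 0 else 0) + random.gauss(0, 1)
    data.append((x, y))

# Rectangular kernel: use all points within bandwidth h, unweighted.
left = [(x, y) for x, y in data if -h <= x < 0]
right = [(x, y) for x, y in data if 0 <= x <= h]
a_left, _ = ols([x for x, _ in left], [y for _, y in left])
a_right, _ = ols([x for x, _ in right], [y for _, y in right])
tau_hat = a_right - a_left  # estimated discontinuity at the cutoff
```

Shrinking h trades bias (from curvature far from the cutoff) against variance (fewer observations), which is the bandwidth choice discussed above.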
=== Parametric estimation ===
An example of a parametric estimation is:
{\displaystyle Y=\alpha +\beta _{1}x_{i}+\beta _{2}c_{i}+\beta _{3}c_{i}^{2}+\beta _{4}c_{i}^{3}+\varepsilon ,}
where

{\displaystyle x_{i}={\begin{cases}1{\text{ if }}c_{i}\geq {\bar {c}}\\0{\text{ if }}c_{i}<{\bar {c}}\end{cases}}}

and {\displaystyle {\bar {c}}} is the treatment cutoff.
Note that the polynomial part can be shortened or extended according to the needs.
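The parametric specification is ordinary least squares with a treatment dummy plus polynomial terms in the assignment variable. The sketch below fits it from scratch via the normal equations on invented data (true jump β₁ = 4, cutoff c̄ = 0); in practice any regression library would be used instead.

```python
import random

random.seed(3)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Hypothetical data: cubic trend in c_i plus a jump of beta1 = 4 at 0.
beta1 = 4.0
rows, ys = [], []
for _ in range(5000):
    c = random.uniform(-1, 1)
    xi = 1.0 if c >= 0 else 0.0
    y = 1 + beta1 * xi + 0.5 * c - 0.3 * c**2 + 0.2 * c**3 + random.gauss(0, 1)
    rows.append([1.0, xi, c, c * c, c ** 3])  # [const, x_i, c, c^2, c^3]
    ys.append(y)

# Ordinary least squares via the normal equations X'X beta = X'y.
k = 5
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
coef = solve(XtX, Xty)   # coef[1] estimates the treatment effect beta1
```

Adding or dropping polynomial terms corresponds to shortening or extending the row `[1, x_i, c, c^2, c^3]`, as the note above describes.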
=== Other examples ===
Policies in which treatment is determined by an age eligibility criterion (e.g. pensions, minimum legal drinking age).
Elections in which one politician wins by a marginal majority.
Placement scores within education that sort students into treatment programs.
== Required assumptions ==
Regression discontinuity design requires that all potentially relevant variables besides the treatment variable and outcome variable be continuous at the point where the treatment and outcome discontinuities occur. One sufficient, though not necessary, condition is if the treatment assignment is "as good as random" at the threshold for treatment. If this holds, then it guarantees that those who just barely received treatment are comparable to those who just barely did not receive treatment, as treatment status is effectively random.
Treatment assignment at the threshold can be "as good as random" if there is randomness in the assignment variable and the agents considered (individuals, firms, etc.) cannot perfectly manipulate their treatment status. For example, suppose the treatment is passing an exam, where a grade of 50% is required. In this case, this example is a valid regression discontinuity design so long as grades are somewhat random, due either to the randomness of grading or randomness of student performance.
Students must also not be able to manipulate their grade so as to determine their treatment status perfectly. Two examples include students being able to convince teachers to "mercy pass" them, or students being allowed to retake the exam until they pass. In the former case, those students who barely fail but are able to secure a "mercy pass" may differ from those who just barely fail but cannot secure one. This leads to selection bias, as the treatment and control groups now differ. In the latter case, some students may decide to retake the exam, stopping once they pass. This also leads to selection bias since only some students will decide to retake the exam.
=== Testing the validity of the assumptions ===
It is impossible to definitively test for validity if agents are able to determine their treatment status perfectly. However, some tests can provide evidence that either supports or discounts the validity of the regression discontinuity design.
==== Density test ====
McCrary (2008) suggested examining the density of observations of the assignment variable. Suppose there is a discontinuity in the density of the assignment variable at the threshold for treatment. In this case, this may suggest that some agents were able to manipulate their treatment status perfectly.
For example, if several students are able to get a "mercy pass", then there will be more students who just barely passed the exam than who just barely failed. Similarly, if students are allowed to retake the exam until they pass, then there will be a similar result. In both cases, this will likely show up when the density of exam grades is examined. "Gaming the system" in this manner could bias the treatment effect estimate.
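A crude version of this density check simply compares counts in equal-width bins on either side of the cutoff. The simulation below is invented: 30% of students just below the pass mark get recorded just above it, producing exactly the tell-tale excess mass McCrary's test looks for (the real test fits local polynomials to the histogram rather than comparing two raw bins).

```python
import random

random.seed(4)

# Hypothetical grades: smooth underlying distribution, but 30% of
# students landing just below the pass mark (50) are "mercy passed"
# and recorded just above it.
grades = []
for _ in range(50_000):
    g = random.uniform(0, 100)
    if 48 <= g < 50 and random.random() < 0.3:
        g = 50 + (g - 48)  # manipulated upward across the cutoff
    grades.append(g)

just_below = sum(1 for g in grades if 48 <= g < 50)
just_above = sum(1 for g in grades if 50 <= g < 52)
ratio = just_above / just_below  # ~1 expected if there is no manipulation
```

A ratio well above one (here roughly 1.9) is the discontinuity in density that casts doubt on the design's validity.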
==== Continuity of observable variables ====
Since the validity of the regression discontinuity design relies on those who were just barely treated being the same as those who were just barely not treated, it makes sense to examine whether these groups are similar based on observable variables. For the earlier example, one could test if those who just barely passed have different characteristics (demographics, family income, etc.) than those who just barely failed. Although some variables may differ for the two groups based on random chance, most of these variables should be the same.
==== Falsification tests ====
===== Predetermined variables =====
Similar to the continuity of observable variables, one would expect there to be continuity in predetermined variables at the treatment cutoff. Since these variables were determined before the treatment decision, treatment status should not affect them. Consider the earlier merit-based scholarship example. If the outcome of interest is future grades, then we would not expect the scholarship to affect previous grades. If a discontinuity in predetermined variables is present at the treatment cutoff, then this puts the validity of the regression discontinuity design into question.
===== Other discontinuities =====
If discontinuities are present at other points of the assignment variable, where these are not expected, then this may make the regression discontinuity design suspect. Consider the example of Carpenter and Dobkin (2011) who studied the effect of legal access to alcohol in the United States. As the access to alcohol increases at age 21, this leads to changes in various outcomes, such as mortality rates and morbidity rates. If mortality and morbidity rates also increase discontinuously at other ages, then it throws the interpretation of the discontinuity at age 21 into question.
==== Inclusion and exclusion of covariates ====
If parameter estimates are sensitive to removing or adding covariates to the model, then this may cast doubt on the validity of the regression discontinuity design. A significant change may suggest that those who just barely got treatment differ in these covariates from those who just barely did not get treatment. Including covariates would remove some of this bias. If a large amount of bias is present, and the covariates explain a significant amount of it, then their inclusion or exclusion would significantly change the parameter estimate.
Recent work has shown how to add covariates, under what conditions doing so is valid, and the potential for increased precision.
== Advantages ==
When properly implemented and analysed, the RDD yields an unbiased estimate of the local treatment effect. The RDD can be almost as good as a randomised experiment in measuring a treatment effect.
RDD, as a quasi-experiment, does not require ex-ante randomisation and circumvents ethical issues of random assignment.
Well-executed RDD studies can generate treatment effect estimates similar to estimates from randomised studies.
== Disadvantages ==
The estimated effects are only unbiased if the functional form of the relationship between the treatment and outcome is correctly modelled. The most common pitfall is a non-linear relationship that is mistaken for a discontinuity.
Contamination by other treatments. Suppose another treatment occurs at the same cutoff value of the same assignment variable. In that case, the measured discontinuity in the outcome variable may be partially attributed to this other treatment. For example, suppose a researcher wishes to study the impact of legal access to alcohol on mental health using a regression discontinuity design at the minimum legal drinking age. The measured impact could be confused with legal access to gambling, which may occur at the same age.
== Extensions ==
=== Fuzzy RDD ===
The identification of causal effects hinges on the crucial assumption that there is indeed a sharp cut-off, around which there is a discontinuity in the probability of assignment from 0 to 1. In reality, however, cutoffs are often not strictly implemented (e.g. exercised discretion for students who just fell short of passing the threshold) and the estimates will hence be biased.
In contrast to the sharp regression discontinuity design, a fuzzy regression discontinuity design (FRDD) does not require a sharp discontinuity in the probability of assignment. It is still applicable as long as the probability of assignment differs across the threshold. The intuition behind it is related to the instrumental variable strategy and intention to treat. Fuzzy RDD does not provide an unbiased estimate when the quantity of interest is the proportional effect (e.g. vaccine effectiveness), but extensions exist that do.
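The instrumental-variable intuition can be sketched with a Wald-type ratio: the jump in the outcome at the cutoff divided by the jump in the probability of treatment. The numbers below are invented, and for brevity the outcome has no trend in the running variable; in practice local regressions on each side would be used instead of raw window means.

```python
import random, statistics

random.seed(5)

# Hypothetical fuzzy design: crossing the cutoff raises the probability
# of treatment from 0.2 to 0.7 (not from 0 to 1); treatment adds 2.0.
data = []
for _ in range(40_000):
    x = random.uniform(-1, 1)
    p_treat = 0.7 if x >= 0 else 0.2
    d = 1 if random.random() < p_treat else 0
    y = 2.0 * d + random.gauss(0, 1)
    data.append((x, d, y))

# Means within a narrow window on each side of the cutoff.
h = 0.2
below = [(d, y) for x, d, y in data if -h <= x < 0]
above = [(d, y) for x, d, y in data if 0 <= x <= h]

jump_y = statistics.mean(y for _, y in above) - statistics.mean(y for _, y in below)
jump_d = statistics.mean(d for d, _ in above) - statistics.mean(d for d, _ in below)
tau_hat = jump_y / jump_d   # Wald ratio: intention-to-treat jump rescaled
```

The raw jump in the outcome (about 1.0) understates the effect because only half the marginal population changes treatment status; dividing by the 0.5 jump in treatment probability recovers the effect of about 2.0.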
=== Regression kink design ===
When the assignment variable is continuous (e.g. student aid) and depends predictably on another observed variable (e.g. family income), one can identify treatment effects using sharp changes in the slope of the treatment function. This technique was coined regression kink design by Nielsen, Sørensen, and Taber (2010), though they cite similar earlier analyses. They write, "This approach resembles the regression discontinuity idea. Instead of a discontinuity in the level of the stipend-income function, we have a discontinuity in the slope of the function." Rigorous theoretical foundations were provided by Card et al. (2012) and an empirical application by Bockerman et al. (2018).
Note that regression kinks (or kinked regression) can also mean a type of segmented regression, which is a different type of analysis.
== Final considerations ==
The RD design is a quasi-experimental research design with a clear structure but without randomized assignment. It is typically used in settings where practical or ethical issues leave no room for a randomized experiment. Its validity, however, depends on the accuracy of the modelling process and of the assumed relationship between the assignment variable and the outcome.
== See also ==
Quasi-experiment
Design of quasi-experiments
== References ==
== Further reading ==
Angrist, J. D.; Pischke, J.-S. (2008). "Getting a Little Jumpy: Regression Discontinuity Designs". Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press. pp. 251–268. ISBN 978-0-691-12035-5.
Cattaneo, Matias D.; Titiunik, Rocio (2022). "Regression Discontinuity Designs". Annual Review of Economics. 14: 821–851. doi:10.1146/annurev-economics-051520-021409. S2CID 125763727.
Cattaneo, Matias D.; Idrobo, Nicolas; Titiunik, Rocío (2024). A Practical Introduction to Regression Discontinuity Designs: Extensions. Cambridge University Press.
Cook, Thomas D. (2008). "'Waiting for Life to Arrive': A history of the regression-discontinuity design in Psychology, Statistics and Economics". Journal of Econometrics. 142 (2): 636–654. doi:10.1016/j.jeconom.2007.05.002.
Imbens, Guido W.; Wooldridge, Jeffrey M. (2009). "Recent Developments in the Econometrics of Program Evaluation". Journal of Economic Literature. 47 (1): 5–86. doi:10.1257/jel.47.1.5.
Maas, Iris L.; Nolte, Sandra; Walter, Otto B.; Berger, Thomas; Hautzinger, Martin (2017). "The regression discontinuity design showed to be a valid alternative to a randomized controlled trial for estimating treatment effects". Journal of Clinical Epidemiology. 82: 94–102. doi:10.1016/j.jclinepi.2016.11.008. PMID 27865902.
== External links ==
Regression-Discontinuity Analysis at Research Methods Knowledge Base
Fiducial inference is one of a number of different types of statistical inference. These are rules, intended for general application, by which conclusions can be drawn from samples of data. In modern statistical practice, attempts to work with fiducial inference have fallen out of fashion in favour of frequentist inference, Bayesian inference and decision theory. However, fiducial inference is important in the history of statistics since its development led to the parallel development of concepts and tools in theoretical statistics that are widely used. Some current research in statistical methodology is either explicitly linked to fiducial inference or is closely connected to it.
== Background ==
The general approach of fiducial inference was proposed by Ronald Fisher. Here "fiducial" comes from the Latin for faith. Fiducial inference can be interpreted as an attempt to perform inverse probability without calling on prior probability distributions. Fiducial inference quickly attracted controversy and was never widely accepted. Indeed, counter-examples to the claims of Fisher for fiducial inference were soon published. These counter-examples cast doubt on the coherence of "fiducial inference" as a system of statistical inference or inductive logic. Other studies showed that, where the steps of fiducial inference are said to lead to "fiducial probabilities" (or "fiducial distributions"), these probabilities lack the property of additivity, and so cannot constitute a probability measure.
The concept of fiducial inference can be outlined by comparing its treatment of the problem of interval estimation in relation to other modes of statistical inference.
A confidence interval, in frequentist inference, with coverage probability γ has the interpretation that among all confidence intervals computed by the same method, a proportion γ will contain the true value that needs to be estimated. This has either a repeated sampling (or frequentist) interpretation, or is the probability that an interval calculated from yet-to-be-sampled data will cover the true value. However, in either case, the probability concerned is not the probability that the true value is in the particular interval that has been calculated since at that stage both the true value and the calculated interval are fixed and are not random.
Credible intervals, in Bayesian inference, do allow a probability to be given for the event that an interval, once it has been calculated, does include the true value, since it proceeds on the basis that a probability distribution can be associated with the state of knowledge about the true value, both before and after the sample of data has been obtained.
Fisher designed the fiducial method to meet perceived problems with the Bayesian approach, at a time when the frequentist approach had yet to be fully developed. Such problems related to the need to assign a prior distribution to the unknown values. The aim was to have a procedure, like the Bayesian method, whose results could still be given an inverse probability interpretation based on the actual data observed. The method proceeds by attempting to derive a "fiducial distribution", which is a measure of the degree of faith that can be put on any given value of the unknown parameter and is faithful to the data in the sense that the method uses all available information.
Unfortunately Fisher did not give a general definition of the fiducial method and he denied that the method could always be applied. His only examples were for a single parameter; different generalisations have been given when there are several parameters. A relatively complete presentation of the fiducial approach to inference is given by Quenouille (1958), while Williams (1959) describes the application of fiducial analysis to the calibration problem (also known as "inverse regression") in regression analysis. Further discussion of fiducial inference is given by Kendall & Stuart (1973).
== The fiducial distribution ==
Fisher required the existence of a sufficient statistic for the fiducial method to apply. Suppose there is a single sufficient statistic for a single parameter. That is, suppose that the conditional distribution of the data given the statistic does not depend on the value of the parameter. For example, suppose that n independent observations are uniformly distributed on the interval [0, ω]. The maximum, X, of the n observations is a sufficient statistic for ω. If only X is recorded and the values of the remaining observations are forgotten, these remaining observations are equally likely to have had any values in the interval [0, X]. This statement does not depend on the value of ω. Then X contains all the available information about ω and the other observations could have given no further information.
The cumulative distribution function of X is
{\displaystyle F(x)=P(X\leq x)=P\left({\text{all observations}}\leq x\right)=\left({\frac {x}{\omega }}\right)^{n}.}
Probability statements about X/ω may be made. For example, given α, a value of a can be chosen with 0 < a < 1 such that
{\displaystyle P\left(X>\omega a\right)=1-a^{n}=\alpha .}
Thus
{\displaystyle a=(1-\alpha )^{1/n}.}
Then Fisher might say that this statement may be inverted into the form
{\displaystyle P\left(\omega <{\frac {X}{a}}\right)=\alpha .}
In this latter statement, ω is now regarded as variable and X is fixed, whereas previously it was the other way round. This distribution of ω is the fiducial distribution which may be used to form fiducial intervals that represent degrees of belief.
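For this uniform example the fiducial bound coincides numerically with the pivotal confidence bound, so its frequentist coverage can be checked by simulation. The sketch below uses the relations above, P(X > ωa) = 1 − aⁿ = α and a = (1 − α)^{1/n}, with invented values n = 10, ω = 7.

```python
import random

random.seed(6)

def fiducial_bound(x_max, n, alpha):
    """Upper bound X/a with a = (1 - alpha)**(1/n), so P(omega < X/a) = alpha."""
    a = (1 - alpha) ** (1 / n)
    return x_max / a

# Simulate repeated samples and check how often omega < X/a holds.
n, alpha, omega = 10, 0.95, 7.0
trials, covered = 20_000, 0
for _ in range(trials):
    x_max = max(random.uniform(0, omega) for _ in range(n))
    if omega < fiducial_bound(x_max, n, alpha):
        covered += 1
coverage = covered / trials   # should be close to alpha = 0.95
```

The simulated coverage matches α, illustrating why older books could use "confidence interval" and "fiducial interval" interchangeably here even though Fisher's intended interpretation was different.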
The calculation is identical to the pivotal method for finding a confidence interval, but the interpretation is different. In fact older books use the terms confidence interval and fiducial interval interchangeably. Notice that the fiducial distribution is uniquely defined when a single sufficient statistic exists.
The pivotal method is based on a random variable that is a function of both the observations and the parameters but whose distribution does not depend on the parameter. Such random variables are called pivotal quantities. By using these, probability statements about the observations and parameters may be made in which the probabilities do not depend on the parameters and these may be inverted by solving for the parameters in much the same way as in the example above. However, this is only equivalent to the fiducial method if the pivotal quantity is uniquely defined based on a sufficient statistic.
A fiducial interval could be taken to be just a different name for a confidence interval and give it the fiducial interpretation. But the definition might not then be unique. Fisher would have denied that this interpretation is correct: for him, the fiducial distribution had to be defined uniquely and it had to use all the information in the sample.
== Status of the approach ==
Fisher admitted that "fiducial inference" had problems. Fisher wrote to George A. Barnard that he was "not clear in the head" about one problem on fiducial inference, and, also writing to Barnard, Fisher complained that his theory seemed to have only "an asymptotic approach to intelligibility". Later Fisher confessed that "I don't understand yet what fiducial probability does. We shall have to live with it a long time before we know what it's doing for us. But it should not be ignored just because we don't yet have a clear interpretation".
Dennis Lindley showed that fiducial probability lacked additivity, and so was not a probability measure. Cox points out that the same argument applies to the so-called "confidence distribution" associated with confidence intervals, so the conclusion to be drawn from this is moot. Fisher sketched "proofs" of results using fiducial probability. When the conclusions of Fisher's fiducial arguments are not false, many have been shown to also follow from Bayesian inference.
In 1978, J. G. Pederson wrote that "the fiducial argument has had very limited success and is now essentially dead". Davison wrote "A few subsequent attempts have been made to resurrect fiducialism, but it now seems largely of historical importance, particularly in view of its restricted range of applicability when set alongside models of current interest."
Fiducial inference is still being studied and its principles may be valuable for some scientific applications.
== References ==
== Bibliography ==
In mathematics, the Zak transform (also known as the Gelfand mapping) is a certain operation which takes as input a function of one variable and produces as output a function of two variables. The output function is called the Zak transform of the input function. The transform is defined as an infinite series in which each term is a product of a dilation of a translation by an integer of the function and an exponential function. In applications of the Zak transform to signal processing the input function represents a signal and the transform is a mixed time–frequency representation of the signal. The signal may be real-valued or complex-valued, defined on a continuous set (for example, the real numbers) or a discrete set (for example, the integers or a finite subset of integers). The Zak transform is a generalization of the discrete Fourier transform.
The Zak transform was discovered by several people in different fields and was called by different names. It was called the "Gelfand mapping" because Israel Gelfand introduced it in his work on eigenfunction expansions. The transform was rediscovered independently by Joshua Zak in 1967, who called it the "k-q representation". There seems to be a general consensus among experts in the field to call it the Zak transform, since Zak was the first to systematically study the transform in a more general setting and to recognize its usefulness.
== Continuous-time Zak transform: Definition ==
In defining the continuous-time Zak transform, the input function is a function of a real variable. So, let f(t) be a function of a real variable t. The continuous-time Zak transform of f(t) is a function of two real variables, one of which is t. The other variable may be denoted by w. The continuous-time Zak transform has been defined variously.
=== Definition 1 ===
Let a be a positive constant. The Zak transform of f(t), denoted by Za[f], is a function of t and w defined by
{\displaystyle Z_{a}[f](t,w)={\sqrt {a}}\sum _{k=-\infty }^{\infty }f(at+ak)e^{-2\pi kwi}}.
=== Definition 2 ===
The special case of Definition 1 obtained by taking a = 1 is sometimes taken as the definition of the Zak transform. In this special case, the Zak transform of f(t) is denoted by Z[f].
{\displaystyle Z[f](t,w)=\sum _{k=-\infty }^{\infty }f(t+k)e^{-2\pi kwi}}.
=== Definition 3 ===
The notation Z[f] is used to denote another form of the Zak transform. In this form, the Zak transform of f(t) is defined as follows:
{\displaystyle Z[f](t,\nu )=\sum _{k=-\infty }^{\infty }f(t+k)e^{-k\nu i}}.
=== Definition 4 ===
Let T be a positive constant. The Zak transform of f(t), denoted by ZT[f], is a function of t and w defined by
{\displaystyle Z_{T}[f](t,w)={\sqrt {T}}\sum _{k=-\infty }^{\infty }f(t+kT)e^{-2\pi kwTi}}.
Here t and w are assumed to satisfy the conditions 0 ≤ t ≤ T and 0 ≤ w ≤ 1/T.
== Example ==
The Zak transform of the function
{\displaystyle \phi (t)={\begin{cases}1,&0\leq t<1\\0,&{\text{otherwise}}\end{cases}}}
is given by
{\displaystyle Z[\phi ](t,w)=e^{-2\pi \lceil -t\rceil wi}}
where {\displaystyle \lceil -t\rceil } denotes the smallest integer not less than {\displaystyle -t} (the ceiling function).
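This example is easy to check numerically. Because the indicator function is supported on [0, 1), only the single term of the Zak sum with 0 ≤ t + k < 1 survives, namely k = ⌈−t⌉. The sketch below (assuming NumPy; the function name is illustrative) truncates the sum of Definition 2 and compares it with the closed form:

```python
import numpy as np

def zak_indicator(t, w, K=50):
    # Truncated Zak transform (Definition 2) of the indicator of [0, 1):
    # only the term with 0 <= t + k < 1 is nonzero.
    k = np.arange(-K, K + 1)
    phi = ((t + k) >= 0) & ((t + k) < 1)
    return np.sum(phi * np.exp(-2j * np.pi * k * w))

for t, w in [(0.3, 0.7), (-0.4, 0.25)]:
    closed_form = np.exp(-2j * np.pi * np.ceil(-t) * w)
    assert abs(zak_indicator(t, w) - closed_form) < 1e-12
```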
== Properties of the Zak transform ==
In the following it will be assumed that the Zak transform is as given in Definition 2.
1. Linearity
Let a and b be any real or complex numbers. Then
{\displaystyle Z[af+bg](t,w)=aZ[f](t,w)+bZ[g](t,w)}
2. Periodicity
{\displaystyle Z[f](t,w+1)=Z[f](t,w)}
3. Quasi-periodicity
{\displaystyle Z[f](t+1,w)=e^{2\pi wi}Z[f](t,w)}
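The periodicity and quasi-periodicity identities can be verified numerically. The sketch below (assuming NumPy) uses a Gaussian, whose rapid decay makes truncating the infinite sum harmless, and checks both identities at an arbitrary point:

```python
import numpy as np

def zak(f, t, w, K=30):
    # Truncated version of Z[f](t, w) = sum_k f(t + k) e^{-2 pi i k w}
    k = np.arange(-K, K + 1)
    return np.sum(f(t + k) * np.exp(-2j * np.pi * k * w))

f = lambda t: np.exp(-np.pi * t**2)  # fast-decaying, so truncation is safe
t, w = 0.37, 0.81

# Periodicity in w:
assert abs(zak(f, t, w + 1) - zak(f, t, w)) < 1e-10
# Quasi-periodicity in t:
assert abs(zak(f, t + 1, w) - np.exp(2j * np.pi * w) * zak(f, t, w)) < 1e-10
```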
4. Conjugation
{\displaystyle Z[{\bar {f}}](t,w)={\overline {Z[f]}}(t,-w)}
5. Symmetry
If f(t) is even then
{\displaystyle Z[f](t,w)=Z[f](-t,-w)}
If f(t) is odd then
{\displaystyle Z[f](t,w)=-Z[f](-t,-w)}
6. Convolution
Let {\displaystyle \star } denote convolution with respect to the variable t.
{\displaystyle Z[f\star g](t,w)=Z[f](t,w)\star Z[g](t,w)}
== Inversion formula ==
Given the Zak transform of a function, the function can be reconstructed using the following formula:
{\displaystyle f(t)=\int _{0}^{1}Z[f](t,w)\,dw.}
== Discrete Zak transform: Definition ==
Let {\displaystyle f(n)} be a function of an integer variable {\displaystyle n\in \mathbb {Z} } (a sequence). The discrete Zak transform of {\displaystyle f(n)} is a function of two real variables, one of which is the integer variable {\displaystyle n}; the other is a real variable which may be denoted by {\displaystyle w}. The discrete Zak transform has also been defined variously, but only one of the definitions is given below.
=== Definition ===
The discrete Zak transform of the function {\displaystyle f(n)}, where {\displaystyle n} is an integer variable, denoted by {\displaystyle Z[f]}, is defined by
{\displaystyle Z[f](n,w)=\sum _{k=-\infty }^{\infty }f(n+k)e^{-2\pi kwi}.}
=== Inversion formula ===
Given the discrete Zak transform of a function {\displaystyle f(n)}, the function can be reconstructed using the following formula:
{\displaystyle f(n)=\int _{0}^{1}Z[f](n,w)\,dw.}
== Applications ==
The Zak transform has been successfully used in physics in quantum field theory, in electrical engineering in time-frequency representation of signals, and in digital data transmission. The Zak transform has also applications in mathematics. For example, it has been used in the Gabor representation problem.
== References == | Wikipedia/Zak_transform |
In electronics, noise is an unwanted disturbance in an electrical signal.
Noise generated by electronic devices varies greatly as it is produced by several different effects.
In particular, noise is inherent in physics and central to thermodynamics. Any conductor with electrical resistance will generate thermal noise inherently. The final elimination of thermal noise in electronics can only be achieved cryogenically, and even then quantum noise would remain inherent.
Electronic noise is a common component of noise in signal processing.
In communication systems, noise is an error or undesired random disturbance of a useful information signal in a communication channel. The noise is a summation of unwanted or disturbing energy from natural and sometimes man-made sources. Noise is, however, typically distinguished from interference, for example in the signal-to-noise ratio (SNR), signal-to-interference ratio (SIR) and signal-to-noise plus interference ratio (SNIR) measures. Noise is also typically distinguished from distortion, which is an unwanted systematic alteration of the signal waveform by the communication equipment, for example in signal-to-noise and distortion ratio (SINAD) and total harmonic distortion plus noise (THD+N) measures.
While noise is generally unwanted, it can serve a useful purpose in some applications, such as random number generation or dither.
Uncorrelated noise sources add according to the sum of their powers.
== Noise types ==
Different types of noise are generated by different devices and different processes. Thermal noise is unavoidable at non-zero temperature (see fluctuation-dissipation theorem), while other types depend mostly on device type (such as shot noise, which needs a steep potential barrier) or manufacturing quality and semiconductor defects, such as conductance fluctuations, including 1/f noise.
=== Thermal noise ===
Johnson–Nyquist noise (more often thermal noise) is unavoidable, and is generated by the random thermal motion of charge carriers (usually electrons) inside an electrical conductor, which happens regardless of any applied voltage.
Thermal noise is approximately white, meaning that its power spectral density is nearly equal throughout the frequency spectrum. The amplitude of the signal has very nearly a Gaussian probability density function. A communication system affected by thermal noise is often modelled as an additive white Gaussian noise (AWGN) channel.
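For reference, the RMS thermal noise voltage of a resistor follows the standard Johnson–Nyquist expression vₙ = √(4·k_B·T·R·Δf), which is not quoted in the text above. A minimal sketch (the function name is illustrative):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def thermal_noise_rms(R_ohm, T_kelvin, bandwidth_hz):
    # Johnson-Nyquist RMS noise voltage v_n = sqrt(4 k_B T R B)
    return math.sqrt(4.0 * K_B * T_kelvin * R_ohm * bandwidth_hz)

# A 1 kOhm resistor at 290 K in a 1 Hz bandwidth: about 4 nV
v_n = thermal_noise_rms(1e3, 290.0, 1.0)
assert abs(v_n - 4.0e-9) / 4.0e-9 < 0.01
```

This is the familiar "4 nV per root hertz for 1 kΩ" rule of thumb used when reading amplifier data sheets.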
=== Shot noise ===
Shot noise in electronic devices results from unavoidable random statistical fluctuations of the electric current when the charge carriers (such as electrons) traverse a gap. If electrons flow across a barrier, then they have discrete arrival times. Those discrete arrivals exhibit shot noise. Typically, the barrier in a diode is used. Shot noise is similar to the noise created by rain falling on a tin roof. The flow of rain may be relatively constant, but the individual raindrops arrive discretely.
The root-mean-square value of the shot noise current in is given by the Schottky formula.
{\displaystyle i_{n}={\sqrt {2Iq\Delta B}}}
where I is the DC current, q is the charge of an electron, and ΔB is the bandwidth in hertz. The Schottky formula assumes independent arrivals.
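The Schottky formula is a one-liner in code. A sketch (the function name is illustrative):

```python
import math

Q_E = 1.602176634e-19  # elementary charge, C

def shot_noise_rms(I_dc, bandwidth_hz):
    # Schottky formula: i_n = sqrt(2 I q dB)
    return math.sqrt(2.0 * I_dc * Q_E * bandwidth_hz)

# 1 mA of DC current observed in a 1 Hz bandwidth: about 18 pA
i_n = shot_noise_rms(1e-3, 1.0)
assert abs(i_n - 1.79e-11) / 1.79e-11 < 0.01
```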
Vacuum tubes exhibit shot noise because the electrons randomly leave the cathode and arrive at the anode (plate). A tube may not exhibit the full shot noise effect: the presence of a space charge tends to smooth out the arrival times (and thus reduce the randomness of the current). Pentodes and screen-grid tetrodes exhibit more noise than triodes because the cathode current splits randomly between the screen grid and the anode.
Conductors and resistors typically do not exhibit shot noise because the electrons thermalize and move diffusively within the material; the electrons do not have discrete arrival times. Shot noise has been demonstrated in mesoscopic resistors when the size of the resistive element becomes shorter than the electron–phonon scattering length.
=== Partition noise ===
Where current divides between two (or more) paths, noise occurs as a result of random fluctuations that occur during this division.
For this reason, a transistor will have more noise than the combined shot noise from its two PN junctions.
=== Flicker noise ===
Flicker noise, also known as 1/f noise, is a signal or process with a frequency spectrum that falls off steadily into the higher frequencies, with a pink spectrum. It occurs in almost all electronic devices and results from a variety of effects.
=== Burst noise ===
Burst noise consists of sudden step-like transitions between two or more discrete voltage or current levels, as high as several hundred microvolts, at random and unpredictable times. Each shift in offset voltage or current lasts for several milliseconds to seconds. It is also known as popcorn noise for the popping or crackling sounds it produces in audio circuits.
=== Transit-time noise ===
If the time taken by the electrons to travel from emitter to collector in a transistor becomes comparable to the period of the signal being amplified, that is, at frequencies above VHF and beyond, the transit-time effect takes place and the noise input impedance of the transistor decreases. From the frequency at which this effect becomes significant, it increases with frequency and quickly dominates other sources of noise.
== Coupled noise ==
While noise may be generated in the electronic circuit itself, additional noise energy can be coupled into a circuit from the external environment, by inductive coupling or capacitive coupling, or through the antenna of a radio receiver.
=== Sources ===
Intermodulation noise
Caused when signals of different frequencies share the same non-linear medium.
Crosstalk
Phenomenon in which a signal transmitted in one circuit or channel of a transmission system creates undesired interference onto a signal in another channel.
Interference
Modification or disruption of a signal travelling along a medium
Atmospheric noise
Also called static noise, it is caused by lightning discharges in thunderstorms and other electrical disturbances occurring in nature, such as corona discharge.
Industrial noise
Sources such as automobiles, aircraft, ignition systems of electric motors, switching gear, high-voltage wires and fluorescent lamps cause industrial noise. These noises are produced by the discharges present in all these operations.
Solar noise
Noise that originates from the Sun is called solar noise. Under normal conditions, there is approximately constant radiation from the Sun due to its high temperature, but solar storms can cause a variety of electrical disturbances. The intensity of solar noise varies over time in a solar cycle.
Cosmic noise
Distant stars generate noise called cosmic noise. While these stars are too far away to individually affect terrestrial communications systems, their large number leads to appreciable collective effects. Cosmic noise has been observed in a range from 8 MHz to 1.43 GHz, the latter frequency corresponding to the 21-cm hydrogen line. Apart from man-made noise, it is the strongest component over the range of about 20 to 120 MHz. Little cosmic noise below 20 MHz penetrates the ionosphere, while its eventual disappearance at frequencies in excess of 1.5 GHz is probably governed by the mechanisms generating it and its absorption by hydrogen in interstellar space.
=== Mitigation ===
In many cases noise found on a signal in a circuit is unwanted. There are many different noise reduction techniques that can reduce the noise picked up by a circuit.
Faraday cage – A Faraday cage enclosing a circuit can be used to isolate the circuit from external noise sources. A Faraday cage cannot address noise sources that originate in the circuit itself or those carried in on its inputs, including the power supply.
Capacitive coupling – Capacitive coupling allows an AC signal from one part of the circuit to be picked up in another part through the interaction of electric fields. Where coupling is unintended, the effects can be addressed through improved circuit layout and grounding.
Ground loops – When grounding a circuit, it is important to avoid ground loops. Ground loops occur when there is a voltage difference between two ground connections. A good way to fix this is to bring all the ground wires to the same potential in a ground bus.
Shielding cables – A shielded cable can be thought of as a Faraday cage for wiring and can protect the wires from unwanted noise in a sensitive circuit. The shield must be grounded to be effective. Grounding the shield at only one end can avoid a ground loop on the shield.
Twisted pair wiring – Twisting wires in a circuit will reduce electromagnetic noise. Twisting the wires decreases the loop size in which a magnetic field can run through to produce a current between the wires. Small loops may exist between wires twisted together, but the magnetic field going through these loops induces a current flowing in opposite directions in alternate loops on each wire and so there is no net noise current.
Notch filters – Notch filters or band-rejection filters are useful for eliminating a specific noise frequency. For example, power lines within a building run at 50 or 60 Hz line frequency. A sensitive circuit will pick up this frequency as noise. A notch filter tuned to the line frequency can remove the noise.
Thermal noise can be reduced by cooling of circuits - this is typically only employed in high accuracy high-value applications such as radio telescopes.
== Quantification ==
The noise level in an electronic system is typically measured as an electrical power N in watts or dBm, a root mean square (RMS) voltage (identical to the noise standard deviation) in volts, dBμV or a mean squared error (MSE) in volts squared. Examples of electrical noise-level measurement units are dBu, dBm0, dBrn, dBrnC, and dBrn(f1 − f2), dBrn(144-line). Noise may also be characterized by its probability distribution and noise spectral density N0(f) in watts per hertz.
A noise signal is typically considered as a linear addition to a useful information signal. Typical signal quality measures involving noise are signal-to-noise ratio (SNR or S/N), signal-to-quantization noise ratio (SQNR) in analog-to-digital conversion and compression, peak signal-to-noise ratio (PSNR) in image and video coding and noise figure in cascaded amplifiers. In a carrier-modulated passband analogue communication system, a certain carrier-to-noise ratio (CNR) at the radio receiver input would result in a certain signal-to-noise ratio in the detected message signal. In a digital communications system, a certain Eb/N0 (normalized signal-to-noise ratio) would result in a certain bit error rate. Telecommunication systems strive to increase the ratio of signal level to noise level in order to effectively transfer data. Noise in telecommunication systems is a product of both internal and external sources to the system.
Noise is a random process, characterized by stochastic properties such as its variance, distribution, and spectral density. The spectral distribution of noise can vary with frequency, so its power density is measured in watts per hertz (W/Hz). Since the power in a resistive element is proportional to the square of the voltage across it, noise voltage (density) can be described by taking the square root of the noise power density, resulting in volts per root hertz ({\displaystyle \scriptstyle \mathrm {V} /{\sqrt {\mathrm {Hz} }}}). Integrated circuit devices, such as operational amplifiers, commonly quote equivalent input noise level in these terms (at room temperature).
== Dither ==
If the noise source is correlated with the signal, such as in the case of quantisation error, the intentional introduction of additional noise, called dither, can reduce overall noise in the bandwidth of interest. This technique allows retrieval of signals below the nominal detection threshold of an instrument. This is an example of stochastic resonance.
== See also ==
Active noise control for noise reduction through cancellation
Colors of noise
Discovery of cosmic microwave background radiation
Error detection and correction for digital signals subject to noise
Generation–recombination noise
Matched filter for noise reduction in modems
Noise (signal processing)
Noise reduction and for audio and images
Phonon noise
== Notes ==
== References ==
== Further reading ==
Sh. Kogan (1996). Electronic Noise and Fluctuations in Solids. Cambridge University Press. ISBN 0-521-46034-4.
Scherz, Paul. (2006, Nov 14) Practical Electronics for Inventors. ed. McGraw-Hill.
== External links ==
Active Filter (Sallen & Key) Noise Study. Archived 2015-05-22 at the Wayback Machine.
White noise calculator, thermal noise – Voltage in microvolts, conversion to noise level in dBu and dBV and vice versa | Wikipedia/Noise_(physics) |
In mathematics and signal processing, the constant-Q transform and variable-Q transform, simply known as CQT and VQT, transforms a data series to the frequency domain. It is related to the Fourier transform and very closely related to the complex Morlet wavelet transform. Its design is suited for musical representation.
The transform can be thought of as a series of filters fk, logarithmically spaced in frequency, with the k-th filter having a spectral width δfk equal to a multiple of the previous filter's width:
{\displaystyle \delta f_{k}=2^{1/n}\cdot \delta f_{k-1}=\left(2^{1/n}\right)^{k}\cdot \delta f_{\text{min}},}
where δfk is the bandwidth of the k-th filter, δfmin is the bandwidth of the lowest filter, and n is the number of filters per octave.
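The geometric spacing of the bandwidths can be sketched in a few lines (assuming NumPy; the numeric values are arbitrary, chosen only for illustration):

```python
import numpy as np

n = 12                 # bins per octave (semitone spacing)
delta_f_min = 1.89     # assumed bandwidth of the lowest filter, Hz
K = 48                 # number of bins
delta_f = delta_f_min * (2.0 ** (1.0 / n)) ** np.arange(K)

# Each bandwidth is 2^(1/n) times the previous one:
assert np.allclose(delta_f[1:] / delta_f[:-1], 2.0 ** (1.0 / n))
# Bandwidths double every n bins, i.e. every octave:
assert np.allclose(delta_f[n] / delta_f[0], 2.0)
```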
== Calculation ==
The short-time Fourier transform of x[n] for a frame shifted to sample m is calculated as follows:
{\displaystyle X[k,m]=\sum _{n=0}^{N-1}W[n-m]x[n]e^{-j2\pi kn/N}.}
Given a data series at sampling frequency fs = 1/T, T being the sampling period of our data, for each frequency bin we can define the following:
Filter width, δfk.
Q, the "quality factor":
{\displaystyle Q={\frac {f_{k}}{\delta f_{k}}}.}
This is shown below to be the integer number of cycles processed at a center frequency fk. As such, this somewhat defines the time complexity of the transform.
Window length for the k-th bin:
{\displaystyle N[k]={\frac {f_{\text{s}}}{\delta f_{k}}}={\frac {f_{\text{s}}}{f_{k}}}Q.}
Since fs/fk is the number of samples processed per cycle at frequency fk, Q is the number of integer cycles processed at this central frequency.
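Putting the last two definitions together: for centre frequencies spaced by 2^{1/n}, a common choice is Q = 1/(2^{1/n} − 1), and the window length N[k] then shrinks inversely with centre frequency. A sketch (assuming NumPy; the sample rate and frequency values are arbitrary):

```python
import numpy as np

fs = 22050.0          # assumed sample rate, Hz
n = 12                # bins per octave
Q = 1.0 / (2.0 ** (1.0 / n) - 1.0)   # ~16.8: a common choice for 2^(1/n) spacing
f_min = 55.0          # assumed lowest centre frequency, Hz
K = 60
f_k = f_min * 2.0 ** (np.arange(K) / n)
N_k = np.ceil(Q * fs / f_k).astype(int)   # window length per bin, N[k] = Q fs / f_k

# Lower bins use longer windows; one octave up halves the (un-rounded) length.
assert N_k[0] > N_k[-1]
assert abs((Q * fs / f_k[n]) / (Q * fs / f_k[0]) - 0.5) < 1e-12
```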
The equivalent transform kernel can be found by using the following substitutions:
The window length of each bin is now a function of the bin number:
{\displaystyle N=N[k]=Q{\frac {f_{\text{s}}}{f_{k}}}.}
The relative power of each bin will decrease at higher frequencies, as these sum over fewer terms. To compensate for this, we normalize by N[k].
Any windowing function will be a function of window length, and likewise a function of window number. For example, the equivalent Hamming window would be
{\displaystyle W[k,n]=\alpha -(1-\alpha )\cos {\frac {2\pi n}{N[k]-1}},\quad \alpha =25/46,\quad 0\leqslant n\leqslant N[k]-1.}
Our digital frequency, {\displaystyle {\frac {2\pi k}{N}}}, becomes {\displaystyle {\frac {2\pi Q}{N[k]}}}.
After these modifications, we are left with
{\displaystyle X[k]={\frac {1}{N[k]}}\sum _{n=0}^{N[k]-1}W[k,n]x[n]e^{\frac {-j2\pi Qn}{N[k]}}.}
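This per-bin transform can be implemented by direct summation. The sketch below (assuming NumPy; the parameter values are arbitrary) evaluates one constant-Q coefficient with the Hamming window from the text and checks that a bin centred on a pure tone dominates a bin a fifth away:

```python
import numpy as np

def cqt_bin(x, fs, f_k, Q):
    # Naive constant-Q coefficient for one bin (direct summation).
    N_k = int(np.ceil(Q * fs / f_k))
    n = np.arange(N_k)
    alpha = 25.0 / 46.0
    w = alpha - (1 - alpha) * np.cos(2 * np.pi * n / (N_k - 1))  # Hamming window
    return np.sum(w * x[:N_k] * np.exp(-2j * np.pi * Q * n / N_k)) / N_k

fs, Q, f0 = 8000.0, 17.0, 440.0
t = np.arange(int(fs)) / fs
x = np.cos(2 * np.pi * f0 * t)   # one second of a 440 Hz tone

on = abs(cqt_bin(x, fs, f0, Q))         # bin centred on the tone
off = abs(cqt_bin(x, fs, f0 * 1.5, Q))  # bin a fifth above
assert on > 5 * off
```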
=== Variable-Q bandwidth calculation ===
The variable-Q transform is the same as the constant-Q transform except that the filter Q is variable, hence the name. The variable-Q transform is useful where time resolution at low frequencies is important. There are several ways to calculate the bandwidth of the VQT bins, one of them using the equivalent rectangular bandwidth as the bandwidth of each VQT bin.
The simplest way to implement a variable-Q transform is to add a bandwidth offset called γ, like this one:
{\displaystyle \delta f_{k}=\left({\frac {2}{f_{k}+\gamma }}\right)Q.}
This formula can be modified to have extra parameters to adjust sharpness of the transition between constant-Q and constant-bandwidth like this:
{\displaystyle \delta f_{k}=\left({\frac {2}{\sqrt[{\alpha }]{f_{k}^{\alpha }+\gamma ^{\alpha }}}}\right)Q.}
with α as a parameter for the sharpness of the transition between constant-Q and constant-bandwidth behaviour; α = 2 corresponds to a hyperbolic-sine frequency scale in terms of frequency resolution.
== Fast calculation ==
The direct calculation of the constant-Q transform (either using naive discrete Fourier transform or slightly faster Goertzel algorithm) is slow when compared against the fast Fourier transform. However, the fast Fourier transform can itself be employed, in conjunction with the use of a kernel, to perform the equivalent calculation but much faster. An approximate inverse to such an implementation was proposed in 2006; it works by going back to the discrete Fourier transform, and is only suitable for pitch instruments.
A development on this method with improved invertibility involves performing CQT (via fast Fourier transform) octave-by-octave, using lowpass filtered and downsampled results for consecutively lower pitches. Implementations of this method include the MATLAB implementation and LibROSA's Python implementation. LibROSA combines the subsampled method with the direct fast Fourier transform method (which it dubs "pseudo-CQT") by having the latter process higher frequencies as a whole.
The sliding discrete Fourier transform can be used for faster calculation of constant-Q transform, since the sliding discrete Fourier transform does not have to be linear-frequency spacing and same window size per bin.
Alternatively, the constant-Q transform can be approximated by using multiple fast Fourier transforms of different window sizes and/or sampling rate at different frequency ranges then stitch it together. This is called multiresolution short-time Fourier transform, however the window sizes for multiresolution fast Fourier transforms are different per-octave, rather than per-bin.
== Comparison with the Fourier transform ==
In general, the transform is well suited to musical data, and this can be seen in some of its advantages compared to the fast Fourier transform. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively, and this proves useful where frequencies span several octaves. As the range of human hearing covers approximately ten octaves from 20 Hz to around 20 kHz, this reduction in output data is significant.
The transform exhibits a reduction in frequency resolution with higher frequency bins, which is desirable for auditory applications. The transform mirrors the human auditory system, whereby at lower-frequencies spectral resolution is better, whereas temporal resolution improves at higher frequencies. At the bottom of the piano scale (about 30 Hz), a difference of 1 semitone is a difference of approximately 1.5 Hz, whereas at the top of the musical scale (about 5 kHz), a difference of 1 semitone is a difference of approximately 200 Hz. So for musical data the exponential frequency resolution of constant-Q transform is ideal.
In addition, the harmonics of musical notes form a pattern characteristic of the timbre of the instrument in this transform. Assuming the same relative strengths of each harmonic, as the fundamental frequency changes, the relative position of these harmonics remains constant. This can make identification of instruments much easier. The constant Q transform can also be used for automatic recognition of musical keys based on accumulated chroma content.
Relative to the Fourier transform, implementation of this transform is trickier. This is due to the varying number of samples used in the calculation of each frequency bin, which also affects the length of any windowing function implemented.
Also note that because the frequency scale is logarithmic, there is no true zero-frequency / DC term present, which may be a drawback in applications that are interested in the DC term. Although for applications that are not interested in the DC such as audio, this is not a drawback.
== References == | Wikipedia/Constant-Q_transform |
Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in both statistical mechanics and information theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model.
In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy.
== Method description ==
In the periodogram approach to calculating the power spectra, the sample autocorrelation function is multiplied by some window function and then Fourier transformed. The window is applied to provide statistical stability as well as to avoid leakage from other parts of the spectrum. However, the window limits the spectral resolution.
The maximum entropy method attempts to improve the spectral resolution by extrapolating the correlation function beyond the maximum lag, in such a way that the entropy of the corresponding probability density function is maximized in each step of the extrapolation.
The maximum entropy rate stochastic process that satisfies the given empirical autocorrelation and variance constraints is an autoregressive model with independent and identically distributed zero-mean Gaussian input.
Therefore, the maximum entropy method is equivalent to least-squares fitting the available time series data to an autoregressive model
{\displaystyle X_{t}=\sum _{k=1}^{M}\alpha _{k}X_{t-k}+\epsilon _{k}}
where the {\displaystyle \epsilon _{k}} are independent and identically distributed as {\displaystyle N(0,\sigma ^{2})}. The unknown coefficients {\displaystyle \alpha _{k}} are found using the least-squares method. Once the autoregressive coefficients have been determined, the spectrum of the time series data is estimated by evaluating the power spectral density function of the fitted autoregressive model
{\displaystyle {\hat {S}}(\omega )={\frac {\sigma ^{2}T_{s}}{\left|1+\sum _{k=1}^{M}\alpha _{k}e^{-ik\omega T_{s}}\right|^{2}}},}
where {\displaystyle T_{s}} is the sampling period and {\displaystyle i={\sqrt {-1}}} is the imaginary unit.
== References ==
Cover, T. and Thomas, J. (1991) Elements of Information Theory. John Wiley and Sons, Inc.
S. Lawrence Marple, Jr. (1987). Digital spectral analysis with applications. Prentice-Hall. ISBN 0132141493.
Burg J.P. (1967). Maximum Entropy Spectral Analysis. Proceedings of 37th Meeting, Society of Exploration Geophysics, Oklahoma City.
== External links ==
kSpectra Toolkit for Mac OS X from SpectraWorks.
memspectrum: a Python package for maximum entropy spectral estimation [1]
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.
The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used for converting a transfer function
H
a
(
s
)
{\displaystyle H_{a}(s)}
of a linear, time-invariant (LTI) filter in the continuous-time domain (often named an analog filter) to a transfer function
H
d
(
z
)
{\displaystyle H_{d}(z)}
of a linear, shift-invariant filter in the discrete-time domain (often named a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the
j
ω
{\displaystyle j\omega }
axis,
R
e
[
s
]
=
0
{\displaystyle \mathrm {Re} [s]=0}
, in the s-plane to the unit circle,
|
z
|
=
1
{\displaystyle |z|=1}
, in the z-plane. Other bilinear transforms can be used for warping the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays
{\displaystyle \left(z^{-1}\right)}
with first order all-pass filters.
The transform preserves stability and maps every point of the frequency response of the continuous-time filter,
{\displaystyle H_{a}(j\omega _{a})}
to a corresponding point in the frequency response of the discrete-time filter,
{\displaystyle H_{d}(e^{j\omega _{d}T})}
although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. The change in frequency is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency.
== Discrete-time approximation ==
The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discrete-time signal (with each element of the discrete-time sequence attached to a correspondingly delayed unit impulse), the result is precisely the Z transform of the discrete-time sequence with the substitution of
{\displaystyle {\begin{aligned}z&=e^{sT}\\&={\frac {e^{sT/2}}{e^{-sT/2}}}\\&\approx {\frac {1+sT/2}{1-sT/2}}\end{aligned}}}
where {\displaystyle T} is the numerical integration step size of the trapezoidal rule used in the bilinear transform derivation; or, in other words, the sampling period. The above bilinear approximation can be solved for {\displaystyle s}, or a similar approximation for {\displaystyle s=(1/T)\ln(z)} can be performed.
The inverse of this mapping (and its first-order bilinear approximation) is
{\displaystyle {\begin{aligned}s&={\frac {1}{T}}\ln(z)\\&={\frac {2}{T}}\left[{\frac {z-1}{z+1}}+{\frac {1}{3}}\left({\frac {z-1}{z+1}}\right)^{3}+{\frac {1}{5}}\left({\frac {z-1}{z+1}}\right)^{5}+{\frac {1}{7}}\left({\frac {z-1}{z+1}}\right)^{7}+\cdots \right]\\&\approx {\frac {2}{T}}{\frac {z-1}{z+1}}\\&={\frac {2}{T}}{\frac {1-z^{-1}}{1+z^{-1}}}\end{aligned}}}
The bilinear transform essentially uses this first order approximation and substitutes into the continuous-time transfer function, {\displaystyle H_{a}(s)},
{\displaystyle s\leftarrow {\frac {2}{T}}{\frac {z-1}{z+1}}.}
That is
{\displaystyle H_{d}(z)=H_{a}(s){\bigg |}_{s={\frac {2}{T}}{\frac {z-1}{z+1}}}=H_{a}\left({\frac {2}{T}}{\frac {z-1}{z+1}}\right).}
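The accuracy of this first-order approximation can be checked numerically. The sketch below (an illustration, not part of the article; the helper names are ours) compares the bilinear mapping s ≈ (2/T)(z − 1)/(z + 1) against the exact inverse mapping s = (1/T) ln(z) at points on the unit circle:

```python
import cmath

T = 1.0

def s_exact(z):
    # Exact inverse mapping: s = (1/T) * ln(z)
    return cmath.log(z) / T

def s_bilinear(z):
    # First-order (bilinear) approximation of the same mapping
    return (2.0 / T) * (z - 1) / (z + 1)

# On the unit circle at a low frequency (w*T small) the two agree closely...
z = cmath.exp(1j * 0.1 * T)
err_low = abs(s_exact(z) - s_bilinear(z))

# ...but near the Nyquist frequency (w*T -> pi) the approximation degrades
z = cmath.exp(1j * 2.5 * T)
err_high = abs(s_exact(z) - s_bilinear(z))

print(err_low, err_high)
```

This is the numerical face of the frequency warping discussed below: the mapping is nearly exact at low frequencies and increasingly compressed near Nyquist.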
== Stability and minimum-phase property preserved ==
A continuous-time causal filter is stable if the poles of its transfer function fall in the left half of the complex s-plane. A discrete-time causal filter is stable if the poles of its transfer function fall inside the unit circle in the complex z-plane. The bilinear transform maps the left half of the complex s-plane to the interior of the unit circle in the z-plane. Thus, filters designed in the continuous-time domain that are stable are converted to filters in the discrete-time domain that preserve that stability.
Likewise, a continuous-time filter is minimum-phase if the zeros of its transfer function fall in the left half of the complex s-plane. A discrete-time filter is minimum-phase if the zeros of its transfer function fall inside the unit circle in the complex z-plane. Then the same mapping property assures that continuous-time filters that are minimum-phase are converted to discrete-time filters that preserve that property of being minimum-phase.
== Transformation of a general LTI system ==
A general LTI system has the transfer function
{\displaystyle H_{a}(s)={\frac {b_{0}+b_{1}s+b_{2}s^{2}+\cdots +b_{Q}s^{Q}}{a_{0}+a_{1}s+a_{2}s^{2}+\cdots +a_{P}s^{P}}}}
The order of the transfer function N is the greater of P and Q (in practice this is most likely P as the transfer function must be proper for the system to be stable). Applying the bilinear transform
{\displaystyle s=K{\frac {z-1}{z+1}}}
where K = 2/T (or a different value if frequency warping is used, as described below) gives
{\displaystyle H_{d}(z)={\frac {b_{0}+b_{1}\left(K{\frac {z-1}{z+1}}\right)+b_{2}\left(K{\frac {z-1}{z+1}}\right)^{2}+\cdots +b_{Q}\left(K{\frac {z-1}{z+1}}\right)^{Q}}{a_{0}+a_{1}\left(K{\frac {z-1}{z+1}}\right)+a_{2}\left(K{\frac {z-1}{z+1}}\right)^{2}+\cdots +a_{P}\left(K{\frac {z-1}{z+1}}\right)^{P}}}}
Multiplying the numerator and denominator by (z + 1)^N, to clear the largest power of (z + 1)^(−1) present, (z + 1)^(−N), gives
{\displaystyle H_{d}(z)={\frac {b_{0}(z+1)^{N}+b_{1}K(z-1)(z+1)^{N-1}+b_{2}K^{2}(z-1)^{2}(z+1)^{N-2}+\cdots +b_{Q}K^{Q}(z-1)^{Q}(z+1)^{N-Q}}{a_{0}(z+1)^{N}+a_{1}K(z-1)(z+1)^{N-1}+a_{2}K^{2}(z-1)^{2}(z+1)^{N-2}+\cdots +a_{P}K^{P}(z-1)^{P}(z+1)^{N-P}}}}
It can be seen here that after the transformation, the degree of the numerator and denominator are both N.
Consider then the pole-zero form of the continuous-time transfer function
{\displaystyle H_{a}(s)={\frac {(s-\xi _{1})(s-\xi _{2})\cdots (s-\xi _{Q})}{(s-p_{1})(s-p_{2})\cdots (s-p_{P})}}}
The roots of the numerator and denominator polynomials, ξi and pi, are the zeros and poles of the system. The bilinear transform is a one-to-one mapping, hence these can be transformed to the z-domain using
{\displaystyle z={\frac {K+s}{K-s}}}
yielding some of the discretized transfer function's zeros and poles ξ'i and p'i
{\displaystyle {\begin{aligned}\xi '_{i}&={\frac {K+\xi _{i}}{K-\xi _{i}}}\quad 1\leq i\leq Q\\p'_{i}&={\frac {K+p_{i}}{K-p_{i}}}\quad 1\leq i\leq P\end{aligned}}}
As described above, the degree of the numerator and denominator are now both N; in other words, there is now an equal number of zeros and poles. The multiplication by (z + 1)^N means the additional zeros or poles are
{\displaystyle {\begin{aligned}\xi '_{i}&=-1\quad Q<i\leq N\\p'_{i}&=-1\quad P<i\leq N\end{aligned}}}
Given the full set of zeros and poles, the z-domain transfer function is then
{\displaystyle H_{d}(z)={\frac {(z-\xi '_{1})(z-\xi '_{2})\cdots (z-\xi '_{N})}{(z-p'_{1})(z-p'_{2})\cdots (z-p'_{N})}}}
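The pole–zero mapping just described is straightforward to mechanize. The following sketch (our own helper, with made-up example values) maps a set of s-plane zeros and poles through z = (K + s)/(K − s) and pads with zeros or poles at z = −1 up to degree N:

```python
def bilinear_poles_zeros(zeros, poles, K):
    """Map s-plane zeros/poles to the z-plane via z = (K + s)/(K - s),
    padding with z = -1 so numerator and denominator both have degree N."""
    N = max(len(zeros), len(poles))
    zd = [(K + s) / (K - s) for s in zeros]
    pd = [(K + s) / (K - s) for s in poles]
    zd += [-1.0] * (N - len(zd))  # additional zeros at z = -1
    pd += [-1.0] * (N - len(pd))  # additional poles at z = -1 (only when Q > P)
    return zd, pd

# Example with made-up values: one zero, a stable complex pole pair, K = 2/T with T = 1
zd, pd = bilinear_poles_zeros([-1.0], [-0.5 + 2j, -0.5 - 2j], K=2.0)
print(zd, pd)
```

Because the left half-plane maps inside the unit circle, the mapped poles of a stable analog design always land at |z| < 1.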
== Example ==
As an example take a simple low-pass RC filter. This continuous-time filter has a transfer function
{\displaystyle {\begin{aligned}H_{a}(s)&={\frac {1/sC}{R+1/sC}}\\&={\frac {1}{1+RCs}}.\end{aligned}}}
If we wish to implement this filter as a digital filter, we can apply the bilinear transform by substituting for
{\displaystyle s} the formula above; after some reworking, we get the following filter representation:
{\displaystyle H_{d}(z)={\frac {1+z^{-1}}{\left(1+{\frac {2RC}{T}}\right)+\left(1-{\frac {2RC}{T}}\right)z^{-1}}}.}
The coefficients of the denominator are the 'feed-backward' coefficients and the coefficients of the numerator are the 'feed-forward' coefficients used for implementing a real-time digital filter.
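As an illustrative check (the function name and component values here are our own), the discrete coefficients of the RC example can be computed directly, with the denominator normalized to a leading 1, and verified to give unity gain at DC (z = 1), just as the analog filter has H_a(0) = 1:

```python
def rc_lowpass_bilinear(R, C, T):
    """Discretize H_a(s) = 1/(1 + RCs) with s <- (2/T)(1 - z^-1)/(1 + z^-1).
    Returns feed-forward (b) and feed-backward (a) coefficients,
    normalized so that a[0] = 1."""
    K = 2.0 / T
    d = 1.0 + R * C * K          # denominator: (1 + 2RC/T) + (1 - 2RC/T) z^-1
    b = [1.0 / d, 1.0 / d]       # numerator: 1 + z^-1
    a = [1.0, (1.0 - R * C * K) / d]
    return b, a

b, a = rc_lowpass_bilinear(R=1e3, C=1e-6, T=1e-4)  # 1 kOhm, 1 uF, 10 kHz sampling
# DC gain: evaluate at z = 1 -> (b[0] + b[1]) / (1 + a[1]) should equal 1
dc_gain = (b[0] + b[1]) / (1.0 + a[1])
print(dc_gain)
```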
== Transformation for a general first-order continuous-time filter ==
It is possible to relate the coefficients of a continuous-time, analog filter with those of a similar discrete-time digital filter created through the bilinear transform process. Transforming a general, first-order continuous-time filter with the given transfer function
{\displaystyle H_{a}(s)={\frac {b_{0}s+b_{1}}{a_{0}s+a_{1}}}={\frac {b_{0}+b_{1}s^{-1}}{a_{0}+a_{1}s^{-1}}}}
using the bilinear transform (without prewarping any frequency specification) requires the substitution of
{\displaystyle s\leftarrow K{\frac {1-z^{-1}}{1+z^{-1}}}}
where {\displaystyle K\triangleq {\frac {2}{T}}}.
However, if the frequency warping compensation as described below is used in the bilinear transform, so that both analog and digital filter gain and phase agree at frequency
{\displaystyle \omega _{0}}
, then
{\displaystyle K\triangleq {\frac {\omega _{0}}{\tan \left({\frac {\omega _{0}T}{2}}\right)}}}.
This results in a discrete-time digital filter with coefficients expressed in terms of the coefficients of the original continuous time filter:
{\displaystyle H_{d}(z)={\frac {(b_{0}K+b_{1})+(-b_{0}K+b_{1})z^{-1}}{(a_{0}K+a_{1})+(-a_{0}K+a_{1})z^{-1}}}}
Normally the constant term in the denominator must be normalized to 1 before deriving the corresponding difference equation. This results in
{\displaystyle H_{d}(z)={\frac {{\frac {b_{0}K+b_{1}}{a_{0}K+a_{1}}}+{\frac {-b_{0}K+b_{1}}{a_{0}K+a_{1}}}z^{-1}}{1+{\frac {-a_{0}K+a_{1}}{a_{0}K+a_{1}}}z^{-1}}}.}
The difference equation (using the Direct form I) is
{\displaystyle y[n]={\frac {b_{0}K+b_{1}}{a_{0}K+a_{1}}}\cdot x[n]+{\frac {-b_{0}K+b_{1}}{a_{0}K+a_{1}}}\cdot x[n-1]-{\frac {-a_{0}K+a_{1}}{a_{0}K+a_{1}}}\cdot y[n-1]\ .}
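The difference equation above can be run directly. A minimal sketch (using a one-pole lowpass H_a(s) = 1/(s + 1) as an assumed test case, with our own function name) computes the three coefficient ratios once and then iterates:

```python
def first_order_df1(b0, b1, a0, a1, K, x):
    """Run the Direct Form I difference equation obtained from the
    bilinear transform of H_a(s) = (b0*s + b1)/(a0*s + a1)."""
    d = a0 * K + a1                # common denominator of all coefficients
    c_x0 = (b0 * K + b1) / d       # weight on x[n]
    c_x1 = (-b0 * K + b1) / d      # weight on x[n-1]
    c_y1 = (-a0 * K + a1) / d      # weight on y[n-1] (subtracted)
    y, x_prev, y_prev = [], 0.0, 0.0
    for xn in x:
        yn = c_x0 * xn + c_x1 * x_prev - c_y1 * y_prev
        y.append(yn)
        x_prev, y_prev = xn, yn
    return y

# Step response of a discretized one-pole lowpass H_a(s) = 1/(s + 1), K = 2/T, T = 0.1
y = first_order_df1(b0=0.0, b1=1.0, a0=1.0, a1=1.0, K=20.0, x=[1.0] * 200)
print(y[-1])  # settles near the DC gain H_a(0) = 1
```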
== General second-order biquad transformation ==
A similar process can be used for a general second-order filter with the given transfer function
{\displaystyle H_{a}(s)={\frac {b_{0}s^{2}+b_{1}s+b_{2}}{a_{0}s^{2}+a_{1}s+a_{2}}}={\frac {b_{0}+b_{1}s^{-1}+b_{2}s^{-2}}{a_{0}+a_{1}s^{-1}+a_{2}s^{-2}}}\ .}
This results in a discrete-time digital biquad filter with coefficients expressed in terms of the coefficients of the original continuous time filter:
{\displaystyle H_{d}(z)={\frac {(b_{0}K^{2}+b_{1}K+b_{2})+(2b_{2}-2b_{0}K^{2})z^{-1}+(b_{0}K^{2}-b_{1}K+b_{2})z^{-2}}{(a_{0}K^{2}+a_{1}K+a_{2})+(2a_{2}-2a_{0}K^{2})z^{-1}+(a_{0}K^{2}-a_{1}K+a_{2})z^{-2}}}}
Again, the constant term in the denominator is generally normalized to 1 before deriving the corresponding difference equation. This results in
{\displaystyle H_{d}(z)={\frac {{\frac {b_{0}K^{2}+b_{1}K+b_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}+{\frac {2b_{2}-2b_{0}K^{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}z^{-1}+{\frac {b_{0}K^{2}-b_{1}K+b_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}z^{-2}}{1+{\frac {2a_{2}-2a_{0}K^{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}z^{-1}+{\frac {a_{0}K^{2}-a_{1}K+a_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}z^{-2}}}.}
The difference equation (using the Direct form I) is
{\displaystyle y[n]={\frac {b_{0}K^{2}+b_{1}K+b_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}\cdot x[n]+{\frac {2b_{2}-2b_{0}K^{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}\cdot x[n-1]+{\frac {b_{0}K^{2}-b_{1}K+b_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}\cdot x[n-2]-{\frac {2a_{2}-2a_{0}K^{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}\cdot y[n-1]-{\frac {a_{0}K^{2}-a_{1}K+a_{2}}{a_{0}K^{2}+a_{1}K+a_{2}}}\cdot y[n-2]\ .}
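The biquad coefficient formulas above translate into a short routine. This sketch (our own helper; the resonant-lowpass test values are assumptions for illustration) computes the normalized digital coefficients and checks that the gain at DC (z = 1, i.e. s = 0) matches the analog value b2/a2:

```python
def biquad_bilinear(b, a, K):
    """Convert analog biquad coefficients (b0, b1, b2), (a0, a1, a2) into
    digital biquad coefficients with the denominator normalized to 1."""
    b0, b1, b2 = b
    a0, a1, a2 = a
    d = a0 * K**2 + a1 * K + a2
    bd = [(b0 * K**2 + b1 * K + b2) / d,
          (2 * b2 - 2 * b0 * K**2) / d,
          (b0 * K**2 - b1 * K + b2) / d]
    ad = [1.0,
          (2 * a2 - 2 * a0 * K**2) / d,
          (a0 * K**2 - a1 * K + a2) / d]
    return bd, ad

# Analog lowpass H_a(s) = 1/(s^2 + s + 1), discretized with K = 2/T, T = 0.5
bd, ad = biquad_bilinear((0.0, 0.0, 1.0), (1.0, 1.0, 1.0), K=4.0)
dc_gain = sum(bd) / sum(ad)   # evaluating at z = 1 gives H_a(0) = b2/a2 = 1
print(dc_gain)
```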
== Frequency warping ==
To determine the frequency response of a continuous-time filter, the transfer function {\displaystyle H_{a}(s)} is evaluated at {\displaystyle s=j\omega _{a}}, which is on the {\displaystyle j\omega } axis. Likewise, to determine the frequency response of a discrete-time filter, the transfer function {\displaystyle H_{d}(z)} is evaluated at {\displaystyle z=e^{j\omega _{d}T}}, which is on the unit circle, {\displaystyle |z|=1}. The bilinear transform maps the {\displaystyle j\omega } axis of the s-plane (which is the domain of {\displaystyle H_{a}(s)}) to the unit circle of the z-plane, {\displaystyle |z|=1} (which is the domain of {\displaystyle H_{d}(z)}), but it is not the same mapping {\displaystyle z=e^{sT}} that also maps the {\displaystyle j\omega } axis to the unit circle. When the actual frequency {\displaystyle \omega _{d}} is input to the discrete-time filter designed by use of the bilinear transform, it is desired to know at what frequency, {\displaystyle \omega _{a}}, of the continuous-time filter this {\displaystyle \omega _{d}} is mapped to.
{\displaystyle H_{d}(z)=H_{a}\left({\frac {2}{T}}{\frac {z-1}{z+1}}\right)}
This shows that every point on the unit circle in the discrete-time filter z-plane,
{\displaystyle z=e^{j\omega _{d}T}}, is mapped to a point on the {\displaystyle j\omega } axis on the continuous-time filter s-plane, {\displaystyle s=j\omega _{a}}
. That is, the discrete-time to continuous-time frequency mapping of the bilinear transform is
{\displaystyle \omega _{a}={\frac {2}{T}}\tan \left(\omega _{d}{\frac {T}{2}}\right)}
and the inverse mapping is
{\displaystyle \omega _{d}={\frac {2}{T}}\arctan \left(\omega _{a}{\frac {T}{2}}\right).}
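These two mappings are exact inverses of each other, which is easy to confirm numerically. The sketch below (illustrative; the 48 kHz sample rate and 10 kHz test tone are arbitrary assumptions) warps a discrete-time frequency to its continuous-time equivalent and back:

```python
import math

def warp(w_d, T):
    # discrete-time frequency -> equivalent continuous-time frequency
    return (2.0 / T) * math.tan(w_d * T / 2.0)

def unwarp(w_a, T):
    # continuous-time frequency -> discrete-time frequency
    return (2.0 / T) * math.atan(w_a * T / 2.0)

T = 1.0 / 48000.0                 # e.g. 48 kHz sampling
w_d = 2 * math.pi * 10000.0       # 10 kHz, a sizeable fraction of Nyquist (24 kHz)
w_a = warp(w_d, T)
roundtrip = unwarp(w_a, T)
print(w_a / (2 * math.pi), roundtrip / (2 * math.pi))
```

Note that w_a comes out larger than w_d: features of the analog response are compressed toward Nyquist in the digital response.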
The discrete-time filter behaves at frequency {\displaystyle \omega _{d}} the same way that the continuous-time filter behaves at frequency {\displaystyle (2/T)\tan(\omega _{d}T/2)}. Specifically, the gain and phase shift that the discrete-time filter has at frequency {\displaystyle \omega _{d}} is the same gain and phase shift that the continuous-time filter has at frequency {\displaystyle (2/T)\tan(\omega _{d}T/2)}. This means that every feature, every "bump" that is visible in the frequency response of the continuous-time filter is also visible in the discrete-time filter, but at a different frequency. For low frequencies (that is, when {\displaystyle \omega _{d}\ll 2/T} or {\displaystyle \omega _{a}\ll 2/T}), the features are mapped to only a slightly different frequency; {\displaystyle \omega _{d}\approx \omega _{a}}.
One can see that the entire continuous frequency range
{\displaystyle -\infty <\omega _{a}<+\infty } is mapped onto the fundamental frequency interval {\displaystyle -{\frac {\pi }{T}}<\omega _{d}<+{\frac {\pi }{T}}.}
The continuous-time filter frequency
{\displaystyle \omega _{a}=0} corresponds to the discrete-time filter frequency {\displaystyle \omega _{d}=0}, and the continuous-time filter frequency {\displaystyle \omega _{a}=\pm \infty } corresponds to the discrete-time filter frequency {\displaystyle \omega _{d}=\pm \pi /T.}
One can also see that there is a nonlinear relationship between
{\displaystyle \omega _{a}} and {\displaystyle \omega _{d}.}
This effect of the bilinear transform is called frequency warping. The continuous-time filter can be designed to compensate for this frequency warping by setting
{\displaystyle \omega _{a}={\frac {2}{T}}\tan \left(\omega _{d}{\frac {T}{2}}\right)}
for every frequency specification that the designer has control over (such as corner frequency or center frequency). This is called pre-warping the filter design.
It is possible, however, to compensate for the frequency warping by pre-warping a frequency specification
{\displaystyle \omega _{0}}
(usually a resonant frequency or the frequency of the most significant feature of the frequency response) of the continuous-time system. These pre-warped specifications may then be used in the bilinear transform to obtain the desired discrete-time system. When designing a digital filter as an approximation of a continuous time filter, the frequency response (both amplitude and phase) of the digital filter can be made to match the frequency response of the continuous filter at a specified frequency
{\displaystyle \omega _{0}}
, as well as matching at DC, if the following transform is substituted into the continuous filter transfer function. This is a modified version of Tustin's transform shown above.
{\displaystyle s\leftarrow {\frac {\omega _{0}}{\tan \left({\frac {\omega _{0}T}{2}}\right)}}{\frac {z-1}{z+1}}.}
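The pre-warped substitution is exact at the chosen match frequency. This follows from the identity (e^{jθ} − 1)/(e^{jθ} + 1) = j tan(θ/2); the sketch below (illustrative values) verifies that with K = ω₀/tan(ω₀T/2), the point z = e^{jω₀T} on the unit circle is mapped to exactly s = jω₀:

```python
import cmath, math

T = 1e-3
w0 = 2 * math.pi * 100.0          # match analog and digital responses at 100 Hz
K = w0 / math.tan(w0 * T / 2.0)   # pre-warped constant

z = cmath.exp(1j * w0 * T)        # point on the unit circle at frequency w0
s = K * (z - 1) / (z + 1)         # the substituted s value

print(s, 1j * w0)                 # s lands exactly on j*w0
```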
However, note that this transform becomes the original transform
{\displaystyle s\leftarrow {\frac {2}{T}}{\frac {z-1}{z+1}}}
as {\displaystyle \omega _{0}\to 0}.
The main advantage of the warping phenomenon is the absence of the aliasing distortion of the frequency response characteristic that is observed with impulse invariance.
== See also ==
Impulse invariance
Matched Z-transform method
== References ==
== External links ==
MIT OpenCourseWare Signal Processing: Continuous to Discrete Filter Design
Lecture Notes on Discrete Equivalents
The Art of VA Filter Design | Wikipedia/Bilinear_transform |
In applied mathematics, the starred transform, or star transform, is a discrete-time variation of the Laplace transform, so-named because of the asterisk or "star" in the customary notation of the sampled signals.
The transform is an operator of a continuous-time function
{\displaystyle x(t)}, which is transformed to a function {\displaystyle X^{*}(s)}
in the following manner:
{\displaystyle {\begin{aligned}X^{*}(s)={\mathcal {L}}[x(t)\cdot \delta _{T}(t)]={\mathcal {L}}[x^{*}(t)],\end{aligned}}}
where {\displaystyle \delta _{T}(t)} is a Dirac comb function with period T.
The starred transform is a convenient mathematical abstraction that represents the Laplace transform of an impulse sampled function
{\displaystyle x^{*}(t)}, which is the output of an ideal sampler, whose input is a continuous function, {\displaystyle x(t)}.
The starred transform is similar to the Z transform, with a simple change of variables, where the starred transform is explicitly declared in terms of the sampling period (T), while the Z transform is performed on a discrete signal and is independent of the sampling period. This makes the starred transform a de-normalized version of the one-sided Z-transform, as it restores the dependence on sampling parameter T.
== Relation to Laplace transform ==
Since
{\displaystyle X^{*}(s)={\mathcal {L}}[x^{*}(t)]}
, where:
{\displaystyle {\begin{aligned}x^{*}(t)\ {\stackrel {\mathrm {def} }{=}}\ x(t)\cdot \delta _{T}(t)&=x(t)\cdot \sum _{n=0}^{\infty }\delta (t-nT).\end{aligned}}}
Then per the convolution theorem, the starred transform is equivalent to the complex convolution of
{\displaystyle {\mathcal {L}}[x(t)]=X(s)} and {\displaystyle {\mathcal {L}}[\delta _{T}(t)]={\frac {1}{1-e^{-Ts}}}}
, hence:
{\displaystyle X^{*}(s)={\frac {1}{2\pi j}}\int _{c-j\infty }^{c+j\infty }{X(p)\cdot {\frac {1}{1-e^{-T(s-p)}}}\cdot dp}.}
This line integration is equivalent to integration in the positive sense along a closed contour formed by such a line and an infinite semicircle that encloses the poles of X(s) in the left half-plane of p. The result of such an integration (per the residue theorem) would be:
{\displaystyle X^{*}(s)=\sum _{\lambda ={\text{poles of }}X(s)}\operatorname {Res} \limits _{p=\lambda }{\bigg [}X(p){\frac {1}{1-e^{-T(s-p)}}}{\bigg ]}.}
Alternatively, the aforementioned line integration is equivalent to integration in the negative sense along a closed contour formed by such a line and an infinite semicircle that encloses the infinite poles of
{\displaystyle {\frac {1}{1-e^{-T(s-p)}}}}
in the right half-plane of p. The result of such an integration would be:
{\displaystyle X^{*}(s)={\frac {1}{T}}\sum _{k=-\infty }^{\infty }X\left(s-j{\tfrac {2\pi }{T}}k\right)+{\frac {x(0)}{2}}.}
== Relation to Z transform ==
Given a Z-transform, X(z), the corresponding starred transform is a simple substitution:
{\displaystyle {\bigg .}X^{*}(s)=X(z){\bigg |}_{\displaystyle z=e^{sT}}}
This substitution restores the dependence on T.
The mapping is invertible:
{\displaystyle {\bigg .}X(z)=X^{*}(s){\bigg |}_{\displaystyle e^{sT}=z}}
{\displaystyle {\bigg .}X(z)=X^{*}(s){\bigg |}_{\displaystyle s={\frac {\ln(z)}{T}}}}
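This substitution can be confirmed numerically. The sketch below (an illustration, not from the article; the choice of x(t) = e^{−t} and the sample values are assumptions) sums the starred-transform series X*(s) = Σ x(nT) e^{−snT} directly and compares it with the known one-sided Z transform of the sampled exponential, X(z) = 1/(1 − e^{−T} z^{−1}), evaluated at z = e^{sT}:

```python
import math

# Sample x(t) = e^{-t} at period T; its one-sided Z transform is
# X(z) = 1 / (1 - e^{-T} z^{-1}).  The starred transform is X(z) at z = e^{sT}.
T = 0.1
s = 0.5                                   # evaluate at a real s for simplicity

# Direct series: X*(s) = sum_n x(nT) e^{-s n T}
series = sum(math.exp(-n * T) * math.exp(-s * n * T) for n in range(5000))

z = math.exp(s * T)
closed_form = 1.0 / (1.0 - math.exp(-T) / z)

print(series, closed_form)
```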
== Properties of the starred transform ==
Property 1:
{\displaystyle X^{*}(s)} is periodic in {\displaystyle s} with period {\displaystyle j{\tfrac {2\pi }{T}}.}
{\displaystyle X^{*}(s+j{\tfrac {2\pi }{T}}k)=X^{*}(s)}
Property 2: If
{\displaystyle X(s)} has a pole at {\displaystyle s=s_{1}}, then {\displaystyle X^{*}(s)} must have poles at {\displaystyle s=s_{1}+j{\tfrac {2\pi }{T}}k}, where {\displaystyle \scriptstyle k=0,\pm 1,\pm 2,\ldots }
== Citations ==
== References ==
Bech, Michael M. "Digital Control Theory" (PDF). AALBORG University. Retrieved 5 February 2014.
Gopal, M. (March 1989). Digital Control Engineering. John Wiley & Sons. ISBN 0852263082.
Phillips and Nagle, "Digital Control System Analysis and Design", 3rd Edition, Prentice Hall, 1995. ISBN 0-13-309832-X | Wikipedia/Starred_transform |
In mathematics and signal processing, the advanced z-transform is an extension of the z-transform, to incorporate ideal delays that are not multiples of the sampling time. The advanced z-transform is widely applied, for example, to accurately model processing delays in digital control. It is also known as the modified z-transform.
It takes the form
{\displaystyle F(z,m)=\sum _{k=0}^{\infty }f(kT+m)z^{-k}}
where
T is the sampling period
m (the "delay parameter") is a fraction of the sampling period
{\displaystyle [0,T].}
== Properties ==
If the delay parameter, m, is considered fixed then all the properties of the z-transform hold for the advanced z-transform.
=== Linearity ===
{\displaystyle {\mathcal {Z}}\left\{\sum _{k=1}^{n}c_{k}f_{k}(t)\right\}=\sum _{k=1}^{n}c_{k}F_{k}(z,m).}
=== Time shift ===
{\displaystyle {\mathcal {Z}}\left\{u(t-nT)f(t-nT)\right\}=z^{-n}F(z,m).}
=== Damping ===
{\displaystyle {\mathcal {Z}}\left\{f(t)e^{-a\,t}\right\}=e^{-a\,m}F(e^{a\,T}z,m).}
=== Time multiplication ===
{\displaystyle {\mathcal {Z}}\left\{t^{y}f(t)\right\}=\left(-Tz{\frac {d}{dz}}+m\right)^{y}F(z,m).}
=== Final value theorem ===
{\displaystyle \lim _{k\to \infty }f(kT+m)=\lim _{z\to 1}(1-z^{-1})F(z,m).}
== Example ==
Consider the following example where
{\displaystyle f(t)=\cos(\omega t)}:
{\displaystyle {\begin{aligned}F(z,m)&={\mathcal {Z}}\left\{\cos \left(\omega \left(kT+m\right)\right)\right\}\\&={\mathcal {Z}}\left\{\cos(\omega kT)\cos(\omega m)-\sin(\omega kT)\sin(\omega m)\right\}\\&=\cos(\omega m){\mathcal {Z}}\left\{\cos(\omega kT)\right\}-\sin(\omega m){\mathcal {Z}}\left\{\sin(\omega kT)\right\}\\&=\cos(\omega m){\frac {z\left(z-\cos(\omega T)\right)}{z^{2}-2z\cos(\omega T)+1}}-\sin(\omega m){\frac {z\sin(\omega T)}{z^{2}-2z\cos(\omega T)+1}}\\&={\frac {z^{2}\cos(\omega m)-z\cos(\omega (T-m))}{z^{2}-2z\cos(\omega T)+1}}.\end{aligned}}}
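The closed form can be checked against a direct evaluation of the defining series. The sketch below (illustrative; the sample values of z, m, ω, and T are arbitrary, with |z| > 1 so the series converges) compares the two:

```python
import math

def F_series(z, m, w, T, terms=4000):
    # Direct evaluation of the advanced z-transform series for f(t) = cos(w t)
    return sum(math.cos(w * (k * T + m)) * z**(-k) for k in range(terms))

def F_closed(z, m, w, T):
    # Closed form derived above
    num = z**2 * math.cos(w * m) - z * math.cos(w * (T - m))
    den = z**2 - 2 * z * math.cos(w * T) + 1
    return num / den

# Convergence requires |z| > 1; pick arbitrary sample values
z, m, w, T = 1.5, 0.3, 2.0, 1.0
approx = F_series(z, m, w, T)
exact = F_closed(z, m, w, T)
print(approx, exact)
```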
If
{\displaystyle m=0} then {\displaystyle F(z,m)} reduces to the transform
{\displaystyle F(z,0)={\frac {z^{2}-z\cos(\omega T)}{z^{2}-2z\cos(\omega T)+1}},}
which is clearly just the z-transform of
{\displaystyle f(t)}
.
== References ==
Jury, Eliahu Ibraham (1973). Theory and Application of the z-Transform Method. Krieger. ISBN 0-88275-122-0. OCLC 836240. | Wikipedia/Advanced_z-transform |
The matched Z-transform method, also called the pole–zero mapping or pole–zero matching method, and abbreviated MPZ or MZT, is a technique for converting a continuous-time filter design to a discrete-time filter (digital filter) design.
The method works by mapping all poles and zeros of the s-plane design to z-plane locations
{\displaystyle z=e^{sT}}, for a sample interval {\displaystyle T=1/f_{\mathrm {s} }}
. So an analog filter with transfer function:
{\displaystyle H(s)=k_{\mathrm {a} }{\frac {\prod _{i=1}^{M}(s-\xi _{i})}{\prod _{i=1}^{N}(s-p_{i})}}}
is transformed into the digital transfer function
{\displaystyle H(z)=k_{\mathrm {d} }{\frac {\prod _{i=1}^{M}(1-e^{\xi _{i}T}z^{-1})}{\prod _{i=1}^{N}(1-e^{p_{i}T}z^{-1})}}}
The gain
{\displaystyle k_{\mathrm {d} }} must be adjusted to normalize the desired gain, typically set to match the analog filter's gain at DC by setting {\displaystyle s=0} and {\displaystyle z=1} and solving for {\displaystyle k_{\mathrm {d} }}.
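The mapping and the gain normalization can be sketched in a few lines. The following is an illustrative helper (our own, not from any library), applied to the assumed test case of a one-pole lowpass H(s) = 1/(s + 1):

```python
import cmath, math

def matched_z(zeros, poles, T, gain_at_dc):
    """Map analog zeros/poles to the z-plane via z = e^{sT}, then choose k_d
    so the digital gain at z = 1 equals the analog gain at s = 0."""
    zd = [cmath.exp(x * T) for x in zeros]
    pd = [cmath.exp(p * T) for p in poles]
    num = 1.0
    for x in zd:
        num *= (1 - x)            # numerator product evaluated at z = 1
    den = 1.0
    for p in pd:
        den *= (1 - p)            # denominator product evaluated at z = 1
    k_d = (gain_at_dc * den / num).real   # real for conjugate-symmetric designs
    return zd, pd, k_d

# One-pole lowpass H(s) = 1/(s + 1): no zeros, pole at s = -1, H(0) = 1
zd, pd, k_d = matched_z([], [-1.0], T=0.1, gain_at_dc=1.0)
dc = k_d / (1 - pd[0].real)       # digital DC gain, matches the analog value
print(pd[0].real, dc)
```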
Since the mapping wraps the s-plane's
{\displaystyle j\omega } axis around the z-plane's unit circle repeatedly, any zeros (or poles) at frequencies greater than the Nyquist frequency will be mapped to an aliased location.
In the (common) case that the analog transfer function has more poles than zeros, the zeros at
{\displaystyle s=\infty } may optionally be shifted down to the Nyquist frequency by putting them at {\displaystyle z=-1}, causing the transfer function to drop off as {\displaystyle z\rightarrow -1}
in much the same manner as with the bilinear transform (BLT).
While this transform preserves stability and minimum phase, it preserves neither time- nor frequency-domain response and so is not widely used. More common methods include the BLT and impulse invariance methods. MZT does provide less high frequency response error than the BLT, however, making it easier to correct by adding additional zeros, which is called the MZTi (for "improved").
A specific application of the matched Z-transform method in the digital control field is with the Ackermann's formula, which changes the poles of the controllable system; in general from an unstable (or nearby) location to a stable location.
== References == | Wikipedia/Matched_Z-transform_method |
In signal processing, the Nyquist rate, named after Harry Nyquist, is a value equal to twice the highest frequency (bandwidth) of a given function or signal. It has units of samples per unit time, conventionally expressed as samples per second, or hertz (Hz). When the signal is sampled at a higher sample rate (see § Critical frequency), the resulting discrete-time sequence is said to be free of the distortion known as aliasing. Conversely, for a given sample rate the corresponding Nyquist frequency is one-half the sample rate. Note that the Nyquist rate is a property of a continuous-time signal, whereas Nyquist frequency is a property of a discrete-time system.
The term Nyquist rate is also used in a different context with units of symbols per second, which is actually the field in which Harry Nyquist was working. In that context it is an upper bound for the symbol rate across a bandwidth-limited baseband channel such as a telegraph line or passband channel such as a limited radio frequency band or a frequency division multiplex channel.
== Relative to sampling ==
When a continuous function,
{\displaystyle x(t),} is sampled at a constant rate, {\displaystyle f_{s}} samples/second, there is always an unlimited number of other continuous functions that fit the same set of samples. But only one of them is bandlimited to {\displaystyle {\tfrac {1}{2}}f_{s}} cycles/second (hertz), which means that its Fourier transform, {\displaystyle X(f),} is {\displaystyle 0} for all {\displaystyle |f|\geq {\tfrac {1}{2}}f_{s}.}
The mathematical algorithms that are typically used to recreate a continuous function from samples create arbitrarily good approximations to this theoretical, but infinitely long, function. It follows that if the original function,
{\displaystyle x(t),} is bandlimited to {\displaystyle {\tfrac {1}{2}}f_{s},} which is called the Nyquist criterion, then it is the one unique function the interpolation algorithms are approximating. In terms of a function's own bandwidth {\displaystyle (B),} as depicted here, the Nyquist criterion is often stated as {\displaystyle f_{s}>2B.} And {\displaystyle 2B} is called the Nyquist rate for functions with bandwidth {\displaystyle B.}
When the Nyquist criterion is not met
{\displaystyle (}say, {\displaystyle B>{\tfrac {1}{2}}f_{s}),} a condition called aliasing occurs, which results in some inevitable differences between {\displaystyle x(t)} and a reconstructed function that has less bandwidth. In most cases, the differences are viewed as distortion.
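Aliasing is easy to demonstrate numerically. In the sketch below (an illustration with arbitrary assumed values, not from the article), a 100 Hz cosine and a 900 Hz cosine sampled at 1000 samples/second produce identical sample sequences, since cos(2π(fs − f)nT) = cos(2πn − 2πf nT) = cos(2πf nT):

```python
import math

# Two cosines, one below and one above Nyquist, that produce identical samples:
# at sample rate fs, frequencies f and fs - f are indistinguishable.
fs = 1000.0                          # samples per second
T = 1.0 / fs
f_low, f_high = 100.0, fs - 100.0    # 100 Hz vs 900 Hz

samples_low  = [math.cos(2 * math.pi * f_low  * n * T) for n in range(50)]
samples_high = [math.cos(2 * math.pi * f_high * n * T) for n in range(50)]

max_diff = max(abs(a - b) for a, b in zip(samples_low, samples_high))
print(max_diff)   # the two sampled sequences coincide
```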
=== Intentional aliasing ===
Figure 3 depicts a type of function called baseband or lowpass, because its positive-frequency range of significant energy is [0, B). When instead, the frequency range is (A, A+B), for some A > B, it is called bandpass, and a common desire (for various reasons) is to convert it to baseband. One way to do that is frequency-mixing (heterodyne) the bandpass function down to the frequency range (0, B). One of the possible reasons is to reduce the Nyquist rate for more efficient storage. And it turns out that one can directly achieve the same result by sampling the bandpass function at a sub-Nyquist sample-rate that is the smallest integer-sub-multiple of frequency A that meets the baseband Nyquist criterion: fs > 2B. For a more general discussion, see bandpass sampling.
== Relative to signaling ==
Long before Harry Nyquist had his name associated with sampling, the term Nyquist rate was used differently, with a meaning closer to what Nyquist actually studied. Quoting Harold S. Black's 1953 book Modulation Theory, in the section Nyquist Interval of the opening chapter Historical Background:
"If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval." (bold added for emphasis; italics from the original)
B in this context, related to the Nyquist ISI criterion, refers to the one-sided bandwidth, rather than the total bandwidth considered in later usage.
According to the OED, Black's statement regarding 2B may be the origin of the term Nyquist rate.
Nyquist's famous 1928 paper was a study on how many pulses (code elements) could be transmitted per second, and recovered, through a channel of limited bandwidth.
Signaling at the Nyquist rate meant putting as many code pulses through a telegraph channel as its bandwidth would allow. Shannon used Nyquist's approach when he proved the sampling theorem in 1948, but Nyquist did not work on sampling per se.
Black's later chapter on "The Sampling Principle" does give Nyquist some of the credit for some relevant math:
"Nyquist (1928) pointed out that, if the function is substantially limited to the time interval T, 2BT values are sufficient to specify the function, basing his conclusions on a Fourier series representation of the function over the time interval T."
== See also ==
Nyquist frequency
Nyquist ISI criterion
Nyquist–Shannon sampling theorem
Sampling (signal processing)
A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations. When evaluated on the actual data points, it becomes a function solely of the model parameters.
In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the likelihood's Hessian matrix at the maximum) gives an indication of the estimate's precision.
In contrast, in Bayesian statistics, the estimate of interest is the converse of the likelihood, the so-called posterior probability of the parameter given the observed data, which is calculated via Bayes' rule.
== Definition ==
The likelihood function, parameterized by a (possibly multivariate) parameter {\textstyle \theta }, is usually defined differently for discrete and continuous probability distributions (a more general definition is discussed below). Given a probability density or mass function {\displaystyle x\mapsto f(x\mid \theta ),} where {\textstyle x} is a realization of the random variable {\textstyle X}, the likelihood function is {\displaystyle \theta \mapsto f(x\mid \theta ),} often written {\displaystyle {\mathcal {L}}(\theta \mid x).}
In other words, when {\textstyle f(x\mid \theta )} is viewed as a function of {\textstyle x} with {\textstyle \theta } fixed, it is a probability density function, and when viewed as a function of {\textstyle \theta } with {\textstyle x} fixed, it is a likelihood function. In the frequentist paradigm, the notation {\textstyle f(x\mid \theta )} is often avoided and instead {\textstyle f(x;\theta )} or {\textstyle f(x,\theta )} are used to indicate that {\textstyle \theta } is regarded as a fixed unknown quantity rather than as a random variable being conditioned on.
The likelihood function does not specify the probability that {\textstyle \theta } is the truth, given the observed sample {\textstyle X=x}. Such an interpretation is a common error, with potentially disastrous consequences (see prosecutor's fallacy).
=== Discrete probability distribution ===
Let {\textstyle X} be a discrete random variable with probability mass function {\textstyle p} depending on a parameter {\textstyle \theta }. Then the function {\displaystyle {\mathcal {L}}(\theta \mid x)=p_{\theta }(x)=P_{\theta }(X=x),}
considered as a function of {\textstyle \theta }, is the likelihood function, given the outcome {\textstyle x} of the random variable {\textstyle X}. Sometimes the probability of "the value {\textstyle x} of {\textstyle X} for the parameter value {\textstyle \theta }" is written as P(X = x | θ) or P(X = x; θ). The likelihood is the probability that a particular outcome {\textstyle x} is observed when the true value of the parameter is {\textstyle \theta }, equivalent to the probability mass on {\textstyle x}; it is not a probability density over the parameter {\textstyle \theta }. The likelihood, {\textstyle {\mathcal {L}}(\theta \mid x)}, should not be confused with {\textstyle P(\theta \mid x)}, which is the posterior probability of {\textstyle \theta } given the data {\textstyle x}.
==== Example ====
Consider a simple statistical model of a coin flip: a single parameter {\textstyle p_{\text{H}}} that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. {\textstyle p_{\text{H}}} can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, {\textstyle p_{\text{H}}=0.5}.
Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is {\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.5)=0.5^{2}=0.25.}
Equivalently, the likelihood of observing "HH" assuming {\textstyle p_{\text{H}}=0.5} is {\displaystyle {\mathcal {L}}(p_{\text{H}}=0.5\mid {\text{HH}})=0.25.}
This is not the same as saying that {\textstyle P(p_{\text{H}}=0.5\mid HH)=0.25}, a conclusion which could only be reached via Bayes' theorem given knowledge about the marginal probabilities {\textstyle P(p_{\text{H}}=0.5)} and {\textstyle P({\text{HH}})}.
Now suppose that the coin is not a fair coin, but instead that {\textstyle p_{\text{H}}=0.3}. Then the probability of two heads on two flips is {\displaystyle P({\text{HH}}\mid p_{\text{H}}=0.3)=0.3^{2}=0.09.}
Hence {\displaystyle {\mathcal {L}}(p_{\text{H}}=0.3\mid {\text{HH}})=0.09.}
More generally, for each value of {\textstyle p_{\text{H}}}, we can calculate the corresponding likelihood. The result of such calculations is displayed in Figure 1. The integral of {\textstyle {\mathcal {L}}} over [0, 1] is 1/3; likelihoods need not integrate or sum to one over the parameter space.
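The numbers in this example are easy to reproduce; a minimal sketch (the function name is an illustrative choice), including a check that the likelihood curve integrates to 1/3 rather than 1:

```python
# Likelihood of observing "HH" in two i.i.d. tosses, as a function of p_H.
def likelihood_HH(p_H):
    return p_H ** 2

print(round(likelihood_HH(0.5), 4))  # 0.25
print(round(likelihood_HH(0.3), 4))  # 0.09

# The likelihood curve over the parameter space integrates to 1/3, not 1:
grid = [i / 10000 for i in range(10001)]
integral = sum(likelihood_HH(p) for p in grid) / len(grid)
print(round(integral, 3))  # 0.333
```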
=== Continuous probability distribution ===
Let {\textstyle X} be a random variable following an absolutely continuous probability distribution with density function {\textstyle f} (a function of {\textstyle x}) which depends on a parameter {\textstyle \theta }. Then the function {\displaystyle {\mathcal {L}}(\theta \mid x)=f_{\theta }(x),}
considered as a function of {\textstyle \theta }, is the likelihood function (of {\textstyle \theta }, given the outcome {\textstyle X=x}). Again, {\textstyle {\mathcal {L}}} is not a probability density or mass function over {\textstyle \theta }, despite being a function of {\textstyle \theta } given the observation {\textstyle X=x}.
==== Relationship between the likelihood and probability density functions ====
The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation {\textstyle x_{j}}, the likelihood for the interval {\textstyle [x_{j},x_{j}+h]}, where {\textstyle h>0} is a constant, is given by {\textstyle {\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])}
. Observe that {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h]),}
since {\textstyle h} is positive and constant. Because {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\Pr(x_{j}\leq x\leq x_{j}+h\mid \theta )=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx,}
where {\textstyle f(x\mid \theta )} is the probability density function, it follows that {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])=\mathop {\operatorname {arg\,max} } _{\theta }{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx.}
The first fundamental theorem of calculus provides that {\displaystyle \lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx=f(x_{j}\mid \theta ).}
Then {\displaystyle {\begin{aligned}\mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\mathcal {L}}(\theta \mid x\in [x_{j},x_{j}+h])\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }\left[\lim _{h\to 0^{+}}{\frac {1}{h}}\int _{x_{j}}^{x_{j}+h}f(x\mid \theta )\,dx\right]\\[4pt]&=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ).\end{aligned}}}
Therefore, {\displaystyle \mathop {\operatorname {arg\,max} } _{\theta }{\mathcal {L}}(\theta \mid x_{j})=\mathop {\operatorname {arg\,max} } _{\theta }f(x_{j}\mid \theta ),} and so maximizing the probability density at {\textstyle x_{j}} amounts to maximizing the likelihood of the specific observation {\textstyle x_{j}}.
=== In general ===
In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is this density interpreted as a function of the parameter, rather than the random variable. Thus, we can construct a likelihood function for any distribution, whether discrete, continuous, a mixture, or otherwise. (Likelihoods are comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
The above discussion of the likelihood for discrete random variables uses the counting measure, under which the probability density at any outcome equals the probability of that outcome.
=== Likelihoods for mixed continuous–discrete distributions ===
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses {\textstyle p_{k}(\theta )} and a density {\textstyle f(x\mid \theta )}, where the sum of all the {\textstyle p}'s added to the integral of {\textstyle f} is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above. For an observation from the discrete component, the likelihood function is simply
{\displaystyle {\mathcal {L}}(\theta \mid x)=p_{k}(\theta ),}
where {\textstyle k} is the index of the discrete probability mass corresponding to observation {\textstyle x}, because maximizing the probability mass (or probability) at {\textstyle x} amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation {\textstyle x}, but not with the parameter {\textstyle \theta }.
=== Regularity conditions ===
In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, it suffices that the likelihood function is continuous on a compact parameter space for the maximum likelihood estimator to exist. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values might be unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the k-dimensional parameter space {\textstyle \Theta } assumed to be an open connected subset of {\textstyle \mathbb {R} ^{k}\,,} there exists a unique maximum {\textstyle {\hat {\theta }}\in \Theta } if the matrix of second partials {\displaystyle \mathbf {H} (\theta )\equiv \left[\,{\frac {\partial ^{2}L}{\,\partial \theta _{i}\,\partial \theta _{j}\,}}\,\right]_{i,j=1}^{k}\;} is negative definite for every {\textstyle \,\theta \in \Theta \,} at which the gradient {\textstyle \;\nabla L\equiv \left[\,{\frac {\partial L}{\,\partial \theta _{i}\,}}\,\right]_{i=1}^{k}\;} vanishes, and if the likelihood function approaches a constant on the boundary of the parameter space, {\textstyle \;\partial \Theta \;,} i.e., {\displaystyle \lim _{\theta \to \partial \Theta }L(\theta )=0\;,} which may include the points at infinity if {\textstyle \,\Theta \,} is unbounded. Mäkeläinen and co-authors prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem.
In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all {\textstyle x}, and for all {\textstyle \,\theta \in \Theta \,,} {\displaystyle {\frac {\partial \log f}{\partial \theta _{r}}}\,,\quad {\frac {\partial ^{2}\log f}{\partial \theta _{r}\partial \theta _{s}}}\,,\quad {\frac {\partial ^{3}\log f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\,} exist for all {\textstyle \,r,s,t=1,2,\ldots ,k\,} in order to ensure the existence of a Taylor expansion. Second, for almost all {\textstyle x} and for every {\textstyle \,\theta \in \Theta \,} it must be that
{\displaystyle \left|{\frac {\partial f}{\partial \theta _{r}}}\right|<F_{r}(x)\,,\quad \left|{\frac {\partial ^{2}f}{\partial \theta _{r}\,\partial \theta _{s}}}\right|<F_{rs}(x)\,,\quad \left|{\frac {\partial ^{3}f}{\partial \theta _{r}\,\partial \theta _{s}\,\partial \theta _{t}}}\right|<H_{rst}(x)}
where {\textstyle H} is such that {\textstyle \,\int _{-\infty }^{\infty }H_{rst}(z)\mathrm {d} z\leq M<\infty \;.} This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix,
{\displaystyle \mathbf {I} (\theta )=\int _{-\infty }^{\infty }{\frac {\partial \log f}{\partial \theta _{r}}}\ {\frac {\partial \log f}{\partial \theta _{s}}}\ f\ \mathrm {d} z}
is positive definite and {\textstyle \,\left|\mathbf {I} (\theta )\right|\,} is finite. This ensures that the score has a finite variance.
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.
In Bayesian statistics, almost identical regularity conditions are imposed on the likelihood function in order to prove asymptotic normality of the posterior probability, and therefore to justify a Laplace approximation of the posterior in large samples.
== Likelihood ratio and relative likelihood ==
=== Likelihood ratio ===
A likelihood ratio is the ratio of any two specified likelihoods, frequently written as:
{\displaystyle \Lambda (\theta _{1}:\theta _{2}\mid x)={\frac {{\mathcal {L}}(\theta _{1}\mid x)}{{\mathcal {L}}(\theta _{2}\mid x)}}.}
The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.
In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.
The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule states that the posterior odds of two alternatives, {\displaystyle A_{1}} and {\displaystyle A_{2}}, given an event {\displaystyle B}, is the prior odds, times the likelihood ratio. As an equation:
{\displaystyle O(A_{1}:A_{2}\mid B)=O(A_{1}:A_{2})\cdot \Lambda (A_{1}:A_{2}\mid B).}
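The odds form of Bayes' rule can be checked numerically. This sketch reuses the coin model from the example above with a hypothetical setup: two candidate coins and equal prior odds (none of these numbers come from the article beyond the two likelihood values):

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# Hypothetical setup: two candidate coins, p_H = 0.5 (A1) and p_H = 0.3 (A2),
# equal prior odds, and the observed event B = "HH".
prior_odds = 1.0                   # P(A1) / P(A2)
lr = 0.5**2 / 0.3**2               # Λ(A1 : A2 | HH) = 0.25 / 0.09
posterior_odds = prior_odds * lr

# Cross-check against direct computation of P(A1|B) / P(A2|B):
p_B = 0.5 * 0.25 + 0.5 * 0.09      # total probability of observing HH
direct = (0.5 * 0.25 / p_B) / (0.5 * 0.09 / p_B)
assert abs(posterior_odds - direct) < 1e-12
print(round(posterior_odds, 4))    # 2.7778
```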
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).
In evidence-based medicine, likelihood ratios are used in diagnostic testing to assess the value of performing a diagnostic test.
=== Relative likelihood function ===
Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter θ is {\textstyle {\hat {\theta }}}. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of {\textstyle {\hat {\theta }}}. The relative likelihood of θ is defined to be
{\displaystyle R(\theta )={\frac {{\mathcal {L}}(\theta \mid x)}{{\mathcal {L}}({\hat {\theta }}\mid x)}}.}
Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator {\textstyle {\mathcal {L}}({\hat {\theta }})}. This corresponds to standardizing the likelihood to have a maximum of 1.
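As a sketch, the relative likelihood for the two-heads coin example above can be computed directly. The helper names are illustrative; the MLE is 1 because, with data "HH", the likelihood p² is maximized at p = 1 on [0, 1]:

```python
# Relative likelihood R(p) = L(p | HH) / L(p_hat | HH) for the coin example.
# With data "HH", L(p | HH) = p**2, maximized on [0, 1] at the MLE p_hat = 1.
def likelihood(p):
    return p ** 2

p_hat = 1.0

def relative_likelihood(p):
    return likelihood(p) / likelihood(p_hat)

print(relative_likelihood(1.0))  # 1.0 (the standardized maximum)
print(relative_likelihood(0.5))  # 0.25
# Example 14.65% likelihood region: all p with R(p) >= 0.1465.
region = [i / 1000 for i in range(1001) if relative_likelihood(i / 1000) >= 0.1465]
print(round(min(region), 3))     # 0.383
```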
==== Likelihood region ====
A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be
{\displaystyle \left\{\theta :R(\theta )\geq {\frac {p}{100}}\right\}.}
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e−2 likelihood interval is the same as the 0.954 confidence interval; assuming difference in df's to be 1).
== Likelihoods that eliminate nuisance parameters ==
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph.
=== Profile likelihood ===
It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector {\textstyle \mathbf {\theta } } that can be partitioned into {\textstyle \mathbf {\theta } =\left(\mathbf {\theta } _{1}:\mathbf {\theta } _{2}\right)}, and where a correspondence {\textstyle \mathbf {\hat {\theta }} _{2}=\mathbf {\hat {\theta }} _{2}\left(\mathbf {\theta } _{1}\right)} can be determined explicitly, concentration reduces the computational burden of the original maximization problem.
For instance, in a linear regression with normally distributed errors, {\textstyle \mathbf {y} =\mathbf {X} \beta +u}, the coefficient vector could be partitioned into {\textstyle \beta =\left[\beta _{1}:\beta _{2}\right]} (and consequently the design matrix {\textstyle \mathbf {X} =\left[\mathbf {X} _{1}:\mathbf {X} _{2}\right]}). Maximizing with respect to {\textstyle \beta _{2}} yields an optimal value function {\textstyle \beta _{2}(\beta _{1})=\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}\left(\mathbf {y} -\mathbf {X} _{1}\beta _{1}\right)}. Using this result, the maximum likelihood estimator for {\textstyle \beta _{1}} can then be derived as
{\displaystyle {\hat {\beta }}_{1}=\left(\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {X} _{1}\right)^{-1}\mathbf {X} _{1}^{\mathsf {T}}\left(\mathbf {I} -\mathbf {P} _{2}\right)\mathbf {y} }
where {\textstyle \mathbf {P} _{2}=\mathbf {X} _{2}\left(\mathbf {X} _{2}^{\mathsf {T}}\mathbf {X} _{2}\right)^{-1}\mathbf {X} _{2}^{\mathsf {T}}} is the projection matrix of {\textstyle \mathbf {X} _{2}}. This result is known as the Frisch–Waugh–Lovell theorem.
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter {\textstyle \beta _{2}} that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given {\textstyle \beta _{1}}, the result of this procedure is also known as profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood.
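The concentrated estimator and the Frisch–Waugh–Lovell identity can be checked numerically; a sketch with hypothetical simulated data (numpy assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X1 = rng.normal(size=(n, 2))   # columns of interest
X2 = rng.normal(size=(n, 3))   # nuisance columns
y = X1 @ np.array([1.0, -2.0]) + X2 @ np.array([0.5, 0.0, 3.0]) + rng.normal(size=n)
X = np.hstack([X1, X2])

# Full OLS fit (the ML estimate of beta under normal errors).
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# Concentrated estimate of beta_1 via the annihilator I - P2.
P2 = X2 @ np.linalg.inv(X2.T @ X2) @ X2.T
M2 = np.eye(n) - P2
beta1 = np.linalg.inv(X1.T @ M2 @ X1) @ (X1.T @ M2 @ y)

assert np.allclose(beta1, beta_full[:2])  # FWL: the two routes agree
```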
=== Conditional likelihood ===
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
=== Marginal likelihood ===
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.
=== Partial likelihood ===
A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.
== Products of likelihoods ==
The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:
{\displaystyle \Lambda (A\mid X_{1}\land X_{2})=\Lambda (A\mid X_{1})\cdot \Lambda (A\mid X_{2}).}
This follows from the definition of independence in probability: the probabilities of two independent events happening, given a model, is the product of the probabilities.
This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.
== Log-likelihood ==
The log-likelihood function is the logarithm of the likelihood function, often denoted by a lowercase l or {\displaystyle \ell }, to contrast with the uppercase L or {\textstyle {\mathcal {L}}} for the likelihood. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function in maximum likelihood estimation, in particular since most common probability distributions—notably the exponential family—are only logarithmically concave, and concavity of the objective function plays a key role in the maximization.
Given the independence of each event, the overall log-likelihood of an intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probabilities of the individual events. Beyond its mathematical convenience, the additivity of log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When parameters are estimated by maximizing the log-likelihood, each data point contributes by being added to the total log-likelihood. Since the data can be viewed as evidence supporting the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
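The additivity of log-likelihood for i.i.d. observations can be sketched directly; the toss sequence and function names below are illustrative:

```python
import math

# Log-likelihood of i.i.d. coin tosses: the product of per-toss likelihoods
# becomes a sum of per-toss log-likelihoods.
def log_likelihood(p, tosses):
    return sum(math.log(p if t == "H" else 1 - p) for t in tosses)

tosses = list("HHTH")
p = 0.5

# The product form and the sum form agree:
product = 1.0
for t in tosses:
    product *= p if t == "H" else 1 - p
assert abs(math.log(product) - log_likelihood(p, tosses)) < 1e-12
print(round(log_likelihood(p, tosses), 4))  # -2.7726
```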
A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:
{\displaystyle \log {\frac {{\mathcal {L}}(A)}{{\mathcal {L}}(B)}}=\log {\mathcal {L}}(A)-\log {\mathcal {L}}(B)=\ell (A)-\ell (B).}
Just as the likelihood given no event is 1, the log-likelihood given no event is 0, which corresponds to the value of the empty sum: without any data, there is no support for any model.
=== Graph ===
The graph of the log-likelihood is called the support curve (in the univariate case).
In the multivariate case, the concept generalizes into a support surface over the parameter space.
It has a relation to, but is distinct from, the support of a distribution.
The term was coined by A. W. F. Edwards in the context of statistical hypothesis testing, i.e. whether or not the data "support" one hypothesis (or parameter value) being tested more than any other.
The log-likelihood function being plotted is used in the computation of the score (the gradient of the log-likelihood) and Fisher information (the curvature of the log-likelihood). Thus, the graph has a direct interpretation in the context of maximum likelihood estimation and likelihood-ratio tests.
=== Likelihood equations ===
If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written {\textstyle s_{n}(\theta )\equiv \nabla _{\theta }\ell _{n}(\theta )}, exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than for the likelihood of independent events.
The equations defined by the stationary point of the score function serve as estimating equations for the maximum likelihood estimator.
{\displaystyle s_{n}(\theta )=\mathbf {0} }
In that sense, the maximum likelihood estimator is implicitly defined by the value at {\textstyle \mathbf {0} } of the inverse function {\textstyle s_{n}^{-1}:\mathbb {E} ^{d}\to \Theta }, where {\textstyle \mathbb {E} ^{d}} is the d-dimensional Euclidean space, and {\textstyle \Theta } is the parameter space. Using the inverse function theorem, it can be shown that {\textstyle s_{n}^{-1}} is well-defined in an open neighborhood about {\textstyle \mathbf {0} } with probability going to one, and {\textstyle {\hat {\theta }}_{n}=s_{n}^{-1}(\mathbf {0} )} is a consistent estimate of {\textstyle \theta }. As a consequence there exists a sequence {\textstyle \left\{{\hat {\theta }}_{n}\right\}} such that {\textstyle s_{n}({\hat {\theta }}_{n})=\mathbf {0} } asymptotically almost surely, and {\textstyle {\hat {\theta }}_{n}\xrightarrow {\text{p}} \theta _{0}}. A similar result can be established using Rolle's theorem.
The second derivative evaluated at {\textstyle {\hat {\theta }}}, known as Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate.
=== Exponential families ===
The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus the likelihood function) for exponential families contains products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions, writing {\textstyle \langle -,-\rangle } for the inner product):
{\displaystyle p(x\mid {\boldsymbol {\theta }})=h(x)\exp {\Big (}\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }}){\Big )}.}
Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum:
{\displaystyle \ell ({\boldsymbol {\theta }}\mid x)=\langle {\boldsymbol {\eta }}({\boldsymbol {\theta }}),\mathbf {T} (x)\rangle -A({\boldsymbol {\theta }})+\log h(x).}
The {\textstyle {\boldsymbol {\eta }}({\boldsymbol {\theta }})} and {\textstyle h(x)} each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:
{\displaystyle \ell ({\boldsymbol {\eta }}\mid x)=\langle {\boldsymbol {\eta }},\mathbf {T} (x)\rangle -A({\boldsymbol {\eta }}).}
In words, the log-likelihood of an exponential family is the inner product of the natural parameter {\displaystyle {\boldsymbol {\eta }}} and the sufficient statistic {\displaystyle \mathbf {T} (x)}, minus the normalization factor (log-partition function) {\displaystyle A({\boldsymbol {\eta }})}. Thus for example the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A.
==== Example: the gamma distribution ====
The gamma distribution is an exponential family with two parameters, {\textstyle \alpha } and {\textstyle \beta }. The likelihood function is
{\displaystyle {\mathcal {L}}(\alpha ,\beta \mid x)={\frac {\beta ^{\alpha }}{\Gamma (\alpha )}}x^{\alpha -1}e^{-\beta x}.}
Finding the maximum likelihood estimate of {\textstyle \beta } for a single observed value {\textstyle x} looks rather daunting. Its logarithm is much simpler to work with:
{\displaystyle \log {\mathcal {L}}(\alpha ,\beta \mid x)=\alpha \log \beta -\log \Gamma (\alpha )+(\alpha -1)\log x-\beta x.}
To maximize the log-likelihood, we first take the partial derivative with respect to {\textstyle \beta }:
{\displaystyle {\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x)}{\partial \beta }}={\frac {\alpha }{\beta }}-x.}
If there are a number of independent observations {\textstyle x_{1},\ldots ,x_{n}}, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
{\displaystyle {\begin{aligned}&{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1},\ldots ,x_{n})}{\partial \beta }}\\&={\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{1})}{\partial \beta }}+\cdots +{\frac {\partial \log {\mathcal {L}}(\alpha ,\beta \mid x_{n})}{\partial \beta }}\\&={\frac {n\alpha }{\beta }}-\sum _{i=1}^{n}x_{i}.\end{aligned}}}
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for {\textstyle \beta }:
{\displaystyle {\widehat {\beta }}={\frac {\alpha }{\bar {x}}}.}
Here {\textstyle {\widehat {\beta }}} denotes the maximum-likelihood estimate, and {\textstyle {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}} is the sample mean of the observations.
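The closed form β̂ = α/x̄ can be checked numerically. The sketch below generates a hypothetical gamma sample with known shape α (the specific parameter values and sample size are illustrative), computes β̂ from the sample mean, and verifies that it is a local maximizer of the joint log-likelihood.

```python
import math
import random

# Hypothetical sample from a gamma distribution with known shape alpha.
random.seed(0)
alpha, beta_true = 2.0, 3.0
# random.gammavariate takes (shape, scale); the rate is beta, so scale = 1/beta.
xs = [random.gammavariate(alpha, 1.0 / beta_true) for _ in range(1000)]
x_bar = sum(xs) / len(xs)

def joint_loglik(beta):
    # Sum of per-observation log-likelihoods, dropping terms constant in beta
    # (log Gamma(alpha) and (alpha - 1) log x do not affect the maximization).
    return sum(alpha * math.log(beta) - beta * x for x in xs)

beta_hat = alpha / x_bar   # the stationary point derived above

# beta_hat maximizes the joint log-likelihood among nearby candidates:
assert joint_loglik(beta_hat) >= joint_loglik(beta_hat * 1.01)
assert joint_loglik(beta_hat) >= joint_loglik(beta_hat * 0.99)
print(beta_hat)
```

Because the log-likelihood is strictly concave in β, the stationary point is the unique maximum, and with 1000 observations β̂ lands close to the true rate of 3.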
== Background and interpretation ==
=== Historical remarks ===
The term "likelihood" has been in use in English since at least late Middle English. Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
[I]n 1922, I proposed the term 'likelihood,' in view of the fact that, with respect to [the parameter], it is not a probability, and does not obey the laws of probability, while at the same time it bears to the problem of rational choice among the possible values of [the parameter] a relation similar to that which probability bears to the problem of predicting events in games of chance. . . . Whereas, however, in relation to psychological judgment, likelihood has some resemblance to probability, the two concepts are wholly distinct. . . ."
The concept of likelihood should not be confused with probability, as noted by Sir Ronald Fisher:
I stress this because in spite of the emphasis that I have always laid upon the difference between probability and likelihood there is still a tendency to treat likelihood as though it were a sort of probability. The first result is thus that there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood.
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards (1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.
=== Interpretations under different foundations ===
Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.
==== Frequentist interpretation ====
==== Bayesian interpretation ====
In Bayesian inference, one can speak about the likelihood of any proposition or random variable given another random variable: for example, the likelihood of a parameter value or of a statistical model (see marginal likelihood), given specified data or other evidence. The likelihood function nevertheless remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model has a large likelihood value for given data, and yet has a low probability, or vice versa. This is often the case in medical contexts. Following Bayes' rule, the likelihood when seen as a conditional density can be multiplied by the prior probability density of the parameter and then normalized, to give a posterior probability density. More generally, the likelihood of an unknown quantity {\textstyle X} given another unknown quantity {\textstyle Y} is proportional to the probability of {\textstyle Y} given {\textstyle X}.
==== Likelihoodist interpretation ====
In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1 ... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population that the observed sample was drawn from. Heuristically, a good choice of parameters is one that renders the observed sample the maximum possible post-hoc probability of having happened. Wilks' theorem quantifies this heuristic by showing that the difference between the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by the population's "true" (but unknown) parameter values is asymptotically χ² distributed.
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population's "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets' likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1 ... θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. The χ2 distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates).
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.
==== AIC-based interpretation ====
Under the AIC paradigm, likelihood is interpreted within the context of information theory.
== See also ==
== Notes ==
== References ==
== Further reading ==
Azzalini, Adelchi (1996). "Likelihood". Statistical Inference Based on the Likelihood. Chapman and Hall. pp. 17–50. ISBN 0-412-60650-X.
Boos, Dennis D.; Stefanski, L. A. (2013). "Likelihood Construction and Estimation". Essential Statistical Inference : Theory and Methods. New York: Springer. pp. 27–124. doi:10.1007/978-1-4614-4818-1_2. ISBN 978-1-4614-4817-4.
Edwards, A. W. F. (1992) [1972]. Likelihood (Expanded ed.). Johns Hopkins University Press. ISBN 0-8018-4443-6.
King, Gary (1989). "The Likelihood Model of Inference". Unifying Political Methodology : the Likehood Theory of Statistical Inference. Cambridge University Press. pp. 59–94. ISBN 0-521-36697-6.
Richard, Mark; Vecer, Jan (1 February 2021). "Efficiency Testing of Prediction Markets: Martingale Approach, Likelihood Ratio and Bayes Factor Analysis". Risks. 9 (2): 31. doi:10.3390/risks9020031. hdl:10419/258120.
Lindsey, J. K. (1996). "Likelihood". Parametric Statistical Inference. Oxford University Press. pp. 69–139. ISBN 0-19-852359-9.
Rohde, Charles A. (2014). Introductory Statistical Inference with the Likelihood Function. Berlin: Springer. ISBN 978-3-319-10460-7.
Royall, Richard (1997). Statistical Evidence : A Likelihood Paradigm. London: Chapman & Hall. ISBN 0-412-04411-0.
Ward, Michael D.; Ahlquist, John S. (2018). "The Likelihood Function: A Deeper Dive". Maximum Likelihood for Social Science : Strategies for Analysis. Cambridge University Press. pp. 21–28. ISBN 978-1-316-63682-4.
== External links ==
Likelihood function at Planetmath
"Log-likelihood". Statlect.
An intrusion detection system (IDS) is a device or software application that monitors a network or systems for malicious activity or policy violations. Any intrusion activity or violation is typically either reported to an administrator or collected centrally using a security information and event management (SIEM) system. A SIEM system combines outputs from multiple sources and uses alarm filtering techniques to distinguish malicious activity from false alarms.
IDS types range in scope from single computers to large networks. The most common classifications are network intrusion detection systems (NIDS) and host-based intrusion detection systems (HIDS). A system that monitors important operating system files is an example of an HIDS, while a system that analyzes incoming network traffic is an example of an NIDS. It is also possible to classify IDS by detection approach. The most well-known variants are signature-based detection (recognizing bad patterns, such as exploitation attempts) and anomaly-based detection (detecting deviations from a model of "good" traffic, which often relies on machine learning). Another common variant is reputation-based detection (recognizing potential threats according to reputation scores). Some IDS products have the ability to respond to detected intrusions; systems with response capabilities are typically referred to as intrusion prevention systems (IPS). Intrusion detection systems can also serve specific purposes by augmenting them with custom tools, such as using a honeypot to attract and characterize malicious traffic.
== Comparison with firewalls ==
Although they both relate to network security, an IDS differs from a firewall in that a conventional network firewall (distinct from a next-generation firewall) uses a static set of rules to permit or deny network connections. It implicitly prevents intrusions, assuming an appropriate set of rules have been defined. Essentially, firewalls limit access between networks to prevent intrusion and do not signal an attack from inside the network. An IDS describes a suspected intrusion once it has taken place and signals an alarm. An IDS also watches for attacks that originate from within a system. This is traditionally achieved by examining network communications, identifying heuristics and patterns (often known as signatures) of common computer attacks, and taking action to alert operators. A system that terminates connections is called an intrusion prevention system, and performs access control like an application layer firewall.
== Intrusion detection category ==
IDS can be classified by where detection takes place (network or host) or the detection method that is employed (signature or anomaly-based).
=== Analyzed activity ===
==== Network intrusion detection systems ====
Network intrusion detection systems (NIDS) are placed at a strategic point or points within the network to monitor traffic to and from all devices on the network. It performs an analysis of passing traffic on the entire subnet, and matches the traffic that is passed on the subnets to the library of known attacks. Once an attack is identified, or abnormal behavior is sensed, the alert can be sent to the administrator. NIDS function to safeguard every device and the entire network from unauthorized access.
An example of an NIDS would be installing it on the subnet where firewalls are located in order to see if someone is trying to break into the firewall. Ideally one would scan all inbound and outbound traffic; however, doing so might create a bottleneck that would impair the overall speed of the network. OPNET and NetSim are commonly used tools for simulating network intrusion detection systems. NIDS are also capable of comparing signatures for similar packets to link and drop harmful detected packets which have a signature matching the records in the NIDS. Classified by system interactivity, there are two types of NIDS: on-line and off-line, often referred to as inline and tap mode, respectively. On-line NIDS deals with the network in real time: it analyses the Ethernet packets and applies rules to decide whether each represents an attack. Off-line NIDS deals with stored data and passes it through some processes to decide if it is an attack or not.
NIDS can also be combined with other technologies to increase detection and prediction rates. Artificial neural network (ANN) based IDS are capable of analyzing huge volumes of data thanks to their hidden layers and non-linear modeling, although this process takes time due to the network's complex structure. This allows IDS to more efficiently recognize intrusion patterns. Neural networks assist IDS in predicting attacks by learning from mistakes; ANN based IDS help develop an early warning system based on two layers. The first layer accepts single values, while the second layer takes the first layer's output as input; the cycle repeats and allows the system to automatically recognize new, unforeseen patterns in the network. Such a system can average a 99.9% detection and classification rate, based on research results covering 24 network attacks divided into four categories: DoS, probe, remote-to-local, and user-to-root.
==== Host intrusion detection systems ====
Host intrusion detection systems (HIDS) run on individual hosts or devices on the network. A HIDS monitors the inbound and outbound packets from the device only and will alert the user or administrator if suspicious activity is detected. It takes a snapshot of existing system files and matches it to the previous snapshot. If the critical system files were modified or deleted, an alert is sent to the administrator to investigate. An example of HIDS usage can be seen on mission critical machines, which are not expected to change their configurations.
=== Detection method ===
==== Signature-based ====
Signature-based IDS is the detection of attacks by looking for specific patterns, such as byte sequences in network traffic, or known malicious instruction sequences used by malware. This terminology originates from anti-virus software, which refers to these detected patterns as signatures. Although signature-based IDS can easily detect known attacks, it is difficult to detect new attacks, for which no pattern is available.
In signature-based IDS, the signatures are released by a vendor for all its products. On-time updating of the IDS with the signature is a key aspect.
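At its core, signature-based detection is pattern matching against a ruleset. The following is a minimal, illustrative sketch (the signatures and payload are made up and not drawn from any real IDS ruleset) showing how known byte patterns might be matched against packet payloads:

```python
# Toy signature database: byte patterns mapped to alert names.
# These patterns are hypothetical examples, not real vendor signatures.
SIGNATURES = {
    b"\x90\x90\x90\x90": "NOP sled (possible shellcode)",
    b"' OR 1=1 --": "SQL injection attempt",
    b"../../etc/passwd": "Path traversal attempt",
}

def scan_payload(payload: bytes) -> list[str]:
    """Return the names of all known signatures found in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in payload]

alerts = scan_payload(b"GET /index.php?id=' OR 1=1 -- HTTP/1.1")
print(alerts)  # matches the SQL injection signature
```

This also illustrates the method's main limitation noted above: any attack whose bytes are not in the signature database, such as a novel exploit or a slightly modified payload, passes unmatched.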
==== Anomaly-based ====
Anomaly-based intrusion detection systems were primarily introduced to detect unknown attacks, in part due to the rapid development of malware. The basic approach is to use machine learning to create a model of trustworthy activity, and then compare new behavior against this model. Since these models can be trained according to specific applications and hardware configurations, machine learning based methods generalize better than traditional signature-based IDS. Although this approach enables the detection of previously unknown attacks, it may suffer from false positives: previously unknown legitimate activity may also be classified as malicious. Most existing IDSs also suffer from a time-consuming detection process that degrades their performance; an efficient feature selection algorithm can make the classification used in detection more reliable.
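In its simplest statistical form, anomaly detection learns a baseline of normal activity and flags observations that deviate too far from it. The toy sketch below (all numbers are hypothetical) models per-minute request counts and flags anything more than three standard deviations from the learned mean:

```python
import statistics

# Hypothetical training window of "normal" per-minute request counts.
baseline = [52, 48, 50, 47, 53, 49, 51, 50, 48, 52]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count: float, threshold: float = 3.0) -> bool:
    # Flag observations more than `threshold` standard deviations
    # away from the baseline mean.
    return abs(count - mean) > threshold * stdev

print(is_anomalous(50))    # ordinary traffic, within the baseline
print(is_anomalous(400))   # e.g. a flood, far outside the baseline
```

The false-positive risk described above is visible here: a legitimate but unusual traffic spike would be flagged just as readily as an attack, which is why the baseline must be configured and trained carefully.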
New types of what could be called anomaly-based intrusion detection systems are being viewed by Gartner as User and Entity Behavior Analytics (UEBA) (an evolution of the user behavior analytics category) and network traffic analysis (NTA). In particular, NTA deals with malicious insiders as well as targeted external attacks that have compromised a user machine or account. Gartner has noted that some organizations have opted for NTA over more traditional IDS.
== Intrusion prevention ==
Some systems may attempt to stop an intrusion attempt but this is neither required nor expected of a monitoring system. Intrusion detection and prevention systems (IDPS) are primarily focused on identifying possible incidents, logging information about them, and reporting attempts. In addition, organizations use IDPS for other purposes, such as identifying problems with security policies, documenting existing threats and deterring individuals from violating security policies. IDPS have become a necessary addition to the security infrastructure of nearly every organization.
IDPS typically record information related to observed events, notify security administrators of important observed events and produce reports. Many IDPS can also respond to a detected threat by attempting to prevent it from succeeding. They use several response techniques, which involve the IDPS stopping the attack itself, changing the security environment (e.g. reconfiguring a firewall) or changing the attack's content.
Intrusion prevention systems (IPS), also known as intrusion detection and prevention systems (IDPS), are network security appliances that monitor network or system activities for malicious activity. The main functions of intrusion prevention systems are to identify malicious activity, log information about this activity, report it and attempt to block or stop it.
Intrusion prevention systems are considered extensions of intrusion detection systems because they both monitor network traffic and/or system activities for malicious activity. The main difference is that, unlike intrusion detection systems, intrusion prevention systems are placed in-line and are able to actively prevent or block intrusions that are detected. IPS can take such actions as sending an alarm, dropping detected malicious packets, resetting a connection or blocking traffic from the offending IP address. An IPS can also correct cyclic redundancy check (CRC) errors, defragment packet streams, mitigate TCP sequencing issues, and clean up unwanted transport and network layer options.
=== Classification ===
Intrusion prevention systems can be classified into four different types:
Network-based intrusion prevention system (NIPS): monitors the entire network for suspicious traffic by analyzing protocol activity.
Wireless intrusion prevention system (WIPS): monitor a wireless network for suspicious traffic by analyzing wireless networking protocols.
Network behavior analysis (NBA): examines network traffic to identify threats that generate unusual traffic flows, such as distributed denial of service (DDoS) attacks, certain forms of malware and policy violations.
Host-based intrusion prevention system (HIPS): an installed software package which monitors a single host for suspicious activity by analyzing events occurring within that host.
=== Detection methods ===
The majority of intrusion prevention systems utilize one of three detection methods: signature-based, statistical anomaly-based, and stateful protocol analysis.
Signature-based detection: Signature-based IDS monitors packets in the Network and compares with pre-configured and pre-determined attack patterns known as signatures. While it is the simplest and most effective method, it fails to detect unknown attacks and variants of known attacks.
Statistical anomaly-based detection: An anomaly-based IDS will monitor network traffic and compare it against an established baseline. The baseline identifies what is "normal" for that network – what sort of bandwidth is generally used and which protocols are used. It may, however, raise a false positive alarm for legitimate use of bandwidth if the baselines are not intelligently configured. Ensemble models that use the Matthews correlation coefficient to identify unauthorized network traffic have obtained 99.73% accuracy.
Stateful protocol analysis detection: This method identifies deviations of protocol states by comparing observed events with "pre-determined profiles of generally accepted definitions of benign activity". While it is capable of knowing and tracing the protocol states, it requires significant resources.
== Placement ==
The correct placement of intrusion detection systems is critical and varies depending on the network. The most common placement is behind the firewall, on the edge of a network. This practice provides the IDS with high visibility of traffic entering the network, though it will not see traffic between users within the network. The edge of the network is the point at which a network connects to the extranet. Another practice that can be used if more resources are available is for a technician to place their first IDS at the point of highest visibility and, depending on resource availability, place another at the next highest point, continuing that process until all points of the network are covered.
If an IDS is placed beyond a network's firewall, its main purpose would be to defend against noise from the internet but, more importantly, against common attacks such as port scans and network mapping. An IDS in this position would monitor layers 4 through 7 of the OSI model and would be signature-based. This is a very useful practice because, rather than showing actual breaches into the network that made it through the firewall, attempted breaches will be shown, which reduces the number of false positives. The IDS in this position also assists in decreasing the amount of time it takes to discover successful attacks against a network.
Sometimes an IDS with more advanced features will be integrated with a firewall in order to be able to intercept sophisticated attacks entering the network. Examples of advanced features would include multiple security contexts in the routing level and bridging mode. All of this in turn potentially reduces cost and operational complexity.
Another option for IDS placement is within the actual network. These will reveal attacks or suspicious activity within the network. Ignoring the security within a network can cause many problems: it will either allow users to bring about security risks or allow an attacker who has already broken into the network to roam around freely. Intense intranet security makes it difficult for even those hackers within the network to maneuver around and escalate their privileges.
== Limitations ==
Noise can severely limit an intrusion detection system's effectiveness. Bad packets generated from software bugs, corrupt DNS data, and local packets that escaped can create a significantly high false-alarm rate.
The number of real attacks is often so far below the number of false alarms that real attacks are frequently missed and ignored.
Many attacks are geared for specific versions of software that are usually outdated. A constantly changing library of signatures is needed to mitigate threats. Outdated signature databases can leave the IDS vulnerable to newer strategies.
For signature-based IDS, there will be lag between a new threat discovery and its signature being applied to the IDS. During this lag time, the IDS will be unable to identify the threat.
It cannot compensate for weak identification and authentication mechanisms or for weaknesses in network protocols. When an attacker gains access due to weak authentication mechanisms, an IDS cannot prevent the adversary from further misuse.
Encrypted packets are not processed by most intrusion detection devices. Therefore, the encrypted packet can allow an intrusion to the network that is undiscovered until more significant network intrusions have occurred.
Intrusion detection software provides information based on the network address that is associated with the IP packet that is sent into the network. This is beneficial if the network address contained in the IP packet is accurate. However, the address that is contained in the IP packet could be faked or scrambled.
Due to the nature of NIDS systems, and the need for them to analyse protocols as they are captured, NIDS systems can be susceptible to the same protocol-based attacks to which network hosts may be vulnerable. Invalid data and TCP/IP stack attacks may cause a NIDS to crash.
The security measures on cloud computing do not consider the variation of users' privacy needs. They provide the same security mechanism for all users, regardless of whether the user is a company or an individual person.
== Evasion techniques ==
Attackers use a number of techniques; the following are considered 'simple' measures which can be taken to evade IDS:
Fragmentation: by sending fragmented packets, the attacker will be under the radar and can easily bypass the detection system's ability to detect the attack signature.
Avoiding defaults: The TCP port utilised by a protocol does not always provide an indication to the protocol which is being transported. For example, an IDS may expect to detect a trojan on port 12345. If an attacker had reconfigured it to use a different port, the IDS may not be able to detect the presence of the trojan.
Coordinated, low-bandwidth attacks: coordinating a scan among numerous attackers (or agents) and allocating different ports or hosts to different attackers makes it difficult for the IDS to correlate the captured packets and deduce that a network scan is in progress.
Address spoofing/proxying: attackers can make it harder for security administrators to determine the source of an attack by using poorly secured or incorrectly configured proxy servers to bounce the attack. If the source is spoofed and bounced by a server, it makes it very difficult for the IDS to detect the origin of the attack.
Pattern change evasion: IDS generally rely on 'pattern matching' to detect an attack. By changing the data used in the attack slightly, it may be possible to evade detection. For example, an Internet Message Access Protocol (IMAP) server may be vulnerable to a buffer overflow, and an IDS is able to detect the attack signature of 10 common attack tools. By modifying the payload sent by the tool, so that it does not resemble the data that the IDS expects, it may be possible to evade detection.
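The fragmentation and pattern-matching weaknesses above can be illustrated with a toy sketch. This is not a real IDS; the signature, payload, and function are all hypothetical, and real systems reassemble streams precisely to close this gap.

```python
SIGNATURE = b"/bin/sh"   # hypothetical attack signature

def naive_ids(packets):
    """Per-packet signature matcher with no stream reassembly."""
    return any(SIGNATURE in p for p in packets)

attack = b"GET /cgi-bin/exploit?cmd=/bin/sh"
assert naive_ids([attack])              # whole payload: detected

# Fragmentation evasion: split the payload so the signature straddles
# two fragments. Per-packet inspection no longer sees it, although
# the reassembled stream is byte-for-byte identical.
frags = [attack[:28], attack[28:]]
assert b"".join(frags) == attack
assert not naive_ids(frags)             # same attack, undetected
```

The same matcher also fails the pattern-change case: any one-byte edit to the attack tool's payload that avoids the exact signature bytes defeats it.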
== Development ==
The earliest preliminary IDS concept was delineated in 1980 by James Anderson at the National Security Agency and consisted of a set of tools intended to help administrators review audit trails. User access logs, file access logs, and system event logs are examples of audit trails.
Fred Cohen noted in 1987 that it is impossible to detect an intrusion in every case, and that the resources needed to detect intrusions grow with the amount of usage.
Dorothy E. Denning, assisted by Peter G. Neumann, published a model of an IDS in 1986 that formed the basis for many systems today. Her model used statistics for anomaly detection, and resulted in an early IDS at SRI International named the Intrusion Detection Expert System (IDES), which ran on Sun workstations and could consider both user and network level data. IDES had a dual approach with a rule-based Expert System to detect known types of intrusions plus a statistical anomaly detection component based on profiles of users, host systems, and target systems. The author of "IDES: An Intelligent System for Detecting Intruders", Teresa F. Lunt, proposed adding an artificial neural network as a third component. She said all three components could then report to a resolver. SRI followed IDES in 1993 with the Next-generation Intrusion Detection Expert System (NIDES).
The Multics intrusion detection and alerting system (MIDAS), an expert system using P-BEST and Lisp, was developed in 1988 based on the work of Denning and Neumann. Haystack was also developed in that year using statistics to reduce audit trails.
In 1986 the National Security Agency started an IDS research transfer program under Rebecca Bace. Bace later published the seminal text on the subject, Intrusion Detection, in 2000.
Wisdom & Sense (W&S) was a statistics-based anomaly detector developed in 1989 at the Los Alamos National Laboratory. W&S created rules based on statistical analysis, and then used those rules for anomaly detection.
In 1990, the Time-based Inductive Machine (TIM) did anomaly detection using inductive learning of sequential user patterns in Common Lisp on a VAX 3500 computer. The Network Security Monitor (NSM) performed masking on access matrices for anomaly detection on a Sun-3/50 workstation. The Information Security Officer's Assistant (ISOA) was a 1990 prototype that considered a variety of strategies including statistics, a profile checker, and an expert system. ComputerWatch at AT&T Bell Labs used statistics and rules for audit data reduction and intrusion detection.
Then, in 1991, researchers at the University of California, Davis created a prototype Distributed Intrusion Detection System (DIDS), which was also an expert system. The Network Anomaly Detection and Intrusion Reporter (NADIR), also in 1991, was a prototype IDS developed at the Los Alamos National Laboratory's Integrated Computing Network (ICN), and was heavily influenced by the work of Denning and Lunt. NADIR used a statistics-based anomaly detector and an expert system.
The Lawrence Berkeley National Laboratory announced Bro in 1998, which used its own rule language for packet analysis from libpcap data. Network Flight Recorder (NFR) in 1999 also used libpcap.
APE was developed as a packet sniffer, also using libpcap, in November 1998, and was renamed Snort one month later. Snort has since become the world's most widely used IDS/IPS system, with over 300,000 active users. It can monitor both local systems and remote capture points using the TZSP protocol.
The Audit Data Analysis and Mining (ADAM) IDS in 2001 used tcpdump to build profiles of rules for classifications. In 2003, Yongguang Zhang and Wenke Lee argue for the importance of IDS in networks with mobile nodes.
In 2015, Viegas and his colleagues proposed an anomaly-based intrusion detection engine aimed at System-on-Chip (SoC) platforms, for instance for applications in the Internet of Things (IoT). The proposal applies machine learning for anomaly detection, providing energy efficiency to a Decision Tree, Naive Bayes, and k-Nearest Neighbors classifier implementation in an Atom CPU and its hardware-friendly implementation in an FPGA. In the literature, this was the first work to implement each classifier equivalently in software and hardware and to measure the energy consumption of both. Additionally, it was the first time the energy consumption of extracting each feature used to make the network packet classification, implemented in software and hardware, was measured.
== See also ==
Application protocol-based intrusion detection system (APIDS)
Artificial immune system
Bypass switch
Denial-of-service attack
DNS analytics
Extrusion detection
Intrusion Detection Message Exchange Format
Protocol-based intrusion detection system (PIDS)
Real-time adaptive security
Security management
ShieldsUp
Software-defined protection
== References ==
This article incorporates public domain material from Karen Scarfone, Peter Mell. Guide to Intrusion Detection and Prevention Systems, SP800-94 (PDF). National Institute of Standards and Technology. Retrieved 1 January 2010.
== Further reading ==
Bace, Rebecca Gurley (2000). Intrusion Detection. Indianapolis, IN: Macmillan Technical. ISBN 978-1578701858.
Bezroukov, Nikolai (11 December 2008). "Architectural Issues of Intrusion Detection Infrastructure in Large Enterprises (Revision 0.82)". Softpanorama. Retrieved 30 July 2010.
P.M. Mafra and J.S. Fraga and A.O. Santin (2014). "Algorithms for a distributed IDS in MANETs". Journal of Computer and System Sciences. 80 (3): 554–570. doi:10.1016/j.jcss.2013.06.011.
Hansen, James V.; Benjamin Lowry, Paul; Meservy, Rayman; McDonald, Dan (2007). "Genetic programming for prevention of cyberterrorism through dynamic and evolving intrusion detection". Decision Support Systems. 43 (4): 1362–1374. doi:10.1016/j.dss.2006.04.004. SSRN 877981.
Scarfone, Karen; Mell, Peter (February 2007). "Guide to Intrusion Detection and Prevention Systems (IDPS)" (PDF). Computer Security Resource Center (800–94). Archived from the original (PDF) on 1 June 2010. Retrieved 1 January 2010.
Singh, Abhishek. "Evasions In Intrusion Prevention Detection Systems". Virus Bulletin. Retrieved 1 April 2010.
Dubey, Abhinav. "Implementation of Network Intrusion Detection System using Deep Learning". Medium. Retrieved 17 April 2021.
Al_Ibaisi, T., Abu-Dalhoum, A. E.-L., Al-Rawi, M., Alfonseca, M., & Ortega, A. (n.d.). Network Intrusion Detection Using Genetic Algorithm to find Best DNA Signature. http://www.wseas.us/e-library/transactions/systems/2008/27-535.pdf
Ibaisi, T. A., Kuhn, S., Kaiiali, M., & Kazim, M. (2023). Network Intrusion Detection Based on Amino Acid Sequence Structure Using Machine Learning. Electronics, 12(20), 4294. https://doi.org/10.3390/electronics12204294
== External links ==
Common vulnerabilities and exposures (CVE) by product
NIST SP 800-83, Guide to Malware Incident Prevention and Handling
NIST SP 800-94, Guide to Intrusion Detection and Prevention Systems (IDPS) | Wikipedia/Network_intrusion_detection_system |
In statistics and in particular in regression analysis, a design matrix, also known as model matrix or regressor matrix and often denoted by X, is a matrix of values of explanatory variables of a set of objects. Each row represents an individual object, with the successive columns corresponding to the variables and their specific values for that object. The design matrix is used in certain statistical models, e.g., the general linear model. It can contain indicator variables (ones and zeros) that indicate group membership in an ANOVA, or it can contain values of continuous variables.
The design matrix contains data on the independent variables (also called explanatory variables) in a statistical model that is intended to explain observed data on a response variable (often called a dependent variable). The theory relating to such models uses the design matrix as input to some linear algebra: see for example linear regression. A notable feature of the concept of a design matrix is that it is able to represent a number of different experimental designs and statistical models, e.g., ANOVA, ANCOVA, and linear regression.
== Definition ==
The design matrix is defined to be a matrix X such that X_ij (the jth column of the ith row of X) represents the value of the jth variable associated with the ith object.
A regression model may be represented via matrix multiplication as
{\displaystyle y=X\beta +e,}
where X is the design matrix, β is a vector of the model's coefficients (one for each variable), e is a vector of random errors with mean zero, and y is the vector of observed responses for each object.
== Size ==
The design matrix has dimension n-by-p, where n is the number of samples observed, and p is the number of variables (features) measured in all samples.
In this representation different rows typically represent different repetitions of an experiment, while columns represent different types of data (say, the results from particular probes). For example, suppose an experiment is run where 10 people are pulled off the street and asked 4 questions. The data matrix M would be a 10×4 matrix (meaning 10 rows and 4 columns). The datum in row i and column j of this matrix would be the answer of the i th person to the j th question.
== Examples ==
=== Arithmetic mean ===
The design matrix for an arithmetic mean is a column vector of ones.
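A minimal numerical sketch (made-up data, using NumPy): least squares against this one-column design matrix of ones yields exactly the arithmetic mean.

```python
import numpy as np

y = np.array([2.0, 4.0, 6.0, 8.0])
X = np.ones((4, 1))                      # design matrix: a column of ones
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[0] is the arithmetic mean of y, here 5.0
```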
=== Simple linear regression ===
This section gives an example of simple linear regression—that is, regression with only a single explanatory variable—with seven observations.
The seven data points are {yi, xi}, for i = 1, 2, …, 7. The simple linear regression model is
{\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\varepsilon _{i},\,}
where β₀ is the y-intercept and β₁ is the slope of the regression line. This model can be represented in matrix form as
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&x_{1}\\1&x_{2}\\1&x_{3}\\1&x_{4}\\1&x_{5}\\1&x_{6}\\1&x_{7}\end{bmatrix}}{\begin{bmatrix}\beta _{0}\\\beta _{1}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}
where the first column of 1s in the design matrix allows estimation of the y-intercept while the second column contains the x-values associated with the corresponding y-values. The matrix whose columns are 1's and x's in this example is the design matrix.
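The same design matrix can be built and used numerically. The following sketch (hypothetical data values; NumPy's ordinary least squares) mirrors the seven-observation example:

```python
import numpy as np

# Hypothetical seven observations {y_i, x_i}
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8, 12.1, 14.0])

# Design matrix: a column of ones (for the intercept beta_0)
# next to the x-values (for the slope beta_1).
X = np.column_stack([np.ones_like(x), x])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # beta = (beta_0, beta_1)
```

With this made-up data the fitted slope comes out close to 2, as the y-values were chosen to lie near the line y = 2x.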
=== Multiple regression ===
This section contains an example of multiple regression with two covariates (explanatory variables): w and x.
Again suppose that the data consist of seven observations, and that for each observed value to be predicted (y_i), values w_i and x_i of the two covariates are also observed. The model to be considered is
{\displaystyle y_{i}=\beta _{0}+\beta _{1}w_{i}+\beta _{2}x_{i}+\varepsilon _{i}}
This model can be written in matrix terms as
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&w_{1}&x_{1}\\1&w_{2}&x_{2}\\1&w_{3}&x_{3}\\1&w_{4}&x_{4}\\1&w_{5}&x_{5}\\1&w_{6}&x_{6}\\1&w_{7}&x_{7}\end{bmatrix}}{\begin{bmatrix}\beta _{0}\\\beta _{1}\\\beta _{2}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}
Here the 7×3 matrix on the right side is the design matrix.
=== One-way ANOVA (cell means model) ===
This section contains an example with a one-way analysis of variance (ANOVA) with three groups and seven observations. The given data set has the first three observations belonging to the first group, the following two observations belonging to the second group and the final two observations belonging to the third group.
If the model to be fit is just the mean of each group, then the model is
{\displaystyle y_{ij}=\mu _{i}+\varepsilon _{ij}}
which can be written
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\\0&1&0\\0&1&0\\0&0&1\\0&0&1\end{bmatrix}}{\begin{bmatrix}\mu _{1}\\\mu _{2}\\\mu _{3}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}
In this model μ_i represents the mean of the ith group.
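The indicator design matrix above can be built directly from group labels. In this sketch (with made-up response values), least squares on the cell-means design matrix recovers each group's sample mean:

```python
import numpy as np

# Group membership of the seven observations: 3, 2 and 2 per group.
group = np.array([1, 1, 1, 2, 2, 3, 3])

# Cell-means design matrix: one 0/1 indicator column per group.
X = (group[:, None] == np.array([1, 2, 3])).astype(float)

y = np.array([3.0, 4.0, 5.0, 10.0, 12.0, 20.0, 22.0])  # made-up data
mu_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
# mu_hat equals the three group sample means: [4.0, 11.0, 21.0]
```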
=== One-way ANOVA (offset from reference group) ===
The ANOVA model could be equivalently written as each group parameter τ_i being an offset from some overall reference. Typically this reference point is taken to be one of the groups under consideration. This makes sense in the context of comparing multiple treatment groups to a control group, where the control group is considered the "reference". In this example, group 1 was chosen to be the reference group. As such the model to be fit is
{\displaystyle y_{ij}=\mu +\tau _{i}+\varepsilon _{ij}}
with the constraint that τ₁ is zero.
{\displaystyle {\begin{bmatrix}y_{1}\\y_{2}\\y_{3}\\y_{4}\\y_{5}\\y_{6}\\y_{7}\end{bmatrix}}={\begin{bmatrix}1&0&0\\1&0&0\\1&0&0\\1&1&0\\1&1&0\\1&0&1\\1&0&1\end{bmatrix}}{\begin{bmatrix}\mu \\\tau _{2}\\\tau _{3}\end{bmatrix}}+{\begin{bmatrix}\varepsilon _{1}\\\varepsilon _{2}\\\varepsilon _{3}\\\varepsilon _{4}\\\varepsilon _{5}\\\varepsilon _{6}\\\varepsilon _{7}\end{bmatrix}}}
In this model μ is the mean of the reference group and τ_i is the difference from group i to the reference group. τ₁ is not included in the matrix because its difference from the reference group (itself) is necessarily zero.
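Using the same made-up data as a cell-means fit would use, the reference-group coding can be sketched as follows; the intercept estimates the reference-group mean and the remaining coefficients estimate the offsets τ₂ and τ₃:

```python
import numpy as np

group = np.array([1, 1, 1, 2, 2, 3, 3])
y = np.array([3.0, 4.0, 5.0, 10.0, 12.0, 20.0, 22.0])  # made-up data

# Reference-group design matrix: an all-ones intercept column plus
# indicator columns for groups 2 and 3 only (group 1 is the reference).
X = np.column_stack([np.ones(7),
                     (group == 2).astype(float),
                     (group == 3).astype(float)])

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
# coef = [mu, tau_2, tau_3] = [4.0, 7.0, 17.0], so the group 2 and 3
# means are mu + tau_2 = 11.0 and mu + tau_3 = 21.0 -- the same fitted
# means as the cell-means parameterization.
```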
== See also ==
Moment matrix
Projection matrix
Jacobian matrix and determinant
Scatter matrix
Gram matrix
Vandermonde matrix
== References ==
== Further reading ==
Verbeek, Albert (1984). "The Geometry of Model Selection in Regression". In Dijkstra, Theo K. (ed.). Misspecification Analysis. New York: Springer. pp. 20–36. ISBN 0-387-13893-5. | Wikipedia/Design_matrix |
In statistics, a generalized linear mixed model (GLMM) is an extension to the generalized linear model (GLM) in which the linear predictor contains random effects in addition to the usual fixed effects. They also inherit from generalized linear models the idea of extending linear mixed models to non-normal data.
Generalized linear mixed models provide a broad range of models for the analysis of grouped data, since the differences between groups can be modelled as a random effect. These models are useful in the analysis of many kinds of data, including longitudinal data.
== Model ==
Generalized linear mixed models are generally defined such that, conditioned on the random effects u, the dependent variable y is distributed according to the exponential family with its expectation related to the linear predictor Xβ + Zu via a link function g:
{\displaystyle g(E[y\vert u])=X\beta +Zu}.
Here X and β are the fixed-effects design matrix and the fixed effects, respectively; Z and u are the random-effects design matrix and the random effects, respectively. To understand this very brief definition you will first need to understand the definition of a generalized linear model and of a mixed model.
Generalized linear mixed models are a special case of hierarchical generalized linear models in which the random effects are normally distributed.
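The definition can be made concrete by simulating from it. This sketch uses hypothetical parameter values and a Poisson response with a log link and one random intercept per group, building each piece of g(E[y|u]) = Xβ + Zu:

```python
import numpy as np

rng = np.random.default_rng(0)

n_groups, n_per = 10, 5
n = n_groups * n_per
beta = np.array([0.5, 1.0])      # fixed effects (hypothetical values)
sigma_u = 0.3                    # random-intercept standard deviation

# Fixed-effects design matrix X: intercept plus one covariate.
X = np.column_stack([np.ones(n), rng.normal(size=n)])

# Random-effects design matrix Z: group-membership indicators.
group = np.repeat(np.arange(n_groups), n_per)
Z = (group[:, None] == np.arange(n_groups)).astype(float)
u = rng.normal(0.0, sigma_u, size=n_groups)   # random effects

eta = X @ beta + Z @ u           # linear predictor X beta + Z u
mu = np.exp(eta)                 # inverse of the log link g = log
y = rng.poisson(mu)              # conditional Poisson response
```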
The complete likelihood
{\displaystyle p(y)=\int p(y\vert u)\,p(u)\,du}
has no general closed form, and integrating over the random effects is usually extremely computationally intensive. In addition to numerically approximating this integral (e.g. via Gauss–Hermite quadrature), methods motivated by Laplace approximation have been proposed. For example, the penalized quasi-likelihood method, which essentially involves repeatedly fitting (i.e. doubly iterative) a weighted normal mixed model with a working variate, is implemented by various commercial and open-source statistical programs.
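A minimal sketch of the Gauss–Hermite approach mentioned above, for a single cluster of a random-intercept logistic GLMM. The function name and data are illustrative, not from any particular package; real implementations use adaptive quadrature and work on the log scale for stability.

```python
import numpy as np

def cluster_marginal_likelihood(y, eta_fixed, sigma_u, n_nodes=20):
    """Marginal likelihood of one cluster in a random-intercept
    logistic GLMM, integrating out the random effect u ~ N(0, sigma_u^2)
    with Gauss-Hermite quadrature."""
    # Nodes/weights approximate integrals of f(t) * exp(-t^2).
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    total = 0.0
    for t, w in zip(nodes, weights):
        u = np.sqrt(2.0) * sigma_u * t                   # change of variables
        p = 1.0 / (1.0 + np.exp(-(eta_fixed + u)))       # inverse logit link
        total += w * np.prod(p**y * (1.0 - p)**(1 - y))  # Bernoulli terms
    return total / np.sqrt(np.pi)
```

As a sanity check, setting sigma_u = 0 removes the random effect, and the result reduces to the ordinary logistic likelihood of the cluster.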
== Fitting a model ==
Fitting generalized linear mixed models via maximum likelihood (as via the Akaike information criterion (AIC)) involves integrating over the random effects. In general, those integrals cannot be expressed in analytical form. Various approximate methods have been developed, but none has good properties for all possible models and data sets (e.g. ungrouped binary data are particularly problematic). For this reason, methods involving numerical quadrature or Markov chain Monte Carlo have increased in use, as increasing computing power and advances in methods have made them more practical.
The Akaike information criterion is a common criterion for model selection. Estimates of the Akaike information criterion for generalized linear mixed models based on certain exponential family distributions have recently been obtained.
== Software ==
Several contributed packages in R provide functionality for generalized linear mixed models, including lme4 and glmm.
Generalized linear mixed models can be fitted using SAS and SPSS.
MATLAB also provides a fitglme function to fit generalized linear mixed models.
The Python Statsmodels package supports binomial and poisson implementations.
The Julia package MixedModels.jl provides a function called GeneralizedLinearMixedModel that fits a generalized linear mixed model to provided data.
The DHARMa package in R provides residual diagnostics for hierarchical (multi-level/mixed) regression models.
== See also ==
Generalized estimating equation
Hierarchical generalized linear model
== References == | Wikipedia/Generalized_linear_mixed_model |
Control charts are graphical plots used in production control to determine whether quality and manufacturing processes are being controlled under stable conditions. (ISO 7870-1)
The hourly status is arranged on the graph, and the occurrence of abnormalities is judged based on the presence of data that differs from the conventional trend or deviates from the control limit line.
Control charts are classified into the Shewhart individuals control chart (ISO 7870-2) and the cumulative sum (CUSUM) control chart (ISO 7870-4).
Control charts, also known as Shewhart charts (after Walter A. Shewhart) or process-behavior charts, are a statistical process control tool used to determine if a manufacturing or business process is in a state of control. It is more appropriate to say that the control charts are the graphical device for statistical process monitoring (SPM). Traditional control charts are mostly designed to monitor process parameters when the underlying form of the process distributions is known. However, more advanced techniques are available in the 21st century whereby incoming data streams can be monitored even without any knowledge of the underlying process distributions. Distribution-free control charts are becoming increasingly popular.
== Overview ==
If analysis of the control chart indicates that the process is currently under control (i.e., is stable, with variation only coming from sources common to the process), then no corrections or changes to process control parameters are needed or desired. In addition, data from the process can be used to predict the future performance of the process. If the chart indicates that the monitored process is not in control, analysis of the chart can help determine the sources of variation, as this will result in degraded process performance. A process that is stable but operating outside desired (specification) limits (e.g., scrap rates may be in statistical control but above desired limits) needs to be improved through a deliberate effort to understand the causes of current performance and fundamentally improve the process.
The control chart is one of the seven basic tools of quality control. Typically control charts are used for time-series data, also known as continuous data or variable data. They can also be used for data that have logical comparability (i.e. where one wants to compare samples that were all taken at the same time, or the performance of different individuals); however, the type of chart used for this requires consideration.
== History ==
The control chart was invented by Walter A. Shewhart working for Bell Labs in the 1920s. The company's engineers had been seeking to improve the reliability of their telephony transmission systems. Because amplifiers and other equipment had to be buried underground, there was a stronger business need to reduce the frequency of failures and repairs. By 1920, the engineers had already realized the importance of reducing variation in a manufacturing process. Moreover, they had realized that continual process-adjustment in reaction to non-conformance actually increased variation and degraded quality. Shewhart framed the problem in terms of common- and special-causes of variation and, on May 16, 1924, wrote an internal memo introducing the control chart as a tool for distinguishing between the two. Shewhart's boss, George Edwards, recalled: "Dr. Shewhart prepared a little memorandum only about a page in length. About a third of that page was given over to a simple diagram which we would all recognize today as a schematic control chart. That diagram, and the short text which preceded and followed it set forth all of the essential principles and considerations which are involved in what we know today as process quality control." Shewhart stressed that bringing a production process into a state of statistical control, where there is only common-cause variation, and keeping it in control, is necessary to predict future output and to manage a process economically.
Shewhart created the basis for the control chart and the concept of a state of statistical control by carefully designed experiments. While Shewhart drew from pure mathematical statistical theories, he understood that data from physical processes typically produce a "normal distribution curve" (a Gaussian distribution, also commonly referred to as a "bell curve"). He discovered that observed variation in manufacturing data did not always behave the same way as data in nature (Brownian motion of particles). Shewhart concluded that while every process displays variation, some processes display controlled variation that is natural to the process, while others display uncontrolled variation that is not present in the process causal system at all times.
In 1924, or 1925, Shewhart's innovation came to the attention of W. Edwards Deming, then working at the Hawthorne facility. Deming later worked at the United States Department of Agriculture and became the mathematical advisor to the United States Census Bureau. Over the next half a century, Deming became the foremost champion and proponent of Shewhart's work. After the defeat of Japan at the close of World War II, Deming served as statistical consultant to the Supreme Commander for the Allied Powers. His ensuing involvement in Japanese life, and long career as an industrial consultant there, spread Shewhart's thinking, and the use of the control chart, widely in Japanese manufacturing industry throughout the 1950s and 1960s.
Bonnie Small worked in an Allentown plant in the 1950s, after the transistor was made, and used Shewhart's methods to improve plant performance in quality control, making up to 5,000 control charts. In 1958, the Western Electric Statistical Quality Control Handbook appeared, drawn from her writings, and led to use at AT&T.
== Chart details ==
A control chart consists of:
Points representing a statistic (e.g., a mean, range, proportion) of measurements of a quality characteristic in samples taken from the process at different times (i.e., the data)
The mean of this statistic is calculated using all the samples (e.g., the mean of the means, the mean of the ranges, or the mean of the proportions), or for a reference period against which change can be assessed. Similarly, a median can be used instead.
A centre line is drawn at the value of the mean or median of the statistic
The standard deviation (e.g., the square root of the variance of the mean) of the statistic is calculated using all the samples, or again for a reference period against which change can be assessed. In the case of XmR charts, strictly this is an approximation of the standard deviation, which does not make the assumption of homogeneity of the process over time that the standard deviation makes.
Upper and lower control limits (sometimes called "natural process limits") that indicate the threshold at which the process output is considered statistically 'unlikely' and are drawn typically at 3 standard deviations from the center line
The chart may have other optional features, including:
More restrictive upper and lower warning or control limits, drawn as separate lines, typically two standard deviations above and below the center line. This is regularly used when a process needs tighter controls on variability.
Division into zones, with the addition of rules governing frequencies of observations in each zone
Annotation with events of interest, as determined by the Quality Engineer in charge of the process' quality
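For an individuals (XmR) chart, the centre line and the three-sigma natural process limits described above can be sketched as follows. Sigma is estimated from the average moving range using the standard bias-correction constant d₂ = 1.128 for subgroups of size 2; the function name is illustrative.

```python
import numpy as np

def xmr_limits(x):
    """Centre line and three-sigma natural process limits for an
    individuals (XmR) chart. Sigma is estimated as the average
    moving range divided by d2 = 1.128 (the constant for n = 2)."""
    x = np.asarray(x, dtype=float)
    centre = x.mean()
    mr_bar = np.abs(np.diff(x)).mean()   # average moving range
    sigma_hat = mr_bar / 1.128
    return centre - 3 * sigma_hat, centre, centre + 3 * sigma_hat
```

Using the moving range rather than the overall sample standard deviation is what keeps the estimate tied to common-cause variation between adjacent points.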
Action on special causes
(n.b., there are several rule sets for detection of signal; this is just one set. The rule set should be clearly stated.)
Any point outside the control limits
A run of 7 points all above or all below the central line: stop the production, quarantine and 100% check the output, adjust the process, then check 5 consecutive samples and continue the process
A run of 7 points up or down: same instructions as above
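The two detection rules above (a point beyond the control limits, and a run of 7 consecutive points on one side of the centre line) can be sketched as a simple scan. This is illustrative only, not a complete rule set; the function name and parameters are hypothetical.

```python
def signals(points, lcl, ucl, centre, run_length=7):
    """Indices of points that trigger a signal: any point beyond the
    control limits, or membership in a run of `run_length` consecutive
    points on the same side of the centre line."""
    flagged = set(i for i, p in enumerate(points) if p < lcl or p > ucl)
    side = [1 if p > centre else -1 if p < centre else 0 for p in points]
    for i in range(len(points) - run_length + 1):
        window = side[i:i + run_length]
        if all(s == 1 for s in window) or all(s == -1 for s in window):
            flagged.update(range(i, i + run_length))
    return sorted(flagged)
```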
=== Chart usage ===
If the process is in control (and the process statistic is normal), 99.7300% of all the points will fall between the control limits. Any observations outside the limits, or systematic patterns within, suggest the introduction of a new (and likely unanticipated) source of variation, known as a special-cause variation. Since increased variation means increased quality costs, a control chart "signaling" the presence of a special-cause requires immediate investigation.
This makes the control limits very important decision aids. The control limits provide information about the process behavior and have no intrinsic relationship to any specification targets or engineering tolerance. In practice, the process mean (and hence the centre line) may not coincide with the specified value (or target) of the quality characteristic because the process design simply cannot deliver the process characteristic at the desired level.
Control charts omit specification limits or targets because of the tendency of those involved with the process (e.g., machine operators) to focus on performing to specification when in fact the least-cost course of action is to keep process variation as low as possible. Attempting to make a process whose natural centre is not the same as the target perform to target specification increases process variability and increases costs significantly and is the cause of much inefficiency in operations. Process capability studies do examine the relationship between the natural process limits (the control limits) and specifications, however.
The purpose of control charts is to allow simple detection of events that are indicative of an increase in process variability. This simple decision can be difficult where the process characteristic is continuously varying; the control chart provides statistically objective criteria of change. When change is detected and considered good its cause should be identified and possibly become the new way of working, where the change is bad then its cause should be identified and eliminated.
The purpose in adding warning limits or subdividing the control chart into zones is to provide early notification if something is amiss. Instead of immediately launching a process improvement effort to determine whether special causes are present, the Quality Engineer may temporarily increase the rate at which samples are taken from the process output until it is clear that the process is truly in control. Note that with three-sigma limits, common-cause variations result in signals less than once out of every twenty-two points for skewed processes and about once out of every three hundred seventy (1/370.4) points for normally distributed processes. The two-sigma warning levels will be reached about once for every twenty-two (1/21.98) plotted points in normally distributed data. (For example, the means of sufficiently large samples drawn from practically any underlying distribution whose variance exists are normally distributed, according to the Central Limit Theorem.)
=== Choice of limits ===
Shewhart set 3-sigma (3-standard deviation) limits on the following basis.
The coarse result of Chebyshev's inequality that, for any probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 1/k².
The finer result of the Vysochanskii–Petunin inequality, that for any unimodal probability distribution, the probability of an outcome greater than k standard deviations from the mean is at most 4/(9k²).
In the Normal distribution, a very common probability distribution, 99.7% of the observations occur within three standard deviations of the mean (see Normal distribution).
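The three bounds can be compared numerically at k = 3; the sketch below evaluates each tail probability (the Chebyshev and Vysochanskii–Petunin values are upper bounds, while the normal value is exact for that distribution).

```python
from math import erf, sqrt

k = 3.0
chebyshev = 1 / k**2                    # any distribution: at most 1/9
vysochanskii_petunin = 4 / (9 * k**2)   # any unimodal distribution: at most 4/81
normal_tail = 1 - erf(k / sqrt(2))      # normal distribution: about 0.0027
```

So the 3-sigma rule guarantees at worst an 11.1% false-alarm rate per point with no distributional assumptions, 4.9% under unimodality, and about 0.27% under normality.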
Shewhart summarized the conclusions by saying:
... the fact that the criterion which we happen to use has a fine ancestry in highbrow statistical theorems does not justify its use. Such justification must come from empirical evidence that it works. As the practical engineer might say, the proof of the pudding is in the eating.
Although he initially experimented with limits based on probability distributions, Shewhart ultimately wrote:
Some of the earliest attempts to characterize a state of statistical control were inspired by the belief that there existed a special form of frequency function f and it was early argued that the normal law characterized such a state. When the normal law was found to be inadequate, then generalized functional forms were tried. Today, however, all hopes of finding a unique functional form f are blasted.
The control chart is intended as a heuristic. Deming insisted that it is not a hypothesis test and is not motivated by the Neyman–Pearson lemma. He contended that the disjoint nature of population and sampling frame in most industrial situations compromised the use of conventional statistical techniques. Deming's intention was to seek insights into the cause system of a process ...under a wide range of unknowable circumstances, future and past.... He claimed that, under such conditions, 3-sigma limits provided ... a rational and economic guide to minimum economic loss... from the two errors:
Ascribe a variation or a mistake to a special cause (assignable cause) when in fact the cause belongs to the system (common cause). (Also known as a Type I error or False Positive)
Ascribe a variation or a mistake to the system (common causes) when in fact the cause was a special cause (assignable cause). (Also known as a Type II error or False Negative)
=== Calculation of standard deviation ===
As for the calculation of control limits, the standard deviation (error) required is that of the common-cause variation in the process. Hence, the usual estimator, in terms of sample variance, is not used as this estimates the total squared-error loss from both common- and special-causes of variation.
An alternative method is to use the relationship between the range of a sample and its standard deviation derived by Leonard H. C. Tippett, as an estimator which tends to be less influenced by the extreme observations which typify special-causes.
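A minimal sketch of this range-based estimate, using the standard d2 constants (E[R] = d2·σ) for small subgroup sizes; the subgroups passed in are assumed to be lists of equal size:

```python
# d2 constants relating the expected subgroup range to sigma, E[R] = d2 * sigma.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326}

def sigma_from_ranges(subgroups):
    """Estimate the common-cause sigma from the average subgroup range."""
    n = len(subgroups[0])
    rbar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
    return rbar / D2[n]

subgroups = [[9.9, 10.2, 10.0, 10.1, 9.8], [10.0, 10.3, 9.9, 10.1, 10.2]]
print(round(sigma_from_ranges(subgroups), 4))
```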
== Rules for detecting signals ==
The most common sets are:
The Western Electric rules
The Wheeler rules (equivalent to the Western Electric zone tests)
The Nelson rules
There has been particular controversy as to how long a run of observations, all on the same side of the centre line, should count as a signal, with 6, 7, 8 and 9 all being advocated by various writers.
The most important principle for choosing a set of rules is that the choice be made before the data is inspected. Choosing rules once the data have been seen tends to increase the Type I error rate owing to testing effects suggested by the data.
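Two representative detection rules of the kind listed above (a point beyond the 3-sigma limits, and a run of consecutive points on one side of the centre line) can be sketched as follows; the run length of 8 is one of the several choices mentioned, not a prescribed value:

```python
def rule_beyond_3sigma(points, center, sigma):
    """Signal any point outside the 3-sigma limits."""
    return [i for i, x in enumerate(points) if abs(x - center) > 3 * sigma]

def rule_run_on_one_side(points, center, run=8):
    """Signal whenever `run` consecutive points fall on the same side of the centre line."""
    signals, streak, side = [], 0, 0
    for i, x in enumerate(points):
        s = 1 if x > center else -1 if x < center else 0
        streak = streak + 1 if (s == side and s != 0) else (1 if s != 0 else 0)
        side = s
        if streak >= run:
            signals.append(i)
    return signals
```

Consistent with the principle above, the rule set (including the run length) should be fixed before the data are inspected.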
== Alternative bases ==
In 1935, the British Standards Institution, under the influence of Egon Pearson and against Shewhart's spirit, adopted control charts, replacing 3-sigma limits with limits based on percentiles of the normal distribution. This move continues to be represented by John Oakland and others but has been widely deprecated by writers in the Shewhart–Deming tradition.
== Performance of control charts ==
When a point falls outside the limits established for a given control chart, those responsible for the underlying process are expected to determine whether a special cause has occurred. If one has, it is appropriate to determine if the results with the special cause are better than or worse than results from common causes alone. If worse, then that cause should be eliminated if possible. If better, it may be appropriate to intentionally retain the special cause within the system producing the results.
Even when a process is in control (that is, no special causes are present in the system), there is approximately a 0.27% probability of a point exceeding 3-sigma control limits. So, even an in-control process plotted on a properly constructed control chart will eventually signal the possible presence of a special cause, even though one may not have actually occurred. For a Shewhart control chart using 3-sigma limits, this false alarm occurs on average once every 1/0.0027 or 370.4 observations. Therefore, the in-control average run length (or in-control ARL) of a Shewhart chart is 370.4.
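The 370.4 figure follows directly from the per-point false-alarm probability under a normal process:

```python
import math

def in_control_arl(k=3.0):
    """In-control ARL = 1 / P(point outside k-sigma limits), normal process."""
    alpha = math.erfc(k / math.sqrt(2))  # two-sided tail probability, ~0.0027 for k = 3
    return 1.0 / alpha

print(round(in_control_arl(), 1))  # 370.4
```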
Meanwhile, if a special cause does occur, it may not be of sufficient magnitude for the chart to produce an immediate alarm condition. If a special cause occurs, one can describe that cause by measuring the change in the mean and/or variance of the process in question. When those changes are quantified, it is possible to determine the out-of-control ARL for the chart.
It turns out that Shewhart charts are quite good at detecting large changes in the process mean or variance, as their out-of-control ARLs are fairly short in these cases. However, for smaller changes (such as a 1- or 2-sigma change in the mean), the Shewhart chart does not detect these changes efficiently. Other types of control charts have been developed, such as the EWMA chart, the CUSUM chart and the real-time contrasts chart, which detect smaller changes more efficiently by making use of information from observations collected prior to the most recent data point.
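For a sustained shift in the mean of `delta` standard deviations, the out-of-control ARL of a Shewhart chart follows from the per-point detection probability; a sketch assuming a normally distributed process:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-z / math.sqrt(2))

def shewhart_ooc_arl(delta, k=3.0):
    """ARL after a sustained mean shift of `delta` sigma, k-sigma limits."""
    p = norm_cdf(-k - delta) + (1.0 - norm_cdf(k - delta))  # P(point outside limits)
    return 1.0 / p

for d in (0.5, 1.0, 2.0, 3.0):
    print(d, round(shewhart_ooc_arl(d), 1))
```

The output illustrates the point above: a 3-sigma shift is caught within a couple of points on average, while a 0.5-sigma shift takes well over a hundred, which is why EWMA and CUSUM charts exist.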
Many control charts work best for numeric data under Gaussian assumptions. The real-time contrasts chart was proposed to monitor processes with complex characteristics, e.g. high-dimensional data, mixed numerical and categorical variables, missing values, non-Gaussian distributions, and non-linear relationships.
== Criticisms ==
Several authors have criticised the control chart on the grounds that it violates the likelihood principle. However, the principle is itself controversial and supporters of control charts further argue that, in general, it is impossible to specify a likelihood function for a process not in statistical control, especially where knowledge about the cause system of the process is weak.
Some authors have criticised the use of average run lengths (ARLs) for comparing control chart performance, because the run length usually follows a geometric distribution, whose high variability makes the average difficult to interpret on its own.
Some authors have criticized the focus of most control charts on numeric data, since modern process data can be much more complex: non-Gaussian, mixed numerical and categorical, or missing-valued.
== Types of charts ==
†Some practitioners also recommend the use of Individuals charts for attribute data, particularly when the assumptions of either binomially distributed data (p- and np-charts) or Poisson-distributed data (u- and c-charts) are violated. Two primary justifications are given for this practice. First, normality is not necessary for statistical control, so the Individuals chart may be used with non-normal data. Second, attribute charts derive the measure of dispersion directly from the mean proportion (by assuming a probability distribution), while Individuals charts derive the measure of dispersion from the data, independent of the mean, making Individuals charts more robust than attributes charts to violations of the assumptions about the distribution of the underlying population. It is sometimes noted that the substitution of the Individuals chart works best for large counts, when the binomial and Poisson distributions approximate a normal distribution, i.e., when the number of trials n > 1000 for p- and np-charts or λ > 500 for u- and c-charts.
Critics of this approach argue that control charts should not be used when their underlying assumptions are violated, such as when process data is neither normally distributed nor binomially (or Poisson) distributed. Such processes are not in control and should be improved before the application of control charts. Additionally, application of the charts in the presence of such deviations increases the type I and type II error rates of the control charts, and may make the chart of little practical use.
== See also ==
Analytic and enumerative statistical studies
Common cause and special cause
Process capability
Seven Basic Tools of Quality
Six Sigma
Statistical process control
Total quality management
== References ==
== Bibliography ==
Deming, W. E. (1975). "On probability as a basis for action". The American Statistician. 29 (4): 146–152. CiteSeerX 10.1.1.470.9636. doi:10.2307/2683482. JSTOR 2683482.
Deming, W. E. (1982). Out of the Crisis: Quality, Productivity and Competitive Position. ISBN 978-0-521-30553-2.
Deng, H.; Runger, G.; Tuv, Eugene (2012). "System monitoring with real-time contrasts". Journal of Quality Technology. 44 (1): 9–27. doi:10.1080/00224065.2012.11917878. S2CID 119835984.
Mandel, B. J. (1969). "The Regression Control Chart". Journal of Quality Technology. 1 (1): 1–9. doi:10.1080/00224065.1969.11980341.
Oakland, J. (2002). Statistical Process Control. ISBN 978-0-7506-5766-2.
Shewhart, W. A. (1931). Economic Control of Quality of Manufactured Product. American Society for Quality Control. ISBN 978-0-87389-076-2. {{cite book}}: ISBN / Date incompatibility (help)
Shewhart, W. A. (1939). Statistical Method from the Viewpoint of Quality Control. Courier Corporation. ISBN 978-0-486-65232-0. {{cite book}}: ISBN / Date incompatibility (help)
Wheeler, D. J. (2000). Normality and the Process-Behaviour Chart. SPC Press. ISBN 978-0-945320-56-2.
Wheeler, D. J.; Chambers, D. S. (1992). Understanding Statistical Process Control. SPC Press. ISBN 978-0-945320-13-5.
Wheeler, Donald J. (1999). Understanding Variation: The Key to Managing Chaos (2nd ed.). SPC Press. ISBN 978-0-945320-53-1.
== External links ==
NIST/SEMATECH e-Handbook of Statistical Methods
Monitoring and Control with Control Charts | Wikipedia/Control_charts |
The Capability Maturity Model (CMM) is a development model created in 1986 after a study of data collected from organizations that contracted with the U.S. Department of Defense, who funded the research. The term "maturity" relates to the degree of formality and optimization of processes, from ad hoc practices, to formally defined steps, to managed result metrics, to active optimization of the processes.
The model's aim is to improve existing software development processes, but it can also be applied to other processes.
In 2006, the Software Engineering Institute at Carnegie Mellon University developed the Capability Maturity Model Integration, which has largely superseded the CMM and addresses some of its drawbacks.
== Overview ==
The Capability Maturity Model was originally developed as a tool for objectively assessing the ability of government contractors' processes to implement a contracted software project. The model is based on the process maturity framework first described in IEEE Software and, later, in the 1989 book Managing the Software Process by Watts Humphrey. It was later published as an article in 1993 and as a book by the same authors in 1994.
Though the model comes from the field of software development, it is also used as a model to aid in business processes generally, and has also been used extensively worldwide in government offices, commerce, and industry.
== History ==
=== Prior need for software processes ===
In the 1980s, the use of computers grew more widespread, more flexible and less costly. Organizations began to adopt computerized information systems, and the demand for software development grew significantly. Many processes for software development were in their infancy, with few standard or "best practice" approaches defined.
As a result, the growth was accompanied by growing pains: project failure was common, the field of computer science was still in its early years, and the ambitions for project scale and complexity exceeded the market capability to deliver adequate products within a planned budget. Individuals such as Edward Yourdon, Larry Constantine, Gerald Weinberg, Tom DeMarco, and David Parnas began to publish articles and books with research results in an attempt to professionalize the software-development processes.
In the 1980s, several US military projects involving software subcontractors ran over-budget and were completed far later than planned, if at all. In an effort to determine why this was occurring, the United States Air Force funded a study at the Software Engineering Institute (SEI).
=== Precursor ===
The first application of a staged maturity model to IT was not by CMU/SEI, but rather by Richard L. Nolan, who, in 1973 published the stages of growth model for IT organizations.
Watts Humphrey began developing his process maturity concepts during the later stages of his 27-year career at IBM.
=== Development at Software Engineering Institute ===
Active development of the model by the US Department of Defense Software Engineering Institute (SEI) began in 1986 when Humphrey joined the Software Engineering Institute located at Carnegie Mellon University in Pittsburgh, Pennsylvania after retiring from IBM. At the request of the U.S. Air Force he began formalizing his Process Maturity Framework to aid the U.S. Department of Defense in evaluating the capability of software contractors as part of awarding contracts.
The result of the Air Force study was a model for the military to use as an objective evaluation of software subcontractors' process capability maturity. Humphrey based this framework on the earlier Quality Management Maturity Grid developed by Philip B. Crosby in his book "Quality is Free". Humphrey's approach differed because of his unique insight that organizations mature their processes in stages based on solving process problems in a specific order. Humphrey based his approach on the staged evolution of a system of software development practices within an organization, rather than measuring the maturity of each separate development process independently. The CMM has thus been used by different organizations as a general and powerful tool for understanding and then improving general business process performance.
Watts Humphrey's Capability Maturity Model (CMM) was published in 1988 and as a book in 1989, in Managing the Software Process.
Organizations were originally assessed using a process maturity questionnaire and a Software Capability Evaluation method devised by Humphrey and his colleagues at the Software Engineering Institute.
The full representation of the Capability Maturity Model as a set of defined process areas and practices at each of the five maturity levels was initiated in 1991, with Version 1.1 being published in July 1993.
The CMM was published as a book in 1994 by the same authors Mark C. Paulk, Charles V. Weber, Bill Curtis, and Mary Beth Chrissis.
=== Capability Maturity Model Integration ===
The CMM model's application in software development has sometimes been problematic. Applying multiple models that are not integrated within and across an organization could be costly in training, appraisals, and improvement activities. The Capability Maturity Model Integration (CMMI) project was formed to sort out the problem of using multiple models for software development processes, thus the CMMI model has superseded the CMM model, though the CMM model continues to be a general theoretical process capability model used in the public domain.
In 2016, the responsibility for CMMI was transferred to the Information Systems Audit and Control Association (ISACA). ISACA subsequently released CMMI v2.0 in 2021. It was upgraded again to CMMI v3.0 in 2023. CMMI now places a greater emphasis on the process architecture which is typically realized as a process diagram. Copies of CMMI are available now only by subscription.
=== Adapted to other processes ===
The CMM was originally intended as a tool to evaluate the ability of government contractors to perform a contracted software project. Though it comes from the area of software development, it can be, has been, and continues to be widely applied as a general model of the maturity of processes (e.g., IT service management processes) in IS/IT (and other) organizations.
== Model topics ==
=== Maturity models ===
A maturity model can be viewed as a set of structured levels that describe how well the behaviors, practices and processes of an organization can reliably and sustainably produce required outcomes.
A maturity model can be used as a benchmark for comparison and as an aid to understanding - for example, for comparative assessment of different organizations where there is something in common that can be used as a basis for comparison. In the case of the CMM, for example, the basis for comparison would be the organizations' software development processes.
=== Structure ===
The model involves five aspects:
Maturity Levels: a 5-level process maturity continuum - where the uppermost (5th) level is a notional ideal state where processes would be systematically managed by a combination of process optimization and continuous process improvement.
Key Process Areas: a Key Process Area identifies a cluster of related activities that, when performed together, achieve a set of goals considered important.
Goals: the goals of a key process area summarize the states that must exist for that key process area to have been implemented in an effective and lasting way. The extent to which the goals have been accomplished is an indicator of how much capability the organization has established at that maturity level. The goals signify the scope, boundaries, and intent of each key process area.
Common Features: common features include practices that implement and institutionalize a key process area. There are five types of common features: commitment to perform, ability to perform, activities performed, measurement and analysis, and verifying implementation.
Key Practices: The key practices describe the elements of infrastructure and practice that contribute most effectively to the implementation and institutionalization of the area.
=== Levels ===
There are five levels defined along the continuum of the model and, according to the SEI: "Predictability, effectiveness, and control of an organization's software processes are believed to improve as the organization moves up these five levels. While not rigorous, the empirical evidence to date supports this belief".
Initial (chaotic, ad hoc, individual heroics) - the starting point for use of a new or undocumented repeat process.
Repeatable - the process is at least documented sufficiently such that repeating the same steps may be attempted.
Defined - the process is defined/confirmed as a standard business process
Capable - the process is quantitatively managed in accordance with agreed-upon metrics.
Efficient - process management includes deliberate process optimization/improvement.
Within each of these maturity levels are Key Process Areas which characterise that level, and for each such area there are five factors: goals, commitment, ability, measurement, and verification. These are not necessarily unique to the CMM; rather, they represent the stages that organizations must go through on the way to becoming mature.
The model provides a theoretical continuum along which process maturity can be developed incrementally from one level to the next. Skipping levels is not allowed/feasible.
Level 1 - Initial
It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. (Example - a surgeon performing a new operation a small number of times - the levels of negative outcome are not known).
Level 2 - Repeatable
It is characteristic of this level of maturity that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.
Level 3 - Defined
It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place, but they may not have been used systematically or repeatedly enough for users to become competent or for the process to be validated in a range of situations. This could be considered a developmental stage: with use in a wider range of conditions and development of user competence, the process can advance to the next level of maturity.
Level 4 - Managed (Capable)
It is characteristic of processes at this level that, using process metrics, effective achievement of the process objectives can be evidenced across a range of operational conditions. The suitability of the process in multiple environments has been tested and the process refined and adapted. Process users have experienced the process in multiple and varied conditions, and are able to demonstrate competence. The process maturity enables adaptions to particular projects without measurable losses of quality or deviations from specifications. Process Capability is established from this level. (Example - surgeon performing an operation hundreds of times with levels of negative outcome approaching zero).
Level 5 - Optimizing (Efficient)
It is a characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes/improvements. At maturity level 5, processes are concerned with addressing statistical common causes of process variation and changing the process (for example, to shift the mean of the process performance) to improve process performance. This would be done at the same time as maintaining the likelihood of achieving the established quantitative process-improvement objectives.
Between 2008 and 2019, about 12% of appraisals given were at maturity levels 4 and 5.
=== Critique ===
The model was originally intended to evaluate the ability of government contractors to perform a software project. It has been used for and may be suited to that purpose, but critics pointed out that process maturity according to the CMM was not necessarily mandatory for successful software development.
=== Software process framework ===
The software process framework documented is intended to guide those wishing to assess an organization's or project's consistency with the Key Process Areas. For each maturity level there are five checklist types:
== See also ==
Capability Immaturity Model
Capability Maturity Model Integration
People Capability Maturity Model
Testing Maturity Model
== References ==
== External links ==
CMMI Institute
Architecture Maturity Models at The Open Group
ISACA | Wikipedia/Capability_Maturity_Model |
5S (Five S) is a workplace organization method that uses a list of five Japanese words: seiri (整理), seiton (整頓), seisō (清掃), seiketsu (清潔), and shitsuke (躾). These have been translated as 'sort', 'set in order', 'shine', 'standardize', and 'sustain'. The list describes how to organize a work space for efficiency and effectiveness by identifying and sorting the items used, maintaining the area and items, and sustaining the new organizational system. The decision-making process usually comes from a dialogue about standardization, which builds understanding among employees of how they should do the work.
In some organisations, 5S has become 6S, the sixth element being safety (safe).
Other than a specific stand-alone methodology, 5S is frequently viewed as an element of a broader construct known as visual control, visual workplace, or visual factory. Under those (and similar) terminologies, Western companies were applying underlying concepts of 5S before publication, in English, of the formal 5S methodology. For example, a workplace-organization photo from Tennant Company (a Minneapolis-based manufacturer) quite similar to the one accompanying this article appeared in a manufacturing-management book in 1986.
== Origins ==
5S was developed in Japan and has been identified as one of the techniques that enabled just-in-time manufacturing.
Two major frameworks for understanding and applying 5S to business environments have arisen, one proposed by Takahashi and Osada, the other by Hiroyuki Hirano.
Hirano provided a structure to improve programs with a series of identifiable steps, each building on its predecessor.
Before this Japanese management framework, a similar "scientific management" was proposed by Alexey Gastev and the USSR Central Institute of Labour (CIT) in Moscow.
== Each S ==
There are five 5S phases. They can be translated to English as 'sort', 'set in order', 'shine', 'standardize', and 'sustain'. Other translations are possible.
=== Sort (seiri 整理) ===
Seiri is sorting through all items in a location and removing all unnecessary items from the location.
Goals:
Reduce time loss looking for an item by reducing the number of unnecessary items.
Reduce the chance of distraction by unnecessary items.
Simplify inspection.
Increase the amount of available, useful space.
Increase safety by eliminating obstacles.
Implementation:
Check all items in a location and evaluate whether or not their presence at the location is useful or necessary.
Remove unnecessary items as soon as possible. Place those that cannot be removed immediately in a 'red tag area' so that they are easy to remove later on.
Keep the working floor clear of materials except for those that are in use for production.
=== Set in order (seiton 整頓) ===
(Sometimes shown as Straighten)
Seiton is putting all necessary items in the optimal place for fulfilling their function in the workplace.
Goal:
Make the workflow smooth and easy.
Implementation:
Arrange work stations in such a way that all tooling/equipment is in close proximity, in an easy to reach spot and in a logical order adapted to the work performed. Place components according to their uses, with the frequently used components being nearest to the workplace.
Arrange all necessary items so that they can be easily selected for use. Make it easy to find and pick up necessary items.
Assign fixed locations for items. Use clear labels, marks or hints so that items are easy to return to the correct location and so that it is easy to spot missing items.
=== Shine (seiso 清掃) ===
Seiso is sweeping or cleaning and inspecting the workplace, tools and machinery on a regular basis.
Goals:
Improves the production process efficiency and safety, reduces waste, prevents errors and defects.
Keep the workplace safe and easy to work in.
Keep the workplace clean and pleasing to work in.
When in place, anyone unfamiliar with the environment must be able to detect any problems within 15 meters (50 ft) in 5 seconds.
Implementation:
Clean the workplace and equipment on a daily basis, or at another appropriate (high frequency) cleaning interval.
Inspect the workplace and equipment while cleaning.
=== Sustaining hygiene (seiketsu 清潔) ===
Seiketsu is to maintain hygiene and cleanliness; this is the literal translation. It is often rendered as "standardize" to fit the 5S acronym, but the original concept concerned cleanliness rather than standardization or uniformity.
Goal:
Establish procedures and schedules to ensure the cleanliness of workplace.
Implementation:
Develop a work structure that will support the new practices and make it part of the daily routine.
Ensure everyone knows their responsibilities of performing the sorting, organizing and cleaning.
Use photos and visual controls to help keep everything as it should be.
Review the status of 5S implementation regularly using audit checklists.
=== Sustain/self-discipline (shitsuke しつけ) ===
Shitsuke or sustain means maintaining the developed processes through the self-discipline of the workers. It also translates as "do without being told".
Goal:
Ensure that the 5S approach is followed.
Implementation:
Organize training sessions.
Perform regular audits using a 5S audit checklist to ensure that all defined standards are being implemented and followed.
Implement improvements whenever possible. Worker inputs can be very valuable for identifying improvements.
== 6S ==
The 6S methodology is an extension of the 5S methodology that incorporates safety as a sixth element. It positions safety at the forefront, stressing its critical role in operational settings and ensuring that safety considerations are integral to workplace organization from the outset. The 6S model promotes a comprehensive strategy in which safety and operational processes are interlinked and equally important, underscoring the importance of active hazard prevention and a robust safety culture in improving overall workplace efficiency and employee health. To implement the 6S method successfully, organizations require:
A deep understanding and prior experience with the 5S methodology.
A mechanism for hazard identification and reporting.
Industry-specific safety training for heightened safety awareness.
A commitment to conduct regular discussions with employees about the principles and practices of 5S/6S.
Endorsement and continuous support from senior management, including the allocation of necessary resources.
== 7S ==
The 7S methodology incorporates 6S safety, and adds a new element for oversight or spirit.
== Variety of applications ==
5S methodology has expanded from manufacturing and is now being applied to a wide variety of industries including health care, education, and government. Visual management and 5S can be particularly beneficial in health care because a frantic search for supplies to treat an in-trouble patient (a chronic problem in health care) can have dire consequences.
Although the origins of the 5S methodology are in manufacturing, it can also be applied to knowledge economy work, with information, software, or media in the place of physical product.
== In lean product and process development ==
The output of engineering and design in a lean enterprise is information. The theory behind using 5S here is: "Dirty, cluttered, or damaged surfaces attract the eye, which spends a fraction of a second trying to pull useful information from them every time we glance past. Old equipment hides the new equipment from the eye and forces people to ask which to use".
== See also ==
Japanese aesthetics
Just-in-time manufacturing
Kaikaku
Kaizen
Kanban
Lean manufacturing
Muda
Gogyo (traditional Japanese philosophy)
== References == | Wikipedia/5S_(methodology) |
Distribution-free (nonparametric) control charts are among the most important tools of statistical process monitoring and control. Implementing them does not require any knowledge of the underlying process distribution or its parameters. Their main advantage is in-control robustness: irrespective of the nature of the underlying process distribution, the properties of these charts remain the same when the process is operating smoothly without any assignable cause present.
Early research on nonparametric control charts dates to 1981, when P.K. Bhattacharya and D. Frierson introduced a nonparametric control chart for detecting small disorders. However, major growth of nonparametric control charting schemes has taken place only in recent years.
== Popular distribution-free control charts ==
There are distribution-free control charts for both Phase-I analysis and Phase-II monitoring.
One of the most notable distribution-free control charts for Phase-I analysis is the RS/P chart proposed by G. Capizzi and G. Masarotto. RS/P charts separately monitor the location and scale parameters of a univariate process using two separate charts. In 2019, Chenglong Li, Amitava Mukherjee and Qin Su proposed a single distribution-free control chart for Phase-I analysis using a multisample Lepage statistic.
Some popular Phase-II distribution-free control charts for univariate continuous processes includes:
Sign charts based on the sign statistic - used to monitor location parameter of a process
Wilcoxon rank-sum charts based on the Wilcoxon rank-sum test - used to monitor location parameter of a process
Control charts based on precedence or excedance statistic
Shewhart-Lepage chart based on the Lepage test - used to monitor both location and scale parameters of a process simultaneously in a single chart
Shewhart-Cucconi chart based on the Cucconi test - used to monitor both location and scale parameters of a process simultaneously in a single chart
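As an illustration of the first entry above, a minimal Phase-II sign chart can be sketched as follows: the plotted statistic is the count of subgroup observations above a known in-control median, and the signalling rule uses Binomial(n, 1/2) tail probabilities. The false-alarm rate `alpha` here is an illustrative choice, not a prescribed design:

```python
from math import comb

def sign_chart_signal(subgroup, median, alpha=0.01):
    """Signal if the sign statistic falls in either Binomial(n, 1/2) tail."""
    n = len(subgroup)
    t = sum(1 for x in subgroup if x > median)  # the sign statistic
    pmf = [comb(n, k) / 2**n for k in range(n + 1)]
    lower_tail = sum(pmf[:t + 1])  # P(T <= t) under the in-control median
    upper_tail = sum(pmf[t:])      # P(T >= t) under the in-control median
    return min(lower_tail, upper_tail) <= alpha / 2
```

With n = 10 and alpha = 0.01, only subgroups lying entirely above or entirely below the in-control median signal, since P(T = 0) = P(T = 10) ≈ 0.001. No distributional assumption beyond a known in-control median is needed, which is the sense in which the chart is distribution-free.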
== References == | Wikipedia/Distribution-free_control_chart |
Energy accounting is a system used to measure, analyze and report the energy consumption of different activities on a regular basis. This is done to improve energy efficiency and to monitor the environmental impact of energy consumption.
== Energy management ==
Energy accounting is used in energy management systems to measure and analyze energy consumption in order to improve energy efficiency within an organization. Organisations such as Intel Corporation use these systems to track energy usage.
Various energy transformations are possible. An energy balance can be used to track energy through a system, which makes it a useful tool for determining resource use and environmental impacts. How much energy is needed at each point in a system is measured, as well as the form of that energy. An accounting system keeps track of energy in, energy out, non-useful energy versus work done, and transformations within a system. Non-useful work is often what is responsible for environmental problems.
Newer systems attempt to build predictive models of consumption, often using machine learning. Such models analyze historical consumption data to forecast energy usage patterns, identify inefficiencies, and optimize energy distribution in real time, and they can improve in accuracy as more data accumulates. This shift towards predictive analytics enables more proactive energy management.
== Energy balance ==
Energy returned on energy invested (EROEI) is the ratio of energy delivered by an energy technology to the energy invested to set up the technology.
== See also ==
== References ==
== External links ==
Accounting: Facility Energy Use
Energy accounting in the context of environmental accounting Archived 2019-02-15 at the Wayback Machine
In physics, energy density is the quotient between the amount of energy stored in a given system or contained in a given region of space and the volume of the system or region considered. Often only the useful or extractable energy is measured. It is sometimes confused with stored energy per unit mass, which is called specific energy or gravimetric energy density.
There are different types of energy stored, corresponding to a particular type of reaction. In order of the typical magnitude of the energy stored, examples of reactions are: nuclear, chemical (including electrochemical), electrical, pressure, material deformation or in electromagnetic fields. Nuclear reactions take place in stars and nuclear power plants, both of which derive energy from the binding energy of nuclei. Chemical reactions are used by organisms to derive energy from food and by automobiles from the combustion of gasoline. Liquid hydrocarbons (fuels such as gasoline, diesel and kerosene) are today the densest way known to economically store and transport chemical energy at a large scale (1 kg of diesel fuel burns with the oxygen contained in ≈ 15 kg of air). Burning local biomass fuels supplies household energy needs (cooking fires, oil lamps, etc.) worldwide. Electrochemical reactions are used by devices such as laptop computers and mobile phones to release energy from batteries.
Energy per unit volume has the same physical units as pressure, and in many situations is synonymous. For example, the energy density of a magnetic field may be expressed as B²/2μ and behaves like a physical pressure. The energy required to compress a gas to a certain volume may be determined by multiplying the difference between the gas pressure and the external pressure by the change in volume. A pressure gradient describes the potential to perform work on the surroundings by converting internal energy to work until equilibrium is reached.
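The constant-pressure approximation described above can be sketched in a few lines; the numeric values here are illustrative, not taken from the article:

```python
# Sketch of the approximation above: energy to change a gas's volume
# against a constant pressure difference. Values are illustrative.
def compression_energy(p_gas, p_ext, delta_v):
    """Energy in joules: (gas pressure - external pressure) * volume change.
    Pressures in Pa, volume change in m^3."""
    return (p_gas - p_ext) * delta_v

# A 1e5 Pa pressure excess acting over a 0.01 m^3 volume change stores ~1 kJ:
e = compression_energy(2e5, 1e5, 0.01)
```

In reality the pressure varies during compression, so the exact work is the integral ∫(p − p_ext) dV; the product above is the constant-pressure special case.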
In cosmological and other contexts in general relativity, the energy densities considered relate to the elements of the stress–energy tensor and therefore do include the rest mass energy as well as energy densities associated with pressure.
== Chemical energy ==
When discussing the chemical energy contained, there are different types which can be quantified depending on the intended purpose. One is the theoretical total amount of thermodynamic work that can be derived from a system, at a given temperature and pressure imposed by the surroundings, called exergy. Another is the theoretical amount of electrical energy that can be derived from reactants that are at room temperature and atmospheric pressure. This is given by the change in standard Gibbs free energy. But as a source of heat or for use in a heat engine, the relevant quantity is the change in standard enthalpy or the heat of combustion.
There are two kinds of heat of combustion:
The higher value (HHV), or gross heat of combustion, includes all the heat released as the products cool to room temperature and whatever water vapor is present condenses.
The lower value (LHV), or net heat of combustion, does not include the heat which could be released by condensing water vapor, and may not include the heat released on cooling all the way down to room temperature.
A convenient table of HHV and LHV of some fuels can be found in the references.
=== In energy storage and fuels ===
For energy storage, the energy density relates the stored energy to the volume of the storage equipment, e.g. the fuel tank. The higher the energy density of the fuel, the more energy may be stored or transported for the same amount of volume. The energy of a fuel per unit mass is called its specific energy.
The adjacent figure shows the gravimetric and volumetric energy density of some fuels and storage technologies (modified from the Gasoline article). Some values may not be precise because of isomers or other irregularities. The heating values of the fuel describe their specific energies more comprehensively.
The density values for chemical fuels do not include the weight of the oxygen required for combustion. The atomic weights of carbon and oxygen are similar, while hydrogen is much lighter. Figures are presented in this way for those fuels where in practice air would only be drawn in locally to the burner. This explains the apparently lower energy density of materials that contain their own oxidizer (such as gunpowder and TNT), where the mass of the oxidizer in effect adds weight, and absorbs some of the energy of combustion to dissociate and liberate oxygen to continue the reaction. This also explains some apparent anomalies, such as the energy density of a sandwich appearing to be higher than that of a stick of dynamite.
Given the high energy density of gasoline, the exploration of alternative media to store the energy of powering a car, such as hydrogen or battery, is strongly limited by the energy density of the alternative medium. The same mass of lithium-ion storage, for example, would result in a car with only 2% the range of its gasoline counterpart. If sacrificing the range is undesirable, much more storage volume is necessary. Alternative options are discussed for energy storage to increase energy density and decrease charging time, such as supercapacitors.
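The roughly 2% figure can be reproduced from typical specific energies. The values below are assumed round numbers for illustration only, and the comparison ignores the much higher efficiency of electric drivetrains, which narrows the real-world gap considerably:

```python
# Assumed, illustrative specific energies in MJ/kg (not from the article's
# tables): gasoline ~46, lithium-ion cell ~0.9.
GASOLINE_MJ_PER_KG = 46.0
LI_ION_MJ_PER_KG = 0.9

# Range ratio for the same storage mass, all else (unrealistically) equal:
range_ratio = LI_ION_MJ_PER_KG / GASOLINE_MJ_PER_KG  # ~0.02, i.e. ~2%
```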
No single energy storage method offers the best specific power, specific energy, and energy density simultaneously. Peukert's law describes how the amount of useful energy that can be obtained (for a lead-acid cell) depends on how quickly it is drawn out.
=== Efficiency ===
In general an engine will generate less kinetic energy due to inefficiencies and thermodynamic considerations—hence the specific fuel consumption of an engine will always be greater than its rate of production of the kinetic energy of motion.
Energy density differs from energy conversion efficiency (net output per input) or embodied energy (the energy output costs to provide, as harvesting, refining, distributing, and dealing with pollution all use energy). Large scale, intensive energy use impacts and is impacted by climate, waste storage, and environmental consequences.
== Nuclear energy ==
The greatest energy source by far is matter itself, according to the mass–energy equivalence. This energy is described by E = mc², where c is the speed of light. In terms of density, m = ρV, where ρ is the volumetric mass density and V is the volume occupied by the mass. This energy can be released by the processes of nuclear fission (~ 0.1%), nuclear fusion (~ 1%), or the annihilation of some or all of the matter in the volume V by matter–antimatter collisions (100%).
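These release fractions apply directly to E = mc². A minimal sketch (the density of water is used only as an example mass):

```python
# Rest-mass energy E = m c^2 with m = rho * V, and the release fractions
# quoted above (fission ~0.1%, fusion ~1%, annihilation 100%).
C = 299_792_458.0  # speed of light in m/s

def rest_mass_energy(rho, volume):
    """Total rest-mass energy in joules for density rho (kg/m^3) and volume (m^3)."""
    return rho * volume * C**2

e_kg = rest_mass_energy(1000.0, 1e-3)   # 1 kg (one litre of water): ~9e16 J
e_fission = 0.001 * e_kg                # ~0.1% released by fission
e_fusion = 0.01 * e_kg                  # ~1% released by fusion
```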
The most effective ways of accessing this energy, aside from antimatter, are fusion and fission. Fusion is the process by which the sun produces energy which will be available for billions of years (in the form of sunlight and heat). However as of 2024, sustained fusion power production continues to be elusive. Power from fission in nuclear power plants (using uranium and thorium) will be available for at least many decades or even centuries because of the plentiful supply of the elements on earth, though the full potential of this source can only be realized through breeder reactors, which are, apart from the BN-600 reactor, not yet used commercially.
=== Fission reactors ===
Nuclear fuels typically have volumetric energy densities at least tens of thousands of times higher than chemical fuels. A 1 inch tall uranium fuel pellet is equivalent to about 1 ton of coal, 120 gallons of crude oil, or 17,000 cubic feet of natural gas. In light-water reactors, 1 kg of natural uranium – following a corresponding enrichment and used for power generation – is equivalent to the energy content of nearly 10,000 kg of mineral oil or 14,000 kg of coal. Comparatively, coal, gas, and petroleum are the current primary energy sources in the U.S. but have a much lower energy density.
The density of thermal energy contained in the core of a light-water reactor (pressurized water reactor (PWR) or boiling water reactor (BWR)) of typically 1 GW (1000 MW electrical corresponding to ≈ 3000 MW thermal) is in the range of 10 to 100 MW of thermal energy per cubic meter of cooling water, depending on the location considered in the system (the core itself (≈ 30 m³), the reactor pressure vessel (≈ 50 m³), or the whole primary circuit (≈ 300 m³)). This represents a considerable density of energy that requires a continuous water flow at high velocity at all times in order to remove heat from the core, even after an emergency shutdown of the reactor.
The incapacity to cool the cores of three BWRs at Fukushima after the 2011 tsunami and the resulting loss of external electrical power and cold source caused the meltdown of the three cores in only a few hours, even though the three reactors were correctly shut down just after the Tōhoku earthquake. This extremely high power density distinguishes nuclear power plants (NPP's) from any thermal power plants (burning coal, fuel or gas) or any chemical plants and explains the large redundancy required to permanently control the neutron reactivity and to remove the residual heat from the core of NPP's.
=== Antimatter–matter annihilation ===
Because antimatter–matter interactions result in complete conversion of the rest mass to radiant energy, the energy density of this reaction depends on the density of the matter and antimatter used. A neutron star would approximate the most dense system capable of matter-antimatter annihilation. A black hole, although denser than a neutron star, does not have an equivalent anti-particle form, but would offer the same 100% conversion rate of mass to energy in the form of Hawking radiation. Even in the case of relatively small black holes (smaller than astronomical objects) the power output would be tremendous.
== Electric and magnetic fields ==
Electric and magnetic fields can store energy, and their energy density relates to the strength of the fields within a given volume. This (volumetric) energy density is given by
{\displaystyle u={\frac {\varepsilon }{2}}\mathbf {E} ^{2}+{\frac {1}{2\mu }}\mathbf {B} ^{2}}
where E is the electric field, B is the magnetic field, and ε and µ are the permittivity and permeability of the surroundings respectively. The SI unit is the joule per cubic metre.
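The formula is easy to evaluate numerically; this sketch assumes the vacuum values of the permittivity and permeability:

```python
import math

EPS_0 = 8.8541878128e-12   # vacuum permittivity, F/m
MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

def em_energy_density(e_field, b_field, eps=EPS_0, mu=MU_0):
    """u = (eps/2) E^2 + B^2 / (2 mu), in J/m^3."""
    return 0.5 * eps * e_field**2 + b_field**2 / (2 * mu)

# A 1 T magnetic field alone stores roughly 4e5 J/m^3:
u_magnetic = em_energy_density(0.0, 1.0)
```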
In ideal (linear and nondispersive) substances, the energy density is
{\displaystyle u={\frac {1}{2}}(\mathbf {E} \cdot \mathbf {D} +\mathbf {H} \cdot \mathbf {B} )}
where D is the electric displacement field and H is the magnetizing field. In the case of absence of magnetic fields, by exploiting Fröhlich's relationships it is also possible to extend these equations to anisotropic and nonlinear dielectrics, as well as to calculate the correlated Helmholtz free energy and entropy densities.
In the context of magnetohydrodynamics, the physics of conductive fluids, the magnetic energy density behaves like an additional pressure that adds to the gas pressure of a plasma.
=== Pulsed sources ===
When a pulsed laser impacts a surface, the radiant exposure, i.e. the energy deposited per unit of surface, may also be called energy density or fluence.
== Table of material energy densities ==
The following unit conversions may be helpful when considering the data in the tables: 3.6 MJ = 1 kW⋅h ≈ 1.34 hp⋅h. Since 1 J = 10⁻⁶ MJ and 1 m³ = 10³ L, divide joule/m³ by 10⁹ to get MJ/L = GJ/m³. Divide MJ/L by 3.6 to get kW⋅h/L.
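The conversions above can be captured as small helper functions; this is a direct transcription of the stated rules:

```python
# Direct transcription of the unit conversions stated above.
MJ_PER_KWH = 3.6

def j_per_m3_to_mj_per_l(u_j_per_m3):
    """1 MJ/L = 1 GJ/m^3 = 1e9 J/m^3, so divide by 1e9."""
    return u_j_per_m3 / 1e9

def mj_per_l_to_kwh_per_l(density_mj_per_l):
    """Divide MJ/L by 3.6 to get kWh/L."""
    return density_mj_per_l / MJ_PER_KWH
```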
=== Chemical reactions (oxidation) ===
Unless otherwise stated, the values in the following table are lower heating values for perfect combustion, not counting oxidizer mass or volume. When used to produce electricity in a fuel cell or to do work, it is the Gibbs free energy of reaction (ΔG) that sets the theoretical upper limit. If the produced H2O is vapor, this is generally greater than the lower heat of combustion, whereas if the produced H2O is liquid, it is generally less than the higher heat of combustion. But in the most relevant case of hydrogen, ΔG is 113 MJ/kg if water vapor is produced, and 118 MJ/kg if liquid water is produced, both being less than the lower heat of combustion (120 MJ/kg).
=== Electrochemical reactions (batteries) ===
==== Common battery formats ====
=== Nuclear reactions ===
=== In material deformation ===
The mechanical energy storage capacity, or resilience, of a Hookean material when it is deformed to the point of failure can be computed as the tensile strength times the maximum elongation, divided by two. The maximum elongation of a Hookean material can be computed by dividing its ultimate tensile strength by its stiffness. The following table lists these values computed using the Young's modulus as the measure of stiffness:
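For an ideal Hookean material these formulas reduce to strain at failure = σ/E and resilience = σ²/(2E). A sketch with assumed, steel-like round numbers (not taken from the table):

```python
def max_elongation(tensile_strength, stiffness):
    """Strain at failure of an ideal Hookean material: sigma / E."""
    return tensile_strength / stiffness

def resilience(tensile_strength, stiffness):
    """Elastic energy stored per unit volume at failure:
    (strength * elongation) / 2 = sigma^2 / (2 E), in J/m^3."""
    return tensile_strength * max_elongation(tensile_strength, stiffness) / 2

# Assumed steel-like values: strength 400 MPa, Young's modulus 200 GPa.
u = resilience(400e6, 200e9)  # ~4e5 J/m^3 = 0.4 MJ/m^3
```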
=== Other release mechanisms ===
== See also ==
== References ==
== Further reading ==
The Inflationary Universe: The Quest for a New Theory of Cosmic Origins by Alan H. Guth (1998) ISBN 0-201-32840-2
Cosmological Inflation and Large-Scale Structure by Andrew R. Liddle, David H. Lyth (2000) ISBN 0-521-57598-2
Richard Becker, "Electromagnetic Fields and Interactions", Dover Publications Inc., 1964
"Aircraft Fuels". Energy, Technology and the Environment Ed. Attilio Bisio. Vol. 1. New York: John Wiley and Sons, Inc., 1995. 257–259
"Fuels of the Future for Cars and Trucks" – Dr. James J. Eberhardt – Energy Efficiency and Renewable Energy, U.S. Department of Energy – 2002 Diesel Engine Emissions Reduction (DEER) Workshop San Diego, California - August 25–29, 2002
"Heat values of various fuels – World Nuclear Association". www.world-nuclear.org. Retrieved 4 November 2018.
World energy supply and consumption refers to the global supply of energy resources and their consumption. The system of global energy supply consists of the development, refinement, and trade of energy. Energy supplies may exist in various forms, such as raw resources or more processed and refined forms of energy. Raw energy resources include, for example, coal, unprocessed oil and gas, and uranium; refined forms include, for example, refined oil that becomes fuel, and electricity. Energy resources may be used in various ways, depending on the specific resource (e.g. coal) and the intended end use (industrial, residential, etc.). Energy production and consumption play a significant role in the global economy and are needed in industry and global transportation. The total energy supply chain, from production to final consumption, involves many activities that cause a loss of useful energy.
As of 2022, energy consumption is still about 80% from fossil fuels. The Gulf States and Russia are major energy exporters. Their customers include for example the European Union and China, who are not producing enough energy in their own countries to satisfy their energy demand. Total energy consumption tends to increase by about 1–2% per year. More recently, renewable energy has been growing rapidly, averaging about 20% increase per year in the 2010s.
Two key problems with energy production and consumption are greenhouse gas emissions and environmental pollution. Of about 50 billion tonnes worldwide annual total greenhouse gas emissions, 36 billion tonnes of carbon dioxide was a result of energy use (almost all from fossil fuels) in 2021. Many scenarios have been envisioned to reduce greenhouse gas emissions, usually by the name of net zero emissions.
There is a clear connection between energy consumption per capita, and GDP per capita.
A significant lack of energy supplies is called an energy crisis.
== Primary energy production ==
Primary energy refers to the first form of energy encountered, as raw resources collected directly from energy production, before any conversion or transformation of the energy occurs.
Energy production is usually classified as:
Fossil, using coal, crude oil, and natural gas;
Nuclear, using uranium;
Renewable, using biomass, geothermal, hydropower, solar, wind, tidal, wave, among others.
Primary energy assessment by IEA follows certain rules to ease measurement of different kinds of energy. These rules are controversial. Water and air flow energy that drives hydro and wind turbines, and sunlight that powers solar panels, are not taken as PE, which is set at the electric energy produced. But fossil and nuclear energy are set at the reaction heat, which is about three times the electric energy. This measurement difference can lead to underestimating the economic contribution of renewable energy.
Enerdata displays data for "Total energy / production: Coal, Oil, Gas, Biomass, Heat and Electricity" and for "Renewables / % in electricity production: Renewables, non-renewables".
The table lists worldwide PE and the countries producing most (76%) of that in 2021, using Enerdata. The amounts are rounded and given in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh (41.9 petajoules), where 1 TWh = 10⁹ kWh) and % of Total. Renewable is Biomass plus Heat plus the renewable percentage of Electricity production (hydro, wind, solar). Nuclear is the nonrenewable percentage of Electricity production. The above-mentioned underestimation of hydro, wind and solar energy, compared to nuclear and fossil energy, applies also to Enerdata.
The 2021 world total energy production of 14,800 MToe corresponds to a little over 172 PWh / year, or about 19.6 TW of power generation.
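The quoted totals follow from the Mtoe conversion given above; a quick check:

```python
# Check of the quoted totals using 1 Mtoe = 11.63 TWh.
TWH_PER_MTOE = 11.63
HOURS_PER_YEAR = 8760  # ignoring leap years

def mtoe_to_pwh(mtoe):
    """Annual energy in petawatt-hours (1 PWh = 1000 TWh)."""
    return mtoe * TWH_PER_MTOE / 1000.0

def mtoe_per_year_to_tw(mtoe):
    """Average power in terawatts: TWh per year divided by hours per year."""
    return mtoe * TWH_PER_MTOE / HOURS_PER_YEAR

# 14,800 Mtoe -> ~172 PWh/year -> ~19.6 TW average power, as stated above.
```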
== Energy conversion ==
Energy resources must be processed in order to make them suitable for final consumption. For example, there may be various impurities in raw mined coal, or in raw natural gas produced from an oil well, that make it unsuitable to be burned in a power plant.
Primary energy is converted in many ways to energy carriers, also known as secondary energy:
Coal mainly goes to thermal power stations. Coke is derived by destructive distillation of bituminous coal.
Crude oil goes mainly to oil refineries
Natural-gas goes to natural-gas processing plants to remove contaminants such as water, carbon dioxide and hydrogen sulfide, and to adjust the heating value. It is used as fuel gas, also in thermal power stations.
Nuclear reaction heat is used in thermal power stations.
Biomass is used directly or converted to biofuel.
Electricity generators are driven by steam or gas turbines in a thermal plant, or water turbines in a hydropower station, or wind turbines, usually in a wind farm. The invention of the solar cell in 1954 started electricity generation by solar panels, connected to a power inverter. Mass production of panels around the year 2000 made this economic.
== Energy trade ==
Much primary and converted energy is traded among countries. The table lists countries with a large difference of export and import in 2021, expressed in Mtoe. A negative value indicates that much energy import is needed for the economy. Russian gas exports were reduced greatly in 2022, as the capacity of pipelines to Asia plus LNG export capacity is much less than the volume of gas no longer sent to Europe.
Transport of energy carriers is done by tanker ship, tank truck, LNG carrier, rail freight transport, pipeline and by electric power transmission.
== Total energy supply ==
Total energy supply (TES) indicates the sum of production and imports subtracting exports and storage changes. For the whole world TES nearly equals primary energy PE because imports and exports cancel out, but for countries TES and PE differ in quantity, and also in quality as secondary energy is involved, e.g., import of an oil refinery product. TES is all energy required to supply energy for end users.
The tables list TES and PE for some countries where these differ much, both in 2021 and TES history. Most growth of TES since 1990 occurred in Asia. The amounts are rounded and given in Mtoe. Enerdata labels TES as Total energy consumption.
25% of worldwide primary production is used for conversion and transport, and 6% for non-energy products like lubricants, asphalt and petrochemicals. In 2019 TES was 606 EJ and final consumption was 418 EJ, 69% of TES. Most of the energy lost by conversion occurs in thermal electricity plants and in the energy industry's own use.
=== Discussion about energy loss ===
There are different qualities of energy. Heat, especially at a relatively low temperature, is low-quality energy of random motion, whereas electricity is high-quality energy that flows smoothly through wires. It takes around 3 kWh of heat to produce 1 kWh of electricity. But by the same token, a kilowatt-hour of this high-quality electricity can be used to pump several kilowatt-hours of heat into a building using a heat pump. It turns out that the loss of useful energy incurred in thermal electricity plants is very much more than the loss due to, say, resistance in power lines, because of quality differences. Electricity can also be used in many ways in which heat cannot.
In fact, the loss in thermal plants is due to poor conversion of the chemical energy of fuel to motion by combustion. Chemical energy of fuel is not inherently low-quality; for example, conversion of chemical energy to electricity in batteries can approach 100%. So the energy loss in thermal plants is a real loss.
== Final consumption ==
Total final consumption (TFC) is the worldwide consumption of energy by end-users (whereas primary energy consumption (Eurostat) or total energy supply (IEA) is total energy demand, and thus also includes what the energy sector uses itself along with transformation and distribution losses). This energy consists of fuel (78%) and electricity (22%). The tables list amounts, expressed in million tonnes of oil equivalent per year (1 Mtoe = 11.63 TWh), and how much of these is renewable energy. Non-energy products are not considered here. The data are from 2018. The world's renewable share of TFC was 18% in 2018: 7% traditional biomass, 3.6% hydropower and 7.4% other renewables.
In the period 2005–2017 worldwide final consumption of coal increased by 23%, of oil and gas increased by 18%, and that of electricity increased by 41%.
Fuel comes in three types: Fossil fuel is natural gas, fuel derived from petroleum (LPG, gasoline, kerosene, gas/diesel, fuel oil), or from coal (anthracite, bituminous coal, coke, blast furnace gas). Secondly, there is renewable fuel (biofuel and fuel derived from waste). And lastly, the fuel used for district heating.
The amounts of fuel in the tables are based on lower heating value.
The first table lists final consumption in the countries/regions which use most (85%), and per person as of 2018. In developing countries fuel consumption per person is low and more renewable. Canada, Venezuela and Brazil generate most electricity with hydropower.
The next table shows countries consuming most (85%) in Europe.
=== Energy for energy ===
Some fuel and electricity is used to construct, maintain and demolish/recycle installations that produce fuel and electricity, such as oil platforms, uranium isotope separators and wind turbines. For these producers to be economical the ratio of energy returned on energy invested (EROEI) or energy return on investment (EROI) should be large enough.
If the final energy delivered for consumption is E and the EROI equals R, then the net energy available is E − E/R. The percentage of available energy is 100 − 100/R. For R > 10, more than 90% is available, but for R = 2 only 50%, and for R = 1 none. This steep decline is known as the net energy cliff.
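The net energy cliff is easy to see numerically:

```python
# Available fraction of delivered energy after subtracting the energy
# invested: (E - E/R) / E = 1 - 1/R.
def net_energy_fraction(eroi):
    if eroi <= 0:
        raise ValueError("EROI must be positive")
    return 1.0 - 1.0 / eroi

# R = 10 leaves 90% available, R = 2 only 50%, R = 1 none -- the steep
# drop at low R is the "net energy cliff".
```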
== Availability of data ==
Many countries publish statistics on the energy supply and consumption of their own country, of other countries of interest, or of all countries combined in one chart. One of the largest organizations in this field, the International Energy Agency (IEA), sells comprehensive yearly energy data, which makes this data paywalled and difficult for internet users to access. The organization Enerdata, on the other hand, publishes a free Yearbook, making the data more accessible. Another trustworthy organization that provides accurate energy data, mainly referring to the USA, is the U.S. Energy Information Administration.
== Trends and outlook ==
Due to the COVID-19 pandemic, there was a significant decline in energy usage worldwide in 2020, but total energy demand worldwide had recovered by 2021, and has hit a record high in 2022.
In 2022, consumers worldwide spent nearly USD 10 trillion on energy, averaging more than USD 1,200 per person. This reflects a 20% increase over the previous five-year average, highlighting the significant economic impact and the increasing financial burden of energy consumption on a global scale.: 13
=== IEA scenarios ===
In World Energy Outlook 2023 the IEA notes that "We are on track to see all fossil fuels peak before 2030".: 18 The IEA presents three scenarios:: 17
The Stated Policies Scenario (STEPS) provides an outlook based on the latest policy settings. The share of fossil fuel in global energy supply – stuck for decades around 80% – starts to edge downwards and reaches 73% by 2030.: 18 This undercuts the rationale for any increase in fossil fuel investment.: 19 Renewables are set to contribute 80% of new power capacity to 2030, with solar PV alone accounting for more than half.: 20 The STEPS sees a peak in energy-related CO2 emissions in the mid-2020s but emissions remain high enough to push up global average temperatures to around 2.4 °C in 2100.: 22 Total energy demand continues to increase through to 2050.: 23 Total energy investment remains at about US$3 trillion per year.: 49
The Announced Pledges Scenario (APS) assumes all national energy and climate targets made by governments are met in full and on time. The APS is associated with a temperature rise of 1.7 °C in 2100 (with a 50% probability).: 92 Total energy investment rises to about US$4 trillion per year after 2030.: 49
The Net Zero Emissions by 2050 (NZE) Scenario limits global warming to 1.5 °C.: 17 The share of fossil fuel reaches 62% in 2030.: 101 Methane emissions from fossil fuel supply are cut by 75% in 2030.: 45 Total energy investment rises to almost US$5 trillion per year after 2030.: 49 Clean energy investment needs to rise everywhere, but the steepest increases are needed in emerging market and developing economies other than China, requiring enhanced international support.: 46 The share of electricity in final consumption exceeds 50% by 2050 in NZE. The share of nuclear power in electricity generation remains broadly stable over time in all scenarios, at about 9%.: 106
The IEA's "Electricity 2024" report details a 2.2% growth in global electricity demand for 2023, forecasting an annual increase of 3.4% through 2026, with notable contributions from emerging economies like China and India, despite a slump in advanced economies due to economic and inflationary pressures. The report underscores the significant impact of data centers, artificial intelligence and cryptocurrency, projecting a potential doubling of electricity consumption to 1,000 TWh by 2026, which is on par with Japan's current usage. Notably, 85% of the additional demand is expected to originate from China and India, with India's demand alone predicted to grow over 6% annually until 2026, driven by economic expansion and increasing air conditioning use.
Southeast Asia's electricity demand is also forecasted to climb by 5% annually through 2026. In the United States, a decrease was seen in 2023, but a moderate rise is anticipated in the coming years, largely fueled by data centers. The report also anticipates that a surge in electricity generation from low-emissions sources will meet the global demand growth over the next three years, with renewable energy sources predicted to surpass coal by early 2025.
=== Alternative scenarios ===
The goal set in the Paris Agreement to limit climate change will be difficult to achieve. Various scenarios for achieving the Paris Climate Agreement Goals have been developed, using IEA data but proposing transition to nearly 100% renewables by mid-century, along with steps such as reforestation. Nuclear power and carbon capture are excluded in these scenarios. The researchers say the costs will be far less than the $5 trillion per year governments currently spend subsidizing the fossil fuel industries responsible for climate change.: ix
In the +2.0 C (global warming) Scenario total primary energy demand in 2040 can be 450 EJ = 10,755 Mtoe, or 400 EJ = 9560 Mtoe in the +1.5 Scenario, well below the current production. Renewable sources can increase their share to 300 EJ in the +2.0 C Scenario or 330 EJ in the +1.5 Scenario in 2040. In 2050 renewables can cover nearly all energy demand. Non-energy consumption will still include fossil fuels.: xxvii Fig. 5
Global electricity generation from renewable energy sources will reach 88% by 2040 and 100% by 2050 in the alternative scenarios. "New" renewables—mainly wind, solar and geothermal energy—will contribute 83% of the total electricity generated.: xxiv The average annual investment required between 2015 and 2050, including costs for additional power plants to produce hydrogen and synthetic fuels and for plant replacement, will be around $1.4 trillion.: 182
Shifts from domestic aviation to rail and from road to rail are needed. Passenger car use must decrease in the OECD countries (but increase in developing world regions) after 2020. The passenger car use decline will be partly compensated by strong increase in public transport rail and bus systems.: xxii Fig.4
CO2 emission can reduce from 32 Gt in 2015 to 7 Gt (+2.0 Scenario) or 2.7 Gt (+1.5 Scenario) in 2040, and to zero in 2050.: xxviii
== See also ==
Electric energy consumption – Worldwide consumption of electricity
Energy demand management – Modification of consumer energy usage during peak hours
Energy intensity – Measure of an economy's energy inefficiency
Energy policy – How a government or business deals with energy
Sustainable energy – Energy that responsibly meets social, economic, and environmental needs
World Energy Outlook – Publication of the International Energy Agency
World energy resources – Estimated maximum capacity for energy production on Earth
Lists
List of countries by energy intensity
List of countries by electricity consumption
List of countries by electricity production
List of countries by energy consumption per capita
List of countries by greenhouse gas emissions
List of countries by energy consumption and production
== Notes ==
== References ==
== External links ==
Enerdata - World Energy & Climate Statistics
International Energy Outlook, by the U.S. Energy Information Administration
World Energy Outlook from the IEA
The first law of thermodynamics is a formulation of the law of conservation of energy in the context of thermodynamic processes. For a thermodynamic process affecting a thermodynamic system without transfer of matter, the law distinguishes two principal forms of energy transfer, heat and thermodynamic work. The law also defines the internal energy of a system, an extensive property for taking account of the balance of heat transfer, thermodynamic work, and matter transfer, into and out of the system. Energy cannot be created or destroyed, but it can be transformed from one form to another. In an externally isolated system, with internal changes, the sum of all forms of energy is constant.
An equivalent statement is that perpetual motion machines of the first kind are impossible; work done by a system on its surroundings requires that the system's internal energy be consumed, so that the amount of internal energy lost by that work must be resupplied as heat by an external energy source or as work by an external machine acting on the system to sustain the work of the system continuously.
== Definition ==
For thermodynamic processes of energy transfer without transfer of matter, the first law of thermodynamics is often expressed by the algebraic sum of contributions to the internal energy, {\displaystyle U,} from all work, {\displaystyle W,} done on or by the system, and the quantity of heat, {\displaystyle Q,} supplied to the system. With the sign convention of Rudolf Clausius, that heat supplied to the system is positive, but work done by the system is subtracted, a change in the internal energy, {\displaystyle \Delta U,} is written
{\displaystyle \Delta U=Q-W.}
Modern formulations, such as by Max Planck, and by IUPAC, often replace the subtraction with addition, and consider all net energy transfers to the system as positive and all net energy transfers from the system as negative, irrespective of the use of the system, for example as an engine.
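The two sign conventions can be compared in a small numerical sketch; all values below are assumed for illustration:

```python
# Sketch of the two sign conventions for the first law of thermodynamics.

def delta_u_clausius(q_in, w_by_system):
    """Clausius convention: heat supplied TO the system is positive,
    work done BY the system is subtracted: dU = Q - W."""
    return q_in - w_by_system

def delta_u_iupac(q_in, w_on_system):
    """IUPAC convention: all net transfers TO the system are positive,
    so work done ON the system is added: dU = Q + W."""
    return q_in + w_on_system

# Assumed example: a gas absorbs 500 J of heat and does 200 J of work
# on its surroundings; both conventions give the same physical result.
q = 500.0     # J, heat supplied to the system
w_by = 200.0  # J, work done by the system on the surroundings
assert delta_u_clausius(q, w_by) == delta_u_iupac(q, -w_by) == 300.0
print(delta_u_clausius(q, w_by))  # 300.0 (J of internal-energy increase)
```

The conventions differ only in which direction of work transfer carries the positive sign; the internal-energy change they describe is identical.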
When a system expands in an isobaric process, the thermodynamic work, {\displaystyle W,} done by the system on the surroundings is the product, {\displaystyle P~\Delta V,} of system pressure, {\displaystyle P,} and system volume change, {\displaystyle \Delta V,} whereas {\displaystyle -P~\Delta V} is said to be the thermodynamic work done on the system by the surroundings. The change in internal energy of the system is:
{\displaystyle \Delta U=Q-P~\Delta V,}
where {\displaystyle Q} denotes the quantity of heat supplied to the system from its surroundings.
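The isobaric relation can be illustrated with a short sketch; the pressure, volume change, and heat values below are assumptions, not from the text:

```python
# Hedged numerical sketch: isobaric expansion under the Clausius convention,
# dU = Q - P*dV. SI units throughout (J, Pa, m^3); values are illustrative.

def delta_u_isobaric(q, p, dv):
    """Internal-energy change for an isobaric process: dU = Q - P*dV."""
    return q - p * dv

q = 1000.0     # J of heat supplied to the system
p = 101_325.0  # Pa, constant (atmospheric) pressure
dv = 2.0e-3    # m^3 increase in system volume

w_by_system = p * dv             # ~202.65 J done on the surroundings
du = delta_u_isobaric(q, p, dv)  # ~797.35 J retained as internal energy
print(w_by_system, du)
```

Of the 1000 J supplied as heat, roughly a fifth leaves again as expansion work; only the remainder appears as internal-energy change.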
Work and heat express physical processes of supply or removal of energy, while the internal energy {\displaystyle U} is a mathematical abstraction that keeps account of the changes of energy that befall the system. The term {\displaystyle Q} is the quantity of energy added or removed as heat in the thermodynamic sense, not referring to a form of energy within the system. Likewise, {\displaystyle W} denotes the quantity of energy gained or lost through thermodynamic work. Internal energy is a property of the system, while work and heat describe the process, not the system. Thus, a given internal energy change, {\displaystyle \Delta U,} can be achieved by different combinations of heat and work. Heat and work are said to be path dependent, while change in internal energy depends only on the initial and final states of the system, not on the path between. Thermodynamic work is measured by change in the system, and, because of friction, is not necessarily the same as work measured by forces and distances in the surroundings, though, ideally, such can sometimes be arranged; this distinction is noted in the term 'isochoric work', at constant system volume, with {\displaystyle \Delta V=0,} which is not a form of thermodynamic work.
For thermodynamic processes of energy transfer with transfer of matter, the extensive character of internal energy can be stated: for the otherwise isolated combination of two thermodynamic systems with internal energies {\displaystyle U_{1}} and {\displaystyle U_{2}} into a single system with internal energy {\displaystyle U_{0},}
{\displaystyle U_{0}=U_{1}+U_{2}.}
== History ==
In the first half of the eighteenth century, French philosopher and mathematician Émilie du Châtelet made notable contributions to the emerging theoretical framework of energy, for example by emphasising Leibniz's concept of 'vis viva', mv², as distinct from Newton's momentum, mv.
Empirical developments of the early ideas, in the century following, wrestled with contravening concepts such as the caloric theory of heat.
In the few years of his life (1796–1832) after the 1824 publication of his book Reflections on the Motive Power of Fire, Sadi Carnot came to understand that the caloric theory of heat was restricted to mere calorimetry, and that heat and "motive power" are interconvertible. This is known only from his posthumously published notes. He wrote:
Heat is simply motive power, or rather motion which has changed its form. It is a movement among the particles of bodies. Wherever there is destruction of motive power, there is at the same time production of heat in quantity exactly proportional to the quantity of motive power destroyed. Reciprocally, wherever there is destruction of heat, there is production of motive power.
At that time, the concept of mechanical work had not been formulated. Carnot was aware that heat could be produced by friction and by percussion, as forms of dissipation of "motive power". As late as 1847, Lord Kelvin believed in the caloric theory of heat, being unaware of Carnot's notes.
In 1840, Germain Hess stated a conservation law (Hess's law) for the heat of reaction during chemical transformations. This law was later recognized as a consequence of the first law of thermodynamics, but Hess's statement was not explicitly concerned with the relation between energy exchanges by heat and work.
In 1842, Julius Robert von Mayer made a statement that was rendered by Clifford Truesdell (1980) as "in a process at constant pressure, the heat used to produce expansion is universally interconvertible with work", but this is not a general statement of the first law, for it does not express the concept of the thermodynamic state variable, the internal energy. Also in 1842, Mayer measured a temperature rise caused by friction in a body of paper pulp. This was near the time of the 1842–1845 work of James Prescott Joule, measuring the mechanical equivalent of heat. In 1845, Joule published a paper entitled The Mechanical Equivalent of Heat, in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on heat production by friction in the passage of electricity through a resistor and in the rotation of a paddle in a vat of water.
The first full statements of the law came in 1850 from Rudolf Clausius, and from William Rankine. Some scholars consider Rankine's statement less distinct than that of Clausius.
=== Original statements: the "thermodynamic approach" ===
The original 19th-century statements of the first law appeared in a conceptual framework in which transfer of energy as heat was taken as a primitive notion, defined by calorimetry. It was presupposed as logically prior to the theoretical development of thermodynamics. Jointly primitive with this notion of heat were the notions of empirical temperature and thermal equilibrium. This framework also took as primitive the notion of transfer of energy as work. This framework did not presume a concept of energy in general, but regarded it as derived or synthesized from the prior notions of heat and work. By one author, this framework has been called the "thermodynamic" approach.
The first explicit statement of the first law of thermodynamics, by Rudolf Clausius in 1850, referred to cyclic thermodynamic processes, and to the existence of a function of state of the system, the internal energy. He expressed it in terms of a differential equation for the increments of a thermodynamic process. This equation may be described as follows:
In a thermodynamic process involving a closed system (no transfer of matter), the increment in the internal energy is equal to the difference between the heat accumulated by the system and the thermodynamic work done by it.
Reflecting the experimental work of Mayer and of Joule, Clausius wrote:
In all cases in which work is produced by the agency of heat, a quantity of heat is consumed which is proportional to the work done; and conversely, by the expenditure of an equal quantity of work an equal quantity of heat is produced.
Because of its definition in terms of increments, the value of the internal energy of a system is not uniquely defined. It is defined only up to an arbitrary additive constant of integration, which can be adjusted to give arbitrary reference zero levels. This non-uniqueness is in keeping with the abstract mathematical nature of the internal energy. The internal energy is customarily stated relative to a conventionally chosen standard reference state of the system.
The concept of internal energy is considered by Bailyn to be of "enormous interest". Its quantity cannot be immediately measured, but can only be inferred, by differencing actual immediate measurements. Bailyn likens it to the energy states of an atom, that were revealed by Bohr's energy relation hν = En″ − En′. In each case, an unmeasurable quantity (the internal energy, the atomic energy level) is revealed by considering the difference of measured quantities (increments of internal energy, quantities of emitted or absorbed radiative energy).
=== Conceptual revision: the "mechanical approach" ===
In 1907, George H. Bryan wrote about systems between which there is no transfer of matter (closed systems): "Definition. When energy flows from one system or part of a system to another otherwise than by the performance of mechanical work, the energy so transferred is called heat." This definition may be regarded as expressing a conceptual revision, as follows. This reinterpretation was systematically expounded in 1909 by Constantin Carathéodory, whose attention had been drawn to it by Max Born. Largely through Born's influence, this revised conceptual approach to the definition of heat came to be preferred by many twentieth-century writers. It might be called the "mechanical approach".
Energy can also be transferred from one thermodynamic system to another in association with transfer of matter. Born points out that in general such energy transfer is not resolvable uniquely into work and heat moieties. In general, when there is transfer of energy associated with matter transfer, work and heat transfers can be distinguished only when they pass through walls physically separate from those for matter transfer.
The "mechanical" approach postulates the law of conservation of energy. It also postulates that energy can be transferred from one thermodynamic system to another adiabatically as work, and that energy can be held as the internal energy of a thermodynamic system. It also postulates that energy can be transferred from one thermodynamic system to another by a path that is non-adiabatic, and is unaccompanied by matter transfer. Initially, it "cleverly" (according to Martin Bailyn) refrains from labelling as 'heat' such non-adiabatic, unaccompanied transfer of energy. It rests on the primitive notion of walls, especially adiabatic walls and non-adiabatic walls, defined as follows. Temporarily, only for purpose of this definition, one can prohibit transfer of energy as work across a wall of interest. Then walls of interest fall into two classes, (a) those such that arbitrary systems separated by them remain independently in their own previously established respective states of internal thermodynamic equilibrium; they are defined as adiabatic; and (b) those without such independence; they are defined as non-adiabatic.
This approach derives the notions of transfer of energy as heat, and of temperature, as theoretical developments, not taking them as primitives. It regards calorimetry as a derived theory. It has an early origin in the nineteenth century, for example in the work of Hermann von Helmholtz, but also in the work of many others.
== Conceptually revised statement, according to the mechanical approach ==
The revised statement of the first law postulates that a change in the internal energy of a system due to any arbitrary process, that takes the system from a given initial thermodynamic state to a given final equilibrium thermodynamic state, can be determined through the physical existence, for those given states, of a reference process that occurs purely through stages of adiabatic work.
The revised statement is then
For a closed system, in any arbitrary process of interest that takes it from an initial to a final state of internal thermodynamic equilibrium, the change of internal energy is the same as that for a reference adiabatic work process that links those two states. This is so regardless of the path of the process of interest, and regardless of whether it is an adiabatic or a non-adiabatic process. The reference adiabatic work process may be chosen arbitrarily from amongst the class of all such processes.
This statement is much less close to the empirical basis than are the original statements, but is often regarded as conceptually parsimonious in that it rests only on the concepts of adiabatic work and of non-adiabatic processes, not on the concepts of transfer of energy as heat and of empirical temperature that are presupposed by the original statements. Largely through the influence of Max Born, it is often regarded as theoretically preferable because of this conceptual parsimony. Born particularly observes that the revised approach avoids thinking in terms of what he calls the "imported engineering" concept of heat engines.
Basing his thinking on the mechanical approach, Born in 1921, and again in 1949, proposed to revise the definition of heat. In particular, he referred to the work of Constantin Carathéodory, who had in 1909 stated the first law without defining quantity of heat. Born's definition was specifically for transfers of energy without transfer of matter, and it has been widely followed in textbooks (examples:). Born observes that a transfer of matter between two systems is accompanied by a transfer of internal energy that cannot be resolved into heat and work components. There can be pathways to other systems, spatially separate from that of the matter transfer, that allow heat and work transfer independent of and simultaneous with the matter transfer. Energy is conserved in such transfers.
== Description ==
=== Cyclic processes ===
The first law of thermodynamics for a closed system was expressed in two ways by Clausius. One way referred to cyclic processes and the inputs and outputs of the system, but did not refer to increments in the internal state of the system. The other way referred to an incremental change in the internal state of the system, and did not expect the process to be cyclic.
A cyclic process is one that can be repeated indefinitely often, returning the system to its initial state. Of particular interest for a single cycle of a cyclic process are the net work done, and the net heat taken in (or 'consumed', in Clausius' statement), by the system.
In a cyclic process in which the system does net work on its surroundings, it is observed to be physically necessary not only that heat be taken into the system, but also, importantly, that some heat leave the system. The difference is the heat converted by the cycle into work. In each repetition of a cyclic process, the net work done by the system, measured in mechanical units, is proportional to the heat consumed, measured in calorimetric units.
The constant of proportionality is universal and independent of the system and in 1845 and 1847 was measured by James Joule, who described it as the mechanical equivalent of heat.
== Various statements of the law for closed systems ==
The law is of great importance and generality and is consequently thought of from several points of view. Most careful textbook statements of the law express it for closed systems. It is stated in several ways, sometimes even by the same author.
For the thermodynamics of closed systems, the distinction between transfers of energy as work and as heat is central and is within the scope of the present article. For the thermodynamics of open systems, such a distinction is beyond the scope of the present article, but some limited comments are made on it in the section below headed 'First law of thermodynamics for open systems'.
There are two main ways of stating a law of thermodynamics, physically or mathematically. They should be logically coherent and consistent with one another.
An example of a physical statement is that of Planck (1897/1903):
It is in no way possible, either by mechanical, thermal, chemical, or other devices, to obtain perpetual motion, i.e. it is impossible to construct an engine which will work in a cycle and produce continuous work, or kinetic energy, from nothing.
This physical statement is restricted neither to closed systems nor to systems with states that are strictly defined only for thermodynamic equilibrium; it has meaning also for open systems and for systems with states that are not in thermodynamic equilibrium.
An example of a mathematical statement is that of Crawford (1963):
For a given system we let ΔEkin = large-scale mechanical energy, ΔEpot = large-scale potential energy, and ΔEtot = total energy. The first two quantities are specifiable in terms of appropriate mechanical variables, and by definition
{\displaystyle E^{\mathrm {tot} }=E^{\mathrm {kin} }+E^{\mathrm {pot} }+U\,\,.}
For any finite process, whether reversible or irreversible,
{\displaystyle \Delta E^{\mathrm {tot} }=\Delta E^{\mathrm {kin} }+\Delta E^{\mathrm {pot} }+\Delta U\,\,.}
The first law in a form that involves the principle of conservation of energy more generally is
{\displaystyle \Delta E^{\mathrm {tot} }=Q+W\,\,.}
Here Q and W are heat and work added, with no restrictions as to whether the process is reversible, quasistatic, or irreversible.[Warner, Am. J. Phys., 29, 124 (1961)]
This statement by Crawford, for W, uses the sign convention of IUPAC, not that of Clausius. Though it does not explicitly say so, this statement refers to closed systems. Internal energy U is evaluated for bodies in states of thermodynamic equilibrium, which possess well-defined temperatures, relative to a reference state.
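Crawford's bookkeeping can be sketched numerically; the function name and all example values below are illustrative assumptions:

```python
# Sketch of Crawford's decomposition: Delta E_tot = Delta E_kin + Delta E_pot
# + Delta U, combined with the first law Delta E_tot = Q + W (IUPAC signs:
# heat and work ADDED to the system are both positive).

def first_law_total(de_kin, de_pot, q, w_on_system):
    """Return the internal-energy change Delta U implied by
    Delta E_tot = Q + W once large-scale mechanical terms are removed."""
    de_tot = q + w_on_system       # total energy change of the system
    return de_tot - de_kin - de_pot

# Assumed example: the system gains 50 J of bulk kinetic energy and 20 J of
# potential energy while receiving 100 J of heat and 30 J of work.
du = first_law_total(de_kin=50.0, de_pot=20.0, q=100.0, w_on_system=30.0)
print(du)  # 60.0 J remains as internal-energy change
```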
The history of statements of the law for closed systems has two main periods, before and after the work of George H. Bryan (1907), of Carathéodory (1909), and the approval of Carathéodory's work given by Born (1921). The earlier traditional versions of the law for closed systems are nowadays often considered to be out of date.
Carathéodory's celebrated presentation of equilibrium thermodynamics refers to closed systems, which are allowed to contain several phases connected by internal walls of various kinds of impermeability and permeability (explicitly including walls that are permeable only to heat). Carathéodory's 1909 version of the first law of thermodynamics was stated in an axiom which refrained from defining or mentioning temperature or quantity of heat transferred. That axiom stated that the internal energy of a phase in equilibrium is a function of state, that the sum of the internal energies of the phases is the total internal energy of the system, and that the value of the total internal energy of the system is changed by the amount of work done adiabatically on it, considering work as a form of energy. That article considered this statement to be an expression of the law of conservation of energy for such systems. This version is nowadays widely accepted as authoritative, but is stated in slightly varied ways by different authors.
Such statements of the first law for closed systems assert the existence of internal energy as a function of state defined in terms of adiabatic work. Thus heat is not defined calorimetrically or as due to temperature difference. It is defined as a residual difference between change of internal energy and work done on the system, when that work does not account for the whole of the change of internal energy and the system is not adiabatically isolated.
The 1909 Carathéodory statement of the law in axiomatic form does not mention heat or temperature, but the equilibrium states to which it refers are explicitly defined by variable sets that necessarily include "non-deformation variables", such as pressures, which, within reasonable restrictions, can be rightly interpreted as empirical temperatures, and the walls connecting the phases of the system are explicitly defined as possibly impermeable to heat or permeable only to heat.
According to A. Münster (1970), "A somewhat unsatisfactory aspect of Carathéodory's theory is that a consequence of the Second Law must be considered at this point [in the statement of the first law], i.e. that it is not always possible to reach any state 2 from any other state 1 by means of an adiabatic process." Münster instances that no adiabatic process can reduce the internal energy of a system at constant volume. Carathéodory's paper asserts that its statement of the first law corresponds exactly to Joule's experimental arrangement, regarded as an instance of adiabatic work. It does not point out that Joule's experimental arrangement performed essentially irreversible work, through friction of paddles in a liquid, or passage of electric current through a resistance inside the system, driven by motion of a coil and inductive heating, or by an external current source, which can access the system only by the passage of electrons, and so is not strictly adiabatic, because electrons are a form of matter, which cannot penetrate adiabatic walls. The paper goes on to base its main argument on the possibility of quasi-static adiabatic work, which is essentially reversible. The paper asserts that it will avoid reference to Carnot cycles, and then proceeds to base its argument on cycles of forward and backward quasi-static adiabatic stages, with isothermal stages of zero magnitude.
Sometimes the concept of internal energy is not made explicit in the statement.
Sometimes the existence of the internal energy is made explicit but work is not explicitly mentioned in the statement of the first postulate of thermodynamics. Heat supplied is then defined as the residual change in internal energy after work has been taken into account, in a non-adiabatic process.
A respected modern author states the first law of thermodynamics as "Heat is a form of energy", which explicitly mentions neither internal energy nor adiabatic work. Heat is defined as energy transferred by thermal contact with a reservoir, which has a temperature, and is generally so large that addition and removal of heat do not alter its temperature. A current student text on chemistry defines heat thus: "heat is the exchange of thermal energy between a system and its surroundings caused by a temperature difference." The author then explains how heat is defined or measured by calorimetry, in terms of heat capacity, specific heat capacity, molar heat capacity, and temperature.
A respected text disregards Carathéodory's exclusion of mention of heat from the statement of the first law for closed systems, and admits heat calorimetrically defined along with work and internal energy. Another respected text defines heat exchange as determined by temperature difference, but also mentions that the Born (1921) version is "completely rigorous". These versions follow the traditional approach that is now considered out of date, exemplified by that of Planck (1897/1903).
== Evidence for the first law of thermodynamics for closed systems ==
The first law of thermodynamics for closed systems was originally induced from empirically observed evidence, including calorimetric evidence. It is nowadays, however, taken to provide the definition of heat via the law of conservation of energy and the definition of work in terms of changes in the external parameters of a system. The original discovery of the law was gradual over a period of perhaps half a century or more, and some early studies were in terms of cyclic processes.
The following is an account in terms of changes of state of a closed system through compound processes that are not necessarily cyclic. This account first considers processes for which the first law is easily verified because of their simplicity, namely adiabatic processes (in which there is no transfer as heat) and adynamic processes (in which there is no transfer as work).
=== Adiabatic processes ===
In an adiabatic process, there is transfer of energy as work but not as heat. For every adiabatic process that takes a system from a given initial state to a given final state, irrespective of how the work is done, the respective eventual total quantities of energy transferred as work are one and the same, determined just by the given initial and final states. The work done on the system is defined and measured by changes in mechanical or quasi-mechanical variables external to the system. Physically, adiabatic transfer of energy as work requires the existence of adiabatic enclosures.
For instance, in Joule's experiment, the initial system is a tank of water with a paddle wheel inside. If we isolate the tank thermally, and move the paddle wheel with a pulley and a weight, we can relate the increase in temperature with the distance descended by the mass. Next, the system is returned to its initial state, isolated again, and the same amount of work is done on the tank using different devices (an electric motor, a chemical battery, a spring,...). In every case, the amount of work can be measured independently. The return to the initial state is not conducted by doing adiabatic work on the system. The evidence shows that the final state of the water (in particular, its temperature and volume) is the same in every case. It is irrelevant if the work is electrical, mechanical, chemical,... or if done suddenly or slowly, as long as it is performed in an adiabatic way, that is to say, without heat transfer into or out of the system.
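The paddle-wheel bookkeeping can be sketched as follows; the masses, height, and number of descents are assumed for illustration (they are not Joule's actual figures):

```python
# Hedged sketch of Joule-style adiabatic work: a descending weight's potential
# energy is dissipated as paddle-wheel work in thermally isolated water, and
# the temperature rise depends only on the total work done, not on the device.

G = 9.81          # m/s^2, gravitational acceleration
C_WATER = 4186.0  # J/(kg*K), specific heat of water (modern value)

def temp_rise(m_weight, drop_height, m_water, n_descents=1):
    """Temperature rise of the water when the weight's potential energy is
    fully dissipated as adiabatic work inside the isolated tank."""
    work = n_descents * m_weight * G * drop_height  # J done on the water
    return work / (m_water * C_WATER)               # K

# Assumed example: a 10 kg weight descending 2 m, twenty times, stirring
# 1 kg of water.
dT = temp_rise(m_weight=10.0, drop_height=2.0, m_water=1.0, n_descents=20)
print(round(dT, 3))  # ~0.937 K, the same rise however the work is delivered
```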
Evidence of this kind shows that to increase the temperature of the water in the tank, the qualitative kind of adiabatically performed work does not matter. No qualitative kind of adiabatic work has ever been observed to decrease the temperature of the water in the tank.
A change from one state to another, for example an increase of both temperature and volume, may be conducted in several stages, for example by externally supplied electrical work on a resistor in the body, and adiabatic expansion allowing the body to do work on the surroundings. It needs to be shown that the time order of the stages, and their relative magnitudes, does not affect the amount of adiabatic work that needs to be done for the change of state. According to one respected scholar: "Unfortunately, it does not seem that experiments of this kind have ever been carried out carefully. ... We must therefore admit that the statement which we have enunciated here, and which is equivalent to the first law of thermodynamics, is not well founded on direct experimental evidence." Another expression of this view is "no systematic precise experiments to verify this generalization directly have ever been attempted".
This kind of evidence, of independence of sequence of stages, combined with the above-mentioned evidence, of independence of qualitative kind of work, would show the existence of an important state variable that corresponds with adiabatic work, but not that such a state variable represented a conserved quantity. For the latter, another step of evidence is needed, which may be related to the concept of reversibility, as mentioned below.
That important state variable was first recognized and denoted {\displaystyle U} by Clausius in 1850, but he did not then name it, and he defined it in terms not only of work but also of heat transfer in the same process. It was also independently recognized in 1850 by Rankine, who also denoted it {\displaystyle U}; and in 1851 by Kelvin who then called it "mechanical energy", and later "intrinsic energy". In 1865, after some hesitation, Clausius began calling his state function {\displaystyle U}
"energy". In 1882 it was named as the internal energy by Helmholtz. If only adiabatic processes were of interest, and heat could be ignored, the concept of internal energy would hardly arise or be needed. The relevant physics would be largely covered by the concept of potential energy, as was intended in the 1847 paper of Helmholtz on the principle of conservation of energy, though that did not deal with forces that cannot be described by a potential, and thus did not fully justify the principle. Moreover, that paper was critical of the early work of Joule that had by then been performed. A great merit of the internal energy concept is that it frees thermodynamics from a restriction to cyclic processes, and allows a treatment in terms of thermodynamic states.
In an adiabatic process, adiabatic work takes the system either from a reference state {\displaystyle O} with internal energy {\displaystyle U(O)} to an arbitrary one {\displaystyle A} with internal energy {\displaystyle U(A)}, or from the state {\displaystyle A} to the state {\displaystyle O}:
{\displaystyle U(A)=U(O)-W_{O\to A}^{\mathrm {adiabatic} }\,\,\mathrm {or} \,\,U(O)=U(A)-W_{A\to O}^{\mathrm {adiabatic} }\,.} (1)
Except under the special, and strictly speaking, fictional, condition of reversibility, only one of the processes {\displaystyle \mathrm {adiabatic} ,\,O\to A} or {\displaystyle \mathrm {adiabatic} ,\,{A\to O}\,}
is empirically feasible by a simple application of externally supplied work. The reason for this is given as the second law of thermodynamics and is not considered in the present article.
The fact of such irreversibility may be dealt with in two main ways, according to different points of view:
Since the work of Bryan (1907), the most accepted way to deal with it nowadays, followed by Carathéodory, is to rely on the previously established concept of quasi-static processes, as follows. Actual physical processes of transfer of energy as work are always at least to some degree irreversible. The irreversibility is often due to mechanisms known as dissipative, that transform bulk kinetic energy into internal energy. Examples are friction and viscosity. If the process is performed more slowly, the frictional or viscous dissipation is less. In the limit of infinitely slow performance, the dissipation tends to zero and then the limiting process, though fictional rather than actual, is notionally reversible, and is called quasi-static. Throughout the course of the fictional limiting quasi-static process, the internal intensive variables of the system are equal to the external intensive variables, those that describe the reactive forces exerted by the surroundings. This can be taken to justify the formula (1).
Another way to deal with it is to allow that experiments with processes of heat transfer to or from the system may be used to justify the formula (1) above. Moreover, it deals to some extent with the problem of lack of direct experimental evidence that the time order of stages of a process does not matter in the determination of internal energy. This way does not provide theoretical purity in terms of adiabatic work processes, but is empirically feasible, and is in accord with experiments actually done, such as the Joule experiments mentioned just above, and with older traditions.
The formula (1) above allows that to go by processes of quasi-static adiabatic work from the state
{\displaystyle A}
to the state
{\displaystyle B}
we can take a path that goes through the reference state
{\displaystyle O}
, since the quasi-static adiabatic work is independent of the path
{\displaystyle -W_{A\to B}^{\mathrm {adiabatic,\,quasi-static} }=-W_{A\to O}^{\mathrm {adiabatic,\,quasi-static} }-W_{O\to B}^{\mathrm {adiabatic,\,quasi-static} }=W_{O\to A}^{\mathrm {adiabatic,\,quasi-static} }-W_{O\to B}^{\mathrm {adiabatic,\,quasi-static} }=-U(A)+U(B)=\Delta U}
This kind of empirical evidence, coupled with theory of this kind, largely justifies the following statement:
For all adiabatic processes between two specified states of a closed system of any nature, the net work done is the same regardless the details of the process, and determines a state function called internal energy,
{\displaystyle U}
.
=== Adynamic processes ===
A complementary observable aspect of the first law is about heat transfer. Adynamic transfer of energy as heat can be measured empirically by changes in the surroundings of the system of interest by calorimetry. This again requires the existence of adiabatic enclosure of the entire process, system and surroundings, though the separating wall between the surroundings and the system is thermally conductive or radiatively permeable, not adiabatic. A calorimeter can rely on measurement of sensible heat, which requires the existence of thermometers and measurement of temperature change in bodies of known sensible heat capacity under specified conditions; or it can rely on the measurement of latent heat, through measurement of masses of material that change phase, at temperatures fixed by the occurrence of phase changes under specified conditions in bodies of known latent heat of phase change. The calorimeter can be calibrated by transferring an externally determined amount of heat into it, for instance from a resistive electrical heater inside the calorimeter through which a precisely known electric current is passed at a precisely known voltage for a precisely measured period of time. The calibration allows comparison of calorimetric measurement of quantity of heat transferred with quantity of energy transferred as (surroundings-based) work. According to one textbook, "The most common device for measuring
{\displaystyle \Delta U}
is an adiabatic bomb calorimeter." According to another textbook, "Calorimetry is widely used in present day laboratories." According to one opinion, "Most thermodynamic data come from calorimetry...".
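The calibration arithmetic can be sketched in a few lines; all numbers below (heater voltage, current, run time, temperature rises) are illustrative assumptions, not data from any particular instrument.

```python
# Hypothetical calorimeter calibration sketch; values are assumed.

def electrical_heat(voltage_V, current_A, time_s):
    """Energy delivered by a resistive heater: Q = V * I * t (joules)."""
    return voltage_V * current_A * time_s

# Calibration run: known electrical energy input, observed temperature rise.
Q_cal = electrical_heat(voltage_V=12.0, current_A=2.0, time_s=300.0)  # 7200 J
delta_T_cal = 3.0                 # K, assumed thermometer reading

# Effective heat capacity of the calorimeter and its contents.
C_cal = Q_cal / delta_T_cal       # 2400 J/K

# Measurement run: an unknown process produces an assumed 1.5 K rise,
# so the heat released is inferred from the calibration.
Q_unknown = C_cal * 1.5           # 3600 J
print(C_cal, Q_unknown)
```

The point of the calibration is that the quantity of heat is expressed in the same joules as the (surroundings-based) electrical work, which is what allows the comparison described above.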
When the system evolves with transfer of energy as heat, without energy being transferred as work, in an adynamic process, the heat transferred to the system is equal to the increase in its internal energy:
{\displaystyle Q_{A\to B}^{\mathrm {adynamic} }=\Delta U\,.}
=== General case for reversible processes ===
Heat transfer is practically reversible when it is driven by practically negligibly small temperature gradients. Work transfer is practically reversible when it occurs so slowly that there are no frictional effects within the system; frictional effects outside the system should also be zero if the process is to be reversible in the strict thermodynamic sense. For a particular reversible process in general, the work done reversibly on the system,
{\displaystyle W_{A\to B}^{\mathrm {path} \,P_{0},\,\mathrm {reversible} }}
, and the heat transferred reversibly to the system,
{\displaystyle Q_{A\to B}^{\mathrm {path} \,P_{0},\,\mathrm {reversible} }}
are not required to occur respectively adiabatically or adynamically, but they must belong to the same particular process defined by its particular reversible path,
{\displaystyle P_{0}}
, through the space of thermodynamic states. Then the work and heat transfers can occur and be calculated simultaneously.
Putting the two complementary aspects together, the first law for a particular reversible process can be written
{\displaystyle -W_{A\to B}^{\mathrm {path} \,P_{0},\,\mathrm {reversible} }+Q_{A\to B}^{\mathrm {path} \,P_{0},\,\mathrm {reversible} }=\Delta U\,.}
This combined statement is the expression the first law of thermodynamics for reversible processes for closed systems.
In particular, if no work is done on a thermally isolated closed system we have
{\displaystyle \Delta U=0\,}.
This is one aspect of the law of conservation of energy and can be stated:
The internal energy of an isolated system remains constant.
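The bookkeeping of the combined statement can be sketched in a few lines. The sign convention follows the equation above, in which the work term enters with a minus sign; the numerical values are assumed for illustration.

```python
# Minimal sketch of the first-law bookkeeping Delta_U = -W + Q used above;
# values are illustrative assumptions.

def delta_U(Q, W):
    """Change in internal energy of a closed system for given heat and work terms."""
    return Q - W

# A process with 500 J transferred as heat and a 200 J work term:
assert delta_U(Q=500.0, W=200.0) == 300.0

# Thermally isolated (Q = 0) with no work done (W = 0):
# the internal energy remains constant, as stated above.
assert delta_U(Q=0.0, W=0.0) == 0.0
```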
=== General case for irreversible processes ===
If, in a process of change of state of a closed system, the energy transfer is not under a practically zero temperature gradient, practically frictionless, and with nearly balanced forces, then the process is irreversible. Then the heat and work transfers may be difficult to calculate with high accuracy, although the simple equations for reversible processes still hold to a good approximation in the absence of composition changes. Importantly, the first law still holds and provides a check on the measurements and calculations of the work done irreversibly on the system,
{\displaystyle W_{A\to B}^{\mathrm {path} \,P_{1},\,\mathrm {irreversible} }}
, and the heat transferred irreversibly to the system,
{\displaystyle Q_{A\to B}^{\mathrm {path} \,P_{1},\,\mathrm {irreversible} }}
, which belong to the same particular process defined by its particular irreversible path,
{\displaystyle P_{1}}
, through the space of thermodynamic states.
{\displaystyle -W_{A\to B}^{\mathrm {path} \,P_{1},\,\mathrm {irreversible} }+Q_{A\to B}^{\mathrm {path} \,P_{1},\,\mathrm {irreversible} }=\Delta U\,.}
This means that the internal energy
{\displaystyle U}
is a function of state and that the internal energy change
{\displaystyle \Delta U}
between two states is a function only of the two states.
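That path independence can be illustrated with a minimal sketch, assuming one mole of a monatomic ideal gas taken between the same two states by two different reversible paths: the heat and work transfers differ between the paths, but −W + Q is the same.

```python
import math

# Illustrative sketch (assumed states): 1 mol of monatomic ideal gas taken
# from state A (300 K, 0.010 m^3) to state B (600 K, 0.020 m^3) by two
# different reversible paths. W and Q are path-dependent; Delta_U is not.
R = 8.314   # J/(mol K)
n = 1.0
T1, V1 = 300.0, 0.010
T2, V2 = 600.0, 0.020

dU = 1.5 * n * R * (T2 - T1)   # state function: depends only on A and B

# Path P0: isothermal expansion at T1, then isochoric heating to T2.
W_a = n * R * T1 * math.log(V2 / V1)   # work done by the gas
Q_a = dU + W_a                         # first law: -W + Q = Delta_U

# Path P1: isochoric heating to T2, then isothermal expansion at T2.
W_b = n * R * T2 * math.log(V2 / V1)
Q_b = dU + W_b

assert abs((Q_a - W_a) - (Q_b - W_b)) < 1e-9   # same Delta_U either way
assert abs(W_a - W_b) > 1.0                    # but path-dependent work
```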
=== Overview of the weight of evidence for the law ===
The first law of thermodynamics is so general that its predictions cannot all be directly tested. In many properly conducted experiments it has been precisely supported, and never violated. Indeed, within its scope of applicability, the law is so reliably established, that, nowadays, rather than experiment being considered as testing the accuracy of the law, it is more practical and realistic to think of the law as testing the accuracy of experiment. An experimental result that seems to violate the law may be assumed to be inaccurate or wrongly conceived, for example due to failure to account for an important physical factor. Thus, some may regard it as a principle more abstract than a law.
== State functional formulation for infinitesimal processes ==
When the heat and work transfers in the equations above are infinitesimal in magnitude, they are often denoted by δ, rather than exact differentials denoted by d, as a reminder that heat and work do not describe the state of any system. The integral of an inexact differential depends upon the particular path taken through the space of thermodynamic parameters while the integral of an exact differential depends only upon the initial and final states. If the initial and final states are the same, then the integral of an inexact differential may or may not be zero, but the integral of an exact differential is always zero. The path taken by a thermodynamic system through a chemical or physical change is known as a thermodynamic process.
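The distinction can be made concrete with a closed cycle, using assumed pressures and volumes: the loop integral of the inexact differential δW = P dV equals the area enclosed in the (V, P) plane and is nonzero, while any state function returns to its initial value over the same loop.

```python
# Sketch of exact vs. inexact differentials (illustrative values): integrate
# delta_W = P dV around a closed rectangular cycle in the (V, P) plane.

V1, V2 = 0.010, 0.020       # m^3
P_lo, P_hi = 1.0e5, 2.0e5   # Pa

# Traverse the rectangle: expand at P_hi, then compress back at P_lo.
W_loop = P_hi * (V2 - V1) + P_lo * (V1 - V2)

# The loop integral of the inexact differential is the enclosed area:
assert abs(W_loop - (P_hi - P_lo) * (V2 - V1)) < 1e-9
assert abs(W_loop - 1000.0) < 1e-6   # nonzero, about 1000 J

# A state function such as U depends only on the state, so its net change
# around any closed loop is zero by definition.
```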
The first law for a closed homogeneous system may be stated in terms that include concepts that are established in the second law. The internal energy U may then be expressed as a function of the system's defining state variables S, entropy, and V, volume: U = U (S, V). In these terms, T, the system's temperature, and P, its pressure, are partial derivatives of U with respect to S and V. These variables are important throughout thermodynamics, though not necessary for the statement of the first law. Rigorously, they are defined only when the system is in its own state of internal thermodynamic equilibrium. For some purposes, the concepts provide good approximations for scenarios sufficiently near to the system's internal thermodynamic equilibrium.
The first law requires that:
{\displaystyle dU=\delta Q-\delta W\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system, general process, quasi-static or irreversible).}}}
Then, for the fictive case of a reversible process, dU can be written in terms of exact differentials. One may imagine reversible changes, such that there is at each instant negligible departure from thermodynamic equilibrium within the system and between system and surroundings. Then, mechanical work is given by δW = −P dV and the quantity of heat added can be expressed as δQ = T dS. For these conditions
{\displaystyle dU=TdS-PdV\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{\text{(closed system, reversible process).}}}
While this has been shown here for reversible changes, it is valid more generally in the absence of chemical reactions or phase transitions, as U can be considered as a thermodynamic state function of the defining state variables S and V:
Equation (2) is known as the fundamental thermodynamic relation for a closed system in the energy representation, for which the defining state variables are S and V, with respect to which T and P are partial derivatives of U. It is only in the reversible case or for a quasistatic process without composition change that the work done and heat transferred are given by −P dV and T dS.
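This can be checked symbolically for a model fundamental equation. The functional form below is an assumed Sackur–Tetrode-like expression for a monatomic ideal gas, not taken from the text; differentiating it recovers T = (∂U/∂S)_V and P = −(∂U/∂V)_S together with the familiar ideal-gas relations.

```python
# Symbolic sketch: T and P as partial derivatives of an assumed model U(S, V),
# U = C exp(2S/(3nR)) V^(-2/3), for a monatomic ideal gas.
import sympy as sp

S, V, n, R, C = sp.symbols('S V n R C', positive=True)
U = C * sp.exp(2*S/(3*n*R)) * V**sp.Rational(-2, 3)

T = sp.diff(U, S)     # T = (dU/dS) at constant V
P = -sp.diff(U, V)    # P = -(dU/dV) at constant S

# The familiar ideal-gas relations follow from the fundamental relation:
assert sp.simplify(U - sp.Rational(3, 2)*n*R*T) == 0   # U = (3/2) n R T
assert sp.simplify(P*V - n*R*T) == 0                   # P V = n R T
```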
In the case of a closed system in which the particles of the system are of different types and, because chemical reactions may occur, their respective numbers are not necessarily constant, the fundamental thermodynamic relation for dU becomes:
{\displaystyle dU=TdS-PdV+\sum _{i}\mu _{i}dN_{i}.}
where dNi is the (small) increase in number of type-i particles in the reaction, and μi is known as the chemical potential of the type-i particles in the system. If dNi is expressed in mol then μi is expressed in J/mol. If the system has more external mechanical variables than just the volume that can change, the fundamental thermodynamic relation further generalizes to:
{\displaystyle dU=TdS-\sum _{i}X_{i}dx_{i}+\sum _{j}\mu _{j}dN_{j}.}
Here the Xi are the generalized forces corresponding to the external variables xi. The parameters Xi are independent of the size of the system and are called intensive parameters and the xi are proportional to the size and called extensive parameters.
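As a numerical sketch of the generalized relation (all values below are illustrative assumptions), with volume as the only mechanical variable, so that X₁ = P and dx₁ = dV:

```python
# Numerical sketch of dU = T dS - sum_i X_i dx_i + sum_j mu_j dN_j,
# with assumed illustrative values.

T = 298.0                   # K
dS = 0.5                    # J/K
X = [1.0e5]                 # Pa: pressure as the only generalized force
dx = [-1.0e-4]              # m^3: slight compression as the displacement
mu = [-237.1e3, -394.4e3]   # J/mol, assumed chemical potentials
dN = [1.0e-3, -1.0e-3]      # mol: species 1 produced, species 2 consumed

work_term = sum(Xi * dxi for Xi, dxi in zip(X, dx))
chem_term = sum(mj * dnj for mj, dnj in zip(mu, dN))
dU = T * dS - work_term + chem_term
print(dU)   # about 316.3 J
```

Each product Xi·dxi pairs an intensive parameter with the change in its extensive partner, which is the pattern the surrounding text describes.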
For an open system, there can be transfers of particles as well as energy into or out of the system during a process. For this case, the first law of thermodynamics still holds, in the form that the internal energy is a function of state and the change of internal energy in a process is a function only of its initial and final states, as noted in the section below headed First law of thermodynamics for open systems.
A useful idea from mechanics is that the energy gained by a particle is equal to the force applied to the particle multiplied by the displacement of the particle while that force is applied. Now consider the first law without the heating term: dU = −P dV. The pressure P can be viewed as a force (and in fact has units of force per unit area) while dV is the displacement (with units of distance times area). We may say, with respect to this work term, that a pressure difference forces a transfer of volume, and that the product of the two (work) is the amount of energy transferred out of the system as a result of the process. If one were to make this term negative then this would be the work done on the system.
It is useful to view the T dS term in the same light: here the temperature is known as a "generalized" force (rather than an actual mechanical force) and the entropy is a generalized displacement.
Similarly, a difference in chemical potential between groups of particles in the system drives a chemical reaction that changes the numbers of particles, and the corresponding product is the amount of chemical potential energy transformed in process. For example, consider a system consisting of two phases: liquid water and water vapor. There is a generalized "force" of evaporation that drives water molecules out of the liquid. There is a generalized "force" of condensation that drives vapor molecules out of the vapor. Only when these two "forces" (or chemical potentials) are equal is there equilibrium, and the net rate of transfer zero.
The two thermodynamic parameters that form a generalized force-displacement pair are called "conjugate variables". The two most familiar pairs are, of course, pressure-volume, and temperature-entropy.
== Fluid dynamics ==
In fluid dynamics, the first law of thermodynamics reads
{\displaystyle {\frac {DE_{t}}{Dt}}={\frac {DW}{Dt}}+{\frac {DQ}{Dt}}\to {\frac {DE_{t}}{Dt}}=\nabla \cdot ({\mathbf {\sigma } \cdot v})-\nabla \cdot {\mathbf {q} }}.
== Spatially inhomogeneous systems ==
Classical thermodynamics is initially focused on closed homogeneous systems (e.g. Planck 1897/1903), which might be regarded as 'zero-dimensional' in the sense that they have no spatial variation. But it is desired to study also systems with distinct internal motion and spatial inhomogeneity. For such systems, the principle of conservation of energy is expressed in terms not only of internal energy as defined for homogeneous systems, but also in terms of kinetic energy and potential energies of parts of the inhomogeneous system with respect to each other and with respect to long-range external forces. How the total energy of a system is allocated between these three more specific kinds of energy varies according to the purposes of different writers; this is because these components of energy are to some extent mathematical artefacts rather than actually measured physical quantities. For any closed homogeneous component of an inhomogeneous closed system, if
{\displaystyle E}
denotes the total energy of that component system, one may write
{\displaystyle E=E^{\mathrm {kin} }+E^{\mathrm {pot} }+U}
where
{\displaystyle E^{\mathrm {kin} }}
and
{\displaystyle E^{\mathrm {pot} }}
denote respectively the total kinetic energy and the total potential energy of the component closed homogeneous system, and
{\displaystyle U}
denotes its internal energy.
Potential energy can be exchanged with the surroundings of the system when the surroundings impose a force field, such as gravitational or electromagnetic, on the system.
A compound system consisting of two interacting closed homogeneous component subsystems has a potential energy of interaction
{\displaystyle E_{12}^{\mathrm {pot} }}
between the subsystems. Thus, in an obvious notation, one may write
{\displaystyle E=E_{1}^{\mathrm {kin} }+E_{1}^{\mathrm {pot} }+U_{1}+E_{2}^{\mathrm {kin} }+E_{2}^{\mathrm {pot} }+U_{2}+E_{12}^{\mathrm {pot} }}
The quantity
{\displaystyle E_{12}^{\mathrm {pot} }}
in general cannot be assigned to either subsystem in a non-arbitrary way, and this stands in the way of a general non-arbitrary definition of transfer of energy as work. On occasions, authors make their various respective arbitrary assignments.
The distinction between internal and kinetic energy is hard to make in the presence of turbulent motion within the system, as friction gradually dissipates macroscopic kinetic energy of localised bulk flow into molecular random motion of molecules that is classified as internal energy. The rate of dissipation by friction of kinetic energy of localised bulk flow into internal energy, whether in turbulent or in streamlined flow, is an important quantity in non-equilibrium thermodynamics. This is a serious difficulty for attempts to define entropy for time-varying spatially inhomogeneous systems.
== First law of thermodynamics for open systems ==
For the first law of thermodynamics, there is no trivial passage of physical conception from the closed system view to an open system view. For closed systems, the concepts of an adiabatic enclosure and of an adiabatic wall are fundamental. Matter and internal energy cannot permeate or penetrate such a wall. For an open system, there is a wall that allows penetration by matter. In general, matter in diffusive motion carries with it some internal energy, and some microscopic potential energy changes accompany the motion. An open system is not adiabatically enclosed.
There are some cases in which a process for an open system can, for particular purposes, be considered as if it were for a closed system. In an open system, by definition hypothetically or potentially, matter can pass between the system and its surroundings. But when, in a particular case, the process of interest involves only hypothetical or potential but no actual passage of matter, the process can be considered as if it were for a closed system.
=== Internal energy for an open system ===
Since the revised and more rigorous definition of the internal energy of a closed system rests upon the possibility of processes by which adiabatic work takes the system from one state to another, this leaves a problem for the definition of internal energy for an open system, for which adiabatic work is not in general possible. According to Max Born, the transfer of matter and energy across an open connection "cannot be reduced to mechanics". In contrast to the case of closed systems, for open systems, in the presence of diffusion, there is no unconstrained and unconditional physical distinction between convective transfer of internal energy by bulk flow of matter, the transfer of internal energy without transfer of matter (usually called heat conduction and work transfer), and change of various potential energies. The older traditional way and the conceptually revised (Carathéodory) way agree that there is no physically unique definition of heat and work transfer processes between open systems.
In particular, between two otherwise isolated open systems an adiabatic wall is by definition impossible. This problem is solved by recourse to the principle of conservation of energy. This principle allows a composite isolated system to be derived from two other component non-interacting isolated systems, in such a way that the total energy of the composite isolated system is equal to the sum of the total energies of the two component isolated systems. Two previously isolated systems can be subjected to the thermodynamic operation of placement between them of a wall permeable to matter and energy, followed by a time for establishment of a new thermodynamic state of internal equilibrium in the new single unpartitioned system. The internal energies of the initial two systems and of the final new system, considered respectively as closed systems as above, can be measured. Then the law of conservation of energy requires that
{\displaystyle \Delta U_{s}+\Delta U_{o}=0\,,}
where ΔUs and ΔUo denote the changes in internal energy of the system and of its surroundings respectively. This is a statement of the first law of thermodynamics for a transfer between two otherwise isolated open systems, that fits well with the conceptually revised and rigorous statement of the law stated above.
For the thermodynamic operation of adding two systems with internal energies U1 and U2, to produce a new system with internal energy U, one may write U = U1 + U2; the reference states for U, U1 and U2 should be specified accordingly, maintaining also that the internal energy of a system be proportional to its mass, so that the internal energies are extensive variables.
There is a sense in which this kind of additivity expresses a fundamental postulate that goes beyond the simplest ideas of classical closed system thermodynamics; the extensivity of some variables is not obvious, and needs explicit expression; indeed one author goes so far as to say that it could be recognized as a fourth law of thermodynamics, though this is not repeated by other authors.
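The additivity and conservation statements can be sketched together; the internal energies and the transferred amount below are assumed for illustration.

```python
# Sketch of internal-energy additivity and conservation for the thermodynamic
# operation of joining two previously isolated systems (assumed values).

U1, U2 = 5000.0, 3000.0    # J, internal energies with a common reference state
U_combined = U1 + U2       # extensivity: U = U1 + U2

# After the permeable wall is placed, suppose (illustratively) that 400 J
# pass from system 1 (the "system") to system 2 (the "surroundings"):
dU_s, dU_o = -400.0, +400.0
assert dU_s + dU_o == 0.0                        # Delta U_s + Delta U_o = 0
assert U_combined == (U1 + dU_s) + (U2 + dU_o)   # total energy unchanged
```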
Also of course
{\displaystyle \Delta N_{s}+\Delta N_{o}=0\,,}
where ΔNs and ΔNo denote the changes in mole number of a component substance of the system and of its surroundings respectively. This is a statement of the law of conservation of mass.
=== Process of transfer of matter between an open system and its surroundings ===
A system connected to its surroundings only through contact by a single permeable wall, but otherwise isolated, is an open system. If it is initially in a state of contact equilibrium with a surrounding subsystem, a thermodynamic process of transfer of matter can be made to occur between them if the surrounding subsystem is subjected to some thermodynamic operation, for example, removal of a partition between it and some further surrounding subsystem. The removal of the partition in the surroundings initiates a process of exchange between the system and its contiguous surrounding subsystem.
An example is evaporation. One may consider an open system consisting of a collection of liquid, enclosed except where it is allowed to evaporate into or to receive condensate from its vapor above it, which may be considered as its contiguous surrounding subsystem, and subject to control of its volume and temperature.
A thermodynamic process might be initiated by a thermodynamic operation in the surroundings that mechanically increases the controlled volume of the vapor. Some mechanical work will be done within the surroundings by the vapor, but also some of the parent liquid will evaporate and enter the vapor collection which is the contiguous surrounding subsystem. Some internal energy will accompany the vapor that leaves the system, but it will not make sense to try to uniquely identify part of that internal energy as heat and part of it as work. Consequently, the energy transfer that accompanies the transfer of matter between the system and its surrounding subsystem cannot be uniquely split into heat and work transfers to or from the open system. The component of total energy transfer that accompanies the transfer of vapor into the surrounding subsystem is customarily called 'latent heat of evaporation', but this use of the word heat is a quirk of customary historical language, not in strict compliance with the thermodynamic definition of transfer of energy as heat. In this example, kinetic energy of bulk flow and potential energy with respect to long-range external forces such as gravity are both considered to be zero. The first law of thermodynamics refers to the change of internal energy of the open system, between its initial and final states of internal equilibrium.
=== Open system with multiple contacts ===
An open system can be in contact equilibrium with several other systems at once.
This includes cases in which there is contact equilibrium between the system, and several subsystems in its surroundings, including separate connections with subsystems through walls that are permeable to the transfer of matter and internal energy as heat and allowing friction of passage of the transferred matter, but immovable, and separate connections through adiabatic walls with others, and separate connections through diathermic walls impermeable to matter with yet others. Because there are physically separate connections that are permeable to energy but impermeable to matter, between the system and its surroundings, energy transfers between them can occur with definite heat and work characters. Conceptually essential here is that the internal energy transferred with the transfer of matter is measured by a variable that is mathematically independent of the variables that measure heat and work.
With such independence of variables, the total increase of internal energy in the process is then determined as the sum of the internal energy transferred from the surroundings with the transfer of matter through the walls that are permeable to it, and of the internal energy transferred to the system as heat through the diathermic walls, and of the energy transferred to the system as work through the adiabatic walls, including the energy transferred to the system by long-range forces. These simultaneously transferred quantities of energy are defined by events in the surroundings of the system. Because the internal energy transferred with matter is not in general uniquely resolvable into heat and work components, the total energy transfer cannot in general be uniquely resolved into heat and work components. Under these conditions, the following formula can describe the process in terms of externally defined thermodynamic variables, as a statement of the first law of thermodynamics:
where ΔU0 denotes the change of internal energy of the system, and ΔUi denotes the change of internal energy of the ith of the m surrounding subsystems that are in open contact with the system, due to transfer between the system and that ith surrounding subsystem, and Q denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, and W denotes the energy transferred from the system to the surrounding subsystems that are in adiabatic connection with it. The case of a wall that is permeable to matter and can move so as to allow transfer of energy as work is not considered here.
==== Combination of first and second laws ====
If the system is described by the energetic fundamental equation, U0 = U0(S, V, Nj), and if the process can be described in the quasi-static formalism, in terms of the internal state variables of the system, then the process can also be described by a combination of the first and second laws of thermodynamics, by the formula
where there are n chemical constituents of the system and permeably connected surrounding subsystems, and where T, S, P, V, Nj, and μj, are defined as above.
For a general natural process, there is no immediate term-wise correspondence between equations (3) and (4), because they describe the process in different conceptual frames.
Nevertheless, a conditional correspondence exists. There are three relevant kinds of wall here: purely diathermal, adiabatic, and permeable to matter. If two of those kinds of wall are sealed off, leaving only one that permits transfers of energy, as work, as heat, or with matter, then the remaining permitted terms correspond precisely. If two of the kinds of wall are left unsealed, then energy transfer can be shared between them, so that the two remaining permitted terms do not correspond precisely.
For the special fictive case of quasi-static transfers, there is a simple correspondence. For this, it is supposed that the system has multiple areas of contact with its surroundings. There are pistons that allow adiabatic work, purely diathermal walls, and open connections with surrounding subsystems of completely controllable chemical potential (or equivalent controls for charged species). Then, for a suitable fictive quasi-static transfer, one can write
{\displaystyle \delta Q\,=\,T\,\mathrm {d} S-T\textstyle {\sum _{i}}s_{i}\,dN_{i}\,{\text{ and }}\delta W\,=\,P\,\mathrm {d} V\,\,\,\,\,\,{\text{(suitably defined surrounding subsystems, quasi-static transfers of energy)}},}
where
{\displaystyle dN_{i}}
is the added amount of species
{\displaystyle i}
and
{\displaystyle s_{i}}
is the corresponding molar entropy.
For fictive quasi-static transfers for which the chemical potentials in the connected surrounding subsystems are suitably controlled, these can be put into equation (4) to yield
where
{\displaystyle h_{i}}
is the molar enthalpy of species
{\displaystyle i}
.
=== Non-equilibrium transfers ===
The transfer of energy between an open system and a single contiguous subsystem of its surroundings is considered also in non-equilibrium thermodynamics. The problem of definition arises also in this case. It may be allowed that the wall between the system and the subsystem is not only permeable to matter and to internal energy, but also may be movable so as to allow work to be done when the two systems have different pressures. In this case, the transfer of energy as heat is not defined.
The first law of thermodynamics for any process on the specification of equation (3) can be defined as
where ΔU denotes the change of internal energy of the system, ΔQ denotes the internal energy transferred as heat from the heat reservoir of the surroundings to the system, pΔV denotes the work done by the system, and
{\displaystyle h_{i}}
is the molar enthalpy of species
{\displaystyle i}
, coming into the system from the surroundings in contact with the system.
Formula (6) is valid in the general case, both for quasi-static and for irreversible processes. The situation of the quasi-static process is considered in the previous section, which in our terms defines
To describe deviation of the thermodynamic system from equilibrium, in addition to fundamental variables that are used to fix the equilibrium state, as was described above, a set of variables
{\displaystyle \xi _{1},\xi _{2},\ldots }
that are called internal variables has been introduced; these allow the first law to be formulated for the general case.
Methods for study of non-equilibrium processes mostly deal with spatially continuous flow systems. In this case, the open connection between system and surroundings is usually taken to fully surround the system, so that there are no separate connections impermeable to matter but permeable to heat. Except for the special case mentioned above when there is no actual transfer of matter, which can be treated as if for a closed system, in strictly defined thermodynamic terms, it follows that transfer of energy as heat is not defined. In this sense, there is no such thing as 'heat flow' for a continuous-flow open system. Properly, for closed systems, one speaks of transfer of internal energy as heat, but in general, for open systems, one can speak safely only of transfer of internal energy. A factor here is that there are often cross-effects between distinct transfers, for example that transfer of one substance may cause transfer of another even when the latter has zero chemical potential gradient.
Usually transfer between a system and its surroundings applies to transfer of a state variable, and obeys a balance law: the amount lost by the donor system is equal to the amount gained by the receptor system. Heat is not a state variable. For his 1947 definition of "heat transfer" for discrete open systems, Prigogine carefully explains at some length that his definition does not obey a balance law. He describes this as paradoxical.
The situation is clarified by Gyarmati, who shows that his definition of "heat transfer", for continuous-flow systems, really refers not specifically to heat but rather to transfer of internal energy, as follows. He considers a conceptual small cell in a situation of continuous flow as a system defined in the so-called Lagrangian way, moving with the local center of mass. The flow of matter across the boundary is zero when considered as a flow of total mass. Nevertheless, if the material constitution is of several chemically distinct components that can diffuse with respect to one another, the system is considered to be open, the diffusive flows of the components being defined with respect to the center of mass of the system, and balancing one another as to mass transfer. Still there can be a distinction between bulk flow of internal energy and diffusive flow of internal energy in this case, because the internal energy density need not be constant per unit mass of material, and because internal energy is not locally conserved, owing to conversion of kinetic energy of bulk flow into internal energy by viscosity.
Gyarmati shows that his definition of "the heat flow vector" is strictly speaking a definition of flow of internal energy, not specifically of heat, and so it turns out that his use here of the word heat is contrary to the strict thermodynamic definition of heat, though it is more or less compatible with historical custom, that often enough did not clearly distinguish between heat and internal energy; he writes "that this relation must be considered to be the exact definition of the concept of heat flow, fairly loosely used in experimental physics and heat technics". Apparently in a different frame of thinking from that of the above-mentioned paradoxical usage in the earlier sections of the historic 1947 work by Prigogine, about discrete systems, this usage of Gyarmati is consistent with the later sections of the same 1947 work by Prigogine, about continuous-flow systems, which use the term "heat flux" in just this way. This usage is also followed by Glansdorff and Prigogine in their 1971 text about continuous-flow systems. They write: "Again the flow of internal energy may be split into a convection flow ρuv and a conduction flow. This conduction flow is by definition the heat flow W. Therefore: j[U] = ρuv + W where u denotes the [internal] energy per unit mass. [These authors actually use the symbols E and e to denote internal energy but their notation has been changed here to accord with the notation of the present article. These authors actually use the symbol U to refer to total energy, including kinetic energy of bulk flow.]" This usage is followed also by other writers on non-equilibrium thermodynamics such as Lebon, Jou, and Casas-Vásquez, and de Groot and Mazur. This usage is described by Bailyn as stating the non-convective flow of internal energy, and is listed as his definition number 1, according to the first law of thermodynamics. This usage is also followed by workers in the kinetic theory of gases. 
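In display form, the split of the internal-energy flux quoted above from Glansdorff and Prigogine reads:

```latex
\mathbf{j}[U] \;=\; \rho u \mathbf{v} \;+\; \mathbf{W}
```

where ρuv is the convective (bulk-flow) part, W is the conductive part identified by definition with the heat flow, and u is the internal energy per unit mass.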
This is not the ad hoc definition of "reduced heat flux" of Rolf Haase.
In the case of a flowing system of only one chemical constituent, in the Lagrangian representation, there is no distinction between bulk flow and diffusion of matter. Moreover, the flow of matter is zero into or out of the cell that moves with the local center of mass. In effect, in this description, one is dealing with a system effectively closed to the transfer of matter. But still one can validly talk of a distinction between bulk flow and diffusive flow of internal energy, the latter driven by a temperature gradient within the flowing material, and being defined with respect to the local center of mass of the bulk flow. In this case of a virtually closed system, because of the zero matter transfer, as noted above, one can safely distinguish between transfer of energy as work, and transfer of internal energy as heat.
== See also ==
Laws of thermodynamics
Perpetual motion
Microstate (statistical mechanics) – includes microscopic definitions of internal energy, heat and work
Entropy production
Relativistic heat conduction
== References ==
=== Cited sources ===
Adkins, C. J. (1968/1983). Equilibrium Thermodynamics, (first edition 1968), third edition 1983, Cambridge University Press, ISBN 0-521-25445-0.
Aston, J. G., Fritz, J. J. (1959). Thermodynamics and Statistical Thermodynamics, John Wiley & Sons, New York.
Balian, R. (1991/2007). From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, volume 1, translated by D. ter Haar, J.F. Gregg, Springer, Berlin, ISBN 978-3-540-45469-4.
Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, ISBN 0-88318-797-3.
Born, M. (1949). Natural Philosophy of Cause and Chance, Oxford University Press, London.
Bryan, G. H. (1907). Thermodynamics. An Introductory Treatise dealing mainly with First Principles and their Direct Applications, B. G. Teubner, Leipzig.
Balescu, R. (1997). Statistical Dynamics; Matter out of Equilibrium, Imperial College Press, London, ISBN 978-1-86094-045-3.
Buchdahl, H. A. (1966), The Concepts of Classical Thermodynamics, Cambridge University Press, London.
Callen, H. B. (1960/1985), Thermodynamics and an Introduction to Thermostatistics, (first edition 1960), second edition 1985, John Wiley & Sons, New York, ISBN 0-471-86256-8.
Carathéodory, C. (1909). "Untersuchungen über die Grundlagen der Thermodynamik". Mathematische Annalen. 67 (3): 355–386. doi:10.1007/BF01450409. S2CID 118230148. A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.
Clausius, R. (1850), "Ueber die bewegende Kraft der Wärme und die Gesetze, welche sich daraus für die Wärmelehre selbst ableiten lassen", Annalen der Physik, 79 (4): 368–397, 500–524, Bibcode:1850AnP...155..500C, doi:10.1002/andp.18501550403, hdl:2027/uc1.$b242250. See English Translation: On the Moving Force of Heat, and the Laws regarding the Nature of Heat itself which are deducible therefrom. Phil. Mag. (1851), series 4, 2, 1–21, 102–119. Also available on Google Books.
Crawford, F. H. (1963). Heat, Thermodynamics, and Statistical Physics, Rupert Hart-Davis, London, Harcourt, Brace & World, Inc.
de Groot, S. R., Mazur, P. (1962). Non-equilibrium Thermodynamics, North-Holland, Amsterdam. Reprinted (1984), Dover Publications Inc., New York, ISBN 0486647412.
Denbigh, K. G. (1951). The Thermodynamics of the Steady State, Methuen, London, Wiley, New York.
Denbigh, K. (1954/1981). The Principles of Chemical Equilibrium. With Applications in Chemistry and Chemical Engineering, fourth edition, Cambridge University Press, Cambridge UK, ISBN 0-521-23682-7.
Eckart, C. (1940). The thermodynamics of irreversible processes. I. The simple fluid, Phys. Rev. 58: 267–269.
Fitts, D. D. (1962). Nonequilibrium Thermodynamics. Phenomenological Theory of Irreversible Processes in Fluid Systems, McGraw-Hill, New York.
Glansdorff, P., Prigogine, I., (1971). Thermodynamic Theory of Structure, Stability and Fluctuations, Wiley, London, ISBN 0-471-30280-5.
Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W. F. Heinz, Springer-Verlag, New York.
Haase, R. (1963/1969). Thermodynamics of Irreversible Processes, English translation, Addison-Wesley Publishing, Reading MA.
Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081.
Helmholtz, H. (1847). Ueber die Erhaltung der Kraft. Eine physikalische Abhandlung, G. Reimer (publisher), Berlin, read on 23 July in a session of the Physikalischen Gesellschaft zu Berlin. Reprinted in Helmholtz, H. von (1882), Wissenschaftliche Abhandlungen, Band 1, J. A. Barth, Leipzig. Translated and edited by J. Tyndall, in Scientific Memoirs, Selected from the Transactions of Foreign Academies of Science and from Foreign Journals. Natural Philosophy (1853), volume 7, edited by J. Tyndall, W. Francis, published by Taylor and Francis, London, pp. 114–162, reprinted as volume 7 of Series 7, The Sources of Science, edited by H. Woolf, (1966), Johnson Reprint Corporation, New York, and again in Brush, S. G., The Kinetic Theory of Gases. An Anthology of Classic Papers with Historical Commentary, volume 1 of History of Modern Physical Sciences, edited by N. S. Hall, Imperial College Press, London, ISBN 1-86094-347-0, pp. 89–110.
Kestin, J. (1961). "On intersecting isentropics". Am. J. Phys. 29 (5): 329–331. Bibcode:1961AmJPh..29..329K. doi:10.1119/1.1937763.
Kestin, J. (1966). A Course in Thermodynamics, Blaisdell Publishing Company, Waltham MA.
Kirkwood, J. G., Oppenheim, I. (1961). Chemical Thermodynamics, McGraw-Hill Book Company, New York.
Landsberg, P. T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York.
Landsberg, P. T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, ISBN 0-19-851142-6.
Lebon, G., Jou, D., Casas-Vázquez, J. (2008). Understanding Non-equilibrium Thermodynamics, Springer, Berlin, ISBN 978-3-540-74251-7.
Mandl, F. (1988) [1971]. Statistical Physics (2nd ed.). Chichester·New York·Brisbane·Toronto·Singapore: John Wiley & Sons. ISBN 978-0471915331.
Münster, A. (1970), Classical Thermodynamics, translated by E. S. Halberstadt, Wiley–Interscience, London, ISBN 0-471-62430-6.
Partington, J.R. (1949). An Advanced Treatise on Physical Chemistry, volume 1, Fundamental Principles. The Properties of Gases, Longmans, Green and Co., London.
Pippard, A. B. (1957/1966). Elements of Classical Thermodynamics for Advanced Students of Physics, original publication 1957, reprint 1966, Cambridge University Press, Cambridge UK.
Planck, M.(1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green & Co., London.
Pokrovskii, Vladimir (2020). Thermodynamics of Complex Systems: Principles and applications. IOP Publishing, Bristol, UK. Bibcode:2020tcsp.book.....P.
Prigogine, I. (1947). Étude Thermodynamique des Phénomènes irréversibles, Dunod, Paris, and Desoers, Liège.
Prigogine, I., (1955/1967). Introduction to Thermodynamics of Irreversible Processes, third edition, Interscience Publishers, New York.
Reif, F. (1965). Fundamentals of Statistical and Thermal Physics, McGraw-Hill Book Company, New York.
Tisza, L. (1966). Generalized Thermodynamics, M.I.T. Press, Cambridge MA.
Truesdell, C. A. (1980). The Tragicomical History of Thermodynamics, 1822–1854, Springer, New York, ISBN 0-387-90403-4.
Truesdell, C. A., Muncaster, R. G. (1980). Fundamentals of Maxwell's Kinetic Theory of a Simple Monatomic Gas, Treated as a branch of Rational Mechanics, Academic Press, New York, ISBN 0-12-701350-4.
Tschoegl, N. W. (2000). Fundamentals of Equilibrium and Steady-State Thermodynamics, Elsevier, Amsterdam, ISBN 0-444-50426-5.
== Further reading ==
Goldstein, Martin; Inge F. (1993). The Refrigerator and the Universe. Harvard University Press. ISBN 0-674-75325-9. OCLC 32826343. Chpts. 2 and 3 contain a nontechnical treatment of the first law.
Çengel Y. A.; Boles M. (2007). Thermodynamics: an engineering approach. McGraw-Hill Higher Education. ISBN 978-0-07-125771-8. Chapter 2.
Atkins P. (2007). Four Laws that drive the Universe. OUP Oxford. ISBN 978-0-19-923236-9.
== External links ==
MISN-0-158, The First Law of Thermodynamics (PDF file) by Jerzy Borysowicz for Project PHYSNET.
First law of thermodynamics in the MIT Course Unified Thermodynamics and Propulsion from Prof. Z. S. Spakovszky
The energy industry refers to all of the industries involved in the production and sale of energy, including fuel extraction, manufacturing, refining and distribution. Modern society consumes large amounts of fuel, and the energy industry is a crucial part of the infrastructure and maintenance of society in almost all countries.
In particular, the energy industry comprises:
the fossil fuel industries, which include petroleum industries (oil companies, petroleum refiners, fuel transport and end-user sales at gas stations), coal industries (extraction and processing), and the natural gas industries (natural gas extraction, and coal gas manufacture, as well as distribution and sales);
the electrical power industry, including electricity generation, electric power distribution, and sales;
the nuclear power industry;
the renewable energy industry, comprising alternative energy and sustainable energy companies, including those involved in hydroelectric power, wind power, and solar power generation, and the manufacture, distribution and sale of alternative fuels; and,
traditional energy industry based on the collection and distribution of firewood, the use of which, for cooking and heating, is particularly common in poorer countries.
The increased dependence during the 20th century on carbon-emitting energy sources, such as fossil fuels, and carbon-emitting renewables, such as biomass, means that the energy industry has frequently contributed to pollution and environmental impacts on the economy. Until recently, fossil fuels were the primary source of energy generation in most parts of the world and are a significant contributor to global warming and pollution. Many economies are investing in renewable and sustainable energy to limit global warming and reduce air pollution.
== History ==
The use of energy has been a key factor in the development of human societies, helping them to control and adapt to the environment. Managing the use of energy is inevitable in any functional society. In the industrialized world the development of energy resources has become essential for agriculture, transportation, waste collection, information technology, and communications, which have become prerequisites of a developed society. The increasing use of energy since the Industrial Revolution has also brought with it a number of serious problems, some of which, such as global warming, present potentially grave risks to the world.
In some industries, the word energy is used as a synonym for energy resources, which refer to substances like fuels, petroleum products, and electricity in general. This is because a significant portion of the energy contained in these resources can easily be extracted to serve a useful purpose. After a useful process has taken place, the total energy is conserved. Still, the resource itself is not conserved since a process usually transforms the energy into unusable forms (such as unnecessary or excess heat).
Ever since humanity discovered the various energy resources available in nature, it has been inventing devices, known as machines, that make life more comfortable by using those resources. Thus, although primitive humans knew the utility of fire for cooking food, the invention of devices like gas burners and microwave ovens added new ways of using energy. The trend is the same in any other field of social activity, whether the construction of social infrastructure, the manufacture of fabrics for covering and decoration (for example, textiles), air conditioning, the communication of information, or the movement of people and goods (automobiles).
== Economics ==
Production and consumption of energy resources is very important to the global economy. All economic activity requires energy resources, whether to manufacture goods, provide transportation, or run computers and other machines.
Widespread demand for energy may encourage competing energy utilities and the formation of retail energy markets. Note the presence of the "Energy Marketing and Customer Service" (EMACS) sub-sector.
The energy sector accounts for 4.6% of outstanding leveraged loans, compared with 3.1% a decade ago, while energy bonds make up 15.7% of the $1.3 trillion junk bond market, up from 4.3% over the same period.
== Management ==
Since the cost of energy has become a significant factor in the performance of societies' economies, the management of energy resources has become crucial. Energy management involves using the available energy resources more effectively, that is, with minimum incremental costs. Simple management techniques can often save energy expenditures without incorporating fresh technology. Energy management is most often the practice of using energy more efficiently by eliminating energy wastage or balancing justifiable energy demand with appropriate energy supply. The process couples energy awareness with energy conservation.
== Classifications ==
=== Government ===
The United Nations developed the International Standard Industrial Classification, which is a list of economic and social classifications. There is no distinct classification for an energy industry, because the classification system is based on activities, products, and expenditures according to purpose.
Countries in North America use the North American Industry Classification System (NAICS). The NAICS sectors No. 21 and No. 22 (mining and utilities) might roughly define the energy industry in North America. This classification is used by the U.S. Securities and Exchange Commission.
=== Financial market ===
The Global Industry Classification Standard used by Morgan Stanley defines the energy industry as comprising companies primarily working with oil, gas, coal and consumable fuels, excluding companies working with certain industrial gases.
== Environmental impact ==
Government encouragement in the form of subsidies and tax incentives for energy-conservation efforts has increasingly fostered the view of conservation as a major function of the energy industry: saving an amount of energy provides economic benefits almost identical to generating that same amount of energy. This is compounded by the fact that the economics of delivering energy tend to be priced for capacity as opposed to average usage. One of the purposes of a smart grid infrastructure is to smooth out demand so that capacity and demand curves align more closely.
Some parts of the energy industry generate considerable pollution, including toxic and greenhouse gases from fuel combustion, nuclear waste from the generation of nuclear power, and oil spillages as a result of petroleum extraction. Government regulations to internalize these externalities form an increasing part of doing business, and the trading of carbon credits and pollution credits on the free market may also result in energy-saving and pollution-control measures becoming even more important to energy providers.
Consumption of energy resources (e.g. turning on a light) requires resources and has an effect on the environment. Many electric power plants burn coal, oil or natural gas to generate electricity for energy needs. While burning these fossil fuels produces a readily available and instantaneous supply of electricity, it also generates air pollutants including carbon dioxide (CO2), sulfur dioxide and trioxide (SOx) and nitrogen oxides (NOx). Carbon dioxide is an important greenhouse gas, known to be responsible, along with methane, nitrous oxide, and fluorinated gases, for the rapid increase in global warming since the Industrial Revolution. Global temperatures recorded in the 20th century are significantly higher than those from thousands of years ago, inferred from ice cores in Arctic regions. Burning fossil fuels for electricity generation also releases trace metals such as beryllium, cadmium, chromium, copper, manganese, mercury, nickel, and silver into the environment, which also act as pollutants.
The large-scale use of renewable energy technologies would "greatly mitigate or eliminate a wide range of environmental and human health impacts of energy use". Renewable energy technologies include biofuels, solar heating and cooling, hydroelectric power, solar power, and wind power. Energy conservation and the efficient use of energy would also help.
In addition it is argued that there is also the potential to develop a more efficient energy sector. This can be done by:
Fuel switching in the power sector from coal to natural gas;
Power plant optimisation and other measures to improve the efficiency of existing CCGT power plants;
Combined heat and power (CHP), from micro-scale residential to large-scale industrial;
Waste heat recovery
Best available technology (BAT) offers supply-side efficiency levels far higher than global averages. The relative benefits of gas compared to coal are influenced by the development of increasingly efficient energy production methods. According to an impact assessment carried out for the European Commission, the energy efficiency of newly built coal-fired plants has increased to 46–49%, as compared with coal plants built before the 1990s (32–40%). At the same time, gas can reach 58–59% efficiency with the best available technology, while combined heat and power can offer efficiency rates of 80–90%.
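As a back-of-the-envelope illustration of the efficiency figures above, the fuel energy needed per unit of electricity scales inversely with plant efficiency. This toy calculation ignores differences in fuel carbon content and plant operation:

```python
def relative_fuel_use(eff_a: float, eff_b: float) -> float:
    """Fuel energy burned by a plant with efficiency eff_a, relative to one
    with efficiency eff_b, for the same electrical output."""
    return (1.0 / eff_a) / (1.0 / eff_b)

# Figures quoted above: modern coal ~46-49% efficient, best-available gas ~58-59%
ratio = relative_fuel_use(0.46, 0.58)
# ratio is about 1.26: the coal plant burns roughly 26% more fuel energy per kWh
```

The actual emissions advantage of gas is larger still, since gas also releases less CO2 per unit of fuel energy than coal.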
== Politics ==
Since energy now plays an essential role in industrial societies, the ownership and control of energy resources plays an increasing role in politics. At the national level, governments seek to influence the sharing (distribution) of energy resources among various sections of the society through pricing mechanisms; or even who owns resources within their borders. They may also seek to influence the use of energy by individuals and business in an attempt to tackle environmental issues.
The most recent international political controversy regarding energy resources was in the context of the Iraq Wars. Some political analysts maintain that the hidden reason for both the 1991 and 2003 wars can be traced to strategic control of international energy resources. Others counter this analysis on economic grounds: according to the latter group of analysts, the U.S. spent about $336 billion in Iraq, as compared with a background current value of $25 billion per year for the entire U.S. oil import dependence.
=== Policy ===
Energy policy is the manner in which a given entity (often governmental) has decided to address issues of energy development, including energy production, distribution and consumption. The attributes of energy policy may include legislation, international treaties, incentives to investment, guidelines for energy conservation, taxation and other public policy techniques.
=== Security ===
Energy security is the intersection of national security and the availability of natural resources for energy consumption. Access to cheap energy has become essential to the functioning of modern economies. However, the uneven distribution of energy supplies among countries has led to significant vulnerabilities. Threats to energy security include the political instability of several energy-producing countries, the manipulation of energy supplies, competition over energy sources, attacks on supply infrastructure, accidents and natural disasters, the funding of foreign dictators, rising terrorism, and the reliance of dominant countries on foreign oil supplies. The limited supplies, uneven distribution, and rising costs of fossil fuels such as oil and gas create a need to change to more sustainable energy sources in the foreseeable future. Given current dependence on oil and the approaching peak of oil production, economies and societies will begin to feel the decline of the resource they have come to depend on. Energy security has become one of the leading issues in the world today as oil and other resources have become vital to the world's people, and with production rates decreasing and the production peak nearing, there is growing pressure to protect the resources that remain. Advances in renewable resources, such as geothermal, solar, wind, and hydroelectric power, have reduced some of the pressure on companies that produce the world's oil. Although these are not all the current and possible options for the world to turn to as oil depletes, the most critical issue is protecting these vital resources from future threats. These new resources will become more valuable as the price of exporting and importing oil increases with demand.
== Development ==
Producing energy to sustain human needs is an essential social activity, and a great deal of effort goes into the activity. While most of that effort is directed at increasing the production of electricity and oil, newer ways of producing usable energy resources from the available energy resources are being explored. One such effort is to explore means of producing hydrogen fuel from water. Though hydrogen use is environmentally friendly, its production requires energy, and existing technologies for making it are not very efficient. Research is underway to explore the enzymatic decomposition of biomass.
Other forms of conventional energy resources are also being used in new ways. Coal gasification and liquefaction are recent technologies that are becoming attractive after the realization that oil reserves, at present consumption rates, may be rather short lived. See alternative fuels.
Energy is the subject of significant research activities globally. For example, the UK Energy Research Centre is the focal point for UK energy research while the European Union has many technology programmes as well as a platform for engaging social science and humanities within energy research.
== Transportation ==
All societies require materials and food to be transported over distances, generally against some force of friction. Since application of force over distance requires the presence of a source of usable energy, such sources are of great worth in society.
While energy resources are an essential ingredient for all modes of transportation in society, the transportation of energy resources is becoming equally important. Energy resources are frequently located far from the place where they are consumed. Therefore, their transportation is always in question. Some energy resources like liquid or gaseous fuels are transported using tankers or pipelines, while electricity transportation invariably requires a network of grid cables. The transportation of energy, whether by tanker, pipeline, or transmission line, poses challenges for scientists and engineers, policy makers, and economists to make it more risk-free and efficient.
== Crisis ==
Economic and political instability can lead to an energy crisis. Notable oil crises are the 1973 oil crisis and the 1979 oil crisis. The advent of peak oil, the point in time when the maximum rate of global petroleum extraction is reached, will likely precipitate another energy crisis.
== Mergers and acquisitions ==
Between 1985 and 2018 there were around 69,932 deals in the energy sector, with an overall value of US$9,578 billion. The most active year was 2010, with about 3,761 deals. In terms of value, 2007 was the strongest year (US$684bn), followed by a steep decline until 2009 (−55.8%).
Here is a list of the top 10 deals in history in the energy sector:
== See also ==
== References ==
== Further reading ==
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
Bradley, Robert (2004). Energy: The Master Resource. Kendall Hunt. p. 252. ISBN 978-0757511691.
Fouquet, Roger, and Peter J.G. Pearson. "Seven Centuries of Energy Services: The Price and Use of Light in the United Kingdom (1300-2000)". Energy Journal 27.1 (2006).
Gales, Ben, et al. "North versus South: Energy transition and energy intensity in Europe over 200 years". European Review of Economic History 11.2 (2007): 219–253.
Nye, David E. Consuming power: A social history of American energies (MIT Press, 1999)
Pratt, Joseph A. Exxon: Transforming Energy, 1973–2005 (2013) 600pp
Smil, Vaclav (1994). Energy in World History. Westview Press. ISBN 978-0-8133-1902-5.
Stern, David I. "The role of energy in economic growth". Annals of the New York Academy of Sciences 1219.1 (2011): 26-51.
Warr, Benjamin, et al. "Energy use and economic development: A comparative analysis of useful work supply in Austria, Japan, the United Kingdom and the US during 100 years of economic growth". Ecological Economics 69.10 (2010): 1904–1917.
Yergin, Daniel (2011). The Quest: Energy, Security, and the Remaking of the Modern World. Penguin. p. 816. ISBN 978-1594202834.
The partial least squares path modeling or partial least squares structural equation modeling (PLS-PM, PLS-SEM) is a method for structural equation modeling that allows estimation of complex cause-effect relationships in path models with latent variables.
== Overview ==
PLS-PM is a component-based estimation approach that differs from covariance-based structural equation modeling. Unlike covariance-based approaches to structural equation modeling, PLS-PM does not fit a common factor model to the data; rather, it fits a composite model. In doing so, it maximizes the amount of variance explained (though what this means from a statistical point of view is unclear, and PLS-PM users do not agree on how this goal might be achieved).
In addition, by an adjustment PLS-PM is capable of consistently estimating certain parameters of common factor models as well, through an approach called consistent PLS-PM (PLSc-PM). A further related development is factor-based PLS-PM (PLSF), a variation of which employs PLSc-PM as a basis for the estimation of the factors in common factor models; this method significantly increases the number of common factor model parameters that can be estimated, effectively bridging the gap between classic PLS-PM and covariance‐based structural equation modeling.
The PLS-PM structural equation model is composed of two sub-models: the measurement models and the structural model. The measurement models represent the relationships between the observed data and the latent variables. The structural model represents the relationships between the latent variables.
An iterative algorithm solves the structural equation model by estimating the latent variables by using the measurement and structural model in alternating steps, hence the procedure's name, partial. The measurement model estimates the latent variables as a weighted sum of its manifest variables. The structural model estimates the latent variables by means of simple or multiple linear regression between the latent variables estimated by the measurement model. This algorithm repeats itself until convergence is achieved.
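The alternating measurement/structural estimation described above can be sketched in a few lines. The following is a simplified illustration, not a reference implementation: it assumes standardized indicators, Mode A outer weights, and a centroid-like inner scheme, and it omits the weighting-scheme options and convergence refinements of real PLS-PM software (the function name `pls_pm` is ours).

```python
import numpy as np

def pls_pm(blocks, path, n_iter=300, tol=1e-8):
    """Toy PLS-PM score estimation: Mode A outer weights, centroid inner scheme.

    blocks : list of (n, p_k) arrays of standardized manifest variables,
             one block per latent variable.
    path   : (K, K) 0/1 matrix; path[i, j] = 1 if latent j points to latent i.
    Returns an (n, K) matrix of standardized latent variable scores.
    """
    n = blocks[0].shape[0]
    adj = np.sign(path + path.T)  # symmetric: which latents are connected
    # Initialize each score as the standardized sum of its block's indicators
    Y = np.column_stack([b.sum(axis=1) for b in blocks])
    Y = (Y - Y.mean(axis=0)) / Y.std(axis=0)
    for _ in range(n_iter):
        Y_old = Y
        # Inner step (centroid scheme): the proxy for latent k is the signed
        # sum of the scores of the latents connected to it
        signs = adj * np.sign(np.corrcoef(Y.T))
        inner = Y @ signs
        inner = (inner - inner.mean(axis=0)) / inner.std(axis=0)
        # Outer step (Mode A): each indicator's weight is its covariance with
        # the inner proxy; the new score is the weighted sum of indicators
        cols = []
        for k, b in enumerate(blocks):
            w = b.T @ inner[:, k] / n
            s = b @ w
            cols.append((s - s.mean()) / s.std())
        Y = np.column_stack(cols)
        if np.max(np.abs(np.abs(Y) - np.abs(Y_old))) < tol:
            break  # scores are determined only up to sign, hence abs()
    return Y
```

The "partial" character is visible in the loop: each step fixes one side (the scores or the proxies) while re-estimating the other with ordinary least-squares-style weights.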
PLS is viewed critically by several methodological researchers. A major point of contention has been the claim that PLS-PM can always be used with very small sample sizes. A recent study suggests that this claim is generally unjustified, and proposes two methods for minimum sample size estimation in PLS-PM. Another point of contention is the ad hoc way in which PLS-PM has been developed and the lack of analytic proofs to support its main feature: the sampling distribution of PLS-PM weights. However, PLS-PM is still considered preferable (over covariance‐based structural equation modeling) when it is unknown whether the data's nature is common factor- or composite-based.
== See also ==
Partial least squares regression
Principal component analysis
Structural equation modeling
== References ==
In statistics, a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. Mixture models are used for clustering, under the name model-based clustering, and also for density estimation.
Mixture models should not be confused with models for compositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where the total size of the population has been normalized to 1.
== Structure ==
=== General mixture model ===
A typical finite-dimensional mixture model is a hierarchical model consisting of the following components:
N random variables that are observed, each distributed according to a mixture of K components, with the components belonging to the same parametric family of distributions (e.g., all normal, all Zipfian, etc.) but with different parameters
N random latent variables specifying the identity of the mixture component of each observation, each distributed according to a K-dimensional categorical distribution
A set of K mixture weights, which are probabilities that sum to 1.
A set of K parameters, each specifying the parameter of the corresponding mixture component. In many cases, each "parameter" is actually a set of parameters. For example, if the mixture components are Gaussian distributions, there will be a mean and variance for each component. If the mixture components are categorical distributions (e.g., when each observation is a token from a finite alphabet of size V), there will be a vector of V probabilities summing to 1.
In addition, in a Bayesian setting, the mixture weights and parameters will themselves be random variables, and prior distributions will be placed over the variables. In such a case, the weights are typically viewed as a K-dimensional random vector drawn from a Dirichlet distribution (the conjugate prior of the categorical distribution), and the parameters will be distributed according to their respective conjugate priors.
Mathematically, a basic parametric mixture model can be described as follows:
{\displaystyle {\begin{array}{lcl}K&=&{\text{number of mixture components}}\\N&=&{\text{number of observations}}\\\theta _{i=1\dots K}&=&{\text{parameter of distribution of observation associated with component }}i\\\phi _{i=1\dots K}&=&{\text{mixture weight, i.e., prior probability of a particular component }}i\\{\boldsymbol {\phi }}&=&K{\text{-dimensional vector composed of all the individual }}\phi _{1\dots K}{\text{; must sum to 1}}\\z_{i=1\dots N}&=&{\text{component of observation }}i\\x_{i=1\dots N}&=&{\text{observation }}i\\F(x|\theta )&=&{\text{probability distribution of an observation, parametrized on }}\theta \\z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}|z_{i=1\dots N}&\sim &F(\theta _{z_{i}})\end{array}}}
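The generative process above (draw a component z_i from Categorical(φ), then draw x_i from F(θ_{z_i})) can be sketched in Python. Here F is taken to be Gaussian, and the weights and component parameters are illustrative assumptions:

```python
import random

# Illustrative mixture specification (assumed values, not from any dataset):
# K = 2 components with weights phi and Gaussian parameters theta = (mu, sigma).
phi = [0.3, 0.7]                     # mixture weights; must sum to 1
theta = [(-2.0, 0.5), (3.0, 1.0)]    # (mean, std dev) per component

def sample_mixture(n, rng=random):
    """Draw n observations: z_i ~ Categorical(phi), x_i ~ N(theta[z_i])."""
    samples = []
    for _ in range(n):
        z = rng.choices(range(len(phi)), weights=phi)[0]  # latent component
        mu, sigma = theta[z]
        samples.append(rng.gauss(mu, sigma))
    return samples

xs = sample_mixture(1000)
```

Note that the latent z_i is discarded after sampling; an observer of xs alone faces exactly the inference problem that mixture models address.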
In a Bayesian setting, all parameters are associated with random variables, as follows:
{\displaystyle {\begin{array}{lcl}K,N&=&{\text{as above}}\\\theta _{i=1\dots K},\phi _{i=1\dots K},{\boldsymbol {\phi }}&=&{\text{as above}}\\z_{i=1\dots N},x_{i=1\dots N},F(x|\theta )&=&{\text{as above}}\\\alpha &=&{\text{shared hyperparameter for component parameters}}\\\beta &=&{\text{shared hyperparameter for mixture weights}}\\H(\theta |\alpha )&=&{\text{prior probability distribution of component parameters, parametrized on }}\alpha \\\theta _{i=1\dots K}&\sim &H(\theta |\alpha )\\{\boldsymbol {\phi }}&\sim &\operatorname {Symmetric-Dirichlet} _{K}(\beta )\\z_{i=1\dots N}|{\boldsymbol {\phi }}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}|z_{i=1\dots N},\theta _{i=1\dots K}&\sim &F(\theta _{z_{i}})\end{array}}}
This characterization uses F and H to describe arbitrary distributions over observations and parameters, respectively. Typically H will be the conjugate prior of F. The two most common choices of F are Gaussian aka "normal" (for real-valued observations) and categorical (for discrete observations). Other common possibilities for the distribution of the mixture components are:
Binomial distribution, for the number of "positive occurrences" (e.g., successes, yes votes, etc.) given a fixed number of total occurrences
Multinomial distribution, similar to the binomial distribution, but for counts of multi-way occurrences (e.g., yes/no/maybe in a survey)
Negative binomial distribution, for binomial-type observations but where the quantity of interest is the number of failures before a given number of successes occurs
Poisson distribution, for the number of occurrences of an event in a given period of time, for an event that is characterized by a fixed rate of occurrence
Exponential distribution, for the time before the next event occurs, for an event that is characterized by a fixed rate of occurrence
Log-normal distribution, for positive real numbers that are assumed to grow exponentially, such as incomes or prices
Multivariate normal distribution (aka multivariate Gaussian distribution), for vectors of correlated outcomes that are individually Gaussian-distributed
Multivariate Student's t-distribution, for vectors of heavy-tailed correlated outcomes
A vector of Bernoulli-distributed values, corresponding, e.g., to a black-and-white image, with each value representing a pixel; see the handwriting-recognition example below
=== Specific examples ===
==== Gaussian mixture model ====
A typical non-Bayesian Gaussian mixture model looks like this:
{\displaystyle {\begin{array}{lcl}K,N&=&{\text{as above}}\\\phi _{i=1\dots K},{\boldsymbol {\phi }}&=&{\text{as above}}\\z_{i=1\dots N},x_{i=1\dots N}&=&{\text{as above}}\\\theta _{i=1\dots K}&=&\{\mu _{i=1\dots K},\sigma _{i=1\dots K}^{2}\}\\\mu _{i=1\dots K}&=&{\text{mean of component }}i\\\sigma _{i=1\dots K}^{2}&=&{\text{variance of component }}i\\z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\mathcal {N}}(\mu _{z_{i}},\sigma _{z_{i}}^{2})\end{array}}}
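The density this model implies, p(x) = Σ_i φ_i N(x; μ_i, σ_i²), can be evaluated directly. All parameter values in the sketch below are assumptions chosen for illustration:

```python
import math

def gmm_density(x, phi, mu, sigma2):
    """Density of a 1-D Gaussian mixture: sum_i phi_i * N(x; mu_i, sigma2_i)."""
    total = 0.0
    for w, m, v in zip(phi, mu, sigma2):
        total += w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    return total

# Two assumed components: weights 0.4/0.6, means 0/4, variances 1/2.
phi, mu, sigma2 = [0.4, 0.6], [0.0, 4.0], [1.0, 2.0]
p = gmm_density(0.0, phi, mu, sigma2)
```

Because the weights sum to 1 and each component integrates to 1, the mixture density itself integrates to 1, as a quick numerical quadrature over the sketch above confirms.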
A Bayesian version of a Gaussian mixture model is as follows:
{\displaystyle {\begin{array}{lcl}K,N&=&{\text{as above}}\\\phi _{i=1\dots K},{\boldsymbol {\phi }}&=&{\text{as above}}\\z_{i=1\dots N},x_{i=1\dots N}&=&{\text{as above}}\\\theta _{i=1\dots K}&=&\{\mu _{i=1\dots K},\sigma _{i=1\dots K}^{2}\}\\\mu _{i=1\dots K}&=&{\text{mean of component }}i\\\sigma _{i=1\dots K}^{2}&=&{\text{variance of component }}i\\\mu _{0},\lambda ,\nu ,\sigma _{0}^{2}&=&{\text{shared hyperparameters}}\\\mu _{i=1\dots K}&\sim &{\mathcal {N}}(\mu _{0},\lambda \sigma _{i}^{2})\\\sigma _{i=1\dots K}^{2}&\sim &\operatorname {Inverse-Gamma} (\nu ,\sigma _{0}^{2})\\{\boldsymbol {\phi }}&\sim &\operatorname {Symmetric-Dirichlet} _{K}(\beta )\\z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\mathcal {N}}(\mu _{z_{i}},\sigma _{z_{i}}^{2})\end{array}}}
==== Multivariate Gaussian mixture model ====
A Bayesian Gaussian mixture model is commonly extended to fit a vector of unknown parameters (denoted in bold), or multivariate normal distributions. In a multivariate distribution (i.e. one modelling a vector {\displaystyle {\boldsymbol {x}}} with N random variables) one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given by
{\displaystyle p({\boldsymbol {\theta }})=\sum _{i=1}^{K}\phi _{i}{\mathcal {N}}({\boldsymbol {\mu }}_{i},{\boldsymbol {\Sigma }}_{i})}
where the ith vector component is characterized by normal distributions with weights {\displaystyle \phi _{i}}, means {\displaystyle {\boldsymbol {\mu }}_{i}} and covariance matrices {\displaystyle {\boldsymbol {\Sigma }}_{i}}. To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distribution {\displaystyle p({\boldsymbol {x|\theta }})} of the data {\displaystyle {\boldsymbol {x}}} conditioned on the parameters {\displaystyle {\boldsymbol {\theta }}} to be estimated. With this formulation, the posterior distribution {\displaystyle p({\boldsymbol {\theta |x}})} is also a Gaussian mixture model of the form
{\displaystyle p({\boldsymbol {\theta |x}})=\sum _{i=1}^{K}{\tilde {\phi }}_{i}{\mathcal {N}}({\boldsymbol {{\tilde {\mu }}_{i}}},{\boldsymbol {\tilde {\Sigma }}}_{i})}
with new parameters {\displaystyle {\tilde {\phi }}_{i},{\boldsymbol {\tilde {\mu }}}_{i}} and {\displaystyle {\boldsymbol {\tilde {\Sigma }}}_{i}} that are updated using the EM algorithm.
Although EM-based parameter updates are well-established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimates of the random variable {\displaystyle {\boldsymbol {\theta }}} may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution.
Such distributions are useful for modelling patch-wise shapes of images and clusters, for example. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matrices {\displaystyle {\boldsymbol {\Sigma }}_{i}}. One Gaussian distribution of the set is fit to each patch (usually of size 8×8 pixels) in the image. Notably, any distribution of points around a cluster (see k-means) may be accurately modelled given enough Gaussian components, but rarely more than K=20 components are needed to accurately model a given image distribution or cluster of data.
==== Categorical mixture model ====
A typical non-Bayesian mixture model with categorical observations looks like this:
{\displaystyle K,N:} as above
{\displaystyle \phi _{i=1\dots K},{\boldsymbol {\phi }}:} as above
{\displaystyle z_{i=1\dots N},x_{i=1\dots N}:} as above
{\displaystyle V:} dimension of categorical observations, e.g., size of word vocabulary
{\displaystyle \theta _{i=1\dots K,j=1\dots V}:} probability for component {\displaystyle i} of observing item {\displaystyle j}
{\displaystyle {\boldsymbol {\theta }}_{i=1\dots K}:} vector of dimension {\displaystyle V,} composed of {\displaystyle \theta _{i,1\dots V};} must sum to 1
The random variables:
{\displaystyle {\begin{array}{lcl}z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\text{Categorical}}({\boldsymbol {\theta }}_{z_{i}})\end{array}}}
A typical Bayesian mixture model with categorical observations looks like this:
{\displaystyle K,N:} as above
{\displaystyle \phi _{i=1\dots K},{\boldsymbol {\phi }}:} as above
{\displaystyle z_{i=1\dots N},x_{i=1\dots N}:} as above
{\displaystyle V:} dimension of categorical observations, e.g., size of word vocabulary
{\displaystyle \theta _{i=1\dots K,j=1\dots V}:} probability for component {\displaystyle i} of observing item {\displaystyle j}
{\displaystyle {\boldsymbol {\theta }}_{i=1\dots K}:} vector of dimension {\displaystyle V,} composed of {\displaystyle \theta _{i,1\dots V};} must sum to 1
{\displaystyle \alpha :} shared concentration hyperparameter of {\displaystyle {\boldsymbol {\theta }}} for each component
{\displaystyle \beta :} concentration hyperparameter of {\displaystyle {\boldsymbol {\phi }}}
The random variables:
{\displaystyle {\begin{array}{lcl}{\boldsymbol {\phi }}&\sim &\operatorname {Symmetric-Dirichlet} _{K}(\beta )\\{\boldsymbol {\theta }}_{i=1\dots K}&\sim &{\text{Symmetric-Dirichlet}}_{V}(\alpha )\\z_{i=1\dots N}&\sim &\operatorname {Categorical} ({\boldsymbol {\phi }})\\x_{i=1\dots N}&\sim &{\text{Categorical}}({\boldsymbol {\theta }}_{z_{i}})\end{array}}}
== Examples ==
=== A financial model ===
Financial returns often behave differently in normal situations and during crisis times. A mixture model for return data seems reasonable. The model is sometimes a jump-diffusion model, or a mixture of two normal distributions. See Financial economics § Challenges and criticism and Financial risk management § Banking for further context.
=== House prices ===
Assume that we observe the prices of N different houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e.g., three-bedroom house in moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model with K different components, each distributed as a normal distribution with unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e.g., using the expectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to grow exponentially, a log-normal distribution might actually be a better model than a normal distribution.)
=== Topics in a document ===
Assume that a document is composed of N different words from a total vocabulary of size V, where each word corresponds to one of K possible topics. The distribution of such words could be modelled as a mixture of K different V-dimensional categorical distributions. A model of this sort is commonly termed a topic model. Note that expectation maximization applied to such a model will typically fail to produce realistic results, due (among other things) to the excessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results. Typically two sorts of additional components are added to the model:
A prior distribution is placed over the parameters describing the topic distributions, using a Dirichlet distribution with a concentration parameter that is set significantly below 1, so as to encourage sparse distributions (where only a small number of words have significantly non-zero probabilities).
Some sort of additional constraint is placed over the topic identities of words, to take advantage of natural clustering.
For example, a Markov chain could be placed on the topic identities (i.e., the latent variables specifying the mixture component of each observation), corresponding to the fact that nearby words belong to similar topics. (This results in a hidden Markov model, specifically one where a prior distribution is placed over state transitions that favors transitions that stay in the same state.)
Another possibility is the latent Dirichlet allocation model, which divides up the words into D different documents and assumes that in each document only a small number of topics occur with any frequency.
=== Handwriting recognition ===
The following example is based on an example in Christopher M. Bishop, Pattern Recognition and Machine Learning.
Imagine that we are given an N×N black-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model with {\displaystyle K=10} different components, where each component is a vector of size {\displaystyle N^{2}} of Bernoulli distributions (one per pixel). Such a model can be trained with the expectation-maximization algorithm on an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability.
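Assuming the per-pixel Bernoulli parameters have already been trained, recognizing a new image reduces to picking the component with the highest weighted likelihood. A minimal sketch with made-up 2×2 "images" and hypothetical parameters:

```python
import math

def log_likelihood(pixels, bern_params):
    """Log-probability of a binary pixel vector under one Bernoulli component."""
    ll = 0.0
    for x, p in zip(pixels, bern_params):
        ll += math.log(p) if x else math.log(1.0 - p)
    return ll

def classify(pixels, components, weights):
    """Return the index of the component maximizing weight * likelihood."""
    scores = [math.log(w) + log_likelihood(pixels, c)
              for w, c in zip(weights, components)]
    return scores.index(max(scores))

# Two hypothetical "digit" components over 4 pixels (a 2x2 image), equal weights.
components = [[0.9, 0.9, 0.1, 0.1],   # component 0: top pixels usually on
              [0.1, 0.1, 0.9, 0.9]]   # component 1: bottom pixels usually on
weights = [0.5, 0.5]
digit = classify([1, 1, 0, 0], components, weights)
```

Working in log space avoids numerical underflow, which matters once there are N² factors per image rather than four.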
=== Assessing projectile accuracy (a.k.a. circular error probable, CEP) ===
Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ, for example shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model. Further, a well-known measure of accuracy for a group of projectiles is the circular error probable (CEP), which is the number R such that, on average, half of the group of projectiles falls within the circle of radius R about the target point. The mixture model can be used to determine (or estimate) the value R, since it properly captures the different types of projectiles.
=== Direct and indirect applications ===
The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component.
In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibilities. For example, a mixture of two normal distributions with different means may result in a density with two modes, which is not modeled by standard parametric distributions. Another example is given by the possibility of mixture distributions to model fatter tails than the basic Gaussian ones, so as to be a candidate for modeling more extreme events.
=== Predictive Maintenance ===
Mixture model-based clustering is also predominantly used to identify the state of a machine in predictive maintenance. Density plots are used to analyze the density of high-dimensional features. If multimodal densities are observed, then it is assumed that a finite set of densities is formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k groups, where k represents each state of the machine. The machine state can be a normal state, power-off state, or faulty state. Each formed cluster can be diagnosed using techniques such as spectral analysis. In recent years, this approach has also been widely used in other areas, such as early fault detection.
=== Fuzzy image segmentation ===
In image processing and computer vision, traditional image segmentation models often assign to one pixel only one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain "ownership" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e.g., phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods.
=== Point set registration ===
Probabilistic mixture models such as Gaussian mixture models (GMM) are used to resolve point set registration problems in image processing and computer vision. For pair-wise point set registration, one point set is regarded as the centroids of the mixture model, and the other point set is regarded as data points (observations). State-of-the-art methods include coherent point drift (CPD) and Student's t-distribution mixture models (TMM). Recent research demonstrates the superiority of hybrid mixture models (e.g. combining the Student's t-distribution and the Watson distribution/Bingham distribution to model spatial positions and axis orientations separately) compared to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity.
== Identifiability ==
Identifiability refers to the existence of a unique characterization for any one of the models in the class (family) being considered. Estimation procedures may not be well-defined and asymptotic theory may not hold if a model is not identifiable.
=== Example ===
Let J be the class of all binomial distributions with n = 2. Then a mixture of two members of J would have
{\displaystyle {\begin{aligned}p_{0}&=\pi {\left(1-\theta _{1}\right)}^{2}+\left(1-\pi \right){\left(1-\theta _{2}\right)}^{2}\\[1ex]p_{1}&=2\pi \theta _{1}\left(1-\theta _{1}\right)+2\left(1-\pi \right)\theta _{2}\left(1-\theta _{2}\right)\end{aligned}}}
and p2 = 1 − p0 − p1. Clearly, given p0 and p1, it is not possible to determine the above mixture model uniquely, as there are three parameters (π, θ1, θ2) to be determined.
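This non-identifiability is easy to check numerically: relabelling the components, i.e. replacing (π, θ1, θ2) by (1 − π, θ2, θ1), yields the same (p0, p1, p2), so two distinct parameter triples (the values below are chosen purely for illustration) describe one and the same mixture:

```python
def binom2_mixture(pi, t1, t2):
    """(p0, p1, p2) for a mixture of two Binomial(2, theta) distributions."""
    p0 = pi * (1 - t1) ** 2 + (1 - pi) * (1 - t2) ** 2
    p1 = 2 * pi * t1 * (1 - t1) + 2 * (1 - pi) * t2 * (1 - t2)
    return (p0, p1, 1 - p0 - p1)

a = binom2_mixture(0.3, 0.2, 0.8)   # (pi, theta1, theta2)
b = binom2_mixture(0.7, 0.8, 0.2)   # relabelled components: the same mixture
```

More fundamentally, three free parameters are mapped onto only two degrees of freedom (p0, p1), so even beyond relabelling a continuum of triples yields the same distribution.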
=== Definition ===
Consider a mixture of parametric distributions of the same class. Let
{\displaystyle J=\{f(\cdot ;\theta ):\theta \in \Omega \}}
be the class of all component distributions. Then the convex hull K of J defines the class of all finite mixture of distributions in J:
{\displaystyle K=\left\{p(\cdot ):p(\cdot )=\sum _{i=1}^{n}a_{i}f_{i}(\cdot ;\theta _{i}),a_{i}>0,\sum _{i=1}^{n}a_{i}=1,f_{i}(\cdot ;\theta _{i})\in J\ \forall i,n\right\}}
K is said to be identifiable if all its members are unique, that is, given two members p and p′ in K, being mixtures of k distributions and k′ distributions respectively in J, we have p = p′ if and only if, first of all, k = k′ and secondly we can reorder the summations such that ai = ai′ and fi = fi′ for all i.
== Parameter estimation and system identification ==
Parametric mixture models are often used when we know the distribution Y and we can sample from X, but we would like to determine the ai and θi values. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations.
It is common to think of probability mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions.
A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such as expectation maximization (EM) or maximum a posteriori estimation (MAP). Generally these methods consider separately the questions of system identification and parameter estimation; methods to determine the number and functional form of components within a mixture are distinguished from methods to estimate the corresponding parameter values. Some notable departures are the graphical methods as outlined in Tarter and Lock and more recently minimum message length (MML) techniques such as Figueiredo and Jain and to some extent the moment matching pattern analysis routines suggested by McWilliam and Loh (2009).
=== Expectation maximization (EM) ===
Expectation maximization (EM) is seemingly the most popular technique used to determine the parameters of a mixture with an a priori given number of components. This is a particular way of implementing maximum likelihood estimation for this problem. EM is of particular appeal for finite normal mixtures where closed-form expressions are possible such as in the following iterative algorithm by Dempster et al. (1977)
{\displaystyle w_{s}^{(j+1)}={\frac {1}{N}}\sum _{t=1}^{N}h_{s}^{(j)}(t)}
{\displaystyle \mu _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)x^{(t)}}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}}}
{\displaystyle \Sigma _{s}^{(j+1)}={\frac {\sum _{t=1}^{N}h_{s}^{(j)}(t)[x^{(t)}-\mu _{s}^{(j+1)}][x^{(t)}-\mu _{s}^{(j+1)}]^{\top }}{\sum _{t=1}^{N}h_{s}^{(j)}(t)}}}
with the posterior probabilities
{\displaystyle h_{s}^{(j)}(t)={\frac {w_{s}^{(j)}p_{s}(x^{(t)};\mu _{s}^{(j)},\Sigma _{s}^{(j)})}{\sum _{i=1}^{n}w_{i}^{(j)}p_{i}(x^{(t)};\mu _{i}^{(j)},\Sigma _{i}^{(j)})}}.}
Thus on the basis of the current estimate for the parameters, the conditional probability for a given observation x(t) being generated from state s is determined for each t = 1, …, N ; N being the sample size. The parameters are then updated such that the new component weights correspond to the average conditional probability and each component mean and covariance is the component specific weighted average of the mean and covariance of the entire sample.
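The iteration above can be sketched for the one-dimensional case. The synthetic data and the starting values below are assumptions for illustration, not part of the original algorithm:

```python
import math
import random

def em_gmm_1d(data, k, iters=50):
    """EM for a 1-D Gaussian mixture, following the w/mu/sigma^2 updates above."""
    w = [1.0 / k] * k
    lo, hi = min(data), max(data)
    mu = [lo + (hi - lo) * (s + 0.5) / k for s in range(k)]  # spread initial means
    var = [1.0] * k
    for _ in range(iters):
        # E-step: posterior h_s(t) for each observation t and component s
        h = []
        for x in data:
            ps = [w[s] * math.exp(-(x - mu[s]) ** 2 / (2 * var[s]))
                  / math.sqrt(2 * math.pi * var[s]) for s in range(k)]
            z = sum(ps)
            h.append([p / z for p in ps])
        # M-step: component weights, means, and variances reweighted by h
        for s in range(k):
            hs = [h[t][s] for t in range(len(data))]
            tot = sum(hs)
            w[s] = tot / len(data)
            mu[s] = sum(hst * x for hst, x in zip(hs, data)) / tot
            var[s] = sum(hst * (x - mu[s]) ** 2 for hst, x in zip(hs, data)) / tot
    return w, mu, var

# Synthetic two-component data (assumed): 200 draws each from N(0,1) and N(5,1)
rng = random.Random(1)
data = [rng.gauss(0, 1) for _ in range(200)] + [rng.gauss(5, 1) for _ in range(200)]
w, mu, var = em_gmm_1d(data, k=2)
```

With well-separated components as here, the recovered means land near 0 and 5; in harder problems the result depends on initialization, as discussed below.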
Dempster also showed that each successive EM iteration will not decrease the likelihood, a property not shared by other gradient based maximization techniques. Moreover, EM naturally embeds within it constraints on the probability vector, and for sufficiently large sample sizes positive definiteness of the covariance iterates. This is a key advantage since explicitly constrained methods incur extra computational costs to check and maintain appropriate values. Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984) make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not. The relative merits of EM and other algorithms vis-à-vis convergence have been discussed in other literature.
Other common objections to the use of EM are that it has a propensity to spuriously identify local maxima, as well as displaying sensitivity to initial values. One may address these problems by evaluating EM at several initial points in the parameter space but this is computationally costly and other approaches, such as the annealing EM method of Ueda and Nakano (1998) (in which the initial components are essentially forced to overlap, providing a less heterogeneous basis for initial guesses), may be preferable.
Figueiredo and Jain note that convergence to 'meaningless' parameter values obtained at the boundary (where regularity conditions break down, e.g., Ghosh and Sen (1985)) is frequently observed when the number of model components exceeds the optimal/true one. On this basis they suggest a unified approach to estimation and identification in which the initial n is chosen to greatly exceed the expected optimal value. Their optimization routine is constructed via a minimum message length (MML) criterion that effectively eliminates a candidate component if there is insufficient information to support it. In this way it is possible to systematize reductions in n and consider estimation and identification jointly.
==== The expectation step ====
With initial guesses for the parameters of our mixture model, "partial membership" of each data point in each constituent distribution is computed by calculating expectation values for the membership variables of each data point. That is, for each data point xj and distribution Yi, the membership value yi, j is:
{\displaystyle y_{i,j}={\frac {a_{i}f_{Y}(x_{j};\theta _{i})}{f_{X}(x_{j})}}.}
==== The maximization step ====
With expectation values in hand for group membership, plug-in estimates are recomputed for the distribution parameters.
The mixing coefficients ai are the means of the membership values over the N data points.
{\displaystyle a_{i}={\frac {1}{N}}\sum _{j=1}^{N}y_{i,j}}
The component model parameters θi are also calculated by expectation maximization using data points xj that have been weighted using the membership values. For example, if θ is a mean μ
{\displaystyle \mu _{i}={\frac {\sum _{j}y_{i,j}x_{j}}{\sum _{j}y_{i,j}}}.}
With new estimates for ai and the θi's, the expectation step is repeated to recompute new membership values. The entire procedure is repeated until model parameters converge.
=== Markov chain Monte Carlo ===
As an alternative to the EM algorithm, the mixture model parameters can be deduced using posterior sampling as indicated by Bayes' theorem. This is still regarded as an incomplete data problem in which membership of data points is the missing data. A two-step iterative procedure known as Gibbs sampling can be used.
The previous example of a mixture of two Gaussian distributions can demonstrate how the method works. As before, initial guesses of the parameters for the mixture model are made. Instead of computing partial memberships for each elemental distribution, a membership value for each data point is drawn from a Bernoulli distribution (that is, it will be assigned to either the first or the second Gaussian). The Bernoulli parameter θ is determined for each data point on the basis of one of the constituent distributions. Draws from the distribution generate membership associations for each data point. Plug-in estimators can then be used as in the M step of EM to generate a new set of mixture model parameters, and the Bernoulli draw step repeated.
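The two-step procedure can be sketched as follows. This is a simplified illustration with known, equal unit variances; a full Gibbs sampler would also draw the parameters from their conditional posteriors, whereas here the plug-in re-estimation described above is used. Data and initial guesses are assumptions for the sketch.

```python
import numpy as np

def gibbs_two_gaussians(x, mu, a, sigma=1.0, n_iter=200, seed=1):
    """Gibbs-style sampler for a two-Gaussian mixture with known, equal
    variances. Memberships are drawn at random per data point; parameters
    are then re-estimated with plug-in formulas, as described above."""
    rng = np.random.default_rng(seed)
    mu = np.asarray(mu, dtype=float)
    a = np.asarray(a, dtype=float)
    for _ in range(n_iter):
        # Bernoulli parameter: probability each point belongs to component 1
        f = np.exp(-0.5 * (x[None, :] - mu[:, None]) ** 2 / sigma**2)
        p1 = a[1] * f[1] / (a[0] * f[0] + a[1] * f[1])
        z = rng.random(len(x)) < p1          # random membership draw
        # Plug-in re-estimation given the sampled memberships
        # (a robust implementation would guard against empty components)
        a = np.array([np.mean(~z), np.mean(z)])
        mu = np.array([x[~z].mean(), x[z].mean()])
    return a, mu

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])
a, mu = gibbs_two_gaussians(x, mu=[-1.0, 1.0], a=[0.5, 0.5])
```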
=== Moment matching ===
The method of moment matching is one of the oldest techniques for determining the mixture parameters, dating back to Karl Pearson's seminal work of 1894.
In this approach the parameters of the mixture are determined such that the composite distribution has moments matching some given value. In many instances extraction of solutions to the moment equations may present non-trivial algebraic or computational problems. Moreover, numerical analysis by Day has indicated that such methods may be inefficient compared to EM. Nonetheless, there has been renewed interest in this method, e.g., Craigmile and Titterington (1998) and Wang.
McWilliam and Loh (2009) consider the characterisation of a hyper-cuboid normal mixture copula in large dimensional systems for which EM would be computationally prohibitive. Here a pattern analysis routine is used to generate multivariate tail-dependencies consistent with a set of univariate and (in some sense) bivariate moments. The performance of this method is then evaluated using equity log-return data with Kolmogorov–Smirnov test statistics suggesting a good descriptive fit.
=== Spectral method ===
Some problems in mixture model estimation can be solved using spectral methods.
In particular, the approach is useful when the data points xi lie in a high-dimensional real space and the hidden distributions are known to be log-concave (such as the Gaussian distribution or the exponential distribution).
Spectral methods of learning mixture models are based on the use of Singular Value Decomposition of a matrix which contains data points.
The idea is to consider the top k singular vectors, where k is the number of distributions to be learned. The projection
of each data point to a linear subspace spanned by those vectors groups points originating from the same distribution
very close together, while points from different distributions stay far apart.
One distinctive feature of the spectral method is that it allows us to prove that if
distributions satisfy certain separation condition (e.g., not too close), then the estimated mixture will be very close to the true one with high probability.
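The projection step can be sketched with numpy's SVD. The synthetic data below (two well-separated spherical Gaussians in 50 dimensions) is an assumption for the illustration; the point is that projecting onto the top-k singular subspace keeps same-component points close while keeping components far apart.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 200, 2
# Two spherical Gaussians in R^50 with well-separated means along axis 0
mu = np.zeros((2, d))
mu[0, 0], mu[1, 0] = 10.0, -10.0
labels = rng.integers(0, 2, n)
X = mu[labels] + rng.normal(size=(n, d))

# Project every point onto the span of the top-k right singular vectors
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = X @ Vt[:k].T                       # n x k projected coordinates

# Same-component points land close together; components stay far apart
center0 = P[labels == 0].mean(axis=0)
center1 = P[labels == 1].mean(axis=0)
spread0 = np.linalg.norm(P[labels == 0] - center0, axis=1).mean()
separation = np.linalg.norm(center0 - center1)
```

Here the separation between projected component centers is many times the within-component spread, which is what clustering in the projected subspace relies on.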
=== Graphical methods ===
Tarter and Lock describe a graphical approach to mixture identification in which a kernel function is applied to an empirical frequency plot so as to reduce intra-component variance. In this way one may more readily identify components having differing means. While this λ-method does not require prior knowledge of the number or functional form of the components, its success does rely on the choice of the kernel parameters, which to some extent implicitly embeds assumptions about the component structure.
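A kernel-smoothing step of this kind can be sketched as follows. The Gaussian kernel, bandwidth, and synthetic bimodal sample are illustrative assumptions; the bandwidth choice is exactly the analyst's judgment that the text notes embeds assumptions about the components.

```python
import numpy as np

def kernel_smooth(data, grid, bandwidth):
    """Gaussian-kernel smoothing of an empirical sample, evaluated on a
    grid. Local maxima of the smoothed curve suggest component means."""
    u = (grid[:, None] - data[None, :]) / bandwidth
    return np.exp(-0.5 * u**2).sum(axis=1) / (
        len(data) * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(2, 0.5, 500)])
grid = np.linspace(-4, 4, 161)
density = kernel_smooth(data, grid, bandwidth=0.3)
# The smoothed curve shows two modes, near -2 and 2, with a valley at 0
```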
=== Other methods ===
Some methods can even provably learn mixtures of heavy-tailed distributions, including those with infinite variance (see links to papers below). In this setting, EM-based methods would not work, since the expectation step would diverge due to the presence of outliers.
=== A simulation ===
To simulate a sample of size N from a mixture of distributions Fi, i = 1 to n, with probabilities pi (where Σi pi = 1):
Generate N random numbers from a categorical distribution of size n and probabilities pi for i = 1 to n. These tell you which of the Fi each of the N values will come from. Denote by mi the quantity of random numbers assigned to the ith category.
For each i, generate mi random numbers from the Fi distribution.
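The two steps above can be sketched directly (drawing the category counts in one multinomial step is equivalent to N categorical draws). The particular component distributions below are illustrative assumptions.

```python
import numpy as np

def sample_mixture(N, probs, samplers, rng):
    """Draw N values from a mixture: first draw how many values come
    from each component (categorical step), then sample that many values
    from each component distribution F_i."""
    counts = rng.multinomial(N, probs)        # m_i for each component
    draws = [sampler(m) for sampler, m in zip(samplers, counts)]
    return np.concatenate(draws), counts

rng = np.random.default_rng(0)
samplers = [lambda m: rng.normal(0.0, 1.0, m),     # F_1: standard normal
            lambda m: rng.exponential(1.0, m)]     # F_2: exponential
x, counts = sample_mixture(10_000, [0.3, 0.7], samplers, rng)
```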
== Extensions ==
In a Bayesian setting, additional levels can be added to the graphical model defining the mixture model. For example, in the common latent Dirichlet allocation topic model, the observations are sets of words drawn from D different documents and the K mixture components represent topics that are shared across documents. Each document has a different set of mixture weights, which specify the topics prevalent in that document. All sets of mixture weights share common hyperparameters.
A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables. The resulting model is termed a hidden Markov model and is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the resulting article for more information.
== History ==
Mixture distributions and the problem of mixture decomposition, that is, the identification of its constituent components and the parameters thereof, have been cited in the literature as far back as 1846 (Quetelet in McLachlan, 2000), although common reference is made to the work of Karl Pearson (1894) as the first author to explicitly address the decomposition problem in characterising non-normal attributes of forehead to body length ratios in female shore crab populations. The motivation for this work was provided by the zoologist Walter Frank Raphael Weldon, who had speculated in 1893 (in Tarter and Lock) that asymmetry in the histogram of these ratios could signal evolutionary divergence. Pearson's approach was to fit a univariate mixture of two normals to the data by choosing the five parameters of the mixture such that the empirical moments matched those of the model.
While his work was successful in identifying two potentially distinct sub-populations and in demonstrating the flexibility of mixtures as a moment matching tool, the formulation required the solution of a 9th degree (nonic) polynomial which at the time posed a significant computational challenge.
Subsequent works focused on addressing these problems, but it was not until the advent of the modern computer and the popularisation of Maximum Likelihood (MLE) parameterisation techniques that research really took off. Since that time there has been a vast body of research on the subject spanning areas such as fisheries research, agriculture, botany, economics, medicine, genetics, psychology, palaeontology, electrophoresis, finance, geology and zoology.
== See also ==
=== Mixture ===
Mixture density
Mixture (probability)
Flexible Mixture Model (FMM)
Subspace Gaussian mixture model
Giry monad
=== Hierarchical models ===
Graphical model
Hierarchical Bayes model
=== Outlier detection ===
RANSAC
== References ==
== Further reading ==
=== Books on mixture models ===
Everitt, B.S.; Hand, D.J. (1981). Finite mixture distributions. Chapman & Hall. ISBN 978-0-412-22420-1.
Lindsay, B. G. (1995). Mixture Models: Theory, Geometry, and Applications. NSF-CBMS Regional Conference Series in Probability and Statistics. Vol. 5. Hayward: Institute of Mathematical Statistics.
Marin, J.M.; Mengersen, K.; Robert, C. P. (2011). "Bayesian modelling and inference on mixtures of distributions" (PDF). In Dey, D.; Rao, C.R. (eds.). Essential Bayesian models. Handbook of statistics: Bayesian thinking - modeling and computation. Vol. 25. Elsevier. ISBN 9780444537324.
McLachlan, G.J.; Peel, D. (2000). Finite Mixture Models. Wiley. ISBN 978-0-471-00626-8.
Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 16.1. Gaussian Mixture Models and k-Means Clustering". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
Titterington, D.; Smith, A.; Makov, U. (1985). Statistical Analysis of Finite Mixture Distributions. Wiley. ISBN 978-0-471-90763-3.
Yao, W.; Xiang, S. (2024). Mixture Models: Parametric, Semiparametric, and New Directions. Chapman & Hall/CRC Press. ISBN 978-0367481827.
=== Application of Gaussian mixture models ===
Reynolds, D.A.; Rose, R.C. (January 1995). "Robust text-independent speaker identification using Gaussian mixture speaker models". IEEE Transactions on Speech and Audio Processing. 3 (1): 72–83. doi:10.1109/89.365379. S2CID 7319345.
Permuter, H.; Francos, J.; Jermyn, I.H. (2003). Gaussian mixture models of texture and colour for image database retrieval. IEEE International Conference on Acoustics, Speech, and Signal Processing, 2003. Proceedings (ICASSP '03). doi:10.1109/ICASSP.2003.1199538.
Permuter, Haim; Francos, Joseph; Jermyn, Ian (2006). "A study of Gaussian mixture models of color and texture features for image classification and segmentation" (PDF). Pattern Recognition. 39 (4): 695–706. Bibcode:2006PatRe..39..695P. doi:10.1016/j.patcog.2005.10.028. S2CID 8530776.
Lemke, Wolfgang (2005). Term Structure Modeling and Estimation in a State Space Framework. Springer Verlag. ISBN 978-3-540-28342-3.
Brigo, Damiano; Mercurio, Fabio (2001). Displaced and Mixture Diffusions for Analytically-Tractable Smile Models. Mathematical Finance – Bachelier Congress 2000. Proceedings. Springer Verlag.
Brigo, Damiano; Mercurio, Fabio (June 2002). "Lognormal-mixture dynamics and calibration to market volatility smiles". International Journal of Theoretical and Applied Finance. 5 (4): 427. CiteSeerX 10.1.1.210.4165. doi:10.1142/S0219024902001511.
Spall, J. C.; Maryak, J. L. (1992). "A feasible Bayesian estimator of quantiles for projectile accuracy from non-i.i.d. data". Journal of the American Statistical Association. 87 (419): 676–681. doi:10.1080/01621459.1992.10475269. JSTOR 2290205.
Alexander, Carol (December 2004). "Normal mixture diffusion with uncertain volatility: Modelling short- and long-term smile effects" (PDF). Journal of Banking & Finance. 28 (12): 2957–80. doi:10.1016/j.jbankfin.2003.10.017.
Stylianou, Yannis; Pantazis, Yannis; Calderero, Felipe; Larroy, Pedro; Severin, Francois; Schimke, Sascha; Bonal, Rolando; Matta, Federico; Valsamakis, Athanasios (2005). GMM-Based Multimodal Biometric Verification (PDF).
Chen, J.; Adebomi, O.E.; Olusayo, O.S.; Kulesza, W. (2010). The Evaluation of the Gaussian Mixture Probability Hypothesis Density approach for multi-target tracking. IEEE International Conference on Imaging Systems and Techniques, 2010. doi:10.1109/IST.2010.5548541.
== External links ==
Nielsen, Frank (23 March 2012). "K-MLE: A fast algorithm for learning statistical mixture models". 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). pp. 869–872. arXiv:1203.5181. Bibcode:2012arXiv1203.5181N. doi:10.1109/ICASSP.2012.6288022. ISBN 978-1-4673-0046-9. S2CID 935615.
The SOCR demonstrations of EM and Mixture Modeling
Mixture modelling page (and the Snob program for Minimum Message Length (MML) applied to finite mixture models), maintained by D.L. Dowe.
PyMix – Python Mixture Package, algorithms and data structures for a broad variety of mixture model based data mining applications in Python
sklearn.mixture – A module from the scikit-learn Python library for learning Gaussian Mixture Models (and sampling from them), previously packaged with SciPy and now packaged as a SciKit
GMM.m Matlab code for GMM Implementation
GPUmix C++ implementation of Bayesian Mixture Models using EM and MCMC with 100x speed acceleration using GPGPU.
Matlab code for GMM Implementation using EM algorithm
jMEF: A Java open source library for learning and processing mixtures of exponential families (using duality with Bregman divergences). Includes a Matlab wrapper.
Very Fast and clean C implementation of the Expectation Maximization (EM) algorithm for estimating Gaussian Mixture Models (GMMs).
mclust is an R package for mixture modeling.
dpgmm Pure Python Dirichlet process Gaussian mixture model implementation (variational).
Gaussian Mixture Models Blog post on Gaussian Mixture Models trained via Expectation Maximization, with an implementation in Python.
A Bayesian network (also known as a Bayes network, Bayes net, belief network, or decision network) is a probabilistic graphical model that represents a set of variables and their conditional dependencies via a directed acyclic graph (DAG). While it is one of several forms of causal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (e.g. speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
== Graphical model ==
Formally, Bayesian networks are directed acyclic graphs (DAGs) whose nodes represent variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if
{\displaystyle m} parent nodes represent {\displaystyle m} Boolean variables, then the probability function could be represented by a table of {\displaystyle 2^{m}} entries, one entry for each of the {\displaystyle 2^{m}} possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such as Markov networks.
== Example ==
Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state - whether it is on or not), the presence or absence of rain and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false).
The joint probability function is, by the chain rule of probability,
{\displaystyle \Pr(G,S,R)=\Pr(G\mid S,R)\Pr(S\mid R)\Pr(R)}
where G = "Grass wet (true/false)", S = "Sprinkler turned on (true/false)", and R = "Raining (true/false)".
The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:
{\displaystyle \Pr(R=T\mid G=T)={\frac {\Pr(G=T,R=T)}{\Pr(G=T)}}={\frac {\sum _{x\in \{T,F\}}\Pr(G=T,S=x,R=T)}{\sum _{x,y\in \{T,F\}}\Pr(G=T,S=x,R=y)}}}
Using the expansion for the joint probability function
{\displaystyle \Pr(G,S,R)}
and the conditional probabilities from the conditional probability tables (CPTs) stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. For example,
{\displaystyle {\begin{aligned}\Pr(G=T,S=T,R=T)&=\Pr(G=T\mid S=T,R=T)\Pr(S=T\mid R=T)\Pr(R=T)\\&=0.99\times 0.01\times 0.2\\&=0.00198.\end{aligned}}}
Then the numerical results (subscripted by the associated variable values) are
{\displaystyle \Pr(R=T\mid G=T)={\frac {0.00198_{TTT}+0.1584_{TFT}}{0.00198_{TTT}+0.288_{TTF}+0.1584_{TFT}+0.0_{TFF}}}={\frac {891}{2491}}\approx 35.77\%.}
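This computation can be carried out mechanically by summing the chain-rule factorization over the nuisance variable. The CPT entries not quoted in the text (e.g. Pr(S=T | R=F) = 0.4, Pr(G=T | S=F, R=T) = 0.8) are the standard values for this sprinkler example, chosen to be consistent with the terms 0.288 and 0.1584 above.

```python
from itertools import product

# Conditional probability tables of the sprinkler network
P_R = {True: 0.2, False: 0.8}                        # Pr(R)
P_S_given_R = {True: 0.01, False: 0.4}               # Pr(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}  # Pr(G=T | S, R)

def joint(g, s, r):
    """Pr(G=g, S=s, R=r) via the chain-rule factorization above."""
    pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1 - P_S_given_R[r]
    return pg * ps * P_R[r]

# Pr(R=T | G=T): sum out the nuisance variable S in numerator and denominator
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
p = num / den          # 891/2491, roughly 35.77%
```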
To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function
{\displaystyle \Pr(S,R\mid {\text{do}}(G=T))=\Pr(S\mid R)\Pr(R)}
obtained by removing the factor
{\displaystyle \Pr(G\mid S,R)}
from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action:
{\displaystyle \Pr(R\mid {\text{do}}(G=T))=\Pr(R).}
To predict the impact of turning the sprinkler on:
{\displaystyle \Pr(R,G\mid {\text{do}}(S=T))=\Pr(R)\Pr(G\mid R,S=T)}
with the term
{\displaystyle \Pr(S=T\mid R)}
removed, showing that the action affects the grass but not the rain.
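The difference between intervening and merely observing can be computed directly from the truncated factorization. As above, the CPT entries not quoted in the text are the standard values for this example, assumed for illustration.

```python
# CPTs of the sprinkler network (standard values for this example)
P_R = {True: 0.2, False: 0.8}
P_S_given_R = {True: 0.01, False: 0.4}
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}

# Interventional: do(S=T) removes the factor Pr(S | R), so
# Pr(G=T | do(S=T)) = sum_r Pr(R=r) Pr(G=T | S=T, R=r)
p_do = sum(P_R[r] * P_G_given_SR[(True, r)] for r in (True, False))

# Observational, for contrast: conditioning on S=T also updates beliefs about R
num = sum(P_R[r] * P_S_given_R[r] * P_G_given_SR[(True, r)] for r in (True, False))
den = sum(P_R[r] * P_S_given_R[r] for r in (True, False))
p_obs = num / den

# Rain is unaffected by the intervention, but not by the observation:
p_rain_do = P_R[True]                               # Pr(R=T | do(S=T)) = Pr(R=T)
p_rain_obs = P_R[True] * P_S_given_R[True] / den    # Pr(R=T | S=T), much smaller
```

Forcing the sprinkler on makes wet grass more likely than observing it on does, because observing the sprinkler on is evidence against rain.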
These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the action
{\displaystyle {\text{do}}(x)}
can still be predicted, however, whenever the back-door criterion is satisfied. It states that, if a set Z of nodes can be observed that d-separates (or blocks) all back-door paths from X to Y then
{\displaystyle \Pr(Y,Z\mid {\text{do}}(x))={\frac {\Pr(Y,Z,X=x)}{\Pr(X=x\mid Z)}}.}
A back-door path is one that ends with an arrow into X. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set Z = R is admissible for predicting the effect of S = T on G, because R d-separates the (only) back-door path S ← R → G. However, if S is not observed, no other set d-separates this path and the effect of turning the sprinkler on (S = T) on the grass (G) cannot be predicted from passive observations. In that case P(G | do(S = T)) is not "identified". This reflects the fact that, lacking interventional data, one cannot determine whether the observed dependence between S and G is due to a causal connection or is spurious, i.e. an apparent dependence arising from a common cause, R (see Simpson's paradox).
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus" and test whether all do terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.
Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for
{\displaystyle 2^{10}=1024}
values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most
{\displaystyle 10\cdot 2^{3}=80}
values.
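The counting argument can be sketched directly (a trivial check of the arithmetic above):

```python
def full_table_size(n):
    """Entries needed for a joint probability table over n binary variables."""
    return 2 ** n

def bn_table_size(n, max_parents):
    """Upper bound on total CPT entries when each of the n binary variables
    has at most max_parents parents (one row per parent combination)."""
    return n * 2 ** max_parents

full_joint = full_table_size(10)    # 1024 entries
bn_storage = bn_table_size(10, 3)   # at most 80 entries
```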
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.
== Inference and learning ==
Bayesian networks perform three main inference tasks:
=== Inferring unobserved variables ===
Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (the evidence variables) are observed. This process of computing the posterior distribution of variables given evidence is called probabilistic inference. The posterior gives a universal sufficient statistic for detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. A Bayesian network can thus be considered a mechanism for automatically applying Bayes' theorem to complex problems.
The most common exact inference methods are: variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product; clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for a space–time tradeoff and match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network's treewidth. The most common approximate inference algorithms are importance sampling, stochastic MCMC simulation, mini-bucket elimination, loopy belief propagation, generalized belief propagation and variational methods.
=== Parameter learning ===
In order to fully specify the Bayesian network and thus fully represent the joint probability distribution, it is necessary to specify for each node X the probability distribution for X conditional upon X's parents. The distribution of X conditional upon its parents may have any form. It is common to work with discrete or Gaussian distributions since that simplifies calculations. Sometimes only constraints on distribution are known; one can then use the principle of maximum entropy to determine a single distribution, the one with the greatest entropy given the constraints. (Analogously, in the specific context of a dynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize the entropy rate of the implied stochastic process.)
Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via the maximum likelihood approach. Direct maximization of the likelihood (or of the posterior probability) is often complex given unobserved variables. A classical approach to this problem is the expectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters.
A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable.
=== Structure learning ===
In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data.
Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued within machine learning. The basic idea goes back to a recovery algorithm developed by Rebane and Pearl and rests on the distinction between the three possible patterns allowed in a 3-node DAG: the chain {\displaystyle X\rightarrow Y\rightarrow Z}, the fork {\displaystyle X\leftarrow Y\rightarrow Z}, and the collider {\displaystyle X\rightarrow Y\leftarrow Z}.
The first two represent the same dependencies ({\displaystyle X} and {\displaystyle Z} are independent given {\displaystyle Y}) and are, therefore, indistinguishable. The collider, however, can be uniquely identified, since {\displaystyle X} and {\displaystyle Z} are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when {\displaystyle X} and {\displaystyle Z} have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed.
An alternative method of structural learning uses optimization-based search. It requires a scoring function and a search strategy. A common scoring function is posterior probability of the structure given the training data, like the BIC or the BDeu. The time requirement of an exhaustive search returning a structure that maximizes the score is superexponential in the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm like Markov chain Monte Carlo can avoid getting trapped in local minima. Friedman et al. discuss using mutual information between variables and finding a structure that maximizes this. They do this by restricting the parent candidate set to k nodes and exhaustively searching therein.
A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it using integer programming. Acyclicity constraints are added to the integer program (IP) during solving in the form of cutting planes. Such a method can handle problems with up to 100 variables.
In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in the literature when the number of variables is huge.
Another method consists of focusing on the sub-class of decomposable models, for which the MLE has a closed form. It is then possible to discover a consistent structure for hundreds of variables.
Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to use K-tree for effective learning.
== Statistical introduction ==
Given data
{\displaystyle x} and parameter {\displaystyle \theta }, a simple Bayesian analysis starts with a prior probability (prior) {\displaystyle p(\theta )} and likelihood {\displaystyle p(x\mid \theta )} to compute a posterior probability {\displaystyle p(\theta \mid x)\propto p(x\mid \theta )p(\theta )}.
Often the prior on
{\displaystyle \theta } depends in turn on other parameters {\displaystyle \varphi } that are not mentioned in the likelihood. So, the prior {\displaystyle p(\theta )} must be replaced by a likelihood {\displaystyle p(\theta \mid \varphi )}, and a prior {\displaystyle p(\varphi )} on the newly introduced parameters {\displaystyle \varphi } is required, resulting in a posterior probability
{\displaystyle p(\theta ,\varphi \mid x)\propto p(x\mid \theta )p(\theta \mid \varphi )p(\varphi ).}
This is the simplest example of a hierarchical Bayes model.
The process may be repeated; for example, the parameters {\displaystyle \varphi } may depend in turn on additional parameters {\displaystyle \psi }, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters.
=== Introductory examples ===
Given the measured quantities
{\displaystyle x_{1},\dots ,x_{n}} each with normally distributed errors of known standard deviation {\displaystyle \sigma },
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2})}
Suppose we are interested in estimating the {\displaystyle \theta _{i}}. An approach would be to estimate the {\displaystyle \theta _{i}} using a maximum likelihood approach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply
{\displaystyle \theta _{i}=x_{i}.}
However, if the quantities are related, so that for example the individual {\displaystyle \theta _{i}} have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g.,
{\displaystyle x_{i}\sim N(\theta _{i},\sigma ^{2}),}
{\displaystyle \theta _{i}\sim N(\varphi ,\tau ^{2}),}
with improper priors {\displaystyle \varphi \sim {\text{flat}}} and {\displaystyle \tau \sim {\text{flat}}\in (0,\infty )}. When {\displaystyle n\geq 3}, this is an identified model (i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individual {\displaystyle \theta _{i}}
will tend to move, or shrink away from the maximum likelihood estimates towards their common mean. This shrinkage is a typical behavior in hierarchical Bayes models.
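The shrinkage can be seen in closed form when, for illustration, the hyperparameters are treated as known: the posterior mean of each {\displaystyle \theta _{i}} is a precision-weighted average of the observation and the common mean. The numeric values below, and the plug-in choice of the common mean, are assumptions for the sketch; in the full hierarchical model these quantities carry priors of their own.

```python
import numpy as np

x = np.array([1.0, 4.0, 7.0])   # observed x_i
sigma, tau = 2.0, 1.0           # known error and population scales (assumed)
phi = x.mean()                  # common mean (empirical-Bayes plug-in)

# Posterior mean of theta_i: precision-weighted average of x_i and phi
w = (1 / sigma**2) / (1 / sigma**2 + 1 / tau**2)
theta_post = w * x + (1 - w) * phi
# Each estimate moves from x_i toward the common mean: [3.4, 4.0, 4.6]
```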
=== Restrictions on priors ===
Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variable
{\displaystyle \tau }
in the example. The usual priors such as the Jeffreys prior often do not work, because the posterior distribution will not be normalizable and estimates made by minimizing the expected loss will be inadmissible.
== Definitions and concepts ==
Several equivalent definitions of a Bayesian network have been offered. For the following, let G = (V,E) be a directed acyclic graph (DAG) and let X = (Xv), v ∈ V be a set of random variables indexed by V.
=== Factorization definition ===
X is a Bayesian network with respect to G if its joint probability density function (with respect to a product measure) can be written as a product of the individual density functions, conditional on their parent variables:
{\displaystyle p(x)=\prod _{v\in V}p\left(x_{v}\,{\big |}\,x_{\operatorname {pa} (v)}\right)}
where pa(v) is the set of parents of v (i.e. those vertices pointing directly to v via a single edge).
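The factorization can be sketched as a direct computation over per-node CPTs. The network and probability values below are illustrative assumptions (they reproduce the sprinkler example from earlier in the article), with each binary node storing only Pr(node = True | parents).

```python
from math import prod

# A Bayesian network as {node: (parents, CPT)}, where the CPT maps an
# assignment of the parents to Pr(node = True | parents)
network = {
    "R": ((), {(): 0.2}),
    "S": (("R",), {(True,): 0.01, (False,): 0.4}),
    "G": (("S", "R"), {(True, True): 0.99, (True, False): 0.9,
                       (False, True): 0.8, (False, False): 0.0}),
}

def joint_probability(assignment):
    """p(x) = prod over v of p(x_v | x_pa(v)) for one full assignment."""
    def factor(v):
        parents, cpt = network[v]
        p_true = cpt[tuple(assignment[u] for u in parents)]
        return p_true if assignment[v] else 1 - p_true
    return prod(factor(v) for v in network)

p = joint_probability({"R": True, "S": True, "G": True})  # 0.2 * 0.01 * 0.99
```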
For any set of random variables, the probability of any member of a joint distribution can be calculated from conditional probabilities using the chain rule (given a topological ordering of X) as follows:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} \left(X_{v}=x_{v}\mid X_{v+1}=x_{v+1},\ldots ,X_{n}=x_{n}\right)}
Using the definition above, this can be written as:
{\displaystyle \operatorname {P} (X_{1}=x_{1},\ldots ,X_{n}=x_{n})=\prod _{v=1}^{n}\operatorname {P} (X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}\,{\text{ that is a parent of }}X_{v}\,)}
The difference between the two expressions is the conditional independence of the variables from any of their non-descendants, given the values of their parent variables.
=== Local Markov property ===
X is a Bayesian network with respect to G if it satisfies the local Markov property: each variable is conditionally independent of its non-descendants given its parent variables:
{\displaystyle X_{v}\perp \!\!\!\perp X_{V\,\smallsetminus \,\operatorname {de} (v)}\mid X_{\operatorname {pa} (v)}\quad {\text{for all }}v\in V}
where de(v) is the set of descendants and V \ de(v) is the set of non-descendants of v.
This can be expressed in terms similar to the first definition, as
{\displaystyle {\begin{aligned}&\operatorname {P} (X_{v}=x_{v}\mid X_{i}=x_{i}{\text{ for each }}X_{i}{\text{ that is not a descendant of }}X_{v}\,)\\[6pt]={}&P(X_{v}=x_{v}\mid X_{j}=x_{j}{\text{ for each }}X_{j}{\text{ that is a parent of }}X_{v}\,)\end{aligned}}}
The set of parents is a subset of the set of non-descendants because the graph is acyclic.
=== Marginal independence structure ===
In general, learning a Bayesian network from data is known to be NP-hard. This is due in part to the combinatorial explosion of enumerating DAGs as the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure: while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements—the conditional independence statements in which the conditioning set is empty—are encoded by a simple undirected graph with special properties such as equal intersection and independence numbers.
=== Developing Bayesian networks ===
Developing a Bayesian network often begins with creating a DAG G such that X satisfies the local Markov property with respect to G. Sometimes this is a causal DAG. The conditional probability distributions of each variable given its parents in G are assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution of X is the product of these conditional distributions, then X is a Bayesian network with respect to G.
=== Markov blanket ===
The Markov blanket of a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node. X is a Bayesian network with respect to G if every node is conditionally independent of all other nodes in the network, given its Markov blanket.
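A minimal sketch of computing a Markov blanket directly from the definition, given a DAG as an edge list (the example graph is hypothetical):

```python
def markov_blanket(node, edges):
    """Return the Markov blanket of `node`: its parents, its children, and
    the other parents of its children, in a DAG given as (u, v) edges."""
    parents = {u for (u, v) in edges if v == node}
    children = {v for (u, v) in edges if u == node}
    co_parents = {u for (u, v) in edges if v in children and u != node}
    return parents | children | co_parents

# Toy DAG: A -> C, B -> C, C -> D, E -> D.
edges = {("A", "C"), ("B", "C"), ("C", "D"), ("E", "D")}
# Blanket of C: parents A and B, child D, and D's other parent E.
print(sorted(markov_blanket("C", edges)))
```

Note that E enters the blanket only because it shares a child with C; conditioning on the child D can induce dependence between its parents, which is why co-parents are required.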
==== d-separation ====
This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional. We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that.
Let P be a trail from node u to v. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. Then P is said to be d-separated by a set of nodes Z if any of the following conditions holds:
P contains a directed chain (though P need not be entirely directed), {\displaystyle u\cdots \leftarrow m\leftarrow \cdots v} or {\displaystyle u\cdots \rightarrow m\rightarrow \cdots v}, such that the middle node m is in Z,
P contains a fork, {\displaystyle u\cdots \leftarrow m\rightarrow \cdots v}, such that the middle node m is in Z, or
P contains an inverted fork (or collider), {\displaystyle u\cdots \rightarrow m\leftarrow \cdots v}, such that the middle node m is not in Z and no descendant of m is in Z.
The nodes u and v are d-separated by Z if all trails between them are d-separated. If u and v are not d-separated, they are d-connected.
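The three blocking rules can be tested on small graphs by brute-force trail enumeration; the sketch below does exactly that (practical implementations use more efficient methods such as the Bayes-ball algorithm, and the example graph is a toy):

```python
def d_separated(u, v, Z, edges):
    """Naive d-separation test: enumerate every loop-free undirected trail
    from u to v and check that each is blocked by Z under the chain, fork,
    and collider rules. Exponential in the worst case; fine for toy DAGs."""
    children, parents = {}, {}
    for a, b in edges:
        children.setdefault(a, set()).add(b)
        parents.setdefault(b, set()).add(a)

    def desc(n):
        # All descendants of n (children, grandchildren, ...).
        out, stack = set(), [n]
        while stack:
            for c in children.get(stack.pop(), ()):
                if c not in out:
                    out.add(c)
                    stack.append(c)
        return out

    def trails(a, path):
        # All loop-free undirected paths from a to v.
        if a == v:
            yield path
            return
        for b in children.get(a, set()) | parents.get(a, set()):
            if b not in path:
                yield from trails(b, path + [b])

    def blocked(path):
        for i in range(1, len(path) - 1):
            m = path[i]
            collider = (path[i - 1] in parents.get(m, set())
                        and path[i + 1] in parents.get(m, set()))
            if collider:
                if m not in Z and not (desc(m) & set(Z)):
                    return True  # collider with neither m nor a descendant in Z
            elif m in Z:
                return True      # chain or fork whose middle node is in Z
        return False

    return all(blocked(p) for p in trails(u, [u]))

# Collider a -> c <- b: a and b are d-separated by the empty set,
# but conditioning on the collider c d-connects them.
edges = {("a", "c"), ("b", "c")}
assert d_separated("a", "b", set(), edges) is True
assert d_separated("a", "b", {"c"}, edges) is False
```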
X is a Bayesian network with respect to G if, for any two nodes u, v:
{\displaystyle X_{u}\perp \!\!\!\perp X_{v}\mid X_{Z}}
where Z is a set which d-separates u and v. (The Markov blanket is the minimal set of nodes which d-separates node v from all other nodes.)
=== Causal networks ===
Although Bayesian networks are often used to represent causal relationships, this need not be the case: a directed edge from u to v does not require that Xv be causally dependent on Xu. This is demonstrated by the fact that Bayesian networks on the graphs:
{\displaystyle a\rightarrow b\rightarrow c\qquad {\text{and}}\qquad a\leftarrow b\leftarrow c}
are equivalent: that is, they impose exactly the same conditional independence requirements.
A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a node X is actively caused to be in a given state x (an action written as do(X = x)), then the probability density function changes to that of the network obtained by cutting the links from the parents of X to X, and setting X to the caused value x. Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted.
== Inference complexity and approximation algorithms ==
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks is NP-hard. This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum and Michael Luby proved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks. First, they proved that no tractable deterministic algorithm can approximate probabilistic inference to within an absolute error ε < 1/2. Second, they proved that no tractable randomized algorithm can approximate probabilistic inference to within an absolute error ε < 1/2 with confidence probability greater than 1/2.
At about the same time, Roth proved that exact inference in Bayesian networks is in fact #P-complete (and thus as hard as counting the number of satisfying assignments of a conjunctive normal form formula (CNF)) and that approximate inference within a factor {\displaystyle 2^{n^{1-\varepsilon }}} for every ε > 0, even for Bayesian networks with restricted architecture, is NP-hard.
In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by
{\displaystyle 1/p(n)} where {\displaystyle p(n)} was any polynomial of the number of nodes in the network, {\displaystyle n}.
== Software ==
Notable software for Bayesian networks include:
Just another Gibbs sampler (JAGS) – Open-source alternative to WinBUGS. Uses Gibbs sampling.
OpenBUGS – Open-source development of WinBUGS.
SPSS Modeler – Commercial software that includes an implementation for Bayesian networks.
Stan (software) – Stan is an open-source package for obtaining Bayesian inference using the No-U-Turn sampler (NUTS), a variant of Hamiltonian Monte Carlo.
PyMC – A Python library implementing an embedded domain-specific language to represent Bayesian networks, and a variety of samplers (including NUTS).
WinBUGS – One of the first computational implementations of MCMC samplers. No longer maintained.
== History ==
The term Bayesian network was coined by Judea Pearl in 1985 to emphasize:
the often subjective nature of the input information
the reliance on Bayes' conditioning as the basis for updating information
the distinction between causal and evidential modes of reasoning
In the late 1980s Pearl's Probabilistic Reasoning in Intelligent Systems and Neapolitan's Probabilistic Reasoning in Expert Systems summarized their properties and established them as a field of study.
== See also ==
== Notes ==
== References ==
== Further reading ==
Conrady S, Jouffe L (2015-07-01). Bayesian Networks and BayesiaLab – A practical introduction for researchers. Franklin, Tennessee: Bayesian USA. ISBN 978-0-9965333-0-0.
Charniak E (Winter 1991). "Bayesian networks without tears" (PDF). AI Magazine.
Kruse R, Borgelt C, Klawonn F, Moewes C, Steinbrecher M, Held P (2013). Computational Intelligence A Methodological Introduction. London: Springer-Verlag. ISBN 978-1-4471-5012-1.
Borgelt C, Steinbrecher M, Kruse R (2009). Graphical Models – Representations for Learning, Reasoning and Data Mining (Second ed.). Chichester: Wiley. ISBN 978-0-470-74956-2.
== External links ==
An Introduction to Bayesian Networks and their Contemporary Applications
On-line Tutorial on Bayesian nets and probability
Web-App to create Bayesian nets and run it with a Monte Carlo method
Continuous Time Bayesian Networks
Bayesian Networks: Explanation and Analogy
A live tutorial on learning Bayesian networks
A hierarchical Bayes Model for handling sample heterogeneity in classification problems, provides a classification model taking into consideration the uncertainty associated with measuring replicate samples.
Hierarchical Naive Bayes Model for handling sample uncertainty Archived 2007-09-28 at the Wayback Machine, shows how to perform classification and learning with continuous and discrete variables with replicated measurements.
In psychometrics, item response theory (IRT, also known as latent trait theory, strong true score theory, or modern mental test theory) is a paradigm for the design, analysis, and scoring of tests, questionnaires, and similar instruments measuring abilities, attitudes, or other variables. It is a theory of testing based on the relationship between individuals' performances on a test item and the test takers' levels of performance on an overall measure of the ability that item was designed to measure. Several different statistical models are used to represent both item and test taker characteristics. Unlike simpler alternatives for creating scales and evaluating questionnaire responses, it does not assume that each item is equally difficult. This distinguishes IRT from, for instance, Likert scaling, in which "All items are assumed to be replications of each other or in other words items are considered to be parallel instruments". By contrast, item response theory treats the difficulty of each item (the item characteristic curves, or ICCs) as information to be incorporated in scaling items.
It is based on the application of related mathematical models to testing data. Because it is often regarded as superior to classical test theory, it is the preferred method for developing scales in the United States, especially when optimal decisions are demanded, as in so-called high-stakes tests, e.g., the Graduate Record Examination (GRE) and Graduate Management Admission Test (GMAT).
The name item response theory is due to the focus of the theory on the item, as opposed to the test-level focus of classical test theory. Thus IRT models the response of each examinee of a given ability to each item in the test. The term item is generic, covering all kinds of informative items. They might be multiple choice questions that have incorrect and correct responses, but are also commonly statements on questionnaires that allow respondents to indicate level of agreement (a rating or Likert scale), or patient symptoms scored as present/absent, or diagnostic information in complex systems.
IRT is based on the idea that the probability of a correct/keyed response to an item is a mathematical function of person and item parameters. (The expression "a mathematical function of person and item parameters" is analogous to Lewin's equation, B = f(P, E), which asserts that behavior is a function of the person in their environment.) The person parameter is construed as (usually) a single latent trait or dimension. Examples include general intelligence or the strength of an attitude. Parameters on which items are characterized include their difficulty (known as "location" for their location on the difficulty range); discrimination (slope or correlation), representing how steeply the rate of success of individuals varies with their ability; and a pseudoguessing parameter, characterising the (lower) asymptote at which even the least able persons will score due to guessing (for instance, 25% for a pure chance on a multiple choice item with four possible responses).
In the same manner, IRT can be used to measure human behavior in online social networks. The views expressed by different people can be aggregated to be studied using IRT. Its use in classifying information as misinformation or true information has also been evaluated.
== Overview ==
The concept of the item response function was around before 1950. The pioneering work of IRT as a theory occurred during the 1950s and 1960s. Three of the pioneers were the Educational Testing Service psychometrician Frederic M. Lord, the Danish mathematician Georg Rasch, and Austrian sociologist Paul Lazarsfeld, who pursued parallel research independently. Key figures who furthered the progress of IRT include Benjamin Drake Wright and David Andrich. IRT did not become widely used until the late 1970s and 1980s, when practitioners were told the "usefulness" and "advantages" of IRT on the one hand, and personal computers gave many researchers access to the computing power necessary for IRT on the other. In the 1990s Margaret Wu developed two item response software programs that analyse PISA and TIMSS data: ACER ConQuest (1998) and the R package TAM (2010).
Among other things, the purpose of IRT is to provide a framework for evaluating how well assessments work, and how well individual items on assessments work. The most common application of IRT is in education, where psychometricians use it for developing and designing exams, maintaining banks of items for exams, and equating the difficulties of items for successive versions of exams (for example, to allow comparisons between results over time).
IRT models are often referred to as latent trait models. The term latent is used to emphasize that discrete item responses are taken to be observable manifestations of hypothesized traits, constructs, or attributes, not directly observed, but which must be inferred from the manifest responses. Latent trait models were developed in the field of sociology, but are virtually identical to IRT models.
IRT is generally claimed as an improvement over classical test theory (CTT). For tasks that can be accomplished using CTT, IRT generally brings greater flexibility and provides more sophisticated information. Some applications, such as computerized adaptive testing, are enabled by IRT and cannot reasonably be performed using only classical test theory. Another advantage of IRT over CTT is that the more sophisticated information IRT provides allows a researcher to improve the reliability of an assessment.
IRT entails three assumptions:
A unidimensional trait denoted by {\displaystyle \theta };
Local independence of items;
The response of a person to an item can be modeled by a mathematical item response function (IRF).
The trait is further assumed to be measurable on a scale (the mere existence of a test assumes this), typically set to a standard scale with a mean of 0.0 and a standard deviation of 1.0. Unidimensionality should be interpreted as homogeneity, a quality that should be defined or empirically demonstrated in relation to a given purpose or use, but not a quantity that can be measured. 'Local independence' means (a) that the chance of one item being used is not related to any other item(s) being used and (b) that response to an item is each and every test-taker's independent decision, that is, there is no cheating or pair or group work. The topic of dimensionality is often investigated with factor analysis, while the IRF is the basic building block of IRT and is the center of much of the research and literature.
== The item response function ==
The IRF gives the probability that a person with a given ability level will answer correctly. Persons with lower ability have less of a chance, while persons with high ability are very likely to answer correctly; for example, students with higher math ability are more likely to get a math item correct. The exact value of the probability depends, in addition to ability, on a set of item parameters for the IRF.
=== Three parameter logistic model ===
For example, in the three parameter logistic model (3PL), the probability of a correct response to a dichotomous item i, usually a multiple-choice question, is:
{\displaystyle p_{i}({\theta })=c_{i}+{\frac {1-c_{i}}{1+e^{-a_{i}({\theta }-b_{i})}}}}
where {\displaystyle \theta } indicates that the person's abilities are modeled as a sample from a normal distribution for the purpose of estimating the item parameters. After the item parameters have been estimated, the abilities of individual people are estimated for reporting purposes.
{\displaystyle a_{i}}, {\displaystyle b_{i}}, and {\displaystyle c_{i}} are the item parameters. The item parameters determine the shape of the IRF. Figure 1 depicts an ideal 3PL ICC.
The item parameters can be interpreted as changing the shape of the standard logistic function:
{\displaystyle P(t)={\frac {1}{1+e^{-t}}}.}
In brief, the parameters are interpreted as follows (dropping subscripts for legibility); b is most basic, hence listed first:
b – difficulty, item location: {\displaystyle p(b)=(1+c)/2,} the half-way point between {\displaystyle c} (min) and 1 (max), also where the slope is maximized.
a – discrimination, scale, slope: the maximum slope {\displaystyle p'(b)=a\cdot (1-c)/4.}
c – pseudo-guessing, chance, asymptotic minimum {\displaystyle p(-\infty )=c.}
If {\displaystyle c=0,} then these simplify to {\displaystyle p(b)=1/2} and {\displaystyle p'(b)=a/4,} meaning that b equals the 50% success level (difficulty), and a (divided by four) is the maximum slope (discrimination), which occurs at the 50% success level. Further, the logit (log odds) of a correct response is {\displaystyle a(\theta -b)} (assuming {\displaystyle c=0}): in particular, if ability θ equals difficulty b, there are even odds (1:1, so logit 0) of a correct answer; the greater the ability is above (or below) the difficulty, the more (or less) likely a correct response, with discrimination a determining how rapidly the odds increase or decrease with ability.
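These parameter interpretations are easy to verify numerically. The sketch below uses a hypothetical item (a = 1, b = 0, c = 0.25) and checks that the 3PL curve passes through the half-way point (1+c)/2 at θ = b with maximum slope a(1−c)/4:

```python
import math

def p3pl(theta, a, b, c):
    """Three-parameter logistic IRF: c + (1-c) / (1 + exp(-a*(theta-b)))."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical item: medium difficulty, good discrimination, 4-option guessing.
a, b, c = 1.0, 0.0, 0.25

# At theta = b the probability is the half-way point (1 + c) / 2.
assert abs(p3pl(b, a, b, c) - (1 + c) / 2) < 1e-12

# The central-difference slope at theta = b matches the analytic maximum
# a * (1 - c) / 4.
h = 1e-6
slope = (p3pl(b + h, a, b, c) - p3pl(b - h, a, b, c)) / (2 * h)
assert abs(slope - a * (1 - c) / 4) < 1e-6
```

Setting c = 0 in the same function recovers the 2PL special cases p(b) = 1/2 and maximum slope a/4 discussed above.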
In other words, the standard logistic function has an asymptotic minimum of 0 ({\displaystyle c=0}), is centered around 0 ({\displaystyle b=0}, {\displaystyle P(0)=1/2}), and has maximum slope {\displaystyle P'(0)=1/4.}
The {\displaystyle a} parameter stretches the horizontal scale, the {\displaystyle b} parameter shifts the horizontal scale, and the {\displaystyle c} parameter compresses the vertical scale from {\displaystyle [0,1]} to {\displaystyle [c,1].} This is elaborated below.
The parameter {\displaystyle b_{i}} represents the item location which, in the case of attainment testing, is referred to as the item difficulty. It is the point on {\displaystyle \theta } where the IRF has its maximum slope, and where the value is half-way between the minimum value of {\displaystyle c_{i}} and the maximum value of 1. The example item is of medium difficulty since {\displaystyle b_{i}} = 0.0, which is near the center of the distribution. Note that this model scales the item's difficulty and the person's trait onto the same continuum. Thus, it is valid to talk about an item being about as hard as Person A's trait level or of a person's trait level being about the same as Item Y's difficulty, in the sense that successful performance of the task involved with an item reflects a specific level of ability.
The item parameter {\displaystyle a_{i}} represents the discrimination of the item: that is, the degree to which the item discriminates between persons in different regions on the latent continuum. This parameter characterizes the slope of the IRF where the slope is at its maximum. The example item has {\displaystyle a_{i}} = 1.0, which discriminates fairly well; persons with low ability do indeed have a much smaller chance of correctly responding than persons of higher ability. This discrimination parameter corresponds to the weighting coefficient of the respective item or indicator in a standard weighted linear (Ordinary Least Squares, OLS) regression and hence can be used to create a weighted index of indicators for unsupervised measurement of an underlying latent concept.
For items such as multiple choice items, the parameter {\displaystyle c_{i}} is used in an attempt to account for the effects of guessing on the probability of a correct response. It indicates the probability that very low ability individuals will get this item correct by chance, mathematically represented as a lower asymptote. A four-option multiple choice item might have an IRF like the example item; there is a 1/4 chance of an extremely low ability candidate guessing the correct answer, so the {\displaystyle c_{i}} would be approximately 0.25. This approach assumes that all options are equally plausible, because if one option made no sense, even the lowest ability person would be able to discard it, so IRT parameter estimation methods take this into account and estimate a {\displaystyle c_{i}} based on the observed data.
== IRT models ==
Broadly speaking, IRT models can be divided into two families: unidimensional and multidimensional. Unidimensional models require a single trait (ability) dimension {\displaystyle \theta }. Multidimensional IRT models model response data hypothesized to arise from multiple traits. However, because of the greatly increased complexity, the majority of IRT research and applications utilize a unidimensional model.
IRT models can also be categorized based on the number of scored responses. The typical multiple choice item is dichotomous; even though there may be four or five options, it is still scored only as correct/incorrect (right/wrong). Another class of models apply to polytomous outcomes, where each response has a different score value. A common example of this is Likert-type items, e.g., "Rate on a scale of 1 to 5." Another example is partial-credit scoring, to which models like the Polytomous Rasch model may be applied.
=== Number of IRT parameters ===
Dichotomous IRT models are described by the number of parameters they make use of. The 3PL is named so because it employs three item parameters. The two-parameter model (2PL) assumes that the data have no guessing, but that items can vary in terms of location ({\displaystyle b_{i}}) and discrimination ({\displaystyle a_{i}}). The one-parameter model (1PL) assumes that guessing is a part of the ability and that all items that fit the model have equivalent discriminations, so that items are only described by a single parameter ({\displaystyle b_{i}}). This results in one-parameter models having the property of specific objectivity, meaning that the rank of the item difficulty is the same for all respondents independent of ability, and that the rank of the person ability is the same for items independently of difficulty. Thus, one-parameter models are sample independent, a property that does not hold for two-parameter and three-parameter models. Additionally, there is theoretically a four-parameter model (4PL), with an upper asymptote, denoted by {\displaystyle d_{i},} where {\displaystyle 1-c_{i}} in the 3PL is replaced by {\displaystyle d_{i}-c_{i}}. However, this is rarely used. Note that the alphabetical order of the item parameters does not match their practical or psychometric importance; the location/difficulty ({\displaystyle b_{i}}) parameter is clearly most important because it is included in all three models. The 1PL uses only {\displaystyle b_{i}}, the 2PL uses {\displaystyle b_{i}} and {\displaystyle a_{i}}, the 3PL adds {\displaystyle c_{i}}, and the 4PL adds {\displaystyle d_{i}}.
The 2PL is equivalent to the 3PL model with {\displaystyle c_{i}=0}, and is appropriate for testing items where guessing the correct answer is highly unlikely, such as fill-in-the-blank items ("What is the square root of 121?"), or where the concept of guessing does not apply, such as personality, attitude, or interest items (e.g., "I like Broadway musicals. Agree/Disagree").
The 1PL assumes not only that guessing is not present (or irrelevant), but that all items are equivalent in terms of discrimination, analogous to a common factor analysis with identical loadings for all items. Individual items or individuals might have secondary factors but these are assumed to be mutually independent and collectively orthogonal.
=== Logistic and normal IRT models ===
An alternative formulation constructs IRFs based on the normal probability distribution; these are sometimes called normal ogive models. For example, the formula for a two-parameter normal-ogive IRF is:
{\displaystyle p_{i}(\theta )=\Phi \left({\frac {\theta -b_{i}}{\sigma _{i}}}\right)}
where Φ is the cumulative distribution function (CDF) of the standard normal distribution.
The normal-ogive model derives from the assumption of normally distributed measurement error and is theoretically appealing on that basis. Here {\displaystyle b_{i}} is, again, the difficulty parameter. The discrimination parameter is {\displaystyle \sigma _{i}}, the standard deviation of the measurement error for item i, and comparable to 1/{\displaystyle a_{i}}.
One can estimate a normal-ogive latent trait model by factor-analyzing a matrix of tetrachoric correlations between items. This means it is technically possible to estimate a simple IRT model using general-purpose statistical software.
With rescaling of the ability parameter, it is possible to make the 2PL logistic model closely approximate the cumulative normal ogive. Typically, the 2PL logistic and normal-ogive IRFs differ in probability by no more than 0.01 across the range of the function. The difference is greatest in the distribution tails, however, which tend to have more influence on results.
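The close agreement can be checked numerically. The sketch below uses the scaling constant D ≈ 1.702 conventional in the psychometric literature (an assumption, not stated in the text above) to align the logistic curve with the normal ogive, and confirms the two stay within 0.01 of each other over a wide range:

```python
import math

def logistic(t):
    """Standard logistic function."""
    return 1.0 / (1.0 + math.exp(-t))

def normal_cdf(t):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# With the conventional scaling constant D ~= 1.702, the logistic curve
# tracks the normal ogive to within about 0.01 everywhere on [-5, 5].
D = 1.702
max_gap = max(abs(logistic(D * t) - normal_cdf(t))
              for t in (i / 100.0 for i in range(-500, 501)))
assert max_gap < 0.01
```

As the text notes, the residual differences concentrate in the tails, which is why the approximation can still matter for extreme scores.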
The latent trait/IRT model was originally developed using normal ogives, but this was considered too computationally demanding for the computers at the time (1960s). The logistic model was proposed as a simpler alternative, and has enjoyed wide use since. More recently, however, it was demonstrated that, using standard polynomial approximations to the normal CDF, the normal-ogive model is no more computationally demanding than logistic models.
=== The Rasch model ===
The Rasch model is often considered to be the 1PL IRT model. However, proponents of Rasch modeling prefer to view it as a completely different approach to conceptualizing the relationship between data and theory. Like other statistical modeling approaches, IRT emphasizes the primacy of the fit of a model to observed data, while the Rasch model emphasizes the primacy of the requirements for fundamental measurement, with adequate data-model fit being an important but secondary requirement to be met before a test or research instrument can be claimed to measure a trait. Operationally, this means that the IRT approaches include additional model parameters to reflect the patterns observed in the data (e.g., allowing items to vary in their correlation with the latent trait), whereas in the Rasch approach, claims regarding the presence of a latent trait can only be considered valid when both (a) the data fit the Rasch model, and (b) test items and examinees conform to the model. Therefore, under Rasch models, misfitting responses require diagnosis of the reason for the misfit, and may be excluded from the data set if one can explain substantively why they do not address the latent trait. Thus, the Rasch approach can be seen to be a confirmatory approach, as opposed to exploratory approaches that attempt to model the observed data.
The presence or absence of a guessing or pseudo-chance parameter is a major and sometimes controversial distinction. The IRT approach includes a left asymptote parameter to account for guessing in multiple choice examinations, while the Rasch model does not because it is assumed that guessing adds randomly distributed noise to the data. As the noise is randomly distributed, it is assumed that, provided sufficient items are tested, the rank-ordering of persons along the latent trait by raw score will not change, but will simply undergo a linear rescaling. By contrast, three-parameter IRT achieves data-model fit by selecting a model that fits the data, at the expense of sacrificing specific objectivity.
In practice, the Rasch model has at least two principal advantages in comparison to the IRT approach. The first advantage is the primacy of Rasch's specific requirements, which (when met) provides fundamental person-free measurement (where persons and items can be mapped onto the same invariant scale). Another advantage of the Rasch approach is that estimation of parameters is more straightforward in Rasch models due to the presence of sufficient statistics, which in this application means a one-to-one mapping of raw number-correct scores to Rasch {\displaystyle \theta } estimates.
== Analysis of model fit ==
As with any use of mathematical models, it is important to assess the fit of the data to the model. If item misfit with any model is diagnosed as due to poor item quality, for example confusing distractors in a multiple-choice test, then the items may be removed from that test form and rewritten or replaced in future test forms. If, however, a large number of misfitting items occur with no apparent reason for the misfit, the construct validity of the test will need to be reconsidered and the test specifications may need to be rewritten. Thus, misfit provides invaluable diagnostic tools for test developers, allowing the hypotheses upon which test specifications are based to be empirically tested against data.
There are several methods for assessing fit, such as a Chi-square statistic, or a standardized version of it. Two and three-parameter IRT models adjust item discrimination, ensuring improved data-model fit, so fit statistics lack the confirmatory diagnostic value found in one-parameter models, where the idealized model is specified in advance.
Data should not be removed on the basis of misfitting the model, but rather because a construct relevant reason for the misfit has been diagnosed, such as a non-native speaker of English taking a science test written in English. Such a candidate can be argued to not belong to the same population of persons depending on the dimensionality of the test, and, although one parameter IRT measures are argued to be sample-independent, they are not population independent, so misfit such as this is construct relevant and does not invalidate the test or the model. Such an approach is an essential tool in instrument validation. In two and three-parameter models, where the psychometric model is adjusted to fit the data, future administrations of the test must be checked for fit to the same model used in the initial validation in order to confirm the hypothesis that scores from each administration generalize to other administrations. If a different model is specified for each administration in order to achieve data-model fit, then a different latent trait is being measured and test scores cannot be argued to be comparable between administrations.
== Information ==
One of the major contributions of item response theory is the extension of the concept of reliability. Traditionally, reliability refers to the precision of measurement (i.e., the degree to which measurement is free of error). Traditionally, it is measured using a single index defined in various ways, such as the ratio of true and observed score variance. This index is helpful in characterizing a test's average reliability, for example in order to compare two tests. But IRT makes it clear that precision is not uniform across the entire range of test scores. Scores at the edges of the test's range, for example, generally have more error associated with them than scores closer to the middle of the range.
Item response theory advances the concept of item and test information to replace reliability. Information is also a function of the model parameters. For example, according to Fisher information theory, the item information supplied in the case of the 1PL for dichotomous response data is simply the probability of a correct response multiplied by the probability of an incorrect response, or,
I(θ) = p_i(θ) q_i(θ).
The standard error of estimation (SE) is the reciprocal of the square root of the test information at a given trait level:
SE(θ) = 1 / √I(θ).
Thus more information implies less error of measurement.
For other models, such as the two and three parameters models, the discrimination parameter plays an important role in the function. The item information function for the two parameter model is
I(θ) = a_i² p_i(θ) q_i(θ).
The item information function for the three parameter model is
I(θ) = a_i² [(p_i(θ) − c_i)² / (1 − c_i)²] [q_i(θ) / p_i(θ)].
In general, item information functions tend to look bell-shaped. Highly discriminating items have tall, narrow information functions; they contribute greatly but over a narrow range. Less discriminating items provide less information but over a wider range.
Plots of item information can be used to see how much information an item contributes and to what portion of the scale score range. Because of local independence, item information functions are additive. Thus, the test information function is simply the sum of the information functions of the items on the exam. Using this property with a large item bank, test information functions can be shaped to control measurement error very precisely.
Characterizing the accuracy of test scores is perhaps the central issue in psychometric theory and is a chief difference between IRT and CTT. IRT findings reveal that the CTT concept of reliability is a simplification. In the place of reliability, IRT offers the test information function which shows the degree of precision at different values of theta, θ.
These results allow psychometricians to (potentially) carefully shape the level of reliability for different ranges of ability by including carefully chosen items. For example, in a certification situation in which a test can only be passed or failed, where there is only a single "cutscore," and where the actual passing score is unimportant, a very efficient test can be developed by selecting only items that have high information near the cutscore. These items generally correspond to items whose difficulty is about the same as that of the cutscore.
== Scoring ==
The person parameter
θ represents the magnitude of the latent trait of the individual, which is the human capacity or attribute measured by the test. It might be a cognitive ability, physical ability, skill, knowledge, attitude, personality characteristic, etc.
The estimate of the person parameter - the "score" on a test with IRT - is computed and interpreted in a very different manner as compared to traditional scores like number or percent correct. The individual's total number-correct score is not the actual score, but is rather based on the IRFs, leading to a weighted score when the model contains item discrimination parameters. It is actually obtained by multiplying the item response function for each item to obtain a likelihood function, the highest point of which is the maximum likelihood estimate of
θ. This highest point is typically estimated with IRT software using the Newton–Raphson method. While scoring is much more sophisticated with IRT, for most tests, the correlation between the theta estimate and a traditional score is very high; often it is 0.95 or more. A graph of IRT scores against traditional scores shows an ogive shape implying that the IRT estimates separate individuals at the borders of the range more than in the middle.
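A minimal sketch of maximum likelihood scoring by Newton–Raphson, assuming a 2PL model with hypothetical item parameters (note that all-correct or all-incorrect response patterns have no finite maximum likelihood estimate):

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of a correct response."""
    return 1 / (1 + math.exp(-a * (theta - b)))

def mle_theta(responses, items, theta=0.0, tol=1e-8, max_iter=100):
    """Newton-Raphson MLE of theta for 2PL items.

    responses: 0/1 per item; items: (a, b) pairs. The log-likelihood is
    concave in theta, so Newton steps converge for mixed response patterns."""
    for _ in range(max_iter):
        # first derivative of the log-likelihood: sum of a_i * (u_i - p_i)
        d1 = sum(a * (u - p_2pl(theta, a, b)) for u, (a, b) in zip(responses, items))
        # second derivative: -sum of a_i^2 * p_i * q_i (always negative)
        d2 = -sum(a**2 * p_2pl(theta, a, b) * (1 - p_2pl(theta, a, b)) for a, b in items)
        step = d1 / d2
        theta -= step
        if abs(step) < tol:
            break
    return theta

# Hypothetical items (a, b) and a mixed response pattern
items = [(1.0, -1.0), (1.0, 0.0), (1.2, 0.5), (0.8, 1.0)]
theta_hat = mle_theta([1, 1, 0, 0], items)
```

At convergence the first derivative of the log-likelihood is (numerically) zero, which is exactly the maximum-likelihood condition described above.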
An important difference between CTT and IRT is the treatment of measurement error, indexed by the standard error of measurement. All tests, questionnaires, and inventories are imprecise tools; we can never know a person's true score, but rather only have an estimate, the observed score. There is some amount of random error which may push the observed score higher or lower than the true score. CTT assumes that the amount of error is the same for each examinee, but IRT allows it to vary.
Also, nothing about IRT refutes human development or improvement or assumes that a trait level is fixed. A person may learn skills, knowledge or even so called "test-taking skills" which may translate to a higher true-score. In fact, a portion of IRT research focuses on the measurement of change in trait level.
== A comparison of classical and item response theories ==
Classical test theory (CTT) and IRT are largely concerned with the same problems but are different bodies of theory and entail different methods. Although the two paradigms are generally consistent and complementary, there are a number of points of difference:
IRT makes stronger assumptions than CTT and in many cases provides correspondingly stronger findings; primarily, characterizations of error. Of course, these results only hold when the assumptions of the IRT models are actually met.
Although CTT results have allowed important practical results, the model-based nature of IRT affords many advantages over analogous CTT findings.
CTT test scoring procedures have the advantage of being simple to compute (and to explain) whereas IRT scoring generally requires relatively complex estimation procedures.
IRT provides several improvements in scaling items and people. The specifics depend upon the IRT model, but most models scale the difficulty of items and the ability of people on the same metric. Thus the difficulty of an item and the ability of a person can be meaningfully compared.
Another improvement provided by IRT is that the parameters of IRT models are generally not sample- or test-dependent whereas true-score is defined in CTT in the context of a specific test. Thus IRT provides significantly greater flexibility in situations where different samples or test forms are used. These IRT findings are foundational for computerized adaptive testing.
It is worth also mentioning some specific similarities between CTT and IRT which help to understand the correspondence between concepts. First, Lord showed that under the assumption that
θ is normally distributed, discrimination in the 2PL model is approximately a monotonic function of the point-biserial correlation. In particular:
a_i ≅ ρ_it / √(1 − ρ_it²)
where ρ_it is the point-biserial correlation of item i. Thus, if the assumption holds, where there is a higher discrimination there will generally be a higher point-biserial correlation.
Another similarity is that while IRT provides for a standard error of each estimate and an information function, it is also possible to obtain an index for a test as a whole which is directly analogous to Cronbach's alpha, called the separation index. To do so, it is necessary to begin with a decomposition of an IRT estimate into a true location and error, analogous to decomposition of an observed score into a true score and error in CTT. Let
θ̂ = θ + ε
where θ is the true location and ε is the error associated with an estimate. Then
SE(θ) is an estimate of the standard deviation of ε for a person with a given weighted score, and the separation index is obtained as follows:
R_θ = var[θ] / var[θ̂] = (var[θ̂] − var[ε]) / var[θ̂]
where the mean squared standard error of the person estimates gives an estimate of the variance of the errors ε_n across persons. The standard errors are normally produced as a by-product of the estimation process. The separation index is typically very close in value to Cronbach's alpha.
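The separation index can be computed directly from person estimates and their standard errors; a small illustration with made-up values:

```python
def separation_index(theta_hats, standard_errors):
    """R_theta = (var[theta_hat] - var[epsilon]) / var[theta_hat],
    with var[epsilon] estimated by the mean squared standard error."""
    n = len(theta_hats)
    mean = sum(theta_hats) / n
    var_hat = sum((t - mean) ** 2 for t in theta_hats) / n
    var_err = sum(se**2 for se in standard_errors) / n  # mean squared SE
    return (var_hat - var_err) / var_hat

# Made-up person estimates and their standard errors
r = separation_index([-1.0, 0.0, 1.0, 2.0], [0.3, 0.3, 0.3, 0.3])  # 0.928
```

As with Cronbach's alpha, the index approaches 1 when the spread of person estimates is large relative to their measurement error.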
IRT is sometimes called strong true score theory or modern mental test theory because it is a more recent body of theory and makes more explicit the hypotheses that are implicit within CTT.
== Implementation ==
Implementations of different variations of item response theory are available in many different statistical programs and languages, including the R programming language and Python.
== See also ==
== References ==
== Further reading ==
Many books have been written that address item response theory or contain IRT or IRT-like models. This is a partial list, focusing on texts that provide more depth.
Lord, F.M. (1980). Applications of item response theory to practical testing problems. Mahwah, NJ: Lawrence Erlbaum Associates, Inc. doi:10.4324/9780203056615. ISBN 978-1-136-55724-8.
This book summarizes much of Lord's IRT work, including chapters on the relationship between IRT and classical methods, fundamentals of IRT, estimation, and several advanced topics. Its estimation chapter is now dated in that it primarily discusses the joint maximum likelihood method rather than the marginal maximum likelihood method implemented by Darrell Bock and his colleagues.
Embretson, Susan E.; Reise, Steven P. (2000). Item Response Theory for Psychologists. Psychology Press. ISBN 978-0-8058-2819-1.
This book is an accessible introduction to IRT, aimed, as the title says, at psychologists.
Baker, Frank (2001). The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, University of Maryland, College Park, MD.
This introductory book is by one of the pioneers in the field.
Baker, Frank B.; Kim, Seock-Ho (2004). Item Response Theory: Parameter Estimation Techniques (2nd ed.). Marcel Dekker. ISBN 978-0-8247-5825-7.
This book describes various item response theory models and furnishes detailed explanations of algorithms that can be used to estimate the item and ability parameters. Portions of the book are available online as limited preview at Google Books.
van der Linden, Wim J.; Hambleton, Ronald K., eds. (1996). Handbook of Modern Item Response Theory. Springer. ISBN 978-0-387-94661-0.
This book provides a comprehensive overview regarding various popular IRT models. It is well suited for persons who already have gained basic understanding of IRT.
de Boeck, Paul; Wilson, Mark (2004). Explanatory Item Response Models: A Generalized Linear and Nonlinear Approach. Springer. ISBN 978-0-387-40275-8.
This volume shows an integrated introduction to item response models, mainly aimed at practitioners, researchers and graduate students.
Fox, Jean-Paul (2010). Bayesian Item Response Modeling: Theory and Applications. Springer. ISBN 978-1-4419-0741-7.
This book discusses the Bayesian approach towards item response modeling. The book will be useful for persons (who are familiar with IRT) with an interest in analyzing item response data from a Bayesian perspective.
== External links ==
"HISTORY OF ITEM RESPONSE THEORY (up to 1982)". University of Illinois at Chicago.
"A Simple Guide to the Item Response Theory" (PDF).
"Psychometric Software Downloads". Archived from the original on 5 June 2011.
"IRT Tutorial". Archived from the original on 10 December 2004.
"IRT Tutorial FAQ".
"An introduction to IRT".
"The Standards for Educational and Psychological Testing".
"IRT Command Language (ICL) computer program". Archived from the original on 13 June 2006.
"IRT Programs from SSI, Inc". Archived from the original on 16 July 2011.
"Latent Trait Analysis and IRT Models".
"Rasch analysis". Archived from the original on 2009-08-25.
"Rasch Analysis Programs from Winsteps".
"Item Response Theory". 25 May 2024.
"Free IRT software".
"IRT Packages in R". 15 December 2023.
"IRT / EIRT support in Lertap 5" (PDF). Archived from the original (PDF) on 2016-03-04.
"Visual IRT analysis and reporting with Xcalibre". | Wikipedia/Item_response_theory |
In metaphysics, a causal model (or structural causal model) is a conceptual model that describes the causal mechanisms of a system. Several types of causal notation may be used in the development of a causal model. Causal models can improve study designs by providing clear rules for deciding which independent variables need to be included/controlled for.
They can allow some questions to be answered from existing observational data without the need for an interventional study such as a randomized controlled trial. Some interventional studies are inappropriate for ethical or practical reasons, meaning that without a causal model, some hypotheses cannot be tested.
Causal models can help with the question of external validity (whether results from one study apply to unstudied populations). Causal models can allow data from multiple studies to be merged (in certain circumstances) to answer questions that cannot be answered by any individual data set.
Causal models have found applications in signal processing, epidemiology, machine learning, cultural studies, and urbanism, and they can describe both linear and nonlinear processes.
== Definition ==
Causal models are mathematical models representing causal relationships within an individual system or population. They facilitate inferences about causal relationships from statistical data. They can teach us a good deal about the epistemology of causation, and about the relationship between causation and probability. They have also been applied to topics of interest to philosophers, such as the logic of counterfactuals, decision theory, and the analysis of actual causation. Judea Pearl defines a causal model as an ordered triple
⟨U, V, E⟩, where U is a set of exogenous variables whose values are determined by factors outside the model; V is a set of endogenous variables whose values are determined by factors within the model; and E is a set of structural equations that express the value of each endogenous variable as a function of the values of the other variables in U and V.
== History ==
Aristotle defined a taxonomy of causality, including material, formal, efficient and final causes. Hume rejected Aristotle's taxonomy in favor of counterfactuals. At one point, he denied that objects have "powers" that make one a cause and another an effect. Later he adopted "if the first object had not been, the second had never existed" ("but-for" causation).
In the late 19th century, the discipline of statistics began to form. After a years-long effort to identify causal rules for domains such as biological inheritance, Galton introduced the concept of mean regression (epitomized by the sophomore slump in sports) which later led him to the non-causal concept of correlation.
As a positivist, Pearson expunged the notion of causality from much of science as an unprovable special case of association and introduced the correlation coefficient as the metric of association. He wrote, "Force as a cause of motion is exactly the same as a tree god as a cause of growth" and that causation was only a "fetish among the inscrutable arcana of modern science". Pearson founded Biometrika and the Biometrics Lab at University College London, which became the world leader in statistics.
In 1908 Hardy and Weinberg solved the problem of trait stability that had led Galton to abandon causality, by resurrecting Mendelian inheritance.
In 1921 Wright's path analysis became the theoretical ancestor of causal modeling and causal graphs. He developed this approach while attempting to untangle the relative impacts of heredity, development and environment on guinea pig coat patterns. He backed up his then-heretical claims by showing how such analyses could explain the relationship between guinea pig birth weight, in utero time and litter size. Opposition to these ideas by prominent statisticians led them to be ignored for the following 40 years (except among animal breeders). Instead scientists relied on correlations, partly at the behest of Wright's critic (and leading statistician), Fisher. One exception was Burks, a student who in 1926 was the first to apply path diagrams to represent a mediating influence (mediator) and to assert that holding a mediator constant induces errors. She may have invented path diagrams independently.: 304
In 1923, Neyman introduced the concept of a potential outcome, but his paper was not translated from Polish to English until 1990.: 271
In 1958 Cox warned that controlling for a variable Z is valid only if it is highly unlikely to be affected by independent variables.: 154
In the 1960s, Duncan, Blalock, Goldberger and others rediscovered path analysis. While reading Blalock's work on path diagrams, Duncan remembered a lecture by Ogburn twenty years earlier that mentioned a paper by Wright that in turn mentioned Burks.: 308
Sociologists originally called causal models structural equation modeling, but once it became a rote method, it lost its utility, leading some practitioners to reject any relationship to causality. Economists adopted the algebraic part of path analysis, calling it simultaneous equation modeling. However, economists still avoided attributing causal meaning to their equations.
Sixty years after his first paper, Wright published a piece that recapitulated it, following Karlin et al.'s critique, which objected that it handled only linear relationships and that robust, model-free presentations of data were more revealing.
In 1973 Lewis advocated replacing correlation with but-for causality (counterfactuals). He referred to humans' ability to envision alternative worlds in which a cause did or did not occur, and in which an effect appeared only following its cause.: 266 In 1974 Rubin introduced the notion of "potential outcomes" as a language for asking causal questions.: 269
In 1983 Cartwright proposed that any factor that is "causally relevant" to an effect be conditioned on, moving beyond simple probability as the only guide.: 48
In 1986 Baron and Kenny introduced principles for detecting and evaluating mediation in a system of linear equations. As of 2014 their paper was the 33rd most-cited of all time.: 324 That year Greenland and Robins introduced the "exchangeability" approach to handling confounding by considering a counterfactual. They proposed assessing what would have happened to the treatment group if they had not received the treatment and comparing that outcome to that of the control group. If they matched, confounding was said to be absent.: 154
== Ladder of causation ==
Pearl's causal metamodel involves a three-level abstraction he calls the ladder of causation. The lowest level, Association (seeing/observing), entails the sensing of regularities or patterns in the input data, expressed as correlations. The middle level, Intervention (doing), predicts the effects of deliberate actions, expressed as causal relationships. The highest level, Counterfactuals (imagining), involves constructing a theory of (part of) the world that explains why specific actions have specific effects and what happens in the absence of such actions.
=== Association ===
One object is associated with another if observing one changes the probability of observing the other. Example: shoppers who buy toothpaste are more likely to also buy dental floss. Mathematically:
P(floss | toothpaste)
or the probability of (purchasing) floss given (the purchase of) toothpaste. Associations can also be measured via computing the correlation of the two events. Associations have no causal implications. One event could cause the other, the reverse could be true, or both events could be caused by some third event (an unhappy hygienist shames the shopper into treating their mouth better).
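With a small made-up basket data set, the association can be checked by comparing the conditional probability against the marginal one:

```python
# Each record: (bought_toothpaste, bought_floss) - a made-up mini data set
baskets = [(1, 1), (1, 0), (1, 1), (0, 0), (0, 1), (1, 1), (0, 0), (1, 0)]

def prob(event, given=None):
    """Empirical P(event) or P(event | given) over the basket records."""
    rows = baskets if given is None else [r for r in baskets if given(r)]
    return sum(1 for r in rows if event(r)) / len(rows)

p_floss = prob(lambda r: r[1] == 1)                                # P(floss) = 0.5
p_floss_given_tp = prob(lambda r: r[1] == 1, lambda r: r[0] == 1)  # P(floss | toothpaste) = 0.6
```

Here toothpaste buyers are more likely to buy floss (0.6 > 0.5), an association that by itself says nothing about which purchase, if either, causes the other.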
=== Intervention ===
This level asserts specific causal relationships between events. Causality is assessed by experimentally performing some action that affects one of the events. Example: after doubling the price of toothpaste, what would be the new probability of purchasing? Causality cannot be established by examining history (of price changes) because the price change may have been for some other reason that could itself affect the second event (a tariff that increases the price of both goods). Mathematically:
P(floss | do(toothpaste))
where do is an operator that signals the experimental intervention (doubling the price). The operator indicates performing the minimal change in the world necessary to create the intended effect, a "mini-surgery" on the model with as little change from reality as possible.
=== Counterfactuals ===
The highest level, counterfactual, involves consideration of an alternate version of a past event, or what would happen under different circumstances for the same experimental unit. For example, what is the probability that, if a store had doubled the price of floss, the toothpaste-purchasing shopper would still have bought it?
P(floss | toothpaste, 2*price)
Counterfactuals can indicate the existence of a causal relationship. Models that can answer counterfactuals allow precise interventions whose consequences can be predicted. At the extreme, such models are accepted as physical laws (as in the laws of physics, e.g., inertia, which says that if force is not applied to a stationary object, it will not move).
== Causality ==
=== Causality vs correlation ===
Statistics revolves around the analysis of relationships among multiple variables. Traditionally, these relationships are described as correlations, associations without any implied causal relationships. Causal models attempt to extend this framework by adding the notion of causal relationships, in which changes in one variable cause changes in others.
Twentieth century definitions of causality relied purely on probabilities/associations. One event (X) was said to cause another if it raises the probability of the other (Y). Mathematically this is expressed as:
P(Y | X) > P(Y).
Such definitions are inadequate because other relationships (e.g., a common cause for X and Y) can satisfy the condition. Causality is relevant to the second ladder step. Associations are on the first step and provide only evidence for the latter.
A later definition attempted to address this ambiguity by conditioning on background factors. Mathematically:
P(Y | X, K = k) > P(Y | K = k),
where K is the set of background variables and k represents the values of those variables in a specific context. However, the required set of background variables is indeterminate (multiple sets may increase the probability), as long as probability is the only criterion.
Other attempts to define causality include Granger causality, a statistical hypothesis test that causality (in economics) can be assessed by measuring the ability to predict the future values of one time series using prior values of another time series.
=== Types ===
A cause can be necessary, sufficient, contributory or some combination.
==== Necessary ====
For x to be a necessary cause of y, the presence of y must imply the prior occurrence of x. The presence of x, however, does not imply that y will occur. Necessary causes are also known as "but-for" causes, as in y would not have occurred but for the occurrence of x.: 261
==== Sufficient causes ====
For x to be a sufficient cause of y, the presence of x must imply the subsequent occurrence of y. However, another cause z may independently cause y. Thus the presence of y does not require the prior occurrence of x.
==== Contributory causes ====
For x to be a contributory cause of y, the presence of x must increase the likelihood of y. If the likelihood is 100%, then x is instead called sufficient. A contributory cause may also be necessary.
== Model ==
=== Causal diagram ===
A causal diagram is a directed graph that displays causal relationships between variables in a causal model. A causal diagram includes a set of variables (or nodes). Each node is connected by an arrow to one or more other nodes upon which it has a causal influence. An arrowhead delineates the direction of causality, e.g., an arrow connecting variables
A and B with the arrowhead at B indicates that a change in A causes a change in B (with an associated probability). A path is a traversal of the graph between two nodes following causal arrows.
Causal diagrams include causal loop diagrams, directed acyclic graphs, and Ishikawa diagrams.
Causal diagrams are independent of the quantitative probabilities that inform them. Changes to those probabilities (e.g., due to technological improvements) do not require changes to the model.
=== Model elements ===
Causal models have formal structures with elements with specific properties.
==== Junction patterns ====
The three types of connections of three nodes are linear chains, branching forks and merging colliders.
===== Chain =====
Chains are straight line connections with arrows pointing from cause to effect. In this model,
B is a mediator in that it mediates the change that A would otherwise have on C.: 113
A → B → C
===== Fork =====
In forks, one cause has multiple effects. The two effects have a common cause. There exists a (non-causal) spurious correlation between
A and C that can be eliminated by conditioning on B (for a specific value of B).: 114
A ← B → C
"Conditioning on B" means "given B" (i.e., given a value of B).
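A small simulation (with assumed probabilities) illustrates the fork: A and C are spuriously correlated overall, but the correlation vanishes once B is conditioned on:

```python
import random

random.seed(0)
data = []
for _ in range(20000):
    b = random.choice([0, 1])                              # common cause B
    a = 1 if random.random() < (0.8 if b else 0.2) else 0  # A <- B
    c = 1 if random.random() < (0.8 if b else 0.2) else 0  # C <- B
    data.append((a, b, c))

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

spurious = corr([(a, c) for a, b, c in data])               # clearly positive
conditioned = corr([(a, c) for a, b, c in data if b == 0])  # near zero
```

The marginal correlation is purely an artifact of the shared cause B; within each stratum of B, A and C are generated independently.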
An elaboration of a fork is the confounder:
A ← B → C → A
In such models, B is a common cause of A and C (which also causes A), making B the confounder.: 114
===== Collider =====
In colliders, multiple causes affect one outcome. Conditioning on B (for a specific value of B) often reveals a non-causal negative correlation between A and C. This negative correlation has been called collider bias and the "explain-away" effect, as B explains away the correlation between A and C.: 115 The correlation can be positive in the case where contributions from both A and C are necessary to affect B.: 197
A → B ← C
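Collider bias can be demonstrated with a simulation in which A and C are generated independently and the collider B records whether at least one of them occurred; conditioning on B induces a negative correlation:

```python
import random

random.seed(1)
# A and C are independent coin flips; the collider B = 1 when at least one occurs
samples = [(random.choice([0, 1]), random.choice([0, 1])) for _ in range(20000)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    sx = (sum((x - mx) ** 2 for x, _ in pairs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for _, y in pairs) / n) ** 0.5
    return cov / (sx * sy)

overall = corr(samples)                                             # near zero
collider_biased = corr([(a, c) for a, c in samples if a + c >= 1])  # clearly negative
```

Among the selected cases (B = 1), knowing that A occurred "explains away" the need for C, so the two independent causes appear negatively correlated.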
==== Node types ====
===== Mediator =====
A mediator node modifies the effect of other causes on an outcome (as opposed to simply affecting the outcome).: 113 For example, in the chain example above,
B is a mediator, because it modifies the effect of A (an indirect cause of C) on C (the outcome).
===== Confounder =====
A confounder node affects multiple outcomes, creating a positive correlation among them.: 114
==== Instrumental variable ====
An instrumental variable is one that:: 246
has a path to the outcome;
has no other path to causal variables;
has no direct influence on the outcome.
Regression coefficients can serve as estimates of the causal effect of an instrumental variable on an outcome as long as that effect is not confounded. In this way, instrumental variables allow causal factors to be quantified without data on confounders.: 249
For example, given the model:
Z → X → Y ← U → X
Z is an instrumental variable, because it has a path to the outcome Y and is unconfounded, e.g., by U.
In the above example, if Z and X take binary values, then the assumption that Z = 0, X = 1 does not occur is called monotonicity.: 253
Refinements to the technique include creating an instrument by conditioning on other variables to block the paths between the instrument and the confounder, and combining multiple variables to form a single instrument.: 257
==== Mendelian randomization ====
Definition: Mendelian randomization uses measured variation in genes of known function to examine the causal effect of a modifiable exposure on disease in observational studies.
Because genes vary randomly across populations, presence of a gene typically qualifies as an instrumental variable, implying that in many cases, causality can be quantified using regression on an observational study.: 255
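The instrumental-variable logic behind this can be sketched in a simulation with assumed coefficients for the model Z → X → Y ← U → X: a naive regression of Y on X is biased by the unobserved confounder U, while the IV ratio cov(Z, Y)/cov(Z, X) recovers the assumed causal effect:

```python
import random

random.seed(2)
BETA = 2.0  # assumed true causal effect of X on Y

zs, xs, ys = [], [], []
for _ in range(50000):
    z = random.gauss(0, 1)                         # instrument (e.g., a gene variant)
    u = random.gauss(0, 1)                         # unobserved confounder
    x = 0.8 * z + u + random.gauss(0, 0.5)         # exposure: Z -> X <- U
    y = BETA * x + 1.5 * u + random.gauss(0, 0.5)  # outcome: X -> Y <- U
    zs.append(z); xs.append(x); ys.append(y)

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / n

naive = cov(xs, ys) / cov(xs, xs)  # ordinary regression slope, biased upward by U
iv = cov(zs, ys) / cov(zs, xs)     # instrumental-variable estimate, close to BETA
```

Because Z affects Y only through X and is independent of U, dividing out Z's effect on X isolates the causal coefficient even though U is never observed.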
== Associations ==
=== Independence conditions ===
Independence conditions are rules for deciding whether two variables are independent of each other. Variables are independent if the values of one do not directly affect the values of the other. Multiple causal models can share independence conditions. For example, the models
A → B → C and A ← B → C have the same independence conditions, because conditioning on B leaves A and C independent. However, the two models do not have the same meaning and can be falsified based on data (that is, if observational data show an association between A and C after conditioning on B, then both models are incorrect). Conversely, data cannot show which of these two models is correct, because they have the same independence conditions.
Conditioning on a variable is a mechanism for conducting hypothetical experiments. Conditioning on a variable involves analyzing the values of other variables for a given value of the conditioned variable. In the first example, conditioning on B implies that observations for a given value of B should show no dependence between A and C. If such a dependence exists, then the model is incorrect. Non-causal models cannot make such distinctions, because they do not make causal assertions.: 129–130
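The shared independence condition of the chain A → B → C can be checked numerically: conditioning on B renders A and C independent. A minimal sketch with made-up conditional probability tables (all numbers illustrative):

```python
from itertools import product

# Hypothetical CPTs for the chain A -> B -> C (all variables binary).
p_a = {0: 0.4, 1: 0.6}
p_b_given_a = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}   # p_b_given_a[a][b]
p_c_given_b = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}   # p_c_given_b[b][c]

# Joint distribution P(A, B, C) implied by the chain factorization.
joint = {(a, b, c): p_a[a] * p_b_given_a[a][b] * p_c_given_b[b][c]
         for a, b, c in product([0, 1], repeat=3)}

def prob(pred):
    return sum(p for abc, p in joint.items() if pred(*abc))

# Check A independent of C given B: P(A,C|B) == P(A|B) * P(C|B) for all values.
for a, b, c in product([0, 1], repeat=3):
    pb = prob(lambda A, B, C: B == b)
    lhs = prob(lambda A, B, C: A == a and B == b and C == c) / pb
    rhs = (prob(lambda A, B, C: A == a and B == b) / pb) * \
          (prob(lambda A, B, C: C == c and B == b) / pb)
    assert abs(lhs - rhs) < 1e-12
print("A and C are independent given B")
```

The same check passes for a fork A ← B → C with its own tables, which is exactly why data alone cannot distinguish the two models.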
=== Confounder/deconfounder ===
An essential element of correlational study design is to identify potentially confounding influences on the variable under study, such as demographics. These variables are controlled for to eliminate those influences. However, the correct list of confounding variables cannot be determined a priori. It is thus possible that a study may control for irrelevant variables or even (indirectly) the variable under study.: 139
Causal models offer a robust technique for identifying appropriate confounding variables. Formally, Z is a confounder if "Y is associated with Z via paths not going through X". These can often be determined using data collected for other studies. Mathematically, if
{\displaystyle P(Y|X)\neq P(Y|do(X))}
X and Y are confounded (by some confounder variable Z).: 151
Earlier, allegedly incorrect definitions of confounder include:: 152
"Any variable that is correlated with both X and Y."
Y is associated with Z among the unexposed.
Noncollapsibility: A difference between the "crude relative risk and the relative risk resulting after adjustment for the potential confounder".
Epidemiological: A variable associated with X in the population at large and associated with Y among people unexposed to X.
The latter is flawed in that, in the model
{\displaystyle X\rightarrow Z\rightarrow Y}
Z matches the definition, but is a mediator, not a confounder, and is an example of controlling for the outcome.
In the model
{\displaystyle X\leftarrow A\rightarrow B\leftarrow C\rightarrow Y}
Traditionally, B was considered to be a confounder, because it is associated with X and with Y but is not on a causal path nor is it a descendant of anything on a causal path. Controlling for B causes it to become a confounder. This is known as M-bias.: 161
==== Backdoor adjustment ====
For analysing the causal effect of X on Y in a causal model, all confounder variables must be addressed (deconfounding). To identify the set of confounders, (1) every noncausal path between X and Y must be blocked by this set; (2) without disrupting any causal paths; and (3) without creating any spurious paths.: 158
Definition: a backdoor path from variable X to Y is any path from X to Y that starts with an arrow pointing to X.: 158
Definition: Given an ordered pair of variables (X,Y) in a model, a set of confounder variables Z satisfies the backdoor criterion if (1) no confounder variable Z is a descendent of X and (2) all backdoor paths between X and Y are blocked by the set of confounders.
If the backdoor criterion is satisfied for (X,Y), X and Y are deconfounded by the set of confounder variables. It is not necessary to control for any variables other than the confounders.: 158 The backdoor criterion is a sufficient but not necessary condition to find a set of variables Z to deconfound the analysis of the causal effect of X on Y.
When the causal model is a plausible representation of reality and the backdoor criterion is satisfied, then partial regression coefficients can be used as (causal) path coefficients (for linear relationships).: 223
{\displaystyle P(Y|do(X))=\sum _{z}P(Y|X,Z=z)P(Z=z)}
: 227
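For discrete variables the adjustment formula is a finite sum, so the interventional distribution can be computed directly from observational conditional probability tables. A minimal sketch with one binary confounder (all numbers illustrative):

```python
# Backdoor adjustment with a single binary confounder Z (illustrative numbers).
p_z = {0: 0.6, 1: 0.4}                        # P(Z=z)
p_x_given_z = {0: 0.9, 1: 0.1}                # P(X=1 | Z=z)
p_y_given_xz = {(1, 0): 0.8, (1, 1): 0.3,     # P(Y=1 | X=x, Z=z)
                (0, 0): 0.6, (0, 1): 0.1}

# P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
do_x1 = sum(p_y_given_xz[(1, z)] * p_z[z] for z in (0, 1))

# Observational P(Y=1 | X=1) instead weights Z by P(Z=z | X=1) -- a different answer.
p_x1 = sum(p_x_given_z[z] * p_z[z] for z in (0, 1))
see_x1 = sum(p_y_given_xz[(1, z)] * p_x_given_z[z] * p_z[z] / p_x1 for z in (0, 1))

print(do_x1)    # 0.8*0.6 + 0.3*0.4 = 0.60
print(see_x1)   # ~0.766: conditioning is not intervening when Z confounds X and Y
```

The gap between the two numbers is exactly the confounding bias that the adjustment removes.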
==== Frontdoor adjustment ====
If the elements of a blocking path are all unobservable, the backdoor path is not calculable, but if all forward paths from X to Y have elements z where no open paths connect z to Y, then Z, the set of all z's, can measure P(Y|do(X)). Effectively, there are conditions where Z can act as a proxy for X.
Definition: a frontdoor path is a direct causal path for which data is available for all z ∈ Z,: 226 Z intercepts all directed paths from X to Y, there are no unblocked paths from Z to Y, and all backdoor paths from Z to Y are blocked by X.
The following converts a do expression into a do-free expression by conditioning on the variables along the front-door path.: 226
{\displaystyle P(Y|do(X))=\sum _{z}\left[P(Z=z|X)\sum _{x}P(Y|X=x,Z=z)P(X=x)\right]}
Presuming data for these observable probabilities is available, the ultimate probability can be computed without an experiment, regardless of the existence of other confounding paths and without backdoor adjustment.: 226
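Like the backdoor formula, the front-door expression is a finite sum for discrete variables. A minimal sketch for a binary mediator Z that fully transmits X's effect on Y (all probability tables illustrative):

```python
# Front-door adjustment for X -> Z -> Y with an unobserved confounder of X and Y.
# Illustrative observational tables; Z fully mediates X's effect on Y.
p_x = {0: 0.5, 1: 0.5}                         # P(X=x)
p_z_given_x = {0: 0.25, 1: 0.75}               # P(Z=1 | X=x)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.4,      # P(Y=1 | X=x, Z=z)
                (1, 0): 0.3, (1, 1): 0.6}

def p_z(z, x):
    return p_z_given_x[x] if z == 1 else 1 - p_z_given_x[x]

def frontdoor(x_do):
    """P(Y=1 | do(X=x_do)) = sum_z P(z|x_do) * sum_x' P(Y=1|x',z) * P(x')."""
    total = 0.0
    for z in (0, 1):
        inner = sum(p_y_given_xz[(xp, z)] * p_x[xp] for xp in (0, 1))
        total += p_z(z, x_do) * inner
    return total

print(frontdoor(1))   # 0.25*0.2 + 0.75*0.5 = 0.425
```

Note that only observational quantities appear on the right-hand side; the unobserved confounder never needs to be measured.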
== Interventions ==
=== Queries ===
Queries are questions asked based on a specific model. They are generally answered via performing experiments (interventions). Interventions take the form of fixing the value of one variable in a model and observing the result. Mathematically, such queries take the form (from the example):: 8
{\displaystyle P({\text{floss}}|do({\text{toothpaste}}))}
where the do operator indicates that the experiment explicitly modified the price of toothpaste. Graphically, this blocks any causal factors that would otherwise affect that variable. Diagrammatically, this erases all causal arrows pointing at the experimental variable.: 40
More complex queries are possible, in which the do operator is applied (the value is fixed) to multiple variables.
=== Interventional distribution ===
=== Do calculus ===
The do calculus is the set of manipulations that are available to transform one expression into another, with the general goal of transforming expressions that contain the do operator into expressions that do not. Expressions that do not include the do operator can be estimated from observational data alone, without the need for an experimental intervention, which might be expensive, lengthy or even unethical (e.g., asking subjects to take up smoking).: 231 The set of rules is complete (it can be used to derive every true statement in this system).: 237 An algorithm can determine whether, for a given model, a solution is computable in polynomial time.: 238
==== Rules ====
The calculus includes three rules for the transformation of conditional probability expressions involving the do operator.
===== Rule 1 =====
Rule 1 permits the addition or deletion of observations.:: 235
{\displaystyle P(Y|do(X),Z,W)=P(Y|do(X),Z)}
in the case that the variable set Z blocks all paths from W to Y and all arrows leading into X have been deleted.: 234
===== Rule 2 =====
Rule 2 permits the replacement of an intervention with an observation or vice versa.:: 235
{\displaystyle P(Y|do(X),Z)=P(Y|X,Z)}
in the case that Z satisfies the back-door criterion.: 234
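Rule 2 can be verified numerically for a small structural model: when Z satisfies the backdoor criterion, the conditional probability under the mutilated (intervened) model coincides with the purely observational conditional. A sketch with illustrative tables:

```python
from itertools import product

# SCM: Z -> X, Z -> Y, X -> Y (Z satisfies the backdoor criterion for (X, Y)).
# Illustrative CPTs, all variables binary.
p_z = {0: 0.3, 1: 0.7}
p_x_given_z = {0: 0.8, 1: 0.4}                     # P(X=1 | Z=z)
p_y_given_xz = {(0, 0): 0.2, (0, 1): 0.5,
                (1, 0): 0.6, (1, 1): 0.9}          # P(Y=1 | X=x, Z=z)

def joint(intervene_x=None):
    """Joint P(Z, X, Y); if intervene_x is set, X's mechanism is replaced (do)."""
    d = {}
    for z, x, y in product([0, 1], repeat=3):
        if intervene_x is not None:
            px = 1.0 if x == intervene_x else 0.0
        else:
            px = p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]
        py = p_y_given_xz[(x, z)] if y == 1 else 1 - p_y_given_xz[(x, z)]
        d[(z, x, y)] = p_z[z] * px * py
    return d

obs, do1 = joint(), joint(intervene_x=1)
for z in (0, 1):
    # P(Y=1 | do(X=1), Z=z) from the mutilated model ...
    lhs = do1[(z, 1, 1)] / (do1[(z, 1, 1)] + do1[(z, 1, 0)])
    # ... equals the purely observational P(Y=1 | X=1, Z=z)  (Rule 2)
    rhs = obs[(z, 1, 1)] / (obs[(z, 1, 1)] + obs[(z, 1, 0)])
    assert abs(lhs - rhs) < 1e-12
print("intervention and observation agree given Z")
```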
===== Rule 3 =====
Rule 3 permits the deletion or addition of interventions.:
{\displaystyle P(Y|do(X))=P(Y)}
in the case where no causal paths connect X and Y.: 234 : 235
==== Extensions ====
The rules do not imply that any query can have its do operators removed. In those cases, it may be possible to substitute a variable that is subject to manipulation (e.g., diet) in place of one that is not (e.g., blood cholesterol), which can then be transformed to remove the do. Example:
{\displaystyle P({\text{Heart disease}}|do({\text{blood cholesterol}}))=P({\text{Heart disease}}|do({\text{diet}}))}
== Counterfactuals ==
Counterfactuals consider possibilities that are not found in data, such as whether a nonsmoker would have developed cancer had they instead been a heavy smoker. They are the highest step on Pearl's causality ladder.
=== Potential outcome ===
Definition: A potential outcome for a variable Y is "the value Y would have taken for individual u, had X been assigned the value x". Mathematically:: 270
{\displaystyle Y_{X=x}(u)} or {\displaystyle Y_{x}(u)}.
The potential outcome is defined at the level of the individual u.: 270
The conventional approach to potential outcomes is data-, not model-driven, limiting its ability to untangle causal relationships. It treats causal questions as problems of missing data and gives incorrect answers to even standard scenarios.: 275
=== Causal inference ===
In the context of causal models, potential outcomes are interpreted causally, rather than statistically.
The first law of causal inference states that the potential outcome
{\displaystyle Y_{X}(u)}
can be computed by modifying causal model M (by deleting arrows into X) and computing the outcome for some x. Formally:: 280
{\displaystyle Y_{X}(u)=Y_{M_{x}}(u)}
=== Conducting a counterfactual ===
Examining a counterfactual using a causal model involves three steps. The approach is valid regardless of the form of the model relationships, linear or otherwise. When the model relationships are fully specified, point values can be computed. In other cases (e.g., when only probabilities are available) a probability-interval statement, such as non-smoker x would have a 10-20% chance of cancer, can be computed.: 279
Given the model:
{\displaystyle Y\leftarrow X\rightarrow M\rightarrow Y\leftarrow U}
the equations for calculating the values of A and C derived from regression analysis or another technique can be applied, substituting known values from an observation and fixing the value of other variables (the counterfactual).: 278
==== Abduct ====
Apply abductive reasoning (logical inference that uses observation to find the simplest/most likely explanation) to estimate u, the proxy for the unobserved variables on the specific observation that supports the counterfactual.: 278 Compute the probability of u given the propositional evidence.
==== Act ====
For a specific observation, use the do operator to establish the counterfactual (e.g., m=0), modifying the equations accordingly.: 278
==== Predict ====
Calculate the values of the output (y) using the modified equations.: 278
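The three steps can be walked through on a toy linear structural model X → M → Y with additive noise (all coefficients and the observed values below are illustrative):

```python
# Toy linear SCM (illustrative coefficients):
#   M = 0.5 * X + u_m
#   Y = 0.7 * M + 0.3 * X + u_y
# Observation: X=1, M=1, Y=1.2.
# Counterfactual query: what would Y have been for this same individual, had M been 0?

x_obs, m_obs, y_obs = 1.0, 1.0, 1.2

# 1. Abduct: recover this individual's noise terms from the observation.
u_m = m_obs - 0.5 * x_obs                  # = 0.5
u_y = y_obs - 0.7 * m_obs - 0.3 * x_obs    # = 1.2 - 0.7 - 0.3 = 0.2

# 2. Act: sever M's mechanism and fix do(M = 0).
m_cf = 0.0

# 3. Predict: recompute Y with the abducted noise and the new M.
y_cf = 0.7 * m_cf + 0.3 * x_obs + u_y
print(y_cf)   # 0.3 + 0.2 = 0.5
```

Because the model is fully specified, the answer is a point value; with only probabilistic knowledge of the noise terms, the same three steps yield an interval instead.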
=== Mediation ===
Direct and indirect (mediated) causes can only be distinguished via conducting counterfactuals.: 301 Understanding mediation requires holding the mediator constant while intervening on the direct cause. In the model
{\displaystyle Y\leftarrow M\leftarrow X\rightarrow Y}
M mediates X's influence on Y, while X also has an unmediated effect on Y. Thus M is held constant, while do(X) is computed.
The Mediation Fallacy instead involves conditioning on the mediator if the mediator and the outcome are confounded, as they are in the above model.
For linear models, the indirect effect can be computed by taking the product of all the path coefficients along a mediated pathway. The total indirect effect is computed by the sum of the individual indirect effects. For linear models mediation is indicated when the coefficients of an equation fitted without including the mediator vary significantly from an equation that includes it.: 324
==== Direct effect ====
In experiments on such a model, the controlled direct effect (CDE) is computed by forcing the value of the mediator M (do(M = 0)) and randomly assigning some subjects to each of the values of X (do(X=0), do(X=1), ...) and observing the resulting values of Y.: 317
{\displaystyle CDE(0)=P(Y=1|do(X=1),do(M=0))-P(Y=1|do(X=0),do(M=0))}
Each value of the mediator has a corresponding CDE.
However, a better experiment is to compute the natural direct effect (NDE). This is the effect determined by leaving the relationship between X and M untouched while intervening on the relationship between X and Y.: 318
{\displaystyle NDE=P(Y_{M=M_{0}}=1|do(X=1))-P(Y_{M=M_{0}}=1|do(X=0))}
For example, consider the direct effect of increasing dental hygienist visits (X) from every other year to every year, which encourages flossing (M). Gums (Y) get healthier, either because of the hygienist (direct) or the flossing (mediator/indirect). The experiment is to continue flossing while skipping the hygienist visit.
==== Indirect effect ====
The indirect effect of X on Y is the "increase we would see in Y while holding X constant and increasing M to whatever value M would attain under a unit increase in X".: 328
Indirect effects cannot be "controlled" because the direct path cannot be disabled by holding another variable constant. The natural indirect effect (NIE) is the effect on gum health (Y) from flossing (M). The NIE is calculated as the sum of (floss and no-floss cases) of the difference between the probability of flossing given the hygienist and without the hygienist, or:: 321
{\displaystyle NIE=\sum _{m}[P(M=m|X=1)-P(M=m|X=0)]\times P(Y=1|X=0,M=m)}
The above NDE calculation includes counterfactual subscripts ({\displaystyle Y_{M=M_{0}}}). For nonlinear models, the seemingly obvious equivalence: 322
{\displaystyle {\mathsf {Total\ effect=Direct\ effect+Indirect\ effect}}}
does not apply because of anomalies such as threshold effects and binary values. However,
{\displaystyle {\mathsf {Total\ effect}}(X=0\rightarrow X=1)=NDE(X=0\rightarrow X=1)-NIE(X=1\rightarrow X=0)}
works for all model relationships (linear and nonlinear). It allows NDE to then be calculated directly from observational data, without interventions or use of counterfactual subscripts.: 326
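The identity can be checked numerically on a binary mediation model. For simplicity the sketch assumes no confounding, so the do-expressions reduce to ordinary conditionals; the probability tables are illustrative:

```python
# Binary mediation model X -> M -> Y with a direct X -> Y path, no confounding,
# so P(. | do(X=x)) reduces to P(. | X=x).  Illustrative tables:
p_m_given_x = {0: 0.2, 1: 0.7}                 # P(M=1 | X=x)
p_y = {(0, 0): 0.1, (0, 1): 0.5,               # P(Y=1 | X=x, M=m)
       (1, 0): 0.4, (1, 1): 0.8}

def p_m(m, x):
    return p_m_given_x[x] if m == 1 else 1 - p_m_given_x[x]

# Natural direct effect: switch X from 0 to 1 while M keeps its X=0 distribution.
nde = sum(p_m(m, 0) * (p_y[(1, m)] - p_y[(0, m)]) for m in (0, 1))

# Natural indirect effect for the reverse transition X = 1 -> X = 0.
nie_rev = sum((p_m(m, 0) - p_m(m, 1)) * p_y[(1, m)] for m in (0, 1))

# Total effect of the intervention.
te = sum(p_m(m, 1) * p_y[(1, m)] for m in (0, 1)) - \
     sum(p_m(m, 0) * p_y[(0, m)] for m in (0, 1))

print(te, nde - nie_rev)   # the two agree for any tables, linear or not
```

With these tables TE = 0.5, NDE = 0.3, and NIE(reverse) = −0.2, so the decomposition holds even though the additive identity TE = NDE + NIE generally fails for nonlinear models.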
== Transportability ==
Causal models provide a vehicle for integrating data across datasets, known as transport, even though the causal models (and the associated data) differ. For example, survey data can be merged with randomized, controlled trial data.: 352 Transport offers a solution to the question of external validity, whether a study can be applied in a different context.
Where two models match on all relevant variables and data from one model is known to be unbiased, data from one population can be used to draw conclusions about the other. In other cases, where data is known to be biased, reweighting can allow the dataset to be transported. In a third case, conclusions can be drawn from an incomplete dataset. In some cases, data from studies of multiple populations can be combined (via transportation) to allow conclusions about an unmeasured population. In some cases, combining estimates (e.g., P(W|X)) from multiple studies can increase the precision of a conclusion.: 355
Do-calculus provides a general criterion for transport: A target variable can be transformed into another expression via a series of do-operations that does not involve any "difference-producing" variables (those that distinguish the two populations).: 355 An analogous rule applies to studies that have relevantly different participants.: 356
== Bayesian network ==
Any causal model can be implemented as a Bayesian network. Bayesian networks can be used to provide the inverse probability of an event (given an outcome, what are the probabilities of a specific cause). This requires preparation of a conditional probability table, showing all possible inputs and outcomes with their associated probabilities.: 119
For example, given a two variable model of Disease and Test (for the disease) the conditional probability table takes the form:: 117
According to this table, when a patient does not have the disease, the probability of a positive test is 12%.
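The inverse probability follows from Bayes' rule applied to the table. In the sketch below, the 12% false-positive rate is the figure quoted above, while the prevalence and sensitivity are assumed values for illustration:

```python
# Inverse probability via Bayes' rule for the Disease -> Test model.
p_disease = 0.01            # assumed prevalence P(D)
p_pos_given_d = 0.90        # assumed sensitivity P(T+ | D)
p_pos_given_not_d = 0.12    # false-positive rate from the table above

# P(D | T+) = P(T+ | D) P(D) / P(T+)
p_pos = p_pos_given_d * p_disease + p_pos_given_not_d * (1 - p_disease)
p_d_given_pos = p_pos_given_d * p_disease / p_pos
print(round(p_d_given_pos, 3))   # ~0.07: at low prevalence, most positives are false
```

The same table-driven computation is what a Bayesian network automates, which is why the table size, and hence the cost, grows exponentially with the number of variables.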
While this is tractable for small problems, as the number of variables and their associated states increase, the probability table (and associated computation time) increases exponentially.: 121
Bayesian networks are used commercially in applications such as wireless data error correction and DNA analysis.: 122
== Invariants/context ==
A different conceptualization of causality involves the notion of invariant relationships. In the case of identifying handwritten digits, digit shape controls meaning, thus shape and meaning are the invariants. Changing the shape changes the meaning. Other properties do not (e.g., color). This invariance should carry across datasets generated in different contexts (the non-invariant properties form the context). Rather than learning (assessing causality) using pooled data sets, learning on one and testing on another can help distinguish variant from invariant properties.
== See also ==
Causal system
Causal network – a Bayesian network with an explicit requirement that the relationships be causal
Structural equation modeling – a statistical technique for testing and estimating causal relations
Path analysis (statistics)
Bayesian network
Causal map
Dynamic causal modeling
Rubin causal model
== References ==
== Sources ==
Pearl, Judea (2009-09-14). Causality. Cambridge University Press. ISBN 9781139643986.
== External links ==
Pearl, Judea (2010-02-26). "An Introduction to Causal Inference". The International Journal of Biostatistics. 6 (2): Article 7. doi:10.2202/1557-4679.1203. ISSN 1557-4679. PMC 2836213. PMID 20305706.
Causal modeling at PhilPapers
Falk, Dan (2019-03-17). "AI Algorithms Are Now Shockingly Good at Doing Science". Wired. ISSN 1059-1028. Retrieved 2019-03-20.
Maudlin, Tim (2019-08-30). "The Why of the World". Boston Review. Retrieved 2019-09-09.
Hartnett, Kevin (15 May 2018). "To Build Truly Intelligent Machines, Teach Them Cause and Effect". Quanta Magazine. Retrieved 2019-09-19.